Educators need to work with vendors and tech developers to ensure artificial intelligence-driven innovations for schools go hand-in-hand with managing the technology’s risks, recommends guidance released July 8 by the U.S. Department of Education.
Also, on somewhat related notes, see the following items:
I highly recommend the https://t.co/QE0Eze3qsr training for educators. It’s well-structured, engaging, and you learn a lot about prompting AI as you create your own custom bot for education. @shaanmasala has built something amazing. PlayLab is a nonprofit, the training is free,…
— Anna Mills, annamillsoer.bsky.social, she/her (@AnnaRMills) July 8, 2024
Mary Meeker has written her first report in over four years, focused on the relationship between artificial intelligence and U.S. higher education.
Why it matters: Meeker’s annual “Internet Trends” reports were among Silicon Valley’s most cited and consumed documents.
Each one dug deep into the new tech economy, with hundreds of pages of slides. The last one was published in 2019.
Meeker’s new effort is a shorter attempt (16 pages!) at reconciling tech’s brave new world and America’s economic vitality, with higher ed as the connective tissue.
Actions taken in the next five years will be consequential. It’s important for higher education to take a leadership role, in combination with industry and government. The ramp in artificial intelligence – which leverages the history of learning for learning – affects all forms of learning, teaching, understanding, and decision making. This should be the best of times…
Our first-pass observations on these topics follow. We begin with an overview, followed by thoughts on the unprecedented ramp in AI usage and the magnitude of investment in AI from America’s leading global technology companies. Then we explore ways that this rapidly changing AI landscape may drive transformations in higher education. We hope these add to the discussion.
AI & Universities – Will Masters of Learning Master New Learnings?
In a time of rapid technological change led by American companies, American universities must determine how best to optimize for the future. Many institutions have work to do to meet these changes in demand, per the Burning Glass Institute. As the AI challenge looms, they will need thoughtful plans that balance their rich traditions and research history with the needs of a rapidly evolving marketplace supercharged by innovation. Keeping an eye on the output and trends in various AI skunkworks, such as the team at AI Acceleration at Arizona State, may help universities determine the products and software tools that could transform the educational experience.
In this episode of the Next Big Idea podcast, host Rufus Griscom and Bill Gates are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called “AI First.” Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.
Key moments:
00:05 Bill Gates discusses AI’s transformative potential in revolutionizing technology.
02:21 Superintelligence is inevitable and marks a significant advancement in AI technology.
09:23 Future AI may integrate deeply as cognitive assistants in personal and professional life.
14:04 AI’s metacognitive advancements could revolutionize problem-solving capabilities.
21:13 AI’s next frontier lies in developing human-like metacognition for sophisticated problem-solving.
27:59 AI advancements empower both good and malicious intents, posing new security challenges.
28:57 Rapid AI development raises questions about controlling its global application.
33:31 Productivity enhancements from AI can significantly improve efficiency across industries.
35:49 AI’s future applications in consumer and industrial sectors are subjects of ongoing experimentation.
46:10 AI democratization could level the economic playing field, enhancing service quality and reducing costs.
51:46 AI plays a role in mitigating misinformation and bridging societal divides through enhanced understanding.
The team has summarized their primary contributions as follows.
The team offers the first demonstration of a simple, scalable oversight technique that substantially helps humans detect problems in real-world RLHF data more thoroughly.
Within the ChatGPT and CriticGPT training pools, the team found that critiques produced by CriticGPT catch more inserted bugs and are preferred over those written by human contractors.
The research indicates that teams pairing critic models with human contractors generate more thorough critiques than contractors working alone, and that this partnership lowers the incidence of hallucinations compared with critiques generated exclusively by models.
The study also introduces Force Sampling Beam Search (FSBS), an inference-time sampling and scoring technique that balances the trade-off between minimizing bogus concerns and discovering genuine faults in LLM-generated critiques.
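As a loose illustration of what an inference-time sample-and-score strategy like FSBS does, here is a toy Python sketch: draw several candidate critiques, score each with a function that rewards coverage but penalizes verbosity, and keep the best. All names, the stand-in generator, the scoring function, and the precision weight are illustrative assumptions, not the paper’s actual implementation:

```python
import random


def generate_critique(answer: str, seed: int) -> str:
    """Stand-in for sampling one critique from a critic model (illustrative only)."""
    random.seed(seed)
    n_issues = random.randint(1, 5)
    return " ".join(f"[issue {i}]" for i in range(n_issues))


def score(critique: str, precision_weight: float) -> float:
    """Toy score: reward flagged issues (recall) but penalize length,
    standing in for a learned reward that discourages spurious nitpicks."""
    n_flagged = critique.count("[issue")
    length_penalty = precision_weight * len(critique.split())
    return n_flagged - length_penalty


def sample_and_score(answer: str, n_samples: int = 8,
                     precision_weight: float = 0.1) -> str:
    """Draw several candidate critiques and keep the best-scoring one."""
    candidates = [generate_critique(answer, seed=i) for i in range(n_samples)]
    return max(candidates, key=lambda c: score(c, precision_weight))


best = sample_and_score("some model-written code")
print(best)
```

Raising the precision weight in a scheme like this trades recall for fewer bogus concerns, which is exactly the knob the critique setting needs.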
a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.
The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.
Google Translate can come in handy when you’re traveling or communicating with someone who speaks another language, and thanks to a new update, you can now connect with some 614 million more people. Google is adding 110 new languages to its Translate tool using its AI PaLM 2 large language model (LLM), which brings the total of supported languages to nearly 250. This follows the 24 languages added in 2022, including Indigenous languages of the Americas as well as those spoken across Africa and central Asia.
Gen-3 Alpha Text to Video is now available to everyone.
A new frontier for high-fidelity, fast and controllable video generation.
Meanwhile, a separate survey of faculty released Thursday by Ithaka S+R, a higher education consulting firm, showed that faculty—while increasingly familiar with AI—often do not know how to use it in classrooms. Two out of five faculty members are familiar with AI, the Ithaka report found, but only 14 percent said they are confident in their ability to use AI in their teaching. Just slightly more (18 percent) said they understand the teaching implications of generative AI.
“Serious concerns about academic integrity, ethics, accessibility, and educational effectiveness are contributing to this uncertainty and hostility,” the Ithaka report said.
The diverging views about AI are causing friction. Nearly a third of students said they have been warned to not use generative AI by professors, and more than half (59 percent) are concerned they will be accused of cheating with generative AI, according to the Pearson report, which was conducted with Morning Consult and surveyed 800 students.
What teachers want from AI — from hechingerreport.org by Javeria Salman When teachers designed their own AI tools, they built math assistants, tools for improving student writing, and more
An AI chatbot that walks students through how to solve math problems. An AI instructional coach designed to help English teachers create lesson plans and project ideas. An AI tutor that helps middle and high schoolers become better writers.
These aren’t tools created by education technology companies. They were designed by teachers tasked with using AI to solve a problem their students were experiencing.
Over five weeks this spring, about 300 people – teachers, school and district leaders, higher ed faculty, education consultants and AI researchers – came together to learn how to use AI and develop their own basic AI tools and resources. The professional development opportunity was designed by technology nonprofit Playlab.ai and faculty at the Relay Graduate School of Education.
Next-Gen Classroom Observations, Powered by AI — from educationnext.org by Michael J. Petrilli The use of video recordings in classrooms to improve teacher performance is nothing new. But the advent of artificial intelligence could add a helpful evaluative tool for teachers, measuring instructional practice relative to common professional goals with chatbot feedback.
Multiple companies are pairing AI with inexpensive, ubiquitous video technology to provide feedback to educators through asynchronous, offsite observation. It’s an appealing idea, especially given the promise and popularity of instructional coaching, as well as the challenge of scaling it effectively (see “Taking Teacher Coaching To Scale,” research, Fall 2018).
…
Enter AI. Edthena is now offering an “AI Coach” chatbot that offers teachers specific prompts as they privately watch recordings of their lessons. The chatbot is designed to help teachers view their practice relative to common professional goals and to develop action plans to improve.
To be sure, an AI coach is no replacement for human coaching.
We need to shift our thinking about GenAI tutors serving only as personal learning tools. The above activities illustrate how these tools can be integrated into contemporary classroom instruction. The activities should not be seen as prescriptive but merely suggestive of how GenAI can be used to promote social learning. Although I specifically mention only one online activity (“Blended Learning”), all can be adapted to work well in online or blended classes to promote social interaction.
Stealth AI — from higherai.substack.com by Jason Gulya (a Professor of English at Berkeley College) What happens when students use AI all the time, but aren’t allowed to talk about it?
In many ways, this comes back to one of my general rules: You cannot ban AI in the classroom. You can only issue a gag rule.
And if you do issue a gag rule, then it deprives students of the space they often need to make heads and tails of this technology.
We need to listen to actual students talking about actual uses, and reflecting on their actual feelings. No more abstraction.
In this conversation, Jason Gulya (a Professor of English at Berkeley College) talks to Zack Kinzler about what students are saying about Artificial Intelligence and education.
Welcome to our monthly update for Teams for Education and thank you so much for being part of our growing community! We’re thrilled to share over 20 updates and resources and show them in action next week at ISTELive 24 in Denver, Colorado, US.
…
Copilot for Microsoft 365 – Educator features

Guided Content Creation
Coming soon to Copilot for Microsoft 365 is a guided content generation experience to help educators get started with creating materials like assignments, lesson plans, lecture slides, and more. The content will be created based on the educator’s requirements, with easy ways to customize the content to their exact needs.

Standards alignment and creation
Quiz generation through Copilot in Forms
Suggested AI Feedback for Educators

Teaching extension
To better support educators with their daily tasks, we’ll be launching a built-in Teaching extension to help guide them through relevant activities and provide contextual, educator-based support in Copilot.

Education data integration

Copilot for Microsoft 365 – Student features

Interactive practice experiences
Flashcards activity
Guided chat activity
Learning extension in Copilot for Microsoft 365
…
New AI tools for Google Workspace for Education — from blog.google by Akshay Kirtikar and Brian Hendricks We’re bringing Gemini to teen students using their school accounts to help them learn responsibly and confidently in an AI-first future, and empowering educators with new tools to help create great learning experiences.
The New Entry-Level Job (and Skill)
In a world where AI can perform entry-level tasks, and employers are prioritizing experienced candidates, how can recent college graduates and job seekers find a job?
AI is the new entry-level skill. As AI permeates every industry, it’s becoming increasingly common for employers to ask candidates how they think about applying AI to their jobs. (We’ve started asking this here at Reach ourselves.) Even if the job is not technical and doesn’t list AI as a skill, candidates would do well to prepare. Journalists, for instance, are warming up to using AI to transcribe interviews and suggest headlines.
So it’s not just AI that may take your entry-level role, but rather the person who knows how to use it. Candidates who are bracing for this technological shift and proactively building their AI literacy and expertise will have a leg up.
Leading school systems are incorporating AI tools such as tutoring, chatbots, and teacher assistants, and promoting AI literacy among teachers and students to adapt to the evolving role of AI in education.
Not sure if you're behind or ahead in AI adoption? I created this guide to help you benchmark.
* many who think they're behind are actually on track, and some who think they're ahead are not ** these insights are my own opinion based on years of work with hundreds… pic.twitter.com/Wr28azOwDS
Then came ChatGPT, Open AI’s widely used artificial intelligence bot. For Fender, it was a no-brainer to query it for help developing deep opening questions.
The chatbot and other AI tools like it have found an eager audience among homeschoolers and microschoolers, with parents and teachers readily embracing it as a brainstorming and management tool, even as public schools take a more cautious approach, often banning it outright.
A few observers say AI may even make homeschooling more practical, opening it up to busy parents who might have balked previously.
“Not everyone is using it, but some are very excited about it,” said Amir Nathoo, co-founder of Outschool, an online education platform.
Frontier intelligence
Claude 3.5 Sonnet sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions and is exceptional at writing high-quality content with a natural, relatable tone.
2x speed
State-of-the-art vision
Introducing Artifacts—a new way to use Claude
We’re also introducing Artifacts on claude.ai, a new feature that expands how you can interact with Claude. When you ask Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside your conversation. This creates a dynamic workspace where you can see, edit, and build upon Claude’s creations in real-time, seamlessly integrating AI-generated content into your projects and workflows.
If you teach computer science, user interface design, or anything involving web development, you can have students prompt Claude to produce web pages’ source code, see this code produced on the right side, preview it after it has compiled, and iterate through code+preview combinations.
If you teach economics, financial analysis, or accounting, you can have students prompt Claude to create analyses of markets or businesses, including interactive infographics, charts, or reports via React. Since it shows its work with Artifacts, your students can see how different prompts result in different statistical analyses, different representations of this information, and more.
If you teach subjects that produce purely textual outputs without a code intermediary, like philosophy, creative writing, or journalism, your students can compare prompting techniques, easily review their work, note common issues, and iterate drafts by comparing versions.
I see this as the first serious step towards improving the otherwise terrible user interfaces of LLMs for broad use. It may turn out to be a small change in the grand scheme of things, but it sure feels like a big improvement — especially in the pedagogical context.
And speaking of training students on AI, also see:
Schools could enhance their curricula by incorporating debate, Model UN and mock government programs, business plan competitions, internships and apprenticeships, interdisciplinary and project-based learning initiatives, makerspaces and innovation labs, community service-learning projects, student-run businesses or non-profits, interdisciplinary problem-solving challenges, public speaking and presentation skills courses, and design thinking workshops.
These programs foster essential skills such as recognizing and addressing complex challenges, collaboration, sound judgment, and decision-making. They also enhance students’ ability to communicate with clarity and precision, while nurturing creativity and critical thinking. By providing hands-on, real-world experiences, these initiatives bridge the gap between theoretical knowledge and practical application, preparing students more effectively for the multifaceted challenges they will face in their future academic and professional lives.
Our vision for Claude has always been to create AI systems that work alongside people and meaningfully enhance their workflows. As a step in this direction, Claude.ai Pro and Team users can now organize their chats into Projects, bringing together curated sets of knowledge and chat activity in one place—with the ability to make their best chats with Claude viewable by teammates. With this new functionality, Claude can enable idea generation, more strategic decision-making, and exceptional results.
Projects are available on Claude.ai for all Pro and Team customers, and can be powered by Claude 3.5 Sonnet, our latest release which outperforms its peers on a wide variety of benchmarks. Each project includes a 200K context window, the equivalent of a 500-page book, so users can add all of the relevant documents, code, and insights to enhance Claude’s effectiveness.
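As a quick back-of-the-envelope check on that “500-page book” comparison, the arithmetic works out if you assume the common rules of thumb of roughly 0.75 English words per token and about 300 words per printed page (both are approximations, not Anthropic’s stated figures):

```python
# Rough conversion from a token budget to printed pages.
# Both ratios below are common rules of thumb, not exact values.
TOKENS = 200_000
WORDS_PER_TOKEN = 0.75   # approximate for English prose
WORDS_PER_PAGE = 300     # typical printed page

words = TOKENS * WORDS_PER_TOKEN   # 150,000 words
pages = words / WORDS_PER_PAGE     # 500 pages
print(f"~{words:,.0f} words, ~{pages:,.0f} pages")
```

At those rates the math lands right on the 500-page estimate; denser token-to-word ratios (e.g., for code) would shrink the page count.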
And to understand the value of AI, they need to do R&D. Since AI doesn’t work like traditional software, but more like a person (even though it isn’t one), there is no reason to suspect that the IT department has the best AI prompters, nor that it has any particular insight into the best uses of AI inside an organization. IT certainly plays a role, but the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job. In fact, for large companies, the source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI.
Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product:” creating a safe and powerful AI system.
Ilya Sutskever Has a New Plan for Safe Superintelligence — from bloomberg.com by Ashlee Vance (behind a paywall) OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.
Ilya Sutskever is kind of a big deal in AI, to put it lightly.
Part of OpenAI’s founding team, Ilya was Chief Scientist (read: genius) before being part of the coup that fired Sam Altman.
… Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.
If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.
As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world’s most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source.
Federal officials, AI model operators and cybersecurity companies ran the first joint simulation of a cyberattack involving a critical AI system last week.
Why it matters: Responding to a cyberattack on an AI-enabled system will require a different playbook than the typical hack, participants told Axios.
The big picture: Both Washington and Silicon Valley are attempting to get ahead of the unique cyber threats facing AI companies before they become more prominent.
Immediately after we saw Sora-like videos from KLING, Luma AI’s Dream Machine video results overshadowed them.
…
Dream Machine is a next-generation AI video model that creates high-quality, realistic shots from text instructions and images.
Introducing Gen-3 Alpha — from runwayml.com by Anastasis Germanidis A new frontier for high-fidelity, controllable video generation.
AI-Generated Movies Are Around the Corner — from news.theaiexchange.com by The AI Exchange The future of AI in filmmaking; participate in our AI for Agencies survey
AI-Generated Feature Films Are Around the Corner.
We predict feature-film length AI-generated films are coming by the end of 2025, if not sooner.
It further seems to me that there is increasingly a divide in the use of generative AI between larger firms and smaller firms. Some will jump on me for saying that, because there are clearly smaller firms that are leading the pack in their development and use of generative AI. (I’m looking at you, Siskind Susser.) By the same token, there are larger firms that have locked their doors to generative AI.
But the firms most openly incorporating generative AI into their workflows seem mostly to be larger firms. There is good reason for this. Larger firms have innovation officers, KM professionals, and others on staff who are leading the charge on generative AI. Thanks to them, those firms are better equipped to survey the AI landscape and test products under controlled and secure conditions.
Eighteen months ago, the first-of-its-kind Judicial Innovation Fellowship launched with the mission of embedding experienced technologists and designers within state, local, and tribal courts to develop technology-based solutions to improve the public’s access to justice. Housed within the Institute for Technology Law & Policy at Georgetown University Law Center, the program was designed to be a catalyst for innovation to enable courts to better serve the legal needs of the public.
The advances with generative AI tools open new career opportunities for lawyers, from legal tech consultants to junior lawyers supervising AI systems.
…
Institutions like Harvard Law School and Yale Law School are introducing courses that focus on AI’s implications in the legal field, and such career opportunities continue to arise.
…
Pursuing a career in Legal AI requires a unique blend of legal knowledge and technical skills. Various educational pathways can equip aspiring professionals with these competencies.
We have to provide instructors the support they need to leverage educational technologies like generative AI effectively in the service of learning. Given the amount of benefit that could accrue to students if powerful tools like generative AI were used effectively by instructors, it seems unethical not to provide instructors with professional development that helps them better understand how learning occurs and what effective teaching looks like. Without more training and support for instructors, the amount of student learning higher education will collectively “leave on the table” will only increase as generative AI gets more and more capable. And that’s a problem.
From DSC: As is often the case, David put together a solid posting here. A few comments/reflections on it:
I agree that more training/professional development is needed, especially regarding generative AI. This would help achieve a far greater ROI and impact.
The pace of change makes it difficult to see where the sand is settling…and thus what to focus on
The Teaching & Learning Groups out there are also trying to learn and grow in their knowledge (so that they can train others)
The administrators out there are also trying to figure out what all of this generative AI stuff is all about; and so are the faculty members. It takes time for educational technologies’ impact to roll out and be integrated into how people teach.
As we’re talking about multiple disciplines here, I think we need more team-based content creation and delivery.
There needs to be more research on how best to use AI — again, it would be helpful if the sand settled a bit first, so as not to waste time and $$. But then that research needs to be piped into the classrooms far better.
Introducing Gen-3 Alpha: Runway’s new base model for video generation.
Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions. https://t.co/YQNE3eqoWf
Introducing GEN-3 Alpha – The first of a series of new models built by creatives for creatives. Video generated with @runwayml‘s new Text-2-Video model.
Learning personalisation. LinkedIn continues to be bullish on its video-based learning platform, and it appears to have found a strong current among users who need to skill up in AI. Cohen said that traffic for AI-related courses — which include modules on technical skills as well as non-technical ones such as basic introductions to generative AI — has increased by 160% over last year.
You can be sure that LinkedIn is pushing its search algorithms to tap into the interest, but it’s also boosting its content with AI in another way.
For Premium subscribers, it is piloting what it describes as “expert advice, powered by AI.” Tapping into expertise from well-known instructors such as Alicia Reece, Anil Gupta, Dr. Gemma Leigh Roberts and Lisa Gates, LinkedIn says its AI-powered coaches will deliver responses personalized to users, as a “starting point.”
These will, in turn, also appear as personalized coaches that a user can tap while watching a LinkedIn Learning course.
Personalized learning for everyone: Whether you’re looking to change roles or not, the skills required in the workplace are expected to change by 68% by 2030.
Expert advice, powered by AI: We’re beginning to pilot the ability to get personalized practical advice instantly from industry leading business leaders and coaches on LinkedIn Learning, all powered by AI. The responses you’ll receive are trained by experts and represent a blend of insights that are personalized to each learner’s unique needs. While human professional coaches remain invaluable, these tools provide a great starting point.
Personalized coaching, powered by AI, when watching a LinkedIn course: As learners —including all Premium subscribers — watch our new courses, they can now simply ask for summaries of content, clarify certain topics, or get examples and other real-time insights, e.g. “Can you simplify this concept?” or “How does this apply to me?”
How Learning Designers Are Using AI for Analysis — from drphilippahardman.substack.com by Dr. Philippa Hardman A practical guide on how to 10X your analysis process using free AI tools, based on real use cases
There are three key areas where AI tools make a significant impact on how we tackle the analysis part of the learning design process:
Understanding the why: what is the problem this learning experience solves? What’s the change we want to see as a result?
Defining the who: who do we need to target in order to solve the problem and achieve the intended goal?
Clarifying the what: given who our learners are and the goal we want to achieve, what concepts and skills do we need to teach?
Two new surveys, both released this month, show how high school and college-age students are embracing artificial intelligence. There are some inconsistencies and many unanswered questions, but what stands out is how much teens are turning to AI for information and to ask questions, not just to do their homework for them. And they’re using it for personal reasons as well as for school. Another big takeaway is that there are different patterns by race and ethnicity with Black, Hispanic and Asian American students often adopting AI faster than white students.
We’ve ceded so much trust to digital systems already that most simply assume a tool is safe to use with students because a company published it. We don’t check to see if it is compliant with any existing regulations. We don’t ask what powers it. We do not question what happens to our data or our students’ data once we upload it. We likewise don’t know where its information came from or how it came to generate human-like responses. The trust we put into these systems is entirely unearned and uncritical.
…
The allure of these AI tools for teachers is understandable—who doesn’t want to save time on the laborious process of designing lesson plans and materials? But we have to ask ourselves what is lost when we cede the instructional design process to an automated system without critical scrutiny.
From DSC: I post this to be a balanced publisher of information. I don’t agree with everything Marc says here, but he brings up several solids points.
As news about generative artificial intelligence (GenAI) continually splashes across social media feeds, including how ChatGPT 4o can help you play Rock, Paper, Scissors with a friend, breathtaking pronouncements about GenAI’s “disruptive” impact aren’t hard to find.
It turns out that it doesn’t make much sense to talk about GenAI as being “disruptive” in and of itself.
Can it be part of a disruptive innovation? You bet.
But much more important than just the AI technology in determining whether something is disruptive is the business model in which the AI is used—and its competitive impact on existing products and services in different markets.
On a somewhat related note, also see:
National summit explores how digital education can promote deeper learning — from digitaleducation.stanford.edu by Jenny Robinson; via Eric Kunnen on Linkedin.com The conference, held at Stanford, was organized to help universities imagine how digital innovation can expand their reach, improve learning, and better serve the public good.
The summit was organized around several key questions: “What might learning design, learning technologies, and educational media look like in three, five, or ten years at our institutions? How will blended and digital education be poised to advance equitable, just, and accessible education systems and contribute to the public good? What structures will we need in place for our teams and offices?”
Dream Machine is an AI model that makes high quality, realistic videos fast from text and images.
It is a highly scalable and efficient transformer model trained directly on videos making it capable of generating physically accurate, consistent and eventful shots. Dream Machine is our first step towards building a universal imagination engine and it is available to everyone now!
Luma AI just dropped a Sora-like AI video generator called Dream Machine.
But unlike Sora or KLING, it’s completely open access to the public.