Meanwhile, a separate survey of faculty released Thursday by Ithaka S+R, a higher education consulting firm, showed that faculty—while increasingly familiar with AI—often do not know how to use it in classrooms. Two out of five faculty members are familiar with AI, the Ithaka report found, but only 14 percent said they are confident in their ability to use AI in their teaching. Just slightly more (18 percent) said they understand the teaching implications of generative AI.
“Serious concerns about academic integrity, ethics, accessibility, and educational effectiveness are contributing to this uncertainty and hostility,” the Ithaka report said.
The diverging views about AI are causing friction. Nearly a third of students said they have been warned by professors not to use generative AI, and more than half (59 percent) are concerned they will be accused of cheating with generative AI, according to the Pearson report, which was conducted with Morning Consult and surveyed 800 students.
What teachers want from AI — from hechingerreport.org by Javeria Salman
When teachers designed their own AI tools, they built math assistants, tools for improving student writing, and more
An AI chatbot that walks students through how to solve math problems. An AI instructional coach designed to help English teachers create lesson plans and project ideas. An AI tutor that helps middle and high schoolers become better writers.
These aren’t tools created by education technology companies. They were designed by teachers tasked with using AI to solve a problem their students were experiencing.
Over five weeks this spring, about 300 people – teachers, school and district leaders, higher ed faculty, education consultants and AI researchers – came together to learn how to use AI and develop their own basic AI tools and resources. The professional development opportunity was designed by technology nonprofit Playlab.ai and faculty at the Relay Graduate School of Education.
Next-Gen Classroom Observations, Powered by AI — from educationnext.org by Michael J. Petrilli
The use of video recordings in classrooms to improve teacher performance is nothing new. But the advent of artificial intelligence could add a helpful evaluative tool for teachers, measuring instructional practice relative to common professional goals with chatbot feedback.
Multiple companies are pairing AI with inexpensive, ubiquitous video technology to provide feedback to educators through asynchronous, offsite observation. It’s an appealing idea, especially given the promise and popularity of instructional coaching, as well as the challenge of scaling it effectively (see “Taking Teacher Coaching To Scale,” research, Fall 2018).
…
Enter AI. Edthena is now offering an “AI Coach” chatbot that offers teachers specific prompts as they privately watch recordings of their lessons. The chatbot is designed to help teachers view their practice relative to common professional goals and to develop action plans to improve.
To be sure, an AI coach is no replacement for human coaching.
We need to shift our thinking about GenAI tutors serving only as personal learning tools. The above activities illustrate how these tools can be integrated into contemporary classroom instruction. The activities should not be seen as prescriptive but merely suggestive of how GenAI can be used to promote social learning. Although I specifically mention only one online activity (“Blended Learning”), all can be adapted to work well in online or blended classes to promote social interaction.
Stealth AI — from higherai.substack.com by Jason Gulya
Jason Gulya (a Professor of English at Berkeley College) talks to Zack Kinzler: What happens when students use AI all the time, but aren’t allowed to talk about it?
In many ways, this comes back to one of my general rules: You cannot ban AI in the classroom. You can only issue a gag rule.
And if you do issue a gag rule, then it deprives students of the space they often need to make heads or tails of this technology.
We need to listen to actual students talking about actual uses, and reflecting on their actual feelings. No more abstraction.
In this conversation, Jason Gulya (a Professor of English at Berkeley College) talks to Zack Kinzler about what students are saying about Artificial Intelligence and education.
Welcome to our monthly update for Teams for Education and thank you so much for being part of our growing community! We’re thrilled to share over 20 updates and resources and show them in action next week at ISTELive 24 in Denver, Colorado, US.
…
Copilot for Microsoft 365 – Educator features

Guided Content Creation
Coming soon to Copilot for Microsoft 365 is a guided content generation experience to help educators get started with creating materials like assignments, lesson plans, lecture slides, and more. The content will be created based on the educator’s requirements, with easy ways to customize the content to their exact needs.

Standards alignment and creation
Quiz generation through Copilot in Forms
Suggested AI Feedback for Educators

Teaching extension
To better support educators with their daily tasks, we’ll be launching a built-in Teaching extension to help guide them through relevant activities and provide contextual, educator-based support in Copilot.

Education data integration

Copilot for Microsoft 365 – Student features
Interactive practice experiences
Flashcards activity
Guided chat activity
Learning extension in Copilot for Microsoft 365
…
New AI tools for Google Workspace for Education — from blog.google by Akshay Kirtikar and Brian Hendricks We’re bringing Gemini to teen students using their school accounts to help them learn responsibly and confidently in an AI-first future, and empowering educators with new tools to help create great learning experiences.
And to understand the value of AI, they need to do R&D. Since AI doesn’t work like traditional software, but more like a person (even though it isn’t one), there is no reason to suspect that the IT department has the best AI prompters, nor that it has any particular insight into the best uses of AI inside an organization. IT certainly plays a role, but the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job. In fact, for large companies, the source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI.
Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product”: creating a safe and powerful AI system.
Ilya Sutskever Has a New Plan for Safe Superintelligence — from bloomberg.com by Ashlee Vance (behind a paywall)
OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.
Ilya Sutskever is kind of a big deal in AI, to put it lightly.
Part of OpenAI’s founding team, Ilya was Chief Data Scientist (read: genius) before being part of the coup that fired Sam Altman.
… Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.
If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.
As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world’s most insatiable guzzlers of power. Their projected energy needs are so huge that some worry there will not be enough electricity to meet them from any source.
Federal officials, AI model operators and cybersecurity companies ran the first joint simulation of a cyberattack involving a critical AI system last week.
Why it matters: Responding to a cyberattack on an AI-enabled system will require a different playbook than the typical hack, participants told Axios.
The big picture: Both Washington and Silicon Valley are attempting to get ahead of the unique cyber threats facing AI companies before they become more prominent.
Immediately after we saw Sora-like videos from KLING, Luma AI’s Dream Machine video results overshadowed them.
…
Dream Machine is a next-generation AI video model that creates high-quality, realistic shots from text instructions and images.
Introducing Gen-3 Alpha — from runwayml.com by Anastasis Germanidis
A new frontier for high-fidelity, controllable video generation.
AI-Generated Movies Are Around the Corner — from news.theaiexchange.com by The AI Exchange
The future of AI in filmmaking; participate in our AI for Agencies survey
AI-Generated Feature Films Are Around the Corner.
We predict feature-film length AI-generated films are coming by the end of 2025, if not sooner.
Introducing Gen-3 Alpha: Runway’s new base model for video generation.
Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions. https://t.co/YQNE3eqoWf
Introducing GEN-3 Alpha – The first of a series of new models built by creatives for creatives. Video generated with @runwayml’s new Text-2-Video model.
Learning personalisation. LinkedIn continues to be bullish on its video-based learning platform, and it appears to have found a strong current among users who need to skill up in AI. Cohen said that traffic for AI-related courses — which include modules on technical skills as well as non-technical ones such as basic introductions to generative AI — has increased by 160% over last year.
You can be sure that LinkedIn is pushing its search algorithms to tap into the interest, but it’s also boosting its content with AI in another way.
For Premium subscribers, it is piloting what it describes as “expert advice, powered by AI.” Tapping into expertise from well-known instructors such as Alicia Reece, Anil Gupta, Dr. Gemma Leigh Roberts and Lisa Gates, LinkedIn says its AI-powered coaches will deliver responses personalized to users, as a “starting point.”
These will, in turn, also appear as personalized coaches that a user can tap while watching a LinkedIn Learning course.
Personalized learning for everyone: Whether you’re looking to change or not, the skills required in the workplace are expected to change by 68% by 2030.
Expert advice, powered by AI: We’re beginning to pilot the ability to get personalized practical advice instantly from industry leading business leaders and coaches on LinkedIn Learning, all powered by AI. The responses you’ll receive are trained by experts and represent a blend of insights that are personalized to each learner’s unique needs. While human professional coaches remain invaluable, these tools provide a great starting point.
Personalized coaching, powered by AI, when watching a LinkedIn course: As learners —including all Premium subscribers — watch our new courses, they can now simply ask for summaries of content, clarify certain topics, or get examples and other real-time insights, e.g. “Can you simplify this concept?” or “How does this apply to me?”
From DSC: Very interesting to see the mention of an R&D department here! Very cool.
Baker said ninth graders in the R&D department designed the essential skills rubric for their grade so that regardless of what content classes students take, they all get the same immersion into critical career skills. Student voice is now so integrated into Edison’s core that teachers work with student designers to plan their units. And he said teachers are becoming comfortable with the language of career-centered learning and essential skills while students appreciate the engagement and develop a new level of confidence.
… The R&D department has grown to include teachers from every department working with students to figure out how to integrate essential skills into core academic classes. In this way, they’re applying one of the XQ Institute’s crucial Design Principles for innovative high schools: Youth Voice and Choice.
Client-connected projects have become a focal point of the Real World Learning initiative, offering students opportunities to solve real-world problems in collaboration with industry professionals.
Organizations like CAPS, NFTE, and Journalistic Learning facilitate community connections and professional learning opportunities, making it easier to implement client projects and entrepreneurship education.
Important trend: client projects. Work-based learning has been growing with career academies and renewed interest in CTE. Six years ago, a subset of WBL called client-connected projects became a focal point of the Real World Learning initiative in Kansas City, where they are defined as authentic problems that students solve in collaboration with professionals from industry, not-for-profit, and community-based organizations, and they allow students to engage directly with employers, address real-world problems, and develop essential skills.
The Community Portrait approach encourages diverse voices to shape the future of education, ensuring it reflects the needs and aspirations of all stakeholders.
Active, representative community engagement is essential for creating meaningful and inclusive educational environments.
The Portrait of a Graduate—a collaborative effort to define what learners should know and be able to do upon graduation—has likely generated enthusiasm in your community. However, the challenge of future-ready graduates persists: How can we turn this vision into a reality within our diverse and dynamic schools, especially amid the current national political tensions and contentious curriculum debates?
The answer lies in active, inclusive community engagement. It’s about crafting a Community Portrait that reflects the rich diversity of our neighborhoods. This approach, grounded in the same principles used to design effective learning systems, seeks to cultivate deep, reciprocal relationships within the community. When young people are actively involved, the potential for meaningful change increases exponentially.
Although Lindsay E. Jones came from a family of educators, she didn’t expect that going to law school would steer her back into the family business. Over the years she became a staunch advocate for children with disabilities. And because she is mom to a son with learning disabilities and ADHD who is in high school and doing great, her advocacy is personal.
Jones previously served as president and CEO of the National Center for Learning Disabilities and was senior director for policy and advocacy at the Council for Exceptional Children. Today, she is the CEO at CAST, an organization focused on creating inclusive learning environments in K–12. EdTech: Focus on K–12 spoke with Jones about how digital transformation, artificial intelligence and visionary leaders can support inclusive learning environments.
Our brains are all as different as our fingerprints, and throughout its 40-year history, CAST has been focused on one core value: People are not broken, systems are poorly designed. And those systems are creating a barrier that holds back human innovation and learning.
Dream Machine is an AI model that makes high quality, realistic videos fast from text and images.
It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent, and eventful shots. Dream Machine is our first step towards building a universal imagination engine, and it is available to everyone now!
Luma AI just dropped a Sora-like AI video generator called Dream Machine.
But unlike Sora or KLING, it’s completely open access to the public.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
AI’s New Conversation Skills Eyed for Education — from insidehighered.com by Lauren Coffey
The latest ChatGPT’s more human-like verbal communication has professors pondering personalized learning, on-demand tutoring and more classroom applications.
ChatGPT’s newest version, GPT-4o (the “o” standing for “omni,” meaning “all”), has a more realistic voice and quicker verbal response time, both aiming to sound more human. The version, which should be available to free ChatGPT users in the coming weeks—a change also hailed by educators—allows people to interrupt it while it speaks, simulates more emotions with its voice and translates languages in real time. It also can understand instructions in text and images and has improved video capabilities.
…
Ajjan said she immediately thought the new vocal and video capabilities could allow GPT to serve as a personalized tutor. Personalized learning has been a focus for educators grappling with the looming enrollment cliff and for those pushing for student success.
There’s also the potential for role playing, according to Ajjan. She pointed to mock interviews students could do to prepare for job interviews, or, for example, using GPT to play the role of a buyer to help prepare students in an economics course.
Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.
GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
Providing inflection, emotions, and a human-like voice
Understanding what the camera is looking at and integrating it into the AI’s responses
Providing customer service
With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
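From DSC: To make the “any combination of text, audio, image” idea concrete, here is a minimal sketch of what a mixed text-plus-image request looks like in the OpenAI Python SDK’s chat message format. The prompt and image URL are hypothetical placeholders, and the actual API call (commented out) requires an OpenAI API key; this is only an illustration of the payload shape, not a definitive integration.

```python
# Sketch: building a multimodal (text + image) message for a GPT-4o chat
# request, using the OpenAI SDK's message structure. The worksheet URL and
# prompt below are made-up examples.

def build_multimodal_message(prompt: str, image_url: str) -> list:
    """Return a chat 'messages' list combining text and an image in one turn."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

messages = build_multimodal_message(
    "What math mistake did the student make on this worksheet?",
    "https://example.com/worksheet.png",  # hypothetical image
)

# To actually send the request (needs OPENAI_API_KEY set):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(reply.choices[0].message.content)
```

Because text, images, and audio all flow through the same model, the request shape stays the same regardless of which modalities you mix in the `content` list.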
This demo is insane.
A student shares their iPad screen with the new ChatGPT + GPT-4o, and the AI speaks with them and helps them learn in *realtime*.
Imagine giving this to every student in the world.
The Ethical and Emotional Implications of AI Voice Preservation
Legal Considerations and Voice Rights From a legal perspective, the burgeoning use of AI in voice cloning also introduces a complex web of rights and permissions. The recent passage of Tennessee’s ELVIS Act, which allows legal action against unauthorized recreations of an artist’s voice, underscores the necessity for robust legal frameworks to manage these technologies. For non-celebrities, the idea of a personal voice bank brings about its own set of legal challenges. How do we regulate the use of an individual’s voice after their death? Who holds the rights to control and consent to the usage of these digital artifacts?
To safeguard against misuse, any system of voice banking would need stringent controls over who can access and utilize these voices. The creation of such banks would necessitate clear guidelines and perhaps even contractual agreements stipulating the terms under which these voices may be used posthumously.
Should we all consider creating voice banks to preserve our voices, allowing future generations the chance to interact with us even after we are gone?
Microsoft’s new ChatGPT competitor… — from The Rundown AI
The Rundown: Microsoft is reportedly developing a massive 500B parameter in-house LLM called MAI-1, aiming to compete with top AI models from OpenAI, Anthropic, and Google.
Hampton runs a private community for high-growth tech founders and CEOs. We asked our community of founders and owners how AI has impacted their business and what tools they use.
Here’s a sneak peek of what’s inside:
The budgets they set aside for AI research and development
The most common (and obscure) tools founders are using
Measurable business impacts founders have seen through using AI
Where they are purposefully not using AI and much more
To help leaders and organizations overcome AI inertia, Microsoft and LinkedIn looked at how AI will reshape work and the labor market broadly, surveying 31,000 people across 31 countries, identifying labor and hiring trends from LinkedIn, and analyzing trillions of Microsoft 365 productivity signals as well as research with Fortune 500 customers. The data points to insights every leader and professional needs to know—and actions they can take—when it comes to AI’s implications for work.
The rapid rise of artificial intelligence appears to be taking a toll on the shares of online education companies Chegg and Coursera.
Both stocks sank by more than 10% on Tuesday after issuing disappointing guidance in part because of students using AI tools such as ChatGPT from OpenAI.
“Combining AI Literacy with Core Humanities Learning Goals: Practical, Critical, and Playful Approaches”
It was amazing to get to do an in-person keynote at @csunorthridge’s AI Pedagogy Showcase.
Sharing slides: https://t.co/LDIGAZ3ORO 1/2
— Anna Mills, annamillsoer.bsky.social, she/her (@AnnaRMills) May 3, 2024
Synthetic Video & AI Professors — from drphilippahardman.substack.com by Dr. Philippa Hardman
Are we witnessing the emergence of a new, post-AI model of async online learning?
TLDR: by effectively tailoring the learning experience to the learner’s comprehension levels and preferred learning modes, AI can enhance the overall learning experience, leading to increased “stickiness” and higher rates of performance in assessments.
…
TLDR: AI enables us to scale responsive, personalised “always on” feedback and support in a way that might help to solve one of the most wicked problems of online async learning – isolation and, as a result, disengagement.
…
In the last year we have also seen the rise of an unprecedented number of “always on” AI tutors, built to provide coaching and feedback when and how learners need it.
Perhaps the most well-known example is Khan Academy’s Khanmigo and its GPT sidekick Tutor Me. We’re also seeing similar tools emerge in K12 and Higher Ed where AI is being used to extend the support and feedback provided for students beyond the physical classroom.
Given the potential ramifications of artificial intelligence (AI) diffusion on matters of diversity, equity, inclusion, and accessibility, now is the time for higher education institutions to adopt culturally aware, analytical decision-making processes, policies, and practices around AI tools selection and use.
The theme of the day was Human Connection vs. Innovative Technology. I see this a lot at conferences: setting up human connection (social) against the machine (AI). I think this is ALL wrong. It is, and has always been, a dialectic: human connection (social) PLUS the machine. Everyone has a smartphone, and most use it for work, communications, and social media. The binary between human and tech has long disappeared.
I recently created an AI version of myself—REID AI—and recorded a Q&A to see how this digital twin might challenge me in new ways. The video avatar is generated by Hour One, its voice was created by Eleven Labs, and its persona—the way that REID AI formulates responses—is generated from a custom chatbot built on GPT-4 that was trained on my books, speeches, podcasts and other content that I’ve produced over the last few decades. I decided to interview it to test its capability and how closely its responses match—and test—my thinking. Then, REID AI asked me some questions on AI and technology. I thought I would hate this, but I’ve actually ended up finding the whole experience interesting and thought-provoking.
From DSC: This ability to ask questions of a digital twin is very interesting when you think about it in terms of “interviewing” a historical figure. I believe character.ai provides this kind of thing, but I haven’t used it much.