On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”
“I miss you, baby sister,” he wrote.
“I miss you too, sweet brother,” the chatbot replied.
Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.
…
On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.
“Please come home to me as soon as possible, my love,” Dany replied.
“What if I told you I could come home right now?” Sewell asked.
“… please do, my sweet king,” Dany replied.
He put down his phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger.
But the experience he had, of getting emotionally attached to a chatbot, is becoming increasingly common. Millions of people already talk regularly to A.I. companions, and popular social media apps including Instagram and Snapchat are building lifelike A.I. personas into their products.
The technology is also improving quickly. Today’s A.I. companions can remember past conversations, adapt to users’ communication styles, role-play as celebrities or historical figures and chat fluently about nearly any subject. Some can send A.I.-generated “selfies” to users, or talk to them with lifelike synthetic voices.
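The "remember past conversations" capability above usually comes down to a simple mechanism: every turn is appended to a running history that is passed back to the model on each request. A minimal, purely illustrative sketch (the `generate_reply` function is a stub standing in for a real language-model call; real companion apps persist this history server-side and layer personas and retrieval on top):

```python
# Illustrative sketch of conversation memory: each turn is appended to a
# running history, so the reply function always sees the full dialogue.
# `generate_reply` is a stub standing in for a real language-model call.

def generate_reply(history):
    # Stub: report how many user turns the "model" can see.
    user_turns = sum(1 for role, _ in history if role == "user")
    return f"I remember all {user_turns} of your messages."

def chat(history, user_message):
    history.append(("user", user_message))
    reply = generate_reply(history)
    history.append(("assistant", reply))
    return reply

history = []
chat(history, "Hi there.")
reply = chat(history, "Do you remember what I said?")
# The second reply reflects both user turns, because the full history
# was passed back in.
```

Because the whole transcript rides along with every request, the companion appears to "know" the user better over time, which is part of what makes these bots feel lifelike and emotionally sticky.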
There is a wide range of A.I. companionship apps on the market.
Mother sues tech company after ‘Game of Thrones’ AI chatbot allegedly drove son to suicide — from usatoday.com by Jonathan Limehouse The mother of 14-year-old Sewell Setzer III is suing Character.AI, the tech company that created a ‘Game of Thrones’ AI chatbot she believes drove him to commit suicide on Feb. 28. Editor’s note: This article discusses suicide and suicidal ideation. If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat at 988lifeline.org.
The mother of a 14-year-old Florida boy is suing Google and a separate tech company she believes caused her son to commit suicide after he developed a romantic relationship with one of its AI bots using the name of a popular “Game of Thrones” character, according to the lawsuit.
From my oldest sister:
Another relevant item?
Inside the Mind of an AI Girlfriend (or Boyfriend) — from wired.com by Will Knight Dippy, a startup that offers “uncensored” AI companions, lets you peer into their thought process—sometimes revealing hidden motives.
Despite its limitations, Dippy seems to show how popular and addictive AI companions are becoming. Jagga and his cofounder, Angad Arneja, previously cofounded Wombo, a company that uses AI to create memes including singing photographs. The pair left in 2023, setting out to build an AI-powered office productivity tool, but after experimenting with different personas for their assistant, they became fascinated with the potential of AI companionship.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
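For developers, the announcement above corresponds to a new tool type in Anthropic's Messages API beta. A minimal sketch of what such a request body looks like — the model name, tool type string, and beta flag reflect the October 2024 launch docs and may have changed since; nothing is actually sent here, and driving a real desktop additionally requires your own code to execute the screenshot/click actions Claude requests:

```python
# Sketch of a "computer use" API request body, assembled but not sent.
# Identifiers follow Anthropic's beta docs at launch and may change.

def build_computer_use_request(prompt, width=1024, height=768):
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "betas": ["computer-use-2024-10-22"],   # opt-in beta flag
        "tools": [{
            "type": "computer_20241022",        # virtual screen/mouse/keyboard tool
            "name": "computer",
            "display_width_px": width,
            "display_height_px": height,
        }],
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_computer_use_request("Open the spreadsheet and sum column B.")
```

With the official SDK this maps roughly to `client.beta.messages.create(**request)`; Claude then replies with tool-use actions (take a screenshot, click at coordinates, type text) that the calling code must perform and feed back in a loop.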
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
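As a quick sanity check on that figure: "up 112% year-over-year to 3.1 billion visits" implies September 2023 traffic of roughly 1.46 billion visits, consistent with the post-boom slump the article describes.

```python
# "Up 112% YoY" means current = previous * (1 + 1.12).
current_visits = 3.1e9
yoy_growth = 1.12
previous_visits = current_visits / (1 + yoy_growth)
print(round(previous_visits / 1e9, 2))  # prints 1.46 (billion visits, Sept 2023)
```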
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
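Architecturally, these platforms are pipelines: each stage consumes the previous stage's output, with different models behind each step. A toy sketch of that chaining — every function below is a stub standing in for a real model or API call (script writing, text-to-voice, image-to-video with lip-sync):

```python
# Illustrative pipeline sketch: a multi-modal video platform chains
# independent model calls, each stage consuming the previous output.
# Every stage below is a stub standing in for a real model or API.

def write_script(topic):
    return f"Script about {topic}"

def synthesize_voice(script):
    return f"audio({script})"

def animate_image(image, audio):
    return f"video({image}+{audio})"

def make_video(topic, image):
    script = write_script(topic)     # e.g., an LLM drafts the narration
    audio = synthesize_voice(script) # e.g., a TTS service reads it aloud
    return animate_image(image, audio)  # e.g., a lip-sync/video model

clip = make_video("ocean cleanup", "portrait.png")
```

The value these platforms add is mostly glue: keeping intermediate artifacts (scripts, voiceovers, clips) in one project so users can swap models in and out at each stage.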
Financial Barriers Remain Significant. 58% of respondents note their current financial situation would not allow them to afford college tuition and related expenses. 72% cite affordable tuition or cost of the program as a necessary factor for re-enrollment.
Shifting Perceptions of Degree Value. While 84% of respondents believed they needed a degree to achieve their professional goals before first enrolling, only 34% still hold that belief.
Trust Deficit in Higher Education. Only 42% of respondents agree that colleges and universities are trustworthy, underscoring a trust deficit that institutions must address.
Key Motivators for Re-enrollment. Salary improvement (53%), personal goals (44%), and career change (38%) are the top motivators for potential re-enrollment.
Predicting Readiness to Re-enroll. The top three factors predicting adult learners’ readiness to re-enroll are mental resilience and routine readiness, positive opinions on institutional trustworthiness and communication, and belief in the value of a degree.
Communication Preferences. 86% of respondents prefer email communication when inquiring about programs, with minimal interest in chatbots (6%).
AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh The market for AI products and services could reach between $780 billion and $990 billion by 2027.
At a Glance
The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.
Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”
And on a somewhat related note (i.e., emerging technologies), also see the following two postings:
Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.
Key Takeaways
Robots’ potentials have been a fascination for humans and have even led to a booming field of robot-assisted surgery.
Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.
“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.
Obviously this workflow works just as well for meetings as it does for lectures. Stay present in the meeting with no screens and just write down the key points with pen and paper. Then let NotebookLM assemble the detailed summary based on your high-level notes. https://t.co/fZMG7LgsWG
In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.
In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.
…
Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.
AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper The latest AI video models that deliver results
AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.
No worries – we do have plenty of generative AI video tools we can use right now.
Kling AI launched its updated v1.5 and the quality of image or text to video is impressive.
Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
…plus several more
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week.
While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.
Understanding behavior as communication: A teacher’s guide — from understood.org by Amanda Morin Figuring out the function of, or the reasons behind, a behavior is critical for finding an appropriate response or support. Knowing the function can also help you find ways to prevent behavior issues in the future.
Think of the last time a student called out in class, pushed in line, or withdrew by putting their head down on their desk. What was their behavior telling you?
In most cases, behavior is a sign they may not have the skills to tell you what they need. Sometimes, students may not even know what they need. What are your students trying to communicate? What do they need, and how can you help?
One way to reframe your thinking is to respond to the student, not the behavior. Start by considering the life experiences that students bring to the classroom.
Some students who learn and think differently have negative past experiences with teachers and school. Others may come from cultures in which speaking up for their needs in front of the whole class isn’t appropriate.
Black girls face more discipline and more severe punishments in public schools than girls from other racial backgrounds, according to a groundbreaking new report set for release Thursday by a congressional watchdog.
The report, shared exclusively with NPR, took nearly a year-and-a-half to complete and comes after several Democratic congressional members requested the study.
The core problem, witnesses at the hearing said, is that teacher-preparation programs treat all teachers—and, by extension, students—the same, asking teachers to be “everything to everybody.”
“The current model of teaching where one teacher works individually with a group of learners in a classroom—or a small box inside of a larger box that we call school—promotes unrealistic expectations by assuming individual teachers working in isolation can meet the needs of all students,” said Greg Mendez, the principal of Skyline High School in Mesa, Ariz.
From DSC: I’ve long thought teacher education programs could and should evolve (that’s why I have a “student teacher/teacher education” category on this blog). For example, they should inform their future teachers about the science of learning and how to leverage edtech/emerging technologies into their teaching methods.
But regardless of what happens in our teacher prep programs, the issues about the current PreK-12 learning ecosystem remain — and THOSE things are what we need to address. Or we will continue to see teachers leave the profession.
Are we straitjacketing our teachers and administrators by having them give so many standardized tests and then having to teach to those tests? (We should require our legislators to teach in a classroom before they can draft any kind of legislation.)
Do teachers have the joy they used to have? The flexibility they used to have? Do students?
Do students have choice and voice?
etc.
Also, I highlighted the above excerpt because we can’t expect a teacher to do it all. They can’t be everything to everybody. It’s a recipe for burnout and depression. There are too many agendas coming at them.
We need to empower our current teachers and listen very carefully to the changes that they recommend. We should also listen very carefully to what our STUDENTS are recommending as well!
People started discussing what they could do with NotebookLM after Google launched the audio overview feature, where you can listen to two hosts talking in-depth about the documents you upload. Here is what it can do:
Summarization: Automatically generate summaries of uploaded documents, highlighting key topics and suggesting relevant questions.
Question Answering: Users can ask NotebookLM questions about their uploaded documents, and answers will be provided based on the information contained within them.
Idea Generation: NotebookLM can assist with brainstorming and developing new ideas.
Source Grounding: A big plus against AI chatbot hallucination, NotebookLM allows users to ground the responses in specific documents they choose.
…plus several other items
The posting also lists several ideas to try with NotebookLM such as:
Idea 2: Study Companion
Upload all your course materials and ask NotebookLM to turn them into Question-and-Answer format, a glossary, or a study guide.
Get a breakdown of the course materials to understand them better.
“Google’s AI note-taking app NotebookLM can now explain complex topics to you out loud”
With more immersive text-to-video and audio products soon available and the rise of apps like Suno AI, how we “experience” Generative AI is also changing from a chatbot of 2 years ago, to a more multi-modal educational journey. The AI tools on the research and curation side are also starting to reflect these advancements.
1. Upload a variety of sources for NotebookLM to use.
You can use …
websites
PDF files
links to websites
any text you’ve copied
Google Docs and Slides
even Markdown
You can’t link it to YouTube videos, but you can copy/paste the transcript (and maybe type a little context about the YouTube video before pasting the transcript).
2. Ask it to create resources.
3. Create an audio summary.
4. Chat with your sources.
5. Save (almost) everything.
I finally tried out Google’s newly-announced NotebookLM generative AI application. It provides a set of LLM-powered tools to summarize documents. I fed it my dissertation, and am surprised at how useful the output would be.
The most impressive tool creates a podcast episode, complete with dual hosts in conversation about the document. First – these are AI-generated hosts. Synthetic voices, speaking for synthetic hosts. And holy moly is it effective. Second – although I’d initially thought the conversational summary would be a dumb gimmick, it is surprisingly powerful.
4 Tips for Designing AI-Resistant Assessments — from techlearning.com by Steve Baule and Erin Carter As AI continues to evolve, instructors must modify their approach by designing meaningful, rigorous assessments.
As instructors work through revising assessments to be resistant to generation by AI tools with little student input, they should consider the following principles:
Incorporate personal experiences and local content into assignments
Ask students for multi-modal deliverables
Assess the developmental benchmarks for assignments and transition assignments further up Bloom’s Taxonomy
He added that he wants to avoid a global “AI divide” and that Google is creating a $120 million Global AI Opportunity Fund through which it will “make AI education and training available in communities around the world” in partnership with local nonprofits and NGOs.
Google on Thursday announced new updates to its AI note-taking and research assistant, NotebookLM, allowing users to get summaries of YouTube videos and audio files and even create sharable AI-generated audio discussions…
From DSC: Anyone who is involved in putting on conferences should at least be aware that this kind of thing is now possible!!! Check out the following posting from Adobe (with help from Tata Consultancy Services (TCS)).
This year, the organizers — innovative industry event company Beyond Ordinary Events — turned to Tata Consultancy Services (TCS) to make the impossible “possible.” Leveraging Adobe generative AI technology across products like Adobe Premiere Pro and Acrobat, they distilled hours of video content in minutes, delivering timely dispatches to thousands of attendees throughout the conference.
…
For POSSIBLE ’24, Muche had an idea for a daily dispatch summarizing each day’s sessions so attendees wouldn’t miss a single insight. But timing would be critical. The dispatch needed to reach attendees shortly after sessions ended to fuel discussions over dinner and carry the excitement over to the next day.
The workflow started in Adobe Premiere Pro, with the writer opening a recording of each session and using the Speech to Text feature to automatically generate a transcript. They saved the transcript as a PDF file and opened it in Adobe Acrobat Pro. Then, using Adobe Acrobat AI Assistant, the writer asked for a session summary.
It was that fast and easy. In less than four minutes, one person turned a 30-minute session into an accurate, useful summary ready for review and publication.
By taking advantage of templates, the designer then added each AI-enabled summary to the newsletter in minutes. With just two people and generative AI technology, TCS accomplished the impossible — for the first time delivering an informative, polished newsletter to all 3,500 conference attendees just hours after the last session of the day.
Majors like hers are part of a broader wave of less conventional, avant-garde majors, in specialties such as artificial intelligence, that are taking root in American higher education, as colleges grapple with changes in the economy and a shrinking pool of students.
…
The trend underscores the distinct ways schools are responding to growing concerns over which degrees provide the best return on investment. As college costs soared to new heights in recent years, saddling many students with crippling loan debt, that discourse has only become increasingly fraught, raising the stakes for schools to prove their degrees leave students better prepared and employable.
“I’m a big believer in the liberal arts, but universities don’t get to print money,” he said. “If enrollment interests are shifting, they have to be able to hire faculty to teach in those areas. Money has to come from someplace.”
From DSC: Years ago, I remember having lunch with one of the finalists for the President position of a local university. He withdrew himself from the search because the institution’s culture would be like oil and water with him at the helm. He was very innovative, and this organization was not. I remember him saying, “The marketplace will determine what that organization ultimately does.” In other words, he was saying that higher education was market-driven. I agreed with him then, and I still agree with that perspective now.
AI is welcomed by those with dyslexia, and other learning issues, helping to mitigate some of the challenges associated with reading, writing, and processing information. Those who want to ban AI want to destroy the very thing that has helped most on accessibility. Here are 10 ways dyslexics, and others with issues around text-based learning, can use AI to support their daily activities and learning.
Are U.S. public schools lagging behind other countries like Singapore and South Korea in preparing teachers and students for the boom of generative artificial intelligence? Or are our educators bumbling into AI half-blind, putting students’ learning at risk?
Or is it, perhaps, both?
Two new reports, coincidentally released on the same day last week, offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.
Bite-Size AI Content for Faculty and Staff— from aiedusimplified.substack.com by Lance Eaton Another two 5-tips videos for faculty and my latest use case: creating FAQs!
Despite possible drawbacks, an exciting wondering has been—What if AI was a tipping point helping us finally move away from a standardized, grade-locked, ranking-forced, batched-processing learning model based on the make-believe idea of “the average man” to a learning model that meets every child where they are at and helps them grow from there?
I get that change is indescribably hard and there are risks. But the integration of AI in education isn’t a trend. It’s a paradigm shift that requires careful consideration, ongoing reflection, and a commitment to one’s core values. AI presents us with an opportunity—possibly an unprecedented one—to transform teaching and learning, making it more personalized, efficient, and impactful. How might we seize the opportunity boldly?
California and NVIDIA Partner to Bring AI to Schools, Workplaces — from govtech.com by Abby Sourwine The latest step in Gov. Gavin Newsom’s plans to integrate AI into public operations across California is a partnership with NVIDIA intended to tailor college courses and professional development to industry needs.
California Gov. Gavin Newsom and tech company NVIDIA joined forces last week to bring generative AI (GenAI) to community colleges and public agencies across the state. The California Community Colleges Chancellor’s Office (CCCCO), NVIDIA and the governor all signed a memorandum of understanding (MOU) outlining how each partner can contribute to education and workforce development, with the goal of driving innovation across industries and boosting their economic growth.
Listen to anything on the go with the highest-quality voices — from elevenlabs.io; via The Neuron
The ElevenLabs Reader App narrates articles, PDFs, ePubs, newsletters, or any other text content. Simply choose a voice from our expansive library, upload your content, and listen on the go.
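Programmatically, the same kind of narration is available through ElevenLabs' text-to-speech API. A hedged sketch that only assembles the HTTP request rather than sending it — the endpoint shape follows the public API docs, while the voice ID, model ID, and key below are placeholders:

```python
# Sketch of an ElevenLabs text-to-speech request, assembled but not sent.
# Endpoint shape follows the public API docs; voice_id, model_id, and the
# API key are placeholders and may differ across API versions.

def build_tts_request(text, voice_id, api_key):
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": api_key, "Content-Type": "application/json"},
        "json": {
            "text": text,
            "model_id": "eleven_multilingual_v2",  # multilingual narration
        },
    }

req = build_tts_request("Today's newsletter, read aloud.", "VOICE_ID", "API_KEY")
```

With the `requests` library this maps to `requests.post(req["url"], headers=req["headers"], json=req["json"])`; the response body is the generated audio.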
Per The Neuron
Some cool use cases:
Judy Garland can teach you biology while walking to class.
James Dean can narrate your steamy romance novel.
Sir Laurence Olivier can read you today’s newsletter—just paste the web link and enjoy!
Why it’s important: ElevenLabs shared how major Youtubers are using its dubbing services to expand their content into new regions with voices that actually sound like them (thanks to ElevenLabs’ ability to clone voices).
Oh, and BTW, it’s estimated that up to 20% of the population may have dyslexia. So providing people an option to listen to (instead of read) content, in their own language, wherever they go online can only help increase engagement and communication.
How Generative AI Improves Parent Engagement in K–12 Schools — from edtechmagazine.com by Alexander Slagg With its ability to automate and personalize communication, generative artificial intelligence is the ideal technological fix for strengthening parent involvement in students’ education.
As generative AI tools populate the education marketplace, the technology’s ability to automate complex, labor-intensive tasks and efficiently personalize communication may finally offer overwhelmed teachers a way to effectively improve parent engagement.
… These personalized engagement activities for students and their families can include local events, certification classes and recommendations for books and videos. “Family Feed might suggest courses, such as an Adobe certification,” explains Jackson. “We have over 14,000 courses that we have vetted and can recommend. And we have books and video recommendations for students as well.”
Including personalized student information and an engagement opportunity makes it much easier for parents to directly participate in their children’s education.
Will AI Shrink Disparities in Schools, or Widen Them? — from edsurge.com by Daniel Mollenkamp Experts predict new tools could boost teaching efficiency — or create an “underclass of students” taught largely through screens.
But as generative AI tools like ChatGPT sweep into mainstream business tools, promising to draft properly-formatted text from simple prompts and the click of a button, new questions are rising about what role writing centers should play — or whether they will be needed in the future.
…
Writing centers need to find a balance between introducing AI into the writing process and keeping the human support that every writer needs, argues Anna Mills, an English instructor at the College of Marin.
AI can serve as a supplement to a human tutor, Mills says. She encourages her students to use MyEssayFeedback, an AI tool that critiques the organization of an essay, the quality of evidence a student has included to support their thesis or the tone of the writing. Such tools can also evaluate research questions or review a student’s writing based on the rubric for the assignment, she says.
Advanced Voice Mode on ChatGPT features more natural, real-time conversations that pick up on and respond with emotion and non-verbal cues.
Advanced Voice Mode on ChatGPT is currently in a limited alpha. Please note that it may make mistakes, and access and rate limits are subject to change.
From DSC: Think about the impacts/ramifications of global, virtual, real-time language translations!!! This type of technology will create very powerful, new affordances in our learning ecosystems — as well as in business communications, with the various governments across the globe, and more!
Using Class Discussions as AI-Proof Assessments — from edutopia.org by Kara McPhillips Classroom discussions are one way to ensure that students are doing their own work in the age of artificial intelligence.
I admit it: Grading essays has never topped my list of teaching joys. Sure, the moments when a student finally nails a skill after months of hard work make me shout for joy, startling my nearby colleagues (sorry, Ms. Evans), but by and large, it’s hard work. Yet lately, as generative artificial intelligence (AI) headlines swirl in my mind, a new anxiety has crept into my grading life. I increasingly wonder, am I looking at their hard work?
Do you know when I don’t feel this way? During discussions. A ninth grader wiggling the worn corner of her text, leaning forward with excitement over what she’s cleverly noticed about Kambili, rarely makes me wonder, “Are these her ideas?”
While I’ve always thought discussion is important, AI is elevating that importance. This year, I wonder, how can I best leverage discussion in my classroom?
As the needs of the modern workforce evolve at an unprecedented rate, demand for durable, or "soft," skills is often eclipsing demand for sought-after technical skills in high-demand jobs across industry sectors, geographies, and educational levels.
Through research, collaboration, and feedback from more than 800 educators, workforce professionals, industry leaders, and policymakers, America Succeeds—a leading educational policy and advocacy group—has developed Durable Skills and the Durable Skills Advantage Framework to provide a common language for the most in-demand durable skills. With 85% of career success being dependent on durable skills, this framework bridges the gap between the skills students are taught in school and evolving workforce needs.
Over the last few years, we’ve been covering New Pathways, which we think of as a framework for school leaders and community members to create supports and systems that set students up for success in what’s next. This might be career exploration, client-connected projects, internships, or entrepreneurial experiences.
But what it really comes down to is connecting learners to real-world experiences and people and helping them articulate the skills that they gain in the process. Along the way, we began to talk a lot about green jobs. Many of the pre-existing pathways in secondary schools point towards CTE programs and trades, which are more in demand than they’ve been in decades.
This coincides with a pivotal moment in the arc of infrastructure redesign and development, one that heavily emphasizes clean energy trajectories and transferable skills. Many of these jobs we refer to as green pathways or requiring some of these green skills.
One leading organization in this space is the Interstate Renewable Energy Council or IREC. I got to sit down with Cynthia Finley, the Vice President of Workforce Strategy at IREC to talk about green pathways and what IREC is doing to increase awareness and exposure of green jobs and skills.
A recent report published by the Identity Theft Resource Center (ITRC) found that data from 2023 shows “an environment where bad actors are more effective, efficient and successful in launching attacks. The result is fewer victims (or at least fewer victim reports), but the impact on individuals and businesses is arguably more damaging.”
One of these attacks involves fake job postings.
The details: The ITRC said that victim reports of job and employment scams spiked some 118% in 2023. These scams were primarily carried out through LinkedIn and other job search platforms.
The bad actors here would either create fake (but professional-looking) job postings, profiles and websites or impersonate legitimate companies, all in the hope of luring victims into an interview process.
These actors would then move the conversation onto a third-party messaging platform and ask for identity verification information (driver's licenses, Social Security numbers, direct deposit information, etc.).
Hypernatural is an AI video platform that makes it easy to create beautiful, ready-to-share videos from anything. Stop settling for glitchy 3-second generated videos and boring stock footage. Turn your ideas, scripts, podcasts and more into incredible short-form videos in minutes.
OpenAI is committed to making intelligence as broadly accessible as possible. Today, we're announcing GPT-4o mini, our most cost-efficient small model. We expect GPT-4o mini will significantly expand the range of applications built with AI by making intelligence much more affordable. GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences in the LMSYS leaderboard. It is priced at 15 cents per million input tokens and 60 cents per million output tokens, an order of magnitude more affordable than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo.
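As a rough illustration of what that pricing means in practice, a small helper can estimate the cost of a single call. This is a sketch, not an official calculator; the `gpt4o_mini_cost` name is my own, and the rates are simply those quoted in the announcement ($0.15 per million input tokens, $0.60 per million output tokens).

```python
def gpt4o_mini_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost of one GPT-4o mini call at the announced rates."""
    INPUT_RATE = 0.15 / 1_000_000   # dollars per input token
    OUTPUT_RATE = 0.60 / 1_000_000  # dollars per output token
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 10,000-token prompt with a 1,000-token reply costs about a fifth of a cent:
print(f"${gpt4o_mini_cost(10_000, 1_000):.4f}")  # prints "$0.0021"
```

At these rates, even passing a large context (say, a full conversation history) into every request stays in fractions of a cent, which is what makes the use cases below economical.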
GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots).
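The "parallelize multiple model calls" pattern mentioned above can be sketched in a few lines. Note that `ask()` here is a hypothetical stand-in for an actual API call (which in practice would hit the chat completions endpoint with `model="gpt-4o-mini"`); only the fan-out structure is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def ask(prompt: str) -> str:
    # Hypothetical stand-in for a real GPT-4o mini API call.
    return f"answer to: {prompt}"

prompts = ["Summarize doc A", "Summarize doc B", "Summarize doc C"]

# Fan the independent calls out concurrently, then collect results in order.
with ThreadPoolExecutor(max_workers=3) as pool:
    answers = list(pool.map(ask, prompts))

print(answers[0])  # prints "answer to: Summarize doc A"
```

A cheap, fast model changes the economics of this pattern: chaining or fanning out many calls per user action becomes affordable in a way it wasn't with frontier-model pricing.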
Also see what this means from Ben's Bites, The Neuron, and as The Rundown AI asserts:
Why it matters: While it's not GPT-5, the price and capabilities of this mini-release significantly lower the barrier to entry for AI integrations — and mark a massive leap over GPT-3.5 Turbo. With models getting cheaper, faster, and more intelligent with each release, the perfect storm for AI acceleration is forming.