In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
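For developers curious what "available today on the API" looks like in practice, here is a minimal sketch of a computer-use request payload. The tool type string, beta flag, and model name below are taken from Anthropic's announcement at the time; as the feature is in beta, all of them may change, so treat this as an illustration rather than a stable reference:

```python
# Sketch of an Anthropic "computer use" request (public beta).
# The identifiers "computer_20241022", "computer-use-2024-10-22", and
# "claude-3-5-sonnet-20241022" reflect the October 2024 beta and may change.

computer_tool = {
    "type": "computer_20241022",     # virtual screen/mouse/keyboard tool
    "name": "computer",
    "display_width_px": 1024,        # resolution of the screen Claude "sees"
    "display_height_px": 768,
}

request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [computer_tool],
    "messages": [
        {"role": "user", "content": "Open the spreadsheet and sum column B."}
    ],
}

# The call itself would go through the Messages API with the beta flag, e.g.
#   client.beta.messages.create(**request, betas=["computer-use-2024-10-22"])
# Claude then responds with tool_use blocks (take a screenshot, click at x/y,
# type text) that YOUR code must execute and report back in a loop — the model
# never touches the machine directly.
```

The design choice worth noting: the model only proposes actions; the developer's own agent loop executes them, which is also where safety checks can be inserted.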
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) developed by Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
From DSC: I’m really hoping that a variety of AI-based tools, technologies, and services will significantly help with our Access to Justice (#A2J) issues here in America. So this article by Kristen Sonday at Thomson Reuters caught my eye.
***
AI for Legal Aid: How to empower clients in need — from thomsonreuters.com by Kristen Sonday In this second part of this series, we look at how AI-driven technologies can empower those legal aid clients who may be most in need
It’s hard to overstate the impact that artificial intelligence (AI) is expected to have on helping low-income individuals achieve better access to justice. And for those legal services organizations (LSOs) that serve on the front lines, too often without sufficient funding, staff, or technology, AI presents perhaps their best opportunity to close the justice gap. With the ability of AI-driven tools to streamline agency operations, minimize administrative work, more effectively reallocate talent, and allow LSOs to more effectively serve clients, the implementation of these tools is essential.
Innovative LSOs leading the way
Already many innovative LSOs are taking the lead, utilizing new technology to complete tasks from complex analysis to AI-driven legal research. Here are two compelling examples of how AI is already helping LSOs empower low-income clients in need.
Criminal charges, even those eligible for simple, free expungement, can prevent someone from obtaining housing or employment. This is a simple barrier to overcome, if only help is available.
… AI offers the capacity to provide quick, accurate information to a vast audience, particularly to those in urgent need. AI can also help reduce the burden on our legal staff…
Everything you thought you knew about being a lawyer is about to change.
Legal Dive spoke with Podinic about the transformative nature of AI, including the financial risks to lawyers’ billing models and how it will force general counsel and chief legal officers to consider how they’ll use the time AI is expected to free up for the lawyers on their teams when they no longer have to do administrative tasks and low-level work.
Traditionally, law firms have been wary of adopting technologies that could compromise data privacy and legal accuracy; however, attitudes are changing
Despite concerns about technology replacing humans in the legal sector, legaltech is more likely to augment the legal profession than replace it entirely
Generative AI will accelerate digital transformation in the legal sector
The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises
Add sound to your video via text — Project Super Sonic:
New Dream Weaver — from aisecret.us Explore Adobe’s New Firefly Video Generative Model
Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.
From DSC: Great…we have another tool called Canvas. Or did you say Canva?
Introducing canvas — from OpenAI A new way of working with ChatGPT to write and code
We’re introducing canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing you and ChatGPT to collaborate on a project. This early beta introduces a new way of working together—not just through conversation, but by creating and refining ideas side by side.
Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Starting today we’re rolling out canvas to ChatGPT Plus and Team users globally. Enterprise and Edu users will get access next week. We also plan to make canvas available to all ChatGPT Free users when it’s out of beta.
The way Americans buy homes is changing dramatically.
New industry rules about how home buyers’ real estate agents get paid are prompting a reckoning among housing experts and the tech sector. Many house hunters who are already stretched thin by record-high home prices and closing costs must now decide whether, and how much, to pay an agent.
A 2-3% commission on the median home price of $416,700 could be well over $10,000, and in a world where consumers are accustomed to using technology for everything from taxes to tickets, many entrepreneurs see an opportunity to automate away the middleman, even as some consumer advocates say not so fast.
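The arithmetic behind that commission claim is easy to check; the only inputs are the 2–3% rate and the $416,700 median price cited above:

```python
# Agent commission on the median U.S. home price cited in the article.
median_price = 416_700

low = 0.02 * median_price   # 2% commission
high = 0.03 * median_price  # 3% commission

print(f"2%: ${low:,.0f}")   # prints "2%: $8,334"
print(f"3%: ${high:,.0f}")  # prints "3%: $12,501"
```

So the low end of the range is roughly $8,300 and the 3% end clears $12,500, which is where the "well over $10,000" figure comes from; on homes above the median, even 2% exceeds it.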
The Great Mismatch — from the-job.beehiiv.com by Paul Fain Artificial intelligence could threaten millions of decent-paying jobs held by women without degrees.
Women in administrative and office roles may face the biggest AI automation risk, find Brookings researchers armed with data from OpenAI. Also, why Indiana could make the Swiss apprenticeship model work in this country, and how learners get disillusioned when a certificate doesn’t immediately lead to a good job.
…
A major new analysis from the Brookings Institution, using OpenAI data, found that the most vulnerable workers don’t look like the rail and dockworkers who have recaptured the national spotlight. Nor are they the creatives—like Hollywood’s writers and actors—that many wealthier knowledge workers identify with. Rather, they’re predominantly women in the 19M office support and administrative jobs that make up the first rung of the middle class.
“Unfortunately the technology and automation risks facing women have been overlooked for a long time,” says Molly Kinder, a fellow at Brookings Metro and lead author of the new report. “Most of the popular and political attention to issues of automation and work centers on men in blue-collar roles. There is far less awareness about the (greater) risks to women in lower-middle-class roles.”
introducing swarm: an experimental framework for building, orchestrating, and deploying multi-agent systems. https://t.co/97n4fehmtM
Is this how AI will transform the world over the next decade? — from futureofbeinghuman.com by Andrew Maynard Anthropic’s CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It’s audacious, compelling, and a must-read for anyone working at the intersection of AI and society.
But if Amodei’s essay is approached as a conversation starter rather than a manifesto — which I think it should be — it’s hard to see how it won’t lead to clearer thinking around how we successfully navigate the coming AI transition.
Given the scope of the paper, it’s hard to write a response that isn’t as long as or longer than the original. Because of this, I’d strongly encourage anyone who’s looking at how AI might transform society to read the original — it’s well written, and easier to navigate than its length might suggest.
That said, I did want to pull out a few things that struck me as particularly relevant and important — especially within the context of navigating advanced technology transitions.
And speaking of that essay, here’s a summary from The Rundown AI:
Anthropic CEO Dario Amodei just published a lengthy essay outlining an optimistic vision for how AI could transform society within 5-10 years of achieving human-level capabilities, touching on longevity, politics, work, the economy, and more.
The details:
Amodei believes that by 2026, ‘powerful AI’ smarter than a Nobel Prize winner across fields, with agentic and multimodal capabilities, will be possible.
He also predicted that AI could compress 100 years of scientific progress into 10 years, curing most diseases and doubling the human lifespan.
The essay argued AI could strengthen democracy by countering misinformation and providing tools to undermine authoritarian regimes.
The CEO acknowledged potential downsides, including job displacement — but believes new economic models will emerge to address this.
He envisions AI driving unprecedented economic growth but emphasizes ensuring AI’s benefits are broadly distributed.
Why it matters:
As the CEO of what is seen as the ‘safety-focused’ AI lab, Amodei paints a utopia-level optimistic view of where AI will head over the next decade. This thought-provoking essay serves as both a roadmap for AI’s potential and a call to action to ensure the responsible development of technology.
However, most workers remain unaware of these efforts. Only a third (33%) of all U.S. employees say their organization has begun integrating AI into their business practices, with the highest percentage in white-collar industries (44%).
… White-collar workers are more likely to be using AI. White-collar workers are, by far, the most frequent users of AI in their roles. While 81% of employees in production/frontline industries say they never use AI, only 54% of white-collar workers say they never do and 15% report using AI weekly.
… Most employees using AI use it for idea generation and task automation. Among employees who say they use AI, the most common uses are to generate ideas (41%), to consolidate information or data (39%), and to automate basic tasks (39%).
Selling like hotcakes: The extraordinary demand for Blackwell GPUs illustrates the need for robust, energy-efficient processors as companies race to implement more sophisticated AI models and applications. The coming months will be critical to Nvidia as the company works to ramp up production and meet the overwhelming requests for its latest product.
Here’s my AI toolkit — from wondertools.substack.com by Jeremy Caplan and Nikita Roy How and why I use the AI tools I do — an audio conversation
1. What are two useful new ways to use AI?
AI-powered research: Type a detailed search query into Perplexity instead of Google to get a quick, actionable summary response with links to relevant information sources. Read more of my take on why Perplexity is so useful and how to use it.
Notes organization and analysis: Tools like NotebookLM, Claude Projects, and Mem can help you make sense of huge repositories of notes and documents. Query or summarize your own notes and surface novel connections between your ideas.
This article seeks to apply some lessons from brand management to learning design at a high level. Throughout the rest of this article, it is essential to remember that the context is an autonomous, interactive learning experience. The experience is created adaptively by Gen AI or (soon enough) by agents, not by rigid scripts. It may be that an AI will choose to present prewritten texts or prerecorded videos from a content library according to the human users’ responses or questions. Still, the overall experience will be different for each user. It will be more like a conversation than a book. …
In summary, while AI chatbots have the potential to enhance learning experiences, their acceptance and effectiveness depend on several factors, including perceived usefulness, ease of use, trust, relational factors, perceived risk, and enjoyment.
Personalization and building trust are essential for maintaining user engagement and achieving positive learning outcomes. The right “voice” for autonomous AI or a chatbot can enhance trust by making interactions more personal, consistent, and empathetic.
AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh The market for AI products and services could reach between $780 billion and $990 billion by 2027.
At a Glance
The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.
Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”
And on a somewhat related note (i.e., emerging technologies), also see the following two postings:
Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.
Key Takeaways
The potential of robots has long fascinated humans and has even led to a booming field of robot-assisted surgery.
Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.
“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.
Obviously this workflow works just as well for meetings as it does for lectures. Stay present in the meeting with no screens and just write down the key points with pen and paper. Then let NotebookLM assemble the detailed summary based on your high-level notes. https://t.co/fZMG7LgsWG
In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.
In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.
…
Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.
AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper The latest AI video models that deliver results
AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.
No worries – we do have plenty of generative AI video tools we can use right now.
Kling AI launched its updated v1.5 and the quality of image or text to video is impressive.
Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
…plus several more
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week.
While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.
RIP To Human First Pass Document Review? — from abovethelaw.com by Joe Patrice Using actual humans to perform an initial review isn’t gone yet, but the days are numbered.
Lawyers are still using real, live people to take a first crack at document review, but much like the “I’m not dead yet” guy from Monty Python and the Holy Grail, it’s a job that will be stone dead soon. Because there are a lot of deeply human tasks that AI will struggle to replace, but getting through a first run of documents doesn’t look like one of them.
At last week’s Relativity Fest, the star of the show was obviously Relativity aiR for Review, which the company moved to general availability. In conjunction with the release, Relativity pointed to impressive results the product racked up during the limited availability period, including Cimplifi reporting that the product cut review time in half and JND finding a 60 percent cut in costs.
When it comes to efficiencies, automation plays a big role. In a solo or small firm, resources come at a premium. Learn to reduce wasted input through standardized, repeatable operating procedures and automation. (There are even tech products that help you create written standard processes by learning from and organizing the work you’re already doing.)
Imagine speaking into an app as you “brain dump” and having those thoughts come out organized and notated for later use. Imagine dictating legal work into an app and having AI organize your dictation, even correct it. You don’t need to type everything in today’s tech world. Maximize downtime.
It’s all about training yourself to think “automation first.” Even a virtual assistant (VA) located in another country can fill gaps in your practice, learn your preferences, match your brand, and help you be your most efficient you without hiring a full-time employee. Today’s most successful law firms are high-tech hubs. Don’t let fear of the unknown hold you back.
Several of our regular Legaltech Week panelists were in Chicago for RelativityFest last week, so we took the opportunity to get together and broadcast our show live from the same room (instead of Zoom squares).
If you missed it Friday, here’s the video recording.
Today (24 September) LexisNexis has released a new report – Need for Speedier Legal Services sees AI Adoption Accelerate – which reveals a sharp increase in the number of lawyers using generative AI for legal work.
The survey of 800+ UK legal professionals at firms and in-house teams found 41% are currently using AI for work, up from 11% in July 2023. Lawyers with plans to use AI for legal work in the near future also jumped from 28% to 41%, while those with no plans to adopt AI dropped from 61% to 15%. The survey found that 39% of private practice lawyers now expect to adjust their billing practices due to AI, up from 18% in January 2024.
‘What if legal review cost just $1? What if legal review was 1,000X cheaper than today?’ he muses.
And, one could argue we are getting there already – at least in theory. How much does it actually cost to run a genAI tool that hits the accuracy levels you require over a relatively mundane contract in order to find top-level information? If token costs drop massively in the years ahead and tech licence costs are shared out across a major legal business, then what is the cost to the firm per document?
Of course, there is review and there is review. A very deep and thorough review, with lots of redlining, back and forth negotiation, and redrafting by top lawyers is another thing. But, a ‘quick once-over’? It feels like we are already at the ‘pennies on the dollar’ stage for that.
In some cases, the companies on the convergence path are just getting started and only offer a few additional skills (so far); in other cases, large companies with diverse histories have almost the same multi-skill offering across many areas.
Understanding behavior as communication: A teacher’s guide — from understood.org by Amanda Morin Figuring out the function of, or the reasons behind, a behavior is critical for finding an appropriate response or support. Knowing the function can also help you find ways to prevent behavior issues in the future.
Think of the last time a student called out in class, pushed in line, or withdrew by putting their head down on their desk. What was their behavior telling you?
In most cases, behavior is a sign they may not have the skills to tell you what they need. Sometimes, students may not even know what they need. What are your students trying to communicate? What do they need, and how can you help?
One way to reframe your thinking is to respond to the student, not the behavior. Start by considering the life experiences that students bring to the classroom.
Some students who learn and think differently have negative past experiences with teachers and school. Others may come from cultures in which speaking up for their needs in front of the whole class isn’t appropriate.
Black girls face more discipline and more severe punishments in public schools than girls from other racial backgrounds, according to a groundbreaking new report set for release Thursday by a congressional watchdog.
The report, shared exclusively with NPR, took nearly a year-and-a-half to complete and comes after several Democratic congressional members requested the study.
The XQ Institute shares this mindset as part of our mission to reimagine the high school learning experience so it’s more relevant and engaging for today’s learners, while better preparing them for the future. We see AI as a tool with transformative potential for educators and makers to leverage — but only if it’s developed and implemented with ethics, transparency and equity at the forefront. That’s why we’re building partnerships between educators and AI developers to ensure that products are shaped by the real needs and challenges of students, teachers and schools. Here’s how we believe all stakeholders can embrace the Department’s recommendations through ongoing collaborations with tech leaders, educators and students alike.
…lead me to the XQ Institute, and I very much like what I’m initially seeing! Here are some excerpts from their website:
Transforming high school isn’t easy, but it is possible. Educator @nwallacecxh from XQ’s @CrosstownHigh shares real-world strategies to make learning relevant and meaningful. Ready to see how it’s done? https://t.co/xD8hkP33TH
People started discussing what they could do with NotebookLM after Google launched the audio overview, where you can listen to two hosts talking in depth about the documents you upload. Here is what it can do:
Summarization: Automatically generate summaries of uploaded documents, highlighting key topics and suggesting relevant questions.
Question Answering: Users can ask NotebookLM questions about their uploaded documents, and answers will be provided based on the information contained within them.
Idea Generation: NotebookLM can assist with brainstorming and developing new ideas.
Source Grounding: A big plus against AI chatbot hallucination, NotebookLM allows users to ground the responses in specific documents they choose.
…plus several other items
The posting also lists several ideas to try with NotebookLM such as:
Idea 2: Study Companion
Upload all your course materials and ask NotebookLM to turn them into Question-and-Answer format, a glossary, or a study guide.
Get a breakdown of the course materials to understand them better.
“Google’s AI note-taking app NotebookLM can now explain complex topics to you out loud”
With more immersive text-to-video and audio products soon available and the rise of apps like Suno AI, how we “experience” Generative AI is also changing from a chatbot of 2 years ago, to a more multi-modal educational journey. The AI tools on the research and curation side are also starting to reflect these advancements.
1. Upload a variety of sources for NotebookLM to use.
You can use …
websites
PDF files
links to websites
any text you’ve copied
Google Docs and Slides
even Markdown
You can’t link it to YouTube videos, but you can copy/paste the transcript (and maybe type a little context about the YouTube video before pasting the transcript).
2. Ask it to create resources.
3. Create an audio summary.
4. Chat with your sources.
5. Save (almost) everything.
I finally tried out Google’s newly-announced NotebookLM generative AI application. It provides a set of LLM-powered tools to summarize documents. I fed it my dissertation, and am surprised at how useful the output would be.
The most impressive tool creates a podcast episode, complete with dual hosts in conversation about the document. First – these are AI-generated hosts. Synthetic voices, speaking for synthetic hosts. And holy moly is it effective. Second – although I’d initially thought the conversational summary would be a dumb gimmick, it is surprisingly powerful.
4 Tips for Designing AI-Resistant Assessments — from techlearning.com by Steve Baule and Erin Carter As AI continues to evolve, instructors must modify their approach by designing meaningful, rigorous assessments.
As instructors work through revising assessments to be resistant to generation by AI tools with little student input, they should consider the following principles:
Incorporate personal experiences and local content into assignments
Ask students for multi-modal deliverables
Assess the developmental benchmarks for assignments and transition assignments further up Bloom’s Taxonomy
He added that he wants to avoid a global “AI divide” and that Google is creating a $120 million Global AI Opportunity Fund through which it will “make AI education and training available in communities around the world” in partnership with local nonprofits and NGOs.
Google on Thursday announced new updates to its AI note-taking and research assistant, NotebookLM, allowing users to get summaries of YouTube videos and audio files and even create sharable AI-generated audio discussions…
As we navigate the rapidly evolving landscape of artificial intelligence in education, a troubling trend has emerged. What began as cautious skepticism has calcified into rigid opposition. The discourse surrounding AI in classrooms has shifted from empirical critique to categorical rejection, creating a chasm between the potential of AI and its practical implementation in education.
This hardening of attitudes comes at a significant cost. While educators and policymakers debate, students find themselves caught in the crossfire. They lack safe, guided access to AI tools that are increasingly ubiquitous in the world beyond school walls. In the absence of formal instruction, many are teaching themselves to use these tools, often in less than productive ways. Others live in a state of constant anxiety, fearing accusations of AI reliance in their work. These are just a few symptoms of an overarching educational culture that has become resistant to change, even as the world around it transforms at an unprecedented pace.
Yet, as this calcification sets in, I find myself in a curious position: the more I thoughtfully integrate AI into my teaching practice, the more I witness its potential to enhance and transform education.
The urgency to integrate AI competencies into education is about preparing students not just to adapt to inevitable changes but to lead the charge in shaping an AI-augmented world. It’s about equipping them to ask the right questions, innovate responsibly, and navigate the ethical quandaries that come with such power.
AI in education should augment and complement their aptitude and expertise, to personalize and optimize the learning experience, and to support lifelong learning and development.

AI in education should be a national priority and a collaborative effort among all stakeholders, to ensure that AI is designed and deployed in an ethical, equitable, and inclusive way that respects the diversity and dignity of all learners and educators and that promotes the common good and social justice.

AI in education should be about the production of AI, not just the consumption of AI, meaning that learners and educators should have the opportunity to learn about AI, to participate in its creation and evaluation, and to shape its impact and direction.
Top Software Engineering Newsletters in 2024 — from ai-supremacy.com by Michael Spencer
Including a very select few ML, AI and product Newsletters into the mix for Software Engineers.
This is an article specifically for the software engineers and developers among you.
In the past year (2023–2024), professionals have been finding more value in newsletters than ever before (especially on Substack).
As working from home took off, the nature of mentorship and skill acquisition also evolved. Newsletters with pragmatic advice on our careers, it turns out, are super valuable. This article is a resource list. Are you a software developer, do you work with one, or do you know someone who is or wants to be?
Today we rolled out OpenAI o1-preview and o1-mini to all ChatGPT Plus/Team users & Tier 5 developers in the API.
o1 marks the start of a new era in AI, where models are trained to “think” before answering through a private chain of thought. The more time they take to think, the…
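For the developers mentioned above, a request to the o1 models goes through the same Chat Completions interface as earlier models; the "thinking" happens server-side in a private chain of thought. A hedged sketch, assuming the model name "o1-preview" from the announcement; no request is actually sent here, the payload is only assembled so its shape can be inspected offline (sending it would require the `openai` client and an API key).

```python
# Hedged sketch: the shape of a Chat Completions request for an o1
# model. The model name "o1-preview" comes from the announcement; no
# network call is made -- we only build the payload dict.

def build_o1_request(prompt: str, model: str = "o1-preview") -> dict:
    """Assemble a Chat Completions-style payload.

    o1 models do their "thinking" server-side in a private chain of
    thought, so the request itself is the familiar messages format.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_o1_request("How many prime numbers are below 100?")
print(payload["model"])  # o1-preview
```

The point of the sketch is that, from the caller's side, nothing about the request changes; the extra reasoning time is entirely on the model's side.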
This is @Google‘s wow moment in AI.
Notebook LM can generate engaging podcasts on your uploaded material for FREE.
I tested it by uploading the latest issue of the Tensor; it generated a podcast for me within 2 minutes.
“Who to follow in AI” in 2024? — from ai-supremacy.com by Michael Spencer
Part III – #35-55 – I combed the internet and found the best sources of AI insights, education and articles.
LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts
This list features both some of the best Newsletters on AI and people who make LinkedIn posts about AI papers, advances and breakthroughs. In today’s article we’ll be meeting the first 19-34, in a list of 180+.
Newsletter Writers
YouTubers
Engineers
Researchers who write
Technologists who are Creators
AI Educators
AI Evangelists of various kinds
Futurism writers and authors
I have been sharing the list in reverse chronological order on LinkedIn here.
Inside Google’s 7-Year Mission to Give AI a Robot Body — from wired.com by Hans Peter Brondmo
As the head of Alphabet’s AI-powered robotics moonshot, I came to believe many things. For one, robots can’t come soon enough. For another, they shouldn’t look like us.
Learning to Reason with LLMs — from openai.com
We are introducing OpenAI o1, a new large language model trained with reinforcement learning to perform complex reasoning. o1 thinks before it answers—it can produce a long internal chain of thought before responding to the user.
As a preview of the upcoming Summit interview, here are Khan’s views on two critical questions, edited for space and clarity:
What are the enduring human work skills in a world with ever-advancing AI? Some people say students should study liberal arts. Others say deep domain expertise is the key to remaining professionally relevant. Others say you need the skills of a manager to be able to delegate to AI. What do you think are the skills or competencies that ensure continued professional relevance and employability?
A lot of organizations are thinking about skills-based approaches to their talent. It involves questions like, ‘Does someone know how to do this thing or not?’ And what are the ways in which they can learn it and have some accredited way to know they actually have done it? That is one of the ways in which people use Khan Academy. Do you have a view of skills-based approaches within workplaces, and any thoughts on how AI tutors and training fit within that context?