Top AI Tools of 2024 — from ai-supremacy.com by Michael Spencer (behind a paywall) Which AI tools stood out for me in 2024? My list.
Memorable AI Tools of 2024
Categories included:
Useful
Popular
Captures the zeitgeist of AI product innovation
Fun to try
Personally satisfying
NotebookLM
Perplexity
Claude
…
New “best” AI tool? Really? — from theneurondaily.com by Noah and Grant
PLUS: A free workaround to the “best” new AI…
What is Google’s Deep Research tool, and is it really “the best” AI research tool out there? … Here’s how it works: Think of Deep Research as a research team that can simultaneously analyze 50+ websites, compile findings, and create comprehensive reports—complete with citations.
Unlike asking ChatGPT to research for you, Deep Research shows you its research plan before executing, letting you edit the approach to get exactly what you need.
…
It’s currently free for the first month (though it’ll eventually be $20/month) when bundled with Gemini Advanced. Then again, Perplexity is always free…just saying.
We couldn’t just take J-Cal’s word for it, so we rounded up some other takes:
Our take: We then compared Perplexity, ChatGPT Search, and Deep Research (which we’re calling DR, or “The Docta” for short) on the robot capabilities revealed at CES:
An excerpt from today’s Morning Edition from Bloomberg
Global banks will cut as many as 200,000 jobs in the next three to five years—a net 3% of the workforce—as AI takes on more tasks, according to a Bloomberg Intelligence survey. Back, middle office and operations are most at risk. A reminder that Citi said last year that AI is likely to replace more jobs in banking than in any other sector. JPMorgan had a more optimistic view (from an employee perspective, at any rate), saying its AI rollout has augmented, not replaced, jobs so far.
Introducing the 2025 Wonder Media Calendar for tweens, teens, and their families/households. Designed by Sue Ellen Christian and her students in her Global Media Literacy class (in the fall 2024 semester at Western Michigan University), the calendar’s purpose is to help people create a new year filled with skills and smart decisions about their media use. This calendar is part of the ongoing Wonder Media Library.com project that includes videos, lesson plans, games, songs and more. The website is funded by a generous grant from the Institute of Museum and Library Services, in partnership with Western Michigan University and the Library of Michigan.
Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.
Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?
The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.
Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.
What Is Agentic AI? — from blogs.nvidia.com by Erik Pounds Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.
The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.
Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.
When it comes to classroom edtech use, digital tools have a drastically different impact when they are used actively instead of passively–a critical difference examined in the 2023-2024 Speak Up Research by Project Tomorrow.
Students also outlined their ideal active learning technologies:
AI Tutors: Hype or Hope for Education? — from educationnext.org by John Bailey and John Warner In a new book, Sal Khan touts the potential of artificial intelligence to address lagging student achievement. Our authors weigh in.
In Salman Khan’s new book, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing) (Viking, 2024), the Khan Academy founder predicts that AI will transform education by providing every student with a virtual personalized tutor at an affordable cost. Is Khan right? Is radically improved achievement for all students within reach at last? If so, what sorts of changes should we expect to see, and when? If not, what will hold back the AI revolution that Khan foresees? John Bailey, a visiting fellow at the American Enterprise Institute, endorses Khan’s vision and explains the profound impact that AI technology is already making in education. John Warner, a columnist for the Chicago Tribune and former editor for McSweeney’s Internet Tendency, makes the case that all the hype about AI tutoring is, as Macbeth quips, full of sound and fury, signifying nothing.
Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:
a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy
These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):
1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. At this point the model is capable enough that misusing it for dangerous work becomes genuinely attractive, so mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.
Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.
Should You Still Learn to Code in an A.I. World? — from nytimes.com Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.
Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.
For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.
There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).
It’s called Lovable, the “world’s first AI fullstack engineer.”
… Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.
Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.
As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.
From DSC: I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills development makes a great deal of sense:
This is mind-blowing!
NVIDIA has introduced Edify 3D, a 3D AI generator that lets us create high-quality 3D scenes using just a simple prompt. And all the assets are fully editable!
…you will see that they outline which skills you should consider mastering in 2025 if you want to stay on top of the latest career opportunities. They then list more information about the skills, how you apply the skills, and WHERE to get those skills.
I assert that in the future, people will be able to see this information on a 24x7x365 basis.
Which jobs are in demand?
What skills do I need to do those jobs?
WHERE do I get/develop those skills?
And that last part (about the WHERE do I develop those skills) will pull from many different institutions, people, companies, etc.
BUT PEOPLE are the key! Oftentimes, we need to — and prefer to — learn with others!
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
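For the curious, here is a minimal sketch of what a “computer use” request looks like when assembled for Anthropic’s Messages API. The model name, tool type string, and screen-size parameters below follow the public beta announcement and are assumptions that may change as the beta evolves:

```python
# Sketch: assembling the JSON body for a "computer use" request to
# Anthropic's Messages API (public beta). The tool type identifier and
# model name are as announced for the beta and may change over time.

def build_computer_use_request(instruction: str) -> dict:
    """Build the request body that asks Claude to operate a virtual screen."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [{
            "type": "computer_20241022",   # beta tool identifier
            "name": "computer",
            "display_width_px": 1024,      # dimensions of the screen Claude "sees"
            "display_height_px": 768,
        }],
        "messages": [{"role": "user", "content": instruction}],
    }

request = build_computer_use_request("Open the browser and search for flights.")
print(request["tools"][0]["type"])  # computer_20241022
```

In practice this body would be sent with the computer-use beta header, and the developer’s own code takes screenshots and executes the clicks and keystrokes Claude requests in its tool-use responses.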
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
The TLDR here is that, as useful as popular AI tools are for learners, as things stand they only enable us to take the very first steps on what is a long and complex journey of learning.
AI tools like ChatGPT 4o, Claude 3.5 & NotebookLM can help to give us access to information but (for now at least) the real work of learning remains in our – the humans’ – hands.
To which Anna Mills had a solid comment:
It might make a lot of sense to regulate generated audio to require some kind of watermark and/or metadata. Instructors who teach online and assign voice recordings, we need to recognize that these are now very easy and free to auto-generate. In some cases we are assigning this to discourage students from using AI to just autogenerate text responses, but audio is not immune.
The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises
Add sound to your video via text — Project Super Sonic:
New Dream Weaver — from aisecret.us Explore Adobe’s New Firefly Video Generative Model
Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.
Duolingo’s new Video Call feature represents a leap forward in language practice for learners. This AI-powered tool allows Duolingo Max subscribers to engage in spontaneous, realistic conversations with Lily, one of Duolingo’s most popular characters. The technology behind Video Call is designed to simulate natural dialogue and provides a personalized, interactive practice environment. Even beginner learners can converse in a low-pressure environment because Video Call is designed to adapt to their skill level. By offering learners the opportunity to converse in real-time, Video Call builds the confidence needed to communicate effectively in real-world situations. Video Call is available for Duolingo Max subscribers learning English, Spanish, and French.
Ello, the AI reading companion that aims to support kids struggling to read, launched a new product on Monday that allows kids to participate in the story-creation process.
Called “Storytime,” the new AI-powered feature helps kids generate personalized stories by picking from a selection of settings, characters, and plots. For instance, a story about a hamster named Greg who performed in a talent show in outer space.
Giving ELA Lessons a Little Edtech Boost — from edutopia.org by Julia Torres Common activities in English language arts classes such as annotation and note-taking can be improved through technology.
6 ELA Practices That Can Be Enhanced by EdTech
Book clubs.
Collective note-taking.
Comprehension checks.
Video lessons.
…and more
Using Edtech Tools to Differentiate Learning — from edutopia.org by Katie Novak and Mary E. Pettit Teachers can use tech tools to make it easier to give students choice about their learning, increasing engagement.
People started discussing what they could do with NotebookLM after Google launched Audio Overview, where you can listen to two hosts talking in-depth about the documents you upload. Here’s what it can do:
Summarization: Automatically generate summaries of uploaded documents, highlighting key topics and suggesting relevant questions.
Question Answering: Users can ask NotebookLM questions about their uploaded documents, and answers will be provided based on the information contained within them.
Idea Generation: NotebookLM can assist with brainstorming and developing new ideas.
Source Grounding: A big plus against AI chatbot hallucination, NotebookLM allows users to ground the responses in specific documents they choose.
…plus several other items
The posting also lists several ideas to try with NotebookLM such as:
Idea 2: Study Companion
Upload all your course materials and ask NotebookLM to turn them into Question-and-Answer format, a glossary, or a study guide.
Get a breakdown of the course materials to understand them better.
“Google’s AI note-taking app NotebookLM can now explain complex topics to you out loud”
With more immersive text-to-video and audio products soon available and the rise of apps like Suno AI, how we “experience” Generative AI is also changing from a chatbot of 2 years ago, to a more multi-modal educational journey. The AI tools on the research and curation side are also starting to reflect these advancements.
1. Upload a variety of sources for NotebookLM to use.
You can use …
websites
PDF files
links to websites
any text you’ve copied
Google Docs and Slides
even Markdown
You can’t link it to YouTube videos, but you can copy/paste the transcript (and maybe type a little context about the YouTube video before pasting the transcript).
2. Ask it to create resources.
3. Create an audio summary.
4. Chat with your sources.
5. Save (almost) everything.
I finally tried out Google’s newly-announced NotebookLM generative AI application. It provides a set of LLM-powered tools to summarize documents. I fed it my dissertation, and am surprised at how useful the output would be.
The most impressive tool creates a podcast episode, complete with dual hosts in conversation about the document. First – these are AI-generated hosts. Synthetic voices, speaking for synthetic hosts. And holy moly is it effective. Second – although I’d initially thought the conversational summary would be a dumb gimmick, it is surprisingly powerful.
4 Tips for Designing AI-Resistant Assessments — from techlearning.com by Steve Baule and Erin Carter As AI continues to evolve, instructors must modify their approach by designing meaningful, rigorous assessments.
As instructors work through revising assessments to be resistant to generation by AI tools with little student input, they should consider the following principles:
Incorporate personal experiences and local content into assignments
Ask students for multi-modal deliverables
Assess the developmental benchmarks for assignments and transition assignments further up Bloom’s Taxonomy
He added that he wants to avoid a global “AI divide” and that Google is creating a $120 million Global AI Opportunity Fund through which it will “make AI education and training available in communities around the world” in partnership with local nonprofits and NGOs.
Google on Thursday announced new updates to its AI note-taking and research assistant, NotebookLM, allowing users to get summaries of YouTube videos and audio files and even create sharable AI-generated audio discussions…
Today, I’m excited to share with you all the fruit of our effort at @OpenAI to create AI models capable of truly general reasoning: OpenAI’s new o1 model series! Let me explain. 1/ pic.twitter.com/aVGAkb9kxV
We’ve developed a new series of AI models designed to spend more time thinking before they respond. Here is the latest news on o1 research, product and other updates.
The wait is over. OpenAI has just released GPT-5, now called OpenAI o1.
It brings advanced reasoning capabilities and can generate entire video games from a single prompt.
Think of it as ChatGPT evolving from fast, intuitive thinking (System-1) to deeper, more deliberate… pic.twitter.com/uAMihaUjol
OpenAI Strawberry (o1) is out! We are finally seeing the paradigm of inference-time scaling popularized and deployed in production. As Sutton said in the Bitter Lesson, there’re only 2 techniques that scale indefinitely with compute: learning & search. It’s time to shift focus to… pic.twitter.com/jTViQucwxr
The new AI model, called o1-preview (why are the AI companies so bad at names?), lets the AI “think through” a problem before solving it. This lets it address very hard problems that require planning and iteration, like novel math or science questions. In fact, it can now beat human PhD experts in solving extremely hard physics problems.
To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.
What is the point of Super Realistic AI? — from Heather Cooper who runs Visually AI on Substack
The arrival of super realistic AI image generation, powered by models like Midjourney, FLUX.1, and Ideogram, is transforming the way we create and use visual content.
Recently, many creators (myself included) have been exploring super realistic AI more and more.
But where can this actually be used?
Super realistic AI image generation will have far-reaching implications across various industries and creative fields. Its importance stems from its ability to bridge the gap between imagination and visual representation, offering multiple opportunities for innovation and efficiency.
Today, we’re introducing Audio Overview, a new way to turn your documents into engaging audio discussions. With one click, two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.
Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.
Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content.
We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.
This week, as I kick off the 20th cohort of my AI-Learning Design bootcamp, I decided to do some analysis of the work habits of the hundreds of amazing AI-embracing instructional designers who I’ve worked with over the last year or so.
My goal was to answer the question: which AI tools do we use most in the instructional design process, and how do we use them?
Here’s where we are in September, 2024:
…
Developing Your Approach to Generative AI — from scholarlyteacher.com by Caitlin K. Kirby, Min Zhuang, Imari Cheyne Tetu, & Stephen Thomas (Michigan State University)
As generative AI becomes integrated into workplaces, scholarly work, and students’ workflows, we have the opportunity to take a broad view of the role of generative AI in higher education classrooms. Our guiding questions are meant to serve as a starting point to consider, from each educator’s initial reaction and preferences around generative AI, how their discipline, course design, and assessments may be impacted, and to have a broad view of the ethics of generative AI use.
AI technology tools hold remarkable promise for providing more accessible, equitable, and inclusive learning experiences for students with disabilities.