Top AI Tools of 2024 — from ai-supremacy.com by Michael Spencer (behind a paywall) Which AI tools stood out for me in 2024? My list.
Memorable AI Tools of 2024
Categories included:
Useful
Popular
Captures the zeitgeist of AI product innovation
Fun to try
Personally satisfying
NotebookLM
Perplexity
Claude
…
New “best” AI tool? Really? — from theneurondaily.com by Noah and Grant
PLUS: A free workaround to the “best” new AI…
What is Google’s Deep Research tool, and is it really “the best” AI research tool out there? … Here’s how it works: Think of Deep Research as a research team that can simultaneously analyze 50+ websites, compile findings, and create comprehensive reports—complete with citations.
Unlike asking ChatGPT to research for you, Deep Research shows you its research plan before executing, letting you edit the approach to get exactly what you need.
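To make that "show the plan, then execute" workflow concrete, here is a minimal, hypothetical sketch of such a research loop in Python. The llm and search_web helpers are placeholders of my own, not Google's actual Deep Research implementation.

```python
# A minimal, hypothetical plan-then-execute research loop, just to illustrate
# the workflow described above. llm() and search_web() are placeholders,
# not Google's actual Deep Research implementation.

def llm(prompt: str) -> str:
    """Placeholder for a call to any large language model."""
    raise NotImplementedError

def search_web(query: str) -> list[str]:
    """Placeholder for a web search returning page snippets."""
    raise NotImplementedError

def deep_research(question: str) -> str:
    # 1. Draft a research plan and let the user edit it before executing.
    plan = llm(f"Break this question into 5-10 research sub-questions:\n{question}")
    print("Proposed research plan:\n" + plan)
    plan = input("Edit the plan (or press Enter to accept): ") or plan

    # 2. Execute the plan: search and take notes for each sub-question.
    notes = []
    for sub_question in plan.splitlines():
        snippets = search_web(sub_question)
        notes.append(llm(f"Summarize these sources for '{sub_question}', keeping citations:\n{snippets}"))

    # 3. Compile the notes into a structured, cited report.
    return llm("Write a comprehensive report with citations from these notes:\n\n" + "\n\n".join(notes))
```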
…
It’s currently free for the first month (though it’ll eventually be $20/month) when bundled with Gemini Advanced. Then again, Perplexity is always free…just saying.
We couldn’t just take J-Cal’s word for it, so we rounded up some other takes:
Our take: We then compared Perplexity, ChatGPT Search, and Deep Research (which we’re calling DR, or “The Docta” for short) on a research task about the robot capabilities revealed at CES:
An excerpt from today’s Morning Edition from Bloomberg
Global banks will cut as many as 200,000 jobs in the next three to five years—a net 3% of the workforce—as AI takes on more tasks, according to a Bloomberg Intelligence survey. Back office, middle office, and operations roles are most at risk. A reminder that Citi said last year that AI is likely to replace more jobs in banking than in any other sector. JPMorgan had a more optimistic view (from an employee perspective, at any rate), saying its AI rollout has augmented, not replaced, jobs so far.
NVIDIA’s Apple moment?! — from theneurondaily.com by Noah Edelman and Grant Harvey PLUS: How to level up your AI workflows for 2025…
NVIDIA wants to put an AI supercomputer on your desk (and it only costs $3,000). … And last night at CES 2025, Jensen Huang announced phase two of this plan: Project DIGITS, a $3K personal AI supercomputer that runs 200B parameter models from your desk. Guess we now know why Apple recently developed an NVIDIA allergy…
… But NVIDIA doesn’t just want its “Apple PC moment”… it also wants its OpenAI moment. NVIDIA also announced Cosmos, a platform for building physical AI (think: robots and self-driving cars)—which Jensen Huang calls “the ChatGPT moment for robotics.”
NVIDIA is bringing AI from the cloud to personal devices and enterprises, covering all computing needs from developers to ordinary users.
At CES 2025, which opened this morning, NVIDIA founder and CEO Jensen Huang delivered a milestone keynote laying out the future of AI and computing. From the token, the core concept behind generative AI, to the launch of the new Blackwell-architecture GPUs and an AI-driven digital future, the speech is likely to have a far-reaching impact across the industry.
From DSC: I’m posting this next item (involving Samsung) as it relates to how TVs continue to change within our living rooms. AI is finding its way into our TVs…the ramifications of this remain to be seen.
The Rundown: Samsung revealed its new “AI for All” tagline at CES 2025, introducing a comprehensive suite of new AI features and products across its entire ecosystem — including new AI-powered TVs, appliances, PCs, and more.
The details:
Vision AI brings features like real-time translation, the ability to adapt to user preferences, AI upscaling, and instant content summaries to Samsung TVs.
Several of Samsung’s new Smart TVs will also have Microsoft Copilot built in, and the company also teased a potential AI partnership with Google.
Samsung also announced the new line of Galaxy Book5 AI PCs, with new capabilities like AI-powered search and photo editing.
AI is also being infused into Samsung’s laundry appliances, art frames, home security equipment, and other devices within its SmartThings ecosystem.
Why it matters: Samsung’s web of products is getting the AI treatment — and we’re about to be surrounded by AI-infused appliances in every aspect of our lives. The edge will be the ability to sync it all together under one central hub, which could position Samsung as the go-to for the inevitable transition from smart to AI-powered homes.
***
“Samsung sees TVs not as one-directional devices for passive consumption but as interactive, intelligent partners that adapt to your needs,” said SW Yong, President and Head of Visual Display Business at Samsung Electronics. “With Samsung Vision AI, we’re reimagining what screens can do, connecting entertainment, personalization, and lifestyle solutions into one seamless experience to simplify your life.” — from Samsung
The following framework I offer for defining, understanding, and preparing for agentic AI blends foundational work in computer science with insights from cognitive psychology and speculative philosophy. Each of the seven levels represents a step-change in technology, capability, and autonomy. The framework expresses increasing opportunities to innovate, thrive, and transform in a data-fueled and AI-driven digital economy.
The Rise of AI Agents and Data-Driven Decisions — from devprojournal.com by Mike Monocello Fueled by generative AI and machine learning advancements, we’re witnessing a paradigm shift in how businesses operate and make decisions.
AI Agents Enhance Generative AI’s Impact Burley Kawasaki, Global VP of Product Marketing and Strategy at Creatio, predicts a significant leap forward in generative AI. “In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.”
Everyone’s talking about the potential of AI agents in 2025 (and don’t get me wrong, it’s really significant), but there’s a crucial detail that keeps getting overlooked: the gap between current capabilities and practical reliability.
Here’s the reality check that most predictions miss: AI agents currently operate at about 80% accuracy (according to Microsoft’s AI CEO). Sounds impressive, right? But here’s the thing – for businesses and users to actually trust these systems with meaningful tasks, we need 99% reliability. That’s not just a 19% gap – it’s the difference between an interesting tech demo and a business-critical tool.
This matters because it completely changes how we should think about AI agents in 2025. While major players like Microsoft, Google, and Amazon are pouring billions into development, they’re all facing the same fundamental challenge – making them work reliably enough that you can actually trust them with your business processes.
Think about it this way: Would you trust an assistant who gets things wrong 20% of the time? Probably not. But would you trust one who makes a mistake only 1% of the time, especially if they could handle repetitive tasks across your entire workflow? That’s a completely different conversation.
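One way to see why that 80%-versus-99% gap matters so much: errors compound across multi-step workflows. Here's a quick back-of-the-envelope calculation (my own illustration, assuming each step succeeds independently):

```python
# Back-of-the-envelope: how per-step accuracy compounds over a multi-step
# workflow, assuming each step succeeds independently (my assumption).
for per_step_accuracy in (0.80, 0.99):
    for steps in (1, 5, 10, 20):
        whole_workflow = per_step_accuracy ** steps
        print(f"{per_step_accuracy:.0%} per step over {steps:>2} steps "
              f"-> {whole_workflow:.1%} chance the whole workflow succeeds")
```

At 80% per step, a ten-step workflow completes cleanly only about 11% of the time; at 99% per step, roughly 90% of the time.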
In the tech world, we like to label periods as the year of (insert milestone here). This past year (2024) was a year of broader experimentation in AI and, of course, agentic use cases.
As 2025 opens, VentureBeat spoke to industry analysts and IT decision-makers to see what the year might bring. For many, 2025 will be the year of agents, when all the pilot programs, experiments and new AI use cases converge into something resembling a return on investment.
In addition, the experts VentureBeat spoke to see 2025 as the year AI orchestration will play a bigger role in the enterprise. Organizations plan to make management of AI applications and agents much more straightforward.
Here are some themes we expect to see more in 2025.
AI agents take charge
Jérémy Grandillon, CEO of TC9 – AI Allbound Agency, said “Today, AI can do a lot, but we don’t trust it to take actions on our behalf. This will change in 2025. Be ready to ask your AI assistant to book a Uber ride for you.” Start small with one agent handling one task. Build up to an army.
“If 2024 was agents everywhere, then 2025 will be about bringing those agents together in networks and systems,” said Nicholas Holland, vice president of AI at Hubspot. “Micro agents working together to accomplish larger bodies of work, and marketplaces where humans can ‘hire’ agents to work alongside them in hybrid teams. Before long, we’ll be saying, ‘there’s an agent for that.'”
… Voice becomes default
Stop typing and start talking. Adam Biddlecombe, head of brand at Mindstream, predicts a shift in how we interact with AI. “2025 will be the year that people start talking with AI,” he said. “The majority of people interact with ChatGPT and other tools in the text format, and a lot of emphasis is put on prompting skills.
Biddlecombe believes, “With Apple’s ChatGPT integration for Siri, millions of people will start talking to ChatGPT. This will make AI so much more accessible and people will start to use it for very simple queries.”
Get ready for the next wave of advancements in AI. AGI arrives early, AI agents take charge, and voice becomes the norm. Video creation gets easy, AI embeds everywhere, and one-person billion-dollar companies emerge.
To better understand the types of roles that AI is impacting, ZoomInfo’s research team looked to its proprietary database of professional contacts for answers. The platform, which detects more than 1.5 million personnel changes per day, revealed a dramatic increase in AI-related job titles since 2022. With a 200% increase in two years, the data paints a vivid picture of how AI technology is reshaping the workforce.
Why does this shift in AI titles matter for every industry?
Ever since a revolutionary new version of ChatGPT became publicly available in late 2022, educators have faced several complex challenges as they learn how to navigate artificial intelligence systems.
…
Education Week produced a significant amount of coverage in 2024 exploring these and other critical questions involving the understanding and use of AI.
Here are the five most popular stories that Education Week published in 2024 about AI in schools.
Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI:
1. Teach ‘people’ skills as well as tech skills
2. Help all students use new tech
3. Prepare students for the jobs of the future
4. Learn to make sense of complex information
5. Universities to lead the tech change
Per The Rundown: OpenAI just launched a surprising new way to access ChatGPT — through an old-school 1-800 number & also rolled out a new WhatsApp integration for global users during Day 10 of the company’s livestream event.
Agentic AI represents a significant evolution in artificial intelligence, offering enhanced autonomy and decision-making capabilities beyond traditional AI systems. Unlike conventional AI, which requires human instructions, agentic AI can independently perform complex tasks, adapt to changing environments, and pursue goals with minimal human intervention.
This makes it a powerful tool across various industries, especially in the customer service function. To understand it better, let’s compare AI Agents with non-AI agents.
… Characteristics of Agentic AI
Autonomy: Achieves complex objectives without requiring human collaboration.
Language Comprehension: Understands nuanced human speech and text effectively.
Rationality: Makes informed, contextual decisions using advanced reasoning engines.
Adaptation: Adjusts plans and goals in dynamic situations.
Workflow Optimization: Streamlines and organizes business workflows with minimal oversight.
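Those characteristics map loosely onto the classic sense-plan-act loop. Below is a generic, minimal sketch of such a loop, just to illustrate the pattern; plan_next_step and run_tool are placeholders of my own, not any vendor's product.

```python
# A generic sense-plan-act loop illustrating the characteristics above: the
# agent reasons toward a goal, acts through tools, observes the result, and
# adapts until the goal is met. Both helpers are placeholders.

def plan_next_step(goal: str, history: list[str]) -> str:
    """Placeholder: ask an LLM which tool call moves us toward the goal,
    or 'DONE' if the goal has been achieved."""
    raise NotImplementedError

def run_tool(step: str) -> str:
    """Placeholder: execute the chosen tool (search, CRM update, email, ...)."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):                      # autonomy, bounded for safety
        step = plan_next_step(goal, history)        # rationality / reasoning
        if step == "DONE":
            break
        observation = run_tool(step)                # action in the environment
        history.append(f"{step} -> {observation}")  # adaptation to what happened
    return history
```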
How, then, can we research and observe how our systems are used while rigorously maintaining user privacy?
Claude insights and observations, or “Clio,” is our attempt to answer this question. Clio is an automated analysis tool that enables privacy-preserving analysis of real-world language model use. It gives us insights into the day-to-day uses of claude.ai in a way that’s analogous to tools like Google Trends. It’s also already helping us improve our safety measures. In this post—which accompanies a full research paper—we describe Clio and some of its initial results.
Evolving tools redefine AI video — from heatherbcooper.substack.com by Heather Cooper Google’s Veo 2, Kling 1.6, Pika 2.0 & more
AI video continues to surpass expectations
The AI video generation space has evolved dramatically in recent weeks, with several major players introducing groundbreaking tools.
Here’s a comprehensive look at the current landscape:
Veo 2…
Pika 2.0…
Runway’s Gen-3…
Luma AI Dream Machine…
Hailuo’s MiniMax…
OpenAI’s Sora…
Hunyuan Video by Tencent…
There are several other video models and platforms, including …
Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.
Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?
The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.
Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.
What Is Agentic AI? — from blogs.nvidia.com by Erik Pounds Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.
The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.
Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.
Client expectations have shifted significantly in today’s technology-driven world. Quick communication and greater transparency are now a priority for clients throughout the entire case life cycle. This growing demand for tech-enhanced processes comes not only from clients but also from staff, and is set to rise even further as more advances become available.
…
I see the shift to cloud-based digital systems, especially for small and midsized law firms, as leveling the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day…
Just 10% of law firms and 21% of corporate legal teams have now implemented policies to guide their organisation’s use of generative AI, according to a report out today (2 December) from Thomson Reuters.
Artificial Intelligence (AI) has been rapidly deployed around the world in a growing number of sectors, offering unprecedented opportunities while raising profound legal and ethical questions. This symposium will explore the transformative power of AI, focusing on its benefits, limitations, and the legal challenges it poses.
AI’s ability to revolutionize sectors such as healthcare, law, and business holds immense potential, from improving efficiency and access to services, to providing new tools for analysis and decision-making. However, the deployment of AI also introduces significant risks, including bias, privacy concerns, and ethical dilemmas that challenge existing legal and regulatory frameworks. As AI technologies continue to evolve, it is crucial to assess their implications critically to ensure responsible and equitable development.
The role of legal teams in creating AI ethics guardrails — from legaldive.com by Catherine Dawson For organizations to balance the benefits of artificial intelligence with its risk, it’s important for counsel to develop policy on data governance and privacy.
How Legal Aid and Tech Collaboration Can Bridge the Justice Gap — from law.com by Kelli Raker and Maya Markovich “Technology, when thoughtfully developed and implemented, has the potential to expand access to legal services significantly,” write Kelli Raker and Maya Markovich.
Challenges and Concerns
Despite the potential benefits, legal aid organizations face several hurdles in working with new technologies:
1. Funding and incentives: Most funding for legal aid is tied to direct legal representation, leaving little room for investment in general case management or exploration of innovative service delivery methods to exponentially scale impact.
2. Jurisdictional inconsistency: The lack of a unified court system or standardized forms across regions makes it challenging to develop accurate and widely applicable tech solutions in certain types of matters.
3. Organizational capacity: Many legal aid organizations lack the time and resources to thoroughly evaluate new tech offerings or collaboration opportunities or identify internal workflows and areas of unmet need with the highest chance for impact.
4. Data privacy and security: Legal aid providers need assurance that tech protects client data and avoids misuse of sensitive information.
5. Ethical considerations: There’s significant concern about the accuracy of information produced by consumer-facing technology and the potential for inadvertent unauthorized practice of law.
Legal: Historically resistant to tech, the legal industry ($350 million in enterprise AI spend) is now embracing generative AI to manage massive amounts of unstructured data and automate complex, pattern-based workflows. The field broadly divides into litigation and transactional law, with numerous subspecialties. Rooted in litigation, Everlaw* focuses on legal holds, e-discovery, and trial preparation, while Harvey and Spellbook are advancing AI in transactional law with solutions for contract review, legal research, and M&A. Specific practice areas are also seeing targeted AI innovation: EvenUp focuses on injury law, Garden on patents and intellectual property, Manifest on immigration and employment law, while Eve* is re-inventing plaintiff casework from client intake to resolution.
CodeSignal, an AI tech company, has launched Conversation Practice, an AI-driven platform to help learners practice critical workplace communication and soft skills.
Conversation Practice uses multiple AI models and a natural spoken interface to simulate real-world scenarios and provide feedback.
The goal is to address the challenge of developing conversational skills through iterative practice, without the awkwardness of peer role-play.
What I learned about this software changed my perception about how I can prepare in the future for client meetings. Here’s what I’ve taken away from the potential use of this software in a legal practice setting:
I see the shift to cloud-based digital systems, especially for small and midsized law firms, as leveling the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day:
Cloud-based case management solutions. These help enhance productivity through collaboration tools and automated workflows while keeping data secure.
E-discovery tools. These tools manage vast amounts of data and help speed up litigation processes.
Artificial intelligence. AI has helped automate tasks for legal professionals including for case management, research, contract review and predictive analytics.
Google’s worst nightmare just became reality. OpenAI didn’t just add search to ChatGPT – they’ve launched an all-out assault on traditional search engines.
It’s the beginning of the end for search as we know it.
Let’s be clear about what’s happening: OpenAI is fundamentally changing how we’ll interact with information online. While Google has spent 25 years optimizing for ad revenue and delivering pages of blue links, OpenAI is building what users actually need – instant, synthesized answers from current sources.
The rollout is calculated and aggressive: ChatGPT Plus and Team subscribers get immediate access, followed by Enterprise and Education users in weeks, and free users in the coming months. This staged approach is about systematically dismantling Google’s search dominance.
Open for AI: India Tech Leaders Build AI Factories for Economic Transformation — from blogs.nvidia.com Yotta Data Services, Tata Communications, E2E Networks and Netweb are among the providers building and offering NVIDIA-accelerated infrastructure and software, with deployments expected to double by year’s end.
From DSC: Great…we have another tool called Canvas. Or did you say Canva?
Introducing canvas — from OpenAI A new way of working with ChatGPT to write and code
We’re introducing canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing you and ChatGPT to collaborate on a project. This early beta introduces a new way of working together—not just through conversation, but by creating and refining ideas side by side.
Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Starting today we’re rolling out canvas to ChatGPT Plus and Team users globally. Enterprise and Edu users will get access next week. We also plan to make canvas available to all ChatGPT Free users when it’s out of beta.
The way Americans buy homes is changing dramatically.
New industry rules about how home buyers’ real estate agents get paid are prompting a reckoning among housing experts and the tech sector. Many house hunters who are already stretched thin by record-high home prices and closing costs must now decide whether, and how much, to pay an agent.
A 2-3% commission on the median home price of $416,700 could be well over $10,000, and in a world where consumers are accustomed to using technology for everything from taxes to tickets, many entrepreneurs see an opportunity to automate away the middleman, even as some consumer advocates say not so fast.
The Great Mismatch — from the-job.beehiiv.com by Paul Fain Artificial intelligence could threaten millions of decent-paying jobs held by women without degrees.
Women in administrative and office roles may face the biggest AI automation risk, find Brookings researchers armed with data from OpenAI. Also, why Indiana could make the Swiss apprenticeship model work in this country, and how learners get disillusioned when a certificate doesn’t immediately lead to a good job.
…
A major new analysis from the Brookings Institution, using OpenAI data, found that the most vulnerable workers don’t look like the rail and dockworkers who have recaptured the national spotlight. Nor are they the creatives—like Hollywood’s writers and actors—that many wealthier knowledge workers identify with. Rather, they’re predominantly women in the 19 million office support and administrative jobs that make up the first rung of the middle class.
“Unfortunately the technology and automation risks facing women have been overlooked for a long time,” says Molly Kinder, a fellow at Brookings Metro and lead author of the new report. “Most of the popular and political attention to issues of automation and work centers on men in blue-collar roles. There is far less awareness about the (greater) risks to women in lower-middle-class roles.”
introducing swarm: an experimental framework for building, orchestrating, and deploying multi-agent systems. https://t.co/97n4fehmtM
Is this how AI will transform the world over the next decade? — from futureofbeinghuman.com by Andrew Maynard Anthropic’s CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It’s audacious, compelling, and a must-read for anyone working at the intersection of AI and society.
But if Amodei’s essay is approached as a conversation starter rather than a manifesto — which I think it should be — it’s hard to see how it won’t lead to clearer thinking around how we successfully navigate the coming AI transition.
Given the scope of the paper, it’s hard to write a response to it that isn’t as long as or longer than the original. Because of this, I’d strongly encourage anyone who’s looking at how AI might transform society to read the original — it’s well written, and easier to navigate than its length might suggest.
That said, I did want to pull out a few things that struck me as particularly relevant and important — especially within the context of navigating advanced technology transitions.
And speaking of that essay, here’s a summary from The Rundown AI:
Anthropic CEO Dario Amodei just published a lengthy essay outlining an optimistic vision for how AI could transform society within 5-10 years of achieving human-level capabilities, touching on longevity, politics, work, the economy, and more.
The details:
Amodei believes that by 2026, ‘powerful AI’ smarter than a Nobel Prize winner across fields, with agentic and multimodal capabilities, will be possible.
He also predicted that AI could compress 100 years of scientific progress into 10 years, curing most diseases and doubling the human lifespan.
The essay argued AI could strengthen democracy by countering misinformation and providing tools to undermine authoritarian regimes.
The CEO acknowledged potential downsides, including job displacement — but believes new economic models will emerge to address this.
He envisions AI driving unprecedented economic growth but emphasizes ensuring AI’s benefits are broadly distributed.
Why it matters:
As the CEO of what is seen as the ‘safety-focused’ AI lab, Amodei paints a utopia-level optimistic view of where AI will head over the next decade. This thought-provoking essay serves as both a roadmap for AI’s potential and a call to action to ensure the responsible development of technology.
However, most workers remain unaware of these efforts. Only a third (33%) of all U.S. employees say their organization has begun integrating AI into their business practices, with the highest percentage in white-collar industries (44%).
… White-collar workers are more likely to be using AI. White-collar workers are, by far, the most frequent users of AI in their roles. While 81% of employees in production/frontline industries say they never use AI, only 54% of white-collar workers say they never do and 15% report using AI weekly.
… Most employees using AI use it for idea generation and task automation. Among employees who say they use AI, the most common uses are to generate ideas (41%), to consolidate information or data (39%), and to automate basic tasks (39%).
Selling like hotcakes: The extraordinary demand for Blackwell GPUs illustrates the need for robust, energy-efficient processors as companies race to implement more sophisticated AI models and applications. The coming months will be critical to Nvidia as the company works to ramp up production and meet the overwhelming requests for its latest product.
Here’s my AI toolkit — from wondertools.substack.com by Jeremy Caplan and Nikita Roy How and why I use the AI tools I do — an audio conversation
1. What are two useful new ways to use AI?
AI-powered research: Type a detailed search query into Perplexity instead of Google to get a quick, actionable summary response with links to relevant information sources. Read more of my take on why Perplexity is so useful and how to use it.
Notes organization and analysis: Tools like NotebookLM, Claude Projects, and Mem can help you make sense of huge repositories of notes and documents. Query or summarize your own notes and surface novel connections between your ideas.
The Tutoring Revolution — from educationnext.org by Holly Korbey More families are seeking one-on-one help for their kids. What does that tell us about 21st-century education?
Recent research suggests that the number of students seeking help with academics is growing, and that over the last couple of decades, more families have been turning to tutoring for that help.
…
What the Future Holds
Digital tech has made private tutoring more accessible, more efficient, and more affordable. Students whose families can’t afford to pay $75 an hour at an in-person center can now log on from home to access a variety of online tutors, including Outschool, Wyzant, and Anchorbridge, and often find someone who can cater to their specific skills and needs—someone who can offer help in French to a student with ADHD, for example. Online tutoring is less expensive than in-person programs. Khan Academy’s Khanmigo chatbot can be a student’s virtual AI tutor, no Zoom meeting required, for $4 a month, and nonprofits like Learn to Be work with homeless shelters and community centers to give virtual reading and math tutoring free to kids who can’t afford it and often might need it the most.
On Tuesday, Workera announced Sage, an AI agent you can talk with that’s designed to assess an employee’s skill level, goals, and needs. After taking some short tests, Workera claims Sage will accurately gauge how proficient someone is at a certain skill. Then, Sage can recommend the appropriate online courses through Coursera, Workday, or other learning platform partners. Through chatting with Sage, Workera aims to meet employees where they are, testing their skills in writing, machine learning, or math, and giving them a path to improve.
AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh The market for AI products and services could reach between $780 billion and $990 billion by 2027.
At a Glance
The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.
Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”
And on a somewhat related note (i.e., emerging technologies), also see the following two postings:
Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.
Key Takeaways
The potential of robots has long fascinated humans and has even led to a booming field of robot-assisted surgery.
Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.
“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.
Obviously this workflow works just as well for meetings as it does for lectures. Stay present in the meeting with no screens and just write down the key points with pen and paper. Then let NotebookLM assemble the detailed summary based on your high-level notes. https://t.co/fZMG7LgsWG
In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.
In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.
…
Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.
AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper The latest AI video models that deliver results
AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.
No worries – we do have plenty of generative AI video tools we can use right now.
Kling AI launched its updated v1.5, and the quality of its image-to-video and text-to-video generation is impressive.
Hailuo MiniMax text-to-video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
…plus several more
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week.
While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.
When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.
All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.
In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.
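A toy illustration of how that feedback loop can degrade a model (my own sketch, not the research the article cites): each "generation" below is fit only to the most typical outputs of the previous generation, and the variety in the data collapses.

```python
# Toy illustration of the feedback loop: each "generation" is fit only to the
# most typical outputs of the previous generation (the tails get filtered out,
# much as models and users favor high-probability content). Diversity collapses.
import random
import statistics

random.seed(0)
mean, stdev = 0.0, 1.0                      # generation 0: the original data
for generation in range(1, 11):
    samples = [random.gauss(mean, stdev) for _ in range(10_000)]
    typical = [x for x in samples if abs(x - mean) < stdev]   # keep the "safe" middle
    mean, stdev = statistics.mean(typical), statistics.stdev(typical)
    print(f"gen {generation:>2}: spread of what the model has learned = {stdev:.3f}")
# The spread shrinks by roughly half each generation, so the variety present in
# the original data is progressively lost.
```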
This weekend, the @xAI team brought our Colossus 100k H100 training cluster online. From start to finish, it was done in 122 days.
Colossus is the most powerful AI training system in the world. Moreover, it will double in size to 200k (50k H200s) in a few months.
The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.
… Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.
Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.
Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.
The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall] The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.
[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.
Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.
What is the AI Risk Repository? The AI Risk Repository has three parts:
The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
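To picture how those three parts fit together, here is a rough sketch of the repository's structure as a data model. The field names are illustrative assumptions, not the repository's actual schema.

```python
# Rough sketch of the repository's three parts as a data model. The field names
# are illustrative assumptions, not the repository's actual schema.
from dataclasses import dataclass

@dataclass
class Risk:
    quote: str               # the extracted risk, with its source quote
    source_framework: str    # which of the 43 existing frameworks it came from
    page_number: int
    # Causal Taxonomy: how, when, and why the risk occurs
    entity: str              # e.g., "Human" or "AI"
    intent: str              # e.g., "Intentional" or "Unintentional"
    timing: str              # e.g., "Pre-deployment" or "Post-deployment"
    # Domain Taxonomy: one of 7 domains and 23 subdomains
    domain: str              # e.g., "Misinformation"
    subdomain: str           # e.g., "False or misleading information"

risks: list[Risk] = []       # the database itself holds 700+ such entries

# Example query: count the post-deployment misinformation risks.
count = sum(1 for r in risks if r.domain == "Misinformation" and r.timing == "Post-deployment")
```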
SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.
Per Oncely:
The Details:
Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.
Over the coming days, start creating and chatting with Gems: customizable versions of Gemini that act as topic experts.
We’re also launching premade Gems for different scenarios – including Learning coach to break down complex topics and Coding partner to level up your skills… pic.twitter.com/2Dk8NxtTCE
We have new features rolling out, [that started on 8/28/24], that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.
Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”, it’s agentic AI.
Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.
In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.
…
Citing AWS guidance, ZDNET counts six different potential types of AI agents:
Simple reflex agents for tasks like resetting passwords
Model-based reflex agents for pro vs. con decision making
Goal-/rule-based agents that compare options and select the most efficient pathways
Utility-based agents that compare for value
Learning agents
Hierarchical agents that manage and assign subtasks to other agents
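To make a couple of those categories concrete, here's a tiny, generic sketch contrasting a simple reflex agent with a utility-based agent. It is illustrative code of my own, not drawn from the AWS guidance or ZDNET.

```python
# Illustrative contrast between two of the agent types above: a simple reflex
# agent (condition -> fixed action) and a utility-based agent (score the options
# and pick the highest-value one). Generic sketch, not AWS's or ZDNET's code.

def simple_reflex_agent(event: str) -> str:
    """Reacts to a condition with a fixed rule, e.g. password-reset requests."""
    if event == "password_reset_request":
        return "send_reset_link"
    return "escalate_to_human"

def utility_based_agent(options: dict[str, float]) -> str:
    """Compares options by an estimated utility score and picks the best one."""
    return max(options, key=options.get)

print(simple_reflex_agent("password_reset_request"))                    # send_reset_link
print(utility_based_agent({"ship_today": 0.6, "ship_tomorrow": 0.9}))   # ship_tomorrow
```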
Advanced Voice Mode on ChatGPT features more natural, real-time conversations that pick up on and respond with emotion and non-verbal cues.
Advanced Voice Mode on ChatGPT is currently in a limited alpha. Please note that it may make mistakes, and access and rate limits are subject to change.
From DSC: Think about the impacts/ramifications of global, virtual, real-time language translations!!! This type of technology will create very powerful, new affordances in our learning ecosystems — as well as in business communications, with the various governments across the globe, and more!
How to use Perplexity in your daily workflow — from ai-supremacy.com by Michael Spencer and Alex McFarland “I barely use Google anymore (for anything)” says today’s guest author.
Make Perplexity your go-to research companion with these strategies:
Morning briefings: Start your day by asking Perplexity for the latest news in your field. (I personally like to use Perplexity to curate the top AI news of the day to consider writing about for Unite AI and Techopedia.)
Fact-checking: Use it to quickly verify information before including it in your work.
Brainstorming: Generate ideas for projects or content by asking open-ended questions.
Learning new concepts: When you encounter an unfamiliar term or idea, turn to Perplexity for a quick, comprehensive explanation.
Writing assistance: Use it to find relevant statistics, examples, or counterarguments for your content.
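Strategies like the morning briefing can also be scripted. Here's a rough sketch using an OpenAI-compatible chat client pointed at a search-augmented model; the base URL and model name are assumptions to check against Perplexity's current API documentation.

```python
# Rough sketch of scripting the "morning briefing" strategy with an
# OpenAI-compatible client pointed at a search-augmented model. The base URL
# and model name below are assumptions; check Perplexity's current API docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.perplexity.ai",   # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="sonar",                           # assumed model name
    messages=[
        {"role": "system", "content": "Answer with current, cited sources."},
        {"role": "user", "content": "Give me today's top five AI news stories, with links."},
    ],
)
print(response.choices[0].message.content)
```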
Elon Musk’s Memphis Supercluster is a newly activated AI training cluster that is claimed to be the most powerful in the world. Here are the key details about this supercomputer:
1. Location: The supercluster is located in Memphis, Tennessee[1][2].
2. Hardware: It consists of 100,000 liquid-cooled Nvidia H100 GPUs connected through a single RDMA (Remote Direct Memory Access) fabric[1][3].
3. Purpose: The supercluster is designed for training large language models (LLMs) and other advanced AI technologies for Musk’s xAI company[1][2].
4. Activation: The Memphis Supercluster began training at approximately 4:20 AM local time on July 22, 2024[1][3].
5. Collaboration: The project is a result of collaboration between xAI, X (formerly Twitter), Nvidia, and other supporting companies[1][2].
6. Investment: With each H100 GPU estimated to cost between $30,000 to $40,000, the total investment in GPUs alone is estimated to be between $3 billion to $4 billion[5].
7. Goals: Musk claims that this supercluster will be used to develop “the world’s most powerful AI by every measure” by December 2024[1].
8. Comparison: The Memphis Supercluster’s 100,000 H100 GPUs significantly outclass other supercomputers in terms of GPU horsepower, such as Frontier (37,888 AMD GPUs) and Microsoft Eagle (14,400 Nvidia H100 GPUs)[3].
9. Infrastructure: The project required significant infrastructure development, including fiber optic networking[5].
While Musk’s claims about the supercluster’s capabilities are ambitious, it remains to be seen how it will perform in practice and whether it will meet the stated goals within the given timeframe[1].
Elon’s AI empire expands — from theneurondaily.com by Grant Harvey Elon Musk’s team at xAI just powered on the “World’s Most Powerful AI Training Cluster.”
If you don’t know what a supercluster is, it’s basically a massive network of Nvidia GPUs (computer chips) working together as a single unit to solve “super” complex calculations at unprecedented speeds.
And this Memphis Supercluster is the most “super” supercluster we’ve ever seen. The new facility, dubbed the “Gigafactory of Compute”, is a beast:
100,000 liquid-cooled Nvidia H100 GPUs on a single RDMA fabric (for context, Google snagged only 50,000 H100 GPUs last year).
Up to 150 megawatts of electricity—enough to power 100K homes.
At least one million gallons of water per day to keep cool!
What to expect: Better models, more frequently. That’s been the trend, at least—look at how the last few model releases have become more squished together.
GPT-4o Advanced Voice is an entirely new type of voice assistant, similar to but larger than the recently unveiled French model Moshi, which argued with me over a story.
In demos of the model, we’ve seen GPT-4o Advanced Voice create custom character voices, generate sound effects while telling a story and even act as a live translator.
This native speech ability is a significant step in creating more natural AI assistants. In the future, it will also come with live vision abilities, allowing the AI to see what you see.
“Biggest IT outage in history” proves we’re not ready for AGI. …
Here’s the TL;DR—a faulty software update from cybersecurity firm CrowdStrike made this happen:
Grounded 5,000+ flights around the world.
Slowed healthcare across the UK.
Forced retailers to revert to cash-only transactions in Australia (what is this, the stone ages?!).
…
Here’s where AI comes in: Imagine today’s AI as a new operating system. In 5-10 years, it’ll likely be as integrated into our economy as Microsoft’s cloud servers are now. This isn’t that far-fetched—Microsoft is already planning to embed AI into all its programs.
So what if a CrowdStrike-like incident happens with a more powerful AI system? Some experts predict an AI-powered IT outage could be 10x worse than Friday’s fiasco.
The CrowdStrike software bug that took down global IT infrastructure exposed a single-point-of-failure risk unrelated to malicious cyberattack.
National and cybersecurity experts say the risk of this kind of technical outage is increasing alongside the risk of hacks, and the market will need to adopt better competitive practices.
Government is also likely to look at new regulations related to software updates and patches.