How to use NotebookLM for personalized knowledge synthesis — from ai-supremacy.com by Michael Spencer and Alex McFarland. Two powerful workflows that unlock everything else. Intro: the Golden Age of AI tools and AI agent frameworks begins in 2025.
What is Google Learn about? Google’s new AI tool, Learn About, is designed as a conversational learning companion that adapts to individual learning needs and curiosity. It allows users to explore various topics by entering questions, uploading images or documents, or selecting from curated topics. The tool aims to provide personalized responses tailored to the user’s knowledge level, making it user-friendly and engaging for learners of all ages.
Is Generative AI leading to a new take on Educational technology? It certainly appears promising heading into 2025.
The Learn About tool utilizes the LearnLM AI model, which is grounded in educational research and focuses on how people learn. Google insists that unlike traditional chatbots, it emphasizes interactive and visual elements in its responses, enhancing the educational experience. For instance, when asked about complex topics like the size of the universe, Learn About not only provides factual information but also includes related content, vocabulary building tools, and contextual explanations to deepen understanding.
Nearly every Fortune 500 company now uses artificial intelligence (AI) to screen resumes and assess test scores to find the best talent. However, new research from the University of Florida suggests these AI tools might not be delivering the results hiring managers expect.
The problem stems from a simple miscommunication between humans and machines: AI thinks it’s picking someone to hire, but hiring managers only want a list of candidates to interview.
Without knowing about this next step, the AI might choose safe candidates. But if it knows there will be another round of screening, it might suggest different and potentially stronger candidates.
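A toy scoring model makes that gap concrete (candidates and numbers below are entirely hypothetical): when the objective is a one-shot hire, the safest candidate wins; when the objective is an interview shortlist, upside should count, because the interview itself will resolve the uncertainty.

```python
# Each candidate: (estimated fit, uncertainty of that estimate). Hypothetical.
candidates = {
    "A": (0.70, 0.05),  # safe: high estimate, low uncertainty
    "B": (0.65, 0.25),  # risky: lower estimate, high upside
    "C": (0.60, 0.02),
}

def pick_to_hire(cands):
    # One-shot decision: take the best expected fit.
    return max(cands, key=lambda c: cands[c][0])

def pick_to_interview(cands):
    # An interview will resolve uncertainty, so upside matters too.
    return max(cands, key=lambda c: cands[c][0] + cands[c][1])
```

With these numbers the two objectives pick different people, which is the paper's point in miniature.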
In the last two years, the world has seen breakneck advancement in the Generative AI space, from text-to-text to text-to-image and text-to-video capabilities. And all of that has been nothing short of a stepping stone for the next big AI breakthrough: AI agents. According to Bloomberg, OpenAI is preparing to launch its first autonomous AI agent, codenamed ‘Operator,’ as soon as January 2025.
Apparently, this OpenAI agent – or Operator, as it’s codenamed – is designed to perform complex tasks independently. By understanding user commands through voice or text, the agent will seemingly be able to control different applications on the computer, send emails, book flights, and no doubt do other cool things. Stuff that ChatGPT, Copilot, Google Gemini, or any other LLM-based chatbot just can’t do on its own.
In the enterprise of the future, human workers are expected to work closely alongside sophisticated teams of AI agents.
According to McKinsey, generative AI and other technologies have the potential to automate 60 to 70% of employees’ work. And, already, an estimated one-third of American workers are using AI in the workplace — oftentimes unbeknownst to their employers.
However, experts predict that 2025 will be the year that these so-called “invisible” AI agents begin to come out of the shadows and take more of an active role in enterprise operations.
“Agents will likely fit into enterprise workflows much like specialized members of any given team,” said Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicML.
A recent report from McKinsey predicts that generative AI could unlock up to $2.6 trillion to $4.4 trillion annually in value within product development and innovation across various industries. This staggering figure highlights just how significantly generative AI is set to transform the landscape of product development. Generative AI app development is driving innovation by using the power of advanced algorithms to generate new ideas, optimize designs, and personalize products at scale. It is also becoming a cornerstone of competitive advantage in today’s fast-paced market. As businesses look to stay ahead, understanding and integrating technologies like generative AI app development into product development processes is becoming more crucial than ever.
AI agents handle complex, autonomous tasks beyond simple commands, showcasing advanced decision-making and adaptability.
The Based AI Agent template by Coinbase and Replit provides an easy starting point for developers to build blockchain-enabled AI agents.
Based AI agents specifically integrate with blockchain, supporting crypto wallets and transactions.
Securing API keys in development is crucial to protect the agent from unauthorized access.
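At a minimum, that means keeping keys out of source code. A sketch of the environment-variable pattern (the variable name `AGENT_API_KEY` is invented for the example):

```python
import os

def load_api_key(var_name: str) -> str:
    """Read a secret from the environment rather than hard-coding it in source."""
    key = os.environ.get(var_name, "")
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it or put it in a git-ignored .env file"
        )
    return key

# Demo only -- a real key would be set outside the program, never in code.
os.environ["AGENT_API_KEY"] = "sk-demo-not-a-real-key"
key = load_api_key("AGENT_API_KEY")
```

The same idea extends to `.env` loaders and platform secret managers; the essential rule is that the key never appears in the repository.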
What are AI Agents and How Are They Used in Different Industries?— from rtinsights.com by Salvatore Salamone AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries.
Google’s worst nightmare just became reality. OpenAI didn’t just add search to ChatGPT – they’ve launched an all-out assault on traditional search engines.
It’s the beginning of the end for search as we know it.
Let’s be clear about what’s happening: OpenAI is fundamentally changing how we’ll interact with information online. While Google has spent 25 years optimizing for ad revenue and delivering pages of blue links, OpenAI is building what users actually need – instant, synthesized answers from current sources.
The rollout is calculated and aggressive: ChatGPT Plus and Team subscribers get immediate access, followed by Enterprise and Education users in weeks, and free users in the coming months. This staged approach is about systematically dismantling Google’s search dominance.
Open for AI: India Tech Leaders Build AI Factories for Economic Transformation — from blogs.nvidia.com Yotta Data Services, Tata Communications, E2E Networks and Netweb are among the providers building and offering NVIDIA-accelerated infrastructure and software, with deployments expected to double by year’s end.
We’ve added a new analysis tool. The tool helps Claude respond with mathematically precise and reproducible answers. You can then create interactive data visualizations with Artifacts.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
A few days ago, Anthropic released Claude Computer Use, which is a model + code that allows Claude to control a computer. It takes screenshots to make decisions, can run bash commands and so forth.
It’s cool, but obviously very dangerous because of prompt injection: Claude Computer Use enables AI to run commands on machines autonomously, posing severe risks if exploited.
This blog post demonstrates that it’s possible to leverage prompt injection to achieve old-school command and control (C2) when giving novel AI systems access to computers. … We discussed one way to get malware onto a Claude Computer Use host via prompt injection. There are countless others; another is to have Claude write the malware from scratch and compile it. Yes, it can write C code, compile and run it. There are many other options.
TrustNoAI.
And again, remember do not run unauthorized code on systems that you do not own or are authorized to operate on.
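To make the attack class concrete, here is a deliberately naive sketch of screening fetched content for injected instructions. The patterns are invented for illustration, and a filter this simple is trivially bypassed, which is exactly why the post treats the risk as severe:

```python
import re

# Naive, illustrative patterns only -- keyword matching is not a real defense
# against prompt injection; it merely shows what injected content can look like.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"download and (run|execute)",
    r"\bcurl\b.*\|\s*(ba)?sh",
]

def looks_like_injection(text: str) -> bool:
    """Flag text containing common injection phrasings."""
    return any(re.search(p, text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)
```

A model that takes screenshots of arbitrary web pages ingests exactly this kind of text, which is why sandboxing and least privilege matter more than any filter.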
From a survey with more than 800 senior business leaders, this report’s findings indicate that weekly usage of Gen AI has nearly doubled from 37% in 2023 to 72% in 2024, with significant growth in previously slower-adopting departments like Marketing and HR. Despite this increased usage, businesses still face challenges in determining the full impact and ROI of Gen AI. Sentiment reports indicate leaders have shifted from feelings of “curiosity” and “amazement” to more positive sentiments like “pleased” and “excited,” and concerns about AI replacing jobs have softened. Participants were full-time employees working in large commercial organizations with 1,000 or more employees.
For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.
The fragility highlighted in these new results helps support previous research suggesting that LLMs’ use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”
We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal in the world specifically for AI data centers.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
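A hypothetical skeleton of how such a platform chains its stages; every function below is a stub standing in for a real model or API call (LLM scripting, text-to-voice, image-to-video), not any vendor's actual interface:

```python
# Stub pipeline: each stage consumes the previous stage's artifact.
def write_script(topic: str) -> str:
    return f"Narration for: {topic}"            # stand-in for an LLM call

def synthesize_voice(script: str) -> bytes:
    return script.encode("utf-8")               # stand-in for a TTS API

def render_video(voice_track: bytes) -> str:
    return f"video<{len(voice_track)} bytes of audio>"  # stand-in for a video model

def run_pipeline(topic: str) -> str:
    artifact = write_script(topic)
    artifact = synthesize_voice(artifact)
    return render_video(artifact)

clip = run_pipeline("city timelapse")
```

The value these platforms sell is exactly this orchestration: users never touch the individual model calls, only the topic in and the clip out.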
The TLDR here is that, as useful as popular AI tools are for learners, as things stand they only enable us to take the very first steps on what is a long and complex journey of learning.
AI tools like ChatGPT 4o, Claude 3.5 & NotebookLM can help to give us access to information but (for now at least) the real work of learning remains in our – the humans’ – hands.
To which Anna Mills had a solid comment:
It might make a lot of sense to regulate generated audio to require some kind of watermark and/or metadata. Instructors who teach online and assign voice recordings, we need to recognize that these are now very easy and free to auto-generate. In some cases we are assigning this to discourage students from using AI to just autogenerate text responses, but audio is not immune.
As we look to the future of umpiring in baseball, a balanced approach may offer the best solution. Rather than an all-or-nothing choice between human umpires and full automation, a hybrid system could potentially offer the benefits of both worlds. For instance, automated tracking systems could be used to assist human umpires, providing them with real-time data to inform their calls. This would maintain the human element and authority on the field while significantly enhancing accuracy and consistency.
Such a system would allow umpires to focus more on game management, player interactions, and the myriad other responsibilities that require human judgment and experience. It would preserve the traditional aspects of the umpire’s role that fans and players value, while leveraging technology to address concerns about accuracy and fairness.
Introduction Continuing with our baseball analogy, we now turn our focus to the courtroom.
The intersection of technology and the justice system is a complex and often contentious space, much like the debate over automated umpires in baseball. As Major League Baseball considers whether automated systems should replace the human element in calling balls and strikes, the legal world faces similar questions: How far should we go in allowing technology to aid our decision-making processes, and what is the right balance between innovation and the traditions that define the courtroom?
AI and the rise of the Niche Lawyer— from jordanfurlong.substack.com by Jordan Furlong A new legal market will create a new type of lawyer: Specialized, flexible, customized, fractional, home-based and online, exclusive, balanced and focused. This could be your future legal career.
Think of a new picture. A lawyer dressed in Professional Casual, or Business Comfortable, an outfit that looks sharp but feels relaxed. A lawyer inside their own apartment, in an extra bedroom, or in a shared workspace on a nearby bus route, taking an Uber to visit some clients and using Zoom to meet with others. A lawyer with a laptop and a tablet and a smartphone and no other capital expenditures. A lawyer whose overhead is only what’s literally over their head.
This lawyer starts work when they feel like it (maybe 7 am, maybe 10; maybe Monday, maybe not) and they stop working when they feel like it (maybe 4 pm, maybe 9). They have as many clients as they need, for whom they provide very specific, very personalized services. They provide some services that aren’t even “legal” to people who aren’t “clients” as we understand both terms. They have essential knowledge and skills that all lawyers share but unique knowledge and skills that hardly any others possess. They make as much money as they need in order to meet the rent and pay down their debts and afford a life with the people they love. They’re in complete charge of their career and their destiny, something they find terrifying and stressful and wonderful and fulfilling.
While the latest ChatGPT model is dominating tech headlines, I was unexpectedly blown away by Google’s recent release of a new NotebookLM feature: Audio Overview. This tool, which transforms written content into simulated conversations, caught me off guard with its capabilities. I uploaded some of my blog posts on AI and the justice system, and what it produced left me speechless. The AI generated podcast-like discussions felt remarkably authentic, complete with nuanced interpretations and even slight misunderstandings of my ideas. This mirrors real-life discussions perfectly – after all, how often do we hear our own thoughts expressed by others and think, “That’s not quite what I meant”?
1. Record audio from class on your phone
2. Keep laptop closed. Just jot down short phrases to describe most important points
3. Upload audio and PDF scan of notes to NotebookLM
4. Ask Notebook to expand your notes with details from recording… pic.twitter.com/wfmCTJfRba
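The "expand your notes" step in the workflow above can be imitated with a toy keyword match. NotebookLM's actual method is not public; this only shows the shape of the task of linking brief handwritten notes back to a lecture recording:

```python
# Attach transcript sentences to each short note by keyword overlap.
def expand_notes(notes, transcript_sentences):
    expanded = {}
    for note in notes:
        keywords = set(note.lower().split())
        expanded[note] = [
            s for s in transcript_sentences
            if keywords & set(s.lower().replace(".", "").split())
        ]
    return expanded

notes = ["mitosis phases", "spindle fibers"]
transcript = [
    "Mitosis has four main phases.",
    "Spindle fibers attach to the chromosomes.",
    "Next week we cover meiosis.",
]
expanded = expand_notes(notes, transcript)
```

A real system would use semantic similarity rather than literal word overlap, but the workflow is the same: sparse notes in, detail-rich notes out.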
Unlock deeper insights with NotebookLM: Now analyze YouTube videos & audio files alongside your docs. Plus, easily share your Audio Overview with a new sharing option!
I. Introduction (0:00 – 6:16): …
II. Historical Contextualization (6:16 – 11:30): …
III. The Role of Product Fit in AI’s Impact (11:30 – 17:10): …
IV. AI and the Future of Knowledge Work (17:10 – 24:03): …
V. Teaching About AI in Higher Ed: A Measured Approach (24:03 – 34:20): …
VI. AI & the Evolving Skills Landscape (34:20 – 44:35): …
VII. Ethical & Pedagogical Considerations in an AI-Driven World (44:35 – 54:03):…
VIII. AI Beyond the Classroom: Administrative Applications & the Need for Intuition (54:03 – 1:04:30): …
IX. Reflections & Future Directions (1:04:30 – 1:11:15): ….
Part 2: Administrative Impacts & Looking Ahead
X. Bridging the Conversation: From Classroom to Administration (1:11:15 – 1:16:45): …
XI. The Administrative Potential of AI: A Looming Transformation (1:16:45 – 1:24:42): …
XII. The Need for Intuitiveness & the Importance of Real-World Applications (1:24:42 – 1:29:45): …
XIII. Looking Ahead: From Hype to Impactful Integration (1:29:45 – 1:34:25): …
XIV. Conclusion and Call to Action (1:34:25 – 1:36:03): …
Most language learners do not have access to affordable 1:1 tutoring, which is also proven to be the most effective way to learn (short of moving to a specific country for complete immersion). Meanwhile, language learning is a huge market, an estimated 60% of which is still dominated by “offline” solutions, making it ripe for disruption, and never more so than with the opportunities unlocked through AI-powered language learning. Therefore, we believe this presents huge opportunities for new startups creating AI-native products to become the next language-learning unicorns.
I never imagined I’d learn so much without paying for a course.
It’s not that AI is inherently biased, but in its current state, it favors those who can afford it. The wealthy districts continue to pull ahead, leaving schools without resources further behind. Students in these underserved areas aren’t just being deprived of technology—they’re being deprived of the future.
But imagine a different world—one where AI doesn’t deepen the divide, but helps to bridge it. Technology doesn’t have to be the luxury of the wealthy. It can be a tool for every student, designed to meet them where they are. Adaptive AI systems, integrated into schools regardless of their budget, can provide personalized learning experiences that help students catch up and push forward, all while respecting the limits of their current infrastructure. This is where AI’s true potential lies—not in widening the gap, but in leveling the field.
But imagine if, instead of replacing teachers, AI helped to support them. Picture a world where teachers are freed from the administrative burdens that weigh them down. Where AI systems handle the logistics, so teachers can focus on what they do best—teaching, mentoring, and inspiring the next generation. Professional development could be personalized, helping teachers integrate AI into their classrooms in ways that enhance their teaching, without adding to their workload. This is the future we should be striving toward—one where technology serves to lift up educators, not push them out.
AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh The market for AI products and services could reach between $780 billion and $990 billion by 2027.
At a Glance
The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.
Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”
And on a somewhat related note (i.e., emerging technologies), also see the following two postings:
Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.
Key Takeaways
The potential of robots has long been a fascination for humans and has even led to a booming field of robot-assisted surgery.
Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.
“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.
Obviously this workflow works just as well for meetings as it does for lectures. Stay present in the meeting with no screens and just write down the key points with pen and paper. Then let NotebookLM assemble the detailed summary based on your high-level notes. https://t.co/fZMG7LgsWG
In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.
In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.
…
Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.
AI Video Tools You Can Use Today— from heatherbcooper.substack.com by Heather Cooper The latest AI video models that deliver results
AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.
No worries – we do have plenty of generative AI video tools we can use right now.
Kling AI launched its updated v1.5 and the quality of image or text to video is impressive.
Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
…plus several more
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week.
While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.
RIP To Human First Pass Document Review?— from abovethelaw.com by Joe Patrice Using actual humans to perform an initial review isn’t gone yet, but the days are numbered.
Lawyers are still using real, live people to take a first crack at document review, but much like the “I’m not dead yet” guy from Monty Python and the Holy Grail, it’s a job that will be stone dead soon. Because there are a lot of deeply human tasks that AI will struggle to replace, but getting through a first run of documents doesn’t look like one of them.
At last week’s Relativity Fest, the star of the show was obviously Relativity aiR for Review, which the company moved to general availability. In conjunction with the release, Relativity pointed to impressive results the product racked up during the limited availability period including Cimplifi reporting that the product cut review time in half and JND finding a 60 percent cut in costs.
When it comes to efficiencies, automation plays a big role. In a solo or small firm, resources come at a premium. Learn to reduce wasted input through standardized, repeatable operating procedures and automation. (There are even tech products that help you create written standard processes learning from and organizing the work you’re already doing).
Imagine speaking into an app as you “brain dump” and having those thoughts come out organized and notated for later use. Imagine dictating legal work into an app and having AI organize your dictation, even correct it. You don’t need to type everything in today’s tech world. Maximize downtime.
It’s all about training yourself to think “automation first.” Meanwhile, a virtual assistant (VA) located in another country can fill gaps in your practice, learn your preferences, match your brand, and help you be your most efficient self without hiring a full-time employee. Today’s most successful law firms are high-tech hubs. Don’t let fear of the unknown hold you back.
Several of our regular Legaltech Week panelists were in Chicago for RelativityFest last week, so we took the opportunity to get together and broadcast our show live from the same room (instead of Zoom squares).
If you missed it Friday, here’s the video recording.
Today (24 September) LexisNexis has released a new report – Need for Speedier Legal Services sees AI Adoption Accelerate – which reveals a sharp increase in the number of lawyers using generative AI for legal work.
The survey of 800+ UK legal professionals at firms and in-house teams found 41% are currently using AI for work, up from 11% in July 2023. Lawyers with plans to use AI for legal work in the near future also jumped from 28% to 41%, while those with no plans to adopt AI dropped from 61% to 15%. The survey found that 39% of private practice lawyers now expect to adjust their billing practices due to AI, up from 18% in January 2024.
‘What if legal review cost just $1? What if legal review was 1,000X cheaper than today?’ he muses.
And, one could argue we are getting there already – at least in theory. How much does it actually cost to run a genAI tool that hits the accuracy levels you require over a relatively mundane contract in order to find top-level information? If token costs drop massively in the years ahead and tech licence costs have been shared out across a major legal business… then what is the cost to the firm per document?
Of course, there is review and there is review. A very deep and thorough review, with lots of redlining, back and forth negotiation, and redrafting by top lawyers is another thing. But, a ‘quick once-over’? It feels like we are already at the ‘pennies on the dollar’ stage for that.
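To make that back-of-the-envelope claim concrete, here is a minimal cost sketch. Every number in it (tokens per page, summary length, per-token prices) is a hypothetical assumption for illustration, not a figure from the article or any vendor’s price list:

```python
# Hypothetical back-of-the-envelope cost of a "quick once-over" contract review.
# All numbers below are illustrative assumptions, not actual vendor pricing.

def review_cost(pages: int,
                tokens_per_page: int = 600,       # assumed density of legal prose
                output_tokens: int = 1_000,       # assumed length of the summary
                price_in_per_m: float = 3.00,     # assumed $ per 1M input tokens
                price_out_per_m: float = 15.00):  # assumed $ per 1M output tokens
    """Return the model cost in dollars for reviewing one contract."""
    input_tokens = pages * tokens_per_page
    cost = (input_tokens * price_in_per_m + output_tokens * price_out_per_m) / 1_000_000
    return round(cost, 4)

# A 20-page contract: 12,000 input tokens plus a 1,000-token summary.
print(review_cost(20))  # → 0.051, i.e. about five cents per document
```

Under these toy assumptions, the per-document model cost is literally pennies on the dollar; even if the firm’s real all-in cost (licences, infrastructure, verification time) is ten or a hundred times the raw token cost, the “$1 legal review” question stops sounding far-fetched.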
In some cases the companies on the convergence path are just getting started and offer only a few additional skills (so far); in other cases, large companies with diverse histories offer almost the same multi-skill range across many areas.
People started discussing what they could do with NotebookLM after Google launched Audio Overview, which lets you listen to two AI hosts talking in depth about the documents you upload. Here’s what it can do:
Summarization: Automatically generate summaries of uploaded documents, highlighting key topics and suggesting relevant questions.
Question Answering: Users can ask NotebookLM questions about their uploaded documents, and answers will be provided based on the information contained within them.
Idea Generation: NotebookLM can assist with brainstorming and developing new ideas.
Source Grounding: A big plus against AI chatbot hallucination, NotebookLM allows users to ground the responses in specific documents they choose.
…plus several other items
The posting also lists several ideas to try with NotebookLM such as:
Idea 2: Study Companion
Upload all your course materials and ask NotebookLM to turn them into Question-and-Answer format, a glossary, or a study guide.
Get a breakdown of the course materials to understand them better.
“Google’s AI note-taking app NotebookLM can now explain complex topics to you out loud”
With more immersive text-to-video and audio products soon available and the rise of apps like Suno AI, how we “experience” Generative AI is also changing from a chatbot of 2 years ago, to a more multi-modal educational journey. The AI tools on the research and curation side are also starting to reflect these advancements.
1. Upload a variety of sources for NotebookLM to use.
You can use …
websites
PDF files
links to websites
any text you’ve copied
Google Docs and Slides
even Markdown
You can’t link it to YouTube videos, but you can copy/paste the transcript (and maybe type a little context about the YouTube video before pasting the transcript).
2. Ask it to create resources.
3. Create an audio summary.
4. Chat with your sources.
5. Save (almost) everything.
I finally tried out Google’s newly announced NotebookLM generative AI application. It provides a set of LLM-powered tools to summarize documents. I fed it my dissertation and was surprised by how useful the output was.
The most impressive tool creates a podcast episode, complete with dual hosts in conversation about the document. First – these are AI-generated hosts. Synthetic voices, speaking for synthetic hosts. And holy moly is it effective. Second – although I’d initially thought the conversational summary would be a dumb gimmick, it is surprisingly powerful.
4 Tips for Designing AI-Resistant Assessments — from techlearning.com by Steve Baule and Erin Carter As AI continues to evolve, instructors must modify their approach by designing meaningful, rigorous assessments.
As instructors work through revising assessments to be resistant to generation by AI tools with little student input, they should consider the following principles:
Incorporate personal experiences and local content into assignments
Ask students for multi-modal deliverables
Assess the developmental benchmarks for assignments and transition assignments further up Bloom’s Taxonomy
He added that he wants to avoid a global “AI divide” and that Google is creating a $120 million Global AI Opportunity Fund through which it will “make AI education and training available in communities around the world” in partnership with local nonprofits and NGOs.
Google on Thursday announced new updates to its AI note-taking and research assistant, NotebookLM, allowing users to get summaries of YouTube videos and audio files and even create sharable AI-generated audio discussions…
As we navigate the rapidly evolving landscape of artificial intelligence in education, a troubling trend has emerged. What began as cautious skepticism has calcified into rigid opposition. The discourse surrounding AI in classrooms has shifted from empirical critique to categorical rejection, creating a chasm between the potential of AI and its practical implementation in education.
This hardening of attitudes comes at a significant cost. While educators and policymakers debate, students find themselves caught in the crossfire. They lack safe, guided access to AI tools that are increasingly ubiquitous in the world beyond school walls. In the absence of formal instruction, many are teaching themselves to use these tools, often in less than productive ways. Others live in a state of constant anxiety, fearing accusations of AI reliance in their work. These are just a few symptoms of an overarching educational culture that has become resistant to change, even as the world around it transforms at an unprecedented pace.
Yet, as this calcification sets in, I find myself in a curious position: the more I thoughtfully integrate AI into my teaching practice, the more I witness its potential to enhance and transform education
The urgency to integrate AI competencies into education is about preparing students not just to adapt to inevitable changes but to lead the charge in shaping an AI-augmented world. It’s about equipping them to ask the right questions, innovate responsibly, and navigate the ethical quandaries that come with such power.
AI in education should augment and complement learners’ aptitude and expertise, personalize and optimize the learning experience, and support lifelong learning and development. AI in education should be a national priority and a collaborative effort among all stakeholders, to ensure that AI is designed and deployed in an ethical, equitable, and inclusive way that respects the diversity and dignity of all learners and educators and that promotes the common good and social justice. AI in education should be about the production of AI, not just the consumption of AI, meaning that learners and educators should have the opportunity to learn about AI, to participate in its creation and evaluation, and to shape its impact and direction.
Today we rolled out OpenAI o1-preview and o1-mini to all ChatGPT Plus/Team users & Tier 5 developers in the API.
o1 marks the start of a new era in AI, where models are trained to “think” before answering through a private chain of thought. The more time they take to think, the…
This is @Google‘s wow moment in AI.
NotebookLM can generate engaging podcasts on your uploaded material for FREE.
I tested it by uploading the latest issue of the Tensor, and it generated a podcast for me within two minutes.
“Who to follow in AI” in 2024? — from ai-supremacy.com by Michael Spencer Part III – #35-55 – I combed the internet, I found the best sources of AI insights, education and articles. LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts
This list features both some of the best newsletters on AI and people who make LinkedIn posts about AI papers, advances and breakthroughs. In today’s article we’ll be meeting entries 19-34 in a list of 180+.
Newsletter Writers
YouTubers
Engineers
Researchers who write
Technologists who are Creators
AI Educators
AI Evangelists of various kinds
Futurism writers and authors
I have been sharing the list in reverse chronological order on LinkedIn here.
Inside Google’s 7-Year Mission to Give AI a Robot Body — from wired.com by Hans Peter Brondmo As the head of Alphabet’s AI-powered robotics moonshot, I came to believe many things. For one, robots can’t come soon enough. For another, they shouldn’t look like us.
Learning to Reason with LLMs — from openai.com We are introducing OpenAI o1, a new large language model trained with reinforcement learning to perform complex reasoning. o1 thinks before it answers—it can produce a long internal chain of thought before responding to the user.
As a preview of the upcoming Summit interview, here are Khan’s views on two critical questions, edited for space and clarity:
What are the enduring human work skills in a world with ever-advancing AI? Some people say students should study liberal arts. Others say deep domain expertise is the key to remaining professionally relevant. Others say you need to have the skills of a manager to be able to delegate to AI. What do you think are the skills or competencies that ensure continued relevance professionally, employability, etc.?
A lot of organizations are thinking about skills-based approaches to their talent. It involves questions like, ‘Does someone know how to do this thing or not?’ And what are the ways in which they can learn it and have some accredited way to know they actually have done it? That is one of the ways in which people use Khan Academy. Do you have a view of skills-based approaches within workplaces, and any thoughts on how AI tutors and training fit within that context?
Today, I’m excited to share with you all the fruit of our effort at @OpenAI to create AI models capable of truly general reasoning: OpenAI’s new o1 model series! (aka Strawberry) Let me explain. 1/ pic.twitter.com/aVGAkb9kxV
We’ve developed a new series of AI models designed to spend more time thinking before they respond. Here is the latest news on o1 research, product and other updates.
The wait is over. OpenAI has just released its long-rumored “Strawberry” model, now called OpenAI o1.
It brings advanced reasoning capabilities and can generate entire video games from a single prompt.
Think of it as ChatGPT evolving from fast, intuitive thinking (System-1) to deeper, more deliberate… pic.twitter.com/uAMihaUjol
OpenAI Strawberry (o1) is out! We are finally seeing the paradigm of inference-time scaling popularized and deployed in production. As Sutton said in the Bitter Lesson, there are only two techniques that scale indefinitely with compute: learning and search. It’s time to shift focus to… pic.twitter.com/jTViQucwxr
The new AI model, called o1-preview (why are the AI companies so bad at names?), lets the AI “think through” a problem before solving it. This lets it address very hard problems that require planning and iteration, like novel math or science questions. In fact, it can now beat human PhD experts in solving extremely hard physics problems.
To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.
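To build intuition for why spending extra inference-time compute helps, here is a toy sketch of one well-known technique, majority-vote (self-consistency) sampling. This is an illustration of the general inference-time-scaling idea, not how o1 actually works; OpenAI has not published o1’s mechanism, and the “solver” below is a stand-in, not a language model:

```python
import random
from collections import Counter

def noisy_solver(correct_answer: int = 42, p_correct: float = 0.4) -> int:
    """A toy 'model' that returns the right answer only 40% of the time."""
    if random.random() < p_correct:
        return correct_answer
    return random.randint(0, 100)  # otherwise, a random wrong guess

def majority_vote(n_samples: int) -> int:
    """Spend more inference-time compute: sample n answers, keep the mode."""
    answers = [noisy_solver() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

random.seed(0)
# One sample is right ~40% of the time; wrong answers scatter across many
# values, so the correct answer dominates the vote as n grows.
single = sum(noisy_solver() == 42 for _ in range(1000)) / 1000
voted = sum(majority_vote(50) == 42 for _ in range(100)) / 100
print(f"single-shot accuracy ≈ {single:.2f}, 50-vote accuracy ≈ {voted:.2f}")
```

The jump from ~40% single-shot accuracy to near-perfect voted accuracy is the essence of “the more time they take to think, the better”: more samples (or, in o1’s case, a longer hidden chain of thought) buy reliability at the price of compute.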
What is the point of Super Realistic AI? — from Heather Cooper who runs Visually AI on Substack
The arrival of super realistic AI image generation, powered by models like Midjourney, FLUX.1, and Ideogram, is transforming the way we create and use visual content.
Recently, many creators (myself included) have been exploring super realistic AI more and more.
But where can this actually be used?
Super realistic AI image generation will have far-reaching implications across various industries and creative fields. Its importance stems from its ability to bridge the gap between imagination and visual representation, offering multiple opportunities for innovation and efficiency.
Today, we’re introducing Audio Overview, a new way to turn your documents into engaging audio discussions. With one click, two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.
Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.
Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content.
We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.