How generative AI expands curiosity and understanding with LearnLM — from blog.google
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.
Generative AI is fundamentally changing how we’re approaching learning and education, enabling powerful new ways to support educators and learners. It’s taking curiosity and understanding to the next level — and we’re just at the beginning of how it can help us reimagine learning.
Today we’re introducing LearnLM: our new family of models fine-tuned for learning, based on Gemini.
On YouTube, a conversational AI tool makes it possible to figuratively “raise your hand” while watching academic videos to ask clarifying questions, get helpful explanations or take a quiz on what you’ve been learning. This even works with longer educational videos like lectures or seminars thanks to the Gemini model’s long-context capabilities. These features are already rolling out to select Android users in the U.S.
…
Learn About is a new Labs experience that explores how information can turn into understanding by bringing together high-quality content, learning science and chat experiences. Ask a question and it helps guide you through any topic at your own pace — through pictures, videos, webpages and activities — and you can upload files or notes and ask clarifying questions along the way.
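The long-context capability mentioned above is the same one Gemini exposes to developers. Purely as an illustration (not how the YouTube or LearnLM features are built), here is a minimal sketch of long-context question answering over a lecture transcript, assuming the publicly available google-generativeai Python SDK; the transcript file, question and API key are placeholders:

```python
# Rough sketch only, assuming the public google-generativeai Python SDK.
# The transcript file, question, and API key below are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical key

# Gemini 1.5 Pro's long context window lets an entire lecture transcript
# ride along with the question in a single request.
model = genai.GenerativeModel("gemini-1.5-pro")

with open("lecture_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

question = "Which three claims does the lecturer support with data?"

response = model.generate_content(
    [f"Here is a lecture transcript:\n{transcript}", question]
)
print(response.text)
```

Because the whole transcript fits in the model's context window, a question like this needs no chunking or retrieval step.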
Google I/O 2024: An I/O for a new generation — from blog.google
The Gemini era
A year ago on the I/O stage we first shared our plans for Gemini: a frontier model built from the beginning to be natively multimodal, one that could reason across text, images, video, code and more. It marks a big step toward turning any input into any output — an “I/O” for a new generation.
- Gemini Era
- Multimodality and long context
- AI agents
- Breaking new ground
- Search
- More intelligent Gemini experiences
- Responsible AI
- Creating the future
Google just announced huge Gemini updates, a Sora competitor, AI agents, and more.
The 12 most impressive announcements at Google I/O:
1. Project Astra: An AI agent that can see AND hear what you do live in real-time. pic.twitter.com/sA2YT80O5G
— Rowan Cheung (@rowancheung) May 15, 2024
Daily Digest: Google I/O 2024 – AI search is here. — from bensbites.beehiiv.com
PLUS: It’s got Agents, Video and more. And, Ilya leaves OpenAI
- Google is integrating AI across its entire ecosystem: Search, Workspace, Android, etc. In true Google fashion, many features are “coming later this year”. If they ship and perform like the demos, Google will gain a serious upper hand over OpenAI/Microsoft.
- All of the AI features across Google products will be powered by Gemini 1.5 Pro, Google’s strongest model and one of the top models available. A new Gemini 1.5 Flash model also launched; it’s faster and much cheaper (see the sketch after this list).
- Google has ambitious projects in the pipeline. Those include a real-time voice assistant called Astra, a long-form video generator called Veo, plans for end-to-end agents, virtual AI teammates and more.
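Since 1.5 Pro and 1.5 Flash are developer-facing model names, here is a minimal sketch of the speed/cost trade-off, assuming the publicly available google-generativeai Python SDK; the summarize() helper and API key are illustrative only, and the same call works with either model name:

```python
# Minimal sketch, assuming the public google-generativeai Python SDK.
# The summarize() helper and API key are illustrative placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical key


def summarize(text: str, fast: bool = True) -> str:
    # Flash trades some quality for speed and cost; Pro is the stronger model.
    model_name = "gemini-1.5-flash" if fast else "gemini-1.5-pro"
    model = genai.GenerativeModel(model_name)
    response = model.generate_content(f"Summarize in two sentences:\n\n{text}")
    return response.text


print(summarize("Google I/O 2024 introduced Gemini 1.5 Flash alongside 1.5 Pro..."))
```

Switching tiers is a one-string change, which is why the cheaper Flash model matters for high-volume features.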
Google just casually announced Veo, a new rival to OpenAI’s Sora.
It can generate insanely good 1080p video up to 60 seconds.
9 wild examples:
— Proper (@ProperPrompter) May 14, 2024
New ways to engage with Gemini for Workspace — from workspace.google.com
Today at Google I/O we’re announcing new, powerful ways to get more done in your personal and professional life with Gemini for Google Workspace. Gemini in the side panel of your favorite Workspace apps is rolling out more broadly and will use the 1.5 Pro model for answering a wider array of questions and providing more insightful responses. We’re also bringing more Gemini capabilities to your Gmail app on mobile, helping you accomplish more on the go. Lastly, we’re showcasing how Gemini will become the connective tissue across multiple applications with AI-powered workflows. And all of this comes fresh on the heels of the innovations and enhancements we announced last month at Google Cloud Next.
Google’s Gemini updates: How Project Astra is powering some of I/O’s big reveals — from techcrunch.com by Kyle Wiggers
Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it — and the people conversing with it.
At the Google I/O 2024 developer conference on Tuesday, the company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.
Generative AI in Search: Let Google do the searching for you — from blog.google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.