Nvidia Earnings: Stock Rallies As AI Giant Reports 600% Profit Explosion, 10-For-1 Stock Split — from forbes.com by Derek Saul
- Nvidia reported $6.12 earnings per share and $26 billion of sales for the three-month period ending April 30, shattering mean analyst forecasts of $5.60 and $24.59 billion, according to FactSet.
- Nvidia’s profits and revenues skyrocketed by 628% and 268% compared to 2023’s comparable period, respectively.
- This was Nvidia’s most profitable and highest-sales quarter ever, topping the record $12.3 billion net income and $22.1 billion revenue set in the quarter ending this January.
- Driving the numerous superlatives for Nvidia’s financial growth over the last year is unsurprisingly its AI-intensive datacenter division, which raked in $22.6 billion of revenue last quarter, a 427% year-over-year increase and a whopping 20 times higher than the $1.1 billion the segment brought in in 2020.
Per ChatGPT today:
NVIDIA is a prominent technology company known for its contributions to various fields, primarily focusing on graphics processing units (GPUs) and artificial intelligence (AI). Here’s an overview of NVIDIA’s main areas of activity:
1. **Graphics Processing Units (GPUs):**
- **Consumer GPUs:** NVIDIA is famous for its GeForce series of GPUs, which are widely used in gaming and personal computing for their high performance and visual capabilities.
- **Professional GPUs:** NVIDIA’s Quadro series is designed for professional applications like 3D modeling, CAD (Computer-Aided Design), and video editing.
2. **Artificial Intelligence (AI) and Machine Learning:**
- NVIDIA GPUs are extensively used in AI research and development. They provide the computational power needed for training deep learning models.
- The company offers specialized hardware for AI, such as the NVIDIA Tesla and A100 GPUs, which are used in data centers and supercomputing environments.
3. **Data Centers:**
- NVIDIA develops high-performance computing solutions for data centers, including GPU-accelerated servers and AI platforms. These products are essential for tasks like big data analytics, scientific simulations, and AI workloads.
4. **Autonomous Vehicles:**
- Through its DRIVE platform, NVIDIA provides hardware and software solutions for developing autonomous vehicles. This includes AI-based systems for perception, navigation, and decision-making.
5. **Edge Computing:**
- NVIDIA’s Jetson platform caters to edge computing, enabling AI-powered devices and applications to process data locally rather than relying on centralized data centers.
6. **Gaming and Entertainment:**
- Beyond GPUs, NVIDIA offers technologies like G-SYNC (for smoother gaming experiences) and NVIDIA GameWorks (a suite of tools for game developers).
7. **Healthcare:**
- NVIDIA’s Clara platform utilizes AI and GPU computing to advance medical imaging, genomics, and other healthcare applications.
8. **Omniverse:**
- NVIDIA Omniverse is a real-time graphics collaboration platform for 3D production pipelines. It’s designed for industries like animation, simulation, and visualization.
9. **Crypto Mining:**
- NVIDIA GPUs are also popular in the cryptocurrency mining community, although the company has developed specific products like the NVIDIA CMP (Cryptocurrency Mining Processor) to cater to this market without impacting the availability of GPUs for gamers and other users.
Overall, NVIDIA’s influence spans a broad range of industries, driven by its innovations in GPU technology and AI advancements.
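To put item 2 of that overview in concrete terms: frameworks such as PyTorch hand the heavy tensor math of training off to an NVIDIA GPU through CUDA when one is present. Here is a minimal, hypothetical sketch of a single training step (a toy model and fabricated data, not tied to any particular NVIDIA product):
```python
import torch
import torch.nn as nn

# Use an NVIDIA GPU via CUDA when one is present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 1).to(device)                 # toy one-layer model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(32, 10, device=device)              # fabricated batch of 32 examples
y = torch.randn(32, 1, device=device)                # fabricated targets

optimizer.zero_grad()
loss = loss_fn(model(x), y)                          # forward pass (on the GPU if available)
loss.backward()                                      # backward pass computes gradients
optimizer.step()                                     # one optimization step

print(f"ran on {device}, loss = {loss.item():.4f}")
```
If no CUDA-capable GPU is available, the same code simply runs on the CPU; the device abstraction is what lets the GPU do the heavy lifting when it is there.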
AI in Education: Google’s LearnLM product has incredible potential — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Google’s Ed Suite is giving Teachers new ideas for incorporating AI into the classroom.
We often talk about what Generative AI will do for coders, healthcare, science or even finance, but what about the benefits for the next generation? Permit me, if you will: here I’m thinking about teachers and students.
It’s no secret that some of the most active users of ChatGPT in its heyday were students. But how are other major tech firms thinking about this?
I actually think one of the best products with the highest ceiling from Google I/O 2024 is LearnLM. It has to be way more than a chatbot; it has to feel like a multimodal tutor. I can imagine frontier model agents (H) doing this fairly well.
What if everyone, everywhere could have their own personal AI tutor, on any topic?
ChatGPT4o Is the TikTok of AI Models — from nickpotkalitsky.substack.com by Nick Potkalitsky
In Search of Better Tools for AI Access in K-12 Classrooms
Nick makes the case that we should pause the use of OpenAI products in the classroom:
In light of these observations, it’s clear that we must pause and rethink the use of OpenAI products in our classrooms, except for rare cases where accessibility needs demand it. The rapid consumerization of AI, epitomized by GPT4o’s transformation into an AI salesperson, calls for caution.
The Future of AI in Education: Google and OpenAI Strategies Unveiled — from edtechinsiders.substack.com by Ben Kornell
Google’s Strategy: AI Everywhere
Key Points
- Google will win through seamless Gemini integration across all Google products
- Enterprise approach in education to make Gemini the default at low/no additional cost
- Functional use cases and model tuning demonstrate Google’s knowledge of educators
OpenAI’s Strategy: ChatGPT as the Front Door
Key Points
- OpenAI taking a consumer-led freemium approach to education
- API powers an app layer that delivers education-specific use cases
- Betting on a large user base + app marketplace
Khan Academy and Microsoft partner to expand access to AI tools that personalize teaching and help make learning fun — from news.microsoft.com
[On 5/21/24] at Microsoft Build, Microsoft and Khan Academy announced a new partnership that aims to bring these time-saving and lesson-enhancing AI tools to millions of educators. By donating access to Azure AI-optimized infrastructure, Microsoft is enabling Khan Academy to offer all K-12 educators in the U.S. free access to the pilot of Khanmigo for Teachers, which will now be powered by Azure OpenAI Service.
The two companies will also collaborate to explore opportunities to improve AI tools for math tutoring in an affordable, scalable and adaptable way with a new version of Phi-3, a family of small language models (SLMs) developed by Microsoft.
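Purely as an illustration of what a small language model (SLM) like Phi-3 looks like from a developer’s side, here is a hedged sketch that prompts the publicly released Phi-3 Mini checkpoint through Hugging Face Transformers. The model ID and the sample question are assumptions made for the example; this is not the math-tutoring variant described above.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID: the general-purpose Phi-3 Mini checkpoint on Hugging Face,
# not the tutoring-oriented version mentioned in the announcement.
model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",       # pick a sensible dtype for the available hardware
    device_map="auto",        # place the model on a GPU if one is available
    trust_remote_code=True,   # early Phi-3 releases shipped custom modeling code
)

messages = [
    {"role": "user", "content": "Explain, step by step, why 3/4 is greater than 2/3."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
Because Phi-3 Mini has only a few billion parameters, a sketch like this can run on a single consumer GPU, which is part of the appeal of SLMs for affordable, scalable tutoring tools.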
Also relevant/see:
Khan Academy and Microsoft are teaming up to give teachers a free AI assistant — from fastcompany.com by Steven Melendez
AI assistant Khanmigo can help time-strapped teachers come up with lesson ideas and test questions, the companies say.
Khan Academy’s AI assistant, Khanmigo, has earned praise for helping students to understand and practice everything from math to English, but it can also help teachers devise lesson plans, formulate questions about assigned readings, and even generate reading passages appropriate for students at different levels. More than just a chatbot, the software offers specific AI-powered tools for generating quizzes and assignment instructions, drafting lesson plans, and formulating letters of recommendation.
…
Having a virtual teaching assistant is especially valuable in light of recent research from the RAND Corporation, which found that teachers work longer hours than most working adults, including administrative and prep work outside the classroom.
Grasp is the world’s first generative AI platform for finance professionals.
We build domain-specific AI systems that address the complex needs of investment bankers and management consultants.
By automating finance workflows, Grasp dramatically increases employee productivity and satisfaction.
OPINION: Americans need help paying for new, nondegree programs and college alternatives — from hechingerreport.org by Connor Diemand-Yauman and Rebecca Taber Staehelin
Updating the Pell Grant program would be an excellent way to support much-needed alternatives
Janelle’s story is all too familiar throughout the U.S. — stuck in a low-paying job, struggling to make ends meet after being failed by college. Roughly 40 million Americans have left college without completing a degree — historically seen as a golden ticket to the middle class.
Yet even with a degree, many fall short of economic prosperity.
Why children with disabilities are missing school and losing skills — from npr.org by Cory Turner
The fact that a district could struggle so mightily with special education staffing that students are missing school – that’s not just a Del Norte problem. A recent federal survey of school districts across the U.S. found special education jobs were among the hardest to staff – and vacancies were widespread. But what’s happening in Del Norte is extreme. Which is why the Lenovers and five other families are suing the school district, as well as state education leadership, with help from the Disability Rights Education and Defense Fund.
…
The district sits hidden away like a secret between Oregon, the frigid Pacific and some of the largest redwood trees in the world. It’s too isolated and the pay is not competitive enough, Harris says, to attract workers from outside Del Norte. Locally, these aides – like the one Emma requires – earn about as much as they would working at McDonald’s.
Introducing Copilot+ PCs — from blogs.microsoft.com
[On May 20th], at a special event on our new Microsoft campus, we introduced the world to a new category of Windows PCs designed for AI, Copilot+ PCs.
Copilot+ PCs are the fastest, most intelligent Windows PCs ever built. With powerful new silicon capable of an incredible 40+ TOPS (trillion operations per second), all-day battery life and access to the most advanced AI models, Copilot+ PCs will enable you to do things you can’t on any other PC. Easily find and remember what you have seen in your PC with Recall, generate and refine AI images in near real-time directly on the device using Cocreator, and bridge language barriers with Live Captions, translating audio from 40+ languages into English.
From DSC:
As a first, off-the-hip look, Recall could be fraught with security/privacy-related issues. But what do I know? The Neuron states “Microsoft assures that everything Recall sees remains private.” Ok…
From The Rundown AI concerning the above announcements:
The details:
- A new system enables Copilot+ PCs to run AI workloads up to 20x faster and 100x more efficiently than traditional PCs.
- Windows 11 has been rearchitected specifically for AI, integrating the Copilot assistant directly into the OS.
- New AI experiences include a new feature called Recall, which allows users to search for anything they’ve seen on their screen with natural language.
- Copilot’s new screen-sharing feature allows AI to watch, hear, and understand what a user is doing on their computer and answer questions in real-time.
- Copilot+ PCs will start at $999, and ship with OpenAI’s latest GPT-4o models.
Why it matters: Tony Stark’s all-powerful JARVIS AI assistant is getting closer to reality every day. Once Copilot, ChatGPT, Project Astra, or anyone else can not only respond but start executing tasks autonomously, things will start getting really exciting — and likely initiate a whole new era of tech work.
From DSC:
My wife does a lot of work with foster families and CASA kids, and she recommends these resources for helping children who have experienced adversity, early harm, toxic stress, and/or trauma.
TBRI: Trust Based Relational Intervention — from child.tcu.edu by Karyn Purvis Institute of Child Development
TBRI® is an attachment-based, trauma-informed intervention that is designed to meet the complex needs of vulnerable children. TBRI® uses Empowering Principles to address physical needs, Connecting Principles for attachment needs, and Correcting Principles to disarm fear-based behaviors. While the intervention is based on years of attachment, sensory processing, and neuroscience research, the heartbeat of TBRI® is connection.
The Connected Child by Karyn Purvis
The adoption of a child is always a joyous moment in the life of a family. Some adoptions, though, present unique challenges. Welcoming these children into your family, and addressing their special needs, requires care, consideration, and compassion. Written by two research psychologists specializing in adoption and attachment, The Connected Child will help you:
- Build bonds of affection and trust with your adopted child
- Effectively deal with any learning or behavioral disorders
- Discipline your child with love without making him or her feel threatened
“Frameless” art museum in London where art truly comes to life pic.twitter.com/O4bP2NUE1K
— Historic Vids (@historyinmemes) May 17, 2024
Landscapes Radiate Light and Drama in Erin Hanson’s Vibrant Oil Paintings — from thisiscolossal.com by Kate Mothes and Erin Hanson
AI’s New Conversation Skills Eyed for Education — from insidehighered.com by Lauren Coffey
The latest ChatGPT’s more human-like verbal communication has professors pondering personalized learning, on-demand tutoring and more classroom applications.
ChatGPT’s newest version, GPT-4o (the “o” standing for “omni,” meaning “all”), has a more realistic voice and quicker verbal response time, both aiming to sound more human. The version, which should be available to free ChatGPT users in coming weeks—a change also hailed by educators—allows people to interrupt it while it speaks, simulates more emotions with its voice and translates languages in real time. It also can understand instructions in text and images and has improved video capabilities.
…
Ajjan said she immediately thought the new vocal and video capabilities could allow GPT to serve as a personalized tutor. Personalized learning has been a focus for educators grappling with the looming enrollment cliff and for those pushing for student success.
There’s also the potential for role playing, according to Ajjan. She pointed to mock interviews students could do to prepare for job interviews, or, for example, using GPT to play the role of a buyer to help prepare students in an economics course.
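As a rough illustration of the role-playing idea Ajjan describes, here is a minimal, hypothetical sketch using OpenAI’s Python SDK. The “buyer” persona, the prompt wording, and the negotiation scenario are all invented for the example:
```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompt: GPT-4o plays a price-sensitive buyer so an
# economics student can practice negotiating. All details are invented here.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a price-sensitive buyer in a negotiation exercise for an "
                "introductory economics course. Stay in character, push back on the "
                "student's offers, and do not reveal your maximum willingness to pay."
            ),
        },
        {"role": "user", "content": "Hi, I'm selling a used laptop for $600. Are you interested?"},
    ],
)
print(response.choices[0].message.content)
```
Swapping the system prompt swaps the role, so the same pattern could cover mock job interviews or other practice scenarios.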
The State of the American High School in 2024 — from gettingsmart.com by Tom Vander Ark
Over the past 120 days, we’ve conducted tours of over 50 high schools, visiting more than 1,000 classrooms across cities including Boston, Dallas, Los Angeles, Northern Colorado, Kansas City, the Twin Cities, Pittsburgh, and San Diego. These schools were purposefully selected for their dedication to real-world learning, positioning them at the forefront of innovative education. These visits showed schools leading the way into new pathways, active learning methods, and work-based learning initiatives. From our observations at these leading schools, we’ve identified 8 key insights about the state of American high schools.
We are on the brink of a significant transformation in how education qualifications are perceived and valued, thanks to a strategic move by ETS to make Mastery Transcript Consortium (MTC) a subsidiary. This pivotal development marks a shift from traditional metrics of educational success—courses and grades—to a more nuanced representation of student abilities through skills transcripts.
The partnership between ETS and MTC is not just a merger of organizations, but a fusion of visions that aim to recalibrate educational assessment. The collaboration is set to advance “Skills for the Future,” focusing on authentic, dynamic assessment methods that provide clear, actionable insights into student capabilities. This shift away from the century-old Carnegie Unit model, which measures educational attainment by time rather than skill mastery, aims to foster learning environments that prioritize personal growth over time spent in a classroom.
As we move forward, this approach could redefine success in education, making learning experiences more adaptive, equitable, and aligned with the demands of the modern world.
See:
Skills Transcripts at Scale: Why The ETS & MTC Partnership is a Big Deal — from gettingsmart.com by Tom Vander Ark
Key Points
- One of the core problems is that education is based on time rather than learning.
- We finally have a chance to move courses and grades into the background, foreground powerful personalized learning experiences, and capture and communicate the resulting capabilities in much more descriptive ways—and do it at scale.
How to Help Older Students Who Struggle to Read — from nataliewexler.substack.com by Natalie Wexler
Many students above third grade need help deciphering words with multiple syllables
Kockler hypothesizes that the reading struggles of many older students are due in large part to two issues. One has to do with “linguistic difference.” If a child’s family and community speak a variant of English that differs from the kind generally used in books and by teachers—for example, African-American English—it could be harder for them to decode words and connect those words to their meanings.
The Decoding Threshold
The other issue has to do with difficulties in decoding multisyllabic words. Kockler points to a couple of large-scale research studies that have identified a “decoding threshold.”
In theory, students’ reading comprehension ability should improve as they advance to higher grade levels—and it often does. But the researchers found that if students are above fourth grade—past the point where they’re likely to get decoding instruction—and their decoding ability is below a certain level, they’re “extremely unlikely [to] make significant progress in reading comprehension in the following years.” The studies, which were conducted in a high-poverty, largely African-American district, found that almost 40% of fifth-graders and 20% of tenth-graders included in the sample fell below the decoding threshold.
What Is Doxxing, and How Can Educators Protect Their Privacy Online? — from edweek.org by Sarah D. Sparks
The education profession relies on teachers being accessible to their students and families and open to sharing with colleagues. But a little information can be a dangerous thing.
A Guide to the GPT-4o ‘Omni’ Model — from aieducation.substack.com by Claire Zau
The closest thing we have to “Her” and what it means for education / workforce
Today, OpenAI introduced its new flagship model, GPT-4o, which delivers more powerful capabilities and real-time voice interactions to its users. The letter “o” in GPT-4o stands for “Omni”, referring to its enhanced multimodal capabilities. While ChatGPT has long offered a voice mode, GPT-4o is a step change in allowing users to interact with an AI assistant that can reason across voice, text, and vision in real-time.
Facilitating interaction between humans and machines (with reduced latency) represents a “small step for machine, giant leap for machine-kind” moment.
Everyone gets access to GPT-4: “the special thing about GPT-4o is it brings GPT-4 level intelligence to everyone, including our free users”, said CTO Mira Murati. Free users will also get access to custom GPTs in the GPT store, Vision and Code Interpreter. ChatGPT Plus and Team users will be able to start using GPT-4o’s text and image capabilities now.
ChatGPT launched a desktop macOS app: it’s designed to integrate seamlessly into anything a user is doing on their keyboard. A PC Windows version is also in the works (notable that a Mac version is being released first, given the $10B Microsoft relationship).
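To make the multimodal point above concrete, here is a minimal sketch of sending text plus an image to GPT-4o through OpenAI’s Python SDK. The image URL is a placeholder and the prompt is invented for the example:
```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder image URL; any publicly reachable image would work here.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this diagram and summarize what it shows."},
                {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```
The real-time voice experience shown in OpenAI’s demos lives in the ChatGPT apps themselves; a basic call like this covers only the text-and-image side.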
Also relevant, see:
OpenAI Drops GPT-4 Omni, New ChatGPT Free Plan, New ChatGPT Desktop App — from theneuron.ai [podcast]
In a surprise launch, OpenAI dropped GPT-4 Omni, their new leading model. They also made a bunch of paid features in ChatGPT free and announced a new desktop app. Pete breaks down what you should know and what this says about AI.
What really matters — from theneurondaily.com
- Free users get 16 ChatGPT-4o messages per 3 hours.
- Plus users get 80 ChatGPT-4o messages per 3 hours.
- Teams users get 160 ChatGPT-4o messages per 3 hours.
How generative AI expands curiosity and understanding with LearnLM — from blog.google
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.
Generative AI is fundamentally changing how we’re approaching learning and education, enabling powerful new ways to support educators and learners. It’s taking curiosity and understanding to the next level — and we’re just at the beginning of how it can help us reimagine learning.
Today we’re introducing LearnLM: our new family of models fine-tuned for learning, based on Gemini.
On YouTube, a conversational AI tool makes it possible to figuratively “raise your hand” while watching academic videos to ask clarifying questions, get helpful explanations or take a quiz on what you’ve been learning. This even works with longer educational videos like lectures or seminars thanks to the Gemini model’s long-context capabilities. These features are already rolling out to select Android users in the U.S.
…
Learn About is a new Labs experience that explores how information can turn into understanding by bringing together high-quality content, learning science and chat experiences. Ask a question and it helps guide you through any topic at your own pace — through pictures, videos, webpages and activities — and you can upload files or notes and ask clarifying questions along the way.
Google I/O 2024: An I/O for a new generation — from blog.google
The Gemini era
A year ago on the I/O stage we first shared our plans for Gemini: a frontier model built to be natively multimodal from the beginning, that could reason across text, images, video, code, and more. It marks a big step in turning any input into any output — an “I/O” for a new generation.
- Gemini Era
- Multimodality and long context
- AI agents
- Breaking new ground
- Search
- More intelligent Gemini experiences
- Responsible AI
- Creating the future
Google just announced huge Gemini updates, a Sora competitor, AI agents, and more.
The 12 most impressive announcements at Google I/O:
1. Project Astra: An AI agent that can see AND hear what you do live in real-time. pic.twitter.com/sA2YT80O5G
— Rowan Cheung (@rowancheung) May 15, 2024
Daily Digest: Google I/O 2024 – AI search is here. — from bensbites.beehiiv.com
PLUS: It’s got Agents, Video and more. And, Ilya leaves OpenAI
- Google is integrating AI into all of its ecosystem: Search, Workspace, Android, etc. In true Google fashion, many features are “coming later this year”. If they ship and perform like the demos, Google will get a serious upper hand over OpenAI/Microsoft.
- All of the AI features across Google products will be powered by Gemini 1.5 Pro. It’s Google’s best model and one of the top models. A new Gemini 1.5 Flash model is also launched, which is faster and much cheaper.
- Google has ambitious projects in the pipeline. Those include a real-time voice assistant called Astra, a long-form video generator called Veo, plans for end-to-end agents, virtual AI teammates and more.
Google just casually announced Veo, a new rival to OpenAI’s Sora.
It can generate insanely good 1080p video up to 60 seconds.
9 wild examples:
— Proper (@ProperPrompter) May 14, 2024
New ways to engage with Gemini for Workspace — from workspace.google.com
Today at Google I/O we’re announcing new, powerful ways to get more done in your personal and professional life with Gemini for Google Workspace. Gemini in the side panel of your favorite Workspace apps is rolling out more broadly and will use the 1.5 Pro model for answering a wider array of questions and providing more insightful responses. We’re also bringing more Gemini capabilities to your Gmail app on mobile, helping you accomplish more on the go. Lastly, we’re showcasing how Gemini will become the connective tissue across multiple applications with AI-powered workflows. And all of this comes fresh on the heels of the innovations and enhancements we announced last month at Google Cloud Next.
Google’s Gemini updates: How Project Astra is powering some of I/O’s big reveals — from techcrunch.com by Kyle Wiggers
Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it — and the people conversing with it.
At the Google I/O 2024 developer conference on Tuesday, the company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.
Generative AI in Search: Let Google do the searching for you — from blog.google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.