Educators need to work with vendors and tech developers to ensure artificial intelligence-driven innovations for schools go hand-in-hand with managing the technology’s risks, recommends guidance released July 8 by the U.S. Department of Education.
Also, on somewhat related notes, see the following items:
I highly recommend the https://t.co/QE0Eze3qsr training for educators. It’s well-structured, engaging, and you learn a lot about prompting AI as you create your own custom bot for education. @shaanmasala has built something amazing. PlayLab is a nonprofit, the training is free,…
— Anna Mills, annamillsoer.bsky.social, she/her (@AnnaRMills) July 8, 2024
In this episode of the Next Big Idea podcast, host Rufus Griscom and Bill Gates are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called “AI First.” Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.
Key moments:
00:05 Bill Gates discusses AI’s transformative potential in revolutionizing technology.
02:21 Superintelligence is inevitable and marks a significant advancement in AI technology.
09:23 Future AI may integrate deeply as cognitive assistants in personal and professional life.
14:04 AI’s metacognitive advancements could revolutionize problem-solving capabilities.
21:13 AI’s next frontier lies in developing human-like metacognition for sophisticated problem-solving.
27:59 AI advancements empower both good and malicious intents, posing new security challenges.
28:57 Rapid AI development raises questions about controlling its global application.
33:31 Productivity enhancements from AI can significantly improve efficiency across industries.
35:49 AI’s future applications in consumer and industrial sectors are subjects of ongoing experimentation.
46:10 AI democratization could level the economic playing field, enhancing service quality and reducing costs.
51:46 AI plays a role in mitigating misinformation and bridging societal divides through enhanced understanding.
The team has summarized their primary contributions as follows.
The team has presented the first example of a simple, scalable oversight technique that greatly helps humans detect problems in real-world RLHF data more thoroughly.
Within the ChatGPT and CriticGPT training pools, the team has discovered that critiques produced by CriticGPT catch more inserted bugs and are preferred over those written by human contractors.
Compared to human contractors working alone, this research indicates that teams consisting of critic models and human contractors generate more thorough critiques. When compared to reviews generated exclusively by models, this partnership lowers the incidence of hallucinations.
This study introduces Force Sampling Beam Search (FSBS), an inference-time sampling and scoring technique that balances the trade-off between minimizing bogus concerns and surfacing genuine faults in LLM-generated critiques.
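To make that trade-off concrete, here is a rough, hypothetical sketch (our own illustration, not OpenAI’s implementation) of the kind of selection step an FSBS-style procedure performs: sample several candidate critiques, score each with a reward model, and add a bonus for how many distinct issues a critique flags, with one knob trading comprehensiveness against the risk of nitpicks and hallucinated problems.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Critique:
    text: str
    num_flagged_issues: int  # distinct problems this critique highlights

def select_critique(
    candidates: List[Critique],
    reward_model: Callable[[str], float],  # higher score = more helpful, accurate critique
    issue_bonus: float = 0.5,              # the precision-vs-comprehensiveness knob
) -> Critique:
    """Pick the candidate balancing reward-model score against issue coverage.

    A larger issue_bonus favors longer, more exhaustive critiques (better recall,
    but more nitpicks and hallucinations); a smaller one favors conservative ones.
    """
    return max(
        candidates,
        key=lambda c: reward_model(c.text) + issue_bonus * c.num_flagged_issues,
    )
```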
a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.
The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.
Google Translate can come in handy when you’re traveling or communicating with someone who speaks another language, and thanks to a new update, you can now connect with some 614 million more people. Google is adding 110 new languages to its Translate tool using its AI PaLM 2 large language model (LLM), which brings the total of supported languages to nearly 250. This follows the 24 languages added in 2022, including Indigenous languages of the Americas as well as those spoken across Africa and central Asia.
Gen-3 Alpha Text to Video is now available to everyone.
A new frontier for high-fidelity, fast and controllable video generation.
Meanwhile, a separate survey of faculty released Thursday by Ithaka S+R, a higher education consulting firm, showed that faculty—while increasingly familiar with AI—often do not know how to use it in classrooms. Two out of five faculty members are familiar with AI, the Ithaka report found, but only 14 percent said they are confident in their ability to use AI in their teaching. Just slightly more (18 percent) said they understand the teaching implications of generative AI.
“Serious concerns about academic integrity, ethics, accessibility, and educational effectiveness are contributing to this uncertainty and hostility,” the Ithaka report said.
The diverging views about AI are causing friction. Nearly a third of students said they have been warned not to use generative AI by professors, and more than half (59 percent) are concerned they will be accused of cheating with generative AI, according to the Pearson report, which was conducted with Morning Consult and surveyed 800 students.
What teachers want from AI — from hechingerreport.org by Javeria Salman When teachers designed their own AI tools, they built math assistants, tools for improving student writing, and more
An AI chatbot that walks students through how to solve math problems. An AI instructional coach designed to help English teachers create lesson plans and project ideas. An AI tutor that helps middle and high schoolers become better writers.
These aren’t tools created by education technology companies. They were designed by teachers tasked with using AI to solve a problem their students were experiencing.
Over five weeks this spring, about 300 people – teachers, school and district leaders, higher ed faculty, education consultants and AI researchers – came together to learn how to use AI and develop their own basic AI tools and resources. The professional development opportunity was designed by technology nonprofit Playlab.ai and faculty at the Relay Graduate School of Education.
Next-Gen Classroom Observations, Powered by AI — from educationnext.org by Michael J. Petrilli The use of video recordings in classrooms to improve teacher performance is nothing new. But the advent of artificial intelligence could add a helpful evaluative tool for teachers, measuring instructional practice relative to common professional goals with chatbot feedback.
Multiple companies are pairing AI with inexpensive, ubiquitous video technology to provide feedback to educators through asynchronous, offsite observation. It’s an appealing idea, especially given the promise and popularity of instructional coaching, as well as the challenge of scaling it effectively (see “Taking Teacher Coaching To Scale,” research, Fall 2018).
…
Enter AI. Edthena is now offering an “AI Coach” chatbot that offers teachers specific prompts as they privately watch recordings of their lessons. The chatbot is designed to help teachers view their practice relative to common professional goals and to develop action plans to improve.
To be sure, an AI coach is no replacement for human coaching.
We need to shift our thinking about GenAI tutors serving only as personal learning tools. The above activities illustrate how these tools can be integrated into contemporary classroom instruction. The activities should not be seen as prescriptive but merely suggestive of how GenAI can be used to promote social learning. Although I specifically mention only one online activity (“Blended Learning”), all can be adapted to work well in online or blended classes to promote social interaction.
Stealth AI — from higherai.substack.com by Jason Gulya (a Professor of English at Berkeley College) talks to Zack Kinzler What happens when students use AI all the time, but aren’t allowed to talk about it?
In many ways, this comes back to one of my general rules: You cannot ban AI in the classroom. You can only issue a gag rule.
And if you do issue a gag rule, then it deprives students of the space they often need to make heads or tails of this technology.
We need to listen to actual students talking about actual uses, and reflecting on their actual feelings. No more abstraction.
In this conversation, Jason Gulya (a Professor of English at Berkeley College) talks to Zack Kinzler about what students are saying about Artificial Intelligence and education.
Welcome to our monthly update for Teams for Education and thank you so much for being part of our growing community! We’re thrilled to share over 20 updates and resources and show them in action next week at ISTELive 24 in Denver, Colorado, US.
…
Copilot for Microsoft 365 – Educator features

Guided Content Creation
Coming soon to Copilot for Microsoft 365 is a guided content generation experience to help educators get started with creating materials like assignments, lesson plans, lecture slides, and more. The content will be created based on the educator’s requirements with easy ways to customize the content to their exact needs.

Standards alignment and creation
Quiz generation through Copilot in Forms
Suggested AI Feedback for Educators

Teaching extension
To better support educators with their daily tasks, we’ll be launching a built-in Teaching extension to help guide them through relevant activities and provide contextual, educator-based support in Copilot.

Education data integration

Copilot for Microsoft 365 – Student features
Interactive practice experiences
Flashcards activity
Guided chat activity
Learning extension in Copilot for Microsoft 365
…
New AI tools for Google Workspace for Education — from blog.google by Akshay Kirtikar and Brian Hendricks We’re bringing Gemini to teen students using their school accounts to help them learn responsibly and confidently in an AI-first future, and empowering educators with new tools to help create great learning experiences.
Not sure if you're behind or ahead in AI adoption? I created this guide to help you benchmark.
* many who think they're behind are actually on track, and some who think they're ahead are not ** these insights are my own opinion based on years of work with hundreds… pic.twitter.com/Wr28azOwDS
We have to provide instructors the support they need to leverage educational technologies like generative AI effectively in the service of learning. Given the amount of benefit that could accrue to students if powerful tools like generative AI were used effectively by instructors, it seems unethical not to provide instructors with professional development that helps them better understand how learning occurs and what effective teaching looks like. Without more training and support for instructors, the amount of student learning higher education will collectively “leave on the table” will only increase as generative AI gets more and more capable. And that’s a problem.
From DSC: As is often the case, David put together a solid posting here. A few comments/reflections on it:
I agree that more training/professional development is needed, especially regarding generative AI. This would help achieve a far greater ROI and impact.
The pace of change makes it difficult to see where the sand is settling…and thus what to focus on
The Teaching & Learning Groups out there are also trying to learn and grow in their knowledge (so that they can train others)
The administrators out there are also trying to figure out what all of this generative AI stuff is all about; and so are the faculty members. It takes time for educational technologies’ impact to roll out and be integrated into how people teach.
As we’re talking about multiple disciplines here, I think we need more team-based content creation and delivery.
There needs to be more research on how best to use AI — again, it would be helpful if the sand settled a bit first, so as not to waste time and $$. But then that research needs to be piped into the classrooms far better.
Introducing Gen-3 Alpha: Runway’s new base model for video generation.
Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions. https://t.co/YQNE3eqoWf
Introducing GEN-3 Alpha – The first of a series of new models built by creatives for creatives. Video generated with @runwayml‘s new Text-2-Video model.
Learning personalisation. LinkedIn continues to be bullish on its video-based learning platform, and it appears to have found a strong current among users who need to skill up in AI. Cohen said that traffic for AI-related courses — which include modules on technical skills as well as non-technical ones such as basic introductions to generative AI — has increased by 160% over last year.
You can be sure that LinkedIn is pushing its search algorithms to tap into the interest, but it’s also boosting its content with AI in another way.
For Premium subscribers, it is piloting what it describes as “expert advice, powered by AI.” Tapping into expertise from well-known instructors such as Alicia Reece, Anil Gupta, Dr. Gemma Leigh Roberts and Lisa Gates, LinkedIn says its AI-powered coaches will deliver responses personalized to users, as a “starting point.”
These will, in turn, also appear as personalized coaches that a user can tap while watching a LinkedIn Learning course.
Personalized learning for everyone: Whether or not you’re looking to make a change, the skills required in the workplace are expected to change by 68% by 2030.
Expert advice, powered by AI: We’re beginning to pilot the ability to get personalized practical advice instantly from industry leading business leaders and coaches on LinkedIn Learning, all powered by AI. The responses you’ll receive are trained by experts and represent a blend of insights that are personalized to each learner’s unique needs. While human professional coaches remain invaluable, these tools provide a great starting point.
Personalized coaching, powered by AI, when watching a LinkedIn course: As learners —including all Premium subscribers — watch our new courses, they can now simply ask for summaries of content, clarify certain topics, or get examples and other real-time insights, e.g. “Can you simplify this concept?” or “How does this apply to me?”
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
AI Policy 101: a Beginners’ Framework — from drphilippahardman.substack.com by Dr. Philippa Hardman How to make a case for AI experimentation & testing in learning & development
The role of learning & development
Given these risks, what can L&D professionals do to ensure generative AI contributes to effective learning? The solution lies in embracing the role of trusted learning advisors, guiding the use of AI tools in a way that prioritizes achieving learning outcomes over speed alone. Here are three key steps to achieve this:
1. Playtest and Learn About AI… 2. Set the Direction for AI to Be Learner-Centered…
3. Become Trusted Learning Advisors…
In the last year, AI has become even more intertwined with our education system. More teachers, parents, and students are aware of it and have used it themselves on a regular basis. It is all over our education system today.
While negative views of AI have crept up over the last year, students, teachers, and parents feel very positive about it in general. On balance they see positive uses for the technology in school, especially if they have used it themselves.
Most K-12 teachers, parents, and students don’t think their school is doing much about AI, despite its widespread use. Most say their school has no policy on it, is doing nothing to offer desired teacher training, and isn’t meeting the demand of students who’d like a career in a job that will need AI.
The AI vacuum in school policy means it is currently used “unauthorized,” while instead people want policies that encourage AI. Kids, parents, and teachers are figuring it out on their own/without express permission, whereas all stakeholders would rather have a policy that explicitly encourages AI from a thoughtful foundation.
There is much discourse about the rise and prevalence of AI in education and beyond. These debates often lack the perspectives of key stakeholders – parents, students and teachers.
In 2023, the Walton Family Foundation commissioned the first national survey of teacher and student attitudes toward ChatGPT. The findings showed that educators and students embrace innovation and are optimistic that AI can meaningfully support traditional instruction.
A new survey conducted May 7-15, 2024, showed that knowledge of and support for AI in education is growing among parents, students and teachers. More than 80% of each group says it has had a positive impact on education.
Apple announced “Apple Intelligence” at WWDC 2024, its name for a new suite of AI features for the iPhone, Mac, and more. Starting later this year, Apple is rolling out what it says is a more conversational Siri, custom, AI-generated “Genmoji,” and GPT-4o access that lets Siri turn to OpenAI’s chatbot when it can’t handle what you ask it for.
SAN FRANCISCO — Apple officially launched itself into the artificial intelligence arms race, announcing a deal with ChatGPT maker OpenAI to use the company’s technology in its products and showing off a slew of its own new AI features.
The announcements, made at the tech giant’s annual Worldwide Developers Conference on Monday in Cupertino, Calif., are aimed at helping the tech giant keep up with competitors such as Google and Microsoft, which have boasted in recent months that AI makes their phones, laptops and software better than Apple’s. In addition to Apple’s own homegrown AI tech, the company’s phones, computers and iPads will also have ChatGPT built in “later this year,” a huge validation of the importance of the highflying start-up’s tech.
The highly anticipated AI partnership is the first of its kind for Apple, which has been regarded by analysts as slower to adopt artificial intelligence than other technology companies such as Microsoft and Google.
The deal allows Apple’s millions of users to access technology from OpenAI, one of the highest-profile artificial intelligence companies of recent years. OpenAI has already established partnerships with a variety of technology and publishing companies, including a multibillion-dollar deal with Microsoft.
The real deal here is that Apple is literally putting AI into the hands of >1B people, most of whom will probably be using AI for the 1st time. And it’s delivering AI that’s actually useful (forget those Genmojis, we’re talking about implanting ChatGPT-4o’s brain into Apple devices).
It’s WWDC 2024 keynote time! Each year Apple kicks off its Worldwide Developers Conference with a few hours of just straight announcements, like the long-awaited Apple Intelligence and a makeover for smart AI assistant, Siri. We expected many of the announcements to revolve around the company’s artificial intelligence ambitions, and Apple didn’t disappoint. We also bring you news about Vision Pro and lots of feature refreshes.
Why Gamma is great for presentations — from Jeremy Caplan
Gamma has become one of my favorite new creativity tools. You can use it like Powerpoint or Google Slides, adding text and images to make impactful presentations. It lets you create vertical, square or horizontal slides. You can embed online content to make your deck stand out with videos, data or graphics. You can even use it to make quick websites.
Its best feature, though, is an easy-to-use application of AI. The AI will learn from any document you import, or you can use a text prompt to create a strong deck or site instantly.
ChatGPT has 180.5 million users, of which 100 million are active weekly.
In January 2024, ChatGPT got 2.3 billion website visits and 2 million developers are using its API.
The highest percentage of ChatGPT users are in the USA (46.75%), followed by India (5.47%). ChatGPT is banned in 7 countries, including Russia and China.
OpenAI’s projected revenue from ChatGPT is $2 billion in 2024.
Running ChatGPT costs OpenAI around $700,000 daily.
Sam Altman is seeking $7 trillion for a global AI chip project, while OpenAI is also listed as a major shareholder in Reddit.
ChatGPT offers a free version with GPT-3.5 and a Plus version with GPT-4, which is 40% more accurate and 82% safer, costing $20 per month.
ChatGPT is being used for automation, education, coding, data-analysis, writing, etc.
43% of college students and 80% of the Fortune 500 companies are using ChatGPT.
A 2023 study found 25% of US companies surveyed saved $50K-$70K using ChatGPT, while 11% saved over $100K.
Stable Audio Open is an open source text-to-audio model for generating up to 47 seconds of samples and sound effects.
Users can create drum beats, instrument riffs, ambient sounds, foley and production elements.
The model enables audio variations and style transfer of audio samples.
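For readers who want to experiment, here is a minimal sketch of generating a short clip with Stable Audio Open via the Hugging Face diffusers wrapper. This is our own illustration, assuming the StableAudioPipeline class and parameters such as audio_end_in_s match the current diffusers release; check the library docs if names have shifted.

```python
import torch
import soundfile as sf
from diffusers import StableAudioPipeline  # assumes a recent diffusers release

# Load the open-weights model from Hugging Face (requires accepting its license).
pipe = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16
).to("cuda")

# Describe the sample you want; a negative prompt steers away from artifacts.
audio = pipe(
    "A punchy four-on-the-floor drum loop at 120 BPM",
    negative_prompt="low quality, distortion",
    num_inference_steps=100,
    audio_end_in_s=10.0,  # the model supports clips up to roughly 47 seconds
    generator=torch.Generator("cuda").manual_seed(0),
).audios

# Save the first waveform; the VAE exposes the output sampling rate.
sf.write("drum_loop.wav", audio[0].T.float().cpu().numpy(), pipe.vae.sampling_rate)
```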
Some comments from Rundown AI:
Why it matters: While the AI advances in text-to-image models have been the most visible (literally), both video and audio are about to take the same leap. Putting these tools in the hands of creatives will redefine traditional workflows — from musicians brainstorming new beats to directors crafting sound effects for film and TV.
Microsoft is partnering with Khan Academy in a multifaceted deal to demonstrate how AI can transform the way we learn. The cornerstone of today’s announcement centers on Khan Academy’s Khanmigo AI agent. Microsoft says it will migrate the bot to its Azure OpenAI Service, enabling the nonprofit educational organization to provide all U.S. K-12 educators free access to Khanmigo.
In addition, Microsoft plans to use its Phi-3 model to help Khan Academy improve math tutoring and collaborate to generate more high-quality learning content while making more courses available within Microsoft Copilot and Microsoft Teams for Education.
One in three American teachers have used artificial intelligence tools in their teaching at least once, with English and social studies teachers leading the way, according to a RAND Corporation survey released last month. While the new technology isn’t yet transforming how kids learn, both teachers and district leaders expect that it will become an increasingly common feature of school life.
When ChatGPT emerged a year and a half ago, many professors immediately worried that their students would use it as a substitute for doing their own written assignments — that they’d click a button on a chatbot instead of doing the thinking involved in responding to an essay prompt themselves.
But two English professors at Carnegie Mellon University had a different first reaction: They saw in this new technology a way to show students how to improve their writing skills.
“They start really polishing way too early,” Kaufer says. “And so what we’re trying to do is with AI, now you have a tool to rapidly prototype your language when you are prototyping the quality of your thinking.”
He says the concept is based on writing research from the 1980s that shows that experienced writers spend about 80 percent of their early writing time thinking about whole-text plans and organization and not about sentences.
On Building AI Models for Education — from aieducation.substack.com by Claire Zau Google’s LearnLM, Khan Academy/MSFT’s Phi-3 Models, and OpenAI’s ChatGPT Edu
This piece primarily breaks down how Google’s LearnLM was built, and takes a quick look at Microsoft/Khan Academy’s Phi-3 and OpenAI’s ChatGPT Edu as alternative approaches to building an “education model” (not necessarily a new model in the latter case, but we’ll explain). Thanks to the public release of their 86-page research paper, we have the most comprehensive view into LearnLM. Our understanding of Microsoft/Khan Academy small language models and ChatGPT Edu is limited to the information provided through announcements, leaving us with less “under the hood” visibility into their development.
Answer AI is among a handful of popular apps that are leveraging the advent of ChatGPT and other large language models to help students with everything from writing history papers to solving physics problems. Of the top 20 education apps in the U.S. App Store, five are AI agents that help students with their school assignments, including Answer AI, according to data from Data.ai on May 21.
If your school (district) or university has not yet made significant efforts to think about how you will prepare your students for a World of AI, I suggest the following steps:
July 24 – Administrator PD & AI Guidance
In July, administrators should receive professional development on AI, if they haven’t already. This should include…
August 24 – Professional Development for Teachers and Staff…
Fall 24 — Parents; Co-curricular; Classroom experiments…
December 24 — Revision to Policy…
New ChatGPT Version Aiming at Higher Ed — from insidehighered.com by Lauren Coffey ChatGPT Edu, emerging after initial partnerships with several universities, is prompting both cautious optimism and worries.
OpenAI unveiled a new version of ChatGPT focused on universities on Thursday, building on work with a handful of higher education institutions that partnered with the tech giant.
The ChatGPT Edu product, expected to start rolling out this summer, is a platform for institutions intended to give students free access. OpenAI said the artificial intelligence (AI) toolset could be used for an array of education applications, including tutoring, writing grant applications and reviewing résumés.
Introducing ChatGPT Edu — from openai.com An affordable offering for universities to responsibly bring AI to campus.
We’re announcing ChatGPT Edu, a version of ChatGPT built for universities to responsibly deploy AI to students, faculty, researchers, and campus operations. Powered by GPT-4o, ChatGPT Edu can reason across text and vision and use advanced tools such as data analysis. This new offering includes enterprise-level security and controls and is affordable for educational institutions.
We built ChatGPT Edu because we saw the success universities like the University of Oxford, Wharton School of the University of Pennsylvania, University of Texas at Austin, Arizona State University, and Columbia University in the City of New York were having with ChatGPT Enterprise.
ChatGPT can help with various tasks across campus, such as providing personalized tutoring for students and reviewing their resumes, helping researchers write grant applications, and assisting faculty with grading and feedback.
Tool use, which enables Claude to interact with external tools and APIs, is now generally available across the entire Claude 3 model family on the Anthropic Messages API, Amazon Bedrock, and Google Cloud’s Vertex AI. With tool use, Claude can perform tasks, manipulate data, and provide more dynamic—and accurate—responses.
Define a toolset for Claude and specify your request in natural language. Claude will then select the appropriate tool to fulfill the task and, when appropriate, execute the corresponding action (a small sketch follows the examples below):
Extract structured data from unstructured text…
Convert natural language requests into structured API calls…
Answer questions by searching databases or using web APIs…
Automate simple tasks through software APIs…
Orchestrate multiple fast Claude subagents for granular tasks…
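For a sense of what defining a toolset looks like in practice, here is a minimal sketch using the Anthropic Python SDK. The tool itself (get_course_enrollment) and its schema are hypothetical placeholders invented for illustration; the surrounding Messages API calls follow Anthropic’s documented tool-use flow.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A hypothetical tool definition: name, description, and JSON schema for its input.
tools = [{
    "name": "get_course_enrollment",
    "description": "Look up how many students are enrolled in a course by course code.",
    "input_schema": {
        "type": "object",
        "properties": {"course_code": {"type": "string", "description": "e.g. 'ENG101'"}},
        "required": ["course_code"],
    },
}]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "How many students are taking ENG101 this fall?"}],
)

# When Claude decides a tool is needed, stop_reason is "tool_use" and the content
# includes a tool_use block naming the tool and carrying the structured arguments.
if response.stop_reason == "tool_use":
    call = next(block for block in response.content if block.type == "tool_use")
    print(call.name, call.input)  # e.g. get_course_enrollment {'course_code': 'ENG101'}
    # Your code performs the real lookup, then returns a tool_result message so
    # Claude can compose the final natural-language answer.
```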
From DSC: The above posting reminds me of this other posting…as AGENTS are likely going to become much more popular and part of our repertoire:
Forget Chatbots. AI Agents Are the Future — from wired.com by Will Knight Startups and tech giants are trying to move from chatbots that offer help via text, to AI agents that can get stuff done. Recent demos include an AI coder called Devin and agents that play videogames.
Devin is just the latest, most polished example of a trend I’ve been tracking for a while—the emergence of AI agents that instead of just providing answers or advice about a problem presented by a human can take action to solve it. A few months back I test drove Auto-GPT, an open source program that attempts to do useful chores by taking actions on a person’s computer and on the web. Recently I tested another program called vimGPT to see how the visual skills of new AI models can help these agents browse the web more efficiently.
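For readers wondering what separates an “agent” from a chatbot in practice, here is a deliberately generic loop of our own (not Devin’s or Auto-GPT’s code): the model proposes the next action, the program executes it, and the observation is appended to the transcript until the model declares it is done.

```python
from typing import Callable, Dict

def run_agent(
    goal: str,
    llm: Callable[[str], str],                 # placeholder for any chat-completion call
    actions: Dict[str, Callable[[str], str]],  # e.g. {"SEARCH": web_search, "RUN": run_code}
    max_steps: int = 10,
) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Ask the model for the next step, e.g. "SEARCH: cheap flights" or "DONE: <answer>".
        decision = llm(transcript + "\nReply with ACTION: argument, or DONE: final answer.")
        name, _, arg = decision.partition(":")
        if name.strip().upper() == "DONE":
            return arg.strip()
        handler = actions.get(name.strip().upper())
        observation = handler(arg.strip()) if handler else f"Unknown action {name.strip()!r}"
        transcript += f"{decision}\nObservation: {observation}\n"
    return "Stopped after max_steps without finishing."
```

The loop itself is simple; everything interesting lives in which actions you expose and how well the model plans with them.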
Nvidia reported $6.12 earnings per share and $26 billion of sales for the three-month period ending April 30, shattering mean analyst forecasts of $5.60 and $24.59 billion, according to FactSet.
Nvidia’s profits and revenues skyrocketed by 628% and 268% compared to 2023’s comparable period, respectively.
This was Nvidia’s most profitable and highest-sales quarter ever, topping the record $12.3 billion net income and $22.1 billion revenue of the quarter that ended this January.
Driving the numerous superlatives for Nvidia’s financial growth over the last year is unsurprisingly its AI-intensive datacenter division, which raked in $22.6 billion of revenue last quarter, a 427% year-over-year increase and a whopping 20 times higher than the $1.1 billion the segment brought in in 2020.
Per ChatGPT today:
NVIDIA is a prominent technology company known for its contributions to various fields, primarily focusing on graphics processing units (GPUs) and artificial intelligence (AI). Here’s an overview of NVIDIA’s main areas of activity:
1. **Graphics Processing Units (GPUs):**
– **Consumer GPUs:** NVIDIA is famous for its GeForce series of GPUs, which are widely used in gaming and personal computing for their high performance and visual capabilities.
– **Professional GPUs:** NVIDIA’s Quadro series is designed for professional applications like 3D modeling, CAD (Computer-Aided Design), and video editing.
2. **Artificial Intelligence (AI) and Machine Learning:**
– NVIDIA GPUs are extensively used in AI research and development. They provide the computational power needed for training deep learning models.
– The company offers specialized hardware for AI, such as the NVIDIA Tesla and A100 GPUs, which are used in data centers and supercomputing environments.
3. **Data Centers:**
– NVIDIA develops high-performance computing solutions for data centers, including GPU-accelerated servers and AI platforms. These products are essential for tasks like big data analytics, scientific simulations, and AI workloads.
4. **Autonomous Vehicles:**
– Through its DRIVE platform, NVIDIA provides hardware and software solutions for developing autonomous vehicles. This includes AI-based systems for perception, navigation, and decision-making.
5. **Edge Computing:**
– NVIDIA’s Jetson platform caters to edge computing, enabling AI-powered devices and applications to process data locally rather than relying on centralized data centers.
6. **Gaming and Entertainment:**
– Beyond GPUs, NVIDIA offers technologies like G-SYNC (for smoother gaming experiences) and NVIDIA GameWorks (a suite of tools for game developers).
7. **Healthcare:**
– NVIDIA’s Clara platform utilizes AI and GPU computing to advance medical imaging, genomics, and other healthcare applications.
8. **Omniverse:**
– NVIDIA Omniverse is a real-time graphics collaboration platform for 3D production pipelines. It’s designed for industries like animation, simulation, and visualization.
9. **Crypto Mining:**
– NVIDIA GPUs are also popular in the cryptocurrency mining community, although the company has developed specific products like the NVIDIA CMP (Cryptocurrency Mining Processor) to cater to this market without impacting the availability of GPUs for gamers and other users.
Overall, NVIDIA’s influence spans a broad range of industries, driven by its innovations in GPU technology and AI advancements.
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.
We often talk about what Generative AI will do for coders, healthcare, science or even finance, but what about the benefits for the next generation? Permit me if you will, here I’m thinking about teachers and students.
It’s no secret that some of the most active users of ChatGPT in its heyday, were students. But how are other major tech firms thinking about this?
I actually think one of the best products with the highest ceiling from Google I/O 2024 is LearnLM. It has to be way more than a chatbot, it has to feel like a multimodal tutor. I can imagine frontier model agents (H) doing this fairly well.
What if everyone, everywhere could have their own personal AI tutor, on any topic?
ChatGPT4o Is the TikTok of AI Models — from nickpotkalitsky.substack.com by Nick Potkalitsky In Search of Better Tools for AI Access in K-12 Classrooms
Nick makes the case that we should pause on the use of OpenAI in the classrooms:
In light of these observations, it’s clear that we must pause and rethink the use of OpenAI products in our classrooms, except for rare cases where accessibility needs demand it. The rapid consumerization of AI, epitomized by GPT4o’s transformation into an AI salesperson, calls for caution.