And to understand the value of AI, they need to do R&D. Since AI doesn’t work like traditional software, but more like a person (even though it isn’t one), there is no reason to suspect that the IT department has the best AI prompters, nor that it has any particular insight into the best uses of AI inside an organization. IT certainly plays a role, but the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job. In fact, for large companies, the source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI.
Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product:” creating a safe and powerful AI system.
Ilya Sutskever Has a New Plan for Safe Superintelligence — from bloomberg.com by Ashlee Vance (behind a paywall)
OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.
Ilya Sutskever is kind of a big deal in AI, to put it lightly.
Part of OpenAI’s founding team, Ilya was Chief Scientist (read: genius) before being part of the coup that fired Sam Altman.
… Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.
If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.
As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world’s most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source.
Federal officials, AI model operators and cybersecurity companies ran the first joint simulation of a cyberattack involving a critical AI system last week.
Why it matters: Responding to a cyberattack on an AI-enabled system will require a different playbook than the typical hack, participants told Axios.
The big picture: Both Washington and Silicon Valley are attempting to get ahead of the unique cyber threats facing AI companies before they become more prominent.
Immediately after we saw Sora-like videos from KLING, Luma AI’s Dream Machine video results overshadowed them.
…
Dream Machine is a next-generation AI video model that creates high-quality, realistic shots from text instructions and images.
Introducing Gen-3 Alpha — from runwayml.com by Anastasis Germanidis
A new frontier for high-fidelity, controllable video generation.
AI-Generated Movies Are Around the Corner — from news.theaiexchange.com by The AI Exchange
The future of AI in filmmaking; participate in our AI for Agencies survey
AI-Generated Feature Films Are Around the Corner.
We predict feature-film length AI-generated films are coming by the end of 2025, if not sooner.
From DSC: Very interesting to see the mention of an R&D department here! Very cool.
Baker said ninth graders in the R&D department designed the essential skills rubric for their grade so that regardless of what content classes students take, they all get the same immersion into critical career skills. Student voice is now so integrated into Edison’s core that teachers work with student designers to plan their units. And he said teachers are becoming comfortable with the language of career-centered learning and essential skills while students appreciate the engagement and develop a new level of confidence.
… The R&D department has grown to include teachers from every department working with students to figure out how to integrate essential skills into core academic classes. In this way, they’re applying one of the XQ Institute’s crucial Design Principles for innovative high schools: Youth Voice and Choice.
Client-connected projects have become a focal point of the Real World Learning initiative, offering students opportunities to solve real-world problems in collaboration with industry professionals.
Organizations like CAPS, NFTE, and Journalistic Learning facilitate community connections and professional learning opportunities, making it easier to implement client projects and entrepreneurship education.
Important trend: client projects. Work-based learning has been growing with career academies and renewed interest in CTE. Six years ago, a subset of WBL called client-connected projects became a focal point of the Real World Learning initiative in Kansas City, where they are defined as authentic problems that students solve in collaboration with professionals from industry, not-for-profit, and community-based organizations. These projects allow students to engage directly with employers, address real-world problems, and develop essential skills.
The Community Portrait approach encourages diverse voices to shape the future of education, ensuring it reflects the needs and aspirations of all stakeholders.
Active, representative community engagement is essential for creating meaningful and inclusive educational environments.
The Portrait of a Graduate—a collaborative effort to define what learners should know and be able to do upon graduation—has likely generated enthusiasm in your community. However, the challenge of future-ready graduates persists: How can we turn this vision into a reality within our diverse and dynamic schools, especially amid the current national political tensions and contentious curriculum debates?
The answer lies in active, inclusive community engagement. It’s about crafting a Community Portrait that reflects the rich diversity of our neighborhoods. This approach, grounded in the same principles used to design effective learning systems, seeks to cultivate deep, reciprocal relationships within the community. When young people are actively involved, the potential for meaningful change increases exponentially.
Although Lindsay E. Jones came from a family of educators, she didn’t expect that going to law school would steer her back into the family business. Over the years she became a staunch advocate for children with disabilities. And as mom to a son with learning disabilities and ADHD who is in high school and doing great, her advocacy is personal.
Jones previously served as president and CEO of the National Center for Learning Disabilities and was senior director for policy and advocacy at the Council for Exceptional Children. Today, she is the CEO at CAST, an organization focused on creating inclusive learning environments in K–12. EdTech: Focus on K–12 spoke with Jones about how digital transformation, artificial intelligence and visionary leaders can support inclusive learning environments.
Our brains are all as different as our fingerprints, and throughout its 40-year history, CAST has been focused on one core value: People are not broken, systems are poorly designed. And those systems are creating a barrier that holds back human innovation and learning.
Dream Machine is an AI model that makes high quality, realistic videos fast from text and images.
It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent, and eventful shots. Dream Machine is our first step towards building a universal imagination engine, and it is available to everyone now!
Luma AI just dropped a Sora-like AI video generator called Dream Machine.
But unlike Sora or KLING, it’s completely open access to the public.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
Stable Audio Open is an open source text-to-audio model for generating up to 47 seconds of samples and sound effects.
Users can create drum beats, instrument riffs, ambient sounds, foley and production elements.
The model enables audio variations and style transfer of audio samples.
Some comments from Rundown AI:
Why it matters: While the AI advances in text-to-image models have been the most visible (literally), both video and audio are about to take the same leap. Putting these tools in the hands of creatives will redefine traditional workflows — from musicians brainstorming new beats to directors crafting sound effects for film and TV.
If 2023 was the year the world discovered generative AI (gen AI), 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year, with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.
Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.
What’s the future of AI? — from mckinsey.com
AI is here to stay. To outcompete in the future, organizations and individuals alike need to get familiar fast. This series of McKinsey Explainers dives deep into the seven technologies that are already shaping the years to come.
We’re in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is currently changing the face of work, our economies, and society as we know it. We don’t know exactly what the future will look like. But we do know that these seven technologies will play a big role.
ANNOUNCING SHOWRUNNER
We believe the future is a mix of game & movie.
Simulations powering 1000s of Truman Shows populated by interactive AI characters.
The new Canva: Canva announced “a whole new Canva” to improve workplace collaborative creation and a revamped platform that simplifies its tools for anyone to use.
At Canva Create, several AI features were announced that enhance the design and content creation process:
Magic Design: Upload an image and select a style to get a curated selection of personalized templates.
Magic Write: An AI-powered copywriting assistant that can generate written content from a text prompt, useful for presentations and website copy.
Magic Eraser: This feature can remove unwanted objects or backgrounds from images.
Magic Edit: Users can swap an object with something else entirely using generative AI.
Beat Sync: Automatically matches video footage to a soundtrack of your choice.
Translate: Automatically translates text in designs to over 100 different languages.
Things might get more interesting in business settings as AI companies start deploying so-called “AI agents,” which can take action by operating other software on a computer or via the internet.
Anthropic, a competitor to OpenAI, announced a major new product today that attempts to prove the thesis that tool use is needed for AI’s next leap in usefulness.
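From DSC: For readers curious what “tool use” actually looks like under the hood, here is a minimal sketch of the loop such agents run. Everything in it is illustrative — the stub model, the toy tools, and the message format are my own simplifications, not Anthropic’s or OpenAI’s actual API.

```python
# Minimal sketch of the "tool use" loop behind AI agents: the model proposes a
# tool call, the harness executes it, and the result is fed back to the model
# until it produces a final answer. The model here is a hard-coded stub; a real
# system would call an LLM API at that step instead.

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy tool
    "lookup": lambda key: {"capital_of_france": "Paris"}.get(key, "unknown"),
}

def stub_model(conversation):
    """Stand-in for an LLM: decides whether to call a tool or answer."""
    last = conversation[-1]
    if last["role"] == "user":
        # A real model would parse the question; this stub always asks for math.
        return {"type": "tool_call", "tool": "calculator", "input": "6 * 7"}
    # After seeing a tool result, produce a final answer.
    return {"type": "answer", "text": f"The result is {last['content']}."}

def run_agent(user_message):
    conversation = [{"role": "user", "content": user_message}]
    while True:
        action = stub_model(conversation)
        if action["type"] == "answer":
            return action["text"]
        result = TOOLS[action["tool"]](action["input"])  # execute the tool
        conversation.append({"role": "tool", "content": result})

print(run_agent("What is 6 * 7?"))  # -> The result is 42.
```

The interesting design point is that the loop, not the model, executes the tools — which is why responding to a compromised agent requires a different playbook than a typical hack: the blast radius is whatever the harness is allowed to touch.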
AI Film Festival | AI comes to filmmaking — from Bloomberg
This week Runway AI Inc., which makes AI video generating and editing tools, held its second annual AI Film Festival in Los Angeles — its first stop before heading to New York next week. To give a sense of how much the event has grown since last year, Runway co-founder Cristóbal Valenzuela said last year people submitted 300 videos for festival consideration. This year they sent in 3,000.
A crowd of hundreds of filmmakers, techies, artists, venture capitalists and at least one well-known actor (Poker Face star Natasha Lyonne) gathered at the Orpheum Theatre in downtown LA Wednesday night to view the 10 finalists chosen by the festival’s judges.
AI Resources and Teaching | Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.
The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.
For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.
It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”
His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.
…
To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.”
Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”
From DSC: This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:
Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.
Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.
What about course videos? Professors can create them (by lecturing into a camera for several hours, hopefully in different clothes) from the readings, from their interpretations of the readings, from their own case experiences – from anything they like. But now professors can direct the creation of the videos by talking to a CustomGPT – actually describing what they’d like the video to communicate – using their own or another image. Wait. What? They can make a video by talking to a CustomGPT and even select the image they want the “actor” to use? Yes. They can also add a British accent and insert some (GenAI-developed) jokes into the videos if they like. All this and much more is now possible. This means that a professor can specify how long the video should be, what sources should be consulted, and describe the demeanor the professor wants the video to project.
From DSC: Though I wasn’t crazy about the clickbait type of title here, I still thought that the article was solid and thought-provoking. It contained several good ideas for using AI.
Excerpt from a recent EdSurge Higher Ed newsletter:
There are darker metaphors though — ones that focus on the hazards for humanity of the tech. Some professors worry that AI bots are simply replacing hired essay-writers for many students, doing work for a student that they can then pass off as their own (and doing it for free).
From DSC: Hmmm…the use of essay writers was around long before AI became mainstream within higher education. So we already had a serious problem where students didn’t see the why in what they were being asked to do. Some students still aren’t sold on the why of the work in the first place. The situation seems to involve ethics, yes, but it also seems to say that we haven’t sold students on the benefits of putting in the work. Students seem to be saying I don’t care about this stuff…I just need the degree so I can exit stage left.
My main point: The issue didn’t start with AI…it started long before that.
This financial stagnation is occurring as we face a multitude of escalating challenges. These challenges include but are in no way limited to, chronic absenteeism, widespread student mental health issues, critical staff shortages, rampant classroom behavior issues, a palpable sense of apathy for education in students, and even, I dare say, hatred towards education among parents and policymakers.
…
Our current focus is on keeping our heads above water, ensuring our students’ safety and mental well-being, and simply keeping our schools staffed and our doors open.
What is Ed? An easy-to-understand learning platform designed by Los Angeles Unified to increase student achievement. It offers personalized guidance and resources to students and families 24/7 in over 100 languages.
Also relevant/see:
Los Angeles Unified Bets Big on ‘Ed,’ an AI Tool for Students — by Lauraine Langreo
The Los Angeles Unified School District has launched an AI-powered learning tool that will serve as a “personal assistant” to students and their parents. The tool, named “Ed,” can provide students from the nation’s second-largest district information about their grades, attendance, upcoming tests, and suggested resources to help them improve their academic skills on their own time, Superintendent Alberto Carvalho announced March 20. Students can also use the app to find social-emotional-learning resources, see what’s for lunch, and determine when their bus will arrive.
Could OpenAI’s Sora be a big deal for elementary school kids? — from futureofbeinghuman.com by Andrew Maynard
Despite all the challenges it comes with, AI-generated video could unleash the creativity of young children and provide insights into their inner worlds – if it’s developed and used responsibly
Like many others, I’m concerned about the challenges that come with hyper-realistic AI-generated video. From deep fakes and disinformation to blurring the lines between fact and fiction, generative AI video is calling into question what we can trust, and what we cannot.
And yet despite all the issues the technology is raising, it also holds quite incredible potential, including as a learning and development tool — as long as we develop and use it responsibly.
I was reminded of this a few days back while watching the latest videos from OpenAI created by their AI video engine Sora — including the one below generated from the prompt “an elephant made of leaves running in the jungle.”
…
What struck me while watching this — perhaps more than any of the other videos OpenAI has been posting on its TikTok channel — is the potential Sora has for translating the incredibly creative but often hard to articulate ideas someone may have in their head, into something others can experience.
Can AI Aid the Early Education Workforce? — from edsurge.com by Emily Tate Sullivan
During a panel at SXSW EDU 2024, early education leaders discussed the potential of AI to support and empower the adults who help our nation’s youngest children.
While the vast majority of the conversations about AI in education have centered on K-12 and higher education, few have considered the potential of this innovation in early care and education settings.
At the conference, a panel of early education leaders gathered to do just that, in a session exploring the potential of AI to support and empower the adults who help our nation’s youngest children, titled, “ChatECE: How AI Could Aid the Early Educator Workforce.”
Hau shared that K-12 educators are using the technology to improve efficiency in a number of ways, including to draft individualized education programs (IEPs), create templates for communicating with parents and administrators, and in some cases, to support building lesson plans.
Educators are, perhaps rightfully so, cautious about incorporating AI in their classrooms. With thoughtful implementation, however, AI image generators, with their ability to use any language, can provide powerful ways for students to engage with the target language and increase their proficiency.
While AI offers numerous benefits, it’s crucial to remember that it is a tool to empower educators, not replace them. The human connection between teacher and student remains central to fostering creativity, critical thinking, and social-emotional development. The role of teachers will shift towards becoming facilitators, curators, and mentors who guide students through personalized learning journeys. By harnessing the power of AI, educators can create dynamic and effective classrooms that cater to each student’s individual needs. This paves the way for a more engaging and enriching learning experience that empowers students to thrive.
In this article, seven teachers across the world share their insights on AI tools for educators. You will hear a host of varied opinions and perspectives on everything from whether AI could hasten the decline of learning foreign languages to whether AI-generated lesson plans are an infringement on teachers’ rights. A common theme emerged from those we spoke with: just as the internet changed education, AI tools are here to stay, and it is prudent for teachers to adapt.
Even though it’s been more than a year since ChatGPT made a big splash in the K-12 world, many teachers say they are still not receiving any training on using artificial intelligence tools in the classroom.
More than 7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to a nationally representative EdWeek Research Center survey of 953 educators, including 553 teachers, conducted between Jan. 31 and March 4.
From DSC: This article mentioned the following resource:
How Early Adopters of Gen AI Are Gaining Efficiencies — from knowledge.wharton.upenn.edu by Prasanna (Sonny) Tambe and Scott A. Snyder; via Ray Schroeder on LinkedIn
Enterprises are seeing gains from generative AI in productivity and strategic planning, according to speakers at a recent Wharton conference.
Its unique strengths in translation, summation, and content generation are especially useful in processing unstructured data. Some 80% of all new data in enterprises is unstructured, he noted, citing research firm Gartner. Very little of that unstructured data that resides in places like emails “is used effectively at the point of decision making,” he noted. “[With gen AI], we have a real opportunity” to garner new insights from all the information that resides in emails, team communication platforms like Slack, and agile project management tools like Jira, he said.
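From DSC: The pipeline the Wharton speakers describe — batch up unstructured text, condense it, and surface it at the point of decision — can be sketched in a few lines. The summarize() function below is a deliberate stand-in: a real deployment would send the batch to a gen AI model rather than rank by length.

```python
# Illustrative sketch of routing unstructured enterprise text (emails, Slack
# messages) through a summarization step before a decision point. summarize()
# is a stub for a gen AI call: here it just surfaces the first sentence of the
# longest messages, which a real LLM summarizer would replace.

def summarize(texts, max_items=3):
    """Stub summarizer: rank messages by length, return each one's first sentence."""
    ranked = sorted(texts, key=len, reverse=True)
    return [t.split(".")[0] + "." for t in ranked[:max_items]]

emails = [
    "Shipping delay on order 4412. Customer escalated twice this week.",
    "Lunch?",
    "Q3 budget review moved to Friday. Please update your forecasts beforehand.",
]

briefing = summarize(emails, max_items=2)
for line in briefing:
    print("-", line)
```

Swapping the stub for a model call is the whole trick — the surrounding plumbing (collect, condense, deliver a short briefing) is where the 80% of unstructured data Gartner describes finally reaches a decision maker.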
Here are 6 YouTube channels I watch to stay up to date with AI. This list will be useful whether you’re a casual AI enthusiast or an experienced programmer.
1. Matt Wolfe: AI for non-coders
This is a fast-growing YouTube channel focused on artificial intelligence for non-coders. On this channel, you’ll find videos about ChatGPT, Midjourney, and any AI tool that’s gaining popularity.
#3 Photomath
Photomath is a comprehensive math help app that provides step-by-step explanations for a wide range of math problems, from elementary to college level. Photomath is only available as a mobile app. (link)
Features:
Get step-by-step solutions with multiple methods to choose from
Scan any math problem, including word problems, using the app’s camera
Access custom visual aids and extra “how” and “why” tips for deeper understanding
Google researchers have developed a new artificial intelligence system that can generate lifelike videos of people speaking, gesturing and moving — from just a single still photo. The technology, called VLOGGER, relies on advanced machine learning models to synthesize startlingly realistic footage, opening up a range of potential applications while also raising concerns around deepfakes and misinformation.
I’m fascinated by the potential of these tools to augment and enhance our work and creativity. There’s no denying the impressive capabilities we’re already seeing with text generation, image creation, coding assistance, and more. Used thoughtfully, AI can be a powerful productivity multiplier.
At the same time, I have significant concerns about the broader implications of this accelerating technology, especially for education and society at large. We’re traversing new ground at a breakneck pace, and it’s crucial that we don’t blindly embrace AI without considering the potential risks.
My worry is that by automating away too many tasks, even seemingly rote ones like creating slide decks, we risk losing something vital—humanity at the heart of knowledge work.
Nvidia has announced a partnership with Hippocratic AI to introduce AI “agents” aimed at replacing nurses in hospitals. These AI “nurses” come at a significantly lower cost than human nurses and are purportedly intended to address staffing issues by handling “low-risk” patient-facing tasks via video calls. However, concerns have been raised regarding the ethical implications and effectiveness of replacing human nurses with AI, particularly given the complex nature of medical care.
A glimpse of the future of AI at work:
I got early access to Devin, the “AI developer” – it is slow & breaks often, but you can start to see what an AI agent can do.
It makes a plan and executes it autonomously, doing research, writing code & debugging, without you watching. pic.twitter.com/HHBQQDQZ9q
Below are some items for those creatives who might be interested in telling stories, designing games, crafting audio-based experiences, composing music, developing new worlds using 3D graphics, and more.
The sounds of any game can make or break the experience for its players. Many of our favorite adventures come roaring back into our minds when we hear a familiar melody, or maybe it’s a special sound effect that reminds us of our time performing a particularly heroic feat… or the time we just caused some havoc with friends. With Lightfall sending Guardians to explore the new destination of Neomuna, there’s an entire universe hidden away within the sounds—both orchestral and diegetic—for Guardians to uncover and immerse themselves in. We recently assembled some of Destiny’s finest sound designers and composers to dive a little bit deeper into the stunning depths of Neomuna’s auditory experience.
Before diving into the interview with our incredible team, we wanted to make sure you have seen the Lightfall music documentary that went out shortly after the expansion’s release. This short video is a great introduction to how our team worked to create the music of Lightfall and is a must-see for audiophiles and Destiny fans alike.
Every game has a story to tell, a journey to take players through that — if done well — can inspire wonderful memories that last a lifetime. Unlike other storytelling mediums, the art of video games is an intricate interweaving of experiences, including psychological cues that are designed to entrance players and make them feel like they’re a part of the story. One way this is achieved is through the art of audio. And no, we aren’t just talking about the many incredible soundtracks out there, we’re talking about the oftentimes overlooked universe of audio design.
… What does an audio designer do?
“Number one? We don’t work on music. That’s a thing almost everyone thinks every audio designer does,” jokes Nyte when opening up about beginning her quest into the audio world. “That, or for a game like Destiny, people just assume we only work on weapon sounds and nothing else. Which, [Juan] Uribe does, but a lot of us don’t. There is this entire gamut of other sounds that are in-game that people don’t really notice. Some do, and that’s always cool, but audio is about all sounds coming together for a ‘whole’ audio experience.”
On the Transformation of Entertainment
What company will be the Pixar of the AI era? What talent agency will be the CAA of the AI era? How fast can the entertainment industry evolve to natively leverage AI, and what parts will be disrupted by the industry’s own ambivalence? Or are all of these questions myopic…and should we anticipate a wave of entirely new categories of entertainment?
We are starting to see material adoption of AI tools across many industries, including media and entertainment. No doubt, these tools will transform the processes behind generating content. But what entirely new genres of content might emerge? The platform shift to AI-based workflows might give rise to entirely new types of companies that transform entertainment as we know it – from actor representation, Hollywood economics, consumption devices and experiences, to the actual mediums of entertainment themselves. Let’s explore just a few of the more edgy implications:
New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI music generation and editing tool, Project Music GenAI Control allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs.
Dave told me that he couldn’t have made Borrowing Time without AI—it’s an expensive project that traditional Hollywood studios would never bankroll. But after Dave’s short went viral, major production houses approached him to make it a full-length movie. I think this is an excellent example of how AI is changing the art of filmmaking, and I came out of this interview convinced that we are on the brink of a new creative age.
We dive deep into the world of AI tools for image and video generation, discussing how aspiring filmmakers can use them to validate their ideas, and potentially even secure funding if they get traction. Dave walks me through how he has integrated AI into his movie-making process, and as we talk, we make a short film featuring Nicolas Cage using a haunted roulette ball to resurrect his dead movie career, live on the show.
The scammers used digitally recreated versions of an international company’s Chief Financial Officer and other employees to order $25 million in money transfers during a video conference call containing just one real person.
The victim, an employee at the Hong Kong branch of an unnamed multinational firm, was duped into taking part in a video conference call in which they were the only real person – the rest of the group were fake representations of real people, writes SCMP.
As we’ve seen in previous incidents where deepfakes were used to recreate someone without their permission, the scammers utilized publicly available video and audio footage to create these digital versions.
Since we launched Bard last year, people all over the world have used it to collaborate with AI in a completely new way — to prepare for job interviews, debug code, brainstorm new business ideas or, as we announced last week, create captivating images.
Our mission with Bard has always been to give you direct access to our AI models, and Gemini represents our most capable family of models. To reflect this, Bard will now simply be known as Gemini.
A new way to discover places with generative AI in Maps — from blog.google by Miriam Daniel; via AI Valley
Here’s a look at how we’re bringing generative AI to Maps — rolling out this week to select Local Guides in the U.S.
Today, we’re introducing a new way to discover places with generative AI to help you do just that — no matter how specific, niche or broad your needs might be. Simply say what you’re looking for and our large-language models (LLMs) will analyze Maps’ detailed information about more than 250 million places and trusted insights from our community of over 300 million contributors to quickly make suggestions for where to go.
Starting in the U.S., this early access experiment launches this week to select Local Guides, who are some of the most active and passionate members of the Maps community. Their insights and valuable feedback will help us shape this feature so we can bring it to everyone over time.
Google Prepares for a Future Where Search Isn’t King — from wired.com by Lauren Goode
CEO Sundar Pichai tells WIRED that Google’s new, more powerful Gemini chatbot is an experiment in offering users a way to get things done without a search engine. It’s also a direct shot at ChatGPT.