Voice has become one of the most influential elements in how digital content is experienced. From podcasts and videos to apps, ads, and interactive platforms, spoken audio shapes how messages are understood and remembered. In recent years, the rise of the AI voice generator has changed how creators and brands approach audio production, lowering barriers while expanding creative possibilities.
Rather than relying exclusively on traditional voice recording, many teams now use AI-generated voices as part of their content and brand strategies. This shift is not simply about efficiency; it reflects broader changes in how digital experiences are produced, scaled, and personalised.
The Future Role Of AI-Generated Voice
As AI voice technology continues to improve, its role in creative and brand workflows will likely expand. Future developments may include more adaptive voices that respond to context, audience behaviour, or emotional cues in real time. Rather than replacing traditional voice work, AI-generated voice is becoming another option in a broader creative toolkit, one that offers speed, flexibility, and accessibility.
At CES 2026, Everything Is AI. What Matters Is How You Use It — from wired.com by Boone Ashworth
Integrated chatbots and built-in machine intelligence are no longer standout features in consumer tech. If companies want to win in the AI era, they’ve got to hone the user experience.
Beyond Wearables
Right now, AI is on your face and arms—smart glasses and smart watches—but this year will see it proliferate further into products like earbuds, headphones, and smart clothing.
Health tech will see an influx of AI features too, as companies aim to use AI to monitor biometric data from wearables like rings and wristbands. Health sensors will also continue to show up in newer places like toilets, bath mats, and brassieres.
The smart home will continue to be bolstered by machine intelligence, with more products that can listen, see, and understand what’s happening in your living space. Familiar candidates for AI-powered upgrades like smart vacuums and security cameras will be joined by surprising AI bedfellows like refrigerators and garage door openers.
After a year of bot battles, one thing stands out: There is no single best AI. The smartest way to use chatbots today is to pick different tools for different jobs — and not assume one bot can do it all.
Some enterprise platforms now support cross-agent communication and integration with ecosystems maintained by companies like Microsoft, NVIDIA, Google, and Oracle. These cross-platform data fabrics break down silos and turn isolated AI pilots into enterprise-wide services. The result is an IT backbone that not only automates but also collaborates for continuous learning, diagnostics, and system optimization in real time.
It’s difficult to think of any single company that had a bigger impact on Wall Street and the AI trade in 2025 than Nvidia (NVDA).
…
Nvidia’s revenue soared in 2025, bringing in $187.1 billion, and its market capitalization continued to climb, briefly eclipsing the $5 trillion mark before settling back in the $4 trillion range.
There were plenty of major highs and deep lows throughout the year, but these 15 were among the biggest moments of Nvidia’s 2025.
Could Your Next Side Hustle Be Training AI? — from builtin.com by Jeff Rumage
As automation continues to reshape the labor market, some white-collar professionals are cashing in by teaching AI models to do their jobs.
Summary: Artificial intelligence may be replacing jobs, but it’s also creating some new ones. Professionals in fields like medicine, law and engineering can earn big money training AI models, teaching them human skills and expertise that may one day make those same jobs obsolete.
Here’s the thing: voice is finally good enough to replace typing now. And I mean actually good enough, not “Siri, play Despacito” good enough.
To paraphrase Andrej Karpathy’s famous line that “the hottest new programming language is English”: in this case, the hottest new user interface is talking.
The Great Convergence: Why Voice Is Having Its Moment
Three massive shifts just collided to make voice interfaces inevitable.
First, speech recognition stopped being terrible. …
Second, our devices got ears everywhere. …
Third, and most importantly: LLMs made voice assistants smart enough to be worth talking to. …
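Taken together, those three shifts describe a simple pipeline: speech-to-text in front, an LLM in the middle, and text-to-speech on the way out. Here is a minimal sketch of one conversational turn; the function names are illustrative stand-ins, not any real service’s API.

```python
# Minimal voice-assistant pipeline sketch: speech-to-text -> LLM -> text-to-speech.
# transcribe/respond/speak are illustrative stand-ins, not real service calls.

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text call; pretends the audio is already text."""
    return audio.decode("utf-8")

def respond(prompt: str) -> str:
    """Stand-in for an LLM call."""
    return f"You asked: {prompt}"

def speak(text: str) -> bytes:
    """Stand-in for a text-to-speech call."""
    return text.encode("utf-8")

def voice_turn(audio_in: bytes) -> bytes:
    """One conversational turn: hear, think, speak."""
    return speak(respond(transcribe(audio_in)))

print(voice_turn(b"what's on my calendar?").decode("utf-8"))
# prints: You asked: what's on my calendar?
```

In a real assistant, each stub would be replaced by a streaming model call, but the hear-think-speak shape of the loop stays the same.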
Update on November 20, 2025: Early feedback from the pilot has been positive, so we’re expanding group chats to all logged-in users on ChatGPT Free, Go, Plus and Pro plans globally over the coming days. We will continue refining the experience as more people start using it.
Today, we’re beginning to pilot a new experience in a few regions that makes it easy for people to collaborate with each other—and with ChatGPT—in the same conversation. With group chats, you can bring friends, family, or coworkers into a shared space to plan, make decisions, or work through ideas together.
Whether you’re organizing a group dinner or drafting an outline with coworkers, ChatGPT can help. Group chats are separate from your private conversations, and your personal ChatGPT memory is never shared with anyone in the chat.
“Where generative AI creates, agentic AI acts.” That’s how my trusted assistant, Gemini 2.5 Pro deep research, describes the difference.
…
Agents, unlike generative tools, set and pursue multistep goals with minimal human supervision. The essential difference is their proactive nature. Rather than waiting for a specific, step-by-step command, agentic systems take a high-level objective and independently create and execute a plan to achieve that goal. This triggers a continuous, iterative workflow, much like a cognitive loop. The typical agentic process involves six key steps, as described by Nvidia:
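Without reproducing Nvidia’s specific six steps, the cognitive loop described here can be sketched generically: take a goal, decompose it into sub-tasks, and execute them without per-step human input. The planner and executor below are illustrative stubs.

```python
# Generic agentic "cognitive loop": take a high-level goal, decompose it
# into sub-tasks, execute each, and collect results. This illustrates the
# pattern described above, not Nvidia's specific six-step framework.

def plan(goal: str) -> list[str]:
    """Illustrative planner: decompose the goal into sub-tasks."""
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(task: str) -> str:
    """Illustrative executor: perform one sub-task."""
    return f"done: {task}"

def run_agent(goal: str) -> list[str]:
    """Work through the whole plan with no per-step human input."""
    return [execute(task) for task in plan(goal)]

print(run_agent("quarterly report"))
```

A production agent would also observe the result of each step and revise the plan, which is what makes the loop iterative rather than a one-shot script.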
Our 2025 national survey of over 650 respondents across 49 states and Puerto Rico reveals both encouraging trends and important challenges. While AI adoption and optimism are growing, concerns about cheating, privacy, and the need for training persist.
Despite these challenges, I’m inspired by the resilience and adaptability of educators. You are the true game-changers in your students’ growth, and we’re honored to support this vital work.
This report reflects both where we are today and where we’re headed with AI. More importantly, it reflects your experiences, insights, and leadership in shaping the future of education.
This groundbreaking collaboration represents a transformative step forward in education technology and will begin with, but is not limited to, an effort between Instructure and OpenAI to enhance the Canvas experience by embedding OpenAI’s next-generation AI technology into the platform.
IgniteAI, announced earlier today, establishes Instructure’s future-ready, open ecosystem with agentic support as the AI landscape continues to evolve. This partnership with OpenAI exemplifies this bold vision for AI in education. Instructure’s strategic approach to AI emphasizes the enhancement of connections within an educational ecosystem comprising over 1,100 edtech partners and leading LLM providers.
“We’re committed to delivering next-generation LMS technologies designed with an open ecosystem that empowers educators and learners to adapt and thrive in a rapidly changing world,” said Steve Daly, CEO of Instructure. “This collaboration with OpenAI showcases our ambitious vision: creating a future-ready ecosystem that fosters meaningful learning and achievement at every stage of education. This is a significant step forward for the education community as we continuously amplify the learning experience and improve student outcomes.”
Faculty Latest Targets of Big Tech’s AI-ification of Higher Ed — from insidehighered.com by Kathryn Palmer
A new partnership between OpenAI and Instructure will embed generative AI in Canvas. It may make grading easier, but faculty are skeptical it will enhance teaching and learning.
The two companies, which have not disclosed the value of the deal, are also working together to embed large language models into Canvas through a feature called IgniteAI. It will work with an institution’s existing enterprise subscription to LLMs such as Anthropic’s Claude or OpenAI’s ChatGPT, allowing instructors to create custom LLM-enabled assignments. They’ll be able to tell the model how to interact with students—and even evaluate those interactions—and what it should look for to assess student learning. According to Instructure, any student information submitted through Canvas will remain private and won’t be shared with OpenAI.
… Faculty Unsurprised, Skeptical
Few faculty were surprised by the Canvas-OpenAI partnership announcement, though many are reserving judgment until they see how the first year of using it works in practice.
A new study measuring the use of generative artificial intelligence in different professions has just gone public, and its main message to people working in some fields is harsh. It suggests translators, historians, text writers, sales representatives, and customer service agents might want to consider new careers as pile driver or dredge operators, railroad track layers, hardwood floor sanders, or maids — if, that is, they want to lower the threat of AI apps pushing them out of their current jobs.
From DSC: Unfortunately, this is where the hyperscalers are going to get their ROI from all of the capital expenditures that they are making. Companies are going to use their services in order to reduce headcount at their organizations. CEOs are even beginning to brag about the savings realized by the use of AI-based technologies (or so they claim):
“As a CEO myself, I can tell you, I’m extremely excited about it. I’ve laid off employees myself because of AI. AI doesn’t go on strike. It doesn’t ask for a pay raise. These things that you don’t have to deal with as a CEO.”
My first position out of college was being a Customer Service Representative at Baxter Healthcare. It was my most impactful job, as it taught me the value of a customer. From then on, whoever I was trying to assist was my customer — whether they were internal or external to the organization that I was working for. Those kinds of jobs are so important. If they evaporate, what then? How will young people/graduates get their start?
Alex’s take: We’re seeing browsers fundamentally transition from search engines → answer engines → action engines. Gone are the days of having to trawl through pages of search results. Commands are the future. They are the direct input to arrive at the outcomes we sought in the first place, such as booking a hotel or ordering food. I’m interested in watching Microsoft’s bet develop as browsers become collaborative (and proactive) assistants.
Amazon just invested in an AI that can create full TV episodes—and it wants you to star in them.
Remember when everyone lost their minds over AI generating a few seconds of video? Well, Amazon just invested in a company called Fable Studio, whose Showrunner system can generate entire 22-minute TV episodes.
… Where does this go from here? Imagine asking AI to rewrite the ending of Game of Thrones, or creating a sitcom where you and your friends are the main characters. This type of tech could create personalized entertainment experiences just like that.
Our take: Without question, we’re moving toward a world where every piece of media can be customized to you personally. Your Netflix could soon generate episodes where you’re the protagonist, with storylines tailored to your interests and sense of humor.
And if this technology scales, the entire entertainment industry could flip upside down. The pitch goes: why watch someone else’s story when you can generate your own?
The End of Work as We Know It — from gizmodo.com by Luc Olinga
CEOs call it a revolution in efficiency. The workers powering it call it a “new era in forced labor.” I spoke to the people on the front lines of the AI takeover.
Yet, even in this vision of a more pleasant workplace, the specter of displacement looms large. Miscovich acknowledges that companies are planning for a future where headcount could be “reduced by 40%.” And Clark is even more direct. “A lot of CEOs are saying that, knowing that they’re going to come up in the next six months to a year and start laying people off,” he says. “They’re looking for ways to save money at every single company that exists.”
But we do not have much time. As Clark told me bluntly: “I am hired by CEOs to figure out how to use AI to cut jobs. Not in ten years. Right now.”
Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private—and highly personal—chats to appear in search results.
Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats “visible to millions.” While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details—like highly specific descriptions of interpersonal relationships with friends and family members—perhaps making it possible to identify them, Fast Company found.
Today, we’re dropping the world’s first AI-native social feed.
Feed from Character.AI is a dynamic, scrollable content platform that connects users with the latest Characters, Scenes, Streams, and creator-driven videos in one place.
This is a milestone in the evolution of online entertainment.
For the last 10 years, social platforms have been all about passive consumption. The Character.AI Feed breaks that paradigm and turns content into a creative playground. Every post is an invitation to interact, remix, and build on what others have made. Want to rewrite a storyline? Make yourself the main character? Take a Character you just met in someone else’s Scene and pop it into a roast battle or a debate? Now it’s easy. Every story can have a billion endings, and every piece of content can change and evolve with one tap.
Get the 2025 Student Guide to Artificial Intelligence — from studentguidetoai.org
This guide is made available under a Creative Commons license by Elon University and the American Association of Colleges and Universities (AAC&U).
Agentic AI is taking these already huge strides even further. Rather than simply asking a question and receiving an answer, an AI agent can assess your current level of understanding and tailor a reply to help you learn. It can also help you come up with a timetable and personalized lesson plan to make you feel as though you have a one-on-one instructor walking you through the process. If your goal is to learn to speak a new language, for example, an agent might map out a plan starting with basic vocabulary and pronunciation exercises, then progress to simple conversations, grammar rules and finally, real-world listening and speaking practice.
…
For instance, if you’re an entrepreneur looking to sharpen your leadership skills, an AI agent might suggest a mix of foundational books, insightful TED Talks and case studies on high-performing executives. If you’re aiming to master data analysis, it might point you toward hands-on coding exercises, interactive tutorials and real-world datasets to practice with.
The beauty of AI-driven learning is that it’s adaptive. As you gain proficiency, your AI coach can shift its recommendations, challenge you with new concepts and even simulate real-world scenarios to deepen your understanding.
Ironically, the very technology feared by workers can also be leveraged to help them. Rather than requiring expensive external training programs or lengthy in-person workshops, AI agents can deliver personalized, on-demand learning paths tailored to each employee’s role, skill level, and career aspirations. Given that 68% of employees find today’s workplace training to be overly “one-size-fits-all,” an AI-driven approach will not only cut costs and save time but will be more effective.
This is one reason why I don’t see AI-embedded classrooms and AI-free classrooms as opposite poles. The bone of contention, here, is not whether we can cultivate AI-free moments in the classroom, but for how long those moments are actually sustainable.
Can we sustain those AI-free moments for an hour? A class session? Longer?
…
Here’s what I think will happen. As AI becomes embedded in society at large, the sustainability of imposed AI-free learning spaces will get tested. Hard. I think it’ll become more and more difficult (though maybe not impossible) to impose AI-free learning spaces on students.
However, consensual and hybrid AI-free learning spaces will continue to have a lot of value. I can imagine classes where students opt into an AI-free space. Or they’ll even create and maintain those spaces.
Duolingo’s AI Revolution — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 148 AI-Generated Courses Tell Us About the Future of Instructional Design & Human Learning
Last week, Duolingo announced an unprecedented expansion: 148 new language courses created using generative AI, effectively doubling their content library in just one year. This represents a seismic shift in how learning content is created — a process that previously took the company 12 years for their first 100 courses.
As CEO Luis von Ahn stated in the announcement, “This is a great example of how generative AI can directly benefit our learners… allowing us to scale at unprecedented speed and quality.”
In this week’s blog, I’ll dissect exactly how Duolingo has reimagined instructional design through AI, what this means for the learner experience, and most importantly, what it tells us about the future of our profession.
Medical education is experiencing a quiet revolution—one that’s not taking place in lecture theatres or textbooks, but with headsets and holograms. At the heart of this revolution are Mixed Reality (MR) AI Agents, a new generation of devices that combine the immersive depth of mixed reality with the flexibility of artificial intelligence. These technologies are not mere flashy gadgets; they’re revolutionising the way medical students interact with complicated content, rehearse clinical skills, and prepare for real-world situations. By combining digital simulations with the physical world, MR AI Agents are redefining what it means to learn medicine in the 21st century.
4 Reasons To Use Claude AI to Teach — from techlearning.com by Erik Ofgang
Features that make Claude AI appealing to educators include a focus on privacy and conversational style.
After experimenting using Claude AI on various teaching exercises, from generating quizzes to tutoring and offering writing suggestions, I found that it’s not perfect, but I think it behaves favorably compared to other AI tools in general, with an easy-to-use interface and some unique features that make it particularly suited for use in education.
From DSC: Whenever we’ve had a flat tire over the years, a tricky part of the repair process is jacking up the car so that no harm is done to the car (or to me!). There are some grooves underneath the Toyota Camry where one is supposed to put the jack. But as the car is very low to the ground, these grooves are very hard to find (even in good weather and light).
What’s needed is a robotic jack with vision.
If the jack had “vision” and had wheels on it, the device could locate the exact location of the grooves, move there, and then ask the owner whether they are ready for the car to be lifted up. The owner could execute that order when they are ready and the robotic jack could safely hoist the car up.
This type of robotic device is already out there in other areas. But this idea for assistance with replacing a flat tire represents an AI and robotic-based, consumer-oriented application that we’ll likely be seeing much more of in the future. Carmakers and suppliers, please add this one to your list!
Duolingo’s new Video Call feature represents a leap forward in language practice for learners. This AI-powered tool allows Duolingo Max subscribers to engage in spontaneous, realistic conversations with Lily, one of Duolingo’s most popular characters. The technology behind Video Call is designed to simulate natural dialogue and provides a personalized, interactive practice environment. Even beginner learners can converse in a low-pressure environment because Video Call is designed to adapt to their skill level. By offering learners the opportunity to converse in real-time, Video Call builds the confidence needed to communicate effectively in real-world situations. Video Call is available for Duolingo Max subscribers learning English, Spanish, and French.
Ello, the AI reading companion that aims to support kids struggling to read, launched a new product on Monday that allows kids to participate in the story-creation process.
Called “Storytime,” the new AI-powered feature helps kids generate personalized stories by picking from a selection of settings, characters, and plots. For instance, a story about a hamster named Greg who performed in a talent show in outer space.
Rolling out today: Gemini Live ← Google swoops in before OpenAI can get their Voice Mode out there
Gemini Live is a mobile conversational experience that lets you have free-flowing conversations with Gemini. Want to brainstorm potential jobs that are well-suited to your skillset or degree? Go Live with Gemini and ask about them. You can even interrupt mid-response to dive deeper on a particular point, or pause a conversation and come back to it later. It’s like having a sidekick in your pocket who you can chat with about new ideas or practice with for an important conversation.
Gemini Live is also available hands-free: You can keep talking with the Gemini app in the background or when your phone is locked, so you can carry on your conversation on the go, just like you might on a regular phone call. Gemini Live begins rolling out today in English to our Gemini Advanced subscribers on Android phones, and in the coming weeks will expand to iOS and more languages.
To make speaking to Gemini feel even more natural, we’re introducing 10 new voices to choose from, so you can pick the tone and style that works best for you.
We’re introducing Gemini Live, a more natural way to interact with Gemini. You can now have a free-flowing conversation, and even interrupt or change topics just like you might on a regular phone call. Available to Gemini Advanced subscribers. #MadeByGoogle pic.twitter.com/eNjlNKubsv
Why it matters: Real-time voice is slowly shifting AI from a tool we text/prompt with, to an intelligence that we collaborate, learn, consult, and grow with. As the world’s anticipation for OpenAI’s unreleased products grows, Google has swooped in to steal the spotlight as the first to lead widespread advanced AI voice rollouts.
In a recent Q&A session at Stanford, Eric Schmidt, former CEO and Chairman of search giant Google, offered a compelling vision of the near future in artificial intelligence. His predictions, both exciting and sobering, paint a picture of a world on the brink of a technological revolution that could dwarf the impact of social media.
Schmidt highlighted three key advancements that he believes will converge to create this transformative wave: very large context windows, agents, and text-to-action capabilities. These developments, according to Schmidt, are not just incremental improvements but game-changers that could reshape our interaction with technology and the world at large.
Eric Schmidt says in the next year, AI models will unite three key pillars: very large context windows, agents and text-to-action, and no-one understands what the impact will be but it will involve everyone having a fleet of AI agents at their command pic.twitter.com/roYSfZGQ5J
The rise of multimodal AI agents — from 11onze.cat
Technology companies are investing large amounts of money in creating new multimodal artificial intelligence models and algorithms that can learn, reason and make decisions autonomously after collecting and analysing data.
The future of multimodal agents
In practical terms, a multimodal AI agent can, for example, analyse a text while processing an image, spoken language, or an audio clip to give a more complete and accurate response, both through voice and text. This opens up new possibilities in various fields: from education and healthcare to e-commerce and customer service.
AI Change Management: 41 Tactics to Use (August 2024) — from flexos.work by Daan van Rossum
Future-proof companies are investing in driving AI adoption, but many don’t know where to start. The experts recommend these 41 tips for AI change management.
As Matt Kropp told me in our interview, BCG has a 10-20-70 rule for AI at work:
10% is the LLM or algorithm
20% is the software layer around it (like ChatGPT)
70% is the human factor
This 70% is exactly why change management is key in driving AI adoption.
But where do you start?
As I coach leaders at companies like Apple, Toyota, Amazon, L’Oréal, and Gartner in our Lead with AI program, I know that’s the question on everyone’s minds.
I don’t believe in gatekeeping this information, so here are 41 principles and tactics I share with our community members looking for winning AI change management principles.
Claude is so good
Prompt:
——–
I am using a video generator
Please give me a map of all the different types of shots and things I can enter for my prompt.
When you keep getting distracted by all of the extraneous items — such as those annoying videos and advertisements — that appear when you launch a web page, there is a solution for quickly hiding all of those items. It’s called Postlight Reader. I’ve been using it for years and wanted to put this information out there for folks who might not have heard about it.
I highly recommend it if you are having trouble reading an article and processing the information that it contains. Instructional Designers will know all about Extraneous Load (one of the types of Cognitive Load) and how it negatively impacts one’s learning and processing of the information that really counts (i.e., the Germane Cognitive Load).
Note the differences when I used Postlight Reader on an article out at cbsnews.com:
The page appears with all kinds of ads and videos going on… I can hardly process the information in the article due to these items:
Then, after I enabled this extension in Chrome and clicked on the icon for Postlight Reader, it stripped away all of those items and left me with the article that I wanted to read:
If you aren’t using it, I highly recommend that you give it a try.
The Postlight Reader extension for Chrome removes ads and distractions, leaving only text and images for a clean and consistent reading view on every site. Features:
Disable surrounding webpage noise and clutter with one click
Send To Kindle functionality
Adjust typeface and text size, and toggle between light or dark themes
Quick keyboard shortcut (Cmd + Esc for Mac users, Alt + ` for Windows users) to switch to Reader on any article page
From DSC: The above item is simply excellent!!! I love it!
We’re going to see a lot more of the Square, Stripe, Shopify-type startups pop up for agentic AI.
This one is like an AI-human broker.
1) Prompt an AI with a need
2) Give the AI a budget (real money)
3) AI turns need into plan with tasks
4) AI finds humans to complete the… https://t.co/UXf1bNZ4AK
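The steps in the tweet above suggest a simple broker loop: a need plus a budget becomes a costed task plan, and tasks are matched to human workers while funds remain. A hypothetical sketch, with all names, steps, and prices made up for illustration:

```python
# Hypothetical AI-human broker loop: a need plus a budget becomes a costed
# task plan, and tasks are assigned to human workers while funds remain.
# All names, steps, and prices here are made up for illustration.

def plan_tasks(need: str) -> list[tuple[str, float]]:
    """Illustrative planner returning (task, estimated cost) pairs."""
    return [(f"{need}: step {i}", 25.0) for i in (1, 2, 3)]

def broker(need: str, budget: float) -> list[str]:
    """Assign planned tasks until the budget runs out."""
    assigned = []
    for task, cost in plan_tasks(need):
        if cost > budget:
            break  # a real system might renegotiate scope or stop here
        budget -= cost
        assigned.append(task)  # a real system would post this to a marketplace
    return assigned

print(broker("design a logo", 60.0))
# prints: ['design a logo: step 1', 'design a logo: step 2']
```

The interesting design question such startups face is in the matching step: pricing tasks, verifying completed work, and handling disputes, much as payment platforms had to for transactions.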
3 new Chrome AI features for even more helpful browsing — from blog.google by Parisa Tabriz
See how Chrome’s new AI features, including Google Lens for desktop and Tab compare, can help you get things done more easily on the web.
On speaking to AI — from oneusefulthing.org by Ethan Mollick
Voice changes a lot of things
So, let’s talk about ChatGPT’s new Advanced Voice mode and the new AI-powered Siri. They are not just different approaches to talking to AI. In many ways, they represent the divide between two philosophies of AI – Copilots versus Agents, small models versus large ones, specialists versus generalists.
1. Flux, an open-source text-to-image creator that is comparable to industry leaders like Midjourney, was released by Black Forest Labs (the “original team” behind Stable Diffusion). It is capable of generating high quality text in images (there are tons of educational use cases). You can play with it on their demo page, on Poe, or by running it on your own computer (tutorial here).
Other items re: Flux:
How to FLUX — from heatherbcooper.substack.com by Heather Cooper
Where to use FLUX online & full tutorial to create a sleek ad in minutes
Also from Heather Cooper:
Introducing FLUX: Open-Source text to image model
FLUX… has been EVERYWHERE this week, as I’m sure you have seen. Developed by Black Forest Labs, it is an open-source image generation model that’s gaining attention for its ability to rival leading models like Midjourney, DALL·E 3, and SDXL.
What sets FLUX apart is its blend of creative freedom, precision, and accessibility—it’s available across multiple platforms and can be run locally.
Why FLUX Matters
FLUX’s open-source nature makes it accessible to a broad audience, from hobbyists to professionals.
It offers advanced multimodal and parallel diffusion transformer technology, delivering high visual quality, strong prompt adherence, and diverse outputs.
It’s available in 3 models:
FLUX.1 [pro]: A high-performance, commercial image synthesis model.
FLUX.1 [dev]: An open-weight, non-commercial variant of FLUX.1 [pro].
FLUX.1 [schnell]: A faster, distilled version of FLUX.1, operating up to 10x quicker.
During the weekend, image models made a comeback. Recently released Flux models can create realistic images with near-perfect text—straight from the model, without much patchwork. To get the party going, people are putting these images into video generation models to create pretty trippy videos. I can’t identify half of them as AI, and they’ll only get better. See this tutorial on how to create a video ad for your product.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
Apple announced “Apple Intelligence” at WWDC 2024, its name for a new suite of AI features for the iPhone, Mac, and more. Starting later this year, Apple is rolling out what it says is a more conversational Siri, custom, AI-generated “Genmoji,” and GPT-4o access that lets Siri turn to OpenAI’s chatbot when it can’t handle what you ask it for.
SAN FRANCISCO — Apple officially launched itself into the artificial intelligence arms race, announcing a deal with ChatGPT maker OpenAI to use the company’s technology in its products and showing off a slew of its own new AI features.
The announcements, made at the tech giant’s annual Worldwide Developers Conference on Monday in Cupertino, Calif., are aimed at helping the tech giant keep up with competitors such as Google and Microsoft, which have boasted in recent months that AI makes their phones, laptops and software better than Apple’s. In addition to Apple’s own homegrown AI tech, the company’s phones, computers and iPads will also have ChatGPT built in “later this year,” a huge validation of the importance of the highflying start-up’s tech.
The highly anticipated AI partnership is the first of its kind for Apple, which has been regarded by analysts as slower to adopt artificial intelligence than other technology companies such as Microsoft and Google.
The deal allows Apple’s millions of users to access technology from OpenAI, one of the highest-profile artificial intelligence companies of recent years. OpenAI has already established partnerships with a variety of technology and publishing companies, including a multibillion-dollar deal with Microsoft.
The real deal here is that Apple is literally putting AI into the hands of >1B people, most of whom will probably be using AI for the 1st time. And it’s delivering AI that’s actually useful (forget those Genmojis, we’re talking about implanting ChatGPT-4o’s brain into Apple devices).
It’s WWDC 2024 keynote time! Each year Apple kicks off its Worldwide Developers Conference with a few hours of just straight announcements, like the long-awaited Apple Intelligence and a makeover for smart AI assistant, Siri. We expected much of them to revolve around the company’s artificial intelligence ambitions (and here), and Apple didn’t disappoint. We also bring you news about Vision Pro and lots of feature refreshes.
Why Gamma is great for presentations — from Jeremy Caplan
Gamma has become one of my favorite new creativity tools. You can use it like Powerpoint or Google Slides, adding text and images to make impactful presentations. It lets you create vertical, square or horizontal slides. You can embed online content to make your deck stand out with videos, data or graphics. You can even use it to make quick websites.
Its best feature, though, is an easy-to-use application of AI. The AI will learn from any document you import, or you can use a text prompt to create a strong deck or site instantly.
ChatGPT has 180.5 million users out of which 100 million users are active weekly.
In January 2024, ChatGPT got 2.3 billion website visits and 2 million developers are using its API.
The highest percentage of ChatGPT users belong to USA (46.75%), followed by India (5.47%). ChatGPT is banned in 7 countries including Russia and China.
OpenAI’s projected revenue from ChatGPT is $2 billion in 2024.
Running ChatGPT costs OpenAI around $700,000 daily.
Sam Altman is seeking $7 trillion for a global AI chip project, while OpenAI is also listed as a major shareholder in Reddit.
ChatGPT offers a free version with GPT-3.5 and a Plus version with GPT-4, which is 40% more accurate and 82% safer, costing $20 per month.
ChatGPT is being used for automation, education, coding, data-analysis, writing, etc.
43% of college students and 80% of the Fortune 500 companies are using ChatGPT.
A 2023 study found 25% of US companies surveyed saved $50K-$70K using ChatGPT, while 11% saved over $100K.