What Are AI Agents—And Who Profits From Them? — from every.to by Evan Armstrong
The newest wave of AI research is changing everything

I’ve spent months talking with founders, investors, and scientists, trying to understand what this technology is and who the players are. Today, I’m going to share my findings. I’ll cover:

  • What an AI agent is
  • The major players
  • The technical bets
  • The future

Agentic workflows are loops—they can run many times in a row without needing a human involved for each step in the task. A language model will make a plan based on your prompt, utilize tools like a web browser to execute on that plan, ask itself if that answer is right, and close the loop by getting back to you with that answer.
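The loop described above — plan, act with a tool, self-check, repeat — can be sketched in a few lines of Python. Everything here is a hypothetical stand-in (`call_llm` and `run_tool` are placeholders, not any vendor's actual API); it is only meant to make the shape of the loop concrete.

```python
# A minimal sketch of the agentic loop described above: plan, act with a
# tool, self-check, and either loop again or return. call_llm and run_tool
# are hypothetical stand-ins for a real model endpoint and a real tool.
def call_llm(prompt: str) -> str:
    """Stand-in for a real language-model call."""
    # A real agent would hit a model endpoint here.
    return "DONE: example answer to: " + prompt

def run_tool(action: str) -> str:
    """Stand-in for a tool such as a web browser or code runner."""
    return "observation for " + action

def run_agent(task: str, max_steps: int = 5) -> str:
    # 1. The model makes a plan based on the prompt.
    context = call_llm("Make a step-by-step plan for: " + task)
    for _ in range(max_steps):  # 2. The loop runs with no human per step.
        step = call_llm("Given the context so far, pick the next action:\n" + context)
        if step.startswith("DONE:"):  # 3. The model judged its answer complete.
            return step.removeprefix("DONE:").strip()
        # 4. Otherwise, execute the step with a tool and fold the result back in.
        context += "\n" + run_tool(step)
    return call_llm("Best final answer given:\n" + context)
```

The key design point is that the model's own self-check (the `DONE:` test here) decides when the loop closes, rather than a human approving each step.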

But agentic workflows are an architecture, not a product. It gets even more complicated when you incorporate agents into products that customers will buy.

Early reports suggest GPT-5 is “materially better” and is being explicitly prepared for AI-agent use cases.

 


[Report] Generative AI Top 150: The World’s Most Used AI Tools (Feb 2024) — from flexos.work by Daan van Rossum
FlexOS.work surveyed Generative AI platforms to reveal which get used most. While ChatGPT reigns supreme, countless AI platforms are used by millions.

As the FlexOS research study “Generative AI at Work” concluded based on a survey amongst knowledge workers, ChatGPT reigns supreme.

AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch.
As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.

With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.



Artificial Intelligence Act: MEPs adopt landmark law — from europarl.europa.eu

  • Safeguards on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations


The untargeted scraping of facial images from CCTV footage to create facial recognition databases will be banned.


A New Surge in Power Use Is Threatening U.S. Climate Goals — from nytimes.com by Brad Plumer and Nadja Popovich
A boom in data centers and factories is straining electric grids and propping up fossil fuels.

Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.

Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.


OpenAI and the Fierce AI Industry Debate Over Open Source — from bloomberg.com by Rachel Metz

The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?

The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.


Researchers develop AI-driven tool for near real-time cancer surveillance — from medicalxpress.com by Mark Alewine; via The Rundown AI
Artificial intelligence has delivered a major win for pathologists and researchers in the fight for improved cancer treatments and diagnoses.

In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequenced AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.


 

How AI Is Already Transforming the News Business — from politico.com by Jack Shafer
An expert explains the promise and peril of artificial intelligence.

The early vibrations of AI have already been shaking the newsroom. One downside of the new technology surfaced at CNET and Sports Illustrated, where editors let AI run amok with disastrous results. Elsewhere in news media, AI is already writing headlines, managing paywalls to increase subscriptions, performing transcriptions, turning stories into audio feeds, discovering emerging stories, fact-checking, copy editing and more.

Felix M. Simon, a doctoral candidate at Oxford, recently published a white paper about AI’s journalistic future that eclipses many early studies. Swinging a bat from a crouch that is neither doomer nor utopian, Simon heralds both the downsides and promise of AI’s introduction into the newsroom and the publisher’s suite.

Unlike earlier technological revolutions, AI is poised to change the business at every level. It will become — if it isn’t already — the beginning of most story assignments and will become, for some, the new assignment editor. Used effectively, it promises to make news more accurate and timely. Used frivolously, it will spawn an ocean of spam. Wherever the production and distribution of news can be automated or made “smarter,” AI will surely step up. But the future has not yet been written, Simon counsels. AI in the newsroom will be only as bad or good as its developers and users make it.

Also see:

Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena — from cjr.org by Felix Simon

TABLE OF CONTENTS



EMO: Emote Portrait Alive – Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions — from humanaigc.github.io by Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo

We propose EMO, an expressive audio-driven portrait-video generation framework. Given a single reference image and vocal audio (e.g., talking or singing), our method can generate vocal avatar videos with expressive facial expressions and various head poses; videos can be of any duration, depending on the length of the input audio.


Adobe previews new cutting-edge generative AI tools for crafting and editing custom audio — from blog.adobe.com by the Adobe Research Team

New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI music generation and editing tool, Project Music GenAI Control allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs.

“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.


How AI copyright lawsuits could make the whole industry go extinct — from theverge.com by Nilay Patel
The New York Times’ lawsuit against OpenAI is part of a broader, industry-shaking copyright challenge that could define the future of AI.

There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use.


FCC officially declares AI-voiced robocalls illegal — from techcrunch.com by Devin Coldewey

The FCC’s war on robocalls has gained a new weapon in its arsenal with the declaration of AI-generated voices as “artificial” and therefore definitely against the law when used in automated calling scams. It may not stop the flood of fake Joe Bidens that will almost certainly trouble our phones this election season, but it won’t hurt, either.

The new rule, contemplated for months and telegraphed last week, isn’t actually a new rule — the FCC can’t just invent them with no due process. Robocalls are just a new term for something largely already prohibited under the Telephone Consumer Protection Act: artificial and pre-recorded messages being sent out willy-nilly to every number in the phone book (something that still existed when they drafted the law).


EIEIO…Chips Ahoy! — from dashmedia.co by Michael Moe, Brent Peus, and Owen Ritz


Here Come the AI Worms — from wired.com by Matt Burgess
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

 

Generative AI’s environmental costs are soaring — and mostly secret — from nature.com by Kate Crawford
First-of-its-kind US bill would address the environmental costs of the technology, but there’s a long way to go.

Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years — that the artificial intelligence (AI) industry is heading for an energy crisis. It’s an unusual admission. At the World Economic Forum’s annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” he said.

I’m glad he said it. I’ve seen consistent downplaying and denial about the AI industry’s environmental costs since I started publishing about them in 2018. Altman’s admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.


Get ready for the age of sovereign AI | Jensen Huang interview — from venturebeat.com by Dean Takahashi

Yesterday, Nvidia reported $22.1 billion in revenue for the fourth quarter of fiscal 2024 (ended January 31, 2024), easily topping Wall Street’s expectations. Revenue grew 265% from a year ago, thanks to the explosive growth of generative AI.

He also repeated his notion of “sovereign AI”: countries protect the data of their citizens, and companies the data of their employees, by containing large language models within the borders of the country or the company for safety purposes.



Yikes, Google — from theneurondaily.com by Noah Edelman
PLUS: racially diverse nazis…WTF?!

Google shoots itself in the foot.
Last week was the best AND worst week for Google re AI.

The good news is that its upcoming Gemini 1.5 Pro model showcases remarkable capabilities with its expansive context window (details forthcoming).

The bad news is Google’s AI chatbot “Gemini” is getting A LOT of heat after generating some outrageous responses. Take a look:

Also from the Daily:

  • Perplexity just dropped this new podcast, Discover Daily, that recaps the news in 3-4 minutes.
  • It already broke into the top 200 news podcasts within a week.
  • AND it’s all *100% AI-generated*.

Daily Digest: It’s Nvidia’s world…and we’re just living in it. — from bensbites.beehiiv.com

  • Nvidia is building a new type of data centre called an “AI factory.” Every company—biotech, self-driving, manufacturing, etc.—will need one.
  • Jensen is looking forward to foundational robotics and state space models. According to him, foundational robotics could have a breakthrough next year.
  • The crunch for Nvidia GPUs is here to stay. It won’t be able to catch up on supply this year, and probably not next year either.
  • A new generation of GPUs called Blackwell is coming out, and the performance of Blackwell is off the charts.
  • Nvidia’s business is now roughly 70% inference and 30% training, meaning AI is getting into users’ hands.

Gemma: Introducing new state-of-the-art open models  — from blog.google


 

 

Text to video via OpenAI’s Sora. (I had taken this screenshot on the 15th, but am posting it now.)

We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.

Introducing Sora, our text-to-video model. Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.

Along these lines, also see:

Pika; via Superhuman AI



An Ivy League school just announced its first AI degree — from qz.com by Michelle Cheng; via Barbara Anna Zielonka on LinkedIn
It’s a sign of the times. At the same time, AI talent is scarce.

At the University of Pennsylvania, undergraduate students in its school of engineering will soon be able to study for a bachelor of science degree in artificial intelligence.

What can one do with an AI degree? The University of Pennsylvania says students will be able to apply the skills they learn in school to build responsible AI tools, develop materials for emerging chips and hardware, and create AI-driven breakthroughs in healthcare through new antibiotics, among other things.



Google Pumps $27 Million Into AI Training After Microsoft Pledge—Here’s What To Know — from forbes.com by Robert Hart

Google on Monday announced plans to help train people in Europe with skills in artificial intelligence, the latest tech giant to invest in preparing workers and economies amid the disruption brought on by technologies they are racing to develop.


The Exhausting Pace of AI: Google’s Ultra Leap — from marcwatkins.substack.com by Marc Watkins

The acceleration of AI deployments has gotten so absurdly out of hand that a draft post I started a week ago about a new development is now out of date.

The Pace is Out of Control
A mere week since Ultra 1.0’s announcement, Google has now introduced us to Ultra 1.5, a model they are clearly positioning to be the leader in the field. Here is the full technical report for Gemini Ultra 1.5, and what it can do is stunning.

 

 

 


Maryville Announces $21 Million Investment in AI and New Technologies Amidst Record Growth — from maryville.edu; via Arthur “Art” Fredrich on LinkedIn

[St. Louis, MO, February 14, 2024] – In a bold move that counters the conventions of more traditional schools, Maryville University has unveiled a substantial $21 million multi-year investment in artificial intelligence (AI) and cutting-edge technologies. This groundbreaking initiative is set to transform the higher education experience, powered by the latest technology, to support student success and a five-star experience for thousands of students both on campus and online.

 

 

From DSC:
This would be huge for all of our learning ecosystems, as the learning agents could remember where a particular student or employee is on their learning curve for a particular topic.


Say What? Chat With RTX Brings Custom Chatbot to NVIDIA RTX AI PCs — from blogs.nvidia.com
Tech demo gives anyone with an RTX GPU the power of a personalized GPT chatbot.



 

OpenAI announces first partnership with a university — from cnbc.com by Hayden Field

Key Points:

  • OpenAI on Thursday announced its first partnership with a higher education institution.
  • Starting in February, Arizona State University will have full access to ChatGPT Enterprise and plans to use it for coursework, tutoring, research and more.
  • The partnership has been in the works for at least six months.
  • ASU plans to build a personalized AI tutor for students, allow students to create AI avatars for study help and broaden the university’s prompt engineering course.

A new collaboration with OpenAI charts the future of AI in higher education — from news.asu.edu

The collaboration between ASU and OpenAI brings the advanced capabilities of ChatGPT Enterprise into higher education, setting a new precedent for how universities enhance learning, creativity and student outcomes.

“ASU recognizes that augmented and artificial intelligence systems are here to stay, and we are optimistic about their ability to become incredible tools that help students to learn, learn more quickly and understand subjects more thoroughly,” ASU President Michael M. Crow said. “Our collaboration with OpenAI reflects our philosophy and our commitment to participating directly in the responsible evolution of AI learning technologies.”


AI <> Academia — from drphilippahardman.substack.com by Dr. Philippa Hardman
What might emerge from ASU’s pioneering partnership with OpenAI?

Phil’s Wish List #2: Smart Curriculum Development
ChatGPT assists in creating and updating course curricula, based on both student data and emerging domain and pedagogical research on the topic.

Output: using AI, it will be possible to review course content and make data-informed, automated recommendations based on the latest pedagogical and domain-specific research.

Potential Impact: increased dynamism and relevance in course content and reduced administrative lift for academics.



A full list of AI ideas from AI-for-Education.org

You can filter by category, by ‘What does it do?’, by AI tool or search for keywords.


Navigating the new normal: Adapting in the age of AI and hybrid work models — from chieflearningofficer.com by Dr. Kylie Ensrud

Unlike traditional leadership, adaptable leadership is not bound by rigid rules and protocols. Instead, it thrives on flexibility. Adaptable leaders are willing to experiment, make course corrections, and pivot when necessary. Adaptable leadership is about flexibility, resilience and a willingness to embrace change. It embodies several key principles that redefine the role of leaders in organizations:

  1. Embracing uncertainty

Adaptable leaders understand that uncertainty is the new norm. They do not shy away from ambiguity but instead, see it as an opportunity for growth and innovation. They encourage a culture of experimentation and learning from failure.

  2. Empowering teams

Instead of dictating every move, adaptable leaders empower their teams to take ownership of their work. They foster an environment of trust and collaboration, enabling individuals to contribute their unique perspectives and skills.

  3. Continuous learning

Adaptable leaders are lifelong learners. They are constantly seeking new knowledge, stay informed about industry trends and encourage their teams to do the same. They understand that knowledge is a dynamic asset that must be constantly updated.


Major AI in Education Related Developments this week — from stefanbauschard.substack.com by Stefan Bauschard
ASU integrates with ChatGPT, K-12 AI integrations, Agents & the Rabbit, Uruguay, Meta and AGI, Rethinking curriculum

“The greatest risk is leaving school curriculum unchanged when the entire world is changing.”
Hadi Partovi, founder of Code.org; angel investor in Facebook, Dropbox, Airbnb, Uber

Tutorbots in college. On a more limited scale, Georgia State University, Morgan State University, and the University of Central Florida are piloting a project using chatbots to support students in foundational math and English courses.


Pioneering AI-Driven Instructional Design in Small College Settings — from campustechnology.com by Gopu Kiron
For institutions that lack the budget or staff expertise to utilize instructional design principles in online course development, generative AI may offer a way forward.

Unfortunately, smaller colleges — arguably the institutions whose students are likely to benefit the most from ID enhancements — frequently find themselves excluded from authentically engaging in the ID arena due to tight budgets, limited faculty online course design expertise, and the lack of ID-specific staff roles. Despite this, recent developments in generative AI may offer these institutions a low-cost, tactical avenue to compete with more established players.


Google’s new AI solves math olympiad problems — from bensbites.beehiiv.com

There’s a new AI from Google DeepMind called AlphaGeometry that totally nails solving super hard geometry problems. We’re talking problems so tough only math geniuses who compete in the International Mathematical Olympiad can figure them out.


 

Introducing the GPT Store — from OpenAI
We’re launching the GPT Store to help you find useful and popular custom versions of ChatGPT.

It’s been two months since we announced GPTs, and users have already created over 3 million custom versions of ChatGPT. Many builders have shared their GPTs for others to use. Today, we’re starting to roll out the GPT Store to ChatGPT Plus, Team and Enterprise users so you can find useful and popular GPTs. Visit chat.openai.com/gpts to explore.



Introducing ChatGPT Team — from openai.com
We’re launching a new ChatGPT plan for teams of all sizes, which provides a secure, collaborative workspace to get the most out of ChatGPT at work.

ChatGPT Team offers access to our advanced models like GPT-4 and DALL·E 3, and tools like Advanced Data Analysis. It additionally includes a dedicated collaborative workspace for your team and admin tools for team management. As with ChatGPT Enterprise, you own and control your business data—we do not train on your business data or conversations, and our models don’t learn from your usage. More details on our data privacy practices can be found on our privacy page and Trust Portal.


GPT Store — from theneurondaily.com by Noah Edelman & Pete Huang

The App Store for ChatGPTs is here.

OpenAI finally launched its GPT Store—a hub offering access to over 3 million GPTs, for paid users (#sorrynotsorry).
If you missed pt. 1, pt. 2, and pt. 3 of our GPTs analysis, here’s the TLDR: GPTs are customized versions of ChatGPT pre-loaded with prompts or context, each designed to be good at specific tasks.

There’s a GPT for everything, like one for lesson plans, one that crunches numbers, and one that recommends books you’ll buy but never read.

The GPT Store is a game-changer.


OpenAI Just Released The GPT Store. Here’s How To Use It And Make Money With Your GPT — from artificialcorner.com by The Pycoach
Learn how to publish your GPT to the store and monetize it.

How to stand out on the GPT Store
The low barrier to entry for making GPTs will make earning money on the GPT Store difficult. Not everyone will make tons of money off their GPT, but I think those with the best chances of success will:

  • Use custom actions: This feature allows your GPT to connect to an API. Connecting to APIs gives your GPT new functionality that others won’t be able to replicate unless they have access to the API (here you can see my tutorial on how to add custom actions to your GPT).
  • Use knowledge: Knowledge is a feature that allows you to add files to your GPT. Adding exclusive information could enrich your GPT and help it stand out from the pack. Just remember that files can be downloaded when the code interpreter is enabled.
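The “custom actions” bullet above comes down to giving a GPT an OpenAPI schema that describes your API. A minimal sketch of what such a schema looks like, written here as a Python dict (the server URL, endpoint, and operation are hypothetical examples, not from the article):

```python
# A hypothetical custom-action schema: standard OpenAPI 3.1 fields
# describing one GET endpoint a GPT could call. The API itself
# (api.example.com, /recommendations) is invented for illustration.
book_recommender_action = {
    "openapi": "3.1.0",
    "info": {"title": "Book Recommender API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/recommendations": {
            "get": {
                # operationId is the name the GPT uses to invoke the call
                "operationId": "getRecommendations",
                "summary": "Recommend books for a given genre",
                "parameters": [{
                    "name": "genre",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {"description": "A list of recommended titles"},
                },
            }
        }
    },
}
```

A builder would paste the JSON/YAML equivalent of this into the GPT editor, and the moat is exactly what the bullet says: anyone can copy the schema, but not the API behind it.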

OpenAI releases the app store of AI — from superhuman.ai by Zain Kahn

App stores are ginormous businesses. According to CNBC’s estimates, Apple’s App Store grossed north of $70 billion in 2022. That’s more revenue than Spotify, Shopify and Airbnb generated in the same year — combined.

When you look at the size of the opportunity that app stores built on top of popular platforms unlock, OpenAI’s latest move to launch a GPT Store is another bold bet by the startup that’s already leading the LLM and chatbot markets with GPT-4 and ChatGPT.

Announced [on 1/10/24], the GPT Store is a place for ChatGPT users to find custom versions of the chatbot that are designed for specific use cases.

 

OpenAI’s app store for GPTs will launch next week — from techcrunch.com by Kyle Wiggers

OpenAI plans to launch a store for GPTs, custom apps based on its text-generating AI models (e.g. GPT-4), sometime in the coming week.

The GPT Store was announced last year during OpenAI’s first annual developer conference, DevDay, but delayed in December — almost certainly due to the leadership shakeup that occurred in November, just after the initial announcement.

 

The biggest things that happened in AI this year — from superhuman.ai by Zain Kahn

January:

  • Microsoft raises eyebrows with a huge $10 billion investment in OpenAI.

February:

  • Meta launches LLaMA, its open-source rival to OpenAI’s models.
  • OpenAI announces ChatGPT Plus, a paid version of their chatbot.
  • Microsoft announces a new AI-powered Bing Search.

March:

  • OpenAI announces the powerful GPT-4 model, still considered to be the gold standard.
  • Midjourney releases V5, which brings AI-powered image generation one step closer to reality.
  • Microsoft launches Copilot for Microsoft 365.
  • Google launches Bard, its rival to ChatGPT.

…and more


AI 2023: A Year in Review — from stefanbauschard.substack.com by Stefan Bauschard
2023 developments in AI and a hint of what they are building toward

Some of the items that Stefan includes in his posting include:

  • ChatGPT and other language models that generate text.
  • Image generators.
  • Video generators.
  • AI models that can read, hear, and speak.
  • AI models that can see.
  • Improving models.
  • “Multimodal” models.
  • Training on specific content.
  • Reasoning & planning.
  • …and several others

The Dictionary.com Word of the Year is “hallucinate.” — from content.dictionary.com by Nick Norlen and Grant Barrett; via The Rundown AI

hallucinate
[ huh-loo-suh-neyt ]

verb
(of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.


Soon, every employee will be both AI builder and AI consumer — from zdnet.com by Joe McKendrick, via Robert Gibson on LinkedIn
“Standardized tools and platforms as well as advanced low- or no-code tech may enable all employees to become low-level engineers,” suggests a recent report.

The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report’s authors argue, or — as they put it — “instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one.”

Along these lines, see:

  • TECH TRENDS 2024 — from deloitte.com
    Six emerging technology trends demonstrate that in an age of generative machines, it’s more important than ever for organizations to maintain an integrated business strategy, a solid technology foundation, and a creative workforce.

UK Supreme Court rules AI is not an inventor — from theverge.com by Emilia David

The ruling follows a similar decision denying patent registrations naming AI as creators.

The UK Supreme Court ruled that AI cannot get patents, declaring it cannot be named as an inventor of new products because the law considers only humans or companies to be creators.


The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work — from nytimes.com by Michael M. Grynbaum and Ryan Mac

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

On this same topic, also see:


Apple’s iPhone Design Chief Enlisted by Jony Ive, Sam Altman to Work on AI Devices — from bloomberg.com by Mark Gurman (behind paywall)

  • Design executive Tang Tan is set to leave Apple in February
  • Tan will join Ive’s LoveFrom design studio, work on AI project

AI 2023: Chatbots Spark New Tools — from heatherbcooper.substack.com by Heather Cooper

ChatGPT and Other Chatbots
The arrival of ChatGPT sparked tons of new AI tools and changed the way we thought about using a chatbot in our daily lives.

Chatbots like ChatGPT, Perplexity, Claude, and Bing Chat can help content creators by quickly generating ideas, outlines, drafts, and full pieces of content, allowing creators to produce more high-quality content in less time.

These AI tools boost efficiency and creativity in content production across formats like blog posts, social captions, newsletters, and more.


Microsoft’s next Surface laptops will reportedly be its first true ‘AI PCs’ — from theverge.com by Emma Roth
Next year’s Surface Laptop 6 and Surface Pro 10 will feature Arm and Intel options, according to Windows Central.

Microsoft is getting ready to upgrade its Surface lineup with new AI-enabled features, according to a report from Windows Central. Unnamed sources told the outlet the upcoming Surface Pro 10 and Surface Laptop 6 will come with a next-gen neural processing unit (NPU), along with Intel and Arm-based options.


How one of the world’s oldest newspapers is using AI to reinvent journalism — from theguardian.com by Alexandra Topping
Berrow’s Worcester Journal is one of several papers owned by the UK’s second biggest regional news publisher to hire ‘AI-assisted’ reporters

With the AI-assisted reporter churning out bread-and-butter content, other reporters in the newsroom are freed up to go to court, meet a councillor for a coffee or attend a village fete, says the Worcester News editor, Stephanie Preece.

“AI can’t be at the scene of a crash, in court, in a council meeting, it can’t visit a grieving family or look somebody in the eye and tell that they’re lying. All it does is free up the reporters to do more of that,” she says. “Instead of shying away from it, or being scared of it, we are saying AI is here to stay – so how can we harness it?”



What to Expect in AI in 2024 — from hai.stanford.edu
Seven Stanford HAI faculty and fellows predict the biggest stories for next year in artificial intelligence.

Topics include:

  • White Collar Work Shifts
  • Deepfake Proliferation
  • GPU Shortage
  • More Helpful Agents
  • Hopes for U.S. Regulation
  • Asking Big Questions, Applying New Policies
  • Companies Will Navigate Complicated Regulations

Addendum on 1/2/24:


 
© 2024 | Daniel Christian