2024: The State of Generative AI in the Enterprise — from menlovc.com (Menlo Ventures)
The enterprise AI landscape is being rewritten in real time. As pilots give way to production, we surveyed 600 U.S. enterprise IT decision-makers to reveal the emerging winners and losers.

This spike in spending reflects a wave of organizational optimism; 72% of decision-makers anticipate broader adoption of generative AI tools in the near future. This confidence isn’t just speculative—generative AI tools are already deeply embedded in the daily work of professionals, from programmers to healthcare providers.

Despite this positive outlook and increasing investment, many decision-makers are still figuring out what will and won’t work for their businesses. More than a third of our survey respondents do not have a clear vision for how generative AI will be implemented across their organizations. This doesn’t mean they’re investing without direction; it simply underscores that we’re still in the early stages of a large-scale transformation. Enterprise leaders are just beginning to grasp the profound impact generative AI will have on their organizations.


Business spending on AI surged 500% this year to $13.8 billion, says Menlo Ventures — from cnbc.com by Hayden Field

Key Points

  • Business spending on generative AI surged 500% this year, hitting $13.8 billion — up from just $2.3 billion in 2023, according to data from Menlo Ventures released Wednesday.
  • OpenAI ceded market share in enterprise AI, declining from 50% to 34%, per the report.
  • Amazon-backed Anthropic doubled its market share from 12% to 24%.

Microsoft quietly assembles the largest AI agent ecosystem—and no one else is close — from venturebeat.com by Matt Marshall

Microsoft has quietly built the largest enterprise AI agent ecosystem, with over 100,000 organizations creating or editing AI agents through its Copilot Studio since launch – a milestone that positions the company ahead in one of enterprise tech’s most closely watched and exciting segments.

The rapid adoption comes as Microsoft significantly expands its agent capabilities. At its Ignite conference [that started on 11/19/24], the company announced it will allow enterprises to use any of the 1,800 large language models (LLMs) in the Azure catalog within these agents – a significant move beyond its exclusive reliance on OpenAI’s models. The company also unveiled autonomous agents that can work independently, detecting events and orchestrating complex workflows with minimal human oversight.


Now Hear This: World’s Most Flexible Sound Machine Debuts — from
Using text and audio as inputs, a new generative AI model from NVIDIA can create any combination of music, voices and sounds.

Along these lines, also see:


AI Agents Versus Human Agency: 4 Ways To Navigate Our AI-Driven World — from forbes.com by Cornelia C. Walther

To understand the implications of AI agents, it’s useful to clarify the distinctions between AI, generative AI, and AI agents and explore the opportunities and risks they present to our autonomy, relationships, and decision-making.

AI Agents: These are specialized applications of AI designed to perform tasks or simulate interactions. AI agents can be categorized into:

    • Tool Agents…
    • Simulation Agents…

While generative AI creates outputs from prompts, AI agents use AI to act with intention, whether to assist (tool agents) or emulate (simulation agents). The latter’s ability to mirror human thought and action offers fascinating possibilities — and raises significant risks.

 

Miscommunication Leads AI-Based Hiring Tools Astray — from adigaskell.org

Nearly every Fortune 500 company now uses artificial intelligence (AI) to screen resumes and assess test scores to find the best talent. However, new research from the University of Florida suggests these AI tools might not be delivering the results hiring managers expect.

The problem stems from a simple miscommunication between humans and machines: AI thinks it’s picking someone to hire, but hiring managers only want a list of candidates to interview.

Without knowing about this next step, the AI might choose safe candidates. But if it knows there will be another round of screening, it might suggest different and potentially stronger candidates.


AI agents explained: Why OpenAI, Google and Microsoft are building smarter AI agents — from digit.in by Jayesh Shinde

In the last two years, the world has seen breakneck advancement in the generative AI space, from text-to-text to text-to-image and text-to-video capabilities. All of that has been a stepping stone for the next big AI breakthrough – AI agents. According to Bloomberg, OpenAI is preparing to launch its first autonomous AI agent, codenamed ‘Operator,’ as early as January 2025.

Apparently, this OpenAI agent – or Operator, as it’s codenamed – is designed to perform complex tasks independently. By understanding user commands through voice or text, the agent will reportedly handle tasks such as controlling applications on a computer, sending emails, and booking flights. Stuff that ChatGPT, Copilot, Google Gemini or any other LLM-based chatbot just can’t do on its own.


2025: The year ‘invisible’ AI agents will integrate into enterprise hierarchies  — from venturebeat.com by Taryn Plumb

In the enterprise of the future, human workers are expected to work closely alongside sophisticated teams of AI agents.

According to McKinsey, generative AI and other technologies have the potential to automate 60 to 70% of employees’ work. And, already, an estimated one-third of American workers are using AI in the workplace — oftentimes unbeknownst to their employers.

However, experts predict that 2025 will be the year that these so-called “invisible” AI agents begin to come out of the shadows and take more of an active role in enterprise operations.

“Agents will likely fit into enterprise workflows much like specialized members of any given team,” said Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicAI.


State of AI Report 2024 Summary — from ai-supremacy.com by Michael Spencer
Part I, Consolidation, emergence and adoption. 


Which AI Image Model Is the Best Speller? Let’s Find Out! — from whytryai.com by Daniel Nest
I test 7 image models to find those that can actually write.

The contestants
I picked 7 participants for today’s challenge:

  1. DALL-E 3 by OpenAI (via Microsoft Designer)
  2. FLUX1.1 [pro] by Black Forest Labs (via Glif)
  3. Ideogram 2.0 by Ideogram (via Ideogram)
  4. Imagen 3 by Google (via Image FX)
  5. Midjourney 6.1 by Midjourney (via Midjourney)
  6. Recraft V3 by Recraft (via Recraft)
  7. Stable Diffusion 3.5 Large by Stability AI (via Hugging Face)

How to get started with AI agents (and do it right) — from venturebeat.com by Taryn Plumb

So how can enterprises choose when to adopt third-party models, open source tools or build custom, in-house fine-tuned models? Experts weigh in.


OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI — from bloomberg.com (behind a paywall)
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.


OpenAI and others seek new path to smarter AI as current methods hit limitations — from reuters.com by Krystal Hu and Anna Tong

Summary

  • AI companies face delays and challenges with training new large language models
  • Some researchers are focusing on more time for inference in new models
  • Shift could impact AI arms race for resources like chips and energy

NVIDIA Advances Robot Learning and Humanoid Development With New AI and Simulation Tools — from blogs.nvidia.com by Spencer Huang
New Project GR00T workflows and AI world model development technologies to accelerate robot dexterity, control, manipulation and mobility.


How Generative AI is Revolutionizing Product Development — from intelligenthq.com

A recent report from McKinsey predicts that generative AI could unlock $2.6 trillion to $4.4 trillion in value annually within product development and innovation across various industries. This staggering figure highlights just how significantly generative AI is set to transform the landscape of product development. Generative AI app development is driving innovation by using the power of advanced algorithms to generate new ideas, optimize designs, and personalize products at scale. It is also becoming a cornerstone of competitive advantage in today’s fast-paced market. As businesses look to stay ahead, understanding and integrating technologies like generative AI app development into product development processes is becoming more crucial than ever.


What are AI Agents: How To Create a Based AI Agent — from ccn.com by Lorena Nessi

Key Takeaways

  • AI agents handle complex, autonomous tasks beyond simple commands, showcasing advanced decision-making and adaptability.
  • The Based AI Agent template by Coinbase and Replit provides an easy starting point for developers to build blockchain-enabled AI agents.
  • Based AI agents specifically integrate with blockchain, supporting crypto wallets and transactions.
  • Securing API keys in development is crucial to protect the agent from unauthorized access.
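
On that last point, here is a minimal sketch of keeping an API key out of source code by reading it from an environment variable. The variable name AGENT_API_KEY is a hypothetical placeholder, not something prescribed by the CCN article or the Based AI Agent template.

```python
import os
import sys

def load_api_key(var_name: str = "AGENT_API_KEY") -> str:
    """Read an API key from the environment so it never appears in source code or version control."""
    key = os.environ.get(var_name)
    if not key:
        # Fail fast with a clear message instead of running the agent with a missing credential.
        sys.exit(f"Missing {var_name}. Set it in your shell or a .env file that is kept out of git.")
    return key

if __name__ == "__main__":
    api_key = load_api_key()
    # Never log or print the key itself; the length is enough to confirm it loaded.
    print(f"API key loaded ({len(api_key)} characters).")
```

The same pattern applies to wallet secrets and RPC credentials in blockchain-enabled agents.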

What are AI Agents and How Are They Used in Different Industries? — from rtinsights.com by Salvatore Salamone
AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries.

 

10 Graphic Design Trends to Pay Attention to in 2025 — from graphicmama.com by Al Boicheva

We’ll go on a hunt for bold, abstract, and naturalist designs, cutting-edge AI tools, and so much more, all pushing boundaries and rethinking what we already know about design. In 2025, we will see new ways to animate ideas, revisit retro styles with a modern twist, and embrace clean, but sophisticated aesthetics. For designers and design enthusiasts alike, these trends are set to bring a new level of excitement to the world of design.

Here are the Top 10 Graphic Design Trends in 2025:

 

Opening Keynote – GS1

Bringing generative AI to video with Adobe Firefly Video Model

Adobe Launches Firefly Video Model and Enhances Image, Vector and Design Models

  • The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
  • Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
  • Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises

Photoshop delivers powerful innovation for image editing, ideation, 3D design, and more

Even more speed, precision, and power: Get started with the latest Illustrator and InDesign features for creative professionals

Adobe Introduces New Global Initiative Aimed at Helping 30 Million Next-Generation Learners Develop AI Literacy, Content Creation and Digital Marketing Skills by 2030

Add sound to your video via text — Project Super Sonic:



New Dream Weaver — from aisecret.us
Explore Adobe’s New Firefly Video Generative Model

Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.

 

Finalists of the 2024 Comedy Wildlife Photography Awards Focus on the Wily and Witless — from thisiscolossal.com by Kate Mothes


Speaking of photography, here’s a related item:

AI Photo Editors: A Quick Guide to Elevate Your Images — from intelligenthq.com

With the rise of artificial intelligence, photo editing has become accessible and efficient for everyone. An AI photo editing tool can transform photos in seconds, producing professional-level results without requiring extensive skills. From adjusting lighting to removing backgrounds, these tools automate complex edits, enabling users to create stunning visuals effortlessly. Whether a beginner or an experienced photographer, AI-powered editors offer a wide range of features that help elevate your images. This guide will introduce you to the key functionalities of AI image editors and provide insights on maximising their potential.

 

AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh
The market for AI products and services could reach between $780 billion and $990 billion by 2027.

At a Glance

  • The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
  • Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
  • Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.

Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”


And on a somewhat related note (i.e., emerging technologies), also see the following two postings:

Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo
As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.

Key Takeaways

  • Robots’ potentials have been a fascination for humans and have even led to a booming field of robot-assisted surgery.
  • Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
  • The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.

Proto hologram tech allows cancer patients to receive specialist care without traveling large distances — from inavateonthenet.net

“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.




Clone your voice in minutes: The AI trick 95% don’t know about — from aidisruptor.ai by Alex McFarland
Warning: May cause unexpected bouts of talking to yourself

Now that you’ve got your voice clone, what can you do with it?

  1. Content Creation:
    • Podcast Production: Record episodes in half the time. Your listeners won’t know the difference, but your schedule will thank you.
    • Audiobook Narration: Always wanted to narrate your own book? Now you can, without spending weeks in a recording studio.
    • YouTube Videos: Create voiceovers for your videos in multiple languages. World domination, here you come!
  2. Business Brilliance:
    • Customer Service: Personalized automated responses that actually sound personal.
    • Training Materials: Create engaging e-learning content in your own voice, minus the hours of recording.
    • Presentations: Never worry about losing your voice before a big presentation again. Your clone’s got your back.

185 real-world gen AI use cases from the world’s leading organizations — from blog.google by Brian Hall; via Daniel Nest’s Why Try AI

In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.

In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.

Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.


AI Data Drop: 3 Key Insights from Real-World Research on AI Usage — from microsoft.com; via Daniel Nest’s Why Try AI
One of the largest studies of Copilot usage—at nearly 60 companies—reveals how AI is changing the way we work.

  1. AI is starting to liberate people from email
  2. Meetings are becoming more about value creation
  3. People are co-creating more with AI—and with one another


*** Dharmesh has been working on creating agent.ai — a professional network for AI agents.***


Speaking of agents, also see:

Onboarding the AI workforce: How digital agents will redefine work itself — from venturebeat.com by Gary Grossman

AI in 2030: A transformative force

  1. AI agents are integral team members
  2. The emergence of digital humans
  3. AI-driven speech and conversational interfaces
  4. AI-enhanced decision-making and leadership
  5. Innovation and research powered by AI
  6. The changing nature of job roles and skills

AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper
The latest AI video models that deliver results

AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.

No worries – we do have plenty of generative AI video tools we can use right now.

  • Kling AI launched its updated v1.5, and the quality of its image-to-video and text-to-video output is impressive.
  • Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
  • Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
  • …plus several more

 



Introducing OpenAI o1 – from openai.com

We’ve developed a new series of AI models designed to spend more time thinking before they respond. Here is the latest news on o1 research, product and other updates.




Something New: On OpenAI’s “Strawberry” and Reasoning — from oneusefulthing.org by Ethan Mollick
Solving hard problems in new ways

The new AI model, called o1-preview (why are the AI companies so bad at names?), lets the AI “think through” a problem before solving it. This lets it address very hard problems that require planning and iteration, like novel math or science questions. In fact, it can now beat human PhD experts in solving extremely hard physics problems.

To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.


What is the point of Super Realistic AI? — from Heather Cooper who runs Visually AI on Substack

The arrival of super realistic AI image generation, powered by models like Midjourney, FLUX.1, and Ideogram, is transforming the way we create and use visual content.

Recently, many creators (myself included) have been exploring super realistic AI more and more.

But where can this actually be used?

Super realistic AI image generation will have far-reaching implications across various industries and creative fields. Its importance stems from its ability to bridge the gap between imagination and visual representation, offering multiple opportunities for innovation and efficiency.

Heather goes on to mention applications in:

  • Creative Industries
  • Entertainment and Media
  • Education and Training

NotebookLM now lets you listen to a conversation about your sources — from blog.google by Biao Wang
Our new Audio Overview feature can turn documents, slides, charts and more into engaging discussions with one click.

Today, we’re introducing Audio Overview, a new way to turn your documents into engaging audio discussions. With one click, two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.


Bringing generative AI to video with Adobe Firefly Video Model — from blog.adobe.com by Ashley Still

Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.

Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content.

We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.

 

A third of all generative AI projects will be abandoned, says Gartner — from zdnet.com by Tiernan Ray
The high upfront cost of deployment is one of the challenges that can doom generative AI projects

Companies are “struggling” to find value in the generative artificial intelligence (Gen AI) projects they have undertaken, and one-third of initiatives will end up getting abandoned, according to a recent report by analyst firm Gartner.

The report states at least 30% of Gen AI projects will be abandoned after the proof-of-concept stage by the end of 2025.

From DSC:
But I wouldn’t write off the other two thirds of projects that will make it. I wouldn’t write off the future of AI in our world. AI-based technologies are already massively impacting graphic design, film, media, and more creative outlets. See the tweet below for some examples of what I’m talking about.



 

When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia
As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.


Per The Rundown AI:

The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.

Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.


Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius
Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.

Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.


The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall]
The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.


[Claude] Artifacts are now generally available — from anthropic.com

[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.

Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.


MIT’s AI Risk Repository — a comprehensive database of risks from AI systems

What are the risks from Artificial Intelligence?
A comprehensive living database of over 700 AI risks categorized by their cause and risk domain.

What is the AI Risk Repository?
The AI Risk Repository has three parts:

  • The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  • The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
  • The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
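
As a rough illustration of the structure described above, here is a hypothetical sketch of how a single repository entry might be modeled in code; the field names and causal tags are assumptions for illustration, not the repository’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a risk database: the quoted risk plus its two taxonomy labels."""
    quote: str              # verbatim excerpt from a source framework
    source_framework: str   # which of the 43 existing frameworks it came from
    page_number: int
    causal_tags: dict       # how, when, and why the risk occurs (Causal Taxonomy)
    domain: str             # one of the 7 domains, e.g. "Misinformation"
    subdomain: str          # one of the 23 subdomains, e.g. "False or misleading information"

example = AIRisk(
    quote="AI systems may generate convincing but false content.",
    source_framework="Example Framework",
    page_number=12,
    causal_tags={"entity": "AI", "intent": "unintentional", "timing": "post-deployment"},
    domain="Misinformation",
    subdomain="False or misleading information",
)
print(example.domain, "/", example.subdomain)
```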

California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI — from newsday.com by The Associated Press

SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

Per Oncely:

The Details:

  • Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
  • Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
  • Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.

New in Gemini: Custom Gems and improved image generation with Imagen 3 — from blog.google
The ability to create custom Gems is coming to Gemini Advanced subscribers, and updated image generation capabilities with our latest Imagen 3 model are coming to everyone.

We have new features rolling out [starting on 8/28/24] that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.


Cut the Chatter, Here Comes Agentic AI — from trendmicro.com

Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”, it’s agentic AI.

Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.

In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.

Citing AWS guidance, ZDNET counts six different potential types of AI agents:

    • Simple reflex agents for tasks like resetting passwords
    • Model-based reflex agents for pro vs. con decision making
    • Goal-/rule-based agents that compare options and select the most efficient pathways
    • Utility-based agents that compare for value
    • Learning agents
    • Hierarchical agents that manage and assign subtasks to other agents
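
To make the first (and simplest) category concrete, here is a minimal sketch of a simple reflex agent: fixed condition-action rules with no memory, goals, or learning. The password-reset rule is a hypothetical example, not code from AWS or ZDNET.

```python
# A minimal simple reflex agent: fixed condition-action rules, no state and no planning.
RULES = {
    "forgot_password": "send_password_reset_link",
    "account_locked": "unlock_account_after_verification",
}

def simple_reflex_agent(percept: str) -> str:
    """Map the current percept directly to an action; escalate anything unrecognized."""
    return RULES.get(percept, "escalate_to_human")

if __name__ == "__main__":
    for event in ["forgot_password", "billing_question"]:
        print(event, "->", simple_reflex_agent(event))
```

The later categories in the list layer state (model-based), objectives (goal-based), scoring (utility-based), learning, or delegation to sub-agents (hierarchical) on top of this basic loop.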

Ask Claude: Amazon turns to Anthropic’s AI for Alexa revamp — from reuters.com by Greg Bensinger

Summary:

  • Amazon developing new version of Alexa with generative AI
  • Retailer hopes to generate revenue by charging for its use
  • Concerns about in-house AI prompt Amazon to turn to Anthropic’s Claude, sources say
  • Amazon says it uses many different technologies to power Alexa

Alibaba releases new AI model Qwen2-VL that can analyze videos more than 20 minutes long — from venturebeat.com by Carl Franzen


Hobbyists discover how to insert custom fonts into AI-generated images — from arstechnica.com by Benj Edwards
Like adding custom art styles or characters, in-world typefaces come to Flux.


200 million people use ChatGPT every week – up from 100 million last fall, says OpenAI — from zdnet.com by Sabrina Ortiz
Nearly two years after launching, ChatGPT continues to draw new users. Here’s why.

 

Generative AI and the Time Management Revolution — from ai-mindset.ai by Conor Grennan

Here’s how we need to change our work lives:

  1. RECLAIM: Use generative AI to speed up your daily tasks. Be ruthless. Anything that can be automated, should be.
  2. PROTECT: This is the crucial step. That time you’ve saved? Protect it like it’s the last slice of pizza. Block it off in your calendar. Tell your team it’s sacred.
  3. ELEVATE: Use this protected time for high-level thinking. Strategy. Innovation. The big, meaty problems you never have time for.
  4. AMPLIFY: Here’s where it gets cool. Use generative AI to amp up your strategic thinking. Need to brainstorm solutions to a complex problem? Want to analyze market trends? Generative AI is your new thinking partner.

The top 100 Gen AI Consumer Apps — 3rd edition — from a16z.com by Andreessen Horowitz

But amid the relentless onslaught of product launches, investment announcements, and hyped-up features, it’s worth asking: Which of these gen AI apps are people actually using? Which behaviors and categories are gaining traction among consumers? And which AI apps are people returning to, versus dabbling and dropping?

Welcome to the third installment of the Top 100 Gen AI Consumer Apps.

 


Gen AI’s next inflection point: From employee experimentation to organizational transformation — from mckinsey.com by Charlotte Relyea, Dana Maor, and Sandra Durth with Jan Bouly
As many employees adopt generative AI at work, companies struggle to follow suit. To capture value from current momentum, businesses must transform their processes, structures, and approach to talent.

To harness employees’ enthusiasm and stay ahead, companies need a holistic approach to transforming how the whole organization works with gen AI; the technology alone won’t create value.

Our research shows that early adopters prioritize talent and the human side of gen AI more than other companies (Exhibit 3). Our survey shows that nearly two-thirds of them have a clear view of their talent gaps and a strategy to close them, compared with just 25 percent of the experimenters. Early adopters focus heavily on upskilling and reskilling as a critical part of their talent strategies, as hiring alone isn’t enough to close gaps and outsourcing can hinder strategic-skills development. Finally, 40 percent of early-adopter respondents say their organizations provide extensive support to encourage employee adoption, versus 9 percent of experimenter respondents.


Adobe drops ‘Magic Fixup’: An AI breakthrough in the world of photo editing — from venturebeat.com by Michael Nuñez

Adobe researchers have revealed an AI model that promises to transform photo editing by harnessing the power of video data. Dubbed “Magic Fixup,” this new technology automates complex image adjustments while preserving artistic intent, potentially reshaping workflows across multiple industries.

Magic Fixup’s core innovation lies in its unique approach to training data. Unlike previous models that relied solely on static images, Adobe’s system learns from millions of video frame pairs. This novel method allows the AI to understand the nuanced ways objects and scenes change under varying conditions of light, perspective, and motion.


Top AI tools people actually use — from heatherbcooper.substack.com by Heather Cooper
How generative AI tools are changing the creative landscape

The shift toward creative tools
Creative tools made up 52% of the top generative AI apps on the list. This seems to reflect a growing consumer demand for accessible creativity through AI with tools for image, music, speech, video, and editing.

Creative categories include:

  • Image: Civitai, Leonardo, Midjourney, Yodayo, Ideogram, SeaArt
  • Music: Suno, Udio, VocalRemover
  • Speech: ElevenLabs, Speechify
  • Video: Luma AI, Viggle, Invideo AI, Vidnoz, ClipChamp
  • Editing: Cutout Pro, Veed, Photoroom, Pixlr, PicWish

Why it matters:
Creative apps are gaining traction because they empower digital artists and content creators with AI-driven tools that simplify and enhance the creative process, making professional-level work more accessible than ever.

 

What Students Want: Key Results from DEC Global AI Student Survey 2024 — from digitaleducationcouncil.com by Digital Education Council

  • 86% of students globally are regularly using AI in their studies, with 54% of them using AI on a weekly basis, the recent Digital Education Council Global AI Student Survey found.
  • ChatGPT was found to be the most widely used AI tool, with 66% of students using it, and over 2 in 3 students reported using AI for information searching.
  • Despite their high rates of AI usage, 1 in 2 students do not feel AI ready. 58% reported that they do not feel they have sufficient AI knowledge and skills, and 48% do not feel adequately prepared for an AI-enabled workplace.

Chatting with WEF about ChatGPT in the classroom — from futureofbeinghuman.com by Andrew Maynard
A short video on generative AI in education from the World Economic Forum


The Post-AI Instructional Designer — from drphilippahardman.substack.com by Dr. Philippa Hardman
How the ID role is changing, and what this means for your key skills, roles & responsibilities

Specifically, the study revealed that teachers who reported most productivity gains were those who used AI not just for creating outputs (like quizzes or worksheets) but also for seeking input on their ideas, decisions and strategies.

Those who engaged with AI as a thought partner throughout their workflow, using it to generate ideas, define problems, refine approaches, develop strategies and gain confidence in their decisions gained significantly more from their collaboration with AI than those who only delegated functional tasks to AI.  


Leveraging Generative AI for Inclusive Excellence in Higher Education — from er.educause.edu by Lorna Gonzalez, Kristi O’Neil-Gonzalez, Megan Eberhardt-Alstot, Michael McGarry and Georgia Van Tyne
Drawing from three lenses of inclusion, this article considers how to leverage generative AI as part of a constellation of mission-centered inclusive practices in higher education.

The hype and hesitation about generative artificial intelligence (AI) diffusion have led some colleges and universities to take a wait-and-see approach. However, AI integration does not need to be an either/or proposition where its use is either embraced or restricted or its adoption aimed at replacing or outright rejecting existing institutional functions and practices. Educators, educational leaders, and others considering academic applications for emerging technologies should consider ways in which generative AI can complement or augment mission-focused practices, such as those aimed at accessibility, diversity, equity, and inclusion. Drawing from three lenses of inclusion—accessibility, identity, and epistemology—this article offers practical suggestions and considerations that educators can deploy now. It also presents an imperative for higher education leaders to partner toward an infrastructure that enables inclusive practices in light of AI diffusion.

An example way to leverage AI:

How to Leverage AI for Identity Inclusion
Educators can use the following strategies to intentionally design instructional content with identity inclusion in mind.

  • Provide a GPT or AI assistant with upcoming lesson content (e.g., lecture materials or assignment instructions) and ask it to provide feedback (e.g., troublesome vocabulary, difficult concepts, or complementary activities) from certain perspectives. Begin with a single perspective (e.g., first-time, first-year student), but layer in more to build complexity as you interact with the GPT output.
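
A minimal sketch of that strategy using the OpenAI Python client; the model name, file name, and prompt wording are placeholders rather than anything prescribed by the article, and any chat-capable model would work.

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

client = OpenAI()

lesson_content = open("week3_lecture_outline.txt").read()  # placeholder file name
perspective = "a first-time, first-year student who is new to the discipline"

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever your institution licenses
    messages=[
        {"role": "system",
         "content": f"You review course materials from the perspective of {perspective}."},
        {"role": "user",
         "content": "Identify troublesome vocabulary, difficult concepts, and one "
                    "complementary activity for this lesson:\n\n" + lesson_content},
    ],
)
print(response.choices[0].message.content)
```

Layering in additional perspectives, as suggested above, is then just a matter of re-running the call with a different perspective string.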


7 Ways to Use AI Music in Your Classroom — from classtechtips.com by Monica Burns


Change blindness — from oneusefulthing.org by Ethan Mollick
21 months later

I don’t think anyone is completely certain about where AI is going, but we do know that things have changed very quickly, as the examples in this post have hopefully demonstrated. If this rate of change continues, the world will look very different in another 21 months. The only way to know is to live through it.


My AI Breakthrough — from mgblog.org by Miguel Guhlin

Over the subsequent weeks, I’ve made other adjustments, but that first one came from asking myself:

  1. What are you doing?
  2. Why are you doing it that way?
  3. How could you change that workflow with AI?
  4. Applying the AI to the workflow, then asking, “Is this what I was aiming for? How can I improve the prompt to get closer?”
  5. Documenting what worked (or didn’t). Re-doing the work with AI to see what happened, and asking again, “Did this work?”

So, something that took me WEEKS of hard work, and in some cases I found impossible, was made easy. Like, instead of weeks, it takes 10 minutes. The hard part? Building the prompt to do what I want, fine-tuning it to get the result. But that doesn’t take as long now.

 

Augmented Course Design: Using AI to Boost Efficiency and Expand Capacity — from er.educause.edu by Berlin Fang and Kim Broussard
The emerging class of generative AI tools has the potential to significantly alter the landscape of course development.

Using generative artificial intelligence (GenAI) tools such as ChatGPT, Gemini, or CoPilot as intelligent assistants in instructional design can significantly enhance the scalability of course development. GenAI can significantly improve the efficiency with which institutions develop content that is closely aligned with the curriculum and course objectives. As a result, institutions can more effectively meet the rising demand for flexible and high-quality education, preparing a new generation of future professionals equipped with the knowledge and skills to excel in their chosen fields. In this article, we illustrate the uses of AI in instructional design in terms of content creation, media development, and faculty support. We also provide some suggestions on the effective and ethical uses of AI in course design and development. Our perspectives are rooted in medical education, but the principles can be applied to any learning context.

Table 1 summarizes a few low-hanging fruits in AI usage in course development.

Table 1. Types of Use of GenAI in Course Development
Each practical use of AI is listed below with its use scenarios and examples:
Inspiration
  • Exploring ideas for instructional strategies
  • Exploring ideas for assessment
  • Course mapping
  • Lesson or unit content planning
Supplementation
  • Text to audio
  • Transcription for audio
  • Alt text auto-generation
  • Design optimization (e.g., using Microsoft PPT Design)
Improvement
  • Improving learning objectives
  • Improving instructional materials
  • Improving course content writing (grammar, spelling, etc.)
Generation
  • Creating a PowerPoint draft using learning objectives
  • Creating peripheral content materials (introductions, conclusions)
  • Creating decorative images for content
Expansion
  • Creating a scenario based on learning objectives
  • Creating a draft of a case study
  • Creating a draft of a rubric
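
To ground one row of the table, here is a hedged sketch of the “Alt text auto-generation” scenario using a vision-capable chat model via the OpenAI Python client; the model name and image URL are placeholders, and the article itself does not prescribe a specific tool.

```python
from openai import OpenAI  # assumes a vision-capable model and OPENAI_API_KEY set

client = OpenAI()
image_url = "https://example.edu/figures/cell-diagram.png"  # placeholder course image

response = client.chat.completions.create(
    model="gpt-4o",  # assumed; any multimodal model that accepts image input works similarly
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Write concise alt text (under 125 characters) describing this "
                     "course image for screen-reader users."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print(response.choices[0].message.content)
```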



Also see:

10 Ways Artificial Intelligence Is Transforming Instructional Design — from er.educause.edu by Rob Gibson
Artificial intelligence (AI) is providing instructors and course designers with an incredible array of new tools and techniques to improve the course design and development process. However, the intersection of AI and content creation is not new.

I have been telling my graduate instructional design students that AI technology is not likely to replace them any time soon because learning and instruction are still highly personalized and humanistic experiences. However, as these students embark on their careers, they will need to understand how to appropriately identify, select, and utilize AI when developing course content. Examples abound of how instructional designers are experimenting with AI to generate and align student learning outcomes with highly individualized course activities and assessments. Instructional designers are also using AI technology to create and continuously adapt the custom code and power scripts embedded into the learning management system to execute specific learning activities. Other useful examples include scripting and editing videos and podcasts.

Here are a few interesting examples of how AI is shaping and influencing instructional design. Some of the tools and resources can be used to satisfy a variety of course design activities, while others are very specific.


Taking the Lead: Why Instructional Designers Should Be at the Forefront of Learning in the Age of AI — from medium.com by Rob Gibson
Education is at a critical juncture and needs to draw leaders from a broader pool, including instructional designers

The world of a medieval stone cutter and a modern instructional designer (ID) may seem separated by a great distance, but I wager any ID who upon hearing the story I just shared would experience an uneasy sense of déjà vu. Take away the outward details, and the ID would recognize many elements of the situation: the days spent in projects that fail to realize the full potential of their craft, the painful awareness that greater things can be built, but are unlikely to occur due to a poverty of imagination and lack of vision among those empowered to make decisions.

Finally, there is the issue of resources. No stone cutter could ever hope to undertake a large-scale enterprise without a multitude of skilled collaborators and abundant materials. Similarly, instructional designers are often departments of one, working in scarcity environments, with limited ability to acquire resources for ambitious projects and — just as importantly — lacking the authority or political capital needed to launch significant initiatives. For these reasons, instructional design has long been a profession caught in an uncomfortable stasis, unable to grow, evolve and achieve its full potential.

That is until generative AI appeared on the scene. While the discourse around AI in education has been almost entirely about its impact on teaching and assessment, there has been a dearth of critical analysis regarding AI’s potential for impacting instructional design.

We are at a critical juncture for AI-augmented learning. We can either stagnate, missing opportunities to support learners while educators continue to debate whether the use of generative AI tools is a good thing, or we can move forward, building a transformative model for learning akin to the industrial revolution’s impact.

Too many professional educators remain bound by traditional methods. The past two years suggest that leaders of this new learning paradigm will not emerge from conventional educational circles. This vacuum of leadership can be filled, in part, by instructional designers, who are prepared by training and experience to begin building in this new learning space.

 

From DSC:
The above item is simply excellent!!! I love it!



Also relevant/see:

3 new Chrome AI features for even more helpful browsing — from blog.google from Parisa Tabriz
See how Chrome’s new AI features, including Google Lens for desktop and Tab compare, can help you get things done more easily on the web.


On speaking to AI — from oneusefulthing.org by Ethan Mollick
Voice changes a lot of things

So, let’s talk about ChatGPT’s new Advanced Voice mode and the new AI-powered Siri. They are not just different approaches to talking to AI. In many ways, they represent the divide between two philosophies of AI – Copilots versus Agents, small models versus large ones, specialists versus generalists.


Your guide to AI – August 2024 — from nathanbenaich.substack.com by Nathan Benaich and Alex Chalmers


Microsoft says OpenAI is now a competitor in AI and search — from cnbc.com by Jordan Novet

Key Points

  • Microsoft’s annually updated list of competitors now includes OpenAI, a long-term strategic partner.
  • The change comes days after OpenAI announced a prototype of a search engine.
  • Microsoft has reportedly invested $13 billion into OpenAI.


Excerpt from a post by Graham Clay:

1. Flux, an open-source text-to-image creator that is comparable to industry leaders like Midjourney, was released by Black Forest Labs (the “original team” behind Stable Diffusion). It is capable of generating high quality text in images (there are tons of educational use cases). You can play with it on their demo page, on Poe, or by running it on your own computer (tutorial here).

Other items re: Flux:

How to FLUX  — from heatherbcooper.substack.com by Heather Cooper
Where to use FLUX online & full tutorial to create a sleek ad in minutes


Also from Heather Cooper:

Introducing FLUX: Open-Source text to image model

FLUX… has been EVERYWHERE this week, as I’m sure you have seen. Developed by Black Forest Labs, it is an open-source image generation model that’s gaining attention for its ability to rival leading models like Midjourney, DALL·E 3, and SDXL.

What sets FLUX apart is its blend of creative freedom, precision, and accessibility—it’s available across multiple platforms and can be run locally.

Why FLUX Matters
FLUX’s open-source nature makes it accessible to a broad audience, from hobbyists to professionals.

It offers advanced multimodal and parallel diffusion transformer technology, delivering high visual quality, strong prompt adherence, and diverse outputs.

It’s available in 3 models:
FLUX.1 [pro]: A high-performance, commercial image synthesis model.
FLUX.1 [dev]: An open-weight, non-commercial variant of FLUX.1 [pro]
FLUX.1 [schnell]: A faster, distilled version of FLUX.1, operating up to 10x quicker.
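
For readers who want to try the open-weight variants locally, here is a minimal sketch using Hugging Face’s diffusers library, assuming a recent diffusers release with Flux support, a GPU with sufficient memory, and that you have accepted the model terms on the Hugging Face Hub.

```python
import torch
from diffusers import FluxPipeline  # requires a recent diffusers version with Flux support

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",  # the distilled, faster variant
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trades some speed for lower GPU memory use

image = pipe(
    "a sleek product ad photo with the caption 'FLUX' in bold lettering",
    num_inference_steps=4,   # schnell is distilled to work in very few steps
    guidance_scale=0.0,      # schnell is trained to run without classifier-free guidance
    max_sequence_length=256,
).images[0]
image.save("flux_schnell_sample.png")
```

FLUX.1 [dev] can typically be swapped in by changing the model ID, though it expects more inference steps and a nonzero guidance scale.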

Daily Digest: Huge (in)Flux of AI videos. — from bensbites.beehiiv.com
PLUS: Review of ChatGPT’s advanced voice mode.

  1. During the weekend, image models made a comeback. Recently released Flux models can create realistic images with near-perfect text—straight from the model, without much patchwork. To get the party going, people are putting these images into video generation models to create pretty trippy videos. I can’t identify half of them as AI, and they’ll only get better. See this tutorial on how to create a video ad for your product.

 


7 not only cool but handy use cases of new Claude — from techthatmatters.beehiiv.com by Harsh Makadia

  1. Data visualization
  2. Infographic
  3. Copy the UI of a website
  4. …and more

Achieving Human Level Competitive Robot Table Tennis — from sites.google.com

 

Free Sites for Back to School — from techlearning.com by Diana Restifo
Top free and freemium sites for learning

An internet search for free learning resources will likely return a long list that includes some useful sites amid a sea of not-really-free and not-very-useful sites.

To help teachers more easily find the best free and freemium sites they can use in their classrooms and curricula, I’ve curated a list that describes the top free/freemium sites for learning.

In some cases, Tech & Learning has reviewed the site in detail, and those links are included so readers can find out more about how to make the best use of the online materials. In all cases, the websites below provide valuable educational tools, lessons, and ideas, and are worth exploring further.


Two bonus postings here! 🙂 

 




Kuaishou Unveils Kling: A Text-to-Video Model To Challenge OpenAI’s Sora — from maginative.com by Chris McKay


Generating audio for video — from deepmind.google


LinkedIn leans on AI to do the work of job hunting — from  techcrunch.com by Ingrid Lunden

Learning personalisation. LinkedIn continues to be bullish on its video-based learning platform, and it appears to have found a strong current among users who need to skill up in AI. Cohen said that traffic for AI-related courses — which include modules on technical skills as well as non-technical ones such as basic introductions to generative AI — has increased by 160% over last year.

You can be sure that LinkedIn is pushing its search algorithms to tap into the interest, but it’s also boosting its content with AI in another way.

For Premium subscribers, it is piloting what it describes as “expert advice, powered by AI.” Tapping into expertise from well-known instructors such as Alicia Reece, Anil Gupta, Dr. Gemma Leigh Roberts and Lisa Gates, LinkedIn says its AI-powered coaches will deliver responses personalized to users, as a “starting point.”

These will, in turn, also appear as personalized coaches that a user can tap while watching a LinkedIn Learning course.

Also related to this, see:

Unlocking New Possibilities for the Future of Work with AI — from news.linkedin.com

Personalized learning for everyone: Whether you’re looking to make a change or not, the skills required in the workplace are expected to change by 68% by 2030.

Expert advice, powered by AI: We’re beginning to pilot the ability to get personalized practical advice instantly from industry leading business leaders and coaches on LinkedIn Learning, all powered by AI. The responses you’ll receive are trained by experts and represent a blend of insights that are personalized to each learner’s unique needs. While human professional coaches remain invaluable, these tools provide a great starting point.

Personalized coaching, powered by AI, when watching a LinkedIn course: As learners —including all Premium subscribers — watch our new courses, they can now simply ask for summaries of content, clarify certain topics, or get examples and other real-time insights, e.g. “Can you simplify this concept?” or “How does this apply to me?”

 


Roblox’s Road to 4D Generative AI — from corp.roblox.com by Morgan McGuire, Chief Scientist

  • Roblox is building toward 4D generative AI, going beyond single 3D objects to dynamic interactions.
  • Solving the challenge of 4D will require multimodal understanding across appearance, shape, physics, and scripts.
  • Early tools that are foundational for our 4D system are already accelerating creation on the platform.

 
© 2024 | Daniel Christian