Adobe Reinvents its Entire Creative Suite with AI Co-Pilots, Custom Models, and a New Open Platform — from theneuron.ai by Grant Harvey
Adobe just put an AI co-pilot in every one of its apps, letting you chat with Photoshop, train models on your own style, and generate entire videos with a single subscription that now includes top models from Google, Runway, and Pika.

Adobe came to play, y’all.

At Adobe MAX 2025 in Los Angeles, the company dropped an entire creative AI ecosystem that touches every single part of the creative workflow. In our opinion, these new features aren’t about replacing creators; they’re about empowering them with superpowers they can actually control.

Adobe’s new plan is to put an AI co-pilot in every single app.

  • For professionals, the game-changer is Firefly Custom Models. Start training one now to create a consistent, on-brand look for all your assets.
  • For everyday creators, the AI Assistants in Photoshop and Express will drastically speed up your workflow.
  • The best place to start is the Photoshop AI Assistant (currently in private beta), which offers a powerful glimpse into the future of creative software—a future where you’re less of a button-pusher and more of a creative director.

Adobe MAX Day 2: The Storyteller Is Still King, But AI Is Their New Superpower — from theneuron.ai by Grant Harvey
Adobe’s Day 2 keynote showcased a suite of AI-powered creative tools designed to accelerate workflows, but the real message from creators like Mark Rober and James Gunn was clear: technology serves the story, not the other way around.

On the second day of its annual MAX conference, Adobe drove home a message that has been echoing through the creative industry for the past year: AI is not a replacement, but a partner. The keynote stage featured a powerful trio of modern storytellers—YouTube creator Brandon Baum, science educator and viral video wizard Mark Rober, and Hollywood director James Gunn—who each offered a unique perspective on a shared theme: technology is a powerful tool, but human instinct, hard work, and the timeless art of storytelling remain paramount.

From DSC:
As Grant mentioned, the demos dealt with ideation, image generation, video generation, audio generation, and editing.


Adobe Max 2025: all the latest creative tools and AI announcements — from theverge.com by Jess Weatherbed

The creative software giant is launching new generative AI tools that make digital voiceovers and custom soundtracks for videos, and adding AI assistants to Express and Photoshop for web that edit entire projects using descriptive prompts. And that’s just the start, because Adobe is planning to eventually bring AI assistants to all of its design apps.


Also see Adobe Delivers New AI Innovations, Assistants and Models Across Creative Cloud to Empower Creative Professionals, plus other items from Adobe’s News section.


 

 

The Top 100 [Gen AI] Consumer Apps 5th edition — from a16z.com


And in an interesting move by Microsoft and Samsung:

A smarter way to talk to your TV: Microsoft Copilot launches on Samsung TVs and monitors — from microsoft.com

Voice-powered AI meets a visual companion for entertainment, everyday help, and everything in between. 

Redmond, Wash., August 27—Today, we’re announcing the launch of Copilot on select Samsung TVs and monitors, transforming the biggest screen in your home into your most personal and helpful companion—and it’s free to use.

Copilot makes your TV easier and more fun to use with its voice-powered interface, friendly on-screen character, and simple visual cards. Now you can quickly find what you’re looking for and discover new favorites right from your living room.

Because it lives on the biggest screen in the home, Copilot is a social experience—something you can use together with family and friends to spark conversations, help groups decide what to watch, and turn the TV into a shared space for curiosity and connection.

 

Firefly adds new video capabilities, industry leading AI models, and Generate Sound Effects feature — from blog.adobe.com

Today, we’re introducing powerful enhancements to our Firefly Video Model, including improved motion fidelity and advanced video controls that will accelerate your workflows and provide the precision and style you need to elevate your storytelling. We are also adding new generative AI partner models within Generate Video on Firefly, giving you the power to choose which model works best for your creative needs across image, video and sound.

Plus, our new workflow tools put you in control of your video’s composition and style. You can now layer in custom-generated sound effects right inside the Firefly web app — and start experimenting with AI-powered avatar-led videos.

Generate Sound Effects (beta)
Sound is a powerful storytelling tool that adds emotion and depth to your videos. Generate Sound Effects (beta) makes it easy to create custom sounds, like a lion’s roar or ambient nature sounds, that enhance your visuals. And like our other Firefly generative AI models, Generate Sound Effects (beta) is commercially safe, so you can create with confidence.

Just type a simple text prompt to generate the sound effect you need. Want even more control? Use your voice to guide the timing and intensity of the sound. Firefly listens to the energy and rhythm of your voice to place sound effects precisely where they belong — matching the action in your video with cinematic timing.

 

Mary Meeker AI Trends Report: Mind-Boggling Numbers Paint AI’s Massive Growth Picture — from ndtvprofit.com
Numbers that prove AI as a tech is unlike any other the world has ever seen.

Here are some incredibly powerful numbers from Mary Meeker’s AI Trends report, which showcase how artificial intelligence as a tech is unlike any other the world has ever seen.

  • AI took only three years to reach 50% user adoption in the US; mobile internet took six years, desktop internet took 12 years, while PCs took 20 years.
  • ChatGPT reached 800 million users in 17 months and 100 million in only two months, vis-à-vis Netflix’s 100 million (10 years), Instagram (2.5 years) and TikTok (nine months).
  • ChatGPT hit 365 billion annual searches in two years (2024) vs. Google’s 11 years (2009)—ChatGPT 5.5x faster than Google.

Above via Mary Meeker’s AI Trend-Analysis — from getsuperintel.com by Kim “Chubby” Isenberg
How AI’s rapid rise, efficiency race, and talent shifts are reshaping the future.

The TLDR
Mary Meeker’s new AI trends report highlights an explosive rise in global AI usage, surging model efficiency, and mounting pressure on infrastructure and talent. The shift is clear: AI is no longer experimental—it’s becoming foundational, and those who optimize for speed, scale, and specialization will lead the next wave of innovation.

 

Also see Meeker’s actual report at:

Trends – Artificial Intelligence — from bondcap.com by Mary Meeker / Jay Simons / Daegwon Chae / Alexander Krey



The Rundown: Meta aims to release tools that eliminate humans from the advertising process by 2026, according to a report from the WSJ — developing an AI that can create ads for Facebook and Instagram using just a product image and budget.

The details:

  • Companies would submit product images and budgets, letting AI craft the text and visuals, select target audiences, and manage campaign placement.
  • The system will be able to create personalized ads that can adapt in real-time, like a car spot featuring mountains vs. an urban street based on user location.
  • The push would target smaller companies lacking dedicated marketing staff, promising professional-grade advertising without agency fees or in-house expertise.
  • Advertising is a core part of Mark Zuckerberg’s AI strategy and already accounts for 97% of Meta’s annual revenue.

Why it matters: We’re already seeing AI transform advertising through image, video, and text, but Zuck’s vision takes the process entirely out of human hands. With so much marketing flowing through FB and IG, a successful system would be a major disruptor — particularly for small brands that just want results without the hassle.

 

 

Values in the wild: Discovering and analyzing values in real-world language model interactions — from anthropic.com

In the latest research paper from Anthropic’s Societal Impacts team, we describe a practical way we’ve developed to observe Claude’s values—and provide the first large-scale results on how Claude expresses those values during real-world conversations. We also provide an open dataset for researchers to run further analysis of the values and how often they arise in conversations.

Per the Rundown AI

Why it matters: AI is increasingly shaping real-world decisions and relationships, making it more crucial than ever to understand the values these systems actually express. This study also moves the alignment discussion toward concrete observation, suggesting that an AI’s morals and values may be more contextual and situational than a fixed point of view.

Also from Anthropic, see:

Anthropic Education Report: How University Students Use Claude


Adobe Firefly: The next evolution of creative AI is here — from blog.adobe.com

In just under two years, Adobe Firefly has revolutionized the creative industry and generated more than 22 billion assets worldwide. Today at Adobe MAX London, we’re unveiling the latest release of Firefly, which unifies AI-powered tools for image, video, audio, and vector generation into a single, cohesive platform and introduces many new capabilities.

The new Firefly features enhanced models, improved ideation capabilities, expanded creative options, and unprecedented control. This update builds on earlier momentum when we introduced the Firefly web app and expanded into video and audio with Generate Video, Translate Video, and Translate Audio features.

Per The Rundown AI (here):

Why it matters: OpenAI’s recent image generator and other rivals have shaken up creative workflows, but Adobe’s IP-safe focus and the addition of competing models into Firefly allow professionals to remain in their established suite of tools — keeping users in the ecosystem while still having flexibility for other model strengths.

 

Google Workspace enables the future of AI-powered work for every business  — from workspace.google.com

The following AI capabilities will start rolling out to Google Workspace Business customers today and to Enterprise customers later this month:

  • Get AI assistance in Gmail, Docs, Sheets, Meet, Chat, Vids, and more: Do your best work faster with AI embedded in the tools you use every day. Gemini streamlines your communications by helping you summarize, draft, and find information in your emails, chats, and files. It can be a thought partner and source of inspiration, helping you create professional documents, slides, spreadsheets, and videos from scratch. Gemini can even improve your meetings by taking notes, enhancing your audio and video, and catching you up on the conversation if you join late.
  • Chat with Gemini Advanced, Google’s next-gen AI: Kickstart learning, brainstorming, and planning with the Gemini app on your laptop or mobile device. Gemini Advanced can help you tackle complex projects including coding, research, and data analysis and lets you build Gems, your team of AI experts to help with repeatable or specialized tasks.
  • Unlock the power of NotebookLM Plus: We’re bringing the revolutionary AI research assistant to every employee to help them make sense of complex topics. Upload sources to get instant insights and Audio Overviews, then share customized notebooks with the team to accelerate their learning and onboarding.

And per Evelyn from the Stay Ahead newsletter (at FlexOS):

Google’s Gemini AI is stepping up its game in Google Workspace, bringing powerful new capabilities to your favorite tools like Gmail, Docs, and Sheets:

  • AI-Powered Summaries: Get concise, AI-generated summaries of long emails and documents so you can focus on what matters most.
  • Smart Reply: Gemini now offers context-aware email replies that feel more natural and tailored to your style.
  • Slides and images generation: Gemini in Slides can help you generate new images, summarize your slides, write and rewrite content, and refer to existing Drive files and/or emails.
  • Automated Data Insights: In Google Sheets, Gemini can help you create task trackers and conference agendas, spot trends, suggest formulas, and even build charts with simple prompts.
  • Intelligent Drafting: Google Docs now gets a creativity boost, helping you draft reports, proposals, or blog posts with AI suggestions and outlines.
  • Meeting Assistance: Say goodbye to awkward AI note-taking attendees; Gemini can now take notes for you natively – no interruption, no avatar, and no extra attendee. Meet can also automatically generate captions to help lower the language barrier.

Evelyn (from FlexOS) also mentions that Copilot is getting enhancements too:

Copilot is now included in Microsoft 365 Personal and Family — from microsoft.com

Per Evelyn:

It’s exactly what we predicted: stand-alone AI apps like note-takers and image generators have had their moment, but as the tech giants step in, they’re bringing these features directly into their ecosystems, making them harder to ignore.


Announcing The Stargate Project — from openai.com

The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.

The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.

Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.


Your AI Writing Partner: The 30-Day Book Framework — from aidisruptor.ai by Alex McFarland and Kamil Banc
How to Turn Your “Someday” Manuscript into a “Shipped” Project Using AI-Powered Prompts

With that out of the way, I prefer Claude.ai for writing. For larger projects like a book, create a Claude Project to keep all context in one place.

  • Copy [the following] prompts into a document
  • Use them in sequence as you write
  • Adjust the word counts and specifics as needed
  • Keep your responses for reference
  • Use the same prompt template for similar sections to maintain consistency

Each prompt builds on the previous one, creating a systematic approach to helping you write your book.
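The article runs these prompts by hand inside a Claude Project on Claude.ai. If you would rather script the same build-on-the-previous-answer pattern, here is a minimal sketch against the Anthropic Python SDK; the model name, the placeholder prompts, and the ANTHROPIC_API_KEY environment variable are assumptions of this sketch, not part of the framework itself.

```python
# Sketch: run writing prompts in sequence, carrying each answer forward
# so later prompts build on earlier ones. Assumes the `anthropic` package,
# an ANTHROPIC_API_KEY environment variable, and a placeholder model name.
import anthropic

client = anthropic.Anthropic()

prompts = [
    "Outline a 10-chapter nonfiction book about remote work (about 300 words).",
    "Using the outline above, draft a 200-word synopsis of chapter 1.",
    "Expand that synopsis into a 500-word opening section.",
]

history = []  # running conversation so each prompt sees the previous answers
for prompt in prompts:
    history.append({"role": "user", "content": prompt})
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1500,
        messages=history,
    )
    answer = reply.content[0].text
    history.append({"role": "assistant", "content": answer})
    print(f"--- {prompt[:40]}...\n{answer[:200]}\n")
```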


Adobe’s new AI tool can edit 10,000 images in one click — from theverge.com by  Jess Weatherbed
Firefly Bulk Create can automatically remove, replace, or extend image backgrounds in huge batches.

Adobe is launching new generative AI tools that can automate labor-intensive production tasks like editing large batches of images and translating video presentations. The most notable is “Firefly Bulk Create,” an app that allows users to quickly resize up to 10,000 images or replace all of their backgrounds in a single click instead of tediously editing each picture individually.
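For a sense of what “bulk” image processing looks like outside of Adobe’s tools, here is a tiny local analogue using the Pillow library: it resizes every JPEG in a folder to a fixed width while preserving aspect ratio. The folder names and target width are illustrative assumptions; this is not Firefly Bulk Create.

```python
# Local analogue of bulk resizing (not Adobe's Firefly Bulk Create):
# resize every JPEG in a folder to a fixed width, preserving aspect ratio.
# Assumes the Pillow package and illustrative input/output folder names.
from pathlib import Path
from PIL import Image

SRC, DST, TARGET_WIDTH = Path("originals"), Path("resized"), 1024
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.jpg"):
    with Image.open(path) as img:
        ratio = TARGET_WIDTH / img.width
        resized = img.resize((TARGET_WIDTH, round(img.height * ratio)))
        resized.save(DST / path.name)
        print(f"resized {path.name} -> {resized.size}")
```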

 

Where to start with AI agents: An introduction for COOs — from fortune.com by Ganesh Ayyar

Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.

Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?

The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.


Create podcasts in minutes — from elevenlabs.io by Eleven Labs
Now anyone can be a podcast producer


Top AI tools for business — from theneuron.ai


This week in AI: 3D from images, video tools, and more — from heatherbcooper.substack.com by Heather Cooper
From 3D worlds to consistent characters, explore this week’s AI trends

Another busy AI news week, so I organized it into categories:

  • Image to 3D
  • AI Video
  • AI Image Models & Tools
  • AI Assistants / LLMs
  • AI Creative Workflow: Luma AI Boards

Want to speak Italian? Microsoft AI can make it sound like you do. — this is a gifted article from The Washington Post.
A new AI-powered interpreter is expected to simulate speakers’ voices in different languages during Microsoft Teams meetings.

Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.


What Is Agentic AI?  — from blogs.nvidia.com by Erik Pounds
Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.

The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.

Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.
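To make the “iterative planning” idea concrete, here is a minimal, library-free sketch of the loop an agent runs: plan a step, call a tool, feed the observation back, repeat. The tool registry, the stub planner, and the stopping rule are illustrative assumptions, not NVIDIA’s implementation; in a real system the planner would call an LLM.

```python
# Minimal agent loop sketch: plan -> act -> observe -> repeat.
# The planner here is a stub; in practice it would call an LLM.
from typing import Callable, Dict, List


def lookup_inventory(query: str) -> str:
    """Stand-in tool: pretend to query a supply-chain system."""
    return f"inventory report for '{query}': 42 units in stock"


TOOLS: Dict[str, Callable[[str], str]] = {"lookup_inventory": lookup_inventory}


def plan_next_step(goal: str, history: List[str]) -> Dict[str, str]:
    """Stub planner. A real agent would prompt an LLM with the goal
    and the observation history, then parse its chosen action."""
    if not history:
        return {"tool": "lookup_inventory", "input": goal}
    return {"tool": "finish", "input": history[-1]}


def run_agent(goal: str, max_steps: int = 5) -> str:
    history: List[str] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step["tool"] == "finish":
            return step["input"]          # agent decides it is done
        observation = TOOLS[step["tool"]](step["input"])
        history.append(observation)       # feed observation back into planning
    return "stopped: step budget exhausted"


if __name__ == "__main__":
    print(run_agent("low-stock items in the Midwest warehouse"))
```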


 

Opening Keynote – GS1

Bringing generative AI to video with Adobe Firefly Video Model

Adobe Launches Firefly Video Model and Enhances Image, Vector and Design Models

  • The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
  • Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
  • Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises

Photoshop delivers powerful innovation for image editing, ideation, 3D design, and more

Even more speed, precision, and power: Get started with the latest Illustrator and InDesign features for creative professionals

Adobe Introduces New Global Initiative Aimed at Helping 30 Million Next-Generation Learners Develop AI Literacy, Content Creation and Digital Marketing Skills by 2030

Add sound to your video via text — Project Super Sonic:



New Dream Weaver — from aisecret.us
Explore Adobe’s New Firefly Video Generative Model

Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.

 



Introducing OpenAI o1 – from openai.com

We’ve developed a new series of AI models designed to spend more time thinking before they respond. Here is the latest news on o1 research, product and other updates.




Something New: On OpenAI’s “Strawberry” and Reasoning — from oneusefulthing.org by Ethan Mollick
Solving hard problems in new ways

The new AI model, called o1-preview (why are the AI companies so bad at names?), lets the AI “think through” a problem before solving it. This lets it address very hard problems that require planning and iteration, like novel math or science questions. In fact, it can now beat human PhD experts in solving extremely hard physics problems.

To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.


What is the point of Super Realistic AI? — from Heather Cooper who runs Visually AI on Substack

The arrival of super realistic AI image generation, powered by models like Midjourney, FLUX.1, and Ideogram, is transforming the way we create and use visual content.

Recently, many creators (myself included) have been exploring super realistic AI more and more.

But where can this actually be used?

Super realistic AI image generation will have far-reaching implications across various industries and creative fields. Its importance stems from its ability to bridge the gap between imagination and visual representation, offering multiple opportunities for innovation and efficiency.

Heather goes on to mention applications in:

  • Creative Industries
  • Entertainment and Media
  • Education and Training

NotebookLM now lets you listen to a conversation about your sources — from blog.google by Biao Wang
Our new Audio Overview feature can turn documents, slides, charts and more into engaging discussions with one click.

Today, we’re introducing Audio Overview, a new way to turn your documents into engaging audio discussions. With one click, two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.


Bringing generative AI to video with Adobe Firefly Video Model — from blog.adobe.com by Ashley Still

Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.

Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content.

We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.

 

From DSC:
Anyone who is involved in putting on conferences should at least be aware that this kind of thing is now possible! Check out the following posting from Adobe (with help from Tata Consultancy Services (TCS)).


From impossible to POSSIBLE: Tata Consultancy Services uses Adobe Firefly generative AI and Acrobat AI Assistant to turn hours of work into minutes — from blog.adobe.com

This year, the organizers — innovative industry event company Beyond Ordinary Events — turned to Tata Consultancy Services (TCS) to make the impossible “possible.” Leveraging Adobe generative AI technology across products like Adobe Premiere Pro and Acrobat, they distilled hours of video content in minutes, delivering timely dispatches to thousands of attendees throughout the conference.

For POSSIBLE ’24, Muche had an idea for a daily dispatch summarizing each day’s sessions so attendees wouldn’t miss a single insight. But timing would be critical. The dispatch needed to reach attendees shortly after sessions ended to fuel discussions over dinner and carry the excitement over to the next day.

The workflow started in Adobe Premiere Pro, with the writer opening a recording of each session and using the Speech to Text feature to automatically generate a transcript. They saved the transcript as a PDF file and opened it in Adobe Acrobat Pro. Then, using Adobe Acrobat AI Assistant, the writer asked for a session summary.

It was that fast and easy. In less than four minutes, one person turned a 30-minute session into an accurate, useful summary ready for review and publication.

By taking advantage of templates, the designer then added each AI-enabled summary to the newsletter in minutes. With just two people and generative AI technology, TCS accomplished the impossible — for the first time delivering an informative, polished newsletter to all 3,500 conference attendees just hours after the last session of the day.
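The TCS workflow above is point-and-click: Premiere Pro’s Speech to Text, then Acrobat AI Assistant for the summary. Purely to illustrate the same transcribe-then-summarize pattern in code, here is a minimal sketch using the OpenAI Python SDK; the audio file name, the model names, and the OPENAI_API_KEY environment variable are assumptions, and this is not Adobe’s tooling.

```python
# Illustrative transcribe-then-summarize pipeline (not Adobe's workflow).
# Assumes the `openai` package, an OPENAI_API_KEY environment variable,
# a local recording named session.mp3, and placeholder model names.
from openai import OpenAI

client = OpenAI()

# Step 1: speech to text (analogue of Premiere Pro's Speech to Text).
with open("session.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# Step 2: summarize the transcript (analogue of asking Acrobat AI Assistant).
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarize conference sessions for a daily newsletter."},
        {"role": "user", "content": f"Summarize this session in 150 words:\n\n{transcript.text}"},
    ],
)
print(summary.choices[0].message.content)
```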

 

Generative AI and the Time Management Revolution — from ai-mindset.ai by Conor Grennan

Here’s how we need to change our work lives:

  1. RECLAIM: Use generative AI to speed up your daily tasks. Be ruthless. Anything that can be automated, should be.
  2. PROTECT: This is the crucial step. That time you’ve saved? Protect it like it’s the last slice of pizza. Block it off in your calendar. Tell your team it’s sacred.
  3. ELEVATE: Use this protected time for high-level thinking. Strategy. Innovation. The big, meaty problems you never have time for.
  4. AMPLIFY: Here’s where it gets cool. Use generative AI to amp up your strategic thinking. Need to brainstorm solutions to a complex problem? Want to analyze market trends? Generative AI is your new thinking partner.

The top 100 Gen AI Consumer Apps — 3rd edition — from a16z.com by Andreessen Horowitz

But amid the relentless onslaught of product launches, investment announcements, and hyped-up features, it’s worth asking: Which of these gen AI apps are people actually using? Which behaviors and categories are gaining traction among consumers? And which AI apps are people returning to, versus dabbling and dropping?

Welcome to the third installment of the Top 100 Gen AI Consumer Apps.

 


Gen AI’s next inflection point: From employee experimentation to organizational transformation — from mckinsey.com by Charlotte Relyea, Dana Maor, and Sandra Durth with Jan Bouly
As many employees adopt generative AI at work, companies struggle to follow suit. To capture value from current momentum, businesses must transform their processes, structures, and approach to talent.

To harness employees’ enthusiasm and stay ahead, companies need a holistic approach to transforming how the whole organization works with gen AI; the technology alone won’t create value.

Our research shows that early adopters prioritize talent and the human side of gen AI more than other companies (Exhibit 3). Our survey shows that nearly two-thirds of them have a clear view of their talent gaps and a strategy to close them, compared with just 25 percent of the experimenters. Early adopters focus heavily on upskilling and reskilling as a critical part of their talent strategies, as hiring alone isn’t enough to close gaps and outsourcing can hinder strategic-skills development. Finally, 40 percent of early-adopter respondents say their organizations provide extensive support to encourage employee adoption, versus 9 percent of experimenter respondents.


Adobe drops ‘Magic Fixup’: An AI breakthrough in the world of photo editing — from venturebeat.com by Michael Nuñez

Adobe researchers have revealed an AI model that promises to transform photo editing by harnessing the power of video data. Dubbed “Magic Fixup,” this new technology automates complex image adjustments while preserving artistic intent, potentially reshaping workflows across multiple industries.

Magic Fixup’s core innovation lies in its unique approach to training data. Unlike previous models that relied solely on static images, Adobe’s system learns from millions of video frame pairs. This novel method allows the AI to understand the nuanced ways objects and scenes change under varying conditions of light, perspective, and motion.
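To illustrate the “learn from video frame pairs” idea in general terms, here is a minimal PyTorch Dataset sketch that pairs each extracted frame with one a few steps later in the same clip, so a model can learn how a scene changes over time. This is not Adobe’s Magic Fixup code, and the folder layout and file naming are assumptions.

```python
# Generic frame-pair dataset sketch (illustrative, not Adobe's Magic Fixup).
# Assumes frames extracted to frames/<video_id>/000001.jpg, 000002.jpg, ...
import glob
import os

from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms import ToTensor


class FramePairDataset(Dataset):
    """Yields (frame_t, frame_t+offset) tensors so a model can learn how
    the same scene changes across time (lighting, perspective, motion)."""

    def __init__(self, root: str = "frames", offset: int = 5):
        self.offset = offset
        self.to_tensor = ToTensor()
        self.pairs = []
        for video_dir in sorted(glob.glob(os.path.join(root, "*"))):
            frames = sorted(glob.glob(os.path.join(video_dir, "*.jpg")))
            # pair each frame with one `offset` steps later in the same clip
            self.pairs += list(zip(frames[:-offset], frames[offset:]))

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        src_path, tgt_path = self.pairs[idx]
        src = self.to_tensor(Image.open(src_path).convert("RGB"))
        tgt = self.to_tensor(Image.open(tgt_path).convert("RGB"))
        return src, tgt
```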


Top AI tools people actually use — from heatherbcooper.substack.com by Heather Cooper
How generative AI tools are changing the creative landscape

The shift toward creative tools
Creative tools made up 52% of the top generative AI apps on the list. This seems to reflect a growing consumer demand for accessible creativity through AI with tools for image, music, speech, video, and editing.

Creative categories include:

  • Image: Civitai, Leonardo, Midjourney, Yodayo, Ideogram, SeaArt
  • Music: Suno, Udio, VocalRemover
  • Speech: ElevenLabs, Speechify
  • Video: Luma AI, Viggle, Invideo AI, Vidnoz, ClipChamp
  • Editing: Cutout Pro, Veed, Photoroom, Pixlr, PicWish

Why it matters:
Creative apps are gaining traction because they empower digital artists and content creators with AI-driven tools that simplify and enhance the creative process, making professional-level work more accessible than ever.

 


“Who to follow in AI” in 2024? [Part I] — from ai-supremacy.com by Michael Spencer [some of posting is behind a paywall]
#1-20 [of 150] – I combed the internet, I found the best sources of AI insights, education and articles. LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts

Along these lines, also see:


AI In Medicine: 3 Future Scenarios From Utopia To Dystopia — from medicalfuturist.com by Andrea Koncz
There’s a vast difference between baseless fantasizing and realistic forward planning. Structured methodologies help us learn how to “dream well”.

Key Takeaways

  • We’re often told that daydreaming and envisioning the future is a waste of time. But this notion is misguided.
  • We all instinctively plan for the future in small ways, like organizing a trip or preparing for a dinner party. This same principle can be applied to larger-scale issues, and smart planning does bring better results.
  • We show you a method that allows us to think “well” about the future on a larger scale so that it better meets our needs.

Adobe Unveils Powerful New Innovations in Illustrator and Photoshop Unlocking New Design Possibilities for Creative Pros — from news.adobe.com

  • Latest Illustrator and Photoshop releases accelerate creative workflows, save pros time and empower designers to realize their visions faster
  • New Firefly-enabled features like Generative Shape Fill in Illustrator along with the Dimension Tool, Mockup, Text to Pattern, the Contextual Taskbar and performance enhancement tools accelerate productivity and free up time so creative pros can dive deeper into the parts of their work they love
  • Photoshop introduces all-new Selection Brush Tool and the general availability of Generate Image, Adjustment Brush Tool and other workflow enhancements empowering creators to make complex edits and unique designs


Nike is using AI to turn athletes’ dreams into shoes — from axios.com by Ina Fried

Zoom in: Nike used genAI for ideation, including using a variety of prompts to produce images with different textures, materials and color to kick off the design process.

What they’re saying: “It’s a new way for us to work,” Nike lead footwear designer Juliana Sagat told Axios during a media tour of the showcase on Tuesday.


AI meets ‘Do no harm’: Healthcare grapples with tech promises — from finance.yahoo.com by Maya Benjamin

Major companies are moving at high speed to capture the promises of artificial intelligence in healthcare while doctors and experts attempt to integrate the technology safely into patient care.

“Healthcare is probably the most impactful utility of generative AI that there will be,” Kimberly Powell, vice president of healthcare at AI hardware giant Nvidia (NVDA), which has partnered with Roche’s Genentech (RHHBY) to enhance drug discovery in the pharmaceutical industry, among other investments in healthcare companies, declared at the company’s AI Summit in June.


Mistral reignites this week’s LLM rivalry with Large 2 (source) — from superhuman.ai

Today, we are announcing Mistral Large 2, the new generation of our flagship model. Compared to its predecessor, Mistral Large 2 is significantly more capable in code generation, mathematics, and reasoning. It also provides much stronger multilingual support and advanced function-calling capabilities.


Meta releases the biggest and best open-source AI model yet — from theverge.com by Alex Heath
Llama 3.1 outperforms OpenAI and other rivals on certain benchmarks. Now, Mark Zuckerberg expects Meta’s AI assistant to surpass ChatGPT’s usage in the coming months.

Back in April, Meta teased that it was working on a first for the AI industry: an open-source model with performance that matched the best private models from companies like OpenAI.

Today, that model has arrived. Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. It’s also making the Llama-based Meta AI assistant available in more countries and languages while adding a feature that can generate images based on someone’s specific likeness. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.
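Because Llama 3.1 is an open-weight release, you can run it locally; here is a minimal sketch with the Hugging Face transformers pipeline. The checkpoint id points at Meta’s gated repository, so this assumes you have accepted the license, installed transformers, torch, and accelerate, and have hardware that can hold the 8B instruct variant.

```python
# Minimal local-inference sketch for an open-weight Llama 3.1 checkpoint.
# Assumes `transformers`, `torch`, and `accelerate` are installed, the gated
# meta-llama repo license has been accepted, and hardware fits the 8B model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype="auto",   # pick an appropriate precision automatically
    device_map="auto",    # spread the model across available devices
)

output = generator(
    "Explain in two sentences why open-weight models matter.",
    max_new_tokens=80,
)
print(output[0]["generated_text"])
```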


4 ways to boost ChatGPT — from wondertools.substack.com by Jeremy Caplan & The PyCoach
Simple tactics for getting useful responses

To help you make the most of ChatGPT, I’ve invited & edited today’s guest post from the author of a smart AI newsletter called The Artificial Corner. I appreciate how Frank Andrade pushes ChatGPT to produce better results with four simple, clever tactics. He offers practical examples to help us all use AI more effectively.

Frank Andrade: Most of us fail to make the most of ChatGPT.

  1. We omit examples in our prompts.
  2. We fail to assign roles to ChatGPT to guide its behavior.
  3. We let ChatGPT guess instead of providing it with clear guidance.

If you rely on vague prompts, learning how to create high-quality instructions will get you better results. It’s a skill often referred to as prompt engineering. Here are several techniques to get you to the next level.
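As a rough illustration of the first two tactics (show an example, assign a role), here is a minimal sketch using the OpenAI Python SDK. The model name, the editing task, and the OPENAI_API_KEY environment variable are assumptions for the sketch, not part of Frank Andrade’s post.

```python
# Sketch: combine a role (system message) with one worked example (few-shot)
# before asking the real question. Assumes the `openai` package and an
# OPENAI_API_KEY environment variable; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

messages = [
    # Tactic: assign a role instead of leaving behavior to chance.
    {"role": "system",
     "content": "You are a copy editor. Rewrite sentences to be concise and active."},
    # Tactic: show an example of the input/output you expect.
    {"role": "user",
     "content": "The report was written by the team over several weeks."},
    {"role": "assistant",
     "content": "The team wrote the report over several weeks."},
    # Tactic: state the actual request without ambiguity.
    {"role": "user",
     "content": "Rewrite: 'A decision was made by the committee to delay the launch.'"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```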

 

From DSC:
As I can’t embed his posting, I’m copying/pasting Jeff’s posting on LinkedIn:


According to Flighty, I logged more than 2,220 flight miles in the last 5 days traveling to three conferences to give keynotes and spend time with housing officers in Milwaukee, college presidents in Mackinac Island, MI, and enrollment and marketing leaders in Raleigh.

Before I rest, I wanted to post some quick thoughts about what I learned. Thank you to everyone who shared their wisdom these past few days:

  • We need to think about the “why” and “how” of AI in higher ed. The “why” shouldn’t be just because everyone else is doing it. Rather, the “why” is to reposition higher ed for a different future of competitors. The “how” shouldn’t be to just seek efficiency and cut jobs. Rather we should use AI to learn from its users to create a better experience going forward.
  • Residence halls are not just infrastructure. They are part and parcel of the student experience and critical to student success. Almost half of students living on campus say it increases their sense of belonging, according to research by the Association of College & University Housing Officers.
  • How do we extend the “residential experience”? More than half of traditional undergraduates who live on campus now take at least one course online. As students increasingly spend time off campus – or move off campus as early as their second year in college – we need to help them continue to make the connections they would make in a dorm. Why? 47% of college students believe living in a college residence hall enhanced their ability to resolve conflicts.
  • Career must be at the core of the student experience for colleges to thrive in the future, says Andy Chan. Yes, some people might see that as too narrow of a view of higher ed or might not want to provide cogs for the wheel of the workforce, but without the job, none of the other benefits of college follow–citizenship, health, engagement.
  • A “triple threat grad” – someone who has an internship, a semester-long project, and an industry credential (think Salesforce or Adobe) in addition to their degree – matters more in the job market than major or institution, says Brandon Busteed.
  • Every faculty member should think of themselves as an ambassador for the institution. Yes, care about their discipline/department, but that doesn’t survive if the rest of the institution falls down around them.
  • Presidents need to place bigger bets rather than spend pennies and dimes on a bunch of new strategies. That means to free up resources they need to stop doing things.
  • Higher ed needs a new business model. Institutions can’t make money just from tuition, and new products, like certificates, bring in pennies on the dollar compared with degrees.
  • Boards aren’t ready for the future. They are over-indexed on philanthropy and alumni and not enough on the expertise needed for leading higher ed.

From DSC:
Again, as I can’t embed his posting, I’m copying/pasting Jeff’s posting on LinkedIn:


It’s the stat that still gnaws at me: 62%.

That’s the percentage of high school graduates going right on to college. A decade ago it was around 70%. So for all the bellyaching about the demographic cliff in higher ed, just imagine if we were still close to that 70% number today. We’d be talking about a few hundred thousand more students in the system.

As I told a gathering of presidents of small colleges and universities last night on Mackinac Island — the first time I had to take [numerous modes of transportation] to get to a conference — being small isn’t distinctive anymore.

There are many reasons undergrad enrollment is down, but they all come down to two interrelated trends: jobs and affordability.

The job has become so central to what students want out of the experience. It’s almost as if colleges now need to guarantee a job.

These institutions will need to rethink the learner relationship with work. Instead of college with work on the side, we might need to move to more of a mindset of work with college on the side by:

  • Making campus jobs more meaningful. Why can’t we have accounting and finance majors work in the CFO office, liberal arts majors work in IT on platforms such as Salesforce and Workday, which are skills needed in the workplace, etc.?
  • Apprenticeships are not just for the trades anymore. Integrate work-based learning into the undergrad experience in a much bigger way than internships and even co-ops.
  • Credentials within the degree. Every graduate should leave college with more than just a BA: they should also carry a certified credential in things like data viz, project management, the Adobe suite, Alteryx, etc.
  • The curriculum needs to be more flexible for students to combine work and learning — not only for the experience but also money for college — so more availability of online courses, hybrid courses, and flexible semesters.

How else can we think about learning and earning?


 

Daniel Christian: My slides for the Educational Technology Organization of Michigan’s Spring 2024 Retreat

From DSC:
Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.

Topics/agenda:

  • Topics & resources re: Artificial Intelligence (AI)
    • Top multimodal players
    • Resources for learning about AI
    • Applications of AI
    • My predictions re: AI
  • The powerful impact of pursuing a vision
  • A potential, future next-gen learning platform
  • Share some lessons from my past with pertinent questions for you all now
  • The significant impact of an organization’s culture
  • Bonus material: Some people to follow re: learning science and edtech

 

Education Technology Organization of Michigan -- ETOM -- Spring 2024 Retreat on June 6-7

PowerPoint slides of Daniel Christian's presentation at ETOM

Slides of the presentation (.PPTX)
Slides of the presentation (.PDF)

 


Plus several more slides re: this vision.

 