Is Generative AI and ChatGPT healthy for Students? — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Beyond Text Generation: How AI Ignites Student Discovery and Deep Thinking, according to firsthand experiences of Teachers and AI researchers like Nick Potkalitsky.

After two years of intensive experimentation with AI in education, I am witnessing something amazing unfolding before my eyes. While much of the world fixates on AI’s generative capabilities—its ability to create essays, stories, and code—my students have discovered something far more powerful: exploratory AI, a dynamic partner in investigation and critique that’s transforming how they think.

They’ve moved beyond the initial fascination with AI-generated content to something far more sophisticated: using AI as an exploratory tool for investigation, interrogation, and intellectual discovery.

Instead of the much-feared “shutdown” of critical thinking, we’re witnessing something extraordinary: the emergence of what I call “generative thinking”—a dynamic process where students learn to expand, reshape, and evolve their ideas through meaningful exploration with AI tools. Here I consciously reposition the term “generative” as a process of human origination, although one ultimately spurred on by machine input.


A Road Map for Leveraging AI at a Smaller Institution — from er.educause.edu by Dave Weil and Jill Forrester
Smaller institutions and others may not have the staffing and resources needed to explore and take advantage of developments in artificial intelligence (AI) on their campuses. This article provides a roadmap to help institutions with more limited resources advance AI use on their campuses.

The following activities can help smaller institutions better understand AI and lay a solid foundation that will allow them to benefit from it.

  1. Understand the impact…
  2. Understand the different types of AI tools…
  3. Focus on institutional data and knowledge repositories…

Smaller institutions do not need to fear being left behind in the wake of rapid advancements in AI technologies and tools. By thinking intentionally about how AI will impact the institution, becoming familiar with the different types of AI tools, and establishing a strong data and analytics infrastructure, institutions can establish the groundwork for AI success. The five fundamental activities of coordinating, learning, planning and governing, implementing, and reviewing and refining can help smaller institutions make progress on their journey to use AI tools to gain efficiencies and improve students’ experiences and outcomes while keeping true to their institutional missions and values.

Also from Educause, see:


AI school opens – learners are not good or bad but fast and slow — from donaldclarkplanb.blogspot.com by Donald Clark

That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model. Assessing prior strengths and weaknesses, personalising to focus more on weaknesses and less on things known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at the exactly same pace, within the same timescale is slightly ridiculous, ruled by the need for timetabling a one to many, classroom model.

For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.


Agentic AI Named Top Tech Trend for 2025 — from campustechnology.com by David Ramel

Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine “agents” that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.

More realistic challenges that the firm has listed elsewhere include:

    • Agentic AI proliferating without governance or tracking;
    • Agentic AI making decisions that are not trustworthy;
    • Agentic AI relying on low-quality data;
    • Employee resistance; and
    • Agentic-AI-driven cyberattacks enabling “smart malware.”

Also from campustechnology.com, see:


Three items from edcircuit.com:


All or nothing at Educause24 — from onedtech.philhillaa.com by Kevin Kelly
Looking for specific solutions at the conference exhibit hall, with an educator focus

Here are some notable trends:

  • Alignment with campus policies: …
  • Choose your own AI adventure: …
  • Integrate AI throughout a workflow: …
  • Moving from prompt engineering to bot building: …
  • More complex problem-solving: …


Not all AI news is good news. In particular, AI has exacerbated the problem of fraudulent enrollment–i.e., rogue actors who use fake or stolen identities with the intent of stealing financial aid funding with no intention of completing coursework.

The consequences are very real, including financial aid funding going to criminal enterprises, enrollment estimates getting dramatically skewed, and legitimate students being blocked from registering for classes that appear “full” due to large numbers of fraudulent enrollments.


 

 



Google’s worst nightmare just became reality — from aidisruptor.ai by Alex McFarland
OpenAI just launched an all-out assault on traditional search engines.

Google’s worst nightmare just became reality. OpenAI didn’t just add search to ChatGPT – they’ve launched an all-out assault on traditional search engines.

It’s the beginning of the end for search as we know it.

Let’s be clear about what’s happening: OpenAI is fundamentally changing how we’ll interact with information online. While Google has spent 25 years optimizing for ad revenue and delivering pages of blue links, OpenAI is building what users actually need – instant, synthesized answers from current sources.

The rollout is calculated and aggressive: ChatGPT Plus and Team subscribers get immediate access, followed by Enterprise and Education users in weeks, and free users in the coming months. This staged approach is about systematically dismantling Google’s search dominance.




Open for AI: India Tech Leaders Build AI Factories for Economic Transformation — from blogs.nvidia.com
Yotta Data Services, Tata Communications, E2E Networks and Netweb are among the providers building and offering NVIDIA-accelerated infrastructure and software, with deployments expected to double by year’s end.


 

How to Level Up Your Job Hunt With AI Using AI to find, evaluate, and apply for jobs. — from whytryai.com by Daniel Nest

AI is best seen as a sparring partner that helps you through all stages of the job hunt.

Here are the ones I’ll cover:

  1. Self-discovery: What are you good at and what are your values?
  2. Upskilling: What gaps exist in your skillset and how can you close them?
  3. Job search: What existing jobs fit your profile and expectations?
  4. Company research: What can you learn about a specific company before applying?
  5. Application process: How do you tailor your CV and cover letter to the job?
  6. Job interview prepHow do you prepare and practice for job interviews?
  7. Feedback analysis: What insights can you gain from any feedback from potential employers?
  8. Decision and negotiation: How do you evaluate job offers and negotiate the best terms?

Now let’s look at each phase in detail and see how AI can help.

 

Along these same lines, see:

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.


ZombAIs: From Prompt Injection to C2 with Claude Computer Use — from embracethered.com by Johann Rehberger

A few days ago, Anthropic released Claude Computer Use, which is a model + code that allows Claude to control a computer. It takes screenshots to make decisions, can run bash commands and so forth.

It’s cool, but obviously very dangerous because of prompt injection. Claude Computer Use enables AI to run commands on machines autonomously, posing severe risks if exploited via prompt injection.

This blog post demonstrates that it’s possible to leverage prompt injection to achieve, old school, command and control (C2) when giving novel AI systems access to computers.

We discussed one way to get malware onto a Claude Computer Use host via prompt injection. There are countless others, like another way is to have Claude write the malware from scratch and compile it. Yes, it can write C code, compile and run it. There are many other options.

TrustNoAI.

And again, remember do not run unauthorized code on systems that you do not own or are authorized to operate on.

Also relevant here, see:


Perplexity Grows, GPT Traffic Surges, Gamma Dominates AI Presentations – The AI for Work Top 100: October 2024 — from flexos.work by Daan van Rossum
Perplexity continues to gain users despite recent controversies. Five out of six GPTs see traffic boosts. This month’s highest gainers including Gamma, Blackbox, Runway, and more.


Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report — from ai.wharton.upenn.edu by  Jeremy Korst, Stefano Puntoni, & Mary Purk

From a survey with more than 800 senior business leaders, this report’s findings indicate that weekly usage of Gen AI has nearly doubled from 37% in 2023 to 72% in 2024, with significant growth in previously slower-adopting departments like Marketing and HR. Despite this increased usage, businesses still face challenges in determining the full impact and ROI of Gen AI. Sentiment reports indicate leaders have shifted from feelings of “curiosity” and “amazement” to more positive sentiments like “pleased” and “excited,” and concerns about AI replacing jobs have softened. Participants were full-time employees working in large commercial organizations with 1,000 or more employees.


Apple study exposes deep cracks in LLMs’ “reasoning” capabilities — from arstechnica.com by Kyle Orland
Irrelevant red herrings lead to “catastrophic” failure of logical inference.

For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.

The fragility highlighted in these new results helps support previous research suggesting that LLMs use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”


Google CEO says more than a quarter of the company’s new code is created by AI — from businessinsider.in by Hugh Langley

  • More than a quarter of new code at Google is made by AI and then checked by employees.
  • Google is doubling down on AI internally to make its business more efficient.

Top Generative AI Chatbots by Market Share – October 2024 


Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview — from github.blog

We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon.

 

Can A.I. Be Blamed for a Teen’s Suicide?

Can A.I. Be Blamed for a Teen’s Suicide? — from nytimes.com by Kevin Roose

On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

“I miss you, baby sister,” he wrote.

“I miss you too, sweet brother,” the chatbot replied.

Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.

On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.

“Please come home to me as soon as possible, my love,” Dany replied.

“What if I told you I could come home right now?” Sewell asked.

“… please do, my sweet king,” Dany replied.

He put down his phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger.

But the experience he had, of getting emotionally attached to a chatbot, is becoming increasingly common. Millions of people already talk regularly to A.I. companions, and popular social media apps including Instagram and Snapchat are building lifelike A.I. personas into their products.

The technology is also improving quickly. Today’s A.I. companions can remember past conversations, adapt to users’ communication styles, role-play as celebrities or historical figures and chat fluently about nearly any subject. Some can send A.I.-generated “selfies” to users, or talk to them with lifelike synthetic voices.

There is a wide range of A.I. companionship apps on the market.


Mother sues tech company after ‘Game of Thrones’ AI chatbot allegedly drove son to suicide — from usatoday.com by Jonathan Limehouse
The mother of 14-year-old Sewell Setzer III is suing Character.AI, the tech company that created a ‘Game of Thrones’ AI chatbot she believes drove him to commit suicide on Feb. 28. Editor’s note: This article discusses suicide and suicidal ideation. If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat at 988lifeline.org.

The mother of a 14-year-old Florida boy is suing Google and a separate tech company she believes caused her son to commit suicide after he developed a romantic relationship with one of its AI bots using the name of a popular “Game of Thrones” character, according to the lawsuit.


From my oldest sister:


Another relevant item?

Inside the Mind of an AI Girlfriend (or Boyfriend) — from wired.com by Will Knight
Dippy, a startup that offers “uncensored” AI companions, lets you peer into their thought process—sometimes revealing hidden motives.

Despite its limitations, Dippy seems to show how popular and addictive AI companions are becoming. Jagga and his cofounder, Angad Arneja, previously cofounded Wombo, a company that uses AI to create memes including singing photographs. The pair left in 2023, setting out to build an AI-powered office productivity tool, but after experimenting with different personas for their assistant, they became fascinated with the potential of AI companionship.

 

AI-governed robots can easily be hacked — from theaivalley.com by Barsee
PLUS: Sam Altman’s new company “World” introduced…

In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.

What did they find ?

  • Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models(LLMs) in robotics.
  • Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
  • Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.

Why does it matter?

This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.

From DSC:
Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.


From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?”  I can’t say I didn’t feel the same way.

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku — from anthropic.com

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.

Per The Rundown AI:

The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.

Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.

Also related/see:

  • What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
    Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
  • New Claude, Who Dis? — from theneurondaily.com
    Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
  • When you give a Claude a mouse — from oneusefulthing.org by Ethan Mollick
    Some quick impressions of an actual agent

Introducing Act-One — from runwayml.com
A new way to generate expressive character performances using simple video inputs.

Per Lore by Nathan Lands:

What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.

Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.

Also related/see:


Google to buy nuclear power for AI datacentres in ‘world first’ deal — from theguardian.com
Tech company orders six or seven small nuclear reactors from California’s Kairos Power

Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.

The US tech corporation has ordered six or seven small nuclear reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.

Related:


ChatGPT Topped 3 Billion Visits in September — from similarweb.com

After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May

ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.


Crazy “AI Army” — from aisecret.us

Also from aisecret.us, see World’s First Nuclear Power Deal For AI Data Centers

Google has made a historic agreement to buy energy from a group of small nuclear reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.


New updates to help creators build community, drive business, & express creativity on YouTube — from support.google.com

Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.

Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community it’s a future full of opportunities, and it’s all Made on YouTube!


New autonomous agents scale your team like never before — from blogs.microsoft.com

Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.

  • First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
  • Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.

10 Daily AI Use Cases for Business Leaders— from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply today. I take you through 10 critical use cases where AI should take over your work or partner with you.


Multi-Modal AI: Video Creation Simplified — from heatherbcooper.substack.com by Heather Cooper

Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.

Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.


AI Medical Imagery Model Offers Fast, Cost-Efficient Expert Analysis — from developer.nvidia.com/

 

DC: I’m really hoping that a variety of AI-based tools, technologies, and services will significantly help with our Access to Justice (#A2J) issues here in America. So this article, per Kristen Sonday at Thomson Reuters — caught my eye.

***

AI for Legal Aid: How to empower clients in need — from thomsonreuters.com by Kristen Sonday
In this second part of this series, we look at how AI-driven technologies can empower those legal aid clients who may be most in need

It’s hard to overstate the impact that artificial intelligence (AI) is expected to have on helping low-income individuals achieve better access to justice. And for those legal services organizations (LSOs) that serve on the front lines, too often without sufficient funding, staff, or technology, AI presents perhaps their best opportunity to close the justice gap. With the ability of AI-driven tools to streamline agency operations, minimize administrative work, more effectively reallocate talent, and allow LSOs to more effectively service clients, the implementation of these tools is essential.

Innovative LSOs leading the way

Already many innovative LSOs are taking the lead, utilizing new technology to complete tasks from complex analysis to AI-driven legal research. Here are two compelling examples of how AI is already helping LSOs empower low-income clients in need.

#A2J #justice #tools #vendors #society #legal #lawfirms #AI #legaltech #legalresearch

Criminal charges, even those that are eligible for simple, free expungement, can prevent someone from obtaining housing or employment. This is a simple barrier to overcome if only help is available.

AI offers the capacity to provide quick, accurate information to a vast audience, particularly to those in urgent need. AI can also help reduce the burden on our legal staff…

 


A legal tech executive explains how AI will fully change the way lawyers work — from legaldive.com by Justin Bachman
A senior executive with ContractPodAi discusses how legal AI poses economic benefits for in-house departments and disruption risks for law firm billing models.

Everything you thought you knew about being a lawyer is about to change.

Legal Dive spoke with Podinic about the transformative nature of AI, including the financial risks to lawyers’ billing models and how it will force general counsel and chief legal officers to consider how they’ll use the time AI is expected to free up for the lawyers on their teams when they no longer have to do administrative tasks and low-level work.


Legaltech will augment lawyers’ capabilities but not replace them, says GlobalData — from globaldata.com

  • Traditionally, law firms have been wary of adopting technologies that could compromise data privacy and legal accuracy; however, attitudes are changing
  • Despite concerns about technology replacing humans in the legal sector, legaltech is more likely to augment the legal profession than replace it entirely
  • Generative AI will accelerate digital transformation in the legal sector
 

Fresh Voices on Legal Tech with Megan Ma — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Dr. Megan Ma

Episode Notes
As genAI continues to edge into all facets of our lives, Dr. Megan Ma has been exploring integrations for this technology in legal, but, more importantly, how it can help lawyers and law students hone their legal skills. Dennis and Tom talk with Dr. Ma about her work and career path and many of the latest developments in legal tech. They take a deep dive into a variety of burgeoning AI tools and trends, and Dr. Ma discusses how her interdisciplinary mindset has helped her develop a unique perspective on the possibilities for AI in the legal profession and beyond.

Legal tech disruption: Doing it on purpose — from localgovernmentlawyer.co.uk
Thomson Reuters looks at the role that a legal technology roadmap can play in improving the operations of in-house legal departments.

Disruption in the legal industry remains a powerful force – from the death of the billable hour to robot lawyers and generative AI. Leaders are facing weighty issues that demand long-term, visionary thinking and that will change the way legal professionals do their jobs.

With half of in-house legal departments increasing their use of legal technology tools, many GCs are taking the initiative to address continued, growing expectations from the business for systems that can make operations better. How can you prepare for a tech or process change so that people come along with you, rather than living in constant fire-fighting mode?

 


Are ChatGPT, Claude & NotebookLM *Really* Disrupting Education? — from drphilippahardman.substack.com
Evaluating Gen AI’s *real* impact on human learning

The TLDR here is that, as useful as popular AI tools are for learners, as things stand they only enable us to take the very first steps on what is a long and complex journey of learning.

AI tools like ChatGPT 4o, Claude 3.5 & NotebookLM can help to give us access to information but (for now at least) the real work of learning remains in our – the humans’ – hands.


To which Anna Mills had a solid comment:

It might make a lot of sense to regulate generated audio to require some kind of watermark and/or metadata. Instructors who teach online and assign voice recordings, we need to recognize that these are now very easy and free to auto-generate. In some cases we are assigning this to discourage students from using AI to just autogenerate text responses, but audio is not immune.




 

From DSC:
Whenever we’ve had a flat tire over the years, a tricky part of the repair process is jacking up the car so that no harm is done to the car (or to me!). There are some grooves underneath the Toyota Camry where one is supposed to put the jack. But as the car is very low to the ground, these grooves are very hard to find (even in good weather and light). 

 

What’s needed is a robotic jack with vision.

If the jack had “vision” and had wheels on it, the device could locate the exact location of the grooves, move there, and then ask the owner whether they are ready for the car to be lifted up. The owner could execute that order when they are ready and the robotic jack could safely hoist the car up.

This type of robotic device is already out there in other areas. But this idea for assistance with replacing a flat tire represents an AI and robotic-based, consumer-oriented application that we’ll likely be seeing much more of in the future. Carmakers and suppliers, please add this one to your list!

Daniel

 

Opening Keynote – GS1

Bringing generative AI to video with Adobe Firefly Video Model

Adobe Launches Firefly Video Model and Enhances Image, Vector and Design Models

  • The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
  • Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
  • Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises

Photoshop delivers powerful innovation for image editing, ideation, 3D design, and more

Even more speed, precision, and power: Get started with the latest Illustrator and InDesign features for creative professionals

Adobe Introduces New Global Initiative Aimed at Helping 30 Million Next-Generation Learners Develop AI Literacy, Content Creation and Digital Marketing Skills by 2030

Add sound to your video via text — Project Super Sonic:



New Dream Weaver — from aisecret.us
Explore Adobe’s New Firefly Video Generative Model

Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.

 

Finalists of the 2024 Comedy Wildlife Photography Awards Focus on the Wily and Witless — from thisiscolossal.com by Kate Mothes


Speaking of photography, here’s a related item:

AI Photo Editors: A Quick Guide to Elevate Your Images — from intelligenthq.com

With the rise of artificial intelligence, photo editing has become accessible and efficient for everyone. An AI photo Editing Tool can transform photos in seconds, producing professional-level results without requiring extensive skills. From adjusting lighting to removing backgrounds, these tools automate complex edits, enabling users to create stunning visuals effortlessly. Whether a beginner or an experienced photographer, AI-powered editors offer a wide range of features that help elevate your images. This guide will introduce you to the key functionalities of AI image editors and provide insights on maximising their potential.

 

From DSC:
Great…we have another tool called Canvas. Or did you say Canva?

Introducing canvas — from OpenAI
A new way of working with ChatGPT to write and code

We’re introducing canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing you and ChatGPT to collaborate on a project. This early beta introduces a new way of working together—not just through conversation, but by creating and refining ideas side by side.

Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Starting today we’re rolling out canvas to ChatGPT Plus and Team users globally. Enterprise and Edu users will get access next week. We also plan to make canvas available to all ChatGPT Free users when it’s out of beta.


Using AI to buy your home? These companies think it’s time you should — from usatoday.com by Andrea Riquier

The way Americans buy homes is changing dramatically.

New industry rules about how home buyers’ real estate agents get paid are prompting a reckoning among housing experts and the tech sector. Many house hunters who are already stretched thin by record-high home prices and closing costs must now decide whether, and how much, to pay an agent.

A 2-3% commission on the median home price of $416,700 could be well over $10,000, and in a world where consumers are accustomed to using technology for everything from taxes to tickets, many entrepreneurs see an opportunity to automate away the middleman, even as some consumer advocates say not so fast.


The State of AI Report 2024 — from nathanbenaich.substack.com by Nathan Benaich


The Great Mismatch — from the-job.beehiiv.com. by Paul Fain
Artificial intelligence could threaten millions of decent-paying jobs held by women without degrees.

Women in administrative and office roles may face the biggest AI automation risk, find Brookings researchers armed with data from OpenAI. Also, why Indiana could make the Swiss apprenticeship model work in this country, and how learners get disillusioned when a certificate doesn’t immediately lead to a good job.

major new analysis from the Brookings Institution, using OpenAI data, found that the most vulnerable workers don’t look like the rail and dockworkers who have recaptured the national spotlight. Nor are they the creatives—like Hollywood’s writers and actors—that many wealthier knowledge workers identify with. Rather, they’re predominantly women in the 19M office support and administrative jobs that make up the first rung of the middle class.

“Unfortunately the technology and automation risks facing women have been overlooked for a long time,” says Molly Kinder, a fellow at Brookings Metro and lead author of the new report. “Most of the popular and political attention to issues of automation and work centers on men in blue-collar roles. There is far less awareness about the (greater) risks to women in lower-middle-class roles.”



Is this how AI will transform the world over the next decade? — from futureofbeinghuman.com by Andrew Maynard
Anthropic’s CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It’s audacious, compelling, and a must-read for anyone working at the intersection of AI and society.

But if Amodei’s essay is approached as a conversation starter rather than a manifesto — which I think it should be — it’s hard to see how it won’t lead to clearer thinking around how we successfully navigate the coming AI transition.

Given the scope of the paper, it’s hard to write a response to it that isn’t as long or longer as the original. Because of this, I’d strongly encourage anyone who’s looking at how AI might transform society to read the original — it’s well written, and easier to navigate than its length might suggest.

That said, I did want to pull out a few things that struck me as particularly relevant and important — especially within the context of navigating advanced technology transitions.

And speaking of that essay, here’s a summary from The Rundown AI:

Anthropic CEO Dario Amodei just published a lengthy essay outlining an optimistic vision for how AI could transform society within 5-10 years of achieving human-level capabilities, touching on longevity, politics, work, the economy, and more.

The details:

  • Amodei believes that by 2026, ‘powerful AI’ smarter than a Nobel Prize winner across fields, with agentic and all multimodal capabilities, will be possible.
  • He also predicted that AI could compress 100 years of scientific progress into 10 years, curing most diseases and doubling the human lifespan.
  • The essay argued AI could strengthen democracy by countering misinformation and providing tools to undermine authoritarian regimes.
  • The CEO acknowledged potential downsides, including job displacement — but believes new economic models will emerge to address this.
  • He envisions AI driving unprecedented economic growth but emphasizes ensuring AI’s benefits are broadly distributed.

Why it matters: 

  • As the CEO of what is seen as the ‘safety-focused’ AI lab, Amodei paints a utopia-level optimistic view of where AI will head over the next decade. This thought-provoking essay serves as both a roadmap for AI’s potential and a call to action to ensure the responsible development of technology.

AI in the Workplace: Answering 3 Big Questions — from gallup.com by Kate Den Houter

However, most workers remain unaware of these efforts. Only a third (33%) of all U.S. employees say their organization has begun integrating AI into their business practices, with the highest percentage in white-collar industries (44%).

White-collar workers are more likely to be using AI. White-collar workers are, by far, the most frequent users of AI in their roles. While 81% of employees in production/frontline industries say they never use AI, only 54% of white-collar workers say they never do and 15% report using AI weekly.

Most employees using AI use it for idea generation and task automation. Among employees who say they use AI, the most common uses are to generate ideas (41%), to consolidate information or data (39%), and to automate basic tasks (39%).


Nvidia Blackwell GPUs sold out for the next 12 months as AI market boom continues — from techspot.com by Skye Jacobs
Analysts expect Team Green to increase its already formidable market share

Selling like hotcakes: The extraordinary demand for Blackwell GPUs illustrates the need for robust, energy-efficient processors as companies race to implement more sophisticated AI models and applications. The coming months will be critical to Nvidia as the company works to ramp up production and meet the overwhelming requests for its latest product.


Here’s my AI toolkit — from wondertools.substack.com by Jeremy Caplan and Nikita Roy
How and why I use the AI tools I do — an audio conversation

1. What are two useful new ways to use AI?

  • AI-powered research: Type a detailed search query into Perplexity instead of Google to get a quick, actionable summary response with links to relevant information sources. Read more of my take on why Perplexity is so useful and how to use it.
  • Notes organization and analysis: Tools like NotebookLM, Claude Projects, and Mem can help you make sense of huge repositories of notes and documents. Query or summarize your own notes and surface novel connections between your ideas.
 



According to Notebook LM on this Future U podcast Searching for Fit: The Impacts of AI in Higher Edhere are some excerpts from the generated table of contents:

Part 1: Setting the Stage

I. Introduction (0:00 – 6:16): …
II. Historical Contextualization (6:16 – 11:30): …
III. The Role of Product Fit in AI’s Impact (11:30 – 17:10): …
IV. AI and the Future of Knowledge Work (17:10 – 24:03): …
V. Teaching About AI in Higher Ed: A Measured Approach (24:03 – 34:20): …
VI. AI & the Evolving Skills Landscape (34:20 – 44:35): …
VII. Ethical & Pedagogical Considerations in an AI-Driven World (44:35 – 54:03):…
VIII. AI Beyond the Classroom: Administrative Applications & the Need for Intuition (54:03 – 1:04:30): …
IX. Reflections & Future Directions (1:04:30 – 1:11:15): ….

Part 2: Administrative Impacts & Looking Ahead

X. Bridging the Conversation: From Classroom to Administration (1:11:15 – 1:16:45): …
XI. The Administrative Potential of AI: A Looming Transformation (1:16:45 – 1:24:42): …
XII. The Need for Intuitiveness & the Importance of Real-World Applications (1:24:42 – 1:29:45): …
XIII. Looking Ahead: From Hype to Impactful Integration (1:29:45 – 1:34:25): …
XIV. Conclusion and Call to Action (1:34:25 – 1:36:03): …


The future of language learning — from medium.com by Sami Tatar

Most language learners do not have access to affordable 1:1 tutoring, which is also proven to be the most effective way to learn (short of moving to a specific country for complete immersion). Meanwhile, language learning is a huge market, and with an estimated 60% of this still dominated by “offline” solutions, meaning it is prime for disruption and never more so than with the opportunities unlocked through AI powered language learning. Therefore — we believe this presents huge opportunities for new startups creating AI native products to create the next language learning unicorns.



“The Broken Mirror: Rethinking Education, AI, and Equity in America’s Classrooms” — from nickpotkalitsky.substack.com by JC Price

It’s not that AI is inherently biased, but in its current state, it favors those who can afford it. The wealthy districts continue to pull ahead, leaving schools without resources further behind. Students in these underserved areas aren’t just being deprived of technology—they’re being deprived of the future.

But imagine a different world—one where AI doesn’t deepen the divide, but helps to bridge it. Technology doesn’t have to be the luxury of the wealthy. It can be a tool for every student, designed to meet them where they are. Adaptive AI systems, integrated into schools regardless of their budget, can provide personalized learning experiences that help students catch up and push forward, all while respecting the limits of their current infrastructure. This is where AI’s true potential lies—not in widening the gap, but in leveling the field.

But imagine if, instead of replacing teachers, AI helped to support them. Picture a world where teachers are freed from the administrative burdens that weigh them down. Where AI systems handle the logistics, so teachers can focus on what they do best—teaching, mentoring, and inspiring the next generation. Professional development could be personalized, helping teachers integrate AI into their classrooms in ways that enhance their teaching, without adding to their workload. This is the future we should be striving toward—one where technology serves to lift up educators, not push them out.

 

Employers Say Students Need AI Skills. What If Students Don’t Want Them? — from insidehighered.com by Ashley Mowreader
Colleges and universities are considering new ways to incorporate generative AI into teaching and learning, but not every student is on board with the tech yet. Experts weigh in on the necessity of AI in career preparation and higher education’s role in preparing students for jobs of the future.

Among the 5,025-plus survey respondents, around 2 percent (n=93), provided free responses to the question on AI policy and use in the classroom. Over half (55) of those responses were flat-out refusal to engage with AI. A few said they don’t know how to use AI or are not familiar with the tool, which impacts their ability to apply appropriate use to coursework.

But as generative AI becomes more ingrained into the workplace and higher education, a growing number of professors and industry experts believe this will be something all students need, in their classes and in their lives beyond academia.

From DSC:
I used to teach a Foundations of Information Technology class. Some of the students didn’t want to be there as they began the class, as it was a required class for non-CS majors. But after seeing what various applications and technologies could do for them, a good portion of those same folks changed their minds. But not all. Some students (2% sounds about right) asserted that they would never use technologies in their futures. Good luck with that I thought to myself. There’s hardly a job out there that doesn’t use some sort of technology.

And I still think that today — if not more so. If students want good jobs, they will need to learn how to use AI-based tools and technologies. I’m not sure there’s much of a choice. And I don’t think there’s much of a choice for the rest of us either — whether we’re still working or not. 

So in looking at the title of the article — “Employers Say Students Need AI Skills. What If Students Don’t Want Them?” — those of us who have spent any time working within the world of business already know the answer.

#Reinvent #Skills #StayingRelevant #Surviving #Workplace + several other categories/tags apply.


For those folks who have tried AI:

Skills: However, genAI may also be helpful in building skills to retain a job or secure a new one. People who had used genAI tools were more than twice as likely to think that these tools could help them learn new skills that may be useful at work or in locating a new job. Specifically, among those who had not used genAI tools, 23 percent believed that these tools might help them learn new skills, whereas 50 percent of those who had used the tools thought they might be helpful in acquiring useful skills (a highly statistically significant difference, after controlling for demographic traits).

Source: Federal Reserve Bank of New York

 
© 2025 | Daniel Christian