Miscommunication Leads AI-Based Hiring Tools Astray — from adigaskell.org

Nearly every Fortune 500 company now uses artificial intelligence (AI) to screen resumes and assess test scores to find the best talent. However, new research from the University of Florida suggests these AI tools might not be delivering the results hiring managers expect.

The problem stems from a simple miscommunication between humans and machines: AI thinks it’s picking someone to hire, but hiring managers only want a list of candidates to interview.

Without knowing about this next step, the AI might choose safe candidates. But if it knows there will be another round of screening, it might suggest different and potentially stronger candidates.


AI agents explained: Why OpenAI, Google and Microsoft are building smarter AI agents — from digit.in by Jayesh Shinde

In the last two years, the world has seen breakneck advancement in the Generative AI space, spanning text-to-text, text-to-image and text-to-video capabilities. All of that has been nothing short of a stepping stone for the next big AI breakthrough – AI agents. According to Bloomberg, OpenAI is preparing to launch its first autonomous AI agent, which is codenamed ‘Operator,’ as soon as January 2025.

Apparently, this OpenAI agent – or Operator, as it’s codenamed – is designed to perform complex tasks independently. By understanding user commands through voice or text, this AI agent will seemingly handle tasks such as controlling different applications on the computer, sending emails, and booking flights, plus no doubt other cool things: stuff that ChatGPT, Copilot, Google Gemini or any other LLM-based chatbot just can’t do on its own.


2025: The year ‘invisible’ AI agents will integrate into enterprise hierarchies  — from venturebeat.com by Taryn Plumb

In the enterprise of the future, human workers are expected to work closely alongside sophisticated teams of AI agents.

According to McKinsey, generative AI and other technologies have the potential to automate 60 to 70% of employees’ work. And, already, an estimated one-third of American workers are using AI in the workplace — oftentimes unbeknownst to their employers.

However, experts predict that 2025 will be the year that these so-called “invisible” AI agents begin to come out of the shadows and take more of an active role in enterprise operations.

“Agents will likely fit into enterprise workflows much like specialized members of any given team,” said Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicAI.


State of AI Report 2024 Summary — from ai-supremacy.com by Michael Spencer
Part I, Consolidation, emergence and adoption. 


Which AI Image Model Is the Best Speller? Let’s Find Out! — from whytryai.com by Daniel Nest
I test 7 image models to find those that can actually write.

The contestants
I picked 7 participants for today’s challenge:

  1. DALL-E 3 by OpenAI (via Microsoft Designer)
  2. FLUX1.1 [pro] by Black Forest Labs (via Glif)
  3. Ideogram 2.0 by Ideogram (via Ideogram)
  4. Imagen 3 by Google (via Image FX)
  5. Midjourney 6.1 by Midjourney (via Midjourney)
  6. Recraft V3 by Recraft (via Recraft)
  7. Stable Diffusion 3.5 Large by Stability AI (via Hugging Face)

How to get started with AI agents (and do it right) — from venturebeat.com by Taryn Plumb

So how can enterprises choose when to adopt third-party models, open source tools or build custom, in-house fine-tuned models? Experts weigh in.


OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI — from bloomberg.com (behind a paywall)
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.


OpenAI and others seek new path to smarter AI as current methods hit limitations — from reuters.com by Krystal Hu and Anna Tong

Summary

  • AI companies face delays and challenges with training new large language models
  • Some researchers are focusing on giving new models more time for inference
  • Shift could impact AI arms race for resources like chips and energy

NVIDIA Advances Robot Learning and Humanoid Development With New AI and Simulation Tools — from blogs.nvidia.com by Spencer Huang
New Project GR00T workflows and AI world model development technologies to accelerate robot dexterity, control, manipulation and mobility.


How Generative AI is Revolutionizing Product Development — from intelligenthq.com

A recent report from McKinsey predicts that generative AI could unlock between $2.6 trillion and $4.4 trillion in value annually within product development and innovation across various industries. This staggering figure highlights just how significantly generative AI is set to transform the landscape of product development. Generative AI app development is driving innovation by using the power of advanced algorithms to generate new ideas, optimize designs, and personalize products at scale. It is also becoming a cornerstone of competitive advantage in today’s fast-paced market. As businesses look to stay ahead, understanding and integrating technologies like generative AI app development into product development processes is becoming more crucial than ever.


What are AI Agents: How To Create a Based AI Agent — from ccn.com by Lorena Nessi

Key Takeaways

  • AI agents handle complex, autonomous tasks beyond simple commands, showcasing advanced decision-making and adaptability.
  • The Based AI Agent template by Coinbase and Replit provides an easy starting point for developers to build blockchain-enabled AI agents.
  • Based AI agents integrate directly with blockchain, supporting crypto wallets and transactions.
  • Securing API keys in development is crucial to protect the agent from unauthorized access (a minimal sketch of one common approach follows this list).
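On that last point, the usual pattern is to keep secrets out of the codebase entirely and load them from the environment at runtime. Here is a minimal sketch in Python; the variable names are hypothetical placeholders for illustration, not part of the Based AI Agent template itself.

```python
import os

from dotenv import load_dotenv  # assumes the python-dotenv package is installed

# Pull secrets from a local .env file that is listed in .gitignore,
# so keys never get committed to the repository.
load_dotenv()

# Hypothetical variable names, used purely for illustration.
agent_api_key = os.environ.get("AGENT_API_KEY")
wallet_private_key = os.environ.get("WALLET_PRIVATE_KEY")

if not agent_api_key or not wallet_private_key:
    raise RuntimeError("Missing credentials: set AGENT_API_KEY and WALLET_PRIVATE_KEY in .env or your shell")
```

In production, a managed secrets store is safer than a .env file, but the principle is the same: the agent reads credentials at runtime and they never appear in source control.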

What are AI Agents and How Are They Used in Different Industries? — from rtinsights.com by Salvatore Salamone
AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries.

 

LinkedIn launches its first AI agent to take on the role of job recruiters — from techcrunch.com by Ingrid Lunden

LinkedIn, the social platform used by professionals to connect with others in their field, hunt for jobs, and develop skills, is taking the wraps off its latest effort to build artificial intelligence tools for users. Hiring Assistant is a new product designed to take on a wide array of recruitment tasks, from ingesting scrappy notes and thoughts to turn into longer job descriptions to sourcing candidates and engaging with them.

LinkedIn is describing Hiring Assistant as a milestone in its AI trajectory: It is, per the Microsoft-owned company, its first “AI agent” and one that happens to be targeting one of LinkedIn’s most lucrative categories of users — recruiters.


Along these same lines, also see:

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku — from anthropic.com (the full excerpt and commentary appear further below)


ZombAIs: From Prompt Injection to C2 with Claude Computer Use — from embracethered.com by Johann Rehberger

A few days ago, Anthropic released Claude Computer Use, which is a model + code that allows Claude to control a computer. It takes screenshots to make decisions, can run bash commands and so forth.

It’s cool, but obviously very dangerous: Claude Computer Use enables AI to run commands on machines autonomously, posing severe risks if exploited via prompt injection.

This blog post demonstrates that it’s possible to leverage prompt injection to achieve old-school command and control (C2) when giving novel AI systems access to computers.

We discussed one way to get malware onto a Claude Computer Use host via prompt injection. There are countless others; another way is to have Claude write the malware from scratch and compile it. Yes, it can write C code, compile it, and run it. There are many other options.

TrustNoAI.

And again, remember: do not run unauthorized code on systems that you do not own or are not authorized to operate on.



Perplexity Grows, GPT Traffic Surges, Gamma Dominates AI Presentations – The AI for Work Top 100: October 2024 — from flexos.work by Daan van Rossum
Perplexity continues to gain users despite recent controversies. Five out of six GPTs see traffic boosts. This month’s highest gainers include Gamma, Blackbox, Runway, and more.


Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report — from ai.wharton.upenn.edu by Jeremy Korst, Stefano Puntoni, & Mary Purk

From a survey with more than 800 senior business leaders, this report’s findings indicate that weekly usage of Gen AI has nearly doubled from 37% in 2023 to 72% in 2024, with significant growth in previously slower-adopting departments like Marketing and HR. Despite this increased usage, businesses still face challenges in determining the full impact and ROI of Gen AI. Sentiment reports indicate leaders have shifted from feelings of “curiosity” and “amazement” to more positive sentiments like “pleased” and “excited,” and concerns about AI replacing jobs have softened. Participants were full-time employees working in large commercial organizations with 1,000 or more employees.


Apple study exposes deep cracks in LLMs’ “reasoning” capabilities — from arstechnica.com by Kyle Orland
Irrelevant red herrings lead to “catastrophic” failure of logical inference.

For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.

The fragility highlighted in these new results helps support previous research suggesting that LLMs’ use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”


Google CEO says more than a quarter of the company’s new code is created by AI — from businessinsider.in by Hugh Langley

  • More than a quarter of new code at Google is made by AI and then checked by employees.
  • Google is doubling down on AI internally to make its business more efficient.

Top Generative AI Chatbots by Market Share – October 2024 


Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview — from github.blog

We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon.

 

AI-governed robots can easily be hacked — from theaivalley.com by Barsee
PLUS: Sam Altman’s new company “World” introduced…

In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.

What did they find?

  • Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
  • Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
  • Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.

Why does it matter?

This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.

From DSC:
Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speed.


From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?”  I can’t say I didn’t feel the same way.

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku — from anthropic.com

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.

Per The Rundown AI:

The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.

Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
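For developers curious what “computer use” looks like in code, here is a minimal sketch against Anthropic’s public beta, following the tool type and beta flag names documented at launch (treat the exact identifiers as assumptions and check the current docs):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # virtual screen/mouse/keyboard tool (beta)
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
        "display_number": 1,
    }],
    messages=[{"role": "user", "content": "Open a browser and look up tomorrow's weather."}],
    betas=["computer-use-2024-10-22"],
)

# Claude does not click anything itself: it returns tool_use blocks (screenshot,
# mouse_move, left_click, type, ...) that your own agent loop must execute in a
# sandboxed VM, feeding the results back to the model on the next turn.
for block in response.content:
    print(block.type, getattr(block, "input", None))
```

That agent loop, and the isolated virtual machine it should run in, is exactly where the prompt-injection risks described in the ZombAIs item above come into play.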

Also related/see:

  • What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
    Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
  • New Claude, Who Dis? — from theneurondaily.com
    Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
  • When you give a Claude a mouse — from oneusefulthing.org by Ethan Mollick
    Some quick impressions of an actual agent

Introducing Act-One — from runwayml.com
A new way to generate expressive character performances using simple video inputs.

Per Lore by Nathan Lands:

What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.

Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.

Also related/see:


Google to buy nuclear power for AI datacentres in ‘world first’ deal — from theguardian.com
Tech company orders six or seven small nuclear reactors from California’s Kairos Power

Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.

The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.

Related:


ChatGPT Topped 3 Billion Visits in September — from similarweb.com

After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May

ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.


Crazy “AI Army” — from aisecret.us

Also from aisecret.us, see World’s First Nuclear Power Deal For AI Data Centers

Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.


New updates to help creators build community, drive business, & express creativity on YouTube — from support.google.com

Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.

Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community; it’s a future full of opportunities, and it’s all Made on YouTube!


New autonomous agents scale your team like never before — from blogs.microsoft.com

Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.

  • First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
  • Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.

10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.


Multi-Modal AI: Video Creation Simplified — from heatherbcooper.substack.com by Heather Cooper

Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.

Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.


AI Medical Imagery Model Offers Fast, Cost-Efficient Expert Analysis — from developer.nvidia.com/

 

AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh
The market for AI products and services could reach between $780 billion and $990 billion by 2027.

At a Glance

  • The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
  • Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
  • Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.

Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”


And on a somewhat related note (i.e., emerging technologies), also see the following two postings:

Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo
As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.

Key Takeaways

  • Robots’ potential has long been a fascination for humans and has even led to a booming field of robot-assisted surgery.
  • Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
  • The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.

Proto hologram tech allows cancer patients to receive specialist care without traveling large distances — from inavateonthenet.net

“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.




Clone your voice in minutes: The AI trick 95% don’t know about — from aidisruptor.ai by Alex McFarland
Warning: May cause unexpected bouts of talking to yourself

Now that you’ve got your voice clone, what can you do with it?

  1. Content Creation:
    • Podcast Production: Record episodes in half the time. Your listeners won’t know the difference, but your schedule will thank you.
    • Audiobook Narration: Always wanted to narrate your own book? Now you can, without spending weeks in a recording studio.
    • YouTube Videos: Create voiceovers for your videos in multiple languages. World domination, here you come!
  2. Business Brilliance:
    • Customer Service: Personalized automated responses that actually sound personal.
    • Training Materials: Create engaging e-learning content in your own voice, minus the hours of recording.
    • Presentations: Never worry about losing your voice before a big presentation again. Your clone’s got your back.
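If your clone lives with a provider such as ElevenLabs, most of the use cases above come down to one HTTP call. A minimal sketch, assuming the v1 text-to-speech endpoint and header names from ElevenLabs’ public documentation (the voice ID is a placeholder for your own cloned voice):

```python
import os

import requests

VOICE_ID = "your-cloned-voice-id"  # placeholder: copy the real ID from your ElevenLabs dashboard

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={
        "text": "Welcome back to the show! Today we're talking about voice cloning.",
        "model_id": "eleven_multilingual_v2",  # multilingual model named in the docs at the time of writing
    },
)
resp.raise_for_status()

with open("podcast_intro.mp3", "wb") as f:
    f.write(resp.content)  # the response body is the rendered audio
```

Swap in a longer script and loop over paragraphs, and you have the skeleton of the podcast and audiobook workflows listed above.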

185 real-world gen AI use cases from the world’s leading organizations — from blog.google by Brian Hall; via Daniel Nest’s Why Try AI

In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.

In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.

Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.


AI Data Drop: 3 Key Insights from Real-World Research on AI Usage — from microsoft.com; via Daniel Nest’s Why Try AI
One of the largest studies of Copilot usage—at nearly 60 companies—reveals how AI is changing the way we work.

  1. AI is starting to liberate people from email
  2. Meetings are becoming more about value creation
  3. People are co-creating more with AI—and with one another


*** Dharmesh has been working on creating agent.ai — a professional network for AI agents.***


Speaking of agents, also see:

Onboarding the AI workforce: How digital agents will redefine work itself — from venturebeat.com by Gary Grossman

AI in 2030: A transformative force

  1. AI agents are integral team members
  2. The emergence of digital humans
  3. AI-driven speech and conversational interfaces
  4. AI-enhanced decision-making and leadership
  5. Innovation and research powered by AI
  6. The changing nature of job roles and skills

AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper
The latest AI video models that deliver results

AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.

No worries – we do have plenty of generative AI video tools we can use right now.

  • Kling AI launched its updated v1.5 and the quality of image or text to video is impressive.
  • Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
  • Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
  • …plus several more

 



“Who to follow in AI” in 2024? — from ai-supremacy.com by Michael Spencer
Part III – #35-55 – I combed the internet and found the best sources of AI insights, education and articles. LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts

This list features both some of the best Newsletters on AI and people who make LinkedIn posts about AI papers, advances and breakthroughs. In today’s article we’ll be meeting the first 19-34, in a list of 180+.

Newsletter Writers
YouTubers
Engineers
Researchers who write
Technologists who are Creators
AI Educators
AI Evangelists of various kinds
Futurism writers and authors

I have been sharing the list in reverse chronological order on LinkedIn here.


Inside Google’s 7-Year Mission to Give AI a Robot Body — from wired.com by Hans Peter Brondmo
As the head of Alphabet’s AI-powered robotics moonshot, I came to believe many things. For one, robots can’t come soon enough. For another, they shouldn’t look like us.


Learning to Reason with LLMs — from openai.com
We are introducing OpenAI o1, a new large language model trained with reinforcement learning to perform complex reasoning. o1 thinks before it answers—it can produce a long internal chain of thought before responding to the user.


Items re: Microsoft Copilot:

Also see this next video re: Copilot Pages:


Sal Khan on the critical human skills for an AI age — from time.com by Kevin J. Delaney

As a preview of the upcoming Summit interview, here are Khan’s views on two critical questions, edited for space and clarity:

  1. What are the enduring human work skills in a world with ever-advancing AI? Some people say students should study liberal arts. Others say deep domain expertise is the key to remaining professionally relevant. Others say you need to have the skills of a manager to be able to delegate to AI. What do you think are the skills or competencies that ensure continued relevance professionally, employability, etc.?
  2. A lot of organizations are thinking about skills-based approaches to their talent. It involves questions like, ‘Does someone know how to do this thing or not?’ And what are the ways in which they can learn it and have some accredited way to know they actually have done it? That is one of the ways in which people use Khan Academy. Do you have a view of skills-based approaches within workplaces, and any thoughts on how AI tutors and training fit within that context?

 

From DSC:
The above item is simply excellent!!! I love it!



Also relevant/see:

3 new Chrome AI features for even more helpful browsing — from blog.google by Parisa Tabriz
See how Chrome’s new AI features, including Google Lens for desktop and Tab compare, can help you get things done more easily on the web.


On speaking to AI — from oneusefulthing.org by Ethan Mollick
Voice changes a lot of things

So, let’s talk about ChatGPT’s new Advanced Voice mode and the new AI-powered Siri. They are not just different approaches to talking to AI. In many ways, they represent the divide between two philosophies of AI – Copilots versus Agents, small models versus large ones, specialists versus generalists.


Your guide to AI – August 2024 — from nathanbenaich.substack.com by Nathan Benaich and Alex Chalmers


Microsoft says OpenAI is now a competitor in AI and search — from cnbc.com by Jordan Novet

Key Points

  • Microsoft’s annually updated list of competitors now includes OpenAI, a long-term strategic partner.
  • The change comes days after OpenAI announced a prototype of a search engine.
  • Microsoft has reportedly invested $13 billion into OpenAI.


Excerpt from a post by Graham Clay

1. Flux, an open-source text-to-image creator that is comparable to industry leaders like Midjourney, was released by Black Forest Labs (the “original team” behind Stable Diffusion). It is capable of generating high quality text in images (there are tons of educational use cases). You can play with it on their demo page, on Poe, or by running it on your own computer (tutorial here).

Other items re: Flux:

How to FLUX  — from heatherbcooper.substack.com by Heather Cooper
Where to use FLUX online & full tutorial to create a sleek ad in minutes


Also from Heather Cooper:

Introducing FLUX: Open-Source text to image model

FLUX… has been EVERYWHERE this week, as I’m sure you have seen. Developed by Black Forest Labs, it is an open-source image generation model that’s gaining attention for its ability to rival leading models like Midjourney, DALL·E 3, and SDXL.

What sets FLUX apart is its blend of creative freedom, precision, and accessibility—it’s available across multiple platforms and can be run locally (a minimal local-run sketch follows the model list below).

Why FLUX Matters
FLUX’s open-source nature makes it accessible to a broad audience, from hobbyists to professionals.

It offers advanced multimodal and parallel diffusion transformer technology, delivering high visual quality, strong prompt adherence, and diverse outputs.

It’s available in 3 models:
FLUX.1 [pro]: A high-performance, commercial image synthesis model.
FLUX.1 [dev]: An open-weight, non-commercial variant of FLUX.1 [pro]
FLUX.1 [schnell]: A faster, distilled version of FLUX.1, operating up to 10x quicker.
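Because the [dev] and [schnell] variants ship open weights, you can run FLUX on your own machine. Here is a minimal sketch using Hugging Face’s diffusers library with the settings its documentation suggests for the distilled [schnell] checkpoint (a recent GPU with plenty of memory is assumed):

```python
import torch
from diffusers import FluxPipeline

# Download the open-weight, distilled FLUX.1 [schnell] checkpoint from the Hugging Face Hub.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # trades some speed for a much smaller VRAM footprint

image = pipe(
    prompt="a storefront sign that reads 'FLUX Bakery, fresh bread daily'",
    guidance_scale=0.0,        # guidance is disabled for the distilled model
    num_inference_steps=4,     # [schnell] is tuned for very few steps
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]

image.save("flux_schnell_sign.png")
```

Prompts with embedded text, like the one above, are exactly where FLUX’s strong in-image spelling shows up.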

Daily Digest: Huge (in)Flux of AI videos. — from bensbites.beehiiv.com
PLUS: Review of ChatGPT’s advanced voice mode.

  1. During the weekend, image models made a comeback. Recently released Flux models can create realistic images with near-perfect text—straight from the model, without much patchwork. To get the party going, people are putting these images into video generation models to create pretty trippy videos. I can’t identify half of them as AI, and they’ll only get better. See this tutorial on how to create a video ad for your product.

 


7 not only cool but handy use cases of new Claude — from techthatmatters.beehiiv.com by Harsh Makadia

  1. Data visualization
  2. Infographic
  3. Copy the UI of a website
  4. …and more

Achieving Human Level Competitive Robot Table Tennis — from sites.google.com

 

What Students Want When It Comes To AI — from onedtech.philhillaa.com by Glenda Morgan
The Digital Education Council Global AI Student Survey 2024

The Digital Education Council (DEC) this week released the results of a global survey of student opinions on AI. It’s a large survey with nearly 4,000 respondents conducted across 16 countries, but more importantly, it asks some interesting questions. There are many surveys about AI out there right now, but this one stands out. I’m going to go into some depth here, as the entire survey report is worth reading.



AI is forcing a teaching and learning evolution — from eschoolnews.com by Laura Ascione
AI and technology tools are leading to innovative student learning–along with classroom, school, and district efficiency

Key findings from the 2024 K-12 Educator + AI Survey, which was conducted by Hanover Research, include:

  • Teachers are using AI to personalize and improve student learning, not just run classrooms more efficiently, but challenges remain
  • While post-pandemic challenges persist, the increased use of technology is viewed positively by most teachers and administrators
  • …and more

From DSC:
I wonder…how will the use of AI in education square with the issues of using smartphones/laptops within the classrooms? See:

  • Why Schools Are Racing to Ban Student Phones — from nytimes.com by Natasha Singer; via GSV
    As the new school year starts, a wave of new laws that aim to curb distracted learning is taking effect in Indiana, Louisiana and other states.

A three-part series from Dr. Phillippa Hardman:

Part 1: Writing Learning Objectives  
The Results Part 1: Writing Learning Objectives

In this week’s post I will dive into the results from task 1: writing learning objectives. Stay tuned over the next two weeks to see all of the results.

Part 2: Selecting Instructional Strategies.
The Results Part 2: Selecting an Instructional Strategy

Welcome back to our three-part series exploring the impact of AI on instructional design.

This week, we’re tackling a second task and a crucial aspect of instructional design: selecting instructional strategies. The ability to select appropriate instructional strategies to achieve intended objectives is a mission-critical skill for any instructional designer. So, can AI help us do a good job of it? Let’s find out!

Part 3: How Close is AI to Replacing Instructional Designers?
The Results Part 3: Creating a Course Outline

Today, we’re diving into what many consider to be the role-defining task of the instructional designer: creating a course design outline.


ChatGPT Cheat Sheet for Instructional Designers! — from Alexandra Choy Youatt EdD

Instructional Designers!
Whether you’re new to the field or a seasoned expert, this comprehensive guide will help you leverage AI to create more engaging and effective learning experiences.

What’s Inside?
Roles and Tasks: Tailored prompts for various instructional design roles and tasks.
Formats: Different formats to present your work, from training plans to rubrics.
Learning Models: Guidance on using the ADDIE model and various pedagogical strategies.
Engagement Tips: Techniques for online engagement and collaboration.
Specific Tips: Industry certifications, work-based learning, safety protocols, and more.

Who Can Benefit?
Corporate Trainers
Curriculum Developers
E-Learning Specialists
Instructional Technologists
Learning Experience Designers
And many more!

ChatGPT Cheat Sheet | Instructional Designer


5 AI Tools I Use Every Day (as a Busy Student) — from theaigirl.substack.com by Diana Dovgopol
AI tools that I use every day to boost my productivity.
#1 Gamma
#2 Perplexity
#3 Cockatoo

I use this AI tool almost every day as well. Since I’m still a master’s student at university, I have to attend lectures and seminars, which are always in English or German, neither of which is my native language. With the help of Cockatoo, I create transcripts of the lectures and/or translations into my language. This means I don’t have to take notes in class and then manually translate them afterward. All I need to do is record the lecture audio on any device or directly in Cockatoo, upload it, and the audio and text are ready for me.

…and more
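Cockatoo is a hosted service, but the record-transcribe-translate workflow described above can be approximated locally with OpenAI’s open-source Whisper model. This is a stand-in for illustration (not what the author uses), and note that Whisper’s built-in translation only targets English:

```python
import whisper  # pip install openai-whisper; ffmpeg must also be installed

model = whisper.load_model("base")  # larger checkpoints ("small", "medium") are more accurate but slower

# Transcribe the recorded lecture in its original language (e.g., German)...
transcript = model.transcribe("lecture.mp3")
print(transcript["text"][:500])

# ...or translate the speech directly into English text.
translation = model.transcribe("lecture.mp3", task="translate")
with open("lecture_en.txt", "w", encoding="utf-8") as f:
    f.write(translation["text"])
```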


Students Worry Overemphasis on AI Could Devalue Education — from insidehighered.com by Juliette Rowsell
Report stresses that AI is “new standard” and universities need to better communicate policies to learners.

Rising use of AI in higher education could cause students to question the quality and value of education they receive, a report warns.

This year’s Digital Education Council Global AI Student Survey, of more than 3,800 students from 16 countries, found that more than half (55 percent) believed overuse of AI within teaching devalued education, and 52 percent said it negatively impacted their academic performance.

Despite this, significant numbers of students admitted to using such technology. Some 86 percent said they “regularly” used programs such as ChatGPT in their studies, 54 percent said they used it on a weekly basis, and 24 percent said they used it to write a first draft of a submission.

Higher Ed Leadership Is Excited About AI – But Investment Is Lacking — from forbes.com by Vinay Bhaskara

As corporate America races to integrate AI into its core operations, higher education finds itself in a precarious position. I conducted a survey of 63 university leaders revealing that while higher ed leaders recognize AI’s transformative potential, they’re struggling to turn that recognition into action.

This struggle is familiar for higher education — gifted with the mission of educating America’s youth but plagued with a myriad of operational and financial struggles, higher ed institutions often lag behind their corporate peers in technology adoption. In recent years, this gap has become threateningly large. In an era of declining enrollments and shifting demographics, closing this gap could be key to institutional survival and success.

The survey results paint a clear picture of inconsistency: 86% of higher ed leaders see AI as a “massive opportunity,” yet only 21% believe their institutions are prepared for it. This disconnect isn’t just a minor inconsistency – it’s a strategic vulnerability in an era of declining enrollments and shifting demographics.


(Generative) AI Isn’t Going Anywhere but Up — from stefanbauschard.substack.com by Stefan Bauschard
“Hype” claims are nonsense.

There has been a lot of talk recently about an “AI Bubble.” Supposedly, the industry, or at least the generative AI subset of it, will collapse. This is known as the “Generative AI Bubble.” A bubble — a broad one or a generative one — is nonsense. These are the reasons we will continue to see massive growth in AI.


AI Readiness: Prepare Your Workforce to Embrace the Future — from learningguild.com by Danielle Wallace

Artificial Intelligence (AI) is revolutionizing industries, enhancing efficiency, and unlocking new opportunities. To thrive in this landscape, organizations need to be ready to embrace AI not just technologically but also culturally.

Learning leaders play a crucial role in preparing employees to adapt and excel in an AI-driven workplace. Transforming into an AI-empowered organization requires more than just technological adoption; it demands a shift in organizational mindset. This guide delves into how learning leaders can support this transition by fostering the right mindset attributes in employees.


Claude AI for eLearning Developers — from learningguild.com by Bill Brandon

Claude is fast, produces grammatically correct text, and outputs easy-to-read articles, emails, blog posts, summaries, and analyses. Take some time to try it out. If you worry about plagiarism and text scraping, put the results through Grammarly’s plagiarism checker (I did not use Claude for this article, but I did send the text through Grammarly).


Survey: Top Teacher Uses of AI in the Classroom — from thejournal.com by Rhea Kelly

A new report from Cambium Learning Group outlines the top ways educators are using artificial intelligence to manage their classrooms and support student learning. Conducted by Hanover Research, the 2024 K-12 Educator + AI Survey polled 482 teachers and administrators at schools and districts that are actively using AI in the classroom.

More than half of survey respondents (56%) reported that they are leveraging AI to create personalized learning experiences for students. Other uses included providing real-time performance tracking and feedback (cited by 52% of respondents), helping students with critical thinking skills (50%), proofreading writing (47%), and lesson planning (44%).

On the administrator side, top uses of AI included interpreting/analyzing student data (61%), managing student records (56%), and managing professional development (56%).


Addendum on 8/14/24:

 

The Three Wave Strategy of AI Implementation — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

The First Wave: Low-Hanging Fruit

These are just examples:

  • Student services
  • Resume and Cover Letter Review (Career Services): Offering individual resume critiques
  • Academic Policy Development and Enforcement (Academic Affairs)…
  • Health Education and Outreach (Health and Wellness Services) …
  • Sustainability Education and Outreach (Sustainability and Environmental Initiatives) …
  • Digital Marketing and Social Media Management (University Communications and Marketing) …
  • Grant Proposal Development and Submission (Research and Innovation) …
  • Financial Aid Counseling (Financial Aid and Scholarships) …
  • Alumni Communications (Alumni Relations and Development) …
  • Scholarly Communications (Library Services) …
  • International Student and Scholar Services (International Programs and Global Engagement)

Duolingo Max: A Paid Subscription to Learn a Language Using ChatGPT AI (Worth It?) — from theaigirl.substack.com by Diana Dovgopol (behind paywall for the most part)
The integration of AI in language learning apps could be game-changing.


Research Insights #12: Copyrights and Academia — from aiedusimplified.substack.com by Lance Eaton
Scholarly authors are not going to be happy…

A while back, I wrote about some of my thoughts on generative AI around the copyright issues. Not much has changed since then, but a new article (Academic authors ‘shocked’ after Taylor & Francis sells access to their research to Microsoft AI) is definitely stirring up all sorts of concerns by academic authors. The basics of that article are that Taylor & Francis sold access to authors’ research to Microsoft for AI development without informing the authors, sparking significant concern among academics and the Society of Authors about transparency, consent, and the implications for authors’ rights and future earnings.

The stir can be seen as both valid and redundant. Two folks’ points stick out to me in this regard.

 


Bill Gates Reveals Superhuman AI Prediction — from youtube.com by Rufus Griscom, Bill Gates, Andy Sack, and Adam Brotman

In this episode of the Next Big Idea podcast, host Rufus Griscom and Bill Gates are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called “AI First.” Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.

Key moments:

00:05 Bill Gates discusses AI’s transformative potential in revolutionizing technology.
02:21 Superintelligence is inevitable and marks a significant advancement in AI technology.
09:23 Future AI may integrate deeply as cognitive assistants in personal and professional life.
14:04 AI’s metacognitive advancements could revolutionize problem-solving capabilities.
21:13 AI’s next frontier lies in developing human-like metacognition for sophisticated problem-solving.
27:59 AI advancements empower both good and malicious intents, posing new security challenges.
28:57 Rapid AI development raises questions about controlling its global application.
33:31 Productivity enhancements from AI can significantly improve efficiency across industries.
35:49 AI’s future applications in consumer and industrial sectors are subjects of ongoing experimentation.
46:10 AI democratization could level the economic playing field, enhancing service quality and reducing costs.
51:46 AI plays a role in mitigating misinformation and bridging societal divides through enhanced understanding.


OpenAI Introduces CriticGPT: A New Artificial Intelligence AI Model based on GPT-4 to Catch Errors in ChatGPT’s Code Output — from marktechpost.com

The team has summarized their primary contributions as follows.

  1. The team has offered the first instance of a simple, scalable oversight technique that greatly assists humans in more thoroughly detecting problems in real-world RLHF data.
  2. Within the ChatGPT and CriticGPT training pools, the team discovered that critiques produced by CriticGPT catch more inserted bugs and are preferred over those written by human contractors.
  3. Compared to human contractors working alone, this research indicates that teams consisting of critic models and human contractors generate more thorough critiques. Compared with reviews generated exclusively by models, this partnership also lowers the incidence of hallucinations.
  4. This study introduces Force Sampling Beam Search (FSBS), an inference-time sampling and scoring technique that balances the trade-off between minimizing bogus concerns and discovering genuine faults in LLM-generated critiques.

Character.AI now allows users to talk with AI avatars over calls — from techcrunch.com by Ivan Mehta

a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.

The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.


Google Translate Just Added 110 More Languages — from lifehacker.com
You can now use the app to communicate in languages you’ve never even heard of.

Google Translate can come in handy when you’re traveling or communicating with someone who speaks another language, and thanks to a new update, you can now connect with some 614 million more people. Google is adding 110 new languages to its Translate tool using its AI PaLM 2 large language model (LLM), which brings the total of supported languages to nearly 250. This follows the 24 languages added in 2022, including Indigenous languages of the Americas as well as those spoken across Africa and central Asia.




Listen to your favorite books and articles voiced by Judy Garland, James Dean, Burt Reynolds and Sir Laurence Olivier — from elevenlabs.io
ElevenLabs partners with estates of iconic stars to bring their voices to the Reader App

 

A New Digital Divide: Student AI Use Surges, Leaving Faculty Behind — from insidehighered.com by Lauren Coffey
While both students and faculty have concerns with generative artificial intelligence, two new reports show a divergence in AI adoption. 

Meanwhile, a separate survey of faculty released Thursday by Ithaka S+R, a higher education consulting firm, showcased that faculty—while increasingly familiar with AI—often do not know how to use it in classrooms. Two out of five faculty members are familiar with AI, the Ithaka report found, but only 14 percent said they are confident in their ability to use AI in their teaching. Just slightly more (18 percent) said they understand the teaching implications of generative AI.

“Serious concerns about academic integrity, ethics, accessibility, and educational effectiveness are contributing to this uncertainty and hostility,” the Ithaka report said.

The diverging views about AI are causing friction. Nearly a third of students said they have been warned to not use generative AI by professors, and more than half (59 percent) are concerned they will be accused of cheating with generative AI, according to the Pearson report, which was conducted with Morning Consult and surveyed 800 students.


What teachers want from AI — from hechingerreport.org by Javeria Salman
When teachers designed their own AI tools, they built math assistants, tools for improving student writing, and more

An AI chatbot that walks students through how to solve math problems. An AI instructional coach designed to help English teachers create lesson plans and project ideas. An AI tutor that helps middle and high schoolers become better writers.

These aren’t tools created by education technology companies. They were designed by teachers tasked with using AI to solve a problem their students were experiencing.

Over five weeks this spring, about 300 people – teachers, school and district leaders, higher ed faculty, education consultants and AI researchers – came together to learn how to use AI and develop their own basic AI tools and resources. The professional development opportunity was designed by technology nonprofit Playlab.ai and faculty at the Relay Graduate School of Education.


The Comprehensive List of Talks & Resources for 2024 — from aiedusimplified.substack.com by Lance Eaton
Resources, talks, podcasts, etc that I’ve been a part of in the first half of 2024

Resources from things such as:

  • Lightning Talks
  • Talks & Keynotes
  • Workshops
  • Podcasts & Panels
  • Honorable Mentions

Next-Gen Classroom Observations, Powered by AI — from educationnext.org by Michael J. Petrilli
The use of video recordings in classrooms to improve teacher performance is nothing new. But the advent of artificial intelligence could add a helpful evaluative tool for teachers, measuring instructional practice relative to common professional goals with chatbot feedback.

Multiple companies are pairing AI with inexpensive, ubiquitous video technology to provide feedback to educators through asynchronous, offsite observation. It’s an appealing idea, especially given the promise and popularity of instructional coaching, as well as the challenge of scaling it effectively (see “Taking Teacher Coaching To Scale,” research, Fall 2018).

Enter AI. Edthena is now offering an “AI Coach” chatbot that offers teachers specific prompts as they privately watch recordings of their lessons. The chatbot is designed to help teachers view their practice relative to common professional goals and to develop action plans to improve.

To be sure, an AI coach is no replacement for human coaching.


Personalized AI Tutoring as a Social Activity: Paradox or Possibility? — from er.educause.edu by Ron Owston
Can the paradox between individual tutoring and social learning be reconciled through the possibility of AI?

We need to shift our thinking about GenAI tutors serving only as personal learning tools. The above activities illustrate how these tools can be integrated into contemporary classroom instruction. The activities should not be seen as prescriptive but merely suggestive of how GenAI can be used to promote social learning. Although I specifically mention only one online activity (“Blended Learning”), all can be adapted to work well in online or blended classes to promote social interaction.


Stealth AI — from higherai.substack.com by Jason Gulya (a Professor of English at Berkeley College), in conversation with Zack Kinzler
What happens when students use AI all the time, but aren’t allowed to talk about it?

In many ways, this comes back to one of my general rules: You cannot ban AI in the classroom. You can only issue a gag rule.

And if you do issue a gag rule, then it deprives students of the space they often need to make heads or tails of this technology.

We need to listen to actual students talking about actual uses, and reflecting on their actual feelings. No more abstraction.

In this conversation, Jason Gulya (a Professor of English at Berkeley College) talks to Zack Kinzler about what students are saying about Artificial Intelligence and education.


What’s New in Microsoft EDU | ISTE Edition June 2024 — from techcommunity.microsoft.com

Welcome to our monthly update for Teams for Education and thank you so much for being part of our growing community! We’re thrilled to share over 20 updates and resources and show them in action next week at ISTELive 24 in Denver, Colorado, US.

Copilot for Microsoft 365 – Educator features
Guided Content Creation
Coming soon to Copilot for Microsoft 365 is a guided content generation experience to help educators get started with creating materials like assignments, lesson plans, lecture slides, and more. The content will be created based on the educator’s requirements with easy ways to customize the content to their exact needs.
Standards alignment and creation
Quiz generation through Copilot in Forms
Suggested AI Feedback for Educators
Teaching extension
To better support educators with their daily tasks, we’ll be launching a built-in Teaching extension to help guide them through relevant activities and provide contextual, educator-based support in Copilot.
Education data integration

Copilot for Microsoft 365 – Student features
Interactive practice experiences
Flashcards activity
Guided chat activity
Learning extension in Copilot for Microsoft 365


New AI tools for Google Workspace for Education — from blog.google by Akshay Kirtikar and Brian Hendricks
We’re bringing Gemini to teen students using their school accounts to help them learn responsibly and confidently in an AI-first future, and empowering educators with new tools to help create great learning experiences.

 

Latent Expertise: Everyone is in R&D — from oneusefulthing.org by Ethan Mollick
Ideas come from the edges, not the center

Excerpt (emphasis DSC):

And to understand the value of AI, they need to do R&D. Since AI doesn’t work like traditional software, but more like a person (even though it isn’t one), there is no reason to suspect that the IT department has the best AI prompters, nor that it has any particular insight into the best uses of AI inside an organization. IT certainly plays a role, but the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job. In fact, for large companies, the source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI.


OpenAI’s former chief scientist is starting a new AI company — from theverge.com by Emma Roth
Ilya Sutskever is launching Safe Superintelligence Inc., an AI startup that will prioritize safety over ‘commercial pressures.’

Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product:” creating a safe and powerful AI system.

Ilya Sutskever Has a New Plan for Safe Superintelligence — from bloomberg.com by Ashlee Vance (behind a paywall)
OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.

Safe Superintelligence — from theneurondaily.com by Noah Edelman

Ilya Sutskever is kind of a big deal in AI, to put it lightly.

Part of OpenAI’s founding team, Ilya was Chief Scientist (read: genius) before being part of the coup that fired Sam Altman.

Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.

If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.


AI is exhausting the power grid. Tech firms are seeking a miracle solution. — from washingtonpost.com by Evan Halper and Caroline O’Donovan
As power needs of AI push emissions up and put big tech in a bind, companies put their faith in elusive — some say improbable — technologies.

As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world’s most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source.


Microsoft, OpenAI, Nvidia join feds for first AI attack simulation — from axios.com by Sam Sabin

Federal officials, AI model operators and cybersecurity companies ran the first joint simulation of a cyberattack involving a critical AI system last week.

Why it matters: Responding to a cyberattack on an AI-enabled system will require a different playbook than the typical hack, participants told Axios.

The big picture: Both Washington and Silicon Valley are attempting to get ahead of the unique cyber threats facing AI companies before they become more prominent.


Hot summer of AI video: Luma & Runway drop amazing new models — from heatherbcooper.substack.com by Heather Cooper
Plus an amazing FREE video to sound app from ElevenLabs

Immediately after we saw Sora-like videos from KLING, Luma AI’s Dream Machine video results overshadowed them.

Dream Machine is a next-generation AI video model that creates high-quality, realistic shots from text instructions and images.


Introducing Gen-3 Alpha — from runwayml.com by Anastasis Germanidis
A new frontier for high-fidelity, controllable video generation.


AI-Generated Movies Are Around the Corner — from news.theaiexchange.com by The AI Exchange
The future of AI in filmmaking; participate in our AI for Agencies survey

AI-Generated Feature Films Are Around the Corner.
We predict feature-film-length AI-generated films are coming by the end of 2025, if not sooner.

Don’t believe us? You need to check out the new Gen-3 model Runway ML released this week.

They’re not the only ones. We also have Pika, which just raised $80M. And Google’s Veo. And OpenAI’s Sora. (+ many others)

 




Kuaishou Unveils Kling: A Text-to-Video Model To Challenge OpenAI’s Sora — from maginative.com by Chris McKay


Generating audio for video — from deepmind.google


LinkedIn leans on AI to do the work of job hunting — from  techcrunch.com by Ingrid Lunden

Learning personalisation. LinkedIn continues to be bullish on its video-based learning platform, and it appears to have found a strong current of interest among users who need to skill up in AI. Cohen said that traffic for AI-related courses — which include modules on technical skills as well as non-technical ones such as basic introductions to generative AI — has increased by 160% over last year.

You can be sure that LinkedIn is pushing its search algorithms to tap into the interest, but it’s also boosting its content with AI in another way.

For Premium subscribers, it is piloting what it describes as “expert advice, powered by AI.” Tapping into expertise from well-known instructors such as Alicia Reece, Anil Gupta, Dr. Gemma Leigh Roberts and Lisa Gates, LinkedIn says its AI-powered coaches will deliver responses personalized to users, as a “starting point.”

These will, in turn, also appear as personalized coaches that a user can tap while watching a LinkedIn Learning course.

Also related to this, see:

Unlocking New Possibilities for the Future of Work with AI — from news.linkedin.com

Personalized learning for everyone: Whether you’re looking to make a change or not, the skills required in the workplace are expected to change by 68% by 2030.

Expert advice, powered by AI: We’re beginning to pilot the ability to get personalized practical advice instantly from industry leading business leaders and coaches on LinkedIn Learning, all powered by AI. The responses you’ll receive are trained by experts and represent a blend of insights that are personalized to each learner’s unique needs. While human professional coaches remain invaluable, these tools provide a great starting point.

Personalized coaching, powered by AI, when watching a LinkedIn course: As learners —including all Premium subscribers — watch our new courses, they can now simply ask for summaries of content, clarify certain topics, or get examples and other real-time insights, e.g. “Can you simplify this concept?” or “How does this apply to me?”
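From a technical standpoint, LinkedIn hasn’t published how this in-course coaching is built, but the behavior it describes (ask for a summary, a clarification, or an example while watching a lesson) maps onto a familiar pattern: send the lesson’s transcript plus the learner’s question to a general-purpose LLM. The sketch below is only an illustration of that pattern; the model name and the ask_course_coach helper are hypothetical, not anything LinkedIn has disclosed.

```python
# Minimal sketch of an in-course Q&A coach (illustrative only;
# LinkedIn has not published its actual implementation).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_course_coach(lesson_transcript: str, question: str) -> str:
    """Answer a learner's question grounded in the lesson they are watching."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a learning coach. Answer in plain language, using only "
                    "the course transcript below.\n\n" + lesson_transcript
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# The kinds of prompts mentioned above:
# ask_course_coach(transcript, "Can you simplify this concept?")
# ask_course_coach(transcript, "How does this apply to me?")
```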

 


Roblox’s Road to 4D Generative AI — from corp.roblox.com by Morgan McGuire, Chief Scientist

  • Roblox is building toward 4D generative AI, going beyond single 3D objects to dynamic interactions.
  • Solving the challenge of 4D will require multimodal understanding across appearance, shape, physics, and scripts.
  • Early tools that are foundational for our 4D system are already accelerating creation on the platform.
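Roblox hasn’t shared a schema for this 4D work, so the snippet below is purely an illustrative sketch of what the four modalities named above (appearance, shape, physics, scripts) would have to bundle in a single generated asset; every field name is an assumption, not Roblox’s actual data model.

```python
# Illustrative only: a toy container for the four modalities Roblox describes.
# Field names and types are assumptions, not Roblox's real asset format.
from dataclasses import dataclass, field

@dataclass
class Generated4DAsset:
    mesh: bytes = b""                                 # shape: the 3D geometry itself
    materials: dict = field(default_factory=dict)     # appearance: textures, colors, surfaces
    physics: dict = field(default_factory=dict)       # physics: mass, friction, joints, constraints
    behavior_script: str = ""                         # scripts: code describing how the object acts

    def is_4d(self) -> bool:
        """The 'fourth dimension' here is behavior over time: the object does something."""
        return bool(self.behavior_script.strip())
```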

 

Can Microsoft Copilot Replace Popular AI Tools Like ChatGPT, Gamma AI, and Midjourney? — from flexos.work by Daan van Rossum
Can Microsoft Copilot win out over popular AI tools like ChatGPT, Gamma AI, and Midjourney, and which AI best fits your business?

From DSC:
The article talks about the pros and cons of Microsoft Copilot. But I really appreciated the following table/information:


Also regarding Microsoft and AI, see:

Windows Recall stores all your history UNENCRYPTED. — from bensbites.beehiiv.com by Ben Tossell

Remember Microsoft’s shiny new AI tool, “Recall”? It’s like your personal time machine, answering questions about your browsing history and laptop activity by taking screenshots every 5 seconds. Sounds cool, right? Well, it gets problematic.

What’s going on here?
Security researchers have found a potential privacy nightmare lurking within this seemingly convenient tool.

What does this mean?
Recall stores all those screenshots in an unencrypted database on your laptop. This means anyone with access to your device could potentially see everything you’ve been doing. Cybersecurity experts are already comparing it to spyware, and one ethical hacker even built a tool called “TotalRecall” (yes, like the movie) that can pull all the information Recall saves. Yikes.
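To make the researchers’ point concrete: an unencrypted local database is readable by anything that runs as the logged-in user, using nothing more than the standard library. The sketch below assumes an ordinary SQLite file; the path and the table-listing query are placeholders meant to show the class of access involved, not Recall’s actual file layout or the TotalRecall tool.

```python
# Sketch of why an unencrypted on-disk database is risky: any process running
# as the user can open and read it. Path and schema here are placeholders,
# not Recall's actual storage layout.
import sqlite3
from pathlib import Path

db_path = Path.home() / "AppData" / "Local" / "ExampleCaptureStore" / "capture.db"  # hypothetical path

with sqlite3.connect(db_path) as conn:
    # No key, no password, no decryption step required.
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table';"
    ).fetchall()
    print("Readable tables:", [name for (name,) in tables])
```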

 

Microsoft teams with Khan Academy to make its AI tutor free for K-12 educators and will develop a Phi-3 math model — from venturebeat.com by Ken Yeung

Microsoft is partnering with Khan Academy in a multifaceted deal to demonstrate how AI can transform the way we learn. The cornerstone of today’s announcement centers on Khan Academy’s Khanmigo AI agent. Microsoft says it will migrate the bot to its Azure OpenAI Service, enabling the nonprofit educational organization to provide all U.S. K-12 educators free access to Khanmigo.

In addition, Microsoft plans to use its Phi-3 model to help Khan Academy improve math tutoring and collaborate to generate more high-quality learning content while making more courses available within Microsoft Copilot and Microsoft Teams for Education.
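Neither Microsoft nor Khan Academy has published implementation details, but in practice “migrating the bot to Azure OpenAI Service” means serving its prompts against a model deployment behind an Azure endpoint. The sketch below shows that call shape with the current openai Python SDK; the endpoint, deployment name, and tutoring prompt are placeholders, not Khanmigo’s actual configuration.

```python
# Minimal sketch of a tutoring-style call through Azure OpenAI Service.
# Endpoint, deployment name, and prompts are placeholders, not Khanmigo's setup.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical Azure resource
)

response = client.chat.completions.create(
    model="example-tutor-deployment",  # name of a model deployment in that Azure resource
    messages=[
        {
            "role": "system",
            "content": "You are a patient math tutor. Guide the student with hints "
                       "and questions rather than giving the final answer outright.",
        },
        {"role": "user", "content": "I'm stuck on 3x + 5 = 20. What should I try first?"},
    ],
)
print(response.choices[0].message.content)
```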


One-Third of Teachers Have Already Tried AI, Survey Finds — from the74million.org by Kevin Mahnken
A RAND poll released last month finds English and social studies teachers embracing tools like ChatGPT.

One in three American teachers has used artificial intelligence tools in their teaching at least once, with English and social studies teachers leading the way, according to a RAND Corporation survey released last month. While the new technology isn’t yet transforming how kids learn, both teachers and district leaders expect that it will become an increasingly common feature of school life.


Professors Try ‘Restrained AI’ Approach to Help Teach Writing — from edsurge.com by Jeffrey R. Young
Can ChatGPT make human writing more efficient, or is writing an inherently time-consuming process best handled without AI tools?

This article is part of the guide: For Education, ChatGPT Holds Promise — and Creates Problems.

When ChatGPT emerged a year and a half ago, many professors immediately worried that their students would use it as a substitute for doing their own written assignments — that they’d click a button on a chatbot instead of doing the thinking involved in responding to an essay prompt themselves.

But two English professors at Carnegie Mellon University had a different first reaction: They saw in this new technology a way to show students how to improve their writing skills.

“They start really polishing way too early,” Kaufer says. “And so what we’re trying to do is with AI, now you have a tool to rapidly prototype your language when you are prototyping the quality of your thinking.”

He says the concept is based on writing research from the 1980s that shows that experienced writers spend about 80 percent of their early writing time thinking about whole-text plans and organization and not about sentences.


On Building AI Models for Education — from aieducation.substack.com by Claire Zau
Google’s LearnLM, Khan Academy/MSFT’s Phi-3 Models, and OpenAI’s ChatGPT Edu

This piece primarily breaks down how Google’s LearnLM was built, and takes a quick look at Microsoft/Khan Academy’s Phi-3 and OpenAI’s ChatGPT Edu as alternative approaches to building an “education model” (not necessarily a new model in the latter case, but we’ll explain). Thanks to the public release of their 86-page research paper, we have the most comprehensive view into LearnLM. Our understanding of Microsoft/Khan Academy small language models and ChatGPT Edu is limited to the information provided through announcements, leaving us with less “under the hood” visibility into their development.


AI tutors are quietly changing how kids in the US study, and the leading apps are from China — from techcrunch.com by Rita Liao

Answer AI is among a handful of popular apps that are leveraging the advent of ChatGPT and other large language models to help students with everything from writing history papers to solving physics problems. Of the top 20 education apps in the U.S. App Store, five are AI agents that help students with their school assignments, including Answer AI, according to data from Data.ai on May 21.


Is your school behind on AI? If so, there are practical steps you can take for the next 12 months — from stefanbauschard.substack.com by Stefan Bauschard

If your school (district) or university has not yet made significant efforts to think about how you will prepare your students for a World of AI, I suggest the following steps:

July 24 – Administrator PD & AI Guidance
In July, administrators should receive professional development on AI, if they haven’t already. This should include…

August 24 – Professional Development for Teachers and Staff…
Fall 24 – Parents; Co-curricular; Classroom experiments…
December 24 – Revision to Policy…


New ChatGPT Version Aiming at Higher Ed — from insidehighered.com by Lauren Coffey
ChatGPT Edu, emerging after initial partnerships with several universities, is prompting both cautious optimism and worries.

OpenAI unveiled a new version of ChatGPT focused on universities on Thursday, building on work with a handful of higher education institutions that partnered with the tech giant.

The ChatGPT Edu product, expected to start rolling out this summer, is a platform for institutions intended to give students free access. OpenAI said the artificial intelligence (AI) toolset could be used for an array of education applications, including tutoring, writing grant applications and reviewing résumés.

 
© 2024 | Daniel Christian