From DSC:
Great…we have another tool called Canvas. Or did you say Canva?

Introducing canvas — from OpenAI
A new way of working with ChatGPT to write and code

We’re introducing canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing you and ChatGPT to collaborate on a project. This early beta introduces a new way of working together—not just through conversation, but by creating and refining ideas side by side.

Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Starting today we’re rolling out canvas to ChatGPT Plus and Team users globally. Enterprise and Edu users will get access next week. We also plan to make canvas available to all ChatGPT Free users when it’s out of beta.


Using AI to buy your home? These companies think it’s time you should — from usatoday.com by Andrea Riquier

The way Americans buy homes is changing dramatically.

New industry rules about how home buyers’ real estate agents get paid are prompting a reckoning among housing experts and the tech sector. Many house hunters who are already stretched thin by record-high home prices and closing costs must now decide whether, and how much, to pay an agent.

A 2-3% commission on the median home price of $416,700 could be well over $10,000, and in a world where consumers are accustomed to using technology for everything from taxes to tickets, many entrepreneurs see an opportunity to automate away the middleman, even as some consumer advocates say not so fast.


The State of AI Report 2024 — from nathanbenaich.substack.com by Nathan Benaich


The Great Mismatch — from the-job.beehiiv.com by Paul Fain
Artificial intelligence could threaten millions of decent-paying jobs held by women without degrees.

Women in administrative and office roles may face the biggest AI automation risk, find Brookings researchers armed with data from OpenAI. Also, why Indiana could make the Swiss apprenticeship model work in this country, and how learners get disillusioned when a certificate doesn’t immediately lead to a good job.

A major new analysis from the Brookings Institution, using OpenAI data, found that the most vulnerable workers don’t look like the rail and dockworkers who have recaptured the national spotlight. Nor are they the creatives—like Hollywood’s writers and actors—that many wealthier knowledge workers identify with. Rather, they’re predominantly women in the 19M office support and administrative jobs that make up the first rung of the middle class.

“Unfortunately the technology and automation risks facing women have been overlooked for a long time,” says Molly Kinder, a fellow at Brookings Metro and lead author of the new report. “Most of the popular and political attention to issues of automation and work centers on men in blue-collar roles. There is far less awareness about the (greater) risks to women in lower-middle-class roles.”



Is this how AI will transform the world over the next decade? — from futureofbeinghuman.com by Andrew Maynard
Anthropic’s CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It’s audacious, compelling, and a must-read for anyone working at the intersection of AI and society.

But if Amodei’s essay is approached as a conversation starter rather than a manifesto — which I think it should be — it’s hard to see how it won’t lead to clearer thinking around how we successfully navigate the coming AI transition.

Given the scope of the paper, it’s hard to write a response that isn’t as long as or longer than the original. Because of this, I’d strongly encourage anyone who’s looking at how AI might transform society to read the original — it’s well written, and easier to navigate than its length might suggest.

That said, I did want to pull out a few things that struck me as particularly relevant and important — especially within the context of navigating advanced technology transitions.

And speaking of that essay, here’s a summary from The Rundown AI:

Anthropic CEO Dario Amodei just published a lengthy essay outlining an optimistic vision for how AI could transform society within 5-10 years of achieving human-level capabilities, touching on longevity, politics, work, the economy, and more.

The details:

  • Amodei believes that by 2026, ‘powerful AI’ smarter than a Nobel Prize winner across fields, with agentic and full multimodal capabilities, will be possible.
  • He also predicted that AI could compress 100 years of scientific progress into 10 years, curing most diseases and doubling the human lifespan.
  • The essay argued AI could strengthen democracy by countering misinformation and providing tools to undermine authoritarian regimes.
  • The CEO acknowledged potential downsides, including job displacement — but believes new economic models will emerge to address this.
  • He envisions AI driving unprecedented economic growth but emphasizes ensuring AI’s benefits are broadly distributed.

Why it matters: 

  • As the CEO of what is seen as the ‘safety-focused’ AI lab, Amodei paints a utopia-level optimistic view of where AI will head over the next decade. This thought-provoking essay serves as both a roadmap for AI’s potential and a call to action to ensure the responsible development of technology.

AI in the Workplace: Answering 3 Big Questions — from gallup.com by Kate Den Houter

However, most workers remain unaware of these efforts. Only a third (33%) of all U.S. employees say their organization has begun integrating AI into their business practices, with the highest percentage in white-collar industries (44%).

White-collar workers are more likely to be using AI. White-collar workers are, by far, the most frequent users of AI in their roles. While 81% of employees in production/frontline industries say they never use AI, only 54% of white-collar workers say the same, and 15% report using AI weekly.

Most employees using AI use it for idea generation and task automation. Among employees who say they use AI, the most common uses are to generate ideas (41%), to consolidate information or data (39%), and to automate basic tasks (39%).


Nvidia Blackwell GPUs sold out for the next 12 months as AI market boom continues — from techspot.com by Skye Jacobs
Analysts expect Team Green to increase its already formidable market share

Selling like hotcakes: The extraordinary demand for Blackwell GPUs illustrates the need for robust, energy-efficient processors as companies race to implement more sophisticated AI models and applications. The coming months will be critical to Nvidia as the company works to ramp up production and meet the overwhelming requests for its latest product.


Here’s my AI toolkit — from wondertools.substack.com by Jeremy Caplan and Nikita Roy
How and why I use the AI tools I do — an audio conversation

1. What are two useful new ways to use AI?

  • AI-powered research: Type a detailed search query into Perplexity instead of Google to get a quick, actionable summary response with links to relevant information sources. Read more of my take on why Perplexity is so useful and how to use it.
  • Notes organization and analysis: Tools like NotebookLM, Claude Projects, and Mem can help you make sense of huge repositories of notes and documents. Query or summarize your own notes and surface novel connections between your ideas.
 

Voice and Trust in Autonomous Learning Experiences — from learningguild.com by Bill Brandon

This article seeks to apply some lessons from brand management to learning design at a high level. Throughout the rest of this article, it is essential to remember that the context is an autonomous, interactive learning experience. The experience is created adaptively by Gen AI or (soon enough) by agents, not by rigid scripts. It may be that an AI will choose to present prewritten texts or prerecorded videos from a content library according to the human users’ responses or questions. Still, the overall experience will be different for each user. It will be more like a conversation than a book.

In summary, while AI chatbots have the potential to enhance learning experiences, their acceptance and effectiveness depend on several factors, including perceived usefulness, ease of use, trust, relational factors, perceived risk, and enjoyment. 

Personalization and building trust are essential for maintaining user engagement and achieving positive learning outcomes. The right “voice” for autonomous AI or a chatbot can enhance trust by making interactions more personal, consistent, and empathetic.

 

Legal budgets will get an AI-inspired makeover in 2025: survey — from legaldive.com by Justin Bachman
Nearly every general counsel is budgeting to add generative AI tools to their departments – and they’re all expecting to realize efficiencies by doing so.

Dive Brief:

  • Nearly all general counsel say their budgets are up slightly after wrestling with widespread cuts last year. And most of them, 61%, say they expect slightly larger budgets next year as well, an average of 5% more, according to the 2025 In-House Legal Budgeting Report from Axiom and Wakefield Research. Technology was ranked as the top in-house investment priority for both 2024 and 2025 for larger companies.
  • Legal managers predict their companies will boost investment in technology and real estate/facilities in 2025, while reducing outlays for human resources and mergers-and-acquisitions activity, according to the survey. This mix of shifting priorities may disrupt legal budgets.
  • Among the planned legal tech spending, the top three areas for investment are virtual legal assistants/AI-powered chatbots (35%); e-billing and spend-management software (31%); and contract management platforms (30%).
 

When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia
As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.


Per The Rundown AI:

The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster, powered by a whopping 100,000 Nvidia H100 GPUs. It was built in just 122 days and is planned to double in size soon.

Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With more than six times that number now in production, the xAI team and future versions of Grok are going to put significant pressure on OpenAI, Google, and others to deliver.


Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius
Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.

Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.


The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall]
The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.


[Claude] Artifacts are now generally available — from anthropic.com

[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.

Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.


MIT’s AI Risk Repository — a comprehensive database of risks from AI systems

What are the risks from Artificial Intelligence?
A comprehensive living database of over 700 AI risks categorized by their cause and risk domain.

What is the AI Risk Repository?
The AI Risk Repository has three parts:

  • The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  • The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
  • The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
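As a rough illustration of how the repository’s two taxonomies slice the same records, here is a small hypothetical sketch. The records and field names below are invented for illustration; they are not the repository’s actual schema.

```python
# Hypothetical mini-version of the repository: each risk carries both a
# causal classification and a domain classification, so the same records
# can be filtered either way. Records and field names are invented.
risks = [
    {"risk": "AI-generated false news", "cause": "intentional", "domain": "Misinformation"},
    {"risk": "Hallucinated citations", "cause": "unintentional", "domain": "Misinformation"},
    {"risk": "Biased hiring decisions", "cause": "unintentional", "domain": "Discrimination"},
]

def by_domain(domain: str) -> list[str]:
    """All risks filed under a given domain (the Domain Taxonomy view)."""
    return [r["risk"] for r in risks if r["domain"] == domain]

def by_cause(cause: str) -> list[str]:
    """All risks filed under a given cause (the Causal Taxonomy view)."""
    return [r["risk"] for r in risks if r["cause"] == cause]

print(by_domain("Misinformation"))
print(by_cause("unintentional"))
```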

California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI — from newsday.com by The Associated Press

SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

Per Oncely:

The Details:

  • Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
  • Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
  • Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.

New in Gemini: Custom Gems and improved image generation with Imagen 3 — from blog.google
The ability to create custom Gems is coming to Gemini Advanced subscribers, and updated image generation capabilities with our latest Imagen 3 model are coming to everyone.

We have new features rolling out [starting 8/28/24] that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.


Cut the Chatter, Here Comes Agentic AI — from trendmicro.com

Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”, it’s agentic AI.

Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.

In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.

Citing AWS guidance, ZDNET counts six different potential types of AI agents:

    • Simple reflex agents for tasks like resetting passwords
    • Model-based reflex agents for pro vs. con decision making
    • Goal-/rule-based agents that compare options and select the most efficient pathways
    • Utility-based agents that compare for value
    • Learning agents
    • Hierarchical agents that manage and assign subtasks to other agents
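To make that taxonomy a bit more concrete, here is a minimal sketch contrasting the first and third types. The rules, option names, and thresholds are my own illustrative examples, not from AWS or ZDNET:

```python
# Illustrative sketch of two agent types from the list above.
# All condition-action rules and options here are hypothetical.

def simple_reflex_agent(event: str) -> str:
    """Reacts to the current input only, via fixed condition-action rules."""
    rules = {
        "forgot_password": "send_reset_link",
        "account_locked": "unlock_account",
    }
    return rules.get(event, "escalate_to_human")

def goal_based_agent(options: dict[str, float], budget: float) -> str:
    """Compares options against a goal and picks the cheapest path within budget."""
    feasible = {name: cost for name, cost in options.items() if cost <= budget}
    if not feasible:
        return "no_viable_option"
    return min(feasible, key=feasible.get)

print(simple_reflex_agent("forgot_password"))
print(goal_based_agent({"air": 900.0, "rail": 120.0, "road": 80.0}, 500.0))
```

The reflex agent never deliberates; the goal-based agent evaluates alternatives before acting, which is the distinction the list above is drawing.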

Ask Claude: Amazon turns to Anthropic’s AI for Alexa revamp — from reuters.com by Greg Bensinger

Summary:

  • Amazon developing new version of Alexa with generative AI
  • Retailer hopes to generate revenue by charging for its use
  • Concerns about in-house AI prompt Amazon to turn to Anthropic’s Claude, sources say
  • Amazon says it uses many different technologies to power Alexa

Alibaba releases new AI model Qwen2-VL that can analyze videos more than 20 minutes long — from venturebeat.com by Carl Franzen


Hobbyists discover how to insert custom fonts into AI-generated images — from arstechnica.com by Benj Edwards
Like adding custom art styles or characters, in-world typefaces come to Flux.


200 million people use ChatGPT every week – up from 100 million last fall, says OpenAI — from zdnet.com by Sabrina Ortiz
Nearly two years after launching, ChatGPT continues to draw new users. Here’s why.

 

For college students—and for higher ed itself—AI is a required course — from forbes.com by Jamie Merisotis

Some of the nation’s biggest tech companies have announced efforts to reskill people to avoid job losses caused by artificial intelligence, even as they work to perfect the technology that could eliminate millions of those jobs.

It’s fair to ask, however: What should college students and prospective students, weighing their choices and possible time and financial expenses, think of this?

The news this spring was encouraging for people seeking to reinvent their careers to grab middle-class jobs and a shot at economic security.

 


Addressing Special Education Needs With Custom AI Solutions — from teachthought.com
AI can offer many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.

For too long, students with learning disabilities have struggled to navigate a traditional education system that often fails to meet their unique needs. But what if technology could help bridge the gap, offering personalized support and unlocking the full potential of every learner?

Artificial intelligence (AI) is emerging as a powerful ally in special education, offering many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.



11 Summer AI Developments Important to Educators — from stefanbauschard.substack.com by Stefan Bauschard
Equity demands that we help students prepare to thrive in an AI-World

*SearchGPT
*Smaller & on-device (phones, glasses) AI models
*AI TAs
*Access barriers decline, equity barriers grow
*Claude Artifacts and Projects
*Agents, and Agent Teams of a million+
*Humanoid robots & self-driving cars
*AI Curricular integration
*Huge video and video-segmentation gains
*Writing Detectors — The final blow
*AI Unemployment, Student AI anxiety, and forward-thinking approaches
*Alternative assessments


Academic Fracking: When Publishers Sell Scholars Work to AI — from aiedusimplified.substack.com by Lance Eaton
Further discussion of publisher practices selling scholars’ work to AI companies

Last week, I explored AI and academic publishing in response to an article that came out a few weeks ago about a deal Taylor & Francis made to sell their books to Microsoft and one other AI company (unnamed) for a boatload of money.

Since then, two more pieces have been widely shared, including this piece from Inside Higher Ed by Kathryn Palmer (for which I was interviewed and in which I’m mentioned) and this piece from the Chronicle of Higher Ed by Christa Dutton. Both pieces try to cover the different sides: talking to authors, scanning the commentary online, finding experts to consult, and talking to the publishers. It’s one of those topics that feels really important, yet probably only to the small number of folks who find themselves thinking about academic publishing, scholarly communication, and generative AI.


At the Crossroads of Innovation: Embracing AI to Foster Deep Learning in the College Classroom — from er.educause.edu by Dan Sarofian-Butin
AI is here to stay. How can we, as educators, accept this change and use it to help our students learn?

The Way Forward
So now what?

In one respect, we already have a partial answer. Over the last thirty years, there has been a dramatic shift from a teaching-centered to a learning-centered education model. High-impact practices, such as service learning, undergraduate research, and living-learning communities, are common and embraced because they help students see the real-world connections of what they are learning and make learning personal.11

Therefore, I believe we must double down on a learning-centered model in the age of AI.

The first step is to fully and enthusiastically embrace AI.

The second step is to find the “jagged technological frontier” of using AI in the college classroom.




Futures Thinking in Education — from gettingsmart.com by Getting Smart Staff

Key Points

  • Educators should leverage these tools to prepare for rapid changes driven by technology, climate, and social dynamics.
  • Cultivating empathy for future generations can help educators design more impactful and forward-thinking educational practices.
 

Per the Rundown AI:

Why it matters: AI is slowly shifting from a tool we text/prompt with, to an intelligence that we collaborate, learn, and grow with. Advanced Voice Mode’s ability to understand and respond to emotions in real-time convos could also have huge use cases in everything from customer service to mental health support.

Also relevant/see:


Creators to Have Personalized AI Assistants, Meta CEO Mark Zuckerberg Tells NVIDIA CEO Jensen Huang — from blogs.nvidia.com by Brian Caulfield
Zuckerberg and Huang explore the transformative potential of open source AI, the launch of AI Studio, and exchange leather jackets at SIGGRAPH 2024.

“Every single restaurant, every single website will probably, in the future, have these AIs …” Huang said.

“…just like every business has an email address and a website and a social media account, I think, in the future, every business is going to have an AI,” Zuckerberg responded.

More broadly, the advancement of AI across a broad ecosystem promises to supercharge human productivity, for example by giving every person on earth a digital assistant (or assistants) they can interact with quickly and fluidly, allowing people to live richer lives.

Also related/see:


From DSC:
Today was a MUCH better day for Nvidia, however (up 12.81%). But it’s been very volatile over the last several weeks, as people and institutions ask where the ROIs are going to come from.






9 compelling reasons to learn how to use AI Chatbots — from interestingengineering.com by Atharva Gosavi
AI Chatbots are conversational agents that can act on your behalf and converse with humans – a futuristic novelty already getting people excited about its potential to improve efficiency.

7. Accessibility and inclusivity
Chatbots can be designed to support multiple languages and accessibility needs, making services more inclusive. They can cater to users with disabilities by providing voice interaction capabilities and simplifying access to information. Understanding how to develop inclusive chatbots can help you contribute to making technology more accessible to everyone, a crucial aspect in today’s diverse society.

8. Future-proofing your skills
AI and automation are the future of work. Building AI chatbots is a great way to future-proof your skill set, and given AI’s rising trajectory, it’ll be an in-demand skill in the years to come. Staying ahead of technological trends is a great way to ensure you remain relevant and competitive in the job market.


Top 7 generative AI use cases for business — from cio.com by Grant Gross
Advanced chatbots, digital assistants, and coding helpers seem to be some of the sweet spots for gen AI use so far in business.

Many AI experts say the current use cases for generative AI are just the tip of the iceberg. More use cases will present themselves as gen AIs get more powerful and users get more creative with their experiments.

However, a handful of gen AI use cases are already bubbling up. Here’s a look at the most popular and promising.

 

How to use Perplexity in your daily workflow — from ai-supremacy.com by Michael Spencer and Alex McFarland
“I barely use Google anymore (for anything)” says today’s guest author.

Make Perplexity your go-to research companion with these strategies:

  • Morning briefings: Start your day by asking Perplexity for the latest news in your field. (I personally like to use Perplexity to curate the top AI news of the day to consider writing about for Unite AI and Techopedia.)
  • Fact-checking: Use it to quickly verify information before including it in your work.
  • Brainstorming: Generate ideas for projects or content by asking open-ended questions.
  • Learning new concepts: When you encounter an unfamiliar term or idea, turn to Perplexity for a quick, comprehensive explanation.
  • Writing assistance: Use it to find relevant statistics, examples, or counterarguments for your content.
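For readers who want to script steps like the morning briefing above, Perplexity also exposes an API. The sketch below only assembles an OpenAI-style chat request body; the model name and endpoint URL are assumptions to check against Perplexity’s current documentation, and actually sending the request would require your own API key:

```python
import json

# Hypothetical sketch: build a chat-completions request for a
# Perplexity-style (OpenAI-compatible) endpoint. The model name and
# URL below are assumptions -- verify against the provider's docs.
API_URL = "https://api.perplexity.ai/chat/completions"

def build_briefing_request(topic: str, model: str = "sonar") -> dict:
    """Assemble a 'morning briefing' query as a JSON-serializable payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Answer concisely and cite sources."},
            {"role": "user", "content": f"Summarize today's top news in {topic}."},
        ],
    }

payload = build_briefing_request("AI")
print(json.dumps(payload, indent=2))
# Sending it would look like: requests.post(API_URL, json=payload,
#     headers={"Authorization": "Bearer <your-key>"})
```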

From DSC:
So I used it to ask “What is Elon Musk’s Memphis Supercluster?” (which went online yesterday I believe) — which resulted in this reply:

Elon Musk’s Memphis Supercluster is a newly activated AI training cluster that is claimed to be the most powerful in the world. Here are the key details about this supercomputer:

1. Location: The supercluster is located in Memphis, Tennessee[1][2].

2. Hardware: It consists of 100,000 liquid-cooled Nvidia H100 GPUs connected through a single RDMA (Remote Direct Memory Access) fabric[1][3].

3. Purpose: The supercluster is designed for training large language models (LLMs) and other advanced AI technologies for Musk’s xAI company[1][2].

4. Activation: The Memphis Supercluster began training at approximately 4:20 AM local time on July 22, 2024[1][3].

5. Collaboration: The project is a result of collaboration between xAI, X (formerly Twitter), Nvidia, and other supporting companies[1][2].

6. Investment: With each H100 GPU estimated to cost between $30,000 to $40,000, the total investment in GPUs alone is estimated to be between $3 billion to $4 billion[5].

7. Goals: Musk claims that this supercluster will be used to develop “the world’s most powerful AI by every measure” by December 2024[1].

8. Comparison: The Memphis Supercluster’s 100,000 H100 GPUs significantly outclass other supercomputers in terms of GPU horsepower, such as Frontier (37,888 AMD GPUs) and Microsoft Eagle (14,400 Nvidia H100 GPUs)[3].

9. Infrastructure: The project required significant infrastructure development, including fiber optic networking[5].

While Musk’s claims about the supercluster’s capabilities are ambitious, it remains to be seen how it will perform in practice and whether it will meet the stated goals within the given timeframe[1].

Citations:
[1] https://venturebeat.com/ai/elon-musk-announces-most-powerful-ai-training-cluster-in-the-world/
[2] https://www.tuaw.com/2024/07/22/elon-musks-xai-unveils-worlds-most-powerful-ai-training-cluster/
[3] https://www.tomshardware.com/pc-components/gpus/elon-musk-fires-up-the-most-powerful-ai-training-cluster-in-the-world-uses-100000-nvidia-h100-gpus-on-a-single-fabric
[4] https://www.thestack.technology/xai-elon-musk-ai-supercomputer/
[5] https://www.benzinga.com/news/24/07/39881748/elon-musks-xai-flips-the-switch-on-100k-nvidia-h100-gpus-worth-up-to-4b-in-memphis-supercluster-most
[6] https://www.youtube.com/watch?v=0JhtJa05C2Y

Elon’s AI empire expands — from theneurondaily.com by Grant Harvey
Elon Musk’s team at xAI just powered on the “World’s Most Powerful AI Training Cluster.”
If you don’t know what a supercluster is, it’s basically a massive network of Nvidia GPUs (computer chips) working together as a single unit to solve “super” complex calculations at unprecedented speeds.

And this Memphis Supercluster is the most “super” supercluster we’ve ever seen. The new facility, dubbed the “Gigafactory of Compute”, is a beast:

  • 100,000 liquid-cooled Nvidia H100 GPUs on a single RDMA fabric (for context, Google snagged only 50,000 H100 GPUs last year).
  • Up to 150 megawatts of electricity—enough to power 100K homes.
  • At least one million gallons of water per day to keep cool!
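Those figures can be sanity-checked with quick arithmetic. The ~1.5 kW average-household draw used below is my own assumption for the homes comparison, not a number from the article:

```python
# Back-of-envelope check on the Memphis Supercluster figures above.
gpus = 100_000
cost_low, cost_high = 30_000, 40_000       # estimated price per H100, USD
total_low = gpus * cost_low                # low end of GPU spend
total_high = gpus * cost_high              # high end of GPU spend
print(f"GPU spend: ${total_low / 1e9:.1f}B - ${total_high / 1e9:.1f}B")

power_mw = 150
avg_home_kw = 1.5                          # assumed average household draw
homes_powered = power_mw * 1_000 / avg_home_kw
print(f"{power_mw} MW ~ {homes_powered:,.0f} homes")
```

The spend works out to the $3B-$4B cited earlier, and 150 MW divided across ~1.5 kW households lands at roughly the 100K homes figure.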

What to expect: Better models, more frequently. That’s been the trend, at least—look at how the last few model releases have become more squished together. 


OpenAI to make GPT-4o Advanced Voice available by the end of the month to select group of users — from tomsguide.com by Ryan Morrison

GPT-4o Advanced Voice is an entirely new type of voice assistant, similar to but larger than the recently unveiled French model Moshi, which argued with me over a story.

In demos of the model, we’ve seen GPT-4o Advanced Voice create custom character voices, generate sound effects while telling a story and even act as a live translator.

This native speech ability is a significant step in creating more natural AI assistants. In the future, it will also come with live vision abilities, allowing the AI to see what you see.


Could AGI break the world? — from theneurondaily.com by Noah Edelman

“Biggest IT outage in history” proves we’re not ready for AGI.

Here’s the TL;DR—a faulty software update from cybersecurity firm CrowdStrike:

  • Grounded 5,000+ flights around the world.
  • Slowed healthcare across the UK.
  • Forced retailers to revert to cash-only transactions in Australia (what is this, the stone ages?!).


Here’s where AI comes in: Imagine today’s AI as a new operating system. In 5-10 years, it’ll likely be as integrated into our economy as Microsoft’s cloud servers are now. This isn’t that far-fetched—Microsoft is already planning to embed AI into all its programs.

So what if a CrowdStrike-like incident happens with a more powerful AI system? Some experts predict an AI-powered IT outage could be 10x worse than Friday’s fiasco.


The CrowdStrike outage and global software’s single-point failure problem — from cnbc.com by Kaya Ginsky

KEY POINTS

  • The CrowdStrike software bug that took down global IT infrastructure exposed a single-point-of-failure risk unrelated to malicious cyberattack.
  • National and cybersecurity experts say the risk of this kind of technical outage is increasing alongside the risk of hacks, and the market will need to adopt better competitive practices.
  • Government is also likely to look at new regulations related to software updates and patches.

The “largest IT outage in history,” briefly explained — from vox.com by Li Zhou
Airlines, banks, and hospitals saw computer systems go down because of a CrowdStrike software glitch.

 

What aspects of teaching should remain human? — from hechingerreport.org by Chris Berdik
Even techno optimists hesitate to say teaching is best left to the bots, but there’s a debate about where to draw the line

ATLANTA — Science teacher Daniel Thompson circulated among his sixth graders at Ron Clark Academy on a recent spring morning, spot checking their work and leading them into discussions about the day’s lessons on weather and water. He had a helper: As Thompson paced around the class, peppering them with questions, he frequently turned to a voice-activated AI to summon apps and educational videos onto large-screen smartboards.

When a student asked, “Are there any animals that don’t need water?” Thompson put the question to the AI. Within seconds, an illustrated blurb about kangaroo rats appeared before the class.

Nitta said there’s something “deeply profound” about human communication that allows flesh-and-blood teachers to quickly spot and address things like confusion and flagging interest in real time.


Deep Learning: Five New Superpowers of Higher Education — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
How Deep Learning is Transforming Higher Education

While the traditional model of education is entrenched, emerging technologies like deep learning promise to shake its foundations and usher in an age of personalized, adaptive, and egalitarian education. It is expected to have a significant impact across higher education in several key ways.

…deep learning introduces adaptivity into the learning process. Unlike a typical lecture, deep learning systems can observe student performance in real-time. Confusion over a concept triggers instant changes to instructional tactics. Misconceptions are identified early and remediated quickly. Students stay in their zone of proximal development, constantly challenged but never overwhelmed. This adaptivity prevents frustration and stagnation.
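The adaptivity loop described above — observe performance, adjust instruction, keep the learner challenged but not overwhelmed — can be sketched in a few lines. This is an illustrative toy, not any vendor's algorithm; the function name, thresholds, and difficulty scale are all hypothetical.

```python
# Toy sketch of the adaptivity loop described above: observe recent answers
# and nudge difficulty to keep the learner in their zone of proximal
# development. All names and thresholds here are hypothetical.

def next_difficulty(current, recent_results, window=5):
    """Raise difficulty after a streak of successes, lower it after
    repeated misses, otherwise hold steady.

    recent_results is a list of 1 (correct) / 0 (incorrect) answers;
    difficulty is an integer level between 1 and 10.
    """
    recent = recent_results[-window:]
    if len(recent) < window:
        return current            # not enough evidence yet
    success_rate = sum(recent) / len(recent)
    if success_rate > 0.8:        # consistently correct: challenge more
        return min(current + 1, 10)
    if success_rate < 0.4:        # struggling: remediate
        return max(current - 1, 1)
    return current                # in the productive zone

# Example: five correct answers in a row push difficulty up one level.
level = next_difficulty(3, [1, 1, 1, 1, 1])
```

A real system would of course model far more than right/wrong answers (response time, misconception patterns, confidence), but the feedback loop is the same shape.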


InstructureCon 24 Conference Notes — from onedtech.philhillaa.com by Glenda Morgan
Another solid conference from the market leader, even with unclear roadmap

The new stuff: AI
Instructure rolled out multiple updates and improvements – more than last year. These included many AI-based or focused tools and services as well as some functional improvements. I’ll describe the AI features first.

Sal Khan was a surprise visitor to the keynote stage to announce the September availability of the full suite of AI-enabled Khanmigo Teacher Tools for Canvas users. The suite includes 20 tools, such as tools to generate lesson plans and quiz questions and write letters of recommendation. Next year, they plan to roll out tools for students themselves to use.

Other AI-based features include:

    • Discussion tool summaries and AI-generated responses…
    • Translation of inbox messages and discussions…
    • Smart search …
    • Intelligent Insights…

 

 

Introducing Eureka Labs — “We are building a new kind of school that is AI native.” — by Andrej Karpathy, Previously Director of AI @ Tesla, founding team @ OpenAI

However, with recent progress in generative AI, this learning experience feels tractable. The teacher still designs the course materials, but they are supported, leveraged and scaled with an AI Teaching Assistant who is optimized to help guide the students through them. This Teacher + AI symbiosis could run an entire curriculum of courses on a common platform. If we are successful, it will be easy for anyone to learn anything, expanding education in both reach (a large number of people learning something) and extent (any one person learning a large amount of subjects, beyond what may be possible today unassisted).


After Tesla and OpenAI, Andrej Karpathy’s startup aims to apply AI assistants to education — from techcrunch.com by Rebecca Bellan

Andrej Karpathy, former head of AI at Tesla and researcher at OpenAI, is launching Eureka Labs, an “AI native” education platform. In tech speak, that usually means built from the ground up with AI at its core. And while Eureka Labs’ AI ambitions are lofty, the company is starting with a more traditional approach to teaching.

San Francisco-based Eureka Labs, which Karpathy registered as an LLC in Delaware on June 21, aims to leverage recent progress in generative AI to create AI teaching assistants that can guide students through course materials.


What does it mean for students to be AI-ready? — from timeshighereducation.com by David Joyner
Not everyone wants to be a computer scientist, a software engineer or a machine learning developer. We owe it to our students to prepare them with a full range of AI skills for the world they will graduate into, writes David Joyner

We owe it to our students to prepare them for this full range of AI skills, not merely the end points. The best way to fulfil this responsibility is to acknowledge and examine this new category of tools. More and more tools that students use daily – word processors, email, presentation software, development environments and more – have AI-based features. Practising with these tools is a valuable exercise for students, so we should not prohibit that behaviour. But at the same time, we do not have to just shrug our shoulders and accept however much AI assistance students feel like using.


Teachers say AI usage has surged since the school year started — from eschoolnews.com by Laura Ascione
Half of teachers report an increase in the use of AI and continue to seek professional learning

Fifty percent of educators reported an increase in AI usage, by both students and teachers, over the 2023–24 school year, according to The 2024 Educator AI Report: Perceptions, Practices, and Potential, from Imagine Learning, a digital curriculum solutions provider.

The report offers insight into how teachers’ perceptions of AI use in the classroom have evolved since the start of the 2023–24 school year.


OPINION: What teachers call AI cheating, leaders in the workforce might call progress — from hechingerreport.org by C. Edward Waston and Jose Antonio Bowen
Authors of a new guide explore what AI literacy might look like in a new era

Excerpt (emphasis DSC):

But this very ease has teachers wondering how we can keep our students motivated to do the hard work when there are so many new shortcuts. Learning goals, curriculums, courses and the way we grade assignments will all need to be reevaluated.

The new realities of work also must be considered. A shift in employers’ job postings rewards those with AI skills. Many companies report already adopting generative AI tools or anticipate incorporating them into their workflow in the near future.

A core tension has emerged: Many teachers want to keep AI out of our classrooms, but also know that future workplaces may demand AI literacy.

What we call cheating, business could see as efficiency and progress.

It is increasingly likely that using AI will emerge as an essential skill for students, regardless of their career ambitions, and that action is required of educational institutions as a result.


Teaching Writing With AI Without Replacing Thinking: 4 Tips — by Erik Ofgang
AI has a lot of potential for writing students, but we can’t let it replace the thinking parts of writing, says writing professor Steve Graham

Reconciling these two goals — having AI help students learn to write more efficiently without hijacking the cognitive benefits of writing — should be a key goal of educators. Finding the ideal balance will require more work from both researchers and classroom educators, but Graham shares some initial tips for doing this currently.




Why I ban AI use for writing assignments — from timeshighereducation.com by James Stacey Taylor
Students may see handwriting essays in class as a needlessly time-consuming approach to assignments, but I want them to learn how to engage with arguments, develop their own views and convey them effectively, writes James Stacey Taylor

Could they use AI to generate objections to the arguments they read? Of course. AI does a good job of summarising objections to Singer’s view. But I don’t want students to parrot others’ objections. I want them to think of objections themselves. 

Could AI be useful for them in organising their exegesis of others’ views and their criticisms of them? Yes. But, again, part of what I want my students to learn is precisely what this outsources to the AI: how to organise their thoughts and communicate them effectively. 


How AI Will Change Education — from digitalnative.tech by Rex Woodbury
Predicting Innovation in Education, from Personalized Learning to the Downfall of College 

This week explores how AI will bleed into education, looking at three segments of education worth watching, then examining which business models will prevail.

  1. Personalized Learning and Tutoring
  2. Teacher Tools
  3. Alternatives to College
  4. Final Thoughts: Business Models and Why Education Matters

New Guidance from TeachAI and CSTA Emphasizes Computer Science Education More Important than Ever in an Age of AI — from csteachers.org by CSTA
The guidance features new survey data and insights from teachers and experts in computer science (CS) and AI, informing the future of CS education.

SEATTLE, WA – July 16, 2024 – Today, TeachAI, led by Code.org, ETS, the International Society for Technology in Education (ISTE), Khan Academy, and the World Economic Forum, launches a new initiative in partnership with the Computer Science Teachers Association (CSTA) to support and empower educators as they grapple with the growing opportunities and risks of AI in computer science (CS) education.

The briefs draw on early research and insights from CSTA members, organizations in the TeachAI advisory committee, and expert focus groups to address common misconceptions about AI and offer a balanced perspective on critical issues in CS education, including:

  • Why is it Still Important for Students to Learn to Program?
  • How Are Computer Science Educators Teaching With and About AI?
  • How Can Students Become Critical Consumers and Responsible Creators of AI?
 

Can Schools and Vendors Work Together Constructively on AI? A New Guide May Help — from edweek.org by Alyson Klein
The Education Department outlines key steps on AI development for schools

Educators need to work with vendors and tech developers to ensure artificial intelligence-driven innovations for schools go hand-in-hand with managing the technology’s risks, recommends guidance released July 8 by the U.S. Department of Education.

The guidance—called “Designing for Education with Artificial Intelligence: An Essential Guide for Developers”—includes extensive recommendations for both vendors and school district officials.


Also, on somewhat related notes see the following items:


 

Daniel Christian: My slides for the Educational Technology Organization of Michigan’s Spring 2024 Retreat

From DSC:
Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.

Topics/agenda:

  • Topics & resources re: Artificial Intelligence (AI)
    • Top multimodal players
    • Resources for learning about AI
    • Applications of AI
    • My predictions re: AI
  • The powerful impact of pursuing a vision
  • A potential, future next-gen learning platform
  • Share some lessons from my past with pertinent questions for you all now
  • The significant impact of an organization’s culture
  • Bonus material: Some people to follow re: learning science and edtech

 

Educational Technology Organization of Michigan -- ETOM -- Spring 2024 Retreat on June 6-7

PowerPoint slides of Daniel Christian's presentation at ETOM

Slides of the presentation (.PPTX)
Slides of the presentation (.PDF)

 


Plus several more slides re: this vision.

 

AI Policy 101: a Beginners’ Framework — from drphilippahardman.substack.com by Dr. Philippa Hardman
How to make a case for AI experimentation & testing in learning & development


6 AI Tools Recommended By Teachers That Aren’t ChatGPT — from forbes.com by Dan Fitzpatrick

Here are six AI tools making waves in classrooms worldwide:

  • Brisk Teaching
  • SchoolAI
  • Diffit
  • Curipod
  • Skybox by Blockade Labs in ThingLink
  • Ideogram

With insights from educators who are leveraging their potential, let’s explore them in more detail.


AI Is Speeding Up L&D But Are We Losing the Learning? — from learningguild.com by Danielle Wallace

The role of learning & development
Given these risks, what can L&D professionals do to ensure generative AI contributes to effective learning? The solution lies in embracing the role of trusted learning advisors, guiding the use of AI tools in a way that prioritizes achieving learning outcomes over only speed. Here are three key steps to achieve this:

1. Playtest and Learn About AI
2. Set the Direction for AI to Be Learner-Centered…
3. Become Trusted Learning Advisors…


Some other tools to explore:

Descript: If you can edit text, you can edit videos. — per Bloomberg’s Vlad Savov
Descript is the AI-powered, fully featured, end-to-end video editor that you already know how to use.

A video editor that works like docs and slides
No need to learn a new tool — Descript works like the tools you’ve already learned.

Audeze | Filter — per Bloomberg’s Vlad Savov


AI Chatbots in Schools: Findings from a Poll of K-12 Teachers, Students, Parents, and College Undergraduates — from Impact Research; via Michael Spencer and Lily Lee

Key Findings

  • In the last year, AI has become even more intertwined with our education system. More teachers, parents, and students are aware of it and have used it themselves on a regular basis. It is all over our education system today.
  • While negative views of AI have crept up over the last year, students, teachers, and parents feel very positive about it in general. On balance they see positive uses for the technology in school, especially if they have used it themselves.
  • Most K-12 teachers, parents, and students don’t think their school is doing much about AI, despite its widespread use. Most say their school has no policy on it, is doing nothing to offer desired teacher training, and isn’t meeting the demand of students who’d like a career in a job that will need AI.
  • The AI vacuum in school policy means it is currently used “unauthorized,” while instead people want policies that encourage AI. Kids, parents, and teachers are figuring it out on their own/without express permission, whereas all stakeholders would rather have a policy that explicitly encourages AI from a thoughtful foundation.

The Value of AI in Today’s Classrooms — from waltonfamilyfoundation.org

There is much discourse about the rise and prevalence of AI in education and beyond. These debates often lack the perspectives of key stakeholders – parents, students and teachers.

In 2023, the Walton Family Foundation commissioned the first national survey of teacher and student attitudes toward ChatGPT. The findings showed that educators and students embrace innovation and are optimistic that AI can meaningfully support traditional instruction.

A new survey conducted May 7-15, 2024, showed that knowledge of and support for AI in education is growing among parents, students and teachers. More than 80% of each group says it has had a positive impact on education.

 

 

Microsoft teams with Khan Academy to make its AI tutor free for K-12 educators and will develop a Phi-3 math model — from venturebeat.com by Ken Yeung

Microsoft is partnering with Khan Academy in a multifaceted deal to demonstrate how AI can transform the way we learn. The cornerstone of today’s announcement centers on Khan Academy’s Khanmigo AI agent. Microsoft says it will migrate the bot to its Azure OpenAI Service, enabling the nonprofit educational organization to provide all U.S. K-12 educators free access to Khanmigo.

In addition, Microsoft plans to use its Phi-3 model to help Khan Academy improve math tutoring and collaborate to generate more high-quality learning content while making more courses available within Microsoft Copilot and Microsoft Teams for Education.


One-Third of Teachers Have Already Tried AI, Survey Finds — from the74million.org by Kevin Mahnken
A RAND poll released last month finds English and social studies teachers embracing tools like ChatGPT.

One in three American teachers have used artificial intelligence tools in their teaching at least once, with English and social studies teachers leading the way, according to a RAND Corporation survey released last month. While the new technology isn’t yet transforming how kids learn, both teachers and district leaders expect that it will become an increasingly common feature of school life.


Professors Try ‘Restrained AI’ Approach to Help Teach Writing — from edsurge.com by Jeffrey R. Young
Can ChatGPT make human writing more efficient, or is writing an inherently time-consuming process best handled without AI tools?

This article is part of the guide: For Education, ChatGPT Holds Promise — and Creates Problems.

When ChatGPT emerged a year and half ago, many professors immediately worried that their students would use it as a substitute for doing their own written assignments — that they’d click a button on a chatbot instead of doing the thinking involved in responding to an essay prompt themselves.

But two English professors at Carnegie Mellon University had a different first reaction: They saw in this new technology a way to show students how to improve their writing skills.

“They start really polishing way too early,” Kaufer says. “And so what we’re trying to do is with AI, now you have a tool to rapidly prototype your language when you are prototyping the quality of your thinking.”

He says the concept is based on writing research from the 1980s that shows that experienced writers spend about 80 percent of their early writing time thinking about whole-text plans and organization and not about sentences.


On Building AI Models for Education — from aieducation.substack.com by Claire Zau
Google’s LearnLM, Khan Academy/MSFT’s Phi-3 Models, and OpenAI’s ChatGPT Edu

This piece primarily breaks down how Google’s LearnLM was built, and takes a quick look at Microsoft/Khan Academy’s Phi-3 and OpenAI’s ChatGPT Edu as alternative approaches to building an “education model” (not necessarily a new model in the latter case, but we’ll explain). Thanks to the public release of their 86-page research paper, we have the most comprehensive view into LearnLM. Our understanding of Microsoft/Khan Academy small language models and ChatGPT Edu is limited to the information provided through announcements, leaving us with less “under the hood” visibility into their development.


AI tutors are quietly changing how kids in the US study, and the leading apps are from China — from techcrunch.com by Rita Liao

Answer AI is among a handful of popular apps that are leveraging the advent of ChatGPT and other large language models to help students with everything from writing history papers to solving physics problems. Of the top 20 education apps in the U.S. App Store, five are AI agents that help students with their school assignments, including Answer AI, according to data from Data.ai on May 21.


Is your school behind on AI? If so, there are practical steps you can take for the next 12 months — from stefanbauschard.substack.com by Stefan Bauschard

If your school (district) or university has not yet made significant efforts to think about how you will prepare your students for a World of AI, I suggest the following steps:

July 24 – Administrator PD & AI Guidance
In July, administrators should receive professional development on AI, if they haven’t already. This should include…

August 24 – Professional Development for Teachers and Staff…
Fall 24 — Parents; Co-curricular; Classroom experiments…
December 24 — Revision to Policy…


New ChatGPT Version Aiming at Higher Ed — from insidehighered.com by Lauren Coffey
ChatGPT Edu, emerging after initial partnerships with several universities, is prompting both cautious optimism and worries.

OpenAI unveiled a new version of ChatGPT focused on universities on Thursday, building on work with a handful of higher education institutions that partnered with the tech giant.

The ChatGPT Edu product, expected to start rolling out this summer, is a platform for institutions intended to give students free access. OpenAI said the artificial intelligence (AI) toolset could be used for an array of education applications, including tutoring, writing grant applications and reviewing résumés.

 

Via The Rundown AI

The Rundown: Nvidia CEO Jensen Huang just made a series of AI announcements during a keynote at the Computex conference, including next-gen ‘Rubin’ chips, a new AI gaming assistant, and AI tools for creating lifelike avatars.

The details:

  • Nvidia’s ‘Rubin’ platform is slated for 2026, with the ‘Rubin Ultra’ coming a year later as part of what Huang called a “new industrial revolution”.
  • Nvidia also showed off Project G-Assist, an AI gaming assistant that provides context-aware help and personalized responses for PC games.
  • The company also introduced ACE, a suite of AI services that simplify the creation of digital avatars for applications like customer service and healthcare.

More re: Nvidia:

  • Nvidia and AMD announced new next-generation AI chips — from qz.com by Britney Nguyen
    Nvidia CEO Jensen Huang announced the company’s next AI platform, called Rubin
  • ‘Accelerate Everything,’ NVIDIA CEO Says Ahead of COMPUTEX — from blogs.nvidia.com by Brian Caulfield
    Emphasizing cost reduction and sustainability, Huang detailed new semiconductors, software and systems to power data centers, factories, consumer devices, robots and more, driving a new industrial revolution.
  • Nvidia Unveils Next-Generation Rubin AI Platform for 2026 — from bloomberg.com by Ian King and Vlad Savov
    CEO Jensen Huang reveals plans for annual upgrade cycle | Company details plans for Blackwell Ultra and subsequent chips
    Nvidia Corp. Chief Executive Officer Jensen Huang said the company plans to upgrade its AI accelerators every year, announcing a Blackwell Ultra chip for 2025 and a next-generation platform in development called Rubin for 2026.
 

Introducing ChatGPT Edu — from openai.com
An affordable offering for universities to responsibly bring AI to campus.

We’re announcing ChatGPT Edu, a version of ChatGPT built for universities to responsibly deploy AI to students, faculty, researchers, and campus operations. Powered by GPT-4o, ChatGPT Edu can reason across text and vision and use advanced tools such as data analysis. This new offering includes enterprise-level security and controls and is affordable for educational institutions.

We built ChatGPT Edu because we saw the success universities like the University of Oxford, Wharton School of the University of Pennsylvania, University of Texas at Austin, Arizona State University, and Columbia University in the City of New York were having with ChatGPT Enterprise.

ChatGPT can help with various tasks across campus, such as providing personalized tutoring for students and reviewing their resumes, helping researchers write grant applications, and assisting faculty with grading and feedback. 


Claude can now use tools — from anthropic.com

Excerpt (emphasis DSC):

Tool use, which enables Claude to interact with external tools and APIs, is now generally available across the entire Claude 3 model family on the Anthropic Messages API, Amazon Bedrock, and Google Cloud’s Vertex AI. With tool use, Claude can perform tasks, manipulate data, and provide more dynamic—and accurate—responses.

Define a toolset for Claude and specify your request in natural language. Claude will then select the appropriate tool to fulfill the task and, when appropriate, execute the corresponding action:

  • Extract structured data from unstructured text…
  • Convert natural language requests into structured API calls…
  • Answer questions by searching databases or using web APIs…
  • Automate simple tasks through software APIs…
  • Orchestrate multiple fast Claude subagents for granular tasks…
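To make the flow above concrete, here is a minimal sketch of one tool-use round trip with the Anthropic Messages API: declare a toolset as JSON Schema, let Claude pick a tool, then execute the selected tool locally and return its result. The `get_weather` tool, its schema, and the dispatch logic are illustrative assumptions, not part of Anthropic's announcement; calling `ask_with_tools` requires the `anthropic` package and an API key.

```python
import json

# A tool is declared to Claude as a name, description, and JSON Schema
# for its arguments. This weather tool is a hypothetical example.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    },
}

def dispatch_tool(name, tool_input):
    """Execute the tool Claude selected and return a result string."""
    if name == "get_weather":
        # A real implementation would query a weather service here.
        return json.dumps({"city": tool_input["city"], "forecast": "sunny"})
    raise ValueError(f"Unknown tool: {name}")

def ask_with_tools(question):
    """One round trip of the tool-use loop (needs an Anthropic API key)."""
    import anthropic  # pip install anthropic
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        tools=[WEATHER_TOOL],
        messages=[{"role": "user", "content": question}],
    )
    # When Claude decides to call a tool, stop_reason is "tool_use" and the
    # response contains a tool_use block with the structured arguments.
    if response.stop_reason == "tool_use":
        block = next(b for b in response.content if b.type == "tool_use")
        return dispatch_tool(block.name, block.input)
    return response.content[0].text
```

In a full application the tool result would be sent back to Claude in a follow-up message so it can compose a final natural-language answer; this sketch stops at executing the selected tool.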

From DSC:
The above posting reminds me of this other posting…as AGENTS are likely going to become much more popular and part of our repertoire:

Forget Chatbots. AI Agents Are the Future — from wired.com by Will Knight
Startups and tech giants are trying to move from chatbots that offer help via text, to AI agents that can get stuff done. Recent demos include an AI coder called Devin and agents that play videogames.

Devin is just the latest, most polished example of a trend I’ve been tracking for a while—the emergence of AI agents that instead of just providing answers or advice about a problem presented by a human can take action to solve it. A few months back I test drove Auto-GPT, an open source program that attempts to do useful chores by taking actions on a person’s computer and on the web. Recently I tested another program called vimGPT to see how the visual skills of new AI models can help these agents browse the web more efficiently.

 


Looking Back on My AI Blog One Year In: AI Unfolding as Predicted — from stefanbauschard.substack.com by Stefan Bauschard

On May 30, 2023, I started blogging about AI, and, so far, I think things have been unfolding as predicted.

Topics included:

  • AGI
  • It’s not just another piece of Edtech
  • AI Literacy
  • Bot Teachers/tutors
  • AI Writing Detectors
  • AI Use in the Classroom is Uncontrollable
  • …and more

 

 
© 2024 | Daniel Christian