2024-11-22: The Race to the Top: Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. At this stage the model is capable enough that anyone attempting something dangerous would want to use it. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.
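
To make the if/then idea concrete, here is a minimal sketch of how capability-evaluation results might gate required safeguards. The trigger names, thresholds, and controls are hypothetical illustrations, not Anthropic's actual policy:

```python
# Hypothetical sketch of an if/then capability policy. The ASL levels
# mirror the framework described above; the trigger names and the
# required controls are invented for illustration.

ASL_CONTROLS = {
    2: ["standard deployment safeguards"],
    3: ["hardened model-weight security", "CBRN/cyber misuse filtering"],
    4: ["state-actor-grade security", "mechanistic-interpretability audits"],
}

def required_controls(evals: dict) -> tuple[int, list[str]]:
    """Map capability-evaluation results to an ASL level and its controls."""
    level = 2  # current systems default to ASL-2
    if evals.get("uplift_to_nonstate_actors"):
        level = 3  # meaningful CBRN/cyber assistance to non-state actors
    if evals.get("deceives_testers") or evals.get("uplift_to_state_actors"):
        level = 4  # evades detection or assists state actors
    return level, ASL_CONTROLS[level]

level, controls = required_controls({"uplift_to_nonstate_actors": True})
print(f"ASL-{level} controls required: {controls}")
```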



Should You Still Learn to Code in an A.I. World? — from nytimes.com
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where the Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use NotebookLM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I’m posting it because the idea of using AI to practice soft skills makes a great deal of sense.


 

How to use NotebookLM for personalized knowledge synthesis — from ai-supremacy.com by Michael Spencer and Alex McFarland
Two powerful workflows that unlock everything else. Intro: Golden Age of AI Tools and AI agent frameworks begins in 2025.

What is Google’s Learn About?
Google’s new AI tool, Learn About, is designed as a conversational learning companion that adapts to individual learning needs and curiosity. It allows users to explore various topics by entering questions, uploading images or documents, or selecting from curated topics. The tool aims to provide personalized responses tailored to the user’s knowledge level, making it user-friendly and engaging for learners of all ages.

Is Generative AI leading to a new take on educational technology? It certainly appears promising heading into 2025.

The Learn About tool utilizes the LearnLM AI model, which is grounded in educational research and focuses on how people learn. Google insists that unlike traditional chatbots, it emphasizes interactive and visual elements in its responses, enhancing the educational experience. For instance, when asked about complex topics like the size of the universe, Learn About not only provides factual information but also includes related content, vocabulary building tools, and contextual explanations to deepen understanding.

 

Five key issues to consider when adopting an AI-based legal tech — from legalfutures.co.uk by Mark Hughes

As more of our familiar legal resources have started to embrace a generative AI overhaul, and new players have come to the market, there are some key issues that your law firm needs to consider when adopting AI-based legal tech.

  • Licensing
  • Data protection
  • The data sets
  • …and others

Knowable Introduces Gen AI Tool It Says Will Revolutionize How Companies Interact with their Contracts — from lawnext.com by Bob Ambrogi

Knowable, a legal technology company specializing in helping organizations bring order and organization to their executed agreements, has announced Ask Knowable, a suite of generative AI-powered tools aimed at transforming how legal teams interact with and understand what is in their contracts.

Released today as a commercial preview and set to launch for general availability in March 2025, the feature marks a significant step forward in leveraging large language models to address the complexities of contract management, the company says.


The Global Legal Post teams up with LexisNexis to explore challenges and opportunities of Gen AI adoption — from globallegalpost.com
Series of articles will investigate key criteria to consider when investing in Gen AI

The Global Legal Post has teamed up with LexisNexis to help inform readers’ decision-making in the selection of generative AI (Gen AI) legal research solutions.

The Generative AI Legal Research Hub in association with LexisNexis will host a series of articles exploring the key criteria law firms and legal departments should consider when seeking to harness the power of Gen AI to improve the delivery of legal services.


Leveraging AI to Grow Your Legal Practice — from americanbar.org

Summary

  • AI-powered tools like chat and scheduling meet clients’ demand for instant, personalized service, improving engagement and satisfaction.
  • Firms using AI see up to a 30% increase in lead conversion, cutting client acquisition costs and maximizing marketing investments.
  • AI streamlines processes, speeds up response times, and enhances client engagement—driving growth and long-term client retention.

How a tech GC views AI-enabled efficiencies and regulation — from legaldive.com by Justin Bachman
PagerDuty’s top in-house counsel sees legal AI tools as a way to scale resources without adding headcount while focusing lawyers on their high-value work.


Innovations in Legal Practice: How Tim Billick’s Firm Stays Ahead with AI and Technology — from techtimes.com by Elena McCormick

Enhancing Client Service through Technology
Beyond internal efficiency, Billick’s firm utilizes technology to improve client communication and engagement. By adopting client-facing AI tools, such as chatbots for routine inquiries and client portals for real-time updates, Practus makes legal processes more transparent and accessible to its clients. According to Billick, this responsiveness is essential in IP law, where clients often need quick updates and answers to time-sensitive questions about patents, trademarks, and licensing agreements.

AI-driven client management software is also part of the firm’s toolkit, enabling Billick and his team to track each client’s case progress and share updates efficiently. The firm’s technology infrastructure supports clients from various sectors, including engineering, software development, and consumer products, tailoring case workflows to meet unique needs within each industry. “Clients appreciate having immediate access to their case status, especially in industries where timing is crucial,” Billick shares.


New Generative AI Study Highlights Adoption, Use and Opportunities in the Legal Industry — from prnewswire.com by Relativity

CHICAGO, Nov. 12, 2024 /PRNewswire/ — Relativity, a global legal technology company, today announced findings from the IDC InfoBrief, Generative AI in Legal 2024, commissioned by Relativity. The study uncovers the rapid increase of generative AI adoption in the legal field, examining how legal professionals are navigating emerging challenges and seizing opportunities to drive legal innovation.

The international study surveyed attorneys, paralegals, legal operations professionals and legal IT professionals from law firms, corporations and government agencies. Respondents were located in Australia, Canada, Ireland, New Zealand, the United Kingdom and the United States. The data uncovered important trends on how generative AI has impacted the legal industry and how legal professionals will use generative AI in the coming years.

 

A Code-Red Leadership Crisis: A Wake-Up Call for Talent Development — from learningguild.com by Dr. Arika Pierce Williams

This company’s experience offers three crucial lessons for other organizational leaders who may be contemplating cutting or reducing talent development investments in their 2025 budgets to focus on “growth.”

  1. Leadership development isn’t a luxury – it’s a strategic imperative…
  2. Succession planning must be an ongoing process, not a reactive measure…
  3. The cost of developing leaders is far less than the cost of not having them when you need them most…

Also from The Learning Guild, see:

5 Key EdTech Innovations to Watch — from learningguild.com by Paige Yousey

  1. AI-driven course design
  2. Hyper-personalized content curation
  3. Immersive scenario-based training
  4. Smart chatbots
  5. Wearable devices
 

Is Generative AI and ChatGPT healthy for Students? — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Beyond Text Generation: How AI Ignites Student Discovery and Deep Thinking, according to firsthand experiences of Teachers and AI researchers like Nick Potkalitsky.

After two years of intensive experimentation with AI in education, I am witnessing something amazing unfolding before my eyes. While much of the world fixates on AI’s generative capabilities—its ability to create essays, stories, and code—my students have discovered something far more powerful: exploratory AI, a dynamic partner in investigation and critique that’s transforming how they think.

They’ve moved beyond the initial fascination with AI-generated content to something far more sophisticated: using AI as an exploratory tool for investigation, interrogation, and intellectual discovery.

Instead of the much-feared “shutdown” of critical thinking, we’re witnessing something extraordinary: the emergence of what I call “generative thinking”—a dynamic process where students learn to expand, reshape, and evolve their ideas through meaningful exploration with AI tools. Here I consciously reposition the term “generative” as a process of human origination, although one ultimately spurred on by machine input.


A Road Map for Leveraging AI at a Smaller Institution — from er.educause.edu by Dave Weil and Jill Forrester
Smaller institutions and others may not have the staffing and resources needed to explore and take advantage of developments in artificial intelligence (AI) on their campuses. This article provides a roadmap to help institutions with more limited resources advance AI use on their campuses.

The following activities can help smaller institutions better understand AI and lay a solid foundation that will allow them to benefit from it.

  1. Understand the impact…
  2. Understand the different types of AI tools…
  3. Focus on institutional data and knowledge repositories…

Smaller institutions do not need to fear being left behind in the wake of rapid advancements in AI technologies and tools. By thinking intentionally about how AI will impact the institution, becoming familiar with the different types of AI tools, and establishing a strong data and analytics infrastructure, institutions can establish the groundwork for AI success. The five fundamental activities of coordinating, learning, planning and governing, implementing, and reviewing and refining can help smaller institutions make progress on their journey to use AI tools to gain efficiencies and improve students’ experiences and outcomes while keeping true to their institutional missions and values.



AI school opens – learners are not good or bad but fast and slow — from donaldclarkplanb.blogspot.com by Donald Clark

That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model. Assessing prior strengths and weaknesses, personalising to focus more on weaknesses and less on things known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at exactly the same pace, within the same timescale, is slightly ridiculous, ruled by the need to timetable a one-to-many classroom model.

For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.


Agentic AI Named Top Tech Trend for 2025 — from campustechnology.com by David Ramel

Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine “agents” that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.

More realistic challenges that the firm has listed elsewhere include:

    • Agentic AI proliferating without governance or tracking;
    • Agentic AI making decisions that are not trustworthy;
    • Agentic AI relying on low-quality data;
    • Employee resistance; and
    • Agentic-AI-driven cyberattacks enabling “smart malware.”





All or nothing at Educause24 — from onedtech.philhillaa.com by Kevin Kelly
Looking for specific solutions at the conference exhibit hall, with an educator focus

Here are some notable trends:

  • Alignment with campus policies: …
  • Choose your own AI adventure: …
  • Integrate AI throughout a workflow: …
  • Moving from prompt engineering to bot building: …
  • More complex problem-solving: …


Not all AI news is good news. In particular, AI has exacerbated the problem of fraudulent enrollment, i.e., rogue actors who use fake or stolen identities to steal financial aid funding, with no intention of completing coursework.

The consequences are very real, including financial aid funding going to criminal enterprises, enrollment estimates getting dramatically skewed, and legitimate students being blocked from registering for classes that appear “full” due to large numbers of fraudulent enrollments.


 

 

From DSC:
Great…we have another tool called Canvas. Or did you say Canva?

Introducing canvas — from OpenAI
A new way of working with ChatGPT to write and code

We’re introducing canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing you and ChatGPT to collaborate on a project. This early beta introduces a new way of working together—not just through conversation, but by creating and refining ideas side by side.

Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Starting today we’re rolling out canvas to ChatGPT Plus and Team users globally. Enterprise and Edu users will get access next week. We also plan to make canvas available to all ChatGPT Free users when it’s out of beta.


Using AI to buy your home? These companies think it’s time you should — from usatoday.com by Andrea Riquier

The way Americans buy homes is changing dramatically.

New industry rules about how home buyers’ real estate agents get paid are prompting a reckoning among housing experts and the tech sector. Many house hunters who are already stretched thin by record-high home prices and closing costs must now decide whether, and how much, to pay an agent.

A 2-3% commission on the median home price of $416,700 could be well over $10,000, and in a world where consumers are accustomed to using technology for everything from taxes to tickets, many entrepreneurs see an opportunity to automate away the middleman, even as some consumer advocates say not so fast.


The State of AI Report 2024 — from nathanbenaich.substack.com by Nathan Benaich


The Great Mismatch — from the-job.beehiiv.com by Paul Fain
Artificial intelligence could threaten millions of decent-paying jobs held by women without degrees.

Women in administrative and office roles may face the biggest AI automation risk, find Brookings researchers armed with data from OpenAI. Also, why Indiana could make the Swiss apprenticeship model work in this country, and how learners get disillusioned when a certificate doesn’t immediately lead to a good job.

A major new analysis from the Brookings Institution, using OpenAI data, found that the most vulnerable workers don’t look like the rail and dockworkers who have recaptured the national spotlight. Nor are they the creatives—like Hollywood’s writers and actors—that many wealthier knowledge workers identify with. Rather, they’re predominantly women in the 19M office support and administrative jobs that make up the first rung of the middle class.

“Unfortunately the technology and automation risks facing women have been overlooked for a long time,” says Molly Kinder, a fellow at Brookings Metro and lead author of the new report. “Most of the popular and political attention to issues of automation and work centers on men in blue-collar roles. There is far less awareness about the (greater) risks to women in lower-middle-class roles.”



Is this how AI will transform the world over the next decade? — from futureofbeinghuman.com by Andrew Maynard
Anthropic’s CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It’s audacious, compelling, and a must-read for anyone working at the intersection of AI and society.

But if Amodei’s essay is approached as a conversation starter rather than a manifesto — which I think it should be — it’s hard to see how it won’t lead to clearer thinking around how we successfully navigate the coming AI transition.

Given the scope of the paper, it’s hard to write a response that isn’t as long as or longer than the original. Because of this, I’d strongly encourage anyone who’s looking at how AI might transform society to read the original — it’s well written, and easier to navigate than its length might suggest.

That said, I did want to pull out a few things that struck me as particularly relevant and important — especially within the context of navigating advanced technology transitions.

And speaking of that essay, here’s a summary from The Rundown AI:

Anthropic CEO Dario Amodei just published a lengthy essay outlining an optimistic vision for how AI could transform society within 5-10 years of achieving human-level capabilities, touching on longevity, politics, work, the economy, and more.

The details:

  • Amodei believes that by 2026, ‘powerful AI’ smarter than a Nobel Prize winner across fields, with agentic and multimodal capabilities, will be possible.
  • He also predicted that AI could compress 100 years of scientific progress into 10 years, curing most diseases and doubling the human lifespan.
  • The essay argued AI could strengthen democracy by countering misinformation and providing tools to undermine authoritarian regimes.
  • The CEO acknowledged potential downsides, including job displacement — but believes new economic models will emerge to address this.
  • He envisions AI driving unprecedented economic growth but emphasizes ensuring AI’s benefits are broadly distributed.

Why it matters: 

  • As the CEO of what is seen as the ‘safety-focused’ AI lab, Amodei paints a utopia-level optimistic view of where AI will head over the next decade. This thought-provoking essay serves as both a roadmap for AI’s potential and a call to action to ensure the responsible development of technology.

AI in the Workplace: Answering 3 Big Questions — from gallup.com by Kate Den Houter

However, most workers remain unaware of these efforts. Only a third (33%) of all U.S. employees say their organization has begun integrating AI into their business practices, with the highest percentage in white-collar industries (44%).

White-collar workers are more likely to be using AI. White-collar workers are, by far, the most frequent users of AI in their roles. While 81% of employees in production/frontline industries say they never use AI, only 54% of white-collar workers say they never do and 15% report using AI weekly.

Most employees using AI use it for idea generation and task automation. Among employees who say they use AI, the most common uses are to generate ideas (41%), to consolidate information or data (39%), and to automate basic tasks (39%).


Nvidia Blackwell GPUs sold out for the next 12 months as AI market boom continues — from techspot.com by Skye Jacobs
Analysts expect Team Green to increase its already formidable market share

Selling like hotcakes: The extraordinary demand for Blackwell GPUs illustrates the need for robust, energy-efficient processors as companies race to implement more sophisticated AI models and applications. The coming months will be critical to Nvidia as the company works to ramp up production and meet the overwhelming requests for its latest product.


Here’s my AI toolkit — from wondertools.substack.com by Jeremy Caplan and Nikita Roy
How and why I use the AI tools I do — an audio conversation

1. What are two useful new ways to use AI?

  • AI-powered research: Type a detailed search query into Perplexity instead of Google to get a quick, actionable summary response with links to relevant information sources. Read more of my take on why Perplexity is so useful and how to use it.
  • Notes organization and analysis: Tools like NotebookLM, Claude Projects, and Mem can help you make sense of huge repositories of notes and documents. Query or summarize your own notes and surface novel connections between your ideas.
 

Voice and Trust in Autonomous Learning Experiences — from learningguild.com by Bill Brandon

This article seeks to apply some lessons from brand management to learning design at a high level. Throughout the rest of this article, it is essential to remember that the context is an autonomous, interactive learning experience. The experience is created adaptively by Gen AI or (soon enough) by agents, not by rigid scripts. It may be that an AI will choose to present prewritten texts or prerecorded videos from a content library according to the human users’ responses or questions. Still, the overall experience will be different for each user. It will be more like a conversation than a book.

In summary, while AI chatbots have the potential to enhance learning experiences, their acceptance and effectiveness depend on several factors, including perceived usefulness, ease of use, trust, relational factors, perceived risk, and enjoyment. 

Personalization and building trust are essential for maintaining user engagement and achieving positive learning outcomes. The right “voice” for autonomous AI or a chatbot can enhance trust by making interactions more personal, consistent, and empathetic.

 

Legal budgets will get an AI-inspired makeover in 2025: survey — from legaldive.com by Justin Bachman
Nearly every general counsel is budgeting to add generative AI tools to their departments – and they’re all expecting to realize efficiencies by doing so.

Dive Brief:

  • Nearly all general counsel say their budgets are up slightly after wrestling with widespread cuts last year. And most of them, 61%, say they expect slightly larger budgets next year as well, an average of 5% more, according to the 2025 In-House Legal Budgeting Report from Axiom and Wakefield Research. Technology was ranked as the top in-house investment priority for both 2024 and 2025 for larger companies.
  • Legal managers predict their companies will boost investment on technology and real estate/facilities in 2025, while reducing outlays for human resources and mergers and acquisition activity, according to the survey. This mix of changing priorities might disrupt legal budgets.
  • Among the planned legal tech spending, the top three areas for investment are virtual legal assistants/AI-powered chatbots (35%); e-billing and spend-management software (31%); and contract management platforms (30%).
 

When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia
As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.
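
That degradation is easy to see in a toy simulation (a sketch of the general phenomenon, not the setup used in the cited research): fit a simple model to data, sample from it, re-fit to the samples, and repeat.

```python
# Toy illustration of recursive-training degradation ("model collapse"),
# not the cited research's actual setup: a Gaussian is repeatedly re-fit
# to samples drawn from the previous fit. Finite-sample error compounds,
# and the fitted distribution drifts while its spread tends to shrink.
import random
import statistics

mu, sigma = 0.0, 1.0  # the original "real" data distribution
for generation in range(31):
    samples = [random.gauss(mu, sigma) for _ in range(50)]  # model output
    mu = statistics.mean(samples)    # next model is fit to that output
    sigma = statistics.stdev(samples)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```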


Per The Rundown AI:

The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.

Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.


Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius
Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.

Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.


The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall]
The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.


[Claude] Artifacts are now generally available — from anthropic.com

[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.

Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.


MIT's AI Risk Repository — a comprehensive database of risks from AI systems

What are the risks from Artificial Intelligence?
A comprehensive living database of over 700 AI risks categorized by their cause and risk domain.

What is the AI Risk Repository?
The AI Risk Repository has three parts (a rough sketch of a single database record follows the list):

  • The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  • The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
  • The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
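
Here is that sketch of how one record might combine the two taxonomies. The field names are illustrative guesses, not the repository's actual schema:

```python
# Hypothetical sketch of a single AI Risk Database record combining the
# Causal and Domain taxonomies described above. The field names are
# illustrative guesses, not the repository's actual schema.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    quote: str             # risk text extracted from a source framework
    source_framework: str  # which of the 43 frameworks it came from
    page: int              # page number in that source
    entity: str            # Causal Taxonomy: who/what causes the risk
    intent: str            # Causal Taxonomy: intentional or not
    timing: str            # Causal Taxonomy: pre- or post-deployment
    domain: str            # Domain Taxonomy: one of 7 domains
    subdomain: str         # Domain Taxonomy: one of 23 subdomains

entry = RiskEntry(
    quote="Models may generate convincing but false content.",
    source_framework="Example Framework (2023)", page=12,
    entity="AI", intent="Unintentional", timing="Post-deployment",
    domain="Misinformation", subdomain="False or misleading information",
)
print(f"{entry.domain} / {entry.subdomain}")
```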

California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI — from newsday.com by The Associated Press

SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

Per Oncely:

The Details:

  • Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
  • Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
  • Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.

New in Gemini: Custom Gems and improved image generation with Imagen 3 — from blog.google
The ability to create custom Gems is coming to Gemini Advanced subscribers, and updated image generation capabilities with our latest Imagen 3 model are coming to everyone.

We have new features rolling out, [that started on 8/28/24], that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.


Cut the Chatter, Here Comes Agentic AI — from trendmicro.com

Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”, it’s agentic AI.

Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.

In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.

Citing AWS guidance, ZDNET counts six different potential types of AI agents (a minimal sketch of the first two types follows the list):

    • Simple reflex agents for tasks like resetting passwords
    • Model-based reflex agents for pro vs. con decision making
    • Goal-/rule-based agents that compare options and select the most efficient pathways
    • Utility-based agents that compare for value
    • Learning agents
    • Hierarchical agents that manage and assign subtasks to other agents
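
And the promised sketch contrasting the first two agent types; the rules and the password-reset task are invented for illustration:

```python
# Minimal sketch contrasting the first two agent types in the list above.
# The rules and the password-reset task are invented for illustration.

# 1) Simple reflex agent: fixed condition -> action rules, no memory.
def simple_reflex_agent(percept: str) -> str:
    rules = {
        "forgot_password": "send_reset_link",
        "account_locked": "unlock_after_verification",
    }
    return rules.get(percept, "escalate_to_human")

# 2) Model-based reflex agent: keeps internal state and weighs it
#    against the current percept before acting.
class ModelBasedAgent:
    def __init__(self) -> None:
        self.history: list[str] = []  # internal model of past percepts

    def act(self, percept: str) -> str:
        self.history.append(percept)
        # A repeated request is evidence the reflex action isn't working.
        if self.history.count(percept) > 2:
            return "escalate_to_human"
        return simple_reflex_agent(percept)

agent = ModelBasedAgent()
for p in ["forgot_password"] * 4:
    print(agent.act(p))  # reflex action twice, then escalation
```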

Ask Claude: Amazon turns to Anthropic’s AI for Alexa revamp — from reuters.com by Greg Bensinger

Summary:

  • Amazon developing new version of Alexa with generative AI
  • Retailer hopes to generate revenue by charging for its use
  • Concerns about in-house AI prompt Amazon to turn to Anthropic’s Claude, sources say
  • Amazon says it uses many different technologies to power Alexa

Alibaba releases new AI model Qwen2-VL that can analyze videos more than 20 minutes long — from venturebeat.com by Carl Franzen


Hobbyists discover how to insert custom fonts into AI-generated images — from arstechnica.com by Benj Edwards
Like adding custom art styles or characters, in-world typefaces come to Flux.


200 million people use ChatGPT every week – up from 100 million last fall, says OpenAI — from zdnet.com by Sabrina Ortiz
Nearly two years after launching, ChatGPT continues to draw new users. Here’s why.

 

For college students—and for higher ed itself—AI is a required course — from forbes.com by Jamie Merisotis

Some of the nation’s biggest tech companies have announced efforts to reskill people to avoid job losses caused by artificial intelligence, even as they work to perfect the technology that could eliminate millions of those jobs.

It’s fair to ask, however: What should college students and prospective students, weighing their choices and possible time and financial expenses, think of this?

The news this spring was encouraging for people seeking to reinvent their careers to grab middle-class jobs and a shot at economic security.

 


Addressing Special Education Needs With Custom AI Solutions — from teachthought.com
AI can offer many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.

For too long, students with learning disabilities have struggled to navigate a traditional education system that often fails to meet their unique needs. But what if technology could help bridge the gap, offering personalized support and unlocking the full potential of every learner?

Artificial intelligence (AI) is emerging as a powerful ally in special education, offering many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.



11 Summer AI Developments Important to Educators — from stefanbauschard.substack.com by Stefan Bauschard
Equity demands that we help students prepare to thrive in an AI-World

*SearchGPT
*Smaller & on-device (phones, glasses) AI models
*AI TAs
*Access barriers decline, equity barriers grow
*Claude Artifacts and Projects
*Agents, and Agent Teams of a million+
*Humanoid robots & self-driving cars
*AI Curricular integration
*Huge video and video-segmentation gains
*Writing Detectors — The final blow
*AI Unemployment, Student AI anxiety, and forward-thinking approaches
*Alternative assessments


Academic Fracking: When Publishers Sell Scholars Work to AI — from aiedusimplified.substack.com by Lance Eaton
Further discussion of publisher practices selling scholars’ work to AI companies

Last week, I explored AI and academic publishing in response to an article that came out a few weeks ago about a deal Taylor & Francis made to sell their books to Microsoft and one other AI company (unnamed) for a boatload of money.

Since then, two more pieces have been widely shared, including this piece from Inside Higher Ed by Kathryn Palmer (for which I was interviewed and in which I’m mentioned) and this piece from the Chronicle of Higher Ed by Christa Dutton. Both pieces try to cover the different sides: talking to authors, scanning the commentary online, finding some experts to consult, and talking to the publishers. It’s one of those topics that can feel really important and yet probably matters only to the very small number of folks who find themselves thinking about academic publishing, scholarly communication, and generative AI.


At the Crossroads of Innovation: Embracing AI to Foster Deep Learning in the College Classroom — from er.educause.edu by Dan Sarofian-Butin
AI is here to stay. How can we, as educators, accept this change and use it to help our students learn?

The Way Forward
So now what?

In one respect, we already have a partial answer. Over the last thirty years, there has been a dramatic shift from a teaching-centered to a learning-centered education model. High-impact practices, such as service learning, undergraduate research, and living-learning communities, are common and embraced because they help students see the real-world connections of what they are learning and make learning personal.

Therefore, I believe we must double down on a learning-centered model in the age of AI.

The first step is to fully and enthusiastically embrace AI.

The second step is to find the “jagged technological frontier” of using AI in the college classroom.




Futures Thinking in Education — from gettingsmart.com by Getting Smart Staff

Key Points

  • Educators should leverage these tools to prepare for rapid changes driven by technology, climate, and social dynamics.
  • Cultivating empathy for future generations can help educators design more impactful and forward-thinking educational practices.
 

Per the Rundown AI:

Why it matters: AI is slowly shifting from a tool we text/prompt with, to an intelligence that we collaborate, learn, and grow with. Advanced Voice Mode’s ability to understand and respond to emotions in real-time convos could also have huge use cases in everything from customer service to mental health support.



Creators to Have Personalized AI Assistants, Meta CEO Mark Zuckerberg Tells NVIDIA CEO Jensen Huang — from blogs.nvidia.com by Brian Caulfield
Zuckerberg and Huang explore the transformative potential of open source AI, the launch of AI Studio, and exchange leather jackets at SIGGRAPH 2024.

“Every single restaurant, every single website will probably, in the future, have these AIs …” Huang said.

“…just like every business has an email address and a website and a social media account, I think, in the future, every business is going to have an AI,” Zuckerberg responded.

More broadly, the advancement of AI across a broad ecosystem promises to supercharge human productivity, for example, by giving every human on earth a digital assistant — or assistants — that they can interact with quickly and fluidly, allowing people to live richer lives.



From DSC:
Today was a MUCH better day for Nvidia, however (up 12.81%). But it’s been very volatile in the last several weeks — as people and institutions ask where the ROIs are going to come from.






9 compelling reasons to learn how to use AI Chatbots — from interestingengineering.com by Atharva Gosavi
AI Chatbots are conversational agents that can act on your behalf and converse with humans – a futuristic novelty that is already getting people excited about its usage in improving efficiency.

7. Accessibility and inclusivity
Chatbots can be designed to support multiple languages and accessibility needs, making services more inclusive. They can cater to users with disabilities by providing voice interaction capabilities and simplifying access to information. Understanding how to develop inclusive chatbots can help you contribute to making technology more accessible to everyone, a crucial aspect in today’s diverse society.

8. Future-proofing your skills
AI and automation are the future of work. Knowing how to build AI chatbots is a great way to future-proof your skills, and given the rising trajectory of AI, it will be an in-demand skill in the market in the years to come. Staying ahead of technological trends is a great way to ensure you remain relevant and competitive in the job market.


Top 7 generative AI use cases for business — from cio.com by Grant Gross
Advanced chatbots, digital assistants, and coding helpers seem to be some of the sweet spots for gen AI use so far in business.

Many AI experts say the current use cases for generative AI are just the tip of the iceberg. More use cases will present themselves as gen AIs get more powerful and users get more creative with their experiments.

However, a handful of gen AI use cases are already bubbling up. Here’s a look at the most popular and promising.

 

How to use Perplexity in your daily workflow — from ai-supremacy.com by Michael Spencer and Alex McFarland
“I barely use Google anymore (for anything)” says today’s guest author.

Make Perplexity your go-to research companion with these strategies:

  • Morning briefings: Start your day by asking Perplexity for the latest news in your field. (I personally like to use Perplexity to curate the top AI news of the day to consider writing about for Unite AI and Techopedia.)
  • Fact-checking: Use it to quickly verify information before including it in your work.
  • Brainstorming: Generate ideas for projects or content by asking open-ended questions.
  • Learning new concepts: When you encounter an unfamiliar term or idea, turn to Perplexity for a quick, comprehensive explanation.
  • Writing assistance: Use it to find relevant statistics, examples, or counterarguments for your content.

From DSC:
So I used it to ask “What is Elon Musk’s Memphis Supercluster?” (which went online yesterday I believe) — which resulted in this reply:

Elon Musk’s Memphis Supercluster is a newly activated AI training cluster that is claimed to be the most powerful in the world. Here are the key details about this supercomputer:

1. Location: The supercluster is located in Memphis, Tennessee[1][2].

2. Hardware: It consists of 100,000 liquid-cooled Nvidia H100 GPUs connected through a single RDMA (Remote Direct Memory Access) fabric[1][3].

3. Purpose: The supercluster is designed for training large language models (LLMs) and other advanced AI technologies for Musk’s xAI company[1][2].

4. Activation: The Memphis Supercluster began training at approximately 4:20 AM local time on July 22, 2024[1][3].

5. Collaboration: The project is a result of collaboration between xAI, X (formerly Twitter), Nvidia, and other supporting companies[1][2].

6. Investment: With each H100 GPU estimated to cost between $30,000 to $40,000, the total investment in GPUs alone is estimated to be between $3 billion to $4 billion[5].

7. Goals: Musk claims that this supercluster will be used to develop “the world’s most powerful AI by every measure” by December 2024[1].

8. Comparison: The Memphis Supercluster’s 100,000 H100 GPUs significantly outclass other supercomputers in terms of GPU horsepower, such as Frontier (37,888 AMD GPUs) and Microsoft Eagle (14,400 Nvidia H100 GPUs)[3].

9. Infrastructure: The project required significant infrastructure development, including fiber optic networking[5].

While Musk’s claims about the supercluster’s capabilities are ambitious, it remains to be seen how it will perform in practice and whether it will meet the stated goals within the given timeframe[1].

Citations:
[1] https://venturebeat.com/ai/elon-musk-announces-most-powerful-ai-training-cluster-in-the-world/
[2] https://www.tuaw.com/2024/07/22/elon-musks-xai-unveils-worlds-most-powerful-ai-training-cluster/
[3] https://www.tomshardware.com/pc-components/gpus/elon-musk-fires-up-the-most-powerful-ai-training-cluster-in-the-world-uses-100000-nvidia-h100-gpus-on-a-single-fabric
[4] https://www.thestack.technology/xai-elon-musk-ai-supercomputer/
[5] https://www.benzinga.com/news/24/07/39881748/elon-musks-xai-flips-the-switch-on-100k-nvidia-h100-gpus-worth-up-to-4b-in-memphis-supercluster-most
[6] https://www.youtube.com/watch?v=0JhtJa05C2Y

Elon’s AI empire expands — from theneurondaily.com by Grant Harvey
Elon Musk’s team at xAI just powered on the “World’s Most Powerful AI Training Cluster.”
If you don’t know what a supercluster is, it’s basically a massive network of Nvidia GPUs (computer chips) working together as a single unit to solve “super” complex calculations at unprecedented speeds.

And this Memphis Supercluster is the most “super” supercluster we’ve ever seen. The new facility, dubbed the “Gigafactory of Compute”, is a beast (a quick back-of-envelope check on these figures follows the list):

  • 100,000 liquid-cooled Nvidia H100 GPUs on a single RDMA fabric (for context, Google snagged only 50,000 H100 GPUs last year).
  • Up to 150 megawatts of electricity usage—enough to power 100K homes.
  • At least one million gallons of water per day to keep cool!
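
And the promised back-of-envelope check, using only the figures quoted above plus the H100's roughly 700 W rating:

```python
# Back-of-envelope sanity check on the quoted figures (my arithmetic).
gpus = 100_000
facility_megawatts = 150          # quoted draw for the whole facility
watts_per_gpu = facility_megawatts * 1e6 / gpus
print(f"{watts_per_gpu:.0f} W per GPU")   # -> 1500 W

# An H100 is rated at roughly 700 W, so ~1.5 kW per GPU leaves about
# half the budget for CPUs, networking, and cooling -- plausible.
cost_low, cost_high = 30_000, 40_000      # quoted per-GPU cost range
print(f"GPUs alone: ${gpus * cost_low / 1e9:.0f}B to "
      f"${gpus * cost_high / 1e9:.0f}B")  # matches the $3B-$4B estimate
```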

What to expect: Better models, more frequently. That’s been the trend, at least—look at how the last few model releases have become more squished together. 


OpenAI to make GPT-4o Advanced Voice available by the end of the month to select group of users — from tomsguide.com by Ryan Morrison

GPT-4o Advanced Voice is an entirely new type of voice assistant, similar to but larger than the recently unveiled French model Moshi, which argued with me over a story.

In demos of the model, we’ve seen GPT-4o Advanced Voice create custom character voices, generate sound effects while telling a story and even act as a live translator.

This native speech ability is a significant step in creating more natural AI assistants. In the future, it will also come with live vision abilities, allowing the AI to see what you see.


Could AGI break the world? — from theneurondaily.com by Noah Edelman

“Biggest IT outage in history” proves we’re not ready for AGI.

Here’s the TL;DR — a faulty software update from cybersecurity firm CrowdStrike made this happen:
  • Grounded 5,000+ flights around the world.
  • Slowed healthcare across the UK.
  • Forced retailers to revert to cash-only transactions in Australia (what is this, the stone ages?!).


Here’s where AI comes in: Imagine today’s AI as a new operating system. In 5-10 years, it’ll likely be as integrated into our economy as Microsoft’s cloud servers are now. This isn’t that far-fetched—Microsoft is already planning to embed AI into all its programs.

So what if a CrowdStrike-like incident happens with a more powerful AI system? Some experts predict an AI-powered IT outage could be 10x worse than Friday’s fiasco.


The CrowdStrike outage and global software’s single-point failure problem — from cnbc.com by Kaya Ginsky

KEY POINTS

  • The CrowdStrike software bug that took down global IT infrastructure exposed a single-point-of-failure risk unrelated to malicious cyberattack.
  • National security and cybersecurity experts say the risk of this kind of technical outage is increasing alongside the risk of hacks, and the market will need to adopt better competitive practices.
  • Government is also likely to look at new regulations related to software updates and patches.

The “largest IT outage in history,” briefly explained — from vox.com by Li Zhou
Airlines, banks, and hospitals saw computer systems go down because of a CrowdStrike software glitch.

 

What aspects of teaching should remain human? — from hechingerreport.org by Chris Berdik
Even techno optimists hesitate to say teaching is best left to the bots, but there’s a debate about where to draw the line

ATLANTA — Science teacher Daniel Thompson circulated among his sixth graders at Ron Clark Academy on a recent spring morning, spot checking their work and leading them into discussions about the day’s lessons on weather and water. He had a helper: As Thompson paced around the class, peppering them with questions, he frequently turned to a voice-activated AI to summon apps and educational videos onto large-screen smartboards.

When a student asked, “Are there any animals that don’t need water?” Thompson put the question to the AI. Within seconds, an illustrated blurb about kangaroo rats appeared before the class.

Nitta said there’s something “deeply profound” about human communication that allows flesh-and-blood teachers to quickly spot and address things like confusion and flagging interest in real time.


Deep Learning: Five New Superpowers of Higher Education — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
How Deep Learning is Transforming Higher Education

While the traditional model of education is entrenched, emerging technologies like deep learning promise to shake its foundations and usher in an age of personalized, adaptive, and egalitarian education. It is expected to have a significant impact across higher education in several key ways.

…deep learning introduces adaptivity into the learning process. Unlike a typical lecture, deep learning systems can observe student performance in real-time. Confusion over a concept triggers instant changes to instructional tactics. Misconceptions are identified early and remediated quickly. Students stay in their zone of proximal development, constantly challenged but never overwhelmed. This adaptivity prevents frustration and stagnation.
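
A minimal sketch of that adaptive loop (the thresholds and update rule are invented for illustration, not any particular product's algorithm):

```python
# Toy adaptive-tutoring loop: adjust item difficulty from observed
# performance so the learner stays challenged but never overwhelmed.
# The thresholds and step size are invented for illustration.

def next_difficulty(difficulty: float, recent_correct: list[bool]) -> float:
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy > 0.85:    # mastery: raise the challenge
        difficulty += 0.1
    elif accuracy < 0.60:  # confusion detected: remediate
        difficulty -= 0.1
    return min(max(difficulty, 0.0), 1.0)  # clamp to [0, 1]

d = 0.5
for answers in ([True] * 5, [True, False, False, True, False]):
    d = next_difficulty(d, answers)
    print(f"next difficulty: {d:.1f}")  # 0.6, then back down to 0.5
```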


InstructureCon 24 Conference Notes — from onedtech.philhillaa.com by Glenda Morgan
Another solid conference from the market leader, even with unclear roadmap

The new stuff: AI
Instructure rolled out multiple updates and improvements – more than last year. These included many AI-based or focused tools and services as well as some functional improvements. I’ll describe the AI features first.

Sal Khan was a surprise visitor to the keynote stage to announce the September availability of the full suite of AI-enabled Khanmigo Teacher Tools for Canvas users. The suite includes 20 tools, such as tools to generate lesson plans and quiz questions and write letters of recommendation. Next year, they plan to roll out tools for students themselves to use.

Other AI-based features include:

    • Discussion tool summaries and AI-generated responses…
    • Translation of inbox messages and discussions…
    • Smart search …
    • Intelligent Insights…

 

 

Introducing Eureka Labs — “We are building a new kind of school that is AI native.” — by Andrej Karpathy, previously Director of AI @ Tesla, founding team @ OpenAI

However, with recent progress in generative AI, this learning experience feels tractable. The teacher still designs the course materials, but they are supported, leveraged and scaled with an AI Teaching Assistant who is optimized to help guide the students through them. This Teacher + AI symbiosis could run an entire curriculum of courses on a common platform. If we are successful, it will be easy for anyone to learn anything, expanding education in both reach (a large number of people learning something) and extent (any one person learning a large amount of subjects, beyond what may be possible today unassisted).


After Tesla and OpenAI, Andrej Karpathy’s startup aims to apply AI assistants to education — from techcrunch.com by Rebecca Bellan

Andrej Karpathy, former head of AI at Tesla and researcher at OpenAI, is launching Eureka Labs, an “AI native” education platform. In tech speak, that usually means built from the ground up with AI at its core. And while Eureka Labs’ AI ambitions are lofty, the company is starting with a more traditional approach to teaching.

San Francisco-based Eureka Labs, which Karpathy registered as an LLC in Delaware on June 21, aims to leverage recent progress in generative AI to create AI teaching assistants that can guide students through course materials.


What does it mean for students to be AI-ready? — from timeshighereducation.com by David Joyner
Not everyone wants to be a computer scientist, a software engineer or a machine learning developer. We owe it to our students to prepare them with a full range of AI skills for the world they will graduate into, writes David Joyner

We owe it to our students to prepare them for this full range of AI skills, not merely the end points. The best way to fulfil this responsibility is to acknowledge and examine this new category of tools. More and more tools that students use daily – word processors, email, presentation software, development environments and more – have AI-based features. Practising with these tools is a valuable exercise for students, so we should not prohibit that behaviour. But at the same time, we do not have to just shrug our shoulders and accept however much AI assistance students feel like using.


Teachers say AI usage has surged since the school year started — from eschoolnews.com by Laura Ascione
Half of teachers report an increase in the use of AI and continue to seek professional learning

Fifty percent of educators reported an increase in AI usage, by both students and teachers, over the 2023–24 school year, according to The 2024 Educator AI Report: Perceptions, Practices, and Potential, from Imagine Learning, a digital curriculum solutions provider.

The report offers insight into how teachers’ perceptions of AI use in the classroom have evolved since the start of the 2023–24 school year.


OPINION: What teachers call AI cheating, leaders in the workforce might call progress — from hechingerreport.org by C. Edward Watson and Jose Antonio Bowen
Authors of a new guide explore what AI literacy might look like in a new era

Excerpt (emphasis DSC):

But this very ease has teachers wondering how we can keep our students motivated to do the hard work when there are so many new shortcuts. Learning goals, curriculums, courses and the way we grade assignments will all need to be reevaluated.

The new realities of work also must be considered. A shift in employers’ job postings rewards those with AI skills. Many companies report already adopting generative AI tools or anticipate incorporating them into their workflow in the near future.

A core tension has emerged: Many teachers want to keep AI out of our classrooms, but also know that future workplaces may demand AI literacy.

What we call cheating, business could see as efficiency and progress.

It is increasingly likely that using AI will emerge as an essential skill for students, regardless of their career ambitions, and that action is required of educational institutions as a result.


Teaching Writing With AI Without Replacing Thinking: 4 Tips — by Erik Ofgang
AI has a lot of potential for writing students, but we can’t let it replace the thinking parts of writing, says writing professor Steve Graham

Reconciling these two goals — having AI help students learn to write more efficiently without hijacking the cognitive benefits of writing — should be a key goal of educators. Finding the ideal balance will require more work from both researchers and classroom educators, but Graham shares some initial tips for doing this currently.




Why I ban AI use for writing assignments — from timeshighereducation.com by James Stacey Taylor
Students may see handwriting essays in class as a needlessly time-consuming approach to assignments, but I want them to learn how to engage with arguments, develop their own views and convey them effectively, writes James Stacey Taylor

Could they use AI to generate objections to the arguments they read? Of course. AI does a good job of summarising objections to Singer’s view. But I don’t want students to parrot others’ objections. I want them to think of objections themselves. 

Could AI be useful for them in organising their exegesis of others’ views and their criticisms of them? Yes. But, again, part of what I want my students to learn is precisely what this outsources to the AI: how to organise their thoughts and communicate them effectively. 


How AI Will Change Education — from digitalnative.tech by Rex Woodbury
Predicting Innovation in Education, from Personalized Learning to the Downfall of College 

This week explores how AI will bleed into education, looking at three segments of education worth watching, then examining which business models will prevail.

  1. Personalized Learning and Tutoring
  2. Teacher Tools
  3. Alternatives to College
  4. Final Thoughts: Business Models and Why Education Matters

New Guidance from TeachAI and CSTA Emphasizes Computer Science Education More Important than Ever in an Age of AI — from csteachers.org by CSTA
The guidance features new survey data and insights from teachers and experts in computer science (CS) and AI, informing the future of CS education.

SEATTLE, WA – July 16, 2024 – Today, TeachAI, led by Code.org, ETS, the International Society of Technology in Education (ISTE), Khan Academy, and the World Economic Forum, launches a new initiative in partnership with the Computer Science Teachers Association (CSTA) to support and empower educators as they grapple with the growing opportunities and risks of AI in computer science (CS) education.

The briefs draw on early research and insights from CSTA members, organizations in the TeachAI advisory committee, and expert focus groups to address common misconceptions about AI and offer a balanced perspective on critical issues in CS education, including:

  • Why is it Still Important for Students to Learn to Program?
  • How Are Computer Science Educators Teaching With and About AI?
  • How Can Students Become Critical Consumers and Responsible Creators of AI?
 

Can Schools and Vendors Work Together Constructively on AI? A New Guide May Help — from edweek.org by Alyson Klein
The Education Department outlines key steps on AI development for schools

Educators need to work with vendors and tech developers to ensure artificial intelligence-driven innovations for schools go hand-in-hand with managing the technology’s risks, recommends guidance released July 8 by the U.S. Department of Education.

The guidance—called “Designing for Education with Artificial Intelligence: An Essential Guide for Developers”—includes extensive recommendations for both vendors and school district officials.




 
© 2024 | Daniel Christian