A third of all generative AI projects will be abandoned, says Gartner — from zdnet.com by Tiernan Ray
The high upfront cost of deployment is one of the challenges that can doom generative AI projects

Companies are “struggling” to find value in the generative artificial intelligence (Gen AI) projects they have undertaken, and one-third of initiatives will end up being abandoned, according to a recent report from analyst firm Gartner.

The report states that at least 30% of Gen AI projects will be abandoned after the proof-of-concept stage by the end of 2025.

From DSC:
But I wouldn’t write off the other two-thirds of projects that will make it. I wouldn’t write off the future of AI in our world. AI-based technologies are already massively impacting graphic design, film, media, and other creative outlets. See the tweet below for some examples of what I’m talking about.



 

From DSC:
Anyone who is involved in putting on conferences should at least be aware that this kind of thing is now possible!!! Check out the following posting from Adobe (with help from Tata Consultancy Services (TCS)).


From impossible to POSSIBLE: Tata Consultancy Services uses Adobe Firefly generative AI and Acrobat AI Assistant to turn hours of work into minutes — from blog.adobe.com

This year, the organizers — innovative industry event company Beyond Ordinary Events — turned to Tata Consultancy Services (TCS) to make the impossible “possible.” Leveraging Adobe generative AI technology across products like Adobe Premiere Pro and Acrobat, they distilled hours of video content in minutes, delivering timely dispatches to thousands of attendees throughout the conference.

For POSSIBLE ’24, Muche had an idea for a daily dispatch summarizing each day’s sessions so attendees wouldn’t miss a single insight. But timing would be critical. The dispatch needed to reach attendees shortly after sessions ended to fuel discussions over dinner and carry the excitement over to the next day.

The workflow started in Adobe Premiere Pro, with the writer opening a recording of each session and using the Speech to Text feature to automatically generate a transcript. They saved the transcript as a PDF file and opened it in Adobe Acrobat Pro. Then, using Adobe Acrobat AI Assistant, the writer asked for a session summary.

It was that fast and easy. In less than four minutes, one person turned a 30-minute session into an accurate, useful summary ready for review and publication.

By taking advantage of templates, the designer then added each AI-enabled summary to the newsletter in minutes. With just two people and generative AI technology, TCS accomplished the impossible — for the first time delivering an informative, polished newsletter to all 3,500 conference attendees just hours after the last session of the day.

 

When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia
As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.


Per The Rundown AI:

The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.

Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.


Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius
Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.

Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.


The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall]
The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.


[Claude] Artifacts are now generally available — from anthropic.com

[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.

Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.


MIT's AI Risk Repository — a comprehensive database of risks from AI systems

What are the risks from Artificial Intelligence?
A comprehensive living database of over 700 AI risks categorized by their cause and risk domain.

What is the AI Risk Repository?
The AI Risk Repository has three parts:

  • The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  • The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
  • The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).

California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI — from newsday.com by The Associated Press

SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

Per Oncely:

The Details:

  • Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
  • Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
  • Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.

New in Gemini: Custom Gems and improved image generation with Imagen 3 — from blog.google
The ability to create custom Gems is coming to Gemini Advanced subscribers, and updated image generation capabilities with our latest Imagen 3 model are coming to everyone.

We have new features rolling out [starting on 8/28/24] that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.


Cut the Chatter, Here Comes Agentic AI — from trendmicro.com

Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”; it’s agentic AI.

Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.

In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.

Citing AWS guidance, ZDNET counts six different potential types of AI agents:

    • Simple reflex agents for tasks like resetting passwords
    • Model-based reflex agents for pro vs. con decision making
    • Goal-/rule-based agents that compare options and select the most efficient pathways
    • Utility-based agents that compare for value
    • Learning agents
    • Hierarchical agents that manage and assign subtasks to other agents
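
From DSC:
To make that taxonomy a bit more concrete, here is a minimal, hypothetical Python sketch contrasting the first type (a simple reflex agent) with the third (a goal-/rule-based agent). The function names, rules, and cost values are my own illustrative assumptions; they are not code from AWS or ZDNET.

    # Illustrative toy sketch only (not from AWS or ZDNET): two of the agent types above.

    # 1. Simple reflex agent: fixed condition-action rules applied to the current request,
    #    e.g. the password-reset example from the list.
    def simple_reflex_agent(request: str) -> str:
        rules = {
            "forgot my password": "send a password-reset link",
            "account is locked": "unlock the account after an identity check",
        }
        for condition, action in rules.items():
            if condition in request.lower():
                return action
        return "escalate to a human"

    # 2. Goal-/rule-based agent: compares candidate options and selects the one
    #    that best satisfies an explicit goal (here, the lowest estimated cost).
    def goal_based_agent(options: dict[str, float]) -> str:
        return min(options, key=options.get)

    if __name__ == "__main__":
        print(simple_reflex_agent("Help, I forgot my password"))   # -> send a password-reset link
        print(goal_based_agent({"route A": 4.2, "route B": 2.7}))  # -> route B

Real frameworks layer learning, memory, and planning on top of these basics, but the core distinction (react to the current input versus compare options against an explicit goal) is the one the list above is drawing.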

Ask Claude: Amazon turns to Anthropic’s AI for Alexa revamp — from reuters.com by Greg Bensinger

Summary:

  • Amazon developing new version of Alexa with generative AI
  • Retailer hopes to generate revenue by charging for its use
  • Concerns about in-house AI prompt Amazon to turn to Anthropic’s Claude, sources say
  • Amazon says it uses many different technologies to power Alexa

Alibaba releases new AI model Qwen2-VL that can analyze videos more than 20 minutes long — from venturebeat.com by Carl Franzen


Hobbyists discover how to insert custom fonts into AI-generated images — from arstechnica.com by Benj Edwards
Like adding custom art styles or characters, in-world typefaces come to Flux.


200 million people use ChatGPT every week – up from 100 million last fall, says OpenAI — from zdnet.com by Sabrina Ortiz
Nearly two years after launching, ChatGPT continues to draw new users. Here’s why.

 

ILTACON 2024: Selling legal tech’s monorail — from abajournal.com by Nicole Black

The bottom line: The promise of GenAI for our profession is great, but all signs point to the realization of its potential being six months out or more. So the question remains: Will generative AI change the legal landscape, ushering in an era of frictionless, seamless legal work? Or have we reached the pinnacle of its development, left only with empty promises? I think it’s the former since there is so much potential, and many companies are investing significantly in AI development, but only time will tell.


From LegalZoom to AI-Powered Platforms: The Rise of Smart Legal Services — from tmcnet.com by Artem Vialykh

In today’s digital age, almost every industry is undergoing a transformation driven by technological innovation, and the legal field is no exception. Traditional legal services, often characterized by high fees, time-consuming processes, and complex paperwork, are increasingly being challenged by more accessible, efficient, and cost-effective alternatives.

LegalZoom, one of the pioneers in offering online legal services, revolutionized the way individuals and small businesses accessed legal assistance. However, with the advent of artificial intelligence (AI) and smart technologies, we are witnessing the rise of even more sophisticated platforms that are poised to reshape the legal landscape further.

The Rise of AI-Powered Legal Platforms
AI-powered legal platforms represent the next frontier in legal services. These platforms leverage the power of artificial intelligence, machine learning, and natural language processing to provide legal services that are not only more efficient but also more accurate and tailored to the needs of the user.

AI-powered platforms offer many advantages, one of them being their ability to process and analyze large amounts of data quickly. This capability allows them to provide users with precise legal advice and document generation in a fraction of the time it would take a human attorney. For example, AI-driven platforms can review and analyze contracts, identify potential legal risks, and even suggest revisions, all in real time. This level of automation significantly reduces the time and cost associated with traditional legal services.


AI, Market Dynamics, and the Future of Legal Services with Harbor’s Zena Applebaum — from geeklawblog.com by Greg Lambert

Zena talks about the integration of generative AI (Gen AI) into legal research tools, particularly at Thomson Reuters, where she previously worked. She emphasizes the challenges in managing expectations around AI’s capabilities while ensuring that the products deliver on their promises. The legal industry has high expectations for AI to simplify the time-consuming and complex nature of legal research. However, Applebaum highlights the need for balance, as legal research remains inherently challenging, and overpromising on AI’s potential could lead to dissatisfaction among users.

Zena shares her outlook on the future of the legal industry, particularly the growing sophistication of in-house legal departments and the increasing competition for legal talent. She predicts that as AI continues to enhance efficiency and drive changes in the industry, the demand for skilled legal professionals will rise. Law firms will need to adapt to these shifts by embracing new technologies and rethinking their strategies to remain competitive in a rapidly evolving market.


Future of the Delivery of Legal Services — from americanbar.org
The legal profession is in the midst of unprecedented change. Learn what might be next for the industry and your bar.


What. Just. Happened? (Post-ILTACon Emails Week of 08-19-2024) — from geeklawblog.com by Greg Lambert

Here’s this week’s edition of What. Just. Happened? Remember, you can track these daily with the AI Lawyer Talking Tech podcast (Spotify or Apple) which covers legal tech news and summarizes stories.


From DSC:
And although this next one is not necessarily legaltech-related, I wanted to include it here anyway — as I’m always looking to reduce the costs of obtaining a degree.

Improve the Diversity of the Profession By Addressing the Costs of Becoming a Lawyer — from lssse.indiana.edu by Joan Howarth

Not surprisingly, then, research shows that economic assets are a significant factor in bar passage. And LSSSE research shows us the connections between the excessive expense of becoming a lawyer and the persistent racial and ethnic disparities in bar passage rate.

The racial and ethnic bar passage disparities are extreme. For example, the national ABA statistics for first-time passers in 2023–24 show White candidates passing at 83%, compared to Black candidates at 57%, with Asian and Hispanic candidates in the middle (75% and 69%, respectively).

These disturbing figures are closely related to the expense of becoming a lawyer.

Finally, though, after decades of stability — or stagnation — in attorney licensing, change is here. And some of the changes, such as the new pathway to licensure in Oregon based on supervised practice instead of a traditional bar exam, or the Nevada Plan in which most of the requirements can be satisfied during law school, should significantly decrease the costs of licensure and add flexibility for candidates with responsibilities beyond studying for a bar exam.  These reforms are long overdue.


Thomson Reuters acquires Safe Sign Technologies — from legaltechnology.com by Caroline Hill

Thomson Reuters today (21 August) announced it has acquired Safe Sign Technologies (SST), a UK-based startup that is developing legal-specific large language models (LLMs) and as of just eight months ago was operating in stealth mode.

 

Generative AI and the Time Management Revolution — from ai-mindset.ai by Conor Grennan

Here’s how we need to change our work lives:

  1. RECLAIM: Use generative AI to speed up your daily tasks. Be ruthless. Anything that can be automated, should be.
  2. PROTECT: This is the crucial step. That time you’ve saved? Protect it like it’s the last slice of pizza. Block it off in your calendar. Tell your team it’s sacred.
  3. ELEVATE: Use this protected time for high-level thinking. Strategy. Innovation. The big, meaty problems you never have time for.
  4. AMPLIFY: Here’s where it gets cool. Use generative AI to amp up your strategic thinking. Need to brainstorm solutions to a complex problem? Want to analyze market trends? Generative AI is your new thinking partner.

The top 100 Gen AI Consumer Apps — 3rd edition — from a16z.com by Andreessen Horowitz

But amid the relentless onslaught of product launches, investment announcements, and hyped-up features, it’s worth asking: Which of these gen AI apps are people actually using? Which behaviors and categories are gaining traction among consumers? And which AI apps are people returning to, versus dabbling and dropping?

Welcome to the third installment of the Top 100 Gen AI Consumer Apps.

 


Gen AI’s next inflection point: From employee experimentation to organizational transformation — from mckinsey.com by Charlotte Relyea, Dana Maor, and Sandra Durth with Jan Bouly
As many employees adopt generative AI at work, companies struggle to follow suit. To capture value from current momentum, businesses must transform their processes, structures, and approach to talent.

To harness employees’ enthusiasm and stay ahead, companies need a holistic approach to transforming how the whole organization works with gen AI; the technology alone won’t create value.

Our research shows that early adopters prioritize talent and the human side of gen AI more than other companies (Exhibit 3). Our survey shows that nearly two-thirds of them have a clear view of their talent gaps and a strategy to close them, compared with just 25 percent of the experimenters. Early adopters focus heavily on upskilling and reskilling as a critical part of their talent strategies, as hiring alone isn’t enough to close gaps and outsourcing can hinder strategic-skills development. Finally, 40 percent of early-adopter respondents say their organizations provide extensive support to encourage employee adoption, versus 9 percent of experimenter respondents.


Adobe drops ‘Magic Fixup’: An AI breakthrough in the world of photo editing — from venturebeat.com by Michael Nuñez

Adobe researchers have revealed an AI model that promises to transform photo editing by harnessing the power of video data. Dubbed “Magic Fixup,” this new technology automates complex image adjustments while preserving artistic intent, potentially reshaping workflows across multiple industries.

Magic Fixup’s core innovation lies in its unique approach to training data. Unlike previous models that relied solely on static images, Adobe’s system learns from millions of video frame pairs. This novel method allows the AI to understand the nuanced ways objects and scenes change under varying conditions of light, perspective, and motion.


Top AI tools people actually use — from heatherbcooper.substack.com by Heather Cooper
How generative AI tools are changing the creative landscape

The shift toward creative tools
Creative tools made up 52% of the top generative AI apps on the list. This seems to reflect a growing consumer demand for accessible creativity through AI with tools for image, music, speech, video, and editing.

Creative categories include:

  • Image: Civitai, Leonardo, Midjourney, Yodayo, Ideogram, SeaArt
  • Music: Suno, Udio, VocalRemover
  • Speech: ElevenLabs, Speechify
  • Video: Luma AI, Viggle, Invideo AI, Vidnoz, ClipChamp
  • Editing: Cutout Pro, Veed, Photoroom, Pixlr, PicWish

Why it matters:
Creative apps are gaining traction because they empower digital artists and content creators with AI-driven tools that simplify and enhance the creative process, making professional-level work more accessible than ever.

 

College Writing Centers Worry AI Could Replace Them — from edsurge.com by Maggie Hicks
Those who run the centers argue that they could be a hub for teaching AI literacy.

But as generative AI tools like ChatGPT sweep into mainstream business tools, promising to draft properly-formatted text from simple prompts and the click of a button, new questions are rising about what role writing centers should play — or whether they will be needed in the future.

Writing centers need to find a balance between introducing AI into the writing process and keeping the human support that every writer needs, argues Anna Mills, an English instructor at the College of Marin.

AI can serve as a supplement to a human tutor, Mills says. She encourages her students to use MyEssayFeedback, an AI tool that critiques the organization of an essay, the quality of evidence a student has included to support their thesis or the tone of the writing. Such tools can also evaluate research questions or review a student’s writing based on the rubric for the assignment, she says.

 

Gemini makes your mobile device a powerful AI assistant — from blog.google
Gemini Live is available today to Advanced subscribers, along with conversational overlay on Android and even more connected apps.

Rolling out today: Gemini Live <– Google swoops in before OpenAI can get their Voice Mode out there
Gemini Live is a mobile conversational experience that lets you have free-flowing conversations with Gemini. Want to brainstorm potential jobs that are well-suited to your skillset or degree? Go Live with Gemini and ask about them. You can even interrupt mid-response to dive deeper on a particular point, or pause a conversation and come back to it later. It’s like having a sidekick in your pocket who you can chat with about new ideas or practice with for an important conversation.

Gemini Live is also available hands-free: You can keep talking with the Gemini app in the background or when your phone is locked, so you can carry on your conversation on the go, just like you might on a regular phone call. Gemini Live begins rolling out today in English to our Gemini Advanced subscribers on Android phones, and in the coming weeks will expand to iOS and more languages.

To make speaking to Gemini feel even more natural, we’re introducing 10 new voices to choose from, so you can pick the tone and style that works best for you.


Per the Rundown AI:
Why it matters: Real-time voice is slowly shifting AI from a tool we text/prompt with, to an intelligence that we collaborate, learn, consult, and grow with. As the world’s anticipation for OpenAI’s unreleased products grows, Google has swooped in to steal the spotlight as the first to lead widespread advanced AI voice rollouts.

Beyond Social Media: Schmidt Predicts AI’s Earth-Shaking Impact — from wallstreetpit.com
The next wave of AI is coming, and if Schmidt is correct, it will reshape our world in ways we are only beginning to imagine.

In a recent Q&A session at Stanford, Eric Schmidt, former CEO and Chairman of search giant Google, offered a compelling vision of the near future in artificial intelligence. His predictions, both exciting and sobering, paint a picture of a world on the brink of a technological revolution that could dwarf the impact of social media.

Schmidt highlighted three key advancements that he believes will converge to create this transformative wave: very large context windows, agents, and text-to-action capabilities. These developments, according to Schmidt, are not just incremental improvements but game-changers that could reshape our interaction with technology and the world at large.



The rise of multimodal AI agents — from 11onze.cat
Technology companies are investing large amounts of money in creating new multimodal artificial intelligence models and algorithms that can learn, reason and make decisions autonomously after collecting and analysing data.

The future of multimodal agents
In practical terms, a multimodal AI agent can, for example, analyse a text while processing an image, spoken language, or an audio clip to give a more complete and accurate response, both through voice and text. This opens up new possibilities in various fields: from education and healthcare to e-commerce and customer service.


AI Change Management: 41 Tactics to Use (August 2024) — from flexos.work by Daan van Rossum
Future-proof companies are investing in driving AI adoption, but many don’t know where to start. The experts recommend these 41 tips for AI change management.

As Matt Kropp told me in our interview, BCG has a 10-20-70 rule for AI at work:

  • 10% is the LLM or algorithm
  • 20% is the software layer around it (like ChatGPT)
  • 70% is the human factor

This 70% is exactly why change management is key in driving AI adoption.

But where do you start?

As I coach leaders at companies like Apple, Toyota, Amazon, L’Oréal, and Gartner in our Lead with AI program, I know that’s the question on everyone’s minds.

I don’t believe in gatekeeping this information, so here are 41 principles and tactics I share with our community members looking for winning AI change management principles.


 

How Generative AI will change what lawyers do — from jordanfurlong.substack.com by Jordan Furlong
As we enter the Age of Accessible Law, a wave of new demand is coming our way — but AI will meet most of the surge. What will be left for lawyers? Just the most valuable and irreplaceable role in law.

AI can already provide actionable professional advice; within the next ten years, if it takes that long, I believe it will offer acceptable legal advice. No one really wants “AI courts,” but soon enough, we’ll have AI-enabled mediation and arbitration, which will have a much greater impact on everyday dispute resolution.

I think it’s dangerous to assume that AI will never be able to do something that lawyers now do. “Never” is a very long time. And AI doesn’t need to replicate the complete arsenal of the most gifted lawyer out there. If a Legal AI can replicate 80% of what a middling lawyer can do, for 10% of the cost, in 1% of the time, that’s all the revolution you’ll need.

From DSC:
It is my sincere hope that AI will open up the floodgates to FAR greater Access to Justice (A2J) in the future.


It’s the Battle of the AI Legal Assistants, As LexisNexis Unveils Its New Protégé and Thomson Reuters Rolls Out CoCounsel 2.0 — from lawnext.com by Bob Ambrogi

It’s not quite BattleBots, but competitors LexisNexis and Thomson Reuters both made significant announcements today involving the development of generative AI legal assistants within their products.

Thomson Reuters, which last year acquired the CoCounsel legal assistant originally developed by Casetext, and which later announced plans to deploy it throughout its product lines, today unveiled what it says is the “supercharged” CoCounsel 2.0.

Meanwhile, LexisNexis said today it is rolling out the commercial preview version of its Protégé Legal AI Assistant, which it describes as a “substantial leap forward in personalized generative AI that will transform legal work.” It is part of the launch of the third generation of Lexis+ AI, the AI-driven legal research platform the company launched last year.


Thomson Reuters Launches CoCounsel 2.0 — from abovethelaw.com by Joe Patrice
New release promises results three times faster than the last version.

It seems like just last year we were talking about CoCounsel 1.0, the generative AI product launched by Casetext and then swiftly acquired by Thomson Reuters. That’s because it was just last year. Since then, Thomson Reuters has worked to marry Casetext’s tool with TR’s treasure trove of data.

It’s not an easy task. A lot of the legal AI conversation glosses over how constructing these tools requires a radical confrontation with the lawyers’ mind. Why do attorneys do what they do every day? Are there seemingly “inefficient” steps that actually serve a purpose? Does an AI “answer” advance the workflow or hinder the research alchemy? As recently as April, Thomson Reuters was busy hyping the fruits of its efforts to get ahead of these challenges.


Though this next item is not necessarily related to legaltech, it’s still relevant to the legal realm:

A Law Degree Is No Sure Thing — from cew.georgetown.edu
Some Law School Graduates Earn Top Dollar, but Many Do Not

Summary
Is law school worth it? A Juris Doctor (JD) offers high median earnings and a substantial earnings boost relative to a bachelor’s degree in the humanities or social sciences—two of the more common fields of study that lawyers pursue as undergraduate students. However, graduates of most law schools carry substantial student loan debt, which dims the financial returns associated with a JD.

A Law Degree Is No Sure Thing: Some Law School Graduates Earn Top Dollar, but Many Do Not finds that the return on investment (ROI) in earnings and career outcomes varies widely across law schools. The median earnings net of debt payments are $72,000 four years after graduation for all law school graduates, but exceed $200,000 at seven law schools. By comparison, graduates of 33 law schools earn less than $55,000 net of debt payments four years after graduation.

From DSC:
A former boss’s husband was starting up a local public defender’s office in Michigan and needed to hire over two dozen people. The salaries were in the $40Ks, she said. This surprised me greatly, as I thought all lawyers were bringing in the big bucks. Clearly, this is not the case. Many lawyers do not make the big bucks, as this report shows:

…graduates of 33 law schools earn less than $55,000 net of debt payments four years after graduation.


Also relevant/see:

 

From DSC:
The above item is simply excellent!!! I love it!



Also relevant/see:

3 new Chrome AI features for even more helpful browsing — from blog.google by Parisa Tabriz
See how Chrome’s new AI features, including Google Lens for desktop and Tab compare, can help you get things done more easily on the web.


On speaking to AI — from oneusefulthing.org by Ethan Mollick
Voice changes a lot of things

So, let’s talk about ChatGPT’s new Advanced Voice mode and the new AI-powered Siri. They are not just different approaches to talking to AI. In many ways, they represent the divide between two philosophies of AI – Copilots versus Agents, small models versus large ones, specialists versus generalists.


Your guide to AI – August 2024 — from nathanbenaich.substack.com by Nathan Benaich and Alex Chalmers


Microsoft says OpenAI is now a competitor in AI and search — from cnbc.com by Jordan Novet

Key Points

  • Microsoft’s annually updated list of competitors now includes OpenAI, a long-term strategic partner.
  • The change comes days after OpenAI announced a prototype of a search engine.
  • Microsoft has reportedly invested $13 billion into OpenAI.


Excerpt from a piece by Graham Clay:

1. Flux, an open-source text-to-image creator that is comparable to industry leaders like Midjourney, was released by Black Forest Labs (the “original team” behind Stable Diffusion). It is capable of generating high quality text in images (there are tons of educational use cases). You can play with it on their demo page, on Poe, or by running it on your own computer (tutorial here).

Other items re: Flux:

How to FLUX — from heatherbcooper.substack.com by Heather Cooper
Where to use FLUX online & full tutorial to create a sleek ad in minutes


Also from Heather Cooper:

Introducing FLUX: Open-source text-to-image model

FLUX… has been EVERYWHERE this week, as I’m sure you have seen. Developed by Black Forest Labs, it is an open-source image generation model that’s gaining attention for its ability to rival leading models like Midjourney, DALL·E 3, and SDXL.

What sets FLUX apart is its blend of creative freedom, precision, and accessibility—it’s available across multiple platforms and can be run locally.

Why FLUX Matters
FLUX’s open-source nature makes it accessible to a broad audience, from hobbyists to professionals.

It offers advanced multimodal and parallel diffusion transformer technology, delivering high visual quality, strong prompt adherence, and diverse outputs.

It’s available in 3 models:

  • FLUX.1 [pro]: A high-performance, commercial image synthesis model.
  • FLUX.1 [dev]: An open-weight, non-commercial variant of FLUX.1 [pro].
  • FLUX.1 [schnell]: A faster, distilled version of FLUX.1, operating up to 10x quicker.

Daily Digest: Huge (in)Flux of AI videos. — from bensbites.beehiiv.com
PLUS: Review of ChatGPT’s advanced voice mode.

  1. During the weekend, image models made a comeback. Recently released Flux models can create realistic images with near-perfect text—straight from the model, without much patchwork. To get the party going, people are putting these images into video generation models to create pretty trippy videos. I can’t identify half of them as AI, and they’ll only get better. See this tutorial on how to create a video ad for your product.

 


7 not only cool but handy use cases of new Claude — from techthatmatters.beehiiv.com by Harsh Makadia

  1. Data visualization
  2. Infographic
  3. Copy the UI of a website
  4. …and more

Achieving Human Level Competitive Robot Table Tennis — from sites.google.com

 

What Students Want When It Comes To AI — from onedtech.philhillaa.com by Glenda Morgan
The Digital Education Council Global AI Student Survey 2024

The Digital Education Council (DEC) this week released the results of a global survey of student opinions on AI. It’s a large survey with nearly 4,000 respondents conducted across 16 countries, but more importantly, it asks some interesting questions. There are many surveys about AI out there right now, but this one stands out. I’m going to go into some depth here, as the entire survey report is worth reading.



AI is forcing a teaching and learning evolution — from eschoolnews.com by Laura Ascione
AI and technology tools are leading to innovative student learning–along with classroom, school, and district efficiency

Key findings from the 2024 K-12 Educator + AI Survey, which was conducted by Hanover Research, include:

  • Teachers are using AI to personalize and improve student learning, not just run classrooms more efficiently, but challenges remain
  • While post-pandemic challenges persist, the increased use of technology is viewed positively by most teachers and administrators
  • …and more

From DSC:
I wonder… how will the use of AI in education square with the issues around using smartphones/laptops in classrooms? See:

  • Why Schools Are Racing to Ban Student Phones — from nytimes.com by Natasha Singer; via GSV
    As the new school year starts, a wave of new laws that aim to curb distracted learning is taking effect in Indiana, Louisiana and other states.

A three-part series from Dr. Philippa Hardman:

Part 1: Writing Learning Objectives  
The Results Part 1: Writing Learning Objectives

In this week’s post I will dive into the results from task 1: writing learning objectives. Stay tuned over the next two weeks to see all of the results.

Part 2: Selecting Instructional Strategies
The Results Part 2: Selecting an Instructional Strategy

Welcome back to our three-part series exploring the impact of AI on instructional design.

This week, we’re tackling a second task and a crucial aspect of instructional design: selecting instructional strategies. The ability to select appropriate instructional strategies to achieve intended objectives is a mission-critical skill for any instructional designer. So, can AI help us do a good job of it? Let’s find out!

Part 3: How Close is AI to Replacing Instructional Designers?
The Results Part 3: Creating a Course Outline

Today, we’re diving into what many consider to be the role-defining task of the instructional designer: creating a course design outline.


ChatGPT Cheat Sheet for Instructional Designers! — from Alexandra Choy Youatt EdD

Instructional Designers!
Whether you’re new to the field or a seasoned expert, this comprehensive guide will help you leverage AI to create more engaging and effective learning experiences.

What’s Inside?
Roles and Tasks: Tailored prompts for various instructional design roles and tasks.
Formats: Different formats to present your work, from training plans to rubrics.
Learning Models: Guidance on using the ADDIE model and various pedagogical strategies.
Engagement Tips: Techniques for online engagement and collaboration.
Specific Tips: Industry certifications, work-based learning, safety protocols, and more.

Who Can Benefit?
Corporate Trainers
Curriculum Developers
E-Learning Specialists
Instructional Technologists
Learning Experience Designers
And many more!

ChatGPT Cheat Sheet | Instructional Designer


5 AI Tools I Use Every Day (as a Busy Student) — from theaigirl.substack.com by Diana Dovgopol
AI tools that I use every day to boost my productivity.
#1 Gamma
#2 Perplexity
#3 Cockatoo

I use this AI tool almost every day as well. Since I’m still a master’s student at university, I have to attend lectures and seminars, which are always in English or German, neither of which is my native language. With the help of Cockatoo, I create transcripts of the lectures and/or translations into my language. This means I don’t have to take notes in class and then manually translate them afterward. All I need to do is record the lecture audio on any device or directly in Cockatoo and upload it, and then the audio and text are ready for me.

…and more


Students Worry Overemphasis on AI Could Devalue Education — from insidehighered.com by Juliette Rowsell
Report stresses that AI is “new standard” and universities need to better communicate policies to learners.

Rising use of AI in higher education could cause students to question the quality and value of education they receive, a report warns.

This year’s Digital Education Council Global AI Student Survey, of more than 3,800 students from 16 countries, found that more than half (55 percent) believed overuse of AI within teaching devalued education, and 52 percent said it negatively impacted their academic performance.

Despite this, significant numbers of students admitted to using such technology. Some 86 percent said they “regularly” used programs such as ChatGPT in their studies, 54 percent said they used it on a weekly basis, and 24 percent said they used it to write a first draft of a submission.

Higher Ed Leadership Is Excited About AI – But Investment Is Lacking — from forbes.com by Vinay Bhaskara

As corporate America races to integrate AI into its core operations, higher education finds itself in a precarious position. I conducted a survey of 63 university leaders revealing that while higher ed leaders recognize AI’s transformative potential, they’re struggling to turn that recognition into action.

This struggle is familiar for higher education — gifted with the mission of educating America’s youth but plagued with a myriad of operational and financial struggles, higher ed institutions often lag behind their corporate peers in technology adoption. In recent years, this gap has become threateningly large. In an era of declining enrollments and shifting demographics, closing this gap could be key to institutional survival and success.

The survey results paint a clear picture of inconsistency: 86% of higher ed leaders see AI as a “massive opportunity,” yet only 21% believe their institutions are prepared for it. This disconnect isn’t just a minor inconsistency – it’s a strategic vulnerability in an era of declining enrollments and shifting demographics.


(Generative) AI Isn’t Going Anywhere but Up — from stefanbauschard.substack.com by Stefan Bauschard
“Hype” claims are nonsense.

There has been a lot of talk recently about an “AI Bubble.” Supposedly, the industry, or at least the generative AI subset of it, will collapse. This is known as the “Generative AI Bubble.” A bubble — a broad one or a generative one — is nonsense. These are the reasons we will continue to see massive growth in AI.


AI Readiness: Prepare Your Workforce to Embrace the Future — from learningguild.com by Danielle Wallace

Artificial Intelligence (AI) is revolutionizing industries, enhancing efficiency, and unlocking new opportunities. To thrive in this landscape, organizations need to be ready to embrace AI not just technologically but also culturally.

Learning leaders play a crucial role in preparing employees to adapt and excel in an AI-driven workplace. Transforming into an AI-empowered organization requires more than just technological adoption; it demands a shift in organizational mindset. This guide delves into how learning leaders can support this transition by fostering the right mindset attributes in employees.


Claude AI for eLearning Developers — from learningguild.com by Bill Brandon

Claude is fast, produces grammatically correct text, and outputs easy-to-read articles, emails, blog posts, summaries, and analyses. Take some time to try it out. If you worry about plagiarism and text scraping, put the results through Grammarly’s plagiarism checker (I did not use Claude for this article, but I did send the text through Grammarly).


Survey: Top Teacher Uses of AI in the Classroom — from thejournal.com by Rhea Kelly

A new report from Cambium Learning Group outlines the top ways educators are using artificial intelligence to manage their classrooms and support student learning. Conducted by Hanover Research, the 2024 K-12 Educator + AI Survey polled 482 teachers and administrators at schools and districts that are actively using AI in the classroom.

More than half of survey respondents (56%) reported that they are leveraging AI to create personalized learning experiences for students. Other uses included providing real-time performance tracking and feedback (cited by 52% of respondents), helping students with critical thinking skills (50%), proofreading writing (47%), and lesson planning (44%).

On the administrator side, top uses of AI included interpreting/analyzing student data (61%), managing student records (56%), and managing professional development (56%).


Addendum on 8/14/24:

 

How to Set Up District-Wide Choice-Based Professional Development — from edutopia.org by Kathi Kersznowski
This district reimagined PD by tapping interested teachers to guide PD sessions on new software, creating a model that can be used for smaller initiatives as well.

As an edtech coach, I constantly seek innovative ways to deliver impactful professional development (PD), and when our district decided to transition from Microsoft to Google, I saw an opportunity to reimagine our approach to PD. My team of edtech specialists and I, together with our administrators and IT department, created Google Day, setting up a transformative model that engaged our entire staff and created lasting impact.

While our focus was a major platform change, this model can be adapted for any professional development topics or district-wide rollouts. Whether you‘re introducing a new curriculum or teaching strategies or starting any other initiative, the principles we applied can help you create a dynamic, effective PD experience.

The key elements—leveraging in-house expertise, offering choice in sessions, and creating a lasting set of resources—can be applied to virtually any PD need.

 

For college students—and for higher ed itself—AI is a required course — from forbes.com by Jamie Merisotis

Some of the nation’s biggest tech companies have announced efforts to reskill people to avoid job losses caused by artificial intelligence, even as they work to perfect the technology that could eliminate millions of those jobs.

It’s fair to ask, however: What should college students and prospective students, weighing their choices and possible time and financial expenses, think of this?

The news this spring was encouraging for people seeking to reinvent their careers to grab middle-class jobs and a shot at economic security.

 


Addressing Special Education Needs With Custom AI Solutions — from teachthought.com
AI can offer many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.

For too long, students with learning disabilities have struggled to navigate a traditional education system that often fails to meet their unique needs. But what if technology could help bridge the gap, offering personalized support and unlocking the full potential of every learner?

Artificial intelligence (AI) is emerging as a powerful ally in special education, offering many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.



11 Summer AI Developments Important to Educators — from stefanbauschard.substack.com by Stefan Bauschard
Equity demands that we help students prepare to thrive in an AI-World

*SearchGPT
*Smaller & on-device (phones, glasses) AI models
*AI TAs
*Access barriers decline, equity barriers grow
*Claude Artifacts and Projects
*Agents, and Agent Teams of a million+
*Humanoid robots & self-driving cars
*AI Curricular integration
*Huge video and video-segmentation gains
*Writing Detectors — The final blow
*AI Unemployment, Student AI anxiety, and forward-thinking approaches
*Alternative assessments


Academic Fracking: When Publishers Sell Scholars’ Work to AI — from aiedusimplified.substack.com by Lance Eaton
Further discussion of publisher practices selling scholars’ work to AI companies

Last week, I explored AI and academic publishing in response to an article that came out a few weeks ago about a deal Taylor & Francis made to sell their books to Microsoft and one other AI company (unnamed) for a boatload of money.

Since then, two more pieces have been widely shared, including this piece from Inside Higher Ed by Kathryn Palmer (for which I was interviewed and in which I’m mentioned) and this piece from the Chronicle of Higher Ed by Christa Dutton. Both pieces try to cover the different sides: talking to authors, scanning the commentary online, finding some experts to consult, and talking to the publishers. It’s one of those topics that can feel really important and yet probably matters only to the very small number of folks who find themselves thinking about academic publishing, scholarly communication, and generative AI.


At the Crossroads of Innovation: Embracing AI to Foster Deep Learning in the College Classroom — from er.educause.edu by Dan Sarofian-Butin
AI is here to stay. How can we, as educators, accept this change and use it to help our students learn?

The Way Forward
So now what?

In one respect, we already have a partial answer. Over the last thirty years, there has been a dramatic shift from a teaching-centered to a learning-centered education model. High-impact practices, such as service learning, undergraduate research, and living-learning communities, are common and embraced because they help students see the real-world connections of what they are learning and make learning personal.

Therefore, I believe we must double down on a learning-centered model in the age of AI.

The first step is to fully and enthusiastically embrace AI.

The second step is to find the “jagged technological frontier” of using AI in the college classroom.




Futures Thinking in Education — from gettingsmart.com by Getting Smart Staff

Key Points

  • Educators should leverage these tools to prepare for rapid changes driven by technology, climate, and social dynamics.
  • Cultivating empathy for future generations can help educators design more impactful and forward-thinking educational practices.
 

Per the Rundown AI:

Why it matters: AI is slowly shifting from a tool we text/prompt with, to an intelligence that we collaborate, learn, and grow with. Advanced Voice Mode’s ability to understand and respond to emotions in real-time convos could also have huge use cases in everything from customer service to mental health support.

Also relevant/see:


Creators to Have Personalized AI Assistants, Meta CEO Mark Zuckerberg Tells NVIDIA CEO Jensen Huang — from blogs.nvidia.com by Brian Caulfield
Zuckerberg and Huang explore the transformative potential of open source AI, the launch of AI Studio, and exchange leather jackets at SIGGRAPH 2024.

“Every single restaurant, every single website will probably, in the future, have these AIs …” Huang said.

“…just like every business has an email address and a website and a social media account, I think, in the future, every business is going to have an AI,” Zuckerberg responded.

More broadly, the advancement of AI across a broad ecosystem promises to supercharge human productivity, for example, by giving every human on earth a digital assistant — or assistants — that they can interact with quickly and fluidly, allowing people to live richer lives.

Also related/see:


From DSC:
Today was a MUCH better day for Nvidia, however (up 12.81%). But it’s been very volatile in the last several weeks — as people and institutions ask where the ROIs are going to come from.






9 compelling reasons to learn how to use AI Chatbots — from interestingengineering.com by Atharva Gosavi
AI Chatbots are conversational agents that can act on your behalf and converse with humans – a futuristic novelty that is already getting people excited about its usage in improving efficiency.

7. Accessibility and inclusivity
Chatbots can be designed to support multiple languages and accessibility needs, making services more inclusive. They can cater to users with disabilities by providing voice interaction capabilities and simplifying access to information. Understanding how to develop inclusive chatbots can help you contribute to making technology more accessible to everyone, a crucial aspect in today’s diverse society.

8. Future-proofing your skills
AI and automation are the future of work. Knowing how to build AI chatbots is a great way to future-proof your skills, and given the rising trajectory of AI, it’ll be an in-demand skill in the market in the years to come. Staying ahead of technological trends is a great way to ensure you remain relevant and competitive in the job market.


Top 7 generative AI use cases for business — from cio.com by Grant Gross
Advanced chatbots, digital assistants, and coding helpers seem to be some of the sweet spots for gen AI use so far in business.

Many AI experts say the current use cases for generative AI are just the tip of the iceberg. More use cases will present themselves as gen AIs get more powerful and users get more creative with their experiments.

However, a handful of gen AI use cases are already bubbling up. Here’s a look at the most popular and promising.

 

How to use Perplexity in your daily workflow — from ai-supremacy.com by Michael Spencer and Alex McFarland
“I barely use Google anymore (for anything)” says today’s guest author.

Make Perplexity your go-to research companion with these strategies:

  • Morning briefings: Start your day by asking Perplexity for the latest news in your field. (I personally like to use Perplexity to curate the top AI news of the day to consider writing about for Unite AI and Techopedia.)
  • Fact-checking: Use it to quickly verify information before including it in your work.
  • Brainstorming: Generate ideas for projects or content by asking open-ended questions.
  • Learning new concepts: When you encounter an unfamiliar term or idea, turn to Perplexity for a quick, comprehensive explanation.
  • Writing assistance: Use it to find relevant statistics, examples, or counterarguments for your content.
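
From DSC:
If you want to go a step beyond the app, the “morning briefings” idea above can also be automated. Below is a rough, hypothetical Python sketch that calls Perplexity’s developer API through the OpenAI client library (Perplexity’s API is advertised as OpenAI-compatible). The base URL, model name, and environment-variable name are assumptions on my part, so check Perplexity’s current API documentation before relying on them.

    # Hypothetical sketch: an automated "morning briefing" via Perplexity's API.
    # Assumptions to verify against Perplexity's docs: the base URL, the model name,
    # and that your key is stored in the PERPLEXITY_API_KEY environment variable.
    import os

    from openai import OpenAI  # Perplexity exposes an OpenAI-compatible chat API

    client = OpenAI(
        api_key=os.environ["PERPLEXITY_API_KEY"],
        base_url="https://api.perplexity.ai",  # assumed endpoint
    )

    response = client.chat.completions.create(
        model="llama-3.1-sonar-small-128k-online",  # assumed model name; check the current list
        messages=[
            {"role": "system", "content": "You are a concise news researcher."},
            {"role": "user", "content": "Summarize the top 5 AI news items from the last 24 hours, with sources."},
        ],
    )

    print(response.choices[0].message.content)

Drop something like this into a scheduled job (cron, GitHub Actions, etc.) and the morning briefing arrives without anyone having to type the prompt each day.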

From DSC:
So I used it to ask “What is Elon Musk’s Memphis Supercluster?” (the cluster went online yesterday, I believe), which resulted in this reply:

Elon Musk’s Memphis Supercluster is a newly activated AI training cluster that is claimed to be the most powerful in the world. Here are the key details about this supercomputer:

1. Location: The supercluster is located in Memphis, Tennessee[1][2].

2. Hardware: It consists of 100,000 liquid-cooled Nvidia H100 GPUs connected through a single RDMA (Remote Direct Memory Access) fabric[1][3].

3. Purpose: The supercluster is designed for training large language models (LLMs) and other advanced AI technologies for Musk’s xAI company[1][2].

4. Activation: The Memphis Supercluster began training at approximately 4:20 AM local time on July 22, 2024[1][3].

5. Collaboration: The project is a result of collaboration between xAI, X (formerly Twitter), Nvidia, and other supporting companies[1][2].

6. Investment: With each H100 GPU estimated to cost between $30,000 to $40,000, the total investment in GPUs alone is estimated to be between $3 billion to $4 billion[5].

7. Goals: Musk claims that this supercluster will be used to develop “the world’s most powerful AI by every measure” by December 2024[1].

8. Comparison: The Memphis Supercluster’s 100,000 H100 GPUs significantly outclass other supercomputers in terms of GPU horsepower, such as Frontier (37,888 AMD GPUs) and Microsoft Eagle (14,400 Nvidia H100 GPUs)[3].

9. Infrastructure: The project required significant infrastructure development, including fiber optic networking[5].

While Musk’s claims about the supercluster’s capabilities are ambitious, it remains to be seen how it will perform in practice and whether it will meet the stated goals within the given timeframe[1].

Citations:
[1] https://venturebeat.com/ai/elon-musk-announces-most-powerful-ai-training-cluster-in-the-world/
[2] https://www.tuaw.com/2024/07/22/elon-musks-xai-unveils-worlds-most-powerful-ai-training-cluster/
[3] https://www.tomshardware.com/pc-components/gpus/elon-musk-fires-up-the-most-powerful-ai-training-cluster-in-the-world-uses-100000-nvidia-h100-gpus-on-a-single-fabric
[4] https://www.thestack.technology/xai-elon-musk-ai-supercomputer/
[5] https://www.benzinga.com/news/24/07/39881748/elon-musks-xai-flips-the-switch-on-100k-nvidia-h100-gpus-worth-up-to-4b-in-memphis-supercluster-most
[6] https://www.youtube.com/watch?v=0JhtJa05C2Y

—–
Elon’s AI empire expands — from theneurondaily.com by Grant Harvey
Elon Musk’s team at xAI just powered on the “World’s Most Powerful AI Training Cluster.”
If you don’t know what a supercluster is, it’s basically a massive network of Nvidia GPUs (computer chips) working together as a single unit to solve “super” complex calculations at unprecedented speeds.

And this Memphis Supercluster is the most “super” supercluster we’ve ever seen. The new facility, dubbed the “Gigafactory of Compute”, is a beast:

  • 100,000 liquid-cooled Nvidia H100 GPUs on a single RDMA fabric (for context, Google snagged only 50,000 H100 GPUs last year).
  • Up to 150 megawatts of electricity usage — enough for 100K homes.
  • At least one million gallons of water per day to keep cool!

What to expect: Better models, more frequently. That’s been the trend, at least—look at how the last few model releases have become more squished together. 


OpenAI to make GPT-4o Advanced Voice available by the end of the month to select group of users — from tomsguide.com by Ryan Morrison

GPT-4o Advanced Voice is an entirely new type of voice assistant, similar to but larger than the recently unveiled French model Moshi, which argued with me over a story.

In demos of the model, we’ve seen GPT-4o Advanced Voice create custom character voices, generate sound effects while telling a story and even act as a live translator.

This native speech ability is a significant step in creating more natural AI assistants. In the future, it will also come with live vision abilities, allowing the AI to see what you see.


Could AGI break the world? — from theneurondaily.com by Noah Edelman

“Biggest IT outage in history” proves we’re not ready for AGI.

Here’s the TL;DR — a faulty software update from cybersecurity firm CrowdStrike made this happen:

  • Grounded 5,000+ flights around the world.
  • Slowed healthcare across the UK.
  • Forced retailers to revert to cash-only transactions in Australia (what is this, the stone ages?!).


Here’s where AI comes in: Imagine today’s AI as a new operating system. In 5-10 years, it’ll likely be as integrated into our economy as Microsoft’s cloud servers are now. This isn’t that far-fetched—Microsoft is already planning to embed AI into all its programs.

So what if a Crowdstrike-like incident happens with a more powerful AI system? Some experts predict an AI-powered IT outage could be 10x worse than Friday’s fiasco.


The Crowdstrike outage and global software’s single-point failure problem — from cnbc.com by Kaya Ginsky

KEY POINTS

  • The CrowdStrike software bug that took down global IT infrastructure exposed a single-point-of-failure risk unrelated to malicious cyberattack.
  • National and cybersecurity experts say the risk of this kind of technical outage is increasing alongside the risk of hacks, and the market will need to adopt better competitive practices.
  • Government is also likely to look at new regulations related to software updates and patches.

The “largest IT outage in history,” briefly explained — from vox.com by Li Zhou
Airlines, banks, and hospitals saw computer systems go down because of a CrowdStrike software glitch.

 

The race to deploy GenAI in the legal sector — from sifted.eu by Kai Nicol-Schwarz
LegalFly’s €15m Series A is the latest in a string of raises for European GenAI legaltech startups

Speak to any lawyer and you’ll soon discover that the job is a far cry from the fevered excitement of a courtroom drama. Behind the scenes, there’s an endless amount of laborious, typically manual work, like drafting, reviewing and negotiating contracts and other legal documents, that has to be done daily.

It was this realisation that led four product managers at dating app giant Tinder, frustrated by what they saw as a lack of AI adoption at the company, to jump ship and found Belgium-based LegalFly last year. The startup is building a generative AI copilot for lawyers which eventually, it says, will be able to automate entire workflows in the legal profession.

“We were looking at what GenAI was good at, which is synthesising data and generating content,” says founder and CEO Ruben Miessen. “What industry works like that? Law, and it does it all in a very manual way.”

“The legal industry is a global behemoth that’s seen minimal innovation since the advent of Microsoft Word in the 90s,” says Carina Namih, partner at Plural. “GenAI — especially with a human in the loop to keep accuracy high — is ideally suited to drafting, editing and negotiating legal documents.”


Legal Technology Company Relativity Announces OpenAI ChatGPT Integration — from lawfuel.com

CHICAGO– July 18 – Relativity, a global legal technology company, today announced it is integrating with OpenAI’s ChatGPT Enterprise Compliance API. The integration adds ChatGPT Enterprise as a Collect in RelativityOne data source, allowing users to seamlessly collect and process human-to-AI conversational data.

“The future around human and AI interaction is changing rapidly, calling for innovative legal data management solutions to include novel data sources, such as conversations with AI agents,” said Chris Brown, Chief Product Officer at Relativity. “In answering that call, we are committed to equipping our community with the tools they need to traverse the evolving future of human-to-AI conversational data and putting users in control of this new data landscape.”

 
© 2024 | Daniel Christian