Google’s AI-powered note-taking app is the messy beginning of something great — from theverge.com by David Pierce; via AI Insider
NotebookLM is a neat research tool with some big ideas. It’s still rough and new, but it feels like Google is onto something.

Excerpts (emphasis DSC):

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes. 

Right now, it’s really just a prototype, but a small team inside the company has been trying to figure out what an AI notebook might look like.

 


ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs says its AI voice generator is out of beta and is positioned to support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model promises to produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: one is a text-to-speech model, and the other is “VoiceLab,” which lets paying users clone a voice by feeding fragments of their own (or others’) speech into the model to create a kind of voice clone. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.
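For developers, this is exposed through a simple REST API. Below is a rough sketch of what generating multilingual speech with a cloned voice might look like in Python; the endpoint path, header name, and model ID are assumptions based on ElevenLabs' public documentation at the time, so treat it as illustrative rather than copy-paste ready.

```python
# Hypothetical sketch of calling a multilingual text-to-speech API such as
# ElevenLabs'. The endpoint path, header name, and model ID are assumptions.
import requests

API_KEY = "your-api-key"           # placeholder
VOICE_ID = "your-cloned-voice-id"  # placeholder: a voice created in VoiceLab

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Merhaba! Bu ses yapay zeka ile üretildi.",  # Turkish sample text
        "model_id": "eleven_multilingual_v2",                # the multilingual v2 model
    },
)
response.raise_for_status()

# The API returns raw audio bytes, which we save to disk.
with open("output.mp3", "wb") as f:
    f.write(response.content)
```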

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

But Hugging Face produces a platform where AI developers can share code, models, data sets, and use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.
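To make that concrete, here is a minimal sketch of the workflow the article describes: pulling a hosted model and its weights from the Hugging Face Hub and running it locally with the company's transformers library. The model named below is just one example of the many hosted on the platform.

```python
# Minimal sketch: download a model and its weights from the Hugging Face Hub
# and run it locally via the `transformers` library.
from transformers import pipeline

# Fetches the model weights from the Hub on first use, then caches them locally.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Hugging Face makes it easy to share and reuse models."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```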


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, they also need to work more with local tech schools, vocational schools, and community colleges; and other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlayListAI (previously LinupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve. You just have to type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip.

If you can’t think of a prompt, the app offers suggested prompts in different categories, including video, lo-fi, podcast, gaming, meditation, and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 

10 Ways Artificial Intelligence Is Transforming Instructional Design — from er.educause.edu by Robert Gibson
Artificial intelligence (AI) is providing instructors and course designers with an incredible array of new tools and techniques to improve the course design and development process. However, the intersection of AI and content creation is not new.

What does this mean for the field of instructional and course design? I have been telling my graduate instructional design students that AI technology is not likely to replace them any time soon because learning and instruction are still highly personalized and humanistic experiences. However, as these students embark on their careers, they will need to understand how to appropriately identify, select, and utilize AI when developing course content.

Here are a few interesting examples of how AI is shaping and influencing instructional design. Some of the tools and resources can be used to satisfy a variety of course design activities, while others are very specific.


GenAI Chatbot Prompt Library for Educators — from aiforeducation.io
We have a variety of prompts to help you with lesson planning and administrative tasks using GenAI chatbots like ChatGPT, Claude, Bard, and Perplexity.

Also relevant/see:

AI for Education — from linkedin.com
Helping teachers and schools unlock their full potential through AI



Google Chrome will summarize entire articles for you with built-in generative AI — from theverge.com by Jay Peters
Google’s AI-powered article summaries are rolling out for iOS and Android first, before coming to Chrome on the desktop.

Google’s AI-powered Search Generative Experience (SGE) is getting a major new feature: it will be able to summarize articles you’re reading on the web, according to a Google blog post. SGE can already summarize search results for you so that you don’t have to scroll forever to find what you’re looking for, and this new feature is designed to take that further by helping you out after you’ve actually clicked a link.


A Definitive Guide to Using Midjourney — from every.to by Lucas Crespo
Everything you need to know about generating AI Images

In this article, I’ll walk you through the most powerful and useful techniques I’ve come across. We’ll cover:

  • Getting started in Midjourney
  • Understanding Midjourney’s quirks with interpreting prompts
  • Customizing Midjourney’s image outputs after the fact
  • Experimenting with a range of styles and content
  • Uploading and combining images to make new ones via image injections
  • Brainstorming art options with parameters like “chaos” and “weird”
  • Finalizing your Midjourney output’s aspect ratio

And much more.


Report: Potential NYT lawsuit could force OpenAI to wipe ChatGPT and start over — from arstechnica.com by Ashley Belanger; via Misha da Vinci
OpenAI could be fined up to $150,000 for each piece of infringing content.

Weeks after The New York Times updated its terms of service (TOS) to prohibit AI companies from scraping its articles and images to train AI models, it appears that the Times may be preparing to sue OpenAI. The result, experts speculate, could be devastating to OpenAI, including the destruction of ChatGPT’s dataset and fines up to $150,000 per infringing piece of content.

NPR spoke to two people “with direct knowledge” who confirmed that the Times’ lawyers were mulling whether a lawsuit might be necessary “to protect the intellectual property rights” of the Times’ reporting.


Midjourney Is Easily Tricked Into Making AI Misinformation, Study Finds — from bloomberg.com (paywall)


AI-generated art cannot be copyrighted, rules a US Federal Judge — from msn.com by Wes Davis; via Tom Barrett


Do you want to Prepare your Students for the AI World? Support your Speech and Debate Team Now — from stefanbauschard.substack.com by Stefan Bauschard
Adding funding to the debate budget is a simple and immediate step administrators can take as part of developing a school’s “AI Strategy.”

 


Gen-AI Movie Trailer For Sci Fi Epic “Genesis” — from forbes.com by Charlie Fink

The movie trailer for “Genesis,” created with AI, is so convincing it caused a stir on Twitter [on July 27]. That’s how I found out about it. Created by Nicolas Neubert, a senior product designer who works for Elli by Volkswagen in Germany, the “Genesis” trailer promotes a dystopian sci-fi epic reminiscent of the Terminator. There is no movie, of course, only the trailer exists, but this is neither a gag nor a parody. It’s in a class of its own. Eerily made by man, but not.



Google’s water use is soaring. AI is only going to make it worse. — from businessinsider.com by Hugh Langley

Google just published its 2023 environmental report, and one thing is for certain: The company’s water use is soaring.

The internet giant said it consumed 5.6 billion gallons of water in 2022, the equivalent of 37 golf courses. Most of that — 5.2 billion gallons — was used for the company’s data centers, a 20% increase on the amount Google reported the year prior.


We think prompt engineering (learning to converse with an AI) is overrated. — from the Neuron

We think prompt engineering (learning to converse with an AI) is overrated. Yup, we said it. We think the future of chat interfaces will be a combination of preloading context and then allowing AI to guide you to the information you seek.

From DSC:
Agreed. I think we’ll see a lot more interface updates and changes that make things easier to use, find, and develop.


Radar Trends to Watch: August 2023 — from oreilly.com by Mike Loukides
Developments in Programming, Web, Security, and More

Artificial Intelligence continues to dominate the news. In the past month, we’ve seen a number of major updates to language models: Claude 2, with its 100,000 token context limit; LLaMA 2, with (relatively) liberal restrictions on use; and Stable Diffusion XL, a significantly more capable version of Stable Diffusion. Does Claude 2’s huge context really change what the model can do? And what role will open access and open source language models have as commercial applications develop?


Try out Google ‘TextFX’ and its 10 creative AI tools for rappers, writers — from 9to5google.com by Abner Li; via Barsee – AI Valley 

Google Lab Sessions are collaborations between “visionaries from all realms of human endeavor” and the company’s latest AI technology. [On 8/2/23], Google released TextFX as an “experiment to demonstrate how generative language technologies can empower the creativity and workflows of artists and creators” with Lupe Fiasco.

Google’s TextFX includes 10 tools and is powered by the PaLM 2 large language model via the PaLM API. Meant to aid in the creative process of rappers, writers, and other wordsmiths, it is part of Google Labs.
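For context, here is a rough sketch of what calling the PaLM API looked like from Python in 2023 through the google-generativeai package. The method and model names are assumptions drawn from that era's documentation, and the library has since changed, so this is a sketch rather than a current reference.

```python
# Hedged sketch of a 2023-era PaLM API call via the google-generativeai package.
# Method and model names are assumptions; the library has since evolved.
import google.generativeai as palm

palm.configure(api_key="your-api-key")  # placeholder

# Ask the text model for the kind of wordplay TextFX is built around.
completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="Give me five similes for the feeling of stage fright.",
    temperature=0.8,
    max_output_tokens=200,
)

print(completion.result)
```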

 
 

YouTube tests AI-generated quizzes on educational videos — from techcrunch.com by Lauren Forristal


YouTube is experimenting with AI-generated quizzes on its mobile app for iOS and Android devices, which are designed to help viewers learn more about a subject featured in an educational video. The feature will also help the video-sharing platform get a better understanding of how well each video covers a certain topic.


Incorporating AI in Teaching: Practical Examples for Busy Instructors — from danielstanford.substack.com by Daniel Stanford; with thanks to Derek Bruff on LinkedIn for the resource

Since January 2023, I’ve talked with hundreds of instructors at dozens of institutions about how they might incorporate AI into their teaching. Through these conversations, I’ve noticed a few common issues:

  • Faculty and staff are overwhelmed and burned out. Even those on the cutting edge often feel they’re behind the curve.
  • It’s hard to know where to begin.
  • It can be difficult to find practical examples of AI use that are applicable across a variety of disciplines.

To help address these challenges, I’ve been working on a list of AI-infused learning activities that encourage experimentation in (relatively) small, manageable ways.


September 2023: The Secret Intelligent Beings on Campus — from stefanbauschard.substack.com by Stefan Bauschard
Many of your students this fall will be enhanced by artificial intelligence, even if they don’t look like actual cyborgs. Do you want all of them to be enhanced, or just the highest SES students?


How to report better on artificial intelligence — from cjr.org (Columbia Journalism Review) by Sayash Kapoor, Hilke Schellmann, and Ari Sen

In the past few months we have been deluged with headlines about new AI tools and how much they are going to change society.

Some reporters have done amazing work holding the companies developing AI accountable, but many struggle to report on this new technology in a fair and accurate way.

We—an investigative reporter, a data journalist, and a computer scientist—have firsthand experience investigating AI. We’ve seen the tremendous potential these tools can have—but also their tremendous risks.

As their adoption grows, we believe that, soon enough, many reporters will encounter AI tools on their beat, so we wanted to put together a short guide to what we have learned.


From DSC:
An image I created via Adobe Firefly (Beta version).

 


The 5 reasons L&D is going to embrace ChatGPT — from chieflearningoffice.com by Josh Bersin

Does this mean it will do away with the L&D job? Not at all — these tools give you superhuman powers to find content faster, put it in front of employees in a more useful way and more creatively craft character simulations, assessments, learning in the flow of work and more.

And it’s about time. We really haven’t had a massive innovation in L&D since the early days of the learning experience platform market, so we may be entering the most exciting era in a long time.

Let me give you the five most significant use cases I see. And more will come.


AI and Tech with Scenarios: ID Links 7/11/23 — from christytuckerlearning.com by Christy Tucker

As I read online, I bookmark resources I find interesting and useful. I share these links periodically here on my blog. This post includes links on using tech with scenarios: AI, xAPI, and VR. I’ll also share some other AI tools and links on usability, resume tips for teachers, visual language, and a scenario sample.



It’s only a matter of time before A.I. chatbots are teaching in primary schools — from cnbc.com by Mikaela Cohen

Key Points

  • Microsoft co-founder Bill Gates says generative AI chatbots can teach kids to read in 18 months rather than years.
  • Artificial intelligence is beginning to prove that it can accelerate the impact teachers have on students and help solve a stubborn teacher shortage.
  • Chatbots backed by large language models can help students, from primary education to certification programs, self-guide through voluminous materials and tailor their education to specific learning styles [preferences].

The Rise of AI: New Rules for Super T Professionals and Next Steps for EdLeaders — from gettingsmart.com by Tom Vander Ark

Key Points

  • The rise of artificial intelligence, especially generative AI, boosts productivity in content creation: text, code, images, and increasingly video.
  • Here are six preliminary conclusions about the nature of work and learning.

The Future Of Education: Embracing AI For Student Success — from forbes.com by Dr. Michael Horowitz

Unfortunately, too often attention is focused on the problems of AI—that it allows students to cheat and can undermine the value of what teachers bring to the learning equation. This viewpoint ignores the immense possibilities that AI can bring to education and across every industry.

The fact is that students have already embraced this new technology, which is neither a new story nor a surprising one in education. Leaders should accept this and understand that people, not robots, must ultimately create the path forward. It is only by deploying resources, training and policies at every level of our institutions that we can begin to realize the vast potential of what AI can offer.


AI Tools in Education: Doing Less While Learning More — from campustechnology.com by Mary Grush
A Q&A with Mark Frydenberg


Why Students & Teachers Should Get Excited about ChatGPT — from ivypanda.com with thanks to Ruth Kinloch for this resource

Table of Contents for the article at IvyPanda.com entitled Why Students & Teachers Should Get Excited about ChatGPT

Excerpt re: Uses of ChatGPT for Teachers

  • Diverse assignments.
  • Individualized approach.
  • Interesting classes.
  • Debates.
  • Critical thinking.
  • Grammar and vocabulary.
  • Homework review.

SAIL: State of Research: AI & Education — from buttondown.email by George Siemens
Information re: current AI and Learning Labs, education updates, and technology


Why ethical AI requires a future-ready and inclusive education system — from weforum.org


A specter is haunting higher education — from aiandacademia.substack.com by Bryan Alexander
Fall semester after the generative AI revolution

In this post I’d like to explore that apocalyptic model. For reasons of space, I’ll leave off analyzing student cheating motivations or questioning the entire edifice of grade-based assessment. I’ll save potential solutions for another post.

Let’s dive into the practical aspects of teaching to see why Mollick and Bogost foresee such a dire semester ahead.


Items re: Code Interpreter

Code Interpreter continues OpenAI’s long tradition of giving terrible names to things, because it might be most useful for those who do not code at all. It essentially allows the most advanced AI available, GPT-4, to upload and download information, and to write and execute programs for you in a persistent workspace. That allows the AI to do all sorts of things it couldn’t do before, and be useful in ways that were impossible with ChatGPT.
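To picture what that persistent workspace does, here is the kind of short, disposable script Code Interpreter might write and run against a file you upload before handing back the results and any generated files. The file name and column names are hypothetical stand-ins for whatever you provide.

```python
# Illustrative example of the throwaway analysis code Code Interpreter writes
# and executes in its sandbox. "sales_2023.csv" and its columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales_2023.csv")            # an uploaded file
monthly = df.groupby("month")["revenue"].sum()

print(monthly.describe())                     # quick summary statistics

monthly.plot(kind="bar", title="Revenue by month")
plt.tight_layout()
plt.savefig("revenue_by_month.png")           # a downloadable artifact
```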



Legal items


MISC items


 

Accenture announces jaw-dropping $3 billion investment in AI — from venturebeat.com by Carl Franzen; via Superhuman

Excerpt:

The generative AI announcements are coming fast and furious these days, but among the biggest in terms of sheer dollar commitments just landed: Accenture, the global professional services and consulting giant, today announced it will invest $3 billion (with a “b”!) in AI over the next three years in building out its team of AI professionals and AI-focused solutions for its clients.

“There is unprecedented interest in all areas of AI, and the substantial investment we are making in our Data & AI practice will help our clients move from interest to action to value, and in a responsible way with clear business cases,” said Julie Sweet, Accenture’s chairwoman and CEO.

Also related/see:

Artificial intelligence creates 40,000 new roles at Accenture — from computerweekly.com by Karl Flinders
Accenture is planning to add thousands of AI experts to its workforce as part of a $3bn investment in its data and artificial intelligence practice

Why leaders need to evolve alongside generative AI — from fastcompany.com by Kelsey Behringer
Even if you’re not an educator, you should not be sitting on the sidelines watching the generative AI conversation being had around you—hop in.

Excerpts (emphasis DSC):

Leaders should be careful to watch and support education right now. At the end of the day, the students sitting in K-12 and college classrooms are going to be future CPAs, lawyers, writers, and teachers. If you are parenting a child, you have skin in the game. If you use professional services, you have skin in the game. When it comes to education, we all have skin in the game.

Students need to master fundamental skills like editing, questioning, researching, and verifying claims before they can use generative AI exceptionally well.

GenAI & Education: Enhancement, not Replacement — from drphilippahardman.substack.com by Dr. Philippa Hardman
How to co-exist in the age of automation

Excerpts (emphasis DSC):

[On 6/15/23, I joined] colleagues from OpenAI, Google, Microsoft, Stanford, Harvard and others at the first meeting of the GenAI Summit. Our shared goal [was] to help educate universities & schools in Europe about the impact of Generative AI on their work.

how can we effectively communicate to education professionals that generative AI will enhance their work rather than replace them?

A recent controlled study found that ChatGPT can help professionals increase their efficiency in routine tasks by ~35%. If we keep in mind that the productivity gains brought by the steam engine in the nineteenth century were ~25%, this is huge.

As educators, we should embrace the power of ChatGPT to automate the repetitive tasks which we’ve been distracted by for decades. Lesson planning, content creation, assessment design, grading and feedback – generative AI can help us to do all of these things faster than ever before, freeing us up to focus on where we bring most value for our students.

Google, one of AI’s biggest backers, warns own staff about chatbots — from reuters.com by Jeffrey Dastin and Anna Tong

Excerpt:

SAN FRANCISCO, June 15 (Reuters) – Alphabet Inc (GOOGL.O) is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, four people familiar with the matter told Reuters.

The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information.

The economic potential of generative AI: The next productivity frontier — from mckinsey.com
Generative AI’s impact on productivity could add trillions of dollars in value to the global economy—and the era is just beginning.



Preparing for the Classrooms and Workplaces of the Future: Generative AI in edX — from campustechnology.com by Mary Grush
A Q&A with Anant Agarwal


Adobe Firefly for the Enterprise — Dream Bigger with Adobe Firefly.
Dream it, type it, see it with Firefly, our creative generative AI engine. Now in Photoshop (beta), Illustrator, Adobe Express, and on the web.


Apple Vision Pro, Higher Education and the Next 10 Years — from insidehighered.com by Joshua Kim
How this technology will play out in our world over the next decade.



Zoom can now give you AI summaries of the meetings you’ve missed — from theverge.com by Emma Roth


Mercedes-Benz Is Adding ChatGPT to Cars for AI Voice Commands — from decrypt.co by Jason Nelson; via Superhuman
The luxury automaker is set to integrate OpenAI’s ChatGPT chatbot into its Mercedes-Benz User Experience (MBUX) feature in the U.S.


 

Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence — from bbc.com by  James Clayton

Excerpt:

The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI). Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee on Tuesday about the possibilities – and pitfalls – of the new technology. In a matter of months, several AI models have entered the market. Mr Altman said a new agency should be formed to license AI companies.

Also related to that item, see:
Why artificial intelligence developers say regulation is needed to keep AI in check — from pbs.org

Excerpt:

Artificial intelligence was a focus on Capitol Hill Tuesday. Many believe AI could revolutionize, and perhaps upend, considerable aspects of our lives. At a Senate hearing, some said AI could be as momentous as the industrial revolution and others warned it’s akin to developing the atomic bomb. William Brangham discussed that with Gary Marcus, who was one of those who testified before the Senate.



Are you ready for the Age of Intelligence? — from linusekenstam.substack.com by Linus Ekenstam
Let me walk you through my current thoughts on where we are, and where we are going.

From DSC:
I post this one to relay the exponential pace of change that Linus also thinks we’ve entered, and to present a knowledgeable person’s perspectives on the future.


Catastrophe / Eucatastrophe — from oneusefulthing.org by Ethan Mollick
We have more agency over the future of AI than we think.

Excerpt (emphasis DSC):

Every organizational leader and manager has agency over what they decide to do with AI, just as every teacher and school administrator has agency over how AI will be used in their classrooms. So we need to be having very pragmatic discussions about AI, and we need to have them right now: What do we want our world to look like?



Also relevant/see:


That wasn’t Google I/O — it was Google AI — from technologyreview.com by Mat Honan
If you thought generative AI was a big deal last year, wait until you see what it looks like in products already used by billions.



What Higher Ed Gets Wrong About AI Chatbots — From the Student Perspective — from edsurge.com by Mary Jo Madda (Columnist)

 

Google I/O 2023: Making AI more helpful for everyone — from blog.google; with thanks to Barsee out at The AI Valley
Editor’s Note: Here is a summary of what we announced at Google I/O 2023. See all the announcements in our collection.

Keynote here:

100 things we announced at I/O 2023 — from blog.google by Molly McHugh-Johnson
A lot happened yesterday — here’s a recap.


Google partners with Adobe to bring art generation to Bard — from techcrunch.com by Kyle Wiggers

Excerpt:

Bard, Google’s answer to OpenAI’s ChatGPT, is getting new generative AI capabilities courtesy of Adobe.

Adobe announced today that Firefly, its recently introduced collection of AI models for generating media content, is coming to Bard alongside Adobe’s free graphic design tool, Adobe Express. Firefly — currently in public beta — will become the “premier generative AI partner” for Bard, Adobe says, powering text-to-image capabilities.

Also relevant/see:

 



 

From DSC:
As Rob Toews points out in his recent article out at Forbes.com, we had better hope that the Taiwan Semiconductor Manufacturing Company (TSMC) builds out the capacity to make chips in various countries. Why? Because:

The following statement is utterly ludicrous. It is also true. The world’s most important advanced technology is nearly all produced in a single facility.

What’s more, that facility is located in one of the most geopolitically fraught areas on earth—an area in which many analysts believe that war is inevitable within the decade.

The future of artificial intelligence hangs in the balance.

The Taiwan Semiconductor Manufacturing Company (TSMC) makes ***all of the world’s advanced AI chips.*** Most importantly, this means Nvidia’s GPUs; it also includes the AI chips from Google, AMD, Amazon, Microsoft, Cerebras, SambaNova, Untether and every other credible competitor.

— from The Geopolitics Of AI Chips Will Define The Future Of AI
out at Forbes.com by Rob Toews

Little surprise, then, that Time Magazine described TSMC
as “the world’s most important company that you’ve
probably never heard of.”

 


From DSC:
If that facility was actually the only one and something happened to it, look at how many things would be impacted as of early May 2023!


 

Examples of generative AI models

 

Radar Trends to Watch: May 2023 Developments in Programming, Security, Web, and More — from oreilly.com by Mike Loukides

Excerpt:

Large language models continue to colonize the technology landscape. They’ve broken out of the AI category, and now are showing up in security, programming, and even the web. That’s a natural progression, and not something we should be afraid of: they’re not coming for our jobs. But they are remaking the technology industry.

One part of this remaking is the proliferation of “small” large language models. We’ve noted the appearance of llama.cpp, Alpaca, Vicuna, Dolly 2.0, Koala, and a few others. But that’s just the tip of the iceberg. Small LLMs are appearing every day, and some will even run in a web browser. This trend promises to be even more important than the rise of the “large” LLMs, like GPT-4. Only a few organizations can build, train, and run the large LLMs. But almost anyone can train a small LLM that will run on a well-equipped laptop or desktop.
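As a rough illustration of how low the barrier has become, here is a hedged sketch of running one of these small models on a laptop through the llama-cpp-python bindings to llama.cpp (mentioned above). The model path is a placeholder for whatever locally downloaded weights you use, and exact parameters may differ by version.

```python
# Hedged sketch: running a "small" LLM locally with llama-cpp-python.
# The model path is a placeholder for locally downloaded weights.
from llama_cpp import Llama

llm = Llama(model_path="models/7B/model.gguf")  # placeholder path

output = llm(
    "Q: In one sentence, what is a large language model? A:",
    max_tokens=64,
    stop=["Q:"],  # stop before the model invents another question
)

print(output["choices"][0]["text"].strip())
```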

 
 

Google Devising Radical Search Changes to Beat Back A.I. Rivals — from nytimes.com by Nico Grant
The tech giant is sprinting to protect its core business with a flurry of projects, including updates to its search engine and plans for an all-new one.

Excerpt:

Google’s employees were shocked when they learned in March that the South Korean consumer electronics giant Samsung was considering replacing Google with Microsoft’s Bing as the default search engine on its devices.

For years, Bing had been a search engine also-ran. But it became a lot more interesting to industry insiders when it recently added new artificial intelligence technology.

A.I. competitors like the new Bing are quickly becoming the most serious threat to Google’s search business in 25 years, and in response, Google is racing to build an all-new search engine powered by the technology. It is also upgrading the existing one with A.I. features, according to internal documents reviewed by The Times.

P.S. According to The Rundown’s take on this:

The bottom line: Google is replacing the old-school method of displaying 10 results per page with an intelligent chatbot that provides instant answers.


Also relevant/see:

Google planning new search engine while working on new search features under Project Magi — from searchengineland.com by Barry Schwartz
Project Magi will help searchers complete transactions while incorporating search ads on the page.


Also relevant/see:

 

How to use AI to do practical stuff: A new guide — from oneusefulthing.substack.com by Ethan Mollick
People often ask me how to use AI. Here’s an overview with lots of links.

Excerpts:

We live in an era of practical AI, but many people haven’t yet experienced it, or, if they have, they might have wondered what the big deal is. Thus, this guide. It is a modified version of one I put out for my students earlier in the year, but a lot has changed. It is an overview of ways to get AI to do practical things.

I want to try to show you some of why AI is powerful, in ways both exciting and anxiety-producing.

The Six Large Language Models


Also see Ethan’s posting:

Power and Weirdness: How to Use Bing AI
Bing AI is a huge leap over ChatGPT, but you have to learn its quirks


 

Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: ‘We are at the iPhone moment of AI’ — from venturebeat.com by Sharon Goldman

Excerpt:

As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly-anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”

Introducing Mozilla.ai: Investing in trustworthy AI — from blog.mozilla.org by Mark Surman
We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.

Excerpt (emphasis DSC):

We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007.

New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives.

This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.    

ChatGPT plugins — from openai.com

Excerpt:

Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.
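Mechanically, a plugin was little more than a small manifest pointing ChatGPT at an existing API described by an OpenAPI spec. The sketch below, written as a Python dict, shows roughly what that manifest contained; the field names are assumptions drawn from publicly shared alpha examples, not an official spec, and the URLs are placeholders.

```python
# Hedged sketch of a ChatGPT plugin manifest (ai-plugin.json) from the alpha
# period. Field names are assumptions from public examples; URLs are placeholders.
import json

manifest = {
    "schema_version": "v1",
    "name_for_human": "Todo Demo",
    "name_for_model": "todo_demo",
    "description_for_human": "Manage a simple to-do list.",
    "description_for_model": "Plugin for adding, listing, and deleting to-do items.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "dev@example.com",
    "legal_info_url": "https://example.com/legal",
}

with open("ai-plugin.json", "w") as f:
    json.dump(manifest, f, indent=2)
```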



Bots like ChatGPT aren’t sentient. Why do we insist on making them seem like they are? — from cbc.ca by Matt Meuse
‘There’s no secret homunculus inside the system that’s understanding what you’re talking about’

Excerpt:

LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is “auto-complete on steroids.”
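That "auto-complete on steroids" framing is easy to see for yourself with a small open model. The sketch below asks GPT-2, via the transformers library, which tokens it considers most plausible next; production chatbots work on the same next-token principle at vastly larger scale.

```python
# Toy illustration of next-token prediction using the small open GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probabilities for the token the model would place next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```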

Marcus says it’s important to understand that even though the results sound human, these systems don’t “understand” the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.

“We’re doing a kind of anthropomorphization … where we’re attributing some kind of animacy and life and intelligence there that isn’t really,” he said.


10 gifts we unboxed at Canva Create — from canva.com
Earlier this week we dropped 10 unopened gifts onto the Canva homepage of 125 million people across the globe. Today, we unwrapped them on the stage at Canva Create.


Google Bard Plagiarized Our Article, Then Apologized When Caught — from tomshardware.com by Avram Piltch
The chatbot implied that it had conducted its own CPU tests.

 