Expanding Bard’s understanding of YouTube videos — via AI Valley

  • What: We’re taking the first steps in Bard’s ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
  • Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.

Reshaping the tree: rebuilding organizations for AI — from oneusefulthing.org by Ethan Mollick
Technological change brings organizational change.

I am not sure who said it first, but there are only two ways to react to exponential change: too early or too late. Today’s AIs are flawed and limited in many ways. While that restricts what AI can do, the capabilities of AI are increasing exponentially, both in terms of the models themselves and the tools these models can use. It might seem too early to consider changing an organization to accommodate AI, but I think that there is a strong possibility that it will quickly become too late.

From DSC:
Readers of this blog have seen the following graphic for several years now, and there is no question that we are in a time of exponential change. It has become increasingly hard to argue otherwise over that time.

 


 



Nvidia’s revenue triples as AI chip boom continues — from cnbc.com by Jordan Novet; via GSV

KEY POINTS

  • Nvidia’s results surpassed analysts’ projections for revenue and income in the fiscal fourth quarter.
  • Demand for Nvidia’s graphics processing units has been exceeding supply, thanks to the rise of generative artificial intelligence.
  • Nvidia announced the GH200 GPU during the quarter.

Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:

  • Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
  • Revenue: $18.12 billion, vs. $16.18 billion expected

Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.
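
For readers reconciling the "triples" in the headline with the 206% figure: growth of 206% means revenue was roughly 3.06 times the year-ago quarter. A quick back-of-the-envelope check, derived only from the numbers quoted above (not from Nvidia's filing):

```python
# Sanity check using only the figures quoted above (derived, not filed numbers).
q_revenue = 18.12e9   # reported quarterly revenue, in USD
yoy_growth = 2.06     # 206% year-over-year growth

implied_year_ago = q_revenue / (1 + yoy_growth)
print(f"Implied year-ago revenue: ${implied_year_ago / 1e9:.2f}B")  # about $5.92B
print(f"Multiple of year-ago quarter: {1 + yoy_growth:.2f}x")       # about 3.06x, hence "triples"
```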



 

From DSC:
The recent drama over at OpenAI reminds me of how much influence a few individuals have over the lives of millions of people.

The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others hold enormous power. Why? Because of the power and reach of the technologies that they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking. This is for real, with real consequences. If you doubt that, go ask the families whose sons and daughters took their own lives because of what happened on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit, and I’m not bashing these platforms per se. But my point is that a variety of technologies have real impacts, and what goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implications.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminate about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.


 

 

Shocking AI Statistics in 2023 — from techthatmatters.beehiiv.com by Harsh Makadia

  1. ChatGPT reached 100 million users faster than any other app. By February 2023, the chat.openai.com website saw an average of 25 million daily visitors. How can this rise in AI usage benefit your business?
  2. 45% of executives say the popularity of ChatGPT has led them to increase investment in AI. If executives are investing in AI personally, then how will their beliefs affect corporate investment in AI to drive automation further? Also, how will this affect the number of workers hired to manage AI systems within companies?
  3. eMarketer predicts that in 2024 at least 20% of Americans will use ChatGPT monthly and that a fifth of them are 25-34 year olds in the workforce. Does this mean that there are more young workers using AI?
  4. …plus 10 more stats

People are speaking with ChatGPT for hours, bringing 2013’s Her closer to reality — from arstechnica.com by Benj Edwards
Long mobile conversations with the AI assistant using AirPods echo the sci-fi film.

It turns out that Willison’s experience is far from unique. Others have been spending hours talking to ChatGPT using its voice recognition and voice synthesis features, sometimes through car connections. The realistic nature of the voice interaction feels largely effortless, but it’s not flawless. Sometimes, it has trouble in noisy environments, and there can be a pause between statements. But the way the ChatGPT voices simulate vocal tics and noises feels very human. “I’ve been using the voice function since yesterday and noticed that it makes breathing sounds when it speaks,” said one Reddit user. “It takes a deep breath before starting a sentence. And today, actually a minute ago, it coughed between words while answering my questions.”

From DSC:
Hmmmmmmm…I’m not liking the sound of this at first take. But perhaps there are some real positives to this. I need to keep an open mind.


Working with AI: Two paths to prompting — from oneusefulthing.org by Ethan Mollick
Don’t overcomplicate things

  1. Conversational Prompting [From DSC: i.e., keep it simple]
  2. Structured Prompting

For most people, [Conversational Prompting] is good enough to get started, and it is the technique I use most of the time when working with AI. Don’t overcomplicate things, just interact with the system and see what happens. After you have some experience, however, you may decide that you want to create prompts you can share with others, prompts that incorporate your expertise. We call this approach Structured Prompting, and, while improving AIs may make it irrelevant soon, it is currently a useful tool for helping others by encoding your knowledge into a prompt that anyone can use.
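
To make the distinction concrete, here is a minimal sketch of what a reusable Structured Prompt could look like once encoded for sharing. This is not Mollick's own prompt; it assumes the OpenAI Python client (v1+) with an OPENAI_API_KEY set in the environment, and the tutor persona, prompt text, and review_essay helper are all illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# A "structured prompt": expertise encoded once, so anyone can reuse it.
ESSAY_FEEDBACK_PROMPT = """You are an experienced writing tutor.
Give feedback on the essay the user provides, in this order:
1. One sentence summarizing the essay's main argument.
2. The two biggest weaknesses, each with a concrete suggestion.
3. One strength worth keeping.
Keep the tone encouraging and do not rewrite the essay for the student."""

def review_essay(essay_text: str, model: str = "gpt-4") -> str:
    """Run the structured prompt against an essay and return the feedback."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": ESSAY_FEEDBACK_PROMPT},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

# Usage: print(review_essay(open("draft.txt").read()))
```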


These fake images reveal how AI amplifies our worst stereotypes — from washingtonpost.com by Nitasha Tiku, Kevin Schaul, and Szu Yu Chen (behind paywall)
AI image generators like Stable Diffusion and DALL-E amplify bias in gender and race, despite efforts to detoxify the data fueling these results.

Artificial intelligence image tools have a tendency to spin up disturbing clichés: Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black.

These stereotypes don’t reflect the real world; they stem from the data that trains the technology. Grabbed from the internet, these troves can be toxic — rife with pornography, misogyny, violence and bigotry.

Abeba Birhane, senior advisor for AI accountability at the Mozilla Foundation, contends that the tools can be improved if companies work hard to improve the data — an outcome she considers unlikely. In the meantime, the impact of these stereotypes will fall most heavily on the same communities harmed during the social media era, she said, adding: “People at the margins of society are continually excluded.”


ChatGPT app revenue shows no signs of slowing, but some other AI apps top it — from techcrunch.com by Sarah Perez; Via AI Valley – Barsee

ChatGPT, the AI-powered chatbot from OpenAI, far outpaces all other AI chatbot apps on mobile devices in terms of downloads and is a market leader by revenue, as well. However, it’s surprisingly not the top AI app by revenue — several photo AI apps and even other AI chatbots are actually making more money than ChatGPT, despite the latter having become a household name for an AI chat experience.


ChatGPT can now analyze files you upload to it without a plugin — from bgr.com by Joshua Hawkins; via Superhuman

According to new reports, OpenAI has begun rolling out a more streamlined approach to how people use ChatGPT. The new system will allow the AI to choose a model automatically, letting you run Python code, open a web browser, or generate images with DALL-E without extra interaction. Additionally, ChatGPT will now let you upload and analyze files.

 

60+ Ideas for ChatGPT Assignments — from stars.library.ucf.edu by Kevin Yee, Kirby Whittington, Erin Doggette, and Laurie Uttich

60+ ideas for using ChatGPT in your assignments today


Artificial intelligence is disrupting higher education — from itweb.co.za by Rennie Naidoo; via GSV
Traditional contact universities need to adapt faster and find creative ways of exploring and exploiting AI, or lose their dominant position.

Higher education professionals have a responsibility to shape AI as a force for good.


Introducing Canva’s biggest education launch — from canva.com
We’re thrilled to unveil our biggest education product launch ever. Today, we’re introducing a whole new suite of products that turn Canva into the all-in-one classroom tool educators have been waiting for.

Also see Canva for Education: Create and personalize lesson plans, infographics, posters, videos, and more. 100% free for teachers and students at eligible schools.


ChatGPT and generative AI: 25 applications to support student engagement — from timeshighereducation.com by Seb Dianati and Suman Laudari
In the fourth part of their series looking at 100 ways to use ChatGPT in higher education, Seb Dianati and Suman Laudari share 25 prompts for the AI tool to boost student engagement


There are two ways to use ChatGPT — from theneurondaily.com

  1. Type to it.
  2. Talk to it (new).


Since then, we’ve looked to it for a variety of real-world business advice. For example, Prof Ethan Mollick posted a great guide to using ChatGPT-4 with voice as a negotiation instructor.

In a similar fashion, you can consult ChatGPT with voice for feedback on:

  • Job interviews.
  • Team meetings.
  • Business presentations.



Via The Rundown: Google is using AI to analyze the company’s Maps data and suggest adjustments to traffic light timing — aiming to cut driver waits, stops, and emissions.


Google Pixel’s face-altering photo tool sparks AI manipulation debate — from bbc.com by Darren Waters

The camera never lies. Except, of course, it does – and seemingly more often with each passing day.
In the age of the smartphone, digital edits on the fly to improve photos have become commonplace, from boosting colours to tweaking light levels.

Now, a new breed of smartphone tools powered by artificial intelligence (AI) is adding to the debate about what it means to photograph reality.

Google’s latest smartphones released last week, the Pixel 8 and Pixel 8 Pro, go a step further than devices from other companies. They are using AI to help alter people’s expressions in photographs.



From Digital Native to AI-Empowered: Learning in the Age of Artificial Intelligence — from campustechnology.com by Kim Round
The upcoming generation of learners will enter higher education empowered by AI. How can institutions best serve these learners and prepare them for the workplace of the future?

Dr. Chris Dede, of Harvard University and Co-PI of the National AI Institute for Adult Learning and Online Education, spoke about the differences between knowledge and wisdom in AI-human interactions in a keynote address at the 2022 Empowering Learners for the Age of AI conference. He drew a parallel between Star Trek: The Next Generation characters Data and Picard during complex problem-solving: while Data offers the knowledge and information, Captain Picard offers the wisdom and context that come with the leadership mantle, and determines the relevance, timing, and application of that knowledge.


The Near-term Impact of Generative AI on Education, in One Sentence — from opencontent.org by David Wiley

This “decreasing obstacles” framing turned out to be helpful in thinking about generative AI. When the time came, my answer to the panel question, “how would you summarize the impact generative AI is going to have on education?” was this:

“Generative AI greatly reduces the degree to which access to expertise is an obstacle to education.”

We haven’t even started to unpack the implications of this notion yet, but hopefully just naming it will give the conversation focus, give people something to disagree with, and help the conversation progress more quickly.


How to Make an AI-Generated Film — from heatherbcooper.substack.com by Heather Cooper
Plus, Midjourney finally has a new upscale tool!


Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning — from blogs.nvidia.com by Angie Lee
AI agent uses LLMs to automatically generate reward algorithms to train robots to accomplish complex tasks.

From DSC:
I’m not excited about this, as I can’t help but wonder…how long before the militaries of the world introduce this into their warfare schemes and strategies?


The 93 Questions Schools Should Ask About AI — from edweek.org by Alyson Klein

The toolkit recommends schools consider:

  • Purpose: How can AI help achieve educational goals?
  • Compliance: How does AI fit with existing policies?
  • Knowledge: How can schools advance AI Literacy?
  • Balance: What are the benefits and risks of AI?
  • Integrity: How does AI fit into policies on things like cheating?
  • Agency: How can humans stay in the loop on AI?
  • Evaluation: How can schools regularly assess the impact of AI?
 

41 states sue Meta, claiming Instagram, Facebook are addictive, harm kids — from washingtonpost.com by Cristiano Lima and Naomi Nix
The action marks the most sprawling state challenge to date over social media’s impact on children’s mental health

Forty-one states and the District of Columbia are suing Meta, alleging that the tech giant harms children by building addictive features into Instagram and Facebook. Tuesday’s legal actions represent the most significant effort by state enforcers to rein in the impact of social media on children’s mental health.

 

WHAT WAS GARY MARCUS THINKING, IN THAT INTERVIEW WITH GEOFF HINTON? — from linkedin.com by Stephen Downes

Background (emphasis DSC): 60 Minutes did an interview with ‘the Godfather of AI’, Geoffrey Hinton. In response, Gary Marcus wrote a column in which he inserted his own set of responses into the transcript, as though he were a panel participant. Neat idea. So, of course, I’m stealing it, and in what follows, I insert my own comments as I join the 60 Minutes panel with Geoffrey Hinton and Gary Marcus.

Usually I put everyone else’s text in italics, but for this post I’ll put it all in normal font, to keep the format consistent.

Godfather of Artificial Intelligence Geoffrey Hinton on the promise, risks of advanced AI


OpenAI’s Revenue Skyrockets to $1.3 Billion Annualized Rate — from maginative.com by Chris McKay
This means the company is generating over $100 million per month—a 30% increase from just this past summer.

OpenAI, the company behind the viral conversational AI ChatGPT, is experiencing explosive revenue growth. The Information reports that CEO Sam Altman told the staff this week that OpenAI’s revenue is now crossing $1.3 billion on an annualized basis. This means the company is generating over $100 million per month—a 30% increase from just this past summer.

Since the launch of a paid version of ChatGPT in February, OpenAI’s financial growth has been nothing short of meteoric. Additionally, in August, the company announced the launch of ChatGPT Enterprise, a commercial version of its popular conversational AI chatbot aimed at business users.

For comparison, OpenAI’s total revenue for all of 2022 was just $28 million. The launch of ChatGPT has turbocharged OpenAI’s business, positioning it as a bellwether for demand for generative AI.



From 10/13:


New ways to get inspired with generative AI in Search — from blog.google
We’re testing new ways to get more done right from Search, like the ability to generate imagery with AI or creating the first draft of something you need to write.

 

Next month Microsoft Corp. will start making its artificial intelligence features for Office widely available to corporate customers. Soon after, that will include the ability for it to read your emails, learn your writing style and compose messages on your behalf.

From DSC:
As readers of this blog know, I’m generally pro-technology. I see most technologies as tools — which can be used for good or for ill. So I will post items both pro and con concerning AI.

But outsourcing email communications to AI isn’t on my wish list or to-do list.

 

Deepfakes: An evidentiary tsunami! — from thebrainyacts.beehiiv.com by Josh Kubicki

Excerpt (emphasis DSC):

I’ve written and spoken about this before but the rise of deepfakes is going to have a profound impact on courts throughout the world. This week we saw three major deepfake stories.

Whether you are a lawyer or not, this topic will impact you. So, please consider these questions as we will need to have answers for each one very soon (if not now).

  1. How will we establish a reliable and consistent standard to authenticate digital evidence as genuine and not altered by deepfake technology?
  2. Will the introduction of deepfakes shift the traditional burdens of proof or production, especially when digital evidence is introduced?
  3. Will courts require expert witnesses for digital evidence authentication in every case, and what standards will be used to qualify these experts?
  4. Are there existing technological tools or methods to detect deepfakes? (Yes, but they are not 100% reliable.) How can courts keep abreast of rapidly advancing technology?
  5. …plus several more questions

From DSC:
What are law schools doing about this? Are they addressing this?


And speaking of legal matters and law schools, this might be interesting or helpful to someone out there:

 

The Prompt #14: Your Guide to Custom Instructions — from noisemedia.ai by Alex Banks

Whilst we typically cover a single ‘prompt’ to use with ChatGPT, today we’re exploring a new feature now available to everyone: custom instructions.

You provide specific directions for ChatGPT leading to greater control of the output. It’s all about guiding the AI to get the responses you really want.

To get started:
Log into ChatGPT → click on your name/email in the bottom-left corner → select ‘Custom instructions’


Meet Zoom AI Companion, your new AI assistant! Unlock the benefits with a paid Zoom account — from blog.zoom.us by Smita Hashim

We’re excited to introduce you to AI Companion (formerly Zoom IQ), your new generative AI assistant across the Zoom platform. AI Companion empowers individuals by helping them be more productive, connect and collaborate with teammates, and improve their skills.

Envision being able to interact with AI Companion through a conversational interface and ask for help on a whole range of tasks, similarly to how you would with a real assistant. You’ll be able to ask it to help prepare for your upcoming meeting, get a consolidated summary of prior Zoom meetings and relevant chat threads, and even find relevant documents and tickets from connected third-party applications with your permission.

From DSC:
“You can ask AI Companion to catch you up on what you missed during a meeting in progress.”

And what if some key details were missed? Should you rely on this? I’d treat this with care/caution myself.



A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data — from fortune.com by Stephen Pastis (behind paywall)

That’s because, as it turns out, it’s nearly impossible to remove a user’s data from a trained A.I. model without resetting the model and forfeiting the extensive money and effort put into training it. To use a human analogy, once an A.I. has “seen” something, there is no easy way to tell the model to “forget” what it saw. And deleting the model entirely is also surprisingly difficult.

This represents one of the thorniest unresolved challenges of our incipient artificial intelligence era, alongside issues like A.I. “hallucinations” and the difficulties of explaining certain A.I. outputs.


More companies see ChatGPT training as a hot job perk for office workers — from cnbc.com by Mikaela Cohen

Key points:

  • Workplaces filled with artificial intelligence are closer to becoming a reality, making it essential that workers know how to use generative AI.
  • Offering specific AI chatbot training to current employees could be your next best talent retention tactic.
  • 90% of business leaders see ChatGPT as a beneficial skill in job applicants, according to a report from career site Resume Builder.

OpenAI Plugs ChatGPT Into Canva to Sharpen Its Competitive Edge in AI — from decrypt.co by Jose Antonio Lanz
Now ChatGPT Plus users can “talk” to Canva directly from OpenAI’s bot, making their workflow easier.

This strategic move aims to make the process of creating visuals such as logos, banners, and more, even more simple for businesses and entrepreneurs.

This latest integration could improve the way users generate visuals by offering a streamlined and user-friendly approach to digital design.


From DSC:
This Tweet addresses a likely component of our future learning ecosystems:


Large language models aren’t people. Let’s stop testing them as if they were. — from technologyreview.com by Will Douglas Heaven
With hopes and fears about this technology running wild, it’s time to agree on what it can and can’t do.

That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way they are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

“There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.”

“There is a long history of developing methods to test the human mind,” says Laura Weidinger, a senior research scientist at Google DeepMind. “With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that’s not true: human psychology tests rely on many assumptions that may not hold for large language models.”


We Analyzed Millions of ChatGPT User Sessions: Visits are Down 29% since May, Programming Assistance is 30% of Use — from sparktoro.com by Rand Fishkin

In concert with the fine folks at Datos, whose opt-in, anonymized panel of 20M devices (desktop and mobile, covering 200+ countries) provides outstanding insight into what real people are doing on the web, we undertook a challenging project to answer at least some of the mystery surrounding ChatGPT.



Crypto in ‘arms race’ against AI-powered scams — Quantstamp co-founder — from cointelegraph.com by Tom Mitchelhill
Quantstamp’s Richard Ma explained that the coming surge in sophisticated AI phishing scams could pose an existential threat to crypto organizations.

With the field of artificial intelligence evolving at near breakneck speed, scammers now have access to tools that can help them execute highly sophisticated attacks en masse, warns the co-founder of Web3 security firm Quantstamp.


 

Introductory comments from DSC:

Sometimes people and vendors write about AI’s capabilities in such a glowingly positive way. It seems like AI can do everything in the world. And while I appreciate the growing capabilities of Large Language Models (LLMs) and the like, there are some things I don’t want AI-driven apps to do.

For example, I get why AI can be helpful in correcting my misspellings, my grammatical errors, and the like. That said, I don’t want AI to write my emails for me. I want to write my own emails. I want to communicate what I want to communicate. I don’t want to outsource my communication. 

And what if an AI tool summarizes an email series in a way that I miss some key pieces of information? Hmmm…not good.

Ok, enough soapboxing. I’ll continue with some resources.


ChatGPT Enterprise

Introducing ChatGPT Enterprise — from openai.com
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.

We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.

Enterprise-grade security & privacy and the most powerful version of ChatGPT yet. — from openai.com


NVIDIA

Nvidia’s Q2 earnings prove it’s the big winner in the generative AI boom — from techcrunch.com by Kirsten Korosec

Nvidia Quarterly Earnings Report Q2 Smashes Expectations At $13.5B — from techbusinessnews.com.au
Nvidia’s quarterly earnings report (Q2) smashed expectations, coming in at $13.5B, more than double the $6.7B from the same quarter a year earlier. The chipmaker also projected roughly $16B in revenue for the October quarter.


MISC

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending — from theinformation.com by Amir Efrati and Aaron Holmes

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That’s far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

OpenAI’s GPTBot blocked by major websites and publishers — from the-decoder.com by Matthias Bastian
An emerging chatbot ecosystem builds on existing web content and could displace traditional websites. At the same time, licensing and financing are largely unresolved.

OpenAI offers publishers and website operators an opt-out if they prefer not to make their content available to chatbots and AI models for free. This can be done by blocking OpenAI’s web crawler “GPTBot” via the robots.txt file. The bot collects content to improve future AI models, according to OpenAI.

Major media companies including the New York Times, CNN, Reuters, Chicago Tribune, ABC, and Australian Community Media (ACM) are now blocking GPTBot. Other web-based content providers such as Amazon, Wikihow, and Quora are also blocking the OpenAI crawler.
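
For anyone curious what that opt-out looks like in practice, the block is an ordinary robots.txt rule: a GPTBot user-agent line followed by a Disallow line. The sketch below, which uses only the Python standard library and a placeholder domain, checks whether a given site currently allows GPTBot:

```python
# Check whether a site's robots.txt permits OpenAI's GPTBot crawler.
# The blocking rule itself is just two lines in robots.txt:
#   User-agent: GPTBot
#   Disallow: /
from urllib.robotparser import RobotFileParser

def gptbot_allowed(site: str, path: str = "/") -> bool:
    """Return True if robots.txt at `site` permits GPTBot to fetch `path`."""
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetches and parses the live robots.txt file
    return parser.can_fetch("GPTBot", f"{site.rstrip('/')}{path}")

# Example (placeholder domain):
# print(gptbot_allowed("https://example.com"))
```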

Introducing Code Llama, a state-of-the-art large language model for coding  — from ai.meta.com

Takeaways re: Code Llama:

  • Is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.
  • Is free for research and commercial use.
  • Is built on top of Llama 2 and is available in three models…
  • In our own benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs on code tasks

Key Highlights of Google Cloud Next ‘23 — from analyticsindiamag.com by Shritama Saha
Meta’s Llama 2, Anthropic’s Claude 2, and TII’s Falcon join Model Garden, expanding model variety.

AI finally beats humans at a real-life sport — drone racing — from nature.com by Dan Fox
The new system combines simulation with onboard sensing and computation.

From DSC:
This is scary — not at all comforting to me. Militaries around the world continue their jockeying to be the most dominant, powerful, and effective killers of humankind. That definitely includes the United States and China. But certainly others as well. And below is another alarming item, also pointing out the downsides of how we use technologies.

The Next Wave of Scams Will Be Deepfake Video Calls From Your Boss — from bloomberg.com by Margi Murphy; behind paywall

Cybercriminals are constantly searching for new ways to trick people. One of the more recent additions to their arsenal was voice simulation software.

10 Great Colleges For Studying Artificial Intelligence — from forbes.com by Sim Tumay

The debut of ChatGPT in November created angst for college admission officers and professors worried they would be flooded by student essays written with the undisclosed assistance of artificial intelligence. But the explosion of interest in AI has benefits for higher education, including a new generation of students interested in studying and working in the field. In response, universities are revising their curriculums to educate AI engineers.

 


ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs said its AI voice generator is out of beta and that it would support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model promises it can produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: a text-to-speech model and “VoiceLab,” which lets paying users create a voice clone by inputting fragments of their own (or others’) speech into the model. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

But Hugging Face produces a platform where AI developers can share code, models, data sets, and use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.
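
As a rough illustration of what "getting a model working quickly" from the Hub can look like, here is a minimal sketch using Hugging Face's transformers library. The specific model ID (distilgpt2) is just one small, publicly hosted example, and the prompt is invented:

```python
# Minimal example of pulling a hosted model off the Hugging Face Hub and
# running it locally with the transformers library's pipeline API.
from transformers import pipeline

# Downloads the model weights and tokenizer from the Hub on first use.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Open-source AI models are", max_new_tokens=30)
print(result[0]["generated_text"])
```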


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, they also need to work more with local tech schools, vocational schools, and community colleges; and other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlayListAI (previously LinupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve. You just have to type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip.

If you can’t think of a prompt, the app has prompts in different categories, including video, lo-fi, podcast, gaming, meditation and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 


How to spot deepfakes created by AI image generators | Can you trust your eyes? | The deepfake election — from axios.com by various; via Tom Barrett

As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.

Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.

Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz:


4 Charts That Show Why AI Progress Is Unlikely to Slow Down — from time.com; with thanks to Donald Clark out on LinkedIn for this resource


The state of AI in 2023: Generative AI’s breakout year — from McKinsey.com

Table of Contents

  1. It’s early days still, but use of gen AI is already widespread
  2. Leading companies are already ahead with gen AI
  3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  4. With all eyes on gen AI, AI adoption and impact remain steady
  5. About the research

Top 10 Chief AI Officers — from aimagazine.com

The Chief AI Officer is a relatively new job role, yet it is becoming increasingly important as businesses invest further in AI.

Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this type of evolving technology can provide. This job position sees the employee responsible for guiding companies through complex AI tools, algorithms and development. All of this works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.


NVIDIA-related items

SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announces updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, updates NVIDIA Omniverse with generative AI.

The hottest commodity in AI right now isn’t ChatGPT — it’s the $40,000 chip that has sparked a frenzied spending spree — from businessinsider.com by Hasan Chowdhury

NVIDIA Releases Major Omniverse Upgrade with Generative AI and OpenUSD — from enterpriseai.news

Nvidia teams up with Hugging Face to offer cloud-based AI training — from techcrunch.com by Kyle Wiggers

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ — from cnbc.com by Kif Leswing

KEY POINTS

  • On Tuesday, Nvidia announced a new chip designed to run artificial intelligence models.
  • Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
  • “This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.

Nvidia Has A Monopoly On AI Chips … And It’s Only Growing — from theneurondaily.com by The Neuron

In layman’s terms: Nvidia is on fire, and they’re only turning up the heat.


AI-Powered War Machines: The Future of Warfare Is Here — from readwrite.com by Deanna Ritchie

The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.

From DSC:
Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.


Sam Altman is already nervous about what AI might do in elections — from qz.com by Faustine Ngila; via Sam DeBrule
The OpenAI chief warned about the power of AI-generated media to potentially influence the vote

Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.


Your guide to AI: August 2023 — from nathanbenaich.substack.com by Nathan Benaich

Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.

Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.


Why generative AI is a game-changer for customer service workflows — from venturebeat.com via Superhuman

Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.

BUT ALSO

Sam Altman: “AI Will Replace Customer Service Jobs First” — from theneurondaily.com

Excerpt:

Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU.  All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.

The use cases are endless, but here are two immediate ones:

  1. Hyperpersonalized content.
    Imagine your favorite Netflix show but with every person hearing a slightly different script.
  2. Customer support agents. 
    We’re talking about ones that are actually helpful, a far cry from the norm!


AI has a Usability Problem — from news.theaiexchange.com
Why ChatGPT usage may actually be declining; using AI to become a spreadsheet pro

If you’re reading this and are using ChatGPT on a daily basis, congrats – you’re likely in the top couple of %.

For everyone else – AI still has a major usability problem.

From DSC:
Agreed.



From the ‘godfathers of AI’ to newer people in the field: Here are 16 people you should know — and what they say about the possibilities and dangers of the technology. — from businessinsider.com by Lakshmi Varanasi


 

AI for Education Webinars — from youtube.com by Tom Barrett and others



Post-AI Assessment Design — from drphilippahardman.substack.com by Dr. Philippa Hardman
A simple, three-step guide on how to design assessments in a post-AI world

Excerpt:

Step 1: Write Inquiry-Based Objectives
Inquiry-based objectives focus not just on the acquisition of knowledge but also on the development of skills and behaviours, like critical thinking, problem-solving, collaboration and research skills.

They do this by requiring learners not just to recall or “describe back” concepts that are delivered via text, lecture or video. Instead, inquiry-based objectives require learners to construct their own understanding through the process of investigation, analysis and questioning.



Massive Disruption Now: What AI Means for Students, Educators, Administrators and Accreditation Boards — from stefanbauschard.substack.com by Stefan Bauschard; via Will Richardson on LinkedIn
The choices many colleges and universities make regarding AI over the next 9 months will determine if they survive. The same may be true for schools.

Excerpts:

Just for a minute, consider how education would change if the following were true:

  • AIs “hallucinated” less than humans
  • AIs could write in our own voices
  • AIs could accurately do math
  • AIs understood the unique academic (and eventually developmental) needs of each student and adapted instruction to that student
  • AIs could teach anything any student wanted or needed to know at any time of day or night
  • AIs could do this at a fraction of the cost of a human teacher or professor

Fall 2026 is three years away. Do you have a three-year plan? Perhaps you should scrap it and write a new one (or at least realize that your current one cannot survive). If you run an academic institution in 2026 the same way you ran it in 2022, you might as well run it like you would have in 1920. If you run an academic institution in 2030 (or any year when AI surpasses human intelligence) the same way you ran it in 2022, you might as well run it like you would have in 1820. AIs will become more intelligent than us, perhaps in 10-20 years (LeCun), though there could be unanticipated breakthroughs that lower the time frame to a few years or less (Bengio); it’s just a question of when, not “if.”


On one creative use of AI — from aiandacademia.substack.com by Bryan Alexander
A new practice with pedagogical possibilities

Excerpt:

Look at those material items again. The voiceover? Written by an AI and turned into audio by software. The images? Created by human prompts in Midjourney. The music is, I think, human created. And the idea came from a discussion between a human and an AI?

How might this play out in a college or university class?

Imagine assignments which require students to craft such a video. Start from film, media studies, or computer science classes. Students work through a process:


Generative Textbooks — from opencontent.org by David Wiley

Excerpt (emphasis DSC):

I continue to try to imagine ways generative AI can impact teaching and learning, including learning materials like textbooks. Earlier this week I started wondering – what if, in the future, educators didn’t write textbooks at all? What if, instead, we only wrote structured collections of highly crafted prompts? Instead of reading a static textbook in a linear fashion, the learner would use the prompts to interact with a large language model. These prompts could help learners ask for things like:

  • overviews and in-depth explanations of specific topics in a specific sequence,
  • examples that the learner finds personally relevant and interesting,
  • interactive practice – including open-ended exercises – with immediate, corrective feedback,
  • the structure of the relationships between ideas and concepts,
  • etc.
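
As a thought experiment only (Wiley's post does not specify an implementation), such a "generative textbook" could be as simple as a structured collection of prompt templates that a learner fills in and sends to a large language model of their choice. A minimal sketch, with invented topics, template text, and field names:

```python
# A toy "generative textbook": a structured collection of prompt templates,
# organized by topic, that a learner renders and sends to an LLM.
# Topics, templates, and field names here are invented for illustration.
GENERATIVE_TEXTBOOK = {
    "photosynthesis": {
        "overview": "Explain photosynthesis at the level of a {level} student.",
        "personal_example": (
            "Give an example of photosynthesis that connects to a learner "
            "interested in {interest}."
        ),
        "practice": (
            "Ask me three open-ended questions about photosynthesis, one at a "
            "time, and give corrective feedback after each of my answers."
        ),
    },
}

def render_prompt(topic: str, kind: str, **fields: str) -> str:
    """Fill in a prompt template; the result is what gets sent to the model."""
    return GENERATIVE_TEXTBOOK[topic][kind].format(**fields)

# Usage:
# print(render_prompt("photosynthesis", "personal_example", interest="soccer"))
```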

Also relevant/see:




Generating The Future of Education with AI — from aixeducation.com

AI in Education -- An online-based conference taking place on August 5-6, 2023

Designed for K12 and Higher-Ed Educators & Administrators, this conference aims to provide a platform for educators, administrators, AI experts, students, parents, and EdTech leaders to discuss the impact of AI on education, address current challenges and potentials, share their perspectives and experiences, and explore innovative solutions. A special emphasis will be placed on including students’ voices in the conversation, highlighting their unique experiences and insights as the primary beneficiaries of these educational transformations.


How Teachers Are Using ChatGPT in Class — from edweek.org by Larry Ferlazzo

Excerpt:

The use of generative AI in K-12 settings is complex and still in its infancy. We need to consider how these tools can enhance student creativity, improve writing skills, and be transparent with students about how generative AI works so they can better understand its limitations. As with any new tech, our students will be exposed to it, and it is our task as educators to help them navigate this new territory as well-informed, curious explorers.


Japan emphasizes students’ comprehension of AI in new school guidelines — from japantimes.co.jp by Karin Kaneko; via The Rundown

Excerpt:

The education ministry has emphasized the need for students to understand artificial intelligence in new guidelines released Tuesday, setting out how generative AI can be integrated into schools and the precautions needed to address associated risks.

Students should comprehend the characteristics of AI, including its advantages and disadvantages, with the latter including personal information leakages and copyright infringement, before they use it, according to the guidelines. They explicitly state that passing off reports, essays or any other works produced by AI as one’s own is inappropriate.


AI’s Teachable Moment: How ChatGPT Is Transforming the Classroom — from cnet.com by Mark Serrels
Teachers and students are already harnessing the power of AI, with an eye toward the future.

Excerpt:

Thanks to the rapid development of artificial intelligence tools like Dall-E and ChatGPT, my brother-in-law has been wrestling with low-level anxiety: Is it a good idea to steer his son down this path when AI threatens to devalue the work of creatives? Will there be a job for someone with that skill set in 10 years? He’s unsure. But instead of burying his head in the sand, he’s doing what any tech-savvy parent would do: He’s teaching his son how to use AI.

In recent months the family has picked up subscriptions to AI services. Now, in addition to drawing and sculpting and making movies and video games, my nephew is creating the monsters of his dreams with Midjourney, a generative AI tool that uses language prompts to produce images.


The AI Dictionary for Educators — from blog.profjim.com

To bridge this knowledge gap, I decided to make a quick little dictionary of AI terms specifically tailored for educators worldwide. Initially created for my own benefit, I’ve reworked my own AI Dictionary for Educators and expanded it to help my fellow teachers embrace the advancements AI brings to education.


7 Strategies to Prepare Educators to Teach With AI — from edweek.org by Lauraine Langreo; NOTE: Behind paywall


 

Introducing Superalignment — from openai.com
We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.

Excerpts (emphasis DSC):

How do we ensure AI systems much smarter than humans follow human intent?

Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.

Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence.

From DSC:
Hold up. We’ve been told for years that AI is at the toddler stage. But now assertions are being made that AI systems are smarter than humans — much smarter even. That said, then why is the goal of OpenAI to build a roughly human-level automated alignment researcher if humans aren’t that smart after all…? Which is it? I must be missing or misunderstanding something here…

OpenAI are jumping back on the alignment bandwagon with the brilliantly-named Superalignment Team. And you guessed it – they’re researching alignment of future superintelligent AIs. They reckon that AI can align other AI faster than humans can, and the plan is to build an AI that does just that. Head-spinning stuff…

Ben’s Bites

Plus…

Who else should be on this team? We certainly don’t want a team made up of just technical people. How about including rabbis, pastors, priests, parents, teachers, professors, social workers, judges, legislators, and many others who can help represent other specialties, disciplines, and perspectives to protect society?


Authors file a lawsuit against OpenAI for unlawfully ‘ingesting’ their books — from theguardian.com by Ella Creamer; via Ben’s Bytes
Mona Awad and Paul Tremblay allege that their books, which are copyrighted, were ‘used to train’ ChatGPT because the chatbot generated ‘very accurate summaries’ of the works


How AI is Transforming Workplace Architecture and Design — from workdesign.com by Christian Lehmkuhl


London Futurists | Generative AI drug discovery breakthrough, with Alex Zhavoronkov — from londonfuturists.buzzsprout.com

Alex Zhavoronkov is our first guest to make a repeat appearance, having first joined us in episode 12, last November. We are delighted to welcome him back, because he is doing some of the most important work on the planet, and he has some important news.

In 2014, Alex founded Insilico Medicine, a drug discovery company which uses artificial intelligence to identify novel targets and novel molecules for pharmaceutical companies. Insilico now has drugs designed with AI in human clinical trials, and it is one of a number of companies that are demonstrating that developing drugs with AI can cut the time and money involved in the process by as much as 90%.


Watch This Space: New Field of Spatial Finance Uses AI to Estimate Risk, Monitor Assets, Analyze Claims — from blogs.nvidia.com

When making financial decisions, it’s important to look at the big picture — say, one taken from a drone, satellite or AI-powered sensor.

The emerging field of spatial finance harnesses AI insights from remote sensors and aerial imagery to help banks, insurers, investment firms and businesses analyze risks and opportunities, enable new services and products, measure the environmental impact of their holdings, and assess damage after a crisis.


Secretive hardware startup Humane’s first product is the Ai Pin — from techcrunch.com by Kyle Wiggers; via The Rundown AI

Excerpt:

Humane, the startup launched by ex-Apple design and engineering duo Imran Chaudhri and Bethany Bongiorno, today revealed details about its first product: The Humane Ai Pin.

Humane’s product, as it turns out, is a wearable gadget with a projected display and AI-powered features. Chaudhri gave a live demo of the device onstage during a TED Talk in April, but a press release issued today provides a few additional details.

The Humane Ai Pin is a new type of standalone device with a software platform that harnesses the power of AI to enable innovative personal computing experiences.


He Spent $140 Billion on AI With Little to Show. Now He Is Trying Again. — from wsj.com by Eliot Brown; via Superhuman
Billionaire Masayoshi Son said he would make SoftBank ‘the investment company for the AI revolution,’ but he missed out on the most recent frenzy


“Stunning”—Midjourney update wows AI artists with camera-like feature — from arstechnica.com by Benj Edwards; via Sam DeBrule from Machine Learnings
Midjourney v5.2 features camera-like zoom control over framing, more realism.


What is AIaaS? Guide to Artificial Intelligence as a Service — from eweek.com by Shelby Hiter
Artificial intelligence as a service, AIaaS, is an outsourced AI service provided by cloud-based AI providers.

AIaaS Definition
When a company is interested in working with artificial intelligence but doesn’t have the in-house resources, budget, and/or expertise to build and manage its own AI technology, it’s time to invest in AIaaS.

Artificial intelligence as a service, or AIaaS, is an outsourced AI service model that cloud-based companies provide to other businesses, giving them access to different AI models, algorithms, and other resources directly through a cloud computing platform; this access is usually managed through an API or SDK connection.


The Rise of the AI Engineer — from latent.space


Boost ChatGPT with new plugins — from wondertools.substack.com by Jeremy Caplan
Wonder Tools | Six new ways to use AI


A series re: AI from Jeff Foster out at ProvideoCoalition.com


The AI upskilling imperative to build a future-ready workforce — from businessinsider.com

Excerpts:

Skill development has always been crucial, but recent technological advancements have raised the stakes. We are currently in the midst of the fourth industrial revolution, where automation and breakthroughs in artificial intelligence (AI) are revolutionising the workplace. In this era of quick change and short half-life of skills, upskilling shouldn’t be an afterthought. Instead, reskilling and upskilling have to evolve into requirements for effective professional development.

To understand the significance of upskilling for your career trajectory, it is important to recognise the ever-evolving nature of technology and the rapid pace of digital transformation. Business Insider India has been exploring how businesses and thought leaders are driving innovation by educating their staff on the technologies and skills that will shape the future.

 
 
© 2024 | Daniel Christian