[Report] Generative AI Top 150: The World’s Most Used AI Tools (Feb 2024) — from flexos.work by Daan van Rossum
FlexOS.work surveyed Generative AI platforms to reveal which get used most. While ChatGPT reigns supreme, countless AI platforms are used by millions.

As the FlexOS research study “Generative AI at Work” concluded, based on a survey among knowledge workers, ChatGPT reigns supreme.

2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch.
As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.

With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.



Artificial Intelligence Act: MEPs adopt landmark law — from europarl.europa.eu

  • Safeguards on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations


The untargeted scraping of facial images from CCTV footage to create facial recognition databases will be banned © Alexander / Adobe Stock


A New Surge in Power Use Is Threatening U.S. Climate Goals — from nytimes.com by Brad Plumer and Nadja Popovich
A boom in data centers and factories is straining electric grids and propping up fossil fuels.

Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.

Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.


OpenAI and the Fierce AI Industry Debate Over Open Source — from bloomberg.com by Rachel Metz

The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?

The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.


Researchers develop AI-driven tool for near real-time cancer surveillance — from medicalxpress.com by Mark Alewine; via The Rundown AI
Artificial intelligence has delivered a major win for pathologists and researchers in the fight for improved cancer treatments and diagnoses.

In partnership with the National Cancer Institute (NCI), researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequence AI transformer capable of processing millions of pathology reports. The tool gives experts researching cancer diagnoses and management substantially more accurate information on cancer reporting.


 

Also see:

Cognition Labs Blog

 

How AI Is Already Transforming the News Business — from politico.com by Jack Shafer
An expert explains the promise and peril of artificial intelligence.

The early vibrations of AI have already been shaking the newsroom. One downside of the new technology surfaced at CNET and Sports Illustrated, where editors let AI run amok with disastrous results. Elsewhere in news media, AI is already writing headlines, managing paywalls to increase subscriptions, performing transcriptions, turning stories into audio feeds, discovering emerging stories, fact-checking, copy editing and more.

Felix M. Simon, a doctoral candidate at Oxford, recently published a white paper about AI’s journalistic future that eclipses many early studies. Swinging a bat from a crouch that is neither doomer nor utopian, Simon heralds both the downsides and the promise of AI’s introduction into the newsroom and the publisher’s suite.

Unlike earlier technological revolutions, AI is poised to change the business at every level. It will become — if it already isn’t — the beginning of most story assignments and will become, for some, the new assignment editor. Used effectively, it promises to make news more accurate and timely. Used frivolously, it will spawn an ocean of spam. Wherever the production and distribution of news can be automated or made “smarter,” AI will surely step up. But the future has not yet been written, Simon counsels. AI in the newsroom will be only as bad or good as its developers and users make it.

Also see:

Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena — from cjr.org by Felix Simon




EMO: Emote Portrait Alive – Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions — from humanaigc.github.io by Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo

We propose EMO, an expressive audio-driven portrait-video generation framework. Given a single reference image and vocal audio (e.g., talking or singing), our method generates vocal avatar videos with expressive facial expressions and various head poses; it can produce videos of any duration, depending on the length of the input audio.


Adobe previews new cutting-edge generative AI tools for crafting and editing custom audio — from blog.adobe.com by the Adobe Research Team

New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI music generation and editing tool, Project Music GenAI Control allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs.

“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.


How AI copyright lawsuits could make the whole industry go extinct — from theverge.com by Nilay Patel
The New York Times’ lawsuit against OpenAI is part of a broader, industry-shaking copyright challenge that could define the future of AI.

There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use.


FCC officially declares AI-voiced robocalls illegal — from techcrunch.com by Devin Coldewey

The FCC’s war on robocalls has gained a new weapon in its arsenal with the declaration of AI-generated voices as “artificial” and therefore definitely against the law when used in automated calling scams. It may not stop the flood of fake Joe Bidens that will almost certainly trouble our phones this election season, but it won’t hurt, either.

The new rule, contemplated for months and telegraphed last week, isn’t actually a new rule — the FCC can’t just invent them with no due process. Robocalls are just a new term for something largely already prohibited under the Telephone Consumer Protection Act: artificial and pre-recorded messages being sent out willy-nilly to every number in the phone book (something that still existed when they drafted the law).


EIEIO…Chips Ahoy! — from dashmedia.co by Michael Moe, Brent Peus, and Owen Ritz


Here Come the AI Worms — from wired.com by Matt Burgess
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created what they claim is one of the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

 

World’s largest projection mapping snags Guinness World Record — from inavateonthenet.net
A nightly projection mapping display at the Tokyo metropolitan government headquarters has been recognised by Guinness World Records as the largest in the world.

 

Scammers trick company employee using video call filled with deepfakes of execs, steal $25 million — from techspot.com by Rob Thubron; via AI Valley
The victim was the only real person on the video conference call

The scammers used digitally recreated versions of an international company’s Chief Financial Officer and other employees to order $25 million in money transfers during a video conference call containing just one real person.

The victim, an employee at the Hong Kong branch of an unnamed multinational firm, was duped into taking part in a video conference call in which they were the only real person – the rest of the group were fake representations of real people, writes SCMP.

As we’ve seen in previous incidents where deepfakes were used to recreate someone without their permission, the scammers utilized publicly available video and audio footage to create these digital versions.


Letter from the YouTube CEO: 4 Big bets for 2024 — from blog.youtube by Neal Mohan, CEO, YouTube; via Ben’s Bites


#1: AI will empower human creativity.

#2: Creators should be recognized as next-generation studios.

#3: YouTube’s next frontier is the living room and subscriptions.

#4: Protecting the creator economy is foundational.

Viewers globally now watch an average of more than 1 billion hours of YouTube content on their TVs every day.


Bard becomes Gemini: Try Ultra 1.0 and a new mobile app today — from blog.google by Sissie Hsiao; via Rundown AI
Bard is now known as Gemini, and we’re rolling out a mobile app and Gemini Advanced with Ultra 1.0.

Since we launched Bard last year, people all over the world have used it to collaborate with AI in a completely new way — to prepare for job interviews, debug code, brainstorm new business ideas or, as we announced last week, create captivating images.

Our mission with Bard has always been to give you direct access to our AI models, and Gemini represents our most capable family of models. To reflect this, Bard will now simply be known as Gemini.


A new way to discover places with generative AI in Maps — from blog.google by Miriam Daniel; via AI Valley
Here’s a look at how we’re bringing generative AI to Maps — rolling out this week to select Local Guides in the U.S.

Today, we’re introducing a new way to discover places with generative AI to help you do just that — no matter how specific, niche or broad your needs might be. Simply say what you’re looking for and our large-language models (LLMs) will analyze Maps’ detailed information about more than 250 million places and trusted insights from our community of over 300 million contributors to quickly make suggestions for where to go.

Starting in the U.S., this early access experiment launches this week to select Local Guides, who are some of the most active and passionate members of the Maps community. Their insights and valuable feedback will help us shape this feature so we can bring it to everyone over time.


Google Prepares for a Future Where Search Isn’t King — from wired.com by Lauren Goode
CEO Sundar Pichai tells WIRED that Google’s new, more powerful Gemini chatbot is an experiment in offering users a way to get things done without a search engine. It’s also a direct shot at ChatGPT.


 

 


From voice synthesis to fertility tracking, here are some actually helpful AI products at CES — from techcrunch.com by Devin Coldewey

But a few applications of machine learning stood out as genuinely helpful or surprising — here are a few examples of AI that might actually do some good.

The whole idea that AI might not be a total red flag occurred to me when I chatted with Whispp at a press event. This small team is working on voicing the voiceless, meaning people who have trouble speaking normally due to a condition or illness.

Whispp gives a voice to people who can’t speak


CES 2024: Everything revealed so far, from Nvidia and Sony to the weirdest reveals and helpful AI — from techcrunch.com by Christine Hall

Kicking off the first day were some bigger announcements from companies, including Nvidia, LG, Sony and Samsung. Those livestreams have ended, but you can watch most of their archives and catch up right here. And with the event still ongoing, and the show floor open, here’s how you can follow along with our team’s coverage.

Or, to dive into each day’s updates directly, you can follow these links:

 

 
 

Prompt engineering — from platform.openai.com

This guide shares strategies and tactics for getting better results from large language models (sometimes referred to as GPT models) like GPT-4. The methods described here can sometimes be deployed in combination for greater effect. We encourage experimentation to find the methods that work best for you.

Some of the examples demonstrated here currently work only with our most capable model, gpt-4. In general, if you find that a model fails at a task and a more capable model is available, it’s often worth trying again with the more capable model.

You can also explore example prompts which showcase what our models are capable of…
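To make one of the guide’s tactics concrete, here is a minimal sketch, in plain Python, of two techniques it covers: setting a persona with a system message and supplying few-shot examples before the real query. The helper name `build_messages` is my own invention; only the message format (role/content dictionaries) follows OpenAI’s Chat Completions API.

```python
def build_messages(persona, examples, query):
    """Assemble a chat-style prompt: a system persona, then few-shot
    example pairs (user question + ideal answer), then the real query."""
    messages = [{"role": "system", "content": persona}]
    for user_text, ideal_answer in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": ideal_answer})
    messages.append({"role": "user", "content": query})
    return messages

msgs = build_messages(
    persona="You answer in a consistent style: one short, vivid sentence.",
    examples=[("Teach me about patience.",
               "The river that carves the deepest valley flows from a modest spring.")],
    query="Teach me about the ocean.",
)
# msgs is ready to pass as the `messages` argument of a chat-completion call.
```

Because the examples precede the query, the model infers the desired style and format before it ever sees the real question — the few-shot tactic the guide describes.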


Preparedness — from openai.com

The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematize our safety thinking, we are adopting the initial version of our Preparedness Framework. It describes OpenAI’s processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models.


Every Major Tech Development From 2023 — from newsletter.thedailybite.co
The yearly tech round-up, Meta’s smart glasses upgrade, and more…

Here’s every major innovation from the last 365 days:

  • Microsoft: Launched additional OpenAI-powered features, including Copilot for Microsoft Dynamics 365 and Microsoft 365, enhancing business functionalities like text summarization, tone adjustment in emails, data insights, and automatic presentation creation.
  • Google: Introduced Duet, akin to Microsoft’s Copilot, integrating Gen AI across Google Workspace for writing assistance and custom visual creation. Also debuted Generative AI Studio, enabling developers to craft AI apps, and unveiled Gemini & Bard, a new AI technology with impressive features.
  • Salesforce: …
  • Adobe: …
  • Amazon Web Services (AWS): …
  • IBM:  …
  • Nvidia:  …
  • OpenAI:  …
  • Meta (Facebook):
  • Tencent:
  • Baidu:

News in chatbots — from theneurondaily.com by Noah Edelman & Pete Huang

Here’s what’s on the horizon:

  • Multimodal AI gets huge. Instead of just typing, more people will talk to AI, listen to it, create images, get visual feedback, create graphs, and more.
  • AI video gets really good. So far, AI videos have been cool-but-not-practical. They’re getting way better and we’re on the verge of seeing 100% AI-generated films, animations, and cartoons.
  • AI on our phones. Imagine Siri with the brains of ChatGPT-4 and the ambition of Alexa. TBD who pulls this off first!
  • GPT-5. ‘Nuff said.

20 Best AI Chatbots in 2024 — from eweek.com by Aminu Abdullahi
These leading AI chatbots use generative AI to offer a wide menu of functionality, from personalized customer service to improved information retrieval.

Top 20 Generative AI Chatbot Software: Comparison Chart
We compared the key features of the top generative AI chatbot software to help you determine the best option for your company…


What Google Gemini Teaches Us About Trust and The Future — from aiwithallie.beehiiv.com by Allie K. Miller
The AI demo may have been misleading, but it teaches us two huge lessons.

TL;DR (too long, didn’t read)

  1. We’re moving from ‘knowledge’ to ‘action’. 
    AI is moving into proactive interventions.
  2. We’re getting more efficient. 
    Assume 2024 brings lower AI OpEx.
  3. It’s multi-modal from here on out. 
    Assume 2024 is multi-modal.
  4. There’s no one model to rule them all.
    Assume 2024 has more multi-model orchestration & delegation.

Stay curious, stay informed,
Allie


Chatbot Power Rankings — from theneurondaily.com by Noah Edelman

Here’s our power rankings of the best chatbots for (non-technical) work:

1: ChatGPT-4—Unquestionably the smartest, with the strongest writing, coding, and reasoning abilities.

T1: Gemini Ultra—In theory as powerful as GPT-4. We won’t know for sure until it’s released in 2024.

2: Claude 2—Top choice for managing lengthy PDFs (handles ~75,000 words), and rarely hallucinates. Can be somewhat stiff.

3: Perplexity—Ideal for real-time information. Upgrading to Pro grants access to both Claude-2 and GPT-4.

T4: Pi—The most “human-like” chatbot, though integrating with business data can be challenging.

T4: Bing Chat—Delivers GPT-4-esque responses, has internet access, and can generate images. Bad UX and doesn’t support PDFs.

T4: Bard—Now powered by Gemini Pro, offers internet access and answer verification. Tends to hallucinate more frequently.

and others…


Midjourney + ChatGPT = Amazing AI Art — from theaigirl.substack.com by Diana Dovgopol and the Pycoach
Turn ChatGPT into a powerful Midjourney prompt machine with basic and advanced formulas.


Make music with AI — from aitestkitchen.withgoogle.com re: Music FX


 

 

Animate Anyone — from theneurondaily.com by Noah Edelman & Pete Huang

Animate Anyone is a new project from Alibaba that can animate any image to move however you’d like.

While the technology is bonkers (duh), the demo video has stirred up mixed reactions.

I mean…just check out the (justified) fury on Twitter in response to this research.

To the researchers’ credit, they haven’t released a working demo yet, probably for this exact concern.


 

34 Big Ideas that will change our world in 2024 — from linkedin.com


Excerpts:

6. ChatGPT’s hype will fade, as a new generation of tailor-made bots rises up
11. We’ll finally turn the corner on teacher pay in 2024
21. Employers will combat job applicants’ use of AI with…more AI
31. Universities will view the creator economy as a viable career path

 

Exploring blockchain’s potential impact on the education sector — from e27.co by Moch Akbar Azzihad M
By the year 2024, the application of blockchain technology is anticipated to have a substantial influence on the education sector

Areas mentioned include:

  • Secure, verifiable credentials
  • Transparent records of accomplishments
  • A streamlined, automated enrollment process
  • Secure, decentralised information storage
  • Decentralised financing and operations
 

Expanding Bard’s understanding of YouTube videos — via AI Valley

  • What: We’re taking the first steps in Bard’s ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
  • Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.

Reshaping the tree: rebuilding organizations for AI — from oneusefulthing.org by Ethan Mollick
Technological change brings organizational change.

I am not sure who said it first, but there are only two ways to react to exponential change: too early or too late. Today’s AIs are flawed and limited in many ways. While that restricts what AI can do, the capabilities of AI are increasing exponentially, both in terms of the models themselves and the tools these models can use. It might seem too early to consider changing an organization to accommodate AI, but I think that there is a strong possibility that it will quickly become too late.

From DSC:
Readers of this blog have seen the following graphic for several years now, but there is no question that we are in a time of exponential change. One would have had an increasingly hard time arguing the opposite of this perspective during that time.

 


 



Nvidia’s revenue triples as AI chip boom continues — from cnbc.com by Jordan Novet; via GSV

KEY POINTS

  • Nvidia’s results surpassed analysts’ projections for revenue and income in the fiscal fourth quarter.
  • Demand for Nvidia’s graphics processing units has been exceeding supply, thanks to the rise of generative artificial intelligence.
  • Nvidia announced the GH200 GPU during the quarter.

Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:

  • Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
  • Revenue: $18.12 billion, vs. $16.18 billion expected

Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.



 

The Beatles’ final song is now streaming thanks to AI — from theverge.com by Chris Welch
Machine learning helped Paul McCartney and Ringo Starr turn an old John Lennon demo into what’s likely the band’s last collaborative effort.


Scientists excited by AI tool that grades severity of rare cancer — from bbc.com by Fergus Walsh

Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.

By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.

Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.

They are also excited by its potential for spotting other cancers early.


Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving — from venturebeat.com by Michael Nuñez

Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.

The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.

Also from Michael Nuñez at venturebeat.com, see:


GPTs for all, AzeemBot; conspiracy theorist AI; big tech vs. academia; reviving organs ++448 — from exponentialviewco by Azeem Azhar and Chantal Smith


Personalized A.I. Agents Are Here. Is the World Ready for Them? — from nytimes.com by Kevin Roose (behind a paywall)

You could think of the recent history of A.I. chatbots as having two distinct phases.

The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.

That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”


From DSC:
Very cool!


Nvidia Stock Jumps After Unveiling of Next Major AI Chip. It’s Bad News for Rivals. — from barrons.com

On Monday, Nvidia (ticker: NVDA) announced its new H200 Tensor Core GPU. The chip incorporates 141 gigabytes of memory and offers 60% to 90% performance improvements over its current H100 model when used for inference, or generating answers from popular AI models.

From DSC:
The exponential curve seems to be continuing — 60% to 90% performance improvements is a huge boost in performance.

Also relevant/see:


The 5 Best GPTs for Work — from the AI Exchange

Custom GPTs are exploding, and we wanted to highlight our top 5 that we’ve seen so far:

 

Introductory comments from DSC:

Sometimes people and vendors write about AI’s capabilities in such a glowingly positive way. It seems like AI can do everything in the world. And while I appreciate the growing capabilities of Large Language Models (LLMs) and the like, there are some things I don’t want AI-driven apps to do.

For example, I get why AI can be helpful in correcting my misspellings, my grammatical errors, and the like. That said, I don’t want AI to write my emails for me. I want to write my own emails. I want to communicate what I want to communicate. I don’t want to outsource my communication. 

And what if an AI tool summarizes an email series in a way that causes me to miss some key pieces of information? Hmmm…not good.

Ok, enough soapboxing. I’ll continue with some resources.


ChatGPT Enterprise

Introducing ChatGPT Enterprise — from openai.com
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.

We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.

Enterprise-grade security & privacy and the most powerful version of ChatGPT yet. — from openai.com


NVIDIA

Nvidia’s Q2 earnings prove it’s the big winner in the generative AI boom — from techcrunch.com by Kirsten Korosec

Nvidia Quarterly Earnings Report Q2 Smashes Expectations At $13.5B — from techbusinessnews.com.au
Nvidia’s quarterly earnings report (Q2) smashed expectations, coming in at $13.5B, more than double prior earnings of $6.7B. The chipmaker also projected total revenue of about $16B for the October quarter.


MISC

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending — from theinformation.com by Amir Efrati and Aaron Holmes

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That’s far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

OpenAI’s GPTBot blocked by major websites and publishers — from the-decoder.com by Matthias Bastian
An emerging chatbot ecosystem builds on existing web content and could displace traditional websites. At the same time, licensing and financing are largely unresolved.

OpenAI offers publishers and website operators an opt-out if they prefer not to make their content available to chatbots and AI models for free. This can be done by blocking OpenAI’s web crawler “GPTBot” via the robots.txt file. The bot collects content to improve future AI models, according to OpenAI.
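Concretely, the opt-out described above is a two-line robots.txt entry; the `GPTBot` user-agent token is the one OpenAI documents. A sketch that blocks the crawler site-wide (a `Disallow` for a specific path would instead limit the block to that path):

```
User-agent: GPTBot
Disallow: /
```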

Major media companies including the New York Times, CNN, Reuters, Chicago Tribune, ABC, and Australian Community Media (ACM) are now blocking GPTBot. Other web-based content providers such as Amazon, Wikihow, and Quora are also blocking the OpenAI crawler.

Introducing Code Llama, a state-of-the-art large language model for coding  — from ai.meta.com

Takeaways re: Code Llama:

  • Is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.
  • Is free for research and commercial use.
  • Is built on top of Llama 2 and is available in three models…
  • In our own benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs on code tasks.

Key Highlights of Google Cloud Next ’23 — from analyticsindiamag.com by Shritama Saha
Meta’s Llama 2, Anthropic’s Claude 2, and TII’s Falcon join Model Garden, expanding model variety.

AI finally beats humans at a real-life sport — drone racing — from nature.com by Dan Fox
The new system combines simulation with onboard sensing and computation.

From DSC:
This is scary — not at all comforting to me. Militaries around the world continue their jockeying to be the most dominant, powerful, and effective killers of humankind. That definitely includes the United States and China. But certainly others as well. And below is another alarming item, also pointing out the downsides of how we use technologies.

The Next Wave of Scams Will Be Deepfake Video Calls From Your Boss — from bloomberg.com by Margi Murphy; behind paywall

Cybercriminals are constantly searching for new ways to trick people. One of the more recent additions to their arsenal was voice simulation software.

10 Great Colleges For Studying Artificial Intelligence — from forbes.com by Sim Tumay

The debut of ChatGPT in November created angst for college admission officers and professors worried they would be flooded by student essays written with the undisclosed assistance of artificial intelligence. But the explosion of interest in AI has benefits for higher education, including a new generation of students interested in studying and working in the field. In response, universities are revising their curriculums to educate AI engineers.

 

What value do you offer? — from linkedin.com by Dan Fitzpatrick — The AI Educator

Excerpt (emphasis DSC): 

So, as educators, mentors, and guides to our future generations, we must ask ourselves three pivotal questions:

  1. What value do we offer to our students?
  2. What value will they need to offer to the world?
  3. How are we preparing them to offer that value?

The answers to these questions are crucial, and they will redefine the trajectory of our education system.

We need to create an environment that encourages curiosity, embraces failure as a learning opportunity, and celebrates diversity. We need to teach our students how to learn, how to ask the right questions, and how to think for themselves.


AI 101 for Teachers



5 Little-Known ChatGPT Prompts to Learn Anything Faster — from medium.com by Eva Keiffenheim
Including templates, you can copy.

Leveraging ChatGPT for learning is the most meaningful skill this year for lifelong learners. But it’s too hard to find resources to master it.

As a learning science nerd, I’ve explored hundreds of prompts over the past months. Most of the advice doesn’t go beyond text summaries and multiple-choice testing.

That’s why I’ve created this article — it merges learning science with prompt writing to help you learn anything faster.
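The article’s own templates sit behind the link; as an illustration of the genre (this one is mine, not taken from the piece), a learning-science-flavored template might look like:

```
Act as a tutor who uses the Feynman technique. I will explain [TOPIC] in
my own words. Identify the gaps and errors in my explanation, then ask me
one follow-up question at a time until I can explain [TOPIC] simply and
accurately.
```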


From DSC:
This is a very nice, clearly illustrated, free video to get started with the Midjourney (text-to-image) app. Nice work Dan!

Also see Dan’s
AI Generated Immersive Learning Series


What is Academic Integrity in the Era of Generative Artificial intelligence? — from silverliningforlearning.org by Chris Dede

In the new normal of generative AI, how does one articulate the value of academic integrity? This blog post presents my current response in about 2,500 words; a complete answer could fill a sizable book.

Massive amounts of misinformation are disseminated about generative AI, so the first part of my discussion clarifies what large language models (ChatGPT and its counterparts) can and cannot currently do. The second part describes ways in which generative AI can be misused as a means of learning; unfortunately, many people are now advocating for these mistaken applications to education. The third part describes ways in which large language models (LLMs), used well, may substantially improve learning and education. I close with a plea for a robust, informed public discussion about these topics and issues.


Dr. Chris Dede and the Necessity of Training Students and Faculty to Improve Their Human Judgment and Work Properly with AIs — from stefanbauschard.substack.com by Stefan Bauschard
We need to stop using test-driven curriculums that train students to listen and to compete against machines, a competition they cannot win. Instead, we need to help them augment their judgment.


The Creative Ways Teachers Are Using ChatGPT in the Classroom — from time.com by Olivia B. Waxman

Many of the more than a dozen teachers TIME interviewed for this story argue that the way to get kids to care is to proactively use ChatGPT in the classroom.

Some of those creative ideas are already in effect at Peninsula High School in Gig Harbor, about an hour from Seattle. In Erin Rossing’s precalculus class, a student got ChatGPT to generate a rap about vectors and trigonometry in the style of Kanye West, while geometry students used the program to write mathematical proofs in the style of raps, which they performed in a classroom competition. In Kara Beloate’s English-Language Arts class, she allowed students reading Shakespeare’s Othello to use ChatGPT to translate lines into modern English to help them understand the text, so that they could spend class time discussing the plot and themes.


AI in Higher Education: Aiding Students’ Academic Journey — from td.org by J. Chris Brown

Topics/sections include:

Automatic Grading and Assessment
AI-Assisted Student Support Services
Intelligent Tutoring Systems
AI Can Help Both Students and Teachers


Shockwaves & Innovations: How Nations Worldwide Are Dealing with AI in Education — from the74million.org by Robin Lake
Lake: Other countries are quickly adopting artificial intelligence in schools. Lessons from Singapore, South Korea, India, China, Finland and Japan.

I found that other developed countries share concerns about students cheating but are moving quickly to use AI to personalize education, enhance language lessons and help teachers with mundane tasks, such as grading. Some of these countries are in the early stages of training teachers to use AI and developing curriculum standards for what students should know and be able to do with the technology.

Several countries began positioning themselves several years ago to invest in AI in education in order to compete in the fourth industrial revolution.


AI in Education — from educationnext.org by John Bailey
The leap into a new era of machine intelligence carries risks and challenges, but also plenty of promise

In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.


Generative AI and education futures — from ucl.ac.uk
Video highlights from Professor Mike Sharples’ keynote address at the 2023 UCL Education Conference, which explored opportunities to prosper with AI as a part of education.


Bringing AI Literacy to High Schools — by Nikki Goth Itoi
Stanford education researchers collaborated with teachers to develop classroom-ready AI resources for high school instructors across subject areas.

To address these two imperatives, all high schools need access to basic AI tools and training. Yet the reality is that many underserved schools in low-income areas lack the bandwidth, skills, and confidence to guide their students through an AI-powered world. If that pattern continues, AI will only worsen existing inequities. With this concern top of mind, and with initial funding from the McCoy Ethics Center, Lee began recruiting graduate students and high school teachers to explore how to give more people equal footing in the AI space.


© 2024 | Daniel Christian