Future of Work Report: AI at Work — from economicgraph.linkedin.com; via Superhuman

The intersection of AI and the world of work: Not only are job postings increasing, but we’re seeing more LinkedIn members around the globe adding AI skills to their profiles than ever before. We’ve seen a 21x increase in the share of global English-language job postings that mention new AI technologies such as GPT or ChatGPT since November 2022. In June 2023, the number of AI-skilled members was 9x larger than in January 2016, globally.

The state of play of Generative AI (GAI) in the workforce: GAI technologies, including ChatGPT, are poised to change the way we work. In fact, 47% of US executives believe that using generative AI will increase productivity, and 92% agree that people skills are more important than ever. This means jobs won’t necessarily go away, but they will change, as will the skills necessary to do them.

Also relevant/see:

The Working Future: More Human, Not Less — from bain.com
It’s time to change how we think about work

Contents

  • Introduction
  • Motivations for Work Are Changing
  • Beliefs about What Makes a “Good Job” Are Diverging
  • Automation Is Helping to Rehumanize Work
  • Technological Change Is Blurring the Boundaries of the Firm
  • Young Workers Are Increasingly Overwhelmed
  • Rehumanizing Work: The Journey Ahead
 

Nearly Half of Legal Professionals and Consumers Believe Generative AI Will Transform Law Practice, LexisNexis Survey Finds — from lawnext.com

A new international survey of lawyers, law students and consumers finds that nearly half believe generative AI will have a significant or transformative impact on the practice of law.

Conducted by LexisNexis and released this morning at ILTACON, the annual conference of the International Legal Technology Association, the survey polled 7,950 lawyers, law students and consumers in the U.S., U.K., Canada and France about their overall awareness of generative AI and their perspectives on its potential impact on the practice of law.

Also relevant/see:

Thomson Reuters Releases Report on Impact of AI on the Future of Legal Professionals — from deweybstrategic.com by Jean O’Grady

Thomson Reuters has released its Future of Professionals Report. The research was conducted during the months of May and June 2023 via an online survey. More than 1,200 professionals from the legal, tax and accounting, and risk professions employed by corporations, firms, and government agencies completed the survey.

Art generated by AI can’t be copyrighted, DC court says — from abajournal.com by Amanda Robert

Art created by artificial intelligence cannot receive copyright protection under U.S. law, a federal judge ruled last week in a case that could influence the outcomes of future disputes over authorship and intellectual property.

 

Introductory comments from DSC:

Sometimes people and vendors write about AI’s capabilities in such a glowingly positive way that it seems AI can do everything in the world. And while I appreciate the growing capabilities of Large Language Models (LLMs) and the like, there are some things I don’t want AI-driven apps to do.

For example, I get why AI can be helpful in correcting my misspellings, my grammatical errors, and the like. That said, I don’t want AI to write my emails for me. I want to write my own emails. I want to communicate what I want to communicate. I don’t want to outsource my communication. 

And what if an AI tool summarizes an email series in a way that causes me to miss some key pieces of information? Hmmm…not good.

Ok, enough soapboxing. I’ll continue with some resources.


ChatGPT Enterprise

Introducing ChatGPT Enterprise — from openai.com
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.

We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.



NVIDIA

Nvidia’s Q2 earnings prove it’s the big winner in the generative AI boom — from techcrunch.com by Kirsten Korosec

Nvidia Quarterly Earnings Report Q2 Smashes Expectations At $13.5B — from techbusinessnews.com.au
Nvidia’s quarterly earnings report (Q2) smashed expectations, coming in at $13.5B, more than double the prior quarter’s $6.7B. The chipmaker also projected about $16B in revenue for the quarter ending in October.


MISC

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending — from theinformation.com by Amir Efrati and Aaron Holmes

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That’s far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

OpenAI’s GPTBot blocked by major websites and publishers — from the-decoder.com by Matthias Bastian
An emerging chatbot ecosystem builds on existing web content and could displace traditional websites. At the same time, licensing and financing are largely unresolved.

OpenAI offers publishers and website operators an opt-out if they prefer not to make their content available to chatbots and AI models for free. This can be done by blocking OpenAI’s web crawler “GPTBot” via the robots.txt file. The bot collects content to improve future AI models, according to OpenAI.

Major media companies including the New York Times, CNN, Reuters, Chicago Tribune, ABC, and Australian Community Media (ACM) are now blocking GPTBot. Other web-based content providers such as Amazon, Wikihow, and Quora are also blocking the OpenAI crawler.
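For site operators, the opt-out described above is a small robots.txt change. A minimal sketch of what such a file might look like (the `GPTBot` user-agent token is the one OpenAI documents; the specific paths shown are purely illustrative):

```
# robots.txt — block OpenAI's GPTBot crawler from the entire site
User-agent: GPTBot
Disallow: /

# Alternatively, allow some sections while blocking others
# (example paths only):
# User-agent: GPTBot
# Allow: /public/
# Disallow: /articles/
```

The file lives at the site root (e.g., example.com/robots.txt); compliant crawlers check it before fetching pages, though compliance is voluntary on the crawler’s part.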

Introducing Code Llama, a state-of-the-art large language model for coding  — from ai.meta.com

Takeaways re: Code Llama:

  • Is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.
  • Is free for research and commercial use.
  • Is built on top of Llama 2 and is available in three models…
  • In our own benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs on code tasks

Key Highlights of Google Cloud Next ‘23 — from analyticsindiamag.com by Shritama Saha
Meta’s Llama 2, Anthropic’s Claude 2, and TII’s Falcon join Model Garden, expanding model variety.

AI finally beats humans at a real-life sport — drone racing — from nature.com by Dan Fox
The new system combines simulation with onboard sensing and computation.

From DSC:
This is scary — not at all comforting to me. Militaries around the world continue their jockeying to be the most dominant, powerful, and effective killers of humankind. That definitely includes the United States and China. But certainly others as well. And below is another alarming item, also pointing out the downsides of how we use technologies.

The Next Wave of Scams Will Be Deepfake Video Calls From Your Boss — from bloomberg.com by Margi Murphy; behind paywall

Cybercriminals are constantly searching for new ways to trick people. One of the more recent additions to their arsenal was voice simulation software.

10 Great Colleges For Studying Artificial Intelligence — from forbes.com by Sim Tumay

The debut of ChatGPT in November created angst for college admission officers and professors worried they would be flooded by student essays written with the undisclosed assistance of artificial intelligence. But the explosion of interest in AI has benefits for higher education, including a new generation of students interested in studying and working in the field. In response, universities are revising their curriculums to educate AI engineers.

 

OpenAI angles to put ChatGPT in classrooms with special tutor prompts — from techcrunch.com by Devin Coldewey

Taking the bull by the horns, the company has proposed a few ways for teachers to put the system to use… outside its usual role as “research assistant” for procrastinating students.

Teaching with AI -- a guide from OpenAI


Q2 Earnings Roundup – EdTech Generative AI — from aieducation.substack.com by Claire Zau
A roundup of LLM and AI discussions from Q2 EdTech Earnings

In this piece, we’ll be breaking down how a few of edtech’s most important companies are thinking about AI developments.

  • Duolingo
  • PowerSchool
  • Coursera
  • Docebo
  • Instructure
  • Nerdy
 

From DSC:
Yesterday, I posted the item about Google’s NotebookLM research tool. Excerpt:

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes.

That got me to thinking…

What if the presenter/teacher/professor/trainer/preacher provided a set of notes for the AI to compare to the readers’ notes? 

That way, the AI could see the discrepancies between what the presenter wanted their audience to learn/hear and what was actually being learned/heard. In a sort of digital Socratic Method, the AI could then generate some leading questions to get the audience member to check their thinking/understanding of the topic.

The end result would be that the main points were properly communicated/learned/received.

 

Google’s AI-powered note-taking app is the messy beginning of something great — from theverge.com by David Pierce; via AI Insider
NotebookLM is a neat research tool with some big ideas. It’s still rough and new, but it feels like Google is onto something.

Excerpts (emphasis DSC):

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes. 

Right now, it’s really just a prototype, but a small team inside the company has been trying to figure out what an AI notebook might look like.

 

Letter from the Editor: Experienced teachers are leaving Michigan schools. This is why. — from mlive.com by Matthew Miller

They talked instead about issues like pay, stress and the sense that they no longer had the solid backing of school administrators.

Sue Harper, who retired this summer from Kreeger Elementary in Fowlerville, blamed what she called “bulldozer parents.”

“I have never been one to quit anything, and teaching is my passion, but this is not teaching,” one teacher wrote. “This is hours of endless paperwork, this is social work, this is counseling, this is parenting, this is babysitting, this is coaching, this is everything but teaching.”

Also relevant/see:

Low pay, culture wars, and ‘bulldozer parents.’ Why Michigan’s best teachers are calling it quits. — from mlive.com by Melissa Frick and Matthew Miller

Now a change management coordinator for Fifth Third Bank, she said, “I don’t take the stress from my job home. I don’t feel guilty, like I always could be doing more for someone.”

Thousands of experienced teachers have retired or left the profession in the years since the COVID-19 pandemic first closed schools and shifted classes to Zoom.

Teachers say they’re burnt out, tired of a lack of support and lack of respect, feeling the impact of the increasingly acrimonious politics surrounding public education.

And finally:


Let’s Use ChatGPT to ‘Think Different’ About K-12 Schools — from gettingsmart.com by Kara Stern

So, in addition to asking ChatGPT to think like a school communications professional, a principal, or a teacher, what if we asked ChatGPT to think like the populations we’re serving, as a way of improving the education (or UX) we’re delivering?


Why I Keep Teaching — from edutopia.org by Rachel Jorgensen
A veteran educator explains why, despite the many challenges, she continues to try to change students’ lives, in turn enriching her own.

EVERY TIME I SHOW UP FOR WORK, A STUDENT MIGHT CHANGE MY LIFE FOR THE BETTER


MY WORK HAS INVISIBLE RIPPLE EFFECTS


34 Ways to Quiet a Rambunctious Class — from edutopia.org by Daniel Leonard
From “Silent 20” to imaginary marshmallows, these teacher-tested strategies for all grade levels can help you snap an unruly classroom back to attention.


Per EdSurge:

‘THE MOTH’ GOES TO SCHOOL: For more than a decade, the nonprofit behind the popular storytelling podcast The Moth has run workshops in schools to help students share impactful stories from their lives. Now the group has started a spin-off podcast, Grown, highlighting those student stories. Here’s what they’re learning, and why they say storytelling needs to be taught in schools.


Grown, a podcast from The Moth



 


ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs says its AI voice generator is out of beta and will support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model can produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: one is a text-to-speech model, and the other is “VoiceLab,” which lets paying users clone a voice by inputting fragments of their (or others’) speech into the model to create a kind of voice clone. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

But Hugging Face produces a platform where AI developers can share code, models, data sets, and use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, they also need to work more with local tech schools, vocational schools, and community colleges; and other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlayListAI (previously LineupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve: you just type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” and the app generates a music clip.

If you can’t think of a prompt, the app offers prompts in different categories, including video, lo-fi, podcast, gaming, meditation, and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 

***
From DSC:
Having come from various other areas of higher education back in 2017, I was *amazed* to see *how far behind* legal education was from the rest of higher ed. And this is directly tied to what the American Bar Association allows (or doesn’t allow). The ABA has done a terrible job of helping Americans deal with today’s pace of change.

 


Speaking of technology within the legal world, also relevant/see:

How in-house legal professionals can embrace technology — from legaldive.com by Lyle Moran
Colin Levy says generative AI tools, as well as well-known legacy products, can help lawyers and other legal department staff enhance their work.

 

10 Ways Artificial Intelligence Is Transforming Instructional Design — from er.educause.edu by Robert Gibson
Artificial intelligence (AI) is providing instructors and course designers with an incredible array of new tools and techniques to improve the course design and development process. However, the intersection of AI and content creation is not new.

What does this mean for the field of instructional and course design? I have been telling my graduate instructional design students that AI technology is not likely to replace them any time soon because learning and instruction are still highly personalized and humanistic experiences. However, as these students embark on their careers, they will need to understand how to appropriately identify, select, and utilize AI when developing course content.

Here are a few interesting examples of how AI is shaping and influencing instructional design. Some of the tools and resources can be used to satisfy a variety of course design activities, while others are very specific.


GenAI Chatbot Prompt Library for Educators — from aiforeducation.io
We have a variety of prompts to help you lesson plan and do administrative tasks with GenAI chatbots like ChatGPT, Claude, Bard, and Perplexity.

Also relevant/see:

AI for Education — from linkedin.com
Helping teachers and schools unlock their full potential through AI



Google Chrome will summarize entire articles for you with built-in generative AI — from theverge.com by Jay Peters
Google’s AI-powered article summaries are rolling out for iOS and Android first, before coming to Chrome on the desktop.

Google’s AI-powered Search Generative Experience (SGE) is getting a major new feature: it will be able to summarize articles you’re reading on the web, according to a Google blog post. SGE can already summarize search results for you so that you don’t have to scroll forever to find what you’re looking for, and this new feature is designed to take that further by helping you out after you’ve actually clicked a link.


A Definitive Guide to Using Midjourney — from every.to by Lucas Crespo
Everything you need to know about generating AI Images

In this article, I’ll walk you through the most powerful and useful techniques I’ve come across. We’ll cover:

  • Getting started in Midjourney
  • Understanding Midjourney’s quirks with interpreting prompts
  • Customizing Midjourney’s image outputs after the fact
  • Experimenting with a range of styles and content
  • Uploading and combining images to make new ones via image injections
  • Brainstorming art options with parameters like “chaos” and “weird”
  • Finalizing your Midjourney output’s aspect ratio

And much more.


Report: Potential NYT lawsuit could force OpenAI to wipe ChatGPT and start over — from arstechnica.com by Ashley Belanger; via Misha da Vinci
OpenAI could be fined up to $150,000 for each piece of infringing content.

Weeks after The New York Times updated its terms of service (TOS) to prohibit AI companies from scraping its articles and images to train AI models, it appears that the Times may be preparing to sue OpenAI. The result, experts speculate, could be devastating to OpenAI, including the destruction of ChatGPT’s dataset and fines up to $150,000 per infringing piece of content.

NPR spoke to two people “with direct knowledge” who confirmed that the Times’ lawyers were mulling whether a lawsuit might be necessary “to protect the intellectual property rights” of the Times’ reporting.


Midjourney Is Easily Tricked Into Making AI Misinformation, Study Finds — from bloomberg.com (paywall)


AI-generated art cannot be copyrighted, rules a US Federal Judge — from msn.com by Wes Davis; via Tom Barrett


Do you want to Prepare your Students for the AI World? Support your Speech and Debate Team Now — from stefanbauschard.substack.com by Stefan Bauschard
Adding funding to the debate budget is a simple and immediate step administrators can take as part of developing a school’s “AI Strategy.”

 
 

Will one of our future learning ecosystems look like a Discord server type of service? [Christian]

 


How to spot deepfakes created by AI image generators | Can you trust your eyes? | The deepfake election — from axios.com by various; via Tom Barrett

As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.

Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.

Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz:


4 Charts That Show Why AI Progress Is Unlikely to Slow Down — from time.com; with thanks to Donald Clark out on LinkedIn for this resource


The state of AI in 2023: Generative AI’s breakout year — from McKinsey.com

Table of Contents

  1. It’s early days still, but use of gen AI is already widespread
  2. Leading companies are already ahead with gen AI
  3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  4. With all eyes on gen AI, AI adoption and impact remain steady
  5. About the research

Top 10 Chief AI Officers — from aimagazine.com

The Chief AI Officer is a relatively new job role, yet becoming increasingly more important as businesses invest further into AI.

Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this type of evolving technology can provide. This job position sees the employee responsible for guiding companies through complex AI tools, algorithms and development. All of this works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.


NVIDIA-related items

SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announces updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, updates NVIDIA Omniverse with generative AI.

The hottest commodity in AI right now isn’t ChatGPT — it’s the $40,000 chip that has sparked a frenzied spending spree — from businessinsider.com by Hasan Chowdhury

NVIDIA Releases Major Omniverse Upgrade with Generative AI and OpenUSD — from enterpriseai.news

Nvidia teams up with Hugging Face to offer cloud-based AI training — from techcrunch.com by Kyle Wiggers

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ — from cnbc.com by Kif Leswing

KEY POINTS

  • Nvidia announced a new chip designed to run artificial intelligence models on Tuesday.
  • Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
  • “This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.

Nvidia Has A Monopoly On AI Chips … And It’s Only Growing — from theneurondaily.com by The Neuron

In layman’s terms: Nvidia is on fire, and they’re only turning up the heat.


AI-Powered War Machines: The Future of Warfare Is Here — from readwrite.com by Deanna Ritchie

The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.

From DSC:
Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.


Sam Altman is already nervous about what AI might do in elections — from qz.com by Faustine Ngila; via Sam DeBrule
The OpenAI chief warned about the power of AI-generated media to potentially influence the vote

Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.


Your guide to AI: August 2023 — from nathanbenaich.substack.com by Nathan Benaich

Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.

Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.


Why generative AI is a game-changer for customer service workflows — from venturebeat.com via Superhuman

Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.

BUT ALSO

Sam Altman: “AI Will Replace Customer Service Jobs First” — from theneurondaily.com

Excerpt:

Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU.  All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.

The use cases are endless, but here are two immediate ones:

  1. Hyperpersonalized content.
    Imagine your favorite Netflix show but with every person hearing a slightly different script.
  2. Customer support agents. 
    We’re talking about ones that are actually helpful, a far cry from the norm!


AI has a Usability Problem — from news.theaiexchange.com
Why ChatGPT usage may actually be declining; using AI to become a spreadsheet pro

If you’re reading this and are using ChatGPT on a daily basis, congrats – you’re likely in the top couple of %.

For everyone else – AI still has a major usability problem.

From DSC:
Agreed.



From the ‘godfathers of AI’ to newer people in the field: Here are 16 people you should know — and what they say about the possibilities and dangers of the technology. — from businessinsider.com by Lakshmi Varanasi


 

Excerpts from the Too Long Didn’t Read (TLDR) section from AIxEducation Day 1: My Takeaways — from stefanbauschard.substack.com by Stefan Bauschard (emphasis DSC)

* There was a lot of talk about learning bots. This talk included the benefits of 1:1 tutoring, access to education for those who don’t currently have it (developing world), the ability to do things for which we currently don’t have enough teachers and support staff (speech pathology), individualized instruction (it will be good at this soon), and stuff that it is already good at (24/7 availability, language tutoring, immediate feedback regarding argumentation and genre (not facts :), putting students on the right track, comprehensive feedback, more critical feedback).

* Students are united. The student organizers and those who spoke at the conference have concerns about future employment, want to learn to use generative AI, and express concern about being prepared for the “real world.” They also all want a say in how generative AI is used in the college classroom. Many professors spoke about the importance of having conversations with students and involving them in the creation of AI policies as well.

* I think it’s fair to say that all professors who spoke thought students were going to use generative AI regardless of whether or not it was permitted, though some hoped for honesty.

* No professor who spoke thought using a plagiarism detector was a good idea.

* Everyone thought that significant advancements in AI technology were inevitable.

* Almost everyone expressed being overwhelmed by the rate of change.


Stefan recommended the following resource:


 
© 2025 | Daniel Christian