Introductory comments from DSC:

Sometimes people and vendors write about AI’s capabilities in such glowingly positive terms that it seems AI can do everything in the world. And while I appreciate the growing capabilities of Large Language Models (LLMs) and the like, there are some things I don’t want AI-driven apps to do.

For example, I get why AI can be helpful in correcting my misspellings, my grammatical errors, and the like. That said, I don’t want AI to write my emails for me. I want to write my own emails. I want to communicate what I want to communicate. I don’t want to outsource my communication. 

And what if an AI tool summarizes an email series in a way that causes me to miss some key pieces of information? Hmmm…not good.

Ok, enough soapboxing. I’ll continue with some resources.


ChatGPT Enterprise

Introducing ChatGPT Enterprise — from openai.com
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.

We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.



NVIDIA

Nvidia’s Q2 earnings prove it’s the big winner in the generative AI boom — from techcrunch.com by Kirsten Korosec

Nvidia Quarterly Earnings Report Q2 Smashes Expectations At $13.5B — from techbusinessnews.com.au
Nvidia’s quarterly earnings report (Q2) smashed expectations, coming in at $13.5B and more than doubling the prior figure of $6.7B. The chipmaker also projected that revenue for the quarter ending in October would reach roughly $16B.


MISC

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending — from theinformation.com by Amir Efrati and Aaron Holmes

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That’s far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

OpenAI’s GPTBot blocked by major websites and publishers — from the-decoder.com by Matthias Bastian
An emerging chatbot ecosystem builds on existing web content and could displace traditional websites. At the same time, licensing and financing are largely unresolved.

OpenAI offers publishers and website operators an opt-out if they prefer not to make their content available to chatbots and AI models for free. This can be done by blocking OpenAI’s web crawler “GPTBot” via the robots.txt file. The bot collects content to improve future AI models, according to OpenAI.
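For site operators, the opt-out amounts to a short robots.txt entry. The directive below is the one OpenAI documents for blocking GPTBot site-wide; publishers can scope it to specific paths instead if they prefer:

  User-agent: GPTBot
  Disallow: /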

Major media companies including the New York Times, CNN, Reuters, Chicago Tribune, ABC, and Australian Community Media (ACM) are now blocking GPTBot. Other web-based content providers such as Amazon, Wikihow, and Quora are also blocking the OpenAI crawler.

Introducing Code Llama, a state-of-the-art large language model for coding  — from ai.meta.com

Takeaways re: Code Llama:

  • Is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.
  • Is free for research and commercial use.
  • Is built on top of Llama 2 and is available in three models…
  • In our own benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs on code tasks

Key Highlights of Google Cloud Next ’23 — from analyticsindiamag.com by Shritama Saha
Meta’s Llama 2, Anthropic’s Claude 2, and TII’s Falcon join Model Garden, expanding model variety.

AI finally beats humans at a real-life sport — drone racing — from nature.com by Dan Fox
The new system combines simulation with onboard sensing and computation.

From DSC:
This is scary — not at all comforting to me. Militaries around the world continue their jockeying to be the most dominant, powerful, and effective killers of humankind. That definitely includes the United States and China. But certainly others as well. And below is another alarming item, also pointing out the downsides of how we use technologies.

The Next Wave of Scams Will Be Deepfake Video Calls From Your Boss — from bloomberg.com by Margi Murphy; behind paywall

Cybercriminals are constantly searching for new ways to trick people. One of the more recent additions to their arsenal was voice simulation software.

10 Great Colleges For Studying Artificial Intelligence — from forbes.com by Sim Tumay

The debut of ChatGPT in November created angst for college admission officers and professors worried they would be flooded by student essays written with the undisclosed assistance of artificial intelligence. But the explosion of interest in AI has benefits for higher education, including a new generation of students interested in studying and working in the field. In response, universities are revising their curriculums to educate AI engineers.

 

OpenAI angles to put ChatGPT in classrooms with special tutor prompts — from techcrunch.com by Devin Coldewey

Taking the bull by the horns, the company has proposed a few ways for teachers to put the system to use… outside its usual role as “research assistant” for procrastinating students.

Teaching with AI -- a guide from OpenAI


Q2 Earnings Roundup – EdTech Generative AI — from aieducation.substack.com by Claire Zau
A roundup of LLM and AI discussions from Q2 EdTech Earnings

In this piece, we’ll be breaking down how a few of edtech’s most important companies are thinking about AI developments.

  • Duolingo
  • PowerSchool
  • Coursera
  • Docebo
  • Instructure
  • Nerdy
 

Don’t Be Fooled: How You Can Master Media Literacy in the Digital Age — from youtube.com by Professor Sue Ellen Christian

During this special keynote presentation, Western Michigan University (WMU) professor Sue Ellen Christian speaks about the importance of media literacy for all ages and how we can help educate our friends and families about media literacy principles. Hosted by the Grand Rapids Public Library and GRTV, a program of the Grand Rapids Community Media Center. Special thanks to the Grand Rapids Public Library Foundation for their support of this program.

Excerpts:

Media literacy is the ability to access, analyze, evaluate, and create media in a variety of forms. — Center for Media Literacy

5 things to do when confronted with concerns about content.


Also relevant/see:

Kalamazoo Valley Museum’s newest exhibit teaches community about media literacy — from mlive.com by Gabi Broekema

 


ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs announced that its AI voice generator is out of beta, saying it will support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model promises it can produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: one is a text-to-speech model, and the other is “VoiceLab,” which lets paying users clone a voice by inputting fragments of their (or others’) speech into the model to create a kind of voice clone. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

But Hugging Face produces a platform where AI developers can share code, models, data sets, and use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.
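To make that concrete, here is a minimal sketch of the “get a model working quickly” workflow described above, using Hugging Face’s open-source transformers library. The specific model ID is just one example of the many community-hosted checkpoints on the Hub; any compatible model ID would work:

  from transformers import pipeline

  # Downloads the model weights and tokenizer from the Hugging Face Hub
  # on first use, then runs inference locally.
  classifier = pipeline(
      "sentiment-analysis",
      model="distilbert-base-uncased-finetuned-sst-2-english",
  )

  print(classifier("Hugging Face makes it easy to try open models."))
  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]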


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, they also need to work more with local tech schools, vocational schools, and community colleges; and other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlayListAI (previously LineupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve. You just have to type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip.

If you can’t think of a prompt, the app has prompts in different categories, including video, lo-fi, podcast, gaming, meditation, and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 

Will one of our future learning ecosystems look like a Discord server type of service? [Christian]

 

What value do you offer? — from linkedin.com by Dan Fitzpatrick — The AI Educator

Excerpt (emphasis DSC): 

So, as educators, mentors, and guides to our future generations, we must ask ourselves three pivotal questions:

  1. What value do we offer to our students?
  2. What value will they need to offer to the world?
  3. How are we preparing them to offer that value?

The answers to these questions are crucial, and they will redefine the trajectory of our education system.

We need to create an environment that encourages curiosity, embraces failure as a learning opportunity, and celebrates diversity. We need to teach our students how to learn, how to ask the right questions, and how to think for themselves.


AI 101 for Teachers



5 Little-Known ChatGPT Prompts to Learn Anything Faster — from medium.com by Eva Keiffenheim
Including templates, you can copy.

Leveraging ChatGPT for learning is the most meaningful skill this year for lifelong learners. But it’s too hard to find resources to master it.

As a learning science nerd, I’ve explored hundreds of prompts over the past months. Most of the advice doesn’t go beyond text summaries and multiple-choice testing.

That’s why I’ve created this article — it merges learning science with prompt writing to help you learn anything faster.


From DSC:
This is a very nice, clearly illustrated, free video to get started with the Midjourney (text-to-image) app. Nice work Dan!

Also see Dan’s
AI Generated Immersive Learning Series


What is Academic Integrity in the Era of Generative Artificial intelligence? — from silverliningforlearning.org by Chris Dede

In the new-normal of generative AI, how does one articulate the value of academic integrity? This blog presents my current response in about 2,500 words; a complete answer could fill a sizable book.

Massive amounts of misinformation are disseminated about generative AI, so the first part of my discussion clarifies what large language models (ChatGPT and its counterparts) can currently do and what they cannot accomplish at this point in time. The second part describes ways in which generative AI can be misused as a means of learning; unfortunately, many people are now advocating for these mistaken applications to education. The third part describes ways in which large language models (LLMs), used well, may substantially improve learning and education. I close with a plea for a robust, informed public discussion about these topics and issues.


Dr. Chris Dede and the Necessity of Training Students and Faculty to Improve Their Human Judgment and Work Properly with AIs — from stefanbauschard.substack.com by Stefan Bauschard
We need to stop using test-driven curriculums that train students to listen and to compete against machines, a competition they cannot win. Instead, we need to help them augment their Judgment.


The Creative Ways Teachers Are Using ChatGPT in the Classroom — from time.com by Olivia B. Waxman

Many of the more than a dozen teachers TIME interviewed for this story argue that the way to get kids to care is to proactively use ChatGPT in the classroom.

Some of those creative ideas are already in effect at Peninsula High School in Gig Harbor, about an hour from Seattle. In Erin Rossing’s precalculus class, a student got ChatGPT to generate a rap about vectors and trigonometry in the style of Kanye West, while geometry students used the program to write mathematical proofs in the style of raps, which they performed in a classroom competition. In Kara Beloate’s English-Language Arts class, she allowed students reading Shakespeare’s Othello to use ChatGPT to translate lines into modern English to help them understand the text, so that they could spend class time discussing the plot and themes.


AI in Higher Education: Aiding Students’ Academic Journey — from td.org by J. Chris Brown

Topics/sections include:

  • Automatic Grading and Assessment
  • AI-Assisted Student Support Services
  • Intelligent Tutoring Systems
  • AI Can Help Both Students and Teachers


Shockwaves & Innovations: How Nations Worldwide Are Dealing with AI in Education — from the74million.org by Robin Lake
Lake: Other countries are quickly adopting artificial intelligence in schools. Lessons from Singapore, South Korea, India, China, Finland and Japan.

I found that other developed countries share concerns about students cheating but are moving quickly to use AI to personalize education, enhance language lessons and help teachers with mundane tasks, such as grading. Some of these countries are in the early stages of training teachers to use AI and developing curriculum standards for what students should know and be able to do with the technology.

Several countries began positioning themselves several years ago to invest in AI in education in order to compete in the fourth industrial revolution.


AI in Education — from educationnext.org by John Bailey
The leap into a new era of machine intelligence carries risks and challenges, but also plenty of promise

In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.


Generative AI and education futures — from ucl.ac.uk
Video highlights from Professor Mike Sharples’ keynote address at the 2023 UCL Education Conference, which explored opportunities to prosper with AI as a part of education.


Bringing AI Literacy to High Schools — by Nikki Goth Itoi
Stanford education researchers collaborated with teachers to develop classroom-ready AI resources for high school instructors across subject areas.

To address these two imperatives, all high schools need access to basic AI tools and training. Yet the reality is that many underserved schools in low-income areas lack the bandwidth, skills, and confidence to guide their students through an AI-powered world. And if the pattern continues, AI will only worsen existing inequities. With this concern top of mind plus initial funding from the McCoy Ethics Center, Lee began recruiting some graduate students and high school teachers to explore how to give more people equal footing in the AI space.


 


How to spot deepfakes created by AI image generators | Can you trust your eyes? | The deepfake election — from axios.com by various; via Tom Barrett

As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.

Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.

Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz:


4 Charts That Show Why AI Progress Is Unlikely to Slow Down — from time.com; with thanks to Donald Clark out on LinkedIn for this resource


The state of AI in 2023: Generative AI’s breakout year — from McKinsey.com

Table of Contents

  1. It’s early days still, but use of gen AI is already widespread
  2. Leading companies are already ahead with gen AI
  3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  4. With all eyes on gen AI, AI adoption and impact remain steady
  5. About the research

Top 10 Chief AI Officers — from aimagazine.com

The Chief AI Officer is a relatively new job role, yet it is becoming increasingly important as businesses invest further in AI.

Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this type of evolving technology can provide. This job position sees the employee responsible for guiding companies through complex AI tools, algorithms and development. All of this works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.


NVIDIA-related items

SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announces updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, updates NVIDIA Omniverse with generative AI.

The hottest commodity in AI right now isn’t ChatGPT — it’s the $40,000 chip that has sparked a frenzied spending spree — from businessinsider.com by Hasan Chowdhury

NVIDIA Releases Major Omniverse Upgrade with Generative AI and OpenUSD — from enterpriseai.news

Nvidia teams up with Hugging Face to offer cloud-based AI training — from techcrunch.com by Kyle Wiggers

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ — from cnbc.com by Kif Leswing

KEY POINTS

  • On Tuesday, Nvidia announced a new chip designed to run artificial intelligence models.
  • Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
  • “This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.

Nvidia Has A Monopoly On AI Chips … And It’s Only Growing — from theneurondaily.com by The Neuron

In layman’s terms: Nvidia is on fire, and they’re only turning up the heat.


AI-Powered War Machines: The Future of Warfare Is Here — from readwrite.com by Deanna Ritchie

The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.

From DSC:
Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.


Sam Altman is already nervous about what AI might do in elections — from qz.com by Faustine Ngila; via Sam DeBrule
The OpenAI chief warned about the power of AI-generated media to potentially influence the vote

Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.


Your guide to AI: August 2023 — from nathanbenaich.substack.com by Nathan Benaich

Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.

Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.


Why generative AI is a game-changer for customer service workflows — from venturebeat.com via Superhuman

Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.
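As a rough sketch of that workflow (not any particular vendor’s product), the pattern is: retrieve the knowledge-base snippets most relevant to the agent’s question, then have a language model condense them into one answer. In the sketch below, summarize_with_llm is a placeholder for whatever model or API a team actually uses, and the keyword-overlap retrieval stands in for a real vector search:

  from typing import List

  KNOWLEDGE_BASE = [
      "Refunds for annual plans are prorated and processed within 5 business days.",
      "To reset a customer's password, verify identity, then use Admin > Users > Reset.",
      "Shipping delays of more than 10 days qualify the customer for a store credit.",
  ]

  def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
      # Naive keyword-overlap ranking; production systems use embeddings / vector search.
      terms = set(query.lower().split())
      ranked = sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
      return ranked[:k]

  def summarize_with_llm(question: str, context: List[str]) -> str:
      # Placeholder: in practice, send the question and context to an LLM
      # and return its neatly summarized answer.
      return f"Suggested answer to '{question}', based on: " + " | ".join(context)

  question = "How long do refunds on annual plans take?"
  print(summarize_with_llm(question, retrieve(question, KNOWLEDGE_BASE)))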

BUT ALSO

Sam Altman: “AI Will Replace Customer Service Jobs First” — from theneurondaily.com

Excerpt:

Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU.  All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.

The use cases are endless, but here are two immediate ones:

  1. Hyperpersonalized content.
    Imagine your favorite Netflix show but with every person hearing a slightly different script.
  2. Customer support agents. 
    We’re talking about ones that are actually helpful, a far cry from the norm!


AI has a Usability Problem — from news.theaiexchange.com
Why ChatGPT usage may actually be declining; using AI to become a spreadsheet pro

If you’re reading this and are using ChatGPT on a daily basis, congrats – you’re likely in the top couple of %.

For everyone else – AI still has a major usability problem.

From DSC:
Agreed.



From the ‘godfathers of AI’ to newer people in the field: Here are 16 people you should know — and what they say about the possibilities and dangers of the technology. — from businessinsider.com by Lakshmi Varanasi


 

Excerpts from the Too Long Didn’t Read (TLDR) section from AIxEducation Day 1: My Takeaways — from stefanbauschard.substack.com by Stefan Bauschard (emphasis DSC)

* There was a lot of talk about learning bots. This talk included the benefits of 1:1 tutoring, access to education for those who don’t currently have it (developing world), the ability to do things for which we currently don’t have enough teachers and support staff (speech pathology), individualized instruction (it will be good at this soon), and stuff that it is already good at (24/7 availability, language tutoring, immediate feedback regarding argumentation and genre (not facts :), putting students on the right track, comprehensive feedback, more critical feedback).

* Students are united. The student organizers and those who spoke at the conference have concerns about future employment, want to learn to use generative AI, and express concern about being prepared for the “real world.” They also all want a say in how generative AI is used in the college classroom. Many professors spoke about the importance of having conversations with students and involving them in the creation of AI policies as well.

* I think it’s fair to say that all professors who spoke thought students were going to use generative AI regardless of whether or not it was permitted, though some hoped for honesty.

* No professor who spoke thought using a plagiarism detector was a good idea.

* Everyone thought that significant advancements in AI technology were inevitable.

* Almost everyone expressed being overwhelmed by the rate of change.


Stefan recommended the following resource:


 

InstructureCon 23 Conference Notes — from onedtech.beehiiv.com by Phil Hill

The company is increasingly emphasizing its portfolio of products built around the Canvas LMS, what they call the Instructure Unified Learning Platform. Perhaps the strongest change in message is the increased emphasis on the EdTech Collective, Instructure’s partner ecosystem. In fact, two of the three conference press releases were on the ecosystem – describing the 850 partners as “a larger partner community than any other LMS provider” and announcing a partnership with Khan Academy with its Khanmigo AI-based tutoring and teaching assistant tool (more on generative AI approach below).

Anthology Together 23 Conference Notes — from philhillaa.com by Glenda Morgan

The Anthology conference, held from July 17-19, marked the second gathering since Blackboard ceased operating as a standalone company and transformed into a brand for a product line.

What stood out was not just the number of added features but the extent to which these enhancements were driven by customer input. There has been a noticeable shift in how Anthology listens to clients, which had been a historical weakness for Blackboard. This positive change was emphasized not only by Anthology executives, but more importantly by customers themselves, even during unscripted side conversations.

D2L Fusion 23 Conference Notes — from onedtech.beehiiv.com

D2L is a slow-burn company, and over the past eight years that has been in a good way. The company started working on its move to the cloud, tied to its user experience redesign as Brightspace, in 2014. Five years later, the company’s LMS was essentially all cloud (with one or two client exceptions). More importantly, D2L Brightspace in this time period became fully competitive with Instructure Canvas, winning head-to-head competitions not just due to specialized features but more broadly in terms of general system usability and intuitive design. That multi-year transformation is significant, particularly for a founder-led company.

 


Gen-AI Movie Trailer For Sci Fi Epic “Genesis” — from forbes.com by Charlie Fink

The movie trailer for “Genesis,” created with AI, is so convincing it caused a stir on Twitter [on July 27]. That’s how I found out about it. Created by Nicolas Neubert, a senior product designer who works for Elli by Volkswagen in Germany, the “Genesis” trailer promotes a dystopian sci-fi epic reminiscent of the Terminator. There is no movie, of course, only the trailer exists, but this is neither a gag nor a parody. It’s in a class of its own. Eerily made by man, but not.



Google’s water use is soaring. AI is only going to make it worse. — from businessinsider.com by Hugh Langley

Google just published its 2023 environmental report, and one thing is for certain: The company’s water use is soaring.

The internet giant said it consumed 5.6 billion gallons of water in 2022, the equivalent of 37 golf courses. Most of that — 5.2 billion gallons — was used for the company’s data centers, a 20% increase on the amount Google reported the year prior.


We think prompt engineering (learning to converse with an AI) is overrated. — from the Neuron

We think prompt engineering (learning to converse with an AI) is overrated. Yup, we said it. We think the future of chat interfaces will be a combination of preloading context and then allowing AI to guide you to the information you seek.

From DSC:
Agreed. I think we’ll see a lot more interface updates and changes to make things easier to use, find, develop.


Radar Trends to Watch: August 2023 — from oreilly.com by Mike Loukides
Developments in Programming, Web, Security, and More

Artificial Intelligence continues to dominate the news. In the past month, we’ve seen a number of major updates to language models: Claude 2, with its 100,000 token context limit; LLaMA 2, with (relatively) liberal restrictions on use; and Stable Diffusion XL, a significantly more capable version of Stable Diffusion. Does Claude 2’s huge context really change what the model can do? And what role will open access and open source language models have as commercial applications develop?


Try out Google ‘TextFX’ and its 10 creative AI tools for rappers, writers — from 9to5google.com by Abner Li; via Barsee – AI Valley 

Google Lab Sessions are collaborations between “visionaries from all realms of human endeavor” and the company’s latest AI technology. [On 8/2/23], Google released TextFX as an “experiment to demonstrate how generative language technologies can empower the creativity and workflows of artists and creators” with Lupe Fiasco.

Google’s TextFX includes 10 tools and is powered by the PaLM 2 large language model via the PaLM API. Meant to aid in the creative process of rappers, writers, and other wordsmiths, it is part of Google Labs.

 

The future of learning and skilling with AI in the picture — from chieflearningofficer.com by Janice Burns
Janice Burns, chief transformation officer at Degreed, looks at how AI is impacting the future of learning and skilling.

Sections include:

  • Saving L&D time
  • Recommending and personalizing
  • ‘As you need it’ learning 
  • A career coach for everyone?
  • More advances coming
  • Be mindful of the limitations
  • Remain open to the changes coming

Also relevant/see:


Who Will Train Digital (Legal) Talent At Scale? — from forbes.com by Mark A. Cohen

Excerpt (emphasis DSC):

The urgency to fill existing and prospective positions with digital talent and to upskill those already in the workforce are among the reasons why leading companies have boldly assessed and transformed their enterprise talent management strategies. Some key initiatives leading companies are undertaking include:

  • Direct involvement by the C-Suite in the formulation of the enterprise talent strategy and lifecycle;
  • A paradigmatic hiring shift from diplomas to skills;
  • Increased investment in upskilling and career advancement to promote retention and to identify high-performers early on;
  • Targeted collaboration with universities focused on training in areas of existing and projected talent supply and demand;
  • Promoting a learning-for-life mindset and encouraging creative thinking, cross-cultural collaboration, and forging a culture that values these and other humanistic values.
  • Collaborating with other companies to create joint solutions for fulfilling skill demand

Practical, powerful employee education: How interactivity supports greater learning online — from chieflearningofficer.com by Natasha Nicholson

Consider this comparison: In more passive online learning, a participant will learn primarily by listening, watching and observing. Conversely, in an interactive model, the participant will be expected to engage with a story or situation by being asked to make choices that will show potential consequences.

Here are some of the elements that, when combined, make interactive learning especially effective:

 

Navigating the Future of Learning in a Digitally-Disrupted World — from thinklearningstudio.org by Russell Cailey

Are we on the frontier of unveiling an unseen revolution in education? The hypothesis is that this quiet upheaval’s importance is far more significant than we imagine. As our world adjusts, restructures, and emerges from a year which launched an era of mass AI, so too does a new academic year dawn for many – with hope and enthusiasm about new roles, titles, or simply just a new mindset. Concealed from sight, however, I believe a significant transformative wave has started and will begin to reshape our education systems and push us into a new stage of innovative teaching practice whether we desire it or not. The risk and hope is that the quiet revolution remains outside the regulator’s and ministries’ purview, which could risk a dangerous fragmentation of education policy and practice, divorced from the actualities of the world ‘in and outside school’.

“This goal can be achieved through continued support for introducing more new areas of study, such as ‘foresight and futures’, in the high school classroom.”


Four directions for assessment redesign in the age of generative AI — from timeshighereducation.com by Julia Chen
The rise of generative AI has led universities to rethink how learning is quantified. Julia Chen offers four options for assessment redesign that can be applied across disciplines

Direction 1: From written description to multimodal explanation and application

Direction 2: From literature review alone to referencing lectures

Direction 3: From presentation of ideas to defence of views

Direction 4: From working alone to student-staff partnership




15 Inspirational Voices in the Space Between AI and Education — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Get Inspired for AI and The Future of Education.

If you are just back from vacation and still not quite sure what to do about AI, let me assure you that you are not the only one. My advice for you today is this: fill your LinkedIn feed and/or inbox with ideas, inspirational writing, and commentary on AI. This will get you up to speed quickly and is a great way to stay informed on the newest movements you need to be aware of.

My personal recommendation for you is to check out these bright people who are all very active on LinkedIn and/or have a newsletter worth paying attention to. I have kept the list fairly short – only 15 people – in order to make it as easy as possible for you to begin exploring.


Universities say AI cheats can’t be beaten, moving away from attempts to block AI (Australia) — from abc.net.au by Jake Evans

Key points:

  • Universities have warned against banning AI technologies in academia
  • Several say AI cheating in tests will be too difficult to stop, and it is more practical to change assessment methods
  • The sector says the entire nature of teaching will have to change to ensure students continue to effectively learn

aieducator.tools


Navigating A World of Generative AI: Suggestions for Educators — from nextlevellab.gse.harvard.edu by Lydia Cao and Chris Dede

Understanding the nature of generative AI is crucial for educators to navigate the evolving landscape of teaching and learning. In a new report from the Next Level Lab, Lydia Cao and Chris Dede reflect on the role of generative AI in learning and how this pushes us to reconceptualize our visions of effective education. Though there are concerns of plagiarism and replacement of human jobs, Cao and Dede argue that a more productive way forward is for educators to focus on demystifying AI, emphasizing the learning process over the final product, honoring learner agency, orchestrating multiple sources of motivation, cultivating skills that AI cannot easily replicate, and fostering intelligence augmentation (IA) through building human-AI partnerships.



20 CHATGPT PROMPTS FOR ELA TEACHERS — from classtechtips.com by Dr. Monica Burns

Have you used chatbots to save time this school year? ChatGPT and generative artificial intelligence (AI) have changed the way I think about instructional planning. Today on the blog, I have a selection of ChatGPT prompts for ELA teachers.

You can use chatbots to tackle tedious tasks, gather ideas, and even support your work to meet the needs of every student. In my recent quick reference guide published by ISTE and ASCD, Using AI Chatbots to Enhance Planning and Instruction, I explore this topic. You can also find 50 more prompts for educators in this free ebook.


Professors Craft Courses on ChatGPT With ChatGPT — from insidehighered.com by Lauren Coffey
While some institutions are banning the use of the new AI tool, others are leaning into its use and offering courses dedicated solely to navigating the new technology.

Maynard, along with Jules White at Vanderbilt University, is among a small number of professors launching courses focused solely on teaching students across disciplines to better navigate AI and ChatGPT.

The offerings go beyond institutions flexing their innovation skills—the faculty behind these courses view them as imperative to ensure students are prepared for ever-changing workforce needs.


GPT-4 can already pass freshman year at Harvard | professors need to adapt to their students’ new reality — fast — from chronicle.com by Maya Bodnick (an undergraduate at Harvard University, studying government)

A. A. A-. B. B-. Pass.

That’s a solid report card for a freshman in college, a respectable 3.57 GPA. I recently finished my freshman year at Harvard, but those grades aren’t mine — they’re GPT-4’s.

Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)

Here are the prompts with links to the essays, the names of instructors, and the grades each essay received…

The impact that AI is having on liberal-arts homework is indicative of the AI threat to the career fields that liberal-arts majors tend to enter. So maybe what we should really be focused on isn’t, “How do we make liberal-arts homework better?” but rather, “What are jobs going to look like over the next 10–20 years, and how do we prepare students to succeed in that world?”



The great assessment rethink — from timeshighereducation.com
How to measure learning and protect academic integrity in the age of ChatGPT

Items from Times Higher Education re: redesigning assessment

 

A cam/mic/light/teleprompter remote kit for non-tech-savvy guests, including Shure MV7 — from provideocoalition.com by Allan Tépper

Excerpt (emphasis DSC):

Inspired by my recent Review: Shure MV7 dynamic hybrid studio microphone – near, far and beyond, Beaker Films of Fairfield, Connecticut, US has developed and deployed a first batch of 10 kits to capture remote conversations from different locations worldwide. Beaker Films is frequently contracted to record remote interviews or testimonials from medical professionals. For this project, Beaker Films’ clients wanted consistent, high quality audio and video, but with 3 additional challenges: they preferred to have no visible microphone in the shot, they needed a teleprompter function and the whole kit needed to be as simple as possible for non-technical guests.




Speaking of A/V-related items, also see:

Seven worlds one planet at the BBC Earth Experience — from inavateonthenet.net by Paul Milligan

‘Holographic’ animal-free zoo opens in Australia — from inavateonthenet.net

XR Lab opens in UK college — from inavateonthenet.net

West Suffolk College in the UK has opened its Extended Reality Lab (XR Lab). The facilities comprise four distinct areas: an Immersion Lab, a Collaboration Theatre, a Green Room, and a Conference Room. The project was designed by architects WindsorPatania for Eastern Colleges Group.

CJP to create virtual studio for Solent University — from inavateonthenet.net

Systems integrator CJP Broadcast Service Solutions, has won a tender to build a virtual production environment for Solent University in the UK.

The new facilities, converted from an existing studio space, will provide students on the film production courses with outstanding opportunities to develop their creative output.

 

Camera fixed on a surgery being used to provide remote learning and feeds

Learning Experience — from inavateemea.com by Tim Kridel

“Some of the stuff we’re doing is creating templates and workflows that capture multiple feeds: not just the teacher, [but also] the white board, an overhead camera,” Risby says.

“The student can then go in and pick what they look at, so it’s more interactive. You might be watching it the first time to listen to the lecturer, but you might watch the second time to concentrate on the experiment. It makes the stream more valuable.”

 

Partnership with American Journalism Project to support local news — from openai.com; via The Rundown AI
A new $5+ million partnership aims to explore ways the development of artificial intelligence (AI) can support a thriving, innovative local news field, and ensure local news organizations shape the future of this emerging technology.


SEC’s Gensler Warns AI Risks Financial Stability — from bloomberg.com by Lydia Beyoud; via The Brainyacts
SEC on lookout for fraud, conflicts of interest, chair says | Gensler cautions companies touting AI in corporate docs


Per a recent Brainyacts posting:

The recent petition from Kenyan workers who engage in content moderation for OpenAI’s ChatGPT, via the intermediary company Sama, has opened a new discussion in the global legal market. This dialogue surrounds the concept of “harmful and dangerous technology work” and its implications for laws and regulations within the expansive field of AI development and deployment.

The petition, asking for investigations into the working conditions and operations of big tech companies outsourcing services in Kenya, is notable not just for its immediate context but also for the broader legal issues it raises. Central among these is the notion of “harmful and dangerous technology work,” a term that encapsulates the uniquely modern form of labor involved in developing and ensuring the safety of AI systems.

The most junior data labelers, or agents, earned a basic salary of 21,000 Kenyan shillings ($170) per month, with monthly bonuses and commissions for meeting performance targets that could elevate their hourly rate to just $1.44 – a far cry from the $12.50 hourly rate that OpenAI paid Sama for their work. This discrepancy raises crucial questions about the fair distribution of economic benefits in the AI value chain.


How ChatGPT Code Interpreter (And Four Other AI Initiatives) Might Revolutionize Education — from edtechinsiders.substack.com by Phuong Do, Alex Sarlin, and Sarah Morin
And more on Meta’s Llama, education LLMs, the Supreme Court affirmative action ruling, and Byju’s continued unraveling

Let’s put it all together for emphasis. With Code Interpreter by ChatGPT, you can:

  1. Upload any file
  2. Tell ChatGPT what you want to do with it
  3. Receive your instructions translated into Python
  4. Execute the code
  5. Transform the output back into readable language (or visuals, charts, graphs, tables, etc.)
  6. Provide the results (and the underlying Python code)
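As a hypothetical illustration of steps 3 through 5, uploading a file called sales.csv and asking for “total revenue by month, plus a chart” might be translated into Python roughly like this (the file name and column names are invented for the example):

  import pandas as pd
  import matplotlib.pyplot as plt

  # Hypothetical upload: a CSV with "date" and "revenue" columns.
  df = pd.read_csv("sales.csv", parse_dates=["date"])

  # Aggregate revenue by calendar month.
  monthly = df.groupby(df["date"].dt.to_period("M"))["revenue"].sum()

  print(monthly.to_string())          # readable table for the chat reply
  monthly.plot(kind="bar", title="Monthly revenue")
  plt.tight_layout()
  plt.savefig("monthly_revenue.png")  # chart returned alongside the summary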


AI Tools and Links — from Wally Boston

It’s become so difficult to track AI tools as they are revealed. I’ve decided to create a running list of tools as I find out about them.  The list is in alphabetical order even though there are classification systems that I’ve seen others use. Although it’s not good in blogging land to update posts, I’ll change the date every time that I update this list. Please feel free to respond to me with your comments about any of these as well as AI tools that you use that I do not have on the list. I’ll post your comments next to a tool when appropriate. Thanks.


Meet Claude — A helpful new AI assistant — from wondertools.substack.com by Jeremy Caplan
How to make the most of ChatGPT’s new alternative

Claude has surprising capabilities, including a couple you won’t find in the free version of ChatGPT.

Since this new AI bot launched on July 11, I’ve found Claude useful for summarizing long transcripts, clarifying complex writings, and generating lists of ideas and questions. It also helps me put unstructured notes into orderly tables. For some things, I prefer Claude to ChatGPT. Read on for Claude’s strengths and limitations, and ideas for using it creatively.

Claude’s free version allows you to attach documents for analysis. ChatGPT’s doesn’t.


The Next Frontier For Large Language Models Is Biology — from forbes.com by Rob Toews

Large language models like GPT-4 have taken the world by storm thanks to their astonishing command of natural language. Yet the most significant long-term opportunity for LLMs will entail an entirely different type of language: the language of biology.

In the near term, the most compelling opportunity to apply large language models in the life sciences is to design novel proteins.



Seven AI companies agree to safeguards in the US — from bbc.com by Shiona McCallum; via Tom Barrett

Seven leading companies in artificial intelligence have committed to managing risks posed by the tech, the White House has said.

This will include testing the security of AI, and making the results of those tests public.

Representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI joined US President Joe Biden to make the announcement.

 
© 2024 | Daniel Christian