Future of Work Report: AI at Work — from economicgraph.linkedin.com; via Superhuman

The intersection of AI and the world of work: Not only are job postings increasing, but we’re seeing more LinkedIn members around the globe adding AI skills to their profiles than ever before. We’ve seen a 21x increase in the share of global English-language job postings that mention new AI technologies such as GPT or ChatGPT since November 2022. In June 2023, the number of AI-skilled members was 9x larger than in January 2016, globally.

The state of play of Generative AI (GAI) in the workforce: GAI technologies, including ChatGPT, are poised to change the way we work. In fact, 47% of US executives believe that using generative AI will increase productivity, and 92% agree that people skills are more important than ever. This means jobs won’t necessarily go away, but they will change, as will the skills necessary to do them.

Also relevant/see:

The Working Future: More Human, Not Less — from bain.com
It’s time to change how we think about work

Contents

  • Introduction
  • Motivations for Work Are Changing
  • Beliefs about What Makes a “Good Job” Are Diverging
  • Automation Is Helping to Rehumanize Work
  • Technological Change Is Blurring the Boundaries of the Firm
  • Young Workers Are Increasingly Overwhelmed
  • Rehumanizing Work: The Journey Ahead
 

Nearly Half of Legal Professionals and Consumers Believe Generative AI Will Transform Law Practice, LexisNexis Survey Finds — from lawnext.com

A new international survey of lawyers, law students and consumers finds that nearly half believe generative AI will have a significant or transformative impact on the practice of law.

Conducted by LexisNexis and released this morning at ILTACON, the annual conference of the International Legal Technology Association, the survey polled 7,950 lawyers, law students and consumers in the U.S., U.K., Canada and France about their overall awareness of generative AI and their perspectives on its potential impact on the practice of law.

Also relevant/see:

Thomson Reuters Releases Report on Impact of AI on the Future of Legal Professionals — from deweybstrategic.com by Jean O’Grady

Thomson Reuters has released its Future of Professionals Report. The research was conducted during the months of May and June 2023 via an online survey. More than 1,200 professionals from the legal, tax and accounting, and risk professions employed by corporations, firms, and government agencies completed the survey.

Art generated by AI can’t be copyrighted, DC court says — from abajournal.com by Amanda Robert

Art created by artificial intelligence cannot receive copyright protection under U.S. law, a federal judge ruled last week in a case that could influence the outcomes of future disputes over authorship and intellectual property.

 

From DSC:
Yesterday, I posted the item about Google’s NotebookLM research tool. Excerpt:

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes.

That got me to thinking…

What if the presenter/teacher/professor/trainer/preacher provided a set of notes for the AI to compare to the readers’ notes? 

That way, the AI could see the discrepancies between what the presenter wanted their audience to learn/hear and what was actually being learned/heard. In a sort of digital Socratic Method, the AI could then generate some leading questions to get the audience member to check their thinking/understanding of the topic.

The end result would be that the main points were properly communicated/learned/received.
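To make that idea a bit more concrete, here is a minimal sketch of how such a comparison-and-question step might work. Everything in it (the sample notes, the naive gap check, and the stand-in model call) is an illustrative assumption on my part, not an existing feature of NotebookLM or any other tool.

```python
# Illustrative sketch only: compare a presenter's key points with a learner's notes,
# then ask an AI model to pose Socratic follow-up questions about whatever was missed.
from typing import Callable, List

def find_gaps(presenter_points: List[str], learner_notes: str) -> List[str]:
    """Return the presenter's points whose key terms never appear in the learner's notes (naive check)."""
    note_words = set(learner_notes.lower().replace(".", " ").split())
    gaps = []
    for point in presenter_points:
        key_terms = {w for w in point.lower().split() if len(w) > 4}
        if not key_terms & note_words:
            gaps.append(point)
    return gaps

def socratic_questions(gaps: List[str], llm: Callable[[str], str]) -> str:
    """Ask the model for leading questions that nudge the learner toward the missed points."""
    prompt = (
        "A learner's notes missed these points from the lesson:\n- " + "\n- ".join(gaps) +
        "\nFor each point, write one open-ended, Socratic question that leads the learner "
        "to reconsider it without giving the answer away."
    )
    return llm(prompt)

# Example usage with placeholder content and a stand-in model call:
if __name__ == "__main__":
    points = ["Photosynthesis converts light energy into chemical energy",
              "Chlorophyll absorbs mostly red and blue light"]
    notes = "The teacher talked about how plants use sunlight to make sugar."
    fake_llm = lambda p: "(questions from the model would appear here)"
    print(socratic_questions(find_gaps(points, notes), fake_llm))
```

In practice, the gap detection would need to be far smarter than keyword matching, but the overall loop (compare, find discrepancies, ask leading questions) stays the same.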

 

Google’s AI-powered note-taking app is the messy beginning of something great — from theverge.com by David Pierce; via AI Insider
NotebookLM is a neat research tool with some big ideas. It’s still rough and new, but it feels like Google is onto something.

Excerpts (emphasis DSC):

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes. 

Right now, it’s really just a prototype, but a small team inside the company has been trying to figure out what an AI notebook might look like.

 

***
From DSC:
Having come to legal education from various other areas of higher education back in 2017, I was *amazed* to see *how far behind* the rest of higher ed legal education was. And this is directly tied to what the American Bar Association allows (or doesn’t allow). The ABA has done a terrible job of helping Americans deal with today’s pace of change.

 

The Ready Player One Test: Systems for Personalized Learning — from gettingsmart.com by Dagan Bernstein

Key Points

  • The single narrative education system is no longer working.
  • Its main limitation is its inability to honor young people as the dynamic individuals that they are.
  • New models of teaching and learning need to be designed to center on the student, not the teacher.

When the opportunity arises to implement learning that uses immersive technology, ask yourself whether the learning you are designing passes the Ready Player One Test:

  • Does it allow learners to immerse themselves in environments that would be too expensive or dangerous to experience otherwise?
  • Can the learning be personalized by the student?
  • Is it regenerative?
  • Does it allow for learning to happen non-linearly, at any time and place?
 

From DSC: If this is true, how will we meet this type of demand?!?

RESKILLING NEEDED FOR 40% OF WORKFORCE BECAUSE OF AI, REPORT FROM IBM SAYS — from staffingindustry.com; via GSV

Generative AI will require skills upgrades for workers, according to a report from IBM based on a survey of executives from around the world. One finding: Business leaders say 40% of their workforces will need to reskill as AI and automation are implemented over the next three years. That could translate to 1.4 billion people in the global workforce who require upskilling, according to the company.

 

Will one of our future learning ecosystems look like a Discord server type of service? [Christian]

 

What value do you offer? — from linkedin.com by Dan Fitzpatrick — The AI Educator

Excerpt (emphasis DSC): 

So, as educators, mentors, and guides to our future generations, we must ask ourselves three pivotal questions:

  1. What value do we offer to our students?
  2. What value will they need to offer to the world?
  3. How are we preparing them to offer that value?

The answers to these questions are crucial, and they will redefine the trajectory of our education system.

We need to create an environment that encourages curiosity, embraces failure as a learning opportunity, and celebrates diversity. We need to teach our students how to learn, how to ask the right questions, and how to think for themselves.


AI 101 for Teachers



5 Little-Known ChatGPT Prompts to Learn Anything Faster — from medium.com by Eva Keiffenheim
Including templates you can copy.

Leveraging ChatGPT for learning is the most meaningful skill this year for lifelong learners. But it’s too hard to find resources to master it.

As a learning science nerd, I’ve explored hundreds of prompts over the past months. Most of the advice doesn’t go beyond text summaries and multiple-choice testing.

That’s why I’ve created this article — it merges learning science with prompt writing to help you learn anything faster.


From DSC:
This is a very nice, clearly illustrated, free video to get started with the Midjourney (text-to-image) app. Nice work, Dan!

Also see Dan’s
AI Generated Immersive Learning Series


What is Academic Integrity in the Era of Generative Artificial Intelligence? — from silverliningforlearning.org by Chris Dede

In the new-normal of generative AI, how does one articulate the value of academic integrity? This blog presents my current response in about 2,500 words; a complete answer could fill a sizable book.

Massive amounts of misinformation are disseminated about generative AI, so the first part of my discussion clarifies what large language models (Chat-GPT and its counterparts) can currently do and what they cannot accomplish at this point in time. The second part describes ways in which generative AI can be misused as a means of learning; unfortunately, many people are now advocating for these mistaken applications to education. The third part describes ways in which large language models (LLM), used well, may substantially improve learning and education. I close with a plea for a robust, informed public discussion about these topics and issues.


Dr. Chris Dede and the Necessity of Training Students and Faculty to Improve Their Human Judgment and Work Properly with AIs — from stefanbauschard.substack.com by Stefan Bauschard
We need to stop using test-driven curriculums that train students to listen and to compete against machines, a competition they cannot win. Instead, we need to help them augment their Judgment.


The Creative Ways Teachers Are Using ChatGPT in the Classroom — from time.com by Olivia B. Waxman

Many of the more than a dozen teachers TIME interviewed for this story argue that the way to get kids to care is to proactively use ChatGPT in the classroom.

Some of those creative ideas are already in effect at Peninsula High School in Gig Harbor, about an hour from Seattle. In Erin Rossing’s precalculus class, a student got ChatGPT to generate a rap about vectors and trigonometry in the style of Kanye West, while geometry students used the program to write mathematical proofs in the style of raps, which they performed in a classroom competition. In Kara Beloate’s English-Language Arts class, she allowed students reading Shakespeare’s Othello to use ChatGPT to translate lines into modern English to help them understand the text, so that they could spend class time discussing the plot and themes.


AI in Higher Education: Aiding Students’ Academic Journey — from td.org by J. Chris Brown

Topics/sections include:

  • Automatic Grading and Assessment
  • AI-Assisted Student Support Services
  • Intelligent Tutoring Systems
  • AI Can Help Both Students and Teachers


Shockwaves & Innovations: How Nations Worldwide Are Dealing with AI in Education — from the74million.org by Robin Lake
Lake: Other countries are quickly adopting artificial intelligence in schools. Lessons from Singapore, South Korea, India, China, Finland and Japan.

I found that other developed countries share concerns about students cheating but are moving quickly to use AI to personalize education, enhance language lessons and help teachers with mundane tasks, such as grading. Some of these countries are in the early stages of training teachers to use AI and developing curriculum standards for what students should know and be able to do with the technology.

Several countries began positioning themselves several years ago to invest in AI in education in order to compete in the fourth industrial revolution.


AI in Education — from educationnext.org by John Bailey
The leap into a new era of machine intelligence carries risks and challenges, but also plenty of promise

In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.


Generative AI and education futures — from ucl.ac.uk
Video highlights from Professor Mike Sharples’ keynote address at the 2023 UCL Education Conference, which explored opportunities to prosper with AI as a part of education.


Bringing AI Literacy to High Schools — by Nikki Goth Itoi
Stanford education researchers collaborated with teachers to develop classroom-ready AI resources for high school instructors across subject areas.

To address these two imperatives, all high schools need access to basic AI tools and training. Yet the reality is that many underserved schools in low-income areas lack the bandwidth, skills, and confidence to guide their students through an AI-powered world. And if the pattern continues, AI will only worsen existing inequities. With this concern top of mind plus initial funding from the McCoy Ethics Center, Lee began recruiting some graduate students and high school teachers to explore how to give more people equal footing in the AI space.


 


How to spot deepfakes created by AI image generators | Can you trust your eyes? | The deepfake election — from axios.com by various; via Tom Barrett

As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.

Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.

Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz.


4 Charts That Show Why AI Progress Is Unlikely to Slow Down — from time.com; with thanks to Donald Clark out on LinkedIn for this resource


The state of AI in 2023: Generative AI’s breakout year — from McKinsey.com

Table of Contents

  1. It’s early days still, but use of gen AI is already widespread
  2. Leading companies are already ahead with gen AI
  3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  4. With all eyes on gen AI, AI adoption and impact remain steady
  5. About the research

Top 10 Chief AI Officers — from aimagazine.com

The Chief AI Officer is a relatively new job role, yet it is becoming increasingly important as businesses invest further in AI.

Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this type of evolving technology can provide. This job position sees the employee responsible for guiding companies through complex AI tools, algorithms and development. All of this works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.


NVIDIA-related items

SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announces updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, updates NVIDIA Omniverse with generative AI.

The hottest commodity in AI right now isn’t ChatGPT — it’s the $40,000 chip that has sparked a frenzied spending spree — from businessinsider.com by Hasan Chowdhury

NVIDIA Releases Major Omniverse Upgrade with Generative AI and OpenUSD — from enterpriseai.news

Nvidia teams up with Hugging Face to offer cloud-based AI training — from techcrunch.com by Kyle Wiggers

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ — from cnbc.com by Kif Leswing

KEY POINTS

  • On Tuesday, Nvidia announced a new chip designed to run artificial intelligence models.
  • Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
  • “This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.

Nvidia Has A Monopoly On AI Chips … And It’s Only Growing — from theneurondaily.com by The Neuron

In layman’s terms: Nvidia is on fire, and they’re only turning up the heat.


AI-Powered War Machines: The Future of Warfare Is Here — from readwrite.com by Deanna Ritchie

The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.

From DSC:
Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.


Sam Altman is already nervous about what AI might do in elections — from qz.com by Faustine Ngila; via Sam DeBrule
The OpenAI chief warned about the power of AI-generated media to potentially influence the vote

Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.


Your guide to AI: August 2023 — from nathanbenaich.substack.com by Nathan Benaich

Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.

Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.


Why generative AI is a game-changer for customer service workflows — from venturebeat.com via Superhuman

Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.
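From DSC: As a rough illustration, a retrieve-then-summarize flow like the one described above might look something like the sketch below. The knowledge-base entries, the prompt wording, and the stand-in model call are all assumptions on my part, not any vendor's actual implementation.

```python
# Illustrative sketch only: a tiny "retrieve, then summarize" flow for a support agent.
from typing import Callable, List

KNOWLEDGE_BASE = [
    "Refunds: customers may request a refund within 30 days of purchase.",
    "Shipping: standard delivery takes 5-7 business days; expedited takes 2.",
    "Warranty: hardware is covered for 12 months against manufacturing defects.",
]

def retrieve(query: str, docs: List[str], top_k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap with the query (a real system would use embeddings)."""
    q_terms = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)[:top_k]

def answer(query: str, llm: Callable[[str], str]) -> str:
    """Build a grounded prompt from the retrieved passages and ask the model for a short summary."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    prompt = (
        "Using only the policy passages below, answer the customer's question in two sentences.\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )
    return llm(prompt)

# Example usage with a stand-in "model" so the sketch runs without any API key:
if __name__ == "__main__":
    fake_llm = lambda prompt: "(the model's summarized answer would appear here)"
    print(answer("How long do I have to request a refund?", fake_llm))
```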

BUT ALSO

Sam Altman: “AI Will Replace Customer Service Jobs First” — from theneurondaily.com

Excerpt:

Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU.  All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.

The use cases are endless, but here are two immediate ones:

  1. Hyperpersonalized content.
    Imagine your favorite Netflix show but with every person hearing a slightly different script.
  2. Customer support agents. 
    We’re talking about ones that are actually helpful, a far cry from the norm!


AI has a Usability Problem — from news.theaiexchange.com
Why ChatGPT usage may actually be declining; using AI to become a spreadsheet pro

If you’re reading this and are using ChatGPT on a daily basis, congrats – you’re likely in the top couple of %.

For everyone else – AI still has a major usability problem.

From DSC:
Agreed.



From the ‘godfathers of AI’ to newer people in the field: Here are 16 people you should know — and what they say about the possibilities and dangers of the technology. — from businessinsider.com by Lakshmi Varanasi


 

Excerpts from the Too Long Didn’t Read (TLDR) section from AIxEducation Day 1: My Takeaways — from stefanbauschard.substack.com by Stefan Bauschard (emphasis DSC)

* There was a lot of talk about learning bots. This talk included the benefits of 1:1 tutoring, access to education for those who don’t currently have it (developing world), the ability to do things for which we currently don’t have enough teachers and support staff (speech pathology), individualized instruction (it will be good at this soon), and stuff that it is already good at (24/7 availability, language tutoring, immediate feedback regarding argumentation and genre (not facts :), putting students on the right track, comprehensive feedback, more critical feedback).

* Students are united. The student organizers and those who spoke at the conference have concerns about future employment, want to learn to use generative AI, and express concern about being prepared for the “real world.” They also all want a say in how generative AI is used in the college classroom. Many professors spoke about the importance of having conversations with students and involving them in the creation of AI policies as well.

* I think it’s fair to say that all professors who spoke thought students were going to use generative AI regardless of whether or not it was permitted, though some hoped for honesty.

* No professor who spoke thought using a plagiarism detector was a good idea.

* Everyone thought that significant advancements in AI technology were inevitable.

* Almost everyone expressed being overwhelmed by the rate of change.


Stefan recommended the following resource:


 


Gen-AI Movie Trailer For Sci Fi Epic “Genesis” — from forbes.com by Charlie Fink

The movie trailer for “Genesis,” created with AI, is so convincing it caused a stir on Twitter [on July 27]. That’s how I found out about it. Created by Nicolas Neubert, a senior product designer who works for Elli by Volkswagen in Germany, the “Genesis” trailer promotes a dystopian sci-fi epic reminiscent of the Terminator. There is no movie, of course, only the trailer exists, but this is neither a gag nor a parody. It’s in a class of its own. Eerily made by man, but not.



Google’s water use is soaring. AI is only going to make it worse. — from businessinsider.com by Hugh Langley

Google just published its 2023 environmental report, and one thing is for certain: The company’s water use is soaring.

The internet giant said it consumed 5.6 billion gallons of water in 2022, the equivalent of 37 golf courses. Most of that — 5.2 billion gallons — was used for the company’s data centers, a 20% increase on the amount Google reported the year prior.


We think prompt engineering (learning to converse with an AI) is overrated. — from the Neuron

We think prompt engineering (learning to converse with an AI) is overrated. Yup, we said it. We think the future of chat interfaces will be a combination of preloading context and then allowing AI to guide you to the information you seek.

From DSC:
Agreed. I think we’ll see a lot more interface updates and changes to make things easier to use, find, and develop.
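For instance, preloading context might look something like the sketch below, where the interface supplies the background so the user can just ask a plain question. The roles, the profile details, and the message format are illustrative assumptions on my part, not how any particular product works.

```python
# Illustrative sketch only: "preload" the context so the user doesn't have to engineer a prompt.

def build_messages(user_question: str) -> list[dict]:
    """Front-load what the assistant should already know; the user just asks plainly."""
    preloaded_context = (
        "You are a help assistant for an online course platform. "
        "The user is an instructor who teaches biology and checks the gradebook weekly. "
        "Answer in plain language and suggest one concrete next step."
    )
    return [
        {"role": "system", "content": preloaded_context},  # context the user never has to type
        {"role": "user", "content": user_question},        # the plain, unengineered question
    ]

# Example usage (the messages would normally be sent to a chat model):
if __name__ == "__main__":
    for message in build_messages("Why are some grades missing?"):
        print(f"{message['role']}: {message['content']}")
```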


Radar Trends to Watch: August 2023 — from oreilly.com by Mike Loukides
Developments in Programming, Web, Security, and More

Artificial Intelligence continues to dominate the news. In the past month, we’ve seen a number of major updates to language models: Claude 2, with its 100,000 token context limit; LLaMA 2, with (relatively) liberal restrictions on use; and Stable Diffusion XL, a significantly more capable version of Stable Diffusion. Does Claude 2’s huge context really change what the model can do? And what role will open access and open source language models have as commercial applications develop?


Try out Google ‘TextFX’ and its 10 creative AI tools for rappers, writers — from 9to5google.com by Abner Li; via Barsee – AI Valley 

Google Lab Sessions are collaborations between “visionaries from all realms of human endeavor” and the company’s latest AI technology. [On 8/2/23], Google released TextFX as an “experiment to demonstrate how generative language technologies can empower the creativity and workflows of artists and creators” with Lupe Fiasco.

Google’s TextFX includes 10 tools and is powered by the PaLM 2 large language model via the PaLM API. Meant to aid in the creative process of rappers, writers, and other wordsmiths, it is part of Google Labs.

 

Navigating the Future of Learning in a Digitally-Disrupted World — from thinklearningstudio.org by Russell Cailey

Are we on the frontier of unveiling an unseen revolution in education? The hypothesis is that this quiet upheaval’s importance is far more significant than we imagine. As our world adjusts, restructures, and emerges from a year which launched an era of mass AI, so too does a new academic year dawn for many – with hope and enthusiasm about new roles, titles, or simply just a new mindset. Concealed from sight, however, I believe a significant transformative wave has started and will begin to reshape our education systems and push us into a new stage of innovative teaching practice whether we desire it or not. The risk and hope is that the quiet revolution remains outside the regulator’s and ministries’ purview, which could risk a dangerous fragmentation of education policy and practice, divorced from the actualities of the world ‘in and outside school’.

“This goal can be achieved through continued support for introducing more new areas of study, such as ‘foresight and futures’, in the high school classroom.”


Four directions for assessment redesign in the age of generative AI — from timeshighereducation.com by Julia Chen
The rise of generative AI has led universities to rethink how learning is quantified. Julia Chen offers four options for assessment redesign that can be applied across disciplines

Direction 1: From written description to multimodal explanation and application

Direction 2: From literature review alone to referencing lectures

Direction 3: From presentation of ideas to defence of views

Direction 4: From working alone to student-staff partnership




15 Inspirational Voices in the Space Between AI and Education — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Get Inspired for AI and The Future of Education.

If you are just back from vacation and still not quite sure what to do about AI, let me assure you that you are not the only one. My advice for you today is this: fill your LinkedIn-feed and/or inbox with ideas, inspirational writing and commentary on AI. This will get you up to speed quickly and is a great way to stay informed on the newest movements you need to be aware of.

My personal recommendation for you is to check out these bright people who are all very active on LinkedIn and/or have a newsletter worth paying attention to. I have kept the list fairly short – only 15 people – in order to make it as easy as possible for you to begin exploring.


Universities say AI cheats can’t be beaten, moving away from attempts to block AI (Australia) — from abc.net.au by Jake Evans

Key points:

  • Universities have warned against banning AI technologies in academia
  • Several say AI cheating in tests will be too difficult to stop, and it is more practical to change assessment methods
  • The sector says the entire nature of teaching will have to change to ensure students continue to effectively learn

aieducator.tools


Navigating A World of Generative AI: Suggestions for Educators — from nextlevellab.gse.harvard.edu by Lydia Cao and Chris Dede

Understanding the nature of generative AI is crucial for educators to navigate the evolving landscape of teaching and learning. In a new report from the Next Level Lab, Lydia Cao and Chris Dede reflect on the role of generative AI in learning and how this pushes us to reconceptualize our visions of effective education. Though there are concerns of plagiarism and replacement of human jobs, Cao and Dede argue that a more productive way forward is for educators to focus on demystifying AI, emphasizing the learning process over the final product, honoring learner agency, orchestrating multiple sources of motivation, cultivating skills that AI cannot easily replicate, and fostering intelligence augmentation (IA) through building human-AI partnerships.



20 CHATGPT PROMPTS FOR ELA TEACHERS — from classtechtips.com by Dr. Monica Burns

Have you used chatbots to save time this school year? ChatGPT and generative artificial intelligence (AI) have changed the way I think about instructional planning. Today on the blog, I have a selection of ChatGPT prompts for ELA teachers.

You can use chatbots to tackle tedious tasks, gather ideas, and even support your work to meet the needs of every student. In my recent quick reference guide published by ISTE and ASCD, Using AI Chatbots to Enhance Planning and Instruction, I explore this topic. You can also find 50 more prompts for educators in this free ebook.


Professors Craft Courses on ChatGPT With ChatGPT — from insidehighered.com by Lauren Coffey
While some institutions are banning the use of the new AI tool, others are leaning into its use and offering courses dedicated solely to navigating the new technology.

Maynard, along with Jules White at Vanderbilt University, is among a small number of professors launching courses focused solely on teaching students across disciplines to better navigate AI and ChatGPT.

The offerings go beyond institutions flexing their innovation skills—the faculty behind these courses view them as imperative to ensure students are prepared for ever-changing workforce needs.


GPT-4 can already pass freshman year at Harvard | professors need to adapt to their students’ new reality — fast — from chronicle.com by Maya Bodnick (an undergraduate at Harvard University, studying government)

A. A. A-. B. B-. Pass.

That’s a solid report card for a freshman in college, a respectable 3.57 GPA. I recently finished my freshman year at Harvard, but those grades aren’t mine — they’re GPT-4’s.

Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)

Here are the prompts with links to the essays, the names of instructors, and the grades each essay received…

The impact that AI is having on liberal-arts homework is indicative of the AI threat to the career fields that liberal-arts majors tend to enter. So maybe what we should really be focused on isn’t, “How do we make liberal-arts homework better?” but rather, “What are jobs going to look like over the next 10–20 years, and how do we prepare students to succeed in that world?”



The great assessment rethink — from timeshighereducation.com
How to measure learning and protect academic integrity in the age of ChatGPT

Items from Times Higher Education re: redesigning assessment

 

Generative AI and the future of work in America — from mckinsey.com by Kweilin Ellingrud, Saurabh Sanghvi, Gurneet Singh Dandona, Anu Madgavkar, Michael Chui, Olivia White, and Paige Hasebe

At a glance

  • During the pandemic (2019–22), the US labor market saw 8.6 million occupational shifts, 50 percent more than in the previous three-year period.
  • By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated—a trend accelerated by generative AI.
  • Federal investment to address climate and infrastructure, as well as structural shifts, will also alter labor demand.
  • An additional 12 million occupational transitions may be needed by 2030.
  • The United States will need workforce development on a far larger scale as well as more expansive hiring approaches from employers.

Employers will need to hire for skills and competencies rather than credentials, recruit from overlooked populations (such as rural workers and people with disabilities), and deliver training that keeps pace with their evolving needs.


The AI-Powered, Totally Autonomous Future of War Is Here — from wired.com by Will Knight
Ships without crews. Self-directed drone swarms. How a US Navy task force is using off-the-shelf robotics and artificial intelligence to prepare for the next age of conflict.

From DSC:
Hhhhmmmmm…..not good. Is anyone surprised by this? No, I didn’t think so either. That’s why the United States and China are so heated up about semiconductor chips.


AI puts glitch in graduates’ employment plans — from hrdive.com by Ginger Christ
Recent grads are worried how AI will affect their career prospects, a new survey found.

Excerpt:

  • The proliferation of new technologies like generative artificial intelligence is making recent graduates uneasy, a new study released Thursday found. A third of the 1,000 people who graduated in the past year said they are second-guessing their career choice, while roughly half reported questioning their workforce preparedness and feeling threatened by AI, according to the 2023 Employability Report by Cengage Group, a global education technology company.

“The workplace has changed rapidly in the last few years, and now we are witnessing a new shift as AI begins to reshape worker productivity, job requirements, hiring habits and even entire industries,” Michael Hansen, Cengage Group CEO, said in a news release. 

Along these lines, also see:

AI Boom Creates Concerns for Recent Graduates — from insidehighered.com by  Lauren Coffey

More than half of recent graduates question whether they are properly prepared for the workforce in light of the rise of artificial intelligence, a survey finds.

There is also more of a preference for skills training credentials. Among employers, nearly 40 percent said skills training credentials are most important, while only 19 percent ranked a college degree as most important.

However, recent graduates did cite an issue with most higher education institutions’ ability to teach employability skills. In 2023, 43 percent of students said their degree program taught them the necessary skills for their first job, down 20 percentage points from 2022.


Instructure, Khan Academy Announce Major Partnership On AI Tutoring, Teaching — from forbes.com by Derek Newton

The news is that Instructure, one of the few public education companies and the market leader in learning management with their signature product Canvas, struck a partnership with Khan Academy to create an AI-powered tutoring and teaching assistant tool – merging Khan’s innovative instructional content and Instructure’s significant reach, scale, and data insights. The partnership and related tools will be known as Khanmigo, according to the announcement.

On brand names alone, this is a big deal. On potential impact, it could be even bigger.


How To Use AI to Write Scenarios — from christytuckerlearning.com by Christy Tucker
How can you use AI to write scenarios for learning? Read this example with prompts and results using ChatGPT and Bard.

Excerpts:

So far, I have found these tools helpful in generating ideas, writing first drafts, and summarizing. They work better for general knowledge tasks than really specific topics unless I provide more details to them, which makes sense.

This post isn’t going to give you “5 magical prompts to instantly write scenarios for you” or anything like that. Instead, this is a “working out loud” post where I’ll share some prompts I have used.

Christy’s posting includes:

  1. “The Meeting from Hell”
  2. “The Backstabbing Coworker”
  3. “The Boss from Hell”
  4. “The Office Romance Gone Wrong”
  5. “The New Hire with Attitude”
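From DSC: Purely as an aside, here is a minimal sketch of what a first-draft scenario prompt could look like if you wired it up in code. The prompt wording, the topic, and the stand-in model call are my own illustrative assumptions, not Christy's actual prompts.

```python
# Illustrative sketch only: drafting a branching-scenario starter with an AI model.
from typing import Callable

def draft_scenario(topic: str, audience: str, llm: Callable[[str], str]) -> str:
    """Ask the model for a short scenario that ends at a decision point with three options."""
    prompt = (
        f"Write a 150-word workplace learning scenario about {topic} for {audience}. "
        "End at a decision point with three plausible options, exactly one of which is best, "
        "and briefly explain why that option is best."
    )
    return llm(prompt)

# Example usage with a stand-in model call:
if __name__ == "__main__":
    fake_llm = lambda p: "(a draft scenario from the model would appear here)"
    print(draft_scenario("giving difficult feedback", "new team leads", fake_llm))
```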

Some potential tools for you to check out:



The Rise of the Talent Economy — from drphilippahardman.substack.com by Dr. Philippa Hardman
How Education & Training Will Dictate the Future & Impact of AI

“Talent, more than capital, will represent the critical factor of production.”

In short, the demand for AI skills requires a significant transformation in training and education models. To bridge the global skills gap, educational institutions, online learning providers, and employers must design and deliver training programs that cater to the rapidly evolving AI-driven labor market. 


How ChatGPT killed my discussion boards and prompted new prompts — from timeshighereducation.com by Sara Cline; per Robert Gibson on LinkedIn
Advice on learning and discussion prompts that require students to think beyond the remit of AI responses

Excerpts:

To combat this problem, we modified some of our prompts this summer to try to prevent students from using AI to avoid learning. I’m sharing some of our strategies in the hope that they help you out as you adapt your course to a world of generative AI.

  1. Use prompts that force a personal opinion.
  2. Have students include their source(s) as an attachment.
  3. Use current or local events.
  4. Have them take and caption a photo.
  5. Draw a diagram or chart.
  6. Build and explain a 3D model.
  7. Include timestamps from lecture videos.
  8. Scrap the discussion boards.

Dark web ChatGPT is here… — from therundown.ai

The Rundown: A new cybercrime generative AI tool called FraudGPT is being advertised on the Dark web and Telegram channels, offering offensive capabilities like crafting spear-phishing emails and creating undetectable malware.

Why it matters: Scammers can now appear more realistic than ever before, and at a larger scale. The sad truth is that the emergence of cybercrime AI tools like FraudGPT is just beginning.


From DSC:
If true and if it could help build and/or contribute to cloud-based learner profiles, this could be huge.


Wayfair’s AI tool can redraw your living room and sell you furniture — from theverge.com by Wes Davis
The home decoration company’s new Decorify AI remodeling tool is clumsy but could be effective for visualization while remodeling.


 

22 Classroom-Focused Resources on AI from Teachers Everywhere — from coolcatteacher.com by Vicki Davis; via GSV

***


Back to School Survey: 44% of Teens “Likely” to Use AI To Do Their Schoolwork for Them This School Year — from prnewswire.com by Junior Achievement
Research by Junior Achievement Shows 60% of Teens Consider the Use of AI to Do Their Schoolwork for Them as “Cheating”

Excerpt:

COLORADO SPRINGS, Colo., July 26, 2023 /PRNewswire/ — A new survey of teens conducted for Junior Achievement by the research firm Big Village shows that nearly half of teens (44%) are “likely” to use AI to do their schoolwork instead of doing it themselves this coming school year. However, most teens (60%) consider using AI in this way as “cheating.” The survey of 1,006 13- to 17-year-olds was conducted by Big Village from July 6 through 11, 2023.

From DSC:
In a competitive society such as we have in the U.S., and with many of our K-12 learning ecosystems designed to create game players, we shouldn’t be surprised to see a significant number of our students using AI to “win”/game the system.

As it becomes appropriate for each student, offering more choice and control should allow more students to pursue what they truly want to learn about. They won’t be as interested in gaming the system if they genuinely want to learn about something.

 