The Learning & Employment Records (LER) Ecosystem Map — with thanks to Melanie Booth on LinkedIn for this resource
Driving Opportunity and Equity Through Learning & Employment Records

The Learning & Employment Records (LER) Ecosystem Map

Imagine A World Where…

  • Everyone is empowered to access learning and earning opportunities based on what they know and can do, whether those skills and abilities are obtained through degrees, work experiences, or independent learning.
  • People can capture and communicate the skills and competencies they’ve acquired across their entire learning journey — from education, experience and service — with more ease, confidence, and clarity than a traditional resume.
  • Learners and earners control their information and can curate their skills to take advantage of every opportunity they are truly qualified to pursue, opening up pathways that help address systemic inequities.
  • Employers can tap into a wider talent pool and better match applicants to opportunities with verifiable credentials that represent skills, competencies, and achievements.

This is the world that we believe can be created by Learning and Employment Records (LERs), i.e. digital records of learning and work experiences that are linked to and controlled by learners and earners. An interoperable, well-governed LER ecosystem has the potential to transform the future of work so that it is more equitable, efficient, and effective for everyone involved — individuals, training and education providers, employers, and policymakers.



 

WHAT WAS GARY MARCUS THINKING, IN THAT INTERVIEW WITH GEOFF HINTON? — from linkedin.com by Stephen Downes

Background (emphasis DSC): 60 Minutes did an interview with ‘the Godfather of AI’, Geoffrey Hinton. In response, Gary Marcus wrote a column in which he inserted his own set of responses into the transcript, as though he were a panel participant. Neat idea. So, of course, I’m stealing it, and in what follows, I insert my own comments as I join the 60 Minutes panel with Geoffrey Hinton and Gary Marcus.

Usually I put everyone else’s text in italics, but for this post I’ll put it all in normal font, to keep the format consistent.

Godfather of Artificial Intelligence Geoffrey Hinton on the promise, risks of advanced AI


OpenAI’s Revenue Skyrockets to $1.3 Billion Annualized Rate — from maginative.com by Chris McKay
This means the company is generating over $100 million per month—a 30% increase from just this past summer.

OpenAI, the company behind the viral conversational AI ChatGPT, is experiencing explosive revenue growth. The Information reports that CEO Sam Altman told the staff this week that OpenAI’s revenue is now crossing $1.3 billion on an annualized basis. This means the company is generating over $100 million per month—a 30% increase from just this past summer.

Since the launch of a paid version of ChatGPT in February, OpenAI’s financial growth has been nothing short of meteoric. Additionally, in August, the company announced the launch of ChatGPT Enterprise, a commercial version of its popular conversational AI chatbot aimed at business users.

For comparison, OpenAI’s total revenue for all of 2022 was just $28 million. The launch of ChatGPT has turbocharged OpenAI’s business, positioning it as a bellwether for demand for generative AI.



From 10/13:


New ways to get inspired with generative AI in Search — from blog.google
We’re testing new ways to get more done right from Search, like the ability to generate imagery with AI or create the first draft of something you need to write.

 

180 Degree Turn: NYC District Goes From Banning ChatGPT to Exploring AI’s Potential — from edweek.org by Alyson Klein (behind paywall)

New York City Public Schools will launch an Artificial Intelligence Policy Lab to guide the nation’s largest school district’s approach to this rapidly evolving technology.


The Leader’s Blindspot: How to Prepare for the Real Future — from preview.mailerlite.io by the AIEducator
The Commonly Held Belief: AI Will Automate Only Boring, Repetitive Tasks First

The Days of Task-Based Views on AI Are Numbered
The winds of change are sweeping across the educational landscape (emphasis DSC):

  1. Multifaceted AI: AI technologies are not one-trick ponies; they are evolving into complex systems that can handle a variety of tasks.
  2. Rising Expectations: As technology becomes integral to our lives, the expectations for personalised, efficient education are soaring.
  3. Skill Transformation: Future job markets will demand a different skill set, one that is symbiotic with AI capabilities.

Teaching: How to help students better understand generative AI — from chronicle.com by Beth McMurtrie
Beth describes ways professors have used ChatGPT to bolster critical thinking in writing-intensive courses

Kevin McCullen, an associate professor of computer science at the State University of New York at Plattsburgh, teaches a freshman seminar about AI and robotics. As part of the course, students read Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff. McCullen had the students work in groups to outline and summarize the first three chapters. Then he showed them what ChatGPT had produced in an outline.

“Their version and ChatGPT’s version seemed to be from two different books,” McCullen wrote. “ChatGPT’s version was essentially a ‘laundry list’ of events. Their version was narratives of what they found interesting. The students had focused on what the story was telling them, while ChatGPT focused on who did what in what year.” The chatbot also introduced false information, such as wrong chapter names.

The students, he wrote, found the writing “soulless.”


7 Questions with Dr. Cristi Ford, VP of Academic Affairs at D2L — from campustechnology.com by Rhea Kelly

In the Wild West of generative AI, educators and institutions are working out how best to use the technology for learning. How can institutions define AI guidelines that allow for experimentation while providing students with consistent guidance on appropriate use of AI tools?

To find out, we spoke with Dr. Cristi Ford, vice president of academic affairs at D2L. With more than two decades of educational experience in nonprofit, higher education, and K-12 institutions, Ford works with D2L’s institutional partners to elevate best practices in teaching, learning, and student support. Here, she shares her advice on setting and communicating AI policies that are consistent and future-ready.


AI Platform Built by Teachers, for Teachers, Class Companion Raises $4 Million to Tap Into the Power of Practice — from prweb.com

“If we want to use AI to improve education, we need more teachers at the table,” said Avery Pan, Class Companion co-founder and CEO. “Class Companion is designed by teachers, for teachers, to harness the most sophisticated AI and improve their classroom experience. Developing technologies specifically for teachers is imperative to supporting our next generation of students and education system.”


7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.


10 Ways Technology Leaders Can Step Up and Into the Generative AI Discussion in Higher Ed — from er.educause.edu by Lance Eaton and Stan Waddell

  1. Offer Short Primers on Generative AI
  2. Explain How to Get Started
  3. Suggest Best Practices for Engaging with Generative AI
  4. Give Recommendations for Different Groups
  5. Recommend Tools
  6. Explain the Closed vs. Open-Source Divide
  7. Avoid Pitfalls
  8. Conduct Workshops and Events
  9. Spot the Fake
  10. Provide Proper Guidance on the Limitations of AI Detectors


 

As AI Chatbots Rise, More Educators Look to Oral Exams — With High-Tech Twist — from edsurge.com by Jeffrey R. Young

To use Sherpa, an instructor first uploads the reading they’ve assigned, or they can have the student upload a paper they’ve written. Then the tool asks a series of questions about the text (either questions input by the instructor or generated by the AI) to test the student’s grasp of key concepts. The software gives the instructor the choice of whether they want the tool to record audio and video of the conversation, or just audio.

The tool then uses AI to transcribe the audio from each student’s recording and flags areas where the student answer seemed off point. Teachers can review the recording or transcript of the conversation and look at what Sherpa flagged as trouble to evaluate the student’s response.
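
Sherpa’s internals aren’t public, but the flagging step described above — checking a transcribed answer against the key concepts of the assigned reading — can be sketched with a simple lexical-overlap score. Everything here (the function names, the 0.2 threshold, the lexical approach itself) is a hypothetical illustration, not Sherpa’s actual method; a production tool would likely use embeddings or an LLM judge.

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def flag_off_point(answer: str, key_concepts: list[str], threshold: float = 0.2) -> list[str]:
    """Return the key concepts the transcribed answer fails to touch on.

    A concept counts as covered when enough of its words appear in the
    answer; anything below the threshold is flagged for instructor review.
    """
    answer_words = _tokens(answer)
    flagged = []
    for concept in key_concepts:
        concept_words = _tokens(concept)
        overlap = len(concept_words & answer_words) / len(concept_words)
        if overlap < threshold:
            flagged.append(concept)
    return flagged

transcript = "The chapter argues that robots should augment human work, not replace it."
concepts = [
    "robots augment rather than replace humans",
    "the history of ARPA funding for AI research",
]
print(flag_off_point(transcript, concepts))  # → ['the history of ARPA funding for AI research']
```

The instructor would then review only the flagged concepts against the recording, which matches the workflow the article describes.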

 

Four Scenarios for the Future of Legal Education — from denniskennedy.com by Dennis Kennedy

Scenario 1: Fully Digitalized Law School
Scenario 2: Blended Law School Experience
Scenario 3: Specialized Legal Education
Scenario 4: Decentralized Legal Education

In the decentralized legal education scenario, the traditional model of law schools is disrupted by the emergence of alternative education platforms and micro-credentialing. The concept of a law degree is replaced by a more flexible and personalized approach to legal education. Students can choose from an array of legal courses offered by various providers, including universities, law firms, online platforms, and even government agencies.

 

The next wave of AI will be interactive — from joinsuperhuman.ai by Zain Kahn
ALSO: AI startups raise over $500 million

Google DeepMind cofounder Mustafa Suleyman thinks that generative AI is a passing phase, and that interactive AI is the next big thing in AI. Suleyman called the transformation “a profound moment” in the history of technology.

Suleyman divided AI’s evolution into 3 waves:

  1. Classification: Training computers to classify various types of data like images and text.
  2. Generative: The current wave, which takes input data to generate new data. ChatGPT is the best example of this.
  3. Interactive: The next wave, where an AI will be capable of communicating and operating autonomously.

“Think of it as autonomous software that can talk to other apps to get things done.”

From DSC:
Though I find this a generally positive thing, the above sentence makes me exclaim, “No, nothing could possibly go wrong there.”


 

Preparing Students for the AI-Enhanced Workforce — from insidehighered.com by Ray Schroeder
Our graduating and certificate-completing students need documented generative AI skills, and they need them now.

The common adage repeated again and again is that AI will not take your job; a person with AI skills will replace you. The learners we are teaching this fall who will be entering, re-entering or seeking advancement in the workforce at the end of the year or in the spring must become demonstrably skilled in using generative AI. The vast majority of white-collar jobs will demand the efficiencies and flexibilities defined by generative AI now and in the future. As higher education institutions, we will be called upon to document and validate generative AI skills.


AI image generators: 10 tools, 10 classroom uses — from ditchthattextbook.com by Matt Miller



A Majority of New Teachers Aren’t Prepared to Teach With Technology. What’s the Fix? — from edweek.org by Alyson Klein

Think all incoming teachers have a natural facility with technology just because most are digital natives? Think again.

Teacher preparation programs have a long way to go in preparing prospective educators to teach with technology, according to a report released September 12 by the International Society for Technology in Education, a nonprofit.

In fact, more than half of incoming teachers—56 percent—lack confidence in using learning technology prior to entering the classroom, according to survey data included with the report.


5 Actual Use Cases of AI in Education: Newsletter #68 — from transcend.substack.com by Alberto Arenaza
What areas has AI truly impacted educators, learners & workers?

  1. AI Copilot for educators, managers and leaders
  2. Flipped Classrooms Chatbots
  3. AI to assess complex answers
  4. AI as a language learning tool
  5. AI to brainstorm ideas

AI-Powered Higher Ed — from drphilippahardman.substack.com by  Dr. Philippa Hardman
What a House of Commons round table discussion tells us about how AI will impact the purpose of higher education

In this week’s blog post I’ll summarise the discussion and share what we agreed would be the most likely new model of assessment in HE in the post-AI world.

But this in turn raises a bigger question: why do people go to university, and what is the role of higher education in the twenty-first century? Is it to create the workforce of the future? Or an institution for developing deep and original domain expertise? Can and should it be both?


How To Develop Computational Thinkers — from iste.org by Jorge Valenzuela

In my previous position with Richmond Public Schools, we chose to dive in with computational thinking, programming and coding, in that order. I recommend building computational thinking (CT) competency first by helping students recognize and apply the four elements of CT to familiar problems/situations. Computational thinking should come first because it’s the highest order of problem-solving, is a cross-curricular skill and is understandable to both machines and humans. Here are the four components of CT and how to help students understand them.

 

Generative A.I. + Law – Background, Applications and Use Cases Including GPT-4 Passes the Bar Exam – Speaker Deck — from speakerdeck.com by Professor Daniel Martin Katz

 

 

 


Also relevant/see:

AI-Powered Virtual Legal Assistants Transform Client Services — from abovethelaw.com by Olga V. Mack
They can respond more succinctly than ever to answer client questions, triage incoming requests, provide details, and trigger automated workflows that ensure lawyers handle legal issues efficiently and effectively.

Artificial Intelligence in Law: How AI Can Reshape the Legal Industry — from jdsupra.com

 

Next, The Future of Work is… Intersections — from linkedin.com by Gary A. Bolles; via Roberto Ferraro

So much of the way that we think about education and work is organized into silos. Sure, that’s one way to ensure a depth of knowledge in a field and to encourage learners to develop mastery. But it also leads to domains with strict boundaries. Colleges are typically organized into school sub-domains, managed like fiefdoms, with strict rules for professors who can teach in different schools.

Yet it’s at the intersections of seemingly disparate domains where breakthrough innovation can occur.

Maybe intersections bring a greater chance of future work opportunity, because that young person can increase their focus in one arena or another as they discover new options for work — and because this is what meaningful work in the future is going to look like.

From DSC:
This posting strikes me as an endorsement for interdisciplinary degrees. I agree with much of this. It’s just hard to find the right combination of disciplines. But I suppose that depends upon the individual student and what he/she is passionate or curious about.


Speaking of the future of work, also see:

Centaurs and Cyborgs on the Jagged Frontier — from oneusefulthing.org by Ethan Mollick
I think we have an answer on whether AIs will reshape work…

A lot of people have been asking if AI is really a big deal for the future of work. We have a new paper that strongly suggests the answer is YES.

Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without. Those are some very big impacts. Now, let’s add in the nuance.

 


Big Ideas in Education — from edweek.org by various

Big Ideas is Education Week’s annual special report that brings the expertise of our newsroom—and occasionally those beyond our newsroom—to bear on the challenges you might be facing in your classroom, school, or district. Big Ideas questions the status quo and explores opportunities to help you build a better, more just learning environment for all students. Browse our collection.


 

 

10 Free AI Tools for Graphic Designing — from medium.com by Qz Ruslan

With the advancements in Artificial Intelligence (AI), designers now have access to a wide array of free AI-powered tools that streamline their creative process, enhance productivity, and add a touch of uniqueness to their designs. In this article, we will explore ten such free AI tool websites for graphic design that have revolutionized the way designers approach their craft.


Generative Art in Motion — from heatherbcooper.substack.com by Heather Cooper
Animation and video tools create an explosion of creative expression


World’s first AI cinema opening in Auckland to make all your Matrix fantasies come true — from stuff.co.nz by Jonny Mahon-Heap
Review: My HyperCinema experience was futuristic, sleek – and slightly insane as I became the star of my own show.


AI That Alters Voice and Imagery in Political Ads Will Require Disclosure on Google and YouTube — from usnews.com by Associated Press
Political ads using artificial intelligence on Google and YouTube must soon be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered

Google will soon require that political ads using artificial intelligence be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered.

AI-generated election ads on YouTube and other Google platforms that alter people or events must include a clear disclaimer located somewhere that users are likely to notice, the company said in an update this week to its political content policy.


 

The Prompt #14: Your Guide to Custom Instructions — from noisemedia.ai by Alex Banks

Whilst we typically cover a single ‘prompt’ to use with ChatGPT, today we’re exploring a new feature now available to everyone: custom instructions.

You provide specific directions for ChatGPT leading to greater control of the output. It’s all about guiding the AI to get the responses you really want.

To get started:
Log into ChatGPT → click on your name/email in the bottom-left corner → select ‘Custom instructions’
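
The same effect is available programmatically: custom instructions behave like a system message that is silently prepended to every conversation. The helper below is my own sketch of that pattern (the function name and message shapes are mine, not an OpenAI API), showing how persistent instructions steer each new chat.

```python
def with_custom_instructions(instructions: str, history: list[dict]) -> list[dict]:
    """Prepend persistent custom instructions as a system message,
    mirroring how ChatGPT's custom-instructions feature steers every chat."""
    return [{"role": "system", "content": instructions}] + history

messages = with_custom_instructions(
    "I teach high-school biology. Keep answers under 100 words and cite sources.",
    [{"role": "user", "content": "Explain CRISPR simply."}],
)
print(messages[0]["role"])  # system
```

Each new conversation gets the same system message first, which is why the feature gives you “greater control of the output” without retyping the prompt.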


Meet Zoom AI Companion, your new AI assistant! Unlock the benefits with a paid Zoom account — from blog.zoom.us by Smita Hashim

We’re excited to introduce you to AI Companion (formerly Zoom IQ), your new generative AI assistant across the Zoom platform. AI Companion empowers individuals by helping them be more productive, connect and collaborate with teammates, and improve their skills.

Envision being able to interact with AI Companion through a conversational interface and ask for help on a whole range of tasks, similarly to how you would with a real assistant. You’ll be able to ask it to help prepare for your upcoming meeting, get a consolidated summary of prior Zoom meetings and relevant chat threads, and even find relevant documents and tickets from connected third-party applications with your permission.

From DSC:
“You can ask AI Companion to catch you up on what you missed during a meeting in progress.”

And what if some key details were missed? Should you rely on this? I’d treat this with care/caution myself.



A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data — from fortune.com by Stephen Pastis (behind paywall)

That’s because, as it turns out, it’s nearly impossible to remove a user’s data from a trained A.I. model without resetting the model and forfeiting the extensive money and effort put into training it. To use a human analogy, once an A.I. has “seen” something, there is no easy way to tell the model to “forget” what it saw. And deleting the model entirely is also surprisingly difficult.

This represents one of the thorniest unresolved challenges of our incipient artificial intelligence era, alongside issues like A.I. “hallucinations” and the difficulties of explaining certain A.I. outputs.


More companies see ChatGPT training as a hot job perk for office workers — from cnbc.com by Mikaela Cohen

Key points:

  • Workplaces filled with artificial intelligence are closer to becoming a reality, making it essential that workers know how to use generative AI.
  • Offering specific AI chatbot training to current employees could be your next best talent retention tactic.
  • 90% of business leaders see ChatGPT as a beneficial skill in job applicants, according to a report from career site Resume Builder.

OpenAI Plugs ChatGPT Into Canva to Sharpen Its Competitive Edge in AI — from decrypt.co by Jose Antonio Lanz
Now ChatGPT Plus users can “talk” to Canva directly from OpenAI’s bot, making their workflow easier.

This strategic move aims to make the process of creating visuals such as logos, banners, and more, even more simple for businesses and entrepreneurs.

This latest integration could improve the way users generate visuals by offering a streamlined and user-friendly approach to digital design.


From DSC:
This Tweet addresses a likely component of our future learning ecosystems:


Large language models aren’t people. Let’s stop testing them as if they were. — from technologyreview.com by Will Douglas Heaven
With hopes and fears about this technology running wild, it’s time to agree on what it can and can’t do.

That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way they are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

“There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.”

“There is a long history of developing methods to test the human mind,” says Laura Weidinger, a senior research scientist at Google DeepMind. “With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that’s not true: human psychology tests rely on many assumptions that may not hold for large language models.”


We Analyzed Millions of ChatGPT User Sessions: Visits are Down 29% since May, Programming Assistance is 30% of Use — from sparktoro.com by Rand Fishkin

In concert with the fine folks at Datos, whose opt-in, anonymized panel of 20M devices (desktop and mobile, covering 200+ countries) provides outstanding insight into what real people are doing on the web, we undertook a challenging project to answer at least some of the mystery surrounding ChatGPT.



Crypto in ‘arms race’ against AI-powered scams — Quantstamp co-founder — from cointelegraph.com by Tom Mitchelhill
Quantstamp’s Richard Ma explained that the coming surge in sophisticated AI phishing scams could pose an existential threat to crypto organizations.

With the field of artificial intelligence evolving at near breakneck speed, scammers now have access to tools that can help them execute highly sophisticated attacks en masse, warns the co-founder of Web3 security firm Quantstamp.


 

Future of Work Report AI at Work — from economicgraph.linkedin.com; via Superhuman

The intersection of AI and the world of work: Not only are job postings increasing, but we’re seeing more LinkedIn members around the globe adding AI skills to their profiles than ever before. We’ve seen a 21x increase in the share of global English-language job postings that mention new AI technologies such as GPT or ChatGPT since November 2022. In June 2023, the number of AI-skilled members was 9x larger than in January 2016, globally.

The state of play of Generative AI (GAI) in the workforce: GAI technologies, including ChatGPT, are poised to start to change the way we work. In fact, 47% of US executives believe that using generative AI will increase productivity, and 92% agree that people skills are more important than ever. This means jobs won’t necessarily go away, but they will change, as will the skills necessary to do them.

Also relevant/see:

The Working Future: More Human, Not Less — from bain.com
It’s time to change how we think about work

Contents

  • Introduction
  • Motivations for Work Are Changing.
  • Beliefs about What Makes a “Good Job” Are Diverging
  • Automation Is Helping to Rehumanize Work
  • Technological Change Is Blurring the Boundaries of the Firm
  • Young Workers Are Increasingly Overwhelmed
  • Rehumanizing Work: The Journey Ahead
 

From DSC:
Yesterday, I posted the item about Google’s NotebookLM research tool. Excerpt:

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes.

That got me to thinking…

What if the presenter/teacher/professor/trainer/preacher provided a set of notes for the AI to compare to the readers’ notes? 

That way, the AI could see the discrepancies between what the presenter wanted their audience to learn/hear and what was actually being learned/heard. In a sort of digital Socratic Method, the AI could then generate some leading questions to get the audience member to check their thinking/understanding of the topic.

The end result would be that the main points were properly communicated/learned/received.
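
A minimal sketch of that comparison step, under loud assumptions: the stopword list, term extraction, and question template below are all hypothetical placeholders. Here the “discrepancy” is just the set of content words in the presenter’s notes missing from the learner’s notes, each turned into a leading question; a real system would use an LLM or embeddings rather than word matching.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "into", "from", "is", "that", "it"}

def key_terms(notes: str) -> set[str]:
    """Content words from a set of notes, minus common stopwords."""
    return set(re.findall(r"[a-z]+", notes.lower())) - STOPWORDS

def socratic_prompts(presenter_notes: str, learner_notes: str) -> list[str]:
    """Turn each concept the learner's notes missed into a leading question."""
    missed = sorted(key_terms(presenter_notes) - key_terms(learner_notes))
    return [f"What role did '{term}' play in today's material?" for term in missed]

presenter = "Photosynthesis converts light into chemical energy in chloroplasts."
learner = "Photosynthesis makes energy from light."
for q in socratic_prompts(presenter, learner):
    print(q)
```

Even this crude version surfaces what the learner’s notes skipped (the conversion step, chloroplasts), which is exactly the gap the leading questions would probe.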

 

Google’s AI-powered note-taking app is the messy beginning of something great — from theverge.com by David Pierce; via AI Insider
NotebookLM is a neat research tool with some big ideas. It’s still rough and new, but it feels like Google is onto something.

Excerpts (emphasis DSC):

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes. 

Right now, it’s really just a prototype, but a small team inside the company has been trying to figure out what an AI notebook might look like.

 
© 2025 | Daniel Christian