
Credentialed Learning For All — from gettingsmart.com

Vision

Learning happens throughout life and is not isolated to the K-12 or higher education sectors. Yet, often, validations of learning only happen in these specific areas. The system of evaluation based on courses, grades, and credit serves as a poor proxy for communicating skills given the variation in course content, grade inflation, and inclusion of participation and extra credit within course grades.

Credentialed learning provides a way to accurately document human capability for all learners throughout their lives. A lifetime credentialed learning ecosystem provides better granularity around learning, better documentation of that learning, and more relevance for both the credential recipient and the reviewer. This improves the match between the individual and higher education and/or employment, while also providing a clearer and more accurate lifetime learning pathway.

With a fully credentialed system, individuals can own well-documented evidence of a lifetime of learning and choose what to share and when. This technology gives every learner more opportunities to find the best career match without today’s barriers of cost, access, and proxies.


Addendum on 4/28/23 — speaking of credentials:

First Rung — from the-job.beehiiv.com by Paul Fain
New research shows stacking credentials pays off for low-income learners.

Stacking credentials pays off for many low-income students, new research finds, but only if learners move up the education ladder. Also, Kansas is hoping a new grant program will attract more companies to participate in microinternships.


 

The future of education in a world of AI — from oneusefulthing.org by Ethan Mollick
A positive vision for the transformation to come

Excerpt:

Imagine introducing high-quality AI tutors into the flipped classroom model. These AI-powered systems have the potential to significantly enhance the learning experience for students and make flipped classrooms even more effective. They provide personalized learning, where AI tutors can tailor instruction to each student’s unique needs while continually adjusting content based on performance. This means that students can engage with the content at home more effectively, ensuring they come to class better prepared and ready to dive into hands-on activities or discussions.

With AI tutors taking care of some of the content delivery outside of class, teachers can devote more time to fostering meaningful interactions with their students during class. They can also use insights from the AI tutors to identify areas where students might need extra support or guidance, enabling them to provide more personalized and effective instruction. And with AI assistance, they can design better active learning opportunities in class to make sure learnings stick.


Also relevant/see:

ChatGPT is going to change education, not destroy it — from technologyreview.com by Will Douglas Heaven
The narrative around cheating students doesn’t tell the whole story. Meet the teachers who think generative AI could actually make learning better.

Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more.

Will ChatGPT Change How Professors Assess Learning? — from chronicle.com by Becky Supiano
It won’t be easy without their colleges’ support.

What the Past Can Teach Us About the Future of AI and Education — from campustechnology.com by Dr. David Wiley
Current attitudes toward generative AI hearken back to early skepticism about the impact of the internet on education. Both then and now, technology has created challenges but also opportunities that can’t be ignored.


 

From DSC:
Before we get to Scott Belsky’s article, here’s an interesting/related item from Tobi Lutke:


Our World Shaken, Not Stirred: Synthetic entertainment, hybrid social experiences, syncing ourselves with apps, and more. — from implications.com by Scott Belsky
Things will get weird. And exciting.

Excerpts:

Recent advances in technology will shake the pot of culture and our day-to-day experiences. Examples? A new era of synthetic entertainment will emerge, online social dynamics will become “hybrid experiences” where AI personas are equal players, and we will sync ourselves with applications as opposed to using applications.

A new era of synthetic entertainment will emerge as the world’s video archives – as well as actors’ bodies and voices – are used to train models. Expect sequels made without actor participation, a new era of AI-outfitted creative economy participants, a deluge of imaginative media that would have been cost-prohibitive, and copyright wars and legislation.

Unauthorized sequels, spin-offs, some amazing stuff, and a legal dumpster fire: Now let’s shift beyond Hollywood to the fast-growing long tail of prosumer-made entertainment. This is where entirely new genres of entertainment will emerge, including the unauthorized sequels and spin-offs that I expect we will start seeing.


Also relevant/see:

Digital storytelling with generative AI: notes on the appearance of #AICinema — from bryanalexander.org by Bryan Alexander

Excerpt:

This is how I viewed a fascinating article about the so-called #AICinema movement.  Benj Edwards describes this nascent current and interviews one of its practitioners, Julie Wieland.  It’s a great example of people creating small stories using tech – in this case, generative AI, specifically the image creator Midjourney.

Bryan links to:

Artists astound with AI-generated film stills from a parallel universe — from arstechnica.com by Benj Edwards
A Q&A with “synthographer” Julie Wieland on the #aicinema movement.

An AI-generated image from an #aicinema still series called Vinyl Vengeance by Julie Wieland, created using Midjourney.


From DSC:
How will text-to-video impact the Learning and Development world? Teaching and learning? Those people communicating within communities of practice? Those creating presentations and/or offering webinars?

Hmmm…should be interesting!


 

Who Will Be the Academic Prompt Engineering Experts? — from wallyboston.com by Wally Boston

Excerpts:

As I mentioned in a post a month ago, prompt engineering is the term of art for prompting a generative AI tool like ChatGPT-4 to produce an answer to a question, analysis of a problem, or an essay about a particular topic.

It’s ironic that the “experts” providing prompt engineering courses are not academics. I happen to believe that there are academics mastering prompt engineering as I write this.

Last week, I wrote about generative AI tools that could be used to decrease the instructional design time required to build a college course. I also wrote an article about Villanova University professor Noah Barsky’s opinion piece suggesting that all MBA curriculums should be rewritten to teach their students how to utilize AI tools in their professions. Business schools with outdated curriculums will not prepare their graduates for businesses that expect them to master state-of-the-art AI tools.


Also relevant/see:

From DSC:
Read that again…

“In June, when students are finishing up their classes, they may be punished by their instructors for using AI as a writing tool. But in July, when they’re out on the job market, they can be punished by their employers or potential employers for not knowing how to use AI as a writing tool.”

 


Also relevant/see:

11 Tips to Take Your ChatGPT Prompts to the Next Level — from wired.com by David Nield
Sure, anyone can use OpenAI’s chatbot. But with smart engineering, you can get way more interesting results.

Excerpts:

You don’t have to do all the typing yourself when it comes to ChatGPT. Copy and paste is your friend, and there’s no problem with pasting in text from other sources.

Another way to improve the responses you get from ChatGPT is to give it some data to work with before you ask your question.

Your answers can be seriously improved if you give ChatGPT some ingredients to work with before asking for a response.
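
The second and third excerpts describe the same pattern: supply the source material first, then ask your question. As a purely illustrative sketch (not from the Wired article), here is roughly what that pattern looks like when calling a chat model through the OpenAI Python library; the model name, prompts, and setup are assumptions, not anything the article prescribes.

```python
# A minimal sketch (not from the Wired article) of the "give ChatGPT data to
# work with first" tip, using the OpenAI Python library (openai >= 1.0).
# The model name, prompts, and environment setup are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

source_text = """Paste the article, notes, or other material you want the model to use here."""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any capable chat model could be substituted
    messages=[
        {"role": "system", "content": "Answer using only the material supplied by the user."},
        # The context goes in first...
        {"role": "user", "content": f"Here is the material to work from:\n\n{source_text}"},
        # ...and the actual question comes after it.
        {"role": "user", "content": "Now summarize the three main points in plain language."},
    ],
)

print(response.choices[0].message.content)
```

The point is simply the ordering: the context goes in before the question, which mirrors the copy-and-paste workflow the article describes.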

 

Fastcase, vLex merger accelerates investment into legal AI — from reuters.com by Sara Merken

Excerpts (emphasis DSC):

(Reuters) – As artificial intelligence pushes deeper into the legal industry, Fastcase and vLex are merging in a deal the legal research companies said [on 4/4/23] will speed up the creation of AI tools for lawyers.

The merger creates a law library that is “the biggest legal data corpus ever assembled,” the companies said. The new company will have more than one billion legal documents from more than 100 countries, including judicial opinions, statutes, regulations, briefs, pleadings and legal news articles, they said.


Also see:

In Major Legal Tech Deal, vLex and Fastcase Merge, Creating A Global Legal Research Company, Backed By Oakley Capital and Bain Capital — from lawnext.com by Bob Ambrogi

Excerpt:

In a deal that will reshape the legal research and legal technology landscape on a global basis and threaten the longstanding “Wexis” legal research duopoly, the companies vLex and Fastcase today announced that they have merged into a single entity that they say will have the world’s largest subscriber base of lawyers and law firms and a legal research library of more than 1 billion documents from more than 100 countries.


Speaking about the legal realm and innovations, also see:

On LawNext: 15 Years, 15 Lessons: Clio Founder Jack Newton On What He’s Learned About Building a Successful Company — from lawnext.com by Bob Ambrogi and Jack Newton

Excerpt:

As Clio marks its 15th anniversary in 2023, Newton sat down with me to share 15 lessons he has learned along the way regarding what makes a successful company and a successful leader. He also reminisces about the early days of starting Clio and his early successes and challenges. Notably, he and Gauvreau founded Clio in the middle of the Great Recession, and one of the lessons he shares in this episode is his belief that a recession is a great time to build a company.

For anyone who has founded or is thinking of founding a legal tech startup, this episode is a must-listen. Even for those who are not tech founders, but law firm founders, many of Newton’s lessons apply.

 
 

A museum without screens: The Media Museum of Sound and Vision in Hilversum — from inavateonthenet.net

Excerpt:

Re-opened to the public last month after five years of planning and two-and-a-half years of renovations, the Media Museum of Sound and Vision in Hilversum, in the Netherlands, is an immersive experience exploring modern media. It’s become a museum that continuously adapts to the actions of its visitors in order to reflect the ever-changing face of media culture.

How we consume media is revealed in five zones in the building: Share, Inform, Sell, Tell and Play. The Media Museum includes more than 50 interactives, with hundreds of hours of AV material and objects from history. The experience uses facial recognition and the user’s own smartphone to make it a personalised museum journey for everyone.

 

A portion of the Media Museum in Hilversum, the Netherlands
Photo from Mike Bink

From DSC:
Wow! There is some serious AV work and creativity in the Media Museum of Sound and Vision!

 
 

Why some college professors are adopting ChatGPT AI as quickly as students — from cnbc.com by Carolyn Chun

Key Points:

  • A recent analysis by researchers at NYU, Princeton and the Wharton School finds that many of the jobs that will be most “exposed” to generative AI such as ChatGPT are in the college teaching profession.
  • One of the first narratives to emerge from the sudden explosion in usage of ChatGPT is the risk of students cheating on writing assignments.
  • But use by college teachers is growing quickly too, and adoption by educators may be critical to making the case that AI will augment the jobs humans are doing rather than replace them.

Also relevant/see:


 


Also relevant/see:


Learning Designers will have to adapt or die. 10 ways to UPSKILL to AI…. — from donaldclarkplanb.blogspot.com by Donald Clark

Learning Designers need to upskill


From Ethan Mollick on LinkedIn:

Take a look at this simulated negotiation, with grading and feedback. Prompt: “I want to do deliberate practice about how to conduct negotiations. You will be my negotiation teacher. You will simulate a detailed scenario in which I have to engage in a negotiation. You will fill the role of one party, I will fill the role of the other. You will ask for my response in each step of the scenario and wait until you receive it. After getting my response, you will give me details of what the other party does and says. You will grade my response and give me detailed feedback about what to do better using the science of negotiation. You will give me a harder scenario if I do well, and an easier one if I fail.”

Samples from Bing Creative mode and ChatGPT-4 (3.5, the free version, does not work as well)
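
For anyone who wants to run Mollick’s prompt outside a chat interface, here is a rough, purely illustrative sketch of wrapping it in a simple conversational loop with the OpenAI Python library; the model choice and loop structure are my own assumptions, not something Mollick specifies.

```python
# A rough sketch of running the negotiation-practice prompt above as a looping
# conversation with the OpenAI Python library (openai >= 1.0). The model name
# and loop structure are illustrative assumptions, not part of Mollick's post.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

TUTOR_PROMPT = (
    "I want to do deliberate practice about how to conduct negotiations. "
    "You will be my negotiation teacher. You will simulate a detailed scenario "
    "in which I have to engage in a negotiation. You will fill the role of one "
    "party, I will fill the role of the other. You will ask for my response in "
    "each step of the scenario and wait until you receive it. After getting my "
    "response, you will give me details of what the other party does and says. "
    "You will grade my response and give me detailed feedback about what to do "
    "better using the science of negotiation. You will give me a harder "
    "scenario if I do well, and an easier one if I fail."
)

# Start the conversation with the practice prompt, then alternate turns.
messages = [{"role": "user", "content": TUTOR_PROMPT}]

while True:
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    turn = reply.choices[0].message.content
    print("\nTUTOR:", turn)
    messages.append({"role": "assistant", "content": turn})

    answer = input("\nYour move (or 'quit' to stop): ")
    if answer.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": answer})
```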


I’m having a blast with ChatGPT – it’s testing ME! — by Mark Mrohs
Using ChatGPT as an agent for asynchronous active learning

I have been experimenting with possible ways to incorporate interactions with ChatGPT into instruction. And I’m blown away. I want to show you some of what I’ve come up with.

 

Teaching: What You Can Learn From Students About ChatGPT — from chronicle.com by Beth McMurtrie

Excerpts (emphasis DSC):

Like a lot of you, I have been wondering how students are reacting to the rapid launch of generative AI tools. And I wanted to point you to creative ways in which professors and teaching experts have helped involve them in research and policymaking.

At Kalamazoo College, Autumn Hostetter, a psychology professor, and six of her students surveyed faculty members and students to determine whether they could detect an AI-written essay, and what they thought of the ethics of using various AI tools in writing. You can read their research paper here.

Next, participants were asked about a range of scenarios, such as using Grammarly, using AI to make an outline for a paper, using AI to write a section of a paper, looking up a concept on Google and copying it directly into a paper, and using AI to write an entire paper. As expected, commonly used tools like Grammarly were considered the most ethical, while writing a paper entirely with AI was considered the least. But researchers found variation in how people approached the in-between scenarios. Perhaps most interesting: students and faculty members shared very similar views on each scenario.

 


Also relevant/see:

This Was Written By a Human: A Real Educator’s Thoughts on Teaching in the Age of ChatGPT — from er.educause.edu by Jered Borup
The well-founded concerns surrounding ChatGPT shouldn’t distract us from considering how it might be useful.


 

From DSC:
After seeing this…

…I wondered:

  • Could GPT-4 create the “Choir Practice” app mentioned below?
    (Choir Practice was an idea for an app for people who want to rehearse their parts at home)
  • Could GPT-4 be used to extract audio/parts from a musical score and post the parts separately for people to download/practice their individual parts?

This line of thought reminded me of this posting that I did back on 10/27/2010 entitled, “For those institutions (or individuals) who might want to make a few million.”

Choir Practice -- an app for people who want to rehearse at home

And I want to say that when I went back to look at this posting, I was a bit ashamed of myself. I’d like to apologize for the times when I’ve been too excited about something and exaggerated/hyped an idea up on this Learning Ecosystems blog. For example, I used the words millions of dollars in the title…and that probably wouldn’t be the case these days. (But with inflation being what it is, heh…who knows!? Maybe I shouldn’t be too hard on myself.) I just had choirs in mind when I posted the idea…and there aren’t as many choirs around these days.  🙂

 

The above Tweet links to:

Pause Giant AI Experiments: An Open Letter — from futureoflife.org
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.



However, the letter has since received heavy backlash, as there appears to have been no verification of signatures. Yann LeCun of Meta denied signing the letter and completely disagreed with its premise. (source)


In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT — from wired.com by Will Knight (behind paywall)
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.


 

ChatGPT: Student insights are necessary to help universities plan for the future — from theconversation.com by Alpha Abebe and Fenella Amarasinghe

Excerpt:

In the race to get ahead of new technologies, are we forgetting about the perspectives of the most important stakeholders within our post-secondary institutions: the students?

Leaving students out of early discussions and decision-making processes is almost always a recipe for ill-fitting, ineffective and/or damaging approaches. The mantra “nothing about us without us” comes to mind here.

 


Also relevant/see:

We have moved from Human Teachers and Human Learners as a dyad to Human Teachers, Human Learners, AI Teachers, and AI Learners as a tetrad.


 
© 2024 | Daniel Christian