OECS’s articles will not only establish a shared understanding of foundational concepts, but also showcase cutting-edge debates and introduce core subfields, central concepts, significant phenomena, and key methodologies.
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
On giving AI eyes and ears— from oneusefulthing.org by Ethan Mollick AI can listen and see, with bigger implications than we might realize.
Excerpt:
But even this is just the beginning, and new modes of using AI are appearing, which further increase its capabilities. I want to show you some examples of this emerging world, which I think will soon introduce a new wave of AI use cases, and accompanying disruption.
We need to recognize that these capabilities will continue to grow, and AI will be able to play a more active role in the real world by observing and listening. The implications are likely to be profound, and we should start thinking through both the huge benefits and major concerns today.
Even though generative AI is a new thing, it doesn’t change why students cheat. They’ve always cheated for the same reason: They don’t find the work meaningful, and they don’t think they can achieve it to their satisfaction. So we need to design assessments that students find meaning in.
Tricia Bertram Gallant
Caught off guard by AI— from chronicle.com by Beth McMurtrie and Beckie Supiano Professors scrambled to react to ChatGPT this spring — and started planning for the fall
Excerpt:
Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate?
…OpenAI built tool use right into the GPT API with an update called function calling. It’s a little like a child’s ability to ask their parents to help them with a task that they know they can’t do on their own. Except in this case, instead of parents, GPT can call out to external code, databases, or other APIs when it needs to.
Each function in function calling represents a tool that a GPT model can use when necessary, and GPT gets to decide which ones it wants to use and when. This instantly upgrades GPT capabilities—not because it can now do every task perfectly—but because it now knows how to ask for what it wants and get it.
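To make the mechanism concrete, here is a minimal sketch of the dispatch side of function calling. The tool name, schema, and simulated model reply are illustrative assumptions, not from the article; in real use the `reply` dict comes back from OpenAI's chat completions endpoint, so the API call itself is stubbed out here:

```python
import json

# Illustrative tool the model can "call" -- the name and behavior are
# assumptions for this sketch, not part of the article.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# A tool description in the shape the function-calling API expects:
# a JSON Schema the model uses to decide when and how to call the tool.
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def dispatch(message: dict) -> str:
    """Execute the function call the model asked for.

    `message` is shaped like the assistant message returned when the
    model chooses a function: the arguments arrive as a JSON string
    that our own code must parse and route to real code.
    """
    call = message["function_call"]
    args = json.loads(call["arguments"])
    return TOOLS[call["name"]](**args)

# Simulated model response (in real use this comes from the chat API).
reply = {"function_call": {"name": "get_weather", "arguments": '{"city": "Boston"}'}}
print(dispatch(reply))  # Sunny in Boston
```

This is the "child asking a parent for help" pattern from the excerpt: the model never runs the code itself; it only names the tool and supplies arguments, and your program does the rest.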
How ChatGPT can help disrupt assessment overload— from timeshighereducation.com by David Carless Advances in AI are not necessarily the enemy – in fact, they should prompt long overdue consideration of assessment types and frequency, says David Carless
Excerpt:
Reducing the assessment burden could support trust in students as individuals wanting to produce worthwhile, original work. Indeed, students can be co-opted as partners in designing their own assessment tasks, so they can produce something meaningful to them.
A strategic reduction in quantity of assessment would also facilitate a refocusing of assessment priorities on deep understanding more than just performance and carries potential to enhance feedback processes.
If we were to tackle assessment overload in these ways, it opens up various possibilities. Most significantly there is potential to revitalise feedback so that it becomes a core part of a learning cycle rather than an adjunct at its end. End-of-semester, product-oriented feedback, which comes after grades have already been awarded, fails to encourage the iterative loops and spirals typical of productive learning.
Since AI in education has been moving at the speed of light, we built this AI Tools in Education database to keep track of the most recent AI tools in education and the changes that are happening every day. This database is intended to be a community resource for educators, researchers, students, and other edtech specialists looking to stay up to date. This is a living document, so be sure to come back for regular updates.
These claims conjure up the rosiest of images: human resource departments and their robot buddies solving discrimination in workplace hiring. It seems plausible, in theory, that AI could root out unconscious bias, but a growing body of research shows the opposite may be more likely.
…
Companies’ use of AI didn’t come out of nowhere: For example, automated applicant tracking systems have been used in hiring for decades. That means if you’ve applied for a job, your resume and cover letter were likely scanned by an automated system. You probably heard from a chatbot at some point in the process. Your interview might have been automatically scheduled and later even assessed by AI.
From DSC:
Here was my reflection on this:
DC: Along these lines, I wonder if Applicant Tracking Systems cause us to become like typecast actors and actresses — only thought of for certain roles. Pigeonholed.
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
In June, ResumeBuilder.com surveyed more than 1,000 employees who are involved in hiring processes at their workplaces to find out about their companies’ use of AI interviews.
The results:
43% of companies already have or plan to adopt AI interviews by 2024
Two-thirds of this group believe AI interviews will increase hiring efficiency
15% say that AI will be used to make decisions on candidates without any human input
More than half believe AI will eventually replace human hiring managers
Watch OpenAI CEO Sam Altman on the Future of AI — from bloomberg.com Sam Altman, CEO & Co-Founder, OpenAI discusses the explosive rise of OpenAI and its products and what an AI-laced future can look like with Bloomberg’s Emily Chang at the Bloomberg Technology Summit.
The implementation of generative AI within these products will dramatically improve educators’ ability to deliver personalized learning to students at scale by enabling the application of personalized assessments and learning pathways based on individual student needs and learning goals. K-12 educators will also benefit from access to OpenAI technology…
After chronicling 160+ AI tools (which is surely only a small fraction of the total), we’re seeing a few clear patterns among the tools that have come out so far. Here are 10 categories that are jumping out!
“I don’t usually get worked up about announcements but I see promise in JFF’s plans for a new Center for Artificial Intelligence & the Future of Work, in no small part because the organization bridges higher ed, K-12 education, employers, and policymakers.”
BOSTON, June 14, 2023 —Jobs for the Future (JFF), a national nonprofit that drives transformation in the U.S. education and workforce systems, today announced the launch of its new Center for Artificial Intelligence & the Future of Work. This center will play an integral role in JFF’s mission and newly announced 10-year North Star goal to help 75 million people facing systemic barriers to advancement work in quality jobs. As AI’s explosive growth reshapes every aspect of how we learn, work, and live, this new center will serve as a nexus of collaboration among stakeholders from every part of the education-to-career ecosystem to explore the most promising opportunities—and profound challenges—of AI’s potential to advance an accessible and equitable future of learning and work.
OpenAI Considers ‘App Store’ For ChatGPT — from searchenginejournal.com; with thanks to Barsee at AI Valley for this resource OpenAI explores launching an ‘app store’ for AI models, potentially challenging current partners and expanding customer reach.
Highlights:
OpenAI considers launching an ‘app store’ for customized AI chatbots.
This move could create competition with current partners and extend OpenAI’s customer reach.
Early interest from companies like Aquant and Khan Academy shows potential, but product development and market positioning challenges remain.
The rise of artificial intelligence, especially generative AI, boosts productivity in content creation: text, code, images, and increasingly video.
Here are six preliminary conclusions about the nature of work and learning.
Wonder Tools: AI to try— from wondertools.substack.com by Jeremy Caplan 9 playful little ways to explore AI
Excerpt:
Create a personalized children’s story | Schrodi Collaborate with AI on a free customized, illustrated story for someone special. Give your story’s hero a name, pick a genre (e.g. comedy, thriller), choose an illustration style (e.g. watercolor, 3D animation) and provide a prompt to shape a simple story. You can even suggest a moral. After a minute, download a full-color PDF to share. Or print it and read your new mini picture book aloud.
Generate a quiz | Piggy Put in a link, a topic, or some text and you’ll get a quiz you can share, featuring multiple-choice or true-false questions. Example: try this quick entrepreneurship quiz Piggy generated for me.
Q: How will generative AI impact teaching and learning in the near and long term?
Baker Stein: One-on-one tutoring at scale is finally being unlocked for learners around the world. This type of quality education is no longer only available to students with the means to hire a private tutor. I’m also particularly excited to see how educators make use of generative AI tools to create courses much faster and likely at a higher quality with increased personalization for each student or even by experimenting with new technologies like extended reality. Professors will be able to put their time toward high-impact activities like mentoring, researching and office hours instead of tedious course-creation tasks. This helps open up the capacity for educators to iterate on their courses faster to keep pace with industry and global changes that may impact their field of study.
Another important use case is how generative AI can serve as a great equalizer for students when it comes to writing, especially second language learners.
The generative AI announcements are coming fast and furious these days, but among the biggest in terms of sheer dollar commitments just landed: Accenture, the global professional services and consulting giant, today announced it will invest $3 billion (with a “b”!) in AI over the next three years in building out its team of AI professionals and AI-focused solutions for its clients.
“There is unprecedented interest in all areas of AI, and the substantial investment we are making in our Data & AI practice will help our clients move from interest to action to value, and in a responsible way with clear business cases,” said Julie Sweet, Accenture’s chairwoman and CEO.
Also related/see:
Artificial intelligence creates 40,000 new roles at Accenture— from computerweekly.com by Karl Flinders Accenture is planning to add thousands of AI experts to its workforce as part of a $3bn investment in its data and artificial intelligence practice
Why leaders need to evolve alongside generative AI — from fastcompany.com by Kelsey Behringer Even if you’re not an educator, you should not be sitting on the sidelines watching the generative AI conversation being had around you—hop in.
Excerpts (emphasis DSC):
Leaders should be careful to watch and support education right now. At the end of the day, the students sitting in K-12 and college classrooms are going to be future CPAs, lawyers, writers, and teachers. If you are parenting a child, you have skin in the game. If you use professional services, you have skin in the game. When it comes to education, we all have skin in the game. … Students need to master fundamental skills like editing, questioning, researching, and verifying claims before they can use generative AI exceptionally well.
[On 6/15/23, I joined] colleagues from OpenAI, Google, Microsoft, Stanford, Harvard and others at the first meeting of the GenAI Summit. Our shared goal [was] to help to educate universities & schools in Europe about the impact of Generative AI on their work.
…how can we effectively communicate to education professionals that generative AI will enhance their work rather than replace them?
A recent controlled study found that ChatGPT can help professionals increase their efficiency in routine tasks by ~35%. If we keep in mind that the productivity gains brought by the steam engine in the nineteenth century were ~25%, this is huge.
As educators, we should embrace the power of ChatGPT to automate the repetitive tasks which we’ve been distracted by for decades. Lesson planning, content creation, assessment design, grading and feedback – generative AI can help us to do all of these things faster than ever before, freeing us up to focus on where we bring most value for our students.
SAN FRANCISCO, June 15 (Reuters) – Alphabet Inc (GOOGL.O) is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, four people familiar with the matter told Reuters.
The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information.
Adobe Firefly for the Enterprise — Dream Bigger with Adobe Firefly. Dream it, type it, see it with Firefly, our creative generative AI engine. Now in Photoshop (beta), Illustrator, Adobe Express, and on the web.
We’re rolling out web browsing and Plugins to all ChatGPT Plus users over the next week! Moving from alpha to beta, they allow ChatGPT to access the internet and to use 70+ third-party plugins. https://t.co/t4syFUj0fL pic.twitter.com/Mw9FMpKq91
Introducing the ChatGPT app for iOS— from openai.com The ChatGPT app syncs your conversations, supports voice input, and brings our latest model improvements to your fingertips.
Excerpt:
Since the release of ChatGPT, we’ve heard from users that they love using ChatGPT on the go. Today, we’re launching the ChatGPT app for iOS.
The ChatGPT app is free to use and syncs your history across devices. It also integrates Whisper, our open-source speech-recognition system, enabling voice input. ChatGPT Plus subscribers get exclusive access to GPT-4’s capabilities, early access to features and faster response times, all on iOS.
A few episodes back, we presented Tristan Harris and Aza Raskin’s talk The AI Dilemma. People inside the companies that are building generative artificial intelligence came to us with their concerns about the rapid pace of deployment and the problems that are emerging as a result. We felt called to lay out the catastrophic risks that AI poses to society and sound the alarm on the need to upgrade our institutions for a post-AI world.
The talk resonated – over 1.6 million people have viewed it on YouTube as of this episode’s release date. The positive reception gives us hope that leaders will be willing to come to the table for a difficult but necessary conversation about AI.
However, now that so many people have watched or listened to the talk, we’ve found that there are some AI myths getting in the way of making progress. On this episode of Your Undivided Attention, we debunk five of those misconceptions.
The State of Voice Technology in 2023 — from deepgram.com; with thanks to The Rundown for this resource Explore the latest insights on speech AI applications and automatic speech recognition (ASR) across a dozen industries, as seen by 400 business leaders surveyed for this report by Opus Research.
Your guide to AI: May 2023 — from nathanbenaich.substack.com by Nathan Benaich and Othmane Sebbouh Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI research (particularly for this issue!), industry, geopolitics and startups during April 2023.
ChatGPT: 30 incredible ways to use the AI-powered chatbot — from interestingengineering.com by Christopher McFadden You’ve heard of ChatGPT, but do you know how to use it? Or what to use it for? If not, then here are some ideas to get you started.
Excerpts:
It’s great at writing CVs and resumes
It can also read and improve the existing CV or resume
There are obvious questions like “Are the AI’s algorithms good enough?” (probably not yet) and “What will happen to Google?” (nobody knows), but I’d like to take a step back and ask some more fundamental questions: why chat? And why now?
Most people don’t realize that the AI model powering ChatGPT is not all that new. It’s a tweaked version of a foundation model, GPT-3, that launched in June 2020. Many people have built chatbots using it before now. OpenAI even has a guide in its documentation showing exactly how you can use its APIs to make one.
So what happened? The simple narrative is that AI got exponentially more powerful recently, so now a lot of people want to use it. That’s true if you zoom out. But if you zoom in, you start to see that something much more complex and interesting is happening.
This leads me to a surprising hypothesis: perhaps the ChatGPT moment never would have happened without DALL-E 2 and Stable Diffusion happening earlier in the year!
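As context for the claim that "many people have built chatbots" on the GPT-3 API before ChatGPT: the core of any such bot is just a message-history loop that resends the whole conversation to the model on each turn. A minimal sketch follows; the model call is stubbed out with a fake so the example runs standalone, but in real use `complete` would wrap a call to OpenAI's chat/completions API:

```python
# Minimal chatbot loop: the bot's "memory" is a growing list of messages
# that gets resent to the model on every turn.
def make_chatbot(complete, system_prompt: str):
    history = [{"role": "system", "content": system_prompt}]

    def chat(user_text: str) -> str:
        history.append({"role": "user", "content": user_text})
        reply = complete(history)  # in real use: an API call to the model
        history.append({"role": "assistant", "content": reply})
        return reply

    return chat

# Stub model: reports how many user turns it has seen, to show that the
# full history really is passed in each time.
def fake_model(messages):
    user_turns = sum(1 for m in messages if m["role"] == "user")
    return f"(reply to turn {user_turns})"

bot = make_chatbot(fake_model, "You are a helpful tutor.")
print(bot("Hi"))            # (reply to turn 1)
print(bot("Tell me more"))  # (reply to turn 2)
```

The point of the sketch is how little is involved: the hard part was never the loop, which supports the article's argument that ChatGPT's success was about packaging and timing rather than a brand-new capability.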
Like writing and coding before it, prompt engineering is an emergent form of thinking. It lies somewhere between conversation and query, between programming and prose. It is the one part of this fast-changing, uncertain future that feels distinctly human.
OpenAI’s ChatGPT, with new funding from Microsoft, has grown to over one million users faster than many of the dominant tech companies, apps and platforms of the past decade.
Unlike the metaverse concept, which had a hype cycle based on an idea still nebulous to many, generative AI as tech’s next big thing is being built on top of decades of existing machine learning already embedded in business processes.
We asked top technology officers, specifically reaching out to many at non-tech sector companies, to break down the potential and pitfalls of AI adoption.
Introducing: ChatGPT Edu-Mega-Prompts— from drphilippahardman.substack.com by Dr. Philippa Hardman; with thanks to Ray Schroeder out on LinkedIn for this resource How to combine the power of AI + learning science to improve your efficiency & effectiveness as an educator
From DSC:
Before relaying some excerpts, I want to say that I get the gist of what Dr. Hardman is saying re: quizzes. But I’m surprised to hear she had so many pedagogical concerns with quizzes. I, too, would like to see quizzes used as an instrument of learning and to practice recall — and not just for assessment. But I would give quizzes a higher thumbs up than she did. I think she was also trying to say that quizzes don’t always identify misconceptions or inaccurate foundational information.
Excerpts:
The Bad News: Most AI technologies that have been built specifically for educators in the last few years and months imitate and threaten to spread the use of broken instructional practices (i.e. content + quiz).
The Good News: Armed with prompts which are carefully crafted to ask the right thing in the right way, educators can use AI like GPT-3 to improve the effectiveness of their instructional practices.
As is always the case, ChatGPT is your assistant. If you’re not happy with the result, you can edit and refine it using your expertise, either alone or through further conversation with ChatGPT.
For example, once the first response is generated, you can ask ChatGPT to make the activity more or less complex, to change the scenario and/or suggest more or different resources – the options are endless.
Philippa recommended checking out Rob Lennon’s streams of content. Here’s an example from his Twitter account:
Everyone’s using ChatGPT.
But almost everyone’s STUCK in beginner mode.
10 techniques to get massively ahead with AI:
(cut-and-paste these prompts)
— Rob Lennon | Audience Growth (@thatroblennon) January 3, 2023
AI-assisted design and development work
This is the trend most likely to have a dramatic evolution this year.
…
Solutions like large language models, speech generators, content generators, image generators, translation tools, transcription tools, and video generators, among many others, will transform the way IDs create the learning experiences our organizations use. Two examples are:
1. IDs will be doing more curation and less creation:
Many IDs will start pulling raw material from content generators (built using natural language processing platforms like OpenAI’s GPT-3, Microsoft’s LUIS, IBM’s Watson, Google’s BERT, etc.) to obtain ideas and drafts that they can then clean up and add to the assets they are assembling. As technology advances, the output from these platforms will be more suitable to become final drafts, and the curation and clean-up tasks will be faster and easier.
Then, the designer can leverage a solution like DALL-E 2 (or a product developed based on it) to obtain visuals that can (or not) be modified with programs like Illustrator or Photoshop (see image below for DALL-E’s “Cubist interpretation of AI and brain science”).
2. IDs will spend less, and in some cases no time at all, creating learning pathways
AI engines contained in LXPs and other platforms will select the right courses for employees and guide these learners from their current level of knowledge and skill to their goal state with substantially less human intervention.
Somehow, Mira Murati can forthrightly discuss the dangers of AI while making you feel like it’s all going to be OK.
… A growing number of leaders in the field are warning of the dangers of AI. Do you have any misgivings about the technology?
This is a unique moment in time where we do have agency in how it shapes society. And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out. How do you get the model to do the thing that you want it to do, and how you make sure it’s aligned with human intention and ultimately in service of humanity? There are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.
Gerganov adapted it from a program called Whisper, released in September by OpenAI, the same organization behind ChatGPT and DALL-E. Whisper transcribes speech in more than ninety languages. In some of them, the software is capable of superhuman performance—that is, it can actually parse what somebody’s saying better than a human can.
…
Until recently, world-beating A.I.s like Whisper were the exclusive province of the big tech firms that developed them.
Ever since I’ve had tape to type up—lectures to transcribe, interviews to write down—I’ve dreamed of a program that would do it for me. The transcription process took so long, requiring so many small rewindings, that my hands and back would cramp. As a journalist, knowing what awaited me probably warped my reporting: instead of meeting someone in person with a tape recorder, it often seemed easier just to talk on the phone, typing up the good parts in the moment.
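To give a sense of what sits between the audio and the finished transcript: Whisper-style tools return timestamped segments, and a few lines of code turn those into the readable transcript the author used to type by hand. The helper below is an illustrative sketch (the segment dicts mirror the shape of Whisper's `transcribe()` output, but the function itself is mine, not from the article):

```python
def to_transcript(segments) -> str:
    """Format Whisper-style segments (start/end seconds plus text) into a
    timestamped transcript -- the step that replaces the rewind-and-type
    loop the author describes."""
    lines = []
    for seg in segments:
        m, s = divmod(int(seg["start"]), 60)
        lines.append(f"[{m:02d}:{s:02d}] {seg['text'].strip()}")
    return "\n".join(lines)

# Example segments shaped like a speech-recognition model's output.
segments = [
    {"start": 0.0, "end": 4.2, "text": " Thanks for joining me today."},
    {"start": 4.2, "end": 9.8, "text": " Let's start with your research."},
]
print(to_transcript(segments))
# [00:00] Thanks for joining me today.
# [00:04] Let's start with your research.
```

The timestamps are what make the output useful for journalists: they point straight back to the spot on the tape, so checking a quote no longer means scrubbing through the recording.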
From DSC: Journalism majors — and even seasoned journalists — should keep an eye on this type of application, as it will save them a significant amount of time and/or money.
Built on the familiar, all-in-one collaborative experience of Microsoft Teams, Teams Premium brings the latest technologies, including Large Language Models powered by OpenAI’s GPT-3.5, to make meetings more intelligent, personalized, and protected—whether it’s one-on-one, large meetings, virtual appointments, or webinars.
LLMs Will Make Creating the Content Infrastructure Significantly Easier, Faster, and Cheaper
LLMs will dramatically increase the speed of creating the informational resources that comprise the content infrastructure. Of course the drafts of these informational resources will need to be reviewed and improvements will need to be made – just as is the case with all first drafts – to ensure accuracy and timeliness. But it appears that LLMs can get us 80% or so of the way to reasonable first drafts orders of magnitude faster, eliminating the majority of the expense involved in this part of the process. Here’s an example of what I’m talking about. Imagine you’re a SME who has been tasked with writing the content for an introductory economics textbook. (The following examples are from ChatGPT.)
Speaking of ID and higher education, also relevant/see:
A learning ecosystem is composed of people, tools, technologies, content, processes, culture, strategies, and any other resource that helps one learn. Learning ecosystems can be at an individual level as well as at an organizational level.
Some example components:
Subject Matter Experts (SMEs) such as faculty, staff, teachers, trainers, parents, coaches, directors, and others
Fellow employees
L&D/Training professionals
Managers
Instructional Designers
Librarians
Consultants
Types of learning
Active learning
Adult learning
PreK-12 education
Training/corporate learning
Vocational learning
Experiential learning
Competency-based learning
Self-directed learning (i.e., heutagogy)
Mobile learning
Online learning
Face-to-face-based learning
Hybrid/blended learning
Hyflex-based learning
Game-based learning
XR-based learning (AR, MR, and VR)
Informal learning
Formal learning
Lifelong learning
Microlearning
Personalized/customized learning
Play-based learning
Cloud-based learning apps
Coaching & mentoring
Peer feedback
Job aids/performance tools and other on-demand content
Websites
Conferences
Professional development
Professional organizations
Social networking
Social media – Twitter, LinkedIn, Facebook/Meta, other
Communities of practice
Artificial Intelligence (AI) — including ChatGPT, learning agents, learner profiles,
ChatGPT, Chatbots and Artificial Intelligence in Education — from ditchthattextbook.com by Matt Miller AI just stormed into the classroom with the emergence of ChatGPT. How do we teach now that it exists? How can we use it? Here are some ideas.
Excerpt: Now, we’re wondering …
What is ChatGPT? And, more broadly, what are chatbots and AI?
How is this going to impact education?
How can I teach tomorrow knowing that this exists?
Can I use this as a tool for teaching and learning?
Should we block it through the school internet filter — or try to ban it?
The tech world is abuzz over ChatGPT, a chat bot that is said to be the most advanced ever made.
It can create poems, songs, and even computer code. It convincingly constructed a passage of text on how to remove a peanut butter sandwich from a VCR, in the voice of the King James Bible.
As a PhD microbiologist, I devised a 10-question quiz that would be appropriate as a final exam for college-level microbiology students. ChatGPT blew it away.
On the one hand, yes, ChatGPT is capable of producing prose that looks convincing. But on the other hand, what it means to be convincing depends on context. The kind of prose you might find engaging and even startling in the context of a generative encounter with an AI suddenly seems just terrible in the context of a professional essay published in a magazine such as The Atlantic. And, as Warner’s comments clarify, the writing you might find persuasive as a teacher (or marketing manager or lawyer or journalist or whatever else) might have been so by virtue of position rather than meaning: The essay was extant and competent; the report was in your inbox on time; the newspaper article communicated apparent facts that you were able to accept or reject.
These lines of demarcation—the lines between when a tool can do all of a job, some of it, or none of it—are both constantly moving and critical to watch. Because they define knowledge work and point to the future of work. We need to be teaching people how to do the kinds of knowledge work that computers can’t do well and are not likely to be able to do well in the near future. Much has been written about the economic implications of the AI revolution, some of which are problematic for the employment market. But we can put too much emphasis on that part. Learning about artificial intelligence can be a means for exploring, appreciating, and refining natural intelligence. These tools are fun. I learn from using them. Those two statements are connected.
Google is planning to create a new AI feature for its Search engine, one that would rival the recently released and controversial ChatGPT from OpenAI. The company revealed this after a recent executive meeting that involved the likes of its CEO Sundar Pichai and AI head, Jeff Dean, and that discussed the technology the company already has and could soon bring to development.
Employees from the Mountain View giant were concerned that it was lagging behind current AI trends set by the likes of OpenAI, despite already having similar technology lying around.
And more focused on the business/vocational/corporate training worlds:
There are a lot of knowledge management, enterprise learning and enterprise search products on the market today, but what Sana believes it has struck on uniquely is a platform that combines all three to work together: a knowledge management-meets-enterprise-search-meets-e-learning platform.
Three sources briefed on OpenAI’s recent pitch to investors said the organization expects $200 million in revenue next year and $1 billion by 2024.
The forecast, first reported by Reuters, represents how some in Silicon Valley are betting the underlying technology will go far beyond splashy and sometimes flawed public demos.
“We’re going to see advances in 2023 that people two years ago would have expected in 2033. It’s going to be extremely important not just for Microsoft’s future, but for everyone’s future,” he said in an interview this week.
Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.
The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.
In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.
Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?
And what does it mean for professors if the answer to those questions is “yes”?
…
Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.
“Academia really has to look at itself in the mirror and decide what it’s going to be,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi, who has criticized the “moral panic” he has seen in response to ChatGPT. “Is it going to be more concerned with compliance and policing behaviors and trying to get out in front of cheating, without any evidence to support whether or not that’s actually going to happen? Or does it want to think about trust in students as its first reaction and building that trust into its response and its pedagogy?”
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.
it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.
1/ Large language models like Galactica and ChatGPT can spout nonsense in a confident, authoritative tone. This overconfidence – which reflects the data they’re trained on – makes them more likely to mislead.
The thing is, a good toy has a huge advantage: People love to play with it, and the more they do, the quicker its designers can make it into something more. People are documenting their experiences with ChatGPT on Twitter, looking like giddy kids experimenting with something they’re not even sure they should be allowed to have. There’s humor, discovery and a game of figuring out the limitations of the system.
And on the legal side of things:
In the legal education context, I’ve been playing around with generating fact patterns and short documents to use in exercises.
The venerable stock image site, Getty, boasts a catalog of 80 million images. Shutterstock, a rival of Getty, offers 415 million images. It took a few decades to build up these prodigious libraries.
Now, it seems we’ll have to redefine prodigious. In a blog post last week, OpenAI said its machine learning algorithm, DALL-E 2, is generating over two million images a day. At that pace, its output would equal Getty and Shutterstock combined in eight months. The algorithm is producing almost as many images daily as the entire collection of free image site Unsplash.
And that was before OpenAI opened DALL-E 2 to everyone.
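A quick back-of-the-envelope check of that eight-month comparison, as a sketch using the catalog sizes and daily output quoted above (the month conversion assumes an average of about 30.4 days per month):

```python
# Assumed figures from the excerpt above, not exact counts:
getty = 80_000_000           # Getty's reported catalog size
shutterstock = 415_000_000   # Shutterstock's reported catalog size
dalle_per_day = 2_000_000    # OpenAI's stated daily DALL-E 2 output

combined = getty + shutterstock          # 495 million images
days = combined / dalle_per_day          # days to match both catalogs
months = days / 30.4                     # average days per month

print(f"{days:.0f} days, or about {months:.1f} months")
# → 248 days, or about 8.1 months
```

At two million images a day, matching both catalogs takes roughly 248 days, which is where the "eight months" figure comes from.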
A sample video generated by Meta’s new AI text-to-video model, Make-A-Video. The text prompt used to create the video was “a teddy bear painting a portrait.” Image: Meta
From DSC: Hmmm…I wonder…how might these emerging technologies impact copyrights, intellectual property, and/or other types of legal matters and areas?
Learning 3.0: A data-fueled, equitable future for corporate learning — from chieflearningofficer.com by Marc Ramos and Marc Zao-Sanders Learning pedagogy, technology and practice inevitably draw on (but tend to lag behind) the developments of the web, the world’s main stage for advancement and innovation.
Excerpts:
Tomorrow could be extraordinary. Many of the crowning jewels of Web 3.0 and web3 have been designed to be open source, user-friendly and ship with APIs, such as OpenAI’s GPT-3, which generates natural language to an expert human level, seemingly at will. This means that the time between the launch of cutting-edge technology and it reaching corporate learning will decrease substantially. Learning might finally advance from the back seat to a board seat. There is already a growing list of GPT-3 content creation tools that will impact creators, publishers, and academic and corporate education materials, as well as the design process.
We’re less than five years from this. The technology is here already. What’s missing is the data.
We need less tweaking and more rethinking of how to deliver greater access, affordability and equity in higher education, and we must do it at scale. We need a new paradigm for the majority of students in higher education today that commits to meaningful employment and sustainable-wage careers upon completion of a degree or credential.
The challenge is the same for the business of higher education in serving future, more fluid students — and today’s nontraditional students. Many need to flow in and out of jobs and education, rather than pursue a degree in two or four years. Increasingly, they will seek to direct their educational experience toward personalized career opportunities, while stacking and banking credentials and experience into degrees.
From DSC: Coming in and going out of “higher education” throughout one’s career and beyond…constant changes…morphing…hmm…sounds like a lifelong learning ecosystem to me.
Although private nonprofit institutions accounted for 44% of all master’s programs in the data, they made up 75% of programs with high debt and low earnings.
From DSC: From someone who is paying for rent for a college student — along with tuition, books, fees, etc. — this has direct application to our household. If there isn’t a perfect storm developing in higher ed, then I don’t know what that phrase means.
#costofhighereducation #inflation
HBCUs see a historic jump in enrollments — from npr.org with Michel Martin; with thanks to Marcela Rodrigues-Sherley and Julia Piper from The Chronicle for the resource
Also from that same newsletter:
What would Harvard University’s ranking be if the only criterion considered was economic mobility? According to The Washington Post, it would be 847th out of 1,320. First place would go to California State University at Los Angeles.
Change is a constant in higher ed, just as it is in the labor market. Staying up to date and flexible is more important than ever for colleges and universities, and through the pandemic, many relied on their continuing and workforce education divisions to support their agility. In fact, 56% of higher ed leaders said the role of their CE units expanded through the pandemic.
The pandemic led to some of the biggest innovations in continuing ed in recent memory.
Lobbying for more support for students with learning disabilities in higher education, the students called for increased funding for the National Center for Special Education Research and the Individuals with Disabilities Education Act (IDEA) — legislation which requires that children with disabilities be given a free and appropriate public education, and which provides federal funds to states and local educational agencies to make sure that happens. They also encouraged lawmakers to pass the RISE Act, a bill designed to better support neurodiverse students in higher education.
Partnerships between higher education institutions and employers can be difficult to create, often because of misalignment between the cultures, structures and values of the two groups, according to a July report from California Competes, a nonprofit policy organization focused on higher education.
Higher ed leaders could improve employer relations by making industry engagement an expected responsibility of both faculty and staff, said the report, which drew from 28 interviews with people at colleges and employers.
Robust employer engagement can strengthen enrollment and job outcomes for students, the authors argued, while also benefiting state and local economies.