From DSC:
For those who dog the “doomsayers” of higher ed…
You need to realize that many “doomsayers” are trying to get traditional institutions of higher education to change, experiment, lower their price tags, collaborate with K12 and/or with the corporate/vocational realms, and innovate.
While many of those same institutions haven’t closed (at least not yet), there are many examples of budget cuts, downsizing, layoffs, early retirements, etc.
Many of those same institutions are not the same as they were 20-30 years ago — not even close. This is becoming especially true for liberal arts colleges.
Here’s one example that made me post this reflection:
NPR’s Sarah McCammon talks with Hechinger Report author Jon Marcus about the financial woes of rural universities and why some are dropping dozens of programs.
Excerpt:
Many colleges and universities in rural America are slashing budgets as enrollment numbers continue to dwindle. And often, the first things to be cut are humanities programs like history and English. It’s forcing some students to consider transferring to other schools or leaving higher education altogether. Jon Marcus has been covering this erosion of funding at rural universities and its domino effects with The Hechinger Report, and he joins us now. Welcome to the program.
GPT Takes the Bar Exam — from papers.ssrn.com by Michael James Bommarito and Daniel Martin Katz; with thanks to Gabe Teninbaum for his tweet on this
Excerpt from the Abstract (emphasis DSC):
While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.
From DSC: I wish politicians and legislators would stay out of the way and let public and private educators make the decisions. But if any politician is about to vote on significant education-related policies, laws, etc. — I would like to suggest that society require them to either:
teach a K12-based class for at least one month or
be in the classroom for the entire day to observe — and do this for at least one month
Perhaps we would have far less standardized testing. Perhaps we would have far more joy and wonder — for the teachers as well as for the students. Perhaps lifelong learning — and the love of learning — would get the wind in its sails that it so desperately needs.
From DSC: Along these lines of enjoyment in everyday things, could this type of thing happen more within education?
Real Ways Professionals Can Use ChatGPT to Improve Job Performance
Let’s dive into some real examples of how professionals across sales, marketing, product management, project management, recruiting, and teaching can take advantage of this new tool and leverage it for even more impact in their careers.
Teachers and ChatGPT
Help with grading and feedback on student work.
Example prompt: “Tell me every grammar rule that’s been violated in this student’s essay: [paste in essay]”
Create personalized learning materials.
Example prompt: “Help me explain photosynthesis to a 10th grade student in a way similar to sports.”
Generate lesson plans and activities.
Example prompt: “Create an activity for 50 students that revolves around how to learn the different colors of the rainbow.” or “Generate a lesson plan for a high school English class on the theme of identity and self-discovery, suitable for a 45-minute class period.”
Write fake essays several reading levels below your class, then print them out, and have your students review and edit the AI’s work to make it better.
Example prompt: “Generate a 5th grade level short essay about Maya Angelou and her work.”
Provide one-on-one support to students.
Example prompt: “How can I best empower an introverted student in my classroom during reading time?”
From DSC: I haven’t tried these prompts. Rather, I post this because I’m excited about the potential of Artificial Intelligence (AI) to help people teach and to help people learn.
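For teachers who want to experiment with prompts like these programmatically rather than through the chat interface, here is a minimal, untested sketch using OpenAI’s Python library. The model name, the system message, and the placeholder essay text are illustrative assumptions on my part, not something taken from the article above.

```python
# A minimal sketch of sending one of the example prompts above to OpenAI's API.
# Assumes the `openai` Python package (v1+) is installed and the OPENAI_API_KEY
# environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

essay_text = "Paste the student's essay here."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; swap in whatever model you have access to
    messages=[
        {"role": "system", "content": "You are a helpful teaching assistant."},
        {
            "role": "user",
            "content": (
                "Tell me every grammar rule that's been violated in this "
                f"student's essay: {essay_text}"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The same pattern would work for any of the prompts listed above — simply swap the lesson-plan or activity prompt in as the user message.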
Some use cases of the convergence of AI and 5G are:
Metaverse: AI is a key technology for bringing the metaverse to life, and with the addition of 5G, streaming experiences become smoother and connectivity can be maintained without disruption from external factors such as geographic location.
Digital assistants in the form of chatbots and virtual avatars: Digital assistants today use AI to mimic human reasoning and converse with people in natural language by understanding intent. With 5G, the speed at which speech is converted to text will improve drastically.
Education: AI and 5G are helping bring education to students’ doorsteps through virtual reality, making it nearly as efficient as a real-world classroom and allowing questions to be answered quickly and without restriction. Betty from Archie Comics attending her classes virtually is now a reality thanks to these technologies.
Healthcare: AI and 5G are enabling more accurate diagnosis of diseases, real-time monitoring, and faster treatment. This has become possible through the right use of data collection, transmission, and analysis.
Automotive: Together, AI and 5G are making vehicles smarter and reducing the risk of accidents on the road by employing data-powered safety and driving-efficiency measures in vehicles.
What I do take away from these surveys, however, is that the majority of AI experts take the prospect of very powerful AI technology seriously. It is not the case that AI researchers dismiss extremely powerful AI as mere fantasy.
The huge majority thinks that in the coming decades there is an even chance that we will see AI technology which will have a transformative impact on our world. While some have long timelines, many think it is possible that we have very little time before these technologies arrive. Across the three surveys more than half think that there is a 50% chance that a human-level AI would be developed before some point in the 2060s, a time well within the lifetime of today’s young people.
Three reasons why NLP will go mainstream in healthcare in 2023 — from healthcareitnews.com by Bill Siwicki
A natural language processing expert explains why he feels the technology’s kinks have been ironed out, its ROI has been proven and the timing is right for healthcare to take advantage of information-extraction tools.
13 tech predictions for 2023 — from enterprisersproject.com by Katie Sanders
What can you expect in the world of IT next year? Business and IT leaders share their thoughts.
Until now, AI has operated almost exclusively in the cloud. But increasingly diverse streams of data are being generated around the clock from sensors at the edge. These require real-time inference, which is leading more AI deployments to move to edge computing.
For airports, stores, hospitals and more, AI brings advanced efficiency, automation and even cost reduction, which is why edge AI adoption accelerated last year.
In 2023, expect to see a similarly challenging environment, which will drive the following edge AI trends.
Digital transformation: 5 trends to watch in 2023 — from enterprisersproject.com by Ritish Reddy
As enterprises continue to digitally transform, IT leaders must look toward the future. Expect to see these trends in 2023.
STEM education has an access issue: let’s change that.
Around the world, people believe that we need more people in STEM careers. Eighty-seven percent of people believe we need to do more to encourage and retain girls in STEM education. At the same time, barriers remain – 73% of people believe underrepresented minorities often lack equal access to STEM education.
Not The Science Type gets to the heart of access and gender inequity in STEM education and STEM fields. This four-part docuseries features four female scientists who are challenging stereotypes and confronting gender, racial and age discrimination as they rise to prominence.
Not The Science Type highlights four brilliant minds, showcasing women who break down boundaries within their fields – biology, engineering and science and technology-based applications.
While each woman has taken a different path to pursue scientific excellence, they are bound by the common experience of feeling excluded, or “not the type” in traditionally homogenous fields.
ChatGPT, Chatbots and Artificial Intelligence in Education — from ditchthattextbook.com by Matt Miller
AI just stormed into the classroom with the emergence of ChatGPT. How do we teach now that it exists? How can we use it? Here are some ideas.
Excerpt: Now, we’re wondering …
What is ChatGPT? And, more broadly, what are chatbots and AI?
How is this going to impact education?
How can I teach tomorrow knowing that this exists?
Can I use this as a tool for teaching and learning?
Should we block it through the school internet filter — or try to ban it?
The tech world is abuzz over ChatGPT, a chat bot that is said to be the most advanced ever made.
It can create poems, songs, and even computer code. It convincingly constructed a passage of text on how to remove a peanut butter sandwich from a VCR, in the voice of the King James Bible.
As a PhD microbiologist, I devised a 10-question quiz that would be appropriate as a final exam for college-level microbiology students. ChatGPT blew it away.
On the one hand, yes, ChatGPT is capable of producing prose that looks convincing. But on the other hand, what it means to be convincing depends on context. The kind of prose you might find engaging and even startling in the context of a generative encounter with an AI suddenly seems just terrible in the context of a professional essay published in a magazine such as The Atlantic. And, as Warner’s comments clarify, the writing you might find persuasive as a teacher (or marketing manager or lawyer or journalist or whatever else) might have been so by virtue of position rather than meaning: The essay was extant and competent; the report was in your inbox on time; the newspaper article communicated apparent facts that you were able to accept or reject.
These lines of demarcation—the lines between when a tool can do all of a job, some of it, or none of it—are both constantly moving and critical to watch. Because they define knowledge work and point to the future of work. We need to be teaching people how to do the kinds of knowledge work that computers can’t do well and are not likely to be able to do well in the near future. Much has been written about the economic implications of the AI revolution, some of which are problematic for the employment market. But we can put too much emphasis on that part. Learning about artificial intelligence can be a means for exploring, appreciating, and refining natural intelligence. These tools are fun. I learn from using them. Those two statements are connected.
Google is planning to create a new AI feature for its Search engine, one that would rival the recently released and controversial ChatGPT from OpenAI. The company revealed this after a recent executive meeting in which CEO Sundar Pichai and AI head Jeff Dean discussed the technology the internet company already has and could soon develop further.
Employees of the Mountain View giant were concerned that the company was falling behind the likes of OpenAI on current AI trends despite already having similar technology lying around.
And more focused on the business/vocational/corporate training worlds:
There are a lot of knowledge management, enterprise learning and enterprise search products on the market today, but what Sana believes it has struck on uniquely is a platform that combines all three to work together: a knowledge management-meets-enterprise-search-meets-e-learning platform.
Three sources briefed on OpenAI’s recent pitch to investors said the organization expects $200 million in revenue next year and $1 billion by 2024.
The forecast, first reported by Reuters, represents how some in Silicon Valley are betting the underlying technology will go far beyond splashy and sometimes flawed public demos.
“We’re going to see advances in 2023 that people two years ago would have expected in 2033. It’s going to be extremely important not just for Microsoft’s future, but for everyone’s future,” he said in an interview this week.
From DSC: I continue to think about the idea of wiping the slate completely clean. If we were to design a lifelong learning ecosystem, what would it look like? How could we apply Design Thinking to this new slate/canvas?
Perhaps we could start by purposefully creating more pathways to weave in and out of the various silos — and then come back into the “silos” with new ideas, knowledge, and experiences:
PreK-12
Higher education
Vocational programs
Business and the corporate world
Government
Communities of practice
Other
Integrate apprenticeships, jobs, sabbaticals, rest, purpose, passions, intrinsic motivations, and more into this lifelong learning ecosystem. Take one’s new learning back to one’s former silo(s) and improve things therein. Such a design would help keep curricula and learning/training environments up-to-date and relevant.
It would also allow people more pathways through their careers — and to keep learning while doing real-world projects. It would help people — and institutions — grow in many ways.
Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.
The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.
In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.
Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?
And what does it mean for professors if the answer to those questions is “yes”?
…
Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.
“Academia really has to look at itself in the mirror and decide what it’s going to be,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi, who has criticized the “moral panic” he has seen in response to ChatGPT. “Is it going to be more concerned with compliance and policing behaviors and trying to get out in front of cheating, without any evidence to support whether or not that’s actually going to happen? Or does it want to think about trust in students as its first reaction and building that trust into its response and its pedagogy?”
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.
it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.
1/Large language models like Galactica and ChatGPT can spout nonsense in a confident, authoritative tone. This overconfidence – which reflects the data they’re trained on – makes them more likely to mislead.
The thing is, a good toy has a huge advantage: People love to play with it, and the more they do, the quicker its designers can make it into something more. People are documenting their experiences with ChatGPT on Twitter, looking like giddy kids experimenting with something they’re not even sure they should be allowed to have. There’s humor, discovery and a game of figuring out the limitations of the system.
And on the legal side of things:
In the legal education context, I’ve been playing around with generating fact patterns and short documents to use in exercises.
From DSC: “How to retain talented staff members should be high on every administrator’s 2023 agenda.” This highlight from a recent email from The Chronicle of Higher Education linked to:
How to Retain Your Best Staff Members — from chronicle.com by Meredith Davis
Four tips from a former student-affairs administrator on how to improve work culture on campus.
Unfortunately, this important item wasn’t high on the agenda in the majority of the years that I was working in higher education. I often thought that folks in higher education could learn from the corporate world in this regard, although even the corporate world hasn’t been doing a good job these days of treating its people well. That wasn’t the case in my experience at Baxter Healthcare, Kraft Foods, and Wells Fargo years ago.
Perhaps we should have more people “crossing over” between the silos that we seem to have established. That is, a person could work within higher education for 2-3 years, move over to a corporate environment/government/vocational space/other, and then work a few years there before coming back to higher education in a different capacity. Perhaps more pathways and tighter collaboration could exist in this manner.
Hmmm…design thinking…there’s got to be something here…
From DSC: I was watching a sermon the other day, and I’m always amazed when the pastor doesn’t need to read their notes (or hardly ever refers to them). And they can still do this in a much longer sermon too. Not me man.
It got me wondering about the idea of having a teleprompter on our future Augmented Reality (AR) glasses and/or on our Virtual Reality (VR) headsets. Or perhaps such functionality will be provided on our mobile devices as well (i.e., our smartphones, tablets, laptops, other) via cloud-based applications.
One could see one’s presentation, sermon, main points for a meeting, the charges being brought against a defendant, etc., and the system would scroll down as you said the words (via Natural Language Processing (NLP)). If you went off script, the system would stop scrolling, and you might need to scroll down manually or simply pick up where you left off.
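Here is a minimal sketch of how that script-following behavior might work, assuming the spoken words are already being recognized by some speech-to-text service; the ScriptFollower class and its simple matching rule are purely illustrative, not a description of any existing product.

```python
# A minimal sketch of the script-following idea above: advance a "teleprompter"
# position as recognized speech matches the script, and pause when the speaker
# goes off script. The speech-recognition feed itself is assumed to come from
# elsewhere (e.g., an AR headset's microphone plus a speech-to-text service).

def normalize(word: str) -> str:
    """Lowercase a word and strip punctuation so matching is forgiving."""
    return "".join(ch for ch in word.lower() if ch.isalnum())

class ScriptFollower:
    def __init__(self, script_text: str, lookahead: int = 8):
        self.words = [normalize(w) for w in script_text.split()]
        self.position = 0           # index of the next expected script word
        self.lookahead = lookahead  # how far ahead to search for a match

    def hear(self, spoken_word: str) -> int:
        """Consume one recognized word and return the current script position.

        If the word appears within the next few script words, jump the
        position past it (the display would scroll). If not, the position
        stays put -- the speaker has gone off script, as described above.
        """
        target = normalize(spoken_word)
        window = self.words[self.position : self.position + self.lookahead]
        if target in window:
            self.position += window.index(target) + 1
        return self.position

# Example usage with a toy script and a made-up stream of recognized words:
follower = ScriptFollower("Welcome everyone. Today we will talk about grace and gratitude.")
for word in ["welcome", "everyone", "by", "the", "way", "today", "we"]:
    pos = follower.hear(word)
    print(word, "->", " ".join(follower.words[:pos]))
```

In a real system, the position would drive how far the display scrolls, and the matching would need to be far more forgiving of speech-recognition errors, but the core loop would look something like this.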
For that matter, I suppose a faculty member could turn on and off a feed for an AI-based stream of content on where a topic is in the textbook. Or a CEO or University President could get prompted to refer to a particular section of the Strategic Plan. Hmmm…I don’t know…it might be too much cognitive load/overload…I’d have to try it out.
And/or perhaps this is a feature in our future videoconferencing applications.
But I just wanted to throw these ideas out there in case someone wanted to run with one or more of them.
Along these lines, see:
Tell me you’re not going to want a pair of #AR glasses?
One of the best decisions I’ve ever made as a prof is to start building my classes to start with 1 week of onboarding followed by just 12 weeks of content. Last 2 weeks are just catchup and reassessment. Course is basically over at Thanksgiving.
There’s a certain feeling that happens when a new technology adjusts your thinking about computing. Google did it. Firefox did it. AWS did it. iPhone did it. OpenAI is doing it with ChatGPT.
In 2023, businesses will realise that, in order to get out of FAQ Land, they need to synchronise business systems together to deliver personalised transactional experiences for customers.
“We finally have the technologies to do all the things we imagined 10 years ago.”