“…what if generative AI could provide every instructor with a genuine teaching assistant – a teaching assistant that actually assisted instructors with their teaching?”
For my cryptography course, Mollick’s first option would probably mean throwing out all my existing reading questions. My intent with these questions was noble: to guide students to the big questions and debates in the field. But those are exactly the kinds of questions for which AI can write decent answers. Maybe the AI tools would fare worse in a more advanced course with very specialized readings, but in my intro to cryptography course, they handle my existing reading questions with ease.
What about option two? I think one version of this would be to do away with the reading response assignment altogether.
Generative AI imagines new protein structures — from news.mit.edu by Rachel Gordon; resource from Sunday Signal
MIT researchers develop “FrameDiff,” a computational tool that uses generative AI to craft new protein structures, with the aim of accelerating drug development and improving gene therapy.
Ready to Sing Elvis Karaoke … as Elvis? The Weird Rise of AI Music — from rollingstone.com by Brian Hiatt; resource from Misha da Vinci
From voice-cloning wars to looming copyright disputes to a potential flood of nonhuman music on streaming, AI is already a musical battleground.
On giving AI eyes and ears — from oneusefulthing.org by Ethan Mollick
AI can listen and see, with bigger implications than we might realize.
Excerpt:
But even this is just the beginning, and new modes of using AI are appearing, which further increases their capabilities. I want to show you some examples of this emerging world, which I think will soon introduce a new wave of AI use cases, and accompanying disruption.
We need to recognize that these capabilities will continue to grow, and AI will be able to play a more active role in the real world by observing and listening. The implications are likely to be profound, and we should start thinking through both the huge benefits and major concerns today.
Even though generative AI is a new thing, it doesn’t change why students cheat. They’ve always cheated for the same reason: They don’t find the work meaningful, and they don’t think they can achieve it to their satisfaction. So we need to design assessments that students find meaning in.
Tricia Bertram Gallant
Caught off guard by AI — from chronicle.com by Beth McMurtrie and Beckie Supiano
Professors scrambled to react to ChatGPT this spring — and started planning for the fall.
Excerpt:
Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate?
…OpenAI built tool use right into the GPT API with an update called function calling. It’s a little like a child’s ability to ask their parents to help them with a task that they know they can’t do on their own. Except in this case, instead of parents, GPT can call out to external code, databases, or other APIs when it needs to.
Each function in function calling represents a tool that a GPT model can use when necessary, and GPT gets to decide which ones it wants to use and when. This instantly upgrades GPT capabilities—not because it can now do every task perfectly—but because it now knows how to ask for what it wants and get it.
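To make the mechanics a bit more concrete, here is a minimal sketch of that request/response loop using the ChatCompletion interface that shipped with the June 2023 function-calling update (the OpenAI Python SDK has since changed considerably). The course-lookup function, its fields, and the pinned model name are illustrative assumptions, not anything described in the excerpt above.

```python
import json
import os

import openai  # legacy (pre-1.0) SDK, matching the June 2023 function-calling release

openai.api_key = os.getenv("OPENAI_API_KEY")


# A hypothetical "tool" the model may ask us to run on its behalf.
def get_course_info(course_id: str) -> str:
    # In a real system this might query a course catalog or an SIS API.
    return json.dumps({"course_id": course_id, "title": "Intro to Cryptography", "seats_left": 12})


functions = [{
    "name": "get_course_info",
    "description": "Look up details about a course by its identifier.",
    "parameters": {
        "type": "object",
        "properties": {"course_id": {"type": "string"}},
        "required": ["course_id"],
    },
}]

messages = [{"role": "user", "content": "Are there any seats left in CRYPT-101?"}]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",  # the model decides whether it needs the tool
)
msg = response["choices"][0]["message"]

if msg.get("function_call"):
    # The model "asked for help": run the tool and hand the result back to it.
    args = json.loads(msg["function_call"]["arguments"])
    messages.append(msg)
    messages.append({"role": "function", "name": "get_course_info",
                     "content": get_course_info(**args)})
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
else:
    print(msg["content"])
```

The design point from the excerpt shows up in the `function_call="auto"` flag: the model is never forced to use a tool; it simply gains a structured way to ask for one when it decides it needs it.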
How ChatGPT can help disrupt assessment overload — from timeshighereducation.com by David Carless
Advances in AI are not necessarily the enemy – in fact, they should prompt long overdue consideration of assessment types and frequency, says David Carless.
Excerpt:
Reducing the assessment burden could support trust in students as individuals wanting to produce worthwhile, original work. Indeed, students can be co-opted as partners in designing their own assessment tasks, so they can produce something meaningful to them.
A strategic reduction in quantity of assessment would also facilitate a refocusing of assessment priorities on deep understanding more than just performance and carries potential to enhance feedback processes.
If we were to tackle assessment overload in these ways, it opens up various possibilities. Most significantly there is potential to revitalise feedback so that it becomes a core part of a learning cycle rather than an adjunct at its end. End-of-semester, product-oriented feedback, which comes after grades have already been awarded, fails to encourage the iterative loops and spirals typical of productive learning.
Since AI in education has been moving at the speed of light, we built this AI Tools in Education database to keep track of the most recent AI tools in education and the changes that are happening every day. This database is intended to be a community resource for educators, researchers, students, and other edtech specialists looking to stay up to date. This is a living document, so be sure to come back for regular updates.
These claims conjure up the rosiest of images: human resource departments and their robot buddies solving discrimination in workplace hiring. It seems plausible, in theory, that AI could root out unconscious bias, but a growing body of research shows the opposite may be more likely.
…
Companies’ use of AI didn’t come out of nowhere: For example, automated applicant tracking systems have been used in hiring for decades. That means if you’ve applied for a job, your resume and cover letter were likely scanned by an automated system. You probably heard from a chatbot at some point in the process. Your interview might have been automatically scheduled and later even assessed by AI.
From DSC:
Here was my reflection on this:
DC: Along these lines, I wonder if Applicant Tracking Systems cause us to become like typecast actors and actresses — only thought of for certain roles. Pigeonholed.
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
In June, ResumeBuilder.com surveyed more than 1,000 employees who are involved in hiring processes at their workplaces to find out about their companies’ use of AI interviews.
The results:
43% of companies already have or plan to adopt AI interviews by 2024
Two-thirds of this group believe AI interviews will increase hiring efficiency
15% say that AI will be used to make decisions on candidates without any human input
More than half believe AI will eventually replace human hiring managers
Watch OpenAI CEO Sam Altman on the Future of AI — from bloomberg.com
Sam Altman, CEO & Co-Founder, OpenAI discusses the explosive rise of OpenAI and its products and what an AI-laced future can look like with Bloomberg’s Emily Chang at the Bloomberg Technology Summit.
The implementation of generative AI within these products will dramatically improve educators’ ability to deliver personalized learning to students at scale by enabling the application of personalized assessments and learning pathways based on individual student needs and learning goals. K-12 educators will also benefit from access to OpenAI technology…
Why it matters: The best AI assistants will be the ones that require the least prompting. They’ll get to know who you are, what you need, and your modus operandi. Profiles are a good starting point, but we believe the game-changer will be larger context windows (that’s nerd-speak for the amount of context ChatGPT can handle).
From DSC: And how about taking this a step further and remembering — or being able to access — our constantly updated Cloud-Based Learning Profiles?
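Since a context window is measured in tokens rather than characters or pages, one way to get a feel for how much of it a learner profile or any other document would occupy is simply to count tokens. Here is a minimal sketch using the tiktoken library, assuming it is installed, that the file name is hypothetical, and that the 8,192-token base GPT-4 window of mid-2023 is the point of comparison:

```python
import tiktoken

# Base GPT-4 window circa mid-2023; larger windows change the math, not the method.
CONTEXT_WINDOW = 8192

enc = tiktoken.encoding_for_model("gpt-4")

# Hypothetical export of a cloud-based learner profile.
profile_text = open("learner_profile.txt", encoding="utf-8").read()

tokens = enc.encode(profile_text)
print(f"{len(tokens):,} tokens, about {len(tokens) / CONTEXT_WINDOW:.0%} of the context window")
```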
My hypothesis and research suggest that as bar associations and the ABA begin to recognize the ongoing systemic issues of high-cost legal education, growing legal deserts (where no lawyer serves a given population), pervasive access-to-justice issues, and a public that is already weary of the legal system, alternative options that are already in play might become more supported.
What might that look like?
The combination of AI-assisted education with traditional legal apprenticeships has the potential to create a rich, flexible, and engaging learning environment. Here are three scenarios that might illustrate what such a combination could look like:
Scenario One – Personalized Curriculum Development
Scenario Two – On-Demand Tutoring and Mentoring
Scenario Three – AI-assisted Peer Networks and Collaborative Learning
We know that there are challenges – a threat to human jobs, the potential implications for cyber security and data theft, or perhaps even an existential threat to humanity as a whole. But we certainly don’t yet have a full understanding of all of the implications. In fact, a World Economic Forum report recently stated that organizations “may currently underappreciate AI-related risks,” with just four percent of leaders considering the risk level to be “significant.”
A survey carried out by analysts Baker McKenzie concluded that many C-level leaders are over-confident in their assessments of organizational preparedness in relation to AI. In particular, it exposed concerns about the potential implications of biased data when used to make HR decisions.
AI & lawyer training: How law firms can embrace hybrid learning & development — from thomsonreuters.com
A big part of law firms’ successful adaptation to the increased use of ChatGPT and other forms of generative AI may depend upon how firmly they embrace online learning & development tools designed for hybrid work environments.
Excerpt:
As law firms move forward in using advanced artificial intelligence such as ChatGPT and other forms of generative AI, their success may hinge upon how they approach lawyer training and development and what tools they enlist for the process.
One of the tools that some law firms use to deliver a new, multi-modal learning environment is an online, video-based learning platform, Hotshot, that delivers more than 250 on-demand courses on corporate, litigation, and business skills.
Ian Nelson, co-founder of Hotshot, says he has seen a dramatic change in how law firms are approaching learning & development (L&D) in the decade or so that Hotshot has been active. He believes the biggest change is that 10 years ago, firms hadn’t yet embraced the need to focus on training and development.
From DSC: Heads up, law schools. Are you seeing/hearing this!?
Are we moving more towards a lifelong learning model within law schools?
If not, shouldn’t we be doing that?
Are LLM programs expanding quickly enough? Is more needed?
Last night, Jensen Huang of NVIDIA gave his first live keynote in four years.
The most show-stopping moment from the event was when he showed off the real-time AI in video games. A human speaks, the NPC responds, in real time and the dialogue was generated with AI on the fly. pic.twitter.com/TDoUM1zSiy
Bill Gates says AI is poised to destroy search engines and Amazon — from futurism.com by Victor Tangermann
Who will win the AI [competition]? (DSC: I substituted the word competition here, as that’s what it is. It’s not a war; it’s part of America’s way of doing business.)
“Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” Gates said during a Goldman Sachs event on AI in San Francisco this week, as quoted by CNBC.
These AI assistants could “read the stuff you don’t have time to read,” he said, allowing users to get to information without having to use a search engine like Google.
The online learning platform edX introduced two new tools on Friday based on OpenAI’s ChatGPT technology: an edX plugin for ChatGPT and a learning assistant embedded in the edX platform, called Xpert.
According to the company, its plugin will enable ChatGPT Plus subscribers to discover educational programs and explore learning content such as videos and quizzes across edX’s library of 4,200 courses.
Bing is now the default search for ChatGPT — from theverge.com by Tom Warren; via superhuman.beehiiv.com
The close partnership between Microsoft and OpenAI leads to plug-in interoperability and search defaults.
Excerpt:
OpenAI will start using Bing as the default search experience for ChatGPT. The new search functionality will be rolling out to ChatGPT Plus users today and will be enabled for all free ChatGPT users soon through a plug-in in ChatGPT.
Students with mobility challenges may find it easier to use generative AI tools — such as ChatGPT or Elicit — to help them conduct research if that means they can avoid a trip to the library.
Students who have trouble navigating conversations — such as those along the autism spectrum — could use these tools for “social scripting.” In that scenario, they might ask ChatGPT to give them three ways to start a conversation with classmates about a group project.
Students who have trouble organizing their thoughts might benefit from asking a generative AI tool to suggest an opening paragraph for an essay they’re working on — not to plagiarize, but to help them get over “the terror of the blank page,” says Karen Costa, a faculty-development facilitator who, among other things, focuses on teaching, learning, and living with ADHD. “AI can help build momentum.”
ChatGPT is good at productive repetition. That is a practice most teachers use anyway to reinforce learning. But AI can take that to the next level by allowing students who have trouble processing information to repeatedly generate examples, definitions, questions, and scenarios of concepts they are learning.
It’s not all on you to figure this out and have all the answers. Partner with your students and explore this together.
The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI). Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee on Tuesday about the possibilities – and pitfalls – of the new technology. In a matter of months, several AI models have entered the market. Mr Altman said a new agency should be formed to license AI companies.
Artificial intelligence was a focus on Capitol Hill Tuesday. Many believe AI could revolutionize, and perhaps upend, considerable aspects of our lives. At a Senate hearing, some said AI could be as momentous as the industrial revolution and others warned it’s akin to developing the atomic bomb. William Brangham discussed that with Gary Marcus, who was one of those who testified before the Senate.
We’re rolling out web browsing and Plugins to all ChatGPT Plus users over the next week! Moving from alpha to beta, they allow ChatGPT to access the internet and to use 70+ third-party plugins. https://t.co/t4syFUj0fL pic.twitter.com/Mw9FMpKq91
Are you ready for the Age of Intelligence? — from linusekenstam.substack.com by Linus Ekenstam
Let me walk you through my current thoughts on where we are, and where we are going.
From DSC: I post this one to relay the exponential pace of change that Linus also thinks we’ve entered, and to present a knowledgeable person’s perspectives on the future.
Catastrophe / Eucatastrophe — from oneusefulthing.org by Ethan Mollick
We have more agency over the future of AI than we think.
Excerpt (emphasis DSC):
Every organizational leader and manager has agency over what they decide to do with AI, just as every teacher and school administrator has agency over how AI will be used in their classrooms. So we need to be having very pragmatic discussions about AI, and we need to have them right now: What do we want our world to look like?
Also relevant/see:
That wasn’t Google I/O — it was Google AI — from technologyreview.com by Mat Honan
If you thought generative AI was a big deal last year, wait until you see what it looks like in products already used by billions.
Google is in trouble.
I got early ‘Alpha’ access to GPT-4 with browsing and ran some tests.
This all means that a time may be coming when companies need to compensate star employees for their input to AI tools rather than just their output, which ultimately may not look much different from that of their AI-assisted colleagues.
“It wouldn’t be far-fetched for them to put even more of a premium on those people because now that kind of skill gets amplified and multiplied throughout the organization,” said Erik Brynjolfsson, a Stanford professor and one of the study’s authors. “Now that top worker could change the whole organization.”
Of course, there’s a risk that companies won’t heed that advice. If AI levels performance, some executives may flatten the pay scale accordingly. Businesses would then potentially save on costs — but they would also risk losing their top performers, who wouldn’t be properly compensated for the true value of their contributions under this system.
WASHINGTON, April 24 – The U.S. Supreme Court on Monday declined to hear a challenge by computer scientist Stephen Thaler to the U.S. Patent and Trademark Office’s refusal to issue patents for inventions his artificial intelligence system created.
The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated.
Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today.
According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.
***
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
What Is Agent Assist? — from blogs.nvidia.com
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across retail, telecom and other industries conduct conversations with customers.
Excerpt:
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across telecom, retail and other industries conduct conversations with customers.
It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.
From DSC: Is this type of thing going to provide a learning assistant/agent as well?
AI chatbots like ChatGPT, Bing, and Bard are excellent at crafting sentences that sound like human writing. But they often present falsehoods as facts and have inconsistent logic, and that can be hard to spot.
One way around this problem, a new study suggests, is to change the way the AI presents information. Getting users to engage more actively with the chatbot’s statements might help them think more critically about that content.
In the most recent update, Adobe is now using AI to Denoise, Enhance, and create Super Resolution (2x the file size of the original photo). Click here to read Adobe’s post, and below are photos of how I used the new AI Denoise on a photo. The big trick is that photos have to be shot in RAW.
A few current categories of AI in Edtech particularly jump out:
Teacher Productivity and Joy: Tools to make educators’ lives easier (and more fun?) by removing some of the more rote tasks of teaching, like lesson planning (we counted at least 8 different tools for lesson planning), resource curation and data collection.
Personalization and Learning Delivery: Tools to tailor instruction to the particular interests, learning preferences and preferred media consumption of students. This includes tools that convert text to video, video to text, text to comic books, Youtube to notes, and many more.
Study and Course Creation Tools: Tools for learners to automatically make quizzes, flashcards, notes or summaries of material, or even to automatically create full courses from a search term.
AI Tutors, Chatbots and Teachers: There will be no shortage of conversational AI “copilots” (which may take many guises) to support students in almost any learning context. Many Edtech companies launched their own during the conference. Possible differentiators here could be personality, safety, privacy, access to a proprietary or specific data set, or bots built on proprietary LLMs.
Simplifying Complex Processes: One of the most inspiring conversations of the conference for me was with Tiffany Green, founder of Uprooted Academy, about how AI can and should be used to remove bureaucratic barriers to college for underrepresented students (for example, used to autofill FAFSA forms, College Applications, to search for schools and access materials, etc). This is not the only complex bureaucratic process in education.
Educational LLMs: The race is on to create usable large language models for education that are safe, private, appropriate and classroom-ready. Merlyn Mind is working on this, and companies that make LLMs are sprouting up in other sectors…
This week I spent a few days at the ASU/GSV conference and ran into 7,000 educators, entrepreneurs, and corporate training people who had gone CRAZY for AI.
No, I’m not kidding. This community, which is made up of people like training managers, community college leaders, educators, and policymakers, is absolutely freaked out about ChatGPT, Large Language Models, and all sorts of issues with AI. Now don’t get me wrong: I’m a huge fan of this. But the frenzy is unprecedented: this is bigger than the excitement at the launch of the iPhone.
Second, the L&D market is about to get disrupted like never before. I had two interactive sessions with about 200 L&D leaders and I essentially heard the same thing over and over. What is going to happen to our jobs when these Generative AI tools start automatically building content, assessments, teaching guides, rubrics, videos, and simulations in seconds?
The answer is pretty clear: you’re going to get disrupted. I’m not saying that L&D teams need to worry about their careers, but it’s very clear to me they’re going to have to swim upstream in a big hurry. As with all new technologies, it’s time for learning leaders to get to know these tools, understand how they work, and start to experiment with them as fast as they can.
Speaking of the ASU+GSV Summit, see this posting from Michael Moe:
Last week, the 14th annual ASU+GSV Summit hosted over 7,000 leaders from 70+ companies as well as over 900 of the world’s most innovative EdTech companies. Below are some of our favorite speeches from this year’s Summit…
High-quality tutoring is one of the most effective educational interventions we have – but we need both humans and technology for it to work. In a standing-room-only session, GSE Professor Susanna Loeb, a faculty lead at the Stanford Accelerator for Learning, spoke alongside school district superintendents on the value of high-impact tutoring. The most important factors in effective tutoring, she said, are (1) the tutor has data on specific areas where the student needs support, (2) the tutor has high-quality materials and training, and (3) there is a positive, trusting relationship between the tutor and student. New technologies, including AI, can make the first and second elements much easier – but they will never be able to replace human adults in the relational piece, which is crucial to student engagement and motivation.
ChatGPT, Bing Chat, Google’s Bard—AI is infiltrating the lives of billions.
The 1% who understand it will run the world.
Here’s a list of key terms to jumpstart your learning:
Being “good at prompting” is a temporary state of affairs. The current AI systems are already very good at figuring out your intent, and they are getting better. Prompting is not going to be that important for that much longer. In fact, it already isn’t in GPT-4 and Bing. If you want to do something with AI, just ask it to help you do the thing. “I want to write a novel, what do you need to know to help me?” will get you surprisingly far.
…
The best way to use AI systems is not to craft the perfect prompt, but rather to use it interactively. Try asking for something. Then ask the AI to modify or adjust its output. Work with the AI, rather than trying to issue a single command that does everything you want. The more you experiment, the better off you are. Just use the AI a lot, and it will make a big difference – a lesson my class learned as they worked with the AI to create essays.
From DSC: Agreed –> “Being “good at prompting” is a temporary state of affairs.” The User Interfaces that are/will be appearing will help greatly in this regard.
From DSC: Bizarre…at least for me in late April of 2023:
FaceTiming live with AI… This app came across the @ElunaAI Discord and I was very impressed with its responsiveness, natural expression and language, etc…
Feels like the beginning of another massive wave in consumer AI products.
The rise of AI-generated music has ignited legal and ethical debates, with record labels invoking copyright law to remove AI-generated songs from platforms like YouTube.
Tech companies like Google face a conundrum: should they take down AI-generated content, and if so, on what grounds?
Some artists, like Grimes, are embracing the change, proposing new revenue-sharing models and utilizing blockchain-based smart contracts for royalties.
The future of AI-generated music presents both challenges and opportunities, with the potential to create new platforms and genres, democratize the industry, and redefine artist compensation.
The Need for AI PD — from techlearning.com by Erik Ofgang
Educators need training on how to effectively incorporate artificial intelligence into their teaching practice, says Lance Key, an award-winning educator.
“School never was fun for me,” he says, hoping that as an educator he could change that with his students. “I wanted to make learning fun.” This ‘learning should be fun’ philosophy is at the heart of the approach he advises educators take when it comes to AI.
At its 11th annual conference in 2023, educational company Coursera announced it is adding ChatGPT-powered interactive ed tech tools to its learning platform, including a generative AI coach for students and an AI course-building tool for teachers. It will also add machine learning-powered translation, expanded VR immersive learning experiences, and more.
Coursera Coach will give learners a ChatGPT virtual coach to answer questions, give feedback, summarize video lectures and other materials, give career advice, and prepare them for job interviews. This feature will be available in the coming months.
From DSC: Yes…it will be very interesting to see how tools and platforms interact from this time forth. The term “integration” will take a massive step forward, at least in my mind.
A New Era for Education — from linkedin.com by Amit Sevak, CEO of ETS, and Timothy Knowles, President of the Carnegie Foundation for the Advancement of Teaching
Excerpt (emphasis DSC):
It’s not every day you get to announce a revolution in your sector. But today, we’re doing exactly that. Together, we are setting out to overturn 117 years of educational tradition. … The fundamental assumption [of the Carnegie Unit] is that time spent in a classroom equals learning. This formula has the virtue of simplicity. Unfortunately, a century of research tells us that it’s woefully inadequate.
From DSC: It’s more than interesting to think that the Carnegie Unit has outlived its usefulness and is breaking apart. In fact, the thought is very profound.
If that turns out to be the case, the ramifications will be enormous and we will have the opportunity to radically reinvent/rethink/redesign what our lifelong learning ecosystems will look like and provide.
So I appreciate what Amit and Timothy are saying here and I appreciate their relaying what the new paradigm might look like. It goes with the idea of using design thinking to rethink how we build/reinvent our learning ecosystems. They assert:
It’s time to change the paradigm. That’s why ETS and the Carnegie Foundation have come together to design a new future of assessment.
Whereas the Carnegie Unit measures seat time, the new paradigm will measure skills — with a focus on the ones we know are most important for success in career and in life.
Whereas the Carnegie Unit never leaves the classroom, the new paradigm will capture learning wherever it takes place — whether that is in after-school activities, during a work-experience placement, in an internship, on an apprenticeship, and so on.
Whereas the Carnegie Unit offers only one data point — pass or fail — the new paradigm will generate insights throughout the learning process, the better to guide students, families, educators, and policymakers.
I could see this type of information being funneled into peoples’ cloud-based learner profiles — which we as individuals will own and determine who else can access them. I diagrammed this back in January of 2017 using blockchain as the underlying technology. That may or may not turn out to be the case. But the concept will still hold I think — regardless of the underlying technology(ies).
For example, we are seeing a lot more articles regarding things like Comprehensive Learner Records (CLR) or Learning and Employment Records (LER; example here), and similar items.
Speaking of reinventing our learning ecosystems, also see:
Learning happens throughout life and is not isolated to the K-12 or higher education sectors. Yet, often, validations of learning only happen in these specific areas. The system of evaluation based on courses, grades, and credit serves as a poor proxy for communicating skills given the variation in course content, grade inflation, and inclusion of participation and extra credit within course grades.
Credentialed learning provides a way to accurately document human capability for all learners throughout their life. A lifetime credentialed learning ecosystem provides better granularity around learning, better documentation of the learning, and more relevance for both the credential recipient and reviewer. This improves the match between higher education and/or employment with the individual, while also providing a more clear and accurate lifetime learning pathway.
With a fully-credentialed system, individuals can own well-documented evidence of a lifetime of learning and choose what and when to share this data. This technology enables every learner to have more opportunities for finding the best career match without today’s existing barriers around cost, access, and proxies.
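To make that idea concrete, here is one minimal, entirely hypothetical sketch of what a learner-owned credential record might hold. It is not based on any existing CLR/LER schema, and the field names are illustrative only:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class Credential:
    """One verifiable record of something a person learned, wherever the learning happened."""
    skill: str            # e.g., "contract drafting" or "Python data analysis"
    issuer: str           # a school, employer, apprenticeship program, bootcamp, etc.
    evidence_url: str     # link to the artifact or assessment behind the claim
    awarded_on: date
    shared_with: List[str] = field(default_factory=list)  # reviewers the learner has granted access


@dataclass
class LearnerProfile:
    learner_id: str
    credentials: List[Credential] = field(default_factory=list)

    def share(self, index: int, reviewer: str) -> None:
        # The learner, not an institution, decides who can see each record.
        self.credentials[index].shared_with.append(reviewer)
```

Whether such records live on a blockchain, in a cloud service, or somewhere else entirely is an implementation detail; the point of the sketch is that granularity, evidence, and learner-controlled sharing are all first-class fields.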
Addendum on 4/28/23 — speaking of credentials:
First Rung — from the-job.beehiiv.com by Paul Fain
New research shows stacking credentials pays off for low-income learners.
Stacking credentials pays off for many low-income students, new research finds, but only if learners move up the education ladder. Also, Kansas is hoping a new grant program will attract more companies to participate in microinternships.
GPT-4 passes basically every exam. And doesn’t just pass…
The Bar Exam: 90%
LSAT: 88%
GRE Quantitative: 80%, Verbal: 99%
Every AP, the SAT… pic.twitter.com/zQW3k6uM6Z
Sal Khan walks through Khan Academy’s GPT-4 integration (not generally available yet). Folks can join the waitlist at Khanacademy.org. To learn more about Khanmigo, visit: khanacademy.org/khan-labs
We believe that AI has the potential to transform learning in a positive way, but we are also keenly aware of the risks. To test the possibilities, we’re inviting our district partners to opt in to Khan Labs, a new space for testing learning technology. We want to ensure that our work always puts the needs of students and teachers first, and we are focused on ensuring that the benefits of AI are shared equally across society. In addition to teachers and students, we’re inviting the general public to join a waitlist to test Khanmigo. Teachers, students and donors will be our partners on this learning journey, helping us test AI to see if we can harness it as a learning tool for all.
GPT-4 has arrived. It will blow ChatGPT out of the water. — from washingtonpost.com by Drew Harwell and Nitasha Tiku
The long-awaited tool, which can describe images in words, marks a huge leap forward for AI power — and another major shift for ethical norms.
For example, [GPT-4] passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%.
ChatGPT as a teaching tool, not a cheating tool — from timeshighereducation.com by Jennifer Rose
How to use ChatGPT as a tool to spur students’ inner feedback and thus aid their learning and skills development
Excerpt:
Use ChatGPT to spur students’ inner feedback
One way that ChatGPT answers can be used in class is by asking students to compare what they have written with a ChatGPT answer. This draws on David Nicol’s work on making inner feedback explicit and using comparative judgement. His work demonstrates that in writing down answers to comparative questions, students can produce high-quality feedback for themselves that is instant and actionable. Applying this to a ChatGPT answer, the following questions could be used:
Which is better, the ChatGPT response or yours? Why?
What two points can you learn from the ChatGPT response that will help you improve your work?
What can you add from your answer to improve the ChatGPT answer?
How could the assignment question set be improved to allow the student to demonstrate higher-order skills such as critical thinking?
How can you use what you have learned to stay ahead of AI and produce higher-quality work than ChatGPT?
Artificial Intelligence is now taking the world of learning by storm. Here are 5 ways you can successfully incorporate AI in online learning.
Let’s say you’re training sales reps on handling different customer personalities. You can use this technology to diversify your branching scenarios so that trainees can also speak and not only type. This way, not only will the training become more realistic, but you’ll also be able to assess and work on additional elements, such as tone of voice, volume, speech tempo, etc.