The theme of the day was Human Connection vs. Innovative Technology. I see this a lot at conferences: setting up the human connection (social) against the machine (AI). I think this is ALL wrong. It is, and always has been, a dialectic: human connection (social) PLUS the machine. Everyone there had a smartphone, and most use it for work, comms, and social media. The binary between human and tech disappeared long ago.
NVIDIA Digital Human Technologies Bring AI Characters to Life
Leading AI Developers Use Suite of NVIDIA Technologies to Create Lifelike Avatars and Dynamic Characters for Everything From Games to Healthcare, Financial Services and Retail Applications
Today is the beginning of our moonshot to solve embodied AGI in the physical world. I’m so excited to announce Project GR00T, our new initiative to create a general-purpose foundation model for humanoid robot learning.
When it comes to generative AI at school and work, Gen Z says: Bring it on.
Why it matters: While some workers are fearful or ambivalent about how ChatGPT, DALL-E and their ilk will affect their jobs, many college students and newly minted grads think it can give them a career edge.
So-called “AI natives” who are studying the technology in school may have a leg up on older “digital natives” — the same way that “digital native” millennials smugly bested their “digital immigrant” elders.
From DSC: As you can see from the above items, Mr. David Goodrich, a great human being and a fellow Instructional Designer, posed a thoughtful comment and question regarding the source of my hope that AI (and other forms of legaltech) could significantly expand access to justice here in America. Our civil justice system has some serious problems, involving such areas as housing, employment, healthcare, education, families, and more.
I’d like to respond to that question here.
First of all, I completely get what David is saying. I, too, have serious doubts that our horrible access to justice (#A2J) situation will get better. Why? Because:
Many people working within the legal field like it this way, as they are all but assured victory in most of the civil lawsuits out there.
The Bar Associations of most of the states do not support changes that would threaten their incomes/livelihoods. This is especially true in California and Florida.
The legal field, for the most part, is not composed of highly innovative people who make things happen for the benefit of others. For example, the American Bar Association is 20+ years behind in terms of providing the level of online-based learning opportunities that it should be offering. It very tightly controls how legal education is delivered in the U.S.
Here are several areas that provide me with hope for our future
There are innovative individuals out there fighting for change.
And though some of these individuals don’t reside in the United States, their work still impacts many here in America. For examples, see:
The AAAi Lab: A web center supporting American Arbitration Association® (AAA) users, arbitrators, in-house counsel and law firms with policy guidance, educational webinars and tools for embracing generative AI in alternative dispute resolution.
There are innovative new tools and technologies out there such as:
Artificial Intelligence (AI) and Machine Learning (ML)
AI and machine learning remain pivotal in legaltech, especially for in-house lawyers who deal with vast quantities of contracts and complex legal matters. In 2024, these technologies will be integral for legal research, contract review, and the drafting of legal documents. Statistics from the Tech & the Law 2023 Report state that more than three in five corporate legal departments (61%) have adopted generative AI in some capacity, with 7% actively using generative AI in their day-to-day work. With constant improvements to LLMs (large language models) by the big players, i.e. OpenAI, Google, and Microsoft (via OpenAI), 2024 will see more opportunities open up and efficiencies gained for legal teams. (Source)
From drafting contracts to answering legal questions and summarising legal issues, AI is revolutionising the legal profession and although viewed with a sceptical eye by some law firms, is generally perceived to be capable of bringing huge benefits. (Source)
Due to COVID-19, virtual courtrooms were set up, and just as with virtual/online-based learning within higher education, many judges, litigants, lawyers, and staff appreciated the time savings and productivity gains. Along these lines, see Richard Susskind’s work. [Richard] predicts a world of online courts, AI-based global legal businesses, disruptive legal technologies, liberalized markets, commoditization, alternative sourcing, simulated practice on the metaverse, and many new legal jobs. (Source)
There are innovative states out there fighting for change. For examples:
Utah in 2020 launched a pilot program that suspended ethics rules to allow for non-lawyer ownership of legal services providers and let non-lawyers apply for a waiver to offer certain legal services. (Source)
Arizona in 2021 changed its regulatory rules to allow for non-lawyer ownership. (Source)
And the last one — but certainly not the least one — is where my faith comes into play. I believe that the Triune God exists — The Father, The Son, and The Holy Spirit — and that the LORD is very active in our lives and throughout the globe. And one of the things the LORD values highly is JUSTICE. For examples:
“This is what the Lord Almighty said: ‘Administer true justice; show mercy and compassion to one another. Do not oppress the widow or the fatherless, the foreigner or the poor. Do not plot evil against each other.’” Zechariah 7:9-10
Many seek an audience with a ruler, but it is from the Lord that one gets justice. Proverbs 29:26 NIV
These are the things you are to do: Speak the truth to each other, and render true and sound judgment in your courts. Zechariah 8:16 NIV
…and many others as can be seen below
So I believe that the LORD will actively help us provide greater access to justice in America.
Well…there you have it David. Thanks for your question/comment! I appreciate it!
6. ChatGPT’s hype will fade, as a new generation of tailor-made bots rises up
11. We’ll finally turn the corner on teacher pay in 2024
21. Employers will combat job applicants’ use of AI with…more AI
31. Universities will view the creator economy as a viable career path
DSC: I’m a bit confused this morning, as I’m seeing multiple (but different) references to “Q” and what it is. It seems to be at least two different things: 1) OpenAI’s secret project Q* and 2) Amazon Q, a new type of generative artificial intelligence-powered assistant.
We need to start aligning the educational system with a world where humans live with machines that have intelligence capabilities that approximate their own.
Today’s freshmen *may* graduate into a world where AIs have at least similar intelligence abilities to humans. Today’s 1st graders *probably* will.
Efforts need to be made to align the educational system with a world where machines will have intelligence capabilities similar to those of humans.
AWS Announces Amazon Q to Reimagine the Future of Work — from press.aboutamazon.com
New type of generative AI-powered assistant, built with security and privacy in mind, empowers employees to get answers to questions, solve problems, generate content, and take actions using the data and expertise found at their company
LAS VEGAS–(BUSINESS WIRE)–At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), today announced Amazon Q, a new type of generative artificial intelligence (AI)-powered assistant that is specifically for work and can be tailored to a customer’s business. Customers can get fast, relevant answers to pressing questions, generate content, and take actions—all informed by a customer’s information repositories, code, and enterprise systems. Amazon Q provides information and advice to employees to streamline tasks, accelerate decision making and problem solving, and help spark creativity and innovation at work.
From DSC: As I’ve long stated on the Learning from the Living [Class]Room vision, we are heading toward a new AI-empowered learning platform — where humans play a critically important role in making this new learning ecosystem work.
Along these lines, I ran into this site out on X/Twitter. We’ll see how this unfolds, but it will be an interesting space to watch.
From DSC: This future learning platform will also focus on developing skills and competencies. Along those lines, see:
Scale for Skills-First — from the-job.beehiiv.com by Paul Fain
An ed-tech giant’s ambitious moves into digital credentialing and learner records.
A Digital Canvas for Skills
Instructure was a player in the skills and credentials space before its recent acquisition of Parchment, a digital transcript company. But that $800M move made many observers wonder if Instructure can develop digital records of skills that learners, colleges, and employers might actually use broadly.
…
Ultimately, he says, the CLR approach will allow students to bring these various learning types into a coherent format for employers.
Instructure seeks a leadership role in working with other organizations to establish common standards for credentials and learner records, to help create consistency. The company collaborates closely with 1EdTech. And last month it helped launch the 1EdTech TrustEd Microcredential Coalition, which aims to increase quality and trust in digital credentials.
ChatGPT can produce a perfectly serviceable writing “product,” she said. But writing isn’t a product per se — it’s a tool for thinking, for organizing ideas, she said.
“ChatGPT and other text-based tools can’t think for us,” she said. “There’s still things to learn when it comes to writing because writing is a form of figuring out what you think.”
When students could contrast their own writing to ChatGPT’s more generic version, Levine said, they were able to “understand what their own voice is and what it does.”
But what about text? Should — and if so, how should — writers be recognized and remunerated for AI-generated works that mimic their voices?
Those are questions that are likely to be raised by a feature in Grammarly, the cloud-based typing assistant, that’s scheduled to launch by the end of the year for subscribers to Grammarly’s business tier. Called “Personalized voice detection and application,” the feature automatically detects a person’s unique writing style and creates a “voice profile” that can rewrite any text in the person’s style.
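To make the idea of a “voice profile” concrete, here is a toy sketch of the kind of surface-level features such a profile might capture. This is purely illustrative and is not Grammarly’s actual method; the function and feature names are assumptions of mine.

```python
# Toy sketch only: crude surface statistics standing in for a "voice profile."
# Not Grammarly's method; function and feature names are illustrative assumptions.
import re
from collections import Counter

def build_voice_profile(text: str) -> dict:
    """Summarize a writer's surface-level habits from a sample of their prose."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_words_per_sentence": round(len(words) / max(len(sentences), 1), 1),
        "favorite_words": [w for w, _ in Counter(words).most_common(5)],
        "semicolons_per_sentence": round(text.count(";") / max(len(sentences), 1), 2),
    }

sample = "Writing, for me, is thinking out loud; I love a good semicolon. I really do!"
print(build_voice_profile(sample))
```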
From bustling metropolises to serene hamlets, schools across the globe are greeting a new companion—Artificial Intelligence (AI). This companion promises to redefine the essence of education, making learning a journey tailored to each child’s unique abilities.
The advent of AI in education is akin to a gentle breeze, subtly transforming the academic landscape. Picture a classroom where each child, with their distinct capabilities and pace, embarks on a personalized learning path. AI morphs this vision into reality, crafting a personalized educational landscape that celebrates the unique potential harbored within every learner.
Books have always held a special place in my heart. As an avid reader and AI enthusiast, I have curated a list of books on artificial intelligence specifically tailored for educators. These books delve into the realms of AI, exploring its applications, ethical considerations, and its impact on education. Share your suggestions and let me know which books you would like to see included on this list.
We held our fourth online Empowering Learners for the Age of AI conference last week. We sold out at 1500 people (a Whova and budget limit). The recordings/playlist from the conference can now be accessed here.
Wagner herself recently relied on the power of collegial conversations to probe the question: What’s on the minds of educators as they make ready for the growing influence of AI in higher education? CT asked her for some takeaways from the process.
We are in the very early days of seeing how AI is going to affect education. Some of us are going to need to stay focused on the basic research to test hypotheses. Others are going to dive into laboratory “sandboxes” to see if we can build some new applications and tools for ourselves. Still others will continue to scan newsletters like ProductHunt every day to see what kinds of things people are working on. It’s going to be hard to keep up, to filter out the noise on our own. That’s one reason why thinking with colleagues is so very important.
We are interested in how K-12 education will change in terms of foundational learning. With in-class, active learning designs, will younger students do a lot more intensive building of foundational writing and critical thinking skills before they get to college?
The Human in the Loop: AI is built using math; think of applied statistics on steroids. Humans will be needed more than ever to manage, review, and evaluate the validity and reliability of results. Curation will be essential.
We will need to generate ideas about how to address AI factors such as privacy, equity, bias, copyright, intellectual property, accessibility, and scalability.
Have other institutions experimented with AI detection, and/or have they held off on emerging tools in this area? We have just recently adjusted our guidance and paused some of these tools, given the massive inaccuracies in detection (and the related downstream issues in faculty-elevated conduct cases).
Even though we learn repeatedly that innovation has a lot to do with effective project management and a solid message that helps people understand what they can do to implement change, people really need innovation to be more exciting and visionary than that. This is the place where we all need to help each other stay the course of change.
I have been talking to a lot of people about Generative AI, from teachers to business executives to artists to people actually building LLMs. In these conversations, a few key questions and themes keep coming up over and over again. Many of those questions are more informed by viral news articles about AI than about the real thing, so I thought I would try to answer a few of the most common, to the best of my ability.
I can’t blame people for asking because, for whatever reason, the companies actually building and releasing Large Language Models often seem allergic to providing any sort of documentation or tutorial besides technical notes. I was given much better documentation for the generic garden hose I bought on Amazon than for the immensely powerful AI tools being released by the world’s largest companies. So, it is no surprise that rumor has been the way that people learn about AI capabilities.
Currently, there are only really three AIs to consider: (1) OpenAI’s GPT-4 (which you can get access to with a Plus subscription or via Microsoft Bing in creative mode, for free), (2) Google’s Bard (free), or (3) Anthropic’s Claude 2 (free, but paid mode gets you faster access). As of today, GPT-4 is the clear leader, Claude 2 is second best (but can handle longer documents), and Google trails, but that will likely change very soon when Google updates its model, which is rumored to be happening in the near future.
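For readers who want to try one of these models outside the chat interface, here is a minimal sketch of calling GPT-4 with the OpenAI Python library (v1.x). It assumes an OPENAI_API_KEY environment variable, and model names and access tiers change over time; Bard and Claude 2 have their own separate APIs, though the pattern (send a list of messages, read back the reply) is similar.

```python
# Minimal sketch: one chat completion call to GPT-4 via the OpenAI Python library (v1.x).
# Assumes the OPENAI_API_KEY environment variable is set; model availability may vary.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise teaching assistant."},
        {"role": "user", "content": "Explain retrieval practice in two sentences."},
    ],
)
print(response.choices[0].message.content)
```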
New York City Public Schools will launch an Artificial Intelligence Policy Lab to guide the nation’s largest school district’s approach to this rapidly evolving technology.
Kevin McCullen, an associate professor of computer science at the State University of New York at Plattsburgh, teaches a freshman seminar about AI and robotics. As part of the course, students read Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff. McCullen had the students work in groups to outline and summarize the first three chapters. Then he showed them what ChatGPT had produced in an outline.
“Their version and ChatGPT’s version seemed to be from two different books,” McCullen wrote. “ChatGPT’s version was essentially a ‘laundry list’ of events. Their version was narratives of what they found interesting. The students had focused on what the story was telling them, while ChatGPT focused on who did what in what year.” The chatbot also introduced false information, such as wrong chapter names.
The students, he wrote, found the writing “soulless.”
In the Wild West of generative AI, educators and institutions are working out how best to use the technology for learning. How can institutions define AI guidelines that allow for experimentation while providing students with consistent guidance on appropriate use of AI tools?
To find out, we spoke with Dr. Cristi Ford, vice president of academic affairs at D2L. With more than two decades of educational experience in nonprofit, higher education, and K-12 institutions, Ford works with D2L’s institutional partners to elevate best practices in teaching, learning, and student support. Here, she shares her advice on setting and communicating AI policies that are consistent and future-ready.
“If we want to use AI to improve education, we need more teachers at the table,” said Avery Pan, Class Companion co-founder and CEO. “Class Companion is designed by teachers, for teachers, to harness the most sophisticated AI and improve their classroom experience. Developing technologies specifically for teachers is imperative to supporting our next generation of students and education system.”
7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.
The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.
We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.
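As a rough illustration of the “intern whose work is then vetted by a human expert” pattern Vaughn describes, here is a hedged sketch that drafts learning objectives and leaves the final review to the instructor. The function name, prompt wording, and use of the OpenAI Python library are my own assumptions, not anything specific to Open LMS.

```python
# Sketch of the "AI as intern" pattern: the model drafts, a human expert vets.
# Assumes the OpenAI Python library (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def draft_learning_objectives(topic: str, count: int = 5) -> str:
    """Ask the model for draft objectives; the output is a starting point, not a final product."""
    prompt = (
        f"Write {count} measurable learning objectives for a college unit on {topic}. "
        "Start each with an action verb and keep each under 20 words."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = draft_learning_objectives("introductory statistics")
print(draft)
print("\nReview and edit the draft above before it goes anywhere near a syllabus.")
```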
The intersection of AI and the world of work: Not only are job postings increasing, but we’re seeing more LinkedIn members around the globe adding AI skills to their profiles than ever before. We’ve seen a 21x increase in the share of global English-language job postings that mention new AI technologies such as GPT or ChatGPT since November 2022. In June 2023, the number of AI-skilled members was 9x larger than in January 2016, globally.
The state of play of Generative AI (GAI) in the workforce: GAI technologies, including ChatGPT, are poised to start to change the way we work. In fact, 47% of US executives believe that using generative AI will increase productivity, and 92% agree that people skills are more important than ever. This means jobs won’t necessarily go away, but they will change, as will the skills necessary to do them.
So, as educators, mentors, and guides to our future generations, we must ask ourselves three pivotal questions:
What value do we offer to our students?
What value will they need to offer to the world?
How are we preparing them to offer that value?
The answers to these questions are crucial, and they will redefine the trajectory of our education system.
We need to create an environment that encourages curiosity, embraces failure as a learning opportunity, and celebrates diversity. We need to teach our students how to learn, how to ask the right questions, and how to think for themselves.
Leveraging ChatGPT for learning is the most meaningful skill this year for lifelong learners. But it’s too hard to find resources to master it.
As a learning science nerd, I’ve explored hundreds of prompts over the past months. Most of the advice doesn’t go beyond text summaries and multiple-choice testing.
That’s why I’ve created this article — it merges learning science with prompt writing to help you learn anything faster.
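In that spirit, here is one illustrative prompt template that leans on retrieval practice and corrective feedback rather than plain summaries; the wording is an assumption of mine, not taken from the article.

```python
# Illustrative study prompt: retrieval practice plus feedback, instead of a plain summary.
LEARNING_PROMPT = """You are my tutor for the topic: {topic}.
1. Ask me ONE open-ended question that tests recall of a core idea.
2. Wait for my answer before continuing.
3. Tell me what I got right, correct any misconceptions, then ask a harder follow-up question."""

print(LEARNING_PROMPT.format(topic="the central limit theorem"))
```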
Midjourney AI Art for Teachers (for any kind of teacher, not just Art Teachers) — from The AI Educator on YouTube by Dan Fitzpatrick
From DSC: This is a very nice, clearly illustrated, free video to get started with the Midjourney (text-to-image) app. Nice work Dan!
In the new normal of generative AI, how does one articulate the value of academic integrity? This blog presents my current response in about 2,500 words; a complete answer could fill a sizable book.
Massive amounts of misinformation are disseminated about generative AI, so the first part of my discussion clarifies what large language models (ChatGPT and its counterparts) can currently do and what they cannot accomplish at this point in time. The second part describes ways in which generative AI can be misused as a means of learning; unfortunately, many people are now advocating for these mistaken applications to education. The third part describes ways in which large language models (LLMs), used well, may substantially improve learning and education. I close with a plea for a robust, informed public discussion about these topics and issues.
Many of the more than a dozen teachers TIME interviewed for this story argue that the way to get kids to care is to proactively use ChatGPT in the classroom.
…
Some of those creative ideas are already in effect at Peninsula High School in Gig Harbor, about an hour from Seattle. In Erin Rossing’s precalculus class, a student got ChatGPT to generate a rap about vectors and trigonometry in the style of Kanye West, while geometry students used the program to write mathematical proofs in the style of raps, which they performed in a classroom competition. In Kara Beloate’s English-Language Arts class, she allowed students reading Shakespeare’s Othello to use ChatGPT to translate lines into modern English to help them understand the text, so that they could spend class time discussing the plot and themes.
I found that other developed countries share concerns about students cheating but are moving quickly to use AI to personalize education, enhance language lessons and help teachers with mundane tasks, such as grading. Some of these countries are in the early stages of training teachers to use AI and developing curriculum standards for what students should know and be able to do with the technology.
Several countries began positioning themselves several years ago to invest in AI in education in order to compete in the fourth industrial revolution.
AI in Education — from educationnext.org by John Bailey
The leap into a new era of machine intelligence carries risks and challenges, but also plenty of promise
In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.
Generative AI and education futures — from ucl.ac.uk
Video highlights from Professor Mike Sharples’ keynote address at the 2023 UCL Education Conference, which explored opportunities to prosper with AI as a part of education.
Bringing AI Literacy to High Schools — by Nikki Goth Itoi
Stanford education researchers collaborated with teachers to develop classroom-ready AI resources for high school instructors across subject areas.
To address these two imperatives, all high schools need access to basic AI tools and training. Yet the reality is that many underserved schools in low-income areas lack the bandwidth, skills, and confidence to guide their students through an AI-powered world. And if the pattern continues, AI will only worsen existing inequities. With this concern top of mind plus initial funding from the McCoy Ethics Center, Lee began recruiting some graduate students and high school teachers to explore how to give more people equal footing in the AI space.