Gen-AI Movie Trailer For Sci Fi Epic “Genesis” — from forbes.com by Charlie Fink

The movie trailer for “Genesis,” created with AI, is so convincing it caused a stir on Twitter [on July 27]. That’s how I found out about it. Created by Nicolas Neubert, a senior product designer who works for Elli by Volkswagen in Germany, the “Genesis” trailer promotes a dystopian sci-fi epic reminiscent of the Terminator. There is no movie, of course; only the trailer exists. But this is neither a gag nor a parody. It’s in a class of its own. Eerily made by man, but not.



Google’s water use is soaring. AI is only going to make it worse. — from businessinsider.com by Hugh Langley

Google just published its 2023 environmental report, and one thing is for certain: The company’s water use is soaring.

The internet giant said it consumed 5.6 billion gallons of water in 2022, the equivalent of 37 golf courses. Most of that — 5.2 billion gallons — was used for the company’s data centers, a 20% increase on the amount Google reported the year prior.


We think prompt engineering (learning to converse with an AI) is overrated. — from the Neuron

We think prompt engineering (learning to converse with an AI) is overrated. Yup, we said it. We think the future of chat interfaces will be a combination of preloading context and then allowing AI to guide you to the information you seek.

From DSC:
Agreed. I think we’ll see a lot more interface updates and changes to make things easier to use, find, and develop.


Radar Trends to Watch: August 2023 — from oreilly.com by Mike Loukides
Developments in Programming, Web, Security, and More

Artificial Intelligence continues to dominate the news. In the past month, we’ve seen a number of major updates to language models: Claude 2, with its 100,000 token context limit; LLaMA 2, with (relatively) liberal restrictions on use; and Stable Diffusion XL, a significantly more capable version of Stable Diffusion. Does Claude 2’s huge context really change what the model can do? And what role will open access and open source language models have as commercial applications develop?


Try out Google ‘TextFX’ and its 10 creative AI tools for rappers, writers — from 9to5google.com by Abner Li; via Barsee – AI Valley 

Google Lab Sessions are collaborations between “visionaries from all realms of human endeavor” and the company’s latest AI technology. [On 8/2/23], Google released TextFX as an “experiment to demonstrate how generative language technologies can empower the creativity and workflows of artists and creators” with Lupe Fiasco.

Google’s TextFX includes 10 tools and is powered by the PaLM 2 large language model via the PaLM API. Meant to aid in the creative process of rappers, writers, and other wordsmiths, it is part of Google Labs.

 

AI for Education Webinars — from youtube.com by Tom Barrett and others



Post-AI Assessment Design — from drphilippahardman.substack.com by Dr. Philippa Hardman
A simple, three-step guide on how to design assessments in a post-AI world

Excerpt:

Step 1: Write Inquiry-Based Objectives
Inquiry-based objectives focus not just on the acquisition of knowledge but also on the development of skills and behaviours, like critical thinking, problem-solving, collaboration and research skills.

They do this by requiring learners not just to recall or “describe back” concepts that are delivered via text, lecture or video. Instead, inquiry-based objectives require learners to construct their own understanding through the process of investigation, analysis and questioning.



Massive Disruption Now: What AI Means for Students, Educators, Administrators and Accreditation Boards
— from stefanbauschard.substack.com by Stefan Bauschard; via Will Richardson on LinkedIn
The choices many colleges and universities make regarding AI over the next 9 months will determine if they survive. The same may be true for schools.

Excerpts:

Just for a minute, consider how education would change if the following were true:

  • AIs “hallucinated” less than humans
  • AIs could write in our own voices
  • AIs could accurately do math
  • AIs understood the unique academic (and eventually developmental) needs of each student and adapted instruction to that student
  • AIs could teach anything any student wanted or needed to know, any time of day or night
  • AIs could do this at a fraction of the cost of a human teacher or professor

Fall 2026 is three years away. Do you have a three-year plan? Perhaps you should scrap it and write a new one (or at least realize that your current one cannot survive). If you run an academic institution in 2026 the same way you ran it in 2022, you might as well run it like you would have in 1920. If you run an academic institution in 2030 (or any year when AI surpasses human intelligence) the same way you ran it in 2022, you might as well run it like you would have in 1820. AIs will become more intelligent than us, perhaps in 10-20 years (LeCun), though there could be unanticipated breakthroughs that shorten the time frame to a few years or less (Bengio); it’s just a question of when, not “if.”


On one creative use of AI — from aiandacademia.substack.com by Bryan Alexander
A new practice with pedagogical possibilities

Excerpt:

Look at those material items again. The voiceover? Written by an AI and turned into audio by software. The images? Created by human prompts in Midjourney. The music is, I think, human created. And the idea came from a discussion between a human and an AI?

How might this play out in a college or university class?

Imagine assignments which require students to craft such a video. Start from film, media studies, or computer science classes. Students work through a process:


Generative Textbooks — from opencontent.org by David Wiley

Excerpt (emphasis DSC):

I continue to try to imagine ways generative AI can impact teaching and learning, including learning materials like textbooks. Earlier this week I started wondering – what if, in the future, educators didn’t write textbooks at all? What if, instead, we only wrote structured collections of highly crafted prompts? Instead of reading a static textbook in a linear fashion, the learner would use the prompts to interact with a large language model. These prompts could help learners ask for things like:

  • overviews and in-depth explanations of specific topics in a specific sequence,
  • examples that the learner finds personally relevant and interesting,
  • interactive practice – including open-ended exercises – with immediate, corrective feedback,
  • the structure of the relationships between ideas and concepts,
  • etc.
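Wiley’s idea can be made concrete with a small sketch: the “textbook” becomes a structured collection of author-crafted prompt templates that get filled in with each learner’s context before being sent to a large language model. The topics, template wording, and placeholder names below are illustrative assumptions on my part, not anything from Wiley’s post.

```python
# A minimal sketch of a "generative textbook": the author ships carefully
# crafted prompt templates instead of static chapters; the learner's context
# is merged in before the prompt is sent to an LLM. All names here are
# hypothetical.

TEXTBOOK_PROMPTS = {
    "overview": (
        "Give a {depth} overview of {topic}, building on these prerequisites "
        "the learner has already covered: {prerequisites}."
    ),
    "example": (
        "Explain {topic} using an example drawn from the learner's stated "
        "interest in {interest}."
    ),
    "practice": (
        "Pose one open-ended exercise on {topic}, then wait for the learner's "
        "answer and give immediate, corrective feedback."
    ),
    "concept_map": (
        "Describe how {topic} relates to these neighboring concepts: "
        "{related_concepts}."
    ),
}

def build_prompt(kind: str, **learner_context: str) -> str:
    """Fill one of the author-crafted templates with this learner's context."""
    return TEXTBOOK_PROMPTS[kind].format(**learner_context)

prompt = build_prompt(
    "example",
    topic="supply and demand",
    interest="competitive video gaming",
)
# `prompt` would then be sent to whichever LLM chat endpoint you use.
```

The authorial craft moves from writing exposition to writing (and sequencing) the templates themselves.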

Also relevant/see:




Generating The Future of Education with AI — from aixeducation.com

AI in Education -- An online-based conference taking place on August 5-6, 2023

Designed for K12 and Higher-Ed Educators & Administrators, this conference aims to provide a platform for educators, administrators, AI experts, students, parents, and EdTech leaders to discuss the impact of AI on education, address current challenges and potentials, share their perspectives and experiences, and explore innovative solutions. A special emphasis will be placed on including students’ voices in the conversation, highlighting their unique experiences and insights as the primary beneficiaries of these educational transformations.


How Teachers Are Using ChatGPT in Class — from edweek.org by Larry Ferlazzo

Excerpt:

The use of generative AI in K-12 settings is complex and still in its infancy. We need to consider how these tools can enhance student creativity, improve writing skills, and be transparent with students about how generative AI works so they can better understand its limitations. As with any new tech, our students will be exposed to it, and it is our task as educators to help them navigate this new territory as well-informed, curious explorers.


Japan emphasizes students’ comprehension of AI in new school guidelines — from japantimes.co.jp by Karin Kaneko; via The Rundown

Excerpt:

The education ministry has emphasized the need for students to understand artificial intelligence in new guidelines released Tuesday, setting out how generative AI can be integrated into schools and the precautions needed to address associated risks.

Students should comprehend the characteristics of AI, including its advantages and disadvantages, with the latter including personal information leakages and copyright infringement, before they use it, according to the guidelines. They explicitly state that passing off reports, essays or any other works produced by AI as one’s own is inappropriate.


AI’s Teachable Moment: How ChatGPT Is Transforming the Classroom — from cnet.com by Mark Serrels
Teachers and students are already harnessing the power of AI, with an eye toward the future.

Excerpt:

Thanks to the rapid development of artificial intelligence tools like Dall-E and ChatGPT, my brother-in-law has been wrestling with low-level anxiety: Is it a good idea to steer his son down this path when AI threatens to devalue the work of creatives? Will there be a job for someone with that skill set in 10 years? He’s unsure. But instead of burying his head in the sand, he’s doing what any tech-savvy parent would do: He’s teaching his son how to use AI.

In recent months the family has picked up subscriptions to AI services. Now, in addition to drawing and sculpting and making movies and video games, my nephew is creating the monsters of his dreams with Midjourney, a generative AI tool that uses language prompts to produce images.


The AI Dictionary for Educators — from blog.profjim.com

To bridge this knowledge gap, I decided to make a quick little dictionary of AI terms specifically tailored for educators worldwide. Initially created for my own benefit, I’ve reworked my own AI Dictionary for Educators and expanded it to help my fellow teachers embrace the advancements AI brings to education.


7 Strategies to Prepare Educators to Teach With AI — from edweek.org by Lauraine Langreo; NOTE: Behind paywall


 

Law Firms Are Recruiting More AI Experts as Clients Demand ‘More for Less’ — from bloomberg.com by Irina Anghel
Data scientists, software engineers among roles being sought | Legal services seen as vulnerable to ChatGPT-type software

Excerpt (emphasis DSC):

Chatbots, data scientists, software engineers. As clients demand more for less, law firms are hiring growing numbers of staff who’ve studied technology, not tort law, to try to stand out from their rivals.

Law firms are advertising for experts in artificial intelligence “more than ever before,” says Chris Tart-Roberts, head of the legal technology practice at Macfarlanes, describing a trend he says began about six months ago.


Legal is the second industry with the highest potential for automation



AI Will Threaten Law Firm Jobs, But Innovators Will Thrive — from law.com

Excerpts:

What You Need to Know

  • Law firm leaders and consultants are unsure of how AI use will ultimately impact the legal workforce.
  • Consultants are advising law firms and attorneys alike to adapt to the use of generative AI, viewing this as an opportunity for attorneys to learn new skills and for law firms to take a look at their business models.

Split between foreseeing job cuts and opportunities to introduce new skills and additional efficiencies into the office, firm leaders and consultants remain uncertain about the impact of artificial intelligence on the legal workforce.

However, one thing is certain: law firms and attorneys need to adapt and learn how to integrate this new technology in their business models, according to consultants. 


AI Lawyer — A personal AI lawyer at your fingertips — from ailawyer.pro


From DSC:
I hope that we will see a lot more of this kind of thing!
I’m counting on it.


Revolutionize Your Legal Education with Law School AI — from law-school-ai.vercel.app
Your Ultimate Study Partner

Are you overwhelmed by countless cases, complex legal concepts, and endless readings? Law School AI is here to help. Our cutting-edge AI chatbot is designed to provide law students with an accessible, efficient, and engaging way to learn the law. Our chatbot simplifies complex legal topics, delivers personalized study guidance, and answers your questions in real-time – making your law school journey a whole lot easier.


Job title of the future: metaverse lawyer — from technologyreview.com by Amanda Smith
Madaline Zannes’s virtual offices come with breakout rooms, an art gallery… and a bar.

Excerpt:

Lot #651 on Somnium Space belongs to Zannes Law, a Toronto-based law firm. In this seven-level metaverse office, principal lawyer Madaline Zannes conducts private consultations with clients, meets people wandering in with legal questions, hosts conferences, and gives guest lectures. Zannes says that her metaverse office allows for a more immersive, imaginative client experience. She hired a custom metaverse builder to create the space from scratch—with breakout rooms, presentation stages, offices to rent, an art gallery, and a rooftop bar.


A Literal Generative AI Discussion: How AI Could Reshape Law — from geeklawblog.com by Greg Lambert

Excerpt:

Greg spoke with an AI guest named Justis for this episode. Justis, powered by OpenAI’s GPT-4, was able to have a natural conversation with Greg and provide insightful perspectives on the use of generative AI in the legal industry, specifically in law firms.

In the first part of their discussion, Justis gave an overview of the legal industry’s interest in and uncertainty around adopting generative AI. While many law firm leaders recognize its potential, some are unsure of how it fits into legal work or worry about risks. Justis pointed to examples of firms exploring AI and said letting lawyers experiment with the tools could help identify use cases.


Robots aren’t representing us in court but here are 7 legal tech startups transforming the legal system — from tech.eu by Cate Lawrence
Legal tech startups are stepping up to the bar, using tech such as AI, teleoperations, and apps to bring justice to more people than ever before. This increases efficiency, reduces delays, and lowers costs, expanding legal access.


Putting Humans First: Solving Real-Life Problems With Legal Innovation — from abovethelaw.com by Olga Mack
Placing the end-user at the heart of the process allows innovators to identify pain points and create solutions that directly address the unique needs and challenges individuals and businesses face.

 

AI21 Labs concludes largest Turing Test experiment to date — from ai21.com
As part of an ongoing social and educational research project, AI21 Labs is thrilled to share the initial results of what has now become the largest Turing Test in history by scale.

People found it easier to identify a fellow human. When talking to humans, participants guessed right in 73% of the cases. When talking to bots, participants guessed right in just 60% of the cases.

 


From DSC:
I also wanted to highlight the item below, which Barsee also mentioned above, as it will likely hit the world of education and training as well:



Also relevant/see:


 

The perils of consulting an Electric Monk — from jordanfurlong.substack.com by Jordan Furlong
Don’t blame ChatGPT for the infamous incident of the made-up cases. And don’t be too hard on the lawyer, either. We’re all susceptible to a machine that tells us exactly what we want to hear.

Excerpt:

But then the “ChatGPT Lawyer” story happened, and all hell broke loose on LawTwitter and LawLinkedIn, and I felt I needed to make three points, one of which involves an extra-terrestrial robot.

My first two points are pretty straightforward:

  1. The tsunami of gleeful overreaction from lawyers on social media, urging bans on the use of ChatGPT and predicting prison time for the hapless practitioner, speaks not only to their fear and loathing of generative AI, but also to their desperate hope that it’s all really nothing but hype and won’t disturb their happy status quo. Good luck with that.
  2. The condemnation and mockery of the lawyer himself, who made a bad mistake but who’s been buried by an utterly disproportionate avalanche of derision, speaks to the lack of compassion in this profession, whose members should pray that their worst day as a lawyer never makes it to the front page of The New York Times. There but for the grace of God.

Are you looking for evidence to support the side that’s hired you? Or are you looking for the truth? Choosing the first option has never been easier. It’s also never been more dangerous.


As referenced topic-wise by Jordan above, also see:

A lawyer used ChatGPT to prepare a court filing. It went horribly awry. — from cbsnews.com by Megan Cerullo


What I learned at CLOC 2023 — from alexofftherecord.com by Alex Su
This week I attended the premier legal operations conference. Here’s what I heard.

Excerpt:

Theme 1: Generative AI isn’t going anywhere
This was a huge theme throughout the conference. Whether it was vendors announcing GPT integrations, or panels discussing how to use AI, there was just an enormous amount of attention on generative AI. I’m certainly no stranger to all this hype, but I’d always wondered if it was all from my Silicon Valley bubble. It wasn’t.

What was driving all this interest in AI? Well, the ubiquity of ChatGPT. Everyone’s talking about it and trying to figure out how to incorporate it into the business. And not just in the U.S. It’s a worldwide trend. Word on the street is that it’s a CEO-level priority. Everywhere. So naturally it trickles down to the legal department.


We need to talk about ChatGPT — from mnbar.org by Damien Riehl

Excerpt:

How well do LLMs perform on legal tasks? 

Personal experience and anecdotal evidence indicate that LLMs’ current state provides impressive output in various legal tasks. Specifically, they provide extraordinary results on the following:

  • Drafting counterarguments.
  • Exploring client fact inquiries (e.g., “How did you lose money?”).
  • Ideating voir dire questions (and rating responses).
  • Summarizing statutes.
  • Calculating works’ copyright expiration.
  • Drafting privacy playbooks.
  • Drafting motions to dismiss.
  • Responding to cease-and-desist letters.
  • Crafting decision trees.
  • Creating chronologies.
  • Drafting contracts.
  • Extracting key elements from depositions.

 

 

Corporate legal departments see use cases for generative AI & ChatGPT, new report finds — from thomsonreuters.com


New legal tech tools showcased at CLOC 2023 — from legaldive.com by Robert Freedman
Innovations include a better way to evaluate law firm proposals, centralize all in-house legal requests in a single intake function and analyze agreements.

Guest post: CLOC 2023 – Key insights into how to drive value during changing economic times — from legaltechnology.com by Valerie Chan

Excerpt:

Typically, Legalweek has always been focused on eDiscovery, while CLOC was focused on matter management and contracts management. This time I noticed more balance in the vendor hall and sessions, with a broader range of services providers than before, including staffing providers, contracts management vendors and other new entrants in addition to eDiscovery vendors.

One theme dominated the show floor conversations: Over and over, the legal operators I talked with said if their technologies and vendors were able to develop better workflows, achieve more cost savings and report on the metrics that mattered to their GC, the GC could function as more of a business advisor to the C-suite.


AI is already being used in the legal system—we need to pay more attention to how we use it — from phys.org by Morgiane Noel

Excerpt:

While ChatGPT and the use of algorithms in social media get lots of attention, an important area where AI promises to have an impact is law.

The idea of AI deciding guilt in legal proceedings may seem far-fetched, but it’s one we now need to give serious consideration to.

That’s because it raises questions about the compatibility of AI with conducting fair trials. The EU has enacted legislation designed to govern how AI can and can’t be used in criminal law.


Legal Innovation as a Service, Now Enhanced with AI — from denniskennedy.com by Dennis Kennedy

Excerpt:

Over the last semester, I’ve been teaching two classes at Michigan State University College of Law, one called AI and the Law and the other called New Technologies and the Law, and a class at University of Michigan Law School called Legal Technology Literacy and Leadership. All three classes pushed me to keep up-to-date with the nearly-daily developments in AI, ChatGPT, and LLMs. I also did quite a lot of experiments, primarily with ChatGPT, especially GPT-4, and with Notion AI.


Emerging Tech Trends: The rise of GPT tools in contract analysis — from abajournal.com by Nicole Black

Excerpt:

Below, you’ll learn about many of the solutions currently available. Keep in mind that this overview is not exhaustive. There are other similar tools currently available and the number of products in this category will undoubtedly increase in the months to come.


Politicians need to learn how AI works—fast — link.wired.com

Excerpt:

This week we’ll hear from someone who has deep experience in assessing and regulating potentially harmful uses of automation and artificial intelligence—valuable skills at a moment when many people, including lawmakers, are freaking out about the chaos that the technology could cause.


 

 

This company adopted AI. Here’s what happened to its human workers — from npr.org by Greg Rosalsky

Excerpt:

What the economists found offers potentially great news for the economy, at least in one dimension that is crucial to improving our living standards: AI caused a group of workers to become much more productive. Backed by AI, these workers were able to accomplish much more in less time, with greater customer satisfaction to boot. At the same time, however, the study also shines a spotlight on just how powerful AI is, how disruptive it might be, and suggests that this new, astonishing technology could have economic effects that change the shape of income inequality going forward.

The article links to:
Generative AI at Work — from nber.org by Erik Brynjolfsson, Danielle Li & Lindsey R. Raymond

We study the staggered introduction of a generative AI-based conversational assistant using data from 5,179 customer support agents. Access to the tool increases productivity, as measured by issues resolved per hour, by 14 percent on average, with the greatest impact on novice and low-skilled workers, and minimal impact on experienced and highly skilled workers. We provide suggestive evidence that the AI model disseminates the potentially tacit knowledge of more able workers and helps newer workers move down the experience curve. In addition, we show that AI assistance improves customer sentiment, reduces requests for managerial intervention, and improves employee retention.

 

In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT’s development and get Brockman’s take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.


Also relevant/see:


 

You are not a parrot — from nymag.com by Elizabeth Weil and Emily M. Bender

You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried about what will happen when we forget this.

Excerpts:

A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”

Bender knows she’s no match for a trillion-dollar game changer slouching to life. But she’s out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

 

How ChatGPT is going to change the future of work and our approach to education — from livemint.com

From DSC: 
I thought that the article made a good point when it asserted:

The pace of technological advancement is booming aggressively and conversations around ChatGPT snatching away jobs are becoming more and more frequent. The future of work is definitely going to change and that makes it clear that the approach toward education is also demanding a big shift.

A report from Dell suggests that 85% of jobs that will be around in 2030 do not exist yet. The fact becomes important as it showcases that the jobs are not going to vanish, they will just change and most of the jobs by 2030 will be new.

The Future of Human Agency — from pewresearch.org by Janna Anderson and Lee Rainie

Excerpt:

Thus the question: What is the future of human agency? Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on this; 540 technology innovators, developers, business and policy leaders, researchers, academics and activists responded. Specifically, they were asked:

By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?

The results of this nonscientific canvassing:

    • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
    • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

What are the things humans really want agency over? When will they be comfortable turning to AI to help them make decisions? And under what circumstances will they be willing to outsource decisions altogether to digital systems?

The next big threat to AI might already be lurking on the web — from zdnet.com by Danny Palmer; via Sam DeBrule
Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences.

Excerpts:

Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This action means it’s possible to affect the decisions that the AI makes in a way that is hard to track.

By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make ‘wrong’ decisions that have significant consequences.
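The mechanism the article describes can be illustrated with a toy experiment: even a very simple classifier degrades when an attacker quietly flips some of its training labels. The dataset, classifier, and poisoning rate below are my own illustrative assumptions, chosen only to make the effect visible; real data-poisoning attacks against deep-learning models are far subtler.

```python
# Toy illustration of data poisoning: a nearest-centroid classifier trained
# on two Gaussian clusters, before and after an attacker flips most of one
# class's training labels. Everything here is a hypothetical setup.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # points per class

# Training data: class 0 centered at (-2, -2), class 1 at (+2, +2)
X_train = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y_train = np.array([0] * n + [1] * n)

# A held-out test set drawn from the same (clean) distributions
X_test = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y_test = np.array([0] * n + [1] * n)

def nearest_centroid_accuracy(labels: np.ndarray) -> float:
    """Train a nearest-centroid classifier on (X_train, labels); test on clean data."""
    c0 = X_train[labels == 0].mean(axis=0)
    c1 = X_train[labels == 1].mean(axis=0)
    pred = (np.linalg.norm(X_test - c1, axis=1)
            < np.linalg.norm(X_test - c0, axis=1)).astype(int)
    return float((pred == y_test).mean())

clean_acc = nearest_centroid_accuracy(y_train)

# The attacker secretly relabels 80% of class-0 training points as class 1,
# dragging the class-1 centroid toward class-0 territory.
y_poisoned = y_train.copy()
flip = rng.choice(n, size=int(0.8 * n), replace=False)  # class-0 indices
y_poisoned[flip] = 1
poisoned_acc = nearest_centroid_accuracy(y_poisoned)
```

The model trained on poisoned labels makes noticeably more “wrong” decisions on clean test data, and nothing in the training pipeline itself flags that anything happened.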

Why AI Won’t Cause Unemployment — from pmarca.substack.com by Marc Andreessen

Excerpt:

Normally I would make the standard arguments against technologically driven unemployment — see good summaries by Henry Hazlitt (chapter 7) and Frédéric Bastiat (his metaphor directly relevant to AI). And I will come back and make those arguments soon. But I don’t even think the standard arguments are needed, since another problem will block the progress of AI across most of the economy first.

Which is: AI is already illegal for most of the economy, and will be for virtually all of the economy.

How do I know that? Because technology is already illegal in most of the economy, and that is becoming steadily more true over time.

How do I know that? Because:


From DSC:
And for me, it boils down to an inconvenient truth: What’s the state of our hearts and minds?

AI, ChatGPT, Large Language Models (LLMs), and the like are tools. How we use such tools varies upon what’s going on in our hearts and minds. A fork can be used to eat food. It can also be used as a weapon. I don’t mean to be so blunt, but I can’t think of another way to say it right now.

  • Do we care about one another…really?
  • Has capitalism gone astray?
  • Have our hearts, our thinking, and/or our mindsets gone astray?
  • Do the products we create help or hurt others? It seems like too many times our perspective is, “We will sell whatever they will buy, regardless of its impact on others — as long as it makes us money and gives us the standard of living that we want.” Perhaps we could poll some former executives from Philip Morris on this topic.
  • Or we will develop this new technology because we can develop this new technology. Who gives a rat’s tail about the ramifications of it?

 

ChatGPT for Spanish Classrooms — from rdene915.com by Nicole Biscotti, M. Ed.

Excerpt:

ChatGPT is just what the busy Spanish teacher necesita – no wasted time searching for the perfect “lectura” (text). Effective language instruction is coupled with learning about culture and now I’m able to generate texts in seconds AND I can even center them around a Latin American country, cultural point of interest, holiday, grammatical structure, etc.  Differentiation and personalized learning, those lofty teaching ideals that can feel a bit heavy when you mean well but have 35 kids in your room, have become that much easier to attain with ChatGPT.  It’s possible to generate texts about diverse aspects of culture in seconds and make adjustments for interests, length, rigor, etc. (Kuo & Lai, 2006) (Salaberry, 1999; Rost, 2002).

Curating Your Classroom With 9 Must-Have Tools for Resource Collection – Easy EdTech Podcast 202 — from classtechtips.com by Monica Burns

Description:

How do you share resources with students? In this episode, we’ll focus on what happens after you find the very best resources to share with students. You’ll also hear about nine digital tools to help educators build a resource collection for students. So whether you have ten great resources on endangered species to share with your fourth graders or a dozen tutorial videos to share with your eleventh graders, this episode is for you!

50+ Useful AI Writing Tools to Know (2023) — from hongkiat.com

Excerpt:

AI writing tools generate content based on the keywords or prompt provided by users. You can then improve upon the output and make it suitable according to your own requirements.

There are different types of AI writing tools and in this post we are featuring some of the best ones. From content generators and editors to translators and typing assistants, there’s a whole gamut of AI-powered writing tools in the list. Take a look and see if one (or more) catches your interest.

How to Use Minecraft as a Teaching Tool — from intelligenthq.com

Excerpt:

Kids today have grown up with Minecraft, so it’s easy to get them enthusiastic about lessons using it. They can build anything they like, and use Minecraft skins to make the characters they create uniquely their own, getting them especially enthusiastic and involved in their lessons.

Teachers who learn how to use Minecraft as a teaching tool have found that it noticeably improves problem solving, creativity, and the ability to work together. It teaches both 21st century skills and timeless lessons.


On a somewhat related note, also see:


 

Introducing: ChatGPT Edu-Mega-Prompts — from drphilippahardman.substack.com by Dr. Philippa Hardman; with thanks to Ray Schroeder out on LinkedIn for this resource
How to combine the power of AI + learning science to improve your efficiency & effectiveness as an educator

From DSC:
Before relaying some excerpts, I want to say that I get the gist of what Dr. Hardman is saying re: quizzes. But I'm surprised to hear that she had so many pedagogical concerns with them. I, too, would like to see quizzes used as an instrument of learning and a way to practice recall, not just for assessment, but I would give quizzes a higher thumbs-up than she did. I think she was also trying to say that quizzes don't always identify misconceptions or inaccurate foundational information.

Excerpts:

The Bad News: Most AI technologies that have been built specifically for educators in the last few years and months imitate and threaten to spread the use of broken instructional practices (i.e. content + quiz).

The Good News: Armed with prompts that are carefully crafted to ask the right thing in the right way, educators can use AI like GPT-3 to improve the effectiveness of their instructional practices.

As is always the case, ChatGPT is your assistant. If you’re not happy with the result, you can edit and refine it using your expertise, either alone or through further conversation with ChatGPT.

For example, once the first response is generated, you can ask ChatGPT to make the activity more or less complex, to change the scenario and/or suggest more or different resources – the options are endless.
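Under the hood, that kind of back-and-forth is simply a growing conversation history. A minimal sketch, assuming OpenAI's chat-message format (the prompts themselves are invented for illustration):

```python
# Each refinement is just another turn appended to the running conversation,
# so the model keeps the full context of the activity being designed.
messages = [
    {"role": "user",
     "content": "Create a 10-minute group activity on photosynthesis."},
]

def refine(messages, assistant_reply, follow_up):
    """Record the model's reply, then add the educator's refinement request."""
    messages.append({"role": "assistant", "content": assistant_reply})
    messages.append({"role": "user", "content": follow_up})
    return messages

refine(messages, "(first draft of the activity)",
       "Make it less complex and move the scenario to a research station.")
```

In a real session this list would be sent back to the chat API (or continued in the ChatGPT interface) after each turn; the options for further refinement are, as noted above, endless.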

Philippa recommended checking out Rob Lennon’s streams of content. Here’s an example from his Twitter account:


Also relevant/see:

3 Trends That May Unlock AI’s Potential for L&D in 2023 — from learningguild.com by Juan Naranjo

Excerpts:

AI-assisted design and development work
This is the trend most likely to have a dramatic evolution this year.

Solutions like large language models, speech generators, content generators, image generators, translation tools, transcription tools, and video generators, among many others, will transform the way IDs create the learning experiences our organizations use. Two examples are:

1. IDs will be doing more curation and less creation:

  • Many IDs will start pulling raw material from content generators (built using natural language processing platforms like OpenAI's GPT-3, Microsoft's LUIS, IBM's Watson, Google's BERT, etc.) to obtain ideas and drafts that they can then clean up and add to the assets they are assembling. As the technology advances, the output from these platforms will be more suitable to become final drafts, and the curation and clean-up tasks will be faster and easier.
  • Then, the designer can leverage a solution like DALL-E 2 (or a product developed based on it) to obtain visuals that may or may not need further modification in programs like Illustrator or Photoshop (see image below for DALL-E's "Cubist interpretation of AI and brain science").

2. IDs will spend less time, and in some cases no time at all, creating learning pathways

AI engines contained in LXPs and other platforms will select the right courses for employees and guide these learners from their current level of knowledge and skill to their goal state with substantially less human intervention.
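As a toy illustration of what gap-based pathway selection means in practice (real LXP engines use far richer signals; the skills, course names, and matching logic here are invented):

```python
# Toy recommender: find the first catalog course that teaches a skill
# still missing between the learner's current state and their goal state.
def next_course(current_skills, goal_skills, catalog):
    gaps = [s for s in goal_skills if s not in current_skills]
    for skill in gaps:
        for course, taught in catalog.items():
            if skill in taught:
                return course
    return None  # goal state reached; no course needed

catalog = {
    "SQL Basics": {"sql"},
    "Dashboard Design": {"visualization"},
}
course = next_course({"excel"}, ["sql", "visualization"], catalog)
```

The point is the shape of the problem, not this particular heuristic: the engine compares a learner's current skills against a goal state and sequences courses to close the gap with minimal human intervention.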

 


The Creator of ChatGPT Thinks AI Should Be Regulated — from time.com by John Simons

Excerpts:

Somehow, Mira Murati can forthrightly discuss the dangers of AI while making you feel like it’s all going to be OK.

A growing number of leaders in the field are warning of the dangers of AI. Do you have any misgivings about the technology?

This is a unique moment in time where we do have agency in how it shapes society. And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out. How do you get the model to do the thing that you want it to do, and how do you make sure it's aligned with human intention and ultimately in service of humanity? There are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it's important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.


Whispers of A.I.’s Modular Future — from newyorker.com by James Somers; via Sam DeBrule

Excerpts:

Gerganov adapted it from a program called Whisper, released in September by OpenAI, the same organization behind ChatGPT and DALL-E. Whisper transcribes speech in more than ninety languages. In some of them, the software is capable of superhuman performance—that is, it can actually parse what somebody’s saying better than a human can.

Until recently, world-beating A.I.s like Whisper were the exclusive province of the big tech firms that developed them.

Ever since I’ve had tape to type up—lectures to transcribe, interviews to write down—I’ve dreamed of a program that would do it for me. The transcription process took so long, requiring so many small rewindings, that my hands and back would cramp. As a journalist, knowing what awaited me probably warped my reporting: instead of meeting someone in person with a tape recorder, it often seemed easier just to talk on the phone, typing up the good parts in the moment.

From DSC:
Journalism majors — and even seasoned journalists — should keep an eye on this type of application, as it will save them a significant amount of time and/or money.

Microsoft Teams Premium: Cut costs and add AI-powered productivity — from microsoft.com by Nicole Herskowitz

Excerpt:

Built on the familiar, all-in-one collaborative experience of Microsoft Teams, Teams Premium brings the latest technologies, including Large Language Models powered by OpenAI’s GPT-3.5, to make meetings more intelligent, personalized, and protected—whether it’s one-on-one, large meetings, virtual appointments, or webinars.


 


Meet MathGPT: a Chatbot Tutor Built Specific to a Math Textbook — from thejournal.com by Kristal Kuykendall

Excerpt:

Micro-tutoring platform PhotoStudy has unveiled a new chatbot built on OpenAI’s ChatGPT APIs that can teach a complete elementary algebra textbook with “extremely high accuracy,” the company said.

“Textbook publishers and teachers can now transform their textbooks and teaching with a ChatGPT-like assistant that can teach all the material in a textbook, assess student progress, provide personalized help in weaker areas, generate quizzes with support for text, images, audio, and ultimately a student customized avatar for video interaction,” PhotoStudy said in its news release.

Some sample questions the MathGPT tool can answer:

    • “I don’t know how to solve a linear equation…”
    • “I have no idea what’s going on in class but we are doing Chapter 2. Can we start at the top?”
    • “Can you help me understand how to solve this mixture of coins problem?”
    • “I need to practice for my midterm tomorrow, through Chapter 6. Help.”
 
© 2024 | Daniel Christian