Introducing: ChatGPT Edu-Mega-Prompts — from drphilippahardman.substack.com by Dr. Philippa Hardman; with thanks to Ray Schroeder out on LinkedIn for this resource
How to combine the power of AI + learning science to improve your efficiency & effectiveness as an educator

From DSC:
Before relaying some excerpts, I want to say that I get the gist of what Dr. Hardman is saying re: quizzes, but I'm surprised she has so many pedagogical concerns about them. I, too, would like to see quizzes used as an instrument of learning and a way to practice recall — and not just for assessment. Still, I would give quizzes a higher thumbs-up than she does. I think she is also saying that quizzes don't always surface misconceptions or inaccurate foundational information.

Excerpts:

The Bad News: Most AI technologies that have been built specifically for educators in the last few years and months imitate and threaten to spread the use of broken instructional practices (i.e. content + quiz).

The Good News: Armed with prompts which are carefully crafted to ask the right thing in the right way, educators can use AI like GPT-3 to improve the effectiveness of their instructional practices.

As is always the case, ChatGPT is your assistant. If you’re not happy with the result, you can edit and refine it using your expertise, either alone or through further conversation with ChatGPT.

For example, once the first response is generated, you can ask ChatGPT to make the activity more or less complex, to change the scenario and/or suggest more or different resources – the options are endless.
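For readers who want to go beyond the chat interface, here is a rough sketch of that generate-then-refine loop using the OpenAI Python client. The model name, prompts, and instructional scenario below are illustrative assumptions (not Dr. Hardman's prompts); the same back-and-forth works just as well inside the ChatGPT window itself.

```python
# A rough sketch of the "generate, then refine" workflow described above.
# Assumes the openai Python package (>= 1.0) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are an experienced instructional designer."},
    {"role": "user", "content": (
        "Design a 30-minute active-learning activity on supply and demand "
        "for first-year undergraduates. Include a scenario, steps, and two "
        "retrieval-practice questions."
    )},
]

first_draft = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(first_draft.choices[0].message.content)

# Keep the conversation going: ask for a more (or less) complex version,
# a different scenario, or additional resources.
messages.append({"role": "assistant", "content": first_draft.choices[0].message.content})
messages.append({"role": "user", "content": (
    "Make the activity more complex and set the scenario in a local farmers' market."
)})

revision = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(revision.choices[0].message.content)
```

Either way, the educator's expertise still drives the final product; the model only supplies drafts to react to.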

Philippa recommended checking out Rob Lennon’s streams of content. Here’s an example from his Twitter account:


Also relevant/see:


3 Trends That May Unlock AI’s Potential for L&D in 2023 — from learningguild.com by Juan Naranjo

Excerpts:

AI-assisted design and development work
This is the trend most likely to have a dramatic evolution this year.

Solutions like large language models, speech generators, content generators, image generators, translation tools, transcription tools, and video generators, among many others, will transform the way IDs create the learning experiences our organizations use. Two examples are:

1. IDs will be doing more curation and less creation:

  • Many IDs will start pulling raw material from content generators (built using natural language processing platforms like OpenAI's GPT-3, Microsoft's LUIS, IBM's Watson, Google's BERT, etc.) to obtain ideas and drafts that they can then clean up and add to the assets they are assembling. As technology advances, the output from these platforms will become more suitable as final drafts, and the curation and clean-up tasks will be faster and easier.
  • Then, the designer can leverage a solution like DALL-E 2 (or a product developed based on it) to obtain visuals that may or may not be modified further with programs like Illustrator or Photoshop (see the image below for DALL-E's "Cubist interpretation of AI and brain science").

2. IDs will spend less time, and in some cases no time at all, creating learning pathways

AI engines contained in LXPs and other platforms will select the right courses for employees and guide these learners from their current level of knowledge and skill to their goal state with substantially less human intervention.
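A hedged sketch of the curation workflow in point 1 above (a rough text draft from a language model, plus a candidate visual from an image model) might look like the following. The model names, prompts, and course topic are illustrative assumptions, and the OpenAI Python client is only one of many possible tools.

```python
# Minimal sketch of the "curate, don't create" workflow: a rough text draft
# plus a candidate visual, both for a human designer to review and polish.
# Assumes the openai Python package (>= 1.0) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

# 1. Rough text draft for the instructional designer to clean up and curate.
draft = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": (
        "Draft a one-paragraph plain-language explanation of phishing "
        "for a corporate security awareness course."
    )}],
)
print(draft.choices[0].message.content)

# 2. A candidate visual the designer can keep, discard, or touch up in
#    Illustrator/Photoshop.
image = client.images.generate(
    model="dall-e-2",
    prompt="Flat illustration of an employee spotting a suspicious phishing email",
    n=1,
    size="1024x1024",
)
print(image.data[0].url)
```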

 


The Creator of ChatGPT Thinks AI Should Be Regulated — from time.com by John Simons

Excerpts:

Somehow, Mira Murati can forthrightly discuss the dangers of AI while making you feel like it’s all going to be OK.

A growing number of leaders in the field are warning of the dangers of AI. Do you have any misgivings about the technology?

This is a unique moment in time where we do have agency in how it shapes society. And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out. How do you get the model to do the thing that you want it to do, and how you make sure it’s aligned with human intention and ultimately in service of humanity? There are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.


Whispers of A.I.’s Modular Future — from newyorker.com by James Somers; via Sam DeBrule

Excerpts:

Gerganov adapted it from a program called Whisper, released in September by OpenAI, the same organization behind ChatGPT and DALL-E. Whisper transcribes speech in more than ninety languages. In some of them, the software is capable of superhuman performance—that is, it can actually parse what somebody’s saying better than a human can.

Until recently, world-beating A.I.s like Whisper were the exclusive province of the big tech firms that developed them.

Ever since I’ve had tape to type up—lectures to transcribe, interviews to write down—I’ve dreamed of a program that would do it for me. The transcription process took so long, requiring so many small rewindings, that my hands and back would cramp. As a journalist, knowing what awaited me probably warped my reporting: instead of meeting someone in person with a tape recorder, it often seemed easier just to talk on the phone, typing up the good parts in the moment.

From DSC:
Journalism majors — and even seasoned journalists — should keep an eye on this type of application, as it will save them a significant amount of time and/or money.
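For anyone who wants to try this today, a minimal sketch with the open-source openai-whisper package looks something like the following. The model size and file name are placeholders; whisper.cpp, the port Gerganov built, offers a similar workflow without Python.

```python
# Minimal transcription sketch using the open-source openai-whisper package.
# Install with: pip install -U openai-whisper  (ffmpeg must also be installed)
import whisper

model = whisper.load_model("base")          # "small", "medium", etc. trade speed for accuracy
result = model.transcribe("interview.mp3")  # placeholder file; language is auto-detected
print(result["text"])                       # the full transcript as one string
```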

Microsoft Teams Premium: Cut costs and add AI-powered productivity — from microsoft.com by Nicole Herskowitz

Excerpt:

Built on the familiar, all-in-one collaborative experience of Microsoft Teams, Teams Premium brings the latest technologies, including Large Language Models powered by OpenAI’s GPT-3.5, to make meetings more intelligent, personalized, and protected—whether it’s one-on-one, large meetings, virtual appointments, or webinars.


 


 

Canary in the coal mine for coding bootcamps? — from theview.substack.com by gordonmacrae; with thanks to Mr. Ryan Craig for this resource

Excerpt:

If you run a software development bootcamp, a recent Burning Glass Institute report should keep you awake at night.

The report, titled How Skills Are Disrupting Work, looks at a decade of labor market analysis and identifies how digital skill training and credentials have responded to new jobs.

Three trends stuck out to me:

  • The most future-proof skills aren’t technical
  • Demand for software development is in decline
  • One in eight postings feature just four skill sets

These three trends should sound a warning for software development bootcamps, in particular. Let’s see why, and how you can prepare to face the coming challenges.


Also relevant/see:

Issue #14: Trends in Bootcamps — from theview.substack.com by gordonmacrae

Excerpt:

Further consolidation of smaller providers seems likely to continue in 2023. A number of VC-backed providers will run out of money.

A lot of bootcamps will be available cheaply for any larger providers, or management companies. Growth will continue to be an option in the Middle East, as funding doesn’t look like drying up any time soon. And look for the larger bootcamps to expand into hire-train-deploy, apprenticeships or licensing.

As Alberto pointed out this week, it’s hard for bootcamps to sustain the growth trajectory VCs expect. But there are other options available.


 

Top 6 VR learning trends in 2023 — from chieflearningofficer.com by Anders Gronstedt

Excerpt:

From virtual to mixed reality
A new class of “mixed reality” headsets will launch in 2023, promising to break the barriers between the real and virtual worlds. A few months ago, Meta launched a developer version of this technology, called “Quest Pro,” which superimposes computer-generated images onto the real world around us. The mass-market version of this headset, Quest 3, will hit the shelves this fall for $400. Meanwhile, Apple is rumored to finally premiere a more premium mixed-reality headset this spring. This could be an important step toward a vision of true augmented reality glasses that is still years away.

The new year will see the mainstream adoption of VR for safety, equipment operations and service, logistics, manufacturing, emergency response and health care training. Other applications will take several more years to mature. Current XR technology is not ripe for soft skills training, IT systems training, conferences and all-purpose collaboration (Meta can’t even get its own employees to hold regular meetings in VR). Procedural hands-on training simulations will be the edge of the new frontier of XR learning in the new year.

 

Educator considerations for ChatGPT — from platform.openai.com; with thanks to Anna Mills for this resource

Excerpt:

Streamlined and personalized teaching
Some examples of how we’ve seen educators exploring how to teach and learn with tools like ChatGPT:

  • Drafting and brainstorming for lesson plans and other activities
  • Help with design of quiz questions or other exercises
  • Experimenting with custom tutoring tools
  • Customizing materials for different preferences (simplifying language, adjusting to different reading levels, creating tailored activities for different interests)
  • Providing grammatical or structural feedback on portions of writing
  • Use in upskilling activities in areas like writing and coding (debugging code, revising writing, asking for explanations)
  • Critique AI-generated text

While several of the above draw on ChatGPT’s potential to be explored as a tool for personalization, there are risks associated with such personalization as well, including student privacy, biased treatment, and development of unhealthy habits. Before students use tools that offer these services without direct supervision, they and their educators should understand the limitations of the tools outlined below.
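As one concrete illustration of the quiz-question item above (and of the need for human supervision), here is a hedged sketch that asks a model for retrieval-practice questions in a structured form an educator can review and edit. The model name, prompt, and JSON shape are assumptions for illustration only.

```python
# Illustrative sketch: draft retrieval-practice questions for human review.
# Assumes the openai Python package (>= 1.0) and an OPENAI_API_KEY env var.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any capable chat model works
    messages=[{
        "role": "user",
        "content": (
            "Write three retrieval-practice questions on photosynthesis for a "
            "9th-grade biology class. Return only a JSON list of objects with "
            "'question', 'answer', and 'common_misconception' fields."
        ),
    }],
)

raw = response.choices[0].message.content
try:
    questions = json.loads(raw)   # the model is not guaranteed to return valid JSON
except json.JSONDecodeError:
    questions = raw               # fall back to the raw text
print(questions)                  # either way, the educator reviews and edits the draft
```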

Also relevant/see:

Excerpt (emphasis DSC):
David Wiley wrote a thoughtful post on the ways in which AI and Large Language Models (LLMs) can “provide instructional designers with first drafts of some of the work they do.” He says “imagine you’re an instructional designer who’s been paired with a faculty member to create a course in microeconomics. These tools might help you quickly create first drafts of” learning outcomes, discussion prompts, rubrics, and formative assessment items. The point is that LLMs can quickly generate rough drafts that are mostly accurate, which humans can then “review, augment, and polish,” potentially shifting the work of instructional designers from authors to editors. The post is well worth your time.

The question that I’d like to spend some time thinking about is the following: What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?

This resonated with me. Instructional Designer positions are starting to require AI and ML chops. I’m introducing my grad students to AI and ChatGPT this semester. I have an assignment based on it.

(This ain’t your father’s instructional design…)

Robert Gibson


 

The practical guide to using AI to do stuff — from oneusefulthing.substack.com by Ethan Mollick; with thanks to Sam DeBrule for this resource. Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania where he studies entrepreneurship & innovation, as well as how we can better learn and teach.
A resource for students in my classes (and other interested people).

Excerpts:

My classes now require AI (and if I didn’t require AI use, it wouldn’t matter; everyone is using AI anyway). But how can students use AI well? Here is a basic tutorial and guide I am providing my classes. It covers some of the many ways to use AI to be more productive, creative, and successful, using the technology available in early 2023, as well as some of the risks.

Come up with ideas
  • Open Source Option: Nothing very good
  • Best free (for now) option: ChatGPT (registration may require a phone number)
  • Best option if ChatGPT is down: OpenAI Playground


Also relevant/see:

ChatGPT for educators — a free 17-lesson course

 



On a relevant note:

Gen Z says school is not equipping them with the skills they need to survive in a digital world — from fastcompany.com by Shalene Gupta; with thanks to Robert Gibson for this resource
According to a study from Dell Technologies, Gen Z-ers in 15 different countries feel their government could do better.

Excerpt:

They see an education and skills gap: Forty-four percent said that school only taught them very basic computing skills, while 37% said that school education (for children under age 16) didn’t prepare them with the technology skills they needed for their planned careers. Forty percent consider learning new digital skills essential to future career options.

It’s clear that Gen Z see technology as pivotal for their future prosperity. It is now up to us—leading technology providers, governments, and the public sector—to work together and set them up for success by improving the quality of, and access to, digital learning. Forty-four percent of Gen Z feel educators and businesses should work together to bridge the digital skills gap, and with the speed at which technology continues to evolve, this will require constant collaboration.

Aongus Hegarty, president of international markets at Dell Technologies


 

 

Revolutionising Criminal Law with AI — from seotraininglondon.org by Danny Richman
This case study outlines how I helped a law firm use Artificial Intelligence (AI) to streamline new client enquiries, resulting in significant savings of time and money.

Excerpt:

However, this process took up a lot of time and resources, meaning that highly qualified, well-paid individuals had to dedicate their time and energy to processing email enquiries instead of working on client cases.

That’s why I developed an app for Stuart Miller built on OpenAI’s GPT-3 technology. This app receives the content of the client’s email and makes the same determination as the human team of lawyers. It then immediately alerts the relevant lawyer to any enquiries flagged as high-priority, high-value cases. The entire process is automated, requiring no human interaction.
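The case study doesn’t publish the app’s actual prompt or plumbing, but a heavily simplified sketch of the same triage idea could look like the one below. It uses OpenAI’s chat endpoint rather than the original GPT-3 completion API, and the classification criteria, model, and routing step are made-up assumptions.

```python
# Hypothetical sketch of an enquiry-triage step; the criteria, model, and
# routing are assumptions, not details from the app described above.
# Assumes the openai Python package (>= 1.0) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

def triage_enquiry(email_body: str) -> str:
    """Classify an incoming client enquiry as HIGH or LOW priority."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                "You triage email enquiries for a criminal defence firm. "
                "Reply with exactly one word, HIGH or LOW, based on urgency "
                "and likely case value."
            )},
            {"role": "user", "content": email_body},
        ],
    )
    return response.choices[0].message.content.strip().upper()

if triage_enquiry("I was arrested last night and my hearing is on Monday.") == "HIGH":
    print("Alert the duty solicitor")  # in a real app: email/Slack the relevant lawyer
```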

From DSC:
Hmmm…something to keep on the radar.


Also relevant/see:

Here’s Why Lawyers Are Paying Attention to ChatGPT — from legallydisrupted.com by Zach Abramowitz
AI Will Continue to Be a Talking Point Throughout the Year

Excerpts:

Ready to get disrupted? Me neither, but let’s take the plunge.

ChatGPT is all anyone in legal wants to talk about right now, and for good reason.

Smash cut to yesterday: this webinar focusing on ChatGPT sold out, and the sheer number of questions from the audience (which ranged from law students to in-house counsel and law firm partners) was more than 10x that of a normal webinar.

The point is that I’m not in a bubble this time. Everyone in legal is paying attention to ChatGPT, not just the legaltech nerds. This @#$% is going mainstream.


 
 

Also relevant/see:

 

Top edtech trends in 2023 and the ASU example — from news.asu.edu

Excerpt:

In spite of our tendency to break things down into tidy time frames, like a new year or academic semester, change constantly turns over the status quo, especially in the world of technology, where disruptive innovation may evolve rapidly from the fringe to the mainstream.

“At ASU’s Enterprise Technology, we work in spaces where technology is not just revolutionizing higher education, but the world at large,” said Lev Gonick, chief information officer at Arizona State University. “We strive to be proactive, not reactive, to new paradigms changing the ways in which we work, learn and thrive.”

As referenced by the above article:

Thus, the top higher education technology trends to watch out for in 2023 include Artificial Intelligence (AI), Virtual Reality (VR), Augmented Reality (AR), Digital Twins, the Metaverse (including digital avatars and NFT art for use in the Metaverse and other Web3-based virtual environments), Internet of Things (IoT), Blockchain, Cloud, Gamification, and Chatbots. These technologies will support the expansion of the Digital Transformation of higher education going forward.

Also relevant/see:

 

 

Microsoft Plans to Build OpenAI, ChatGPT Features Into All Products — from wsj.com by Sam Schechner (behind paywall)
Offerings for businesses and end users to be transformed by incorporating tools like ChatGPT, CEO Satya Nadella says

Excerpt:

DAVOS, Switzerland—Microsoft Corp. plans to incorporate artificial-intelligence tools like ChatGPT into all of its products and make them available as platforms for other businesses to build on, Chief Executive Satya Nadella said.

It’s a matter of time before the LMSs like Canvas and Anthology do the same. Really going to change the complexion of online learning.

Jared Stein; via Robert Gibson on LinkedIn

Also relevant/see:

Donald Clark’s thoughts out on LinkedIn re: Google and AI

Excerpt:

Microsoft are holding a lot of great cards in the AI game, especially ChatGPT, but Google also have a great hand; in fact, they have a bird in the hand:

Sparrow, from DeepMind, is likely to launch soon. Their aim is to trump ChatGPT by having a chatbot that is more useful and reduces the risk of unsafe and inappropriate answers. In the released paper, they also indicate that it will have moral constraints. Smart move.

Hassabis has promised some sort of release in 2023. Their goal is to reduce wrong and invented information by linking it to Google Search and Scholar for citations.

Donald Clark’s thoughts re: Apple’s strategy for AI — from donaldclarkplanb.blogspot.com

Wonder Tools: 7 Ways to Use ChatGPT — from wondertools.substack.com by Jeremy Caplan

Excerpt:

4 recommended ChatGPT resources

  • The Art of ChatGPT Prompting: A Guide to Crafting Clear and Effective Prompts
    This free e-book acts as a useful guide for beginners.
  • Collection of ChatGPT Resources
    Use ChatGPT in Google Docs, WhatsApp, as a desktop app, with your voice, or in other ways with this running list of tools.
  • Awesome ChatGPT prompts
    Dozens of clever pre-written prompts you can use to initiate your own conversations with ChatGPT to get it to reply as a fallacy finder or a journal reviewer or whatever else.
  • Writing for Renegades – Co-writing with AI
    This free 17-page resource has writing exercises you can try with ChatGPT. It also includes interesting nuggets, like Wycliffe A. Hill’s 1936 attempt at writing automation, Plot Genie.

 


We often see the battle between technology and humans as a zero-sum game. And that’s how much of the discussion about ChatGPT is being framed now. Like many others who have been experimenting with ChatGPT in recent weeks, I find that a lot of the output depends on the input. In other words, the better the human question, the better the ChatGPT answer.

So instead of seeing ourselves competing with technology, we should find ways to complement it and view ChatGPT as a tool that assists us in collecting information and in writing drafts.

If we reframe the threat, think about how much time can be freed up to read, to think, to write?

As many have noted, including Michael Horn on the Class Disrupted podcast he co-hosts, ChatGPT is to writing what calculators were once to math and other STEM disciplines. 

Jeff Selingo: ‘The Calculator’ for a New Generation?

 


GPT in Higher Education — from insidehighered.com by Ray Schroeder
ChatGPT has caught our attention in higher education. What will it mean in 2023?

Excerpt:

Founder and CEO at Moodle Martin Dougiamas writes in Open Ed Tech that as educators, we must recognize that artificial general intelligence will become ubiquitous. “In short, we need to embrace that AI is going to be a huge part of our lives when creating anything. There is no gain in banning it or avoiding it. It’s actually easier (and better) to use this moment to restructure our education processes to be useful and appropriate in today’s environment (which is full of opportunities).”

Who, at your institution, is examining the impact of AI, and in particular GPT, upon the curriculum? Are instructional designers working with instructors in revising syllabi and embedding AI applications into the course offerings? What can you do to ensure that your university is preparing learners for the future rather than the past?

Ray Schroeder

ChatGPT Advice Academics Can Use Now — from insidehighered.com by Susan D’Agostino
To harness the potential and avert the risks of OpenAI’s new chat bot, academics should think a few years out, invite students into the conversation and—most of all—experiment, not panic. 

Alarmed by AI Chatbots, Universities Start Revamping How They Teach — from The New York Times (out at Yahoo) by Kalley Huang

Excerpt:

At schools including George Washington University in Washington, D.C., Rutgers University in New Brunswick, New Jersey, and Appalachian State University in Boone, North Carolina, professors are phasing out take-home, open-book assignments — which became a dominant method of assessment in the pandemic but now seem vulnerable to chatbots. They are instead opting for in-class assignments, handwritten papers, group work and oral exams.

Gone are prompts like “write five pages about this or that.” Some professors are instead crafting questions that they hope will be too clever for chatbots and asking students to write about their own lives and current events.

With ChatGPT, Teachers Can Plan Lessons, Write Emails, and More. What’s the Catch? — from edweek.org by Madeline Will  (behind paywall)

Why Banning ChatGPT in Class Is a Mistake — from campustechnology.com by Thomas Mennella
Artificial intelligence can be a valuable learning tool, if used in the right context. Here are ways to embrace ChatGPT and encourage students to think critically about the content it produces.



Let the Lawsuits Against Generative AI Begin! — from legallydisrupted.com by Zach Abramowitz
Getty Sues Stability AI as Lawsuits Mount Against GenAI Companies

Excerpt:

Well, it was bound to happen. Anytime you have a phenomenon as disruptive as generative AI, you can expect lawsuits.

Case in point: the lawsuit recently filed by Getty Images against Stability AI, which highlights the ongoing legal challenges posed by the use of AI in the creative industries. And it’s not the only lawsuit recently filed; see, e.g., “Now artists sue AI image generation tools Stable Diffusion, Midjourney over copyright” from The Indian Express.



 

14 Technology Predictions for Higher Education in 2023 — from campustechnology.com by Rhea Kelly
How will technologies and practices like artificial intelligence, predictive analytics, digital transformation, and change management impact colleges and universities this year? Here’s what the experts told us.

Excerpt:

In an open call on LinkedIn, we asked higher education and ed tech industry leaders to forecast the most important trends to watch in the coming year. Their responses reflect both the challenges on the horizon — persistent cyber attacks, the disruptive force of emerging technologies, failures in project management — as well as the opportunities that technology brings to better serve students and support the institutional mission. Here are 14 predictions to help steer your technology efforts in 2023.

 

ChatGPT Creator Is Talking to Investors About Selling Shares at $29 Billion Valuation — from wsj.com by Berber Jin and Miles Kruppa
Tender offer at that valuation would make OpenAI one of the most valuable U.S. startups

Here’s how Microsoft could use ChatGPT — from The Algorithm by Melissa Heikkilä

Excerpt (emphasis DSC):

Microsoft is reportedly eyeing a $10 billion investment in OpenAI, the startup that created the viral chatbot ChatGPT, and is planning to integrate it into Office products and Bing search. The tech giant has already invested at least $1 billion into OpenAI. Some of these features might be rolling out as early as March, according to The Information.

This is a big deal. If successful, it will bring powerful AI tools to the masses. So what would ChatGPT-powered Microsoft products look like? We asked Microsoft and OpenAI. Neither was willing to answer our questions on how they plan to integrate AI-powered products into Microsoft’s tools, even though work must be well underway to do so. However, we do know enough to make some informed, intelligent guesses. Hint: it’s probably good news if, like me, you find creating PowerPoint presentations and answering emails boring.

And speaking of Microsoft and AI, also see:

I have maintained for several years, including a book ‘AI for Learning’, that AI is the technology of the age and will change everything. This is unfolding as we speak but it is interesting to ask who the winners are likely to be.

Donald Clark

The Expanding Dark Forest and Generative AI — from maggieappleton.com by Maggie Appleton
Proving you’re a human on a web flooded with generative AI content

Assumed audience:

People who have heard of GPT-3 / ChatGPT, and are vaguely following the advances in machine learning, large language models, and image generators. Also people who care about making the web a flourishing social and intellectual space.

That dark forest is about to expand. Large Language Models (LLMs) that can instantly generate coherent swaths of human-like text have just joined the party.

 

DeepMind CEO Demis Hassabis Urges Caution on AI — from time.com by Billy Perrigo

It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things.”

“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.

Demis Hassabis 

Excerpt (emphasis DSC):

Hassabis says these efforts are just the beginning. He and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.

But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets.

AI-assisted plagiarism? ChatGPT bot says it has an answer for that — from theguardian.com by Alex Hern
Silicon Valley firm insists its new text generator, which writes human-sounding essays, can overcome fears over cheating

Excerpt:

Headteachers and university lecturers have expressed concerns that ChatGPT, which can provide convincing human-sounding answers to exam questions, could spark a wave of cheating in homework and exam coursework.

Now, the bot’s makers, San Francisco-based OpenAI, are trying to counter the risk by “watermarking” the bot’s output and making plagiarism easier to spot.

Schools Shouldn’t Ban Access to ChatGPT — from time.com by Joanne Lipman and Rebecca Distler

Excerpt (emphasis DSC):

Students need now, more than ever, to understand how to navigate a world in which artificial intelligence is increasingly woven into everyday life. It’s a world that they, ultimately, will shape.

We hail from two professional fields that have an outsize interest in this debate. Joanne is a veteran journalist and editor deeply concerned about the potential for plagiarism and misinformation. Rebecca is a public health expert focused on artificial intelligence, who champions equitable adoption of new technologies.

We are also mother and daughter. Our dinner-table conversations have become a microcosm of the argument around ChatGPT, weighing its very real dangers against its equally real promise. Yet we both firmly believe that a blanket ban is a missed opportunity.

ChatGPT: Threat or Menace? — from insidehighered.com by Steven Mintz
Are fears about generative AI warranted?

And see Joshua Kim’s A Friendly Attempt to Balance Steve Mintz’s Piece on Higher Ed Hard Truths out at insidehighered.com | Comparing the health care and higher ed systems.

 



What Leaders Should Know About Emerging Technologies — from forbes.com by Benjamin Laker

Excerpt (emphasis DSC):

The rapid pace of change is driven by a “perfect storm” of factors, including the falling cost of computing power, the rise of data-driven decision-making, and the increasing availability of new technologies. “The speed of current breakthroughs has no historical precedent,” concluded Andrew Doxsey, co-founder of Libra Incentix, in an interview. “Unlike previous technological revolutions, the Fourth Industrial Revolution is evolving exponentially rather than linearly. Furthermore, it disrupts almost every industry worldwide.”

I asked ChatGPT to write my cover letters. 2 hiring managers said they would have given me an interview but the letters lacked personality. — from businessinsider.com by Beatrice Nolan

Key points:

  • An updated version of the AI chatbot ChatGPT was recently released to the public.
  • I got the chatbot to write cover letters for real jobs and asked hiring managers what they thought.
  • The managers said they would’ve given me a call but that the letters lacked personality.




 

Is your Law Firm Ready for Continued Virtual Legal Proceedings? — from jdsupra.com

Excerpt:

For 2023, one trend is obvious: legal professionals prefer remote work. According to an ABA report on the future of the profession, 87% of lawyers say their workplaces allow them to work remotely. And in just a few years, the percentage of attorneys working exclusively in the office has dropped to less than 30%.

Also relevant/see:

The Metaverse: What Is It? How Does It Affect Law Firms? — by Annette Choti
A new set of legal issues and advertising opportunities.

Excerpt:

Law Firms And The Metaverse
Since the Metaverse is so new, it will continue to develop and change. Many kinds of legal issues and implications have not yet been uncovered. The Metaverse will likely create various legal challenges in the future. This creates a new legal landscape for law firms and lawyers.

Those who anticipate the questions and challenges that may arise will be able to take advantage of this new digital market. Here are some ways a law firm can capitalize on the virtual realities of the Metaverse:

From DSC:
My point in posting this item about “The Metaverse” is not to say that it’s here…but to be sure that it’s on your legal radar. There will be enough legal ramifications of AI to last a while, but I would still recommend that someone in your firm keep an eye on emerging technologies — not only those your firm might leverage, but also the kinds of legal issues your lawyers will need to be up to speed on.

 