Disconnect grows between law firm service and client expectation, survey finds — from legaltechnology.com by Caroline Hill

Over 90% of in-house counsel and three-quarters of private practice lawyers said that the legal sector is slow to embrace data, technology and new delivery models – a significant increase on the 64% who felt the same way last year.

In terms of the disconnect, 96% of in-house counsel agreed with the statement that what law firms provide is out of kilter with what clients expect. A large majority (78%) of private practice lawyers agreed as well.


ndaOK Unleashes Next-Level Efficiency in Legal Tech with GPT-4 Powered NDA Review — from legaldive.com by Christina Pennell
AI-driven legal tool promises to reduce NDA review times by over 90%

AUSTIN, Tex. — ndaOK, an innovator in AI-powered legal technology, today announces the launch of its next-generation non-disclosure agreement (NDA) review system. This advanced solution leverages OpenAI’s GPT-4 large multimodal model, a first among legal technology companies, offering unprecedented performance and efficiency in reviewing NDAs.

Capitalizing on the computational power and versatility of GPT-4, ndaOK accurately reviews and edits documents based on a user’s pre-determined requirements without the need for human assistance or input. This unique capability makes ndaOK faster and easier to deploy than any other AI-based contract review solution.


And here are two relevant postings that I missed a while back:

ANALYSIS: Meet the Law Schools Leading the Way in Innovation — from news.bloomberglaw.com by Francis Boustany

As law firms, businesses, and their clients adapt to the new realities of the legal and business worlds, law schools must prepare students in new ways—beyond traditional law school curricula and teaching methods—to give students an experience and education that better prepares them for their post-graduation careers.

Bloomberg Law launched its inaugural Law School Innovation Program as a means of promoting, acknowledging, and connecting the law schools that are innovating in the legal education space and providing their students with new ways of learning the law.

In this reimagined version of law school, students are taught to have ‘an entrepreneur’s mind’ — from fastcompany.com by Grace Buono
The University of Richmond program asks students not only to think like lawyers, but as entrepreneurs.

The demand for good lawyers is nothing new and, each year, law schools churn out graduates equipped with virtually the same skills as decades of law students before them. But one school is trying to change that.

The University of Richmond’s Legal Business Design Hub has students focus on the strategy, design, and operations of legal services in addition to their regular coursework, applying what the program calls “an entrepreneur’s mind” to their studies.

 

Generative AI and the future of work in America — from mckinsey.com by Kweilin Ellingrud, Saurabh Sanghvi, Gurneet Singh Dandona, Anu Madgavkar, Michael Chui, Olivia White, and Paige Hasebe

At a glance

  • During the pandemic (2019–22), the US labor market saw 8.6 million occupational shifts, 50 percent more than in the previous three-year period.
  • By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated—a trend accelerated by generative AI.
  • Federal investment to address climate and infrastructure, as well as structural shifts, will also alter labor demand.
  • An additional 12 million occupational transitions may be needed by 2030.
  • The United States will need workforce development on a far larger scale as well as more expansive hiring approaches from employers.

Employers will need to hire for skills and competencies rather than credentials, recruit from overlooked populations (such as rural workers and people with disabilities), and deliver training that keeps pace with their evolving needs.


The AI-Powered, Totally Autonomous Future of War Is Here — from wired.com by Will Knight
Ships without crews. Self-directed drone swarms. How a US Navy task force is using off-the-shelf robotics and artificial intelligence to prepare for the next age of conflict.

From DSC:
Hhhhmmmmm…..not good. Is anyone surprised by this? No, I didn’t think so either. That’s why the United States and China are so heated up about semiconductor chips.


AI puts glitch in graduates’ employment plans — from hrdive.com by Ginger Christ
Recent grads are worried how AI will affect their career prospects, a new survey found.

Excerpt:

  • The proliferation of new technologies like generative artificial intelligence is making recent graduates uneasy, a new study released Thursday found. A third of the 1,000 people who graduated in the past year said they are second-guessing their career choice, while roughly half reported questioning their workforce preparedness and feeling threatened by AI, according to the 2023 Employability Report by Cengage Group, a global education technology company.

“The workplace has changed rapidly in the last few years, and now we are witnessing a new shift as AI begins to reshape worker productivity, job requirements, hiring habits and even entire industries,” Michael Hansen, Cengage Group CEO, said in a news release. 

Along these lines, also see:

AI Boom Creates Concerns for Recent Graduates — from insidehighered.com by Lauren Coffey

More than half of recent graduates question whether they are properly prepared for the workforce in light of the rise of artificial intelligence, a survey finds.

There is also a growing preference for skills training credentials. Among employers, nearly 40 percent said skills training credentials are most important, while only 19 percent ranked a college degree as most important.

However, recent graduates did cite concerns about higher education institutions’ ability to teach employability skills. In 2023, 43 percent of students said their degree program taught them the necessary skills for their first job, down 20 percentage points from 2022.


Instructure, Khan Academy Announce Major Partnership On AI Tutoring, Teaching — from forbes.com by Derek Newton

The news is that Instructure, one of the few public education companies and the market leader in learning management with their signature product Canvas, struck a partnership with Khan Academy to create an AI-powered tutoring and teaching assistant tool – merging Khan’s innovative instructional content and Instructure’s significant reach, scale, and data insights. The partnership and related tools will be known as Khanmigo, according to the announcement.

On brand names alone, this is a big deal. On potential impact, it could be even bigger.


How To Use AI to Write Scenarios — from christytuckerlearning.com by Christy Tucker
How can you use AI to write scenarios for learning? Read this example with prompts and results using ChatGPT and Bard.

Excerpts:

So far, I have found these tools helpful in generating ideas, writing first drafts, and summarizing. They work better for general knowledge tasks than really specific topics unless I provide more details to them, which makes sense.

This post isn’t going to give you “5 magical prompts to instantly write scenarios for you” or anything like that. Instead, this is a “working out loud” post where I’ll share some prompts I have used.

Christy’s posting includes:

  1. “The Meeting from Hell”
  2. “The Backstabbing Coworker”
  3. “The Boss from Hell”
  4. “The Office Romance Gone Wrong”
  5. “The New Hire with Attitude”

Some potential tools for you to check out:



The Rise of the Talent Economy — from drphilippahardman.substack.com by Dr. Philippa Hardman
How Education & Training Will Dictate the Future & Impact of AI

“Talent, more than capital, will represent the critical factor of production.”

In short, the demand for AI skills requires a significant transformation in training and education models. To bridge the global skills gap, educational institutions, online learning providers, and employers must design and deliver training programs that cater to the rapidly evolving AI-driven labor market. 


How ChatGPT killed my discussion boards and prompted new prompts — from timeshighereducation.com by Sara Cline; per Robert Gibson on LinkedIn
Advice on learning and discussion prompts that require students to think beyond the remit of AI responses

Excerpts:

To combat this problem, we modified some of our prompts this summer to try to prevent students from using AI to avoid learning. I’m sharing some of our strategies in the hope that they help you out as you adapt your course to a world of generative AI.

  1. Use prompts that force a personal opinion.
  2. Have students include their source(s) as an attachment.
  3. Use current or local events.
  4. Have them take and caption a photo.
  5. Draw a diagram or chart.
  6. Build and explain a 3D model.
  7. Include timestamps from lecture videos.
  8. Scrap the discussion boards.

Dark web ChatGPT is here… — from therundown.ai

The Rundown: A new cybercrime generative AI tool called FraudGPT is being advertised on the Dark web and Telegram channels, offering offensive capabilities like crafting spear-phishing emails and creating undetectable malware.

Why it matters: Scammers can now look more realistic than ever before and at a larger scale. The sad truth is that the emergence of cybercrime AI tools like FraudGPT is just beginning.


From DSC:
If true and if it could help build and/or contribute to cloud-based learner profiles, this could be huge.


Wayfair’s AI tool can redraw your living room and sell you furniture — from theverge.com by Wes Davis
The home decoration company’s new Decorify AI remodeling tool is clumsy but could be effective for visualization while remodeling.

A living room -- Wayfair is experimenting with using AI technologies to help people envision interior design moves

 


22 Classroom-Focused Resources on AI from Teachers Everywhere — from coolcatteacher.com by Vicki Davis; via GSV

***


Back to School Survey: 44% of Teens “Likely” to Use AI To Do Their Schoolwork for Them This School Year — from prnewswire.com by Junior Achievement
Research by Junior Achievement Shows 60% of Teens Consider the Use of AI to Do Their Schoolwork for Them as “Cheating”

Excerpt:

COLORADO SPRINGS, Colo., July 26, 2023 /PRNewswire/ — A new survey of teens conducted for Junior Achievement by the research firm Big Village shows that nearly half of teens (44%) are “likely” to use AI to do their schoolwork instead of doing it themselves this coming school year. However, most teens (60%) consider using AI in this way to be “cheating.” The survey of 1,006 13- to 17-year-olds was conducted by Big Village from July 6 through 11, 2023.

From DSC:
In a competitive society such as ours in the U.S., and when many of our K-12 learning ecosystems are designed to create game players, we shouldn’t be surprised to see a significant number of students using AI to “win”/game the system.

As it becomes appropriate for each student, offering more choice and control should allow more students to pursue what they want to learn about. They won’t be as interested in gaming the system if they truly want to learn about something.

 

McKinsey Technology Trends Outlook 2023 — from mckinsey.com

Excerpt:

Which technology trends have the most momentum in an accelerating world? We ranked the top cross-industry trends that matter most for companies and executives.


 

Partnership with American Journalism Project to support local news — from openai.com; via The Rundown AI
A new $5+ million partnership aims to explore ways the development of artificial intelligence (AI) can support a thriving, innovative local news field, and ensure local news organizations shape the future of this emerging technology.


SEC’s Gensler Warns AI Risks Financial Stability — from bloomberg.com by Lydia Beyoud; via The Brainyacts
SEC on lookout for fraud, conflicts of interest, chair says | Gensler cautions companies touting AI in corporate docs


Per a recent Brainyacts posting:

The recent petition from Kenyan workers who engage in content moderation for OpenAI’s ChatGPT, via the intermediary company Sama, has opened a new discussion in the global legal market. This dialogue surrounds the concept of “harmful and dangerous technology work” and its implications for laws and regulations within the expansive field of AI development and deployment.

The petition, asking for investigations into the working conditions and operations of big tech companies outsourcing services in Kenya, is notable not just for its immediate context but also for the broader legal issues it raises. Central among these is the notion of “harmful and dangerous technology work,” a term that encapsulates the uniquely modern form of labor involved in developing and ensuring the safety of AI systems.

The most junior data labelers, or agents, earned a basic salary of 21,000 Kenyan shillings ($170) per month, with monthly bonuses and commissions for meeting performance targets that could elevate their hourly rate to just $1.44 – a far cry from the $12.50 hourly rate that OpenAI paid Sama for their work. This discrepancy raises crucial questions about the fair distribution of economic benefits in the AI value chain.


How ChatGPT Code Interpreter (And Four Other AI Initiatives) Might Revolutionize Education — from edtechinsiders.substack.com by Phuong Do, Alex Sarlin, and Sarah Morin
And more on Meta’s Llama, education LLMs, the Supreme Court affirmative action ruling, and Byju’s continued unraveling

Let’s put it all together for emphasis. With Code Interpreter by ChatGPT, you can:

  1. Upload any file
  2. Tell ChatGPT what you want to do with it
  3. Receive your instructions translated into Python
  4. Execute the code
  5. Transform the output back into readable language (or visuals, charts, graphs, tables, etc.)
  6. Provide the results (and the underlying Python code)
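The loop above — translate a request into Python, run it, then turn the output back into plain language — can be sketched in miniature. This is a hypothetical illustration of the kind of code Code Interpreter might generate for an uploaded spreadsheet, not ChatGPT's actual implementation; the file contents and column names are invented:

```python
import csv
import io
import statistics

def summarize_csv(csv_text: str, column: str) -> str:
    """Mimic the Code Interpreter loop: parse the 'uploaded' file,
    run a computation, and translate the result back into prose."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[column]) for r in rows]
    mean = statistics.mean(values)
    return (f"Your file has {len(rows)} rows. "
            f"The average of '{column}' is {mean:.1f}.")

# A stand-in for a user-uploaded file (hypothetical data):
uploaded = "student,score\nAna,90\nBen,70\nCai,80\n"
print(summarize_csv(uploaded, "score"))
# → Your file has 3 rows. The average of 'score' is 80.0.
```

The point of the tool is that the user never sees (or needs to see) this middle step unless they ask for it.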


AI Tools and Links — from Wally Boston

It’s become so difficult to track AI tools as they are revealed. I’ve decided to create a running list of tools as I find out about them.  The list is in alphabetical order even though there are classification systems that I’ve seen others use. Although it’s not good in blogging land to update posts, I’ll change the date every time that I update this list. Please feel free to respond to me with your comments about any of these as well as AI tools that you use that I do not have on the list. I’ll post your comments next to a tool when appropriate. Thanks.


Meet Claude — A helpful new AI assistant — from wondertools.substack.com by Jeremy Caplan
How to make the most of ChatGPT’s new alternative

Claude has surprising capabilities, including a couple you won’t find in the free version of ChatGPT.

Since this new AI bot launched on July 11, I’ve found Claude useful for summarizing long transcripts, clarifying complex writings, and generating lists of ideas and questions. It also helps me put unstructured notes into orderly tables. For some things, I prefer Claude to ChatGPT. Read on for Claude’s strengths and limitations, and ideas for using it creatively.

Claude’s free version allows you to attach documents for analysis. ChatGPT’s doesn’t.


The Next Frontier For Large Language Models Is Biology — from forbes.com by Rob Toews

Large language models like GPT-4 have taken the world by storm thanks to their astonishing command of natural language. Yet the most significant long-term opportunity for LLMs will entail an entirely different type of language: the language of biology.

In the near term, the most compelling opportunity to apply large language models in the life sciences is to design novel proteins.



Seven AI companies agree to safeguards in the US — from bbc.com by Shiona McCallum; via Tom Barrett

Seven leading companies in artificial intelligence have committed to managing risks posed by the tech, the White House has said.

This will include testing the security of AI, and making the results of those tests public.

Representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI joined US President Joe Biden to make the announcement.

 

New York Times sues AI — theneurondaily.com


In Hollywood, writers and actors are on strike, in part over AI-related issues.


From DSC:
And while some are asking about other industries’/individuals’ data, Bryan Alexander asks this about academics:

While I’m here, also see Bryan’s posting –> How colleges and universities are responding to AI now


AI’s Coming Constitutional Convention — from thebrainyacts.beehiiv.com

For centuries, constitutional conventions have been pivotal moments in history for codifying the rights and responsibilities that shape civilized societies. As artificial intelligence rapidly grows more powerful, the AI community faces a similar historic inflection point. The time has come to draft a “Constitutional Convention for AI” – to proactively encode principles that will steer these transformative technologies toward justice, empowerment, and human flourishing.

AI promises immense benefits, from curing diseases to unlocking clean energy. But uncontrolled, it poses existential dangers that could undermine human autonomy and dignity. Lawyers understand well that mere regulations or oversight are no match for a determined bad actor. Fundamental principles must be woven into the very fabric of the system.

 

The invisible cost of resisting AI in higher education — from blogs.lse.ac.uk by Dr. Philippa Hardman

Excerpt (emphasis DSC):

The implications of this development are perhaps more significant than we realise. There has been much discussion in recent months about the risks associated with the rise of generative AI for higher education, with most of the discussion centring around the challenge that ChatGPT poses to academic integrity.

However, much less work has been done on exploring the negative – even existential – consequences that might stem from not embracing AI in higher education. Are these new principles enough to reverse the risk of irrelevance?

What if we reimagine “learning” in higher education as something more than the recall and restructuring of existing information? What if instead of lectures, essays and exams we shifted to a model of problem sets, projects and portfolios?

I am often asked what this could look like in practice. If we turn to tried and tested instructional strategies which optimise for learner motivation and mastery, it would look something like this…

Also relevant/see:

Do or Die? — from drphilippahardman.substack.com by Dr. Philippa Hardman
The invisible cost of resisting AI in higher education

Excerpt:

  • Embracing AI in the higher education sector prepares students for the increasingly technology-driven job market and promotes more active, participatory learning experiences which we know lead to better outcomes for both students and employers.
  • With the rising popularity of alternative education routes such as bootcamps and apprenticeships, it’s crucial for traditional higher education to engage positively with AI in order to maintain its competitiveness and relevance.

For example, a teacher crafting a lesson plan no longer has to repeat that they’re teaching 3rd grade science. A developer preferring efficient code in a language that’s not Python – they can say it once, and it’s understood. Grocery shopping for a big family becomes easier, with the model accounting for 6 servings in the grocery list.


This is the worst AI will ever be, so focused are educators on the present they can’t see the future — from donaldclarkplanb.blogspot.com by Donald Clark

Teaching technology
There is also the misconception around the word ‘generative’, the assumption that all it does is create blocks of predictable text. Wrong. Many of its best uses in learning are its ability to summarise, outline, provide guidance, support and many other pedagogic features that can be built into the software. This works and will mean tutors, teachers, teaching support, note-taking support, coaches and many other services will emerge that aid both teaching and learning. They are being developed in their hundreds as we speak.

This simple fact, that this is the first technology to ‘learn’ and learn fast, on scale, continuously, across a range of media and tasks, is what makes it extraordinary.


On holding back the strange AI tide — from oneusefulthing.org by Ethan Mollick
There is no way to stop the disruption. We need to channel it instead

And empowering workers is not going to be possible with a top-down solution alone. Instead, consider:

  • Radical incentives to ensure that workers are willing to share what they learn. If they are worried about being punished, they won’t share. If they are worried they won’t be rewarded, they won’t share. If they are worried that the AI tools that they develop might replace them, or their coworkers, they won’t share. Corporate leaders need to figure out a way to reassure and reward workers, something they are not used to doing.
  • Empowering user-to-user innovation. Build prompt libraries that help workers develop and share prompts with other people inside the organization. Open up tools broadly to workers to use (while still setting policies around proprietary information), and see what they come up with. Create slack time for workers to develop, and discuss, AI approaches.
  • Don’t rely on outside providers or your existing R&D groups to tell you the answer. We are in the very early days of a new technology. Nobody really knows anything about the best ways to use AI, and they certainly don’t know the best ways to use it in your company. Only by diving in, responsibly, can you hope to figure out the best use cases.
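The “prompt libraries” idea above can start very small: a shared, versioned collection of named templates that coworkers fill in and refine together. Here is a minimal sketch of what such a library could look like; the template names and wording are invented for illustration, not from Mollick’s post:

```python
from string import Template

# A hypothetical shared prompt library: named, reusable templates
# that workers can discover, fill in, and improve over time.
PROMPT_LIBRARY = {
    "summarize_meeting": Template(
        "Summarize these meeting notes in $n bullet points "
        "for an audience of $audience:\n$notes"
    ),
    "draft_reply": Template(
        "Draft a $tone reply to this customer email:\n$email"
    ),
}

def render_prompt(name: str, **fields) -> str:
    """Look up a template by name and fill in the blanks."""
    return PROMPT_LIBRARY[name].substitute(**fields)

prompt = render_prompt(
    "summarize_meeting",
    n=3, audience="executives", notes="(notes pasted here)",
)
print(prompt)
```

Even a plain dictionary like this, kept in a shared repo, gives an organization a place to capture and iterate on what workers learn — the reassurance and reward structure around it is the harder part.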

Teaching: Preparing yourself for AI in the classroom — from chronicle.com by Beth McMurtrie

Auburn’s modules cover the following questions:

  • What do I need to know about AI?
  • What are the ethical considerations in a higher-ed context?
  • How will AI tools affect the courses I teach?
  • How are students using AI tools, and how can I partner with my students?
  • How do I need to rethink exams, papers, and projects I assign?
  • How do I redesign my courses in the wake of AI disruption?
  • What other AI tools or capabilities are coming, and how can I design for them?
  • What conversations need to happen in my department or discipline, and what is my role?

Transforming Higher Education: AI as an Assistive Technology for Inclusive Learning — from fenews.co.uk by Gain Hoole

In recent years, I have witnessed the transformative power of technology in higher education. One particular innovation that has captured my attention is Artificial Intelligence (AI). AI holds tremendous potential as an assistive technology for students with reasonable adjustments in further education (FE) and higher education (HE).

In this comprehensive blog post, I will delve into the multifaceted aspects of AI as an assistive technology, exploring its benefits, considerations, challenges, and the future it holds for transforming higher education.

The integration of AI as an assistive technology can create an inclusive educational environment where all students, regardless of disabilities or specific learning needs, have equal access to educational resources. Real-time transcription services, text-to-speech capabilities, and personalized learning experiences empower students like me to engage with course content in various formats and at our own pace (Fenews, 2023). This not only removes barriers but also fosters a more inclusive and diverse academic community.


5 Ways to Ease Students Off the Lecture and Into Active Learning — from chronicle.com by Jeremy T. Murphy
Lecturing endures in college classrooms in part because students prefer that style of teaching. How can we shift that preference?

What can we do? Here are five considerations I’ll be following this coming fall in response to that nagging “less discussion, more instruction” evaluation.

  • Lecture … sparingly. 
  • Routinely ask how the course is going.
  • Be transparent.
  • …and more

A three-part series re: courseware out at The Chronicle of Higher Education:

  1. Millions of Students a Year Are Required to Buy Courseware. Often, It Replaces the Professor. — from chronicle.com by Taylor Swaak
  2. Courseware Can Be Integral to a Course. Why, Then, Are Students Footing the Bill for It? — from chronicle.com by Taylor Swaak
    The Homework Tax | For students already struggling to afford college, courseware can add to the burden
    Their argument is multifold: For one, they say, products like these — which often deliver key elements of a course that an instructor would typically be responsible for, like homework, assessments, and grading — should not be the student’s burden. At least one student advocate said colleges, rather, should cover or subsidize the cost, as they do with software like learning-management systems, if they’re allowing faculty free rein to adopt the products.

    And the fact that students’ access to these products expires — sometimes after just a semester — rubs salt in the wound, and risks further disadvantaging students.
  3. Bots Are Grabbing Students’ Personal Data When They Complete Assignments — from chronicle.com by Taylor Swaak
    When students use courseware, how much personal data is it collecting?

Institutions aren’t “letting the wolf into the henhouse”; instead, “we’re letting the hens out into a forest of wolves,” said Billy Meinke, an open educational resources technologist with the Outreach College at the University of Hawaii-Manoa who’s done research on publisher misuse of student data.


Here are five reading challenges to learn about learning this summer — from retrievalpractice.org by Pooja K. Agarwal, Ph.D.

Excerpt (emphasis DSC):

Here are five summer reading challenges to learn about the science of learning.

Important: make sure you remember what you learn! Engage yourself in retrieval practice and retrieve two things after each book, practice guide, and research article you read. Share your two things with our communities on Twitter and Facebook, make a list of what you’ve learned to boost your long-term learning,…


Assignment Makeovers in the AI Age: Essay Edition — from derekbruff.org by Derek Bruff

Last week, I explored some ways an instructor might want to (or need to) redesign a reading response assignment for the fall, given the many AI text generation tools now available to students. This week, I want to continue that thread with another assignment makeover. Reading response assignments were just the warm up; now we’re tackling the essay assignment.


Here are ways professional education leaders can prepare students for the rise of AI — from highereddive.com by A. Benjamin Spencer
Institutions must adapt their curricula to incorporate artificial intelligence-related topics, the dean of William & Mary Law School argues.

First, they need to understand that the technological side of AI can no longer be simply left to the information technology experts. Regardless of the professional domain, understanding what AI is, how it works, how the underlying code and algorithms are designed, and what assumptions lie behind the computer code are important components of being able to use and consume the products of AI tools appropriately.

 

It’s time for a Legal Moonshot — from jordanfurlong.substack.com by Jordan Furlong
All the challenges facing the legal sector today are systemic and entrenched. To solve them, we have to make a radical commitment to accomplish what we once believed impossible.

Here are three Legal Moonshots that the legal profession could take the lead on.

  1. Establish universal access to justice.
    Someday, this will be reality. Everyone will know their basic legal rights and can easily exercise them. Legal remedies will be free or extremely low-cost. Courts will be integrated into communities with simple entry and guided assistance, delivering clear and swift justice. AI-driven online services will render business agreements and settle everyday disputes. Everyone will have a last will and testament. Nobody will have to represent themselves. Justice will be real. That is all possible, and lawyers can lead the way there. It’s our Holy Grail. Let’s make it actually happen.
  2. Eliminate violence against women.  
  3. Root out public and private corruption.

The Tech Stack Law Firms and Legal Professionals Need to Succeed (Adriana Linares – LawTech Partners) — from tlpodcast.com with Adriana Linares

Adriana explains the differences between case management software, document management platforms, and practice management software. She also touches on the importance of document assembly software and how to maximize the use of data captured during the various stages of a legal matter. She closes out the discussion explaining why many in legal are missing out when they don’t use CRMs – Client and Customer Relationship Management platforms.


How To Use AI in Your Firm (with examples!) — a 1.5 hour webinar recording from clio.com; via The Brainyacts
You know your firm could benefit from AI—now, see how.

In this webinar recording you’ll learn about:

  • Practical use cases for AI in law firms—from legal research to practice area-specific prompts.
  • Popular AI tools and how to choose ones that work with your firm’s budget and goals.
  • The limitations, risks, and ethical considerations of AI for legal professionals.

Virtual Law Firms: Reinventing The Legal Profession With Technology — from forbes.com by Mohaimina Haque

Given these intrinsic advantages, it should come as no surprise that virtual law firms are on the rise. The shutdowns and disruptions caused by Covid provided a further impetus to this trend. Previously, it could take an attorney a whole day to drive to the courthouse, park, wait for the judge to call their case, argue the matter, then drive back to the office. Now the same matter can be handled through Zoom, and court documents can be filed online. After these measures were instituted, I’ve seen the opposition to them erode as the real savings in time and money for all parties concerned have become very clear.


Legal Soft Revolutionizes Legal Training with AI-Powered Platform — from globenewswire.com

LOS ANGELES, July 21, 2023 (GLOBE NEWSWIRE) — Legal Soft, a pioneering company in legal technology, is leveraging the power of artificial intelligence (AI) to revolutionize the development of training materials for law firms. Committed to advancing legal education and professional development, Legal Soft’s innovative AI-driven platform is transforming the training landscape for legal professionals.


AI Legal Case Analysis: The Next Frontier in Legal Technology — from dtgreviews.com

AI legal case analysis refers to the use of AI algorithms to analyze legal cases, identify patterns, predict outcomes, and provide insights that can aid in legal decision-making. This technology has the potential to revolutionize the way lawyers approach case strategy, conduct legal research, and even interact with clients.

One of the most significant benefits of AI legal case analysis is its ability to process vast amounts of data quickly and accurately. Traditional legal research is a time-consuming process that involves sifting through hundreds, if not thousands, of cases to find relevant precedents. AI can automate this process, analyzing thousands of cases in a fraction of the time it would take a human. This not only saves time but also increases the accuracy of the research, reducing the risk of human error.

Moreover, AI can identify patterns and trends in case law that might be overlooked by human researchers. For instance, AI can analyze the decisions of a particular judge or court to determine how they typically rule on certain issues. This information can be invaluable in formulating a case strategy or predicting the outcome of a case.

From DSC:
I’m not sure how I feel about this yet…but I have to admit that I’m very tentative and a bit skeptical at this point.

 

‘Future lawyers need to be versatile and willing to innovate’ — from legalcheek.com

Lawyers’ jobs will be different, and new roles have emerged and will continue to emerge, such as the legal technologist. I believe further momentum will come with the growth of future lawyers.

I would like to see AI support people, perhaps via a chatbot that can offer reliable and accurate guidance to help mitigate the pain of basic but fundamental legal matters and fill this void. Issues such as homelessness, debt, asylum and slavery would be examples of areas to prioritise.

To Future-Proof Their Firms, Attorneys Must Embrace AI — from forbes.com by Daniel Farrar

Law firms that want to use AI-powered legal tech should first adopt the mindset that AI is there to supplement their attorneys’ roles, not replace those roles. They must come to view AI-driven legal tech as a means to complete their jobs more efficiently and better address their clients’ needs.

Generative AI and the future of the legal profession: LexisNexis UK report canvases in-house and law firm views  — from legaltechnology.com by Caroline Hill

LexisNexis Legal & Professional (on 13 July) released a new report entitled “Generative AI and the future of the legal profession”, which highlights the at times surprising expectations of in-house counsel and law firms when it comes to generative AI adoption.

Forty-nine percent of in-house counsel expect their law firms to be using generative AI in the next 12 months, including 11% who say they expect firms to be already using the technology. Only 8% didn’t want AI used on their work. In contrast, 24% of firms believe their clients would not want them to use AI.

The survey, conducted among 1,175 UK legal professionals from May to June 2023, finds 87% of legal professionals are aware of generative AI tools and, of that group, 95% agree these tools will have an impact on the practice of law (38% said it will have a significant impact, 11% said it will be transformative and 46% thought it would have “some impact”).

5 Considerations Before Starting With New Legal Tech  — from jdsupra.com

Suddenly the most urgent in-house legal conversations centered on which new technology to install next. And while I’m glad to see more legal departments shift their mindsets, I still preach caution. Budgets are tight, change is difficult and selecting the right tool at the wrong moment can prove counterproductive.

So before starting the search for innovative or new legal tech, it’s always best to assess your readiness with a few quick criteria.

 

Teaching Assistants that Actually Assist Instructors with Teaching — from opencontent.org by David Wiley

“…what if generative AI could provide every instructor with a genuine teaching assistant – a teaching assistant that actually assisted instructors with their teaching?”

Assignment Makeovers in the AI Age: Reading Response Edition — from derekbruff.org by Derek Bruff

For my cryptography course, Mollick’s first option would probably mean throwing out all my existing reading questions. My intent with these reading questions was noble, that is, to guide students to the big questions and debates in the field, but those are exactly the kinds of questions for which AI can write decent answers. Maybe the AI tools would fare worse in a more advanced course with very specialized readings, but in my intro to cryptography course, they can handle my existing reading questions with ease.

What about option two? I think one version of this would be to do away with the reading response assignment altogether.

4 Steps to Help You Plan for ChatGPT in Your Classroom — from chronicle.com by Flower Darby
Why you should understand how to teach with AI tools — even if you have no plans to actually use them.


Some items re: AI in other areas:

15 Generative AI Tools A billion+ people will be collectively using very soon. I use most of them every day — from stefanbauschard.substack.com by Stefan Bauschard
ChatGPT, Bing, Office Suite, Google Docs, Claude, Perplexity.ai, Plug-Ins, MidJourney, Pi, Runway, Bard, Bing, Synthesia, D-ID

The Future of AI in Video: a look forward — from provideocoalition.com by Iain Anderson

Actors say Hollywood studios want their AI replicas — for free, forever — from theverge.com by Andrew Webster; resource from Tom Barrett

Along these lines of Hollywood and AI, see this Tweet:

Claude 2: ChatGPT rival launches chatbot that can summarise a novel — from theguardian.com by Dan Milmo; resource from Tom Barrett
Anthropic releases chatbot able to process large blocks of text and make judgments on what it is producing

Generative AI imagines new protein structures — from news.mit.edu by Rachel Gordon; resource from Sunday Signal
MIT researchers develop “FrameDiff,” a computational tool that uses generative AI to craft new protein structures, with the aim of accelerating drug development and improving gene therapy.

Google’s medical AI chatbot is already being tested in hospitals — from theverge.com by Wes Davis; resource via GSV

Ready to Sing Elvis Karaoke … as Elvis? The Weird Rise of AI Music — from rollingstone.com by Brian Hiatt; resource from Misha da Vinci
From voice-cloning wars to looming copyright disputes to a potential flood of nonhuman music on streaming, AI is already a musical battleground

 
 

Actors say Hollywood studios want their AI replicas — for free, forever — from theverge.com by Andrew Webster
The reveal came as SAG-AFTRA actors confirmed they were going on strike.

When asked about the proposal during the press conference, Crabtree-Ireland said that “This ‘groundbreaking’ AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that’s a groundbreaking proposal, I suggest you think again.”

 

The Future of Law: Embracing AI in the Legal Profession — from ethicalailawinstitute.com by Trent Kubasiak

Excerpt:

Improving Access to Justice:
One significant advantage of AI in the legal profession is its potential to improve access to justice. The high costs associated with legal services have traditionally created barriers for individuals with limited financial means. However, AI-powered solutions can help bridge this gap by providing affordable and accessible legal information and guidance. Virtual legal assistants and chatbots can assist individuals with legal queries, empowering them to navigate legal processes more effectively and make informed decisions. By leveraging AI, the legal profession can become more inclusive and ensure that legal services are available to a broader segment of society.


Also relevant/see:

Law Unlimited: Welcome to the re-envisioned legal profession — from jordanfurlong.substack.com by Jordan Furlong
Will Generative AI destroy law firms? Only if lawyers are too fixed in their ways to see the possibilities that lie beyond who we’ve always been and what we’ve always done.

Excerpt:

The immediate impact of Gen AI on legal services will be to introduce unprecedented efficiency to the production of countless legal documents and processes. For most of the last century, lawyers have personally performed this work, spending and billing hours or parts of hours to accomplish each task. Law firms have used this production method to provide on-the-job training for inexperienced lawyers and have leveraged those hours to generate profits for their partners. But LLMs can now do the same work in seconds, as effectively as lawyers can today and much better in the near future. This is, among other things, a very serious problem for law firms’ business models and talent development practices, not to mention a real challenge to lawyer education and training and potentially a revolution in access to justice.

 

YouTube tests AI-generated quizzes on educational videos — from techcrunch.com by Lauren Forristal

YouTube is experimenting with AI-generated quizzes on its mobile app for iOS and Android devices, which are designed to help viewers learn more about a subject featured in an educational video. The feature will also help the video-sharing platform get a better understanding of how well each video covers a certain topic.


Incorporating AI in Teaching: Practical Examples for Busy Instructors — from danielstanford.substack.com by Daniel Stanford; with thanks to Derek Bruff on LinkedIn for the resource

Since January 2023, I’ve talked with hundreds of instructors at dozens of institutions about how they might incorporate AI into their teaching. Through these conversations, I’ve noticed a few common issues:

  • Faculty and staff are overwhelmed and burned out. Even those on the cutting edge often feel they’re behind the curve.
  • It’s hard to know where to begin.
  • It can be difficult to find practical examples of AI use that are applicable across a variety of disciplines.

To help address these challenges, I’ve been working on a list of AI-infused learning activities that encourage experimentation in (relatively) small, manageable ways.


September 2023: The Secret Intelligent Beings on Campus — from stefanbauschard.substack.com by Stefan Bauschard
Many of your students this fall will be enhanced by artificial intelligence, even if they don’t look like actual cyborgs. Do you want all of them to be enhanced, or just the highest SES students?


How to report better on artificial intelligence — from cjr.org (Columbia Journalism Review) by Sayash Kapoor, Hilke Schellmann, and Ari Sen

In the past few months we have been deluged with headlines about new AI tools and how much they are going to change society.

Some reporters have done amazing work holding the companies developing AI accountable, but many struggle to report on this new technology in a fair and accurate way.

We—an investigative reporter, a data journalist, and a computer scientist—have firsthand experience investigating AI. We’ve seen the tremendous potential these tools can have—but also their tremendous risks.

As their adoption grows, we believe that, soon enough, many reporters will encounter AI tools on their beat, so we wanted to put together a short guide to what we have learned.


From DSC:
Something I created via Adobe Firefly (Beta version)

 


The 5 reasons L&D is going to embrace ChatGPT — from chieflearningoffice.com by Josh Bersin

Does this mean it will do away with the L&D job? Not at all — these tools give you superhuman powers to find content faster, put it in front of employees in a more useful way and more creatively craft character simulations, assessments, learning in the flow of work and more.

And it’s about time. We really haven’t had a massive innovation in L&D since the early days of the learning experience platform market, so we may be entering the most exciting era in a long time.

Let me give you the five most significant use cases I see. And more will come.


AI and Tech with Scenarios: ID Links 7/11/23 — from christytuckerlearning.com by Christy Tucker

As I read online, I bookmark resources I find interesting and useful. I share these links periodically here on my blog. This post includes links on using tech with scenarios: AI, xAPI, and VR. I’ll also share some other AI tools and links on usability, resume tips for teachers, visual language, and a scenario sample.



It’s only a matter of time before A.I. chatbots are teaching in primary schools — from cnbc.com by Mikaela Cohen

Key Points

  • Microsoft co-founder Bill Gates said generative AI chatbots can teach kids to read in 18 months rather than years.
  • Artificial intelligence is beginning to prove that it can accelerate the impact teachers have on students and help solve a stubborn teacher shortage.
  • Chatbots backed by large language models can help students, from primary education to certification programs, self-guide through voluminous materials and tailor their education to specific learning styles [preferences].

The Rise of AI: New Rules for Super T Professionals and Next Steps for EdLeaders — from gettingsmart.com by Tom Vander Ark

Key Points

  • The rise of artificial intelligence, especially generative AI, boosts productivity in content creation: text, code, images and increasingly video.
  • Here are six preliminary conclusions about the nature of work and learning.

The Future Of Education: Embracing AI For Student Success — from forbes.com by Dr. Michael Horowitz

Unfortunately, too often attention is focused on the problems of AI—that it allows students to cheat and can undermine the value of what teachers bring to the learning equation. This viewpoint ignores the immense possibilities that AI can bring to education and across every industry.

The fact is that students have already embraced this new technology, which is neither a new story nor a surprising one in education. Leaders should accept this and understand that people, not robots, must ultimately create the path forward. It is only by deploying resources, training and policies at every level of our institutions that we can begin to realize the vast potential of what AI can offer.


AI Tools in Education: Doing Less While Learning More — from campustechnology.com by Mary Grush
A Q&A with Mark Frydenberg


Why Students & Teachers Should Get Excited about ChatGPT — from ivypanda.com with thanks to Ruth Kinloch for this resource

Excerpt re: Uses of ChatGPT for Teachers

  • Diverse assignments.
  • Individualized approach.
  • Interesting classes.
  • Debates.
  • Critical thinking.
  • Grammar and vocabulary.
  • Homework review.

SAIL: State of Research: AI & Education — from buttondown.email by George Siemens
Information re: current AI and Learning Labs, education updates, and technology


Why ethical AI requires a future-ready and inclusive education system — from weforum.org


A specter is haunting higher education — from aiandacademia.substack.com by Bryan Alexander
Fall semester after the generative AI revolution

In this post I’d like to explore that apocalyptic model. For reasons of space, I’ll leave off analyzing student cheating motivations or questioning the entire edifice of grade-based assessment. I’ll save potential solutions for another post.

Let’s dive into the practical aspects of teaching to see why Mollick and Bogost foresee such a dire semester ahead.


Items re: Code Interpreter

Code Interpreter continues OpenAI’s long tradition of giving terrible names to things, because it might be most useful for those who do not code at all. It essentially allows the most advanced AI available, GPT-4, to upload and download information, and to write and execute programs for you in a persistent workspace. That allows the AI to do all sorts of things it couldn’t do before, and be useful in ways that were impossible with ChatGPT.
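To make that concrete, here is the kind of short Python script Code Interpreter typically writes and runs for you behind the scenes when you upload a spreadsheet and ask a question about it. This is a hypothetical sketch, not output from the tool itself; the file contents, column names, and figures are invented for illustration:

```python
import csv
import io

# Stand-in for a small CSV file a user might upload (invented data).
csv_text = "month,revenue\nJan,100\nFeb,120\nMar,90\n"

# Parse the rows, total the revenue, and find the best month --
# the sort of analysis Code Interpreter generates and executes
# in its workspace without the user writing any code.
rows = list(csv.DictReader(io.StringIO(csv_text)))
total = sum(int(r["revenue"]) for r in rows)
best = max(rows, key=lambda r: int(r["revenue"]))["month"]

print(f"Total revenue: {total}")  # Total revenue: 310
print(f"Best month: {best}")      # Best month: Feb
```

The point for non-coders is that the user only sees the question and the answer; the AI writes, runs, and debugs scripts like this one in between.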



 

The Homework Apocalypse — from oneusefulthing.org by Ethan Mollick
Fall is going to be very different this year. Educators need to be ready.

Excerpt:

Students will cheat with AI. But they also will begin to integrate AI into everything they do, raising new questions for educators. Students will want to understand why they are doing assignments that seem obsolete thanks to AI. They will want to use AI as a learning companion, a co-author, or a teammate. They will want to accomplish more than they did before, and also want answers about what AI means for their future learning paths. Schools will need to decide how to respond to this flood of questions.

The challenge of AI in education can feel abstract, so to understand a bit more about what is going to happen, I wanted to examine some common assignment types.

 
© 2024 | Daniel Christian