We have a massive opportunity to transform law and legal services to better serve society. We MUST AUTOMATE the majority of what lawyers do today so that we can expand access to the law and justice for EVERYONE in the future.
We need to have REAL discussions about the impact of… https://t.co/1XdWoe7KYz
— Daniel W. Linna Jr. (@DanLinna) June 8, 2023
From DSC:
I put the following comment on Dan’s posting:
I couldn’t agree more, Dan. Millions of people could benefit from the considered, careful research into — and eventual application of — technologies that significantly improve access to justice (#A2J).
Also see:
Generative AI could radically alter the practice of law — from The Economist
Even if it doesn’t replace lawyers en masse
Excerpts:
According to a recent report from Goldman Sachs, a bank, 44% of legal tasks could be performed by AI, more than in any occupation surveyed except for clerical and administrative support. Lawyers spend an awful lot of time scrutinising tedious documents—the sort of thing that AI has already demonstrated it can do well. Lawyers use AI for a variety of tasks, including due diligence, research and data analytics. These applications have largely relied on “extractive” AI, which, as the name suggests, extracts information from a text, answering specific questions about its contents.
Ultimately this will be good news for clients. “People who go to lawyers don’t want lawyers: they want resolutions to their problems or the avoidance of problems altogether,” explains Mr Susskind. If AI can provide those outcomes then people will use AI. Many people already use software to do their taxes rather than rely on professionals; “Very few of them are complaining about the lack of social interaction with their tax advisers.”
Also see:
On @LawNextPodcast – Noella Sudbury, founder of @RasaLegal, on how technology can simplify criminal records expungement. https://t.co/3NAmLjLxqf pic.twitter.com/ge4pt1qgl2
— Bob Ambrogi (@bobambrogi) June 6, 2023
AI-assisted cheating isn’t a temptation if students have a reason to care about their own learning.
Yesterday I happened to listen to two different podcasts that ended up resonating with one another — and with an idea that’s been rattling around in my head amid all of this moral uproar about generative AI:
**If we trust students – and earn their trust in return – then they will be far less motivated to cheat with AI or in any other way.**
First, the question of motivation. On the Intentional Teaching podcast, while interviewing James Lang and Michelle Miller on the impact of generative AI, Derek Bruff points out (drawing on Lang’s Cheating Lessons book) that if students have “real motivation to get some meaning out of [an] activity, then there’s far less motivation to just have ChatGPT write it for them.” Real motivation and real meaning FOR THE STUDENT translates into an investment in doing the work themselves.
…
Then I hopped over to one of my favorite podcasts – Teaching in Higher Ed – where Bonni Stachowiak was interviewing Cate Denial about a “pedagogy of kindness,” which is predicated on trusting students and not seeing them as adversaries in the work we’re doing.
So the second key element: being kind and trusting students, which builds a culture of mutual respect and care that again diminishes the likelihood that they will cheat.
…
Again, human-centered learning design seems to address so many of the concerns and challenges of the current moment in higher ed. Maybe it’s time to actually practice it more consistently. #aiineducation #higheredteaching #inclusiveteaching