Why it matters: The best AI assistants will be the ones that require the least prompting. They’ll get to know who you are, what you need, and your modus operandi. Profiles are a good starting point, but we believe the game-changer will be larger context windows (that’s nerd-speak for the amount of context ChatGPT can handle).
From DSC: And how about taking this a step further and remembering — or being able to access — our constantly updated Cloud-Based Learning Profiles?
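The "context window" idea above can be sketched crudely. Real tokenizers count subword tokens, not words, so the whitespace split below is only a rough stand-in for illustration, and the function names are my own:

```python
# Crude sketch: will a conversation history fit in a model's context window?
# Real tokenizers count subword tokens; whitespace splitting is only a
# rough approximation used here for illustration.

def approx_tokens(text: str) -> int:
    """Approximate token count by splitting on whitespace."""
    return len(text.split())

def fits_context(history: list[str], window_tokens: int) -> bool:
    """True if the whole conversation fits in the given context window."""
    return sum(approx_tokens(turn) for turn in history) <= window_tokens

def trim_to_window(history: list[str], window_tokens: int) -> list[str]:
    """Drop the oldest turns until the remainder fits the window."""
    trimmed = list(history)
    while trimmed and not fits_context(trimmed, window_tokens):
        trimmed.pop(0)  # discard the oldest turn first
    return trimmed
```

This is why window size matters: a small window forces the assistant to forget your oldest turns, while a large one lets it keep your whole history (or profile) in view.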
My hypothesis and research suggest that as bar associations and the ABA begin to recognize the ongoing systemic issues of high-cost legal education, growing legal deserts (areas where no lawyer serves a given population), pervasive access-to-justice issues, and a public already weary of the legal system, alternative options that are already in play may gain more support.
What might that look like?
The combination of AI-assisted education with traditional legal apprenticeships has the potential to create a rich, flexible, and engaging learning environment. Here are three scenarios that might illustrate what such a combination could look like:
Scenario One – Personalized Curriculum Development
Scenario Two – On-Demand Tutoring and Mentoring
Scenario Three – AI-assisted Peer Networks and Collaborative Learning
We know that there are challenges – a threat to human jobs, the potential implications for cyber security and data theft, or perhaps even an existential threat to humanity as a whole. But we certainly don’t yet have a full understanding of all of the implications. In fact, a World Economic Forum report recently stated that organizations “may currently underappreciate AI-related risks,” with just four percent of leaders considering the risk level to be “significant.”
A survey carried out by the law firm Baker McKenzie concluded that many C-level leaders are overconfident in their assessments of organizational preparedness in relation to AI. In particular, it exposed concerns about the potential implications of biased data when used to make HR decisions.
AI & lawyer training: How law firms can embrace hybrid learning & development — from thomsonreuters.com
A big part of law firms’ successful adaptation to the increased use of ChatGPT and other forms of generative AI may depend upon how firmly they embrace online learning & development tools designed for hybrid work environments
Excerpt:
As law firms move forward in their use of advanced artificial intelligence such as ChatGPT and other forms of generative AI, their success may hinge upon how they approach lawyer training and development and what tools they enlist for the process.
One of the tools that some law firms use to deliver a new, multi-modal learning environment is an online, video-based learning platform, Hotshot, that delivers more than 250 on-demand courses on corporate, litigation, and business skills.
Ian Nelson, co-founder of Hotshot, says he has seen a dramatic change in how law firms are approaching learning & development (L&D) in the decade or so that Hotshot has been active. He believes the biggest change is that 10 years ago, firms hadn’t yet embraced the need to focus on training and development.
From DSC: Heads up, law schools. Are you seeing/hearing this!?
Are we moving more towards a lifelong learning model within law schools?
If not, shouldn’t we be doing that?
Are LLM programs expanding quickly enough? Is more needed?
The generative AI announcements are coming fast and furious these days, but one of the biggest in terms of sheer dollar commitments just landed: Accenture, the global professional services and consulting giant, today announced it will invest $3 billion (with a “b”!) in AI over the next three years to build out its team of AI professionals and its AI-focused solutions for its clients.
“There is unprecedented interest in all areas of AI, and the substantial investment we are making in our Data & AI practice will help our clients move from interest to action to value, and in a responsible way with clear business cases,” said Julie Sweet, Accenture’s chairwoman and CEO.
Also related/see:
Artificial intelligence creates 40,000 new roles at Accenture — from computerweekly.com by Karl Flinders
Accenture is planning to add thousands of AI experts to its workforce as part of a $3bn investment in its data and artificial intelligence practice
Why leaders need to evolve alongside generative AI — from fastcompany.com by Kelsey Behringer
Even if you’re not an educator, you should not be sitting on the sidelines watching the generative AI conversation being had around you—hop in.
Excerpts (emphasis DSC):
Leaders should be careful to watch and support education right now. At the end of the day, the students sitting in K-12 and college classrooms are going to be future CPAs, lawyers, writers, and teachers. If you are parenting a child, you have skin in the game. If you use professional services, you have skin in the game. When it comes to education, we all have skin in the game. … Students need to master fundamental skills like editing, questioning, researching, and verifying claims before they can use generative AI exceptionally well.
[On 6/15/23, I joined] colleagues from OpenAI, Google, Microsoft, Stanford, Harvard, and others at the first meeting of the GenAI Summit. Our shared goal [was] to help educate universities & schools in Europe about the impact of Generative AI on their work.
…how can we effectively communicate to education professionals that generative AI will enhance their work rather than replace them?
A recent controlled study found that ChatGPT can help professionals increase their efficiency in routine tasks by ~35%. If we keep in mind that the productivity gain brought by the steam engine in the nineteenth century was ~25%, this is huge.
As educators, we should embrace the power of ChatGPT to automate the repetitive tasks which we’ve been distracted by for decades. Lesson planning, content creation, assessment design, grading and feedback – generative AI can help us to do all of these things faster than ever before, freeing us up to focus on where we bring most value for our students.
SAN FRANCISCO, June 15 (Reuters) – Alphabet Inc (GOOGL.O) is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, four people familiar with the matter told Reuters.
The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information.
Adobe Firefly for the Enterprise — Dream Bigger with Adobe Firefly. Dream it, type it, see it with Firefly, our creative generative AI engine. Now in Photoshop (beta), Illustrator, Adobe Express, and on the web.
We have a massive opportunity to transform law and legal services to better serve society. We MUST AUTOMATE the majority of what lawyers do today so that we can expand access to the law and justice for EVERYONE in the future.
From DSC: I put the following comment on Dan’s posting:
I couldn’t agree more, Dan. Millions of people could benefit from the considered, careful research of — and eventual application of — technologies to significantly improve/impact access to justice (#A2J).
According to a recent report from Goldman Sachs, a bank, 44% of legal tasks could be performed by AI, more than in any occupation surveyed except for clerical and administrative support. Lawyers spend an awful lot of time scrutinising tedious documents—the sort of thing that AI has already demonstrated it can do well. Lawyers use AI for a variety of tasks, including due diligence, research and data analytics. These applications have largely relied on “extractive” AI, which, as the name suggests, extracts information from a text, answering specific questions about its contents.
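The "extractive" approach described above can be illustrated with a toy: answer a question by selecting the sentence in a document that shares the most words with it. Real extractive-AI systems use trained models over long, tedious documents; this keyword-overlap heuristic is only a sketch of the idea, and the function name is my own:

```python
# Toy illustration of "extractive" AI: pick the document sentence with
# the highest word overlap with the question. A real system would use a
# trained span-extraction model; this heuristic only sketches the idea.
import re

def extract_answer(question: str, document: str) -> str:
    """Return the document sentence sharing the most words with the question."""
    q_words = set(re.findall(r"\w+", question.lower()))
    # Split the document into sentences on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

    def overlap(sentence: str) -> int:
        return len(q_words & set(re.findall(r"\w+", sentence.lower())))

    return max(sentences, key=overlap)
```

For example, asked "When does the lease begin?" against a lease summary, the sketch returns the sentence mentioning the start date, which is the shape of what extractive due-diligence tools do at scale.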
Ultimately this will be good news for clients. “People who go to lawyers don’t want lawyers: they want resolutions to their problems or the avoidance of problems altogether,” explains Mr Susskind. If AI can provide those outcomes then people will use AI. Many people already use software to do their taxes rather than rely on professionals; “Very few of them are complaining about the lack of social interaction with their tax advisers.”
“With Vision Pro, you’re no longer limited by a display,” Apple CEO Tim Cook said, introducing the new headset at WWDC 2023. Contrary to earlier mixed-reality reports, the system is far more focused on augmented reality than on virtual reality. The company’s term for this new paradigm is “spatial computing.”
“This is the first Apple product you look through and not at.” – Tim Cook
And with those famous words, Apple announced a new era of consumer tech.
Apple’s new headset will run visionOS – its new operating system – and will work with existing iOS and iPad apps. The new OS was created specifically for spatial computing — the blending of digital content into real space.
Vision Pro is controlled through hand gestures, eye movements and your voice (parts of it assisted by AI). You can use apps, change their size, capture photos and videos and more.
From DSC: Time will tell what happens with this new operating system and with this type of platform. I’m impressed with the engineering — as Apple wants me to be — but I doubt that this will become mainstream for quite some time yet. Also, I wonder what Steve Jobs would think of this…? Would he say that people would be willing to wear this headset (for long? at all?)? What about Jony Ive?
I’m sure the offered experiences will be excellent. But I won’t be buying one, as it’s waaaaaaaaay too expensive.
Ernst and Young dug a little deeper. “Today’s disruptive working landscape requires organisations to largely restructure the way they are doing work,” they noted in a bulletin in March this year. “Time now spent on tasks will be equally divided between people and machines. For these reasons, workforce roles will change and so do the skills needed to perform them.”
The World Economic Forum has pointed to this global skills gap and estimates that, while 85 million jobs will be displaced, 50% of all employees will need reskilling and/or upskilling by 2025. This, it almost goes without saying, will require Learning and Development departments to do the heavy lifting, both in this initial transformational phase and in an ongoing capacity.
“And that’s the big problem,” says Hardman. “2025 is only two and half years away and the three pillars of L&D – knowledge transference, knowledge reinforcement and knowledge assessment – are crumbling. They have been unchanged for decades and are now, faced by revolutionary change, no longer fit for purpose.”
ChatGPT is the shakeup education needs — from eschoolnews.com by Joshua Sine
As technology evolves, industries must evolve alongside it, and education is no exception–especially when students heavily and regularly rely on edtech
Key points:
Education must evolve along with technology–students will expect it
Embracing new technologies helps education leverage adaptive technology that engages student interest
DC: A heads up to us as emerging techs continue to hit the radar. We need to ask “should we build this?” or “how should we build this to ensure benefits to society?” — & not just build/develop things because we can. AI offers powerful pros as well as cons. https://t.co/JcrZqzDhzH
— Daniel Christian (he/him/his) (@dchristian5) June 1, 2023
From DSC: I also wanted to highlight the item below, which Barsee also mentioned above, as it will likely hit the world of education and training as well:
Last night, Jensen Huang of NVIDIA gave his very first live keynote in 4-years.
The most show-stopping moment from the event was when he showed off the real-time AI in video games. A human speaks, the NPC responds, in real time and the dialogue was generated with AI on the fly. pic.twitter.com/TDoUM1zSiy
The perils of consulting an Electric Monk — from jordanfurlong.substack.com by Jordan Furlong
Don’t blame ChatGPT for the infamous incident of the made-up cases. And don’t be too hard on the lawyer, either. We’re all susceptible to a machine that tells us exactly what we want to hear.
Excerpt:
But then the “ChatGPT Lawyer” story happened, and all hell broke loose on LawTwitter and LawLinkedIn, and I felt I needed to make three points, one of which involves an extra-terrestrial robot.
My first two points are pretty straightforward:
The tsunami of gleeful overreaction from lawyers on social media, urging bans on the use of ChatGPT and predicting prison time for the hapless practitioner, speaks not only to their fear and loathing of generative AI, but also to their desperate hope that it’s all really nothing but hype and won’t disturb their happy status quo. Good luck with that.
The condemnation and mockery of the lawyer himself, who made a bad mistake but who’s been buried by an utterly disproportionate avalanche of derision, speaks to the lack of compassion in this profession, whose members should pray that their worst day as a lawyer never makes it to the front page of The New York Times. There but for the grace of God.
Are you looking for evidence to support the side that’s hired you? Or are you looking for the truth? Choosing the first option has never been easier. It’s also never been more dangerous.
As referenced topic-wise by Jordan above, also see:
What I learned at CLOC 2023 — from alexofftherecord.com by Alex Su
This week I attended the premier legal operations conference. Here’s what I heard.
Excerpt:
Theme 1: Generative AI isn’t going anywhere
This was a huge theme throughout the conference. Whether it was vendors announcing GPT integrations, or panels discussing how to use AI, there was just an enormous amount of attention on generative AI. I’m certainly no stranger to all this hype, but I’d always wondered if it was all from my Silicon Valley bubble. It wasn’t.
What was driving all this interest in AI? Well, the ubiquity of ChatGPT. Everyone’s talking about it and trying to figure out how to incorporate it into the business. And not just in the U.S. It’s a worldwide trend. Word on the street is that it’s a CEO-level priority. Everywhere. So naturally it trickles down to the legal department.
Personal experience and anecdotal evidence indicate that LLMs’ current state provides impressive output in various legal tasks. Specifically, they provide extraordinary results on the following:
Drafting counterarguments.
Exploring client fact inquiries (e.g., “How did you lose money?”).
Ideating voir dire questions (and rating responses).
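The three tasks listed above all reduce to careful prompting. A minimal sketch of that scaffolding follows; the template wording and task names are illustrative assumptions on my part, not a vetted legal-prompting library, and the actual LLM call is left out entirely:

```python
# Sketch of prompt scaffolding for the legal tasks listed above.
# The templates are illustrative assumptions; the LLM call itself
# is deliberately omitted -- this only builds the prompt text.

LEGAL_PROMPTS = {
    "counterargument": (
        "You are assisting a litigator. Draft the three strongest "
        "counterarguments to the following position:\n{input}"
    ),
    "client_inquiry": (
        "Generate follow-up questions to explore this client "
        "statement in detail:\n{input}"
    ),
    "voir_dire": (
        "Propose voir dire questions for a case about {input}, and "
        "explain what each question is designed to reveal."
    ),
}

def build_prompt(task: str, user_input: str) -> str:
    """Fill in the template for one of the supported drafting tasks."""
    if task not in LEGAL_PROMPTS:
        raise ValueError(f"Unknown task: {task!r}")
    return LEGAL_PROMPTS[task].format(input=user_input)
```

The point of templates like these is repeatability: the lawyer supplies only the case-specific facts, and the framing that makes the output useful stays constant.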
Professors Plan Summer AI Upskilling, With or Without Support — from insidehighered.com by Susan D’Agostino
Academics seeking respite from the fire hose of AI information and hot takes launch summer workshops. But many of the grass-roots efforts fall short of meeting demand.
Excerpt:
In these summer faculty AI workshops, some plan to take their first tentative steps in redesigning assignments to recognize the AI-infused landscape. Others expect to evolve their in-progress teaching-with-AI practices. At some colleges, full-time staff will deliver the workshops or pay participants for professional development time. But some offerings are grassroots efforts delivered by faculty volunteers attended by participants on their own time. Even so, many worry that the efforts will fall short of meeting demand.
From DSC: We aren’t used to this pace of change. It will take time for faculty members — as well as Instructional Designers, Instructional Technologists, Faculty Developers, Learning Experience Designers, Librarians, and others — to learn more about AI and its implications for teaching and learning. Faculty are learning. Staff are learning. Students are learning. Grace is needed. And faculty/staff modeling what it is to learn themselves is a good thing for students to see as well.
This can be done first and foremost through collaboration: bringing more people to the table, in a meaningful workflow, where they can make the best use of their expertise. Moreover, we need to take a step back and keep the big picture in mind if we want to provide our students with a valuable experience.
…
This is all about creating and nurturing partnerships. Thinking in an inclusive way about who is at the table when we design our courses and our programmes and who we are currently missing. Generally speaking, the main actors involved should be: teaching staff, learning design professionals (under all their various names) and students. Yes, students. Although we are designing for their learning, they are all too often not part of the process.
In order to yield results, collaborative practice needs to be embedded in the institutional fabric, and this takes time. Building silos happens fast; breaking them down is a long-term process. Creating a culture of dialogue, with clear and replicable processes, is key to making collaborative learning design work.
From DSC: To me, Alexandra is addressing the topic of using teams to design, develop, and teach/offer courses. This is where a variety of skills and specialties can come together to produce an excellent learning experience. No one individual has all of the necessary skills — nor the necessary time. No way.
In the fifth free webinar on AI for Education, I am joined by Dean Pearman, Head of Education at Beaconhills College, and Tom Oliphant, Head of Technology and Enterprise at St John’s Grammar School. We explore questions about creativity, design, and the new skills that are emerging. Join us to invest in your AI literacy. Expand your toolset, broaden your understanding, and challenge your thinking.
You are watching the fourth free webinar, a dialogue about AI for education with Sophie Fenton and Steve Brophy, hosted by Tom Barrett. Building blocks – key AI concepts to learn. Practical frameworks for navigating the challenges of AI for education. Be part of a dialogue on provocations about AI and human-centred education, truth and identity.
Using ChatGPT in Math Lesson Planning — from edutopia.org by Kristen Moore
Artificial intelligence tools are useful beyond language arts classes. Math teachers can use them to save time and create interesting lessons.
Democratic Inputs to AI — from openai.com
Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.
AI Canon — from a16z.com (Andreessen Horowitz) by Derrick Harris, Matt Bornstein, and Guido Appenzeller; via The Neuron newsletter
Excerpts:
Research in artificial intelligence is increasing at an exponential rate. It’s difficult for AI experts to keep up with everything new being published, and even harder for beginners to know where to start. So, in this post, we’re sharing a curated list of resources we’ve relied on to get smarter about modern AI. We call it the “AI Canon” because these papers, blog posts, courses, and guides have had an outsized impact on the field over the past several years.
NVIDIA ACE for Games is a new foundry for intelligent in-game characters powered by generative AI. Developers of middleware, tools, and games can use NVIDIA ACE for Games to build and deploy customized speech, conversation, and animation AI models in their software and games.
From DSC: I can’t wait to see this type of thing integrated into educational games/simulations/training.
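A pipeline like the one NVIDIA describes — player speech in, AI-generated NPC dialogue out — can be sketched with stubbed stages. Every function below is a hypothetical stand-in of my own; the real system wires trained speech, language, and animation models into each slot:

```python
# Sketch of a game-dialogue loop in the spirit of NVIDIA ACE: speech
# recognition, an LLM for the NPC's reply, then speech synthesis.
# Every stage is a stub -- a real system plugs trained models in here.

def speech_to_text(audio: bytes) -> str:
    """Stub ASR: a real system would transcribe the player's audio."""
    return audio.decode("utf-8")  # pretend the 'audio' is already text

def npc_reply(persona: str, player_line: str) -> str:
    """Stub LLM: a real system would generate in-character dialogue."""
    return f"[{persona}] You said: {player_line}"

def text_to_speech(line: str) -> bytes:
    """Stub TTS: a real system would synthesize and lip-sync the line."""
    return line.encode("utf-8")

def dialogue_turn(persona: str, audio: bytes) -> bytes:
    """One full turn: player audio in, NPC audio out."""
    player_line = speech_to_text(audio)
    return text_to_speech(npc_reply(persona, player_line))
```

The educational appeal is the same loop with a tutor persona: a learner asks a question out loud, and a simulated character answers in real time.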
Bill Gates says AI is poised to destroy search engines and Amazon — from futurism.com by Victor Tangermann
Who will win the AI [competition]? (DSC: I substituted the word competition here, as that’s what it is. It’s not a war, it’s a part of America’s way of doing business.)
“Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” Gates said during a Goldman Sachs event on AI in San Francisco this week, as quoted by CNBC.
These AI assistants could “read the stuff you don’t have time to read,” he said, allowing users to get to information without having to use a search engine like Google.
The online learning platform edX introduced two new tools on Friday based on OpenAI’s ChatGPT technology: an edX plugin for ChatGPT and a learning assistant embedded in the edX platform, called Xpert.
According to the company, its plugin will enable ChatGPT Plus subscribers to discover educational programs and explore learning content such as videos and quizzes across edX’s library of 4,200 courses.
Bing is now the default search for ChatGPT — from theverge.com by Tom Warren; via superhuman.beehiiv.com
The close partnership between Microsoft and OpenAI leads to plug-in interoperability and search defaults.
Excerpt:
OpenAI will start using Bing as the default search experience for ChatGPT. The new search functionality will be rolling out to ChatGPT Plus users today and will be enabled for all free ChatGPT users soon through a plug-in in ChatGPT.
Students with mobility challenges may find it easier to use generative AI tools — such as ChatGPT or Elicit — to help them conduct research if that means they can avoid a trip to the library.
Students who have trouble navigating conversations — such as those along the autism spectrum — could use these tools for “social scripting.” In that scenario, they might ask ChatGPT to give them three ways to start a conversation with classmates about a group project.
Students who have trouble organizing their thoughts might benefit from asking a generative AI tool to suggest an opening paragraph for an essay they’re working on — not to plagiarize, but to help them get over “the terror of the blank page,” says Karen Costa, a faculty-development facilitator who, among other things, focuses on teaching, learning, and living with ADHD. “AI can help build momentum.”
ChatGPT is good at productive repetition. That is a practice most teachers use anyway to reinforce learning. But AI can take that to the next level by allowing students who have trouble processing information to repeatedly generate examples, definitions, questions, and scenarios of concepts they are learning.
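The "productive repetition" idea above can be made concrete: generate many differently framed prompts about a single concept, which a student works through with a chatbot one at a time. The frame list below is my own illustrative assumption, not a published technique:

```python
# Sketch of "productive repetition": many differently framed prompts
# about one concept, for a student to work through with a chatbot.
# The frame wordings are illustrative assumptions.

REPETITION_FRAMES = [
    "Define {concept} in one sentence.",
    "Give a real-world example of {concept}.",
    "Write a quiz question that tests {concept}.",
    "Explain {concept} as if to a ten-year-old.",
    "Describe a scenario where {concept} matters.",
]

def repetition_prompts(concept: str, n: int) -> list[str]:
    """Return n prompt variations, cycling through the frames."""
    return [
        REPETITION_FRAMES[i % len(REPETITION_FRAMES)].format(concept=concept)
        for i in range(n)
    ]
```

A student who struggles to process a concept on the first pass can keep pulling fresh framings of it, which is exactly the next-level repetition the excerpt describes.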
It’s not all on you to figure this out and have all the answers. Partner with your students and explore this together.
Intentional Teaching — from intentionalteaching.buzzsprout.com by Derek Bruff
Rethinking Teaching in an Age of AI with James M. Lang and Michelle D. Miller
Excerpt:
In her 2022 book Remembering and Forgetting in the Age of Technology, Michelle D. Miller writes about the “moral panics” that often happen in response to new technologies. In his 2013 book Cheating Lessons: Learning from Academic Dishonesty, James M. Lang argues that the best way to reduce cheating is through better course design. What do these authors have to say about teaching in an age of generative AI tools like ChatGPT? Lots!
Governance of superintelligence — from openai.com
Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.
AI is developing rapidly enough and the dangers it may pose are clear enough that OpenAI’s leadership believes that the world needs an international regulatory body akin to that governing nuclear power — and fast. But not too fast. In a post to the company’s blog, OpenAI founder Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever explain that the pace of innovation in artificial intelligence is so fast that we can’t expect existing authorities to adequately rein in the technology. While there’s a certain quality of patting themselves on the back here, it’s clear to any impartial observer that the tech, most visibly in OpenAI’s explosively popular ChatGPT conversational agent, represents a unique threat as well as an invaluable asset.
OpenAI-backed robot startup beats Elon Musk’s Tesla, deploys AI-enabled robots in real world — from firstpost.com by Mehul Reuben Das; via The Rundown
A robotics startup backed by OpenAI, the makers of ChatGPT, has beaten Elon Musk’s Tesla in the humanoid robot race and has successfully deployed humanoid robots as security guards. Next, it will be deploying the robots in hospices and assisted living facilities.
From DSC: Hmmm…given the crisis of loneliness in the United States, I’m not sure that this type of thing is a good thing. But I’m sure there are those who would argue the other side of this.