Emerging technical solutions are addressing the main challenges of using Generative AI in legal applications, such as lack of consistency and accuracy, limited explainability, privacy concerns, and difficulty in obtaining and training models on legal domain data.
Structural impediments in the legal industry, such as the billable hour, lack of standardization, vendor dependence, and incumbent control, moderate the success of generative AI startups.
Our defined “client-facing” LegalTech market is segmented into three broad lines of work: Research and Analysis, Document Review and Drafting, and Litigation. We estimate the total U.S. LegalTech market at ~$13B in 2023, with litigation being the largest category.
LegalTech incumbents play a significant role in the adoption of generative AI technologies, often opting for market consolidation through partnerships or acquisitions rather than building solutions organically.
Future evolution in LegalTech may involve specialization in areas such as patent and IP, immigration, insurance, and regulatory compliance. There is also potential for productivity tools and access to legal services, although the latter faces structural challenges related to the Unauthorized Practice of Law (UPL).
EPISODE NOTES
Creative thinking and design elements can help you elevate your legal practice and develop more meaningful solutions for clients. Dennis and Tom welcome Tessa Manuello to discuss her insights on legal technology with a particular focus on creative design adaptations for lawyers. Tessa discusses the tech learning process for attorneys and explains how a more creative approach for both learning and implementing tech can help lawyers make better use of current tools, AI included.
In honor of International Women’s Day, Sharma discusses on LinkedIn the need for more female role models in the tech sector as AI opens up traditional career pathways and creates opportunities to welcome more women to the space.
Sharma invited Thomson Reuters female leaders working in legal technology to share their perspectives, including Rawia Ashraf, Emily Colbert, and Anu Dodda.
Vast swaths of the United States are at risk of running short of power as electricity-hungry data centers and clean-technology factories proliferate around the country, leaving utilities and regulators grasping for credible plans to expand the nation’s creaking power grid.
…
A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing. Tech firms like Amazon, Apple, Google, Meta and Microsoft are scouring the nation for sites for new data centers, and many lesser-known firms are also on the hunt.
The Obscene Energy Demands of A.I. — from newyorker.com by Elizabeth Kolbert
How can the world reach net zero if it keeps inventing new ways to consume energy?
“There’s a fundamental mismatch between this technology and environmental sustainability,” de Vries said. Recently, the world’s most prominent A.I. cheerleader, Sam Altman, the C.E.O. of OpenAI, voiced similar concerns, albeit with a different spin. “I think we still don’t appreciate the energy needs of this technology,” Altman said at a public appearance in Davos. He didn’t see how these needs could be met, he went on, “without a breakthrough.” He added, “We need fusion or we need, like, radically cheaper solar plus storage, or something, at massive scale—like, a scale that no one is really planning for.”
A generative AI reset: Rewiring to turn potential into value in 2024 — from mckinsey.com by Eric Lamarre, Alex Singla, Alexander Sukharevsky, and Rodney Zemmel; via Philippa Hardman
The generative AI payoff may only come when companies do deeper organizational surgery on their business.
Figure out where gen AI copilots can give you a real competitive advantage
Upskill the talent you have but be clear about the gen-AI-specific skills you need
Form a centralized team to establish standards that enable responsible scaling
Set up the technology architecture to scale
Ensure data quality and focus on unstructured data to fuel your models
Build trust and reusability to drive adoption and scale
Since ChatGPT dropped in the fall of 2022, everyone and their donkey has tried their hand at prompt engineering—finding a clever way to phrase your query to a large language model (LLM) or AI art or video generator to get the best results or sidestep protections. The Internet is replete with prompt-engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.
…
However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.
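As a rough illustration of what “the model doing its own prompt engineering” can look like, here is a minimal, hypothetical search loop: a model proposes rewrites of a prompt, each candidate is scored on a small evaluation set, and the best performer survives each round. The `propose_rewrites` and `score` functions below are invented stand-ins for real LLM calls and task-specific metrics, not any particular system from the research.

```python
# Sketch of automatic prompt optimization: an LLM (stubbed here) proposes
# rewrites of a prompt, each candidate is scored on a small eval set, and
# the best-scoring prompt survives each round.

def propose_rewrites(prompt: str, n: int = 3) -> list[str]:
    """Stand-in for an LLM call that rewrites a prompt n different ways."""
    return [f"{prompt} (variant {i})" for i in range(n)]

def score(prompt: str, eval_set: list[tuple[str, str]]) -> float:
    """Stand-in for running the prompt on eval examples and grading the
    outputs. Here we simply reward longer, more specific prompts."""
    return float(len(prompt))

def optimize_prompt(seed: str, eval_set: list, rounds: int = 2) -> str:
    best = seed
    for _ in range(rounds):
        candidates = [best] + propose_rewrites(best)
        best = max(candidates, key=lambda p: score(p, eval_set))
    return best

best = optimize_prompt("Summarize the ruling", eval_set=[])
print(best)
```

In a real system, the scoring step is the expensive part (each candidate must be run against held-out examples), which is why such loops are typically budgeted in candidates evaluated rather than rounds.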
There is one very clear parallel between the digital spreadsheet and generative AI: both are computer apps that collapse time. A task that might have taken hours or days can suddenly be completed in seconds. So accept for a moment the premise that the digital spreadsheet has something to teach us about generative AI. What lessons should we absorb?
It’s that pace of change that gives me pause. Ethan Mollick, author of the forthcoming book Co-Intelligence, tells me “if progress on generative AI stops now, the spreadsheet is not a bad analogy”. We’d get some dramatic shifts in the workplace, a technology that broadly empowers workers and creates good new jobs, and everything would be fine. But is it going to stop any time soon? Mollick doubts that, and so do I.
At this moment, as a college student trying to navigate the messy, fast-developing, and varied world of generative AI, I feel more confused than ever. I think most of us can share that feeling. There’s no roadmap on how to use AI in education, and there aren’t the typical years of proof to show something works. However, this promising new tool is sitting in front of us, and we would be foolish to not use it or talk about it.
…
I’ve used it to help me understand sample code I was viewing, rather than mindlessly copying the material I was trying to learn from. I’ve also used it to prepare for a debate, practicing counterarguments to the points it came up with.
AI alone cannot teach something; there needs to be critical interaction with the responses we are given. However, this is something that is true of any form of education. I could sit in a lecture for hours a week, but if I don’t do the homework or critically engage with the material, I don’t expect to learn anything.
Survey: K-12 Students Want More Guidance on Using AI — from govtech.com by Lauraine Langreo
Research from the nonprofit National 4-H Council found that most 9- to 17-year-olds have an idea of what AI is and what it can do, but most would like help from adults in learning how to use different AI tools.
“Preparing young people for the workforce of the future means ensuring that they have a solid understanding of these new technologies that are reshaping our world,” Jill Bramble, the president and CEO of the National 4-H Council, said in a press release.
1,444
The number of students who were enrolled at Notre Dame College in fall 2022, down 37% from 2014. The Roman Catholic college recently said it would close after the spring term, citing declining enrollment, along with rising costs and significant debt.
28
The number of academic programs that Valparaiso University may eliminate. Eric Johnson, the Indiana institution’s provost, said it offers too many majors, minors and graduate degrees in relation to its enrollment.
…
A couple of other items re: higher education that caught my eye were:
University administrators see the need to implement education technology in their classrooms but are at a loss regarding how to do so, according to a new report.
The College Innovation Network released its first CIN Administrator EdTech survey today, which revealed that more than half (53 percent) of the 214 administrators surveyed do not feel extremely confident in choosing effective ed-tech products for their institutions.
“While administrators are excited about offering new ed-tech tools, they are lacking knowledge and data to help them make informed decisions that benefit students and faculty,” Omid Fotuhi, director of learning and innovation at WGU Labs, which funds the network, said in a statement.
From DSC: I always appreciated our cross-disciplinary team at Calvin (then College). As we looked at enhancing our learning spaces, we had input from the Teaching & Learning Group, IT, A/V, the academic side of the house, and facilities. It was definitely a team-based approach. (As I think about it, it would have been helpful to have more channels for student feedback as well.)
Optionality. In my keynote, I pointed out that the academic calendar and credit hour in higher ed are like “shelf space” on the old television schedule that has been upended by streaming. In much the same way, we need similar optionality to meet the challenges of higher ed right now: from how students access learning (in-person, hybrid, online), to credentials (certificates, degrees), to how those experiences stack together for lifelong learning.
Culture in institutions. The common thread throughout the conference was how the culture of institutions (both universities and governments) needs to change so our structures and practices can evolve. Too many people in higher ed right now are employing a scarcity mindset and seeing every change as a zero-sum game. If you’re not happy about the present, as many attendees suggested, you’re not going to be excited about the future.
A new study from the University of Tokyo has highlighted the positive effect that immersive virtual reality experiences have on depression anti-stigma and knowledge interventions compared to traditional video.
…
The study found that depression knowledge improved for both interventions; however, only the immersive VR intervention reduced stigma. In the VR-powered intervention, depression knowledge scores were positively associated with a neural response indicative of empathetic concern. The traditional video intervention saw the inverse, with participants demonstrating a brain response suggestive of distress.
From DSC: This study makes me wonder why we haven’t heard of more VR-based uses in diversity training. I’m surprised we haven’t heard of situations where we are put in someone else’s moccasins, so to speak. We could have a lot more empathy for someone — and better understand their situation — if we were to experience life as others might experience it. In the process, we would likely uncover some hidden biases that we have.
1. AI already plays a central part in the instructional design process
A whopping 95.3% of the instructional designers interviewed said they use AI in their day-to-day work. Those who don’t use AI cite access or permission issues as the primary reason they haven’t integrated AI into their process.
2. AI is predominantly used at the design and development stages of the instructional design process
When mapped to the ADDIE process, the breakdown of use cases goes as follows:
Analysis: 5.5% of use cases
Design: 32.1%
Development: 53.2%
Implementation: 1.8%
Evaluation: 7.3%
Speaking of AI in our learning ecosystems, also see:
The majority of educators expect use of artificial intelligence tools will increase in their school or district over the next year, according to an EdWeek Research Center survey.
Applying Multimodal AI to L&D — from learningguild.com by Sarah Clark
We’re just starting to see multimodal AI systems hit the spotlight. Unlike the text-based and image-generation AI tools we’ve seen before, multimodal systems can absorb and generate content in multiple formats – text, image, video, audio, etc.
The early vibrations of AI have already been shaking the newsroom. One downside of the new technology surfaced at CNET and Sports Illustrated, where editors let AI run amok with disastrous results. Elsewhere in news media, AI is already writing headlines, managing paywalls to increase subscriptions, performing transcriptions, turning stories into audio feeds, discovering emerging stories, fact-checking, copy editing and more.
Felix M. Simon, a doctoral candidate at Oxford, recently published a white paper about AI’s journalistic future that eclipses many early studies. Swinging a bat from a crouch that is neither doomer nor Utopian, Simon heralds both the downsides and promise of AI’s introduction into the newsroom and the publisher’s suite.
Unlike earlier technological revolutions, AI is poised to change the business at every level. It will become — if it already isn’t — the beginning of most story assignments and will become, for some, the new assignment editor. Used effectively, it promises to make news more accurate and timely. Used frivolously, it will spawn an ocean of spam. Wherever the production and distribution of news can be automated or made “smarter,” AI will surely step up. But the future has not yet been written, Simon counsels. AI in the newsroom will be only as bad or good as its developers and users make it.
We proposed EMO, an expressive audio-driven portrait-video generation framework. Given a single reference image and vocal audio (e.g., talking or singing), our method can generate vocal avatar videos with expressive facial expressions and various head poses; it can also produce videos of any duration, depending on the length of the input audio.
New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI music generation and editing tool, Project Music GenAI Control allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs.
“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.
There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use.
The FCC’s war on robocalls has gained a new weapon in its arsenal with the declaration of AI-generated voices as “artificial” and therefore definitely against the law when used in automated calling scams. It may not stop the flood of fake Joe Bidens that will almost certainly trouble our phones this election season, but it won’t hurt, either.
The new rule, contemplated for months and telegraphed last week, isn’t actually a new rule — the FCC can’t just invent them with no due process. Robocalls are just a new term for something largely already prohibited under the Telephone Consumer Protection Act: artificial and pre-recorded messages being sent out willy-nilly to every number in the phone book (something that still existed when they drafted the law).
EIEIO…Chips Ahoy! — from dashmedia.co by Michael Moe, Brent Peus, and Owen Ritz
Here Come the AI Worms — from wired.com by Matt Burgess
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.
Justice tech — legal tech that helps low-income folks with no or some ability to pay, that assists the lawyers who serve those folks, and that makes the courts more efficient and effective — must contend with a higher hurdle than wooing Silicon Valley VCs: the civil justice system itself.
A checkerboard of technology systems and data infrastructures across thousands of local court jurisdictions makes it nearly impossible to develop tools with the scale needed to be sustainable. Courts are themselves a key part of the access to justice problem: opaque, duplicative and confusing court forms and burdensome filing processes make accessing the civil justice system deeply inefficient for the sophisticated, and an impenetrable maze for the 70+% of civil litigants who don’t have a lawyer.
Noxtua, the first sovereign European legal AI with its proprietary Language Model, allows lawyers in corporations and law firms to benefit securely from the advantages of generative AI. The Berlin-based AI startup Xayn and the largest German business law firm CMS are developing Noxtua as a Legal AI with its own Legal Large Language Model and AI assistant. Lawyers from corporations and law firms can use the Noxtua chat to ask questions about legal documents, analyze them, check them for compliance with company guidelines, (re)formulate texts, and have summaries written. The Legal Copilot, which specializes in legal texts, stands out as an independent and secure alternative from Europe to the existing US offerings.
Gen AI is a game-changing technology that is directly impacting the way legal work is done and the current law firm-client business model. While much remains unsettled, within 10 years Gen AI is likely to change corporate legal departments and law firms in profound and unique ways.
Generative artificial intelligence (Gen AI) isn’t a futuristic technology — it’s here now, and it’s already impacting the legal industry in many ways.
Feb 29 (Reuters) – A new venture by a legal technology entrepreneur and a former Kirkland & Ellis partner says it can use artificial intelligence to help lawyers understand how individual judges think, allowing them to tailor their arguments and improve their courtroom results.
The Toronto-based legal research startup, Bench IQ, was founded by Jimoh Ovbiagele, the co-founder of now-shuttered legal research company ROSS Intelligence, alongside former ROSS senior software engineer Maxim Isakov and former Kirkland bankruptcy partner Jeffrey Gettleman.
Dave told me that he couldn’t have made Borrowing Time without AI—it’s an expensive project that traditional Hollywood studios would never bankroll. But after Dave’s short went viral, major production houses approached him to make it a full-length movie. I think this is an excellent example of how AI is changing the art of filmmaking, and I came out of this interview convinced that we are on the brink of a new creative age.
We dive deep into the world of AI tools for image and video generation, discussing how aspiring filmmakers can use them to validate their ideas, and potentially even secure funding if they get traction. Dave walks me through how he has integrated AI into his movie-making process, and as we talk, we make a short film featuring Nicolas Cage using a haunted roulette ball to resurrect his dead movie career, live on the show.
Last month, I discussed a GPT that I had created for enhancing prompts. Since then, I have been actively using my Prompt Enhancer GPT to produce much more effective outputs. Last week, I did a series of mini-talks on generative AI in different parts of higher education (faculty development, human resources, grants, executive leadership, etc.) and structured each as “5 tips.” I included a final bonus tip in all of them, one that many afterwards told me was probably the most useful, especially because you can only access the Prompt Enhancer GPT if you are paying for ChatGPT.
Effectively integrating generative AI into higher education requires policy development, cross-functional engagement, ethical principles, risk assessments, collaboration with other institutions, and an exploration of diverse use cases.
Creating Guidelines for the Use of Gen AI Across Campus — from campustechnology.com by Rhea Kelly
The University of Kentucky has taken a transdisciplinary approach to developing guidelines and recommendations around generative AI, incorporating input from stakeholders across all areas of the institution. Here, the director of UK’s Center for the Enhancement of Learning and Teaching breaks down the structure and thinking behind that process.
That resulted in a set of instructional guidelines that we released in August of 2023 and updated in December of 2023. We’re also looking at guidelines for researchers at UK, and we’re currently in the process of working with our colleagues in the healthcare enterprise, UK Healthcare, to comb through the additional complexities of this technology in clinical care and to offer guidance and recommendations around those issues.
My experiences match with the results of the above studies. The second study cited above found that 83% of those students who haven’t used AI tools are “not interested in using them,” so it is no surprise that many students have little awareness of their nature. The third study cited above found that, “apart from 12% of students identifying as daily users,” most students’ use cases were “relatively unsophisticated” like summarizing or paraphrasing text.
For those of us in the AI-curious bubble, we need to continually work to stay current, but we also need to recognize that what we take to be “common knowledge” is far from common outside of the bubble.
Despite general familiarity, however, technical knowledge shouldn’t be assumed for district leaders or others in the school community. For instance, it’s critical that any materials related to AI not be written in “techy talk” so they can be clearly understood, said Ann McMullan, project director for the Consortium for School Networking’s EmpowerED Superintendents Initiative.
To that end, CoSN, a nonprofit that promotes technological innovation in K-12, has released an array of AI resources to help superintendents stay ahead of the curve, including a one-page explainer that details definitions and guidelines to keep in mind as schools work with the emerging technology.
Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years — that the artificial intelligence (AI) industry is heading for an energy crisis. It’s an unusual admission. At the World Economic Forum’s annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” he said.
I’m glad he said it. I’ve seen consistent downplaying and denial about the AI industry’s environmental costs since I started publishing about them in 2018. Altman’s admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.
Yesterday, Nvidia reported $22.1 billion in revenue for the fourth quarter of fiscal 2024 (ended January 31, 2024), easily topping Wall Street’s expectations. Revenue grew 265% from a year ago, thanks to the explosive growth of generative AI.
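As a quick sanity check on those figures (my arithmetic, not Nvidia's): $22.1 billion representing 265% year-over-year growth implies year-ago quarterly revenue of roughly $6 billion.

```python
# Back-of-the-envelope check: what year-ago quarterly revenue does
# "$22.1B, up 265% year over year" imply?
q4_fy2024 = 22.1                      # billions USD, as reported
growth = 2.65                         # 265% year-over-year growth
q4_fy2023 = q4_fy2024 / (1 + growth)  # implied year-ago quarter
print(f"Implied year-ago revenue: ${q4_fy2023:.2f}B")
```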
…
He also repeated a notion of “sovereign AI”: countries protecting the data of their users, and companies protecting the data of their employees, by keeping large language models within the borders of the country or the company for safety purposes.
BREAKING: Adobe has created a new 50-person AI research org called CAVA (Co-Creation for Audio, Video, & Animation).
I can’t help but wonder if OpenAI’s Sora has been a wake-up call for Adobe to formalize and accelerate their video and multimodal creation efforts?
Nvidia is building a new type of data centre called the AI factory. Every company (biotech, self-driving, manufacturing, etc.) will need an AI factory.
Jensen is looking forward to foundational robotics and state space models. According to him, foundational robotics could have a breakthrough next year.
The crunch for Nvidia GPUs is here to stay. Nvidia won’t be able to catch up on supply this year, and probably not next year either.
A new generation of GPUs called Blackwell is coming out, and the performance of Blackwell is off the charts.
Nvidia’s business is now roughly 70% inference and 30% training, meaning AI is getting into users’ hands.
Using AI, the team was able to plow through 32.6 million possible battery materials in 80 hours, a task the team estimates would have taken them 20 years to do.
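For a sense of scale (my arithmetic, based on the figures quoted): 32.6 million candidates in 80 hours works out to roughly 113 materials per second, and compressing an estimated 20-year manual effort into 80 hours is a speedup on the order of 2,000x.

```python
# Throughput and speedup implied by the quoted battery-screening figures.
materials = 32_600_000
hours = 80
per_second = materials / (hours * 3600)   # candidates screened per second
manual_hours = 20 * 365 * 24              # 20 years, ignoring leap days
speedup = manual_hours / hours
print(f"{per_second:.0f} materials/second, ~{speedup:.0f}x speedup")
```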
Four days of seminars, lectures and demonstrations at the 39th annual ABA Techshow boiled down to Saturday morning’s grand finale, where panelists rounded up their favorite tech tips and apps. The underlying theme: artificial intelligence.
“It’s an amazing tool, but it’s kind of scary, so watch out,” said Cynthia Thomas, Techshow co-chair and owner of PLMC & Associates, speaking about Sora, the new tool from OpenAI that turns text into video.
Other panelists during the traditional Techshow closer, “60 sites, 60 tips and gadgets and gizmos,” highlighted a wide range of AI-enabled or AI-augmented tools that help users quickly sift through product reviews, generate content, or keep up to date on the latest AI tools. For those seeking non-AI options, they also suggested several devices, websites, tips and apps that have helped them with their practice and with life in general.
ABA Techshow 2024 stressed the importance of ethics in legal technology adoption. Ethics lawyer Stuart I. Teicher warned of potential data breaches and urged attorneys to be proactive in understanding and supervising new tools. Education and oversight are key to maintaining data protection and integrity.
Though it might be more accurate to call TECHSHOW an industry showcase, because with each passing year it seems that more and more of the show involves other tech companies looking to scoop up enterprising new companies. That tone is set by the conference’s opening event: the annual Startup Alley pitch competition.
This year, 15 companies presented. If you were taking a shot every time someone mentioned “AI” then my condolences because you are now dead. If you included “machine learning” or “large language model” then you’ve died, come back as a zombie, and been killed again.
Text to video via OpenAI’s Sora. (I had taken this screenshot on the 15th, but am posting it now.)
We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.
Introducing Sora, our text-to-video model. Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.
At the University of Pennsylvania, undergraduate students in its school of engineering will soon be able to study for a bachelor of science degree in artificial intelligence.
What can one do with an AI degree? The University of Pennsylvania says students will be able to apply the skills they learn in school to build responsible AI tools, develop materials for emerging chips and hardware, and create AI-driven breakthroughs in healthcare through new antibiotics, among other things.
Google on Monday announced plans to help train people in Europe with skills in artificial intelligence, the latest tech giant to invest in preparing workers and economies amid the disruption brought on by technologies they are racing to develop.
The acceleration of AI deployments has gotten so absurdly out of hand that a draft post I started a week ago about a new development is now out of date.
… The Pace is Out of Control
A mere week since Ultra 1.0’s announcement, Google has now introduced us to Ultra 1.5, a model they are clearly positioning to be the leader in the field. Here is the full technical report for Gemini Ultra 1.5, and what it can do is stunning.
[St. Louis, MO, February 14, 2024] – In a bold move that counters the conventions of more traditional schools, Maryville University has unveiled a substantial $21 million multi-year investment in artificial intelligence (AI) and cutting-edge technologies. This groundbreaking initiative is set to transform the higher education experience to be powered by the latest technology to support student success and a five-star experience for thousands of students both on-campus and online.
“The world is adapting to seismic shifts from generative AI,” says Luben Pampoulov, Partner at GSV Ventures. “AI co-pilots, AI tutors, AI content generators—AI is ubiquitous, and differentiation is increasingly critical. This is an impressive group of EdTech companies that are leveraging AI and driving positive outcomes for learners and society.”
Workforce Learning comprises 34% of the list, K-12 29%, Higher Education 24%, Adult Consumer Learning 10%, and Early Childhood 3%. Additionally, 21% of the companies stretch across two or more “Pre-K to Gray” categories. A broader move towards profitability is also evident: the collective gross and EBITDA margin score of the 2024 cohort increased 5% compared to 2023.
Selected from 2,000+ companies around the world based on revenue scale, revenue growth, user reach, geographic diversification, and margin profile, this impressive group is reaching an estimated 3 billion people and generating an estimated $23 billion in revenue.