Along these same lines, see:

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
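
For developers curious what this looks like in practice, below is a minimal sketch (in Python) of calling the computer-use beta through Anthropic’s SDK. It is based on the public beta documentation at launch; the model name, tool types, and beta flag shown are the launch-time values and may change, and the agent loop that actually executes Claude’s requested actions inside a sandboxed virtual machine is only described in comments.

  import anthropic  # pip install anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  response = client.beta.messages.create(
      model="claude-3-5-sonnet-20241022",
      max_tokens=1024,
      tools=[
          {
              "type": "computer_20241022",  # screenshot / mouse / keyboard tool
              "name": "computer",
              "display_width_px": 1024,
              "display_height_px": 768,
          },
          {"type": "bash_20241022", "name": "bash"},                       # shell commands
          {"type": "text_editor_20241022", "name": "str_replace_editor"},  # file edits
      ],
      betas=["computer-use-2024-10-22"],
      messages=[{"role": "user", "content": "Open a browser and check today's weather."}],
  )

  # Claude replies with tool_use blocks (e.g., "take a screenshot," "click at x,y").
  # Your own code must perform those actions in a sandboxed VM, return the results as
  # tool_result messages, and loop until the task is complete.
  print(response.content)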


ZombAIs: From Prompt Injection to C2 with Claude Computer Use — from embracethered.com by Johann Rehberger

A few days ago, Anthropic released Claude Computer Use, a model plus reference code that allows Claude to control a computer. It takes screenshots to make decisions, can run bash commands, and so forth.

It’s cool, but also dangerous: because Claude Computer Use lets the AI run commands on a machine autonomously, a successful prompt injection poses severe risks.

This blog post demonstrates that it’s possible to leverage prompt injection to achieve old-school command and control (C2) when giving novel AI systems access to computers.

We discussed one way to get malware onto a Claude Computer Use host via prompt injection. There are countless others; for example, Claude could be directed to write the malware from scratch and compile it. Yes, it can write C code, compile it, and run it.

TrustNoAI.

And again, remember: do not run unauthorized code on systems that you do not own or are not authorized to operate on.

Also relevant here, see:


Perplexity Grows, GPT Traffic Surges, Gamma Dominates AI Presentations – The AI for Work Top 100: October 2024 — from flexos.work by Daan van Rossum
Perplexity continues to gain users despite recent controversies. Five out of six GPTs see traffic boosts. This month’s highest gainers include Gamma, Blackbox, Runway, and more.


Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report — from ai.wharton.upenn.edu by Jeremy Korst, Stefano Puntoni, & Mary Purk

From a survey of more than 800 senior business leaders, this report’s findings indicate that weekly usage of Gen AI has nearly doubled from 37% in 2023 to 72% in 2024, with significant growth in previously slower-adopting departments like Marketing and HR. Despite this increased usage, businesses still face challenges in determining the full impact and ROI of Gen AI. Sentiment reports indicate leaders have shifted from feelings of “curiosity” and “amazement” to more positive sentiments like “pleased” and “excited,” and concerns about AI replacing jobs have softened. Participants were full-time employees working in large commercial organizations with 1,000 or more employees.


Apple study exposes deep cracks in LLMs’ “reasoning” capabilities — from arstechnica.com by Kyle Orland
Irrelevant red herrings lead to “catastrophic” failure of logical inference.

For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.

The fragility highlighted in these new results helps support previous research suggesting that LLMs’ use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”
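
To make the study’s point concrete, here is a toy example (in Python) of the kind of “red herring” perturbation it describes; the wording and numbers are illustrative, drawn from coverage of the study rather than quoted from the paper. Adding a clause that is irrelevant to the arithmetic should not change the answer, yet the researchers report that models frequently fold it into their calculation.

  # Word problem: "Oliver picks 44 kiwis on Friday and 58 on Saturday. On Sunday he
  # picks double what he picked on Friday, but five of them are a bit smaller than
  # average. How many kiwis does Oliver have?"
  friday, saturday = 44, 58
  sunday = 2 * friday                    # the size remark is irrelevant to the count
  correct = friday + saturday + sunday
  print(correct)                         # 190
  # The reported failure mode: subtracting the irrelevant detail, e.g. 190 - 5 = 185.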


Google CEO says more than a quarter of the company’s new code is created by AI — from businessinsider.in by Hugh Langley

  • More than a quarter of new code at Google is made by AI and then checked by employees.
  • Google is doubling down on AI internally to make its business more efficient.

Top Generative AI Chatbots by Market Share – October 2024 


Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview — from github.blog

We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon.

 

Norway law decrees: Let childhood be childhood — from hechingerreport.org by Jackie Mader
In the Scandinavian country, early childhood education is a national priority, enshrined in law

Ullmann’s conclusion embodies one of Norway’s goals for its citizens: to build a nation of thriving adults by providing childhoods that are joyful, secure and inclusive. Perhaps nowhere is this belief manifested more clearly than in the nation’s approach to early child care. (In Norway, all education for children 5 and under is referred to as “barnehagen,” the local translation of “kindergarten.”) To an American, the Norwegian philosophy, both in policy and in practice, could feel alien. The government’s view isn’t that child care is a place to put children so parents can work, or even to prepare children for the rigors of elementary school. It’s about protecting childhood.

“A really important pillar of Norway’s early ed philosophy is the value of childhood in itself,” said Henrik D. Zachrisson, a professor at the Centre for Research on Equality in Education at the University of Oslo. “Early ed is supposed to be a place where children can be children and have the best childhood possible.”

 

Very creative architecture!


From DSC:
I thought this was very creative! Nice work.


A Curtain-Like Facade Wraps a Seoul Textile Maker in Billowing Brick — from thisiscolossal.com by Kate Mothes and German architecture firm behet bondzio lin architekten

A South Korean fashion brand and textile manufacturer’s headquarters in Seoul gets a stunning new look thanks to German architecture firm behet bondzio lin architekten. Located in Seongsu-dong, a neighborhood historically known for its red brick factory buildings, the new multistory structure defies the material’s traditionally angular application by incorporating an undulating, drapery-like facade.

The architects conceived of a design inspired both by the flow and flexibility of textiles and the consistent rhythm of ocean waves.

 


ChatGPT Voice Mode Is Here: Will It Revolutionize AI Communication?


Advanced Voice Mode – FAQ — from help.openai.com
Learn more about our Advanced Voice capabilities.

Advanced Voice Mode on ChatGPT features more natural, real-time conversations that pick up on and respond with emotion and non-verbal cues.

Advanced Voice Mode on ChatGPT is currently in a limited alpha. Please note that it may make mistakes, and access and rate limits are subject to change.


From DSC:
Think about the impacts/ramifications of global, virtual, real-time language translations!!! This type of technology will create very powerful, new affordances in our learning ecosystems — as well as in business communications, in dealings with governments across the globe, and more!

 

 

The race to deploy GenAI in the legal sector — from sifted.eu by Kai Nicol-Schwarz
LegalFly’s €15m Series A is the latest in a string of raises for European GenAI legaltech startups

Speak to any lawyer and you’ll soon discover that the job is a far cry from the fevered excitement of a courtroom drama. Behind the scenes, there’s an endless number of laborious, typically manual tasks, like drafting, reviewing and negotiating contracts and other legal documents, that have to be done daily.

It was this realisation that led four product managers at dating app giant Tinder, frustrated by what they saw as a lack of AI adoption at the company, to jump ship and found Belgium-based LegalFly last year. The startup is building a generative AI copilot for lawyers which eventually, it says, will be able to automate entire workflows in the legal profession.

“We were looking at what GenAI was good at, which is synthesising data and generating content,” says founder and CEO Ruben Miessen. “What industry works like that? Law, and it does it all in a very manual way.”

“The legal industry is a global behemoth that’s seen minimal innovation since the advent of Microsoft Word in the 90s,” says Carina Namih, partner at Plural. “GenAI — especially with a human in the loop to keep accuracy high — is ideally suited to drafting, editing and negotiating legal documents.”


Legal Technology Company Relativity Announces OpenAI ChatGPT Integration — from lawfuel.com

CHICAGO– July 18 – Relativity, a global legal technology company, today announced it is integrating with OpenAI’s ChatGPT Enterprise Compliance API. The integration adds ChatGPT Enterprise as a Collect in RelativityOne data source, allowing users to seamlessly collect and process human-to-AI conversational data.

“The future around human and AI interaction is changing rapidly, calling for innovative legal data management solutions to include novel data sources, such as conversations with AI agents,” said Chris Brown, Chief Product Officer at Relativity. “In answering that call, we are committed to equipping our community with the tools they need to traverse the evolving future of human-to-AI conversational data and putting users in control of this new data landscape.”

 

Introducing Eureka Labs — “We are building a new kind of school that is AI native.” — by Andrej Karpathy, Previously Director of AI @ Tesla, founding team @ OpenAI

However, with recent progress in generative AI, this learning experience feels tractable. The teacher still designs the course materials, but they are supported, leveraged and scaled with an AI Teaching Assistant who is optimized to help guide the students through them. This Teacher + AI symbiosis could run an entire curriculum of courses on a common platform. If we are successful, it will be easy for anyone to learn anything, expanding education in both reach (a large number of people learning something) and extent (any one person learning a large amount of subjects, beyond what may be possible today unassisted).


After Tesla and OpenAI, Andrej Karpathy’s startup aims to apply AI assistants to education — from techcrunch.com by Rebecca Bellan

Andrej Karpathy, former head of AI at Tesla and researcher at OpenAI, is launching Eureka Labs, an “AI native” education platform. In tech speak, that usually means built from the ground up with AI at its core. And while Eureka Labs’ AI ambitions are lofty, the company is starting with a more traditional approach to teaching.

San Francisco-based Eureka Labs, which Karpathy registered as an LLC in Delaware on June 21, aims to leverage recent progress in generative AI to create AI teaching assistants that can guide students through course materials.


What does it mean for students to be AI-ready? — from timeshighereducation.com by David Joyner
Not everyone wants to be a computer scientist, a software engineer or a machine learning developer. We owe it to our students to prepare them with a full range of AI skills for the world they will graduate into, writes David Joyner

We owe it to our students to prepare them for this full range of AI skills, not merely the end points. The best way to fulfil this responsibility is to acknowledge and examine this new category of tools. More and more tools that students use daily – word processors, email, presentation software, development environments and more – have AI-based features. Practising with these tools is a valuable exercise for students, so we should not prohibit that behaviour. But at the same time, we do not have to just shrug our shoulders and accept however much AI assistance students feel like using.


Teachers say AI usage has surged since the school year started — from eschoolnews.com by Laura Ascione
Half of teachers report an increase in the use of AI and continue to seek professional learning

Fifty percent of educators reported an increase in AI usage, by both students and teachers, over the 2023–24 school year, according to The 2024 Educator AI Report: Perceptions, Practices, and Potential, from Imagine Learning, a digital curriculum solutions provider.

The report offers insight into how teachers’ perceptions of AI use in the classroom have evolved since the start of the 2023–24 school year.


OPINION: What teachers call AI cheating, leaders in the workforce might call progress — from hechingerreport.org by C. Edward Watson and Jose Antonio Bowen
Authors of a new guide explore what AI literacy might look like in a new era

Excerpt (emphasis DSC):

But this very ease has teachers wondering how we can keep our students motivated to do the hard work when there are so many new shortcuts. Learning goals, curriculums, courses and the way we grade assignments will all need to be reevaluated.

The new realities of work also must be considered. A shift in employers’ job postings rewards those with AI skills. Many companies report already adopting generative AI tools or anticipate incorporating them into their workflow in the near future.

A core tension has emerged: Many teachers want to keep AI out of our classrooms, but also know that future workplaces may demand AI literacy.

What we call cheating, business could see as efficiency and progress.

It is increasingly likely that using AI will emerge as an essential skill for students, regardless of their career ambitions, and that action is required of educational institutions as a result.


Teaching Writing With AI Without Replacing Thinking: 4 Tips — by Erik Ofgang
AI has a lot of potential for writing students, but we can’t let it replace the thinking parts of writing, says writing professor Steve Graham

Reconciling these two goals — having AI help students learn to write more efficiently without hijacking the cognitive benefits of writing — should be a key goal of educators. Finding the ideal balance will require more work from both researchers and classroom educators, but Graham shares some initial tips for doing this currently.




Why I ban AI use for writing assignments — from timeshighereducation.com by James Stacey Taylor
Students may see handwriting essays in class as a needlessly time-consuming approach to assignments, but I want them to learn how to engage with arguments, develop their own views and convey them effectively, writes James Stacey Taylor

Could they use AI to generate objections to the arguments they read? Of course. AI does a good job of summarising objections to Singer’s view. But I don’t want students to parrot others’ objections. I want them to think of objections themselves. 

Could AI be useful for them in organising their exegesis of others’ views and their criticisms of them? Yes. But, again, part of what I want my students to learn is precisely what this outsources to the AI: how to organise their thoughts and communicate them effectively. 


How AI Will Change Education — from digitalnative.tech by Rex Woodbury
Predicting Innovation in Education, from Personalized Learning to the Downfall of College 

This week explores how AI will bleed into education, looking at three segments of education worth watching, then examining which business models will prevail.

  1. Personalized Learning and Tutoring
  2. Teacher Tools
  3. Alternatives to College
  4. Final Thoughts: Business Models and Why Education Matters

New Guidance from TeachAI and CSTA Emphasizes Computer Science Education More Important than Ever in an Age of AI — from csteachers.org by CSTA
The guidance features new survey data and insights from teachers and experts in computer science (CS) and AI, informing the future of CS education.

SEATTLE, WA – July 16, 2024 – Today, TeachAI, led by Code.org, ETS, the International Society for Technology in Education (ISTE), Khan Academy, and the World Economic Forum, launches a new initiative in partnership with the Computer Science Teachers Association (CSTA) to support and empower educators as they grapple with the growing opportunities and risks of AI in computer science (CS) education.

The briefs draw on early research and insights from CSTA members, organizations in the TeachAI advisory committee, and expert focus groups to address common misconceptions about AI and offer a balanced perspective on critical issues in CS education, including:

  • Why is it Still Important for Students to Learn to Program?
  • How Are Computer Science Educators Teaching With and About AI?
  • How Can Students Become Critical Consumers and Responsible Creators of AI?
 

The University Student’s Guide To Ethical AI Use  — from studocu.com; with thanks to Jervise Penton at 6XD Media Group for this resource

This comprehensive guide offers:

  • Up-to-date statistics on the current state of AI in universities, how institutions and students are currently using artificial intelligence
  • An overview of popular AI tools used in universities and their limitations as study tools
  • Tips on how to ethically use AI and how to maximize its capabilities for students
  • Current existing punishment and penalties for cheating using AI
  • A checklist of questions to ask yourself, before, during, and after an assignment to ensure ethical use

Some of the key facts you might find interesting are:

  • The total value of AI in education is estimated to reach $53.68 billion by the end of 2032.
  • 68% of students say using AI has impacted their academic performance positively.
  • Educators using AI tools say the technology helps speed up their grading process by as much as 75%.
 

[Report] The Top 100 AI for Work – April 2024 — from flexos.work; with thanks to Daan van Rossum for this resource
AI is helping us work up to 41% more effectively, according to recent Bain research. We review the platforms to consider for ourselves and our teams.

Following our AI Top 150, we spent the past few weeks analyzing data on the top AI platforms for work. This report shares key insights, including the AI tools you should consider adopting to work smarter, not harder.

While there is understandable concern about AI in the work context, the platforms in this list paint a different picture. It shows a future of work where people can do what humans are best suited for while offloading repetitive, digital tasks to AI.

This will fuel the notion that it’s not AI that takes your job but a supercharged human with an army of AI tools and agents. This should be a call to action for every working person and business leader reading this.

 

Dr Abigail Rekas, Lawyer & Lecturer at the School of Law, University of Galway

Abigail is a lecturer on two of the Law micro-credentials at University of Galway – Lawyering Technology & Innovation and Law & Analytics. Micro-credentials are short, flexible courses designed to fit around your busy life! They are designed in collaboration with industry to meet specific skills needs and are accredited by leading Irish universities.

Visit: universityofgalway.ie/courses/micro-credentials/


The Implications of Generative AI: From the Delivery of Legal Services to the Delivery of Justice — from iaals.du.edu

The potential for AI’s impact is broad, as it has the ability to impact every aspect of human life, from home to work. It will impact our relationships to everything and everyone in our world. The implications for generative AI on the legal system, from how we deliver legal services to how we deliver justice, will be just as far reaching.

[N]ow we face the latest technological frontier: artificial intelligence (AI).… Law professors report with both awe and angst that AI apparently can earn Bs on law school assignments and even pass the bar exam. Legal research may soon be unimaginable without it. AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as obviously it risks invading privacy interests and dehumanizing the law.

When you can no longer sell the time it takes to achieve a client’s outcome, then you must sell the outcome itself and the client’s experience of getting there. That completely changes the dynamics of what law firms are all about.


Preparing the Next Generation of Tech-Ready Lawyers — from news.gsu.edu
Legal Analytics and Innovation Initiative Gives Students a Competitive Advantage

Georgia State University College of Law faculty understand this need and designed the Legal Analytics & Innovation Initiative (LAII) to equip students with the competitive skills desired by law firms and other companies that align with the emerging technological environment.

“As faculty, we realized we need to be forward-thinking about incorporating technology into our curriculum. Students must understand new areas of law that arise from or are significantly altered by technological advances, like cybersecurity, privacy and AI. They also must understand how these advances change the practice of law,” said Kris Niedringhaus, associate dean for Law Library, Information Services, Legal Technology & Innovation.


The Imperative Of Identifying Use Cases In Legal Tech: A Guiding Light For Innovation In The Age Of AI — from abovethelaw.com by Olga V. Mack
In the quest to integrate AI and legal technology into legal practice, use cases are not just important but indispensable.

As the legal profession continues to navigate the waters of digital transformation, the importance of use cases stands as a beacon guiding the journey. They are the litmus test for the practical value of technology, ensuring that innovations not only dazzle with potential but also deliver tangible benefits. In the quest to integrate AI and legal technology into legal practice, use cases are not just important but indispensable.

The future of legal tech is not about technology for technology’s sake. It’s about thoughtful, purpose-driven innovation that enhances the practice of law, improves client outcomes, and upholds the principles of justice. Use cases are the roadmap for this future, charting a course for technology that is meaningful, impactful, and aligned with the noble pursuit of law.

 


[Report] Generative AI Top 150: The World’s Most Used AI Tools (Feb 2024) — from flexos.work by Daan van Rossum
FlexOS.work surveyed Generative AI platforms to reveal which get used most. While ChatGPT reigns supreme, countless AI platforms are used by millions.

As the FlexOS research study “Generative AI at Work” concluded based on a survey amongst knowledge workers, ChatGPT reigns supreme.

2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch.
As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.

With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.



Artificial Intelligence Act: MEPs adopt landmark law — from europarl.europa.eu

  • Safeguards on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations


The untargeted scraping of facial images from CCTV footage to create facial recognition databases will be banned © Alexander / Adobe Stock


A New Surge in Power Use Is Threatening U.S. Climate Goals — from nytimes.com by Brad Plumer and Nadja Popovich
A boom in data centers and factories is straining electric grids and propping up fossil fuels.

Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.

Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.


OpenAI and the Fierce AI Industry Debate Over Open Source — from bloomberg.com by Rachel Metz

The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?

The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.


Researchers develop AI-driven tool for near real-time cancer surveillance — from medicalxpress.com by Mark Alewine; via The Rundown AI
Artificial intelligence has delivered a major win for pathologists and researchers in the fight for improved cancer treatments and diagnoses.

In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequenced AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.


 

Building Better Civil Justice Systems Isn’t Just About The Funding — from lawnext.com by Jess Lu and Mark Chandler

Justice tech — legal tech that helps low-income folks with no or some ability to pay, that assists the lawyers who serve those folks, and that makes the courts more efficient and effective — must contend with a higher hurdle than wooing Silicon Valley VCs: the civil justice system itself.

A checkerboard of technology systems and data infrastructures across thousands of local court jurisdictions makes it nearly impossible to develop tools with the scale needed to be sustainable. Courts are themselves a key part of the access to justice problem: opaque, duplicative and confusing court forms and burdensome filing processes make accessing the civil justice system deeply inefficient for the sophisticated, and an impenetrable maze for the 70+% of civil litigants who don’t have a lawyer.


Speaking of legaltech, also see:


“Noxtua,” Europe’s first sovereign legal AI — from eureporter.co

Noxtua, the first sovereign European legal AI with its proprietary Language Model, allows lawyers in corporations and law firms to benefit securely from the advantages of generative AI. The Berlin-based AI startup Xayn and the largest German business law firm CMS are developing Noxtua as a Legal AI with its own Legal Large Language Model and AI assistant. Lawyers from corporations and law firms can use the Noxtua chat to ask questions about legal documents, analyze them, check them for compliance with company guidelines, (re)formulate texts, and have summaries written. The Legal Copilot, which specializes in legal texts, stands out as an independent and secure alternative from Europe to the existing US offerings.


Generative AI in the legal industry: The 3 waves set to change how the business works — from thomsonreuters.com by Tom Shepherd and Stephanie Lomax

Gen AI is game-changing technology, directly impacting the way legal work is done and the current law firm-client business model; and while much remains unsettled, within 10 years, Gen AI is likely to change corporate legal departments and law firms in profound and unique ways

Generative artificial intelligence (Gen AI) isn’t a futuristic technology — it’s here now, and it’s already impacting the legal industry in many ways.


Hmmmm….on this next one…

New legal AI venture promises to show how judges think — from reuters.com by Sara Merken

Feb 29 (Reuters) – A new venture by a legal technology entrepreneur and a former Kirkland & Ellis partner says it can use artificial intelligence to help lawyers understand how individual judges think, allowing them to tailor their arguments and improve their courtroom results.

The Toronto-based legal research startup, Bench IQ, was founded by Jimoh Ovbiagele, the co-founder of now-shuttered legal research company ROSS Intelligence, alongside former ROSS senior software engineer Maxim Isakov and former Kirkland bankruptcy partner Jeffrey Gettleman.


 

Announcing the 2024 GSV 150: The Top Growth Companies in Digital Learning & Workforce Skills — from prnewswire.com with information provided by ASU+GSV Summit

“The world is adapting to seismic shifts from generative AI,” says Luben Pampoulov, Partner at GSV Ventures. “AI co-pilots, AI tutors, AI content generators—AI is ubiquitous, and differentiation is increasingly critical. This is an impressive group of EdTech companies that are leveraging AI and driving positive outcomes for learners and society.”

Workforce Learning comprises 34% of the list, K-12 29%, Higher Education 24%, Adult Consumer Learning 10%, and Early Childhood 3%. Additionally, 21% of the companies stretch across two or more “Pre-K to Gray” categories. A broader move towards profitability is also evident: the collective gross and EBITDA margin score of the 2024 cohort increased 5% compared to 2023.

See the list at https://www.asugsvsummit.com/gsv-150

Selected from 2,000+ companies around the world based on revenue scale, revenue growth, user reach, geographic diversification, and margin profile, this impressive group is reaching an estimated 3 billion people and generating an estimated $23 billion in revenue.

 

My 40 Most-Read Blog Posts This Year Tell A Story Of A Legal Industry Consumed With Generative AI — from lawnext.com by Bob Ambrogi

Look over this list of my blog posts that were most popular this year, and there is no doubt about the topic that most captivated the legal industry. Of my 40 most-read posts of 2023, 30 directly involved generative AI and others implicated it.


What I’m watching for in 2024 — from alexofftherecord.com by Alex Su
My view of the legal ecosystem, and why I’m going to pay close attention to generative AI, new Biglaw offerings, and the rise of “independent” attorneys

Today, instead of sharing my predictions about what’s coming to the legal industry, I’ll share where I think the most action will take place next year. In short, it’s (1) generative AI; (2) Biglaw firms adding low-cost capabilities; and (3) strong bench talent among independent attorneys. Before I get into why, I’ll share some observations on the past, and the trends I’m seeing take place right now.


Navigating Ediscovery and AI in Legal Tech – 2023 Trends — from jdsupra.com

In 2023, the legal landscape has been significantly shaped by two key trends: the rapid evolution of Artificial Intelligence (AI) and the advancements in ediscovery. These developments have not only transformed legal processes but also presented new challenges and opportunities for legal professionals. As we delve into this first part of our series, we examine the top blogs that have been at the forefront of these trends. These articles offer a window into the current state of legal tech, providing invaluable insights into how these technological advances are reshaping the practice of law.


The Metaverse: A new dimension for legal services — from connections.nortonrosefulbright.com by Lena Haffner

Some of the topics mentioned include:

  • Virtual law offices
  • Enhanced client communication
  • Virtual pitches
  • Client education and workshops
  • Virtual asset inspections and tours
  • Enhanced due diligence

The Digital Era: How Technology is Changing the Way We Hire a Lawyer — from medium.com by Hire 4

In the fast-paced world of the digital era, technology is revolutionizing almost every aspect of our lives, and the legal field is no exception. The traditional process to hire a lawyer, once dominated by word-of-mouth referrals and physical law firm visits, is undergoing a transformation. This shift is not just changing how clients find and interact with legal professionals, but also how legal services are delivered and experienced. In this blog, we delve into the ways technology is reshaping the landscape of hiring a lawyer.


Finding The Right Virtual Assistant For Your Legal Practice — from forbes.com by Raquel Gomes

A virtual assistant can perform administrative tasks for a firm, from client communication and appointment scheduling to legal research, contract management and accounting. Delegating these tasks can be an effective way to free up lawyers to do what they do best. Based on my work with virtual assistants, here are some of the potential benefits and also tips on how you can find a VA who best fits the needs of your practice.


Addendum on 1/2/24:

YEAR IN REVIEW: The top legal technology trends of 2023 — from abajournal.com by Nicole Black

The changing mindset of lawyers toward technology
In 2023, there was a noticeable shift in the mindset of legal professionals. While its direct health impacts remain uncertain, the pandemic’s influence in accelerating technology adoption has been undeniable. Lawyers and judges, traditionally viewed as tech-averse, are now embracing tools like Zoom, iPads and smartwatches with surprising enthusiasm.

Post-pandemic, as lawyers returned to offices—often in a hybrid model—they displayed a newfound curiosity about technology. This change in attitude came at an opportune moment, coinciding with the introduction of new generative AI technologies in the legal field even as legal technology funding declined.


Addendum on 1/2/24:

US Supreme Court’s Roberts urges ‘caution’ as AI reshapes legal field — from reuters.com by John Kruzel

WASHINGTON, Dec 31 (Reuters) – Artificial intelligence represents a mixed blessing for the legal field, U.S. Supreme Court Chief Justice John Roberts said in a year-end report published on Sunday, urging “caution and humility” as the evolving technology transforms how judges and lawyers go about their work.

Roberts struck an ambivalent tone in his 13-page report. He said AI had potential to increase access to justice for indigent litigants, revolutionize legal research and assist courts in resolving cases more quickly and cheaply while also pointing to privacy concerns and the current technology’s inability to replicate human discretion.

“I predict that human judges will be around for a while,” Roberts wrote. “But with equal confidence I predict that judicial work – particularly at the trial level – will be significantly affected by AI.”

 
 

The Evolution of Collaboration: Unveiling the EDUCAUSE Corporate Engagement Program — from er.educause.edu

The program is designed to strengthen the collaboration between private industry and higher education institutions—and evolve the higher education technology market. The new program will do so by taking the following actions:

  • Giving higher education professionals better access to corporate thought leaders who can help create change at their institutions
  • Educating corporate partners on the nuances of higher education and the major challenges it faces so that they can help provide meaningful solutions
  • Giving the EDUCAUSE staff and leadership better access to corporate change-makers in order to advocate for change on behalf of our institutional community
  • Providing the institutional community with higher-quality content and services from companies that are deeply invested in the success of higher education
  • Providing the corporate community with custom-built packages that allow more meaningful connections with the institutional community—not only at our in-person events but also through online opportunities year-round

By building better bridges between our corporate and institutional communities, we can help accelerate our shared mission of furthering the promise of higher education.


Speaking of collaborations, also see:

Could the U.S. become an “Apprentice Nation?” — from Michael B. Horn and Ryan Craig

Intermediaries do the heavy lifting for the employers.


Bottom line: As I discussed with Michael later in the show, we already have the varied system that Leonhardt imagines—it’s just that it has often come about through chaos and neglect. Just like we didn’t say to 8th graders a century ago, “go find your own high school,” we need to design a post-high school system with clear and well-designed pathways that include:

  1. Apprenticeships outside of the building trades so students can learn a variety of jobs by doing the job.
  2. Short-term certificates that lead to jobs without necessarily having the college degree immediately, but having the option to return for a college degree later on.
  3. Transfer pathways where credits earned in high school really count in college and the move from two-year college to any four-year institution is seamless.

Listen to the complete episode here and subscribe to the podcast.

 
© 2024 | Daniel Christian