The 2024 Lawdragon 100 Leading AI & Legal Tech Advisors — from lawdragon.com by Katrina Dewey

These librarians, entrepreneurs, lawyers and technologists built the world where artificial intelligence threatens to upend life and law as we know it – and are now at the forefront of the battles raging within.

To create this first-of-its-kind guide, we cast a wide net with dozens of leaders in this area, took submissions, and consulted with some of the most esteemed gurus in legal tech. We also researched the cases most likely to have the biggest impact on AI, unearthing the dozen or so top trial lawyers tapped to lead the battles. Many of them bring copyright or IP backgrounds, and more than a few are Bay Area based. Those denoted with an asterisk are members of our Hall of Fame.


Free Legal Research Startup descrybe.ai Now Has AI Summaries of All State Supreme and Appellate Opinions — from lawnext.com by Bob Ambrogi

descrybe.ai, a year-old legal research startup focused on using artificial intelligence to provide free and easy access to court opinions, has completed its goal of creating AI-generated summaries of all available state supreme and appellate court opinions from throughout the United States.

descrybe.ai describes its mission as democratizing access to legal information and leveling the playing field in legal research, particularly for smaller-firm lawyers, journalists, and members of the public.


 


[Report] Generative AI Top 150: The World’s Most Used AI Tools (Feb 2024) — from flexos.work by Daan van Rossum
FlexOS.work surveyed Generative AI platforms to reveal which get used most. While ChatGPT reigns supreme, countless AI platforms are used by millions.

As the FlexOS research study “Generative AI at Work” concluded based on a survey amongst knowledge workers, ChatGPT reigns supreme.

2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch.
As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.

With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.



Artificial Intelligence Act: MEPs adopt landmark law — from europarl.europa.eu

  • Safeguards on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations


The untargeted scraping of facial images from CCTV footage to create facial recognition databases will be banned.


A New Surge in Power Use Is Threatening U.S. Climate Goals — from nytimes.com by Brad Plumer and Nadja Popovich
A boom in data centers and factories is straining electric grids and propping up fossil fuels.

Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.

Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.


OpenAI and the Fierce AI Industry Debate Over Open Source — from bloomberg.com by Rachel Metz

The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?

The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that, on the surface, might sound pretty clear. But actual definitions are both complex and controversial.


Researchers develop AI-driven tool for near real-time cancer surveillance — from medicalxpress.com by Mark Alewine; via The Rundown AI
Artificial intelligence has delivered a major win for pathologists and researchers in the fight for improved cancer treatments and diagnoses.

In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequenced AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.


 

Also see:

Cognition Labs Blog

 

A View into the Generative AI Legal Landscape 2024 — from law.stanford.edu by Megan Ma, Aparna Sinha,  Ankit Tandon, & Jennifer Richards

Excerpt (emphasis DSC):

Some key observations and highlights:

  1. Emerging technical solutions are addressing the main challenges of using Generative AI in legal applications, such as lack of consistency and accuracy, limited explainability, privacy concerns, and difficulty in obtaining and training models on legal domain data.
  2. Structural impediments in the legal industry, such as the billable hour, lack of standardization, vendor dependence, and incumbent control, moderate the success of generative AI startups.
  3. Our defined “client-facing” LegalTech market is segmented into three broad lines of work: Research and Analysis, Document Review and Drafting, and Litigation. We view the total LegalTech market in the United States to be estimated at ~$13B in 2023, with litigation being the largest category.
  4. LegalTech incumbents play a significant role in the adoption of generative AI technologies, often opting for market consolidation through partnerships or acquisitions rather than building solutions organically.
  5. Future evolution in LegalTech may involve specialization in areas such as patent and IP, immigration, insurance, and regulatory compliance. There is also potential for productivity tools and access to legal services, although the latter faces structural challenges related to the Unauthorized Practice of Law (UPL).

Fresh Voices on Legal Tech with Tessa Manuello — from legaltalknetwork.com by Dennis Kennedy and Tom Mighell

EPISODE NOTES
Creative thinking and design elements can help you elevate your legal practice and develop more meaningful solutions for clients. Dennis and Tom welcome Tessa Manuello to discuss her insights on legal technology with a particular focus on creative design adaptations for lawyers. Tessa discusses the tech learning process for attorneys and explains how a more creative approach for both learning and implementing tech can help lawyers make better use of current tools, AI included.


International Women’s Day: Kriti Sharma Calls for More Women Working in AI, LegalTech — from legalcurrent.com

In honor of International Women’s Day, Sharma discusses on LinkedIn the need for more female role models in the tech sector as AI opens up traditional career pathways and creates opportunities to welcome more women to the space.

Sharma invited Thomson Reuters female leaders working in legal technology to share their perspectives, including Rawia Ashraf, Emily Colbert, and Anu Dodda.


 

Amid explosive demand, America is running out of power — from washingtonpost.com by Evan Halper
AI and the boom in clean-tech manufacturing are pushing America’s power grid to the brink. Utilities can’t keep up.

Vast swaths of the United States are at risk of running short of power as electricity-hungry data centers and clean-technology factories proliferate around the country, leaving utilities and regulators grasping for credible plans to expand the nation’s creaking power grid.

A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing. Tech firms like Amazon, Apple, Google, Meta and Microsoft are scouring the nation for sites for new data centers, and many lesser-known firms are also on the hunt.


The Obscene Energy Demands of A.I. — from newyorker.com by Elizabeth Kolbert
How can the world reach net zero if it keeps inventing new ways to consume energy?

“There’s a fundamental mismatch between this technology and environmental sustainability,” de Vries said. Recently, the world’s most prominent A.I. cheerleader, Sam Altman, the C.E.O. of OpenAI, voiced similar concerns, albeit with a different spin. “I think we still don’t appreciate the energy needs of this technology,” Altman said at a public appearance in Davos. He didn’t see how these needs could be met, he went on, “without a breakthrough.” He added, “We need fusion or we need, like, radically cheaper solar plus storage, or something, at massive scale—like, a scale that no one is really planning for.”


A generative AI reset: Rewiring to turn potential into value in 2024 — from mckinsey.com by Eric Lamarre, Alex Singla, Alexander Sukharevsky, and Rodney Zemmel; via Philippa Hardman
The generative AI payoff may only come when companies do deeper organizational surgery on their business.

  • Figure out where gen AI copilots can give you a real competitive advantage
  • Upskill the talent you have but be clear about the gen-AI-specific skills you need
  • Form a centralized team to establish standards that enable responsible scaling
  • Set up the technology architecture to scale
  • Ensure data quality and focus on unstructured data to fuel your models
  • Build trust and reusability to drive adoption and scale

AI Prompt Engineering Is Dead — from spectrum.ieee.org
Long live AI prompt engineering

Since ChatGPT dropped in the fall of 2022, everyone and their donkey has tried their hand at prompt engineering—finding a clever way to phrase your query to a large language model (LLM) or AI art or video generator to get the best results or sidestep protections. The Internet is replete with prompt-engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.

However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.


What the birth of the spreadsheet teaches us about generative AI — from timharford.com by Tim Harford; via Sam DeBrule

There is one very clear parallel between the digital spreadsheet and generative AI: both are computer apps that collapse time. A task that might have taken hours or days can suddenly be completed in seconds. So accept for a moment the premise that the digital spreadsheet has something to teach us about generative AI. What lessons should we absorb?

It’s that pace of change that gives me pause. Ethan Mollick, author of the forthcoming book Co-Intelligence, tells me “if progress on generative AI stops now, the spreadsheet is not a bad analogy”. We’d get some dramatic shifts in the workplace, a technology that broadly empowers workers and creates good new jobs, and everything would be fine. But is it going to stop any time soon? Mollick doubts that, and so do I.


 

 

A Notre Dame Senior’s Perspective on AI in the Classroom — from learning.nd.edu — by Sarah Ochocki; via Derek Bruff on LinkedIn

At this moment, as a college student trying to navigate the messy, fast-developing, and varied world of generative AI, I feel more confused than ever. I think most of us can share that feeling. There’s no roadmap on how to use AI in education, and there aren’t the typical years of proof to show something works. However, this promising new tool is sitting in front of us, and we would be foolish to not use it or talk about it.

I’ve used it to help me understand sample code I was viewing, rather than mindlessly trying to copy what I was trying to learn from. I’ve also used it to help prepare for a debate, practicing making counterarguments to the points it came up with.

AI alone cannot teach something; there needs to be critical interaction with the responses we are given. However, this is something that is true of any form of education. I could sit in a lecture for hours a week, but if I don’t do the homework or critically engage with the material, I don’t expect to learn anything.


A Map of Generative AI for Education — from medium.com by Laurence Holt; via GSV
An update to our map of the current state-of-the-art


Last ones (for now):


Survey: K-12 Students Want More Guidance on Using AI — from govtech.com by Lauraine Langreo
Research from the nonprofit National 4-H Council found that most 9- to 17-year-olds have an idea of what AI is and what it can do, but most would like help from adults in learning how to use different AI tools.

“Preparing young people for the workforce of the future means ensuring that they have a solid understanding of these new technologies that are reshaping our world,” Jill Bramble, the president and CEO of the National 4-H Council, said in a press release.

AI School Guidance Document Toolkit, with Free Comprehensive Review — from stefanbauschard.substack.com by Stefan Bauschard and Dr. Sabba Quidwai

 

This week in 5 numbers: Another faith-based college plans to close — by Natalie Schwartz
We’re rounding up some of our top recent stories, from Notre Dame College’s planned closure to Valparaiso’s potential academic cuts.

BY THE NUMBERS

  • 1,444
    The number of students who were enrolled at Notre Dame College in fall 2022, down 37% from 2014. The Roman Catholic college recently said it would close after the spring term, citing declining enrollment, along with rising costs and significant debt.
  • 28
    The number of academic programs that Valparaiso University may eliminate. Eric Johnson, the Indiana institution’s provost, said it offers too many majors, minors and graduate degrees in relation to its enrollment.

A couple of other items re: higher education that caught my eye were:

Universities Expect to Use More Tech in Future Classrooms—but Don’t Know How — from insidehighered.com by Lauren Coffey

University administrators see the need to implement education technology in their classrooms but are at a loss regarding how to do so, according to a new report.

The College Innovation Network released its first CIN Administrator EdTech survey today, which revealed that more than half (53 percent) of the 214 administrators surveyed do not feel extremely confident in choosing effective ed-tech products for their institutions.

“While administrators are excited about offering new ed-tech tools, they are lacking knowledge and data to help them make informed decisions that benefit students and faculty,” Omid Fotuhi, director of learning and innovation at WGU Labs, which funds the network, said in a statement.

From DSC:
I always appreciated our cross-disciplinary team at Calvin (then College). As we looked at enhancing our learning spaces, we had input from the Teaching & Learning Group, IT, A/V, the academic side of the house, and facilities. It was definitely a team-based approach. (As I think about it, it would have been helpful to have more channels for student feedback as well.)


Per Jeff Selingo:

Optionality. In my keynote, I pointed out that the academic calendar and credit hour in higher ed are like “shelf space” on the old television schedule that has been upended by streaming. In much the same way, we need similar optionality to meet the challenges of higher ed right now: from how students access learning (in-person, hybrid, online), to credentials (certificates, degrees), to how those experiences stack together for lifelong learning.

Culture in institutions. The common thread throughout the conference was how the culture of institutions (both universities and governments) needs to change so our structures and practices can evolve. Too many people in higher ed right now are employing a scarcity mindset and seeing every change as a zero-sum game. If you’re not happy about the present, as many attendees suggested, you’re not going to be excited about the future.

 

Immersive virtual reality tackles depression stigma says study — from inavateonthenet.net

A new study from the University of Tokyo has highlighted the positive effect that immersive virtual reality experiences have for depression anti-stigma and knowledge interventions compared to traditional video.

The study found that depression knowledge improved for both interventions; however, only the immersive VR intervention reduced stigma. In the VR-powered intervention, depression knowledge scores were positively associated with a neural response in the brain indicative of empathetic concern. The traditional video intervention saw the inverse, with participants demonstrating a neural response suggestive of distress.

From DSC:
This study makes me wonder why we haven’t heard of more VR-based uses in diversity training. I’m surprised we haven’t heard of situations where we are put in someone else’s moccasins, so to speak. We could have a lot more empathy for someone — and better understand their situation — if we were to experience life as others might experience it. In the process, we would likely uncover some hidden biases that we have.


Addendum on 3/12/24:

Augmented reality provides benefit for Parkinson’s physical therapy — from inavateonthenet.net

 

AI Is Becoming an Integral Part of the Instructional Design Process — from drphilippahardman.substack.com by Dr. Philippa Hardman
What I learned from 150 interviews with instructional designers

1. AI already plays a central part in the instructional design process
A whopping 95.3% of the instructional designers interviewed said they use AI in their day to day work. Those who don’t use AI cite access or permission issues as the primary reason that they haven’t integrated AI into their process.

2. AI is predominantly used at the design and development stages of the instructional design process 
When mapped to the ADDIE process, the breakdown of use cases goes as follows:

  • Analysis: 5.5% of use cases
  • Design: 32.1%
  • Development: 53.2%
  • Implementation: 1.8%
  • Evaluation: 7.3%

Speaking of AI in our learning ecosystems, also see:


Will AI Use in Schools Increase Next Year? 56 Percent of Educators Say Yes — from edweek.org by Alyson Klein

The majority of educators expect use of artificial intelligence tools will increase in their school or district over the next year, according to an EdWeek Research Center survey.

Applying Multimodal AI to L&D — from learningguild.com by Sarah Clark
We’re just starting to see Multimodal AI systems hit the spotlight. Unlike the text-based and image-generation AI tools we’ve seen before, multimodal systems can absorb and generate content in multiple formats – text, image, video, audio, etc.

 

 

How AI Is Already Transforming the News Business — from politico.com by Jack Shafer
An expert explains the promise and peril of artificial intelligence.

The early vibrations of AI have already been shaking the newsroom. One downside of the new technology surfaced at CNET and Sports Illustrated, where editors let AI run amok with disastrous results. Elsewhere in news media, AI is already writing headlines, managing paywalls to increase subscriptions, performing transcriptions, turning stories into audio feeds, discovering emerging stories, fact checking, copy editing and more.

Felix M. Simon, a doctoral candidate at Oxford, recently published a white paper about AI’s journalistic future that eclipses many early studies. Swinging a bat from a crouch that is neither doomer nor Utopian, Simon heralds both the downsides and promise of AI’s introduction into the newsroom and the publisher’s suite.

Unlike earlier technological revolutions, AI is poised to change the business at every level. It will become — if it already isn’t — the beginning of most story assignments and will become, for some, the new assignment editor. Used effectively, it promises to make news more accurate and timely. Used frivolously, it will spawn an ocean of spam. Wherever the production and distribution of news can be automated or made “smarter,” AI will surely step up. But the future has not yet been written, Simon counsels. AI in the newsroom will be only as bad or good as its developers and users make it.

Also see:

Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena — from cjr.org by Felix Simon

TABLE OF CONTENTS



EMO: Emote Portrait Alive – Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions — from humanaigc.github.io by Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo

We propose EMO, an expressive audio-driven portrait-video generation framework. Given a single reference image and vocal audio (e.g., talking or singing), our method can generate vocal avatar videos with expressive facial expressions and various head poses; it can also generate videos of any duration, depending on the length of the input audio.


Adobe previews new cutting-edge generative AI tools for crafting and editing custom audio — from blog.adobe.com by the Adobe Research Team

New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI music generation and editing tool, Project Music GenAI Control allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs.

“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.


How AI copyright lawsuits could make the whole industry go extinct — from theverge.com by Nilay Patel
The New York Times’ lawsuit against OpenAI is part of a broader, industry-shaking copyright challenge that could define the future of AI.

There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use.


FCC officially declares AI-voiced robocalls illegal — from techcrunch.com by Devin Coldewey

The FCC’s war on robocalls has gained a new weapon in its arsenal with the declaration of AI-generated voices as “artificial” and therefore definitely against the law when used in automated calling scams. It may not stop the flood of fake Joe Bidens that will almost certainly trouble our phones this election season, but it won’t hurt, either.

The new rule, contemplated for months and telegraphed last week, isn’t actually a new rule — the FCC can’t just invent them with no due process. Robocalls are just a new term for something largely already prohibited under the Telephone Consumer Protection Act: artificial and pre-recorded messages being sent out willy-nilly to every number in the phone book (something that still existed when they drafted the law).


EIEIO…Chips Ahoy! — from dashmedia.co by Michael Moe, Brent Peus, and Owen Ritz


Here Come the AI Worms — from wired.com by Matt Burgess
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

 

Building Better Civil Justice Systems Isn’t Just About The Funding — from lawnext.com by Jess Lu and Mark Chandler

Justice tech — legal tech that helps low-income folks with no or some ability to pay, that assists the lawyers who serve those folks, and that makes the courts more efficient and effective — must contend with a higher hurdle than wooing Silicon Valley VCs: the civil justice system itself.

A checkerboard of technology systems and data infrastructures across thousands of local court jurisdictions makes it nearly impossible to develop tools with the scale needed to be sustainable. Courts are themselves a key part of the access to justice problem: opaque, duplicative and confusing court forms and burdensome filing processes make accessing the civil justice system deeply inefficient for the sophisticated, and an impenetrable maze for the 70+% of civil litigants who don’t have a lawyer.


Speaking of legaltech, also see:


“Noxtua,” Europe’s first sovereign legal AI — from eureporter.co

Noxtua, the first sovereign European legal AI with its proprietary Language Model, allows lawyers in corporations and law firms to benefit securely from the advantages of generative AI. The Berlin-based AI startup Xayn and the largest German business law firm CMS are developing Noxtua as a Legal AI with its own Legal Large Language Model and AI assistant. Lawyers from corporations and law firms can use the Noxtua chat to ask questions about legal documents, analyze them, check them for compliance with company guidelines, (re)formulate texts, and have summaries written. The Legal Copilot, which specializes in legal texts, stands out as an independent and secure alternative from Europe to the existing US offerings.


Generative AI in the legal industry: The 3 waves set to change how the business works — from thomsonreuters.com by Tom Shepherd and Stephanie Lomax

Gen AI is game-changing technology, directly impacting the way legal work is done and the current law firm-client business model; and while much remains unsettled, within 10 years, Gen AI is likely to change corporate legal departments and law firms in profound and unique ways

Generative artificial intelligence (Gen AI) isn’t a futuristic technology — it’s here now, and it’s already impacting the legal industry in many ways.


Hmmmm….on this next one…

New legal AI venture promises to show how judges think — from reuters.com by Sara Merken

Feb 29 (Reuters) – A new venture by a legal technology entrepreneur and a former Kirkland & Ellis partner says it can use artificial intelligence to help lawyers understand how individual judges think, allowing them to tailor their arguments and improve their courtroom results.

The Toronto-based legal research startup, Bench IQ, was founded by Jimoh Ovbiagele, the co-founder of now-shuttered legal research company ROSS Intelligence, alongside former ROSS senior software engineer Maxim Isakov and former Kirkland bankruptcy partner Jeffrey Gettleman.


 

How a Hollywood Director Uses AI to Make Movies — from every.to by Dan Shipper
Dave Clarke shows us the future of AI filmmaking

Dave told me that he couldn’t have made Borrowing Time without AI—it’s an expensive project that traditional Hollywood studios would never bankroll. But after Dave’s short went viral, major production houses approached him to make it a full-length movie. I think this is an excellent example of how AI is changing the art of filmmaking, and I came out of this interview convinced that we are on the brink of a new creative age.

We dive deep into the world of AI tools for image and video generation, discussing how aspiring filmmakers can use them to validate their ideas, and potentially even secure funding if they get traction. Dave walks me through how he has integrated AI into his movie-making process, and as we talk, we make a short film featuring Nicolas Cage using a haunted roulette ball to resurrect his dead movie career, live on the show.

 

Using Generative AI throughout the Institution — from aiedusimplified.substack.com by Lance Eaton
8 lightning talks on generative AI and how to use it throughout higher education


The magic of AI to help educators with saving time. — from magicschool.ai; via Mrs. Kendall Sajdak


Getting Better Results out of Generative AI — from aiedusimplified.substack.com by Lance Eaton
The prompt to use before you prompt generative AI

Last month, I discussed a GPT that I had created around enhancing prompts. Since then, I have been actively using my Prompt Enhancer GPT to produce much more effective outputs. Last week, I did a series of mini-talks on generative AI in different parts of higher education (faculty development, human resources, grants, executive leadership, etc.) and structured each as “5 tips.” I included a final bonus tip in all of them — a tip that I heard from many afterwards was probably the most useful, especially because you can only access the Prompt Enhancer GPT if you are paying for ChatGPT.


Exploring the Opportunities and Challenges with Generative AI — from er.educause.edu by Veronica Diaz

Effectively integrating generative AI into higher education requires policy development, cross-functional engagement, ethical principles, risk assessments, collaboration with other institutions, and an exploration of diverse use cases.


Creating Guidelines for the Use of Gen AI Across Campus — from campustechnology.com by Rhea Kelly
The University of Kentucky has taken a transdisciplinary approach to developing guidelines and recommendations around generative AI, incorporating input from stakeholders across all areas of the institution. Here, the director of UK’s Center for the Enhancement of Learning and Teaching breaks down the structure and thinking behind that process.

That resulted in a set of instructional guidelines that we released in August of 2023 and updated in December of 2023. We’re also looking at guidelines for researchers at UK, and we’re currently in the process of working with our colleagues in the healthcare enterprise, UK Healthcare, to comb through the additional complexities of this technology in clinical care and to offer guidance and recommendations around those issues.


From Mean Drafts to Keen Emails — from automatedteach.com by Graham Clay

My experiences match with the results of the above studies. The second study cited above found that 83% of those students who haven’t used AI tools are “not interested in using them,” so it is no surprise that many students have little awareness of their nature. The third study cited above found that, “apart from 12% of students identifying as daily users,” most students’ use cases were “relatively unsophisticated” like summarizing or paraphrasing text.

For those of us in the AI-curious bubble, we need to continually work to stay current, but we also need to recognize that what we take to be “common knowledge” is far from common outside of the bubble.


What do superintendents need to know about artificial intelligence? — from k12dive.com by Roger Riddell
District leaders shared strategies and advice on ethics, responsible use, and the technology’s limitations at the National Conference on Education.

Despite general familiarity, however, technical knowledge shouldn’t be assumed for district leaders or others in the school community. For instance, it’s critical that any materials related to AI not be written in “techy talk” so they can be clearly understood, said Ann McMullan, project director for the Consortium for School Networking’s EmpowerED Superintendents Initiative.

To that end, CoSN, a nonprofit that promotes technological innovation in K-12, has released an array of AI resources to help superintendents stay ahead of the curve, including a one-page explainer that details definitions and guidelines to keep in mind as schools work with the emerging technology.


 

Generative AI’s environmental costs are soaring — and mostly secret — from nature.com by Kate Crawford
First-of-its-kind US bill would address the environmental costs of the technology, but there’s a long way to go.

Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years — that the artificial intelligence (AI) industry is heading for an energy crisis. It’s an unusual admission. At the World Economic Forum’s annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” he said.

I’m glad he said it. I’ve seen consistent downplaying and denial about the AI industry’s environmental costs since I started publishing about them in 2018. Altman’s admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.


Get ready for the age of sovereign AI | Jensen Huang interview — from venturebeat.com by Dean Takahashi

Yesterday, Nvidia reported $22.1 billion in revenue for its fourth quarter of fiscal 2024 (ended January 31, 2024), easily topping Wall Street’s expectations. Revenue grew 265% from a year ago, thanks to the explosive growth of generative AI.
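
As a quick back-of-the-envelope check (using only the figures quoted in the article), 265% year-over-year growth means the quarter’s revenue is 3.65x the year-ago quarter, implying roughly $6.1 billion a year earlier:

```python
# Sanity check on the growth figure: 265% year-over-year growth
# means current revenue = prior revenue * (1 + 2.65).
current_revenue_bn = 22.1  # Q4 FY2024 revenue, in $ billions
growth_pct = 265

prior_revenue_bn = current_revenue_bn / (1 + growth_pct / 100)
print(f"Implied year-ago quarterly revenue: ~${prior_revenue_bn:.1f}B")
# ~ $6.1B
```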

He also repeated a notion about “sovereign AI”: countries protecting the data of their users, and companies protecting the data of their employees, by keeping large language models contained within the borders of the country or the company for safety purposes.



Yikes, Google — from theneurondaily.com by Noah Edelman
PLUS: racially diverse Nazis…WTF?!

Google shoots itself in the foot.
Last week was the best AND worst week for Google re AI.

The good news is that its upcoming Gemini 1.5 Pro model showcases remarkable capabilities with its expansive context window (details forthcoming).

The bad news is Google’s AI chatbot “Gemini” is getting A LOT of heat after generating some outrageous responses. Take a look:

Also from the Daily:

  • Perplexity just dropped this new podcast, Discover Daily, that recaps the news in 3-4 minutes.
  • It already broke into the top 200 news pods within a week.
  • AND it’s all *100% AI-generated*.

Daily Digest: It’s Nvidia’s world…and we’re just living in it. — from bensbites.beehiiv.com

  • Nvidia is building a new type of data centre called an “AI factory.” Every company (biotech, self-driving, manufacturing, etc.) will need one.
  • Jensen is looking forward to foundational robotics and state space models. According to him, foundational robotics could have a breakthrough next year.
  • The crunch for Nvidia GPUs is here to stay. It won’t be able to catch up on supply this year, and probably not next year either.
  • A new generation of GPUs called Blackwell is coming out, and the performance of Blackwell is off the charts.
  • Nvidia’s business is now roughly 70% inference and 30% training, meaning AI is getting into users’ hands.

Gemma: Introducing new state-of-the-art open models — from blog.google


 

AI fast-tracks research to find battery material that halves lithium use — from inavateonthenet.net

Using AI, the team was able to plow through 32.6 million possible battery materials in 80 hours, a task they estimate would otherwise have taken 20 years.
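
Taking the article’s own figures at face value, the implied speedup is striking, a quick calculation (assuming 20 calendar years of round-the-clock effort for the manual estimate):

```python
# Back-of-the-envelope speedup implied by the article's figures:
# 32.6 million candidate materials screened in 80 hours,
# vs. an estimated 20 years of conventional work.
candidates = 32_600_000
ai_hours = 80
manual_years = 20
manual_hours = manual_years * 365 * 24  # 175,200 hours

speedup = manual_hours / ai_hours
per_hour = candidates / ai_hours

print(f"Implied speedup: ~{speedup:,.0f}x")
print(f"Candidates screened per hour by AI: ~{per_hour:,.0f}")
```

On those assumptions the AI pipeline works out to roughly 2,190x faster, screening about 407,500 candidate materials per hour.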


Other interesting items from inavateonthenet.net:

Medical ‘hologram’ market to reach 6.8 bn by 2029

Providing audio for open spaces

 
© 2024 | Daniel Christian