This research is supported by the Northwestern Law and Technology Initiative, the Northwestern Security & AI Lab, and the Buffett Institute for Global Affairs.
#AI #FederalCourts #AI4Law
@northwesternlaw.bsky.social
@buffett.northwestern.edu
Posts by Daniel W. Linna Jr.
The full study, including additional findings and judges’ comments, is available on the Sedona Conference website:
www.thesedonaconference.org/publication/...
Co-authors: Anika Jaitley, Daniel W. Linna Jr., Hon. Xavier Rodriguez, V.S. Subrahmanian, and Siyu Tao.
The author team worked closely with New York City Bar Task Force members Harut Minasian and David Zaslowsky.
This study was completed in collaboration with the New York City Bar Association Presidential Task Force on Artificial Intelligence and Digital Technologies and co-published by New York City Bar and The Sedona Conference.
6. Judicial outlook on AI is evenly split
· Judges were nearly evenly divided between optimism about AI’s potential for the judiciary and concern about it.
5. One in three judges permits AI use in chambers
· 25.9% of judges permit AI use in their chambers, and an additional 7.4% permit and encourage it.
· Approximately 20% of judges formally prohibit AI use, 17.6% discourage but do not formally prohibit it, and 24.1% have no official policy on AI use.
4. AI training has not been offered to most judges
· 45.5% of judges said that court administration had not provided AI training, and 15.7% said they were not sure. Three out of four judges who were offered AI training attended it.
3. Legal research dominates use cases
· Legal research is the most common AI use case, reported by 30% of judges.
· Reviewing documents is the second most common AI use case, reported by 15.5% of judges.
· 38.4% of judges have never used any of the AI tools listed on the survey in their work.
2. Preference for legal-specific AI tools
· Judges are more likely to use "AI for Law" tools (AI tools specifically designed for legal use cases) than general-purpose AI platforms.
1. AI adoption is broad but infrequent
· More than 60% of responding judges reported using at least one AI tool in their judicial work.
· 22.4% of judges reported using AI tools in their work on a weekly or daily basis.
In “Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges,” our team conducted a stratified random-sample survey of U.S. bankruptcy, magistrate, district court, and court of appeals judges. Of the 502 judges we surveyed by email, 112 responded (a 22.3% response rate).
Judges' AI use cases
New Research: A significant number of U.S. federal judges are already using #AI tools in their work.
Law students: You must know how to use AI responsibly and well to succeed in your internship and career.
This FREE, 3-hour PLI program provides the fundamentals for using AI responsibly and well in law practice.
Proud to partner with PLI to deliver this FREE-to-all training.
Read "Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases" now: legal-forum.uchicago.edu/print-archiv...
Abhishek Dalal, Chongyang Gao, the Hon. Paul W. Grimm (ret.), Maura R. Grossman, @danlinna.bsky.social, V.S. Subrahmanian, and the Hon. John Tunheim suggest how federal judges should approach evidence that may be AI-generated, focusing on the implications in a national security hypothetical.
Authors: Abhishek Dalal, Chongyang Gao, Hon. Paul W. Grimm (ret.), Maura R. Grossman, Daniel W. Linna Jr., Chiara Pulice, V.S. Subrahmanian, Hon. John Tunheim
#deepfake #artificialintelligence #ai #machinelearning #nationalsecurity #electionsecurity #uselections #Law4AI
We hope that our article proves to be an important, informative, and interesting read for judges, practitioners, students, and anyone interested in the intersection of computer science and law!
Our multidisciplinary team of authors includes judges, computer scientists, and lawyers. We discuss an election-interference scenario, but the analysis applies to any scenario involving the admissibility of alleged deepfakes as evidence in a court proceeding.
#Deepfakes are coming to courts. How will judges deal with them? The Federal Rules of Evidence set a low bar for admissibility, yet allowing juries to see deepfakes could be unfairly prejudicial.
Our article is now in the University of Chicago Legal Forum:
legal-forum.uchicago.edu/print-archiv...