An NYU Law Forum assesses the legal implications of generative AI

At the start of NYU Law’s Forum on generative artificial intelligence, panel moderator and Professor of Clinical Law Jason Schultz asked for a show of hands: Who in the audience had interacted with ChatGPT, Stable Diffusion, DALL-E, or any generative tools of the kind? Looking at the hands in the air, Schultz joked, “And how many of you thought it was sentient?”

Schultz was remarking on the uncanny, almost-human quality of some of the latest generative chatbots. The increasing sophistication of generative technology has brought new questions—and litigation—regarding its use, particularly in the legal areas of free speech, artistic expression, and copyright law.

At the April 5 NYU Law Forum, sponsored by Latham & Watkins, leading academics, practitioners, and artists working at the intersection of these concerns met to give their predictions about the future of generative technology within the legal landscape. The panel included Amy Adler, Emily Kempin Professor of Law; Esha Bhandari, deputy director of the ACLU Speech, Privacy, and Technology Project and adjunct professor at NYU Law; Annie Dorsen ’24, theater director and MacArthur Fellow; Andrew Gass, a partner at Latham & Watkins who specializes in intellectual property and antitrust law; and Joseph Gratz, a partner at Morrison & Foerster and counsel to OpenAI and artist Kristina Kashtanova.

Among the topics discussed were whether copyright can be granted to works created by generative AI tools, who should be held accountable for hate speech or misinformation produced or spread by generative programs, and how these new technologies may affect the future of legal education.

Watch video of their full discussion:

Selected remarks by the panelists:

Joseph Gratz: “Why should it matter that you can predict the form of the thing? There are lots of ways for an artist to have a work represent their mental conception without being able to predict ahead of time exactly the physical form that that thing will take or what words will be in it or what images will be in it.” (video 28:10)

Esha Bhandari: “The question of who is responsible for speech will be a major one going forward…. When there’s output from generative AI, who’s the author of that? For copyright, there are different incentives and different reasons to care about authorship. But the law of freedom of expression and the First Amendment cares about authorship, with who is responsible for speech that is unprotected?” (video 40:50)

Andrew Gass: “One of the primary vectors of competition that we see in this space is going to be the restrictions and the rules of the road that these various platforms choose to impose on their products. I think that in many respects those who ultimately prevail as the winners in a commercial sense may be a function more of how thoughtful they were about what restrictions and limitations they impose and what they leave permissible, than anything about the underlying technology.” (video 49:26)

Annie Dorsen: “My hope would be actually that these tools will no longer be considered things that can do anything, but that they will very explicitly be considered things that can do some things. And that leaves a lot of room for speech, it leaves a lot of room for creative expression by human beings, it leaves lots of room for lots of social goods.” (video 54:08)

Amy Adler: “I think this is a great tool that our students are going to have to learn how to use and how to master, and to just chalk it up to cheating is to misunderstand that this is going to be a new way of being a lawyer and being a student going forward.” (video 59:38)

Posted April 28, 2023.