Beyond slip-and-fall: The digital age raises new issues in tort law


A wooden wheel on a 1909 Buick collapses; a bottle of Coca-Cola explodes in a waitress’s hand; an advertisement features a robot that looks like Wheel of Fortune host Vanna White. Each of these instances led to court rulings that expanded the application of tort law to keep pace with changes in products, commerce, and technology over the course of the 20th century. 

Now, in the 21st century, NYU Law faculty members are examining how tort law is evolving—or ought to—in the digital age. Should e-commerce platforms like Amazon be subject to liability for harm caused by products ordered from their websites? What kind of liability regime makes sense for harm caused by artificial intelligence, such as operating systems for self-driving vehicles? Can individuals prevail in right of publicity claims if their photographs are among the billions of images compiled by facial recognition companies?

Recent scholarship by Catherine Sharkey, Segal Family Professor of Regulatory Law and Policy, Mark Geistfeld, Sheila Lubetsky Birnbaum Professor of Civil Litigation, and Professor of Clinical Law Jason Schultz focuses on these questions. While their individual work considers very different digital-era developments, there is a common thread: each scholar examines the boundaries of tort law and how it can serve as a prelude or companion to government regulation. 

Catherine Sharkey: Product liability and e-commerce platforms

In August 2023, the Journal of Tort Law published an article by Sharkey titled “The Irresistible Simplicity of Preventing Harm.” That title encapsulates Sharkey’s view of the primary purpose of tort law: keeping accidents from happening by assigning liability to the party best positioned to avoid them. 

Catherine Sharkey

Courts are divided over whether people can seek redress against Amazon, the online retail giant, for injuries caused by products purchased through its platform. Some rulings have denied these claims, finding that, while Amazon was a facilitator of transactions, it was not a “seller” in the traditional sense.

But in “Products Liability in the Digital Age: Online Platforms as ‘Cheapest Cost Avoiders,’” published last year in the Hastings Law Journal, Sharkey argues that this result is out of step with the harm-reduction aims of tort law and at odds with the “practical, empirical reality, as concerns Amazon.” Instead, she lauds a California Court of Appeal ruling, Loomis v. Amazon.com, which found that Amazon could be subject to liability for a defective product sold through its website—a hoverboard that caught fire and burned the plaintiff.

Sharkey has also explored the tort law implications of online commerce from an international perspective. Since 2022, she has been a member of the International Working Group on Online Platforms and Product Liability, which in April 2024 will publish “Product Liability and Online Marketplaces: Comparison and Reform” in the International and Comparative Law Quarterly.

Multiple strands of Sharkey’s scholarship combine to shape her views on the role of tort law. The idea that courts identify a “cheapest cost avoider” (CCA)—assigning tort liability to the entity that can most cost-effectively ensure product safety—is a concept that emerged from the field of law and economics. Sharkey, who has undergraduate and graduate degrees in economics, examines many issues through the lens of that discipline. She is also a leading scholar in both torts and administrative law, which prompts her to examine how these areas intersect and can complement each other.

The deterrence-based CCA approach, Sharkey writes in a 2021 Harvard Law Review article titled “Modern Tort Law: Preventing Harms, Not Recognizing Wrongs,” is “a cause for celebration given its ability to handle the most urgent modern torts issues concerning the interface between tort and federal regulation and widespread societal harms.”

In “Products Liability in the Digital Age,” Sharkey identifies four historical stages in the evolution of US products liability since the early 20th century. At each stage, she finds, courts employing a CCA assessment have expanded liability in response to items presenting new risks or to new modes of commerce. The wheel collapse on the 1909 Buick, for example, highlighted increased dangers presented by the automobile, and the landmark decision in MacPherson v. Buick permitted the accident victim to sue the manufacturer, even though he had purchased the car from a dealer. The exploding Coke bottle resulted in another seminal ruling in 1944 (Escola v. Coca-Cola) that laid the foundation for strict liability against makers and sellers of mass-produced goods.

Recently, Sharkey writes, we have arrived at the fifth stage, “as products liability confronts the digital age, typified by a transformative shift away from in-person purchase transactions toward digital purchases on e-commerce platforms.” In a brick-and-mortar store economy, she notes, the seller that transferred legal title to a product served as “a convenient proxy for the CCA.” But online platforms like Amazon, she says, have deliberately designed business models that allow them to convey and distribute goods without ever taking title, even as they act very much like a traditional seller or distributor.

In Loomis, the California appeals court looked past the formalistic definition of seller. In a concurring opinion that Sharkey calls “trailblazing,” Justice John Wiley wrote that “Amazon owes its customers a duty in strict liability because Amazon’s position in the distribution chain allows it to take cost-effective steps to reduce accidents.” That opinion, Sharkey predicts in her “Irresistible Simplicity” article, is “destined to enter the canon” alongside landmark rulings from the past.

In fact, Sharkey is helping ensure that this will be true: Loomis will be included in the next edition of the widely adopted torts casebook she co-authors with Laurence A. Tisch Professor of Law Richard Epstein.

Mark Geistfeld: Injuries caused by self-driving vehicles

Most automobile crashes in the US are caused by driver error, and crash victims frequently seek redress through the tort system by suing drivers for negligence. But what happens when a crash involves an autonomous vehicle (AV)? How will a court decide if such a vehicle’s operating system was at fault? Does a crash necessarily mean the design of that system was unreasonably dangerous—and thus subject to strict liability—even if the fleet of vehicles using the system has an accident rate much lower than that of human drivers?

Mark Geistfeld

Geistfeld has been studying these questions since 2017, when he published a California Law Review article titled “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation.” At the time the article was published, Geistfeld recalls in a recent conversation, many people expected that by 2023, a substantial number of self-driving vehicles would be on America’s roads. Legislation was advancing in the US House and Senate to establish a regulatory framework for AVs, and in another law review article in 2018, Geistfeld outlined a “regulatory sweet spot” for a combination of federal rules and state tort liability.

But both the law and technology for AVs have hit speed bumps. The move toward federal regulation stalled after a driverless car operated by Uber killed a pedestrian in Arizona in 2018. And the programming for AVs, Geistfeld says, is a lot more challenging than what people had contemplated. “Driving is just so complicated,” he notes. “There are so many variables that impact simple decisions—making a left turn with oncoming traffic turns out to be surprisingly hard.”

Which is not to say that Geistfeld’s work in the area has come to a halt. In November 2021, the European Commission published a comparative study of civil liability for artificial intelligence (including AVs), and Geistfeld authored the portion discussing the state of play in the US. “Unlike the US, the European Union is moving pretty quickly on all this,” Geistfeld says. In the US, Geistfeld is serving as reporter for the Automated Technology Liability Committee of the Uniform Law Commission.

Insurance—often tightly interlinked with the development of tort law and policy—is a primary area of teaching and scholarship for Geistfeld, who is also a co-author of a widely used torts casebook. In recent years, he has begun looking at “massive changes” that AVs may bring to the insurance marketplace. According to media reports, several companies have said they will take responsibility for harm caused by their vehicles when they are operating autonomously. The details need clarifying, Geistfeld says, but such commitments could presage a shift to car makers acting as insurers of the vehicles they sell, with the insurance cost bundled into the price.

Automobile insurance is an enormous industry, Geistfeld notes, and it could be “fundamentally transformed” by this development. “There are a lot of really interesting, hard questions about this that I think have been completely underanalyzed,” he adds. “From what I’ve seen thus far, auto insurers seem to be kind of surprisingly complacent” about the potential upheaval. “That’s my next project in this space,” he says.

Jason Schultz: Facial recognition and the right of publicity

In May, Schultz published “The Right of Publicity: A New Framework for Regulating Facial Recognition” in the Brooklyn Law Review. As befits a clinical law professor who engages in both scholarship and practice, his work on the article was spurred by a project taken on by the Technology Law & Policy Clinic (TLPC), which he directs.

Jason Schultz

In late 2021, Schultz got a call from Sejal Zota of Just Futures Law, a law firm dedicated to social activism. Her firm had sued facial recognition company Clearview AI in California state court, alleging that it engages in unlawful surveillance, violates privacy rights, and facilitates government monitoring of protesters, immigrants, and communities of color. Zota wanted to know if the TLPC would be interested in filing an amicus brief in the case—Renderos v. Clearview—to provide the court with history, information, and context on one of the lawsuit’s claims: that Clearview had violated the plaintiffs’ right of publicity (ROP), which the law would classify as a tort.

“I realized,” Schultz says, “that this was an underdeveloped area of the law, theoretically and doctrinally, and that no one really had a chance to think about it all the way through, particularly beyond an individual lawsuit: what does this look like as a body of law, and as a theoretical role for the right of publicity?” In his article, Schultz traces the evolution of ROP cases over the past 100-plus years, noting how they have tracked continual advances in visual-capture technology and business models that exploit it. (Among the decisions he mentions is one upholding Vanna White’s ROP claim involving an image of a robot that was depicted turning over letters on a game board while wearing a blond wig and evening gown.)

“ROP claims have now been brought in cases involving nearly every type of media outlet or device,” he writes, “including films, advertisements, action figures, baseball cards, animatronic robots, video game avatars, and even digital resurrection in film sequels.” 

Schultz then makes the case for ROP claims against facial recognition businesses with their massive troves of facial images (Clearview says it has more than 40 billion in its database). The chances of any single individual having their image produce a “match” on a facial recognition system—much less having it displayed to the public—are minuscule, but Schultz says that is not required to advance an ROP claim.

“The profitability of facial recognition technology lies in its ability to recognize a person’s face in photos or videos the system has never processed before,” he notes. To do that, the system extracts biometric data from every person’s images, creating unique faceprints both to train the system and for possible later identification.

That “appropriation of identity,” Schultz’s paper and the TLPC amicus brief contend, forms the core of an ROP violation. TLPC students prepared initial drafts of the amicus brief, which argues against Clearview’s motion to have the case dismissed. The brief was finalized by Schultz and Melodi Dincer ’20, a supervising attorney for the clinic and a research fellow at the Engelberg Center on Innovation Law & Policy. (The court denied Clearview’s motion to dismiss.)

It’s too early to say if ROP claims against facial recognition companies will succeed, but, as the title of Schultz’s article makes clear, he believes these claims could serve as a mechanism for regulating the technology. To avoid liability—on ROP or other grounds—Schultz says, facial recognition companies should do what so many other businesses do: obtain consent to use personal data. 

There are signs the business world is taking note, Schultz says. For example, visual media giant Getty Images has announced the introduction of an “enhanced model release form” that includes consent to the use of personal and biometric data.

“I think consent will come around as needed,” Schultz says. “And it won’t grind the industry to a halt. It might cost them a whole bunch of time and money, but that’s what happens when an industry screws up, right?”

Posted January 9, 2024