Technology evolves fast, and it’s a challenge for the law to keep up. That makes this an exciting time to be a law student.
BY EMILY BARKER
The emerging field of technology law can include matters such as cybersecurity, intellectual property, privacy, and civil liberties—but it doesn’t stop there. “How does products liability deal with the fact that you may have an artificially intelligent system that learns over time? How do you know if it’s malfunctioning or learning?” asks Marc Canellas ’21, co-founder of a new student group that fosters tech-related discussions. “Every single class I’ve been in, you can argue that technology’s going to fundamentally change the content of that area going forward.”

NYU Law offers a variety of classes, clinics, and organizations that explore the impact of technology on society. Through the Technology Law and Policy Clinic, Guarini Global Law & Tech, and other programs, students—from both tech and non-tech backgrounds—are engaging with challenging new issues that will likely shape the law they practice and the work they do throughout their careers. We asked some of those students about what they are working on.
A former senior contracts manager at Google, Samantha Hedrick ’19 came to NYU Law to pursue a passion for intellectual property law. In addition to co-authoring an article with Professor Christopher Jon Sprigman in Lewis & Clark Law Review on how courts can better apply the copyright infringement standard, Hedrick published a paper on copyright authorship and artificial intelligence (AI) in NYU Law’s Journal of Intellectual Property & Entertainment Law earlier this year.
“If you use artificial intelligence to create something copyrightable, let’s say a song or poem, the question I was looking at was at what point, if any, has the machine done so much that the human can no longer claim to be the author of the resulting work? My answer is there is no such point, because I view it almost like a camera. When I take a photo, I don’t know what happens inside the camera, but I still know that if I tweak some of the settings, that image is going to change. I still have control over the resulting output. With AI, even with extremely complicated deep learning algorithms, like deep neural networks, we may not understand everything that’s happening inside, but we also may not need to, as long as we still have control over the outputs.”
In the Technology Law and Policy Clinic, Kara Brandeisky ’19 and Kristin Mulvey ’19 worked together on a motion to unseal a ruling in litigation between Facebook and the US Department of Justice.
Mulvey: “I always knew that I wanted to work on issues of civil rights and civil liberties. One of the things that got me interested in the tech aspect of law is the way tech is being used to deprive people of their civil liberties, particularly in the Fourth Amendment context or the First Amendment context.”
Brandeisky: “The Justice Department tried to force Facebook to break the encryption on Facebook Messenger calls. We know that the government lost, but we don’t know why. Everything was under seal. We filed a motion to try to unseal the judicial opinion so that we could know what the law is about encryption. Can the government force a company to undermine its own security measures, making it easier for the government to wiretap voice calls on the internet? If not, why not?

“We lost in the district court. The entire proceeding remains under seal—we don’t know what the government even said in response to our motion. That was a little disheartening. On the other hand, we’re doing something kind of novel and exciting. Now we’ve filed an appeal.”
Ngozi Nwanta LLM ’19 intends to pursue her JSD at the Law School, starting this fall. With many African countries building digital identification systems for citizens, Nwanta is examining the relationship between identities, digital identification systems, and development.
“I’m working on blockchain technologies and whether they offer a trust solution for digital identities. The research on identities and digital identification in West Africa is important because most citizens of West African countries lack digital ID, digital knowledge, or even digital technologies. The incentive to take up digital IDs, in light of their exclusionary features, becomes a challenge where there already exists a lack of trust between citizens and the government. And if some people decide, ‘No, I don’t want my information to get out there,’ will they be excluded by the ID project? Another issue is that data protection laws are very weak, and even where the laws are in place, how are you going to enforce them regionally? I am not saying that digital identities are wrong. I’m saying that they’re actually beneficial—but it’s just not enough to transport ideas into a community; we have to work with the local circumstances in those communities.”
As a Cyber Scholar at NYU’s Center for Cybersecurity, Olivia Zhu ’20 is working on a project to examine the connection between cellphone hacking and robocalls.
“One has more to do with the privacy and security of the messages being communicated over your phone. And the other is more of a nuisance and sometimes a fraud problem. But they’re both linked to one particular vulnerability in a very, very old telephone system, a protocol that was a response to the breakup of AT&T. In layperson’s terms, it helps different telecom providers pass calls to one another. The main focus of this system was interoperability across the telecom industry, while security and authentication were deemphasized. One aspect I’m considering is how the Federal Communications Commission has treated the problem of robocalls compared to the problem of hacking—or cyber espionage or surveilling lines through the phone network—as separate problems even though the root source is a vulnerability in the same protocol.”
Chase Weidner ’19 worked on a project, co-headed by Crystal Eastman Professor of Law Catherine Sharkey, that studied how federal agencies are using artificial intelligence and then reported the findings to the Administrative Conference of the United States (ACUS). Business and engineering students from Stanford University provided additional expertise.
“My group ended up focusing on the Food and Drug Administration, Health and Human Services, and the National Highway Traffic Safety Administration. The goal was to find actual use cases of AI within the agencies and then dig into what’s going on under the hood technically, to the extent we could. Additionally, we wrote about their legal implications—whether there were any legal challenges presented by using artificial intelligence—as well as their policy implications. For example, whether there were any reasons to believe that the AI would shift the role of the agency, or whether the agency had the sort of authority it needed to properly use the AI internally and to regulate it externally.”
When Enoch Ajayi ’20 took the Technology Law and Policy Clinic as a 2L, one of his clients was NYU’s AI Now Institute, an interdisciplinary research center that examines the social implications of artificial intelligence.
“One of the things that pushed me to go to law school was that in the tech area, what we often find is that the stakeholders, the decisionmakers, the folks who implement a lot of these new technologies, may have very good intentions but have a limited scope of understanding. A big part of that is because of the limited number of voices in the room. That was something that I really wanted to have an impact on. One of the things that I was able to work on with AI Now was writing a letter of support for a group of tenants who were appealing their landlord’s decision to implement a facial recognition system at their building. The landlord specifically chose to implement this system in his one building that’s primarily persons of color and women. One of the concerns with facial recognition technology is that the highest rates of inaccuracies are with persons of color, women, the elderly, and children as well.”
Melodi Dincer ’20 is preparing for a job in tech law that she says “probably doesn’t exist yet.” After interning at the Electronic Frontier Foundation during her 1L summer, she took two semesters of the Technology Law and Policy Clinic.
“I knew I wanted to study the law, but I also knew that I did not want to follow a traditional path. I wanted my future practice to have a direct impact on the law, and so I sought legal areas that are ambiguous and unsettled. Issues of technology and privacy are precisely such areas. I wanted to attend a school with robust clinical offerings that would allow me to study and work on these issues, helping me form skills and connections to help me succeed in my nontraditional legal career.”
Cassandra Carley ’21, who earned a PhD in computer science at Duke University, and Marc Canellas ’21, who has a PhD in aerospace engineering from Georgia Institute of Technology, are two of the co-founders of Rights Over Tech, a new student group formed to explore issues of civil liberties, human rights, and technology.
Canellas: “We’re trying to fill a void and use sort of the NYU social justice flair, I’ll call it that, to say: ‘Your rights come first.’ Technology is important, but your rights are what’s most important. In order to assert those kinds of rights, you have to know what technology is doing. It’s hard to show your client’s rights are violated if you don’t know what the tech is doing.”
Carley: “We wanted to create something for students that is primarily student-focused—making sure that all kinds of voices and viewpoints are empowered. We want to host events that focus on students engaging with each other. It’s for techies and non-techies alike, since technology affects us all.”
These interviews have been condensed and edited.
Posted September 4, 2019