Why the ACLU of NJ says too much of how the NYPD uses facial recognition is shrouded in secrecy

Facial recognition technology is deeply problematic, the ACLU of New Jersey argues.

In some cases, it can have more trouble recognizing people of color than white people. It can struggle to recognize women. And it’s often based on matches from low-fidelity camera footage, against databases — like mug shot records — that may include large numbers of people of color, from communities disproportionately arrested for low-level crimes, it says.

But more than that: The ACLU argues there just isn’t much information available about how facial recognition technology used by law enforcement works, or how law enforcement uses it.

Alexander Shalom, senior supervising attorney and director of Supreme Court advocacy at the ACLU-NJ, joined the “Brian Lehrer Show” on Friday to discuss the case of Francisco Arteaga — accused of a 2019 armed robbery in the Hudson County, New Jersey community of West New York. Arteaga says he was miles away when the robbery took place, but law enforcement in New Jersey reached out to the NYPD for help identifying a suspect — and the NYPD found a match for Arteaga through its facial recognition software. The New Jersey Regional Operations Intelligence Center had previously failed to turn up a suspect when it checked a CCTV image against its own database.

In 2020, New Jersey’s then-attorney general barred police from using the Clearview AI facial recognition system. High-profile cases of wrongful arrests spurred by facial recognition over the last few years have drawn attention from reformers. And New Jersey’s current attorney general is weighing a statewide policy on whether and how the technology can be used, and the state ACLU would like to see its use by law enforcement banned altogether.

For its part, the NYPD stresses it never makes an arrest based on facial recognition technology alone. It provides a starting point for gathering more evidence tying a suspect to a crime, and there’s always a human review of whatever the software finds, it says. Video from police body-worn cameras isn’t used, and neither is video from the city’s surveillance cameras “unless it is relevant to a crime that has been committed,” according to the NYPD.

Unidentified suspects’ images also aren’t routinely compared to other government photo databases, like those of driver’s license photos, or to social media, according to the NYPD. But law enforcement officials “may specifically authorize the comparison of an unidentified suspect’s image against images other than mug shots, if there is a legitimate need to do so.”

The ACLU-NJ and other groups — including the Electronic Privacy Information Center, the Electronic Frontier Foundation, and the National Association of Criminal Defense Lawyers — have filed briefs in Arteaga’s case, arguing he’s entitled to more information about how the systems are used. Arteaga’s attorneys have asked for details about the NYPD’s facial recognition software, the image used to identify him, any edits officials may have made to the image, and the systems analyst who was involved in the matching process.

Those arguments are now before an appellate court.

Shalom’s discussion of those filings with Nancy Solomon, sitting in for Brian Lehrer, is transcribed below. Some edits have been made for clarity and length. Callers and Shalom additionally described the low resolution of many surveillance images, known instances of false identification, and concerns that mug shot databases used in facial recognition can include people of color arrested at disproportionately high rates compared to white people committing similar crimes.

Nancy Solomon: Last Tuesday, when New York Gov. Kathy Hochul announced a plan to install cameras in subway cars, this somewhat puzzling line from her speech gained lots of attention: “You think Big Brother is watching you on the subways? You’re absolutely right. That is our intent, to get the message out that we are going to be having surveillance of activities on the subway trains.”

Wow. While cameras on subway cars haven’t been installed yet, Hochul is right: Big Brother is already watching. Unlike in the Orwellian novel, it’s not 24/7 surveillance through television screens. Instead, we could potentially be identified with facial recognition, a revolutionary technology that has surfaced in the past 10 years.

Police reform advocates have raised the alarm on this technology, saying it can lead to false arrests. One case in Hudson County has grabbed the attention of multiple organizations concerned by the threat that facial recognition software poses to privacy and civil liberties.

Joining me now is Alexander Shalom, the senior supervising attorney and director of Supreme Court advocacy at the ACLU of New Jersey. So, according to CNBC reporting, companies that make facial recognition technology have created databases of faces by collecting images, often without people’s consent, from any source they can access. So, what can you tell us about where they’re getting this database of photos of faces?

Alexander Shalom: You know, the interesting thing about this case, Nancy, is that I can’t tell you anything, and that’s really what the case is about. The case is about the fact that, in order to defend himself, a person who was charged because of facial recognition technology wanted answers to some of those questions: Where do they get the database? Who’s on the candidate list? Who is manipulating the data? What is the name of the software? Things as basic as that have not been disclosed to the defendant.

But we do know, right, that there are private companies that are selling people’s data, which includes photos of their faces that could come from social media, say?

Sure, we know that those are the possible ways that the databases can be formed. And what we just don’t know in this case is, did they only use information from the Department of Motor Vehicles, or did they also get things from the Department of Corrections? Or did they get things from Facebook? Or did they get things from somewhere else? There are endless possibilities. And all of those things impact the reliability of the technology, and to defend oneself in a very serious case, it’s important to know those answers.

So you just mentioned mug shots. So we’re talking about people, obviously, who have been previously arrested. So how is this being put into use? Why do police officers need facial recognition and how do they use it in conjunction with this database? There used to be a book of mug shots, right? How are they using it now in terms of fighting crime?

So again, we have to just infer, because the NYPD is being scrupulously silent and not answering the questions that we think we’re entitled to have answers to. But our best understanding is that the way the NYPD’s Facial Identification Section — FIS — works, is they start with a probe image. That’s something that was maybe pulled from a surveillance camera or something like that.

They take their probe image that they’re trying to identify, but sometimes it has to get edited, because probe images work best when the eyes are open, the mouth is closed and it’s a full-frontal shot of the face, right? And so, if the head is turned to the side or the mouth is agape or the eyes are closed, they might Photoshop it a little at some point.

They then take the probe image, maybe edited, and analyze certain points and features to create what’s called a face print. It’s a kind of mathematical formula, which, again, we don’t have access to. They take that and run it against a database. That will produce a candidate list — so maybe 100 people who look kind of similar to the probe image.

They assign a numerical confidence rating. You know, this person is a 94, and this person is a 92. And then a technician, again, someone we don’t know, chooses which image counts as a possible match.

The thing that’s so interesting about that, Nancy, though, is that the candidate list is going to be filled with false positives. Because if it’s 100 people, at least 99 of them, and maybe all 100, are not the person in the image. And so it’s very important for a criminal defendant to find out who’s on that list. Because in that list might be the actual suspect, the actual person who committed the crime.
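To make the pipeline Shalom describes easier to follow, here is a minimal, hypothetical sketch of it in Python. Nothing in it reflects the NYPD’s actual system, which remains undisclosed; the face-print vectors, the cosine-similarity scoring, the 0-to-100 confidence numbers and the 100-person candidate list are all illustrative assumptions.

```python
# Hypothetical sketch of the matching pipeline described above. The
# face-print vectors, cosine-similarity scoring and 100-entry candidate
# list are illustrative assumptions, not the NYPD's undisclosed method.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    face_print: list[float]  # numeric "face print" for one database photo

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def candidate_list(probe_print: list[float],
                   database: list[Candidate],
                   k: int = 100) -> list[tuple[int, Candidate]]:
    """Score every database entry against the probe's face print and
    return the k best, each with a 0-100 'confidence' number. At most
    one entry can be the right person, so at least k - 1 entries are
    false positives by construction."""
    scored = [(round(cosine_similarity(probe_print, c.face_print) * 100), c)
              for c in database]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:k]

# Example run with made-up face prints; a human technician would then
# choose a "possible match" from the list this prints.
db = [Candidate("candidate_a", [0.9, 0.1, 0.4]),
      Candidate("candidate_b", [0.8, 0.3, 0.5]),
      Candidate("candidate_c", [0.1, 0.9, 0.2])]
for score, cand in candidate_list([0.9, 0.2, 0.4], db, k=2):
    print(score, cand.name)
```

The sketch makes Shalom’s point concrete: with a top-100 list, at least 99 entries are false positives, and the technician’s pick from that list is precisely the step defendants want to examine.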

So tell me, Alex, how this came onto either your personal radar or the ACLU’s radar in terms of this potential threat to civil liberties.

So this is a case, as you said, that arises from Hudson County. It was a robbery in West New York. And they had an image from a surveillance camera and they brought it to the New Jersey State Police. And the state police said, “Well, we can’t find any matches. It’s not a good enough picture for us to work with.”

And so the West New York Police Department went to the NYPD and said, “Hey, can you find someone?” And the NYPD ran it, and through the process I just described, produced a possible match. And they went to two different witnesses to the crime and had the possible match, whose name is Mr. Arteaga, in the photo array. And both people picked Mr. Arteaga out, though after some hesitation. One person had gone past him once, and then came back. But they picked him out, and he was then charged with a crime. He came to be represented by the Office of the Public Defender in New Jersey, which has a really terrific forensics team that recognized this was a novel issue: how we deal with facial recognition technology.

And so they filed an absolutely terrific brief. The first thing they said is, “We need some information.” And the court said, “We’re not going to give it to you.” And I can talk about their rationale there, because it’s really troubling. But they then took an appeal and the court agreed to hear the appeal, and the Office of the Public Defender reached out to us at the ACLU of New Jersey and our colleagues at the ACLU National, and the Innocence Project. And we together wrote a brief, and some other organizations like the Electronic Frontier Foundation put together a brief, and some of the world’s leading experts in misidentification put together a brief — because everyone recognizes that this is new technology, but it’s decidedly not science.

Rather than being akin to fingerprints, which are at least pseudoscientific, it’s more akin to a sketch artist. It might be helpful, but that doesn’t mean it’s always reliable, and we need certain information to test its reliability. And this case that we’re litigating now is really about our access to that information.

OK, that was a very good, clear explanation. I appreciate that, Alex. We’re going to take a caller who I believe wants to challenge the way that you’re describing the technology. We have Dwayne in Manhattan on the line. Dwayne?

Dwayne (caller): So one of the issues that he brought up [is about a] surveillance photo produced from an investigation. I’m a detective. I’ve been a detective for years. And we’ve used this technology. That image isn’t Photoshopped before it’s run through the facial recognition software. It is whatever image is obtained from whatever video surveillance that’s around. That image is submitted to FIS, and it’s whatever raw footage it is.

The more footage that is available, the better. So yeah, it does … work best if someone’s eyes are able to be seen or something like that, but the more footage of that person that’s there, the better the software, of course, works. And of course, the better the quality of the image.

If there is a hit that is generated from that image, from whatever other sources that are available, either through previous bookings or other arrest records that maybe the department has access to, the detective receives that hit. One of the first things they say is, “This is not probable cause to arrest this person.”

And I want to make that very clear. The regular identification process that has to occur for identifying someone to generate probable cause, it still generally relies on a witness. That image has to be presented to a witness, with other similar witnesses in that photo array. And the rules, especially in the last few years, have been quite stringent as to how that photo is presented.

If that individual is wearing a red shirt and he has an earring or something like that, then all the other similar images that are presented to the witness have to, as best as possible, be presented in the same fashion. This is where sometimes Photoshopping is used, not with the original photo, but with the subsequent photos that are presented to the witness, so that all the images appear to be as similar as possible, so that the person … is given the best opportunity to identify who is the perpetrator of the crime.

OK, fantastic to have you on the show. I love that. I just love that you called. So Alex, do you disagree with what Dwayne is saying?

No, I absolutely appreciate Dwayne coming on, and his candor. I just wish we were able to get straight answers like that in the case we’re working on, which is to say, one of the things we wanted to know is: Was the image manipulated? And we were told we’re not going to get an answer.

And the reason we were told that is, well, because the NYPD isn’t part of the team in West New York. But of course we don’t want to allow police agencies to outsource their responsibilities. They were asked to do a task, and as part of doing that task, they’re required to give us certain information. And some of that information is the very stuff that Dwayne was talking about.

Unfortunately, I don’t think Dwayne’s answer is going to be admissible in court, but it is certainly useful to know that the general policy of the NYPD is to not manipulate those images.

Now, it sounds to me like maybe what detectives do isn’t so problematic, but it’s more at the trial level. The problem is that information about the source of this photo identification is not being shared with defense counsel. Is that the crux of it?

That’s what this case is about. And it might be that there are some departments that do facial recognition searches in the way Dwayne described, which would be less problematic, and others where it’s more problematic. But for example, we don’t even know the software that they use in New York.

We don’t know how often it gets it right, how often it gets it wrong, and different agencies use different programs. And that sort of black box, where we’re depriving defendants and the public of that information, is not generally how we approach criminal cases. We say that because the goal isn’t just to win and punish the guilty. We also want to ensure trials are fair. We want to prevent innocent people from being convicted. So we try to share as much of this information as possible.

It’s a rule that’s been around in American jurisprudence for more than half a century. And that’s what the fight is about now, getting enough information so that Mr. Arteaga can figure out whether the process used in his case was a fair one, and one that led to the right result.

Now, there’s a long history with photo arrays and lineups and witness identification. I mean, this goes back years, long before facial recognition technology. So I’m kind of curious how these two things play together, because I think it’s not widely understood that there are problems with the photo arrays that are given to a witness, and that eyewitness misidentification is one of the largest contributors to wrongful convictions. So where is the interplay between the old technology and its problems, meaning a photo, and the new technology?

They’re deeply interrelated. And Dwayne alluded to some of this when he talked about some of the safeguards that have been put in place in recent years to minimize unduly suggestive identification procedures. And New Jersey has frankly been at the vanguard of ensuring that eyewitness identification procedures are as reliable as possible, which isn’t perfectly reliable, but it is nonetheless an improvement.

But think about this, Nancy. Imagine the facial recognition technology produces 100 names, 100 pictures, and they take the one that they want to put in the array, and that person clearly is going to look something like the probe image, the suspect. If the other five images that they’re asking the witnesses to look at aren’t from that list, then they presumably aren’t going to look that much like the suspect, even if they’re wearing the same red shirt that Dwayne talked about and the same earring Dwayne talked about.

And so, in many ways, using facial recognition technology, unless you’re being very careful about it, can lead to more suggestive identification procedures, because you’re putting in one person who looks similar to the suspect and five people who do not. So a smarter way, a more reliable way, would be to ensure that the filler images, the other five images, are also from the facial recognition technology-generated list.

Otherwise, it’s natural that the eyewitnesses would gravitate toward the only facial recognition technology-generated match.
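One way to read the safeguard Shalom is proposing: draw the filler photos from the same candidate list the software produced, so the software’s pick is not the only face in the array that resembles the probe image. A minimal sketch, with hypothetical names and continuing the illustrative assumptions above:

```python
import random

def build_photo_array(candidate_names: list[str], pick: str,
                      fillers: int = 5) -> list[str]:
    """Assemble a six-photo array in which every filler also comes from
    the facial-recognition candidate list, rather than from unrelated
    photos, so the witness is not steered toward the software's pick."""
    pool = [name for name in candidate_names if name != pick]
    array = random.sample(pool, fillers) + [pick]
    random.shuffle(array)  # the witness sees the photos in random order
    return array

# e.g. build_photo_array([f"candidate_{i}" for i in range(100)], "candidate_7")
```

Under this design choice, every face shown to the witness already resembles the probe image, which is exactly the property Shalom says unrelated fillers lack.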

So Alex, why do you think that either the NYPD or the DA’s office is so reluctant to share information about these practices surrounding facial recognition?

Because they haven’t been told that they have to, frankly. And it’s easier for them not to. And it’s easier for two reasons.

One, just in general: Who wants to go through the effort of sharing things like hit rates and failure rates and false positives and things like that? But the other is that it will give defendants ammunition to undermine the technology.

As I said, there’s a whole brief written by our colleagues at the Electronic Frontier Foundation and the National Association of Criminal Defense Lawyers, and a group called EPIC [Electronic Privacy Information Center] that looks really deeply at this technology. But anyone who looks at the technology understands that it’s an art, not a science.

And I think DAs and police officers would like to present it to juries as if it were a science. And I think any information that they have to provide to defendants that exposes that fallacy, that scientific fallacy, they perceive as harmful to their interests.
