Meg Foster, a justice fellow at Georgetown University's Center on Privacy and Technology, said there are concerns about bias within the algorithms of various facial recognition technologies. Some algorithms have a harder time recognizing minority faces, for example. And there are concerns that outside hackers could break into government systems for nefarious purposes.

Regarding the TSA pilot program, Foster said she's concerned that while the agency says it doesn't currently store the biometric data it collects, that could change in the future. And while passengers are allowed to opt out, she said it's not fair to put the onus on harried travelers who might be worried about missing their flight.

“They might be worried that if they oppose facial recognition, they will be under more suspicion,” Foster said.

Jeramie Scott of the Electronic Privacy Information Center said that while the program is voluntary now, it might not be for long. He noted that David Pekoske, who heads the TSA, said during a talk in April that eventually the use of biometrics would be required because they are more effective and efficient, although he gave no timeline.

Scott said he would prefer the TSA not use the technology at all. At the very least, he would like to see an external audit to verify that the technology is not disproportionately affecting certain groups and that the images are deleted immediately.

TSA says the goal of the pilot is to improve the accuracy of identity verification without slowing the speed at which passengers pass through checkpoints, a key issue for an agency that serves 2.4 million passengers daily. The agency said early results are positive and have shown no discernible differences in the algorithm's ability to recognize passengers based on age, gender, race or ethnicity.

Lim said that the images are not compiled into a database and that photos and IDs are deleted. But because this is an assessment, some data is collected and shared with the Department of Homeland Security's Science and Technology Directorate in limited circumstances. TSA says that data is deleted after 24 months.

Lim said the camera only turns on when a person inserts their ID card, so it doesn't collect random images of people at the airport. That also gives passengers control over whether they want to use it, he said. And he said that while research has shown some algorithms perform worse with certain demographics, it also shows that higher-quality algorithms, like the one the agency uses, are much more accurate. He said using the best cameras available is also a factor.

“We take these privacy and civil rights concerns very seriously, because we touch so many people every day,” he said.

Retired TSA official Keith Jeffries said the pandemic has greatly accelerated the rollout of various types of this "contactless" technology, whereby a passenger does not hand over a document to an agent. And he envisioned a "checkpoint of the future" where a passenger's face can be used to check their bags, go through security checkpoints and board the plane, all with little or no need to pull out a boarding pass or identification documents.

He acknowledged the privacy concerns and lack of trust that many people have when it comes to providing biometric data to the federal government, but said that in many ways, the use of biometrics is already deeply embedded in society through the use of privately owned technology.

“Technology is here to stay,” he said.