Banning Facial Recognition Tech Moves Safety Backwards

Kevin Freiburger
June 26, 2019

Facial recognition technology wasn’t always feasible for the public sector. The face-matching algorithms were unreliable, some contained biases, and the systems themselves required significant IT overhead to operate.

However, that tide has shifted. Today’s facial recognition technology is much faster, more accurate and easier to operate, largely thanks to machine learning, which lets the match software improve as it processes more data, reducing bias and increasing accuracy. Google, Microsoft and Amazon helped democratize machine learning with out-of-the-box toolsets and infrastructure that facial recognition vendors leveraged to improve their match software. But Amazon and Microsoft took it a step further, building their own facial recognition systems, available for anyone to buy, including government agencies.

While Microsoft has since turned back from selling facial recognition technology to government institutions, Amazon continues the practice. During its annual shareholders meeting in May, Amazon affirmed it will continue to sell facial recognition software to government institutions that “follow the law.” Amazon’s justification: You do not completely ban a technology because it has the potential to be misused.

But not everyone agrees. Now that government agencies are using this readily available facial recognition technology, watchdog organizations and concerned citizens are working to fight any threats the technology poses to both citizen privacy and basic human rights.

The city of San Francisco has taken the battle one step further. In early May, city leaders voted to ban the use of surveillance technology by police and other government agencies. Alongside other technologies like body cameras and toll readers, the ordinance’s definition of “surveillance technology” covers every iteration of facial recognition software, as well as any information originating from those systems. That means that while mass surveillance technology is at the epicenter of the ban, any technology that creates or uses biometric data is also blacklisted.

Because San Francisco is a center of technological innovation, its city leaders felt the onus was on them to prevent the misuse of facial recognition technology. The ban quells fears of a brewing surveillance state, but it also stalls the progress of the technology’s safe and useful applications.


Refusing to acknowledge the benefits of facial recognition technology is detrimental to government agencies. Rather than deny the value entirely, government entities should acknowledge the usefulness of the technology and craft regulations to prevent its misuse.

Software applications at departments of motor vehicles (DMVs) are an example of value-added facial recognition technology. Many of these agencies already use the technology, not to capture mass surveillance footage, but to reduce identity theft and prevent the use of fraudulent government-issued identity documents, and to good effect.

Using facial recognition software, one DMV identified 173 fraudulent transactions in a single year in its state; in 53 of those, an individual was attempting to obtain a government-issued ID using another resident’s stolen identity. Banning the technology while it continues to produce results beneficial to the public is a step backward.


Facial recognition used to be a black box for governments and other entities that applied the technology: Users needed an output but weren’t especially interested in how that output was created. After news of the San Francisco ban surfaced, however, both users and citizens became interested in the details, which is a positive development. A desire for information about the technology supports and encourages debate, which may lead to more widespread acceptance. Continuing education is the only way for government agencies and citizens alike to feel comfortable with facial recognition technology and support lifting the ban.

There are major misconceptions about how facial recognition technology works. One is the idea that the technology is a binary system for identifying suspects and criminals. The reality is much different: Facial recognition technology yields a match probability. Agencies typically deploy the system alongside other investigative tools, and trained users filter out false positives through further investigation.
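To make the probabilistic point concrete, here is a minimal, hypothetical sketch of how such a system behaves: a probe image’s feature vector is compared against a gallery, and the output is a ranked list of scored candidates for a human examiner to review, not a yes/no identification. The record names, the toy three-dimensional vectors and the 0.80 review threshold below are illustrative assumptions, not any vendor’s actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery, threshold=0.80):
    """Return (name, score) pairs above the review threshold, best first.

    Note the output: a ranked list of possible matches with scores,
    not a binary decision. A trained examiner investigates from here.
    """
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    return sorted(
        [(name, round(score, 3)) for name, score in scored
         if score >= threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )

# Toy 3-dimensional "embeddings"; real systems use hundreds of dimensions.
gallery = {
    "record_a": [0.90, 0.10, 0.20],
    "record_b": [0.10, 0.90, 0.30],
    "record_c": [0.70, 0.50, 0.10],
}
probe = [0.88, 0.15, 0.18]

candidates = rank_candidates(probe, gallery)
```

Even in this toy version, the system surfaces more than one candidate above the threshold: the score ordering guides, but does not replace, a human investigation.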

Facial recognition technology is not a perfect matching solution, and its algorithms may be vulnerable to bias. The technology will keep improving as it is trained on more data. Vendors are reinforcing the software’s algorithms with machine learning, using larger, more diverse data sets to optimize accuracy across races, geographic regions and genders. In the meantime, government agencies can safely deploy the technology as a tool to guide investigations; it should never act as an absolute decision-maker.

Government agencies and citizens must be made aware of these facts. The public must understand that facial recognition technology is not an all-knowing magic wand, but simply another tool to help protect cities and prevent crime. Public outreach surrounding the technology and its misconceptions is vital and should facilitate conversations between government agencies and their constituents in which leaders address and mitigate concerns.


Bans similar to San Francisco’s are gaining traction in Oakland, Calif., and Somerville, Mass. Before any further bans take effect, public-sector leaders must take matters into their own hands and support regulations that acknowledge the benefits of facial recognition technology while preventing its misuse. Banning an emerging technology before it fully matures is a step back for the public sector, and ultimately a disservice to the constituents it is meant to protect.

Facial recognition technology is incredibly accurate and becoming more so each year. Many state government entities throughout the U.S. use the technology to protect residents from identity theft and fraud. Some agencies even use Amazon’s facial recognition product to find missing and exploited children.

The algorithms are still learning and will continue to gain accuracy and reduce bias. It’s critical that we understand the underlying technology and the value it brings before rushing to judgment, and before government regulators implement complete bans.



Kevin Freiburger is director of identity programs at Valid, a technology company that provides identity solutions to national, state and local governments. With more than 15 years of customer-facing experience in the identification industry, Kevin leads the strategy behind Valid’s product portfolio, builds and aligns the teams to execute it, and deploys those products in large, enterprise projects through delivery teams.

The article can also be read by visiting GovTech.
