‘Biometrics’ for access control is dead.
By Daniel Sozonov, CEO of YouMay
Biometrics derives from the word bio-metry: the measurement of life. Today the word signifies the measurements and shapes that make each of us unique. The practice dates back to the 1880s, when fingerprinting was invented, but only recently have we seen the mass adoption of biometrics in access control, as technology has caught up with our expectations.
Over the last 20 years, the term biometrics has become increasingly prevalent as a way to distinguish this form of access control from other forms of electronic access control.
Now it is time to phase out the term biometrics altogether.
As access control has grown more sophisticated, becoming cloud-based, contactless and built on open APIs, we have moved from analog methods to digital ones. Many state-of-the-art solutions now go further, incorporating machine learning to derive powerful insights. Plainly speaking, these smart building technologies turn our actions and our biological features into mathematical vectors, simplifying that which is complex and organic.
What really drives adoption of these features is not the need for another AI proptech company; it is the value the application brings to the end user. That might seem obvious, but it raises the question of why we draw attention to the medium rather than to the return on investment or the benefits delivered. The focus of biometric access control should be on its ability to improve the experience of people passing through doors and speed gates; how we achieve those improvements is not intrinsically valuable.
When people hear the word biometrics, they assume an algorithm takes in rudimentary data points and measurements of facial features. I am often asked whether we measure the distance between the eyes or the angle of the nose. The simple answer is that with machine learning we do not explicitly know which facial features are used for comparison. The system assesses people holistically, just as a friend would. When we meet a person for the first time, we don't grab a tape measure and start measuring the size of their ears; not only would that be socially inept, it wouldn't give us the results we are looking for. People have different defining features, and we intuitively tap into that knowledge to visualise how a person looks.
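To make the idea of "mathematical vectors" concrete, here is a minimal illustrative sketch, not a description of any production system: a neural network maps each face to an embedding vector, and two faces are compared by the similarity of their vectors rather than by any single named measurement. All numbers, variable names and the threshold below are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional embeddings. Real systems use vectors with hundreds of
# dimensions produced by a neural network, not hand-picked measurements.
enrolled = [0.2, 0.9, 0.1, 0.4]       # vector stored at enrolment
live_scan = [0.21, 0.88, 0.12, 0.41]  # vector from the camera at the door
stranger = [0.9, 0.1, 0.8, 0.05]      # vector from a different person

THRESHOLD = 0.95  # illustrative acceptance threshold

print(cosine_similarity(enrolled, live_scan) > THRESHOLD)  # same person: True
print(cosine_similarity(enrolled, stranger) > THRESHOLD)   # stranger: False
```

No individual dimension of the vector corresponds to "ear size" or "nose angle"; the comparison is holistic, which is the point made above.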