Biometrics: The Whats, The Whens, The Hows, And The Whodunnits

September 15, 2016

[Picture above, credit © Outline205 | Dreamstime.com]

By The Biometrica Blog Team

Biometrica Systems, Inc., a Nevada corporation, is in the business of creating software and systems that link the physical to the digital with the intention of minimizing criminality, or events that could lead to crime. The company’s range of tools enables the recording of mala fide or criminal occurrences, recognizes threats, identifies politically exposed persons and their associates, tracks individuals or groups that have either committed a misdemeanor or felony or have contributed to breaking the law, and, finally, helps security and surveillance teams of all stripes link all of the above.

In this conversation with Biometrica CEO Wyly Wade, who has almost certainly collected more biometrics across more countries than any single individual in the world, we get down to the basics. We look at what constitutes biometrics, what identifiers are, and how biometric identification is done and used.

© Lorna | Dreamstime.com

BBT: What is Biometrics? Most people’s idea of biometrics comes from Hollywood, or the TSA perhaps. Your fingerprint, or your eyeball, and then some bad guy can get hold of either and steal your ID.

WW: Well, it’s not quite that. For instance, from what we understand, the capillaries in the iris and retina decompose very quickly, so it might be near impossible to do the bit in Demolition Man where Wesley Snipes pokes out the warden’s eyeball. But yes, biometrics is all of that, and more: essentially physiological or behavioral markers that are almost always unique to you. It can be a marker derived from your fingerprints, your iris or retina, your faceprint, or your DNA, but also from your gait, or how you move your mouse, for instance. And then there are the soft biometrics: your height, weight, eye color, skin color, your smell … even your butt print.

Hang on, two questions. There’s a butt print biometric? And why are some markers called soft biometrics?

There’s a technology developed by researchers in Japan that uses people’s unique butt impressions, using things like weight and weight distribution, to control whether, say, the correct driver is sitting in a car seat. It apparently has greater than 98% accuracy, but I’m not sure it’s in commercial application yet. As for soft biometrics, historically these were thought to be markers that distinguish you from someone else naturally, but over time there have been obvious problems with them. Outside of political stereotypes, how do you really define skin color? There’s a range of skin colors. But it is a serious question.

Because those traits can be altered?

Because the computer is making an assumption about you, and is defining you as a color, an approximate height, an approximate weight, whether you’re wearing a blue shirt or short skirt, whatever.

We thought biometrics couldn’t be changed?

Well, they can. Environmental, medical, and other factors contribute to some change. But here’s the real problem: you may have a really good algorithm for identifying people, like DNA, where I can identify you down to odds of about one in a trillion, but there’s no database to compare that to, so I have to start afresh, build that database, and gather all the data.

What do you mean by that?

How do you compare it? If I take your DNA, what am I comparing it to, to identify you?

How do they catch criminals, in that case?

Only if you leave DNA somewhere and your DNA is already in the system. So you’re effectively caught twice.

Credit: brian0918™ (Own work) [Public domain], via Wikimedia Commons
The structure of part of a DNA double helix.

Which means your DNA has to be caught once at the crime scene, and once when you’re caught?

No, actually, look at it the other way. If you’re already a criminal, we extract and store your DNA in the system. Then if we find your DNA someplace, we run it against the DNA profiles in the system, and if it’s a match, we say, “Got you! That same criminal we released two months ago is this person.” Doing one-to-one matching is easy: I’m comparing you to this sample. But going back through history and saying, “I found this DNA, let me go back and find out whose it is,” is far more difficult.
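
To make the one-to-one versus one-to-many distinction concrete, here is a minimal sketch in Python. None of this is Biometrica’s actual code; the similarity measure, the threshold, and the gallery are illustrative stand-ins for whatever a real matcher uses.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two biometric templates (a toy stand-in for a real matcher)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, claimed: np.ndarray, threshold: float = 0.9) -> bool:
    """One-to-one: is this sample the same person as one specific enrolled template?"""
    return similarity(probe, claimed) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.9):
    """One-to-many: score the probe against every enrolled template and keep the best,
    which only works if a large, pre-existing gallery exists in the first place."""
    scores = {name: similarity(probe, template) for name, template in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])
```

Verification answers “is this the person we already have?”; identification answers “who, out of everyone ever enrolled, left this sample?”, which is why it depends on having that database to begin with.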

It does sound like it is.

It is. Since you do not have a large database to compare it against, you’re only able to confirm that the suspect you have in custody is the same person who left the DNA at the crime scene. We are a long way from having a DNA system that solves crimes on its own; the technology really validates the hard work of law enforcement, it does not replace it. That’s also why things like facial recognition are more accepted in the security and surveillance world, or to provide access controls.

Wouldn’t you also need a pre-existing database for facial recognition? If you don’t have somebody in the system, who are you matching a face to?

True, but because of the way the world has changed, we have pictures of most of the people we’d be looking for somewhere on the internet. Here’s an example. In the United States, we may convict, say, a million people a year of a felony at the federal level. That means, by law, a million people have to give up a DNA sample. So if I commit a crime and I’m not one of those million people, it’s highly unlikely that someone gets hold of a valid sample of my DNA and then finds an accurate match. But if you have a picture of me and you go out and try to find me in some system, there’s a much higher likelihood of finding me, even if I’m not a convicted felon. There’s no DNA database in the world that has my profile. At least, none that I know of, yet.

Why is facial recognition so suspect in so many people’s eyes?

Because it’s non-contact, that is, you don’t know when it’s being captured. With a fingerprint, someone has to take your finger, press it down on something to record it, so you almost always know when it’s being captured.

Is that why a certain section of people don’t think much of facial recognition identification?

I think that also has to do with what a lot of privacy advocates wrap themselves around. What we have to understand is that in most of the contexts we work in as a company, there’s little or no expectation of privacy.

Because we’re catching criminals?

It’s not even just that. When you’re walking down a public street, there’s no expectation of privacy, not in a cellphone camera era, not when social media is ubiquitous. And how often have we also voluntarily given up privacy for safety or security, or even convenience — to speed up a process, any kind of process?

There’s a lot of suspicion about facial recognition and surveillance. Big Brother, a dystopian world, or an Orwellian one.

That suspicion isn’t specific to facial recognition by itself. That really has to do with biometrics and surveillance in general. This comes down to the responsible use of available technologies by both private and public sector institutions.

But facial recognition seems to be the face of all the controversy perhaps.

Not really. In the Minority Report film, it was the iris; it had nothing to do with facial recognition as such. I think face rec is under the microscope because people understand it a little bit more. But the confidence rating on iris is far higher than on facial recognition. It’s just that the database for irises is much smaller today, even with Aadhaar in India (the world’s largest biometric database), even with the experiments that have been done, border controls, etc. But just to clarify something: when it comes to Biometrica and the work we do, we don’t collect bad data.

What does that mean exactly?

We don’t collect pictures of random people to keep them in our system. If you don’t already exist in our law enforcement-verified database as a felon, someone who’s been arrested, someone on a wanted list, or someone flagged as an “undesirable” for having already created a major safety, security, or legal problem at one of our customers’ private properties, we don’t store your picture. That would only clutter up our system; it’s bad practice, and bad data.

© Essentialimagemedia | Dreamstime.com

We mentioned the movies earlier. Because of the movies, or TV shows, many people think we can just grab faces off video feeds, like a group of people walking in a hallway, and know who all those people are immediately. Is that true?

You could, but it has limited effectiveness, largely because of bandwidth and processing power. Additionally, the best-performing examples of this have had a very limited gallery size: thousands, not millions, and we are a long way from billions. You don’t recognize off video per se. Video feeds are, and always have been, a series of photographs, for the most part at 30 frames a second. You’d still have to run the best of those frames through the system. If the people in that group are all individually registered in your system, there is a possibility you can identify most of them. I wouldn’t say you can identify all of them, because it depends on how they are registered, what quality of images you have, how many images of them you have in the system, etcetera.
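
As a rough illustration of the “video is just frames” point, the sketch below (assuming OpenCV is installed; match_against_gallery is a hypothetical placeholder for whatever recognition backend is actually used) pulls frames from a feed, keeps only the sharpest few, and passes those stills on for matching.

```python
import cv2  # OpenCV, assumed available

def sharpness(frame) -> float:
    """Variance of the Laplacian: a cheap, common proxy for how sharp a frame is."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_frames(video_path: str, keep: int = 5):
    """Read a feed frame by frame (typically ~30 fps) and return the `keep` sharpest stills."""
    cap = cv2.VideoCapture(video_path)
    scored = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scored.append((sharpness(frame), frame))
    cap.release()
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [frame for _, frame in scored[:keep]]

# Each selected still would then be submitted to the matcher, e.g.:
# for frame in best_frames("hallway.mp4"):
#     candidates = match_against_gallery(frame)  # hypothetical recognition call
```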

So, by the common definition, you can identify people in video streams?

You’re still identifying and matching against frames. But as FR technology, and the technology used in the cameras themselves, evolves, it gives you the ability to make far better identifications, far more quickly. That still means your frames have to be matched against a database. I believe the TSA ran a test a few years ago in which they could identify 60% of the people in a group, in a very controlled environment.

But environments are not controlled.

Correct. That’s why you can give a confidence rating, saying this person might be person X. But it’s extremely difficult to say this person absolutely is person X. It doesn’t matter whether you’re talking about DNA, fingerprints, iris, or facial recognition: what you’re getting is a confidence level that this person is that person.
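
A tiny sketch of that “might be, not is” framing; the score bands here are purely illustrative, not values from any real system.

```python
def describe_match(score: float) -> str:
    """Translate a matcher's similarity score (0.0 to 1.0) into a hedged statement.
    The cut-offs are illustrative only."""
    if score >= 0.99:
        return "very high confidence this is the same person"
    if score >= 0.90:
        return "probable match; a human should review"
    if score >= 0.70:
        return "possible match; treat it as a lead only"
    return "no meaningful match"
```

Whatever the modality, the output is a graded confidence, never a flat “is.”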

Isn’t DNA more “confident” than facial recognition?

It is. But it’s still a confidence rating. There are environmental and medical factors that can change someone’s DNA. If you have your DNA from 20 years ago, and you’ve since been exposed to factors like that, there’s a good chance your DNA wouldn’t be a perfect match today. There would still be points that match, and the confidence rating that you are a match to the you of 20 years ago would be high enough, because those points in common are so strong. But if you compared those two samples point by point, there would also be differences, if you’ve been exposed to those environmental or medical factors that cause changes in DNA.

Coming back to what Biometrica does, is it all about bridging the physical and digital to catch criminals?

It’s not just catching criminals, actually. We get down quickly to that, which is obviously important, but there’s much more. Take casinos, for instance; we work with more than 200. As non-bank financial institutions (NBFIs), they have an obligation by law to follow KYC, or Know Your Customer, norms (also called the Customer Identification Program). Because of the U.S. Bank Secrecy Act and the Basel Accords, they have to be able to identify the people walking into their casinos and try to know whether they’re politically exposed persons, high-net-worth individuals, gang members, sex offenders, money launderers, prostitutes, and escorts, among other things.
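
Conceptually, that screening obligation reduces to checking each identified patron against a set of watch lists. The sketch below is a toy illustration only; the list names, identifiers, and reporting step are invented for the example and are not Biometrica’s schema.

```python
# Hypothetical watch lists keyed by category; in practice these would be
# large, law enforcement-verified databases rather than in-memory sets.
WATCH_LISTS = {
    "politically_exposed_person": {"patron-017"},
    "sex_offender_registry": {"patron-142"},
    "known_money_launderer": set(),
}

def screen_patron(patron_id: str) -> list[str]:
    """Return every watch-list category the patron appears in; an empty list means no flags."""
    return [category for category, members in WATCH_LISTS.items() if patron_id in members]

flags = screen_patron("patron-017")
if flags:
    print("Escalate for review / possible suspicious activity report:", flags)
```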

Do they do it?

Some do and some don’t. And that’s part of the difference today; it’s a very different world from 50 years ago. Historically, inside many casinos, it was all about plausible deniability. If you didn’t specifically know someone, you couldn’t be expected to identify that someone as a known bad player, or as someone wanted by law enforcement.

But now that they’re non-bank financial institutions, they can’t make that argument anymore.

Yes, they have to know. It’s also covered under the USA PATRIOT Act. You can’t have plausible deniability anymore, much like a bank can’t. You have to be able to say whether the person who walked in is a gang member. And if they identify him as being someone other than who he says he is, they have to submit a suspicious activity report for false information.

© Mopic | Dreamstime.com
As things like machine learning and AI develop, we’ll see different kinds of biometrics developed.

What lies ahead for biometrics?

I do think there is a big shift coming in biometrics. Historically, the way biometrics worked, you got that confidence rating because of something called a template. So, with faces, say, the technology measures the distances between certain landmarks in your face and builds a template based off that. It’s the same with fingerprints, where you have a fingerprint template. I think what’s going to happen, though, as machine learning, AI (artificial intelligence), and hive computing or swarm intelligence develop, is that you’re going to see some very different kinds of biometrics, and different ways they’re used. You’re already starting to see some work come out of Japan and Germany that can say this person is hostile and this person isn’t; that this person is willing to pick a fight and this person isn’t.
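
To show what a template is in the simplest possible terms, here is a bare-bones sketch. Landmark detection itself is abstracted away, and the landmark names and normalization are illustrative, not how any production face-recognition engine actually builds its templates.

```python
import itertools
import math

def face_template(landmarks: dict[str, tuple[float, float]]) -> list[float]:
    """Toy template: pairwise distances between facial landmarks, normalized by
    inter-eye distance so the template does not change with image scale."""
    scale = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    names = sorted(landmarks)
    return [
        math.dist(landmarks[a], landmarks[b]) / scale
        for a, b in itertools.combinations(names, 2)
    ]

# Two such templates would then be compared with a distance or similarity measure,
# which is where the confidence rating comes from.
probe = face_template({
    "left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0),
    "nose_tip": (50.0, 60.0), "mouth_center": (50.0, 80.0),
})
```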

Credit: From Unanimous.AI, via YouTube. For more, go to http://unanimous.ai/unu2/

Like the “PreCrime” (mostly written as “pre-crime” now) in Minority Report?

Well, quite frankly, I wouldn’t go there. I hate the word pre-crime, and the connotations it has. In the case mentioned above, the research being developed will tell you if someone is, say, angry enough to start a fight, or things like that.

Based on facial recognition?

Based off your body.

And you think that’s how our technology will change in the future?

I think that’s where a lot of the science is headed, and a lot of the tech is going: trying to answer questions like “Can I define that this person is being hostile?” or “Can I determine…”, helped by science and machine learning. It’s facial makeup, it’s heat, it’s where that heat is distributed on the body, and many more factors. It’s whether you’re walking fast with your head down past everyone and, by the way, there’s a small square on the lower part of your back that happens to be cooler than the rest of your body; maybe that’s a gun.

How is that not Big Brother?

We are in Big Brother, if you want to call it that. But look around at the society we live in. Look at London’s cameras; look at your malls; look at cellphone cameras and social media. Look at yourself letting Facebook suggest tags for your friends and family, or using Amazon or any online retailer knowing it monitors your preferences. It’s not just because of terrorism or safety, it’s also convenience. You opt for it.

Things still happen, even in places like London.

In London, things still happen because you create so much data and have so many cameras that there’s no one to watch it all. You can go back forensically and say something happened at this place by looking at that video feed, but nobody’s watching all of this stuff on an ongoing basis. Speaking for ourselves, though, we’re focused only on the bad guys; we’re not interested in anyone else, unless they’re associated with a bad guy.

For more information, contact media@biometrica.com