Look no further for a sensationalized depiction of biometric identification technology than the Tom Cruise movie “Minority Report.”
Packed with scenarios that stretch the truth about how biometric technology actually works, the movie has unfortunately become a rallying cry for those opposed to the technology, held up as an example of just how invasive it is to our personal privacy. While there are arguments to be made on both sides about whether biometric identification technology detracts from or bolsters privacy, one thing is true: in the real world, front-end biometric hardware devices work much differently than what we see on the big screen or in the pages of a science fiction novel. Which brings us to the topic of iris recognition.
When most people hear the words “iris recognition,” they immediately confuse the technology with “retinal scanning,” a completely separate biometric modality. As our community already knows, iris recognition and retinal scanning are two distinct biometric modalities, each operating under separate functional parameters and each using a different method of capturing individual biometric characteristics. Most people associate iris recognition with something that looks like this:
The picture above shows a retinal scanner beaming visible light into the human eye to read the unique physiological characteristics of the retina, located in the back of the eye. Despite its extremely high identification accuracy, retinal scanning is widely considered to be one of, if not the, most invasive biometric modalities, and an impractical technology for commercial use in high-throughput environments. Conversely, iris recognition uses a sophisticated digital camera to capture a photograph of your iris (located in the front of the eye), maps its unique data points, and uses that information to create an identity template that is matched against on subsequent identification attempts. It, too, is extremely accurate.
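To make the “template” idea concrete, here is a minimal, purely illustrative sketch of how enrollment and later matching can work in iris systems: the photograph is encoded as a binary “iris code,” and a fresh capture is compared to the stored template by the fraction of differing bits (a Hamming distance). The function names, the simulated capture, and the threshold value are all hypothetical stand-ins, not any vendor’s actual implementation.

```python
# Illustrative sketch only: real iris-recognition systems segment the
# iris from an infrared photograph and encode its texture as a binary
# "iris code". Enrollment stores a template; identification compares a
# fresh code against stored templates by Hamming distance. The names,
# simulated capture, and threshold below are hypothetical.
import random

CODE_BITS = 2048            # iris-code length used for this sketch
MATCH_THRESHOLD = 0.32      # fraction of differing bits accepted as a match

def capture_iris_code(seed, noise=0.0):
    # Stand-in for camera capture + encoding: a deterministic bit string
    # per eye (seed), with optional per-capture noise, since no two
    # photographs of the same eye are bit-for-bit identical.
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(CODE_BITS)]
    flipped = set(random.sample(range(CODE_BITS), int(noise * CODE_BITS)))
    return [b ^ 1 if i in flipped else b for i, b in enumerate(bits)]

def hamming_fraction(a, b):
    # Fraction of positions where the two codes disagree.
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Enrollment: store a template for one patient.
enrolled = {"patient_A": capture_iris_code(seed=42)}

# A later visit: a fresh capture of the same eye differs slightly
# but still falls well under the matching threshold.
probe = capture_iris_code(seed=42, noise=0.05)
dist = hamming_fraction(enrolled["patient_A"], probe)
print("match" if dist < MATCH_THRESHOLD else "no match")  # prints "match"
```

The key point for patients is that everything above operates on a photograph that has already been taken; nothing in the matching step touches the eye at all.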
Iris recognition does not beam any visible light into your eyes, is 100% safe to use, and does not perform anything even close to a “scan” – it is simply a digital photograph (albeit much more sophisticated than the pictures we take with our digital cameras and cell phones). Here, we see a patient at a hospital using an iris camera for identification – notice how there aren’t any lights or lasers beamed into their eyes during the photograph capture process:
Why is it important to know that iris recognition does not “scan” your eyes? Like it or not, the proliferation of biometric technology for individual identification is a reality that we all must come to terms with. In fact, if you have never participated in a biometric identification deployment, chances are that at some point you will, considering the rapid pace at which many industries are adopting the technology as a tool to increase security, create efficiencies, eliminate waste and fraud, and raise accountability and productivity. In healthcare, many hospitals and medical facilities have already deployed iris recognition biometrics for patient identification, and they are expanding their deployments to provide accurate patient ID at each and every touchpoint along the care continuum.
In the healthcare industry specifically, understanding what to expect when you participate in a biometric identification deployment is a key factor in accepting the technology as a tool to help stop medical identity theft and fraud at the point of service and to eliminate duplicate medical records, which are a direct threat to your safety. So the next time you visit a hospital or medical facility that has deployed iris biometrics for patient identification, you are empowered with the information on how the front-end technology works and can rest assured that you are not being “scanned” in any way, shape, or form. It’s a photograph, not a scan!
What other common misunderstandings about biometrics may cause you trepidation?