We primarily amalgamate arrest data, 100% of it from law enforcement sources/systems (local, state, federal, some tribal, international). We also collect some conviction data but have not, at this point, begun the process of amalgamating data from court records. Our data comes from law enforcement and is real-time: it updates in UMbRA — Biometrica’s equivalent of a NatCrim database — as soon as a law enforcement system updates data and makes it available as public record.
Yes. We pull in data from every jurisdiction in our database every hour.
No. Our data is 100% law enforcement-sourced. We do not go into your social media and collect data, and we do not look at property filings, credit reports, mobile phone records, license plates, or anything else except arrest and/or conviction data. If you have not been arrested and charged with a crime, you will not be in our database.
Our data, as mentioned above, comes 100% from law enforcement, and is generated at the point of arrest. This data could be from:
A) JAILS: Local jails are operated by county or municipal authorities, including but not limited to sheriffs’ offices, and typically hold offenders for short periods ranging from a single day to a year.
B) PRISONS: Prisons serve as long-term confinement facilities and are only run by the 50 state governments and the Federal Bureau of Prisons (BOP). Prisons typically hold felons and persons with sentences of more than a year; however, the sentence length may vary by state.
C) INTEGRATED SYSTEMS: Six states have an integrated correctional system that combines jails and prisons. We have data from local/county jails, some from state prisons and some from federal law enforcement. We also have some tribal data.
No, we have access to data from 48 states and are in the process of amalgamating that data to make it real-time. But no one, including the FBI, actually has data from every jurisdiction in every state. Even though we’re pulling data from various law enforcement systems and jurisdictions, we will not, in any case, be pulling data from every local U.S. jurisdiction, for various reasons, including that some jurisdictions do not warrant coverage. For instance, not all 3,142 counties/county equivalents in the U.S. (and roughly 100 more in U.S. territories) have jails and/or prisons: just over 15% of jurisdictions have no incarcerated population at all. Additionally, some counties have populations so small (1,000 or fewer residents, as mentioned above) that it makes no sense to focus on them as yet.
The incarcerated population refers only to the population of inmates confined in a prison or a local jail. This number may also include halfway houses, boot camps, weekend programs, and other facilities in which individuals are locked up overnight. This is different from:
a) Those who are imprisoned, which refers only to individuals under the jurisdiction, or legal authority, of state or federal correctional officers; AND
b) The number of adults supervised by the U.S. correctional system. The correctional population includes persons supervised in the community on probation or parole plus those incarcerated in prisons or local jails or locked up in other facilities overnight.
Law enforcement agencies (LEAs) of different stripes at the local level (city/county/municipal authority) update arrest data, which is almost entirely digitized across the U.S., with text and facial recognition data feeds available for a large number of arrested individuals.
Conviction data comes from the courts, and lives in the prosecutorial basket. It is, for the most part, textual data, not connected to facial recognition and, in most cases, not uniformly or comprehensively digitized.
Note: Law enforcement agencies may or may not update an arrest in a system to reflect the status of a case as it moves through the courts; this varies from jurisdiction to jurisdiction and is largely dependent on the availability of time and manpower in that jurisdiction. If they do update it, it gets updated in our system too.
Raw arrest data is data that comes into UMbRA, our NatCrim database, from law enforcement/jail management systems. It is untouched and unmerged: just raw data, unlike what we see at the user interface level in UMbRA.
Typically, a user cannot see that raw arrest data unless they have developer access. A user sees the data at the UI (User Interface) level. When raw arrest data comes in, it arrives in various forms, because different law enforcement agencies/counties/states input the data differently. Raw arrest data doesn’t have any direction or correlation; it simply exists in the form in which a law enforcement officer inputted it.
For ease of use, our software then takes that raw data, sifts through it and formats it into predetermined or preset categories. In this case, we have set those categories at the UI end as Name, Photo, Weight, Height, Race, Gender, Hair, Eye Color, Facility, Booking Date, Arrest Date, Inmate Classification, Arresting Officer, Arresting Agency, Age, Arrest Number, DOB, Charge ID, Description, Crime Type, Control Number, Arrest Code, and Bond Amount. Not all of these details are available on every arrest record; what is available depends on the law enforcement agency concerned and what details they choose to enter.
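As a sketch of the kind of normalization described above (a hypothetical illustration only, not Biometrica’s actual code): raw fields named differently by each jurisdiction are mapped into the preset categories, and anything a jurisdiction did not enter is simply left blank. All field names and jurisdiction labels here are invented.

```python
# Preset UI categories (a subset of those listed above).
UI_CATEGORIES = ["Name", "DOB", "Race", "Gender", "Booking Date", "Arresting Agency"]

# Hypothetical per-jurisdiction mappings from raw field names to UI categories.
FIELD_MAPS = {
    "county_a": {"full_name": "Name", "birth_date": "DOB", "race_cd": "Race"},
    "county_b": {"subj_name": "Name", "dob": "DOB", "sex": "Gender"},
}

def normalize(jurisdiction: str, raw: dict) -> dict:
    """Map a raw record into the preset categories; missing fields stay None."""
    mapping = FIELD_MAPS[jurisdiction]
    record = {category: None for category in UI_CATEGORIES}
    for raw_field, value in raw.items():
        category = mapping.get(raw_field)
        if category in record:
            record[category] = value  # unmapped raw fields are simply skipped
    return record
```

So `normalize("county_a", {"full_name": "J. Doe", "race_cd": "W"})` fills in Name and Race and leaves every other category empty, mirroring how a sparse jurisdiction feed surfaces in the UI.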
The categories we have preset are mentioned above. Nationality is not one of them, as it is not uniformly entered into the data by law enforcement. If there are additional details law enforcement chooses to enter, we can’t always see those details at the UI end because they’re not part of the preset UI classifications; however, a developer with access to the raw data feed can see that additional data at the back end. Take Nationality, for instance. While we don’t have a classification for Nationality at the UI end, if we wanted to know how many Canadians we had in our database, that information would have to be requested from a developer, who can look at the raw data and make a programmatic query. That number is unlikely to be correct, however, because most counties don’t enter Nationality data into their jail management systems.
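A minimal sketch of the kind of back-end query a developer might run for a field with no preset UI category (record layout invented for illustration). Because most counties never enter the field, the count is best read as a floor, not a total:

```python
# Hypothetical raw records: only some jurisdictions enter "nationality".
raw_records = [
    {"name": "A", "nationality": "Canadian"},
    {"name": "B"},                              # field never entered by the county
    {"name": "C", "nationality": "Canadian"},
    {"name": "D", "nationality": "US"},
]

def count_by_field(records, field, value):
    """Count records whose raw data explicitly carries field == value."""
    return sum(1 for r in records if r.get(field) == value)

canadians = count_by_field(raw_records, "nationality", "Canadian")  # 2
```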
No. We do not manipulate or change law enforcement data in any way. We simply present it differently in the UI, for easier viewing. We may also (at this time, manually) merge different arrest records for the same individual in our NatCrim database, to allow users to see multi-jurisdiction arrests, or multiple arrests in the same jurisdiction, on the same page. For example, suppose you run a search for an individual named Christopher Columbus, who has been arrested multiple times in Madison County, MS, and then in Pinal County, Arizona. If we’ve gotten to his record (we haven’t gotten to every record in the system, as we have to review each one individually to double-check they refer to the same individual) and merged his available arrest records for ease of viewing, you can see all his arrests in Pinal and Madison counties on the same page through the UMbRA user interface. However, you can’t see that on the raw arrest feed at the back end.
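The merge step described above can be sketched as follows (a hypothetical illustration; the record layout is invented, and in practice the same-person confirmation is made manually by a human before any merge). The raw records are left untouched; only a combined view is produced:

```python
def merge_arrests(confirmed_records: list) -> dict:
    """Combine records a reviewer has confirmed belong to one person
    into a single UI-level profile listing every arrest."""
    merged = {"name": confirmed_records[0]["name"], "arrests": []}
    for record in confirmed_records:
        merged["arrests"].append({
            "county": record["county"],
            "date": record["date"],     # ISO date strings sort correctly
            "charge": record["charge"],
        })
    merged["arrests"].sort(key=lambda a: a["date"], reverse=True)  # newest first
    return merged
```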
The data can be accessed in 3 ways, depending on a user’s data access agreement.
a) Through eMotive, our continuous background checking software: An organization-authorized individual sets up a database of individuals being monitored via eMotive, and those datasets will run 24×7 against UMbRA and look for matches. An authorized HR/Compliance/Security user will be automatically notified through an encrypted alert when an individual on their dataset is a potential match to an arrested individual in UMbRA.
b) Through an UMbRA login: This also allows a user to run a manual background check on an individual, in addition to getting automatically notified of a potential arrest post-employment.
c) Through a direct API integration: An API, or Application Programming Interface, allows a program/device/system to connect to and interact with another, allowing backend systems to communicate with each other. The API is not the UI. The simple explanation is that a user (human) uses a UI to interact with software, while a machine uses an API to talk to other machines/software/systems/devices. In this case, a client’s API will interact with UMbRA to pull and access raw data from UMbRA, our real-time multi-jurisdictional arrest database. How the client sees that data later depends on their own other integrations and UI.
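To make the API-versus-UI distinction concrete, here is an entirely hypothetical machine-to-machine request: the endpoint path, parameter names, and auth scheme are invented for illustration and are not UMbRA’s actual API.

```python
from urllib import request

def build_raw_arrest_request(base_url: str, api_key: str, since: str) -> request.Request:
    """Build an authenticated HTTP request a client back end might send
    to pull raw arrest records updated since a given timestamp
    (illustrative only; endpoint and headers are invented)."""
    return request.Request(
        f"{base_url}/v1/arrests?since={since}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
```

A human never sees this exchange; the client’s own systems decide how (or whether) the pulled data is eventually rendered in their UI.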
Note: All UMbRA searches are tracked, as is every encrypted email notification alert sent out through eMotive. Whether that encrypted email link is opened, clicked on and acknowledged in the eMotive system is also tracked. This creation of a legally viable audit trail is mandatory for FCRA compliance.
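One common way to make an audit trail tamper-evident is hash chaining, sketched below as a generic illustration of the concept (not a description of Biometrica’s implementation): each entry’s hash covers the previous entry’s hash, so editing any earlier event breaks every hash after it.

```python
import hashlib
import json

def append_event(trail: list, event: dict) -> None:
    """Append an event, chaining its hash to the previous entry's hash."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    payload = json.dumps(event, sort_keys=True) + prev_hash
    trail.append({
        "event": event,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(trail: list) -> bool:
    """Recompute every hash; any edit to an earlier entry is detected."""
    prev_hash = "0" * 64
    for entry in trail:
        payload = json.dumps(entry["event"], sort_keys=True) + prev_hash
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```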
UMbRA is the overall national criminal database and search engine. eMotive is the FCRA compliant monitoring ecosystem that allows an organization or a CRA to create their own private encrypted database (of employees or contractors being monitored with their written consent). This database will “sit below” UMbRA and run against it, programmatically and automatically informing a company-authorized individual when someone on that private dataset has potentially been arrested.
We receive a range of identifiers, depending on the law enforcement jurisdiction. This includes full name, photograph, race, age at the time of arrest, DOB, zip code, driver’s license number, full address, sometimes a social security number embedded in a photograph (these are typically from historical data), a partial social, and tattoos or identifying marks. Law enforcement jurisdictions vary greatly in terms of what they collect and what they make available when they upload data to public record. It is not uniform across states and counties because state laws differ and no one follows uniform collection guidelines.
From 2004 onward, it has been against federal law to display an individual’s social security number on identification documents, including driver’s licenses. Section 7214 of the Intelligence Reform and Terrorism Prevention Act of 2004 [42 USC 405(c)(2)(C)(vi)(II)] explicitly prohibits printing Social Security numbers on identification documents issued by motor vehicle agencies, including driver licenses and vehicle registrations. Old licenses that had them were phased out.
While court data (conviction data, not arrest data) would likely have social security numbers attached to a file, the displayed portion would be truncated to prevent ID theft. Additionally, it would be textual data without photographs, so name duplication is possible.
For records displaying only the last four digits of a Social Security number, those digits are not identifiers on their own. In the 9-digit number as issued before June 2011, the first three digits (the area number) were keyed to the state in which the number was issued. The next two (the group number) indicated the order in which the SSN was issued in each area. The last four (the serial number) were randomly generated. (Since June 2011, the SSA has assigned SSNs randomly, so the area number no longer maps to a state.)
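The historical three-part structure can be shown with a small parsing sketch (the sample number is a placeholder, not a real SSN):

```python
def split_ssn(ssn: str) -> dict:
    """Split a 9-digit SSN string into its three historical components."""
    digits = ssn.replace("-", "")
    assert len(digits) == 9 and digits.isdigit()
    return {
        "area": digits[0:3],    # historically keyed to the issuing state
        "group": digits[3:5],   # order of issuance within the area
        "serial": digits[5:9],  # randomly generated; not an identifier
    }

split_ssn("123-45-6789")
# → {"area": "123", "group": "45", "serial": "6789"}
```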
Law enforcement doesn’t uniformly collect or input Social Security numbers. At arrest, officers ask for a driver’s license or some other acceptable form of ID. Typically, only if no other form of ID is available is the social security number asked for, as a de facto ID.
Note: Not everyone arrested has a social. People have the right to refuse to provide socials, and more than half the states have very specific laws to prevent identity theft, which prevent a social security number from being shared on any platform by a public agency without end-to-end encryption; even then, the permitted use cases are specific.
While SSNs may not be displayed, DLs are connected to Social Security numbers in other systems, as an individual needs an SSN in order to apply for a license at a DMV. In many places, Medicaid and Medicare benefits are tied to an ID, which is tied to a DL/State ID/SSN. In Ohio, for instance, if an organization has access to LEADS, it could, hypothetically, look up the social and marry that data. In some states, DL numbers are algorithmically generated from a social. Again, though, they wouldn’t be displayed.
On our part, we, as an organization, do not specifically categorize, collect, or display social security numbers or driver’s licenses.
Not for display in the system, and that is not the intention behind our database or data collection — our focus is on amalgamating data to allow organizations to keep their employees, customers and the public at large safe beyond what is possible with a one-off background check. However, here are some facts based on BJS data.
* More than half (57%) of violent offenders who were released from state prison in 2016 — the latest year for which there is complete data — served less than 3 years before their release.
* The average time an offender (across all offense types) served in state prison in 2016, from the date of first admission to initial release, was 2.6 years. The median amount of time served (the middle value in the range of time served, with 50% of offenders serving more and 50% serving less) was 1.3 years.
* Persons serving less than one year in state prison made up 40% of first releases in 2016.
* The average time served before an initial release by state prisoners who were sentenced for a violent offense was 4.7 years and the median time was 2.4 years.
* State prisoners sentenced for rape or sexual assault served an average of 6.2 years and a median time of 4.2 years before initial release.
* State prisoners serving time for drug offenses, including trafficking and possession, served an average of 22 months and a median time of 14 months before their initial release.
* About 3 in 5 offenders released after serving time for drug possession served less than 1 year before their initial release.
* In general, state prisoners served an average of 46% of their maximum sentence before their first release. Violent offenders served 54% of their maximum sentence, property offenders served 42%, drug offenders served 41% and public order offenders served 45%.
* Persons in state prison for rape or sexual assault served an average of 62% of their maximum sentence before initial release.
* Those in prison for drug possession served an average of 38% of their maximum sentence length.
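The gap between the average time served (2.6 years) and the median (1.3 years) in the figures above reflects a right-skewed distribution: a few very long stays pull the mean up while the median tracks the typical case. A sketch with invented numbers:

```python
def mean(values):
    return sum(values) / len(values)

def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Years served by a hypothetical cohort of seven releases.
served = [0.5, 0.8, 1.0, 1.3, 1.5, 2.0, 11.1]
mean(served)    # ≈ 2.6, dragged up by the single 11.1-year stay
median(served)  # 1.3, the middle release
```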
When making personnel decisions — including hiring, retention, promotion, and reassignment — some employers run background checks. This is allowed under law. However, any time an individual’s background information is used to make an employment decision, federal laws that protect applicants and employees from discrimination have to be complied with. This includes discrimination based on race, color, national origin, sex, or religion; disability; genetic information (including family medical history); and age (40 or older). That means you cannot check criminal backgrounds for certain employees only; everyone is entitled to be treated equally under the law. These laws are enforced by the Equal Employment Opportunity Commission (EEOC). In addition, when background checks are run by or through a company in the business of compiling background information, including but not limited to criminal background information, the Fair Credit Reporting Act needs to be complied with.
According to federal guidelines under the FCRA, employee background checks of different stripes are deemed “consumer reports.” The FCRA does not just regulate credit reports; it also covers criminal and civil records, civil lawsuits, educational and other reference checks, and any other information obtained by a consumer reporting agency. The FCRA regulates the collection and use of data obtained through these consumer reports and “promotes the accuracy, fairness, and privacy of information in the files of consumer reporting agencies.” Because Biometrica’s algorithms amalgamate real-time arrest data and make that data available to employers for the purposes of creating pre- and post-employment criminal background reports, Biometrica has to strictly abide by the provisions of the FCRA. Please note that we are FCRA compliant and an associate member of the PBSA.
No, Biometrica is not a CRA. Biometrica is a data provider, more specifically, of real-time data-as-a-service (DaaS). Why are we not a consumer reporting agency? Because while our automated systems and algorithms provide what’s called “pointer data” to authorized clients with a subscription license in real-time (our data is updated from every jurisdiction every hour), for privacy reasons, Biometrica’s staff or contractors do not have any insight into or access to consumer data, i.e., the data of individual consumers being background checked by any organization, including their names or images.
Because of this, Biometrica staff and contractors also have no ability to conduct a background check or search on any individual consumer on behalf of a client, make a determination of a match (or not) to any criminal record, or make a recommendation on pre-adverse or adverse action. We provide that ability to our clients, once they are authorized and receive license keys.
We do provide information on FCRA compliance because it is important to our users, and because some of our users are CRAs. In the case of employee checks, we do not provide license keys until the employer signs an Employer Certification Agreement for FCRA Compliance, certifying they have complied with FCRA requirements. The Agreement details employee (this includes contractor, provider or volunteer) rights under the FCRA and the requirements to be adhered to in order for employees to be background checked.
Our systems are HIPAA compliant, and FCRA compliant for any organization that requires adherence to FCRA guidelines.
With privacy as a foundational philosophy, we do not access any biometric templates generated during a search and allow clients to set up their systems to delete and purge biometric templates based on their requirements. If storage is mandated by law, the biometric template is stored in a black box environment and Biometrica staffers have no access to that stored data. Every event in the system has an immutable audit trail, to ensure accountability.
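The client-configurable deletion described above can be sketched as a simple retention sweep (a hypothetical illustration; the record layout and names are invented, not Biometrica’s implementation):

```python
from datetime import datetime, timedelta

def purge_templates(templates: list, retention_days: int, now: datetime) -> list:
    """Return only biometric templates still inside the client's
    configured retention window; everything older is purged."""
    cutoff = now - timedelta(days=retention_days)
    return [t for t in templates if t["created"] >= cutoff]
```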
The FCRA, 15 U.S.C. § 1681 et seq., requires that this notice be provided to inform users of consumer reports of their legal obligations. State laws could impose additional requirements. The text of a consumer’s rights under the FCRA is available here. Before an organization takes an adverse employment action, it must give the applicant or employee:
• A notice that includes a copy of the consumer report they relied on to make that decision; and
• A copy of “A Summary of Your Rights Under the Fair Credit Reporting Act,” which, for instance, Biometrica makes available to all its users.
The EEOC does not directly regulate the use of criminal records in employment-related decisions. However, the EEOC enforces Title VII of the Civil Rights Act, which prohibits employment discrimination based on race, color, religion, sex, or national origin. According to the EEOC itself, “Having a criminal record is not listed as a protected basis in Title VII. Therefore, whether a covered employer’s reliance on a criminal record to deny employment violates Title VII depends on whether it is part of a claim of employment discrimination based on race, color, religion, sex, or national origin.”
Enforcement Guidance on the Consideration of Arrest and Conviction Records in Employment Decisions under Title VII of the Civil Rights Act notes the following: “The fact of an arrest does not establish that criminal conduct has occurred, and an exclusion based on an arrest, in itself, is not job related and consistent with business necessity. However, an employer may make an employment decision based on the conduct underlying an arrest if the conduct makes the individual unfit for the position in question.” In other words, although an arrest record standing alone may not be used to deny an employment opportunity, an employer may act on the conduct underlying the arrest if that conduct makes the individual unfit for the position. The conduct, not the arrest, is relevant for employment purposes.
Further, the EEOC adds, “a conviction record will usually serve as sufficient evidence that a person engaged in particular conduct. In certain circumstances, however, there may be reasons for an employer not to rely on the conviction record alone when making an employment decision.”
When an employer treats criminal history information differently for different applicants or employees, based on their race or national origin (disparate treatment liability), it will be considered a violation of EEOC guidelines. Further, “an employer’s neutral policy (e.g., excluding applicants from employment based on certain criminal conduct) may disproportionately impact some individuals protected under Title VII, and may violate the law if not job related and consistent with business necessity (disparate impact liability).”
Please note: Compliance with other federal laws and/or regulations that conflict with Title VII is a defense to a charge of discrimination under Title VII. The EEOC enforces Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, sex, or national origin.
While consumer reports may be obtained for any number of uses, in our particular case, we focus on one: For employment purposes, including hiring and promotion decisions, where the consumer (or individual) has given written permission for the same.
An eMotive account is not switched on without a signed certification from the employer that it will use the reports only for employment purposes, and not in violation of federal and, where applicable, state law, and that it attests to the following:
• That the employer has notified its employees that they are being background checked post-employment and has received their written permission to get a background report on them.
• That the employer will comply with the FCRA’s requirements.
• That the employer won’t discriminate against an applicant or employee, or otherwise misuse the information in a report in violation of Federal or State Equal Employment Opportunity laws or regulations.
• That an employer will provide access to information about the FCRA, including information about their responsibilities to their employees under the statute, including the notice to users of consumer reports and a summary of consumer rights under the FCRA. These are also provided at different stages through the process from the point of onboarding onward and right through the process of a report being generated. These notifications are provided in the eMotive system itself at multiple steps and are available for immediate printing for the consumer/employee in each of those steps. The HR person only needs to hit a button.
• That an employer will honor the rights of applicants and employees, including by giving them access to their files when they ask for them and conducting a reasonable investigation when they dispute the accuracy of information. Their data can be printed and is accessible and made available. All Biometrica’s data is 100% sourced directly from law enforcement; it is not collected from any other agency, and there is no human interface between the time it is ingested from an LEA and the time an HR person is asked to make a determination on a possible match: it is all algorithmically generated.
Under the FCRA, a CRA generally may not report records of arrests that did not result in entry of a judgment of conviction, where the arrests occurred more than seven years ago. The FCRA does clarify that items of public record relating to arrests, indictments and convictions are considered up to date if the CRA reports the current public record status of the item at the time of the report. For example, the FTC has issued guidance that if a CRA reports an indictment, it must also report any dismissal or acquittal available on the public record as of the date of the report. Similarly, if a CRA reports a conviction, it must report a reversal that has occurred on appeal. Because the requirement to report complete and up-to-date information is item-specific, the report should include the current, complete, and up-to-date public record status of each individual item reported. The FTC has not indicated that a company has an obligation to continually update reports that it has already provided, but the report should be up to date at the time it is provided.
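A deliberately simplified sketch of the seven-year arrest rule described above (illustrative only: the record layout is invented, the statute’s exceptions are ignored, and real screening logic should be designed with counsel against the full text of the FCRA):

```python
from datetime import date

def reportable(record: dict, today: date) -> bool:
    """Arrests with no conviction are reportable only if they occurred
    within the last seven years; convictions are not subject to that
    particular limit in this sketch."""
    if record["convicted"]:
        return True
    return (today - record["arrest_date"]).days <= 7 * 365
```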
As Biometrica carries real-time data, updated hourly, our records match the most current available public records for an arrest. As this is real-time, a case would not yet have been adjudicated by a court at that point. We also reflect all law enforcement updates to a case as it moves through the system, whenever law enforcement updates the case concerned.
Algorithmically, the system first matches name and facial recognition against our 100% law enforcement-sourced multi-jurisdictional database, and then presents possible matches to a human — the company-authorized HR or compliance person — for a final determination. To elaborate, when the algorithms (machine intelligence) identify a possible match against name and facial biometric parameters, we then present an actual person (human intelligence) with all the arrested individual’s case data that the jurisdictional county makes available to us (which differs in the case of each county or county equivalent): it could be age, race, gender, height, and eye color in one case; age, race, and gender in another; or age, gender, height, eye color, and hair color in yet another. The final determination of a match between an employee and an arrested individual is not made by Biometrica, directly or indirectly, through AI or humans; it is made by the employer or client company, who knows the employee best and is best placed to decide what to do with the information it has. Our obligation is to comply with the law, provide them with the data, and remind them of their obligations under the law at every stage, including the obligation to provide the employee with a copy of their consumer report.
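The two-stage flow described above, machine proposes and human disposes, can be sketched like this. The scoring weights, threshold, and record layout are all invented for illustration; they are not Biometrica’s actual parameters:

```python
def propose_candidates(employee: dict, arrest_records: list, threshold: float = 0.8) -> list:
    """Stage 1 (machine): score each arrest record against the employee
    on name and facial similarity, and return only candidates above a
    threshold, best first. No match is confirmed here."""
    candidates = []
    for record in arrest_records:
        name_score = 1.0 if record["name"] == employee["name"] else 0.0
        face_score = record["face_similarity"]  # e.g. output of a face-match model
        score = 0.5 * name_score + 0.5 * face_score
        if score >= threshold:
            candidates.append((score, record))
    return sorted(candidates, key=lambda c: c[0], reverse=True)

# Stage 2 (human): the employer's authorized reviewer inspects each
# candidate's available demographic fields and makes the final call.
```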
If an employer chooses to take any type of adverse action, as defined by the FCRA, and that action is based at least in part on information contained in a report generated from eMotive data, Section 615(a) requires the employer to notify the employee. We recommend that this is done as a matter of course. The notification may be done in writing, orally, or by electronic means, and must include the following information:
• The name, address, and contact information for Biometrica.
• A statement that Biometrica did not make the adverse decision and is not able to explain why the decision was made.
• A statement setting forth the individual’s right to obtain a free disclosure of their file from Biometrica, if the request is made within 60 days.
Please note, as mentioned above, we provide the means to print notifications and reports in seconds. Arrest reports include case file numbers of arrested individuals, and we would strongly recommend that in the case of an HR person determining a positive match to an employee, they follow up with the law enforcement jurisdiction concerned prior to determining further action.
Please note that no Biometrica employee or algorithm is making the final determination of a match between an employee and an arrested individual; that determination is made by an organization’s authorized HR or Compliance personnel. As Biometrica is not making any determination, it is not required to provide notice directly to any individual, as long as Biometrica maintains strict procedures designed to ensure that the data is complete and up to date; as the data is real-time, it is.
However, notice IS required when a client company takes an adverse action against an employee based in some part on the information contained in the eMotive report, and Biometrica provides the means for an employer to generate a consumer report with details, for the employee’s benefit. We also alert employers to this responsibility at different stages within the system itself, with access to the notice to users of consumer reports.
All eMotive users will find the following language included in their systems at different places: “As the authorized person for your employer, you have received this report on a match candidate based on several algorithmically generated demographic parameters. These have been based on comparisons run between individuals in the data set you have entered into the system with their permission, and real time law enforcement-sourced arrest data. However, the final determination of a possible match is your decision. If you determine that the individual is a match and you take action with respect to this individual based on this determination, the individual has a right to be informed of this determination and this record should be made available to them. Please refer to the statement of individual rights for more information.”
“These are law enforcement-sourced records in our system that are each a potential match to an individual in your database, pending a final determination by you. These notifications should be used for informational purposes only. You should follow up with the law enforcement body or court in that jurisdiction for further details. If you determine that the individual is a match, and you take action with respect to this individual based on this determination, the individual has a right to be informed of this determination and this record should be made available to them. Please refer to the statement of individual rights for more information.”
No one from Biometrica can add arrest or conviction data to UMbRA. The only person who has the ability to enter data into the system is a law enforcement officer. It is entered into a jurisdiction’s jail system, and our programs pull it directly from those systems. As mentioned above, there is no human interface at our end; this process is documented in our internal documents, as required under the FCRA.
Note: UMbRA, our NatCrim database, and its app DO NOT collect information about a user’s friends, contacts, or other third-party persons, with or without the knowledge or consent of those parties. Our app does not collect a user’s (or their friends’, contacts’, etc.) imagery or other personal information, including but not limited to eye color, height, and race.
This version of the app is ONLY for law enforcement usage, not for public use, and allows only a verified law enforcement official to upload information on an already arrested or convicted individual, including details of eye color, hair color, height, etc. This is standard practice for local, state, tribal, federal or international law enforcement and has been mandated by the U.S. Congress. The National Prisoner Statistics (NPS) data collection program was started in 1926 in response to a congressional mandate to gather information on incarcerated individuals. Originally under the aegis of the U.S. Census Bureau, the collection of statistics moved to the Bureau of Prisons in 1950, and then to the National Criminal Justice Information and Statistics Service in 1971. This was the predecessor of the Bureau of Justice Statistics (BJS), which was established in 1979.
Under the National Corrections Reporting Program (begun in 1983), offender-level administrative data has been collected annually on prison admissions and releases, yearend custody populations, and parole entries and discharges in participating jurisdictions. In addition, “demographic information, conviction offenses, sentence length, minimum time to be served, credited jail time, type of admission, type of release, and time served are collected from individual prisoner records.”
NIST, the National Institute of Standards and Technology, publishes the “American National Standard for Information Systems — Data Format for the Interchange of Fingerprint, Facial, & Other Biometric Information.” It states: “Various levels of law enforcement and related criminal justice agencies as well as identity management organizations procure equipment and systems intended to facilitate the determination of the personal identity of a subject from fingerprint, palm, facial (mugshot), or other biometric information (including iris data). To effectively exchange identification data across jurisdictional lines or between dissimilar systems made by different manufacturers, a standard is needed to specify a common format for the data exchange. To this end, this standard has been developed.”
Section 1 of the American National Standard — Data Format for the Interchange of Fingerprint, Facial, & Other Biometric Information — defines the standard for the content, format, and units of measurement for the electronic exchange of fingerprint, palm print, plantar, facial/mugshot, scar, mark & tattoo (SMT), iris, deoxyribonucleic acid (DNA), and other biometric sample and forensic information that may be used in the identification or verification process of a subject. It further states: “The information consists of a variety of mandatory and optional items. This information is primarily intended for interchange among criminal justice administrations or organizations that rely on automated identification systems or use other biometric and image data for identification purposes.” Please note, there are differences in the uploaded data depending on a law enforcement body’s state/local jurisdiction, because of the practice of allowing mandatory and optional items.
Note: While, as mentioned above, the data uploaded through this app is intended to be entered only by authorized law enforcement officers and other criminal justice agency professionals, and all of this data is a matter of public record, the app is not intended for use by the general public. Information compiled and formatted in accordance with this standard can be recorded on machine-readable media or transmitted by data communication facilities.
Section 8.2 defines user-defined descriptive text records and states: “This record may include such information as the state or FBI numbers, physical characteristics, demographic data, and the subject’s criminal history.”
Section 8.10.26 and 8.10.27 define the collection of eye and hair color, respectively, in a SAP (Subject Acquisition Profile) and further detail the guidelines and parameters for inputting eye and hair color in the SAP, including codes. See below.
| Eye color attribute | Attribute code |
| --- | --- |

| Hair color attribute | Attribute code |
| --- | --- |
| Unspecified or unknown | XXX |
| Blonde or Strawberry | BLN |
| Gray or Partially Gray | GRY |
| Red or Auburn | RED |
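As an illustration, the hair-color codes listed above could be represented as a simple lookup table. This is only a sketch; the function name is hypothetical, but the codes are the ones shown above, with “XXX” as the fallback for unspecified or unknown values.

```python
# Hair-color attribute codes from the table above (SAP fields).
# "XXX" is the code for an unspecified or unknown attribute.
HAIR_COLOR_CODES = {
    "Unspecified or unknown": "XXX",
    "Blonde or Strawberry": "BLN",
    "Gray or Partially Gray": "GRY",
    "Red or Auburn": "RED",
}

def hair_code(description: str) -> str:
    """Return the attribute code for a hair-color description, XXX if unknown."""
    return HAIR_COLOR_CODES.get(description, "XXX")
```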
In addition to the federal mandate, different states have also established their own policies, procedures and guidelines when it comes to data collection of incarcerated individuals, some of which may ask for further details.
In California, for instance, the California Department of Justice’s data collection and reporting responsibility code, PC 13102, Clause B, states that under the code, the department has the responsibility to collect and report statistics showing the “personal and social characteristics of criminals and delinquents.”
Further, PC 13020 states: “It shall be the duty of every city marshal, chief of police, railroad and steamship police, sheriff, coroner, district attorney, city attorney and city prosecutor having criminal jurisdiction, probation officer, county board of parole commissioners, work furlough administrator, the Department of Justice, Health and Welfare Agency, Department of Corrections, Department of Youth Authority, Youthful Offender Parole Board, Board of Prison Terms, State Department of Health, Department of Benefit Payments, State Fire Marshal, Liquor Control Administrator, constituent agencies of the State Department of Investment, and every other person or agency dealing with crimes or criminals or with delinquency or delinquents, when requested by the Attorney General:
(a) To install and maintain records needed for the correct reporting of statistical data required by him or her.
(b) To report statistical data to the department at those times and in the manner that the Attorney General prescribes.
(c) To give to the Attorney General, or his or her accredited agent, access to statistical data for the purpose of carrying out this title.
IMPORTANT: Please take note that despite this, Biometrica does not allow even law enforcement agencies to upload juvenile or delinquent data into the NatCrim database (UMbRA). We allow no juvenile data in our system at this point and do not see this changing in the foreseeable future.
Do also see the FBI UCR here.
Biometrica’s NatCrim database is modeled on the National Crime Information Center (NCIC), a computerized index of criminal justice information including criminal history record information, fugitives, stolen property, and missing persons. However, unlike the NCIC, we currently maintain only law enforcement-sourced records relating to criminal history, i.e. arrest, conviction, sex offender, and warrant list records. UMbRA does not carry records of missing persons or any records not sourced from law enforcement.
As with NCIC records, records in UMbRA are protected from unauthorized access through administrative, physical, and technical safeguards. These include restricting access to those who need it to perform official duties, and encrypting data communications to create an audit trail that maintains a legally viable digital chain of custody.
Please note, we do not touch or manipulate LEA (Law Enforcement Agency) data in any way, except for collecting and amalgamating the data at the back end. As all of the data is sourced 100% from law enforcement public record, we don’t touch it in any way in order to avoid compromising data integrity, which is why you’ll see some inconsistencies in how the data is entered from state to state or even county to county.
This means we don’t correct inputting errors either. For instance, we found a record of a man arrested by the Maricopa Co. Sheriff’s Office in Arizona in October 2017. His details included his being entered as having “blue” hair, when his image showed him as having strawberry blond hair. We didn’t change that, as that is what was entered by law enforcement. We don’t ever manipulate that data.
UMbRA is Biometrica’s NatCrim (National Criminal) database, currently at almost 7 million records and growing. This database keeps expanding as more individuals are arrested and/or convicted, and more counties are brought into the system. Data is pulled every hour into UMbRA from every jurisdiction in the system, but whether new data from a particular jurisdiction is reflected in the system or not depends on whether the law enforcement body concerned has updated its own database and made those records public as yet. Typically, data from a jurisdiction (county/city/regional authority) is updated within an hour to 24 hours of an arrest wherever possible, so it is as real-time as possible.
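The hourly pull described above can be sketched as a simple per-jurisdiction polling pass. This is a minimal illustration, not Biometrica’s actual implementation; the store class, the `fetch_public_records` callable, and all names are assumptions. The one property it does reflect from the text: records are stored as-is, without any field being corrected or altered.

```python
import time

class InMemoryStore:
    """Minimal illustrative store: keeps records verbatim, tracks last pull time."""
    def __init__(self):
        self.records = {}
        self.pulled_at = {}

    def last_pull(self, jurisdiction):
        return self.pulled_at.get(jurisdiction, 0.0)

    def upsert(self, record):
        # Records are stored verbatim -- no fields are corrected or altered.
        self.records[record["record_id"]] = record

    def mark_pulled(self, jurisdiction, when):
        self.pulled_at[jurisdiction] = when

def pull_cycle(jurisdictions, fetch_public_records, store):
    """One ingestion pass: pull whatever each jurisdiction has made public."""
    for j in jurisdictions:
        # Ask only for records made public since the last pull.
        for record in fetch_public_records(j, since=store.last_pull(j)):
            store.upsert(record)
        store.mark_pulled(j, time.time())
```

Whether anything new arrives in a given pass depends entirely on whether the jurisdiction has published updates since the previous pass, which matches the behavior described above.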
UMbRA also allows subscribers to run manual background checks on individuals by text (name, other demographic details) or face (uploading a photograph).
No, it is a real-time database of arrests, which also provides subscribers the ability to run a manual background check in under 20 seconds. If you, as a CRA or employer, would like to be automatically notified if an employee has potentially been arrested, you’d have to subscribe to eMotive.
eMotive is Biometrica’s end-to-end encrypted, next-generation, multi-jurisdictional 24×7 continuous background checking software. It gives an organization the ability to be notified in near real-time, through an encrypted alert to authorized company personnel, that an individual matching the profile of one of its employees/contractors/volunteers/vendors has potentially been arrested. This typically happens within an hour to 24 hours of an individual’s arrest and within minutes of that arrest being updated in law enforcement databases. Data in eMotive runs against UMbRA on a 24×7 basis, but eMotive data is only available to authorized personnel within an organization. UMbRA data is available to all subscribers.
Biometrica is FCRA compliant and an associate member of the Professional Background Screening Association (PBSA), formerly known as the National Association of Professional Background Screeners (NAPBS).
Contact us here, using the form at the bottom of the page, and we will get in touch with you about implementing eMotive.
See our Getting Started page and documentation for eMotive here.
• You will, with permission from your employees or contractors, upload their images and relevant demographic information to that private database. This private database is visible to no one but your organization’s HR or compliance personnel. An eMotive account is not turned on until an organization certifies in writing to Biometrica that it has informed its staffers that they are being continuously monitored for criminality and has their permission to do so.
• Our algorithms will then run continuous biometric comparisons against the larger UMbRA NatCrim database in the background.
• These comparisons are being run to let your HR or Security person know, in near real-time, when someone that the algorithm thinks is a potential match to your employee, based on biometric parameters, is arrested somewhere in the United States.
• How it would work is that comparisons would be run against UMbRA on a constant basis. UMbRA data from each jurisdiction is updated every hour to 24 hours, depending on when law enforcement within that jurisdiction updates its data and how often it makes that updated data available.
• So, if someone called John Smith is arrested and you have a John Smith on your list, eMotive at the backend would automatically run a search against search parameters to see if the John Smith arrested is a possible match to the John Smith on your record. Name, Facial Recognition and other parameters would be matched.
• If those are a potential match in the check against UMbRA, the eMotive system would send your authorized personnel an encrypted alert in the form of a uniquely targeted email notification: a link asking them to log into eMotive.
• Once your HR person logs into the system and is authenticated, it is then up to that HR person to look at the information and make a final determination as to whether there is a match, and if there is, what next steps need to be taken, if any.
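The steps above can be sketched as a short matching routine. This is an illustrative sketch only: the helper names (`face_similarity`, `send_encrypted_alert`) and the threshold value are assumptions, not Biometrica’s actual parameters. What it does preserve from the text is the key constraint that the system only flags *potential* matches; the final determination is always a human one.

```python
FACE_THRESHOLD = 0.80   # hypothetical confidence cut-off for a *potential* match

def check_arrest_against_roster(arrest, roster, face_similarity, send_encrypted_alert):
    """Compare one incoming arrest record against an employer's private roster."""
    for employee in roster:
        names_match = arrest["name"].lower() == employee["name"].lower()
        confidence = face_similarity(arrest["photo"], employee["photo"])
        if names_match and confidence >= FACE_THRESHOLD:
            # A potential match only: the final determination is always made
            # by the employer's authorized (HR) person, never by the system.
            send_encrypted_alert(employee["id"], arrest["record_id"], confidence)
```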
The difference between eMotive and most other background checking systems comes down to this:
* It is a multi-jurisdictional 24×7 criminal background check, i.e. continuous monitoring, as opposed to an annual or single point-in-time background check.
* You are updated as soon as law enforcement updates data, about a potential match to an employee, and provided information on that match. You, as the CRA or employer, can then make the final determination on that match.
* It searches against both text and facial recognition parameters to reduce false positives.
* All notifications are through encrypted alerts — to preserve and protect PII, create an audit trail, and maintain a legally viable digital chain of custody.
* Comparisons are run against a 100% law-enforcement sourced multi-jurisdictional NatCrim database.
For obvious reasons, if you work with children or vulnerable adults, or are in any public-facing job. Additionally, because it’s important for any organization to keep its employees safe, even from other employees when necessary, and to know when any employee potentially needs help of any kind, in order to prevent workplace violence or an insider threat. Finally, because of Presidential Policy Directive 21 (PPD-21) on Critical Infrastructure Security and Resilience in 2013, which established national policy on critical infrastructure security and resilience and declared it a “shared responsibility” among Federal, state, local, tribal, and territorial (SLTT) entities and the public and private owners and operators of critical infrastructure.
PPD-21 identified 16 critical sectors where it mandated self-reporting requirements for arrests when it came to any of these 16, for a variety of reasons, from cybersecurity to physical security to public safety. Basically, this was to maintain the development of situational awareness capability, and constantly reevaluate threat and risk assessments.
• Chemical
• Commercial Facilities
• Communications
• Critical Manufacturing
• Dams
• Defense Industrial Base
• Emergency Services
• Energy
• Financial Services
• Food & Agriculture
• Government Facilities
• Healthcare & Public Health
• Information Technology
• Nuclear Reactors, Materials & Waste
• Transportation Systems
• Water & Wastewater Systems
For more on each sector, see DHS CISA here: https://www.cisa.gov/critical-infrastructure-sectors
Also see FEMA’s Protection FIOP (Federal Interagency Operational Plan).
Having information handy makes sense. What an HR or compliance person does with that information is their call, based on organizational policy and appropriate state and federal law and guidance. But information helps put up red flags, maintain a legally viable audit trail, bring down liability costs and the overall cost of insurance, and maintain compliance and licensing norms; most importantly, it gives organizations a clear path to protecting themselves, their employees, and the people or public they serve.
* UMbRA is the overall criminal database and search engine. eMotive is the 24×7 monitoring ecosystem which will, to put it simply, allow you, as an organization, to create your own private database of your employees or contractors or vendors, a database that will sit “below” UMbRA.
* You will, with permission from your employees, upload their images and relevant demographic information to that private database [this private database is visible to no one but your organization’s HR or specific compliance or security personnel].
* Our algorithms will then run continuous biometric comparisons against the larger UMbRA database in the background.
To let your HR or security person know, in near real-time, when someone that the algorithm thinks is a potential match to your employee, based on biometric parameters, is arrested somewhere in the United States.
It is a product that provides an organization with the ability to enter all its employees into a database, and be notified in near real-time, through an encrypted alert, when someone matching the profile of one of its employees has potentially been arrested.
This would allow any company that has a requirement to do an annual background check (typically costing between $50 and $250 or more per employee for a single point-in-time check), or that has self-reporting requirements in the case of an arrest, to cut costs, improve notification systems, share information to protect its human and other assets, and put in place practical systems to prevent insider threats and reputational or other damage, while essentially doing a 24×7 criminal background check. eMotive allows continuous monitoring, a dramatic improvement over a single annual or point-in-time background check.
This is a new product and is specifically focused on continuous monitoring or a continuous background check for workplace and public safety and compliance reasons.
• It would work by you uploading a list of your employees (or any other list of people), with their permission, into a separate silo that sits beneath UMbRA’s arrest database.
• Because that siloed data contains PII or personally identifiable information, it would not be accessible to other UMbRA users, and even within your organization it would only be available to authorized personnel from the organization. It would not be accessible by Biometrica staffers.
• We have security in place to ensure that only you/your authorized personnel have access to the data in your silo.
• How it would work is that comparisons would be run against UMbRA on a constant basis. UMbRA data is updated every hour to 24 hours per jurisdiction, depending on when law enforcement within that jurisdiction updates its data and how often it makes that updated data available.
• So, if someone called John Smith is arrested and you have a John Smith on your list, eMotive at the backend would automatically run a search against search parameters to see if the John Smith arrested is a possible match to the John Smith on your record. Name, Facial Recognition etc. would be matched.
• If all of those are a potential match in the check against UMbRA, the eMotive system would send your authorized personnel an encrypted alert, asking them to log into eMotive because they have a notification.
• Once your HR person logs into the system and is authenticated, it is then up to that HR person to look at the information and make a final determination as to whether there is a match.
To expand on this, every employee on your list in eMotive will be run against every arrest that comes into UMbRA (the NatCrim database) on a constant basis. If you have 50,000 employees on a list, and 10,000 arrests come in today, all 50,000 employees will be run against all 10,000 arrests coming in today. This happens every day.
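The scale of that daily workload is easy to quantify: the number of candidate comparisons in a day is simply the roster size multiplied by the day’s arrest count.

```python
# Daily comparison volume: every listed employee is run against
# every arrest entering UMbRA that day.
employees = 50_000
arrests_today = 10_000
comparisons = employees * arrests_today   # 500,000,000 candidate comparisons
```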
As employee data is Personally Identifiable Information, or PII, we have to ensure that employee data and access to it are protected. Every employee in an organization may or may not have an arrest record, and even if they do, their employee details should still be protected, and access to that record still needs to be encrypted and monitored. Biometrica is fully FCRA compliant. Every part of eMotive is end-to-end encrypted, and data is shared and notifications provided in a process similar to the movement of HIPAA data.
A company-authorized person receives an email notification (without the employee name) as an alert. What we use is a Uniquely Targeted Email Notification: a notification that is unique to that particular alert. It is produced in relation to a specific alert and is transmitted to a curated list of recipients — which may or may not be a single recipient — that has explicitly opted in at a prior time to receive that notification. It can be tracked and audited to reflect when it was transmitted, if it was opened, when it was read, and what action was taken thereafter.
Every single action or event is tracked, to maintain chain of custody and create an immutable digital audit trail for legal and judicial purposes. For example, suppose an employee on your list, say a driver, is arrested for a DUI on a Friday night and gets out Saturday afternoon. Your HR person receives a notification asking them to log into eMotive and see information on a potential match to an arrested individual, but does not check it, as it’s a weekend. Suppose that employee gets inebriated again and drives a work truck into something Monday morning. You can actually see the audit trail for when the notification alert was received, whether it was even acknowledged, and, if it was acknowledged, whether anything was done about that acknowledgement (at least in the system). There’s a process in place, one that can’t be manipulated.
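An immutable audit trail of the kind described here is commonly built by hash-chaining events, so that altering any earlier entry invalidates every entry after it. Below is a minimal generic sketch of that technique, not Biometrica’s actual implementation; all names and the event vocabulary are assumptions.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained event log: each entry commits to the previous one."""

    def __init__(self):
        self.entries = []

    def record(self, event, actor, timestamp=None):
        """Append an event (e.g. 'alert_sent', 'alert_opened') to the chain."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "event": event,
            "actor": actor,
            "ts": timestamp if timestamp is not None else time.time(),
            "prev": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute the whole chain; any tampering breaks a link."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("event", "actor", "ts", "prev")}
            payload = json.dumps(body, sort_keys=True).encode()
            if e["prev"] != prev or hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each entry’s hash covers the previous entry’s hash, retroactively editing the Friday-night alert record would make `verify()` fail, which is the property a legally viable audit trail relies on.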
Our data is updated when law enforcement updates data. Please note:
1. Your arrest record and your prosecution record effectively live in two separate baskets. So you might be booked, released, charges dropped, found not guilty, but that arrest is a matter of record until the prosecutorial body updates the law enforcement agency with the status, and the law enforcement agency, in turn, takes the trouble to update its own record to reflect that status.
2. It’s time-consuming and complicated, even in cases that were dismissed, for a record to be sealed, expunged, or annulled, or to have what’s called “records restricted” (Georgia) or a “Declaration of Factual Innocence” (California), such that an individual can lawfully deny the arrest. It depends on the state, the type of offense (violent offenders and sexual offenders are generally excluded), the outcome of the case, and the age of the defendant, and procedures vary very widely.
For more, take a look at this excellent site, The Restoration of Rights Project.
From our perspective, if the law enforcement body updates that record and removes it, we could. If someone lets us know, we could. We already have a system in place for consumer rights or disputes here.
No. We have no juvenile records and have no plans to collect juvenile arrest or conviction data.
To weigh the scales fairly between an employer’s “need to know” for the protection of their organization, other employees, and the people who interact with them, and an employee or prospective employee’s right to privacy and opportunity for equal employment, a number of federal and state laws regulate both the kind of information an employer or prospective employer may obtain about a job applicant or employee and what they may actually look at. The extent of the check also depends on the role in question: whether, for instance, it is a security or safety-sensitive position, whether there is interaction with, say, children or vulnerable adults, and whether that role also has to meet federal background check requirements because it comes under an industry or sector classified as critical infrastructure. Before conducting background investigations, employers should be fully aware of the requirements under applicable law and ensure that their pre- and post-employment screening practices are in compliance.
Depending on the job in question, and the state, expunged records sometimes have to be made available during background checks. In Arizona, for instance, from April 2019 onward, all non-certified teachers must disclose whether they have had criminal offenses expunged from their records. In Virginia, however, employers are prohibited from requiring an applicant for employment to disclose information concerning any arrest or criminal charge against him or her that has been expunged.
It is always the employee authorized by the company. All our algorithms do is run continuous matches against information sourced from law enforcement data and present that information to an organization’s authorized personnel. We cannot directly inform the individual concerned, because the determination of whether it is the individual concerned is never made by us under any circumstance. It is always made by the employer (HR), after that HR person or administrator, once notified of an alert, has signed into eMotive, clicked on the alert, and made a determination on whether arrested Individual A is Employee A or not.
We always recommend that as part of the onboarding process, over and above what is included in certifications from employer/employees, an organization does the following:
* Have a virtual demo of eMotive, showing what happens with a possible arrest. Emphasize that this situation only happens in case of an arrest.
* Include it in the employee handbook
* Show that in the case of an HR person making a final determination of a match, Individual A should receive a notification saying there is a public record that someone with their likeness may have been arrested, and that they may want to check whether this is the person concerned.
(Note: Despite systemic inconsistencies, it’s unlikely that the wrong person will get a notification: a number of parameters would have to match exactly, which is unusual. A false positive is rare.)
* Double check with the law enforcement jurisdiction involved and follow up with the prosecutorial office involved on the case status. The arrest details are part of the case file.
Do have a look at this graphic below (these are DOJ stats).
A uniquely targeted notificationis a notification that is unique to that particular alert. It is a notification that is produced in relation to a specific alert and is transmitted to a curated list of recipients — that may or may not be a single recipient — that has explicitly opted in at a prior time to receive that notification. It can be tracked and audited to reflect when it was transmitted and read.
Arrest And Conviction Data Do Not Come From The Same Source
• Law Enforcement Agencies (LEA) of different stripes at the local level (city/county/municipal authority) update arrest data, which is almost entirely digitized across the U.S. with text and facial recognition data feeds available for a large number of arrested individuals.
• Conviction data comes from the courts, and lives in the prosecutorial basket. It is, for the most part, textual data, not connected to facial recognition, and in most cases not uniformly or comprehensively digitized. You often have to go in person to a courthouse and ask for a file on a particular case, as that case information may not be digitized.
Note: Law enforcement may or may not update an arrest in a system to reflect the status of a case through the courts — this varies from jurisdiction to jurisdiction and is largely dependent on the availability of time and manpower in that jurisdiction.
Data is updated on a 1-to-24-hour cycle per jurisdiction, but please note this is only with reference to arrest records. The 1-hour-to-24-hour update cycle is accurate only with respect to the actual arrest itself, because that is the typical time-frame law enforcement jurisdictions take to update their public arrest databases. Some jurisdictions do it hourly, some do it every 24 hours, some do it every eight hours. UMbRA, our NatCrim database, updates as they do.
Similarly, when an LEA updates an arrest record status to reflect a case’s progression or adjudication in court, for instance, that progression or adjudication will be reflected in UMbRA, because it picks up every update made by law enforcement — when they make it. eMotive runs against UMbRA. This progression, as the case is adjudicated in the system, is what was referenced in that statement about following someone as they go “through the system.”
From our perspective, if the law enforcement body updates it, we will update it. If someone lets us know about an update with a formal court order, we could update it. We already have a system in place for consumer rights, mentioned above. At every step of the way in the eMotive system, the HR person also has the ability to print notifications for the employee.
Our arrest database. That number, as mentioned above, is constantly changing as we add more counties and more people are arrested every day. Data is updated every hour to 24 hours from each jurisdiction wherever possible, so it’s pretty current once a county has come into our intake pipeline, allowing us to follow someone as they go through the system, depending on when the law enforcement body concerned updates its records. No private organization includes immigration arrest records, or USPS, Secret Service, or FBI data, along with a number of other federal arrest records, given that federal arrest records are not open to the public.
For most U.S. jurisdictions, existing background checks report, for the most part, not on arrests but on convictions or court cases. It can take between 24 hours and six weeks before a court case is entered into a system, and in some cases years before it is adjudicated. In addition, the courts only track people by court ID number and textual data; the result is too many false positives on names (for instance, “John Smith”), and the information gets to you far too late. The problem is that the obligation in many industries, sectors, and jobs is to self-report if arrested, not just when convicted. It is, therefore, not just a risk to the employee concerned; it is a potential reputational, financial, and legal risk to the company or organization employing that employee, contractor, or vendor, if not worse. Continuous monitoring gives organizations the ability to put their people first, and their people’s lives first, while protecting themselves.
* Have a virtual demo of eMotive, showing what happens with a possible arrest.
* Include details in the employee handbook
* Emphasize that in the case of an HR person making a final determination of an employee match to an arrest record, the Employee should receive a notification letting them know that there is a public record that someone with their likeness may have been arrested, and they may want to request a copy of that information.
* Double check with the law enforcement jurisdiction involved and follow up with the prosecutorial office involved on the case status of a record.
When it comes to matching two faces, the accuracy of facial recognition depends on several things, including the age of the subject in the photo, the angle, the lighting, and the cameras being used. Then you look at the size of the database you’re comparing your photo against: Is it a 1:1 comparison, a 1:1,000,000, or a 1:N comparison?
If you are doing a 1:1 match, say a picture of the person standing in front of you against his driver’s license or the photo inside the chip in his passport, this kind of match recognition and confidence rating will be fairly high: over 90%, depending on the time between the initial photo and the current one. If you are doing a 1:many (1:N) search, the accuracy rate drops, but it is still a key factor in improving your match ability. This also depends on how many photos of an individual the gallery or database has, and over what time period.
Computers look at faces differently from humans. Computers look at faces as points in a facial template; humans look at faces as a whole. If, in the case of a sex offender, the system has a photo of an individual every year (for the worst sex offenders this is a requirement of Megan’s Law), and the system is presented with a photo or video of a person on that list, it is likely the person would be found within the first 10 results. You’re still going to need the human looking at the matches presented by the algorithm to give you the best option. But our system runs a search against millions in the UMbRA database in mere seconds. A human cannot do that. But a human can look at the two profiles an algorithm has matched from millions and make a final determination on whether they are actually a match.
To be very clear, from a technical standpoint, about how facial recognition works: we run a mathematical formula that compares numbers against numbers. An algorithm creates a unique hash template that is matched against another unique hash template, which then generates a confidence rating. It is really a misnomer that FR compares faces. We say that for easy understanding, but in the actual systemic process it is not a face-to-face comparison; it is a numbers-to-numbers comparison from a machine-intelligence perspective, which is why we present the result to a human for final adjudication in eMotive. We hope this helps in understanding it all.
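That numbers-to-numbers comparison can be illustrated with a generic feature-vector similarity score. Real systems first extract a template (a vector of numbers) from each face; the sketch below then scores two templates with cosine similarity. This is a stand-in for illustration only, not Biometrica’s algorithm, and the four-dimensional vectors are made up (real templates have hundreds of dimensions).

```python
import math

def cosine_confidence(template_a, template_b):
    """Score similarity between two facial templates (feature vectors).
    This is the 'numbers against numbers' comparison: no faces involved,
    only the vectors an earlier feature-extraction step produced."""
    dot = sum(a * b for a, b in zip(template_a, template_b))
    norm_a = math.sqrt(sum(a * a for a in template_a))
    norm_b = math.sqrt(sum(b * b for b in template_b))
    return dot / (norm_a * norm_b)

# Made-up templates for two photos of (possibly) the same subject.
probe = [0.21, 0.80, 0.11, 0.55]
gallery = [0.20, 0.79, 0.13, 0.54]
score = cosine_confidence(probe, gallery)
# A high score means "likely the same subject" -- a human still adjudicates.
```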
The system runs a search against everyone in the database when comparing employees in eMotive datasets for potential matches. At this point, that figure is in the range of 7 million; by the end of Q4 2020, it is likely to be substantially higher.
• The data the employer is required to upload, at the moment, includes: First Name, Last Name, Date of Birth, Home Address and Photograph.
• The match is currently algorithmically run against the name and photograph with a final determination to be made by an administrator appointed by the employer.
• No Biometrica employee or contractor is involved in this process at any stage.
• An employer has these optional data fields they can also upload: Prefix, Middle Name, Suffix, Race, Gender, Eye Color, Hair Color, Notes and Work Address.
• The additional section called “Notes” is for the employer’s administrative convenience.
• We added the section for home and work mailing address so HR could automatically send out the requisite notices when they make an adverse determination.
Note: Law enforcement jurisdictions vary vastly on what data they make available when it comes to each of these fields. For instance, almost all indicate gender, some indicate race, some say “unknown” for race but there’s no standardization of fields when it comes to race. Some put in eye color and hair color, but again, there is little standardization.
This really depends on a company’s set-up. It could be an HR administrator, a legal team member, a security head, or a compliance officer. Typically, in most companies, personnel matters come under HR, so we use that term here for simplicity; each company can set it up as it pleases. In some places, we’ve used the word “administrator” interchangeably.
Algorithmically, the system first matches Name and Facial Recognition against law enforcement databases; if a possible match is identified against those two parameters, it presents the possible matches to a human for determination. The system then presents the employer’s administrator with all the data that the jurisdictional county makes available to the system, which differs by county: it could be Age, Race, Gender, Height, and Eye Color in one case; Age, Race, and Gender in another; or Age, Gender, Height, Eye Color, and Hair Color in yet another. It depends on what data is available. Again, do note that the final determination of a match is made by the employer. No Biometrica employee is involved in any way at any stage in this process.
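The two-stage flow described above (an algorithmic name-and-face filter, followed by a human determination) can be sketched roughly as follows. All field names, the threshold, and the stand-in `face_confidence` function are hypothetical illustrations, not Biometrica’s implementation.

```python
def face_confidence(photo_a, photo_b):
    # Stand-in for the FR algorithm's numeric comparison; here we
    # pretend photos are pre-computed template identifiers.
    return 1.0 if photo_a == photo_b else 0.2

def candidate_matches(employee, arrest_records, threshold=0.8):
    """Return records matching on name AND face confidence.

    This only narrows the list; the final determination is
    always made by the employer's human administrator.
    """
    candidates = []
    for rec in arrest_records:
        if rec["name"].lower() != employee["name"].lower():
            continue  # stage 1: name must match
        score = face_confidence(employee["photo"], rec["photo"])
        if score >= threshold:  # stage 2: FR confidence
            candidates.append((rec, score))
    # Highest-confidence candidates first, for human review along
    # with whatever fields the county makes available.
    return sorted(candidates, key=lambda c: c[1], reverse=True)

employee = {"name": "Jane Doe", "photo": "hash_jane"}
records = [
    {"name": "Jane Doe", "photo": "hash_jane", "county_data": {"Age": 34}},
    {"name": "Jane Doe", "photo": "hash_other", "county_data": {"Age": 51}},
    {"name": "John Roe", "photo": "hash_jane", "county_data": {}},
]
matches = candidate_matches(employee, records)
```

In this toy run, only the first record passes both stages; the second shares the name but fails the face threshold, and the third fails on name.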
No, when a law enforcement body updates a record, UMbRA automatically pulls in that record in real time. It’s systemic and automatic — we cannot manually override these updates, unless we dismantle the bot or the entire system for that particular county, so it will ALWAYS reflect the newest available LEA arrest record on an individual. You will be provided with available records for that individual in the system if you’re looking at possible adverse action cases.
This FAQ is intended for informational purposes only. This information has been aggregated and compiled in document form for the understanding and convenience of customers, clients and partners of Biometrica Systems, Inc. All references have been sourced from public record, including but not limited to court, FTC and FCC documents, and all sources have been both linked to and noted in the footnotes on every page. Please do not copy to or share with unauthorized users or systems, in whole or in part, without permission. This is not a substitute and not intended to be a substitute for legal advice or information.
This is merely intended as a handy guide to best practices for casino customers, and to provide background on topics like Casino SARs, PII, KYC, regulatory, compliance and licensing requirements, maintaining transparency, AML and asset protection measures. Parts of this FAQ have been shared over the years with casinos in the form of white papers or training manuals. Do also note that this FAQ will be updated from time to time, both in terms of scope of content and in terms of changes in the content itself, because of compliance and legal cases, events and other incidents. Please write to marketing@biometrica.com if you want more information on sources, or would like to provide any corrections or suggestions.
In its most basic form, Personal/Personally Identifiable Information or PII can be defined as information that provides access, directly or indirectly, to a unique individual’s identity. Generally, when companies are reminded about protecting consumers or clients’ PII so as to not cause identity theft or misuse, or expose an individual to financial harm, embarrassment, discrimination, or physical or mental trauma, the reference is typically to sensitive PII (sometimes called SPI or Sensitive Personal Information). This is usually information on an individual’s first name or first initial and last name, in combination with one or more of other data elements, including but not limited to the following:
• A Social Security number
• A driver’s license or state-issued ID card, or a passport
• A home address, phone number or personal cell phone number
• An individual account number, a credit or debit card number
• A biometric record, like a photographic representation or image
What isn’t classified as personal information is data that is legally or lawfully available to the general public, from federal, state, tribal or local government records, or has been widely distributed by what are reasonably considered legitimate media organizations.
This FAQ also provides information on the concept of personally identifiable information and the transmission of sensitive personal data through different means. It details why communication systems like email are inherently insecure and could leave you and your organization open to civil and criminal penalties if PII is transmitted unencrypted, or is shared, stored or read on insecure devices. It also provides clarity on the statutes governing the viewing and sharing of consumer PII for businesses, including casinos; explains the compliance requirements mandated by law; and provides examples of non-compliance and some of the penalties imposed for non-compliance and security breaches under state and federal laws, including the laws of Nevada.
The 2019 14th annual study on the cost of data breaches, by IBM and the Ponemon Institute, put the average total cost of a single instance of a data breach at almost $4 million ($3.92M) globally, with each stolen record costing, on average, $150 globally. The study estimates that the average size of a data breach, i.e. the number of records that have their data compromised in a single instance of a breach, is 25,575.
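As a rough sanity check on those figures, dividing the average total cost by the average breach size gives a per-record number close to the study’s reported $150 (the study averages these quantities differently, so the match is only approximate):

```python
# Figures from the 2019 IBM/Ponemon study cited above.
avg_total_cost = 3_920_000   # $3.92M average total cost per breach, globally
avg_breach_size = 25_575     # average number of records per breach

cost_per_record = avg_total_cost / avg_breach_size
print(round(cost_per_record))  # ~153, in line with the reported ~$150
```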
In the United States — by far the most expensive country in the world to have a data breach in — those expense numbers are dramatically different. The average total cost of a single instance of a data breach in the U.S. is $8.19 million ($7.91M in 2018), with the cost per lost record standing at $242. The time to identify and contain a data breach in the U.S. is 245 days, as opposed to 279 globally. This does not include the long tail costs of a data breach, which, as the study says, and companies have experienced, can last for years, in monetary, legal and reputational damages.
According to the study, about a third of data breach costs occurred more than one year after a data breach incident. While an average of 67% of breach costs came in the first year, 22% accrued in the second post-breach year, and 11% in the third. But in highly regulated environments like the finance industry — important to note as most casinos are NBFIs or Nonbank Financial Institutions (see more on this below) — the long-tail costs of a breach were higher in years two and three. Organizations in a high data protection regulatory environment saw 53% of breach costs in the first year, 32% in year two, and 16% in the third year.
The 2019 annual study was conducted in 16 countries or regional samples: the United States, India, the United Kingdom, Germany, Brazil, Japan, France, the Middle East, Canada, Italy, South Korea, Australia, Turkey, ASEAN, South Africa, and, for the first time, Scandinavia. There are 17 industries and more than 500 companies that had experienced a data breach between July 2018 and April 2019 represented in the findings.
Breaches caused by malicious attacks grew from 42% of all breaches in 2014 to 51% in 2019. Globally, it took substantially longer to identify and contain a breach in the case of a malicious attack: a combined 314 days. This finding helps explain why breaches caused by a malicious attack were 27% more costly than breaches caused by human error ($4.45 million vs. $3.5 million) and 37% more costly than a breach caused by system glitches ($4.45 million vs $3.24 million per incident).
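The percentage differences quoted above follow directly from the per-incident figures:

```python
# Per-incident breach costs from the study, in dollars.
malicious_attack = 4_450_000
human_error = 3_500_000
system_glitch = 3_240_000

# How much more a malicious-attack breach costs, relative to each cause.
pct_vs_error = (malicious_attack - human_error) / human_error * 100
pct_vs_glitch = (malicious_attack - system_glitch) / system_glitch * 100
print(round(pct_vs_error), round(pct_vs_glitch))  # 27 37
```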
Typically, a casino that is sharing PII internally to its own email servers on its premises (or beyond) does not generally encrypt all its communications. However, as evidenced even in the 2016 U.S. presidential election campaign, data is not secure simply because your server is in a locked room or because you use an “active directory.”
Most third-party email services, such as Google or Office365, specifically advise against sharing PII via email. In Microsoft’s case, their Compliance Center and Data Loss Prevention tools even allow a systems administrator to put in place access controls that block the sending of PII data through email, because it is considered insecure. It is, however, a complex, cumbersome process, and not easily implemented, not just because of the inherent vulnerabilities of email, but also because it could block other information flow. We’ve explained the vulnerabilities of email systems in some detail later in this document.
Do note that sharing PII on or to a public website also contravenes several jurisdictional statutes, unless that information is already lawfully publicly available. For instance, just because an advantage player is known to a casino organization, unless that player has been convicted of a crime and the record of that conviction is publicly available, you cannot post information on that player to a public website, or share PII on that player (like a name and driver’s license or a photograph) via email (unless to or from law enforcement), unless your email system is encrypted end-to-end. We’ve explained end-to-end encryption later in this FAQ, but you also have to have contractual protections in place, specifically covering the transmission of PII, with all the third parties involved in that communication (like internet or telecom service providers).
Many public data sharing sites are not secure and could expose a casino to several levels of both civil and criminal liability. Do note that if you share data to a third-party public website that does not use even a minimum level of security, such as Secure Sockets Layer (SSL) certificates, it exposes that shared data almost to the level of knowingly providing PII for use in identity theft.
PII stands for Personal/Personally Identifiable Information or any sensitive information that provides the ability to distinguish or identify an individual as a unique individual. It is typically the first and last name, plus a combination of any of several other personal identifiers. The U.S. Department of Labor defines PII as “any representation of information that permits the identity of an individual to whom the information applies to be reasonably inferred by either direct or indirect means.”
Under a section called “Guidance on the Protection of Personal Identifiable Information,” the Department of Labor adds that PII is information:
• That directly identifies an individual (e.g., name, address, social security number or other identifying number or code, telephone number, email address, etc.); or,
• By which an agency intends to identify specific individuals in conjunction with other data elements, i.e., indirect identification. (These data elements may include a combination of gender, race, birth date, geographic indicator, and other descriptors).
In addition, the Department of Labor states that information permitting the physical or online contacting of a specific individual is also considered personally identifiable information, whether maintained in paper, electronic, or other form.
PII is, perhaps, best described by the United States Navy as this: “information about an individual that identifies, links, relates, or is unique to, or describes him or her, e.g., a social security number; age; military rank; civilian grade; marital status; race; salary; home phone numbers; other demographic, biometric, personnel, medical, and financial information, etc.”
The National Institute of Standards and Technology (NIST), in its comprehensive “Guide To Protecting the Confidentiality of Personally Identifiable Information,” says this: PII is any information about an individual maintained by an agency, including:
• Any information that can be used to distinguish or trace an individual’s identity, such as name, social security number, date and place of birth, mother’s maiden name, or biometric records.
• Any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.
The Federal Communications Commission (FCC), which regulates interstate communications across all mediums, has further defined PII as “any information that is linked or linkable to an individual.” According to the FCC, such “linked or linkable” information is PII “if it can be used on its own, in context, or in combination to identify an individual or to logically associate with other information about a specific individual.”
PII, as mentioned above, can be “sensitive PII” — information that, if stolen or compromised, could result in damage or identity theft — or “non-sensitive PII” — information that, if released to the public, would not compromise an individual. Non-sensitive PII is considered releasable to the public. The USN also provides some examples of “sensitive” and “non-sensitive” PII. Please note that this is not a complete list, but it does provide a representation of what elements constitute each.
Sensitive PII includes:
• Name and other names used in conjunction with a combination of other elements
• Biometrics of any kind (photographic images, especially of the face or other identifying characteristics, fingerprints, handwriting, or other data, like a retina scan, voice signature, or facial geometry)
• Social Security number, full and truncated
• Driver’s license and other government identification numbers
• Citizenship, legal status, gender, race/ethnicity
• Birth date, place of birth
• Home and personal cell telephone numbers
• Personal email address, mailing and home address
• Religious preference
• Security clearance level
• Mother’s middle and maiden names
• Spouse’s information, marital status, child information, emergency contact information
• Financial information, medical information, disability information
• Law enforcement information, employment information, educational information; military records.
Non-sensitive PII includes:
• Office location
• Business telephone number
• Business email address
• Other information that is made available to the public through federal, state, or local government records or widely distributed media
As mentioned above, an individual’s name by itself isn’t a problem. But the insecure transmission or sharing of a first name or first initial and last name, plus one or more of at least the following data elements, means you would have a problem with the law in most states: (i) Social Security number, (ii) driver’s license number or state-issued ID card number, (iii) account number, credit card number or debit card number, (iv) biometrics (photos, fingerprints, retinal scans, etc.).
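The rule described above (a name alone is fine; a name combined with one or more sensitive elements is not) can be expressed as a simple check. The field names here are hypothetical and purely illustrative:

```python
# Hypothetical record fields, for illustration only.
SENSITIVE_ELEMENTS = (
    "ssn",              # Social Security number
    "drivers_license",  # driver's license / state-issued ID number
    "account_number",   # account, credit card, or debit card number
    "biometric",        # photo, fingerprint, retinal scan, etc.
)

def is_sensitive_pii(record):
    """True if a name is paired with at least one sensitive element.

    Mirrors the rule above: a name alone is not the issue; a name
    plus any one of the listed elements is.
    """
    has_name = bool(record.get("first_name") or record.get("first_initial")) \
        and bool(record.get("last_name"))
    has_element = any(record.get(field) for field in SENSITIVE_ELEMENTS)
    return has_name and has_element

name_only = is_sensitive_pii({"first_name": "Jane", "last_name": "Doe"})
name_plus_ssn = is_sensitive_pii(
    {"first_name": "Jane", "last_name": "Doe", "ssn": "000-00-0000"}
)
print(name_only, name_plus_ssn)  # False True
```

Note that real statutes differ by state on which elements count, as the following sections explain, so any such check would need jurisdiction-specific element lists.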
According to the National Conference of State Legislatures, as of September 2018, all 50 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have legislation that requires private or governmental entities to notify individuals of security breaches of information involving personally identifiable information. This was a change from two years previously, when Alabama, New Mexico and South Dakota had not yet enacted specific data breach laws.
Do note, irrespective of a state’s PII guidelines, if you share or transmit data across state lines, you also have to comply with regulations in the state you are sending data to or receiving data from, and with any federal or federal agency regulations governing PII.
According to a Congressional Research Service report (the CRS prepares data for the United States Congress and its committees), a data security breach is defined as having occurred when “there is a loss or theft of, or other unauthorized access to, sensitive personally identifiable information that could result in the potential compromise of the confidentiality or integrity of data.”
The NCSL states: security breach laws typically have provisions regarding who must comply with the law (e.g., businesses, data/information brokers, government entities, etc.); definitions of “personal information” (e.g., name combined with SSN, drivers’ license or state ID, account numbers, etc.); what constitutes a breach (e.g., unauthorized acquisition of data); requirements for notice (e.g., timing or method of notice, who must be notified); and exemptions (e.g., for encrypted information).
You can find it at the NCSL link provided above and in the footnotes. You can also find comprehensive information compiled by law firm BakerHostetler’s data privacy and security group here. This details all state laws in which the definition of “personal information” is broader than the generally accepted one we’ve defined above. This information is very handy if you send or share data across state lines, have offices in multiple locations, or operate under a corporate umbrella that needs to comply with multi-jurisdictional regulations.
In California, for instance, bakerlaw.com states that PII — in addition to the generally accepted definitions mentioned previously in this document — also includes a username or email address, in combination with a password or security question and answer that would permit access to an online account, and information or data that is collected through the use or operation of an automated license plate recognition system, in addition to medical and health insurance information on an individual.
In addition to requiring organizations that have undergone a security breach to inform customers or clients that their personal information may be compromised or open to unauthorized access, a 2015 amendment to California law SB1386 also mandates that breached organizations provide affected individuals with identity theft protection and alleviation services.
The basic definition of PII is pretty much foundational, but there are several state specifics that have to be kept in mind when you’re transmitting information across state lines, so as to not be in violation of multi-jurisdictional laws.
Coming back to the states, both the NCSL and Bakerlaw.com explain how state laws differ greatly, in some cases, in their interpretation of PII, in terms of what they include over and above the regular interpretation. If you read through the links, you’ll see that Iowa’s definition of PII includes unique electronic identifiers or routing codes in combination with any required security or access code, and all unique biometric data, including digital or physical representation. But North Carolina, on the other hand, doesn’t recognize email names or addresses, or electronic identifiers as sensitive personal information.
Maryland includes Taxpayer Identification Numbers and biometric data, Massachusetts includes non-electronic personal information, New Hampshire has a whole section on student and teacher-related personal information, while Ohio specifically details what publicly available information can be — the rest, it is implied, is personal.
Wisconsin is interesting, as it specifically mentions DNA profile, tribal identification card, and birth and marriage certificates (none of which is typical) as personal information, while Puerto Rico includes work-related evaluations. As for Nevada, we have a full section on Nevada’s laws relating to protecting PII later in this document.
Just a quick note on transmissions that cross state lines — we’ll also get into this in more detail in the next section on email transmissions — do remember what was mentioned in the summary at the start of this document, transmissions include making sure your provider partners also maintain the same standards of security, and all reasonable measures, for PII that you do, and that you’ve signed a contract with them that clearly states as much, so you can hold them to those standards.
There isn’t one comprehensive federal law on data security breaches, but different laws and agencies regulate compliance needs, depending on the kind of business you’re in.
In October 2014, the Federal Communications Commission (FCC), which governs and regulates all interstate and international communications by radio, television, wire, satellite and cable in all 50 states, the District of Columbia and U.S. territories, levied a $10 million fine against TerraCom Inc. and YourTel America, Inc. for collecting personally identifiable information on potential customers, including names and driver’s license numbers, and storing the information on publicly accessible internet servers, without password protection or encryption. A later settlement for $3.5 million was reached in July 2015.
According to a report by law firm Vedder Price, the FCC found that the carriers had “breached the personal data of up to 305,000 consumers through their lax data security practices and exposed those consumers to identity theft and fraud.” The report added: the FCC found that the carriers failed to protect the confidentiality of the customers’ sensitive data and failed to employ reasonable security measures to safeguard the information.
An FCC release stated: “A thorough Enforcement Bureau investigation found that the companies’ vendor stored consumers’ personal information on unprotected servers that were accessible over the Internet. The companies’ failure to provide reasonable protection for their customers’ personal information — including names, addresses, Social Security numbers, driver’s licenses, and other sensitive information — resulted in a data breach that permitted anyone with a search engine to gain unauthorized access to the information.”
The Commission clearly stated that the failure to reasonably secure customers’ proprietary information was (in this case) a violation of the Communications Act. One point was very pertinent: it found that the companies’ security practices lacked “even the most basic and readily available technologies and security features,” thus creating an “unreasonable risk of unauthorized access.”
The Federal Trade Commission even directs you to this letter, which explains its outlook on the enforcement of the recent E.U.-U.S. Privacy Shield Framework this way: “Enforcement is the lynchpin of the FTC’s approach to privacy protection. To date, the FTC has brought over 500 cases protecting the privacy and security of consumer information. This body of cases covers both offline and online information and includes enforcement actions against companies large and small, alleging that they failed to properly dispose of sensitive consumer data, failed to secure consumers’ personal information, deceptively tracked consumers online, spammed consumers, installed spyware or other malware on consumers’ computers, violated Do Not Call and other telemarketing rules, and improperly collected and shared consumer information on mobile devices. The FTC’s enforcement actions — in both the physical and digital worlds — send an important message to companies about the need to protect consumer privacy.”
You could, as long as that information isn’t someone else’s sensitive and/or personally identifiable information, and as long as you and your organization are prepared to handle the fallout in case you’re caught sending PII via email. If you are sending PII via email, you’re likely breaking state and/or federal law.
Actually, everyone doesn’t do it anymore. In any case, that isn’t an excuse if you do it, nor is ignorance of the law. Email is inherently insecure, and even sophisticated IT systems are vulnerable unless they’re encrypted end-to-end, which isn’t generally the case, as that could make regular business operations difficult or impractical. However, when you’re transmitting PII, you are bound by law to take all available reasonable measures to protect that data. That includes, among other things, protecting names, addresses, driver’s license information, license plate information, customer financial information, customer account information like home or cell phone numbers and email addresses, and any biometric information like photographs.
You want the short answer? No, they’re probably more prevalent than you and we know. Many hacks don’t get reported, which, again, is in violation of the law if client or customer PII has been exposed, or potentially compromised. You have to let state and/or federal authorities know, and let affected customers know too, and then put in place a post data breach plan to monitor and manage the situation.
Here is some idea of how often systems and networks can be compromised (and do note that these examples are of known, successful indictments or convictions, including criminal convictions, in just the past year), and how this affects consumers, which is why businesses need all reasonable safeguards:
In December 2016, the Department of Justice announced a 21-count indictment charging three Romanian nationals for operating a cyber fraud conspiracy. They infected between 60,000 and 160,000 computers, sent out 11 million malicious emails and stole at least $4 million. According to the DOJ indictment, the trio operated a criminal conspiracy from Bucharest, Romania, from at least as far back as 2007. They apparently developed proprietary malware that was used to infect and control more than 60,000 computers, primarily in the U.S. They allegedly used the computers to gain access to PII and disable malware protection, among other things.
In November 2016, a Colorado Springs man was sentenced to 29 months in federal prison, followed by three years of a supervised release, for a number of offenses, including conspiracy to commit computer fraud and abuse, access device fraud and identification document fraud. Among other things, he published private images that he illegally obtained onto a website, found a way to connect email addresses to stolen private images, and sold those email addresses, leaving those affected open to online extortion and worse.
In October 2016, a man from Boca Raton, FL, pleaded guilty in a computer hacking and identity theft scheme that generated $1.3 million in illegal profits by hijacking customer email accounts and sending them unsolicited “spam” emails.
According to the DOJ, he had someone else write computer programs that would conceal the true origin of the email and bypass spam filters, and then used these programs to transmit spam. In addition, he used proxy servers and botnets to remain anonymous, hide the true origin of the spam, and evade anti-spam filters and other spam blocking techniques. He also admitted that he had hacked into individual email accounts and utilized corporate mail servers to further his spam campaigns.
In September 2016, two North Carolina men were arrested for their alleged roles in the hacking of several senior U.S. government officials and U.S. government computer systems. According to a DOJ release, the men reportedly conspired with a hacking group that called itself “Crackas With Attitude.” The release states that they used “social engineering hacking techniques, including victim impersonation, to gain unlawful access to personal online accounts.”
In August 2016, a Russian man was found guilty of 38 counts related to hacking into retail point-of-sale systems and installing malicious software to steal credit card numbers from various businesses. It was reported that at least 3,700 financial institutions lost more than $169 million because of the scheme, and several small businesses suffered tremendously, with one, The Broadway Grill in Seattle, WA, forced into bankruptcy.
In July 2016, an Oregon man was sentenced to six months in federal prison for a hacking scheme that gave him illegal access to 363 Apple and Google email accounts. He collected about 448 usernames and passwords for 363 email accounts.
In June 2016, two men were extradited from Israel to New York for hacking crimes against U.S. financial institutions, brokerage firms, and financial news publishers, including what was called “the largest theft of customer data from a U.S. financial institution in history.”
Note: In an October 2014 SEC filing, JPMorgan Chase stated that user contact information — name, address, phone number and email address — for 76 million households and 7 million small businesses had been compromised in a cyber-attack.
In May 2016, a Haitian based out of Florida was sentenced to 27 months in prison for money laundering via an email takeover scam.
In April 2016, an Estonian man was sentenced in Manhattan federal court to more than seven years in prison, for perpetrating an Internet fraud scheme by infecting more than four million computers in more than 100 countries with malware, and digitally hijacking victims’ computers.
In March 2016, a man was arrested in Spring, TX, after it was determined that he allegedly hacked into a competitor’s database and stole information from over 700,000 customer accounts, and later supposedly tried to use the proprietary information to defraud that same company.
In February 2016, an Indianapolis resident was sentenced in federal court to 27 months in prison, followed by one year supervised release, for violating the CAN-SPAM ACT. A member of a (later dismantled) hacking forum known as Darkode, he used a protected computer to relay or retransmit multiple commercial electronic mail messages with the intent to deceive or mislead recipients.
Note: If you’re using email for any business operations, you should probably know about the CAN-SPAM Act. It is governed by the FTC, and is a law that sets the rules for commercial email, and the requirements for commercial messages. It gives recipients the right to have you stop emailing them (which is why so many businesses have “unsubscribe” options at the end of their emails), and details the penalties for violations. It covers all commercial messages (and not just bulk email) that the law defines as “any electronic mail message the primary purpose of which is the commercial advertisement or promotion of a commercial product or service,” including email that promotes content on commercial websites. Importantly, the law makes no exception for business-to-business email.
In January 2016, a Chinese citizen, a U.S. permanent resident, was sentenced to 31 months in prison for stealing a large number of electronic documents from his employer, a financial services company. The documents included some that would have provided information on how to access the company’s computer network.
Note: Each of these instances and the terms of the offenses and penalties contained therein are sourced from releases by the U.S. Department of Justice’s Office of Public Affairs. While each example above has been linked to individually and in our footnotes, you can find several other examples of computer hacking and why electronic mail could be insecure here also.
Well, it’s convenient. And easy to use, and almost instantaneous, and it is a written record of all communications, so there’s an audit trail. It makes sense to use email communication for general use. However, when it comes to transmitting data like PII, or personal health information, or personal financial information for a third party like your clients or customers, information that, if exposed, could affect lives, you’ve got to take all reasonable measures that are available to protect that data.
There are several reasons actually.
• Emails generally do not have end-to-end encryption (E2EE), a form of communication in which cryptographic keys held only by the communicating users prevent third parties (client-server systems, Internet Service Providers, telecom companies, and other service providers) from accessing or deciphering what is being communicated.
• Email messages are split into something called packets, which pass through various points across different systems before reaching the end user. What this means is that an email message has several points of vulnerability and potential exposure, even outside of your physical device: the router, the server, your telecom provider, the FTP server, the ISP, etc.
• Additionally, the sections at the top of an email, typically the “to,” “from,” “date,” etc. fields, are transmitted in plain text and are potentially accessible to any third-party viewer or hacker that intercepts the email.
• It is also likely that your ISP stores backups of emails, even after you’ve deleted an email from your inbox.
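The header exposure described above is easy to see for yourself. Below is a minimal Python sketch using the standard library’s `email` module (the addresses are hypothetical, and the XOR routine is a toy stand-in for real encryption, not something to use in production). Even when the message body is scrambled, the “To,” “From,” and “Subject” fields are serialized as plain, readable text:

```python
# Illustrative sketch: even if an email body is encrypted, the header
# fields ("To", "From", "Subject", "Date") travel as plain text.
from email.message import EmailMessage

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # NOT real cryptography -- a repeating-key XOR, just enough to make
    # the body unreadable for this demonstration.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

msg = EmailMessage()
msg["From"] = "surveillance@example-casino.com"   # hypothetical addresses
msg["To"] = "compliance@example-casino.com"
msg["Subject"] = "Patron PII - handle with care"
msg.set_content(toy_encrypt(b"Name: J. Doe, Acct: 12345", b"secret").hex())

raw = msg.as_string()
# The body is gibberish hex, but anyone intercepting the message can
# still read who is talking to whom, and about what:
print("Subject: Patron PII - handle with care" in raw)  # True
print("J. Doe" in raw)                                  # False
```

Anyone intercepting this message at the router, server, or ISP could not read the body, but could still see exactly who is communicating with whom and what the subject is. That metadata exposure is one of the fundamental weaknesses of email as a transport for sensitive information.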
You could use passwords, but:
a) They can be cracked.
b) You can’t send the password in that same email.
c) It’s probably going to be very inconvenient to put every single record of a consumer’s PII in a password-protected file and send the password securely and separately in some way.
Just to reiterate what we’ve said previously, email isn’t going to be completely private. There was a great piece in the MIT Technology Review by writer David Talbot a few years ago on this particular issue, in which he detailed how “the closure of two ultra-private e-mail services shows just how weak the system really is.” The piece began by talking about how Lavabit, the email service used by NSA whistleblower Edward Snowden, suspended service, ostensibly because it had received a government demand for information, and pointed out what it called “two fundamental weaknesses in email.”
Note: You could read more on Lavabit’s reasons for suspending service in a January 20, 2017, letter from its owner here, in which he also announced its re-launch.
This is how Talbot, the MIT Review writer, described those weaknesses: “First, even if an e-mail service encrypts messages for secrecy, as Lavabit and Silent Circle did, the e-mail headers and routing protocols reveal who the senders and receivers are, and that information can be valuable in its own right. And second, the passcodes used as keys to decrypt messages can be requested by the government (if held by the e-mail company) or simply stolen by sophisticated malware.”
More like the safest option. Remember that E2EE via email requires anyone receiving your email to have a similar system, or the ability to decipher or decrypt your encrypted email. And if you have to secure an email through encryption, it also means you cannot pass along a decryption key in that same email.
You could also use a system like Biometrica’s SSIN, the Security and Surveillance Information Network.
The SSIN is a fully encrypted, peer-to-peer private information system, a network which allows for the notification of people, events, alerts or warnings, and gives you the ability to share information on various undesirables with other customers with access to the SSIN, from private clients to regulatory authorities to law enforcement agencies, on a one-to-one, one-to-some or a one-to-all basis.
Just FYI, sensitive customer/patron PII should also not be shared through Dropbox or Google Docs — you’re still almost certainly transmitting the attachment via an unsecured email system, which is vulnerable to security breaches.
We touched upon this briefly in the section on state laws, but to elaborate on that, a data breach is when any sensitive, protected or confidential information or intellectual property has been stolen, compromised, misplaced, misused, left open to potential unauthorized access and misuse, or viewed by a person not authorized to view that data or information.
Typically, sensitive information refers to employee or customer/client/consumer information that is personally identifiable information, including but not limited to names and a combination of a number of elements like biometric information (photographs etc.) and financial data, or protected health information.
Data breaches can be caused by a number of bad players, including foreign actors. Remember when the former Director of National Intelligence, James Clapper, told the Senate Armed Services Committee in February 2015 that a devastating February 10, 2014, cyber-attack against the Las Vegas Sands Corp. – the world’s largest gambling company – that shut down computers, emails and phones, and wiped out hard drives, was perpetrated by Iran? Clapper also told senators that the hacking of Sony Corp. by a group calling itself the Guardians of Peace in November 2014 was actually perpetrated by North Korean state actors.
However, data breaches can also be caused by much more mundane things. Insecure systems, a lack of precautionary measures, an upset employee, an email sent to an incorrect address, or just pure negligence or carelessness.
In fact, here’s a list of just “DISC breaches” by businesses (BSFs — Financial and Insurance Services and BSOs — Businesses, Other) in 2016 alone, from the website Privacy Rights Clearinghouse. They track different kinds of data security breaches for various organizations that result in personal information being exposed or compromised. A DISC breach, according to them, refers to “unintended disclosure (not involving hacking, intentional breach or physical loss — for example: sensitive information posted publicly, mishandled or sent to the wrong party via publishing online, sending in an email, sending in a mailing or sending via fax).”
You could also look up all kinds of data breaches by different kinds of businesses and government agencies since 2005 here.
In one case on the list, involving a Toyota employee (you could scroll down the examples to see it too), the breach occurred when a Toyota Financial Services associate “mistakenly emailed” a spreadsheet containing customer information to her personal email account. Even though the email was sent using an encrypted transmission method and information was not misused, according to available accounts, it constituted a data breach. Why?
Because the information transmitted included first and last names, telephone numbers and account numbers. Remember, PII is a combination of first and last names or a first initial and a last name in combination with any of several other potential identifiers that could lead to a person being identified.
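As a rough illustration of that working definition, here is a small Python sketch (the field names are our own and purely illustrative; no statute defines PII this mechanically) that flags a record as PII only when a name, or a first initial plus last name, is combined with another identifier:

```python
# Simplified sketch of the working definition above: a record becomes PII
# when a name (or first initial + last name) is combined with another
# identifier. Field names here are illustrative, not from any statute.
SENSITIVE_FIELDS = {"account_number", "phone", "ssn", "photo", "health_info"}

def is_pii(record: dict) -> bool:
    has_name = bool(record.get("first_name") or record.get("first_initial")) \
        and bool(record.get("last_name"))
    has_identifier = any(record.get(f) for f in SENSITIVE_FIELDS)
    return has_name and has_identifier

# A name plus a phone number (as in the Toyota spreadsheet) qualifies;
# a bare name on its own does not:
print(is_pii({"first_name": "J", "last_name": "Doe", "phone": "555-0100"}))  # True
print(is_pii({"first_name": "J", "last_name": "Doe"}))                       # False
```

Under this toy check, the Toyota spreadsheet’s combination of names, telephone numbers and account numbers would be flagged as PII, while a standalone name would not.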
Note: All casinos are required by law to have a customer show some kind of identification, and most organizations collect and record that information. Casinos are also governed by KYC norms — and need to follow the BSA and AML (Title 31, Title 26) requirements. You can find out more by downloading our comprehensive AML 101 document here.
The TFS executive in the incident above seems to have made two mistakes: first, she sent that data to her personal email account; that should never be done. Second, she sent PII data via email, which, as explained above, is vulnerable at several points along the way, even if it is encrypted, unless there is end-to-end encryption of the email, which is usually rare if it’s been sent to a personal account.
Here is an example of a sample letter (from the Office of the California AG) that TFS subsequently sent out to affected customers, offering them “one year of free credit monitoring through ConsumerInfo.com, Inc., an Experian® company.”
In case you’re still wondering about the processes involved in email that has E2EE, have you ever opened an “Explanation of Benefits” email from your health insurance provider in the recent past?
Well, if you do open one, and we’d suggest you do, you’ll see that you’re not going to have a simple email explaining the issue. You’re going to get an email directing you to log in to your secure online account — that has probably been separately password protected by you at some stage and is encrypted — to access even a single line of relevant information. Ditto for your banking information, or really, for any email from other responsible parties transmitting PII or PHI.
In “Protecting Personal Information: A Guide for Business,” the FTC (which, among other things, has to approve of most casino deals or mergers — yes, we’re talking about that Commission), which is also the agency tasked with protecting consumer privacy and supervising businesses in this regard, states that a sound data security plan is built on five principles. These are:
1. Take Stock. Know what personal information you have in your files and on your computers.
2. Scale Down. Keep only what you need for your business.
3. Lock It. Protect the information that you keep.
4. Pitch It. Properly dispose of what you no longer need.
5. Plan Ahead. Create a plan to respond to security incidents.
You can view the guideline details in each link. They also specifically suggest that you pay particular attention to how you keep and transmit personally identifying information, and make certain you aren’t in violation of different jurisdictional statutes governing reasonable measures to provide security for sensitive PII, like the Gramm-Leach-Bliley Act, the Fair Credit Reporting Act (FCRA), and provisions of the Federal Trade Commission Act.
Yes. The FTC has settled more than 50 cases of data breaches by businesses, including fining LifeLock $100 million for violations, but here’s an interesting case.
In December 2013, it announced that a Chicago-based company called Accretive Health, Inc. had agreed to settle charges that “inadequate data security measures unfairly exposed sensitive consumer information to the risk of theft or misuse.” Do note that the FTC release didn’t say that the information had been misused; it is enough if you even risk exposing sensitive PII by not taking all reasonable measures to protect it.
The FTC complaint against Accretive Health included the following background:
• A laptop containing PII on 23,000 customers was stolen from an employee’s car. The FTC believed the company created unnecessary risk by allowing sensitive personal information to be available on a laptop.
• The FTC found that the company didn’t employ reasonable measures to ensure employees removed any personal consumer information from their systems when they no longer needed that information.
• They found the company didn’t restrict employees’ access to customer PII to a need-to-know basis.
• They found it unacceptable that the real customer data that was used for consumer training sessions was allowed to stay on employee systems after the training.
Note: In another case, they charged a company, foru™ International Corporation, with giving access to sensitive consumer data to service providers that were developing applications for the company. In both these cases, the FTC stated that the companies could have used fictional data, for training and for application development.
In another very interesting case, the Superior Mortgage Corporation (like casinos, mortgage corporations are considered non-bank financial institutions) was charged with violating the FTC’s Standards For Safeguarding Consumer Information Rule (the Safeguards Rule), issued pursuant to the Gramm-Leach-Bliley Act (GLBA or GLB Act), for not effectively providing complete encryption of PII through a lifecycle.
The FTC alleged that the company used SSL encryption to secure the transmission of sensitive personal information between the customer’s web browser and the business’s website server. But once the information reached the server, the company’s service provider decrypted it and emailed it in clear, readable text to the company’s headquarters and branch offices.
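To make the Superior Mortgage failure mode concrete, here is a toy Python model (the keystream routine is a teaching toy built on SHA-256, not real cryptography, and the key names are invented) of transport-layer encryption: data is protected between hops, but any hop that decrypts it can forward it onward in clear text:

```python
# Toy model of the Superior Mortgage failure mode: transport encryption
# (like SSL/TLS) protects data only between hops. Once a hop decrypts it,
# anything forwarded onward unencrypted is exposed. The keystream cipher
# below is a teaching toy, not real cryptography.
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with a SHA-256-derived keystream; applying it twice decrypts.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

pii = b"Jane Doe, loan #4411"  # hypothetical customer record
# Hop 1: browser -> web server, protected by a transport-layer key.
in_transit = toy_cipher(b"tls-session-key", pii)
# The server decrypts it, as the service provider did...
at_server = toy_cipher(b"tls-session-key", in_transit)
# ...and then emails it onward with no encryption at all:
forwarded = at_server
print(forwarded == pii)  # True: branch offices (and any interceptor) see clear text
```

End-to-end encryption avoids this by keeping the decryption key solely with the final recipient, so no intermediate server ever holds readable PII.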
Note: You can scroll through this link for more interesting examples.
According to the FTC, the Safeguards Rule implements Section 501(b) of the GLB Act and requires financial institutions “to protect the security, confidentiality, and integrity of customer information by developing a comprehensive written information security program that contains reasonable administrative, technical, and physical safeguards.” Financial institutions can be bank and nonbank — nonbank refers to institutions other than banks that provide financial or credit services, like casinos, hedge funds, insurance brokers, credit card operators, dealers in precious metals, payday loan services, etc.
Note: In addition to their own compliance measures, the regulation requires financial institutions under FTC jurisdiction to also have measures in place to ensure that their affiliates and service providers safeguard customer information in their care. You will need such measures if you’re transmitting PII, within or outside your business, via email systems that lack end-to-end encryption.
The Safeguards Rule includes the following measures, as defined in detail here:
• Designating one or more employees to coordinate the information security program.
• Identifying reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of customer information, and assessing the sufficiency of any safeguards in place to control those risks.
• Designing and implementing information safeguards to control the risks identified through risk assessment, and regularly testing or otherwise monitoring the effectiveness of the safeguards’ key controls, systems, and procedures.
• Overseeing service providers, and requiring them by contract to protect the security and confidentiality of customer information.
• Evaluating and adjusting the information security program in light of the results of testing and monitoring, changes to the business operation, and other relevant circumstances.
First, what is the Gramm-Leach-Bliley Act? The GLBA, also known as the Financial Modernization Act of 1999, is a U.S. federal law that was enacted to control how financial institutions dealt with sensitive PII of individuals and make information processes more efficient. By definition, it applies to any financial institution (bank and nonbank lenders) that provides financial, credit or insurance services to customers and collects and shares sensitive PII, especially information shared or exposed outside of secure company systems.
The UNLV Gaming Law Journal has argued that while some opinions hold that casino corporations may qualify under the GLBA as a “financial institution,” because the FTC and other federal governmental entities have not specifically applied the definition of “financial institutions as codified in the GLBA” to include casino corporations, federal data privacy laws do not “directly apply to the casino industry regarding the maintenance and security of patron database systems.” However, by any yardstick, state laws apply, as do state and federal laws governing all business transmission of sensitive PII.
Those of you that follow Casino City’s GamingRegulation.com for guidance on global gaming regulation and compliance, should read this answer to a specific question there on how the GLBA impacts casinos in customer due diligence/KYC, and sharing information across properties under a corporate umbrella.
Casinos (not just casinos that provide credit in any form) are definitely considered nonbank financial institutions and are required to comply with Know Your Customer (KYC) norms and Anti-Money Laundering (AML) regulations. Casinos also provide PII data to the FinCEN (the Financial Crimes Enforcement Network under the U.S. Department of the Treasury) as part of Bank Secrecy Act filings.
In a report called “Privacy Impact Assessment (PIA): Data Collection, Storage, and Dissemination,” which mentions casinos, by the way, FinCEN states: “all authorized FinCEN personnel, as well as authorized personnel from designated federal, state, and local law enforcement, intelligence, and regulatory agencies that have signed a Memorandum of Understanding (MOU) with FinCEN to allow access to the BSA information will be responsible for protecting the data. The information owner and system manager (identified in the Privacy Act System Notice) share overall responsibility for protecting the privacy rights of individuals by developing guidelines and standards which must be followed. The external users will also be responsible for protecting the information that they submit via BSA E-Filing.”
If you’re in doubt, just remember that it’s better to be safe than sorry when dealing with PII, and take all available reasonable measures to protect sensitive information like names and images on whitelists or blacklists or other customer information.
Regulation 5A.070 is part of a set of regulations that govern the operation of interactive gaming under the Gaming Control Act, and applies to any operators of gaming licensed by the Nevada Gaming Commission (NGC).
Note: If your organization has a Nevada presence, you and your organization will have to adhere to these regulations, in addition to the regulations of the state, tribal or other jurisdictions you are located in, and all federal jurisdictions. Most jurisdictions have gaming control laws similar to Nevada, in whole or in part, and as mentioned earlier in this document, all but three U.S. states have laws in place to protect PII data, and both civil and criminal penalties for data breaches or lapses in security resulting in unauthorized access to data, or leaving that PII data in a place that risks potential exposure to unauthorized access.
The regulation states that every gaming operator (e.g., an organization operating a casino) has to establish, maintain, implement and comply with standards that the chairman shall adopt and publish pursuant to the provisions of Regulation 6.090.
The State of Nevada Gaming Control Board (NGCB) adopted what they call MICS, or Minimum Internal Control Standards, in accordance with NGC Regulation 6.090. This applies to Group I licensees (anyone operating not less than 15 slot machines or an amount defined under Regulation 6.010), requiring them to establish administrative and auditing procedures for taxation and fee purposes. This was adopted as a set of minimum requirements for internal controls over gaming operations, with the NGC clearly stating that it is a licensee’s responsibility to read and review the MICS, and put in place detailed operating procedures that comply with those standards.
The last of 10 regulations under 5A.070 is this: Protecting an authorized player’s personally identifiable information.
The protections it requires include (but are not limited to) the following:
• The designation and identification of one or more senior company officials having primary responsibility for the design, implementation and ongoing evaluation of such procedures and controls.
• The procedures to be used to determine the nature and scope of all personally identifiable information collected, the locations in which such information is stored, and the devices or media on which such information may be recorded for purposes of storage or transfer.
• The policies to be utilized to protect personally identifiable information from unauthorized access by employees, business partners, and persons unaffiliated with the company.
• Procedures to be used in the event the operator determines that a breach of data security has occurred, including required notification to the Nevada Gaming Control Board’s enforcement division.
• Provision for compliance with all local, state and federal laws concerning privacy and security of personally identifiable information.
Regulation 5A.070 defines PII quite clearly: personally identifiable information means any information about an individual maintained by a gaming operator. This includes the following:
• Any information that can be used to distinguish or trace an individual’s identity, such as a name, social security number, date and place of birth, mother’s maiden name, or biometric records (photographs, iris scans, fingerprints etc).
• Any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.
So what you’re saying is that I, as a casino with a Nevada operation or presence, have to follow these regulations plus state, federal or other jurisdictional law regarding PII?
You, personally, could be fired, depending on what data has been exposed. Your organization could lose its reputation and business. Or worse, its license, and face a fine in any case. And depending on what state and federal laws you’ve broken and the scope of the breach, and intent, someone could perhaps go to jail.
The regulation itself states this under 5A.200, or “Grounds for Disciplinary Action.” “Failure to comply with the provisions of this regulation shall be an unsuitable method of operation and grounds for disciplinary action. The commission may limit, condition, suspend, revoke or fine any license, registration, finding of suitability or approval given or granted under this regulation on the same grounds as it may take such action with respect to any other license, registration, finding of suitability or approval.”
Nope. It doesn’t. For that you’ve got to look at the Nevada Revised Statutes (NRS), the current codified laws of the State of Nevada, and scroll down to Chapter 603A — Security of Personal Information.
They have a whole section there on what constitutes a breach of security, define what a collector of data is, and the regulations of business practices and penalties, including the destruction of records containing personal information.
First, even if you think you’ve got the hang of the rest, we’d really suggest you take a look at NRS Chapter 603A. But we’re glad you brought up destruction of records. It isn’t really just a Nevada law, by the way. Forty-seven states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have enacted legislation requiring private, governmental or educational entities to notify individuals of security breaches of information involving personally identifiable information. Many of them, like Nevada, have defined data collectors as “any governmental agency, institution of higher education, corporation, financial institution or retail operator or any other type of business entity or association that, for any purpose, whether by automated collection or otherwise, handles, collects, disseminates or otherwise deals with nonpublic personal information.”
Sure. NRS Chapter 603A has a section called “Regulation Of Business Practices.” NRS 603A.200 very specifically states that a business that maintains records containing personal information on customers of the business has to take reasonable measures to ensure the destruction of those records when the business decides it no longer needs to maintain the records.
This includes the shredding of records containing personal information and erasing personal information from any and all records. For example, personal information could be the names and addresses of VIP patrons, or names and photographs of advantage players that have visited your property. In addition, hopefully, you haven’t transmitted any of this information over email, unless there’s end-to-end encryption.
Well, NRS 603A.210 is about the security measures you have to have in place to protect records containing PII from unauthorized access, acquisition, destruction, use, modification or disclosure. And NRS 603A.215 very clearly states that any data collector (i.e. the business that collects that personal information data) that is not using current versions of Payment Card Industry Data Security Standards to transmit information must not:
a) Transfer any personal information through an electronic, non-voice transmission other than by fax to a person outside of the secure system of the data collector, unless the data collector uses encryption to ensure the security of electronic transmission; or
b) Move any data storage device containing personal information beyond the logical or physical controls of the data collector, its data storage contractor or, if the data storage device is used by or is a component of a multifunctional device, a person who assumes the obligation of the data collector to protect personal information, unless the data collector uses encryption to ensure the security of the information.
You probably could, within your organization, as long as you can guarantee that it is a secure system, you’re sending it from one dedicated fax machine to another dedicated fax machine, with both using Group 3 or Group 4 digital formats that conform to the International Telecommunications Union T.4 or T.38 standard, or computer modems that conform to the International Telecommunications Union T.31 or T.32 standard.
Additionally, that neither device (the facsimile sender or the receiver) is connected to another device, there’s no onward transmission or third device involved, no storage on any data storage device, and that it is not accessible physically or digitally to any unauthorized person. And yes, you also have to guarantee that any record of that fax, and any copies, will be destroyed as soon as the record is no longer needed.
As mentioned elsewhere in this document, you could use end-to-end encryption technology for your emails containing PII, which means you have to ensure the receiver has it too, maintain safeguards for the cryptographic keys to uphold the integrity of the encryption, and have your third-party service provider contracts on PII signed and in place.
Alternatively, you could use a system like Biometrica’s SSIN, the Security and Surveillance Information Network. The SSIN, as previously mentioned, is a fully encrypted, peer-to-peer private information system, which allows for the notification of people, events, alerts or warnings, and gives you the ability to share information on various undesirables with other customers with access to the SSIN, from private clients to regulatory authorities to law enforcement agencies, on a one-to-one, one-to-some or a one-to-all basis.
The choice is yours. Just don’t send any sensitive personal information via regular email. You’re putting yourself and your organization at risk if you do.
We’re periodically asked about what it means to have “reasonable measures” in place — regulatory bodies frequently use the term in connection with businesses adopting best practices and cybersecurity safeguards. We thought we’d explain this through a case study, which looks at the result of a long-drawn-out battle between the Federal Trade Commission (FTC) and the Wyndham Hotels and Resorts group.
This is a fascinating study on so many levels, not the least of which were the details so clearly laid out in an August 2015 opinion by the Philadelphia-based United States Court of Appeals for the Third Circuit. As Wired magazine’s Andy Greenberg so succinctly put it, “the ruling more widely cements the agency’s power to regulate and fine firms that lose consumer data to hackers, if the companies engaged in what the FTC deems ‘unfair’ or ‘deceptive’ business practices. At a time when ever-more-private data is constantly getting breached, the decision affirms the FTC’s role as a digital watchdog with actual teeth.”
According to Third Circuit court documents, “On three occasions in 2008 and 2009, hackers successfully accessed Wyndham Worldwide Corporation’s computer systems. In total, they stole personal and financial information for hundreds of thousands of consumers leading to over $10.6 million dollars in fraudulent charges.” Note that while this was about sensitive personal information including financial information, it is relevant because of what the court and the FTC laid out as systemic vulnerabilities when it comes to PII.
According to a Commission release, the FTC sued Wyndham Worldwide and three subsidiaries, as it believed that the corporation’s failure to have in place adequate data security protections led to the breaches, the consequent loss of PII, the fraudulent charges, and “the transfer of hundreds of thousands of consumers’ account information to a website registered in Russia.”
In August 2015, the Third Circuit — it is one of 13 U.S. courts of appeals, but its decisions are very closely followed because it has jurisdiction over Delaware, where more than half of publicly traded companies and 64% of the Fortune 500 are incorporated — made a major decision, and ruled that the FTC had the authority to sue Wyndham Hotels for “allowing hackers” to steal customer data.
In finding for the FTC, the appellate court included the following details as part of its opinion: The FTC alleges that, at least since April 2008, Wyndham engaged in unfair cybersecurity practices that, “taken together, unreasonably and unnecessarily exposed consumers’ personal data to unauthorized access and theft.” This claim was fleshed out as follows, in the court’s words.
1. The company allowed Wyndham-branded hotels to store payment card information in clear readable text.
2. Wyndham allowed the use of easily guessed passwords to access property management systems. For example, to gain “remote access to at least one hotel’s system,” which was developed by Micros Systems, Inc., the user ID and password were both “micros.”
3. Wyndham failed to use “readily available security measures” — such as firewalls — to “limit access between [the] hotels’ property management systems, … corporate network, and the Internet.”
4. Wyndham allowed hotel property management systems to connect to its network without taking appropriate cybersecurity precautions. It did not ensure that hotels implemented “adequate information security policies and procedures.”
Also, it knowingly allowed at least one hotel to connect to the Wyndham network with an out-of-date operating system that had not received a security update in over three years. It allowed hotel servers to connect to Wyndham’s network even though “default user IDs and passwords were enabled … which were easily available to hackers through simple Internet searches.”
And, because it failed to maintain an “adequate inventory [of] computers connected to [Wyndham’s] network [to] manage the devices,” it was unable to identify the source of at least one of the cybersecurity attacks.
5. Wyndham failed to “adequately restrict” the access of third-party vendors to its network and the servers of Wyndham-branded hotels. For example, it did not “restrict connections to specified IP addresses or grant temporary, limited access, as necessary.”
6. It failed to employ “reasonable measures to detect and prevent unauthorized access” to its computer network or to “conduct security investigations.”
7. It did not follow “proper incident response procedures.” The hackers used similar methods in each attack, and yet Wyndham failed to monitor its network for malware used in the previous intrusions.
In December 2015, the FTC announced that Wyndham Hotels and Resorts had agreed to settle FTC charges that the company’s security practices had “unfairly exposed” the personal and financial information of hundreds of thousands of consumers to hackers in three separate data breaches.
Under the terms of the settlement, filed with the U.S. District Court for the District of New Jersey, the company would establish a comprehensive information security program designed to protect cardholder data — including payment card numbers, names and expiration dates. In addition, the settlement required that the company conduct annual information security audits and maintain safeguards in connections to its franchisees’ servers.
The order also stipulated that in the event the organization or its subsidiaries suffered another data breach that affected more than 10,000 payment card numbers, it would have to obtain a breach assessment and give the FTC that assessment within 10 days. The order also stated that if the group successfully obtained the prescribed compliance certifications, it would be deemed to be in compliance with the “comprehensive information security program provision” in the order. That provision, though, would not stand if it were found that the organization had misled or provided any false information during the annual audit and assessment process.
And if anyone is wondering about a timeline on these audits and assessments, Wyndham’s obligations under the settlement are in place for 20 years.
According to court documents, Attack 1 took place in April 2008, when hackers reportedly breached the local network of a hotel in Phoenix, Arizona, which was connected to Wyndham’s network and the Internet. They then used what’s called a brute-force attack — a method in which the attacker uses a program to systematically guess passwords, encryption keys and the like until one succeeds — to access an administrator’s account on the Wyndham network, giving them access to customer/consumer data on the network.
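To see why brute-force attacks succeed against weak credentials, consider this toy sketch: it tries every lowercase combination of a given length until one matches a stored password hash. All values here are hypothetical; a real attacker automates this at far greater scale, which is exactly why lockouts, rate limiting, and monitoring for repeated failed logins matter.

```python
# Toy illustration of a brute-force attack: try every candidate
# password until one hashes to the stored value. Hypothetical data.
import hashlib
import itertools
import string

def sha256_hex(s):
    """Hash a string with SHA-256 and return the hex digest."""
    return hashlib.sha256(s.encode()).hexdigest()

# Hypothetical stored hash of a weak 3-character password
stored_hash = sha256_hex("cab")

def brute_force(target_hash, length=3):
    """Try every lowercase combination of the given length (26**3 = 17,576 guesses)."""
    for combo in itertools.product(string.ascii_lowercase, repeat=length):
        guess = "".join(combo)
        if sha256_hex(guess) == target_hash:
            return guess
    return None
```

A three-character lowercase password falls in a fraction of a second; each added character multiplies the search space, which is why length and complexity requirements, combined with account lockouts, are standard defenses.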
Loss: It was estimated that “the hackers obtained unencrypted information for over 500,000 customer accounts, which they sent to a domain in Russia.”
Attack 2 took place in March 2009, when hackers reportedly accessed the Wyndham network through an administrative account. The FTC claimed that “Wyndham was unaware of the attack for two months until consumers filed complaints about fraudulent charges. Wyndham then discovered memory scraping malware used in the previous attack on more than thirty hotels’ computer systems.”
Loss: Unencrypted payment card information for approximately 50,000 consumers from 39 properties.
In Attack 3, apparently at the end of 2009, hackers again accessed an administrator account on a Wyndham network. According to the court documents, “Wyndham only learned of the intrusion in January 2010 when a credit card company received complaints from cardholders.”
Loss: Payment card information for approximately 69,000 customers across 28 properties.
Finally, here is an interesting (and worrying, perhaps, for any organization) point that was clearly noted and put into its order by the appellate court: the FTC alleges that, in total, the hackers obtained payment card information from over 619,000 consumers, which (as noted) resulted in at least $10.6 million in fraud loss. It further states that consumers suffered financial injury through “unreimbursed fraudulent charges, increased costs, and lost access to funds or credit,” and that they “expended time and money resolving fraudulent charges and mitigating subsequent harm.”
The moral of the story: That order was definitive. We would suggest that you keep all your jurisdictional laws in mind and take all reasonable available measures to protect your customers’ sensitive PII. The technology to do so is available. We would sincerely suggest you use it.
Biometrica’s closed and end-to-end encrypted data network, SSIN, the “Security and Surveillance Information Network,” is a private network that connects security, surveillance, compliance and law enforcement teams. The information on the network is fully encrypted at every stage of exchange and transmission. We adhere to the highest levels of standards for data security to maintain both the privacy and security of all shared, exchanged or transmitted data.
Sensitive PII should really only be shared with authorized organizational personnel, licensed data agents under the Fair Credit Reporting Act (FCRA), or with or by licensed private investigators acting as agents for customers. This means that if you share third-party PII with anyone outside your organization’s authorized personnel who does not fall into one of those two categories, or if you share PII across a typically unsecured system like email, you are putting your property and organization at tremendous risk.
Biometrica is a licensed private investigator, and our contracts establish us as an agent of our customers, across both the casino and gaming sector and law enforcement. In addition to being licensed PIs, because we are agents of the casinos that have access to our systems, and because most casinos are now considered nonbank financial institutions (NBFIs), we also provide added support for a casino’s Know Your Customer (KYC) requirements under the Bank Secrecy Act (BSA).
Shortly after 9/11, the U.S. Congress enacted the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (more commonly known as the USA PATRIOT Act). Among other things, the USA PATRIOT Act established a number of new measures to prevent, detect, and prosecute those involved in money laundering and terrorist financing. It was vital to our collective national interest to know the source of funds.
The PATRIOT Act expanded the definition of what constitutes a “financial institution” and mandated that casinos follow most of the statutes governing financial institutions. You can find out more details on anti-money laundering, KYC norms and casinos by downloading our free AML 101 document here.
Please note that this document is intended solely as a compilation of information with regard to PII. It is neither all-inclusive, nor is it expected to act in lieu of legal advice in any shape, manner or form. Please follow the advice of your compliance or regulatory authorities and your legal representative/s for information specific to you.