Volume 3, Issue 4 (2012)

Research Article

A Robust Method for Fingerprint Recognition using Biometric Fusion

Nasrollah Moghadam, Mehdi Abadi and Hamza Ali Mahfoodh

In this paper we introduce a combined approach that uses both local and global features of the fingerprint. Minutiae features are extracted, and a mesh of relations between the minutiae points is created. Two points from the output of this first stage are then used in the second stage as a reference point and a rotation handle. A matching process takes place after each stage, and the results of both stages are combined to produce the final decision. In this approach we use the 10 fingerprints of each person to maximize precision. We also use powerful pre-processing tools to ensure that the input image is in the best possible condition for matching.
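
To make the two-stage idea concrete, the following is a minimal sketch of one plausible reading of it, not the authors' implementation: pairwise distance/angle relations serve as the rotation- and translation-invariant mesh of the first stage, and a reference point plus a rotation handle align the minutiae sets in the second stage. The function names, the tolerance, and the scoring rule are all illustrative assumptions.

```python
# Illustrative sketch of the two-stage matching idea; not the authors' code.
import numpy as np

def pairwise_relations(minutiae):
    """Stage 1: distance/angle relations between minutiae pairs (the 'mesh')."""
    rel = []
    for i in range(len(minutiae)):
        for j in range(i + 1, len(minutiae)):
            v = minutiae[j] - minutiae[i]
            rel.append((i, j, np.linalg.norm(v), np.arctan2(v[1], v[0])))
    return rel

def align(minutiae, ref_idx, handle_idx):
    """Stage 2: translate the reference minutia to the origin, then rotate so
    the 'rotation handle' minutia lies on the +x axis."""
    pts = minutiae - minutiae[ref_idx]
    theta = np.arctan2(pts[handle_idx, 1], pts[handle_idx, 0])
    c, s = np.cos(-theta), np.sin(-theta)
    return pts @ np.array([[c, -s], [s, c]]).T

def match_score(a, b, tol=6.0):
    """Fraction of points in aligned set `a` with a neighbor in `b` within
    `tol` pixels; the 6-pixel tolerance is an assumption, not from the paper."""
    hits = sum(np.min(np.linalg.norm(b - p, axis=1)) < tol for p in a)
    return hits / max(len(a), len(b))

# Toy usage: a gallery print that is a shifted copy of the probe aligns exactly.
probe = np.array([[10.0, 20.0], [40.0, 25.0], [30.0, 60.0], [55.0, 50.0]])
gallery = probe + np.array([3.0, -2.0])
print(match_score(align(probe, 0, 1), align(gallery, 0, 1)))   # 1.0
```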

Research Article

Determining the Probability Distribution and Evaluating Sensitivity and False Positive Rate of a Confounder Detection Method Applied to Logistic Regression

Robin Bliss, Janice Weinberg, Thomas Webster and Veronica Vieira

Background: In epidemiologic studies, researchers are often interested in detecting confounding, which occurs when a third variable is both associated with a predictor of interest and affects the association between that predictor and the outcome. Confounder detection methods often compare regression coefficients obtained from “crude” models that exclude the possible confounder(s) and “adjusted” models that include the variable(s). One such method compares the relative difference in effect estimates to a cutoff of 10%, with differences of at least 10% providing evidence of confounding.
Methods: In this study we derive the asymptotic distribution of the relative change in effect statistic applied to logistic regression and evaluate the sensitivity and false positive rate of the 10% cutoff method using the asymptotic distribution. We then verify the results using simulated data.
Results: When applied to a logistic regression model with a dichotomous outcome, exposure, and possible confounder, we found the relative change in estimate statistic to have an asymptotic lognormal distribution. For sample sizes of at least 300, we found that when confounding existed, over 80% of models had >10% changes in odds ratios. When the confounder was not associated with the outcome, the false positive rate increased as the strength of the association between the predictor and the confounder increased. When the confounder and predictor were independent of one another, false positives were rare (most rates < 10%).
Conclusions: Researchers must be aware of high false positive rates when applying change in estimate confounder detection methods to data where the exposure is associated with possible confounder variables.
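
To make the 10% cutoff rule concrete, the following is a minimal demonstration on simulated data (our illustration, not the authors' simulation code); whether the relative change is taken on the odds ratio or its logarithm is a detail of the paper's derivation, and the odds ratio scale is assumed here.

```python
# Illustrative demo of the 10% change-in-estimate rule on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300                                   # sample size echoing the abstract
z = rng.binomial(1, 0.5, n)               # possible confounder
x = rng.binomial(1, 0.3 + 0.3 * z)        # exposure, associated with z
p = 1 / (1 + np.exp(-(-1.0 + 0.8 * x + 0.7 * z)))   # z also affects the outcome
y = rng.binomial(1, p)

crude = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
adjusted = sm.Logit(y, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)

or_crude = np.exp(crude.params[1])        # odds ratio for x, ignoring z
or_adj = np.exp(adjusted.params[1])       # odds ratio for x, adjusting for z
rel_change = abs(or_crude - or_adj) / or_adj
print(f"relative change in OR: {rel_change:.1%}; flag: {rel_change >= 0.10}")
```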

Research Article

Biometric Template Security using Dorsal Hand Vein Fuzzy Vault

Biometric systems are vulnerable to a variety of attacks aimed at undermining the integrity of the authentication process. More importantly, template security is of vital importance in biometric systems because, unlike passwords, stolen biometric templates cannot be revoked. In this paper we describe the various threats that can be encountered by a biometric system. We specifically focus on attacks designed to elicit information about the original biometric data of an individual from the stored template. A few algorithms presented in the literature are discussed in this regard. We also examine techniques that can be used to deter or detect these attacks. Furthermore, we provide experimental results
pertaining to a biometric system that combines biometrics with cryptography by converting dorsal hand vein templates into a novel cryptographic structure called a fuzzy vault. Initially, pre-processing steps are applied to the dorsal hand vein images for enhancement, smoothing and compression. Subsequently, thinning and binary encoding techniques are employed, and features are then extracted. The biometric template and the input key are then used to generate the fuzzy vault. For decoding, the biometric template is constructed from the dorsal hand vein image and combined with the stored fuzzy vault to generate the final key. Experiments were conducted using dorsal hand vein databases, and FNMR and FMR values were calculated with and without noise.
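
For readers unfamiliar with the fuzzy vault construction, the following is a self-contained sketch of the generic scheme over a small prime field: genuine feature points lie on a secret key polynomial, chaff points do not, and a sufficiently overlapping query recovers the key by interpolation. The field size, polynomial degree, feature values, and chaff count are illustrative assumptions; the paper's dorsal hand vein feature extraction is not reproduced here.

```python
# Minimal fuzzy-vault sketch over GF(P); illustrative parameters only.
import random

P = 8191               # small prime field; real vaults use larger fields
DEGREE = 3             # key = DEGREE + 1 polynomial coefficients

def eval_poly(coeffs, x):
    """Horner evaluation of coeffs[0] + coeffs[1]*x + ... mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def poly_mul(a, b):
    """Multiply two coefficient lists mod P (list index = degree)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encode_vault(key_coeffs, features, n_chaff=200):
    """Lock the key: genuine feature points lie on the key polynomial,
    randomly generated chaff points deliberately do not."""
    vault = {f % P: eval_poly(key_coeffs, f % P) for f in features}
    n_genuine = len(vault)
    while len(vault) < n_genuine + n_chaff:
        x, y = random.randrange(P), random.randrange(P)
        if x not in vault and y != eval_poly(key_coeffs, x):
            vault[x] = y
    points = list(vault.items())
    random.shuffle(points)
    return points

def lagrange_coeffs(points):
    """Recover the key polynomial's coefficients from DEGREE + 1 points."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        num, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = poly_mul(num, [(-xj) % P, 1])   # factor (x - xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P          # Fermat inverse mod P
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * num[d]) % P
    return coeffs

key = [1234, 42, 7, 99]                     # secret coefficients ("the key")
enrolled = [101, 222, 333, 444, 555]        # enrollment feature values
vault = encode_vault(key, enrolled)

query = {101, 222, 333, 444}                # query overlapping enrollment
unlocked = [(x, y) for x, y in vault if x in query]
print(lagrange_coeffs(unlocked[:DEGREE + 1]) == key)   # True
```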

Research Article

Design Considerations for a Two-stage Study with a Continuous Outcome and a Rare Exposure

Hung-Mo Lin and John M. Williamson

We consider the scenario in which the severity of a disease is characterized by a normally distributed response and the chance of being exposed (yes/no) to the risk factor is extremely rare. A screening test is employed to oversample subjects who may be at risk for the disease, because expensive laboratory tests are needed to measure the outcome of interest and to confirm the true exposure status. Sample size and cost considerations are discussed for this type of two-stage design, with the objectives of 1) minimizing the number of subjects in Stage II and 2) overcoming the problem of a rare exposure. In particular, with an imperfect screening tool, one must take into account the sensitivity and specificity of the screening test and the uncertainty of the estimates of the exposure prevalence in the survey population. The Penn State Children Sleep Disorder Study (PSCSDS) is used for illustration.
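
As a rough illustration of the trade-offs described above (a back-of-envelope sketch with invented numbers, not the paper's design formulas): the Stage I sample needed to deliver a target number of truly exposed subjects grows with 1/(sensitivity × prevalence), while imperfect specificity inflates Stage II with false positives.

```python
# Back-of-envelope sketch; all numbers are illustrative assumptions.
prev = 0.02                 # exposure prevalence (rare)
sens, spec = 0.90, 0.95     # screening test sensitivity and specificity
target_exposed = 50         # truly exposed subjects wanted in Stage II

p_pos = sens * prev + (1 - spec) * (1 - prev)   # P(screen positive)
ppv = sens * prev / p_pos                        # P(exposed | screen positive)

n_stage1 = target_exposed / (sens * prev)        # Stage I size to hit the target
n_stage2 = n_stage1 * p_pos                      # expected screen-positives to confirm

print(f"screen PPV: {ppv:.2f}")
print(f"Stage I subjects: {n_stage1:.0f}, expected Stage II subjects: {n_stage2:.0f}")
```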

Research Article

Lessons Learned in Dealing with Missing Race Data: An Empirical Investigation

Mulugeta Gebregziabher, Yumin Zhao, Neal Axon, Gregory E. Gilbert, Carrae Echols and Leonard E. Egede

Background: Missing race data is a ubiquitous problem in studies using data from large administrative datasets such as the Veterans Health Administration and other sources. The most common approach to dealing with this problem has been to analyze only those records with complete data, known as Complete Case Analysis (CCA); CCA requires the assumption that data are Missing Completely At Random (MCAR) and can otherwise lead to biased estimates with inflated standard errors.
Objective: To examine the performance of a new imputation approach, Latent Class Multiple Imputation (LCMI), for imputing missing race data and to compare it with CCA, Multiple Imputation (MI), and Log-Linear Multiple Imputation (LLMI).
Design/Participants: We empirically compare LCMI to CCA, MI, and LLMI using simulated data and demonstrate their application using data from a sample of 13,705 veterans with type 2 diabetes, among whom 23% had unknown/missing race information.
Results: Our simulation study shows that under Missing At Random (MAR), LCMI yields lower bias and lower standard error estimates than CCA, MI, and LLMI. Similarly, in our data example, which does not conform to MCAR since subjects with missing race information had lower rates of medical comorbidities than those with race information, LCMI outperformed MI and LLMI, providing lower standard errors, especially when a relatively large number of latent classes is assumed for the latent class imputation model.
Conclusions: Our results show that LCMI is a valid statistical technique for imputing missing categorical covariate data, particularly missing race data, that offers advantages with respect to the precision of estimates.
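
For context, the following is a minimal sketch of the generic multiple imputation workflow that the abstract compares against, i.e., plain within-stratum MI pooled with Rubin's rules, not the paper's latent class variant; the data are simulated and all variable names are illustrative.

```python
# Sketch of plain MI with Rubin's rules for a missing binary covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, m = 2000, 10                          # subjects, number of imputations
comorbid = rng.binomial(1, 0.4, n)       # observed covariate
race = rng.binomial(1, 0.3 + 0.2 * comorbid)   # binary "race" for simplicity
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.5 * race + 0.8 * comorbid))))
miss = rng.random(n) < (0.1 + 0.2 * (1 - comorbid))   # MAR: depends on comorbidity

estimates, variances = [], []
for _ in range(m):
    race_imp = race.copy()
    for stratum in (0, 1):               # impute within comorbidity strata
        idx = miss & (comorbid == stratum)
        p_hat = race[(~miss) & (comorbid == stratum)].mean()
        race_imp[idx] = rng.binomial(1, p_hat, idx.sum())
    X = sm.add_constant(np.column_stack([race_imp, comorbid]))
    fit = sm.Logit(y, X).fit(disp=0)
    estimates.append(fit.params[1])      # coefficient for race
    variances.append(fit.bse[1] ** 2)

# Rubin's rules: pooled point estimate and total variance.
qbar = np.mean(estimates)
W, B = np.mean(variances), np.var(estimates, ddof=1)
total_se = np.sqrt(W + (1 + 1 / m) * B)
print(f"pooled log-odds for race: {qbar:.3f} (SE {total_se:.3f})")
```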
