Ruler Detection for Autoscaling Forensic Images

2014 ◽  
Vol 6 (1) ◽  
pp. 9-27 ◽  
Author(s):  
Abhir Bhalerao ◽  
Gregory Reynolds

The assessment of forensic photographs often requires calibration of the image resolution so that accurate measurements can be taken of crime-scene exhibits or latent marks. In the case of latent marks, such as fingerprints, image calibration to a given dots-per-inch is a necessary step for image segmentation, preprocessing, extraction of feature minutiae and subsequent fingerprint matching. To enable scaling, such photographs are taken with forensic rulers in the frame so that image pixel distances can be converted to standard measurement units (metric or imperial). In forensic bureaus, this is commonly achieved by manually selecting two or more points on the ruler within the image and entering the units of the measured distance. The process can be laborious and inaccurate, especially when the ruler graduations are indistinct because of poor contrast, noise or insufficient resolution. Here the authors present a fully automated method for detecting and estimating the direction and graduation spacing of rulers in forensic photographs. The method detects the location of the ruler in the image and then uses spectral analysis to estimate the direction and wavelength of the ruler graduations. The authors detail the steps of the algorithm and demonstrate the accuracy of the estimation on both a calibrated set of test images and a wide collection of good and poor quality crime-scene images. The method is shown to be fast and accurate and has wider application in other imaging disciplines, such as radiography, archaeology and surveying.
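The spectral step can be illustrated with a short sketch: sample an intensity profile along the detected ruler axis and read the graduation spacing off the dominant peak of the power spectrum. This is a minimal stand-in for the authors' method, tried on a synthetic sinusoidal profile; the function name and the `min_period` guard are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def graduation_spacing(profile, min_period=4):
    """Estimate the dominant graduation spacing (in pixels) of a 1-D
    intensity profile sampled along the ruler axis, via its power spectrum."""
    profile = np.asarray(profile, dtype=float)
    profile = profile - profile.mean()          # remove the DC component
    power = np.abs(np.fft.rfft(profile)) ** 2   # one-sided power spectrum
    freqs = np.fft.rfftfreq(profile.size)       # cycles per pixel
    # ignore frequencies implying periods shorter than min_period pixels
    valid = (freqs > 0) & (freqs <= 1.0 / min_period)
    peak = int(np.argmax(np.where(valid, power, 0.0)))
    return 1.0 / freqs[peak]                    # wavelength in pixels

# synthetic ruler profile: graduations every 10 pixels plus noise
rng = np.random.default_rng(0)
x = np.arange(400)
profile = np.sin(2 * np.pi * x / 10) + 0.2 * rng.standard_normal(x.size)
spacing = graduation_spacing(profile)
```

On a real photograph the profile would come from averaging pixel rows across the detected ruler strip, and the same spectrum also carries the orientation information when computed in 2-D.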

Author(s):  
Megha Chhabra ◽  
Manoj Kumar Shukla ◽  
Kiran Kumar Ravulakollu

Latent fingerprints are unintentional finger-skin impressions left as ridge patterns at crime scenes. A major challenge in latent fingerprint forensics is the poor quality of the image lifted from the crime scene. Forensic investigators are continually searching for effective new technologies to capture and process low-quality images. The accuracy of the results depends on the quality of the image captured initially, the metrics used to assess that quality, and the level of enhancement subsequently required. Images collected by low-quality scanners, unstructured background noise, poor ridge quality and overlapping structured noise lead to the detection of false minutiae and hence reduce the recognition rate. Traditionally, image segmentation and enhancement are done partly manually with the help of highly skilled experts. Using automated systems for this work, images of varying and challenging quality can be investigated faster. This survey presents a comparative study of the various segmentation techniques available for latent fingerprint forensics.


2021 ◽  
pp. 1-19
Author(s):  
Maria Tamoor ◽  
Irfan Younas

Medical image segmentation is a key step in assisting the diagnosis of several diseases, and the accuracy of a segmentation method is important for subsequent treatment. Different medical imaging modalities pose different challenges, such as intensity inhomogeneity, noise, low contrast, and ill-defined boundaries, which make automated segmentation a difficult task. To handle these issues, we propose a new fully automated method for medical image segmentation that combines the advantages of thresholding and an active contour model. In this study, a Harris Hawks optimizer is applied to determine the optimal thresholding value, which is used to obtain the initial contour for segmentation. The obtained contour is further refined using a spatially varying Gaussian kernel in the active contour model. The proposed method is then validated on a standard skin dataset (ISBI 2016), which consists of variable-sized lesions and challenging artifacts, and a standard cardiac magnetic resonance dataset (ACDC, MICCAI 2017) with a wide spectrum of normal hearts, congenital heart diseases, and cardiac dysfunction. Experimental results show that the proposed method effectively segments the region of interest and produces superior segmentation results for the skin (overall Dice score 0.90) and cardiac (overall Dice score 0.93) datasets, compared with other state-of-the-art algorithms.
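The thresholding stage can be sketched as maximizing a between-class-variance objective over candidate thresholds. Here a plain grid search stands in for the Harris Hawks optimizer, which would explore the same objective with a population-based metaheuristic; the objective, function names, and synthetic bimodal data are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def between_class_variance(image, t):
    """Otsu-style objective: variance between fore/background split at t."""
    fg, bg = image[image > t], image[image <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w_fg, w_bg = fg.size / image.size, bg.size / image.size
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

def optimal_threshold(image, candidates=None):
    """Search candidate thresholds for the one maximizing the objective.
    (A metaheuristic such as Harris Hawks would search the same landscape.)"""
    image = np.asarray(image, dtype=float).ravel()
    if candidates is None:
        candidates = np.linspace(image.min(), image.max(), 256)
    scores = [between_class_variance(image, t) for t in candidates]
    return candidates[int(np.argmax(scores))]

# synthetic bimodal intensities: dark background (~50), bright lesion (~200)
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 5, 500), rng.normal(200, 5, 500)])
t_star = optimal_threshold(img)
```

In the paper's pipeline the binary mask produced by such a threshold would then seed the active contour, which refines the boundary.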


Author(s):  
Amrita Goswamy ◽  
Shauna Hallmark ◽  
Theresa Litteral ◽  
Michael Pawlovich

Intersection crashes during nighttime hours may occur because of poor driver visual cognition of conflicting traffic or of the intersection's presence. In rural areas, the only source of lighting is typically vehicle headlights. Roadway lighting enhances driver recognition of intersection presence and visibility of signs and markings. Destination lighting provides some illumination for the intersection but is not intended to fully illuminate all approaches. Destination lighting has been widely used in Iowa, but its effectiveness has not been well documented. This study therefore sought to evaluate the safety effect of destination lighting at rural intersections. As part of an extensive data collection effort, locations with destination/street lighting were gathered with the assistance of several state agencies. After manual selection of a similar number of control intersections, propensity score matching using the caliper width technique was used to match 245 treatment sites with 245 control sites. Negative binomial regression was used to evaluate crash frequency data. The presence of destination lighting at stop-controlled cross-intersections generally reduced the night-to-day crash ratio by 19%. The presence of treatment or destination lighting was associated with a 33%–39% increase in daytime crashes across all models but with an 18%–33% reduction in nighttime crashes. Injuries in nighttime crashes decreased by 24%, total nighttime crashes by 33%, and property damage crashes by 18%.
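Caliper-based propensity score matching of the kind described can be sketched as a greedy nearest-neighbour pairing: each treated site is matched to the closest unused control, but only if that control's score lies within the caliper. The scores, caliper width, and greedy strategy below are illustrative assumptions; in practice the caliper is often set to a fraction (e.g. 0.2) of the standard deviation of the logit propensity score.

```python
def caliper_match(treated_ps, control_ps, caliper):
    """Greedy 1:1 matching on propensity scores: each treated unit takes
    the closest unused control within the caliper, or goes unmatched."""
    unused = dict(enumerate(control_ps))   # control index -> score
    pairs = []
    for i, ps in enumerate(treated_ps):
        if not unused:
            break
        j = min(unused, key=lambda k: abs(unused[k] - ps))
        if abs(unused[j] - ps) <= caliper:  # reject matches outside caliper
            pairs.append((i, j))
            del unused[j]                   # a control is used at most once
    return pairs

# hypothetical scores: the last treated site finds no control in range
treated = [0.30, 0.55, 0.90]
controls = [0.28, 0.50, 0.57, 0.10]
pairs = caliper_match(treated, controls, caliper=0.05)
```

Unmatched treated units are simply dropped, which is why the study ends with equal-sized treatment and control groups (245 each).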


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Raj Bridgelall ◽  
Pan Lu ◽  
Denver D. Tolliver ◽  
Tai Xu

On-demand shared mobility services such as Uber and microtransit are steadily penetrating the worldwide market for traditional dispatched taxi services. Hence, taxi companies are seeking ways to compete. This study mined large-scale mobility data from connected taxis to discover beneficial patterns that may inform strategies to improve the dispatch taxi business. It is not practical to manually clean and filter large-scale mobility data containing GPS information. This research therefore contributes an automated method of data cleaning and filtering suitable for such datasets, and demonstrates it. The cleaning method defines three filter variables and applies a layered statistical filtering technique to eliminate outlier records whose values do not fit the expected theoretical distributions of those variables. Chi-squared statistical tests evaluate the quality of the cleaned data by comparing the distributions of the three variables with their expected distributions. The overall cleaning method removed approximately 5% of the data, consisting of obvious errors and poor-quality outliers. Subsequently, mining the cleaned data revealed that trip production in Dubai peaks when only the same two drivers operate the same taxi. This finding would not have been possible without access to proprietary data containing unique identifiers for both drivers and taxis; datasets that identify individual drivers are not publicly available.
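One layer of such a filter, plus the chi-squared quality check, might look as follows. The IQR rule, the bin count, and the exponential trip-time model are illustrative assumptions standing in for the paper's three filter variables and their theoretical distributions.

```python
import numpy as np
from scipy import stats

def iqr_filter(values):
    """One filtering layer: drop records outside 1.5*IQR of the quartiles.
    Stacking such layers over several variables gives a layered filter."""
    q1, q3 = np.percentile(values, [25, 75])
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return values[(values >= lo) & (values <= hi)]

def matches_expected(values, expected_pdf, bins=10, alpha=0.05):
    """Chi-squared goodness-of-fit of binned values against an expected
    distribution; True when the fit is not rejected at level alpha."""
    observed, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    expected = expected_pdf(centers)
    expected = expected / expected.sum() * observed.sum()  # equal totals
    _, p_value = stats.chisquare(observed, expected)
    return p_value > alpha

# hypothetical trip durations (minutes) with two impossible-length records
rng = np.random.default_rng(2)
trip_minutes = np.concatenate([rng.exponential(12, 5000), [900.0, 1200.0]])
cleaned = iqr_filter(trip_minutes)
ok = matches_expected(cleaned, lambda x: np.exp(-x / 12) / 12)
```

The paper's layered technique would apply a filter of this kind per variable and then run the chi-squared comparison on each of the three cleaned distributions.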


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Guanghui Liang ◽  
Jianmin Pang ◽  
Zheng Shan ◽  
Runqing Yang ◽  
Yihang Chen

To address emerging security threats, various malware detection methods are proposed every year, and a small but representative set of malware samples is usually needed to build a detection model, especially a machine-learning-based one. However, manually selecting representative samples from a large collection of unknown files is labor-intensive and not scalable. In this paper, we propose a framework that can automatically generate a small data set for malware detection. With this framework, we extract behavior features from a large initial data set and then use a hierarchical clustering technique to identify different types of malware. An improved genetic algorithm based on roulette wheel sampling is implemented to generate the final test data set. The final data set is only one-eighteenth the volume of the initial data set, and evaluations show that the data set selected by the proposed framework, although much smaller than the original, loses almost no semantic information.
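The roulette-wheel component of the genetic algorithm can be sketched in a few lines: each candidate is drawn with probability proportional to its fitness, with replacement. The cluster names and fitness values below are hypothetical; the paper's improved variant would layer further operators on top of this basic sampler.

```python
import random

def roulette_wheel(items, weights, k, rng=None):
    """Fitness-proportionate (roulette wheel) sampling: each item is drawn
    with probability proportional to its weight, with replacement."""
    rng = rng or random.Random()
    total = sum(weights)
    cumulative, acc = [], 0.0
    for w in weights:
        acc += w
        cumulative.append(acc)
    picks = []
    for _ in range(k):
        r = rng.random() * total
        # the first cumulative weight exceeding r selects the item
        for item, c in zip(items, cumulative):
            if r <= c:
                picks.append(item)
                break
    return picks

# clusters weighted by how representative they are of the full corpus
clusters = ["trojan", "worm", "ransomware"]
fitness = [0.7, 0.2, 0.1]
sample = roulette_wheel(clusters, fitness, k=1000, rng=random.Random(42))
```

Sampling proportionally to cluster fitness keeps rare malware families in the reduced set while letting dominant families contribute most of the samples.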


2021 ◽  
pp. 73-80
Author(s):  
Eric Markley ◽  
David Q. Le ◽  
Peter Germonpré ◽  
Costantino Balestra ◽  
...  

Venous gas emboli (VGE) are often quantified as a marker of decompression stress on echocardiograms. Bubble-counting has been proposed as an easy-to-learn method, but remains time-consuming, rendering large-dataset analysis impractical. Computer automation of VGE counting following this method has therefore been suggested as a means to eliminate rater bias and save time. A necessary step for this automation relies on the selection of a frame during late ventricular diastole (LVD) for each cardiac cycle of the recording. Since electrocardiograms (ECG) are not always recorded in field experiments, here we propose a fully automated method for LVD frame selection based on regional intensity minimization. The algorithm is tested on 20 previously acquired echocardiography recordings (from the original bubble-counting publication), half of which were acquired at rest (Rest) and the other half after leg flexions (Flex). From the 7,140 frames analyzed, sensitivity was found to be 0.913 [95% CI: 0.875-0.940] and specificity 0.997 [95% CI: 0.996-0.998]. The method’s performance is also compared to that of random chance selection and found to perform significantly better (p<0.0001). No trend in algorithm performance was found with respect to VGE counts, and no significant difference was found between Flex and Rest (p>0.05). In conclusion, full automation of LVD frame selection for the purpose of bubble counting in post-dive echocardiography has been established with excellent accuracy, although we caution that high quality acquisitions remain paramount in retaining high reliability.
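The core idea of regional intensity minimization can be sketched simply: within each cardiac cycle, pick the frame whose mean intensity inside a ventricular region of interest is lowest (the blood-filled ventricle appears darkest on echo late in diastole). The function, region layout, and synthetic clip below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lvd_frames(frames, region, cycle_bounds):
    """For each cardiac cycle, select the frame whose mean intensity inside
    the ventricular region of interest is minimal."""
    r0, r1, c0, c1 = region
    means = np.array([f[r0:r1, c0:c1].mean() for f in frames])
    selected = []
    for start, stop in cycle_bounds:  # half-open [start, stop) frame ranges
        selected.append(start + int(np.argmin(means[start:stop])))
    return selected

# synthetic clip: 10 uniform frames split into two 5-frame "cycles",
# with the darkest regional frame at indices 3 and 7
frames = [np.full((32, 32), 100.0) for _ in range(10)]
frames[3][8:24, 8:24] = 20.0
frames[7][8:24, 8:24] = 30.0
picks = lvd_frames(frames, region=(8, 24, 8, 24),
                   cycle_bounds=[(0, 5), (5, 10)])
```

In the absence of an ECG, the cycle boundaries themselves would have to be inferred from the periodicity of the same regional intensity signal.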


2020 ◽  
Vol 20 (03) ◽  
pp. 2050018
Author(s):  
Neeraj Shrivastava ◽  
Jyoti Bharti

In the domain of computer technology, image processing strategies have become part of various applications. A few broadly used image segmentation methods have been characterized as seeded region growing (SRG), edge-based image segmentation, fuzzy c-means image segmentation, etc. SRG is a fast, robust and effective image segmentation algorithm. In this paper, we review different applications of SRG and their analysis. SRG delivers better results in the analysis of magnetic resonance images, brain images, breast images, etc. On the other hand, it has some limitations: the seed points have to be selected manually, and this manual selection at segmentation time can lead to incorrect region selection. A review of automatic seed selection methods, with their advantages, disadvantages and applications in different fields, is therefore presented.
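The basic SRG algorithm the survey builds on can be sketched as a breadth-first flood from a seed pixel, admitting neighbours whose intensity stays close to the running region mean. The tolerance rule and 4-connectivity here are common choices, assumed for illustration; published SRG variants differ in the membership criterion.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel: a 4-connected neighbour joins while
    its intensity is within tol of the running region mean."""
    image = np.asarray(image, dtype=float)
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    total, count = image[seed], 1          # running sum and size of region
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(image[nr, nc] - total / count) <= tol):
                mask[nr, nc] = True
                total += image[nr, nc]
                count += 1
                queue.append((nr, nc))
    return mask

# 6x6 test image: bright 3x3 square on a dark background
img = np.zeros((6, 6))
img[1:4, 1:4] = 10.0
mask = region_grow(img, seed=(2, 2), tol=2.0)
```

The sensitivity of the result to `seed` is exactly the weakness the surveyed automatic seed selection methods aim to remove.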

