Optimized Video Tracking for Automated Vehicle Turning Movement Counts

2017 ◽  
Vol 2645 (1) ◽  
pp. 104-112
Author(s):  
François Bélisle ◽  
Nicolas Saunier ◽  
Guillaume-Alexandre Bilodeau ◽  
Sebastien le Digabel

This paper proposes a new method for automatically counting vehicle turning movements based on video tracking, expanding on previous work on optimization of parameters for road user trajectory extraction and on automated trajectory clustering. The counting method is composed of three main steps: an automated tracker that extracts vehicle trajectories from video data, an automated trajectory clustering algorithm, and an optimization algorithm. The proposed method was applied to obtain turning movement counts in three typical traffic engineering case studies in Canada representing industry-type conditions. These exhibited varying levels of tracking difficulty, ranging from a single-lane off-ramp to a six-movement intersection with a stop and a right-turn channel. Because of a limitation of the data set, giving flows per movement and not per lane, all sites were chosen with a single lane per movement. The 3-h morning peak period was used in the case studies. The results show an average weighted generalization error of 12% for more than 3,700 vehicles automatically analyzed for more than 8 h of video, ranging from 9.5% to 19.5%. The generalization error is on average 8.6% (and as low as 6.0% per movement) for the 3,084 uninterrupted vehicles that are in plain view of the camera. This paper describes in detail the methodology used and discusses the factors that affect counting performance and how to improve counting accuracy in further research.
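The counting idea of assigning each extracted trajectory to a turning movement can be sketched very simply: match a trajectory's entry and exit points to the closest movement's entry/exit pair and tally the result. This is an illustrative sketch only (the paper's tracker and derivative-free parameter optimization are far more involved); all names below are hypothetical.

```python
# Sketch of movement assignment by nearest entry/exit pair (illustrative only).
from collections import Counter

def endpoint_movement(traj, movements):
    """Assign a trajectory to the movement whose entry/exit pair is
    closest to the trajectory's first and last points."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    start, end = traj[0], traj[-1]
    return min(movements,
               key=lambda m: d2(start, movements[m][0]) + d2(end, movements[m][1]))

def count_turning_movements(trajectories, movements):
    return Counter(endpoint_movement(t, movements) for t in trajectories)

# Toy example: two movements at a single-lane junction.
movements = {"north_to_east": ((0, 0), (10, 10)),
             "north_to_west": ((0, 0), (-10, 10))}
trajs = [[(0, 0), (5, 5), (10, 10)],
         [(0, 1), (-4, 6), (-10, 10)],
         [(1, 0), (9, 9), (10, 10)]]
print(count_turning_movements(trajs, movements))
```

In the paper the assignment is learned by clustering rather than hard-coded, but the same count-per-movement output is what gets compared against manual ground-truth counts.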

2015 ◽  
Vol 2528 (1) ◽  
pp. 116-127 ◽  
Author(s):  
Mohamed Gomaa Mohamed ◽  
Nicolas Saunier

The increasing availability of video data, through existing traffic cameras or dedicated field data collection, and the development of computer vision techniques pave the way for the collection of massive data sets about the microscopic behavior of road users. Analysis of such data sets helps in understanding normal road user behavior and can be used for realistic prediction of motion and computation of surrogate safety indicators. A multilevel motion pattern learning framework was developed to enable automated scene interpretation, anomalous behavior detection, and surrogate safety analysis. First, points of interest (POIs) were learned on the basis of the Gaussian mixture model and the expectation maximization algorithm and then used to form activity paths (APs). Second, motion patterns, represented by trajectory prototypes, were learned from road users' trajectories in each AP by using a two-stage trajectory clustering method based on spatial and then temporal (speed) information. Finally, motion prediction relied on matching partial trajectories at each instant to the learned prototypes and evaluating the potential for collision by computing safety indicators. An intersection case study demonstrates the framework's ability in several ways: it reduces the computation cost by up to 90%; it cleans tracking outliers from the trajectory data set; it uses actual trajectories as prototypes without any pre- or postprocessing; and it predicts future motion realistically enough to compute surrogate safety indicators.
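The final prediction step, matching a partially observed trajectory to learned prototypes, can be sketched minimally: pick the prototype whose leading points best fit the observation and use its remainder as the predicted motion. The names are hypothetical and the two-stage spatial/temporal clustering that produces the prototypes is not reproduced here.

```python
# Minimal sketch of prototype matching for motion prediction (hypothetical names).
def match_prototype(partial, prototypes):
    """Return the prototype whose first len(partial) points are closest
    to the observed partial trajectory (sum of squared distances)."""
    def cost(proto):
        return sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                   for p, q in zip(partial, proto[:len(partial)]))
    return min(prototypes, key=cost)

def predict_future(partial, prototypes):
    """Predicted future motion = remainder of the best-matching prototype."""
    best = match_prototype(partial, prototypes)
    return best[len(partial):]

prototypes = [
    [(0, 0), (1, 0), (2, 0), (3, 0)],   # straight-through movement
    [(0, 0), (1, 0), (1, 1), (1, 2)],   # right-turn movement
]
partial = [(0, 0), (1, 0), (1, 1)]
print(predict_future(partial, prototypes))  # remainder of the turning prototype
```

With predicted positions per road user, pairwise collision potential (e.g. time-to-collision) can then be evaluated at each instant.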


2021 ◽  
pp. 016555152110184
Author(s):  
Gunjan Chandwani ◽  
Anil Ahlawat ◽  
Gaurav Dubey

Document retrieval plays an important role in knowledge management as it enables us to discover relevant information in existing data. This article proposes a cluster-based inverted indexing algorithm for document retrieval. First, pre-processing is done to remove unnecessary and redundant words from the documents. Then, the documents are indexed by the cluster-based inverted indexing algorithm, which is developed by integrating the piecewise fuzzy C-means (piFCM) clustering algorithm with inverted indexing. After the documents are indexed, query matching is performed for user queries using the Bhattacharyya distance. Finally, query optimisation is done with the Pearson correlation coefficient, and the relevant documents are retrieved. The performance of the proposed algorithm is analysed on the WebKB data set and the Twenty Newsgroups data set. The analysis shows that the proposed algorithm offers high performance with a precision of 1, recall of 0.70 and F-measure of 0.8235. The proposed document retrieval system retrieves the most relevant documents and speeds up the storing and retrieval of information.
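The query-matching step can be illustrated with the standard Bhattacharyya distance over normalized term-frequency distributions; the term weights below are toy values, and the piFCM indexing stage is not reproduced.

```python
import math

# Illustrative Bhattacharyya-distance matching between a query and a document,
# both represented as normalized term-frequency distributions.
def bhattacharyya_distance(p, q):
    """D_B = -ln( sum_t sqrt(p_t * q_t) ), summed over the union of terms."""
    terms = set(p) | set(q)
    bc = sum(math.sqrt(p.get(t, 0.0) * q.get(t, 0.0)) for t in terms)
    return -math.log(bc) if bc > 0 else float("inf")

def normalize(tf):
    total = sum(tf.values())
    return {t: c / total for t, c in tf.items()}

doc = normalize({"cluster": 3, "index": 2, "retrieval": 5})
query = normalize({"cluster": 1, "retrieval": 1})
print(bhattacharyya_distance(doc, query))  # small distance = good match
```

A retrieval system would rank all indexed documents by this distance and return the closest ones.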


2021 ◽  
Vol 9 (2) ◽  
pp. 119
Author(s):  
Lúcia Moreira ◽  
Roberto Vettor ◽  
Carlos Guedes Soares

In this paper, simulations of a ship travelling on a given oceanic route were performed by a weather routing system to provide a large realistic navigation data set, which could represent a collection of data obtained on board a ship in operation. This data set was employed to train a neural network computing system to predict ship speed and fuel consumption. The model was trained using the Levenberg–Marquardt backpropagation scheme to establish the relation between the ship speed and the respective propulsion configuration for the existing sea conditions, i.e., the output torque of the main engine, the revolutions per minute of the propulsion shaft, the significant wave height, and the peak period of the waves, together with the relative angle of wave encounter. Additional results were obtained by training the model on the relationship between the same inputs used to determine the ship's speed and the fuel consumption. A sensitivity analysis was performed to analyze the artificial neural network's capability to forecast ship speed and fuel oil consumption without information on the status of the engine (the revolutions per minute and torque), using only the sea state information as inputs. The results obtained with the neural network model show very good accuracy in the prediction of both the speed of the vessel and the fuel consumption.
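The regression setup can be sketched with a tiny one-hidden-layer network trained by plain gradient descent, standing in for the paper's Levenberg–Marquardt scheme (which additionally requires a Jacobian solver). The feature order, scaling, and synthetic "speed" target below are illustrative assumptions, not the paper's data.

```python
import math, random

# Minimal one-hidden-layer regression network (gradient descent stand-in for
# Levenberg-Marquardt). Inputs and targets are synthetic and scaled to [0, 1].
random.seed(0)
FEATURES = ["torque", "rpm", "wave_height", "peak_period", "encounter_angle"]
H = 4  # hidden units

w1 = [[random.uniform(-0.5, 0.5) for _ in FEATURES] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return sum(wo * hi for wo, hi in zip(w2, h)) + b2, h

def train(samples, epochs=2000, lr=0.05):
    global b2
    for _ in range(epochs):
        for x, y in samples:
            yhat, h = forward(x)
            err = yhat - y
            for j in range(H):
                grad_h = err * w2[j] * (1 - h[j] ** 2)
                for i in range(len(x)):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
                w2[j] -= lr * err * h[j]
            b2 -= lr * err

# Synthetic data: normalized "speed" rises with rpm, falls with wave height.
data = [([0.2, r, hgt, 0.5, 0.1], (10 + 5 * r - 2 * hgt) / 20)
        for r in (0.0, 0.5, 1.0) for hgt in (0.0, 0.5, 1.0)]
train(data)
print(forward([0.2, 1.0, 0.0, 0.5, 0.1])[0])  # close to 0.75 = (10 + 5 - 0)/20
```

The sensitivity analysis in the paper corresponds to retraining the same network with the engine-status inputs (torque, rpm) removed and comparing the resulting accuracy.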


2021 ◽  
pp. 1-11
Author(s):  
Wang Songyun

With the development of the social economy and advances in science and technology, digital video on the Internet is growing rapidly and has become a new force driving the development of the times. Most of these videos sit in storage, which poses a great challenge to system research and development. The reader service system is an important part of library service: the library uses it to collect information resources, not merely to deliver service and work. Combining video from library services with an analysis of the video retrieval and video software requirements of the digital library, this paper puts forward the design goals and conception of video search and lays a foundation for it. Video retrieval experiments are then carried out step by step on the digital library's video data. The experimental results indicate that the enhanced dynamic clustering algorithm can keep pace with the growing number of videos and the complexity of the images.


Genetics ◽  
2001 ◽  
Vol 159 (2) ◽  
pp. 699-713
Author(s):  
Noah A Rosenberg ◽  
Terry Burke ◽  
Kari Elo ◽  
Marcus W Feldman ◽  
Paul J Freidlin ◽  
...  

Abstract We tested the utility of genetic cluster analysis in ascertaining population structure of a large data set for which population structure was previously known. Each of 600 individuals representing 20 distinct chicken breeds was genotyped for 27 microsatellite loci, and individual multilocus genotypes were used to infer genetic clusters. Individuals from each breed were inferred to belong mostly to the same cluster. The clustering success rate, measuring the fraction of individuals that were properly inferred to belong to their correct breeds, was consistently ~98%. When markers of highest expected heterozygosity were used, genotypes that included at least 8–10 highly variable markers from among the 27 markers genotyped also achieved >95% clustering success. When 12–15 highly variable markers and only 15–20 of the 30 individuals per breed were used, clustering success was at least 90%. We suggest that in species for which population structure is of interest, databases of multilocus genotypes at highly variable markers should be compiled. These genotypes could then be used as training samples for genetic cluster analysis and to facilitate assignments of individuals of unknown origin to populations. The clustering algorithm has potential applications in defining the within-species genetic units that are useful in problems of conservation.
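The "clustering success rate" can be made concrete: map each inferred cluster to the breed most represented in it, then score the fraction of individuals whose cluster maps back to their true breed. The breed labels below are illustrative; the actual study used 20 breeds and 27 microsatellite loci.

```python
from collections import Counter

# Sketch of the clustering success rate: fraction of individuals whose
# inferred cluster corresponds to their true breed (majority-vote mapping).
def clustering_success(true_breeds, inferred_clusters):
    majority = {}
    for c in set(inferred_clusters):
        members = [b for b, k in zip(true_breeds, inferred_clusters) if k == c]
        majority[c] = Counter(members).most_common(1)[0][0]
    hits = sum(majority[c] == b for b, c in zip(true_breeds, inferred_clusters))
    return hits / len(true_breeds)

breeds   = ["Leghorn"] * 5 + ["Silkie"] * 5
clusters = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]   # one Leghorn mis-clustered
print(clustering_success(breeds, clusters))  # 0.9
```

In the study this rate stayed near 98% with all 27 markers and above 90–95% with well-chosen subsets of highly variable markers.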


2021 ◽  
Vol 11 (22) ◽  
pp. 10596
Author(s):  
Chung-Hong Lee ◽  
Hsin-Chang Yang ◽  
Yenming J. Chen ◽  
Yung-Lin Chuang

Recently, detecting real-world events in real time from Twitter messages through algorithmic computation has emerged as a new paradigm in the field of data science applications. During a high-impact event, people may want the latest information about the development of the event so that they can better understand the situation and its possible trends for decision making. However, in emergencies, governments or enterprises are often unable to notify people in time for early warning and risk avoidance. A sensible solution is to integrate real-time event monitoring and intelligence gathering functions into their decision support systems. Such a system can provide real-time event summaries, updated whenever important new events are detected. Therefore, in this work, we combine a previously developed Twitter-based real-time event detection algorithm with pre-trained language models for summarizing emergent events. We used an online text-stream clustering algorithm with a self-adaptive method to gather Twitter data and detect emerging events. Subsequently, we used the XSum data set with a pre-trained language model, namely the T5 model, to train the summarization model. The Rouge metrics were used to compare the summary performance of various models. We then used the trained model to summarize the incoming Twitter data set for experimentation. In particular, we provide a real-world case study, the COVID-19 pandemic event, to verify the applicability of the proposed method. Finally, we conducted a survey in which human judges assessed the quality of the generated example summaries. The case study and experimental results demonstrate that our summarization method gives users a feasible way to quickly understand updates in specific event intelligence based on a real-time summary of the event story.
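The event-detection front end can be sketched as threshold-based online text-stream clustering: each incoming message joins the most similar existing cluster or opens a new one. The similarity threshold and messages below are illustrative assumptions; the T5 summarization and Rouge scoring stages are not reproduced.

```python
import math
from collections import Counter

# Minimal threshold-based online text-stream clustering sketch.
def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def stream_cluster(messages, threshold=0.5):
    clusters = []   # each cluster keeps a term-frequency centroid
    labels = []
    for msg in messages:
        tf = Counter(msg.lower().split())
        best, best_sim = None, 0.0
        for i, centroid in enumerate(clusters):
            sim = cosine(tf, centroid)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= threshold:
            clusters[best].update(tf)   # absorb message into the cluster
            labels.append(best)
        else:
            clusters.append(tf)         # open a new event cluster
            labels.append(len(clusters) - 1)
    return labels

msgs = ["new covid variant detected", "covid variant spreading fast",
        "earthquake hits coastal city"]
print(stream_cluster(msgs))
```

Each resulting cluster is a candidate event whose messages would then be fed to the summarization model.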


2021 ◽  
Author(s):  
ElMehdi SAOUDI ◽  
Said Jai Andaloussi

Abstract With the rapid growth of the volume of video data and the development of multimedia technologies, it has become necessary to accurately and quickly browse and search through information stored in large multimedia databases. For this purpose, content-based video retrieval (CBVR) has become an active area of research over the last decade. In this paper, we propose a content-based video retrieval system providing similar videos from a large multimedia data set based on a query video. The approach uses vector motion-based signatures to describe the visual content and uses machine learning techniques to extract key frames for rapid browsing and efficient video indexing. We implemented the proposed approach on both a single machine and a real-time distributed cluster to evaluate the real-time performance aspect, especially when the number and size of videos are large. Experiments performed on various benchmark action and activity recognition data sets reveal the effectiveness of the proposed method in both accuracy and processing time compared to state-of-the-art methods.
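Key-frame extraction can be sketched as change detection on per-frame descriptors: keep a frame only when its descriptor has moved far enough from the last kept frame. The threshold and toy descriptors below are hypothetical; the paper's motion-vector signatures and learned extraction are more elaborate.

```python
# Sketch of key-frame selection by change detection on frame descriptors.
def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def key_frames(descriptors, threshold=0.5):
    """Keep a frame when its descriptor moves far enough from the last key frame."""
    keys = [0]
    for i in range(1, len(descriptors)):
        if l1(descriptors[i], descriptors[keys[-1]]) >= threshold:
            keys.append(i)
    return keys

# Toy per-frame descriptors (e.g. tiny normalized color histograms).
frames = [(1.0, 0.0), (0.9, 0.1), (0.4, 0.6), (0.35, 0.65), (0.0, 1.0)]
print(key_frames(frames))  # indices of the selected key frames
```

Indexing only the selected key frames is what makes browsing and distributed matching tractable when the video collection is large.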


2011 ◽  
pp. 24-32 ◽  
Author(s):  
Nicoleta Rogovschi ◽  
Mustapha Lebbah ◽  
Younès Bennani

Most traditional clustering algorithms are limited to data sets that contain either continuous or categorical variables. However, data sets with mixed types of variables are commonly used in the data mining field. In this paper we introduce a weighted self-organizing map for the clustering, analysis, and visualization of mixed (continuous/binary) data. The weights and prototypes are learned simultaneously, ensuring an optimized data clustering. The higher a variable's weight, the more the clustering algorithm takes into account the information carried by that variable. The learning of these topological maps is combined with a weighting process over the different variables, computing weights that influence the quality of the clustering. We illustrate the power of this method with data sets taken from a public data set repository: a handwritten digit data set, the Zoo data set, and three other mixed data sets. The results show a good quality of the topological ordering and homogeneous clustering.
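The core idea of variable weighting over mixed data can be shown with a weighted distance that treats continuous and binary components differently; the weights and feature types below are illustrative assumptions, not the paper's learned values.

```python
# Sketch of a weighted mixed-type distance: continuous parts use a weighted
# squared difference, binary parts a weighted mismatch (illustrative values).
def weighted_mixed_distance(x, y, weights, is_binary):
    d = 0.0
    for xi, yi, w, binary in zip(x, y, weights, is_binary):
        if binary:
            d += w * (xi != yi)          # weighted Hamming term
        else:
            d += w * (xi - yi) ** 2      # weighted squared Euclidean term
    return d

x = [1.0, 0.5, 1, 0]        # two continuous features, two binary flags
y = [1.5, 0.5, 0, 0]
weights = [2.0, 1.0, 0.5, 0.5]
is_binary = [False, False, True, True]
print(weighted_mixed_distance(x, y, weights, is_binary))  # 2*0.25 + 0 + 0.5 + 0 = 1.0
```

In the weighted self-organizing map, such a distance drives both the best-matching-unit search and the prototype updates, with the weights themselves adapted during learning.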


Risks ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 204
Author(s):  
Chamay Kruger ◽  
Willem Daniel Schutte ◽  
Tanja Verster

This paper proposes a methodology that utilises model performance as a metric to assess the representativeness of external or pooled data when it is used by banks in regulatory model development and calibration. There is currently no formal methodology to assess representativeness. The paper provides a review of existing regulatory literature on the requirements of assessing representativeness and emphasises that both qualitative and quantitative aspects need to be considered. We present a novel methodology, apply it to two case studies, and compare it with the Multivariate Prediction Accuracy Index. The first case study investigates whether a pooled data source from Global Credit Data (GCD) is representative when considering the enrichment of internal data with pooled data in the development of a regulatory loss given default (LGD) model. The second case study differs from the first by illustrating which other countries in the pooled data set could be representative when enriching internal data during the development of an LGD model. Using these case studies as examples, our proposed methodology provides users with a generalised framework to identify subsets of the external data that are representative of their country's or bank's data, making the results general and universally applicable.
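The performance-as-representativeness idea can be sketched as: fit a model on internal data, score it on each external subset, and flag subsets where performance drops sharply. The one-feature threshold "model", toy data, and cutoff below are all hypothetical stand-ins for a bank's LGD model and acceptance criterion.

```python
# Illustrative sketch: flag external subsets as representative when a model
# fitted on internal data still performs well on them (toy model and cutoff).
def fit_threshold(samples):
    """One-feature 'model': pick the cutoff minimizing training error."""
    best_t, best_err = None, float("inf")
    for t, _ in samples:
        err = sum((x >= t) != y for x, y in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(samples, t):
    return sum((x >= t) == y for x, y in samples) / len(samples)

internal = [(0.1, 0), (0.2, 0), (0.7, 1), (0.9, 1)]
external = {"country_A": [(0.15, 0), (0.8, 1)],   # similar pattern to internal
            "country_B": [(0.8, 0), (0.1, 1)]}    # inverted pattern
t = fit_threshold(internal)
representative = {c: accuracy(s, t) >= 0.75 for c, s in external.items()}
print(representative)
```

A real application would replace the toy classifier with the bank's LGD model and the accuracy cutoff with a formally justified performance tolerance.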


2019 ◽  
Author(s):  
Lin Fei ◽  
Yang Yang ◽  
Wang Shihua ◽  
Xu Yudi ◽  
Ma Hong

Unreasonable division of public bicycle dispatching areas seriously affects the operational efficiency of a public bicycle system. To solve this problem, this paper proposes an improved community discovery algorithm based on multi-objective optimization (CDoMO). The data set is preprocessed into lease/return relationships, from which a similarity matrix is calculated; the community discovery algorithm Fast Unfolding is then executed on the matrix to obtain a scheduling scheme. For the resulting scheme, the workload indicators (scheduled distance, number of sites, and number of scheduled bicycles) are adjusted to maximize overall benefit, and the entire process is continuously optimized by the multi-objective optimization algorithm NSGA-II. The experimental results show that, compared with a clustering algorithm and a plain community discovery algorithm, the method can shorten the estimated scheduling distance by 20%-50% and can effectively balance the scheduling workload of each area. The method can provide theoretical support for public bicycle dispatching departments and improve the efficiency of the public bicycle dispatching system.
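The preprocessing step can be sketched as turning lease/return records into a station-by-station similarity matrix of symmetrized trip counts; station names and trips below are toy values, and the Fast Unfolding and NSGA-II stages are not shown.

```python
from collections import Counter

# Sketch: build a station similarity matrix from lease/return trip records.
def similarity_matrix(trips, stations):
    flows = Counter(trips)            # (lease_station, return_station) -> count
    idx = {s: i for i, s in enumerate(stations)}
    n = len(stations)
    sim = [[0] * n for _ in range(n)]
    for (a, b), c in flows.items():
        i, j = idx[a], idx[b]
        if i != j:                    # ignore round trips at a single station
            sim[i][j] += c
            sim[j][i] += c            # symmetrize: flow in either direction counts
    return sim

stations = ["S1", "S2", "S3"]
trips = [("S1", "S2"), ("S1", "S2"), ("S2", "S1"), ("S3", "S3")]
print(similarity_matrix(trips, stations))
```

Community detection on this matrix groups stations that exchange many bicycles, which is exactly what makes a dispatching area internally rebalanceable.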

