crowd counting
Recently Published Documents

TOTAL DOCUMENTS: 554 (FIVE YEARS: 173)
H-INDEX: 31 (FIVE YEARS: 1)

2022, Vol 108, pp. 104563
Author(s): Shihui Zhang, Xiaoxiao Zhang, He Li, Huan He, Dandan Song, ...

Author(s): Chenfeng Xu, Dingkang Liang, Yongchao Xu, Song Bai, Wei Zhan, ...

2022, pp. 1-1
Author(s): Mingjie Wang, Hao Cai, Xianfeng Han, Jun Zhou, Minglun Gong

Author(s): Zhengxin Guo, Fu Xiao, Biyun Sheng, Lijuan Sun, Shui Yu

2022, Vol 41 (1), pp. 255-269
Author(s): Wei Zhuang, Yixian Shen, Chunming Gao, Lu Li, Haoran Sang, ...

Electronics, 2021, Vol 11 (1), pp. 31
Author(s): Jianqiang Xu, Haoyu Zhao, Weidong Min, Yi Zou, Qiyan Fu

Crowd gathering detection plays an important role in the security supervision of public areas. Existing image-processing-based methods are not robust in complex scenes, and deep-learning-based gathering-detection methods focus mainly on network design, ignoring the inner features of the crowd gathering action. To alleviate these problems, this work proposes a novel Detection of Group Gathering (DGG) framework that combines a deep-learning-based crowd counting method with statistical analysis to detect crowd gathering. The DGG contains three parts: Detecting Candidate Frame of Gathering (DCFG), Gathering Area Detection (GAD), and Gathering Judgement (GJ). The DCFG uses the crowd counting method to find the frame in a video with the maximum number of people; this frame indicates that the crowd has gathered, and the specific gathering area is detected next. The GAD then uses a sliding search box to detect the local area with the maximum crowd density in that frame. This local area captures the inner feature of the gathering action and indicates where the crowd is gathering, denoted by grid coordinates in the video frame. Based on the results of the DCFG and the GAD, the GJ analyzes the statistical relationship between the local area and the global area to find a stable pattern for the crowd gathering action. Experiments on benchmarks show that the proposed DGG represents the gathering feature robustly and achieves high detection accuracy. The DGG therefore has potential applications in the social security and smart city domains.
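The GAD step described in the abstract amounts to an exhaustive sliding-box search for the densest local region of a per-frame crowd-density map. A minimal sketch of that idea (the function name, box size, and stride are illustrative assumptions, not the paper's actual implementation or parameters):

```python
import numpy as np

def find_gathering_area(density_map, box_h, box_w, stride=1):
    """Slide a fixed-size box over a crowd-density map and return the
    grid coordinates (top-left row, col) of the densest local area,
    together with the summed density inside that box."""
    H, W = density_map.shape
    best_sum, best_pos = -1.0, (0, 0)
    for r in range(0, H - box_h + 1, stride):
        for c in range(0, W - box_w + 1, stride):
            s = density_map[r:r + box_h, c:c + box_w].sum()
            if s > best_sum:
                best_sum, best_pos = s, (r, c)
    return best_pos, best_sum
```

Summing the density map inside the box approximates the number of people it contains, so the box with the maximum sum marks the candidate gathering area that the GJ step then compares against the global count.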


2021, Vol 11 (24), pp. 12037
Author(s): Xiaoyu Hou, Jihui Xu, Jinming Wu, Huaiyu Xu

Counting people in crowd scenarios is widely needed in drone inspection, video surveillance, and public safety applications. Supervised crowd-counting algorithms have improved significantly, but they rely on large amounts of manual annotation. In real-world scenarios, varying camera angles, exposures, and mounting heights, complex backgrounds, and limited annotated data prevent supervised methods from working satisfactorily, and many of them also suffer from overfitting. To address these issues, we train on synthetic crowd data and investigate how to transfer the learned information to real-world datasets while reducing the need for manual annotation. CNN-based crowd-counting algorithms usually consist of feature extraction, density estimation, and count regression. To improve domain adaptation in feature extraction, we propose an adaptive domain-invariant feature extraction module. Inspired by recent work on meta-learning, we also present a dynamic-β MAML algorithm that generates density maps in unseen novel scenes and makes the density estimation model more general. Finally, a counting-map refiner converts the coarse density map into a fine density map, from which the crowd count is regressed. Extensive experiments show that our domain adaptation and model generalization methods effectively suppress domain gaps and produce accurate density maps in cross-domain crowd-counting scenarios, outperforming current state-of-the-art techniques.
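The dynamic-β MAML idea builds on the standard MAML inner/outer loop: adapt a model to each scene (task) with a gradient step on its support set, then update the shared initialization from query-set gradients. A toy first-order sketch on a linear regressor (the task construction, the learning rates alpha and beta, and the fixed beta in place of the paper's dynamic schedule are all simplifying assumptions):

```python
import numpy as np

def loss_grad(w, X, y):
    """Gradient of mean squared error for a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def maml_step(w, tasks, alpha=0.05, beta=0.1):
    """One first-order MAML outer step: take an inner gradient step on each
    task's support set, then average the query-set gradients evaluated at
    the adapted weights.  beta is fixed here; the paper adapts it dynamically."""
    meta_grad = np.zeros_like(w)
    for Xs, ys, Xq, yq in tasks:
        w_adapted = w - alpha * loss_grad(w, Xs, ys)   # inner-loop adaptation
        meta_grad += loss_grad(w_adapted, Xq, yq)      # first-order approximation
    return w - beta * meta_grad / len(tasks)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0])   # shared structure across toy "scenes"
    def make_task():
        Xs, Xq = rng.normal(size=(20, 2)), rng.normal(size=(20, 2))
        return Xs, Xs @ w_true, Xq, Xq @ w_true
    tasks = [make_task() for _ in range(4)]
    w = np.zeros(2)
    for _ in range(300):
        w = maml_step(w, tasks)
    print(w)  # the meta-learned initialization should approach w_true
```

In the paper's setting the "tasks" are crowd scenes with different viewpoints and densities, and the meta-learned initialization is what lets the density estimator adapt to unseen novel scenes with little data.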

