Unfolding Cubes: Nets, Packings, Partitions, Chords

10.37236/9796 ◽  
2020 ◽  
Vol 27 (4) ◽  
Author(s):  
Kristin DeSplinter ◽  
Satyan Devadoss ◽  
Jordan Readyhough ◽  
Bryce Wimberly

We show that every ridge unfolding of an $n$-cube is without self-overlap, yielding a valid net. The results are obtained by developing machinery that translates cube unfolding into combinatorial frameworks. Moreover, the geometry of the bounding boxes of these cube nets is classified using integer partitions, and the combinatorics of path unfoldings is described through the lens of chord diagrams.
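As a toy illustration of the partition side of this classification, here is a minimal sketch (not the authors' machinery) that enumerates the integer partitions of $n$ in Python:

```python
def partitions(n, max_part=None):
    """Yield the integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

# The number of partitions grows quickly with n:
print(len(list(partitions(4))))  # 5 partitions: 4, 3+1, 2+2, 2+1+1, 1+1+1+1
```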

2020 ◽  
Vol 71 (7) ◽  
pp. 868-880
Author(s):  
Nguyen Hong-Quan ◽  
Nguyen Thuy-Binh ◽  
Tran Duc-Long ◽  
Le Thi-Lan

Along with the strong development of camera networks, video analysis systems have become more and more popular and have been applied in various practical applications. In this paper, we focus on the person re-identification (person ReID) task, a crucial step of video analysis systems. The purpose of person ReID is to associate multiple images of a given person moving through a non-overlapping camera network. Many efforts have been devoted to person ReID. However, most studies on person ReID deal only with well-aligned bounding boxes that are detected manually and considered perfect inputs for person ReID. In fact, when building a fully automated person ReID system, the quality of the two preceding steps, person detection and tracking, may strongly affect person ReID performance. The contributions of this paper are two-fold. First, a unified framework for person ReID based on deep learning models is proposed. In this framework, a deep neural network for person detection is coupled with a deep-learning-based tracking method. In addition, features extracted from an improved ResNet architecture are proposed for person representation to achieve higher ReID accuracy. Second, our self-built dataset is introduced and employed to evaluate all three steps of the fully automated person ReID framework.
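The core ReID step, associating a query image with gallery detections by comparing appearance features, can be sketched as follows (a minimal illustration with generic feature vectors, not the paper's improved-ResNet features):

```python
import numpy as np

def reid_rank(query_feat, gallery_feats):
    """Rank gallery entries by cosine similarity to a query feature.

    query_feat: (d,) appearance descriptor of the probe image.
    gallery_feats: (n, d) descriptors of candidate detections.
    Returns gallery indices, best match first.
    """
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                 # cosine similarity to each gallery entry
    return np.argsort(-sims)     # indices in order of descending similarity

# Toy example: gallery entry 1 points almost the same way as the query.
query = np.array([1.0, 0.0])
gallery = np.array([[0.0, 1.0], [2.0, 0.1], [-1.0, 0.0]])
print(reid_rank(query, gallery))  # entry 1 ranks first
```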


Author(s):  
Andrew R. Buck ◽  
Derek T. Anderson ◽  
James M. Keller ◽  
Robert H. Luke ◽  
Grant Scott

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1299
Author(s):  
Honglin Yuan ◽  
Tim Hoogenkamp ◽  
Remco C. Veltkamp

Deep learning has achieved great success on robotic vision tasks. However, compared with other vision-based tasks, collecting a representative and sufficiently large training set for six-dimensional (6D) object pose estimation is inherently difficult. In this paper, we propose the RobotP dataset, consisting of commonly used objects, for benchmarking 6D object pose estimation. To create the dataset, we apply a 3D reconstruction pipeline to produce high-quality depth images, ground truth poses, and 3D models for well-selected objects. Based on the generated data, we then produce object segmentation masks and two-dimensional (2D) bounding boxes automatically. To further enrich the data, we synthesize a large number of photo-realistic color-and-depth image pairs with ground truth 6D poses. Our dataset is freely distributed to research groups via the Shape Retrieval Challenge benchmark on 6D pose estimation. Based on our benchmark, different learning-based approaches are trained and tested on the unified dataset. The evaluation results indicate that there is considerable room for improvement in 6D object pose estimation, particularly for objects with dark colors, and that photo-realistic images help increase the performance of pose estimation algorithms.


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Hiranya Jayakody ◽  
Paul Petrie ◽  
Hugo Jan de Boer ◽  
Mark Whitty

Abstract
Background: Stomata analysis using microscope imagery provides important insight into plant physiology, health, and the surrounding environmental conditions. Plant scientists can now conduct automated high-throughput analysis of stomata in microscope data; however, existing detection methods are sensitive to the appearance of stomata in the training images, limiting their general applicability. In addition, existing methods only generate bounding boxes around detected stomata, requiring users to implement additional image processing steps to study stomata morphology. In this paper, we develop a fully automated, robust stomata detection algorithm that can also identify individual stomata boundaries regardless of the plant species, sample collection method, imaging technique, and magnification level.
Results: The proposed solution consists of three stages. First, the input image is pre-processed to remove any colour-space biases arising from different sample collection and imaging techniques. Then, a Mask R-CNN is applied to estimate individual stomata boundaries; the feature pyramid network embedded in the Mask R-CNN is used to identify stomata at different scales. Finally, a statistical filter is applied to the Mask R-CNN output to reduce the number of false positives generated by the network. The algorithm was tested using 16 datasets from 12 sources, containing over 60,000 stomata. For the first time in this domain, the proposed solution was tested against 7 microscope datasets never seen by the algorithm to show its generalisability. Results indicate that the proposed approach can detect stomata with a precision, recall, and F-score of 95.10%, 83.34%, and 88.61%, respectively. A separate test comparing estimated stomata boundary values with manually measured data showed that the proposed method has an IoU score of 0.70, a 7% improvement over the bounding-box approach.
Conclusions: The proposed method shows robust performance across multiple microscope image datasets of different quality and scale. This generalised stomata detection algorithm allows plant scientists to conduct stomata analysis while eliminating the need to re-label and re-train for each new dataset. The open-source code shared with this project can be deployed directly in Google Colab or any other TensorFlow environment.
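The IoU figure quoted above can be computed per stoma as follows (a minimal sketch for binary masks; the paper's evaluation details may differ):

```python
import numpy as np

def mask_iou(pred, truth):
    """Intersection-over-union of two boolean masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 0.0

# Toy example: two overlapping 2x2 squares inside a 4x4 image.
a = np.zeros((4, 4)); a[0:2, 0:2] = 1
b = np.zeros((4, 4)); b[1:3, 1:3] = 1
print(mask_iou(a, b))  # 1 pixel intersection, 7 pixel union -> ~0.143
```

The same function scores a predicted boundary mask against a hand-traced one, which is how a boundary estimate can beat a bounding box: the box includes corner pixels the stoma never covers.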


2020 ◽  
Vol 2020 (11) ◽  
Author(s):  
Yiyang Jia ◽  
Jacobus J. M. Verbaarschot

Abstract We analyze the spectral properties of a d-dimensional HyperCubic (HC) lattice model originally introduced by Parisi. The U(1) gauge links of this model give rise to a magnetic flux of constant magnitude ϕ but random orientation through the faces of the hypercube. The HC model, which can also be written as a model of 2d interacting Majorana fermions, has a spectral flow reminiscent of the Maldacena-Qi (MQ) model, and its spectrum at ϕ = 0 actually coincides with that of the coupling term of the MQ model. As was already shown by Parisi, at leading order in 1/d, the spectral density of this model is given by the density function of the Q-Hermite polynomials, which is also the spectral density of the double-scaled Sachdev-Ye-Kitaev model. Parisi demonstrated this by mapping the moments of the HC model to Q-weighted sums on chord diagrams. We point out that the subleading moments of the HC model can also be mapped to weighted sums on chord diagrams, in a manner that descends from the leading moments. The HC model has a magnetic inversion symmetry that depends on both the magnitude and the orientation of the magnetic flux through the faces of the hypercube. The spectrum at fixed quantum number of this symmetry exhibits a transition from regular spectra at ϕ = 0 to chaotic spectra, with spectral statistics given by the Gaussian Unitary Ensemble (GUE), at larger values of ϕ. For small magnetic flux, the ground state is gapped and close to a Thermofield Double (TFD) state.
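The GUE comparison above refers to nearest-neighbour level-spacing statistics. A minimal numerical sketch (not tied to the HC model itself) samples a GUE matrix and computes its normalised spacings, which exhibit the level repulsion characteristic of chaotic spectra:

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_spacings(n):
    """Sample an n x n GUE matrix and return its normalised level spacings."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2             # Hermitian matrix, GUE up to scale
    levels = np.sort(np.linalg.eigvalsh(h))
    s = np.diff(levels)
    return s / s.mean()                  # mean spacing normalised to 1

s = gue_spacings(400)
# Chaotic (GUE) spectra show level repulsion, P(s) ~ s^2 for small s,
# so very small spacings are rare; regular spectra would show no such
# suppression near s = 0.
print(s.mean(), (s < 0.05).mean())
```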


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing an RGB image and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that detection accuracy with the PCD BEV representations integrated is superior to that obtained when only an RGB camera is used. In addition, robustness is improved: detection accuracy is significantly enhanced even when the target objects are partially occluded when viewed from the front, which demonstrates that the proposed algorithm outperforms the conventional RGB-based model.
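Non-maximum suppression, used above to fuse the parallel YOLO outputs, can be sketched generically as follows (the standard greedy formulation, not the authors' exact implementation):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)                 # highest-scoring remaining box
        keep.append(best)
        # Drop every remaining box that overlaps it too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the two overlapping boxes collapse to one
```

Running detections from both the RGB and the BEV branch through one suppression pass like this keeps a single box per object even when both branches fire.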


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2939
Author(s):  
Yong Hong ◽  
Jin Liu ◽  
Zahid Jahangir ◽  
Sheng He ◽  
Qing Zhang

This paper provides an efficient way of addressing the problem of detecting or estimating the 6-Dimensional (6D) pose of objects from an RGB image. A quaternion is used to define an object's three-dimensional pose, but q and -q represent the same pose while the L2 loss between them is very large. Therefore, we define a new quaternion pose loss function to solve this problem. Based on this, we designed a new convolutional neural network named Q-Net to estimate an object's pose. Since the quaternion output is a unit vector, a normalization layer is added in Q-Net to keep the output pose on the four-dimensional unit sphere. We propose a new algorithm, called the Bounding Box Equation, to obtain the 3D translation quickly and effectively from 2D bounding boxes. The algorithm uses an entirely new way of assessing the 3D rotation (R) and 3D translation (t) from only one RGB image. This method can upgrade any traditional 2D-box prediction algorithm to a 3D prediction model. We evaluated our model using the LineMod dataset, and experiments have shown that our methodology is accurate and efficient in terms of L2 loss and computational time.
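The sign ambiguity described above (q and -q encode the same rotation) is commonly handled by taking the smaller of the two L2 distances; the following is a minimal sketch of such a sign-invariant loss, not necessarily the paper's exact formulation:

```python
import numpy as np

def quat_pose_loss(q_pred, q_true):
    """Sign-invariant L2 loss between unit quaternions.

    Because q and -q represent the same 3D rotation, compare the
    prediction against both signs and keep the smaller distance.
    """
    q_pred = q_pred / np.linalg.norm(q_pred)   # normalization-layer analogue
    d_plus = np.linalg.norm(q_pred - q_true)
    d_minus = np.linalg.norm(q_pred + q_true)
    return min(d_plus, d_minus)

q = np.array([0.0, 0.0, 0.0, 1.0])
print(quat_pose_loss(q, q))    # identical poses -> 0.0
print(quat_pose_loss(-q, q))   # opposite sign, same pose -> 0.0
```

A plain L2 loss would assign the second case the maximal distance 2, penalising a perfect prediction.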


2021 ◽  
Vol 2021 (2) ◽  
Author(s):  
Riccardo Conti ◽  
Davide Masoero

Abstract We study the large momentum limit of the monster potentials of Bazhanov-Lukyanov-Zamolodchikov, which, according to the ODE/IM correspondence, should correspond to excited states of the Quantum KdV model. We prove that the poles of these potentials asymptotically condense about the complex equilibria of the ground state potential, and we express the leading correction to this asymptotics in terms of the roots of Wronskians of Hermite polynomials. This allows us to associate to each partition of N a unique monster potential with N roots, of which we compute the spectrum. As a consequence, we prove, up to a few mathematical technicalities, that for a fixed integer N the number of monster potentials with N roots coincides with the number of integer partitions of N, which is the dimension of the level-N subspace of the quantum KdV model, in striking accordance with the ODE/IM correspondence.
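The counting statement at the end, monster potentials with N roots matching integer partitions of N, can be checked numerically for small N with the standard partition-counting recurrence (an illustration, not part of the proof):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k=None):
    """Number of partitions of n into parts of size at most k."""
    if k is None:
        k = n
    if n == 0:
        return 1
    if n < 0 or k == 0:
        return 0
    # Either use a part of size k, or forbid parts of size k entirely.
    return p(n - k, k) + p(n, k - 1)

# Dimensions of the level-N subspaces for N = 1..8:
print([p(n) for n in range(1, 9)])  # [1, 2, 3, 5, 7, 11, 15, 22]
```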

