Finding Local Anomalies in Very High Dimensional Space

Author(s):  
Timothy de Vries ◽  
Sanjay Chawla ◽  
Michael E. Houle
1983 ◽  
Vol 35 (1) ◽  
pp. 117-130 ◽  
Author(s):  
E. S. Barnes ◽  
N. J. A. Sloane

1. Introduction. In this paper we give several general constructions for lattice packings of spheres in real n-dimensional space $\mathbf{R}^n$ and complex space $\mathbf{C}^n$. These lead to denser lattice packings than any previously known in $\mathbf{R}^{36}$, $\mathbf{R}^{64}$, $\mathbf{R}^{80}$, …, $\mathbf{R}^{128}$, …. A sequence of lattices is constructed in $\mathbf{R}^n$ for $n = 24m \leq 98328$ (where $m$ is an integer), for which the density $\Delta$ satisfies $\log_2 \Delta \approx -(1.25\ldots)\,n$, and another sequence in $\mathbf{R}^n$ for $n = 2^m$ ($m$ any integer). The latter appear to be the densest lattices known in very high dimensional space. (See, however, the Remark at the end of this paper.) In dimensions around $2^{16}$ the best lattices found are about $2^{131000}$ times as dense as any previously known. Minkowski proved in 1905 (see [20] and Eq. (23) below) that lattices exist with $\log_2 \Delta > -n$ as $n \to \infty$, but no infinite family of lattices with this density has yet been constructed.
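A back-of-envelope check of the claimed improvement factor (this arithmetic is ours, not the paper's): in dimension $n = 2^{16} = 65536$, a density gain of about $2^{131000}$ corresponds to improving the coefficient of $n$ in $\log_2 \Delta$ by roughly $131000/65536 \approx 2$.

```latex
% Order-of-magnitude check (ours, not from the paper): a density ratio
% of 2^{131000} in dimension n = 2^{16} = 65536 is a shift of about 2
% in the coefficient of n in log_2(Delta).
\[
  \frac{\Delta_{\mathrm{new}}}{\Delta_{\mathrm{old}}} \approx 2^{131000}
  \;\Longleftrightarrow\;
  \log_2 \Delta_{\mathrm{new}} - \log_2 \Delta_{\mathrm{old}}
  \approx 131000 \approx 2n,
  \qquad n = 2^{16}.
\]
```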


2021 ◽  
pp. 1-12
Author(s):  
Jian Zheng ◽  
Jianfeng Wang ◽  
Yanping Chen ◽  
Shuping Chen ◽  
Jingjin Chen ◽  
...  

Neural networks can approximate data because they are composed of many compact non-linear layers. In high-dimensional space, however, the curse of dimensionality makes the data distribution sparse, so a fixed dataset may fail to provide sufficient information; approximating data in high-dimensional space is therefore even harder for neural networks. To address this issue, two deviations are derived from the Lipschitz condition: the deviation of neural networks trained using high-dimensional functions, and the deviation of high-dimensional functions approximating data. The purpose is to improve the ability of neural networks to approximate data in high-dimensional space. Experimental results show that neural networks trained using high-dimensional functions outperform those trained directly on data when approximating data in high-dimensional space. We find that neural networks trained using high-dimensional functions are better suited to high-dimensional space than those trained on data, so there is no need to retain large amounts of data for training. Our findings also suggest that in high-dimensional space, tuning the hidden layers of a neural network has little positive effect on the precision of data approximation.
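The contrast the abstract draws, between training on a queryable high-dimensional function and training on a fixed sparse dataset, can be illustrated with a toy experiment. The sketch below is a hypothetical setup of ours, not the authors' method: the 1-Lipschitz target `f`, the dimension, the sample sizes and the scikit-learn MLP are all illustrative assumptions.

```python
# Toy illustration (not the authors' code): compare an MLP fit on a small
# fixed sample of a high-dimensional Lipschitz function with one fit on
# many fresh samples of the same function.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
d = 50                            # high enough that 200 points are sparse

w = rng.normal(size=d)
w /= np.linalg.norm(w)            # ||w|| = 1, so f below is 1-Lipschitz

def f(X):
    return np.sin(X @ w)          # smooth Lipschitz target

X_test = rng.uniform(-1, 1, size=(2000, d))
y_test = f(X_test)

# (a) "data" regime: a small fixed sample, sparse in R^50
X_small = rng.uniform(-1, 1, size=(200, d))
mlp_data = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
mlp_data.fit(X_small, f(X_small))

# (b) "function" regime: the target can be queried as densely as needed
X_large = rng.uniform(-1, 1, size=(20000, d))
mlp_fun = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
mlp_fun.fit(X_large, f(X_large))

for name, model in [("sparse data", mlp_data), ("function samples", mlp_fun)]:
    mse = np.mean((model.predict(X_test) - y_test) ** 2)
    print(f"{name:16s} test MSE = {mse:.4f}")
```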


2001 ◽  
Vol 24 (3) ◽  
pp. 305-320 ◽  
Author(s):  
Benoit Lemaire ◽  
Philippe Dessus

This paper presents Apex, a system that can automatically assess a student essay based on its content. It relies on Latent Semantic Analysis, a technique that represents the meaning of words as vectors in a high-dimensional space. By comparing an essay with the text of a given course on a semantic basis, our system measures how well the essay matches the course material. Various assessments are presented to the student regarding the topic, the outline and the coherence of the essay. Our experiments yield promising results.
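As a concrete illustration of the comparison step described in the abstract, here is a minimal sketch assuming a modern scikit-learn pipeline; the toy corpus and the choice of two latent dimensions are our own, and Apex's actual LSA implementation is not shown here.

```python
# Minimal LSA sketch in the spirit of Apex (illustrative, not the real system):
# build a semantic space from the course text, then score an essay against it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in course text, one entry per paragraph (a real course is far longer).
course_paragraphs = [
    "Latent semantic analysis represents word meaning as vectors.",
    "Singular value decomposition reduces the term matrix to few dimensions.",
    "Cosine similarity measures how close two semantic vectors are.",
    "Essays can be compared to course material on a semantic basis.",
]
essay = "The essay compares semantic vectors using cosine similarity."

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(course_paragraphs)

# LSA step: project the TF-IDF matrix into a low-rank semantic space.
svd = TruncatedSVD(n_components=2, random_state=0)
course_vecs = svd.fit_transform(X)
essay_vec = svd.transform(vectorizer.transform([essay]))

# Semantic match between the essay and each part of the course.
scores = cosine_similarity(essay_vec, course_vecs)[0]
print("per-paragraph match:", scores.round(2))
print("overall match score:", round(scores.mean(), 2))
```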

