Some Plethysm Results related to Foulkes' Conjecture

10.37236/1050 · 2006 · Vol 13 (1)
Author(s):  
Steven Sivek

We provide several classes of examples to show that Stanley's plethysm conjecture and a reformulation by Pylyavskyy, both concerning the ranks of certain matrices $K^{\lambda}$ associated with Young diagrams $\lambda$, are in general false. We also provide bounds on the rank of $K^{\lambda}$ by which it may be possible to show that the approach of Black and List to Foulkes' conjecture does not work in general. Finally, since Black and List's work concerns $K^{\lambda}$ for rectangular shapes $\lambda$, we suggest a constructive way to prove that $K^{\lambda}$ does not have full rank when $\lambda$ is a large rectangle.
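For context, a standard formulation of Foulkes' conjecture (not part of the abstract, included for orientation): for $a \le b$, the plethysm difference

\[
  h_b[h_a] - h_a[h_b] \;=\; \sum_{\lambda \vdash ab} c_\lambda\, s_\lambda
  \qquad\text{with all } c_\lambda \ge 0,
\]

or equivalently, $\mathrm{Sym}^b(\mathrm{Sym}^a V)$ contains $\mathrm{Sym}^a(\mathrm{Sym}^b V)$ as a $\mathrm{GL}(V)$-module.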

2020 · Vol 2020 (10) · pp. 310-1–310-7
Author(s):  
Khalid Omer ◽  
Luca Caucci ◽  
Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. Detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to that of optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve, the AUC: AUC = 1.0 for perfect detection and AUC = 0.5 for guessing. The Ideal Observer (IO) maximizes the AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. IO performance is invariant under any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that can degrade both sCNN and mCNN performance even in the limit of large quantities of training data. A subsequent invertible linear transform changes the images' correlation structure again and can improve this AUC. Stationary textures sampled from zero-mean Gaussian distributions with unequal covariances admit closed-form analytic expressions for the IO and for optimal linear compression. Linear compression is a mitigation technique for high-dimensional, low-sample-size (HDLSS) applications; by definition, compression can only maintain or decrease IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase the AUC from 0.56 to 0.93. The results indicate that the optimal compression ratio for a CNN depends on task difficulty, compression method, and the number of training images.
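A minimal numeric sketch of the closed-form IO for this Gaussian setting, and of its invariance under invertible linear transforms; the one-dimensional AR(1)-style Toeplitz covariances, the dimension n = 16, and the sample counts are illustrative assumptions, not the paper's experimental setup. Since both classes are zero mean, the IO log-likelihood ratio reduces to a quadratic form in the image.

import numpy as np

rng = np.random.default_rng(1)
n = 16  # illustrative texture dimension (the paper works with 2-D images)

def toeplitz_cov(n, rho):
    # Stationary AR(1)-style covariance: entry (i, j) is rho**|i - j|.
    i = np.arange(n)
    return rho ** np.abs(i[:, None] - i[None, :])

K0, K1 = toeplitz_cov(n, 0.3), toeplitz_cov(n, 0.6)  # unequal covariances
P0, P1 = np.linalg.inv(K0), np.linalg.inv(K1)        # precision matrices

def io_statistic(x, Pa, Pb):
    # Zero-mean Gaussian log-likelihood ratio, up to an additive constant:
    # 0.5 * x^T (Pa - Pb) x, a quadratic form since the means are equal.
    return 0.5 * np.einsum('...i,ij,...j', x, Pa - Pb, x)

m = 2000
x0 = rng.multivariate_normal(np.zeros(n), K0, size=m)
x1 = rng.multivariate_normal(np.zeros(n), K1, size=m)
t0, t1 = io_statistic(x0, P0, P1), io_statistic(x1, P0, P1)

# Empirical AUC: the fraction of (class-1, class-0) pairs ranked correctly.
auc = (t1[:, None] > t0[None, :]).mean()
print(f"IO AUC: {auc:.3f}")

# IO invariance: transform the images by an (almost surely) invertible T and
# recompute the IO with the transformed covariances; the statistic, and hence
# the AUC, is unchanged.
T = rng.standard_normal((n, n))
Q0 = np.linalg.inv(T @ K0 @ T.T)
Q1 = np.linalg.inv(T @ K1 @ T.T)
s0, s1 = io_statistic(x0 @ T.T, Q0, Q1), io_statistic(x1 @ T.T, Q0, Q1)
print(f"IO AUC after invertible transform: {(s1[:, None] > s0[None, :]).mean():.3f}")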


Author(s):  
Peter Spacek

Abstract. In this article we construct Laurent polynomial Landau–Ginzburg models for cominuscule homogeneous spaces. These Laurent polynomial potentials are defined on a particular algebraic torus inside the Lie-theoretic mirror model constructed for arbitrary homogeneous spaces in [Rie08]. The Laurent polynomial takes a shape similar to the one given in [Giv96] for projective complete intersections: it is the sum of the toric coordinates plus a quantum term. We also give a general method for enumerating the summands in the quantum term of the potential in terms of the quiver introduced in [CMP08], associated to the Langlands dual homogeneous space. This enumeration method generalizes the use of Young diagrams for Grassmannians and Lagrangian Grassmannians and can be defined type-independently. The Laurent polynomials obtained here coincide with the results obtained so far in [PRW16] and [PR13] for quadrics and Lagrangian Grassmannians. We also obtain new Laurent polynomial Landau–Ginzburg models for orthogonal Grassmannians, the Cayley plane and the Freudenthal variety.
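For a concrete instance of this shape, included here for illustration: Givental's potential from [Giv96] for projective space $\mathbb{P}^{n-1}$ (the simplest projective complete intersection) is

\[
  W \;=\; x_1 + x_2 + \cdots + x_{n-1} + \frac{q}{x_1 x_2 \cdots x_{n-1}}
  \qquad\text{on } (\mathbb{C}^{\times})^{n-1},
\]

the sum of the toric coordinates plus a single quantum term proportional to $q$.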


Author(s):  
Gerandy Brito ◽  
Ioana Dumitriu ◽  
Kameron Decker Harris

Abstract. We prove an analogue of Alon's spectral gap conjecture for random bipartite, biregular graphs. We use the Ihara–Bass formula to connect the non-backtracking spectrum to that of the adjacency matrix, employing the moment method to show there exists a spectral gap for the non-backtracking matrix. A by-product of our main theorem is that random rectangular zero-one matrices with fixed row and column sums are full rank with high probability. Finally, we illustrate applications to community detection, coding theory, and deterministic matrix completion.
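A quick empirical illustration of the by-product, not the paper's construction: the sketch below samples a zero-one matrix with constant row and column sums via a configuration-model-style rejection step (practical only for small degrees; the dimensions and degrees here are arbitrary illustrative choices) and checks that it has full rank.

import numpy as np

rng = np.random.default_rng(0)

def biregular_01_matrix(n_rows, n_cols, row_sum, col_sum, max_tries=1000):
    # Randomly match row "stubs" to column "stubs"; retry whenever a pair
    # is matched twice, since that would require a multi-edge (entry > 1).
    assert n_rows * row_sum == n_cols * col_sum, "stub counts must agree"
    row_stubs = np.repeat(np.arange(n_rows), row_sum)
    col_stubs = np.repeat(np.arange(n_cols), col_sum)
    for _ in range(max_tries):
        rng.shuffle(col_stubs)
        M = np.zeros((n_rows, n_cols), dtype=int)
        np.add.at(M, (row_stubs, col_stubs), 1)  # accumulate repeated pairs
        if M.max() == 1:  # simple bipartite graph: a genuine 0-1 matrix
            return M
    raise RuntimeError("no simple matching found in max_tries attempts")

M = biregular_01_matrix(60, 40, 2, 3)
print(M.sum(axis=1).min(), M.sum(axis=1).max())  # all row sums equal 2
print(M.sum(axis=0).min(), M.sum(axis=0).max())  # all column sums equal 3
print(np.linalg.matrix_rank(M), min(M.shape))    # full rank, w.h.p.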

