Picard–Fuchs Uniformization and Modularity of the Mirror Map

Charles F. Doran (2000). Vol. 212 (3), pp. 625–647.

S. Hosono, A. Klemm, S. Theisen, S.-T. Yau (1995). Vol. 433 (3), pp. 501–552.

Bong Lian, Shing-Tung Yau (2003). pp. 195–199.

Tristan Hübsch, Shing-Tung Yau (1992). Vol. 7 (35), pp. 3277–3289.

Each transversal degree-d hypersurface ℳ in a weighted projective space defines a Landau–Ginzburg orbifold whose superpotential equals the defining polynomial of ℳ. For a generic such ℳ with trivial canonical class, the degree-0 (mod d) subring of the Jacobian ring (that is, the (c, c)-ring of the Landau–Ginzburg orbifold) is shown to admit an SL(2, ℂ) action and the corresponding Lefschetz-type decomposition. This leads to a general definition of a “large complex structure” limit, the mirror of the “large volume” limit, and the mirror images on ⊕_q H^{3−q,q} of the Hodge *-operator, duality, and inner product on ⊕_q H^{q,q}.
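
To make the graded ring concrete, here is a minimal sketch (a standard textbook example, not taken from the paper): for the Fermat quintic W = x₀⁵ + … + x₄⁵ in ℙ⁴ (so d = 5), the Jacobian ideal is (x₀⁴, …, x₄⁴), a monomial basis of the Jacobian ring has all exponents ≤ 3, and counting monomials in degrees 0 (mod 5) recovers the Hodge numbers of the quintic threefold.

```python
from itertools import product

# Hodge numbers of the Fermat quintic from its Jacobian ring.
# For W = x0^5 + ... + x4^5 the Jacobian ideal is (x0^4, ..., x4^4),
# so the Jacobian ring has a monomial basis with exponents in 0..3.
# The graded piece of degree 5q in the degree-0 (mod 5) subring has
# dimension h^{3-q,q} of the quintic threefold (Griffiths residues).
d, n = 5, 5  # hypersurface degree and number of variables
for q in range(4):
    dim = sum(1 for a in product(range(d - 1), repeat=n) if sum(a) == d * q)
    print(f"q = {q}: dim of degree-{d * q} piece = {dim}")
# prints 1, 101, 101, 1, i.e. (h^{3,0}, h^{2,1}, h^{1,2}, h^{0,3})
```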


Maximilian Kreuzer (1994). Vol. 328 (3–4), pp. 312–318.

Ilarion V. Melnikov, M. Ronen Plesser (2011). Vol. 2011 (2), pp. 1–15.

Yunwen Lei, Ding-Xuan Zhou (2017). Vol. 29 (3), pp. 825–860.

We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after T iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, a refined analysis of the self-bounding properties of the objective function, and the resulting one-step progress bounds.
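
As a minimal illustration of the update rule being analyzed (a sketch under simplifying assumptions, not the paper's implementation): with the Euclidean mirror map Φ(w) = ½‖w‖² and the ℓ₁ regularizer r(w) = λ‖w‖₁, the composite mirror descent step reduces to a gradient step on the current example's loss followed by soft-thresholding. The function names, synthetic data, and step-size constants below are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_composite_mirror_descent(X, y, lam=0.1, eta1=1.0, theta=0.5):
    """Online composite mirror descent with Phi(w) = 0.5 * ||w||^2 and
    r(w) = lam * ||w||_1, using squared loss on one example per round and
    polynomially decaying step sizes eta_t = eta1 * t**(-theta).
    For this mirror map the composite update
        w_{t+1} = argmin_w <grad_t, w> + r(w) + D_Phi(w, w_t) / eta_t
    is a gradient step followed by soft-thresholding."""
    T, dim = X.shape
    w = np.zeros(dim)
    for t in range(1, T + 1):
        x_t, y_t = X[t - 1], y[t - 1]
        eta_t = eta1 * t ** (-theta)
        grad = (w @ x_t - y_t) * x_t  # gradient of 0.5 * (w.x - y)^2
        w = soft_threshold(w - eta_t * grad, eta_t * lam)
    return w  # last iterate, not an average

# Illustrative run on synthetic sparse regression data.
rng = np.random.default_rng(0)
w_star = np.zeros(20)
w_star[:3] = [2.0, -1.5, 1.0]
X = rng.standard_normal((500, 20))
y = X @ w_star + 0.1 * rng.standard_normal(500)
print(online_composite_mirror_descent(X, y)[:5])
```

Returning the last iterate rather than an average mirrors the averaging-free, last-iterate guarantees the abstract emphasizes.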

