Particle physics and cosmology with high-scale SUSY breaking in five-dimensional supergravity models

2015 ◽  
Vol 2015 (10) ◽  
Author(s):  
Hajime Otsuka

2016 ◽
Vol 25 (14) ◽  
pp. 1630027 ◽  
Author(s):  
John Ellis

The plethora of recent and forthcoming data on the cosmic microwave background (CMB) is stimulating a new wave of inflationary model-building. Naturalness suggests that the appropriate framework for models of inflation is supersymmetry. This should be combined with gravity in a supergravity theory, whose specific no-scale version has much to commend it, e.g. its derivation from string theory and the flat directions in its effective potential. Simple no-scale supergravity models yield predictions similar to those of the Starobinsky $R + R^2$ model, though some string-motivated versions make alternative predictions. Data are beginning to provide interesting constraints on the rate of inflaton decay into Standard Model particles. In parallel, the LHC and other data provide significant constraints on no-scale supergravity models, suggesting that some sparticles might have masses close to present experimental limits.


2016 ◽  
Vol 31 (17) ◽  
pp. 1650095 ◽  
Author(s):  
Gauhar Abbas ◽  
Mehran Zahiri Abyaneh ◽  
Aritra Biswas ◽  
Saurabh Gupta ◽  
Monalisa Patra ◽  
...  

The origin of the small mixing among quarks and the large mixing among neutrinos is an open question in particle physics. To address it, we postulate general relations among the quark and leptonic mixing angles at a high scale, which could be the scale of Grand Unified Theories. The central idea of these relations is that the quark and leptonic mixing angles can be unified at some high scale, either through a quark–lepton symmetry or some other underlying mechanism; as a consequence, the mixing angles of the leptonic sector are proportional to those of the quark sector. We investigate the phenomenology of the possible relations in which the leptonic mixing angles are proportional to the quark mixing angles at the unification scale, taking into account the latest experimental constraints from the neutrino sector. These relations are able to explain the pattern of leptonic mixing at the low scale, and thereby hint that they could be signatures of a quark–lepton symmetry or some other underlying quark–lepton mixing unification mechanism at a high scale linked to Grand Unified Theories.


2020 ◽  
Vol 2020 (10) ◽  
Author(s):  
Igor Broeckel ◽  
Michele Cicoli ◽  
Anshuman Maharana ◽  
Kajal Singh ◽  
Kuver Sinha

Abstract The statistics of the supersymmetry breaking scale in the string landscape has been extensively studied in the past, finding either a power-law behaviour induced by uniform distributions of F-terms or a logarithmic distribution motivated by dynamical supersymmetry breaking. These studies focused mainly on type IIB flux compactifications but did not systematically incorporate the Kähler moduli. In this paper we point out that the inclusion of the Kähler moduli is crucial for understanding the distribution of the supersymmetry breaking scale in the landscape, since in general one obtains unstable vacua when the F-terms of the dilaton and the complex structure moduli are larger than the F-terms of the Kähler moduli. After taking Kähler moduli stabilisation into account, we find that the distribution of the gravitino mass and the soft terms is power-law only in KKLT and perturbatively stabilised vacua, which therefore favour high-scale supersymmetry. On the other hand, LVS vacua feature a logarithmic distribution of soft terms and thus a preference for lower scales of supersymmetry breaking. Whether the landscape of type IIB flux vacua predicts a logarithmic or power-law distribution of the supersymmetry breaking scale thus depends on the relative preponderance of LVS and KKLT vacua.
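The contrast between the two statistical behaviours can be illustrated with a toy Monte Carlo (an illustrative sketch only, not the paper's computation; the unit-disk radius, gauge-coupling range, and one-loop coefficient below are arbitrary choices): an F-term drawn uniformly in the complex plane gives a density of |F| that grows linearly, i.e. a power law, while a dynamically generated scale Λ ~ M e^(−2π/(b g²)) with a flat prior on 1/g² is distributed uniformly in log Λ.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Power-law case: F drawn uniformly in the complex unit disk.
# The induced density of |F| grows linearly, dN/d|F| ~ |F|.
x, y = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
mask = x**2 + y**2 <= 1          # keep points inside the disk
absF = np.hypot(x[mask], y[mask])

# Logarithmic case: dynamical SUSY breaking with Lambda = M * exp(-2*pi/(b*g^2)).
# A flat prior on 1/g^2 (b = 1 here, arbitrary) makes log(Lambda/M) uniform.
inv_g2 = rng.uniform(1.0, 10.0, N)
logLam = -2 * np.pi * inv_g2     # log(Lambda/M), uniformly distributed
```

With these samples, roughly three times as many points fall in the upper half of the |F| range as in the lower half (the CDF of |F| is quadratic), while the log Λ histogram is flat.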


2013 ◽  
Vol 719 (1-3) ◽  
pp. 126-130 ◽  
Author(s):  
Keisuke Harigaya ◽  
Masahiro Kawasaki ◽  
Tsutomu T. Yanagida
Keyword(s):  

2014 ◽  
Vol 2014 (7) ◽  
Author(s):  
Keisuke Harigaya ◽  
Masahiro Ibe ◽  
Koji Ichikawa ◽  
Kunio Kaneta ◽  
Shigeki Matsumoto
Keyword(s):  

Author(s):  
E.D. Wolf

Most microelectronics devices and circuits operate faster, consume less power, execute more functions, and cost less per circuit function when the feature sizes internal to the devices and circuits are made smaller. This is part of the stimulus for the Very High-Speed Integrated Circuits (VHSIC) program. There is also a need for smaller, more sensitive sensors in a wide range of disciplines, including electrochemistry, neurophysiology, and ultra-high-pressure solid-state research. There is often fundamental new science (and sometimes new technology) to be revealed (and used) when a basic parameter such as size is extended to new dimensions, as is evident at the two extremes of smallness and largeness: high-energy particle physics and cosmology, respectively. However, there is also a very important intermediate domain of size, spanning from the diameter of a small cluster of atoms up to nearly one micrometer, which may have effects on society just as profound as those of “big” physics.


Author(s):  
Sterling P. Newberry

At the 1958 meeting of our society, then known as EMSA, the author introduced the concept of microspace and suggested its use to provide adequate information storage space, with electron microscope techniques providing storage and retrieval access. At this current meeting of MSA, he wishes to suggest an additional use of the power of the electron microscope. The author has been contemplating this new use for some time and would have suggested it in the EMSA fiftieth-year commemorative volume, but for page limitations. There is compelling reason to put forth this suggestion today because problems have arisen in the “Standard Model” of particle physics, and funds are being greatly reduced just as we need higher-energy machines to resolve these problems. Therefore, any techniques that complement or augment what we can accomplish during this austerity period with the machines at hand are worth exploring.


2013 ◽  
Vol 221 (3) ◽  
pp. 190-200 ◽  
Author(s):  
Jörg-Tobias Kuhn ◽  
Thomas Kiefer

Several techniques have been developed in recent years to generate optimal large-scale assessments (LSAs) of student achievement. These techniques often represent a blend of procedures from such diverse fields as experimental design, combinatorial optimization, particle physics, and neural networks. However, despite the theoretical advances in the field, there is still a surprising scarcity of well-documented test designs in which all factors that guided design decisions are explicitly and clearly communicated. This paper therefore has two goals. First, a brief summary of relevant key terms, experimental designs, and automated test assembly routines in LSA is given. Second, the conceptual and methodological steps in designing the assessment of the Austrian educational standards in mathematics are described in detail. The test design was generated using a two-step procedure, starting at the item-block level and continuing at the item level. First, a partially balanced incomplete item-block design was generated using simulated annealing; in a second step, items were assigned to the item blocks using mixed-integer linear optimization in combination with a shadow-test approach.
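The first of the two steps can be sketched in a few lines. The following toy implementation (an illustration only; the function name, item and block counts, objective, and cooling schedule are arbitrary choices, not the authors' actual design) anneals toward a partially balanced incomplete block design by minimising the variance of pairwise item co-occurrence counts, accepting worsening swaps with the usual Metropolis probability:

```python
import itertools
import math
import random

def anneal_block_design(n_items=12, n_blocks=9, block_size=4,
                        steps=20_000, t0=2.0, seed=1):
    """Search for item blocks whose pairwise co-occurrence counts are as
    balanced as possible, via simulated annealing over single-item swaps."""
    rng = random.Random(seed)
    blocks = [rng.sample(range(n_items), block_size) for _ in range(n_blocks)]

    def cost(bs):
        # Variance of pairwise co-occurrence counts; 0 would be a full BIBD.
        pairs = {}
        for b in bs:
            for p in itertools.combinations(sorted(b), 2):
                pairs[p] = pairs.get(p, 0) + 1
        counts = [pairs.get(p, 0)
                  for p in itertools.combinations(range(n_items), 2)]
        mean = sum(counts) / len(counts)
        return sum((c - mean) ** 2 for c in counts)

    cur = cost(blocks)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        b = rng.randrange(n_blocks)
        i = rng.randrange(block_size)
        old_item = blocks[b][i]
        new_item = rng.choice([x for x in range(n_items) if x not in blocks[b]])
        blocks[b][i] = new_item              # propose a single-item swap
        new = cost(blocks)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                        # accept the move
        else:
            blocks[b][i] = old_item          # reject: revert the swap
    return blocks, cur
```

Note that with 12 items in 9 blocks of size 4, each item pair cannot co-occur an equal integer number of times, so the objective cannot reach zero; the result is "partially balanced" by construction. The second step, assigning items to the resulting blocks under content constraints, would be handled by a mixed-integer solver rather than annealing.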

