Leveraging TSP Solver Complementarity through Machine Learning

2018 ◽  
Vol 26 (4) ◽  
pp. 597-620 ◽  
Author(s):  
Pascal Kerschke ◽  
Lars Kotthoff ◽  
Jakob Bossek ◽  
Holger H. Hoos ◽  
Heike Trautmann

The Travelling Salesperson Problem (TSP) is one of the best-studied NP-hard problems. Over the years, many different solution approaches and solvers have been developed. For the first time, we directly compare five state-of-the-art inexact solvers—namely, LKH, EAX, restart variants of those, and MAOS—on a large set of well-known benchmark instances and demonstrate complementary performance, in that different instances may be solved most effectively by different algorithms. We leverage this complementarity to build an algorithm selector, which selects the best TSP solver on a per-instance basis and thus achieves significantly improved performance compared to the single best solver, representing an advance in the state of the art in solving the Euclidean TSP. Our in-depth analysis of the selectors provides insight into what drives this performance improvement.
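The per-instance selection idea above can be reduced to a small sketch: predict each solver's runtime from instance features and pick the one with the lowest prediction. The feature names and linear runtime models below are invented for illustration; they are not the paper's trained models.

```python
# Minimal sketch of per-instance algorithm selection. A real selector would
# use trained runtime/performance models over rich TSP instance features.

def select_solver(instance_features, runtime_models):
    """Pick the solver whose predicted runtime is lowest for this instance."""
    predictions = {name: model(instance_features)
                   for name, model in runtime_models.items()}
    return min(predictions, key=predictions.get)

# Hypothetical linear runtime models keyed by solver name.
runtime_models = {
    "LKH": lambda f: 2.0 * f["n_cities"] / 1000 + 5.0 * f["clusteredness"],
    "EAX": lambda f: 3.0 * f["n_cities"] / 1000 + 1.0 * f["clusteredness"],
}

# On this (invented) clustered instance, EAX is predicted to be faster.
features = {"n_cities": 1000, "clusteredness": 0.9}
print(select_solver(features, runtime_models))  # EAX
```

The selector's advantage over the single best solver comes entirely from instances where the argmin differs from the overall winner.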

2019 ◽  
Vol 9 (20) ◽  
pp. 4294 ◽  
Author(s):  
Pang ◽  
Zhang ◽  
Yu ◽  
Sun ◽  
Gong

This paper presents a humanoid robotic arm massage system aimed at satisfying the clinical requirements for relieving waist and leg pain in older patients during Chinese medicinal massage. On the basis of an in-depth analysis of the characteristics of human arm joints and of Chinese medicinal massage theory, a humanoid robotic arm massage system was designed using a bottom-to-top modular method. Combined finite element and kinematic analysis led to improved performance in terms of repeated positioning accuracy, massage strength accuracy, and massage effect. The developed humanoid robotic arm is characterized by a compact structure, high precision, light weight, and good stiffness, achieving a good bearing capacity. Together with the PID controller, the numerical simulations and experimental results provide valuable insight into the development of Chinese medicinal massage robots and massage treatments for patients who suffer from lumbar muscle strain.
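The abstract credits a PID controller with regulating massage strength. A textbook discrete PID loop, with gains, setpoint, and plant model invented here purely for illustration (none are taken from the paper), looks like:

```python
class PID:
    """Textbook discrete PID controller; all gains below are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order "contact force" model toward a 10 N setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
force = 0.0
for _ in range(2000):
    force += pid.update(10.0, force) * 0.01  # simple Euler plant integration
```

After the loop, the simulated contact force has settled close to the 10 N setpoint; a real massage controller would of course act on sensed force, not a toy plant.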


Author(s):  
Elias B. Khalil ◽  
Bistra Dilkina ◽  
George L. Nemhauser ◽  
Shabbir Ahmed ◽  
Yufen Shao

"Primal heuristics" are a key contributor to the improved performance of exact branch-and-bound solvers for combinatorial optimization and integer programming. Perhaps the most crucial question concerning primal heuristics is at which nodes they should run; the typical answer is hard-coded rules or fixed solver parameters tuned offline by trial and error. Alternatively, a heuristic should be run when it is most likely to succeed, based on the problem instance's characteristics, the state of the search, and so on. In this work, we study the problem of deciding at which nodes a heuristic should be run such that the overall (primal) performance of the solver is optimized. To our knowledge, this is the first attempt at formalizing and systematically addressing this problem. Central to our approach is the use of Machine Learning (ML) for predicting whether a heuristic will succeed at a given node. We give a theoretical framework for analyzing this decision-making process in a simplified setting, propose an ML approach for modeling heuristic success likelihood, and design practical rules that leverage the ML models to dynamically decide whether to run a heuristic at each node of the search tree. Experimentally, our approach improves the primal performance of a state-of-the-art Mixed Integer Programming solver by up to 6% on a set of benchmark instances, and by up to 60% on a family of hard Independent Set instances.
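The run/no-run decision described above can be sketched in a few lines: a learned model scores the probability that the heuristic succeeds at a node, and the heuristic runs only when that score clears a threshold. The feature names and weights below are invented, not the paper's learned model.

```python
import math

def success_probability(node_features, weights, bias):
    """Logistic model of heuristic success at a search-tree node."""
    z = bias + sum(weights[k] * v for k, v in node_features.items())
    return 1.0 / (1.0 + math.exp(-z))

def should_run_heuristic(node_features, weights, bias, threshold=0.5):
    """Run the heuristic only when predicted success clears the threshold."""
    return success_probability(node_features, weights, bias) >= threshold

# Invented weights: deeper nodes look less promising, large gaps more so.
weights = {"depth": -0.1, "gap": 2.0}
node = {"depth": 4, "gap": 0.6}
print(should_run_heuristic(node, weights, bias=0.0))  # True
```

In practice the threshold itself becomes a tuning knob trading heuristic cost against the chance of finding improving solutions.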


In multi-core systems, applications co-executing in multi-programmed mode interfere with each other during execution, creating resource bottlenecks that degrade performance. Conventional approaches to reducing interference over a given set of shared resources do not guarantee performance in a conflicting application environment. In this paper, we make an in-depth analysis of how benchmark applications interfere over shared resources and identify application sets that can be co-executed under a designated policy to mitigate interference effects. We profile and analyse the applications on the state-of-the-art simulator gem5. Finally, we conclude that performance can be improved through the designated policy; the simulation results show the scope for a new scheduler to improve performance in such systems.
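One simple interference-aware co-scheduling policy of the kind discussed above is to pair heavy and light consumers of a shared resource so no core pair saturates it. The per-application pressure scores below are invented (a real study would profile them, e.g. on gem5).

```python
# Pair applications so the heaviest and lightest shared-cache users run
# together, keeping combined pressure per pair roughly balanced.

def pair_by_pressure(apps):
    """apps: dict of name -> cache-pressure score. Returns balanced pairs."""
    ranked = sorted(apps, key=apps.get)
    pairs = []
    while len(ranked) >= 2:
        light, heavy = ranked.pop(0), ranked.pop()  # pair the two extremes
        pairs.append((light, heavy))
    return pairs

# Invented pressure scores for four benchmark-style workloads.
profiles = {"mcf": 0.9, "lbm": 0.8, "povray": 0.1, "namd": 0.2}
print(pair_by_pressure(profiles))  # [('povray', 'mcf'), ('namd', 'lbm')]
```

A scheduler built on such pairings avoids co-running two cache-heavy applications, which is exactly the conflicting case where conventional approaches give no performance guarantee.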


2020 ◽  
Author(s):  
Andrew Lensen ◽  
Bing Xue ◽  
Mengjie Zhang

Data visualization is a key tool in data mining for understanding big datasets. Many visualization methods have been proposed, including the well-regarded state-of-the-art method t-distributed stochastic neighbor embedding. However, the most powerful visualization methods have a significant limitation: the manner in which they create their visualization from the original features of the dataset is completely opaque. Many domains require an understanding of the data in terms of the original features; there is hence a need for powerful visualization methods which use understandable models. In this article, we propose a genetic programming (GP) approach called GP-tSNE for evolving interpretable mappings from the dataset to high-quality visualizations. A multiobjective approach is designed that produces a variety of visualizations in a single run which gives different tradeoffs between visual quality and model complexity. Testing against baseline methods on a variety of datasets shows the clear potential of GP-tSNE to allow deeper insight into data than that provided by existing visualization methods. We further highlight the benefits of a multiobjective approach through an in-depth analysis of a candidate front, which shows how multiple models can be analyzed jointly to give increased insight into the dataset.
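The multiobjective trade-off described above (visual quality versus model complexity) comes down to keeping only non-dominated candidates. A minimal Pareto-front filter, with invented scores where lower error and lower complexity are both better, looks like:

```python
def pareto_front(candidates):
    """candidates: list of (error, complexity) pairs, lower is better on
    both objectives. Returns the non-dominated candidates."""
    front = []
    for c in candidates:
        dominated = any(o != c and o[0] <= c[0] and o[1] <= c[1]
                        for o in candidates)
        if not dominated:
            front.append(c)
    return front

# Invented (error, complexity) scores for four candidate mappings.
models = [(0.10, 50), (0.12, 20), (0.30, 5), (0.35, 40)]
print(pareto_front(models))  # (0.35, 40) is dominated by (0.12, 20)
```

Examining several models along such a front jointly, rather than a single "best" one, is what the in-depth front analysis in the article exploits.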



Author(s):  
Jingjing Li ◽  
Mengmeng Jing ◽  
Ke Lu ◽  
Lei Zhu ◽  
Yang Yang ◽  
...  

Zero-shot learning (ZSL) and cold-start recommendation (CSR) are two challenging problems in computer vision and recommender systems, respectively. In general, they are investigated independently in different communities. This paper, however, reveals that ZSL and CSR are two extensions of the same intension. Both of them, for instance, attempt to predict unseen classes and involve two spaces, one for direct feature representation and the other for supplementary description. Yet no existing approach addresses CSR from the ZSL perspective. This work, for the first time, formulates CSR as a ZSL problem, and a tailor-made ZSL method is proposed to handle CSR. Specifically, we propose a Low-rank Linear Auto-Encoder (LLAE), which tackles three cruxes: domain shift, spurious correlations, and computing efficiency. LLAE consists of two parts: a low-rank encoder that maps user behavior into user attributes, and a symmetric decoder that reconstructs user behavior from user attributes. Extensive experiments on both ZSL and CSR tasks verify that the proposed method is a win-win formulation: not only can CSR be handled by ZSL models with a significant performance improvement compared with several conventional state-of-the-art methods, but the consideration of CSR can benefit ZSL as well.


2020 ◽  
Vol 635 ◽  
pp. A145 ◽  
Author(s):  
S. Rosu ◽  
G. Rauw ◽  
K. E. Conroy ◽  
E. Gosset ◽  
J. Manfroid ◽  
...  

Context. The eccentric massive binary HD 152248 (also known as V1007 Sco), which hosts two O7.5 III-II(f) stars, is the most emblematic eclipsing O-star binary in the very young and rich open cluster NGC 6231. Its properties render the system an interesting target for studying tidally induced apsidal motion. Aims. Measuring the rate of apsidal motion in such a binary system gives insight into the internal structure and evolutionary state of the stars composing it. Methods. A large set of optical spectra was used to reconstruct the spectra of the individual binary components and establish their radial velocities using a disentangling code. Radial velocities measured over seven decades were used to establish the rate of apsidal motion. We furthermore analysed the reconstructed spectra with the CMFGEN model atmosphere code to determine stellar and wind properties of the system. Optical photometry was analysed with the Nightfall binary star code. A complete photometric and radial velocity model was constructed in PHOEBE 2 to determine robust uncertainties. Results. We find a rate of apsidal motion of (1.843 +0.064/−0.083)° yr⁻¹. The photometric data indicate an orbital inclination of (67.6 +0.2/−0.1)° and Roche-lobe filling factors of both stars of about 0.86. Absolute masses of 29.5 +0.5/−0.4 M⊙ and mean stellar radii of 15.07 +0.08/−0.12 R⊙ are derived for both stars. We infer an observational value for the internal structure constant of both stars of 0.0010 ± 0.0001. Conclusions. Our in-depth analysis of the massive binary HD 152248 and the redetermination of its fundamental parameters can serve as a basis for the construction of stellar evolution models to determine theoretical rates of apsidal motion to be compared with the observational one. In addition, the system hosts two twin stars, which offers a unique opportunity to obtain direct insight into the internal structure of the stars.
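The quoted rate of apsidal motion directly fixes the apsidal period, i.e. the time for the argument of periastron to complete a full 360° revolution:

```python
# Apsidal period U = 360 deg / (rate of apsidal motion), using the
# central value of the rate reported above.
rate = 1.843            # deg / yr
U = 360.0 / rate        # apsidal period in years
print(round(U, 1))      # about 195 years
```

So the periastron of HD 152248 precesses through a full revolution in roughly two centuries, which is why radial velocities spanning seven decades were needed to pin the rate down.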


2019 ◽  
Vol 15 (S356) ◽  
pp. 44-49
Author(s):  
Robert Nikutta ◽  
Enrique Lopez-Rodriguez ◽  
Kohei Ichikawa ◽  
Nancy A. Levenson ◽  
Christopher C. Packham

We introduce Hypercat, a large set of 2D AGN torus images computed with the state-of-the-art clumpy radiative transfer code Clumpy. The images are provided as a 9-dimensional hypercube, alongside a smaller hypercube of corresponding projected dust distribution maps. Hypercat also comprises a software suite for easy use of the hypercubes, quantification of image morphology, and simulation of synthetic observations with single-dish telescopes, interferometers, and Integral Field Units. We apply Hypercat to NGC 1068 and find that it can be spatially resolved in the near- and mid-IR, for the first time with single-dish apertures, on the upcoming generation of 25–40 m class telescopes. We also find that clumpy AGN torus models within a range of the parameter space can explain, on scales of several parsecs, the recently reported polar elongation of MIR emission in several sources, without upending basic assumptions about AGN unification.
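A model-image hypercube of the kind described above is indexed by model parameters on the leading axes and image pixels on the trailing ones. The sketch below shows a nearest-grid-point lookup in such a structure; the axes, shapes, and function name are invented for illustration and are not Hypercat's actual interface.

```python
import numpy as np

# Toy parameter grids (two parameters instead of nine, for brevity).
axes = {"incl": np.array([0.0, 30.0, 60.0, 90.0]),
        "tau":  np.array([10.0, 30.0, 70.0])}
cube = np.zeros((4, 3, 8, 8))   # (incl, tau, y, x) toy image hypercube
cube[2, 1] = 1.0                # mark one slice so the lookup is visible

def nearest_image(cube, axes, **params):
    """Return the image at the grid point nearest the requested parameters."""
    idx = tuple(int(np.argmin(np.abs(grid - params[name])))
                for name, grid in axes.items())
    return cube[idx]

img = nearest_image(cube, axes, incl=55.0, tau=25.0)
print(img.mean())  # 1.0: nearest grid point is incl=60, tau=30
```

Real use would interpolate between grid points rather than snap to the nearest one, but the indexing pattern is the same across all nine parameter axes.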


Author(s):  
Daniel Anderson ◽  
Gregor Hendel ◽  
Pierre Le Bodic ◽  
Merlin Viernickel

We propose a simple and general online method to measure search progress within the Branch-and-Bound algorithm, from which we estimate the size of the remaining search tree. We then show how this information can help solvers algorithmically at runtime by designing a restart strategy for Mixed-Integer Programming (MIP) solvers that decides whether to restart the search based on the current estimate of the number of remaining nodes in the tree. We refer to this type of algorithm as clairvoyant. Our clairvoyant restart strategy outperforms a state-of-the-art solver on a large set of publicly available MIP benchmark instances. It is implemented in the MIP solver SCIP and will be available in future releases.
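The shape of such a clairvoyant restart rule can be sketched in a few lines: restart when the estimated remaining tree dwarfs the work already done. The estimator input and the factor below are stand-ins; the paper builds a proper online tree-size estimate inside SCIP.

```python
def should_restart(nodes_explored, estimated_total, factor=2.0):
    """Restart if the estimated remaining tree is much larger than the
    part already explored (factor is an illustrative tuning knob)."""
    remaining = max(estimated_total - nodes_explored, 0)
    return remaining > factor * nodes_explored

print(should_restart(nodes_explored=1_000, estimated_total=10_000))  # True
print(should_restart(nodes_explored=1_000, estimated_total=2_500))   # False
```

The point of restarting early is that information gathered so far (branching scores, conflicts) can reshape a much smaller tree on the second attempt.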


