Dynamic Initial Weight Assignment for MaxSAT

Algorithms ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 115
Author(s):  
Abdelraouf Ishtaiwi ◽  
Qasem Abu Al-Haija

The Maximum Satisfiability (MaxSAT) approach is the method of choice, and perhaps the only practical one, for most real-world problems, since most of them are unsatisfiable: the search for a complete and consistent solution is impractical due to computational and time constraints. As a result, MaxSAT problems and solving techniques are of exceptional interest in the domain of Satisfiability (SAT). Our research experimentally investigated the performance gains of extending the most recently developed SAT dynamic initial weight assignment technique (InitWeight) to handle MaxSAT problems. Specifically, we first investigated the performance gains of dynamically assigning the initial weights in the Divide and Distribute Fixed Weights (DDFW) solver, yielding DDFW+InitMaxSAT, over the original DDFW when applied to a wide range of well-known unweighted MaxSAT problems obtained from DIMACS. Secondly, we compared DDFW+InitMaxSAT’s performance against three known state-of-the-art SAT solving techniques: YalSAT, ProbSAT, and Sparrow. We showed that the assignment of dynamic initial weights improved the performance of DDFW+InitMaxSAT over DDFW by an order of magnitude on the majority of problems, with similar performance otherwise. Furthermore, we showed that the performance of DDFW+InitMaxSAT was superior to the other state-of-the-art algorithms. Finally, we showed that the InitWeight technique could be extended to handle partial MaxSAT with minor modifications.
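
As a rough illustration of the kind of clause-weighting local search that DDFW-style solvers perform, the sketch below runs a weighted flip loop over a tiny unweighted MaxSAT instance. It is a minimal approximation, not the authors' solver: the weight update is a simplified additive stand-in for DDFW's weight transfer, and the uniform initial weights mark exactly the spot where the InitWeight idea would substitute structure-dependent values.

```python
import random

def unsat_clauses(clauses, assignment):
    """Indices of clauses not satisfied under the current assignment."""
    return [i for i, c in enumerate(clauses)
            if not any(assignment[abs(l)] == (l > 0) for l in c)]

def weighted_local_search(clauses, n_vars, max_flips=10_000, seed=0):
    rng = random.Random(seed)
    assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    weights = [1.0] * len(clauses)   # uniform initial weights; InitWeight would vary these
    best = len(clauses)
    for _ in range(max_flips):
        unsat = unsat_clauses(clauses, assignment)
        best = min(best, len(unsat))
        if not unsat:
            break

        def unsat_weight_after_flip(var):
            assignment[var] = not assignment[var]
            w = sum(weights[i] for i in unsat_clauses(clauses, assignment))
            assignment[var] = not assignment[var]
            return w

        candidates = {abs(l) for i in unsat for l in clauses[i]}
        var = min(candidates, key=unsat_weight_after_flip)
        if unsat_weight_after_flip(var) < sum(weights[i] for i in unsat):
            assignment[var] = not assignment[var]   # improving flip
        else:
            # Stuck: increase weights of unsatisfied clauses (a simplified additive
            # stand-in for DDFW's transfer of weight from satisfied neighbours).
            for i in unsat:
                weights[i] += 1.0
    return best

# Tiny unweighted MaxSAT instance, clauses as DIMACS-style literal lists.
clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3], [2, 3]]
print("fewest unsatisfied clauses found:", weighted_local_search(clauses, n_vars=3))
```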

2021 ◽  
Author(s):  
Leila Zahedi ◽  
Farid Ghareh Mohammadi ◽  
M. Hadi Amini

Machine learning (ML) techniques are promising decision-making and analytic tools in a wide range of applications. Different ML algorithms expose various hyper-parameters, and tailoring an ML model to a specific application requires tuning a large number of them. Tuning the hyper-parameters directly affects performance (accuracy and run-time). However, for large-scale search spaces, efficiently exploring the vast number of hyper-parameter combinations is computationally challenging, and existing automated hyper-parameter tuning techniques suffer from high time complexity. In this paper, we propose HyP-ABC, an automatic hybrid hyper-parameter optimization algorithm based on a modified artificial bee colony approach, and use it to tune three ML algorithms for classification accuracy, namely random forest, extreme gradient boosting, and support vector machine. Compared to state-of-the-art techniques, HyP-ABC is more efficient and has only a limited number of parameters to be tuned itself, making it well suited to real-world hyper-parameter optimization problems. We further compare the proposed HyP-ABC algorithm with state-of-the-art techniques. To ensure the robustness of the proposed method, the algorithm explores a wide range of feasible hyper-parameter values and is tested on a real-world educational dataset.
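
To make the bee-colony idea concrete, the following is a simplified illustrative sketch of an ABC-style hyper-parameter search for a random forest; it is not the HyP-ABC implementation from the paper. It assumes scikit-learn is available and uses the Iris dataset as a stand-in for the educational dataset; the search space, colony size, and abandonment limit are illustrative assumptions.

```python
import random
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# Hypothetical search space; the paper tunes more hyper-parameters than these.
SPACE = {"n_estimators": (10, 100), "max_depth": (2, 20), "min_samples_split": (2, 10)}

def random_source():
    return {k: random.randint(lo, hi) for k, (lo, hi) in SPACE.items()}

def fitness(params):
    clf = RandomForestClassifier(random_state=0, **params)
    return cross_val_score(clf, X, y, cv=3).mean()      # classification accuracy

def neighbour(params):
    new = dict(params)
    k = random.choice(list(SPACE))
    lo, hi = SPACE[k]
    new[k] = min(hi, max(lo, new[k] + random.randint(-3, 3)))
    return new

def abc_search(n_sources=5, iterations=10, limit=3, seed=0):
    random.seed(seed)
    sources = [random_source() for _ in range(n_sources)]
    scores = [fitness(s) for s in sources]
    trials = [0] * n_sources
    for _ in range(iterations):
        # Employed/onlooker phases collapsed into one greedy local step per source.
        for i in range(n_sources):
            cand = neighbour(sources[i])
            cand_score = fitness(cand)
            if cand_score > scores[i]:
                sources[i], scores[i], trials[i] = cand, cand_score, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that have stopped improving.
        for i in range(n_sources):
            if trials[i] > limit:
                sources[i] = random_source()
                scores[i] = fitness(sources[i])
                trials[i] = 0
    best = max(range(n_sources), key=scores.__getitem__)
    return sources[best], scores[best]

print(abc_search())
```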


Author(s):  
P. Branco ◽  
L. Fiolhais ◽  
M. Goulão ◽  
P. Martins ◽  
P. Mateus ◽  
...  

Oblivious Transfer (OT) is a fundamental primitive in cryptography, supporting protocols such as Multi-Party Computation and Private Set Intersection (PSI), which are used in applications like contact discovery, remote diagnosis, and contact tracing. Due to its fundamental nature, it is critically important that its execution remain secure even when arbitrarily composed with other instances of the same or other protocols. This property can be guaranteed by proving its security under the Universal Composability model. Herein, a 3-round Random Oblivious Transfer (ROT) protocol is proposed, which achieves high computational efficiency in the Random Oracle Model. The security of the protocol is based on the Ring Learning With Errors assumption (for which no quantum solver is known). ROT is the basis for OT extensions and thus achieves wide applicability, without the overhead of compiling ROTs from OTs. Finally, the protocol is implemented on a server-class Intel processor and on four application-class ARM processors, all with different architectures. The usage of vector instructions provides on average a 40% speedup. The implementation shows that our proposal is at least one order of magnitude faster than the state-of-the-art and is suitable for a wide range of applications in embedded systems, IoT, desktops, and servers. From a memory footprint perspective, there is a small increase (16%) when compared to the state-of-the-art. This increase is marginal and should not prevent the usage of the proposed protocol in a multitude of devices. In sum, the proposal achieves up to 37k ROTs/s on an Intel server-class processor and up to 5k ROTs/s on an ARM application-class processor. A PSI application using the proposed ROT is up to 6.6 times faster than the related art.


Algorithms ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 12
Author(s):  
Abdelraouf Ishtaiwi ◽  
Feda Alshahwan ◽  
Naser Jamal ◽  
Wael Hadi ◽  
Muhammad AbuArqoub

For decades, the use of clause weights has proven its ability to improve the overall performance of dynamic local search weighting algorithms. This paper proposes a new mechanism in which the initial clause weights are dynamically allocated based on the problem’s structure. The new mechanism starts by examining each clause in terms of its size and the extent of its links and proximity to other clauses. Based on this examination, we categorized the clauses into four categories: (1) clauses small in size and linked with a small neighborhood, (2) clauses small in size and linked with a large neighborhood, (3) clauses large in size and linked with a small neighborhood, and (4) clauses large in size and linked with a large neighborhood. Then, the initial weights are dynamically allocated according to each clause category. To examine the efficacy of the dynamic initial weight assignment, we conducted an extensive study of our new technique on many problems. The study concluded that the dynamic allocation of initial weights contributes significantly to improving the performance and quality of the search process. To further investigate the new mechanism’s effect, we compared it with state-of-the-art algorithms of the same clause-weighting family, and it clearly outperformed them. We also show that the new mechanism could be generalized, with minor changes, for use within state-of-the-art general-purpose stochastic local search weighting algorithms.
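
A minimal sketch of the categorization step is shown below, assuming median cuts for "small vs. large" clause size and neighbourhood size and arbitrary per-category weight values; the paper's exact thresholds and weights are not reproduced here.

```python
# Classify each clause by its size and by the size of its neighbourhood
# (clauses sharing at least one variable), then assign an initial weight per category.
from statistics import median

def initial_weights(clauses, weights_by_category=(2.0, 4.0, 6.0, 8.0)):
    sizes = [len(c) for c in clauses]
    var_to_clauses = {}
    for i, c in enumerate(clauses):
        for lit in c:
            var_to_clauses.setdefault(abs(lit), set()).add(i)
    neighbourhoods = [
        len({j for lit in c for j in var_to_clauses[abs(lit)]} - {i})
        for i, c in enumerate(clauses)
    ]
    size_cut, nbh_cut = median(sizes), median(neighbourhoods)
    weights = []
    for s, n in zip(sizes, neighbourhoods):
        small, sparse = s <= size_cut, n <= nbh_cut
        if small and sparse:
            category = 0          # (1) small clause, small neighbourhood
        elif small:
            category = 1          # (2) small clause, large neighbourhood
        elif sparse:
            category = 2          # (3) large clause, small neighbourhood
        else:
            category = 3          # (4) large clause, large neighbourhood
        weights.append(weights_by_category[category])
    return weights

clauses = [[1, 2], [-1, 3], [-2, -3, 4], [1, -3, -4, 5], [2, 5]]
print(initial_weights(clauses))
```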


Author(s):  
Xin Guo ◽  
Boyuan Pan ◽  
Deng Cai ◽  
Xiaofei He

Low-rank matrix factorization (LRMF) has attracted much attention due to its wide range of applications in computer vision, such as image inpainting and video denoising. Most existing methods assume that the loss between an observed measurement matrix and its bilinear factorization follows a symmetric distribution, such as the Gaussian or Gamma families. However, in real-world situations this assumption is often too idealized, because images captured under various illuminations and angles may suffer from multi-peaked, asymmetric, and irregular noise. To address these problems, this paper assumes that the loss follows a mixture of asymmetric Laplace distributions and proposes the robust Asymmetric Laplace Adaptive Matrix Factorization (ALAMF) model under a Bayesian matrix factorization framework. The Laplace assumption makes our model more robust, and the asymmetry makes it more flexible and adaptable to real-world noise. A variational method is then devised for model inference. We compare ALAMF with other state-of-the-art matrix factorization methods on datasets ranging from synthetic to real-world applications. The experimental results demonstrate the effectiveness of our proposed approach.
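
For intuition, the following numpy sketch fits a low-rank factorization under an asymmetric Laplace (check/pinball) loss by plain subgradient descent. It is a simplified stand-in for ALAMF's variational mixture-model inference; the rank, step size, and asymmetry parameter tau are illustrative assumptions.

```python
import numpy as np

def asymmetric_laplace_mf(X, rank=3, tau=0.7, lr=1e-2, steps=3000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(steps):
        R = X - U @ V.T                        # residuals, shape (m, n)
        # Subgradient of the check loss rho_tau(r) = r * (tau - [r < 0]).
        G = np.where(R >= 0, tau, tau - 1.0)
        U += lr * (G @ V)                      # descent step: grad_U = -(G @ V)
        V += lr * (G.T @ U)                    # descent step: grad_V = -(G.T @ U)
    return U, V

# Synthetic low-rank data corrupted by one-sided (asymmetric) noise.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
X += 0.5 * np.abs(rng.standard_normal(X.shape))
U, V = asymmetric_laplace_mf(X)
print("reconstruction MAE:", np.mean(np.abs(X - U @ V.T)))
```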


Author(s):  
Fahiem Bacchus ◽  
Antti Hyttinen ◽  
Matti Järvisalo ◽  
Paul Saikko

Maximum satisfiability (MaxSAT) offers a competitive approach to solving NP-hard real-world optimization problems. While state-of-the-art MaxSAT solvers rely heavily on Boolean satisfiability (SAT) solvers, a recent trend, brought on by MaxSAT solvers implementing the so-called implicit hitting set (IHS) approach, is to integrate techniques from the realm of integer programming (IP) into the solving process. This allows for making use of additional IP solving techniques to further speed up MaxSAT solving. In this line of work, we investigate the integration of the technique of reduced cost fixing from the IP realm into IHS solvers, and empirically show that reduced cost fixing considerably speeds up a state-of-the-art MaxSAT solver implementing the IHS approach.
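
For readers unfamiliar with the IP side, the snippet below states the generic reduced-cost-fixing rule in isolation, on made-up numbers; it is not the IHS-solver integration described in the paper.

```python
def reduced_cost_fixing(lp_opt, incumbent, reduced_costs):
    """Indices of 0/1 variables, nonbasic at 0 in the LP optimum, that can be fixed to 0."""
    fixed = []
    for j, r_j in enumerate(reduced_costs):
        # Raising x_j from 0 to 1 adds at least r_j to the LP optimum, so if that
        # already exceeds the incumbent, no improving solution can set x_j = 1.
        if r_j > 0 and lp_opt + r_j > incumbent:
            fixed.append(j)
    return fixed

# Toy numbers: LP lower bound 10, best-known (incumbent) cost 13, one reduced cost per variable.
print(reduced_cost_fixing(10.0, 13.0, [0.0, 5.0, 2.5, 4.0]))    # -> [1, 3]
```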


Author(s):  
Raymond McCALL ◽  
Janet Burge

More than 40 years after Rittel and Webber published the first articles on the theory of wicked problems, this theory has been applied to a wide range of fields involved in real-world problem solving. Interest in the theory seems greater than ever. This has led to an interest in rethinking the theory. A number of authors do this by imposing interpretations on the theory that are incompatible with each other and with the statements of the theory's authors. We agree that it is time to critically reexamine the theory and rethink what implications it has for design. However, rather than imposing an incompatible interpretation, our approach is to see what new conclusions can be drawn from a systematic and critical examination of what Rittel and Webber actually said. This reexamination of their specific claims and arguments is what we call untangling wicked problems. From this untangling, we derive new conclusions about how designers should tackle wicked problems and how design rationale can aid them in doing so.


2020 ◽  
Vol 12 ◽  
Author(s):  
Francisco Basílio ◽  
Ricardo Jorge Dinis-Oliveira

Background: Pharmacobezoars are specific types of bezoars formed when medicines, such as tablets, suspensions, and/or drug delivery systems, aggregate; they may cause death by occluding the airways with tenacious material or by eluting drugs that reach toxic or lethal blood concentrations. Objective: This work aims to fully review the state-of-the-art regarding the pathophysiology, diagnosis, treatment, and other relevant clinical and forensic features of pharmacobezoars. Results: Patients of a wide range of ages and of both sexes present with signs and symptoms of intoxication or, more commonly, gastrointestinal obstruction. The exact mechanisms of pharmacobezoar formation are unknown but are likely multifactorial. Diagnosis and treatment depend on the gastrointestinal segment affected and should be personalized to the medication and the underlying factor. A thorough history, physical examination, imaging tests, upper endoscopy and, for the lower tract, surgery through laparotomy are useful for diagnosis and treatment. Conclusion: Pharmacobezoars are rarely seen in clinical and forensic practice. They are related to controlled- or immediate-release formulations, liquid or non-digestible substances, digestive tracts with normal or altered motility/anatomy, and overdoses as well as therapeutic doses, and should be suspected in the presence of risk factors or in patients taking drugs that may form pharmacobezoars.


This volume vividly demonstrates the importance and increasing breadth of quantitative methods in the earth sciences. With contributions from an international cast of leading practitioners, chapters cover a wide range of state-of-the-art methods and applications, including computer modeling and mapping techniques. Many chapters also contain reviews and extensive bibliographies which serve to make this an invaluable introduction to the entire field. In addition to its detailed presentations, the book includes chapters on the history of geomathematics and on R.G.V. Eigen, the "father" of mathematical geology. Written to commemorate the 25th anniversary of the International Association for Mathematical Geology, the book will be sought after by both practitioners and researchers in all branches of geology.


Author(s):  
Marc J. Stern

This chapter covers systems theories relevant to understanding and working to enhance the resilience of social-ecological systems. Social-ecological systems contain natural resources, users of those resources, and the interactions between them. The theories in the chapter share lessons about how to build effective governance structures for common pool resources, how to facilitate the spread of worthwhile ideas across social networks, and how to promote collaboration for greater collective impacts than any one organization alone could achieve. Each theory is summarized succinctly and followed by guidance on how to apply it to real-world problem solving.


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph and subtensor) detection is a well-studied area with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of the exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection and can perform well in many applications. However, most existing works rely on the state-of-the-art greedy 2-approximation algorithm, which provides solutions with only a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a weak guarantee on its density. Methods that can estimate multiple subtensors, on the other hand, give a density guarantee with respect to the input tensor for the first estimated subtensor only. We address these drawbacks by providing both theoretical and practical solutions for estimating multiple dense subtensors in tensor data while giving a higher lower bound on the density. In particular, we guarantee and prove a higher lower bound on the density of the estimated subgraphs and subtensors. We also propose a novel approach showing that there are multiple dense subtensors whose density guarantee exceeds the lower bound used in the state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, which demonstrate its efficiency and feasibility.
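
The baseline referred to above is the classical greedy peeling 2-approximation for the densest-subgraph (average-degree) objective; a compact illustrative version is sketched below. It does not implement the paper's tighter bounds or multiple-subtensor estimation.

```python
def densest_subgraph_peeling(edges):
    """edges: iterable of (u, v) pairs.  Returns (best_vertex_set, best_density)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    vertices = set(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    best_set, best_density = set(vertices), m / len(vertices)
    # Repeatedly remove a minimum-degree vertex and keep the densest intermediate graph.
    while len(vertices) > 1:
        v = min(vertices, key=lambda x: len(adj[x]))
        m -= len(adj[v])
        vertices.remove(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        density = m / len(vertices)
        if density > best_density:
            best_set, best_density = set(vertices), density
    return best_set, best_density

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
print(densest_subgraph_peeling(edges))
```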

