MooFuzz: Many-Objective Optimization Seed Schedule for Fuzzer

Mathematics ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 205
Author(s):  
Xiaoqi Zhao ◽  
Haipeng Qu ◽  
Wenjie Lv ◽  
Shuo Li ◽  
Jianliang Xu

Coverage-based Greybox Fuzzing (CGF) is a practical and effective solution for finding bugs and vulnerabilities in software. A key challenge of CGF is how to select conducive seeds and allocate energy accurately. To address this problem, we propose a novel many-objective optimization solution, MooFuzz, which can identify different states of the seed pool and continuously gather different information about seeds to guide seed scheduling and energy allocation. First, MooFuzz marks risky locations in the source code. Second, it automatically updates the collected information, including the path risk, the path frequency, and the mutation information. Next, MooFuzz classifies the seed pool into three states and adopts different objectives to select seeds. Finally, we design an energy recovery mechanism to monitor energy usage in the fuzzing process and reduce energy consumption. We implement our fuzzing framework and evaluate it on seven real-world programs. The experimental results show that MooFuzz outperforms other state-of-the-art fuzzers, including AFL, AFLFast, FairFuzz, and PerfFuzz, in terms of path discovery and bug detection.
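The many-objective seed selection described above can be sketched with a generic non-dominated (Pareto) filter. This is a minimal illustration using the three objective names from the abstract; the dominance routine is a textbook non-dominated sort, not MooFuzz's actual implementation:

```python
# Minimal sketch of many-objective seed selection (illustrative only).
# Each seed carries three objectives from the abstract: path risk
# (maximize), path frequency (minimize: rare paths preferred), and
# mutation count (minimize: less-fuzzed seeds preferred). All three
# are stored so that larger is better.

def dominates(a, b):
    """True if seed a is at least as good as b on every objective
    and strictly better on at least one."""
    ge = all(x >= y for x, y in zip(a, b))
    gt = any(x > y for x, y in zip(a, b))
    return ge and gt

def pareto_front(seeds):
    """Return names of the non-dominated seeds (candidates to schedule)."""
    front = []
    for i, (name, obj) in enumerate(seeds):
        if not any(dominates(o2, obj) for j, (_, o2) in enumerate(seeds) if j != i):
            front.append(name)
    return front

# (risk, -frequency, -mutations): negation makes larger better everywhere.
seeds = [
    ("s1", (0.9, -3, -10)),   # risky, rare, lightly fuzzed
    ("s2", (0.2, -50, -2)),   # safe but very rare and barely fuzzed
    ("s3", (0.9, -3, -12)),   # dominated by s1 (more mutations, same rest)
]
print(pareto_front(seeds))    # ['s1', 's2']
```

A real scheduler would then draw seeds from this front and feed the collected path/mutation statistics back into the objectives after each round.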

2020 ◽  
Vol 4 (1) ◽  
pp. 87-107
Author(s):  
Ranjan Mondal ◽  
Moni Shankar Dey ◽  
Bhabatosh Chanda

Abstract Mathematical morphology is a powerful tool for image processing tasks. The main difficulty in designing a mathematical morphological algorithm is deciding the order of operators/filters and the corresponding structuring elements (SEs). In this work, we develop a morphological network composed of alternating dilation and erosion layers, which, depending on the learned SEs, may form opening or closing layers. These layers in the right order, along with linear combinations of their outputs, are useful in extracting image features and processing them. Structuring elements in the network are learned by the back-propagation method, guided by minimization of the loss function. The efficacy of the proposed network is established by applying it to two interesting image restoration problems, namely de-raining and de-hazing. Results are comparable to those of many state-of-the-art algorithms for most of the images. It is also worth mentioning that the number of network parameters to handle is much smaller than that of popular convolutional neural networks for similar tasks. The source code can be found at https://github.com/ranjanZ/Mophological-Opening-Closing-Net.
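The alternating dilation/erosion layers can be illustrated with a plain 1-D grayscale sketch. Flat structuring elements are assumed here, and the back-propagation learning of SEs is omitted:

```python
# 1-D grayscale erosion and dilation with a flat structuring element
# (SE). Erosion followed by dilation is an opening; the network in the
# paper stacks such layers with SEs learned by back-propagation (the
# learning step is omitted in this sketch).

def erode(signal, k):
    """Min filter of width k (flat SE), window clamped at the edges."""
    r = k // 2
    n = len(signal)
    return [min(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def dilate(signal, k):
    """Max filter of width k (flat SE), window clamped at the edges."""
    r = k // 2
    n = len(signal)
    return [max(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def opening(signal, k):
    """Erosion then dilation: removes bright spikes narrower than the SE."""
    return dilate(erode(signal, k), k)

sig = [0, 0, 9, 0, 0, 5, 5, 5, 0, 0]   # a 1-px spike and a 3-px plateau
print(opening(sig, 3))                 # spike removed, plateau preserved
```

This shape-selective behavior (suppressing thin bright streaks while keeping wider structure) is exactly why opening/closing layers are a natural fit for rain-streak removal.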


2021 ◽  
Vol 14 (11) ◽  
pp. 2445-2458
Author(s):  
Valerio Cetorelli ◽  
Paolo Atzeni ◽  
Valter Crescenzi ◽  
Franco Milicchio

We introduce landmark grammars, a new family of context-free grammars aimed at describing the HTML source code of pages published by large, templated websites and therefore at effectively tackling Web data extraction problems. Indeed, they address the inherent ambiguity of HTML, one of the main challenges of Web data extraction, which, despite over twenty years of research, has been largely neglected by the approaches presented in the literature. We then formalize the Smallest Extraction Problem (SEP), an optimization problem for finding the grammar of a family that best describes a set of pages while contextually extracting their data. Finally, we present an unsupervised learning algorithm to induce a landmark grammar from a set of pages sharing a common HTML template, and we present an automatic Web data extraction system. Experiments on consolidated benchmarks show that the approach substantially improves on the state of the art.
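The landmark idea can be illustrated with a toy extractor: strings shared verbatim by all pages of a template act as landmarks, and the varying text between consecutive landmarks is the data. This is a hand-rolled sketch under that reading, not the paper's grammar-induction algorithm; `extract_between` and the sample pages are hypothetical:

```python
# Toy illustration of template landmarks: fixed template strings
# delimit the variant data fields on every page of the same template.

def extract_between(page, left, right):
    """Return the text between two landmark strings on a page."""
    i = page.index(left) + len(left)
    j = page.index(right, i)
    return page[i:j]

pages = [
    "<li><b>Title:</b> Dune <i>by</i> Herbert</li>",
    "<li><b>Title:</b> Emma <i>by</i> Austen</li>",
]
for p in pages:
    title = extract_between(p, "<b>Title:</b> ", " <i>by</i>")
    author = extract_between(p, "<i>by</i> ", "</li>")
    print(title, "|", author)
```

The hard part the paper addresses is precisely what this sketch hand-waves: inducing which substrings are reliable landmarks (and in what grammar structure) from the pages alone, despite HTML's ambiguity.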


2013 ◽  
Vol 303-306 ◽  
pp. 2284-2288
Author(s):  
Fang Yan ◽  
Yu An Tan

The world is increasingly awash in unstructured data. Object-based data de-duplication is currently the most advanced method and an effective solution for detecting duplicate data. We developed an energy-saving policy for conventional disk-based RAID systems. Exploiting the characteristics of object-based data de-duplication, we introduce object layout strategies for unstructured data applications: disk accesses are concentrated on a subset of the disks for long periods, which makes it possible to schedule the remaining disks into standby or shutdown mode. Our proposed methods reduce the energy consumption of de-duplication storage systems.
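The layout idea can be sketched as follows: instead of striping objects round-robin across all disks, filling disks sequentially concentrates accesses on a few active disks so the rest can enter standby. This first-fit placement is an illustrative stand-in, not the paper's actual strategy:

```python
# Illustrative sketch (not the paper's algorithm): place each object on
# the lowest-numbered disk with room, so that accesses concentrate on a
# few active disks and untouched disks can be spun down to standby.

def sequential_layout(objects, disk_capacity, n_disks):
    """Place (name, size) objects first-fit; return placement and usage."""
    used = [0] * n_disks
    placement = {}
    for name, size in objects:
        for d in range(n_disks):
            if used[d] + size <= disk_capacity:
                used[d] += size
                placement[name] = d
                break
        else:
            raise RuntimeError("out of space")
    return placement, used

objs = [("a", 40), ("b", 30), ("c", 50), ("d", 20)]
placement, used = sequential_layout(objs, disk_capacity=100, n_disks=4)
print(placement)
idle = [d for d, u in enumerate(used) if u == 0]
print("disks eligible for standby:", idle)
```

With round-robin striping the same four objects would touch every disk; here two of the four disks stay completely idle and can be powered down.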


2020 ◽  
Vol 34 (06) ◽  
pp. 10393-10401
Author(s):  
Bing Wang ◽  
Changhao Chen ◽  
Chris Xiaoxuan Lu ◽  
Peijun Zhao ◽  
Niki Trigoni ◽  
...  

Deep learning has achieved impressive results in camera localization, but current single-image techniques typically suffer from a lack of robustness, leading to large outliers. To some extent, this has been tackled by sequential (multi-image) or geometry-constraint approaches, which can learn to reject dynamic objects and illumination conditions to achieve better performance. In this work, we show that attention can be used to force the network to focus on more geometrically robust objects and features, achieving state-of-the-art performance on common benchmarks, even when using only a single image as input. Extensive experimental evidence is provided through public indoor and outdoor datasets. Through visualization of the saliency maps, we demonstrate how the network learns to reject dynamic objects, yielding superior global camera pose regression performance. The source code is available at https://github.com/BingCS/AtLoc.


2021 ◽  
Author(s):  
Christof Ferreira Torres ◽  
Antonio Ken Iannillo ◽  
Arthur Gervais ◽  
Radu State

Smart contracts are Turing-complete programs that are executed across a blockchain. Unlike traditional programs, once deployed, they cannot be modified. As smart contracts carry more value, they become an increasingly attractive target for attackers. Over the last years, they have suffered from exploits costing millions of dollars due to simple programming mistakes. As a result, a variety of tools for detecting bugs have been proposed. Most of these tools rely on symbolic execution, which may yield false positives due to over-approximation. Recently, many fuzzers have been proposed to detect bugs in smart contracts. However, these tend to be more effective in finding shallow bugs and less effective in finding bugs that lie deep in the execution, therefore achieving low code coverage and many false negatives. An alternative that has proven to achieve good results in traditional programs is hybrid fuzzing, a combination of symbolic execution and fuzzing. In this work, we study hybrid fuzzing on smart contracts and present ConFuzzius, the first hybrid fuzzer for smart contracts. ConFuzzius uses evolutionary fuzzing to exercise shallow parts of a smart contract and constraint solving to generate inputs that satisfy complex conditions that prevent evolutionary fuzzing from exploring deeper parts. Moreover, ConFuzzius leverages dynamic data dependency analysis to efficiently generate sequences of transactions that are more likely to result in contract states in which bugs may be hidden. We evaluate the effectiveness of ConFuzzius by comparing it with state-of-the-art symbolic execution tools and fuzzers for smart contracts. Our evaluation on a curated dataset of 128 contracts and a dataset of 21K real-world contracts shows that our hybrid approach detects more bugs than state-of-the-art tools (up to 23%) and that it outperforms existing tools in terms of code coverage (up to 69%). We also demonstrate that data dependency analysis can boost bug detection by up to 18%.
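A schematic hybrid-fuzzing loop may help illustrate the division of labor between the two components. The target program, mutator, and "solver" below are toy stand-ins, not ConFuzzius's EVM-level machinery:

```python
# Schematic hybrid-fuzzing loop (generic illustration, not ConFuzzius's
# code): evolutionary mutation explores shallow paths cheaply; when
# coverage stalls, a "solver" step computes an input satisfying a hard
# branch condition that mutation is unlikely to ever hit.

import random

def target(x):
    """Toy program under test: branch2 is guarded by a magic constant."""
    if x % 2 == 0:
        return "branch1"
    if x == 0x1337C0DF:
        return "branch2"        # deep path, unreachable by blind mutation
    return "branch3"

def mutate(x):
    """Cheap evolutionary step: flip one random bit."""
    return x ^ (1 << random.randrange(32))

def solve_magic():
    """Stand-in for constraint solving on the uncovered branch guard."""
    return 0x1337C0DF

random.seed(0)
coverage, corpus = set(), [1]
for _ in range(200):                      # evolutionary phase
    x = mutate(random.choice(corpus))
    br = target(x)
    if br not in coverage:
        coverage.add(br)
        corpus.append(x)                  # keep coverage-increasing inputs
if "branch2" not in coverage:             # stalled: hand off to the solver
    coverage.add(target(solve_magic()))
print(sorted(coverage))
```

The fuzzing phase quickly covers branch1 and branch3 but never the magic-constant guard; the solver step closes that gap, which is the essence of the hybrid approach.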


2019 ◽  
Vol 37 (1_suppl) ◽  
pp. 73-82 ◽  
Author(s):  
Marco Limburg ◽  
Jan Stockschläder ◽  
Peter Quicker

The increasing use of carbon fibre reinforced polymers requires suitable disposal and recycling options, the latter being especially attractive due to the high production cost of the material. Reclaiming the fibres from their polymer matrix, however, is not without challenges. Pyrolysis leads to a decay of the polymer matrix but may also leave solid carbon residues on the fibre. These residues prevent fibre sizing and thereby reuse in new materials. In the current state of the art, these residues are removed via thermal treatment in oxygen-containing atmospheres. This, however, may damage the fibres' tensile strength. Within the scope of this work, carbon dioxide and water vapour were used to remove the carbon residues, aiming to eliminate or at least minimize fibre damage. Improved quality of reclaimed fibres can make fibre reuse more desirable by enabling the production of high-quality recycling products. Still, even under ideal recycling conditions, the fibres will shorten with every new life-cycle due to production-based blending. Fibre disposal pathways will therefore always be necessary as well. The problems of thermal fibre disintegration are summarized in the second part of this article (Part 2: Energy recovery).


Author(s):  
Yasir Hussain ◽  
Zhiqiu Huang ◽  
Yu Zhou ◽  
Senzhang Wang

In recent years, deep learning models have shown great potential in source code modeling and analysis. Generally, deep learning-based approaches are problem-specific and data-hungry. A challenging issue with these approaches is that they require training from scratch for each new related problem. In this work, we propose a transfer learning-based approach that significantly improves the performance of deep learning-based source code models. In contrast to traditional learning paradigms, transfer learning can transfer the knowledge learned in solving one problem to another related problem. First, we present two recurrent neural network-based models, RNN and GRU, for the purpose of transfer learning in the domain of source code modeling. Next, via transfer learning, these pre-trained (RNN and GRU) models are used as feature extractors. Then, the extracted features are fed into an attention learner for different downstream tasks. The attention learner leverages the learned knowledge of the pre-trained models and fine-tunes it for a specific downstream task. We evaluate the performance of the proposed approach through extensive experiments on the source code suggestion task. The results indicate that the proposed approach outperforms state-of-the-art models in terms of accuracy, precision, recall, and F-measure without training the models from scratch.
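The extract-then-attend pipeline can be sketched as follows. `pretrained_embed` is a hypothetical stand-in for a frozen pre-trained RNN/GRU feature extractor, and the attention is a bare softmax pooling rather than the paper's trained learner:

```python
# Sketch of the transfer-learning pipeline (illustrative only): a
# frozen "feature extractor" maps each source-code token to a vector,
# and softmax attention pools the vectors into one fixed-size
# representation for a downstream task such as code suggestion.

import math

def pretrained_embed(token):
    """Hypothetical stand-in for a frozen pre-trained model's features."""
    h = hash(token) % 1000
    return [h / 1000.0, len(token) / 10.0]

def attention_pool(vectors, scores):
    """Softmax-weighted sum of feature vectors (the attention step)."""
    m = max(scores)                       # subtract max for stability
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    w = [x / z for x in w]
    dim = len(vectors[0])
    return [sum(w[i] * v[d] for i, v in enumerate(vectors)) for d in range(dim)]

tokens = ["def", "load", "(", "path", ")"]
feats = [pretrained_embed(t) for t in tokens]
scores = [v[0] for v in feats]            # toy relevance scores
pooled = attention_pool(feats, scores)
print(len(pooled))                        # fixed size, any input length
```

Only the pooling/attention stage would be fine-tuned per downstream task; the extractor's weights stay frozen, which is what saves training from scratch.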


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Mikko Rautiainen ◽  
Tobias Marschall

Abstract Genome graphs can represent genetic variation and sequence uncertainty. Aligning sequences to genome graphs is key to many applications, including error correction, genome assembly, and genotyping of variants in a pangenome graph. Yet, so far, this step is often prohibitively slow. We present GraphAligner, a tool for aligning long reads to genome graphs. Compared to state-of-the-art tools, GraphAligner is 13x faster and uses 3x less memory. When employing GraphAligner for error correction, we find it to be more than twice as accurate and over 12x faster than extant tools. Availability: package manager: https://anaconda.org/bioconda/graphaligner and source code: https://github.com/maickrau/GraphAligner


Author(s):  
Gopalendu Pal ◽  
Anquan Wang ◽  
Michael F. Modest

k-distribution-based approaches are promising models for radiation calculations in strongly nongray participating media. Advanced k-distribution methods have been found to achieve close-to-benchmark line-by-line (LBL) accuracy for strongly inhomogeneous multi-phase media at a computational cost several orders of magnitude smaller. In this paper, a k-distribution-based portable spectral module is developed, incorporating several state-of-the-art k-distribution methods along with compact, high-accuracy databases of k-distributions. The module construction is flexible: the user can choose among various k-distribution methods, with their relevant k-distribution databases, to carry out accurate radiation calculations. The spectral module is portable, such that it can be coupled to any flow solver code with its own grid structure, discretization scheme, and solver libraries. This open-source code module is made available for free for all noncommercial purposes. This article outlines in detail the design and use of the spectral module. The k-distribution methods included in the module are briefly described, with a discussion of their advantages, disadvantages, and domains of applicability. Examples are provided for various sample radiation calculations in multi-phase mixtures using the new spectral module, and the results are compared with LBL calculations.
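The core notion of a k-distribution, reordering a jagged absorption spectrum into a smooth cumulative distribution g(k), can be sketched in a few lines. This is a toy illustration of the concept only, not the module's methods or database format:

```python
# Minimal sketch of building a k-distribution from a line-by-line
# absorption spectrum: sorting the absorption coefficients yields the
# smooth, monotonic cumulative distribution g(k), which can be
# integrated with far fewer quadrature points than the raw spectrum.

def k_distribution(kappa):
    """Return (k, g) pairs: k sorted ascending, g = cumulative fraction."""
    ks = sorted(kappa)
    n = len(ks)
    return [(k, (i + 1) / n) for i, k in enumerate(ks)]

# A jagged toy spectrum (absorption coefficient vs. wavenumber).
spectrum = [5.0, 0.1, 3.0, 0.2, 8.0, 0.1]
for k, g in k_distribution(spectrum):
    print(f"k={k:<4} g={g:.3f}")
```

The smoothness of g(k) relative to the original spectrum is what buys the orders-of-magnitude cost reduction over LBL integration mentioned above.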


2020 ◽  
Vol 34 (01) ◽  
pp. 303-311 ◽  
Author(s):  
Sicheng Zhao ◽  
Yunsheng Ma ◽  
Yang Gu ◽  
Jufeng Yang ◽  
Tengfei Xing ◽  
...  

Emotion recognition in user-generated videos plays an important role in human-centered computing. Existing methods mainly employ a traditional two-stage shallow pipeline, i.e., extracting visual and/or audio features and training classifiers. In this paper, we propose to recognize video emotions in an end-to-end manner based on convolutional neural networks (CNNs). Specifically, we develop a deep Visual-Audio Attention Network (VAANet), a novel architecture that integrates spatial, channel-wise, and temporal attentions into a visual 3D CNN and temporal attentions into an audio 2D CNN. Further, we design a special classification loss, i.e., polarity-consistent cross-entropy loss, based on the polarity-emotion hierarchy constraint, to guide the attention generation. Extensive experiments conducted on the challenging VideoEmotion-8 and Ekman-6 datasets demonstrate that the proposed VAANet outperforms state-of-the-art approaches for video emotion recognition. Our source code is released at https://github.com/maysonma/VAANet.
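One plausible reading of a polarity-consistent cross-entropy is standard cross-entropy plus a penalty whenever the predicted emotion's polarity group disagrees with the ground truth's. The exact formulation in the paper may differ, and the emotion/polarity table below is illustrative:

```python
# Sketch of a polarity-consistent cross-entropy (illustrative only):
# emotions are grouped by polarity, and predictions that land in the
# wrong polarity group incur an extra fixed penalty on top of the
# usual cross-entropy term.

import math

POLARITY = {"joy": "pos", "surprise": "pos",
            "anger": "neg", "fear": "neg", "sadness": "neg"}
CLASSES = list(POLARITY)

def pc_cross_entropy(probs, true_label, penalty=1.0):
    """Cross-entropy plus a fixed penalty on polarity mismatch."""
    ce = -math.log(probs[CLASSES.index(true_label)])
    pred = CLASSES[max(range(len(probs)), key=probs.__getitem__)]
    mismatch = POLARITY[pred] != POLARITY[true_label]
    return ce + (penalty if mismatch else 0.0)

p = [0.1, 0.1, 0.6, 0.1, 0.1]            # argmax predicts "anger" (neg)
print(pc_cross_entropy(p, "sadness"))    # same polarity: plain CE
print(pc_cross_entropy(p, "joy"))        # wrong polarity: CE + penalty
```

The extra term makes cross-polarity mistakes (e.g. joy vs. anger) cost more than within-polarity ones (e.g. fear vs. sadness), which is the hierarchy constraint the abstract describes.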

