A low-power and high-quality implementation of the discrete cosine transformation

2007 ◽  
Vol 5 ◽  
pp. 305-311 ◽  
Author(s):  
B. Heyne ◽  
J. Götze

Abstract. In this paper a computationally efficient, quality-preserving DCT architecture is presented. It is obtained by optimizing the Loeffler DCT using the CORDIC algorithm. The computational complexity is reduced from 11 multiply and 29 add operations (Loeffler DCT) to 38 add and 16 shift operations (similar to the complexity of the binDCT). The experimental results show that the proposed DCT algorithm not only reduces the computational complexity significantly, but also retains the good transformation quality of the Loeffler DCT. The proposed CORDIC-based Loeffler DCT is therefore especially suited for low-power, high-quality codecs in battery-powered systems.
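The key to the multiplier-free architecture is that CORDIC replaces each plane rotation in the Loeffler flow graph with a sequence of shift-and-add micro-rotations. As a minimal illustration (not the authors' exact fixed-point architecture), a generic CORDIC rotation using only shifts, adds, and one final gain correction:

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate (x, y) by `angle` radians using shift-and-add CORDIC
    micro-rotations, followed by one gain correction."""
    # Precomputed micro-rotation angles atan(2^-i)
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        # In hardware, multiplying by 2^-i is a pure bit shift.
        x, y = x - d * (y * 2.0 ** -i), y + d * (x * 2.0 ** -i)
        z -= d * angles[i]
    # CORDIC gain correction: K = prod(1/sqrt(1 + 2^-2i))
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** -(2 * i))
    return x * k, y * k
```

In a fixed-point implementation the gain can be folded into the quantization stage, which is how shift/add counts like those quoted above are reached.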

Author(s):  
Ziming Li ◽  
Julia Kiseleva ◽  
Maarten De Rijke

The performance of adversarial dialogue generation models relies on the quality of the reward signal produced by the discriminator. The reward signal from a poor discriminator can be very sparse and unstable, which may lead the generator to fall into a local optimum or to produce nonsensical replies. To alleviate the first problem, we extend a recently proposed adversarial dialogue generation method into an adversarial imitation learning solution. Then, within the framework of adversarial inverse reinforcement learning, we propose a new reward model for dialogue generation that provides a more accurate and precise reward signal for generator training. We evaluate the resulting model with automatic metrics and human evaluations in two annotation settings. Our experimental results demonstrate that our model generates higher-quality responses and achieves better overall performance than the state of the art.


2015 ◽  
Vol 22 (2) ◽  
pp. 165-173 ◽  
Author(s):  
Jihua Chen ◽  
Teresa Chen-Keat ◽  
Mehdi Hojjati ◽  
AJ Vallee ◽  
Marc-Andre Octeau ◽  
...  

Abstract. Developing reliable processes is one of the key elements in producing high-quality composite components with an automated fiber placement (AFP) process. In this study, both simulation and experimental work were carried out to investigate fiber steering and tow cut/restart under different processing parameters, such as layup rate and compaction pressure, during the AFP process. First, fiber paths were designed along curved fiber axes with different radii, and fiber placement trials were conducted to assess the quality of the steered paths. A series of sinusoidal fiber paths were then placed and examined. In addition, a six-ply laminate containing cut-outs was manufactured in the cut/restart trials, and the accuracy of fiber cut/restart was compared at different layup rates for both uni- and bi-directional layups. Experimental results show that the steered fiber paths designed for this study, with radii of curvature as small as 114 mm, could be laid up when the proper process conditions were used. The cut/restart trials showed that tow-cut quality was independent of layup speed, whereas tow-restart accuracy depended on it: the faster the layup speed, the less accurate the tow restart.
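For a sinusoidal path y = A·sin(2πx/λ), the curvature peaks at the crests (where y′ = 0, so κ = |y″| exactly), giving a minimum steering radius of 1/(A·(2π/λ)²). A small helper to compute this for candidate path designs (the amplitude and wavelength below are illustrative, not the study's values):

```python
import math

def min_steering_radius(amplitude_mm, wavelength_mm):
    """Minimum radius of curvature along a sinusoidal fiber path
    y = A*sin(2*pi*x/L). Curvature is largest at the crests, where
    the slope is zero and kappa = A*(2*pi/L)**2."""
    kappa_max = amplitude_mm * (2.0 * math.pi / wavelength_mm) ** 2
    return 1.0 / kappa_max
```

Such a check lets a path designer verify up front that a proposed sinusoidal layup never demands a tighter radius than the process can place without defects.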


Author(s):  
Wei Wang ◽  
Xiang-Yu Guo ◽  
Shao-Yuan Li ◽  
Yuan Jiang ◽  
Zhi-Hua Zhou

Crowdsourcing systems make it possible to hire voluntary workers to label large-scale data in exchange for small monetary payments. The taskmaster usually needs high-quality labels, while the quality of labels obtained from the crowd may not satisfy this requirement. In this paper, we study the problem of obtaining high-quality labels from the crowd and present an approach that learns the difficulty of items in crowdsourcing: we construct a small training set of items with estimated difficulty and then learn a model to predict the difficulty of future items. With the predicted difficulty, we can distinguish between easy and hard items. For easy items, the labels inferred from the crowd may be of high enough quality to satisfy the requirement; for hard items, for which the crowd cannot provide high-quality labels, it is better to recruit a more knowledgeable crowd or employ specialized workers to label them. The experimental results demonstrate that the proposed approach, by learning to distinguish between easy and hard items, can significantly improve label quality.
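The easy/hard routing idea can be sketched with a simple proxy for difficulty. Here crowd disagreement (label entropy) stands in for the paper's learned difficulty model, which is an illustrative simplification, not the authors' estimator:

```python
from collections import Counter
import math

def label_entropy(labels):
    """Disagreement among crowd labels for one item, used here as a
    proxy for item difficulty."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def route_items(crowd_labels, threshold=0.9):
    """Split items into 'easy' (aggregate crowd labels by majority
    vote) and 'hard' (route to expert annotators).
    `crowd_labels` maps item id -> list of crowd labels."""
    easy, hard = {}, []
    for item, labels in crowd_labels.items():
        if label_entropy(labels) <= threshold:
            easy[item] = Counter(labels).most_common(1)[0][0]
        else:
            hard.append(item)
    return easy, hard
```

The point of the paper is to predict difficulty for *future* items before collecting many crowd labels; this sketch only shows the routing step once a difficulty score exists.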


In today's era, images have become a key medium of communication. However, advances in editing software and techniques make it possible to add or remove essential features from an image without leaving any trace of the original. It is difficult for ordinary viewers to tell whether an image is original or tampered. Forgery detection addresses this problem: it is the image-processing task of determining whether an image is authentic or has been tampered with. Several techniques have been proposed to detect forgeries in manipulated images, but the problem is not yet fully solved. To address it, we use Discrete Cosine Transform (DCT) and quantization-matrix techniques to identify forged regions of an image without degrading image quality. The DCT is used to characterize the overlapping blocks, and the quantization matrix compresses the DCT coefficients, yielding both high compression and good decompressed image quality. We use a block-matching algorithm, one of the most frequently used methods for detecting duplicated regions. The proposed approach also supports different image formats, such as JPEG, JPG, and PNG, of any size, whether m×n or n×n.
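The DCT-plus-quantization block-matching pipeline described above can be sketched as follows. This is a minimal copy-move detector: a scalar quantizer `q` stands in for a full JPEG-style quantization matrix, and the block size, step, and threshold are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix for n-point blocks."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def find_duplicate_blocks(img, block=8, step=1, q=16.0):
    """Slide a block window over a grayscale image, DCT-transform each
    block, coarsely quantize the coefficients, and report positions
    whose quantized signatures collide (candidate copy-move regions)."""
    C = dct_matrix(block)
    seen, matches = {}, []
    h, w = img.shape
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            b = img[y:y + block, x:x + block].astype(float)
            coeffs = C @ b @ C.T                       # 2-D DCT
            sig = tuple(np.round(coeffs / q).astype(int).ravel())
            if sig in seen:
                matches.append((seen[sig], (y, x)))
            else:
                seen[sig] = (y, x)
    return matches
```

Quantizing the coefficients before matching is what makes the detector tolerant to mild recompression: two blocks that differ only by small lossy-compression noise still hash to the same signature.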


Author(s):  
Rui Wang ◽  
Zhihao Zheng ◽  
Shuming Gao

Abstract. Converting a hex mesh into a fundamental mesh by inserting fundamental sheets is an effective way to improve the mesh's quality near the boundary. However, inserting fundamental sheets automatically and with high quality remains an open problem. In this paper, a method is proposed to automatically generate fundamental sheets with the support of stream surfaces. By establishing a constrained integer linear system, the types of fundamental sheets to be inserted are determined effectively and optimally. By constructing discrete stream surfaces associated with the relevant geometric entities, the optimized positions of the fundamental sheets are determined automatically. The experimental results show that the proposed method can automatically insert high-quality fundamental sheets and effectively improve the geometric quality of the hex mesh's elements.


2011 ◽  
Vol 58-60 ◽  
pp. 1329-1335
Author(s):  
Hai Ming Yin ◽  
Yuan Wang Wei ◽  
Yong Gang Li

In this paper, we propose an image-based approach for simulating outdoor snowy scenes. Using a snowy image as the reference, we extract the snow-covered regions from the reference through a snow model and obtain the snow color information. The target image then receives this snow color information and takes on a snowy appearance through a color transfer procedure. To handle the differing data distributions of the reference and the target during this procedure, power and modulus transforms are employed to adjust the image data according to the snow information derived from the reference. The experimental results indicate that this approach can simulate snowy scenes with high quality while greatly reducing the computational cost of simulation compared with traditional algorithms.
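The core color-transfer step can be sketched with per-channel statistics matching. This is a Reinhard-style mean/std transfer with a power transform as the distribution adjustment; the paper's actual power and modulus transforms are driven by the extracted snow model, so treat this as an analogy rather than the authors' method:

```python
import numpy as np

def color_transfer(target, reference, gamma=1.0):
    """Match each channel of `target` to the mean/std of `reference`.
    `gamma` applies a power transform to the target first, standing in
    for the paper's distribution-adjustment step. Both images are float
    arrays in [0, 1] with shape (H, W, 3)."""
    t = np.power(target, gamma)
    out = np.empty_like(t)
    for c in range(3):
        t_mu, t_sd = t[..., c].mean(), t[..., c].std() + 1e-8
        r_mu, r_sd = reference[..., c].mean(), reference[..., c].std()
        # Standardize the target channel, then rescale to the
        # reference channel's statistics.
        out[..., c] = (t[..., c] - t_mu) / t_sd * r_sd + r_mu
    return np.clip(out, 0.0, 1.0)
```

In the paper's setting, `reference` would be restricted to the extracted snow-covered regions so that only snow statistics, not the whole reference image, drive the transfer.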


Author(s):  
Chwei-Shyong Tsai ◽  
Chin-Chen Chang

Digital watermarking is an effective technique for protecting the intellectual property rights of digital images. In general, a gray-level image provides more perceptual information, and each of its pixels carries more bits; consequently, gray-level digital watermarks are generally more robust. In this chapter, the proposed watermarking scheme adopts a gray-level image as the watermark. In addition, the discrete cosine transform (DCT) and a quantization method are applied to strengthen the robustness of the watermarking system. After both the original image and the digital watermark are processed by the DCT, a quantization table can be built to reduce the information size of the digital watermark. Once the quantized watermark is embedded into the middle-frequency bands of the transformed original image, the quality of the watermarked image remains visually acceptable thanks to the effectiveness of the quantization technique. The experimental results show that the embedded watermark can survive image cropping, JPEG lossy compression, and destructive processes such as image blurring and sharpening.


2021 ◽  
Vol 2086 (1) ◽  
pp. 012120
Author(s):  
V S Reznik ◽  
V A Kruglov ◽  
V V Davydov

Abstract. In the modern world, sequencing is an integral part of medicine, biology, and other scientific fields. The Illumina/Solexa method is a new-generation method belonging to the family of massively parallel sequencing techniques. One of its features is the sequential pumping of various chemicals through the flow cell in which the reaction occurs. For uniform, high-quality DNA sequencing, the amount of gas in the liquids must be minimized, because excess gas can adversely affect both the chemical reactions and the recording of the reaction results. This article examines a sequencing system based on the Illumina/Solexa method that uses bubble sensors. An algorithm was developed that periodically receives information from bubble sensors in a microfluidic tube. The received information is processed and makes it possible, at certain stages, to report deviations from normal sequencing conditions. The experimental results are presented.
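The periodic-polling algorithm can be sketched as a monitoring loop over the sensor readings. The sensor interface, sampling discipline, and threshold below are assumptions for illustration only; the abstract does not describe the real system's API:

```python
def monitor_bubbles(read_sensors, max_bubble_fraction=0.02, cycles=1000):
    """Poll a set of bubble sensors each cycle and flag any reagent
    line whose bubble fraction exceeds the threshold.
    `read_sensors` is a hypothetical callable returning a mapping
    {line_id: bubble_fraction}, or None when the run has finished."""
    alerts = []
    for cycle in range(cycles):
        readings = read_sensors()
        if readings is None:          # run finished
            break
        for line, fraction in readings.items():
            if fraction > max_bubble_fraction:
                # Record when and where the deviation occurred so the
                # sequencing stage can be re-run or the line purged.
                alerts.append((cycle, line, fraction))
    return alerts
```

Tying each alert to a cycle index is what lets the operator attribute a deviation to a specific sequencing stage, as the abstract describes.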


2020 ◽  
Vol 7 (3) ◽  
pp. 471
Author(s):  
Herry Sujaini

<p class="Body">The parallel corpus plays a very important role in statistical machine translation (SMT). Parallel corpora obtained from various sources are usually of poor quality, while a large quantity of parallel data is the main requirement for good translation results. This study aims to determine the effect of the size and quality of the parallel corpus on SMT. It uses the bilingual evaluation understudy (BLEU) method to classify parallel sentence pairs as high- or low-quality. The method is applied to a parallel corpus containing 1.5M parallel English-Indonesian sentence pairs, yielding 900K high-quality parallel sentence pairs. Several SMT systems with various sizes of the raw parallel corpus and the filtered high-quality corpus were trained with MOSES and evaluated for performance. The experimental results show that the size of the parallel corpus is a major factor in translation performance. In addition, better translation performance can be achieved with a smaller high-quality corpus using the quality-filtering method. Experiments on English-Indonesian SMT show that by using the 60% of sentences with good translation quality, translation quality can be improved by 7.31%.</p>
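The BLEU-based filtering step can be sketched with a simplified sentence-level BLEU (clipped n-gram precision with a brevity penalty). The abstract does not specify what reference each corpus sentence is scored against, so the pairing below and the bigram limit and cutoff are assumptions:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions up to `max_n`, times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_ngrams, r_ngrams = ngrams(cand, n), ngrams(ref, n)
        clipped = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        log_prec += math.log(max(clipped, 1e-9) / total) / max_n
    bp = min(1.0, math.exp(1.0 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(log_prec)

def filter_pairs(pairs, threshold=0.3):
    """Keep only (candidate, reference) sentence pairs whose BLEU
    score reaches the quality cutoff."""
    return [p for p in pairs if sentence_bleu(p[0], p[1]) >= threshold]
```

Applied corpus-wide with a suitable reference for each pair, such a filter is what reduces the 1.5M raw pairs to the 900K high-quality pairs the study trains on.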


2021 ◽  
Vol 11 (1) ◽  
pp. 8
Author(s):  

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Journal of Low Power Electronics and Applications maintains its standards for the high quality of its published papers [...]

