Multi-Source Data Stream Online Frequent Episode Mining

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 107465-107478
Author(s):  
Tao You ◽  
Yamin Li ◽  
Bingkun Sun ◽  
Chenglie Du

Author(s):  
Xiang Ao ◽  
Ping Luo ◽  
Chengkai Li ◽  
Fuzhen Zhuang ◽  
Qing He

2019 ◽  
Vol 10 (4) ◽  
pp. 1-26 ◽  
Author(s):  
Xiang Ao ◽  
Haoran Shi ◽  
Jin Wang ◽  
Luo Zuo ◽  
Hongwei Li ◽  
...  

2013 ◽  
Vol 40 (1) ◽  
pp. 13-28 ◽  
Author(s):  
Shukuan Lin ◽  
Jianzhong Qiao ◽  
Ya Wang

Author(s):  
Hitesh H Vandra

Image compression is used to reduce the bandwidth or storage requirements of image applications. There are two main types of image compression: lossy and lossless. Lossy image compression removes some of the source information content along with the redundancy, whereas lossless image compression reconstructs the original source data from the compressed data by restoring the removed redundancy; the reconstructed data is an exact replica of the original source data. Many algorithms exist for lossless image compression, such as Huffman coding, Rice coding, run-length encoding, and LZW. LZW is referred to as a substitution or dictionary-based encoding algorithm. The algorithm builds a data dictionary of data occurring in an uncompressed data stream. Patterns of data (substrings) are identified in the data stream and matched against entries in the dictionary. If a substring is not present in the dictionary, a code phrase is created based on the data content of the substring and stored in the dictionary; the phrase is then written to the compressed output stream. In this paper we examine the effect of the LZW algorithm on the png, jpg, gif, and bmp image formats.
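As an illustration of the dictionary-building scheme the abstract describes (a minimal sketch, not the paper's implementation; it emits integer codes rather than the bit-packed output a real codec would produce), an LZW encoder and decoder over a byte stream can be written as:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Compress a byte stream with LZW by growing a dictionary of seen substrings."""
    # Seed the dictionary with all single-byte strings (codes 0-255).
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            # Longest-match search: keep extending the current substring.
            current = candidate
        else:
            # Emit the code for the longest match, then register the new substring.
            output.append(dictionary[current])
            dictionary[candidate] = next_code
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output


def lzw_decompress(codes: list[int]) -> bytes:
    """Invert lzw_compress by rebuilding the same dictionary on the fly."""
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    previous = dictionary[codes[0]]
    result = bytearray(previous)
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:
            # Special case: the code refers to the entry being defined right now.
            entry = previous + previous[:1]
        result += entry
        dictionary[next_code] = previous + entry[:1]
        next_code += 1
        previous = entry
    return bytes(result)
```

Because the decoder reconstructs the dictionary from the code stream itself, no dictionary needs to be transmitted; for example, `lzw_decompress(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))` returns the original bytes.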


Author(s):  
Joshua Lubell

The Security Content Automation Protocol (SCAP) schema for source data stream collections standardizes the requirements for packaging Extensible Markup Language (XML) security content into bundles for easy deployment. SCAP bundles must be self-contained, such that each bundle contains all necessary information without external references, and reversible, such that XML components are unmodified when unbundled and re-bundled into new collections. These requirements (along with the need for very long, globally unique identifiers) make content authoring and bundling a challenge. SCAP Composer, a software application that uses a Darwin Information Typing Architecture (DITA) specialized element type for source data stream collections, makes the authoring process easier. SCAP Composer takes an incremental approach to aiding SCAP content authors: it helps only with creating source data stream collections; it does not offer any help with creating the XML resources encapsulated in a data stream collection. SCAP Composer is implemented using the DITA Open Toolkit and can be used with any DITA authoring software that includes the Toolkit, or with a standalone Toolkit.


Author(s):  
A. A. Nedbaylov

The calculations required in project activities for engineering students are commonly performed in electronic spreadsheets. Practice has shown that performing those calculations can prove quite difficult for students of other fields. One of the causes of this situation (as well as, in part, of problems observed during Java and C programming language courses) lies in the lack of a streamlined structure for organizing both the source data and the end results. A solution can be found in a shared approach to structuring information in spreadsheet and software environments, called "the Book Method", which takes into account engineering-psychology considerations regarding the user-friendliness of working with electronic information. This method can be applied at different levels in academic institutions and in teacher training courses.

