CDIA-DS: A framework for efficient reconstruction of compound document image using data structure

Author(s):  
Anand Gupta ◽  
Devendra Tiwari ◽  
Priyanshi Gupta ◽  
Ankit Kulshreshtha

2020 ◽
Vol 39 (4) ◽  
pp. 5027-5036
Author(s):  
You Lu ◽  
Qiming Fu ◽  
Xuefeng Xi ◽  
Zhenping Chen

Data outsourcing has gradually become a mainstream solution, but once data is outsourced, data owners lose control of the underlying hardware, so the integrity of the data may be compromised. Many current studies achieve cloud data set verification with low network overhead by designing algorithmic structures (e.g., hashing, Merkle verification trees); however, cloud service providers may refuse to acknowledge the incompleteness of cloud data in order to avoid liability or for business reasons. There is therefore a need to build a secure, reliable, tamper-proof, and unforgeable verification system for accountability. Blockchain is a chain-like data structure constructed from data signatures, timestamps, hash functions, and proof-of-work mechanisms; an integrity verification system built on blockchain technology can achieve fault accountability. This paper uses the Hadoop framework to implement data collection and storage in an HBase system based on a big data architecture. In summary, building on research into blockchain cloud data collection and storage technology and on existing big data storage middleware, a high-throughput, highly concurrent, and highly available data collection and processing system has been realized.
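The Merkle verification trees mentioned in the abstract can be illustrated with a minimal sketch. The function names and the choice of SHA-256 are illustrative assumptions, not details from the paper: the owner retains only the root hash, and any change to a stored block changes the recomputed root.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of data blocks.

    Each level hashes adjacent pairs; an odd node is carried up unchanged.
    """
    if not leaves:
        raise ValueError("no leaves")
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(sha256(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])  # odd node carried up as-is
        level = nxt
    return level[0]

# Integrity check: tampering with any block changes the root.
blocks = [b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)
tampered = merkle_root([b"block-1", b"block-X", b"block-3"])
print(root != tampered)  # → True
```

Verifying integrity then costs one stored root plus a logarithmic number of sibling hashes per challenged block, which is what gives such schemes their low network overhead.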


2011 ◽  
Vol 8 (5) ◽  
pp. 670-684 ◽  
Author(s):  
A. Baliga ◽  
V. Ganapathy ◽  
L. Iftode

2010 ◽  
Vol 45 (5) ◽  
pp. 281-292 ◽  
Author(s):  
Gautam Upadhyaya ◽  
Samuel P. Midkiff ◽  
Vijay S. Pai

2016 ◽  
Vol 3 ◽  
pp. 1-19
Author(s):  
Luciana Quaranta

The Intermediate Data Structure (IDS) provides a common structure for storing and sharing historical demographic data. The structure also facilitates the construction of different open-access software to extract information from these tables and construct new variables. The article Using the Intermediate Data Structure (IDS) to Construct Files for Analysis (Quaranta 2015) presented a series of concepts and programs that allow the user to construct a rectangular episodes file for longitudinal statistical analysis using data stored in the IDS. The current article discusses, in detail, each of these programs, describing their technicalities, structure and syntax, and also explaining how they can be used.
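The rectangular episodes file described above can be sketched as follows. The record layout and attribute names are illustrative assumptions, not the IDS schema: each person's long-format attribute records are split at every date on which an attribute changes, yielding one rectangular row per episode.

```python
from collections import defaultdict

# Long-format records: (person_id, attribute, value, date). Illustrative only.
records = [
    (1, "civil_status", "unmarried", 1850),
    (1, "civil_status", "married", 1855),
    (1, "death", "1", 1880),
]

def episodes(records):
    """Split each person's observation window at every attribute-change
    date, yielding rectangular (person, start, end, state) rows."""
    by_person = defaultdict(list)
    for pid, attr, value, date in records:
        by_person[pid].append((date, attr, value))
    rows = []
    for pid, events in by_person.items():
        events.sort()
        state = {}
        # Each consecutive pair of event dates bounds one episode.
        for (d0, a0, v0), (d1, _, _) in zip(events, events[1:]):
            state[a0] = v0
            rows.append((pid, d0, d1, dict(state)))
    return rows

print(episodes(records))
# → [(1, 1850, 1855, {'civil_status': 'unmarried'}),
#    (1, 1855, 1880, {'civil_status': 'married'})]
```

Each row carries the state valid over [start, end), which is the shape longitudinal event-history models expect.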


2016 ◽  
Vol 8 (6) ◽  
Author(s):  
Yuanxi Sun ◽  
Wenjie Ge ◽  
Jia Zheng ◽  
Dianbiao Dong

This paper presents a systematic solution of the kinematics of planar mechanisms from the perspective of Assur groups. When a planar mechanism is decomposed into Assur groups, the order in which the groups must be calculated is not known in advance. To solve this problem, first, the decomposed Assur groups are classified into three types according to their calculability, which lays the foundation for an automatic solving algorithm for decomposed Assur groups. Second, a data structure for the Assur group is presented, which provides the automatic solving algorithm with the input and output parameters of each Assur group. All decomposed Assur groups are stored in a component stack, and all of their parameters are stored in parameter stacks. The automatic algorithm inspects the identification flags of each Assur group in the component stack, together with the corresponding parameters in the parameter stacks, to decide which Assur group is currently calculable and which can be solved afterward. The proposed systematic solution is able to generate an automatic solving order for all Assur groups in the planar mechanism and allows Assur groups to be added, modified, and removed at any time. Two planar mechanisms are given as examples to show the detailed process of the proposed systematic solution.


1996 ◽  
Vol 5 (4) ◽  
pp. 329-336 ◽  
Author(s):  
Barry F. Smith ◽  
William D. Gropp

Over the past few years several proposals have been made for the standardization of sparse matrix storage formats in order to allow for the development of portable matrix libraries for the iterative solution of linear systems. We believe that this is the wrong approach. Rather than define one standard (or a small number of standards) for matrix storage, the community should define an interface (i.e., the calling sequences) for the functions that act on the data. In addition, we cannot ignore the interface to the vector operations because, in many applications, vectors may not be stored as consecutive elements in memory. With the acceptance of shared memory, distributed memory, and cluster memory parallel machines, the flexibility of the distribution of the elements of vectors is also extremely important. This issue is ignored in most proposed standards. In this article we demonstrate how such libraries may be written using data encapsulation techniques.
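The data-encapsulation approach argued for here can be sketched as follows: the solver fixes only the calling interface for the action of the operator, never its storage. The class and method names are illustrative assumptions, not from the article; any format (dense, compressed-sparse-row, matrix-free) can sit behind the same interface.

```python
class CSRMatrix:
    """Compressed-sparse-row storage hidden behind an apply() interface."""
    def __init__(self, ptr, idx, val, n):
        self.ptr, self.idx, self.val, self.n = ptr, idx, val, n

    def apply(self, x):
        """Matrix-vector product y = A x using only the CSR arrays."""
        y = [0.0] * self.n
        for i in range(self.n):
            for k in range(self.ptr[i], self.ptr[i + 1]):
                y[i] += self.val[k] * x[self.idx[k]]
        return y

def richardson(op, b, omega=0.5, iters=200):
    """Richardson iteration x += omega * (b - A x).

    Touches the matrix only through op.apply, so the solver is portable
    across storage formats and vector layouts.
    """
    x = [0.0] * len(b)
    for _ in range(iters):
        ax = op.apply(x)
        x = [xi + omega * (bi - ai) for xi, bi, ai in zip(x, b, ax)]
    return x

# 2x2 identity stored as CSR; the iteration converges to b.
A = CSRMatrix(ptr=[0, 1, 2], idx=[0, 1], val=[1.0, 1.0], n=2)
print(richardson(A, [3.0, 4.0]))
```

Swapping in a different storage format means writing a new class with the same `apply` signature; the solver itself is untouched, which is the portability argument the article makes against standardizing storage layouts.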

