Light Fidelity (LIFI) Data Exchange Using Secure Attribute Based Encryption Implementation

2019, Vol 16 (5), pp. 2317-2320
Author(s):  
N Karthik ◽  
B Palanisamy ◽  
T Karthikeyan ◽  
K Chandrakumar ◽  
K Thirunavukkarasu
2019, Vol 8 (4), pp. 9508-9512

Cloud computing provides a flexible and convenient way to share data, which brings benefits to both society and individuals. But users are naturally reluctant to outsource shared data directly to the cloud server, since the data often contain valuable information. This raises many security issues, because cloud service providers are not trusted to the same degree as the users themselves. To preserve the privacy of data against an untrusted Cloud Service Provider (CSP), current solutions apply cryptographic methods (for example, encryption) and deliver decryption keys only to authorized users. However, data sharing among authorized users in the cloud remains a difficult problem, especially for dynamic user groups. Much research on dynamic-group data sharing in the cloud has applied algorithms such as Attribute-Based Encryption (ABE) and Ciphertext-Policy Attribute-Based Encryption (CP-ABE) to secure dynamic groups of cloud users under multiple authorities, but these schemes still face challenges: they either lack performance or rely on a trusted server, and they do not handle attribute revocation well in distributed settings. Thus, a revoked user should be unable to access the shared data both before and after revocation, which existing schemes do not guarantee. To solve this, we first propose an efficient Modified Revocable Attribute-Based Encryption (MR-ABE) scheme with ciphertext delegation, built by integrating both Identity-Based Encryption (IBE) and CP-ABE techniques. It provides forward/backward secrecy of encrypted data by revoking user attributes and updating the ciphertext simultaneously. Next, we provide fine-grained access control and data sharing for on-demand services with dynamic user groups in the cloud. Experimental data show that our proposed system is more efficient and scalable than state-of-the-art solutions.
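The revocation idea described in the abstract can be illustrated with a small sketch. This is a toy model, not real ABE or the authors' MR-ABE construction: it stands in for attribute-policy checks with set comparisons and for encryption with a simple SHA-256 keystream, and all class and method names here are illustrative assumptions. The point it shows is the one the abstract makes: revocation must both withdraw the user's key and re-key (update) the ciphertext, so a cached old key becomes useless.

```python
# Toy model of revocable attribute-based access control (NOT real ABE;
# names and structure are illustrative assumptions only).
import hashlib
import secrets


def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR the data with a SHA-256-derived keystream.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))


class ToyRevocableABE:
    def __init__(self, policy: set[str]):
        self.policy = policy                   # attributes required to decrypt
        self.key = secrets.token_bytes(32)     # current content key
        self.user_attrs: dict[str, set[str]] = {}
        self.user_keys: dict[str, bytes] = {}

    def register(self, user: str, attrs: set[str]) -> None:
        self.user_attrs[user] = attrs
        if self.policy <= attrs:               # policy satisfied -> deliver key
            self.user_keys[user] = self.key

    def encrypt(self, plaintext: bytes) -> bytes:
        return keystream_xor(self.key, plaintext)

    def decrypt(self, user: str, ciphertext: bytes) -> bytes:
        key = self.user_keys.get(user)
        if key is None:
            raise PermissionError(f"{user} cannot decrypt")
        return keystream_xor(key, ciphertext)

    def revoke(self, user: str, ciphertext: bytes) -> bytes:
        # Revocation = withdraw the user's key AND re-key the ciphertext,
        # so a key the revoked user may have cached no longer works
        # (the forward-secrecy property the abstract describes).
        plaintext = keystream_xor(self.key, ciphertext)
        self.user_keys.pop(user, None)
        self.user_attrs.pop(user, None)
        self.key = secrets.token_bytes(32)
        for u, attrs in self.user_attrs.items():
            if self.policy <= attrs:
                self.user_keys[u] = self.key   # ciphertext update for remaining users
        return keystream_xor(self.key, plaintext)
```

In a real CP-ABE scheme the policy check and re-keying happen inside the cryptography itself rather than in trusted application code; the sketch only mirrors the workflow of simultaneous attribute revocation and ciphertext update.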


Author(s):  
Salvador Perez ◽  
Jose L. Hernandez-Ramos ◽  
Diego Pedone ◽  
Domenico Rotondi ◽  
Leonardo Straniero ◽  
...  

2020 ◽  
Vol 51 (2) ◽  
pp. 479-493
Author(s):  
Jenny A. Roberts ◽  
Evelyn P. Altenberg ◽  
Madison Hunter

Purpose: The results of automatic machine scoring of the Index of Productive Syntax from the Computerized Language ANalysis (CLAN) tools of the Child Language Data Exchange System of TalkBank (MacWhinney, 2000) were compared to manual scoring to determine the accuracy of the machine-scored method. Method: Twenty transcripts of 10 children from archival data of the Weismer Corpus from the Child Language Data Exchange System at 30 and 42 months were examined. Measures of absolute point difference and point-to-point accuracy were compared, as well as points erroneously given and missed. Two new measures for evaluating automatic scoring of the Index of Productive Syntax were introduced: Machine Item Accuracy (MIA) and Cascade Failure Rate; these measures further analyze points erroneously given and missed. Differences in total scores, subscale scores, and individual structures were also reported. Results: Mean absolute point difference between machine and hand scoring was 3.65, point-to-point agreement was 72.6%, and MIA was 74.9%. There were large differences in subscales, with the Noun Phrase and Verb Phrase subscales generally providing greater accuracy and agreement than the Question/Negation and Sentence Structures subscales. There were significantly more erroneous than missed items in machine scoring, attributed to mistagging of elements, imprecise search patterns, and other errors. Cascade failure resulted in an average of 4.65 points lost per transcript. Conclusions: The CLAN program showed relatively inaccurate outcomes in comparison to manual scoring on both traditional and new measures of accuracy. Recommendations for improving the program include accounting for second-exemplar violations and applying cascaded credit, among other suggestions. We further propose that research on machine-scored syntax routinely report accuracy measures detailing erroneous and missed scores, including MIA, so that researchers and clinicians are aware of the limitations of machine-scoring programs. Supplemental Material: https://doi.org/10.23641/asha.11984364
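The traditional accuracy measures named in the abstract can be made concrete with a short sketch. This is a plausible formalization over paired per-item scores, not the study's exact computation (the paper's definitions of MIA and Cascade Failure Rate involve details not reproduced here); the function names and the sample data are assumptions for illustration.

```python
# Hedged sketch of the traditional per-transcript accuracy measures
# described in the abstract, computed from paired per-item scores.
# The study's exact procedures may differ; this is illustrative only.

def absolute_point_difference(machine: list[int], hand: list[int]) -> int:
    # Difference in total score, ignoring which items were credited.
    return abs(sum(machine) - sum(hand))

def point_to_point_agreement(machine: list[int], hand: list[int]) -> float:
    # Fraction of items where machine and hand scoring agree exactly.
    matches = sum(m == h for m, h in zip(machine, hand))
    return matches / len(hand)

def erroneous_and_missed(machine: list[int], hand: list[int]) -> tuple[int, int]:
    # Erroneous: machine credited an item the hand scorer did not.
    # Missed: hand scorer credited an item the machine did not.
    erroneous = sum(m > h for m, h in zip(machine, hand))
    missed = sum(m < h for m, h in zip(machine, hand))
    return erroneous, missed
```

The sketch also shows why the abstract reports both measures: an erroneous point can cancel a missed point in the totals, so absolute point difference can be small even when point-to-point agreement is poor.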


Author(s):  
Scot D. Weaver ◽  
Thomas E. Lefchik ◽  
Marc I. Hoit ◽  
Kirk Beach
