Simple online privacy for Australia

First Monday ◽  
2016 ◽  
Author(s):  
Margaret Jackson ◽  
Jonathan O'Donnell ◽  
Joann Cattlin

Simple Privacy provides a system for Australian organisations to create privacy policies for the personal information they collect online. The privacy policies it creates are legally compliant and easy to understand. We developed this system because small Australian organisations seemed to find privacy policies too complicated to manage with the resources they have available. This paper describes the framework behind Simple Privacy and discusses the choices that we made during development. These choices balance the requirements of the privacy legislation with the needs of both organisations and customers.

2009 ◽  
pp. 269-283
Author(s):  
Suhong Li

The purpose of this chapter is to investigate the current status of the online privacy policies of Fortune 100 companies. It was found that 94% of the surveyed companies have posted an online privacy policy and 82% of them collect personal information from consumers. The majority of the companies only partially follow the four principles (notice, choice, access, and security) of fair information practices. For example, most of the organizations give consumers some notice and choice in terms of the collection and use of their personal information. However, organizations fall short on security requirements: only 19% of organizations mention that they have taken steps to secure information both during transmission and after their sites have received it. The results also reveal that a few organizations have obtained third-party privacy seals, including TRUSTe, BBBOnline Privacy, and Safe Harbor.


2019 ◽  
Vol 14 (2) ◽  
pp. 116-118 ◽  
Author(s):  
Stephanie Krueger

A Review of: Tummon, N., & McKinnon, D. (2018). Attitudes and practices of Canadian academic librarians regarding library and online privacy: A national study. Library and Information Science Research, 40(2), 86-97. https://doi.org/10.1016/j.lisr.2018.05.002 Abstract Objective – To assess attitudes of Canadian academic librarians regarding online privacy issues and to gauge their knowledge of related procedures and policies at their institutions. Design – Attitudinal online survey in English. Setting – English-language academic libraries in 10 Canadian provinces. Subjects – English-speaking academic librarians across Canada. Methods – Survey, based on Zimmer’s 2014 study of librarians in the United States of America, announced via email to 1,317 potential participants, managed using LimeSurvey, and available from April 7 to May 5, 2017. In 28 optional multiple choice or Likert scale questions, the survey prompted participants to express their attitudes regarding online privacy scenarios and privacy-related library practices, including patron data collection. Results were analyzed in Microsoft Excel and SPSS. Main Results – The survey response rate was 13.9% (183 respondents). Job position, age, or geographic location did not appear to influence attitudes towards privacy, with almost all respondents strongly agreeing or agreeing that individuals should control who sees their personal information (96.2%) and that companies collect too much such information (97.8%). Respondents voiced slightly less concern about government information collection, but nearly all respondents agreed that governments should not share personal information with third parties without authorization and that companies should only use information for the purposes they specify. When asked if privacy issues are more important today than five years ago, 69.9% of respondents said they were more concerned and 78.1% noted they knew more than five years before about privacy-related risks. 
Regarding online behaviour, 53.3% of respondents felt web behaviour tracking is both beneficial and harmful, with 29.1% considering it harmful, and 13.7% finding it neither beneficial nor harmful. Online shopping and identity theft, social media behaviour tracking, search engine policy display, and personal information sharing were also areas of concern for respondents, with the majority noting they were somewhat or very concerned about these issues. In terms of library practices, most respondents strongly agreed that libraries should not share personal information, circulation records, or Internet use records with third parties unless authorized, though 33% of respondents noted they could neither agree nor disagree that libraries are doing all they can to prevent unauthorized access to such information. The majority of respondents strongly agreed or agreed that libraries should play a role in educating patrons about privacy issues. Many respondents (68.9%) did not know if their libraries had practices or procedures for dealing with patron information requests from law enforcement or governmental representatives. The majority of respondents did not know if patrons at their libraries had inquired about privacy issues, 42.3% did not know if their libraries communicate privacy policies to patrons, and 45.4% noted their libraries did not inform patrons about library e-resource privacy policies. Many respondents (55.2%) had attended educational sessions about online privacy and surveillance in the past five years, while 52.2% noted their libraries had not hosted or organized such sessions over the same period. Conclusion – Survey participants showed concern about online and patron privacy, though their lack of knowledge about local procedures and policies highlights a potential need for enhanced privacy education.


Author(s):  
Devjani Sen ◽  
Rukhsana Ahmed

With a growing number of health and wellness applications (apps), there is a need to explore exactly what third parties can legally do with personal data. Following a review of the online privacy policies of a select set of mobile health and fitness apps, this chapter assessed the privacy policies of four popular health and fitness apps, using a checklist that comprised five privacy risk categories. Privacy risks were based on two questions: a) is important information missing that is needed to make informed decisions about the use of personal data? and b) is information being shared that might compromise the end-user's right to privacy of that information? The online privacy policies of each selected app were further examined to identify important privacy risks. From this, a separate checklist was completed for each app, and the checklists were compared to reach agreement on the presence or absence of each privacy risk category. This chapter concludes with a set of recommendations for designing privacy policies for the sharing of personal information collected from health and fitness apps.
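The comparison step described above can be sketched in a few lines. This is a hypothetical illustration only: the category names and the agreement rule below are assumptions, not the authors' actual five categories or coding procedure.

```python
# Illustrative privacy-risk categories (assumed names, not the chapter's own).
RISK_CATEGORIES = [
    "data_sharing_with_third_parties",
    "missing_retention_period",
    "no_opt_out_mechanism",
    "vague_collection_purpose",
    "no_security_statement",
]

def agreement(checklist_a, checklist_b):
    """Return the categories where both reviewers recorded the same finding."""
    return {c: checklist_a[c] for c in RISK_CATEGORIES
            if checklist_a[c] == checklist_b[c]}

# Two independently completed checklists for one app (True = risk present).
reviewer_a = {c: True for c in RISK_CATEGORIES}
reviewer_a["no_security_statement"] = False
reviewer_b = dict(reviewer_a)
reviewer_b["no_opt_out_mechanism"] = False  # the single disagreement

agreed = agreement(reviewer_a, reviewer_b)
print(sorted(agreed))  # four categories on which the reviewers agree
```

Disagreements (here, `no_opt_out_mechanism`) would then be resolved by discussion before recording the final presence or absence of each risk category.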


Cyber Crime ◽  
2013 ◽  
pp. 1276-1291
Author(s):  
Suhong Li ◽  
Chen Zhang

The purpose of this chapter is to investigate the current status of the online privacy policies of Fortune 100 companies. It was found that 94% of the surveyed companies have posted an online privacy policy and 82% of them collect personal information from consumers. The majority of the companies only partially follow the four principles (notice, choice, access, and security) of fair information practices. For example, most of the organizations give consumers some notice and choice in terms of the collection and use of their personal information. However, organizations fall short on security requirements: only 19% of organizations mention that they have taken steps to secure information both during transmission and after their sites have received it. The results also reveal that a few organizations have obtained third-party privacy seals, including TRUSTe, BBBOnline Privacy, and Safe Harbor.


2013 ◽  
Vol 19 ◽  
pp. 52-65
Author(s):  
Yohko Orito ◽  
Kiyoshi Murata ◽  
Yasunori Fukuta

In this study, we examine the effectiveness of online privacy policies and privacy seals/security icons for corporate trustworthiness and reputation management, and clarify how young Japanese people evaluate the trustworthiness of B to C e-business sites in terms of personal information handling. The survey results indicate that the posting of online privacy policies and/or privacy seals/security icons by B to C e-businesses does not, by itself, actively create consumer trust in those organisations. Instead, existing name recognition and/or general reputation can engender trust and, in turn, improve a firm's reputation for personal information use and protection.


2010 ◽  
pp. 2046-2065
Author(s):  
Veda C. Storey ◽  
Gerald C. Kane ◽  
Kathy Stewart Schwaig

Privacy concerns and practices, especially those dealing with the acquisition and use of consumer personal information by corporations, are at the forefront of business and social issues associated with the information age. This research examines the privacy policies of large U.S. companies to assess the substance and quality of their stated information practices. Six factors are identified that indicate the extent to which a firm is dependent upon consumer personal information, and therefore more likely to develop high quality privacy statements. The study’s findings provide practical and theoretical implications for information privacy issues, particularly for consumers who need to determine whether or not to disclose their personal identifying information to firms. The results illustrate the complexity involved in managing personal private information.


2021 ◽  
Vol 2021 (2) ◽  
pp. 88-110
Author(s):  
Duc Bui ◽  
Kang G. Shin ◽  
Jong-Min Choi ◽  
Junbum Shin

Privacy policies are documents required by law and regulations that notify users of the collection, use, and sharing of their personal information on services or applications. While the extraction of personal data objects and their usage thereon is one of the fundamental steps in their automated analysis, it remains challenging due to the complex policy statements written in legal (vague) language. Prior work is limited by small/generated datasets and manually created rules. We formulate the extraction of fine-grained personal data phrases and the corresponding data collection or sharing practices as a sequence-labeling problem that can be solved by an entity-recognition model. We create a large dataset with 4.1k sentences (97k tokens) and 2.6k annotated fine-grained data practices from 30 real-world privacy policies to train and evaluate neural networks. We present a fully automated system, called PI-Extract, which accurately extracts privacy practices by a neural model and outperforms, by a large margin, strong rule-based baselines. We conduct a user study on the effects of data practice annotation which highlights and describes the data practices extracted by PI-Extract to help users better understand privacy-policy documents. Our experimental evaluation results show that the annotation significantly improves the users' reading comprehension of policy texts, as indicated by a 26.6% increase in the average total reading score.
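The sequence-labeling formulation can be sketched with standard BIO tags, where each token is tagged as the beginning (B-) or inside (I-) of a data-practice phrase, or as outside (O). This is a minimal illustration of the general technique, not the authors' PI-Extract code; the `COLLECT` label and the example sentence are assumptions.

```python
def extract_phrases(tokens, tags):
    """Group BIO-tagged tokens into (phrase, practice-label) spans."""
    phrases, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new phrase begins
            if current:
                phrases.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)          # phrase continues
        else:                              # O tag: close any open phrase
            if current:
                phrases.append((" ".join(current), label))
            current, label = [], None
    if current:
        phrases.append((" ".join(current), label))
    return phrases

# Tags as a trained entity-recognition model might emit them.
tokens = "We collect your email address and location data".split()
tags = ["O", "O", "O", "B-COLLECT", "I-COLLECT", "O", "B-COLLECT", "I-COLLECT"]
print(extract_phrases(tokens, tags))
# → [('email address', 'COLLECT'), ('location data', 'COLLECT')]
```

In a system like the one described, the per-token tags would come from a trained neural model rather than being hand-written; the decoding step that recovers the data phrases is the same.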


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jillian Carmody ◽  
Samir Shringarpure ◽  
Gerhard Van de Venter

Purpose – The purpose of this paper is to demonstrate privacy concerns arising from the rapidly increasing advancement and use of artificial intelligence (AI) technology and the challenges existing privacy regimes face in ensuring the ongoing protection of an individual's sensitive private information. The authors illustrate this through a case study of energy smart meters and suggest a novel combination of four solutions to strengthen privacy protection. Design/methodology/approach – The authors illustrate how, through energy data obtained from smart meters, home energy providers can use AI to reveal private consumer information such as a household's electrical appliances and their time and frequency of usage, including the number and model of each appliance. The authors show how, due to advances in AI technologies, this data can be further combined with other data to infer sensitive personal information such as lifestyle and household income. Findings – The authors highlight data protection and privacy concerns that are not immediately obvious to consumers, given advanced AI technology's ability to extract sensitive personal information when applied to large, overlapping, granular data sets. Social implications – The authors question the adequacy of existing privacy legislation to protect sensitive inferred consumer data from AI-driven technology and suggest alternative solutions to address this. Originality/value – The original value of this paper is that it illustrates new privacy issues brought about by advances in AI and failings in current privacy legislation and its implementation, and opens the dialogue among stakeholders to protect vulnerable consumers.
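The kind of inference the paper warns about can be illustrated with a toy version of non-intrusive load monitoring: matching step changes in a meter's aggregate power readings against known appliance wattages. The signature table and readings below are invented for illustration; real disaggregation uses far richer statistical models.

```python
# Hypothetical appliance power signatures in watts (illustrative values).
SIGNATURES = {2000: "kettle", 150: "refrigerator", 700: "microwave"}

def infer_events(readings, tolerance=50):
    """Label large step changes in aggregate power with the closest
    known appliance signature, producing (time, appliance, on/off) events."""
    events = []
    for t in range(1, len(readings)):
        delta = readings[t] - readings[t - 1]
        for watts, name in SIGNATURES.items():
            if abs(abs(delta) - watts) <= tolerance:
                events.append((t, name, "on" if delta > 0 else "off"))
    return events

# Aggregate household readings sampled over time (watts).
readings = [300, 300, 2310, 2310, 290, 1000, 290]
print(infer_events(readings))
```

Even this crude sketch recovers which appliances ran and when; combined with other data sets, such event streams are what enable the lifestyle and income inferences the authors describe.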


Author(s):  
Aleecia M. McDonald ◽  
Robert W. Reeder ◽  
Patrick Gage Kelley ◽  
Lorrie Faith Cranor
