Privacy and its importance to society have been studied for centuries. While our understanding of how users make privacy disclosure decisions, and the theories built to explain those decisions, have grown over time, a one-size-fits-all solution that satisfies the requirements of every individual remains elusive. Depending on culture, gender, age, and other situational factors, the concept of privacy and users' expectations of how their privacy should be protected vary from person to person. The goal of this dissertation is to design and develop tools and algorithms to support personal privacy management for end-users. This research is grounded in ensuring the appropriate flow of information according to a user's unique set of personalized rules, policies, and principles. This
goal is achieved by building a context-aware and user-centric privacy framework that applies insights from the users'
privacy decision-making process, natural language processing (NLP), and formal specification and verification
techniques. We conducted a survey (N=401) based on the theory of planned behavior (TPB) to measure how users' perceptions of privacy factors and their intent to disclose information are affected by three situational factors embodied in hypothetical scenarios: information type, recipients' role, and trust source. To help build usable privacy tools,
we developed multiple NLP models, based on novel architectures and ground-truth datasets, that can precisely recognize privacy disclosures in text by leveraging state-of-the-art semantic and syntactic analysis, hidden patterns in sentence structure, the author's tone, and metadata from the content. We also designed a methodology to formally model, validate, and verify personalized privacy disclosure behavior, based on an analysis of users' situational decision-making processes. A robust model-checking tool, UPPAAL, is used to represent users' self-reported privacy disclosure behavior as an extended form of finite state automata (FSA). Further, reachability analysis is performed to verify privacy properties expressed as computation tree logic (CTL) formulas.
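For illustration, and with the template and location names here being hypothetical, a reachability query in UPPAAL's requirement specification language can be written as E<> User.Disclose, asserting that some execution of the automaton eventually reaches the disclosure state, while a safety query such as A[] not User.Violation asserts that no reachable state contradicts the user's declared disclosure preferences.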
Most importantly, we study the correctness, explainability, usability, and acceptance of the proposed methodologies. Through extensive experimental results, this dissertation contributes several insights to the area of user-tailored privacy modeling and personalized privacy systems.