This chapter focuses on the Google Assistant mobile app as an example of algorithmic personalization, drawing on the accounts of six Google users who engaged with the app over six weeks. The chapter explores participants’ framing of the app as “smart” and “impressive” even as it failed to be “useful”; their invocation of self-blame to explain the app’s failures; their faith that Google would uphold its side of the data-for-services exchange; and, finally, their expectations that the app could and should know them to an extraordinarily complex degree. The chapter proposes that the Google app’s interface constructs and evokes an ideologically normative “ideal user” in order to present “personalized” information. The author argues that participants drew on an enduring sense of trust to predict, construct, and legitimize the app’s articulation of their identities. This trust is embedded not only in the app itself but also in Google as a broader technological, sociocultural, and commercial force.