If I didn’t want people to know, I wouldn’t put it on Facebook: How privacy is changing in the age of social networking

Liam Pomfret*, The University of Queensland, l.pomfret@business.uq.edu.au
Josephine Previte, The University of Queensland

Keywords: privacy, privacy protection, social privacy, social networking.

Abstract

This study qualitatively explores consumers’ privacy protecting behaviours on social networking sites. A Leximancer analysis of depth interviews with experienced social networkers revealed concerns about social risk factors, rather than the monetary or physical risks associated with sharing information online. These findings are interesting in light of current scholarly research on online privacy, which indicates that whilst consumers are increasingly concerned about their privacy, they appear to take little or no action to protect their personal information when using social networking sites. The findings of this preliminary research demonstrate that consumers are re-conceptualizing their privacy around social risk, which is influencing both how consumers classify what is private information and how they perform privacy protecting behaviours.

Introduction

Semantic Web 2.0 technology has brought about rapid shifts in how people communicate with each other, including an unprecedented willingness among users to post intimate, personal information (Gross & Acquisti, 2005). This intimate sharing behaviour, combined with pervasive membership of social media communities such as Facebook, raises interesting questions around consumers’ privacy management behaviours. This unprecedented shift in consumers’ sharing of personal information has given rise to a range of social and legal issues relating to privacy, including identity theft (Ellison, Lampe & Steinfield, 2009), “cyberbullying”, and threats to jobs from “spying bosses” who report making decisions not to hire people based on personal information posted on social networking sites (Clark & Roberts, 2010; Smith & Kidder, 2010). However, despite increasing levels of privacy concern being reported by consumers (Antón, Earp, & Young, 2010), there appears to have been no accompanying increase in consumers’ privacy protecting behaviours (Norberg, Horne, & Horne, 2007). The literature suggests that consumers frequently take little to no action to protect personally identifying information (Krishnamurthy & Wills, 2008), and that many users of social networking sites never even change their privacy settings from the defaults, which potentially leave their disclosures publicly available to anyone (Debatin, Lovejoy, Horn, & Hughes, 2009).

Related Literature: The Privacy Paradox

An assumption underpinning the characterization of privacy in previous research has been that consumers are reluctant to disclose information online and offline due to their concerns over privacy (Kelly & McKillop, 1996). Furthermore, this literature argues that self-disclosure of personal information will only happen when consumers feel assured their information will be protected (Derlega & Chaikin, 1977). This characterization of privacy appears inconsistent, however, with consumers’ current attitudes towards sharing information (Labrecque, Markos, & Milne, 2011), where personal details are willingly, even eagerly, released by consumers in exchange for personalized services (Awad & Krishnan, 2006), such as their social networking profiles. Yet despite the level of personal information being shared by consumers, there has been no corresponding increase in consumers’ actual stated intentions to disclose personal information, a gap dubbed by Norberg, Horne & Horne (2007) the “Privacy Paradox”. This lack of protection of personal information (Debatin et al., 2009), even as concerns over privacy issues have continued to grow (Antón et al., 2010), has led researchers to conclude that consumer privacy concerns are at best a weak indicator of their actual privacy protection behaviours online (Dwyer, Hiltz, & Passerini, 2007).

In illuminating the privacy paradox, a number of researchers have presented images of consumers as lacking the ability to understand privacy statements, whereas others have argued that consumers do not even concern themselves with reading privacy statements because they assume the statements have a universal explanation (i.e., if I’ve read one privacy statement, all others will be much the same) (Milne & Culnan, 2004; Milne, Culnan, & Greene, 2006). Other researchers have argued that consumers tend to ascribe privacy risks to other parties rather than admitting to any personal vulnerability (Debatin et al., 2009), which leads consumers to grossly underestimate the privacy risk implications of their own disclosure behaviour (Milne, Labrecque, & Cromer, 2009). On the other hand, authors such as White (2004) have suggested that consumers have simply become accustomed to online monitoring and privacy invasion and, as a result, are now unconcerned about external surveillance of their personal information.

Method

Interviews were conducted with 15 users of the social networking site Facebook. Participants ranged in age, background, and attitudes towards online privacy. Previous studies have relied heavily on survey data to measure consumers’ degree of engagement in behaviours online (Poddar, Mosteller, & Ellen, 2009), with studies relating to privacy typically measuring only the extent of consumers’ use of privacy settings and policies. In order to obtain richer insights into consumers’ motivations for privacy protection, in this study we engaged participants in semi-structured depth interviews. Interviews ranged between 30 and 75 minutes in length, with the typical interview lasting approximately 60 minutes. All interviews were digitally recorded, with participants’ consent. Participants were recruited as a convenience sample through a mixture of social and professional contacts, and through snowballing from early participants. The sample of 15 social networkers was considered adequate for exploratory research and provided reasonable coverage of the phenomenon being examined (Patton, 2002). Experienced social networkers were intentionally sought, with the expectation that they would offer richer, experiential insights into the social norms of privacy.

Analysis and Discussion

Leximancer software (Smith & Humphreys, 2006) applies a machine learning technique, in a grounded research fashion, to identify the main concepts in a corpus and the relationships between words in that corpus (for a detailed overview, see Rooney, 2005 and Campbell, Pitt, Parent, & Berthon, 2011).
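To make the analysis approach concrete, the sketch below illustrates, in Python, the general idea behind unsupervised concept mapping: frequently occurring words are treated as candidate concepts, and their sentence-level co-occurrence as the links drawn between them on a concept map. This is not Leximancer's proprietary algorithm, which additionally learns a thesaurus of terms around seed concepts and weights co-occurrences probabilistically; the function name concept_cooccurrence, the stopword list, and the sample sentences are illustrative assumptions only, not material from the study.

```python
# Illustrative sketch only: a minimal word/co-occurrence count, standing in for the
# far more sophisticated concept extraction performed by Leximancer.
from collections import Counter
from itertools import combinations
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "that", "it",
             "i", "my", "is", "was", "on", "for", "with", "be", "have", "if"}

def tokenize(sentence):
    """Lower-case a sentence and keep alphabetic tokens that are not stopwords."""
    return [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]

def concept_cooccurrence(transcripts):
    """Count how often word pairs co-occur within the same sentence.

    Frequent words approximate 'concepts'; strong co-occurrence links between
    them approximate the relationships drawn on a concept map.
    """
    word_counts = Counter()
    pair_counts = Counter()
    for transcript in transcripts:
        for sentence in re.split(r"[.!?]+", transcript):
            words = set(tokenize(sentence))
            word_counts.update(words)
            pair_counts.update(combinations(sorted(words), 2))
    return word_counts, pair_counts

if __name__ == "__main__":
    # Hypothetical interview fragments, not actual study data.
    sample = [
        "I only add friends I trust with my information.",
        "If I didn't want people to know, I wouldn't put it on Facebook.",
        "My privacy settings stop strangers finding my information.",
    ]
    concepts, links = concept_cooccurrence(sample)
    print(concepts.most_common(5))
    print(links.most_common(5))
```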
The Leximancer concept map (Figure 1) illustrates the concepts extracted from the text, identifying key themes and concepts, their relative importance, and their interrelations. In exploring the relationships illustrated in the map, we identified six themes (people, privacy, information, friend, use, comments) which assisted interpretation of consumers’ social networking privacy behaviours. The qualitative interpretation of this map involved an abductive research process, in which interpretation moved back and forth between the data (the interview transcripts) and the broad consumer behaviour and privacy concepts identified in the literature. In the following discussion, we focus on four key themes: people, privacy, information and friends, as they provided a rich explanation of consumers’ privacy protecting behaviours.

People emerged as a central, locating theme in the concept map. Emergent within this theme was a sense amongst participants that those most responsible for protecting privacy online are the individual people at risk themselves. This identification of privacy as a personal responsibility relates directly to how participants described their privacy protection behaviours. Specifically, participants talked about how their primary method of avoiding online privacy breaches was simply not to post information they had any concern about. They described these strategies in terms of how they allowed them to exercise the greatest level of personal control over their privacy. By exercising self-control over what information they released online, participants gained confidence in their ability to prevent harmful or unwanted exposure of sensitive information to the broader public.

Figure 1: Concept map of semi-structured interviews

Participants’ characterization of their disclosure management behaviours (White, 2004) in these terms points to a shared concern about the effectiveness of technological solutions for privacy protection, such as Facebook’s privacy settings. Participants expressed a feeling that any information placed online would eventually become public. This was not necessarily due to doubts regarding Facebook’s good faith or ability in implementing these kinds of technological protections. Indeed, participants mentioned how they saw value in Facebook’s settings for removing themselves and their personal information from casual searches, making it more difficult for others to randomly find them online. Rather, participants expressed a feeling that no matter how much effort might be expended in developing a security measure, in the end it would inevitably be compromised. Participants also felt that trying to delete or remove information after it had been placed online was essentially pointless; once information had been placed online, it was ‘out of their control’.

A contributing factor to participants’ lack of trust in technological solutions, and their focus on simple disclosure management strategies for privacy protection, may in part stem from their feelings about how these solutions have been constructed. Several participants expressed concerns regarding the privacy interface provided by Facebook. Specifically, there was a feeling amongst some participants that the workings of the settings had been made deliberately obscure, such that a person might accidentally release information publicly while believing they had secured it from unwanted eyes. Interestingly, these suspicions of bad faith on Facebook’s part in constructing the user interface did not carry over into participants’ comments regarding Facebook’s privacy policies, despite their expressing concerns over the policies’ readability. As found in previous research, some participants in our study acknowledged they had not read and/or completely understood the privacy statements (Milne & Culnan, 2004; Milne et al., 2006). Nonetheless, participants held a strong belief that privacy policies were basically all the same, and therefore saw no real benefit in reading Facebook’s statement. One participant specifically likened it to end-user software license agreements, right down to dealing with the statement by simply clicking the checkbox to say "I agree" and moving on.

One clear implication of how participants described their privacy protection strategies was that the information they did place on their Facebook profiles was information they did not consider personal or private. What exactly constituted personal information, however, varied greatly. Some participants were concerned mostly about demographic information, such as their birthdays or where they live. Other participants had no such concerns, but still talked generally about needing to protect personal information about their hobbies and interests. These differences in definitions of personal information may in part be attributable to how some participants saw a distinction between information in the sense of general demographic details, and information in the sense of other things placed on Facebook, such as status updates or photos. For participants, demographic information was explicit knowledge, whereas things were available for others to view and potentially make personal judgements about. This distinction also explains the separation between these two concepts on the concept map which guided our interpretation of privacy behaviours, with the concept of things being integrated within the core theme of people while information was contained separately in its own distinct theme.

The way in which participants described what they considered personal information also linked to how they conceptualized privacy. Participants did not describe privacy in terms of how Facebook or some other organization might use their personal information. Indeed, when participants did make statements about information being used in this way, they appeared entirely unconcerned about the prospect of being added to a company's email marketing list. While they did view these as intrusions on their privacy, it was only at the level of a minor annoyance that was easily dealt with.
Instead, participants elaborated their conceptualizations of privacy in terms of the social consequences of it being breached: specifically, the loss of standing or status in one or more of the social circles they were involved in, and/or the impact on their interpersonal relations with other people, up to and including the break-up of marriages and families.

There was also a clear juxtaposition between how participants talked about their friends, who drew them into using Facebook, and other unspecified people, to whom they connected their privacy concerns. A specific concern mentioned was future employers’ judgements about hiring and firing based on information gleaned from Google searches and social networking profiles. From the literature (Clark & Roberts, 2010; Smith & Kidder, 2010) and popular media (Casserly, 2012), this is a well-founded concern. Despite the common stereotype of social networkers as “friending” numerous random people and even “collecting” friends (a stereotype noted several times by our participants), and significant differences between participants in their individual “friending” strategies, a common factor in their “friending” decisions was their level of trust in that person. “Friending” was seen as giving people access to potentially sensitive disclosures of information, and participants mentioned how they took care to friend only those who they felt would respect their privacy and not spread information. Participants even engaged in culling their friend lists, removing those they had fallen out of contact with and whom they felt they could no longer rely on to protect their social information. Management of friend lists effectively ensured privacy and thus represents a new style of privacy protection behaviour.

This juxtaposition between friends and other people also carried into participants’ attitudes towards the use of false information disclosures as a means of privacy protection. Whereas past research found false disclosures to be widespread amongst youth on Myspace (Lenhart & Madden, 2007), participants in this study were largely disinclined towards this kind of behaviour. Participants felt that false information would serve no purpose in protecting privacy, given that the “friends” they knew offline would generally know this information anyway, and they were concerned about the possible social consequences of presenting others with a false image of themselves. A notable exception, however, was the use of fake names for their accounts. One participant mentioned how they felt that having attached a fake name to their profile helped to protect them, so that the only people they knew offline who could find them on Facebook would be those they wanted to find them.

Conclusions and Directions for Future Research

The preliminary findings of our analysis suggest that, contrary to the findings of previous researchers (e.g. Norberg et al., 2007), consumers’ privacy protection behaviours are strongly influenced by their privacy concerns. It is not a paradox that consumers are engaging in disclosures of information which may potentially identify them while at the same time stating high levels of privacy concern. This is because how consumers conceptualize privacy, and how they define what is and isn’t private, has changed to the extent that some identifying information is no longer viewed by consumers as needing protection. Consumers’ privacy concerns are being framed not in terms of how organisations may potentially use their data, but rather in terms of the social risk factors associated with information disclosure. This study also suggests there has been a movement towards a view of privacy as a vulnerable but transitive state (Baker et al., 2005), where privacy can never be recovered once lost. There are inherent limitations in the current study associated with the research context (Facebook) and the study’s qualitative research design. Future research can extend our findings by exploring online privacy behaviour on other social media platforms.
Additionally, the privacy experiences outlined here should be explored quantitatively, providing snapshots of users’ experiences across more diverse populations. Finally, research is needed to further explore consumer privacy decision-making, specifically towards understanding the choices consumers make about what information they consider private, and the relationship of these choices to the risk factors that inform their privacy concerns.

References

Antón, A. I., Earp, J. B., & Young, J. D. 2010. How Internet Users’ Privacy Concerns Have Evolved since 2002. IEEE Security & Privacy, 8(1): 21–27.
Awad, N. F., & Krishnan, M. S. 2006. The personalization privacy paradox: An empirical evaluation of information transparency and the willingness to be profiled online for personalization. MIS Quarterly, 30(1): 13–28.
Campbell, C., Pitt, L. F., Parent, M., & Berthon, P. R. 2011. Understanding Consumer Conversations Around Ads in a Web 2.0 World. Journal of Advertising, 40(1): 87–102.
Casserly, M. 2012. Social Media And The Job Hunt: Squeaky-Clean Profiles Need Not Apply. Forbes, June 14, 2012. http://www.forbes.com/sites/meghancasserly/2012/06/14/social-media-and-the-job-hunt-sqeaky-clean-facebook-profiles/
Clark, L. A., & Roberts, S. J. 2010. Employer’s Use of Social Networking Sites: A Socially Irresponsible Practice. Journal of Business Ethics, 95(4): 507–525.
Debatin, B., Lovejoy, J. P., Horn, A.-K., & Hughes, B. N. 2009. Facebook and Online Privacy: Attitudes, Behaviors, and Unintended Consequences. Journal of Computer-Mediated Communication, 15(1): 83–108.
Derlega, V. J., & Chaikin, A. L. 1977. Privacy and Self-Disclosure in Social Relationships. Journal of Social Issues, 33(3): 102–115.
Dwyer, C., Hiltz, S. R., & Passerini, K. 2007. Trust and privacy concern within social networking sites: A comparison of Facebook and MySpace. Proceedings of the Thirteenth Americas Conference on Information Systems.
Ellison, N. B., Lampe, C., & Steinfield, C. 2009. Social Network Sites and Society: Current Trends and Future Possibilities. Interactions, 16(1): 6.
Gross, R., & Acquisti, A. 2005. Information revelation and privacy in online social networks. Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society (WPES ’05): 71.
Kelly, A. E., & McKillop, K. J. 1996. Consequences of revealing personal secrets. Psychological Bulletin, 120(3): 450–465.
Krishnamurthy, B., & Wills, C. E. 2008. Characterizing privacy in online social networks. Proceedings of the First Workshop on Online Social Networks (WOSP 08), 16(3): 37.
Labrecque, L. I., Markos, E., & Milne, G. R. 2011. Online Personal Branding: Processes, Challenges, and Implications. Journal of Interactive Marketing, 25(1): 37–50.
Lenhart, A., & Madden, M. 2007. Teens, Privacy & Online Social Networks: How teens manage their online identities and personal information in the age of Myspace. http://www.pewinternet.org/Reports/2007/Teens-Privacy-and-Online-Social-Networks.aspx
Milne, G. R., & Culnan, M. J. 2004. Strategies for reducing online privacy risks: Why consumers read (or don’t read) online privacy notices. Journal of Interactive Marketing, 18(3): 15–29.
Milne, G. R., Culnan, M. J., & Greene, H. 2006. A Longitudinal Assessment of Online Privacy Notice Readability. Journal of Public Policy & Marketing, 25(2): 238–249.
Milne, G. R., Labrecque, L. I., & Cromer, C. 2009. Toward an Understanding of the Online Consumer’s Risky Behavior and Protection Practices. Journal of Consumer Affairs, 43(3): 449–473.
Norberg, P. A., Horne, D. R., & Horne, D. A. 2007. The Privacy Paradox: Personal Information Disclosure Intentions versus Behaviors. Journal of Consumer Affairs, 41(1): 100–126.
Patton, M. Q. 2002. Qualitative research and evaluation methods. Thousand Oaks, California: Sage Publications.
Poddar, A., Mosteller, J., & Ellen, P. S. 2009. Consumers’ Rules of Engagement in Online Information Exchanges. Journal of Consumer Affairs, 43(3): 419–448.
Rooney, D. 2005. Knowledge, economy, technology and society: The politics of discourse. Telematics and Informatics, 22(4): 405–422.
Smith, A. E., & Humphreys, M. S. 2006. Evaluation of unsupervised semantic mapping of natural language with Leximancer concept mapping. Behavior Research Methods, 38(2): 262–279.
Smith, W. P., & Kidder, D. L. 2010. You’ve been tagged! (Then again, maybe not): Employers and Facebook. Business Horizons, 53(5): 491–499.
White, T. B. 2004. Consumer Disclosure and Disclosure Avoidance: A Motivational Framework. Journal of Consumer Psychology, 14(1-2): 41–51.