The Dishonesty of Big Tech in Privacy and Censorship

    Today, the Big Tech platforms Google and Facebook and their third-party fact-checkers like Politifact are dishonest. This dishonesty matters because these companies market themselves either as objective, like the fact-checking organizations, or as open platforms, like Google and Facebook. The misleading marketing extends to privacy: the platforms claim that user data are safe while remaining opaque about how they use private data. Companies like Google and Facebook need to be more transparent about data usage, more respectful of privacy, and less given to censorship on their platforms.

    Opinions about internet privacy have changed, but those changes are recent, as shown in a 2016 study published in the Journal of Social Studies Research. The study found that high school students in 2016 regarded internet privacy as a non-controversial issue and placed considerable faith in how Facebook and Google used their data. Students also framed online privacy as a matter of weighing pros and cons, believed that shielding one's online privacy was the responsibility of the individual user, and thought that privacy concerns had no substantial impact on society (Crocco et al.). These findings are essential to address if supporters of Big Tech transparency want students to see the dishonesty of the Big Tech platforms and to understand why some people do not see it today. The first finding to address is the trust students placed in Google and Facebook: they reasoned that the platforms were “just ‘customizing’ the ads they see” (Crocco et al.). This trust is a problem because it may keep users from noticing when Facebook or Google uses their data dishonestly. The second finding is how the students framed online privacy: as a trade-off among the values of safety, privacy, and convenience (Crocco et al.). This framing matters because it shows what people concerned with the dishonesty of Google and Facebook must address to make the lack of privacy a serious issue. The final finding is that students saw the adverse effects of lost privacy as falling mainly on individuals rather than on society at large (Crocco et al.). The challenge, then, is to show the younger generation how these platforms' disregard for privacy, and students' trust in them, can lead to problems for society.

    Privacy is also of concern because consumers feel uncomfortable with how online platforms use their data and with how much of it these companies collect. These data include personal information such as age and sex, geographical location, and sensitive information like bank and credit card numbers. Combined with search data, this information can approximate a user's beliefs on a topic. The concern was highlighted when a participant in a study published in the International Journal of Consumer Studies said, “Companies have the primary and ultimate responsibility to protect our information and ensure that our privacy is intact” (Bandara et al. 427). This sentiment points to an underlying truth: companies like Google and Facebook are responsible for keeping their users' data safe from hackers and for being transparent with consumers about how the data are used. The concern only grows given the “exponential” amount of personal data these companies hold on their consumers (Bandara et al. 426-27). Thus, these companies must be transparent about both the use and the security of consumer data.

    The lack of transparency of Big Tech platforms like Google and Facebook is also of concern because they control users by censoring information. This control of the public square leaves many users in “[a] state of powerlessness… due to repeated invasion of consumers’ privacy boundaries, which compels them to perceive that they cannot control their information anymore” (Bandara et al. 428). This “state of powerlessness” is not unwarranted, because companies are vague about the terms of speech in the user agreements and statements of user conduct on their platforms. The consequences of this vague language are demonstrated when Matthew Nicaud writes, “But the vagueness of the language should not mean that physicians have to face a disciplinary hearing before they even have a full clarification on whether or not the Board’s misinformation policy was violated” (Nicaud). Though this statement comes from the medical field, it applies to other topics on social media as well. The way Facebook enforces its misinformation policy can have a similar effect, leading to “the editing and sorting of content presented to individual users, as well as the promotion and suppression of certain pieces of content, … [affecting] the entire flow of content on the platform” (Koltay 278). Through this editing, the platforms can control the public square by emphasizing some stories and censoring others. Algorithms do most of this censoring, which is a problem because algorithms generally reflect the biases of the people who wrote and trained them. That bias matters because the group of moderators who train these algorithms is small; their bias is imbued into the algorithm, which allows this small group to control the information in the public square.

    The problem of censorship also extends to the people fact-checking posts and to the methods they use to control the content on their platforms. These methods include “treating a statement containing multiple facts as if it were a single fact and categorizing as accurate or inaccurate predictions of events yet to occur” (Uscinski and Butler 162). This treatment is a problem because it gives the checkers' own bias a high chance of corrupting the fact check. The corruption is especially troubling if most “cross-cutting” information that fact-checkers encounter comes not from their own social networks but only from random people online (Bakshy, Messing, and Adamic 1130). The problem is shown in a study in the Journal of Communication, which quotes SmartPolitics: “Politifact rated Republican claims to be false three times more often than it rated Democratic claims during Obama’s second term” (Shin and Thorson 5). These data, along with the fact that sites like Facebook rely on Politifact to sort and suppress content on their platforms, are worrying: on top of the platforms' vague terms of speech, the bent of the third-party fact-checkers they use compounds the concern. Science also showed that, of the two political parties, online conservatives tended to receive on average 10% more cross-cutting political content from within their networks than liberals, who got most of their cross-cutting political information from people outside their networks (Bakshy, Messing, and Adamic 1130-31). These data, together with the data from SmartPolitics, suggest that fact-checkers may carry a bias and influence that contribute to censorship on Facebook.

    Finally, even after examining these arguments, many people still do not see the disrespect that Big Tech companies like Facebook and Google, and their third-party fact-checkers like Politifact, have for people's right to privacy and freedom of speech. This contrast is why the finding from Crocco et al. that students saw privacy as a non-controversial issue is so surprising, given the knowledge most people have about the Big Tech platforms, as shown in the statements cited in Koltay and Bandara et al. That the data appear in both Science and the Journal of Communication also strongly suggests that people are unaware of these platforms' and fact-checkers' bias. The platforms also seem to be increasing their censorship over users who either welcome it or are indifferent and do not see its apparent adverse effects, as pointed out by Nicaud and Koltay. Therefore, it is crucial to make users aware that they should protect their privacy and freedom of speech online.

    Thus, if people want to keep their freedom of speech and privacy online, they need to push the Big Tech platforms to be more transparent about their policies on privacy and terms of speech. Users also need to push the platforms to make the meaning of their terms of speech clearer and to stop allowing third-party fact-checkers to dictate the control of information on their platforms. The control these platforms have over the public square requires that people get involved in the debate over privacy and censorship. Furthermore, people need to hold these platforms accountable for censoring users simply for speaking their minds and for collecting so much of their data without disclosing how it is used. Finally, given this control, users must act now, before the platforms' hold on the public square makes privacy and freedom of speech even harder to reclaim.

Works Cited

Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science, vol. 348, no. 6239, 2015, pp. 1130-1132. Google Scholar, https://www.science.org/doi/abs/10.1126/science.aaa1160.

Bandara, Ruwan, et al. “Addressing Privacy Predicaments in the Digital Marketplace: A Power‐relations Perspective.” International Journal of Consumer Studies, vol. 44, no. 5, Sept. 2020, pp. 423–434. EBSCOhost, doi:10.1111/ijcs.12576.

Crocco, Margaret S., et al. “‘It’s Not like They’re Selling Your Data to Dangerous People’: Internet Privacy, Teens, and (Non-)Controversial Public Issues.” Journal of Social Studies Research, vol. 44, no. 1, Jan. 2020, pp. 21–33. EBSCOhost, doi:10.1016/j.jssr.2019.09.004.

Koltay, AndrĂ¡s. “The Private Censorship of Internet Gatekeepers.” University of Louisville Law Review, vol. 59, no. 2, Spring 2021, pp. 255–304. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&db=lgh&AN=151075645&site=ehost-live.

Nicaud, Matthew. “The Medical Misinformation Policy Has Unclear Implications For Social Media.” Mississippi Center For Public Policy, Mississippi Center For Public Policy, 24 Sept. 2021, mspolicy.org/the-medical-misinformation-policy-has-unclear-implications-for-social-media.

Shin, Jieun, and Kjerstin Thorson. “Partisan Selective Sharing: The Biased Diffusion of Fact-Checking Messages on Social Media.” Journal of Communication, vol. 67, no. 2, 2017, pp. 233-255. Google Scholar, https://academic.oup.com/joc/article-abstract/67/2/233/4082394.

Uscinski, Joseph E., and Ryden W. Butler. “The Epistemology of Fact Checking.” Critical Review, vol. 25, no. 2, 2013, pp. 162-180.
