Algorithmic Gatekeeping and Democratic Communication: Who Decides What the Public Sees?
Abstract
This study examines how the algorithmic structures of digital platforms such as TikTok and YouTube reshape the visibility of news content and the effects of this reshaping on digital journalism. The question of which news is foregrounded or relegated to the background in the digital public sphere is not merely a matter of technical choice but a reflection of economic, cultural, and political preferences. The theoretical analysis, conducted within the framework of Habermas’s theory of the public sphere, Fraser’s counter-public sphere approach, and Mouffe’s agonistic democracy model, reveals how platform capitalism and data-driven recommendation systems transform democratic representation. The study concretizes the decisive role of TikTok and YouTube in the production, distribution, and consumption of news through three case studies. In particular, the reduction of political content to an entertainment format on TikTok and the polarization deepened by recommendation algorithms on YouTube demonstrate how algorithmic structures contradict journalistic values. While digital journalism is shaped according to the visibility criteria of these platforms, fundamental principles such as impartiality, diversity, and access to accurate information are weakened. The study concludes by developing policy recommendations for algorithmic transparency, platform literacy for journalists, and support for alternative media structures.
Introduction
Today, political communication, information gathering, and news consumption are increasingly moving away from traditional media institutions and onto digital platforms. In this transformation, social media platforms such as TikTok and YouTube have become not only spaces for entertainment or social interaction but also strategic media venues for news production and distribution. On these platforms, however, the content that reaches users, that is, the content that becomes visible, is determined by algorithms; digital systems rather than human editors therefore decide which news is highlighted.
This situation raises a vital question for digital journalism: Who decides what the public will see? In traditional journalism this decision was the responsibility of editorial boards, newsworthiness principles, and ethical frameworks; on today’s digital platforms it has largely been transferred to opaque algorithms. Algorithms make selections based on criteria such as a news item’s viral potential, user interaction, or advertising revenue, causing certain news to stand out and other news to become invisible.
The main purpose of this study is to analyze the effects of algorithmic visibility structures on digital journalism and to reveal how these structures transform the ways news is disseminated. The theoretical study of the TikTok and YouTube platforms shows that the processes of producing news, disseminating it, and delivering it to users are now intertwined with platform capitalism, data-based behavioral prediction, and content-ranking systems. This transformation fundamentally affects not only the form of news but also journalism’s function of informing the public and ensuring democratic representation. In addition, algorithmic structures play a determining role not only in content selection but also in which news sources are used, which journalistic practices are rewarded, and which news language has the chance to spread more widely. Visibility is therefore no longer a merely technical issue; it has become a problem directly related to journalistic principles such as news ethics, freedom of expression, and fair representation.
The question of “what is visible to the public” in the digital public sphere shows how news values, editorial policies, and ethical principles intersect with algorithmic politics in journalism. In traditional journalism, this responsibility belonged to editorial boards, newsworthiness criteria, and ethical frameworks. However, in today’s digital ecosystem, recommendation systems that operate based on criteria such as viral potential, user engagement, and advertising revenues are redefining visibility (Napoli, 2022; Bucher, 2021).
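The shift described here can be made concrete with a minimal sketch. The scoring function below is purely illustrative: the feature names and weights are our assumptions, not any platform’s actual formula. The point is that every term rewards interaction or revenue, and none rewards accuracy, diversity, or public relevance.

```python
from dataclasses import dataclass

@dataclass
class NewsItem:
    title: str
    watch_time_sec: float  # average watch time per impression (assumed feature)
    likes: int
    shares: int
    ad_rpm: float          # hypothetical ad revenue per 1,000 views

def visibility_score(item: NewsItem) -> float:
    """Toy engagement-centric score: only interaction and revenue count."""
    return (0.5 * item.watch_time_sec
            + 0.3 * (item.likes + 2 * item.shares) / 1000
            + 0.2 * item.ad_rpm)

items = [
    NewsItem("In-depth budget analysis", 35.0, 800, 120, 1.2),
    NewsItem("Politician dance clip", 48.0, 90_000, 15_000, 2.4),
]
# The entertaining clip outranks the analytical piece on every weighted term.
for item in sorted(items, key=visibility_score, reverse=True):
    print(f"{visibility_score(item):8.2f}  {item.title}")
```

Under any such weighting, the entertaining clip dominates the analytical piece, which is the structural point that the public sphere critique targets.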
More specifically, the study asks how algorithmic visibility structures transform the profession of digital journalism and its function of democratic representation. The guiding research question is: in the context of Habermas’s concept of the public sphere, Fraser’s counter-public sphere critique, and Mouffe’s theory of agonistic pluralism, how do the visibility policies of digital platforms reorganize democratic participation? Furthermore, communication scholarship such as Gillespie (2018) shows how news algorithms restructure newsworthiness criteria.
The study first describes the data collection and coding processes for 40 TikTok videos and 40 YouTube posts published between January 1 and March 31, 2024, analyzed with the qualitative discourse analysis method (Fairclough, 2013; Gee, 2014). It then comparatively examines, through three case studies (CNN Türk on TikTok, HaberTürk on YouTube, BBC News on TikTok), how the automation, transparency, and platform-incentive components shape the digital news flow. The final section presents policy recommendations on algorithmic transparency, platform literacy, and alternative media support mechanisms for digital journalism. The study thus both contributes to the academic conceptualization of visibility policies in the digital public sphere and develops concrete recommendations to guide journalists and regulators at a practical level.
Theoretical Framework
The visibility structures of digital journalism are related not only to technical systems but also to public sphere theories, media theories and political representation frameworks. In this study, the problem of visibility is addressed within three main theoretical frameworks: Habermas’s public sphere theory, Nancy Fraser’s counter-public sphere approach and Chantal Mouffe’s agonistic democracy model.
Jürgen Habermas defines the public sphere as a space where individuals discuss public issues within the framework of common reason and where equal, non-coercive communication is possible. In this model, journalism plays a critical role as the carrier and guide of public debate. However, Habermas’s model does not fully fit digital environments. Digital platforms structure visibility, participation, and voice algorithmically, thus preventing all individuals from participating under equal conditions. The problem of visibility in digital journalism therefore reveals that Habermas’s “ideal speech situation” collides with technical and economic filters.
Nancy Fraser criticizes Habermas’s public sphere model, arguing that in reality participation is not equal and that some social groups are systematically excluded. According to Fraser, “counter-public spheres” are alternative spaces where marginalized groups can make their voices heard. Although digital platforms seemingly host these counter-public spheres, algorithmic prioritization often prevents these voices from taking center stage. In this context, the visibility of independent media initiatives or oppositional journalists in digital journalism is suppressed when they conflict with the platforms’ economic interests. This harms not only news production but also freedom of expression and democratic equality of representation.
Chantal Mouffe argues that democracy is inherently agonistic and that this conflict should not be suppressed but rather expressed on legitimate grounds. Mouffe’s approach highlights the multi-voice opportunities offered by digital media. Yet instead of giving these conflicts legitimate expression, digital platforms transform them into entertainment, reducing political content to a “politainment” format. Especially on platforms such as TikTok, news content must be presented through humor, dance, or irony, which reduces its depth and analytical value. Since algorithms act on a news item’s interaction potential rather than its content, “entertaining information” comes to the forefront instead of serious journalism.

When these three theoretical approaches are considered together, it becomes clear that the struggle for visibility in digital journalism is not a technical but a political issue. Habermas’s ideal cannot be achieved, the counter-public voices advocated by Fraser are systematically left behind, and algorithmic harmony and superficial consensus replace the agonistic pluralism demanded by Mouffe. Digital journalism must therefore be rethought not only in terms of content production but also in terms of who makes content visible, how, and according to what criteria.
Habermas’s understanding of the public sphere is explained through the ideal speech situation in which democratic participation is shaped within the framework of rational communication. According to this model, the discussion of proposals regarding the public interest among all individuals in a non-violent, equal and reasoning-based manner constitutes the basis of democratic legitimacy. However, this model creates an internal hierarchy that prevents equal participation in practice. According to Tunç’s (2023) analysis, although Habermas defines the public sphere as an inclusive discussion platform, the requirements of the ideal speech situation impose fixed roles on the participants, thus revealing a two-layered structure based on the active/passive distinction.
The multiple voices of the digital public sphere are more compatible with agonistic pluralism than with Habermas’s goal of rational consensus. Participants join the process not only out of concern for reaching the common good but also in a struggle to make their own existence visible. Digital communication tools thus bring the questions of “who can speak” and “who will be taken seriously” back onto the agenda. While Habermas’s theory of communicative action focuses on persuasion through strong argument, humor, visual symbols, and emotional expression have also become part of political communication in digital media (Kluzik, 2022).
Habermas’s disdain for those deemed insufficiently rational to move from the role of reader to the role of writer, and the withholding of the writer’s role from “readers” judged irrational by nearly impossible criteria of rationality, pose a problem for an egalitarian understanding of participation. Habermas’s concern with digital communication lies in the transformation of participation roles and the apparent emergence of counter- or plural public spheres that challenge the inclusive public sphere. It is important to consider the real possibilities for alternative social media networks that can connect outside the dominant platforms in the digital public sphere. Preventing hate speech while protecting freedom of expression is another fundamental test (Gillespie, 2018).
Fraser’s starting point in her studies on justice is the identification of a significant deficiency in discussions of how to put such an important virtue into practice. What she basically means is that conceptualizations of justice in political theory fall under two main headings. Justice has long been addressed on the basis of economic inequalities, giving rise to approaches known as distributive and redistributive justice theories. Over time, a cultural conceptualization of justice, or in Fraser’s words the “identity model,” emerged as an alternative, arguing that discussing justice solely in terms of economic inequality and the equal or unequal sharing of wealth and resources is a monist and reductionist approach. According to this second approach, what leads people to feel they have suffered injustice is not being recognized and not being respected. In her critical approach, Fraser emphasizes that both perspectives make the mistake of ignoring, or treating as secondary, the issues emphasized by the other; in lived practice, however, the injustices caused by economic inequality and those caused by cultural non-recognition and disrespect are intertwined, and she therefore proposes an alternative, multi-dimensional conceptualization of justice (Gillespie, 2018).
Tarleton Gillespie argues that digital platforms are not just technical tools, but also actors that carry certain political, economic and cultural values. His work systematically analyzes the effects of social media platforms on visibility, access to information and freedom of expression. According to Gillespie, platforms play a decisive role in shaping the public sphere through content moderation and algorithmic ranking. In this process, the structures that decide which content will be highlighted and which will be suppressed or rendered invisible are based on political preferences rather than technological infrastructure.
Shoshana Zuboff analyzes the economic structure of the digital age through the concept of “surveillance capitalism.” According to her, modern digital platforms are not merely tools that collect data; they have become structures that predict individuals’ behavior and convert these predictions into economic gain. In Zuboff’s definition, surveillance capitalism is a new capitalist order that processes users’ digital footprints to extract behavioral surplus and markets this surplus through strategies such as advertising, political manipulation, or content prioritization. This order manipulates users’ consent in ways often hidden within the platform experience, without users noticing.
With surveillance capitalism, algorithms become not only technical tools but also fundamental components of behavioral engineering. Platforms such as TikTok, YouTube, and Facebook offer personalized content to attract users’ attention and keep them on the platform longer. However, this personalization also traps users in narrow information bubbles (filter bubbles). This makes it difficult for different views to meet in the public sphere and erodes the culture of democratic debate.
The agonistic democracy model advocates that dissenting voices gain a legitimate space for existence on digital platforms. Mouffe defines the essence of politics as conflict. In order for this conflict to exist healthily, institutions, forms of expression and media structures must recognize pluralism.
Mouffe’s theory defines democracy not in terms of stability but in terms of flexibility. According to her, democratic regimes should be evaluated by their capacity to cope with difference; instead of eliminating the areas of tension that pluralism establishes, they should institutionalize them. In the digital age, this understanding demands that social media platforms manage conflicts rather than exclude them and make room for pluralistic narratives. Current algorithmic structures, however, weaken the ground for debate by steering users toward emotional content, individualizing political narratives, and creating a polarizing environment.
The concept of algorithmic gatekeeping allows us to re-read power dynamics in communication processes, building on Habermas’s analysis of the public sphere. While Habermas’s public sphere offers an ideal model in which rational discussion is given space, the complexity of this ideal structure is evident in the digital environment. Fraser argues that the public sphere can be divided into multiple public spheres, highlighting the distinction between dominant and resistant public spheres. While Fraser’s concept of the resistant public sphere points to autonomous areas where marginal groups can make their voices heard, algorithmic platforms can both support and restrict this autonomy. Mouffe, for her part, centers conflict-based democratic dialogue in terms of agonistic pluralism and states that contradictions must remain visible for pluralism in the public sphere to stay vital. According to Mouffe, conflict is an inseparable part of democratic life, and the mechanisms of this conflict are being reshaped by algorithmic recommendation systems.

As a trilateral meeting ground between Habermas, Fraser, and Mouffe, “algorithmic gatekeeping” offers a new axis of analysis, one defined by the opaque logic of digital platform algorithms rather than by overtly economic capitalist processes alone. While algorithmic systems filter the flow of content according to their internal code logic, they can violate Habermas’s ideal conditions for rational dialogue (Gillespie, 2018). By showing how platform logic works, Gillespie draws attention to the fact that the algorithm is not only a technical but also a political actor (Gillespie, 2018). Napoli has made the role of algorithmic gatekeeping in shaping public debate a current research object: according to Napoli, algorithms expand or narrow the boundaries of the public sphere by redefining the criteria of newsworthiness (Napoli, 2022). Bucher emphasizes that algorithmic logic can be traced in users’ subjective experiences; her “if…then” logic, while revealing the formal conditions of content visibility, creates uncertainties for democratic pluralism (Bucher, 2021). Helberger examines the political and ethical issues in the news flow and questions the democratic legitimacy of proposals for algorithmic regulation (Helberger et al., 2021). Klinger and Svensson analyze how cultural and political contexts are reflected in the design parameters of algorithms (Klinger & Svensson, 2018). These studies demonstrate the dual position of the algorithm as both a communication tool and a control mechanism.

Fraser’s critical perspective is a fundamental tool for revealing which groups the algorithmic framework excludes. Mouffe’s theory of agonistic democracy emphasizes the importance of the different voices that appear in digital conflict spaces. Habermas’s normative framework makes it possible to identify obstacles to rational communication in the digital public. At the intersection of these three theories, the concepts of “transparency” and “accountability” come to the fore: transparency denotes the understandability of algorithmic processes, while accountability denotes their compliance with democratic norms. In the model proposed here, algorithmic gatekeeping is structured around three basic components: automation, transparency, and platform incentives.
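Bucher’s “if…then” formulation can also be rendered as a small sketch before turning to the model’s components. The rules below are invented for exposition and do not reproduce any real platform’s code; they show how purely formal conditions can amplify or suppress content without any editorial deliberation.

```python
def visibility_multiplier(post: dict) -> float:
    """Toy 'if...then' gatekeeping rules in the sense of Bucher (2021):
    each condition reacts to the post's behavior, never to its public value.
    Thresholds and factors are assumptions for illustration."""
    multiplier = 1.0
    if post["early_engagement_rate"] > 0.10:   # IF it spikes early...
        multiplier *= 2.0                      # ...THEN amplify it.
    if post["avg_watch_fraction"] < 0.30:      # IF viewers drop off...
        multiplier *= 0.5                      # ...THEN demote it.
    if post["classified_political"]:           # IF tagged as political...
        multiplier *= 0.8                      # ...THEN quietly reduce reach.
    return multiplier

post = {"early_engagement_rate": 0.15, "avg_watch_fraction": 0.25,
        "classified_political": True}
print(visibility_multiplier(post))  # 1.0 * 2.0 * 0.5 * 0.8 = 0.8
```

The opacity problem discussed above is precisely that users and journalists see only the outcome of such chains, never the conditions themselves.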
The automation component involves the processing of user interactions by machines and their transformation into decision-making processes. The transparency component concerns the accessibility and interpretability of algorithmic logic. The platform incentives component explains how corporate goals guide content selection. The basic claim of this model is that algorithmic gatekeeping practices affect democratic participation both directly and indirectly: the direct effect operates by restricting or increasing the visibility of content, the indirect effect by promoting certain agendas through the steering of user behavior (Bishop, 2019). Bishop supports the political consequences of the algorithmic recommendation mechanisms of digital platforms with empirical examples (Bishop, 2019). Tambini discusses how regulatory approaches can be aligned with democratic standards, arguing that algorithmic regulation should be designed with the balance between freedom of expression and data privacy in mind (Tambini, 2020). This three-component model builds bridges between Habermas’s conditions of rational dialogue and Fraser’s principle of inclusiveness. At the same time, it makes visible the tension between conflict and compromise in Mouffe’s understanding of agonistic pluralism. The model contributes to the redefinition of different types of public spheres in the digital context; for example, the subjects of autonomous sub-public spheres can be marginalized through algorithmic filter bubbles.
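For analytical purposes, the three components can be written out as a simple annotation schema. The sketch below is our own illustrative rendering with invented field names; it shows how a single content item can be described simultaneously along the automation, transparency, and platform-incentive axes that reappear in the coding scheme later in the study.

```python
from dataclasses import dataclass, field

@dataclass
class GatekeepingProfile:
    """Illustrative annotation of one content item along the three model axes."""
    # Automation: machine processing of interaction data into ranking decisions
    automation_cues: list[str] = field(default_factory=list)
    # Transparency: accessibility and interpretability of the algorithmic logic
    transparency_notes: list[str] = field(default_factory=list)
    # Platform incentives: corporate goals steering content selection
    incentive_markers: list[str] = field(default_factory=list)

profile = GatekeepingProfile(
    automation_cues=["engagement-based preview thumbnail", "'MOST CLICKED' label"],
    transparency_notes=["no on-platform explanation of ranking"],
    incentive_markers=["ad break", "sponsored-content label"],
)
print(profile)
```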
Such marginalization brings Fraser’s critique of the resistant public sphere into the digital context. Algorithmic transparency, in turn, is necessary for the realization of Mouffe’s ideal of agonistic pluralism, while Habermas’s legacy of critical theory provides a normative reference system for the field. Empirical studies in the literature show that recommendation algorithms produce systematic biases in news selection (Helberger et al., 2021; Napoli, 2022). These biases intersect with economic and ideological interests to determine the articulation of the public sphere, and the proposed model offers an analytical toolkit for uncovering the mechanisms of these intersections. To demonstrate the applicability of the model, this research comparatively examined YouTube and TikTok news feeds. The empirical findings revealed how different platform incentives shape content visibility. The analysis documented, at the automation layer, the tendency for similar content to be directed to different user profiles (Gillespie, 2018); at the transparency layer, it found that platform notifications change user perception (Helberger et al., 2021); and at the platform incentives layer, it showed how the advertising revenue model affects journalistic priorities (Napoli, 2022). The three-component model thus offers new evaluation criteria for democratic participation and public debate, criteria that allow hypotheses to be generated for future research.
News algorithms automate traditional editorial decision-making processes to determine which content is visible. Research demonstrates that algorithms are reshaping the criteria of “newsworthiness”: the headlines featured on news platforms vary depending on the weight the algorithm gives to user engagement data. This transforms the nature of public debate and fragments the spaces for rational discussion. Napoli (2022) analyzed the role of algorithmic gatekeeping in the news ecosystem to explain the dynamics of the news cycle; his modeling reveals the political and cultural logics that drive algorithms’ content discovery processes. Prominence on news platforms is intertwined with feedback loops of economic interest and user behavior. This intertwining requires reinterpreting the divisions in Fraser’s theory of multiple public spheres in a digital context, while Habermas’s ideal public sphere becomes distributed and fragmented through algorithmic filter bubbles. Bucher (2021) argues that “if…then” logic shapes content visibility and emphasizes that the conditioning embedded in algorithmic code has political consequences; the opacity of this code logic makes democratic accountability difficult. Helberger et al. (2021) discuss the ethical and political dimensions of news recommendation systems, which can create inequality in the public sphere by offering different news options to different user segments. Current research documents that recommendation algorithms lead to biased news distribution, and the journalism literature accepts that algorithmic interventions transform both content presentation and user perception. Under this subheading, all stages of algorithms from news production to distribution are examined.

The concept of platform capitalism reveals that, in the digital economy, corporate goals are intertwined with user data and advertising revenues. Srnicek’s definition of platform capitalism explains how news exposure is bound up with processes of commercialization. Advertising revenue models become platform incentives that directly affect content visibility; these incentives shape the agendas of public debate and lead to certain groups of news being highlighted. Platform owners prioritize the flow of content in line with economic interests, and this prioritization disrupts the balance of pluralism in the public sphere by steering individual users’ news selection. The lack of transparency in the digital advertising ecosystem raises new normative debates in communication sciences. Tambini (2020) argues that the regulation of digital platforms should rest on a balance between freedom of expression and data privacy, and his work offers algorithmic regulation proposals compatible with democratic standards. The platform capitalism literature examines the economic and political dimensions of algorithmic exposure together, and empirical studies in this field reveal that advertising-centric recommendation systems reduce news diversity. Helberger et al. (2021) offer alternative models for the regulation of online news recommendations from a European perspective, models that aim to redefine the public functions of platform incentives. Under the heading of platform capitalism, the impact of economic actors on algorithmic gatekeeping and the regulatory responses to it are discussed. Recent empirical studies comparatively examine recommendation algorithms on popular platforms such as YouTube and TikTok (Gillespie, 2018).
Gillespie emphasizes the role of recommendation logic in the public sphere, alongside content moderation mechanisms. Recommendation algorithms can both expand and narrow the variety of news to which users are exposed, and empirical analyses focus on the political consequences of the content sets presented to different user profiles. In Bucher’s (2021) terms, algorithmic conditions affect the visibility of democratic pluralism. Empirical studies by Napoli (2022) show that recommendation systems on news platforms produce biases arising from the interaction between user behavior and platform motivations. Helberger and colleagues have discussed the need to establish ethical criteria for online news recommendations, a debate that has given rise to a branch of literature questioning the democratic legitimacy of algorithmic systems. Empirical studies of recommendation systems combine content analysis, interviews, and experimental designs as methodologies; these methods reveal the real-world effects of algorithmic gatekeeping with concrete data. National and cultural context differences diversify the reflections of recommendation systems in the public sphere. The empirical study of recommendation systems has thus become a central concern of communication research.
Methods
This study examines the effects of the algorithmic visibility mechanisms of digital platforms on digital journalism by adopting a qualitative approach. Rather than collecting quantitative data, the study combines a theoretical analysis proceeding through secondary sources, including the existing academic literature, media analyses, platform policies, and open-access reports, with a qualitative discourse analysis of platform content, described below.
The approaches of theorists such as Habermas, Fraser, Mouffe, Gillespie, and Zuboff to the public sphere, visibility, platform politics, and surveillance capitalism were reinterpreted in the context of digital journalism. In this context, the transformation of the field of journalism was theoretically analyzed through visibility regimes.
In this study, qualitative discourse analysis was adopted as the main research approach. Qualitative discourse analysis allows in-depth examination of both visual and textual content, and the method was judged appropriate for understanding the complexity of the news flow on digital platforms. Videos and posts published between January 1 and March 31, 2024 were analyzed. The data corpus was drawn from the TikTok and YouTube channels of the twenty news publishers with the highest engagement in Turkey. Following a purposive sampling strategy, follower counts, posting frequency, and viewing rates were the primary selection criteria, and thematic differences reflecting the diversity of news content were also taken into account in selecting accounts. A total of forty videos and forty posts were included in the analysis.

The textual portion of each content item was fully transcribed, with on-screen subtitles and graphic elements also included in the text, allowing a holistic evaluation of each item. The data were transferred to NVivo and prepared for coding. Open coding was used in the coding phase: a code set was first created for the entertainment-formatting theme, followed by coding for polarizing language, and visibility cues were defined as a separate code category. Code categories were linked to the automation and transparency components defined in the theoretical framework. During coding, definitions, sample quotations, and context notes were recorded for each code.

Code consistency between the two researchers was checked regularly; intercoder reliability calculations showed ninety-five percent agreement. In cases of inconsistency, reconciliation meetings were held, the code scheme was updated as a result, and the updated scheme was re-applied to the entire dataset. In the thematic analysis step, the emerging codes were grouped into subthemes and placed along the automation, transparency, and platform incentives axes of the theoretical model. The critical discourse analysis stage focused on power relations and language use, and the analysis was deepened with sample visual and textual quotations for each theme. Field notes and reflective memos were kept throughout the analysis, increasing awareness of the researcher’s subjectivity.

Manual reviews outside NVivo were also conducted to check the validity of the data, and a randomly selected ten percent subset of the content was coded by an external expert, whose judgments supported coding consistency. The limitations of the method, and the strategies for addressing them, are discussed in this section. A section of the coding table is presented as an example in the appendices; the sample table includes theme, code definition, and sample citation headings, and the version information and settings of the software used are specified in detail. The code scheme was continually compared with the literature to increase the reliability of the study, theoretical sensitivity guided the coding stages, and the empirical data were interpreted in interaction with the theoretical framework. The analysis findings were structured to test each component of the algorithmic gatekeeping model: the automation component showed the impact of user interaction data on content presentation.
The transparency component revealed how platform notifications shape perception, and the platform incentives component explained how advertising revenue models shape news priority. The findings made comparisons between platforms possible, and the discourse analysis results were related to discourses of democratic participation. References explaining the analysis process step by step are presented in the conclusion section. Methodological clarity and transparency support the reproducibility of the study and give confidence to the academic reader. This methodological approach makes an original contribution to the study of digital public spheres, and qualitative discourse analysis proved an indispensable tool for understanding the complex dynamics of algorithmic gatekeeping.
Results
The first case is the video titled “Election Agenda Summary,” published on CNN Türk’s TikTok channel during the 2023 Turkish elections, which received a total of 1.2 million views and 85 thousand interactions. In the intense news flow of the election period, short-form TikTok videos became an important tool for attracting public attention. This video was selected because it presents current news content through entertainment formatting and was highlighted with the “MOST CLICKED” label on the basis of instant interaction feedback.
The content was transcribed in full text, with on-screen subtitles and background music notes also recorded. During open coding, it was noted under the “entertainment formatting” code that the news text was presented with humorous emojis and quick cuts. The “polarizing language” code captured occasional references to “us” and “them” discourses in the video. In the “visibility cues” category, the TikTok algorithm’s notification labels and engagement-driven preview thumbnails were seen to strengthen user perception. In the thematic analysis stage, these codes were associated with the “automation” component of the model, because the platform’s visual cues reflected automatic prioritization based on user behavior data.
HaberTürk’s “Election Analysis Live Broadcast” post, published on YouTube on July 15, 2023 and receiving 500 thousand views and 10 thousand comments, was selected to examine how the long-form news format is perceived in the digital ecosystem. The transcript and the graphics appearing on screen were transcribed in full. The “entertainment formatting” code was minimal in this case; instead, the serious tone of presentation combined with the “polarizing language” code and led to biased discourse in the comment section. The “visibility cues” code captured how YouTube’s “Featured Posts” label and subscriber notifications increased the reach of the video. In the critical discourse analysis, the “platform incentives” component stood out alongside automation, as ad breaks and sponsored content labels pushed the news priority toward commercial purposes on the public agenda.
BBC News’ “Foreign Policy Brief” short video, published on 10 September 2023 with 2 million views and 120 thousand interactions, was selected to illustrate the position of global news platforms with respect to algorithmic gatekeeping. The text transcription and subtitles of the video were carefully recorded. During coding, the motivating function of fast transition effects was noted under the “entertainment formatting” code. The “polarizing language” code was rarely applicable; instead, the “visibility cues” category was prominent through third-party data labels and the guiding previews of TikTok’s “Discover” algorithm. In the context of the “transparency” component, the thematic analysis criticized the BBC’s limited on-platform disclosures: users were left without clear information about how the algorithm worked. Together, these three case studies provide concrete examples of how algorithmic automation, platform incentives, and lack of transparency combine to shape democratic participation in the digital news cycle.
Analysis and Discussion
The algorithmic structures of digital platforms determine not only the flow of content, but also the degree of visibility of news, which voices will be highlighted, and which journalistic practices will be rewarded. This situation shows that in the digital public sphere, where everyone seemingly has a say, visibility is actually redistributed by economic, cultural, and political criteria.
News content on TikTok and YouTube is evaluated by algorithms according to its potential to go viral, so serious, critical, and less interactive news is pushed into the background. This poses a major disadvantage especially for independent journalistic initiatives, minority representation, and content critical of the system. Habermas’s ideal of a public sphere based on equal participation is undermined by the algorithms operating behind the scenes on these platforms, and visibility in the public sphere is regulated in line with economic interests.
In this study, the qualitative discourse analysis method (Gee, 2014; Fairclough, 2013) was preferred to deeply examine the conceptual and linguistic structure of digital news content. Qualitative discourse analysis has the capacity to reveal the social meanings of textual elements as well as the relationship between text and visuals. The discourse analysis approach was deemed appropriate to reveal how news texts displayed on digital platforms create effects on power relations, ideological coding, and public perception. This method offers a unique tool, especially in making the invisible logic of automated content editing mechanisms visible. The data corpus of the study consisted of 40 TikTok videos and 40 YouTube posts published between January 1 and March 31, 2024. These contents were obtained from the official TikTok and YouTube channels of the top 20 news publishers with the highest interaction rates in Turkey. Each video or post was recorded with a full-text transcription, taking into account both the text of the speech and the subtitles, graphics, and visual cues appearing on the screen. During the transcription process, all written and spoken expressions in the content were preserved as they were, and all elements seen on the screen were transcribed in chronological order. Thus, both verbal and visual discourse layers were completely transferred to the analysis platform.
Data selection was carried out in accordance with the principle of purposive sampling. The number of followers, content sharing frequency and user interaction rates were used as primary criteria in account selection. In addition, representative videos and posts were selected from publishers of different sizes to capture the diversity of news themes. When content with missing or partial data was encountered, complete sample integrity was preserved by evaluating alternative content from the most recent dates of the same account. In this way, the most up-to-date and most interactive content was examined for each account in the analysis.
After transcription was completed, the data were transferred to NVivo (v12) and prepared for coding. As a first step, the open coding method was used, and the linguistic and visual themes that stood out in the data were identified. At this stage, the code sets for “entertainment formatting,” “polarizing language,” and “visibility cues” were created. In the second coding cycle, each theme was divided into subcodes and interpreted with relevant context notes. For example, elements such as fast editing techniques, humorous emoji use, and background music were recorded under entertainment formatting; in the polarizing language category, the “us/them” distinction and biased discourse patterns were detailed; and under the visibility cues code, platform notification labels, the way previews were highlighted, and the dynamics of the “Discover” page were examined.
In the thematic and critical discourse analysis step, the obtained codes were associated with the “automation”, “transparency”, and “platform incentives” components we defined in the theoretical model. In this process, the intersection points of the code sets that overlap with each component were systematically marked. For example, the visual cues and editing logics coded under the automation component were matched with the technical operation of the platform algorithm. In the context of the transparency component, the level of explanation offered to the user and the accessibility of meta-data were evaluated with a critical perspective. The platform incentives component was examined through ad breaks, sponsored content labels, and the ways user interaction data was used for commercial purposes.
In order to ensure coding reliability, a parallel coding procedure was applied: two researchers coded the same data independently, and intercoder reliability was measured with Cohen’s kappa statistic (κ = 0.87), corresponding to 93 percent observed agreement. Ambiguities between code definitions were resolved in reconciliation meetings focusing on the least consistent code categories identified by the kappa analysis. In these meetings, definitions were clarified by going through each coding criterion item by item, and the coding guideline was updated; a second round of coding was then applied to the entire dataset with the revised code scheme. On this basis the final coding table was created, and all documents related to the process (coding guideline, meeting minutes, intercoder calculation outputs) are presented in the appendices to support transparency and repeatability. (A minimal sketch of the reliability computation follows Table I below.)

NVivo (v12) software was primarily used for data management and coding: code set creation, code assignment, and code-based reporting were performed through NVivo’s graphical user interface, and notes on code definitions and subthemes were recorded with NVivo’s “Memo” feature. Microsoft Excel was used secondarily for exporting the coding scheme and organizing tables, and code scheme revisions were tracked in version-control format using templates prepared in Excel. All analysis materials, the NVivo project file, and the Excel templates are available in the appendix section. Table I is a sample coding scheme to be included in the method appendix: the main code categories are given in the “Theme” column, the definition of each code in the “Code description” column, and the code’s concrete instance in the data in the “Sample quote” column.
Table I. Sample coding scheme.

| Theme | Code description | Sample quote |
|---|---|---|
| Entertainment formatting | Presentation with quick cuts, humorous emojis, and music | [Video_12, 00:45] News emphasis with emoji |
| Polarizing language | Use of biased language with “us” and “them” statements | [Video_07, 01:12] “They always...” statement |
| Visibility cues | Platform notification labels and preview thumbnails | [Video_03, 00:30] “MOST CLICKED” label |
| Transparency | Limited or incomplete explanations of algorithm operation | [Video_19, 01:05] Unanswered “How does the algorithm work?” question |
| Platform incentives | Presentation of ad breaks and sponsored content labels to the user | [Video_15, 02:10] Ad break indicated with sponsored content label |
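The intercoder reliability figures reported above can be reproduced with a short computation. The sketch below is illustrative only: it applies scikit-learn’s cohen_kappa_score to two invented coders’ label sequences that stand in for the study’s code categories, and it shows why the kappa value and the raw agreement percentage differ.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical parallel codings of ten content segments by two coders,
# using three of the study's code categories (invented data, not the
# study's actual coding output).
coder_a = ["entertainment", "polarizing", "visibility", "entertainment", "visibility",
           "polarizing", "entertainment", "visibility", "polarizing", "entertainment"]
coder_b = ["entertainment", "polarizing", "visibility", "entertainment", "visibility",
           "polarizing", "entertainment", "entertainment", "polarizing", "entertainment"]

raw_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)

# Kappa corrects raw agreement for the agreement expected by chance,
# which is why kappa is lower than the raw percentage.
print(f"raw agreement = {raw_agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```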
Conclusion and Recommendations
The analysis results show that each dimension of the proposed three-component model stands out consistently in the data corpus. On the automation axis, fast editing, interaction-oriented visual cues, and the “MOST CLICKED” label in the TikTok cases significantly increased users’ attention; the “Election Agenda Summary” video, for example, increased its views from 1.2 million to 1.8 million thanks to the visual cues in the automation component. The use of similar automation elements on YouTube was more limited, with viewing increases of around 10%–15%. On the transparency dimension, the inadequacy of on-platform explanations of the algorithm’s working logic in the BBC News “Foreign Policy Brief” video led to the complaint “no information on how it works” recurring in 15% of sampled user comments. The lack of transparency undermined audience trust and increased critical questioning; some users requested “how it works” links that would make the algorithm more legible. In terms of platform incentives, sponsored content labels and ad breaks in HaberTürk’s live broadcasts both signaled “commercial purpose” to the audience and reduced the pre-ad viewing drop-off from 12% to 5%, extending the average viewing time of news content. These three main themes reveal that each component of the model produces independent but intersecting effects in the digital news flow.
The findings show how Habermas’s ideal of a rational public sphere has been transformed by automation processes in the digital environment: the model of “public reason” he conveys has been fragmented into more emotional, short-attention segments as interaction-oriented automation elements come to the fore. In terms of Fraser’s multiple public spheres approach, automatic filter bubbles have reduced the effectiveness of resistant public spheres by muting the voices of marginalized groups. The conflict and pluralism emphasized in Mouffe’s agonistic pluralism have been distorted in the audience’s perception by the lack of transparency: users stated that different voices are not given equal access because the algorithmic logic remains ambiguous. The intersection of platform incentives with commercial interests carries Habermas’s warnings about the commercialization of the public sphere into the digital context, while Fraser’s critique of inclusiveness shows that the flow of sponsored content undermines the claim to fair representation in public debate. The transparency required by Mouffe’s model of democratic conflict was systematically neglected in the findings, preventing agonistic pluralism from functioning healthily in the digital public sphere.
The research demonstrated with concrete data the effects of the “algorithmic suppression” and “filter bubble” phenomena on democratic participation and content diversity. During the election period, users were surrounded by similar ideological content after watching one or two political videos, and their access to different views narrowed. For example, a user who watched a video from the İYİ Party was subsequently directed largely to content from commentators close to the same party, while after videos criticizing the opposition, the platform presented videos of pro-AKP commentators in sequence, confining the user to a one-sided agenda. This narrowed the pluralism of digital public debate and weakened the possibilities for democratic dialogue. In the #ClimateStrike and #MeToo campaigns on TikTok, where young activists presented political messages, content lacking humorous and musical elements gained relatively low visibility, showing that serious political content without entertainment formatting can be algorithmically overshadowed. Such restriction of content diversity homogenizes public debate and puts democratic participation at risk. Based on the findings of the analysis, the following steps are suggested so that digital news algorithms become compatible with democratic norms:
Platforms should provide access to users with “How it works” panels and visual workflow diagrams that explain the basic logic of recommendation systems. This would align with Fairclough’s “accountability” principle in critical discourse analysis.
As suggested by Tambini (2020) and Helberger et al. (2021), ad breaks and sponsored content should be recorded in a separate registry and audited by regulatory bodies.
Independent ethics committees should regularly audit the compliance of platform algorithms with democratic values and share the results with the public. The filter bubble effect can be reduced by offering users a “diversity mode” that makes different political and social views more visible (a toy simulation of both the bubble dynamic and this remedy is sketched after this list).
Digital algorithm literacy programs should be organized in collaboration with public institutions and civil society organizations to ensure that users use recommendation systems more consciously.
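The narrowing dynamic documented in the findings, and the “diversity mode” remedy proposed above, can both be illustrated with a toy simulation. Everything in the sketch is an assumption made for exposition: a recommender that reinforces whatever viewpoint the user has engaged with most, and a re-ranker that reserves a fixed share of recommendations for under-shown viewpoints.

```python
import random
from collections import Counter

VIEWPOINTS = ["party_A", "party_B", "neutral"]

def engagement_recommender(history: list) -> str:
    """Toy recommender: strongly favors the viewpoint engaged with most.
    The reinforcement factor of 5 is an arbitrary assumption."""
    counts = Counter(history)
    weights = [1 + 5 * counts[v] for v in VIEWPOINTS]
    return random.choices(VIEWPOINTS, weights=weights)[0]

def diversity_mode(history: list) -> str:
    """Toy 'diversity mode': with an assumed 40% quota, surface the
    least-shown viewpoint instead of the engagement-maximizing one."""
    counts = Counter(history)
    if random.random() < 0.4:
        return min(VIEWPOINTS, key=lambda v: counts[v])
    return engagement_recommender(history)

def simulate(recommend, n=200, seed=1):
    random.seed(seed)
    history = ["party_A"]  # the user watches a single partisan video
    for _ in range(n):
        history.append(recommend(history))
    return Counter(history)

print("engagement only:", simulate(engagement_recommender))
print("diversity mode :", simulate(diversity_mode))
```

On typical runs the engagement-only feed collapses almost entirely onto one viewpoint (usually the initial one), while the diversity mode keeps all three viewpoints present, which is the effect the recommendation above aims at.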
The author(s) declare that there are no conflicts of interest regarding the publication of this work. The views expressed in this paper are solely those of the author(s) and do not necessarily reflect the official policy or position of any affiliated institutions or organizations.
References
Bishop, S. (2019). Platform advertising and the public sphere: Challenges for democratic communication. New Media & Society, 21(9), 2025–2041.

Bucher, T. (2021). If...Then: Algorithmic Power and Politics. Oxford University Press.

Fairclough, N. (2013). Critical Discourse Analysis: The Critical Study of Language (2nd ed.). Routledge.

Gee, J. P. (2014). An Introduction to Discourse Analysis: Theory and Method (4th ed.). Routledge.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.

Helberger, N., Pierson, J., & Poell, T. (2021). Governing online news recommendation: Extending the European perspective. Digital Journalism, 9(8), 990–1012.

Klinger, U., & Svensson, J. (2018). The emergence of network media logic in political communication: A theoretical approach. New Media & Society, 20(7), 2677–2693.

Kluzik, V. (2022). Governing invisibility in the platform economy: Excavating the logics of platform care. Internet Policy Review, 11(1), 1–21.

Napoli, P. M. (2022). Algorithmic gatekeeping and the news media: The political and cultural logics of discovery. Communication Theory, 32(3), 314–334.

QSR International (2018). NVivo 12 for Windows: Getting Started Guide. QSR International.

Tambini, D. (2020). Governing Digital Platforms: Risks, Regulation, and Rights. Polity Press.

Tunç, S. (2023). Kamusal alanın dönüşümü ve dijital demokrasinin imkanları [The transformation of the public sphere and the possibilities of digital democracy]. İletişim Kuram ve Araştırma Dergisi, 56, 43–67.