
SURVEY

 

The survey of journalists was conducted in order to measure journalistic role conceptions and perceived role enactment. 

 

Sampling and data collection

The journalists surveyed were those who worked in 2020 at the media outlets on which the content analysis was based.  

 

The media outlets used in the study differed in the number of journalists they employed. Some outlets were small (fewer than 50 journalists in the newsroom), some were medium-sized (50 to 200 journalists), and some were large (more than 200 journalists). We therefore used quota samples of journalists, scaled to the size of each newsroom, to match their responses with the average content of their news media organizations. 

To calculate the minimum number of responses required per outlet, based on its size, we performed a power analysis for multilevel models with three levels (i.e., individual journalists nested within news organizations nested within countries), as these models represent the most complex analytical approach we would potentially use. The analysis indicated that only news organizations for which data were collected from at least four journalists should be included. For small newsrooms, we therefore included all media outlets with at least four cases in the analyses. We doubled this threshold for medium newsrooms and tripled it for large newsrooms. Consequently, to construct samples that enable comparisons across news media organizations and countries, we included all medium-sized newsrooms with at least 8 cases and all large newsrooms with at least 12 cases.1
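
For illustration, this size-based inclusion rule can be expressed as a short sketch. The function names below are hypothetical, not from the project's own code; the thresholds are those described above.

```python
# Minimal sketch of the inclusion rule: an outlet qualifies only if it reaches
# the minimum number of survey respondents for its newsroom size band
# (>= 4 for small, >= 8 for medium, >= 12 for large newsrooms).

def minimum_responses(newsroom_size: int) -> int:
    """Return the minimum number of journalist responses required per outlet."""
    if newsroom_size < 50:        # small newsroom (< 50 journalists)
        return 4
    elif newsroom_size <= 200:    # medium newsroom (50-200 journalists)
        return 8
    else:                         # large newsroom (> 200 journalists)
        return 12

def qualifies(newsroom_size: int, n_responses: int) -> bool:
    """True if an outlet provided enough responses to enter the analysis."""
    return n_responses >= minimum_responses(newsroom_size)

# Example: a medium-sized newsroom (120 journalists) with 6 responses is excluded.
print(qualifies(120, 6))   # False
print(qualifies(120, 9))   # True
```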

The goal was for national teams to capture, as much as possible, the diversity of each newsroom in order to represent different editorial responsibilities (reporters, producers, editors, and anchors in the case of TV and radio) as well as the news beats covered by each media outlet included in the study. 

Journalists were contacted through their personal or work email, by telephone, via social media accounts, or through editors in their respective newsrooms, and invited to participate in the study. The surveys were conducted via web-based questionnaires, telephone interviews, or face-to-face/Zoom interviews. The first method proved least successful in some countries, where repeated survey reminders were ignored by potential respondents. Given that different techniques were used to obtain the journalists’ cooperation, we later controlled for mode effects.

 

The surveys were administered by trained research assistants from each national team. Journalists were informed of the purpose of the study and were required to sign an informed consent statement prior to completing the survey; the statement covered consent to participate in the survey and to the sharing and publication of the anonymous data. Each team underwent an Institutional Review Board (IRB) review or a similar process at its own institution, with additional reviews performed at cooperating institutions where necessary.

 


While a total of 2,886 journalists from 326 outlets completed the survey, we excluded from the sample 113 outlets that did not provide the minimum number of journalist responses required to make valid calculations and achieve sufficient power to detect significant differences when analyzing the data.

There were important differences across countries in achieving the minimum required number of responses per outlet. The required quotas were met for all outlets in 65% of the participating countries; in the remaining 35%, it was necessary to exclude one to five outlets that did not reach the required number of responses. Several reasons led to this situation: in some countries, journalists reported suffering from “survey fatigue” and said they felt overwhelmed by the number of survey requests they received; others said their supervisors did not allow them to answer surveys of any kind; and some complained about the length of the survey.

In the end, our valid sample consisted of 2,615 survey responses from 252 news outlets.

Measurements

The survey questionnaire measured journalists’ conception of their professional roles and the perceived enactment of those roles. 

The assumption underlying the survey was that journalists provide more reliable and valid responses about practical issues than about abstract normative statements, which can carry dissimilar meanings across cultures and even within newsrooms. The current questionnaire was based on the one designed for the first wave of this project and was subsequently refined by adding several measures to improve internal validity. The questionnaire contains 40 evaluative statements designed to measure professional roles at the evaluative level, corresponding to the indicators measured in the content analysis.

Specifically, the survey measures the importance that individual journalists assign to the six professional roles analyzed in our project (role conception), using a five-point scale: 1 “not important at all,” 2 “not very important,” 3 “somewhat important,” 4 “quite important,” and 5 “extremely important.” The survey also asked journalists to rate the same statements in terms of how common they perceived specific journalistic reporting practices to be in the news stories their media outlets published (perceived role enactment), using a five-point scale where 1 is “not common at all,” 2 “not common,” 3 “sometimes,” 4 “quite common,” and 5 “extremely common.”

Journalists were also asked about their perceived levels of professional autonomy, social media practices, use of different digital tools, and work-related and socio-demographic characteristics. 

The questionnaire was translated from English to Spanish, German, Italian, French, Arabic, Korean, Japanese, Polish, Hungarian, Russian, Portuguese, Serbian, Estonian, Hebrew, Chinese, Dutch, and Kinyarwanda by each country team. It was then translated back into English to check the validity of the translation.

As with the content analysis and prior to our main analyses, confirmatory factor analyses were conducted to test whether journalists’ responses reflected a latent role manifested through concurrent concrete indicators. Within that framework, we empirically tested competing measurement models. 

At both the role conception and perceived enactment levels, 33 of the 40 statements remained part of the six scales, indicating a good fit with the data. The other seven statements either did not fit the data well or loaded weakly on their respective dimension. Furthermore, acceptable levels of internal consistency were found for all roles across countries.
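
As an illustration of this step, the sketch below runs a confirmatory factor analysis and an internal-consistency check in Python using the open-source semopy package and simulated data. The role and item names are hypothetical (three of the six roles, for brevity), and the project’s own analyses may have used different software.

```python
# Hedged sketch of a CFA and Cronbach's alpha check on simulated 1-5 ratings.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(42)
n = 400
roles = ["watchdog", "civic", "interventionist"]  # illustrative role names

# Simulate three correlated indicators per role (each driven by one latent factor).
columns = {}
for role in roles:
    latent = rng.normal(0.0, 1.0, n)
    for i in range(1, 4):
        item = np.clip(np.round(3 + latent + rng.normal(0.0, 1.0, n)), 1, 5)
        columns[f"{role}_{i}"] = item
df = pd.DataFrame(columns)

# Each latent role is specified as a factor measured by its concrete indicators.
model_desc = "\n".join(f"{role} =~ {role}_1 + {role}_2 + {role}_3" for role in roles)

model = semopy.Model(model_desc)
model.fit(df)

print(model.inspect())           # factor loadings; weakly loading items would be dropped
print(semopy.calc_stats(model))  # global fit indices (CFI, RMSEA, chi-square, ...)

# Internal consistency (Cronbach's alpha) for one role's retained items.
def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

print(cronbach_alpha(df[["watchdog_1", "watchdog_2", "watchdog_3"]]))
```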

Following the CFA results, and in order to analyze the gaps between journalists’ perceptions and role performance, we first calculated each journalist’s average score based on their answers to the survey questions representing each role. We then calculated the average role performance score of each media outlet for each role, considering all of the news stories from that specific outlet.

Given that the scale range used to measure role performance (0–1) differed from the scale range used to measure role conception and perceived role enactment (1–5), we transformed the average role conception scores (ranging from 1 to 5) onto a 0–1 range. Finally, we calculated the absolute differences between the two by subtracting the average role performance score of each media outlet from the average role conception score of each journalist belonging to that outlet. 
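
A minimal sketch of this computation is given below. Column, outlet, and role names are hypothetical, and the linear rescaling (score − 1) / 4 is our assumption of how the 1–5 averages were mapped onto the 0–1 range.

```python
# Sketch: rescale role conception averages and compute per-journalist gaps.
import pandas as pd

# survey: one row per journalist, with outlet id and mean role conception score (1-5).
survey = pd.DataFrame({
    "journalist_id": [1, 2, 3],
    "outlet_id": ["A", "A", "B"],
    "watchdog_conception": [4.2, 3.6, 2.8],   # 1-5 scale
})

# content: one row per outlet, with mean role performance score (0-1) from the content analysis.
content = pd.DataFrame({
    "outlet_id": ["A", "B"],
    "watchdog_performance": [0.35, 0.10],     # 0-1 scale
})

# Rescale the 1-5 conception averages onto the 0-1 range (assumed linear transformation).
survey["watchdog_conception_01"] = (survey["watchdog_conception"] - 1) / 4

# Attach each outlet's average performance to its journalists and compute the absolute gap.
merged = survey.merge(content, on="outlet_id")
merged["watchdog_gap"] = (
    merged["watchdog_conception_01"] - merged["watchdog_performance"]
).abs()

print(merged[["journalist_id", "outlet_id", "watchdog_gap"]])
```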

For descriptive purposes, we used the raw score of each gap. To test for differences in the gaps across individual-, organizational-, and societal-level factors, we transformed the raw scores into Z-scores.
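
The standardization itself is straightforward; a small illustrative example with made-up gap values:

```python
# Standardize raw gap scores into Z-scores (sample standard deviation).
import numpy as np

gaps = np.array([0.45, 0.25, 0.60, 0.10, 0.30])   # illustrative raw gaps for one role
z_scores = (gaps - gaps.mean()) / gaps.std(ddof=1)
print(z_scores)
```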

It should be noted that the absolute values of the “gap” scores have no substantive interpretation. The focus of the analysis is on the relative sizes and directions of these gaps and on the factors that widen or narrow the gaps between journalists’ perceptions and the average performance of their news organizations. Moreover, because the “gap” is defined as the difference between two variables, factors that reduce it can do so either by increasing the performance of that role or by decreasing the priority journalists assign to it.

Note

1 To perform the power analysis, we used the “WebPower” package in R. The analysis was based on the 365 news organizations for which content analysis and survey data were collected, assuming small effect sizes (f = 0.2) and small intraclass correlations (icc = 0.2), with an alpha level of no more than 5% and power of at least 80%.
