Conservation and Society
An interdisciplinary journal exploring linkages between society, environment and development



 
 

RESEARCH ARTICLE
Year: 2022 | Volume: 20 | Issue: 3 | Pages: 195-200

Altmetric Scores in Conservation Science have Gender and Regional Biases


CA Chapman1, CA Hemingway2, D Sarkar3, JF Gogarten4, NC Stenseth5

1 Wilson Center; Center for the Advanced Study of Human Paleobiology, George Washington University, Washington DC, USA; School of Life Sciences, University of KwaZulu-Natal, Pietermaritzburg, South Africa; Shaanxi Key Laboratory for Animal Conservation, Northwest University, Xi'an, China
2 Office of International Science and Engineering, National Science Foundation, Virginia, USA
3 Department of Geography and Environmental Studies, Carleton University, Ottawa, Canada
4 Viral Evolution; Epidemiology of Highly Pathogenic Microorganisms, Robert Koch Institute, Berlin; Applied Zoology and Nature Conservation, University of Greifswald, Greifswald, Germany
5 Centre for Ecological and Evolutionary Synthesis, Department of Biosciences, University of Oslo, Blindern, Norway

Correspondence Address:
Colin A Chapman
Wilson Center; Center for the Advanced Study of Human Paleobiology, George Washington University, Washington DC, USA; School of Life Sciences, University of KwaZulu-Natal, Pietermaritzburg, South Africa; Shaanxi Key Laboratory for Animal Conservation, Northwest University, Xi'an, China


Source of Support: None, Conflict of Interest: None


DOI: 10.4103/cs.cs_27_21

Date of Submission: 16-Feb-2021
Date of Acceptance: 01-Dec-2021
Date of Web Publication: 23-Feb-2022
 

Abstract


There is a growing view in conservation science that traditional ways to evaluate publications, researchers, and projects are too slow. This has led to a rise in the use of altmetrics, which are metrics based on social media data, news pieces, blogs, and more. Here we examine altmetric data linked to nearly 10,000 papers published in 23 conservation journals, exploring five issues that represent some of the challenges associated with using social media data in evaluating conservation. We discuss whether social media activity reflects meaningful engagement, and how easily individuals can manipulate scores by using bots or simply through active personal networks or institutional promotion services. Our analysis shows a highly skewed distribution of altmetric scores where most papers have such low scores that the scores likely convey little meaningful information. Examining scores that would be considered meritorious, we find that papers with a male first author have higher scores than papers led by a woman, suggesting a gender bias in altmetric scores. Finally, this data set reveals regional differences that correspond with access to different social media platforms. Metrics, like altmetrics, may have a role to play when making rapid evaluations. However, such metrics should only be used after careful deliberation and should not be influenced by institutions looking for shortcuts, by companies looking to advance profits, or by individuals seeking to promote themselves rather than generating meaningful engagement in scholarship and conservation action. Scholarly and conservation activities should be judged on the quality of their contributions, which will require the input of experts and direct contact with impacted communities.

Keywords: academic evaluation, altmetrics, research impact, science engagement, science communication


How to cite this article:
Chapman CA, Hemingway CA, Sarkar D, Gogarten JF, Stenseth NC. Altmetric Scores in Conservation Science have Gender and Regional Biases. Conservat Soc 2022;20:195-200





Introduction


Scientists, and society broadly, have taken to social media to communicate, manage their online profiles, and reach out to the public. This is particularly evident in the conservation field, which has a goal of influencing the actions of policy makers and the public to protect biodiversity (Vercammen et al. 2020; Veríssimo et al. 2020). The use of social media by scientists is growing rapidly. For example, the ResearchGate social network site has over 15 million users and is visited more than 150 million times a month (ResearchGate 2020), and about 13% of scientists regularly use Twitter (Van Noorden 2014). The digital transformation in the way we obtain and interact with information is evident in the way academic communities use social media to communicate science, promote public interest, allocate funding, evaluate individuals for tenure and promotion, and more (Dinsmore et al. 2014; Wouters et al. 2015; Sugimoto et al. 2017).

Recent years have seen the rapid growth of web-based scores that supplement traditional bibliometrics with metrics that include sources such as views, downloads, Twitter, Facebook, Reddit, news reports, blogs, recommendations (e.g., F1000), Wikipedia, and policy sources. These alternative metrics have been termed altmetrics.

Traditional bibliometrics for evaluating academic impact are well documented across all fields to be limited by how slowly they accumulate and by the inappropriate application of metrics (Kelly and Jennions 2006; Morales et al. 2021). A publication's success, as measured by the number of citations, can take well over a year to begin accumulating, and metrics like the h-index used to assess an author's impact accrue even more slowly. Journal impact factors, which measure journals' average citations per article, are now widely recognised as inappropriate for assessing the value of individual articles (Declaration on Research Assessment (DORA)). A limitation of traditional indices particularly relevant to the conservation field is that bibliometric scores do not assess progress towards conservation goals.

Altmetric scores are perceived to rapidly quantify the societal impact, attention, and influence of research outputs (Dinsmore et al. 2014; Sugimoto et al. 2017; Holmberg et al. 2019). In conservation and other fields that bridge researchers and practitioners, organisations seek assessment tools to evaluate project progress and engagement, such as local community involvement. As traditional metrics are slow at measuring impact and mostly restricted to scholarly publications, alternative measures can offer useful viewpoints on the impact of products. As a result, funding and reward systems are increasingly drawn to use social media as part of their evaluation criteria. Funding organisations want metrics of project success to estimate returns on investments and to evaluate future grants.

Altmetric scores have been used by the Wellcome Trust, John Templeton Foundation, and others to influence funding worth billions of dollars (Dinsmore et al. 2014). Altmetric scores are included in widely used platforms that monitor research outcomes, such as ResearchFish and Dimensions (Wouters et al. 2015). In conservation, the use of altmetric scores by funders such as the Wildlife Conservation Society, World Wildlife Fund, and National Geographic will likely bias funding towards species or habitats that are attractive to the public, since public donations fuel these organisations' operations. It is not only funders that are evaluating the use of altmetric scores; the International Union for Conservation of Nature (IUCN) has been monitoring the online attention to its publications using these metrics and is investigating their potential uses and benefits. The Society for Conservation Biology uses these scores to award prestigious prizes and to “celebrate the authors of papers that score highly on these scales” (Jarrad et al. 2016).

The use of altmetric scores by conservation organisations, funders, and universities will continue to evolve as the use and understanding of social media changes; therefore, open discussion of the meaning of the scores is urgent and necessary. The for-profit companies that produce these metrics, often owned by publishers (e.g., Altmetric, a subsidiary of Macmillan Publishers, and Mendeley and Plum Analytics, owned by Elsevier), collect and weight the sources using different undisclosed algorithms and data sources to produce an aggregate score for a research product (Zahedi and Costas 2018). Despite widely recognised issues with altmetrics for academic evaluation (Wouters et al. 2015; Wilsdon et al. 2017), their use is expanding.

Part of the challenge associated with using social media data in science is that it is not clear what altmetric scores depict. Our objective is to tackle this challenge by examining five issues that are particularly relevant for conservation scientists to reflect upon, given that a core motivation of conservation scientists is to inform public opinion. We address 1) whether the use of altmetric scores could promote gender inequality; 2) whether the sampling pool from which scores are drawn is regionally biased; 3) what constitutes a meritorious score; 4) whether scores can be artificially manipulated; and 5) whether high scores are associated with meaningful engagement with the content. We extracted social media data from nearly 10,000 papers published in 23 conservation journals to examine the first three of these issues and drew on the literature to examine the last two.


Five issues concerning the use of social media metrics in conservation


We downloaded metadata of articles published by 23 conservation journals from Scopus and evaluated 9,532 articles published between January 1, 2015, and August 26, 2020 (Supplementary Data). We extracted the name and country of affiliation for the first and last authors of each article. We focused on these authorship positions because the first author is often the scientist who executed and wrote up the research, while the last author is often senior and may have conceived of and/or funded the research. We approximated the gender of these individuals by associating the first names of the authors with the probability of the name being held by a man versus a woman, using the Gender API. This service has been shown to have the best performance when compared to its peers (Santamaría and Mihaljević 2018), though we recognise the challenges of assigning gender. We obtained citation counts from Scopus and altmetric scores from the Altmetric API. As the data are strongly skewed (see below), with many small scores and few large ones, we analysed the data with non-parametric statistics.
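A minimal sketch of the name-to-gender step is shown below. It assumes the Gender API's simple GET endpoint; the accuracy threshold, field handling, and column names are our illustrative choices, not the paper's actual pipeline.

```python
# Sketch: infer a first author's gender from their given name via Gender API.
# The 80% accuracy threshold is an assumption, not the authors' setting.
import requests

def infer_gender(first_name: str, api_key: str) -> str:
    """Return 'male', 'female', or 'unknown' for a given first name."""
    resp = requests.get(
        "https://gender-api.com/get",
        params={"name": first_name, "key": api_key},
        timeout=10,
    )
    data = resp.json()
    # Keep the assignment only when the service reports high confidence;
    # ambiguous names stay 'unknown' rather than forcing a guess.
    if data.get("accuracy", 0) >= 80:
        return data.get("gender", "unknown")
    return "unknown"

# Usage (requires a Gender API key; 'articles' and its columns are hypothetical):
# articles["first_author_gender"] = articles["first_author_given_name"].map(
#     lambda name: infer_gender(name, api_key="YOUR_KEY"))
```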

First, the algorithms, whose structures are guarded trade secrets of social media companies, obscure how certain posts are deemed worthy of promotion over others. Scholars have argued that social media ranking systems encapsulate certain philosophies and assumptions in determining the worthiness of posts, which are encoded in the algorithms that govern interaction on the platform (Bucher 2017). Platform owners defend making their systems opaque by arguing that enumeration of the rules leaves the system vulnerable to 'being gamed' (Pasquale 2015). Thus, it is not clear how biases encoded in these systems could exacerbate marginalisation and consequently negatively influence academia. For example, female academics have disproportionately fewer Twitter followers, likes, and retweets than men, regardless of their professional rank or Twitter activity (Zhu et al. 2019). Yet, women use Twitter more than men relative to their scholarly publishing (Ke et al. 2017). Furthermore, men promote their research accomplishments more than women, and social media provides a platform for self-promotion (Lerchenmueller et al. 2019). This suggests that using altmetric scores will lead to an increasing marginalisation of women in conservation.

To examine whether a gender bias occurred within our data set, we contrasted altmetric scores of papers where men and women were first or last authors. When all papers were considered, scores did not vary as a function of gender (first author, Mann-Whitney U, P-value = 0.193; last author, MW, P-value = 0.366). However, scores are only likely to influence the evaluation of aspects of academic life, such as tenure and grant achievement, when the scores are high. Thus, we contrasted the gender of the first and last authors for papers ranked in the top 10% of those considered. Here, papers with male first authors have higher altmetric scores (MW P-value = 0.033), but the scores did not differ by gender with respect to the last author (MW P-value = 0.620). Because we ran multiple comparisons (n = 2: all data and the top 10%) on the same data set, one might apply a Bonferroni correction and adopt a more conservative significance threshold of 0.025, in which case this difference would be considered marginal. Regardless, this suggests that high scores, which are the scores that may benefit an academician's career most, are biased against women and, if high scores continue to garner rewards in the academic and conservation system, they will facilitate continued gender biases (Holman et al. 2018). At this time, facilitating the continuation of a gender gap should be more actively avoided than ever, because women experienced an acute productivity drain associated with the pandemic and elevated work-family conflicts (Gabster et al. 2020; Staniscuaski et al. 2020). This drain may have been particularly acute for women involved in conservation field work or research involving communication with local communities. The alternative explanation for the gender difference in top altmetric scores, namely that women produce less impactful research, can be dismissed: several studies have shown that male- and female-authored papers have equal impact and that men and women have equivalent career-wide impact for bodies of research of the same size (Cameron et al. 2016; van den Besselaar and Sandström 2017; Huang et al. 2020).
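The shape of the top-decile comparison can be sketched as follows. The data here are synthetic stand-ins for the real article table, and the column names are our assumptions; the sketch only illustrates the test structure, not the paper's results.

```python
# Sketch: Mann-Whitney U test on the top 10% of altmetric scores by gender,
# with synthetic lognormal scores mimicking the reported skew.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "altmetric_score": rng.lognormal(mean=1.5, sigma=1.5, size=1000),
    "first_author_gender": rng.choice(["male", "female"], size=1000),
})

cutoff = df["altmetric_score"].quantile(0.90)
top = df[df["altmetric_score"] >= cutoff]
male = top.loc[top["first_author_gender"] == "male", "altmetric_score"]
female = top.loc[top["first_author_gender"] == "female", "altmetric_score"]

stat, p = mannwhitneyu(male, female, alternative="two-sided")
# Two tests on the same data (all papers, top 10%) imply a Bonferroni
# threshold of 0.05 / 2 = 0.025.
print(f"Mann-Whitney U = {stat:.1f}, P = {p:.4f} (Bonferroni alpha = 0.025)")
```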

Second, altmetric scores draw on a sampling pool that is not representative of the global population. Groups (e.g., academic, demographic, economic, or political) differ in their platform use. For example, different countries have different preferences for social media platforms. While ResearchGate is international, academicians in Brazil and India use it a great deal, while researchers in China, South Korea, and Russia do not (Thelwall and Kousha 2015). Also, scientific tweeters in China and Eastern Europe are more likely to write original scientific tweets and less likely to retweet than people in other regions (Yu et al. 2019). The most widely known difference in the use of social media concerns the ban of Facebook and Twitter in China. This raises the question: which social media platforms does the academic community want included in social media assessments of a research publication? Should Qzone and Renren, which are popular in China, or VK, which is popular in Europe, be included?

We evaluated whether a regional bias existed in our dataset by categorising countries as high, medium, or low in terms of the number of Facebook accounts (following https://worldpopulationreview.com/country-rankings/facebook-users-by-country). Not surprisingly, papers written by first authors from countries with fewer Facebook accounts had lower scores (Kruskal-Wallis P-value < 0.001). An alternative explanation for this finding would be that countries with fewer Facebook users have fewer resources for research, which leads to their research being less impactful. We evaluated this alternative for 83 countries where we were able to obtain data on the number of Facebook users and research expenditure (https://en.wikipedia.org/wiki/List_of_countries_by_research_and_development_spending). We found that countries categorised as high, medium, or low in terms of the number of Facebook accounts differed with respect to the amount spent on research (Kruskal-Wallis P-value = 0.0042), and the number of Facebook users in a country is related to research output (rs = 0.541, P < 0.01). However, scientific impact, as indexed by publication number, is only weakly limited by funding (Fortin and Currie 2013).
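Both regional tests can be sketched in a few lines. Again, the tables, column names, and values below are synthetic assumptions, used only to show how the Kruskal-Wallis and Spearman tests are structured.

```python
# Sketch: Kruskal-Wallis test across Facebook-account tiers, plus a Spearman
# rank correlation between Facebook users and research output per country.
import numpy as np
import pandas as pd
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(1)
papers = pd.DataFrame({
    "altmetric_score": rng.lognormal(1.5, 1.5, size=900),
    "facebook_tier": rng.choice(["high", "medium", "low"], size=900),
})

# Do altmetric scores differ across Facebook-account tiers?
groups = [g["altmetric_score"].to_numpy()
          for _, g in papers.groupby("facebook_tier")]
h, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {h:.2f}, P = {p:.4f}")

# Is the number of Facebook users related to research output (83 countries)?
countries = pd.DataFrame({
    "n_facebook_users": rng.integers(100_000, 100_000_000, size=83),
    "research_output": rng.integers(100, 50_000, size=83),
})
rho, p_rho = spearmanr(countries["n_facebook_users"],
                       countries["research_output"])
print(f"Spearman rs = {rho:.3f}, P = {p_rho:.3f}")
```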

We can further examine the possibility of regional biases by contrasting countries that ban some social media platforms with those that do not. By 2019, the governments of China, Iran, North Korea, and Turkmenistan had all blocked Twitter access in their countries. The altmetric scores of papers written after 2019 whose first authors are from these countries are lower than those of papers whose first authors are from countries where Twitter is not blocked (MW P-value < 0.001). Such political, regional, and country-level differences in the use of social media platforms make their use in global comparisons of research problematic, and the penalty falls not only on the first author but also on their co-authors.

Against the backdrop of these examples, we focus on biases in the sampling pool from which altmetric scores are drawn. The sampling pool will be biased such that poorer countries, regions with lower levels of technological development (e.g., limited access to the internet), or areas using regionally restricted languages may be less likely to obtain high altmetric scores and the rewards associated with them. Such biases are particularly troubling for conservation, as these areas are often home to species in need of conservation attention. Furthermore, since high scores can garner rewards in the academic and conservation system, it is important that these biases are clearly understood and do not inappropriately affect conservation action.

Third, for altmetric scores to warrant use in evaluation, not only must scores represent meaningful engagement, but there must also be sufficient variation in the scores to allow evaluators to distinguish between research outputs that merit recognition and those that do not. In our set of conservation papers, the scores ranged from 0 to 1,747, and the distribution is highly skewed (median = 6.5, mode = 0.5, x̄ = 24.35, S.D. = 65.81; [Figure 1]). We asked: if an evaluator considered presence in the top 10% of papers as a criterion in awarding excellence, what would the social media activity of a paper at this cut-off point look like? Examining the papers that were within half a point of the 10% cut-off revealed the following averages: 42 tweets (range = 7-90), 0.4 blogs (range = 0-3), 7 Facebook posts (range = 0-77), and 2.9 news mentions (range = 0-7). Some of these mentions likely come from the authors themselves, their universities, their friends, and their publishers. Both universities and publishers have public relations groups that legitimately encourage researchers to promote their publications through social media and provide easy mechanisms to facilitate this. Thus, an altmetric score that would be considered high could simply reflect the activity of the researcher's university public relations group or self-promotion by the authors, rather than the actual impact of the research. For example, an author tweeting about a paper, echoed by multiple co-authors, their universities, and colleagues, could easily generate 42 tweets while saying nothing about global conservation impact. A sketch of this cut-off inspection follows Figure 1.
Figure 1: The distribution of altmetric scores for 9,532 articles published in 23 conservation journals between January 1, 2015 and August 26, 2020 (Supplementary Data). The distribution was truncated at a score of 500 to facilitate presentation; 16 scores fell between 500 and the upper score of 1,747

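The cut-off inspection referenced above can be sketched as follows; the DataFrame, its columns, and the synthetic counts are our assumptions, illustrating only the ±0.5-point window around the top-10% threshold.

```python
# Sketch: summarise the skewed score distribution and inspect the per-source
# activity of papers sitting within half a point of the top-10% cut-off.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
papers = pd.DataFrame({
    "altmetric_score": rng.lognormal(1.5, 1.5, size=9532),
    "tweets": rng.poisson(5, size=9532),
    "facebook_posts": rng.poisson(1, size=9532),
    "news_mentions": rng.poisson(0.3, size=9532),
})

scores = papers["altmetric_score"]
print(f"median = {scores.median():.1f}, mean = {scores.mean():.2f}, "
      f"SD = {scores.std():.2f}")

cutoff = scores.quantile(0.90)
window = papers[(scores - cutoff).abs() <= 0.5]
print(window[["tweets", "facebook_posts", "news_mentions"]].mean())
```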


The fourth issue is artificial manipulation. While Twitter and Facebook provide an easily accessible and quick way to quantify the interest in an article, they are vulnerable to artificial inflation. This can be done by the author, institution, or publisher, all of whom have an interest in increasing the score of a paper. Scores can be inflated using easily purchased bots: autonomous or semi-autonomous software agents that pose as people on social media and can dramatically influence altmetric scores. Bots are prevalent on social media platforms, and detecting them is non-trivial. A recent example is Twitter activity surrounding the COVID-19 pandemic. Researchers used a bot detection tool to analyse 200 million tweets about the pandemic and found that 45.5% were likely from bots (Owen 2020). Similarly, following the announcement of the US government's withdrawal from the Paris climate agreement, 25% of the tweets about the climate crisis and 38% of the tweets about “fake science” came from bots (Guardian 2020).

Such attention influences public opinion and conservation funding. Yu et al. (2019) estimated that the use of bots among scientific tweeters is much lower (1.8%). However, if the rewards of deception increase and the likelihood of detection remains very low, their use could increase. The services that provide bots are inexpensive: 5,000+ Twitter followers and automatic retweets can be purchased for USD 49. With Pubfacts, which provides access to over 20 million PubMed citations, a researcher can buy 500+ article views for USD 5 or generate 3,000+ views for USD 50. While buying a bot to promote one's own academic product is likely viewed as academically inappropriate, would academicians similarly consider it inappropriate to promote awareness of important environmental issues or conservation projects using bots? If the answer is no, then the existence of these services and the prevalence of bot posts raise clear issues for the use of social media posts as surrogates to gauge societal relevance and impact for conservation. The desire to advertise a research or conservation product, whether done in an academically appropriate fashion or not, reflects the fact that incentives for academics have become increasingly perverse; this is part of a neoliberal perspective and the commercialisation of universities and conservation groups (Stephan 2012; Edwards and Roy 2017).

A fifth and fundamental issue is that it is not clear that high scores are associated with meaningful engagement with the content of scientific articles. As Twitter data is a key component of altmetric scores, the number of tweets that a paper receives provides a good example. To assess the degree to which tweeting about scientific papers signified engagement with the scientific literature, Robinson-Garcia et al. (2017) examined 8,000 tweets from 2,000 US-based accounts that contained links to research articles. Most tweets simply retweeted or reproduced the title, and less than 10% indicated intellectual engagement. The ease of clicking on the Twitter icon on a paper's webpage facilitates mechanical sharing of content without engagement. Furthermore, a small number of tweeters produce most of the tweets: Yu et al. (2019) analysed data from 2.6 million scientific tweeters and documented that 80% of the tweets were produced by only 10% of the tweeters.
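The concentration figure reported by Yu et al. (2019) amounts to a simple calculation: what share of all tweets comes from the most active 10% of accounts? Below is a sketch with a synthetic heavy-tailed series standing in for the real 2.6 million accounts' tweet counts.

```python
# Sketch: share of all tweets produced by the top 10% of tweeters.
# A Zipf draw gives the heavy-tailed activity typical of social media.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
tweets_per_account = pd.Series(rng.zipf(a=2.0, size=100_000))

ranked = tweets_per_account.sort_values(ascending=False)
n_top = max(1, int(len(ranked) * 0.10))
share = ranked.iloc[:n_top].sum() / ranked.sum()
print(f"Top 10% of tweeters produce {share:.0%} of all tweets")
```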

Furthermore, a high score can reflect either positive or negative attention. For example, an article on early career mentoring published in Nature Communications (AlShebli et al. 2020) on November 17, 2020 was retracted on December 21, 2020 because of questionable interpretations; it had received 13,889 tweets by January 15, 2021. High traditional scores, such as citations, can also reflect the attention received by papers publishing questionable results; however, the ease of posting on social media makes high scores more attainable, while inflating citations is harder because journals are unlikely to publish multiple critiques of the same work. Negative attention can represent meaningful engagement by those writing the critiques, but the high scores of criticised research should not be considered meritorious. This highlights the need to avoid the uncritical use of altmetric scores in the reward system of academia and conservation.

It is often claimed that social media reflects communication to the public, but this needs verification. In an analysis of 331,696 Twitter posts referencing 1,800 highly tweeted bioRxiv preprints, researchers found that 96% of the tweets were from the academic audience, suggesting that outreach to the public through such means was minimal (Carlson and Harris 2020). Unless the data that are used in constructing the altmetric scores are meaningful to assess scholarship or public interest, the simplistic use of these scores risks endangering the scientific enterprise (Robinson-Garcia et al. 2017).

Carefully crafted social media posts can generate engagement in conservation, which we view as a very positive development. Enhancing the appreciation for and understanding of conservation is essential, given the negative attitudes towards science in some countries and the pressing scientific and social issues we currently face. However, the use of social media metrics for evaluation should only be adopted after careful deliberation, and their use should not be influenced by institutions looking for shortcuts, by universities and companies looking to advance profits or status, or by individuals seeking to promote themselves rather than generating meaningful engagement in scholarship or conservation (Wouters et al. 2015).

Furthermore, using social media for scientific purposes demands considerable time. Several online sites suggest that effective participation in social media involves a full suite of activities: maintaining one's own blog; writing lay summaries of published papers; uploading data, images, PowerPoint presentations, and posters; reaching out to key bloggers in the field; working with the university's press office; and using a variety of social media outlets (Sugimoto et al. 2017). The work to write a visually appealing weekly blog alone is estimated to take 182 hours a year, the equivalent of 8.8% of an academician's work time (assuming roughly a 2,080-hour work year; Strong 2018). Thus, scientists run the risk of spending more time announcing ideas than formulating them.

Without resolving key issues on the production of social media and the meaning of altmetric scores, it becomes “very easy for people to build a seemingly impressive persona by essentially 'shouting louder' than others” (Hall 2014). Ultimately, we feel that scholarly and conservation activities should primarily be judged on the quality of their contributions, which will require the input of experts and direct contact with impacted communities.

Author contribution statement

CC and CH conceived of the project, JG and DS led in the analysis, and all authors contributed to the project's development and the writing.

Acknowledgements

We thank Songtao Gou, Finn-Eirik Johansen, Kjetill Sigurd Jakobsen, and Jason D. Whittington for perspectives and comments on an earlier version of this contribution. All opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Declaration of competing/conflicting interests

None.

Financial Disclosures

None.

Research ethics approval

This research did not involve data collected from human or animal participants.



 
References

AlShebli, B., K. Makovi, and T. Rahwan. 2020. (RETRACTED ARTICLE) The association between early career informal mentorship in academic collaborations and junior author performance. Nature Communications 11: 5855.

Bucher, T. 2017. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication and Society 20: 30-44.

Cameron, E.Z., A. White, and M.E. Gray. 2016. Solving the productivity and impact puzzle: do men outperform women, or are metrics biased? BioScience 66: 245-252.

Carlson, J. and K. Harris. 2020. Quantifying and contextualizing the impact of bioRxiv preprints through automated social media audience segmentation. PLoS Biology 18: e3000860.

Declaration on Research Assessment (DORA). American Society for Cell Biology. https://sfdora.org/read/. Accessed on December 1, 2021.

Dinsmore, A., L. Allen, and K. Dolby. 2014. Alternative perspectives on impact: the potential of ALMs and altmetrics to inform funders about research impact. PLoS Biology 12: e1002003.

Edwards, M.A. and S. Roy. 2017. Academic research in the 21st century: maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science 34: 51-61.

Fortin, J.M. and D.J. Currie. 2013. Big science vs. little science: how scientific impact scales with funding. PLOS One 8: e65263.

Gabster, B.P., K. van Daalen, R. Dhatt, and M. Barry. 2020. Challenges for the female academic during the COVID-19 pandemic. The Lancet 395: 1968-1970.

Guardian. 2020. Revealed: quarter of all tweets about climate crisis produced by bots. The Guardian. February 21, 2020.

Hall, N. 2014. The Kardashian index: a measure of discrepant social media profile for scientists. Genome Biology 15: 424.

Holman, L., D. Stuart-Fox, and C.E. Hauser. 2018. The gender gap in science: how long until women are equally represented? PLoS Biology 16: e2004956.

Holmberg, K., S. Bowman, T. Bowman, F. Didegah, and T. Kortelainen. 2019. What is societal impact and where do altmetrics fit into the equation? Journal of Altmetrics 2.

Huang, J., A.J. Gates, R. Sinatra, and A.L. Barabási. 2020. Historical comparison of gender inequality in scientific careers across countries and disciplines. Proceedings of the National Academy of Sciences 117: 4609-4616.

Jarrad, F., E. Main, and M. Burgman. 2016. Conservation Biology celebrates success. Conservation Biology 30: 929-930.

Ke, Q., Y.-Y. Ahn, and C.R. Sugimoto. 2017. A systematic identification and analysis of scientists on Twitter. PLOS One 12: e0175368.

Kelly, C.D. and M.D. Jennions. 2006. The h index and career assessment by numbers. Trends in Ecology and Evolution 21: 167-170.

Lerchenmueller, M.J., O. Sorenson, and A.B. Jena. 2019. Gender differences in how scientists present the importance of their research: observational study. BMJ 367.

Morales, E., E.C. McKiernan, M.T. Niles, L. Schimanski, and J.P. Alperin. 2021. How faculty define quality, prestige, and impact of academic journals. PLOS One 16: e0257340.

Owen, T. 2020. Nearly 50% of Twitter accounts talking about coronavirus might be bots. Vice. https://www.vice.com/en_us/article/dygnwz/if-youre-talking-about-coronavirus-on-twitter-youre-probably-a-bot. Accessed on December 1, 2021.

Pasquale, F. 2015. The black box society. Cambridge, MA: Harvard University Press.

ResearchGate. 2020. https://www.researchgate.net/. Accessed on January 20, 2020.

Robinson-Garcia, N., R. Costas, K. Isett, J. Melkers, and D. Hicks. 2017. The unbearable emptiness of tweeting—about journal articles. PLOS One 12: e0183551.

Santamaría, L. and H. Mihaljević. 2018. Comparison and benchmark of name-to-gender inference services. PeerJ Computer Science 4: e156.

Staniscuaski, F., F. Reichert, F.P. Werneck, L. de Oliveira, P.B. Mello-Carpes, R.C. Soletti, C.I. Almeida, et al. 2020. Impact of COVID-19 on academic mothers. Science 368: 724.

Stephan, P. 2012. How economics shapes science. Cambridge, MA: Harvard University Press.

Strong, F. 2018. Study: the perfect blog post length—and how long it should take to write. https://www.ragan.com/study-the-perfect-blog-post-length-and-how-long-it-should-take-to-write-2/. Accessed on December 1, 2021.

Sugimoto, C.R., S. Work, V. Larivière, and S. Haustein. 2017. Scholarly use of social media and altmetrics: a review of the literature. Journal of the Association for Information Science and Technology 68: 2037-2062.

Thelwall, M. and K. Kousha. 2015. ResearchGate: disseminating, communicating, and measuring scholarship? Journal of the Association for Information Science and Technology 66: 876-889.

van den Besselaar, P. and U. Sandström. 2017. Vicious circles of gender bias, lower positions, and lower performance: gender differences in scholarly productivity and impact. PLOS One 12: e0183301.

Van Noorden, R. 2014. Online collaboration: scientists and the social network. Nature News 512: 126.

Vercammen, A., C. Park, R. Goddard, J. Lyons-White, and A. Knight. 2020. A reflection on the fair use of unpaid work in conservation. Conservation and Society 18(4): 399-404.

Veríssimo, D., T. Pienkowski, M. Arias, L. Cugnière, H. Doughty, M. Hazenbosch, E. De Lange, A. Moskeland, et al. 2020. Ethical publishing in biodiversity conservation science. Conservation and Society 18(3): 220-225.

Wilsdon, J., J. Bar-Ilan, R. Frodeman, E. Lex, I. Peters, and P. Wouters. 2017. Next-generation metrics: responsible metrics and evaluation for open science. Report of the European Commission Expert Group on Altmetrics.

Wouters, P., M. Thelwall, K. Kousha, L. Waltman, S. de Rijcke, A. Rushforth, T. Franssen, et al. 2015. The metric tide. Literature review, supplementary report I to the independent review of the role of metrics in research assessment and management. London: HEFCE.

Yu, H., T. Xiao, S. Xu, and Y. Wang. 2019. Who posts scientific tweets? An investigation into the productivity, locations, and identities of scientific tweeters. Journal of Informetrics 13: 841-855.

Zahedi, Z. and R. Costas. 2018. General discussion of data quality challenges in social media metrics: extensive comparison of four major altmetric data aggregators. PLOS One 13: e0197326.

Zhu, J.M., A.P. Pelullo, S. Hassan, L. Siderowf, R.M. Merchant, and R.M. Werner. 2019. Gender differences in Twitter use and influence among health policy and health services researchers. JAMA Internal Medicine 179: 1726-1729.

