Thousands of examining eyes: credibility, authority and validity in biodiversity citizen science data production

Björn Ekström (The Swedish School of Library and Information Science, University of Borås, Borås, Sweden)

Aslib Journal of Information Management

ISSN: 2050-3806

Article publication date: 21 March 2022

Issue publication date: 6 January 2023


Abstract

Purpose

The purpose of this study is to contribute knowledge about how valid research data in biodiversity citizen science are produced through information practices and how notions of credibility and authority emerge from these practices.

Design/methodology/approach

Data were collected through an empirical, interview-based study of the information practices of 15 participants active in the vicinity of the Swedish biodiversity citizen science information system Artportalen. Interview transcripts were analysed abductively and qualitatively through a coding scheme by working back and forth between theory and data. Values of credibility, authority and validity of research data were unfolded through a practice-oriented perspective to library and information studies by utilising the theoretical lens of boundary objects.

Findings

Notions of credibility, authority and validity emerge through participant activities of transforming species observations to data, supplementing reports with objects of trust, augmenting identification through authority outreach and assessing credibility via peer monitoring. Credibility, authority and validity of research data are shown to be co-constructed in a distributed fashion by the participants and the information system.

Originality/value

The article extends knowledge about information practices in emerging, heterogeneous scholarly settings by focussing on the complex co-construction of credibility, authority and validity in relation to data production.

Citation

Ekström, B. (2023), "Thousands of examining eyes: credibility, authority and validity in biodiversity citizen science data production", Aslib Journal of Information Management, Vol. 75 No. 1, pp. 149-170. https://doi.org/10.1108/AJIM-10-2021-0292

Publisher: Emerald Publishing Limited

Copyright © 2022, Björn Ekström

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

The inclusion of laypersons in scholarly work, commonly named citizen science, has enabled a growth of distributed, large-scale research projects where data production practices are dispersed over the general public (cf. Haklay, 2018; Kullenberg and Kasperowski, 2016; Strasser et al., 2019; Wiggins and Crowston, 2011). This transfer has been reported as implying a "[…] stereotypical tradeoff in citizen science project design, pitting data quality against engagement" (Wiggins and He, 2016, p. 1556). As the number of participants increases, difficulties can emerge in establishing quality data. Biodiversity citizen science in particular originates in a research field incorporating multiple scientific disciplines where "[…] each discipline has grown up with its own information infrastructure and information standards" (Bowker, 2000a, p. 695). Reconciling epistemic interests poses further standardisation challenges as public participation grows, which in turn gives rise to discrepancies in how data are sought, shared, annotated and produced in relation to shared information systems. While information practices of adequately reporting observed species are standardised through a joint information system, participation is multifaceted, with a variety of information practices occurring in the vicinity of these information systems (Ekström, 2022a). Considering that participation involves numerous individuals with a variety of interests in several species groups such as plants, birds, amphibians and lichens, questions remain regarding how valid data are produced and how authority and credibility emerge through these practices.

Credibility has previously been shown to emerge in various ways in citizen science. Previous research illustrates that credibility in birdwatching is framed by reputations arising from peer evaluations of honesty and accuracy when reporting sighted birds (Lundquist, 2018). Moreover, the biodiversity citizen science information system iNaturalist has been described as supporting "[…] collaborative information stewardship interactions by providing mechanisms that blend socializing with data verification, and displaying them on the records as a form of provenance" (Wiggins and He, 2016, p. 1551). Notions of credibility, authority and validity can however shift in citizen science settings where multiple heterogeneous disciplinary interests are united. A similar differentiation of how information is interacted with depending on discipline is traceable in contemporary library and information science studies. Previous research shows how disciplinary and contextual aspects enable various information practices (cf. Pilerot, 2016; Talja and Maula, 2003). The meeting of interests through a shared information system leads to a variety of approaches to what entails good practices in collaborative data production. Adopting a sociomaterial theoretical perspective, which focusses on interconnected material components following situated activities to achieve shared objectives (cf. Gherardi, 2017; Orlikowski, 2010), to the study of participants' information practices can yield new knowledge about epistemic interests in biodiversity citizen science, about varying notions of what comprises valid research data, and about how credibility and authority emerge through endeavours to achieve such validity.

The purpose of this study is to contribute knowledge about how valid research data in biodiversity citizen science are produced through information practices and how notions of credibility and authority arise from these practices. Focus is placed on exploring the information practices of volunteer participants active in the vicinity of the Swedish biodiversity citizen science platform Artportalen (literal translation: the species portal). This is done by devoting particular attention to how biodiversity citizen science data are produced, to shared understandings of how species observations are reported and to how validating routines homogenise these reports. The study is guided by the following research questions.

RQ1.

What is considered valid biodiversity citizen science data according to the study participants?

RQ2.

How do notions of credibility and authority in relation to the data unfold through collaborative data production?

The article is structured as follows: first, a literature review is presented. Then, the research context for the study is described. This is followed by a theory section and a subsequent description of the method used for this study. Next, the results of the study are presented. The article ends with a concluding discussion.

Literature review

The following literature review consists of three major themes intended to frame and position this study. Initially, previous information practices research on credibility, authority and validity is accounted for. Next, previous research on boundary objects in library and information science is discussed. Thereafter, prior contributions to studies of credibility, authority and validity in relation to citizen science are explicated.

Conceptions of credibility and authority in relation to information have been studied in numerous empirical settings and from a variety of angles in information practices-related research (e.g. Francke and Sundin, 2012; Haider and Sundin, 2020; Huvila, 2017, 2020). In an overview article, several perspectives on credibility as interconnected with other value concepts are unfolded (Rieh and Danielson, 2007). The relation between credibility and quality is discussed in tandem with early contributions to library and information science denoting criteria such as accuracy, comprehensiveness, currency, reliability and validity (e.g. Taylor, 1986). Moreover, credibility and authority are described as related to notions of cognitive authority, which is interlinked with situated trustworthiness (cf. Huvila, 2017, 2013; Wilson, 1983). This is a subject to which practice-oriented library and information studies of authority have been especially devoted, that is, how experiential knowledge complements authoritative knowledge sources (McKenzie, 2003). Drawing on constructionist metatheoretical perspectives, knowledge gained through lived experience in this sense aids and supplements authoritative sources as a "[…] social product created between people, in conversation and communication, not in the hidden recesses of individual minds" (Tuominen et al., 2002, p. 278). Moreover, the vast amount of information found through a multitude of digital resources may lead to information overload. Triangulation, as a means of seeking information from a variety of sources, has been shown to help overcome such issues through iteration, allowing people to "[…] assess competing claims against each other, and make sense of information in the contexts of their everyday lives" (Greyson, 2018, p. 877). Notions of credibility also vary depending on material and temporal aspects. In relation to technology in school settings, credibility is found to "[…] change over time, as people become more familiar with new genres and technologies and as these genres and technologies are discussed more both in general discourse, among educators, and in academia" (Francke and Sundin, 2012, p. 275). Annotations and descriptors also play a part in reflecting the needs and interests of particular groups, understood as knowledge communities, and in supplying them with information (Kalir and Garcia, 2021, p. 57). However, rather than enumerating differences in credibility criteria, it has been argued that credibility should be understood by investigating underlying epistemic beliefs (Huvila, 2020). Moreover, the concept of critical literacy has been suggested for situated and tool-mediated information practice studies (Johansson and Limberg, 2017).

Validity is another concept previously studied in relation to collaboration. Drawing on science and technology studies research on the negotiation of evidence (e.g. Latour, 1999, 1987; Law and Lynch, 1988), verifiability has been shown to function as an "obligatory passage point" (cf. Callon, 1986) for Wikipedia editors striving towards the establishment of trust (Sundin, 2011). In other words, being able to verify the validity of a Wikipedia edit is highly related to credibility establishment within this setting. The concept of the obligatory passage point is especially relevant for the present study, as it was adopted in connection with the concept of boundary objects (cf. Bowker and Star, 1999; Star, 1989, 2010; Star and Griesemer, 1989) for investigating the utilisations of material objects in manners acknowledged by a certain community. Science and technology studies also pose a substantial source of inspiration for information practices research (Pilerot et al., 2017). In line with this type of research, the concept of boundary objects is utilised extensively in various library and information science subfields such as knowledge organisation, information practices, document studies, social informatics and computer-supported cooperative work (Huvila et al., 2017). In relation to information practices and documentation studies in particular, the concept of boundary objects is depicted as unfolding the sociotechnical settings in which people's interactions with information occur (Huvila et al., 2017). For instance, practices of information sharing can be understood as a boundary object (Pilerot, 2012). Material tools such as calendars have also been investigated through the concept of boundary objects, synchronising timelines among participants in work environments (Davies and McKenzie, 2004). In the context of archaeology professionals, documents have been found to form and lose statuses as boundary objects if they are too detached from the practice (Huvila, 2016). While approaches and empirical settings vary, boundary objects as a theoretical concept enables making visible the conformity and variation associated with information practices enacted in relation to material objects.

Among scientists studying biodiversity, the interdisciplinary features of the research field have been especially addressed in terms of heterogeneous utilisations of information standards in relation to information systems (Bowker, 2000a, p. 695). This heterogeneity becomes an issue when attempts are made to integrate information across subcommunities (Bowker, 2000a). Such a concern also relates to overarching strivings towards uniting data collection efforts; "[…] [e]ven if it succeeds, there will still be coding cultures specific to given locations and particular disciplines" (Bowker, 2000b, p. 676). Similarly, adopting an epistemic lens on data sharing in biodiversity research, trustworthiness, here understood in terms of credibility, is related to knowledge as a social construct unfolding through mundane practices. This entails a variety of epistemic mechanisms which affect truth and credibility and how practitioners become knowers through participation in epistemic cultures (Van House, 2002a). Credibility thus occurs through "[…] determining or demonstrating membership in a community of practice" (Van House, 2002b, p. 111).

Considering the production of citizen science data, a common dilemma is that while the incorporation of members of the public in research activities can lead to large quantities of data being produced in a short time frame and in various geographical settings, the quality and validity of the data can be problematic since the participants do not operate through the exact same methods (cf. Wiggins and He, 2016). Issues relating to valid, high-quality data have been reported to include that protocols for data collection are not followed by participants, that the protocols do not match the participants' or the project's aims, that the protocols may be incorrectly implemented, that stakeholders have varying conceptions of collection protocols or that data do not fit the purpose of the projects (Balázs et al., 2021). In comparison, for citizen science projects in the natural sciences, factors such as repeatability and predictability are prominent principles for ensuring validity through multidisciplinary approaches (Frigerio et al., 2021). As biodiversity projects employ citizen science and joint work in shared information systems, these aspects need to be taken into consideration while also being seen in a new light, bearing in mind that the present empirical setting incorporates a multitude of participants with a breadth of interests concerning various species groups. Volunteer participation in research projects has also been explicated as a way of bypassing institutional systems for quality reviews, instead relying on peer validation (Elliott and Rosenberg, 2019). Several studies have also problematised such validation processes through evaluative methods (e.g. Bonter and Cooper, 2012; Swanson et al., 2016). When it comes to biodiversity citizen science, it has been suggested that the "[…] implementation of controls at the stage of data collection is critical to ensure data quality and the possibility to use these data to address ecological research questions" (Serret et al., 2019, p. 9). Relatedly, previous research has found that participants in biodiversity citizen science gradually come to understand the importance of well-taken photographs for assisting species identification (Ekström, 2022b). These validation ambitions need further investigation in relation to the establishment of credibility and authority through information practices.

Credibility in citizen science has been investigated as occurring in relation to ongoing participation. One study found that citizen science projects would, from their very beginning, benefit from incorporating well-directed formal promises to "[…] enhance the public participation process and ensure transparency, accountability, and trust" (Eleta et al., 2019, p. 7). Acknowledgement within the community has been understood as a significant factor for creating trust and motivation, here seen as credibility, within citizen science projects (Rotman et al., 2012), and also facilitates learning and support through social media (Liberatore et al., 2018). It has also been shown that the judgement of honesty and trust, here seen in the context of credibility, among birdwatchers is tightly framed by other contributors' evaluations of one's statements and claims. This evaluation is performed in relation to the participant as a person and the way they speak (Lundquist, 2018).

The literature review for the present article positions the study in relation to previous library and information science research on credibility, authority and validity. Moreover, the review has explicated previous research concerning boundary objects in library and information science and conceptions of credibility, authority and validity in citizen science data production. Considering the multifaceted character of citizen science activities occurring in the vicinity of a large-scale biodiversity citizen science information system, these perspectives are all relevant as points of departure for investigating what is considered valid data in biodiversity citizen science. In turn, this can provide an understanding of how credibility and authority are formed among participants in relation to varying yet standardised information-related activities, unfolded through the concept of boundary objects. In the following section, the research context of the study is presented.

Research context

The research context for this study is that of citizen science data production activities in the vicinity of the Swedish species observation system Artportalen. A centralised node for the reporting of species including algae, amphibians, birds, fish, fungi, invertebrates, lichens, mosses and vascular plants, Artportalen is maintained and developed by the Swedish Species Observation Centre at the Swedish University of Agricultural Sciences in Uppsala, Sweden, on behalf of the Swedish Environmental Protection Agency (Swedish University of Agricultural Sciences, n.d.). Artportalen functions as a knowledge source concerning targeted conservation efforts for understanding and predicting trends as well as for preventing climate and environmental issues. Moreover, governmental County Administrative Boards carry out nature investigations on cases of natural exploitation and tree felling based on the data reported to Artportalen (Swedish University of Agricultural Sciences, n.d.).

Participants partake in the collaborative production of data by reporting observations of flora and fauna to Artportalen. The enactment of observations can range from highly organised activities by a multitude of participants to singlehandedly conducted observations carried out in mundane circumstances; from targeted inventories over a lengthy time frame and short-term intensive periods in a delimited geographical setting, such as bioblitzes (Meeus et al., 2021), to everyday monitoring of one's close surroundings, that is, in one's backyard or on the way to pick up one's children from school (Ekström, 2022a). Information practices comprise observing species in nature and subsequently identifying these species through the use of field guides or by comparing the species found with previous contributions to the Artportalen database. The identified findings are reported to Artportalen through a variety of digital interfaces where associated metadata such as geographical coordinates, number of occurrences, habitat and biotope are entered (Ekström, 2022a). The data are co-validated by all participants in the sense that anyone with an account can make a comment confirming or questioning the identity of a reported species. At the same time, some users have received additional administrative privileges for routinely monitoring and validating data (Ekström, 2022a).
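To make the reporting workflow concrete, the following is a minimal sketch (in Python) of how a single observation might be captured as a structured record before submission. The class and field names are hypothetical illustrations and do not reproduce Artportalen's actual data model or interfaces.

```python
# Hypothetical sketch of a species observation on its way to becoming data.
# Field names are illustrative only, not Artportalen's actual schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class ObservationReport:
    species: str                       # chosen from the system's predefined species list
    observed_at: datetime              # date and time of the sighting
    latitude: float                    # site of observation, as coordinates or a map point
    longitude: float
    observer: str                      # auto-generated from the logged-in account
    quantity: Optional[int] = None     # optional: number of individuals observed
    biotope: Optional[str] = None      # optional: habitat/biotope description
    uncertain: bool = False            # marks an "uncertain identification"
    photos: List[str] = field(default_factory=list)  # supplementary objects of trust

# Example: an everyday observation made in one's backyard, later open to peer validation.
backyard_sighting = ObservationReport(
    species="Parus major",
    observed_at=datetime(2021, 4, 12, 7, 30),
    latitude=57.72,
    longitude=12.94,
    observer="volunteer_account",
    quantity=2,
)
```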

In the following section, the theoretical framework used to analyse how valid research data in biodiversity citizen science are produced through information practices, and how notions of credible and authoritative research data arise from these practices, is presented.

Theory

This study originates in a practice-oriented perspective to library and information science studies on biodiversity citizen science information practices. While practice-oriented studies vary in scope and focus, there are theoretical resemblances throughout the literature which altogether encompass a practice approach (cf. Gherardi, 2017; Nicolini and Monteiro, 2016; Schatzki, 2001). From a library and information science practice-theoretical perspective, practices constitute shared understandings, rules, conventions and norms along with routinised activities and situated places and material objects (Pilerot and Lindberg, 2018, p. 256). Moreover, the present study approaches information practices from a sociomaterial perspective which assumes that social life comprises configured and interconnected material components following situated and normative schemes to achieve shared objectives (cf. Gherardi, 2017; Orlikowski, 2010). The present study takes its starting point from this theoretical point of view considering that biodiversity citizen science data production encompasses a range of information-related activities in the vicinity of large-scale information systems and the material objects that emerge through these activities. Adopting such an overarching theoretical perspective paves the way for understanding how values in terms of valid (as in correct) research data are constructed and how credibility (as in trustworthiness) and authority (as in directivity) arise through participant strivings to produce valid data.

To further address an ambition to understand values of credibility, authority and validity shaped through the distributed setting of biodiversity citizen science, it is fruitful to adopt a theoretical framework through which notions of heterogeneity and homogeneity can be unfolded. In this study, this is done by applying the theoretical concept of boundary objects (see Bowker and Star, 1999; Griesemer, 2015; Star, 1989; Star and Griesemer, 1989) to the research setting at hand. Scientific conduct requires collaborative efforts to "[…] create common understandings, to ensure reliability across domains and to gather information which retains its integrity across time, space and local contingencies" (Star and Griesemer, 1989, p. 387). Since consensus is not necessary for collaboration, several knowledge interests, backgrounds and notions can co-exist while still aiming towards joint objectives (Star and Griesemer, 1989). Objects that conform with such activities, and that in the scope of this study relate to information practices, can be understood as boundary objects. These are objects which "[…] both inhabit several intersecting social worlds […] and satisfy the informational requirements of each of them" (Star and Griesemer, 1989, p. 393). Boundary objects are

weakly structured in common use, and become strongly structured in individual-site use […]. They have different meanings in different social worlds but their structure is common enough to more than one world to make them recognizable, a means of translation (Star and Griesemer, 1989, p. 393).

For boundary objects to make sense, they need to be understood as per their material structure and in relation to a scale, on an organisational level, and a scope, on a specificity level (Star, 2010). In the present study, what is under scrutiny are values of credibility, authority and validity of research data emerging through participants' information practices in the vicinity of the biodiversity citizen science information system Artportalen. Information objects in Artportalen such as reports of species observations, species lists, reporting tools and visual renderings are understood as boundary objects, that is, objects which are “[…] both plastic enough to adapt to local needs and the constraints of the several parties employing them, yet robust enough to maintain a common identity across sites” (Star and Griesemer, 1989, p. 393). Of specific importance in this context are four types of boundary objects which guide the analytical perspective to the empirical material:

  1. ideal types as "an object such as a diagram, Atlas or other description" (Star and Griesemer, 1989, p. 410), for example, a species report or a photograph of a species as an augmentation to a report;

  2. repositories as “ordered ‘piles’ of objects which are indexed in a standardized fashion” (Star and Griesemer, 1989, p. 410), for example, a collection of reported species indexed in relation to the metadata entries provided by the information system;

  3. coincident boundaries as “common objects which have the same boundaries but different internal contents” (Star and Griesemer, 1989, p. 410f), for example, lists of participants' species observations during a certain day; and

  4. standardised forms “devised as methods of common communication across dispersed work groups” (Star and Griesemer, 1989, p. 411), for example, the reporting tools utilised to report species in Artportalen.

Since boundary objects have a variety of meanings depending on their utilisation, participants “[…] are faced with the task of reconciling these meanings if they wish to cooperate” (Star and Griesemer, 1989, p. 388). This is accomplished through method standardisation, by which different parties can participate in heterogeneous work (Star and Griesemer, 1989). Moreover, different people make use of the same tools for different purposes and reconcile understandings of handling these tools while contributing to shared objectives, indicating an interpretative flexibility (Star, 2010). Still, particularly central tools need to be utilised in ways that are acknowledged by the community. These conformed utilisations can be understood as obligatory passage points (Callon, 1986; Star and Griesemer, 1989). In relation to the scope of the present study, boundary objects and the various ways through which obligatory passage points are established comprise a theoretical toolbox for understanding the collaborative emergence of what constitutes valid data, the various ways through which credibility is negotiated in relation to valid data and how authority is established when engaging in information practices leading to valid data. In the following section, the method used in the study is described.

Method

Studies invoking a practice-theoretical lens require close examination of routinised activities, shared understandings and the conformity of actions in relation to situated and material aspects. One way to methodologically accomplish this when studying distributed settings is to approach the participants through co-presence (Beaulieu, 2010). By aiming towards co-presence, it is possible to be present with the participants by stressing and marking mutual understanding of the activities under study without physically being present on site. In this study, co-presence has been achieved through engaging in conversations with the study participants on Swedish nature as well as in discussions of Artportalen and other citizen science platforms and how they can be utilised. Closeness to the empirical setting at hand has been sought in a practice-theoretical vein to get "a feel for the game" (Bourdieu, 1990, p. 66). Other means for achieving co-presence included explicating my own participation on Artportalen and keeping in contact with some participants on topics related to biodiversity monitoring. Based on this overarching methodological approach, the method used in this study is described further below.

Data selection and collection

The unit of analysis for this study is information practices occurring in the vicinity of Artportalen. Artportalen is an information system which enables volunteers to produce biodiversity citizen science data through reporting observations of species sighted in Swedish nature. The data produced directly provide a basis for environmental research and targeted conservation efforts (Swedish University of Agricultural Sciences, n.d.).

For this study, 15 citizen science practitioners active through Artportalen were recruited as participants. To access participants with a spectrum of engagement, data selection criteria included that the participants had been engaged in producing biodiversity data through Artportalen between 2018 and 2020 and that they had reported more than 150 species observations. This selection was motivated by the endeavour to interview participants ranging from highly active observers to participants who engaged in data production less often but routinely. Moreover, the participants were actively sought to be geographically dispersed over several Swedish regions in order to understand possible variations of engagement and activity depending on location. In the present study, participants were distributed across 11 regions, from the Skåne region in southern Sweden to the Västerbotten region in northern Sweden. As such, this entailed gaining access to a breadth of participant engagement in relation to factors concerning experience and geographical location.
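As a rough illustration only, the selection criteria could be expressed as a filter of the kind sketched below. This is a hypothetical Python sketch; the record structure and field names are invented for illustration and do not represent how recruitment was actually operationalised.

```python
# Hypothetical sketch of the selection criteria: engagement through Artportalen
# between 2018 and 2020 and more than 150 reported species observations.
# Record structure and field names are illustrative only.
from datetime import date

def meets_selection_criteria(reporter: dict) -> bool:
    """Check activity in the 2018-2020 window and overall reporting volume."""
    window_start, window_end = date(2018, 1, 1), date(2020, 12, 31)
    active_in_window = any(
        window_start <= obs["date"] <= window_end for obs in reporter["observations"]
    )
    return active_in_window and len(reporter["observations"]) > 150

# candidates = [r for r in reporters if meets_selection_criteria(r)]
# Geographical spread across regions was then sought among the candidates.
```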

The participants were initially contacted via email, through which further contact was established to agree on form, time and date for interviews. Among the participants, six (Beatrice, Daniella, Helena, Isaac, Lisa, Nora) were engaged in Artportalen solely on a volunteer basis. Out of these participants, two (Beatrice and Helena) had been appointed a validator role. A validator role was also appointed to yet another participant (Christian) who, along with one other participant (Joanna), is or was previously employed at Swedish County Administrative Boards. Four participants (Eric, Felix, Gabriel, Molly) work or have worked with reporting species observations through Artportalen as professional biologists. One (Olivia) was employed at the Swedish Forest Agency. Two of the participants (Adam, Karl) are or were previously employed by the Swedish University of Agricultural Sciences. While the participants had various backgrounds, including various types of previous knowledge regarding observing, identifying and reporting species, and ranged from professional biologists to amateur naturalists, they had in common that they all produce citizen science biodiversity data on a spare-time, volunteer basis; for instance, participants appointed validator roles are still first and foremost volunteers as they engage in validation without monetary compensation. The distribution of various types of engagements and occupations among the participants was motivated by the ambition to gain a broad understanding of how notions of credibility, authority and validity arise through multifaceted participation in relation to Artportalen as a large-scale information system for species reports.

The empirical data consisted of interview transcripts based on semi-structured interviews conducted via a video conferencing application and telephone calls, produced between February and April 2021. The interview guide was designed to include various types of interview questions, from introductory and follow-up questions to open-ended, probing and structuring questions (cf. Brinkmann and Kvale, 2015). The interview approach entailed seeking "[…] openness to changes of sequence and forms of questions in order to follow up on the specific answers given and the stories told" (Brinkmann and Kvale, 2015, p. 150). As an example, while the focus of the study was participants' information practices in relation to authority, credibility and validity, formulations in the interview guide included notions of what was deemed important and reliable when producing biodiversity citizen science data. This was done with the purpose of allowing the participants to speak freely about their understandings of the three topics without directly steering them towards these particular concepts. All interviews followed an interview guide (see Appendix) concerning views on trustworthy data, requirements for participation and views on the significance of collaboration. The interviews were recorded and subsequently transcribed. Quotations were translated from Swedish to English by the author. The interviews ranged from 30 min to 1.5 h. The participants all gave their written permission and consent to being included in the study and to having personal information collected. All participants were pseudonymised.

Analysis

Interview transcripts were qualitatively coded by the author through a coding scheme using the qualitative data analysis software Atlas.ti 9.0.7. Major code themes included information practices identified in previous research, such as observing, reporting and validating species (cf. Ekström, 2022a), and themes of credibility, authority and validity following the scope of the current article. When analysing the transcripts, the process consisted of an abductive approach seeking to unfold statements about information practices through "[…] the unpredictable conversational world of human beings" (Brinkmann and Kvale, 2015, p. 255). As the transcripts were analysed through meaning interpretation (Brinkmann and Kvale, 2015), certain passages were occasionally re-coded, considering that the meanings of the participants' sayings emerged through iterative reading. Through iterative reading and comparison of participants' answers in the interview transcripts, similar accounts indicated saturation, while variations in the participants' statements pointed to differing aspects of their information practices. Coding descriptions and examples are presented in Table 1.

Following the coding, the themes emerging throughout the interview transcripts were analysed through the lens of boundary objects. Information practices and their sociomaterial aspects, such as how practices were made possible and constrained by the utilisation of material objects, were analysed in relation to the various forms of boundary objects explicated in the theory section. Ideal types, repositories, coincident boundaries and standardised forms (cf. Star and Griesemer, 1989) provided a theoretical understanding of how information practices were enacted in relation to material objects and how notions of credibility, authority and validity unfolded through the participants' sayings concerning their activities. For instance, in the analysis, a photograph of a species was considered an ideal type as it provides a basis for ensuring a credible report, but the determination of how this credibility arises in relation to the photograph is subject to ongoing negotiation among the practitioners. In other words, information practices were understood in relation to their situatedness as well as the motives, interests and backgrounds of the participants. Moving between the empirical data and the theoretical framework through an abductive approach (cf. Brinkmann and Kvale, 2015), it was possible to attain a gradually increasing understanding of the participants' information practices and how notions of validity, authority and credibility were co-constructed.

Results

Since Artportalen involves a multitude of participants with different knowledge interests in a distributed setting, united through their interactions with a standardising information system, there is a shared interest among the interviewed participants that the data that become registered with the portal should be valid. As the species observations reported through Artportalen constitute open data, they are publicly visible and open for all participants to explore. This openness entails transparency which is associated with the co-construction of valid data according to the participants:

You publish [your reports] directly and the details become public at once, and they are thereby up-to-date. At the same time, there might be uncertainties. But that fact means that there are thousands of eyes that see those details and, if things look strange, the reporter is often questioned (Adam).

The thousands of eyes that Adam mentions refer to the participants who monitor and examine current sightings out of curiosity. As will be shown in this section, these eyes include even more individuals who also assist in identifying species, providing additional meaning to the boundary objects that are created and maintained in relation to participation.

The results section, in which the emergence of credibility, authority and validity of biodiversity citizen science data production is understood through the theoretical lens of boundary objects, is presented through four main activities through which credibility, authority and validity are co-created: transforming species observations to data, in which the validity of species reports is discussed in particular; supplementing reports with objects of trust, where credibility is shown to arise through participants providing additional objects such as photographs when submitting sightings to the information system; augmenting identification through authority outreach, where professional knowledge is pulled into Artportalen as participants contact experts in person, via social media or by sending samples to a lab for analysis; and assessing credibility via peer monitoring, where volunteers in their authoritative appointment as validators systematically engage in practices of validating, and assessing the credibility of, the data supplied to Artportalen.

Transforming species observations to data

When people engage in biodiversity citizen science through Artportalen, they do so by observing species, identifying the species and making annotations about the circumstances during which the species were sighted. Annotations are then transformed to structured data through reporting tools in Artportalen's interface and submitted to a designated database. The activities through which such reporting is accomplished range from the observation and the annotation to the sharing of sightings. The physical witnessing of species is transferred to the mediated representation of an observation as bits and bytes, which results in a reported observation published in the information system. Evident from the empirical data is that correctly transforming species sightings into data is deemed highly important by participants in Artportalen. What constitutes a valid reported observation, however, is not clearly determined, as described below.

Submitting observed species to the large-scale information system Artportalen allows the participant to enter metadata in a broad range of forms, and the understanding of which of these should be filled in to ensure a valid report varies among participants. Certain annotations are compulsory when reporting species observations. As stated by the participant Karl, a species needs to be entered from a predefined list of species in the information system. A user's collection of such species becomes a repository (cf. Star and Griesemer, 1989). Date and time are crucial for a valid report, as is the site of observation, which is entered “either by writing a coordinate or by choosing a point on a map” (Adam). Additionally, the final mandatory task is to mark the observer, which is auto-generated as participants log into Artportalen.
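A minimal sketch of how the distinction between mandatory and desirable details might be checked before a report is accepted is given below. It is an illustration only, with hypothetical field names and species entries; Artportalen's actual validation rules are not reproduced here.

```python
# Hypothetical sketch of mandatory-field checking for a species report.
# Field names and the example species list are illustrative only.
PREDEFINED_SPECIES = {"Parus major", "Tussilago farfara", "Aglais urticae"}

def missing_mandatory_details(report: dict) -> list:
    """Return the mandatory details that are absent or invalid in a report."""
    problems = []
    if report.get("species") not in PREDEFINED_SPECIES:
        problems.append("species must be chosen from the predefined species list")
    if not report.get("observed_at"):
        problems.append("date and time of the observation are required")
    if report.get("latitude") is None or report.get("longitude") is None:
        problems.append("a site (coordinate or map point) is required")
    if not report.get("observer"):
        problems.append("observer is auto-filled from the logged-in account")
    return problems

# Desirable but optional annotations (biotope, quantity, host plant, activity)
# would not block submission; they add thoroughness rather than validity.
```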

As such, Artportalen's reporting tools are standardised forms through which the information practices are implicitly guided and procedures are established (cf. Star and Griesemer, 1989). However, while some participants remark that details are occasionally entered arbitrarily, others are more thorough. Thorough annotations concerning when species were sighted are desirable, but not required, for a valid report. Detailed date and time annotations are moreover considered significant by certain participants as they can contribute to phenological indicators:

Seasonal indicators are interesting, that is, when a bird arrives, because that is something that you can use when you look at climate changes. “When does the first small tortoiseshell [butterfly] arrive?” or “When does the first coltsfoot sprout?”, [this is] a climate indicator (Joanna).

Temporal annotations are considered valuable in the production of data as they can lead to answering questions of climate change impacts on species' arrival and migration. Similarly, detailed geographical coordinates give rise to spatial occurrences and indicators of where species have been sighted, which is stressed by several participants (Beatrice, Joanna, Gabriel, Felix) and exemplified accordingly:

We have inventoried roadsides for four years. It relates to both what species can be found and the ecology, so it is quite difficult stuff. In those cases, coordinates and photographs are meticulous and there are often a lot of writing assignments. That information is then used by the Swedish Transport Administration to take care of our roads (Gabriel).

Apart from mandatory details for ensuring valid data, participants can register additional descriptors deemed of importance for a variety of reasons. One such descriptor is the biotope, through which the context of a plant species can be apprehended and changes might be traced through “a good description of the milieu, plant environment, biotope” (Helena). Similarly, the description of host plants might also provide adjacent annotations (Lisa). For other participants, species' movements and doings present other significant descriptors. “I try to note, for example if I saw a northern goshawk, that the goshawk sat in a tree by the road and flew up or that it was a foraging common kestrel, or two” (Olivia). The situation in which the species was sighted is important as it can provide circumstantial details concerning the activities of the bird. Additionally, the quantity of the species observed is regarded as significant for understanding spatial and temporal changes:

Almost everyone who does research on or who works with these data in any way wants to look at change. For example [when investigating] plants on a specific site, a successive decrease can say a lot. This is a highly interesting piece of information instead of [simply understanding] that [certain events] continue to occur (Joanna).

As shown above, annotations come in many forms for the co-production of valid research data, occasionally more valuable for some than for others. These descriptors, and the meaning imbued in them, make up obligatory passage points (cf. Star and Griesemer, 1989) where nature details are transferred to the system as textual and numerical descriptions. Joanna's statement above also echoes Adam's quotation in the introduction of the results section, in that there are many who are interested in and survey the data. Another way to strive for valid research data is to complement reports with additional data, perceived to function as evidence, or objects of trust, as illustrated in the following section.

Supplementing reports with objects of trust

To enhance species reports, participants can add supplemental data such as photographs, video and sound recordings. Photographs, especially, are considered to provide evidential support to reports of rare species, resulting in valid reports: "For documentation [purposes], one needs to be able to take photographs in some manner if one sees something more unusual or something that one needs to identify" (Molly). In other words, should the species be incorrectly or uncertainly identified, the photograph functions as a supporting object through which the report can be corrected and thereby made valid. This is further explicated by another participant:

It is not like you must be an expert on all species, but what you must do is to know that what you report is correct, so to speak. If you are unsure you have to report it as an “uncertain identification” and then you can add a picture or a photograph (Adam).

Photographs become a way to ensure the credibility of the reports through identification and prove that what was reported was what was seen, but also to mitigate uncertainties. The photograph becomes an object of trust. The participant Daniella argues that photographs have made physical collection of species redundant, deeming species samples old-fashioned and unethical:

When it comes to [killing] insects and the like […], I think that it is a completely obsolete way of working and that it is immoral, especially when it comes to rare insects. Because today one can, with a camera and zoom lenses and other [tools], come so close to an insect that it is not needed (Daniella).

Despite the statement above, the participant Eric stresses the necessity of collecting samples by arguing that physical evidence and photography should not be dichotomised ways of ensuring valid reports. Well-documented species are seen as few in relation to the full breadth of data:

The amount of total information is so much larger today in relation to the well documented [collections] where there are samples or where there are actual photographs that suffice for species identification. That is something of an issue for the credibility, I would say (Eric).

The photograph as an ideal type is not clearly determined. It is sought after as a marker of credibility and thoroughness, but what constitutes a valid photograph is open for debate among the participants. "Some insects are possible to photograph, others are so small that you have to look through a stereo microscope" (Eric). A photograph itself is a recognisable unit providing credibility to the report. Yet, there are shifting meanings regarding how a photograph should be taken depending on different views among the participants, establishing numerous obligatory passage points to photographing species (cf. Star and Griesemer, 1989). Not only the validity of the reports is negotiated, but also the validity of the photographs.

Photographs can also serve as an indicator for enhancing the credibility of bulk reports. As several participants (Adam, Eric, Felix) imply, another way through which authority and credibility can be established in this context is through submitting large amounts of reports during a certain period. This fosters a competitive aspect to biodiversity citizen science and the associated information practices:

That is perhaps a downside with the competitive element of Artportalen, that you want to show that you have seen so and so many species. It can be tempting, if you are “well …”, slightly uncertain, “… but I think so …”, and you report [the species anyway]. If you do not have a photograph or a sample, [the credibility] can be questioned (Adam).

Unfolding this quotation, credibility and authority can also be attained by producing a large amount of data by competitive means. However, if these large amounts cannot be ascertained by supplemental information objects, there is a risk that the credibility of the data, and subsequently the reporter, becomes questioned, risking a diminishment of how credible and authoritative they are considered by other participants. The photograph and the sample here become objects of trust certifying the credibility as well as the authority of highly active participants.

Photographs and samples can provide subsidiary documentation for the identification and assurance of observed species. What comprises an adequate photograph or a sample is however disputable, leading to an ideal type in flux. The report shifts between a well-structured and an ill-structured ideal type in relation to these photographs (cf. Star, 2010). Occasionally, however, neither samples nor photographs are enough. In those cases, additional activities are needed, also in relation to scientific conduct: "Sometimes we must sequence species, look at DNA, to be certain [about identification]. I am not sure how credible sequencing is; it depends on which genes you choose" (Gabriel). DNA sequencing invites even further individuals into the determination of the species, extending the examining eyes previously discussed by providing credibility to the species report through authority-supplying methods. Similar outreach activities present other needs for determining credibility, which is discussed next.

Augmenting identification through authority outreach

As reported above, the striving towards valid and credible data is a co-constructive effort, since activities of supplying descriptors and auxiliary data are the results of ongoing discussions on and determinations of species. Turning the prime attention to values of authority: in order to mitigate uncertainties during species identification activities, participants are occasionally involved with external parties. This external contact is established for their observations to be correctly identified and become ideal types: recognisable and yet shifting in meaning depending on different views (cf. Star and Griesemer, 1989). These external parties provide a source of authority represented through the ability to identify species on demand. A prominent way to informally ask for help when identifying species is to direct questions through social media platforms. The participant Daniella describes this activity further:

There are quite a lot of Facebook groups such as “what kind of insect is this?” and “what kind of bird is this?”. Often when you are uncertain you can post pictures there and ask. Those who really are capable or administrators on Artportalen will often let you know what you have found. You get help from those who know more (Daniella).

Daniella's statement relates to a type of outreach where individuals regarded as authoritative are contacted in relation to species identifications, seeking to increase the likelihood that one makes a credible and valid report. Participants enlist help from the crowd through channels external to Artportalen as an information system; this assisted identification functions as a passage point towards the report as an ideal type (cf. Star and Griesemer, 1989). Other ways of reaching out to authoritative sources for identification assistance relate to direct contact. As mentioned below in the case of Isaac, outreach to scientists can assist the activity of species identification:

Most often it is not possible to identify springtails through other means than by microscope. I have received confirmation of that from real experts. That is another important aspect, I think, that you discuss with people who actually know and are engaged with such research. […] There is a Norwegian scientist […] to whom I send springtails if I need [definitive proof for] which species they are. He has also shown interest in getting samples for his own studies (Isaac).

The outreach activity in this case functions to ensure the credibility of valid species reporting through outreach to active scientists. This in turn leads to a credibility determination, as the identification is confirmed through the authority of someone considered an expert, but also to a declaration of what is deemed a reliable way to investigate species which are especially difficult to distinguish by untrained eyes. Proof of outreach can also be added to single reported species through descriptors and references, to enhance the credibility of reported species:

It has happened, not least with insects, that I have handed findings over to experts and gotten them confirmed. So, when I report [those], I write approximately the way that you do with collected material in written reports. You write “det.” as in determined, identified species, and the name of the person because it somehow strengthens reliability and makes results more credible (Eric).

The det. annotation signifies a determination activity which allows other participants to understand that the identification has been acknowledged and confirmed by a credible, external source. This is an activity seeking to enhance the validity of the report, which in turn may enrich the credibility of the practitioner. The det. annotation becomes an authoritative mark which is made possible through the free text field in the reporting tools as standardised forms, augmenting the reported species as an ideal type (cf. Star and Griesemer, 1989). In other words, the det. descriptor is understood as an authoritative seal, implying that yet another eye, that of an expert, has engaged in examining the observation to be reported.
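As a small illustration, a determination note of this kind could be appended to a report's free-text field roughly as sketched below. The function, field names and the placeholder expert name are hypothetical; the sketch only mirrors the convention the participants describe.

```python
# Hypothetical sketch of appending a determination ("det.") note to a report's
# free-text field. Names and fields are placeholders, not actual data.
def add_determination_note(report: dict, expert_name: str) -> dict:
    """Credit an external expert who confirmed the identification."""
    existing = report.get("note", "")
    report["note"] = f"{existing} det. {expert_name}".strip()
    return report

# Example: crediting a (placeholder) specialist for a confirmed identification.
confirmed = add_determination_note(
    {"species": "Entomobrya nivalis", "note": "Identified under microscope."},
    "N.N.",
)
```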

As described above, authority outreach functions as a way both to mitigate uncertainties and as advanced use of reporting tools and annotations seen as standardised forms (cf. Star and Griesemer, 1989). Authority and credibility also arise through the validation monitoring of aggregated report lists, which is explicated next.

Assessing credibility via peer monitoring

In this final results subsection, peer monitoring and the shared validation of species as credibility assessment are described. While the open data of Artportalen are free to be investigated and scrutinised by all participants, certain participants have been granted volunteer validator roles and may question reported observations regarded as out of scope for a certain geographic region:

I usually keep track a bit of what is reported day-to-day because, like many others, I am curious […]. If there is something odd, some species that should not at all be [at a certain site], you can see quite quickly if something seems a bit strange. You might get the impression that this is someone who does not observe plants a lot but still makes reports, and you [yourself] know that there is another species which is much more common and looks the same. Then, we usually ask questions if it concerns [species] that have not been seen in the region before (Christian).

In line with the outreach activities described previously, validators also enact authority in Artportalen by scrutinising metadata, estimating the credibility of the reported observations. As such, the lists comprising daily sightings can be seen as repositories, as “[…] ordered ‘piles’ of objects which are indexed in a standardized fashion” (cf. Star and Griesemer, 1989, p. 410). For the individual participant, the lists work to satisfy one's curiosity and show activity carried out in a certain time frame as a personal repository (cf. Star and Griesemer, 1989). For the validator, the lists can be understood as coincident boundaries (cf. Star and Griesemer, 1989); they comprise the basis for ensuring valid reports through systematic monitoring.
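The kind of plausibility screening described above could be caricatured as the sketch below. In practice this is a human judgement made by volunteer validators with the system's support, not an automated rule; the data structures and names here are hypothetical.

```python
# Hypothetical sketch of plausibility screening over a daily list of reports:
# species not previously recorded in a region are flagged for a follow-up
# question to the reporter. Data structures are illustrative only.
def flag_for_review(daily_reports: list, known_regional_species: dict) -> list:
    """Return reports whose species fall outside what is known for their region."""
    return [
        report for report in daily_reports
        if report["species"] not in known_regional_species.get(report["region"], set())
    ]

# flagged = flag_for_review(todays_reports, regional_species_lists)
# A validator would then ask the reporter about the identification rather than
# rejecting the report outright.
```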

Relatedly, peer monitoring of species' plausibility also gives rise to the evaluation of participants. As the participant Nora states: "Anyone can add anything, which also happens. So, unfortunately, one surveys who has published the observation [and judges] if it is someone you trust". Perceptions of respect and esteem arise:

It is about a kind of reputation. Some people who report wildly, they will soon have a reputation that “you cannot be really certain that the person concerned has made a correct identification”. You do not want that kind of reputation. Therefore, you must be quite careful that you know what you are doing (Adam).

Through the quotation above, Adam echoes his previous point about the thousands of eyes that continuously monitor reported observations, by also declaring the risks attached to repeated wrongdoings and mistakes. As reputations are reciprocally formed between participants, the reported observations are data of what has been witnessed. At the same time, the data include a variety of internal contents which arise depending on the viewer, and the translations of species into data, viewed as passage points (cf. Star and Griesemer, 1989), are continuously examined by peers. Lists of reports piling up as repositories can be understood as enhancing the authority of the single participant, but only if these lists, and the reports that comprise them, are understood as valid and credible by the community.

Summing up the results section, as participants utilise the same information system for reporting species observations made in Swedish nature, reporting tools provide standardised forms which homogenise the reports, seeking to streamline them into valid ideal types (cf. Star and Griesemer, 1989). While valid data are understood as a sought-for ideal, various notions exist of what constitutes this ideal, as well as of what that validity is. Participants present various opinions of what comprises adequate research data. The adequacy relates rather to the thoroughness of described circumstances in providing credibility. By engaging oneself in the collaborative production of biodiversity citizen science data, credibility is established through long-term, dedicated participation represented by voluminous reports, as well as precise species knowledge made visible through these reports. This credibility can be facilitated by the supplementation of objects of trust such as photographs or by identification assistance through authority outreach. This is of particular importance as several highly active participants engage in creating repositories (cf. Star and Griesemer, 1989) for their own sake with the ambition to highlight activity and be perceived as credible and authoritative contributors. Moreover, authority is regarded as domain knowledge concerning certain species groups, but also as aligned competences held by other parties working with species identification through laboratory techniques. The activities of supplying reports with objects of trust and augmenting identification through authority outreach are highly significant, since collaborative validation activities are carried out on lists of reported species understood as coincident boundaries (cf. Star and Griesemer, 1989), variably forming notions of who is trustworthy and who is authoritative. The discrepancy between what encompasses obligatory annotations, what are desirable notes and how these should be entered through the information system gives rise to a variety of opinions on what is considered valid data, that is, what encompasses species reports as an ideal type or a "'good enough road map' for all parties" (cf. Star and Griesemer, 1989, p. 410).

Concluding discussion

The present study has focussed on the co-creation of credibility, authority and validity in biodiversity citizen science, in the vicinity of a shared information system where a multitude of knowledge interests meet. Through a sociomaterial, practice-oriented perspective, starting from the assumption that information-related activities comprise interconnected material components enacted through situated schemes for achieving shared objectives (cf. Gherardi, 2017; Orlikowski, 2010), values of credibility, authority and validity were analysed through the lens of boundary objects and obligatory passage points (cf. Star and Griesemer, 1989). This fostered a sociomaterial approach to the study of biodiversity citizen science information practices and positioned the study in relation to previous research on boundary objects in library and information science (e.g. Huvila, 2016; Pilerot, 2012). The perspective was considered fruitful given that material objects such as reporting tools, photographs and metadata forms play a significant part in the enactment of information practices in the empirical setting.

Validity has been shown to be a desirable value for the participants operating through Artportalen, achieved through collaborative efforts in the present research setting. Certain details are compulsory when reporting species; others are voluntary. However, as institutional systems for quality review are bypassed in favour of peer validation (cf. Elliott and Rosenberg, 2019), what is deemed valid data is not fully agreed upon (cf. Balázs et al., 2021). The data are shaped and formed iteratively by thousands of eyes, as one participant put it, who monitor, examine and judge the data. Biodiversity citizen science presents many and varied perspectives on seeking, sharing and annotating information to achieve validity, emerging from the epistemic heterogeneity of biodiversity research (cf. Bowker, 2000a, b). For instance, mandatory descriptors include species, date, time, geographical coordinates and observer, but the significance of further metadata varies among the participants. For some, thorough temporal metadata are deemed relevant for understanding longitudinal change, while detailed geographical coordinates may be of particular importance to others in order to ease data reuse. Other significant annotations include biotope descriptors or species quantities. Notions of what constitutes a valid observation thus vary, but they should all be considered aspects through which credibility and authority can emerge. By consistently ensuring the validity of one's reports, a participant can lay the foundation for authority regarding a certain species group or for credibility regarding the overall know-how to produce data.
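To make the distinction between mandatory descriptors and voluntary annotations concrete, the sketch below models a species report as a simple record. It is a minimal illustration only: the field names and the completeness check are assumptions made for this example and do not reproduce Artportalen's actual data model or validation rules.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class SpeciesReport:
    # Mandatory descriptors mentioned by the participants
    species: str                      # a species from the system's list
    date: str                         # observation date (possibly a period)
    time: str
    coordinates: Tuple[float, float]  # latitude, longitude
    observer: str                     # the participant submitting the report
    # Voluntary annotations whose significance varies between participants
    biotope: Optional[str] = None
    quantity: Optional[int] = None
    photographs: List[str] = field(default_factory=list)


def has_mandatory_descriptors(report: SpeciesReport) -> bool:
    """A stand-in for the streamlining performed by the standardised
    reporting form: only reports with all mandatory descriptors pass."""
    return all([report.species, report.date, report.time,
                report.coordinates, report.observer])
```

Read this way, the voluntary fields are where participants' differing notions of validity play out: the form accepts a report without them, but peers may judge such a report as less credible.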

Prior research has argued for the need to "uncover […] underpinning epistemic beliefs" (Huvila, 2020) behind credibility criteria. In this study, this notion has been acknowledged through the investigation of how credibility is tied to mediated data: depending on the participants' experiences and knowledge interests in producing valid research data, certain descriptors are deemed more important than others. The assessment of credibility can be understood as an ongoing negotiation in the present research setting, which is in line with previous research determining how notions of credibility change over time (Francke and Sundin, 2012). It should be noted, however, that the length of participation does have implications for how participants regard credibility (cf. McKenzie, 2003; Tuominen et al., 2002); as participant Adam describes, a contributor who reports wildly will soon gain a bad reputation and not be considered trustworthy. This implies, in the broader scheme of things, that new participants may not immediately grasp the co-construction of credibility fully, and that this understanding is gained by trial and error. Continuing sociomaterial (cf. Pilerot, 2016) and science and technology studies-influenced (cf. Sundin, 2011) information practices research, credibility has been found to be co-constructed through peer data production, but also by Artportalen. Artportalen as an actor, with the possibilities and constraints that the information system entails, functions as a node through which species reports are publicly visible, up-to-date and open to criticism by the participants. The information system invokes these practices and co-constructs credibility with the participants.

The fact that standardised forms do not fully determine the validity of biodiversity citizen science research data gives rise to varying opinions on who and what is credible and authoritative. While the production of valid data constitutes a modus operandi for participants, the striving to support one's reports with claims through annotation is highly significant. Rich annotation of descriptors is one way to accomplish credibility; supplementing reports with auxiliary objects such as photographs is another. Some participants also use tools and methods such as microscopy, or send samples for DNA sequencing, to this end. Altogether, this speaks of the necessity of annotation for supplying information among knowledge communities (cf. Kalir and Garcia, 2021). Such efforts give rise to notions of trust (cf. Rotman et al., 2012), both when reporting rare species and when reporting a large number of species.

Moreover, reaching out to authorities, whether expert peers or active scientists, is used to augment reports through references, enrolling even more eyes to identify species and enhance the credibility of data (cf. Liberatore et al., 2018). That is, outreach activity is deemed necessary in cases where the participant cannot determine the identity of the species through Artportalen or finds it easier to ask other interested parties outside of Artportalen. These activities of augmenting the credibility of reports through authority outreach provide further ways in which endeavours towards valid data are enacted. Furthermore, such claims are subsequently considered in peer monitoring activities by the multitude of eyes scanning, identifying and monitoring reports of species observations. There is not only one way to place descriptors on observations, and some are more authoritative than others. Outreach activities result in minor annotations, such as the det. (as in determined) reference included in the metadata of reported observations, which lend credibility to specific species reports since the practice is drawn from traditional activities relating to written reports. In this case, participants mirror traditional biodiversity research activities as a way to draw authority into the information system, strengthening the credibility of the reports. This in turn implies that individual augmentation of reports has implications for the collaborative data production as a whole. As some participants engage in mimicking traditional methods for identifying species, this raises the bar for other participants to ensure the validity of their species reports and to secure their own credibility as data producers. Such information practices can be likened to endeavours towards reliability discussed in previous research (cf. Frigerio et al., 2021). The information practices of seeking, sharing and annotating information, together with distributed peer validation practices, assist the negotiation and co-construction of credibility in collaborative citizen science research data production. From a relational, practice-oriented perspective, none of the three values studied (authority, credibility or validity) can be regarded as more prominent than the others. Rather, they are interwoven and reciprocally formed through information practices in the empirical setting.
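As an illustration of how such objects of trust and outreach-based references might be attached to a report, the snippet below gathers them as simple credibility signals. The signal names, the determined_by parameter and the overall logic are hypothetical assumptions for this sketch; they merely echo the practices described by the participants and are not part of Artportalen.

```python
from typing import List, Optional


def credibility_signals(photographs: List[str],
                        determined_by: Optional[str] = None,
                        dna_sequenced: bool = False) -> List[str]:
    """Collect illustrative 'objects of trust' attached to a species report:
    photographs, DNA sequencing and the det. reference added after authority
    outreach. Purely a sketch, not Artportalen functionality."""
    signals = []
    if photographs:
        signals.append(f"{len(photographs)} photograph(s) attached")
    if dna_sequenced:
        signals.append("sample sent for DNA sequencing")
    if determined_by:
        # Mirrors the det. annotation drawn from traditional written reports
        signals.append(f"det. {determined_by}")
    return signals


# Example: a rare-species report backed by a photograph and an expert determination
print(credibility_signals(["observation_0417.jpg"], determined_by="expert peer"))
```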

Acknowledging previous information practices studies on cognitive authority (e.g. McKenzie, 2003), the focus of the study has been to investigate instances of distributed authority, credibility and validity of research data established through information practices. While the findings are subject to certain limitations given the number of participants interviewed from a qualitative perspective, other practice-oriented approaches could be employed in further related studies to saturate the results. Arguably, investigations of related practice elements such as embodiment and temporality (cf. Reckwitz, 2002) or power (cf. Watson, 2017) could prove fruitful. Temporal aspects of information practices in particular have not been the main focus of enquiry in the present study, although the subject occasionally surfaced during interviews. Participants' lengths of participation could prove a fruitful focus for future studies, juxtaposed with previous research (Aceves-Bueno et al., 2017; Dalby et al., 2021). Such perspectives could serve to broaden the understanding of biodiversity citizen science information practices, providing nuance concerning, for instance, how routine bodily activities and performances enact information practices, how information practices change over time and how previous experiences shape how authority and credibility are established.

Previous research contributions have adopted science and technology studies-inclined theoretical approaches for understanding knowledge production in organisations and groups of peers through interaction with technological tools (e.g. Bowker, 2000a; Johansson, 2012; Lundquist, 2018; Sundin, 2011). The present study's theoretical focus on boundary objects arguably extends sociomaterial perspectives in scholarly information practices research by investigating collaborative research data production through peer practices of seeking, reporting, annotating, documenting and validating species data. More specifically, validity, authority and credibility of data have been shown to be co-constructed in a distributed fashion by thousands of eyes, made visible through the theoretical lens of boundary objects (Star and Griesemer, 1989). The metaphor of the thousands of examining eyes can also be seen in the light of the wisdom of crowds, that is, the idea that collaborative aggregation of information often outperforms single, individual contributions (cf. Surowiecki, 2004). Besides the proposals for further studies grounded in adjacent practice-theoretical approaches outlined above, future research can also serve to nuance understandings of collaborative scholarly conduct involving non-scientists by further adopting science and technology studies perspectives on objectual knowledge production (e.g. Knorr Cetina, 2001). This can be achieved by focussing especially on the material arrangements through which practices play out (e.g. Dourish, 2017). By so doing, it is possible to understand the interrelated, sociomaterial information practices enacted through participation in citizen science.

Coding descriptions and examples

Code group | Code | Description | Example
Information practices | Observing | Sighting species in nature, through organised inventorying or in one's close surroundings | "We have inventoried roadsides for four years. It relates to both what species can be found and the ecology, so it is quite difficult stuff" (Gabriel)
Information practices | Reporting | Submitting annotated observations to the citizen science species report system Artportalen | "You publish [your reports] directly and the details become public at once, and they are thereby up-to-date" (Adam)
Information practices | Validating | Ensuring the accuracy of data submitted to the information system | "I cannot validate sightings [made] in Norrbotten [in northern Sweden], I do not know what it looks like there. You need people locally who know their region" (Helena)
Values | Validity | Correctness in biodiversity citizen science data production | "[A report shall contain] a species from [Artportalen's] list of species, a site of finding, a date that can be a period, the person logged in" (Karl)
Values | Credibility | Trustworthiness emerging through thorough participation | "Anyone can add anything, which also happens. So, unfortunately, one surveys who has published the observation [and judges] if it is someone you trust" (Nora)
Values | Authority | Power and control concerning the identification of species | "Most often it is not possible to identify springtails through other means than by microscope. I have received confirmation of that from real experts" (Isaac)

Appendix Interview guide

  1. Can you describe your background?

    • How did you become involved as a citizen scientist?

    • Do you have any prior relevant experience of this?

    • Is there a certain interest that has led you to being engaged in citizen science?

  2. Can you describe how you engage in species observations and reporting at Artportalen?

    • Are there any particular species that you focus on?

    • How do you report these?

    • Do you make use of Artportalen in any other way than reporting observations?

  3. Can you describe how you report observations?

    • Generally? Examples?

    • When does this work out well? When can issues arise?

      • How? Why?

  4. Do you prepare yourself before observing species?

    • If yes; How? Why?

    • If no; Why not?

  5. Do you collaborate with others while observing species?

    • If yes; When? How?

    • If no; Why not, do you think?

  6. Has your participation as a citizen scientist changed over time?

    • If yes; How? Why?

    • If no; Why not, do you think?

  7. What do you think that you have learned during the time that you have been engaged in this activity?

    • If so; What? How?

    • If not; Had you expected that? If yes; What? If not; Why has it not happened?

  8. What is particularly important to note when observing species?

    • (What significance does collaboration have?)

  9. What do you think is required to make good species observations?

  10. What would you say is reliable citizen science data?

    • How do you contribute to reliable data?

    • Are there any issues relating to reliability of citizen science data as you see it?

  11. What do you need to know as a citizen scientist (to complete one's task)?

    • How do you get to know this? When?

  12. What do you think is required for you to contribute?

  13. What has worked well? What has worked less well?

  14. Is there anything you would like to add?

References

Aceves-Bueno, E., Adeleye, A.S., Feraud, M., Huang, Y., Tao, M., Yang, Y. and Anderson, S.E. (2017), "The accuracy of citizen science data: a quantitative review", Bulletin of the Ecological Society of America, Vol. 98 No. 4, pp. 278-290.

Balázs, B., Mooney, P., Nováková, E., Bastin, L. and Arsanjani, J.J. (2021), "Data quality in citizen science", in Vohland, K., Land-Zandstra, A., Ceccaroni, L., Lemmens, R., Perelló, J., Ponti, M., Samson, R. and Wagenknecht, K. (Eds), The Science of Citizen Science, Springer Nature, London, pp. 139-158.

Beaulieu, A. (2010), “Research note: from co-location to co-presence: shifts in the use of ethnography for the study of knowledge”, Social Studies of Science, Vol. 40 No. 3, pp. 453-470.

Bonter, D.N. and Cooper, C.B. (2012), “Data validation in citizen science: a case study from project FeederWatch”, Frontiers in Ecology and the Environment, Vol. 10 No. 6, pp. 305-307.

Bourdieu, P. (1990), The Logic of Practice, Stanford University Press, Stanford, CA.

Bowker, G.C. (2000a), “Work and information practices in the sciences of biodiversity”, VLDB 2000, Proceedings of 26th International Conference on Very Large Data Bases, Kaufmann, San Francisco, CA, pp. 10-14.

Bowker, G.C. (2000b), “Biodiversity datadiversity”, Social Studies of Science, Vol. 30 No. 5, pp. 643-683.

Bowker, G.C. and Star, S.L. (1999), Sorting Things Out: Classification and its Consequences, MIT Press, Cambridge, MA.

Brinkmann, S. and Kvale, S. (2015), InterViews: Learning the Craft of Qualitative Research Interviewing, 3rd ed., Sage Publications, Los Angeles, CA.

Callon, M. (1986), “Some elements of a sociology of translation: domestication of the scallops and the fishermen of St. Brieuc Bay”, in Law, J. (Ed.), Power, Action, and Belief: A New Sociology of Knowledge, Routledge and Kegan Paul, London, pp. 196-233.

Dalby, O., Sinha, I., Unsworth, R.K., McKenzie, L.J., Jones, B.L. and Cullen-Unsworth, L.C. (2021), "Citizen science driven big data collection requires improved and inclusive societal engagement", Frontiers in Marine Science, Vol. 8, p. 432.

Davies, E. and McKenzie, P.J. (2004), “Preparing for opening night: temporal boundary objects in textually-mediated professional practice”, Information Research, Vol. 10 No. 1, available at: http://informationr.net/ir/10-1/paper211.html (accessed 07 December 2021).

Dourish, P. (2017), The Stuff of Bits: an Essay on the Materialities of Information, MIT Press, Cambridge, MA.

Ekström, B. (2022a), “A niche of their own: variations of information practices in biodiversity citizen science”, Journal of Documentation, Vol. 78 No. 7, pp. 248-265, doi: 10.1108/JD-07-2021-0146.

Ekström, B. (2022b), “Trace data visualisation enquiry: a methodological coupling for studying information practices in relation to information systems”, Journal of Documentation, Vol. 78 No. 7, pp. 141-159, doi: 10.1108/JD-04-2021-0082.

Eleta, I., Clavell, G.G., Righi, V. and Balestrini, M. (2019), “The promise of participation and decision-making power in citizen science”, Citizen Science: Theory and Practice, Vol. 4 No. 1, pp. 1-9.

Elliott, K.C. and Rosenberg, J. (2019), "Philosophical foundations for citizen science", Citizen Science: Theory and Practice, Ubiquity Press, Vol. 4 No. 1, pp. 1-9.

Francke, H. and Sundin, O. (2012), “Negotiating the role of sources: educators' conceptions of credibility in participatory media”, Library and Information Science Research, Vol. 34 No. 3, pp. 169-175.

Frigerio, D., Richter, A., Per, E., Pruse, B. and Vohland, K. (2021), “Citizen science in the natural sciences”, in Vohland, K., Land-Zandstra, A., Ceccaroni, L., Lemmens, R., Perelló, J., Ponti, M., Samson, R. and Wagenknecht, K. (Eds), The Science of Citizen Science, Springer Nature, London, pp. 79-96.

Gherardi, S. (2017), “Sociomateriality in posthuman practice theory”, in Hui, A., Schatzki, T.R. and Shove, E. (Eds), The Nexus of Practices: Connections, Constellations and Practitioners, Routledge, London, pp. 38-51.

Greyson, D. (2018), “Information triangulation: a complex and agentic everyday information practice”, Journal of the Association for Information Science and Technology, Vol. 69 No. 7, pp. 869-878.

Griesemer, J.R. (2015), “Sharing spaces, crossing boundaries”, in Bowker, G.C., Timmermans, S. and Clarke, A.E. (Eds), Boundary Objects and beyond: Working with Leigh Star, The MIT Press, Cambridge, MA, pp. 201-218.

Haider, J. and Sundin, O. (2020), "Information literacy challenges in digital culture: conflicting engagements of trust and doubt", Information, Communication and Society, Vol. ahead-of-print No. ahead-of-print, pp. 1-16.

Haklay, M. (2018), “Participatory citizen science”, in Hecker, S., Haklay, M., Bowser, A., Makuch, Z., Vogel, J. and Bonn, A. (Eds), Citizen Science: Innovation in Open Science, Society and Policy, UCL Press, London, pp. 52-62.

Huvila, I. (2013), “In web search we trust? Articulation of the cognitive authorities of web searching”, Information Research, Vol. 18 No. 1, available at: http://informationr.net/ir/18-1/paper567.html (accessed 18 June 2021).

Huvila, I. (2016), “Awkwardness of becoming a boundary object: mangle and materialities of reports, documentation data, and the archaeological work”, The Information Society, Vol. 32 No. 4, pp. 280-297.

Huvila, I. (2017), “Distrust, mistrust, untrust and information practices”, Information Research, Vol. 22 No. 1, available at: http://informationr.net/ir/22-1/isic/isic1617.html (accessed 11 June 2021).

Huvila, I. (2020), “Information-making-related information needs and the credibility of information”, Information Research, Vol. 25 No. 4, available at: http://informationr.net/ir/25-4/isic2020/isic2002.html (accessed 11 June 2021).

Huvila, I., Anderson, T.D., Jansen, E.H., McKenzie, P. and Worrall, A. (2017), “Boundary objects in information science”, Journal of the Association for Information Science and Technology, Vol. 68 No. 8, pp. 1807-1822.

Johansson, V. (2012), A Time and Place for Everything? Social Visualisation Tools and Critical Literacies, University of Borås, Borås.

Johansson, V. and Limberg, L. (2017), “Seeking critical literacies in information practices: reconceptualising critical literacy as situated and tool-mediated enactments of meaning”, Information Research, Vol. 22 No. 1, available at: http://informationr.net/ir/22-1/colis/colis1611.html (accessed 11 June 2021).

Kalir, R.H. and Garcia, A. (2021), Annotation, MIT Press, Cambridge, MA.

Knorr Cetina, K. (2001), “Objectual practice”, in Schatzki, T.R., Von Savigny, E. and Knorr Cetina, K. (Eds), The Practice Turn in Contemporary Theory, Routledge, London, pp. 184-197.

Kullenberg, C. and Kasperowski, D. (2016), “What is citizen science? – a scientometric meta-analysis”, PLoS ONE, Vol. 11 No. 1, pp. 1-16.

Latour, B. (1987), Science in Action: How to Follow Scientists and Engineers through Society, Harvard University Press, Cambridge, MA.

Latour, B. (1999), “Circulating reference: sampling the soil in the Amazon forest”, Pandora's Hope: Essays on the Reality of Science Studies, Harvard University Press, Cambridge, MA, pp. 24-79.

Law, J. and Lynch, M. (1988), “Lists, field guides, and the descriptive organization of seeing: birdwatching as an exemplary observational activity”, Human Studies, Vol. 11 No. 2, pp. 271-303.

Liberatore, A., Bowkett, E., MacLeod, C.J., Spurr, E. and Longnecker, N. (2018), “Social media as a platform for a citizen science community of practice”, Citizen Science: Theory and Practice, Vol. 3 No. 1, pp. 1-14.

Lundquist, E. (2018), Flyktiga Möten: Fågelskådning, Epistemisk Gemenskap Och Icke-Mänsklig Karisma, Stockholm University, Stockholm.

McKenzie, P.J. (2003), “Justifying cognitive authority decisions: discursive strategies of information seekers”, The Library Quarterly: Information, Community, Policy, Vol. 73 No. 3, pp. 261-288.

Meeus, S., Silva-Rocha, I., Adriaens, T., Brown, P., Chartosia, N., Claramunt-López, B., Martinou, A., Pocock, M., Preda, C., Roy, H., Tricario, E. and Groom, Q. (2021), "BioBlitz is more than a bit of fun", Biodiversity Information Science and Standards, Vol. 5, doi: 10.3897/biss.5.74361.

Nicolini, D. and Monteiro, P. (2016), “The practice approach: for a praxeology of organisational and management studies”, in Langley, A. and Tsoukas, H. (Eds), The SAGE Handbook of Process Organization Studies, SAGE Publications, London, pp. 110-125.

Orlikowski, W.J. (2010), "The sociomateriality of organisational life: considering technology in management research", Cambridge Journal of Economics, Vol. 34 No. 1, pp. 125-141.

Pilerot, O. (2012), “LIS research on information sharing activities – people, places, or information”, Journal of Documentation, Vol. 68 No. 4, pp. 559-581.

Pilerot, O. (2016), “A practice-based exploration of the enactment of information literacy among PhD students in an interdisciplinary research field”, Journal of Documentation, Vol. 72 No. 3, pp. 414-434.

Pilerot, O. and Lindberg, J. (2018), “Inside the library: academic librarians' knowing in practice”, Journal of Librarianship and Information Science, Vol. 50 No. 3, pp. 254-263.

Pilerot, O., Hammarfelt, B. and Moring, C. (2017), “The many faces of practice theory in library and information studies”, Information Research, Vol. 22 No. 1, available at: http://www.informationr.net/ir/22-1/colis/colis1602.html (accessed 11 June 2021).

Reckwitz, A. (2002), “Toward a theory of social practices: a development in culturalist theorizing”, European Journal of Social Theory, Vol. 5 No. 2, pp. 243-263.

Rieh, S.Y. and Danielson, D.R. (2007), “Credibility: a multidisciplinary framework”, Annual Review of Information Science and Technology, Vol. 41 No. 1, pp. 307-364.

Rotman, D., Preece, J., Hammock, J., Procita, K., Hansen, D., Parr, C., Lewis, D. and Jacobs, D. (2012), “Dynamic changes in motivation in collaborative citizen-science projects”, Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, Association for Computing Machinery, New York, NY, pp. 217-226.

Schatzki, T.R. (2001), “Introduction: practice theory”, in Schatzki, T.R., Von Savigny, E. and Knorr Cetina, K. (Eds), The Practice Turn in Contemporary Theory, Routledge, London, pp. 10-23.

Serret, H., Deguines, N., Jang, Y., Lois, G. and Julliard, R. (2019), “Data quality and participant engagement in citizen science: comparing two approaches for monitoring pollinators in France and South Korea”, Citizen Science: Theory and Practice, Vol. 4 No. 1, pp. 1-12.

Star, S.L. (1989), “The structure of ill-structured solutions: boundary objects and heterogeneous distributed problem solving”, in Gasser, L. and Huhns, M.N. (Eds), Distributed Artificial Intelligence, Morgan Kaufmann, San Francisco, CA, pp. 37-54.

Star, S.L. (2010), “This is not a boundary object: reflections on the origin of a concept”, Science, Technology, and Human Values, Vol. 35 No. 5, pp. 601-617.

Star, S.L. and Griesemer, J.R. (1989), “Institutional ecology, ‘translations’ and boundary objects: amateurs and professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39”, Social Studies of Science, Vol. 19 No. 3, pp. 387-420.

Strasser, B.J., Baudry, J., Mahr, D., Sanchez, G. and Tancoigne, E. (2019), “‘Citizen science’? rethinking science and public participation”, Science and Technology Studies, Vol. 32 No. 2, pp. 52-76.

Sundin, O. (2011), “Janitors of knowledge: constructing knowledge in the everyday life of Wikipedia editors”, Journal of Documentation, Vol. 67 No. 5, pp. 840-862.

Surowiecki, J. (2004), The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economics, Society and Nations, Little Brown, London.

Swanson, A., Kosmala, M., Lintott, C. and Packer, C. (2016), “A generalized approach for producing, quantifying, and validating citizen science data from wildlife images”, Conservation Biology, Vol. 30 No. 3, pp. 520-531.

Swedish University of Agricultural Sciences (n.d.), “About Artportalen, the Swedish species observation system - Artportalen”, available at: https://artportalen.se/Home/About (accessed 16 March 2021).

Talja, S. and Maula, H. (2003), “Reasons for the use and non-use of electronic journals and databases: a domain analytic study in four scholarly disciplines”, Journal of Documentation, Vol. 59 No. 6, pp. 673-691.

Taylor, R.S. (1986), Value-Added Processes in Information Systems, Ablex, Norwood, NJ.

Tuominen, K., Talja, S. and Savolainen, R. (2002), “Discourse, cognition, and reality: toward a social constructionist metatheory for library and information science”, in Bruce, H. and Fidel, R. (Eds), Emerging Frameworks and Methods: CoLIS 4: Proceedings of the Fourth International Conference on Conceptions of Library and Information Science, Libraries Unlimited, Seattle, WA, pp. 271-283.

Van House, N.A. (2002a), “Trust and epistemic communities in biodiversity data sharing”, Proceedings of the 2nd ACM/IEEE-CS Joint Conference on Digital Libraries, Association for Computing Machinery, New York, NY, pp. 231-239.

Van House, N.A. (2002b), “Digital libraries and practices of trust: networked biodiversity information”, Social Epistemology, Vol. 16 No. 1, pp. 99-114.

Watson, M. (2017), “Placing power in practice theory”, in Hui, A., Schatzki, T.R. and Shove, E. (Eds), The Nexus of Practices: Connections, Constellations and Practitioners, Routledge, London, pp. 169-182.

Wiggins, A. and Crowston, K. (2011), "From conservation to crowdsourcing: a typology of citizen science", Proceedings of the 2011 44th Hawaii International Conference on System Sciences, IEEE Computer Society, Washington, DC, pp. 1-10.

Wiggins, A. and He, Y. (2016), “Community-based data validation practices in citizen science”, Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing, Association for Computing Machinery, New York, NY, pp. 1548-1559.

Wilson, P. (1983), Second-Hand Knowledge: an Inquiry into Cognitive Authority, Greenwood, Westport, CT.

Acknowledgements

The author would like to thank the participants of the study for their time and effort and his supervisors Professor Ola Pilerot and Senior Lecturer Veronica Johansson at the University of Borås, Sweden, for valuable feedback. The author would also like to thank Senior Lecturer Rachel Pierce at the University of Borås, Sweden, for proof-reading participant quotations.

Corresponding author

Björn Ekström can be contacted at: bjorn.ekstrom@hb.se

About the author

Björn Ekström is a doctoral student in Library and Information Science at the Swedish School of Library and Information Science, University of Borås, Sweden. His main research interests include information practices, citizen science and digital methods.

Related articles