Abstract
Purpose
This paper illustrates how Guba and Lincoln's parallel criteria for establishing trustworthiness can be adapted and applied to qualitative research on indigenous social protection systems. It offers social protection researchers insights into plausible alternative criteria for evaluating the rigor of qualitative research.
Design/methodology/approach
The paper draws on qualitative evidence from a larger ethnographic study on the dynamics of indigenous social protection systems in Nigeria. It illustrates the systematic application of Guba and Lincoln's parallel criteria.
Findings
Available evidence from the study shows that Guba and Lincoln's parallel criteria are viable for establishing the trustworthiness of qualitative research on indigenous social protection systems. The criteria can facilitate credible and reliable research outcomes in research aimed at improving social protection policy and practice.
Research limitations/implications
Qualitative inquiries that draw on Guba and Lincoln's parallel criteria as evaluation criteria for trustworthiness can complement quantitative research on social protection. This makes it imperative to incorporate both in social protection research for a holistic system. How this can be done is beyond the scope of this paper and needs to be explored by future research.
Originality/value
While Guba and Lincoln's parallel criteria have been used in qualitative research in other contexts, their use has not been carefully examined in qualitative research on indigenous social protection systems. This paper is an attempt to fill this gap.
Citation
Enworo, O.C. (2023), "Application of Guba and Lincoln's parallel criteria to assess trustworthiness of qualitative research on indigenous social protection systems", Qualitative Research Journal, Vol. 23 No. 4, pp. 372-384. https://doi.org/10.1108/QRJ-08-2022-0116
Publisher
Emerald Publishing Limited
Copyright © 2023, Oko Chima Enworo
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
The importance of evaluating the quality of research cannot be overemphasized. Without rigor, argue Morse et al. (2002), research is worthless, becomes fiction and loses its utility. In like manner, research that lacks trustworthiness portends harmful consequences if adopted for policy purposes (Tierney and Clemens, 2010). Just as in quantitative research, qualitative research has criteria that can be used to evaluate its quality (Liamputtong, 2019). Indeed, the different models of and emphases on standards for legitimating qualitative research, and on the evaluation of qualitative research generally (see Onwuegbuzie et al., 2012), attest to the importance attached to the trustworthiness of research.
Qualitative research is often seen as rigorous if it is reliable and valid (Creswell, 2013; Morse et al., 2002; Morse, 2015a, b, 2018). In reality, however, the debate on rigor, including suggestions about its constituent elements, is alive and at the forefront of qualitative scholarship, often producing a plethora of terms and criteria that undermine rather than clarify the concept (Johnson and Rasulova, 2016; Morse et al., 2002; Morse, 2018).
As for the related concept, trustworthiness, some background information is vital for clarity. Guba and Lincoln addressed the debate on qualitative rigor by introducing a new perspective, new criteria and a new language for qualitative rigor in the 1980s. Prior to that time, quantitative researchers devalued qualitative research on the grounds that its methods lacked rigor and could not generate valid results (Morse, 2018). Lincoln and Guba's (1986) idea of scientific rigor comprises two components: the parallel criteria of trustworthiness and the unique criteria of authenticity. The first was referred to as “parallel criteria” because, as Lincoln and Guba (1986, p. 76) explained, “given a dearth of knowledge about how to apply rigor in the naturalistic paradigm, using conventional criteria as analogs or metaphoric counterparts was a possible and useful place to begin”. For the same reason, trustworthiness was used as a parallel term to rigor, with credibility as an analog to internal validity, transferability as an analog to external validity, dependability as an analog to reliability and confirmability as an analog to objectivity. The second component of rigor in Lincoln and Guba's (1986) expanded idea of the term, the “authenticity criteria”, lies beyond the scope of the current paper and is not delved into.
The development recounted above notwithstanding, the lack of clarity among qualitative researchers in the use of the concept of rigor prompted Morse (2015a, p. 1213) to suggest a return to the terminology of mainstream social science, “using rigor (rather than trustworthiness) and replacing dependability, credibility, and transferability with the more generally used reliability, validity, and generalizability.” A major implication of the foregoing is the connection between rigor, trustworthiness and the nature of meaning making in qualitative inquiry. For example, the language that describes, and the meanings attached to, the terminology for establishing and assessing rigor in qualitative research differ from those of traditional positivist studies (Tuckett, 2005). Thus, meaning making is a vital but complex element in qualitative research, and lapses in this regard ultimately cause aspects of rigor and trustworthiness to suffer (James and Mulcahy, 1999; Krauss, 2005).
The components of Lincoln and Guba's (1986) “parallel criteria”, also referred to as “the Four-Dimensions Criteria” or FDC (Forero et al., 2018, p. 2), namely credibility, dependability, confirmability and transferability, are elaborated below.
Credibility addresses whether the findings and judgments made by the researcher can be trusted and the extent to which they provide comprehensive and sensible interpretations of the data (Lincoln and Guba, 1986, cited in Hanson et al., 2019). The purpose is to establish confidence that the results (from the perspective of the participants) are true, credible and believable; this can be achieved through prolonged engagement, persistent observation, triangulation, peer debriefing, negative case analysis, referential adequacy and member checks (Guba and Lincoln, 1989; Forero et al., 2018).

Dependability is, basically, the ability to obtain the same results if the study were repeated, to the extent that all conditions are equal (Morse, 2015a). As such, the research process should be logical and transparent so that the process and procedures can be audited and traced, ensuring coherence across the methods and findings (Hanson et al., 2019). Dependability is attainable through credibility, triangulation, splitting data and duplicating the analysis, and use of an audit trail (Guba and Lincoln, 1989).

According to Liamputtong (2019, p. 20), “Confirmability attempts to show that the findings and the interpretations of the findings do not derive from the imagination of the researchers but are clearly linked to the data.” The purpose is to extend the confidence that the results of a study would be confirmed or corroborated by other researchers (Forero et al., 2018). The strategies to achieve this include triangulation and the audit trail (Guba and Lincoln, 1989).

The underlying intention behind transferability is to extend the degree to which the results of a qualitative inquiry can be generalized or transferred to other contexts or settings (Forero et al., 2018; Liamputtong, 2019). Based on Guba and Lincoln's criteria, the strategies to achieve this include thick description, purposive sampling and reflexivity. Furthermore, credibility and the three other criteria can also be addressed through the researcher's reflexive journal, or reflexivity (Lincoln and Guba, 1985).
Criticisms of Guba and Lincoln's (1989) argument abound (a few are highlighted in Morse et al., 2002). However, such criticisms have also been addressed, for instance in the article “RSVP: we are pleased to accept your invitation” (Lincoln and Guba, 1994). In fact, the paradigmatic shift heralded by Guba and Lincoln can be seen as work in progress, the relevance of which in contemporary scholarship is not in doubt (Morse, 2015a; Lincoln et al., 2018).
Moving on, research on indigenous social protection systems (known variously as “informal”, “traditional”, “community-based” and “non-state” systems), the neglected half of formal social protection in Africa, has only gained attention in recent years (Patel et al., 2012; Dafuleya, 2018; Noyoo and Boon, 2018; Mokomane et al., 2021). For example, researchers and policy-makers from Southern and Western Africa assembled in 2016 at an international workshop in Johannesburg, South Africa, and deliberated on the state of these indigenous systems, contending that they should constitute an important basis for the formulation of public policies in Africa. In Nigeria, social protection only gained prominence and a sense of direction, in terms of a clearly defined policy framework at the institutional level, with the ratification of a National Social Protection Policy (NSPP) in 2017 (Ministry of Budget and National Planning, 2017). Yet, in the NSPP, indigenous social protection systems are given a peripheral position. This is unsurprising and, as is the case in most African countries, rests on the grounds that these systems are considered fragile, rapidly declining, not as strong or effective as before, effective only in managing idiosyncratic risks as opposed to covariate shocks, and apparently providing only short-term solutions, among others (Balgah and Buchenrieder, 2010; Dafuleya, 2018). Thus, these systems are not well understood in relation to social protection in Nigeria (World Bank, 2019).
The foregoing, among others, explains the dearth of empirical research on indigenous social protection systems. Moreover, whereas the subject matter lies within the interpretive (constructivist) paradigm, only a few studies in Nigeria have explored the roles of indigenous social protection systems, whether in managing idiosyncratic (micro) risks or covariate (community-wide) shocks, using purely qualitative methods (Fabiyi and Oloukoi, 2013; Izugbara, 2017). Even where qualitative methods have been used in whole or in part for research on social protection, ensuring and evaluating trustworthiness has most often been either completely overlooked or scantily done (see, for example, Aiyede et al., 2015; Surajo et al., 2019). The concern with the quality of qualitative research on social protection is relevant in view of the adverse ripple effects that shocks, if not curbed, can have from one region to others.
In this article, I show how Guba and Lincoln's parallel criteria for establishing trustworthiness, also known as the “Four-Dimensions Criteria” (subsequently referred to as FDC), were adapted and applied in an ethnographic study of indigenous social protection systems in Nigeria. By so doing, I not only justify the importance of qualitative research, and of ethnographic methods in particular, for social protection research generally, but also show why the FDC is a viable qualitative research tool for research on indigenous social protection systems in particular. For both methodology and policy, the paper offers an opportunity for tool development, in terms of appropriate recognition and allocation of resources by policy-makers and research funders for wider use of ethnographic methods of qualitative research in social protection research (see also Loh, 2013). This novel application of the FDC to research on indigenous social protection systems will also serve as a guide to researchers in other developing countries for similar future studies.
These criteria have been used in other contexts of qualitative research in education (Lincoln and Guba, 1986), qualitative health research (Tuckett, 2005; Morse, 2015a; Forero et al., 2018), impact evaluation research in development consultancy (Johnson and Rasulova, 2016) and research on international marketing (Singh et al., 2021), among others; but this is the first time it has been used in the study of indigenous social protection systems.
In the next section, the study methodology is presented. Subsequently, the study findings, which involve a detailed explanation of the application of the FDC to the study in question, are presented. In the last section, methodological issues arising from these findings and their policy implications are discussed further, with pointers to possible areas of improvement.
Methodology
In this paper, I discuss the application of Guba and Lincoln's (1989) parallel criteria for ensuring trustworthiness in a wider study that explores the dynamics of indigenous social protection systems vis-à-vis formal systems in handling covariate shocks (with emphasis on floods) in Southeast Nigeria. Following the examples of Tuckett (2005), Morse (2015a) and Forero et al. (2018), I systematically adapted the criteria point by point, selecting the strategies that applied to the present study. Indeed, the beauty of Guba and Lincoln's criteria is that the range of strategies under each of the four quality criteria may be adapted by investigators to suit their qualitative inquiry (Shenton, 2004; Korstjens and Moser, 2018).
The full study adopted an ethnographic qualitative research design, using in-depth interviews (IDIs), focus group discussions (FGDs) and participant observation. This was justified on the grounds that the phenomena under study lie within the interpretive paradigm. Two communities, Umueze-Anam and Nzam, both in Anambra State in Southeast Nigeria, were selected purposively for their rural location, agrarian livelihoods, linguistic diversity, minority status, and history of marginalization and vulnerability to perennial flood disasters owing to their location in the lowlands of the Niger River. Study participants comprised three categories: key informants, namely seven staff of the state ministries and entities most relevant to social protection in the state; extremely poor (using United Nations Development Programme, 2019 and Oxford Poverty and Human Development Initiative, 2019 measures) and vulnerable community members (e.g. persons with severe physical disability and the poor aged), numbering 38 in all; and 11 indigenous community-based associations. On completion of the data collection, the audio-recordings of all interviews and group discussions were fully transcribed. Thereafter, data were analyzed using Braun et al.'s (2019, p. 852) “six-phase” reflexive thematic analysis (TA) approach.
In addition, ethical approval for the study was obtained from the Faculty of Humanities Research Ethics Committee, University of Pretoria. Next, the study findings with respect to the stated objectives are discussed.
Results
A detailed description of each of the components of the FDC, including the application of particular strategies in the study, is provided in this section.
Credibility
For this criterion, the following strategies were applied:
Prolonged engagement
For the present study, I spent two months in each of the two study sites. Most of the initial key informant interviews were held prior to travelling to the fieldwork sites, while two sessions were held concurrently in the course of the fieldwork in the second community, which is semi-urban and had a better access road to the state capital, where the key informant interviews were held. Both for observations and interviews, this strategy gave participants time to get to know the researcher better through participation in community life and events, which increased their trust and intimacy and spurred them to provide richer data through numerous revelations. Thus, through prolonged engagement with the locals, I gradually transformed from an “outsider” … to more of an “insider” with whom participants were comfortable initiating small talk (Hung and Min, 2020, p. 117). Prolonged engagement at a site is “to test biases and perceptions of both inquirer and respondents and to provide time to identify salient characteristics of both the context and the problem” (Guba and Lincoln, 1982, p. 377; Lincoln and Guba, 1986).
Persistent observation
In this study, observation entailed attending the meetings, events or activities of selected community associations. It was a channel for gathering more data about indigenous social protection systems in natural settings, since it was not feasible to get a practical understanding of certain group practices through the FGDs. The aim was to identify the patterns and functions of these associations generally, including the extent of inclusion of vulnerable groups, and particularly in relation to dealing with floods, and thereby to identify the opportunities these indigenous systems present for handling the problem of floods vis-à-vis formal social protection systems. The indigenous community-based social protection systems involved in the two study sites include the following: town unions (TUs), which are community development associations; social clubs, which are mutual aid associations; age grades, which serve as community support networks; rotating savings and credit associations (ROSCAs), which are also mutual aid associations; and indigenous women's organizations, comprising married women. I was a participant-observer in most events, except those of the TUs and married women's organizations.
Generally, each observation lasted between 2 and 10 hours, often with breaks in between. For both the participant observation sessions and the complete observer sessions, no notes were taken on the spot. First, this would have been difficult, for example during farm work and burial ceremonies. Second, it could have altered the behavior of those being observed. Instead, I recorded the events observed later in the day, mostly at my rented apartment after leaving the site. A field notebook was used to document field observations, intuitions and clarifications. Mott (2022) applied this strategy as well in her ethnographic study, arguing that taking notes on the spot would hinder the ability to build rapport and deep relationships with the people she encountered. In addition, the number of observations made in each community was limited by the number of community-based associations' events. The relevance of the persistent observation strategy lies in the “in-depth pursuit” of those elements identified as salient through prolonged engagement (Lincoln and Guba, 1986, p. 77), in order to gain a high degree of acquaintance with pervasive qualities and salient characteristics and be able to eliminate those which are irrelevant (Guba and Lincoln, 1982).
Debriefing
Debriefing, which is an opportunity to present one's findings, is intended to prevent bias and aid the conceptual development of the study, and is particularly useful for new investigators (Morse, 2015a). It can be achieved through various methods (Onwuegbuzie et al., 2008). I benefitted from this strategy. After data collection, debriefing sessions were held with the project supervisor and, later, with other scholars in my department. After receiving an extensive explanation and verifying a sample of the interview transcripts and other texts (the referential adequacy materials) collected in the course of the study, the project supervisor made practical suggestions based on experience as a seasoned researcher. Other colleagues also made vital suggestions on how the work could be improved. These inputs improved the quality of the research report.
Collection of referential adequacy materials
Referential adequacy materials refer to materials collected during the study and archived without analysis, but which can later be utilized by the inquirer or others, to test interpretations made from other analyzed data (Guba and Lincoln, 1982). During the data collection for this study, I collected additional relevant documents from government agencies and a community leader in one of the communities. These include newsletters, policy documents, periodicals and books about the study areas. The documents contain additional information in relation to the context of the study and were utilized later to confirm the accuracy of the interpretations made from the analyzed data. These materials were kept for future reference (see Forero et al., 2018, p. 6 for a similar practice in a qualitative study).
Dependability
The strategies that applied to this study are discussed as follows:
Rich description
By the term thick (or rich) description, Guba and Lincoln (1982) meant providing enough information about a research context; such description is a fundamental feature of qualitative inquiry (Goodwin and Horowitz, 2002). It includes being clear about the purpose of the project and about the type of questions and data that will best meet the research goal (see Morse, 2018). These requirements were complied with from the proposal stage, during which greater transparency and logic were built into the research; the data collection instruments were therefore appropriate and gave participants the opportunity to provide in-depth responses relevant to the research question (see Hanson et al., 2019). Throughout the design of the instruments, there was continual reflection on the questions: “What assurances can we offer to policy studies, methodologies, and academic research … Are we even asking the right questions?” (Wolgemuth et al., 2018, p. 7). In this regard, the Faculty of Humanities Research Ethics Committee, University of Pretoria, was very instrumental throughout the process of ethical clearance for the study, pointing out aspects of the research that needed further elaboration or clarification. This process facilitated appropriate method-related choices, including the selection of an adequate and appropriate sample, which in turn guaranteed theoretical saturation, as argued below, all contributing to the generation of empirical data that fulfil the requirement of being rich (Morse, 2015a). The rich description strategy is further reflected in the application of some of the principles of micro-interlocutor analysis, including frequency data for themes that emerged from members of focus groups (see Onwuegbuzie et al., 2009). Therefore, the same results could be obtained if the study were to be repeated.
Member checks
As a tool for establishing dependability, member checking answers the question: “Does the researcher understand/interpret the participant correctly?” (Morse, 2015a, p. 1219). If the researcher does not understand events accurately, the analysis will be “unstable and the results cannot be repeated” to get the same results (Morse, 2015a, p. 1219). Hence, to avoid this pitfall, I conducted member checking in the process of data collection to check data between participants in the manner suggested by Morse (2015a, p. 1218), thus: “some people tell me [so and so]. Do you see it from that angle?” This was not done with the original participants but with others, i.e. subsequent respondents, both among the government officials and among members of the selected communities, particularly in the in-depth interviews. This strategy determined normative patterns of behavior and ensured that the findings reflect the full depth and scope of the participants' experiences and perspectives; hence, dependability was achieved (see Morse, 2015a; Morse, 2018; Hanson et al., 2019). In addition, it facilitated full participation by respondents in the research process, enabling them to co-construct the research outcomes together with me, the researcher (Livari, 2017).
Confirmability
The strategies applicable to this study are triangulation and reflexivity (Guba and Lincoln, 1982). These are explained below.
Triangulation
In this study, I applied within-method triangulation, in which two or more approaches are combined within one method (Flick, 2018). Practically, individual in-depth interviews, group interviews and observations were combined in an ethnographic approach (see Flick, 2018, p. 795). This form of triangulation may enhance credibility, as it is used to expand understanding (Morse, 2015a). Data source triangulation (Creswell, 2013; Flick, 2018; Vogl et al., 2019) was also applied, in that the sources of the data were different categories of vulnerable populations, government establishments and the different groups that constitute indigenous social protection systems in two study settings with different socio-cultural features. Triangulation is particularly relevant for studying social problems, in this case flooding, and social justice for the vulnerable (Flick, 2018). Triangulation therefore offered a broad source of data for a holistic understanding of social protection in Nigeria generally, and of how indigenous social protection systems in particular handled covariate shocks. Hence, the results of the study can be trusted. Generally, the use of a single technique does not guarantee that a qualitative study is rigorous (Hanson et al., 2019).
Reflexivity
Guba and Lincoln (1982, p. 379) define practicing reflexivity as “attempting to uncover one's underlying epistemological assumptions, reasons for formulating the study in a particular way, and heretofore implicit assumptions, biases or prejudices about the context or problem.” They suggested keeping a reflexive journal in the field as the most appropriate means for the practice of reflexivity. For the purposes of this study, I kept a reflexive journal and applied reflexivity as a way of clarifying researcher bias. For instance, I had anticipated massive rural-urban migration of working-age persons occasioned by the menace of flooding in the first study site, but this was not the case. Careful documentation of some of my assumptions, and efforts to avoid biases and prejudices arising from the insider-outsider status dichotomy, contributed to the overall success of the fieldwork. This was reflected, for instance, in the scheduling of the interviews to suit the participants given the nature of their livelihoods, in the way I applied probes during interviews, and in my participation in the daily activities of the communities during the participant observations. To further show that the findings of the study do not derive from my imagination, quotations from the transcripts that support the findings are provided, as suggested by Hanson et al. (2019).
Transferability
To avoid repetition, I followed the example of Forero et al. (2018) by adapting and applying the sampling strategy, on the argument that this can increase transferability (see also Hanson et al., 2019, p. 1017).
Sampling techniques (purposive and snowball sampling)
The sampling techniques applied in this study have the potential to extend the degree to which the results of the study can be transferred to other contexts or settings. For instance, key informants for the study were selected through purposive sampling and comprised staff of the ministries and entities most relevant to formal social protection in Anambra State who had also acquired in-depth knowledge of indigenous social protection systems in the state, both as residents and in the course of their official duties.
Similarly, through the use of snowball sampling, I was able to access the most relevant individuals (the extremely poor and most vulnerable) and groups (functional indigenous social protection systems) in relation to social protection in the study areas. Thus, the most appropriate participants formed the sample for the study. Indeed, the peculiar circumstances of such persons make them accessible mostly through networks. The community-based associations were eleven (11) in number and were selected on the basis of their provision of social protection or assistance to their members. Such associations were indigenous arrangements based on communal cultural values, and the members of these groups selected [purposively] as participants in the group discussions were predominantly poor farmers who had lived and worked in the community for at least 10 years and had experienced major flooding disasters in the community.
Saturation
To further ensure that the results of this qualitative inquiry can be transferred to other contexts or settings, effort was made to achieve saturation. Theoretical saturation was achieved as follows. I conducted 7 in-depth interviews with key informants, 36 in-depth interviews with different categories of vulnerable community members in the two study sites and 11 FGDs. Throughout, I constantly reviewed the findings made so far in the field through the content of statements made during concurrent data collection and analysis, with emphasis on the themes that stood out in response to the research questions and how these linked to the theoretical framework for the study and the literature. In the process, I observed that there were enough in-depth data showing the patterns, categories and variety of the phenomena under study (see also Morse, 2015b; Moser and Korstjens, 2018). At that point, I considered whether sampling might be ended because of saturation. However, 2 more interviews were carried out to confirm that saturation had been reached, bringing the total number of in-depth interviews with the vulnerable to 38. Also, as inherent in the design of the study, I used multiple groups to assess whether the themes that emerged from one group also emerged from other groups, in a sort of across-group saturation check (see Onwuegbuzie et al., 2009).
It became clear that no new analytical information was emerging and that the findings provided maximum information on the subject matter of the research. Therefore, sampling was ended and the sample size considered sufficient (see Malterud et al., 2016; Moser and Korstjens, 2018). Theoretical saturation was, therefore, reached. Theoretical saturation was applied in this research because it was an ethnographic qualitative inquiry, as opposed to code or meaning saturation, which would have been the case in health sciences and public health research (Hennink et al., 2017, 2019). All in all, the information power of the sample took priority over its size (Malterud et al., 2016). Understandings and applications of saturation in qualitative research vary (Morse, 2015b; Malterud et al., 2016; Saunders et al., 2018; Sebele-Mpofu, 2020), yet they are guided by parameters such as study purpose, population, sampling strategy, data quality and the researcher's skill (Morse, 2015b; Hennink et al., 2017). Thus, researchers need to be clear on the type of saturation they claim to have achieved, because saturation is not unidimensional; it can be assessed (or achieved) at different levels, either by individual constructs or by overall study saturation (Hennink et al., 2017).
The application of the criteria is summed up in Table 1 below.
Conclusion
To recapitulate, Guba and Lincoln's parallel criteria for evaluating rigor, termed trustworthiness in qualitative research, remain among the most extensively used over the years. These criteria, also referred to as the Four-Dimensions Criteria (FDC), consist of credibility, dependability, confirmability and transferability, respectively equivalent to the quantitative criteria of internal validity, reliability, objectivity and external validity or generalizability.
Given the dearth of research on indigenous social protection in the context of a recently developed social protection policy in Nigeria, the specific objective of this article was to show the applicability of Guba and Lincoln's parallel criteria for establishing trustworthiness in qualitative research on social protection, drawing empirical data from a wider qualitative inquiry in Southeast Nigeria. The criteria were applied following the examples of Tuckett (2005), Morse (2015a) and Forero et al. (2018), adapting them point by point and systematically selecting the strategies that applied to the present study. Being closely engaged with their cases, qualitative researchers typically adapt existing theories or make new conceptual distinctions or theoretical arguments to accommodate new data (Goodwin and Horowitz, 2002). This is the first time the FDC has been applied in this manner in a study of [indigenous] social protection systems. It will, among other things, provide guidance to researchers and policy-makers.
On the basis of ethnographic fieldwork data from the study sites, the paper has shown how the four components of the FDC, and particular strategies under each component, were successfully applied to the study in question, and it concludes that these criteria increased trustworthiness. As Daniel (2018) has correctly observed, while trustworthiness enhances the understanding and interpretation of research findings and enables others to establish a level of confidence in the quality of an investigation, establishing trustworthiness in qualitative research does not imply subscribing to one unified ontology or embracing a universal epistemology, but rather demonstrating an acceptable degree of integrity in the process and outcome of the study. It is this acceptable level of integrity in the process and outcome of the study in question that I have demonstrated in this paper.
In all, based on the qualitative (ethnographic) data that informed this paper, the argument is that social protection researchers can engage in a more rigorous qualitative research process to facilitate the trustworthiness of their research by using Guba and Lincoln's parallel criteria as enunciated above. In addition, the criteria hold great potential for social policy, given the purpose of each of the strategies adopted for this particular inquiry as explained in Table 1. In their study of regulatory practices in the private health care sector of Bangladesh, Rahman and Caulley (2007) incorporated some of Guba and Lincoln's criteria and found qualitative research methods appropriate for doing health research and evaluation, as they provided a platform to gain in-depth insights into policy issues. Thus, based on the application of the FDC to assess the rigor of the qualitative research used for this study, some methodology-related issues in current social protection policy and programmes are discussed below, with an elaboration of the wider implications.
First, I argue that formal social welfare in most of Africa, and in Nigeria in particular, remains an elite project devoid of the norms of equality and solidarity vital for holistic development. This is reflected, for example, in the relegation of the potential of indigenous social protection systems. Appropriate mainstreaming of indigenous social protection systems into the social protection space in Nigeria and other African countries is needed to widen social protection coverage, which is vital for poverty reduction. This is where the role of qualitative inquiries that draw on the FDC as evaluation criteria for trustworthiness becomes relevant, to support a social protection research space dominated by quantitative data, which often fails to uncover social phenomena amenable only to certain qualitative strategies. Thus, linking formal and indigenous social protection systems is vital, but it should be done in a way that facilitates research designs that embed the FDC in qualitative research methods. This could form a topic for further exploration by researchers.
Methodologically, the one-off cross-sectional survey using questionnaires or in-depth interview guides that is adopted in most social protection research has shown its limitations, yielding only a shallow understanding of people's subjective wellbeing and dimensions of social vulnerability, and of how to properly target and select beneficiaries for formal social assistance. Often, faulty assumptions about programme success have been made by using purely quantitative research methods or faulty qualitative research methods, with adverse impacts on social policies and programmes. This further underscores the relevance of the methodology advocated in this paper.
Research that lacks trustworthiness portends harmful consequences if adopted for policy purposes, as pointed out by Tierney and Clemens (2010). Guba and Lincoln's parallel criteria for establishing trustworthiness are plausible for qualitative research on social protection, and on indigenous social protection systems in particular, in countries with low human development outcomes similar to Nigeria's. Recognition of this fact, and prioritizing research funding in that regard, can guarantee sustainable development.
Table 1. Trustworthiness

| Criteria | Purpose | Original strategies | Strategies applied in this study to achieve trustworthiness |
|---|---|---|---|
| Credibility | To show that the research process was done with integrity and the final results of the study can be trusted | Prolonged engagement | Prolonged engagement |
| | | Persistent observation | Persistent observation |
| | | Triangulation | (See confirmability below) |
| | | Peer debriefing | Debriefing |
| | | Negative case analysis | Not applicable |
| | | Referential adequacy | Collection of referential adequacy materials |
| | | Member checks (process and terminal) | (See dependability below) |
| Dependability | To show that the research was sound in all ramifications and the outcome will be similar should it be repeated | Use of “overlapping methods” (triangulation) | Rich description |
| | | “Stepwise replication” (splitting data and duplicating the analysis) | Member checks (process, not terminal) |
| | | “Inquiry audit” or audit trail | Not applicable |
| Confirmability | To show confidence in the fact that the research findings are based on data generated and are verifiable | Triangulation | Triangulation |
| | | Reflexivity | Reflexivity |
| | | Audit trail | Not applicable |
| Transferability | To show that findings from the study can be transferred to other contexts or settings | Thick description is essential for “someone interested” to transfer the original findings to another context or individuals | Sampling techniques |
| | | | Saturation |

Source(s): Adapted from Guba and Lincoln (1989)
References
Aiyede, E., Sha, P., Haruna, B., Olutayo, A., Ogunkola, E. and Best, E. (2015), The Political Economy of Social Protection Policy Uptake in Nigeria, Working Paper No. 002, Partnership for African Social and Governance Research, Nairobi.
Balgah, R.A. and Buchenrieder, G. (2010), “The dynamics of informal responses to covariate shocks”, Journal of Natural Resources Policy Research, Vol. 2 No. 4, pp. 357-370.
Braun, V., Clarke, V., Hayfield, N. and Terry, G. (2019), “Thematic analysis”, in Liamputtong, P. (Ed.), Handbook of Research Methods in Health Social Sciences, Springer Nature Pte, Singapore, pp. 843-860.
Creswell, J.W. (2013), Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 3rd ed., Sage, CA.
Dafuleya, G. (2018), “(Non) state and (In)formal social protection in Africa: focusing on Burial Societies”, International Social Work, Vol. 61 No. 1, pp. 156-168.
Daniel, B.K. (2018), “Empirical verification of the ‘TACT’ framework for teaching rigour in qualitative research methodology”, Qualitative Research Journal, Vol. 18 No. 3, pp. 262-275.
Fabiyi, O.O. and Oloukoi, J. (2013), “Indigenous knowledge system and local adaptation strategies to flooding in coastal rural communities of Nigeria”, Journal of Indigenous Social Development, Vol. 2 No. 1, pp. 1-19.
Flick, U. (2018), “Triangulation”, Denzin, N.K. and Lincoln, Y.S. (Eds), The Sage Handbook of Qualitative Research (5th ed.), Sage, Thousand Oaks, pp. 777-804.
Forero, R., Nahidi, S., De Costa, J., Mohsin, M., Fitzgerald, G., Gibson, N., McCarthy, S. and Aboagye-Sarfo, P. (2018), “Application of four-dimension criteria to assess rigour of qualitative research in emergency medicine”, BMC Health Services Research, Vol. 18 No. 120, pp. 1-11.
Goodwin, J. and Horowitz, R. (2002), “Introduction: the methodological strengths and dilemmas of qualitative sociology”, Qualitative Sociology, Vol. 25 No. 1, pp. 33-47.
Guba, E. and Lincoln, Y.S. (1989), Fourth Generation Evaluation, Sage, Newbury Park, CA.
Guba, E.G. and Lincoln, Y.S. (1982), “Epistemological and methodological bases for naturalistic inquiry”, Educational Communications and Technology Journal, Vol. 30 No. 4, pp. 363-381.
Hanson, C.S., Ju, A. and Tong, A. (2019), “Appraisal of qualitative studies”, in Liamputtong, P. (Ed.), Handbook of Research Methods in Health Social Sciences, Springer Nature Pte, Singapore, pp. 1013-1026.
Hennink, M.M., Kaiser, B.N. and Marconi, V.C. (2017), “Code saturation versus meaning saturation: how many interviews are enough?”, Qualitative Health Research, Vol. 27 No. 4, pp. 591-608.
Hennink, M.M., Kaiser, B.N. and Weber, M.B. (2019), “What influences saturation? Estimating sample sizes in focus group research”, Qualitative Health Research, Vol. 00 No. 0, pp. 1-14, doi: 10.1177/1049732318821692.
Hung, A.H. and Min, A.M. (2020), “‘I'm afraid’: the cultural challenges in conducting ethnographic fieldwork and interviews in Myanmar”, Qualitative Research Journal, Vol. 21 No. 2, pp. 113-123.
Izugbara, C. (2017), “Livelihoods and Associational Life Among Rural Older Igbo Persons in Southeastern Nigeria”, Unpublished Doctoral Thesis, University of Pretoria, Pretoria, SA.
James, P. and Mulcahy, D. (1999), “Meaning-making in qualitative research: issues of rigour in a team-based approach”, AVETRA Conference Papers, available at: http://www.academia.edu/52697934/Meaning_Making_in_Qualitative_Research_Issues_of_Rigor_in_a_Team_Based_Approach
Johnson, S. and Rasulova, S. (2016), “Qualitative impact evaluation: incorporating authenticity into the assessment of rigour”, Bath Papers in International Development and Wellbeing No. 45, Centre for Development Studies (CDS), University of Bath, Bath, pp. 1-42.
Korstjens, I. and Moser, A. (2018), “Series: practical guidance to qualitative research. Part 4: trustworthiness and publishing”, European Journal of General Practice, Vol. 24 No. 1, pp. 120-124.
Krauss, S.E. (2005), “Research paradigms and meaning making: a primer”, The Qualitative Report, Vol. 10 No. 4, pp. 758-770.
Liamputtong, P. (2019), “Qualitative inquiry”, in Liamputtong, P. (Ed.), Handbook of Research Methods in Health Social Sciences, Springer Nature Pte Ltd, Singapore, pp. 9-26.
Lincoln, Y.S. and Guba, E.G. (1985), Naturalistic Inquiry, 1st ed., Sage Publications Inc, Newbury Park.
Lincoln, Y.S. and Guba, E.G. (1986), “But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation”, New Directions for Program Evaluation, Vol. 30, pp. 73-84.
Lincoln, Y.S. and Guba, E.G. (1994), “RSVP: we are pleased to accept your invitation”, Evaluation Practice, Vol. 15 No. 2, pp. 179-192.
Lincoln, Y.S., Lynham, S.A. and Guba, E.G. (2018), “Paradigmatic controversies, contradictions, and emerging confluences, revisited”, Denzin, N.K. and Lincoln, Y.S. (Eds), The Sage Handbook of Qualitative Research (5th ed.), Sage, Thousand Oaks, pp. 213-263.
Livari, N. (2017), “Using member checking in interpretive research practice: a hermeneutic analysis of informants' interpretation of their organizational realities”, Information Technology and People, Vol. 31 No. 1, pp. 111-133.
Loh, J. (2013), “Inquiry into issues of trustworthiness and quality in narrative studies: a perspective”, The Qualitative Report, Vol. 18 No. 33, pp. 1-15.
Malterud, K., Siersma, V.D. and Guassora, A.D. (2016), “Sample size in qualitative interview studies: guided by information power”, Qualitative Health Research, Vol. 26 No. 13, pp. 1753-1760.
Ministry of Budget and National Planning (2017), National Social Protection Policy 2017, Ministry of Budget and National Planning, Abuja.
Mokomane, Z., Enworo, O.C. and Setambule, T. (2021), “The role of indigenous social protection systems in the sustainable development of contemporary African communities”, Filho, W.L., Pretorius, R. and de Sousa, L.O. (Eds), Sustainable Development in Africa, World Sustainability Series, Springer, Cham, pp. 75-89, doi: 10.1007/978-3-030-74693-3_5.
Morse, J.M. (2015a), “Critical analysis of strategies for determining rigor in qualitative inquiry”, Qualitative Health Research, Vol. 25 No. 9, pp. 1212-1222.
Morse, J.M. (2015b), “Data were saturated”, Qualitative Health Research, Vol. 25 No. 5, pp. 587-588.
Morse, J. (2018), “Reframing rigor in qualitative inquiry”, Denzin, N.K. and Lincoln, Y.S. (Eds), The Sage Handbook of Qualitative Research (5th ed.), Sage, Thousand Oaks, pp. 1372-1409.
Morse, J.M., Barrett, M., Mayan, M., Olson, K. and Spiers, J. (2002), “Verification strategies for establishing reliability and validity in qualitative research”, International Journal of Qualitative Methods, Vol. 1 No. 2, pp. 13-22.
Moser, A. and Korstjens, I. (2018), “Series: practical guidance to qualitative research. Part 3: sampling, data collection and analysis”, European Journal of General Practice, Vol. 24 No. 1, pp. 9-18.
Mott, K.L. (2022), “‘Hurry up and wait’: stigma, poverty, and contractual citizenship”, Qualitative Sociology, Vol. 45, pp. 271-290.
Noyoo, N. and Boon, E. (2018), “Introduction and background”, Noyoo, N. and Boon, E. (Eds), Indigenous Social Security Systems in Southern and West Africa, Sun Press, pp. 1-13.
Onwuegbuzie, A.J., Leech, N.L. and Collins, K.M.T. (2008), “Interviewing the interpretive researcher: a method for addressing the crises of representation, legitimation, and praxis”, International Journal of Qualitative Methods, Vol. 7 No. 4, pp. 1-17.
Onwuegbuzie, A.J., Dickinson, W.B., Leech, N.L. and Zoran, A.G. (2009), “A qualitative framework for collecting and analyzing data in focus group research”, International Journal of Qualitative Methods, Vol. 8 No. 3, pp. 1-21.
Onwuegbuzie, A., Leech, N.L., Slate, J.R., Stark, M., Sharma, B., Frels, R. and Combs, J.P. (2012), “An exemplar for teaching and learning qualitative research”, The Qualitative Report, Vol. 17 No. 1, pp. 16-77.
Oxford Poverty and Human Development Initiative (2019), Nigeria Country Briefing, Multidimensional Poverty Index Data Bank, Oxford Poverty and Human Development Initiative, University of Oxford, Oxford OX1 3TB.
Patel, L., Kaseke, E. and Midgley, J. (2012), “Indigenous welfare and community-based social development: lessons from African innovations”, Journal of Community Practice, Vol. 20 Nos 1-2, pp. 12-31.
Rahman, R.M. and Caulley, D.N. (2007), “Regulatory practices in the private health care sector of Bangladesh”, Qualitative Research Journal, Vol. 6 No. 2, pp. 67-88.
Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H. and Jinks, C. (2018), “Saturation in qualitative research: exploring its conceptualization and operationalization”, Qual Quant, Vol. 52, pp. 1893-1907.
Sebele-Mpofu, F.Y. (2020), “Saturation controversy in qualitative research: complexities and underlying assumptions. A literature review”, Cogent Social Sciences, Vol. 6 No. 1, 1838706, doi: 10.1080/23311886.2020.1838706.
Shenton, A.K. (2004), “Strategies for ensuring trustworthiness in qualitative research projects”, Education for Information, Vol. 22, pp. 63-75.
Singh, N., Mamoun, B., Meyr, E. and Ramazan, H.A. (2021), “Verifying rigor: analyzing qualitative research in international marketing”, International Marketing Review, Vol. 38 No. 6, pp. 1289-1307.
Surajo, A.Z., Musa, J. and Umar, A.S. (2019), “An integrated approach to social protection system in Kano State, Nigeria: a sustainable development linkage”, International Journal of Modern Trends in Social Sciences, Vol. 2 No. 6, pp. 48-62.
Tierney, G.W. and Clemens, R.F. (2010), Qualitative Research and Public Policy: The Challenges of Relevance and Trustworthiness, Center for Higher Education Policy Analysis, University of Southern California, Los Angeles.
Tuckett, A. (2005), “Part II: rigour in qualitative research- complexities and solutions”, Nurse Researcher, Vol. 13 No. 1, pp. 29-42.
United Nations Development Programme (2019), Human Development Report 2019: beyond Income, beyond Averages, beyond Today: Inequalities in Human Development in the 21st Century, UNDP, New York.
Vogl, S., Schmidt, E. and Zartler, U. (2019), “Triangulating perspectives: ontology and epistemology in the analysis of qualitative multiple perspective interviews”, International Journal of Social Research Methodology, Vol. 22 No. 6, pp. 611-624.
Wolgemuth, J.R., Koro-Ljungberg, M., Marn, T.M., Onwuegbuzie, A.J. and Dougherty, S.M. (2018), “Start here, or here, no here: introductions to rethinking education policy and methodology in a post-truth era”, Education Policy Analysis Archives, Vol. 26 No. 145, pp. 1-8.
World Bank (2019), “Advancing social protection in a dynamic Nigeria”, ASA P165426 Main Report, 7 August, The World Bank, Washington DC. Available at: https://documents1.worldbank.org/curated/en/612461580272758131/pdf/Advancing-Social-Protection-in-a-Dynamic-Nigeria.pdf
Acknowledgements
The author gratefully acknowledges the guidance and support of Prof. Zitha Mokomane, his doctoral thesis supervisor at the University of Pretoria.
Disclosure statement: No potential conflict of interest was reported by the author.