Abstract
Purpose
Presently the whole world has been experiencing the pandemic threat of coronavirus disease 2019 (COVID-19) and, at the same time, facing unprecedented changes in everything, including education. E-learning has evolved as the only alternative for knowledge transmission, even in third-world nations, and e-assessment has been playing an increasingly important role in this digital transformation of education. But how far, and how deeply, it has made its place in students' minds needs to be studied to leverage its full potential to transform students' learning. This study reports an investigation made in this direction.
Design/methodology/approach
An online survey consisting of 40 questions in Google Forms was conducted to collect primary data on the perception of e-assessment among 200 Indian students pursuing higher education in several geographical locations. A quantitative methodological approach was followed, and the data were analyzed descriptively and inferentially.
Findings
The results were analyzed based on the model of acceptance and usage of e-assessment (MAUE), and the findings revealed that students' overall perception of e-assessment was of a moderate level, varying with their gender, academic level, stream of study and economic condition. Of the eight domains investigated, students showed better perception in the perceived ease of use, compatibility, subjective norms and self-efficacy domains, while they scored poorly in the awareness, perceived usefulness, resource facilitation and information technology (IT) support domains. It became evident from their responses that COVID-19 was instrumental in enhancing their interest in e-assessment.
Social implications
The implication of this study lies in strengthening e-assessment by attending to the factors noted in the MAUE, in India and in similar developing nations where e-learning still has vast room to grow.
Originality/value
This is an empirical investigation conducted in India on the state of students' perception of e-assessment against the backdrop of the COVID-19 outbreak. The authors conducted an online survey, and the write-up of the findings focuses on the survey data only.
Citation
Kundu, A. and Bej, T. (2021), "Experiencing e-assessment during COVID-19: an analysis of Indian students' perception", Higher Education Evaluation and Development, Vol. 15 No. 2, pp. 114-134. https://doi.org/10.1108/HEED-03-2021-0032
Publisher
Emerald Publishing Limited
Copyright © 2021, Arnab Kundu and Tripti Bej
License
Published in Higher Education Evaluation and Development. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
John Dewey (1997) once said that mankind likes to think in terms of extreme opposites when formulating its beliefs. Kucey and Parsons (2012) said the pendulum of educational philosophy has been swinging back and forth between traditional and progressive, between novel and ancient, and between radical and conservative. New theories arrive replacing the old ones, and then they in turn get replaced by newer ones; this is a universal cyclical process. E-assessment is one such progressive entry into the hegemony of the conservative paper–pencil mode of assessment. Technological developments and their simultaneous penetration into the educational process in the 21st century have made this a novel way of evaluating student learning and providing feedback (Kundu, 2018a; Kundu et al., 2020). "E-assessment" is the use of digital technologies to create, dispense, evaluate and deliver feedback for formative, summative, diagnostic or self-assessment (Kocdar et al., 2018). It can deliver a greater variety of assessments, capturing a wider range of skills and attributes that are not straightforwardly assessed by traditional methods (De Villiers et al., 2016). E-assessment, alternatively known as electronic assessment, computer-based assessment, digital assessment or online assessment, is the use of information and communication technology (ICT) in assessment (Electronic Assessment, 2020, 3rd July). Osuji (2012) said e-assessment could be conceptualized as the use of ICTs to facilitate the entire assessment process, from designing and delivering assignments to marking, reporting, storing the results and/or making statistical analyses. Crisp (2011) used the term e-assessment to refer to all assessment tasks conducted through a computer, a digital tool and/or the web. The advancement of technology and e-learning systems has put e-assessment in high demand (Brink and Lautenbach, 2011). Kuh et al. (2014) said a few higher-level cognitive or affective skills are very difficult to assess using traditional selected-response formats, which necessitates the use of e-assessments as an alternative. Whitelock (2009) said e-assessment is a new assessment paradigm and methodology that has been playing a progressively important role in the transformation of higher education. UNESCO (2020) also recommended the use of ICT to deliver traditional assessment formats more effectively and efficiently and to measure the latest and softer nuances of modern competencies.
The organizational use of technology changes the roles and relationships of teachers and students (Kundu, 2018b). Yet planning efficient e-assessment techniques at the university level remains a matter of concern, with students as a major stakeholder (Davidson and McKenzie, 2009). In this context, their perception, attitude and alertness are major determinants of whether an effective culture of e-assessment can flourish (Sorensen, 2013). Jordan (2011) specifically highlighted the importance of considering and evaluating students' perceptions of e-assessment in several domains like awareness, ability, security, validity and teaching–learning. Besides, Dermo (2009) found that while there is a profusion of research on teachers' awareness of and attitudes toward e-assessments, there is very little on students' views of them. There is hardly a study amid coronavirus disease 2019 (COVID-19), an instance of human helplessness in the face of an enormous natural disaster that has brought online education and e-assessment immense popularity (Kundu and Bej, 2021). The practice is altogether new, still in a honeymoon period, for the majority of Indian students. The current investigation targeted this research gap to delve deep into students' perception, acceptance and awareness of e-assessment. The specific research questions set for this study were:
What was Indian higher education students' perception of e-assessment?
How far were students apprised of the e-assessment mechanism?
How far did COVID-19 influence students' perception of e-assessment?
Did the perception of e-assessment vary depending on students' gender, academic level, stream of study and economic condition?
E-assessment
Long ago, Rowntree (1987) said an institution is known by its assessment policies and practices, which have a profound influence on students' activities and achievements. Boud (1995) said students can escape poor teaching, but they cannot escape the aftereffects of poor assessment. Thus, assessment is unavoidable in the learning process, through which students demonstrate their abilities and are apprised of the support needed for further progress (Amelung et al., 2011; Gikandi et al., 2011). Brink and Lautenbach (2011) defined assessment as a vital process in which teachers collect and interpret information from students with the final goal of assigning a grade. Its potential is unquestionable for both assessing students' progress and certifying it (Lafuente et al., 2014; Clements and Cord, 2013).
With the introduction of the World Wide Web in the 1990s, a drastic change took place across the globe in several sectors, and the education sector was no exception (Kundu, 2020b). Alruwais et al. (2018) said ICT has been a blessing and a vital support for education ever since Sidney Leavitt Pressey designed a mechanical machine for automatic testing. Following this line of research, the latest addition is e-assessment, which means using ICT to manage and deliver assessment in every form – diagnostic, summative or formative (Howarth, 2015). Crisp (2011) defined e-assessment as a unique process of recording students' responses and providing feedback. Cook and Jenkins (2010) described e-assessment as "end-to-end electronic assessment processes" where "ICTs are used for the presentation of assessment activity and the recording of responses" (p. 34). This means that ICTs are absolute necessities for e-assessment, making it a part of the e-learning process as opposed to traditional ways of assessing students' progress (Howarth, 2015). Several studies have researched the principles pertaining to e-assessment. Brink and Lautenbach (2011) said it needs to be authentic, consistent and transparent, with an accurate and practical support system. Baartman et al. (2007) claimed it should be challenging, reflect real-world situations and depict the skills needed in real life. Tinoca (2012) stressed its practicability or feasibility, since e-assessment requires numerous resources like time, cost, training, equipment, expertise and an apposite attitude.
Advantages of e-assessment
ICTs have brought the globe into a room and made functioning easier and faster (Kundu, 2020a). Similarly, e-assessment has made assessment more developed, accurate and faster compared to traditional, time-consuming paper-based measures (Hillier, 2014; Stodberg, 2012; De Villiers et al., 2016). Several studies have reported many advantages of e-assessment, such as providing more accessible, flexible, efficient and convenient assessment experiences for learners, teachers and institutions (Attia, 2014; Sorensen, 2013; Pedersen et al., 2012; De Villiers et al., 2016; Crisp et al., 2016); being fast and easy to use (Eljinini and Alsamarai, 2012); providing students more control, friendly interfaces and recreational experiences (Ridgway et al., 2004; Way, 2010; Williams and Wong, 2009); providing immediate feedback compared with paper-based tests, which improves the level of learning (Gilbert et al., 2011; Way, 2010); increasing students' motivation to enhance their performance, as in a study at the University of Winchester (Gilbert et al., 2011); saving teachers' time (Gilbert et al., 2011; Ridgway et al., 2004; Eljinini and Alsamarai, 2012; Sorensen, 2013; Gikandi et al., 2011); adding more value to students' learning (Sorensen, 2013); saving valuable time for institutions (Gilbert et al., 2011; Ridgway et al., 2004; Way, 2010; Donovan et al., 2007; Sorensen, 2013; Gikandi et al., 2011); easing the tracking of students' performance (Ellaway and Masters, 2008); making assessment fast and accurate (Ridgway et al., 2004; Way, 2010); reducing teachers' burden in assessing large numbers of students (Nicol, 2007); helping teachers improve the quality of feedback to students (Ridgway et al., 2004; Way, 2010); adding a higher level of security to assessment procedures (Sorensen, 2013); and finally supporting higher-order thinking skills such as critiquing, reflecting on cognitive processes and facilitating group-work projects (Ridgway et al., 2004).
Disadvantages of e-assessment
However, despite its many advantages, e-assessment is not completely free from disadvantages that curb its wide application and acceptance (Bacigalupo et al., 2010). Several past studies pointed out such disadvantages. Isaías and Issa (2013) talked about the lack of institutional commitment that most institutions show; Whitelock and Brasher (2006) pointed out the lack of confidence among students and teachers, mainly due to their lack of computer efficacy; Bacigalupo et al. (2010) found the lack of student motivation to be a disadvantage; and Redecker et al. (2012) found teachers' doubts about its effectiveness and unbiasedness to be a disadvantage. Mason (2014) made a very significant point in saying that computer distraction is a disadvantage of e-assessment, affecting students' achievement as a whole. Whitelock (2009) found the lack of feedback to be a potential disadvantage. Kocdar et al. (2018) found certain challenges pertaining to the verification, authorship and authentication of students participating in e-assessment, and Xu and Mahenthiran (2016) found cheating and plagiarism to be grave concerns associated with e-assessment compared to traditional paper-based assessment. Several other studies also argued that cheating and plagiarism are easier and more frequent in e-assessment (Pedersen et al., 2012; Kocdar et al., 2018; Bartley, 2005; Rowe, 2004; Gathuri et al., 2014; Mellar et al., 2018; Hillier, 2014). Apampa et al. (2011), Bartley (2005) and Mellar et al. (2018) categorized cheating and plagiarism in e-assessment as impersonation, taking materials into exams, looking at others' answers and ghostwriting. Kocdar et al. (2018) and Dermo (2009) unequivocally said such malpractices undoubtedly affect the validity and reliability of e-assessments.
Context analysis
India has been watching the success of the West in adopting e-learning, which allows students and teachers to learn on their own terms, and is trying hard to accommodate it to achieve the target of making education accessible to every corner of the country (Kundu and Dey, 2018). The country is set on transforming itself into a digitally empowered knowledge society, and in this direction several initiatives have been taken to integrate ICT from the school curriculum to the higher education curriculum (Kundu, 2018b). Currently, India's higher education system is the largest in the world, enrolling 37.4 million students, and in the last few years the gross enrollment ratio (GER) has increased from 25.8 in 2017–18 to 26.3 in 2018–19 (Nanda, 2019). Taking 2017–18 as a reference year, the GER in higher education of the US is the highest at 88.2%, followed by Germany (70.3%), France (65.6%), UK (60.6%), Brazil (51.3%), China (49.1%), Indonesia (36.4%) and India (25.8% in 2017–18, 26.3% in 2018–19), and the country is ambitious to raise the GER to 50% by 2035 through its much-anticipated National Education Policy 2020 (Sharma, 2020, 15th June). The new competence-based curriculum introduced in the National Education Policy (NPE, 2020) includes competencies such as collaboration, problem-solving, critical thinking, creativity and communication, as well as subject knowledge and skills (Kundu et al., 2020). Renewed interest has been placed on online education, which has been gaining traction across the country and will necessarily entail e-assessment (KPMG, 2017). In the meantime, the COVID-19 pandemic outbreak and the eventual country-wide lockdown since 16th March 2020 have pushed millions of students out of institutional education (Sanyal, 2020) and, despite a deep digital divide (NSS, 2018), have brought online learning and assessment vast popularity.
COVID-19, a threat to the conventional idea of classroom education, is a wake-up call to all concerned that online learning is the only option left amid the pandemic's devastation, and it has inspired corporates, governments and policymakers to prepare alternative backup emergency management plans, reducing the digital divide to deliver education online during periods of school closures (Kundu and Bej, 2021). Most schools, colleges and universities are in touch with their students over WhatsApp or email, are conducting classes on platforms such as Google Meet, Microsoft Teams and Zoom, and are planning to begin incorporating online exams as well (Time, 2020). All nation-wide "Mega Exams" are also being held online owing to lockdown and social distancing (Choudhary, 2020). The situation has made online education and e-assessment a mandatory requirement, pushing aside all hesitations and taboos against the e-revolution. A positive attitude among students and teachers makes e-learning and e-assessment more successful in improving student performance (Attia, 2014; Sorensen, 2013). Several past studies assessed students' opinions about e-assessment, but most of them belong to Western countries. For example, Donovan et al. (2007) found that 88.4% of students preferred e-assessment, and Gilbert et al. (2011) found that 92% of students agreed that e-assessment helped their learning. A similar study in the Indian context is a rarity. Against this backdrop, it is pertinent to assess students' perception of e-assessment – their awareness, ability, acceptance, and the benefits and complications they see in implementing this new assessment method – to make the process smooth, enhanced and student-centered.
Theoretical framework
Isaías et al. (2014) said digital technologies allow the construction of a virtual learning environment in which interaction and communication are encouraged. But King and Boyatt (2014) said effective online learning requires strategic leadership, pedagogical and technical support, and awareness of new responsibilities for both teachers and students. Hence, in this study, we adopted the "model of acceptance and usage of e-assessment (MAUE)" (see Figure 1), developed by Sadaf et al. (2012) on the basis of the technology acceptance model (TAM) of Davis (1985), to examine the acceptance and utilization of e-assessment by academics. The TAM shows that people's attitude toward and intention to use a technology are essential for its effective use. The MAUE divides the inducing factors into three broad determinants: attitude, subjective norm and perceived behavioral control. Attitude is further divided into three sub-factors – perceived usefulness, perceived ease of use and compatibility. Subjective norm addresses the impact of social influences like peer or supervisor influence. Perceived behavioral control means people's perception of the ease or difficulty of performing the behavior of interest, further divided into sub-factors like self-efficacy, resource-facilitating conditions and the system of information technology (IT) support. This study investigated several issues regarding students' perceived ease of use, like awareness, affective attachment, ability, validity, practicality, security and academic utility.
Methods of investigation
This study followed a descriptive quantitative methodological approach to assess the extent to which e-assessment has been adopted among Indian students and the domains that affect students' perception regarding its adoption. It used an online survey designed to collect relevant information, since Creswell (2008) noted that surveys are suitable strategies of inquiry within quantitative approaches. An online survey was used owing to the COVID-19 outbreak and the prolonged lockdown of educational institutions under strict social distancing measures. Besides, it is a time-saving, easy, effective and cost-effective method with the potential to reach geographically dispersed large sections of participants, and it ensures the anonymity of respondents (Wright, 2005).
Sites and participants
The target audience was higher education students from all over India, in order to get a holistic picture of the e-assessment scenario in higher education. To collect the necessary data, a survey questionnaire was framed in Google Forms, which has become a popular survey administration app encouraging paperless research work. The survey was disseminated among 200 higher education students pursuing bachelor's or master's degrees at different universities, belonging to different genders, four different streams of study (humanities, social science, science and law), economic backgrounds and geographical regions, and was translated into five regional languages besides English. The participants were selected via convenience sampling. Details of the demographic data of the participants are presented in Table 1.
Research tools
Online questionnaire
The researchers developed a questionnaire of 40 items (see Appendix I) to measure students' perception of e-assessment in eight different domains – awareness (items 1–5), perceived usefulness (items 6–10), perceived ease of use (items 11–15), compatibility (items 16–20), peer and superior influence (items 21–25), self-efficacy (items 26–30), resource facilitation (items 31–35) and IT support (items 36–40) – covering all important factors proposed in the MAUE model adopted as the conceptual framework for this study. The overall measured internal consistency (Cronbach's α) was 0.95, and the domain-wise internal consistencies were 0.86, 0.85, 0.78, 0.92, 0.85, 0.88, 0.91 and 0.89, respectively. Cronbach's alpha is a coefficient that describes how well a group of items focuses on a single idea or construct (Cronbach, 1951). A five-point Likert scale was employed to measure participants' level of perception on each question, with higher scores indicating a more positive perception of e-assessment. Reverse scoring was applied to the negatively worded items (items 11, 12, 13, 14, 15, 32, 34 and 35). The questionnaire was converted to the digital Google Forms platform and sent to the participants. The mean scores and SDs for the total and for each e-assessment domain were summarized separately.
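As an illustration only (not the authors' actual analysis code), the reverse scoring of negatively worded items and a Cronbach's α of the kind reported above can be sketched in a few lines of Python; the response matrix here is hypothetical.

```python
from statistics import variance

def reverse_score(response, max_point=5, min_point=1):
    """Reverse-code a negatively worded five-point Likert response (5 -> 1, 4 -> 2, ...)."""
    return (max_point + min_point) - response

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of per-respondent item-score lists."""
    k = len(rows[0])                      # number of items
    cols = list(zip(*rows))               # one tuple of scores per item
    item_var_sum = sum(variance(col) for col in cols)
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses: 4 students x 3 items, coded 1-5
scores = [[5, 4, 5], [4, 4, 4], [2, 3, 2], [3, 3, 3]]
print(reverse_score(5))                  # -> 1
print(round(cronbach_alpha(scores), 2))  # -> 0.93
```

Values of α closer to 1 indicate that the items in a domain consistently measure the same construct, which is how the domain-wise coefficients above should be read.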
Data collection
The questionnaires were available to the participants for completion for about one month, from March 30 to April 30. A pilot version of the online survey was initially administered to a limited number of respondents with different characteristics to establish the effectiveness of the designed tools. Testing the survey design helped ensure that the terms used were easily understood, and allowed checks of validity (i.e. that the items asked what we wanted to learn) and consistency. Member checking was also used to check the accuracy of the data collected. All 200 responses were collected and considered for data analysis.
Data analysis
Data were analyzed in accordance with each research question, and the results are presented in several tables. Descriptive as well as inferential statistics were applied, and the data analysis was conducted in SPSS-22 (Statistical Package for the Social Sciences). The benefit of its use is that data can be recorded and analyzed in a variety of ways at great speed (Bryman and Cramer, 2011). The analysis comprised descriptive statistics, factorial analysis, t-tests and one-way ANOVA tests.
Results
The data were analyzed holistically as well as across the eight domains of students' perception of e-assessment, and the results are presented in Table 2, which shows that students' overall perception of e-assessment (M = 117.4, SD = 9.06) is moderately high. Factorial analysis of the scores reveals that the awareness domain (M = 10.5, SD = 5.11) has the lowest mean score, followed by the perceived usefulness domain (M = 12.22, SD = 4.02), the IT support domain (M = 13.32, SD = 4.88), the resource facilitation domain (M = 13.69, SD = 5.11), the perceived ease of use domain (M = 20.11, SD = 4.53), the peer and superior influence domain (M = 21.87, SD = 4.64) and the compatibility domain (M = 22.76, SD = 5.24), while the self-efficacy domain (M = 23.56, SD = 5.54) has the highest score.
The frequency distribution of the scores in Figure 2 shows a slight positive skewness (Skp = 0.5), indicating that the distribution is fairly symmetrical to moderately skewed: the mean is moderately higher than the median and the mode, so most scores cluster at an average level, meaning most students hold an average level of perception toward e-assessment.
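The interpretation above rests on the relation between mean, median and mode. Assuming the reported Skp is Pearson's second skewness coefficient (an assumption, since the paper does not name the formula), it can be illustrated as follows with hypothetical scores.

```python
from statistics import mean, median, stdev

def pearson_skew(data):
    """Pearson's second skewness coefficient: 3 * (mean - median) / s."""
    return 3 * (mean(data) - median(data)) / stdev(data)

# Hypothetical total scores with a mild right tail (mean > median)
scores = [100, 105, 110, 112, 115, 118, 120, 125, 140]
print(round(pearson_skew(scores), 2))  # positive -> mildly right-skewed
```

A value near zero indicates symmetry; a small positive value, as reported here, means the mean sits slightly above the median, with most scores at the average level.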
Table 3 presents the differences in perception across demographic variables. Boys (M = 109.6, SD = 4.98) hold a better perception than girls (M = 101.3, SD = 5.18), and postgraduate students (M = 113.21, SD = 4.54) hold a better perception than graduate students (M = 102.54, SD = 5.87). Expectedly, students studying science (M = 117.23, SD = 5.21) show the highest perception, followed by students of law (M = 109.61, SD = 5.19), social science (M = 104.77, SD = 5.54) and humanities (M = 99.98, SD = 5.77). The difference in perception among economic groups is also notable: students with an annual family income above 5 lakh show the highest perception (M = 112.8, SD = 5.24), followed by the group with a family income of 1–5 lakh (M = 109.31, SD = 5.65) and the group with a family income below 1 lakh (M = 105.76, SD = 5.11).
The t-test result in Table 4 shows that the difference in perception between boys and girls is statistically significant (t = 2.39, p = 0.009 < 0.05), meaning the difference is not attributable to chance; there are substantive reasons behind it, though those reasons were not investigated in this study. The t-test result in Table 5 shows that the difference in perception between graduate and postgraduate students is also statistically significant (t = 3.61, p = 0.002 < 0.05).
The ANOVA and post hoc analysis results in Table 6 show that the difference in perception among students of the four streams of study is also statistically significant (F = 0.36, p = 0.019 < 0.05). The ANOVA and post hoc analysis results in Table 7 show that the difference in perception among students of the three economic groups is also statistically significant (F = 0.82, p = 0.004 < 0.05).
Discussion
Students' overall perception of e-assessment
The findings of this survey revealed that Indian higher education students' overall perception of e-assessment was of a moderate level, or slightly lower than medium. They felt it could serve them well in many directions, though they remained dubious, mainly due to their lack of confidence in computer skills. More importantly, students showed better perception in domains like self-efficacy, subjective norms, compatibility and perceived ease of use, but they figured low in domains like awareness, perceived usefulness, IT support and resource facilitation. The majority of students felt stress in using a computer during examinations and felt a lack of concentration during online exams, a potential disadvantage of e-assessment as pointed out by Whitelock and Brasher (2006). Hence, they still prefer the traditional paper–pen mode over e-assessment. But they were found to have a strong belief in the validity, practicality, security and teaching–learning potential of e-assessment, which they believed could benefit their learning and upcoming careers. The most appealing feature of e-assessment to them was its freedom from human error, which makes it more reliable to students and positively affects its perceived ease of use and efficacy. Students feel more secure about their grades online; e-assessment provides feedback and helps them understand their actual progress. Students also showed good perception of peer influence, an important factor in the MAUE (Sadaf et al., 2012). Most students believed e-assessment could add value to their learning, which agrees with Sorensen (2013), who said students feel e-assessment plays a role in higher education by adding value to their learning.
The majority of students (60%) believed online marking to be more accurate because it does not suffer from human error; still, a considerable number of students (40%) were critical of online multiple-choice questions and the effectiveness of this type of examination. Significantly, a vast majority of students (88%) accepted that the COVID-19 pandemic was a positive catalyst compelling students and institutions to resort to e-learning and e-assessment. Questions about safety also remain in their minds: using the devices involved for long periods can harm health. But these technological devices are used in other daily activities as well, so supporters can argue that we cannot live without technology anyway. Also, e-assessment has to be technically problem-free to be more effective. Only 20% of students agreed that e-assessment offers less scope for malpractice than paper-based assessment, meaning they were sensitive to the security and integrity issues associated with e-assessment, claimed as a potent disadvantage in several past studies like Mellar et al. (2018), Hillier (2014) and Kocdar et al. (2018).
The impact of e-assessment was found to influence students differently according to their gender, academic level, stream of study and economic condition. Boys were found to have a better perception than girls, postgraduates better than graduate students, students of science and law better than students of social science and humanities, and students from stronger economic backgrounds better than students from weaker ones. These results agree with the findings of NSS (2018) and Kundu and Bej (2021), which point to a deep digital divide and an unequal distribution of e-resources among different social strata.
Students' appraisal level of e-assessment
This investigation found that students' level of awareness of the e-assessment system is not sufficiently high, as evident from the low scores in the awareness domain (see Table 2). A deeper analysis of the responses showed that 81% of students agreed that their awareness of e-assessment was limited, 78% agreed that they did not know its pros and cons, and 81% disagreed that they had been provided regular information on e-assessment by their colleges/universities. This finding is also indicative of the low adoption level of e-assessment in the Indian higher education system. This poor appraisal level has been negatively affecting students' perceived usefulness domain, basically as a result of poor perception in the IT support and resource facilitation domains. The majority of students (77%) agreed that their computer skills are not adequate for e-assessment, and notably 85% reported not having a computer for personal use at home.
Students' low perception in awareness and IT skills has been producing stress and fear in them, making them uncomfortable with online examinations and making it hard to concentrate on a new system; still, they are welcoming e-assessment in higher education, as evident from their responses. The majority of students believe e-assessment is effective in making them future-ready for a bright career; it is easy and time-saving; it will eliminate favoritism and add value to their learning. They are hopeful that e-assessment will gain popularity with the increased dissemination of e-learning. Thus, the acceptability of e-assessment among Indian higher education students is of a medium level: they think it good for their future, yet they felt stress and hopelessness owing to their inability to overcome their weakness in computer operation.
Coronavirus disease 2019 effects on students' perception level
It is obvious that the COVID-19 pandemic has had a far-reaching effect on the global academic arena, including India (Kundu and Bej, 2021), and this investigation also endorses the authors' claim that COVID-19 was instrumental in bringing positive changes among academics toward accepting e-learning and e-assessment. The COVID-19 compulsions were effective in making online education productive with an instant result. Remarkably, 95% of students strongly affirmed that COVID-19 compelled them to adopt e-assessment. Besides, it exerted a positive peer influence on students that inspired them to adopt e-assessment on a positive note. COVID-19 was thus found to be a strong sub-factor influencing the subjective norm factor for e-assessment. This heightened perception among students regarding e-assessment may be due to this unprecedented pandemic time, as may be deduced from the majority of students (88%) strongly agreeing with the statement that in normal times they were at ease with the paper–pencil mode of assessment.
Perceptional differences among demographic variables
The investigation revealed that students' perception of e-assessment varies significantly depending on their gender, academic grade, stream of study and economic condition. Boys showed better perception than girls, and this difference was statistically significant. Several reasons may account for this difference: historical Indian inequalities based on gender and caste (Desai et al., 2010); the orthodox mindset that educating a female child is a bad investment because she is bound to get married and leave her paternal home one day (Gender inequality in India, 2017); gender acting as an influencing factor in technology adoption, as men are claimed to be more technologically adept than women (Goswami and Dutta, 2016); women having a less positive attitude toward computers in general (Tondeur et al., 2016); and a persisting gender difference in the place of use of computers in India (Basavaraja and Sampath, 2017).
Perception also varies with students' stream of study. Science students felt more comfortable with e-assessment than students of law, humanities or social science. This difference was also statistically significant, supporting the observation of O'Brien and Marakas (2009), Zwass (2019) and Verhoeven et al. (2020) that students who are interested in scientific research and/or want to become researchers are more open to mastering ICT skills and make greater use of computers and the Internet for their studies than other students.
A significant difference in perception was also observed among students belonging to different economic groups, and a positive association between economic condition and perception of e-assessment was noted: perception decreased as economic condition declined, though the statistical significance of this particular trend was not tested in this study. Though the reasons were not investigated here, it may be assumed that students from a strong economic background enjoy a good study environment that also helps them become proficient with digital equipment, which is not the case for poorer students who do not own a computer and who first saw one in their college laboratory. This finding echoes the studies of Gustafsson et al. (2018) and Thomson (2018), which reported a positive correlation between socioeconomic status and academic achievement. Several studies (e.g. Scheerder et al., 2017; Desjardins and Ederer, 2015; Fraillon et al., 2014; Hohlfeld et al., 2013; Hatlevik and Christophersen, 2013; Senkbeil et al., 2013) have categorically stated that a student's socioeconomic status correlates positively with ICT literacy, access and use.
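Although the study did not test the income–perception trend statistically, one way it could be examined is a rank correlation between income band (treated as ordinal) and perception score. The following Python sketch is illustrative only: the raw survey data are not available, so the score vectors are simulated from the reported group sizes, means and standard deviations.

```python
# Illustrative sketch (not part of the study): testing the suggested link
# between family income band and perception score with a rank correlation
# on hypothetical data simulated from the reported summary statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Income band coded ordinally: 0 = <1 lakh, 1 = 1-3 lakh, 2 = >3 lakh
income_band = np.repeat([0, 1, 2], [22, 130, 48])

# Hypothetical scores drawn with each group's reported mean and SD
scores = np.concatenate([
    rng.normal(105.76, 5.11, 22),
    rng.normal(109.31, 5.65, 130),
    rng.normal(112.80, 5.24, 48),
])

# Spearman rank correlation tolerates the heavy ties in the ordinal predictor
rho, p = stats.spearmanr(income_band, scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```

On simulated data of this shape the correlation comes out positive, consistent with the trend the authors describe; the exact rho and p depend on the random draw and would differ for the real survey data.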
Conclusion
The study concludes that COVID was instrumental in attracting Indian students toward e-assessment and enhancing their interest in it. During this time, students' overall perception of e-assessment was found to be moderate, though the level varies with students' gender, academic level, stream of study and economic background. A mixed response was collected from them. Of the eight domains investigated, students showed better perception in the perceived usefulness, perceived ease of use, compatibility, subjective norms and self-efficacy domains, while they fared poorly in the awareness, resource facilitation and IT support domains. The study found stress, and the allied fear of lacking ICT skills, to be the major limitation restraining students from adopting e-assessment intensively. Implementing e-assessment at the primary and secondary levels may help students respond with more conviction and skill than now, when they encounter it for the first time at the tertiary college level. Finally, the study found that the pandemic experience has left a positive impact on students' perception of e-assessment. Indian students live in an atmosphere marked by insufficient infrastructure and innovative measures, and their lack of IT skills only adds stress to operating a computer; still, they are welcoming e-assessment in higher education because they feel it to be the demand of the time, as the world moves toward paperless documentation and e-assessment is the close associate of e-learning.
Implications
The report has wide implications concerning Indian students' perception of e-assessment. It depicts a clear picture of students' readiness to adopt this new mode of assessment based on the MAUE framework, provides the necessary information to implementing authorities and thus helps in the effective incorporation of e-assessment into education. It also documents the limitations and strengths of e-assessment as students perceived them during this pandemic period. In this era of advanced technology, e-assessment can add value to the evaluation system. Paper–pencil-based assessment demands more staff load and working hours and brings security issues, human errors, chances of partiality and faulty question papers; these uncertainties can be eliminated with e-assessment. In a country like India, e-assessment has much potential to cater to the huge enrollment of upcoming students in higher education. It makes it far easier to monitor students' development, allowing teachers to follow students' results throughout the semester rather than only at the end of the year and preparing them for a professional future. The process is also more inclusive, as it considers all the actions, works, reports, portfolios and forums developed. At present, e-assessment may be more appropriate than traditional assessment, since it has the potential to curtail stress among students, improve decision-making among administrators and decrease costs and time. As an innovation, it can enhance learning and teaching at higher education institutions, but only if the institutions put suitable assessment policies and procedures in place.
This study is also helpful for future research and will serve as a ready reference for directing inquiry into several other aspects, such as teachers', institutional or policymakers' perceptions, or the correlations among these different aspects.
Tables
Demographic details of the study participants
Demographic | Frequency(N) | Percentage (%) |
---|---|---|
Gender | ||
Male | 140 | 70 |
Female | 60 | 30 |
Academic qualification | ||
Graduate | 105 | 52.5 |
Postgraduate | 95 | 47.5 |
Stream of study | ||
Humanities | 61 | 30.5 |
Social science | 59 | 29.5 |
Science | 42 | 21 |
Law | 38 | 19 |
Family economic condition (yearly income in Indian rupees) | ||
<1 lakh | 22 | 11 |
1–3 lakh | 130 | 65 |
>3 lakh | 48 | 24 |
Primary factor loadings, means and standard deviations of perception scores (N = 200)
Sl. | Factors | Mean (M) | Std. deviation (SD) | Primary factor loadings |
---|---|---|---|---|
a | Awareness domain | 10.5 | 5.11 | 0.59 |
b | Perceived usefulness domain | 12.22 | 4.02 | 0.65 |
c | Perceived ease of use domain | 20.11 | 4.53 | 0.83 |
d | Compatibility domain | 22.76 | 5.24 | 0.87 |
e | Peer influence and superior influence | 21.87 | 4.69 | 0.85 |
f | Self-efficacy domain | 23.56 | 5.54 | 0.91 |
g | Resource facilitation domain | 13.69 | 5.11 | 0.71 |
h | IT support domain | 13.32 | 5.11 | 0.69 |
Overall perception | 104.9 | 5.54 |
Differences of perception among demographic variables
Demographic variable | Mean | Std. Deviation |
---|---|---|
Gender | ||
Boys | 109.6 | 4.98 |
Girls | 101.3 | 5.18 |
Academic qualification | ||
Graduate | 102.54 | 5.87 |
Postgraduate | 113.21 | 4.54 |
Stream of study | ||
Humanities | 99.98 | 5.77 |
Social science | 104.77 | 5.54 |
Science | 117.23 | 5.21 |
Law | 109.61 | 5.19 |
Family economic condition (yearly income in Indian rupees) | ||
<1 lakh | 105.76 | 5.11 |
1–3 lakh | 109.31 | 5.65 |
>3 lakh | 112.8 | 5.24 |
t-test results comparing differences in perception between boys and girls
N | Mean | SD | t-cal | t-crit | df | p | Decision | |
---|---|---|---|---|---|---|---|---|
Boys | 140 | 109.6 | 4.98 | 2.39 | 1.65 | 124 | 0.009* | Reject |
Girls | 60 | 101.3 | 5.18 |
Note(s): p = 0.009 < 0.05
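The gender comparison in the table above can be reproduced in outline. The following Python sketch is illustrative only (not the authors' code): since the raw survey data are unavailable, it simulates two score vectors matching the reported group sizes, means and standard deviations, then runs an independent-samples t-test with SciPy.

```python
# Illustrative sketch (not the authors' code): an independent-samples t-test
# on hypothetical score vectors simulated from the reported group statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Reported group sizes, means and SDs for boys (N = 140) and girls (N = 60)
boys = rng.normal(loc=109.6, scale=4.98, size=140)
girls = rng.normal(loc=101.3, scale=5.18, size=60)

# Classic Student's t-test, pooling variances across the two groups
t_stat, p_value = stats.ttest_ind(boys, girls, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Reject the null hypothesis of equal means when p < 0.05
print("Reject H0" if p_value < 0.05 else "Fail to reject H0")
```

With group means roughly eight points apart and standard deviations near five, the test rejects the null hypothesis at the 0.05 level, mirroring the "Reject" decision in the table; the simulated t and p will not match the reported values exactly.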
t-test results comparing differences in perception between graduate and postgraduate students
N | Mean | SD | t-cal | t-crit | df | p | Decision | |
---|---|---|---|---|---|---|---|---|
Graduates | 105 | 102.54 | 5.87 | 3.61 | 2.97 | 193 | 0.002* | Reject |
Postgraduates | 95 | 113.21 | 4.54 |
Note(s): p = 0.002 < 0.05
ANOVA and post hoc analysis results comparing differences in perception among students of different streams
ANOVA | |||||
---|---|---|---|---|---|
Sum of squares | df | Mean square | Calculated F | Sig.(p) | |
Between groups | 38.85 | 3 | 12.95 | 0.36 | 0.019* |
Within groups | 5673.64 | 192 | 23.62 |
Post hoc test | |||
---|---|---|---|
I (stream) | J (stream) | Mean difference (I-J) | Sig |
Humanities | Social science | 3.70000* | 0.002 |
Science | 3.90000* | 0.014 | |
Law | 2.70000 | 0.674 | |
Social science | Humanities | −3.70000* | 0.002 |
Science | −4.10000* | 0.009 | |
Law | −3.50000* | 0.001 | |
Science | Humanities | −3.90000* | 0.014 |
Social science | 4.10000* | 0.009 | |
Law | 0.60000 | 0.664 | |
Law | Humanities | 2.70000 | 0.674 |
Social science | −3.50000* | 0.001 | |
Science | 0.60000 | 0.664 |
Note(s): p = 0.019 < 0.05
*The mean difference is significant at the 0.05 level
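A stream-wise analysis of this kind can be sketched as follows. The Python example below is illustrative only (not the authors' code): it simulates per-stream score vectors from the reported sizes, means and standard deviations, runs a one-way ANOVA with SciPy, and then uses Bonferroni-corrected pairwise t-tests as a simple stand-in for the post hoc procedure (the study's exact post hoc method is not specified).

```python
# Illustrative sketch (not the authors' code): one-way ANOVA across the four
# streams on hypothetical data, followed by Bonferroni-corrected pairwise
# t-tests as a simple post hoc analysis.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Reported per-stream sizes, means and SDs
groups = {
    "Humanities":     rng.normal(99.98, 5.77, 61),
    "Social science": rng.normal(104.77, 5.54, 59),
    "Science":        rng.normal(117.23, 5.21, 42),
    "Law":            rng.normal(109.61, 5.19, 38),
}

# Omnibus test: are the four stream means all equal?
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc: pairwise comparisons, multiplying p by the number of pairs (6)
n_pairs = len(list(combinations(groups, 2)))
for a, b in combinations(groups, 2):
    t, p = stats.ttest_ind(groups[a], groups[b])
    diff = groups[a].mean() - groups[b].mean()
    print(f"{a} vs {b}: mean diff = {diff:.2f}, "
          f"adjusted p = {min(p * n_pairs, 1.0):.4f}")
```

On data simulated with these group parameters the omnibus test is strongly significant; the individual F, p and mean-difference values will differ from the reported table, which was computed on the actual survey responses.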
ANOVA and post hoc analysis of results comparing differences in perception among students of different economic backgrounds
ANOVA | |||||
---|---|---|---|---|---|
Sum of squares | df | Mean square | Calculated F | Sig.(p) | |
Between groups | 49.7 | 2 | 24.19 | 0.82 | 0.004* |
Within groups | 5842.1 | 194 | 30.11 |
Post hoc test | |||
---|---|---|---|
I (FIC) | J (FIC) | Mean difference (I-J) | Sig |
<1 Lakh | 1–3 Lakh | 4.60000* | 0.036 |
>3Lakh | 4.80000* | 0.024 | |
1–3 Lakh | <1 Lakh | −4.60000* | 0.036 |
>3Lakh | 0.30000 | 0.889 | |
>3Lakh | <1 Lakh | −4.80000* | 0.024 |
1–3 Lakh | −0.30000 | 0.889 |
Note(s): p = 0.004 < 0.05; FIC = family income condition
*The mean difference is significant at the 0.05 level
References
Alruwais, N., Wills, G. and Wald, M. (2018), “Advantages and challenges of using e-assessment”, International Journal of Information and Education Technology, Vol. 8 No. 1, pp. 34-37, doi: 10.18178/ijiet.2018.8.1.1008.
Amelung, M., Krieger, K. and Rösner, D. (2011), “E-assessment as a service”, IEEE Transactions on Learning Technologies, Vol. 4 No. 2, pp. 162-174.
Apampa, K.M., Wills, G. and Argles, D. (2011), “Towards a blob-based presence verification system in summative e-assessments”, International Journal of e-Assessment, Vol. 1 No. 1, available at: http://journals.sfu.ca/ijea/in...ewFile/9/8.
Attia, M.A. (2014), “Postgraduate students' perceptions toward online assessment: the case of the faculty of education, Umm Al-Qura university”, Education for a Knowledge Society in Arabian Gulf Countries, Emerald Group Publishing Limited, pp. 151-173.
Baartman, L.K.J., Bastiaens, T.J., Kirschner, P.A. and Vleuten, C. (2007), “Evaluating assessment quality in competence-based education: a qualitative comparison of two frameworks”, Educational Research Review, Vol. 2, pp. 114-129.
Bacigalupo, D.A., Warburton, W.I., Draffan, E.A., Zhang, P., Gilbert, L. and Wills, G.B. (2010), “A formative eAssessment co-design case study”, 2010 IEEE 10th International Conference on Advanced Learning Technologies (ICALT), pp. 35-37.
Bartley, J.M. (2005), “Assessment is as assessment does: a conceptual framework for understanding online assessment and measurement”, Online Assessment and Measurement: Foundations and Challenges, IGI Global, pp. 1-45.
Basavaraja and Sampath (2017), “Gender disparities in the use of ICT: a survey of students in urban schools”, Journal of Information Science Theory and Practice, Vol. 5, pp. 39-48, doi: 10.1633/JISTaP.2017.5.4.3.
Boud, D. (1995), “Assessment and learning: contradictory or complementary?”, in Knight, P. (Ed.), Assessment for Learning in Higher Education, Kogan Page, London, available at: http://www.teacamp.eu/moodle2/pluginfile.php/1791/mod_resource/content/1/UA/Assessment_and_learning_contradictory_or_complementary.pdf (accessed 1 July 2015).
Brink, R. and Lautenbach, G. (2011), “Electronic assessment in higher education”, Educational Studies, Vol. 37 No. 5, pp. 503-512.
Bryman, A. and Cramer, D. (2011), Quantitative Data Analysis with IBM SPSS 17, 18 & 19: A Guide for Social Scientists, 1st ed, Routledge, New York.
Choudhary, R. (2020), “COVID-19 pandemic: impact and strategies for education sector in India”, available at: https://government.economictimes.indiatimes.com/news/education/covid-19-pandemicimpact-and-strategies-for-education-sector-in-india/75173099.
Clements, M.D. and Cord, B.A. (2013), “Assessment guiding learning: developing graduate qualities in an experiential learning programme”, Assessment and Evaluation in Higher Education, Vol. 38 No. 1, pp. 114-124.
Cook, J. and Jenkins, V. (2010), Getting Started with EAssessment. Bath, available at: http://opus.bath.ac.uk/17712/.
COVID-19 pandemic (2020), Wikipedia, available at: https://en.wikipedia.org/wiki/COVID-19_pandemic.
Creswell, J.W. (2008), Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, Sage, Thousand Oaks/London/New Delhi.
Crisp, G. (2011), Teacher's Handbook on E-Assessment: A Handbook to Support Teachers in Using E-Assessment to Improve and Evidence Student Learning and Outcomes, Creative Commons, San Francisco, California.
Crisp, G., Guàrdia, L. and Hilier, M. (2016), “Using e-Assessment to enhance student learning and evidence learning outcomes”, International Journal of Educational Technology in Higher Education, Vol. 13, p. 18, doi: 10.1186/s41239-016-0020-3.
Cronbach, L.J. (1951), “Coefficient alpha and the internal structure of tests”, Psychometrika, Vol. 22 No. 3, pp. 297-334.
Davidson, S. and McKenzie, L. (2009), Tertiary Assessment and Higher Education Student Outcomes: Policy, Practice and Research, available at: https://akoaotearoa.ac.nz/download/ng/file/group-4/n2742-tertiary-assessment--higher-education student-outcomes---policy-practice--research---summary.pdf.
Davis, F. (1985), A Technology Acceptance Model for Empirically Testing New End-User Information Systems, PhD thesis, Massachusetts Institute of Technology, Sloan School of Management, available at: http://hdl.handle.net/1721.1/15192.
De Villiers, R., Scott-Kennel, J. and Larke, R. (2016), “Principles of effective e-assessment: a proposed framework”, Journal of International Business Education, Vol. 11, pp. 65-92.
Dermo, J. (2009), “E-assessment and the student learning experience: a survey of student perceptions of e-assessment”, British Journal of Educational Technology, Vol. 40 No. 2, pp. 203-214, doi: 10.1111/j.1467-8535.2008.00915.x.
Desai, S., Dubey, A., Joshi, B.L., Sen, M., Abusaleh, S. and Reeve, V. (2010), Human Development in India: Challenges for a Society in Transition, Oxford University Press, New Delhi.
Desjardins, R. and Ederer, P. (2015), “Socio-demographic and practice-oriented factors related to proficiency in problem solving: a lifelong learning perspective”, International Journal of Lifelong Learning, Vol. 34 No. 4, pp. 468-486, doi: 10.1080/02601370.2015.1060027.
Dewey, J. (1997), Democracy and Education: An Introduction to the Philosophy of Education, The Free Press, New York, (Original work published 1916).
Donovan, J., Mader, C. and Shinsky, J. (2007), “Online vs. traditional course evaluation formats: student perceptions”, The Journal of Interactive Online Learning, Vol. 6, pp. 158-180.
Electronic Assessment (2020), Wikipedia, available at: https://en.wikipedia.org/wiki/Electronic_assessment.
Eljinini, M. and Alsamarai, S. (2012), “The impact of e-assessments system on the success of the implementation process”, Mod. Educ. Comput. Sci., Vol. 4 No. 11, pp. 76-84.
Ellaway, R. and Masters, K. (2008), “AMEE Guide 32: E-Learning in medical education Part 1: learning, teaching and assessment”, Medical Teacher, Vol. 30, pp. 455-73, doi: 10.1080/01421590802108331.
Fraillon, J., Ainley, J., Schulz, W., Friedman, T. and Gebhardt, E. (2014), Preparing for Life in a Digital Age the IEA International Computer and Information Literacy Study, International Report Amsterdam, Springer Open, doi: 10.1007/978-3-319-14222-7.
Gathuri, J.W., Luvanda, A., Matende, S. and Kamundi, S. (2014), “Impersonation challenges associated with e-assessment of university students”, Journal of Information Engineering and Applications, Vol. 4 No. 7, pp. 60-68, available at: https://www.iiste.org/Journals/index.php/JIEA/article/view/14289/.
Gender inequality in India (2017), White Planet Technologies, available at: www.indiacelebrating.com/social-issues/gender-inequality-in-india/.
Gikandi, J.W., Morrow, D. and Davis, N.E. (2011), “Online formative assessment in higher education: a review of the literature”, Computers and Education, Vol. 57 No. 4, pp. 2333-2351.
Gilbert, L., Whitelock, D. and Gale, V. (2011), Synthesis Report on Assessment and Feedback with Technology Enhancement, Electronics and Computer Science EPrints, Southampton, available at: http://srafte.ecs.soton.ac.uk.
Goswami, A. and Dutta, S. (2016), “Gender differences in technology usage—a literature review”, Open Journal of Business and Management, Vol. 4, pp. 51-59, doi: 10.4236/ojbm.2016.41006.
Gustafsson, J.E., Nilsen, T. and Hansen-Yang, K. (2018), “School characteristics moderating the relation between student socio-economic status and mathematics achievement in grade 8. Evidence from 50 countries in TIMSS 2011”, Studies in Educational Evaluation, Vol. 57, pp. 16-30, doi: 10.1016/j.stueduc.2016.09.004.
Hatlevik, O.E. and Christophersen, K. (2013), “Digital competence at the beginning of upper secondary school: identifying factors explaining digital inclusion”, Computers and Education, Vol. 63, pp. 240-247, doi: 10.1016/j.compedu.2012.11.015.
Hillier, M. (2014), “The very idea of e-exams: student (pre) conceptions”, Australasian Society for Computers in Learning in Tertiary Education Conference, pp. 77-88.
Hohlfeld, T.N., Ritzhaupt, A.D. and Barron, A.E. (2013), “Are gender differences in perceived and demonstrated technology literacy significant? It depends on the model”, Educational Technology Research and Development, Vol. 61, pp. 639-663, doi: 10.1007/s11423-013-9304-7.
Howarth, P. (2015), The Opportunities and Challenges Faced in Utilizing E-Based Assessment, available at: http://www.educationalrc.org/oldconf/old/pdf/Paul%20Howarth%20-%20Beirut%20Presentation.pdf (accessed 6 July 2015).
Isaías, P. and Issa, T. (2013), “E-learning and sustainability in higher education: an international case study”, The International Journal of Learning in Higher Education, Vol. 20 No. 4, pp. 77-90.
Isaías, P., Pífano, S. and Miranda, P. (2014), “Higher education and Web 2.0: theory and practice”, in Pelet, J.E. (Ed.), E-learning 2.0 Technologies and Web Applications in Higher Education, United States of America: Information Science Reference, pp. 88-106.
Jordan, S. (2011), “Using interactive computer-based assessment to support beginning distance learners of science”, Open Learning, Vol. 26 No. 2, pp. 147-164, doi: 10.1080/02680513.2011.567754.
King, E. and Boyatt, R. (2014), “Exploring factors that influence adoption of e-learning within higher education”, British Journal of Educational Technology, Vol. 46 No. 6, pp. 1272-1280, doi: 10.1111/bjet.12195.
Kocdar, S., Karadeniz, A., Peytcheva-Forsyth, R. and Stoeva, V. (2018), “Cheating and plagiarism in E-assessment: students' perspectives”, Open Praxis, Vol. 10 No. 3, p. 221, doi: 10.5944/openpraxis.10.3.873.
KPMG (2017), Online Education in India: 2021, available at: https://home.kpmg/in/en/home/insights/2017/05/internet-online-education-india.html.
Kucey, S. and Parsons, J. (2012), “Linking past and present: john Dewey and assessment for learning”, Journal of Teaching and Learning, Vol. 8, doi: 10.22329/jtl.v8i1.3077.
Kuh, G.D., Jankowski, N., Ikenberry, S.O. and Kinzie, J. (2014), Knowing what Students Know and Can Do: The Current State of Student Learning Outcomes Assessment in US Colleges and Universities, University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA), Urbana.
Kundu, A. (2018a), “Prospects of ICT integration in school education: an analytical study of the government schools in West Bengal, India”, International Journal of Advance and Innovative Research, Vol. 5 No. 3, July-September 2018, available at: http://iaraedu.com/about-journal/ijair-volume-v-issue-3-i-july-september.php.
Kundu, A. (2018b), “A study on Indian teachers' roles and willingness to accept education technology”, International Journal of Innovative Studies in Sociology and Humanities (IJISSH), Vol. 3 No. 9, September 2018, pp. 2456-4931, (Online), available at: http://ijissh.org/articles/2018-2/volume-3-issue-9/.
Kundu, A. (2020a), “Toward a framework for strengthening participants' self-efficacy in online education”, Asian Association of Open Universities Journal, Vol. ahead-of-print No. ahead-of-print, doi: 10.1108/AAOUJ-06-2020-0039.
Kundu, A. (2020b), “A sound framework for ICT integration in Indian teacher education”, International Journal of Teacher Education and Professional Development, Vol. 4 No. 1, Article 4, doi: 10.4018/IJTEPD.
Kundu, A. and Bej, T. (2021), “COVID-19 response: students' readiness for shifting classes online”, Corporate Governance, Vol. ahead-of-print No. ahead-of-print, doi: 10.1108/CG-09-2020-0377.
Kundu, A. and Dey, K.N. (2018), “A contemporary study on the flourishing E-learning scenarios in India”, International Journal of Creative Research Thoughts (IJCRT), Vol. 6 No. 2, pp. 384-390, ISSN: 2320-2882, April 2018, available at: http://www.ijcrt.org/IJCRT1892060.pdf.
Kundu, A., Bej, T. and Dey, K.N. (2020), “Indian educators' awareness and attitude towards assistive technology”, Journal of Enabling Technologies, Vol. 14 No. 4, pp. 233-251, doi: 10.1108/JET-04-2020-0015.
Lafuente, M., Remesal, A. and Valdivia, A.M. (2014), “Assisting learning in e-assessment: a closer look at educational supports”, Assessment and Evaluation in Higher Education, Vol. 39 No. 4, pp. 443-460.
Mason, J.C. (2014), “Theorizing why in digital learning: opening frontiers for inquiry and innovation with technologies”, in Sampson, D.G., Ifenthaler, D., Spector, J.M. and Isaías, P. (Eds), Digital Systems for Open Access to Formal and Informal Learning, Springer, New York, pp. 101-120.
Mellar, H., Peytcheva-Forsyth, R., Kocdar, S., Karadeniz, A. and Yovkova, B. (2018), “Addressing cheating in e-assessment using student authentication and authorship checking systems: teachers' perspectives”, International Journal for Educational Integrity, Vol. 14 No. 2, doi: 10.1007/s40979-018-0025-x.
Nanda, P.K. (2019), India's Higher Education Student Population Grows by 8 Lakh: HRD Ministry, available at: https://www.livemint.com/education/.
Nicol, D. (2007), “E-assessment by design: using multiple-choice tests to good effect”, Journal of Further and Higher Education, Vol. 31 No. 1, pp. 53-64.
NPE (2020), National Education Policy 2020, Govt of India, available at: https://pib.gov.in/PressReleasePage.aspx?PRID=1642061.
NSS (2018), National Sample Survey Report Ministry of Statistics and Programme Implementation, Government of India, available at: http://mospi.nic.in/.
Osuji (2012), “The use of e-assessments in the Nigerian higher education system”, The Turkish Online Journal of Distance Education, Vol. 13 No. 4, pp. 140-152.
O'Brien, J.A. and Marakas, G.M. (2009), Management Information Systems, 9th ed., McGraw Hill Irwin, Boston.
Pedersen, C., White, R. and Smith, D. (2012), “Usefulness and reliability of online assessments: a Business Faculty's experience”, International Journal of Organisational Behaviour, Vol. 17 No. 3, pp. 33-45.
Redecker, C., Punie, Y. and Ferrari, A. (2012), “eAssessment for 21st century learning and skills”, in Ravenscroft, A., Lindstaedt, S., Kloos, C.D. and Hernandez-Leo, D. (Eds), 21st Century Learning for 21st Century Skills, Springer, Germany, pp. 292-305.
Ridgway, J., McCusker, S. and Pead, D. (2004), Literature Review of E-Assessment, NESTA Futurelab Series Report 10, NESTA Futurelab, Bristol.
Rowe, N.C. (2004), “Cheating in online student assessment: beyond plagiarism”, Online Journal of Distance Learning Administration, Vol. 7 No. 2, available at: https://www.westga.edu/~distance/ojdla/summer72/rowe72.html.
Rowntree, D. (1987), Assessing Students − How Shall We Know Them?, Harper and Row, London.
Sadaf, A., Newby, T.J. and Ertmer, P.A. (2012), “Exploring pre-service teachers' beliefs about using Web 2.0 technologies in K-12 classroom”, Computers and Education, Vol. 59 No. 3, pp. 937-945.
Sanyal, A. (2020), “Schools closed, travel to Be avoided, says centre on coronavirus: 10 points”, NDTV, available at: https://www.ndtv.com/india-news/mumbai-s-siddhivinayak-temple-to-close-entry-for-devotees-from-today-amid-coronavirus-outbreak-2195660.
Scheerder, A., van Deursen, A. and van Dijk, J. (2017), “Determinants of Internet skills, uses and outcomes. A systematic review of the second- and third-level digital divide”, Telematics and Informatics, Vol. 34, pp. 1607-1624, doi: 10.1016/j.tele.2017.07.007.
Senkbeil, M., Ihme, J.M. and Wittwer, J. (2013), “The test of technological and information literacy (TILT) in the national educational panel Study: development, empirical testing, and evidence for validity”, Journal for Educational Research Online, Vol. 5, pp. 139-161, available at: http://www.j-e-r-o.com/index.php/jero/article/view/364/176.
Sharma, K. (2020), Study Shows How India's Higher Education Enrollment Can Jump to 65 from 27%, available at: https://theprint.in/india/education/study-shows-how-indias-higher-education-enrollment-can-jump-to-65-from-27/441582/.
Sorensen, E. (2013), “Implementation and student perceptions of e-assessment in a Chemical Engineering module”, European Journal of Engineering Education, Vol. 38 No. 2, pp. 172-185, doi: 10.1080/03043797.2012.760533.
Stodberg, U. (2012), “A research review of e-assessment”, Assessment and Evaluation in Higher Education, Vol. 37 No. 5, pp. 591-604, doi: 10.1080/02602938.2011.557496.
Thomson, S. (2018), “Achievement at school and socio-economic background—an educational perspective”, Npj Science of Learning, Vol. 3 No. 1, doi: 10.1038/s41539-018-0022-0.
Time (2020), “Coronavirus forces families to make painful childcare decisions”, Time, available at: https://time.com/5804176/coronavirus-childcare-nannies/.
Tinoca, L. (2012), “Promoting e-assessment quality in higher education: a case study in online professional development”, ICICTE 2012 Proceedings, pp. 213-223, available at: http://www.icicte.org/Proceedings2012/Papers/06-1-Tinoca.pdf (accessed 20 August 2015).
Tondeur, J., Van de Velde, S., Vermeersch, H., Van Houtte and Mieke (2016), “Gender differences in the ICT profile of university students: a quantitative analysis”, Journal of Diversity and Gender Studies, Vol. 3, doi: 10.11116/jdivegendstud.3.1.0057.
UNESCO (2020), E-assessment/ ICT Based Assessment, available at: http://www.ibe.unesco.org/en/glossary-curriculum-terminology/e/e-assessmentict-based-assessment.
Verhoeven, J.C., Heerwegh, D. and De Wit, K. (2020), “Predicting ICT skills and ICT use of university students”, in Tatnall, A. (Ed.), Encyclopedia of Education and Information Technologies, Springer, Cham. doi: 10.1007/978-3-319-60013-0_226-1.
Way, A. (2010), “The use of e-assessments in the Nigerian higher education system”, The Turkish Online Journal of Distance Education, Vol. 13 No. 1, pp. 140-152.
Whitelock, D. (2009), “E-assessment: developing new dialogues for the digital age”, British Journal of Educational Technology, Vol. 40 No. 2, pp. 199-202.
Whitelock, D.M. and Brasher, A. (2006), “Developing a roadmap for e-assessment: which way now?”, in Danson, M. (Ed.), Proceedings of the 10th CAA International Computer Assisted Assessment Conference, Professional Development, Loughborough University, Loughborough, pp. 487-501.
Williams, J.B. and Wong, A. (2009), “The efficacy of final examinations: a comparative study of closed-book, invigilated exams and open-book, open-web exams”, British Journal of Educational Technology, Vol. 40 No. 2, pp. 227-236.
Wright, K.B. (2005), “Researching internet-based populations: advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services”, Journal of Computer-Mediated Communication, Vol. 10 No. 3, doi: 10.1111/j.1083-6101.2005.tb00259.x.
Xu, H. and Mahenthiran, S. (2016), “Factors that influence online learning assessment and satisfaction: using moodle as a learning management system”, International Business Research, Vol. 9 No. 2, doi: 10.5539/ibr.v9n2p1.
Zwass, V. (2019), “Information system”, Encyclopædia Britannica, available at: https://academic.eb.com/levels/collegiate/article/informationsystem/126502.
Acknowledgements
Conflict of interest: The authors hereby declare that they have no conflicts of interest. The authors also declare that every ethical measure was duly respected and maintained with utmost dignity during this study. The authors cordially acknowledge the voluntary contribution of the 200 students who accepted the call and came forward to reveal their perceptions of e-assessment. Moreover, the authors affirm that no monetary funding from any source was availed in the course of this research project.
Corresponding author
About the authors
Arnab Kundu is a PhD student at the Department of Education, Bankura University, India. He has received a Master of Arts (MA) in English and in Education, an MPhil in Education and a Post-Graduate Diploma in Educational Management and Administration (PGDEMA). Arnab's research focuses on several issues regarding teachers' work, especially promoting literacies in online/digital environments.
Dr Tripti Bej has been working as an assistant teacher in the Department of Mathematics Education, Srima Balika Vidyalaya, Govt. of West Bengal, India, for the last thirteen years. She received her MSc and PhD in Applied Mathematics from Vidyasagar University, India. Her research interests include algebras and the teaching of mathematics.