Abstract
Purpose
The purpose of this paper is to examine the relationship between the student assessment method and e-learning satisfaction. Which e-learning assessment method do students prefer? The assessment method is an additional determinant of the effectiveness and quality of an online course and, therefore, a factor that affects user satisfaction.
Design/methodology/approach
The study employs data from 1,114 students. The first set of data was obtained from a questionnaire on the online platform. The second set was obtained from external assessment reports by e-learning specialists. The satisfaction revealed by the students in their responses to the questionnaire is the dependent variable in the multivariate technique. In order to estimate the influence of the independent variables on global satisfaction, we use the ordinary least squares technique. This method is the most appropriate for discrete dependent variables with multiple ordered categories, as is the case for the dependent variable.
Findings
The assessment method influences e-learning satisfaction, albeit only slightly. Students are reluctant to be assessed by a final exam. They prefer systems that award more importance to the assessment of coursework as part of the final mark.
Practical implications
Knowing the level of student satisfaction and the factors that influence it helps teachers improve their courses.
Originality/value
In online education, student satisfaction is an indicator of the quality of the education system. Although previous research has analyzed the factors that influence e-student satisfaction, to the best of the authors’ knowledge, no previous research has specifically analyzed the relationship between assessment systems and general student satisfaction with the course.
Citation
Martín Rodríguez, Ó., González-Gómez, F. and Guardiola, J. (2019), "Do course evaluation systems have an influence on e-learning student satisfaction?", Higher Education Evaluation and Development, Vol. 13 No. 1, pp. 18-32. https://doi.org/10.1108/HEED-09-2018-0022
Publisher
Emerald Publishing Limited
Copyright © 2019, Óscar Martín Rodríguez, Francisco González-Gómez and Jorge Guardiola
License
Published in Higher Education Evaluation and Development. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
At the start of this millennium, e-learning was considered as one of the cornerstones of information and communication technology in the realm of education. It facilitated the introduction of changes long demanded by specialists in didactics, and it modified roles traditionally played by students and teachers in the classroom (Gray et al., 2004; Volman, 2005). One essential condition for successful e-learning is that the student should feel overall satisfaction with the proposed system of teaching‒learning (Teo, 2010). Indeed, there is a close relationship between user satisfaction, successful teaching and the quality of a course (Bailey and Pearson, 1983; Peltier et al., 2007). It is important to be aware of student satisfaction with e-learning, as it is yet another aspect of the assessment of the educators, their courses and the overall quality of the educational programs (Bradford and Wyatt, 2010). Accordingly, some authors made recommendations on the design of e-learning courses (Tham and Werner, 2005; Roach and Lemasters, 2006).
Student satisfaction with an e-learning course depends on certain main components or factors (Sun et al., 2008; Ginns and Ellis, 2009). One such component that must be taken into consideration is the system proposed by the lecturer to assess student performance. An assessment method must take into account several aspects, from the time it is designed until it is implemented. These aspects were set out in the framework project “Assessment guide to teaching actions based on information and communication technologies,” developed by all Andalusian public universities (Blanco et al., 2005). According to this guide, first, teachers must disclose the assessment criteria at the beginning of the course. Second, the assessment method must give students periodic information so that they can correct the learning process accordingly. Finally, the assessment method should be representative of the knowledge and skills acquired by the student throughout the course. Failure to meet one of these key requirements could result in an unsuccessful teaching‒learning process and, therefore, a reduction in the level of student satisfaction. A specific analysis of the relationship between assessment method and student satisfaction is important, as students perceive differences between e-learning and face-to-face learning (Paechter and Maier, 2010). For instance, learning management systems facilitate the incorporation of a greater diversity of assessment tools, allow for other types of student collaboration and provide a feedback process (Grieve et al., 2016). Furthermore, students may have different perceptions regarding the learning environment. For example, some studies have evaluated the effectiveness of computer-based assessment compared to paper-and-pencil classroom assessments with regard to students’ learning motivation (Schmeeckle, 2003; Chua, 2012; Nikou and Economides, 2016).
The purpose of this research is to answer the following question: Do course evaluation systems have an influence on e-learning student satisfaction? To this end, the existing literature serves as a base, incorporating concepts from assessment systems and student satisfaction. This study contributes to the e-learning literature with an instrument that provides information to course developers and distance educators for creating good and efficient assessment methods. The questionnaires provide information that builds a better understanding of students’ perceptions of the assessment method used in e-learning systems. By providing a multidimensional evaluation of e-learning systems from the students’ perspective, the findings of this research help to build a more effective evaluation system and improve its effectiveness in distance education.
2. Literature review
In the literature on e-learning user satisfaction (for example, Ho and Dzeng, 2010; González-Gómez et al., 2012), some research works have taken into account the assessment method as an explanatory factor of user satisfaction (Sun et al., 2008; Kelly et al., 2007). In short, the assessment method is an additional determinant of the effectiveness and quality of an online course and, therefore, a factor that influences the satisfaction of e-learning users. More specifically, Abdous and Yoshimura (2010) examined the differences in final marks and satisfaction levels among students; in that paper, the authors compared three different methods: face-to-face in class, via satellite broadcasting at remote sites, and via live video streaming at home or at work. Eom et al. (2006) took into account the importance of feedback in the student learning process and student satisfaction. Kelly et al. (2007) analyzed the impact of marking fairness, unclear marking procedures and midterm changes in marking procedures on users’ satisfaction. Ozkan and Koseler (2009) highlighted the importance of the instructor clearly informing students about marking policy; the authors evaluated several dimensions, including system quality and service quality from a student’s perspective. In addition, Sun et al. (2008) showed that using different assessment methods facilitates the relationship between students and teachers and improves performance, as this is normally associated with multiple rounds of feedback. Finally, Lu and Chiou (2010) related satisfaction to the perceived flexibility to control the learning progress. All these works show the weight that the assessment system used in a course carries in the satisfaction of e-learning students.
Lemos and Nueza (2012) explored the relationship between e-learning students’ expectations and their level of satisfaction. They considered several dimensions: course design, coordination, faculty and tutors; curricular program; resources, learning methodologies, evaluation system, support services and technological infrastructures. The main aim of this research paper is to analyze the relationship between the student assessment method and e-learning satisfaction. It seems reasonable to assume that not every assessment method generates the same level of satisfaction among e-learning users, bearing in mind how they are designed and implemented. Therefore, the main hypothesis to test is whether introducing different assessment criteria leads to different levels of satisfaction among e-learning users. In addition, this research also aims to determine the student assessment criteria that yield a greater level of satisfaction.
As regards the contributions this research makes to the literature, the first is the detail provided on the relationship between assessment methods and student satisfaction. Unlike previous studies, which analyzed the assessment method and e-learning satisfaction in general terms, this paper explores different dimensions of that relationship. Our second contribution is the combination of subjective and objective information on the student assessment method. The information on which the study is based was extracted in three ways: student questionnaires, course guides and academic records. The findings of our research provide teachers with recommendations regarding the assessment criteria for e-learning courses that result in a greater level of student satisfaction. After this introduction, the paper is organized as follows. The second section reviews student assessment in the context of the European Higher Education Area (EHEA). The third section outlines the data and the model, whereas the fourth section discusses the results. The final section concludes.
2.1 European higher education area (EHEA)
The EHEA, which was created following the Bologna Declaration in 1999, emphasizes the need to change the university education model. The new model, clearly based on constructivist learning theory (see Rovai, 2004 for an excellent description of constructivism in e-teaching), is intended to encourage more active participation of students in the learning process, with teachers acting merely as guides (Bologna Working Group on Qualifications Frameworks, 2005). Concepts such as the master class, passive students and memorizing of content no longer feature prominently in the EHEA. Instead, emphasis is placed on lifelong learning, participative and self-managed learning and skills acquisition (Nicol and Macfarlane-Dick, 2006; Biggs and Tang, 2007; Boud and Falchikov, 2007).
Consequently, the EHEA has also changed the way it addresses student assessment. Emphasis is placed on the need for ongoing assessment in order to evaluate the acquisition of skills. Unlike the traditional written exam at the end of the course, which measures students’ ability to memorize, the new education model defends the need to test the competences and skills acquired by the students periodically throughout the learning process. Some authors point to a transition from a testing culture to an assessment culture (Birenbaum et al., 2006; Baartman et al., 2007).
The EHEA views information and communication technologies as tools that facilitate all these changes. The ongoing assessment model can be implemented in e-teaching. In fact, e-learning environments have applications that make ongoing assessment easier. In this sense, an e-learning environment allows students to do activities, assess themselves, build portfolios, participate in forums and chats and hand in coursework periodically, in addition to fostering group activities.
Among the possibilities that an e-learning environment offers, teachers can, to some extent, use their discretion while designing and implementing the assessment methods. They can decide the kind of assessment tools to be included in the course, as well as the percentage that each represents in the final mark. In general, we wonder what impact the decisions taken by the teacher in matters relating to the assessment of learning in web-based learning environments will have on student satisfaction. The literature review identifies various sets of factors.
The first element to consider is the information conveyed by the teacher at the beginning of the course about the evaluation method. Clarity is a powerful predictor of student satisfaction with the course (Paechter and Maier, 2010). Fair and clear learning assessment guidelines reduce students’ uncertainty and encourage good planning (Jung, 2012).
The second element is the consistency of assessment activities in the context of learning design. Assessment is key to learning design (Armellini and Aiyegbayo, 2010). There must be coherence with respect to the dual purpose of the assessment activities (Toetenel and Rienties, 2016), namely to check the students’ progress and to measure their achievement. With regard to the first, the teacher must propose activities that encourage learning and make it possible to gauge the degree of development of the competencies and skills required in the course. Students must perceive that the course material is useful for their learning (Diep et al., 2017). Assessment activities that are not in line with the course goals are likely to reduce student satisfaction. Turning to the second purpose, measurement, the level required in the assessment activities must be consistent with the course content. Levels above the course content requirement will result in frustration on the part of the student.
The third element relates to the assessment activities themselves. In this sense, an important issue is the weight assigned to the different activities in the final mark. For instance, the student will not perceive an evaluation based on the results of a single exam as fair, because the luck factor can play an important role (Struyven et al., 2005). Conversely, the alternative of an assessment design that combines different evaluation tools and allows continuous evaluation is well perceived by students (Thurmond et al., 2002), as they think their learning efforts are being properly assessed (Sun et al., 2008). Moreover, these are often activities that are not subject to time pressure and that use tools and other material aids that will also be used in real life (Baartman et al., 2007). Assessment based on a series of different activities can reduce stress, increase students’ motivation and, ultimately, encourage their learning process (Mello, 1993).
With regard to the different assessment activities, the first thing to mention is that, generally speaking, sitting for an exam puts stress on the student. This topic has been widely studied and measured (Cassady and Finch, 2015; Hoferichter et al., 2015). In addition, there is evidence that stress negatively affects student performance (Neemati et al., 2014; Crisan and Copaci, 2015). In the context of EHEA, however, other assessment tools are incorporated in the course design. In relation to other assessment activities, there is evidence that students prefer learning and assessment tools that encourage self-reflection (Cheng and Chau, 2016). Although there may be a number of drawbacks to the teamwork approach (Berry, 2007; Lipson et al., 2007; Mellor, 2012), group work acts as an element of social integration and can generate student satisfaction (Wilkins et al., 2015; Scotland, 2016). Furthermore, activities that promote interaction with the teacher and discussion with fellow students increase student satisfaction (Swan, 2001; Martín-Rodríguez et al., 2015).
3. Method
The study was conducted using information from the Campus Andaluz Virtual (CAV). It comprises ten universities from Andalusia, a region in southern Spain with some 8.4 million inhabitants, nearly 18 percent of the total population of Spain. These ten universities have a total of 250,000 students, and any of these students may register for a subject offered fully online by the CAV. There is a broad array of course offerings, covering nearly all areas of knowledge. The Learning Management System (LMS) used by students on these courses was Moodle.
In this research, six universities participated and two sources of information were used to compile the database, namely a survey and course reports by e-learning specialists. The database includes information from 1,114 students enrolled on 50 e-learning courses. The Appendix provides a description of the variables.
As regards the survey, a group of experts from ten Andalusian public universities participated in its design as part of a quality project funded by the Regional Government of Andalusia (Blanco et al., 2005). After several meetings, a questionnaire ‒ available as a beta version for three years ‒ was agreed. After the trial period, in academic year 2010‒2011, the questionnaire was approved for all virtual courses from the ten universities in Andalusia.
Data collection began when the e-learning service specialists at each university published the link to the questionnaire on the online platform. The data collection tool was the survey module of Moodle. The questionnaire was available to students from approximately 15 days before until 15 days after the end of the course. In order to improve the response rate, the e-learning service specialists sent an e-mail from the platform encouraging students to complete the questionnaire. The e-mail highlighted that all the information gathered would be anonymous. The questionnaire that students answered included 16 questions, and all questionnaire items were measured using a six-point Likert scale, ranging from 1 to 6: (1) Never; (2) Almost never; (3) Some (Sometimes); (4) A lot (Many times); (5) Almost always; (6) Always. There were a total of 1,114 student responses, a response rate of 35 percent. All the students were working toward the equivalent of a Bachelor’s degree at one of the Andalusian universities, and they had an intermediate level of knowledge and experience in new technologies.
Information was then obtained from the external assessment reports by e-learning specialists. Regarding the method, three specialists were first informed of the research aim. These experts were selected by the e-learning units of each university from among the staff able to carry out the evaluation; all of them were technicians trained in e-learning who had completed a specific e-learning evaluation course given within the CAV project.
The expert group then agreed on the sources of information to be used and the information to be extracted for the purposes of the research. Later, each expert prepared a report drawing on the various sources. Finally, the information was pooled and the database was created through a comparison of the three resulting reports.
In order to compile the aforementioned reports, the specialists took into account the course guide, the materials that the teacher uploaded to the platform, the data on course progress and the final assessment results. This process generated 12 variables related to the system and the results of student assessment (see Table AI). Furthermore, for the variable that provides information about the pass mark rate of each course, the academic records were consulted.
The satisfaction revealed by the students in their responses to the questionnaire is the dependent variable in the multivariate technique proposed in this research. In order to estimate the influence of the independent variables on global satisfaction, we use the ordinary least squares technique. This method is chosen because it is the most appropriate for discrete dependent variables with multiple ordered categories, as is the case for the dependent variable. Other plausible methods for performing the estimations are probabilistic models such as ordered logit and ordered probit. Those methods aim to determine the marginal contribution of each independent variable to the variation in probability when the dependent variable changes its value. We prefer ordinary least squares for several reasons: first, because the conclusions drawn by each method are basically the same (Ferrer‐i‐Carbonell and Frijters, 2004); second, because ordinary least squares results are easy to understand and require less information to interpret correctly. The estimations are performed using the statistical package Stata (StataCorp, 2015).
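To illustrate this methodological choice, the sketch below fits both an ordinary least squares model and an ordered logit to a 1‒6 Likert dependent variable. It is a minimal illustration only, not the authors' code (the original estimations were run in Stata); the file name and the subset of regressors are hypothetical placeholders, while the column names follow Table AI.

```python
# Illustrative sketch only: OLS vs ordered logit on an ordered 1-6 Likert outcome.
# File name and regressor subset are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("cav_survey.csv")                        # hypothetical data file
y = df["GLOBAL_SATISFACTION"]                             # ordered response, 1-6
X = df[["CLARITY", "TIMING", "MOTIVATION", "PLATFORM"]]   # illustrative regressor subset

# OLS treats the ordered response as cardinal; coefficients are directly readable.
ols_fit = sm.OLS(y, sm.add_constant(X)).fit()
print(ols_fit.summary())

# An ordered logit models the probability of each response category instead.
ologit_fit = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(ologit_fit.summary())
```

In line with Ferrer‐i‐Carbonell and Frijters (2004), the two approaches would be expected to lead to essentially the same qualitative conclusions, which is why the simpler ordinary least squares output is the one reported in Table II.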
4. Results and discussion
Table I presents the main descriptive statistics. After the dependent variable, GLOBAL_SATISFACTION, a first set of variables provides information on various aspects of the online courses, such as the course syllabus, LMS accessibility and course materials. Then, we show three blocks of variables that refer to the course evaluation method and that, a priori, could influence student satisfaction. The blocks match the sets of factors explained in Section 2: information conveyed by the teacher at the start of the course about the evaluation method; variables that capture the consistency of assessment activities in the context of learning design; and activities considered for student assessment.
It can be concluded that student satisfaction with the courses in the sample is high, reaching 4.5 points out of 6. The question we seek to answer is whether the assessment method used influences student satisfaction. Using the technique presented above, we estimate the models displayed in Table II. The technique used enables us to determine whether or not the different decisions made by the teacher about the assessment method influence student satisfaction. We follow the strategy of sequentially introducing into the model the sets of variables shown in Table I, in order to assess the marginal effect of each set of assessment-related variables on the goodness of fit of a baseline model.
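The following minimal sketch, under the same hypothetical data layout as above, illustrates this sequential strategy: a baseline model is fitted first, and each block of assessment-related variables from Table I is then added cumulatively so that the R2 of the nested models can be compared.

```python
# Illustrative sketch: cumulative addition of variable blocks to a baseline OLS
# model, comparing goodness of fit. Column names follow Table I; the file name
# and the baseline subset are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cav_survey.csv")

baseline = ["OBJECTIVES", "CLARITY", "TIMING", "MOTIVATION", "PLATFORM"]  # illustrative subset
blocks = {
    "II (clarity of assessment)": ["EVALUATION_SYSTEM", "SCALE"],
    "III (coherence)": ["ASSESSMENT_PROCEDURES", "LEVEL_DEMAND", "APPROVED", "AGREEMENT"],
    "IV (assessment activities)": ["FINAL_EXAM", "WEIGHTING_ACTIVITIES", "WEIGHTING_EXAM"],
}

regressors = list(baseline)
fit = sm.OLS(df["GLOBAL_SATISFACTION"], sm.add_constant(df[regressors])).fit()
print(f"Model I: R2 = {fit.rsquared:.3f}")

for label, block in blocks.items():
    regressors += block  # add the next block on top of everything already included
    fit = sm.OLS(df["GLOBAL_SATISFACTION"], sm.add_constant(df[regressors])).fit()
    print(f"Model {label}: R2 = {fit.rsquared:.3f}")
```

The marginal change in goodness of fit from one nested model to the next is the quantity discussed in the following paragraphs.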
Before discussing the results for the variables in the models, the goodness of fit of the models is worth mentioning. The R2 of the models is above 0.78, which seems a good fit, considering that the dependent variable is subjective.
The first model generally confirms our expectations with regard to the relationship between the variables considered and general satisfaction with the course. We find that the definition of objectives (OBJECTIVES), clarity in the definition of the objectives of the course (CLARITY), adequate timing (TIMING), meeting expectations (EXPECTATIONS_MATERIAL), appropriate use of illustrations and examples (ILLUSTRATION_EXAMPLES), motivation (MOTIVATION) and the educational environment and accessibility (TEACHING_TOOLS, PLATFORM and ACCESSIBILITY) have a positive impact on students’ general opinion of the course, whereas the variables CONSISTENCY_MATERIAL and TUTOR_CONTRIBUTION were not statistically significant.
Models II‒IV incorporate the variables related to the course design and assessment tools. It is worth highlighting that the goodness of fit does not undergo significant changes: the R2 ranges between 0.781 (model I) and 0.798 (model IV). This means that the variables representing the course design and assessment tools seem to be less important in the models. However, the statistical significance and sign of the coefficients can provide some insight into the influence of the assessment method on satisfaction.
In model II, it is surprising that students do not seem to find aspects related to assessment method transparency relevant. In this vein, it is worth highlighting that the variables EVALUATION_SYSTEM and SCALE are not statistically significant. This result contrasts with Sun et al. (2008), who concluded that the criteria for assessing activities should be taken into account while designing the course in order to achieve the best results. According to Ozkan and Koseler (2009), the fact that exam questions and assignments are clearly explained influences general satisfaction with e-learning. In addition, Roach and Lemasters (2006) maintained that the user positively values the assessment criteria being clearly defined right from the beginning of the course.
Moreover, e-learning users consider this to be a determinant factor of the quality of the course. One possible explanation for the result obtained in our research is that students do not pay much attention to the course guide, which can be consulted at the beginning of the course. In short, the result obtained could be due to student unwillingness to fill in the questionnaire. The preparation of course guides is not a widespread practice in Spanish universities and, furthermore, students are not used to reading them.
A set of variables that provides information about the coherence of the course assessment activities is introduced into the third model. We ask students whether the assessment techniques and procedures are in keeping with the objectives of the course. In fact, defining course objectives is vital in terms of motivating the student in the learning process and in the final result of this process (Klein et al., 2006). The variable ASSESSMENT_PROCEDURES shows a positive relation with student satisfaction: satisfaction increases when students recognise that the activities are useful for the development of their skills and abilities. In addition, the variable LEVEL_DEMAND indicates that student satisfaction increases when students feel that the level of demand in the activities is consistent with the level of the course content. It is logical to assume that students will react unfavourably to the course if the assessment method tests skills that have not been acquired in the course or if the written tests require a higher level of knowledge than the course content provides. Additionally, we wanted to test whether there was any relationship between students’ marks and their satisfaction. Conflict may arise, at least in part, because the teacher demands a standard of work above the course objectives and content; in such cases, student dissatisfaction may result. AGREEMENT is positively and significantly related to general satisfaction with the course. In short, students who believe they have been assessed fairly feel satisfied. In addition, we introduced an objective variable proxying how demanding teachers are, namely the ratio between students who passed and students eligible for assessment, APPROVED, but it was found to be non-significant in the last model. In summary, these results reinforce the idea that students care about what happens to them, but not about what happens to the group as a whole.
In model IV, we can see that there are other variables related to assessment design that are significantly related to students’ evaluation of the course. First, students prefer assessment designs in which there is no need for a final exam, FINAL_EXAM. Furthermore, although there is no evidence to suggest that students enjoy a greater sense of satisfaction when carrying out activities (ACTIVITIES), the sign and the significance of the WEIGHTING_ACTIVITIES variable shows that students do prefer assessment design where the marks allocated to their activities carry a greater weight than the score for the final exam. In short, students[1] are less satisfied with assessment testing that includes a final exam; furthermore, should they be required to sit for an exam, the higher the weighting of the exam in their final score, the lower will be their level of satisfaction.
5. Conclusion
Spanish universities have undergone significant changes in only a few years, with two key aspects figuring prominently. On the one hand, there are the guidelines set by the EHEA, which entail changes in teaching and learning methodology and student assessment processes. On the other hand, there are new information technologies, which require the introduction of new tools for communicating and exchanging ideas and also act as facilitators of the changes that have occurred in the education model.
In this new paradigm of higher education, supported mainly by e-learning, student satisfaction is still an indicator of the quality of the education system. Although previous research works have analyzed the factors that influence e-student satisfaction, to the best of the authors’ knowledge, none of them has specifically analyzed the relationship between assessment systems and general student satisfaction with the course.
Taking into account the results, it can be concluded that the assessment system influences the satisfaction of e-students. However, at least on the basis of the data from the sample, this aspect appears to play a secondary role among the factors that affect satisfaction with the course. The goodness of fit of the model slightly improves when variables referring to the assessment system are incorporated.
One of the key results of the analysis is that students value assessment positively when it is in keeping with the objectives and level of the course. Moreover, students are found to prefer assessment methods in which handing in coursework is worth a significant share of the final mark, rather than simply sitting for exams or undertaking end-of-course projects.
As expected, students are more satisfied with the course when they agree with the marks given by the teacher. The following recommendations can be made in light of the results obtained. First, the indifference displayed by students toward the effort that a teacher makes to specify and clarify the assessment system in the course guide seems to indicate that students do not pay much attention to the content of the guide. Teachers should employ strategies to raise student awareness of the importance of the course guide.
Second, student ratings should be based on the recognition of abilities and skills students manage to develop during the course. The final exam option is less popular among students and mainly rewards students’ efforts in terms of memorizing content. Consequently, this option should be worth less than other techniques in an ongoing assessment system.
Third, we believe that performing similar studies in other countries would be an interesting avenue for future research. This would enable international comparisons regarding the relationship between assessment method and e-learning satisfaction. There are bound to be differences due to the different styles and types of learning that are influenced by distinctive cultural features (Yamazaki, 2005; Joy and Kolb, 2009). The comparisons carried out could lead to recommendations regarding assessment method and working criteria across cultures. Along the same lines, it is important to bear in mind the cultural factors that affect training effectiveness, according to the style of learning implemented (Yang et al., 2009). Another use for comparative research is to make recommendations on trends to follow so that certain evaluation systems and working criteria for e-learning succeed in countries that have no tradition of using such systems. It is important to analyze the relationship between assessment method and student satisfaction. This will allow us to make changes to improve motivation, and hence student learning and academic outcomes.
Finally, another interesting research line is to include qualitative data to better understand e-learning student satisfaction. Such a comparative study would be difficult to perform; however, it would allow a deeper understanding of students’ satisfaction.
Table I. Descriptive statistics of the variables used in the study
Variable | Mean/percentage | SD | Min. | Max. |
---|---|---|---|---|
GLOBAL_SATISFACTION | 4.4955 | 1.306 | 1 | 6 |
Variables that influence GLOBAL_SATISFACTION (excluding variables relating to the student’s assessment method) | ||||
OBJECTIVES | 4.7469 | 1.306 | 1 | 6 |
CLARITY | 4.5287 | 1.461 | 1 | 6 |
TIMING | 4.4847 | 1.4831 | 1 | 6 |
CONSISTENCY_MATERIAL | 4.7226 | 1.2587 | 1 | 6 |
EXPECTATIONS_MATERIAL | 4.4443 | 1.4056 | 1 | 6 |
TUTOR_CONTRIBUTION | 5 | 1.2235 | 1 | 6 |
ILLUSTRATION_EXAMPLES | 4.5835 | 1.3719 | 1 | 6 |
MOTIVATION | 4.3546 | 1.394 | 1 | 6 |
TEACHING_TOOLS | 4.6032 | 1.3191 | 1 | 6 |
PLATFORM | 4.4425 | 1.4705 | 1 | 6 |
ACCESIBILITY | 4.4901 | 1.4301 | 1 | 6 |
Clarity of assessment method | ||||
EVALUATION_SYSTEM | 0.8887 | 0.3147 | 0 | 1 |
SCALE | 0.7136 | 0.4523 | 0 | 1 |
Coherence of the assessment activities with goals and course content | ||||
ASSESSMENT_PROCEDURES | 4.5368 | 1.3621 | 1 | 6 |
LEVEL_DEMAND | 4.4686 | 1.4436 | 1 | 6 |
APPROVED | 0.7035363 | 0.1823185 | 0.3793103 | 1.909091 |
AGREEMENT | 4.476661 | 1.439369 | 1 | 6 |
Assessment activities | ||||
CONTINUOUS | 0.3268 | 0.4692 | 0 | 1 |
WEIGHTING_ACTIVITIES | 34.2 | 27.4 | 0 | 100 |
WEIGHTING_EXAM | 14.3 | 19.6 | 0 | 70 |
ACCESS | 0.3142 | 0.4644 | 0 | 1 |
GROUP | 0.0987 | 0.2985 | 0 | 1 |
PARTICIPATION | 0.8133 | 0.3899 | 0 | 1 |
ACTIVITIES | 0.8636 | 0.3434 | 0 | 1 |
FINAL_EXAM | 0.5009 | 0.5002 | 0 | 1 |
FINAL_PROJECT | 0.2549 | 0.436 | 0 | 1 |
Table II. Estimation results
Variables | (I) | (II) | (III) | (IV) |
---|---|---|---|---|
OBJECTIVES | 0.0594** (0.0256) | 0.0580** (0.0257) | 0.0398 (0.0251) | 0.0403 (0.0251) |
CLARITY | 0.0642*** (0.0191) | 0.0662*** (0.0193) | 0.0568*** (0.0188) | 0.0507*** (0.0189) |
TIMING | 0.0732*** (0.0175) | 0.0721*** (0.0176) | 0.0537*** (0.0172) | 0.0498*** (0.0172) |
CONSISTENCY_MATERIAL | 0.0333 (0.0284) | 0.0342 (0.0285) | 0.0101 (0.0279) | 0.00945 (0.0278) |
EXPECTATIONS_MATERIAL | 0.199*** (0.0260) | 0.198*** (0.0261) | 0.175*** (0.0255) | 0.173*** (0.0256) |
TUTOR_CONTRIBUTION | 0.0466** (0.0232) | 0.0467** (0.0232) | 0.0326 (0.0227) | 0.0366 (0.0228) |
ILLUSTRATION_EXAMPLES | 0.0391* (0.0225) | 0.0387* (0.0225) | 0.0227 (0.0220) | 0.0235 (0.0220) |
MOTIVATION | 0.114*** (0.0211) | 0.116*** (0.0212) | 0.101*** (0.0206) | 0.101*** (0.0206) |
TEACHING_TOOLS | 0.103*** (0.0260) | 0.102*** (0.0261) | 0.0552** (0.0263) | 0.0576** (0.0264) |
PLATFORM | 0.0898*** (0.0159) | 0.0888*** (0.0159) | 0.0810*** (0.0155) | 0.0767*** (0.0156) |
ACCESIBILITY | 0.207*** (0.0204) | 0.208*** (0.0205) | 0.166*** (0.0204) | 0.168*** (0.0205) |
Clarity of assessment method | ||||
EVALUATION_SYSTEM | −0.0382 (0.0634) | −0.0479 (0.0621) | −0.0219 (0.0657) | |
SCALE | −0.028 (0.0440) | −0.015 (0.0435) | −0.0543 (0.0780) | |
Coherence of the assessment activities with goals and course content | ||||
ASSESSMENT_PROCEDURES | 0.0696*** (0.0264) | 0.0677** (0.0264) | ||
LEVEL_DEMAND | 0.0888*** (0.0253) | 0.0883*** (0.0255) | ||
APPROVED | 0.330** (0.1610) | 0.124 (0.1820) | ||
AGREEMENT | 0.0723*** (0.0181) | 0.0712*** (0.0184) | ||
Assessment activities | ||||
CONTINUOUS | 0.0495 (0.0469) | |||
WEIGHTING_ACTIVITIES | 0.00185* (0.0010) | |||
WEIGHTING_EXAM | 0.00317* (0.0018) | |||
ACCESS | −0.0632 (0.0449) | |||
GROUP | −0.0773 (0.0689) | |||
PARTICIPATION | −0.000939 (0.0612) | |||
ACTIVITIES | −0.0279 (0.0790) | |||
FINAL_EXAM | −0.147** (0.0681) | |||
FINAL_PROJECT | −0.0732 (0.0492) | |||
CONSTANT | −0.157* (0.0894) | −0.104 (0.1040) | −0.290** (0.1430) | −0.0809 (0.1740) |
n | 1,114 | 1,114 | 1,114 | 1,114 |
R2 | 0.781 | 0.781 | 0.795 | 0.798 |
Notes: Standard errors in parentheses. *p<0.1; **p<0.05; ***p<0.01
Table AI. Questions and variables
Dependent variable | Answer | Source |
---|---|---|
GLOBAL_SATISFACTION: global satisfaction of the course | (1) Never; (2) Almost never; (3) Some (some time); (4) Much (many times); (5) Almost always; (6) Always | Survey |
Variables that influence GLOBAL_SATISFACTION (without including variables of the student’s evaluation system) | ||
OBJECTIVES: degree of commitment to the objectives of the course | (1) Never; (2) Almost never; (3) Some (some time); (4) Much (many times); (5) Almost always; (6) Always. | Survey |
CLARITY: since the beginning of the course, the objectives and the development of the course were clear | ||
TIMING: the timing in the modules and subjects of the course were accurate | ||
CONSISTENCY_MATERIAL: the contents were in accordance with the objectives and the program of the course | ||
EXPECTATIONS_MATERIAL: the contents were in accordance with the student’s expectation | ||
TUTOR_CONTRIBUTION: the tutors showed that they knew their subject well | ||
ILLUSTRATION_EXAMPLES: the tutor made adequate use of illustrations and examples | ||
MOTIVATION: the motivation in the course was high | ||
TEACHING_TOOLS: the activities and tools used in the course have been helpful in order to achieve the objectives | ||
PLATFORM: the interface of the formative environment (graphic environment of the course) was accessible and easy to use | ||
ACCESIBILITY: the course is friendly | ||
Clarity of assessment method | ||
EVALUATION_SYSTEM: Was the evaluation system clear and detailed? | 0: No; 1: Yes | Course guide |
SCALE: Are assessment standards established? | ||
Coherence of the assessment activities with goals and course content | ||
ASSESSMENT_PROCEDURES: the techniques and evaluation procedures employed were in accordance with the objectives of the course | (1) Never; (2) Almost never; (3) Some (some time); (4) Much (many times); (5) Almost always; (6) Always | Survey |
LEVEL_DEMAND: the demanded level was at the level of the content of the course | ||
APPROVED: ratio of students approved over students enrolled in the course | In percentage | Academic Record |
AGREEMENT: Do you agree with the marks obtained until now? | (1) Never; (2) Almost never; (3) Some (some time); (4) Much (many times); (5) Almost always; (6) Always. | Survey |
Assessment activities | ||
CONTINUOUS: there was a process of continuous evaluation | 0: No; 1: Yes | Course guide |
WEIGHTING_ACTIVITIES: What was the weight of the activities? | In percentage over the total value of the mark | |
WEIGHTING_EXAM: What was the weight of the exams? | ||
ACCESS: the number of accesses to the platform was taken into account | 0: No; 1: Yes |
GROUP: there were group activities that fostered teamwork
PARTICIPATION: participation in the forums was taken into account
ACTIVITIES: the delivery of activities was taken into account | ||
FINAL_EXAM: the final exam was taken into account | ||
FINAL_PROJECT: there was a need to hand in a final project |
Note
Due to the nature of the study, the formats of some of the assessment methods (question types such as multiple choice, short answer and gap fill) were not taken into consideration, even though these may affect user satisfaction; Danili and Reid (2005) found that alternative assessment formats do affect performance.
Appendix
References
Abdous, M.H. and Yoshimura, M. (2010), “Learner outcomes and satisfaction: a comparison of live video-streamed instruction, satellite broadcast instruction, and face-to-face instruction”, Computers & Education, Vol. 55 No. 2, pp. 733-741.
Armellini, A. and Aiyegbayo, O. (2010), “Learning design and assessment with e‐tivities”, British Journal of Educational Technology, Vol. 41 No. 6, pp. 922-935.
Baartman, L.K., Bastiaens, T.J., Kirschner, P.A. and van der Vleuten, C.P. (2007), “Evaluating assessment quality in competence-based education: a qualitative comparison of two frameworks”, Educational Research Review, Vol. 2 No. 2, pp. 114-129.
Bailey, J.E. and Pearson, S.W. (1983), “Development of a tool for measuring and analyzing computer user satisfaction”, Management Science, Vol. 29 No. 5, pp. 530-545.
Berry, E. (2007), “Group work and assessment – benefit or burden?”, The Law Teacher, Vol. 41 No. 1, pp. 19-36.
Biggs, J. and Tang, C. (2007), Teaching for Quality Learning at University, SHRE & Open University Press, Buckingham.
Birenbaum, M., Breuer, K., Cascallar, E., Dochy, F., Dori, Y., Ridgway, J. and Nickmans, G. (2006), “A learning integrated assessment system”, Educational Research Review, Vol. 1 No. 1, pp. 61-67.
Blanco, E., Cordón, Ó. and Infante, A. (2005), “Guía Afortic: Guía para la evaluación de acciones formativas basadas en tecnologías de la información y comunicación”, Unidad para la calidad de las universidades andaluzas (UCUA), Córdoba.
Bologna Working Group on Qualifications Frameworks (2005), A Framework for Qualifications of the European Higher Education Area, Ministry of Science, Technology and Innovation, Copenhagen.
Boud, D. and Falchikov, N. (Eds) (2007), Rethinking Assessment in Higher Education: Learning for the Longer Term, Routledge, London.
Bradford, G. and Wyatt, S. (2010), “Online learning and student satisfaction: academic standing, ethnicity and their influence on facilitated learning, engagement, and information fluency”, Internet and Higher Education, Vol. 13 No. 3, pp. 108-114, available at: www.learntechlib.org/p/108388/ (accessed May 9, 2019).
Cassady, J.C. and Finch, W.H. (2015), “Using factor mixture modeling to identify dimensions of cognitive test anxiety”, Learning and Individual Differences, Vol. 41, pp. 14-20.
Cheng, G. and Chau, J. (2016), “Exploring the relationships between learning styles, online participation, learning achievement and course satisfaction: an empirical study of a blended learning course”, British Journal of Educational Technology, Vol. 47 No. 2, pp. 257-278.
Chua, Y.P. (2012), “Effects of computer-based testing on test performance and testing motivation”, Computers in Human Behavior, Vol. 28 No. 5, pp. 1580-1586.
Crisan, C. and Copaci, I. (2015), “The relationship between primary school children’s test anxiety and academic performance”, Procedia-Social and Behavioral Sciences, Vol. 180, pp. 1584-1589.
Danili, E. and Reid, N. (2005), “Assessment formats: do they make a difference?”, Chemistry Education Research and Practice, Vol. 6 No. 4, pp. 204-212.
Diep, A.N., Zhu, C., Struyven, K. and Blieck, Y. (2017), “Who or what contributes to student satisfaction in different blended learning modalities?”, British Journal of Educational Technology, Vol. 48 No. 2, pp. 473-489.
Eom, S.B., Wen, H.J. and Ashill, N. (2006), “The determinants of students’ perceived learning outcomes and satisfaction in university online education: an empirical investigation”, Decision Sciences Journal of Innovative Education, Vol. 4 No. 2, pp. 215-235.
Ferrer‐i‐Carbonell, A. and Frijters, P. (2004), “How important is methodology for the estimates of the determinants of happiness?”, The Economic Journal, Vol. 114 No. 497, pp. 641-659.
Ginns, P. and Ellis, R.A. (2009), “Evaluating the quality of e-learning at the degree level in the student experience of blended learning”, British Journal of Educational Technology, Vol. 40, pp. 652-663.
González-Gómez, F., Guardiola, J., Martín-Rodríguez, O. and Montero-Alonso, M.A. (2012), “Gender differences in e-learning satisfaction”, Computers & Education, Vol. 58 No. 1, pp. 283-290.
Gray, D., Ryan, M. and Coulon, A. (2004), “The training of teachers and trainers: innovative practices, skills and competencies in the use of e-learning”, European Journal of Open, Distance and E-Learning, Vol. 2004 No. 2, available at: www.eurodl.org/materials/contrib/2004/Gray_Ryan_Coulon.pdf
Grieve, R., Padgett, C.R. and Moffitt, R.L. (2016), “Assignments 2.0: the role of social presence and computer attitudes in student preferences for online versus offline marking”, The Internet and Higher Education, Vol. 28, January, pp. 8-16.
Ho, C.L. and Dzeng, R.J. (2010), “Construction safety training via e-learning: learning effectiveness and user satisfaction”, Computers & Education, Vol. 55 No. 2, pp. 858-867.
Hoferichter, F., Raufelder, D., Ringeisen, T., Rohrmann, S. and Bukowski, W.M. (2015), “Assessing the multi-faceted nature of test anxiety among secondary school students: an English version of the German test anxiety questionnaire: PAF-E”, The Journal of Psychology, pp. 1-23.
Joy, S. and Kolb, D.A. (2009), “Are there cultural differences in learning style?”, International Journal of Intercultural Relations, Vol. 33 No. 1, pp. 69-85.
Jung, I. (2012), “Asian learners’ perception of quality in distance education and gender differences”, The International Review of Research in Open and Distributed Learning, Vol. 13 No. 2, pp. 1-25.
Kelly, H.F., Ponton, M.K. and Rovai, A.P. (2007), “A comparison of student evaluations of teaching between online and face-to-face courses”, The Internet and Higher Education, Vol. 10 No. 2, pp. 89-101.
Klein, H.J., Noe, R.A. and Wang, C. (2006), “Motivation to learn and course outcomes: the impact of delivery mode, learning goal orientation, and perceived barriers and enablers”, Personnel Psychology, Vol. 59 No. 3, pp. 665-702.
Lemos, S. and Nueza, P. (2012), “Students expectation and satisfaction in postgraduate on-line courses”, ICICTE 2012 Proceedings, pp. 568-580.
Lipson, A., Epstein, A.W., Bras, R. and Hodges, K. (2007), “Students’ perceptions of Terrascope, a project-based freshman learning community”, Journal of Science Education and Technology, Vol. 16 No. 4, pp. 349-364.
Lu, H.P. and Chiou, M.J. (2010), “The impact of individual differences on e-learning system satisfaction: a contingency approach”, British Journal of Educational Technology, Vol. 41 No. 2, pp. 307-323.
Martín-Rodríguez, Ó., Fernández-Molina, J.C., Montero-Alonso, M.Á. and González-Gómez, F. (2015), “The main components of satisfaction with e-learning”, Technology, Pedagogy and Education, Vol. 24 No. 2, pp. 267-277.
Mello, J.A. (1993), “Improving Individual member accountability in small work group settings”, Journal of Management Education, Vol. 17 No. 2, pp. 253-259.
Mellor, T. (2012), “Group work assessment: some key considerations in developing good practice”, Planet, Vol. 25 No. 1, pp. 16-20.
Neemati, N., Hooshangi, R. and Shurideh, A. (2014), “An investigation into the learners’ attitudes towards factors affecting their exam performance: a case from Razi University”, Procedia-Social and Behavioral Sciences, Vol. 98, May, pp. 1331-1339.
Nicol, D. and Macfarlane-Dick, D. (2006), “Formative assessment and self-regulated learning: a model and seven principles of good feedback practice”, Studies in Higher Education, Vol. 31 No. 2, pp. 199-218.
Nikou, S.A. and Economides, A.A. (2016), “The impact of paper-based, computer-based and mobile-based self-assessment on students’ science motivation and achievement”, Computers in Human Behavior, Vol. 55, February, pp. 1241-1248.
Ozkan, S. and Koseler, R. (2009), “Multi-dimensional students’ evaluation of e-learning systems in the higher education context: an empirical investigation”, Computers & Education, Vol. 53 No. 4, pp. 1285-1296.
Paechter, M. and Maier, B. (2010), “Online or face-to-face? Students’ experiences and preferences in e-learning”, The Internet and Higher Education, Vol. 13 No. 4, pp. 292-297.
Peltier, J.W., Schibrowsky, J.A. and Drago, W. (2007), “The interdependence of the factors influencing the perceived quality of the online learning experience: a causal model”, Journal of Marketing Education, Vol. 29 No. 2, pp. 140-153.
Roach, V. and Lemasters, M. (2006), “Satisfaction with online learning: a comparative descriptive study”, Journal of Interactive Online Learning, Vol. 5 No. 3, pp. 317-332.
Rovai, A.P. (2004), “A constructivist approach to online college learning”, Internet and Higher Education, Vol. 7 No. 2, pp. 79-93.
Schmeeckle, J.M. (2003), “Online training: an evaluation of the effectiveness and efficiency of training law enforcement personnel over the Internet”, Journal of Science Education and Technology, Vol. 12 No. 3, pp. 205-260.
Scotland, J. (2016), “How the experience of assessed collaborative writing impacts on undergraduate students’ perceptions of assessed group work”, Assessment & Evaluation in Higher Education, Vol. 41 No. 1, pp. 15-34.
StataCorp (2015), Stata Statistical Software: Release 14, StataCorp LP, College Station, TX.
Struyven, K., Dochy, F. and Janssens, S. (2005), “Students’ perceptions about evaluation and assessment in higher education: a review”, Assessment and Evaluation in Higher Education, Vol. 30 No. 4, pp. 331-345.
Sun, P., Tsai, R., Finger, G., Chen, Y. and Yeh, D. (2008), “What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction”, Computers & Education, Vol. 50 No. 4, pp. 1183-1202.
Swan, K. (2001), “Virtual interaction: design factors affecting student satisfaction and perceived learning in asynchronous online courses”, Distance Education, Vol. 22 No. 2, pp. 306-331.
Teo, T. (2010), “A structural equation modelling of factors influencing student teachers’ satisfaction with e-learning”, British Journal of Educational Technology, Vol. 41 No. 6, pp. 150-152.
Tham, C.M. and Werner, J.M. (2005), “Designing and evaluating e-learning in higher education: a review and recommendations”, Journal of Leadership & Organizational Studies, Vol. 11 No. 1, pp. 15-25.
Thurmond, V.A., Wambach, K., Connors, H.R. and Frey, B.B. (2002), “Evaluation of student satisfaction: determining the impact of a web-based environment by controlling for student characteristics”, The American Journal of Distance Education, Vol. 16 No. 3, pp. 169-190.
Toetenel, L. and Rienties, B. (2016), “Analysing 157 learning designs using learning analytic approaches as a means to evaluate the impact of pedagogical decisionmaking”, British Journal of Educational Technology, Vol. 47 No. 5, pp. 981-992.
Volman, M. (2005), “A variety of roles for a new type of teacher: educational technology and the teaching profession”, Teaching and Teacher Education, Vol. 21 No. 1, pp. 15-31.
Wilkins, S., Butt, M.M., Kratochvil, D. and Balakrishnan, M.S. (2015), “The effects of social identification and organizational identification on student commitment, achievement and satisfaction in higher education”, Studies in Higher Education, Vol. 41 No. 12, pp. 2232-2252.
Yamazaki, Y. (2005), “Learning styles and typologies of cultural differences: a theoretical and empirical comparison”, International Journal of Intercultural Relations, Vol. 29 No. 5, pp. 521-548.
Yang, B., Wang, Y. and Drewry, A.W. (2009), “Does it matter where to conduct training? Accounting for cultural factors”, Human Resource Management Review, Vol. 19 No. 4, pp. 324-333.
Further reading
Jiménez Raya, M. (2012), “Exploring pedagogy for autonomy in language education at university: possibilities and impossibilities”, in Pérez Cañado, M. (Ed.), Competency-Based Language Teaching in the European Higher Education Area, Springer, New York, NY.