University Students’ Perceptions of E-Portfolios and Rubrics as Combined Assessment Tools in Education Courses

This article presents a study with a twofold research aim: (a) to ascertain university students' perceptions of two combined assessment tools (e-portfolios and formative rubrics) and (b) to identify whether students' perceptions of the use of e-portfolios differed, and what factors favored their acceptance. The data gathering method was a questionnaire administered to 247 students on the Education Degree at the University of Barcelona. Regarding our first aim, it was confirmed that although the portfolio and rubrics were used in combination, students viewed each of them independently. Regarding the second aim, we identified four groups and a range of factors that may explain the varying perceptions of the portfolios and rubrics. The favorable factors were, first, greater teacher experience in using the digital portfolios; second, continuous technical support for their use; third, their having greater weight in assessment; and fourth, smaller class sizes.

Many authors and studies (oriented toward both the institutional and teaching points of view, and including those investigating students' perceptions) concur that e-portfolios have considerable advantages for students in developing transferable skills, mainly reflection, critical thinking, learner autonomy, professional development, and the ability to organize and self-regulate the learning process (Cambridge, 2010; Heinrich, Bhattacharya, & Rayudu, 2007; Lopez-Fernandez & Rodriguez-Illera, 2009; Rodrigues, 2013; Rubio & Galván, 2013; Sánchez Santamaría, 2012; Zubizarreta, 2009). Also worth noting are the development of digital competences and collaborative competences, such as peer feedback in the use of net portfolios or shared portfolios (Barberà & Martín, 2009; Brandes & Boskic, 2008; van Aalst & Chan, 2007). Further, e-portfolios can boost motivation in learning (Bolliger & Shepherd, 2010; Hinett, 2002) and greatly facilitate the acquisition, assimilation, and accumulation of knowledge (Chang, Liang, Tseng, & Tseng, 2014).
But the benefits of e-portfolios are not without controversy, especially from the students' point of view, and this can cause problems in their implementation (Tzeng, 2011), due to the workload involved, the cost-benefit ratio in terms of learning (Oblender, 2002; Roblyer, Davis, Mills, Marshall, & Pape, 2008), and other factors. Barberà (2005) found that it was not easy for students to accept an e-portfolio culture at the outset because it requires time to set in place, both for the portfolios themselves and for the digital platform. Because the use of e-portfolios is often sporadic rather than continuous across various years, this may also influence not only their long-term effects but also students' perceptions of them (Salomon, Perkins, & Globerson, 1992; Wetzel & Strudler, 2006). The perception of usefulness and ease of use has been shown to be influential in acceptance in a study by Chen, Mou-te Chang, Chen, Huang, and Chen (2012) based on the Technology Acceptance Model. Students' previously acquired skills in using tools needed for the e-portfolio (writing abilities, organizing and representing ideas and analyses) also seem, according to Wray (2007), to affect their acceptance of it. Formative and technical backup are also key elements in the portfolio's success, and the importance of the teacher's role in its design and in technical problem solving has been underlined (Delandshere & Arens, 2003; Tosh, Light, Fleming, & Haywood, 2005). The use of e-portfolios for individual-work and group-work portfolios in multicollaborative environments requires different methodological strategies (Parada, Pardo, & Delgado-Kloos, 2011; Romero-Cerezo, 2008) and different workspace structures (Parada et al., 2011) and produces different effects on students (Al-Qadi & Smadi, 2014).
On the other hand, there appear to be no appreciable differences between more and less technologically competent subjects (Shepherd & Bolliger, 2011); IT skills help, but they are not decisive in the success of the e-portfolio.
Many authors discuss the positive views that teachers and students have on the use of rubrics in a range of contexts and disciplines (Jonsson, 2014), while to a lesser extent, there are studies showing a connection between rubrics and higher performance (Andrade & Du, 2005; Andrade et al., 2010; Kocakülah, 2010; Popham, 1997). Panadero and Jonsson (2013), in a review of studies on the use of rubrics in formative assessment, discuss ways in which these can help improve students' performance: increasing transparency in assessment criteria, reducing anxiety, aiding the feedback process, improving self-efficacy, and supporting self-regulation through the revision of assignments before delivery (Steffens & Underwood, 2008). In the same line, other studies also indicate the advantages of rubrics in promoting consistency in students' progress (Andrade & Du, 2005; Cebrián, 2007; Jonsson & Svingby, 2007; Powell, 2001; Schneider, 2006) and in developing competences (Stevens & Levi, 2005; Torres & Perera, 2010).
From the teachers' point of view, rubrics promote the development of reflective practice, provide them with more information on its effectiveness, help them to offer better quality feedback to their students, serve as a support for students in assessing their own work, and boost students' engagement in tasks (Jonsson & Svingby, 2007; Schamber & Mahoney, 2006). However, resistance from teachers to the use of rubrics has also been found (García-Ros, 2011; Reddy & Andrade, 2010), as well as doubts about their utility on the part of students. Thus, various studies affirm that students may perceive rubrics more as a tool for satisfying their teachers than as a representation of standards and quality criteria to take into account in their work (Andrade & Du, 2005) or that students can doubt their usefulness for self-assessment and better interpretation of feedback (Baron & Keller, 2003). Therefore, the institutional efficacy of rubrics may be seriously affected if, for example, students think that they do not include the key criteria for carrying out a task, that they are not useful for improving the outcomes of their work, or that they do not enable them to assess the quality of their work properly. Other studies confirm the importance of involving students in developing rubrics to ease their comprehension and application (García-Ros, 2011; Huba & Freed, 2000; Stix, 1997; Taggart, Phifer, Nixon, & Wood, 2001).

Aims
Up to now, there has been a plentiful literature on the technological, institutional, and didactic factors conditioning the use and adoption of the e-portfolio among students, but the varying profiles of university students, which mean that the use of portfolios is not universally valid, have not been studied. Nor is the rubric a tool fully accepted among students. For these reasons, the aims of this study were as follows: in the first place, to determine students' perceptions of two combined assessment strategies: a system of e-portfolios and formative rubrics.
Second, to identify whether students differed in their views of the use of the e-portfolios and rubrics, and what factors favored the acceptance of these tools.

Method

Design
The study was carried out using a quantitative, descriptive, and retrospective survey (Torrado Fonseca, 2004).
The procedure followed was to choose a group of modules from the Education Degree at the University of Barcelona in which the e-portfolio and the rubric for formative assessment were used during the second trimester of the 2012-2013 academic year. In all of the selected modules, part of the e-portfolio consisted of answering some questions to explain how students chose the topic or subject of an essay, and they knew that this decision-making process was going to be assessed with a detailed rubric (see the example rubric in Table 1 used to assess the task in Figure 1). All the teachers of the chosen groups had received training in the use of these tools, and at the beginning of their courses, all the participating students also received specific training in the digital platform used for the digital portfolio at the University of Barcelona, Digital Folder (Rubio & Galván, 2013). According to module requirements or their personal interest, students completed an individual-work portfolio, a group-work portfolio, or both.
The tool used for gathering data was a purpose-designed 43-item questionnaire probing students' perceptions of the use of the digital portfolio and assessment using rubrics. It was administered in June of the 2012-2013 academic year, face-to-face in the classroom (guaranteeing the confidentiality of individual replies), to the following groups:
- four groups on the Theory and Practice of Educational Research (TPER) module (two in the morning and two in the afternoon),
- two groups on the IT Applied to Educational Research (ITAER) module (one in the morning and one in the afternoon), and
- one morning group on the Tools and Strategies for Information Gathering (TSIG) module.
To address the aims of this study, we consider here the following parts of the questionnaire: a set of questions characterizing the sample, and two scales: (a) students' evaluation of the use of e-portfolios, divided into two subdimensions (motivation and reflection on learning) with six and nine items, respectively, and (b) students' evaluation of assessment using rubrics (three items). The scalar items featured scores from 1 to 6 (from completely disagree to completely agree). The reliability study (Cronbach, 1951) showed an internal consistency of .96 and .95, respectively, for the scales of each dimension (according to Nunnally, 1978, values greater than .7 are acceptable for research instruments).
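The internal-consistency coefficient reported above can be illustrated with a short sketch. The following Python code is our own illustration, not the SPSS procedure actually used in the study, and it runs on simulated 1-6 Likert responses rather than the real questionnaire data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scale scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-6 Likert responses: a latent tendency per respondent plus item noise
rng = np.random.default_rng(0)
base = rng.integers(1, 7, size=(100, 1))
noise = rng.integers(-1, 2, size=(100, 6))
items = np.clip(base + noise, 1, 6).astype(float)
print(round(cronbach_alpha(items), 2))
```

Because each simulated respondent's items share a common latent score, the resulting alpha is high, comparable in spirit to the .96 and .95 values reported for the two scales.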

Participants
The sample consisted of 247 students: 66% from TPER, 26% from TSIG, and 8% from ITAER. The majority were women (82.7%), and the average age was 21. It was found that 79.3% also took part in nonacademic activities, to which they devoted a weekly average of 12.3 hours. Also, 66.8% already had experience of paper portfolios, 17.8% were aware of the University of Barcelona portfolio (Digital Folder), and 22.7% had used other portfolios. Sixty-two percent of the students considered the tool to be complex, which broadly coincided with the percentage of students who had no previous experience of either digital or paper portfolios; moreover, 74.2% deemed their initial training in using the platform insufficient, including even some who had already used portfolios.
Regarding the modules chosen for the study, each had its own particular characteristics. On the optional module (ITAER), the group size was smaller, numbering 40 students, while the other modules featured groups averaging 60. The ITAER module teacher, unlike those of the other modules, had previous experience with digital portfolios; all modules had continual technical support for using the platform. Finally, the assessment weighting of the portfolio was 30% in TSIG, 55% in TPER, and 100% in ITAER.

Data Analysis
For the first aim of the study, we used a descriptive statistical analysis of the results, reporting the usual indices of frequency, central tendency, and dispersion.
For the second aim, we used a two-step cluster analysis, which enabled us to automatically choose the optimum number of clusters (Bacher, Wenzig, & Vogler, 2004). Because all the variables in the procedure were continuous, the result of the two-step cluster analysis, once the optimum number of clusters was known, was validated by applying K-means clustering (MacQueen, 1967) to the same data. To calculate the index of agreement between the two classifications (two-step and K-means), we applied the Kappa coefficient as an estimate of interrater agreement.
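As a sketch of this agreement check, the following Python code (our illustration; the study used SPSS, and the label vectors here are hypothetical) computes Cohen's kappa between two cluster assignments:

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa between two label vectors of equal length."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.unique(np.concatenate([a, b]))
    n = len(a)
    # Confusion matrix between the two classifications
    cm = np.array([[np.sum((a == la) & (b == lb)) for lb in labels] for la in labels])
    p_o = np.trace(cm) / n                          # observed agreement
    p_e = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical two-step and K-means assignments for 247 cases that mostly agree
two_step = np.array([0] * 34 + [1] * 49 + [2] * 81 + [3] * 83)
k_means  = np.array([0] * 29 + [1] * 54 + [2] * 81 + [3] * 83)
print(round(cohen_kappa(two_step, k_means), 2))
```

Note that K-means cluster ids are arbitrary, so in practice the two labelings must be matched (e.g., via a confusion matrix) before kappa is computed.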
All the earlier operations were carried out using the statistical package SPSS, version 18.

Rating of the Use of E-Portfolios
Motivation by the portfolio and learning subdimension. In general, students rated the e-portfolios poorly. As Table 2 shows, the average rating was 2.27 on a scale of 1 to 6.
If we analyze each item on the scale in detail, we obtain similar data. The use of the portfolio did not seem to have any great impact on students' motivation or on their learning. The scores they gave in this dimension are noticeably low, especially regarding (a) the intention to keep using the portfolio in future and (b) whether the e-portfolio promoted their desire to learn.
Transferable skills subdimension. The scores in the area of transferable skills boosted by the e-portfolios are significantly below average for the scale, with writing skills the lowest of all (Table 3).

Evaluations of Assessment and the Use of Rubrics
The students' evaluations of module assessment and the use of rubrics were higher than those for the e-portfolio. The average obtained for this scale was above the theoretical mean (3.6 out of 6), as Table 4 shows.
As in the previous section, the scores for the individual items on this scale were similar to the scale average. Students deemed the use of rubrics in assessment useful. More specifically, the rubrics helped them to understand assessment better and to gain awareness of the skills they needed to develop. In general, the students judged that rubrics were a positive tool in formative assessment, as they aided learning.

Grouping Opinions on the e-Portfolio and Rubrics
The two-step cluster analysis of the two scalar variables (scores for the e-portfolio and scores for assessment and the rubrics) identified four high-quality clusters, as shown in Figure 2. The percentages of students classified in each cluster differed, and the cluster-size quotient (largest to smallest) was 2.86. The two-step analysis was applied four times, reordering the cases randomly as the test requires. In all cases, similar results showed good-quality clusters (mean profile cohesion higher than 0.5). These four clusters represent four distinct profiles of student perceptions of the e-portfolio and assessment using rubrics. The composition of the groups was as follows (Figure 3):
- Favorable group, made up of 29 students, giving a high score to both the e-portfolio and assessment using rubrics,
- Moderate group, made up of 49 students, giving a middle score to both the e-portfolio and assessment using rubrics,
- Controversial group, made up of 81 students, who despite giving the e-portfolio a low score, rated assessment using rubrics more highly, and
- Unfavorable group, made up of 83 students, giving a low score to both e-portfolios and assessment using rubrics.
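The cluster-size quotient reported above is simply the ratio of the largest to the smallest cluster, which can be verified from the four group sizes just listed:

```python
# Group sizes of the four clusters described above
sizes = {"favorable": 29, "moderate": 49, "controversial": 81, "unfavorable": 83}
quotient = max(sizes.values()) / min(sizes.values())
print(round(quotient, 2))  # 83 / 29 rounds to 2.86
```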
This grouping was validated with a K-means cluster analysis, from which similar results were obtained. Comparing the subject classification obtained from the two-step cluster analysis with that obtained from the K-means cluster analysis (Table 5), we obtained a Kappa value of .81, which, according to Altman's (1991) classification, represents very good agreement.
Thus, these four profiles are valid. In the following section, we describe the features of each in more depth, in line with participants' contextual variables and the specific features of the modules and groups in which the experiments in assessment and e-portfolios were carried out. In particular, we highlight the statistically significant differences, summarized in Table 6. There it can be seen, for example, that the favorable cluster is characterized by students with an average age of 24 (different from the other clusters); 24.1% of its students had teachers experienced in e-portfolios; the average weighting of the portfolio in module assessment was 64.14%; the average class size was 55.2 students; 24.1% of students had continuous technical support; and 31% of the students in this cluster completed group portfolios, 6.9% completed individual portfolios, and 62.1% completed both.
Looking into the features of the profiles we had outlined, we found the significant factors to be students' age, class size, teachers' experience, the assessment weighting of portfolios, whether students had continuous support for the e-portfolio, and whether they completed individual or group-work portfolios. The nonsignificant variables were self-perceived competence from previous experiences of portfolios, pedagogical support during the module, time spent, and self-perception of group participation and collaboration. In the following, we go into more detail on these relations.
Favorable group. This is the smallest cluster, with 29 students. This group gave a high score to both the e-portfolio and assessment using rubrics.
In this cluster, we found the oldest students (F = 8.251, p = .000). In fact, a small group of older students on the Education Degree was located specifically in this cluster.
Class sizes among students in this cluster were the smallest in the sample (F = 5.747, p = .001).
These students had the highest weighting of the portfolio in assessment (F = 10.765, p = .000).
In addition, this cluster contained a higher number of students with continuous support for the digital portfolio (chi-square = 16.4, p = .001).
Finally, we should note that students in this cluster more often had teachers with experience in the digital portfolio (chi-square = 16.35, p = .001).
Moderate group. The group we have called moderate was made up of 49 students, giving a moderate score to both the e-portfolio and assessment using rubrics. Its outstanding feature is that it is the group with the highest number completing both individual and group portfolios at the same time.
Controversial group. The group we have called controversial, while rating the digital portfolio poorly, gave a high score to assessment with rubrics. It is made up of 81 students whose outstanding feature was that their portfolios had a lower weighting in assessment (F = 10.765, p = .000).
It is also worth noting that, as Table 6 shows, the students in this cluster are those who most frequently completed only group portfolios (chi-square = 25.671, p = .000).
Unfavorable group. Finally, the unfavorable group is the largest cluster: 83 students, giving a low score to both the digital portfolio and assessment with rubrics.
This group is characterized by having a lower number of students with continuous support for the e-portfolio over the academic year (chi-square = 16.4, p = .001).
It includes a higher proportion of students completing only individual portfolios (chi-square = 25.671, p = .000).
In addition, we observed a tendency to be the youngest cluster, with lower weighting of the portfolio in assessment, and with teachers less experienced in using e-portfolios.
When the opinion clusters are arranged in a quadrant diagram (Corvalán, 2011) according to the score of the portfolios (y) and of assessment using rubrics (x), an imbalance in group distribution is clearly in evidence (see Figure 4).
There is a tendency toward scoring the portfolio and the rubrics equally, as the diagram shows, from (−x, −y) to (+x, +y). This may represent a relationship between opinions on the e-portfolio and on the rubrics, although this tendency is upset by the controversial cluster, with its opposing scores for the portfolio and the rubrics (−x, +y). The presence of this group blurs the direct relation between opinions on the two tools.

Conclusions
In terms of the first aim of this study, we drew the conclusion that students had differing perceptions of the portfolio and the rubrics. It was confirmed that their perceptions of the portfolio and the rubrics were independent, even though these tools were jointly applied: While two thirds of students tended to score the two tools similarly, a third gave them opposing ratings.
Briefly, we found that students judged that the e-portfolio had little impact on their motivation to learn or to continue using it, or on its usefulness in boosting transferable skills. Turning to the rubrics, we observed that students found them useful, specifically in that the rubrics helped them both to understand assessment better and to become more aware of competences.
In terms of our second aim, four groups were identified, along with various factors that may explain the differing ratings given to the portfolio and the rubrics, as we explain later.
There are four contextual factors favoring students' positive perceptions on the combined use of portfolios and assessment using rubrics on their modules. These are (a) greater teacher experience in using the digital portfolio, (b) continuous technical support for the digital portfolio, (c) greater weighting of the portfolio in assessment, and (d) smaller class size.
As for personal factors, we found that only student age was a differentiating variable among opinions about acceptance of the digital portfolio and rubrics. The most favorable group was the oldest (24 years old, compared with the average of 21).

Discussion
The results obtained tend to concur with previous studies. In this sense, we should note the low opinion students have of e-portfolios and the possible factors associated with this, to which various authors have drawn attention. Thus, as some of these scholars have found (Barberà, 2005; Valero, Aramburu, Baños i Díez, Sentí, & Pérez, 2007; Wray, 2007), these first experiments with portfolios are not very encouraging, especially in students' first years of using them, as is the case in our study. According to Wray (2007), the frustrations shown by students in these first years are mainly due to their confusion in selecting material and organizing the portfolio, their inability to complete the work in the requisite time, and their lack of clarity on the purposes of the portfolio. In addition, their perception of the ease or difficulty of the system has a bearing on their acceptance of portfolios (Chen et al., 2012), the difficulty of the platform being the main aspect students found unsatisfactory in our case. Wray (2007) suggests that these issues can be addressed by instructing students in the criteria for selecting and organizing portfolio contents, and in how to plan their time and activities, and by providing examples showing the process to be followed. Students who receive specific advice on how to build and use the portfolio formulate their learning needs better, choose learning tasks more appropriately, complete practical tasks more thoroughly, and obtain better results than students who only receive feedback (Kicken, Brand-Gruwel, van Merriënboer, & Slot, 2009). In the same line, Delandshere and Arens (2003) highlight the importance of the teacher's role in designing the portfolio and even in solving technological problems. Students need guidance when working on their portfolios and in addressing questions and problems stemming from them.
In addition, planning the teaching-learning process to coincide with the required competences is essential in achieving valid results in developing those competences. In our experience, teachers who are novices in the use of portfolios in all probability influence their application negatively. Teachers need time to adapt to the use of portfolios and should seek the best strategies for putting them into practice and motivating their students, as well as bringing learning activities into line with the competences best boosted by the tool (Salomon et al., 1992); this requires a certain amount of experimentation, over more than one academic year.
Turning to factors that can make learning with e-portfolios effective, a recent study by Castaño Sánchez (2014) highlights assessment methods and training in using the tool, among other aspects. These findings coincide with our results: On one hand, students considered that they needed more training in using the portfolios and that the training they had received was lacking; on the other, students in classes where the portfolio was given greater weight in assessment also rated the tool more highly, along with its ability to boost competences.
Lastly, our study confirms that feedback using rubrics is highly valued for its ability to give an overview of the complex picture of students' work, and as a guide to students' achievements (Nordrum, Evans, & Gustafsson, 2013). As the participants in our study stated, rubrics are useful for promoting awareness of competences and for making assessment of these more transparent, as previous studies have also found (Allen & Tanner, 2006; Navarro, Ortells, & Martí, 2011; Raposo & Martínez, 2011).

Implications for Teaching Practice
The main finding of this study is the delineation of four groups of students according to two variables: their perceptions of the use of portfolios and of assessment with rubrics.
The results of this study have important implications for educational practice, and on this basis, we would underline the need to take the educational context into account. In particular, we would recommend the following:
- Improving teachers' skills with the e-portfolio platform,
- Improving e-portfolio teaching methods,
- Assigning greater weight to the portfolio in assessment,
- Providing students with continuous technical support for the portfolio throughout their course, and
- Prioritizing group-work over individual-work portfolios when students first begin to work with them.
In addition, we would recommend maintaining the use of rubrics combined with portfolios because this can sustain a positive initial effect in the process of innovation. In our study, the most highly rated item was that which argued that rubrics enhanced the transparency of assessment (they were useful for understanding assessment better). More strategies are needed to reduce initial resistance to the portfolio, and the rubrics can be one of these.

Limitations of the Study
This empirical study has some limitations. In the first place, the sample consisted solely of undergraduate modules at the Faculty of Education (University of Barcelona), where rubrics had been applied previously and where students' experience in using the portfolio was relatively recent. We would recommend expanding this study to include other institutions to avoid the limitations of this sample. Second, the inclusion of students using both individual and group-work portfolios introduces some uncontrolled variables, from the design of classroom activities and strategies to students' resistance to group work as part of assessment; this calls for following several recommendations for implementing group work, such as recognition of effort, appropriate group size, and incentives to deter free-riding within work teams (Davies, 2009). Third, the factors affecting users' behavior are complex and diverse, and important variables that were not taken into account in this research may influence students' perceptions (Bolliger & Shepherd, 2010): types of group interaction, students' personal motivations in taking the course, their capacity for self-regulation, and so forth. These variables may define new groupings of students, and it would therefore be advisable to take them into account in future studies.
group MideMe. Her scientific career is focused primarily in the areas of IT and teaching innovation, e-learning evaluation, and IT and gender.
Ruth Vilà-Baños is a professor at the University of Barcelona, specializing in research methods and diagnosis in education. She gives courses on statistical analysis applied to educational research. She is a member of the research group in Intercultural Education and of the teaching innovation group MideMe. Her scientific career is focused primarily on the areas of intercultural education and educational research.