Using Spark

Different Self and Peer Assessment Implementations Using SPARKPLUS

The following examples discuss a number of different ways in which Self and Peer assessment has been implemented using SPARKPLUS.

  1. To Assess and Provide Feedback on Individual Work
  2. Benchmarking Exercise
  3. To Assess and Provide Feedback on Team Contribution

 

To Assess and Provide Feedback on Individual Work

Students use SPARKPLUS to assess their own and seven of their peers' submissions, rating each student's individual project concept, which has been chosen to meet a number of specified criteria.

The following is from "Assessment for learning: Using minimum assessment to maximise learning", Willey K and Gardner A. (2009), submitted to the ATN Assessment Conference: Assessment in Different Dimensions, 19-20 November 2009, Melbourne, Australia.

The activity consists of a series of distinct processes:

  1. Students are required to use SPARKPLUS to assess their own and seven of their peers' submissions, rating each student's individual project concept, which has been chosen to meet a number of specified criteria. This assessable part of the overall task is completed individually by students outside of class.
  2. In the following tutorial the group of eight students debates the merits of each individual submission (discussing their individual strengths and weaknesses) and collectively places them in order from best to worst, awarding a mark to each.
  3. Tutors then distribute the results from SPARKPLUS (radar diagram) and students are asked to reflect on any differences between the results produced by their individual assessments (SPARKPLUS) and those produced collectively in their peer group.
  4. The tutor then marks the best report from each group (as identified by the students) and determines the marks for the other reports using the weighting produced by SPARKPLUS.
  5. The order of the activities within this task means that students are required to do some individual thinking and engage with the assessment criteria for the project concept before meeting in the tutorial to discuss the submissions with their peers. This means that students come to class prepared, allowing discussions to quickly focus on areas where there is a difference of opinion. While not directly assessable, we specifically designed steps 2 and 3 to involve collaborative reflection, with the expectation that organising students to explore differences in their opinions and understanding will make a major contribution to their learning.

The motivation to actively participate in this activity operates on two levels. Groups are required to select one of the project concepts to work on for the rest of the semester, so choosing the concept that best fits the project constraints would both simplify their subsequent tasks and potentially provide them with the highest grade. More immediately, it is in the group's interest to correctly identify the 'best' concept to maximise their mark, as the tutor only marks the concept identified as the best by the group. The mark awarded to this concept caps those allocated to the remaining submissions, which are calculated in proportion to the ratings produced by SPARKPLUS.

Group radar diagram

Concept mark = (SPARKPLUS mark) * (Tutor mark for best submission) / (SPARKPLUS mark for best submission)
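The formula above can be sketched in code as follows; the function and data names are illustrative only, not part of the SPARKPLUS interface:

```python
def concept_marks(spark_marks, tutor_mark_for_best):
    """Scale each SPARKPLUS mark so the best submission receives the
    tutor's mark and the others follow in proportion (a sketch of the
    formula above; names are illustrative, not the SPARKPLUS API)."""
    best_spark = max(spark_marks.values())
    return {student: mark * tutor_mark_for_best / best_spark
            for student, mark in spark_marks.items()}

# The tutor marks only the best submission (here awarding 80);
# the remaining submissions are scaled proportionally.
marks = concept_marks({"A": 90, "B": 72, "C": 63}, tutor_mark_for_best=80)
# → {"A": 80.0, "B": 64.0, "C": 56.0}
```

Note that the tutor's mark acts as a cap: no student can receive more than the mark awarded to the group's best submission.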

The figure above shows a group radar diagram and a table reporting the SPA and SAPA factors and marks for each student, calculated from the self and peer assessments of the group members. The blue envelope represents the SAPA factors. Where this envelope exceeds 1 it indicates that the student rated their own submission higher than it was rated, on average, by their group peers. The red envelope represents the SPA factors, which indicate whether the quality of a student's individual submission was considered to be above or below the average of those marked by the group.
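To illustrate how such factors can be derived from the raw ratings, the sketch below uses one common formulation from the SPARK literature; the exact SPARKPLUS calculation may differ, so treat the formulas as assumptions:

```python
import math

def spa_sapa(ratings, member):
    """Sketch of SPA and SAPA factors (an assumed formulation; the
    exact SPARKPLUS calculation may differ).
    ratings[rater][ratee] is the rating one group member gave another."""
    members = list(ratings)
    # Total rating received by each member, and the group-average total.
    totals = {m: sum(ratings[r][m] for r in members) for m in members}
    avg_total = sum(totals.values()) / len(members)
    spa = math.sqrt(totals[member] / avg_total)
    # SAPA > 1 means the member rated themselves above their peers' average.
    peer_ratings = [ratings[r][member] for r in members if r != member]
    sapa = ratings[member][member] / (sum(peer_ratings) / len(peer_ratings))
    return spa, sapa

ratings = {
    "A": {"A": 4, "B": 3, "C": 3},
    "B": {"A": 3, "B": 3, "C": 4},
    "C": {"A": 3, "B": 3, "C": 4},
}
spa_a, sapa_a = spa_sapa(ratings, "A")  # SAPA > 1: A self-rated above peers
```

In the example, student A's total rating equals the group average (SPA of 1.0), but A's self-rating of 4 exceeds the average of 3 given by peers, so A's SAPA envelope would sit outside 1 on the radar diagram.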

   

Benchmarking Exercise

The following is from "Using Benchmarking to improve students' judgement and make assessment more student centred", Willey K and Gardner A. (2009), submitted to the 20th Australasian Assoc. for Engineering Education Conference, 6-9 December 2009, Adelaide, Australia.

The benchmarking mode enables an instructor to create an assessment of an exemplar piece of work, activity or task against which students can compare their own understanding and/or judgement. Instructors can choose from a number of predefined rating scales or create their own; however, for benchmarking a standard assessment scale is typically used, for example Unsatisfactory (Z), Pass (P), Credit (C), Distinction (D) and High Distinction (HD) (Figure 1).

Students are required to log on and enter their assessments by moving the sliders (orange bars) against a number of criteria (Figure 1). While we have found it useful to discuss the chosen criteria in class and post explanatory details online, to prevent screen clutter when the criteria are wordy we record only the heading of each criterion (e.g. Test Plan) in SPARKPLUS (Figure 1).

Benchmark assessment sliders

Figure 1: Students enter their assessments by moving the sliders to their chosen rating against each criterion.

In addition to entering their own assessments, instructors enter a written report explaining their marking against each criterion. A student's score is generated using a weighted mean squared error of the differences between their assessments and the instructor's. We recommend moderating the results by adjusting the scores to fit within specified boundaries.
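A minimal sketch of such a score, assuming equal-length rating vectors and per-criterion weights (the exact SPARKPLUS weighting is not specified here):

```python
def benchmark_error(student, instructor, weights):
    """Weighted mean squared error between a student's ratings and the
    instructor's (a sketch; SPARKPLUS's exact formula may differ).
    Lower means the student's judgement is closer to the instructor's."""
    total = sum(w * (s - i) ** 2
                for s, i, w in zip(student, instructor, weights))
    return total / sum(weights)

# Three criteria; the second criterion is weighted double.
err = benchmark_error([3, 4, 2], [3, 3, 3], [1, 2, 1])  # → 0.75
```

A raw error of zero means the student's judgement matched the instructor's on every criterion; the moderation step described below maps these raw errors onto actual marks.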

Benchmarking instructor set minimum result screen

Figure 2: Mark moderation: Instructor screen showing the setting of the minimum student mark for the exercise.

SPARKPLUS facilitates moderation by anonymously providing instructors with the student submissions that differ most from, and are closest to, their own. Figure 2 shows the screen used to set the minimum student mark for the exercise. The upper blue triangle shows the rating submitted by the student for each criterion; the lower orange triangle shows the rating submitted by the instructor. The instructor uses the orange bars to rate the student's judgement against each criterion. The total bar allows instructors to fine-tune their marking, encouraging a holistic approach (the individual criterion bars move proportionally, tracking the movement of the total slider). The maximum result is set in a similar way by grading the student submission that was closest to the instructor's assessment. The results for the remaining students are calculated according to the chosen formula (a number of different formulas are provided to accommodate different assessment objectives) by adjusting the raw mean squared error scores to fit within the specified moderation limits.
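One way to read the moderation step is as a linear rescaling of the raw error scores onto the instructor-set band. The sketch below assumes that formulation; SPARKPLUS provides several alternative formulas, so this is illustrative only:

```python
def moderate(raw_errors, min_mark, max_mark):
    """Map raw weighted-MSE scores onto the moderation band so the
    closest submission (lowest error) receives max_mark and the most
    distant receives min_mark (an assumed linear formula; SPARKPLUS
    offers a number of alternatives)."""
    lo, hi = min(raw_errors.values()), max(raw_errors.values())
    span = (hi - lo) or 1  # avoid division by zero if all scores tie
    return {student: max_mark - (err - lo) / span * (max_mark - min_mark)
            for student, err in raw_errors.items()}

# A was closest to the instructor, C furthest; B falls in between.
marks = moderate({"A": 0.2, "B": 0.75, "C": 1.3}, min_mark=50, max_mark=95)
```

Here the minimum and maximum marks correspond to the two submissions the instructor graded directly in Figure 2; everyone else is interpolated between them.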

Benchmarking student result

Figure 3: Student results screen reporting the student's overall score. In addition, the upper blue triangle shows the rating submitted by the student and the lower orange triangle the rating submitted by the instructor for each criterion. The feedback box explains the instructor's ratings.

Once the exercise is complete and the results are published, students may log on to receive their score/grade for the exercise. In addition, students are provided with feedback on their submission against each criterion. In the results screen the sliders display two triangles: the upper blue triangle shows the rating submitted by the student and the lower orange triangle the rating submitted by the instructor. A feedback box explains the instructor's ratings for each criterion (Figure 3).

Method

The benchmarking activity consisted of a series of distinct processes:

  1. Students were provided with a Sample Requirement Specification produced by a student group from a previous semester. After discussing the marking criteria each student individually assessed the report using the benchmarking facility of SPARKPLUS.
  2. In the following tutorial each project group of four students discussed their individual marking of the report and re-assessed it collectively against the criteria.
  3. Two project groups then combined and discussed each group’s marking of the report, reflecting on any differences and collectively re-marking it.
  4. Tutors then discussed how the academic had marked the report.
  5. After the tutorial students logged on to SPARKPLUS to compare their individual marking to the academic's assessment of the report for each separate criterion and to read the academic's comments. In addition, they could view their mark, calculated based on how close their individual assessment was to the academic's.

The order of the processes in the benchmarking activity means that students were required to do some individual thinking and engage with the assessment criteria for the report before they came to the tutorial and discussed the report with their peers. This meant that students knew the details of what they were coming to class to talk about and discussions could relatively quickly focus on areas where there was a difference of opinion.

The motivation to actively participate in the benchmarking activity was that students in their project groups were required to write a Requirement Specification report for their group's product. Furthermore, students are encouraged to explore the different opportunities to learn: they may choose to teach others and in the process improve their own understanding, or alternatively be taught by their peers to address gaps in their learning. In our experience most students in collaborative exercises adopt a combination of these learning methods, but we strongly encourage those who feel they have nothing to learn from their peers to take the opportunity to teach.

   

Team Contribution

Students use SPARKPLUS to rate their own and their team peers' contributions to each stage of a project. The SPARKPLUS SPA factors are used to produce individual marks by moderating the mark for the group's submission.
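A common reading of this moderation is individual mark = group mark x SPA factor. The sketch below assumes that form; the cap at 100 is an assumption for illustration, not a documented SPARKPLUS rule:

```python
def individual_marks(group_mark, spa_factors):
    """Moderate the group's mark by each member's SPA factor
    (an assumed formulation; the 100-mark cap is also an assumption)."""
    return {member: min(group_mark * spa, 100.0)
            for member, spa in spa_factors.items()}

# A contributed above average (SPA > 1), C below average (SPA < 1),
# so their marks move above and below the group mark of 70.
marks = individual_marks(70, {"A": 1.1, "B": 1.0, "C": 0.9})
```

Under this scheme a member whose SPA factor is 1.0 simply receives the group mark unchanged, so the group total is redistributed rather than inflated.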

For early-stage students and/or students who have had little experience of group work, the group's radar diagrams and a table of categorised factors are distributed to each group and discussed in their next tutorial. Groups are guided through a feedback process.

This process begins with students sharing positive feedback, focusing not just on what their peers did well but also on what they learnt from their peers. This is followed by a process of self-evaluation in which students share with their group what they have learnt or discovered about their strengths, weaknesses or performance from the exercise. Students are encouraged to identify how they could improve their own performance and how they would approach the task differently if they had to do it again.

The final stage in the feedback process is the provision of constructive criticism to team peers. Students are asked to suggest how others in their group might have approached their tasks differently to achieve a better group result, how aspects of their behaviour affected the team and the benefits of changing that behaviour, and to reflect on how team peers could have learnt more from the process. Furthermore, students are asked to share what they consider to be the weaker aspects of a peer's contribution and how these could have been improved.

The in-class discussion concludes with teams agreeing how to improve their overall team and individual performance for the remaining parts of the project and/or in future group work.

Postgraduate and late-stage undergraduate students, by contrast, are often expected to manage their own team practices and feedback; they are left to view their results individually and independently take appropriate action to rectify or resolve any team issues.