Developing and growing …

AFL-Flower

Cheryl’s PGCAP assessment archive …

Cheryl’s Reflections – below are my main reflections for assessment; however, please note that I refer/link to additional reflections (including literature) throughout:

AFL (Sept 2012) for CPD

LTHE (Jan 2012)

Cheryl’s Categories (see left menu) – throughout the PGCAP I have categorised my reflections according to the UK Professional Standards Framework (PSF) where I feel there is evidence of competence and engagement.

Cheryl’s Tags (see left menu) – these indicate the various topic areas covered throughout all my reflections.

PGCAP on Flickr – happy memories
http://www.flickr.com/photos/pgcap/

PGCAP on YouTube – a shared experience
https://www.youtube.com/user/pgcapsalford

PGCAP on Slideshare – looking back
http://www.slideshare.net/academicdevelopment/tag/pgcap

PGCAP on Twitter – keeping in touch
https://twitter.com/pgcap

PGCAP on Scoop.it – keeping fresh
http://www.scoop.it/t/pgcap

 


Individual reflection (AFL)

Throughout the Postgraduate Certificate in Academic Practice (PGCAP), I was encouraged to use various models to facilitate my reflective practice (Gibbs, 1988; Moon, 2004). When I first started to use such models, they really helped me to focus my thoughts. It’s interesting that I now feel relatively comfortable and confident in reflective writing, no longer needing to explicitly structure my reflections using such models, although the questions are continuously in my head – progress, I think!

Why AFL, when I’ve already done the PGCAP!

As a lifelong learner, I’ve never needed much of an excuse to participate in continuous professional development (CPD). However, my main motivation for joining the Assessment and Feedback for Learning (AFL) module after completing the PGCAP was strongly linked to my role as Academic Developer.

In the past, my role focused on supporting academic colleagues to use learning technologies to enhance their learning, teaching, and assessment practice; more recently, however, the role has broadened to supporting learning and teaching in Higher Education (HE) more generally, including curriculum and assessment design. AFL, I felt, was an excellent development opportunity to strengthen my ability to support colleagues in their academic practice as a whole, rather than specialising in Technology-Enhanced Learning (TEL).

Improving staff and student perceptions of assessment and feedback is a strategic goal identified locally within the University’s Learning and Teaching Strategy (University of Salford, 2011), aligned with national initiatives to transform assessment in UK HE (HEA, 2012); staff development in this area is therefore a high priority. By being part of the AFL community, and by listening to the ‘stories’ of colleagues from various contexts across the University, I can better understand the ‘feelings’ and ‘beliefs’ of my academic colleagues relating to their assessment and feedback practice.

Before joining the PGCAP as a ‘student’, I was part of the PGCAP programme team as a tutor on the Application of Learning Technologies (ALT) module. Whilst I really enjoyed this aspect of my role, I valued the importance of experiencing the whole PGCAP learning journey myself, and of course achieving the PGCAP certification and Fellowship of the Higher Education Academy (FHEA). As the ALT module was my first tutoring experience, I was new to assessment and feedback practice, and felt that I had a lot to learn with regard to marking and providing feedback. Although I was often praised for being very helpful and constructive in my feedback, I did tend to give lots of it following submission (taking a long time to mark!), and that was an area I wanted to explore – how much feedback should I give, and when and how should I give it? I also experienced disparity between my marking and the marking of others within the module team (highlighted during moderation), and so wanted to explore – how subjective could I be, how much and why did I need to ‘stick to a marking grid’, and how specific does a marking grid or ‘rubric’ need to be? Although the PGCAP started to explore this, I felt there was more to learn – hence my further interest in AFL.

AFL – the beginning

It seems not so long ago that I was at the beginning of a new learning journey, starting out as part of a small group all committed to spending the next three months sharing reflections, values, and experiences of assessment and feedback through a process of Problem-Based Learning (PBL). In some ways it was a continuation of the learning journey from the core module with familiar faces, which was ‘comforting’, although there were some new faces (including our tutor), which provided an element of ‘freshness’ and new insights. Having already experienced wonderful learning through the PGCAP, and passed its assessment, I felt more ‘relaxed’ about the assessment on AFL and able to focus more on the learning.

The first reflective task was the ‘feel-o-meter’; first defining themes relevant to my own learning journey (see post Back behind the wheel of AFL), and then deciding where I felt my current knowledge and experience was on the scale. It was encouraging to be able to map out my own ‘learning labels’ most relevant to my context.

Cheryl's Wheel of AFL - Week 1

Cheryl’s Wheel of AFL – Beginning

AFL – the PBL journey

At the start, I was feeling relatively comfortable with learning through PBL, and quietly excited. Throughout the PGCAP I’d participated in various learning activities which adopted a PBL approach, and I’d started to reflect on how I might embed this into my own practice (see post Reflection 6/6 – Developing my PBL practice). Experiencing a PBL curriculum linked to assessment first hand, as a ‘student’, was a great opportunity to continue this reflection.

I felt my familiarity with PBL was an advantage, in that I wasn’t too concerned about learning in this way, and as my peers on the module were also continuing their PGCAP I assumed that we were all at the same place in terms of our understanding. Reflecting now, however, I don’t think this was the case. Half of the group, who were part of my core module cohort, may have recalled the brief introduction to PBL, but they may not have furthered their learning through reflections for assessment; the other half of the group weren’t part of my previous cohort, and their core module syllabus may therefore have been different. Also, my peers and I were from various contexts, and those from the science disciplines may have been less familiar with such approaches than those from health and/or educational disciplines. This unfamiliarity with PBL, I felt, contributed to a slow start from my PBL group, and looking back, perhaps I should have made more of an effort to guide my peers along a more structured PBL process (Barrett and Cashman, 2010), although I also didn’t want to appear the ‘know-all’ of the group.

7 Step PBL Process Guide (Barrett and Cashman, 2010)
Image source:
http://www.ucd.ie/t4cms/ucdtli0041.pdf

I realise that PBL can be adapted to suit the given context, and that sometimes it’s unnecessary to impose a tight structure. Although I agree that there should be flexibility and agreement between group members, I think that proposing a model to work with would have helped everyone’s understanding. Perhaps an early PBL activity (not linked to assessment) or pre-induction readings/discussion would have helped. One of the case studies in A Practitioners’ Guide to Enquiry and Problem-based Learning (Barrett and Cashman, 2010) shows an example of effective staff development for lecturers in PBL, and is an excellent resource for practitioners. Whilst there’s no time for extensive staff development in a ten-week module, I’m sure more could be done to ensure the cohort is relatively comfortable with the process before starting the scenarios linked to assessment.

Of course, there are some aspects of this particular cohort that made the PBL scenarios particularly challenging. Best practice guidelines on PBL, or similar approaches, suggest that a PBL group has 5-8 members, and that practical roles (e.g. Chair, Reader, Timekeeper, Scribe) in addition to scenario-specific roles (e.g. Tutor, Student, School Manager) are assigned (Barrett and Moore, 2011). However, the small cohort of six meant that two groups of three were created, making it difficult to assign roles – everyone had to do everything, which made things very intense and time consuming. On reflection, I think it would have been preferable to have one group of six. However, the added (perhaps unnecessary) complication of peer assessing another group’s learning meant that at least two PBL groups needed to be in place.

Also, there were the practical frustrations of varying study schedules (including pressure to work over the Christmas break), and the feeling of there being too much to do, with varying opinions amongst the group on which particular problems/solutions to focus on. The scenarios each gave a list of ‘possible topics’ and initially we were trying to un(cover) them all (constrained by a word limit). Once we’d established, with much appreciated guidance from our tutor as ‘facilitator’, that we didn’t need to look at everything, we then seemed to spend too much time discussing which topic to focus on. For example, the scenarios suggested looking at technology-enhanced assessment (TEA); however, I was less keen, considering this was something I was already familiar with through my own practice, and therefore not an area for my personal development.

Another frustration was the varying preferences relating to how we were to present the ‘solutions’. I feel that with everyone trying to contribute to the same thing, the scenarios became ‘messy’ and ‘inconsistent’ in terms of writing style, referencing, and media input. Perhaps if we’d taken the time to identify skills within the team we may have produced something more ‘polished’.

On the positive side, our tutor was a real help, in particular during the tutor input sessions, which gave us, as a whole cohort, the opportunity to discuss themes related to each scenario across various contexts – science, engineering, life sciences, journalism, education, health and social care. Our tutor was open to what we discussed, whilst also keeping us ‘on track’ if we moved off topic. Looking at the PBL literature, that’s exactly the nature of the tutor’s role – more facilitative – where the focus is on students learning, rather than tutors teaching (Barrett and Moore, 2011), striking a balance between being dominant and passive – “dominant tutors in the group hinder the learning process, but the quiet or passive tutor who is probably trying not to teach also hinders the learning process” (Dolmans et al., 2005).

During PBL, it is important for the tutor as ‘facilitator’ to ensure adequate time is allocated for constructive feedback throughout the process (Barrett and Moore, 2011). Although our tutor did provide this, it was difficult to agree as a group when we needed the feedback. I would have preferred to grant our tutor access to the three scenarios in-progress from the start, allowing him to provide feedback throughout. However, other group members preferred to wait until nearer the end. I found this quite frustrating, particularly as one aspect of our learning was around the importance of a continuous student-tutor dialogue, and feedback as being formative and ‘developmental’ (HEA, 2012).

Although our tutor shared with us a wealth of knowledge and experience himself, which was very interesting and useful, he was also more than happy to take a step back and listen to our experiences as practitioners, providing thought-provoking statements and questioning – helping us to unravel the scenarios and assisting us in recognising our own wealth of prior knowledge (Barrett and Moore, 2011). In one sense I would have liked more input from our tutor (simply because I enjoyed his ‘stories’ so much), yet in another sense it was great that he left us to get on with the group-work. As mature practice-based students, we were able to remain relatively motivated, and continued to meet up regularly even when there weren’t any face-to-face sessions. However, I feel this was partly down to there being only three of us – we simply had to get on with it, and there was no room for anyone to take a back seat, even if there were other professional and/or personal commitments. I was heartened that, having experienced a family bereavement towards the end of the PBL group-work, both my tutor and peers understood that I needed a little time away, although again, due to the small group, I felt pressured (only by myself) to return as soon as possible and get on, so as not to disrupt or delay the group-submission too much.

Despite the challenges I feel that my PBL group worked well together, and at least met the deadline in producing the group’s solutions, and in the end I think we all learnt something, not only about the scenarios themselves and about our collective experience of assessment and feedback practice, but about the potential feelings and frustrations of our students when thrown into similar assessment processes – worthwhile experiential learning (Kolb, 1984).

AFL – the learning journey

Rubrics & the Secret to Grading

(Cartoon from www.wisepedagogy.com)

PBL1 – What’s in a mark?
(see PBL1 group-submission)

As a staff developer within academic practice, I’m often engaged in discussions with colleagues regarding assessment and feedback. For the most part, I design and facilitate staff development relating to TEL. However, I’ve often felt that I needed to widen my knowledge across various disciplines rather than being limited to my own specialist area, enabling me to better support and advise my colleagues. For example, a recent strategic initiative to roll out e-Assessment institution-wide has led to staff development in this area, including the use of e-Submission and e-Marking/e-Feedback tools. Throughout, I’ve met with some resistance towards the use of electronic rubrics, which seemed to stem from a general resistance to the use of rigid criteria and standards – whether paper-based or electronic.

The PBL1 group-work and related tutor input sessions helped me to take a step back and reflect on why my academic colleagues held such strong, often negative, views on the development and use of assessment criteria linked to learning outcomes. My own stance was relatively positive: such standards provide transparency for both students and tutors in terms of what is being assessed and how. How could anyone argue with that?

Throughout the initial weeks on AFL, I soon began to understand further where these negative viewpoints came from, and a lot had to do with changing HE systems and structures. In the old days of University education, when the opportunity for furthering education through HE was restricted to an elite few, the University experience was very different. The ‘academic’ was deemed the all-knowing authority, and learning was ‘passive’ – a transfer of knowledge from the expert to the novice, with a focus on the acquisition of expert knowledge rather than experience or skill. We have since moved from this to the current culture, where there is an increased and diverse student body paying high fees as ‘customers’, where students and tutors are deemed partners in learning, and where learning is ‘active’, providing opportunity for students to co-construct their own knowledge collaboratively through both theory and practice (HEA, 2012). In terms of assessment and feedback there was once more scope for individual ‘academic’ subjectivity; today, however, HE institutions are under the scrutiny of the Quality Assurance Agency (QAA) to ensure consistent and transparent assessment and feedback practices (QAA, 2011).

This has led to a perceived obsession with bureaucratic processes linked with compliance and standards – learning outcomes, criteria – rather than a focus on ‘learning’, which some believe is lost through the bureaucracy. Furedi (2012), in his article ‘The unhappiness principle’, argues that learning outcomes “disrupt the conduct of the academic relationship between teacher and student”, “foster a climate that inhibits the capacity of students and teachers to deal with uncertainty”, “devalues the art of teaching”, and “breeds a culture of cynicism and irresponsibility”. The article certainly sparked some debate, some of which we talked about on AFL. How can we prescribe outcomes, when learning for everyone is different depending on where they start from in the first place, and indeed where they intend to go? Can, or should, we actually guarantee anything as an outcome of learning? Can the presence of outcomes as a ‘check list’ of learning discourage creativity, exploration, and discovery in learning? Or does the absence of learning outcomes provide a ‘mystique’ that encourages creativity, exploration, and discovery? Does such ‘mystique’ foster deeper learning, or merely confusion and student dissatisfaction?

Through collaborative exploration during both the PBL group-work and discussions with the whole cohort, it was clear that to some degree consistency, transparency, and constructive alignment of assessment criteria with learning outcomes are important factors, not just to satisfy QAA requirements, but also to ensure validity, reliability and fairness to both tutors and students as partners in learning (University of Salford, 2012). It is also important to acknowledge and respect that there will always be an element of subjectivity, as responsibility for marking is largely down to the subjective judgement of tutors and other markers (Bloxham and Boyd, 2007).

In terms of where this relates to my own practice, I feel the conclusions are relevant in that I need to work with academic colleagues to further understand their assessment and feedback practices. During AFL I came across some real examples of criteria and/or rubrics in practice at Salford, which, although not fully aligned with University guidance (e.g. the use of A-F grades), do seem to work, in that staff, students, and external examiners have praised some of the practice. As a staff developer I need to appreciate and fully understand current practice, and work with colleagues to facilitate a ‘community’ amongst programme and subject teams, enabling them to work collectively, socially constructing a set of shared standards to ensure valid, reliable, and fair assessment and feedback practice.

PBL2 – Where’s my feedback, dude?
(see PBL2 group-submission)

Although as a PBL group all three of us contributed to each scenario, PBL2 was where I contributed much of the initial literature searches and writing, perhaps because I saw this particular scenario as most relevant to my current practice. Recently, I have been working closely with academic colleagues to try and unravel why assessment and feedback continues to be an area of lower student satisfaction than any other, both locally within the institution and nationally across UK HE (HEA, 2012; HEFCE, 2012).

Before my PBL group could effectively start to look at current practice, it was useful to take a step back and reflect upon what changes in HE may have impacted on how staff and students perceive assessment and feedback. As a relatively ‘young’ learner, I have experienced a ‘modular’ University education (1997-present), and as such hadn’t really thought extensively about University education pre-1990s, and particularly how the introduction of a modular system created a significant growth in summative assessment (HEA, 2012). This certainly echoes some of the opinions of colleagues who speak about over-assessing, and resonates with some of the recent changes in policy – a move from 15 to 30 credit modules.

Also, a common response to the poor performance in the National Student Survey (NSS) is that students don’t realise when they’re receiving feedback (Boud, 2012). At one time I may have gone along with this simplistic viewpoint, however through exploration of the PBL2 scenario, I have developed my understanding and ideas around how institutions can start to address the issues surrounding assessment and feedback.

Research suggests that improvements in assessment and feedback can be achieved through reviewing practice, making changes to course and curriculum design, and questioning what feedback is and how it is useful (Boud, 2012). Timely and developmental feedback, ensuring students are able to act on feedback to develop future work, are key factors and align with research on ‘assessment for learning’ (Biggs, 1996; Gibbs & Simpson, 2004, 2005; Walker, 2012), as opposed to ‘assessment of learning’.

Another important factor for improvement is ensuring a continuous dialogue to facilitate development, and whilst there is a preference for more personal contact between student and tutor (Gibbs, 2010), there is also a need to provide more opportunity for dialogue between students themselves, and to involve them as partners in learning and in deciding how they are assessed. Within my PBL group, peer-assessment, as an active teaching and learning approach, was explored as one way of achieving this (Falchikov, 1995; 2005; Fry, 1990; Hughes, 2001; Orsmond, 2004).

Some of the staff development sessions I’ve facilitated relating to assessment and feedback, both as part of my practice and during the PGCAP (see post Reflection 4/6 – Tutor observation), have been uncomfortable and intense, largely because discussion turns to the element of ‘poor performance’ in the NSS. I feel that working with my PBL group in a relaxed, shared ‘learning’ environment has enabled further exploration of varied practice, strengthening my ability and confidence to facilitate these sessions in the future.

PBL3 – A module assessment redesign
(see PBL3 group-submission)

PBL3 was unlike the others, in that the group had to agree upon an area of practice where an assessment redesign was necessary, and use the scenario of a seemingly unfair assessment to inform the redesign. All group members had ideas for redesign from various contexts; however, it was decided to focus on the redesign of a module in mechanical engineering, as this linked with aspects of ‘authentic’ and ‘negotiated’ assessment, which seemed most relevant to ‘inclusive’ assessment and the scenario.

Through exploring these areas to inform the redesign, I have developed my understanding of inclusive assessment and feedback practice both in terms of enabling (1) authentic assessment which reflects ‘real-world’ practice, and (2) negotiated assessment relating to choice of assessment topic, learning outcomes/criteria, or the medium in which the student presents their learning.

The redesign of the mechanical engineering module to enable students to experience the practical elements of testing individual systems was one way of achieving authenticity, as is providing work experience and placement opportunities. Programmes and modules, such as those in health and social care, are examples of where authenticity is central to learning. On the PGCAP, students engage in teaching observations, providing learners with an insight into professional practice across various disciplines. However, these examples require either money, a working partnership with employers, or an established practice-based approach.

“As increasing numbers of students enter higher education with the primary hope of finding employment, there is a pressure to ensure that assessment can, at least in part, mirror the demands of the workplace or lead to skills that are relevant for a range of ‘real world’ activities beyond education, but this has been largely unreflected in the reform of assessment within many disciplines” (HEA, 2012).

As someone who has always studied part-time whilst working full-time in an area directly related to my studies, I’ve had the opportunity to link theory with practice. However, I often reflect on how undergraduate students studying full-time without work experience manage to apply their knowledge to practice with a view to improving their employability.

Therefore, my personal interest is in which methods foster an authentic learning experience where, without the budget to buy expensive kit or provide placements, authentic assessment can still be achieved. Interestingly, PBL is one such method, where students are provided with real-world authentic problems to work on in groups, tasked to produce a collaborative report as an authentic assessment task. This provides authenticity not only in relation to the problems themselves, but also in relation to the skills required to work within the group – problem-solving, teamwork, communication, time management, presentation of work in various formats – all skills of value to the employer (Bloxham and Boyd, 2007).

The aspect of ‘negotiated learning’ was also of interest. By providing students with the opportunity to negotiate their own learning path through choice of learning outcomes, assessment criteria, topic, or format, students become involved, taking responsibility for their own learning and assessment (Boud, 1992). I have in the past been involved in negotiated learning where, as in the mechanical engineering module redesign, the negotiation was linked to the choice of topic (e.g. the ALT module); however, the concept of taking this further, enabling students to negotiate learning outcomes and/or assessment criteria, is an area which I’d like to explore further to inform future practice. The PBL3 group’s exploration of the use of assessment schedules or ‘learning contracts’ is a springboard to further reading in this area.

“The negotiated learning contract is potentially one of the most useful tools available to those interested in promoting flexible approaches to learning. A learning contract is able to address the diverse learning needs of different students and may be designed to suit a variety of purposes both on course and in the workplace” (Anderson and Boud, 1996).

Although the final group-submissions of the scenarios weren’t as ‘polished’ as I’d have liked, and in the end I had to ‘let go’, I do feel that the learning journey as a whole has benefited me hugely in terms of ‘sharing’ and constructing ‘new knowledge’, which has helped me to achieve my learning goals. I feel more able to support my colleagues in their wider academic practice relating to assessment and feedback, and more confident in my own assessment and feedback practice as a module tutor. Time and ‘experience’ will tell!


Cheryl’s Wheel of AFL – End

Action plan

  • Staff development (Colleges) – Work with programme teams and/or subject areas to facilitate a ‘community’ and sharing of assessment and feedback practices, aiming towards developing a shared set of standards.
  • Module tutor (PGCAP) – Work with PGCAP programme team, developing my own assessment and feedback practices through becoming part of a ‘community’ and sharing of assessment and feedback practices, aiming towards developing a shared set of standards.
  • CPD (AFL) – Further develop my understanding of assessment and feedback practices by further reading and co-facilitating AFL in the future, having the opportunity to listen to more ‘stories’ of colleagues from various contexts across the University.
  • CPD (PBL) – Further develop my own PBL practice through further reading and as a PBL ‘facilitator’ on the AFL module, and participate as a learner on the Flexible, Distance and Online Learning (FDOL) open course to continue the PBL student experience, albeit in an online and global context.

References

Anderson, G. and Boud, D. (1996) Introducing Learning Contracts: A Flexible Way to Learn. Innovations in Education & Training International, 33(4). Available online http://www.tandfonline.com/doi/abs/10.1080/1355800960330409 (accessed Jan 2013).

Barrett, T. and Cashman, D. (Eds) (2010) A Practitioners’ Guide to Enquiry and Problem-based Learning. Dublin: UCD Teaching and Learning. Available online http://www.ucd.ie/t4cms/ucdtli0041.pdf (accessed Jan 2013).

Barrett, T. and Moore, S. (2011) New approaches to problem-based learning: revitalising your practice in higher education. New York, Routledge.

Biggs, J. (1996) Assessing learning quality: reconciling institutional, staff and educational demands. Assessment & Evaluation in Higher Education, 12(1): 5-15. Available online http://www.tandfonline.com/doi/abs/10.1080/0260293960210101 (accessed Jan 2013).

Bloxham, S. and Boyd, P. (2007) Developing Effective Assessment in Higher Education: A Practical Guide. Maidenhead: Open University Press.

Boud, D. (1992) The Use of Self-Assessment in Negotiated Learning. Available online http://www.iml.uts.edu.au/assessment-futures/subjects/Boud-SHE92.pdf

Boud, D. (2012) A transformative activity. Times Higher Education (THE). 6th September 2012. Available online http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=421061 (accessed Jan 2013).

Dolmans, D. H. J. M., De Grave, W., Wolfhagen, I. H. A. P. and Van Der Vleuten, C. P. M. (2005), Problem-based learning: future challenges for educational practice and research. Medical Education, 39: 732–741. Available online http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2929.2005.02205.x/full (accessed Jan 2013).

Falchikov, N. (2003) Involving Students in Assessment, Psychology Learning and Teaching, 3(2), 102-108. Available online http://www.pnarchive.org/docs/pdf/p20040519_falchikovpdf.pdf (accessed Jan 2013).

Falchikov, N. (1995) Peer feedback marking: developing peer assessment, Innovations in Education and Training International, 32, pp. 175-187.

Fry, S. (1990) Implementation and evaluation of peer marking in Higher Education. Assessment and Evaluation in Higher Education, 15: 177-189.

Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students’ learning, Learning and Teaching in Higher Education, vol. 1. pp.1-31. Available online http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/issue1.pdf#page=5 (accessed Jan 2013).

Gibbs, G. & Simpson, C. (2005) Does your assessment support your students’ learning? Learning and Teaching in Higher Education, 1. Available online http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.201.2281&rep=rep1&type=pdf (accessed Jan 2013).

Gibbs, G. (2010) Dimensions of quality. Higher Education Academy: York. Available online http://www.heacademy.ac.uk/assets/documents/evidence_informed_practice/Dimensions_of_Quality.pdf (accessed Jan 2013)

Gibbs G (1988) Learning by Doing: A guide to teaching and learning methods. Further Education Unit. Oxford Polytechnic: Oxford.

HEA (2012) A Marked improvement: Transforming assessment in higher education, York, Higher Education Academy. Available online http://www.heacademy.ac.uk/resources/detail/assessment/a-marked-improvement (accessed Jan 2013).

HEFCE (2012) National Student Survey. Available online http://www.hefce.ac.uk/whatwedo/lt/publicinfo/nationalstudentsurvey/ (accessed Jan 2013).

Hughes, I. E. (2001) But isn’t this what you’re paid for? The pros and cons of peer- and self-assessment. Planet, 2, 20-23. Available online http://www.gees.ac.uk/planet/p3/ih.pdf (accessed Jan 2013).

Furedi, F. (2012) The unhappiness principle. Times Higher Education (THE). 29th November 2012. Available online http://www.timeshighereducation.co.uk/story.asp?storycode=421958 (accessed Jan 2013)

Kolb, D. (1984) Experiential Learning: experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall.

Moon J. (2004) A Handbook of Reflective and Experiential Learning, Routledge Falmer.

Orsmond, P. (2004) Self- and Peer-Assessment: Guidance on Practice in the Biosciences, Teaching Bioscience: Enhancing Learning Series, Centre for Biosciences, The Higher Education Academy, Leeds. Available online http://www.bioscience.heacademy.ac.uk/ftp/teachingguides/fulltext.pdf (accessed Jan 2013).

QAA (2011) Quality Code – Chapter B6: Assessment of students and accreditation of prior learning. Available online http://www.qaa.ac.uk/Publications/InformationAndGuidance/Pages/quality-code-B6.aspx (accessed Jan 2013).

University of Salford (2011) Transforming Learning and Teaching: Learning & Teaching Strategy 2012-2017. Available online http://www.hr.salford.ac.uk/employee-development-section/salford-aspire (accessed Jan 2013).

University of Salford (2012) University Assessment Handbook: A guide to assessment design, delivery and feedback. Available online http://www.hr.salford.ac.uk/cms/resources/uploads/File/UoS%20assessment%20handbook%20201213.pdf (accessed Jan 2013).

Walker, D. (2012) Food for thought (20): Feedback for learning with Dr David Walker. Available online http://www.youtube.com/watch?v=DNu3fMMNQlw (accessed Jan 2013).

PBL3 group submission – A module assessment redesign?

Problem

In mechanical engineering, the students all ask about mechanical systems related to motor vehicles. However, we do not have a vehicle within the building that the students could relate to when they are learning about brakes, engines, dynamic controls, fuel selection (green solutions), weight considerations, the introduction of carbon materials, and stress analysis with frame stability.

The University has given us a budget to purchase two motorsport racing cars and a track transport trailer; one car will be used for track testing, and the other kept un-assembled in kit form so that individual systems can be independently tested by the students. This will allow students to negotiate their learning by selecting the system they are interested in and testing it as part of a final year project, enabling each student to achieve their individual learning goals in an authentic environment.

The vehicles will be used primarily in the mechanical engineering group design project, which is a 30-credit module.

Every student needs to submit a satisfactory piece of work, but, as the scenario suggests, ensuring that all students participate may require a range of assessments, including lab practicals, reports, and exams.

Although students all have to submit the same components, the negotiation is over the specialism chosen, encouraging authentic assessment influenced by their own career aspirations – hence a fairer assessment.

(1) A student who is good at data logging and data analysis can work on measurement collection and analysis of fuel management systems, or structural analysis of stress characteristics of the vehicle chassis.
(2) Another student may be more focused on mechanical linkages and hydraulic systems, incorporating steering and brakes, to determine efficient control parameters for steering and braking.
Both types of student therefore still cover the background practice and theory of mechanical engineering, but each focuses on a different branch.

Different strokes for different folks

In both (1) and (2) we can acknowledge that we are introducing authentic and negotiated learning.

“Students should experience assessment as a valid measure of their programme outcomes using authentic assessment methods, which are both intrinsically worthwhile and useful in developing their future employability” (HEA, 2012)

“Assessment reform with these aims would benefit from increased involvement of professional, regulatory and statutory bodies; engaging with them to identify how professional and personal capabilities can be evidenced. It would build on existing efforts to design integrative and creative assessment that is more able to determine authentic achievement. It would resist grading performances that cannot easily be measured. It would help students understand the assessment process and develop the skills of self-evaluation and professional judgement. It would enable students to recognise what they have learned and be able to articulate and evidence it to potential employers. Improving assessment in this way is crucial to providing a richer and fairer picture of students’ achievement.” (HEA, 2012)

In accordance with the FHEQ the descriptor most appropriate for the module redesign is level 4 (QAA, 2008).

To facilitate the course redesign, a new course handbook has been developed from one of our current curriculum design handbooks. Below are modifications to existing forms for final assessment marks, as a guide for the supervisor and moderator in preparing feedback, along with the marking criteria to be followed.

Final supervisor assessment form

Final Report Moderator Marking Proforma

The Higher Education Academy commissioned a guide to support the higher education sector to think creatively about inclusive curriculum design from a generic as well as subject or disciplinary perspective (HEA, 2010).

The curriculum represents the expression of educational ideas in practice. The word ‘curriculum’ has its roots in the Latin word for a track or race course; from there it came to mean a course of study or syllabus. Today the definition is much wider and includes all the planned learning experiences of a school or educational institution.

Descriptive Model (Reynolds and Skilbeck, 1976)
Image Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1125124/

An enduring example of a descriptive model is the situational model advocated by Reynolds and Skilbeck (1976), which emphasises the importance of situation or context in curriculum design. In this model, curriculum designers thoroughly and systematically analyse the situation in which they work for its effect on what they do in the curriculum. The impact of both external and internal factors is assessed and the implications for the curriculum are determined.

Situational Model (Reynolds and Skilbeck, 1976)
Image Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1125124/

Although all steps in the situational model (including situational analysis) need to be completed, they do not need to be followed in any particular order. Curriculum design could begin with a thorough analysis of the situation of the curriculum or the aims, objectives, or outcomes to be achieved, but it could also start from, or be motivated by, a review of content, a revision of assessment, or a thorough consideration of evaluation data. What is possible in curriculum design depends heavily on the context in which the process takes place.

Assessment is the process of documenting, usually in measurable terms, attributes such as knowledge and skills.

Focus can be directed at the individual learner, the learning community, the institution, or the educational system as a whole. Assessment practices depend on the theoretical framework that the students use, and on their assumptions and beliefs about their knowledge and their process of learning. CIIA (2010) highlights key points, although not all of them can be applied in every circumstance.

Assessment Learning Cycle CIIA (2010)
Image Source: http://pandora.cii.wwu.edu/cii/resources/outcomes/how_assessment_works.asp

Assessment criteria for our mechanical engineering students, who follow their chosen specialism as discussed previously:
(a) Initial literature research on the chosen topic
(b) Testing and data options
(c) Achievable theory objectives with numerical and classical data
(d) Has a goal been achieved, with objective enhancements?
(e) Has an understanding of the fundamentals been achieved, and has the student’s knowledge been demonstrated?

Before final submission, the students will give a presentation to three members of staff, presenting a feasible systems project and demonstrating theory, knowledge, and intended outcomes.
At the end of each presentation, feedback will be given using the final supervisor assessment form (as above); if the project lacks a driving objective, or shows insufficient understanding, advice will be given so that changes/improvements can be made.
This feedback could also be given at every stage individually, or at the end of the student’s analysis but before the student writes their final dissertation.

Gibbs (2010) asserts that feedback should be timely and understandable in order for it to make a difference to students’ work – feedback FOR learning, rather than feedback OF learning.

While the redesign of this module has not involved students negotiating their own assessment criteria – only the specialism they choose to follow – consideration should be given to this as it could enhance assessment.

One of the solutions put forward by Boud (1992) is to incorporate a “qualitative and discursive” self-assessment schedule in order to provide a comprehensive and analytical record of learning in situations where students have substantial responsibility for what they do. It is a personal report on learning and achievements which can be used either for students’ own use or as a product which can form part of a formal assessment procedure.

“The issue which the use of a self-assessment schedule addresses is that of finding an appropriate mechanism for assessing students’ work in self-directed or negotiated learning situations which takes account of both the range of what is learned and the need for students to be accountable for their own learning. Almost all traditional assessment strategies fail to meet these criteria as they tend to sample a limited range of teacher-initiated learning and make the assumption that assessment is a unilateral act conducted by teachers on students.” (Boud, 1992)

For many years Ken Robinson has been one of many voices urging a revolution in our education system in order to uncover the talents of individuals. A complete reform of our assessment structure might enable students to perform to their best.

“We have to recognise that human talent is tremendously diverse, people have very different aptitudes. We have to feed their spirit, feed their energy, feed their passion.” (Robinson, 2006)


References

Boud, D. (1992) The Use of Self-Assessment in Negotiated Learning. Studies in Higher Education, 17(2). Available online http://www.iml.uts.edu.au/assessment-futures/subjects/Boud-SHE92.pdf

CIIA (2010) Assessment and Outcomes: How Assessment Works.  Available online http://pandora.cii.wwu.edu/cii/resources/outcomes/how_assessment_works.asp

HEA (2012) A marked improvement: Transforming assessment in higher education, York, Higher Education Academy. Available online
http://www.heacademy.ac.uk/resources/detail/assessment/a-marked-improvement

HEA (2010) Inclusive curriculum design in higher education: Engineering.  Available online http://www.heacademy.ac.uk/resources/detail/inclusion/Disability/Inclusive_curriculum_design_in_higher_education

HEFCE (2009) Managing Curriculum Design.  JISC publication available online http://www.jisc.ac.uk/media/documents/publications/managingcurriculumchange.pdf

Gibbs, G. (2010) Dimensions of quality. Higher Education Academy: York. Available online http://www.heacademy.ac.uk/assets/documents/evidence_informed_practice/Dimensions_of_Quality.pdf (accessed Dec 2012)

Race, P. (2007) The Lecturer’s Toolkit (3rd edition). Abingdon, Oxon: Routledge.

Reynolds J, Skilbeck M. (1976) Culture and the classroom. London: Open Books.  Available online http://books.google.co.uk/books/about/Culture_and_the_classroom.html?id=3woNAQAAIAAJ

Robinson, K (2006) Bring on the Learning Revolution available online http://www.youtube.com/watch?v=r9LelXa3U_I

QAA (2008) The framework for higher education qualifications in England, Wales and Northern Ireland. Available online http://www.qaa.ac.uk/Publications/InformationAndGuidance/Documents/FHEQ08.pdf

PBL2 group submission – Where’s my feedback, dude?

Group report by

Caroline Cheetham
Lecturer in Journalism (School of Media, Music and Performance)
Cheryl Dunleavy
Academic Developer (Human Resource Development)
Philip Walker
Lecturer in Product Engineering (School of Computing, Science and Engineering)

Despite much effort in recent years to focus staff development and strategic initiatives on assessment and feedback (University of Salford, 2011), the University continues to receive low student satisfaction in this area across many subjects through the National Student Survey (NSS, 2012). The cartoon in the scenario (see Box 1) tells an all too familiar story – across the UK Higher Education (HE) sector, despite increased workloads for tutors, in terms of providing marks and feedback, student satisfaction in the category of assessment and feedback remains generally lower than in other categories (see Box 2).

Where's my feedback, dude?

Box 1: Scenario
(Image Source: http://www.health.heacademy.ac.uk/rp/publications/occasionalpaper/occp11.pdf)
In the context of the University’s regulations on the provision of feedback, identify the problems in the cartoon and investigate, through the use of relevant literature, how we can improve feedback for student learning.

Box 2: 2012 NSS results for the UK (HEFCE, 2012)
(Image source: http://www.hefce.ac.uk/whatwedo/lt/publicinfo/nationalstudentsurvey/)

In finding a solution to this problem, it is important to look at both staff and student perceptions of assessment and feedback – (1) how have tutors become so overworked? Why do they often conclude that students aren’t interested in their feedback – just their mark? and (2) when so much time is spent on providing marks and feedback, why do students convey such dissatisfaction in the NSS?

Many believe that changing HE systems and structures over the last few decades are partly to blame. The post-1990 modularisation of degree programmes may have led to over-assessing through more short-fat (30 credit) and short-thin (10 credit) modules, as opposed to the traditional long-thin (60 credit) term. A recent report by the Higher Education Academy (HEA) states:

“Modularisation has created a significant growth in summative assessment, with its negative backwash effect on student learning and its excessive appetite for resources to deliver the concomitant increase in marking, internal and external moderation, administration and quality assurance” (HEA, 2012).

Due to the very nature of ‘summative’ assessment occurring at the ‘end-point’, modular structures have naturally increased the amount of summative assessment occurring over a single term – typically two modules per semester, therefore four modules/summative assessment points per term. Traditionally, there would have been only one summative assessment (e.g. exam) at the end of each term perhaps allowing for more ‘formative’ assessment and learning throughout. With modularisation, there is little time for students to absorb learning through formative tasks in between summative assessments (Irons, 2008).

Although it is useful to understand the distinctions between ‘formative’ and ‘summative’ assessments when contributing to the assessment and feedback discourse (see Box 3), it does sometimes distract from the bigger picture.

Summative assessment as “any assessment activity which results in a mark or grade which is subsequently used as a judgement on student performance”

Formative assessment as “any task or activity which creates feedback (or feedforward) for students about their learning. Formative assessment does not carry a grade which is subsequently used in a summative assessment”

Formative feedback as “any information, process or activity which affords or accelerates student learning based on comments relating to either formative assessment or summative assessment”

Box 3: Summative Vs. Formative: Irons (2008)

Whilst we may accept such definitions and distinctions exist, it is important to remember that they cannot in practice be easily separated from one another, and ‘assessment’ as a whole should frame any strategies for improvement.

Post-modularisation research suggests institutional reviews of assessment practice are needed to consider where improvements can be made to ‘support learning’ (Biggs, 1996; Gibbs & Simpson, 2004, 2005). There is a shift in focus from summative assessment to formative assessment and formative feedback approaches (HEA, 2012) – indeed a shift from ‘assessment of learning’ as a ‘measure’ (quantitative) to ‘assessment for learning’ as ‘developmental’ (qualitative). Therefore, if we are to think about the concept of ‘assessment for learning’ to help resolve this problem, then we need to consider various aspects of current practice.

If, as the scenario suggests, students are dissatisfied with their feedback, when on the other side tutors are working hard to produce marks and feedback, then we can assume that there is something wrong with the feedback and/or a difference in perception of what quality ‘comprehensive’ and ‘meaningful’ feedback looks like.

Timeliness and feed ‘forward’ are key factors.

“Feedback may be backward looking – addressing issues associated with material that will not be studied again, rather than forward-looking and addressing the next study activities or assignments” (Gibbs & Simpson, 2005)

If the feedback is received at the end of a module (backward-looking), and it is not clear to the student how they may use this as a ‘developmental’ tool to improve (forward-looking), then they are unlikely to fully engage.

The student needs to be clear of the future benefit of assessment, therefore if assessment and feedback is truly ‘comprehensive’ and ‘meaningful’ then the benefit should be ‘lifelong’ contributing to an improvement in the next assessment, module, further study, and/or future employment.

At Salford, our students have indicated that ‘feedback for learning’ is key to improving student satisfaction. The former President of the Student Union suggests that more informal (formative) feedback ‘before’ summative assessment is important – a continuous dialogue between student and tutor (see Box 4).

Box 4: Feedback for learning (Dangerfield, 2012)
(Video source: http://youtu.be/zfMCMm1htLY)

The importance of a continuous student-tutor dialogue through interaction on a personal basis is highlighted in the literature, both through early research – ‘Seven principles of good practice in undergraduate education’ (Chickering & Gamson, 1987) and more recently – ‘Dimensions of quality’ (Gibbs, 2010).

However, whilst more personal contact is desirable, large cohorts may present resource issues and therefore we need to be more creative in how we realistically provide more formative feedback opportunities for all our students.

Returning to the concept of ‘assessment’ as a whole, it is important to ensure that formative and summative tasks are designed with each other in mind.  This can be achieved through ‘active’ teaching and learning shaped from and explicitly linked to module intended learning outcomes (ILOs) and assessment criteria – constructive alignment (see Box 5).

Box 5: Constructive alignment (Biggs & Tang, 2007)
(Image source: http://www.ucdoer.ie/images/3/3c/Aligned-curriculum-model.gif)

So, what formative approaches, which support active teaching and learning, could we suggest to help resolve this particular problem? There are several examples in the literature; however, here we have chosen to focus on ‘peer-assessment’, since – along with more contact time and feedback throughout – a variety of assessment methods (including peer-assessment) has been highlighted in the Student ‘Charter on Feedback & Assessment’ (see Box 6).

Box 6: Student ‘Charter on Feedback & Assessment’ (NUS, 2010)
(Image source: http://www.nusconnect.org.uk/news/article/highereducation/720/)

Peer assessment is defined as

“the process through which groups of individuals rate their peers. This exercise may or may not entail previous discussion or agreement over criteria. It may involve the use of rating instruments or checklists which have been designed by others before the peer assessment exercise, or designed by the user group to meet its particular needs” (Falchikov, 1995)

Peer-assessment is one approach which provides both formative assessment and formative feedback opportunities where students are empowered to take personal ownership not just of their learning, but also of how their learning is assessed.  Peer-assessment methods are centred around the student and offer an authentic and lifelong learning experience (Orsmond, 2004).

When peer-assessment methods are designed well, students are positive about the outcomes claiming that it makes them think more, become more critical, learn more and gain in confidence (HEA, 2012).

Of course, peer-assessment methods aren’t always well received by students. Students can feel that they are being made to do the work of their tutor (e.g. marking), which can be highly controversial today – students, often seen as ‘customers’, now pay increased fees and therefore demand to see ‘value for money’ from their tutors. However, in well designed peer-assessment the ‘value’ can be seen through tutors spending their time providing more formative feedback in-class, clarifying assessment criteria in preparation for peer-assessment activities.

Hughes (2001), in his implementation and evaluation of peer-marking of practical write-ups (>100 pharmacology students), found that such an approach was successful, and it is clear that three factors contributed to this success (see Box 7).

1 Firstly, it was important to clarify to students why they were being asked to peer-mark practical write-ups, emphasising the ‘value’ of this process to them as learners. This was achieved through a preliminary session where he introduced his students to peer-assessment, helping to encourage them to see the educational benefit of peer-assessment, including clarity of assessment criteria, learning from others’ mistakes, benchmarking the standard of their own work against the standards achieved elsewhere, and gaining experience of assessing others’ work – a necessary lifelong skill, especially in employment. Non-attendance at the preliminary session resulted in a penalty on their own mark, therefore providing further encouragement to participate.

Student Guide to Peer Assessment of Practicals (Fry, 1990): Why are we doing this?
You should get several things out of this method of assessment which may be new to you:

  • It is an open marking system; therefore you can see what was required and how to improve your work.
  • You see mistakes others make and therefore can avoid them; you also see the standard achieved by others and can set your own work in the spectrum of marks.
  • You get a full explanation of the practical and how you should have processed the data and done the discussion. Therefore your information and understanding is improved.
  • You get practice in assessing others and their work. You will need this skill quite early in a career and you will need to come to terms with the problem of bias; someone who is a good friend may have done poor work; it can be disturbing to have to give them a poor mark.
  • In assessing others you should acquire the ability to stand back from your own work and assess that as well. This is an essential ability in a scientist; an unbiased and objective assessment of the standards you have achieved in your own work. Once you are away from the teacher/pupil relationship (i.e. leave University) you will be the person who decides if a piece of work is good enough to be considered as finished and passed on to your boss.

The method of marking adopted in this course is designed with the above factors in mind.

2 Secondly, students were not left to their own devices when marking. It was important to provide a clear process and ‘time’ in-class for students to peer-mark together. All students gathered in a lecture theatre, the practical write-ups were distributed at random so as to reduce bias amongst friends, and tutors were present to provide and further explain criteria set out on an explicit marking schedule. Again, non-attendance at the marking sessions resulted in penalised marks.

3 Thirdly, students were encouraged to take ownership of their marking by crucially signing to accept responsibility for the accuracy of their marking. The students were also made aware that a sample of the practical write-ups would be second-marked by a tutor, and anyone who felt their mark was unfair could request to have their work re-marked by a tutor (less than 2% chose to do so).

Box 7: A case study of good practice (Hughes, 2001)

From his comparative study of two first year cohorts on two consecutive years, Hughes (2001) presents data which suggests that although both cohorts gained similar marks for the first practical write-up, students involved in the peer-marking process improved in their remaining three practical write-ups, obtaining consistently better marks than the students who were not involved.  Also, the tutor-marked sample did not reveal significantly different marks from those awarded during peer-marking.  This data suggested three things, (1) students were learning how to improve their practical write-ups through the peer-marking process, (2) peer-marking did not result in a lowering of standards when compared with tutor-marking, and (3) peer-marking resulted in a reduction in staff time.

The findings from Hughes (2001) are consistent with other peer-assessment studies in the sciences (Orsmond, 2004), which contribute to best practice guidance (see Box 8).

Box 8: Stages in carrying out and evaluating a self or peer assessment study (Falchikov, 2003)
Image source: http://www.pnarchive.org/docs/pdf/p20040519_falchikovpdf.pdf

To conclude, peer-assessment is encouraged nationally as one way to engage students in a dialogue about their learning, and would seem an ideal approach to promote across the University to help resolve the problem identified here.

“Encouraging self- and peer assessment, and engaging in dialogue with staff and peers about their work, enables students to learn more about the subject, about themselves as learners, as well as about the way their performance is assessed” (HEA, 2012).

Although formative feedback through peer assessment doesn’t necessarily reduce a tutor’s workload, it does have the potential to be valued by students and to be well received when it comes to the NSS.  Staff can therefore be motivated and enthused that students appreciate the work they put into assessment and feedback processes, rather than, as in the scenario, both parties being left dissatisfied.  Students and tutors as partners in learning.

References

Biggs, J. (1996) Assessing learning quality: reconciling institutional, staff and educational demands.  Assessment & Evaluation in Higher Education, 21(1): 5-15.  Available online http://www.tandfonline.com/doi/abs/10.1080/0260293960210101 (accessed Nov 2012)

Biggs, J. & Tang, C. (2007) Teaching for Quality Learning at University, Buckingham, Open University Press.

Chickering, A.W. & Gamson, Z.F. (1987) Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), pp. 3-7.

Dangerfield, C. (2012) Food for thought (5): Feedback for learning with Caroline Dangerfield.  Available at http://youtu.be/zfMCMm1htLY (accessed Dec 2012)

Falchikov, N. (2003) Involving Students in Assessment, Psychology Learning and Teaching, 3(2), 102-108.  Available online http://www.pnarchive.org/docs/pdf/p20040519_falchikovpdf.pdf (accessed Nov 2012).

Falchikov, N. (1995) Peer feedback marking: developing peer assessment, Innovations in Education and Training International, 32, pp. 175-187.

Fry, S. (1990) Implementation and evaluation of peer marking in Higher Education. Assessment and Evaluation in Higher Education, 15: 177-189.

Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students’ learning, Learning and Teaching in Higher Education, vol. 1. pp.1-31. Available online http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/issue1.pdf#page=5 (accessed Nov 2012)

Gibbs, G. & Simpson, C. (2005) Does your assessment support your students’ learning? Learning and Teaching in Higher Education, 1.  Available online http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.201.2281&rep=rep1&type=pdf (accessed Dec 2012)

Gibbs, G. (2010) Dimensions of quality.  Higher Education Academy: York.  Available online http://www.heacademy.ac.uk/assets/documents/evidence_informed_practice/Dimensions_of_Quality.pdf (accessed Dec 2012)

HEA (2012) A marked improvement: Transforming assessment in higher education, York, Higher Education Academy.  Available online http://www.heacademy.ac.uk/resources/detail/assessment/a-marked-improvement (accessed Nov 2012).

HEFCE (2012) National Student Survey.  Available online http://www.hefce.ac.uk/whatwedo/lt/publicinfo/nationalstudentsurvey/ (accessed Dec 2012).

Hughes, I. E. (2001) But isn’t this what you’re paid for? The pros and cons of peer- and self-assessment. Planet, 2, 20-23.  Available online http://www.gees.ac.uk/planet/p3/ih.pdf (accessed Nov 2012).

Irons, A. (2008) Enhancing Learning through Formative Assessment and Feedback. London: Routledge.

NSS (2012) The National Student Survey.  Available online http://www.thestudentsurvey.com/ (accessed Dec 2012).

NUS (2010) Charter on Feedback & Assessment, National Union of Students.  Available online http://www.nusconnect.org.uk/news/article/highereducation/720/ (accessed Nov 2012)

Orsmond, P. (2004) Self- and Peer-Assessment: Guidance on Practice in the Biosciences, Teaching Bioscience: Enhancing Learning Series, Centre for Biosciences, The Higher Education Academy, Leeds.  Available online http://www.bioscience.heacademy.ac.uk/ftp/teachingguides/fulltext.pdf (accessed Nov 2012)

University of Salford (2011) Transforming Learning and Teaching: Learning & Teaching Strategy 2012-2017.  Available online http://www.adu.salford.ac.uk/html/aspire/aspire.html (accessed Dec 2012).

PBL1 group submission – What’s in a mark?

The issues facing this school, outlined in the image above, are multiple, but we would assert that ultimately one main problem is central – the school appears to lack confidence in the assessment and marking process because the marks students are receiving are below the average for the sector.

Of course, within every school different subject areas work to different criteria in the marking process, but what is not clear here is whether each department even has criteria which the students are working towards, and it is even less clear whether departments are working to an agreed set of standards.

In this paper we will work towards ascertaining whether the school’s use of rubrics is at the heart of staff concerns over the marks students are receiving.

However, even if we assume that a lack of, or poor use of, rubrics is a fundamental part of the problem, it also seems likely that other factors are at play, including less able students and less effective teaching methods. But even when these elements are considered alongside a lack of coherence in the use of rubrics, what cannot be overlooked is that ever stricter adherence to rubrics across departments is not the answer to every problem.

The University’s aim is informed by the guidance in the QAA UK Quality Code for Higher Education: Assessment of Students and Accreditation of Prior Learning.

The establishment of rubrics within universities is informed by the guidance of the QAA. In line with local and national guidelines:

  • All assessments should have a marking scheme and marking criteria developed in line with University grade descriptors.
  • Each assessment awards either a percentage mark (University scale 0-100%) or a pass/fail grade.
  • The University marking scale provides brief grade descriptors (subject-specific marking criteria should be developed to align with these).

The guidelines are there to assist in our quality assurance methods, but also to make sure the assessment process is transparent and clear for our students. What is crucial is that students know what they are working towards, and that assessment is not some big secret through which we, as lecturers, are trying to catch them out.

How We Measure, a photo by giulia.forsythe on Flickr (Forsythe, 2012)

Biggs points out that the processes of teaching and assessing have to be inherently linked – to relate intrinsically to each other:

“‘Constructive alignment’ has two aspects.  The ‘constructive’ aspect refers to the idea that students construct meaning through relevant learning activities. The ‘alignment’ aspect refers to what the teacher does, which is to set up a learning environment that supports the learning activities appropriate to achieving the desired learning outcomes. The key is that the components in the teaching system, especially the teaching methods used and the assessment tasks, are aligned with the learning activities assumed in the intended outcomes.” (Biggs, 2003)

Making use of a rubric or feedback chart or grid would appear to be the most open and transparent way for students to learn. The assessment criteria are inherently linked to the learning outcomes – one should inform the other. The concept of students working towards undisclosed or “secret” learning outcomes would appear to contradict the very point of learning. In addition, as “experts” we set the learning outcomes because they are integral to what we want our students to learn. And if the learning outcomes are not clear to the students, are they really clear to the lecturers?

While rubrics should play an integral part in how we assess our students, there will always be an element of subjectivity in our assessments. Bloxham (2007) argues that the responsibility for grading is largely down to the subjective judgement of tutors and other markers, and that there are two main marking categories: norm-referenced and criterion-referenced.

Criterion-referenced assessment judges student work against a set of criteria, such as those linked to the learning outcomes for the assignment. In criterion-referenced assessment all students have an opportunity to do equally well. According to Price (2005), this system is generally considered desirable on the basis that it is fairer for students to know how they will be judged and to have that judgement based on the quality of their work rather than on the performance of other members of their cohort.

Higher education marking relies on a combination of judgement against criteria and the application of standards which are heavily influenced by academic norms.

“While we can share assessment criteria, certainly at the level of listing what qualities will be taken into account, applying standards is much harder to accomplish and relies on interpretation in context, and therein lies one of the key components of unreliability.” (Bloxham & Boyd, 2007)

To this point we have made some of the arguments for the importance of rubrics in assessment – essentially that students have a right to know what we, the lecturers, are assessing them on. The issue of transparency is high on the educational agenda, and as a result there appears to be an increasing reliance on rubrics in our HE system.

According to A Marked Improvement from the Higher Education Academy (HEA), assessment is at the heart of many challenges facing HE.

“A significantly more diverse student body in relation to achievement, disability, prior education and expectations of HE has put pressure on retention and standards.”

Yet despite our increasing reliance on rubrics in order to ensure transparency, the National Student Survey continues to show that students express concerns about the reliability of assessment criteria.

Equally concerning is a sense that assessment practices, which desperately need to mirror the demands of the workforce, are not reflecting what employers want.

“There is a perception, particularly among employers, that HE is not always providing graduates with the skills and attributes they require to deal successfully with a complex and rapidly changing world: a world that needs graduates to be creative, capable of learning independently and taking risks, knowledgeable about the work environment, flexible and responsive.” (HEA, 2012, A Marked Improvement)

In his incredibly innovative Changing Education Paradigms, Ken Robinson outlines why we need to change how we teach and assess in order to meet the needs of a changing society and a different age.

So does an ever-increasing reliance on rubrics, for the sake of transparency, mean we are assessing too narrowly and failing to take account of the essential and valuable learning which takes place in spite of the formal learning outcomes?

According to the sociologist Frank Furedi in his article The Unhappiness Principle, learning outcomes have become too prescriptive and are a “corrosive influence on higher education”. Furedi argues that a strict adherence to learning outcomes devalues the actual experience of education and deprives teaching and learning of meaning:

“The attempt to abolish ambiguity in course design is justified on the grounds that it helps students by clarifying the overall purpose of their programme and of their assessment tasks. But it is a simplistic form of clarity usually associated with bullet points, summary and guidance notes. The precision gained through the crystallisation of an academic enterprise into a few words is an illusory one that is likely to distract students from the clarity that comes from serious study and reflection.”

In essence, Furedi’s concerns reflect fears amongst academics that prescriptive learning outcomes allow students less freedom to learn through the process of learning, for the sake of learning. Instead, an increasing number of students want to know only what they have to do to satisfy the assessment criteria.

But Furedi’s views also hark back to a time when assessments were not open and transparent, and students’ marks were based on unknown criteria and subject only to an academic’s arbitrary judgement.

It would seem to us, in conclusion, that while Furedi’s views strike a chord with many academics who relish learning for the sake of learning, and who rail against the QAA’s insistence on measuring and marking every aspect of learning through an increasing use of rubrics, transparency in student assessment remains crucial.

For the School of X it is likely that there is a lack of coherence in how rubrics are designed and used. For rubrics to be effective and fair across departments there needs to be a shared understanding of standards.

The HEA suggests that the onus has to be on academics to work collectively to ensure consistency in marking. Assessment standards should be socially constructed in a process which actively engages both staff and students. In addition, as previously pointed out, we must have confidence in the judgements of professionals.

“Academic, disciplinary and professional communities should set up opportunities and processes such as meetings, workshops and groups to regularly share exemplars and discuss assessment standards. These can help ensure that educators, practitioners, specialists and students develop shared understandings and agreement about relevant standards.”

We believe the School of X needs to have confidence in standards across subject areas, but it also needs to consider whether comparisons between different universities are reasonable ones. Ultimately, the performance of students at this level will always be influenced by their own ability, and for that reason the School of X might want to consider educational gain as a better measure.

According to Biggs: “This matters because the best predictor of product is the quality of the students entering the institution, and the quality of students varies greatly between institutions, so that if you only have a measure of product, such as degree classifications, rather than gains, then you cannot easily interpret differences between institutions.”

References

QAA (2011) UK Quality Code for Higher Education, Chapter B6: Assessment of Students and Accreditation of Prior Learning.

Bloxham, S. (2009) Marking and moderation in the UK: false assumptions and wasted resources.

Bloxham, S. & Boyd, P. (2007) Chapter 6: Marking, pp.81 – 102 in Developing Effective Assessment in Higher Education. Maidenhead: Open University Press.

Biggs, J. (2003) Aligning Teaching for Constructive Learning. York: Higher Education Academy.

Bridges (1999) ‘Are we missing the mark?’, Times Higher Education, 3 September 1999.

HEA (2012) A Marked Improvement: Transforming assessment in higher education. York: Higher Education Academy.

Knight, P. (2007) Grading, classifying and future learning, pp. 72-86 in Boud, D. & Falchikov, N. (2007) Rethinking Assessment in Higher Education. London: Routledge.

Forsythe, G. (2012) How we measure. Available online http://www.flickr.com/photos/gforsythe/7102055531/ (accessed Dec 2012)

Furedi, F. (2012) The Unhappiness Principle. Times Higher Education.

Orr, S. (2007) Assessment moderation: constructing the marks and constructing the students.

Price, M. (2005) Assessment – What is the answer? Oxford Brookes University Business School

Robinson, K. (2010) The Element: How Finding Your Passion Changes Everything. Penguin.

Yorke, M. (2000) Grading: The subject dimension.

 

Food for thought …

More great videos from the PGCAP Food for Thought series for learning and reflection at: http://www.youtube.com/user/pgcapsalford/videos?query=food+for+thought

The unhappiness principle

During recent weeks on the AFL module we’ve discussed the perceived value of learning outcomes.  We’ve asked: how can we prescribe outcomes, when learning for everyone is different depending on where they start from in the first place, and indeed where they intend to go?  Can, or should, we actually guarantee anything as an outcome of learning?  Can the presence of outcomes as a ‘check list’ of learning discourage creativity, exploration, and discovery in learning?  Does the absence of learning outcomes provide a ‘mystique’ which encourages creativity, exploration, and discovery?  Does such ‘mystique’ foster deeper learning, or merely confusion and student dissatisfaction?

Furedi (2012) argues that learning outcomes “disrupt the conduct of the academic relationship between teacher and student”, “foster a climate that inhibits the capacity of students and teachers to deal with uncertainty”, “devalue the art of teaching”, and “breed a culture of cynicism and irresponsibility”.

Read more – The unhappiness principle.