Cheryl’s PGCAP assessment archive …
Cheryl’s Reflections – below are my main reflections for assessment; however, please note that I refer/link to additional reflections (including literature) throughout:
AFL (Sept 2012) for CPD
LTHE (Jan 2012)
Cheryl’s Categories (see left menu) – throughout the PGCAP I have categorised my reflections according to the UK Professional Standards Framework (PSF) where I feel there is evidence of competence and engagement.
Cheryl’s Tags (see left menu) – these indicate the various topic areas covered throughout all my reflections.
PGCAP on Flickr – happy memories
PGCAP on YouTube – a shared experience
PGCAP on Slideshare – looking back
PGCAP on Twitter – keeping in touch
PGCAP on Scoop.it – keeping fresh
In mechanical engineering the students all ask about mechanical systems related to motor vehicles. However, we do not have a vehicle within the building that the students could relate to when they are learning about brakes, engines, dynamic controls, fuel selection (green solutions), weight considerations, introduction of carbon materials, and stress analysis with frame stability.
The University has given us a budget to purchase two motorsport racing cars and a track transport trailer: one car will be used for track testing, and the other kept unassembled in kit form so that individual systems can be independently tested by the students. This will allow students to negotiate their learning by selecting the system they are interested in and testing it as part of a final-year project, enabling each student to achieve their individual learning goals in an authentic environment.
The vehicles will be used primarily in the mechanical engineering group design project work, which is a 30-credit module.
Every student needs to submit a satisfactory piece of work, but as the scenario suggests, ensuring that all students participate may require a range of assessments, including lab practicals, reports and exams.
Although students all have to submit the same components, the negotiation is over the specialism chosen, encouraging authentic assessment influenced by their own career aspirations – hence a fairer assessment.
(1) A student who is good at data logging and data analysis can work on measurement collection and analysis for fuel management systems, or on structural analysis of the stress characteristics of the vehicle chassis.
(2) Another student may be more focussed on mechanical linkages and hydraulic systems, incorporating steering and brakes, to determine efficient control parameters for each.
Both student types therefore still draw on core mechanical engineering practice and theory, but each focuses on a different branch.
Different strokes for different folks
In examples (1) and (2) we can see that we are introducing authentic and negotiated learning.
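To make example (1) concrete, here is a minimal sketch of the kind of data-logging analysis such a student might write – the file name and column headings are hypothetical, assuming speed is logged in m/s against time in seconds:

```python
# Minimal sketch: estimate mean deceleration from a logged braking run.
# File name and column headings ("time_s", "speed_ms") are hypothetical.
import csv

def mean_deceleration(path="brake_test.csv"):
    """Return the mean deceleration (m/s^2) over the logged run."""
    times, speeds = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            times.append(float(row["time_s"]))
            speeds.append(float(row["speed_ms"]))
    # Finite-difference estimate between the first and last samples.
    return (speeds[0] - speeds[-1]) / (times[-1] - times[0])

if __name__ == "__main__":
    print(f"Mean deceleration: {mean_deceleration():.2f} m/s^2")
```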
“Students should experience assessment as a valid measure of their programme outcomes using authentic assessment methods, which are both intrinsically worthwhile and useful in developing their future employability” (HEA, 2012)
“Assessment reform with these aims would benefit from increased involvement of professional, regulatory and statutory bodies; engaging with them to identify how professional and personal capabilities can be evidenced. It would build on existing efforts to design integrative and creative assessment that is more able to determine authentic achievement. It would resist grading performances that cannot easily be measured. It would help students understand the assessment process and develop the skills of self-evaluation and professional judgement. It would enable students to recognise what they have learned and be able to articulate and evidence it to potential employers. Improving assessment in this way is crucial to providing a richer and fairer picture of students’ achievement.” (HEA, 2012)
In accordance with the FHEQ the descriptor most appropriate for the module redesign is level 4 (QAA, 2008).
To facilitate the course redesign, a new course handbook has been developed from one of our current curriculum design handbooks. Below are modifications to existing forms for final assessment marks, as a guide for the supervisor and moderator in preparing feedback, and the marking criteria to be followed.
The Higher Education Academy commissioned a guide to support the higher education sector to think creatively about inclusive curriculum design from a generic as well as subject or disciplinary perspective (HEA, 2010).
The curriculum represents the expression of educational ideas in practice. The word curriculum has its roots in the Latin word for a track or racecourse, from which it came to mean a course of study or syllabus.
Today the definition is much wider and includes all the planned learning experiences of a school or educational institution.
Descriptive Model (Reynolds and Skilbeck, 1976)
Image Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1125124/
An enduring example of a descriptive model is the situational model advocated by Reynolds and Skilbeck (1976), which emphasises the importance of situation or context in curriculum design. In this model, curriculum designers thoroughly and systematically analyse the situation in which they work for its effect on what they do in the curriculum. The impact of both external and internal factors is assessed and the implications for the curriculum are determined.
Situational Model (Reynolds and Skilbeck, 1976)
Image Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1125124/
Although all steps in the situational model (including situational analysis) need to be completed, they do not need to be followed in any particular order. Curriculum design could begin with a thorough analysis of the situation of the curriculum or the aims, objectives, or outcomes to be achieved, but it could also start from, or be motivated by, a review of content, a revision of assessment, or a thorough consideration of evaluation data. What is possible in curriculum design depends heavily on the context in which the process takes place.
Assessment is the process of documenting, usually in measurable terms, knowledge and skills.
Focus can be directed on the individual learner, the learning community, the institution, or the educational system as a whole. Assessment practices depend on the theoretical framework adopted, and on assumptions and beliefs about the nature of knowledge and the process of learning. CIIA (2010) highlights key points, though not all of them can be applied in every circumstance.
Assessment Learning Cycle CIIA (2010)
Image Source: http://pandora.cii.wwu.edu/cii/resources/outcomes/how_assessment_works.asp
Assessment criteria for our mechanical engineering students, who follow their chosen specialism as discussed previously:
(a) Initial literature research on the chosen topic
(b) Testing and data options
(c) Achievable theory objectives, supported by numerical and classical data
(d) Has a goal been achieved, with objective enhancements?
(e) Has an understanding of the fundamentals been achieved, and has the student demonstrated their knowledge?
Before final submission the students will give a presentation to three members of staff, setting out a feasible systems project and demonstrating theory, knowledge and intended outcomes.
At the end of each presentation, feedback will be given using the final supervisor assessment form (as above); if the project lacks a clear objective, or shows insufficient understanding, advice will be given so that changes/improvements can be made.
This feedback could also be given at every stage individually, or at the end of the student’s analysis but before the student writes their final dissertation.
Gibbs (2010) asserts that feedback should be timely and understandable in order for it to make a difference to students’ work – feedback FOR learning, rather than feedback OF learning.
While the redesign of this module has not involved students negotiating their own assessment criteria – only the specialism they choose to follow – consideration should be given to this as it could enhance assessment.
One of the solutions put forward by Boud (1992) is to incorporate a “qualitative and discursive” self-assessment schedule in order to provide a comprehensive and analytical record of learning in situations where students have substantial responsibility for what they do. It is a personal report on learning and achievements which can be used either for students’ own use or as a product which can form part of a formal assessment procedure.
“The issue which the use of a self-assessment schedule addresses is that of finding an appropriate mechanism for assessing students’ work in self-directed or negotiated learning situations which takes account of both the range of what is learned and the need for students to be accountable for their own learning. Almost all traditional assessment strategies fail to meet these criteria as they tend to sample a limited range of teacher-initiated learning and make the assumption that assessment is a unilateral act conducted by teachers on students.” (Boud, 1992)
For many years Ken Robinson has been one of many voices urging a revolution in our education system in order to uncover the talents of individuals. A complete reform of our assessment structure might enable students to perform to their best.
“We have to recognise that human talent is tremendously diverse, people have very different aptitudes. We have to feed their spirit, feed their energy, feed their passion.” (Robinson, 2006)
Boud, D. (1992) The use of self-assessment in negotiated learning. Studies in Higher Education, 17(2). Available online http://www.iml.uts.edu.au/assessment-futures/subjects/Boud-SHE92.pdf
CIIA (2010) Assessment and Outcomes: How Assessment Works. Available online http://pandora.cii.wwu.edu/cii/resources/outcomes/how_assessment_works.asp
HEA (2012) A marked improvement: Transforming assessment in higher education. York: Higher Education Academy. Available online http://www.heacademy.ac.uk/resources/detail/assessment/a-marked-improvement
HEA (2010) Inclusive curriculum design in higher education: Engineering. Available online http://www.heacademy.ac.uk/resources/detail/inclusion/Disability/Inclusive_curriculum_design_in_higher_education
HEFCE (2009) Managing Curriculum Design. JISC publication available online http://www.jisc.ac.uk/media/documents/publications/managingcurriculumchange.pdf
Gibbs, G. (2010) Dimensions of quality. York: Higher Education Academy. Available online http://www.heacademy.ac.uk/assets/documents/evidence_informed_practice/Dimensions_of_Quality.pdf (accessed Dec 2012)
Race, P. (2007) The Lecturer’s Toolkit, 3rd edition. Abingdon: Routledge.
Reynolds, J. & Skilbeck, M. (1976) Culture and the Classroom. London: Open Books. Available online http://books.google.co.uk/books/about/Culture_and_the_classroom.html?id=3woNAQAAIAAJ
Robinson, K. (2006) Bring on the Learning Revolution. Available online http://www.youtube.com/watch?v=r9LelXa3U_I
QAA (2008) The framework for higher education qualifications in England, Wales and Northern Ireland. Available online http://www.qaa.ac.uk/Publications/InformationAndGuidance/Documents/FHEQ08.pdf
Filed under: -PGCAP, -PGCAP AFL, -PGCAP Assessment, A0-Areas of Activity, A1-learning activities, A2-supporting learning, A3-assessment-feedback, A4-learning environments, A5-research & scholarship, K0-Core Knowledge, K1-subject, K2-T&L methods, K3-student learning, K4-learning technologies, K5-evaluation methods, K6-QA/QE in teaching, V0-Professional Values, V1-diverse learners, V2-equality for learners, V3-evidence-informed, V4-wider HE context | Tagged: AFL theory, assessment & feedback, authentic, constructive alignment, course design, experiential learning, inclusive assessment, inclusivity, literature, negotiated, practice, problem-based learning, teaching and learning
Group report by:
Lecturer in Journalism (School of Media, Music and Performance)
Academic Developer (Human Resource Development)
Lecturer in Product Engineering (School of Computing, Science and Engineering)
Despite much effort in recent years to focus staff development and strategic initiatives on assessment and feedback (University of Salford, 2011), the University continues to receive low student satisfaction in this area across many subjects through the National Student Survey (NSS, 2012). The cartoon in the scenario (see Box 1) tells an all too familiar story – across the UK Higher Education (HE) sector, despite increased workloads for tutors, in terms of providing marks and feedback, student satisfaction in the category of assessment and feedback remains generally lower than in other categories (see Box 2).
In finding a solution to this problem, it is important to look at both staff and student perceptions of assessment and feedback: (1) how have tutors become so overworked, and why do they often conclude that students aren’t interested in their feedback – just their mark? And (2) when so much time is spent on providing marks and feedback, why do students convey such dissatisfaction in the NSS?
Many believe that changing HE systems and structures over the last few decades are partly to blame. The post-1990 modularisation of degree programmes may have led to over-assessing through more short-fat (30 credit) and short-thin (10 credit) modules, as opposed to the traditional long-thin (60 credit) term. A recent report by the Higher Education Academy (HEA) states:
“Modularisation has created a significant growth in summative assessment, with its negative backwash effect on student learning and its excessive appetite for resources to deliver the concomitant increase in marking, internal and external moderation, administration and quality assurance” (HEA, 2012).
Due to the very nature of ‘summative’ assessment occurring at the ‘end-point’, modular structures have naturally increased the amount of summative assessment occurring over a single year – typically two modules per semester, and therefore four modules/summative assessment points per year. Traditionally, there would have been only one summative assessment (e.g. an exam) at the end of the year, perhaps allowing for more ‘formative’ assessment and learning throughout. With modularisation, there is little time for students to absorb learning through formative tasks in between summative assessments (Irons, 2008).
Although it is useful to understand the distinctions between ‘formative’ and ‘summative’ assessments when contributing to the assessment and feedback discourse (see Box 3), it does sometimes distract from the bigger picture.
Summative assessment – “any assessment activity which results in a mark or grade which is subsequently used as a judgement on student performance”
Formative assessment – “any task or activity which creates feedback (or feedforward) for students about their learning. Formative assessment does not carry a grade which is subsequently used in a summative assessment”
Formative feedback – “any information, process or activity which affords or accelerates student learning based on comments relating to either formative assessment or summative assessment”
Box 3: Summative vs. formative (Irons, 2008)
Whilst we may accept such definitions and distinctions exist, it is important to remember that they cannot in practice be easily separated from one another, and ‘assessment’ as a whole should frame any strategies for improvement.
Post-modularisation research suggests institutional reviews of assessment practice are needed to consider where improvements can be made to ‘support learning’ (Biggs, 1996; Gibbs & Simpson, 2004, 2005). There is a shift in focus from summative assessment to formative assessment and formative feedback approaches (HEA, 2012) – indeed a shift from ‘assessment of learning’ as a ‘measure’ (quantitative) to ‘assessment for learning’ as ‘developmental’ (qualitative). Therefore, if we are to think about the concept of ‘assessment for learning’ to help resolve this problem, then we need to consider various aspects of current practice.
If, as the scenario suggests, students are dissatisfied with their feedback, when on the other side tutors are working hard to produce marks and feedback, then we can assume that there is something wrong with the feedback and/or a difference in perception of what quality ‘comprehensive’ and ‘meaningful’ feedback looks like.
Timeliness and feed ‘forward’ are key factors.
“Feedback may be backward looking – addressing issues associated with material that will not be studied again, rather than forward-looking and addressing the next study activities or assignments” (Gibbs & Simpson, 2005)
If the feedback is received at the end of a module (backward-looking), and it is not clear to the student how they may use this as a ‘developmental’ tool to improve (forward-looking), then they are unlikely to fully engage.
Students need to be clear about the future benefit of assessment; if assessment and feedback are truly ‘comprehensive’ and ‘meaningful’, then the benefit should be ‘lifelong’, contributing to improvement in the next assessment, module, further study, and/or future employment.
At Salford, our students have indicated that ‘feedback for learning’ is key to improving student satisfaction. A former President of the Students’ Union suggests that more informal (formative) feedback ‘before’ summative assessment is important – a continuous dialogue between student and tutor (see Box 4).
Box 4: Feedback for learning (Dangerfield, 2012)
(Video source: http://youtu.be/zfMCMm1htLY)
The importance of a continuous student-tutor dialogue through interaction on a personal basis is highlighted in the literature, both through early research – ‘Seven principles of good practice in undergraduate education’ (Chickering & Gamson, 1987) and more recently – ‘Dimensions of quality’ (Gibbs, 2010).
However, whilst more personal contact is desirable, large cohorts may present resource issues and therefore we need to be more creative in how we realistically provide more formative feedback opportunities for all our students.
Returning to the concept of ‘assessment’ as a whole, it is important to ensure that formative and summative tasks are designed with each other in mind. This can be achieved through ‘active’ teaching and learning shaped from and explicitly linked to module intended learning outcomes (ILOs) and assessment criteria – constructive alignment (see Box 5).
So, what formative approaches, which support active teaching and learning, could we suggest to help resolve this particular problem? There are several examples in the literature; however, here we have chosen to focus on ‘peer-assessment’, since it – along with more contact time and feedback throughout – is among the assessment methods highlighted in the students’ ‘Charter on Feedback & Assessment’ (see Box 6).
Peer assessment is defined as:
“the process through which groups of individuals rate their peers. This exercise may or may not entail previous discussion or agreement over criteria. It may involve the use of rating instruments or checklists which have been designed by others before the peer assessment exercise, or designed by the user group to meet its particular needs” (Falchikov, 1995)
Peer-assessment is one approach which provides both formative assessment and formative feedback opportunities where students are empowered to take personal ownership not just of their learning, but also of how their learning is assessed. Peer-assessment methods are centred around the student and offer an authentic and lifelong learning experience (Orsmond, 2004).
When peer-assessment methods are designed well, students are positive about the outcomes claiming that it makes them think more, become more critical, learn more and gain in confidence (HEA, 2012).
Of course, peer-assessment methods aren’t always well received by students. Students can feel that they are being made to do the work of their tutor (e.g. marking), which can be highly controversial today – students, often seen as ‘customers’, now pay increased fees and therefore demand to see ‘value for money’ from their tutors. However, in well-designed peer-assessment the ‘value’ can be seen through tutors spending their time providing more formative feedback in-class, clarifying assessment criteria in preparation for peer-assessment activities.
Hughes (2001), in his implementation and evaluation of peer-marking of practical write-ups (>100 pharmacology students), found that such an approach was successful; three factors clearly contributed to this success (see Box 7).
1. Firstly, it was important to clarify to students why they were being asked to peer-mark practical write-ups, emphasising the ‘value’ of this process to them as learners. This was achieved through a preliminary session in which he introduced his students to peer-assessment, encouraging them to see its educational benefits: clarity of assessment criteria, learning from others’ mistakes, benchmarking the standard of one’s own work against the standards achieved elsewhere, and gaining experience of assessing others’ work – a necessary lifelong skill, especially in employment. Non-attendance at the preliminary session resulted in a penalty on their own mark, providing further encouragement to participate.
From the ‘Student Guide to Peer Assessment of Practicals’ (Fry, 1990), under ‘Why are we doing this?’:
The method of marking adopted in this course is designed with the above factors in mind.
2. Secondly, students were not left to their own devices when marking. It was important to provide a clear process and ‘time’ in-class for students to peer-mark together. All students gathered in a lecture theatre, the practical write-ups were distributed at random so as to reduce bias amongst friends (a sketch of such a random allocation follows this box), and tutors were present to provide and further explain the criteria set out on an explicit marking schedule. Again, non-attendance at the marking sessions resulted in penalised marks.
3. Thirdly, students were encouraged to take ownership of their marking – crucially, by signing to accept responsibility for the accuracy of their marking. The students were also made aware that a sample of the practical write-ups would be second-marked by a tutor, and anyone who felt their mark was unfair could request to have their work re-marked by a tutor (fewer than 2% chose to do so).
Box 7: A case study of good practice (Hughes, 2001)
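Hughes does not say how the random distribution in point 2 was carried out; as a minimal sketch under that assumption, write-ups could be shuffled until no student receives their own:

```python
import random

def allocate_scripts(student_ids):
    """Randomly allocate write-ups so that no student marks their own."""
    scripts = list(student_ids)
    while True:
        random.shuffle(scripts)
        # Retry until the shuffle is a derangement (no self-marking).
        if all(owner != marker for owner, marker in zip(scripts, student_ids)):
            return dict(zip(student_ids, scripts))  # marker -> script owner

print(allocate_scripts(["s001", "s002", "s003", "s004"]))
```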
From his comparative study of two first-year cohorts in two consecutive years, Hughes (2001) presents data suggesting that although both cohorts gained similar marks for the first practical write-up, students involved in the peer-marking process improved in their remaining three practical write-ups, obtaining consistently better marks than the students who were not involved. Also, the tutor-marked sample did not reveal significantly different marks from those awarded during peer-marking. These data suggested three things: (1) students were learning how to improve their practical write-ups through the peer-marking process; (2) peer-marking did not result in a lowering of standards when compared with tutor-marking; and (3) peer-marking resulted in a reduction in staff time.
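As an illustration of the second finding only – the paired marks below are invented, and this is our sketch rather than Hughes’s analysis – comparing peer marks against a tutor-marked sample might look like this:

```python
from statistics import mean

def mark_agreement(peer_marks, tutor_marks):
    """Mean signed and mean absolute difference between paired marks."""
    diffs = [p - t for p, t in zip(peer_marks, tutor_marks)]
    return mean(diffs), mean(abs(d) for d in diffs)

# Hypothetical second-marked sample: peer mark vs tutor mark per script.
bias, spread = mark_agreement([62, 58, 71, 66], [60, 59, 70, 68])
print(f"mean difference {bias:+.2f}, mean absolute difference {spread:.2f}")
```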
The findings from Hughes (2001) are consistent with other peer-assessment studies in the sciences (Orsmond, 2004) which contribute to best practice guidance (see Box 8).
To conclude, peer-assessment is encouraged nationally as one way to engage students in a dialogue about their learning, and it would seem an ideal approach to encourage across the University to help resolve the problem identified here.
“Encouraging self- and peer assessment, and engaging in dialogue with staff and peers about their work, enables students to learn more about the subject, about themselves as learners, as well as about the way their performance is assessed” (HEA, 2012).
Although formative feedback through peer assessment doesn’t necessarily reduce a tutor’s workload, it does have the potential to be valued by students and to be well received when it comes to the NSS. Staff can therefore be motivated and enthused that students appreciate the work they put into assessment and feedback processes, rather than, as in the scenario, there being dissatisfaction on both sides. Students and tutors as partners in learning.
Biggs, J. (1996) Assessing learning quality: reconciling institutional, staff and educational demands. Assessment & Evaluation in Higher Education, 12(1): 5-15. Available online http://www.tandfonline.com/doi/abs/10.1080/0260293960210101 (accessed Nov 2012)
Biggs, J. & Tang, C. (2007) Teaching for Quality Learning at University, Buckingham, Open University Press.
Chickering, A.W. & Gamson, Z.F. (1987) Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), pp. 3–7.
Dangerfield, C. (2012) Food for thought (5): Feedback for learning with Caroline Dangerfield. Available at http://youtu.be/zfMCMm1htLY (accessed Dec 2012)
Falchikov, N. (2003) Involving Students in Assessment. Psychology Learning and Teaching, 3(2), 102-108. Available online http://www.pnarchive.org/docs/pdf/p20040519_falchikovpdf.pdf (accessed Nov 12).
Falchikov, N. (1995) Peer feedback marking: developing peer assessment, Innovations in Education and Training International, 32, pp. 175-187.
Fry, S. (1990) Implementation and evaluation of peer marking in Higher Education. Assessment and Evaluation in Higher Education, 15: 177-189.
Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students’ learning, Learning and Teaching in Higher Education, vol. 1. pp.1-31. Available online http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/issue1.pdf#page=5 (accessed Nov 2012)
Gibbs, G. & Simpson, C. (2005) Does your assessment support your students’ learning? Learning and Teaching in Higher Education, 1. Available online http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.201.2281&rep=rep1&type=pdf (accessed Dec 2012)
Gibbs, G. (2010) Dimensions of quality. Higher Education Academy: York. Available online http://www.heacademy.ac.uk/assets/documents/evidence_informed_practice/Dimensions_of_Quality.pdf (accessed Dec 2012)
HEA (2012) A marked improvement: Transforming assessment in higher education, York, Higher Education Academy. Available online http://www.heacademy.ac.uk/resources/detail/assessment/a-marked-improvement (accessed Nov 2012).
HEFCE (2012) National Student Survey. Available online http://www.hefce.ac.uk/whatwedo/lt/publicinfo/nationalstudentsurvey/ (accessed Dec 2012).
Hughes, I. E. (2001) But isn’t this what you’re paid for? The pros and cons of peer- and self-assessment. Planet, 2, 20-23. Available online http://www.gees.ac.uk/planet/p3/ih.pdf (accessed Nov 2012).
Irons, A. (2008) Enhancing Learning through Formative Assessment and Feedback. London: Routledge.
NSS (2012) The National Student Survey. Available online http://www.thestudentsurvey.com/ (accessed Dec 2012).
NUS (2010) Charter on Feedback & Assessment, National Union of Students. Available online http://www.nusconnect.org.uk/news/article/highereducation/720/ (accessed Nov 2012)
Orsmond, P. (2004) Self- and Peer-Assessment: Guidance on Practice in the Biosciences, Teaching Bioscience: Enhancing Learning Series, Centre for Biosciences, The Higher Education Academy, Leeds. Available online http://www.bioscience.heacademy.ac.uk/ftp/teachingguides/fulltext.pdf (accessed Nov 2012)
University of Salford (2011) Transforming Learning and Teaching: Learning & Teaching Strategy 2012-2017. Available online http://www.adu.salford.ac.uk/html/aspire/aspire.html (accessed Dec 2012).
Filed under: -PGCAP, -PGCAP AFL, -PGCAP Assessment, A0-Areas of Activity, A1-learning activities, A2-supporting learning, A3-assessment-feedback, A4-learning environments, A5-research & scholarship, K0-Core Knowledge, K1-subject, K2-T&L methods, K3-student learning, K4-learning technologies, K5-evaluation methods, K6-QA/QE in teaching, V0-Professional Values, V1-diverse learners, V2-equality for learners, V3-evidence-informed, V4-wider HE context | Tagged: AFL theory, assessment & feedback, course design, development, feedback, formative assessment, inclusive assessment, learning theories, literature, peer assessment, problem-based learning, summative assessment, teaching and learning
The issues this school is facing, outlined in the image above, are multiple, but we would assert that ultimately one main problem is central: the school appears to lack confidence in the assessment and marking process because the marks the students are receiving are below the average for the sector.
Of course within every school, different subject areas work to different criteria in the marking process, but what is not clear here is whether each department even has any criteria which the students are working towards, and it’s even less clear if departments are working to an agreed set of standards.
In this paper we will work towards ascertaining if the school’s use of rubrics is at the heart of the concerns of staff over the marks students are receiving.
However, even if we assume that a lack of, or poor use of, rubrics is a fundamental part of the problem, it also seems likely that other factors are at play, including less able students and less effective teaching methods. But even when these elements are considered alongside a lack of coherence in the use of rubrics, what cannot be overlooked is that ever stricter adherence to rubrics across departments is not the answer to every problem.
The University’s aim is informed by the guidance in the QAA UK Quality Code for Higher Education: Assessment of Students and Accreditation of Prior Learning. The establishment of rubrics within universities is likewise informed by QAA guidance, in line with local and national guidelines:
- All assessments should have a marking scheme and marking criteria developed in line with University grade descriptors.
- Assessments are awarded either a percentage mark (University scale 0-100%) or a pass/fail grade.
- The University marking scale provides brief grade descriptors; subject-specific marking criteria should be developed to align with these (a sketch of such alignment follows).
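As a minimal sketch of that alignment – the criteria, weights and grade bands below are hypothetical, not University policy – subject-specific criteria can be combined into a mark on the 0-100 scale and mapped to a grade descriptor:

```python
# Hypothetical subject-specific rubric: criterion -> weight (sums to 1.0).
RUBRIC = {"literature review": 0.2, "methodology": 0.3,
          "analysis": 0.3, "presentation": 0.2}

# Hypothetical grade descriptors on the 0-100 University scale.
GRADE_BANDS = [(70, "excellent"), (60, "very good"),
               (50, "good"), (40, "pass")]

def overall_mark(scores):
    """Weight per-criterion scores (0-100) into one overall mark."""
    return sum(weight * scores[criterion]
               for criterion, weight in RUBRIC.items())

def grade_descriptor(mark):
    return next((label for cutoff, label in GRADE_BANDS if mark >= cutoff),
                "fail")

mark = overall_mark({"literature review": 65, "methodology": 72,
                     "analysis": 58, "presentation": 60})
print(mark, grade_descriptor(mark))  # 64.0 very good
```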
The guidelines are there to assist in our quality assurance methods, but also to make sure the assessment process is transparent and clear for our students. What is crucial is that students know what they are working towards – that it is not some big secret, and that we, as lecturers, are not trying to catch them out.
Biggs points out that the processes of teaching and assessing have to be inherently linked – to relate intrinsically to each other:
“‘Constructive alignment’ has two aspects. The ‘constructive’ aspect refers to the idea that students construct meaning through relevant learning activities. The ‘alignment’ aspect refers to what the teacher does, which is to set up a learning environment that supports the learning activities appropriate to achieving the desired learning outcomes. The key is that the components in the teaching system, especially the teaching methods used and the assessment tasks, are aligned with the learning activities assumed in the intended outcomes.” (Biggs, 2003)
Making use of a rubric or feedback chart or grid would appear to be the most open and transparent way for students to learn. The assessment criteria are inherently linked to the learning outcomes – one should inform the other. The concept of students working towards undisclosed or “secret” learning outcomes would appear to contradict the very point of learning. In addition, as “experts” we set the learning outcomes because they are integral to what we want our students to learn. And if the learning outcomes are not clear to the students, are they really clear to the lecturers?
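One small, practical expression of that link – sketched below with hypothetical outcome codes and tasks – is simply to check that every intended learning outcome is assessed somewhere:

```python
# Hypothetical ILO codes and the tasks claimed to assess them.
ILOS = {"LO1", "LO2", "LO3", "LO4"}
ASSESSMENT_TASKS = {
    "essay": {"LO1", "LO2"},
    "group project": {"LO2", "LO3"},
    "exam": {"LO1", "LO3"},
}

# Constructive alignment check: flag outcomes no task assesses.
assessed = set().union(*ASSESSMENT_TASKS.values())
missing = ILOS - assessed
if missing:
    print(f"Outcomes with no aligned assessment: {sorted(missing)}")  # ['LO4']
```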
While rubrics should play an integral part in how we assess our students, there will always be an element of subjectivity in our assessments. Bloxham (2007) argues that the responsibility for grading is largely down to the subjective judgement of tutors and other markers, and that there are two main marking categories: norm-referenced and criterion-referenced.
Criterion-referenced assessment is tested against a set of criteria, such as those linked to the learning outcomes for the assignment. In criterion-referenced assessment all students have an opportunity to do equally well. According to Price (2005), this system is generally considered desirable on the basis that it is fairer for students to know how they will be judged, and to have that judgement based on the quality of their work rather than on the performance of other members of their cohort.
Higher education marking relies on a combination of judgement against criteria and the application of standards which are heavily influenced by academic norms.
“While we can share assessment criteria, certainly at the level of listing what qualities will be taken into account, applying standards is much harder to accomplish and relies on interpretation in context, and therein lies one of the key components of unreliability.” (Bloxham & Boyd, 2007)
To this point we have made some of the arguments for the importance of rubrics in assessment – essentially that students have a right to know what we, the lecturers, are assessing them on. The issue of transparency is one which is high on the educational agenda, and to this end it can appear that there is an increasing reliance on rubrics in our HE system.
According to A Marked Improvement from the Higher Education Academy, assessment is at the heart of many challenges facing HE.
“A significantly more diverse student body in relation to achievement, disability, prior education and expectations of HE has put pressure on retention and standards.” (HEA, 2012)
Yet despite our increasing reliance on rubrics in order to ensure transparency, the National Student Survey continues to show that students express concerns about the reliability of assessment criteria.
Equally concerning is a sense that assessment practices, which desperately need to mirror the demands of the workforce, are not reflecting what employers want.
“There is a perception, particularly among employers, that HE is not always providing graduates with the skills and attributes they require to deal successfully with a complex and rapidly changing world: a world that needs graduates to be creative, capable of learning independently and taking risks, knowledgeable about the work environment, flexible and responsive.” (HEA, 2012)
In Ken Robinson’s incredibly innovative Changing Education Paradigms, he outlines why we need to change how we teach and assess in order to meet the needs of a changing society and a different age.
So does an ever-increasing reliance on rubrics, for the sake of transparency, mean we are assessing too narrowly and failing to take account of the essential and valuable learning which takes place in spite of the formal learning outcomes?
According to sociologist Frank Furedi, in his paper ‘The Unhappiness Principle’, learning outcomes have become too prescriptive and are a “corrosive influence on higher education.” Furedi argues that a strict adherence to learning outcomes devalues the actual experience of education and deprives teaching and learning of meaning:
“The attempt to abolish ambiguity in course design is justified on the grounds that it helps students by clarifying the overall purpose of their programme and of their assessment tasks. But it is a simplistic form of clarity usually associated with bullet points, summary and guidance notes. The precision gained through the crystallisation of an academic enterprise into a few words is an illusory one that is likely to distract students from the clarity that comes from serious study and reflection.”
In essence, Furedi’s concerns reflect fears amongst academics that prescriptive learning outcomes allow students less freedom to learn through the process of learning, for the sake of learning. Instead, an increasing number of students simply want to know what they have to do to pass the assessment.
But Furedi’s views also hark back to a time when assessments were not open and transparent, and students’ marks were based on unknown criteria, subject only to an academic’s arbitrary judgement.
It would seem to us, in conclusion, that while Furedi’s views strike a chord with many academics who relish learning for the sake of learning, and who rail against the QAA’s insistence on measuring and marking every aspect of learning with an increasing use of rubrics, transparency in student assessment remains crucial.
For the School of X it is likely that there is a lack of coherence in how rubrics are designed and used. For rubrics to be effective and fair across departments there needs to be a shared understanding of standards.
The HEA suggests that the onus has to be on academics to work collectively to ensure consistency in marking. Assessment standards should be socially constructed in a process which actively engages both staff and students. In addition, as previously pointed out, we must also have confidence in the judgements of professionals.
“Academic, disciplinary and professional communities should set up opportunities and processes such as meetings, workshops and groups to regularly share exemplars and discuss assessment standards. These can help ensure that educators, practitioners, specialists and students develop shared understandings and agreement about relevant standards.”
We believe the School of X needs to have confidence in standards across subject areas, but it also needs to consider if the comparisons between different universities are reasonable ones. Ultimately the performance of students at this level will always be influenced by their own ability, and to that end the School of X might want to consider educational gain as a better measure.
According to Biggs: “This matters because the best predictor of product is the quality of the students entering the institution, and the quality of students varies greatly between institutions, so that if you only have a measure of product, such as degree classifications, rather than gains, then you cannot easily interpret differences between institutions.”
QAA (2011) UK Quality Code for Higher Education, Chapter B6: Assessment of Students and Accreditation of Prior Learning.
Bloxham, S. (2009) Marking and moderation in the UK: false assumptions and wasted resources.
Bloxham, S. & Boyd, P. (2007) Chapter 6: Marking, pp.81 – 102 in Developing Effective Assessment in Higher Education. Maidenhead: Open University Press.
Biggs, J. (2003) Aligning Teaching for Constructive Learning. York: Higher Education Academy.
Bridges (1999) ‘Are we missing the mark?’ Times Higher Education published 3rd September 1999
HEA (2012) A marked improvement: Transforming assessment in higher education. York: Higher Education Academy.
Knight, P (2007) Grading, classifying and future learning, pp.72-86 in Boud, D. & Falchikov, N. (2007) Rethinking Assessment in Higher Education, London: Routledge
Forsyth, G. (2012) How we measure. Available online http://www.flickr.com/photos/gforsythe/7102055531/ (accessed December 2012)
Furedi, F. (2012) The Unhappiness Principle. Times Higher Education Supplement.
Orr, S. (2007) Assessment moderation: constructing the marks and constructing the students.
Price, M. (2005) Assessment – What is the answer? Oxford Brookes University Business School
Robinson, K. (2010) The Element. How Finding your Passion Changes Everything. Penguin
Yorke, M. (2000) Grading: The subject dimension.
Filed under: -PGCAP, -PGCAP AFL, -PGCAP Assessment, A0-Areas of Activity, A1-learning activities, A2-supporting learning, A3-assessment-feedback, A4-learning environments, A5-research & scholarship, K0-Core Knowledge, K1-subject, K2-T&L methods, K3-student learning, K4-learning technologies, K5-evaluation methods, K6-QA/QE in teaching, V0-Professional Values, V1-diverse learners, V2-equality for learners, V3-evidence-informed, V4-wider HE context | Tagged: AFL theory, assessment & feedback, constructive alignment, course design, criteria, feedback, formative assessment, inclusive assessment, literature, problem-based learning, quality, rubrics, summative assessment, teaching and learning
More great videos from the PGCAP Food for Thought series for learning and reflection at: http://www.youtube.com/user/pgcapsalford/videos?query=food+for+thought
During recent weeks on the AFL module we’ve discussed the perceived value of learning outcomes. We’ve asked: how can we prescribe outcomes, when learning is different for everyone, depending on where they start from in the first place, and indeed where they intend to go? Can, or should, we actually guarantee anything as an outcome of learning? Can the presence of outcomes as a ‘check list’ of learning discourage creativity, exploration, and discovery in learning? Does the absence of learning outcomes provide a ‘mystique’ which encourages creativity, exploration, and discovery? Does such ‘mystique’ foster deeper learning, or merely confusion and student dissatisfaction?
Furedi (2012) argues that learning outcomes “disrupt the conduct of the academic relationship between teacher and student”, “foster a climate that inhibits the capacity of students and teachers to deal with uncertainty”, “devalue the art of teaching”, and “breed a culture of cynicism and irresponsibility”.
Read more – The unhappiness principle.
Filed under: -PGCAP, -PGCAP AFL, A0-Areas of Activity, K0-Core Knowledge, V0-Professional Values | Tagged: AFL theory, assessment & feedback, constructive alignment, course design, learning outcomes, teaching and learning