Individual reflection (AFL)

Throughout the Postgraduate Certificate in Academic Practice (PGCAP), I was encouraged to use various models to facilitate my reflective practice (Gibbs, 1988; Moon, 2004). When I first started to use such models, they really helped me to focus my thoughts. It’s interesting to feel that I’m now relatively comfortable and confident in reflective writing, no longer needing to explicitly structure my reflections using such models, although the questions are continuously in my head – progress I think!

Why AFL, when I’ve already done the PGCAP!

As a lifelong learner, I’ve never needed much of an excuse to participate in continuous professional development (CPD). However, my main motivation for joining the Assessment and Feedback for Learning (AFL) module after completing the PGCAP was strongly linked to my role as Academic Developer.

In the past, my role focused on supporting academic colleagues to use learning technologies to enhance their learning, teaching, and assessment practice; more recently, however, the role has broadened into supporting learning and teaching in Higher Education (HE) more generally, including curriculum and assessment design. AFL, I felt, was an excellent development opportunity to strengthen my ability to support colleagues in their academic practice as a whole, rather than specialising in Technology-Enhanced Learning (TEL).

Improving staff and student perceptions of assessment and feedback is a strategic goal identified locally within the University’s Learning and Teaching Strategy (University of Salford, 2011), aligned with national initiatives to transform assessment in UK HE (HEA, 2012), and staff development in this area is therefore a high priority. By being part of the AFL community, and by listening to the ‘stories’ of colleagues from various contexts across the University, I can better understand the ‘feelings’ and ‘beliefs’ of my academic colleagues relating to their assessment and feedback practice.

Before joining the PGCAP as a ‘student’, I was part of the PGCAP programme team as tutor on the Application of Learning Technologies (ALT) module. Whilst I really enjoyed this aspect of my role, I valued the importance of experiencing the whole PGCAP learning journey myself, and of course achieving the PGCAP certification and Fellowship of the Higher Education Academy (FHEA). As the ALT module was my first tutoring experience, I was new to assessment and feedback practice, and felt that I had a lot to learn with regards to marking and providing feedback. Although I was often praised for being very helpful and constructive in my feedback, I did tend to give lots of it following submission (taking a long time to mark!), and that was an area I wanted to explore – how much feedback should I give, and when and how should I give it? I also experienced disparity between my marking and that of others within the module team (highlighted during moderation), and so wanted to explore – how subjective could I be, how much and why did I need to ‘stick to a marking grid’, and how specific does a marking grid or ‘rubric’ need to be? Although the PGCAP started to explore this, I felt there was more to learn, hence my further interest in AFL.

AFL – the beginning

It seems not so long ago that I was at the beginning of a new learning journey, starting as part of a small group all committed to spending the next three months sharing reflections, values, and experiences of assessment and feedback through a process of Problem-Based Learning (PBL). In some ways it was a continuation of the learning journey from the core module with familiar faces, which was ‘comforting’, although there were some new faces (including our tutor), which provided an element of ‘freshness’ and new insights. Having already experienced wonderful learning through the PGCAP, and passed its assessment, I felt more ‘relaxed’ about the assessment on AFL and able to focus more on the learning.

The first reflective task was the ‘feel-o-meter’; first defining themes relevant to my own learning journey (see post Back behind the wheel of AFL), and then deciding where I felt my current knowledge and experience was on the scale. It was encouraging to be able to map out my own ‘learning labels’ most relevant to my context.


Cheryl’s Wheel of AFL – Beginning

AFL – the PBL journey

At the start, I was feeling relatively comfortable with learning through PBL, and quietly excited. Throughout the PGCAP I’d participated in various learning activities which adopted a PBL approach, and I’d started to reflect on how I might embed this into my own practice (see post Reflection 6/6 – Developing my PBL practice). Having the chance to experience first hand, as a ‘student’, a PBL curriculum linked to assessment was a great opportunity to continue this reflection.

I felt my familiarity with PBL was an advantage, in that I wasn’t too concerned about learning in this way, and as my peers on the module were also continuing their PGCAP I assumed that we were all at the same place in terms of our understanding. Reflecting now, I don’t think this was the case. Half of the group, who were part of my core module cohort, may have recalled the brief introduction to PBL, but they may not have furthered their learning through reflections for assessment; the other half of the group weren’t part of my previous cohort, and so their core module syllabus may have been different. Also, my peers and I were from various contexts, and those from the science disciplines may have been less familiar with such approaches than those from health and/or educational disciplines. This unfamiliarity with PBL, I felt, contributed to a slow start from my PBL group, and looking back, perhaps I should have made more of an effort to guide my peers along a more structured PBL process (Barrett and Cashman, 2010), although I also didn’t want to appear the ‘know-all’ of the group.


7 Step PBL Process Guide (Barrett and Cashman, 2010)
Image source:
http://www.ucd.ie/t4cms/ucdtli0041.pdf

I realise that PBL can be adapted to suit the given context, and that sometimes it’s unnecessary to impose a tight structure. Although I agree that there should be flexibility and agreement between group members, I think that proposing a model to work with would have helped everyone’s understanding. Perhaps an early PBL activity (not linked to assessment) or pre-induction readings/discussion would have helped. One of the case studies in A Practitioners’ Guide to Enquiry and Problem-based Learning (Barrett and Cashman, 2010) shows an example of effective staff development for lecturers in PBL, and is an excellent resource for practitioners. Whilst there’s no time for extensive staff development in a ten-week module, I’m sure more could be done to ensure the cohort is relatively comfortable with the process before starting the scenarios linked to assessment.

Of course, there are some aspects of this particular cohort that made the PBL scenarios particularly challenging. Best practice guidelines on PBL, or similar approaches, suggest a PBL group has 5-8 members, and that practical roles (e.g. Chair, Reader, Timekeeper, Scribe) in addition to scenario-specific roles (e.g. Tutor, Student, School Manager) are assigned (Barrett and Moore, 2011). However, the small cohort of six meant that two groups of three were created, making it difficult to assign roles – everyone had to do everything, which made things very intense and time consuming. On reflection, I think it would have been preferable to have one group of six. However, the added (perhaps unnecessary) complication of peer assessing another group’s learning meant that at least two PBL groups needed to be in place.

Also, there were the practical frustrations of varying study schedules (including pressure to work over the Christmas break), and the feeling that there was too much to do, with varying opinions amongst the group on which particular problems/solutions to focus on. The scenarios each gave a list of ‘possible topics’ and initially we were trying to (un)cover them all (constrained by a word limit). Once we’d established, with much appreciated guidance from our tutor as ‘facilitator’, that we didn’t need to look at everything, we then seemed to spend too much time discussing which topic to focus on. For example, the scenarios suggested looking at technology-enhanced assessment (TEA); I was less keen, considering this was something I was already familiar with through my own practice, and therefore not an area for my personal development.

Another frustration was the varying preferences relating to how we were to present the ‘solutions’. I feel that with everyone trying to contribute to the same thing, the scenarios became ‘messy’ and ‘inconsistent’ in terms of writing style, referencing, and media input. Perhaps if we’d taken the time to identify skills within the team we may have produced something more ‘polished’.

On the positive side, our tutor was a real help, in particular during the tutor input sessions, which gave us, as a whole cohort, the opportunity to discuss themes related to each scenario across various contexts – science, engineering, life sciences, journalism, education, health and social care. Our tutor was open to what we discussed, whilst also keeping us ‘on track’ if we moved off topic. Looking at the PBL literature, that’s exactly the nature of the tutor’s role – more facilitative – where the focus is on students learning, rather than tutors teaching (Barrett and Moore, 2011), and striking a balance between being dominant and passive – “dominant tutors in the group hinder the learning process, but the quiet or passive tutor who is probably trying not to teach also hinders the learning process” (Dolmans et al., 2005).

During PBL, it is important for the tutor as ‘facilitator’ to ensure adequate time is allocated for constructive feedback throughout the process (Barrett and Moore, 2011). Although our tutor did provide this, it was difficult to agree as a group when we needed the feedback. I would have preferred to grant our tutor access to the three scenarios in-progress from the start, allowing him to provide feedback throughout. However, other group members preferred to wait until nearer the end. I found this quite frustrating, particularly as one aspect of our learning was around the importance of a continuous student-tutor dialogue, and feedback as being formative and ‘developmental’ (HEA, 2012).

Although our tutor shared with us a wealth of knowledge and experiences himself, which was very interesting and useful, he was also more than happy to take a step back and listen to our experiences as practitioners, providing thought-provoking statements and questioning – helping us to unravel the scenarios and assisting us in recognising our own wealth of prior knowledge (Barrett and Moore, 2011). In one sense I would have liked more input from our tutor (just because I enjoyed his ‘stories’ so much), yet in another sense it was great that he left us to get on with the group-work. As mature practice-based students, we were able to remain relatively motivated, and continued to meet up regularly even when there weren’t any face-to-face sessions. However, I feel this was partly down to there being only three of us – we simply had to get on with it, and there was no room for anyone to take a back seat, even if there were other professional and/or personal commitments. I was heartened that, having experienced a family bereavement towards the end of the PBL group-work, both my tutor and peers understood that I needed a little time away, although again, due to the small group, I felt pressured (only by myself) to return as soon as possible and get on, so as not to disrupt or delay the group-submission too much.

Despite the challenges I feel that my PBL group worked well together, and at least met the deadline in producing the group’s solutions, and in the end I think we all learnt something, not only about the scenarios themselves and about our collective experience of assessment and feedback practice, but about the potential feelings and frustrations of our students when thrown into similar assessment processes – worthwhile experiential learning (Kolb, 1984).

AFL – the learning journey

Rubrics & the Secret to Grading

(Cartoon from www.wisepedagogy.com)

PBL1 – What’s in a mark?
(see PBL1 group-submission)

As a staff developer within academic practice, I’m often engaged in discussions with colleagues regarding assessment and feedback. For the most part, I design and facilitate staff development relating to TEL. However, I’ve often felt that I needed to widen my knowledge across various disciplines rather than being limited to my own specialist area, enabling me to better support and advise my colleagues. For example, a recent strategic initiative to roll out e-Assessment institution-wide has led to staff development in this area, including the use of e-Submission and e-Marking/e-Feedback tools. Throughout, I’ve met with some resistance towards the use of electronic rubrics, which seemed to stem from a general resistance to the use of rigid criteria and standards – whether paper-based or electronic.

The PBL1 group-work and related tutor input sessions helped me to take a step back and reflect on why my academic colleagues held such strong, often negative, views on the development and use of assessment criteria linked to learning outcomes. My own views were relatively positive: such standards provide transparency for both students and tutors in terms of what is being assessed and how. How could anyone argue with that?

Throughout the initial weeks on AFL, I soon began to further understand where these negative viewpoints came from, and a lot had to do with changing HE systems and structures. In the old days of University education, when the opportunity for furthering education through HE was restricted to an elite few, the University experience was very different. The ‘academic’ was deemed the all-knowing authority, and learning was ‘passive’ – a transfer of knowledge from the expert to the novice, with a focus on the acquisition of expert knowledge rather than experience or skill. The current culture is very different: there is an increased and diverse student body paying high fees as ‘customers’, students and tutors are deemed partners in learning, and learning is ‘active’, providing opportunity for students to co-construct their own knowledge collaboratively through both theory and practice (HEA, 2012). In terms of assessment and feedback, there was once more scope for individual ‘academic’ subjectivity; today, however, HE institutions are under the scrutiny of the Quality Assurance Agency (QAA) to ensure consistent and transparent assessment and feedback practices (QAA, 2011).

This has led to a perceived obsession with bureaucratic processes linked with compliance and standards – learning outcomes, criteria – rather than a focus on ‘learning’, which some believe is lost through the bureaucracy. Furedi (2012) in his article ‘The unhappiness principle’ argues that learning outcomes “disrupt the conduct of the academic relationship between teacher and student”, “foster a climate that inhibits the capacity of students and teachers to deal with uncertainty”, “devalues the art of teaching”, and “breeds a culture of cynicism and irresponsibility”. The article certainly sparked some debate, some of which we talked about on AFL. How can we prescribe outcomes, when learning for everyone is different depending on where they start from in the first place, and indeed where they intend to go? Can, or should, we actually guarantee anything as an outcome of learning? Can the presence of outcomes as a ‘check list’ of learning discourage creativity, exploration, and discovery in learning? Or does the absence of learning outcomes provide a ‘mystique’ that encourages creativity, exploration, and discovery? Does such ‘mystique’ foster deeper learning, or merely confusion and student dissatisfaction?

Through collaborative exploration during both the PBL group-work and discussions with the whole cohort, it was clear that to some degree consistency, transparency, and constructive alignment of assessment criteria with learning outcomes are important factors, not just to satisfy QAA requirements, but also to ensure validity, reliability and fairness to both tutors and students as partners in learning (University of Salford, 2012). It is also important to acknowledge and respect that there will always be an element of subjectivity, as responsibility for marking is largely down to the subjective judgement of tutors and other markers (Bloxham and Boyd, 2007).

In terms of how this relates to my own practice, I feel the conclusions are relevant in that I need to work with academic colleagues to further understand their assessment and feedback practices. During AFL I came across some real examples of criteria and/or rubrics in practice at Salford which, although not fully aligned with University guidance (e.g. the use of A-F grades), do seem to work, in that staff, students, and external examiners have praised some of the practice. As a staff developer I need to appreciate and fully understand current practice, and work with colleagues to facilitate a ‘community’ amongst programme and subject teams, enabling them to work collectively, socially constructing a set of shared standards to ensure valid, reliable, and fair assessment and feedback practice.

PBL2 – Where’s my feedback, dude?
(see PBL2 group-submission)

Although as a PBL group all three of us contributed to each scenario, PBL2 was where I contributed much of the initial literature searching and writing, perhaps because I saw this particular scenario as most relevant to my current practice. Recently, I have been working closely with academic colleagues to try and unravel why assessment and feedback continues to receive lower student satisfaction than any other area, both locally within the institution and nationally across UK HE (HEA, 2012; HEFCE, 2012).

Before my PBL group could effectively start to look at current practice, it was useful to take a step back and reflect upon what changes in HE may have impacted on how staff and students perceive assessment and feedback. As a relatively ‘young’ learner, I have experienced a ‘modular’ University education (1997-present), and as such hadn’t really thought extensively about University education pre-1990s, and particularly how the introduction of a modular system created a significant growth in summative assessment (HEA, 2012). This certainly echoes some of the opinions of colleagues who speak about over-assessing, and resonates with some of the recent changes in policy – a move from 15 to 30 credit modules.

Also, a common response to the poor performance in the National Student Survey (NSS) is that students don’t realise when they’re receiving feedback (Boud, 2012). At one time I may have gone along with this simplistic viewpoint, however through exploration of the PBL2 scenario, I have developed my understanding and ideas around how institutions can start to address the issues surrounding assessment and feedback.

Research suggests that improvements in assessment and feedback can be achieved through reviewing practice, making changes to course and curriculum design, and questioning what feedback is and how it is useful (Boud, 2012). Timeliness, and ensuring students are able to act on feedback to develop future work, are key factors which align with research on ‘assessment for learning’ (Biggs, 1996; Gibbs & Simpson, 2004, 2005; Walker, 2012), as opposed to ‘assessment of learning’.

Another important factor for improvement is ensuring a continuous dialogue to facilitate development, and whilst there is a preference for more personal contact between student and tutor (Gibbs, 2010), there is also a need to provide more opportunity for dialogue between students themselves, and to involve them as partners in learning and in deciding how they are assessed. Within my PBL group, peer-assessment, as an active teaching and learning approach, was explored as one way of achieving this (Falchikov, 1995; 2005; Fry, 1990; Hughes, 2001; Orsmond, 2004).

The staff development sessions I’ve facilitated relating to assessment and feedback, both as part of my practice and during the PGCAP (see post Reflection 4/6 – Tutor observation), have sometimes felt uncomfortable and intense because the discussion is linked to ‘poor performance’ in the NSS. I feel that working with my PBL group in a relaxed, shared ‘learning’ environment has enabled further exploration of varied practice, strengthening my ability and confidence to facilitate these sessions in the future.

PBL3 – A module assessment redesign
(see PBL3 group-submission)

PBL3 was unlike the others, in that the group had to agree upon an area of practice where an assessment redesign was necessary, and use the scenario of a seemingly unfair assessment to inform the redesign. All group members had ideas for redesign from various contexts, however it was decided to focus on the redesign of a module in mechanical engineering as this linked with aspects of ‘authentic’ and ‘negotiated’ assessment, which seemed most relevant to ‘inclusive’ assessment and the scenario.

Through exploring these areas to inform the redesign, I have developed my understanding of inclusive assessment and feedback practice both in terms of enabling (1) authentic assessment which reflects ‘real-world’ practice, and (2) negotiated assessment relating to choice of assessment topic, learning outcomes/criteria, or the medium in which the student presents their learning.

The redesign of the mechanical engineering module to enable students to experience the practical elements of testing individual systems was one way of achieving authenticity, as is providing work experience and placement opportunities. Programmes and modules in areas such as health and social care are examples of where authenticity is central to learning. On the PGCAP, students engage in teaching observations, providing learners with an insight into professional practice across various disciplines. However, these examples require either money and/or a working partnership with employers, or an established practice-based approach.

“As increasing numbers of students enter higher education with the primary hope of finding employment, there is a pressure to ensure that assessment can, at least in part, mirror the demands of the workplace or lead to skills that are relevant for a range of ‘real world’ activities beyond education, but this has been largely unreflected in the reform of assessment within many disciplines” (HEA, 2012).

As someone who has always studied part-time whilst working full-time in an area directly related to my study, I’ve had the opportunity to link theory with practice. However, I often reflect on how undergraduate students studying full-time without work experience manage to apply their knowledge to practice with a view to improving their employability.

Therefore, my personal interest is in what methods foster an authentic learning experience, where, without the budget to buy expensive kit or provide placements, authentic assessment can still be achieved. Interestingly, PBL is one such method: students are provided with real-world, authentic problems to work on in groups, tasked to produce a collaborative report as an authentic assessment task. This provides authenticity not only in relation to the problems themselves but also to the skills required to work within the group – problem-solving, teamwork, communication, time management, presentation of work in various formats – all skills of value to the employer (Bloxham and Boyd, 2007).

The aspect of ‘negotiated learning’ was also of interest. By providing students with the opportunity to negotiate their own learning path through choice of learning outcomes, assessment criteria, topic, or format, students become involved, taking responsibility for their own learning and assessment (Boud, 1992). I have in the past been involved in negotiated learning where, as in the mechanical engineering module redesign, the negotiation was linked to the choice of topic (e.g. the ALT module); however, the concept of taking this further by enabling students to negotiate learning outcomes and/or assessment criteria is an area I’d like to explore further to inform future practice. The PBL3 group’s exploration of the use of assessment schedules or ‘learning contracts’ is a spring-board to further reading in this area.

“The negotiated learning contract is potentially one of the most useful tools available to those interested in promoting flexible approaches to learning. A learning contract is able to address the diverse learning needs of different students and may be designed to suit a variety of purposes both on course and in the workplace” (Anderson and Boud, 1996).

Although the final group-submissions of the scenarios weren’t as ‘polished’ as I’d have liked, and in the end I had to ‘let go’, I do feel that the learning journey as a whole has benefited me hugely in terms of ‘sharing’ and constructing ‘new knowledge’, which has helped me to achieve my learning goals. I feel more able to support my colleagues in their wider academic practice relating to assessment and feedback, and more confident in my own assessment and feedback practice as a module tutor. Time and ‘experience’ will tell!


Cheryl’s Wheel of AFL – End

Action plan

  • Staff development (Colleges) – Work with programme teams and/or subject areas to facilitate a ‘community’ and sharing of assessment and feedback practices, aiming towards developing a shared set of standards.
  • Module tutor (PGCAP) – Work with PGCAP programme team, developing my own assessment and feedback practices through becoming part of a ‘community’ and sharing of assessment and feedback practices, aiming towards developing a shared set of standards.
  • CPD (AFL) – Further develop my understanding of assessment and feedback practices by further reading and co-facilitating AFL in the future, having the opportunity to listen to more ‘stories’ of colleagues from various contexts across the University.
  • CPD (PBL) – Further develop my own PBL practice through further reading and by acting as a PBL ‘facilitator’ on the AFL module, and participate as a learner on the Flexible, Distance and Online Learning (FDOL) open course to continue the PBL student experience, this time in an online and global context.

References

Anderson, G. and Boud, D. (1996) Introducing Learning Contracts: A Flexible Way to Learn. Innovations in Education & Training International, 33(4). Available online http://www.tandfonline.com/doi/abs/10.1080/1355800960330409 (accessed Jan 2013).

Barrett, T. and Cashman, D. (Eds) (2010) A Practitioners’ Guide to Enquiry and Problem-based Learning. Dublin: UCD Teaching and Learning. Available online http://www.ucd.ie/t4cms/ucdtli0041.pdf (accessed Jan 2013).

Barrett, T. and Moore, S. (2011) New approaches to problem-based learning: revitalising your practice in higher education. New York, Routledge.

Biggs, J. (1996) Assessing learning quality: reconciling institutional, staff and educational demands. Assessment & Evaluation in Higher Education, 21(1): 5-15. Available online http://www.tandfonline.com/doi/abs/10.1080/0260293960210101 (accessed Jan 2013).

Bloxham, S. and Boyd, P. (2007) Developing Effective Assessment in Higher Education: A Practical Guide. Maidenhead: Open University Press.

Boud, D. (1992) The Use of Self-Assessment in Negotiated Learning. Studies in Higher Education, 17(2). Available online http://www.iml.uts.edu.au/assessment-futures/subjects/Boud-SHE92.pdf

Boud, D. (2012) A transformative activity. Times Higher Education (THE). 6th September 2012. Available online http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=421061 (accessed Jan 2013).

Dolmans, D. H. J. M., De Grave, W., Wolfhagen, I. H. A. P. and Van Der Vleuten, C. P. M. (2005), Problem-based learning: future challenges for educational practice and research. Medical Education, 39: 732–741. Available online http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2929.2005.02205.x/full (accessed Jan 2013).

Falchikov, N. (2003) Involving Students in Assessment. Psychology Learning and Teaching, 3(2), 102-108. Available online http://www.pnarchive.org/docs/pdf/p20040519_falchikovpdf.pdf (accessed Jan 2013).

Falchikov, N. (1995) Peer feedback marking: developing peer assessment, Innovations in Education and Training International, 32, pp. 175-187.

Fry, S. (1990) Implementation and evaluation of peer marking in Higher Education. Assessment and Evaluation in Higher Education, 15: 177-189.

Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students’ learning, Learning and Teaching in Higher Education, vol. 1, pp. 1-31. Available online http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/issue1.pdf#page=5 (accessed Jan 2013).

Gibbs, G. & Simpson, C. (2005) Does your assessment support your students’ learning? Learning and Teaching in Higher Education, 1. Available online http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.201.2281&rep=rep1&type=pdf (accessed Jan 2013).

Gibbs, G. (2010) Dimensions of quality. Higher Education Academy: York. Available online http://www.heacademy.ac.uk/assets/documents/evidence_informed_practice/Dimensions_of_Quality.pdf (accessed Jan 2013)

Gibbs G (1988) Learning by Doing: A guide to teaching and learning methods. Further Education Unit. Oxford Polytechnic: Oxford.

HEA (2012) A Marked improvement: Transforming assessment in higher education, York, Higher Education Academy. Available online http://www.heacademy.ac.uk/resources/detail/assessment/a-marked-improvement (accessed Jan 2013).

HEFCE (2012) National Student Survey. Available online http://www.hefce.ac.uk/whatwedo/lt/publicinfo/nationalstudentsurvey/ (accessed Jan 2013).

Hughes, I. E. (2001) But isn’t this what you’re paid for? The pros and cons of peer- and self-assessment. Planet, 2, 20-23. Available online http://www.gees.ac.uk/planet/p3/ih.pdf (accessed Jan 2013).

Furedi, F. (2012) The unhappiness principle. Times Higher Education (THE). 29th November 2012. Available online http://www.timeshighereducation.co.uk/story.asp?storycode=421958 (accessed Jan 2013)

Kolb, D. (1984) Experiential Learning: experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall.


Moon J. (2004) A Handbook of Reflective and Experiential Learning, Routledge Falmer.

Orsmond, P. (2004) Self- and Peer-Assessment: Guidance on Practice in the Biosciences, Teaching Bioscience: Enhancing Learning Series, Centre for Biosciences, The Higher Education Academy, Leeds. Available online http://www.bioscience.heacademy.ac.uk/ftp/teachingguides/fulltext.pdf (accessed Jan 2013).

QAA (2011) Quality Code – Chapter B6: Assessment of students and accreditation of prior learning. Available online http://www.qaa.ac.uk/Publications/InformationAndGuidance/Pages/quality-code-B6.aspx (accessed Jan 2013).

University of Salford (2011) Transforming Learning and Teaching: Learning & Teaching Strategy 2012-2017. Available online http://www.hr.salford.ac.uk/employee-development-section/salford-aspire (accessed Jan 2013).

University of Salford (2012) University Assessment Handbook: A guide to assessment design, delivery and feedback. Available online http://www.hr.salford.ac.uk/cms/resources/uploads/File/UoS%20assessment%20handbook%20201213.pdf (accessed Jan 2013).

Walker, D. (2012) Food for thought (20): Feedback for learning with Dr David Walker. Available online http://www.youtube.com/watch?v=DNu3fMMNQlw (accessed Jan 2013).

PBL3 group submission – A module assessment redesign?

Problem

In mechanical engineering, the students all ask about mechanical systems related to motor vehicles. However, we do not have a vehicle within the building that the students could relate to when they are learning about brakes, engines, dynamic controls, fuel selection (green solutions), weight considerations, introduction of carbon materials, and stress analysis with frame stability.

The University has given us a budget to purchase two motorsport racing cars and a track transport trailer, to use one for track testing and keep the other un-assembled in kit form, so that individual systems can be independently tested by the students. This will allow students to negotiate their learning by selecting the system they are interested in and testing it as part of a final year project, enabling each student to achieve their individual learning goals in an authentic environment.

The vehicles will be used primarily in the mechanical engineering group design project, which is a 30 credit module.

Every student needs to input a satisfactory piece of work, but as the scenario suggests, a way in which all students participate may include a range of assessments including lab practicals, reports and exams.

Although students all have to submit the same components, the negotiation is over the specialism chosen, encouraging authentic assessment influenced by their own career aspirations – hence a fairer assessment.

(1) A student who is good at data logging and data analysis can work on measurement collection and analysis of fuel management systems, or on structural analysis of the stress characteristics of the vehicle chassis.
(2) Another student may be more focussed on mechanical linkages and hydraulic systems, incorporating steering and brakes, to determine efficient control parameters for steering and braking.
Both students therefore still cover mechanical engineering practice and theory, but each focuses on a different branch.

Different strokes for different folks

In (1) and (2) we can see that we are introducing authentic and negotiated learning.

“Students should experience assessment as a valid measure of their programme outcomes using authentic assessment methods, which are both intrinsically worthwhile and useful in developing their future employability” (HEA, 2012)

“Assessment reform with these aims would benefit from increased involvement of professional, regulatory and statutory bodies; engaging with them to identify how professional and personal capabilities can be evidenced. It would build on existing efforts to design integrative and creative assessment that is more able to determine authentic achievement. It would resist grading performances that cannot easily be measured. It would help students understand the assessment process and develop the skills of self-evaluation and professional judgement. It would enable students to recognise what they have learned and be able to articulate and evidence it to potential employers. Improving assessment in this way is crucial to providing a richer and fairer picture of students’ achievement.” (HEA, 2012)

In accordance with the FHEQ the descriptor most appropriate for the module redesign is level 4 (QAA, 2008).

To facilitate the course redesign, a new course handbook has been developed from one of our current curriculum design handbooks. Below are modifications to existing forms for final assessment marks, as a guide for the supervisor and moderator in preparing feedback, along with the marking criteria to be followed.


Final supervisor assessment form

Final Report Moderator Marking Proforma

The Higher Education Academy commissioned a guide to support the higher education sector to think creatively about inclusive curriculum design from a generic as well as subject or disciplinary perspective (HEA, 2010).

The curriculum represents the expression of educational ideas in practice. The word curriculum has its roots in the Latin word for a track or race course; from there it came to mean a course of study or syllabus.
Today the definition is much wider and includes all the planned learning experiences of a school or educational institution.

Descriptive Model (Reynolds and Skilbeck, 1976)
Image Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1125124/

An enduring example of a descriptive model is the situational model advocated by Reynolds and Skilbeck (1976), which emphasises the importance of situation or context in curriculum design. In this model, curriculum designers thoroughly and systematically analyse the situation in which they work for its effect on what they do in the curriculum. The impact of both external and internal factors is assessed and the implications for the curriculum are determined.

Situational Model (Reynolds and Skilbeck, 1976)
Image Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1125124/

Although all steps in the situational model (including situational analysis) need to be completed, they do not need to be followed in any particular order. Curriculum design could begin with a thorough analysis of the situation of the curriculum or the aims, objectives, or outcomes to be achieved, but it could also start from, or be motivated by, a review of content, a revision of assessment, or a thorough consideration of evaluation data. What is possible in curriculum design depends heavily on the context in which the process takes place.

Assessment is the process of documenting, usually in measurable terms, criteria such as knowledge and skills.

Focus can be directed at the individual learner, the learning community, the institution, or the educational system as a whole. The assessment practices adopted depend on the theoretical framework in use, and on assumptions and beliefs about knowledge and the process of learning. CIIA (2010) highlights key points, although not all of them can be used in every circumstance.

Assessment Learning Cycle CIIA (2010)
Image Source: http://pandora.cii.wwu.edu/cii/resources/outcomes/how_assessment_works.asp

Assessment criteria for our mechanical engineering students, who follow their chosen specialism, as discussed previously:
(a) Initial literature research on the chosen topic
(b) Testing and data options
(c) Achievable theory objectives with numerical and classical data
(d) Has a goal been achieved, with objective enhancements?
(e) Has an understanding of the fundamentals been achieved, and has the student’s knowledge been demonstrated?

Before final submission the students will give a presentation to three members of staff, presenting a feasible systems project and demonstrating theory, knowledge and intended outcomes.
At the end of each presentation, feedback will be given using the final supervisor assessment form (above); if the project lacks a clear driving objective, or shows insufficient understanding, then advice will be given so that changes/improvements can be made.
This feedback could also be given at every stage individually, or at the end of the student’s analysis but before they write their final dissertation.

Gibbs (2010) asserts that feedback should be timely and understandable in order for it to make a difference to students’ work – feedback FOR learning, rather than feedback OF learning.

While the redesign of this module has not involved students negotiating their own assessment criteria – only the specialism they choose to follow – consideration should be given to this as it could enhance assessment.

One of the solutions put forward by Boud (1992) is to incorporate a “qualitative and discursive” self-assessment schedule in order to provide a comprehensive and analytical record of learning in situations where students have substantial responsibility for what they do. It is a personal report on learning and achievements which can be used either for students’ own use or as a product which can form part of a formal assessment procedure.

“The issue which the use of a self-assessment schedule addresses is that of finding an appropriate mechanism for assessing students’ work in self-directed or negotiated learning situations which takes account of both the range of what is learned and the need for students to be accountable for their own learning. Almost all traditional assessment strategies fail to meet these criteria as they tend to sample a limited range of teacher-initiated learning and make the assumption that assessment is a unilateral act conducted by teachers on students.” (Boud, 1992)

For many years Ken Robinson has been one of many voices urging a revolution in our education system in order to uncover the talents of individuals. A complete reform of our assessment structure might enable students to perform to their best.

“We have to recognise that human talent is tremendously diverse, people have very different aptitudes. We have to feed their spirit, feed their energy, feed their passion.” (Robinson, 2006)


References

Boud, D. (1992) The Use of Self-Assessment in Negotiated Learning. Studies in Higher Education, 17(2). Available online http://www.iml.uts.edu.au/assessment-futures/subjects/Boud-SHE92.pdf

CIIA (2010) Assessment and Outcomes: How Assessment Works.  Available online http://pandora.cii.wwu.edu/cii/resources/outcomes/how_assessment_works.asp

HEA (2012) A marked improvement: Transforming assessment in higher education, York, Higher Education Academy. Available online
http://www.heacademy.ac.uk/resources/detail/assessment/a-marked-improvement

HEA (2010) Inclusive curriculum design in higher education: Engineering.  Available online http://www.heacademy.ac.uk/resources/detail/inclusion/Disability/Inclusive_curriculum_design_in_higher_education

HEFCE (2009) Managing Curriculum Design.  JISC publication available online http://www.jisc.ac.uk/media/documents/publications/managingcurriculumchange.pdf

Gibbs, G. (2010) Dimensions of quality. Higher Education Academy: York. Available online http://www.heacademy.ac.uk/assets/documents/evidence_informed_practice/Dimensions_of_Quality.pdf (accessed Dec 2012)

Race, P. (2007) The Lecturer’s Toolkit (3rd edition). Abingdon, Oxon: Routledge.

Reynolds J, Skilbeck M. (1976) Culture and the classroom. London: Open Books.  Available online http://books.google.co.uk/books/about/Culture_and_the_classroom.html?id=3woNAQAAIAAJ

Robinson, K. (2006) Bring on the Learning Revolution. Available online http://www.youtube.com/watch?v=r9LelXa3U_I

QAA (2008) The framework for higher education qualifications in England, Wales and Northern Ireland. Available online http://www.qaa.ac.uk/Publications/InformationAndGuidance/Documents/FHEQ08.pdf

PBL2 group submission – Where’s my feedback, dude?

Group report by

Caroline Cheetham
Lecturer in Journalism (School of Media, Music and Performance)
Cheryl Dunleavy
Academic Developer (Human Resource Development)
Philip Walker
Lecturer in Product Engineering (School of Computing, Science and Engineering)

Despite much effort in recent years to focus staff development and strategic initiatives on assessment and feedback (University of Salford, 2011), the University continues to receive low student satisfaction in this area across many subjects through the National Student Survey (NSS, 2012). The cartoon in the scenario (see Box 1) tells an all too familiar story – across the UK Higher Education (HE) sector, despite increased workloads for tutors, in terms of providing marks and feedback, student satisfaction in the category of assessment and feedback remains generally lower than in other categories (see Box 2).

Where's my feedback, dude?

Box 1: Scenario
(Image Source: http://www.health.heacademy.ac.uk/rp/publications/occasionalpaper/occp11.pdf)
In the context of the University’s regulations on the provision of feedback, identify the problems in the cartoon and investigate, though the use of relevant literature, how we can improve feedback for student learning.


Box 2: 2012 NSS results for the UK (HEFCE, 2012)
(Image source: http://www.hefce.ac.uk/whatwedo/lt/publicinfo/nationalstudentsurvey/)

In finding a solution to this problem, it is important to look at both staff and student perceptions of assessment and feedback – (1) how have tutors become so overworked? Why do they often conclude that students aren’t interested in their feedback – just their mark? and (2) when so much time is spent on providing marks and feedback, why do students convey such dissatisfaction in the NSS?

Many believe that changing HE systems and structures over the last few decades are partly to blame. The post-1990 modularisation of degree programmes may have led to over-assessing through more short-fat (30 credit) and short-thin (10 credit) modules, as opposed to the traditional long-thin (60 credit) term. A recent report by the Higher Education Academy (HEA) states:

“Modularisation has created a significant growth in summative assessment, with its negative backwash effect on student learning and its excessive appetite for resources to deliver the concomitant increase in marking, internal and external moderation, administration and quality assurance” (HEA, 2012).

Due to the very nature of ‘summative’ assessment occurring at the ‘end-point’, modular structures have naturally increased the amount of summative assessment occurring over a single term – typically two modules per semester, therefore four modules/summative assessment points per term. Traditionally, there would have been only one summative assessment (e.g. exam) at the end of each term perhaps allowing for more ‘formative’ assessment and learning throughout. With modularisation, there is little time for students to absorb learning through formative tasks in between summative assessments (Irons, 2008).

Although it is useful to understand the distinctions between ‘formative’ and ‘summative’ assessments when contributing to the assessment and feedback discourse (see Box 3), it does sometimes distract from the bigger picture.

Summative assessment is “any assessment activity which results in a mark or grade which is subsequently used as a judgement on student performance”

Formative assessment is “any task or activity which creates feedback (or feedforward) for students about their learning. Formative assessment does not carry a grade which is subsequently used in a summative assessment”

Formative feedback is “any information, process or activity which affords or accelerates student learning based on comments relating to either formative assessment or summative assessment”

Box 3: Summative vs. Formative (Irons, 2008)

Whilst we may accept such definitions and distinctions exist, it is important to remember that they cannot in practice be easily separated from one another, and ‘assessment’ as a whole should frame any strategies for improvement.

Post-modularisation research suggests institutional reviews of assessment practice are needed to consider where improvements can be made to ‘support learning’ (Biggs, 1996; Gibbs & Simpson, 2004, 2005). There is a shift in focus from summative assessment to formative assessment and formative feedback approaches (HEA, 2012) – indeed a shift from ‘assessment of learning’ as a ‘measure’ (quantitative) to ‘assessment for learning’ as ‘developmental’ (qualitative). Therefore, if we are to use the concept of ‘assessment for learning’ to help resolve this problem, then we need to consider various aspects of current practice.

If, as the scenario suggests, students are dissatisfied with their feedback, when on the other side tutors are working hard to produce marks and feedback, then we can assume that there is something wrong with the feedback and/or a difference in perception of what quality ‘comprehensive’ and ‘meaningful’ feedback looks like.

Timeliness and feed ‘forward’ are key factors.

“Feedback may be backward looking – addressing issues associated with material that will not be studied again, rather than forward-looking and addressing the next study activities or assignments” (Gibbs & Simpson, 2005)

If the feedback is received at the end of a module (backward-looking), and it is not clear to the student how they may use this as a ‘developmental’ tool to improve (forward-looking), then they are unlikely to fully engage.

The student needs to be clear of the future benefit of assessment, therefore if assessment and feedback is truly ‘comprehensive’ and ‘meaningful’ then the benefit should be ‘lifelong’ contributing to an improvement in the next assessment, module, further study, and/or future employment.

At Salford, our students have indicated that ‘feedback for learning’ is key to improving student satisfaction. A former President of the Students’ Union suggests that more informal (formative) feedback ‘before’ summative assessment is important – a continuous dialogue between student and tutor (see Box 4).

Box 4: Feedback for learning (Dangerfield, 2012)
(Video source: http://youtu.be/zfMCMm1htLY)

The importance of a continuous student-tutor dialogue through interaction on a personal basis is highlighted in the literature, both through early research – ‘Seven principles of good practice in undergraduate education’ (Chickering & Gamson, 1987) and more recently – ‘Dimensions of quality’ (Gibbs, 2010).

However, whilst more personal contact is desirable, large cohorts may present resource issues and therefore we need to be more creative in how we realistically provide more formative feedback opportunities for all our students.

Returning to the concept of ‘assessment’ as a whole, it is important to ensure that formative and summative tasks are designed with each other in mind.  This can be achieved through ‘active’ teaching and learning shaped from and explicitly linked to module intended learning outcomes (ILOs) and assessment criteria – constructive alignment (see Box 5).

Box 5: Constructive alignment (Biggs & Tang, 2007)
(Image source: http://www.ucdoer.ie/images/3/3c/Aligned-curriculum-model.gif)

So, what formative approaches, which support active teaching and learning, could we suggest to help resolve this particular problem? There are several examples in the literature; here, however, we have chosen to focus on ‘peer-assessment’, as this – along with more contact time, feedback throughout, and a variety of assessment methods – has been highlighted in the Student ‘Charter on Feedback & Assessment’ (see Box 6).


Box 6: Student ‘Charter on Feedback & Assessment’ (NUS, 2010)
(Image source: http://www.nusconnect.org.uk/news/article/highereducation/720/)

Peer assessment is defined as:

“the process through which groups of individuals rate their peers. This exercise may or may not entail previous discussion or agreement over criteria. It may involve the use of rating instruments or checklists which have been designed by others before the peer assessment exercise, or designed by the user group to meet its particular needs” (Falchikov, 1995)

Peer-assessment is one approach which provides both formative assessment and formative feedback opportunities where students are empowered to take personal ownership not just of their learning, but also of how their learning is assessed.  Peer-assessment methods are centred around the student and offer an authentic and lifelong learning experience (Orsmond, 2004).

When peer-assessment methods are designed well, students are positive about the outcomes claiming that it makes them think more, become more critical, learn more and gain in confidence (HEA, 2012).

Of course, peer-assessment methods aren’t always well received by students. Students can feel that they are being made to do the work of their tutor (e.g. marking), which can be highly controversial today – students, often seen as ‘customers’, now pay increased fees and therefore demand to see ‘value for money’ from their tutors. However, in well-designed peer-assessment the ‘value’ can be seen through tutors spending their time providing more formative feedback in-class, clarifying assessment criteria in preparation for peer-assessment activities.

Hughes (2001), in his implementation and evaluation of peer-marking of practical write-ups (>100 pharmacology students), found that such an approach was successful; it is clear that three factors contributed to this success (see Box 7).

1 Firstly, it was important to clarify to students why they were being asked to peer-mark practical write-ups, emphasising the ‘value’ of this process to them as learners. This was achieved through a preliminary session where he introduced his students to peer-assessment, helping to encourage them to see its educational benefits, including clarity of assessment criteria, learning from others’ mistakes, benchmarking the standard of their own work against the standards achieved elsewhere, and gaining experience of assessing others’ work – a necessary lifelong skill, especially in employment. Non-attendance at the preliminary session resulted in a penalty on their own mark, therefore providing further encouragement to participate.

Student Guide to Peer Assessment of Practicals (Fry, 1990): Why are we doing this?
You should get several things out of this method of assessment which may be new to you:

  • It is an open marking system; therefore you can see what was required and how to improve your work.
  • You see mistakes others make and therefore can avoid them; you also see the standard achieved by others and can set your own work in the spectrum of marks.
  • You get a full explanation of the practical and how you should have processed the data and done the discussion. Therefore your information and understanding is improved.
  • You get practice in assessing others and their work. You will need this skill quite early in a career and you will need to come to terms with the problem of bias; someone who is a good friend may have done poor work; it can be disturbing to have to give them a poor mark.
  • In assessing others you should acquire the ability to stand back from your own work and assess that as well. This is an essential ability in a scientist; an unbiased and objective assessment of the standards you have achieved in your own work. Once you are away from the teacher/pupil relationship (i.e. leave University) you will be the person who decides if a piece of work is good enough to be considered as finished and passed on to your boss.

The method of marking adopted in this course is designed with the above factors in mind.

2 Secondly, students were not left to their own devices when marking. It was important to provide a clear process and ‘time’ in-class for students to peer-mark together. All students gathered in a lecture theatre, the practical write-ups were distributed at random so as to reduce bias amongst friends, and tutors were present to provide and further explain criteria set out on an explicit marking schedule. Again, non-attendance at the marking sessions resulted in penalised marks.

3 Thirdly, students were encouraged to take ownership of their marking by crucially signing to accept responsibility for the accuracy of their marking. The students were also made aware that a sample of the practical write-ups would be second-marked by a tutor, and anyone who felt their mark was unfair could request to have their work re-marked by a tutor (less than 2% chose to do so).

Box 7: A case study of good practice (Hughes, 2001)

From his comparative study of two first-year cohorts in two consecutive years, Hughes (2001) presents data which suggests that although both cohorts gained similar marks for the first practical write-up, students involved in the peer-marking process improved in their remaining three practical write-ups, obtaining consistently better marks than the students who were not involved. Also, the tutor-marked sample did not reveal significantly different marks from those awarded during peer-marking. This data suggested three things: (1) students were learning how to improve their practical write-ups through the peer-marking process, (2) peer-marking did not result in a lowering of standards when compared with tutor-marking, and (3) peer-marking resulted in a reduction in staff time.

The findings from Hughes (2001) are consistent with other peer-assessment studies in the sciences (Orsmond, 2004) which contribute to best practice guidance (see Box 8).


Box 8: Stages in carrying out and evaluating a self or peer assessment study (Falchikov, 2003)
Image source: http://www.pnarchive.org/docs/pdf/p20040519_falchikovpdf.pdf)

To conclude, nationally peer-assessment is encouraged as one way to engage students in a dialogue about their learning, and would seem an ideal approach to encourage across the University to help resolve the problem identified here.

“Encouraging self- and peer assessment, and engaging in dialogue with staff and peers about their work, enables students to learn more about the subject, about themselves as learners, as well as about the way their performance is assessed” (HEA, 2012).

Although formative feedback through peer assessment doesn’t necessarily reduce a tutor’s workload, it does have the potential to be valued by students and to be well received when it comes to the NSS.  Staff can therefore be motivated and enthused that students appreciate the work they put into assessment and feedback processes, rather than, as in the scenario, both sides being left dissatisfied.  Students and tutors as partners in learning.

References

Biggs, J. (1996) Assessing learning quality: reconciling institutional, staff and educational demands.  Assessment & Evaluation in Higher Education, 21(1): 5-15.  Available online http://www.tandfonline.com/doi/abs/10.1080/0260293960210101 (accessed Nov 2012)

Biggs, J. & Tang, C. (2007) Teaching for Quality Learning at University, Buckingham, Open University Press.

Chickering, A.W. & Gamson, Z.F. (1987) Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), pp.3–7.

Dangerfield, C. (2012) Food for thought (5): Feedback for learning with Caroline Dangerfield.  Available at http://youtu.be/zfMCMm1htLY (accessed Dec 2012)

Falchikov, N. (2003) Involving Students in Assessment, Psychology Learning and Teaching, 3(2), 102-108.  Available online http://www.pnarchive.org/docs/pdf/p20040519_falchikovpdf.pdf (accessed Nov 2012).

Falchikov, N. (1995) Peer feedback marking: developing peer assessment, Innovations in Education and Training International, 32, pp.175-187.

Fry, S. (1990) Implementation and evaluation of peer marking in Higher Education. Assessment and Evaluation in Higher Education, 15: 177-189.

Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students’ learning, Learning and Teaching in Higher Education, vol. 1. pp.1-31. Available online http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/issue1.pdf#page=5 (accessed Nov 2012)

Gibbs, G. & Simpson, C. (2005) Does your assessment support your students’ learning? Learning and Teaching in Higher Education, 1.  Available online http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.201.2281&rep=rep1&type=pdf (accessed Dec 2012)

Gibbs, G. (2010) Dimensions of quality.  Higher Education Academy: York.  Available online http://www.heacademy.ac.uk/assets/documents/evidence_informed_practice/Dimensions_of_Quality.pdf (accessed Dec 2012)

HEA (2012) A marked improvement: Transforming assessment in higher education, York, Higher Education Academy.  Available online http://www.heacademy.ac.uk/resources/detail/assessment/a-marked-improvement (accessed Nov 2012).

HEFCE (2012) National Student Survey.  Available online http://www.hefce.ac.uk/whatwedo/lt/publicinfo/nationalstudentsurvey/ (accessed Dec 2012).

Hughes, I. E. (2001) But isn’t this what you’re paid for? The pros and cons of peer- and self-assessment. Planet, 2, 20-23.  Available online http://www.gees.ac.uk/planet/p3/ih.pdf (accessed Nov 2012).

Irons, A. (2008) Enhancing Learning through Formative Assessment and Feedback. London: Routledge.

NSS (2012) The National Student Survey.  Available online http://www.thestudentsurvey.com/ (accessed Dec 2012).

NUS (2010) Charter on Feedback & Assessment, National Union of Students.  Available online http://www.nusconnect.org.uk/news/article/highereducation/720/ (accessed Nov 2012)

Orsmond, P. (2004) Self- and Peer-Assessment: Guidance on Practice in the Biosciences, Teaching Bioscience: Enhancing Learning Series, Centre for Biosciences, The Higher Education Academy, Leeds.  Available online http://www.bioscience.heacademy.ac.uk/ftp/teachingguides/fulltext.pdf (accessed Nov 2012)

University of Salford (2011) Transforming Learning and Teaching: Learning & Teaching Strategy 2012-2017.  Available online http://www.adu.salford.ac.uk/html/aspire/aspire.html (accessed Dec 2012).

PBL1 group submission – What’s in a mark?

The issues this school is facing, outlined in the image above, are multiple, but we would assert that ultimately one main problem is central – the school appears to lack confidence in the assessment and marking process because the marks the students are receiving are below the average for the sector.

Of course within every school, different subject areas work to different criteria in the marking process, but what is not clear here is whether each department even has any criteria which the students are working towards, and it’s even less clear if departments are working to an agreed set of standards.

In this paper we will work towards ascertaining if the school’s use of rubrics is at the heart of the concerns of staff over the marks students are receiving.

However, even if we assume that a lack of, or poor use of, rubrics is a fundamental part of the problem, it also seems likely that other factors might be at play, including less able students and less effective teaching methods. But even if these elements are considered alongside a lack of coherence in the use of rubrics, what cannot be overlooked is that strict adherence across departments to ever-stricter rubrics is not the answer to every problem.

The University’s aim is informed by the guidance in the QAA UK Quality Code for Higher Education: Assessment of Students and Accreditation of Prior Learning (QAA, 2011).

The establishment of rubrics within universities is informed by the guidance of the QAA. In line with local and national guidelines:

  • All assessments should have a marking scheme and marking criteria developed in line with University grade descriptors.
  • Each assessment is awarded either a percentage mark (University scale 0–100%) or a pass/fail grade.
  • The University marking scale provides brief grade descriptors (subject-specific marking criteria should be developed to align with these).

The guidelines are there to assist in our quality assurance methods, but also to make sure the assessment process is transparent and clear for our students. What is crucial is that students know what they are working towards – that it’s not some big secret and that we, as lecturers, are not trying to catch them out.

How We Measure, a photo by giulia.forsythe on Flickr (Forsythe, 2012)

Biggs points out that the processes of teaching and assessing have to be inherently linked – to relate intrinsically to each other:

“‘Constructive alignment’ has two aspects.  The ‘constructive’ aspect refers to the idea that students construct meaning through relevant learning activities. The ‘alignment’ aspect refers to what the teacher does, which is to set up a learning environment that supports the learning activities appropriate to achieving the desired learning outcomes. The key is that the components in the teaching system, especially the teaching methods used and the assessment tasks, are aligned with the learning activities assumed in the intended outcomes.” (Biggs, 2003)

Making use of a rubric or feedback chart or grid would appear to be the most open and transparent way for students to learn. The assessment criteria are inherently linked to the learning outcomes – one should inform the other. The concept of students working towards undisclosed or “secret” learning outcomes would appear to contradict the very point of learning. In addition, as “experts” we set the learning outcomes because they are integral to what we want our students to learn. And if the learning outcomes are not clear to the students, are they really clear to the lecturers?

While rubrics should play an integral part in how we assess our students, there will always be an element of subjectivity in our assessments. Bloxham and Boyd (2007) argue that the responsibility for grading is largely down to the subjective judgement of tutors and other markers, and that there are two main marking categories: norm-referenced and criterion-referenced.

Criterion-referenced assessment is tested against a set of criteria such as those linked to the learning outcomes for the assignment. In criterion-referenced assessment all students have an opportunity to do equally well. According to Price (2005) this system is generally considered desirable on the basis that it is fairer for students to know how they will be judged and to have those judgements based on the quality of their work rather than on the performance of other members of their cohort.

Higher education marking relies on a combination of judgement against criteria and the application of standards which are heavily influenced by academic norms.

“While we can share assessment criteria, certainly at the level of listing what qualities will be taken into account, applying standards is much harder to accomplish and relies on interpretation in context, and therein lies one of the key components of unreliability.” (Bloxham & Boyd, 2007)

To this point we have made some of the arguments for the importance of rubrics in assessment – essentially that students have a right to know what we, the lecturers, are assessing them on. The issue of transparency is one which is high on the educational agenda, and to this end it can appear that there is an increasing reliance on rubrics in our HE system.

According to A Marked Improvement from the Higher Education Academy, assessment is at the heart of many challenges facing HE.

“A significantly more diverse student body in relation to achievement, desirability, price, education and expectations of HE has put pressure on retention and standards.”

Yet despite our increasing reliance on rubrics in order to ensure transparency, the National Student Survey continues to show that students express concerns about the reliability of assessment criteria.

Equally concerning is a sense that assessment practices, which desperately need to mirror the demands of the workforce, are not reflecting what employers want.

“There is a perception, particularly among employers, that HE is not always providing graduates with the skills and attributes they require to deal successfully with a complex and rapidly changing world: a world that needs graduates to be creative, capable of learning independently and taking risks, knowledgeable about the work environment, flexible and responsive.” (HEA, 2012)

In his incredibly innovative Changing Education Paradigms, Ken Robinson outlines why we need to change how we teach and assess in order to meet the needs of a changing society and a different age.

So does an ever-increasing reliance on rubrics, for the sake of transparency, mean we are assessing too narrowly and failing to take account of the essential and valuable learning which takes place in spite of the formal learning outcomes?

According to the sociologist Frank Furedi in his paper The Unhappiness Principle, learning outcomes have become too prescriptive and are a “corrosive influence on higher education.” Furedi argues that a strict adherence to learning outcomes devalues the actual experience of education and deprives teaching and learning of meaning:

“The attempt to abolish ambiguity in course design is justified on the grounds that it helps students by clarifying the overall purpose of their programme and of their assessment tasks. But it is a simplistic form of clarity usually associated with bullet points, summary and guidance notes. The precision gained through the crystallisation of an academic enterprise into a few words is an illusory one that is likely to distract students from the clarity that comes from serious study and reflection.”

In essence, Furedi’s concerns reflect fears amongst academics that prescriptive learning outcomes allow students less freedom to learn through the process of learning, for the sake of learning. Instead, an increasing number of students simply want to know what they have to do to meet the assessment criteria.

But Furedi’s views also hark back to a time when assessments were not open and transparent, and students’ marks were based on unknown criteria and subject only to an academic’s arbitrary judgement.

It would seem to us, in conclusion, that while Furedi’s views strike a chord with many academics who relish learning for the sake of learning, and rail against the QAA’s insistence on measuring and marking every aspect of learning with an increasing use of rubrics, transparency in student assessment remains crucial.

For the School of X it is likely that there is a lack of coherence in how rubrics are designed and used. For rubrics to be effective and fair across departments there needs to be a shared understanding of standards.

The HEA suggests that the onus has to be on academics to work collectively to ensure consistency in marking. Assessment standards should be socially constructed in a process which actively engages both staff and students. In addition, as previously pointed out, we must also have confidence in the judgements of professionals.

“Academic, disciplinary and professional communities should set up opportunities and processes such as meetings, workshops and groups to regularly share exemplars and discuss assessment standards. These can help ensure that educators, practitioners, specialists and students develop shared understandings and agreement about relevant standards.”

We believe the School of X needs to have confidence in standards across subject areas, but it also needs to consider if the comparisons between different universities are reasonable ones. Ultimately the performance of students at this level will always be influenced by their own ability, and to that end the School of X might want to consider educational gain as a better measure.

According to Biggs: “This matters because the best predictor of product is the quality of the students entering the institution, and the quality of students varies greatly between institutions, so that if you only have a measure of product, such as degree classifications, rather than gains, then you cannot easily interpret differences between institutions.”

References

QAA (2011) UK Quality Code for Higher Education, Chapter B6: Assessment of Students and Accreditation of Prior Learning. Quality Assurance Agency.

Bloxham, S. (2009) Marking and moderation in the UK: false assumptions and wasted resources.

Bloxham, S. & Boyd, P. (2007) Chapter 6: Marking, pp.81 – 102 in Developing Effective Assessment in Higher Education. Maidenhead: Open University Press.

Biggs, J. (2003) Aligning Teaching for Constructive Learning. Higher Education Academy.

Bridges (1999) ‘Are we missing the mark?’, Times Higher Education, 3 September 1999.

HEA (2012) A Marked Improvement: Transforming assessment in higher education. York: Higher Education Academy.

Knight, P (2007) Grading, classifying and future learning, pp.72-86 in Boud, D. & Falchikov, N. (2007) Rethinking Assessment in Higher Education, London: Routledge

Forsythe, G. (2012) How we measure. Available online http://www.flickr.com/photos/gforsythe/7102055531/ (accessed December 2012)

Furedi, F. (2012) The Unhappiness Principle. Times Higher Education.

Orr, S. (2007) Assessment moderation: constructing the marks and constructing the students.

Price, M. (2005) Assessment – What is the answer? Oxford Brookes University Business School

Robinson, K. (2010) The Element. How Finding your Passion Changes Everything. Penguin

Yorke, M. (2000) Grading: The subject dimension.

 

Authentic assessment

References

Dykstra, K. (2011) Authentic Assessment.  Available online http://youtu.be/c_gibuFZXZw (accessed Nov 2012).

Zone of Formative Learning (Orsmond, 2004)

References

Orsmond, P. (2004) Self- and Peer-Assessment: Guidance on Practice in the Biosciences, Teaching Bioscience: Enhancing Learning Series, Centre for Biosciences, The Higher Education Academy, Leeds.  Available online http://www.bioscience.heacademy.ac.uk/ftp/teachingguides/fulltext.pdf (accessed Nov 2012)

Reflection 6/6 – Developing my PBL practice

During the introductory weeks of the module my peers and I were asked to consider “how do you teach?” and “how do you see yourself as a teacher?” (Nerantzi, 2012).  I remember that my initial reaction to such questions was that I wasn’t a ‘proper teacher’ – at least not like the majority of peers on the PGCAP.  We were asked to consider our own practice against various metaphors – “lamplighters, gardeners, muscle builders, bucket fillers, challengers, travel guides, factory supervisors, artists, applied scientists, and craftspeople” (Apps, 1991, 23-24).  After some thought my own response to this was …

“I’ve always considered myself more of a ‘facilitator’ rather than a ‘teacher’.  From the list I’d probably say that I was a mix of a lamplighter, a gardener, and a travel guide.  I do try to use a variety of approaches – a bit of presenting (hopefully in a fun and engaging way), a bit of questioning, group-work and discussion, etc. so I guess craftsperson is also appropriate.” (Dunleavy, 2012).

At the start, I acknowledged that I was more of a ‘facilitator’; however, I clearly didn’t regard this as being a ‘proper teacher’.  Throughout the module, during the weekly sessions and also during teaching observations, I have been introduced to a variety of teaching approaches and have experienced various learning environments.  All of this has been extremely influential in determining my own preferences in terms of ‘how I teach’.  I am now more confident to say that yes, I am a teacher with a facilitative approach.  In the past, I have ‘followed’ how others have taught a topic, sometimes with the consequence of feeling ‘out of my depth’.  Whilst there are certain areas where I am more confident (e.g. in Technology-Enhanced Learning), I think that in my new role as staff developer in a wider academic context I need to accept that I am still learning – and that’s ok.  I believe that through adopting more ‘facilitative’ approaches to teaching, such as Inquiry/Enquiry/Problem-based learning, I can be an effective teacher who continues to develop through experience (‘good’ and ‘bad’).  Therefore, I’d like to focus this final reflection on how I may integrate what I have learnt around ‘facilitative’ approaches to teaching into my own professional practice as staff developer.

Reading through my previous reflections there seems to be a developing ‘flavour’ to my portfolio relating to the topic of assessment and feedback and problem-based learning (PBL).  This is no surprise, as my new role is focused currently on assessment and feedback, aligned with strategic priorities and my remit to plan and deliver staff development to my colleagues in this area across two Colleges.  I mentioned, during my educational autobiography, that I preferred to ‘contextualise’ my learning, and I guess that shows again here.

PBL as an approach to teaching wasn’t entirely new to me.  Working in academic development, I’m bound to have come across this method and knew that it was, in layman’s terms, something to do with giving your students a ‘problem’ and then leaving them to figure out a solution, often working in small groups to do so.  I hadn’t ‘formally’ experienced this sort of learning myself previously, although I suspected that many group learning scenarios were based on this concept and therefore felt that I had some knowledge of what it might be.  I also felt that PBL alluded to something we do in practice all of the time in our workplace, unaware of its formal title.  This is perhaps what I like most about PBL – it is a learning and teaching approach that we can equip our students with, ready for the world of work.  Of course, it is important for students to learn ‘the subject’; however, it is lifelong learning skills that employers are looking for, and which can often make one graduate stand out from the next in terms of their ‘employability’.

Of course, as a staff developer of colleagues already employed in the University, it’s not necessarily my role to make them ‘employable’ – they already have a job.  However, I do think my role is to equip colleagues with new skills so that they may consider changing their own approaches to teaching to enhance the student experience.

My knowledge gap in terms of PBL?

I wasn’t really aware of the formal ‘process’ of PBL or indeed how the notion of ‘a problem’ was defined.  During week 7 of the module I experienced PBL as a student myself on the theme of ‘Assessment & Feedback’, and also observed a peer facilitating a PBL session with his students in Occupational Therapy (see Reflection 5/6 – Observing peers), all of which enthused me to study PBL further.  Through my own continuous professional development (CPD) I discovered concepts similar to PBL – enquiry-based learning (EBL) and inquiry-based learning (IBL) – and wanted to find out more in terms of how they could help me to improve my practice.

Unravelling the mystery – IBL, EBL, PBL?

Through this module, further reading, and through participating on an additional workshop for my own CPD (see additional reflection – From PBL to IBL) I’ve started to develop my understanding of such interrelated concepts.

EBL is often used as an umbrella term to capture all forms of learning stimulated by enquiry: project work, small-scale investigation or ‘inquiry’, and problem-based learning (Barrett & Cashman, 2010).  I now understand that PBL and IBL, although similar, have different characteristics relating to their perceived ‘flexibility’ and ‘openness’ (Kutar et al., 2012).

A perspective on problem-based learning vs. inquiry-based learning (Kutar et al., 2012)

From the analysis above it would seem that PBL is inflexible, and whilst that may be the case in terms of a more structured process with the ‘problem’ being identified at the outset, there is an element of flexibility in PBL.  Barrett & Moore (2011) write about the six-dimensional approach to PBL in higher education and discuss the various ways in which the PBL tutorial process can be adapted in practice depending on context.

PBL as a total six-dimensional approach to higher education (Barrett & Moore, 2011)

Below, I consider how each of the six dimensions relate to my own context of embedding PBL in staff development in academic practice.

Embedding PBL in my own professional practice

PBL problem design – A ‘problem’ could be a scenario, a story, a dilemma, a challenge, a trigger derived from any media, or a starting point for learning.  In my context as an academic developer, I prefer to do some preparatory research into the real ‘problems’ occurring under the staff development theme.  For example, in recent sessions discussing the topic of assessment and feedback with colleagues (see additional reflection Assessment and feedback and Reflection 4/6 – Tutor observation) a number of School issues were raised, and I’d look at using some of the outcomes of these sessions to design ‘authentic’ problems for a future PBL workshop.  Therefore, although the PBL session would be driven by the ‘problems’ which I set, they will be focused on real-life scenarios as experienced by my colleagues, and as such will provide context to their learning as students in a PBL environment.  It will also be important to involve other stakeholders at the ‘problem’ design stage.  For example, Student Life and the Student Union could provide a useful insight into the nature of the ‘problems’ from the students’ perspective.

PBL tutorials in small teams – The PBL tutorial is traditionally set within small teams of 5–8 students and a tutor.  However, in a typical staff development workshop, I envisage approximately 30 staff as students and only one, or possibly two, tutors to facilitate the process.  Therefore, if I consider 5 groups of 6 students in each, then I would need to adopt a ‘roaming’ tutor approach.  In the PBL session in week 7, our module tutor used a type of ‘flag’ to allow the students themselves to indicate when they needed help, which I thought worked very well.  I prefer this approach to actually having a tutor present throughout, as I feel it helps students to drive the process themselves rather than always looking to the tutor for acknowledgement and approval.  It gives the students control of the process and of their own learning, and emphasises the role of the tutor as facilitator of a challenging learning process.

The student roles (e.g. chairperson, scribe, reader, timekeeper, observer, etc.) are also important, and I would need to consider which roles are appropriate for the staff development context, and ensure these were clearly defined before starting the process.

Appropriate resources are also key to ensuring an effective tutorial – e.g. whiteboards, flipcharts, laptops, pens, etc. for each team to capture their thoughts.  One concern I do have with regards to my own context is ‘time’.  The workshops will be short (between 2-3 hours) and I worry whether this is enough for the process of knowledge construction.  If I think back to Jason’s PBL sessions (see Reflection 5/6 – Observing peers) there were a number of PBL tutorials focusing on various ‘trigger’ points of a ‘problem’ which allowed for a longer ‘developmental’ process of knowledge construction.

PBL compatible assessments – In any teaching and learning, it is important that learning activities are aligned appropriately with assessment in terms of the learning outcomes.  In my educational autobiography I stated that “as the majority of my ‘teaching’ is around staff development I don’t tend to use techniques such as ‘constructive alignment’ as I’ve never thought it necessary” (see Reflection 1/1 – Educational autobiography).  Over the course of this module I’ve come to recognise that it is equally important to set clear learning outcomes and align learning activities and assessment in staff development workshops as it is in a programme or module of study.  Not only does this demonstrate good practice to my academic colleagues, it allows my colleagues as learners to understand why they are participating in such staff development and helps them to take responsibility for their own CPD.  Assessment could be embedded by way of a presentation at the end of the PBL session.

PBL curriculum development – PBL is traditionally developed to occur across a whole curriculum or programme of study.  The ‘themed’ staff development workshops, although part of a suite of CPD for staff, are likely to be stand-alone.  I need to ensure that I find ways of allowing my colleagues to continue developing their thoughts around the ‘problem’ after the PBL session ends, perhaps through providing a shared medium for continuous dialogue and debate, and also by ‘following’ and co-publishing progress of their development in practice.

Developing knowledge and capabilities – One of the main benefits of a PBL approach is to provide a learning environment which affords the development of transferable skills for the workplace.  In the context of my staff development workshops, I’d hope that through experiencing PBL themselves, my academic colleagues would start to consider how they themselves may adopt such an approach in their own teaching and learning practice.  Also, the PBL process can help with improving their own skills in communication, teamwork, information literacy, critical and creative thinking, problem solving, reflection, etc.  The PBL process will allow ‘teachers’ to get away from their ‘silos’, and to come together to share knowledge and through dialogue construct new knowledge to improve their practice.

Philosophy of problem-based learning – One of my main concerns in using a PBL approach for staff development is that I may move away from the underlying principles of PBL, and therefore I think it is important to continuously remind myself of the PBL philosophy, particularly relating to higher education.  When adopting and adapting PBL for my own professional practice I need to continuously question the purpose, the rationale, the ethical issues, the tutorial process, and the link between teaching, learning and research.

It may help, whilst developing my own PBL approach, to review how other higher education practitioners have adapted the traditional PBL process to suit their own contexts.  Barrett and Cashman (2010) have produced a useful resource, A Practitioners’ Guide to Enquiry and Problem-based Learning, which outlines the theory and also provides some interesting case studies.  One case study is particularly relevant as it talks about how PBL was used with lecturers – ‘Lecturers as Problem-based Learning Students’.  I think this provides a good example of lecturers being introduced to the PBL process by working on a ‘problem’ about PBL itself.  This prepares lecturers for future staff development facilitated in this way, at the same time as allowing them to consider how they may adopt such an approach in their own practice.  I am a little concerned that if I jump straight into PBL on a ‘themed’ topic such as assessment and feedback without first giving my colleagues an opportunity to familiarise themselves with the process, I may run into ‘problems’ of my own.  I need to carefully consider how I can effectively introduce the PBL concept before running too far ahead.

The practitioners’ guide also introduces a useful model which may help to inform my own practice.

7 Step PBL Process Guide (Barrett and Cashman, 2010)
Image source: http://www.ucd.ie/t4cms/ucdtli0041.pdf

References

Apps, J. (1991) Mastering the Teaching of Adults, FL: Krieger.

Barrett, T. & Moore, S. (2011) New approaches to problem-based learning: revitalising your practice in higher education. New York, Routledge.

Barrett, T. & Cashman, D. (Eds) (2010) A Practitioners’ Guide to Enquiry and Problem-based Learning. Dublin: UCD Teaching and Learning

Dunleavy, C. (2012) CoreJan12 (Cohort 4) Module Discussions Space, Who is who and a task during pre-induction, posted 31 January 2012, 08:12.

Kutar, M., Griffiths, M. & Wood, J. (2012) IBL Workshop Presentation at the HEA STEM: I3 Inquiry, Independence and Information.  Using IBL to Encourage Independent Learning in IT Students.  19 April 2012, Media City UK, University of Salford.

Nerantzi, C. (2012) CoreJan12 (Cohort 4) Module Discussions Space, Who is who and a task during pre-induction, posted 3 January 2012, 12:19.

Week 10 – Professional discussions – Cheryl

so pleased I passed the first hurdle – just the write up now 🙂

Via Flickr:
The Lego is a symbol of my learning …

The tree indicates that I have grown although there are new branches still to develop

The window indicates that new doors have opened as the PGCAP has made me brave enough and confident to try new things and find my own approach rather than follow others

The people around the table are my peers on the PGCAP who have made it an enjoyable experience both personally and professionally – I’ve learnt so much from them (and Chrissi of course)

From PBL to IBL

The PGCAP has exposed me to all sorts of new teaching methods, one of which is Problem-based learning (PBL); however, I’d like to explore this and other similar approaches further to help with my final reflections on how I can apply such methods to my own teaching and staff development sessions.

There’s an interesting workshop coming up next Thursday on Inquiry-based learning (IBL) …

HEA STEM: I3 – Inquiry, Independence and Information. Using IBL to Encourage Independent Learning in IT Students

I’ve registered to attend – why don’t you join me 🙂

Useful links

http://www.emeraldinsight.com/teaching/issues/inquiry_based_learning.htm?part=1

Reflection 5/6 – Observing peers

As part of the observation process I observed two of my PGCAP peers – Rosie and Jason …

Rosie’s peer observation feedback

Date: 6th March 2011
Time: 11am-12.30pm
Session: Drop-in tutorials

Although I’d already agreed to peer observe Jason, when Rosie asked if I would also observe her tutorial session I didn’t hesitate.  For me, one of the great things about participating on the PGCAP is that I get to see the University from a different viewpoint – the student’s viewpoint.  As a staff developer I like to think that I ‘indirectly’ have an impact on the student experience, and that through my work with academic colleagues I can enhance University teaching and learning to some extent.  That said, I don’t often get the opportunity to see University teaching in action, and felt honoured and privileged to be invited into Rosie’s tutorials.  It was also an opportunity to interact with an academic colleague from an unfamiliar discipline – Arts & Social Science – as my usual interactions are with colleagues from either Science & Technology or Health & Social Care.  I even visited a part of the campus that I’d never been to before – Adelphi – (eventually, after getting a little lost along the way).

As you can see from the pre-observation form above, the tutorials were a final chance for the Art & Design students to get formative feedback before the assessment hand-in date the following week, so it was pretty hectic.  Rosie described the tutorials as a ‘doctor’s surgery’ and it certainly did have that constant flow about it.  When I arrived (a little late from the drama of getting lost) the session was in full flow with students waiting in the corridor and in the room itself.

Students waiting
Image source: Feedback Festival, S. Casciano, June 2009 (www.flickr.com/photos/xdxd_vs_xdxd/3671671524/)

There was a sign on the door indicating I was in the right place; however, my first thought was that there was no mention of “feedback”.  At this point I reflected on conversations I’d had with colleagues regarding low scores on Assessment & Feedback in the National Student Survey (NSS, 2012), and about students not understanding what feedback was.  It’s often said that students relate only to post-assessment summative feedback rather than to the ‘timely’ formative feedback given during a module in the lead-up to assessment.  I wondered whether making it absolutely clear – “Formative FEEDBACK here” – would help at all in managing student expectations and understanding around feedback.  Also, the National Union of Students (NUS, 2010) developed a Charter on Feedback & Assessment which explicitly mentions the importance of integrating formative feedback throughout the whole curriculum – an emphasis on ‘feedback for learning‘ rather than ‘feedback of learning‘ (see additional reflection including literature Assessment & Feedback).  As it happens, when I mentioned this to Rosie later she told me that their programme did quite well in the NSS scores and therefore she didn’t think it was an issue in this case; however, she agreed that it was a valid point and that it wouldn’t hurt to implement a simple change to be sure.

Formative feedback (NUS, 2010)
Image source: www.nusconnect.org.uk/asset/news/6010/FeedbackCharter-toview.pdf

When I entered the tutorial session Rosie was already providing feedback, other tutors were doing the same, and students were waiting their turn, so I quietly waited, not wanting to interrupt.  Rosie saw me and made me feel at ease immediately by calling me over and introducing me to her students.  Rosie made it clear to her students that I was there to observe her and not them, which also put the students at ease.  Rosie is very approachable and this came across in the session.  I observed Rosie provide feedback to a number of students, and although she was less familiar with some of the students’ work (as indicated in the pre-observation form) this didn’t seem to affect how she interacted with each student.  When I read in the pre-observation form that Rosie wasn’t necessarily the allocated tutor for the students requesting feedback I wondered how this would go down.  However, it soon became clear that it was actually helpful for them to have another tutor’s perspective.  Rosie was supportive, interested, engaged, and a good listener.  Some of the students were clearly anxious about their upcoming assessments and quite distressed at the start of the tutorial; however, Rosie’s calm and supportive nature soon had them believing “I can do it”.  When Rosie came across any negative aspects of a student’s piece of work she addressed it with a positive spin – “this would work really well in [another context] but perhaps not here”.  It was also nice that Rosie was very flexible with the students, allowing them to make use of the session in whatever way they found useful – some preferred to get their laptop out and do direct edits, and some just wanted to talk.  It was also clear that Rosie and the other tutors had anticipated what some of the generic questions might be, as they had a number of prepared resources available – referencing and binding examples.  So, all in all, I thought Rosie’s tutorials were extremely well received by the students and came at the point when they most needed the help.  Well done Rosie 🙂

I did have a few environmental concerns.  I wasn’t sure if having some students wait around in the same room was a good idea.  I wondered how I might feel about my peers watching, especially if I was feeling anxious or had personal problems to share.  I did mention this to Rosie later and although she agreed it wasn’t ideal she explained that there are ‘timetabling’ issues in getting suitable rooms for such a session.  I can relate to such issues, as it’s often difficult to get suitable and ‘private’ rooms for tutorials with colleagues or suitable rooms for group work, etc.  I think Rosie and her colleagues are well aware of this issue and certainly sensitive enough to see when a student may be uncomfortable with the ‘public’ context.

During our post-observation chat Rosie also shared with me some of her ideas around providing generic video feedback to students via the virtual learning environment (VLE) so that this could be viewed online by all students.  I thought this was a great idea.

I really enjoyed observing Rosie’s session and also chatting to Rosie about it afterwards – it was a pleasurable experience 🙂

Jason’s peer observation feedback

Date: 14th March 2011
Time: 10am-12pm
Session: Problem based learning session – 1st trigger

Please listen to the audio on iPadio or read the transcript whilst viewing the story (StoryBird) …

Peer observation: A Problem Based Learning (PBL) journey with my peer J – “All in it together” on Storybird (please note the Storybird is on the ‘pgcap’ account under the ‘class’ – due to problems with making it ‘public’ you will need to sign in).

My post-observation reflection

It was really fun to experiment with StoryBird as a way of feeding back to Jason.  You’d think, as someone who is enthused by technology and whose role it is to promote technology-enhanced learning to academic colleagues across the University, that this would be ‘old hat’; however, it’s always a different story being ‘the student’.  That’s partly what the PGCAP is all about I think – putting yourself in the students’ shoes.  Clearly I’m not afraid of experimenting with technology – that part is second nature – however making use of the technology so that it is effective in enhancing my student experience is more of a challenge.  I think it’s often an assumption made of the ‘digital natives’ – the younger generation growing up with technology – that they will instantly know how to learn using technology, but this isn’t the case.  Initially, I was very aware that ‘playing’ with StoryBird may turn out to be time-wasting, however I think Jason would agree that this wasn’t the case.  This makes me think back to the Webinar about ‘Play’ by Carol Yeager in week 4 (see additional reflection – Play – fun, or more serious?) where I was sceptical about ‘play’ in relation to learning.  Perhaps this ‘related’ as opposed to ‘unrelated’ play is something to value after all.

As I said at the end of the feedback, I was really enthused by Jason’s session to start to develop my own knowledge and understanding of PBL, and to consider adopting such an approach in my staff development sessions.  We got started with PBL in week 6 of the PGCAP where Leslie Robinson came to talk to us about her experience.  After hearing her story, seeing it in action with Jason, and experiencing it for myself as a student in week 7, I’m well underway to finding out more …

My Postcard (Week 6)

References

NSS (2012) The National Student Survey.  Available online http://www.thestudentsurvey.com/ (accessed Dec 2012).

NUS (2010) NUS Charter on Feedback & Assessment, National Union of Students.  Available online http://www.nusconnect.org.uk/news/article/highereducation/720/ (accessed Nov 2012).