Improving Student Learning Through Assessment
The 13th Improving Student Learning Symposium, Imperial College, London, UK, 5-7 September 2005
The skill of knowing how reliable your knowledge is
AR Gardner-Medwin, University College London
NA Curtin, Imperial College
Research seminar
Themes: assessment, skills development and lifelong learning
In both formative and summative assessment we should take account of
the students' ability to justify and express reservations about
answers, as well as the correctness of the answers. A lucky guess does
not deserve the credit due for confident and well-justified knowledge,
and a firm misconception is much worse than acknowledged ignorance.
This paper will provoke discussion about how this skill relates to
assessment of knowledge, and how it can be developed and assessed with
a well-designed scheme for Confidence-Based Marking (CBM: see www.ucl.ac.uk/lapt). Though
the concept has been well researched over some decades, mostly before
large-scale IT implementation was practical, it has seldom been widely
applied. A simple scheme has been in use with medical classes for many
years now at UCL, initially set up to help stimulate critical
reflection and good study habits. In recent years it has been used
also, partly at the students' instigation, for exams (with OMR
technology). We use a 3-point confidence scale: 1, 2 or 3, entered for each
answer, with 1, 2 or 3 marks awarded when the answer is right and
0, -2 or -6 when it is wrong. Recently a link has been set up between the
CBM software and student VLE systems, using WebCT at UCL and Imperial
College and a locally developed VLE at Winchester College. This
provides simple access to formative tests with automated grade
recording and management. A wealth of data, much though not yet all of it
published (see the website), addresses the important conceptual and
practical issues that arise in the use of CBM.
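The 3-point mark scheme described above can be sketched in a few lines of code. This is a minimal illustration only: the function names and the expected-mark helper are ours, not part of the UCL/LAPT or WebCT software.

```python
def cbm_mark(confidence: int, correct: bool) -> int:
    """Mark one answer under the 3-point CBM scheme: confidence 1, 2 or 3
    earns 1, 2 or 3 marks if the answer is right, and 0, -2 or -6 if wrong."""
    if confidence not in (1, 2, 3):
        raise ValueError("confidence must be 1, 2 or 3")
    marks_if_right = {1: 1, 2: 2, 3: 3}
    marks_if_wrong = {1: 0, 2: -2, 3: -6}
    return marks_if_right[confidence] if correct else marks_if_wrong[confidence]

def best_confidence(p: float) -> int:
    """Illustrative helper (not part of the scheme's software): the
    confidence level that maximises the expected mark for a student who
    believes the answer is right with probability p."""
    expected = {c: p * cbm_mark(c, True) + (1 - p) * cbm_mark(c, False)
                for c in (1, 2, 3)}
    return max(expected, key=expected.get)
```

The penalties are chosen so that honest reporting is optimal: the expected mark at the three levels is p, 4p-2 and 9p-6 respectively, so confidence 1 pays best below p = 2/3, confidence 2 between 2/3 and 4/5, and confidence 3 above 4/5.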
We hope to provoke discussion about these issues and address them where
possible with relevant data and experience. Most people recognise that
the skill of knowing and expressing the reliability of knowledge is
crucial to human interaction and highly valued, though it is seldom
taught, practised or assessed. Relevant data include
student evaluations and comments, comparisons of performance between
relatively naive and well-practised subjects, and comparisons by gender,
ethnic grouping and formative versus summative assessment. Staff new to
the concept are sometimes concerned that it might merely reward
self-confidence rather than knowledge; the data show this is not the
case. Extensive statistical data on reliability and validity in exam
assessment show that both are significantly improved with CBM.
For those interested in evaluating CBM in their own contexts or in
relation to any of a wide range of disciplines and question styles, the
website should be the main source; but practical issues may also be
raised and discussed in the session.
Acknowledgements: DA Bender, P Tatham, J Levy (UCL), M De Iorio, M
Burrow (Imperial), C Ryan, H Bassett (Winchester College), HEFCE FDTL4
for funding.