Year: 2017 | Volume: 6 | Issue: 2 | Page: 86-91
Applying Bloom's taxonomy in framing MCQs: An innovative method for formative assessment in medical students
Silpa Kadiyala1, Siva Gavini2, D Sujith Kumar3, V Kiranmayi4, P. V. L. N Srinivasa Rao4
1 Department of Radiology, Sri Venkateswara Institute of Medical Sciences, Tirupati, Andhra Pradesh, India
2 Department of Surgical Gastroenterology, Sri Venkateswara Institute of Medical Sciences, Tirupati, Andhra Pradesh, India
3 Department of Community Medicine, Sri Venkateswara Institute of Medical Sciences, Tirupati, Andhra Pradesh, India
4 Department of Biochemistry, Sri Venkateswara Institute of Medical Sciences, Tirupati, Andhra Pradesh, India
Date of Web Publication: 13-Jun-2017
Correspondence Address: Department of Radiology, Sri Venkateswara Institute of Medical Sciences, Tirupati - 517 507, Andhra Pradesh, India
Source of Support: None, Conflict of Interest: None
Background: Assessment is a crucial step in the educational process and is the driving force behind learning. Formative assessment (FA) is a relatively new concept in assessment methods. By applying Bloom's taxonomy (BT), which describes a developmental progression for knowledge, in FA, we can drive deeper learning. We were interested to know whether framing multiple choice questions (MCQs) using the BT model and using them as a tool for FA would help to reinforce learning among first year MBBS students.
Materials and Methods: All 150 MBBS students taking biochemistry classes were given MCQ tests at the end of a series of lectures and an internal exam (T3). MCQs were framed by applying BT, testing all levels of the cognitive domain. Pearson's correlation coefficient of marks before and after the intervention was calculated. Mean scores were analyzed using Student's paired t-test. Feedback was analyzed for students' perception of the assessment method.
Results: Comparison of mean scores in T3 with the average of T1 and T2 showed a P value of <0.0001. Only 28.7% of students got <50% marks when assessed after FA, as compared to 46% of students before using FA. Analysis of students' perception indicated a high level of acceptability and motivation toward the incorporation of formative assessment.
Conclusion: Introduction of MCQs as a tool for formative assessment at the end of each lecture helps to reinforce learning in first year medical students. This study demonstrates the need for using the BT model, which tests knowledge at various levels of cognition in FA pattern.
Keywords: Bloom's taxonomy, formative assessment, MCQs as a tool, medical students
How to cite this article: Kadiyala S, Gavini S, Kumar DS, Kiranmayi V, Rao PS. Applying Bloom's taxonomy in framing MCQs: An innovative method for formative assessment in medical students. J NTR Univ Health Sci 2017;6:86-91.
Introduction
Assessment is a crucial step in the educational process and is the driving force behind learning. Assessment should be structured in a manner that aligns educational objectives with learning outcomes. Students must learn from tests and receive feedback to build on their knowledge and skills.
Many innovations have occurred in medical education in the last three decades, in the form of new curricula, a variety of teaching-learning methods, and new assessment methods, and many more effective forms are forthcoming. Formative assessment (FA) is one such relatively new concept in assessment methods. In FA, assessment is done more frequently, and learning is facilitated through a continuous process of feedback. Internal assessment (IA) exams are the routine conventional theory and practical exams, usually conducted after finishing a quarter of the year's syllabus, because university rules stipulate a minimum of three internal assessment exams in an academic year. These are summative assessments (SA), which typically take place at the end of the learning process, are used for accountability, and act mainly on extrinsic student motivation. If a test is conducted and the student does not meet the standards, there should be further opportunities to try again until the competency is ultimately achieved. Motivation to learn actually increases when students see the gap between what they thought they knew and what they actually know, and this can be further enhanced by providing feedback from time to time. Therefore, by using FA, students are given multiple chances for self-assessment, which helps them to identify and fill the gaps in how they learn from a class.
Miller introduced a conceptual framework of the different aspects of medical competence. These are "knows" (factual knowledge), "knows how" (analysis, application, and interpretation of knowledge), "shows how" (actual application and practical demonstration in a simulated situation), and "does" (performance in real situations), which are arranged as layers of a pyramid known as "Miller's pyramid" [Figure 1]. The developmental progression for knowledge is described in Bloom's taxonomy (BT), depicted in [Figure 2]. By designing an assessment applying BT, we can drive deeper learning.
Figure 1: Framework for clinical assessment. Adapted from GE Miller. Acad Med 1990
Figure 2: Bloom's taxonomy, revised. [Based on an APA adaptation of Anderson LW, Krathwohl DR (Editors) (2001)]
A wide range of assessment methods is available in medical teaching, such as written exercises, assessment by supervising clinicians, clinical simulations, and multisource (360°) assessments. Among them, the commonly used methods for written exercises are structured essays, short-answer questions, key feature/script concordance questions, and multiple choice questions (MCQs) in either single best answer or extended matching format. Van der Vleuten describes five criteria for determining the usefulness of a particular method of assessment: reliability (the degree to which the measurement is accurate and reproducible), validity (whether the assessment measures what it claims to measure), impact on future learning and practice, acceptability to learners and faculty, and costs (to the individual trainee, the institution, and society at large). MCQs have high reliability and can be effectively administered to a large number of examinees. They demand analytical thinking and therefore measure learning outcomes from simple to complex.
In the light of the abovementioned background, we framed MCQs using BT and used them as a tool for FA to reinforce learning among first year MBBS students. The objectives of our study were to assess the improvement in students' performance in IA after introducing MCQ tests at the end of every lecture, and to analyze students' perception toward FA.
Materials and Methods
A prospective study was conducted to assess the impact of applying the BT model in framing MCQs and using them as a tool for FA among 150 second-semester MBBS students attending biochemistry classes at a medical college in Andhra Pradesh, India, in 2015. The study was implemented after obtaining institutional ethical clearance. During this period, students took MCQ tests at the end of a series of lectures and a conventional exam as part of the third IA. The scores of the third IA exam were compared with previous IA scores to assess the improvement in students' performance.
Multiple choice question tests
A workshop was conducted by the medical education unit of the institute to orient all teaching faculty taking classes for first year MBBS students to preparing MCQs at the various levels of cognition in BT. Questions were framed to align the assessment of knowledge with learning outcomes by applying BT. All levels of the cognitive domain were assessed in each topic as follows: 2 questions at level 1 (recall of facts), 1 question at level 2 (interpretation of facts), 1 question at level 3 (problem-solving abilities), 1 question at level 4 (application), and 1 question at level 5 (application and synthesis). Each topic was assessed similarly, using a total of 6 MCQs per class. The format used for the MCQs was single best response with 4 options. Three faculty members with teaching experience of between 8 and 15 years conducted these lectures. The MCQs on a topic were prepared by the same faculty member who conducted the lectures on that topic.
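The per-topic blueprint described above (6 single-best-response MCQs per class, spread over the five cognition levels) can be expressed as data and checked mechanically. The following is an illustrative sketch, not part of the study's method; the function and the draft question set are invented for demonstration.

```python
# Sketch: encode the study's per-topic MCQ blueprint and verify a draft
# question set against it. Level labels follow the paper's scheme.
from collections import Counter

# Blueprint: 6 MCQs per topic, mapped as cognition level -> question count
BLUEPRINT = {1: 2, 2: 1, 3: 1, 4: 1, 5: 1}

def matches_blueprint(questions):
    """questions: list of dicts with a 'level' (1-5) and an 'options' list.
    Returns True when the level mix matches the blueprint and every
    question offers exactly 4 options (single best response format)."""
    level_counts = Counter(q["level"] for q in questions)
    right_mix = dict(level_counts) == BLUEPRINT
    four_options = all(len(q["options"]) == 4 for q in questions)
    return right_mix and four_options

draft = [
    {"level": 1, "options": ["a", "b", "c", "d"]},
    {"level": 1, "options": ["a", "b", "c", "d"]},
    {"level": 2, "options": ["a", "b", "c", "d"]},
    {"level": 3, "options": ["a", "b", "c", "d"]},
    {"level": 4, "options": ["a", "b", "c", "d"]},
    {"level": 5, "options": ["a", "b", "c", "d"]},
]
print(matches_blueprint(draft))  # True
```

A set missing any level, or carrying a fifth option, would fail the check, which is the point of fixing the blueprint before question writing begins.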
Students were informed about the FA. MCQ tests were administered at the end of a series of 13 biochemistry lectures, taken in the months of June and July, on topics covering clinical physiology. Papers were collected from the students, who were informed that the answers would be discussed in the next class. The answers were shown as the first slide of the next class, and formal feedback on performance was given to the students.
After covering the required syllabus, an IA (T3) was conducted exclusively on the topics covered during this intervention. All 150 students attended this exam. The same IA format (2 essays, 5 short notes, 5 very short answers) used in the first and second assessments (T1 and T2), conducted in the earlier part of the course, was followed. Care was taken not to include questions exactly as given in the MCQ tests.
At the end of the internal exam, a questionnaire-based study of perceptions of the assessment method was done. All the students were given appropriate instructions and adequate time to fill in the perception questionnaire, which had 5 item statements [Table 1]. Students were given the option of whether or not to write their names and roll numbers. One hundred and forty-five students returned the feedback papers, while five were absent. The five items were based on a 3-point Likert scale to assess the impact of FA, with MCQs as its tool, on various aspects of learning.
Likert scale items
The Likert scale items analyzed the impact of FA, with MCQs as its tool, on various aspects of learning: (1) motivation to listen to the class effectively, (2) help in better understanding of the lecture being taken, (3) creation of interest in self-directed learning, (4) help in scoring better in internal exams, and (5) continuation of these tests in future classes. All the items had three options, namely, agree, neutral, and disagree.
The data collected were analyzed using IBM SPSS Statistics for Windows, Version 20.0 (IBM Corp., Armonk, NY). Marks of the 150 students in T3 were compared with the average of T1 and T2, as shown in [Figure 3]. Mean scores were calculated and analyzed for statistical significance using Student's paired t-test [Figure 4]. Pearson's correlation coefficient of marks before and after the intervention was calculated [Figure 5]. The percentage of students scoring above 50% before and after the intervention was calculated.
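For readers without SPSS, the same analysis (paired t-test on before/after scores, Pearson's correlation, and the percentage scoring below half marks) can be sketched in Python with SciPy. The scores below are randomly generated placeholders for illustration only, not the study data, and the assumed maximum mark of 40 is also an illustrative choice.

```python
# Sketch of the statistical analysis using SciPy in place of SPSS.
# All numbers here are synthetic; only the procedure mirrors the paper.
import numpy as np
from scipy import stats

MAX_MARK = 40  # assumed maximum for illustration
rng = np.random.default_rng(0)

# Placeholder paired scores for 150 students: average of T1/T2, then T3
before = rng.normal(20.0, 8.0, size=150).clip(0, MAX_MARK)
after = (before + rng.normal(2.0, 6.0, size=150)).clip(0, MAX_MARK)

# Student's paired t-test on the matched before/after scores
t_stat, p_paired = stats.ttest_rel(after, before)

# Pearson's correlation between before and after scores
r, p_corr = stats.pearsonr(before, after)

# Percentage of students scoring below 50% of the maximum after FA
pct_below_50 = float(np.mean(after < 0.5 * MAX_MARK) * 100)

print(f"paired t = {t_stat:.2f}, p = {p_paired:.4g}")
print(f"Pearson r = {r:.2f}, p = {p_corr:.4g}")
print(f"% scoring below 50% after FA: {pct_below_50:.1f}")
```

With the real T1/T2 averages and T3 marks substituted for the synthetic arrays, this reproduces the three quantities reported in the Results.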
Figure 3: Comparison of internal assessment scores before and after using formative assessment
Figure 4: Comparison of mean scores in I, II, and III internal assessment tests
The feedback questionnaire was analyzed to determine students' perception of the introduction of FA using MCQs. Out of 150 students, 5 did not submit the feedback. The forms of the remaining 145 students were analyzed as follows: the number of students choosing each of the 3 options was noted separately for all 5 items, as shown in [Table 1], and percentages were calculated for these options.
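The tabulation above is a simple per-item count converted to percentages of the 145 returned forms. A minimal sketch follows; the response list is invented for illustration (chosen so that 143/145 agree, which yields the 98.62% figure seen in the Results).

```python
# Sketch: tally one Likert item's responses and express each option as a
# percentage of the returned forms. Data here are invented examples.
from collections import Counter

OPTIONS = ("agree", "neutral", "disagree")

def tabulate(responses):
    """responses: one option string per student for a single item.
    Returns percentage (rounded to 2 decimals) for each option."""
    counts = Counter(responses)
    n = len(responses)
    return {opt: round(100 * counts.get(opt, 0) / n, 2) for opt in OPTIONS}

item1 = ["agree"] * 143 + ["neutral"] * 2  # 145 returned forms
print(tabulate(item1))  # {'agree': 98.62, 'neutral': 1.38, 'disagree': 0.0}
```

Running this per item reproduces the percentage columns of [Table 1].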
Results
[Figure 3] shows the distribution of marks in T3 (after the intervention) and the average of T1 and T2 (before the intervention). Analysis of the mean scores obtained by students before and after the intervention suggested that students scored significantly higher marks when assessed after using FA (mean score: 21.76 ± 8.82) than with SA (mean score: 19.76 ± 9.03), as shown in [Figure 4]. The difference was statistically significant, with a P value of <0.0001. The number of students achieving >70% marks was also significantly higher after the intervention. Only 28.7% of students got <50% marks when assessed after FA, as compared to 46% of students before using FA [Figure 3]. [Figure 5] shows that the correlation coefficient between marks before and after the intervention was 0.7, with a P value of <0.0001, indicating significance.
Evaluation of the feedback responses through the perception questionnaire demonstrated that the participating students strongly endorsed FA [Table 1]. The majority of the students felt that FA motivated them (98.62%) to listen to the class attentively, and that having the test at the end of the lecture helped in better understanding of the lecture. 97.93% felt that the tests created an interest in self-learning, 91.72% thought these tests would help them score better in the internal exams, and 80% suggested that these tests be continued in future classes. The responses indicated a high level of acceptability and motivation toward the incorporation of FA.
Discussion
Assessment is the driving force behind learning, and it should be a continuous process. In FA, a student is given repeated attempts to master the content before being subjected to an endpoint examination. FA thus allows students to make adjustments to what and how they are learning. Within each domain of assessment, various levels of competence, as proposed in Miller's pyramid [Figure 1], have to be assessed. The BT model [Figure 2] tests knowledge at various levels of cognition.
In the present study, we found that FA leads to improvement in summative assessment. Statistical analysis of the mean scores obtained by students before and after the intervention showed a significant increase in the mean score of the assessment done after the introduction of FA (P < 0.0001). Only 28.7% of students got <50% marks when assessed after FA, as compared to 46% of students before using FA. Analysis of the questionnaire showed that students positively perceived the implementation of FA in the form of MCQ tests at the end of each lecture: 80% of students suggested that these tests should be continued in future classes, and 91.72% hoped these MCQ tests would help them score better marks in IA, which was borne out in the present study.
Singh et al. have reported that FA has the potential to promote deeper learning when used for day-to-day observation of the student. They also suggested that assessment should focus on the process of learning as much as on the amount of learning. Rushton opined that increasing attention is being paid to using FA to improve learning. Beghi proposed MCQs for use in the assessment of different levels of the intellectual process: for knowledge, concepts, and the application of knowledge (the "knows" and "knows how" of Miller's conceptual pyramid for clinical competence), context-based MCQs are appropriate. A study conducted by Phillips et al. showed that applying BT to questions increased the complexity level being tested, leading to a rise in the standard required to pass, but teaching and student preparedness also improved to meet the challenge. While most teachers are well versed in the summative or certifying purpose of assessment (assessment of learning), FA, which uses assessment as an educational tool (assessment for learning), is a relatively recent phenomenon. FA does not have to be graded; however, if a supervisor wishes to include FA as part of summative assessment, the results of the various FA tests can be accumulated toward the final grade in a unit or course.
By framing MCQs using the BT model, which tests knowledge at various levels of cognition, students were exposed to this pattern of learning in every topic. Using this model for every lecture meant that all topics were consistently assessed to equal standards. By preserving the question papers of multiple tests, students built a question bank of their own, which could be used for future reference during internal and university exams. Answers were discussed, and formal feedback on performance in every test was provided to the students in the class immediately following the lecture. The time gap between the test and the discussion was intended to encourage healthy debate among friends and prompt some students to consult textbooks to find the answers themselves, all of which paves the way for self-directed learning. In our study, 97.93% of students felt that the tests created an interest in self-learning. The better performance after the intervention could be attributed to these factors.
A teacher puts considerable effort into preparing and delivering a class, and every teacher aims at the full participation of students in the lecture. This is partially achieved through the various teaching-learning methods chosen by the faculty. An instant assessment of the topic, with timely feedback, helps to further enhance students' participation in the class. In our study, 98.62% of students felt that these tests motivated them to listen to the class attentively, and 98.62% felt that having the test at the end of the lecture helped them better understand the class being taken. All the teaching faculty gained experience in framing MCQs at various levels of cognition using BT. Further, these tests may have increased the teacher's sense of responsibility to deliver the lecture more effectively. However, there were disadvantages, such as the time constraint of constructing good test questions for each class, and not all faculty may have liked assessing each lecture.
Conclusion
The introduction of MCQs as a tool for FA at the end of each lecture helped to reinforce learning in first year medical students. Students perceived it as an important approach, as it motivated them to listen to the class attentively, helped in better understanding of the lectures, and created an interest in self-learning. This study demonstrated the need for using the BT model, which tests knowledge at various levels of cognition, in the FA pattern.
Acknowledgment

I sincerely thank the Departments of Medical Education and Radiology for their constant support in completing this study. This study was done as a project for the Fellowship in Medical Education (FIME) and was presented as a poster in abstract format at the second contact session of FIME on 30 October 2015 at CMC, Vellore.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
Sood R, Singh T. Assessment in medical education: Evolving perspectives and contemporary trends. Natl Med J India 2012;25:357-64.
van der Vleuten CC. Validity of final examinations in undergraduate medical training. BMJ 2000;321:1217-19.
Rauf A. Formative assessment in undergraduate medical education: Concept, implementation and hurdles. J Pak Med Assoc 2014;64:72-5.
Lakshmipathy K. MBBS student perceptions about physiology subject teaching and objective structured practical examination based formative assessment for improving competencies. Adv Physiol Educ 2015;39:198-204.
Rushton A. Formative assessment: A key to deeper learning? Med Teacher 2005;27:509-13.
Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990;65:S63-7.
Bloom BS. Taxonomy of educational objectives: The classification of educational goals. New York NY: Longmans, Green; 1956.
Phillips AW. Driving deeper learning by assessment: An adaptation of the revised Bloom's Taxonomy for medical imaging in gross anatomy. Acad Radiol 2013;20:784-9.
Van Der Vleuten CP. The assessment of professional competence: Developments, research and practical implications. Adv Health Sci Educ 1996;1:41-67.
van der Vleuten CP. How can we test clinical reasoning? The Lancet 1995;345:1032-4.
Singh T, Anshu. Internal assessment revisited. Natl Med J India 2009;22:82-4.
Beghi M. Multiple choice questions in educational assessment: Proposal of a computerised programme. Minerva Chir 1989;44:1435-9.
Tabish SA. Assessment methods in medical education. Int J Health Sci 2008;2:3-7.
Marshall JM. Formative assessment: Mapping the road to success. A white paper prepared for the Princeton Review. New York: The Princeton Review; 2005.
Office of Health Education. Improve learning through formative assessment. Queen's University; Spring 2008.