
Critical Analysis of Case Based Discussions

J M L Williamson and A J Osborne

Introduction

Assessment and evaluation are the foundations of learning; the former is concerned with how students perform and the latter, how successful the teaching was in reaching its objectives. Case based discussions (CBDs) are structured, non-judgmental reviews of decision-making and clinical reasoning 1 . They are mapped directly to the surgical curriculum and “assess what doctors actually do in practice” 1 . Patient involvement is thought to enhance the effectiveness of the assessment process, as it incorporates key adult learning principles: it is meaningful, relevant to work, allows active involvement and involves three domains of learning 2 :

  • Clinical (knowledge, decisions, skills)
  • Professionalism (ethics, teamwork)
  • Communication (with patients, families and staff)

The ability of work based assessments (WBAs) to test performance is not well established. The purpose of this critical review is to assess whether CBDs are effective as an assessment tool.

Validity of Assessment

Validity concerns the accuracy of an assessment, what this means in practical terms, and how to avoid drawing unwarranted conclusions or decisions from the results. Validity can be explored in five ways: face, content, concurrent, construct and criterion-related/predictive.

CBDs have high face validity as they focus on the role doctors perform and are, in essence, an evolution of ‘bedside oral examinations’ 3 . The key elements of this assessment are learnt in medical school; thus the purpose of a CBD is easy for both trainees and assessors to validate 1 . In terms of content validity, CBDs are unique in assessing a student’s decision-making, which is key to how doctors perform in practice. However, as only six CBDs are required a year, they are unlikely to be representative of the whole curriculum. Thus CBDs may have limited content validity overall, especially if students focus on one type of condition for all assessments.

Determining the concurrent validity of CBDs is difficult as they assess the pinnacle of Miller’s triangle – what a trainee ‘does’ in clinical practice (figure 1) 4 . CBDs are unique in this aspect, but there may be some overlap with other work based assessments, particularly in task-specific skills and knowledge. Simulation may give some concurrent validity to the assessment of judgment. The professional aspect of assessment can be validated by a 360-degree appraisal, as this requests feedback about a doctor’s professionalism from other healthcare professionals 1 .

CBDs have high construct validity, as the assessment is consistent with practice and appropriate for the working environment. The clinical skills being assessed will improve with expertise and thus there should be ‘expert-novice’ differences in marking 3 . However, the standard of assessment (i.e. the ‘pass mark’) increases with expertise, as students are always being assessed against a mark of competency for their level. A novice can therefore score the same ‘mark’ as an expert despite a difference in ability.

In terms of predictive validity, performance-based assessments are simulations and examinees do not behave in the same way as they would in real life 3 . Thus, CBDs are an assessment of competence (‘shows how’) rather than of true clinical performance (‘does’), and one could perhaps deduce that they do not assess the attitude of the trainee, which, along with knowledge and skills, completes the cycle 4 . CBDs permit inferences to be drawn concerning the skills of examinees that extend beyond the particular cases included in the assessment 3 . However, the quality of performance in one assessment can be a poor predictor of performance in another context. Both the limited number and lack of generalizability of these assessments have a negative influence on predictive validity 3 .

Reliability of Assessment

Reliability can be defined as “the degree to which test scores are free from errors of measurement”. Feldt and Brennan describe the ‘essence’ of reliability as the “quantification of the consistency and inconsistency in examinee performance” 5 . Moss states that less standardized forms of assessment, such as CBDs, present serious problems for reliability 6 . These types of assessment permit both students and assessors substantial latitude in interpreting and responding to situations, and are heavily reliant on the assessor’s ability. The reliability of CBDs is therefore influenced by the quality of the rater’s training, the uniformity of assessment, and the degree of standardization in examinee performance.

Rating scales are also known to have a major effect on reliability; all assessors must understand how to use these scales in order to achieve marking consistency. In CBD assessments, trainees should be rated against the level expected at completion of the current stage of training (i.e. core or higher training) 1 . While accurate ratings are critical to the success of any WBA, there may be latitude in the interpretation of these rating scales between different assessors. Assessors who have not received formal WBA training tend to score trainees more generously than trained assessors 7-8 . Improved assessor training in the use of CBDs and spreading assessments throughout the student’s placement (i.e. a CBD every two months) may improve the reliability and effectiveness of the tool 1 .

Practicality of Assessment

CBDs are a one-to-one assessment and are not efficient; they are labour intensive and only cover a limited amount of the curriculum per assessment. The time taken to complete CBDs has been thought to negatively impact on training opportunities 7 . Formalized assessment time could relieve the pressure of arranging ad hoc assessments and may improve the negative perceptions of students regarding CBDs.

The practical advantages of CBDs are that they allow assessments to occur within the workplace and they assess both judgment and professionalism – two subjects on the curriculum which are otherwise difficult to assess 1 . CBDs can be very successful in promoting autonomy and self-directed learning, which improves the efficiency of this teaching method 9 . Moreover, CBDs can be immensely successful in improving the abilities of trainees and can change clinical practice – a feature that is not replicated by other forms of assessment 8 .

One method of ensuring the equality of assessments across all trainees is to provide clear information about what CBDs are, the format they take and their relevance to the curriculum. The information and guidance provided for the assessment should be clear, accurate and accessible to all trainees, assessors, and external assessors. This minimizes the potential for inconsistency in marking practice and a perceived lack of fairness 7-10 . However, the lack of standardization of this assessment mechanism, combined with the variation in training and in the interpretation of the rating scales between assessors, may result in inequality.

Formative Assessment

Formative assessments modify and enhance both learning and understanding through the provision of feedback 11 . The primary function of the rating scale of a CBD is to inform the trainee and trainer about what needs to be learnt 1 . Marks per se provide no learning improvement; students gain the most learning value from assessment that is provided without marks or grades 12 . CBDs have feedback built into the process, so it can be given immediately and orally. Verbal feedback has a significantly greater effect on future performance than grades or marks, as the assessor can check comprehension and encourage the student to act upon the advice given 1,11-12 . It should be specific and related to need; detailed feedback should only be given to help the student work through misconceptions or other weaknesses in performance 12 . Veloski et al suggest that systematic feedback delivered from a credible source can change clinical performance 8 .

For trainees to be able to improve, they must have the capacity to monitor the quality of their own work during their learning by undertaking self-assessment 12 . Moreover, trainees must accept that their work can be improved and identify important aspects of their work that they wish to improve. Trainees’ learning can be improved by providing high quality feedback, and three main elements are crucial to this process 12 :

  • Helping students recognise their desired goal
  • Providing students with evidence about how well their work matches that goal
  • Explaining how to close the gap between current performance and desired goal

The challenge for an effective CBD is to have an open relationship between student and assessor in which the trainee is able to give an honest account of their abilities and identify any areas of weakness. This relationship does not currently exist in most CBDs: studies by Veloski et al 8 and Norcini and Burch 9 revealed that only limited numbers of trainees anticipated changing their practice in response to feedback. An unwillingness among surgical trainees to engage in formal self-reflection, and a reluctance to voice any weaknesses, may impair their ability to develop and lead to resistance to the assessment process. Improved training of assessors and removing the scoring from the CBD form may allow more accurate and honest feedback to be given to improve the student’s future performance. An alternative method to improve performance is to ‘feed forward’ (as opposed to feedback), focusing on what students should concentrate on in future tasks 10 .

Summative Assessment

Summative assessments are intended to identify how much the student has learnt. CBDs have a strong summative feel: a minimum number of assessments are required and a satisfactory standard must be reached to allow progression of a trainee to the next level of training 1 . Summative assessment affects students in a number of different ways; it guides their judgment of what is important to learn, affects their motivation and self-perceptions of competence, structures their approaches to and timing of personal study, consolidates learning, and affects the development of enduring learning strategies and skills 12-13 . Resnick and Resnick summarize this as “what is not assessed tends to disappear from the curriculum” 13 . Accurate recording of CBDs is vital, as the assessment process is transient, and allows external validation and moderation.

Evaluation of any teaching is fundamental to ensure that the curriculum is reaching its objectives 14 . Student evaluation allows the curriculum to develop and can result in benefits to both students and patients. Kirkpatrick suggested four levels on which to focus evaluation 14 :

  • Level 1 – Learner’s reactions
  • Level 2a – Modification of attitudes and perceptions
  • Level 2b – Acquisition of knowledge and skills
  • Level 3 – Change in behaviour
  • Level 4a – Change in organizational practice
  • Level 4b – Benefits to patients

At present there is little opportunity within the Intercollegiate Surgical Curriculum Project (ISCP) for students to provide feedback. Thus a typical ‘evaluation cycle’ for course development (figure 2) cannot take place 15 . Given the wide range of subjects covered by CBDs, the variations in marking standards between assessors, and concerns about validity and reliability, an overall evaluation of the curriculum may not be possible. However, regular evaluation of the learning process can improve the curriculum and may lead to better student engagement with the assessment process 14 . Ideally the evaluation process should be reliable, valid and inexpensive 15 . A number of evaluation methods exist, but all should allow for ongoing monitoring, review and further enquiry.

CBDs, like all assessments, do have limitations, but we feel that they play a vital role in the development of trainees. Unfortunately, Pereira and Dean suggest that trainees view CBDs with suspicion 7 . As a result, students do not engage fully with the assessment and evaluation process and CBDs are not being used to their full potential. The main problems with CBDs relate to the lack of formal assessor training in the use of the WBA and the lack of evaluation of the assessment process. Adequate training of assessors will improve feedback and standardize the assessment process nationally. Evaluation of CBDs should improve the validity of the learning tool, enhancing the training curriculum and encouraging engagement of trainees.

If used appropriately, CBDs are valid, reliable and provide excellent feedback which is effective and efficient in changing practice. However, a combination of assessment modalities should be utilized to ensure that surgical trainees are facilitated in their development across the whole spectrum of the curriculum. 

1. Intercollegiate Surgical Curriculum Project (ISCP). ISCP/GMP Blueprint, version 2. ISCP website (www.iscp.ac.uk) (accessed November 2010)

2. Lake FR, Ryan G. Teaching on the run tips 4: teaching with patients. Medical Journal of Australia 2004;181:158-159

3. Swanson DB, Norman GR, Linn RL. Performance-based assessment: lessons from the health professions. Educational Researcher 1995;24(5):5-11,35

4. Miller GE. The assessment of clinical skills/competence/ performance. Academic Medicine, 1990;65:563–567.

5. Feldt LS, Brennan RL. Reliability. In Linn RL (ed), Educational measurement (3rd edition). Washington, DC: The American Council on Education and the National Council on Measurement in Education; 1989

6. Moss PA. Can there be Validity without Reliability? Educational Researcher 1994:23;5-12

7. Pereira EA, Dean BJ. British surgeons’ experience of mandatory online workplace-based assessment. Journal of the Royal Society of Medicine 2009;102:287-93

8. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME Guide No. 7. Medical Teacher 2006;28:117-28

9. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No 31. Medical Teacher 2007;29:855-71

10. Hounsell D. Student feedback, learning and development. In Slowey M and Watson D (eds), Higher education and the lifecourse. Buckingham: Open University Press; 2003

11. Bloxham S, Boyd P. Developing effective assessment in higher education: A practical guide. Maidenhead: Open University Press; 2007

12. Crooks TJ. The impact of classroom evaluation practices on students. Review of Educational Research 1988;58:438-481

13. Resnick LB, Resnick D. Assessing the thinking curriculum: New tools for educational reform. In Gifford B and O’Connor MC (eds), Cognitive approaches to assessment. Boston: Kluwer-Nijhoff; 1992

14. Barr H, Freeth D, Hammick M, Koppel, Reeves S. Evaluation of interprofessional education: a United Kingdom review of health and social care. London: CAIPE/BERA; 2000

15. Wahlqvist M, Skott A, Bjorkelund C, Dahlgren G, Lonka K, Mattsson B. Impact of medical students’ descriptive evaluations on long-term course development. BMC Medical Education 2006;6:24


Increasing Collaborative Discussion in Case-Based Learning Improves Student Engagement and Knowledge Acquisition

  • Original Research
  • Open access
  • Published: 05 September 2022
  • Volume 32, pages 1055–1064 (2022)


  • Nana Sartania (ORCID: orcid.org/0000-0002-3196-2312) 1,
  • Sharon Sneddon (ORCID: orcid.org/0000-0001-9767-4180) 1,
  • James G. Boyle 1,
  • Emily McQuarrie (ORCID: orcid.org/0000-0002-0010-2491) 1 &
  • Harry P. de Koning (ORCID: orcid.org/0000-0002-9963-1827) 2


In the transition from academic to clinical learning, the development of clinical reasoning skills and teamwork is essential, but not easily achieved by didactic teaching only. Case-based learning (CBL) was designed to stimulate discussions of genuine clinical cases and diagnoses but in our initial format (CBL’10) remained predominantly tutor-driven rather than student-directed. However, interactive teaching methods stimulate deep learning and consolidate taught material, and we therefore introduced a more collaborative CBL (cCBL), featuring a structured format with discussions in small breakout groups. This aimed to increase student participation and improve learning outcomes.

A survey with open and closed questions was distributed among 149 students and 36 tutors that had participated in sessions of both CBL formats. A statistical analysis compared exam scores of topics taught via CBL’10 and cCBL.

Students and tutors both evaluated the switch to cCBL positively, reporting that it increased student participation and enhanced consolidation and integration of the wider subject area. They also reported that the cCBL sessions increased constructive discussion and stimulated deep learning. Moreover, tutors found the more structured cCBL sessions easier to facilitate. Analysis of exam results showed that summative assessment scores of subjects switched to cCBL significantly increased compared to previous years, whereas scores of subjects that remained taught as CBL’10 did not change.

Conclusions

Compared to our initial, tutor-led CBL format, cCBL resulted in improved educational outcomes, leading to increased participation, confidence, discussion and higher exam scores.


Introduction

Teaching methods in medical education that involve students in discussion and interaction continue to evolve, with a focus on promoting collaborative and active learning. Many medical schools have introduced clinical teaching early in the curriculum in an attempt to integrate basic and clinical sciences [ 1 ]. The use of clinical cases to aid teaching has been documented for over a century, with the first use of case-based learning (CBL) in 1912 to teach pathology at the University of Edinburgh [ 2 ]. There is currently no set definition of CBL. Thistlethwaite et al. [ 3 ] state that the goal of CBL ‘is to prepare students for clinical practice through the use of authentic clinical cases. It links theory to practice through the application of knowledge to the cases, using inquiry-based learning methods’, and it sits on the continuum between structured and guided learning. As such, ‘CBL’ comes in many different formats, some more didactic, some more participatory, but the pros and cons of the different approaches have rarely been systematically evaluated.

Clinical reasoning is one of the key skills to be taught before the transition into the clinic. Diagnostic error rates continue to be high [ 4 ] and reflect deficits in both knowledge and reasoning skills. Training in clinical reasoning enhances students’ ability to transfer declarative knowledge to clinical problems in preparation for working in clinical teams, and a format of transition-stage teaching that addresses both deficits in tandem would be highly beneficial. It has been reported that some forms of CBL compare favourably to didactic teaching [ 5 , 6 , 7 ] because it is a participatory method [ 8 ] that leads to improved motivation [ 9 ] and the development of reflective thinking [ 10 ]. Preparing students to think like clinicians before they commence clinical attachments affords opportunities for vertical and horizontal integration of the curriculum and fosters learning for competence [ 11 ].

CBL was introduced at the University of Glasgow, in its initial format, in 2010 (CBL’10) and incorporated in ‘Phase 3’, a 15-week-long period of transition into full-time clinical teaching with a focus on pathophysiology. McLean [ 7 ] reviewed several published definitions of CBL and summarized that CBL requires the presentation of a clinical case followed by an ‘enquiry’ on the part of the learner and provision of further information by a tutor who guides the discussion towards meeting the learning objectives—the idea we tried to replicate in CBL’10. However, our in-house evaluation found that student participation was uneven, and the sessions often became too didactic, contravening the idea of CBLs being student-centred. As a form of adult learning, Mayo [ 12 ] describes CBL in terms of the socio-constructivist model, with the students themselves constructing the new knowledge and insights and the tutor functioning merely as a guide.

The recent work by Schwartzstein and colleagues [ 13 ] suggested that collaborative case-based learning (cCBL), a modification of CBL, brings additional benefits to students over more didactic forms of CBL in terms of enjoyment and working collectively in teams. Addition of classroom discussions to case-based learning encourages the students to make inferences and conclusions from the presented data [ 14 , 15 ]. cCBL is described as ‘team-based’ and as incorporating elements of both PBL and CBL [ 13 ]. In PBL, however, a problem is presented as the means of obtaining basic scientific knowledge, whereas CBL is typically supported by prior didactic teaching to assure the class has the required knowledge base to discuss the clinical case; they exercise logical diagnostic problem-solving that combines comprehension, critical thinking and problem-solving skills, engendering deep cognitive learning [ 16 ]. cCBL places more emphasis on student-based discussions of focused but open-ended questions in small groups, before reaching a consensus in a larger group, and requires students to iteratively generate their own hypotheses from real-life clinical observations and data (Fig. 1), incorporating elements of various small group teaching modalities [ 13 ]. The collaborative format integrates cognitive and social learning modes [ 12 ] and makes the material appear more relevant [ 17 ], particularly for students with below average academic achievement and/or those hesitant to participate in discussions in large groups. A key benefit of cCBL is the increased interactivity between the students in small groups to ensure student-centred active learning and reasoning. The sessions require the students to integrate different types of information, including clinical and social data and ethical considerations, while the training in evidence-based deduction stimulates the procedural learning essential for clinical reasoning [ 18 , 19 ]. The CBL’10 and cCBL formats are compared in Fig. 1.

Figure 1: Comparison of the CBL’10 and collaborative CBL formats. An essential feature of our cCBL is the iterative nature of the small group discussion, with the small groups reporting back to class, leading to next-level case information being released by the tutor, followed by additional round(s) of small group discussions.

Here we present a pilot study in which we re-designed some of our CBL’10 scenarios to fit the cCBL format, reinforcing the discussion element by using breakout groups for intensive small group discussions, while keeping the other modules in the CBL’10 format. The collaborative approach aimed to encourage students to practice clinical reasoning and decision-making by providing iterative experiences of analysing and problem-solving complex cases. This models genuine care situations where clinicians are required to integrate a variety of topics, including ethical issues, prevention and epidemiology. Students are given relevant case information to discuss, from which they form hypotheses. They report back to the full class, upon which further information is released (e.g. ‘investigative outcomes’) followed by subsequent discussion in the breakout groups; this process is repeated several times (Fig. 1).

In 2015, the National Academy of Medicine urgently highlighted the need to improve diagnostic error rates, which continued to be too high [ 20 ]. The change to cCBL aimed to improve inductive clinical reasoning skills and joint decision-making in small teams. Moreover, this format encourages the ‘explicit integration’ of clinical reasoning in the undergraduate years of the medical curriculum, as recommended in a recent consensus statement by the UK Clinical Reasoning in Medical Education (CReME) group in order to foster effective clinical reasoning in teams [ 21 ]. Importantly, we believe that the collaborative format also highlights the importance of truly patient-centred care to participating students, which may not be as evident in CBL’10.

While it is expected that cCBL encourages a higher level of participation and should result in better knowledge as well as better reasoning and decision making in clinical practice, it is not easy to summatively assess clinical reasoning in an undergraduate setting. Indeed, a systematic review of the use of CBL, drawing on worldwide practice, shows that the three top methods of evaluation of a CBL learning session were survey (36%), test (17%) and test plus survey (16%) [ 7 ]. OSCE was used less frequently—only 9% of the studies reviewed reported using a practical exam like OSCE [ 7 ] or prescribing [ 22 ] as an outcome for evaluating the effectiveness of CBL. Here, we share a mixed-methods evaluation of our implementation, consisting of both a survey focusing on issues of participation and motivation and summative assessments to gauge the effects on integrative knowledge acquisition in subjects taught by either CBL format.

This pilot study focuses on the comparison of CBL’10 and cCBL as used in Phase 3 of the Glasgow Undergraduate Medical School curriculum. For this study, we converted three CBL’10 scenarios to cCBL, while other sessions continued unchanged as CBL’10. Evaluations by students and tutors of these sessions are presented, alongside a side-by-side comparison of exam performance for topics taught by the two CBL formats. All tutors were trained by faculty, had experience of teaching both versions of CBL and were, as such, well acquainted with both methods when asked to compare the two. Similarly, all students surveyed had participated in both formats.

After presenting the initial case information to the class of 16 students, the group was divided into sub-groups of 3–4 students each. The cCBL included a pre-class readiness assessment and in-class activities in which students were asked to work on specific tasks individually (discuss differentials; propose investigations; suggest treatment and management), debate their answers in groups of 3–4 and record these on ‘post-it’ notes presented to the full class as part of the discussion to reach a consensus in the larger group of 16 [ 13 ].

Students (n = 270) took part in a survey that gathered quantitative and qualitative data from responding year 3 students (Table 1; n = 149; 55%) and clinical tutors (Table 2; n = 36). The questionnaire addressed the participants’ perception of the effectiveness of the cCBL sessions. The survey was designed by NS and SS. The questions were based on the annual course evaluation forms and included core statutory questions used for teaching appraisal as well as bespoke ones specific to cCBLs. The survey has high face validity as the authors have extensive experience of questionnaire design, are experts in CBL and are highly experienced in curriculum design, development and evaluation.

The paper-based surveys were distributed and collected by the year administrator immediately after the cCBL teaching sessions and contained open and closed questions (5-point Likert scale: 1, strongly disagree; and 5, strongly agree) on the experiences of the group work, peer-to-peer interaction and the intended learning outcomes (ILOs) coverage throughout these sessions. Open questions in the survey were categorized into sub-themes through an inductive process, and the dominant thematic categories were agreed upon and analysed. Quotes were selected in relation to the whole dataset. Initial subthemes were refined through successive returns to the data, from which additional quotes were used.
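
To make the closed-question analysis concrete, the short sketch below tallies one 5-point Likert item into the agreement percentages of the kind quoted later in the text. The disagree and neutral counts are those reported further on for the item ‘The breakout sessions were useful in discussing topics with my peers’ (12 disagree/strongly disagree and 25 neutral out of 149 respondents); the agree total is simply inferred as the remainder, so this is an illustration rather than the study’s analysis script.

```python
# Summarize one closed 5-point Likert item (1 = strongly disagree ... 5 = strongly agree)
# into percentage bands. Counts for disagree and neutral come from the figures reported
# in the text for the breakout-session item; the agree total is inferred as the remainder.
n_total = 149
n_disagree = 12                                  # disagree + strongly disagree
n_neutral = 25
n_agree = n_total - n_disagree - n_neutral       # agree + strongly agree (inferred)

for label, count in [("agree", n_agree), ("neutral", n_neutral), ("disagree", n_disagree)]:
    print(f"{label:9s}: {count:3d} ({100 * count / n_total:.0f}%)")
```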

We applied interpretive analysis to develop an argument based on the categories identified, in order to explain how the use of cCBLs has helped students to overcome team-learning difficulties using collaborative inductive reasoning. This nested mixed method [ 23 , 24 ] provides a stronger basis of causal inference by combining quantitative analysis of the large open question dataset with an in-depth investigation of the student and tutor responses embedded in the survey.

The second outcome measure was a comparative analysis of end-of-year exam performance between two independent cohorts that were assessed on the topics covered in cCBL versus the same topics taught via CBL’10 in previous years. The three cCBL-taught topics assessed in 2019/2020 and 2020/2021 were myocardial infarction, chronic kidney disease and diabetes; the outcomes were compared to the results of 2015/2016 and 2018/2019, when students were assessed on the same topics, then taught via CBL’10. Student knowledge was assessed in summative exams using both multiple choice questions (MCQ) and modified essay questions (MEQ), with a similar weighting for each component. Exam questions were developed and standard set by subject experts involved in designing and delivering the curriculum and quality assured to ensure validity and reliability. Exam questions were subject to internal and external scrutiny. Questions were blueprinted to intended learning outcomes (ILOs) to ensure content validity and that the type of testing used (MEQs) is appropriate to test relevant knowledge in terms of diagnosis, investigation and management in a written format [ 25 ]. MEQs are often used to assess higher order abilities and abstract knowledge according to Bloom’s taxonomy, for which they are preferred over MCQs [ 26 ]. For example, structured questions are mapped to intended learning outcomes such as the ability to ‘differentiate’, which requires analysing and conceptualizing knowledge [ 27 ]. Examinations were standard set using a modified Angoff method, whereby the panel judges (subject experts) discuss each question and arbitrate the expected answers from students [ 28 ]. The internal consistency of the exams was determined by calculating Cronbach’s alpha [ 29 ]. Data are presented as mean exam score ± SE. An unpaired, two-tailed t test was used to establish whether the difference in knowledge gain between the topics taught by the two methods is indeed attributable to the new teaching method. p < 0.01 was considered statistically significant.
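
For concreteness, the sketch below illustrates the two statistical steps described above: an unpaired, two-tailed t test comparing per-topic exam scores between a cCBL-taught cohort and a CBL’10-taught cohort, and Cronbach’s alpha as a measure of an exam’s internal consistency. This is a minimal illustration only; the score arrays and the cronbach_alpha helper are hypothetical and are not taken from the study’s data or analysis code.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student scores (%) on one topic's exam questions.
scores_ccbl = np.array([72, 81, 68, 90, 77, 85, 74, 88])    # cohort taught via cCBL
scores_cbl10 = np.array([61, 70, 58, 66, 73, 64, 69, 60])   # cohort taught via CBL'10

# Mean score ± standard error, as reported in the paper.
for name, scores in [("cCBL", scores_ccbl), ("CBL'10", scores_cbl10)]:
    print(f"{name}: {scores.mean():.1f} ± {stats.sem(scores):.1f}")

# Unpaired (independent-samples), two-tailed t test; p < 0.01 taken as significant.
t_stat, p_value = stats.ttest_ind(scores_ccbl, scores_cbl10)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.01}")

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students x n_items) matrix of item marks."""
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    k = item_scores.shape[1]                          # number of items
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical item-level marks (rows: students, columns: exam questions).
items = np.array([[3, 4, 2, 5],
                  [4, 4, 3, 5],
                  [2, 3, 2, 4],
                  [5, 5, 4, 5],
                  [3, 2, 3, 4]])
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```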

The Practice of CBL at the University of Glasgow

The three re-designed collaborative CBLs introduced in 2019/2020 were compared to the remaining, unchanged CBL’10 sessions. The students took part in 90-min CBL sessions twice a week and all participated equally in both types of CBL session and all other aspects of the course. The key differences between the two types of CBL are shown in Table 3.

In our CBL’10 sessions, tutors led a more didactic session, with occasional discussions designed to allow the students to contextualize the knowledge gained from earlier lectures while still building on the clinical and problem-solving aspects of the medical curriculum.

In cCBLs, students were expected to follow the structured script and were asked to formulate hypotheses. This required them to apply the knowledge gained from the supporting lectures and other teaching formats to various clinical presentations in three steps. First, they needed to generate a differential diagnosis and commit to a most likely diagnosis. The next step was to test that hypothesis by deciding what additional tests they required in order to confirm or reject the initial diagnosis. Finally, students were asked to go beyond the diagnosis and explain their therapeutic goal, as well as consider the factors that might influence the patient’s response to therapy. Students fed their thoughts back to the main group using post-it notes.

Groups of 16 were further subdivided into smaller groups of 3–4 students each, which were given specific tasks (e.g. generating differential diagnosis; proposing investigations and/or possible treatment/management). Through interactions in the small break-out groups, students were able to integrate a variety of topics, including prevention, ethical issues and epidemiology or health systems issues and generate discussions necessary for confirming or refuting the initial diagnosis. The advantage of smaller groups was the opportunity to practice analytical reasoning skills in a safe environment. In terms of group size in shared reasoning, Edelbring et al. [ 30 ] found the dyad peer setting works best, although there was a concern that any knowledge asymmetry in the peer group may negatively impact on the learning experience.

Survey of Opinion of Collaborative Versus Traditional CBL Sessions

Qualitative analysis identified two overarching themes: enhanced engagement and gain of knowledge.

Enhanced engagement

Students and tutors both found that cCBLs increased student engagement and motivation for self-directed learning as a result of the more interactive nature of these sessions. Ninety percent of students and 92% of tutors thought that the cCBL sessions facilitated group discussions and found them more interactive than the CBL’10 sessions; all had experience with both formats.

‘Students were more engaged; the sessions generally were more interactive using breakout groups and post-it notes’. (Tutor 28). ‘Breakout stimulated discussion; smaller groups worked better, they were more interactive, more engaging and as a result, more enjoyable’. (Tutor 9). ‘Mini-group work is effective for discussion, the sessions are interactive, got you thinking about specific ideas first’. (Student 39).

As the small groups are presented with a set of clinical symptoms and a patient history in real-life cases, the discussions tend to draw on a wide range of topics and knowledge within the group.

‘Encourages discussion about dermatology + other systems’. (Student 86).

The biggest benefit—according to tutors and students—was the inclusivity, as the collaborative CBLs encouraged those who normally participate passively to engage better, due to the very small group size.

‘Definitely encourages more discussion and even from those who may not normally contribute’ (Tutor 16). ‘Allows to encourage even fairly non-responsive ones’ (Student 126).

With an increased engagement came a deeper understanding of the subject matter, and the students approached these sessions more confidently. Tutors commented that the students took control of their own learning and really practiced their clinical reasoning skills as intended, rather than relying on the tutors.

‘Encouraged students to identify own learning needs; onus is on students to breakout and discuss’. (Tutor 36). ‘Allowed students to be actively involved and develop clinical reasoning skills’. (Tutor 35). ‘Makes you think for yourself, got you thinking about specific ideas first and think of alternatives and different causes/potentials’. (Student 60).

Knowledge gain

Students felt that the interactivity helped contextualize the knowledge and they learnt better as a result of it; it was easier for them to concentrate on the topic when discussing it with their peers:

‘I felt like I learned more because I was more engaged as it was interactive; use of post-its was helpful in solidifying points from discussion and whether or not they were valid, in a way that previous CBLs lacked’. (Student 27, Student 20). ‘Very interactive which encouraged more active learning and discussions, were clinically relevant’. (Student 17).

Students felt that breaking the topic down to basics and building a hypothesis that they discussed with peers in smaller groups allowed them to consolidate the lectures.

‘Reinforces knowledge from lectures in a practical case situation; we learn more when we discuss the topic with others and it helps consolidate knowledge’. (Students 99 and 49). ‘[collaborative] CBLs wrapped up the lectures well and helped me to consolidate my knowledge’. (Student 42). ‘Breaking everything down to basics helped with understanding’. (Student 123). ‘Using ‘post-it's’ for differentials helps consolidate knowledge from the week; groups of 3 work best—groups should be really small, otherwise it doesn’t work’. (Student 21).

It should be noted, however, that a quarter of the students surveyed and responding were either neutral (17%) or disliked (8%) the discussions in breakout groups. These students questioned the usefulness of the approach as they felt they had insufficient knowledge of the subject matter to engage effectively in small group discussions.

‘The format was not very helpful; I prefer when it goes through the case sequentially; don’t like too many breakout group discussions – often we don’t know enough to comment’. (Student 84). ‘Students didn’t enjoy the post-it notes and preferred to discuss things as opposed to writing down’. (Tutor 8). ‘It is difficult to answer /discuss very broad topics in small groups with limited knowledge. It is much easier in the previous weeks to follow a case from start to finish with smaller questions’. (Student 91).

Effect of the Introduction of cCBL on Summative Examinations

In order to objectively assess knowledge gain, we compared relevant exam performance of the cohorts that were taught specific topics via CBL’10 with those in later cohorts that received them as cCBLs.

There was a highly significant improvement in the cCBL students’ exam marks, particularly for the questions on myocardial infarction (MI; p = 2.2E-46, Fig. 2A), but also for those on chronic kidney disease (CKD; p = 0.0012, Fig. 2B) and diabetes (p = 0.022; Fig. 2C). As a control, the overall exam performance for the selected two cohorts at the end of year 2 was used, providing the nearest possible comparison. The two cohorts compared for MI, sitting the Y2 exam in 2018/2019 (the Y3 cohort of 2019/2020) and 2014/2015 (the Y3 cohort of 2015/2016), respectively, performed identically in this assessment, indicating that the two cohorts achieved comparable exam grades prior to cCBL (p = 0.15; Fig. 2A). Analysis of the end-of-Y2 exam for the cohorts compared for CKD and diabetes showed that the cohort that would go on to receive CBL’10 in these subjects performed slightly better in the Y2 exam than the cohort that would go on to do cCBL in Year 3 (p = 0.006; Fig. 2B, C).

Figure 2: A–C Comparison of exam performance in year 3 on a specific topic between cohorts having been taught the subject by cCBL (red bars; 2019/2020 or 2020/2021) and CBL’10 (blue bars; 2015/2016 and 2018/2019). The two cohorts were set questions of comparable difficulty and complexity in their Y3 exams. As a control, the overall cohort performance in the end-of-year-2 exam was also analysed for the same cohorts. Bars show average ± SEM and the t test score is indicated above the compared bars. D Y3 exam performance comparison for the two cohorts in three subjects taught only via CBL’10 and assessed by a similar question on the topic. Green bars, cohort of 2019/2020; blue bars, cohorts of 2020/2021, 2018/2019 and 2015/2016, respectively. The number of students in each cohort is shown in each bar.

However, not all topics in the 15-week Phase 3 were changed to cCBL at the same time. In order to allow a true side-by-side comparison, we analysed exam performance in topics that were still taught via CBL’10 in 2019/2020 (Fig. 2D, green bars) and compared these with exam performance in the same topics in previous years, also taught by CBL’10 (Fig. 2D, blue bars). In contrast to the gains in the cCBL topics of MI, CKD and diabetes (Fig. 2A–C), exam performance for the topics of haematology, pathology and pharmacology, taught via CBL’10 (and assessed in both cohorts), was statistically identical (p = 0.81, 0.068 and 0.82, respectively; Fig. 2D). We can thus conclude that the improvements in exam scores in cCBL-taught topics are likely attributable to the CBL format, as no such improvement was seen when the same cohorts were compared in topics still taught via CBL’10.

CBL, in its original form [ 3 , 31 , 32 ], was introduced into our year 3 MBChB curriculum in 2010 and has consistently been one of the most highly evaluated components of the course. However, our tutor feedback indicated diminishing student engagement in the discussion elements of the class. Following the study by Krupat et al. [ 13 ], demonstrating an improved learning experience of students with the cCBL format, we introduced a similar change in academic year 2019/2020, in a subset of our CBL sessions, as a pilot study to address and investigate the student participation issues.

We have conducted a side-by-side comparison of the two cohorts that received both types of CBL and found that cCBL improved student engagement, motivation and knowledge gain and had a positive impact on assessment performance. Direct quantitative evidence of improved exam scores in the cCBL group was presented. Our findings are consistent with other studies that evaluated a switch to collaborative CBL [ 13 , 33 , 34 ], but our study is the first to compare two types of CBL in a mixed teaching year featuring both formats. While this study cannot firmly establish that the gain in the quality of student learning is attributable solely to the switch from the original CBL to cCBL, no performance gain was seen in the subjects that remained taught as CBL’10. The majority of our students engaged well with the cCBL process and evaluated it positively.

We think cCBL sessions are particularly effective in developing clinical reasoning. The hypothesis-generating step and the discussions are opportunities for students to consolidate topics, first in small peer groups and then with the tutor for clarification, as required. This approach develops deeper clinical insights than didactic teaching can provide. The complex combination of skills and knowledge needed to arrive at differential diagnoses and evidence-based practice cannot be taught effectively via lectures alone [ 35 , 36 ]. Interactive teaching sessions with peers and tutors, involving discussions and clinical reasoning, are essential and highly valued by the current generation of students [ 37 ]. The qualitative analysis carried out in this study identified two overarching themes: gain of engagement/motivation and gain of knowledge.

Fredricks et al. [ 38 ] distinguish three types of engagement: behavioural, emotional and cognitive, describing participation, emotional responses to the learning environment and deliberate investment of effort by the student, respectively. Expanding on this theme, studies by Wang and Eccles [ 39 ] and Kahu [ 40 ] as well as DiBenedetto and Bembenutty [ 41 ] found that each engagement type influenced academic performance and aspiration in different ways, but that both aspects were positively correlated with self-regulated learning. A study with 5,805 undergraduate students refined the model further to show that the effects of a student’s emotional state on achievement are mediated through self-regulated learning and motivation [ 42 ]. Cavanagh et al. [ 43 ] identified student buy-in of the active learning format as an important factor for improved course performance, showing the need to engage with the student body in co-creating and developing teaching material and the format in which the students want to be taught.

The data collected from students who participated in the cCBL sessions indicated that the format enhanced learning, increased interactivity and promoted better understanding of the topic through the group discussions. A large majority of the students believed that they learned more because of the interactive format and that the stepwise process of the cCBL enabled topics to be broken down, making the construction of hypotheses easier. They also reported that the collaborative format allowed better integration of diverse learning on the course, which built confidence and helped with consolidation. However, some of the students felt uneasy about being expected to discuss complex scenarios in such small groups, apparently not confident that they had the knowledge to do so. Table 1 shows that 12/149 responded disagree or strongly disagree to the survey question ‘The breakout sessions were useful in discussing topics with my peers’, with a further 25/149 responding ‘neutral’. It could be argued that these students may actually have benefitted most from this clinical reasoning training, as they will next find themselves in various ward placements requiring these skills.

The tutors found that the collaborative format made it easier to engage with, and motivate, students, particularly the quieter ones, who had less opportunity to ‘hide’ in the small groups. While this may have been somewhat uncomfortable for the most passive participants, we regard the increased (need for) participation as one of the most positive outcomes of the cCBL format. Students are more anxious about being called upon (by the tutor) to answer a question in a larger group, as opposed to discussions with a few of their peers. McConnell and Eva [ 44 ] discuss how a (pre-)clinical student’s emotional state can determine learning outcomes and how direct questioning can induce ‘fear and stress’. The use of ‘post-it’ notes to capture ideas and thoughts is another effective way to get everyone involved in the process and can help overcome silence in groups, as it serves to almost anonymize the opinions submitted to the larger group. Tutors also noted that cCBL helped students identify their own learning needs, as the small group discussions readily identified knowledge gaps.

Almost all of the tutors surveyed (94%) felt more confident in leading the student discussions and felt that the sessions achieved the learning outcomes of the case. Thistlethwaite et al. [ 3 ] noted a similar satisfaction with CBL among tutors, attributing the enjoyment either to the use of authentic clinical cases or to the group learning effect. The cCBL development further extends these gains, as our tutors noted that the smaller groups used in cCBL were probably the reason for the increased engagement. Moreover, tutors were more positive about delivering sessions because cCBL is more structured than CBL’10, which is hostage to participation by students in the full group, with the iterative three-step process providing tutors with a clear framework to follow.

The use of active learning is increasingly considered to be associated with student engagement and improved outcomes [ 45 ]. Bonwell and Eison [ 46 ] defined active learning as ‘instructional activities involving students in doing things and thinking about what they are doing’. It strengthens students’ use of higher order thinking to complete activities or participate in discussion in class, and it often includes working in groups. Students retain information for longer, as groups tend to learn through discussion, the formulation of hypotheses and the evaluation of others’ ideas. Often, it helps them recognize the value of their contribution, resulting in increased confidence, as shown by the survey results. When views of cCBL were negative, they almost always reflected a lack of confidence in the students’ own knowledge, emphasizing the need for CBL to follow on from didactic teaching sessions that consolidate comprehension. This study confirms that cCBL encourages participation and that it is popular with most students, who find it a relevant way to prepare for the clinical phase of their medical education. Many students are still too passive in CBL’10, and the switch to cCBL aimed to increase ‘active learning’ for the entire cohort and thereby improve exam performance as well.

Although many medical schools are using active learning strategies, there is still little evidence in the literature that directly demonstrates a positive effect on summative assessment. Krupat et al. [ 13 ] showed an increase in attainment in lower aptitude students. In this study, we have shown exam performance improvements in all three subjects that switched to cCBL, with a highly significant improvement in MI and modest improvements in the two other topics. We attribute the very large improvement in cardiovascular question performance, relative to the more modest improvements in diabetes and nephrology, to the fact that the students have little cardiology teaching in year 2. In contrast, year 2 provided a solid understanding of the basic principles and clinical application of the other two subjects, which resulted in a less dramatic improvement when using cCBL, while the cardiovascular topic is a more sensitive test of the extent to which knowledge gain is possible using cCBL. These increases in performance were compared to three other topics that remained taught by CBL’10 (haematology, pathology and pharmacology). None of these subjects showed any increase in assessment scores in the cohorts examined. Moreover, the cohorts performed virtually identically in their respective second year exams, suggesting that the difference in exam performance between the two cohorts taught via the two different CBL formats could indeed be attributed to cCBL efficacy.

However, a limitation of any form of teaching is that one size rarely fits all. The collaborative style of CBL aims to encourage participation and discussion, with ample opportunity to explore issues in a safe environment, but we would wish to study how different individual learners perceive this and whether a more personalized approach could be developed. In addition, we would like to investigate whether non-specialty and specialty tutors find the process equally effective. It would have been logical to assess the true benefit of cCBL in a clinical setting, where reasoning skills are exercised daily. However, there are obviously a number of potential confounders to such a study evaluating students’ abilities to reason clinically once they have more experience of placements.

To summarize, modification of CBL to a more collaborative approach with very small breakout groups is effective in improving medical students’ engagement with tutors and peers and their performance in assessment. This side-by-side comparative study outlines the clear benefits of the collaborative format: the practice of clinical reasoning in small groups and the directed, focused discussion of the cases presented to the group led to increased participation as well as improved summative examination scores.

Eisenstein A, Vaisman L, Johnston-Cox H, Gallan A, Shaffer K, Vaughan D, O’Hara C, Joseph L. Integration of basic science and clinical medicine: the innovative approach of the cadaver biopsy project at the Boston University School of Medicine. Acad Med. 2014;89:50–3.


Sturdy S. Scientific method for medical practitioners: the case method of teaching pathology in early twentieth-century Edinburgh. Bull Hist Med. 2007;81(4):760–92.

Thistlethwaite JE, Davies D, Ekeocha S, Kidd JM, MacDougall C, Matthews P, Purkis J, Clay D. The effectiveness of case-based learning in health professional education. A BEME systematic review: BEME Guide No. 23. Med Teacher. 2012;34:e421–44.

Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493–9. https://doi.org/10.1001/archinte.165.13.1493 .

Jhala M, Mathur J. The association between deep learning approach and case based learning. BMC Med Educ. 2019;19:106.

Srinivasan M, Wilkes M, Stevenson F, Nguyen T, Slavin S. Comparing problem-based learning with case-based learning: effects of a major curricular shift at two institutions. Acad Med. 2007;82:74–82.

McLean SF. Case-based learning and its application in medical and health-care fields: a review of worldwide literature. J Med Educ Curric Dev. 2016;3:39–49. https://doi.org/10.4137/JMECD.S20377 .

Tomey AM. Learning with cases. J Contin Educ Nurs. 2003;34:34–8.

Irby DM. Three exemplary models of case-based teaching. Acad Med. 1994;69:947–53.

Schwartz PL, Egan AG, Heath CJ. Students’ perceptions of course outcomes and learning styles in case-based courses in a traditional medical school. Acad Med. 1994;69:507.

Ten Cate O, Custers EJ, Durning SJ. Principles and practice of case-based clinical reasoning education: a method for preclinical students. Cham, Switzerland: Springer. 2018:208. ISBN 978–3–319–64827–9.

Mayo JA. Using case-based instruction to bridge the gap between theory and practice in psychology of adjustment. J Construct Psychol. 2004;17:137–46. https://doi.org/10.1080/10720530490273917 .

Krupat E, Richards J, Sullivan A, Fleenor T, Schwartzstein R. Assessing the effectiveness of case-based collaborative learning via randomized controlled trial. Acad Med. 2016;91:723–9.

Wassermann S. Introduction to case method teaching: a guide to the galaxy. New York: Teachers College Press; 1994.


McDade SA. Case study pedagogy to advance critical thinking. Teach Psychol. 1995;22:9–10.

Krockenberger MB, Bosward KL, Canfield PJ. Integrated case-based applied pathology (ICAP): a diagnostic-approach model for the learning and teaching of veterinary pathology. J Vet Med Educ. 2007;34:396–408.

Koh D, Chia KS, Jeyaratnam J, Chia SE, Singh J. Case studies in occupational medicine for medical undergraduate training. Occup Med. 1995;45:27–30.

Wadowski PP, Steinlechner B, Schiferer A, Löffler-Statka H. From clinical reasoning to effective clinical decision making – new training methods. Front Psychol. 2015;6:473.

Turk B, Ertl S, Wong G, Wadowski PP, Löffler-Statka H. Does case-based blended-learning expedite the transfer of declarative knowledge to procedural knowledge in practice? BMC Med Ed. 2019;19:447. https://doi.org/10.1186/s12909-019-1884-4 .

Balogh EP, Miller BT, Ball JR (Eds). Improving diagnosis in health care. Committee on diagnostic error in health care; Board on health care services; Institute of medicine; The national academies of sciences, engineering, and medicine. Washington (DC): The National Academies Press. 2015.

Cooper N, Bartlett M, Gay S, Hammond A, Lillicrap M, Matthan J, Singh M. UK Clinical Reasoning in Medical Education (CReME) consensus statement group. Consensus statement on the content of clinical reasoning curricula in undergraduate medical education. Med Teach. 2021;43:152–9.

Herdeiro MT, Ferreira M, Ribeiro-Vaz I, Junqueira Polónia J, Costa-Pereira A. Workshop- and telephone-based Interventions to improve adverse drug reaction reporting: a cluster-randomized trial in Portugal. Drug Saf. 2012;35:655–65. https://doi.org/10.1007/BF03261962 .

Creswell JW. Research design: qualitative, quantitative, and mixed methods approaches. 2nd ed. Thousand Oaks, CA: Sage; 2003.

Lieberman ES. Nested analysis as a mixed-method strategy for comparative research. Am Pol Sci Rev. 2005;99:435–52.

Downing SM. Validity: on the meaningful interpretation of assessment data. Med Educ. 2003;37:830–7.

Palmer EJ, Devitt PG. Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? BMC Med Educ. 2007;7:49. https://doi.org/10.1186/1472-6920-7-49 .

Krathwohl DR. A revision of Bloom’s taxonomy: an overview. Theory Pract. 2002;41(4):212–8.

Norcini JJ. Setting standards on educational tests. Med Educ. 2003;37:464–9.

Raykov T, Marcoulides GA. Thanks coefficient alpha, we still need you! Educ Psychol Meas. 2017;79:200–10.

Edelbring S, Parodis I, Lundberg IE. Increasing reasoning awareness: video analysis of students’ two-party virtual patient interactions. JMIR Med Educ. 2018;4: e4. https://doi.org/10.2196/mededu.9137 .

Garvey MT, O’Sullivan M, Blake M. Multidisciplinary case-based learning for undergraduate students. Eur J Dent Educ. 2000;4:165–8.

Struck BD, Teasdale TA. Development and evaluation, of a longitudinal case-based learning (CBL) experience for a Geriatric Medicine rotation. Gerontol Geriat Educ. 2008;28:105–14.

Fischer K, Sullivan AM, Krupat E, Schwartzstein RM. Assessing the effectiveness of using mechanistic concept maps in case-based collaborative learning. Acad Med. 2019;94:208–12.

Said JT, Thompson LL, Foord L, Chen ST. Impact of a case-based collaborative learning curriculum on knowledge and learning preferences of dermatology residents. Int J Womens Dermatol. 2020;6:404–8.

Wouda JC, Van de Wiel HB. Education in patient-physician communication: how to improve effectiveness? Patient Educ Couns. 2013;90:46–53.

Draaisma E, Bekhof J, Langenhorst VJ, Brand PL. Implementing evidence-based medicine in a busy general hospital department: results and critical success factors. BMJ Evid Based Med. 2018;23:173–6.

Koenemann N, Lenzer B, Zottmann JM, Fischer MR, Weidenbusch M. Clinical Case Discussions - a novel, supervised peer-teaching format to promote clinical reasoning in medical students. GMS J Med Educ. 2020;37:Doc48.

Fredricks JA, Blumenfeld PC, Paris AH. School engagement: potential of the concept, state of the evidence. Rev Educ Res. 2004;74:59–109.

Wang M, Eccles J. Adolescent behavioral, emotional, and cognitive engagement trajectories in school and their differential relations to educational success. J Res Adolesc. 2012;22:31–9.

Kahu ER. Framing student engagement in higher education. Stud High Educ. 2013;38(5):758–73. https://doi.org/10.1080/03075079.2011.598505 .

DiBenedetto MK, Bembenutty H. Within the pipeline: self-regulated learning, self-efficacy, and socialization among college students in science courses. Learn Individ Differ. 2013;23:218–24.

Mega C, Ronconi L, De Beni R. What makes a good student? How emotions, self-regulated learning, and motivation contribute to academic achievement. J Educ Psychol. 2014;106:121.

Cavanagh AJ, Aragon OR, Chen X, Couch BA, Durham MF, Bobrownicki A, Hanauer DI, Graham MJ. Student buy in to active learning in a college science course. CBE Life Sci Educ. 2016;15:ar76. https://doi.org/10.1187/cbe.16-07-0212 .

McConnell MM, Eva KW. The role of emotion in the learning and transfer of clinical skills and knowledge. Acad Med. 2012;87:1316–22.

Freeman S, O’Connor E, Parks JW, Cunningham M, Hurley D, Haak D, Dirks C, Wenderoth MP. Active learning increases student performance in science, engineering, and mathematics. Proc Natl Acad Sci USA. 2014;111:8410–5.

Bonwell CC, Eison JA. Active learning: creating excitement in the classroom. ASHE-ERIC Higher Education Report No. 1. Washington (DC): George Washington University, School of Education and Human Development. 1991. ISBN 1–878380–08–7.

Download references

Author information

Authors and Affiliations

Undergraduate Medical School, School of Medicine, University of Glasgow, Glasgow, UK

Nana Sartania, Sharon Sneddon, James G. Boyle & Emily McQuarrie

Institute of Infection, Immunity and Inflammation, University of Glasgow, Glasgow, UK

Harry P. de Koning


Corresponding author

Correspondence to Nana Sartania .

Ethics declarations

Ethics Approval

Ethical approval for the project was granted by the MVLS College Ethics Committee of the University of Glasgow (Ref 200190106).

Consent to Participate

Consent to participate was obtained from all individual participants included in the study.

Conflict of Interest

The authors declare no conflict of interest.


About this article

Sartania, N., Sneddon, S., Boyle, J.G. et al. Increasing Collaborative Discussion in Case-Based Learning Improves Student Engagement and Knowledge Acquisition. Med.Sci.Educ. 32 , 1055–1064 (2022). https://doi.org/10.1007/s40670-022-01614-w


Accepted : 25 August 2022

Published : 05 September 2022

Issue Date : October 2022

DOI : https://doi.org/10.1007/s40670-022-01614-w


  • Medical education
  • Case-based learning
  • Collaborative learning
  • Small group teaching


Can clinical case discussions foster clinical reasoning skills in undergraduate medical education? A randomised controlled trial

BMJ Open, Volume 9, Issue 9

  • Marc Weidenbusch 1, 2,
  • Benedikt Lenzer 1 (http://orcid.org/0000-0003-2239-797X),
  • Maximilian Sailer 3,
  • Christian Strobel 1,
  • Raphael Kunisch 2,
  • Jan Kiesewetter 1,
  • Martin R Fischer 1,
  • Jan M Zottmann 1 (http://orcid.org/0000-0002-3887-1181)
  • 1 Institute for Medical Education, University Hospital of LMU Munich, Munich, Germany
  • 2 Department of Internal Medicine IV, University Hospital of LMU Munich, Munich, Germany
  • 3 Department of Education, University of Passau, Passau, Germany
  • Correspondence to Dr Jan M Zottmann; jan.zottmann{at}med.uni-muenchen.de

Objective Fostering clinical reasoning is a mainstay of medical education. Based on the clinicopathological conferences, we propose a case-based peer teaching approach called clinical case discussions (CCDs) to promote the respective skills in medical students. This study compares the effectiveness of different CCD formats with varying degrees of social interaction in fostering clinical reasoning.

Design, setting, participants A single-centre randomised controlled trial with a parallel design was conducted at a German university. Study participants (N=106) were stratified and tested regarding their clinical reasoning skills right after CCD participation and 2 weeks later.

Intervention Participants worked within a live discussion group (Live-CCD), a group watching recordings of the live discussions (Video-CCD) or a group working with printed cases (Paper-Cases). The presentation of case information followed an admission-discussion-summary sequence.

Primary and secondary outcome measures Clinical reasoning skills were measured with a knowledge application test addressing the students’ conceptual, strategic and conditional knowledge. Additionally, subjective learning outcomes were assessed.

Results With respect to learning outcomes, the Live-CCD group displayed the best results, followed by Video-CCD and Paper-Cases, F(2,87)=27.07, p<0.001, partial η 2 =0.384. No difference was found between Live-CCD and Video-CCD groups in the delayed post-test; however, both outperformed the Paper-Cases group, F(2,87)=30.91, p<0.001, partial η 2 =0.415. Regarding subjective learning outcomes, the Live-CCD received significantly better ratings than the other formats, F(2,85)=13.16, p<0.001, partial η 2 =0.236.

Conclusions This study demonstrates that the CCD approach is an effective and sustainable clinical reasoning teaching resource for medical students. Subjective learning outcomes underline the importance of learner (inter)activity in the acquisition of clinical reasoning skills in the context of case-based learning. Higher efficacy of more interactive formats can be attributed to positive effects of collaborative learning. Future research should investigate how the Live-CCD format can further be improved and how video-based CCDs can be enhanced through instructional support.

  • undergraduate medical education
  • case-based learning
  • clinical reasoning
  • social interaction
  • medical decision making


https://doi.org/10.1136/bmjopen-2018-025973


Strengths and limitations of this study

First empirical study on the implementation of clinical case discussions in undergraduate medical education.

Comparison of clinical case discussions with differing degrees of social interaction to determine their effectiveness for medical students’ acquisition of clinical reasoning skills by between-group analyses.

Implementation of multidimensional and multilayered test instruments in a pre-test, post-test and delayed post-test design to measure clinical reasoning skills with a knowledge application test and self-assessment.

The knowledge application test utilised in this study did not allow for a more in-depth analysis of clinical reasoning skills (ie, a distinction of conceptual, strategic and conditional knowledge).

Introduction

Curriculum developers face the challenge of implementing competence-oriented frameworks such as CanMEDS (Canada; http://www.royalcollege.ca/canmeds ), NKLM (Germany; http://www.nklm.de ) or PROFILES (Switzerland; http://www.profilesmed.ch ), including the need to train clinical reasoning skills as a medical doctor’s key competence. 1–3 As such, clinical reasoning skills are crucial not only for appropriate medical decision making but also to avoid diagnostic errors and the associated harm for both patients and healthcare systems. 4

Case-based learning has been proposed to foster clinical reasoning skills 5 and is well accepted among students. 6 It found an early representation in clinicopathological conferences (CPCs, first introduced by Cannon in 1900 7 ), which are still practised today. The CPCs conducted at the Massachusetts General Hospital are published regularly as the Case Records series of the New England Journal of Medicine. In those CPCs, the ‘medical mystery’ 8 presented by the case under discussion invites readers to consider the possible diagnosis themselves before it is finally disclosed in the last part of the CPC. Despite the absence of definitive evidence for their efficacy as a teaching method, CPCs have been widely used in medical education since the early 20th century to foster clinical reasoning. 9–11 While the CPC case records reach a large number of medical readers around the world, the format has been criticised as anachronistic, with a diagnosing ‘star’ (ie, the discussant) performing while acutely aware of being the centre of attention. 12

Case-based learning formats are embedded in a context, which is known to promote learning better than presenting facts in an abstract, non-contextual form. 13 A definition found in the review by Merseth suggests three essential elements of a case: a case is real (ie, based on a real-life situation or event); it relies on careful research and study; and it is ‘created explicitly for discussion and seeks to include sufficient detail and information to elicit active analysis and interpretation by users’. 14 Cases may be represented by means of text, pictures, videos and the like. Realism and authenticity are varying features of cases, 15 but particularly elaborated and authentic cases provide an increased diagnostic challenge, adding value for medical training. 16

However, due to their setup, CPCs are often passive learning situations for participants, who listen to the discussant laying out his or her clinical reasoning on the case under discussion. According to the ICAP framework by Chi et al, 17 teaching formats increase in efficacy along the continuum passive < active < constructive < interactive. Learning is enhanced when students engage interactively in discussions with each other. Accordingly, case-based learning has been found to be particularly beneficial in collaborative settings. 15 However, another important aspect to consider in collaborative learning environments is that some students may participate passively, while others contribute disproportionately. To foster optimal learning effects, students should thus be encouraged to engage interactively. One prerequisite for self-guided learning in groups is a low threshold for students to come forward with their questions and participate in the ensuing discussions. 18 To this end, peer teaching has been established as an effective tool to stimulate discussions. 19 To make sure peer tutors are not overwhelmed in moderating these discussions, the presence of an experienced clinician appears to be warranted, 20 in addition to specific training of the tutors.

Taken together, while traditional CPCs encompass some important dimensions of effective case-based learning environments, they do not systematically aim at the constructive or interactive learner activities that are known features of effective teaching formats. 17 21 Therefore, we introduced clinical case discussions (CCD) in undergraduate medical education to account for these features. We still use the case records of the Massachusetts General Hospital, 9 as these cases exemplify realistic patient encounters and fulfil the criteria for an interactive collaborative learning process as explained above. In the CCD approach, cases are typically presented with information up to the admission of the patient to the hospital. This point usually marks the start of an interactive group discussion about possible diagnoses and diagnostic strategies. After all test results have been discussed, the actual diagnosis is disclosed and the pitfalls and take-home messages of the case are summarised.

To investigate the effectiveness of the CCD approach in undergraduate medical education, we designed an intervention trial and assessed clinical reasoning skills in medical students before and after participating in live CCDs or being exposed to video recordings of live CCDs. We compared these formats and their effects on clinical reasoning with the more traditional approach of working through written cases. When carrying out this randomised trial, we hypothesised that participation in live CCD sessions would lead to a higher increase of clinical reasoning skills than simply reading the cases. To better understand possible effects of the CCD learning environment with its social components on learning outcomes, participation in live CCDs as outlined above was additionally compared with the effects of watching videos of CCDs online. This comparison also seemed relevant from an economic point of view as videostreaming of lectures and seminars is prevalent at many institutions in higher education, allowing for flexible and scalable access to learning materials. 22 To investigate the potential of different CCD formats for regular curricular use, we also measured subjective learning outcomes after the intervention and correlated student self-assessments with objective changes in their clinical reasoning skills.

Participants

Initially, we recruited 106 volunteer medical students at the Medical Faculty of LMU Munich. Randomisation was performed in a two-step procedure. First, we selected a sample of roughly 100 enrolled students. Next, we stratified participants by creating triplets on the basis of the variables age, gender, year of study, prior CCD participation and performance in a knowledge application pre-test. This was done to limit the risk of random misdistribution within the selected sample. From each triplet, we randomly assigned one participant to each of the experimental groups. A total of 90 participants eventually completed the study; 31 were male and 59 female. They were aged 20–41 years (M=23; SD=2.97) and in their first to eighth clinical semester (M=3.50; SD=1.78).
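A minimal sketch of this kind of triplet-based stratified allocation is shown below (illustrative only: the variable names, the matching rule and the Python implementation are assumptions for demonstration, not the authors' actual procedure).

```python
import random

def stratified_triplet_randomisation(participants, arms=("Live-CCD", "Video-CCD", "Paper-Cases"), seed=42):
    """Sort by the stratification variables, form consecutive triplets,
    then randomly allocate one member of each triplet to each arm."""
    rng = random.Random(seed)
    ordered = sorted(
        participants,
        key=lambda p: (p["year_of_study"], p["pretest_score"], p["age"], p["gender"], p["prior_ccd"]),
    )
    allocation = {arm: [] for arm in arms}
    usable = len(ordered) - len(ordered) % len(arms)   # drop a remainder that cannot form a full triplet
    for i in range(0, usable, len(arms)):
        triplet = ordered[i:i + len(arms)]
        shuffled_arms = list(arms)
        rng.shuffle(shuffled_arms)                     # random arm order within this triplet
        for person, arm in zip(triplet, shuffled_arms):
            allocation[arm].append(person["id"])
    return allocation

# Hypothetical usage with invented participant records:
participants = [
    {"id": n, "age": 20 + n % 10, "gender": "f" if n % 2 else "m",
     "year_of_study": 1 + n % 4, "prior_ccd": n % 3 == 0, "pretest_score": n % 15}
    for n in range(9)
]
print(stratified_triplet_randomisation(participants))
```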

The study was approved by the ethics committee of the Medical Faculty of LMU Munich (approval reference no. 222–15). Written informed consent was obtained from all study participants and they received a financial reimbursement of 50 Euros on completion of the trial.

Patient and public involvement

No patients or public were involved in this research.

Study design

We conducted a single-centre randomised controlled trial consisting of a total of five course sessions with a parallel design (see figure 1 ). One week prior to the first CCD session, participants were introduced to the principles of the CCD approach and the sequence of this trial in an introductory session where they also took a knowledge application pre-test (T_0). In the experimental phase, participants attended 3 weekly interventional course sessions of 90 min each in one of three experimental groups with the respective CCD formats. Participants took a knowledge application post-test at the end of the last experimental course session (T_1), 4 weeks after pre-testing. A delayed knowledge application post-test was conducted 2 weeks after completion of the interventional courses (T_2); we deliberately chose that time interval to investigate the sustainability of possible effects while balancing the risk of postintervention confounding. 23

Figure 1. Study design. Full data sets of 90 medical students were analysed. T_0, knowledge application pre-test; T_1, knowledge application post-test; T_2, delayed knowledge application post-test.

In all experimental groups, the intervention was based on the same three independent internal medicine cases. Chief complaints in these cases were paraesthesia (first session), fever and respiratory failure (second session) and rapidly progressive respiratory failure (third session). 24–26 Cases were worked through in an iterative approach in different formats: (1) peer-moderated live case discussions in an interactive setting (Live-CCD, n=30), (2) a single-learner format utilising an interactive multimedia platform displaying video recordings of the live case discussions (Video-CCD, n=27) and (3) a single-learner format in which the students worked with the original paper cases of the NEJM (Paper-Cases, n=33). The cases were prepared so that participants in each format were exposed to the same case information.

In all three groups, cases were presented in a specified, structured manner similar to the original CPC (see figure 2). In each format, the students (‘discussants’) had to fill out a form after the admission part in which the case had to be summarised and a list of clinical problems and working diagnoses provided. Subsequently, between the discussion and summary parts, a second form had to be completed in which the final diagnostic test and the most likely diagnosis had to be proposed.

Figure 2. Live-CCD structure. CCD sessions are divided into three parts. In the admission part, the presenting student shows the discussants his prepared slides (based on the original NEJM case record), after which the group has to agree on an assessment of the patient under discussion. In the interactive discussion part, the students prioritise the medical problems, link them to possible aetiologies and order tests to further corroborate or discard differential diagnoses. After all these tests have been discussed, students order the putative diagnostic test. The result is disclosed along with the pathological discussion and ‘take home messages’ on important differentials in the third part of the session. CBC, complete blood count; CC, chief complaint; CCD, clinical case discussion; CMP, comprehensive metabolic panel; CXR, chest radiograph; FH, family history; HPI, history of present illness; Meds, medications; PE, physical examination; PMH, past medical history; PT, prothrombin time; PTT, partial thromboplastin time; ROS, review of systems; SH, social history; UA, urine analysis; VS, vital signs.

In the Live-CCD group, the case presentation was prepared beforehand by a voluntary discussant (‘presenter’), who presented the facts in the admission part (according to the structure shown in figure 2). Electronic slides and flipcharts were used to convey case information. Original test results were revealed by the presenter during the discussion only when requested by the group of students. Furthermore, the presenter summarised the differential diagnosis and important pathophysiological features of the case at the end of the session and provided a short take-home message. The moderating medical students (‘moderators’) were recruited from among previous CCD participants. They had experience in CCD moderation and had received an introductory training (2 days) in higher education methods and group facilitation prior to the study. The moderator facilitated the discussion process and ensured a reasonable approach to the patient encounter (eg, with respect to timing and hierarchy of ordered tests) in close communication with the discussants. Moreover, the moderator helped students develop their diagnostic strategy by co-evaluating their requested findings and the reasoning employed. Supervision of the correctness of medical facts and of the diagnostic approach was ultimately provided by a clinician, who could stop the discussion at any point when faulty reasoning was evident or when discussants explicitly requested the facilitation of an experienced physician. The clinicians’ level of involvement in the discussion was left to their own discretion. We varied the staff between each Live-CCD to minimise effects of personal teacher characteristics. Live sessions typically lasted 90 min and were recorded with multiple cameras.

Students in the Video-CCD format worked on a single-learner multimedia workstation on which a video recording of the Live-CCD was displayed. These recordings also contained the electronic slide presentation from the Live-CCD and enabled simultaneous observation of the discussion from multiple camera angles. Participants could pause and partially skip the videos.

In the Paper-Cases group, participants received the case information of each CCD section sequentially (ie, admission, discussion, summary) in a print format. In both single-learner formats, students could choose their personal working speed. There was neither a prespecified minimum nor a maximum time they were required to work on the cases. In each of the three formats, full access to the internet was permitted for additional information.

Instruments

Learning outcomes with respect to clinical reasoning were measured with a knowledge application test that consisted of 29 items (ie, a maximum of 29 points could be achieved) and was to be filled out within 45 min. The knowledge application test was based on instruments previously developed at the Institute for Medical Education at LMU Munich. 27–29 It comprised multiple choice items, key feature problems and problem-solving tasks, addressing the conceptual, strategic and conditional knowledge of the participants (see figure 3 ). Meta-analyses on retest effects suggest that score increase is higher for identical forms than for parallel test forms. 30 In order to limit such effects, we applied parallel forms of the knowledge application test for premeasurement and postmeasurement (ie, topics covered by the individual items were the same, but the items were reformulated and their order was permutated). Overall test difficulty was chosen to be high in order to avoid ceiling effects, as students from all clinical years were allowed to participate in the study. Overall test reliability was satisfactory (Cronbach’s α=0.71).
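For illustration, the internal-consistency figure quoted above (Cronbach’s α) can be computed from a participants-by-items score matrix as in the generic sketch below (not the authors’ analysis code; the toy data are invented).

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) matrix of item scores."""
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item across participants
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the participants' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy example: 5 participants x 4 dichotomously scored items (invented data)
toy = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
])
print(round(cronbach_alpha(toy), 2))
```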

Figure 3. Knowledge application test. Exemplary items are shown for each of the knowledge types addressed (arrows point to the correct answers). The test included 11 items on conceptual knowledge, nine items on strategic knowledge and nine items on conditional knowledge. BMI, body mass index; BP, blood pressure; EMS, emergency medical service; HR, heart rate; PE, physical examination; RR, respiratory rate; SpO2, oxygen saturation; T, temperature.

Subjective learning outcomes were measured at T_1 with a short questionnaire consisting of nine items (eg, ‘I learnt a lot during the CCD course’, ‘The CCD course increased my learning motivation’ or ‘I recommend the implementation of the CCD teaching format into the curriculum’; the full questionnaire is available as an online supplementary file ). Participants were asked to rate these items on a Likert scale ranging from 1 (I don’t agree) to 5 (I fully agree). Reliability of the corresponding scale was good (Cronbach’s α=0.95). Additionally, study participants were asked to share their views on positive and negative aspects of the respective training format through open items at the end of the questionnaire.

Supplemental material

Statistical analysis

The required sample size (N=128) was estimated to detect medium effect sizes with a power of 80% and a significance level of α=0.05. For between-group analyses, one-way analyses of variance were conducted with post hoc Bonferroni tests for multiple comparisons.
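The sketch below shows how this kind of a priori sample size estimate and the between-group analysis could be reproduced with standard Python tooling (an assumed workflow, not the authors’ scripts; the sample size returned depends on the effect-size convention chosen, and the group data are simulated from the reported means and SDs rather than taken from the study).

```python
from itertools import combinations
import numpy as np
from scipy import stats
from statsmodels.stats.power import FTestAnovaPower
from statsmodels.stats.multitest import multipletests

# A priori sample size for a one-way ANOVA with three groups and a "medium" effect.
# Cohen's f = 0.25 is one common convention for "medium"; the resulting N depends on
# that convention, so it need not match the figure reported in the paper.
n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05, power=0.80, k_groups=3)
print(f"estimated total N: {n_total:.0f}")

# Group scores simulated from the reported post-test means and SDs (not the raw study data).
rng = np.random.default_rng(0)
groups = {
    "Live-CCD":    rng.normal(14.1, 3.3, 30),
    "Video-CCD":   rng.normal(11.7, 3.3, 27),
    "Paper-Cases": rng.normal(8.5, 2.4, 33),
}

# One-way ANOVA followed by Bonferroni-corrected pairwise t-tests.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for (a, b), p in zip(pairs, p_adj):
    print(f"{a} vs {b}: Bonferroni-adjusted p = {p:.4g}")
```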

Effects of the CCD format on learning outcomes related to clinical reasoning

Experimental groups differed significantly with respect to the knowledge application post-test (see table 1 ), F(2,87)=27.07, p<0.001, partial η 2 =0.384. The Live-CCD group (M=14.10; SD=3.32) outperformed both the Video-CCD (M=11.69; SD=3.34) and the Paper-Cases group (M=8.50; SD=2.44). Post hoc Bonferroni tests revealed significant differences between Live-CCD and Video-CCD (p=0.011) as well as the Paper-Cases group (p<0.001). The difference in the knowledge application post-test between Video-CCD and the Paper-Cases group was also significant (p<0.001).
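As a quick consistency check, the reported partial η² can be recovered from the F statistic and its degrees of freedom: partial η² = (F × df_effect) / (F × df_effect + df_error) = (27.07 × 2) / (27.07 × 2 + 87) ≈ 0.384, which matches the value given above.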

Table 1. Overview of the findings of the study

Two weeks after course completion, the effect of the teaching format was still found in a delayed knowledge application post-test, F(2,87)=30.91, p<0.001, partial η 2 =0.415. Both Live-CCD (M=13.36; SD=3.23) and Video-CCD (M=11.84; SD=2.92) outperformed the Paper-Cases group (M=7.89; SD=2.41). Post hoc Bonferroni tests revealed significant differences between the Live-CCD and Paper-Cases group (p<0.001) as well as between the Video-CCD and Paper-Cases group (p<0.001). However, the difference between Live-CCD and Video-CCD was not significant in the delayed knowledge application post-test (p=0.146).

Effects of the CCD format on subjective learning outcomes

Experimental groups differed significantly with respect to subjective learning outcomes (see table 1), F(2,85)=13.16, p<0.001, partial η 2 =0.236. Participants of the Live-CCD group (M=4.20; SD=0.63) assigned better ratings to their course format than participants in the Video-CCD group (M=3.18; SD=1.24) and the Paper-Cases group (M=3.00; SD=0.99). Post hoc Bonferroni tests showed that the Live-CCD differed in this regard from both the Video-CCD (p=0.001) and the Paper-Cases group (p<0.001). An additional Duncan post hoc test confirmed that the Video-CCD and the Paper-Cases group did not differ from each other (p=0.48).

To investigate the relations between the subjective assessment and the knowledge application tests applied at the end and 2 weeks after the course, we calculated correlations between the different outcome measures. Subjective learning outcomes correlated on a medium level with both the knowledge application post-test (r=0.343, n=88, p=0.001) and the delayed knowledge application post-test (r=0.339, n=88, p=0.001).
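Reproducing such a correlation on any paired set of scores is straightforward; the sketch below uses scipy on invented arrays (hypothetical data, purely for illustration).

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements: each participant's mean subjective rating (1-5 Likert)
# and knowledge application post-test score (invented values, not the study data).
subjective = np.array([4.2, 3.8, 2.9, 4.5, 3.1, 4.0, 2.5, 3.6])
post_test = np.array([14.0, 12.0, 9.0, 15.0, 10.0, 13.0, 8.0, 11.0])

r, p = stats.pearsonr(subjective, post_test)
print(f"r = {r:.3f}, p = {p:.3f}")
```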

In the Live-CCD group, 83% of the students were in favour of implementing routine Live-CCDs into the medical curriculum. Only 45% and 31% of students from the Video-CCD and Paper-Cases groups, respectively, voted for an implementation of their course in the curriculum. With respect to the open items from the subjective learning outcomes questionnaire, participants from all groups praised the quality of the cases. Participants from the Live-CCD group particularly valued their course format for providing an opportunity to practise ‘diagnostic thinking’ and for its ‘focus on practice elements’. They also mentioned that ‘you can look up theoretical knowledge, but you cannot look up applied knowledge’. Students in the Video-CCD group, on the other hand, praised features of the digital learning environment, as they could ‘pause, reflect or quickly do a Google search’ when watching the case discussions. However, they also criticised that it was not possible for them to ‘participate in a more active way’.

This randomised controlled study shows that even relatively short CCD interventions can lead to improved and sustainable learning outcomes with respect to clinical reasoning. This provides evidence that the CCD approach, which is based on CPCs, is an effective teaching resource for fostering clinical reasoning skills in medical students. We had hypothesised that a more interactive course format would result in a greater improvement of clinical reasoning skills than less interactive formats. Consistent with this hypothesis, clinical reasoning skills, as measured with our knowledge application test, showed the greatest gain in the Live-CCD group. These positive effects of the CCD teaching format on clinical reasoning skills proved sustainable, as shown by the results of the delayed knowledge application post-test. Overall, these results are in line with a recently published study on diagnostic reasoning 31 in which students who worked in pairs were more accurate in their diagnoses than individual students despite having comparable knowledge. Collaborative clinical reasoning has thus far been under-represented in the literature and yet seems to address many of the educational problems regarding diagnostic errors. 32

The significant difference between the Live-CCD and the Video-CCD group can be explained by the findings of a meta-analysis that showed technology-assisted single-person learning to be inferior to group learning because of the decreased social interaction. 33 However, it is important to note that 2 weeks after the course, participants of the Live-CCD and Video-CCD groups no longer differed significantly, while both groups still clearly outperformed the Paper-Cases group. In other words, watching a video of the live case discussion was more beneficial for learners’ clinical reasoning skills than just reading the printed cases. We cannot rule out that the absence of a difference between the Live-CCD and Video-CCD groups in the delayed knowledge application post-test was due to underpowering of the study. As our trial was not designed to detect smaller effect sizes, this finding has to be treated with caution. Subjective learning outcomes suggest that students prefer the live discussion over the other formats. The subjective assessment correlated with the students’ performance in both knowledge application post-tests. Additional qualitative data from the open item answers suggest that the Live-CCD format supported students in performing clinical reasoning and that the active discussion of cases was particularly valued by the students.

Generalisability

The conclusions of this study are applicable to a broader audience of medical students. The CCD approach and its respective formats can easily be implemented in routine medical education. Peer teaching courses hold the promise of being easier to set up and staff than courses led by faculty. Of course, Live-CCDs still come with certain personnel requirements, as faculty as well as a moderator need to be present. Extensive preparation was not necessary for the clinicians involved, though, as they served as facilitators and provided guidance only when they were explicitly asked for their clinical judgement or when they felt that the discussion went astray. Total time requirements might still be lower than for other teaching formats. Likewise, the implementation of a single 2-day training for moderators should not require extensive resources. The study population, consisting of students with heterogeneous levels of clinical experience, suggests that the CCD is an effective teaching format not only for students at the beginning of their clinical training but also for intermediate students. Generalisability is potentially limited as only students from one medical school participated in our study.

Limitations of the study

There are certain limitations of this study that have to be addressed. One important limitation is the single-centre nature of this study and the relatively small sample size. Before the CCD approach can be implemented on a larger scale, a validation of our findings is therefore required. Caution is clearly warranted with the effect sizes shown in this trial, as it has been shown that effect sizes of learning intervention trials tend to be inflated compared with the effectiveness of the intervention when used in routine education. 34 Since we did not limit the time students had to work on the cases, we cannot entirely rule out that less time was spent on task in the single-learner formats, particularly in the Paper-Cases group. Against this backdrop, we suggest replication to further validate the results found in this study and strengthen the outlined implications. The knowledge application test utilised in this study did not allow for a more in-depth analysis of clinical reasoning skills (ie, a distinction of conceptual, strategic and conditional knowledge). Larger item numbers could facilitate a reliable assessment of changes on the level of the corresponding subscales. Finally, we cannot relate the underlying reasoning process to the measured knowledge gains. Further studies on the clinical reasoning processes of individuals and groups are methodologically challenging but urgently needed for the advancement of a model of clinical reasoning and for improving the teaching of clinical reasoning. 35

Future research questions

Based on our findings, the CCD approach is a useful asset for medical educators to widen the range of clinical reasoning teaching tools. Live-CCD can thus be seen as a prime candidate for routine implementation in clinical reasoning curricula. Future research should aim to identify which Live-CCD elements (roles, case contents or course structure) contribute, and in which way, to the improvement of clinical reasoning skills in medical students. The question of whether and to what extent such skills are applicable across domains is currently being discussed. 36 Future studies may also address the issue of transfer (ie, to what extent can clinical reasoning skills obtained in case-based training later be applied to different cases?). 37 Regarding the Video-CCD, means of instructional support to increase the effectiveness and interactivity of the video-based format should be investigated in an attempt to exploit its full potential.

Acknowledgments

The authors thank Johanna Huber and her team for technical support with the evaluation, Thomas Brendel and Thomas Bischoff for help with the video production and Mark S Pecker for critical reading of our manuscript and valuable suggestions. The authors also thank the CCD student discussants and moderators for their contributions. We wish to sincerely address our gratitude to the CCD team for organisational support with the study: Nora Koenemann, Simone Reichert, Sandra Petrenz, Fabian Haak, Bjoern Stolte, Simon Berhe, Bastian Brandt and Thomas Lautz. Marc Weidenbusch wishes to express special thanks to Bernd Gansbacher for introduction to CCDs.


MW and BL are joint first authors.

Contributors MW, BL, MRF and JMZ planned the study. MW, BL and CS were responsible for data acquisition. MW, BL, MS, RK, JK, MRF and JMZ analysed and interpreted the data. MW, BL and JMZ drafted and revised the manuscript. All authors contributed to the significant intellectual content and gave final approval of the version to be published.

Funding This work was supported by the German Federal Ministry of Education and Research (grant no. 01PL12016) and an intramural grant of the Medical Faculty of the University of Munich (Lehre@LMU).

Competing interests None declared.

Patient consent for publication Not required.

Provenance and peer review Not commissioned; externally peer reviewed.

Data availability statement Data are available upon reasonable request.


Online case-based learning in medical education: a scoping review

Affiliations

  • 1 School of Medicine and Dentistry, Griffith University, Sunshine Coast Health Institute, 6 Doherty St, Birtinya, Qld, 4575, Australia. [email protected].
  • 2 School of Health, University of the Sunshine Coast, 90 Sippy Downs Drive, Sippy Downs, Qld, 4556, Australia. [email protected].
  • 3 Department of Cellular and Physiological Sciences, Faculty of Medicine, University of British Columbia, 2350 Health Sciences Mall, Vancouver, BC, V6T 1Z3, Canada.
  • 4 Division of Medical Sciences, University of British Columbia, 3333 University Way, Prince George, BC, V2N 4Z9, Canada.
  • PMID: 37559108
  • PMCID: PMC10413534
  • DOI: 10.1186/s12909-023-04520-w

Background: Case-Based Learning (CBL) in medical education is a teaching approach that engages students as learners through active learning in small, collaborative groups to solve cases from clinical patients. Due to the challenges posed by the COVID-19 pandemic, small-group learning such as CBL transitioned quickly to include technology-enhanced learning to enable distance delivery, with little information on how to apply pedagogical frameworks and use learning theories to design and deliver online content.

Methods: To extend understanding of online CBL, a scoping review following the PRISMA-ScR framework explored the literature describing the use of online CBL in medical education and the associated outcomes, perceptions, and learning theories. A literature search was conducted in January 2022, followed by a subsequent review in October 2022. After peer review of the search strategy using the PRESS guidelines, the CASP appraisal tool was used to assess the rigor of each study design.

Results: The scoping review identified literature published between 2010 and 2022 (n = 13 articles) on online CBL in the field of medical education, comprising 11 observational studies describing student and facilitator perceptions and two randomized controlled studies. Positive perceptions of online learning included a flexible work-life balance, connection with learners, and improved accessibility. Negative experiences of online CBL included poor internet access, a distracting learning environment, and loss of communication. In the studies that collected student performance data, results showed equivalent or improved outcomes compared to the control. The CASP appraisal tool highlighted deficiencies in most study designs, a lack of framework or learning theory, and poor reproducibility of the methods used to answer the research questions.

Conclusion: This scoping review identified literature describing the academic outcomes and the student and facilitator perceptions of online CBL in medical education. However, the CASP tool uncovered deficiencies in study descriptions and design, leading to poor-quality evidence in this area. The authors provide recommendations for frameworks and learning theories for the future implementation of online CBL.

Keywords: Case Based Learning; Medicine; Online delivery.

© 2023. BioMed Central Ltd., part of Springer Nature.

Publication types

  • Systematic Review
  • Education, Distance*
  • Education, Medical*
  • Reproducibility of Results

Why Not Treat Patients the Same Way We Teach Medical Students?

— a case-based approach to problems in living and social determinants of health.

by Arthur Lazarus, MD, MBA March 23, 2024


"Problems in living" is a broad term that refers to difficulties people may experience in managing or coping with various aspects of activities related to daily living. These problems can be psychological, emotional, social, or practical in nature, and may have a significant impact on a person's well-being and quality of life. Typical examples include relationship problems, financial hardship, occupational stress, and social isolation.

"Problems in living" can be closely related to social determinants of health. The World Health Organization (WHO) defines social determinants of health as the conditions in which people are born, grow, live, work, and age, including the health system. These circumstances are shaped by the distribution of money, power, and resources at global, national, and local levels.

For instance, individuals living in poverty (a social determinant) may face numerous "problems in living," such as lack of access to quality healthcare, lower educational attainment, inadequate housing, and food insecurity, which can all have significant impacts on their physical and mental health.

Similarly, social determinants, like discrimination, social exclusion, and stressful work conditions can also lead to "problems in living," such as mental health issues, substance use, and chronic health conditions. In this way, the social determinants of health can both contribute to, and exacerbate, a wide range of "problems in living."

Therefore, addressing problems in living and social determinants of health is an important part of improving overall health outcomes and reducing health disparities in a population.

Here are a few specific examples of the ways the interaction between "problems in living" and social determinants of health can manifest:

  • Chronic Disease Management: Consider a patient with diabetes who lives in a low-income neighborhood without access to fresh, healthy food (food insecurity). This social determinant of health can exacerbate the patient's diabetes management, leading to poorer health outcomes.
  • Mental Health: A patient suffering from depression may have their condition worsened by unemployment or job insecurity. The stress of financial instability, a social determinant of health, can intensify feelings of hopelessness and anxiety.
  • Substance Use: An individual living in an area with high rates of drug use and crime may be more likely to struggle with a substance use disorder. The neighborhood, a social determinant, can contribute to the initiation and continuation of substance use.
  • Maternal and Child Health: A pregnant woman without access to prenatal care due to lack of transportation or health insurance may face increased risks for complications during pregnancy and delivery. These social determinants can lead to poorer health outcomes for both the mother and baby. CDC statistics indicate that Black women are three times as likely to die from pregnancy-related causes as white women, and their babies are two to three times more likely to die before their first birthday.
  • Elder Care: An older adult patient with limited mobility living alone may struggle with isolation, which can lead to depression and decline in overall health. Social determinants such as living conditions and social support networks can significantly impact the health of older adults.

In each of these examples, addressing the social determinants of health -- such as improving access to healthy food, providing employment support, enhancing community safety, improving access to healthcare, and providing social support for the elderly -- can help alleviate the associated "problems in living" and improve health outcomes.

However, clinicians are often specialized in medicine by disease area. While this approach has its merits, it may not be the most effective way to address the complex interplay between health issues and social determinants such as poverty, education, and living conditions.

In light of this, a shift towards a more comprehensive, case-based method, similar to the pedagogical approach used in teaching medical students, may be beneficial. This approach would involve a holistic evaluation of each patient, considering not only their specific medical conditions but also their social circumstances and lifestyle factors.

Utilizing a case-based approach could lead to more personalized care plans tailored to address the individual's unique set of challenges. This could include linking patients with community resources, providing education on disease management, or making adjustments to treatment plans to account for social determinants of health.

Implementing a case-based approach in the U.S. healthcare system would require several structural changes, such as:

  • Interdisciplinary Teams: This approach would necessitate the formation of interdisciplinary teams composed of various healthcare professionals such as doctors, nurses, social workers, nutritionists, and mental health professionals. These teams would work collaboratively to address the multiple factors affecting a patient's health.
  • Training and Education: There would need to be a shift in medical education and training to emphasize the importance of social determinants of health and holistic patient care. This could be incorporated into continuing education programs for practicing physicians.
  • Data Integration: To enable comprehensive patient assessments, healthcare providers need access to a wide range of data. This includes medical history, socio-economic status, education level, and lifestyle factors. Therefore, the healthcare system would need to improve data collection and sharing across different sectors.
  • Policy Changes: Policies and reimbursement models would need to change to incentivize holistic, preventive care rather than focusing solely on treatment. This could involve payment structures that reward healthcare providers for improved patient outcomes rather than the number of services provided, i.e., value-based healthcare.
  • Community Partnerships: Healthcare providers would need to establish partnerships with community organizations that can address social determinants of health. For example, they could work with food banks to help patients access nutritious food, or flag "frequent flyers" in the emergency department for housing agencies to help patients find safe, affordable housing.
  • Patient Engagement: The healthcare system would need to prioritize patient engagement and empowerment. This could involve providing patients with education and resources to manage their health, and involving them in decision-making processes.

The ultimate goal is to improve patient outcomes by addressing the root causes of health problems, rather than merely treating symptoms. Such a radical shift in focus would require significant collaboration across various disciplines and sectors, including healthcare practitioners, social workers, educators, policymakers, and lawmakers. While these changes would require significant effort and resources, they could lead to improved health outcomes, reduced healthcare costs, and a more equitable healthcare system.

A case-based approach to medical practice reflects the teaching methods used in medical school and mirrors the ways students learn. Why not model this behavior in practice as well?

Arthur Lazarus, MD, MBA, is a former Doximity fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of Every Story Counts: Exploring Contemporary Practice Through Narrative Medicine and Medicine on Fire: A Narrative Travelogue.

Breathe (Sheff), v.15(3); 2019 Sep

Workplace-based assessment: how to use case-based discussion as a formative assessment

Rob Primhak

1 Assessments Director, European Respiratory Society

Neil Gibson

2 Respiratory Medicine, Royal Hospital for Sick Children Yorkhill, Glasgow, UK

Associated Data

Please note: supplementary material is not edited by the Editorial Office, and is uploaded as it has been supplied by the author.

Case-based discussion form EDU-0209-2019_Supplementary_material

Workplace-based assessments are increasingly used as a way of gaining insight into clinician performance in real-life situations. Although some can be used to inform a summative (pass/fail) assessment, many have a much greater role in the formative assessment of trainees, and can be used as tools for teaching and training and for identifying trainees' development needs. There is considerable variation between different European countries in the use of formative, workplace-based assessment, such as a structured case-based discussion (CbD), during training. This article gives an overview of how to use CbD as a formative assessment for higher specialist trainees, and gives access to a downloadable record form which can be used by trainers.

Short abstract

Case-based discussion is a structured method of formative assessment which has been found to be valuable by both trainees and supervisors. This article describes the process, and offers a proforma for supervisors who have no access to this facility. http://bit.ly/2HYkOVJ

Introduction

The HERMES (Harmonising Education in Respiratory Medicine for European Specialists) project was launched by the European Respiratory Society in 2005, with the aim of promoting harmonised education and training in respiratory medicine for European specialists [ 1 ]. Acceptable methods of assessing trainees in clinical competencies, including skills, attitudes and behaviours, were outlined by the paediatric HERMES task force in 2009, when they suggested an “assessment toolbox”, a series of tools which could be used to assess trainees in the workplace [ 2 ]. However, while a few countries in Europe use these workplace-based assessment tools routinely in training, many others still have no access to such methods, and no national requirements to use them. A recent needs assessment suggested that there was considerable enthusiasm among respiratory trainers to gain access to some of these assessment tools, and training in how to use them. This article explains the use of case-based discussion (CbD), and outlines its potential benefits to trainees and trainers.

What is formative assessment?

All medical graduates are used to summative assessment: the final examinations at the end of medical school, postgraduate examinations or an oral defence of a research thesis. The objective of a summative examination is a simple, usually binary, outcome: pass or fail. In clinical practice the question is whether the trainee is competent to progress to the next stage of their career. Formative assessments are more of a training tool, used to identify the strengths of a trainee and, more importantly, the areas in which they need to improve their performance and develop their skills. They fulfil a teaching function in which the trainee is an active participant, but they can also be used to feed into an assessment of competency. While these types of assessments are probably happening informally during many of the normal working interactions between the supervisor and trainee, they can be much more useful if they are formalised and there is some record of the interaction. Formative assessment is most useful when it is a direct assessment of real-life functioning in the workplace, i.e. workplace-based assessment (WBA).

The conventional model of different levels of assessment is Miller's pyramid (figure 1), in which the lowest level is factual knowledge (“knows”), followed by integrated knowledge (“knows how”), then “shows how”, demonstrating competence in a simulated situation, and finally “does” [ 3 ]. It is this final level which we attempt to assess in WBA, exploring the way in which a clinician performs in normal practice. The advantage of this type of assessment is that it can take into account knowledge, skills and attitudes, and gives a realistic picture of actual performance, so it has high validity. The disadvantage is that it is less reproducible than a simple cognitive assessment such as a multiple-choice examination and introduces the subjectivity of the assessor.

Figure 1. Miller's pyramid and prism of assessment. Reproduced and modified from [ 3 ] and [ 4 ], with permission from the publisher. DOPS: direct observation of procedural skill; OSCE: objective structured clinical examination; MCQ: multiple-choice question.

In most countries, to enter a programme of higher specialist training the trainee will have gone through a series of summative assessments, including entry to medical school, completion of medical training, performance during foundation training after graduation, and often postgraduate examinations before going through a selection process for the specialist training programme. We can therefore assume that almost all trainees have the ability to become specialists, given the appropriate training and a reasonable amount of effort on their part. Formative assessment is not primarily concerned with detection of the “failing” trainee (although it can perform that function); it is aimed at ensuring that the trainee is helped to maximise their potential and to broaden their skills and experience, so that they emerge from training with as few gaps in their clinical competencies as possible.

What is case-based discussion?

Almost everyone involved in higher specialist training discusses cases with their trainees as part of the training process. Informally, this will be done during or after a ward round, or in an outpatient clinic: the trainee presents a summary of the case to their supervisor, who critiques and approves or adjusts the decision-making. It is often an opportunity to teach at the same time. There may be departmental meetings in which trainees present a case (often selected for clinical interest or rarity) for discussion with their colleagues and supervisors. How does CbD differ from these familiar formats?

CbD builds on these traditional methods, by identifying a period of protected time to carry out a focused, private, one-to-one discussion of a case with the trainee, identifying the trainee's strengths, and also making suggestions for development or further learning. The encounter should normally be scheduled to last 20–30 min, and will usually focus on one or two particular aspects of the case, e.g. clinical reasoning, management planning or communication.

How is it done?

The case is usually chosen by the trainee, but the supervisor can select a case if he or she is aware of an issue which would benefit from discussion. It should be a case seen recently (in the past week), and it should be one in which the trainee felt there was uncertainty, or a conflict in decision making. If the schedules permit it, the meeting can take place after a clinic or ward round, but can also occur at a pre-planned time, with a case chosen from the recent workload. It should take place in a private, protected environment, since the discussion needs to be free from constraints. The trainee brings the case notes, and presents a summary of the case, and the supervisor and trainee agree an area to focus on. The supervisor should try to explore the trainee's thinking and decision making, rather than giving a didactic tutorial. Prompts should encourage reflection and should be open and gently probing questions such as: “What diagnoses did you consider, and how did you reach the one you did?”, “What factors did you take into account in deciding on the treatment?”, “How did you feel the communication went?”, and perhaps “What might you do differently on reflection?”. The supervisor should avoid knowledge-based questions like: “What is the commonest cause of…?”, or “What is the most important side-effect of…”.

At the end of the session, the supervisor should spend a few minutes giving feedback to the trainee about what was done well and what might have been done better or differently. It is then important to suggest and then agree what might be useful actions for learning or development. It is often helpful to start this discussion by asking the trainee for their views on what they did well and what they were less happy with, as they are often aware of their own development needs.

The supervisor finally records the encounter on a structured form, which documents the seniority of the trainee, the setting (inpatient, outpatient, etc. ), and the complexity of the case. The supervisor writes down the strengths and the suggestions for development, and finally gives an overall rating of the trainee's competence, based on this case discussion. Based on the concept of “entrustable professional activities” [ 5 ], this might involve an assessment of the level of supervision needed to manage a similar case in future (see table 1 ). The record is signed by both parties, and a copy is kept by both. The trainee can keep the record in their portfolio, and the trainee's educational supervisor should keep a copy. If the supervisor performing the CbD is not the normal educational supervisor, they should send the copy to the normal supervisor.
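For units that record CbDs electronically rather than on paper, the fields described above map naturally onto a simple data structure. The sketch below is purely illustrative: the field names are hypothetical and are not taken from any official proforma.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch of the fields a structured CbD record might capture,
# following the description above; field names are hypothetical.
@dataclass
class CbDRecord:
    trainee_seniority: str                  # e.g. training grade or year
    setting: str                            # inpatient, outpatient, etc.
    case_complexity: str                    # e.g. "low", "moderate", "high"
    focus_areas: List[str]                  # e.g. ["clinical reasoning", "communication"]
    strengths: List[str]                    # what was done well
    suggestions_for_development: List[str]  # agreed, specific action points
    supervision_level: str                  # entrustment-style rating (cf. table 1)
    signed_by_trainee: bool = False
    signed_by_supervisor: bool = False
```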

Table 1

An example of a supervisor's subjective rating of the level of trust

Giving feedback

Many trainees are excessively self-critical, and will often focus on the negative aspects of the feedback. It is helpful to start with the strengths (what was done well) before dealing with any weaker areas, which can be referred to more positively as “areas for development”. It is a good idea to conclude by reminding the trainee of the strengths, so they leave in a positive frame of mind. This “feedback sandwich” approach does not mean that the supervisor should not be rigorous in identifying areas for improvement. Research has shown that trainees value the CbD process most when the supervisor has identified these areas, and agreed specific action points for the trainee to pursue [6, 7]. Suggestions for development should be specific, realistic and measurable. “Learn more about asthma” is too vague to be helpful, whereas “update myself on the recommendations for biological agents in asthma” or “familiarise myself with the current BTS/SIGN/GINA guidelines on asthma” might be appropriate.

Why should we bother?

First, this form of formalised CbD is highly valued by trainees as a learning process [ 6 – 8 ], thus functioning as a formative assessment. It is also regarded as a valuable teaching process by most supervisors [ 8 ]. Importantly, the degree to which the trainees value the process is dependent on the supervisors' commitment to it, and their understanding of the need to give constructive and specific feedback.

Secondly, it allows the trainee and supervisor to have a record of strengths, weaknesses and level of performance, which can identify if there are consistent deficits or flaws which need to be addressed.

Thirdly, when training programmes move towards competency-based summative assessments it can be used by the trainee as evidence of competence: if the most recent CbDs are all at the level of trusting the trainee without supervision, then this is evidence that they are ready for independent practice.

Who should do it?

Ideally, it is most useful if a number of supervisors perform CbDs with the same trainee at different times. This allows more objectivity in the assessment component of the process, and may allow the detection of a consistent fault in the trainee which cannot be blamed on the likes and dislikes of a single observer. Obviously, this will depend on the number of supervisors available in any training institution.

Who should initiate it, and how often?

In most competency-based training programmes that use WBA, the trainee has a requirement to have completed a certain number of CbDs in each module, covering a broad range of case types and areas of the training curriculum. In this situation, the trainee is usually the initiator. However, a supervisor can trigger a CbD, especially if they feel that a trainee demonstrated a developmental need during an informal case review. Where there is no existing national training requirement, the introduction of CbD will probably be an individual decision by a supervisor or a training centre, and there should be some clear agreement from the outset about the frequency of CbDs and the responsibility for initiating them. The simplest way to ensure that they occur is to inform the trainee that they need to do a certain number to be signed off by the supervisors for that training period!

Time constraints are the usual factor limiting the numbers of CbDs being undertaken. One session of 20–30 min every 2 or 3 weeks might be an achievable goal. Of course, the informal case discussions will continue as before; this form of structured CbD is an additional tool which can help to structure and document training and identify developmental needs.

The proforma for recording a CbD (see the online supplementary material) is also available for download at www.ers-education.org/cbd; the downloadable form can be used as a printed form or an electronic record. It is recommended that before introducing CbD to a training unit, at least one of the supervisors should have undergone some training in the process, and in particular in how to deliver effective feedback.

Supplementary material

This article has supplementary material available from breathe.ersjournals.com

Conflict of interest: R. Primhak has nothing to disclose.

Conflict of interest: N. Gibson has nothing to disclose.

  • Open access
  • Published: 26 March 2024

Continuous training based on the needs of operating room nurses using web application: a new approach to improve their knowledge

  • R. Khorammakan 1 ,
  • S. H. Roudbari 2 ,
  • A. Omid 3 ,
  • V. S. Anoosheh 4 ,
  • A. N. Arabkhazaei 5 ,
  • A. Z. Arabkhazaei 6 ,
  • J. Khalili 7 ,
  • H. Belyad Chaldashti 8 &
  • A. Ghadami 9  

BMC Medical Education, volume 24, Article number: 342 (2024)


Introduction

Since university education and intensive but limited pre-service training do not bring operating room nurses to an acceptable level of performance of their duties, and considering the limitations of traditional training methods in the operating room field, this study was conducted to determine the effect of a web application-based, leveled, personalized, needs-based electronic education approach on nurses' level of knowledge and satisfaction.

Materials and methods

This research was a quasi-experimental, single-group, multi-center pre-test/post-test study comprising four stages: educational needs assessment, educational content design, web application design for the training and evaluation of operating room nurses, and determination of the effectiveness of this method on their knowledge and satisfaction. During this period, 36 operating room nurses who met the study criteria were included by stratified random sampling based on the calculated sample size. Data were collected with four-choice tests measuring operating room nurses' knowledge of heart anatomy (score range 0–20), the principles of moving, transferring and positioning the patient in the operating room (score range 0–15) and the principles of ergonomics in the operating room (score range 0–10), together with a satisfaction questionnaire (score range 0–28). Data were analyzed in SPSS version 16 using descriptive statistics (frequency, frequency percentage, mean and standard deviation) and analytical tests (paired-samples t-test, independent-samples t-test, ANOVA, Pearson correlation and chi-square).

Overall, the mean knowledge scores of operating room nurses before and after the intervention were 5.96 ± 3.96 vs. 13.6 ± 3.77. In the course on the principles of moving, transferring and positioning the patient in the operating room the scores were 6.3 ± 3.42 vs. 13.3 ± 1.32, in heart anatomy 8.7 ± 3.97 vs. 18.1 ± 1.07, and in the principles of ergonomics in the operating room 2.6 ± 1.57 vs. 9.1 ± 0.73; mean knowledge scores after the intervention were significantly higher than before the intervention (P < 0.001). The mean satisfaction score of the nurses was 21.3 ± 5.83, and 22 nurses (64.7%) were satisfied with the e-learning course.

The use of a web application-based, leveled, personalized, needs-based electronic education approach improved the knowledge and satisfaction of operating room nurses. E-learning can be used as a complementary educational tool and method for the continuous training of operating room nurses in other specialized fields of the operating room and surgery.

• Educational content, in the form of educational videos taught by professors of medical sciences universities, was designed at three levels (basic, intermediate and advanced) for each topic: heart anatomy (28 episodes of 5–10 minutes), principles of ergonomics in the operating room (7 episodes of 5–25 minutes) and principles of moving, transferring and positioning the patient in the operating room (16 episodes of 10–20 minutes).

• The results of this study showed that a web application-based, leveled, personalized, needs-based electronic education approach improved the knowledge of operating room nurses, and the nurses were satisfied with the electronic training courses. E-learning can be used as a complementary educational tool and method for the continuous training of operating room nurses in other specialized fields of the operating room and surgery.

• Based on the results of this study, an electronic education approach built around the needs of operating room nurses can be used as a complementary tool to conventional continuous education. Because it is interactive, personalized, leveled and asynchronous, and can be used at any time and place on a laptop, tablet or mobile phone, a wide range of operating room nurses in the hospitals of the Islamic Republic of Iran could use it, helping to extend educational justice across the country. However, further studies are needed to evaluate the generalizability of the approach and the effect of e-learning on the clinical skills of operating room nurses, and to compare e-learning with other educational methods and tools in terms of learners' knowledge and skills and how well the learned material is consolidated in their memory.


The operating room is a very complex environment and system in which the caregiver, the patient and the technology are brought together in one physical space in order to achieve the desired results for patients [1, 2]. In today's complex and rapidly changing world, the continuity and survival of organizations depend on balancing the development of human resources, methods and technologies with adaptation to change and innovation. According to the World Health Organization, the performance of any health system depends on a combination of the skills, availability and performance of its human resources, and the scientific and practical abilities of personnel in various fields have a direct and significant effect on their own safety, on patient safety and on the quality of the care provided [3, 4]. Most advanced countries have therefore recognized human resources as vital, strategic and productive assets, and prepare and implement various programs to strengthen their staff's knowledge, skills and abilities.

Today, one of the basic measures that makes organizations efficient is the creation, acquisition and continuous development of human resources through training and improvement programs, which increase the value of the individual at the individual level, improve and develop the organization at the organizational level, and increase productivity at the national and even transnational level [5, 6]. Since formal academic education and intensive but limited pre-service training do not sufficiently prepare hospital staff to perform their duties in the clinical environment, implementing training programs has become all the more necessary [7]. Because each person has unique characteristics, and ways of learning skills and learning needs also differ, the first and most basic step in education is to examine educational needs.

Examining educational needs is a process in which needs are identified, prioritized and acted upon [8, 9]. It provides a basis for preparing specific educational content and setting goals, and thus a suitable platform for organizing other important elements around the prioritized needs; by preventing rework, it ultimately increases the effectiveness and efficiency of human resources, reduces waste, develops knowledge and skills, and increases job satisfaction and staff motivation [5, 8, 10–19]. In the study of Qalaei et al. (2013), conducted to determine the effectiveness of in-service training courses for nurses in medical centers affiliated to the Social Security Organization, the results showed that the lack of a correct needs assessment led to a mismatch between training programs and the training needs of nurses [7]. In the study of Mazoji et al. (2015), nurses working in eye surgery operating rooms were found to need training, information and refresher courses on drug information, the nature of surgery and, in particular, eye surgery techniques [20].

The most widespread method of continuous training for medical personnel is face-to-face training, but many studies have shown that this traditional method has important limitations, such as failing to recognize learners' needs and individual differences and neglecting higher-order cognitive skills such as problem solving and creative thinking [21]. As a result, many researchers have emphasized that traditional educational methods need to be supplemented and modified by modern ones [21, 22]. Electronic education has been proposed as one such complementary method [21, 23, 24, 25, 26]: researchers have concluded that, by overcoming the time and place limitations of traditional education and enabling lifelong learning tailored to each individual's circumstances, e-learning increasingly provides easy access to education and can be a suitable complement to traditional education [21, 27–43]. In the operating room field, e-learning has also become popular because it can provide cost-effective training at any time and place without endangering patient safety, allows educational materials to be shared and updated easily, supports different learning styles according to each learner's needs and abilities, and lets each learner adjust the pace of learning to their own characteristics [21, 44–53]. Various studies have shown that the use of electronic education methods in the nursing profession has increased the satisfaction and professional development of hospital nurses [54].

Since academic education and intensive but limited pre-service training do not prepare operating room nurses to perform their duties at an acceptable level, and given the limitations of traditional training methods in the operating room field, there is a growing need to plan, develop and implement electronic training courses matched to individual, occupational and organizational needs, both to keep pace with rapid changes in the health system and to improve professional knowledge and skills. This study therefore asked to what extent web application-based, leveled, asynchronous, personalized, needs-based electronic training on the specialized topics of the operating room profession can improve nurses' specialized knowledge, and whether operating room nurses will be satisfied with this new teaching method.

Study design

This study was a quasi-experimental, single-group, multi-center pre-test/post-test design carried out in four phases: educational needs assessment, educational content design, web application design for the training and evaluation of operating room nurses, and determination of the effect of web application-based training of operating room nurses on their knowledge and satisfaction.

Ethical considerations

First, the code of ethics was obtained from the regional committee for ethics in medical science research. The researcher then explained the study process and its objectives via WhatsApp messenger to each operating room nurse who met the inclusion criteria, after which each of them completed the online informed consent form (in Word file format) and returned it to the researcher via WhatsApp as a Word file.

Sampling method and sample size

According to formula 1, at least 29 volunteer operating room nurses were needed to evaluate the effect of the intervention. Based on the results of a similar study [55] and an estimated 20% attrition, a sample of this size would allow detection of a fairly large effect, a mean difference on the order of 2.62 (d), with 95% confidence (z1), 80% power (z2) and standard deviations of 3.62 for the pre-test (s1) and 3.49 for the post-test (s2); with this sample size, mean differences of 1.57 points could be found.
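Formula 1 is not reproduced in the text, but a standard sample-size formula for comparing two means reproduces the reported figure under these assumptions; the short check below is an illustration, not the authors' calculation.

```python
from math import ceil

# Assumed reconstruction of "formula 1":
#   n = (z_{1-alpha/2} + z_{1-beta})^2 * (s1^2 + s2^2) / d^2
z_alpha = 1.96        # 95% confidence (z1)
z_beta = 0.84         # 80% power (z2)
s1, s2 = 3.62, 3.49   # reported pre- and post-test standard deviations
d = 2.62              # reported detectable mean difference

n = (z_alpha + z_beta) ** 2 * (s1 ** 2 + s2 ** 2) / d ** 2
print(ceil(n))                      # 29 -- the reported minimum sample
print(round(ceil(n) / (1 - 0.20)))  # 36 -- after allowing for ~20% attrition
```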

Study phases

Training needs assessment.

At this stage we drew on our previous study [56], which aimed to determine the training and knowledge-improvement needs of operating room nurses in the Al-Zahra, Amin, Kashani and Chamran hospitals in Isfahan, Iran, in the areas of general and specialized operating room knowledge, the need for counseling to improve motivation and job satisfaction, and the level of need for a web application providing organized, leveled training for operating room nurses. The results showed that, within general and specialized knowledge, nurses most needed training and knowledge improvement in the sub-topics of heart anatomy (95%, 38 people), the principles of ergonomics in the operating room (95%, 38 people) and the principles of moving, transferring and positioning the patient in the operating room (90%, 36 people). In the second phase we therefore designed educational content to train operating room nurses in these topics.

Educational content design

At first, for each training course a panel of 10 experts was formed, consisting of professors from the surgical technology department of the University of Medical Sciences (with at least 5 years of experience teaching theoretical and clinical surgical technology courses and at least a bachelor's degree in surgical technology), operating room nurses with at least 10 years of operating room experience, hospital operating room supervisors with at least 5 years of operating room experience, cardiac surgeons with at least 15 years of experience in heart surgery, professors from the Medical Education Department of the University of Medical Sciences, and professors from the Ergonomics Department (with at least 5 years of teaching experience in ergonomics and at least a master's degree in ergonomics). Over 3 sessions, the panel discussed the theoretical and clinical educational needs of operating room nurses, the problems in their training and the results of phase 1. Based on the expert panel meetings, and using authoritative textbooks on cardiac anatomy, ergonomics and the principles of moving, transferring and positioning the patient in the operating room [57–72], the research team, with the help of the professors, designed the educational content of each course. The content, in the form of educational videos, was then provided to the expert panel for content validation; their opinions on how well the content matched the needs of operating room nurses were collected, and the validity of the educational content of each course was approved by all 10 panel members. The panel's suggestions included improving the quality of the teachers' audio, using a white background to display the educational content, using more images instead of text, and limiting each video to a maximum of 20 minutes to avoid learner fatigue in the virtual environment. Finally, the educational content was produced as videos taught by professors of medical sciences universities for each topic, heart anatomy (28 episodes of 5–10 minutes), principles of ergonomics in the operating room (7 episodes of 5–25 minutes) and principles of moving, transferring and positioning the patient in the operating room (16 episodes of 10–20 minutes), designed at three levels: basic, intermediate and advanced (Fig. 1).

Figure 1. Educational content of heart anatomy: tricuspid valve anatomy.

Web application design for training and evaluation of operating room nurses

At first, an expert team with 3 years of experience in designing, programming and building quality web applications was added to the research team. Over 3 months this team designed an initial, experimental version of the web application, which was given to the expert panel described above to check the validity of its content; the panel members reviewed all panels of the web application over 4 sessions. Their comments included having the admin perform the initial registration of users (creating a username and password for each user), creating and maintaining the security of the online tests in every test of each course, and adding the learning objectives of each course after completion of the placement test. These changes were applied in the web application, and the edited version was again provided to the members of the expert panel and approved by all of them.

The application was designed for the web and contains four panels: training courses in each of the subjects of heart anatomy, principles of ergonomics in the operating room, and principles of moving, transferring and positioning the patient in the operating room; a discussion forum; ‘about us’; and ‘contact us’. Registration and use of the application proceed as follows:

After entering the application link in a browser on a mobile phone, tablet or laptop, the user reaches the login page. A registered user can enter the application by entering their username and password and clicking the login option; a user who has forgotten their password can recover it via the ‘forgotten password’ option. New users complete the registration form (the demographic information questionnaire) by selecting the registration option and entering the username and password created by the admin. After logging in, the user chooses one of the three topics: heart anatomy, principles of ergonomics in the operating room, or principles of moving, transferring and positioning the patient in the operating room. On the next page the user is warned that, if the wrong topic was chosen, it can still be changed via the ‘change the educational topic’ option; once ‘enter the test page’ is clicked, the user goes to the placement test page for that topic, reads the rules of the test and, by clicking the start option, is shown the questions (in four-choice format), after which the educational topic can no longer be changed.

It should be noted that the following measures have been taken to create and maintain the security of all tests (a minimal sketch of the question and option randomization appears after this list):

Questions do not have numbers.

The options for each question do not have numbers.

It is not possible to copy questions and options.

The time to answer each question is about 20 seconds.

The order of displaying questions and the options of each question is random and is different for each user.

Correct answers will not be shown to the users during or at the end of the test.

Access to the educational content of each course and topic is limited during the exam.
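As a concrete illustration of two of these safeguards (per-user randomization of question and option order, and the per-question time limit), a minimal sketch is shown below; it is an assumed implementation, not the application's actual code.

```python
import random

def shuffled_exam(questions, user_id, exam_id):
    """Return an exam with question and option order randomized per user.

    questions: list of dicts like {"stem": str, "options": [str, ...], "answer": int}
    A user-specific seed keeps the order stable for that user but different
    between users (hypothetical seeding scheme).
    """
    rng = random.Random(f"{exam_id}:{user_id}")
    exam = []
    for q_idx in rng.sample(range(len(questions)), len(questions)):
        q = questions[q_idx]
        opt_order = rng.sample(range(len(q["options"])), len(q["options"]))
        exam.append({
            "stem": q["stem"],                                # displayed without a number
            "options": [q["options"][i] for i in opt_order],  # shuffled options, no numbers
            "correct_position": opt_order.index(q["answer"]), # kept server-side, never shown
            "time_limit_s": 20,                               # ~20 seconds per question
        })
    return exam
```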

After completing the placement test, the user's score and level (very poor, poor, average or excellent) are displayed; by clicking the training panel option, the user enters the training course corresponding to that level and is introduced to the course instructor and the educational topics of the different levels (basic, intermediate and advanced). At the end of the placement test in the chosen topic, each user will be in one of the following four situations (a minimal sketch of this progression logic follows the four descriptions below):

A user placed at the very poor level in the placement test enters the basic level training course of the chosen subject and has 14 days to study its content; on the fourteenth day the basic level final exam (a four-choice test) is activated. To reduce missed exams, the user is reminded of the exam time at least twice during the 14-day period via the communication channel chosen in the registration form (WhatsApp, email or Telegram) [73–78]. If the user obtains a passing score, the intermediate level training course of the chosen subject becomes available and, as with the basic level, the user has 14 days to study its content before taking the intermediate level final exam on the 14th day. If that exam is passed successfully, the advanced level training course becomes available, with a further 14 days to study the advanced level content before the final exam of the chosen subject on the 14th day. On passing the advanced level exam, the user is taken to the satisfaction questionnaire page and, after completing it, receives a course completion certificate.

A user placed at the poor level in the placement test enters the intermediate level training course of the chosen subject, has 14 days to study its content and takes the intermediate level final exam on the 14th day. If a passing grade is achieved, the user enters the advanced level training course of the chosen subject, has 14 days to study the advanced level content and takes the final exam of the chosen subject on the fourteenth day; if a passing grade is achieved, the user is taken to the satisfaction questionnaire page and, after completing it, receives a course completion certificate. If desired, the user can also access the basic level educational content of the selected subject through the ‘training courses’ option in the user panel and take the basic level final exam.

A user placed at the average level in the placement test enters the advanced level training course of the selected topic, has 14 days to study the advanced level content and takes the final test of the selected topic on the 14th day. If a passing grade is achieved, the user is taken to the satisfaction questionnaire page and, after completing it, receives a course completion certificate. If the user wishes, they can also access the basic and intermediate level educational content of the chosen subject and take the corresponding end-of-level exams.

A user placed at the excellent level does not need to take the training courses designed by our team, as the educational content of the system would not be expected to improve their knowledge; however, if they wish, they can access the educational content of all levels of the chosen subject and take the end-of-level exams and the final exam.
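A minimal sketch of this placement-to-progression logic is given below; it is an assumption based on the description above, not the application's actual code.

```python
# Placement level decides the first course level; each passed end-of-level
# exam (taken after 14 days of study) unlocks the next level.
LEVEL_SEQUENCE = ["basic", "intermediate", "advanced"]

START_LEVEL = {
    "very poor": "basic",
    "poor": "intermediate",
    "average": "advanced",
    "excellent": None,   # exempt: course content is optional for this user
}

def course_plan(placement_level):
    """Return the ordered list of course levels the user must complete."""
    start = START_LEVEL[placement_level]
    if start is None:
        return []
    return LEVEL_SEQUENCE[LEVEL_SEQUENCE.index(start):]

# e.g. course_plan("poor") -> ["intermediate", "advanced"]
```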

In the user panel section, which is displayed and accessible to the user after completing the placement test, the user will see the options ‘contact us’, ‘about us’, ‘discussion forum’, ‘my training panel’, ‘training courses’ and ‘exit’.

The ‘contact us’ page lists ways to communicate with the application admin and the study team, through which any user can raise questions and problems encountered while using the application.

On the ‘About Us’ page, the user is introduced to the goals, vision and mission of our team in designing the web application and its training courses, as well as to the web application design team and its supporters.

On the forum page, the user can exchange opinions and discuss with the instructor of the selected course and other users of the same level in the training course of the selected subject through text messages.

The user can be quickly transferred to the page of his educational content by clicking on the ‘My Education Panel’ option.

Determining the effectiveness of web application-based training of operating room nurses on their knowledge and satisfaction

At this stage, 36 operating room nurses working in the four selected hospitals in Isfahan, Iran, who met the study criteria were selected by stratified random sampling. After the researcher fully explained the study process and its objectives via WhatsApp and the online informed consent form (in Word file format) was completed, the study began. The inclusion criteria were: at least 3 months of work experience in the operating room; current employment in the operating room of one of the studied hospitals; consent to participate in the study; at least an associate degree; access to a computer, tablet or mobile phone able to connect to the internet; and an internet connection fast enough to use the web application. The exclusion criteria were: unwillingness to continue participating at any stage of the research for any reason; failing to reach the passing score in any of the selected topic tests more than twice; not taking any of the selected topic tests; and not completing, or only partially completing, the satisfaction questionnaire. In coordination with the officials of the studied hospitals, a guide (provided as a PDF file) to registering in and entering the web application and participating in the training courses was sent to the operating room nurses via WhatsApp messenger. The nurses then registered in the web application and took part in one of the uploaded training courses according to their needs and interests, and their progress in each course was followed through the results recorded in the application and through communication with each of them. The nurses entered the application as described in phase 3 and participated in the uploaded training courses.

Data collection tools

The three tools used in this study included the following:

Tests to measure the level of knowledge of operating room nurses

The placement test and the end-of-level tests for the basic, intermediate and advanced levels (the final exam) in each educational topic consist of four-choice questions designed by the teachers of each topic to evaluate the knowledge of operating room nurses. The questions for the placement and final exams were budgeted so that 25% were drawn from basic level content, 50% from intermediate level content and 25% from advanced level content.

The number of questions and how to calculate the score in each topic are as follows:

Placement test

The heart anatomy’s exam consists of 40 four-choice questions and there is no negative score for wrong answers to the questions, and 0.5 points are awarded to the student for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–20 and the duration of the test is 13 minutes. The way to interpret the obtained scores is that the learner with a score in the range of 0–7 is in the very poor level, 8–12 in the poor level, 13–17 in the average level, and 18–20 in the excellent level.

The test of the ergonomic principles in the operating room consists of 10 four-choice questions and there is no negative score for wrong answers to the questions, and 1 score is given to the students for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–10 and the duration of the test is 4 minutes. The way to interpret the obtained scores is that the learner with a score in the range of 0–3 is in the very poor level, 4–5 in the poor level, 6–7 in the average level and 8–10 in the excellent level.

The test of the principles of moving, transferring and positioning the patient in the operating room consists of 15 four-choice questions and there is no negative score for wrong answers to the questions, and 1 score is given to the students for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–15 and the duration of the test is 5 minutes. The way to interpret the obtained scores is that the learner with a score in the range of 0–7 is in the very poor level, 8–10 in the poor level, 11–12 in the average level, and 13–15 in the excellent level.
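For illustration, the score-to-level interpretation described above can be summarized as a simple lookup; the sketch below is an assumption about how the mapping might be coded, not the application's implementation.

```python
# Upper band edges are inclusive; levels follow the interpretation given above.
PLACEMENT_BANDS = {
    "heart anatomy":                   [(7, "very poor"), (12, "poor"), (17, "average"), (20, "excellent")],
    "ergonomics":                      [(3, "very poor"), (5, "poor"), (7, "average"), (10, "excellent")],
    "moving/transferring/positioning": [(7, "very poor"), (10, "poor"), (12, "average"), (15, "excellent")],
}

def placement_level(topic, score):
    for upper, level in PLACEMENT_BANDS[topic]:
        if score <= upper:
            return level
    raise ValueError("score exceeds the maximum for this topic")

# e.g. placement_level("heart anatomy", 14) -> "average"
```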

Basic level final exam

This test in the heart anatomy field consists of 19 four-choice questions and there is no negative score for wrong answers to the questions, and 1 score is given to the students for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–19 and the duration of the test is 6 minutes. The passing score to enter the intermediate level course is equal to 13.

The test of the ergonomic principles in the operating room consists of 7 four-choice questions and there is no negative score for wrong answers to the questions, and 1 score is given to the student for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–7 and the duration of the test is 2 minutes. The passing score to enter the intermediate level course is equal to 5.

The exam of the principles of moving, transferring and positioning the patient in the operating room consists of 10 four-choice questions and there is no negative score for wrong answers to the questions, and 1 score is given to the students for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0 to 10 and the duration of the test is 4 minutes. The passing score to enter the intermediate level course is equal to 7.

Intermediate level final exam

This test in the heart anatomy field consists of 20 four-choice questions and there is no negative score for wrong answers to the questions, and 1 score is given to the students for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–20 and the duration of the test is 7 minutes. The passing score to enter the advanced level course is equal to 14.

The exam of the ergonomic principles in the operating room consists of 10 four-choice questions and there is no negative score for wrong answers to the questions, and 1 score is given to the students for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–10 and the duration of the test is 4 minutes. The passing score to enter the advanced level course is equal to 7.

The exam of the principles of moving, transferring and positioning the patient in the operating room consists of 10 four-choice questions and there is no negative score for wrong answers to the questions and 1 score is given to the participants for each correct answer, so the minimum and maximum score of the user is in the range of 0–10 and the duration of the exam is 4 minutes. The passing score to enter the advanced level course is equal to 7.

Advanced level final exam (final exam)

This exam in the heart anatomy field consists of 40 four-choice questions and there is no negative score for wrong answers to the questions, and 0.5 points are awarded to the students for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–20 and the duration of the test is 13 minutes. The passing score for the successful completion of the heart anatomy training course is equal to 15.

The exam of the ergonomic principles in the operating room consists of 10 four-choice questions and there is no negative score for wrong answers to the questions, and 1 score is given to the students for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–10 and the duration of the test is 4 minutes. The passing score for the successful completion of the ergonomics training course is equal to 6.

The test of the principles of moving, transferring and positioning the patient in the operating room consists of 15 four-choice questions and there is no negative score for wrong answers to the questions, and 1 score is given to the students for each correct answer. Therefore, the minimum and maximum score of the user is in the range of 0–15 and the duration of the test is 5 minutes. The passing score for the successful completion of this training course is equal to 11.

In order to determine the face validity of the questions of each test, Millman's checklist and a test blueprint were used, and the results showed that the questions had appropriate face validity.

In order to evaluate the content validity of the test questions for the training courses on the principles of patient movement, transfer and positioning in the operating room, heart anatomy and principles of ergonomics in the operating room, all the questions were given to 20 professors of medical sciences universities, 10 of whom examined the content validity of the questions using the Lawshe content validity ratio (CVR) and the Waltz and Bausell content validity index (CVI). The results indicated that the CVR and CVI values for each of the tests in each of the training courses were 0.99 and 0.79, respectively, confirming the content validity of the instruments used in the present study.
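For reference, the content validity ratio reported above follows Lawshe's formula; the sketch below shows the calculation for a single item with an illustrative 10-expert panel (full agreement is conventionally reported as 0.99 from Lawshe's table).

```python
def cvr(n_essential, n_experts):
    """Lawshe content validity ratio: CVR = (n_e - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

print(cvr(10, 10))  # 1.0 when all 10 experts rate the item as essential
```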

In order to measure the reliability of the questions of each test using Cronbach's alpha, a pilot study was conducted in which the placement test, the basic level final exam, the intermediate level final exam and the final exam for each of the training courses (principles of moving, transferring and positioning the patient in the operating room, heart anatomy and principles of ergonomics in the operating room) were given to 30 operating room nurses, and Cronbach's alpha coefficients were calculated using SPSS version 16. The results showed that the coefficients for the placement test, the basic level final exam, the intermediate level final exam and the final exam were 0.812, 0.765, 0.805 and 0.819, respectively, in the training course on the principles of moving, transferring and positioning the patient in the operating room; 0.794, 0.780, 0.785 and 0.760, respectively, in the heart anatomy training course; and 0.800, 0.749, 0.780 and 0.826, respectively, in the training course on ergonomic principles in the operating room, indicating the reliability of the questions.
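Cronbach's alpha, used above for reliability, can be computed directly from an item-score matrix; the sketch below uses an invented matrix (rows = respondents, columns = items) purely for illustration, not study data.

```python
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

example = [[1, 0, 1, 1], [1, 1, 1, 0], [0, 0, 1, 0], [1, 1, 1, 1], [0, 1, 0, 0]]
print(round(cronbach_alpha(example), 3))
```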

Demographic information questionnaire

A demographic information questionnaire was designed and used in the web application as the registration form for each learner. It collected the hospital of employment, length of work experience in the operating room, field of study, level of education, specialized field of work, gender, username and password.

Satisfaction questionnaire

In order to measure learners' satisfaction with the content of each training course and with the designed web application, a researcher-made questionnaire of 7 items with a 4-point Likert scale was used. Scores of 0–7, 8–14, 15–21 and 22–28 indicate, respectively, no satisfaction, low satisfaction, moderate satisfaction and high satisfaction with the content of each educational course and the designed web application.

In order to determine the face and content validity of the satisfaction questionnaire, it was given to 20 professors of medical sciences universities, 10 of whom assessed its validity. The face validity results indicated that all items of the questionnaire were appropriate in terms of the number of items, the comprehensibility of the wording of each item and grammatical correctness. The content validity of the items was evaluated using the Lawshe content validity ratio (CVR) and the Waltz and Bausell content validity index (CVI); the CVR and CVI values were 0.99 and 0.79, respectively, confirming the content validity of the questionnaire.

In order to measure the reliability of the questionnaire using the Cronbach’s alpha method, a pilot study was conducted, during which the questionnaire was given to 30 operating room nurses, and the Cronbach’s alpha coefficient was calculated using SPSS version 16. The obtained results showed that Cronbach’s alpha coefficient was 0.794, which showed the reliability of the questionnaire.

Statistical analysis

The collected data were analyzed with SPSS software version 16 using descriptive statistics (frequency, frequency percentage, mean and standard deviation) and analytical tests (paired-samples t-test, independent-samples t-test, ANOVA, Pearson correlation and chi-square).
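As an illustration of the core pre/post comparison, a minimal paired-samples t-test with SciPy is sketched below; the scores are invented for the example and are not study data.

```python
from scipy import stats

pre = [5, 7, 4, 9, 6, 8, 5, 7]           # illustrative pre-test knowledge scores
post = [12, 15, 11, 17, 13, 16, 12, 14]  # illustrative post-test knowledge scores

t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p-value indicates a significant improvement
```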

After reviewing the data obtained from the 36 operating room nurses, the data of 2 participants were excluded because they had not completed the satisfaction questionnaire and had not taken the final exam, and the data of the remaining 34 participants (13 in the course on the principles of patient movement, transfer and positioning in the operating room, 11 in the heart anatomy course and 10 in the course on the principles of ergonomics in the operating room) were analyzed.

The demographic results (Table 1) showed that, among the nurses in the course on the principles of patient movement, transfer and positioning in the operating room, 10 (76.9%) were women, 10 (76.9%) had a bachelor's degree, 4 (30.7%) each had work experience of 1–5 or 16–20 years, 7 (53.8%) worked in hospital 1, and 5 (38.4%) worked in the general surgery field. In the heart anatomy training group, 8 (72.7%) were women, 8 (72.7%) had a bachelor's degree, 5 (45.4%) had 1–5 years of work experience, 7 (63.7%) worked in hospital 4, and 8 (72.7%) worked in the field of cardiovascular surgery. In the group for the course on the principles of ergonomics in the operating room, 5 (50%) were women, 6 (60%) had a bachelor's degree, 2 (20%) each had work experience of 1–5, 6–10, 11–15, 16–20 or 21–25 years, 5 (50%) worked in hospital 1, and 2 (20%) worked in the field of general surgery and endoscopy.

A chi-square test was also used to investigate the relationship between demographic variables and training course. No significant relationship was found between training course and level of education (P = 0.452), work experience (P = 0.401) or gender (P = 0.051), but there was a significant relationship between training course and both the hospital of employment (P = 0.004) and the field of specialized surgery (P = 0.012).

To compare the mean knowledge scores of operating room nurses overall and within each training course group (principles of patient movement, transfer and positioning in the operating room; heart anatomy; principles of ergonomics in the operating room) before the intervention (placement test) and after the intervention (final test), the normality of the data distribution was first checked with the Shapiro-Wilk test. As the data were normally distributed (P > 0.05), paired t-tests were used. Overall, the mean knowledge scores before and after the intervention were 5.91 ± 3.96 and 13.67 ± 3.77, respectively, and scores after the intervention were significantly higher than before (mean difference 7.76, 95% CI 6.62–8.90, P < 0.001) (Table 2). The mean knowledge scores before and after the intervention were 6.07 ± 3.42 and 13.38 ± 1.32 in the course on the principles of moving, transferring and positioning the patient in the operating room (mean difference 7.30, 95% CI 5.15–9.46, P < 0.001), 8.72 ± 3.97 and 18.18 ± 1.07 in the heart anatomy course (mean difference 9.45, 95% CI 6.98–11.92, P < 0.001), and 2.60 ± 1.57 and 9.10 ± 0.73 in the course on the principles of ergonomics in the operating room (mean difference 6.50, 95% CI 5.53–7.46, P < 0.001); in each course the mean knowledge score after the intervention was significantly higher than before the intervention (Table 2).

To determine the relationship between the mean knowledge scores of operating room nurses in each training course group and the demographic variables, Pearson correlation, independent t-tests and ANOVA were used. In the group for the principles of moving, transferring and positioning patients in the operating room, there was a direct, weak and non-significant relationship between knowledge scores and level of education (r = 0.165, P > 0.05) and an inverse, weak and non-significant relationship with work experience (r = 0.188, P > 0.05). In the heart anatomy group, there was an inverse, strong and non-significant relationship with level of education (r = 0.547, P > 0.05) and a direct, strong and significant relationship with work experience (r = 0.622, P < 0.05). In the group for the principles of ergonomics in the operating room, there was an inverse, strong and significant relationship with level of education (r = 0.667, P < 0.05) and a direct, strong and significant relationship with work experience (r = 0.707, P < 0.05).

There was no significant difference in the mean knowledge scores of the nurses in the three training course groups (principles of movement, transfer and positioning of the patient in the operating room, heart anatomy and ergonomic principles in the operating room) according to the hospital of employment, the field of specialized surgery or gender (P > 0.05).

The placement test results also showed that 8 nurses (61.5%) in the course on the principles of patient movement, transfer and positioning in the operating room, 5 (45.5%) in the heart anatomy course and 7 (70%) in the course on ergonomic principles in the operating room were at a very poor level (Table 3).

The mean satisfaction score with the electronic training courses was 21.38 ± 5.83, and 22 nurses (64.7%) were highly satisfied. The mean satisfaction scores in the courses on the principles of moving, transferring and positioning the patient in the operating room, heart anatomy and principles of ergonomics in the operating room were 18.76 ± 7.15, 23.36 ± 3.82 and 22.60 ± 4.88, respectively, and 7 nurses (53.8%) in the patient movement, transfer and positioning group, 8 (72.7%) in the heart anatomy group and 7 (70%) in the ergonomic principles group were highly satisfied (Table 4).

This study sought to answer the question of to what extent web application-based, leveled, asynchronous, personalized, needs-based electronic training on the specialized topics of the operating room profession can improve nurses' specialized knowledge, and whether operating room nurses would be satisfied with this new teaching method. The results showed that, overall, the knowledge of operating room nurses improved significantly after the intervention (participation in the needs-based, leveled electronic training courses) compared with before, and that knowledge also increased significantly within each of the specialized courses: the principles of moving, transferring and positioning the patient in the operating room, heart anatomy and the principles of ergonomics in the operating room.

Today, one of the basic measures that makes organizations efficient is the creation, acquisition and continuous development of human resources through training and improvement programs, which increase the value of the individual at the individual level, improve and develop the organization at the organizational level, and increase productivity at the national and even transnational level [5, 6]. Since official academic education and intensive but limited pre-service training do not adequately prepare hospital staff to perform their duties in the hospital environment, implementing educational programs has become all the more necessary [7]. Because each person has unique characteristics, and ways of learning skills and learning needs also differ, the first and most basic step in education is to examine educational needs, which provides a basis for preparing specific educational content and setting goals and thus a suitable platform for organizing other important elements around the prioritized needs; by preventing rework, it ultimately increases the effectiveness and efficiency of human resources, reduces waste, develops knowledge and skills, and increases job satisfaction and staff motivation [5, 8, 10–19]. In the study of Qalaei et al. (2013), the results showed that the lack of a correct needs assessment caused a mismatch between educational programs and the educational needs of nurses [7]. For this reason, we initially examined the educational needs of operating room nurses, and the results showed that the need for training and knowledge improvement in topics such as the principles of moving, transferring and positioning the patient in the operating room, heart anatomy and ergonomic principles in the operating room was greater than for other topics.

The most widespread method of continuous training for medical personnel is face-to-face training, but many studies have shown that this traditional method has important limitations, such as failing to recognize learners' needs and individual differences and neglecting higher-order cognitive skills such as problem solving and creative thinking [21]; as a result, many researchers have emphasized that traditional educational methods need to be supplemented and modified by modern ones [21, 22]. Electronic education has been proposed as one such complementary method [21, 23, 26]: researchers have concluded that, by overcoming the time and place limitations of traditional education and enabling lifelong learning tailored to each learner's circumstances, e-learning increasingly provides easy access to education and can be a suitable supplement to traditional education [21, 27, 43]. In this study, therefore, a leveled electronic education approach was used to teach the topics required by operating room nurses, and the results showed that the knowledge of the nurses participating in each of the electronic training courses (principles of moving, transferring and positioning the patient in the operating room, cardiac anatomy and principles of ergonomics in the operating room) increased significantly compared with before the courses (P < 0.001). A review by Rouleau and colleagues (2019) showed that electronic education increased nurses' knowledge, especially in the calculation, preparation and prescription of medication [25]. A review by Maertens and colleagues (2016) found that an electronic education approach in surgical training was at least as effective as other educational methods in improving the knowledge of medical staff [44]. The study of Bibani et al. (2022) showed that the knowledge of nurses in the intervention group (using an e-learning approach) was significantly higher than that of nurses in the control group (face-to-face mock training) (P < 0.05) [21], and the authors concluded that these results should encourage those responsible for continuing education to consider online education as a complementary and promising solution for flexible continuing education sessions for healthcare personnel.

In the study by Sabbagh (2017), nurses' knowledge after the intervention (e-learning on patient safety principles) was significantly higher than before the intervention (P < 0.05) [79]. Sung et al. (2008) found that knowledge and satisfaction in the intervention group (blended teaching of pharmacology principles through e-learning and lectures) improved significantly compared with the control group (lectures alone) (P < 0.05) [54], indicating that blended learning, which integrates e-learning with face-to-face classroom teaching, is useful for increasing pharmacological knowledge. An e-learning program can reduce the lecture time and cost devoted to repetitive topics such as medication administration, which suggests that it can be an effective component of nursing education programs.

Hashemiparast et al. (2016) found that the mean knowledge score of staff in the clinical departments of selected hospitals of the medical universities in Tehran, Iran, was significantly higher in the intervention group (electronic teaching of infection-control principles) than in the control group (P = 0.002) [80]; the authors concluded that, despite the effectiveness of e-learning in increasing learners' knowledge and awareness, its use in health-related organizations requires empowering employees, removing barriers and developing infrastructure. A review by Feng and colleagues (2013) showed that electronic education improves learners' knowledge [81]; the authors concluded that situated e-learning is an effective method for improving the performance of novice learners, that its effect on cognitive ability is limited compared with traditional learning, and that it is a useful supplement to traditional learning for medical and nursing students. The studies of Khatony and colleagues (2009) [82], Laine and colleagues (2019) [83], Gentizon and colleagues (2019) [24], Phaneuf (2012) [88], Vaona et al. (2018) [45], Dijkman et al. (2021) [46] and Horiuchi et al. [84] likewise found that various forms of electronic education improve nurses' knowledge compared with face-to-face training; the web-based method is therefore recommended as a complement to face-to-face teaching for designing and delivering some topics of nurses' continuing-education programs. Van de Steeg et al. (2015) showed that nurses' mean knowledge scores improved significantly after e-learning on the principles of diagnosing delirium in the elderly compared with before the intervention [85]; the course improved nursing staff's knowledge of delirium in all subgroups of participants and for all question categories, although, in contrast to other studies, the baseline assessment showed that nursing staff were already relatively knowledgeable about delirium. The results of these studies are consistent with the results of our study. Possible explanations for this consistency are that e-learning accommodates different learning styles according to the needs and abilities of each learner, lets each learner adjust the pace of learning to their own characteristics, and motivates learning through an attractive educational environment, leading to better consolidation of the learned material in nurses' memory and thus to improved knowledge [21, 22, 24, 44–53, 86–88].

The present study also showed that the operating room nurses participating in each training course were highly satisfied with the leveled electronic education approach.

Learners' satisfaction is one of the essential factors in the effectiveness of new educational processes [32, 33, 43]. Nurses' satisfaction with the education process improves their motivation to learn and, as a result, their level of knowledge [43]. Lhbibani et al. [21] showed that satisfaction in the intervention group (e-learning) was significantly higher than in the control group (conventional face-to-face training) (P < 0.05). Yazdannik et al. [43] likewise found that satisfaction among emergency department nurses in the intervention group (electronic patient-triage training) was significantly higher than among nurses in the control group (face-to-face training), and concluded that electronic education programs can increase nurses' satisfaction and motivation; this new method is therefore recommended to managers and educational planners as an effective form of teaching. In the study by Costa and colleagues [89], nurses were highly satisfied with electronic training on the principles of pain assessment in neonates. Muñoz-Narbona et al. [90] reported that nurses who took part in an electronic training course on assessing patients' pain intensity were satisfied with the course. Chang et al. [91] found that using an e-learning approach for the in-service training of nurses resulted in 97.6% satisfaction. Khoshnoodifar et al. [92] showed that nurses' satisfaction with an e-learning course on cardiopulmonary resuscitation was higher than that of nurses in the control group, who were taught the principles of cardiopulmonary resuscitation by lecture. The results of these studies are consistent with the results of our study. Possible explanations for this consistency are that nurses can take part in the training courses at any time and place according to their free time, exchange messages, share content and opinions with other learners in the online educational environment, and interact with the course instructors [21, 41, 84, 86].

Study implications

Based on the results of this study, an electronic education approach tailored to the needs of operating room nurses can be used as a complement to conventional continuing education. Because this method allows interactive, personalized, leveled and asynchronous education and can be used at any time and place on a laptop, tablet or mobile phone, a wide range of operating room nurses in the hospitals of the Islamic Republic of Iran could use it, helping to extend educational justice across the country. Further studies are nevertheless needed to evaluate the generalizability of these findings, the effect of the e-learning approach on the clinical skills of operating room nurses, and how e-learning compares with other educational methods and tools in terms of learners' knowledge, skills and retention of the learned material.

Strengths and limitations

The strengths of this study include the fully virtual training and assessment of operating room nurses; training based on learners' needs; training organized to match the knowledge level of each learner (personalized training); asynchronous delivery; the possibility of message exchange and interaction between learners and with the course instructors within the web-application environment; the use of a new, low-cost training method; the use of leveled, standardized, researcher-made tests to measure operating room nurses' knowledge in each course; availability online at any place and time on any smart device (laptop, tablet or mobile phone); reduced training costs; and a multicenter design with random selection of participants. The limitations include the small sample size and the absence of a control group. In addition, the limited power of quasi-experimental studies to generalize results from the studied sample to the whole population, the reliance on null-hypothesis testing and the weak analytical sample threaten the validity of this study.

The results of the present study showed that a web-application-based electronic education approach that is leveled, asynchronous, personalized and based on nurses' needs improved the knowledge of operating room nurses. Operating room nurses were also highly satisfied with the electronic training courses. E-learning therefore appears to be a suitable complementary tool and method for the continuous training of operating room nurses in other specialized fields of the operating room and surgery.

Availability of data and materials

The datasets used during the current study are available from the corresponding author upon reasonable request.

References

Arzani A, Lotfi M, Abedi AR. Experiences and decisions of operating room nurses based on Benner theory. J Babol Univ Med Sci. 2016;18(4):35–40.

Bull R, Fitzgerald M. Nursing in a technological environment: nursing care in the operating room. Int J Nurs Pract. 2006;12(1):3–7.

An M, Sh A, Mostafazadeh M, Hajian S. Investigation of the relationship between organizational atmosphere and accountability in Namazi and Faghihi Hospital in Shiraz in 2013. Sci J Med Syst Org Islamic Repub Iran. 2016;34(2):143–50.

Council P. Health in fifth Development program of the Islamic Republic of Iran. Tehran: Ministry of Health & Medical Education Publication; 2009.

Hakimzadeh R, Javadipour M, Masoubi S, Ghorbani H, Fallah Mehrjordi MA, Ghafarian M. Assessing the educational needs of nurses by Dicom method: a case study. Q J Nurs Manag. 2014;3(1).

Abtahi H. Education and human resource development. Tehran: Institute of Educational Planning, Organization of Iran's Industrial Development and Renovation; 2004.

Ghalehei AR. Evaluation of the effectiveness of in-service training courses for nurses in medical centres affiliated with the social security organization (case study: Ostad Alli and 29 Bahman hospitals, Tabriz). J Urmia Univ Nurs Midwifery. 2014;11(12):961–70.

Dehghani MR, Zare S, Bazrafkan L, Amini M, Kajoori J, Hayat AA, Nabiei P. Educational needs assessment and development of the educational program using the developing a curriculum model. J Centre Study Dev Med Educ. 2014;11(3):299–312.

Fathi K. Educational needs assessment. Tehran: Ketabiran; 2000.

Shabani T. The principles of educational management. Tehran: Aan; 2002.

Abaszadegan M, Torkzadeh J. Needs assessment in the organization. Tehran: Saham Co; 2000.

Brown J. Training needs assessment: A must for developing an effective training program. Public Pers Manag. 2002;31(4):569–78.

Aminoroaya M, Yarmohammadian MH, Yousefy AR. Educational needs of the Esfahan Medical University’s experts. Educ Med Sci J. 2002;3(3):6.

Fathi K. Educational needs assessment, models and techniques. Tehran: Ayizh Publisher; 2000.

Seyedabbaszadeh M, Nikbakht A, Kh V. Investigating the educational needs of nursing managers in public hospitals. Nurs Res. 2009;4(15):16–24.

Sullivan EJ. Effective leadership and Management in Nursing. New Jersey: Pearson Practice Hall; 2005.

Grant J. Learning needs assessment: assessing the need. Br Med J. 2002;324:156–9.

Dehghani H, Dehghani K, Kh N, Dehghani A, Banaderakhshan H. Evaluation of continuing education needs of nurses in hospitals of Shahid Sadoughi University of Medical Sciences in Yazd using Delphi technique. J Yazd Centre Study Dev Med Educ. 2012;7(4):73–83.

Shirzadkebria B, Momeni A, Sh H. Identifying and prioritizing the educational needs of Baqiyatallah hospital managers in the field of human, perceptual and technical skills. J Health. 2017;8(2):77–93.

Mazoji F, Mazoji F, Karimi S. Educational needs of nurses on medicine use, indications and ophthalmic surgery techniques. Two Chapters of Nursing and Midwifery Faculties of Gilan Province. 2007;16(55):30-3.

Lhbibani A, Daaif J, Lotfi S, Tridane M, Belaaouad S. Effect of online training in the continuing education of nurses in hospitals in the Casablanca-Settat region. Open Nurs J. 2022;16(1).

Stevens K. The impact of evidence-based practice in nursing and the next big ideas. Online J Issues Nurs. 2013;18(2).

Rachid G, Said B, Said B, Mohamed R. The motivation of nurses to participate in continuous training activities: A descriptive study in the surgical emergency department at the Ibn-Rochd Hospital in Casablanca. French-Speaking Int J Nurs Res. 2018;4(4):237–43.

Gentizon J, Kottelat Y, Hamel-Lauzon G, Szostak V, Gallant S. Knowledge serving patients: evaluation of knowledge transfer to nurses, after e-learning training on pain management. Sci Nurs Health Pract. 2019;2(1):1–13.

Rouleau G, Gagnon M-P, Côté J, Payne-Gagnon J, Hudson E, Dubois C-A, et al. Effects of e-learning in a continuing education context on nursing care: systematic review of systematic qualitative, quantitative, and mixed-studies reviews. J Med Internet Res. 2019;21(10):e15118.

Rouleau G, Gagnon M-P, Côté J, Payne-Gagnon J, Hudson E, Bouix-Picasso J, et al. Effects of e-learning in a continuing education context on nursing care: a review of systematic qualitative, quantitative and mixed studies reviews (protocol). BMJ Open. 2017;7(10):e018441.

Wu J, Tsai RJ, Chen CC, Wu Y. An integrative model to predict the continuance use of electronic learning systems: hints for teaching. Int J E-Learning. 2006;5(2):1–16.

Aronen R, Dierssen G. Improving equipment reliability through e-learning. Hydrocarb Process. 2001;80(9):47–60.

Narimani M, Zamani BE, Asemi A. Qualified instructors, students’ satisfaction and electronic education. Interdiscip J Virtual Learn Med Sci. 2015;6(3):31–9.

Atreja A, Mehta NB, Jain AK, Harris CM, Ishwaran H, Avital M, et al. Satisfaction with web-based training in an integrated healthcare delivery network: do age, education, computer skills and attitudes matter? BMC Med Educ. 2008;8(1):1–8.

Jeffries PR, Woolf S, Linde B. Technology-based vs. traditional instruction: A comparison of two methods for teaching the skill of performing a 12-lead ECG. Nurs Educ Perspect. 2003;24(2):70–4.

Keulers B, Welters C, Spauwen PH, Houpt P. Can face-to-face patient education be replaced by computer-based patient education? A randomised trial. Patient Educ Couns. 2007;67(1–2):176–82.

Mohamadiriz S, Khani B, Mohamadirizi S. Role playing approach vs. traditional method about neonatal admission skills among midwifery students. Int J Pediatr. 2015;3(5):965–70.

Zolfaghari M, Mehrdad N, Parsa YZ, Salmani BN, Bahrani N. The effect of lecture and e-learning methods on learning mother and child health course in nursing students. Iran J Med Educ. 2007;7(1):31–9.

Kohpaye zadeh J, Khoshnevisan MH, Bilravand A. Comparison of the effect of two virtual and traditional teaching methods on the learning of the course "familiarity with dental instruments and equipment and their maintenance" of students of general dentistry doctorate course of Shahid Beheshti University of Medical Sciences. J Razi Med Sci. 2016;23(143):63-70.

Hale LS, Mirakian EA, Day DB. Online vs. classroom instruction: student satisfaction and learning outcomes in an undergraduate allied health pharmacology course. J Allied Health. 2009;38(2):36–42.

Reime MH, Harris A, Aksnes J, Mikkelsen J. The most successful method in teaching nursing students infection control–E-learning or lecture? Nurse Educ Today. 2008;28(7):798–806.

Davis J, Chryssafidou E, Zamora J, Davies D, Khan K, Coomarasamy A. Computer-based teaching is as good as face-to-face lecture-based teaching of evidence-based medicine: a randomised controlled trial. BMC Med Educ. 2007;7(1):1–6.

Fahami F, Mohamadirizi S, Bahadoran P. Effect of electronic education on the awareness of women about postpartum breastfeeding. Int J Pediatr. 2014;2(3):57–63.

Doll WJ, Deng X, Raghunathan TS, Torkzadeh G, Xia W. The meaning and measurement of user satisfaction: A multigroup invariance analysis of the end-user computing satisfaction instrument. J Manag Inf Syst. 2004;21(1):227–62.

Mohamadirizi S, Fahami F, Bahadoran P. The effect of E-learning education on primipar women’s knowledge about neonatal care. Iran J Neonatol IJN. 2013;4(1):24–7.

Mohamadirizi S, Bahadoran P, Fahami F. Comparison between the impacts of e-learning and booklet education on nulliparous women’s satisfaction about postpartum care. Iran J Obstetrics Gynecol Infertility. 2013;16(61):1–8.

Yazdannik A, Mohamadirizi S, Nasr-Esfahani M. Comparison of the effect of electronic education and workshop on the satisfaction of nurses about emergency severity index triage. J Educ Health Promot. 2020;2020(9):1–6.

Maertens H, Madani A, Landry T, Vermassen F, Van Herzeele I, Aggarwal R. Systematic review of e-learning for surgical training. J Brit Surg. 2016;103(11):1428–37.

Vaona A, Banzi R, Kwag KH, Rigon G, Cereda D, Pecoraro V, et al. E-learning for health professionals. Cochrane Database Syst Rev. 2018;1(1):1–79.

Dijkman B, Oosterhoff A, Akanov A, Paans W. The Development of an e-platform to strengthen nursing in Kazakhstan: A systematic review and a Delphi study to define requirements. Open Nurs J. 2021;15(1).

Hanto DW. Patient safety begins with me. Ann Surg. 2014;260(6):971–2.

Bridges M, Diamond DL. The financial impact of teaching surgical residents in the operating room. Am J Surg. 1999;177(1):28–32.

Philibert I, Friedmann P, Williams WT. New requirements for resident duty hours. Jama. 2002;288(9):1112–4.

Frenk J, Chen L, Bhutta ZA, Cohen J, Crisp N, Evans T, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376(9756):1923–58.

Pugh CM, Watson A, Bell RH Jr, Brasel KJ, Jackson GP, Weber SM, et al. Surgical education in the internet era. J Surg Res. 2009;156(2):177–82.

Stefanidis D, Sevdalis N, Paige J, Zevin B, Aggarwal R, Grantcharov T, et al. Simulation in surgery: what’s needed next? Ann Surg. 2015;261(5):846–53.

Ritz J, Gröne J, Hopt U, Saeger H, Siewert J, Vollmar B, et al. “Practical course for visceral surgery in Warnemünde” 10 years on. Significance and benefits of a surgical training course. Der Chirurg; Zeitschrift fur Alle Gebiete der Operativen Medizen. 2009;80(9):864–71.

Sung YH, Kwon IG, Ryu E. Blended learning on medication administration for new nurses: integration of e-learning and face-to-face instruction in the classroom. Nurse Educ Today. 2008;28(8):943–52.

Arabkhazaie A, Arabkhazaie A, Sadati L, Hannani S. The effect of Education based on the spinal fusion surgery simulation on the level of knowledge and practical skills of the 8th students. Tehran: Iran Univ Med Sci. 2018;7(5):11–4.

Khorammakan R, Khalili J, Azar A, Azin A, Belyadchaldashti H, Omid A, Roudbari SH, Ghadami A. Studying the educational needs of operating room technologists in selected Isfahan hospitals and related factors in 2021. J Nurs Educ (JNE). 2023;12:47–56 in Persian.

BRT F. Surgical technology for the surgical technologists. 4th ed. USA: Association of Surgical Technologists, Inc; 2014.

NHA P. Berry & Kohn’s operating room technique. 14th ed. Elsevier; 2021.

Rothrock J, McEwen D. Alexander's care of the patient in surgery. 15th ed. Elsevier; 2015.

Janki S, Mulder E, IJzermans JN, Tran TC. Ergonomics in the operating room. Surg Endosc. 2017;31(6):457–66.

Vural F, Sutsunbuloglu E. Ergonomics: an important factor in the operating room. J Perioper Pract. 2016;26(7):174–8.

Bridger R. Introduction to ergonomics. CRC Press; 2008.

Salvendy G. Handbook of human factors and ergonomics. John Wiley & Sons; 2012.

Simonsen GJ, Arvidsson I, Nordander C. Ergonomics in the operating room. Work. 2012;41(1):5644–6.

Rosenblatt PL, McKinney J, Adams SR. Ergonomics in the operating room: protecting the surgeon. J Minim Invasive Gynecol. 2013;20(6):744.

Sh M. Atlas of cardiac anatomy. Cardiotext; 2019.

Mirmohammadsadeghi M. Cardiac anatomy. Tehran: Tabib, Teimorzadeh; 2016.

Fuller JK. Surgical technology: principles and practice. 6th ed. Elsevier; 2013.

Shahraki A. Introduction of surgical technology. Tehran: Jamehnegar; 2014.

Sadati L, Golchini E. Introduction of surgical technology. Tehran: Jamehnegar; 2021.

Ghardashi F. Introduction of surgical technology. Tehran: Jamehnegar; 2020.

Khoshtarash M. A comprehensive guide to the operating room. Tehran: Sabura; 2014.

Mohammadbeigi A, Aligol M. Validity and reliability of the instruments and types of measurement in health applied research. Rafsanjan Univ Med Sci. 2015;13(12):1153–70.

Al-Rukban MO. Guidelines for the construction of multiple-choice questions tests. J Fam Community Med. 2006;13(3):125.

Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet. 2001;357(9260):945–9.

Raymond MR, Grande JP. A practical guide to test blueprinting. Med Teach. 2019;41(8):854–61.

Millman J, Greene J. The specification and development of tests of achievement and ability. In: Linn RL, editor. Educational measurement. 3rd ed. New York (NY): Macmillan; 1989. p. 13–103.

Raymond MR. Job analysis, practice analysis, and the content of credentialing examinations. In: Lane S, Raymond MR, Haladyna TM, editors. Handbook of test development. 2nd ed. New York (NY): Routledge; 2016.

Sabbagh F. The effect of an e-learning course on patient safety on nursing staff's knowledge: pre-post examination. Int J Soc Sci Humanit Res. 2017;5(2):444–53.

Hashemiparast MS, Sadeghi R, Ghaneapur M, Azam K, Tol A. Comparing E-learning and lecture-based education in control of nosocomial infections. Payavard Salamat. 2016;10(3):230–8.

Feng JY, Chang YT, Chang HY, Erdley WS, Lin CH, Chang YJ. A systematic review of the effectiveness of situated e-learning on medical and nursing education. Worldviews Evid-Based Nurs. 2013;10(3):174–83.

Khatony A, Nayery ND, Ahmadi F, Haghani H, Vehvilainen-Julkunen K. The effectiveness of web-based and face-to-face continuing education methods on nurses’ knowledge about AIDS: a comparative study. BMC Med Educ. 2009;9(1):1–7.

Laine A, Välimäki M, Löyttyniemi E, Pekurinen V, Marttunen M, Anttila M. The impact of a web-based course concerning patient education for mental health care professionals: quasi-experimental study. J Med Internet Res. 2019;21(3):e11198.

Horiuchi S, Yaju Y, Koyo M, Sakyo Y, Nakayama K. Evaluation of a web-based graduate continuing nursing education program in Japan: A randomized controlled trial. Nurse Educ Today. 2009;29(2):140–9.

Van De Steeg L, IJkema R, Wagner C, Langelaan M. The effect of an e-learning course on nursing staff’s knowledge of delirium: a before-and-after study. BMC Med Educ. 2015;15(1):1–8.

Launay-Vacher G, Rieutord A. E-learning for pharmacist CPD: educational engineering, a key factor. J Pharm Clin. 2014;33(2):76–102.

Karaman S, Kucuk S, Aydemir M. Evaluation of an online continuing education program from the perspective of new graduate nurses. Nurse Educ Today. 2014;34(5):836–41.

Phaneuf M. Yesterday’s knowledge still valid tomorrow? University of Montreal; 2012.

Costa T, Silva IA, Peres HH, Duarte ED, Bueno M. Nurses’ motivation, knowledge, and satisfaction with a neonatal pain assessment e-learning course. Pain Manag Nurs. 2022;23(5):576–82.

Muñoz-Narbona L, Cabrera-Jaime S, Lluch-Canut T, Castaño PB, Roldán-Merino J. E-learning course for nurses on pain assessment in patients unable to self-report. Nurse Educ Pract. 2020;43:10.

Chang WY, STH S, Chang PC, Lee PH. Developing an e-learning education programme for staff nurses: processes and outcomes. Nurse Educ Today. 2008;28(7):822–8.

Khoshnoodifar M, Rafie S, Zeraati Nasrabadi M, Masoudi Alavi N. The effects of CPR training using two traditional and electronic training methods on the knowledge, skill, and satisfaction of nurses from in service Education of cardiopulmonary resuscitation. Qom Univ Med Sci J. 2019;13(9):34–43.

Acknowledgements

The authors sincerely thank the officials of the National Center for Strategic Research in Medical Education, Tehran, Iran, and the operating room nurses working in the selected hospitals in Isfahan, Iran, who helped us to carry out this study.

Funding

This study was funded and supported by the National Center for Strategic Research in Medical Education, Tehran, Iran (Grant No. 4000560).

Author information

Authors and Affiliations

Department of the Operating Room, School of Nursing and Midwifery, Hormozgan University of Medical Sciences, Bandar Abbas, Iran

R. Khorammakan

Department of the operating room, Farmaniyeh hospital, Tehran, Iran

S. H. Roudbari

Department of Medical Education, Medical Education Research Center, Isfahan University of Medical Sciences, Isfahan, Iran

Department of Occupational Health and Ergonomics, Student Research Committee, School of Health, Shiraz University of Medical Sciences, Shiraz, Iran

V. S. Anoosheh

Department of Operating Room, Torbatjam Faculty of Medical Sciences, Torbatjam, Iran

A. N. Arabkhazaei

Department of Operating Room, School of Paramedical Science, Gonabad University of Medical Sciences, Gonabad, Iran

A. Z. Arabkhazaei

Ansar Al-Ghadir Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Department of Operating Room, Shahid Ansari Hospital, Rudsar, Iran

H. Belyad Chaldashti

Department of the Operating Room, Nursing and Midwifery Care Research Centre, School of Nursing and Midwifery, Isfahan University of Medical Sciences, Isfahan, Iran

Contributions

Reza Khorammakan: Data curation, Project administration, Writing – original draft. Seyed Hadi Roudbari: Design of the web application. Athar Omid: Scientific director, Writing – review & editing. Vida Sadat Anoosheh: Design of educational contents. Azin Arabkhazaei: Writing – original draft. Azar Arabkhazaei: Writing – original draft. Javad Khalili: Data curation. Hamed Belyad Chaldashti: Data curation. Ahmad Ghadami: Funding acquisition, Investigation, Methodology, Formal analysis, Project administration, Writing – review & editing.

Corresponding author

Correspondence to A. Ghadami.

Ethics declarations

Ethics approval and consent to participate

The Regional Research Ethics Committee of the National Agency for Strategic Research in Medical Education, Tehran, Iran, approved the study (Approval ID: IR.NASRME.REC.1401.426). The researchers explained the study to the participants and obtained their informed, voluntary consent. All methods were performed in accordance with the relevant guidelines and regulations.

Consent to publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Khorammakan, R., Roudbari, S.H., Omid, A. et al. Continuous training based on the needs of operating room nurses using web application: a new approach to improve their knowledge. BMC Med Educ 24 , 342 (2024). https://doi.org/10.1186/s12909-024-05315-3

Received: 26 February 2023

Accepted: 14 March 2024

Published: 26 March 2024

DOI: https://doi.org/10.1186/s12909-024-05315-3

Keywords

  • Continuous education
  • Satisfaction
  • Operating room nurse
