This section frames the findings around feedback-exchange tensions in residency programs (Sec. 5.1), the dual role of faculty (Sec. 5.2), and cognitive apprenticeship (Sec. 5.3). Specifically, we present how faculty's feedback methods are not aligned with learners' needs, and then discuss how this misalignment stems from training strategies and content that aim to address clinical duties in addition to teaching, as well as from a lack of support for learners to examine and share their cognitive processes. Sec. 5.4 describes sociotechnical strategies to improve the learning of highly specialized and critical medical skills, not just in contouring education but also in broader apprenticeship-based healthcare training.
5.1 Tensions between teaching methods of faculty and learning needs of residents
The empirical findings of this work shed light on the interrelationships between faculty and residents in the apprenticeship model of residency training, and on how the existing mechanisms of feedback exchange do not align with the needs of residents, especially in terms of timeliness, relevance, and diversity of training methods.
The asynchronous methods of training introduce significant delays in feedback exchange, which leads to subpar in-time support and can degrade overall learning of critical and high-stakes medical tasks. As mentioned in the results section, a common feedback-exchange strategy is for faculty to assign their own clinical cases to residents as practice opportunities, later re-contour the entire case, and send it back so the residents can learn by comparing their contour with the expert's. However, this method fails to account for the confusions and questions that arise while contouring patient cases, as evidenced by the interface mock-ups that residents generated during the workshops (e.g., on-demand support features in Figure 3e). Several components of the survey results further showcase the benefit of in-time support, such as the high usability and learnability scores of the interface design with the left-panel support, as well as the incorporated interactive mentoring functionality that provides hints during contouring sessions (i.e., S4 in Figure 6). Seeking feedback from peer residents is another method of training, yet the ad-hoc nature of this support mechanism can limit its benefits for learners, since adequate support might not be in place when help is needed, for example when no senior resident with experience relevant to the case at hand is available.
The interactions between faculty and residents are limited in supporting unique and granular learning needs. For instance, comparing contours of the entire case with the solution (provided by the faculty) might not directly address gaps in contouring knowledge and skills, since it remains up to the residents to interpret differences as either essential concepts or subjective tendencies. Faculty further reported sending notes via email, mainly to provide specific and critical learning points. While such explanations can help clarify some confusions, the barrier of providing detailed and targeted feedback in a disjoint, text-based medium — i.e., email content that needs to map to specific segments of particular images, in a case only accessible through separate contouring tools — can introduce additional burden for the faculty and discourage granular feedback. In the training method of watch-alongs, in which faculty think aloud about their processes as they place contours on images, the demonstrated processes can differ from the learning needs of less-experienced residents. While residents can participate in this training strategy and ask for clarification, these questions might differ from the confusions they face when contouring themselves. As such, learning needs can remain unaddressed and only be uncovered when residents engage with contouring tasks in depth and receive relevant feedback.
Similar to many training programs for specialized healthcare procedures, trainees are matched with only a limited number of expert physicians, which can hinder the diversity of feedback, especially in complex tasks that involve a certain degree of subjectivity. While access to senior residents can improve variety in feedback, the participants indicated that impressing attending faculty is a main goal of residency programs. As further unveiled via the workshop (i.e., Figure 3a) and survey (e.g., I1 heatmap in Figure 5), learners valued being exposed to contouring tendencies and diverse perspectives. In addition, facilitating access to expert physicians (from varying backgrounds and experiences) can substantially improve equity in healthcare: prior research shows that a “quality gap” exists in cancer treatment, in which medical institutions in rural locations (with lower patient volumes) provide substandard treatment compared to urban providers with higher patient volumes [1, 50].
5.3 Moving from traditional apprenticeships towards cognitive apprenticeship
The findings of this work revealed elements of a residency program that more closely resembles a traditional apprenticeship (i.e., the first half of a cognitive apprenticeship), in which the three principles of modeling, coaching, and scaffolding are moderately supported. As reported in the results, 1-on-1 contouring watch-alongs — in which experts perform contouring and externalize their internal processes — center on the first principle (modeling). The second principle of the cognitive apprenticeship model (coaching) is partly fulfilled through interactions with peer residents: some participants benefited from working through a small subsection of a case while a senior resident evaluated their thought process and provided guidance. However, this strategy can be unstructured and ad-hoc, meaning support might not always be available, or the expertise of the senior resident can differ from the learner's needs. The case exchange and re-contouring method, while mainly a form of experiential learning (i.e., “learning by doing”) [42], shares core elements with the scaffolding principle of cognitive apprenticeship, in which the faculty adjust the depth and modality of feedback according to the skill level of the residents. As noted in the results section, the faculty decide either to provide only the contours or to also add text-based justifications via email when the residents can benefit from additional hints. Given the asynchronous (i.e., delayed) and contextually limited (i.e., textual vs. richer video formats) nature of this feedback, the training mechanism might not adequately gauge the expertise level of learners in order to provide relevant help.
The existing training strategies lack support for the second half of cognitive apprenticeship (Table 1), in which learners focus on developing and solidifying their cognitive processes. Under the existing curricular infrastructure, residents have limited opportunity to engage in articulation and reflection, principles that aim to delve deeper into the cognitive processes of learners and enable comparison with experts' models. These steps require investing significant time and resources, an investment that might not directly contribute to clinical throughput, further highlighting the constraints of the dual role of clinician and teacher (as elaborated in Sec. 5.2). Lastly, the exploration phase promotes fading not only in problem-solving but also in problem-setting, in which learners apply their newly learned skills to seek and tackle other problems that align with their learning goals. This can be difficult, especially in healthcare, for two reasons: first, patient cases involve a high degree of sensitivity and privacy, which can pose a barrier to access; second, it can be challenging to gauge the complexity of patient cases and, specifically, which learning goals they cover. As such, residents might need additional guidance in selecting new contouring cases, a type of support that is lacking in current training methods. For a complex, critical, and cognitively demanding task like contouring in radiation oncology, as well as clinical workflows in many other healthcare domains, it is imperative that all six principles of cognitive apprenticeship be adequately supported to yield improved learning and, consequently, quality clinical outcomes.
The designed features in the interface mock-ups can especially complement the existing training model by supporting the last three principles of cognitive apprenticeship. For instance, the video-enabled feedback exchange (interface mock-up 3f) can be an effective strategy for communicating the internal processes of residents (i.e., articulation) and for facilitating reflection opportunities by enabling learners to compare experts' cognitive models (via the proposed Experts' Videos feature) with their own processes. The Similar Cases feature — introduced in mock-ups 3d and 3e — can also benefit exploration, allowing residents to explore cases relevant to their current learning task. This approach, however, might require further scaffolding to align these clinical cases with the underlying principles that residents need to improve their contouring knowledge and skills.
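As a purely illustrative example of how such a Similar Cases feature could be backed, the sketch below performs a nearest-neighbor lookup over numeric case descriptors; the descriptor fields (site code, stage, target volume) and their encoding are assumptions for illustration, not part of the studied system.

```python
# Minimal sketch: retrieving "Similar Cases" via nearest-neighbor search over
# hypothetical numeric case descriptors. In practice, descriptors would be
# normalized and include richer clinical attributes.
import numpy as np
from sklearn.neighbors import NearestNeighbors

case_ids = ["case_012", "case_047", "case_103", "case_150"]
case_features = np.array([
    [0, 2, 35.0],   # site code, stage, target volume (cc)
    [0, 3, 42.5],
    [1, 1, 12.0],
    [0, 2, 38.7],
])

index = NearestNeighbors(n_neighbors=2).fit(case_features)

def similar_cases(current_case_vector):
    """Return the ids of the prior cases most similar to the current case."""
    _, neighbor_idx = index.kneighbors([current_case_vector])
    return [case_ids[i] for i in neighbor_idx[0]]

print(similar_cases([0, 2, 36.0]))  # e.g., ['case_012', 'case_150']
```

Even with such retrieval in place, the returned cases would still require the scaffolding discussed above to connect them to the learning goals residents are working toward.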
5.4 Sociotechnical methods of bridging dual faculty roles and facilitating cognitive apprenticeship
While fundamental solutions for current residency programs might suggest completely decoupling the clinical and teaching roles of expert physicians hired at academic institutions, we recognize that such changes would require significant restructuring of existing societal and monetary models. As such, we offer more attainable curricular and technological strategies to mitigate the shortcomings of healthcare training. This section presents sociotechnical solutions that can not only address the constraints of faculty's dual role (to a reasonable extent) but also support all principles of cognitive apprenticeship. Many of these approaches apply directly to other healthcare domains that incorporate similar apprenticeship models of training.
Leveraging peer resident resources to lower teaching duties and enrich learning — As discovered in the interview sessions, residents seek guidance from their more experienced peers in a back-and-forth exchange where the pair works through a subset of the case together. While especially beneficial for early residents, this ad-hoc and informal method of help-seeking involves additional overhead and uncertainty (e.g., finding senior residents with the relevant level of expertise when needed) and can deter residents from pursuing these resources.
Contouring education can especially benefit from systematically leveraging the knowledge and skill set of senior residents to train a larger number of novice residents. When structured according to case difficulty and expertise level, senior residents can be valuable training resources, as they can engage with learners in deeper and longer sessions and, hence, lower the teaching responsibilities of faculty. Prior HCI work on crowdsourcing has explored leveraging the wisdom of expert workers and matching their expertise to the needs of novices by providing concrete learning tasks with representative descriptions, measuring the extent of expert knowledge, and defining reasonable incentives [37, 67, 84]. While residency programs operate at smaller scales than these systems, our findings point to how similar principles can help streamline this process: 1) contouring cases for early residents can be defined by their attending faculty, who better understand the complexity of tasks and required skills; 2) senior residents who have specialized in these particular cases can be matched for extended co-contouring sessions that engage learners in deeper cognitive processes; and 3) these expert residents can later be compensated with academic credits or monetary incentives.
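A minimal sketch of step 2 (pairing a learner's case with a senior resident) is shown below, under the assumption that each senior resident's specializations and remaining teaching availability are tracked; the data fields and the simple availability-based selection are illustrative, not a prescribed implementation.

```python
# Minimal sketch of matching a learner's case to a senior resident.
# Fields and selection rule are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SeniorResident:
    name: str
    specializations: set      # e.g., {"head-and-neck", "prostate"}
    available_hours: float    # remaining time budgeted for teaching

def match_senior(case_site: str, seniors: List[SeniorResident]) -> Optional[SeniorResident]:
    """Pick an available senior resident whose specialization covers the case site."""
    candidates = [s for s in seniors
                  if case_site in s.specializations and s.available_hours > 0]
    # Prefer the candidate with the most remaining availability.
    return max(candidates, key=lambda s: s.available_hours, default=None)

seniors = [
    SeniorResident("senior_A", {"head-and-neck"}, 2.0),
    SeniorResident("senior_B", {"prostate", "head-and-neck"}, 0.5),
]
print(match_senior("head-and-neck", seniors).name)  # senior_A
```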
Facilitating convenient capture of video snippets to share cognitive processes — As shown during the design-thinking workshops and survey, embedded video recording can help residents capture questions and uncertainties that arise during contouring sessions, and facilitate targeted post-hoc review and learning from the faculty.
Video-assisted feedback is an effective method of feedback exchange in healthcare, as it significantly improves clinical skills [57] and, in some cases, benefits learners on par with direct expert feedback [64, 65]. In surgery, video summaries — especially if developed via human learning models [25] — can benefit resident training by showing alternative ways of performing gestures and enabling residents to trace their mistakes [3]. Creating reusable video snippets not only lowers the burden on faculty time in the long term but also provides a necessary space for residents' review and self-reflection [24]. These video snippets can especially benefit residency programs, since reviewing and responding via short videos (particularly if embedded in the main contouring tools) avoids adding significant overhead to the current workflow of the attending faculty who are responsible for many clinical and teaching tasks.
In addition, a convenient video-capturing system can facilitate the articulation principle of the cognitive apprenticeship model, in which learners express their cognitive processes in depth and in context, and further compare their problem-solving skills with experts'. It is particularly beneficial to record these processes in-session since, under the existing training strategies, residents forget many important contouring details and confusions, or there might not be sufficient time during review sessions with the faculty (results presented in Sec. 4.3).
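To make the idea of embedded, in-session capture concrete, the sketch below shows one possible record for a video snippet anchored to the image slice being contoured when a question arises; the field names and the simple review queue are assumptions for illustration rather than a description of an existing tool.

```python
# Minimal sketch of an in-session video snippet record, anchored to the image
# slice on screen when the resident's question arose. Fields are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class VideoSnippet:
    resident_id: str
    case_id: str
    slice_index: int      # image slice being contoured when recording started
    video_path: str       # short screen/voice recording of the confusion
    question: str         # resident's articulated question
    recorded_at: datetime

review_queue: List[VideoSnippet] = []

def capture_snippet(resident_id, case_id, slice_index, video_path, question):
    """Store a snippet so faculty can review it asynchronously, in context."""
    snippet = VideoSnippet(resident_id, case_id, slice_index,
                           video_path, question, datetime.now())
    review_queue.append(snippet)
    return snippet
```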
Aggregating variability to capture unique tendencies and foster deliberation — The interviews and interface mock-ups (e.g., the design of 3a and the corresponding positively rated heatmap annotation shown in Figure 5) pointed out the nuanced differences in experts' contours, and showed how residents raised concerns about the lack of exposure to different contouring tendencies and emphasized learning from diverse styles.
Capturing and presenting other experts' unique contouring tendencies can complement residency programs that facilitate apprenticeship with only a few faculty. Despite existing guidelines (e.g., [49] and [78]), physicians can interpret images differently [44] and hence introduce contouring variations. As shown in the results, one main source of variation stems from clinicians' differing judgements about including or excluding certain regions around the tumor. Expert disagreements appear in many clinical decision-making tasks, such as identifying abnormal spikes in brain signals [4] and assessing eyes for referral diagnoses [91]. Capturing and presenting contouring disagreements can further encourage deliberation and enhance learning, especially by promoting the reflection principle of the cognitive apprenticeship model. Group Deliberation refers to sense-making of the collected uncertainty [29, 72] by leveraging dissenting positions to generate necessary information that would otherwise be lost in consensus-reaching procedures (e.g., majority voting) [80]. In-depth discussions of different contouring tendencies can create more opportunities for learners to compare their internal cognitive model of expertise with those of faculty and peer residents, further aligning with the cognitive apprenticeship model.
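As one way to make such aggregation concrete, the sketch below computes a per-pixel agreement map from several experts' binary contour masks for the same image slice, in the spirit of the heatmap annotation in mock-up 3a; the computation is an illustrative example under the assumption that contours are available as same-shaped binary masks, not the authors' implementation.

```python
# Minimal sketch: aggregating experts' contours on one slice into an agreement map.
import numpy as np

def agreement_map(expert_masks):
    """Fraction of experts who included each pixel in their contour."""
    stacked = np.stack(expert_masks, axis=0).astype(float)
    return stacked.mean(axis=0)  # values near 0.5 mark regions of disagreement

# Three hypothetical 4x4 expert masks for the same slice.
masks = [np.zeros((4, 4), dtype=int) for _ in range(3)]
masks[0][1:3, 1:3] = 1
masks[1][1:3, 1:4] = 1
masks[2][0:3, 1:3] = 1

heat = agreement_map(masks)
disputed = np.argwhere((heat > 0) & (heat < 1))  # pixels on which experts disagree
print(heat)
print(disputed)
```

Surfacing the disputed pixels, rather than only a consensus contour, is what creates the material for the kind of group deliberation described above.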
An important sociotechnical consideration in collecting variability — according to Ackerman's list of challenges for computer-supported cooperative work [2] — is critical mass, and specifically in healthcare, the scarcity of highly skilled physicians. Critical mass is the idea that a certain threshold of participants is required for the success of a social movement [59], and it can affect the perceived usefulness and acceptability of sociotechnical systems [20, 36, 55]. Attracting radiation oncologists to contribute to a diverse collection of contours might be challenging due to the lack of critical mass, especially given the already small number of physicians in this field. Careful design of cooperative contouring systems that incorporate elements of the Technology Acceptance Model [16] can enhance user adoption and address critical mass: for instance, since perceived critical mass (e.g., through personal interactions) can improve system acceptability [53], feedback solutions that leverage collections of contours can start by advertising predominantly to major medical hubs, such as medical schools and oncology clinics.
Providing in-situ and anchored resources to enhance asynchronous faculty feedback — As noted in the interview sessions, a prominent method of feedback exchange is providing solution contours along with additional text-based comments sent separately via email. However, this method can pose learning challenges given the disconnect between the context of contouring (via the clinical tools) and that of the feedback (via email).
The disjoint set of modalities (between the contour solution and the textual feedback) can hinder establishing common ground and increase the joint communicative effort required of interlocutors [10, 11]. Prior work on facilitating visual/spatial referencing has produced higher-quality comments [28], lowered confusion [97], and increased satisfaction [54, 100]. Leveraging the unique characteristics of medical images and spatially anchoring faculty's comments to specific image slices is especially beneficial in contouring residency, where, due to the limited availability of expert faculty with the dual roles of clinician and teacher, asynchronous feedback is likely to remain a prominent training method. An example solution appears in one of the sketches in the survey study (S4 in Figure 6), in which hints are displayed on top of medical images with arrows pointing to specific regions.
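As an illustration of what such anchoring might look like at the data level, the sketch below stores a faculty comment together with the case, slice, and image region it refers to; the structure and field names are assumptions for illustration, not a specification of the mock-up.

```python
# Minimal sketch of anchoring an asynchronous faculty comment to a specific image
# slice and region, instead of sending it in a separate email. Fields are illustrative.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AnchoredComment:
    case_id: str
    slice_index: int                  # which image slice the comment refers to
    region: Tuple[int, int, int, int] # (x, y, width, height) box around the area
    text: str                         # the faculty's hint or correction
    author: str

comment = AnchoredComment(
    case_id="case_047",
    slice_index=32,
    region=(118, 74, 40, 28),
    text="Extend the contour to include this adjacent region.",
    author="attending_faculty_01",
)
```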
Feedback type and presentation can be adjusted to reflect the differing goals of novice and experienced residents. As discussed by Ackerman [2], this is an important consideration for increasing the feasibility of computer-supported cooperative systems. Prior research has demonstrated that members of organizations can have differing or sometimes conflicting goals, which can stem from differences in knowledge, meanings, and histories [38, 41, 82]. In contouring education, while targeted and anchored feedback can especially help new residents — who might struggle with region detection and fundamental contouring procedures — experienced learners might benefit more from holistic and diverse feedback. Healthcare training tools should account for the varying goals and experience levels of learners to provide effective feedback and avoid disrupting the learning workflow.