1. Table 1: Only 31 of 60 participants (51.7%) completed the survey, yet you report percentages as if they represent the entire cohort. Moreover, 42% of respondents did not disclose POLAR4/TUNDRA data. How can you claim the experience was largely accessed by students from widening participation backgrounds when nearly half of your sample has unknown socioeconomic status, and you have no data on the 29 non-respondents who may differ systematically?
2. Figure 5: Figure 5A shows that 100% of students reported enjoying taking part (n=31), but the Results section states that 100% reported a positive learning experience and liked taking part. Which is it? "Liking" and "positive learning experience" are distinct constructs. Please provide the exact survey questions and their response distributions, not conflated aggregate metrics.
3. Figure 2: You describe three patients awaiting a liver transplant, but provide only one patient's history (a 66-year-old man with alcoholic liver disease, depression, and no social support). What were the other two patients' profiles? Without this information, readers cannot evaluate whether the ethical dilemma was genuinely balanced or biased toward a predetermined "correct" answer.
4. Definition of "traditional dyadic styles" (Results, 91% agreement): This term is never defined. Do you mean one-on-one didactic teaching? Small-group work? Standard lectures? Comparing simulations to an undefined comparator renders this statistic meaningless.
5. Selection bias in participant recruitment: The simulations were open only to GEMMS-PA mentorship group members, i.e., students who had already expressed interest in patient-facing careers. Your conclusion that simulations should be embedded into Biomedical Sciences curricula ignores the fact that your sample is not representative of all Biomedical Science students, many of whom pursue non-clinical careers (e.g., pathology laboratory work, research).