The paper presents a structured decision-making approach but has several critical weaknesses. The justification for choosing PIPRECIA over established methods such as AHP and TOPSIS lacks empirical validation, and the expert panel is too small to support reliable conclusions. The study also does not adequately account for real-world variation across educational environments, which limits its practical applicability. A more rigorous validation process, a larger and more diverse expert panel, and broader consideration of institutional differences would substantially strengthen the methodological robustness, clarity, and generalizability of the findings.

The concerns about the justification for PIPRECIA, the small expert panel, and real-world applicability are valid. A more detailed comparison with AHP and TOPSIS, together with a larger and more diverse expert panel, would enhance the study's validity and practical relevance.
I also have a question: could the authors clarify how the weighting process in PIPRECIA accounts for potential biases introduced by the small expert panel? Were any consistency checks or validation steps performed to ensure robustness?
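To make the question concrete, below is a minimal sketch of the standard PIPRECIA weight derivation as it is usually presented in the literature (k_j = 2 - s_j with k_1 = 1, q_j = q_{j-1} / k_j, w_j = q_j / sum of q). I am assuming the authors follow this formulation; the function name and the sample panel values are purely illustrative and not taken from the paper.

```python
# Illustrative sketch of standard PIPRECIA weighting, assuming the usual
# formulation: each expert rates criterion j relative to criterion j-1
# with s_j (>1 more important, <1 less important, =1 equally important).
# All names and values here are hypothetical, not from the paper.

def piprecia_weights(s):
    """Compute PIPRECIA weights from one expert's relative assessments.

    s: list of s_j values for criteria 2..n (criterion 1 has no s value).
    Returns the normalized weight vector w of length n.
    """
    k = [1.0] + [2.0 - sj for sj in s]   # k_1 = 1, k_j = 2 - s_j
    q = [1.0]                            # q_1 = 1
    for kj in k[1:]:
        q.append(q[-1] / kj)             # q_j = q_{j-1} / k_j
    total = sum(q)
    return [qj / total for qj in q]      # w_j = q_j / sum(q)

# Hypothetical panel: three experts assessing four criteria.
panel = [
    [1.2, 0.9, 1.1],
    [1.1, 1.0, 0.8],
    [0.7, 1.3, 1.2],
]
per_expert = [piprecia_weights(s) for s in panel]
# Simple averaging across experts, one common aggregation choice.
avg = [sum(col) / len(col) for col in zip(*per_expert)]
print(avg)
```

With a panel this small, a single outlying s_j assessment can shift the averaged weights noticeably, which is precisely the bias concern raised above; this is why I ask whether any consistency or sensitivity checks were applied before aggregation.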