The authors say they identified 26 SSH disciplines and connected them to five themes (Contextualizing, Facilitating, etc.) and then to the IEA steps through consensus achieved during their workshop discussions.
This seems like a major methodological weakness. My question is: Was this “consensus” merely agreement among the authors themselves, who, although multidisciplinary, remain a limited group?
If so, isn’t there a substantial risk of selection bias and a kind of “echo chamber” effect? The entire framework they build (Tables 1 & 2, Figure 3), which is the core of their contribution, rests on this internal agreement. A wider panel of SSH experts might have categorized the disciplines, or their relevance to the IEA steps, quite differently.
For example, why is “Anthropology” primarily tagged as “Facilitating” rather than “Contextualizing”? Why is “Economics” only “Evaluating” and not also “Anticipating”? Without a more robust, transparent, and inclusive method for reaching them, these classifications feel somewhat arbitrary.
So, to put it simply: Did the process of creating this foundational framework rely too heavily on the internal opinions of the co-authors, potentially compromising its objectivity and generalizability? If the answer is “yes,” then the entire proposed mapping of which SSH discipline helps with what, and when, may rest on a shaky foundation.