In the section presenting stakeholder preferences for each intervention (Figure 3), the number of respondents varies considerably across interventions. For example, only 17 participants rated the "on-farm tools" option (Intervention D), whereas 35 rated "message framing" (Intervention A).

This raises a comparability concern: mean ratings based on 17 respondents cannot be directly compared with ratings based on 35. Had all 35 participants rated every intervention, the results could differ, and the top-ranked interventions might change.
It is therefore unclear whether the reported ranking reflects the preferences of the stakeholder group as a whole, or is partly an artifact of different subsets of respondents rating different interventions. The 17 respondents who rated Intervention D may be a self-selected group with systematically different views.

My question is: why were the same stakeholders not asked to rate every intervention? Without an explanation, or an analysis restricted to respondents who rated all interventions, this imbalance weakens the paper's conclusions about which interventions are most preferred.
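The concern can be illustrated with a small simulation on entirely hypothetical data (the rating scale, values, and respondent counts below are invented for illustration, not taken from the paper). It compares the mean rating Intervention D would receive from all 35 stakeholders against the mean observed from a random subset of 17, and counts how often the subset alone would flip D's rank relative to A:

```python
import random

random.seed(0)

# Hypothetical 1-5 ratings: all 35 stakeholders rate Intervention A,
# but (as in Figure 3) only 17 of them rate Intervention D.
ratings_a = [random.randint(1, 5) for _ in range(35)]
ratings_d = [random.randint(1, 5) for _ in range(35)]  # latent full-sample ratings for D

mean_a = sum(ratings_a) / 35
mean_d_full = sum(ratings_d) / 35  # what we'd see if everyone had rated D

# How often does rating D with only 17 respondents flip its rank vs. A?
trials = 2000
flips = 0
for _ in range(trials):
    subset = random.sample(range(35), 17)           # the 17 who happened to rate D
    mean_d_obs = sum(ratings_d[i] for i in subset) / 17
    if (mean_d_obs > mean_a) != (mean_d_full > mean_a):
        flips += 1

print(f"A, all 35 respondents:    {mean_a:.2f}")
print(f"D, all 35 (hypothetical): {mean_d_full:.2f}")
print(f"rank flipped in {flips / trials:.0%} of 17-person subsamples")
```

Even with a purely random subset, the observed 17-person mean regularly lands on the wrong side of A's mean; if the 17 are self-selected rather than random, the distortion could be larger still.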