1. You claim Scopus provides balanced representation, but excluding Web of Science and Google Scholar means you have systematically missed a substantial body of relevant work. This is not a minor oversight: it biases your entire analysis toward journals that happen to be indexed in Scopus. How can you claim to map the "research landscape" when you have only examined one corner of it?
2. Looking at your methodology section, it is unclear what search terms you actually used. You mention "values" and "consumer behavior" but do not provide the exact Boolean string. Without this transparency, the study is not reproducible. Why were terms such as "ethical consumption," "pro-environmental behavior," or specific value frameworks not included? This is a significant gap in coverage.
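To illustrate the level of detail expected, a fully reported query would look something like the following. This is a hypothetical string for illustration only; the actual terms are the authors' to report:

```
TITLE-ABS-KEY ( "consumer behavio*"
  AND ( "values" OR "ethical consumption" OR "pro-environmental behavio*" ) )
```

Reporting the string verbatim, together with the database, date of search, and any document-type or language filters, is the minimum needed for reproducibility.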
3. You excluded 13 articles through manual screening but provide no detail on why. What were your inclusion/exclusion criteria? Reporting these is a basic requirement (a PRISMA-style flow diagram would be standard here). Without this transparency, readers have no way of knowing whether the dataset was cherry-picked.
4. You rank authors by publication count, yet Spaulding (12 papers, 90 citations) sits above Nguyen (4 papers, 499 citations). Publication volume without citation impact tells us nothing about actual influence; at minimum, report citations per paper or the h-index alongside raw counts. As it stands, this is poor bibliometric practice.
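The problem is easy to demonstrate with the two figures cited above. A minimal sketch (citations per paper as one simple impact-normalized measure; the author names and counts are taken from the review comment):

```python
# Ranking authors by citations per paper instead of raw publication count,
# using the two example figures cited in the review.
authors = {
    "Spaulding": {"papers": 12, "citations": 90},
    "Nguyen": {"papers": 4, "citations": 499},
}

# Citations per paper: a simple impact-normalized measure.
impact = {name: d["citations"] / d["papers"] for name, d in authors.items()}

ranked = sorted(impact, key=impact.get, reverse=True)
print(ranked)  # Nguyen (124.75 citations/paper) now outranks Spaulding (7.5)
```

Under any citation-normalized measure the ordering reverses, which is exactly why a volume-only ranking is misleading.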