I would argue that not all research is given the same credit. This may sound like sour grapes, since I mostly worked on projects that tested ideas in applied settings, but I still believe that basic research is often more respected and certainly easier than applied research. While research that carefully controls all but a few variables and randomly assigns participants to treatments is the best approach for discovering new principles, that is not the situation practitioners face. My guess is that many valid principles identify factors that, even when addressed, are not sufficient to overcome the variability in learner samples and the messiness of different application attempts well enough to produce consistent positive results. Investing your time evaluating “proven” principles in applied situations tends to mean spending more time on a given project and more frequently producing studies with nothing of statistical significance. If this is your thing, you are simply less likely to generate as many publishable studies as those who focus on cleaner controlled studies.
I have been reading a book (Everyday Chaos) that seems to me to take a similar position. The author, David Weinberger, uses examples from big data and A/B testing to argue that a focus on simple causes may be fruitless. Technological approaches that search for patterns in large data sets can generate useful strategies that are difficult, and sometimes impossible, to explain. Whatever works for a given large dataset may not work for a different dataset.
A recent EdSurge article describes a new Department of Education grant competition focused on the type of issue I have described here. What is described as implementation science involves the study of variations that affect the efficacy of applications:
“The agency kicked off a new research competition to better understand how technology programs that IES previously deemed effective can perform in specific but varied settings, from different geographic regions to different populations of learners, educators and schools.”
Thinking about what it would take to achieve the goals of this grant program, I can think of no way I could have been in a situation to participate in such research. On the surface at least, this work would seem to require access to a variety of settings and educators willing to attempt a similar tactic. Meta-analyses attempt to do this after the fact, but designing such an approach up front will take programs with tremendous resources.