One has more “good ideas” than one can cope with – especially early in the morning after the first cup of coffee or late at night as the last glass of whisky slowly warms in one’s hand. How does one separate the wheat from the chaff?
Here is a suggestion: run a randomized controlled trial (RCT) – it might contribute to the answer. In such an experiment some participants (chosen at random) get the benefit of the idea, while others do not. Compare the two groups and see whether the idea has made a difference (oh Gods of statistics, pardon me this gross oversimplification).
One does not often like this kind of cautionary instruction: “verify first”. Aficionados call it “evidence-based” knowledge. One is SO convinced that one’s own plausible idea is good; as a “true believer” one wants to rush ahead and implement it. The truth of the matter is that only after a randomized trial does one know (a bit more). So many biases beset us that one must be skeptical of one’s intuitions – particularly when doing “good”, and we all want to do “good”, don’t we? Intent dislodges outcome. And we don’t want to verify our intuition, lest it prove to be an illusion…
“Evidence-based” development aid is emerging as a movement – or a fad (we don’t know yet). Some researchers call for caution, but the debate is healthy. “Evidence-based” medical practice (as against pharmacology) is slowly – some would say all too slowly – emerging alongside medical experience.
A word of caution – I’m not saying that RCTs will yield the “truth”, which in my view will remain elusive. Things are always more complex than we think, and one explanation is likely to hide another. RCTs, however, help clear away illusions and force us to face up to our errors.
Thinking in “small steps” is the perfect context in which to run RCTs. When the available means are limited compared to the objective, running randomized trials comes almost for free – the only added cost is that of the experimental design.
Assume you have invented a solar cooker that will save millions of lives now threatened by respiratory disease because food is cooked inside the smoky hut over dung patties or green wood. You don’t have enough cookers anyway. They may be distributed in such a way that an RCT can be constructed to find out whether they’ve made a difference.
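For the statistically curious, the logic of such a distribution can be sketched in a few lines of Python. Everything here is hypothetical – the number of households, the outcome measure, and the simulated effect are invented for illustration, not drawn from any real trial:

```python
import random
import statistics

random.seed(42)  # fix the randomness so the illustration is reproducible

# Hypothetical setting: 200 eligible households, only 100 cookers available.
households = list(range(200))
random.shuffle(households)                # random assignment is the crux
treatment = set(households[:100])         # these receive a cooker
control = set(households[100:])           # these do not (in this round)

# Hypothetical outcome: days with respiratory symptoms per month.
# We simulate a true cooker effect of about -3 days.
def observe(h):
    base = random.gauss(10, 2)
    return base - 3 if h in treatment else base

outcomes = {h: observe(h) for h in households}

treat_mean = statistics.mean(outcomes[h] for h in treatment)
ctrl_mean = statistics.mean(outcomes[h] for h in control)

print(f"treatment mean: {treat_mean:.1f} symptom-days")
print(f"control mean:   {ctrl_mean:.1f} symptom-days")
print(f"estimated effect: {treat_mean - ctrl_mean:.1f} symptom-days")
```

Because the shuffle makes the two groups comparable on average, the difference in means estimates the cooker’s effect – which is precisely what no amount of plausible argument can deliver.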
Carrying out the test and the statistical analysis is the easy part. The difficulty lies in the design – in asking the deep question that identifies what causes what. One needs a lot of imagination, cleverness, and experience. I’d say that this is a matter of culture (rather than of policy).
Culture? Yes – let’s start with a free-wheeling way of doing things where the statistician and the expert learn (and enjoy) working together. One might think of a “Club of Randomistas” – where the issues are discussed freely, where project leaders may ask statisticians for help, and statisticians may learn lessons from the field. It could become a nodal point between disciplines. Leadership in RCT-based knowledge could emerge.
Over time this process might move from a position of “leadership” – where a new idea is advocated and applied on a voluntary basis – to one of “authority” – where most ideas have to pass muster under a policy of establishing “best practice”. I highlight the eventual tension: as the framework of the discussion changes, evidence-based experience inevitably becomes policy and “means” become the “goal”. It can’t be helped.
How can we create such a culture? I’ve indicated the first step: create an (informal) institution, a place where people can interact around a common intentionality – to know more. Every time we pick up the phone or write an e-mail to ask a colleague for advice, we make use of such informal institutions based on reciprocity or altruism. Formalizing this process and making it transparent increases its efficiency. An example could be a “liberty wall” within an organization, where people can post questions and get answers – it beats the water fountain down the corridor or huddling for a smoke outside the building.
Incentives may be provided to cover the cost of the RCT – and something more. Google famously favors “positive deviance” by giving its staff 20% of “own time”. In the beginning RCTs may be a way of using such “free time”, evolving further as the approach proves its mettle.
By putting the matter in cultural and “bottom-up”, rather than administrative and “top-down”, terms, one deliberately slows down the transition from allowing positive deviance and experience to emerge to the imposition of “best practice”. One creates space for experimentation and evolution, as well as a culture of continuous upgrading as we go along.
One key issue I’d like to highlight before closing, for it is at the forefront of the whole discussion on the use of RCTs – in fact it is often the first issue quoted: “Arbitrarily (randomly) offering a service to some people and not others is immoral and goes against the professional grain of the people providing the service. The randomistas accommodate this concern by only randomizing in less-fraught ways.” (see ROODMAN op. cit.)
We have become ethically sensitive – sometimes to a fault. This is an area where we may be going to excess and tying ourselves up in ethical knots of our own making. In an RCT we don’t know beforehand whether our measure is going to be effective or not. We may have a hunch, a hope or an argument from plausibility – but no certainty. That’s why we carry out an RCT in the first place. All participants thus stand on an equal footing of uncertainty before the experiment, and I see no major ethical problem here (with nuances, of course). Please note that run-of-the-mill “medical experience” is never subjected to ethical tests, unless it is malpractice.
Let’s look at another instance. If I ride my hunch and run a limited project which for budgetary reasons helps only one group of recipients – the standard procedure in most situations – this raises no ethical qualms. If I want to test my hunch, however, and add to the “beneficiaries” some further participants who are included/excluded in order to obtain a baseline for comparison, this might be considered “unethical” because I withhold from the latter the benefits of the conjecture I want to test. But I am treating just as many beneficiaries in both cases. The “luckless” participants would have remained without treatment in any case (albeit no one would have noticed). There seems to me to be a hidden bias here against advancing toward the truth.
What do you think?
 See e.g.: Daniel KAHNEMAN (2011): Thinking, Fast and Slow. Allen Lane, London.
 Why do projects fail? Learning how to prevent project failure. http://www.mindtools.com/pages/article/newPPM_58.htm
 See e.g. Anders OLOFSGÅRD (2012): “The Politics of Aid Effectiveness: Why Better Tools Can Make for Worse Outcomes,” SITE Working Paper Series 16, Stockholm Institute of Transition Economics, Stockholm School of Economics. Also: David ROODMAN (2009): The rapid rise of the Randomistas and the trouble with RCTs. http://blogs.cgdev.org/open_book/2009/03/the-rapid-rise-of-the-randomis.php
 I’ve touched upon this in my 150 already.
 See e.g.: Steven LEVITT – Stephen J. DUBNER (2005): Freakonomics. A rogue economist explores the hidden side of everything. Allen Lane, London.