Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/81070
Year of publication:
2012
Series/Report no.:
WIDER Working Paper No. 2012/104
Publisher:
The United Nations University World Institute for Development Economics Research (UNU-WIDER), Helsinki
Abstract:
There is an inherent tension between implementing organizations - which have specific objectives and narrow missions and mandates - and executive organizations - which provide resources to multiple implementing organizations. Ministries of finance/planning/budgeting allocate across ministries and across projects/programmes within ministries, development organizations allocate across sectors (and countries), and foundations or philanthropies allocate across programmes/grantees. Implementing organizations typically try to do the best they can with the funds they have and to attract more resources, while executive organizations have to decide what and whom to fund. Monitoring and Evaluation (M&E) has always been an element of the accountability of implementing organizations to their funders. There has been a recent trend towards much greater rigor in evaluations to isolate the causal impacts of projects and programmes, and towards more 'evidence based' approaches to accountability and budget allocations. Here we extend the basic idea of rigorous impact evaluation - the use of a valid counter-factual to make judgments about causality - to emphasize that the techniques of impact evaluation can be directly useful to implementing organizations (as opposed to impact evaluation being seen by implementing organizations only as an external threat to their funding). We introduce structured experiential learning (which we add to M&E to get MeE), which allows implementing agencies to actively and rigorously search across alternative project designs, using monitoring data that provide real-time performance information with direct feedback into the decision loops of project design and implementation. Our argument is that within-project variations in design can serve as their own counter-factual, which dramatically reduces the incremental cost of evaluation and increases the direct usefulness of evaluation to implementing agencies. The right combination of M, e, and E provides the right space for innovation and organizational capability building while at the same time providing accountability and an evidence base for funding agencies.
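To make the abstract's idea of structured experiential learning concrete, the minimal Python sketch below shows one way it could be operationalized: several design variants of a single project are fielded, monitoring data on each variant feed back into the choice of which variant to field next, and the variants serve as one another's counterfactuals. The variant names, the simulated outcomes, and the epsilon-greedy rule are all illustrative assumptions, not the paper's specification.

import random

# Hypothetical design variants of one project (e.g. alternative delivery models).
ARMS = ["design_A", "design_B", "design_C"]

def simulate_outcome(arm: str) -> float:
    """Stand-in for monitoring data ('M'): a noisy performance measure
    observed for whichever design variant was fielded. The effect sizes
    are invented for illustration."""
    true_effect = {"design_A": 0.30, "design_B": 0.45, "design_C": 0.25}[arm]
    return true_effect + random.gauss(0, 0.1)

def experiential_learning(rounds: int = 500, epsilon: float = 0.1) -> dict:
    """Epsilon-greedy loop (the 'e' in MeE, as sketched here): mostly field
    the best-performing design so far, occasionally try an alternative,
    and update performance estimates in real time."""
    counts = {a: 0 for a in ARMS}
    means = {a: 0.0 for a in ARMS}
    for _ in range(rounds):
        if random.random() < epsilon or min(counts.values()) == 0:
            arm = random.choice(ARMS)        # explore an alternative design
        else:
            arm = max(means, key=means.get)  # exploit the current best design
        outcome = simulate_outcome(arm)
        counts[arm] += 1
        means[arm] += (outcome - means[arm]) / counts[arm]  # running mean

    return means

if __name__ == "__main__":
    estimates = experiential_learning()
    # Within-project contrasts: each variant's estimate is a counterfactual
    # benchmark for the others, at little incremental evaluation cost.
    for arm, est in sorted(estimates.items(), key=lambda kv: -kv[1]):
        print(f"{arm}: estimated performance {est:.2f}")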
Keywords:
evaluation
monitoring
learning
experimentation
implementation
feedback loops
JEL: 
H43
L30
O20
ISBN: 
978-92-9230-570-3
Document type:
Working Paper

File(s): 2.4 MB





Publications in EconStor are protected by copyright.