"This paper argues that the problem of impact evaluation in media assistance is more than a simple issue of methods, and outlines three underlying tensions and challenges that stifle the implementation of effective practices in media assistance evaluation. First, serious conceptual ambiguities affect evaluation design. Second, bureaucratic systems and imperatives often drive evaluation practices, reducing their utility and richness. Third, the search for an ultimate method or toolkit for media assistance evaluation tends to overlook the complex epistemological and political undercurrents of the evaluation discipline, which can lead to methods being used without consideration of their ontological implications. Only if these contextual factors are known and understood can effective evaluations be designed that meet all stakeholders' needs." (Abstract)