
flexible and community-driven strategies.

Disagreements about the interpretation of results from community-based trials have been the basis for recommendations to expand the scope of evaluation methods for community programmes.4 These recommendations need to be supported by parallel developments in the criteria used to appraise the quality of evidence on public health interventions.

It has been proposed that evaluation designs should be more prudently and strategically matched to a programme’s stage of development and to the available evaluation resources.6 Expensive randomised trial designs should be used only after research using simpler and cheaper designs has yielded satisfactory results regarding the feasibility of the intervention. Thus an RCT design may be best used to test a causal hypothesis after a satisfactory pre-post single-group study has been conducted, and assurance has been obtained that the measuring instruments satisfactorily capture programme implementation processes and outcomes.4
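To make the contrast between the two stages concrete, the following is a minimal sketch of the analyses each design supports: a paired pre-post comparison within one cohort, and the between-arm comparison an RCT would permit. All data are simulated and purely illustrative; they are not drawn from any study cited here.

```python
# Minimal sketch: the cheaper pre-post single-group analysis versus the
# two-arm comparison an RCT supports. Data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# --- Pre-post single-group design: one cohort measured before and after ---
pre = rng.normal(loc=50, scale=10, size=120)        # baseline scores
post = pre + rng.normal(loc=3, scale=8, size=120)   # follow-up scores
t_paired, p_paired = stats.ttest_rel(post, pre)     # paired comparison
print(f"Pre-post: mean change = {np.mean(post - pre):.2f}, p = {p_paired:.3f}")

# --- RCT design: participants randomised to intervention or control ---
control = rng.normal(loc=50, scale=10, size=120)
intervention = rng.normal(loc=53, scale=10, size=120)
t_ind, p_ind = stats.ttest_ind(intervention, control)  # between-group comparison
print(f"RCT: mean difference = {np.mean(intervention) - np.mean(control):.2f}, "
      f"p = {p_ind:.3f}")
```

The pre-post comparison is cheaper and establishes whether the intervention and its measurement instruments are feasible, but it cannot separate the intervention effect from secular trends; hence its place earlier in the sequence, before the more expensive randomised test of the causal hypothesis.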

Specification of the theoretical basis of the intervention can also improve the credibility of outcome measures, and accords with a trend towards making the hypotheses and assumptions underpinning public health interventions more explicit. Intervention theories should be explicit and plausible. Explicit theories allow us to determine whether they are commensurable with the impact and outcome measures that have been adopted to evaluate that intervention, and whether an appropriate method was used to analyse those measures. The trend towards identifying the anticipated causal pathway of an intervention (the “mode of action”) is redressing the pragmatic “black box” use of epidemiology that placed more weight on research methods and outcomes than on intervention theory.5
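One way to operationalise this is to write the assumed causal pathway down as an explicit structure and check that each step has a corresponding measure. The sketch below is illustrative only; the pathway steps and measure names are hypothetical examples, not drawn from the article.

```python
# Illustrative sketch: making an intervention's assumed causal pathway explicit
# and flagging steps that the evaluation design does not measure.
# Step and measure names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PathwayStep:
    name: str                # step in the assumed causal chain
    measure: Optional[str]   # instrument or indicator intended to capture it

pathway = [
    PathwayStep("programme reach", "attendance records"),
    PathwayStep("knowledge change", "pre/post questionnaire"),
    PathwayStep("behaviour change", "self-reported diary"),
    PathwayStep("health outcome", None),  # no measure specified yet
]

# Flag any step of the theory without a planned measure, i.e. a point where
# the theory and the adopted outcome measures are not commensurable.
unmeasured = [step.name for step in pathway if step.measure is None]
if unmeasured:
    print("Theory steps without a planned measure:", ", ".join(unmeasured))
```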

Multi-dimensional approaches are available for evaluating outcomes research.5 Table 1 includes a recent guide for assessing evidence on intervention effectiveness on three dimensions: the strength of evidence, which is determined by a combination of the study design (level), methodological quality and statistical precision; the magnitude of the measured effects; and the relevance of the measured effects (as observed in the evaluation) to the implementation context. Such approaches are in tune with the epidemiological tradition of using multiple criteria to assess causal associations or causal inference (also listed in table 1). For the purpose of evaluating evidence on public health interventions, such an approach could be expanded to consider issues of intervention theory, intervention implementation, and monitoring in the evaluation process.
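As a rough illustration of how these three dimensions might be recorded side by side for a single study, the sketch below defines a simple assessment record. The grading labels and the example values are invented for illustration and are not the criteria given in Table 1.

```python
# Illustrative sketch of the three assessment dimensions described above:
# strength of evidence, magnitude of effect, and relevance to the
# implementation context. Grading scales and values are hypothetical.
from dataclasses import dataclass

@dataclass
class EvidenceAssessment:
    design_level: str             # e.g. "cluster RCT", "pre-post single group"
    methodological_quality: str   # e.g. "high", "moderate", "low"
    statistical_precision: str    # e.g. "narrow CI", "wide CI"
    effect_magnitude: float       # measured effect size
    relevance: str                # fit with the intended implementation context

    def summary(self) -> str:
        strength = (f"{self.design_level} / {self.methodological_quality} / "
                    f"{self.statistical_precision}")
        return (f"Strength of evidence: {strength}; "
                f"magnitude: {self.effect_magnitude}; "
                f"relevance: {self.relevance}")

example = EvidenceAssessment(
    design_level="cluster RCT",
    methodological_quality="moderate",
    statistical_precision="wide confidence interval",
    effect_magnitude=0.15,
    relevance="similar population, different delivery setting",
)
print(example.summary())
```

Keeping the three dimensions as separate fields, rather than collapsing them into a single grade, mirrors the point made above: design level, methodological quality, effect size, and relevance each carry distinct information about how far the evidence supports the intervention in a new context.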