For example, independent evaluations are conducted by institutions or individual experts from outside projects sponsored by the German Government. They include the evaluations conducted by the BMZ within the framework of its Central Evaluation Programme, quality assurance provided by independent auditors, and the final, ex-post and ongoing-project evaluations introduced by GTZ. The GTZ Evaluation Unit contracts independent research bodies each year to conduct these evaluations. Final evaluations take place shortly before or after the completion date of a project. Ex-post evaluations focus in particular on the sustainability of results and are conducted two to five years after project completion (Castañer, 2007).

In other countries, however, the most common method of "evaluation" consists simply of monitoring the labour market status and earnings of participants for a brief period following their spell on a programme. While this sort of exercise provides useful information, Martin (2000) warns that it cannot answer the vital question of whether the programme in question "worked" for participants (see the issue of learner outcomes mentioned above). The OECD reviewed the available evaluation literature in OECD (1993); this review was updated in Fay (1996) and again by Martin (2000). What does this body of work tell us about what works and what does not? At first sight, the bottom line from the OECD research on the effectiveness of ALMPs is not terribly encouraging: the track record of many active measures is mixed in terms of raising the future employment and earnings prospects of job seekers and producing benefits to society. As the OECD Jobs Study has stressed, more effective active policies are only one element in a comprehensive strategy of macroeconomic and microeconomic measures required to cut unemployment significantly.
Nonetheless, they remain a potentially important weapon in the fight against unemployment (Martin, 2000). Martin (2000) characterised the evaluation literature as follows. The "outcomes" are invariably expressed in terms of programme impacts on the future earnings and/or re-employment prospects of participants. There is also an issue about the scale of programmes, even those which appear to work: many programmes that have been evaluated rigorously tend to be small-scale, sometimes called "demonstration" programmes. While the evaluation literature tells us a lot about what works, it is not very instructive in answering other, equally important related questions, such as why certain programmes work for some groups and not for others, and in what circumstances. For example, do skill-enhancing activities, e.g. via classroom training and/or on-the-job training, work best on their own, or must they be combined with personal counselling, job-search assistance and mentoring services in order to work? Policy-makers want to know the answers to such questions, but the evidence is simply not there for the moment (Martin, 2000).