So there are three separate labeling frameworks for goals, objectives, measures, and indicators. And the IM on applying for a grant uses yet another framework. The slight differences among these are magnified in the amount of confusion they create in local programs.

6. There was a protracted debate between OMB and the Head Start Bureau, and within the HSB, about whether each local program should be allowed to come up with its own indicators for each of the child outcomes and other performance measures, or be required to use some new national system. Since there was no national system in place for developing agreed-upon measures -- or for finding and collecting the data the new measures required -- the HSB began backfilling on outcome measurement by using some of the findings from FACES, the national pilot research effort that was the precursor to the national impact evaluation, piloting its tests and methods. In FACES, teams of psychologists and testing experts visit about 40 local programs with about 3,200 children every three years and conduct extensive tests on the children. However, they do not track the same groups of children over time; every three years they take a “snapshot” of whichever children are there. Use of data from FACES was seen as a temporary way to have some credible data on child outcomes until a better system could be developed. The effect of this, however, was to shift the responsibility for collecting and assessing data from the local programs to a national evaluation contractor, and from Head Start staff to highly trained experts. At this point, the local programs began losing influence over the process of developing outcome measures -- because the level of training and expertise needed to administer the tests selected for producing the “surrogate” data and being used in FACES (Peabody this and that) was far beyond the staff capacity of local Head Start programs.
Only experts could administer these tests.

7. The software vendors like Kaplan and Creative Curriculum, meanwhile, were busily coming up with standardized systems keyed to their curricula and suggested activities. The programs that use these curricula do in fact get a way of measuring child outcomes -- but it is developed by a national software vendor, not by the local program. And these systems cover only the 8 child outcomes and maybe one or two others.

8. The local Head Start programs that did not use one of the standardized curricula were having great difficulty coming up with ways to measure outcomes. They were having the same kinds of problems measuring outcomes of all types that the CAAs were having measuring any outcomes, especially community outcomes and agency outcomes. In short, the local Head Start programs were all reinventing the wheel -- slowly and with difficulty.