Wednesday, August 30, 2006

PLS 595 JOURNAL ENTRY 1

The Carousel Center Evaluation Team held a mini-retreat last Monday, attended by Kevin Lee, Alyson Nowicki, Heather Sandala, David Timken, Kevin Tulley, Pat Jessup, and Lisa Savitts. The Team met downtown at the BB&T Building, courtesy of BB&T.

The Team met to discuss the basic issues of evaluability and, ideally, choose which type of evaluation to conduct. This was intended to be a working session with lots of give and take. The meeting started with welcome and introductions by Kevin. This was followed by my reading of last meeting's minutes and old business. This led into a discussion of the agenda for the day (see the attached agenda).

Our objectives for this meeting were:

  1. A list of target outcomes to measure
  2. Research questions to answer
  3. The type of evaluation to undertake
  4. A specification of the target population(s) of the study
  5. Selection of instruments
  6. An experimental design
  7. An updated timeline

Because of time constraints, we were unable to cover all of these issues as a group; however, with your help, Kevin and I were ultimately able to meet six of these seven objectives, which is a very successful result.

I reviewed the results of a national survey of CACs undertaken by the University of New Hampshire’s Crimes Against Children Research Center (see Cross and Jones’s PPT under the Files tab entitled “multi-site outcome survey results”). Since the Center aspires to attain full accreditation as a CAC, most of these outcomes would be crucial to accomplishing the Center’s mission. They are the starting point for designing this evaluation backward from its objectives. They will help form the questions we hope to answer about the Center’s operation and its interactions with its clients.


OUTCOMES
I then engaged the team in an exercise regarding the survey outcomes: the room was broken down into the five agencies represented, and the representatives of each agency were asked to prioritize the outcomes with respect to their own agency experiences and knowledge of the Center’s operations. These outcomes were classified according to whom they accrued: Child and Family, Agencies, and the Community. Then I brought all team members back together to discuss the agencies’ lists and create one master list. The resulting list of top outcomes of the Center’s operation, broken down by beneficiary, included:

Child and Family

  1. More thorough investigations
  2. Less distress
  3. More support and advocacy
  4. Promptness of response
  5. More available services
  6. Sense of justice

Agencies

  1. More thorough investigations
  2. More expertise
  3. More sharing of resources and interagency cooperation
  4. Increased information sharing
  5. Increased opportunities for joint training

Community

  1. Increased resources for child abuse response
  2. Increased child abuse prevention
  3. Increased public awareness of child abuse

QUESTIONS
These three categories of beneficiaries suggest a basis for comparison among how clients, other agencies, and citizens in general perceive the Center and its effectiveness. It was generally agreed that measuring some of these outcomes would be difficult. For example, how does one measure public awareness of child abuse? What kind of study could measure the Center’s impact on awareness? Furthermore, what basis for comparison is there to determine whether investigations are more thorough because of the Center and its Multi-Disciplinary Team (MDT)? Obviously, there would have to be agreement about how to define and operationalize the concept of “thoroughness,” which is not an easy thing to do.

With these outcomes in hand, we can begin asking the right questions, which will help us design or choose both the right type of evaluation and the right instruments. Furthermore, information shared by Pat has led me to believe that an impact evaluation is simply impossible given the lack of a significant population of non-referred cases of child abuse in New Hanover County. For the purposes of this evaluation, only cases from NHC can be considered. Pat stated that DSS is in the practice of referring all such cases. In other words, non-referral would be unconscionable. Furthermore, whatever clients do NOT follow through the referral process could be significantly different from those clients who do. That might tend to render comparisons between the two groups invalid. There is no way of knowing for sure.

Finally, the resulting sample sizes would simply be too small to be very meaningful. At its current average rate of referral intakes, the maximum number of cases from NHC we might expect with 100% participation is 20, far too few in the short term of this evaluation (two months) to measure impact. In fact, based on the reworked timeline, the maximum number of participants we might expect is 40. Furthermore, an outcome evaluation would require a much longer timeframe than two months. A program monitoring evaluation would also suggest an ongoing type of evaluation that would exceed the timeframe; yet it could reveal some of the targeted results in the short term. I spent a few minutes laying out these issues and defining the three different evaluation types under consideration.
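The sample-size arithmetic above can be sketched briefly. The figures in the entry imply a referral rate of roughly 10 NHC cases per month (20 cases over the two-month window), and the 40-case ceiling under the reworked timeline would then correspond to about four months; both the rate and the reworked duration are inferences from the entry's numbers, not official intake statistics.

```python
# Back-of-the-envelope check of the maximum achievable sample.
# ASSUMED: ~10 referrals/month, inferred from the 20-cases-in-two-months figure.
REFERRALS_PER_MONTH = 10

def max_sample(months, participation=1.0):
    """Largest sample obtainable in the window at a given participation rate."""
    return int(REFERRALS_PER_MONTH * months * participation)

print(max_sample(2))        # original two-month window, 100% participation
print(max_sample(4))        # assumed ~four-month reworked timeline
print(max_sample(2, 0.5))   # two months if only half of families participate
```

Even under the most optimistic assumption (full participation), the counts stay far below what an impact analysis would need.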


AN ANSWER
After all of these points were brought out in discussion, it was generally agreed that rather than approaching this evaluation as a one-shot deal, Kevin and his employees should consider the ongoing evaluation approach that program monitoring offers. Simply put, the Center would monitor and maintain a database of evaluation results in perpetuity or, at least, on a recurring basis. This information could then be used to track the Center staff’s progress in meeting set standards of performance. Those standards are established through a combination of feedback from the MDT and clients and the best practices of other CACs throughout the country, via the National Children’s Alliance and the DOJ’s standardized evaluation instruments.
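The monitoring idea described above, recording outcome measures each period and comparing them against set performance standards, can be illustrated with a minimal sketch. All of the measure names, threshold values, and the direction of each comparison below are hypothetical placeholders; the actual standards would come from MDT and client feedback and from national CAC best practices, as noted above.

```python
# Hypothetical performance standards: measure -> (threshold, direction).
# "max" means the result must not exceed the threshold; "min" means it
# must not fall below it. Values are illustrative, not real CAC standards.
STANDARDS = {
    "days_to_first_response": (3, "max"),   # e.g., respond within 3 days
    "client_satisfaction":    (4.0, "min"), # e.g., mean rating on a 1-5 scale
}

def missed_standards(period_results):
    """Return the measures that failed their standard in one monitoring period."""
    missed = []
    for measure, (threshold, direction) in STANDARDS.items():
        value = period_results[measure]
        if (direction == "max" and value > threshold) or \
           (direction == "min" and value < threshold):
            missed.append(measure)
    return missed

# One period's logged results: response time missed, satisfaction met.
period = {"days_to_first_response": 5, "client_satisfaction": 4.2}
print(missed_standards(period))
```

Run period after period, this kind of comparison is what lets the database reveal trends against the standards rather than a one-shot snapshot.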

Shortly thereafter, the general meeting of the team was adjourned. I stayed afterward to discuss timeline issues with Kevin and plan for the next meeting.