I am an activist who became a social scientist because I wanted to engage in more complex investigations of the systems that disenfranchise and marginalize vulnerable populations. Through these investigations, I believed I could gather the evidence to make compelling arguments about how systems could reduce their harm and more effectively fulfill their responsibility to ensure that all groups in society receive full, fair, and equal treatment. Over the course of my career I have been part of research and evaluation projects throughout the United States, mostly involving individuals with low incomes and low educational attainment who are people of color. While engaged in this research I often thought about the late New York Congresswoman Shirley Chisholm’s statement, “If they don’t give you a seat at the table, bring a folding chair.” I thought of my role on the various research and evaluation teams as occupying that folding chair for disenfranchised and marginalized populations who were not invited to the table. What I know now, however, is that occupying a folding chair at the table is not enough.
Several years ago I was the only person of color on a team of three evaluating a workforce development program in a midsized city in the midwestern United States. Although the city is predominantly white, most of the program participants were people of color, mostly Black and Latino, as were the administration and most of the program staff. Throughout the latter half of the evaluation, the program director told the evaluation project manager that she was not comfortable with how the evaluation was being framed. The evaluation manager explained to her that the evaluation was already in process and that the integrity of the research would be compromised if a different frame were employed at that point. While I was not part of the early conversations about the framing of the evaluation, nor was I in a decision-making position on the team, I believed my seat at the table could make a difference. I believed my participation could influence the data analysis and recommendations and ensure that the realities of the study population’s lived experience would be central to the study.
The actual result of the evaluation was a set of findings and recommendations that reproduced policies and practices aligned with best practices in program delivery, but not with the lived experience of the program participants.
One recommendation was that the program be more flexible; however, the evaluation did not consider what “flexible” meant to the program participants. It did not ask what a participant might have to give up in order to meet the program’s requirements, or weigh whether the tradeoff would be worth it for them in the context of their whole selves. For example, although the program curriculum was adjusted to be more flexible (e.g., participants were given more time to meet soft-skill training goals), other policies were left unchanged. The program’s strict requirement that participants be on time for classes, meetings, and counseling sessions created an undue burden for some of them, even as staff were reporting concerns about retention. Despite the more flexible curriculum, one particularly determined participant could not sustain her participation; the change did not increase her access to the program.
The participant, who had been living in a shelter, managed to acquire a bike to get between the shelter and the program. Throughout the fall she was on time and engaged in the program. Once daylight saving time ended, she felt vulnerable because she was riding back to the shelter from the program in the dark. She could not take public transportation because its very limited routes and hours of operation did not meet her needs. In addition, once winter began, despite wearing four coats, she was unable to navigate the rain, ice, and snow to arrive at the program on time; ultimately, she dropped out of the program.
Despite the program staff’s best intentions to prepare a very hard-to-employ population with remedial education, job readiness, and vocational training, meeting some program requirements cost participants more than dropping out did. Although program staff attempted to be more flexible in program delivery, our recommendations did not direct them to engage program participants in defining what a more flexible program would look like to them. While best practices serve a purpose, they are only effective if they can be operationalized within the messy reality of lived experience. Because program participants were not seen in the full frame of their lives, they were required to make unsustainable tradeoffs to participate in the program.
Evaluation is not inherently benign. Who is sitting at the table, not in a folding chair but in a decision-making chair, matters. How an evaluation is framed, and by whom, matters. Evaluators are the experts who lead the framing of the inquiry. Framing determines the question or set of questions on which the evaluation will focus. The questions are critical for determining the methodology. The methodology determines what the evaluators pay attention to. There are real consequences to how an evaluation is framed, how it is conducted, and how its evidence is marshalled; our findings are relied upon to make critical decisions about policy and practice. Failing to engage the study population meaningfully in framing can result in policies and practices that not only continue to focus on problems rather than people, but also continue to institutionalize injustice.
La Tonya Green, PhD, is FFI’s Director of Evidence and Knowledge. She is responsible for generating knowledge and evidence about the applicability and effectiveness of the Full Frame Approach and the Five Domains of Wellbeing.