Partnering with J-PAL North America: A Practitioner Perspective

Photo: Dirk Mathesius | CC BY-NC-ND
Featuring research by Matthew Notowidigdo.

Benefits Data Trust (BDT), a nonprofit partner, is collaborating with J-PAL North America to identify effective outreach strategies for enrolling low-income households in benefit programs. We asked Rachel Cahill, Director of Policy at BDT, a few questions about her experience partnering with J-PAL North America to design an evaluation that will answer important questions about BDT’s work.

What made you decide to partner with J-PAL North America?

I think there is real benefit to the partnership. We were already doing this work and had another evaluation of it, although not an RCT, and we were confident that our program worked to help low-income households apply for benefits. Really, we were seeking to answer a related question: is the intervention that we already believe works to increase SNAP take-up also having an impact on health outcomes?

We got connected through the Camden Coalition and began working with Amy [Finkelstein] and Matt [Notowidigdo], discussing what we already knew, to eventually arrive at the first-order research question in this evaluation: the effect of our program on SNAP take-up.

We didn’t begin with this particular research question. It was sort of a negotiation: Matt and Amy found that there really isn’t a lot of evidence on the effect of outreach and application assistance on benefits enrollment. We bought into that approach.

It was much more of a partnership to decide on the research question.

Can you discuss one or two examples of challenges in delivering your intervention in the context of a randomized evaluation and how you worked with J-PAL to overcome them?

There were various challenges. One of the nuances of the program is that we are using data that belongs to the Pennsylvania Department of Human Services (DHS). We had to broker a three-way agreement between BDT, MIT, and DHS, which was a very large barrier to overcome.

The existing Data Use Agreement allowed us to use DHS data for outreach but not for research. Ultimately we did overcome that barrier, but it delayed the launch of the evaluation.

Because SNAP is an entitlement program, doing an RCT meant developing a design in which the control group is really just a wait list. It would be much more straightforward to get at BDT’s core question, identifying the effect of the SNAP program itself, by randomly assigning some people to receive SNAP and others not to receive it. However, people who are eligible cannot be denied the program because it is an entitlement.

Instead, we had to think creatively and developed high-intensity, low-intensity, and control groups in an encouragement design. We also had to negotiate with MIT because we were seeking simplicity in the design. Initially, J-PAL proposed a dozen different types of outreach letters to explore the effect of many different variations on the outreach strategy. BDT explained that we can manage a lot of nuance, but there was a limit beyond which our ability to manage different treatment arms would decline.

There was a tradeoff: Do you keep the design simpler or do you have a dozen treatment arms and increase the probability that you make a mistake?
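For readers curious what a simple three-arm assignment like the one described above might look like in practice, here is a minimal, purely illustrative sketch. The arm names, household IDs, and even splits are assumptions for illustration, not BDT’s or J-PAL’s actual randomization protocol:

```python
import random

# Three study arms, as described above: two outreach intensities plus a
# control group that does not receive outreach during the study period.
ARMS = ["high_intensity", "low_intensity", "control"]

def assign_arms(household_ids, seed=2024):
    """Shuffle households and split them evenly across the three arms."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(household_ids)
    rng.shuffle(ids)
    return {hh: ARMS[i % len(ARMS)] for i, hh in enumerate(ids)}

# Example with made-up household IDs.
if __name__ == "__main__":
    households = [f"HH{n:04d}" for n in range(9)]
    for hh, arm in sorted(assign_arms(households).items()):
        print(hh, arm)
```

In a real evaluation the assignment would typically be stratified and documented in a pre-analysis plan, but the basic idea is the same: outreach intensity, not SNAP eligibility, is what gets randomized.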

What lessons or insights about your program have emerged from this partnership and this evaluation?

We have learned a lot about doing research. I joked with Amy that I had estimated I would spend 20 percent of my time on this project. She asked how I was going to spend that much time on it, and I told her it takes 20 percent of my time just to answer her questions!

To do it right, which J-PAL does, takes a lot of thought and planning. It really takes multiple people with different types of expertise. This requires a big resource commitment from the organization.

We also have to use some political capital with our state partners. There’s an opportunity cost there: you have a limited number of requests that you can make in a given time. Looking back on things now, I still would have done the evaluation, but I would have allocated more time and more resources.

For example, we thought BDT would be involved for the first 12 months, but we didn’t budget time for the data analysis because we figured that would be MIT’s role. We realized that the engagement doesn’t end when the 12 months end, and that we still have to follow up on things like data collection.

This experience will just make us smarter in doing future research.

There’s been tremendous value in terms of learning about our program.

We very quickly saw that the marketing letter formatted in a particular way for the evaluation generated an increased response rate of half a percentage point, which may seem small but is a lot in our field of work. This was significant enough that we didn’t even wait until the end of the RCT; we immediately began thinking about how to incorporate this letter design in states besides Pennsylvania.

The discipline of setting up an RCT was also helpful.

How will your organization use the knowledge generated by the evaluation?

Our hypothesis is that the high-touch intervention will increase take-up more than the light touch intervention and certainly more than the control group.

A common misperception in this field of social services is that just sending out a letter will be enough to increase take-up. This is founded on the premise that low take-up is just an issue of awareness, but we know that the enrollment process takes a lot more than awareness raising. Many of the people we talk to know that SNAP exists—they just can’t imagine going through all the paperwork and enrollment procedures to access the benefits they are eligible for. This is especially the case for the SNAP program, which has one of the more archaic enrollment processes.

The real game changer is whether we can demonstrate long-term outcomes on health. Amy and Matt believe that this particular evaluation is not sufficiently powered to detect those effects.
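For readers unfamiliar with what “powered” means here, a rough, purely illustrative power calculation might look like the sketch below. The baseline rate, effect size, significance level, and power target are made-up assumptions, not the study’s actual parameters:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical numbers only: sample size needed per arm to detect a
# 0.5 percentage-point change in an outcome with a 10 percent baseline rate,
# at 5 percent significance and 80 percent power.
effect = proportion_effectsize(0.105, 0.100)  # Cohen's h for the two proportions
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Households needed per arm: {n_per_arm:,.0f}")
```

Small effects on rare or noisy outcomes, like downstream health costs, require very large samples to detect, which is why a study can be well powered for take-up but underpowered for health effects.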

People are really thinking about investments in social services to decrease healthcare costs. Not to put it too bluntly, but that’s where the money is: healthcare is an area where the government is spending so much money. We think this is a really significant opportunity to generate definitive evidence on whether social services can prevent future health costs, rather than just having a hypothesis that social services might be helpful.

That’s really what we’re striving for, and we’re willing to go down the rabbit hole to figure that out.

I really like working with the MIT team. I would explain to any other nonprofit considering a randomized evaluation what a big deal an RCT is. It’s still a big challenge to get a nonprofit to go down that rabbit hole to answer really tough questions. With the current emphasis on rapid testing, people often don’t want to wait several years to see longer-term outcomes. Nonprofit staff often turn over within three years, so it can be hard to even have the same people working for the duration of the evaluation.

I don’t say this as a critique but as advice to J-PAL folks working with partners other than government: build transparency about the level of commitment that is required going into an evaluation. I was not aware of how much this would entail when BDT signed on.

Read partner testimonials from BDT and from the South Carolina Department of Health and Human Services here.
