Published Online: https://doi.org/10.1089/elj.2016.0365

Abstract

In recent years, the courts have invalidated a variety of campaign finance laws while simultaneously upholding disclosure requirements. Courts view disclosure as a less-restrictive means to root out corruption while critics claim that disclosure chills speech and deters political participation. Using individual-level contribution data from state elections between 2000 and 2008, we find that the speech-chilling effects of disclosure are negligible. On average, less than one donor per candidate is likely to stop contributing when the public visibility of campaign contributions increases. Moreover, we do not observe heterogeneous effects for small donors or ideological outliers despite an assumption in First Amendment jurisprudence that these donors are disproportionately affected by campaign finance regulations. In short, the argument that disclosure chills speech is not strongly supported by the data.

As a result of the wave of anticorruption reforms in the 1970s and technological advancements in the 2000s, governments in the United States make many proceedings, decisions, and other records easily accessible to anyone who is interested. As government transparency has increased, mandatory disclosure has also increased (Ben-Shahar and Schneider 2014). Political campaigns are no exception. States and the federal government mandate that candidates disclose both personal financial information and the sources of their campaign finances (Briffault 2010; Corrado 1997). The purpose of these disclosure laws is to provide the electorate with information about the sources of election-related spending and to prevent or expose political corruption. Although the courts have grown skeptical of limiting the sources of campaign funds in recent years, judges almost always uphold disclosure requirements despite challenges that transparency chills speech and deters political participation.

In Buckley v. Valeo the Supreme Court held that disclosure “appear[s] to be the least restrictive means of curbing the evils of campaign ignorance and corruption.”1 In McConnell v. FEC three justices who disagreed with the Court's opinion on certain regulations of soft money nonetheless voted to uphold disclosure and disclaimer requirements.2 In Citizens United3 the Court invalidated a federal ban on independent expenditures from a corporation's general treasury by a 5–4 vote yet agreed 8–1 that disclosure requirements for entities that fund independent electioneering communications are constitutionally valid under the First Amendment.4 Even more recently, lower courts have upheld various disclosure laws and practices in state and federal elections, e.g., The Real Truth About Abortion v. FEC,5 Center for Individual Freedom v. Madigan,6 Free Speech v. FEC,7 Center for Individual Freedom v. Tennant,8 ProtectMarriage.com-Yes on 8 v. Bowen,9 Committee for Justice and Fairness v. Arizona,10 Justice v. Hoseman,11 Delaware Strong Families v. Denn,12 and Van Hollen v. FEC.13 This repeated endorsement of disclosure by the courts has prompted interest groups, commissions, academics, and legislators to respond to the deregulation of federal campaign finance rules with calls for stricter disclosure laws (see, e.g., Hasen 2012; Briffault 2012b; Cain 2010; Briffault 2010).14

Despite a willingness to uphold disclosure laws as “a less restrictive alternative to more comprehensive regulations of speech,”15 the Supreme Court has consistently expressed a serious concern about laws that have the effect of chilling speech, particularly political speech. In the Court's view, disclosure generally does not chill speech in a way that violates the First Amendment. As long as disclosure has a “substantial relation” to a “sufficiently important government interest” it does not abridge the freedom of speech.16 Nevertheless, some argue that disclosure can chill political speech (McGeveran 2003; Gilbert 2012, 2013), and others believe it does (Samples 2010; Wang 2013). Conservative groups have objected to enhanced campaign finance disclosure by arguing that it increases the risk of harassment against those who donate to controversial candidates or causes (e.g., Messner 2009). Historically, the Supreme Court has been sympathetic to demonstrated claims of harassment, or fear of harassment, by donors to controversial candidates and causes. Where disclosure of contributions will subject contributors to the “reasonable probability” of “threats, harassment, or reprisals,” the First Amendment prohibits the government from compelling disclosures.17

As a general matter, courts have held that the benefits of transparency outweigh any alleged costs, though data quantifying the costs and benefits of disclosure have emerged more slowly than court opinions upholding disclosure (Levinson 2016). Moreover, there is no guarantee that courts will continue to uphold disclosure regulations (Shaw 2016). In this article, we use individual-level contribution data to quantify the impact of disclosure on political participation. We analyze elections at the state level where variation in disclosure rules and practices over time provide a natural setting to test our hypotheses. We find that contributors are only slightly less likely to contribute in future elections in states that increase the public visibility of campaign contributions, relative to contributors in states that do not change their disclosure laws or practices over the same time period. This “chilling” of speech amounts to a two percentage point decrease in future contributions, though in most of our models the estimates are indistinguishable from zero, with narrow confidence intervals around zero. Our findings have important implications for the jurisprudence on campaign finance disclosure. In addition to the negligible overall effect, we find no difference in the willingness to contribute among small donors or ideological outliers, despite a long-standing assumption that these groups are disproportionately affected by disclosure.

The Benefits and Costs of Disclosure

Giving money to a candidate for public office is one of many forms of political participation (Verba, Schlozman, and Brady 1995). The decision whether to vote, contact your representative, volunteer for a campaign, or give money to a candidate has traditionally been characterized as an individual-level calculation of expected costs and benefits (Ansolabehere, De Figueiredo, and Snyder 2003; Gerber and Lupia 1995). In recent years, scholars have also shown how structural factors, such as income inequality, affect political participation (Gilens 2014; Schlozman, Verba, and Brady 2013) and how social networks play a critical role in the decision to vote (Rolfe 2012). Whatever one's motivation for contributing to a political campaign, the disclosure of these contributions carries additional potential costs and benefits.

The benefits of disclosure are generally considered to be diffuse and to accrue to the public, while the costs are more personal to the individual and candidate. The public benefits of disclosure are “almost certainly overstated” (Briffault 2003), yet disclosure's potential advantages have allure. The two main governmental interests contemplated by Buckley and its progeny, disseminating information and combating corruption, are considered to accrue to society at large:

disclosure provides the electorate with information ‘as to where political campaign money comes from and how it is spent by the candidate’ in order to aid the voters in evaluating those who seek federal office. It allows voters to place each candidate in the political spectrum more precisely than is often possible solely on the basis of party labels and campaign speeches. The sources of a candidate's financial support also alert the voter to the interests to which a candidate is most likely to be responsive, and thus facilitate predictions of future performance in office … disclosure requirements deter actual corruption and avoid the appearance of corruption by exposing large contributions and expenditures to the light of publicity. This exposure may discourage those who would use money for improper purposes either before or after the election.18

Disclosure provides information. Voters might use campaign finance information as a heuristic, an informational shortcut that allows low-information voters to vote as if they had more “encyclopedic” knowledge (Lupia 1994). Quantifying the information benefit is difficult, though a few scholars are making headway (Primo 2013; Fortier and Malbin 2013; Carpenter 2009). For example, experimental evidence has shown that the effects of attack ads are neutralized when donors are revealed (Dowling and Wichowsky 2015). Other research reveals different ways that voters demand disclosure and punish anonymity (Dowling and Wichowsky 2013). Information about contributions also can be difficult to interpret (Sullivan 1998), though recent improvements to search and filtering functions in campaign finance databases make interpretation easier. As for the other contemplated benefit in Buckley, the anti-corruption benefit, most individual donations are small and unlikely to present opportunities for quid pro quo corruption (Bauer and Issacharoff 2015). Thus, most campaign contributions are likely better understood as a traditional form of political participation by donors rather than an investment in policy outcomes (Ansolabehere 2007).

Another potential benefit of disclosure is that it enables contributors to credibly signal their alignment with a candidate or platform (Gilbert 2013), a signal that can later be used to gain access to the elected official (Kalla and Broockman 2016). The availability of the signaling benefit varies with the strength of the disclosure regime, though candidates can always know who their contributors are19 regardless of whether they must disclose information about their contributors to the public.20

The cost of disclosure can be large and vary with the composition of the disclosure regime in which the contributions are made. Disclosure imposes privacy costs on individual contributors (La Raja 2014).21 Contributors might worry that exposure will hurt their business, that the information collected will result in junk mail, or that they will be harassed for their political opinions (McGeveran 2003; Mayer 2010). Candidates have expressed a reluctance to remind contributors that their information will be disclosed in part because the candidates estimate that privacy concerns are large (Carpenter et al. 2014).

We note that a contributor's behavior can be affected by disclosure rules even if she does not know the details of the campaign finance disclosure laws governing the election to which she contributes. In places with strong disclosure rules and good data accessibility, all she has to do is happen upon the information online, perhaps on her newspaper's website. Or she might find it incidentally, in the course of looking up an employer, a name, or a street, all of which are linked to databases of campaign contributions in some states. When Proposition 8, the referendum opposing same-sex marriage in California, was pending, enterprising activists searched contribution data for those who supported Proposition 8 and linked it to the addresses of the donors, producing a geo-tagged, interactive map that circulated online and eventually became the source of harassment for some Proposition 8 supporters. California's disclosure requirements made the map possible. One did not have to be aware of the laws and regulations themselves to know that contributions are highly visible in California. From this logic we develop our first hypothesis.

Hypothesis 1: Disclosure causes donors to stop contributing. Contributors are less likely to make future contributions in states that increase the visibility of contributions relative to states that do not increase the visibility of contributions over the same time period.

Asymmetries of benefits and costs

As the Supreme Court conceded in Buckley, “It is undoubtedly true that public disclosure of contributions to candidates and political parties will deter some individuals who otherwise might contribute.”22 In other words, the costs and benefits of disclosure may not fall on all participants equally because some contributors internalize the costs differently. In Buckley the Court hypothesized that smaller contributors would be more elastic to disclosure rules: “[c]ontributors of relatively small amounts are likely to be especially sensitive to recording or disclosure of their political preferences. These strict requirements may well discourage participation by some citizens in the political process, a result that Congress hardly could have intended.”23 The ACLU has expressed a similar concern, arguing that “the [political] system is not strengthened by chilling the speech and invading the privacy of modest donors to controversial causes” (ACLU 2010). If “small” or “modest” donors are disproportionately impacted by disclosure rules, their privacy concerns may cause them to drop out of the donor pool when disclosure rules are strengthened.24

It is possible, however, that the Supreme Court is wrong about the unique sensitivity of small donors to disclosure. Indeed, several authors note that information about large contributions is likely a better heuristic than information about smaller contributions (Briffault 2010; La Raja 2007; Fung, Graham, and Weil 2007), in part because small-time donations lack both informational and anticorruption value (Briffault 2012b; Hasen 2010). If this is true, then large donors may drop out of the donor pool at a higher rate when disclosure rules are strengthened because their contributions would be disproportionately scrutinized. For example, rather than contributing large amounts directly to candidates, large donors may seek refuge in organizations established to circumvent disclosure, such as organizations governed by section 501c of the tax code (Briffault 2012a). In other words, even if we observe large donors dropping out of the pool of contributors, these donors may continue to participate in a less visible realm.

Hypothesis 2: The rate at which donors drop out depends on the size of the contribution. Compared to donors in states that do not strengthen disclosure of campaign contributions, repeat contributions will decrease among the smallest and the largest contributors in states that strengthen their disclosure laws.25

Would-be contributors who opt out might do so for at least two reasons related to privacy. The first is to avoid unwanted attention. The second is related to homophily (La Raja 2014; Mutz 2002). Individuals may fear that disclosure will expose their political allegiances to neighbors, colleagues, and friends (McClurg 2006). This fear is likely greater where individuals are ideologically dissimilar from their neighbors and friends, as La Raja showed in a recent survey experiment. The ACLU's concern about deterring contributions to controversial candidates, and conservatives' concern about businesses being hurt by the revelation of political activities, both speak to homophily and privacy concerns.

Hypothesis 3: Ideological outliers will drop out of the donor pool at a higher rate. In states that strengthen disclosure of campaign contributions, contributors who are ideologically distant from their neighbors will opt out of the donor pool at a higher rate than contributors in states that do not change their laws.

Empirical Analysis of State Campaign Finance Disclosure

We test our three hypotheses using a panel dataset of more than 175,000 individual contributors to state gubernatorial and legislative campaigns between 2000 and 2008. In particular, we compare the pool of campaign contributors in states that strengthened their disclosure rules and practices between 2004 and 2008 (our “treatment” states) to the pool of campaign contributors in states that did not change their disclosure rules and practices (our “control” states). There are fourteen “treatment” states and nine “control” states in our sample. See Figure 1. By construction, no treatment states had very high disclosure scores at the beginning of the time period or very low disclosure scores at the end of the time period.

FIG. 1. States with no change compose our control group (n = 9). States that strengthened their disclosure rules and practices compose our treatment group (n = 14). Note that Kansas and Vermont are dropped from the ideology analysis for lack of zip code data.

Data

In order to divide the states into groups, we rely on state scores produced by the Campaign Disclosure Project (CDP), an annual report authored by subject-area experts from both academia and law.26 The CDP evaluated every state's campaign finance laws and data accessibility for the years 2003–2005 and 2007–2008.

We use the CDP's state disclosure scores to measure the strength of each state's disclosure regime. The CDP grades states on four dimensions: (1) de jure language; (2) electronic filing; (3) content accessibility; and (4) user-friendliness of the data. Each dimension comprises a handful of measures, which are further divided into dozens of sub-measures, coded by campaign finance experts. Our identification of treatment and control states is based on the aggregate score of all measures in all four dimensions for each state. Aspects of these (largely technologically driven) changes noted by the CDP between 2004 and 2008 appear in each dimension, so we also cannot isolate the discrete impact of small or individual changes.27 Table 1 presents summary data for the years in the dataset.

Table 1. Summary Statistics of Disclosure Scores Over Time

Year   Min   Mean   Max   St. Dev.
2003   0     1.39   5     1.75
2004   0     1.44   5     1.83
2005   0     1.96   8     2.31
2007   0     3.26   9     3.21
2008   0     4.13   9     3.51

States that improved their disclosure score by three or more points between 2004 and 2008 are coded as “treatment” states, and states with no measured improvement in their disclosure regime over the same time period are our controls. Our main results are robust to using alternative thresholds as cutpoints.28 Many disclosure improvements in the treatment states involve access to information and were accomplished not via new legislation or administrative rules but by simple improvements to data accessibility, such as improving the user interface for state websites and making campaign contribution data searchable and downloadable. Some states enabled searching by name, geographical location (address, zip code, etc.), or employer (usually for larger amounts). Others mandated electronic filing between 2004 and 2008, which greatly improved the searchability of data. Reporting thresholds were unchanged in all treatment states except North Carolina, which slightly decreased its threshold from $100 to $50, and New Jersey, which decreased its threshold from $400 to $300. Because most improvements analyzed by the CDP did not, in fact, involve changes to disclosure laws (their most heavily weighted category), we followed up on each state in the sample, pinpointing the changes we could observe in treatment states and verifying that no important changes were made in control states. To do so, we combined the CDP coders' summaries, which captured changes in data accessibility and web navigation that are no longer observable, with independent Westlaw searches for legal changes over the time period. See Appendix D.
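The coding rule reduces to comparing each state's aggregate CDP score in 2004 and 2008. A minimal sketch in Python (pandas), with hypothetical table, column names, and values of our own invention:

```python
import pandas as pd

# Hypothetical CDP aggregate scores (names and values ours): one row per
# state per scored year.
cdp_scores = pd.DataFrame({
    "state": ["A", "A", "B", "B", "C", "C"],
    "year":  [2004, 2008, 2004, 2008, 2004, 2008],
    "score": [1.0, 4.5, 0.0, 0.0, 2.0, 3.5],
})

wide = cdp_scores.pivot(index="state", columns="year", values="score")
change = wide[2008] - wide[2004]

# Improvement of three or more points -> "treatment"; no measured
# improvement -> "control"; intermediate changes are excluded.
assignment = pd.Series("excluded", index=wide.index)
assignment[change >= 3] = "treatment"
assignment[change == 0] = "control"
print(assignment)
```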

Our data on campaign contributions and political ideology are drawn from the Database on Ideology, Money in Politics, and Elections (DIME) (Bonica 2013), which includes contributions data from the National Institute on Money in State Politics (NIMSP).29 NIMSP, and in turn DIME, collect the contributor's zip code, recipient's name, recipient's state, recipient's party, target seat, amount contributed, and the date of the contribution.30

Among the 23 states in our sample, more than one million individuals contributed half a billion dollars to 15,995 candidates for 5,553 contested seats for statewide office in 2000, 2004, and 2008. We subset the data to the 175,644 individuals who made a contribution during the 2000 election cycle and analyze the behavior of this panel in subsequent statewide elections. We track the raw amount contributed by each contributor in the panel as well as a relative measure of each contributor's impact, which divides the individual's contribution(s) by the total contributions to the seat in question.31
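The relative measure is a simple grouped division. A sketch, again under assumed column names (not NIMSP's actual schema):

```python
import pandas as pd

# Toy contribution records (column names and values ours).
contribs = pd.DataFrame({
    "contributor": ["d1", "d2", "d3", "d1"],
    "seat":        ["TX-HD-1", "TX-HD-1", "TX-HD-1", "TX-GOV"],
    "amount":      [100.0, 400.0, 500.0, 250.0],
})

# Relative impact: each contribution divided by the total contributed
# to the seat in question.
contribs["rel_amount_per_seat"] = (
    contribs["amount"] / contribs.groupby("seat")["amount"].transform("sum")
)
print(contribs)
```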

We use DIME ideology scores to identify the ideology of each contributor. DIME uses common contributors to state and federal races to bridge ideology estimations across different types of races in a method that improves upon other well-known ideology estimation efforts, such as NOMINATE scores (Bonica 2014; McCarty, Poole, and Rosenthal 2006). We use the DIME estimates to generate a measure of the absolute value of the distance between each contributor's ideology and the average ideology of his or her zip code,32 using the most recent pre-treatment measure of ideological distance.33 Almost all DIME estimates range from −2 to 2. As we explain in Appendix A2, we reduced the data by 0.1% by cutting contributors who fell outside of the DIME range.
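A sketch of the distance measure, assuming a DIME-style contributor frame with hypothetical column names:

```python
import pandas as pd

# Toy DIME-style contributor records (names and values ours).
dime = pd.DataFrame({
    "contributor": ["d1", "d2", "d3", "d4"],
    "zip":         ["75225", "75225", "94709", "94709"],
    "ideology":    [1.2, 0.4, -1.1, 0.8],
})

# Drop the ~0.1% of contributors outside the usual [-2, 2] DIME range.
dime = dime[dime["ideology"].between(-2, 2)].copy()

# Absolute distance between each contributor's score and the mean
# score of his or her zip code.
zip_mean = dime.groupby("zip")["ideology"].transform("mean")
dime["ideological_distance"] = (dime["ideology"] - zip_mean).abs()
print(dime)
```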

Balance checks

The cleanest designs for causal inference in a non-experimental context demonstrate balance between two groups and then use a simple difference in means as a causal estimate. Here, there are small but statistically significant differences between contributors in treatment and control states. We therefore use fixed effects probability models and controls to correct for imbalance.34

Table 2 examines balance on key covariates using pre-treatment data. The differences for amounts contributed are substantively fairly small, but they are statistically significant because of our large sample size. Contributors in control states give less money per capita per seat (one cent, compared to six cents in treatment states), but the relative importance of each contributor is higher in control states than treatment states, whether measured relative to other contributors to the race or relative to all contributors in the state.

Table 2. Balance Between Treatment and Control States on Measures of Political Competition, per Capita Political Spending, and Average Ideology of Contributor

Characteristic                                                              Treatment    Control      p-Value (t-test)
Individual-level characteristics
  Amount per capita per seat in 2000 state elections                        $0.06        $0.01        0.00
  Relative amount contributed per seat in prior election                    0.008        0.05         0.00
  Relative amount contributed per state in prior election                   0.00008      0.0007       0.00
  Average contributor ideology score, 2000 (std. dev.)                      0.12 (0.77)  0.49 (0.86)  0.00
State-level characteristics
  Share of state population contributing to same-state candidates in 2000   0.008        0.0008       0.03
  Voter turnout, 2004                                                       0.63         0.64         0.65
  Presence of divided government, 1994–2004                                 0.66         0.51         0.02
  Size of legislative minority party under divided government, 1994–2004    0.40         0.38         0.03
  Average number of seats up for election                                   115.6        60.0         0.02
  Numeric disclosure scores in pre-period                                   1.8          0.8          0.19

Contributor ideology is also unbalanced between the treatment and control groups. The range of political ideology in both groups of states is similar, but contributors in control states are more conservative. The differences are reflected in the interquartile ranges. The interquartile range of contributor ideology is −0.44 to 0.79 in treatment states and −0.09 to 1.12 in control states. Contributors in treatment states have a mean ideology score of 0.12 (s.d. = 0.77), and contributors in control states average 0.49 (s.d. = 0.86). The mean differences are not large, but the distribution of ideologies is clearly not equivalent. We therefore control for ideology in our models.

Political competition might affect a state's probability of strengthening campaign finance disclosure requirements, as incumbents attempt to erect barriers to entry for challengers. We test balance on measures of political competition, including the presence or absence of divided government, the size of the minority party, and the number of seats being challenged in an election. As the table indicates, balance is mixed on the measures of political competition, though substantively, differences are not large. This is also true for pre-period disclosure scores.35

The balance statistics provide some foundation for our inferences about the effect of disclosure. Our use of individual-level controls for ideology and amount, combined with techniques that account for the clustered nature of the data and the fact that treatment occurs at the state level, strengthen the validity of our findings.36

Methods and Findings

In order to estimate the effect of disclosure on contributors, we leverage the difference in state disclosure regimes over time in a difference-in-differences design that takes advantage of the rich individual-level data available. A difference-in-differences design does not assume that the control group is exchangeable with the treatment group, as is assumed in the experimental context. Instead, the design takes as its starting point that treatment is not randomly distributed and requires a weaker assumption than exchangeability: that in the absence of treatment, the treatment and control groups would have followed parallel time trends. Any deviation in the treatment group's time trend is attributed to the treatment. We conduct our analysis on a panel of campaign contributors from the 2000 elections. Following a panel of contributors over time supports the parallel-trends assumption, because it holds individual contributors constant rather than allowing the population of donors to change each year.

We derive our estimates using a linear regression with state fixed effects. We use clustered bootstrapping to correct for the fact that contributors are clustered within states. We run 1,000 replications of each estimate, drawing a random sample of states, with replacement, and report 90% confidence intervals on the median estimate.37 Our confidence intervals based on clustered bootstrapping are less biased than robust-clustered standard errors (Harden 2011). We explain more about our methodology in Appendix A.
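A sketch of the procedure, assuming a contributor-level frame with `state`, `treat`, `post`, and `repeat` columns (names ours); the placeholder estimator here is the simple difference in differences of repeat rates, without the controls and fixed effects used in our models:

```python
import numpy as np
import pandas as pd

def fit_did(df: pd.DataFrame) -> float:
    """(treat 2008 - treat 2004) - (control 2008 - control 2004)."""
    rates = df.groupby(["treat", "post"])["repeat"].mean()
    return (rates.loc[(1, 1)] - rates.loc[(1, 0)]) - (
        rates.loc[(0, 1)] - rates.loc[(0, 0)])

def cluster_bootstrap(df: pd.DataFrame, n_reps: int = 1000, seed: int = 0):
    """Resample whole states with replacement and refit each time.
    (A production version would guard against resamples that happen to
    lack either the treatment or the control group.)"""
    rng = np.random.default_rng(seed)
    states = df["state"].unique()
    estimates = []
    for _ in range(n_reps):
        sampled = rng.choice(states, size=len(states), replace=True)
        boot = pd.concat([df[df["state"] == s] for s in sampled])
        estimates.append(fit_did(boot))
    estimates = np.asarray(estimates)
    # Median estimate with a 90% percentile confidence interval.
    return np.median(estimates), np.percentile(estimates, [5, 95])
```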

Average effects

We first examine the general question of whether contributors are deterred by disclosure. We subset the data to the people in the sample who contributed to same-state candidates in the year 2000. We then compare rates of continued participation over time in treatment and control states. We expect similar rates in treatment and control states in 2004, when no states in the sample had big changes in disclosure, and a difference to emerge by 2008, by which time the treatment states had made contributions more visible. If contributors are deterred by enhanced disclosure, then we should observe a divergence in the probability of repeat contributions after the treatment states make contributions more visible. The unit of analysis is contributor-cycle for 2004 and 2008 cycles (two observations per contributor).
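A sketch of the panel construction, with toy stand-ins for the 2000 donor list and the later-cycle contribution records:

```python
import pandas as pd

# Toy stand-ins: every 2000 donor, and contribution records from later cycles.
donors_2000 = pd.DataFrame({"contributor": ["d1", "d2", "d3"]})
later = pd.DataFrame({
    "contributor": ["d1", "d1", "d3"],
    "cycle":       [2004, 2008, 2004],
})

# One row per 2000 donor per subsequent cycle (the contributor-cycle unit).
panel = donors_2000.merge(pd.DataFrame({"cycle": [2004, 2008]}), how="cross")

# Outcome: 1 if the contributor gave to a same-state candidate that cycle.
gave = later.drop_duplicates().assign(repeat=1)
panel = panel.merge(gave, on=["contributor", "cycle"], how="left")
panel["repeat"] = panel["repeat"].fillna(0).astype(int)
print(panel)
```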

We run the following linear probability model:

Y = α + β1T + β2P + β3(T × P) + γC + F + ɛ

where Y indicates (0, 1) whether the contributor gave to a state candidate in a given cycle, T indicates whether the contributor is in a treatment state, P indicates the “post” period (2008), C is a vector of individual-level controls for amount given in the pre-period and pre-period ideology, and F is a fixed effect for every state in the sample, which is intended to control for any state-specific features, particularly unobservable ones, that could confound our inference. The coefficient of interest is the difference in differences reported in β3, which estimates the 2008 − 2004 probability of contributing among treatment-state contributors, minus the same probability among control-state contributors. If the estimate is negative, then treatment-state contributors have a lower probability of contributing in 2008 than we would expect in the absence of treatment, using the control group's time trend as a counterfactual. Our error term, ɛ, is conventionally assumed to be independent and identically distributed, but we know that the data are clustered at the state level, which violates the independence assumption. To account for the clustering, we use clustered bootstrapping, as explained above.38
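In estimable form, and with simulated data standing in for the real panel, the model can be sketched via statsmodels' formula interface (variable names ours); our published estimates come from the clustered bootstrap above rather than the default standard errors:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated contributor-cycle panel with the variables in the model.
rng = np.random.default_rng(0)
n = 2000
panel = pd.DataFrame({
    "state":      rng.choice(list("ABCDEF"), size=n),
    "post":       rng.integers(0, 2, size=n),
    "ideology":   rng.normal(0.2, 0.8, size=n),
    "log_amount": rng.normal(5.0, 1.0, size=n),
})
panel["treat"] = panel["state"].isin(["A", "B", "C"]).astype(int)
p = 0.20 - 0.06 * panel["post"] - 0.02 * panel["treat"] * panel["post"]
panel["repeat"] = rng.binomial(1, p)

# Linear probability model with state fixed effects. Because treatment is
# assigned at the state level, the Treatment main effect is collinear with
# the state dummies; the quantity of interest is the interaction, beta_3.
fit = smf.ols("repeat ~ treat * post + ideology + log_amount + C(state)",
              data=panel).fit()
print(fit.params["treat:post"])
```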

The results of the basic difference-in-differences estimation are presented in Table 3, where Model 1 presents the basic difference-in-differences result, and Models 2 and 3 introduce controls for ideology and amount, including relative amount per seat. So that we can capture the most recent measure of amount for each contributor, we use the amount from 2004 where the contributor gave in 2004, and from 2000 where she did not.39

Table 3. Average Effects of Increased Disclosure Among 175,644 Contributors in 14 Treatment States and 9 Control States

                          Model 1          Model 2          Model 3
Intercept                 0.20             0.41             −0.37
                          [0.14, 0.32]     [0.03, 0.94]     [−0.51, −0.27]
Treatment                 0.02             0.06             0.03
                          [−0.14, 0.13]    [−0.47, 0.27]    [−0.15, 0.22]
Post                      −0.06            −0.06            −0.07
                          [−0.07, −0.03]   [−0.11, −0.02]   [−0.09, −0.05]
Treatment × Post          −0.02            −0.02            −0.03
                          [−0.04, 0.01]    [−0.06, 0.03]    [−0.04, −0.01]
Ideology                                   −0.03            0.11
                                           [−0.04, −0.01]   [0.09, 0.12]
log(Rel. amt. per seat)                    0.04
                                           [0.03, 0.05]
log(Amount)                                                 0.03
                                                            [−0.14, 0.17]
State fixed effects       yes              yes              yes

All members of the sample contributed in the year 2000. Dependent variable is whether the contributors gave again in a subsequent cycle (2004 or 2008). Difference-in-differences estimates of the difference in contribution percentages in 2008 and 2004 for treatment and control groups are shown in boldface. Confidence intervals (90%) are provided below the estimates. They are generated with clustered bootstrapping (1,000 replications).

The coefficient of interest is the difference-in-differences, from the interaction between Treatment and Post, shown in boldface. For all three models, the estimate is very small, either a two or three percentage point decrease in repeat contributions below the level of contributions we would have expected, taking the time trend for control states as our counterfactual time trend. While the confidence intervals for two of the three estimates cross zero, we can rule out non-negligible negative effects by examining the lower end of the confidence intervals, which are −0.04 and −0.06. In other words, with 1,000 replications of the estimate using randomly selected treatment and control states, 90% of the estimates were no larger than a six percentage point decrease below that which we would have expected in the counterfactual world in which the treatment states did not improve disclosure.40

We illustrate our expectations by interpreting the coefficients from Model 1. In Model 1, we observe that 22% of treatment group contributors to state campaigns in the year 2000 contributed again in 2004 (α + β1), compared to 20% of control group contributors (α). By 2008, both groups retained 14% of contributors from the year 2000 as repeat contributors (treatment level is α + β1 + β2 + β3, and control level is α + β2). The larger drop in participation among the treatment group (from 22% to 14%) results in the negative difference-in-differences estimate of −0.02 (β3). Using the control group's time trend of a six percentage point decrease, we would have expected 16% of treatment group contributors to participate in 2008. We further note that the smallest estimated level of participation for the treatment group, given the lower bound of the confidence interval on β3 (−0.04), is 12%, or two percentage points less than participation by contributors in states with no changes in disclosure.
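For concreteness, the Model 1 arithmetic (α = 0.20, β1 = 0.02, β2 = −0.06, β3 = −0.02) works out as follows:

```latex
\begin{align*}
\text{Treatment, 2004: } \alpha + \beta_1 &= 0.20 + 0.02 = 0.22\\
\text{Control, 2004: } \alpha &= 0.20\\
\text{Treatment, 2008: } \alpha + \beta_1 + \beta_2 + \beta_3 &= 0.20 + 0.02 - 0.06 - 0.02 = 0.14\\
\text{Control, 2008: } \alpha + \beta_2 &= 0.20 - 0.06 = 0.14\\
\beta_3 &= (0.14 - 0.22) - (0.14 - 0.20) = -0.02
\end{align*}
```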

Of course, small percentages of large populations can still mean that a lot of people are affected, or that the effect is otherwise non-negligible. What does a two percentage point drop mean in terms of actual contributors who opt out? How many potential contributors does each candidate lose as a result of enhanced contribution visibility? Multiplying the point estimate of −0.02 by the number of treatment-state contributors in 2000 implies that 3,293 contributors in treatment states who would otherwise have been expected to contribute opted out of contributing above the disclosure threshold in 2008, once their contributions became more visible. On average, that represents 235 contributors per treatment state. In 2008 there were 3,983 candidates campaigning for 1,453 seats in the treatment states. In other words, our results show that about two donors per district, or less than one (0.83) donor per candidate, dropped out of the donor pool between 2000 and 2008. Reasonable minds might disagree, but we interpret this finding as a negligible effect.41
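The per-state and per-candidate figures follow directly from the 3,293 total:

```latex
\begin{align*}
3{,}293 \div 14 \text{ treatment states} &\approx 235 \text{ contributors per state}\\
3{,}293 \div 1{,}453 \text{ seats} &\approx 2.3 \text{ contributors per seat}\\
3{,}293 \div 3{,}983 \text{ candidates} &\approx 0.83 \text{ contributors per candidate}
\end{align*}
```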

Alternatively, there could be a generalized disclosure effect: the low repeat rate in the period we study might reflect changes common to both treatment and control states as attention to disclosure increased nationwide. To investigate this possibility, we analyzed donor repeat rates in two additional time periods, 1992–2000 and 1996–2004, which are pre-treatment for our purposes but for which we do not have information about state disclosure laws and regulations. Among the 1996 contributors, 19% contributed again in 2000 and 11% in 2004. Among the 1992 contributors, 8.6% contributed again in 1996 and 5.5% in 2000, though the DIME data contains information for only four states from our sample over that time period and is therefore a poorer proxy for our larger sample. Nevertheless, both estimates are lower than the 20%–22% repeat rates we observe between 2000 and 2004 and the 14% rate from 2004 to 2008. Repeat contribution rates are simply very low over time, and the 2000–2008 time frame does not seem to be an outlier.42

Heterogeneous treatment effects by amount

The Supreme Court has expressed special concern about the disclosure of small donors and ideological outliers. We hypothesize that people giving relatively large and/or small contributions will be more likely to opt out when the visibility of contributions increases, compared to those giving intermediate amounts (Hypothesis 2). If donating to political campaigns is about buying access and policy favors, then those giving less money have less of a reason to expose their small contributions, for which their expected policy and access benefits are limited. And those giving large contributions might choose to participate via less visible avenues for fear of exposing their large resources or giving the impression that they are trying to buy policy favors. If, however, contributing is about participation in the process and signaling one's involvement to others, we might not see more opting out among small or large contributors when the contributions are more easily accessed by the public.

Figure 2 shows repeat contributions across state elections for donors who gave different amounts in 2000. The subgroups are chosen somewhat arbitrarily, with consideration of both the range of the contribution size and the group size.43 Figure 2 plots the treatment and control groups separately, using results from the following regression, run on each amount-based subgroup in the data:

Y = α + β1T + β2P + β3(T × P) + γC + F + ɛ

where the variables and coefficients are interpreted as described above, in the main difference-in-differences analysis, and 90% confidence intervals are bootstrapped at the state level.44 Because we divide the data into various subgroups and analyze each separately, we lose a lot of statistical power, and our confidence intervals widen. We are therefore unable to draw strong statistical conclusions about the nature of the heterogeneous effects, though each subgroup effect appears as negligible as the aggregate effect. We present the difference-in-differences estimates graphically here and include a table with the regression results in Appendix B.
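Operationally, the subgroup analysis repeats the same estimation within bins of the 2000 contribution amount. A sketch that reuses the `panel` frame and `cluster_bootstrap` function from the earlier sketches, with hypothetical bin edges and an assumed `amount_2000` column:

```python
import pandas as pd

# Hypothetical cutpoints; the article balances bin width against bin size.
bins = [0, 100, 250, 500, 1000, float("inf")]
panel["amount_bin"] = pd.cut(panel["amount_2000"], bins=bins)

# Same difference-in-differences, estimated separately within each bin.
for label, subgroup in panel.groupby("amount_bin", observed=True):
    estimate, ci = cluster_bootstrap(subgroup)
    print(label, round(estimate, 3), ci)
```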

As Figure 2 shows, as the size of the contribution increases, the overall tendency to contribute in a subsequent election increases in both treatment and control states. Among those who contributed in 2000, donors of less than $100 have around a 14% rate of repeating their participation in 2004, whereas donors of $1,000 or more have around a 38% rate of repeating their participation in 2004. Simply put, repeat players are more likely to contribute big amounts, and big contributors are likely to be repeat players.

FIG. 2. Repeat contributions in a given state election cycle, grouped based on amount contributed in 2000 to state elections. Estimates are calculated with clustered bootstrapping of the difference-in-differences regressions (1,000 replications). The percentage of repeat contributors decreases in control states (solid black line) and decreases slightly more in treatment states (dashed, medium gray line) in the wake of enhanced visibility. As before, these effects are not statistically distinguishable from zero.

In each panel, the percent of repeat contributors decreases between 2004 and 2008, which is natural, given that some attrition is to be expected in a panel study. In general, we see that the percent of repeat contributors in treatment states, where we expected contributors to opt out of future participation, decreases at a very similar, but slightly faster, rate than in control states over the same time period. Indeed, across the entire spectrum of contributors, we see that the usual estimate of chilling participation is only a three percentage point decrease below the counterfactual time trend presented by the control states. The estimate is indistinguishable from zero, but its consistency across almost all of the amount-based subgroups implies to us that there is not a larger effect among donors of small or large amounts.45 Nevertheless, where repeating rates are already low, even small changes can be impactful.

Among those contributing $100 or less, treatment state contributors were, on average, three percentage points less likely to contribute than they would have been in the absence of the change in disclosure visibility, using the time trend of the control states as a counterfactual. Control state contributors decreased repeat participation from 14% to 10%, and treatment state contributors decreased from 16% to 9%. Assuming that, in the absence of the increase in visibility of contributions, the repeating percentage in treatment states would have followed a time trend parallel to the control states, we would have expected the treatment state contributors to decrease from their initial 16% to 12%. In other words, we observe 25% fewer repeat contributors among small donors than we would have expected (though the estimate is indistinguishable from zero). The three percentage point difference does not seem to be due to an additional burden on small donors brought about by enhanced disclosure, since almost all subgroups experience the same difference-in-differences. Among those contributing $1,000 or more, the story is similar, but the effect is much smaller, both as a point estimate and in relation to the baseline level of repeating.46 The Supreme Court has disavowed an interest in leveling the playing field in terms of the amounts contributed and spent in politics. Disclosure can nevertheless provide information relevant to that concern. While our results lack the statistical power to be definitive, the playing field appears relatively level, despite differences in the baseline repeat participation rate.

Heterogeneous treatment effects by ideology

Our third hypothesis is that the privacy costs of enhanced visibility are more salient among contributors who are ideological outliers. In the case of ideology, the motivation to opt out is driven by the desire to avoid revealing that one's politics are at odds with the surrounding political culture, one's neighbors, or one's friends. We proxy local political culture using zip code level data.47 As an example, we suspect that a supporter of a socialist candidate will be more likely to stop contributing to socialist candidates in the wake of disclosure enhancements when the contributor lives in a conservative zip code like 75225 (Dallas, Texas) rather than a relatively liberal zip code like 94709 (Berkeley, California). Similarly, a contribution to a Republican candidate may be more likely to be suppressed by enhanced disclosure in 94709 than in 75225.

Our measure of ideological distance differs from one recently used in an experimental setting (La Raja 2014). Our measure is solely based on physical proximity, which can vary depending on the population of the zip code. La Raja asked respondents about whether their political views differed from people in “your family, coworkers, and neighborhood.” Family and friends can live anywhere, and coworkers might or might not live in the same zip code. So La Raja's measure captures an aspect of homophily that ours does not—the subjective impression of would-be contributors. On the other hand, our measure is able to detect whether the direction of the ideological distance matters—whether a contributor to the right of the zip code is more likely to opt out than one to the left of the zip code.

Figure 3 shows the effects of ideological distance from one's neighbors for contributors along the political spectrum (negative numbers are less conservative, positive numbers are more conservative), between treatment and control states.48 The values in Figure 3 represent the ideological distance from the mean political ideology of a contributor's zip code, not the absolute ideology value for each contributor. For example, a conservative living in a fairly conservative district would have a smaller ideological distance than a moderate conservative living in a very liberal district.49 We also restrict the sample to conservatives who are more conservative than the average of their zip code, and liberals who are more liberal than the average of their zip code in order to preserve ordering within the panels.50

FIG. 3. Repeat contributions to same-state candidates by 2000 contributors in the years 2004 and 2008, grouped by contributors' ideological distance from others in their zip codes. Difference-in-differences estimates with 90% confidence intervals reported. All confidence intervals cross zero, though none is less than −0.07. Ideological outliers within their zip codes (outermost panels) are not affected any more than those who are more aligned with their neighbors (inner panels).

We observe small changes in every panel. In the left panel, contributors who are more liberal than the average contributor in their zip codes were three percentage points less likely to contribute in 2008 in control states (from 14% in 2004 to 11% in 2008), but just two percentage points less likely in treatment states (from 18% to 16%). Among those farthest to the right of their neighbors, the repeat rate in control states dropped from 31% in 2004 to 21% in 2008. In treatment states, the repeat rate dropped from 32% to 21%. The confidence intervals for these one percentage point differences are quite wide, due in part to the smaller sample size of each panel, yet this finding at least suggests that increasing disclosure did not chill campaign contributions from ideological outliers. In fact, these outliers were less affected by disclosure rules than contributors who are ideologically similar to their neighbors. Moreover, the lower bounds of the confidence intervals for contributors to both the left and right of their neighbors are consistent with the lower bounds for those more aligned with their neighbors; even the most extreme typical scenarios from the 1,000 replications do not imply that relative ideological outliers were impacted more heavily than relative moderates.

The panels in Figure 3 are based on each contributor's ideological distance from the mean of the zip code, on the assumption that geographic proximity affects contributors' cost-benefit analyses on whether to participate once their participation is more visible. However, because a lot of disclosure happens online, regardless of the location of the person searching it, we repeat our analysis on contributor ideology, independent of the ideological distance between contributors and their neighbors. If would-be repeat contributors opt out due to concerns about exposure of their contributions online or otherwise beyond their neighborhoods, then we should observe supporters of extreme candidates opting out of contributing in 2008 in treatment states more than in control states.

Since the range of raw ideological scores is broader than the range of ideological differences discussed above, we slice the data even more thinly in Figure 4, which widens our confidence intervals of the differences in most of the panels.51 Among those who are most liberal, the percent of repeat contributors drops five percentage points between 2004 and 2008 in control states (16% to 11%) compared to a six percentage point drop in treatment states (24% to 18%). Among the most conservative contributors, the percent of repeat donors drops five percentage points in control states (17% to 12%) compared to just three percentage points in treatment states (12% to 9%).52 These negligible effects are independent of the size of each contribution. The correlation of a contributor's ideology and the size of her contribution is 0.06 (0.04 in the treatment group). A one-unit increase in a contributor's conservatism score (more than one standard deviation given the distribution of conservatism scores) generates just 4% more spending, or approximately $170, in an election cycle.

FIG. 4. Repeat contributions to same-state candidates by 2000 contributors in the years 2004 and 2008, grouped by ideological ranges. Within-panel difference-in-differences estimates with 90% confidence intervals reported. All confidence intervals cross zero, though none is less than −0.1, and the average impact on ideological outliers is no greater than impacts on moderates.

In summary, our analysis suggests that enhanced disclosure has a negligible effect on contributors who are ideological outliers, whether outliers are defined relative to all donors or just to donors in one's zip code.

Discussion

Our findings indicate that disclosure, particularly in the form of increased visibility of contributions, has a negligible deterrent effect on contributors. Moreover, any chilling effect that we do observe does not appear to disproportionately affect high-spending contributors or ideological outliers. The deterrent effect of disclosure on smaller donors is similar to that on other donors, though their baseline rate of contributing is disproportionately low. Our conclusion that the deterrent effect is negligible turns on percentages rather than raw counts. More than 3,000 contributors in our sample of 175,000 stopped giving in response to enhanced disclosure. Those who view the First Amendment's protection of political contributions as fundamental will read this aggregate finding as evidence that disclosure chills speech. Our interpretation is different, both because of the small relative effect size (less than one “chilled” donor per candidate) and because our estimates are statistically indistinguishable from zero, or no effect.53

To be clear, this study uses the best-available measures of disclosure and campaign contributions and a rigorous methodology, but like all observational studies of causal questions, it is only as strong as its measures and treatment identification. The disclosure scores, while generated by the leading legal experts in campaign finance, are blunt. There are several pathways to any given score. One state with a score of “B” might have laws that require a lot of information to be disclosed on tight deadlines but not provide the public with easy access to the information that is disclosed. Another could have the opposite tendencies and still earn a “B,” due to mandatory electronic filing, for example. The single score allows us to compare across states, but the underlying measures span several dimensions.

Another challenge is the strength of the treatment we analyze. We argue that an individual contributor need not be told about the changes by the state (e.g., that the Office of the Secretary of State will now allow contribution data to be searched by address) and that the contributor can happen across contribution information more easily after disclosure is strengthened, thereby “treating” the individual. But it is possible that (1) only a small fraction of contributors had the experience of “happening” across contribution information, and (2) had the states announced the changes, the treatment would have both been stronger and reached more people. This biases our results toward finding no effect, an undesirable bias in the current project. While we know there were changes to disclosure rules and practices and we know that experts noticed them, we unfortunately cannot verify that contributors also noticed them.

Conclusion

The Supreme Court has long supported disclosure laws on the premise that they increase information and combat corruption or the appearance of corruption. The Court has taken for granted all along that disclosure chills some speech. Our findings indicate that, when it comes to the effects of disclosure on giving, the effects are negligible, though we hasten to add that the rise of “dark money” (expenditures made by groups that do not disclose their donors), particularly after the years covered by our study, means that the whole playing field is not visible.

Campaign finance disclosure does not seem to deter ideological outliers any more than other participants in the system. The group most likely to opt out has probably varied over time. In the 1950s and again in the 1980s, the Court acted to protect more liberal activists and contributors from harassment. Today, conservatives who oppose gay marriage might be most worried about harassment. Nevertheless, this research suggests that conservatives, whether measured in terms of absolute ideology or ideological distance from one's neighbors, are not measurably more deterred than their liberal neighbors and compatriots. Of course, we only observe contributions above the disclosure threshold. It might be the case that more people are contributing below required disclosure thresholds or reallocating their money from campaign contributions to disclosure-free advocacy organizations (Issacharoff and Karlan 1998). Furthermore, because state disclosure reforms are bundled and the available disclosure data are coarse and cover a limited time series, this study cannot test the relative impacts of discrete disclosure changes. The recent move toward experimental evaluation of variations on disclosure and disclaimer regimes is useful in this regard (see, e.g., La Raja 2014; Dowling and Wichowsky 2013; Ridout, Franz, and Fowler 2014).

The trend toward “deregulate and disclose” in federal campaign finance has been accompanied by a similar trend in state governments across the country. The potential costs of this new degree of transparency for political behavior have been understudied. In this article, we attempt to answer one of the most pressing questions about the relationship between campaign finance laws on the books and electoral funding in practice. Our analysis focuses only on the costs of disclosure, leaving analysis of disclosure's benefits for future work. In short, our data show that First Amendment concerns about disclosure, from both the right and the left, are probably overstated.

References

  • Ackerman B.A. and Ayres I. 2002. Voting with Dollars. Yale University Press.
  • ACLU. 2010. “DISCLOSE ACT Passed by House Today Compromises Free Speech.” Press Release. <https://www.aclu.org/free-speech/disclose-act-passed-house-today-compromisesfree-speech>.
  • Alexander Kim, Barrett Will, Doherty Joseph, Levinson Jessica, Lowenstein Daniel H., Milligan Molly, Stern Bob, and Westen Tracy. 2003–2008. The Campaign Disclosure Project. <http://campaigndisclosure.org/>.
  • Angrist J.D. and Pischke J.S. 2009. Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press.
  • Ansolabehere S., De Figueiredo J.M., and Snyder J.M. 2003. “Why Is There So Little Money in Politics?” Journal of Economic Perspectives 17(1): 105–130.
  • Ansolabehere Stephen. 2007. “The Scope of Corruption: Lessons from Comparative Campaign Finance Disclosure.” Election Law Journal 6(2): 163–183.
  • Bauer Robert F. and Issacharoff Samuel. 2015. “Keep Shining a Light on Dark Money.” Politico. <www.politico.com/magazine/story/2015/04/keep-shining-the-light-on-dark-money-116901>.
  • Ben-Shahar Omri and Schneider Carl E. 2014. More Than You Wanted to Know: The Failure of Mandated Disclosure. Princeton University Press.
  • Bonica Adam. 2013. “Database on Ideology, Money in Politics, and Elections: Public Version 1.0 [Computer file].”
  • Bonica Adam. 2014. “Mapping the Ideological Marketplace.” American Journal of Political Science 58(2): 367–386.
  • Briffault R. 2003. “Reforming Campaign Finance Reform: A Review of Voting With Dollars.” California Law Review 91(3): 643–684.
  • Briffault R. 2010. “Campaign Finance Disclosure 2.0.” Election Law Journal 9(4): 273–303.
  • Briffault R. 2012a. “Updating Disclosure for the New Era of Independent Spending.” Journal of Law and Politics 27: 683.
  • Briffault Richard. 2012b. “Super PACs.” Minnesota Law Review 96: 1644–1693.
  • Cain Bruce. 2010. “Shade from the Glare: The Case for Semi-Disclosure.” Cato Unbound. <http://www.cato-unbound.org/2010/11/08/bruce-cain/shade-glare-case-semidisclosure>.
  • Carpenter Dick M. 2009. “Mandatory Disclosure for Ballot-Initiative Campaigns.” Independent Review 13(4): 567–583.
  • Carpenter Dick M., Primo David M., Tendetnik Pavel, and Ho Sandy. 2014. “Disclosing Disclosure: Lessons from a ‘Failed’ Field Experiment.” The Forum 12(2): 343–356.
  • Center for Competitive Politics. 2013. “Campaign Finance Disclosure: The Devil is in the Details.”
  • Clinton Joshua D. and Lewis David E. 2008. “Expert Opinion, Agency Characteristics, and Agency Preferences.” Political Analysis 16(1): 3–20.
  • Corrado A. 1997. Campaign Finance Reform: A Sourcebook. Brookings Institution Press.
  • Dowling Conor M. and Wichowsky Amber. 2013. “Does It Matter Who's Behind the Curtain? Anonymity in Political Advertising and the Effects of Campaign Finance Disclosure.” American Politics Research 41: 965–996.
  • Dowling Conor M. and Wichowsky Amber. 2015. “Attacks Without Consequence? Candidates, Parties, Groups, and the Changing Face of Negative Advertising.” American Journal of Political Science 59(1): 19–36.
  • Fisman R. and Miguel E. 2007. “Corruption, Norms, and Legal Enforcement: Evidence from Diplomatic Parking Tickets.” Journal of Political Economy 115(6): 1020–1048.
  • Fortier John C. and Malbin Michael J. 2013. “An Agenda for Future Research on Money in Politics in the United States.” The Forum 11(3): 455–479.
  • Fung A., Graham M., and Weil D. 2007. Full Disclosure: The Perils and Promise of Transparency. Cambridge University Press.
  • Gerber Elisabeth R. and Lupia Arthur. 1995. “Campaign Competition and Policy Responsiveness in Direct Legislative Elections.” Political Behavior 17(3): 287–304.
  • Gilbert Michael. 2013. “Campaign Finance Disclosure and the Information Tradeoff.” Iowa Law Review 98: 1847.
  • Gilbert Michael D. 2012. “Disclosure, Credibility, and Speech.” Journal of Law and Politics 27: 627–641.
  • Gilens Martin. 2014. Affluence and Influence: Economic Inequality and Political Power in America. Princeton University Press.
  • Harden Jeffrey J. 2011. “A Bootstrap Method for Conducting Statistical Inference with Clustered Data.” State Politics and Policy Quarterly 11(2): 223–246.
  • Hasen Richard L. 2012. “Chill Out: A Qualified Defense of Campaign Finance Disclosure Laws in the Internet Age.” Journal of Law and Politics 27: 557–574.
  • Hasen R.L. 2010. “A Semi-Objection to Bruce Cain's Semi-Case for Semi-Disclosure.” Cato Unbound. <http://www.cato-unbound.org/2010/11/15/richard-hasen/semi-objection-brucecains-semi-case-semi-disclosure>.
  • Issacharoff Samuel and Karlan Pamela. 1998. “The Hydraulics of Campaign Finance Reform.” Texas Law Review 77: 1705.
  • Kalla Joshua L. and Broockman David E. 2016. “Campaign Contributions Facilitate Access to Congressional Officials: A Randomized Field Experiment.” American Journal of Political Science 60(3): 545–558.
  • La Raja Ray J. 2014. “Political Participation and Civic Courage: The Negative Effect of Transparency on Making Small Campaign Contributions.” Political Behavior 36(4): 753–776.
  • La Raja R.J. 2007. “Sunshine Laws and the Press: The Effect of Campaign Disclosure on News Reporting in the American States.” Election Law Journal 6(3): 236–250.
  • Levinson Jessica. 2016. “Full Disclosure: The Next Frontier in Campaign Finance Law.” Denver Law Review 93(2): 431–467.
  • Lupia Arthur. 1994. “Shortcuts Versus Encyclopedias: Information and Voting Behavior in California Insurance Reform Elections.” American Political Science Review 88(1): 63–76.
  • Mayer Lloyd Hitoshi. 2010. “Disclosures About Disclosure.” Indiana Law Review 44: 255–284.
  • McCarty Nolan, Poole Keith T., and Rosenthal Howard. 2006. Polarized America: The Dance of Ideology and Unequal Riches. MIT Press.
  • McClurg Scott D. 2006. “Political Disagreement in Context: The Conditional Effect of Neighborhood Context, Disagreement and Political Talk on Electoral Participation.” Political Behavior 28(4): 349–366.
  • McGeveran William. 2003. “Mrs. McIntyre's Checkbook: Privacy Costs of Political Contribution Disclosure.” University of Pennsylvania Journal of Constitutional Law 6(1): 1–55.
  • Messner Thomas M. 2009. “The Price of Prop 8.” Heritage Foundation Backgrounder No. 2328. <www.heritage.org/research/reports/2009/10/the-price-of-prop-8>.
  • Mutz Diana C. 2002. “Cross-Cutting Social Networks: Testing Democratic Theory in Practice.” American Political Science Review 96(1): 111–126.
  • Primo David M. 2013. “Information at the Margin: Campaign Finance Disclosure Laws, Ballot Issues, and Voter Knowledge.” Election Law Journal 12(2): 114–129.
  • Rainey Carlisle. 2014. “Arguing for a Negligible Effect.” American Journal of Political Science 58(4): 1083–1091.
  • Ridout Travis N., Franz Michael M., and Fowler Erika Franklin. 2014. “Sponsorship, Disclosure, and Donors: Limiting the Impact of Outside Group Ads.” Political Research Quarterly. doi:10.1177/1065912914563545.
  • Rolfe Meredith. 2012. Voter Turnout: A Social Theory of Political Participation. Cambridge University Press.
  • Rosenbaum Paul R. 1999. “Choice as an Alternative to Control in Observational Studies.” Statistical Science 14(3): 259–278.
  • Samples John. 2010. “DISCLOSE Will Chill Speech.” Daily Caller, April 5.
  • Schlozman Kay Lehman, Verba Sidney, and Brady Henry. 2013. The Unheavenly Chorus: Unequal Political Voice and the Broken Promise of American Democracy. Princeton University Press.
  • Shaw Katherine. 2016. “Taking Disclosure Seriously,” Yale Law & Policy Review, Inter Alia. <http://ylpr.yale.edu/inter_alia/taking-disclosure-seriously>. Google Scholar
  • Sullivan K.M. 1998. “Against Campaign Finance Reform.” Utah Law Review 1998: 311. Google Scholar
  • Verba Sidney, Schlozman Kay Lehman, and Brady Henry. 1995. Voice and Equality: Civic Voluntarism in American Politics. Harvard University Press. Google Scholar
  • Wang Eric. 2013. “Disclosure's Unintended Consequences.” The Hill, September 13. Google Scholar

Appendix

Appendix A: Methods

A.1. Case selection

The Campaign Disclosure Project was a collaboration of the UCLA School of Law, the Center for Governmental Studies and the California Voter Foundation. It was supported by The Pew Charitable Trusts. Its list of principal investigators and participants includes some of the most important election law experts in the country, including Daniel Lowenstein (UCLA Law), Jessica Levinson (Loyola Law and Los Angeles Ethics Commission), and Paul S. Ryan (Senior Counsel, Campaign Legal Center).

To guide our case selection, we use the CDP's state disclosure scores as a proxy for the strength of each state's disclosure regime. Disclosure scores are available for the years 2003–2005 and 2007–2008. The scores are calculated on a 300-point system awarded across four categories:

  • 1. Disclosure laws (120 points), including disclosure of contributors' occupations and employers, reporting of last-minute contributions and independent expenditures, strong enforcement, and frequent reporting requirements.

  • 2. Electronic filing (30 points), including whether states mandate electronic filing and maintain a searchable database.

  • 3. Disclosure content accessibility (75 points), including how easy and inexpensive it is to obtain records from a distance, usually via the Internet, and ways the data could be analyzed online (e.g., searching, filtering, online analysis, and downloadable content).

  • 4. Online usability (75 points), an evaluation of the user experience on state disclosure websites, with states earning higher scores for websites that included information about the laws, disclosure requirements, and reporting periods, as well as original content such as the state's own analysis or overviews.

States are assigned letter grades based on this point system, which we convert into an ordinal numeric scale for ease of analysis: 0 for "F" up to 11 for "A." Most states improved their scores over time. In 2003 all states scored a 5 (C) or lower, with the modal score being a 0 (F). By 2008 the median score was 6 (C+). The mean score increased monotonically over the time period, from 1.4 (between a D- and a D) to 4.7 (between a C- and a C).
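For concreteness, the conversion can be written as a one-liner in R (the statistical software used elsewhere in this appendix). The 12-step grade ladder is our reading of the scale implied by the anchors above, and the function name is ours:

```r
# Convert CDP letter grades to the 0-11 ordinal scale (illustrative sketch).
grades <- c("F", "D-", "D", "D+", "C-", "C", "C+", "B-", "B", "B+", "A-", "A")
grade_to_score <- function(g) match(g, grades) - 1L  # "F" -> 0, ..., "A" -> 11
grade_to_score(c("F", "C", "A"))  # returns 0 5 11
```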

In an ideal world, we would be able to isolate the specific components that influenced the score in each subcategory. Indeed, the CDP published a list of the hundred or so variables coded for each state in each year.54 However, the data for the scoring components is unavailable. Table D1 is our attempt to capture the changes that we can still observe, a decade later, either because they appear in the CDP summaries for the states or because they were changes enshrined in law. We are unable to create our own measure using a tool like factor analysis, because the only measures available—the four sub-measures—cannot be tested for more than one factor. While the scholars and lawyers involved in the CDP are nationally recognized experts whose judgment we trust, our restricted ability to look "under the hood" of the measure is unfortunate. The two groups of states we identify fall cleanly into "big change" and "no change" states, but with more fine-grained data we could have done even more.

Partly as a result of this lack of fine-grained institutional information, and because states generally change a shifting bundle of visibility-related factors over time, we are unable to do two things. First, we cannot test a "dose" response; we cannot evaluate the relative effect of discrete institutional changes. We cannot, for example, say whether online searchability by employer deters political participation more than mandatory electronic filing by candidates, which makes disclosure information available more quickly. Further research, in an experimental setting, will be needed to pin down which features of disclosure cause the greatest amount of opting out. Second, we are unable to rule out that some increases in visibility might actually reduce the propensity to opt out, such that our negligible findings reflect the offsetting effects of two kinds of reforms working against each other. Again, we think follow-up laboratory experiments would be beneficial. Appendix A.5 discusses dose response in more depth.

A.2. Calipers

We restrict the population to the region of overlap, cutting 10 cases of people who gave less than $4 or more than $688,615. We do this to ensure complete coverage for causal inference.55 We also drop 326 contributors whose ideology scores fall outside the −2 to 2 interval, as they are so politically extreme that estimates based on their behavior might bias our results for the rest of the population. This decision drops 0.1% of the data. (Neither decision affects our results.)
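A minimal sketch of these restrictions in R, assuming a contributor-level data frame `dat` with `amount` and `ideology` columns (the names are ours, not those of the replication files):

```r
# Keep only contributors inside the region of common support.
in_support <- dat$amount >= 4 & dat$amount <= 688615    # drops 10 cases
moderate   <- dat$ideology >= -2 & dat$ideology <= 2    # drops 326 contributors (~0.1%)
dat_trim   <- dat[in_support & moderate, ]
```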

A.3. More on misreported zip codes

In addition to non-reporting of zip codes in New Jersey and Kansas for 2000, more than 10% of contributor zip codes in Arkansas, New Mexico, South Carolina, Dakota, Vermont, and Wyoming were misreported as well. It seems unlikely that such a high percentage of misreporting was initiated by the contributors, given that the rest of the states have much lower rates of misreporting, most below 4%. Moreover, misreporting decreased over time. For example, Arkansas had 704 misreported zip codes in 2000 but only 91 in 2004. Iowa had 635 in 2000 and 69 in 2004. Other states had even more drastic reductions: Arizona, Minnesota, North Dakota, Nebraska, Oregon, Virginia, Vermont, and Wyoming all reduced misreporting by over 90% between 2000 and 2004. The size of these reductions strikes us as related more to technological improvements than to a drastic change in the level of trust among contributors. Furthermore, among contributors whose zip codes are misreported in 2000 or 2004, 1,699 who contributed in both elections have an incorrect zip code in only one of the two, and they were as likely to misreport in 2000 and report correctly in 2004 as the reverse. All of this points to technological or random errors rather than to contributor concerns about privacy. We are therefore likely excluding many randomly misreported zip codes, out of an abundance of caution.

There is no statistically distinguishable difference between the amounts given by those whose zip codes are incorrect (mean $733) and those whose zip codes are correct (mean $701, p = 0.61).56

If the incorrect zip codes in the NIMSP data correlate with ideology, then our estimate could misstate the influence of ideology on opting out. Those whose zip codes were wrongly reported in the pre-period are slightly to the right, ideologically, of those whose zip codes were correctly reported (0.17 vs. 0.11, p = 0). However, the distance between them is one-tenth of a standard deviation. When we compare misreporting across treatment and control states, the ideologies of contributors with misreported zip codes in treatment and control states (0.18 and 0.15, p = 0.38) are closer than the ideologies of those with properly reported zip codes (0.08 and 0.54, p = 0). Among those who misreported only in 2008, treatment states had a 0.2% misreporting rate and control states had a 0.5% misreporting rate. It therefore seems that our results are missing zip code information for a small number of fairly moderate contributors, which, if anything, will cause us to overstate the effect we observe. While overstating is generally worrisome, here we argue for a negligible effect, so erring on the side of overstatement is the more conservative approach.
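The comparisons in this subsection are simple two-sample tests. A sketch in R, with `zip_bad` as an illustrative indicator for a misreported zip code and `treated` as the treatment-state indicator (both names are ours):

```r
# Mean contribution amount: misreported vs. correctly reported zip codes.
t.test(amount ~ zip_bad, data = dat)                          # reported: $733 vs. $701, p = 0.61

# Ideology of misreporters across treatment and control states,
# and the same comparison among correct reporters.
t.test(ideology ~ treated, data = subset(dat, zip_bad == 1))  # reported: 0.18 vs. 0.15, p = 0.38
t.test(ideology ~ treated, data = subset(dat, zip_bad == 0))  # reported: 0.08 vs. 0.54
```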

A.4. Clustered bootstrap methodology

Clustered bootstrapping allows us to circumvent a known challenge with combining a treatment dummy and fixed effects in the same regression. Fixed-effects regressions omit one of the fixed-effect categories as a reference category, and using a treatment dummy alongside fixed-effect dummies creates a quirk that, to our knowledge, the literature does not currently address: the treatment dummy means the fixed effects require a reference category—here, a reference state—from both the treatment and control groups. Our statistical software, R, always drops the alphabetically last state from each randomly selected group of states in the cluster bootstrap process. As a result, states like Wyoming and Washington have a much higher probability of being omitted from the analysis through resampling, which biases our estimates. We therefore add a step to the resampling process. We first require that two treatment and two control states be selected randomly, without replacement. (We require two because, with only one, that state would also serve as the reference category and the run would fail: the treatment dummy would be either all 1s or all 0s, with no ability to detect a 1-to-0 difference.) We then randomly select one of the treatment states and one of the control states to be the reference category, and draw, with replacement, 19 more states from the full list of 23 states. (If any state in the second draw matches a state already chosen as the reference category, we label it as reference as well.) This two-step sampling process equalizes, in expectation, the probability that any given state serves as the reference category within the treatment and control groups over the 1,000 replications.
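To make the procedure concrete, the following condensed sketch shows one replication in R. The object and column names (`dat`, `state`, `treated`, `post`, `repeat_contrib`) are illustrative stand-ins for our replication code, and error handling is omitted:

```r
boot_once <- function(dat) {
  treat_states   <- unique(dat$state[dat$treated == 1])
  control_states <- unique(dat$state[dat$treated == 0])

  # Step 1: draw two treatment and two control states without replacement,
  # so the reference is never a group's only member and the treatment
  # dummy is never constant.
  base <- c(sample(treat_states, 2), sample(control_states, 2))

  # One treatment and one control state serve as the (pooled) reference.
  refs <- c(sample(base[1:2], 1), sample(base[3:4], 1))

  # Step 2: draw 19 more states, with replacement, from the full list of 23.
  sampled <- c(base, sample(c(treat_states, control_states), 19, replace = TRUE))

  pieces <- lapply(seq_along(sampled), function(i) {
    d <- dat[dat$state == sampled[i], ]
    # Re-drawn clusters get distinct ids; draws that match a reference
    # state are labeled as reference as well.
    d$state_id <- if (sampled[i] %in% refs) "ref" else paste0(sampled[i], "_", i)
    d
  })
  boot_dat <- do.call(rbind, pieces)

  # Linear probability model with state fixed effects; the pooled "ref"
  # level (one treatment plus one control state) is the omitted category.
  boot_dat$state_id <- relevel(factor(boot_dat$state_id), ref = "ref")
  fit <- lm(repeat_contrib ~ treated * post + state_id, data = boot_dat)
  coef(fit)["treated:post"]
}

# est <- replicate(1000, boot_once(dat))
# quantile(est, c(0.05, 0.95))  # 90% confidence interval
```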

A.5. Is there a dose response?

We cannot detect a dose response, given the nature of the data. Different configurations of institutions and data availability combine to produce the same scores and the same magnitudes of improvement. The aggregated nature of the data does not permit us to say whether, for example, a three-point increase in disclosure score has a larger effect when it starts from a lower baseline.

We present below the raw repeater drop-off for each state, along with the 2004 and 2008 disclosure scores. Figure A1 displays the information in Table A1 in a way that might help us detect a dose response. Each point corresponds to a state and is located at the intersection of the state's 2004 disclosure score and the magnitude of its improvement by 2008. The size of the point indicates the average drop-off for that state. If starting with very little disclosure and increasing data availability at all causes bigger effects than starting with some disclosure and increasing data availability, we would expect to see larger points on the left side of the figure, which corresponds to the lower disclosure scores. We do not. The average effect for states with a score of 0 in 2004 is −0.13, and the average effect for states with a score greater than 0 in 2004 is −0.123. The average effect for states with a score of 0 or 2 in 2004 is also −0.13, and the average effect for states with a score greater than 2 in 2004 is −0.127.
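These grouped averages can be reproduced directly from Table A1. A short R sketch, entering the table by hand:

```r
# Table A1 as a data frame: 2004 score and 2004-2008 contributor drop-off.
a1 <- data.frame(
  state   = c("AR","AZ","CO","IA","KS","MN","NC","NJ","NY","OK","OR","SC","VA","WV"),
  score04 = c(0, 3, 4, 0, 0, 2, 3, 5, 2, 4, 2, 0, 3, 0),
  drop    = c(-0.09, -0.12, -0.11, -0.14, -0.14, -0.08, -0.14,
              -0.11, -0.13, -0.16, -0.14, -0.12, -0.12, -0.16)
)
mean(a1$drop[a1$score04 == 0])  # -0.13
mean(a1$drop[a1$score04 >  0])  # -0.123
mean(a1$drop[a1$score04 <= 2])  # -0.125 (reported as -0.13)
mean(a1$drop[a1$score04 >  2])  # -0.127
```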

FIG. A1. Each treatment state's 2004 disclosure score plotted against its improvement over time. The size of each point reflects the overall contributor drop-off rate for the state making the improvement, with larger dots indicating larger drop-offs.

Table A1. Raw Data on State Disclosure Scores and Repeat Contributor Participation Decreases from 2004 to 2008

State   Score 04   Score 08   Score diff   Contributor drop 04–08
AR      0          3          3            −0.09
AZ      3          7          4            −0.12
CO      4          8          4            −0.11
IA      0          3          3            −0.14
KS      0          3          3            −0.14
MN      2          7          5            −0.08
NC      3          7          4            −0.14
NJ      5          8          3            −0.11
NY      2          8          6            −0.13
OK      4          7          3            −0.16
OR      2          9          7            −0.14
SC      0          5          5            −0.12
VA      3          9          6            −0.12
WV      0          6          6            −0.16

Appendix B: Tables to Support Figures 2 Through 4

Table B1. Repeat Contributions Among 2000 Contributors in 14 Treatment States and 9 Control States, Analyzed Based on the Amount Contributed

                      $100 or less     $101 to $249     $250 to $499     $500 to $999     > $999
Intercept             0.14             0.14             0.17             0.24             0.38
                      [0.1, 0.24]      [0.1, 0.24]      [0.1, 0.34]      [0.14, 0.56]     [0.22, 0.79]
Treatment             0.02             0.02             0.06             0.06             0.01
                      [−0.12, 0.26]    [−0.12, 0.26]    [−0.15, 0.31]    [−0.23, 0.29]    [−0.41, 0.31]
Post                  −0.04            −0.04            −0.04            −0.06            −0.01
                      [−0.06, −0.01]   [−0.06, −0.01]   [−0.08, −0.01]   [−0.1, −0.02]    [−0.11, −0.05]
Treatment × Post      −0.03            −0.03            −0.03            −0.03            −0.01
                      [−0.07, 0.01]    [−0.07, 0.01]    [−0.08, 0.02]    [−0.08, 0.02]    [−0.07, 0.03]
State fixed effects   yes              yes              yes              yes              yes
Median N. Obs.        63,450           28,488           29,169           20,556           24,945

Dependent variable is whether the contributors gave again in a subsequent cycle (2004 or 2008). Difference-in-differences estimates of the difference in contribution percentages in 2008 and 2004 for treatment and control groups are shown in boldface. Confidence intervals (90%) are provided below the estimates. They are generated using a cluster bootstrap with 1,000 replications. These estimates are used to construct Figure 2.
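The estimates in Table B1 (and in Tables B2 and B3 below) come from difference-in-differences linear probability models fit within each panel. A sketch of one cell in R, with illustrative names (`dat`, `bracket`, `repeat_contrib`, `treated`, `post` are ours):

```r
# DiD linear probability model for one amount bracket.
sub <- subset(dat, bracket == "$100 or less")
fit <- lm(repeat_contrib ~ treated * post + factor(state), data = sub)
coef(fit)["treated:post"]   # the boldface DiD estimate
# The 90% confidence interval comes from the two-step cluster bootstrap
# of Appendix A.4 (1,000 replications), not from lm() standard errors.
```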

Table B2. Repeat Contributions Among 2000 Contributors in 14 Treatment States and 9 Control States, Analyzed Based on the Ideological Distance from the Average Contributor in One's Zip Code

                      < −0.5           −0.5 to −0.01    0 to 0.49        > 0.5
Intercept             0.14             0.31             0.15             0.31
                      [0.09, 0.34]     [0.16, 0.62]     [0.08, 0.32]     [0.14, 0.45]
Treatment             0.04             −0.03            0.02             0.01
                      [−0.16, 0.2]     [−0.34, 0.18]    [−0.16, 0.18]    [−0.18, 0.19]
Post                  −0.03            −0.12            −0.03            −0.01
                      [−0.06, 0.01]    [−0.22, −0.05]   [−0.05, 0.02]    [−0.12, −0.06]
Treatment × Post      0.01             0.04             −0.03            −0.01
                      [−0.04, 0.05]    [−0.05, 0.15]    [−0.07, 0]       [−0.06, 0.02]
State fixed effects   yes              yes              yes              yes
Median N. Obs.        31,546           26,441           40,043           31,541

A positive ideological distance means the contributor is to the right of the average contributor in the zip code. A negative distance means the contributor is to the left of the average contributor in the zip code. Dependent variable is whether the contributors gave again in a subsequent cycle (2004 or 2008). Difference-in-differences estimates of the difference in contribution percentages in 2008 and 2004 for treatment and control groups are shown in boldface. Confidence intervals (90%) are provided below the estimates. They are generated using a cluster bootstrap with 1,000 replications. These estimates are used to construct Figure 3.

Table B3. Repeat Contributions Among 2000 Contributors in 14 Treatment States and 9 Control States, Analyzed Based on the Measure of Raw Ideology, or Conservatism Score

                      < −1             −1 to −0.5       −0.49 to −0.01   0 to 0.49        0.5 to 0.99      1 and above
Intercept             0.16             0.18             0.16             0.28             0.25             0.17
                      [0.01, 0.33]     [0.12, 0.38]     [0.09, 0.39]     [0.18, 0.41]     [0.13, 0.55]     [0.04, 0.37]
Treatment             0.08             0.06             0                0                −0.01            −0.05
                      [−0.14, 0.27]    [−0.17, 0.23]    [−0.26, 0.21]    [−0.19, 0.23]    [−0.33, 0.16]    [−0.24, 0.15]
Post                  −0.05            −0.03            −0.08            −0.13            −0.06            −0.05
                      [−0.09, 0.04]    [−0.06, 0]       [−0.12, 0]       [−0.23, −0.01]   [−0.1, −0.01]    [−0.08, −0.03]
Treatment × Post      −0.01            −0.02            0.04             0.02             −0.01            0.02
                      [−0.1, 0.05]     [−0.06, 0.03]    [−0.05, 0.09]    [−0.1, 0.12]     [−0.07, 0.03]    [−0.01, 0.04]
State fixed effects   yes              yes              yes              yes              yes              yes
Median N. Obs.        11,122           23,454           23,023           16,355           40,093           14,129

Dependent variable is whether the contributors gave again in a subsequent cycle (2004 or 2008). Difference-in-differences estimates of the difference in contribution percentages in 2008 and 2004 for treatment and control groups are shown in boldface. Confidence intervals (90%) are provided below the estimates. They are generated using a cluster bootstrap with 1,000 replications. These estimates are used to construct Figure 4.

Appendix C: Robustness Checks

C.1. Results using a two-point change threshold

To test our identifying assumption that a disclosure score improvement of three points or more constitutes "treatment," we relaxed the threshold so that a two-point change counts as treatment. This increased the size of the treatment group by five states (Hawaii, Montana, New Hampshire, Texas, Wisconsin) and 40,024 contributors. We present the main results using this new treatment group in Table C1. The resulting point estimate is 0, with a maximum negative effect of −0.03. If anything, then, using a two-point threshold would strengthen our argument that disclosure has negligible effects here. In the interest of social scientific integrity (to avoid data mining), we stick with the three-point threshold from our initial research design.

Table C1. Average Effects of Increased Disclosure Among 215,668 Contributors in 19 Treatment States and 9 Control States, Where the Threshold of Determining Whether a State Is in the Treatment Group Is Relaxed to a Two-Point Improvement in Disclosure Scores

                      Model 1
Intercept             0.18
                      [0.15, 0.35]
Treatment             0.02
                      [−0.14, 0.13]
Post                  −0.07
                      [−0.08, −0.05]
Treatment × Post      0.0005
                      [−0.03, 0.02]
Fixed effects         yes

All members of the sample contributed in the year 2000. Dependent variable is whether the contributors gave again in a subsequent cycle (2004 or 2008). Difference-in-differences estimates of the difference in contribution percentages in 2008 and 2004 for treatment and control groups are shown in boldface. Confidence intervals (90%) are provided below the estimates. They are generated with clustered bootstrapping (1,000 replications).

C.2. Placebo test with federal data

In this section, we present a placebo test with federal contribution data. Figures C1, C2, and C3 echo the tables and figures in the main text; the only difference is that the underlying data is federal contributions to candidates for the U.S. House of Representatives from a given state.

FIG. C1. Repeat contributions in a given federal election cycle by amount contributed in 2000 to federal elections, calculated with 1,000 bootstrapped difference-in-differences regressions. The repeating percentage decreases in control states (solid black line) and decreases slightly more in treatment states (dashed, medium gray line) in the wake of enhanced visibility. The division of amounts is the same as in the main text, though at the federal level disclosure only occurs for amounts of $250 and over.

FIG. C2. Repeat federal contributions to same-state candidates by 2000 contributors in the years 2004 and 2008, grouped by each contributor's ideological distance from others in their zip codes. Within-panel difference-in-differences estimates with 90% confidence intervals reported.

FIG. C3. Repeat federal contributions to same-state candidates by 2000 contributors in the years 2004 and 2008, grouped by ideological ranges (without taking ideological distance into account). Within-panel difference-in-differences estimates with 90% confidence intervals reported. Confidence intervals are generated using clustered bootstraps (1,000 replications).

Because there were no changes in federal disclosure laws over the 2004–2008 time period (and because, even if there were changes, they would affect contributors from all states equally), we should not observe any differences between treatment and control states; estimates should be close to zero. If the estimates from the federal data were less negative (more positive) than the estimates from the state data in the main text, the implied triple difference would suggest a real treatment effect of enhanced disclosure among those contributing to state races, over and above any federal-level trend. But what we see, almost across the board, is that estimates are more negative at the federal level. Moreover, for the most part, the lower bound of the 90% confidence interval is lower for the federal estimates than for the state estimates.

These results help support our argument that the effect at the state level is negligible: estimates on the state contributor data are the same as, or closer to zero than, the effects we observe where there was no treatment at all, among federal contributors.

Appendix D: Legal Changes in Treatment States

The following multipage table, Table D1, shows legal changes in treatment states over the time period.

Table D1. Website and Data Availability Data from the Campaign Disclosure Project

Arkansas (Treatment). Web nav. added: 2005, 2008. Searchable by name: 2007; by amount: 2008. Download data: 2008. Electr. filing: 2008(v). Current statutes: A.C.A. 7-6-201 (amended 2009, 2011, 2013); A.C.A. 7-6-203 (amended 2009, 2011, 2013); A.C.A. 7-6-204–06; A.C.A. 7-6-207 (amended 2009, 2011, 2013); A.C.A. 7-6-214; A.C.A. 7-6-218 (amended 2013); A.C.A. 7-6-220 (amended 2011).

Arizona (Treatment). Web nav. added: 2005, 2008. Searchable by name: 2003; by geography: 2008. Electr. filing: ≤2003. Current statutes: A.R.S. 16-901; A.R.S. 16-902.01 (amended 2008, 2010, 2012); A.R.S. 16-904; A.R.S. 16-912–15; A.R.S. 16-916 (amended 2008, 2010, 2012); A.R.S. 16-918 (amended 2008, 2010); A.R.S. 16-943; A.R.S. 16-958 (amended 2012).

Colorado (Treatment). Web nav. added: 2007, 2008. Searchable by name: 2003; by amount: 2003. Electr. filing: 2003(v), 2007(m). Current statutes: C.R.S.A. 1-45-108; C.R.S.A. 1-45-109 (amended 2009, 2010); C.R.S.A. Const. Art. 28, 2, 5, 7, 9, 10.

Iowa (Treatment). Web nav. added: 2004, 2005, 2008. Electr. filing: 2003(v), law passed. Data price ↓: 2005, paper copies. Current statutes: I.C.A. 68A.102 (amended 2008, 2010); I.C.A. 68A.201 (amended 2010); I.C.A. 68A.203; I.C.A. 68A.401; I.C.A. 68A.402 (amended 2008, 2010); I.C.A. 68A.404 (amended 2008, 2009, 2010); I.C.A. 68A.405; I.C.A. 68A.501; I.C.A. 68B.32A (amended 2008, 2010).

Kansas (Treatment). Web nav. added: 2004, 2007, 2008. Searchable by name: 2003; by geography: ≤2008; by amount: 2007. Electr. filing: 2008(v). Current statutes: K.S.A. 25-4143–45; K.S.A. 25-4147–48 (amended 2008, 2009, 2011); K.S.A. 25-4150; K.S.A. 25-4153–54; K.S.A. 25-4156–57; K.S.A. 25-4167–68; K.S.A. 25-4173.

Minnesota (Treatment). Web nav. added: 2005, 2007. Searchable by name: ≤2008; by geography: 2007; by employer: 2007; by amount: 2003. Download data: 2008. Electr. filing: 2003(v). Current statutes: M.S.A. 10A.01 (amended 2008, 2010, 2013, 2014); M.S.A. 10A.02; M.S.A. 10A.20 (amended 2010, 2013, 2014); M.S.A. 10A.14 (amended 2008, 2010, 2013).

North Carolina (Treatment). Web nav. added: 2004, 2005, 2008. Download data: 2004. Electr. filing: 2003(v) legis.; 2003(m) state, $5,000+. ↓ report threshold: 2006, from $100 to $50. Current statutes: N.C.G.S.A. 163-278.

New Jersey (Treatment). Web nav. added: 2005, 2007, 2008. Searchable by name: 2007; by geography: 2007; by employer: 2007; by amount: 2007. Download data: 2005. Electr. filing: 2003(v); 2007(m), $100k+. ↓ report threshold: 2004, from $400 to $300. Current statutes: N.J.S.A. 19:44A-11; N.J.S.A. 19:44A-16; N.J.S.A. 19:44A-20; N.J.S.A. 19:44A-22.3.

New York (Treatment). Web nav. added: 2004, 2005. Searchable by name: 2003; by geography: 2005; by amount: 2005. Download data: 2007. Electr. filing: 2003(m), $1k+; 2004(m), all. Current statutes: NY ELEC 14-118; NY ELEC 14-112; NY ELEC 14-108; NY ELEC 14-102; NY ELEC 14-100; NY ELEC 14-104; NY ELEC 14-110; NY ELEC 14-128; NY ELEC 14-120; NY ELEC 14-106.

Oklahoma (Treatment). Web nav. added: 2004, 2005, 2007. Searchable by name: 2008; by geography: 2008; by employer: 2008; by amount: ≤2007. Download data: 2008. Electr. filing: 2003(v); 2006(m), $20k+. Current statutes: 74 Okl. St. Ann. 4256; 74 Okl. St. Ann. 4255; OK ST Ethics Commission 257:10-1-13; Okl. Const. Art. 29, 3; OK ST Ethics Commission 257:10-1-12; OK ST Ethics Commission 257:1-1-2; OK ST Ethics Commission 257:10-1-11; OK ST Ethics Commission 257:10-1-19; OK ST Ethics Commission 257:10-1-18; OK ST Ethics Commission 257:10-1-15.

Oregon (Treatment). Web nav. added: 2004, 2005. Searchable by geography: 2008; by employer: 2008. Download data: 2007. Electr. filing: 2003(m), $50k+; 2007(m), $2k+. Current statutes: O.R.S. 260.055; O.R.S. 260.039; O.R.S. 260.057; O.R.S. 260.005; O.R.S. 260.083; O.R.S. 260.112; O.R.S. 260.043; O.R.S. 260.046.

South Carolina (Treatment). Searchable by name: 2006. Electr. filing: 2006. Current statutes: SC ST 8-13-1302; SC ST 8-13-1306; SC ST 8-13-1308; SC ST 8-13-1310; SC ST 8-13-1304; SC ST 8-13-1368; SC ST 8-13-1360; SC ST 8-13-1324; SC ST 8-13-1300.

Virginia (Treatment). Searchable by name: 2008; by geography: 2005; by amount: 2008. Electr. filing: 2003(m) state; 2003(v) legis. Current statutes: VA Code Ann. 24.2-945.2; VA Code Ann. 24.2-956; VA Code Ann. 24.2-957.1; VA Code Ann. 24.2-958.1; VA Code Ann. 24.2-959; VA Code Ann. 24.2-943; VA Code Ann. 24.2-945.1; VA Code Ann. 24.2-947.1; VA Code Ann. 24.2-948.4.

West Virginia (Treatment). Searchable by name: 2008; by geography: 2008; by employer: 2008. Electr. filing: 2004(v); 2007(m) state. Current statutes: W. Va. Code 3-1B-2; W. Va. Code 3-8-5; WV ST 3-8-2; W. Va. Code 3-8-1a.

Maryland (Control). Searchable: yes (fields not specified). Download data: 2004. Electr. filing: 2003(m), $5k+. Current statutes: MD Code, Election Law, 1-101; MD Code, Election Law, 13-208; MD Code, Election Law, 13-305; MD Code, Election Law, 13-309; MD Code, Election Law, 13-311; MD Code, Election Law, 13-316; MD Code, Election Law, 13-304; MD Code, Election Law, 13-207; MD Code, Election Law, 13-312; MD Code, Election Law, 13-222; MD Code, Election Law, 13-221.

Mississippi (Control). Electr. filing: NA. Current statutes: Miss. Code Ann. 23-15-801; Miss. Code Ann. 23-15-807; Miss. Code Ann. 23-15-809; Miss. Code Ann. 23-15-803; Miss. Code Ann. 23-15-805; Miss. Code Ann. 23-15-817; Miss. Code Ann. 23-15-813; Miss. Code Ann. 23-15-811.

North Dakota (Control). Searchable by name: 2004; by geography: 2004. Download data: 2004. Electr. filing: NA. Current statutes: NDCC, 16.1-08.1-02; NDCC, 16.1-08.1-04; NDCC, 16.1-08.1-01; NDCC, 16.1-08.1-03.3; NDCC, 16.1-08.1-06; NDCC, 16.1-08.1-05; NDCC, 16.1-08.1-07.

Nebraska (Control). Web nav. added: 2007. Searchable by name: 2005. Electr. filing: NA. Current statutes: Neb. Rev. St. 49-1410; Neb. Rev. St. 49-1445; Neb. Rev. St. 49-1449; Neb. Rev. St. 49-1454; Neb. Rev. St. 49-1453; Neb. Rev. St. 49-1470; Neb. Rev. St. 49-1450; Neb. Rev. St. 49-1462; Neb. Rev. St. 49-1459; Neb. Rev. St. 49-1456; Neb. Rev. St. 49-1478.01; Neb. Rev. St. 49-1472.

New Mexico (Control). Searchable by name: 2008. Electr. filing: 2003(v); 2006(m). Data price ↓: 2004, copies to 10¢. Current statutes: N.M.S.A. 1978, 1-19-26; N.M.S.A. 1978, 1-19-33; N.M.S.A. 1978, 1-19-26.1; N.M.S.A. 1978, 1-19-29; N.M.S.A. 1978, 1-19-27; N.M.S.A. 1978, 1-19-31.

Nevada (Control). Web nav. added: 2005, 2008. Electr. filing: (v). Data price ↓: 2005, copies from $1 to 50¢. Current statutes: N.R.S. 294A.120 (amended 2011, 2013); N.R.S. 294A.380; N.R.S. 294A.400 (amended 2011, 2013); N.R.S. 294A.420 (amended 2011, 2013); N.R.S. 294A.341; N.R.S. 294A.190.

South Dakota (Control). Electr. filing: NA. Current statutes: SDCL 12-27-24; SDCL 12-27-11; SDCL 12-27-1; SDCL 12-27-3 (amended 2010, 2012); SDCL 12-27-22 (amended 2008, 2010); SDCL 12-27-25 (amended 2008); SDCL 12-27-29; SDCL 12-27-6; SDCL 12-27-28 (amended 2008); SDCL 12-27-16 (amended 2010); SDCL 12-27-15; SDCL 12-27-32; SDCL 12-27-42 (amended 2008).

Vermont (Control). Web nav. added: 2005, 2008. Electr. filing: NA. Current statutes (all repealed): 17 V.S.A. 2801–03; 17 V.S.A. 2805–06; 17 V.S.A. 2810–11; 17 V.S.A. 2882–83; 17 V.S.A. 2892–93.

Wyoming (Control). Web nav. added: 2007. Electr. filing: NA. Current statutes: W.S.1977 22-25-101; W.S.1977 22-25-102 (amended 2009, 2011, 2015); W.S.1977 22-25-106 (amended 2011); W.S.1977 22-25-107 (amended 2008, 2013); W.S.1977 22-25-108 (amended 2015); W.S.1977 22-25-110 (amended 2011); W.S.1977 22-25-112.

Legal citations are from Westlaw. Years given are the years in which the Project reports the improvements were made. Many fields are self-explanatory, but not all. "Web nav. added" lists the years in which the Campaign Disclosure Project noted that the website had enhanced navigability. Contributions searchable by "geography" are searchable by zip code or address. "Download data" gives the year data was first made downloadable from the website. For "Electr. filing," (m) indicates mandatory electronic filing and (v) indicates voluntary electronic filing. Some states included only data filed electronically in searchable databases; others included scanned, handwritten filings as "electronic" filings, which greatly reduces searchability. "Data price ↓" captures the year in which the price of data (usually on paper or via CD) was reduced. "↓ report threshold" gives the year and amount of any reduction in the threshold for reporting a contribution.

1 424 U.S. 1 at 68 (1976).

2 540 U.S. 93 at 321 (2003).

3 558 U.S. 310 (2010).

4 As in McConnell, only Justice Thomas dissented on the point of disclosure, arguing that the First Amendment protects anonymous free speech and that disclosure might lead to retaliation by one's political nemeses.

5 681 F.3d 544 (4th Cir. 2012).

6 697 F.3d 464 (7th Cir. 2012).

7 720 F.3d 788 (10th Cir. 2013).

8 706 F.3d 270 (4th Cir. 2013).

9 752 F.3d 827 (9th Cir. 2014).

10 235 Ariz. 347 (Ariz. Ct. App. 2014).

11 771 F.3d 285 (5th Cir. 2014).

12 793 F.3d 304 (3rd Cir. 2015), cert. denied, 579 U.S. ___ (2016).

13 811 F.3d 486 (D.C. Cir. 2016), request for rehearing denied, __ F.3d __ (D.C. Cir. 2016).

14 The federal disclosure law proposed in the immediate wake of Citizens United, the DISCLOSE Act, aimed at disclosure of independent expenditures, defined as spending that “expressly advocat[es] the election or defeat of a clearly identified candidate that is not made in cooperation, consultation, or concert with, or at the request or suggestion of, a candidate, a candidate's authorized committee, or their agents, or a political party or its agents.” 11 C.F.R. § 100.16(a). We note the important practical and jurisprudential distinction between independent expenditures and direct contributions to candidates. Citizens United invalidated a ban on corporate independent expenditures, but it had no effect on the regulation of direct contributions. Indeed, the federal ban on corporate contributions to candidates persists today. As a matter of law, courts apply heightened scrutiny to laws that regulate independent expenditures but are more lenient with respect to the regulation of direct contributions to candidates. Nevertheless, regulations of both types of political spending have fared poorly in the federal courts recently. The data and analysis that we present is limited to direct contributions to candidates.

15 Citizens United, 558 U.S. at 369.

16 Citizens United, 558 U.S. 310, 366–67 (2010) (quoting Buckley v. Valeo, 424 U.S. 1, 64 (1976)).

17 Brown v. Socialist Workers Comm., 459 U.S. 87, 88 (1982) (quoting Buckley v. Valeo). See also NAACP v. Alabama ex. rel. Patterson, 357 U.S. 449 (1958) (holding that compelled disclosure of NAACP membership lists would have a repressive effect on the right to associate because of likely harassment against list members).

18 Buckley, 424 U.S. at 66–67.

19 Ackerman and Ayres (2002) have proposed to anonymize political donations. In theory, anonymous contributions would render moot the informational asymmetry between candidates and the public and also combat quid pro quo corruption as officeholders would not know who supported them and thus would not be able to reward large donors.

20 Candidates might also benefit from disclosure, insofar as they might like to publicize the composition of their donor pools—for populist claims—or use the deluge of information to hide less desirable contributions. But voluntary disclosure is always possible for candidates—indeed, around 17% of our data involves disclosure of contributions below the legal thresholds. So a legal change should not prevent candidates from disclosing, if they think this benefit outweighs the costs described below.

21 The public faces costs, too. Ramping up disclosure requires fiscal expenditures to support data conversion and storage, though these costs are likely a one-time expenditure that becomes negligible over time. Whatever the expense, public costs cannot explain behavioral impacts of disclosure laws.

22 424 U.S. at 58 (emphasis added).

23 424 U.S. at 83.

24 It is also possible that small donors would drop out of the donor pool because candidates would stop soliciting their donations. If the administrative costs to candidates are sufficiently burdensome, they might forgo the solicitation of small-money donors in favor of rich donors where the net gain—money received minus the administrative costs related to disclosure—is highest.

25 We operationalize our second hypothesis by identifying individuals who make the smallest and the largest contributions in successive elections and comparing the percentage of donors during election cycle EC(t−1) who give again in election cycle EC(t). Equally important is the effect of disclosure on potential contributors who decide not to give in the wake of disclosure rules, regardless of past behavior. By definition, these individuals are unobserved, and our findings do not directly speak to their behavior.
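In symbols (the set notation here is ours): letting D(t−1) and D(t) be the sets of donors observed in cycles EC(t−1) and EC(t), the quantity compared across groups is the repeat rate

```latex
r_t = \frac{\lvert D_{t-1} \cap D_t \rvert}{\lvert D_{t-1} \rvert}
```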

26 Our use of expert-informed data follows a tradition of using similar data in both political science and economics. See, e.g., Clinton and Lewis (2008) and Fisman and Miguel (2007). More fine-grained data would allow us to make more precise claims about which aspects of disclosure impose the most costs.

27 A full description of the Campaign Disclosure Project's (CDP's) methodology and discussion of its limits is presented in Appendix A.1.

28 We present the results of a two-point cutoff in Appendix C.1. The results are even closer to zero, as expected.

29 National Institute on Money in State Politics (NIMSP) data can be downloaded at <http://www.transparencydata.com/bulk/>. This data is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License by NIMSP.

30 The Database on Ideology, Money in Politics, and Elections (DIME) uses fuzzy matching to link records. For information on the accuracy of matches on a training set (fairly high, but not perfect), see the supplemental materials to Bonica (2014).

31 For 2000 contributors who gave again in 2004, we use the maximum of their relative contributions in 2004 (if they gave to multiple seats) as a measure of their relative importance. If contributors skipped the 2004 election, we use the maximum of their relative contributions for the year 2000 as their measure of relative importance in that election.

32 We note one important caveat about our process of using zip codes in the DIME data. The data contains both missing and misreported zip codes. After omitting two states that did not report zip codes in 2000, 3.7% of contributor zip codes are misreported in 2000 (6,163 of 164,278). The number drops sharply when we examine misreported zip codes among the 2000 contributors who gave in 2004 (974 of 41,374 contributors to both elections) and 2008 (502 of 30,071 repeat contributors). Despite several attempts, we have found no way in which our results are confounded by the missing data. For a full explanation, see Appendix A.3.

33 If a contributor gave in 2000 but not 2004, we use the ideological distance in 2000. If they gave in both 2000 and 2004, we use the ideological distance in 2004. (Ideological distance changed very little over the time period, with the median and mean difference over time for 2000 and 2004 contributors being 0.02.) For measures of ideological distance, we omit zip codes that have only one contributor, which removes 0.1% of contributors from the data.

34 The primary imbalance is from the sheer size of the two groups—approximately 94% of the contributors are in treatment states. Nevertheless, the groups overlap almost completely for the key individual-level covariates.

35 Our study coincides with several same-sex marriage campaigns. Same-sex marriage was on the ballot in Arizona and Arkansas, both treatment states, and in two states that are not in our sample (California and Florida). We do not measure contributions for ballot initiatives. Nevertheless, there is a chance that contributors in Arizona and Arkansas were either more (policy-oriented) or less (privacy-concerned) likely to support candidates with strong stances on same-sex marriage in the post-period, which our data could pick up. Arizona and Arkansas comprise 10% of the treatment state contributors in the year 2000 and 7% of the repeat contributors in 2008, but the attrition in those states (85% in Arkansas and 90% in Arizona) was equal to or higher than the average attrition across treatment states (85% for all states, 84% not including Arizona and Arkansas).

36 Following Rosenbaum (1999) we note that in an observational study of this nature, generalizability is a secondary concern to making the cleanest possible causal inference, though with 46% of the states in the country under analysis, we think generalizability is fairly strong here.

37 Confidence intervals provide more information than simple rejection tests of the null hypothesis. In particular, hypothesis tests require the user to define a threshold, m, that distinguishes between negligible and non-negligible effects. The value of m is often arbitrary and sometimes as contested as the statistical tests used to identify effects. In some cases, courts have adopted a particular m (a “bright line”) to aid lawmakers, watchdogs, and judges in future cases. There is no such threshold in First Amendment law. As a result, our interpretation is guided by 90% confidence intervals because they “… contain the same information as two one-sided tests and, compared to p-values, [are] simpler for applied researchers to implement and easier for readers to interpret. Confidence intervals also provide readers with important information about the robustness of the test to the choice of m … ” (Rainey 2014).

38 With a binary dependent variable, logistic regression is a possibility. We opt for linear probability models (LPM) for ease of interpretation, particularly on interaction terms that are central to our analysis. For a discussion of the similarities of LPM and logit/probit regression, including the limits of LPM, see Angrist and Pischke (2009).

39 This decision does not affect our results. Indeed, when it comes to amount contributed, repeat donors are creatures of habit—the modal difference in amount is 0.

40 Interpretations of actual levels of giving are noisy due to the sensitivity of the intercept to the small size of the control group relative to the treatment group.

41 We can repeat the analysis with the most negative lower bound of the confidence intervals in Table 3, which is −0.06. Ninety percent of the estimates in our bootstrapping exercise imply that fewer than 9,880 contributors opted out in treatment states where we would otherwise expect them to contribute above the contribution threshold in 2008. That puts the outer limit on the observed estimates at 705 missing contributors per treatment state, 6.8 per district, and 2.5 per candidate.

42 A second way we could understand a generalized disclosure effect is as a violation of the stable unit treatment value assumption (SUTVA)—that changes in the treatment states bled over to the control states. We cannot rule out a SUTVA violation. There were no major changes at the federal level that would make donors in control states more aware of disclosure generally, so any violation would have to come from contributors in control states somehow learning that candidates for state office in treatment states were now subject to additional disclosure, or that the secretaries of state in treatment states had changed various features of their websites. We think neither is likely, particularly given both reviewers' very valid concern that the states do not seem to have communicated the disclosure changes directly to the public in their own states, much less across states. Nevertheless, if there is a SUTVA violation, it would bias our findings toward a null result, which is undesirable in an article arguing for a negligible result. We thank an anonymous reviewer for bringing this challenge to our attention.

43 Contribution data is clumpy, as contributors tend to give amounts in multiples of $50. Contributors of $100 and $250, for example, make up almost 40% of the data.

44 Here, we exclude the C term, which represents each contributor's amount or ideology (individual-level characteristics), since we are now grouping on those variables to test hypotheses 2 and 3.

45 Because only 2 of our 14 treatment states reduced the disclosure threshold, we think that an increased administrative burden cannot explain these results, particularly since it would be expected to affect contributors at the more "modest" end of the contribution spectrum, who are already unlikely to repeat their contributions.

46 Because of variations in cost of living across the states, as well as the different state sizes represented by the two groups, we have repeated this analysis using relative amount measures based on the seat and state. When we use relative amounts, the difference-in-differences estimates are more likely to be positive, particularly among relatively smaller contributors. The lowest lower bound of a confidence interval in any panel is −0.08, though confidence intervals tend to be wider than those reported in Figure 2. This is probably due, in part, to the imbalance in the sizes of treatment and control groups in some panels, and to our requirement that at least two control and two treatment states appear in each randomization, since one of each is a reference category. About 80% of the time, we failed to find at least two control states with enough contributors to identify an effect in a panel, meaning that the estimates described here are drawn from fewer successful replications (around 200).

47 Zip codes are admittedly an imperfect proxy for geography, though they do reflect several relevant characteristics of neighborhoods, such as ease of travel and the volume of mail. We note some important data-centric challenges, including the missing or incorrect zip codes described above (dropped for this analysis), and some cases (0.2% of the sample) where our subject was the sole contributor from his or her zip code (also dropped for this analysis). Future researchers might be interested to know that the DIME data provides estimated latitude and longitude coordinates for each contributor, which may provide more precise measures of geographic proximity between contributors, to the extent the coordinates are reliable.

48 Full model specifications are presented in Appendix B.

49 Further note that we retain the sign for all contributors. Thus, we code contributors as having negative distance if they are classified as negative on the conservatism scale (meaning ideologically more liberal than the national median voter).

50 This restriction drops 20% of the sample. We generate the mean zip code values using the full sample, then drop donors that we cannot properly line up in the panels. If we include the full sample, the differing signs for these contributors counteract the effects of the limited sample, and the result is an artificial increase among moderates.

51 See Appendix B for full regression estimates.

52 We also note that the overall contributor pool in treatment states becomes slightly more liberal after disclosure is strengthened, with the average donor moving 0.08 to the left (−0.08). At the same time, ideology among control-state contributors shifts slightly to the right by 0.1 (from 0.49 to 0.59). In other words, campaign finance disclosure may also have more structural impacts, shifting the donor pool further to the left than it otherwise would be, using control states as a counterfactual. The raw difference-in-differences is a leftward shift of −0.18, or about one-quarter of a standard deviation.

53 We do test the robustness of our findings with a placebo test that uses federal contribution data. Because federal disclosure rules did not change more in treatment states than in control states over the time period, we expect to find no effect. Most estimates for heterogeneity for amount and ideology are similar in magnitude to the findings for state contributions, though confidence intervals are much wider, given that the treatment and control groups are essentially arbitrary. Results are presented in Appendix C.2.

54 See “Grading State Disclosure Criteria,” at <http://www.campaigndisclosure.org/gradingstate2007/appendix3.html>.

55 We trim because there is no valid counterfactual for those 10 observations.

56 Both groups have a median of $200. The lack of difference persists when we look within group at misreporters and non-misreporters. Among treatment group contributors, misreporters gave a mean of $637, and those without misreported zip codes gave a mean of $662 (p = 0.55). Among control contributors, the numbers are $1,098 and $1,294, respectively (p = 0.45).
