A Measure of Media Bias
Tim Groseclose
Department of Political Science
UCLA
Jeff Milyo
Department of Economics
December 2004
We are grateful for research assistance from Aviva
Aminova, Jose Bustos, Anya Byers, Evan Davidson, Kristina Doan, Wesley Hussey,
David Lee, Pauline Mena, Orges Obeqiri, Byrne Offut, Matt Patterson, David
Primo, Darryl Reeves, Susie Rieniets, Tom Rosholt, Michael Uy, Diane Valos,
Michael Visconti, Margaret Vo, Rachel Ward, and Andrew Wright. Also, we are grateful for comments and
suggestions by Matt Baum, Mark Crain, Tim Groeling, Phil Gussin, Jay Hamilton, Wesley
Hussey, Chap Lawson, Steve Levitt, Jeff Lewis, Andrew Martin, David Mayhew, Jeff
Minter, Mike Munger, David Primo, Andy Waddell, Barry Weingast, John Zaller,
and Jeff Zwiebel. We also owe gratitude
to UCLA,
In this paper we estimate
“The editors in
Do the major media outlets in the
Few studies provide an objective measure of the slant of news, and none has provided a way to link such a measure to ideological measures of other political actors. That is, none of the existing measures can say, for example, whether the New York Times is more liberal than Tom Daschle or whether Fox News is more conservative than Bill Frist. We provide such a measure. Namely, we compute an ADA score for various news outlets, including the New York Times, the Washington Post, USA Today, the Drudge Report, Fox News’ Special Report, and all three networks’ nightly news shows.
Our results show a strong liberal bias. All of the news outlets except Fox News’ Special Report and the Washington Times received a score to the left of the average member of Congress. And a few outlets, including the New York Times and CBS Evening News, were closer to the average Democrat in Congress than the center. These findings refer strictly to the news stories of the outlets. That is, we omitted editorials, book reviews, and letters to the editor from our sample.
To compute our measure, we count
the times that a media outlet cites various think tanks and other policy groups.[1] We compare this with the times that members
of Congress cite the same think tanks in their speeches on the floor of the
House and Senate. By comparing the
citation patterns we can construct an
As a simplified example, imagine
that there were only two think tanks, one liberal and one conservative. Suppose that the New York Times cited the
liberal think tank twice as often as the conservative one. Our method asks: What is the estimated
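As a concrete illustration, in the two-think-tank case the logic has a closed-form answer under the kind of logit model we develop below. All parameter values in this sketch are hypothetical, not estimates from our data:

```python
import math

# Hypothetical logit parameters for one liberal (L) and one
# conservative (C) think tank.  The conservative tank serves as the
# baseline (aC = bC = 0), mirroring the normalization used later.
aL, bL = 0.0, 0.03
aC, bC = 0.0, 0.0

# In a two-choice logit, odds(L over C) = exp((aL - aC) + (bL - bC) * c).
# Solve for the score c at which an outlet cites L twice as often as C.
ratio = 2.0
c = (math.log(ratio) - (aL - aC)) / (bL - bC)

# Check: at score c, the implied citation odds recover the 2:1 ratio.
odds = math.exp(aL + bL * c) / math.exp(aC + bC * c)
```

With these made-up slopes, c works out to about 23.1; the point is only that an observed citation ratio pins down a unique score on the underlying scale.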
A feature of our method is that it
does not require us to make a subjective assessment of how liberal or
conservative a think tank is. That is,
for instance, we do not need to read policy reports of the think tank or analyze
its position on various issues to determine its ideology. Instead, we simply observe the
Some Previous Studies of Media Bias
Survey research has shown that an
almost overwhelming fraction of journalists are liberal. For instance, Elaine Povich
(1996) reports that only seven percent of all
These statistics suggest that
journalists, as a group, are more liberal than almost any congressional
district in the country. For instance, in the Ninth California district, which
includes
Of course, just because a journalist holds liberal or conservative views does not mean that his or her reporting will be slanted. For instance, as Kathleen Hall Jamieson (2000, 188) notes,
“One might hypothesize instead that reporters respond to the cues of those who pay their salaries and mask their own ideological dispositions. Another explanation would hold that norms of journalism, including ‘objectivity’ and ‘balance,’ blunt whatever biases exist.”
Or, as Timothy Crouse explains:
It is an unwritten law of current political journalism that conservative Republican Presidential candidates usually receive gentler treatment from the press than do liberal Democrats. Since most reporters are moderate or liberal Democrats themselves, they try to offset their natural biases by going out of their way to be fair to conservatives. No candidate ever had a more considerate press corps than Barry Goldwater in 1964, and four years later the campaign press gave every possible break to Richard Nixon. Reporters sense a social barrier between themselves and most conservative candidates; their relations are formal and meticulously polite. But reporters tend to loosen up around liberal candidates and campaign staffs; since they share the same ideology, they can joke with the staffers, even needle them, without being branded the “enemy.” If a reporter has been trained in the traditional, “objective” school of journalism, this ideological and social closeness to the candidate and the staff makes him feel guilty; he begins to compensate; the more he likes and agrees with the candidate personally, the harder he judges him professionally. Like a coach sizing up his own son in spring tryouts, the reporter becomes doubly severe. (1973, 355-6)
However, a strong form of the view that reporters offset or blunt their own ideological biases leads to a counterfactual implication. Suppose it is true that all reporters report objectively, and their ideological views do not color their reporting. If so, then all news would have the same slant. Moreover, if one believes Crouse’s claim that reporters overcompensate in relation to their own ideology, then a news outlet filled with conservatives, such as Fox News, should have a more liberal slant than a news outlet filled with liberals, such as the New York Times.
Spatial models of firm location, such as those by Hotelling (1929) or Mullainathan and Shleifer (2003), give theoretical reasons why the media should slant the news exactly as consumers desire.[6] The idea is that if the media did not, then an entrepreneur could form a new outlet that does, and he or she could earn greater-than-equilibrium profits, possibly even driving the other outlets out of business. This is a compelling argument, and even the libertarian Cato Journal has published an article agreeing with the view. In it, Daniel Sutter (2001) notes that “Charges of a liberal bias essentially require the existence of a cartel” (431).
However, contrary to the prediction
of the typical firm-location model, we find a systematic liberal bias of the
Although his primary focus is not
on media bias, in one section of his book, James Hamilton (2004) analyzes
John Lott and Kevin Hassett (2004) propose an innovative test for media bias. They record whether the headlines of various economic news stories are positive or negative. For instance, on the day that the Commerce Department reports that GDP grew by a large amount, a newspaper could instead report “GDP Growth Less than Expected.” Lott and Hassett control for the actual economic figures reported by the Commerce Department, and they include an independent variable that indicates the political party of the president. Of the ten major newspapers that they examine, they find that nine are more likely to report a negative headline if the president is Republican.[7]
Daniel Sutter (2004) collects data on the geographic locations of readers of Time, Newsweek, and U.S. News and World Report. He shows that as a region becomes more liberal (as indicated by its vote share for President Clinton), its consumption of the three major national news magazines increases. With a clever

“…and readers that we are ideologues. It is an exercise in disinformation, of alarming proportions. This attempt to convince the audience of the world’s most ideology-free newspapers that they’re being subjected to agenda-driven news reflecting a liberal bias. I don’t believe our viewers and readers will be, in the long-run, misled by those who advocate biased journalism.”[8]
“…when it comes to free publicity, some of the major broadcast media are simply biased in favor of the Republicans, while the rest tend to blur differences between the parties. But that’s the way it is. Democrats should complain as loudly about the real conservative bias of the media as the Republicans complain about its entirely mythical bias…”[9]
"The mainstream media does not have a liberal bias. . . . ABC, CBS, NBC, CNN, the New York Times, The Washington Post, Time, Newsweek and the rest -- at least try to be fair." [10]
“I’m going out telling the story that I think is the biggest story of our time: how the right-wing media has become a partisan propaganda arm of the Republican National Committee. We have an ideological press that’s interested in the election of Republicans, and a mainstream press that’s interested in the bottom line. Therefore, we don’t have a vigilant, independent press whose interest is the American people.”[11]
Data
The web site www.wheretodoresearch.com lists
200 of the most prominent think tanks and policy groups in the
We also recorded the average adjusted
Along with direct quotes of think tanks, we sometimes included sentences that were not direct quotes. For instance, many of the citations were cases where a member of Congress noted “This bill is supported by think tank X.” Also, members of Congress sometimes insert printed material into the Congressional Record, such as a letter, a newspaper article, or a report. If a think tank was cited in such material or if a think tank member wrote the material, we treated it just as if the member of Congress had read the material in his or her speech.
We did the same exercise for stories that media outlets report, except with media outlets we did not record an ADA score. Instead, our method estimates such a score.
Sometimes a legislator or journalist noted an action that a think tank had taken—e.g. that it raised a certain amount of money, initiated a boycott, filed a lawsuit, elected new officers, or held its annual convention. We did not record such cases in our data set. However, sometimes in the process of describing such actions, the journalist or legislator would quote a member of the think tank, and the quote revealed the think tank’s views on national policy, or the quote stated a fact that is relevant to national policy. If so, we would record that quote in our data set. For instance, suppose a reporter noted “The NAACP has asked its members to boycott businesses in the state of South Carolina. `We are initiating this boycott, because we believe that it is racist to fly the Confederate Flag on the state capitol,’ a leader of the group noted.” In this instance, we would count the second sentence that the reporter wrote, but not the first.
Also, we omitted the instances where the member of Congress or journalist only cited the think tank so he or she could criticize it or explain why it was wrong. About five percent of the congressional citations and about one percent of the media citations fell into this category.
In the same spirit, we omitted cases where a journalist or legislator gave an ideological label to a think tank (e.g. “Even the conservative Heritage Foundation favors this bill.”). The idea is that we only wanted cases where the legislator or journalist cited the think tank as if it were a disinterested expert on the topic at hand. About two percent of the congressional citations and about five percent of the media citations involved an ideological label.[14]
For the congressional data, we
coded all citations that occurred during the period Jan. 1, 1993 to December
31, 2002. This covered the 103rd
through 107th Congresses. We
used the period 1993 to 1999 to calculate the average adjusted
As noted earlier, our media data
does not include editorials, letters to the editor, or book reviews. That is, all of our results refer only to the
bias of the news content of media outlets. There are several reasons why we do not
include editorials. The primary one is
that there is little controversy over the slant of editorial pages—e.g. few
would disagree that Wall Street Journal editorials are conservative, while New
York Times editorials are liberal.
However, there is great controversy about the slant of the news
of various media outlets. A second reason involves the effect (if any)
that the media have on individuals’ political views. It is reasonable to believe that a biased
outlet that pretends to be centrist has more of an effect on readers’ or
viewers’ beliefs than, say, an editorial page that does not pretend to be
centrist. Because of this, we believe it
is more important to examine the news than editorials. A third reason involves difficulties in
coding the data. Editorial and opinion
writers, much more than news writers, are sometimes sarcastic when they quote
members of think tanks. If our coders
do not catch the sarcasm, they record the citation as a favorable one. This biases the results toward making the
editorials appear more centrist than they really are.
In Table 1 we list the 50 groups
from our list that were most commonly cited by the media. The first column lists the average
While most of these averages closely agree with the conventional wisdom, two cases seem somewhat anomalous. The first is the ACLU. The average score of legislators citing it was 49.8. Later, we shall provide reasons why it makes sense to define the political center at 50.1. This suggests that the ACLU, if anything, is a right-leaning organization. The reason the ACLU has such a low score is that it opposed the McCain-Feingold campaign finance bill, and conservatives in Congress cited this often. In fact, slightly more than one-eighth of all ACLU citations in Congress were due to one person alone, Mitch McConnell (R-Ky.), perhaps the chief critic of McCain-Feingold. If we omit McConnell’s citations, the ACLU’s average score increases to 55.9. Because of this anomaly, in the Appendix we report the results when we repeat all of our analyses but omit the ACLU data.
The second apparent anomaly is the
RAND Corporation, which has a fairly liberal average score, 60.4. We mentioned this finding to some employees
of
The second and third columns respectively report the number of congressional and media citations in our data. These columns give some preliminary evidence that the media are liberal, relative to Congress. To see this, define as right wing a think tank that has an average score below 40. Next, consider the ten think tanks most cited by the media. Only one right-wing think tank, the American Enterprise Institute, makes this list. In contrast, consider the ten think tanks most cited by Congress. (These are the National Taxpayers Union, AARP, Amnesty International, Sierra Club, Heritage Foundation, Citizens Against Government Waste, RAND, Brookings, NFIB, and ACLU.) Four of these are right wing.
For perspective, in Table 2 we list
the average adjusted
Because, at times, there is some subjectivity in coding our data, when we hired our research assistants we asked for whom they voted, or would have voted, if they were limited to choosing only Al Gore and George W. Bush. We chose research assistants so that approximately half our data was coded by Gore supporters and half by Bush supporters.
For each media outlet we selected an observation period that we estimated would yield at least 300 observations (citations). Because magazines, television shows, and radio shows produce less data per show or issue (e.g. a transcript for a 30-minute television show contains only a small fraction of the sentences that are contained in a daily newspaper), with some outlets we began with the earliest date available in Lexis-Nexis. We did this for: (i) the three magazines that we analyze, (ii) the five evening television news broadcasts that we analyze; and (iii) the one radio program that we analyze.[18]
Our Definition of Bias
Before proceeding, it is useful to clarify our
definition of bias. Most important, the
definition has nothing to do with the honesty or accuracy of the news
outlet. Instead, our notion is more like
a taste or preference. For instance, we
estimate that the centrist
In contrast, other writers, at least at times, do define bias as a matter of accuracy or honesty. We emphasize that our differences with such writers are ones of semantics, not substance. If, say, a reader insists that bias should refer to accuracy or honesty, then we urge him or her simply to substitute another word wherever we write “bias”. Perhaps “slant” is a good alternative.
However, at the same time, we argue that our notion of bias is meaningful and relevant, and perhaps more meaningful and relevant than the alternative notion. The main reason, we believe, is that only seldom do journalists make dishonest statements. Cases such as Jayson Blair, Stephen Glass, or the falsified memo at CBS are rare; they make headlines when they do occur; and much of the time they are orthogonal to any political bias.
Instead, for every sin of commission, such as those by Glass or Blair, we believe that there are hundreds, and maybe thousands, of sins of omission—cases where a journalist chose facts or stories that only one side of the political spectrum is likely to mention. For instance, in a story printed on March 1, 2002, the New York Times reported that (i) the IRS increased its audit rate on the “working poor” (a phrase that the article defines as any taxpayer who claimed an earned income tax credit); while (ii) the agency decreased its audit rate on taxpayers who earn more than $100,000; and (iii) more than half of all IRS audits involve the working poor. The article also notes that (iv) “The roughly 5 percent of taxpayers who make more than $100,000 … have the greatest opportunities to shortchange the government because they receive most of the nonwage income.”
Most would agree that the article contains only true and accurate statements; however, most would also agree that the statements are more likely to be made by a liberal than a conservative. Indeed, the centrist and right-leaning news outlets by our measure (the Washington Times, Fox News’ Special Report, the Newshour with Jim Lehrer, ABC’s Good Morning America, and CNN’s Newsnight with Aaron Brown) failed to mention any of these facts. Meanwhile, three of the outlets on the left side of our spectrum (CBS Evening News, USA Today, and the [news pages of the] Wall Street Journal) did mention at least one of the facts.
Likewise, on the opposite side of
the political spectrum there are true and accurate facts that conservatives are
more likely to state than liberals. For
instance, on
We also believe that our notion of bias is the one that is more commonly adopted by other authors. For instance, Lott and Hassett (2004) do not assert that one headline in their data set is false (e.g. “GDP Rises 5 Percent”) while another headline is true (e.g. “GDP Growth Less Than Expected”). Rather, the choice of headlines is more a question of taste, or perhaps fairness, than a question of accuracy or honesty. Also, many of Goldberg’s (2002) and Alterman’s (2003) complaints about media bias are that some stories receive scant attention from the press, not that the stories receive inaccurate attention. For instance, Goldberg notes how few stories the media devote to the problems faced by children of dual-career parents. On the opposite side, Alterman notes how few stories the media devote to corporate fraud. Our notion of bias also seems closely aligned with the notion described by Bozell and Baker (1990, 3):
But though bias in the media exists, it is rarely a conscious attempt to distort the news. It stems from the fact that most members of the media elite have little contact with conservatives and make little effort to understand the conservative viewpoint. Their friends are liberals, what they read and hear is written by liberals.[20]
Similar to the facts and stories
that journalists report, the citations that they gather from experts are also
very rarely dishonest or inaccurate.
Many, and perhaps most, simply indicate the side of an issue that the
expert or his or her organization favors.
For instance, on
Similarly, another large fraction
of cases involve the organization’s views of
politicians. For instance,
on
A Simple Structural Model
Define xi as the average adjusted ADA score of legislator i. We assume that the utility that legislator i receives from citing think tank j is

uij = aj + bj xi + eij .
The parameter, bj, indicates the ideology of the think tank. Note that if xi is large (i.e. the legislator is liberal), then the legislator receives more utility from citing the think tank if bj is large. The parameter, aj , represents a sort of “valence” factor (as political scientists use the term) for the think tank. It captures non-ideological factors that lead legislators and journalists to cite the think tank. Such factors may include such things as a reputation for high-quality and objective research, which may be orthogonal to any ideological leanings of the think tank.
We assume that eij is distributed according to a Weibull distribution. As shown by McFadden (1974; also see Judge et al., 1985, pp. 770-2), this implies that the probability that member i selects the jth think tank is
exp(aj + bj xi) / ∑k=1,…,J exp(ak + bk xi) ,    (1)
where J is the total number of think tanks in our sample. Note that this probability term is no different from the one we see in a multinomial logit (where the only independent variable is xi ).
Define cm as the estimated adjusted ADA score of media outlet m. Similarly, we assume that the utility that outlet m receives from citing think tank j is

umj = aj + bj cm + emj .
We assume that emj is distributed according to a Weibull distribution. This implies that the probability that media outlet m selects the jth think tank is
exp(aj + bj cm) / ∑k=1,…,J exp(ak + bk cm) .    (2)
Although this term is similar to the term that appears in a multinomial logit, we cannot use multinomial logit to estimate the parameters. The problem is that cm, a parameter that we estimate, appears where normally we would have an independent variable. Instead, we construct a likelihood function from (1) and (2), and we use the “nlm” (non-linear maximization) command in R to obtain estimates of each aj , bj, and cm.
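For readers who wish to see the mechanics, the following is a minimal sketch of this estimation, using Python with scipy.optimize.minimize standing in for R's nlm; the think-tank parameters, legislator scores, and citation counts are all synthetic, and the actual estimation problem is far larger:

```python
import numpy as np
from scipy.optimize import minimize

def choice_probs(a, b, score):
    """Multinomial-logit probabilities of citing each think tank."""
    u = a + b * score
    e = np.exp(u - u.max())          # guard against overflow
    return e / e.sum()

def neg_log_lik(theta, x, leg_counts, media_counts):
    """Joint negative log-likelihood built from (1) and (2).  Think tank 0
    is the baseline with a = b = 0, mirroring the Heritage normalization."""
    J = leg_counts.shape[1]
    a = np.concatenate(([0.0], theta[:J - 1]))
    b = np.concatenate(([0.0], theta[J - 1:2 * (J - 1)]))
    c = theta[2 * (J - 1):]                     # one score per media outlet
    ll = 0.0
    for xi, ni in zip(x, leg_counts):           # legislators: scores known
        ll += ni @ np.log(choice_probs(a, b, xi))
    for cm, nm in zip(c, media_counts):         # outlets: scores estimated
        ll += nm @ np.log(choice_probs(a, b, cm))
    return -ll

# Synthetic demonstration: 3 think tanks, 5 legislators, 1 outlet at score 70.
a_true = np.array([0.0, 0.0, 0.0])
b_true = np.array([0.0, 0.02, 0.04])
x = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
leg_counts = np.array([100 * choice_probs(a_true, b_true, xi) for xi in x])
media_counts = 300 * choice_probs(a_true, b_true, 70.0)[None, :]

theta0 = np.concatenate([np.zeros(4), [50.0]])  # seed the outlet score at 50
res = minimize(neg_log_lik, theta0,
               args=(x, leg_counts, media_counts), method="BFGS")
c_hat = res.x[-1]                               # recovered outlet score, near 70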
Similar to a multinomial logit, it is impossible to identify each aj and bj. Consequently, we arbitrarily choose one think tank and set its values of aj and bj to zero. It is convenient to choose a think tank that is cited frequently. Also, to make most estimates of the bj ‘s positive, it is convenient to choose a think tank that is conservative. Consequently, we chose the Heritage Foundation. It is easy to prove that this choice does not affect our estimates of cm. That is, if we had chosen a different think tank, then all estimates of cm would be unchanged.
This identification problem is not just a technical point; it also has an important substantive implication. Our method does not need to determine any sort of assessment of the absolute ideological position of a think tank. It only needs to assess the relative position. In fact, our method cannot assess absolute positions. As a concrete example, consider the estimated bj’s for AEI and the Brookings Institution. These values are .026 and .038. The fact that the Brookings estimate is larger than the AEI estimate means that Brookings is more liberal than AEI. (More precisely, it means that as a legislator or journalist becomes more liberal, he or she prefers more and more to cite Brookings than AEI.) These estimates are consistent with the claim that AEI is conservative (in an absolute sense), while Brookings is liberal. But they are also consistent with a claim, e.g., that AEI is moderate-left while Brookings is far-left (or also the possibility that AEI is far-right while Brookings is moderate-right). This is related to the fact that our model cannot fully identify the bj’s—that is, we could add the same constant to each and the value of the likelihood function (and therefore the estimates of the cm’s ) would remain unchanged.
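This invariance is easy to check numerically. In the snippet below, the slopes echo the AEI and Brookings estimates just mentioned, while the valence terms and the score of 65 are made up; shifting every bj by the same constant leaves the choice probabilities unchanged:

```python
import math

def logit_probs(a, b, score):
    # Multinomial-logit choice probabilities over think tanks.
    u = [aj + bj * score for aj, bj in zip(a, b)]
    m = max(u)
    e = [math.exp(uj - m) for uj in u]
    s = sum(e)
    return [ej / s for ej in e]

a = [0.0, 0.5, -0.3]          # hypothetical valence terms
b = [0.0, 0.026, 0.038]       # slopes like those quoted for AEI and Brookings
p1 = logit_probs(a, b, 65.0)
p2 = logit_probs(a, [bj + 0.01 for bj in b], 65.0)  # shift every b_j

# The shift adds the same constant (0.01 * 65) to every utility,
# so it cancels in the logit ratio.
same = all(abs(x - y) < 1e-12 for x, y in zip(p1, p2))
```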
One difficulty that arose in the estimation process is that it takes an unwieldy amount of time to estimate all of the parameters. If we had computed a separate aj and bj for each think tank in our sample, then we estimate that our model would take over two weeks to converge and produce estimates.[22] Complicating this, we compute estimates for approximately two dozen different specifications of our basic model. (Most of these are to test restrictions of parameters. E.g. we run one specification where the New York Times and NPR’s Morning Edition are constrained to have the same estimate of cm.) Thus, if we estimated the full version of the model for each specification, our computer would take approximately one year to produce all the estimates.
Instead, we
collapsed data from many of the rarely-cited think tanks into six mega
think tanks. Specifically, we estimated
a separate aj and bj for the 44 think tanks
that were most-cited by the media. These
comprised 85.6% of the total number of media citations. With the remaining think tanks, we ordered
them left to right according to the average
pi = pmin + (i/6)( pmax - pmin ).
In practice, these five cut points were 22.04, 36.10, 50.15, 64.21, and 78.27.
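As a check on the formula, the five listed cut points are evenly spaced, which implies extreme scores of roughly 7.98 and 92.33; these two extremes are inferred here, not reported in the text:

```python
# Implied (not stated) extremes of the average scores among the remaining
# think tanks, chosen so that p_i = pmin + (i/6)(pmax - pmin) reproduces
# the five cut points listed in the text.
pmin, pmax = 7.98, 92.33

cuts = [pmin + (i / 6) * (pmax - pmin) for i in range(1, 6)]
listed = [22.04, 36.10, 50.15, 64.21, 78.27]

max_gap = max(abs(c - l) for c, l in zip(cuts, listed))  # well under 0.01
```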
The number of actual and mega think tanks to include (respectively, 44 and 6) is a somewhat arbitrary choice. We chose 50 as the total number because we often used the mlogit procedure in Stata to compute seed values. This procedure is limited to at most 50 “choices,” which meant that we could estimate aj’s and bj’s for at most 50 think tanks. This still leaves an arbitrary choice about how many of the 50 think tanks should be actual think tanks and how many should be mega think tanks. We experimented with several different choices. Some choices made the media appear slightly more liberal than others. We chose six as the number of mega think tanks because it produced approximately the average of the estimates. In the Appendix we also report results when instead we choose 2, 3, 4, 5, 7, or 8 as the number of mega think tanks.
Our choice to use 50 as the total number of actual and mega think tanks, if anything, appears to make the media appear more conservative than they really are. In the Appendix we report results when instead we chose 60, 70, 80, and 90 as the total number of actual and mega think tanks. In general, these choices cause the average estimate of cm to increase by approximately one or two points.
Results
In Table 3 we list the estimates of
cm, the adjusted
One surprise is the Wall Street Journal, which we find to be the most liberal of all 20 news outlets. We should first remind readers that this estimate (as well as all other newspaper estimates) refers only to the news pages of the Wall Street Journal; we omitted all data that came from its editorial page. If we included data from the editorial page, surely it would appear more conservative.
Second, some anecdotal evidence agrees
with our result. For instance, Reed
Irvine and Cliff Kincaid (2001) note that “The Journal has had a long-standing
separation between its conservative editorial pages and its liberal news pages.” Paul Sperry, in an article titled “The Myth of the Conservative Wall Street Journal,” notes that the news division of the
Journal sometimes calls the editorial division “Nazis.” “Fact is,” Sperry
writes, “the Journal’s news and editorial departments are as politically
polarized as North and
Third, a recent poll from the
Finally, and perhaps most important, a scholarly study, by Lott and Hassett (2004), gives evidence that is consistent with our result. As far as we are aware, this is the only other study that examines the political bias of the news pages of the Wall Street Journal. Of the ten major newspapers that it examines, the study estimates the Wall Street Journal as the second-most liberal.[26] Only Newsday is more liberal, and the Journal is substantially more liberal than the New York Times, Washington Post, L.A. Times, and USA Today.
Another somewhat surprising result
is our estimate of NPR’s Morning Edition.
Conservatives frequently list NPR as an egregious example of a liberal news
outlet.[27] However, by our estimate the outlet hardly
differs from the average mainstream news outlet. For instance, its score is approximately equal
to those of Time, Newsweek, and U.S. News and World Report, and its score is slightly
less than the Washington Post’s. Further, our estimate places it well to the
right of the New York Times, and also to the right of the average speech by Joe
Lieberman. These differences are
statistically significant.[28] We mentioned this finding to Terry Anderson,
an academic economist and Executive Director of the Political Economy Research
Center, which is among the list of think tanks in our sample. (The average score of legislators citing PERC
was 39.9, which places it as a moderate-right think tank, approximately as
conservative as
Another result, which appears anomalous, is not so anomalous upon further examination. This is the estimate for the Drudge Report, which at 60.4, places it approximately in the middle of our mix of media outlets and approximately as liberal as a typical Southern Democrat, such as John Breaux (D–La.). We should emphasize that this estimate reflects both the news flashes that Matt Drudge reports and the news stories to which his site links on other web sites. In fact, of the entire 311 think-tank citations we found in the Drudge Report, only five came from reports written by Matt Drudge. Thus, for all intents and purposes, our estimate for the Drudge Report refers only to the articles to which the Report links on other web sites. Although the conventional wisdom often asserts that the Drudge Report is relatively conservative, we believe that the conventional wisdom would also assert that—if confined only to the news stories to which the Report links on other web sites—this set would have a slant approximately equal to the average slant of all media outlets, since, after all, it is comprised of stories from a broad mix of other outlets.[29]
Digression: Defining the “Center”
While the main goal of our research is to provide a measure that allows us to compare the ideological positions of media outlets to those of political actors, a separate goal is to express whether a news outlet is left or right of center. To do the latter, we must define center. This is a little more arbitrary than the first exercise. For instance, the results of the previous section show that the average NY Times article is approximately as liberal as the average Joe Lieberman (D-Conn.) speech. While Lieberman is left of center in the U.S. Senate, many would claim that, compared to all persons in the entire world, he is centrist or even right-leaning. And if the latter is one’s criterion, then nearly all of the media outlets that we examine are right of center.
However, we are more interested in
defining centrist by
Given this, one of the simplest
definitions of centrist is to
use the mean or median ideological score of the U.S. House or Senate. We focus on mean scores since the median
tends to be unstable.[30] This is due to the bi-modal nature that
We are most interested in comparing
news outlets to the centrist voter, who,
for a number of reasons, might not have the same ideology as the centrist
member of Congress. For instance,
because
Another problem, which applies only to the Senate, involves the fact that voters from small states are overrepresented. Since in recent years small states have tended to vote more conservatively than large states, this would cause the centrist member of the Senate to be more conservative than the centrist voter.
A third reason, which applies only
to the House, is that gerrymandered districts can skew the relationship between
a centrist voter and a centrist member of the House. For instance, although the total votes for Al
Gore and George W. Bush favored Gore slightly, the median House district
slightly favored Bush. Specifically, if
we exclude the
The second problem, the small-state bias in the Senate, can be overcome simply by weighting each senator’s score by the population of his or her state. The third problem, gerrymandered districts in the House, is overcome simply by the fact that we use mean scores instead of the median.[33]
In Figure 1, we list the mean House
and Senate scores over the period 1947-99 when we use this methodology (i.e.
including phantom D.C. legislators and weighting senators’ scores by the population
of their state). The focus of our results is the period 1995-99. We chose 1999 as the end year simply because this is the last year for which Groseclose, Levitt, and Snyder (1999) computed adjusted ADA scores.
Over this period the mean score of
the Senate (after including phantom D.C. senators and weighting by state
population) varied between 49.28 and 50.87.
The mean of these yearly means was 49.94. The corresponding figure for the House was 50.18. After rounding, we use the midpoint of these two numbers, 50.1, as our estimate of the adjusted ADA score of the centrist U.S. voter.
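The arithmetic behind this estimate can be reproduced from the two chamber averages reported above (a minimal sketch; the underlying yearly population-weighted means are not reproduced here):

```python
# Reproduce the "centrist voter" estimate from the chamber averages in the
# text: the mean of yearly population-weighted Senate means (49.94) and the
# corresponding House figure (50.18).
senate_avg = 49.94
house_avg = 50.18

center = round((senate_avg + house_avg) / 2, 1)  # midpoint, then round
print(center)  # 50.1
```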
A counter view is that the 1994
elections did not mark a new
era. Instead, as some might argue, these
elections were an anomaly, and the congresses of the decade or so before the
1994 elections are a more appropriate representation of voter sentiment of the
late 1990s and early 2000s. Although we
do not agree, we think it is a useful straw man. Consequently, we construct an alternative
measure based on the congresses that served between 1975 and 1994. We chose 1975, because this was the first
year of the “Watergate babies” in Congress.
As Figure 1 shows, this year produced a large liberal shift in
Congress. This period, 1975-94, also happens to be the most liberal 20-year period in the era covered by our data. The average adjusted ADA score over this period is 54.0, which we use as our alternative, more liberal definition of the center.
Further Results: How Close are Media Outlets to the Center?
Next, we compute the difference of a media outlet’s score from 50.1 to judge how centrist it is. We list these results in Table 4. Most striking is that all but two of the outlets we examine are left of center. Even more striking is that if we use the more liberal definition of center (54.0)—the one constructed from congressional scores from 1975-94—it is still the case that eighteen of twenty outlets are left of center.
The first, second, and third most centrist outlets are respectively Newshour with Jim Lehrer, CNN’s Newsnight with Aaron Brown, and ABC’s Good Morning America. The scores of Newsnight and Good Morning America were not statistically different from the center, 50.1. Although the point estimate of Newshour was more centrist than the other two outlets, its difference from the center is statistically significant. The reason is that its margin of error is smaller than the other two, which is due primarily to the fact that we collected more observations for this outlet. Interestingly, in the four presidential and vice-presidential debates of the 2004 election, three of the four moderators were selected from these three outlets. The fourth moderator, Bob Schieffer, works at an outlet that we did not examine, CBS’s Face the Nation.
The fourth and fifth most centrist outlets are the Drudge Report and Fox News’ Special Report with Brit Hume. Their scores are significantly different from the center at the 95% confidence level. Nevertheless, the top five outlets in Table 4 are in a statistical dead heat for most centrist. Even at an 80% confidence level, none of these outlets can be called more centrist than any of the others.
The sixth and seventh most centrist outlets are ABC World News Tonight and NBC Nightly News. These outlets are almost in a statistical tie with the five most centrist outlets. For instance, each has a score that is significantly different from Newshour’s at the 90% confidence level, but not at the 95% confidence level. The eighth most centrist outlet, USA Today, received a score that is significantly different from Newshour’s at the 95% confidence level.
Fox News’ Special Report is
approximately one point more centrist than ABC’s World News Tonight (with Peter
Jennings) or NBC’s Nightly News (with Tom Brokaw). In neither case is the difference statistically
significant. Given that Special Report is one hour long and the other two shows
are a half-hour long, our measure implies that if a viewer watched all three
shows each night, he or she would receive a nearly perfectly balanced version
of the news. (In fact, it would be
slanted slightly left, by 0.4 ADA points.)
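The airtime-weighted arithmetic behind this claim can be sketched as follows, using the scores from Table 3 (Special Report receives double weight because it runs a full hour):

```python
# Airtime-weighted average slant for a viewer who watches all three shows:
# Special Report (one hour) plus the two half-hour network broadcasts.
# Scores are the estimates from Table 3.
scores = {"Special Report": 39.7, "ABC World News Tonight": 61.0,
          "NBC Nightly News": 61.6}
hours = {"Special Report": 1.0, "ABC World News Tonight": 0.5,
         "NBC Nightly News": 0.5}

avg = sum(scores[s] * hours[s] for s in scores) / sum(hours.values())
print(round(avg, 1))  # 50.5, i.e. 0.4 points left of the 50.1 center
```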
Special Report is approximately thirteen points more centrist than CBS Evening News (with Dan Rather). This difference is significant at the 99% confidence level. Also at 99% confidence levels, we can conclude that NBC Nightly News and ABC World News Tonight are more centrist than CBS Evening News.
The most centrist newspaper in our
sample is USA Today. However, its
distance from the center is not significantly different from the distances of
the Washington Times or the Washington Post.
Interestingly, our measure implies that if one spent an equal amount of
time reading the Washington Times and Washington Post, he or she would receive
a nearly perfectly balanced version of the news. (It would be slanted left by only 0.9 ADA points.)
If instead we use 54.0 as our measure of centrist (the measure based on congressional scores of the 1975-94 period), the rankings change, but not greatly. The most substantial change involves Fox News’ Special Report, which drops from fifth to fifteenth most centrist. The Washington Times also changes significantly: it drops from tenth to seventeenth most centrist.
Another implication of the scores concerns the New York Times. Although some claim that the liberal bias of the New York Times is balanced by the conservative bias of other outlets, such as the Washington Times or Fox News’ Special Report, this is not quite true. The New York Times is slightly more than twice as far from the center as Special Report. Consequently, to gain a balanced perspective, a news consumer would need to spend twice as much time watching Special Report as he or she spends reading the New York Times. Alternatively, to gain a balanced perspective, a reader would need to spend 50% more time reading the Washington Times than the New York Times.
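A back-of-the-envelope version of this calculation, using the Table 3 scores and the 50.1 center (a sketch of the implied time weighting, not the authors’ exact procedure):

```python
# Find the share of consumption time w on Special Report such that
# w*fox + (1-w)*nyt equals the center.  Scores from Table 3.
nyt, fox, center = 73.7, 39.7, 50.1

w = (nyt - center) / (nyt - fox)  # time share for Special Report
ratio = w / (1 - w)               # hours of Special Report per hour of NYT
print(round(ratio, 2))            # about 2.27: slightly more than twice
```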
Potential Biases
A frequent concern about our method is a form of the following claim: “The sample of think tanks has a rightward [leftward] tilt rather than an ideological balance. E.g. it does not include Public Citizen and many other ‘Nader’ groups. [E.g., it does not include the National Association of Manufacturers or the Conference of Catholic Bishops.] Consequently this will bias estimates to the right [left].” However, the claim is not true, and here is the intuition: if the sample of think tanks were (say) disproportionately conservative, this, of course, would cause media outlets to cite conservative think tanks more frequently (as a proportion of the citations that we record in our sample). This might seem to make the media appear more conservative. However, at the same time it causes members of Congress to appear more conservative. Our method only measures the degree to which the media are liberal or conservative relative to Congress. Since it is unclear how such a disproportionate sample would affect the relative degree to which the media cite conservative [or liberal] think tanks, there is no a priori reason to expect it to bias our estimates.
In fact, a similar concern could be leveled against regression analysis. As a simple example, consider a researcher who regresses the arm lengths of subjects on their heights. Suppose instead of choosing a balance of short and tall subjects, he or she chooses a disproportionate number of tall subjects. This will not affect his or her findings about the relationship between height and arm length. That is, he or she will find that arm length is approximately half the subject’s height, and this estimate, “half,” would be the same (in expectation) whether he or she chooses many or few tall subjects. For similar reasons, to achieve unbiased estimates in a regression, econometrics textbooks place no restrictions on the distribution of independent variables. They only place restrictions upon, e.g., the correlation of the independent variables and the error term.
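The regression analogy can be illustrated with a small simulation (hypothetical data; the true slope is set to one half by construction):

```python
# OLS slope is unbiased regardless of how the independent variable is
# sampled.  We simulate arm_length = 0.5*height + noise, once with a
# balanced height sample and once with mostly tall subjects.
import random

random.seed(0)

def ols_slope(heights):
    pairs = [(h, 0.5 * h + random.gauss(0, 1)) for h in heights]
    mh = sum(h for h, _ in pairs) / len(pairs)
    ma = sum(a for _, a in pairs) / len(pairs)
    num = sum((h - mh) * (a - ma) for h, a in pairs)
    den = sum((h - mh) ** 2 for h, _ in pairs)
    return num / den

balanced = [random.uniform(150, 200) for _ in range(2000)]
tall_only = [random.uniform(185, 200) for _ in range(2000)]

print(ols_slope(balanced))   # close to 0.5
print(ols_slope(tall_only))  # also close to 0.5
```

Both samples recover the same slope, which is the sense in which an unbalanced distribution of the independent variable does not bias the estimate.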
Another frequent concern of our method takes a form of the following claim: “Most of the congressional data came from years in which the Republicans were the majority party. Since the majority can control the rules, and hence debate time given to each side, this will cause the sample to have a disproportionate number of citations by Republicans. In turn, this will cause media outlets to appear to be more liberal than they really are.” First, it is not true that the majority party gives itself a disproportionate amount of debate time. Instead, the usual convention is for debate time to be divided equally between proponents and opponents of any issue. This means that the majority party actually gives itself less than the proportionate share. However, this convention is countered by two other factors, which tend to give the majority and minority party their proportionate share of speech time: 1) Many of the speeches in the Congressional Record are not part of the debate on a particular bill or amendment but are from “special orders” (generally in the evening after the chamber has adjourned from official business) or “one minutes” (generally in the morning before the chamber has convened for official business). For these types of speeches there are no restrictions of party balance, and for the most part, any legislator who shows up at the chamber is allowed to make such a speech. 2) Members often place printed material “into the Record”. We included such printed material as a part of any member’s speech. In general, there are no restrictions on the amount of material that a legislator can place into the Record (or whether he or she can do this). Thus, e.g. if a legislator has run out of time to make his or her speech, he or she can request that the remainder be placed in written form “into the Record.”
But even if the majority party were given more (or less) than its proportionate share of speech time, this would not bias our estimates. With each media outlet, our method seeks the legislator who has a citation pattern that is most similar to that outlet. For instance, suppose that the New York Times cites liberal think tanks about twice as often as conservative think tanks. Suppose (as we actually find) that Joe Lieberman is the legislator who has the mix of citations most similar to the New York Times—that is, suppose he also tends to cite liberal think tanks twice as often as conservative think tanks. Now consider a congressional rules change that cuts the speech time of Democrats in half. Although this will affect the number of total citations that Lieberman makes, it will not affect the proportion of citations that he makes to liberal and conservative think tanks. Hence, our method would still give the New York Times an ADA score equal to Joe Lieberman’s.[36]
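A toy example of why a change in speech time leaves the method unaffected (the citation counts below are hypothetical):

```python
# Halving a legislator's speech time scales his citation counts but leaves
# the liberal/conservative citation mix, which is all the method uses,
# unchanged.  Counts are hypothetical.
full = {"liberal": 40, "conservative": 20}
half = {k: v // 2 for k, v in full.items()}   # speech time cut in half

def mix(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

print(mix(full) == mix(half))  # True: proportions are identical
```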
More problematic is a concern that congressional citations and media citations do not follow the same data-generating process. For instance, suppose that a factor besides ideology affects the probability that a legislator or reporter will cite a think tank, and suppose that this factor affects reporters and legislators differently. Indeed, John Lott and Kevin Hassett have invoked a form of this claim to argue that our results are biased toward making the media appear more conservative than they really are. They note:
“For example, Lott (2003, Chapter 2) shows that the New York Times’ stories on gun regulations consistently interview academics who favor gun control, but uses gun dealers or the National Rifle Association to provide the other side … In this case, this bias makes [Groseclose and Milyo’s measure of] the New York Times look more conservative than is likely accurate. (2004, 8)”
However, it is possible, and perhaps likely, that members of Congress practice the same tendency that Lott and Hassett have identified among reporters: that is, to cite academics when they make an anti-gun argument and to cite, say, the NRA when they make a pro-gun argument. If so, then our method will have no bias. On the other hand, if members of Congress do not practice the same tendency as journalists, then this can bias our method. But even here, it is not clear in which direction the bias will occur. For instance, it is possible that members of Congress have a greater (lesser) tendency than journalists to cite such academics. If so, then this will cause our method to make media outlets appear more liberal (conservative) than they really are.
In fact, the criticism we have heard most frequently is a form of this concern, but it is usually stated in a way that suggests the bias is in the opposite direction. Here is a typical variant: “It is possible that (1) Journalists care more about the ‘quality’ of a think tank than do legislators (e.g. suppose they prefer to cite a think tank with a reputation for serious scholarship than another group that is known more for its activism); and (2) the liberal think tanks in the sample tend to be of higher quality than the conservative think tanks.” If statements (1) and (2) are true, then our method will indeed make media outlets appear more liberal than they really are. That is, the media will cite liberal think tanks more, not because they prefer to cite liberal think tanks, but because they prefer to cite high-quality think tanks. On the other hand, if one statement is true and the other is false, then our method will make media outlets appear more conservative than they really are. (E.g. suppose journalists care about quality more than legislators, but suppose that the conservative groups in our sample tend to be of higher quality than the liberal groups. Then the media will tend to cite the conservative groups disproportionately, but not because the media are conservative, rather because they have a taste for quality. This will cause our method to judge the media as more conservative than they really are.) Finally, if neither statement is true, then our method will make media outlets appear more liberal than they really are. Note that there are four possibilities by which statements (1) and (2) can be true or false. Two lead to a liberal bias and two lead to a conservative bias.
To test this concern, we collected two variables that indicate whether a think tank or policy group is more likely to produce quality scholarship. The first variable, staff called fellows, is coded as 1 if any staff members on the group’s website are given one of the following titles: fellow (including research fellow or senior fellow), researcher, economist, or analyst. The second variable, closed membership, is coded as a 0 if the web site of the group asks visitors to join the group and 1 otherwise. The idea behind this is that more activist groups are more likely to recruit laypersons for things such as protests and letter-writing campaigns to politicians. More scholarly groups are less likely to engage in these activities.
Both variables seem to capture the conventional wisdom about which think tanks are known for quality scholarship. For instance, of the top-25 most-cited groups in Table 1, the following had both closed membership and staff called fellows: Brookings, Center for Strategic and International Studies, Council on Foreign Relations, AEI, RAND, Carnegie Endowment for Intl. Peace, Cato, Institute for International Economics, Urban Institute, Family Research Council, and Center on Budget and Policy Priorities. Meanwhile, the following groups, which most would agree are more commonly known for activism than high-quality scholarship, had neither closed membership nor staff called fellows: ACLU, NAACP, Sierra Club, NRA, AARP, Common Cause, Christian Coalition, NOW, and Federation of American Scientists.[37]
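The coding rules for these two indicators can be sketched as follows (a hypothetical implementation; the titles and website facts passed in are illustrative, not drawn from our data):

```python
# Sketch of the two quality indicators described above.
def staff_called_fellows(titles):
    # 1 if any staff title on the group's website signals research work
    research_titles = {"fellow", "senior fellow", "research fellow",
                       "researcher", "economist", "analyst"}
    return int(any(t.lower() in research_titles for t in titles))

def closed_membership(site_asks_visitors_to_join):
    # 0 if the website recruits lay members, 1 otherwise
    return 0 if site_asks_visitors_to_join else 1

print(staff_called_fellows(["Senior Fellow", "Press Officer"]))  # 1
print(closed_membership(site_asks_visitors_to_join=True))        # 0
```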
These two variables provide some
weak evidence that statement (1) is true—that journalists indeed prefer to cite
high-quality groups more than legislators do.
When we restrict the sample only to citations from the top-44 most cited
think tanks (recall it is only these 44 that receive their own estimate of aj
and bj), journalists cite these think tanks approximately 46%
more frequently in our data set than legislators cite them. (This is due simply to the fact that the data
set we collect for media outlets is approximately 46% larger than the data set
we collect for Congress.) However, if we
restrict the sample only to the top-44 think tanks that also have closed
membership, then the media cite this set of groups 82% more frequently than
legislators do. Thus, to the extent that closed membership indicates quality, this result suggests that journalists have a stronger preference than legislators for citing high-quality think tanks.
This evidence suggests that, if anything, our estimates are biased in the direction of making the media look more conservative than they really are. However, because the correlations are so close to zero, we believe that any bias is small.
A final anecdote gives some
compelling evidence that our method is not biased. Note that none of the above arguments
suggest a problem with the way our method ranks
media outlets. Now, suppose that
there is no problem with the rankings, yet our method is plagued with a
significant bias that systematically causes media outlets to appear more
liberal (conservative) than they really are.
If so, then this means that the three outlets we find to be most centrist (Newshour with Jim Lehrer, Good Morning America, and CNN’s Newsnight with Aaron Brown) would not actually be centrist. Yet, as noted above, three of the four moderators of the 2004 presidential and vice-presidential debates were drawn from precisely these outlets, and debate moderators are presumably selected with an eye toward balance. This is hard to square with the existence of such a systematic bias.
First, we find a systematic tendency for the news outlets in our sample to be left of center.
Some scholars have extended the basic spatial model to provide a theory of why the media could be systematically biased. For instance, James Hamilton (2004) notes that news producers may prefer to cater to some consumers more than others, in particular to the marginal consumers whom advertisers value most; if those consumers lean left of the average American, a profit-maximizing outlet will slant its news accordingly.
A more compelling explanation for the liberal slant of news outlets, in our view, involves production factors, not demand factors. As Daniel Sutter (2001) has noted, journalists might systematically have a taste to slant their stories to the left. Indeed, this is consistent with the survey evidence that we noted earlier. As a consequence, “If the majority of journalists have left-of-center views, liberal news might cost less to supply than unbiased news (444).” David Baron (2004) constructs a rigorous mathematical model along these lines. In his model journalists are driven not just by money but also by a desire to influence their readers or viewers. Baron shows that profit-maximizing firms may choose to allow reporters to slant their stories, and consequently in equilibrium the media will have a systematic bias.[39]
A second empirical regularity is that the media outlets that we examine are fairly centrist relative to members of Congress. For instance, as Figure 2 shows, all outlets but one have a score between that of the average Democrat (84.3) and that of the average Republican (16.1); the one exception, the Wall Street Journal (85.1), lies just outside this range.
Moreover, when we add price competition to the basic spatial model, then, as Mullainathan and Shleifer (2003) show, even fewer media outlets should be centrist. Specifically, their two-firm model predicts that both media firms should choose slants that lie outside the range of slants preferred by consumers. The intuition is that in the first round, when firms choose locations, they want to differentiate their products significantly, so that in the next round they will have less incentive to compete on price. Given this theoretical result, it is puzzling that the media outlets we examine are as centrist as we find them to be.
A third
empirical regularity involves the question of whether reporters will be faithful agents of the owners of the firms for which they work. That is, will the slant of their news stories reflect their own ideological preferences or those of the firms’ owners? The
conventional wisdom, at least among left-wing commentators, is that the latter
is true. For instance, Eric Alterman
(2003) entitles a chapter of his book “You’re Only as Liberal as the Man Who
Owns You.” A weaker assertion is that
the particular news outlet will be a
faithful agent of the firm that owns it.
However, our results provide some weak evidence that this is not
true. For instance, although Time magazine and CNN’s Newsnight are owned by the same firm (Time Warner), their estimated scores differ substantially: 65.4 for Time versus 56.0 for Newsnight.
A fourth regularity concerns the
question whether one should expect a government-funded news outlet to be more
liberal than a privately-funded outlet.
“Radical democratic” media scholars Robert McChesney and Ben Scott claim
that it will. For instance, they note
“[Commercial journalism] has more often served the minority interests of
dominant political, military, and business concerns than it has the majority
interests of disadvantaged social classes
(2004, 4).” And conservatives,
who frequently complain that NPR is far left, seem also to agree. However, our results do not support such
claims. If anything, the government-funded outlets in our sample (NPR’s Morning Edition and Newshour with Jim Lehrer) have a slightly lower (that is, more conservative) average score than the privately funded outlets that we examine.
In interpreting some of the above regularities, especially perhaps the latter two, we advise caution. For instance, with regard to our comparisons of government-funded vs. privately-funded news outlets, we should emphasize that our sample of government-funded outlets is small (only two), and our total sample of news outlets might not be representative of all news outlets.
Relatedly, in our attempts to explain these patterns, we in no way claim to have provided the last word on a satisfactory theory. Nor do we claim to have performed an exhaustive review of potential theories in the literature. Rather, the main goal of our research is simply to demonstrate that it is possible to create an objective measure of the slant of the news. Once this is done, as we hope we have demonstrated in this section, it is easy to raise a host of theoretical issues to which such a measure can be applied.
Appendix
We believe that the most appropriate model specification is the one that we used to generate estimates in Table 3. However, in this Appendix we show how the estimates change when we adopt alternative specifications.
Recall that we excluded observations in which the journalist or legislator gave an ideological label to the think tank or policy group. The first column of Table A1 lists ADA estimates when instead we include these observations, while maintaining all the other assumptions that we used to create Table 3 (e.g. that we use 44 actual think tanks and 6 mega think tanks). As mentioned earlier, when we include labeled observations, the main effect is to make the media outlets appear more centrist. For example, this causes the New York Times’ score to become more conservative by about 3.8 points, while it makes the score of Fox News’ Special Report become more liberal by 1.8 points.
In column 2 we report the results when we exclude citations of the ACLU (while we maintain all the other model specifications we used to construct Table 3, including the decision to omit labeled observations).
In columns 3 to 8 we report the results when, instead of using 44 actual think tanks and 6 mega think tanks, we use 48 (respectively, 47, 46, 45, 43, and 42) actual and 2 (respectively 3, 4, 5, 7, and 8) mega think tanks.
In columns 1 to 4 of Table A2 we report the results when, instead of using 44 actual think tanks and 6 mega think tanks, we use 54 (respectively 64, 74, and 84) actual think tanks and 6 mega think tanks. That is, we let the total number of think tanks that we use change to 60, 70, 80, and 90.
In column 5 of Table A2 we use sentences as the level of observation instead of citations. For instance, suppose that a news outlet lists a four-sentence quotation from a member of a think tank. In the earlier analysis we would count this as one observation; the estimates in column 5 instead treat it as four observations. One problem with this specification is that the data are very lumpy: some quotes contain an inordinate number of sentences, which causes some anomalies. One is that some relatively obscure think tanks become some of the most-cited under this specification. For instance, the Alexis de Tocqueville Institution, which most readers would agree is not one of the most well-known and prominent think tanks, is the 13th most-cited think tank by members of Congress when we use sentences as the level of observation; it is only the 58th most-cited when we use citations. Members of Congress cited it only 35 times, yet they cited an average of 39 sentences per citation, compared with approximately five sentences per citation for the other think tanks. Meanwhile, it is one of the 30 least-cited think tanks by the media.[44] A related problem is that these data are serially correlated. That is, if a given observation for the New York Times is a citation to the Brookings Institution, then the probability is high that the next observation will also be a citation to the same think tank (since the average citation contains more than one sentence). However, the likelihood function that we use assumes that the observations are not serially correlated. Finally, related to these problems, the estimates from this specification sometimes are in stark disagreement with common wisdom. For instance, the estimates imply that the Washington Times is more liberal than Good Morning America.
For these reasons, we base our conclusions on the estimates that use citations as the level of observation, rather than sentences.
References

Alterman, Eric. 2003. What Liberal Media? The Truth about Bias and the News. New York: Basic Books.

Baron, David. 2004. “Persistent Media Bias.” Manuscript.

Black, Duncan. 1958. The Theory of Committees and Elections. Cambridge: Cambridge University Press.

Bozell, L.B., and B.H. Baker, eds. 1990. That’s the Way It Isn’t: A Reference Guide to Media Bias.

Crouse, Timothy. 1973. Boys on the Bus. New York: Random House.

Djankov, Simeon, Caralee McLiesh, Tatiana Nenova, and Andrei Shleifer. 2003. “Who Owns the Media?” Journal of Law and Economics. 46 (October): 341-81.

Franken, Al. 2003. Lies and the Lying Liars Who Tell Them: A Fair and Balanced Look at the Right. New York: Dutton.

Goff, Brian, and Robert Tollison. 1990. “Why is the Media so Liberal?” Journal of Public Finance and Public Choice. 1: 13-21.

Goldberg, Bernard. 2002. Bias: A CBS Insider Exposes How the Media Distort the News. Washington, D.C.: Regnery.

Groeling, Tim, and Samuel Kernell. 1998. “Is Network News Coverage of the President Biased?” Journal of Politics. 60 (November): 1063-87.

Groseclose, Tim, Steven D. Levitt, and James M. Snyder, Jr. 1999. “Comparing Interest Group Scores across Time and Chambers: Adjusted ADA Scores for the U.S. Congress.” American Political Science Review. 93 (March): 33-50.

Groseclose, Tim, and Jeff Milyo. 2004. “Response to ‘“Liberal Bias,” Noch Einmal.’” http://itre.cis.upenn.edu/~myl/languagelog/archives/001301.html.

Hamilton, James. 2004. All the News That’s Fit to Sell: How the Market Transforms Information into News. Princeton, N.J.: Princeton University Press.

Herman, Edward S., and Noam Chomsky. 1988. Manufacturing Consent: The Political Economy of the Mass Media. New York: Pantheon Books.

Hotelling, Harold. 1929. “Stability in Competition.” Economic Journal. 39: 41-57.

(Website for Accuracy in Media.)

Jamieson, Kathleen Hall. 2000. Everything You Think You Know About Politics … and Why You’re Wrong. New York: Basic Books.

Judge, George G., W.E. Griffiths, R. Carter Hill, Helmut Lütkepohl, and Tsoung-Chao Lee. 1985. The Theory and Practice of Econometrics. New York: John Wiley & Sons.

Kurtz, Howard. 2004. “Fewer Republicans Trust the News, Survey Finds.” Washington Post. P. C01.

Lichter, S.R., S. Rothman, and L.S. Lichter. 1986. The Media Elite. Bethesda, Md.: Adler and Adler.

Lott, John R., Jr. 1999. “Public Schooling, Indoctrination, and Totalitarianism.” Journal of Political Economy. 107 (6, part 2): S127-S157.

Lott, John R., Jr. 2003. The Bias Against Guns. Washington, D.C.: Regnery Publishing, Inc.

Lott, John R., Jr., and Kevin A. Hassett. 2004. “Is Newspaper Coverage of Economic Events Politically Biased?” Manuscript. American Enterprise Institute.

McChesney, Robert, and Ben Scott. 2004. Our Unfree Press: 100 Years of Radical Media Criticism. New York: The New Press.

Mullainathan, Sendhil, and Andrei Shleifer. 2003. “The Market for News.” Manuscript.

Nunberg, Geoffrey. 2004. “‘Liberal Bias,’ Noch Einmal.” http://itre.cis.upenn.edu/~myl/languagelog/archives/001169.html.

Parenti, Michael. 1986. Inventing Reality: The Politics of the Mass Media. New York: St. Martin’s Press.

Povich, Elaine. 1996. Partners and Adversaries: The Contentious Connection Between Congress and the Media. Arlington, Va.: Freedom Forum.

Sperry, Paul. 2002. “Myth of the …” The Cato Journal. 20 (Winter): 431-51.

Sutter, Daniel. 2002. “Advertising and Political Bias in the Media.” American Journal of Economics and Sociology. 61 (3): 725-745.

Sutter, Daniel. 2004. “An Indirect Test of the Liberal Media Thesis Using Newsmagazine Circulation.” Manuscript.

Weaver, D.H., and G.C. Wilhoit. 1996. American Journalist in the 1990s. Mahwah, N.J.: Lawrence Erlbaum.

Woodward, Bob. 1994. The Agenda: Inside the Clinton White House. New York: Simon & Schuster.
Table 1. The 50 Most-Cited Think Tanks and Policy Groups by the Media in our Sample

Rank | Think Tank/Policy Group | Avg. score of legislators citing think tank | Citations by Legislators | Citations by Media Outlets
1 | Brookings Institution | 53.3 | 320 | 1392
2 | American Civil Liberties Union | 49.8 | 273 | 1073
3 | NAACP | 75.4 | 134 | 559
4 | Center for Strategic and International Studies | 46.3 | 79 | 432
5 | Amnesty International | 57.4 | 394 | 419
6 | Council on Foreign Relations | 60.2 | 45 | 403
7 | Sierra Club | 68.7 | 376 | 393
8 | American Enterprise Institute | 36.6 | 154 | 382
9 | RAND Corporation | 60.4 | 352 | 350
10 | National Rifle Association | 45.9 | 143 | 336
11 | American Association of Retired Persons | 66.0 | 411 | 333
12 | Carnegie Endowment for International Peace | 51.9 | 26 | 328
13 | Heritage Foundation | 20.0 | 369 | 288
14 | Common Cause | 69.0 | 222 | 287
15 | Center for Responsive Politics | 66.9 | 75 | 264
16 | Consumer Federation of America | 81.7 | 224 | 256
17 | Christian Coalition | 22.6 | 141 | 220
18 | Cato Institute | 36.3 | 224 | 196
19 | | 78.9 | 62 | 195
20 | Institute for International Economics | 48.8 | 61 | 194
21 | Urban Institute | 73.8 | 186 | 187
22 | Family Research Council | 20.3 | 133 | 160
23 | Federation of American Scientists | 67.5 | 36 | 139
24 | Economic Policy Institute | 80.3 | 130 | 138
25 | Center on Budget and Policy Priorities | 88.3 | 224 | 115
26 | National Right to Life Committee | 21.6 | 81 | 109
27 | Electronic | 57.4 | 19 | 107
28 | International Institute for Strategic Studies | 41.2 | 16 | 104
29 | World Wildlife Fund | 50.4 | 130 | 101
30 | Cent. for Strategic and Budgetary Assessments | 33.9 | 7 | 89
31 | Nat. Abort. and Reproductive Rights Action Lg. | 71.9 | 30 | 88
32 | Children's Defense Fund | 82.0 | 231 | 78
33 | Employee Benefit Research Institute | 49.1 | 41 | 78
34 | Citizens Against Government Waste | 36.3 | 367 | 76
35 | People for the American Way | 76.1 | 63 | 76
36 | Environmental Defense Fund | 66.9 | 137 | 74
37 | Economic Strategy Institute | 71.9 | 26 | 71
38 | People for the Ethical Treatment of Animals | 73.4 | 5 | 70
39 | Americans for Tax Reform | 18.7 | 211 | 67
40 | Citizens for Tax Justice | 87.8 | 92 | 67
41 | National Federation of Independent Businesses | | 73 | 64
43 | National Taxpayers Union | 34.3 | 566 | 63
44 | | 63.6 | 26 | 63
45 | | | 28 | 61
46 | Handgun Control, Inc. | 77.2 | 58 | 61
47 | | 36.5 | 35 | 61
48 | | 21.7 | 6 | 61
49 | American Conservative Union | 16.1 | 43 | 56
50 | Manhattan Institute | 32.0 | 18 | 54
Table 2. Average Adjusted Scores of Legislators

Legislator | Ave. Score
Maxine Waters (D.-Calif.) | 99.6
Ted Kennedy (D.-Mass.) | 88.8
John Kerry (D.-Mass.) | 87.6
average Democrat | 84.3
Tom Daschle (D.-S.D.) | 80.9
Joe Lieberman (D-Ct.) | 74.2
Constance Morella (R-Md.) | 68.2
Ernest Hollings (D-S.C.) | 63.7
John Breaux (D-La.) | 59.5
Christopher Shays (R-Ct.) | 54.6
Arlen Specter (R-Pa.) | 51.3
James Leach (R-Iowa) | 50.3
Howell Heflin (D-Ala.) | 49.7
Sam Nunn (D-Ga.) | 48.0
Dave McCurdy (D-Ok.) | 46.9
 | 43.0
Susan Collins (R-Me.) | 39.3
Charlie Stenholm (D-Tex.) | 36.1
Rick Lazio (R-N.Y.) | 35.8
Nathan Deal (D-Ga.) | 21.5
Joe Scarborough (R.-Fla.) | 17.7
average Republican | 16.1
John McCain (R.-Ariz.) | 12.7
Bill Frist (R.-Tenn.) | 10.3
Tom DeLay (R.-Tex.) | 4.7

Note: The table lists average adjusted ADA scores. The method of adjusting scores is described in Groseclose, Levitt, and Snyder (1999). Scores are converted to the 1999 House scale. The scores listed are an average of the legislator's scores during the 1993-1999 period. The one exception is Nathan Deal. His score is the average for the two years of the sample in which he was a Democrat, 1993 and 1994. He switched parties in 1995. Deal is the most conservative Democrat in the sample. Constance Morella is the most liberal Republican in the sample.
Table 3. Results from Maximum Likelihood Estimation

Outlet | Score | Standard Error
ABC Good Morning America | 56.1 | 3.2
ABC World News Tonight | 61.0 | 1.7
CBS Early Show | 66.6 | 4.0
CBS Evening News | 73.7 | 1.6
CNN NewsNight with Aaron Brown | 56.0 | 4.1
Fox News' Special Report with Brit Hume | 39.7 | 1.9
LA Times | 70.0 | 2.2
NBC Nightly News | 61.6 | 1.8
NBC Today Show | 64.0 | 2.5
New York Times | 73.7 | 1.6
Newshour with Jim Lehrer | 55.8 | 2.3
Newsweek | 66.3 | 1.8
NPR Morning Edition | 66.3 | 1.0
Time Magazine | 65.4 | 4.8
 | 65.8 | 1.8
USA Today | 63.4 | 2.7
Wall Street Journal | 85.1 | 3.9
Washington Post | 66.6 | 2.5