Sirs—In their qualitative assessment of screening trials, Freedman et al.1 demonstrate that impressive author affiliations and the peer review process do not guarantee accuracy. On page 50, the reader will find the following persuasive statement about the Canadian National Breast Screening Study (CNBSS): ‘Centre radiologists only agreed with the reference radiologist 30–50% of the time.’ What a dreadful study that must have been! Quite wrong.

The numbers they cite are clearly reported as kappa statistics, which indicate how much of the observed agreement exceeds what might occur by chance alone.2 In fact, Table 2 of our publication (which they disregarded) shows very clearly that centre radiologists agreed with the reference radiologist 85.6% of the time for cancer cases and 75.8% of the time for mammograms from women who did not have cancer. Do these authors truly not understand the difference between inter-observer agreement and kappa statistics?
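The distinction is easily made concrete. Cohen's kappa is defined as

kappa = (p_o − p_e) / (1 − p_e),

where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance.2 As an illustration only (the chance-agreement value here is assumed for the example, not taken from either publication): with observed agreement p_o = 0.856 and a plausible chance agreement of p_e = 0.75, one obtains kappa = (0.856 − 0.75) / (1 − 0.75) ≈ 0.42. A kappa in the 30–50% range is therefore entirely compatible with raw agreement well above 80%.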
