Woo Suk Hwang, the star of Korean science, did not establish stem cell lines from cloned human embryos after all. Rather, Hwang's team deliberately fabricated data published in two Science papers [Hwang et al. (2005) 308, 1777; (2004) 303, 1669]. Given the scientist's profile and the importance of the field, the announcement from Seoul National University investigators exploded across the world's newspapers and TV screens.

The fiasco is reminiscent of the Jan Hendrik Schön affair that hit materials science so hard in 2002. Both cases involved feted researchers reporting outstanding results, yet the claimed breakthroughs were not entirely unexpected by the community. Both researchers also had access to the best equipment and materials, so their results seemed entirely plausible.

The fraud committed by Hwang's team was exposed through a tip-off to the Korean TV program PD Notebook and anonymous postings on an Internet message board, which suggested that photographs and DNA fingerprints had been duplicated in the Science papers. So the inevitable question arises: how did the journal editors and referees miss it?

That question somewhat confuses the purpose of peer review. Peer review cannot be expected to detect every case of fraud, particularly if the fraud is carefully done. Its purpose is to assess a paper's originality, whether an appropriate approach has been used, whether the conclusions are fair, and whether the work is worth publishing. It is when other groups try to repeat the work that fraud is more likely to become apparent.

As with the Schön affair, in which the role of coauthors came under the spotlight, I believe the fallout from this case will be felt not in peer review but elsewhere. Already, the Center for Science in the Public Interest in Washington, DC has asked Science and Nature to tighten up their conflict-of-interest policies: Hwang and his former colleague Gerald Schatten of the University of Pittsburgh, Pennsylvania, applied for patents on their cloning techniques before submitting the papers to the journals, yet no disclosure of the potentially lucrative patents appeared in either journal. Perhaps publishers could introduce further checks, for example, asking authors to detail their individual contributions to papers, as some medical journals already do. Where papers are submitted electronically, it may also be possible to use software to identify some types of image manipulation or plagiarism, as sketched below.
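As a purely illustrative sketch of the kind of automated screening just mentioned (not any journal's actual tool), the following Python snippet flags near-duplicate figure files using a simple perceptual "average hash"; the function names, file names, and similarity threshold are assumptions made for the example.

# Minimal sketch: flag suspiciously similar figure files in a submission.
# Hypothetical example only; real screening tools are far more sophisticated.
from PIL import Image  # requires Pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink the image to a small grayscale grid and encode each pixel as
    above/below the mean brightness, giving a compact fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def flag_duplicates(paths: list[str], threshold: int = 5) -> list[tuple[str, str]]:
    """Return pairs of figures whose fingerprints are suspiciously close."""
    hashes = {p: average_hash(p) for p in paths}
    suspects = []
    for i, p1 in enumerate(paths):
        for p2 in paths[i + 1:]:
            if hamming_distance(hashes[p1], hashes[p2]) <= threshold:
                suspects.append((p1, p2))
    return suspects

if __name__ == "__main__":
    # Hypothetical figure files from a single submission.
    print(flag_duplicates(["figure1a.png", "figure1b.png", "figure2c.png"]))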

In my view, peer review is under pressure from very different and less publicized sources. The burden on referees is reaching breaking point, with more papers being published and increasing pressure to speed up the review process. What is needed is some way of properly recognizing the valuable contribution of referees that does not jeopardize their impartiality.


DOI: 10.1016/S1369-7021(06)71370-0