Integrity Initiatives in the Südhof Lab
Updated November 5, 2025
Südhof Lab Policies
As science has evolved over the last decades from a paper-based to a web-based endeavor, we have instituted procedures to enable transparent access to all data and tools of the Südhof lab beyond the documentation in published papers. The overall lab policies are documented in our publicly available lab manual. [Link to our Lab Manual]
Current procedures include:
- Full unrestricted access for non-profit research entities to all published renewable reagents, including mice (via The Jackson Laboratory) and plasmids
- Access to all experimental protocols
- Public access to software applications (Apps) and algorithms (https://med.stanford.edu/sudhoflab/science-resources/tools.html)
- Public access to all raw data for papers published after 2023 in the publicly accessible Stanford Digital Repository (SDR)
- All manuscripts and published papers are screened for copy-paste errors and image artifacts using A.I. tools (Proofig, ImageTwin, and Forensically)
- Conversion of all lab notebooks to digital form (after 2023)
- Publication of all papers with an open access CC BY 4.0 license
- Publication of a paper’s peer reviews alongside the paper when allowed by the journal
- Facilitating lab visits to disseminate key technologies
All experiments in the Südhof lab are required to conform to the following guidelines:
- Experimenters must be unaware (‘blinded’) of the identity of samples, data, or animal subjects, which are anonymized whenever possible
- All experiments are carried out in at least three independent biological replicates (not just pseudo-replicates)
- All major conclusions are based on at least two experimentally distinct methods
- Sex is considered an independent biological variable whenever appropriate
- All experiments are documented in virtual notebooks stored in Stanford Box
- All primary data are reviewed in lab meetings by multiple lab members, not just the P.I.
Adherence to the Südhof lab guidelines was documented in an anonymous survey of current lab members found here: [link to 2025 lab survey]
In addition, our lab strives to independently reproduce major conclusions with follow-up studies using a different approach. With these rules, we aim to promote transparency and reproducibility. Arguably the biggest challenge in science integrity at present is not the fabrication of data, but issues in selecting, analyzing and interpreting data. Unrestricted access to raw data will greatly help to meet this challenge because it enables independent and alternative evaluations of the data. It is inevitable that our analyses and interpretations will be challenged. Gratifyingly, these challenges will concern the substance of science instead of focusing on human errors and will facilitate more interactive scientific practices.
For our efforts in promoting data availability and transparency, our lab received the Stanford University Libraries Data Sharing Prize, an Open Science Award from CORES, in 2024.
PubPeer Scrutiny of Südhof Lab Work
Our lab has come under intense scrutiny from commentators outside of the scientific community. Websites such as PubPeer and ForBetterScience have published extensive criticisms of our work. PubPeer critics have recently used newly developed A.I. software to analyze >25,000 display items in our papers. PubPeer posts thereby helped us to identify and correct errors that we made and that were undetectable without A.I. software tools (see the detailed accounting of PubPeer posts below). In addition, PubPeer critics leveled unfounded accusations that ‘flag’ papers even though they contain no errors (again, see the detailed accounting below). No PubPeer post has ever questioned our papers’ findings; instead, PubPeer posts focus on perceived discrepancies in the data. Furthermore, none of the identified errors, even errors that resulted in retractions, had an impact on a paper’s conclusions. Dr. E. Bik, a prominent PubPeer professional, widely distributed to universities and other parties a summary of her accusations against our lab that is presented here with our comments [link to Bik Excel file]. Our own analysis of our 674 published papers, provided in detail below, revealed that 21 of our papers contain mostly trivial errors, 7 papers of collaborators included errors for which PubPeer blamed us, and more than 30 papers were falsely ‘flagged’ (total numbers exceed 51 because many papers are associated with both errors and additional false accusations).
As discussed in detail below, most of our errors are copy-paste mistakes that occurred in the period between the implementation of digital data processing and the availability of A.I.-powered error-detection tools, which both detect and prevent such errors. In addition, some papers contained inexplicable image aberrations that do not serve to support the papers’ conclusions and are likely image processing artifacts, as described below.
Regrettably, PubPeer and other social media sites are non-transparent, censor responses, ‘flag’ as many papers as possible to force corrections and retractions, and rely on anonymous commentators. Moreover, PubPeer posts take a fundamentalist attitude that is deeply unscientific. PubPeer commentators insist that even an accidental duplication of a control image, undetectable to the naked eye, is a major issue warranting a correction or even a retraction, although there is no reason to doubt the conclusions of these papers.
Historically, representative images were used to illustrate data for a better understanding of experiments. They were not meant to be ‘pure’. Splicing gel bands or changing images for illustrations were accepted practices that are now alleged to constitute data manipulations. Even if an accusation is implausible and the alleged manipulation serves no purpose, a frequent PubPeer strategy is to repeat the same allegations over and over again in different ways, often accompanied by ‘animations’ that confer a veneer of seriousness. This strategy creates an aura of ‘something is wrong here’ to which journal editors are susceptible. Although occasional PubPeer comments seriously examine scientific results, many comments appear to pursue an agenda unrelated to the actual science.
Moreover, most PubPeer comments are posted by a small group of people who publicize their accusations on ‘X’ (formerly Twitter) and other social media. Many PubPeer commentators have apparent conflicts of interest. Prominent PubPeer commentators serve as science-integrity experts who are invited to give paid presentations, receive honoraria for consultations, and are awarded handsome ‘integrity’ prizes. Some PubPeer commentators maintain commercial websites publishing their discoveries. All PubPeer accusations are communicated to journals and universities, which are often unfamiliar with the science and are easily pressured by PubPeer commentators to start investigations. Journals and university administrators often accept PubPeer accusations without examining their plausibility or the integrity of the commentators. A typical example of this situation is our paper in Science Advances, for which the journal published an Editorial Expression of Concern that simply repeats PubPeer accusations (https://www.science.org/doi/10.1126/sciadv.aec3110; see case B below). Editorial overreach fueled by PubPeer may be a growing problem in science publishing.
Errors and Artifacts in Scientific Papers
Two principal types of problems in papers are detected by new A.I.-powered programs such as ‘ImageTwin’ or ‘Proofig’. First, image or data duplications that could represent accepted practices, artifacts, or copy-paste errors. Copy-paste errors commonly occur when a copy command fails and a related, previously copied image or dataset that is still on the computer’s ‘clipboard’ is pasted instead. Second, image changes that could be artifacts or intentional image manipulations. As described below, artifacts were often produced by older software using image compression and expansion algorithms. Thus, a ‘flagged’ problem could represent an acceptable practice, an artifact produced by instruments or image-processing software, a minor error made by a scientist or journal, or an intentional data manipulation.
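In spirit, such duplication detectors scan a paper’s images for regions that repeat exactly. The following toy sketch illustrates the basic idea; it is our own minimal illustration, not the actual (proprietary) algorithm of ImageTwin or Proofig, which also catch rotated, rescaled, and partially overlapping copies:

```python
import numpy as np

def find_duplicate_blocks(img, block=8):
    """Report pairs of identical block x block regions in a grayscale image.

    Illustrative sketch of exact-duplicate detection only; commercial tools
    use far more robust methods than byte-for-byte block comparison.
    """
    h, w = img.shape
    seen = {}         # block bytes -> first (row, col) where the block appeared
    duplicates = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                duplicates.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return duplicates

# Plant one exact duplicate in a noisy synthetic "blot" and detect it.
rng = np.random.default_rng(0)
blot = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
blot[32:40, 32:40] = blot[8:16, 8:16]  # the copy-paste
print(find_duplicate_blocks(blot))     # [((8, 8), (32, 32))]
```

Note that such a detector cannot, by itself, distinguish an accepted practice, an artifact, an accidental copy-paste, or an intentional manipulation; it only locates repeats, which is why the interpretation step discussed below matters.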
To distinguish artifacts and accidental errors from misconduct, arguably the most important criteria are intent and impact. Honest mistakes and artifacts generally provide no benefit to a paper (intent) and do not affect a paper’s findings (impact). Misconduct, conversely, aims to make a paper look better (intent) and to enhance its conclusions (impact). A third, less important criterion is the nature of the issue, i.e. whether the issue is plausibly explained as an accidental error or artifact. Accidental errors and artifacts generally affect non-critical parts of data. They are caused by a series of problems, ranging from copy-paste mistakes to compression artifacts in background signals and image processing-induced duplications of tiny image areas. Willful manipulations, conversely, generally leave traces of the manipulation. The wide availability of new A.I.-powered programs that enable detection of problems such as accidental duplications and intentional manipulations will prevent future errors but will also, regrettably, facilitate creating undetectable manipulations.
Importantly, A.I.-powered searches for issues in papers, such as those performed by PubPeer professionals, can result in false accusations. For example, when immunoblots are imaged at high resolution but reproduced at low resolution, bands can assume a ‘halo’ that is a simple digital reproduction artifact, not a manipulation.
More critically, immunoblots produced by the same gel apparatus with similar samples and analyzed with comparable secondary antibodies often exhibit the same shapes and features. This is true even if the blots are run on different gels. Our example shows the same artifacts in four different blots with distinct samples run on different gels – the band shapes are almost indistinguishable! Many false allegations of fraud by PubPeer commentators suggest blot duplications that don’t exist. These false allegations may have led to unwarranted paper retractions that destroy valuable data and promising careers.
In addition to possible image duplications, A.I.-powered image analysis programs detect image artifacts that can be produced by image processing algorithms. In a fraud accusation against our lab (case #10 below) we were able to prove that the accusations of image duplications and image manipulations were false because, serendipitously, the first author retained images of the original blots (see detailed analysis here [link to Burre paper analysis]). More often, however, old raw data are no longer available. In that case, journals often demand retractions even though there is no evidence that the data from decades ago are unreliable. Thus, a general problem is that digital reproductions of images, with repeated cycles of image compression and expansion using older software, create artifactual duplications of regions of an image, especially if the image resolution is changed multiple times. These artifacts are not visible to the naked eye but are now found by PubPeer accusers using sophisticated A.I. software, leading to false accusations of fraud and journal investigations that arguably represent overreach.
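The resampling part of this mechanism is easy to reproduce. The sketch below is a simplified model (it assumes nearest-neighbour resampling, one of the cheapest methods available to older imaging software, and is not a reconstruction of any specific program): a single downscale/upscale round trip manufactures exact duplicate rows and columns in an image that originally contained none.

```python
import numpy as np

def nn_resize(img, new_h, new_w):
    """Nearest-neighbour resize: each output pixel copies one input pixel."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(40, 40), dtype=np.uint8)

# Shrink to half size (information is discarded), then blow back up.
round_trip = nn_resize(nn_resize(original, 20, 20), 40, 40)

# The restored image now contains exact duplicates: each retained row and
# column is repeated verbatim, even though the original had no repeats.
print(np.array_equal(round_trip[0], round_trip[1]))  # True
print(np.array_equal(original[0], original[1]))      # False
```

Real reproduction pipelines stack block-based compression (e.g., JPEG) on top of such resampling, which can yield the patchier, block-shaped micro-duplications that duplication detectors then flag.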
Benefits and Costs of Efforts to Detect Scientific Fraud
Fraud constitutes a potentially serious problem in science, especially when it motivates follow-up studies or clinical trials, as described in a recent book (Charles Piller, “Doctored: Fraud, Arrogance, and Tragedy in the Quest to Cure Alzheimer’s”). It is essential for scientific papers to present data that are accurate descriptions of experimental results. Papers with errors that affect conclusions need to be retracted. In the current fundamentalist climate of science, however, authors are also asked to retract papers containing minor errors that cannot be definitively clarified, even if the errors do not affect the conclusions of the paper. The benefits and rationale for such retractions are questionable, and the retractions may be unethical. As a case in point, to the best of our knowledge none of the thousands of errors identified by the celebrated Dr. E. Bik ever had an impact on the actual conclusions of a paper. The errors were almost always procedural, although some were clearly indicative of data manipulations.
Thus, there is a dark side to scientific fraud detection efforts. The wide reporting of mistakes on PubPeer, ‘X’, and other social media and the public shaming of scientists cause mental anguish, ruin careers, and waste resources. They create the public impression that there is a huge problem in scientific integrity, a problem that arguably doesn’t exist. Every error, no matter how trivial, has to be addressed by the authors of papers in extensive communications, record searches, meaningless corrections, and sometimes even new experiments. The costs to young scientists are especially horrendous. In my own lab, such accusations have pushed multiple junior scientists out of science and have occupied scores of scientists in defensive work.
As scientists, we need to publicly acknowledge mistakes. But we also have to ask whether it is warranted to correct errors that nobody would notice without A.I.-driven software and that have no impact on a paper. Are we forced to conform to pressures from PubPeer professionals who are uninterested in the actual results but claim the need for absolute purity in science? Moreover, if a paper contains possible data manipulations that constitute only a tiny fraction of a project, is it sufficient to note this publicly or should this be a reason to retract the entire paper? These are issues that we, the scientific community, have to address, even though we don’t have the power to deal with them appropriately.
Finally, PubPeer has guided journal and university practices beyond science. Ethics departments that investigate allegations of misconduct proliferate. The ‘ethics’ teams, however, often have a conflict of interest because their existence depends on finding problems, an inherent issue with regulatory bureaucracies. As a result, minor infractions become amplified. Again, this creates human suffering, destroys careers, and enhances anti-science attitudes in the public without tangible benefits for science integrity. An impression of a ‘reproducibility crisis’ in science has emerged, a crisis that arguably doesn’t exist – all that happened is that many older papers contain relatively trivial errors and artifacts, which is not a surprise given that they were produced by humans.
Errors in Papers from the Südhof Lab
As described below in detail, PubPeer posts identified multiple errors in papers from our lab. Most errors were copy-paste mistakes that concern control conditions in which images look alike. Once a copy-paste mistake was committed, neither the student or postdoc who assembled the figure nor I could detect it until new A.I.-based computational image-analysis software identified the mistake. The nature of these mistakes – insertion of incorrectly duplicated representative images or duplicated numbers in data files after analysis – means that the mistakes did not impact the findings or conclusions of a paper. In addition, some of our published images include strange microduplications that serve no purpose and are likely artifacts, but that provide an opportunity for PubPeer professionals to allege manipulations.
Nearly all of the mistakes in our papers involve isolated errors in papers containing up to 750 images. Overall, approximately 3% of our more than 600 papers, which together contain more than 25,000 display items, contain an error (a per-item error rate of <0.1%). A plot of the papers we published (green circles, see below) and of papers with errors from our lab (red bars) or from collaborating labs (orange bars) versus the year of publication tracks our lab’s errors as a function of time and of the number of published papers. The plot also shows the timeline of the ‘flagging’ of our papers on PubPeer, which coincides with the advent of A.I.-based image analyses (blue circles).
The plot reveals that errors do not correlate with the number of papers published, lab size, or individual lab members, but coincide with the advent of pasting digital images and data, which are susceptible to copy-paste errors. It also shows that these errors were only detected once A.I.-driven software became available (blue circles). A limited sampling of the literature reveals that similar copy-paste mistakes are ubiquitous in papers with many display items. The A.I.-powered software that enabled the recent detection of these errors will also prevent their recurrence in the future, together with the advances in raw-data storage that are now in place. Indeed, no new errors in our papers have been identified since we instituted A.I. screening of our papers in 2023.
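For concreteness, the headline rates quoted above follow from simple arithmetic. The counts are taken from this document; the display-item total is approximate, and treating each affected paper as contributing roughly one erroneous item is a simplifying assumption:

```python
# Figures taken from the text above; the display-item count is approximate.
papers_total = 674        # published papers
papers_with_errors = 21   # papers with errors attributed to our lab
display_items = 25_000    # total display items across those papers (approx.)

paper_rate = papers_with_errors / papers_total
# Assuming roughly one erroneous item per affected paper (a simplification):
item_rate = papers_with_errors / display_items

print(f"papers with errors: {paper_rate:.1%}")  # ~3.1% of papers
print(f"per display item:   {item_rate:.2%}")   # ~0.08%, i.e. <0.1%
```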
One accusation that has been leveled against my lab is that although the overall error rate of our lab is low, the fact that multiple errors occurred is indicative of a careless lab culture. Alarmed by this accusation, I asked an independent monitor to organize surveys of our lab culture, as experienced by current and by past lab members. The anonymous survey of current lab members can be found here [link to 2025 lab survey] and the questionnaires completed by previous trainees can be found here [link to folder with trainee questionnaires]. Some of the responses of previous trainees were anonymized because, in the current atmosphere of fear among scientists, many respondents worried about becoming PubPeer targets and therefore requested anonymity.
Two Case Studies of Flagged Papers Raising Journal Integrity Questions
All PubPeer accusations are directly communicated to journals, which then explore whether the accusations are plausible. When journals request feedback from authors and the authors do not concur with the PubPeer analysis, journals have to assess whether their integrity is better served by adopting the PubPeer accusations or by accepting the authors’ views. Unfortunately, given that authors often no longer have the original data – whether for legal reasons or simply because of the elapsed time – this assessment tends to favor PubPeer. Two cases from our lab illustrate this conundrum.
Case A: Shin et al. (2010), #37 below
In a widely publicized insinuation of fraud against my lab, it was alleged that an immunoblot we published in 2010 in Nature Structural Molecular Biology was intentionally manipulated. Specifically, the immunoblot strips in Supplementary Figure S6B (reproduced digitally at low resolution in the journal from a non-digital original blot) contain areas of micro-duplications in the background pattern of a control blot (boxed in different colors) that are alleged to be altered. This is the only ‘issue’ detected in this paper, which contains vast amounts of data (94 display items plus 2 tables).
Background: Figure S6B represents a negative control. The experiment shows immunoblots of pulldowns with WT and mutant C2B-domains of Munc13-2. Blotting controls are included in the left lanes. The 6 different immunoblots in Figure S6B show variations of the same experiment, demonstrating that the C2B-domain does not bind to synaptic proteins as a function of calcium, as is also independently tested in Figure S6A and has since been repeated by others.
Analysis: The accusation targets two blot strips from the pre-digital era, in which a blot was visualized by ECL, photographed, and reproduced by cycles of image compression and expansion. We show here a higher-resolution image of the area with the duplications. In this image, the background is enhanced to enable better views of the duplicated background areas:
The upper blot strip contains a rectangular area in the middle (1’) that exhibits a bizarre pattern of full and partial duplications: a full duplication on the right (1’’), a partial duplication (lower 25%) placed at the upper middle and right edges (2’ & 2’’), and a smaller partial duplication (lower right corner ~10%) on the middle upper edge (3). The lower blot strip contains three different background areas that are each duplicated once (1’-3’’). The thin blue lines demarcate the boundaries of the lanes on the gels. Note that the background exhibits other abnormalities, such as vertical lines.
Let us assume that these duplications are not artifacts but represent intentional manipulations. If so, such manipulations could serve only two purposes (corresponding to the intent and impact criteria discussed above): to change the result (i.e., to falsify the data) or to ‘beautify’ the images, for example because there may have been gel smudges.
No data falsification. The duplicated background areas do not cover up lanes of the gels, as would be expected if someone had wanted to conceal an unwanted band rather than simply rerunning the gel without loading anything (a non-discoverable approach to data falsification).
The image above shows that the microduplications do not correspond to lanes and that non-duplicated background areas separate the duplicated micro-areas of the background. Thus, if there had been a band that was supposed to be covered up, it would have been seen between the microduplications. To illustrate this point in a different way, we are here showing what the background duplications would look like when transferred onto a blot containing a signal (taken from the GST-control strip of the same figure):
Clearly the microduplications do not cover up bands. Thus, there is no falsification or fabrication of data.
Intentional data beautification is improbable. Our accusers could argue that maybe there were smudges on the two blots containing background duplications but not on the other blots and that the experimenter may have been tempted to ‘simply’ cover up these areas instead of even more simply re-running the gel. Several arguments contradict this accusation:
1. There is no evidence of smudges in any of the blots. Given that not a single microduplication but a cluster of 7 is observed, there would have to be many small smudges – why cover them up then?
2. The most straightforward way of covering up smudges would be copy-pasting the duplicated areas. This procedure would create border effects in the pixels. These border effects are absent from the published image, as shown below in a comparison of an intentional duplication (image section framed in red, with red arrows pointing to edge effects) with the observed duplications (arrows point to where border effects would be expected but are absent in the published image).
There is now more sophisticated software to duplicate an image area and the absence of edge effects does not in itself completely rule out the possibility of image manipulations. However, if one argues that in this case a perpetrator was too lazy to re-run a gel in order to fix smudges and also too lazy to simply copy-paste a different gel, it seems implausible that the same perpetrator would expend enormous efforts to create multiple perfect duplications of image areas. If a perpetrator expected that someone would look for edge effects, he/she would also know that the duplications could be detected and would have just re-run the gel with empty samples.
3. The microduplications in the two blot strips are different. In the upper strip, the same area is duplicated multiple times in part or in full, whereas in the lower strip, three different areas are duplicated. The two distinct patterns of duplications are more consistent with an artifact than with a human actor.
4. This is a supplementary blot of no esthetic value that few people would ever scrutinize without A.I. – why correct what would be tiny smudges based on the sizes of the microduplications if nobody sees them?
5. A final argument against the accusation that the image duplications represent efforts to cover up ‘smudges’ on a gel is the way these images were created: an ECL blot was photographed, the blot strips were cut from the photographs with scissors and pasted onto paper with Letraset labeling, and the physically constructed figure was photographed again. It is not credible that a photograph with a smudge would first be pasted onto paper and only later manipulated.
How Nature Structural Molecular Biology handled this PubPeer accusation. In a troubling email from the Nature Structural Molecular Biology editor, the editor stated “Unfortunately, having an example for your other article, where you were able to conclusively determine the origin of the image irregularities flagged by others online [we showed the editors that in another example, a similar microduplication was clearly an artifact and is thus plausibly explained here as an artifact], does not allow us to conclude that the irregularities for images in your NSMB article are similarly artefacts. We do not have access to the original data matching your NSMB article. Therefore, in our view, for your NSMB article, it is not possible for us to definitively determine what the irregularities are. We understand they are likely artefacts, however, at this juncture and based on the lack of original, high-quality image data, we cannot draw this conclusion for your article.”
Basically, the editor states that the fact that the microduplications could be an artifact, and most likely are an artifact, is irrelevant – we are supposed to prove our innocence by providing original data from 16 years ago. This is troubling because it imposes an impossible requirement on us. We are not legally allowed to possess the original data, since the data are the property of UT Southwestern, my previous institution, which refuses to provide them. The editor’s stance is also deeply troubling because it pays no apparent attention to the science, the nature of the microduplications, or the impact of the blot-strip microduplications on the paper. It all apparently doesn’t matter – the science, the impact, the intent. The editor’s reaction suggests a potential problem with some journals, where editors may have no regard for scientific content and are more afraid of social media like PubPeer than of supporting unfounded accusations. We refer to this as editorial overreach.
The most likely explanation for the areas of background microduplications, like for many of the ‘mistakes’ identified by A.I.-powered software, is that the microduplications are artifacts of digitized images of photographs after multiple rounds of image compression and expansion. Indeed, for the paper discussed in #10 we could demonstrate such artifacts because digital copies of the original data were still available after more than 15 years [link to Burre paper analysis]. It seems probable that many more cases like this will be discovered by A.I. Artifacts produced by digital reproductions of images with repeated cycles of image compression and expansion using older software are now found by PubPeer accusers using sophisticated A.I. software, leading to false accusations of fraud that at least some journal editors take more seriously than the actual science. No wonder that the public has doubts about the value of science!
Case B: Lin et al. (Science Advances 2023), #18 below
PubPeer accusers of our lab made two principal allegations against us with respect to the P.Y. Lin et al. Science Advances (2023) paper, namely that the n’s in Figure 3B are incorrect based on the data we submitted to the Stanford Digital Repository (SDR), and that the leak currents for the recordings were too high. Science Advances recently published an “Editorial Expression of Concern” that repeats the PubPeer accusations without consulting us as authors (https://www.science.org/doi/10.1126/sciadv.aec3110).
The EEoC relied on anonymous PubPeer commentary even though the data in question are verifiably and publicly available in the Stanford Digital Repository (SDR). The problem here is that the PubPeer accusers and the Editor-in-Chief of the Science journals, Dr. Holden Thorp, apparently didn’t look in the right tab of the Prism file in the SDR. They looked at the ‘AMPA IO’ tab, which contains only part of the data, but not at the ‘Data 7’ tab, which contains all of the data with the correct n’s. This incorrect reading of the Prism file led them to conclude that the n’s are incorrect. The Prism file has been publicly available since last year and was not recently added, so all of the information has always been public. All recordings are also public. Thus, the accusation rests on a simple misunderstanding of the SDR, which admittedly is very non-intuitive and easy to misread.
The second PubPeer accusation against our Science Advances paper is that the leak currents for Figure 3B were too high. It is correct that the recordings for Figure 3B are not of as high a quality as those in Figures 1 and 5; all three figures test the same question and obtain the same result from slightly different angles. Figure 3B was obtained with a high-chloride internal pipette solution, which normally produces very high leak currents (see analysis here [Analysis of leak currents effect of internals]), whereas Figures 1 and 5 were obtained with a regular CsMeSO3 internal solution that produces lower leak currents. Note that the term ‘leak current’ is misleading here, since most leak currents are not due to a ‘leak’ but reflect normal ion conductances.
We have now reanalyzed the recordings for Figure 3B. We obtained the same conclusions as published, although the new analysis results in slightly different numbers and shifts in statistical significance, as would be expected for electrophysiological analyses in which individual judgement calls play a role. Even when recordings with higher leak currents were excluded, the conclusions remained valid. Thus, the concern about leak currents seems unfounded.
A third accusation by PubPeer professionals against this paper was that in Figure 4B, the expanded images on the right are not all present in the image overview shown on the left. However, the paper never claimed that the expanded images on the right are present in the overview on the left. We refuted this particular accusation, at a time when we still responded on PubPeer (which we no longer do), by showing a larger image of the overview that contained all expanded images.
Summary. Together, these two examples illustrate the relentless nature of PubPeer investigations and the conundrum facing journals. It is inevitable that we as scientists and authors make mistakes. Even the Science Advances paper likely contains real mistakes (indeed, the methods section failed to describe the internal solutions properly – we are human), but the PubPeer accusations were unfounded. The question is whether, in a situation like ours, where one particular lab becomes the unending target of PubPeer professionals, journals have a higher responsibility to the authors and should give them the benefit of the doubt, or whether journals should force authors into a continuous re-review of their papers. In the latter case, authors are damned if they cannot satisfy even the most inconsequential concern, regardless of whether that concern has any bearing on the conclusions of a paper (see case A of the NSMB paper above). This is a question that the scientific establishment needs to address.
Retraction of Papers from the Südhof Lab
In response to PubPeer accusations and journal pressures, we retracted two papers in 2024. Both papers contained significant mistakes that, however, did not affect the major conclusions of the papers. A detailed analysis shows that the errors in these two papers had no impact on the principal conclusions and were likely unintentional.
The first retracted paper (P.Y. Lin et al., PNAS 2023; entry #8 below) included numerous number duplications in the supplementary source data file. The paper had no other known errors, although the quality of some of the original recordings was also criticized. Further scrutiny revealed that the errors in the source data file had no detectable effect on the figures, which were essentially the same whether plotted from the error-containing or the corrected source data from the first author. The pattern of duplications suggests that they occurred when the lead postdoc was careless in assembling the Excel file containing the source data, a file that includes thousands of manual data entries. The primary raw data of the paper are publicly available (https://purl.stanford.edu/cp231wr9194), documenting the overall validity of the study.
The second retracted paper (L.Y. Chen et al., Neuron 2017; entry #27 below) contained three panels with image abnormalities that we could not explain because the original files and samples for those particular images were no longer available after 8 years. The image abnormalities provided no benefit to the paper. The example shown here illustrates the complex microduplications observed in the blue color channel. This channel is not even mentioned in the paper. The microduplications resemble microduplications found in other PubPeer examples that were proven to be artifacts (https://pubpeer.com/publications/C9E4F18C603C449A0CD32876B719A5 and https://pubpeer.com/publications/F647CE4587D81D76F67C6B8A2F7B0C; see also our case documented by original data here [link to Burre paper analysis]). We cannot demonstrate that these microduplications were not intentional because the original samples are lost. However, the abnormalities fail the two tests of an intentional manipulation: they provide no benefit to the paper, and they show no evidence of intent. Less than 2% of the display items of this paper show abnormalities, no other problems in the paper were detected besides the three image abnormalities, and all raw data except those for a subset of images are available. We hope to have the opportunity to republish the data of this paper with additional confirmatory evidence, although journals are loath to republish such papers. We now feel that the retraction of this paper may also have been premature.
Both of these retractions caused the loss of public access to valuable data, months of extra work by innumerable scientists, and severe damage to the careers of scientists. More retractions like these will probably follow given the desire of journals to appear to be ‘pure’.
My advice to any scientist who might encounter a similar situation is to resist the pressures to retract papers from a journal’s ‘ethics department’ that is guided by PubPeer, to limit interactions with PubPeer instead of responding (I learned this too late), and to protect the actual scientific truth as much as possible. In this age of disinformation and public accusations, we need to be able to admit mistakes and to correct errors caused by our imperfections, but we also need to defend the value of the actual data and conclusions. In that last goal, at least, I failed.
Main PubPeer Accusers of the Südhof Lab
Originally, most PubPeer contributors were idealistic science enthusiasts. More recently, professional “science integrity investigators” who may derive income from their activities seem to dominate. The criticisms increasingly imply that minor mistakes are not just human error but signify a bigger problem. The majority of PubPeer accusations against our lab are made by four commentators: Dr. Elisabeth Bik, Dr. Maarten van Kampen (pseudonym Orchestus quercus), Patrick Kevin (pseudonym Actinopolyspora biskrensis), and Dr. Kaveh Bazargan (pseudonym Illex illecebrosus). Our lab’s main accusers often operate like a tag team, creating an echo chamber: Dr. Bik usually makes an accusation first, and the others then repeat the allegation from various angles. Recently, Dr. Bik donated money from her science-integrity income to a foundation she controls and was appointed as a faculty affiliate at Stanford, making her simultaneously my colleague at Stanford and my most frequent critic, one who widely distributes and frequently repeats her specific accusations against my lab [link to Bik Excel file].
The four main accusers of our lab post their findings on ‘X’ and on other outlets such as ‘ForBetterScience’ that personally accuse me of fraud. They frequently ask for corrections or retractions based on perceived errors, no matter how trivial. The accusations never imply that the conclusions of a paper are incorrect; instead, they describe isolated image or data errors and allege manipulations. PubPeer commenters repeatedly stress that they do not want to speculate on the intention behind an alleged fraud even though they accuse us of intentional data manipulations. This is presumably because the alleged manipulation generally provides no benefit to the paper, making it impossible to identify a reason why a person would intentionally commit the alleged fraud. One can’t escape the impression that PubPeer commenters generally believe we are crazy scientists who commit data manipulations of no benefit to a paper just for the fun of it. Sadly, journals sometimes seem to agree.
Complete Accounting of Südhof Lab Mistakes and Unfounded Allegations
Given the persistent public scrutiny of our lab's work, we here provide a full accounting of all PubPeer accusations. We discuss all correct and unfounded PubPeer accusations, describe our uncensored responses, and additionally reveal mistakes we identified that have not (yet) been publicized on PubPeer. Note that we no longer respond on PubPeer owing to the biased nature of the website.
The PubPeer accusations are summarized in reverse chronological order of a paper’s first criticism or of our identification of a problem. Accusations are numbered according to PubPeer entries, with gaps in the numbering caused by responses or irrelevant comments. We classify accusations by their resolution as ‘unfounded’, ‘error’, or ‘irrelevant’. We include accusations directed against our lab but actually concerning data from collaborating labs. As scientists, we are often reluctant to admit mistakes. Recognizing this tendency, we tried to concede errors wherever they exist, even if trivial. However, PubPeer accusers are also loath to admit mistakes. They often repeat the same point multiple times, no matter how implausible, and dress up their accusations with fancy graphics and animations that purport to represent analyses without adding new information, leading to unending discussions that create an atmosphere of ‘something is there’.
Detailed discussion of individual papers
51. Paper: Han, W., Rhee, J.S., Maximov, A., Lao, Y., Mashimo, T., Rosenmund, C., and Südhof, T.C. (2004) N-glycosylation is essential for vesicular targeting of synaptotagmin 1. Neuron 41, 85-99.
PubPeer Weblink: https://pubpeer.com/publications/E8D33683A69195FD74E0FE0BBC204C
#1 (June 2025)
Accusation: One figure contains an image duplication.
Resolution: The accusation is partly correct. The images were switched, not duplicated. The paper contains 177 images, resulting in an error rate of <1%. A corrected figure has been sent to the journal.
Classification: Minor error during figure assembly
***
50. Paper: Xu, J., Pang, Z.P., Shin, O.H., and Südhof, T.C. (2009) Synaptotagmin-1 functions as a Ca2+ sensor for spontaneous release. Nature Neurosci. 12, 759-766.
PubPeer Weblink: https://pubpeer.com/publications/FA163CBB31FF4A026B0E3DC0BF0365
#1 & #2 (January 2025)
Accusation: Two of the 174 electrophysiology traces in the paper look ‘unexpectedly similar’, implying they may be duplicated.
Resolution: The similarity in traces may be due to a copy-paste error, a serendipitous chance event, or an image duplication during the processing of the paper for publication. We no longer have access to the raw data, which were archived at UT Southwestern, my former institution, and cannot clarify this issue. If an error occurred, it affected a control condition without implications for the conclusions of the paper and represents an error rate of <1%.
Classification: Unfounded or minor error during figure assembly or during reproduction
***
49. Paper: Fernandez-Chacon, R., Shin, O.-H., Königstorfer, A., Matos, M.F., Meyer, A.C., Garcia, J., Gerber, S.H., Rizo, J., Südhof, T.C., and Rosenmund, C. (2002) Structure/function analysis of Ca2+-binding to the C2A-domain of synaptotagmin 1. J. Neurosci. 22, 8438-8446.
PubPeer Weblink: https://pubpeer.com/publications/1FD9572D003130580F6C598C39D9C6
#1 & #2 (November 2024)
Accusation: Figure 7A exhibits an ‘irregular background’, implying it has been manipulated.
Resolution: The irregular background is a typical artifact of older image processing, compression, and reproduction algorithms. Specifically, 25 years ago a blot was visualized by ECL and the signal was detected on a film that was scanned or photographed. The photographs were printed, pasted onto a figure paper, and again photographed, to be finally digitally processed during submission and publication of the paper. This extensive process could introduce many artifacts, as extensively discussed below. None of these background effects have any implications for a paper. PubPeer implies an image manipulation here, but the Adobe Photoshop tool that would allow this type of change to be introduced only became available in 2002 (https://v4development.com/blog/our-blog-post-four-36). Here is an analysis of this image:
The published image is on the left, the enhanced contrast images on the right. Older images, like those of #10 and #37, show weird artifacts that are clearly not manipulations.
Classification: Unfounded
***
48. Paper: Hao, Y.A., Lee, S., Roth, R.H., Natale, S., Gomez, L., Taxidis, J., O’Neill, P.S., Villette, V., Bradley, J., Wang, Z., Jiang, D., Zhang, G., Sheng, M., Lu, D., Boyden, E., Delvendahl, I., Golshani, P., Wernig, M., Feldman, D.F., Ji, N., Ding, J., Südhof, T.C., Clandinin, T.R., and Lin, M. (2024) A fast and responsive voltage indicator with enhanced sensitivity for unitary synaptic events. Neuron 112, 3680-3696.
PubPeer Weblink: https://pubpeer.com/publications/B86E470B8AF2DBE88B7956BCF08AF3
#1 & #2 (November 2024)
Accusation: One of the traces in a collaborator’s paper may be duplicated.
Resolution: The collaborator confirmed that his postdoc committed a copy-paste mistake that was corrected.
Classification: Minor error in a collaborator’s lab
***
47. Paper: Chen, M., Jiang, X., Quake, S.R., and Südhof, T.C. (2020) Persistent transcriptional programs are associated with remote memory. Nature 587, 437-442.
PubPeer Weblink: https://pubpeer.com/publications/252AB9CA56042FD2AC7F710FE4B31F
#1 & #2 (January 2025)
Accusation: Extended Data Fig. 5 contains a duplicated volcano plot.
Resolution: The accusation is correct and a correction is published. All primary data are publicly available.
Classification: Minor error in a collaborator’s lab
***
46. Paper: Pak, C.H., Danko, T., Zhang, Y., Aoto, J., Anderson, G., Maxeiner, S., Yi, F., Wernig, M., and Südhof, T.C. (2015) Human Neuropsychiatric Disease Modeling Using Conditional Deletion Reveals Synaptic Transmission Defects Caused by Heterozygous Mutations in NRXN1. Cell Stem Cell 17, 316-328.
PubPeer Weblink: https://pubpeer.com/publications/4BF247658B9C771C11277691085AE5
#1 (July 2024)
Accusation: Two panels in Figure S6 are switched.
Resolution: Accidental copy-paste mistake during figure assembly of 207 display items (error rate ~0.5%)
Classification: Minor error
#4, #6 & #7 (October 2024)
Accusation: The resting potential of the neurons analyzed in the paper is too low, and our explanation is false that such a low resting potential is commonly observed in human neurons derived from stem cells.
Resolution: Neurons derived from human stem cells mature slowly and exhibit a steady increase in resting potential during culture. In this particular paper, neurons were analyzed after 3 weeks in culture and are thus relatively immature. The -38 mV resting potential is not materially different from that observed in older neurons analyzed by others at 5-8 weeks. For example, Shih et al. (Stem Cell Res. 2021) observe -42 to -48 mV resting potentials after 4 weeks in culture, and Chen et al. (Stem Cell Res. 2020) observe -50 mV after 12 weeks in culture. Even one of the examples cited by the accusers (Buhlmann et al., J. Neurosci. 2024) sees -45 to -50 mV resting potentials after 4 weeks, which is not very different from ours.
Classification: Unfounded
***
45. Paper: Um, J.W., Pramanik, G., Ko, J.S., Song, M.Y., Lee, D., Kim, H., Park, K.S., Südhof, T.C., Tabuchi, K., and Ko, J. (2014) Calsyntenins Function as Synaptogenic Adhesion Molecules in Concert with Neurexins. Cell Reports 6, 1096-1109.
PubPeer Weblink: https://pubpeer.com/publications/10E74576C4033F2A67749B3395FEFD
#1 (July 2024)
Accusation: Figure 3A contains an image duplication
Resolution: Our lab made only a minor contribution to this paper, but the contributing lab that made this mistake acknowledged the duplication.
Classification: Minor error by another lab
***
44. Paper: Boucard, A., Maxeiner, S., and Südhof, T.C. (2014) Latrophilins Function as Heterophilic Cell-Adhesion Molecules by Binding to Teneurins: Regulation by Alternative Splicing. J. Biol. Chem. 289, 387-402.
PubPeer Weblink: https://pubpeer.com/publications/7B45E6171BCB1F3B805C2BDC316DF0
#1-4, #6 & #8 (July 2024)
Accusation: Figure 1G contains an image duplication
Resolution: Copy-paste error in the panels of Figure 1G, with a total of 142 display items (error rate <1%)
Classification: Minor error
***
43. Paper: Arancillo, M., Min, S.W., Gerber, S., Münster-Wandowski, A., Wu, Y.J., Herman, M., Trimbuch, T., Rah, J.C., Ahnert-Hilger, G., Riedel, D., Südhof, T.C., and Rosenmund, C. (2013) Titration of Syntaxin 1 in mammalian synapses reveals multiple roles in vesicle docking, priming, and release probability. J. Neurosci. 33, 16698-16714.
PubPeer Weblink: https://pubpeer.com/publications/25105898138827EDD8E4A7332B871D
#1 & #2 (2024)
Accusation: The allegations are that the Rab3 and NSF blots in Figure 1G are duplicated (‘remarkably similar’) and that the blots were spliced together from different gels.
Resolution: High-magnification images show that the blots are clearly different. Although the bands are similar, as expected for blots run on the same gel apparatuses (see documentation above), the lanes ran differently, with a much larger gap between lanes for the smaller Rab3A protein than for the larger NSF protein, again as expected (see red circles in the image from the published paper). Moreover, assembling illustrative blots from different lanes was perfectly acceptable practice in science before PubPeer criticized it. The blot predates the digital era and was obtained using ECL and film at UT Southwestern.
Classification: Unfounded
***
42. Paper: Sun, W., Liu, Z., Jiang, X., Chen, M.B., Dong, H., Liu, J., Südhof, T.C., and Quake, S. R. (2024) Spatial and single-cell transcriptomics reveal neuron-astrocyte interplay in long-term memory. Nature 627, 374-381.
PubPeer Weblink: https://pubpeer.com/publications/6123C544EAE3A2F1FE1D9CE476E00C
#1 (July 2024)
Accusation: None – simply cites an unreviewed preprint that alleges that our study is ‘unlikely to replicate in future’ because the authors of that preprint did not agree with some of the statistical analyses in our paper.
Resolution: It is impossible to respond to a reference to an unpublished paper, except to note that this post ‘questions’ yet another of our papers, adding to PubPeer’s list of claims to have identified ‘questionable’ papers from our lab. However, we applaud the bioRxiv preprint’s authors for initiating an open scientific discussion in which we will engage, in contrast to the anonymous PubPeer allegations that are impossible to discuss fairly.
Classification: Uninterpretable accusation
#3 & #4 (February 2025)
Accusation: The data we deposited publicly are not sufficiently annotated and therefore the results cannot be reproduced.
Resolution: The accusation may reflect the commenters’ inability to understand the deposited data, since the critics who alleged that our statistical analysis was wrong (see #1 above) clearly were able to reproduce the results – their criticism was that they disagreed with the statistics.
Classification: Unfounded
#3 & #4 (June 2025)
Accusation: One of the authors of the statistical criticism, whose critique has since been published, publicized it again on PubPeer, as did an anonymous commenter.
Resolution: The previously unpublished criticism of our statistics has now been published in Nature as a notably adversarial “Matters Arising” (https://www.nature.com/articles/s41586-025-08988-y), together with our response explaining why the criticism is misguided (https://www.nature.com/articles/s41586-025-08989-x).
Classification: Unfounded
#7 (September 2025)
Accusation: The comment includes two accusations, namely 1) that the QC described in the methods is inconsistent with the number of genes discussed in the text, raising questions about QC procedures, and 2) that the number of neurons in the text (2,137) is inconsistent with the number of annotations in the data file (2,440).
Resolution: 1) The QC procedures were performed in two stages with analysis-specific thresholds. The thresholds quoted in the Methods (>2,000 genes, >50,000 reads, ERCC <10%, mitochondrial <20%) were applied to the neuronal subset prior to subclustering and differential expression. For the initial all-cell clustering across all BLA cell clusters, we used a more permissive whole-tissue QC (>500 detected genes, >50,000 reads). Of note, this is a typical threshold for clustering all cells in the mouse brain, e.g. filtering out cells with fewer than 250 genes (PMID: 29545511) or with fewer than 500 genes (PMID: 31551601). This whole-tissue QC retained 6,361 cells, so the GEO matrix reflects the post-QC set used for global clustering. We then extracted neurons and applied the stricter neuron-specific QC for downstream analyses because neurons express more genes. We performed integration in Seurat v4 (CCA-based) as described in the Methods. Minor differences from Seurat v5 defaults can shift a small number of borderline cells but do not affect the major annotations.
2) After global clustering, 2,440 cells were annotated as neurons. Before the neuronal subclustering and differential expression analyses, we applied the neuron-specific QC stated in the Methods (>2,000 detected genes and >50,000 reads, excluding cells with ERCC >10% or mitochondrial >20%) to remove low-complexity/low-quality cells. This removed 303 neuronal barcodes, leaving 2,137 neurons for subclustering (the number reported in the main text). We used a higher minimum-genes threshold for neurons because neurons natively show high transcript complexity. This two-stage approach improves the robustness of the neuronal subclusters without removing good-quality glial cells, some of which natively express fewer genes.
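The two-stage filtering logic described above can be sketched in plain Python. This is a simplified illustration only, not the actual Seurat pipeline; the per-cell metric names (`n_genes`, `n_reads`, `pct_ercc`, `pct_mito`) and the toy data are hypothetical, while the thresholds are those quoted in the Methods.

```python
# Illustrative sketch of the two-stage QC (hypothetical metric names and data;
# the real analysis was performed in Seurat v4 as described in the Methods).

def passes_whole_tissue_qc(cell):
    # Permissive stage 1: applied to all cells before global clustering.
    return cell["n_genes"] > 500 and cell["n_reads"] > 50_000

def passes_neuron_qc(cell):
    # Stricter stage 2: applied only to annotated neurons, before
    # subclustering and differential expression analyses.
    return (cell["n_genes"] > 2_000
            and cell["n_reads"] > 50_000
            and cell["pct_ercc"] < 10.0
            and cell["pct_mito"] < 20.0)

# Toy cells: a glial cell with few genes and two neurons of differing quality.
cells = [
    {"n_genes": 600,   "n_reads": 60_000, "pct_ercc": 5.0, "pct_mito": 5.0,  "neuron": False},
    {"n_genes": 3_000, "n_reads": 80_000, "pct_ercc": 2.0, "pct_mito": 8.0,  "neuron": True},
    {"n_genes": 1_200, "n_reads": 70_000, "pct_ercc": 3.0, "pct_mito": 12.0, "neuron": True},
]

stage1  = [c for c in cells if passes_whole_tissue_qc(c)]   # global clustering set
neurons = [c for c in stage1 if c["neuron"]]                # annotated neurons
stage2  = [c for c in neurons if passes_neuron_qc(c)]       # neurons used downstream
```

Note how the low-complexity glial cell survives stage 1 (and so contributes to global clustering), while the low-complexity neuron is dropped only at stage 2, mirroring the 2,440 vs. 2,137 neuron counts discussed above.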
Classification: Unfounded
***
41. Paper: Gallardo, G., Schlüter, O.M., and Südhof, T.C. (2008) A molecular pathway of neurodegeneration linking a-synuclein to ApoE and Ab-peptides. Nature Neurosci. 11, 301-308.
PubPeer Weblink: https://pubpeer.com/publications/02B530E2D617B034C0E76407D16B99
#1 & #3 (July 2024)
Accusation: A blot is duplicated in Figures 2C and 6C.
Resolution: High-magnification images show that the two blots exhibit distinct backgrounds, demonstrating that they are different. Dr. Gallardo documented this on PubPeer. This is a recurring type of false accusation, as described above: blots are ‘expectedly similar’, not ‘unexpectedly similar’, if the samples are run on the same gel apparatuses and the bands are of similar sizes.
Classification: unfounded
#4 (July 2024)
Accusation: The error bars are missing in Figure 4a.
Resolution: The error bars are not missing. The SEMs were simply too small to be visible in the graph.
Classification: unfounded
***
40. Paper: Shin, O.-H., Rizo, J., and Südhof, T.C. (2002) Synaptotagmin function in dense core vesicle exocytosis studied in cracked PC12 cells. Nature Neurosci. 5, 649-656.
PubPeer Weblink: https://pubpeer.com/publications/9BAEF55F4945697536FB528482F5B7
Date of first mistake allegation: May 2024
#1 to #6 (May 2024)
Accusation: The accusers allege that among the 70 different blots shown in this paper, bands on 8 blots look ‘surprisingly similar’, implying that the images were manipulated. In total, pairwise comparisons yielded 9 alleged ‘surprising’ similarities among 556 bands (>300,000 comparisons). All allegedly duplicated bands are adjacent on the same blots. Thus, the allegation here is not that blots were duplicated but that a fraction of adjacent bands were duplicated.
Resolution: Since the accusers somehow convinced the ‘ethics’ department of Nature Neuroscience that their allegations are worthy of an investigation, and since PubPeer does not enable a scientific discussion, we respond here in detail.
The blots reported in this paper were produced by ECL amplification of immunoreactive bands that were visualized on film and scanned or photographed (not sure which since the blots were obtained 25 years ago). Photographs of the blots were then pasted manually onto a paper, annotated with Letraset, and again photographed as a figure for a paper. The original raw data were archived as required by law at UT Southwestern when I left that institution in 2008.
High-magnification images of 3 examples of blots alleged to represent duplications reveal four interesting features:
- The bands are indeed similar – only 9 such cases exist out of >300,000 comparisons.
- The 9 similar bands are ALWAYS next to each other.
- The background surrounding the similar bands contains many non-identical features (yellow circles) and is thus clearly different, i.e. not duplicated.
- There is no sign of band ‘splicing’ or of any other manipulations of the images even though other bands in the blots are clearly spliced. These spliced bands were transparently moved from the right to the left of the blots for illustrative purposes at a less fundamentalist time when such ‘splicing’ was standard acceptable practice.
The fact that there are no indications of band splicing or other physical or digital manipulations is important given that these experiments were performed decades ago with non-digital methods. At that time, no AI-driven image reconstructions were available to seamlessly insert bands of proteins into different backgrounds. The Adobe Photoshop tool that would allow this type of changes to be introduced only became available in 2002 (https://v4development.com/blog/our-blog-post-four-36). The fact that other bands were transparently ‘spliced’ shows that such splicing is easily detectable.
How can we explain the fact that 9 out of 556 bands are similar although the images are overall non-identical and there is no sign of image manipulation? There are two plausible explanations. First, this may be an accidental similarity in <2% of the bands of similar samples run on the same gels. Second, the image-processing software used during scanning of the blots, several rounds of image compression and expansion, or publishing may have introduced duplications of internal image components when the software perceived these components to be the same, even if they are not. Such artifacts have explained other alleged image manipulations on PubPeer. Digitization and compression algorithms, especially older ones, used ‘pattern matching’ and economize by simply duplicating matched patterns. A similar error was observed in old Xerox copy machines: “The source of the error was a bug in the JBIG2 implementation, which is an image compression standard that makes use of pattern matching to encode identical characters only once. While this provides a high level of compression, it is susceptible to errors in identifying similar characters.” (see https://en.m.wikipedia.org/wiki/Xerox#Character_substitution_bug). This has also been documented for immunocytochemistry images in older papers (see https://pubpeer.com/publications/C9E4F18C603C449A0CD32876B719A5, an example where the responsible scientist was lucky to retain decades-old images). We are not image-processing algorithm experts and cannot assess what older image processing, compression, and reproduction software may or may not have done to images, nor do we know why only some bands are ‘pattern matched’ whereas others are not.
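The failure mode of pattern-matching compression described above can be demonstrated with a toy sketch. This is not JBIG2 itself but a deliberately simplified hypothetical compressor: any block within a small distance of an already-stored block is encoded as a reference to that block, so two similar but non-identical inputs come back as exact duplicates after a compression round trip.

```python
# Toy illustration of JBIG2-style pattern matching (NOT the actual JBIG2
# algorithm): a lossy scheme that reuses a stored representative whenever a
# new block is "similar enough", thereby creating exact duplicates on output.

def hamming(a, b):
    """Number of differing pixels between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def compress(blocks, threshold=2):
    """Encode each block as an index into a dictionary of representatives."""
    dictionary, indices = [], []
    for block in blocks:
        for i, rep in enumerate(dictionary):
            if hamming(block, rep) <= threshold:   # close enough: reuse it
                indices.append(i)
                break
        else:
            dictionary.append(block)               # store a new representative
            indices.append(len(dictionary) - 1)
    return dictionary, indices

def decompress(dictionary, indices):
    return [dictionary[i] for i in indices]

# Two similar but NON-identical "bands" (1 pixel apart)...
band_a = "0011110000111100"
band_b = "0011110000111000"
dictionary, indices = compress([band_a, band_b])
restored = decompress(dictionary, indices)
# ...come back as EXACT duplicates: restored[0] == restored[1],
# even though band_a != band_b in the original data.
```

The point of the sketch is only that such duplication requires no intent by the author: it is a property of the encoder, triggered precisely when two genuine signals happen to look alike.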
Classification: unfounded
#8, #9 & #11 (July 2024)
Accusation: Various commenters claim that it is impossible that the isolated band similarities are due to a pattern matching algorithm artefact or a similar computational error in image processing downstream of the actual data production.
Resolution: We are not sure whether the commenters have the computer science background to make such a definitive judgement about image processing and reproduction software in use 25 years ago, but we certainly don’t have that expertise. Moreover, we are disturbed that we are thought guilty if we cannot prove our innocence. The commenters/accusers do not actually provide an explanation for the small fraction of similar adjacent bands or why the bands are similar yet clearly different. Instead, the commenters demand that we explain the similarities and not the differences. If we can’t, we are damned.
Furthermore, the commenters claimed that the images could have been digitally manipulated at the time of the paper’s submission 25 years ago because, according to them, suitable software was available at that time; but at least the corresponding Adobe Photoshop tool was not. Moreover, at the time this paper was submitted, we did not use digital procedures to process and submit figures. Figures were submitted in physical paper form; Nature Neuroscience did not have an online submission system until 2002. Figures were simply mounted on a paper and photographed, and the photographs were submitted.
Classification: unfounded
#12-14 (September 2024)
Accusation: Additional commenters now claim to detect signs of splicing in one of the allegedly manipulated bands as a sharp edge.
Resolution: Our accusers here allege band splicing because of a sharp edge at some of the bands reproduced at low resolution with large pixels. The accusation does not mention that in this paper we transparently spliced together many bands (see example below; arrows identify obvious lane splicing without any attempt at deception, box shows the bands criticized by our accusers):
At the time of this paper’s publication, band splicing was a common acceptable practice and we made no attempt to hide it. Band splicing was done manually by cutting out bands and pasting them on paper (there was no electronic submission or electronic figure preparation). In some panels, all lanes are ‘spliced’. The allegation that a postdoc openly spliced bands together at one position in a figure but tried to hide the splicing at another position beggars belief.
Of course our accusers will now say that they don’t speculate on motivation and that they only state facts but that is not true. PubPeer commenters select features of a figure to sow doubts about it but leave out other features that are relevant and state speculations as facts without an assessment of the science. Throughout the hundreds of accusations against our lab, our accusers always claim that they can judge which image processing and image reproduction software artifacts are ‘suspicious’ without real evidence – they have always been proven wrong when old data were actually available.
Classification: unfounded
***
39. Paper: Hosaka, M., Hammer, R.E., and Südhof, T.C. (1999) A phospho-switch controls the dynamic association of synapsins with synaptic vesicles. Neuron 24, 377-387.
PubPeer Weblink: https://pubpeer.com/publications/3A1A9BCBA901F9A66ED727A7912877
Date of first mistake allegation: June 2024
#1 (June 2024)
Accusation: The immunoblots for Synaptogyrin and Rab3a in Figure 8 look ‘remarkably similar’, implying that they are duplicated.
Resolution: This is a typical A.I.-driven erroneous identification of a non-existent blot duplication. The two blots are expected to be ‘remarkably similar’ given that Synaptogyrin and Rab3a are both synaptic vesicle proteins that behave the same way biologically in this particular experiment, and that the immunoblots are for proteins of similar but distinct molecular weights (~35 kDa vs. ~25 kDa) that were analyzed on the same gel and probed on the same blot, which was cut into strips. As a result, the bands exhibit the same relative changes and display the same blotting artifacts. Note also that this blot derives from the predigital era, when copy-paste mistakes were very unusual.
Classification: unfounded
#2
Accusation: It is improper for us to thank a lab member for an initial technical contribution to the project in the Acknowledgement section of the paper.
Resolution: We do not believe that thanking a lab member in an Acknowledgement section is inappropriate.
Classification: unfounded
#4, #6, #8, #10, #12 & #14 (July 2024)
Accusation: Further accusations alleging, without new information, that the two blots are the same and that there is a suspicious duplication.
Resolution: To avoid the real possibility that a journal, Stanford, or the NIH takes these fraud allegations seriously, I will describe in detail these experiments, which were performed nearly 30 years ago. The blots reported in this paper were produced by ECL amplification of immunoreactive bands that were visualized on film and then scanned or photographed. The scanned or photographed images were then manually pasted into an illustration on paper, labeled using Letraset, photographed, and submitted for publication as a photograph, since there was no digital submission system. The original raw data were archived as required by law at UT Southwestern when I left that institution in 2008.
Let us look at this in more detail. The entire figure is reproduced here, with the allegedly duplicated blot strips marked by a red rectangle:
The bands indeed look similar, but as we have shown experimentally above, similar samples run on the same gel apparatus and having a similar molecular weight produce indistinguishable blots. Note that the blots are redundant, since three different vesicle proteins are immunoblotted (Synaptotagmin, Synaptogyrin, and Rab3a). A high-magnification image of a section of the blot is shown below to illustrate differences between bands:
The similarities between bands would be expected from the experiment (see explanation above); in fact, it would be worrisome if they were absent. However, there are significant differences showing that the blots are NOT identical. These subtle differences, together with the fact that the bands exhibit distinct thicknesses, as would again be expected from slightly different antibody affinities, show that the blots are different.
Classification: unfounded
#16 (July 2024)
Accusation: The ‘moderator’ (who may well be identical to the major accusers of our lab) states “Dr Südhof is obviously unwilling and/or unable to address the issues. However unsatisfactory that conclusion may be, we may limit fruitless, repetitive discussion.”, basically claiming that our responses do not address the issues.
Resolution: We feel this comment exposes the nature of PubPeer: even the ‘moderator’ is partisan. No wonder our comments get censored. PubPeer reveals itself as an anti-science, ‘X’-like platform with opaque funding and nontransparent backers.
Classification: unfounded
***
38. Paper: Pang, Z.P., Sun, J., Rizo, J., Maximov, A., and Südhof, T.C. (2006) Genetic Analysis of Synaptotagmin 2 in Spontaneous and Ca2+-Triggered Neurotransmitter Release. EMBO J. 25, 2039-2050.
PubPeer Weblink: Not yet publicized by PubPeer but posted on the first author’s website
Date of first mistake identification/allegation: May 2024
#1
Mistake identified: Supplementary Figure S1 contains a single blot image duplication involving a control condition.
Resolution: The duplication affects one of 139 display items (error rate <1%). The journal has been contacted and the corrected blot has been submitted.
Classification: Minor error identified by lab
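The error-rate figures quoted in this entry and in several later entries follow from simple arithmetic. A minimal check, using the error and display-item counts stated in the respective resolutions (our reading of those entries), can be sketched as:

```python
# Minimal check of the error-rate claims quoted in this document; the
# (errors, display items, claimed bound) triples are taken from the
# corresponding Resolution entries.
cases = [
    (1, 139, 0.01),   # EMBO J. 2006: 1 of 139 display items, <1%
    (2, 261, 0.01),   # PNAS 2018: 2 errors among 261 display items, <1.0%
    (2, 602, 0.005),  # J. Neurosci. 2012: 2 errors among 602 items, <0.5%
]
rates = [errors / items for errors, items, _ in cases]
checks = [rate < bound for rate, (_, _, bound) in zip(rates, cases)]
print(checks)  # every claimed bound holds: [True, True, True]
```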
***
37. Paper: Shin, O.-H., Lu, J., Rhee, J.-S., Tomchick, D.R., Pang, Z.P., Wojcik, S., Camacho-Perez, M., Brose, N., Machius, M., Rizo, J., Rosenmund, C., and Südhof, T.C. (2010) Munc13 C2B-domain – an activity-dependent Ca2+-regulator of synaptic exocytosis. Nature Struct. Mol. Biol. 17, 280-288.
PubPeer Weblink: https://pubpeer.com/publications/CF9BF3F6AF7EACB2FB3C0581A8AB76
Date of mistake identification/allegation: May 2024
#1 and following comments (May 2024)
Accusation: The immunoblot strips in supplementary Figure S6b (reproduced digitally at low resolution by the journal from a non-digital original blot) contain tiny areas of micro-duplications in the background pattern (not in the actual signal) that are alleged to be manipulated. This accusation is also discussed in detail above as a case study.
Background information on the figure: Figure S6B represents a negative control. The experiment shows immunoblots of pulldowns with WT and mutant C2B-domains of Munc13-2. Positive controls are shown in the left lane and the empty test lanes in the right lanes. The 6 different immunoblots in Figure S6B control for each other in that all six independently support the same conclusion, namely that the C2B-domain does not bind to synaptic proteins as a function of calcium, as is also independently tested in Figure S6A.
Resolution: We initially considered this a bizarre accusation since, in our view, these microduplications make no sense as anything other than a digital artifact. However, NSMB has taken the accusation seriously, similar accusations have since been made for other old blots and images from our lab, and the accusation has been widely publicized on social media.
The accusation targets small areas in two blot strips from the pre-digital era in a supplementary figure panel containing 7 such strips. If these micro-areas had truly been intentionally duplicated, rather than created by an image-processing artifact in a non-digitally acquired blot that was reproduced at low resolution and compressed and expanded multiple times, a person could have achieved the same goal much more easily by re-running the gel or duplicating larger areas. Moreover, we could show here [link to Burre paper analysis], in another example for which original-resolution images were available, that such microduplications DO OCCUR as artifacts during image processing, compression, and reproduction.
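The general point that identical micro-areas can arise without any manual duplication can be illustrated with a toy simulation (all values here are assumptions for illustration only, not the actual figure data): coarse quantization during lossy reproduction can render distinct patches of a flat, noisy background byte-identical.

```python
import numpy as np

# Toy illustration (assumed values, not the actual blot data): a flat
# "background" region with mild sensor noise, before and after coarse
# 8-bit quantization standing in for lossy reproduction.
rng = np.random.default_rng(1)
bg = rng.normal(128.0, 6.0, (64, 64))

def duplicate_pairs(a, k=3):
    """Count pairs of byte-identical, non-overlapping k x k blocks."""
    counts = {}
    for i in range(0, a.shape[0] - k + 1, k):
        for j in range(0, a.shape[1] - k + 1, k):
            key = a[i:i + k, j:j + k].tobytes()
            counts[key] = counts.get(key, 0) + 1
    return sum(n * (n - 1) // 2 for n in counts.values())

raw_dups = duplicate_pairs(bg)                     # continuous values: no matches
quant = (np.round(bg / 16) * 16).astype(np.uint8)  # coarse quantization step
quant_dups = duplicate_pairs(quant)                # many byte-identical patches
```

With continuous-valued noise, byte-identical blocks essentially never occur; after coarse quantization, many patches of a flat background collapse onto the same byte pattern. Identical micro-areas alone therefore do not prove manual duplication.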
Nevertheless, we have analyzed these duplications in detail as described above as a case study and repeated below. First, we here show a higher resolution image of the area with duplications. In this image the background is greatly enhanced to enable better views of the duplications:
The upper blot strip contains a rectangular area in the middle (1’) that exhibits a bizarre pattern of duplications: a full duplication on the right (1’’), a partial duplication (lower 25%) placed at the upper middle and right edges (2’ & 2’’), and a smaller partial duplication (lower right corner ~10%) on the middle upper edge (3). The lower blot strip contains three different background areas that are each duplicated once (1’-3’’). The thin blue lines demarcate the boundaries of the lanes on the gels.
Let us assume that these duplications are not artifacts caused by image processing, compression, and reproduction (even though the published images went through multiple cycles of such procedures), but that they represent intentional manipulations. If so, such manipulations could only have had two purposes: to change the result (i.e., falsify the data) or to ‘beautify’ the images because there may have been gel smudges.
Ruling out data falsification. Remarkably, the duplicated background areas do not cover up lanes of the gels, as would be expected if someone had intended to conceal an unwanted band without simply re-running the gel with nothing loaded (an undetectable approach to achieving a negative signal). Comparison of the lanes on the right containing the duplicated microareas with the positive control on the left shows that the duplicated microareas could not have covered an unwanted band; such a band could only have been covered by duplicating a larger area of background. The following image shows an expansion of the area containing duplications with enhanced contrast, demonstrating that there are non-duplicated sections of the blots between the duplicated areas. The blue asterisks indicate the positions where one would expect a positive blot signal based on the positions of the lanes and positive controls:
The absence of any increase in signal in the non-duplicated areas shows that the duplicated areas did not serve to cover up a band, and thus that no data were falsified.
To illustrate this point in a different way, we here show what the duplications would look like if transferred onto a blot containing a signal (the lower strip in the figure with the alleged duplication):
Clearly the microduplications do not cover up bands, excluding data falsification.
Intentional data beautification as a cause is improbable. Our accusers could argue that there were many smudges on the two blots containing background duplications but none on the other blots, and that the experimenter may have been tempted to ‘simply’ cover up these areas instead of the more straightforward act of re-running the gel, which would have been easy. Several arguments make this accusation improbable:
1. There is no evidence of smudges in any of the blots. Given that not a single duplication but a cluster of 7 is observed, there would have had to be many smudges concentrated in small areas of the gels but none elsewhere.
2. The most straightforward way of covering up smudges would be copy-pasting the duplicated areas, but such a procedure creates border effects in the pixels, which are absent from the image, as shown below:
There are more sophisticated, less detectable maneuvers to duplicate an image area, and the absence of edge effects does not by itself completely rule out image manipulation. However, if one argues that a perpetrator was too lazy to re-run a gel in order to fix smudges, it seems incredible that the same perpetrator would then expend enormous effort to create a perfect duplication of an image area. A perpetrator who expected someone to look for edge effects would also know that the duplications could be detected, and would simply have re-run the gel with empty samples. Moreover, a perpetrator could have duplicated a bigger section from another blot, but, tellingly, all duplications here are within the same blot.
3. The image area duplications in the two blot strips are different in nature. In the upper strip, the same area was duplicated multiple times in part or in full, whereas in the lower strip, three different areas were duplicated. The two distinct patterns of duplications are more consistent with an artifact than with a human actor.
4. This is a supplementary blot of little esthetic value that few people would ever see – why correct what would have to be tiny smudges based on the sizes of the microduplications?
5. A final argument against the accusation that the image-area duplications represent efforts to cover up ‘smudges’ on a gel is the way these images were created: an ECL blot was photographed, the blot strips were cut from the photographs with scissors and pasted onto paper with Letraset labeling, and the physically constructed figure was photographed again. It is not credible that a photograph with a smudge would first be pasted onto paper and only later manipulated; why not simply re-run the gel?
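The edge-effect argument in point 2 can be illustrated with a toy simulation (synthetic values assumed for illustration, not the actual blot data): a naive copy-paste into a smoothly varying image leaves a pixel step at the patch border that is much larger than the steps inside the patch.

```python
import numpy as np

# Toy simulation (all values are assumptions, not real blot data):
# a smooth background with mild noise stands in for an image, and a
# patch is bluntly copy-pasted to a new location.
rng = np.random.default_rng(0)
ramp = np.add.outer(np.arange(64.0), np.arange(64.0))  # smooth background
img = ramp + rng.normal(0.0, 0.5, (64, 64))            # mild sensor noise

pasted = img.copy()
pasted[35:55, 35:55] = img[5:25, 5:25]                 # crude copy-paste

# Mean pixel step across the top border of the pasted patch,
# compared with the step between two rows just inside the patch.
seam = np.abs(pasted[35, 35:55] - pasted[34, 35:55]).mean()
interior = np.abs(pasted[36, 35:55] - pasted[35, 35:55]).mean()
```

In real forensic practice the same idea is applied with gradient or noise-residual maps; the point here is only that a blunt copy-paste produces a border discontinuity that simple pixel statistics expose.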
To summarize, the most likely explanation for the areas of background duplication, as for many of the ‘mistakes’ identified by A.I.-powered software, is that the random microduplications discussed here are simply a reproduction artifact of a digitized image derived from a photograph. Indeed, for the paper discussed in #10, we could demonstrate such artifacts because the original data were still available after more than 15 years [link to Burre paper analysis].
Classification: unfounded
***
36. Paper: Maximov, A., Shin, O.-H., and Südhof, T.C. (2007) Synaptotagmin-12, a synaptic vesicle phosphoprotein that modulates spontaneous neurotransmitter release. J. Cell Biol. 176, 113-124.
PubPeer Weblink: https://pubpeer.com/publications/1623AD50C70F6D92C1BD05B69EEED3
Date of mistake identification/allegation: May 2024
#1 & #2 (May 2024)
Mistake identified: A control electrophysiology trace may be duplicated.
Resolution: We will contact UT Southwestern, which by law retained all original data, to determine whether this is a duplication or simply a similar trace.
Classification: Likely minor error or a serendipitously similar trace
#4 (August 2024)
Mistake identified: The accuser demands to see the original traces
Resolution: As mentioned above, the experiments were performed 20 years ago at UT Southwestern, which retained all original data as required by law. These data are property of UT Southwestern and no longer under our control.
Classification: Irrelevant comment
***
35. Paper: Gokce, O., and Südhof, T.C. (2013) Membrane-Tethered Monomeric Neurexin LNS-Domain Triggers Synapse Formation. J. Neurosci. 33, 14617-14628. PMCID: PMC3761060
PubPeer Weblink: https://pubpeer.com/publications/DB2D4970F91BF62729727D5DE39975
Date of mistake identification/allegation: We identified and corrected this mistake with the journal; Dr. Bik afterwards repeated our own identification of it without new information
#1 (May 2024)
Mistake identified: The paper contains a copy-paste mistake
Resolution: No resolution necessary. A correction has been published
Classification: Minor mistake uncovered by the lab
***
34. Paper: Lee, K., Kim, Y., Lee, S.-J., Qiang, Y., Lee, D., Woo Lee, H., Kim, H., Je, H.S., Südhof, T.C., and Ko, J. (2013) MDGAs selectively interact with neuroligin-2 but not other neuroligins to regulate inhibitory synapse development. Proc. Natl. Acad. Sci. U.S.A. 110, 336-341.
PubPeer Weblink: Not yet publicized by PubPeer but PNAS published a Correction
#1 (May 2024)
Mistake identified: Image duplication in Suppl. Figure 2B
Resolution: A correction has been published by the journal
Classification: Minor mistake by another lab and uncovered by the lab
***
33. Paper: Wöhr, M., Fong WM, Janas JA, Mall M, Thome C, Vangipuram M, Meng L, Südhof, T.C., and Wernig, M. (2022) Myt1l haploinsufficiency leads to obesity and multifaceted behavioral alterations in mice. Mol. Autism, 13, 19. PMCID: PMC9087967
PubPeer Weblink: https://pubpeer.com/publications/6AEE2ED764E4EA67E352B62B92D882
#1 (April 2024)
Accusation: Image duplication in Supplementary/Additional File 2
Resolution: The accusation is likely correct but the data are from a collaborating lab
Classification: Likely minor mistake from another lab
***
32. Paper: Ko, J., Soler-Llavina, G.J., Fuccillo, M.V., Malenka, R.C., and Südhof, T.C. (2011) Neuroligins/LRRTMs prevent activity- and Ca2+/calmodulin-dependent synapse elimination in cultured neurons. J. Cell Biol. 194, 323-334.
PubPeer Weblink: https://pubpeer.com/publications/6A24D71A02F213024FBA3452BD89DA
#1 (April 2024)
Mistake identified: Supplementary Figure 3 contains a duplicated image
Resolution: We self-reported the error
Classification: Minor mistake identified by us
#2 & 4 (July 2024)
Accusation: A professional PubPeer commenter amplifies the error that we ourselves reported
Resolution: We self-reported the error and no additional amplification is needed
Classification: Irrelevant
#6 (May 2025)
Accusation: Dr. Bik returns to our self-reported error and demands a correction
Resolution: We have contacted the journal although we feel that the error is too minor to warrant a correction
Classification: Irrelevant
***
31. Paper: Ko, J., Fuccillo, M., Malenka, R.C., and Südhof, T.C. (2009) LRRTM2 Functions as a Neurexin Ligand in Promoting Excitatory Synapse Formation. Neuron 64, 791-798.
PubPeer Weblink: https://pubpeer.com/publications/13DFE3B5D880F57C51C7ACB03CFC6C
#1 (April 2024)
Mistake identified: Supplementary Figure 3 contains a duplicated image
Resolution: We reported the error
Classification: Minor mistake identified by us
#2 (April 2024)
Accusation: Supplementary Figure 3 contains a second duplicated image
Resolution: We missed this mistake when we reported the first error but confirm that the second mistake exists in the same figure
Classification: Minor mistake
#4 (July 2024)
Accusation: Repeat of accusation #2 three months after it was resolved
Resolution: No need for additional resolutions
Classification: Irrelevant comment
#6 (May 2025)
Accusation: A year after her first comment, Dr. Bik identified another partial image duplication that she apparently missed a year ago
Resolution: Dr. Bik is correct that another image duplication was missed by us and everybody else, but this comment raises the question of whether she observed it only now or kept the finding ‘in reserve’ to be able to make another criticism later on.
Classification: Minor mistake
***
30. Paper: Shimojo, M., Madara, J., Pankow, S., Liu, X., Yates, J. 3rd, Südhof, T.C., and Maximov, A. (2019) Synaptotagmin-11 mediates a vesicle trafficking pathway that is essential for development and synaptic plasticity. Genes and Dev. 33, 365-376.
PubPeer Weblink: https://pubpeer.com/publications/2999ACA0C61FDB9B3F71E366688A37
#1 & #3 (April 2024)
Accusation: In Figure 1D some immunoblot sections containing no signal may be duplicated.
Resolution: The figure and data were not from the Südhof lab, but the senior author responded on PubPeer to explain that this allegation is unfounded
Classification: Unfounded
#2 (April 2024)
Accusation: Dr. Bik claims that since I am the only author from my lab on this paper, I must be an honorary author, which would be improper
Resolution: My affiliation on the paper was listed as UT Southwestern, as was that of the senior author Dr. Maximov. Dr. Maximov initiated the project in my lab at UT Southwestern and took the project and the reagents he generated with him in 2007 to his own lab, where he completed the project.
Classification: Unfounded
#4 (April 2024)
Accusation: The blots shown in Supplementary Figure 1 are overexposed
Resolution: This is a question of taste. These blots were from the pre-digital age when we often preferred long exposures with ECL visualizations to identify possible minor signals.
Classification: Unfounded
***
29. Paper: Sando, R., Jiang, X., and Südhof, T.C. (2019) Latrophilin GPCRs direct synapse specificity by coincident binding of FLRTs and teneurins. Science 363, pii: eaav7969.
PubPeer Weblink: https://pubpeer.com/publications/3782092ABD5E5AC83CBB9F899C0D59
#1, #2, #4 & #6 (April 2024)
Accusation: Figure 3B contains image duplications.
Resolution: The image duplications produced by a copy-paste mistake were undetectable without specialized software but have now been corrected. They are among 401 total display items (<0.5% error rate).
Classification: Minor mistake
***
28. Paper: Eichel, K., Uenaka, T., Belapurkar, V., Lu, R., Cheng, S., Pak, J.S., Taylor, C.A., Südhof T.C., Malenka, R., Wernig, M., Özkan, E., Perrais, D., and Shen, K. (2022) Endocytosis in the axon initial segment maintains neuronal polarity. Nature 609, 128-135.
PubPeer Weblink: https://pubpeer.com/publications/5423F032DB2E73F1ACD449E4B5BA96
#1 & #2 (April 2024)
Accusation: Extended Data Figure 5a contains a blot image duplication
Resolution: The allegations are likely correct, but the data are not from our lab
Classification: Minor mistake in a collaborator’s paper
#5 (May 2025)
Accusation: Dr. Bik publicizes the correction that Dr. Shen made
Resolution: Gratuitous mention of the correction, publicized in parallel on X
Classification: Irrelevant
***
27. Paper: Chen, L.Y., Jiang, M., Zhang, B., Gokce, O., and Südhof, T.C. (2017) Conditional Deletion of All Neurexins Defines Diversity of Essential Synaptic Organizer Functions for Neurexins. Neuron 94, 611-625.
PubPeer Weblink: A copy-paste mistake was initially discovered by our lab, but was later amplified by PubPeer posts at https://pubpeer.com/publications/F7C42C356B2E7049FDB68A434EF4F8.
#1 & #2 (April 2024)
Mistake identified: The paper’s first author, Dr. L.Y. Chen, reported that she inadvertently and incorrectly copy-pasted an image in Figure 2D into Figure S3A, creating an image duplication
Resolution: Dr. Chen reported this mistake on PubPeer as soon as we discovered it; a correction was filed.
Classification: Minor mistake identified by us.
#3-67 (April to July 2024)
Accusation: In extensive, repetitive PubPeer posts, image manipulations were alleged. We were then able, with the help of external experts (Drs. Matthew Schrag and Pinar Avci), to distill the various accusations into the identification of three image abnormalities.
Abnormality 1: The duplication of the image from Figure 2D into Figure S3A that we originally identified was shown to involve numerous additional microduplications and changes in the green signal intensity in the duplicated image of Figure S3A.
Abnormality 2: The blue channel of Figure 1D, but no other channel, was shown to also contain numerous microduplications.
Abnormality 3: Figure S4B was shown to contain numerous microduplications, most of which affect only the green channel but some of which affect all three channels.
Resolution: All of the affected figures concern results from a single type of experiment in this paper, imaging of the cerebellum after inferior olive injections. For these experiments, we could no longer locate the original samples or the raw imaging files from 8 years ago. Thus, we could not prove that the numerous microduplications and other changes in these three images are artifacts. We therefore retracted the paper with the following text “We, the authors of this publication, have decided to retract the paper because we found that the images in Figure 1D and Figure S4B contain aberrations that cannot be explained, and the original data for these figures are missing. Raw data for the other components of the paper are available, and their reanalysis confirmed the conclusions of the paper. We would like to thank M. Schrag for bringing these image aberrations to our attention.”
The question now arises whether the image aberrations in Figures 1D, S3A, and S4B are data manipulations as alleged or artifacts. Microduplications caused by artifacts are common (https://pubpeer.com/publications/C9E4F18C603C449A0CD32876B719A5 and https://pubpeer.com/publications/F647CE4587D81D76F67C6B8A2F7B0C). Thus, we examined the impact of the identified image aberrations on the paper.
Impact of Abnormality 1: The duplication of Figure 2D into Figure S3A is clearly an error that resembles a common copy-paste mistake. However, a copy-paste mistake does not explain why the duplicated image contains aberrations. Strikingly, the observed abnormalities in the Figure S3A image contradict the conclusions of the paper: if they had been intentionally produced after intentionally duplicating an image, such a manipulation would decrease the evidence for the paper’s conclusions instead of increasing it. Moreover, there is no aesthetic value in these image abnormalities. Thus, the aberrations in the duplicated image in Figure S3A weaken rather than strengthen the conclusions of the paper, arguing against an intentional manipulation.
Impact of Abnormality 2: The blue channel, the only channel containing image abnormalities in Figure 1D, is not mentioned in the text, is not quantified, and is virtually invisible to the naked eye in the composite figure. This abnormality could only be detected by A.I.-driven software. Thus, the microduplications in the blue channel of Figure 1D have no impact on the paper. They serve no aesthetic purpose and strengthen no conclusion of the paper, again arguing against an intentional manipulation.
Impact of Abnormality 3: The areas of Figure S4B that contain microduplications are not in the areas that were quantified and thus the microduplications have no impact on the conclusions, although their complex and bizarre nature puzzles us. They would be difficult to introduce by an intentional act, which together with the lack of benefit for the paper also argues against an intentional manipulation.
In summary, although we cannot definitively establish the origin of the three image abnormalities in this paper, they affect only a tiny fraction of the data in the paper, have no impact on its conclusions, and could represent artifacts caused by the image acquisition, processing and reproduction procedures.
Classification: Founded insofar as the copy-paste mistake in Figure S3A that we identified is an error; unfounded is the accusation that the image artifacts in Figure S3A, Figure 1D, and Figure S4B are intentional.
***
26. Paper: Seigneur, E., Polepalli, J., and Südhof, T.C. (2018) Cbln2 and Cbln4 are expressed in distinct medial habenula-interpeduncular projections and contribute to different behavioral outputs. Proc. Natl. Acad. Sci. U.S.A. 115, E10235-E10244.
PubPeer Weblink: https://pubpeer.com/publications/25B0A673C668139C77CEAC19A99B11
#1 (April 2024)
Mistake identified: We discovered and reported two image duplications in Figure S4
Resolution: Posted by us to preempt public shaming. The two copy-paste errors in Figure S4, a figure containing 48 images, represent a <1.0% error rate among a total of 261 display items.
Classification: Minor mistake identified by us.
#2 (July 2024)
Accusation: Our attempt to preempt a social media accusation over this copy-paste mistake failed: the same mistake was reported on PubPeer again 3 months after we reported it
Resolution: None necessary
Classification: Irrelevant comment
***
25. Paper: Mall, M., Kareta, M.S., Chanda, S., Ahlenius, H., Perotti, N., Zhou, B., Grieder, S.D., Ge., X., Drake, S., Ang, D.E., Walker, B.M., Vierbuchen, T., Fuentes, D.R., Brennecke, P., Nitta, K.R., Jolma, A., Steinmetz, L.M., Taipale, J., Südhof, T.C., and Wernig, M. (2017) A proneuronal transcription factor repressing many non-neuronal fates. Nature 544, 245-249.
PubPeer Weblink: https://pubpeer.com/publications/D4D96DB690DAC5A3D912EE64E92AC3
#1 & #2 (April 2024)
Accusation: Extended Data Figure 5 contains two image duplications.
Resolution: The allegation is correct but concerns data that are not from the Südhof lab. The postdoc involved from the Wernig lab has initiated a Correction with the journal.
Classification: Minor mistake by another lab
***
24. Paper: Bacaj, T., Ahmad, M., Jurado, S., Malenka, R.C., and Südhof, T.C. (2015) Synaptic Function of Rab11Fip5: Selective Requirement for Hippocampal Long-Term Depression. J. Neurosci. 35, 7460-7474.
PubPeer Weblink: https://pubpeer.com/publications/52D37ED6C4D16682694521FA04B2BA
#1, 2, 4 & 6 (April 2024)
Accusation: In Figure 3D the immunoblotting bands for Cpx and Sph are alleged to look ‘unexpectedly similar’, implying they are duplicated.
Resolution: This allegation is based on a common image-analysis mistake. Immunoblotting bands are expected to look similar when similar samples are run on the same apparatus and analyzed by Coomassie staining or blotted with similarly clean secondary antibodies, as described above.
Classification: Unfounded
***
23. Paper: Golf, S.R., Trotter, J.H., Nakahara, G., and Südhof, T.C. (2023) Astrocytic Neuroligins Are Not Required for Synapse Formation or a Normal Astrocyte Cytoarchitecture. bioRxiv 10:2023.04.10.536254. doi: 10.1101/2023.04.10.536254. Preprint.
PubPeer Weblink: https://pubpeer.com/publications/98784D9AF9B1E8B5B1818E516B5001
#1 (March 2024)
Accusation: One control panel among the 70 blots of Figure 3 is duplicated.
Resolution: There are two problems with this accusation. First, this is not a published paper but a preprint posted on bioRxiv to elicit comments. The accuser chose not to comment on bioRxiv but to level an accusation on PubPeer, which is typical of PubPeer’s anti-scientific agenda. Second, blot duplications are often misidentified because similar samples, when run on the same gel apparatus at the same relative positions in the gel, produce the same blotting artifacts; this does not mean they are duplicated. We have resubmitted the paper for publication and have made all raw data available.
Classification: Unfounded
#3 & #4 (April 2024)
Accusation: The accusers repeat the allegation of #1.
Resolution: Same response as for the first allegation
Classification: Unfounded
#6 & #7 (April 2024)
Accusation: The accusers demand to see the unpublished raw data.
Resolution: We consider the demand for unpublished raw data inappropriate
Classification: Unfounded
***
22. Paper: Seigneur, E., and Südhof, T. C. (2018) Genetic ablation of all cerebellins reveals synapse organizer functions in multiple regions throughout the brain. J. Neurosci. 38, 4774-4790.
PubPeer Weblink: https://pubpeer.com/publications/D8EAA6F915EE4008B654738F66ABE6
#1-#3 (March 2024)
Accusation: A blot among the 108 images in Figure 2 and an image among the 82 images in Figure 3 are duplicated (total error rate <1.0% among 310 display items).
Resolution: The allegation is correct. Once the wrong blots and images had been copy-pasted, we were unable to detect the error without A.I.-driven software. Original data are now posted on PubPeer.
Classification: Minor mistake
#8 (April 2024)
Accusation: The accusers demand a journal correction.
Resolution: A journal correction has already been made.
Classification: Irrelevant
#9 & #11, 12, 16 & 19 (April 2024)
Accusation: The accusers repeatedly claim that the fact that 25 postdocs in the Südhof lab over 15 years committed individual copy-paste errors suggests systemic fraud in the lab. Moreover, the accusers assert that they have no conflict of interest.
Resolution: No resolution is possible, although the blanket condemnation of our lab by the same group of people seems very personal. Accusing 25 postdocs of fraud for isolated copy-paste mistakes in a tiny fraction of their data appears hateful. Note that the raw data for this paper have already been posted previously (https://purl.stanford.edu/cc564dr1376).
Classification: Irrelevant but quite revealing of the accusers’ motivations
#23 (July 2024)
Accusation: another repetition of the same accusations.
Resolution: No resolution necessary.
Classification: Pointless repetition of a resolved accusation
#23 (May 2025)
Accusation: Dr. Bik amplifies her previous accusations after our correction was published to highlight that the original paper contained errors, overstating the number of errors in the process
Resolution: No resolution necessary.
Classification: Pointless repetition of a resolved accusation
***
21. Paper: Wang, J., Miao, Y., Wicklein, R., Sun, Z., Wang, J., Jude, K.M., Fernandes, R.A., Merrill, S.A., Wernig, M., Garcia, K.C., and Südhof, T.C. (2021) RTN4/NoGo-Receptor Binding to BAI Adhesion-GPCRs Regulates Neuronal Development. Cell 184, 5869-5885. PMCID: PMC8620742
PubPeer Weblink: https://pubpeer.com/publications/5813077CE8B5C29E479FD50C259F77
#1 (March 2024)
Accusation: A supplementary figure containing 36 panels of illustrative images in a paper with 373 images and more than 130 graphs contains a single duplication of a control image
Resolution: The accusation is correct
Classification: Minor mistake
#6, 8, 10, 12, 14 & 15 (July 2024)
Accusation: Repetition of the same accusation with ‘animations’
Resolution: Same as above
Classification: No need for a response, except to note that the enthusiasm for repeating allegations shows how PubPeer operates as a huge echo chamber
***
20. Papers: Jiang, X., Sando, R., and Südhof, T.C. (2021) Multiple signaling pathways are essential for synapse formation induced by synaptic adhesion molecules. Proc. Natl. Acad. Sci. U.S.A. 118, e2000173118; Li, J., Xie, Y., Cornelius, S., Jiang, X., Sando, R., Kordon, S., Pan, M., Leon, K., Südhof, T.C., Zhao, M., and Araç, D. (2020) Alternative splicing controls teneurin-latrophilin interaction and synapse specificity by a shape-shifting mechanism. Nature Comm. 11, 2140.
PubPeer Weblink: https://pubpeer.com/publications/027E93962D3C5DB86482283739C67D#10
#9 (March 2024)
Accusation: The two papers cited above use the same control image even though the stated conditions are different, suggesting fraud
Resolution: The same control image was used for the same type of experiment in two different studies performed at the same time. In the Nature Communications paper the control condition is labeled ‘Ctrl’, whereas in the PNAS paper it is labeled ‘NPR-mut’; however, the Methods section clearly states that the ‘Ctrl’ of the Nature Communications paper is the ‘NPR-mut’ used in the PNAS paper, so the conditions are the same. I agree that we should have indicated that the same control condition was used for experiments performed at the same time for two different projects, but this was perfectly proper since the experiments were done in parallel. Five years ago we lived in a less prosecutorial environment, and the lead author did not think of stating this explicitly, which was an oversight.
Classification: Unfounded
***
19. Paper: Burré, J., Sharma, M., and Südhof, T.C. (2012). Systematic Mutagenesis of a-Synuclein Reveals Distinct Sequence Requirements for Physiological and Pathological Activities. J. Neurosci. 32, 15227-15242.
PubPeer Weblink: https://pubpeer.com/publications/0FECC6D2E9498F9876CFCC24D2E03E#8
#1-#3 (December 2023 & March 2024)
Accusations: The blots shown in Figures 5, 6, and 7 are inappropriately assembled from multiple individual blots.
Resolution: These blots combine analyses of 26 samples that cannot be examined on a single gel. Thus, the samples were run in parallel on multiple gel electrophoresis apparatuses and blots; no attempt was made to hide this fact. A decade ago, before the current atmosphere of prosecution, composite blots like those cited by the accusers, assembled from multiple individual blots, were accepted common practice. Nowadays this is labeled deceptive even though there is no attempt to hide it.
Classification: Unfounded
#4 - #6 (March 2024)
Accusations: Presumably based on artificial intelligence searches, the accuser identified image duplications in two sets of panels in Figure 7D and in Figure 9A.
Resolution: The accusation is correct: the paper contains 2 copy-paste errors among 602 display items that were undetectable before A.I. tools became available (error rate <0.5%)
Classification: Minor mistake
#9 (March 2024)
Accusations: The accuser repeats the accusation that the combination of different blots in Figures 5 and 6 is inappropriate because different blots were combined at different positions.
Resolution: We disagree. At the time of this study, 15 years ago, this was considered perfectly acceptable.
Classification: Unfounded
#16 & #18 (April 2024)
Accusations: The accuser alleges a pattern of fraud in my lab because we retracted another paper in which we (not PubPeer) identified an inappropriate analysis of data. The comment implies that Dr. Burré’s isolated errors are also fraud and that more than 25 members of my lab each committed fraud through copy-paste errors in their papers.
Resolution: It is unfortunate that PubPeer has become a forum for people who appear to have an anti-science agenda expressed with implausible suspicions and insinuations. The occurrence of multiple isolated copy-paste errors in our papers that could only be identified using AI-driven software and that likely similarly exist in hundreds of other papers is not an indication that each of the responsible postdocs committed fraud.
Classification: Inappropriate comment.
#19 (July 2024)
Accusations: The accuser provides a belated animation of comment #4, possibly to amplify an accusation that was resolved long ago.
Resolution: No need for further responses.
Classification: Unfounded
***
18. Paper: Lin, P.Y., Chen, L.Y., Jiang, M., Trotter, J.H., Seigneur, E., and Südhof, T.C. (2023) Neurexin-2: An Inhibitory Neurexin That Restricts Excitatory Synapse Formation in the Hippocampus. Sci. Advances 9, eadd8856.
PubPeer Weblink: https://pubpeer.com/publications/C22E0805CB0B55CB7388F488611145
#1 (March 2024)
Accusation: One set of representative images in Figure 4B shows enlarged views of two synapses that cannot be found next to each other in the low-magnification view shown in the paper and thus allegedly represents image manipulation. To illustrate the accusation, the accuser drew new boxes onto the published figure that are not in the paper.
Resolution: As explained in the figure legend, these are representative synapse images taken from the same experiment; they are not adjacent in the low-magnification images shown. The low-magnification image specifically illustrates adjacent synapses, whereas the high-magnification images illustrate particular features of synapses. All raw data were submitted to a public database (https://purl.stanford.edu/nb252dn4150), and we have now posted on PubPeer the entire dSTORM low-magnification image that contains both illustrative synapses. PubPeer is establishing new fundamentalist rules whereby illustrative figures become the main point of a paper instead of illustrations of the experiments for readers.
Classification: Unfounded
#4, #6 & #7
Accusation: These accusations make the same point as #1: The commenters allege that it is unethical that we did not explicitly state in the figure legend that the representative images shown were not adjacent to each other in the section.
Resolution: Journal restrictions on legend length make it impossible to explain every detail, but there was clearly no intent to hide the fact that the representative images are just that: representative images selected from a larger set. Had we wanted to present them as adjacent crops of the overview, we would have indicated this using the customary boxes.
Classification: Unfounded
#10 & #12
Accusation: The commenters criticize that the leak currents in the experiments for Figure 3 are too high and allege that they cannot reproduce the graph in Figure 3B from the raw data that we made publicly available.
Resolution: Leak currents are always high when a high Cl- concentration is used in the internal solution during recordings, as we show above in a detailed analysis. However, the leak currents in some of the original recordings may indeed have been higher than is acceptable even for a high Cl- internal solution. We reanalyzed the data with and without excluding recordings with very high leak currents and obtained the same conclusions as published. We also resolved the accusation that the n's are incorrect and demonstrated that all n's are indeed valid.
Classification: Pending
#13
Accusation: PubPeer is delighted to note that the journal has published an Editorial Expression of Concern based exclusively on PubPeer.
Resolution: Hopefully the journal will give us a chance to explain that the EEoC was unwarranted.
Classification: Pending
***
17. Paper: Wang, S., DeLeon, C., Sun, W., Quake, S.R., Roth, B.L., and Südhof, T.C. (2024) Alternative Splicing of Latrophilin-3 Controls Synapse Formation. Nature 626, 128-135.
PubPeer Weblink: https://pubpeer.com/publications/A04E94FAF81B5D7EC9E6B1668085EA
#1a (February 2024)
Accusation: The p values in Figure 5 must be wrong because they are the same for Exon 31 (E31) and Exon 32 (E32) conditions.
Resolution: The p values in Figure 5 (and throughout the paper) are identical for E31 and E32 within the same comparison groups because splicing of E31 and E32 is mutually exclusive. Each PSI datapoint in the E31 plot therefore always has a corresponding datapoint with the value 100-PSI in the E32 plot, and for the same comparison group (e.g., KCl 0 hr vs. 6 hr) the p value for E31 must equal that for E32.
Classification: Unfounded
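The arithmetic behind this resolution can be sketched with hypothetical PSI numbers (not the paper's actual data): because each E32 value is forced to be 100 minus the corresponding E31 value, the two-sample t statistic has the same magnitude for both exons, so a two-sided test necessarily returns identical p values.

```python
# Sketch with made-up PSI values: mutually exclusive splicing means
# E32 PSI = 100 - E31 PSI, so the two-sided t-test p values must match.
from statistics import mean, variance
from math import sqrt

def welch_t(x, y):
    """Welch's two-sample t statistic."""
    return (mean(x) - mean(y)) / sqrt(variance(x) / len(x) + variance(y) / len(y))

# Hypothetical E31 PSI values in two conditions (e.g., KCl 0 hr vs. 6 hr)
e31_0h = [62.0, 58.5, 64.1, 60.3]
e31_6h = [48.2, 51.7, 46.9, 50.4]

# The corresponding E32 values are forced to 100 - PSI
e32_0h = [100 - v for v in e31_0h]
e32_6h = [100 - v for v in e31_6h]

t_e31 = welch_t(e31_0h, e31_6h)
t_e32 = welch_t(e32_0h, e32_6h)

# Same magnitude of t (opposite sign), hence an identical two-sided p value
print(abs(t_e31), abs(t_e32))
assert abs(abs(t_e31) - abs(t_e32)) < 1e-9
```

The mean difference changes sign under the 100-PSI transformation while the variances are unchanged, which is why the two-sided p values coincide exactly.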
#1b (February 2024)
Accusation: The p values shown in Figure 5 are different from those of the Supplemental Tables and therefore one of them must be wrong.
Resolution: Figure 5 used a t-test as specified in the legend. The supplemental tables use Tukey’s test to calculate pairwise p values after correcting family-wise error as specified again in the legend. In the “Statistics and reproducibility” section we explicitly stated: “Most statistical tests were performed using two-sided t-tests, as indicated. To control for family-wise error during multiple comparisons, two-sided Tukey’s tests were used in parallel and the adjusted P values are summarized in Supplementary Tables 1 and 2, and do not change the conclusions drawn from t-tests in this work.”
Classification: Unfounded
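The general principle here (a family-wise-corrected p value is larger than the raw pairwise p value, often without changing the conclusion) can be sketched numerically. Tukey's test requires the studentized range distribution, so this illustration uses a Bonferroni correction purely as a stdlib stand-in, with made-up numbers and a large-sample normal approximation:

```python
# Sketch: a raw pairwise p value and a family-wise-adjusted p value
# legitimately differ without changing the conclusion. Bonferroni is
# used here only as a simple stand-in for Tukey's test; all numbers
# are illustrative.
from math import sqrt, erf

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def two_sided_p(m1, m2, se):
    """Two-sided p value for a difference of means, normal approximation."""
    z = abs(m1 - m2) / se
    return 2 * (1 - normal_cdf(z))

k = 3                                     # comparisons in the family
raw_p = two_sided_p(50.0, 62.0, se=3.0)   # illustrative pairwise test
adj_p = min(1.0, raw_p * k)               # Bonferroni adjustment

print(raw_p, adj_p)
assert adj_p > raw_p                      # adjusted p is necessarily larger
assert raw_p < 0.05 and adj_p < 0.05      # same conclusion either way
```

A strong effect stays significant under both the raw and the corrected test, which is the situation the "Statistics and reproducibility" section describes.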
#1c (February 2024)
Accusation: Figures 5E and 5F are duplications because they are the same graph rotated 180 degrees.
Resolution: The distributions in Figures 5E and 5F are expected to be precise mirror images of each other because splicing of E31 and E32 is mutually exclusive.
Classification: Unfounded
#3 (April 2024)
Accusation: We reported a mislabeled figure in the paper committed by Nature staff, as an illustration that published papers also contain errors introduced during processing.
Resolution: No need for corrections unless Nature is convinced by PubPeer tweets that it should correct its own mistake.
Classification: Unfounded
***
16. Paper: Woerman AL, Stöhr J, Aoyagi A, Rampersaud R, Krejciova Z, Watts JC, Ohyama T, Patel S, Widjaja K, Oehler A, Sanders DW, Diamond MI, Seeley WW, Middleton LT, Gentleman SM, Mordes DA, Südhof TC, Giles K, Prusiner SB. (2015) Propagation of Prions Causing Synucleinopathies in Cultured Cells. Proc. Natl. Acad. Sci. USA 112, E4949-4958.
PubPeer Weblink: https://pubpeer.com/publications/F80D8161AB29FEF18967FAF9A0D228#2
#1, 3-5 (December 2023)
Accusation: Based on data reconstructions, the statistical significance of panel A of Figure 6 is p = 0.0679 instead of p<0.05 as stated in the paper. Again, the accusation is supported by an echo chamber of commentators.
Resolution: The incriminated data are not from the Südhof lab. Small differences in reconstructed data points could easily account for the tiny difference in calculated p values, especially since figures are generally constructed from data points by software and subsequent shuffling of panels introduces inaccuracies. Given that the alleged p value difference is small, the conclusion that the stated p value is wrong seems unjustified, although PubPeer criticized this argument as inappropriate.
Classification: Unfounded
***
15. Paper: Jiang, X., Sando, R., and Südhof, T.C. (2021) Multiple signaling pathways are essential for synapse formation induced by synaptic adhesion molecules. Proc. Natl. Acad. Sci. U.S.A. 118, e2000173118. PMCID: PMC7826368
PubPeer Weblink: https://pubpeer.com/publications/027E93962D3C5DB86482283739C67D
#1 (November 2023)
Accusation: The stated statistical significance of the right graph in Figure 5C must be wrong because the error bars overlap
Resolution: Prism software indicates that the two conditions are significantly different, consistent with the fact that overlap of confidence intervals is not a reliable indicator of statistical significance (Schenker, Nathaniel, and Jane F. Gentleman. 2001. “On Judging the Significance of Differences by Examining the Overlap Between Confidence Intervals.” The American Statistician 55: 182–86. http://www.jstor.org/stable/2685796.)
Classification: Unfounded
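The statistical point made by Schenker and Gentleman (overlapping 95% confidence intervals do not rule out a significant difference) can be illustrated with a minimal sketch using made-up summary statistics and a large-sample normal approximation:

```python
# Sketch with illustrative numbers (not the paper's data): two groups
# whose 95% confidence intervals overlap can still differ significantly,
# because the SE of the difference is smaller than the sum of the
# individual CI half-widths.
from math import sqrt, erf

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical summary statistics (large n, normal approximation)
m1, sd1, n1 = 10.0, 3.0, 100
m2, sd2, n2 = 11.0, 3.0, 100

se1, se2 = sd1 / sqrt(n1), sd2 / sqrt(n2)
ci1 = (m1 - 1.96 * se1, m1 + 1.96 * se1)
ci2 = (m2 - 1.96 * se2, m2 + 1.96 * se2)

# The two 95% confidence intervals overlap...
assert ci1[1] > ci2[0]

# ...yet the two-sided test on the difference is significant
z = (m2 - m1) / sqrt(se1**2 + se2**2)
p = 2 * (1 - normal_cdf(abs(z)))
print(round(z, 2), round(p, 4))
assert p < 0.05
```

The same logic applies to SD or SEM error bars in a figure: visual overlap of bars is not a valid test of significance.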
#3-6 (November 2023)
Accusation: Using the error bars as a guide to reconstruct the statistical significance between the two conditions, the two conditions cannot be significantly different
Resolution: We went back to the original data and confirmed that the error bars are SDs, not SEMs, and that the two conditions are statistically significantly different. An error in the figure legend was identified.
Classification: Minor error in figure legend
#8, 9, & 11 (March 2024)
Accusation: The same control was used in two different papers
Resolution: Correct – the experiments were carried out at the same time with the same controls
Classification: Unfounded
***
14. Paper: Ho, A., Morishita, W., Hammer, R.E., Malenka, R.C., and Südhof, T.C. (2003) A role for Mints in transmitter release: Mint 1 knockout mice exhibit impaired GABAergic synaptic transmission. Proc. Natl. Acad. Sci. U.S.A. 100, 1409-1414.
PubPeer Weblink: https://pubpeer.com/publications/866AA1811F89014742E1EAEB2BD25D
#1 (October 2023)
Accusation: The 60-day data point in Figure 3C cannot be statistically significantly different because the error bars almost overlap.
Resolution: In the incriminated graph, only the 60-day data point is statistically significantly different. As discussed in the paper, this is not a biologically significant result, but the statistical significance was nevertheless reported as mandated by publishing rules. Also see #1 under Paper 15 above.
Classification: Unfounded
#2 (October 2023)
Accusation: The fact that neurexin protein levels are lower in heterozygous than in homozygous KO mice in Table 1 is suspicious.
Resolution: Protein measurements are notoriously noisy. As a result, small changes in protein levels are not interpretable, especially when, as in this case, they are not statistically significant.
Classification: Unfounded
***
13. Paper: Biederer, T., Cao, X., Südhof, T.C., and Liu, X. (2002) Regulation of APP-dependent transcription complexes by Mints/X11s: Differential functions of Mint isoforms. J. Neurosci. 22, 7340-7351.
PubPeer Weblink: https://pubpeer.com/publications/34D6A8F36DFC9F19D753DCAF6B96FE
#1 (October 2023)
Accusation: The lack of error bars on panel D of Figure 3 raises concerns about the validity of the data
Resolution: Error bars are present in Figure 3D but are too small to be visible.
Classification: Unfounded
***
12. Paper: Fernandez-Chacon, R., Shin, O.-H., Königstorfer, A., Matos, M.F., Meyer, A.C., Garcia, J., Gerber, S.H., Rizo, J., Südhof, T.C., and Rosenmund, C. (2002) Structure/function analysis of Ca2+-binding to the C2A-domain of synaptotagmin 1. J. Neurosci. 22, 8438-8446.
Weblink: https://pubpeer.com/publications/1FD9572D003130580F6C598C39D9C6
Date of accusation: September 2023
Accusation: Unknown since comment was removed by ‘moderator’
Resolution: No criticism recorded
Classification: Unfounded
***
11. Paper: Trotter, J.H., Hao, J., Maxeiner, S., Tsetsenis, T., Liu, Z., Zhuang, X., and Südhof ,T.C. (2019) Synaptic Neurexin-1 Assembles into Dynamically Regulated Active Zone Nanoclusters. J. Cell Biology 218, 2677-2698. PMCID: PMC6683742
PubPeer Weblink: https://pubpeer.com/publications/71F24BE796880C3A39AC29501382AB
#1 (July 2023)
Accusation: Control blots in Figure 7C are duplicated
Resolution: The original blots document that the control blots were likely not duplicated, even though they show similar blotting artifacts; given this similarity, one might easily conclude that the blots are identical. A common error of PubPeer posts is to allege that blots with similar artifacts are duplicated, neglecting the fact that different samples analyzed on the same blotting apparatus with the same antibodies exhibit similar artifacts.
Classification: Unfounded
Postscript: Upon reviewing the primary data, we discovered a blot mixup in the images shown, which we corrected with an Erratum; this mixup was not detected on PubPeer.
Classification: Minor error identified by the lab
#4 (October 2023)
Accusation: One bar in the Figure 9C graph is not labeled as significantly different but looks like it should have been labeled as significantly different
Resolution: Prism software analysis of the data indicates that it is not significantly different
Classification: Unfounded
#6 (March 2024)
Accusation: Dr. Bik acknowledges that we published a correction not of any mistake PubPeer identified, but of a mistake we identified
Resolution: No resolution needed – the comment was just meant to amplify the fact that we found and acknowledged a mistake
Classification: Unfounded
#8 (March 2024)
Accusation: Dr. Bik alleges that there is a mistake in Figure S6E because two representative images overlap. As is often the case, her comments are then echoed and amplified by Dr. Bazargan and Actinopolyspora
Resolution: The overlap exists but is perfectly legitimate since the panels illustrate sample images from the same experiment
Classification: Unfounded
***
10. Paper: Burré, J., Sharma, M., Tsetsenis, T., Buchman, V., Etherton, M., and Südhof, T.C. (2010) α-Synuclein Promotes SNARE-Complex Assembly in Vivo and in Vitro. Science 329, 1664-1668.
PubPeer Weblink: https://pubpeer.com/publications/2452F555579F6D021205B875814D82
#1 (May 2023)
Accusation: The blots in Figure 2B, 4B, and 4C are duplicated
Resolution: Original blots demonstrate that the blots are not duplicated. This is the same common error in PubPeer accusations as in paper 11, which alleges blot duplications based on similar artifacts but neglects the fact that similar samples run on the same blotting apparatuses exhibit similar artifacts. A complete analysis of this and the following accusations based on original data is provided here [link to Burre paper analysis].
Classification: Unfounded
#3 (May 2023)
Accusation: Demands higher resolution images of the blots that we presented in PubPeer
Resolution: We feel the original blots we presented on PubPeer are sufficient to demonstrate that the accusation that the blots were duplicated is mistaken.
Classification: Unfounded
#5 & #6 (March 2024)
Accusations: Dr. Bik reinforced/repeated #1 and #3 accusations.
Resolution: The first author of the paper has now posted the original blots on PubPeer, documenting that these blots exist and that there is no evidence that the control blots were duplicated.
Classification: Unfounded
#9 (January 2025)
Accusations: Dr. Sholto David alleges that the incriminated blots were manipulated based on his image analysis that identified microduplications
Resolution: Above we show a complete comparison of the published data, the alleged manipulation, and high-resolution images of the submitted data, which clearly shows that the allegation is wrong. This case illustrates the nature of many false PubPeer accusations: in older papers, less sophisticated publishing software introduced changes into images, such as duplications of background areas and other aberrations, that are clearly not intentional and are not present in the original data. The problem we face as scientists is that for many old papers, original data are simply no longer available, through no fault of our own. Given that social media like PubPeer seem to imply that we are guilty unless we can prove otherwise, this creates a toxic atmosphere, especially for young scientists.
Please note that the artifacts introduced by the publishing process that Dr. David identified also potentially explain some of the other artifacts for which we were accused of fraud (see #37). In this case the original data were luckily available, but for decades-old papers this is not always the case. Again, a more detailed analysis is provided here [link to Burre paper analysis].
Classification: Unfounded
***
9. Paper: Patzke, C., Brockmann, M.M., Dai, J., Gan, K.J., Grauel, M.K., Fenske, P., Liu, Y., Acuna, C., Rosenmund, C., and Südhof, T.C. (2019) Neuromodulator Signaling Bidirectionally Controls Vesicle Numbers in Human Synapses. Cell 179, 498-513. PMCID: PMC7159982
PubPeer Weblink: https://pubpeer.com/publications/4AEAAAE084C8DFE9E26107D350B0B5
#1, 2, 4 & 5 (March 2023)
Accusation: The source data files for Figures 6E, 6F, 6K, S6F, S6G, S6Q, S6R, and S7C contain instances of data duplication.
Resolution: The accusation is largely correct. During assembly of the source data files, copy-paste errors of several blocks of numbers were committed for which a Correction has been filed. In addition, isolated number identities occurred that are not errors but intrinsic to the scientific method used.
Classification: Minor mistake identified by PubPeer
***
8. Paper: Lin, P.Y., Chen, L.Y., Zhou, P., Lee, S.H., Trotter, J.H., and Südhof, T.C. (2023) Neurexin-2 restricts synapse numbers and restrains the presynaptic release probability by an alternative splicing-dependent mechanism. PNAS 120, e2300363120
PubPeer Weblink: https://pubpeer.com/publications/DAF32F6DB6C166337E5381F769AE52
#1 - #4, #7, #11, and #13-#16 (March & April 2023)
Accusation: The source data files for Figures 2-6 contain extensive data duplications
Resolution: The accusation is mostly correct. The source data files for Figures 2-6 contain multiple erroneous number duplications involving <4% of the numbers. In addition, some experiments that were performed at the same time use the same control values. These numbers are therefore duplicated on purpose. Although this is perfectly legitimate practice, it was not appropriately explained in the paper.
Recent re-analyses showed that replotting the figures of the paper using the published source data, with or without the erroneous data duplications, or plotting the figures with the corrected source data yields essentially identical figures that are difficult to distinguish from the originally published figures. Thus, although the published source data file did contain extensive data duplications that should not have occurred, the duplications had no tangible effect on the conclusions of the paper.
#18 - #21, #23-#26, #28, #30, #31, #33-#35, #37, #38, #40, #42, #44, and #46 (April-August 2023)
Accusation: The unpublished ‘replacement data’ posted by the 1st author on PubPeer contain irregularities suggesting that these replacement data may have been made up
Resolution: The accusation is correct. The ‘replacement data’ that the 1st author of the paper posted on PubPeer, however, were not posted by the Südhof lab. They were not seen or endorsed by the Südhof lab prior to being posted, nor was the 1st author asked by the Südhof lab to post any data on PubPeer. It recently emerged that the 1st author considered PubPeer an unaccountable social media site. She did not know that PubPeer would serve as a reference source for journals and administrators or would be considered a scientific publication. She stated that she posted random data on a regrettable impulse because she did not think that this would be considered inappropriate since PubPeer is not a lab publication.
Classification: The ‘replacement data’ PubPeer post is not a Südhof lab publication but a private posting of the 1st author of the paper
#64-#67 and #69 (October 2023)
Accusation: The raw data of the paper that we made publicly available (https://purl.stanford.edu/cp231wr9194) contain major technical issues
Resolution: The accusation is incorrect. The allegation of technical issues in these PubPeer posts was largely based on an incomplete understanding of electrophysiological methods, as one of the PubPeer commenters graciously pointed out. Electrophysiological results are inherently noisy and there are no standardized analysis methods; noticeable differences often emerge when different experts analyze the same raw data. Our own limited reanalysis of the raw data for this paper by an independent expert overall confirmed the conclusions of the paper. The expert did think that some of the numerous electrophysiological traces should have been excluded, whereas the imaging data were deemed to be of excellent quality throughout the paper.
Classification: Unfounded
#68 (October 2023)
Accusation: The review of the paper was flawed because both reviewers are alumni of the Südhof lab
Resolution: The accusation is incorrect. Prof. Josh Huang never was associated or collaborated with the Südhof lab. Prof. Katsuhiko Tabuchi was in the Südhof lab more than 20 years ago but has since established a successful independent research program as a full professor in Japan.
Classification: Unfounded
#72 and #73 (October 2023)
Accusation: The mean value in Figure 1B does not fit the datapoints associated with it, suggesting that the mean value is wrong
Resolution: The accusation is correct. The mean control value does not fit the datapoints shown because the datapoints were accidentally shifted during construction of the figure. Replotting the raw data in the source file shows that the mean value is correct; the shift probably occurred because in Adobe Illustrator independent image objects are often linked, and moving one object can inadvertently move other linked objects.
Classification: Minor mistake identified by PubPeer
Postscriptum: We retracted the P.Y. Lin et al. paper after we confirmed the PubPeer allegation that the published source data Excel file contained numerous number duplications (retraction statement: “We wish to retract the paper because re-analysis of the original raw data for Figs. 2, 4, and 6 (https://purl.stanford.edu/cp231wr9194) revealed that, although our analyses of the original data are supportive of the conclusions of the paper, unresolvable differences exist between these raw data and the published data source file that cannot be corrected by a simple erratum. In addition, the data source file contained copy-paste errors, and Fig. 1 included shifted data points that occurred during figure drafting. We thank Dr. Daniel Matus of Stanford University for his independent analysis of the primary raw data.")
However, we have now performed further analyses of the data reported in the Lin et al. (2023) paper. We find that the only significant errors of the paper are contained in its source data file that included multiple number duplications accounting for <4% of the data. Replotting the data either with the erroneous duplications or with the correct numbers results in nearly identical figures that are indistinguishable from the published figures of the retracted paper. Thus, although the number duplications in the source data file are clearly unacceptable, these data entry errors did not produce major differences between the raw data and the data source file.
Moreover, upon reanalysis of the genesis of the paper, the source data files, and a subset of raw data, we realized that in our lab’s initial PubPeer responses to the accusations we overestimated the extent of the problems associated with this paper. As a result, some of the PubPeer responses we posted were overly negative about the data underlying this paper. There is no clear cause to question the actual findings of the paper since the figures are not materially affected by the numerous number duplications in the source data files that were published, since all raw data are publicly available, and since most of the data that we reassessed are of good quality.
***
7. Paper: Dai, J., Liakath-Ali, K., Golf, S., and Südhof, T.C. (2022) Distinct Neurexin-Cerebellin Complexes Control AMPA- and NMDA-Receptor Responses in a Circuit-Dependent Manner. eLife 11, e78649.
PubPeer Weblink: https://pubpeer.com/publications/68D8490A4754CE00F936214C3931F2
#1 & #4 (March 2023)
Accusation: The source data file of the paper contains duplicated values for several cells
Resolution: The accusation is correct. The duplications are copy-paste errors that were corrected with an Erratum in the journal.
Classification: Minor mistake identified by PubPeer
#5, #7 & #8 (November 2023 & July 2024)
Accusation: An anonymous commentator confirms an Erratum was published, another commentator demands all raw data, and finally Dr. Bik 9 months later re-confirms that an Erratum has been published.
Resolution: The confirmation is correct, but we are puzzled why it was posted unless it was meant to amplify the accusations.
Classification: Irrelevant comment
***
6. Paper: Zhang, X., Lin, P.Y., Liakath-Ali, K, and Südhof, T.C. (2022) Teneurins Assemble into Presynaptic Nanoclusters that Promote Synapse Formation via Postsynaptic Non-Teneurin Ligands. Nature Comm. 13, 2297.
PubPeer Weblink: https://pubpeer.com/publications/EC9A138F8BFAAE1F2FB803106703AB
#1 (March 2023)
Accusation: Two columns in the source data file for Figure 8 are duplicated
Resolution: The accusation is correct. The columns were accidentally duplicated in a copy-paste error during assembly of the file for sharing after figures were drafted. The error was corrected with an erratum. All raw data were submitted to a public database (https://doi.org/10.25740/kq306bq2466)
Classification: Minor mistake identified by PubPeer
#5 & #6 (July 2024)
Accusation: Dr. Bik, followed as usual by Dr. Bazargan’s ‘animation’, finds that two of the 413 fluorescence images of the paper may exhibit a partial overlap.
Resolution: We could not confirm the alleged overlap despite the animation but the complete raw data are publicly available for reanalysis (see https://doi.org/10.25740/kq306bq2466)
Classification: Unfounded
#8 (May 2025)
Accusation: Dr. Bik is checking up on us asking whether the minor mistake she identified will be corrected in the journal; in addition, she apparently contacted the journal again to request an investigation.
Resolution: The journal has been contacted, although we do not agree with Dr. Bik’s fundamentalist attitude.
Classification: Irrelevant
***
5. Paper: Dai, J., Patzke, C., Liakath-Ali, K., Seigneur, E., and Südhof, T.C. (2021) GluD1, A signal transduction machine disguised as an ionotropic receptor. Nature 595, 261-265.
PubPeer Weblink: https://pubpeer.com/publications/0B87E141DC8DFFF4826A9250A94BAD
#1 (February 2023)
Accusation: The PPR data in the source file are missing
Resolution: The corresponding data were mislabeled as belonging to ‘1f’ instead of ‘1k’ in the source data file due to a typing error. This mistake is being addressed in a ‘Correction’ in the journal.
Classification: Unfounded
#2 & #4 (February & March 2023)
Accusation: The source file contains duplicated values for two rows of numbers
Resolution: Correct. We introduced two copy-paste mistakes during transfer of data from experimental logs to the source file, as can occur when a ‘copy’ keystroke does not register. This mistake is being addressed in a ‘Correction’ in the journal.
Classification: Minor mistake identified by PubPeer
#3, #9, & #11 (March & June 2023)
Accusation: Some of the full-sized blots in Suppl. Figure 1b and 1c do not correspond to the cropped or quantified data in the Extended Data figure.
Resolution: Correct. Reassessment of original blots identified two related mistakes. In Suppl. Figure 1b one blot was mislabeled and not all blots were included. In Suppl. Figure 1c a different blot of the same experiment with identical results was shown. Again, these mistakes are being addressed in a ‘Correction’ in the journal.
Classification: Minor mistake identified by PubPeer
#12-#16 & #18 (July & November 2023)
Accusation: The data on the effect of a mutation on a function should not have been tested by t-tests
Resolution: T-tests are the standard test for manipulations that contain only a single independent variable and compare that variable only to the control, not among samples. The graph reports a comparison of multiple single tests to the same shared controls and could equally have been plotted as multiple graphs, each consisting of a control and a test condition. We agree that the choice of statistical test can be contentious, especially if one focuses less on the biological experiment and more on the graph format, but we feel that the t-test is appropriate here.
Classification: Unfounded
***
4. Paper: Sclip A, and Südhof, T.C. (2020) LAR receptor phospho-tyrosine phosphatases regulate NMDA-receptor responses. eLife 9, pii: e53406. PMCID: PMC6984820
PubPeer Weblink: https://pubpeer.com/publications/613AFEC1A22C0725BB6D5A9E5CFE76
#1 (February 2023) & #4 (April 2023)
Accusation: Alleged falsification of data because (i) quantifications of blots were made on the basis of different ‘n’s’ for different proteins but (ii) only a single Tuji control blot is shown for the quantifications.
Resolution: (i) The ‘n’s’ (number of replicates) differ between experiments because each protein quantification is performed separately with a different antibody using the same samples. Different numbers of repeat experiments are performed for various proteins because the noise levels differ between proteins dependent on a protein’s abundance and the quality of an antibody. Each antibody is different, and each immunoblotting quantification is a separate experiment compared to the same controls run on the same gels for multiple antigens.
(ii) A single sample control blot is shown for each set of proteins because illustrating samples of the Tuji control blots separately for each of the 23 proteins seems superfluous and adds no information. The same control blot is shown for panels B and C because these experiments were run on the same gels to illustrate that fact.
Classification: Unfounded
#7 & #9 (July 2024)
Accusation: Drs. Bik and Bazargan (in yet another ‘animation’) restate a year later the accusation that the control blot in panels B and C was duplicated, even though we explained already above that they are purposely the same. Dr. Bik states “The lengthy explanations given above [our earlier response in PubPeer, when we still tried to discuss data with PubPeer accusers, which is clearly impossible] seem to not address the concern raised above in #1, which is that two Tuji blots representing different samples (presynaptic vs active zone) look unexpectedly similar. I understand that each protein-of-interest might have been normalized using its own Tuji re-probe, and that protein quantifications are difficult, but that still does not explain the abovementioned similarity. Can the authors please address the actual concern? Also, these figures are not "for illustration purposes only" - they are the data.”
Resolution: Although our explanation was lengthy, we apologize that it apparently was not lengthy enough. First, the samples analyzed are NOT different; they are the same. Second, these samples were run on the same gels and analyzed on the same blots and thus share Tuji controls. The two Tuji blots are ‘expectedly similar’ because they are identical. Third, we show the same Tuji blot for pre- and postsynaptic proteins because these proteins were analyzed on the same gels. These are real data shown for illustration – we do not show every blot we ever ran. All of this was described in earlier responses. We realize that Drs. Bik and Bazargan do not focus on the actual science and how science is done, and we appreciate the many questions they raise. We also realize that, to avoid attracting Drs. Bik’s and Bazargan’s accusations, we should choose different Tuji examples even when they were run on the same gels. At this point, scientists in our labs are becoming paranoid that something may look ‘unexpectedly similar’ and trigger PubPeer comments that can destroy careers without having any impact on the science.
Classification: Unfounded
#12 (July 2024)
Accusation: An anonymous commenter understood that the samples are not different, for which we are grateful
Resolution: We appreciate that another commenter reads our responses
Classification: helpful comment
***
3. Paper: Dai, J., Aoto, J., and Südhof, T.C. (2019) Alternative Splicing of Presynaptic Neurexins Differentially Controls Postsynaptic NMDA- and AMPA-Receptor Responses. Neuron 102, 993-1008. PMCID: PMC6554035
PubPeer Weblink: https://pubpeer.com/publications/568B4CF8B40A979424B7F343F3B061
#1 (February 2023)
Accusation: The y-axis labels of Figure 6B and 6C are incorrect
Resolution: Correct - the y-axes were mislabeled
Classification: Minor mistake identified by PubPeer
#3-#11, #22, #23 (February & March 2023)
Accusation: The source data for Figures 2D, 4A, 5E, 6A, 6B, 8, S2, S4, and S8 contain duplications and possible calculation errors
Resolution: Correct - the source data contain multiple isolated copy-paste errors that also led to errors in downstream calculations but have no discernible effect on the figures or conclusions.
Classification: Minor mistake identified by PubPeer
#25-#28 and #30 (March 2023)
Accusation: The statistical tests used for experiments examining 3 experimental groups – derived from littermate SS4+ and SS4- mice vs. unrelated WT samples – are generally incorrect
Resolution: The accusers express their view that the three experimental groups should be treated as equivalent in pairwise comparisons, whereas biologically they are not equivalent. The three experimental groups comprise a genetically distinct WT sample expressing both SS4+ and SS4- variants of neurexins and two genetically identical samples that express only the SS4+ or only the SS4- variants of neurexins. Thus, the statistics are more complex than a simple 2-way ANOVA with a post-hoc correction, since the comparison of the two genetically identical samples with each other is inherently different from their comparison with the WT sample. Given this experimental configuration, we believe our statistical approach may be the most appropriate, but it is possible that a non-traditional test that accounts for the experimental non-equivalency might be even better.
Classification: Unfounded
#33-#37 (March 2024)
Accusation: Dr. Bik states that the Correction we published on this paper to rectify the labeling mistakes and copy-paste errors represents a ‘Mega Correction’, and another accuser criticized us for not addressing the accuser’s concerns about statistics.
Resolution: We feel that our correction of copy-paste errors in supplementary tables and of mislabeled supplementary graphs is not a ‘Mega Correction’, although we regret these errors. The impact of this ‘Mega Correction’ is purely procedural and has no consequences for the actual science.
Classification: Irrelevant comment
***
2. Paper: Sclip, A., Bacaj, T., Giam, L., and Südhof, T.C. (2016) Extended synaptotagmin (ESyt) triple knock-out mice are viable and fertile without obvious endoplasmic reticulum dysfunction. PlosONE 11, e0158295.
PubPeer Weblink: https://pubpeer.com/publications/FBC43D21E0E903A65AF81CD8D1CAF1
#1 (August 2022)
Accusation: Alleged duplication of blots in Figure 3 and S1.
Resolution: It is correct that the same tubulin and actin blots were used as loading controls for experiments that were performed at the same time. This is not a duplication of blots but the use of the same controls for experiments performed at the same time, which we should have noted in the legends. An Erratum documenting this detail has been demanded by the journal to satisfy PubPeer, although it seems pointless.
Classification: This is only a mistake if one considers every word in a paper ….
#3 (December 2022)
Accusation: A correction was made in the published paper without notification
Resolution: This is incorrect – no correction was made
Classification: Unfounded
#5 (December 2022)
Accusation: Claims that the data are no longer on the PlosONE website
Resolution: A screenshot from the PlosONE website shows that the data are clearly still there.
Classification: Unfounded
#7 (June 2023)
Accusation: The Munc18 and Syt1 blots are identical, i.e. represent duplications
Resolution: Original blots of the experiments, now shown on PubPeer, demonstrate that the Munc18 and Syt1 blots were derived from the same experiments, run on the same gels but probed with distinct species-specific antibodies. Since these proteins have similar sizes, the shapes of their bands are nearly identical, creating an appearance of duplication. However, based on direct communication of the accusers with PlosONE ‘Ethics’, the PlosONE ‘Ethics’ administrators demand that we amend the blots to show samples from the same original publicly deposited blots that make the bands look different, which will be published as a ‘Correction’. The ‘Correction’ does not actually correct anything, but journals are very sensitive to PubPeer criticism and their Ethics departments are tasked with eliciting Corrections.
Classification: Unfounded
#10 (August 2023)
Accusation: Maintains that he/she/they cannot see the blots properly
Resolution: Original blots were publicly deposited (https://purl.stanford.edu/vq040hz0549)
Classification: Unfounded
#12 (March 2024)
Accusation: Dr. Bik states “the blot provided in #2 appears to contain munc18 bands in green and syt1 bands in red, just under the munc18 bands, while the published panels show both bands separately (no double bands) and both in red. So it appears you either did not provide the originals, or the published panels have been color-converted with double bands removed.”
Resolution: We have already published a ‘Correction’ that better explains the various blots, with inclusion of full-length original blots and boxed areas, and all original blots have been posted on PubPeer and are also publicly available at the published URL as high-resolution images. However, the accusation that we "either did not provide the originals, or the published panels have been color-converted with double bands removed" is incorrect. In the incriminated blot, two different species-specific antibody signals were monitored on the same blot (for Munc18 and synaptotagmin, which have almost identical molecular weights) in different optical channels. Thus, no double band has been removed, nor were colors converted, because every color represents an arbitrary assignment of a digital signal. We chose to show the Munc18 and synaptotagmin blots in the same false color in our paper because we thought this would make it easier to compare the blots, but this has clearly confused people who are not familiar with this type of experiment.
Classification: Unfounded
#13 (July 2024)
Accusation: Dr. Bik quotes the Correction
Resolution: No resolution necessary since this comment is just meant to amplify the number of comments on PubPeer.
Classification: Unfounded
***
1. Paper: Yi, F., Danko, T., Botelho SC, Patzke C, Pak C, Wernig, M., and Südhof, T.C. (2016) Autism-associated SHANK3 haploinsufficiency causes Ih channelopathy in human neurons. Science 352, aaf2669.
PubPeer Weblink: https://pubpeer.com/publications/CBAA10B0FBF31CC41E9B7D56C8B0C3
#1 (April 2018)
Accusation: Since the resting membrane potential of the human neurons derived from stem cells in the experiments is at approximately -40 mV, the neurons must be leaky and sick and it is therefore not possible to draw any conclusions from these neurons
Resolution: All human neurons produced from stem cells are immature and exhibit a decreased resting potential, even though they are not leaky or sick. They exhibit robust active and passive membrane properties and form fully functional synapses
Classification: Unfounded
#3 (April 2018)
Accusation: How can different clones in different sets of experiments produce very similar readouts given that evoked EPSC amplitudes are dependent on intensity of stimulation
Resolution: The electrophysiological approaches used here were validated previously in a number of papers and shown to be highly reproducible across experiments (e.g., see Zhang, Y., Pak, C.H., Han, Y., Ahlenius, H., Zhang, Z., Chanda, S., Marro, S., Xu, W., Yang, N., Patzke, C., Chen, L., Wernig, M., and Südhof, T.C. (2013) Rapid Single-Step Induction of Functional Neurons from Human Pluripotent Stem Cells. Neuron 78, 785-798. PMCID: PMC3751803; Pak, C., Danko, T., Mirabella, V.R., Wang, J., Liu, Y., Vangipuram, M., Grieder, S., Zhang, X., Ward, T., Huang, Y.W.A., Jin, K., Dexheimer, P., Bardes, E., Mittelpunkt, A., Ma, J., McMachlan, M., Moore, J.C., Qu, P., Purmann, C., Dage, J.L., Swanson, B.J., Urban A.E., Aronow, B.J., Pang, Z.P., Levinson, D.F., Wernig, M., and Südhof, T.C. (2021) Cross-Platform Validation of Neurotransmitter Release Impairments in Schizophrenia Patient-Derived NRXN1-Mutant Neurons. Proc. Natl. Acad. Sci. U.S.A. 118, e2025598118. PMCID: PMC8179243)
Classification: Unfounded
#4 (April 2018)
Accusation: The electrophysiological experiments are unreliable (e.g., “Anyone knows that hyperpolarizing MP from -40 to -70 takes heroic current magnitude” and “Not all cells should be active at rest, even not ~60 %”)
Resolution: The electrophysiological approaches used here are consistent with a large number of previous studies by multiple independent experimenters, not only in the Südhof lab but also in other laboratories, and the data shown in the paper are internally fully consistent
Classification: Unfounded
#5 (April 2018)
Accusation: Mean amplitudes and intervals do not correlate with the 'representative traces'
Resolution: Electrophysiological measurements vary between cells, and representative traces are only an illustration of the results; they do not depict an average of traces
Classification: Unfounded
#10 & #11 (July 2024)
Accusation: Desires to know the current used to measure the resting membrane potential
Resolution: We unfortunately do not have the resources to research all questions posed by the audience when there is no pressing rationale
Classification: Unfounded