ABSTRACT
Plagiarism in publicly funded research threatens research integrity and misuses taxpayer dollars. In the past two decades, clear discrepancies have emerged between how the Office of Research Integrity and the National Science Foundation address plagiarism. One factor driving this discrepancy is the use of plagiarism detection software: as detection has grown more sophisticated, it has revealed that plagiarism is more widespread than previously believed. Continued education on the responsible conduct of research is imperative to fostering research integrity and decreasing instances of research misconduct. Congress and the National Science Foundation have initiated new policies to address plagiarism, and institutions and researchers must implement these policies widely. By examining recent plagiarism cases and responsible conduct of research training, this article illuminates issues with the current approach to addressing plagiarism and advances arguments to remedy them.
INTRODUCTION
Research misconduct in federally funded grants involves the misappropriation of public investment. Research misconduct is defined federally as “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.” Misconduct is primarily overseen by two agencies within the federal government: the National Science Foundation (NSF), which determines cases involving NSF funding, and the Office of Research Integrity (ORI), which reports cases involving Public Health Service (PHS) funds. Although other disciplines define plagiarism differently, both agencies define plagiarism as “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.” The federal definition excludes “self-plagiarism” and honest error. Furthermore, for a finding of research misconduct to be made, the following must be satisfied: “(1) There be a significant departure from accepted practices of the relevant research community; and (2) The research misconduct be committed intentionally, knowingly, or recklessly; and (3) The allegation be proven by a preponderance of evidence.”
Plagiarism and research misconduct were first explored by Scientific Misconduct and the Plagiarism Cases twenty-seven years ago. This formative article demonstrated the disjointed response to misconduct. Several years later, Research Misconduct and Plagiarism advanced the discussion of the importance of plagiarism and clarified federal approaches to regulation. Developments in the intervening years in detection, policy, and public trust have created the need to readdress plagiarism. Given the proliferation of digital resources, plagiarism detection software such as iThenticate and Turnitin has substantially changed how plagiarism is discovered and investigated. Attempts to address plagiarism extend beyond detection into prevention through training programs. The 2007 America COMPETES Act established a responsible conduct of research (RCR) training requirement for all institutions receiving funding from the NSF. Further, the CHIPS and Science Act of 2022 revised these requirements to improve the effectiveness of RCR training. Integrity in scientific research is especially important today because of public distrust of the scientific community. Overall confidence in medical scientists, and in scientists more broadly, has declined since April 2020. Although plagiarism matters more to academics than to the public, research misconduct widens the gap in trust. Shrinking trust in science has been discussed for many years, and this distrust has grown recently. Endeavors to decrease rates of plagiarism, such as more effective RCR training and greater support for inexperienced researchers, simultaneously address research integrity more broadly.
This article expands upon previous discussions of plagiarism and research misconduct in the following ways. First, I review the past sixteen years of plagiarism cases to call attention to the growing discrepancies between the ORI’s and the NSF’s findings. Second, I examine the expansion of plagiarism detection software’s capabilities and applications; data collected by this software provide a unique opportunity for assessing both the breadth and the depth of plagiarism. Third, I show that RCR training fails to reach the appropriate populations of researchers and discuss recent solutions enacted by the NSF. Fourth, I argue that although the effects of increased transparency regarding plagiarism cases are unclear, methods intended to decrease plagiarism also address broader issues of research integrity; these methods can help rebuild trust between the public and the scientific community as well as promote proper citation practices. Finally, I suggest further strategies for decreasing instances of plagiarism.
I. RECENT PLAGIARISM CASES
A. GROWING DIVERGENCE
Recent plagiarism cases show a growing discrepancy between the number of findings made by the NSF and the ORI. Although the number of ORI plagiarism findings has remained stable over the last few decades, NSF findings have ballooned because of developments in detection. Differences in how each agency responds to allegations have also emerged in recent years. Both agencies use the same definition of plagiarism and research misconduct; the discrepancy must therefore stem from how each agency regulates plagiarism.
1. ORI Findings
Between 2005 and 2021, the ORI made eleven findings of research misconduct involving plagiarism. The National Institutes of Health (NIH)—the largest agency of the PHS—funds sixty thousand grants per year. In the sixteen-year period analyzed in this article, the ORI oversaw just under one million NIH-funded grants and found eleven that met the federal definition of plagiarism. All eleven respondents were affiliated with a university, either as a professor or as a researcher at the medical center, and the highest degree attained was a doctorate (most commonly a PhD or MD). Allegations were divided nearly evenly between plagiarism alone (six cases) and plagiarism combined with falsification or fabrication (five cases). Plagiarism was found predominantly in publications and grant applications, with nine cases involving unpublished manuscripts, one an abstract, and one a doctoral thesis.
The ORI determines sanctions in accordance with the seriousness of the misconduct. Seriousness is determined by the following factors: intent, pattern, impact, whether the respondent accepted responsibility, retaliation, and other circumstances. Sanctions imposed in these cases all included prohibition from serving on a PHS advisory board for two to ten years, depending on the severity of the plagiarism. Nine of the eleven cases were resolved with a voluntary settlement agreement or voluntary exclusion agreement. Voluntary agreements are reached when the respondent commits to accepting the finding of research misconduct. The other two respondents were debarred for two or five years. A respondent may be debarred if the research misconduct seriously impacted the respondent’s current responsibilities. Other sanctions included exclusion from government contracts, supervision of future research, and certifications and assurances that submitted grant applications do not contain plagiarism.
Highly publicized plagiarism cases such as Sezen suggest that releasing the names of researchers found to have plagiarized could harm their careers. Despite the public nature of ORI findings, however, many respondents were able to continue their careers. Six of the researchers were employed in their fields at some point after the public report of misconduct or remain so today. Although these researchers have attained industry jobs related to their fields, only one currently holds a faculty position in academia. This suggests that academic institutions take findings of plagiarism seriously and will not hire a researcher who has plagiarized, while the careers of these researchers continue unhindered in industry. The ineffectiveness of public censure is not limited to plagiarism cases: Retraction Watch recently published an article about a researcher previously found to have falsified data and methods in a grant application who was nonetheless recently awarded federal funding. The remaining five researchers had no readily available employment information after publication of the Federal Register notice of research misconduct.
With a sample size of eleven, it is difficult to make broad assertions. However, findings of plagiarism appear to have a more substantial effect on the careers of postdoctoral researchers and students than on those of established faculty. Established researchers may be able to attain industry jobs on the strength of their long careers in their fields, despite sanctions from the ORI. Conversely, postdoctoral researchers and students have no established career to rely on when searching for employment after a finding of plagiarism is published. These less-experienced researchers rely heavily on recommendations from previous employers, and the mentors of researchers who plagiarized may have been hesitant or unwilling to support them.
The universities affiliated with the researchers at the time of the misconduct were unlikely to release a public statement regarding the researchers’ actions. The legal risks associated with disclosing misconduct are a persuasive factor for universities, but even when those risks are mitigated, reputational harm may follow. A university’s reputation is important to attracting funding and retaining students, and public statements censuring researchers for lack of integrity may dissuade new students from enrolling or diminish current students’ satisfaction with their education. In the cases in which a public statement of plagiarism was made, the statement appeared only in student or faculty newspapers. Research misconduct diminishes the reputation of affiliated institutions. Nevertheless, disavowing research misconduct is crucial to establishing a culture of research integrity, especially for universities with multiple instances of plagiarism.
It is rare for other researchers associated with the person who plagiarized to be held responsible for the misconduct. In one case, however, a supervisor was found partially responsible for another’s plagiarism. In the Lushington case, the authors of a copied article made an allegation of plagiarism against Mahesh Visvanathan. The university’s investigation revealed that Visvanathan and his supervisor, Lushington, had dismissed a student’s allegation of plagiarism before publication. Plagiarism had occurred in three publications and one abstract, all of which Lushington had approved. This is the first known case in which the ORI held a supervising faculty member accountable for approving plagiarized work. The finding, however, apparently has not substantially impacted Lushington’s career in academia: he remains the only respondent to have held a faculty or equivalent position at an accredited university after a finding of plagiarism was made public by the PHS.
Institutions play a fundamental role in these plagiarism cases. As a condition of receiving PHS funding, institutions must ensure that they review and report research misconduct allegations. Research misconduct proceedings begin when an allegation is reported to the ORI or to the university’s research integrity office. Allegations come from internal and external sources, including universities, the publisher of the article, and unaffiliated individuals. The affiliated institution conducts an inquiry to substantiate the allegation using a framework provided by the ORI. If the results of the inquiry warrant an investigation, the matter is referred to an investigational committee at the institution and reported to the ORI. Institutions are the initial investigators of plagiarism accusations for both the NSF and the ORI; in ORI cases, however, institutional proceedings are determinative. Institutions may request ORI assistance through the Rapid Response for Technical Assistance program, which is intended to facilitate institutional investigations. The ORI may also conduct oversight reviews after an institution reports its final findings; these reviews overwhelmingly find institutional investigations to be sufficient. After receiving an institutional finding of research misconduct, the ORI sanctions the individual and publishes the finding in the Federal Register. The ORI’s role has thus focused primarily on supervision and publication of findings rather than direct involvement during the investigation.
2. NSF Findings
NSF findings of research misconduct show a drastically different picture of plagiarism. Between 2005 and 2021, the NSF made over 150 findings of plagiarism, primarily in grant applications. Per year, the NSF reviews over fifty thousand grant proposals and funds eleven thousand. The NSF made 134 findings of research misconduct involving plagiarism in fiscal years 2007–17, accounting for eighty-one percent of its research misconduct findings. These statistics show a drastic increase in plagiarism cases from previous decades: according to NSF Inspector General Allison Lerner, both allegations and findings of research misconduct tripled in the decade following 2003. Examining the avenues through which the NSF learns of research misconduct may explain why the NSF has seen such an increase in findings.
Most findings originate from external allegations received by the NSF, which can come from institutions, the NSF OIG Hotline, NSF reviewers, and program officers. After receiving allegations of plagiarism, the NSF conducts inquiries and substantiates the allegations using plagiarism software. The other method of detection is the NSF’s proactive review, in which the NSF runs random samples of proposals through plagiarism detection software to identify copied text. Although it is not explicitly clear what is fueling this increase in detection, it can be inferred that plagiarism software has played an important role.
A review of two cases published in 2015 highlights the different mechanisms by which cases are brought to the NSF’s attention and how the NSF handles each type. The first case was identified as containing plagiarized material via a proactive review of proposals funded in 2011. Based on the plagiarism detected, the award was suspended and ultimately $79,050 of public funds was reallocated; the NSF program officer stated that the proposal would likely not have received funding had he been aware of the plagiarism. In the second case, the relevant university received an allegation of plagiarism against a member of its faculty and notified the NSF OIG when its internal inquiry determined an investigation was warranted. The university’s investigation committee discovered that two NSF-funded publications and five additional publications contained self-plagiarism and text copied from uncited sources. The NSF has indicated a limited ability to screen proposals for plagiarism using plagiarism software, and most of its cases are initiated by allegations.
The NSF is less reliant than the ORI on its grantee institutions when making findings of research misconduct. Although institutions are the primary investigators of plagiarism allegations, the NSF will conduct its own review if an institution is unable to complete an investigation or if the NSF is not satisfied with the institution’s findings. For example, the NSF used this authority in a case where a funded grant application was alleged to contain plagiarism. After reviewing the university’s findings, the NSF conducted its own investigation and determined that the university had failed to fully examine the departure from accepted practices and whether there had been a pattern of misconduct. Having determined a significant departure and a pattern of plagiarism, the NSF sanctioned the subject.
3. Philosophical Differences Between the NSF and ORI
The ORI and the NSF approach public reporting of research misconduct findings differently, a difference that stems from how each agency apparently believes plagiarism cases should be reported. When the NSF closes an investigation, it publishes a Case Closeout Memorandum. These memoranda, available to the public via the NSF OIG website, disclose no personal information about the respondent or the institution and do not include the source of the allegation. The NSF has also accumulated aggregate data on its plagiarism findings and based future action on what that data reveals. In contrast, the ORI publishes the respondent’s and institution’s names. As these two methods of publication make evident, the ORI focuses on the individual while the NSF examines external and systemic factors. Neither approach, however, adequately addresses why plagiarism occurs. Plagiarism arises from a complex combination of external factors, such as highly competitive environments and pressure to publish, and the individual respondent’s ability to mitigate those factors. An effective approach to decreasing plagiarism would find a middle ground between the two, possibly focusing on formative repercussions.
Another key difference is the emphasis placed on plagiarism as an issue in research integrity. The widespread use of plagiarism detection software has allowed the NSF to recognize the extent of plagiarism. By publishing public reports, the NSF has shifted focus to structural and environmental issues, addressed how it currently handles plagiarism and how it can improve, and made clear to the scientific community and its grantee institutions that originality of academic research is paramount. Based on the relatively low number of plagiarism cases reported by the ORI, either it experiences drastically fewer instances of plagiarism than the NSF or it does not treat plagiarism as an important issue. According to the Gallup Organization’s survey of researchers who have witnessed misconduct, plagiarism occurs more frequently than ORI reports suggest. Plagiarism involves the misappropriation of public funding and should be treated accordingly by the primary government agencies seeking to regulate it.
B. “TIP OF THE ICEBERG”
Throughout the history of research misconduct study, it has been unclear whether reported cases underrepresent the extent of the issue or whether research misconduct is relatively rare. It is possible that the ORI accounts for all cases of potential misconduct, but the number of plagiarism cases the NSF finds makes that unlikely. Based on a survey conducted by the Gallup Organization, reported cases appear to be just the “tip of the iceberg.” Underreporting occurs at both the institutional and individual levels. ORI reports indicate that institutions disclosed an average of 1,592 allegations of misconduct annually from 1992 to 2006, yet the ORI oversaw an average of only twenty-four investigations per year, of which an average of twelve resulted in a finding of research misconduct. Because these investigations are conducted by universities, the gap may indicate a lack of institutional willingness to investigate potential misconduct. Further, only half of possible misconduct cases are reported by individuals, and researchers are more likely to report their colleagues’ potential misconduct if they are aware of their institutions’ policies and reporting venues. Institutional and individual underreporting has likely obscured the rate of plagiarism in research. Therefore, findings of research misconduct officially reported by the ORI do not fully reflect the extent of research misconduct.
Use of plagiarism software by the NSF has substantiated the tip-of-the-iceberg theory. Internal audits of funded proposals using plagiarism detection software have identified substantial amounts of verbatim plagiarism. As of 2013, the NSF was unable to address all instances of plagiarism discovered by these internal audits, and expanding the agency’s capacity to review both external allegations and its own proactive reviews remains a challenge. To alleviate this pressure on the NSF and the public funding needed to address it, other actors should take a more active role: researchers should submit their work to plagiarism software when it is available and press their institutions to provide it when it is not, and institutions should meet this need and fully investigate substantial allegations of research misconduct before submission for funding.
In cases where no federal funding is involved, institutions are not required to report allegations of plagiarism to federal agencies, and the federal definitions of plagiarism and research misconduct apply only to research funded by a federal agency. This subset of allegations is defined by institutional policies and addressed as each institution deems fit. Allegations of plagiarism at the institutional level are therefore not reported by federal agencies, and given the reputational consequences of research misconduct, universities may be incentivized not to publicize such findings. The Gallup Organization study found that research misconduct surpassed expected levels in part because of the lack of institutional responses. If the strain of detecting plagiarism in the thousands of submitted grant proposals is at fault for the discrepancy, widespread use of plagiarism software by universities and researchers before submission of a grant application or manuscript may reduce the strain on the federal agencies investigating plagiarism.
C. MOTIVATIONS
The motivations and conditions of individuals who commit research misconduct are multifaceted and complex. Researchers who have observed colleagues commit misconduct are one source of information on what motivates plagiarism. To contribute to the growing discussion of research misconduct in the biomedical field, the ORI produced a report in conjunction with the Gallup Organization. Scientists in the survey reported the conditions they observed surrounding research misconduct, including a competitive environment, funding pressure, “publish or perish,” and career advancement. Research shows that the number of PhDs in biomedical research is rising while the number of corresponding faculty positions falls. Combined with declining grant application success rates, this phenomenon may contribute to hypercompetitive research environments. Most universities stress researchers’ ability to bring in federal funding and place importance on publication when awarding tenure. The combination of these factors may lead researchers to sacrifice their integrity to achieve their goals.
Another source of understanding motivations is the reasoning respondents give to justify or explain their actions. The most common explanation is a lack of understanding of proper citation. Some respondents claimed others were responsible for the plagiarism or that they were bound by time constraints. While some justifications for plagiarism are unfounded, differences in teaching and citation standards between United States and international institutions pose a substantiated, and remediable, explanation. Researchers in the laboratory of Carlo Croce, a well-known cancer researcher, cited a lack of adequate training and supervision in response to allegations of plagiarism and falsification. One researcher claimed never to have received training in what constitutes plagiarism during her education in the United States or in her home country of Italy. The NSF has noted that many researchers who plagiarized earned at least some of their degrees from international institutions. This may indicate that plagiarism sometimes occurs not through deceitful or negligent practice but as a byproduct of second-language writing.
Ultimately, these cases suggest that both federal agencies and institutions have failed to sufficiently educate researchers, indicating that blame for misunderstanding citation requirements should fall on the shortcomings of the research community rather than on individual researchers alone. This discussion of motivational forces can also benefit from discoveries made in other fields. Theories of the motivation to commit misconduct include differential association, low expectations of success, and loss aversion. Differential association, a popular theory in explaining business fraud, highlights the role peers play in an actor’s decision-making: misconduct is learned through an individual’s environment rather than arising from a personal predisposition. A research culture prioritizing results and grant awards over integrity would therefore produce less ethical scientists. Researchers’ perceptions of unfairness in grant awards may also lower ethical barriers to committing research misconduct. Thirty-nine percent of subjects in NSF plagiarism cases had never received a grant despite submitting numerous proposals; if these researchers perceive the selection system as biased toward certain kinds of proposals, they may feel justified in engaging in research misconduct. Finally, loss aversion may explain why the NSF sees more plagiarism by faculty than by students. People are more likely to take risks to avoid losses than to secure gains. A professor trying to make tenure may be more willing to take a risk, such as plagiarizing part of a grant application, than a postdoctoral researcher trying to find a faculty position. Both researchers have the same stakes, a faculty position, but due to loss aversion, the potential to lose something has a greater psychological impact than the potential to gain the same thing.
It is important to note that these theories do not serve as excuses for researchers to commit misconduct, but rather as insights into why research misconduct occurs.
II. MODERN PLAGIARISM DETECTION
A. PLAGIARISM SOFTWARE
The widespread availability and use of plagiarism detection software has transformed the ability to identify plagiarism, allowing for a twofold discovery: the breadth of plagiarism’s occurrence and the depth of individual cases. The NSF’s proactive review, which ran proposals submitted in FY 2011 through plagiarism software, revealed a 1–1.5% rate of plagiarism among eight thousand funded NSF proposals. Audits of this scale indicate that plagiarism is occurring at a rate that cannot be addressed solely at the regulatory level. Plagiarism software also quantifies copied text, allowing investigators to determine how many lines have been plagiarized; this quantitative analysis enables agencies to prioritize cases involving substantial amounts of plagiarism.
In the past, plagiarism software was used predominantly by professors to review student papers. The first instance of algorithmic detection of duplication was eTBLAST and its Déjà vu database. Now defunct, eTBLAST was originally created to help researchers find relevant literature by checking submitted text against publications in Medline and ranking the available literature by similarity. Other functions, such as finding applicable journals and expert reviewers, allowed researchers to interface efficiently with Medline. A later study applied eTBLAST’s capabilities to detecting plagiarized material and entered allegedly plagiarized publications into the Déjà vu database. The study’s results indicated that duplicated publications were far more extensive than previously reported and that their occurrence posed a significant issue for research integrity. eTBLAST’s functions have since been absorbed by other widely available plagiarism detection software, but it remains an important initiative in the understanding of plagiarism.
Cases reported by the NSF indicate that some universities implement a plagiarism software review process as a sanction against respondents. For example, one respondent was required to “submit plagiarism detection software results for all proposals before submission.” The NSF and most reputable institutions use iThenticate, a plagiarism detection service for academics that checks documents against an extensive content database. Six of the nine institutions associated with an ORI plagiarism case make iThenticate available to students and faculty involved in research. These time-consuming cases could have been avoided had the researchers run their work through the software before submitting it for funding or publication; in cases where the respondent acted recklessly or did not understand what constitutes plagiarism, the software would have highlighted the unacceptable copied text. The rate of plagiarism findings made by the ORI has not increased in the past decade compared to previous decades, and only two of the eleven cases reported by the ORI mention using plagiarism software. In those cases, the software was used by the publisher or institution to substantiate allegations rather than to detect plagiarism outright. In contrast, the NSF uses plagiarism detection software both to identify and to substantiate allegations of plagiarism. The burden of detecting and investigating plagiarism otherwise remains on research institutions and publishing journals. Many journals use plagiarism detection software: the Journal of Materials Science uses CrossCheck by iThenticate, and Nature Portfolio is a member of Similarity Check, a service through iThenticate. Screening manuscripts before publication can prevent journals from publishing plagiarized work but does not prevent researchers from committing plagiarism.
If more universities adopted stricter policies requiring proposals and manuscripts to be run through plagiarism detection software before submission, researchers would be made aware of duplicated text.
B. SHORTCOMINGS AND POTENTIAL NEGATIVE EFFECTS
Plagiarism software has made the detection of copied text easier, but barriers to eliminating plagiarism remain. The software is not a comprehensive detection method: authors can circumvent it through minimal rewording, and increased automation allows malicious acts of plagiarism to go undetected. Although able to quantify lines of copied text, the software does not yet detect stolen ideas or processes when the wording is altered. Nor does it check against unpublished work, as when a peer reviewer plagiarizes a manuscript under review. Plagiarism software may therefore be a solution to the most blatant cases of plagiarism, but it does not eliminate stolen content.
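The circumvention problem can be illustrated with a minimal sketch of word n-gram overlap scoring, a common basis for text-matching tools. This is an illustrative assumption, not the method used by any commercial product; the matching algorithms behind tools such as iThenticate are proprietary and more sophisticated.

```python
# Minimal sketch of word-trigram overlap scoring (an illustrative
# approximation of how text-matching tools flag copied text; commercial
# detection software uses proprietary, more sophisticated algorithms).

def trigrams(text):
    """Return the set of word trigrams in the text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def similarity(a, b):
    """Jaccard similarity between the two texts' trigram sets (0.0-1.0)."""
    ga, gb = trigrams(a), trigrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

source = ("the appropriation of another person's ideas processes "
          "results or words without giving appropriate credit")
verbatim = source  # an exact copy
reworded = ("taking someone else's ideas processes results or "
            "words without giving them proper credit")

print(similarity(source, verbatim))   # 1.0: verbatim copying is flagged
print(similarity(source, reworded))   # far below 1.0: light rewording
                                      # sharply reduces the match score
```

Because the score depends on exact word sequences, substituting synonyms or reordering phrases drives it down even though the underlying idea is still appropriated, which is why reworded text and stolen ideas evade this style of detection.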
Rather than relying on technological advancements to solve problems created by increased automation, experts in the field have proposed using human-generated qualitative assessments and cooperative initiatives to equip journals with tools to combat misconduct. These recommendations were posed to address paper mills but can be extended to plagiarism. The previously suggested solution of building better plagiarism detection software to keep pace with advancing text generation technology would, in practice, produce an arms race between those attempting to exploit the proliferation of online journals and those attempting to regulate it. Online platforms like PubPeer publicly share discussions of the scientific literature; the site has exposed low-quality research by allowing members of the scientific community to post concerns. Increased investment in resources like the STM Integrity Hub allows journals to discuss best practices for publishing quality research. Efforts to decrease plagiarism are most effective when they address multiple facets of the problem, and both solutions address the inability of current plagiarism detection software to identify uncredited content that has been reworded.
III. EFFECTIVENESS OF RCR TRAINING
A. INSTITUTIONAL REQUIREMENTS FROM THE ORI
The PHS requires institutions to create environments of responsible research conduct through RCR training, to prevent research misconduct, and to take immediate action against potential misconduct. RCR training is predominantly given to students beginning their research careers and involves sessions on proper attribution and other aspects of responsible conduct. Institutions must file an annual report with the ORI to ensure compliance with this policy, although it is unclear to what extent compliance is tracked and assessed.
B. INSTITUTIONAL REQUIREMENTS FROM THE NSF
The America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science (America COMPETES) Act of 2007 was intended to keep the United States on pace with international standards of research. Section 7009 of the Act establishes an RCR training requirement for all grantees of federal funding through the NSF, which the NSF put into effect on January 4, 2010. The requirement applies to “undergraduate students, graduate students, and post-doctoral researchers participating in the proposed research project.” Although it is important to educate those working on current research and the next generation of researchers, faculty account for eighty-two percent of findings of plagiarism. The NSF also requires grantee institutions to designate compliance personnel and verify student compliance with the training. The NSF does not currently provide guidelines or templates, but institutional examples are posted on its website.
Improvements on both educational and regulatory fronts would decrease the extent of plagiarism. In 2013, the NSF conducted a review of institutional responses to the RCR requirement. Its findings indicated that, before the NSF made contact, approximately one-fourth of the surveyed universities had no RCR training program in place. Between the completion of the survey in 2013 and the publication of the report in 2017, most of the noncompliant universities created an RCR program, yielding a ninety-two percent compliance rate. The first implication of this study and subsequent report is the apparent lack of RCR training at universities and institutions receiving NSF funding: the NSF surveyed a sample of only 53 of the roughly 1,800 universities receiving federal funding, and applying the observed noncompliance rate to the rest suggests that on the order of four hundred universities could be noncompliant. The second implication is that the NSF should contact the remaining 1,747 universities to improve compliance with RCR training. This is a simple step that could produce a measurable increase in the percentage of compliant universities.
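The extrapolation above can be reproduced arithmetically. This is an illustrative back-of-the-envelope calculation only; the one-fourth figure is approximate, so the result is an order-of-magnitude estimate rather than an exact count.

```python
surveyed = 53            # universities contacted in the NSF's 2013 review
total = 1800             # approximate universities receiving federal funding
noncompliant_rate = 1/4  # "approximately one-fourth" of the sample lacked RCR training

uncontacted = total - surveyed            # 1,747 universities never contacted
estimate = uncontacted * noncompliant_rate
print(uncontacted, round(estimate))       # ~437: on the order of four hundred
```

Because the survey covered under three percent of funded institutions, the estimate carries substantial uncertainty, which is itself an argument for contacting the remaining universities.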
IV. DECLINE OF PUBLIC TRUST IN SCIENCE
The scientific community faces a global crisis of trust, and public trust is intrinsic to science’s ability to benefit society. Responses to this crisis must rest on an understanding of its many causes: a combination of politicization and polarization, information overload, disinformation campaigns, and the expansion of public access to the scientific process. Although increased public involvement in science can bridge the divide between scientists and other members of the community, the availability of research before it has been vetted by the scientific community furthers misinformation. Research misconduct contributes to this growing distrust. Plagiarism weakens scientific credibility, and people are less likely to believe scientific research when they perceive deception in the institutions producing it.
One method of restoring trust in science is to foster connection between science and the communities it serves. The effects of increased transparency about plagiarism, however, may be mixed. Proving to the public that research misconduct is adequately addressed may dispel perceptions of science as underregulated; alternatively, making a public display of plagiarists could reinforce negative narratives by portraying scientists as not working for the benefit of society. Regardless, preventing plagiarism is an important initiative for rebuilding this trust. By promoting integrity, the scientific community can regain credibility with funders and the public.
V. ELIMINATING PLAGIARISM
A. CONGRESSIONAL EFFORTS
The 2022 update to the America COMPETES Act addresses two current gaps in the response to research misconduct: the absence of an RCR requirement for faculty and the absence of funded research on research misconduct itself. The primary focus of the CHIPS and Science Act of 2022 is to fund technological research and promote semiconductor manufacturing in the United States. However, section 10335 contains a renewed effort for NSF grants supporting the institutional investigation of research misconduct, enabling the NSF to fund and accumulate a greater body of knowledge on the subject. Funded research on the causes of and solutions to research misconduct is a step toward decreasing instances of plagiarism. For the discussion of plagiarism cases, the paramount provision of the bill is its amendment to section 7009 of the America COMPETES Act: the CHIPS Act added faculty and “other senior personnel” to the individuals required to complete RCR training, and it added a mentorship requirement. These additions address the aforementioned issues and follow the NSF recommendations discussed below.
B. NSF RECOMMENDATIONS
The NSF OIG recommends a variety of approaches to addressing plagiarism at the institutional level, which would help resolve the “tip of the iceberg” issue. These strategies include supporting inexperienced grant writers, strengthening an institutional culture of integrity, extending RCR training requirements to faculty, and modifying document submission practices. Mentorship programs pair inexperienced grant writers with successful researchers to learn the standards and techniques for drafting better proposals. The NSF believes universities play an important role in building a community of ethical researchers: by establishing norms that promote integrity in research, institutions can decrease rates of misconduct allegations.
C. CREATING EFFECTIVE RCR PROGRAMS
Unlike other forms of research misconduct, plagiarism shows a measurable decrease in response to simple interventions such as required RCR training. Almost seventy percent of researchers found to have plagiarized cite a lack of knowledge of adequate citation practices. These statistics demonstrate the need for effective RCR training courses, and improvements can be made both in NSF guidelines and in institutional responses to them. Rather than prescribing RCR training for faculty only after a finding of research misconduct has been made, universities should require all faculty to complete training as part of ongoing learning throughout their careers. This approach may foster integrity in the research community as a whole instead of focusing only on cases of misconduct. Even if effective RCR training is not a simple solution to a complex issue, it will eliminate respondents’ ability to claim lack of knowledge as an explanation for plagiarism.
Plagiarism software not only has a deterrent effect; it can also serve as a tool for teaching proper citation. In cases of second-language writing or unfamiliarity with appropriate citation standards, the software can highlight discrepancies and provide a teaching moment. RCR training can bolster antiplagiarism instruction with a section in which the instructor demonstrates the use of plagiarism software. This may be more beneficial still if researchers taking the course can submit their own work to the software during the training and receive feedback on its originality. By relying on an objective tool, RCR training can address potential cultural differences tactfully.
D. TAILORING SANCTIONS TO DECREASE PLAGIARISM
Rather than focusing on the extent of the plagiarism and whether the respondent had a pattern of similar behavior, sanctions might be imposed based on the environmental factors outside the respondent’s control and the respondent’s ability to mitigate them. For example, a principal investigator may feel pressured by approaching deadlines and other faculty responsibilities to cut corners and plagiarize the background section of a grant application. In this hypothetical case, the researcher could have mitigated some of the external factors, such as by prioritizing the grant application well ahead of the deadline, but could not control an academic system that depends on attracting funding.
VI. CONCLUSION
Plagiarism is a recurrent issue within academic research, and discrepancies in the approaches and attitudes of the two primary federal agencies regulating it have grown considerably in recent years. Current cases may not reflect the full extent of plagiarism in scientific research, but plagiarism detection software offers one mechanism for catching straightforward cases. RCR training, which has been shown to be effective in reducing plagiarism but which some institutions apparently do not provide, could also remedy the lack of knowledge regarding proper citation methods. Most significantly, public distrust in science makes research misconduct an important problem for the scientific community, one that must be continually addressed through collaborative efforts.

