## Top Ten Reasons To Not Share Your Code (and why you should anyway)

**April 1, 2013**

**Randall J. LeVeque**

*There is no . . . mathematician so expert in his science, as to place entire confidence in any truth immediately upon his discovery of it. . . . Every time he runs over his proofs, his confidence encreases; but still more by the approbation of his friends; and is raised to its utmost perfection by the universal assent and applauses of the learned world.*

---David Hume, 1739\*

I am an advocate of sharing the computer code used to produce tables or figures appearing in mathematical and scientific publications, particularly when the results produced by the code are an integral part of the research being presented. I'm not alone, and in fact the number of people thinking this way seems to be rapidly increasing; see, for example, [1–3, 6–8, 10].

But there is still much resistance to this idea, and in the past several years I have heard many of the same arguments repeated over and over. So I thought it might be useful to write down some of the arguments, along with counter-arguments that may be worth considering.

In this article I am thinking mostly of relatively small-scale codes of the sort that might be developed to test a new algorithmic idea or verify that a new method performs as claimed, the sort of codes that might accompany papers in many SIAM journals. It can be at least as important to share and archive large-scale simulation codes that are used as tools to do science or make policy decisions, but the issues are somewhat different and not all the arguments that follow apply directly. However, as computational mathematics becomes increasingly important outside the ivory tower because of these large simulation codes, it is also worth remembering that the way we present our work can play a role in the ability of other scientists and engineers to do credible and reliable work that may have life-or-death consequences. Reproducibility is a cornerstone of the scientific method, and sharing the computer code used to reach the conclusions of a paper is often the easiest way to ensure that all the details needed to reproduce the results have been provided.

This article grew out of a talk with the same title that I gave in a minisymposium, Verifiable, Reproducible Research and Computational Science, at the 2011 SIAM CSE meeting in Reno, organized by Jarrod Millman. (Slides from my talk and others are available at http://jarrodmillman.com/events/siam2011.html.)

**An Alternative Universe**

Before discussing computer code, I'd like you to join me in a thought experiment. Suppose we lived in a universe where the standards for publication of mathematical theorems were quite different: Papers would present theorems without proofs, and readers would simply be expected to believe authors who state that their theorems have been proved. (In fact, our own universe was once somewhat like this, but fortunately the idea of writing detailed proofs grew up along with the development of scientific journals. See, for example, Chapter 8 in [9] for a historical discussion of openness in science; I highly recommend the rest of this book as well.)

In this alternative universe, the reputation of the author would play a much larger role in deciding whether a paper containing a theorem could be published. Do we trust the author to have done a good job on the crucial part of the research not shown in the paper? Do we trust the theorem enough to use it in our own work in spite of not seeing the proof? This might be troubling in several respects. However, there are many advantages to not requiring carefully written proofs (in particular, that we don't have to bother writing them up for our own papers, or referee those written by others), and so the system goes on for many years.

Eventually, some agitators might come along and suggest that it would be better if mathematical papers contained proofs. Many arguments would be put forward against the idea. Here are some of them (and, yes, of course I hope you will see how similar they are to arguments against publishing code, mutatis mutandis, and will come up with your own counter-arguments):

*1. The proof is too ugly to show anyone else.* It would be too much work to rewrite it neatly so that others could read it. Anyway, it's just a one-off proof for this particular theorem; my intention is not that others will see it or use the ideas for proving other theorems. My time is much better spent proving another result and publishing more papers rather than putting more effort into this theorem, which I've already proved.

*2. I didn't work out all the details.* Some tricky cases I didn't want to deal with, but the proof works fine for most cases, such as the ones I used in the examples in the paper. (Well, actually, I discovered that some cases don't work, but they will probably never arise in practice.)

*3. I didn't actually prove the theorem---my student did.* And the student has since graduated, moved to Wall Street, and thrown away the proof, because of course dissertations also need not include proofs. But the student was very good, so I'm sure the proof was correct.

*4. Giving the proof to my competitors would be unfair to me.* It took years to prove this theorem, and the same idea can be used to prove other theorems. I should be able to publish at least five more papers before sharing the proof. If I share it now, my competitors will be able to use the ideas in it without having to do any work, and perhaps without even giving me credit, since they won't have to reveal their proof technique in their papers.

*5. The proof is valuable intellectual property.* The ideas in this proof are so great that I might be able to commercialize them someday, so I'd be crazy to give them away.

*6. Including proofs would make math papers much longer.* Journals wouldn't want to publish them, and who would want to read them?

*7. Referees would never agree to check proofs.* It would be too hard to determine the correctness of long proofs, and finding referees would become impossible. It's already hard to find enough good referees and get them to submit reviews in finite time. Requiring them to certify the correctness of proofs would bring the whole mathematical publishing business crashing down.

*8. The proof uses sophisticated mathematical machinery that most readers/referees don't know.* Their wetware cannot fully execute the proof, so what's the point in making it available to them?

*9. My proof invokes other theorems with unpublished (proprietary) proofs.* So it won't help to publish my proof---readers still will not be able to fully verify its correctness.

*10. Readers who have access to my proof will want user support.* People who can't figure out all the details will send e-mail requesting that I help them understand it, and asking how to modify the proof to prove their own theorems. I don't have time or staff to provide such support.

**Back to the Real World**

Of course, sharing code and publishing proofs are different activities. So let's return to the real world and examine some of the arguments in more detail.

*It's just a research code, not software designed for others to use.* General-purpose software designed to be user-friendly obviously differs from research code developed to test an idea and support a publication. However, most people recognize this difference and do not expect every code found on the web to come with user support. Nor do people expect every code found on the web to be wonderfully well written and documented. The more you clean it up, the better, but people publish far more embarrassing things on the web than ugly code, so perhaps it's best to get over this hangup [2]. Whatever state it is in, the code is an important part of the scientific record and often contains a wealth of details that do not appear in the paper, no matter how well the authors describe the method used. Parameter choices or implementation details are often crucial, and the ability to inspect the code, if necessary, can greatly facilitate efforts of other researchers to confirm the results or to adapt the methods presented to their own research problems.

Moreover, I believe that it is actually extremely valuable to the author to clean up any code used to obtain published results to the point that it is not an embarrassment to display it to others. All too often, I have found bugs in code when going through this process, and I suspect that I am not alone. Almost everyone who has published a theorem and its proof has found that the process of writing up the proof cleanly enough for publication uncovers subtle issues that must be dealt with, and perhaps even major errors in the original working proof. Writing research code is often no easier, so why should we expect to do it right the first time? It's much better for the author to find and fix these bugs before submitting the paper for publication than to have someone else rightfully question the results later.

*It's forbidden to publish proprietary code.* It is often true that research codes are based on commercial software or proprietary code that cannot be shared for various reasons. However, it is also often true that the part of the code that relates directly to the new research being published has been written by the authors of the paper, and they are free to share this much at least. This is also the part of the code that is generally of most interest to referees or readers who want to understand the ideas or implementation described in the paper, or to obtain details not included in it. The ability to execute the full code and replicate exactly the results in the paper is often of much less interest than the opportunity to examine the most relevant parts of the code.

Some employers may not allow employees to share any code they write. However, if authors are allowed to publish a piece of research in the open literature, then in my view they should be allowed to publish the parts of the code that are essential to the research. After all, employers cannot forbid authors to publish proofs along with their theorems---referees would not put up with it. A change in expectations may lead to a change in what's allowed. Moreover, publishing code can take various forms. Some employers may forbid sharing executables or source code in electronic form, but impose far fewer restrictions on publishing an excerpt of the code (or even the entirety) in a pdf file. It is worth reiterating that, for many readers or referees, being able to inspect the relevant part of the code is often more valuable than being able to run the code.

If you do publish code, in a paper or on the web, it is worth thinking about the type of copyright or licensing agreement you attach to the code. Your choice may affect the ability of others to reuse your code and the extent to which they must give you credit or propagate your license to derivative works [12].

*The code may run only on certain systems today, and nowhere tomorrow.* Even apart from the question of proprietary software, many codes have certain hardware or software dependencies that may make them impossible for the average reader to run---perhaps a code runs only on a supercomputer or requires a graphics package that's available only on certain operating systems. Moreover, even if everyone can run it today, there is no guarantee that it will run on computers of the future, or with newer versions of operating systems, compilers, graphics packages, etc. In this way a code is quite different from a proof. So what is the value of archiving code? As in the case of proprietary software dependencies, I would argue that being able to examine code is often extremely valuable even if it cannot be run, and it is critical in making research independently reproducible even if the tables or plots in the paper can't be reproduced with the push of a button.

Of course, authors should attempt whenever possible to make it easy to run their code. From a purely selfish standpoint, any effort put into cleaning up code so that it does reproduce all the plots in the paper with a push of the button often pays off for the author down the road---when referees ask for a revision of the way things are plotted, when the author picks up the research project again years later, or, in the worst case, when results in the paper come into question and the co-author who wrote the code has graduated or retired and is no longer available to explain where they actually came from. To minimize difficulties associated with software dependencies and versions, authors might consider using such techniques as a virtual machine to archive the full operating system along with the code [5], or a code-hosting site that simplifies the process of running an archived code without downloading or installing software [4].
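The push-button workflow described above can be sketched as a small driver script. This is a minimal illustration, not code from any particular paper: the file names, parameter choices, and the stand-in computation are all hypothetical. The point is that a single entry point regenerates every result and archives the exact parameters used, so the provenance of each number survives long after the author has moved on.

```python
import json
import pathlib

# Hypothetical parameter choices that would otherwise live only in the code
# (or worse, only in a co-author's memory).
PARAMS = {"grid_points": 200, "cfl": 0.9, "final_time": 1.0}

def run_experiment(params):
    # Stand-in for the real computation; returns the entries for one table.
    dx = 1.0 / params["grid_points"]
    return {"dx": dx, "dt": params["cfl"] * dx}

def main(outdir="results"):
    out = pathlib.Path(outdir)
    out.mkdir(exist_ok=True)
    # Archive the exact parameters alongside the output, so a reader
    # (or the author, years later) can see where each number came from.
    (out / "params.json").write_text(json.dumps(PARAMS, indent=2))
    results = run_experiment(PARAMS)
    (out / "table1.json").write_text(json.dumps(results, indent=2))
    return results

if __name__ == "__main__":
    print(main())
```

With the real computation substituted for `run_experiment`, running this one script would rebuild every table from scratch, which is exactly what makes a referee's request to rerun with different parameters a five-minute job rather than an archaeology project.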

*Code is valuable intellectual property.* It is true that some research groups spend years developing a code base to solve a particular class of scientific problems; their main interest is in "doing science" with these codes and publishing new results obtained from simulations. Expecting such researchers to freely share their full code might be seen as the computational equivalent of requiring experimentalists publishing a result not only to describe their methods in detail, but also to welcome any reader into their laboratory to use their experimental apparatus. This concern should be respected when advocating reproducibility in computational science, and I don't claim to have a good solution for all such cases.

For many research codes developed by applied mathematicians, however, the goal is to introduce and test new computational methods in the hope that others will use them (and cite their papers). For such codes I see little to be gained by not sharing. The easier it is for readers to understand the details and to implement the method (or even borrow code), the more likely they are to adopt the method and cite the paper. Some people worry that they will not receive proper credit from those who adapt code to their own research. But if everyone were expected to share code in publications, it would be much easier to see what code has been used, and to compare it to the code archived with earlier publications. Citing the original source would then be easy and would become standard operating procedure, leading to more citations for the original author. Readers of mathematics papers can judge for themselves the originality of the ideas in a published proof, and if code development were equally transparent, those developing the original algorithms and code would ultimately receive more credit, not less.

Today, most mathematicians find the idea of publishing a theorem without its proof laughable, even though many great mathematicians of the past apparently found it quite natural. Mathematics has since matured in healthy ways, and it seems inevitable that computational mathematics will follow a similar path, no matter how inconvenient it may seem. I sense growing concern among young people in particular about the way we've been doing things and the difficulty of understanding or building on earlier work. Some funding agencies and journals now require sharing code that is used to obtain published results (see the Science guidelines for authors [11], for example). SIAM journals are not currently contemplating such a requirement, but the capability is now available for accepting and publishing unrefereed supplementary materials (including code) in conjunction with papers for some SIAM journals (see "SIAM Journals Introduce Supplementary Materials," *SIAM News*, March 2013). I believe there is much to be gained, for authors as well as readers and the broader scientific community, from taking advantage of this capability and rethinking the way we present our work. We can all help our field mature by making the effort to share the code that supports our research.

**References**

[1] K.A. Baggerly and D.A. Barry, *Reproducible research*, 2011; http://magazine.amstat.org/blog/2011/01/01/scipolicyjan11/.

[2] N. Barnes, *Publish your computer code: It is good enough*, Nature, 467 (2010), 753; http://www.nature.com/news/2010/101013/full/467753a.html.

[3] S. Fomel and J.F. Claerbout, *Guest editors' introduction: Reproducible research*, Comput. Sci. Eng., 11 (2009), 5–7; http://csdl2.computer.org/comp/mags/cs/2009/01/mcs2009010005.pdf.

[4] J. Freire, P. Bonnet, and D. Shasha, *Exploring the coming repositories of reproducible experiments: Challenges and opportunities*, Proc. VLDB Endowment, 4 (2011), 1494–1497.

[5] B. Howe, *Virtual appliances, cloud computing, and reproducible research*, Comput. Sci. Eng., 14 (2012), 36–41.

[6] J. Kovačević, *How to encourage and publish reproducible research*, Proc. IEEE Int. Conf. Acoust., Speech, and Signal Proc., IV (2007), 1273–1276.

[7] R.J. LeVeque, I.M. Mitchell, and V. Stodden, *Reproducible research for scientific computing: Tools and strategies for changing the culture*, Comput. Sci. Eng., 14 (2012), 13–17.

[8] J. Mesirov, *Accessible reproducible research*, Science, 327 (2010), 415–416.

[9] M. Nielsen, *Reinventing Discovery: The New Era of Networked Science*, Princeton University Press, Princeton and Oxford, 2012.

[10] R.D. Peng, *Reproducible research in computational science*, Science, 334 (2011), 1226–1227.

[11] Science, *General information for authors: Data and materials availability*, 2012; http://www.sciencemag.org/site/feature/contribinfo/prep/gen_info.xhtml#dataavail.

[12] V.C. Stodden, *The legal framework for reproducible scientific research: Licensing and copyright*, Comput. Sci. Eng., 11 (2009), 35–40.

\**A Treatise of Human Nature*, http://www.gutenberg.org/files/4705/4705-h/4705-h.htm.

*Randall J. LeVeque (rjl@uw.edu) is a professor in the Department of Applied Mathematics at the University of Washington in Seattle. He is chair of SIAM's Committee on Supplementary Materials.*