Automating Transport with Univalent E-Graphs

How cool would it be if we could take entire verified libraries in an interactive theorem prover defined over existing datatypes and, with the push of a button (and maybe a tiny bit of user guidance), port those libraries’ functions and proofs to use our own datatypes? This is the problem that DEVOID, a part of the PUMPKIN PATCH plugin suite, chips away at—starting with a small and confined use case.

I’ve recently extended DEVOID (in a branch) to chip away more at this problem, and the more I chip away at it, the clearer the picture becomes. I think this kind of automation is pretty close (say, 5-10 years away), but we have some interesting problems to solve first.

In this post, I’ll talk about one of those problems: efficiently automating transport across type equivalences. A type equivalence between two types A and B is a pair of functions:

f : A -> B
g : B -> A

that are mutual inverses:

section: forall (a : A), g (f a) = a.
retraction: forall (b : B), f (g b) = b.
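Packaged as a single Coq definition, an equivalence might look something like this (a sketch with illustrative names, not DEVOID's actual encoding):

```coq
(* A type equivalence packaged as a record. Names are illustrative. *)
Record equiv (A B : Type) := MkEquiv {
  f : A -> B;
  g : B -> A;
  section : forall (a : A), g (f a) = a;
  retraction : forall (b : B), f (g b) = b
}.
```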

Transport, at a very high level, means taking something defined over A and spitting out something analogous defined over B. Automating transport is key to good reuse and even to related problems like proof repair. I have some ideas for how to do this that I’d like to share.

If you’re interested in any of these problems, please talk to me, because a lot of this is work in progress.

Transport in DEVOID

In homotopy type theory, transport is a function that lets you derive analogous proofs about B from proofs about A, given an equivalence between A and B.
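For intuition, the simplest version of transport, along a propositional equality rather than an equivalence, is definable directly in Coq without axioms; transport along an equivalence between types generalizes this, but in general requires univalence:

```coq
(* Transport along a propositional equality: from a proof that
   P holds of x, derive a proof that P holds of y, given x = y.
   The univalent version replaces x = y with an equivalence. *)
Definition transport {A : Type} (P : A -> Type) {x y : A}
  (H : x = y) : P x -> P y :=
  match H with
  | eq_refl => fun p => p
  end.
```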

Since we can’t define transport as a function in Coq without any axioms (in general, at least, but there’s some great work on how close we can get), in DEVOID, transport is a program transformation implemented in OCaml that takes in Coq proofs about A and spits out Coq proofs about B. This has the extra benefit of producing fast functions and small proof terms, and allowing us to get rid of A entirely after porting our functions and proofs if we’d like.

The caveat right now is that while in homotopy type theory, this transport function works for any equivalence, in DEVOID, it’s so far implemented for just a few kinds of equivalences. I’m working on changing that, but that’s a topic for a different day.

Push-Button Transport

Even if you have transport defined as a function in your language, like in homotopy type theory, it’s still not fully automatic. You have to explicitly apply this transport function in order to derive your new proofs (think of this like rewriting along equalities in Coq, but fancier). This can be really annoying, especially when you want to port an entire library or something to that effect.

With DEVOID, we automate the most basic case, where the user just wants to change every A to a B (or similarly in the opposite direction). For example, one of my users recently came to me with a problem. He had a compiler from another language to Coq’s specification language Gallina, and his compiler generated types and functions that looked like this:

Module Generated.

  Definition input := (prod bool (prod nat bool)).

  Definition output := (prod nat bool).

  Definition op (r : (prod bool (prod nat bool))) : (prod nat bool) :=
    pair (fst (snd r)) (andb (fst r) (snd (snd r))).

End Generated.

He wanted to write some proofs about the generated functions. Unfortunately, the generated functions were really gross to look at. He wanted to write proofs about types that looked more like this:

Record input :=
  MkInput { firstBool : bool; numberI : nat; secondBool : bool }.

Record output :=
  MkOutput { numberO : nat; andBools : bool }.
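For concreteness, the equivalence between Generated.input and the record version corresponds to a pair of conversion functions like these (a sketch with hypothetical names, not DEVOID's generated code):

```coq
(* Hypothetical conversion functions witnessing the equivalence
   between Generated.input and the record version of input. *)
Definition to_input (r : Generated.input) : input :=
  MkInput (fst r) (fst (snd r)) (snd (snd r)).

Definition of_input (i : input) : Generated.input :=
  pair (firstBool i) (pair (numberI i) (secondBool i)).
```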

Using an extended branch of DEVOID, my user can now do this easily. All he has to do is call the Lift command:

Lift Generated.input input in Generated.op as op_1.
Lift Generated.output output in op_1 as op.

This proves the equivalences between Generated.input and input and between Generated.output and output, then writes functions for him over his records:

Definition op (r : input) : output :=
  {| numberO := numberI r;
     andBools := andb (firstBool r) (secondBool r) |}.

He can then write his proofs over these records:

Theorem and_spec_true_true :
  forall (r : input),
    firstBool r = true ->
    secondBool r = true ->
    andBools (op r) = true.
Proof.
  intros. destruct r as [f n s]. simpl in *. intuition.
Qed.

then lift those proofs back to get analogous proofs over his original specification:

Module GeneratedProofs.

  Theorem and_spec_true_true :
    forall (r : Generated.input),
      fst r = true ->
      snd (snd r) = true ->
      andb (fst (Generated.op r)) (snd (snd (Generated.op r))) = true.

End GeneratedProofs.

Not Quite Push-Button Transport

What I’ve shown you is pretty automatic, but you can already see a problem if you look closely. Take a look at Generated.op:

Definition op (r : (prod bool (prod nat bool))) : (prod nat bool) :=
    pair (fst (snd r)) (andb (fst r) (snd (snd r))).

We can think of the input r as having type Generated.input, or we can think of it as having type (prod bool Generated.output). Our procedure is so naive that it turns every A into a B. So it works fine if the user lifts in this order:

Lift Generated.input input in Generated.op as op_1.
Lift Generated.output output in op_1 as op.

but if the user lifts in the opposite order:

Lift Generated.output output in Generated.op as op_1.
Lift Generated.input input in op_1 as op.

we’re in trouble:

Definition op (r : bool * output) : output :=
  {| numberO := numberO (snd r);
     andBools := andb (fst r) (andBools (snd r)) |}.

The function that we get back is also correct, but it’s not the one we want.

So the user has to think hard about the order, and is still pretty limited in what is possible. If the user wants to lift only some occurrences of A, this is only possible in DEVOID right now with some hacky options.

Automatic Transport as Type-Directed Search

The better solution, in the long run, will be implementing automatic transport as type-directed search. So the user can just say:

Lift Generated.op as op : input -> output.

and DEVOID will look through a bunch of stored equivalences, and apply the program transformation along the correct ones to get the desired type.
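One way to picture this search, glossing over all of the dependent-type details: treat each stored equivalence as an edge between two types, and search for a path from the type we have to the type we want. Here is a toy sketch in Python (the real implementation would live in OCaml, and the function name and representation here are hypothetical):

```python
from collections import deque

def find_equivalence_path(equivs, source, target):
    """Breadth-first search over stored equivalences, viewed as
    undirected edges between types. Returns the list of equivalences
    to transport along, or None if the types are not connected."""
    frontier = deque([(source, [])])
    seen = {source}
    while frontier:
        ty, path = frontier.popleft()
        if ty == target:
            return path
        for (a, b) in equivs:
            # each equivalence can be applied in either direction
            for (src, dst) in ((a, b), (b, a)):
                if src == ty and dst not in seen:
                    seen.add(dst)
                    frontier.append((dst, path + [(src, dst)]))
    return None
```

A real implementation must also search inside type constructors (so that, for example, the Generated.output buried inside bool * Generated.output can be rewritten under the product), which is exactly where naive search starts to blow up.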

The solutions I’ve seen for this so far in other tools like the univalent parametricity framework use type classes. I think this is a good start, but I suspect that it won’t scale well when we have large libraries to port, and when we want to port those libraries along many different equivalences. Indeed, Nate Yazdani, who worked with me on DEVOID, noted that in his experience with the univalent parametricity framework, it was common for this type-directed search to spin out of control.

Univalent E-Graphs

This is the cool part.

Automatic transport isn’t special. Equality up to transport is just an equivalence relation, and automatic transport just searches for proofs of that relation and rewrites along those proofs, either by directly applying transport (in homotopy type theory and univalent parametricity) or by transforming the term in a metalanguage (in DEVOID). In other words, automatic transport is a rewrite system.

There are these beautiful data structures called e-graphs that are already well equipped for handling this efficiently and elegantly in non-dependently-typed languages. They are used, for example, inside of SMT solvers. They are basically designed to help deal with problems like the one we saw with op earlier, where there were multiple equivalences to choose from.
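To make the idea concrete, here is a toy e-graph over simple first-order terms in Python: a union-find over e-class ids, a hashcons table mapping canonical nodes to classes, and congruence closure so that merging two classes also merges the terms built from them. This is an illustrative sketch of the classic (non-dependent) structure, not a production implementation:

```python
# Toy e-graph: union-find + hashconsing + congruence closure.
class EGraph:
    def __init__(self):
        self.parent = {}   # union-find parent pointers over e-class ids
        self.table = {}    # canonical node -> e-class id (hashconsing)
        self.uses = {}     # e-class id -> nodes whose args mention it
        self.next_id = 0

    def find(self, c):
        # representative of c's e-class, with path compression
        root = c
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[c] != root:
            self.parent[c], c = root, self.parent[c]
        return root

    def add(self, op, *args):
        # canonicalize the arguments, then hashcons the node
        args = tuple(self.find(a) for a in args)
        node = (op,) + args
        if node in self.table:
            return self.find(self.table[node])
        cid = self.next_id
        self.next_id += 1
        self.parent[cid] = cid
        self.table[node] = cid
        for a in args:
            self.uses.setdefault(a, []).append(node)
        return cid

    def union(self, a, b):
        # merge the classes of a and b, then restore congruence
        a, b = self.find(a), self.find(b)
        if a == b:
            return
        self.parent[a] = b
        for node in self.uses.pop(a, []):
            cid = self.table.pop(node, None)
            if cid is None:
                continue  # stale entry; already re-canonicalized
            new_node = (node[0],) + tuple(self.find(x) for x in node[1:])
            if new_node in self.table:
                # two nodes became congruent: merge their classes
                self.union(cid, self.table[new_node])
            else:
                self.table[new_node] = cid
                for x in new_node[1:]:
                    self.uses.setdefault(x, []).append(new_node)
```

With x and y merged, f x and f y land in the same e-class automatically, and so do g (f x) and g (f y). That is exactly the behavior we would want for terms that are equal up to transport.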

Recent work extended these data structures to handle dependent types. The result was wonderful automation of equality proofs in the Lean theorem prover. The cost was introducing an axiom called UIP (uniqueness of identity proofs) to Lean, which is not an axiom in Coq and is incompatible with homotopy type theory (without any axioms, the notion of equality between dependent types is just not very useful for automation like this).

What if we were to implement this data structure for equality up to transport rather than heterogeneous equality as in Lean? In univalent type theories like homotopy type theory, I bet we could implement automatic transport this way, using what would essentially be an extremely powerful congruence tactic coupled with the transport function. In the univalent parametricity framework, I imagine we could get much more predictable and efficient type-directed transport when managing a lot of equivalences. And in DEVOID, I bet we could also get efficient type-directed search.

Thinking Big

The IAS memorial service spoke about Voevodsky’s vision of proof reuse, a world where:

“mathematicians around the world could collaborate by depositing proofs and constructions in the computer, and … it would be up to the computer to locate the equivalence between formulations and [to] transport the constructions from one context to another.”

I think we as a community are only 5-10 years away from realizing that vision if we really sit down and collaborate on these problems. The vision describes two tasks: locating equivalences, and transporting constructions along them. DEVOID already does both of these for some specific kinds of equivalences, and the second of these is automatic transport, the subject of this post.

I think this also hides the key to better proof repair. Proof repair is really nothing but proof reuse over time. And I bet that, in a dependently-typed language, you can think of any change as an equivalence between sigma types. The problem just becomes exposing a good interface around searching for and transporting along this equivalence (maybe using example-based search like the original PUMPKIN PATCH, maybe doing something different), getting rid of all references to the old type (which DEVOID can do), and then hopefully at some point converting everything back to a reasonable, maintainable set of tactics (my collaborator RanDair Porter has ongoing work on this).

So I think we’re also not too far from a world in which we can freely change our specifications and easily repair our proofs. And I think that type-directed automatic transport really holds the key to this as well.

Let’s Hack!

In summary, I think maybe the answer to easy proof reuse, awesome congruence tactics in univalent systems, automatic transport, and proof repair is sitting there right in front of us. What I want right now is a hackathon for univalent e-graphs. I want the hackathon to include, at a minimum:

  • Me and my coauthors (DEVOID)
  • Daniel Selsam and Leo de Moura (congruence closure in Lean)
  • Nicolas Tabareau, Eric Tanter, and Matthieu Sozeau (univalent parametricity in Coq)
  • Bas Spitters (interested in better congruence in Coq)
  • Your name here?

Let’s all just sit down and hack away at univalent e-graphs for a couple of weeks. I think the benefits for the interactive theorem proving community would be massive.

Thanks to Chandra and Max for initial thoughts about when e-graphs would be useful.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.


Proof Engineering Survey Paper: Q(ED) & A

Recently, I had the joy of writing a 120-page survey paper on proof engineering:

Talia Ringer, Karl Palmskog, Ilya Sergey, Milos Gligoric, and Zachary Tatlock. QED at Large: A Survey of Engineering of Formally Verified Software. Foundations and Trends in Programming Languages (FTPL), Vol. 5: No. 2-3, pp 102-281, September 2019.

This was a lot of work. It took two years of reading hundreds of papers, talking to dozens of experts, writing, playing with examples, soliciting feedback, and refining. The end result was a 120-page distillation of the state of the field so far and some promising future directions.

I’ll write more about what it was like to write this paper at some point. But for now, here’s a brief Q(ED) & A for interested readers. Because you’re probably wondering, “This is cool, but where do I find it, and how the heck do I read it? 120 pages? Really?”

Q: Where can I get a copy of this paper?

A: You can purchase a copy from the publisher here. If that is not accessible to you, but you would really like to read it, please contact us directly to ask about a personal copy. There will be a delay before we host our own copy online.

Q: Is this the authoritative source of what to cite?

A: No. I mean, it’s a really, really good entry point into the area. It’s a great place to start if you are looking to cite some work in this area. But it is also non-exhaustive. So, in addition to citing the survey and whatever work you find relevant in the survey, please also check the errata to make sure that you also cite the papers that we missed, search for papers that have surfaced since the survey, and check the papers that each paper cites to make sure that you get a complete view of the area.

Q: Do I really need to read 120 pages?

A: No, of course not. Then you would miss the 60 pages of citations! Don’t forget those!

OK but really. The reading guide in Section 1.4 says it best:

It is not always necessary to understand previous chapters in order to understand later chapters; readers should feel free to skip sections or chapters, or to consult later chapters or cited resources for more information.

We recommend reading the introduction, conclusion, and any sections that interest you. We also recommend scanning the errata briefly, especially for omitted citations and factual corrections. We are flattered if you want to sit down and read every single word, but we believe it is not necessary to do so in order to benefit from the paper.

Since we aim to address a broad audience, below are some specifics beyond that for particular kinds of readers. You can obviously read this however you want, but these are just some potentially helpful strategies. If you do not fit any of these profiles, feel free to contact us directly for advice on how to approach this monolith.

The Proof Engineer


You are a proof engineer working on a large proof development. Your main goal in reading this paper is to make your life easier.

We think Chapters 5, 6, and 7 are probably the meat of what will help you. You probably do not need much motivation for the problem, so Chapters 2 and 3 may not be as interesting to you as later chapters (unless you work in one of the domains from Chapter 3). Skimming Chapter 4 may help if you have a choice over which proof assistant to use, but you don’t need to get too lost in the details.

The New Researcher


You are about to embark on the exciting journey that is graduate school, or something like that. Really, you’re just looking for a cool project. You’re pretty open, but verification using interactive theorem provers sounds nice.

We think the future work sections sprinkled throughout the paper will be the most interesting to you. If I were advising a student in this situation, I would tell the student to start by reading Chapters 1 through 4, but only at a very high level, not getting lost in details. Then, search for the keywords “beyond” and “future”; read the future work sections these searches bring up, and think about whether those problems sound interesting. If they do, read the relevant sections, and also check the errata for those sections. In addition, read any sections you find interesting, and read the conclusion. Supplement with any resources from Section 1.4 that you find necessary to understand the sections that deal with the problems you find most interesting.

The Concerned Researcher


You already know what you want to work on, but you want a good idea of related work. You hope this will help make your life easier, and you also hope it will help you write your related work section and ensure that your work is novel.

We recommend focusing on the sections most relevant to your work. Then, if you are still interested, you may wish to read more. We also recommend checking the errata before citing any papers that we cite to make sure that you also cite the papers that we missed. We recommend searching for papers that have surfaced since the survey, and also checking the papers that each paper cites to make sure that you get a complete view of the area.

The Advisor


You are advising some students who are potentially interested in verification. You want a good overview of the field so that you can advise them well.

We recommend an end-to-end read, but focusing little on low-level details. Having a broad overview of the field is important, and the best way to get this will likely be to just dip your toes into every section. A more detailed read may help when you pick out a particular project. You may also want to refer your students to the paper and have them follow the advice for them in this guide.

The Tool Developer


You are developing or maintaining a proof engineering tool, or maybe even an interactive theorem prover. You want some ideas for how to improve your tool, or you just want to scope out the competition.

We—OK, I—have strong opinions about this one, since this is the kind of research I do. You’ll want to read Chapter 3 just to get an idea of who some of your users might be. Then, you’ll want to learn the concepts necessary to communicate with tool developers from other communities; for this, read Chapter 4, Section 5.1, and Section 7.1. The Coq developers, for example, could learn a lot from the pain points the Isabelle/HOL developers have faced; the selected chapters and sections can help you speak each other’s languages. Then, of course, you can read the sections that are most relevant to the tool you are developing. But again, approach this with an open mind, considering tools from many different proof assistants. And finally, make sure you check the errata for omissions, like the HOL definitional mechanisms.

The Entrepreneur


You are running a business or managing a team of engineers, and you think maybe verification is worth it. You’re not sure, though. You think maybe this paper will help.

We recommend reading the introduction in detail, as well as Chapter 3 (aside from the mathematics section) for an idea of some domains in which verification has already seen success. You can probably skip Chapter 4 and Chapter 5. You will likely want to skim Chapters 6 and 7, but at a high level, without getting lost on details; these may help you understand what is currently difficult, so you can do some cost-benefit analysis. The conclusion should also be helpful for this.

The Newcomer


You have just set foot on this Earth—in particular, in the world of proof engineering. You don’t really know what’s going on. Help? Oh god, this is 120 pages? A nap sounds better.

We are really happy that you found this survey! Don’t panic. You really don’t need to understand everything in this survey. We honestly recommend the same strategy we recommend to The Advisor (skim the whole thing), but really, really being unafraid to skip sections that are inaccessible. Check out the resources in Section 1.4 if you really want to understand more.

Q: I found a mistake or omission. Where do I report it?

A: We will add it to the errata! Please report it as a GitHub issue or pull request modifying the errata source file, or contact us directly.

Q: How do I cite this paper?

A: Please cite the following:

@article{QEDatLarge,
  url = {},
  year = {2019},
  volume = {5},
  journal = {Foundations and Trends® in Programming Languages},
  title = {QED at Large: A Survey of Engineering of Formally Verified Software},
  doi = {10.1561/2500000045},
  issn = {2325-1107},
  number = {2-3},
  pages = {102-281},
  author = {Talia Ringer and Karl Palmskog and Ilya Sergey and Milos Gligoric and Zachary Tatlock}
}

Q: Why did you write this paper?

A: Because we need new hobbies?

Also, though, the words “proof engineering” have been thrown around for a while, and we think it’s time to really define the field. We think this is a good start. We hope it will help people from many different backgrounds to get a sense of what proof engineering is, why it’s useful, and what the outstanding challenges are.

Thanks for reading our paper! Let us know what you think.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Social Processes and the Next 100 Years of Verification

The infamous Social Processes paper concludes that my research area is worthless. So maybe it’s unconventional to say this, but I love this paper.

Here’s the abstract, for the unacquainted:

It is argued that formal verifications of programs, no matter how obtained, will not play the same key role in the development of computer science and software engineering as proofs do in mathematics. Furthermore the absence of continuity, the inevitability of change, and the complexity of specification of significantly many real programs make the formal verification process difficult to justify and manage. It is felt that ease of formal verification should not dominate program language design.

The first time I read this paper as a first-year Ph.D. student, it made me feel incensed—the authors bring up so many legitimate concerns, but then instead of concluding that the concerns are surmountable with more work, they conclude that program verification is not worth pursuing as a research area. To me, all of the concerns they bring up seemed surmountable (granted, with 40 years of work to reflect on since the paper was written). This is why I ended up working on making proof assistants like Coq easier to use; this inspired my thesis.

So I like to think of this paper as “The Next 100 Years of Verification,” even though the authors hold the view that there should not be any of that work. In other words, I think this paper is fantastic with one caveat: All of its conclusions are wrong.

The concerns the authors have about verification are real, but they’re also surmountable. Some of them have been addressed already, some are currently being addressed, and some are viable future research directions.

I’m going to discuss three concerns the authors have that I find particularly interesting, in the light of progress that has been made in the last 40 years. This is only a small sample! If you’re looking for a fun project to make proof assistants more accessible and useful, this great paper from 1979 is full of ideas, some of which are still untouched.

Believing Theorems and Proofs

As the authors note, believing a theorem or proof is a social process. The authors then conclude that the same social processes do not apply to verification.

The standard counterclaim to this point from programming languages researchers is that the computer does the believing for you: Proof assistants like Coq have a small core kernel which checks the proofs. Obviously, there’s always something to trust, no matter how deep down you get, but in practice people do seem to trust proofs that are checked by computers. In programming languages especially, a formal development of some result in a proof assistant is often considered more trustworthy than a hand-written proof of the same result.

I think this standard response is a half-truth that dodges the authors’ concerns. In fact, the same social processes do apply to verification. And that is valuable. We just have a lot of work left to do to make that easier.

Formal developments rarely come without explanation: research papers, comments in the code, research talks, accompanying websites, and so on. It seems that formal developments add trust, but alone are not enough—humans still crave understanding of the developments themselves. So we read about them, we discuss them, we play with them in our own IDEs, we use them to derive new results. We just trust them that much more because the kernel has checked the result for us.

So how do we make this easier? One cool area to focus on is human readability of proofs. In this spirit, some proof assistants like Mizar stress human readability; the Formalized Mathematics journal consists entirely of Mizar proofs that are automatically translated into English. There are Mizar modes for many proof assistants, including Coq. Isabelle/HOL comes with a style guide which recommends using Isar, a proof language influenced by Mizar that also stresses readability.

Mizar-inspired languages and modes are just one way of improving readability. Style guides are common within certain frameworks or developments, such as the Coq Homotopy Type Theory library. There is also some preliminary work on making proofs readable post-hoc without enforcing a certain style, for example by translating Coq proofs into English. There is a lot of work left to do here, but it’s possible to do this work.

Proofs as Messages

There’s a beautiful quote about proofs in here:

The proof of a theorem is a message. A proof is not a beautiful abstract object with independent existence. No mathematician grasps a proof, sits back, and sighs happily at the knowledge that he can now be certain of the truth of his theorem. He runs out into the hall and looks for someone to listen to it.

I find this quote really interesting because some mathematicians claim to think of proofs as irrelevant: Once we have a proof, the theorem is true, so we can throw away the proof and believe the theorem.

In practice, I think mathematicians actually do use information from the proof itself. And I think the authors of this paper, perhaps inadvertently, are getting at this. A proof of a theorem in itself carries more information than just the theorem it proves; an elegant proof tells you more than just the truth value of the theorem.

In proof assistants like Coq, every proof corresponds to a term. While it can be useful to erase proofs as Coq does during extraction, and while proof irrelevance itself—the notion that any two proofs of a given proposition are equal—also has practical benefits, I think there is a lot of benefit to considering the actual proof terms themselves, especially for automated tooling. This is an area where computers can shine.

Enter proof reuse, an area that has been around since the 1990s. The key insight behind proof reuse is that we can write tools (languages, plugins, and so on) that can derive new proofs from existing proofs!

There are real-life analogues to a lot of work in proof reuse. Let’s consider inverting proofs. Pretty much anyone who has ever written a paper proof has written one that proceeds like this:

We show one direction; the opposite direction follows similarly.

This is a tactic. It’s a tactic that you expect the reader of your proof to have installed in her brain. And it’s a tactic that actually considers the proof object itself. To run this tactic, the reader reads the first half of your proof once again, this time with some variables swapped; this proves the opposite direction. And this is exactly what the inversion component of my tool PUMPKIN PATCH does; you can implement a toy version of this in my Magic tutorial plugin, along with toy versions of tactics for generalization of proofs, factoring of proofs into multiple proofs of intermediate lemmas, and reduction of complex proofs into simpler proofs.
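As a toy Coq rendering of this pattern (my example here, not PUMPKIN PATCH's output), the backward direction of a symmetric lemma can literally reuse the forward proof with the roles of its variables swapped:

```coq
(* "The opposite direction follows similarly": the backward
   direction reuses the forward proof with P and Q swapped. *)
Lemma and_comm_fwd : forall P Q : Prop, P /\ Q -> Q /\ P.
Proof.
  intros P Q H. destruct H as [p q]. split; assumption.
Qed.

Lemma and_comm_bwd : forall P Q : Prop, Q /\ P -> P /\ Q.
Proof.
  intros P Q. apply and_comm_fwd. (* same proof, roles swapped *)
Qed.
```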

So I think that in the real world, proofs are relevant, and that is fantastic. My takeaway from this is that we should consider what other information proofs carry, and use it to automatically extract more information. I think that computers can be a lot better at this than humans! But ultimately, the final information the computer spits out is for a human, and tooling should keep that in mind.

Changing Specifications

This is a minor concern in the scope of the Social Processes paper, but at one point the authors do note that real programs (and their specifications) change over time; verification is useless if it can’t handle these changes. Indeed, it would be, which is why there is a lot of work on addressing changes, either by making your formalization resilient to change ahead of time, or by using automatic tooling to adapt it after specifications change.

There’s some great work here at UW on design principles that can make proofs resilient to change; some of the design principles cross over directly from software engineering. Adam Chlipala (who, for those of you in the class I’m TAing, wrote the book you’re using) advocates for writing powerful domain-specific tactics; updating a proof development to a change in specification is then reduced to updating the automation (or, sometimes, to nothing at all, if your automation is powerful enough), which only has to be done once in a single location instead of over and over again in every proof.

But if you’re not perfect and can’t foresee every change, then it’s still OK, because proofs are programs. And that means we can automatically repair them like we do with programs. Actually, we’re really lucky here, because traditional program repair is hard precisely because we rarely have full specifications for programs, so it is difficult to tell when a patched program is correct—a famous paper recently showed that test cases alone are usually not enough, and patches generated by tooling that only considers test cases are often incorrect. But in the world of proofs, we know exactly when the patched proof is correct: It proves the theorem we want it to prove. We have the formal specification most program repair researchers would kill for. So proof repair is, in a sense, easier than other kinds of program repair.

Work on proof repair is in its infancy, but is my thesis area, so I hope to bring this into the mainstream. This is what PUMPKIN PATCH does; I also have some unreleased tooling I plan to integrate with PUMPKIN PATCH. I would love to see more work on it outside of my own tools. If you’re interested, definitely talk to me.

Coq Implementors Workshop

When I was a software engineer at Amazon, learning new APIs was easy. I always had experts around who understood those APIs, so if I couldn’t find the answer myself, I could always just ask.

Being a researcher is different. There are very few people who understand the APIs that I work with, and almost all of those people live in France.

Two weeks ago, I had the pleasure of going to the Coq Implementors Workshop in France to work with those people. It was a lot like being at a software company again, except with better food. In just one day, for example, I was able to update my plugin from Coq 8.6 to Coq 8.8, address two mysterious bugs that had been so confusing I’d left the comment “TODO ask in France” in my code, and eat a three-hour-long French lunch. (Everything was good, but surprisingly, I think my favorite thing was the terrine.)

Here are five things I learned about plugin development at the workshop.

Coq has a Gitter!

Maybe this was obvious, but Coq has a Gitter (kind of like Slack, but linked to Git). While you can probably answer most beginner Coq questions through a quick Google search, and while the coq-club and coqdev mailing lists are great places to ask general questions that you think will benefit a larger audience, the Coq Gitter is perfect for really specific questions about the internal Coq API.

If you’re a plugin developer, it’s pretty much indispensable. I got live support through Gitter when I updated my plugin from 8.6 to 8.8, and it saved me so much time. This means you don’t actually have to go to France to communicate with people who know the APIs you’re using.

Coq developers really want to help plugin developers.

The Coq developers want people developing plugins and contributing to a better Coq user experience. The Coq developers also expect to break plugins when they release new versions of Coq. With these things in mind, they are very willing to help developers adapt their plugins to new versions of Coq and understand the APIs that they are interacting with. If the internal API is confusing, feel free to file an issue on GitHub and link it to the API documentation meta-issue. The Coq developers would love for you to put your plugin in the Coq CI.

Basically, don’t be shy about being involved with the development community if you’re writing a plugin. The development community wants you to be a part of it. Right now, few enough plugin developers are involved that the Coq developers are not always sure of the best ways to help and to get people involved. If you have ideas, let me know over the next month, because we’re working on putting together a plugin development community.

Coq is not the toy language we often imagine it to be.

People often tell me that writing the kinds of plugins I write is hard. I usually dismiss it as “just writing a compiler.” I still think this is true. You may have a fancy type system, but when you write code that takes in a Gallina term and manipulates it and spits out another Gallina term, you really just have a compiler from Gallina to Gallina. And at the core of my code, there are really just a bunch of functions traversing the AST and updating the environment and constructing new terms. If you’ve ever written a compiler, you can do what I do. A lot of the hard work is already done for you (if you know where in the code to look).
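At its simplest, that kind of pass is just a recursive function over terms. Here's a toy sketch in OCaml, using a made-up miniature AST (not Coq's actual constr type) for a transformation that swaps one constant out for another:

```ocaml
(* A toy Gallina-like AST -- a stand-in for illustration, not Coq's
   actual term representation. *)
type term =
  | Rel of int                       (* de Bruijn variable *)
  | Const of string                  (* global constant *)
  | Lambda of string * term * term   (* fun (x : ty) => body *)
  | App of term * term list          (* application *)

(* A "compiler" from terms to terms: replace every occurrence of one
   constant with another, rebuilding the rest of the tree unchanged. *)
let rec subst_const from_c to_c t =
  match t with
  | Const c when c = from_c -> Const to_c
  | Lambda (x, ty, b) ->
      Lambda (x, subst_const from_c to_c ty, subst_const from_c to_c b)
  | App (f, args) ->
      App (subst_const from_c to_c f,
           List.map (subst_const from_c to_c) args)
  | _ -> t
```

A real plugin pass threads an environment and an evar_map through calls like these, but the recursive shape is the same.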

But it turns out the system that I write compilers for is complicated. When we talk about Coq, we often talk about the Calculus of Inductive Constructions (CIC). This is already a gross simplification, because Coq has universes, so it’s closer to CICω (plus a handful of other goodies). And it turns out that while you can usually get away without thinking about universes when you’re writing Coq code, when you actually try to write a plugin, universes always matter. A Coq term these days doesn’t actually mean anything without a corresponding set of universe constraints (currently stored in the evar_map, so-called because it also holds constraints for existential variables).

This can be really confusing when you write a plugin! In my case, I ran into several bugs where I was constructing terms that were well-typed and my plugin almost always worked. But occasionally, I would get universe errors. If I constructed the same term by hand in a Coq file, I wouldn’t get those universe errors. What gives?

I found out that you basically need to always thread your evar_map through your plugin and update it as you go. There’s still a lot that I don’t understand, though, which is why I have an open issue about this.

For example, one of the hardest things to do in a Coq plugin is to define a term. Yves Bertot has an excerpt in his plugin tutorial:

let edeclare ident (_, poly, _ as k) ~opaque sigma udecl body tyopt imps hook =
  let open EConstr in
  let env = Global.env () in
  let sigma = Evd.minimize_universes sigma in
  let body = to_constr sigma body in
  let tyopt = Option.map (to_constr sigma) tyopt in
  let uvars_fold uvars c =
    Univ.LSet.union uvars (Univops.universes_of_constr env c) in
  let uvars =
    List.fold_left uvars_fold Univ.LSet.empty (Option.List.cons tyopt [body]) in
  let sigma = Evd.restrict_universe_context sigma uvars in
  let univs = Evd.check_univ_decl ~poly sigma udecl in
  let ubinders = Evd.universe_binders sigma in
  let ce = Declare.definition_entry ?types:tyopt ~univs body in
  DeclareDef.declare_definition ident k ce ubinders imps hook

let packed_declare_definition ident value_with_constraints =
  let body, ctx = value_with_constraints in
  let sigma = Evd.from_ctx ctx in
  let k =
    (Decl_kinds.Global, Flags.is_universe_polymorphism (), Decl_kinds.Definition) in
  let udecl = Univdecls.default_univ_decl in
  let nohook = Lemmas.mk_hook (fun _ x -> x) in
  ignore (edeclare ident k ~opaque:false sigma udecl body None [] nohook)

Do you understand that? Me neither, which is why I’m grateful that Yves wrote it. For now the recommendation is to copy and paste this into your plugin code. Hopefully it will become a part of the API at some point, or the API for defining terms will be easier to use. I’ve got an issue open about this one too.

So clearly, some of the difficulty is just a matter of improving the API. But for things like universe constraints, it’s hard to develop an API that abstracts this away from the plugin developer. Ideally you’d like to never think about universe constraints. But they do always matter.

It’s easy to accidentally reinvent the wheel if you can’t find the wheel to begin with.

I have an entire de Bruijn library in both of my plugins. I wrote this library because I had no idea that the Coq code already had de Bruijn management functions. I probably shouldn’t have written it to begin with.
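For the curious, de Bruijn management mostly means functions like lift: when you move a term under extra binders, its free variables (represented as indices) have to be shifted. Here's a minimal sketch in OCaml, with a toy AST standing in for Coq's term type; the real kernel has functions with this job, but the code below is mine, not Coq's:

```ocaml
(* Toy terms with de Bruijn indices, 1-based as in Coq. *)
type term =
  | Rel of int             (* de Bruijn variable *)
  | Lambda of term         (* binder *)
  | App of term * term

(* Shift the free variables of t up by n.  The counter k tracks how
   many binders we have passed under: indices below k are bound and
   must not be shifted. *)
let rec liftn n k t =
  match t with
  | Rel i when i >= k -> Rel (i + n)
  | Rel i -> Rel i
  | Lambda b -> Lambda (liftn n (k + 1) b)
  | App (f, a) -> App (liftn n k f, liftn n k a)

let lift n t = liftn n 1 t
```

For example, lifting fun x => x y (that is, Lambda (App (Rel 1, Rel 2))) by 1 shifts only the free y, giving Lambda (App (Rel 1, Rel 3)).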

The code base is big. There are a lot of useful functions in there to help you out if you know where to find them. But you probably won’t know where to find them without a little bit of help. And if you find them, you might find that there are several functions that you have access to, each with a slightly different name and signature, and it might not be clear which one you should call.

This is another place where the Coq Gitter can be really useful. I also hope that once we put together a plugin community, we can find a way to share our own library functions that aren’t in the core Coq code. I have a lot of useful higher-order functions for traversing and manipulating terms!

Even the Coq developers disagree on best practices, sometimes.

When Yves first sent out his plugin tutorial, it had a line that did this:

interp_constr (Global.env()) (Evd.from_env (Global.env())) e

I asked if there was any difference between using the global environment and using the current context:

let (env, evm) = Pfedit.get_current_context() in interp_constr env evm e

Well, there is. But which one you should actually use when was the subject of a 15 minute debate on the first day, and then another 15 minute debate another day, and then several disagreements on Gitter.

You should probably know that both of these exist, and that the latter gives you the proof context while the former just gives you all of your global constants and so on. Other than that, I guess when you have a system as complex as Coq, developed by world-class researchers who have come and gone since 1984 (six years before I was born, mind you), not every question you have is going to have an easy answer.

And don’t get me wrong: Any term you produce with a plugin is still ultimately going to go through type-checking, so unless you deliberately try to compromise type-checking, you’re not going to introduce anything bad. If you mess up, your users will just get confusing error messages. Not a huge deal.

But Coq is just code. Even this beautiful CICω-based system is still ultimately just pretty normal software once you look under the hood.  And to be honest, I think that is comforting. There’s no magic.

PL for SE, Part 1: “Programming Languages”

This is the first post in a Q&A series I’m doing for software engineers who are interested in getting into programming languages (PL) research. And it’s a great question to start with because it’s impossible to get into PL research if you don’t know what it is. The name really isn’t self-explanatory.


@drin_montana writes:

Type systems, formal analysis and security related things are all I know of in PL research. What other types of research are under a PL PhD?


Any one-line summary of PL research is going to miss the mark, but here’s an attempt at one, anyways: PL research is about reasoning about programs. Sometimes we do the reasoning ourselves, sometimes we build algorithms or tools that do the reasoning automatically, and sometimes we make it easier for programmers to do the reasoning.

I once gave a quick summary along those lines to a machine learning researcher, who then responded “so basically, you guys solve the halting problem.”¹ I mean, basically. Pretty much anything interesting you could ever want to know about a program is undecidable. And yet PL is a thriving field. How?

Typically, we explore some trade-offs:

  1. We solve decidable sub-problems of an undecidable problem.
  2. We approximate a solution to an undecidable problem that mostly works in practice.
  3. We limit what we let the programmer do so that we can avoid solving the problem altogether. (This is often actually a really good solution.)

This explanation for sure falls short, though, so let’s get into some examples. I’ll walk through a small selection of topics I know about. I’m not going to pretend it’s an exhaustive list, and it’s definitely biased by the kinds of research I’m exposed to at UW. Also note that this list is constantly evolving, and that there’s some overlap with software engineering research—I’m not going to get into the distinction.

If you want to learn more about any of these topics or see what other kinds of PL research exist, I recommend checking out the programs for some big PL conferences (PLDI 2017 [not yet posted, but hopefully soon], POPL 2017, OOPSLA 2016).

Language Design & Implementation

What it is: This is probably what most people think we do: design and implement programming languages.² This often means developing new abstractions that help programmers express something that is difficult to express or avoid common mistakes. Sometimes it’s just a matter of combining and implementing existing abstractions the right way. It’s actually a pretty small part of PL research, but there are people who focus on it, and it intersects a lot with other kinds of PL research.

Recent example: Checked C is a project at Microsoft Research with the goal of making your life less miserable if you have to write C programs. In their words:

Checked C … adds checking to C to detect or prevent common programming errors such as buffer overruns, out-of-bounds memory accesses, and incorrect type casts.

Why you should care: You use these languages every day. No matter how awesome of a programmer you are, there are sometimes things you can’t express easily or mistakes that you’re likely to make. Researchers are trying to make your life easier.

Domain-Specific Languages

What it is: General-purpose programming languages (like C or Java) are great. But sometimes, you can get a lot more power by using a language that is specific to the problem you are solving (say, by using Mathematica for math, or SQL to query a database). Domain-specific languages (DSLs) are designed for these specific tasks.

Recent example: Luck is a language that makes it easier to write generators (which generate test inputs) for QuickCheck-style (automatic) testing.

Why you should care: You already do! DSLs are all around you. They’re often disguised as frameworks or libraries. You can view Spring MVC as a DSL. The same goes for mocking frameworks like Mockito.

Solver-Aided Languages

What it is: Solver-aided languages are (usually domain-specific) languages backed by constraint solvers. These languages let you leave some of the reasoning to a logic solver, so you can write high-level programs and have the logic solver prove your program correct (verification), generate correct code on the fly (angelic execution), or produce correct programs for you (synthesis).

Recent example: Rosette is a solver-aided language that makes it easy to write new solver-aided languages (pretty meta). One recent application of Rosette is a solver-aided language that helps programmers generate lots of different inputs for testing (kind of like Luck, but very different under the hood).

Why you should care: For all of the same reasons you care about DSLs and general-purpose programming languages, with the added bonus that constraint solvers are really good at the tedious things you don’t like wasting your time on. Imagine if all of the time you spent thinking about low-level details and writing boilerplate were freed up to spend just on the fun, creative stuff. Your job would be more fun; your code would be better.

Program Synthesis

What it is: An elaboration of the above, but not limited to its use in new languages. Basically, programming is hard, so let’s let our computers do some of the programming for us. Sometimes we give the computer a specification of what our program should do; sometimes we give it some examples and let it figure out the rest.

Recent example: Cassius synthesizes CSS. This probably doesn’t need any motivation to anyone who has ever written a website, but if you’d like some, the talk intro has a nice example.

Why you should care: You may have unknowingly used synthesis tools already (for example, Flash Fill, an Excel feature that fills in spreadsheets from examples). If you haven’t, they will be coming your way soon. I think this is really going to be the future of programming. It doesn’t mean you’ll be out of a job—it just means you’ll be able to spend more time on the things that really matter. So think about any programming task that you find mundane or unnecessarily difficult, something you wish you could just throw at your computer and have it do for you. Synthesis is here to help you with that.


Verification

What it is: Proving programs correct. According to common wisdom, there are “two kinds” of verification, but nobody agrees on what those “two kinds” are. Some say the split is automatic/manual: Is the tool reasoning, or is the programmer using the tool to reason? Others say it is classical/constructive: What assumptions does the tool’s underlying logic make? In reality, these are more like axes, so you’ll see automatic-classical, automatic-constructive, and manual-constructive (probably not manual-classical, but who knows, and there are way more than just two flavors of logic).

Recent example: Neutrons verified a radiotherapy system (and found bugs in the process). In the past, programming bugs in (unverified) radiotherapy machines have actually harmed patients with radiation overdoses.

Why you should care: If you write code that’s important in any way, you probably want it to be correct. Even if your bugs don’t physically harm customers, they might impact customer security and privacy. Testing gives you some weak guarantees, but imagine the peace of mind of knowing your program is correct. (It also means getting paged less often.)


Testing

What it is: Sometimes you really don’t need full guarantees that your programs are correct, so you write some tests. Writing good tests is hard, though. Testing research aims to make that easier, usually through automation (whether partial or full), but sometimes through new abstractions (as in Luck, which we talked about earlier). The meat of this is software engineering research, but testing papers still show up in PL conferences.

Recent example: PBNJ makes it easier to write mock objects. It uses constraint solvers under the hood.

Why you should care: Oh, I’m sure you already care. I remember scoping out projects with my TPM and setting aside an equal amount of time for testing and writing code in some cases. Should you really spend that much time writing tests? Sometimes you have to, but you probably shouldn’t have to.

Static Program Analysis

What it is: Sometimes you don’t quite want to prove things correct, but you also don’t want to just run a bunch of tests and call it a day. There are a lot of topics that fall in between the two approaches, and I’m not going to write about all of them, but static program analysis is a really big one worth mentioning. Basically, static program analysis approximates your program in some way, then proves some properties about the approximation. You can use this to find bugs, but for even very simple bugs (like divide by zero) you’ll typically have at least one of two problems: Sometimes your analysis will report bugs that aren’t really bugs (false positives), or sometimes it will fail to report real bugs (false negatives). That’s usually OK, though.
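To make that concrete, here's a toy sign analysis in OCaml (my own illustration, not any particular tool): it abstracts every expression to just its sign and warns whenever a divisor might be zero. Because adding a positive and a negative abstracts to "unknown," it warns about 1 / (2 + (-1)) even though the divisor is really 1, a false positive.

```ocaml
(* Abstract values: we only track the sign of each expression. *)
type sign = Neg | Zero | Pos | Unknown

type expr =
  | Num of int
  | Add of expr * expr
  | Div of expr * expr

let sign_of_int n = if n < 0 then Neg else if n = 0 then Zero else Pos

(* Compute the sign of e, accumulating divide-by-zero warnings. *)
let rec analyze e warnings =
  match e with
  | Num n -> (sign_of_int n, warnings)
  | Add (a, b) ->
      let (sa, w) = analyze a warnings in
      let (sb, w) = analyze b w in
      let s = match sa, sb with
        | Pos, Pos -> Pos
        | Neg, Neg -> Neg
        | Zero, s | s, Zero -> s
        | _, _ -> Unknown        (* e.g. Pos + Neg could be anything *)
      in
      (s, w)
  | Div (a, b) ->
      let (_, w) = analyze a warnings in
      let (sb, w) = analyze b w in
      let w =
        if sb = Zero || sb = Unknown
        then "possible divide by zero" :: w
        else w
      in
      (Unknown, w)
```

Real analyzers use much richer abstract domains than signs, but the approximate-then-check structure is the same.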

Recent example: FindBugs is an analysis tool that finds bugs in Java programs. I’ve seen it actually used in industry.

Why you should care: I think I’ve motivated the “bugs suck” angle a lot, so here’s another reason: Every time you compile a general-purpose language, the compiler is already doing this for you and using it to optimize your code. This is why you don’t have to think about all of the low-level optimizations you learned about in undergrad that make your code fast but messy. The compiler makes your code fast, but you never have to see the mess.

Type Systems / Type Theory

What it is: This is way down in “core PL,” and focuses on reasoning about the type systems that programming languages use. Type systems are really a way of limiting what the programmer can do so we can better reason about our programs. Ideally you want a type system that is expressive enough to do everything you want to do (imagine writing Java code without being able to make a List of Integers). But as you make your type system more expressive, it may be harder to reason about or require extra programmer effort.

Not-so-recent example: I’m going a little older here, since it takes a while for type theory results to reach the real world (but trust me, they do). It’d be great to automatically infer types without the programmer ever having to write them. It turns out this is undecidable for a pretty simple language. But! (OK, if you can’t tell, I really like this stuff.) By placing just a few restrictions on the type system, it’s actually decidable. ML and older versions of Haskell both use this result.
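In OCaml, a direct descendant of ML, that restricted (Hindley-Milner-style) inference is what lets you write code like this with no type annotations at all:

```ocaml
(* The compiler infers every type below at compile time. *)
let id x = x                    (* inferred: 'a -> 'a *)

(* let-polymorphism: the same id is used at int and at string. *)
let pair = (id 3, id "three")
```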

Why you should care: You probably don’t want your type system holding you back when you write programs. But you also probably don’t want your programs mysteriously failing at runtime because of a simple type error that you could have prevented ahead of time. Type theory research gets very abstract very quickly, but ultimately it still does reach you (sometimes decades later) in the form of more expressive languages and more reliable programs.

  1. At the time, I was working with Byron Cook’s group at Amazon. Byron Cook is well-known for literally solving the halting problem (in practice).
  2. Often the first question we get when we say we study programming languages is, “what language?”

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

How to Take an Industry Break

“Grad school or industry?” God, who knows. You haven’t even finished college yet. How the hell are you supposed to know what you want to do with the rest of your life? Research sounds fun – maybe you’ve done a bit already – but grad school is a big commitment. And you have an offer from that company you interned with. It was a fun internship! The money sure sounds nice right now. Plus, you could really use a break.

It probably seems like everyone else knows exactly what they want right now. It’s not true. I promise. This was me in 2011. I see it all the time in students I mentor.

The reality is that it’s really scary to decide what to do with the rest of your life right now. You’re young and you have few priors.

Everyone is sure in retrospect. It’s easy to look back and rewrite your story so it’s consistent.¹ But probably most of us were unsure.

I’m not going to tell you what to do. That’s your job. But here’s the beautiful thing: You don’t need to figure it out right now. It turns out there’s no real way to go wrong in computer science.

Let’s say you go to grad school. You get a stipend (enough to get by). Two years in, you decide you hate it. Guess what? You just got a free master’s degree. Drop out. Don’t stay because it’s What Someone Like Me Would Do™. Snag a cushy software gig. Make some extra cash with your new degree.

Now let’s say you go to industry. Two years in, you think boy, I really would like my Ph.D. Well, you can still do that.² Apply to grad school. Quit your job.³ Grad school will be fun with your newfound certainty, savings, and perspective.

But make it easy to change your mind. It’s OK to change your mind! If you only do something because past-you wanted to do it, you’ll be miserable. Set yourself up to go back to school.

Well, that’s neat and all, but how do I actually do that?

Good question, strawman student!

Before You Leave

Talk to your professors.

They probably seem kind of intimidating right now, but they are your best allies. If there’s a class you like a lot, get to know your professor. Go to office hours. Ask her about research.

Do research.

I remember feeling like I didn’t know enough to do research. This is silly; research is discovery. You learn from it. Talk to professors and grad students about doing research. I recommend working with a grad student on an existing project rather than doing a thesis. Better incentives. More guidance.

Try to publish something.

In undergrad, I was so scared of the publication process that I never submitted my research anywhere. Try to get your work published, even if it’s just to a workshop. Worst case, you learn how the process works. Best case, you strengthen your application a lot.

Tell your professors you’re planning to go to grad school.

If there’s one thing adults love, it’s younger people saying “I want to be you when I grow up.” Your professors will be happy to hear that you want to get your Ph.D. They might not believe you. That’s fine. Some may offer to help. Again, your professors are your best allies.

Identify professors for recommendations.

Think ahead of time about who will write you good recommendation letters. If you’re not sure, you can test the waters. For example, senior year, I applied to give the graduation speech, and the associate dean of my college told me my recommendations were great. This made my choice easy later on.

Get the GRE out of the way.

It’s really annoying to study for a test and take a test when you’re working full-time. Get it out of the way before you graduate. Your scores last five years!

After You’re Gone

Stay in touch with your professors.

If you email your professor like “hey, remember me? I want a recommendation letter” and they haven’t heard from you in the last n years, they might say no or write a weak letter. Make it easy for them. Stay in touch.

Again, your professors are your best allies, so you want to maintain connections with them. The payoff can be huge (think, getting you in touch with professors at your favorite school, helping you with your application).

Since every student I mentor thinks this is an extremely awkward thing to do, here are a few excerpts from emails I sent to my professors while in industry:

I’ve decided that I’m going to go back to school two years from now. One thing I noticed is that I dork out about PL as much as I dork out about theory, which is weird because I had always assumed I’d only want to go back for theory. So suddenly I’m not sure what I want to do specifically. I don’t know exactly what I’m expecting in response to this, but any information that I can use to make a decision would be super helpful. And I honestly don’t know how these decisions work; do I need to know what I want to do before I get to school, or can I make that decision once I am already there? You were always pretty enthusiastic about me going to grad school so I figure you’re a good person to contact.

If you’re cringing at how awkward this email is, good. Me too. But this professor responded with the name of the professor who is now my advisor, told me he’d be in town for a conference in the field I’m now studying, and got me in touch with professors in the lab I currently work with. Awkward is OK. This is computer science. Though I wouldn’t recommend saying you “dork out.”

My team at work does a series of technical talks. My manager wants me to do one on FindBugs, since I got kind of excited when I saw that we were actually using it during our build process. I’m willing to do this but I’ve never given a technical talk before. I’m wondering if you have any advice. The talks are generally 30 minutes to an hour, which makes it hard to go into any real detail. The audience is all technical, mostly software engineers. They are all extremely bright people with strong backgrounds in computer science, but it’s a safe assumption that most of them probably don’t know anything about static analysis. Please let me know if you have any ideas or resources.

The feedback was useful. The talk went well. This professor learned I was still interested in PL.


I’m just wondering if you’re doing OK in the aftermath of Boston. I knew a lot of people out there and I know a lot of runners who are in the same position. All 18 members of my current club who were there as well as all 12 members of my old club who were there are safe and unharmed. I hope you are doing well.

This professor and I are both runners. So we chat about running sometimes. In this case, I reached out as a friend to make sure he was OK. This is also a thing you can do. Fun fact: Professors are people, too.

Nowadays, there’s also social media, provided your accounts are Safe For Academia. This doesn’t mean you need to always talk about computers, but keep things reasonable if you’re going to add your professors on Facebook (if you know them well) or Twitter (you can do this for pretty much anyone).

Go to conferences!

Pick a field you like. Google the top conferences in that field. Sign up for one every year or two. You might be able to get your employer to pay for this. Go to the talks that sound interesting. Skip the ones that sound boring (hang out in the hallways and talk to people). Talk to your old professors there. Talk to new people. Don’t worry about sounding dumb. If you’re not even in grad school yet, nobody will expect you to know anything.

Conferences are fun. You’ll learn about the latest research. You’ll meet people. Maybe they’ll remember you when they’re on your admissions committee in n years.

Power play: Find a professor whose work you find interesting, read his work before the conference, go to a talk for a paper he’s on, write down a question during the talk, then approach him casually in the hallway and ask him the question after the talk.

Keep up with the latest research.

Even if you can’t go to conferences, find the papers that are accepted to those conferences and read a few every year. Don’t worry if you don’t understand them at all. That’s normal.

Get involved with your local research group.

This is another really awkward thing to do, but if there’s a research group at your local university doing work that you think is cool, you can reach out to them and see if they have group lunches or social events or reading groups or talks that you can go to. Just mention that you’re planning to apply in n years and are currently in industry.

Also, you should actually talk to people while you’re there. I went to a theory talk by a UW grad student before I applied and was too scared to actually talk to anyone. Then I just sort of silently left. #MissingThePoint

Get help with your application.

If you’re close with any professors and grad school friends, don’t be scared to ask them for help with your application. They’ll give you good advice. Also, they won’t judge you if you say something stupid. You probably will. That’s fine.

Don’t get used to industry income.

It’s nice to spend some money for fun things, but saving is a good way to ensure that fear of a grad school stipend (which is honestly not bad in this field) doesn’t keep you from returning to school. As a bonus, you’ll be able to afford luxuries like an infinite supply of Jimmy John’s during your first deadline.

  1. Academics call this “writing a research paper.”
  2. Every professor ever will tell you that you’re only dumb enough to go to grad school when you’re still in college, but I’m a constructive proof that this isn’t true.
  3. You should probably actually get accepted to a school before you do this.
  4. It is actually awkward. And I am actually awkward. So there’s that.


Flip your NSF Rejection

I spent the last few days of March in 2015 refreshing TheGradCafe and my email inbox. I get push notifications for my email, so I’m not sure why I bothered refreshing it. I guess I was hoping it’d make things go faster. It didn’t.

One night, FastLane went down for maintenance. Posters on TheGradCafe noted that this had happened last year shortly before notifications had come out. I stayed up and waited for a few hours, but nothing happened, so I went to sleep. When I woke up, I saw this:

2015 NSF Graduate Research Fellowship Program Application Notification

Please be an award, please be an award, please be a-

Your application for the 2015 National Science Foundation (NSF) Graduate Research Fellowship Program has been evaluated. We regret to inform you that you were not selected to receive a fellowship in this year’s competition.

I’d gone into the application cycle pretty optimistic. Maybe I just wasn’t cut out for this research thing.

I got my first taste of academic rejection. Mostly reasonable and constructive academic rejection, but back then, it read like this:

This applicant has a very good academic record and rich relevant work experiences which are helpful for the proposed research. But all reference letters are from the same institution. … Also this application hasn’t any publication.

The proposed is an interesting idea but is needing more development. The questions seem a little naive. It is a research direction worth pursuing.

The research proposal is a bit rough and would be improved by better placing within the context of more existing work. While the letter writers are enthusiastic about the student, they collectively do not add many comments about their support for her specific research plan.

Just kidding, it actually read like this:

[images of the reviews]
On TheGradCafe, the other rejected folk were posting their reviews. E / E / VG / VG / VG / VG. E / E / E / VG / E / VG. E / E / VG / VG / G / G. And there was a double poor just staring at me. Clearly, grad school wasn’t my thing.

This year, I got the NSF. There was no magical transformation in between the two years. I worked just as hard both years. But this year, I did better work. For those of you looking to flip your computer science NSF GRFP application, I’d like to share some advice.

Generate Lots of Bad Ideas

I remember emailing my undergrad professor the first time I applied and asking how people think of ideas. We chatted a bit and he sent me some papers. I didn’t really understand any of them. I thought of something on my own that sounded kind of interesting, chatted with a friend, and decided to take that direction because it sounded cool and I didn’t feel like generating more ideas.

I had one idea. It wasn’t really a great idea. Most of my ideas are bad. Most of most people’s ideas are bad. But bad ideas aren’t actually bad! Bad ideas are great. They’re ideas. Don’t filter them out too early. In your long list of ideas you think suck, there’s probably one that’s actually good. So write down the whole list.

This of course doesn’t help much if your list is empty. And it’s totally fine if that’s true. So I’m going to take this classic advice a step further and share some ways to actually think of ideas.

How to Generate Ideas

If you’re in industry, look around you for problems. There are plenty of problems. There are plenty of things that are unnecessarily hard. Builds take forever and always break. Developers spend as much time writing unit tests as code. Then they wait 24 hours for code reviews. Bugs regress. Developers get paged. Everything is broken, always.

Every time you find something that’s hard that you think would be easier in an ideal world, write it down. Send it to yourself in an email. Whatever. Anything to keep a list running. Later on, think about ways to solve these problems with computer science. These are research problems!

If you’re in academia, read current work. Go to talks. Talk to people about their ideas. Talk to people about the almost-ideas you have that sound really stupid that you don’t think should actually be ideas. Eat lunch with your research group. Eat lunch with other research groups. Absorb as much as you can.

Write down all of the questions you have after reading papers or attending talks. Ask those questions. Write down the answers. Ask questions about those answers. Always ask questions. I personally hate asking questions during a talk, but find that most people are willing to talk about the same questions either immediately after the talk or later in the day. Look for existing work that tackles those questions, read that work, and then (you guessed it) ask more questions about that work, too.

Oh, and all of this still applies if you’re an undergrad. Grad students don’t bite. We love undergrads who are interested in our research areas. They’re ~~free labor~~ ~~impressionable, get ’em hooked while they’re young!~~ bright young minds.

Why not mix the two? If you’re in industry, hang out with your local university’s research group. Attend their talks. Show up to their reading groups. This might feel weird, but you’re probably welcome. If you’re in academia, talk to your friends in industry (make some if you don’t have any) about the problems they have. You’ll probably end up with something high in both broader impacts and intellectual merit.

Get Peer Feedback

Generating ideas is the fun part. You don’t have to worry about how stupid your ideas actually are, because they’re allowed to be stupid. You just have to have them. The scary part is actually pursuing an idea. If you think the idea is dumb, it feels awful to bounce it off of your peers, let alone your professors.

But here’s the secret: Nobody is judging you. Nobody’s going to go, “oh man, did you hear Talia’s idea today? It was so dumb.” People are mostly reasonable.

Talk to your peers about your ideas. Talk to your professors about your ideas. Go back to your peers and talk to them about your ideas some more. Then go back to your professors. Write a really shitty first draft. Bounce it off of everyone who’s willing to read it. Get feedback. Refine your draft. Get more feedback. Send it to your professors. Get more feedback. Exchange essays with other people from your year who are applying. Exchange feedback.

But Talia, what if my peers give me negative feedback?

Better them than your reviewers. Don’t turn in a first draft.

My first year, I was too scared to get any feedback ahead of time, so I sent in what I had. This year, I got feedback from several people in my lab, including two students who had won the NSF the year before. Some of that feedback was really harsh! But nobody decided I was stupid because the first draft of my research proposal wasn’t good.

Two students in my lab won the NSF this year; the two of us had exchanged essays with each other and given each other feedback. I believe that one of the reasons UW is so successful at winning NSF fellowships is that feedback among peers is highly encouraged.

Don’t be Scared

Once you think an idea is good, there’s a terrifying moment when you search through existing work and you’re like oh god, what if someone already had this idea that I really love, and they’ve already done it better than I ever could? 

Well, what are your options?

If you don’t look for existing work, one of your reviewers will know about it already. It will show up in your reviews. Ignorance doesn’t help.

If you find existing work that does exactly what you want, that’s great! That means that you had an idea that landed a publication. It was a good idea. Read the papers. Ask questions about the papers. Maybe you can do it better, or maybe you can find a new idea from those questions.

If you only find related work, you should still read all of those papers and mention them. They’ll help you refine your idea and make a good case that you’ve actually put a lot of thought into your proposal.

Looks like there’s only one way to lose.

Write for the Busy Reader

Your reviewer is probably juggling three commitments on the job, and also has a family and friends and a life outside of reviewing NSF applications. So when you’re writing, strategically use bold and italics. Get the reviewers to notice exactly what you want them to notice.

Tell your reader as clearly as possible how you’re addressing broader impacts and intellectual merit.

My first year, I wrote a really nice personal statement that read beautifully if you had like thirty minutes to sit down and actually read it. Nobody did. I was profoundly surprised when my reviewers missed the broader impacts in my personal statement. This year, I added some nice headers: MENTORSHIP – BROADER IMPACTS, TEACHING – BROADER IMPACTS, RESEARCH – INTELLECTUAL MERIT. Nobody missed it this time.

Tell a Consistent Story

One of my professors emailed me the first year and asked me to send my materials so that he could give me a recommendation that was consistent with my story. I was too scared to actually do this. At first I gave him only a high-level overview; eventually I caved and sent the proposal, but with a disclaimer that was longer than the proposal itself. Whoops.

I don’t know why, but for some reason I thought that the professor who had helped me get that far would suddenly decide I wasn’t worth it anymore after reading my bad proposal. This was silly. It bit me in my reviews:

The research proposal seems to be in an early stage and would benefit from being better placed within the context of existing work. The application could be strengthened by having a consistent message between the student and advisor. The collective letters of support do not provide strong support of the specific research goals. A more comprehensive package that reinforces the intellectual merit and shows more faculty support on the research contributions would make this proposal more competitive with others.

Even if my proposal had sucked, if I’d at least kept my professors on the same page, I’d have had a much stronger application!

This year, I sent my full application materials to everyone as soon as they were in reasonable shape. Then I sent the final materials before I submitted them. My story was consistent. Reviewers noticed. Your professors aren’t psychic.

Learn from your Mistakes

Which brings me to my final point. You’re going to get rejected a lot in academia. That’s what everyone tells me, at least. It’s good practice. Sometimes you’ll get rejected for stupid reasons. But most of the time you can learn from it.

The NSF does you a huge favor by releasing the reviews to you. Some of them probably aren’t useful, but I’m willing to bet there’s some useful information in there. You can sit there and beat yourself up over negative comments, or you can honestly look at them and figure out how you can strengthen your application next year.

Or, if you’re like me, you can beat yourself up a little, save them to a document, buy a pint of ice cream, eat it, and come back a few days later with a fresh mind. Then you can read them and figure out exactly how you’re going to win the NSF next year.

Go get it!