Ablation of core research claim
Google Affiliation/FB Affiliation
I excluded Google- and FB-affiliated papers for two reasons. First, measuring the effect of including Google/FB papers on framework popularity is identical to measuring the number of papers Google/FB publishes. Second, these numbers are more easily gamed. Since the majority of papers don't mention what framework they use, a company-wide mandate to mention PyTorch in all FB papers could have an outsized effect on these numbers.
Since Google publishes more papers than FB does, not accounting for authors makes the numbers slightly better for TensorFlow. However, it doesn’t make a significant difference.
Another thing that might help out TensorFlow is incorporating Keras numbers into TensorFlow's. I didn't do this in the original figure because it's mostly irrelevant, and it's not clear it's the right thing to do: most papers that mention Keras mention TensorFlow alongside it, and some of the remaining papers use Keras with Theano or other backends.
In addition, from the ICML code, we can see that the majority of papers that use TensorFlow don’t use Keras. Only about a third of the ICML papers that use TensorFlow use Keras in any capacity (most of those are only using it for datasets).
However, there are some papers that only mention Keras but not TensorFlow. Assuming that all Keras papers are TensorFlow papers changes most conferences a negligible amount (a percent or two). The conference with the biggest change is ACL 2019, which goes from 75% PyTorch to 70% PyTorch.
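The reattribution described above can be sketched as follows. This is a minimal illustration, assuming mentions are stored as per-paper sets of framework names (a hypothetical data structure, not the one actually used for the figures):

```python
def fold_keras_into_tensorflow(mentions):
    """Count every Keras paper as a TensorFlow paper.

    This is an upper bound for TensorFlow, since some Keras papers
    actually use Theano or other backends.
    `mentions` maps paper id -> set of framework name strings.
    """
    folded = {}
    for paper, frameworks in mentions.items():
        frameworks = set(frameworks)
        if "keras" in frameworks:
            frameworks.add("tensorflow")
            frameworks.discard("keras")
        folded[paper] = frameworks
    return folded
```

Re-running the percentages on the folded mentions is what yields the "a percent or two" shifts mentioned above.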
One issue with all these figures is that they come from a somewhat biased sample. It's not a requirement for a paper to mention what framework it uses, and most papers don't. At most conferences, only around 20-35% of papers will mention PyTorch or TensorFlow, with vision conferences at the higher end of the spectrum and ML conferences at the lower end.
With this limited sample, selection bias could skew the results significantly: for example, if there were a community push to cite PyTorch, or if TensorFlow users were more eager to express their enthusiasm for the framework.
There are two primary sources of data we can use to examine this issue:
- Appendices. Unlike most other conferences, ICLR includes appendices in the same PDF as the main paper. That boosts its PyTorch/TensorFlow mention rate to 38%. Even so, its numbers are in line with the other ML conference (ICML), at around 55% PyTorch usage.
- Code. This year, ICML encouraged authors to submit code and suggested that reviewers take it into account. As a result, 534 of the 774 accepted papers had code available on the website. This provides a significantly larger and less bias-prone population for examining framework usage.
I downloaded all code that was available as GitHub repositories, ending up with 488 repos. I grepped each one for framework imports (`from tensorflow`, etc.). Finally, I cross-referenced this with the affiliation information to remove Google/FB papers.
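The grep step might look something like this. This is a minimal sketch, assuming the cloned repos live under a local directory; the patterns and file filtering are illustrative, not the exact script used:

```python
import os

# Substrings whose presence marks a framework; the post grepped for
# patterns like "from tensorflow".
PATTERNS = {
    "pytorch": ("import torch", "from torch"),
    "tensorflow": ("import tensorflow", "from tensorflow"),
}

def frameworks_used(repo_path):
    """Return the set of frameworks imported anywhere in a cloned repo."""
    found = set()
    for root, _dirs, files in os.walk(repo_path):
        for name in files:
            if not name.endswith(".py"):
                continue
            try:
                with open(os.path.join(root, name), errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # skip unreadable files
            for framework, patterns in PATTERNS.items():
                if any(p in text for p in patterns):
                    found.add(framework)
    return found
```

Note a repo can legitimately match both frameworks (e.g. baselines in one, the paper's method in the other), so the result is a set rather than a single label.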
When we scraped only from the PDFs, we found 69 PyTorch papers vs. 53 TensorFlow papers (56.5% PyTorch).
From the code, we found 116 PyTorch papers and 101 TensorFlow papers (53.4% PyTorch).
Finally, merging the two sources of information, we find 137 PyTorch papers and 123 TensorFlow papers (52.7% PyTorch).
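The merge can be sketched as taking, per paper, the union of the frameworks found in its PDF and in its code. The data structures below are hypothetical illustrations, not the actual ones used:

```python
def merge_mentions(pdf_mentions, code_mentions):
    """Union the two sources: a paper counts as using a framework if
    either its PDF text or its code mentions it.
    Both arguments map paper id -> set of framework names."""
    merged = {}
    for paper in set(pdf_mentions) | set(code_mentions):
        merged[paper] = (pdf_mentions.get(paper, set())
                         | code_mentions.get(paper, set()))
    return merged

def pytorch_share(mentions):
    """PyTorch papers as a fraction of all framework mentions
    (a paper mentioning both frameworks counts once on each side)."""
    pt = sum(1 for fws in mentions.values() if "pytorch" in fws)
    tf = sum(1 for fws in mentions.values() if "tensorflow" in fws)
    return pt / (pt + tf)
```

On the merged ICML data this computation is 137 / (137 + 123), i.e. the 52.7% figure above.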
Overall, although there is some variation across these data sources, it's clear that there isn't any overwhelming effect biasing the numbers towards PyTorch, and it doesn't change any conclusions meaningfully.
Researchers abandoning TensorFlow
Another claim I made was that researchers have been abandoning TensorFlow for PyTorch. Although it’s certainly true that as a whole, researchers have moved to PyTorch over the last year, it’s possible that researchers using TensorFlow have stuck with it, while new researchers have gone with PyTorch.
I examined the authors of all the conference papers published in 2018 and 2019 thus far, and counted how many researchers switched to PyTorch versus how many stayed on TensorFlow.
Of the 161 researchers who published more TensorFlow papers than PyTorch papers in 2018, 88 (55%) of them have switched to PyTorch. On the other hand, only 15% of the researchers who published more PyTorch papers have switched to TensorFlow.
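The switching computation can be sketched as follows: label each author by their majority framework in 2018, then check their majority framework in 2019. This is a rough illustration of the methodology under assumed data structures, not the actual analysis code:

```python
from collections import Counter

def majority_framework(labels):
    """labels: framework names ("pytorch"/"tensorflow") from one
    author's papers in a given year. Returns the majority, or None
    on a tie or when no papers mention a framework."""
    counts = Counter(labels)
    if counts["pytorch"] > counts["tensorflow"]:
        return "pytorch"
    if counts["tensorflow"] > counts["pytorch"]:
        return "tensorflow"
    return None

def switch_rate(by_author, src, dst):
    """Fraction of authors who were majority-`src` in 2018 and
    majority-`dst` in 2019. by_author: {author: {year: [labels]}}."""
    src_authors = [a for a, years in by_author.items()
                   if majority_framework(years.get(2018, [])) == src]
    switched = [a for a in src_authors
                if majority_framework(by_author[a].get(2019, [])) == dst]
    return len(switched) / len(src_authors) if src_authors else 0.0
```

Applied to the real data, `switch_rate(..., "tensorflow", "pytorch")` is the 55% figure and `switch_rate(..., "pytorch", "tensorflow")` the 15% figure quoted above.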
These figures are very approximate, of course. Many authorship positions have nothing to do with implementation, and most papers mention neither PyTorch nor TensorFlow. Thus, some of the researchers who’ve “switched” may only seem like they’ve switched due to low sample size.