Software Development

What Science Says About Getting Your PR Accepted

15 min read

By Darío Macchi

Developer Advocate

Darío Macchi is a Howdy Software Engineer with more than two decades of experience. He fell in love with programming at age 11, when he taught himself to code while tinkering with video games on his first computer, an Intel 386.

At Howdy, Darío captains the development team in the Marketing department, helping developers solve problems and providing the team with forward-thinking solutions. As a Developer Advocate, Darío is the voice of the Howdy dev team, communicating their feedback and concerns, developing blog posts and resources, speaking at events, and representing Howdy in the broader dev community. Darío lives in Montevideo, Uruguay, where he enjoys cultivating bonsai trees.


What makes developers accept a PR? The article "Does code quality affect pull request acceptance? An empirical study" offers a fascinating insight into this question because it reviews many studies on how pull requests get accepted or rejected. It is unique because it confirms previously discovered facts across several studies that use different research techniques. The paper benefits developers, not only those who want to improve their acceptance rate, but everyone involved in software development who should try to return to the original spirit of pull requests.

Let us focus on the most valuable points of the article. What follows is a condensed summary that mixes direct quotes from the paper and related work, sometimes rewording them and occasionally adding my own comments. All research results come from the original article; any mistakes are mine.

Some (fun/curious/not even new) facts around PRs

We feel fancy because we use pull requests. Indeed, we feel protected by working like this, as though pull requests automatically bestow magical effects on our code; we must have high-quality code just because we are doing them!

However, deep inside, we know that is not true. We have all witnessed projects that used pull requests but had significant quality issues regardless; those cases where you think, "This should never have passed a code review." So, what is the explanation for this?

Researchers have been working on this for years, trying to understand the conditions that lead to poor code review in PRs and what leads to their acceptance or rejection. The following section lists some facts from different researchers in various articles on pull request acceptance. The key takeaway is that reality differs from the panacea we like to believe in.

I suggest you follow the references and snowball through them to get the fine details around each fact.

Factors that affect acceptance/rejection of PRs

Evidence from different studies suggests several factors affect the speed with which a PR is accepted. A study that began with the GHTorrent corpus and then continued on a sample of 291 projects (Gousios et al., 2014) named factors like "developer's previous track record, the size of the project and its test coverage and the project's openness to external contributions." In 2014, Rahman and Roy conducted a study comparing successful and unsuccessful pull requests made to 78 GitHub-based projects by 20,142 developers from 103,192 forked projects. They found that developers with 20 to 50 months of experience were the most productive in submitting and getting pull requests accepted. The success and failure rates of pull requests in a project can also be affected by the number of developers involved and their experience.

Additionally, a case study of Active Merchant, a commercial project developed by Shopify Inc. (Kononenko et al., 2018), points in the same direction: "The statistical models revealed that both PR review time and merge decision are affected by PR size, discussion, and author experience and affiliation. Developers believe that PR quality, type of change, and responsiveness of an author are also important factors." The size of pull requests, their perceived quality, and the quality of the code in general, along with the context, play an essential role in the acceptance of pull requests (Tsay et al., 2014; Gousios et al., 2014; Soares et al., 2015b).

And what about gender?

What about it? In 2021, we are still wondering if there are gender-related differences in PR acceptance rates. However, here we are, and (as I supposed) gender does affect the acceptance rate of pull requests.

Terrell and colleagues conducted a study of an open-source software community in 2017, revealing some surprising results. It highlights that when women's contributions were not identifiable as being from women, they were more likely to be accepted than men's.

Well, at least my social networks don't have an effect… or do they?

Social networks affect everything nowadays. People make many inferences from activity on social networks such as GitHub and Twitter. They infer technical goals, how people react to code reviews, and even seniority. All these inferences influence group collaboration and reputation.

Unsurprisingly, pull requests from developers with more social contributions are more likely to be merged than those with fewer contributions (Dabbish et al., 2012). Tsay et al. (2014) show that better-connected authors and project managers have higher acceptance rates.

I thought pull requests had to do with quality

It was always about quality. The spirit of pull requests is to allow every project developer to review the code related to a changeset in a forum-like environment. This leads to better opportunities for engaging with the community and incorporating contributions (Gousios et al., 2014, 2015; Veen et al., 2015). So why have we yet to see quality named as the main acceptance factor in the previous section? Sadly, technical factors are only a tiny minority of the ones that matter (Gousios et al., 2014, 2015).

To confirm these suspicions, the author of the article under review studied the role of PMD issues in pull request acceptance. PMD is a static source code analyzer used to find common programming flaws. Although the author expected developers to address quality issues in their pull requests, the statistical techniques showed that acceptance or rejection did not relate to the presence of PMD issues. Not content with that, they applied six machine learning models to confirm or reject these results.
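To make those "PMD issues" concrete, here is a minimal, invented Java snippet (not taken from the paper or from the studied projects) with the kind of flaws a default PMD ruleset typically reports, such as an unused local variable and an empty catch block:

```java
// Hypothetical example: the kind of code a PMD-style analyzer flags.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReportLoader {

    public String load(String path) {
        String unused = "debug";  // unused local variable
        try {
            return new String(Files.readAllBytes(Paths.get(path)));
        } catch (IOException e) {
            // empty catch block: the error is silently swallowed
        }
        return null;              // callers now have to null-check
    }
}
```

None of this is exotic; it is exactly the sort of small, well-documented flaw that, according to the study, reviewers do not treat as a reason to reject a pull request.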

The results confirmed that code smells and anti-patterns are not considered problems when accepting or rejecting a pull request! I cannot stop thinking about the implications of this. All these quality misses seriously increase the risk of faults, introduce bugs, and add maintenance effort.

Let me give you some examples related to some of these PMD issues. The authors manually reviewed 28 well-known Java projects (e.g., apache/Cassandra, apache/Kafka, hibernate/hibernate-orm, and spring-projects/spring-framework). They found things that I see every day with my undergraduate students: god classes, speculative generality, violations of the Law of Demeter (do not talk to strangers… remember?), duplicated code, long methods, and many other issues usually considered dangerous by different empirical studies (Sjøberg et al., 2013; Taibi et al., 2017; Palomba et al., 2018).
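As an illustration of one of those smells, here is a small, made-up Java example (the class names are hypothetical, not drawn from the reviewed projects) showing a Law of Demeter violation and one way to remove it:

```java
// Hypothetical example of a Law of Demeter ("do not talk to strangers") violation.
public class OrderReport {

    // Violation: the report reaches through Customer and Address to get the city,
    // coupling itself to the internal structure of both classes.
    public String headerFor(Order order) {
        return "Shipping to " + order.getCustomer().getAddress().getCity();
    }

    // Fix: ask the immediate collaborator for what you need.
    public String headerForFixed(Order order) {
        return "Shipping to " + order.getShippingCity();
    }
}

class Order {
    private final Customer customer;
    Order(Customer customer) { this.customer = customer; }
    Customer getCustomer() { return customer; }
    String getShippingCity() { return customer.getAddress().getCity(); }
}

class Customer {
    private final Address address;
    Customer(Address address) { this.address = address; }
    Address getAddress() { return address; }
}

class Address {
    private final String city;
    Address(String city) { this.city = city; }
    String getCity() { return city; }
}
```

The chained version still compiles and still works, which is precisely why a reviewer judging only "does it look like it works?" lets it through.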

Implications of the study

Here (taking into account the background of the other research mentioned), the authors give some advice.

Remember that good code may not be the key to getting your PRs accepted. Pay attention to the coding standards and quality rules agreed upon by the team. Also, add a test suite and documentation. Use what you have learned in this article to make your submissions mergeable with fewer changes. Ultimately, you are the one responsible for your code's quality (at least until researchers find a better way).

Core team members interested in adding value to their product should consider adopting an automated static analysis tool. It should be integrated into the CI/CD pipeline so contributors can validate their code automatically. They should also adopt more systematic code review methods, like checklists based on well-known best practices, to avoid judging submissions on perceived quality alone.
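To make that pipeline integration tangible, here is a minimal sketch of a quality gate, under the assumption that an earlier CI step wrote one violation per line to a plain-text report (the report path and format here are hypothetical, not any particular tool's contract). Wiring a step like this into the pipeline makes the PR check fail automatically when violations slip in:

```java
// Minimal sketch of an automated quality gate for a CI step.
// Assumption: a previous step wrote one violation per line to a plain-text
// report (the path and format are hypothetical, not a specific tool's output).
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class QualityGate {

    public static void main(String[] args) throws IOException {
        Path report = Path.of(args.length > 0 ? args[0] : "build/analysis-report.txt");
        int threshold = args.length > 1 ? Integer.parseInt(args[1]) : 0;

        List<String> violations = Files.readAllLines(report);
        System.out.printf("Found %d violations (threshold: %d)%n",
                violations.size(), threshold);

        // A non-zero exit code makes the CI job, and therefore the PR check, fail.
        if (violations.size() > threshold) {
            violations.forEach(System.out::println);
            System.exit(1);
        }
    }
}
```

The point is not this particular gate but the principle: the decision "does this PR meet our quality bar?" is made by an objective, automated check instead of by each reviewer's perception.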

Finally, spread a culture of objective quality standards around clean code. Keep well-documented code smells and anti-patterns out of pull request approvals, at least until researchers discover how to make the PR process more quality-centered again!
