
How Google and Facebook do code analysis

Stephen Magill, CEO, MuseDev

Over the past five years the internal developer productivity teams at Google and Facebook have been exploring a new approach to incorporating static code analysis into their development workflows. In contrast to traditional uses of static analysis, they are building code analysis into existing developer feedback mechanisms, such as code review or continuous integration (CI) checks, resulting in a highly effective, yet almost transparent, approach to ensuring code quality.

It's a game-changer for QA, and your organization can do it too.

At Google, static analysis infrastructure prevents hundreds of bugs per day from entering the codebase, while at Facebook static analysis tools detect over 40% of severe security bugs. Several of the tools these teams developed are now open source, which means you can replicate these results without hiring your own team of static analysis experts.

The open-source community has developed several effective static analysis tools that catch real performance, reliability, and security problems, but to successfully apply these tools you must do more than just choose a tool and run it over a codebase.

The success of static analysis at Google, Facebook, and other large tech companies is as much about how you apply the tools as which tools you choose. Here are the key principles that Google and Facebook apply in their use of static code analysis, and a review of the open-source static analysis tool landscape.

1. Use multiple tools

Given the wide variety of open-source and commercial static analysis tools out there, you might wonder which is best to include in your pipeline. Actually, different tools tend to catch largely non-overlapping error types, so you should include multiple tools whenever possible. This is exactly the approach Google has taken; it runs multiple static analysis tools behind the scenes, but presents the results in a uniform manner to developers.
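
To make this concrete, here is a contrived Java sketch (the Example and User classes are our own illustration, not code from either company) containing two bugs that typically need two different tools: a possible null dereference that a deep analysis such as Infer can trace, and a reference-equality comparison on arrays that a pattern-based checker such as ErrorProne flags.

```java
import java.util.Map;

public class Example {
    // Deep-analysis finding: get() may return null, so a tool like Infer
    // can report the potential NullPointerException on the next line.
    static String greeting(Map<String, User> users, String id) {
        User u = users.get(id);           // may be null
        return "Hello, " + u.getName();   // possible NPE
    }

    // Pattern-based finding: arrays don't override equals(), so this
    // compares references, not contents -- the kind of local pattern
    // a checker like ErrorProne catches instantly.
    static boolean sameBytes(byte[] a, byte[] b) {
        return a.equals(b);               // almost certainly a bug
    }
}

class User {
    private final String name;
    User(String name) { this.name = name; }
    String getName() { return name; }
}
```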

2. Workflow integration matters

It is not enough to merely make results available; you must make them available in the right way in order for developers to notice them and take action. When Facebook first deployed its Infer tool, the team ran it overnight, then presented and assigned results to developers via the issue-tracking interface.

Infer builds a rich model of the code, capturing which locks are held, how information flows in the program, and the shape of in-memory data structures. It then uses this model to check for security, performance, and reliability errors, including subtle errors such as the incorrect use of Java synchronization.
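
As a rough illustration (the Counter class below is our invention, not Facebook code), this is the kind of inconsistent locking Infer's race detection is designed to report: a field written under a lock in one method but read without it in another.

```java
// Marking the class @ThreadSafe (with the JCIP-style annotation Infer
// recognizes) asks Infer's race detector to check every field access.
public class Counter {
    private int count = 0;

    public synchronized void increment() {
        count++;              // write is guarded by the intrinsic lock
    }

    public int get() {
        return count;         // unguarded read of the same field: a data race
    }

    public synchronized int getSafely() {
        return count;         // the fix: take the same lock on the read path
    }
}
```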

Even though the Infer team manually reviewed the results to ensure that they were relevant and important, almost none of the errors were fixed. But when the team deployed the same analysis on each code change through the code review interface, the fix rate rose to over 70%.

The lesson here is that integration matters. Effective deployment of static analysis technology requires that you present the right bugs at the right time via the right interface. And the right interface is almost always something developers already use regularly (such as IDEs or code review).

3. Cherish developer trust

The right result presented at the wrong time can be ineffective, but the wrong result presented at the right time can be even more damaging. All static analysis tools produce some false positives, and as developers see these “wrong results,” they lose trust in the tool.

Facebook has reported target “fix rates” (how often developers fix the bug flagged by a tool) of 70% to 80%, and Google strives for greater than 90% fix rates for its ErrorProne code analysis tool. By only deploying tools with high fix rates, these companies maintain the trust of developers and ensure that they continue to act on important issues found by these tools. But how do you know whether a new tool meets this threshold?

The best approach is to be data-driven. The developer tooling groups at these companies collect data on which bugs get fixed and gather explicit developer feedback on tool results. They monitor this data continuously to flag underperforming tools, which they then improve or remove.
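
Here is a minimal sketch of what that monitoring loop might look like, assuming a simple findings log; the FixRateMonitor class, its names, and the 70% threshold (taken from the fix rates quoted above) are our own illustration, not either company's infrastructure.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FixRateMonitor {
    // One row per analyzer finding: which tool reported it, and was it fixed.
    record Finding(String tool, boolean fixed) {}

    static final double TARGET_FIX_RATE = 0.70; // threshold from the article

    // Fraction of each tool's findings that developers actually fixed.
    static Map<String, Double> fixRateByTool(List<Finding> findings) {
        return findings.stream().collect(Collectors.groupingBy(
                Finding::tool,
                Collectors.averagingDouble(f -> f.fixed() ? 1.0 : 0.0)));
    }

    public static void main(String[] args) {
        List<Finding> log = List.of(
                new Finding("infer", true), new Finding("infer", true),
                new Finding("noisy-linter", false), new Finding("noisy-linter", false),
                new Finding("noisy-linter", true));
        fixRateByTool(log).forEach((tool, rate) -> {
            if (rate < TARGET_FIX_RATE) {
                System.out.printf("%s: %.0f%% fix rate -- tune or remove%n",
                        tool, rate * 100);
            }
        });
    }
}
```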

4. Analysis tools make developers more productive

By flagging errors that humans are bad at noticing, or that are expensive to discover by other means, static analysis tools can help developers make better, more productive use of their time. And in some cases, these tools can enable engineering efforts that would not be possible otherwise.

The Facebook Infer team has one such story regarding the impact of their thread safety analysis. The News Feed component of the Facebook Android app was being converted from a single-threaded to a multi-threaded architecture. This required the introduction of synchronization code throughout the codebase, all of which had to be consistent.

The development team implemented the new architecture while relying on Infer to catch the cases it missed and spare it from debugging subtle, hard-to-reproduce multithreading bugs. Following the effort, one engineer commented that “without Infer, multithreading in News Feed would not have been tenable.”

The changing open-source static analysis landscape

Static analysis tools range from simple search tools that look for specific code patterns to deep analyses that can reason about thread safety or track null pointer issues across multiple method calls.

Ten years ago there were very few deep open-source analyses, but that changed in 2015 with the release of Facebook’s Infer tool. Infer supports C, C++, Objective C, and Java. For Java codebases, you can find several other open-source tools that take different approaches to code analysis and that are useful for catching other types of errors. ErrorProne, created by Google’s developer productivity team, catches a variety of Java-specific error patterns, such as comparing strings using reference equality (“double equals”) instead of calling the .equals() method.
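
For example, ErrorProne's StringEquality check flags exactly this mistake (the Greeter class below is our own illustration):

```java
public class Greeter {
    static boolean isAdmin(String role) {
        return role == "admin";        // flagged: compares references, not contents
    }

    static boolean isAdminFixed(String role) {
        return "admin".equals(role);   // correct, and also null-safe
    }
}
```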

FindSecBugs (based on SpotBugs) takes an approach similar to ErrorProne's but focuses on security-relevant errors. And the static code analyzer PMD checks for a variety of API-specific error patterns and best practices. For C and C++, Cppcheck, the Clang Static Analyzer, and Clang-Tidy each provide several important checks.
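
As an example of that security focus, here is the classic JDBC injection pattern FindSecBugs is built to flag, alongside the parameterized fix; the UserDao class itself is our own illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserDao {
    // Vulnerable: attacker-controlled input is concatenated into the query,
    // the pattern FindSecBugs reports as potential SQL injection.
    ResultSet findUnsafe(Connection conn, String name) throws SQLException {
        Statement st = conn.createStatement();
        return st.executeQuery(
                "SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Safe: a PreparedStatement keeps user data out of the SQL grammar.
    ResultSet findSafe(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```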

You can reap the same benefits

Advanced open-source analysis tools, deployed in a manner consistent with the four key principles described above, can enable your organization to achieve some of the same productivity, security, and reliability improvements that code analysis technology has enabled at Google and Facebook.

Meet me at DevOps Enterprise Summit: Las Vegas 2019, where I will speak in more detail about these and other key principles of effective static analysis as well as the recent technology shifts that have enabled new ways of applying these principles. The conference runs October 28-30, 2019.
