Testing rules
Skylib has a test framework called unittest.bzl
for checking the analysis-time behavior of rules, such as their actions and
providers. Such tests are called “analysis tests” and are currently the best
option for testing the inner workings of rules.
Some caveats:
- Test assertions occur within the build, not a separate test runner process. Targets that are created by the test must be named such that they do not collide with targets from other tests or from the build. An error that occurs during the test is seen by Bazel as a build breakage rather than a test failure.
- It requires a fair amount of boilerplate to set up the rules under test and the rules containing test assertions. This boilerplate may seem daunting at first. It helps to keep in mind that macros are evaluated and targets generated during the loading phase, while rule implementation functions don’t run until later, during the analysis phase.
- Analysis tests are intended to be fairly small and lightweight. Certain features of the analysis testing framework are restricted to verifying targets with a maximum number of transitive dependencies (currently 500). This is due to performance implications of using these features with larger tests.
Assertion failures in the testing rule's implementation function are not raised
immediately by calling fail() (which would trigger an analysis-time build
error), but rather by storing the errors in a generated script that fails at
test execution time.
See below for a minimal toy example, followed by an example that checks actions.
Minimal example
//mypkg/myrules.bzl:
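For illustration, a minimal sketch of the rule under test: a rule named myrule that writes "abc" to an output file (as the validation examples later on this page expect) and returns a custom provider. The provider name MyInfo and its fields are assumptions made for this sketch.

```python
# A minimal sketch: the provider name MyInfo and its fields are illustrative.
MyInfo = provider(fields = ["val", "out"])

def _myrule_impl(ctx):
    # Generate a file containing "abc" and return it in a provider.
    out = ctx.actions.declare_file(ctx.label.name + ".out")
    ctx.actions.write(out, "abc")
    return [
        MyInfo(val = "some value", out = out),
        # Returning the file as a default output lets other targets depend on
        # it by label, which the validation examples below rely on.
        DefaultInfo(files = depset([out])),
    ]

myrule = rule(
    implementation = _myrule_impl,
)
```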
//mypkg/myrules_test.bzl:
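A sketch of the test file, following the naming convention described below. The asserted provider value, the subject target's name, and the suite macro name myrules_test_suite are assumptions.

```python
load("@bazel_skylib//lib:unittest.bzl", "analysistest", "asserts")
load(":myrules.bzl", "MyInfo", "myrule")

def _provider_contents_test_impl(ctx):
    env = analysistest.begin(ctx)
    target_under_test = analysistest.target_under_test(env)
    # Assert on the provider returned by the rule under test.
    asserts.equals(env, "some value", target_under_test[MyInfo].val)
    return analysistest.end(env)

# The testing rule must be assigned to a global variable (rules are declared at
# loading time), and its name must end in "_test".
provider_contents_test = analysistest.make(_provider_contents_test_impl)

# Loading-time macro declaring the target under test and the testing rule.
def _test_provider_contents():
    myrule(name = "provider_contents_subject", tags = ["manual"])
    provider_contents_test(
        name = "provider_contents_test",
        target_under_test = ":provider_contents_subject",
    )

# Entry point called from the BUILD file: run each test macro and bundle the
# test targets into a suite.
def myrules_test_suite(name):
    _test_provider_contents()
    native.test_suite(
        name = name,
        tests = [":provider_contents_test"],
    )
```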
//mypkg/BUILD:
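A sketch of the BUILD file, assuming the production target is named mytarget and the suite macro is named myrules_test_suite as above.

```python
load(":myrules.bzl", "myrule")
load(":myrules_test.bzl", "myrules_test_suite")

# Production use of the rule under test.
myrule(
    name = "mytarget",
)

# Declares the per-test targets and a test_suite named "myrules_test".
myrules_test_suite(name = "myrules_test")
```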
The test can be run with bazel test //mypkg:myrules_test.
Aside from the initial load() statements, there are two main parts to the
file:
- The tests themselves, each of which consists of 1) an analysis-time implementation function for the testing rule, 2) a declaration of the testing rule via analysistest.make(), and 3) a loading-time function (macro) for declaring the rule-under-test (and its dependencies) and testing rule. If the assertions do not change between test cases, 1) and 2) may be shared by multiple test cases.
- The test suite function, which calls the loading-time functions for each test and declares a test_suite target bundling all tests together.
For consistency, follow the recommended naming convention: let foo stand for
the part of the test name that describes what the test is checking
(provider_contents in the above example). For example, a JUnit test method
would be named testFoo.
Then:
- the macro which generates the test and target under test should be named _test_foo (_test_provider_contents)
- its test rule type should be named foo_test (provider_contents_test)
- the label of the target of this rule type should be foo_test (provider_contents_test)
- the implementation function for the testing rule should be named _foo_test_impl (_provider_contents_test_impl)
- the labels of the targets of the rules under test and their dependencies should be prefixed with foo_ (provider_contents_)
Failure testing
It may be useful to verify that a rule fails given certain inputs or in a certain state. This can be done using the analysis test framework: the test rule created with analysistest.make should specify expect_failure:
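For illustration, a sketch assuming a testing rule named failure_testing_test; the message passed to asserts.expect_failure is a placeholder for whatever your rule reports.

```python
def _failure_testing_test_impl(ctx):
    env = analysistest.begin(ctx)
    # Assert on the nature of the failure, here via its message; the expected
    # message text is a placeholder.
    asserts.expect_failure(env, "This rule should never work")
    return analysistest.end(env)

failure_testing_test = analysistest.make(
    _failure_testing_test_impl,
    expect_failure = True,
)
```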
Also make sure that your target under test is specifically tagged ‘manual’.
Without this, building all targets in your package using :all will result in a
build of the intentionally-failing target and will exhibit a build failure. With
‘manual’, your target under test will build only if explicitly specified, or as
a dependency of a non-manual target (such as your test rule):
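A sketch of the corresponding setup macro; the target and rule names are illustrative and follow the naming convention above.

```python
def _test_failure():
    # The intentionally-failing target is tagged "manual" so that it is built
    # only as a dependency of the testing rule, never via :all.
    myrule(name = "this_should_fail", tags = ["manual"])

    failure_testing_test(
        name = "failure_testing_test",
        target_under_test = ":this_should_fail",
    )
```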
Verifying registered actions
You may want to write tests which make assertions about the actions that your rule registers, for example, using ctx.actions.run(). This can be done in your
analysis test rule implementation function. An example:
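A sketch of such an implementation function, assuming the rule under test registers exactly one action whose output is named after the target (as in the myrule sketch above); the name inspect_actions_test is illustrative.

```python
def _inspect_actions_test_impl(ctx):
    env = analysistest.begin(ctx)

    target_under_test = analysistest.target_under_test(env)
    actions = analysistest.target_actions(env)
    # Expect exactly one registered action, producing <target name>.out.
    asserts.equals(env, 1, len(actions))
    action_output = actions[0].outputs.to_list()[0]
    asserts.equals(
        env,
        target_under_test.label.name + ".out",
        action_output.basename,
    )
    return analysistest.end(env)

inspect_actions_test = analysistest.make(_inspect_actions_test_impl)
```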
analysistest.target_actions(env) returns a list of
Action objects which represent actions registered by the
target under test.
Verifying rule behavior under different flags
You may want to verify that your real rule behaves a certain way given certain
build flags. For example, your rule may behave differently if a user builds it
with -c opt versus -c dbg. Simply passing the flag when running the tests is
not enough: the test suite could not then contain both a test which verifies
the rule behavior under -c opt and another test which verifies the rule
behavior under -c dbg. Both tests would not be able to run in the same build!
This can be solved by specifying the desired build flags when defining the test
rule:
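For example, a sketch of a testing rule pinned to -c opt; the rule and implementation-function names are illustrative.

```python
myrule_c_opt_test = analysistest.make(
    _myrule_c_opt_test_impl,
    config_settings = {
        # Analyze the target under test as if -c opt had been given.
        "//command_line_option:compilation_mode": "opt",
    },
)
```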
config_settings overrides the values of the specified command line
options. (Any unspecified options will retain their values from the actual
command line).
In the specified config_settings dictionary, command line flags must be
prefixed with a special placeholder value //command_line_option:, as is shown
above.
Validating artifacts
The main ways to check that your generated files are correct are:
- You can write a test script in shell, Python, or another language, and create a target of the appropriate *_test rule type.
- You can use a specialized rule for the kind of test you want to perform.
Using a test target
The most straightforward way to validate an artifact is to write a script and add a *_test target to your BUILD file. The specific artifacts you want to
check should be data dependencies of this target. If your validation logic is
reusable for multiple tests, it should be a script that takes command line
arguments that are controlled by the test target’s args attribute. Here’s an
example that validates that the output of myrule from above is "abc".
//mypkg/myrule_validator.sh:
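A sketch of such a validator script, which fails unless the file passed as its first argument contains "abc":

```sh
#!/bin/bash
# Fail unless the file passed as the first argument contains exactly "abc".
if [ "$(cat "$1")" = "abc" ]; then
  echo "Passed"
  exit 0
else
  echo "Failed"
  exit 1
fi
```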
//mypkg/BUILD:
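A sketch of the corresponding test target, assuming the production target from the minimal example is named mytarget; the artifact to check is a data dependency, and its path is passed via args.

```python
# Needed for each target whose artifacts are to be checked. $(location) expands
# to the path of mytarget's generated output file, which the data attribute
# makes available to the test at run time.
sh_test(
    name = "validate_mytarget",
    srcs = [":myrule_validator.sh"],
    args = ["$(location :mytarget)"],
    data = [":mytarget"],
)
```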
Using a custom rule
A more complicated alternative is to write the shell script as a template that gets instantiated by a new rule. This involves more indirection and Starlark logic, but leads to cleaner BUILD files. As a side-benefit, any argument preprocessing can be done in Starlark instead of the script, and the script is slightly more self-documenting since it uses symbolic placeholders (for substitutions) instead of numeric ones (for arguments).

//mypkg/myrule_validator.sh.template:
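A sketch of the template, using an assumed %TARGET% placeholder for the path of the file to validate:

```sh
# Same check as before, but the path of the file to validate is filled in via
# the %TARGET% placeholder rather than passed as an argument.
if [ "$(cat %TARGET%)" = "abc" ]; then
  echo "Passed"
  exit 0
else
  echo "Failed"
  exit 1
fi
```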
//mypkg/myrule_validation.bzl:
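A sketch of the rule that instantiates the template; the rule name, attribute names, and the //mypkg:myrule_validator label are assumptions matching the BUILD sketch below.

```python
def _myrule_validation_test_impl(ctx):
    # Instantiate the template, substituting the path of the file to validate.
    exe = ctx.outputs.executable
    target = ctx.file.target
    ctx.actions.expand_template(
        output = exe,
        template = ctx.file._script,
        is_executable = True,
        substitutions = {"%TARGET%": target.short_path},
    )
    # Include the validated file in the runfiles so the script can read it.
    return [DefaultInfo(runfiles = ctx.runfiles(files = [target]))]

myrule_validation_test = rule(
    implementation = _myrule_validation_test_impl,
    attrs = {
        "target": attr.label(allow_single_file = True),
        "_script": attr.label(
            allow_single_file = True,
            default = Label("//mypkg:myrule_validator"),
        ),
    },
    test = True,
)
```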
//mypkg/BUILD:
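And a sketch of the corresponding BUILD entries, as an alternative to the sh_test target above:

```python
load(":myrule_validation.bzl", "myrule_validation_test")

# Exposes the template so the rule's _script attribute can refer to it by label.
filegroup(
    name = "myrule_validator",
    srcs = [":myrule_validator.sh.template"],
)

# Needed for each target whose artifacts are to be checked. Unlike the sh_test
# above, no output path appears in data or args; the rule locates the file to
# validate from the target itself.
myrule_validation_test(
    name = "validate_mytarget",
    target = ":mytarget",
)
```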
Alternatively, instead of using a template expansion action, the template could be inlined into the .bzl file as a string and expanded during the analysis phase using the str.format method or %-formatting.
Testing Starlark utilities
Skylib’s unittest.bzl
framework can be used to test utility functions (that is, functions that are
neither macros nor rule implementations). Instead of using unittest.bzl’s
analysistest library, unittest may be used. For such test suites, the
convenience function unittest.suite() can be used to reduce boilerplate.
//mypkg/myhelpers.bzl:
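For illustration, a trivial helper; its name and return value are chosen to match the test sketch that follows.

```python
def myhelper():
    return "abc"
```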
//mypkg/myhelpers_test.bzl:
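A sketch of the test file; unittest.suite() instantiates the testing rules and creates the test_suite, so no per-test setup macros are needed.

```python
load("@bazel_skylib//lib:unittest.bzl", "asserts", "unittest")
load(":myhelpers.bzl", "myhelper")

def _myhelper_test_impl(ctx):
    env = unittest.begin(ctx)
    asserts.equals(env, "abc", myhelper())
    return unittest.end(env)

myhelper_test = unittest.make(_myhelper_test_impl)

# unittest.suite() takes care of instantiating the testing rules and creating
# the test_suite.
def myhelpers_test_suite(name):
    unittest.suite(
        name,
        myhelper_test,
    )
```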
//mypkg/BUILD:
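And the BUILD file; the suite target name is an assumption.

```python
load(":myhelpers_test.bzl", "myhelpers_test_suite")

myhelpers_test_suite(name = "myhelpers_tests")
```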