Testing Contracts Like an Ape
Testing is critical for Web3 development, and Ape is no slouch here. Built on the powerful, well-trusted pytest library, Ape's test framework lets you verify your smart contracts with advanced features like fixtures, parametrization, reporting, and more!
Fixtures let you specify the setup order of the contracts and other resources your tests need, and customize them to your liking. To define a fixture, write a function in a conftest.py file with the pytest.fixture decorator applied; its input arguments are whatever other fixtures it depends on.
In this example, we have two account fixtures that we will use throughout our test suite: the owner account and the receiver account. Notice that these use two different indices of the built-in accounts fixture, which Ape provides to give you access to test accounts within your tests. These fixtures have "session" scoping, which means they are created once and are usable over the entire test session.
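The account fixtures described above can be sketched as follows. This is a simplified, standalone model, not real Ape code: pytest's decorator and Ape's built-in `accounts` fixture are stubbed here so the sketch runs by itself. In a real conftest.py you would use `@pytest.fixture(scope="session")`, and Ape would inject `accounts` for you.

```python
# Sketch of account fixtures in a conftest.py (Ape/pytest objects stubbed).
def session_fixture(func):
    # Stand-in for @pytest.fixture(scope="session"): created once,
    # reused for the whole test session.
    return func

# Stand-in for Ape's built-in `accounts` fixture of test accounts.
accounts = ["0xOwner", "0xReceiver"]

@session_fixture
def owner(accounts):
    # First test account plays the deployer/owner role.
    return accounts[0]

@session_fixture
def receiver(accounts):
    # A different index, so the receiver is a distinct account.
    return accounts[1]
```

The key idea is that each role is just an index into the test accounts, and session scoping means each fixture function runs only once.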
A great use case for fixtures is deploying contracts as the base scenario for your test suite. Here, for example, we want to deploy our token contract once at the start of the test suite and reuse a snapshot of that deployment in every test case that uses the token fixture. This is why we also give this fixture "session" scoping: the deployment happens only once, not before every test. Leaning on the snapshotting behind fixture scoping improves the setup time of the test suite, since only a single deployment is made overall.
To create the token fixture, we use our owner fixture, which plays an essential role in our test suite, to deploy the Token contract type compiled from our project. We then return the deployed contract instance from this function, and that instance is stored for use in every test case.
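A sketch of this deployment fixture is below. The `owner.deploy(project.Token)` shape mirrors how Ape deployments read, but the project and account objects are stubbed so the sketch runs standalone; none of the fake classes are real Ape APIs.

```python
# Sketch of a session-scoped deployment fixture (Ape objects stubbed).
def session_fixture(func):
    # Stand-in for @pytest.fixture(scope="session").
    return func

class FakeContractType:
    # Stand-in for project.Token, the compiled contract type.
    name = "Token"

class FakeProject:
    Token = FakeContractType

class FakeAccount:
    # Stand-in for an Ape test account that can deploy contracts.
    def deploy(self, contract_type):
        return f"<{contract_type.name} instance>"

@session_fixture
def token(owner, project):
    # Deploy once per session; every test reuses this instance.
    return owner.deploy(project.Token)

instance = token(FakeAccount(), FakeProject)
```

Because the fixture is session-scoped, the deploy call runs a single time, and each test receives the same contract instance.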
For more information about fixtures, check out the pytest documentation.
Now that we have our fixtures, we can write the tests themselves. Our test suite contains several functions, each representing a test case. These work similarly to fixtures in that their arguments are fixtures from our conftest.py file; the differences are that there is no pytest.fixture decorator, and every test function's name must start with test_.
Initial Conditions Test
In our first test, we take our deployed token contract and the owner account that deployed it and make some assertions about the contract's initial conditions. We make no changes to the contract's state during this test case, only calling view methods to check that the initial conditions are what we expect. For example, the ERC-20 Metadata extension defines the token's name, symbol, and decimals methods. Additionally, we check non-ERC-20 methods like owner to see that the internal state matches our setup conditions, where the owner is set to the deployer. Lastly, we check the pre-mint of 1000 tokens given to the owner. The totalSupply and balanceOf methods should reflect that these 1000 tokens were issued, which is important because we want the accounting to be consistent from the start.
An initial-conditions check like this is an excellent foundation for a test suite because it validates all of your base-case expectations for the contract, which other tests with more complex actions can build on.
The next case I want to show is the transfer method, which allows an owner of tokens to send them to another address. The ERC-20 specification defines several conditions that should hold for this transfer to be considered valid.
Again, we will use the token and owner fixtures defined in our conftest.py, plus our receiver fixture to represent the account that receives the tokens. It's important that our receiver account is different from the owner account so we can show tokens were actually transferred between accounts.
To start our test case, we want to demonstrate that we begin with the correct initial conditions, namely that our owner has 1000 tokens, and our receiver has none.
This sets the scene for what we do next: executing the actual transfer. One important thing to check is that the transaction emits the Transfer event, which the ERC-20 specification requires. Since this is the only event that should occur during this transaction, we check that there is exactly one Transfer event and that all of the arguments of the log are specified correctly.
Finally, we check the state changes after performing this transaction, in which we sent 100 tokens from the owner to the receiver. We assert that the owner's balance is 100 tokens lower and the receiver's balance is 100 tokens higher. We also ensure that the totalSupply did not change as a result of the transaction.
The above scenario is an excellent example of a "positive assertion": we checked the actual outcome of an action against our expected result. This is an effective strategy for testing smart contract behavior and something you should master.
The other type of scenario to check for is "negative assertions": checks that an action cannot be performed under the current conditions. Per ERC-20, a transfer should fail if the authorizing account does not have a sufficient balance to complete the call. From our previous action, we know that our receiver account has 100 tokens; if we try to transfer more than that, the transaction will fail. We can show this using the ape.reverts context manager, which checks that the code inside the with statement fails with a contract logic revert.
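A minimal model of this pattern is below. The `reverts` context manager here is a hypothetical stand-in for ape.reverts, and the ContractLogicError and Token classes are toy versions built just so the sketch runs standalone.

```python
# Toy stand-in for ape.reverts: assert the enclosed call reverts.
from contextlib import contextmanager

class ContractLogicError(Exception):
    """Stand-in for a contract-level revert."""

@contextmanager
def reverts():
    try:
        yield
    except ContractLogicError:
        return  # reverted as expected; suppress the exception
    raise AssertionError("expected the call to revert")

class Token:
    def __init__(self, holder, balance):
        self.balances = {holder: balance}

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ContractLogicError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

# The receiver holds 100 tokens; moving 101 must revert.
token = Token("0xReceiver", 100)
with reverts():
    token.transfer("0xReceiver", "0xOwner", 101)
```

If the call inside the with block does not revert, the context manager itself fails the test, which is exactly the behavior you want from a negative assertion.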
To complete the testing of our token contract, we should check edge cases: scenarios at the boundary of the normal flow. The ERC-20 specification describes the scenario in which a holder moves 0 tokens to another account. Under this scenario, ERC-20 recommends not reverting, even though the situation is unusual. Here the check is simply that the call succeeds; no further checks are required.
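In the toy model, that edge case is a one-liner: a zero-amount transfer always satisfies the balance check, so it must not revert. (Again, illustrative stand-in code, not Ape code.)

```python
# Edge case: transferring 0 tokens should not revert (ERC-20 recommendation).
class Token:
    def __init__(self, holder, balance):
        self.balances = {holder: balance}

    def transfer(self, sender, receiver, amount):
        # 0 <= any balance, so a zero transfer never trips this check.
        if self.balances.get(sender, 0) < amount:
            raise Exception("insufficient balance")
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

token = Token("0xOwner", 1000)
token.transfer("0xOwner", "0xReceiver", 0)  # succeeds; nothing moves
assert token.balances["0xOwner"] == 1000
```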
Our last test case covers a more advanced method defined in ERC-20 that lets an owner of tokens grant a certain allowance of tokens that a spender account can move on its behalf. This is useful because ERC-20 transfer methods do not allow any further processing in the call, and it is fairly common to want to ensure a certain amount of tokens has been given to a calling contract before doing anything in return (such as in a DEX or NFT trade).
To start, we define one additional role, spender, as another account index from our built-in accounts fixture. After that role is set up, we show that some initial conditions hold at the start of our test.
First, we make a negative assertion that, per the ERC-20 specification, a spender account cannot move any tokens on behalf of another party without prior approval. Here we call the transferFrom function inside ape.reverts to show that this holds.
Now that we've shown this negative assertion, we show its positive counterpart: if there is an approval, the spender is authorized to move tokens on behalf of the owner account. ERC-20 defines the Approval event, which must be emitted whenever the allowance changes for a particular account. We check that only one event of that type is emitted and that its logged data matches our expectations. Lastly, we check the updated allowance for the spender account from the owner account.
Now that we've set up the allowance for the spender account, the next step is to perform a transferFrom action. Like the previous test, we want to show that the Transfer event is also emitted during the transferFrom method and that the logged data matches the expected values. Finally, we want to show that the allowance for the spender account is lowered by the same amount of tokens that were transferred during the call to the transferFrom method.
Here we perform another variation of the first negative assertion: even if a spender account has been given an allowance, it cannot move more than that amount using transferFrom. This is somewhat redundant with the previous negative assertion, but it is nice to demonstrate that corner cases hold as expected.
To finish the test case, we show that transferring the last of the allowance works and decrements the allowance to zero. We also check that 300 tokens in total moved from the owner to the receiver, and that the spender account did not gain anything by performing the transfers. Additionally, we double-check that no tokens were created or destroyed during these calls.
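The full approve/transferFrom flow just described can be sketched end to end with the same toy model. This is not Ape code: account names, the event list, and the split of the 300-token allowance into two spends are illustrative assumptions that follow the narrative above.

```python
# Toy model of the full approve/transferFrom flow (illustrative only).
class ContractLogicError(Exception):
    pass

class Token:
    def __init__(self, deployer):
        self.balances = {deployer: 1000}
        self.allowances = {}  # (owner, spender) -> remaining allowance
        self.events = []

    def balanceOf(self, account):
        return self.balances.get(account, 0)

    def totalSupply(self):
        return sum(self.balances.values())

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount
        self.events.append(("Approval", owner, spender, amount))

    def allowance(self, owner, spender):
        return self.allowances.get((owner, spender), 0)

    def transferFrom(self, spender, owner, receiver, amount):
        if self.allowance(owner, spender) < amount:
            raise ContractLogicError("insufficient allowance")
        self.allowances[(owner, spender)] -= amount
        self.balances[owner] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.events.append(("Transfer", owner, receiver, amount))

token = Token("0xOwner")

# Negative assertion: no approval yet, so transferFrom must revert.
try:
    token.transferFrom("0xSpender", "0xOwner", "0xReceiver", 1)
    raise AssertionError("should have reverted")
except ContractLogicError:
    pass

# Approve 300 tokens; an Approval event records the change.
token.approve("0xOwner", "0xSpender", 300)
assert token.events[-1] == ("Approval", "0xOwner", "0xSpender", 300)

# Spend 200: Transfer event emitted, allowance decremented to 100.
token.transferFrom("0xSpender", "0xOwner", "0xReceiver", 200)
assert token.events[-1] == ("Transfer", "0xOwner", "0xReceiver", 200)
assert token.allowance("0xOwner", "0xSpender") == 100

# Exceeding the remaining allowance still reverts.
try:
    token.transferFrom("0xSpender", "0xOwner", "0xReceiver", 101)
    raise AssertionError("should have reverted")
except ContractLogicError:
    pass

# Drain the last 100: allowance hits zero and the accounting balances.
token.transferFrom("0xSpender", "0xOwner", "0xReceiver", 100)
assert token.allowance("0xOwner", "0xSpender") == 0
assert token.balanceOf("0xOwner") == 700
assert token.balanceOf("0xReceiver") == 300   # 300 moved in total
assert token.balanceOf("0xSpender") == 0      # spender gained nothing
assert token.totalSupply() == 1000            # nothing created/destroyed
```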
Now that we have several test cases demonstrating the proper function of our token contract, we will want to execute these functions using ape test.
ape test is a thin wrapper around pytest that initializes our test session with several Ape-specific setup conditions, such as our built-in fixtures and testing snapshots, and establishes the network connection. It will also download dependencies, compile the project, and ensure that all the artifacts are up to date before running the suite.
Since we are using pytest under the hood, any pytest-specific flag will work. For example, if we'd like to filter down and run only one specific test, we can use the -k flag to do that. Also, notice how we do not need to compile Token.vy a second time.
We can also run tests from just one file by providing a path to that testing file. However, since all of our tests are in that one file already, it will run them all.
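Putting those invocations together, a typical session might look like the following. The test name and file path are illustrative; any pytest selection flag works the same way.

```shell
# Run the whole suite (compiles the project first if needed)
ape test

# Run a single test by keyword, via pytest's -k flag (name illustrative)
ape test -k test_transfer

# Run only the tests in one file (path illustrative)
ape test tests/test_token.py
```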
Now, let's show what happens when a test case fails. When a test case fails, ape test will show which file is failing, where it failed, and why. If an assertion is causing the failure, it will also show the values on both sides of the assertion.
If you want to debug a test case interactively, just run with the -I (interactive mode) flag. It drops you into a console session at the line where the failure occurred. This interactive prompt has access to any variable available inside that test, including any built-in fixtures that Ape provides.
To stop using this mode, type exit, and the session will continue.
Hopefully, the above gave you a detailed introduction to testing with Ape. In future videos, we will show more advanced features, like profiling gas usage and measuring code coverage with our advanced reporting features. Have a blast testing with Ape, and remember: "untested code is just ugly documentation"!