We saw in the previous article what builders, coupled with a fluent API, could bring to the table in terms of test readability. However, we quickly realised that the more complex the production code was, the more complex the technical code became. In this article, we will discuss how to improve on this using the SAS testing technique. But first, let’s look at the situation we faced at the beginning of this journey.
In the beginning were unit tests
When I started working with my client in March 2020, our API was very well covered: unit tests for our code, end-to-end Selenium tests for all websites. However, since the backend was unique and shared by all websites, the complexity was quite high. The unit tests were particularly cumbersome, some of them being hundreds of lines long, with a global setup weighing 500 or 600 LOC. Reading and maintaining them was tedious and led developers to rely on hazardous copy/pastes, which, in turn, made the tests even less readable and maintainable: a downward spiral that was difficult to stop.
We first asked ourselves the question of the granularity of the tests: should we really test each component in isolation, or do we want to test a complete scenario while stubbing only the I/O? With the team, we chose the second option. So, in early 2021, we started working with builders. It was a large task because of the complexity of the production code. While the tests themselves were easier to work with, we hit the same issues as in the previous post: dependency hell, dependency management and stub code mixed in the same class, and so on. We also had our fair share of dedicated business objects (“Specifications”), as proposed in the “Improvements” section. Put that at the scale of an e-commerce website and you can imagine the mess we were in.
In early 2022, thanks to a large technical migration project, we started separating the concerns in the tests and builders. We used an approach like “business specifications/stub/dependencies/tests”.
Scenarios: business builders to set up the test context
One of the criticisms of the 2021 code was that everything had to be specified in the tests in order to make them pass, including data that were irrelevant to the scope of the test, which created needless verbosity. The same went for the fuzzers, which were always created inside each test even though they are a technical detail.
Definition
So we came up with the scenario concept: an object proposing a fluent API whose goal was to read like a domain-specific language. It had to be meaningful from a business-only point of view. We essentially created a “business builder” where everything was designed to automatically set up a “happy path” context; in other words, we implemented Martin Fowler’s Object Mother pattern. This builder was also in charge of managing the fuzzing, which allowed us to remove boilerplate from the tests themselves. They therefore became more readable because they only contain the important business aspects: the ones under test.
In practice
Let’s go back to the BookShop code from before the tests and start with the first one: listing all the books from the catalog when the API is called on the api/Catalog route with a GET method. This is the full “happy path”: basic use case, no specific condition, the paging feature should not get in the way, we just want it to work. The scenario declaration in the test is therefore very simple:
public class CatalogControllerShould
{
[Fact]
public async Task List_all_books_when_called_on_GetCatalog()
{
var scenario = new CatalogListScenario();
}
}
Inside the scenario, though, we perform quite some work: fuzzer declaration (while letting the user provide a seed through the scenario constructor) and BookSpecification creation (while exposing them via a get-only property):
public class CatalogListScenario
{
private const int DefaultNumberOfBooksPerPage = 5;
public BookSpecification[] Books { get; }
public CatalogListScenario(int? seed = null)
{
var fuzzer = new Fuzzer(seed);
var numberOfBooksToGenerate = fuzzer.GenerateInteger(1, DefaultNumberOfBooksPerPage);
Books = Enumerable.Range(1, numberOfBooksToGenerate)
.Select(_ => new BookSpecification(fuzzer))
.ToArray();
}
}
This is actually fairly simple: we just moved pieces of technical code from the test to a dedicated class (the “business builder”). If we wanted to use the builders from the previous article, we would simply pass scenario.Books to the builder’s WithBooks() method and we’d be good to go.
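For illustration, here is a minimal sketch of that wiring; the CatalogBuilder name and its Build() method are assumptions standing in for the builders of the previous article, only WithBooks() is mentioned here:
// CatalogBuilder and Build() are placeholders for the previous article’s builders;
// only WithBooks() is referenced in this post.
var scenario = new CatalogListScenario();
var catalog = new CatalogBuilder()
    .WithBooks(scenario.Books)
    .Build();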
Going further
Now, if we wanted to test more complex behaviours, we would have to enrich the scenario. For example, to test the paging feature with 3 books per page and 5 books in the entire catalog, we would check that we get 2 pages, one with 3 books and the other with only 2. The test might look like this:
public class CatalogControllerShould
{
[Fact]
public async Task Return_2_pages_when_there_are_5_books_in_the_catalog_and_the_number_of_books_to_display_on_one_page_is_3()
{
var scenario = new CatalogListScenario()
.WithNumberOfBooksPerPage(3)
.WithRandomBooks(5);
}
}
With some refactoring, the scenario becomes:
public class CatalogListScenario
{
private int _numberOfBooksPerPage = 5;
private readonly Fuzzer _fuzzer;
public BookSpecification[] Books { get; private set; }
public CatalogListScenario(int? seed = null)
{
_fuzzer = new Fuzzer(seed);
var numberOfBooksToGenerate = _fuzzer.GenerateInteger(1, _numberOfBooksPerPage);
Books = GenerateRandomBooks(numberOfBooksToGenerate);
}
public CatalogListScenario WithNumberOfBooksPerPage(int numberOfBooksPerPage)
{
_numberOfBooksPerPage = numberOfBooksPerPage;
return this;
}
public CatalogListScenario WithRandomBooks(int numberOfBooksToGenerate)
{
Books = GenerateRandomBooks(numberOfBooksToGenerate);
return this;
}
private BookSpecification[] GenerateRandomBooks(int numberOfBooksToGenerate)
{
return Enumerable.Range(1, numberOfBooksToGenerate)
.Select(_ => new BookSpecification(_fuzzer))
.ToArray();
}
}
The test itself remains readable and the scenario creates the right objects for the test to pass. Except that… the test doesn’t test anything at all 😁
Calling the API thanks to the WebApplicationFactory
Introduction
This is when Benoît showed us .Net’s WebApplicationFactory. This component allows us to run an API in memory, in the tests’ context, using .Net’s IoC container, which spares us from creating all the dependencies manually. This is a huge reduction in complexity. Here’s a very simple piece of code that runs the API, queries it and reads the response:
var api = new WebApplicationFactory<Program>();
var client = api.CreateDefaultClient();
var response = await client.GetAsync("api/Catalog?currency=EUR");
Check.That(response.StatusCode).IsEqualTo(HttpStatusCode.OK);
(plus the public partial class Program { } declaration shown in the official documentation; a small hack to make the WebApplicationFactory work with minimal APIs)
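For reference, that declaration is a single line added at the bottom of Program.cs:
// Makes the compiler-generated Program class visible to WebApplicationFactory<Program>.
public partial class Program { }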
And that’s it! In only 4 lines we:
- created the API along with all its dependencies;
- created an HttpClient in order to be able to query it;
- checked that the response’s status code was 200.
What about tests?
OK, that’s cool, but now we need to stub the I/O, otherwise we will be writing integration tests. This is why the WebApplicationFactory comes with the ConfigureTestServices() method:
var api = new WebApplicationFactory<Program>().WithWebHostBuilder(builder =>
{
builder.ConfigureTestServices(services =>
{
var metadataProvider = Substitute.For<IProvideBookMetadata>();
services.AddTransient<IProvideBookMetadata>(_ => metadataProvider);
});
});
This is where all the stubs will be made. However, if the stubs for every low-level dependency must be written here, it’s going to be a nightmare to maintain. This is where we introduce the last piece of the puzzle: the simulators.
The simulators: components dedicated to stubbing
While the scenarios focus on the test data, each simulator is in charge of stubbing one dependency according to what is described in the scenario. Then all we need to do is use this simulator in place of the dependency in the WebApplicationFactory, and we will have correctly split the code’s concerns. Nothing very difficult here. For example, stubbing everything we need for our first test looks like this:
[Fact]
public async Task List_all_books_when_called_on_GetCatalog()
{
var scenario = new CatalogListScenario();
var api = new WebApplicationFactory<Program>().WithWebHostBuilder(builder =>
{
builder.ConfigureTestServices(services =>
{
var metadataProvider = Substitute.For<IProvideBookMetadata>();
var bookReferences = scenario.Books.Select(book => book.ToBookReference()).ToList();
metadataProvider.Get().Returns(bookReferences);
var inventoryProvider = Substitute.For<IProvideInventory>();
var books = scenario.Books.Select(book => book.ToBook()).ToList();
inventoryProvider.Get(Arg.Any<IEnumerable<BookReference>>())
.Returns(callInfo =>
{
var requestedBooksIsbns = callInfo.Arg<IEnumerable<BookReference>>();
return books.IntersectBy(requestedBooksIsbns, book => book.Reference);
});
services.AddTransient(_ => metadataProvider);
services.AddTransient(_ => inventoryProvider);
services.AddTransient(_ => new BookAdvisorHttpClient(new HttpClient(new StubHttpMessageHandler())
{
BaseAddress = new Uri("https://fake-address-for-tests")
}));
});
});
var client = api.CreateDefaultClient();
var response = await client.GetAsync("api/Catalog?currency=EUR");
Check.That(response.StatusCode).IsEqualTo(HttpStatusCode.OK);
}
Where StubHttpMessageHandler is a fake class that inherits from HttpMessageHandler in order to return an empty list of RatingsResponse objects.
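A possible sketch of such a handler (the actual implementation lives in the repository; here we assume it simply answers every request with an empty list of RatingsResponse, as described above):
public class StubHttpMessageHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Whatever the request, answer 200 OK with an empty list of ratings.
        return Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = JsonContent.Create(Array.Empty<RatingsResponse>())
        });
    }
}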
It is somewhat cumbersome, so we are going to introduce a simulator for each part: book metadata, inventory and ratings. Here is what the inventory simulator might look like:
public class InventorySimulator
{
private readonly IProvideInventory _inventoryProvider;
public InventorySimulator(CatalogListScenario scenario)
{
_inventoryProvider = Substitute.For<IProvideInventory>();
Simulate(scenario);
}
private void Simulate(CatalogListScenario scenario)
{
var books = scenario.Books.Select(book => book.ToBook()).ToList();
_inventoryProvider.Get(Arg.Any<IEnumerable<BookReference>>())
.Returns(callInfo =>
{
var requestedBooksIsbns = callInfo.Arg<IEnumerable<BookReference>>();
return books.IntersectBy(requestedBooksIsbns, book => book.Reference);
});
}
public void Register(IServiceCollection services)
{
services.AddTransient(_ => _inventoryProvider);
}
}
In addition, if we move the code creating the API into a dedicated class, we end up with a test that is clear from a business point of view:
[Fact]
public async Task List_all_books_when_called_on_GetCatalog()
{
var scenario = new CatalogListScenario();
var api = new CatalogApi(scenario);
var response = await api.GetCatalog("EUR");
Check.That(response.StatusCode).IsEqualTo(HttpStatusCode.OK);
var catalogResponse = await response.Content.ReadFromJsonAsync<CatalogResponse>();
Check.That(catalogResponse).IsNotNull();
Check.That(catalogResponse!.Books).HasSize(scenario.Books.Length);
Check.That(catalogResponse.TotalNumberOfPages).IsEqualTo(1);
}
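That dedicated class is not shown in the post (the full code is in the repository), but a rough sketch at this stage, before the library, could look like the following; the MetadataSimulator and BookAdvisorSimulator names and constructors are assumptions, only the InventorySimulator above is spelled out:
public class CatalogApi
{
    private readonly HttpClient _client;
    public CatalogApi(CatalogListScenario scenario)
    {
        var factory = new WebApplicationFactory<Program>().WithWebHostBuilder(builder =>
        {
            builder.ConfigureTestServices(services =>
            {
                // Each simulator stubs one dependency according to the scenario
                // and registers itself into the IoC container.
                new MetadataSimulator(scenario).Register(services);
                new InventorySimulator(scenario).Register(services);
                new BookAdvisorSimulator().Register(services);
            });
        });
        _client = factory.CreateDefaultClient();
    }
    public Task<HttpResponseMessage> GetCatalog(string currency)
    {
        return _client.GetAsync($"api/Catalog?currency={currency}");
    }
}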
The full code up to this point is available here in the Github repository. The complete implementation of the second test (for the paging feature) is also available.
SAS tests: Scenario/API/Simulator
By combining these 3 concepts, we realise we can write readable acceptance tests whose intent is clear and where each behaviour is isolated. The dependency and spaghetti-code problems we had with the basic builders approach are then solved, even if the SAS technique is not without its flaws.
However, we still have some very technical code just to instantiate the WebApplicationFactory and register each simulator into the IoC.
A library to help with SAS tests
In order to externalise the boilerplate code and the technical effort, Benoît created a Nuget package named sas. It’s a library providing a fluent syntax and helpers to ease the creation and maintenance of tests using the “scenario/API/simulators” approach. Let’s see how we can use it with the tests we wrote earlier.
The BaseScenario class as a marker
There is little change when it comes to the scenarios. These objects are aimed at modelling the tests’ business context, so it’s hard for a library to provide useful tooling for them. However, we might want to have each of them inherit from the BaseScenario class so that the API can detect them later in the process. The base class itself doesn’t provide anything else, though: it’s merely a marker.
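Concretely, the change boils down to the class declaration (a minimal sketch, assuming BaseScenario requires nothing more from us):
// The body of the scenario stays exactly as before; inheriting from
// BaseScenario only lets the library detect the scenario later on.
public class CatalogListScenario : BaseScenario
{
    // ... same fuzzer, Books property and fluent methods as before
}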
Building the API: getting rid of the boilerplate
In the previous tests we created a CatalogApi class whose goal is to abstract the WebApplicationFactory and provide meaningful methods matching the API’s endpoints. The initialisation is a bit tedious: each simulator is registered into the WebApplicationFactory’s IoC, passing (or not) the scenario in order to enable the stubbing part. If we had to create another API (e.g. for the checkout process) we would probably need to copy/paste a large part of this code. The simulators would be different of course, but the rest would be very similar. To solve this, the library provides, on the one hand, the BaseApi class (and its lazy counterpart, LazyBaseApi) and, on the other hand, the BaseSimulator class (in the sas.simulators.nsubstitute package; we can use AbstractSimulator in the case of another mocking library or framework).
The point is simply to have each simulator inherit from the BaseSimulator class, override the Simulate() method to implement the stub, and then have our CatalogApi inherit from BaseApi. Finally, in the API’s constructor, we list the simulators. That’s it. Let’s see how it goes with our example.
The simulators
Here’s what a simple simulator such as the InventorySimulator looks like:
public class InventorySimulator : BaseSimulator<IProvideInventory>
{
protected override void Simulate(BaseScenario baseScenario)
{
if (baseScenario is not CatalogListScenario scenario)
{
return;
}
var books = scenario.Books.Select(book => book.ToBook()).ToList();
Instance.Get(Arg.Any<IEnumerable<BookReference>>())
.Returns(callInfo =>
{
var requestedBooksIsbns = callInfo.Arg<IEnumerable<BookReference>>();
return books.IntersectBy(requestedBooksIsbns, book => book.Reference);
});
}
}
As we can see, only the code that has an actual added value remains: the stub itself. The stub creation using NSubstitute and its registration into the IoC are done for us. All we have to do is inherit from BaseSimulator with the appropriate type and use the Instance field exposed by the parent class. Note the BaseScenario parameter in the override of the Simulate() method.
With a more complex simulator, such as the HTTP client to the BookAdvisor service, it’s even better:
public class BookAdvisorSimulator : BaseHttpClientSimulator<BookAdvisorHttpClient>
{
protected override void Simulate(BaseScenario scenario)
{
HttpClient.Get(Arg.Is<string>(route => route.StartsWith("reviews/ratings")))
.Returns(_ => new HttpResponseMessage(HttpStatusCode.OK)
{
Content = JsonContent.Create(new RatingsResponse(5m, 1))
});
}
}
In only 11 lines of code, we do the exact same thing as before without the cumbersome boilerplate of the custom HttpMessageHandler or the IoC registration. There again, the BaseHttpClientSimulator base class (in the sas.simulators.http.nsubstitute package) allows us to focus on the only code that actually matters.
What about the API?
On the API side, it’s the same thing:
public class CatalogApi : BaseApi<Program>
{
private CatalogApi(CatalogListScenario scenario, ISimulateBehaviour[] simulators, IEnrichConfiguration[] configurations)
: base(scenario, simulators, configurations) {}
public static CatalogApi CreateApi(CatalogListScenario scenario)
{
return new CatalogApi(scenario, [
new BookAdvisorSimulator(),
new InventorySimulator(),
new MetadataSimulator()
], []);
}
public async Task<HttpResponseMessage> GetCatalog(string currency, int numberOfBooksPerPage = 5)
{
return await HttpClient.GetAsync($"api/Catalog?currency={currency}&pageNumber=1&numberOfItemsPerPage={numberOfBooksPerPage}");
}
}
The whole code managing the WebApplicationFactory is now gone and is handled by the BaseApi. Note that the entry point is provided on the first line: for minimal APIs like here, it’s the Program class, but in more traditional ASP .Net Core it will probably be Startup.
Gone is the technical code: the BaseApi is instantiated with the simulators, possibly with other classes in charge of customising the configuration, and we’re done. We can focus on the methods we want to expose to the tests while benefiting from the HttpClient property exposed by the BaseApi.
Extra: tools to handle payloads
Moreover, if we take a look at the tests, we regularly see JSON deserialization and similar checks (on the status code, for example). The sas.nfluent package offers several extensions to the NFluent assertion library to help with those and get the payload without breaking a sweat. Our first test then becomes:
[Fact]
public async Task List_all_books_when_called_on_GetCatalog()
{
var scenario = new CatalogListScenario();
var api = CatalogApi.CreateApi(scenario);
var response = await api.GetCatalog("EUR");
Check.That(response).IsOk<CatalogResponse>()
.WhichPayload(catalogResponse =>
{
Check.That(catalogResponse).IsNotNull();
Check.That(catalogResponse!.Books).HasSize(scenario.Books.Length);
Check.That(catalogResponse.TotalNumberOfPages).IsEqualTo(1);
});
}
A bit nicer than doing ReadFromJsonAsync() and status code checks manually.
The complete code using the sas library can be found on the with_sas_library branch of the Github repo.
Better-organized tests to focus on what matters most
As we have just seen, SAS tests combined with the eponymous library allow us to improve several things:
- test readability, with a clear business orientation and a fluent syntax;
- separation and isolation of concerns;
- boilerplate code reduced to its simplest form;
- the entire code is exercised, from the API’s endpoint down to the calls to the low-level layers;
- no need to manually recreate the dependency tree in our test code;
- we can choose to split a single “god object” controller into several, more testable APIs in the test code;
- we can avoid simulators altogether and use the same structure to perform integration tests (see the sketch after this list);
- a few helpers allow us to simplify the writing and reading of tests even further.
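To illustrate the integration-test point above, here is a purely hypothetical factory that could be added inside CatalogApi (it is not part of the repository): with no simulators registered, the same scenario and API structure exercise the real dependencies.
// Hypothetical factory, added inside CatalogApi: with no simulators and no
// configuration overrides, the API runs against its real dependencies.
public static CatalogApi CreateIntegrationApi(CatalogListScenario scenario)
{
    return new CatalogApi(scenario, [], []);
}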
But SAS tests do not solve everything. If the production code is complex, with many infrastructure adapters and cascading dependencies, that will translate into more simulators and more elaborate scenarios. Indeed, if the business logic is not clearly separated, our scenarios will become difficult to build and maintain.
Another point to consider is the learning curve. The introduction should be done gradually with the entire team, ideally using ensemble programming on a new project. But using SAS tests in an existing complex project without proper preparation can be hazardous and time-consuming from both the technical and change management points of view.