Testing Without Mocks: A Pattern Language.
So a few days ago I released this massive update to my article, "Testing Without Mocks: A Pattern Language." It's 40 pages long if you print it. (Which you absolutely should. I have a fantastic print stylesheet.) I promised a thread explaining what it's all about.
This is the thread. If you're not interested in TDD or programmer tests, you might want to mute me for a bit.
Here's the article I'm talking about: https://www.jamesshore.com/v2/projects/testing-without-mocks/testing-without-mocks
2/ First, why bother? Why write 40 pages about testing, with or without mocks?
Because testing is a big deal. People who don't have automated tests waste a huge amount of time manually checking their code, and they have a ton of bugs, too.
The problem is, people who DO have automated tests ALSO waste a huge amount of time. Most test suites are flaky and SLOOOOW. That's because the easy, obvious way to write tests is to make end-to-end tests that are automated versions of manual tests.
3/ Folks in the know use mocks and spies (I'll say "mocks" for short) to write isolated unit tests. Now their tests are fast! And reliable! And that's great!
Except that now their tests have lots of detail about the interactions in the code. Structural refactorings become really hard. Sometimes, you look at a test, and realize: all it's testing... is itself.
Not to mention that the popular way to use mocks is to use a mocking framework and... wow. Have you seen what those tests look like?
4/ So we don't want end-to-end tests, we don't want mocks. What do we do?
The people really REALLY in the know say "bad tests are a sign of bad design." They're right! They come up with things like Hexagonal Architecture and (my favorite) Gary Bernhardt's Functional Core, Imperative Shell. It separates logic from infrastructure so logic can be tested cleanly.
Totally fixes the problem.
For logic.
Anything with infrastructure dependencies… well… um… hey look, a squirrel! (runs for hills)
5/ Not to mention that (checks notes) approximately none of us are working in codebases with good separation of logic and infrastructure, and (checks notes again) approximately none of us have permission to throw away our code and start over with a completely new architecture.
(And even if we did have permission, throwing away code and starting over is a Famously Poor Business Decision with Far-Reaching Consequences.)
6/ So we don't want end-to-end tests, we don't want mocks, we can't start over from scratch... are we screwed? That's it, the end, life sucks?
No.
That's why I wrote 40 pages. Because I've figured out another way. A way that doesn't use end-to-end tests, doesn't use mocks, doesn't ignore infrastructure, doesn't require a rewrite. It's something you can start doing today, and it gives you the speed, reliability, and maintainability of unit tests with the power of end-to-end tests.
7/ I call it (for now, anyway, jury's out, send me your article naming ideas) "Testing With Nullables."
It's a set of patterns for combining narrow, sociable, state-based tests with a novel infrastructure technique called "Nullables."
At first glance, Nullables look like test doubles, but they're actually production code with an "off" switch.
8/ This is as good a point as any to remind everyone that nothing is perfect. End-to-end tests have tradeoffs, mocks have tradeoffs, FCIS has tradeoffs... and Nullables have tradeoffs. All engineering is tradeoffs.
The trick is to find the combination of good + bad that is best for your situation.
9/ Nullables have a pretty substantial tradeoff. Whether it's a big deal or not is up to you. Having worked with these ideas for many years now, I think the tradeoffs are worth it. But you have to make that decision for yourself.
Here's the tradeoff: Nullables are production code with an off switch.
Production code.
Even though the off switch may not be used in production.
10/ Okay, enough foreplay. Let's talk about how this thing works. Again, you can see all the details in the article: https://www.jamesshore.com/v2/projects/testing-without-mocks/testing-without-mocks
11/ The fundamental idea is that we're going to test everything—everything!—with narrow, sociable, state-based tests.
Narrow tests are like unit tests: they focus on a particular class, method, or concept.
Sociable tests are tests that don't isolate dependencies. The tests run the real dependencies, although they don't test them directly.
And state-based tests look at return values and state changes, not interactions.
(There's a ton of code examples in the article, btw, if you want them.)
12/ This does raise some questions about how to manage dependencies. Another core idea is "Parameterless Instantiation." Everything can be instantiated with a constructor, or factory method, that takes NO arguments.
Instead, classes do the unthinkable: they instantiate their own dependencies. GASP!
Encapsulation, baby.
(You can still take the dependencies as an optional parameter.)
13/ People ask: "but if we don't use dependency injection frameworks..."
I interrupt: "your code is simpler and easier to understand?" I'm kind of a dick.
They continue, glaring: "...doesn't that mean our code is tightly coupled?"
And the answer is no, of course not. Your code was already tightly coupled! An interface with one production implementation is not "decoupled." It's just wordy. Verbose. Excessively file-system'd.
(The other answer is, sure, use your DI framework too. If you must.)
14/ Anyway, that's the fundamentals. Narrow, sociable, state-based tests that instantiate their own dependencies.
Next up: A-Frame Architecture! This is optional, but people really like it. It's basically a formalized version of Functional Core, Imperative Shell. I'm gonna skip on ahead, but feel free to check out the article for details. Here's the direct link to the Architecture section: https://www.jamesshore.com/v2/projects/testing-without-mocks/testing-without-mocks#arch-patterns
15/ Speaking of architecture, the big flaw with FCIS, as far as I've seen, is that it basically ignores infrastructure, and things that depend on infrastructure.
"I test it manually," Gary Bernhardt says in his very much worth watching video: https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell
That's a choice. I'm going to show you how to make a different one.
(Not trying to dunk on FCIS here. I like it. A-Frame Architecture has a lot in common with FCIS, but has more to say about infrastructure.)
16/ So right, Infrastructure!
Code these days has a LOT of infrastructure. And sometimes very little logic. I see a lot of code that is really nothing more than a web page controller that turns around and hands off to a bunch of back-end services, and maybe has a bit of logic to glue it all together. Very hard to test with the "just separate your logic out" philosophy. And so it often doesn't get tested at all. We can do better.
17/ There are two basic kinds of infrastructure code:
1) Code that interfaces directly with the outside world. Your HTTP clients, database wrappers, etc. I call this "low-level infrastructure".
2) Code that *depends* on low-level infrastructure. Your Auth0 and Stripe clients, your controllers and application logic. I call this "high-level infrastructure" and "Application/UI code".
18/ Low-level infrastructure should be wrapped up in a dedicated class. I call these things "Infrastructure Wrappers," 'cause I'm boring and like obvious names, but they're also called "Gateways" and "Adapters."
Because it talks to the outside world, this code needs to be tested for real, against actual outside world stuff. Otherwise, how do you know it works? For that, you can use Narrow Integration Tests. They're like unit tests, except they talk to a test server. Hopefully a dedicated one.
19/ High-level infrastructure should also be wrapped up in an Infrastructure Wrapper, but it can just delegate to the low-level code. So it doesn't need to be tested against a real service—you can just check that it sends the correct JSON or whatever, and that it parses the return JSON correctly.
And parses garbage correctly. And error values. And failed connections. And timeouts.
*fratboy impression* Woo! Microservices rock!
20/ At this point, people ask,
"But what if the service changes its API? Don't you need to test against a real service to know your code still works?"
To which, I respond: "What, you think the service is going to wait for you to *run your tests* before changing its API?"
(Yeah, still kind of a dick.)
You need to have runtime telemetry and write your code to fail safe (and not just fall over) when it receives unexpected values. I call this "Paranoic Telemetry."
21/ Sure, when you first write the high-level wrapper, you'll make sure you understand the API so you can test it properly, maybe do some manual test runs to confirm what the docs say.
But then you gotta have Paranoic Telemetry. They ARE out to get you.
True story: I was at a conference once and somebody—I think it was Recurly, but maybe it was Auth0—changed their API in a way that utterly borked my login process.
My code had telemetry and failsafes, though, and handled it fine. Paranoia FTW.
22/ Moving up the call chain: Application code is like high-level infrastructure. It delegates, probably to the high-level infrastructure, which turns around and delegates to low-level infrastructure.
That raises the question: how do you TEST things that eventually delegate to low-level infrastructure and talk to the outside world? Without using mocks, stubs, or spies?
And that's where Nullables come in.
("Finally!" some of you say. "Won't this guy ever shut up?" the rest of you say.)
23/ Nullables are production code that can be turned off.
Let's take a simple example. You've got a low-level wrapper for Stdout. If it's Nullable, then you can either say `Stdout.create()`, in which case it works normally, or you can say `Stdout.createNull()`, in which case it works normally in *every respect* except that it doesn't write to stdout.
24/ "Working normally" isn't such a big deal for Stdout, because there's no real logic or behavior there, but it is a big deal for your higher-level code that does have logic. For example, a Terminal that uses Stdout and has the ability to draw boxes that are exactly the width of the terminal.
(I dunno. It's hard coming up with examples. This is all off the cuff. See the article for actual source code examples with more than 10 seconds of thought in them: https://www.jamesshore.com/v2/projects/testing-without-mocks/testing-without-mocks)
25/ Your low-level infrastructure is Nullable, the high-level infrastructure that uses it is Nullable, and the application logic is Nullable. It's Nullables all the way down. (Except in your logic layer, if you're lucky enough to have one, which is beautiful and pure and mostly nonexistent for us Morlocks.)
And the thing about Nullables is that they run *real code* and *work normally* in *every way* except that they don't actually write to Stdout, or make HTTP calls, or whatever.
26/ That's kind of a big deal for your tests, because it means that, when somebody changes your Terminal abstraction in a totally cool, awesome, smart way, and THEY BREAK ALL YOUR SHIT, your tests fail.
Let me repeat that: your tests actually fail.
You learn that they broke your shit, and you fix it.
I don't know about you, but that's worth a certain amount of ugly tradeoffs to me.
27/ So buckle up, because I'm about to reveal the granddaddy of all tradeoffs: the magic that makes this work.
Nullables run real code because, way, way down at the bottom of your dependency chain, in the lowest of low-level infrastructure wrappers, they're implemented with an Embedded Stub.
28/ An Embedded Stub is production code that stubs out your third-party infrastructure library.
It's not a stub of your code; it's a stub of the standard library, or framework, or what have you.
For example, in Node, you use `http.request()` to make an HTTP request. The Embedded Stub stubs out `http`. The stub is used when `createNull()` is called, and the normal `http` is used when `create()` is called.
As a result, *all your code* runs the same regardless of whether it's Nulled or not.
29/ You're probably looking for an example right about now. I get it.
Here's a simple JavaScript example of stubbing out Math's random number generator, and a more complex one of stubbing out Node's http.
https://www.jamesshore.com/v2/projects/testing-without-mocks/testing-without-mocks#embedded-stub
If you like Java and Spring Boot, and who doesn't, here's an example of stubbing out Random and RestTemplateWrapper. (Cheers to @jitterted for creating this example with me on our livestream: jitterted.stream.)
https://www.jamesshore.com/v2/projects/testing-without-mocks/testing-without-mocks#thin-wrapper
30/ The rest of the patterns are all about how you make this work in practice.
We've got things like Configurable Responses, which is how you control which data your Nullables return.
And Output Tracker, which is a way of keeping track of what your infrastructure code sends to the outside world.
And Behavior Simulation, which is a way of simulating events that come from the outside world, such as a POST request to a web page controller.
31/ And there's a whole section of patterns on how you work with legacy code.
One of the neat things about these patterns is that they're *totally compatible* with your existing code.
This was a surprise! It wasn't part of my original design goals.
But it turns out that Nullables and all the other patterns, except the optional architecture patterns, can coexist side-by-side with your existing sh^H^H lovingly handcrafted legacy code.
Like literally, even in the same test.
32/ That means you can update a test to use Nullables by replacing exactly one mock and keeping everything else the same, run the tests, see them pass, and repeat.
That opens up some really nice opportunities for improving your codebase incrementally and gradually. And of course...
If it ain't broke, don't fix it.
33/ Nullables and the rest of the patterns are a way of solving the problems I see with existing approaches to testing.
If you have slow and flaky tests...
If you have hard-to-read tests that you suspect are really only testing themselves...
If your code is hard to refactor...
...check them out.
And if you don't have those problems, or they're not bad enough to be worth the Embedded Stub tradeoff, you don't have to use them.
Engineering is tradeoffs.
So choose the tradeoffs that are right for you.
34/34 And that's it for me. The article is in draft and I'd like your feedback. Please share it with others, and share your thoughts with me. Either here on Mastodon, on my Discord, or privately via email. The links are all in the article. Along with a LOT more detail and examples.
https://www.jamesshore.com/v2/projects/testing-without-mocks/testing-without-mocks
I hope you enjoyed the thread, and if not, well, that mute button sure is awesome!
Cheers.
It seems there could be a typo in the moon phase test. The passed date is in 2022 while the assertion checks for 2023 in the returned description.
Otherwise I'm enjoying the article quite literally: it does bring JOY.
I'm sometimes a bit depressed by the gap between what's possible and the day-to-day job, so thanks a lot for showing a path forward!
@blabaere Fixed in my copy, thank you.