What if writing tests was a joyful experience?

3 years ago (blog.janestreet.com)

Most of the non-joyful experience of writing tests lies in the context preparation, not so much in the expected outputs.

I think we need a modern version of design by contract instead. It's one of the most powerful techniques for testing software, yet it is unfortunately ignored and overlooked these days...

Some lesser-known things about design by contract:

- you can easily test way more than types in preconditions and postconditions

  - extreme web controller postcondition example: every time there is a 200 response to a POST /new request, there should be at least one more document in storage than there was before the request (caveat: unless there are deletes - then sum the deletes up too; see the sketch after this list)

- things important to the business look a ton like contracts

  - e.g. a diff of two legal documents always contains the full content (list of words) of both documents (never accidentally swallow content!)

- your postconditions can surprisingly often fully describe the desired function behavior

- with a sufficient amount of pre- and postconditions, your property tests look like "exercise the module or entire system with random actions" -> unit and integration tests

(see Hillel Wayne's excellent video on this topic https://www.youtube.com/watch?v=MYucYon2-lk)
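
To make the POST /new example concrete, here is a minimal sketch of such a postcondition in plain Python; the in-memory storage class, decorator, and handler are made up for illustration, and deletes are ignored for brevity:

    import functools

    class InMemoryStorage:
        """Tiny stand-in for a document store."""
        def __init__(self):
            self.docs = []
        def count(self):
            return len(self.docs)
        def insert(self, doc):
            self.docs.append(doc)

    def check_new_doc_postcondition(handler):
        """Postcondition: a 200 from POST /new leaves at least one more
        document in storage than before the request."""
        @functools.wraps(handler)
        def wrapped(storage, body):
            before = storage.count()
            status = handler(storage, body)
            if status == 200:
                assert storage.count() >= before + 1, \
                    "200 from POST /new but storage did not grow"
            return status
        return wrapped

    @check_new_doc_postcondition
    def post_new(storage, body):
        storage.insert(body)
        return 200

    store = InMemoryStorage()
    assert post_new(store, {"title": "hello"}) == 200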

Some things that could really help modernize design by contract:

- Test at a certain sampling rate in production for expensive contracts (do not turn them off)

- Report contract violations via observability mechanisms (tracing, Sentry, etc.)

- Better language support for contract DSLs

  • I would categorize this as part of integration testing, which I prefer over unit tests by light-years.

    • It can also be unit tests. You can take any function that has contracts defined and run a fuzzer (or a property-testing generator) on its inputs, and it's an isolated unit test.

      Or you could generate input actions for the entire system to exercise all contracts, and get e2e tests.

      Or you could run your code in production and exercise contracts (maybe at a certain sampling rate) to get observability and "testing in production".

      It's the most powerful and flexible concept I know of, but it requires thinking in properties (pre/postconditions), which can be a bit tricky.

    • I agree, but unit tests are cheaper, you can do them unilaterally, and you can accomplish a lot with them in an organization that can't or won't invest in integration testing.

      Even when integration tests exist, they typically don't go beyond exercising each ability of the system one or two ways. They don't achieve good "data coverage" or branch coverage, which is what property-based unit testing excels at.

  • > lies in the context preparation

    The right fix here is to simplify the context preparation - make the functions more self-contained and less dependent on external scope.

    • This is not always in your control - especially if you're using libraries and frameworks. But there are also sometimes complex functions that (need to) combine lots of complex data (say, from multiple APIs) to produce results, and there is not much that can be done about it.

      Take a structured document diff, for example - it's seemingly simple: diff(documentRepresentation1, documentRepresentation2) -> diffDocument. But in reality it has to handle all kinds of edge cases in the structure of the inner documents, and preparing the structure of those documents is hard enough that you end up building helper functions just to make it easier to create a variety of them.

  • I don’t have sufficient experience to judge whether you are correct or not. But I hope that you are correct!

    I think code deserves way more assertions and validations. And (like you alluded to) fine-grained ways to turn them on or off; we shouldn't shy away from expensive checks that might take hundreds of milliseconds just because they are impractical to run everywhere. Instead we should have configuration to turn them on or off, and not just simple on/off assertions like in Java but things that carry metadata like "cost", "priority", and so on (sketched below).

    And, of course, some things (probably the actual contracts) might be always-on.

    There’s a lot of exciting potential!
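
    Something along those lines can be sketched in plain Python today; everything below (the cost tiers, the check helper, the example invariant) is hypothetical, just to show the shape of the idea:

        import random

        # Hypothetical registry of which assertion tiers are currently enabled;
        # in real code this would come from per-environment configuration.
        ENABLED_COSTS = {"cheap", "expensive"}

        def check(predicate, *, cost="cheap", sample_rate=1.0, on_violation=print):
            """Run an assertion only if its cost tier is enabled, and only on a
            sampled fraction of calls; report violations instead of crashing."""
            if cost not in ENABLED_COSTS:
                return
            if random.random() > sample_rate:
                return
            if not predicate():
                on_violation(f"contract violated: {predicate.__name__} (cost={cost})")

        def index_matches_store():
            return True   # stand-in for a genuinely expensive consistency check

        # An expensive invariant, checked on roughly 1% of calls in production.
        check(index_matches_store, cost="expensive", sample_rate=0.01)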

  • Careful, you're coming dangerously close to reinventing SystemVerilog's formal verification support :) SV formal is tough at first but wonderful once you really get it because it lets you lay out your constraints and preconditions and then the solver verifies that your circuit fulfills those requirements.

  • I’m meaning to write a library that’s the intersection of design by contract and abstract algebra for Python. You’d be able to say that getting a diff is associative and get free tests just for documenting that.
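
    Roughly the idea, sketched with Hypothesis (list concatenation stands in here for the diff operation, and all names are made up): declaring an operation associative expands into a ready-made property test.

        from hypothesis import given, strategies as st

        def associative(strategy):
            """Declare a binary operation associative; attach a generated property test."""
            def decorate(op):
                @given(strategy, strategy, strategy)
                def test_associativity(a, b, c):
                    assert op(op(a, b), c) == op(a, op(b, c))
                op.test_associativity = test_associativity
                return op
            return decorate

        @associative(st.lists(st.integers()))
        def concat(a, b):          # list concatenation really is associative
            return a + b

        concat.test_associativity()   # runs the generated property test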

  • Another deep cut of testing is maintaining the tests themselves, especially the blur between outdated tests and changing specs; you end up with a whole matrix of things to fix.

I am not a software engineer, but I write code almost daily for personal side projects and automation at work. I write unit and integration tests even for the most toy thing I do (this doesn't come from some kind of fanatic view about testing, but simply because I enjoy writing code and learning/exposing myself to the reasoning and decisions of certain scenarios). I understand the benefits of it; however, I find unit testing specifically tedious and boring as hell. I always wondered if it could be done by some framework that relies on something like GitHub Copilot: install the framework, run it pointing at your function/method file(s), get a file with your tests that you can perhaps adjust here and there, and then integrate it into your CI. I understand the challenge due to, perhaps in some cases, the implicit difficulty of coming up with relevant tests for certain parts of your business logic, but do you think it will be possible some day? Am I saying something stupid?

  • This may sound weird, but I've written code professionally for 25+ years and I have never written a unit test, I'm not even sure what a "unit test" is or would look like in the context of what I write.

    My job is to first consider a dozen ways a new feature might be used; then consider how it could be misused; then write a backend portion and UI for it, testing it for usability and potential for error along the way, using everything I've already thought could go wrong; then monitor it once it's deployed to see if anything actually does go wrong. After actually testing it through everything I can think of by hand, function by function, line by line as I write it, it seems totally stupid to write code to test my own code.

    Not to sound like a shit, but that's what end users are for ;)

    • > This may sound weird

      The weird part is not that you don't write unit tests, plenty of developers don't. It's that in 25+ years, you never even had the stray thought "hm, maybe all this manual work that I keep repeating might be worth automating?" and then tested it out to see whether it was worth doing or not

      3 replies →

    • That’s great if you’re the only person working on the code.

      Not everyone is as diligent and not everyone will have thought through/remembered all those edge cases when they happen to end up maintaining/adding to your code.

      Personally I find that tests are a great way to ensure all that hard work you did thinking about those edge cases isn’t wasted. That and manually testing stuff is annoying after the first time.

      5 replies →

    • Say you have to test 15 things to know it works.

      Then you make a change, and that change impacts 10 of those. You now need to go back and re-test, manually, those 10 things and check they haven't broken.

      That's mainly why I use unit tests, as regression testing.

      Also as documentation: if I'm unsure about what a piece of code is actually doing, it's helpful to look at the tests that hit it and find something like

      When_FooIsInThisState_ThenIExpectThisThingToHappen()

      Then see the setup and expected output.

      It's also good for incrementally building stuff. E.g. I have 3 scenarios I want to get working; I write a test for the first and get it passing. Then I move on to scenario 2, which requires changing the code slightly for scenario 1, but I don't have to re-test scenario 1 manually, because I've got a unit test running each time I change the code that tells me it still passes, so scenario 1 is still working. Then when I do scenario 3, I know 1 and 2 are still working without manually going back and testing them...

      They can really save a lot of manual effort.

      3 replies →

    • I think if you're working alone or on a small team on a small project, its use is negligible. The added tedium may not be worth it, and many major issues can be shaken out with integration tests, QA and users.

      I think a lot of people miss the (in my opinion) best use case for unit tests: in-code self-documentation. Leaving a little nugget of your understanding of the functionality behind for other developers in the future. Even if a method is horribly named or the business logic is so complex it takes an hour to understand exactly what is happening, a simple group of (input -> method -> expected output) unit tests can get you working with the method fairly fast.

      Actually, if I am extending a giant, untested god function, I find it exceedingly helpful to add some tests for the specific use cases I am adding, just to be sure I have understood the code path relevant to my change correctly.

      1 reply →

    • That's awesome to hear! I myself do something similar with my own code, but then again, I also trust myself more than any computer ;)

      About how long would you say that process takes you? Do you repeat it every time a minor change needs to happen to your code? If not, how do you verify the impact of your changes?

      7 replies →

  • Not at all stupid. I'm sure people are already using ChatGPT to generate unit tests. There are limits, of course, given that (for now) it doesn't have the full context of your code, but it's definitely capable of generating tests for pure functions that are named well and don't require too much outside context. Some projects have tons of these.

    • Yes, I use it in that way. And if ChatGPT didn't generate the code with pure functions (usually it doesn't), you can explicitly ask it to generate the code with pure functions, then ask it to generate the tests.

      Usually I get good tests from ChatGPT when I approach it as an iterative process, requesting multiple improvements to the generated tests based on what it gives me. Note that it doesn't replace the skills you need to write good test coverage.

      For example, you can ask it to generate integration tests instead of unit tests in case it needs context. Providing details on how the testing code should be generated really helps. So does asking it to refactor the code to make it testable, for example, or to convert some functions into actual pure functions, or to pull a piece of generated code out into a separate function. Then you ask it to generate tests for normal and also for boundary conditions. The more specific you get, the higher the chance of getting good, extensive tests from it.

      Having both the tests and the code generated by ChatGPT really helps to catch the subtle bugs it usually introduces in the generated code (which I fix manually); usually I get test coverage that proves the robustness I need for production code.

      This approach still needs manual fine-tuning of the generated code (I think ChatGPT still struggles to get the context right), but in general, when it makes sense to use it, I'm more productive writing tests this way than manually.

    • I used my Copilot trial to explore writing tests more than writing code. I found that it actually worked well, especially for data- and boilerplate-heavy tests, like table-driven tests in Go. It was quite literally saving me dozens, if not a hundred or more, keystrokes. I write Go for side projects these days, not for work, so I don't pay for the license, but if I were writing Go professionally again I'd pay for the license for this alone.

      In my day job I write Ruby, and it didn't impress me much when I used it with RSpec. I'd say it was saving me maybe a dozen or so keystrokes in total to write a new spec.

    • ChatGPT works surprisingly well for Python (and I'm sure other popular languages too). I can dump a bit of code and tell it to write the tests + fixtures for me. The tests it wrote actually showed it "understood" (or recognized the pattern of) the code too. For example, part of the code renamed some columns of a data frame that was loaded from a CSV. The fixture it created for the CSV had the correct column names the code was renaming.

      Unless you're doing TDD or ChatGPT is too busy, there's almost no excuse now not to write some unit tests ;)

    • I wonder which GPT is ultimately better at: generating the implementation from the tests, or vice versa. As a noob, I think the former.

I don't get it. In their example of fibo(15), they already wrote the fibo function, but how do they know it is correct? OK, it is nice to have an autofill for the expected value, but not from the same 'thing' that I want to test...

  • I'm not the OP but I think Fibonacci is a contrived example. In practice these "characterisation tests" are great for adding tests to an existing codebase. The "snapshot" just records the current behaviour; it doesn't judge whether it's correct or not, but if you refactor it will catch changes to the behaviour, and then it's up to a human to judge if the changes were desirable or not.

    • Fibo would be excellent for property testing.

      I.e. For all x, Fib(x+2)=Fib(x)+Fib(x+1)

      A property based test produces values of x and tests this property (or invariant).

      If it fails the framework usually then tries to find the simplest failing case.
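
      With Hypothesis in Python, for example, that takes only a few lines; the iterative fib here is just a stand-in implementation under test:

          from hypothesis import given, strategies as st

          def fib(n):                      # implementation under test
              a, b = 0, 1
              for _ in range(n):
                  a, b = b, a + b
              return a

          @given(st.integers(min_value=0, max_value=200))
          def test_fib_recurrence(x):
              assert fib(x) + fib(x + 1) == fib(x + 2)

          test_fib_recurrence()            # Hypothesis generates x and shrinks any failure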

      8 replies →

    • P.S. They're also great for writing tests for new code, for many of the reasons described in the OP.

  • I think what they've done is implemented a way of doing snapshot testing by putting the snapshot data in the test itself. In snapshot testing you just cache the result of a code block and use that cached data to check the code produces the same output as it did when you knew it worked. Putting the snapshot inline in the test makes it a lot easier to read.
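
    A minimal sketch of the file-based variant, just to make the mechanism concrete (inline snapshots move the stored string into the test source itself; the helper and example call below are made up):

        import os

        def check_snapshot(name, actual, snapshot_dir="snapshots"):
            """First run: record `actual` as the snapshot. Later runs: fail on any change."""
            path = os.path.join(snapshot_dir, name + ".txt")
            if not os.path.exists(path):
                os.makedirs(snapshot_dir, exist_ok=True)
                with open(path, "w") as f:
                    f.write(actual)   # record current behaviour; no judgement of correctness
                return
            with open(path) as f:
                expected = f.read()
            assert actual == expected, f"output changed for {name!r}; review and re-approve"

        check_snapshot("greeting", "Hello, world!")   # in practice: the rendered output under test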

  • IMO the real value in tests is making sure you don't break things later, rather than getting your current code working.

    • That's a big assumption. In TDD you write the tests first, to help you achieve the expected results; this doesn't fulfill that need. It is basically snapshot testing applied to any kind of function.

      1 reply →

    • Maintaining tests that only test that things haven't changed can be a nightmare if you actually want things to change (like when you're making a change to the logic of a system). They can also be a real pain to maintain since if they fail you almost never know why they failed, just that they failed. This gives you (or the future maintainer) very little help in figuring out what's going on.

      1 reply →

  • IMHO, this is really more a documentation of the original behavior and maybe a test for consistency of later implementations against the original one. But it's not about validity. I'd call it a "traditionality test".

  • Not fib, but... I've written tests like this semi-recently for an algorithm that I had to devise. I went to Excel to generate the test data, based on a data pull from our production database. So, real production data, using a sequence of Excel spreadsheet functions, which also has multiple eyes on it (including legal, since this was regulatory in nature), fed as input to the algorithm to verify the output matched up.

    I'm still on the fence about that but I'm not sure how else to test something that is essentially a glorified algebra formula.

  • Sometimes verifying a result is correct is easier than coming up with the result yourself.

Reminds me of approval tests, which I think are a wonderful idea: https://approvaltests.com/

Humans are better at verifying that an answer is correct than at coming up with the answer.

The thing that makes test writing unjoyous for me is that most code is hard to test, usually requiring all kinds of scaffolding and even resources like files, a DB, or mocked HTTP requests.

Being Jane Street, they probably use OCaml, so yes, functional programming languages are nicer to write tests in.

  • Ah neat, I've heard different names for snapshot/golden master type testing before, but generally find the tooling a bit lacking. It seems like https://github.com/approvals/ actually has pretty decent implementations for most languages.

    I feel like the Jane Street approach of embedding the snapshots directly in test sources would make a qualitative difference to the ergonomics. The post implies that this requires some Emacs integration, but there's no reason it couldn't be done by a test runner process instead. It would also be cool to have an LSP that provides hover text or go-to-definition for externally stored snapshots.

    > usually requiring all kinds of scaffolding and even resources like files, db, or mocked http requests

    Yes, anything with timestamps or generated identifiers gets very annoying as you have to manually exclude certain data from the snapshots, negating some of the "easy wins" a snapshot/approval process provides for pure/idempotent code.

    • A VSCode plugin could inline the expected-answer file, or at least link to it. I like a separate file because what if you want whitespace check-in behaviour to be different?

This does not look joyful at all. The second code block example is almost impossible for me to read and understand. A typical unit test with assertions at the end is much easier.

  • This.

    We don't write code to tell computers what to do.

    We write code to communicate with other humans. And we spend more time reading (other people's) code than writing it.

> Tests live in their own directory and are written against the public interface, or, when testing private implementations, against a For_testing module exported just for that purpose.

That is a nice and unique idea. I dislike exposing code just to make it testable, but exposing something under an explicit name is a great compromise.

Having a test runner with a `watch` mode is extremely helpful, as is a runner that will output expected vs actual diffs when a test fails, or provide contextual information about why a test failed.

The premise of the initial test in the article - testing a Fibonacci function - is also slightly at odds with a better property-based testing approach. Validating that a function works for a specific input isn't so helpful. We want to validate that it works for any input!

I'd use the actual definition as a property based test and fuzz over the value:

    const n = generateRandomIntegerBetween(0, 100);
    expect(fib(n) + fib(n+1)).toEqual(fib(n+2))

I also like having precondition and postcondition assertions in my tests.

One example of this is validating that the argument generated for the function under test has the correct structure and state before it's passed in.

We can also validate that after the function under test has completed, the argument it received wasn't mutated in any way.

You can get half the benefit, including all of the readability benefit, from two much more accessible factors that can be achieved in most languages without much additional work, and that will make your production code more readable as well:

— Most of your code, including virtually all of your complex code, should be functional in style, so that you can test each case by checking the return value.

— You should be able to represent expected return values literally in code.

The first is important so that you don't have to use mocks to test side effects. Mocking is an important technique when you need it, but it's always better if you can avoid it.

The second is important so that you get a simple readable equality comparison between two values, instead of needing to build up the expected value imperatively and/or write multiple statements to interrogate different aspects of the return value, which is less readable and more mistake-prone.

If your code follows these two principles, you get all the conciseness and readability shown in this demo, but you don't get the expected values automatically filled in for you.

If you have one more thing, for which you'll probably want language support or you won't bother with it, you can at least copy the value from the test failure and paste it in yourself:

— The default way that data is stringified is a literal representation that can be pasted into your code.

It's very cool and very impressive the way they've combined these factors and built tooling to make such a slick workflow!
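
To make the second and third points concrete, here is a small Python sketch (the Quote type and best_quote function are made up): a dataclass return value gives you a literal expected value, and its default repr is exactly what a failing test would print, so fixing a stale expectation is copy-paste.

    from dataclasses import dataclass

    @dataclass
    class Quote:                 # hypothetical return type of the code under test
        symbol: str
        bid: float
        ask: float

    def best_quote():            # hypothetical pure function under test
        return Quote(symbol="ABC", bid=99.5, ask=100.0)

    # The expected value is a literal equality comparison, not a series of
    # imperative interrogations of the return value.
    assert best_quote() == Quote(symbol="ABC", bid=99.5, ask=100.0)
    assert repr(best_quote()) == "Quote(symbol='ABC', bid=99.5, ask=100.0)"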

We used to use a similar approach for testing page objects in stb-tester. It was called stbt auto-selftest (it has since been removed). It would run your page objects against canned screenshots and generate a Python doctest capturing the behaviour of those page objects. You'd then commit the doctest to git and have CI set up to check correctness. See https://david.rothlis.net/pageobjects/characterisation-tests

This was really helpful - we could then see in our pull requests how changes to our page object code would affect the behaviour of that code - just look at the diffs.

It worked OK for a while, but we soon hit limitations with this approach. The big one is the interaction with git. We would get many merge conflicts in these files, particularly when rebasing to reorder commits. Sometimes we'd forget to regenerate the files and have to rerun and rebase after submitting to CI. It worked OK-ish for us because we are sophisticated git users, comfortable with rebase and filter-branch, and we'd designed the system so we could debug issues effectively, but it wasn't suitable for wider use.

We’ve moved away from this model to a slightly different one. We still generate this data[^1], but we store the data outside of git in a database indexed by git commit sha. We then display this data and associated diffs in a web-ui. We use GitHub checks to wire it back up to pull requests, so it’s just as visible as when it was committed to git, but more convenient and displayed in a nicer format. You can use the same web-ui to approve changes to this data.

[^1]: though in more structured form than doctests

  • There were other advantages to our new approach too: because all the running of the code is happening server side it is both easier and faster.

    Easier because you don't need to have a complete development environment available on the machine where you're making the change. You can make a quick change in the GitHub web UI on mobile and see the results. This is particularly important in our case as we have some code that depends on specific hardware to run (CUDA), which isn't just available on any developer machine.

    Faster because the server can run the process in parallel across a farm of machines, which will be faster than running locally.

  • On a related note: I think that deterministically generating something from a git commit and then diffing the results across commits is in general a very useful technique. Our builds work similarly. We build our software into (several) container/disk images, then we show the diff between these build artefacts on PRs.

    It can help you spot when something changes that you didn’t expect, and vice-versa.

    For this to work we store all our builds indexed by git commit sha.

On a tangent…

I’m currently losing the will to live with integration tests for an http service I’m writing.

It calls out to other services (often 3 or 4 external api calls for each of my http resources, each with complex data structures).

(I just want the external API data in a database!)

Tests are a pain because of the matrix of fixtures needed. My API endpoints behave very differently based on the attributes of the objects in the response from the other APIs. So I have lots of somewhat complex code to generate fixture data, which again is another source of error and a whole lot of code.

For one of my endpoints, I have to mock 1-6 API calls, multiplied by 3 or 4 variations of the returned data. It’s torture.

None of my tests make it obvious what they're testing, because the fixture code and JSON assertions are so big and long. We aren't using an HTTP mocking lib right now (e.g. WireMock) but I'm wondering if it's worth it.

I guess a lot of it is because the APIs I’m calling are so badly designed.

  • Seeing as you probably can't consolidate the APIs into a single process for more control...

    Seriously, having an HTTP mocking tool is so helpful. I use msw, a Node equivalent of WireMock. To help with a situation like yours, it's easy to say "before this executes, I want the mock tool to return this response". The abstractions are good enough that I don't have to deal with recreating my dependency-injected service objects, and it really reduces the tests' lines of code. It also seems like the most complex bit here is handling your permutations of code paths, as well as their error paths. I salute. I can't think of a way to simplify it all, except to be glad I work on smaller products now that don't require so many microservices or 3rd-party APIs.

    The unfortunate bit is ensuring your mock stays up to date with their api. But if we have most or all of the variations, then it’s their fault for breaking their APIs on us, no?

  • I think it probably IS the APIs that are the problem, but since you usually cannot instantly get rid of such things, I think you need tricks.

    e.g. keeping your fixtures in separate files and naming them in such a way that the actual data doesn't appear in your test code.

    You might use some fixture as a base value (in my case I had this problem with a big old structure that describes PKI certificates of many types) and have derived fixtures that are the base plus a change to the "account" field, or the base with the "accept" field set to "false". I did this with Python classes and inheritance: each derived class copies the base class's data and applies a diff (roughly as sketched below). YMMV, but I mention it just in case it helps.

    If these are all named and kept in separate files then your actual integration tests can look a bit simpler perhaps?
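
    Roughly, the pattern looks like this (field names made up):

        class BaseCertFixture:
            """Canonical certificate-ish structure shared by many tests."""
            def build(self):
                return {
                    "account": "default",
                    "accept": True,
                    "key_usage": ["digitalSignature"],
                }

        class RejectedCertFixture(BaseCertFixture):
            """The base fixture plus one small diff."""
            def build(self):
                data = super().build()      # fresh copy of the base structure
                data["accept"] = False      # the only change this variant cares about
                return data

        assert RejectedCertFixture().build()["accept"] is False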

  • > I guess a lot of it is because the APIs I’m calling are so badly designed.

    Sort of; it sounds like the problem is that you have a distributed monolith. If things are this tightly coupled, they should just live in the same process. If you even had just a regular monolith (which I don't recommend, but it is much better than a distributed monolith) you wouldn't have this particular problem. Of course, another alternative is to redesign the system so that services are based on independent domain capabilities, so that they have high cohesion and loose coupling.

I find Go test writing a misery because of the style golang recommends, and which our company has decided is to be enforced. You have a list of data values and expected results in an array, and then a test at the bottom that applies these array items to the function you're testing - sounds great, right? No - because often the only way to get all the variations you need in the test is to have helper functions that are referenced in the array to set up the situations you want. I.e. it ends up being far harder to add a new and slightly different test to this array than to just write a new test - and that's not supposed to be allowed.

  • Since you brought up Go, I had a question as someone who's just started using the language. Do Go users generally just use "testing" package for writing tests or are there other commonly used packages that can be used in place of or in tandem with that? Just trying to get a sense of what the options are and what the "state of the art" is.

    • The standard testing package for the win, and many pull in a lightweight assertion library like Testify.

      I love testing in Go (and _loathe_ testing in Ruby). Our unit and integration tests are mostly great. Tests are just code. No DSL or special anything; just normal code. My favorite test that showed me the light: it would spin up several SMTP servers (the system under test), like a dozen or more, get the SMTP conversation to different points and wait for timeouts, and ensure that everything behaved. The test ran in under 10ms.

      We have tests that ensure logs happened, metrics emitted, timeouts honored, graceful shutdowns work, and error paths are easily validated with test fakes that return canned errors (no mocks). I love testing in Go.

      3 replies →

The Ruby example is wrong on so many levels.

You'd write a test like that in RSpec if you want to reuse it for other examples. The other example they present in Ruby is what you would write if you were using Ruby's standard-library testing framework.

It doesn't make the test any better or more "joyful".

Also the REPL-like experience their library has actually sucks.

I don't see any value, and if you didn't find writing tests to be a joyful experience before that library was created, I don't see how you will once you have it.

"In most testing frameworks I’ve used, even the simplest assertions require a surprising amount of toil. Suppose you’re writing a test for a fibonacci function. You start writing assert fibonacci(15) == ... and already you’re forced to think. What does fibonacci(15) equal?"

You need to know what you expect the program to do before writing a test verifying that it does it, no? I don't understand what the author is trying to say here.

  • Maybe that coding is sometimes used to figure out stuff you yourself do not know ffs. I understand your point, but what we do as SWEs is the secondary function of coding.

If you want to try this in Python, you can use https://github.com/ezyang/expecttest which I wrote to do expect tests in PyTorch.
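
Usage looks roughly like this: write the assertion with an empty expected string, run once with EXPECTTEST_ACCEPT=1, and the actual value gets written back into the test source (the fib function here is just an example subject):

    import unittest
    from expecttest import TestCase

    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    class TestFib(TestCase):
        def test_fib(self):
            # Start with an empty """""" here; running once with EXPECTTEST_ACCEPT=1
            # rewrites this line with the actual output.
            self.assertExpectedInline(str(fib(15)), """610""")

    if __name__ == "__main__":
        unittest.main()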

  • My go-to library in python for this is:

    https://pypi.org/project/pytest-regressions/

    It's a bit different in that it'll save the expected output to a separate file... IMHO that's usually nicer, because the test result is usually big and having it separated makes more sense.

    When rerunning it's possible to run pytest with '--force-regen' and then check the git diff to see if all the changes were expected.
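
    For example, a data regression test looks roughly like this (the payload is just a placeholder):

        def test_summary(data_regression):
            result = {"total": 3, "names": ["a", "b", "c"]}   # stand-in for real output under test
            # First run writes a YAML file next to the test; later runs diff against it.
            # Regenerate expected files with: pytest --force-regen
            data_regression.check(result)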

"If I could think of all the edge cases myself I would have handled them in my code." - every developer.

It's OK for a developer to write validation tests (i.e. tests that ensure the -whatever- is working the way it should). But you really need another team writing its own tests.

That would have saved the crypto guys hundreds of millions of dollars.

That said, this idea is kind of neat. Writing tests is a PITA. But their first example is odd: if you don't know what the expected output is, why are you coding it?

  • > But their first example is odd: if you don't know what the expected output is, why are you coding it?

    My first thought: to discover outputs to then expect. Whether you have a vague idea or no idea of the outputs, it can be a useful starting point.

    You might use it on something that's completely novel, or is poorly documented, or for 'black box' testing. "What happens when I tell the Foo to Baz with a bar, instead of a crunchly? Does it change if I already have a baz?"

    Not really related, but… "Does the set of rows returned include rows that should not be there when the DB has per-record security enabled, the COM host is holding a reference to the initialised SQL driver, and I run a new query with lower access credentials without disposing of the reference?"[1]

    [1] Vague memories of a Sage 300 CRE security bug.

I never write unit tests. However I have written more than 1000 end-to-end system tests for one of the very large C++ libraries I am maintaining. And I am generating an additional 9000 regression tests from large customer examples using the library.

The result is that I haven't had a single bug in production for 5+ years. Not even one.

Worth it!

Writing tests is a normal experience for me (just like writing code) as long as the tests make sense. As soon as some manager (or senior engineer) starts to ask for X% code coverage, it all goes to hell and writing tests becomes a robotic task that adds nothing but a burden to maintain in the future.

I'd love a club dedicated to "what if <anything> was a joyful experience"

  • PS: this wasn't a joke; a lot of daily activities are 'chores' because of the context. You can almost always make them fun, more useful, more fulfilling. But society doesn't invest in that, which leads to boredom/burnout.

People normally write tests just for the sake of correctness. I treat unit tests as a real debugging environment; it's the debugging experience that makes writing test code enjoyable, as I see it.

I always use Emacs to write my unit tests; it's such a joy to configure it to automatically refresh the compilation buffer on save and have it relaunch the test.

This is wrong with a capital "R". I'm all for productivity tools, but sometimes you have to actually think.

I was genuinely expecting another article about ChatGPT and some ingenious way to use it for generating test cases. Turns out this is just a marketing post after all, with no relation to the former... Disappointing...

Fix your footer links. I was trying to get to your main page to find out who you are (already a fail if I have to use your footer over the Jane Street logo at the top) and five of your links 404, including the first one I tried. Not a good look.

  • That's fair criticism but I can assure you that Jane Street don't need to convince anyone about anything. If you don't already know who they are and what they do, your opinion doesn't matter to them. They wouldn't even need to have a functional website for all that matters.