I've had to resort to grotesque hacks before to simulate full disks in test suites — e.g. replacing a file descriptor with a nonsense number to trigger an error at the call site where a write would have failed.
Now that I know /dev/full exists, I can write a test which, although it works only on a subset of operating systems, still exercises a code path which applies to all of them.
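For the record, a couple of one-liners that exercise it (Linux-specific; /dev/full also exists on some BSDs):

```sh
# /dev/full accepts the open() but every write() fails with ENOSPC,
# so it simulates a full disk with no filesystem setup at all.
printf 'data' > /dev/full                   # shell reports a write error
dd if=/dev/zero of=/dev/full bs=1 count=1   # "No space left on device"
```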
You can also create a tiny filesystem in a file, reserve 100% of its blocks for the root user, and mount the file with `-o loop`. That should give you a "real" full device (from the perspective of non-root processes).
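A minimal sketch of the loop-mount approach (Linux, ext4, needs root). Here I simply fill the tiny filesystem to capacity rather than relying on block reservation, which gives the same ENOSPC behavior for every user:

```sh
# Loop-mount a tiny ext4 image and fill it, so writes fail with a genuine ENOSPC.
dd if=/dev/zero of=tiny.img bs=1M count=8            # 8 MiB backing file
mkfs.ext4 -q -F tiny.img                             # -F: target is a regular file, not a block device
mkdir -p /mnt/tinyfull
mount -o loop tiny.img /mnt/tinyfull
dd if=/dev/zero of=/mnt/tinyfull/fill bs=4k || true  # runs until the device is full
echo hi > /mnt/tinyfull/probe                        # now fails: No space left on device
umount /mnt/tinyfull                                 # clean up when done
```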
SQLite is such an amazing piece of software. Public domain, superbly tested, and lightweight and efficient on top of that. It's a breath of fresh air in a bloated software world.
It's not true that a product will be of good quality just because it is open source. I don't have enough knowledge to say the following is true, or that I'm not just suffering from confirmation bias, but it does appear that, at the extremes, the best-quality software delivery processes come from open source projects.
I could shoot holes in my own argument by pointing out specific projects from NASA.
Open Source provides a way for some authors to publish software optimized for quality, rather than optimized for profitability of a commercial entity.
Back in the day, physical products might have had amazing engineering that in fact undermined profitability, because products were designed to last forever. Nowadays product designs need to incorporate planned obsolescence instead, optimizing for profitability by ensuring that products break down in a timely manner, encouraging repurchasing.
Open Source can provide an escape from such pressures. Releases can deliver features only when they are ready, not when marketing needs them.
Open Source doesn't guarantee that will happen, because companies can still publish half-baked products under Open Source licenses (Firebase SDK anyone?). But for Open Source projects governed by individuals or by stakeholder communities, it's at least possible to optimize for quality.
>Open Source provides a way for some authors to publish software optimized for quality
This is literally true because of the word "some" in there. But if you mean that open source especially leads to quality, rather than just that "some quality happens", open source programs are notably deficient in many ways. Programmers don't like writing and don't need documentation for their own programs; they are already familiar with the program's functions, so they don't need a good UI; and they have different needs from average users, so their programs may not meet those users' needs very well. Whether you want to call that quality is, I suppose, up to you.
Yes, I was very deliberately not generalizing to all of Open Source. I was exploring these assertions:
> It's not true that just because a product is open source, it will be of good quality.
> [...] it does appear to be true that at the extremes, the best quality software delivery processes are from open source projects.
To turn your argument around, I think your generalizations about "programmers" are too pessimistic. Many of the best Open Source authors enjoy designing good user interfaces and writing good documentation because they want to make something awesome and beautiful. And commercial entities generally won't or can't give software developers free rein to do that, because of marketplace pressures to optimize for profitability and value (which are awesome and beautiful in a different way).
I'm looking forward to improvements in the ergonomics of testing as programming languages mature. For example, I really like how Rust allows runnable code to be integrated into functions' doc comments, which keeps the documentation from drifting out of date with the current iteration of the function.
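A minimal sketch of the feature, in case anyone hasn't seen it (the crate name `demo` and the function are made up for illustration): the fenced example inside the doc comment is extracted, compiled, and run by `cargo test`.

````rust
/// Returns the sum of two integers.
///
/// The embedded example is compiled and executed as a test,
/// so it fails loudly if the documented behavior drifts:
///
/// ```
/// assert_eq!(demo::add(2, 2), 4);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}
````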
Ultimately, though, I believe any amount of testing that can be pushed into the compiler/virtual machine of the language is the most effective. I've learned that people (myself included) don't always code as robustly as they could, whether through inexperience, negligence, or time/market constraints. I can't count how many times I've told myself, "I'll go back and write a test for this later."
Rust's implementation of doc tests and its integration of doc tests into standard workflows is praiseworthy. At the same time, we need to acknowledge that doc tests were pioneered elsewhere and have existed in many other languages for a long time.
Like many things in Rust, the innovation is not so much at the computer science conceptual level, but in designing institutions, communities, and processes that allow for integrating the best ideas in a controlled manner.
> At the same time, we need to acknowledge that doc tests were pioneered elsewhere and have existed in many other languages for a long time.
I agree. I understand that a lot of people talk about Rust as if everything it does is novel, when it is really a culmination and execution of a lot of good ideas (which, to be fair, is what most good things are anyway).
And the fact that many people's introduction to Rust comes from books written by the developers means that even beginners will be exposed to these features early on.
Python has had doctest[1] in its standard library for a very long time. The Wikipedia page[2] references this post[3] from Tim Peters talking about it in 1999!
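For anyone who hasn't used it, a minimal sketch (the function is made up): doctest finds the interpreter-session examples embedded in docstrings and re-runs them, flagging any mismatch.

```python
def square(x):
    """Return x squared.

    >>> square(3)
    9
    >>> square(-2)
    4
    """
    return x * x

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # re-runs every >>> example in this module's docstrings
```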
Maybe my brain cells are slow today, but even after reading it a bunch of times I can't understand the point of the returns_ helper function. Could someone kindly give an explanation for dummies?
It checks for a specific return code, which is useful for testing a specific type of failure. Since in shell all non-zero return codes are considered failures, we need a way to know that a command failed for the expected reason and not because of some other bug. So if you're testing for a failure with exit code 2, the helper will "pass" the test only if the process actually fails with exit code 2.
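I haven't checked the article's exact implementation, so this is just a guess at the shape of such a helper:

```sh
# Hypothetical sketch: succeed only when the command exits with the
# expected status, distinguishing "failed as intended" from other bugs.
returns_() {
    expected=$1; shift
    "$@"                        # run the command under test
    [ $? -eq "$expected" ]      # exit 0 only on an exact match
}

# Example: GNU grep exits with status 2 on an error such as a missing
# file, so this "passes" even though grep itself failed.
returns_ 2 grep pattern /no/such/file
```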
Here’s one on SQLite: https://www.sqlite.org/testing.html