
This has downsides if you're comparing several attributes of a method's result against what you expect. Either you run each test N times for N attribute comparisons, accepting the cost of setup/teardown every time, or you do a loop and raise an assertion error whose message says which comparison failed.

Since you already have the object right there, why not do the latter approach?
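Roughly what I mean, as a minimal sketch (build_user here is a stand-in for the real method under test, and the expected values are made up): collect every mismatching attribute in the loop, then assert once with a message naming what differed.

    from types import SimpleNamespace

    def build_user():
        # stand-in for the real method under test
        return SimpleNamespace(name="alice", age=30, active=True)

    def test_user_attrs():
        user = build_user()
        expected = {"name": "alice", "age": 30, "active": True}
        # collect every mismatching attribute, then assert once with details
        mismatches = {
            attr: (getattr(user, attr), want)
            for attr, want in expected.items()
            if getattr(user, attr) != want
        }
        assert not mismatches, f"mismatched attrs (got, expected): {mismatches}"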

If the setup/teardown is expensive I would do it in reusable fixtures. The reason I wouldn't choose the latter approach is that it's usually less convenient in the long run. You'd need to replace your asserts with expect-style checks so the test doesn't stop at the first failure (if that isn't what you want), you'll often have to manually add data to the assertion message (as GP did) that you'd otherwise get for free, and you'll have to read the assertion error rather than the test name to know what actually failed. That can be quite inconvenient if you e.g. export your test results in a CI/CD pipeline.
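For the record, what I have in mind is something like this sketch (FakeResult stands in for whatever the real expensive call returns): the fixture is module-scoped so the expensive part runs once, and each comparison stays its own test, so the failing attribute shows up by test name.

    import pytest

    class FakeResult:
        # stand-in for whatever the real expensive call returns
        name = "alice"
        age = 30

    @pytest.fixture(scope="module")
    def result():
        # pretend this construction is expensive; it runs once per module
        return FakeResult()

    def test_name(result):
        assert result.name == "alice"

    def test_age(result):
        assert result.age == 30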


Normally in a CI/CD pipeline you'll see which asserts failed in the log output. GitHub Actions with pytest shows the context of the failed asserts in the logs. TBH, I thought this was standard behavior; do you have experience with a CI pipeline that differs?

All the other points you list as negatives are positives for me. The biggest thing is: if making this change alters things so drastically, is that really a good approach?

Also, fixtures aren't magic. If you can't scope the fixture to module or session, it runs in the default function scope, which is the same thing as having expensive setup/teardown per test. And untangling fixtures can be a bigger PITA than untangling unexpected circular imports.
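To make the scope point concrete, a small self-contained sketch (the resource names are made up): the default function-scoped fixture gets rebuilt for every test that requests it, while the session-scoped one is built once.

    import pytest

    calls = {"function": 0, "session": 0}

    @pytest.fixture  # default scope="function": rebuilt for every test
    def per_test_resource():
        calls["function"] += 1
        return object()

    @pytest.fixture(scope="session")  # built once for the whole run
    def shared_resource():
        calls["session"] += 1
        return object()

    def test_one(per_test_resource, shared_resource):
        pass

    def test_two(per_test_resource, shared_resource):
        pass

    def test_counts(per_test_resource, shared_resource):
        # the per-test fixture ran three times (once for this test too),
        # the session fixture only once
        assert calls == {"function": 3, "session": 1}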
