That's actually the opposite of good practice; good practice in security is to base your planning on facts and research. Throwing away your whole setup after every gig works for Mission: Impossible, and I guess it makes people feel extra-super-ninja, but in practice it just perpetuates the endless (and pointless) culture of I-know-something-you-don't.
Opsec should be based on reality and threat modeling, not endless rounds of whatabout.
Edit: if you (the rhetorical you, not parent specifically) actually know something here, chime in!
That really is the difference between "proven secure" and "not proven insecure". Which would you consider best practice?
As far as fingerprinting WiFi devices goes: it is an RF device, and all RF devices vary in behaviour due to component tolerances. This shows up in things such as spurious emissions, power variations across the transmission spectrum, oscillator drift, etc. These are fairly easy to detect remotely. One example is shown in this paper: https://www.cs.ucr.edu/~zhiyunq/pub/infocom18_wireless_finge...
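As a toy illustration only (my own made-up numbers and features, not the paper's method): you measure a handful of analog imperfections per radio, keep them as a fingerprint, and match a new capture to whichever known fingerprint it sits closest to.

    #include <stdio.h>

    /* Toy RF-fingerprint matcher: each known radio has a small feature
       vector (e.g. frequency offset in ppm, spur level in dBc, power
       ripple in dB); a new capture is matched by nearest distance.
       Values are invented purely for illustration. */

    #define NFEAT 3

    struct fingerprint {
        const char *label;
        double feat[NFEAT];
    };

    static double dist2(const double *a, const double *b)
    {
        double d = 0.0;
        for (int i = 0; i < NFEAT; i++)
            d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }

    int main(void)
    {
        struct fingerprint known[] = {
            { "device A", {  1.8, -47.0, 0.6 } },
            { "device B", { -0.4, -52.5, 1.1 } },
            { "device C", {  3.1, -44.2, 0.9 } },
        };
        double observed[NFEAT] = { 1.7, -46.6, 0.7 };  /* new capture */

        int best = 0;
        for (int i = 1; i < 3; i++)
            if (dist2(observed, known[i].feat) < dist2(observed, known[best].feat))
                best = i;

        printf("closest fingerprint: %s\n", known[best].label);
        return 0;
    }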
That paper states that the accuracy could be as high as 95%. Apple has sold over a billion iOS devices with WiFi radios in them. I'll let you Google the base-rate fallacy for yourself, and decide if that risk is worth it.
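Back-of-the-envelope version of that (my own arithmetic, loosely treating the paper's "95% accuracy" as a 5% error rate, which is a simplification):

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative numbers only: ~1e9 candidate devices, 5% error rate. */
        double devices  = 1e9;
        double accuracy = 0.95;   /* treated as both hit rate and 1 - FPR */

        double false_matches = (1.0 - accuracy) * (devices - 1.0);
        double true_matches  = accuracy * 1.0;   /* the one device sought */

        /* Chance that a "same device" match really is the device you want. */
        double posterior = true_matches / (true_matches + false_matches);

        printf("expected false matches: %.0f\n", false_matches);
        printf("P(flagged device is the right one): %.1e\n", posterior);
        return 0;
    }

Roughly 50 million false matches and a vanishingly small chance that any given match is the device you care about. That's the base-rate fallacy in one screen.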
The paper describes only one such method; there are countless others, and these methods have been in documented use in signals intelligence since at least WW2. Combine them and your accuracy increases. And this is on top of all the other known methods of fingerprinting network devices. Besides, most of the time you only care whether the same device was used, and 95% gives you a lot of certainty.
Within proper constraints, "proven secure" certainly is possible.
Good security practice is treating all devices as insecure until proven otherwise. It also means mitigating known unknowns where a general problem happens a lot: devices snooping on you, misleading you, interdiction, hacks on firmware, etc. Then you mitigate it in situations where you're unsure of what's going on, just in case, so long as the mitigation isn't too costly.
I used to buy and get rid of WiFi devices and throwaway computers for that reason, and buy them in person at random places with cash. You can even turn it into charity by using FDE, wiping them afterwards, and reselling them cheap or donating them to others that can't afford full price. Put Ubuntu and Firefox on them to spread some other good things.
Well that's impossible (see also the halting problem) so that's pretty clearly not good security practice.
Nothing in that says anything about what your threat model is. What risk are you mitigating by doing this? This sounds like the type of "ignore the words and listen to the sound of my voice" security espoused by management and vendor sales people.
It sounds like you have a diverting pastime, and I wish you the best with that, but this isn't what security is about. Security is about identifying and mitigating specific risks. This goes doubly for operational security. All else is security theater.
Extra comment to add something I left off. There are at least two types of static analysis and solver tools: unsound and sound. The sound ones, especially RV-Match and Astree Analyzer, use a formal semantics of the code, a formal statement of the property, and automatic analysis to determine whether it holds. Relatedly, SPARK Ada and Frama-C have their formal specs and code turned into verification conditions that check the code's conformance to the specs. The VCs go through Why3, which sends them to multiple automated solvers to check them logically. It's far easier to scale and get adoption of these automated methods than of manual proofs.
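To give a feel for it, here's a toy contract of my own (not from any of those projects) in the ACSL style Frama-C reads; its WP plugin turns the spec and loop annotations into verification conditions that Why3 passes to the solvers:

    /*@ requires n > 0;
      @ requires \valid_read(a + (0 .. n - 1));
      @ assigns \nothing;
      @ ensures \forall integer k; 0 <= k < n ==> \result >= a[k];
      @ ensures \exists integer k; 0 <= k < n && \result == a[k];
      @*/
    int max_of(const int *a, int n)
    {
        int m = a[0];
        int i = 1;
        /*@ loop invariant 1 <= i <= n;
          @ loop invariant \forall integer k; 0 <= k < i ==> m >= a[k];
          @ loop invariant \exists integer k; 0 <= k < i && m == a[k];
          @ loop assigns i, m;
          @ loop variant n - i;
          @*/
        for (; i < n; i++)
            if (a[i] > m)
                m = a[i];
        return m;
    }

A regular C compiler ignores the annotations; something like frama-c -wp on the file either discharges every obligation or tells you which part of the spec it can't prove.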
The main drawback is potential errors in the implementations of the analyzers or solvers that invalidate what they prove. Designs for certifying solvers exist that essentially are verified themselves or produce something verifiable as they go. There are examples like verSAT and Verasco. The tech is there to assure the solvers. Personally, I'm guessing it hasn't been done to industrial solvers due to academic incentives: funding authorities push researchers to focus on quantity of papers published over quality or software improvements, and on new stuff over re-using good old stuff. Like infrastructure code, everyone is probably just hoping someone else does the tedious, boring work of improving the non-novel code everyone depends on.
Also, given my background in high-assurance research, I'm for each of these tools and methods, mathematical or not, being proven over many benchmarks of synthetic and real-world examples to assess effectiveness; LAVA is one example. I want them proven in theory and in practice. The techniques preventing or catching the most bugs get the most trust.
"Well that's impossible (see also the halting problem) so that's pretty clearly not good security practice."
No, it's not. It's been done many times. The halting problem applies to a more general issue than the constrained proofs you need for specific computer programs. If you were right, tools like RV-Match and Astree Analyzer wouldn't be finding piles of vulnerabilities with mathematical analyses, and SPARK Ada code would be as buggy as similar C. Clearly, the analyses are working as intended despite not being perfect.
"Security is about identifying and mitigating specific risks. "
Computer security, when it was invented in the 1970's, was about proving that a system followed a specific security policy (the security goals) in all circumstances, or failed safe. The policy was usually isolation; there are others, such as guaranteed ordering or forms of type safety. High-assurance security's basic approach was turned into certification criteria applied to production systems as early as 1985, with SCOMP being the first certified. NSA spent five years analyzing and trying to hack that thing; most get about two years with minimal problems. I describe some of the prescribed activities here in my own framework from way back when:
Note that projects in the 1960's were hitting lower defect rates than projects achieve today. For higher cost-benefit, I identified the combination of Design-by-Contract, Cleanroom (optional), multiple rounds of static analysis by tools with low false-positive rates, test generators (especially ones that consider the contracts), and fuzzing with the contracts compiled in as runtime checks (think asserts). That, with a memory-safe language, should knock out most major problems with minimal effort on developers' part (some annotations). Most of it would run in the background or on build servers.
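A minimal sketch of the "contracts as runtime checks plus fuzzing" part, with a made-up function and a libFuzzer-style harness assumed for the driver:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Toy routine whose pre/postconditions are plain asserts, so any input
       the fuzzer finds that violates them aborts and gets reported. */
    static size_t copy_bounded(char *dst, size_t dst_len,
                               const uint8_t *src, size_t src_len)
    {
        assert(dst != NULL && src != NULL);  /* preconditions */
        assert(dst_len > 0);

        size_t n = src_len < dst_len - 1 ? src_len : dst_len - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';

        assert(n < dst_len);                 /* postcondition */
        return n;
    }

    /* libFuzzer-style entry point; build with clang -fsanitize=fuzzer,address. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        char buf[64];
        copy_bounded(buf, sizeof buf, data, size);
        return 0;
    }

The point is that the developer writes the contracts once; the fuzzer and sanitizers do the tedious part of hunting for inputs that break them.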
Modern OSes, routers, basic apps, etc. aren't as secure as software designed in the 1960's-1980's. People are defining "secure" as mitigating some specific things hackers are doing (they'll just do something else) instead of as properties the system must maintain in all executions on all inputs. We have tools and development methods to do this, but they're just not applied in general. Some still do it, like INTEGRITY-178B and the Muen Separation Kernel. Heck, there's even IRONSIDES DNS and TrustDNS, done in SPARK Ada and Rust respectively. Many tools to achieve higher quality/security are free. Don't pretend it's just genius mathematicians or Fortune 25 companies that can, say, run a fuzzer after developing in a disciplined way with Ada or Rust.
It's less a culture of I-know-something-you-don't than a culture of someone-may-know-something-I-don't. I don't understand your implication of intellectual delusions of grandeur here; I see it as the opposite.
If you read the other reply to my comment, you'll see that it was in fact a case of I-know-something-you-don't, although in this instance they're wrong about the implications of the thing they know. The gatekeeping that goes on in security (saying that there's a threat but not saying what it is) is extremely frustrating to me.