Sure, it's just that most people using Parquet aren't reaching for command-line tools, because at scale that's too slow to be practical. Scaling these tools down is possible, just not a priority. Tools that do this probably exist already, and if they don't, you could script a grep-like tool together in Python in a few minutes. It's not rocket science, and odds are you're one quick Google search away from a gazillion of them.
But the whole point of Parquet is doing things at scale on clusters, not one-off quick-and-dirty processing of tiny data on a laptop. You could do it, but grep and CSV work well enough for those use cases. The other way around is a lot less practical.
Tbh these days I'd use Parquet down to quite small files; DuckDB is a single binary and gives me grep-like searching, but faster and with loads more options for when grep itself isn't quite enough.