
I don't think workspace memory is the OP's point. If the initial set (not the intermediate or final projection) is too large, reads will come from disk, which also creates CPU churn. At least in MSSQL, if a data page is not in memory, it is read from disk into the buffer pool and then from the buffer pool into the workspace for processing. If there isn't enough buffer space to fit all of the data set's pages in the pool, you're going to be reading and flushing pages repeatedly. I think PG operates similarly.
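In Postgres you can watch this buffer pool traffic directly with EXPLAIN (ANALYZE, BUFFERS). A rough illustration, assuming a hypothetical population table; the counts in the sample output are made up:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM population;

    -- Abridged, illustrative output:
    --   Seq Scan on population ...
    --     Buffers: shared hit=1200 read=84000
    -- "hit"  = page was already in the buffer pool
    -- "read" = page had to be pulled from disk into the pool first

A large read count relative to hits is exactly the churn described above: pages cycling through the pool faster than they can be reused.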


Yes, to sample a population you must consider every row in it; there's no way around that. But it doesn't have to be as slow and expensive as reading the entire population table.

To determine which rows are in the sample, you only need to consider two columns: the weight and a primary key.
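As a concrete sketch (assuming a hypothetical population(id, weight) table, Postgres syntax): the Efraimidis–Spirakis trick does weighted sampling without replacement by giving each row an exponentially distributed key and keeping the k smallest:

    -- Weighted sample of 100 rows without replacement.
    -- -ln(u)/w with u ~ Uniform(0,1] is an Exp(w) draw;
    -- the rows with the 100 smallest keys form the sample.
    SELECT id
    FROM population
    ORDER BY -ln(1 - random()) / weight
    LIMIT 100;

Every row still gets a key, so every row is considered, but only id and weight are ever touched.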

If your population lives in a traditional row-oriented store, a naive scan reads every full row. But if you index your weights, you only need to scan the index, not the underlying population table, to identify the sample.
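In Postgres, for example, a covering index keeps that scan index-only (names hypothetical):

    -- Postgres 11+: INCLUDE puts id in the index leaf pages,
    -- so the sampling query above can be answered by an
    -- index-only scan that never touches the heap.
    CREATE INDEX population_weight_idx
        ON population (weight) INCLUDE (id);

One caveat: index-only scans in Postgres also depend on the visibility map, so this works best on a recently vacuumed table.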

If your population lives in a column-oriented store, identifying the sample is fast, again because you only need to read two small columns to do it. Column stores are optimized for this case.
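As an illustration, in a columnar engine like DuckDB the same sampling query reads only the id and weight column segments, however wide the table is:

    -- DuckDB: projection pushdown means only the referenced
    -- columns (id, weight) are scanned from storage.
    SELECT id
    FROM population
    ORDER BY -ln(1 - random()) / weight
    LIMIT 100;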



