Googler; opinions are my own. I know little about BQ.

My understanding of BigQuery is that at least some of the tech it's built on is designed for handling very large datasets. I'm talking about being able to search BI data that is petabytes in size. I don't think it's really designed for small datasets.

If you can stick the data in SQLite, I think you're using the wrong tool.

Searching around, it looks like BI Engine may help in cases like this. See: https://cloud.google.com/blog/topics/developers-practitioner...

That's true. Most of our production data is in the "petabyte" category. But often we start out with a small amount of data, for example for a new feature or product. That's when a lot of the exploration and query building happens. Once that's done, we leverage BigQuery to scale it up, serving a few hundred million users. There's a platform/standardisation component to this as well: we want our devs to use a single tool for all of their OLAP use cases.


Totally makes sense. I guess I do the same thing internally (using partitioned tables so the queries scan a lot less data, or making sure to select only the minimum columns needed to prove my query is correct). I also have no clue whether my internal queries end up using something like BI Engine that caches data in memory (much of this is opaque to me), but turnaround is at least a few seconds per request. Your point is valid here, thanks for raising it.
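
For what it's worth, here's a rough sketch of the pattern I mean, using the google-cloud-bigquery Python client. The project, table, and column names are made up for illustration:

    # Sketch: prune partitions and columns, and dry-run the query first
    # to see how much data it would scan. Table and column names below
    # are hypothetical.
    from google.cloud import bigquery

    client = bigquery.Client()

    # Select only the columns needed, and filter on the partition column
    # so BigQuery prunes partitions instead of scanning the whole table.
    sql = """
        SELECT user_id, event_name
        FROM `my-project.analytics.events`   -- hypothetical table
        WHERE DATE(event_ts) = '2024-01-01'  -- partition filter
    """

    # Dry run: reports the bytes that would be scanned without actually
    # running the query (and without incurring cost).
    cfg = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=cfg)
    print(f"Would scan {job.total_bytes_processed / 1e6:.1f} MB")

    # If the estimate looks sane, run it for real.
    for row in client.query(sql).result():
        print(row.user_id, row.event_name)

The dry run is a cheap way to confirm the partition filter actually limits the scan before running anything for real.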
