So, basically, you have (or had) access to closed software designed
specifically for working with system logs, and on that basis you piss on
everybody who uses what they have at hand on a smaller scale. Or at least
that is how I read your comments here.
I may need to tone down my sarcasm, but likewise, you need to tone down your
arrogance about working at Google or somewhere comparable.
But still, thank you for the search keyword ("dremel"). I certainly will read
the paper (though I don't expect too many very specific ideas from
a publication ten pages long), since I dislike the current landscape of only
having ES, flat files, and paid solutions for storing logs at a rate of a few
GB per day.
> A dozen or less gigabytes a day means: use grep. This is just like throwing Hadoop at that log volume.
No, not quite. I do also use grep and awk (and App::RecordStream) with that.
I still want a query language for working with this data, especially if it
comes combined with an easily usable histogram plotter.
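To give a sense of scale, this is roughly the kind of thing I end up hand-rolling today (a sketch only; the access.log name and the status code sitting in field 9 are placeholders for whatever your logs actually look like):

    # Count requests per HTTP status code and draw a crude text histogram,
    # one "#" per ~100 requests. Works with plain POSIX awk.
    awk '{ count[$9]++ }
         END {
           for (code in count) {
             bar = "";
             for (i = 0; i < count[code] / 100; i++) bar = bar "#";
             printf "%s %8d %s\n", code, count[code], bar;
           }
         }' access.log | sort

It does the job, but every new question means writing a new throwaway script like this, which is exactly why I want a query language on top of the data instead.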