> Buffering is the only sane default if you want safety.
A LOT of sites on the web survived for many years without it. And they felt more responsive because of it.
I think it's a fair trade-off to optimize for your user experience instead of the machines. Honestly, Google is not going to ding you for this, and as long as you've got an exception notifier going and are on top of it, there's no reason it should become a real problem for most sites. Being able to make a choice per-request/action would be nice of course.
> My assumption was merely that you should measure it before you start worrying about it.
I have. I wrote the original versions of DataMapper (which, BTW, was a DataMapper despite popular belief. The methods hanging off the models were originally just helpers to the Session, but it lost its way at some point).
I can tell you for a fact that it's slow. Sure, go ahead and measure it. I'm not saying that's bad. Just apply some judgement to the situation. Are you grabbing a lot of rows? Are you performing lots of individual selects? (Or using AR+include options where each include is effectively the same as another query?)
You can feel free to write slow code until it becomes a problem, or you can develop some good practices up front, before ever needing to profile, to prevent much of that. Disregard for reasonable optimization and forming good habits based on generalized rules of thumb is just as evil as premature optimization. We're not talking about twiddling minutia here.
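To make the per-row cost concrete, here is a minimal, self-contained sketch (plain Ruby with the stdlib Benchmark module, not AR's actual internals): the `Row` class is a hypothetical stand-in for an O/RM model that wraps every result row in a full object with attribute accessors, compared against iterating the raw hashes directly.

```ruby
require "benchmark"

# Hypothetical stand-in for an O/RM model object: each result row
# becomes a full object with per-attribute readers. This is the
# per-row materialization cost being discussed, not AR's real code.
class Row
  attr_reader :id, :name, :email

  def initialize(attrs)
    @id    = attrs[:id]
    @name  = attrs[:name]
    @email = attrs[:email]
  end
end

# Simulate a large result set of 50,000 raw rows.
rows = Array.new(50_000) { |i| { id: i, name: "user#{i}", email: "u#{i}@example.com" } }

Benchmark.bm(12) do |bm|
  # Touch one field per raw hash -- no object allocation.
  bm.report("raw hashes:") { rows.each { |r| r[:name] } }
  # Allocate one object per row, then touch the same field.
  bm.report("objects:")    { rows.map { |r| Row.new(r) }.each { |o| o.name } }
end
```

The absolute numbers depend on the Ruby version and machine, but the gap between the two reports is the kind of thing worth knowing about before you're grabbing tens of thousands of rows per request.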
> it would be silly to create objects for all of them
Why?
Hibernate can do it. NHibernate can do it. LLBLGenPro could do it (probably the closest to Ruby's Sequel, but there it was a non-issue even with Models). Simpler AR-style O/RMs like Wilson O/R Mapper could do it.
Given that AR doesn't really provide a decent DAL tool that I can recall, I'd say it's perfectly reasonable to expect that the "Full Stack" framework you're using covers the bases without nasty surprises. Especially if you've come from a background where millions of method-dispatch events impose nanoseconds of overhead instead of milliseconds (Java and .NET at least).
I don't think it's at all "silly". I think it's a perfectly reasonable goal.
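One way "objects for all of them" stays reasonable is to materialize rows lazily rather than all at once. A minimal sketch, assuming nothing about any particular O/RM's API (the names here are illustrative): an Enumerator yields one object per row as it's consumed, so allocation is spread across iteration instead of front-loaded.

```ruby
# Illustrative record type -- not any real O/RM's class.
Record = Struct.new(:id, :payload)

# Yield one materialized Record per raw row. With no block given,
# return a lazy Enumerator so rows are built only as they're consumed.
def each_record(row_source)
  return enum_for(:each_record, row_source) unless block_given?
  row_source.each { |raw| yield Record.new(raw[:id], raw[:payload]) }
end

raw_rows = Array.new(3) { |i| { id: i, payload: "row-#{i}" } }

each_record(raw_rows).each { |rec| puts rec.id }  # prints 0, 1, 2
```

This is the kind of design the heavier Java/.NET mappers use to keep full-object results from becoming a memory or latency surprise.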