> Most prod clusters are configured for high availability and multiple replicas
I've been doing ES consulting and running clusters since 0.14, and I see very few clusters that run more than a single replica. Most 3-node clusters I see are run that way because you can then have three identical configurations, at the cost of throwing more hardware at the problem.
> but do you really want all your traffic to hit this one instance when the other one goes down
Whether that's a problem really depends on whether your cluster is write-heavy or read-heavy. Basically all ELK clusters are write-heavy, and in that case losing one of two nodes would also cut write capacity in half (due to the write amplification that replicas cause). Other clusters just have replicas for resilience and can survive the read load with half of the nodes available. Whether that is the case for your cluster(s) depends - that's why I explicitly asked what constraints the OP had.
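For reference, the replica count is a per-index setting you can change at any time via the `_settings` endpoint. A minimal sketch (host and index name are placeholders; assumes the standard REST API):

```python
import requests

# Set the replica count for an index (placeholder host and index name).
# number_of_replicas is the number of copies *in addition to* the primary,
# so 1 replica means every shard lives on two nodes.
resp = requests.put(
    "http://localhost:9200/my-index/_settings",
    json={"index": {"number_of_replicas": 1}},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # {"acknowledged": true} on success
```

Bumping that number multiplies indexing work accordingly, which is exactly the write amplification trade-off above.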