That does not sound like a good idea. You can't even maintain a quorum of 2 replicas with n=3 on a cluster like that. Losing one data node would be disastrous.
That's really not how ES replicas work. The quorum is formed among the master-eligible nodes (hence a tie-breaker) and is only required to elect a master. The elected master designates a primary shard and as many replicas as you configure. However, replica shards are just copies and may lag. There's no read quorum or reconciliation or anything like that happening. If a primary fails, an (in-sync, depending on the version of ES) replica is auto-promoted. The master keeps track of in-sync replicas, and you can request that a write lands on a number of shard copies before it returns, but still, no true quorum.
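Roughly what that write-side knob looks like from the Python client (the index name is made up, and the exact keyword arguments differ a bit between 7.x and 8.x clients):

    # Sketch only: assumes a local cluster and the elasticsearch-py client.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # wait_for_active_shards makes the indexing call wait until that many
    # shard copies (primary + replicas) are active before proceeding. It is
    # a durability/latency knob, not a read quorum: searches may still hit
    # a lagging replica.
    es.index(
        index="logs",
        document={"msg": "hello"},
        wait_for_active_shards=2,
    )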
You can absolutely run 2 data/master-eligible nodes plus a single master-eligible tie-breaker node as a safe configuration. The only constraint is that you should have an odd number of master-eligible nodes to avoid a split brain. You also need to understand what the resilience guarantees are for any given number of replicas (roughly: each replica allows for the loss of a single random node) and how many replicas you can actually allocate on a given cluster (at most one copy of each shard per data node). That lets you run a two-data-node cluster in a configuration that survives the loss of one node.
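As a rough sketch of the index-level side of that setup (the node-level side - two data/master nodes plus, on newer ES versions, a voting-only tie-breaker - lives in elasticsearch.yml and isn't shown; index name and client calls are illustrative):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # One replica = one full extra copy of every shard. With two data nodes
    # that's also the maximum, since a primary and its replica are never
    # allocated on the same node.
    es.indices.create(
        index="logs",
        settings={
            "number_of_shards": 1,
            "number_of_replicas": 1,
        },
    )

    # Green means every primary and replica is assigned; at that point the
    # cluster keeps serving if either data node drops out.
    es.cluster.health(wait_for_status="green")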
I was not saying that's how they work. Most prod clusters are configured for high availability with multiple replicas. They have at least 3 nodes and 2 replicas configured for the index. Sure, you can run this configuration, but do you really want all your traffic to hit this one instance when the other one goes down?
> Most prod clusters are configured for high availability and multiple replicas
I've been doing ES consulting and running clusters since 0.14, and I see very few clusters that run more than a single replica. Most 3-node clusters I see are three nodes simply because you can then have three identical node configurations, at the cost of throwing more hardware at the problem.
> but do you really want all your traffic to hit this one instance when the other one goes down
Whether that's a problem really depends on whether your cluster is write-heavy or read-heavy. Basically all ELK clusters are write-heavy, and in that case losing one of two nodes would also cut writes in half (due to the write amplification that replicas cause). Other clusters just have replicas for resilience and can survive the read load with half of the nodes available. Whether that is the case for your cluster(s) depends - that's why I explicitly asked what constraints the OP had.
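To put a rough number on the amplification (the rates are made up):

    # Every document is indexed on its primary shard and again on each
    # replica, so total indexing work scales with (1 + replicas).
    def indexing_ops_per_sec(ingest_rate, replicas):
        return ingest_rate * (1 + replicas)

    # 10k docs/s with 1 replica on two data nodes -> 20k indexing ops/s,
    # i.e. each node is already indexing the full 10k docs/s on its own.
    print(indexing_ops_per_sec(10_000, 1))  # 20000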