I would argue that starting at zero comes more naturally if you’ve been working in binary and are often using bits to select or signal things. In that case the single bit zero is a rather important piece of information and where everything starts. When you translate these ideas up into higher-level languages it continues to feel somewhat natural — just as the hardware may be selecting a register addressed as 000000000 and so on, the items in an array start from zero. We go ahead and write the indexes after that in decimal, for convenience, but there’s a feeling that they are like the hardware memory addresses under the hood.
This isn’t so compelling in today’s world of mostly people who live in high level programming languages with no real notion of what goes on in the hardware. But backwards compatibility is a thing, and “following conventions” as well, so I see zero based indexing used in modern high level languages as a logical extension of where this all came from.
What’s hard, really, isn’t using zero based indexing, it’s switching back and forth between systems.
Agreed. What's weird to me is how high-level languages like Python and JS adopted C's indexing. But I guess that reflects C's huge influence on the programming community: most of us are taught zero-based indexing from the start.