R is all about data structures. Everything is built from vectors and lists. Arrays are vectors with a 'dim' attribute. Data frames are lists of vectors of the same length. And so on. Factors, which are again a kind of vector, are the primary tool for partitioning data into groups, so you can have 'ragged' arrays. Once you understand how all these pieces fit together, you've got the hang of R.
Almost all of what you said applies equally to Python when using NumPy, Pandas, and SciPy.
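To make the parallel concrete, here's a rough sketch in NumPy/Pandas of the same ideas: an array as a flat vector plus a shape, a data frame as equal-length columns, and a Categorical playing the role of an R factor for grouping (the column names and values here are just made up for illustration):

```python
import numpy as np
import pandas as pd

# An ndarray is a flat buffer plus a shape, much like an R vector
# with a 'dim' attribute.
v = np.arange(6)
a = v.reshape(2, 3)  # same data, now viewed as a 2x3 array

# A DataFrame is a collection of equal-length columns, much like
# an R data frame is a list of same-length vectors.
df = pd.DataFrame({"group": ["a", "b", "a", "b"], "x": [1, 2, 3, 4]})

# A Categorical plays the role of an R factor: integer codes plus a
# table of levels, used here to partition the rows into groups.
df["group"] = df["group"].astype("category")
means = df.groupby("group", observed=True)["x"].mean()
```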
In R, a factor is also the bizarre result you get if you load a flat file incorrectly (for years read.csv converted string columns to factors by default via stringsAsFactors = TRUE; the default only changed in R 4.0.0). Lots of things in R proceed without stopping on errors, and you end up with weird data that isn't really usable but still lets your program continue.
Language wars again :)
I think both languages have their own strengths. I come from a programming background and took to Python. However, I often run into situations where an R implementation of some advanced statistical routine exists and none does for Python. I'm sure the reverse is also true. So this is an attempt from that angle :)
People using default R installations report that special commands are needed to make it stop on all errors. Are they delusional? What about the R experts who give them solutions, are they doling out placebos?
The poster in that thread was doing the equivalent of stepping through a block of code line by line manually, ignoring every error and executing the next line anyway.
That's true. However, NumPy, Pandas, and SciPy are external libraries, which I think is a slight disadvantage. In R all these fancy data structures are built into the language and are used everywhere in a natural way, whereas NumPy feels a bit like an appendage, or like a language inside a language. That said, I don't dislike Python/NumPy, and I think it definitely has its uses.
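A toy illustration of that "language inside a language" feel: the same operators mean different things on a plain Python list and on an ndarray, and only the latter behaves like an R vector (the variable names are just for illustration):

```python
import numpy as np

xs = [1, 2, 3]
ys = np.array([1, 2, 3])

# Plain Python: '+' concatenates lists and '*' repeats them.
concat = xs + xs    # [1, 2, 3, 1, 2, 3]
repeated = xs * 2   # [1, 2, 3, 1, 2, 3]

# NumPy: the same operators are elementwise, as they would be
# for vectors in R.
summed = ys + ys    # array([2, 4, 6])
doubled = ys * 2    # array([2, 4, 6])
```

So vectorized semantics only kick in once your data has crossed over into NumPy's types, which is part of why it can feel like a separate sub-language.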