The effective latency users see is the maximum of all latencies observed during the session. If your session requires 10 observable requests, then 1 - 0.95^10 ≈ 40% of users will experience >95th-percentile latency at some point. There are many caveats (I specifically said "observable" because there are plenty of ways to hide latency from users, and the number of observable requests can vary greatly), but it is never the case that the raw per-request latency equals the user-visible latency.
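As a rough sketch (assuming independent requests, a fixed session size of 10, and a 5% chance that any single request lands above the p95 latency), the arithmetic can be checked directly, both analytically and by simulation:

```python
import random

REQUESTS_PER_SESSION = 10   # assumed session size from the example above
P_TAIL = 0.05               # probability a single request exceeds p95
TRIALS = 100_000

# Analytic probability that at least one request in a session is slow.
analytic = 1 - (1 - P_TAIL) ** REQUESTS_PER_SESSION
print(f"analytic:  {analytic:.1%}")   # ~40.1%

# Monte Carlo counterpart: count sessions containing at least one slow request.
hits = sum(
    any(random.random() < P_TAIL for _ in range(REQUESTS_PER_SESSION))
    for _ in range(TRIALS)
)
print(f"simulated: {hits / TRIALS:.1%}")
```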
Also worth mentioning is the slightly fuzzier problem that the effect of latency on user satisfaction is not linear. A user who experiences four 125 ms page loads followed by one 500 ms page load is less happy than a user experiencing five 200 ms page loads, even though the total time is the same.
So the user experience of your system is strongly driven by the tails of the latency distribution. Practically all of the useful information is beyond the 95th percentile, not below it.