I think any intro stats book should do the trick. As far as I know, the material in a first stats course is pretty homogeneous. I'm not a biostatistician, but I happen to like this book [0] for introductory stuff. Amazon says you can get it used for $26.
I took intro to stats at a business school before switching to computational linguistics, and honestly, the only answer I've ever gotten about p-values is "it's just something you do."
I'll try looking at an introductory book again and see if it satisfies my curiosity.
The general motivation for a p-value is that you can model what your data should look like under the assumption that your model is correct, but you can't really say what your data should look like under the assumption that your model is incorrect. There are just too many ways that it could be incorrect.
As a concrete example, I might ask you for the distribution of the mean of N samples given that they come from the standard normal distribution (mean zero, variance 1). That's easy. The sample mean, which is itself a random variable, is also normally distributed, with a mean of zero and a variance of 1/N. On the other hand, if I ask you about the mean, but the only info you have is that your data isn't from a standard normal, then it could be anything! There's no objective way to say how the sample mean is distributed given that one crappy piece of info.
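That claim about the sample mean is easy to sanity-check by simulation. Here's a minimal stdlib-only sketch (the values of N and the trial count are arbitrary choices of mine):

```python
import random
import statistics

# Claim: the mean of N draws from a standard normal is itself
# normally distributed with mean 0 and variance 1/N.
random.seed(0)

N = 25          # samples per experiment
trials = 20000  # number of sample means to collect

sample_means = [
    statistics.fmean(random.gauss(0.0, 1.0) for _ in range(N))
    for _ in range(trials)
]

print(statistics.fmean(sample_means))     # should land near 0
print(statistics.variance(sample_means))  # should land near 1/N = 0.04
```

The simulated variance of the sample means comes out close to 1/N, as advertised.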
The most basic thing you can do, then, is to assume that your model is true and see if your data is plausible. If I have a hypothesis that I'm flipping a fair coin and I get all heads on 10 flips, I'm going to start doubting my hypothesis. The probability of all heads or all tails with a fair coin is only 1/512 ≈ 0.002. P-values formalize that notion. We call the hypothesis we can model our "null hypothesis" and see if we get data that makes sense with it. If your observations are some of the most unlikely ones according to your null model, let's start doubting the model. That's it.
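The coin-flip arithmetic can be checked in a couple of lines. This is just the binomial count for the two outcomes (all heads, all tails) that are "at least as extreme" as what we observed:

```python
from math import comb

# Two-sided p-value for 10 heads in 10 flips under the null
# hypothesis of a fair coin: the probability of a result at
# least this extreme, i.e. all heads or all tails.
n = 10
p_value = 2 * comb(n, n) * 0.5**n  # 2/1024 = 1/512
print(p_value)  # 0.001953125
```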
Both the benefit and the trouble are that we dodged the entire question of what an alternative to our model could be, and how the data would look under those alternatives. Ignoring that incredibly important question can give rise to a weird way of thinking, and opens the door to some conceptually mind-bending mistakes, but it all comes from a simple interpretation of a p-value: how unlikely is your data given your null hypothesis (the model you're trying to test)? Formally, this tends to be "what is the probability that some statistic is this unlikely or worse?"
[0] https://www.amazon.com/Principles-Biostatistics-CD-ROM-Marce...