I don't know how much detail you want (and to go very far I'll have to look some things up). It would probably be better to have you try to explain it and then lead you through it, but that won't work well over a message board.
Hopefully this helps, but if nothing else I suppose it gives a starting point for specific questions.
Essentially, the p-value is one part of a tool for determining whether a specific statistical outcome is so far from what you'd expect to happen by random chance that it should be viewed as statistically significant.
The classic example is flipping a coin. If it is a perfectly fair coin and you flip it 100 times, statistically you would expect to get 50 heads and 50 tails.
However, when you actually perform the experiment, you end up with, say, 57 heads and 43 tails. Every possible combination from 0 heads/100 tails to 100 heads/0 tails can occur, but they are not all equally likely. Calculating the p-value means calculating how likely it is to get a result at least as extreme as the one you observed.
The p-value here would be the probability of getting a result deviating by 7 or more from the expected 50 (so the p-value is the chance of getting 57 or more heads, or 57 or more tails). I'm not going to do the math by hand -- and it can be complex, so I am assuming you won't need to either -- but it works out to about 19.3%; there's a sketch of the calculation below.
This would mean that if you flipped a fair coin 100 times, about 19.3% of the time you'd get a result at least that lopsided (57 or more of either heads or tails).
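I said I wasn't going to do the math, but if you're curious, here's a minimal sketch in Python that computes it exactly by summing binomial probabilities (coin_p_value is just an illustrative name, not anything standard):

    from math import comb  # exact binomial coefficients, Python 3.8+

    def coin_p_value(heads, flips=100):
        """Two-sided p-value for a fair coin: the probability of an
        outcome at least as far from flips/2 as `heads` is."""
        deviation = abs(heads - flips / 2)
        # Count every outcome at least as extreme as the observed one.
        extreme = sum(comb(flips, k) for k in range(flips + 1)
                      if abs(k - flips / 2) >= deviation)
        return extreme / 2 ** flips

    print(coin_p_value(57))  # roughly 0.193
    print(coin_p_value(61))  # roughly 0.035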
Now, the p-value is not used by itself. It is just a statement of fact, not of importance. So it is used to help decide whether a result is significant. For example, in a medical study, if 60% of those taking a drug are cured in 3 days, is that significant when those taking the placebo see a 55% cure rate over the same 3 days, or was it just random deviation?
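To make that concrete, here's a hedged sketch of how such a comparison might be tested. The sample sizes are made up (say, 200 patients per arm), and Fisher's exact test from scipy is just one common choice among several:

    from scipy.stats import fisher_exact

    # Made-up counts: 120 of 200 cured on the drug (60%),
    # 110 of 200 cured on the placebo (55%).
    table = [[120, 80],    # drug: cured, not cured
             [110, 90]]    # placebo: cured, not cured
    _, p_value = fisher_exact(table, alternative="two-sided")
    print(p_value)  # compare against your pre-chosen significance level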
So, before doing your experiment/measurements you need to decide what your significance level will be. A very common one is 5%. This means it was decided (and it is an arbitrary decision, though different fields have professional standards) that an outcome will be viewed as statistically significant only if the observed outcome had less than a 5% chance of happening purely by chance -- that is, only if its p-value is below 5%.
In our example, the p-value for 57 heads was about 19.3%. So if we were using a 5% standard, getting 57 heads would not be viewed as statistically significant. Therefore it cannot be viewed as providing support for a hypothesis that the coin is not fair. If, however, you got 61 heads, the p-value for that works out to about 3.5%.
3.5% is less than the significance level of 5%, which means that result would be viewed as statistically significant. One important thing to keep in mind is that this does not prove anything; it is just evidence in favor of the hypothesis that the coin is not fair.
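Putting it all together, the decision rule itself is just a comparison against the threshold. This sketch reuses the coin_p_value function from the earlier snippet:

    alpha = 0.05  # significance level, chosen before the experiment

    for heads in (57, 61):
        p = coin_p_value(heads)
        verdict = "significant" if p < alpha else "not significant"
        print(f"{heads} heads: p = {p:.4f} -> {verdict}")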