The p-value is the smallest level of significance $\alpha_0$ at which I would reject the null hypothesis given the observed data.
In olden times people simply reported a result: "rejected the null" or "did not reject the null." This doesn't tell me how close the decision was. If a researcher rejected the null hypothesis at significance level $0.05$, other researchers can't tell whether they would also reject it at significance level $0.01$, which is a stricter condition. Hence, it has become common practice to report the p-value, which identifies every $\alpha_0$ at which $H_0$ would be rejected.
If the test $\delta$ is of the form "Reject $H_0$ if $T\ge c$" for a test statistic $T$, and the value $T=t$ is observed, then the p-value equals the size of the test $\delta_t$, where $\delta_t$ is the test that rejects $H_0$ if $T\ge t$:
$$\text{p-value}=\alpha(\delta_t)=\sup_{\theta\in\Omega_0}\pi(\theta|\delta_t)=\sup_{\theta\in\Omega_0}\Pr(T\ge t|\theta)$$
The p-value is often described as the probability of observing a dataset as extreme as the one actually observed if the null hypothesis is true. This description is exact when the null hypothesis is simple (contains only one point), or more generally when the supremum of $\Pr(T\ge t|\theta)$ over $\Omega_0$ is attained at a boundary point of $\Omega_0$ (which is often the case).
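As a concrete sketch of the definition above, consider the hypothetical one-sided test $H_0\colon \mu\le 0$ versus $H_1\colon \mu > 0$ for a normal sample with known $\sigma=1$, using $T=\sqrt{n}\,\bar{X}_n$ as the test statistic. Since $\Pr(T\ge t|\mu)$ is increasing in $\mu$, the supremum over $\Omega_0=(-\infty,0]$ is attained at the boundary point $\mu=0$, where $T\sim N(0,1)$. The numbers below (sample size, seed, true mean) are illustrative choices, not from the text:

```python
import math
import random

# Hypothetical data: n = 25 observations from N(0.3, 1), i.e. a small
# true effect, so H0: mu <= 0 is in fact false here.
random.seed(0)
n = 25
x = [random.gauss(0.3, 1.0) for _ in range(n)]

# Observed test statistic T = sqrt(n) * sample mean.
t = math.sqrt(n) * (sum(x) / n)

# The sup over Omega_0 is attained at mu = 0, where T ~ N(0, 1), so the
# p-value is the upper-tail probability Pr(Z >= t), computed via the
# complementary error function: Pr(Z >= t) = 0.5 * erfc(t / sqrt(2)).
p_value = 0.5 * math.erfc(t / math.sqrt(2))
print(f"T = {t:.3f}, p-value = {p_value:.4f}")
```

By construction, $H_0$ is rejected at every significance level $\alpha_0\ge$ p-value and retained at every stricter level, matching the interpretation of the p-value as the smallest $\alpha_0$ at which rejection occurs.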