The meaningful way to turn a likelihood into a probability is to integrate it against a prior probability distribution for $\theta$. Choosing a good prior is more art than science, but in particular the Lebesgue measure on $\mathbb{R}$ is not admissible for this purpose, because it is not a probability distribution. However, if $\theta$ is confined to $[a,b]$, then you can use the uniform distribution on $[a,b]$ as a "naive prior". You could do the same if $\theta$ is confined to a finite set.
When you do this, you still don't get $1$; instead, you get the probability of observing your particular data set under your prior distribution for $\theta$. For example, consider a Bernoulli($p$) data point $x_1=1$ and the naive prior for $p$. The likelihood function is $L(p \mid \{ 1 \})=p$, and the integral of this against the naive prior is $\int_0^1 p \, dp = 1/2$.
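If it helps, here is a minimal numerical sketch of that calculation in Python. The function name and data are my own illustrative choices, not anything from the original setup; it just integrates the Bernoulli likelihood against the uniform prior on $[0,1]$ and checks the $1/2$ by Monte Carlo as well.

```python
import numpy as np
from scipy.integrate import quad

def bernoulli_likelihood(p, data):
    """Likelihood L(p | data) for i.i.d. Bernoulli(p) observations."""
    k = sum(data)                    # number of successes
    n = len(data)
    return p**k * (1 - p)**(n - k)

data = [1]                           # the single observation x_1 = 1

# Integrate the likelihood against the uniform (naive) prior on [0, 1].
marginal, _ = quad(lambda p: bernoulli_likelihood(p, data), 0, 1)
print(marginal)                      # ~0.5, matching the integral of p dp over [0, 1]

# Monte Carlo check: draw p from the prior and average the likelihood.
rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=100_000)
print(bernoulli_likelihood(samples, data).mean())   # also ~0.5
```

The same sketch works for a longer data set or a prior on some other interval $[a,b]$; the point is just that the result is the probability of the data under the prior, not $1$.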