Rather than just providing a mean projection, I'm looking to provide a likely range of projections using output from 9 models.
Each dataset consists of spatial maximum-probability values in [0, 1] for a specific region, comprising about 1000 spatial values associated with a specific intensity event. Ideally, I'd like to construct a bound on the mean difference between the historical and future datasets, thereby quantifying uncertainty using the inter-model ensemble data. That said, I'm also open to constructing a bound on future projections based on the spread of the future data.
I've considered using a 90% confidence interval, but I'm concerned it may not be appropriate, primarily because:
- It assumes normality, and the model data PDFs do not appear to be Gaussian.
- Many values lie close to 0 and 1 (the data are probabilities), and I suspect this bounding strongly affects the measured spread of the data.
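For concreteness, here is a minimal sketch of the calculation I have in mind, using synthetic stand-in data (Beta-distributed, purely illustrative) in place of my actual model output: one mean difference per model, then a normal-theory 90% interval over the 9-member inter-model sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the real data: 9 models x ~1000 spatial probability
# values for historical and future periods, all in [0, 1].
n_models, n_points = 9, 1000
historical = rng.beta(2, 5, size=(n_models, n_points))
future = rng.beta(3, 4, size=(n_models, n_points))

# One spatial-mean difference per model -> 9-member inter-model sample.
mean_diffs = (future - historical).mean(axis=1)

# Normal-theory 90% CI on the ensemble-mean difference (the approach
# I'm questioning): mean +/- z * s / sqrt(n), with z = 1.645.
z90 = 1.645
center = mean_diffs.mean()
half_width = z90 * mean_diffs.std(ddof=1) / np.sqrt(n_models)
print(f"90% CI on mean difference: "
      f"[{center - half_width:.3f}, {center + half_width:.3f}]")
```

My worry is that the normality assumption baked into the last step is doing a lot of work here.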
Are there other ways to build a more comprehensive uncertainty analysis into a confidence interval, particularly for probability values in [0, 1], or for the difference between probability values?
Many thanks.