I would like to know what assumptions and/or properties characterize the Gaussian copula. I am interested in the case of arbitrary dimensionality, and am not so interested in properties specific to the bivariate case.
What do I mean by characterizing assumption/property?
Many univariate distributions have a "characterizing assumption". For example, if we assume our variable is a count of independent events happening in a given time window, and we assume each event occurs at the same constant mean rate, then our variable is Poisson distributed.
Alternatively, we might characterize the distribution by a property we want it to have: if we know the mean and variance of continuous real-valued data, but nothing else, then the normal distribution is the maximum entropy distribution for that information.
Some examples
It could be that the Gaussian copula is "the maximum entropy copula with zero tail dependence" [I have no idea whether that is true].
The bivariate FGM copula has a nice characterization that intuitively amounts to "the closest to independent two variables can be, given a fixed correlation" ("A characterization of Farlie-Gumbel-Morgenstern distributions via Spearman's rho and chi-square divergence", Roger B. Nelsen).
Motivation
I want a conceptual way to understand the extent to which a Gaussian copula fits a given situation, and the degree to which its use is "natural". Right now, the only conceptual litmus test I have for its validity is tail dependence; but it's not the only copula with zero tail dependence, so why should I pick it among the choices? And if/when I am forced to pick it for practical reasons, what have I lost in doing so? [These questions don't need to be answered]
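To make the tail-dependence litmus test concrete, here is a small sketch (my own illustration, not from any reference): sample from a bivariate Gaussian copula and watch the empirical conditional tail probability P(U2 > q | U1 > q) shrink as q approaches 1, even when the correlation is strong. The choices rho = 0.8 and the sample size are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho, n = 0.8, 200_000  # arbitrary illustration parameters

# A Gaussian copula sample: correlated standard normals pushed
# through the normal CDF, giving uniforms with Gaussian dependence.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)

def lambda_hat(q):
    """Empirical estimate of P(U2 > q | U1 > q)."""
    mask = u[:, 0] > q
    return (u[mask, 1] > q).mean()

# For any rho < 1 the true limit as q -> 1 is 0, so these estimates
# should decrease even though mid-range dependence is strong.
for q in (0.9, 0.99, 0.999):
    print(q, lambda_hat(q))
```

This is only a numerical illustration of the zero-tail-dependence property; it does not, of course, settle which copula among those with zero tail dependence is the "natural" choice.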