You are right that as usually presented, the grammar of first-order formulas assumes that each predicate symbol "knows" its own arity and requires the number of actual arguments to the predicate to match that. This cannot be expressed purely with a finite context-free grammar.
But it is even worse than that, because the standard assumption is also that we have an infinity of predicate symbols of each arity (as well as an infinity of variable letters and constant symbols). So even your
<Constant> ::= c0 | c1 | ...
is problematic for the completely formal presentation of context-free grammars you see in formal language theory texts. The underlying alphabet is supposed to be finite!
There are several ways to react to that:
The computer-sciency way: Just don't care. There are an infinity of symbols of each kind, the lexer can distinguish them somehow, and we're free to put additional restrictions on top of the syntax as part of our parser actions.
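As a concrete sketch of that attitude (the token shape pN_k and all the names below are my own inventions for illustration, not any standard), a single regular expression lets the lexer recognize infinitely many predicate tokens, and a parser action enforces the arity afterwards:

import re

# Invented convention: the token "p3_2" names the 2nd predicate symbol
# of arity 3, so one regular expression covers infinitely many symbols.
PREDICATE = re.compile(r"p(\d+)_(\d+)")

def check_arity(token, args):
    """Parser action: reject an application whose argument count
    disagrees with the arity encoded in the predicate token."""
    m = PREDICATE.fullmatch(token)
    if m is None:
        raise SyntaxError("not a predicate token: " + token)
    arity = int(m.group(1))
    if len(args) != arity:
        raise SyntaxError("%s expects %d arguments, got %d"
                          % (token, arity, len(args)))

check_arity("p2_5", ["x", "y"])   # accepted: arity 2, two arguments
# check_arity("p2_5", ["x"])      # would raise SyntaxError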
The Turing-machine way: Insist that the alphabet must be finite. Predicate symbols cannot be single symbols; in order to say something like "the fifth predicate symbol with four arguments" we need to write something like p'''''[iiii], which the grammar must match one character at a time. In that case we can make a context-free grammar that matches the iiii subscript against the number of arguments:
<PrimitiveFormula> ::= p <Primes> [ <PrimitiveMiddle> )
<PrimitiveMiddle>  ::= i ] ( <Term>
                     | i <PrimitiveMiddle> , <Term>
<Primes>           ::= <empty>
                     | <Primes> '
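For instance (a sketch, writing t1 and t2 for strings already derived from <Term>), the second predicate symbol applied to two arguments comes out as:

<PrimitiveFormula> => p <Primes> [ <PrimitiveMiddle> )
                   => p '' [ <PrimitiveMiddle> )
                   => p '' [ i <PrimitiveMiddle> , t2 )
                   => p '' [ i i ] ( t1 , t2 )

Each i in the arity annotation is generated together with exactly one argument, which is what keeps the two counts in sync.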
This is not particularly exciting, though, and works only "by accident" because there's only one count we have to match (and I presciently decided to express the arity in unary notation).
The minimalist way: Still don't care. For each predicate symbol $p$ the grammar will accept both $p(t_1,t_2)$ and $p(t_1,t_2,t_3)$ -- but we shrug and simply decide that these are semantically two different predicates! Our proof system will happily work with that convention, and when the time comes to define structures we just have to say that the meaning of $p$ is a set of finite sequences of elements of the universe, rather than a set of (say) ordered $n$-tuples for one fixed $n$.
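Concretely (the nonterminal names here are my own), the grammar just collects an arbitrary nonempty list of arguments:

<PrimitiveFormula> ::= <Predicate> ( <TermList> )
<TermList>         ::= <Term>
                     | <TermList> , <Term>

Whether two applications of $p$ with different argument counts denote "the same" predicate is then a question for the semantics, not for the grammar.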
The parochial way: Declare that you need to fix a particular logical language before you start writing down grammars. The language will give you a finite number of predicates; let each of them be a symbol of its own and write a grammar production for each of them.
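For instance, for a made-up language with a single binary predicate Less and a single unary predicate Pos, the productions would simply be:

<PrimitiveFormula> ::= Less ( <Term> , <Term> )
                     | Pos ( <Term> )

Since the language is fixed, each predicate's arity is hard-coded into its own production, and both the alphabet and the grammar stay finite.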
It is not common to insist that only closed formulas are well-formed. Indeed, almost every text you come across will define "well-formed formula" as something that can contain free variables. So matching up variable instances to variable binders is not usually viewed as the job of the grammar.