For non-redundant representations like $(x_n, x_{n-1}, \ldots, x_1)$ the recursion is fundamental (although the operator is associative, so you can apply the prefix-sum optimization to parallelize the operation down to $O(\lg n)$ steps).
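To make the $O(\lg n)$ part concrete, here is a rough word-level C sketch of that prefix computation (a Kogge-Stone-style recurrence over (generate, propagate) pairs; the function name and the fixed 32-bit width are my own choices):

    #include <stdint.h>
    #include <stdio.h>

    /* Carries as a parallel prefix over (generate, propagate) pairs.
     * Combining a low block (g, p) with the block above it (G, P) gives
     * (G | (P & g), P & p); that operator is associative, so all 32
     * carries fall out after log2(32) = 5 doubling steps. */
    static uint32_t prefix_add(uint32_t a, uint32_t b) {
        uint32_t g = a & b;   /* bit i generates a carry            */
        uint32_t p = a ^ b;   /* bit i propagates an incoming carry */
        for (int d = 1; d < 32; d <<= 1) {
            g |= p & (g << d);   /* fold in the prefix d bits below */
            p &= p << d;
        }
        uint32_t carries = g << 1;   /* carry into bit i comes from the prefix below it */
        return a ^ b ^ carries;      /* equals a + b (mod 2^32) */
    }

    int main(void) {
        printf("%u\n", prefix_add(1234567u, 89012345u));   /* 90246912 */
        return 0;
    }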
But there are redundant representations where addition can, in some cases, be performed more efficiently. The carry-save adder is used in most hardware multiplier implementations, where you need to sum together a large list of numbers.
The redundant representation just stores every number as two subsums, a row of carry bits c and a row of sum bits x, whose ordinary sum is the value represented:
c3 c2 c1
x3 x2 x1 x0
So, for example, the number "5" can be stored either as:
0 0 0
0 1 0 1
or
0 0 1
0 0 1 1
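In code terms that is nothing more than a pair of machine words; a minimal sketch (the struct and names are my own, assuming 32-bit words):

    #include <assert.h>
    #include <stdint.h>

    /* A carry-save number: the x row and the c row from above. */
    typedef struct { uint32_t sum; uint32_t carry; } csa_t;

    static uint32_t csa_value(csa_t v) { return v.sum + v.carry; }

    int main(void) {
        /* The two encodings of "5" shown above: */
        csa_t a = { 0x5, 0x0 };  /* x row 0101, c row 0 0 0 (worth 0) */
        csa_t b = { 0x3, 0x2 };  /* x row 0011, c row 0 0 1 (worth 2) */
        assert(csa_value(a) == 5);
        assert(csa_value(b) == 5);
        return 0;
    }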
The "addition" operation adds three subsums and produces two subsums:
 0 0 1
+0 0 1 1
+0 0 0 1
---------
 0 1 1 0
 0 0 0 0
So it's not very useful if you are just adding two numbers, but when you need to sum a whole list of numbers, each individual addition is fast (no carry ever has to propagate across the word), and only at the end do you have to convert back to a non-redundant representation with a single ordinary addition.
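Here is a minimal C sketch of that loop, reusing the pair-of-words type from the sketch above (csa, csa_sum and the 32-bit width are again my own choices, not any particular library's API):

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint32_t sum; uint32_t carry; } csa_t;

    /* 3:2 compressor (one carry-save addition): three bit vectors in, two
     * out, with no carry ever rippling across the word.  Per bit position,
     * the sum bit is the parity of the three inputs and the carry bit is
     * their majority, written one position higher. */
    static csa_t csa(uint32_t a, uint32_t b, uint32_t c) {
        csa_t r;
        r.sum   = a ^ b ^ c;
        r.carry = ((a & b) | (b & c) | (a & c)) << 1;
        return r;
    }

    /* Sum a whole list while staying in carry-save form; only the very
     * last step is an ordinary carry-propagating addition. */
    static uint32_t csa_sum(const uint32_t *v, int n) {
        csa_t acc = { 0, 0 };
        for (int i = 0; i < n; i++)
            acc = csa(acc.sum, acc.carry, v[i]);
        return acc.sum + acc.carry;
    }

    int main(void) {
        uint32_t v[] = { 5, 1, 7, 100, 42 };
        printf("%u\n", csa_sum(v, 5));   /* prints 155 */
        return 0;
    }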
There are other redundant representations used in hardware multiplication algorithms, like Booth encoding, where the digits can be positive, zero, or negative.
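As a rough illustration of that last point, here is a sketch of the common radix-4 variant of Booth recoding, which rewrites a 32-bit two's-complement multiplier as 16 base-4 digits in {-2, -1, 0, 1, 2} (the function name and the fixed width are my own choices):

    #include <stdint.h>
    #include <stdio.h>

    /* Radix-4 Booth recoding.  Digit i is read off an overlapping triple
     * of bits of y:
     *     d[i] = y[2i-1] + y[2i] - 2*y[2i+1]     (y[-1] taken as 0)
     * and y == sum over i of d[i] * 4^i.  In a multiplier this halves the
     * number of partial products, each being 0, +/-x or +/-2x, all cheap
     * to form. */
    static void booth_radix4(int32_t y, int d[16]) {
        uint32_t u = (uint32_t)y;
        int prev = 0;                           /* plays the role of y[-1] */
        for (int i = 0; i < 16; i++) {
            int b0 = (u >> (2 * i)) & 1;        /* y[2i]   */
            int b1 = (u >> (2 * i + 1)) & 1;    /* y[2i+1] */
            d[i] = prev + b0 - 2 * b1;
            prev = b1;
        }
    }

    int main(void) {
        int d[16];
        int32_t y = -1234567;
        booth_radix4(y, d);

        long long check = 0;                    /* rebuild sum of d[i] * 4^i */
        for (int i = 15; i >= 0; i--)
            check = 4 * check + d[i];
        printf("%d == %lld\n", y, check);       /* -1234567 == -1234567 */
        return 0;
    }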