When we have a standard minimization problem expressed in the form
\begin{align} \text{minimize} \quad & f_0(x)\\ \text{subject to} \quad & f_i(x)\le 0, \qquad i=1,\dots,m\\ & h_i(x)=0, \qquad i=1,\dots,p \end{align}
then we can account for the constraints by forming a weighted sum of them, which gives the Lagrangian
$$L(x,\lambda,\nu) = f_0(x) + \sum_{i=1}^m\lambda_if_i(x) + \sum_{i=1}^p \nu_ih_i(x) $$
and define the dual function $g$ as $$ g(\lambda,\nu) := \inf_{x}L(x,\lambda,\nu).$$ Of course, when the Lagrangian is unbounded below in $x$, the dual function takes the value $-\infty$.
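To make the structure explicit (this is just a rewriting of the definition above, nothing new): for each fixed $x$, the Lagrangian is an affine function of $(\lambda,\nu)$, since $x$ only enters through fixed coefficients:

$$L(x,\lambda,\nu) \;=\; \underbrace{f_0(x)}_{\text{constant in }(\lambda,\nu)} \;+\; \sum_{i=1}^m \underbrace{f_i(x)}_{\text{coefficient}}\,\lambda_i \;+\; \sum_{i=1}^p \underbrace{h_i(x)}_{\text{coefficient}}\,\nu_i,$$

so $g(\lambda,\nu) = \inf_x L(x,\lambda,\nu)$ is a pointwise infimum over the family of affine functions $\{(\lambda,\nu)\mapsto L(x,\lambda,\nu)\}_{x}$.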
What I don't understand is the claim that, since the dual function is the pointwise infimum of a family of affine functions of $(\lambda,\nu)$, it is concave, even when the original problem is not convex.
I understand that $g$ is the pointwise infimum of affine functions (for each fixed $x$, the Lagrangian is affine in $(\lambda,\nu)$), but I don't see why this implies that $g$ is always concave.
My intuition also suggests that if $g$ is concave, then maximizing it gives a problem whose optimal value equals that of the original minimization problem, perhaps under some additional conditions?
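One direction of this intuition can be checked directly from the definitions above (this is the standard weak-duality bound, not a proof of equality): for any feasible $\tilde{x}$ (so $f_i(\tilde{x})\le 0$, $h_i(\tilde{x})=0$) and any $\lambda \succeq 0$,

$$g(\lambda,\nu) \;=\; \inf_x L(x,\lambda,\nu) \;\le\; L(\tilde{x},\lambda,\nu) \;=\; f_0(\tilde{x}) + \sum_{i=1}^m \lambda_i f_i(\tilde{x}) + \sum_{i=1}^p \nu_i h_i(\tilde{x}) \;\le\; f_0(\tilde{x}),$$

so $\sup_{\lambda\succeq 0,\,\nu}\, g(\lambda,\nu) \le p^\star$, where $p^\star$ denotes the optimal value of the minimization problem. Whether equality holds (strong duality) is exactly the "additional conditions" part of the question.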
Thank you