
Example 2 here presents a linear elasticity problem governed by the PDE

$$ -{\rm div}({\sigma}({\bf u})) = 0 $$

where

$$ {\sigma}({\bf u}) = \lambda\, {\rm div}({\bf u})\,I + \mu\,(\nabla{\bf u} + \nabla{\bf u}^T) $$

is the stress tensor corresponding to the displacement field ${\bf u}$.
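As a concrete illustration of the definition above, here is a minimal sketch that builds $\sigma({\bf u})$ symbolically with SymPy; the displacement field and the values of the Lamé parameters $\lambda, \mu$ are arbitrary choices for the example, not part of the original problem:

```python
# Hypothetical example: assemble sigma(u) = lambda*div(u)*I + mu*(grad u + grad u^T).
import sympy as sp

x, y, z = sp.symbols('x y z')
lam, mu = sp.symbols('lambda mu')  # Lame parameters (symbolic)

# An arbitrary displacement field u = (u1, u2, u3), chosen only for illustration.
u = sp.Matrix([x**2 + y, y*z, x*z])

# Gradient of the vector field: (grad u)_{ij} = d u_i / d x_j, i.e. the Jacobian.
grad_u = u.jacobian([x, y, z])

div_u = grad_u.trace()                         # div(u) = trace of the gradient
sigma = lam*div_u*sp.eye(3) + mu*(grad_u + grad_u.T)  # stress tensor

print(sigma)
```

Note that the result is symmetric by construction, as a stress tensor should be, since $\nabla{\bf u} + \nabla{\bf u}^T$ is symmetric and $\mathrm{div}({\bf u})\,I$ is diagonal.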

Question

In the second equation above, how are $\nabla{\bf u}$ and $\nabla{\bf u}^T$ possible? We know that ${\bf u}$ is not a scalar field, so how can we take the gradient of a vector field with the $\nabla$ operator?

Megidd
  • Looks like the gradient of a vector field is actually possible and it results in a matrix/tensor. As posted here: https://math.stackexchange.com/q/156880/197913 – Megidd May 10 '24 at 07:08
  • I would still appreciate any clarification of the linear elasticity PDE... – Megidd May 10 '24 at 07:10
  • 1
    The more general definition of the gradient is as the transpose of the Jacobian. The Jacobian of the map $(x,y,z)\mapsto (u,v,w)$ is a $3\times 3$ matrix and hence the gradient is the transpose, another $3\times 3$ matrix – whpowell96 May 10 '24 at 22:35
  • @whpowell96 I didn't know that. Interesting. – Megidd May 11 '24 at 05:22
  • 1
    Using the covariant derivative you can take the gradient of a tensor of any order. Be careful, as the convention for the shape of the resulting tensor is perhaps not what you'd expect. The standard convention is $(\nabla u){ij}=\nabla_ju_i$, NOT $(\nabla u){ij}=\nabla_iu_j$. – K.defaoite May 11 '24 at 16:17
  • 1
    Using this formalism we may write the divergence of the stress tensor as (assuming const viscosity) $$\nabla_j\sigma^{ij}=\nabla_j\bigg(\lambda (\nabla_ku^k)g^{ij}+\mu\big((\nabla u)^{ij}+(\nabla u)^{ji}\big)\bigg) \ =\lambda \nabla^i(\nabla_ku^k)+\mu \nabla_j\nabla^ju^i+\mu\nabla^i\nabla_ju^j \ =(\lambda+\mu) \nabla^i(\nabla_ku^k)+\mu \nabla_j\nabla^ju^i$$ Or, in coord free form, $$\nabla\cdot\boldsymbol \sigma=(\lambda+\mu)\nabla(\nabla\cdot\boldsymbol u)+\mu\Delta\boldsymbol u$$ – K.defaoite May 11 '24 at 16:24
  • @K.defaoite The covariant derivative was a new topic for me. Thanks. – Megidd May 12 '24 at 05:41
  • @K.defaoite The covariant derivative is similar to Jacobian. Am I right? – Megidd May 12 '24 at 05:50
  • 1
    @Megidd Yes, but not only does it 1) generalize past vectors, it also 2) generalizes to spaces other than Euclidean space ($\mathbb R^n$). – K.defaoite May 12 '24 at 18:03
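The coordinate-free identity from the comments, $\nabla\cdot\boldsymbol\sigma=(\lambda+\mu)\nabla(\nabla\cdot\boldsymbol u)+\mu\Delta\boldsymbol u$, can be checked symbolically. This sketch verifies it in $\mathbb R^3$ for an arbitrary smooth test field (the field itself is a made-up choice, not from the question):

```python
# Sketch verifying div(sigma) = (lam + mu)*grad(div u) + mu*Laplacian(u)
# for constant Lame coefficients, on an arbitrary smooth field in R^3.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
lam, mu = sp.symbols('lambda mu')

u = sp.Matrix([sp.sin(x)*y, x*z**2, sp.exp(y)*x])  # arbitrary test field

grad_u = u.jacobian(X)
sigma = lam*grad_u.trace()*sp.eye(3) + mu*(grad_u + grad_u.T)

# Left side: (div sigma)_i = sum_j d sigma_ij / d x_j
div_sigma = sp.Matrix([sum(sp.diff(sigma[i, j], X[j]) for j in range(3))
                       for i in range(3)])

# Right side: (lam + mu)*grad(div u) + mu * componentwise Laplacian of u
div_u = grad_u.trace()
rhs = (lam + mu)*sp.Matrix([sp.diff(div_u, v) for v in X]) \
      + mu*sp.Matrix([sum(sp.diff(u[i], v, 2) for v in X) for i in range(3)])

print(sp.simplify(div_sigma - rhs))  # zero vector if the identity holds
```

The identity follows because $\mu\,\partial_j(\partial_i u_j)=\mu\,\partial_i(\nabla\cdot{\bf u})$ when partial derivatives commute, which merges with the $\lambda$ term.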

1 Answer


Gradient & Jacobian

As commented by @whpowell96:

The more general definition of the gradient is as the transpose of the Jacobian. The Jacobian of the map $(x,y,z)\mapsto (u,v,w)$ is a $3\times 3$ matrix and hence the gradient is the transpose, another $3\times 3$ matrix

It is also described explicitly here:

$$ \mathbf{J} = \begin{bmatrix} \dfrac{\partial \mathbf{f}}{\partial x_1} & \cdots & \dfrac{\partial \mathbf{f}}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla^{\mathrm{T}} f_1 \\ \vdots \\ \nabla^{\mathrm{T}} f_m \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix} $$

where $\nabla ^{\mathrm {T} }f_{i}$ is the transpose (row vector) of the gradient of the $i$-th component.
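This relationship between Jacobian rows and component gradients can be seen directly in code. A minimal sketch, using a made-up map $(x,y,z)\mapsto(u,v,w)$ chosen only for illustration:

```python
# Sketch: the Jacobian of a map R^3 -> R^3, and the gradient as its transpose.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Matrix([x*y, y + z**2, sp.cos(x)])  # hypothetical map (u, v, w)

J = f.jacobian([x, y, z])   # J[i, j] = d f_i / d x_j; row i is grad^T of f_i
grad_f = J.T                # gradient convention: transpose of the Jacobian

print(J)
```

Row $i$ of `J` is exactly $\nabla^{\mathrm{T}} f_i$, matching the block matrix above; transposing gives the $3\times 3$ gradient mentioned in the comment.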

Now I understand both the gradient and Jacobian :)

Understanding the equation

This post helped me make sense of the PDE: https://physics.stackexchange.com/q/101737/115714

Of course, I forgot about this: https://en.wikipedia.org/wiki/Linear_elasticity

Megidd