
A quadric in $3D$ can be expressed as

$ r^T Q r = 0 $

where $ r = [x, y, z, 1] $ , and $Q $ is a symmetric $4 \times 4 $ matrix.
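For concreteness, here is a minimal numerical illustration of this representation, using the unit sphere as the quadric (not one of the quadrics in question):

```python
import numpy as np

# The unit sphere x^2 + y^2 + z^2 - 1 = 0 written as r^T Q r = 0
# with r = [x, y, z, 1] in homogeneous coordinates.
Q = np.diag([1.0, 1.0, 1.0, -1.0])

r = np.array([1.0, 0.0, 0.0, 1.0])   # the point (1, 0, 0) lies on the sphere
print(r @ Q @ r)                     # 0.0
```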

Suppose I have three quadrics and want to find their intersection points $(x, y, z)$. For that I'd like to use the method of pencils, which is demonstrated in the two-dimensional case here.

I have a 2D pencil of quadrics in 3D. I represent them algebraically using $4 \times 4$ symmetric matrices:

$$ Q(\alpha, \beta) = \alpha Q_1 + \beta Q_2 + Q_3 $$

In a 1D pencil, it's pretty straightforward to find a rank-deficient quadric (e.g. a cone of some sort) that is ruled and lends itself to parameterizing the intersection. You can find such quadrics in the pencil using the generalized eigenvalue problem. In 3D, this generates a quartic polynomial:

$$ \det(A - \lambda B) = 0 $$
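Numerically, this step might look like the following sketch. Here $A$ and $B$ are stand-in symmetric matrices (not the actual quadrics); the quartic is recovered by sampling the determinant at five points:

```python
import numpy as np

# Find values of lam for which det(A - lam*B) = 0, i.e. the singular
# (rank-deficient) members of the 1D pencil A - lam*B.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); A = M + M.T           # arbitrary symmetric A
N = rng.standard_normal((4, 4)); B = N @ N.T + np.eye(4)  # positive definite B,
                                                          # so the roots are real

# det(A - lam*B) is a polynomial of degree <= 4 in lam: sample it at
# 5 points, fit the quartic exactly, then take its roots.
ts = np.arange(5.0)
vals = [np.linalg.det(A - t * B) for t in ts]
coeffs = np.polyfit(ts, vals, 4)
rr = np.roots(coeffs)
real_roots = rr[np.abs(rr.imag) < 1e-8].real

# each such lam gives a singular quadric in the pencil
for lam in real_roots:
    print(lam, np.linalg.det(A - lam * B))
```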

What I know of my 2D space of quadrics is that the three quadrics intersect in as many as six points. So the pencil consists of all quadrics that share these six points, and my goal is to find these six points of intersection.

If I can find rank 2 quadrics (two zero eigenvalues, two with opposite sign), then I have a pair of planes. These are really easy to intersect with one another or other quadrics in order to discover the common points.

Is there any way to algebraically discover plane pair quadrics in such a pencil? It seems like there must be such quadrics. There are six points in common to the space of quadrics, and three points define a plane. So that is exactly two planes.

There are more details: my three quadrics are already rank-deficient hyperbolic cylinders aligned with the three axes of the coordinate system. How would one go about solving for a quadric with two zero eigenvalues, or otherwise discovering a plane pair within this 2D space of quadrics?

Thanks in advance.

Gus L.

2 Answers


As I've worked on the problem, the answer appeared.

For the 2D pencil: $$ Q(\alpha, \beta) = \alpha Q_1 + \beta Q_2 + Q_3 $$

In order to find plane pairs (matrices with two zero eigenvalues), simply compute the eigenvalues of $Q(\alpha, \beta)$ symbolically:

$$ \det(Q(\alpha, \beta) - \lambda I) = 0 $$

This generates a quartic polynomial in $\lambda$. For two of the eigenvalues to be zero, the two lowest-order terms in $\lambda$ must have zero coefficients. This yields a pair of polynomial equations in $\alpha$ and $\beta$, of degree 4 in the monomials. The problem is now reduced to solving these two simultaneous polynomial equations. There are something like 16 solutions, which include all $10 = \tfrac12\tbinom{6}{3}$ possible real-point plane pairs as well as 6 imaginary plane pairs with two real intersection points.
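As a sketch of this construction, here is one way to extract the two coefficient conditions symbolically. The three diagonal quadrics below are made-up illustrations (rank-deficient hyperbolic cylinders, but not the asker's actual ones), and the solution $(\alpha, \beta) = (1, -2)$ is specific to them:

```python
import sympy as sp

a, b, lam = sp.symbols('alpha beta lambda')

# Illustrative rank-deficient quadrics: x^2 - y^2 = 1, y^2 - z^2 = 1,
# z^2 - x^2 = 1, written as symmetric 4x4 matrices for r = [x, y, z, 1].
Q1 = sp.diag(1, -1, 0, -1)
Q2 = sp.diag(0, 1, -1, -1)
Q3 = sp.diag(-1, 0, 1, -1)

Q = a * Q1 + b * Q2 + Q3
p = Q.charpoly(lam)              # quartic characteristic polynomial in lam
c0 = p.all_coeffs()[-1]          # coefficient of lam^0 (= det Q)
c1 = p.all_coeffs()[-2]          # coefficient of lam^1

# Two zero eigenvalues  <=>  c0 = 0 and c1 = 0 simultaneously.
# For these particular matrices, (alpha, beta) = (1, -2) is one root:
sol = {a: 1, b: -2}
print(c0.subs(sol), c1.subs(sol))          # 0 0
Qs = Q.subs(sol)
print(Qs.rank(), sorted(Qs.eigenvals()))   # 2 [-3, 0, 3] -> real plane pair
```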

To find real planes, the non-zero eigenvalues must have opposite sign. Complex plane-pair quadrics have non-zero eigenvalues of the same sign, like $\mathrm{diag}(1,1,0,0)$, whose real locus is just the $z$-axis.
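A quick illustration of this sign test on the two canonical rank-2 examples:

```python
import numpy as np

# A rank-2 quadric u^T diag(d1, d2, 0, 0) u = 0 reads d1*x^2 + d2*y^2 = 0.
# Opposite signs factor over the reals; equal signs do not.
real_pair    = np.diag([1.0, -1.0, 0.0, 0.0])  # x^2 - y^2 = (x-y)(x+y): planes x = +/- y
complex_pair = np.diag([1.0,  1.0, 0.0, 0.0])  # x^2 + y^2 = 0: real locus is the z-axis

for Q in (real_pair, complex_pair):
    nz = np.linalg.eigvalsh(Q)
    nz = nz[np.abs(nz) > 1e-12]                # the two non-zero eigenvalues
    print("real plane pair" if nz[0] * nz[1] < 0 else "complex plane pair")
```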

The solution to this polynomial system may be simpler if the quadrics are already rank deficient (e.g. cones), so an initial discovery of cone quadrics in the pencil via a simple eigenvalue decomposition could simplify this larger polynomial-system problem.

Servaes
Gus L.
  • I don't think you want to make $\alpha Q_1 +\beta Q_2 + Q_3 $ have a double zero eigenvalue. What you should be considering is two matrices $G= Q_1 + \alpha Q_2 $ and $H = Q_1 + \beta Q_3 $. This way when you get the intersection as a line and intersect with $p^T Q_1 p = 0$, you guarantee that $p^T Q_2 p = p^T Q_3 p = 0 $ –  Nov 18 '21 at 01:03
  • I agree this is a good approach. I did this in 2D and it produces a 4th order single variable polynomial which gives all four intersection solutions. If instead I solve the eigenvalue problem to find line pair quadrics in 2D, I only have to solve a cubic (the eigenvalue problem) and then two quadratics (the line-conic intersection problem). I was wondering if this extended to higher dimensions (as in finding plane pairs). In 3D line intersection creates a 6th order polynomial. Could the problem be lower order by finding planes (as in 2D)? Seems not. – Gus L. Nov 18 '21 at 14:02
  • What really interests me is if there is a way of skipping the 16th order polynomial which encodes ALL the planes and find an intermediate solution that encodes even just 1 plane pair quadric in 3D. Once I have the plane pair, I can intersect the two planes with one of the quadrics creating conics that can reduce the complexity of the problem from 6th order to something like quadratic. – Gus L. Nov 18 '21 at 18:09

We want to find the intersection of the three quadrics:

$ r^T Q_1 r = 0 ,\hspace{5pt} r^T Q_2 r = 0 ,\hspace{5pt} r^T Q_3 r = 0 $

where $ r = [x, y, z, 1 ]^T $ and $Q_1, Q_2, Q_3$ are symmetric $4 \times 4 $ matrices.

Consider the matrix $Q_\alpha = Q_1 + \alpha Q_2 $

Solutions of $r^T Q_1 r = 0 ,\ r^T Q_2 r = 0 $ are obviously solutions of $r^T Q_\alpha r = 0 $.

Now by choosing $\alpha$ such that $| Q_\alpha | = 0 $ we can simplify finding solution candidates. The determinant $|Q_\alpha|$ is a quartic polynomial in $\alpha$, and we can find its roots numerically or through a closed-form formula. Let $\alpha_1$ be one of these roots, and let

$Q^* = Q_1 + \alpha_1 Q_2 $

Then $Q^*$ is rank deficient and, being symmetric, can be diagonalized as

$Q^* = R D R^T $

where

$ D = \begin{bmatrix} D_{11} & 0 & 0 & 0 \\ 0 & D_{22} & 0 & 0 \\ 0 & 0 & D_{33} & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} $
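These two steps (a root of the determinant quartic, then diagonalization with the zero eigenvalue sorted last) might be sketched as follows. $Q_1$, $Q_2$ here are stand-in symmetric matrices, not the quadrics from the question:

```python
import numpy as np

# Stand-in symmetric quadric matrices; Q2 is taken positive definite
# so that the quartic det(Q1 + a*Q2) has real roots.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)); Q1 = M + M.T
N = rng.standard_normal((4, 4)); Q2 = N @ N.T + np.eye(4)

# det(Q1 + a*Q2) is a quartic in a: recover it from 5 samples, take a real root.
ts = np.arange(5.0)
coeffs = np.polyfit(ts, [np.linalg.det(Q1 + t * Q2) for t in ts], 4)
rr = np.roots(coeffs)
alpha1 = rr[np.abs(rr.imag) < 1e-8].real[0]

Qstar = Q1 + alpha1 * Q2
w, R = np.linalg.eigh(Qstar)       # Qstar = R @ diag(w) @ R.T
order = np.argsort(-np.abs(w))     # put the (near-)zero eigenvalue last
w, R = w[order], R[:, order]
print(np.round(w, 6))              # last entry ~ 0, as in D above
```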

Now we want to solve

$ r^T R D R^T r = 0 $

Define $u = R^T r $, equivalently, $ r = R u $, then

$ u^T D u = 0 $

Depending on $D_{11} , D_{22}, D_{33} $ we have several cases to consider. I will discuss two important cases:

  • Case I: $D_{11} , D_{22} \gt 0 , D_{33} \lt 0 $

  • Case II: $D_{11} \gt 0 , D_{22} \lt 0 , D_{33} = 0 $

For Case I, the candidate solution for $u$ is

$ u = \begin{bmatrix} \dfrac{1}{\sqrt{D_{11}}} t \cos \theta \\ \dfrac{1}{\sqrt{D_{22}}} t \sin \theta \\ \dfrac{t}{\sqrt{-D_{33} }} \\ r \end{bmatrix} $

where $ t, \theta, r $ are real parameters (this scalar $r$ is a parameter, not to be confused with the vector $r$).

For Case II, the candidate solution for $u$ is

$ u = \begin{bmatrix} \dfrac{t}{\sqrt{D_{11}}} \\ \dfrac{\pm t }{\sqrt{-D_{22}}} \\ s \\ r \end{bmatrix} $

where $t, s, r $ are real parameters.
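Both parameterizations satisfy $u^T D u = 0$ identically, which a quick numerical check confirms (the diagonal entries and parameter values below are arbitrary illustrations; `rp` stands for the scalar parameter $r$):

```python
import numpy as np

t, theta, s, rp = 1.7, 0.9, 0.3, -0.4   # arbitrary parameter values

# Case I: D11, D22 > 0 > D33, fourth diagonal entry zero.
D11, D22, D33 = 2.0, 3.0, -5.0
D_I = np.diag([D11, D22, D33, 0.0])
u_I = np.array([t * np.cos(theta) / np.sqrt(D11),
                t * np.sin(theta) / np.sqrt(D22),
                t / np.sqrt(-D33),
                rp])
print(u_I @ D_I @ u_I)     # ~ 0

# Case II: D11 > 0 > D22, D33 = 0 (note the -D22 under the square root).
D11, D22 = 2.0, -3.0
D_II = np.diag([D11, D22, 0.0, 0.0])
u_II = np.array([t / np.sqrt(D11),
                 t / np.sqrt(-D22),    # the "+" branch of the +/- sign
                 s,
                 rp])
print(u_II @ D_II @ u_II)  # ~ 0
```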

There are three conditions on the parameters that will determine their values:

  1. The fourth entry of $r$ is $1$. This translates into

$$ [0, 0, 0, 1] R u = 1 $$

  2. $r^T Q_1 r = 0 $. This translates into

$$ u^T R^T Q_1 R u = 0 $$

  3. $r^T Q_3 r = 0 $. This translates into

$$ u^T R^T Q_3 R u = 0 $$

There are a lot of details in these three conditions.

Let's take Case I first. Applying the first condition to $u$, we get the equation

$ t ( a \cos \theta + b \sin \theta + c ) + d r = 1 \hspace{15pt} (I.1)$

where $a, b, c, d$ are constants that depend on the entries of the fourth row of matrix $R$ and on $D_{11} , D_{22}, D_{33} $ in a straightforward way.

The second of these conditions will result in the following equation

$ t^2 f_2(\theta) + tr f_1(\theta) + r^2 f_0 = 0 \hspace{15pt} (I.2)$

where

$f_2(\theta) = C_1 \cos \theta + C_2 \sin \theta + C_3 \cos 2 \theta + C_4 \sin 2 \theta + C_5 $

$f_1(\theta) = C_6 \cos \theta + C_7 \sin \theta + C_8 $

$f_0 = C_9 $

where $C_i ,\ i = 1, \dots, 9 $ are constants that depend on the entries of the matrix $ R^T Q_1 R $ and the parameterization given above for $u$.

An equation with an identical structure to the above can be derived when applying the third condition, and we end up with

$ t^2 g_2(\theta) + t r g_1(\theta) + r^2 g_0 = 0 \hspace{15pt}(I.3)$

And we want to find $t, \theta, r$ that satisfy $(I.1), (I.2)$ and $(I.3) $.

Substituting from $(I.1)$ we can write $r $ in terms of $t$, and then substitute that into $(I.2) $ and $(I.3)$, and end up with the equations

$ t^2 F_2 (\theta) + t F_1(\theta) + F_0 = 0 \hspace{15pt}(I. 4) $

and

$ t^2 G_2 ( \theta) + t G_1 ( \theta ) + G_0 = 0 \hspace{15pt}(I.5)$

If $t$ satisfies both equations $(I.4)$ and $(I.5)$ then $\theta$ must satisfy the following condition,

$ (F_2 G_0 - F_0 G_2)^2 + (F_0 G_1 - G_0 F_1) (F_2 G_1 - F_1 G_2) = 0 \hspace{15pt}(I.6)$

To solve for $\theta$ using $(I.6)$, one can either transform it into a polynomial of degree $8$ in the variable $z = \tan \dfrac{\theta}{2} $ and find the real roots using one of the standard numerical methods for finding polynomial roots, or one could (and this is what I did) scan the interval $[0, 2\pi) $ and use simple bracketing and the bisection method (for example) to find all the possible real roots. Once the roots $\theta$ are found, one can solve for the corresponding $t $ and $r$, thus determining the vector $u$ and consequently the vector $r$.
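The scan-and-bisect approach might look like the following sketch. The $F_i$, $G_i$ below are made-up stand-in functions; in the real problem they come from the entries of $R^T Q_1 R$ and $R^T Q_3 R$:

```python
import numpy as np

# Stand-in coefficient functions (illustrative only).
def F2(th): return np.cos(th) + 0.3
def F1(th): return np.sin(th) - 0.2
F0 = 0.7
def G2(th): return np.sin(2 * th) - 0.1
def G1(th): return np.cos(th) + 0.5
G0 = -0.4

def h(th):
    # Compatibility condition (I.6) for the two quadratics in t.
    return ((F2(th) * G0 - F0 * G2(th)) ** 2
            + (F0 * G1(th) - G0 * F1(th)) * (F2(th) * G1(th) - F1(th) * G2(th)))

# Scan [0, 2*pi) for sign changes, then bisect each bracket.
roots = []
grid = np.linspace(0.0, 2 * np.pi, 721)
for a, b in zip(grid[:-1], grid[1:]):
    if h(a) * h(b) < 0:
        lo, hi = a, b
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if h(lo) * h(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))

print([round(th, 6) for th in roots])
```

Tangential roots (where $h$ touches zero without changing sign) would be missed by a pure sign-change scan; the degree-8 polynomial route finds those as well.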

That's the summary of the method for Case I.

Case II is much simpler, as the first condition leads to the equation of a plane in the variables $t, s, r$, which can be parameterized using only two variables (because it is a two-dimensional set). Applying the second and third conditions leads to equations of conics in those two variables. So the problem reduces to finding the intersection of two conics. The solution method can be found in the answer to this question.