
1. Introduction
Motivation
Figuring out where several 3D lines meet is really useful in fields like 3D Reconstruction or Augmented Reality. For instance, it helps us triangulate a 3D point from its multi-view 2D image detections or find the main 3D center of attention of a camera rotating around an object.
Even though each line may contain some noise, having many of them helps reduce the variability, much like when we average several samples.
Two rays VS multiple rays
In a previous article called Intersect 3D Rays (Closest Point), I explained how to intersect a pair of 3D rays. But what if we want to intersect n rays? As we’ll see in this article, moving from 2 to n rays isn’t just a generalization: it’s fundamentally different.

The simple fact of asking a question implies that the question is worth asking. Thus, seeking the intersection of a group of 3D rays inherently assumes that a common point of convergence exists, even if there’s some noise.
Quick reminder: Mathematically, a ray is defined by its origin o and direction d. Since it’s a 1D space, any point along the ray is parametrized by its positive distance (or time) t with respect to the origin.
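In other words, the points of the ray are exactly those of the form
\[
o + t\,d, \qquad t \ge 0 .
\]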
In the case of two rays, we solve for the parametrizations t1 and t2 of the two closest points along each ray, then determine from this whether the rays intersect or not, i.e. whether t1>0 and t2>0. Since the optimal t1 and t2 can be computed independently, this method also allows early stopping, e.g. skipping the computation of t2 if t1<0.
In the case of n rays, we can assume that the rays are intersecting and directly solve for the 3D intersection point, instead of solving for the n optimal parametrizations along each ray. Hence, it’s more of a minimization problem than an actual intersection test.
Ray or Line?
Since we’re not checking the parametrization along each ray, it’s more accurate to talk about lines instead of rays; hence the title of the article. A line is just a ray without the constraint on t.

2. Analytical Solution
Useful vector trick
Before diving into the main topic, I’d like to quickly discuss a vector trick that will be very useful to us later on.
Scaling a vector by a dot product can also be seen as a matrix-vector product. Terms can indeed be re-arranged thanks to the associative property of the product operator.
This will help us factorize an equation by highlighting its linearity in the variable c.
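Written out with two extra generic vectors a and b (names chosen here for illustration), the re-arrangement reads
\[
a\,(b \cdot c) \;=\; a\,(b^\top c) \;=\; (a\, b^\top)\, c ,
\]
where a bᵀ is a 3x3 outer-product matrix and the right-hand side is explicitly linear in c.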
Minimization Problem
The intersection point can be defined as the 3D point x that minimizes the sum of squared distances with respect to each input ray.
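Indexing the n lines by i, with origin o_i and unit direction d_i (indices introduced here for clarity), this means looking for
\[
x^\star \;=\; \arg\min_{x} \; \sum_{i=1}^{n} \operatorname{dist}(x, \text{line}_i)^2 .
\]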
Orthogonal Projection of a point onto a line
The vector going from the ray origin o to the point x can be expressed as the sum of two orthogonal vectors. As illustrated in the diagram below, the first one goes from o to p along the line, whereas the second one goes from p to x.
Once we know the coordinates of this point p, i.e. the projection of x, we can deduce the distance from the point x to the line by computing the norm of the vector x-p.

Let’s compute p.
As illustrated in the system of equations below, the point p is defined by two key properties: it belongs to the line (with unknown ray parameter tp) and is such that x-p is orthogonal to the line.
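In equations:
\[
\begin{cases}
p = o + t_p\, d \\
(x - p) \cdot d = 0
\end{cases}
\]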
Introducing the ray parametrization into the second equation yields the value of tp, which is the dot product between x-o and d.
N.B. We assume that the ray direction d has been normalized.
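Substituting the first equation into the second and using d · d = 1 then gives
\[
(x - o - t_p\, d) \cdot d = 0
\quad\Longrightarrow\quad
t_p = (x - o) \cdot d .
\]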
We can now put it back into the ray parametrization to retrieve the point p.
The vector trick mentioned previously is helpful here to emphasize the linearity with respect to x-o, which will help the factorization.
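Applying the trick with a = d, b = d and c = x - o:
\[
p \;=\; o + \big((x - o) \cdot d\big)\, d \;=\; o + (d\, d^\top)(x - o) .
\]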
Point-Line Distance
As explained previously, the point-line distance can now be computed from the norm of x-p. Note the nice factorization in x-o.
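Concretely, with I the 3x3 identity matrix:
\[
x - p \;=\; (x - o) - (d\, d^\top)(x - o) \;=\; (I - d\, d^\top)(x - o),
\qquad
\operatorname{dist}(x, \text{line})^2 \;=\; \big\| (I - d\, d^\top)(x - o) \big\|^2 .
\]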
Loss Function
The point-line distance formula gives us the loss function L we seek to minimize. The factorization in x-o makes it straightforward to derive its gradient with respect to x.
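Summing over the n lines, and using the fact that each I - d_i d_iᵀ is a symmetric, idempotent projection matrix (so MᵀM = M for M = I - d_i d_iᵀ), we get
\[
L(x) \;=\; \sum_{i=1}^{n} \big\| (I - d_i d_i^\top)(x - o_i) \big\|^2,
\qquad
\nabla L(x) \;=\; 2 \sum_{i=1}^{n} (I - d_i d_i^\top)(x - o_i) .
\]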
This loss function is convex, so its minimum is reached at the point where its gradient equals zero. We end up with a linear 3x3 system of equations Ax=b, where the matrix A and vector b are defined as follows:
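\[
A \;=\; \sum_{i=1}^{n} \big(I - d_i d_i^\top\big),
\qquad
b \;=\; \sum_{i=1}^{n} \big(I - d_i d_i^\top\big)\, o_i ,
\]
which follows directly from setting the gradient above to zero and moving the o_i terms to the right-hand side.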
The 3D point closest to all the lines is thus obtained by inverting A.
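To see the whole pipeline in one place, here is a minimal NumPy sketch of the approach (the function name intersect_lines and the example values are illustrative choices, not taken from any particular library):

```python
import numpy as np

def intersect_lines(origins, directions):
    """Least-squares intersection point of n 3D lines.

    origins:    (n, 3) array of line origins o_i
    directions: (n, 3) array of line directions d_i (normalized internally)
    Returns the 3D point x minimizing the sum of squared point-line distances.
    """
    o = np.asarray(origins, dtype=float)
    d = np.asarray(directions, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)  # ensure unit directions

    # Projection matrices I - d_i d_i^T, stacked into an (n, 3, 3) array
    projs = np.eye(3)[None, :, :] - d[:, :, None] * d[:, None, :]

    # A = sum_i (I - d_i d_i^T),  b = sum_i (I - d_i d_i^T) o_i
    A = projs.sum(axis=0)
    b = np.einsum('nij,nj->i', projs, o)

    # Solve the 3x3 linear system A x = b (A is invertible unless all lines are parallel)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    # Example: three slightly noisy lines roughly passing through (1, 2, 3)
    target = np.array([1.0, 2.0, 3.0])
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(3, 3))
    origs = target - 5.0 * dirs + 0.01 * rng.normal(size=(3, 3))
    print(intersect_lines(origs, dirs))  # ~ [1, 2, 3]
```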
Conclusion
Least-Squares minimization can often be reduced to a plain normal equation, where the pseudo-inverse matrix is used to aggregate the n observations into a square invertible matrix. It’s worth mentioning that we don’t require the pseudo-inverse here, as the square shape naturally arises from the sum of outer products of the ray directions.
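For reference, for a generic overdetermined linear system M x ≈ y (notation introduced here, not used above), the normal-equation recipe is
\[
M^\top M\, x = M^\top y
\quad\Longrightarrow\quad
x = (M^\top M)^{-1} M^\top y ,
\]
whereas here A is already a square 3x3 matrix that can be inverted directly.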
I hope you enjoyed reading this article and that it gave you more insights on how to intersect multiple 3D lines!