PyTorch¶
Full autograd support with SafeSVD and SafeEigh for gradient-stable backward passes.
pytorch ¶
kabsch ¶
kabsch(
P: Tensor, Q: Tensor, weights: Tensor | None = None
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]
Computes the optimal rotation and translation to align P to Q using SafeSVD.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| P | Tensor | Source points, shape [..., N, D]. | required |
| Q | Tensor | Target points, shape [..., N, D]. | required |
| weights | Tensor \| None | Per-point weights, shape [..., N]. Non-negative, must sum to > 0. When None, all points are weighted equally. | None |
Returns:

| Type | Description |
|---|---|
| (R, t, rmsd) | Rotation [..., D, D], translation [..., D], RMSD [...]. |
Note
R is only stable under global translation when the cross-covariance matrix H = P_c.T @ Q_c is well-conditioned. When the smallest singular value of H is near zero, U and V from the SVD are not unique, and a small perturbation can select a different rotation. Check the singular values of H if rotation stability matters for your use case.
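For reference, the steps above can be sketched in plain PyTorch. This is a minimal, unweighted, hypothetical re-implementation (the name `kabsch_sketch` is not part of the library) that uses `torch.linalg.svd` directly, so it lacks the SafeSVD gradient stabilization the real function provides:

```python
import torch

def kabsch_sketch(P: torch.Tensor, Q: torch.Tensor):
    """Minimal unweighted Kabsch: find R, t with Q ~ R @ p + t."""
    # Center both point clouds.
    p_mean = P.mean(dim=-2, keepdim=True)
    q_mean = Q.mean(dim=-2, keepdim=True)
    P_c, Q_c = P - p_mean, Q - q_mean

    # Cross-covariance H = P_c^T @ Q_c and its SVD.
    H = P_c.transpose(-1, -2) @ Q_c          # [..., D, D]
    U, S, Vh = torch.linalg.svd(H)

    # Reflection correction: flip the smallest singular direction
    # when det(V @ U^T) < 0, so that det(R) = +1.
    d = torch.sign(torch.det(Vh.transpose(-1, -2) @ U.transpose(-1, -2)))
    D = torch.diag_embed(torch.ones_like(S))
    D[..., -1, -1] = d
    R = Vh.transpose(-1, -2) @ D @ U.transpose(-1, -2)

    # Translation maps the rotated source centroid onto the target centroid.
    t = q_mean.squeeze(-2) - (R @ p_mean.transpose(-1, -2)).squeeze(-1)

    # RMSD of the aligned residuals.
    rmsd = ((P_c @ R.transpose(-1, -2) - Q_c) ** 2).sum(dim=(-1, -2)).div(P.shape[-2]).sqrt()
    return R, t, rmsd
```

The sketch follows the convention implied by the return shapes: R is applied on the left (q ~ R @ p + t), and singular values from `torch.linalg.svd` arrive in descending order, so the reflection fix targets the last (smallest) one.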
kabsch_umeyama ¶
kabsch_umeyama(
P: Tensor, Q: Tensor, weights: Tensor | None = None
) -> tuple[
torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor
]
Computes optimal rotation, translation, and scale (Q ~ c * R @ P + t).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| P | Tensor | Source points, shape [..., N, D]. | required |
| Q | Tensor | Target points, shape [..., N, D]. | required |
| weights | Tensor \| None | Per-point weights, shape [..., N]. Non-negative, must sum to > 0. When None, all points are weighted equally. | None |
Returns:

| Type | Description |
|---|---|
| (R, t, c, rmsd) | Rotation [..., D, D], translation [..., D], scale [...], RMSD [...]. |
Note
Unlike kabsch, the cross-covariance H is divided by N here. This per-point normalization is required by the Umeyama scale estimator (c = trace(S * D) / var_P) and does not affect the rotation or translation.
R is only stable under global translation and uniform scaling when the cross-covariance matrix H = P_c.T @ Q_c is well-conditioned. When the smallest singular value of H is near zero, U and V from the SVD are not unique, and a small perturbation can select a different rotation. Check the singular values of H if rotation stability matters for your use case.
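The per-point normalization and the scale estimator described above can be sketched as follows. As with the previous sketch, this is a hypothetical unweighted reference (names are not from the library) using plain `torch.linalg.svd` instead of SafeSVD:

```python
import torch

def kabsch_umeyama_sketch(P: torch.Tensor, Q: torch.Tensor):
    """Minimal unweighted Kabsch-Umeyama: find R, t, c with Q ~ c * R @ p + t."""
    N = P.shape[-2]
    p_mean = P.mean(dim=-2, keepdim=True)
    q_mean = Q.mean(dim=-2, keepdim=True)
    P_c, Q_c = P - p_mean, Q - q_mean

    # Per-point-normalized cross-covariance, as the Note describes.
    H = P_c.transpose(-1, -2) @ Q_c / N
    U, S, Vh = torch.linalg.svd(H)

    # Reflection correction, identical to plain Kabsch.
    d = torch.sign(torch.det(Vh.transpose(-1, -2) @ U.transpose(-1, -2)))
    Dvec = torch.ones_like(S)
    Dvec[..., -1] = d
    R = Vh.transpose(-1, -2) @ torch.diag_embed(Dvec) @ U.transpose(-1, -2)

    # Umeyama scale: c = trace(S * D) / var_P, where var_P is the
    # per-point variance of the centered source points.
    var_P = (P_c ** 2).sum(dim=(-1, -2)) / N
    c = (S * Dvec).sum(dim=-1) / var_P

    t = q_mean.squeeze(-2) - c.unsqueeze(-1) * (R @ p_mean.transpose(-1, -2)).squeeze(-1)
    rmsd = ((c[..., None, None] * (P_c @ R.transpose(-1, -2)) - Q_c) ** 2).sum(dim=(-1, -2)).div(N).sqrt()
    return R, t, c, rmsd
```

Note how dividing H by N cancels out of R (the singular vectors are scale-invariant) but is what puts the singular values on the same footing as var_P in the scale estimate.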
horn ¶
horn(
P: Tensor, Q: Tensor, weights: Tensor | None = None
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]
Computes optimal rotation and translation to align P to Q using Horn's quaternion method.
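Horn's method recovers the rotation as a unit quaternion: it assembles a symmetric 4x4 matrix from the cross-covariance and takes the eigenvector of its largest eigenvalue. A minimal, hypothetical sketch for a single (unbatched) 3D point cloud, using `torch.linalg.eigh` rather than the library's SafeEigh:

```python
import torch

def horn_sketch(P: torch.Tensor, Q: torch.Tensor):
    """Minimal unweighted Horn alignment for single 3D point clouds [N, 3]."""
    p_mean, q_mean = P.mean(dim=0), Q.mean(dim=0)
    P_c, Q_c = P - p_mean, Q - q_mean

    # Cross-covariance S[a, b] = sum_i P_c[i, a] * Q_c[i, b].
    S = P_c.T @ Q_c
    Sxx, Sxy, Sxz = S[0, 0], S[0, 1], S[0, 2]
    Syx, Syy, Syz = S[1, 0], S[1, 1], S[1, 2]
    Szx, Szy, Szz = S[2, 0], S[2, 1], S[2, 2]

    # Horn's symmetric 4x4 matrix; its top eigenvector is the quaternion
    # (w, x, y, z) of the optimal rotation.
    K = torch.stack([
        torch.stack([Sxx + Syy + Szz, Syz - Szy, Szx - Sxz, Sxy - Syx]),
        torch.stack([Syz - Szy, Sxx - Syy - Szz, Sxy + Syx, Szx + Sxz]),
        torch.stack([Szx - Sxz, Sxy + Syx, Syy - Sxx - Szz, Syz + Szy]),
        torch.stack([Sxy - Syx, Szx + Sxz, Syz + Szy, Szz - Sxx - Syy]),
    ])
    _, eigvecs = torch.linalg.eigh(K)   # eigenvalues ascending
    w, x, y, z = eigvecs[:, -1]         # eigenvector of the largest eigenvalue

    # Quaternion -> rotation matrix (sign of the quaternion is irrelevant:
    # every entry is quadratic in its components).
    R = torch.stack([
        torch.stack([1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)]),
        torch.stack([2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)]),
        torch.stack([2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]),
    ])
    t = q_mean - R @ p_mean
    rmsd = ((P_c @ R.T - Q_c) ** 2).sum().div(P.shape[0]).sqrt()
    return R, t, rmsd
```

Unlike the SVD route, the eigendecomposition cannot produce a reflection, so no determinant correction is needed.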
horn_with_scale ¶
horn_with_scale(
P: Tensor, Q: Tensor, weights: Tensor | None = None
) -> tuple[
torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor
]
Computes optimal rotation, translation, and scale using Horn's method.
kabsch_rmsd ¶
kabsch_rmsd(
P: Tensor, Q: Tensor, weights: Tensor | None = None
) -> torch.Tensor
Computes RMSD after Kabsch alignment. Gradient-safe training loss.
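To show what such a loss computes, here is a hypothetical unweighted sketch (the name is not from the library) that never builds R explicitly, using the identity rmsd^2 = (||P_c||^2 + ||Q_c||^2 - 2 * sum_i d_i * sigma_i) / N, where sigma_i are the singular values of H. The real function presumably routes gradients through SafeSVD; this sketch uses plain `torch.linalg.svdvals`:

```python
import torch

def kabsch_rmsd_sketch(P: torch.Tensor, Q: torch.Tensor) -> torch.Tensor:
    """Unweighted RMSD after optimal rigid alignment, via singular values only."""
    N = P.shape[-2]
    P_c = P - P.mean(dim=-2, keepdim=True)
    Q_c = Q - Q.mean(dim=-2, keepdim=True)
    H = P_c.transpose(-1, -2) @ Q_c
    sigma = torch.linalg.svdvals(H)      # descending order
    d = torch.sign(torch.det(H))         # same sign as det(V @ U^T)

    # Flip the smallest singular value when the unconstrained optimum
    # would be a reflection rather than a rotation.
    corr = sigma[..., :-1].sum(-1) + d * sigma[..., -1]
    msd = ((P_c ** 2).sum(dim=(-1, -2))
           + (Q_c ** 2).sum(dim=(-1, -2))
           - 2.0 * corr) / N
    # Clamp tiny negative values caused by floating-point cancellation.
    return msd.clamp_min(0.0).sqrt()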
kabsch_umeyama_rmsd ¶
kabsch_umeyama_rmsd(
P: Tensor, Q: Tensor, weights: Tensor | None = None
) -> torch.Tensor
Computes RMSD after Kabsch-Umeyama alignment. Gradient-safe training loss.