Multi-Dimensional Signal Alignment

Local All-Pass Filter Framework

The estimation of a geometric transformation that aligns two or more signals is a problem with many applications in signal processing. It arises when signals are recorded either from two or more spatially separated sensors or from a single sensor observing a time-varying scene. Examples of fundamental tasks that involve this problem are shown in the figure below. In this project we estimate the transformation between these signals using a novel local all-pass (LAP) filtering framework.

Applications requiring multi-dimensional signal alignment.

The underlying principle in our LAP framework is that, on a local level, the geometric transformation between a pair of signals can be approximated as a rigid deformation, which is equivalent to an all-pass filtering operation. Thus, efficient estimation of the all-pass filter in question allows an accurate estimation of the local geometric transformation between the signals. Accordingly, repeating this estimation for every sample/pixel/voxel in the signals results in a dense estimate of the whole geometric transformation. This processing chain can be performed efficiently and achieves very accurate results. We have applied this framework to image registration [1], [2], [3], motion correction [4], [5] and time-varying delay estimation [6].


Our problem comprises finding a geometric transformation \(\mathcal{T}\) between two signals based on the variation of their sample intensities. We consider a non-rigid transformation characterized by a sample-wise deformation field \(\mathbf{u}\) of the form: $$ \mathcal{T}({\bf x}) = {\bf x} + {\bf u}({\bf x}), $$ where \(\mathbf{x} = [x_1,x_2,\ldots,x_n]^{T}\) is the \(n\)-dimensional sample coordinate and \(\mathbf{u}(\mathbf{x}) = [u_1(\mathbf{x}),u_2(\mathbf{x}),\ldots,u_n(\mathbf{x})]^{T}\) is the vector field representing the deformation. We formulate the estimation of this transformation assuming the brightness consistency hypothesis: a sample's intensity remains constant under the deformation. Thus, given two signals \(I_1(\mathbf{x})\) and \(I_2(\mathbf{x})\), our problem is to find a deformation field that relates these signals as follows: $$ I_2(\mathbf{x}+\mathbf{u}(\mathbf{x})) = I_1(\mathbf{x}).$$
Remark: This problem is both restrictive, as the brightness consistency hypothesis is unlikely to be satisfied exactly, and ill-posed because, for \(n>1\), many deformations may satisfy the equation and most are meaningless. However, in many applications, it is important to determine a meaningful deformation field.
To overcome these challenges, we assume the deformation field is slowly varying such that locally it is equivalent to a rigid deformation and apply our local all-pass filter framework.
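As a concrete illustration of the brightness consistency model, the following 1D sketch (the signals and deformation here are invented purely for illustration) warps a signal by a known slowly varying deformation \(u(x)\) and checks that resampling \(I_2\) at \(x+u(x)\) recovers \(I_1\):

```python
import numpy as np

# Hypothetical 1D illustration of brightness consistency:
# I2(x + u(x)) = I1(x) for a known, slowly varying deformation u.
f = lambda t: np.cos(3 * t)                   # underlying intensity profile

x = np.linspace(0, 2 * np.pi, 400)            # sample coordinates
u = 0.3 * np.sin(x / 2)                       # slowly varying deformation field
I2 = f(x)                                     # fixed signal
I1 = f(x + u)                                 # deformed signal: I2(x+u(x)) = I1(x)

# Resampling I2 at the deformed coordinates recovers I1
err = np.max(np.abs(np.interp(x + u, x, I2) - I1))
print(err)                                    # small: interpolation error only
```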

The central concept in our framework is that a rigid deformation is equivalent to filtering with an all-pass filter. This filter can be estimated efficiently by leveraging the unique frequency structure of an all-pass filter to obtain a linear forward-backward filtering relation, as shown in the figure below. Importantly, as the forward-backward relation is linear in the coefficients of the filter \(p\), it is straightforward and efficient to solve; see [1], [7] for more details.

Forward-Backward Filtering Relationship
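The forward-backward relation can be made concrete with a simplified 1D stand-in (the two-term Gaussian/derivative filter basis below is an illustrative assumption, not the exact implementation of [1]): writing \(p = \varphi_0 + c\,\varphi_1\), with \(\varphi_0\) a Gaussian and \(\varphi_1\) its derivative, minimising \(\|p * I_2 - p(-\cdot) * I_1\|^2\) is linear in \(c\), and the shift encoded by the filter is \(u = 2c\):

```python
import numpy as np

# Simplified 1D stand-in for the forward-backward relation (filter basis
# and signals are illustrative assumptions, not the exact method):
# p = phi0 + c*phi1, with phi0 a Gaussian and phi1 its derivative.  Since
# phi0 is even and phi1 is odd, minimising ||p*I2 - p(-.)*I1||^2 is a
# linear least-squares problem in c, and the shift estimate is u = 2c.

def lap_constant_shift(I1, I2, sigma=2.0, R=8):
    t = np.arange(-R, R + 1)
    phi0 = np.exp(-t**2 / (2 * sigma**2))      # even basis filter
    phi1 = -t / sigma**2 * phi0                # odd basis filter (derivative)
    A = np.convolve(I1 + I2, phi1, mode="same")
    b = np.convolve(I2 - I1, phi0, mode="same")
    c = -(A @ b) / (A @ A)                     # linear least-squares solution
    return 2 * c                               # shift encoded by the filter

x = np.arange(200, dtype=float)
I1 = np.exp(-(x - 100) ** 2 / 50.0)
I2 = np.exp(-(x - 100.8) ** 2 / 50.0)          # I1 delayed by 0.8 samples
u_est = lap_constant_shift(I1, I2)
print(u_est)                                   # close to the true shift 0.8
```

Note that the shift is recovered with sub-sample accuracy even though no interpolation is involved; the estimate comes entirely from the filter coefficients.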

To allow for a non-rigid deformation, we limit this estimation to a small local region, estimate a local all-pass filter and extract a local estimate of the deformation. This local estimate corresponds to the centre of the region. Accordingly, a dense, per-sample deformation estimate is obtained by repeating this process for all the samples in the signal using a sliding-window mechanism, as shown in the figure below. Importantly, as discussed in [1], this can be performed very efficiently.

Illustration of the Local All-Pass Filter Framework
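A minimal 1D sketch of this sliding-window mechanism (the filter basis, window size and signals are illustrative assumptions, not the actual implementation of [1]): the forward-backward relation is solved by least squares over a small window around every sample, yielding a per-sample shift estimate:

```python
import numpy as np

# Illustrative 1D sliding-window sketch (filter basis, window size and
# signals are assumptions): the forward-backward relation with filter
# p = phi0 + c*phi1 is solved by least squares in a window around every
# sample, giving a dense, per-sample shift estimate u(n) = 2*c(n).

def lap_dense_shift(I1, I2, sigma=2.0, R=8, W=15):
    t = np.arange(-R, R + 1)
    phi0 = np.exp(-t**2 / (2 * sigma**2))
    phi1 = -t / sigma**2 * phi0
    A = np.convolve(I1 + I2, phi1, mode="same")
    b = np.convolve(I2 - I1, phi0, mode="same")
    box = np.ones(W)                           # local sums over each window
    num = np.convolve(A * b, box, mode="same")
    den = np.convolve(A * A, box, mode="same") + 1e-12
    return -2 * num / den                      # per-sample shift estimate

x = np.arange(300, dtype=float)
I1 = np.sin(0.2 * x)
I2 = np.sin(0.2 * (x - 0.6))                   # I1 delayed by 0.6 samples
u_est = lap_dense_shift(I1, I2)
print(u_est[150])                              # close to the true delay 0.6
```

Because the windowed sums are themselves convolutions, the whole dense estimate costs only a few filtering passes over the signal.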

We have applied our LAP framework to both non-rigid [1] and parametric [3] image registration. An example of the results obtained from the LAP for a non-rigid registration is shown in the figure below. The input images are shown in (a) and (c), the ground-truth deformation in (b), and the LAP-estimated deformation in (d). Note that each colour represents the direction of the deformation; a colour code is shown in (e).

(a) Image 1
(b) True Deformation
(c) Image 2
(d) LAP Deformation Estimate
(e) Deformation Colour Code
Illustration of the smoothly varying deformation and its estimation using the LAP framework. The first image is shown in (a), the smoothly varying deformation in (b), the second image in (c) and the LAP deformation estimate in (d). The colour coding for the deformation (each colour represents a different direction of the deformation) is in (e). Note that the deformation has a maximum displacement of 15 pixels. [2]

To allow estimation of both slowly and quickly varying deformations, we use an iterative poly-filter LAP framework that starts with a large filter to estimate the deformation, aligns the images, and then repeats the estimation with a smaller filter [1].

Poly-Filter LAP Framework
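The poly-filter iteration can be sketched in 1D as follows (a hedged sketch: the inner estimator is a simple first-order stand-in rather than the actual LAP filter, and all names are invented). Each pass estimates the residual shift at the current scale, warps the moving signal back, and hands the remainder to a smaller filter:

```python
import numpy as np

# Coarse-to-fine sketch (the estimator is a first-order stand-in, not the
# LAP filter): estimate at a coarse scale, warp the moving signal back,
# then refine the estimate at successively finer scales.

def estimate_shift(I1, I2, R):
    # one least-squares shift estimate at scale R (box pre-smoothing)
    box = np.ones(2 * R + 1) / (2 * R + 1)
    s1 = np.convolve(I1, box, mode="same")
    s2 = np.convolve(I2, box, mode="same")
    d = np.gradient(s1)
    return np.full_like(s1, (d @ (s1 - s2)) / (d @ d))

def poly_filter_shift(I1, I2, radii=(16, 8, 4, 2)):
    x = np.arange(len(I1), dtype=float)
    u = np.zeros_like(x)                    # accumulated shift estimate
    for R in radii:                         # large filters first
        I2w = np.interp(x + u, x, I2)       # warp I2 by current estimate
        u = u + estimate_shift(I1, I2w, R)  # refine at a finer scale
    return u

x = np.arange(200, dtype=float)
I1 = np.exp(-(x - 100) ** 2 / 50.0)
I2 = np.exp(-(x - 103) ** 2 / 50.0)         # I1 shifted by 3 samples
u_est = poly_filter_shift(I1, I2)
print(u_est[100])                           # close to the true shift 3
```

The large-filter pass captures most of a displacement that would be too big for the small filter alone; the later passes only clean up a sub-sample residual.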

Parametric Registration
For parametric image registration, we introduce a quadratic parametric model for the deformation and iteratively estimate the parameters of the model [3], [8], [9]. This parametric extension is robust to model mismatch (noise, blurring, etc.), very accurate and capable of handling very large deformations. Furthermore, by modelling intensity variations, the parametric LAP is capable of handling multi-modal registration problems.

(a) Image 1
(b) Image 2
(c) LAP Registration
Illustration of the results of applying the parametric LAP. The first image is shown in (a), the second image in (b) and the registration of image 2 to image 1, using the deformation obtained from the parametric LAP, in (c). [3]
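As a toy illustration of the parametric idea (the 1D quadratic model and numbers here are invented assumptions; the actual method [3], [8] estimates the parameters directly and iteratively), a dense 1D deformation estimate can be projected onto a quadratic model \(u(x) = a_0 + a_1 x + a_2 x^2\) by least squares:

```python
import numpy as np

# Toy 1D sketch (values and model are illustrative assumptions): fit a
# quadratic parametric model u(x) = a0 + a1*x + a2*x**2 to a noisy dense
# deformation estimate by least squares.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 201)
u_dense = 0.2 + 0.5 * x - 0.3 * x**2 + 0.01 * rng.standard_normal(x.size)

a2, a1, a0 = np.polyfit(x, u_dense, 2)     # highest-degree coefficient first
print(a0, a1, a2)                          # close to 0.2, 0.5, -0.3
```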

We have applied our LAP framework to 3D volumetric magnetic resonance imaging (MRI) data to remove artefacts caused by respiratory motion [10]. We first estimate the deformation field between the two images (i.e. determine the respiratory motion) and then remove the motion by registering the moving image to the fixed image. Our approach outperformed existing 3D non-rigid registration algorithms in both accuracy and computation speed, and was incorporated into a joint MRI/PET motion correction system [4]. An example of the results obtained from the 3D LAP for respiratory motion estimation on an in-vivo MRI dataset is shown below.

(a) Original MRI data
(b) LAP deformation
(c) Motion Corrected MRI data
Illustration of the motion correction results obtained using the 3D LAP. Part (a) shows an animation of a 2D coronal slice of the fixed and moving images due to respiratory motion, part (b) shows a 2D slice of the 3D deformation estimated by the LAP, and part (c) shows an animation of a 2D coronal slice of the images after motion correction using the LAP. [10]

To allow estimation of both slowly and quickly varying deformations, we use an iterative poly-filter 3D LAP framework that starts with a large filter to estimate the deformation, aligns the images, and then repeats the estimation with a smaller filter [10].

Poly-Filter 3D LAP Framework

LAP + Deep Learning
More recently, the LAP framework has been combined with a deep learning architecture to allow robust motion correction when faced with highly accelerated, undersampled MRI data [11]. This network has also been combined with a reconstruction network to allow motion-corrected reconstruction of 4D (3D + time) MRI data [12], [13]. The architecture of this LAP deep learning network (LAPNet) is shown below.

Proposed LAPNet architecture to perform non-rigid registration in MRI k-space. [11]

We have applied our LAP framework to 1D signals recorded from spatially separated sensors to estimate a 1D deformation, normally referred to as a time-varying delay [6]. Furthermore, we have extended our framework to allow for the estimation of a deformation that is common to an ensemble of signals, which we term the Common LAP (CLAP). Illustrations of the 1D LAP framework and the 1D CLAP framework are shown below.

(a) 1D LAP Framework
(b) 1D CLAP Framework
Illustration of the LAP framework for a pair of 1D signals in (a) and the CLAP framework for an ensemble of 1D signals in (b). The CLAP framework estimates a set of local all-pass filters that are common to the ensemble of signals.
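The common estimation in the CLAP can be sketched as follows (a simplified 1D constant-delay version with an invented Gaussian/derivative filter basis; the actual method [6] estimates time-varying local filters). Because the forward-backward relation is linear in the filter coefficients, the normal equations from every signal pair in the ensemble can be accumulated into one least-squares solve for the common delay:

```python
import numpy as np

# Simplified CLAP-style sketch (filter basis and ensemble are illustrative
# assumptions): the same linear forward-backward relation is accumulated
# over an ensemble of signal pairs, so one delay common to the whole
# ensemble is found in a single least-squares solve.

def clap_common_shift(pairs, sigma=2.0, R=8):
    t = np.arange(-R, R + 1)
    phi0 = np.exp(-t**2 / (2 * sigma**2))
    phi1 = -t / sigma**2 * phi0
    num = 0.0
    den = 0.0
    for I1, I2 in pairs:                       # stack the normal equations
        A = np.convolve(I1 + I2, phi1, mode="same")
        b = np.convolve(I2 - I1, phi0, mode="same")
        num += A @ b
        den += A @ A
    return -2 * num / den                      # common delay estimate

x = np.arange(300, dtype=float)
u0 = 0.5                                       # delay shared by the ensemble
pairs = [(np.sin(w * x), np.sin(w * (x - u0))) for w in (0.10, 0.15, 0.20)]
u_est = clap_common_shift(pairs)
print(u_est)                                   # close to the common delay 0.5
```

Pooling the ensemble in this way makes the estimate more robust than solving each pair independently and averaging.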
To allow estimation of both slowly and quickly varying deformations, we use an iterative multi-scale CLAP framework that starts with a large filter to estimate the deformation, aligns the signals, and then repeats the estimation with a smaller filter [6].

Multi-scale 1D CLAP Framework


High-Density Surface EMG (HD-sEMG):
We use our CLAP framework to estimate conduction velocity (CV) from high-density surface electromyography (sEMG) recordings [6], [14]. CV describes the speed of propagation of motor unit action potentials (MUAPs) along the muscle fibre and is an important factor in the study of muscle activity, revealing information regarding pathology, fatigue or pain in the muscle.

(a) HD-sEMG Data Acquisition
(b) HD-sEMG Data
Illustration of acquiring high-density sEMG data from the biceps, (a), and the corresponding ensemble of electrode signals with a common time-varying delay, (b).
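Relating an estimated delay to CV is then a unit conversion (the numbers below are invented for illustration): with inter-electrode distance \(d\) and a delay of \(\tau\) samples at sampling rate \(f_s\), the conduction velocity is \(\mathrm{CV} = d f_s / \tau\):

```python
import numpy as np

# Hypothetical numbers for illustration (spacing and sampling rate are
# assumptions): convert an estimated inter-electrode delay, in samples,
# into a conduction velocity in m/s via CV = d / (tau / fs).
d = 5e-3                                   # inter-electrode distance in metres
fs = 2048.0                                # sampling rate in Hz
tau = np.array([5.1, 5.0, 4.8])            # delay estimates in samples
cv = d / (tau / fs)                        # conduction velocity in m/s
print(cv)                                  # around 2 m/s for these numbers
```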

Motion correction for MRI code can be found here.

Delay estimation code can be found here.
This code contains several implementations of the LAP framework and a signal-generation function with a number of different delay functions.


References

  1. Local All-Pass Geometric Deformations
    C. Gilliam, and T. Blu
    IEEE Transactions on Image Processing, Vol. 27, No. 2, pp. 1010–1025, Feb. 2018
  2. Local All-Pass Filters for Optical Flow Estimation
    C. Gilliam, and T. Blu
    In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2015), Brisbane, Australia, pp. 1533–1537, Apr. 2015
  3. All-Pass Parametric Image Registration
    X. Zhang, C. Gilliam, and T. Blu
    IEEE Transactions on Image Processing, Vol. 29, pp. 5625–5640, Apr. 2020
  4. MR-based respiratory and cardiac motion correction for PET imaging
    T. Küstner, M. Schwartz, P. Martirosian, S. Gatidis, F. Seith, C. Gilliam, T. Blu, H. Fayad, D. Visvikis, F. Schick, B. Yang, H. Schmidt, and N.F. Schwenzer
    Medical Image Analysis, Vol. 42, pp. 129–144, Dec. 2017
  5. 3D Motion Flow Estimation using Local All-Pass Filters
    C. Gilliam, T. Küstner, and T. Blu
    In Proc. IEEE International Symposium on Biomedical Imaging (ISBI 2016), Prague, Czech Republic, pp. 282–285, Apr. 2016
  6. Time-Varying Delay Estimation Using Common Local All-Pass Filters with Application to Surface Electromyography
    C. Gilliam, A. Bingham, T. Blu, and B. Jelfs
    In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018), Calgary, Canada, pp. 841–845, Apr. 2018
  7. Approximation Order of the LAP Optical Flow Algorithm
    T. Blu, P. Moulin, and C. Gilliam
    In Proc. IEEE International Conference on Image Processing (ICIP 2015), Québec City, Canada, pp. 48–52, Sep. 2015
  8. Iterative fitting after elastic registration: An efficient strategy for accurate estimation of parametric deformations
    X. Zhang, C. Gilliam, and T. Blu
    In Proc. IEEE International Conference on Image Processing (ICIP 2017), Beijing, China, pp. 1492–1496, Sep. 2017
  9. Parametric Registration for Mobile Phone Images
    X. Zhang, C. Gilliam, and T. Blu
    In Proc. IEEE International Conference on Image Processing (ICIP 2019), Taipei, Taiwan, pp. 1312–1316, Sep. 2019
  10. Finding the Minimum Rate of Innovation in the Presence of Noise
    C. Gilliam, and T. Blu
    In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016), Shanghai, China, Mar. 2016
  11. LAPNet: Non-Rigid Registration Derived in k-Space for Magnetic Resonance Imaging
    T. Küstner, J. Pan, H. Qi, G. Cruz, C. Gilliam, T. Blu, B. Yang, S. Gatidis, R. Botnar, and C. Prieto
    IEEE Transactions on Medical Imaging, Vol. 40, No. 12, pp. 3686–3697, Dec. 2021
  12. Deep-learning based motion-corrected image reconstruction in 4D magnetic resonance imaging of the body trunk
    T. Küstner, J. Pan, C. Gilliam, H. Qi, G. Cruz, K. Hammernik, B. Yang, T. Blu, D. Rueckert, R. Botnar, C. Prieto, and S. Gatidis
    In Proc. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2020), pp. 976–985, 2020
    Best Paper Award
  13. Self-Supervised Motion-Corrected Image Reconstruction Network for 4D Magnetic Resonance Imaging of the Body Trunk
    T. Küstner, J. Pan, C. Gilliam, H. Qi, G. Cruz, K. Hammernik, T. Blu, D. Rueckert, R. Botnar, C. Prieto, and S. Gatidis
    APSIPA Transactions on Signal and Information Processing, Vol. 11, No. 1, 2022
  14. Estimating Muscle Fibre Conduction Velocity in the Presence of Array Misalignment
    C. Gilliam, and B. Jelfs
    In Proc. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2018), Honolulu, Hawaii, USA, Nov. 2018