Chinese Journal of Information Fusion, Volume 1, Issue 1, 2024: 16-32

Open Access | Research Article | 27 May 2024
Simultaneous Spatiotemporal Bias Compensation and Data Fusion for Asynchronous Multisensor Systems
1 School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
2 Department of Electrical and Computer Engineering, McMaster University, Ontario, Canada
* Corresponding Author: Gongjian Zhou, [email protected]
Received: 17 February 2024, Accepted: 23 May 2024, Published: 27 May 2024  
Abstract
Bias estimation of sensors is an essential prerequisite for accurate data fusion. Neglect of the temporal bias present in general real systems prevents existing algorithms from being applied successfully. In this paper, both spatial and temporal biases in asynchronous multisensor systems are investigated, and two novel methods for simultaneous spatiotemporal bias compensation and data fusion are presented. The general situation in which the sensors sample at different times with different and varying periods is explored, and unknown time delays may exist between the time stamps and the true measurement times. Due to the time delays, the time stamp interval of the measurements from different sensors may differ from their true measurement interval, and the unknown difference between them is considered as the temporal bias and augmented into the state vector to be estimated. Multisensor measurements are collected in batch processing or sequential processing schemes to estimate the augmented state vector, resulting in two spatiotemporal bias compensation methods. In both processing schemes, the measurements are formulated as functions of both target states and spatiotemporal biases according to the time difference between the measurements and the states to be estimated. The Unscented Kalman Filter is used to handle the nonlinearity of the measurements and produce spatiotemporal bias and target state estimates simultaneously. The posterior Cramer-Rao lower bound (PCRLB) for spatiotemporal bias and state estimation is presented, and simulations are conducted to demonstrate the effectiveness of the proposed methods.

Keywords
spatiotemporal bias
state estimation
multisensor data fusion
asynchronous sensors

1. Introduction

In a sensor network, data collected from multiple sensors [1, 3, 4, 8] is synergistically fused to improve overall system performance [2, 5, 6, 7]. An important prerequisite for successful fusion is that the spatial and temporal biases in asynchronous multiple sensor systems must be estimated and compensated. Otherwise, these biases may cause tracking performance degradation, and even worse, may lead to duplicate tracks.

Spatial bias estimation and compensation has been under intensive investigation for decades, and various algorithms have been developed in the literature. In [9], the real time quality control (RTQC) routine is developed to compute the bias by averaging the measurements from each sensor. In [10, 11, 12], sensor registration is formulated as an ordinary or weighted least squares (LS) problem, and sensor biases are then estimated using the LS technique. In [13], the exact maximum likelihood (EML) method is used to maximize the likelihood function of sensor measurements to obtain bias estimates. The method in [14] uses the maximum likelihood registration (MLR) method to solve the bias estimation problem of multiple dissimilar sensors. Another series of methods is based on filtering and uses the Kalman Filter (KF), extended Kalman Filter (EKF) and unscented Kalman Filter (UKF) to obtain online spatial bias estimates [15, 16, 17, 18, 19]. In [16], the KF method is used to estimate the sensor system bias and the attitude bias with the measurement noises taken into account. In [17], the EKF method is used to estimate the position and azimuth biases of distributed radars relative to a common reference coordinate system. The methods in [18, 19] use the augmented state Kalman filter (ASKF) to estimate augmented state vectors comprising the target states and the biases of multiple sensors, so that the two components can be jointly estimated.

All these methods make one fundamental assumption, i.e., that the time stamps of all the measurements accurately indicate the measurement times. In practical applications, there may be unknown time delays between the time stamps and the true measurement times due to the latency of signal processing and/or data transfer. The time stamps cannot always be used as reliable time references to correctly fuse the measurements from multiple sensors, leading to temporal bias problems. The temporal bias must be accurately estimated and compensated, which is the focus of this paper.

Several algorithms have been developed to solve the temporal bias problem in offline mode. In [20, 21, 22], the temporal bias problem is considered for different combinations of sensors. The time stamps and the unknown time delays are used to represent the true measurement times of the sensors, and the measurement equations are formulated accordingly. The objective function of each sensor is built using the measurement error terms and the relevant covariances. The Levenberg-Marquardt (LM) algorithm [23] is used to find the ML estimates of the temporal bias and other unknown parameters by minimizing the sum of the objective functions. In [24], a generalized LS method is used to estimate the radar spatial bias and the ADS-B temporal bias, where the radar has accurate time stamps while the transmitting times of ADS-B data packets are unavailable. The two sensors need to have the same sampling period for this method to perform correctly. These methods [20, 21, 22, 24] do not consider the case where all sensors have unknown time delays. In [25], a multisensor time-offset estimation method is proposed for different time-offset statistical models and target dynamic models. This method assumes that the sensors are spatially unbiased and only estimates the temporal offset offline, without compensation for accurate data fusion. These offline methods use the estimated bias as prior information to calibrate the sensors. This poses a problem: the bias may change each time the system is started, so the sensors have to be recalibrated.

In [26], an online method is proposed to estimate the temporal bias between a camera and an inertial measurement unit (IMU). The time stamps and the temporal bias estimates are used to represent the actual measurement times of the camera. However, due to temporal bias estimation errors, the camera measurements are inevitably processed at incorrect time instants, which may cause errors in the initial stage. Besides, the sensor spatial bias is not considered in this method. In [27, 28, 29], the spatial bias and the temporal bias are jointly considered for different combinations of multiple dissimilar sensors. Three spatiotemporal bias estimation methods, based on the EKF, the UKF and the expectation-maximization-EKF (EM-EKF), respectively, are proposed to estimate the spatiotemporal biases and target states simultaneously. Since both biases may exist in practical multisensor systems, it is desirable to incorporate spatial and temporal biases jointly into the system models. However, the three methods mentioned above only consider the specific case where the multiple sensors have the same sampling period. In most real applications, the sensors may not sample at the same times with the same intervals.

In this paper, the problem of simultaneous spatiotemporal bias compensation and data fusion for practical multisensor systems, where the sampling periods of the sensors may be different and varying, is investigated. In our previous papers [30, 31] on spatiotemporal bias estimation, the particular case where sensors have constant sampling periods is discussed. This paper is a significant extension of the previous work to the general case with varying sampling periods. We consider the difference between the time stamp interval and the true measurement interval of measurements from different sensors as the temporal bias, which is caused by the existence of unknown time delays. First, an augmented state equation combining the target state and the spatiotemporal bias is formulated. Multisensor measurements are collected in batch processing or sequential processing schemes to estimate the augmented state vector, resulting in two spatiotemporal bias compensation methods. In the batch processing scheme, multiple measurements from all sensors between two consecutive reference time instants are collected in a measurement vector to update the augmented state vector. We use the time stamp intervals and the temporal biases to represent the true measurement intervals, and an accurate relationship between measurements and states is established. In the sequential processing strategy, each measurement from each sensor is processed sequentially once available. Due to the unavailability of the true measurement intervals, the time stamp intervals are used to formulate the state transition, and the temporal bias is used to align the measurements with the target states in the measurement equations. In both processing schemes, the measurements are formulated as functions of both target states and spatiotemporal biases. This enables extraction of both spatial and temporal biases from the measurements.
The UKF is used to handle the nonlinearity of the measurements to simultaneously estimate spatiotemporal biases and target states. The contributions of this paper can be summarized as follows:

  1. The multisensor system with spatiotemporal bias is investigated and the time delay difference between the sensors is regarded as the temporal bias to be compensated for proper fusion of the measurements from the sensors.

  2. Feasible state transitions are presented for multisensor systems whose true measurement intervals are not exactly known.

  3. The measurement equations are formulated to correctly describe the relationship between the measurements and the states with biased time.

  4. Two spatiotemporal bias compensation methods are proposed to simultaneously estimate the biases and target states, one in batch processing scheme and the other in sequential processing scheme.

  5. The posterior Cramer-Rao lower bound [32, 33] (PCRLB) is derived to quantify the best achievable performance.

The rest of the paper is organized as follows. In Section 2, the problem of spatiotemporal bias compensation and data fusion in asynchronous multisensor systems is formulated. In Section 3, the spatiotemporal bias compensation methods are presented in detail. The PCRLB of spatiotemporal bias and target state estimation is derived in Section 4. Section 5 presents the simulation results, followed by conclusions in Section 6.

2. Problem Formulation

Consider a centralized system with N sensors that provides two-dimensional measurements in polar coordinates, namely range and azimuth measurements. The nearly constant velocity (NCV) motion model [34] of the target is considered in the whole paper. The target state vector is described as

\[ X(k) = [x(k),\; y(k),\; \dot{x}(k),\; \dot{y}(k)]^\top \]

where $x(k)$ and $y(k)$ are the positions in the $x$ and $y$ directions, respectively, and $\dot{x}(k)$ and $\dot{y}(k)$ are the corresponding velocities. Note that other target motion models can be handled seamlessly within the proposed methods. The target state equation is described as

\[ X(k) = F(k-1)X(k-1) + \Gamma(k-1)v(k-1) \]

where $v(k-1)$ is the zero-mean Gaussian white process noise with known covariance $Q(k-1)$, $F(k-1)$ is the state transition matrix, and $\Gamma(k-1)$ is the process noise gain matrix. Sensor $s$ reports a range measurement $r_s(k)$ and an azimuth measurement $\theta_s(k)$ at a rate which may vary. The subscript $s$ denotes the sensor index and $k$ stands for the index of the measurement time $t_s(k)$. The measurement equation is given by

\[
z_s(k) = h_s(k, X(k)) + w_s(k) = \begin{bmatrix} \sqrt{(x(k)-x_s^p)^2 + (y(k)-y_s^p)^2} \\[2pt] \arctan\!\left(\dfrac{y(k)-y_s^p}{x(k)-x_s^p}\right) \end{bmatrix} + w_s(k)
\]

where $(x_s^p, y_s^p)$ denotes the position of sensor $s$, and $w_s(k)$ represents the zero-mean Gaussian white measurement noise with known covariance $R_s(k)$, which is given as

\[ R_s(k) = \mathrm{diag}(\sigma_r^2,\; \sigma_\theta^2) \]

where $\sigma_r$ and $\sigma_\theta$ denote the standard deviations of the range and azimuth measurement noises, respectively.

In practical systems, a spatial bias $b_s = [\Delta r_s,\, \Delta\theta_s]^\top$ of sensor $s$ may exist, where $\Delta r_s$ and $\Delta\theta_s$ stand for the range bias and the azimuth bias, respectively. Range bias may be caused by internal circuit delay in the sensor, zero drift in the system, or an incorrect rate of the distance clock. Azimuth bias is usually caused by the deviation that appears when the sensor antenna is aligned with due North [35]. Additionally, the time stamps tagged to the measurements may differ from the true times at which the target is observed. Normally, there may be time delays in the time stamps for a number of reasons. For example, some sensors take the time when the measurements are produced as the time stamps. Due to the latency of signal processing and/or data transfer, there is a time delay between the true measurement time and the generation of its time stamp. Different sensors may have different time delays due to different processing or communication latencies. If the time stamps are used as the true measurement times to perform time alignment for data fusion, large errors and/or false correlations may result, even if there is no spatial bias. Therefore, in order to perform accurate data fusion, the spatiotemporal bias problem should be solved.

Figure 1 Time relationship among measurement times, time stamps and time delays.

3. Simultaneous Spatiotemporal Bias Compensation and Data Fusion

In this section, two spatiotemporal bias compensation methods are proposed for asynchronous multisensor systems. The general situation is considered, where sensors measure targets at different times with different and varying intervals. We take a two-sensor system as an example to present the time relationship among measurement times, time stamps and time delays, as illustrated in Figure 1.

In this figure, sensor $s=1,2$ measures the target state with a varying period $T_s(k_s)$ at the true measurement time $t_s(k_s)$, and the time stamp is $\bar{t}_s(k_s)$. Due to signal processing or communication latencies, there may exist an unknown time delay $\Delta\tau_s = \bar{t}_s(k_s) - t_s(k_s)$ between the time stamps of sensor $s$ and the true measurement times. This prevents the time stamps from being used directly to perform proper time alignment and data fusion. For example, to fuse the $k_1$th measurement from sensor 1 and the $k_2$th measurement from sensor 2, the true measurement interval $\Delta\psi = t_1(k_1) - t_2(k_2)$ between the sensors is required. In practice, we only have the time stamp interval $\Delta\bar{\psi} = \bar{t}_1(k_1) - \bar{t}_2(k_2)$. The temporal bias $\Delta t_{2,1} = \Delta\bar{\psi} - \Delta\psi = \Delta\tau_1 - \Delta\tau_2$ between $\Delta\bar{\psi}$ and $\Delta\psi$ should be compensated for accurate data fusion. Note that we can only compensate for the temporal bias $\Delta t_{2,1}$ between the two sensors, not the time delay of each individual sensor. Without loss of generality, we take sensor 1 as the reference sensor and regard $\Delta t_{s,1} = \Delta\tau_1 - \Delta\tau_s,\ s=1,\ldots,N$ as the relative temporal bias of sensor $s$ with respect to the reference sensor in the general case with $N$ sensors.
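To make the relationship among true times, time stamps, delays and the temporal bias concrete, the following sketch works through the algebra with hypothetical delay and timing values (all numbers are illustrative, not from the paper):

```python
# Numerical illustration of the temporal-bias relationship in Figure 1.
# All values are hypothetical, chosen only to make the algebra concrete.

d_tau_1 = 0.05   # unknown time delay of sensor 1 (s)
d_tau_2 = 0.02   # unknown time delay of sensor 2 (s)

t1 = 10.00       # true measurement time of sensor 1 (s)
t2 = 9.70        # true measurement time of sensor 2 (s)

# Time stamps are the true times shifted by each sensor's delay.
t1_bar = t1 + d_tau_1
t2_bar = t2 + d_tau_2

d_psi = t1 - t2              # true measurement interval (unavailable in practice)
d_psi_bar = t1_bar - t2_bar  # time stamp interval (what the fusion center sees)

# Temporal bias of sensor 2 relative to sensor 1.
d_t21 = d_psi_bar - d_psi
assert abs(d_t21 - (d_tau_1 - d_tau_2)) < 1e-12

# Compensating the stamp interval with the temporal bias recovers the true interval.
assert abs((d_psi_bar - d_t21) - d_psi) < 1e-12
```

The fusion center never observes `d_tau_1` or `d_tau_2` individually; only their difference `d_t21` is observable, which is why the methods below estimate the relative temporal bias rather than the per-sensor delays.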

3.1 Augmented State Equation

In order to obtain effective data fusion in the presence of both spatial and temporal biases, the idea in this paper is to augment the spatiotemporal biases as part of the state vector to be estimated along with the target states. The augmented state vector is given as

\[ \mathbf{X}(k) = \begin{bmatrix} X(k) \\ X_b(k) \end{bmatrix} \]

where $X(k)$ is the target base state vector, and $X_b(k) = [B(k)^\top,\; \Psi(k)^\top]^\top$ contains the spatiotemporal biases of the sensors. $B(k)$ consists of the spatial biases of the $N$ sensors

\[ B(k) = [b_1(k)^\top,\; \ldots,\; b_N(k)^\top]^\top \]

and $\Psi(k) = [\Delta t_{2,1}(k),\, \ldots,\, \Delta t_{N,1}(k)]^\top$ consists of the temporal biases of sensors $2,\ldots,N$ with respect to sensor 1.

Assume that the spatiotemporal biases of each sensor are constant over time. The augmented state equation corresponding to (1) can be given by

\[ \mathbf{X}(k) = \mathbf{F}(k-1)\mathbf{X}(k-1) + \mathbf{\Gamma}(k-1)v(k-1) \]

where the augmented state transition matrix and the augmented process noise gain matrix are respectively given as

\[ \mathbf{F}(k-1) = \begin{bmatrix} F(k-1) & \mathbf{0}_{4,(3N-1)} \\ \mathbf{0}_{(3N-1),4} & \mathbf{I}_{3N-1} \end{bmatrix} \]

\[ \mathbf{\Gamma}(k-1) = \begin{bmatrix} \Gamma(k-1) \\ \mathbf{0}_{(3N-1),2} \end{bmatrix} \]

where $\mathbf{0}_{m,n}$ denotes an $m \times n$ zero matrix, and $\mathbf{I}_{3N-1}$ denotes the identity matrix of order $3N-1$. Since the spatiotemporal biases are constants, there is no process noise with respect to the biases. The process noise $v(k-1)$ in (3) and its covariance are the same as those of the process noise in (2). Assuming the target moves according to the NCV model, the state transition matrix of the target base state is

\[
F(k-1) = \begin{bmatrix} 1 & 0 & \Delta T(k-1) & 0 \\ 0 & 1 & 0 & \Delta T(k-1) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\]

where $\Delta T(k-1)$ is the time interval between two consecutive states, which is the interval used in the filter. In the following, two measurement processing schemes, i.e., the batch processing and sequential processing schemes, will be presented. Normally, the true measurement interval $\Delta\psi$ of two consecutive measurements should be used as the time interval $\Delta T(k-1)$. Due to the existence of unknown time delays, the time stamp interval $\Delta\bar{\psi}$, instead of the true measurement interval $\Delta\psi$, is used to formulate the state transition.
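The augmented transition matrix in (4) and noise gain in (5), built around the NCV base matrix in (6), can be assembled as in the following sketch. The internal structure of the base gain matrix $\Gamma(k-1)$ is not given in this excerpt; a discrete white-noise-acceleration form is assumed here purely for illustration:

```python
import numpy as np

def augmented_transition(dT, N):
    """Assemble the augmented transition matrix (4) and noise gain (5)
    for an NCV target and N sensors (3N-1 constant bias components)."""
    nb = 3 * N - 1  # 2 spatial biases per sensor + N-1 temporal biases
    # NCV base transition, as in (6).
    F_base = np.array([[1.0, 0.0, dT,  0.0],
                       [0.0, 1.0, 0.0, dT ],
                       [0.0, 0.0, 1.0, 0.0],
                       [0.0, 0.0, 0.0, 1.0]])
    # Assumed white-noise-acceleration gain (not specified in this excerpt).
    G_base = np.array([[0.5 * dT**2, 0.0],
                       [0.0, 0.5 * dT**2],
                       [dT,  0.0],
                       [0.0, dT ]])
    # Biases are constant: identity block, no process noise rows.
    F_aug = np.block([[F_base, np.zeros((4, nb))],
                      [np.zeros((nb, 4)), np.eye(nb)]])
    G_aug = np.vstack([G_base, np.zeros((nb, 2))])
    return F_aug, G_aug
```

For $N=2$ sensors this yields a $9 \times 9$ transition matrix whose lower-right $5 \times 5$ identity block simply carries the spatiotemporal biases forward unchanged.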

In the batch processing scheme, we choose sensor 1 as the reference sensor and update the state estimates using all the collected measurements when the measurements of sensor 1 are reported. Since the time delay of sensor 1 is constant, the measurement interval $\Delta\psi = t_1(k) - t_1(k-1)$ equals the time stamp interval $\Delta\bar{\psi} = \bar{t}_1(k) - \bar{t}_1(k-1)$. We can therefore use the available $\Delta\bar{\psi}$ to exactly represent $\Delta T(k-1)$.

In the sequential processing scheme, we estimate the augmented states once a measurement from any sensor is available, and consecutive measurements may not originate from the same sensor. Since the time delays of the sensors may be different, the true measurement interval $\Delta\psi$ is unavailable. In this case, the time stamp interval $\Delta\bar{\psi}$, instead of $\Delta\psi$, is used to formulate the state transition. Assume two consecutive measurements are the $(k_1-1)$th and $(k_2-1)$th measurements from sensor 1 and sensor 2, respectively. One has

\[ \Delta T(k-1) = \Delta\bar{\psi} = \bar{t}_2(k_2-1) - \bar{t}_1(k_1-1) = \Delta\psi - \Delta t_{2,1} \]

As shown in (7), a temporal bias exists between $\Delta\bar{\psi}$ and $\Delta\psi$, leading to bias in the state transition. To eliminate this influence, the temporal bias is used in the measurement equation to align the measurements with the target states.

3.2 The Measurement Equation

In practical multisensor systems, the sampling periods of the sensors may be different and varying. To handle the multisensor measurements in this general situation, two measurement processing schemes are presented. One is the batch processing scheme and the other is the sequential processing scheme. In the former, the sensor with the longer sampling period is set as the reference sensor, and multiple measurements from all sensors between two consecutive reference time instants are collected in a measurement vector to update the augmented state vector. In the sequential processing scheme, each measurement from each sensor is processed sequentially to estimate the augmented state once it becomes available. For both schemes, the measurement equations are formulated as functions of the spatiotemporal biases and target base states, which enables simultaneous spatiotemporal bias estimation and data fusion.

3.2.1 Measurement Equation for Batch Processing Scheme

We consider the $k$th fusion period $(\bar{t}_1(k-1),\, \bar{t}_1(k)]$ in this part, where $\bar{t}_1(k-1)$ and $\bar{t}_1(k)$ denote the time stamps of the $(k-1)$th and $k$th measurements of sensor 1, which is chosen as the reference sensor. Let $m_s$ be the number of measurements provided by sensor $s$ in the current fusion period. The measurement vector $\mathbf{z}_s(k)$ of sensor $s$ is given by

\[ \mathbf{z}_s(k) = [z_s^1(k)^\top,\; \ldots,\; z_s^j(k)^\top,\; \ldots,\; z_s^{m_s}(k)^\top]^\top \]

where $z_s^j(k),\ j=1,\ldots,m_s$ denotes the $j$th measurement provided by sensor $s$ in the $k$th fusion period, whose time stamp $\bar{t}_s^j(k)$ falls within the period $(\bar{t}_1(k-1),\, \bar{t}_1(k)]$. The measurement vector $\mathbf{z}(k)$ of all sensors in the current fusion period is given by

\[ \mathbf{z}(k) = [\mathbf{z}_1(k)^\top,\; \mathbf{z}_2(k)^\top,\; \ldots,\; \mathbf{z}_N(k)^\top]^\top. \]

As discussed in Section 3.1, the time interval $\Delta T(k-1)$ in the state transition matrix equals the true measurement interval. Therefore, the augmented state estimate is updated at the measurement time $t_1(k)$ of sensor 1 using the measurement vector $\mathbf{z}(k)$. To properly formulate the measurement equation, the true interval $\Delta\psi_s^j = t_1(k) - t_s^j(k)$ between the measurement time $t_1(k)$ and the measurement time $t_s^j(k)$ is required to align the measurements from the sensors with the state to be updated. However, we only have the time stamp interval $\Delta\bar{\psi}_s^j = \bar{t}_1(k) - \bar{t}_s^j(k)$, where an unknown temporal bias $\Delta t_{s,1} = \Delta\bar{\psi}_s^j - \Delta\psi_s^j$ exists between $\Delta\bar{\psi}_s^j$ and $\Delta\psi_s^j$. Here, the solution is to replace $\Delta\psi_s^j$ with $\Delta\bar{\psi}_s^j - \Delta t_{s,1}$ in the measurement equation. Since $\Delta t_{s,1}$ is part of the augmented state vector, this replacement enables an exact description of the relationship between the measurements in (9) and the augmented states in (1).

Here, we define $\Delta t_{1,1} = 0$ as the temporal bias of sensor 1 relative to itself, which ensures that the measurement equation of sensor 1 can be formulated using the same general expression as those of the other sensors. The measurements in (9) are formulated as functions of the target states, the spatiotemporal biases and the time stamp intervals

\[ \mathbf{z}(k) = h(\mathbf{X}(k)) + \mathbf{w}(k) = [h_1(\mathbf{X}(k))^\top,\; \ldots,\; h_N(\mathbf{X}(k))^\top]^\top + \mathbf{w}(k) \]

where $h_s(\mathbf{X}(k)),\ s=1,\ldots,N$ denotes the measurement function of sensor $s$, and $\mathbf{w}(k)$ denotes the zero-mean Gaussian white measurement noise with known covariance $\mathbb{R}(k)$, which is given by

\[ \mathbb{R}(k) = \mathrm{diag}(\mathbf{R}_1(k),\, \ldots,\, \mathbf{R}_s(k),\, \ldots,\, \mathbf{R}_N(k)) \]

where

\[ \mathbf{R}_s(k) = \mathrm{diag}(R_s^1(k),\, \ldots,\, R_s^j(k),\, \ldots,\, R_s^{m_s}(k)) \]

and $R_s^j(k) = \mathrm{diag}(\sigma_r^2,\, \sigma_\theta^2)$. The measurement function $h_s(\mathbf{X}(k))$ of sensor $s$ is

\[
h_s(\mathbf{X}(k)) = \begin{bmatrix}
\sqrt{(x_s^1(k)-x_s^p)^2 + (y_s^1(k)-y_s^p)^2} + \Delta r_s(k) \\[2pt]
\arctan\!\left(\dfrac{y_s^1(k)-y_s^p}{x_s^1(k)-x_s^p}\right) + \Delta\theta_s(k) \\
\vdots \\
\sqrt{(x_s^j(k)-x_s^p)^2 + (y_s^j(k)-y_s^p)^2} + \Delta r_s(k) \\[2pt]
\arctan\!\left(\dfrac{y_s^j(k)-y_s^p}{x_s^j(k)-x_s^p}\right) + \Delta\theta_s(k) \\
\vdots \\
\sqrt{(x_s^{m_s}(k)-x_s^p)^2 + (y_s^{m_s}(k)-y_s^p)^2} + \Delta r_s(k) \\[2pt]
\arctan\!\left(\dfrac{y_s^{m_s}(k)-y_s^p}{x_s^{m_s}(k)-x_s^p}\right) + \Delta\theta_s(k)
\end{bmatrix}
\]

with

\[
\begin{cases}
x_s^j(k) = x(k) - \dot{x}(k)\,(\Delta\bar{\psi}_s^j - \Delta t_{s,1}(k)) \\
y_s^j(k) = y(k) - \dot{y}(k)\,(\Delta\bar{\psi}_s^j - \Delta t_{s,1}(k)) \\
\Delta\bar{\psi}_s^j = \bar{t}_1(k) - \bar{t}_s^j(k)
\end{cases}
\]

where $(x_s^p, y_s^p)$ denotes the position of sensor $s$, $\Delta r_s(k)$ and $\Delta\theta_s(k)$ denote the range and azimuth biases of sensor $s$, respectively, and $(x_s^j(k), y_s^j(k)),\ s=1,\ldots,N,\ j=1,\ldots,m_s$ denotes the true target position corresponding to the measurement $z_s^j(k)$ at time $t_s^j(k)$. We use $\Delta\bar{\psi}_s^j - \Delta t_{s,1}$ to represent the true measurement interval between the time $t_1(k)$ and the measurement time $t_s^j(k)$, which enables each measurement of sensor $s$ to be correctly represented by the target states, spatiotemporal biases and time stamp intervals according to (13). As a result, the spatiotemporal biases and target states can be estimated simultaneously from the measurements collected in (9).
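A minimal sketch of the batch measurement function in (13)-(14) for one sensor might look as follows. The augmented-state layout (base state first, then range/azimuth bias pairs, then temporal biases) and the use of the four-quadrant `arctan2` in place of `arctan` are implementation assumptions of this illustration, not specifications from the paper:

```python
import numpy as np

def h_sensor_batch(X_aug, sensor_pos, stamp_intervals, s_idx, N):
    """Stacked measurement function for sensor s (0-based s_idx) in the
    batch scheme.  Assumed layout of X_aug:
    [x, y, vx, vy, dr_1, dth_1, ..., dr_N, dth_N, dt_{2,1}, ..., dt_{N,1}].
    stamp_intervals[j] is the time stamp interval t1_bar(k) - ts^j_bar(k)."""
    x, y, vx, vy = X_aug[:4]
    dr, dth = X_aug[4 + 2 * s_idx], X_aug[5 + 2 * s_idx]      # spatial biases
    dt = 0.0 if s_idx == 0 else X_aug[4 + 2 * N + s_idx - 1]  # dt_{1,1} = 0
    xp, yp = sensor_pos
    out = []
    for dpsi_bar in stamp_intervals:
        tau = dpsi_bar - dt                   # compensated true interval, as in (14)
        xj, yj = x - vx * tau, y - vy * tau   # retrodicted target position
        out.append(np.hypot(xj - xp, yj - yp) + dr)
        out.append(np.arctan2(yj - yp, xj - xp) + dth)
    return np.array(out)
```

Because the temporal bias `dt` enters through the retrodiction interval, a wrong bias estimate shifts the predicted position along the velocity vector, which is precisely what lets the filter observe and correct it.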

3.2.2 Measurement Equation for Sequential Processing Scheme

In this part, the measurement equation for the sequential processing scheme is presented. We denote $t_s(k_s)$ and $\bar{t}_s(k_s)$ as the true measurement time and the time stamp, respectively, corresponding to the $k_s$th measurement $z_s(k_s)$ from sensor $s$. The sensor that first provides a measurement is chosen as the reference sensor. Without loss of generality, we assume sensor 1 is the reference sensor and define $\Delta t_{1,1} = 0$ as the temporal bias of sensor 1 relative to itself. To avoid ambiguity, we denote $k$ as the overall measurement index across all sensors. Each time a measurement is received at the fusion center, $k$ is incremented by 1. The example given in Figure 1 is used to illustrate the formulation of the measurement equation, followed by the general case with $N$ sensors.

Assume that the $(k_1-1)$th measurement $z_1(k_1-1)$ provided by sensor 1 is the $(k-1)$th measurement received at the fusion center. We use $z_1(k_1-1)$ to initialize the augmented state at time $t_1(k_1-1)$, as will be presented in Section 3.4. As shown in Figure 1, consecutive measurements may come from the same sensor or from different sensors, and there are four possible combinations of their sources. The measurement equations for all four cases are formulated as follows.

a) The previous measurement $z_1(k_1-1)$ with the overall measurement index $k-1$ and the current measurement $z_2(k_2-1)$ with the overall measurement index $k$ are from sensor 1 and sensor 2, respectively: In this case, the true measurement interval $\Delta\psi = t_2(k_2-1) - t_1(k_1-1)$ between $z_2(k_2-1)$ and $z_1(k_1-1)$ is unavailable since the time stamp delays of the two sensors may be different. As discussed in Section 3.1, we use the time stamp interval $\Delta\bar{\psi} = \bar{t}_2(k_2-1) - \bar{t}_1(k_1-1)$ instead of $\Delta\psi$ to represent the time interval $\Delta T(k-1)$ in the state transition matrix. After transition according to $\Delta\bar{\psi}$, the true time of the state $\mathbf{X}(k)$ is $t_2(k_2-1) - \Delta t_{2,1}$, which is unequal to the measurement time $t_2(k_2-1)$. To eliminate this influence, the temporal bias is used to align the measurement with the state $\mathbf{X}(k)$ in the measurement equation, as given by

\[
z(k) = h(\mathbf{X}(k)) + w_s(k) = \begin{bmatrix} \sqrt{x_s(k)^2 + y_s(k)^2} + \Delta r_s(k) \\[2pt] \arctan\!\left(\dfrac{y_s(k)}{x_s(k)}\right) + \Delta\theta_s(k) \end{bmatrix} + w_s(k)
\]
\[
\begin{cases}
x_s(k) = x(k) + \dot{x}(k)\,\Delta t_{s,1}(k) - x_s^p \\
y_s(k) = y(k) + \dot{y}(k)\,\Delta t_{s,1}(k) - y_s^p
\end{cases}
\]

where $s=2$, $(x_s^p, y_s^p)$ denotes the position of sensor $s$, $w_s(k)$ denotes the zero-mean Gaussian white measurement noise with known covariance $\mathbb{R}(k) = \mathrm{diag}(\sigma_r^2, \sigma_\theta^2)$, and $(x_s(k), y_s(k))$ denotes the true target position corresponding to the measurement $z_2(k_2-1)$ at time $t_2(k_2-1)$. We utilize $\Delta t_{s,1}$ to represent the true interval between the time $t_2(k_2-1) - \Delta t_{2,1}$ of the state $\mathbf{X}(k)$ and the measurement time $t_2(k_2-1)$, which enables each measurement of sensor $s$ to be correctly represented by the target states and spatiotemporal biases according to (15a). Accordingly, the spatiotemporal biases and target states can be estimated simultaneously using the measurement from sensor $s$.
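The sequential measurement function (15) admits a similarly compact sketch; as before, the augmented-state layout and the use of `arctan2` are assumptions of this illustration:

```python
import numpy as np

def h_sensor_seq(X_aug, sensor_pos, s_idx, N):
    """Measurement function of (15) in the sequential scheme (0-based s_idx).
    The temporal bias dt_{s,1} shifts the state forward to the sensor's
    true measurement time, per (15b).  Assumed layout of X_aug:
    [x, y, vx, vy, dr_1, dth_1, ..., dr_N, dth_N, dt_{2,1}, ..., dt_{N,1}]."""
    x, y, vx, vy = X_aug[:4]
    dr, dth = X_aug[4 + 2 * s_idx], X_aug[5 + 2 * s_idx]      # spatial biases
    dt = 0.0 if s_idx == 0 else X_aug[4 + 2 * N + s_idx - 1]  # dt_{1,1} = 0
    xp, yp = sensor_pos
    xs = x + vx * dt - xp   # (15b): align the state with the measurement time
    ys = y + vy * dt - yp
    return np.array([np.hypot(xs, ys) + dr,
                     np.arctan2(ys, xs) + dth])
```

The same function serves all four cases a)-d) below, since only the transition interval preceding the update, not the measurement equation itself, differs between them.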

b) Both the previous measurement $z_2(k_2-1)$ with the overall measurement index $k$ and the current measurement $z_2(k_2)$ with the overall measurement index $k+1$ are from sensor 2: In this case, the measurement interval $\Delta\psi$ equals the time stamp interval $\Delta\bar{\psi}$ since the time delay of sensor 2 is constant. One has

\[ \Delta T(k) = \Delta\bar{\psi} = \bar{t}_2(k_2) - \bar{t}_2(k_2-1) = \Delta\psi. \]

After state transition from the previous update time $t_2(k_2-1) - \Delta t_{2,1}$ according to $\Delta T(k)$, the time of the state is $t_2(k_2) - \Delta t_{2,1}$, which is still unequal to the measurement time of $z_2(k_2)$. This influence can be eliminated in the same way as in case a), and the measurement equation can be formulated in the same way as in (15) except that the overall measurement index is $k+1$.

c) The previous measurement $z_2(k_2)$ with the overall measurement index $k+1$ and the current measurement $z_1(k_1)$ with the overall measurement index $k+2$ are from sensor 2 and sensor 1, respectively: In this case, the time interval $\Delta T(k+1)$ is

\[ \Delta T(k+1) = \Delta\bar{\psi} = \bar{t}_1(k_1) - \bar{t}_2(k_2) = \Delta\psi + \Delta t_{2,1}. \]

After state transition from the previous update time $t_2(k_2) - \Delta t_{2,1}$ according to $\Delta T(k+1)$, the time of the state is $t_1(k_1)$, which equals the measurement time of $z_1(k_1)$. We have defined the temporal bias of sensor 1 relative to itself as $\Delta t_{1,1} = 0$, so the measurement equation of sensor 1 can be formulated using the general expression (15) except that the overall measurement index is $k+2$ and the sensor index $s$ is 1.

d) Both the previous measurement $z_1(k_1)$ with the overall measurement index $k+2$ and the current measurement $z_1(k_1+1)$ with the overall measurement index $k+3$ are from sensor 1: In this case, the time interval $\Delta T(k+2)$ equals the true measurement interval $\Delta\psi$. After state transition from the previous update time $t_1(k_1)$ according to $\Delta T(k+2)$, the time of the state is $t_1(k_1+1)$, which equals the measurement time of $z_1(k_1+1)$. The measurement equation can be formulated in the same way as in (15) except that the overall measurement index is $k+3$ and the sensor index $s$ is 1.

The above cases encompass all possible combinations of the sources of consecutive measurements. When subsequent measurements are received at the fusion center, the measurement equations can be formulated according to one of these cases.

Following the above formulation for the two-sensor case, the general expression of the measurement equation can be formulated for a system with $N$ sensors. We denote $\bar{t}_p(k_p)$ and $\bar{t}_c(k_c)$ as the time stamps of the previous and current measurements, provided by sensors $p$ and $c$, respectively, where $p,c = 1,\ldots,N$. When the current measurement $z_c(k_c)$ with overall measurement index $k$ is received at the fusion center, the time interval is $\Delta T(k-1) = \bar{t}_c(k_c) - \bar{t}_p(k_p)$. Substituting $\Delta T(k-1)$ into (6), we can formulate the corresponding transition matrix, and the augmented state equation is formulated according to (3)–(5). The expression of the measurement equation is the same as (15) except that the subscript $s$ is replaced by the subscript $c$.

3.3 Filtering Process

Owing to the nonlinearity of the measurement equations in Section 3.2, the Unscented Kalman Filter (UKF) is employed to jointly estimate the spatiotemporal biases and target states. This leads to two approaches: batch processing-based (BP-SBDF) and sequential processing-based (SP-SBDF) spatiotemporal bias compensation and data fusion.

The UKF uses the unscented transformation (UT) [36] to approximate the mean and covariance of the augmented state and measurement. First, the sigma points $\delta$ and the associated weights $W$ are calculated given the augmented state estimate $\hat{\mathbf{X}}(k-1|k-1)$ and the state estimation covariance $\mathbf{P}(k-1|k-1)$. The mean and covariance are then approximated by a weighted sample mean and covariance of these sigma points. That is

\[
\begin{cases}
\delta_0(k-1|k-1) = \hat{\mathbf{X}}(k-1|k-1), & W_0 = \dfrac{\kappa}{n_x+\kappa}, & i = 0; \\[4pt]
\delta_i(k-1|k-1) = \hat{\mathbf{X}}(k-1|k-1) + \left(\sqrt{(n_x+\kappa)\mathbf{P}(k-1|k-1)}\right)_i, & W_i = \dfrac{1}{2(n_x+\kappa)}, & i = 1,\ldots,n_x; \\[4pt]
\delta_i(k-1|k-1) = \hat{\mathbf{X}}(k-1|k-1) - \left(\sqrt{(n_x+\kappa)\mathbf{P}(k-1|k-1)}\right)_{i-n_x}, & W_i = \dfrac{1}{2(n_x+\kappa)}, & i = n_x+1,\ldots,2n_x;
\end{cases}
\]

where $n_x$ is the dimension of the augmented state, $\delta_i(k-1|k-1)$ is the $i$th sigma point, $W_i$ is the associated weight, $\kappa$ is the scaling parameter, and $\left(\sqrt{(n_x+\kappa)\mathbf{P}(k-1|k-1)}\right)_i$ is the $i$th row or column of the matrix square root.
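The sigma-point construction above can be sketched as follows, using a Cholesky factor as the matrix square root (one of several valid choices, not mandated by the paper):

```python
import numpy as np

def sigma_points(x_hat, P, kappa):
    """Generate the 2*n_x + 1 sigma points and weights of the UT,
    using the columns of a Cholesky factor of (n_x + kappa) * P."""
    nx = x_hat.size
    S = np.linalg.cholesky((nx + kappa) * P)        # lower-triangular square root
    plus = [x_hat + S[:, i] for i in range(nx)]     # i = 1, ..., n_x
    minus = [x_hat - S[:, i] for i in range(nx)]    # i = n_x + 1, ..., 2 n_x
    pts = np.vstack([x_hat] + plus + minus)         # shape (2*nx + 1, nx)
    W = np.full(2 * nx + 1, 1.0 / (2.0 * (nx + kappa)))
    W[0] = kappa / (nx + kappa)                     # central-point weight
    return pts, W
```

By construction, the weighted sample mean of the sigma points reproduces `x_hat` exactly, and their weighted sample covariance reproduces `P`, which is the defining property of the unscented transformation.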

These sigma points can be updated using the augmented state equation given by (3)

\[ \delta_i(k|k-1) = \mathbf{F}(k-1)\,\delta_i(k-1|k-1), \quad i = 0, 1, \ldots, 2n_x. \]

The weighted mean of these predicted sigma points for the augmented state is given by

\[ \hat{\mathbf{X}}(k|k-1) = \sum_{i=0}^{2n_x} W_i\, \delta_i(k|k-1). \]

The prediction covariance of the augmented state is calculated by

\[ \mathbf{P}(k|k-1) = \sum_{i=0}^{2n_x} W_i\, \Delta\mathbf{X}_i(k|k-1)\,\left(\Delta\mathbf{X}_i(k|k-1)\right)^\top + Q(k-1) \]

where

\[ \Delta\mathbf{X}_i(k|k-1) = \delta_i(k|k-1) - \hat{\mathbf{X}}(k|k-1) \]

and $Q(k-1)$ denotes the known process noise covariance. We denote $\eta_i(k|k-1)$ as the prediction of the sigma points for the measurements. Note that the measurement equations for the BP-SBDF and SP-SBDF methods are different, and their expressions have been given by (10) and (15), respectively. Substituting $\delta_i(k|k-1)$ into the corresponding measurement functions, we have

\[ \eta_i(k|k-1) = h(\delta_i(k|k-1)). \]

The weighted mean of these sigma points for the measurement is given by

\[ \hat{z}(k|k-1) = \sum_{i=0}^{2n_x} W_i\, \eta_i(k|k-1). \]

The covariance of the predicted measurement is given by

\[ \mathbf{P}_{zz}(k) = \sum_{i=0}^{2n_x} W_i\, \Delta z_i(k|k-1)\,\left(\Delta z_i(k|k-1)\right)^\top + \mathbb{R}(k) \]

where

\[ \Delta z_i(k|k-1) = \eta_i(k|k-1) - \hat{z}(k|k-1) \]

and $\mathbb{R}(k)$ denotes the known measurement noise covariance, which has two different expressions in the BP-SBDF and SP-SBDF methods. The cross-covariance between the augmented states and the measurements is given by

\[ \mathbf{P}_{xz}(k) = \sum_{i=0}^{2n_x} W_i\, \Delta\mathbf{X}_i(k|k-1)\,\left(\Delta z_i(k|k-1)\right)^\top. \]

The filter gain can then be given by

\[ K(k) = \mathbf{P}_{xz}(k)\,\mathbf{P}_{zz}(k)^{-1}. \]

Finally, the augmented state estimate and the corresponding covariance are updated by

\[ \hat{\mathbf{X}}(k|k) = \hat{\mathbf{X}}(k|k-1) + K(k)\left(\mathbf{z}(k) - \hat{z}(k|k-1)\right) \]

and

\[ \mathbf{P}(k|k) = \mathbf{P}(k|k-1) - K(k)\,\mathbf{P}_{zz}(k)\,K(k)^\top. \]
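The measurement-update steps above can be collected into one routine. This is an illustrative sketch: the measurement function `h` is passed in so that the same code covers both the BP-SBDF and SP-SBDF variants, and the predicted sigma points are assumed to be supplied by the prediction step:

```python
import numpy as np

def ukf_update(X_pred, P_pred, sigma_pred, W, z, h, R):
    """UKF measurement update: propagate the predicted sigma points through
    the measurement function h, then correct the augmented state estimate."""
    eta = np.array([h(s) for s in sigma_pred])   # predicted measurement sigma points
    z_pred = W @ eta                             # predicted measurement mean
    dZ = eta - z_pred                            # measurement deviations
    dX = sigma_pred - X_pred                     # state deviations
    Pzz = (dZ.T * W) @ dZ + R                    # innovation covariance
    Pxz = (dX.T * W) @ dZ                        # state-measurement cross-covariance
    K = Pxz @ np.linalg.inv(Pzz)                 # filter gain
    X_upd = X_pred + K @ (z - z_pred)            # updated augmented state
    P_upd = P_pred - K @ Pzz @ K.T               # updated covariance
    return X_upd, P_upd
```

With a linear `h`, this update reduces exactly to the Kalman filter correction, which provides a convenient sanity check on an implementation.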

3.4 Filter Initialization

In this subsection, the one-point initialization method [37] is used to estimate the initial augmented state and its covariance for the two proposed methods. The basic idea is to estimate the initial target state using the first reported measurement and to obtain the initial covariance from the measurement covariance. Without loss of generality, we assume sensor 1 first provides the measurement $z_1(1)$ in polar coordinates. The unbiased conversion from polar to Cartesian coordinates [38, 39, 40] is given by

\[
z_1^u(1) = \begin{bmatrix} x_1^u(1) \\ y_1^u(1) \end{bmatrix} = \begin{bmatrix} \lambda_\theta^{-1} r_1(1)\cos(\theta_1(1)) \\ \lambda_\theta^{-1} r_1(1)\sin(\theta_1(1)) \end{bmatrix} - \mu_u
\]

where $x_1^u(1)$ and $y_1^u(1)$ are the unbiased converted measurements in the $x$ and $y$ directions, respectively, and $r_1(1)$ and $\theta_1(1)$ are the first range and azimuth measurements reported by sensor 1, respectively. $\lambda_\theta$ is the bias compensation factor and $\mu_u$ is the mean of the converted measurement error, which are respectively given by

\[ \lambda_\theta = e^{-\sigma_\theta^2/2} \]

\[
\mu_u = \begin{bmatrix} (\lambda_\theta^{-1} - \lambda_\theta)\, r_1(1)\cos(\theta_1(1)) \\ (\lambda_\theta^{-1} - \lambda_\theta)\, r_1(1)\sin(\theta_1(1)) \end{bmatrix}.
\]

The converted measurement covariance $R_1^u(1)$ is

$$R_1^u(1)=\begin{bmatrix}R_{1,11}^u(1) & R_{1,12}^u(1)\\ R_{1,21}^u(1) & R_{1,22}^u(1)\end{bmatrix}$$

where

$$R_{1,11}^u(1)=-\lambda_\theta^2\,r_1^2(1)\cos^2\theta_1(1)+\tfrac{1}{2}\big(r_1^2(1)+\sigma_r^2\big)\big(1+\alpha_\theta\cos 2\theta_1(1)\big)$$

$$R_{1,22}^u(1)=-\lambda_\theta^2\,r_1^2(1)\sin^2\theta_1(1)+\tfrac{1}{2}\big(r_1^2(1)+\sigma_r^2\big)\big(1-\alpha_\theta\cos 2\theta_1(1)\big)$$

$$R_{1,12}^u(1)=R_{1,21}^u(1)=-\lambda_\theta^2\,r_1^2(1)\sin\theta_1(1)\cos\theta_1(1)+\tfrac{1}{2}\big(r_1^2(1)+\sigma_r^2\big)\alpha_\theta\sin 2\theta_1(1)$$

and $\alpha_\theta=e^{-2\sigma_\theta^2}$. Based on a single position measurement, one has no information about the target velocity. If the maximum target speed is $v_{\max}$, a uniform distribution of the velocity with appropriate bounds reflects our ignorance. This uniform distribution is approximated by a zero-mean Gaussian distribution with covariance $v_{\max}^2\,\mathbf{I}_2/3$ for the velocity. The initial estimate of the target state and its covariance are then given respectively by

$$\hat{X}(1|1)=\begin{bmatrix}x_1^u(1) & y_1^u(1) & 0 & 0\end{bmatrix}^{\mathrm T}$$

$$P(1|1)=\begin{bmatrix}R_1^u(1) & \mathbf{0}_{2,2}\\ \mathbf{0}_{2,2} & v_{\max}^2\,\mathbf{I}_2/3\end{bmatrix}$$

where $\mathbf{0}_{2,2}$ is a $2\times 2$ zero matrix and $\mathbf{I}_2$ is the identity matrix of order 2.
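The one-point initialization above can be sketched as follows. This is an illustrative sketch under our own naming (`one_point_init` is not from the paper); the formulas follow the expressions for $\lambda_\theta$, $\mu^u$ and $R_1^u(1)$ given in the text.

```python
import numpy as np

def one_point_init(r, theta, sigma_r, sigma_theta, v_max):
    """One-point initialization with the unbiased polar-to-Cartesian conversion."""
    lam = np.exp(-sigma_theta**2 / 2.0)      # bias compensation factor lambda_theta
    alpha = np.exp(-2.0 * sigma_theta**2)    # alpha_theta
    c, s = np.cos(theta), np.sin(theta)
    # Unbiased converted position: scale by 1/lambda, then subtract the mean
    # of the conversion error mu_u = (1/lam - lam) * r * [cos, sin]
    mu_u = (1.0 / lam - lam) * r * np.array([c, s])
    pos = (r / lam) * np.array([c, s]) - mu_u
    # Converted measurement covariance R_u
    R11 = -lam**2 * r**2 * c**2 + 0.5 * (r**2 + sigma_r**2) * (1 + alpha * np.cos(2 * theta))
    R22 = -lam**2 * r**2 * s**2 + 0.5 * (r**2 + sigma_r**2) * (1 - alpha * np.cos(2 * theta))
    R12 = -lam**2 * r**2 * s * c + 0.5 * (r**2 + sigma_r**2) * alpha * np.sin(2 * theta)
    R_u = np.array([[R11, R12], [R12, R22]])
    # Zero initial velocity with covariance v_max^2 * I_2 / 3
    X0 = np.array([pos[0], pos[1], 0.0, 0.0])
    P0 = np.block([[R_u, np.zeros((2, 2))],
                   [np.zeros((2, 2)), (v_max**2 / 3.0) * np.eye(2)]])
    return X0, P0
```

Note that after subtracting $\mu^u$ the converted position reduces algebraically to $\lambda_\theta\, r\,[\cos\theta,\ \sin\theta]$.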

There is no prior information about the spatial and temporal biases, so their initial estimates are set to zero, that is, $\hat{X}_b(1|1)=\mathbf{0}_{(3N-1),1}$. Additionally, we assume the target states and the spatiotemporal biases are uncorrelated; this independence assumption results in a block diagonal covariance matrix for the spatiotemporal biases. As in (39), the maximum range and azimuth biases are assumed to be $\Delta r_{\max}$ and $\Delta\theta_{\max}$, respectively, and the maximum temporal bias is $\Delta t_{\max}$. The initial covariance of the spatiotemporal biases is

$$P_b(1|1)=\mathrm{diag}\big(\underbrace{P_B^b,\ldots,P_B^b}_{N},\ \underbrace{P_\Psi^b,\ldots,P_\Psi^b}_{N-1}\big)$$

where

$$\begin{cases}P_B^b=\mathrm{diag}\big(\Delta r_{\max}^2/3,\ \Delta\theta_{\max}^2/3\big)\\[0.5ex] P_\Psi^b=\Delta t_{\max}^2/3.\end{cases}$$

As a result, the initial estimates of the augmented state and its covariance are

$$\hat{\mathbf{X}}(1|1)=\begin{bmatrix}\hat{X}(1|1)\\ \hat{X}_b(1|1)\end{bmatrix}$$

and

$$\mathbf{P}(1|1)=\mathrm{diag}\big(P(1|1),\ P_b(1|1)\big).$$

4. Lower Bound of Performance

Since the measurement equations are nonlinear, the optimal solution to the spatiotemporal bias compensation problem cannot be derived analytically. A theoretical lower bound on performance is therefore helpful for assessing the level of approximation introduced by the proposed methods. In time-invariant systems, the standard Cramér-Rao lower bound (CRLB) [41] is commonly used for performance evaluation, whereas in time-varying systems the posterior CRLB (PCRLB) provides a theoretical bound on dynamic state estimates [32]. In this section, the PCRLB for spatiotemporal bias and state estimation is derived briefly as follows.

To avoid redundancy, we only present the derivation of the PCRLB for the SP-SBDF method. Assume the current measurement is reported by sensor $s$; the augmented state and measurement equations are given in (3)–(6) and (15), respectively. The lower bound on the estimation error is determined by the Fisher information matrix $J(k)$, and the covariance of $\hat{\mathbf{X}}(k|k)$ is bounded by

$$E\big\{\big(\hat{\mathbf{X}}(k|k)-\mathbf{X}(k)\big)\big(\hat{\mathbf{X}}(k|k)-\mathbf{X}(k)\big)^{\mathrm T}\big\}\succeq J(k)^{-1}$$

where $E\{\cdot\}$ is the expectation operator. The general framework for deriving the PCRLB of an unbiased estimator for a nonlinear discrete-time system is described in [32], and the information matrix can be calculated by the recursion

$$J(k)=\big[Q(k-1)+\mathbf{F}(k-1)\,J(k-1)^{-1}\,\mathbf{F}(k-1)^{\mathrm T}\big]^{-1}+\mathbf{H}(k)^{\mathrm T}\,\mathbf{R}(k)^{-1}\,\mathbf{H}(k)$$

where $Q(k-1)$ is the process noise covariance and $\mathbf{H}(k)$ is the Jacobian matrix of the measurement function $h(\mathbf{X}(k))$ evaluated at the true augmented state $\mathbf{X}(k)$, i.e.,

$$\mathbf{H}(k)=\big[\nabla_{\mathbf{X}(k)}\,h^{\mathrm T}(\mathbf{X}(k))\big]^{\mathrm T}$$

where $\nabla_{\mathbf{X}(k)}$ is the gradient operator with respect to the augmented state $\mathbf{X}(k)$. We have

$$\mathbf{H}(k)=\left[\frac{\partial h(\mathbf{X}(k))}{\partial X(k)},\ \frac{\partial h(\mathbf{X}(k))}{\partial B(k)},\ \frac{\partial h(\mathbf{X}(k))}{\partial \Psi(k)}\right]=\big[\mathbf{H}_X(k),\ \mathbf{H}_B(k),\ \mathbf{H}_\Psi(k)\big]$$

with

$$\mathbf{H}_X(k)=\begin{bmatrix}\dfrac{x_s(k)}{\sqrt{x_s^2(k)+y_s^2(k)}} & \dfrac{y_s(k)}{\sqrt{x_s^2(k)+y_s^2(k)}} & \dfrac{x_s(k)\,\Delta t_{s,1}(k)}{\sqrt{x_s^2(k)+y_s^2(k)}} & \dfrac{y_s(k)\,\Delta t_{s,1}(k)}{\sqrt{x_s^2(k)+y_s^2(k)}}\\[2ex] -\dfrac{y_s(k)}{x_s^2(k)+y_s^2(k)} & \dfrac{x_s(k)}{x_s^2(k)+y_s^2(k)} & -\dfrac{y_s(k)\,\Delta t_{s,1}(k)}{x_s^2(k)+y_s^2(k)} & \dfrac{x_s(k)\,\Delta t_{s,1}(k)}{x_s^2(k)+y_s^2(k)}\end{bmatrix}$$

$$\mathbf{H}_B(k)=\big[\mathbf{0}_{2,2(s-1)}\ \ \mathbf{I}_2\ \ \mathbf{0}_{2,2(N-s)}\big].$$

If the sensor index $s$ equals 1, we have $\mathbf{H}_\Psi(k)=\mathbf{0}_{2,N-1}$. Otherwise, we have

$$\mathbf{H}_\Psi(k)=\big[\mathbf{0}_{2,s-2}\ \ \Lambda(k)\ \ \mathbf{0}_{2,N-s}\big]$$

where

$$\Lambda(k)=\begin{bmatrix}\dfrac{x_s(k)\,\dot{x}(k)+y_s(k)\,\dot{y}(k)}{\sqrt{x_s^2(k)+y_s^2(k)}}\\[2ex] \dfrac{\dot{y}(k)\,x_s(k)-\dot{x}(k)\,y_s(k)}{x_s^2(k)+y_s^2(k)}\end{bmatrix}$$

$$\begin{cases}x_s(k)=x(k)+\dot{x}(k)\,\Delta t_{s,1}(k)-x_s^p\\[0.5ex] y_s(k)=y(k)+\dot{y}(k)\,\Delta t_{s,1}(k)-y_s^p.\end{cases}$$

The PCRLBs of the augmented state components are calculated as the corresponding diagonal elements of the inverse information matrix

$$\mathrm{PCRLB}\big\{\hat{\mathbf{X}}_j(k|k)\big\}=\big[J(k)^{-1}\big]_{jj}$$

where $[\cdot]_{jj}$ denotes the element in the $j$th row and $j$th column of a matrix. The recursion for $J(k)$ can be implemented based on Monte Carlo averaging over multiple realizations of the target trajectory. Given the initial information matrix, the PCRLB can be calculated through this recursion. In practice, the recursion is initialized with the inverse of the initial covariance matrix of the filtering method, $J(1)=\mathbf{P}(1|1)^{-1}$, which has been presented in Section 3.4.
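One step of the information-matrix recursion can be sketched as follows. This is an illustrative sketch: the matrices F, Q, H and R are placeholders for the quantities defined above, and in the actual bound H(k) is evaluated at the true augmented state and the recursion is averaged over Monte Carlo realizations of the trajectory.

```python
import numpy as np

def pcrlb_step(J_prev, F, Q, H, R):
    """One step of the PCRLB recursion:
    J(k) = [Q(k-1) + F J(k-1)^{-1} F^T]^{-1} + H^T R^{-1} H."""
    J_pred = np.linalg.inv(Q + F @ np.linalg.inv(J_prev) @ F.T)
    return J_pred + H.T @ np.linalg.inv(R) @ H

def pcrlb(J):
    """Per-component bounds: the diagonal elements of the inverse information matrix."""
    return np.diag(np.linalg.inv(J))
```

As a sanity check, with noise-free identity dynamics (Q = 0, F = I) the recursion simply accumulates measurement information.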

5. Simulation Results

Simulations and performance comparisons are presented in this section to evaluate the effectiveness of the proposed methods. Two scenarios with relatively small and large temporal biases are investigated to evaluate the influence of temporal bias on estimation performance. The root mean square errors (RMSEs) of the spatiotemporal biases and target states and the normalized estimation error squared (NEES) are used to illustrate the performance of the proposed methods. Also, the PCRLB is given to quantify the best achievable accuracy. For comparison, the simulation results of the standard bias and state estimation (S-BSE) method [18] that fails to consider the temporal biases are also provided.

5.1 Simulation Parameters

Consider a single target tracking problem with two asynchronous sensors located at the two-dimensional Cartesian coordinates (0km, 0km) and (50km, 0km), respectively. The detection probability of sensors is assumed to be unity, and the measurement noise covariance of sensor s is given by

$$R_s(k)=\mathrm{diag}\big[(10\,\mathrm{m})^2,\ (0.01\,\mathrm{rad})^2\big],\qquad s=1,2.$$

The two sensors work asynchronously and start reporting measurements at 0 s and 6 s, respectively, with sensor 1 chosen as the reference sensor. To illustrate the capability of the proposed methods to handle measurements with varying sampling periods, the sampling periods of sensor 1 are cyclically selected from 5 s, 4 s and 3 s in turn, and the sampling periods of sensor 2 are cyclically selected from 2 s and 1 s in turn. In the experiment, sensor 1 reports 400 measurements and sensor 2 reports 1065 measurements over the same time duration. Note that the proposed methods impose no requirements on the sampling periods or initial sampling times; only time stamps with unknown delays are used. Without loss of generality, sensor 1 is assumed to be free of spatial bias, i.e., $\Delta r_1=0$ m and $\Delta\theta_1=0$ rad, while sensor 2 has range bias $\Delta r_2=30$ m and azimuth bias $\Delta\theta_2=0.02$ rad.
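For illustration, the cyclically varying sampling instants of the two sensors can be generated with a small helper (the name `sample_times` is ours, not from the paper):

```python
import numpy as np

def sample_times(t0, periods, n):
    """First n measurement times of a sensor whose sampling periods are
    drawn cyclically from `periods` (e.g. [5, 4, 3] s for sensor 1)."""
    t = [float(t0)]
    for k in range(n - 1):
        t.append(t[-1] + periods[k % len(periods)])
    return np.array(t)
```

For example, `sample_times(0, [5, 4, 3], 4)` gives 0, 5, 9, 12 s for sensor 1, and `sample_times(6, [2, 1], 4)` gives 6, 8, 9, 11 s for sensor 2.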

Figure 2 RMSE of temporal bias estimates of the BP-SBDF and SP-SBDF methods in Scenario I.

Figure 3 RMSE of range bias estimates of the BP-SBDF, SP-SBDF and S-BSE methods in Scenario I.

Figure 4 RMSE of azimuth bias estimates of the BP-SBDF, SP-SBDF and S-BSE methods in Scenario I.

Figure 5 RMSE of position estimates of the BP-SBDF, SP-SBDF and S-BSE methods in Scenario I.

Figure 6 RMSE of velocity estimates of the BP-SBDF, SP-SBDF and S-BSE methods in Scenario I.

Figure 7 Consistency test of the proposed methods in Scenario I.

Two scenarios with relatively small and large temporal biases are investigated. In the scenario with small temporal bias, denoted Scenario I, the time delays of sensor 1 and sensor 2 are $\Delta\tau_1=1.5$ s and $\Delta\tau_2=1$ s, respectively. In Scenario II, the time delays are $\Delta\tau_1=5$ s and $\Delta\tau_2=2$ s. The temporal bias is therefore $\Delta t_{2,1}=\Delta\tau_1-\Delta\tau_2=0.5$ s in Scenario I and $\Delta t_{2,1}=\Delta\tau_1-\Delta\tau_2=3$ s in Scenario II. The target trajectory evolves with the NCV model, starting at position (3 km, 5 km) with an initial heading of 53.13 deg and an initial speed of 15 m/s. The process noise is assumed to be zero-mean Gaussian white noise with standard deviation 0.001 m/s². Simulations are performed with 1000 Monte Carlo runs.

As discussed in Section 3, the BP-SBDF method updates estimates at the same rate as the sampling rate of sensor 1, thus producing 400 estimation results. The SP-SBDF method outputs 1465 estimates, since the state is updated once a measurement is received, regardless of whether it comes from sensor 1 or sensor 2. To conduct an objective performance comparison, the estimation results at the measurement times of sensor 1 are considered.

5.2 The Scenario with Relatively Small Temporal Bias

The RMSEs of the spatiotemporal bias and target state estimates are plotted in Figures 2–6. The PCRLB is provided to quantify the theoretically achievable performance in this scenario. Additionally, the time-averaged RMSEs of the proposed methods are listed in Table 1 for comparison. For fairness of comparison, two average running times are provided to compare the complexity of the proposed methods: one is the average running time required to handle the overall measurements of the two sensors, and the other is the average running time required to perform a single filtering process; both are also listed in Table 1.

Table 1 Performance comparison in the scenario with relatively small temporal bias.

Method    Temporal Bias (s)  Range Bias (m)  Azimuth Bias (×10⁻⁴ rad)  Position (m)  Velocity (m/s)  Overall Time (s)  Single Time (×10⁻⁴ s)
BP-SBDF   0.1557             2.2431          1.7348                    2.9410        0.0203          0.2177            5.4425
SP-SBDF   0.1502             2.1339          1.7163                    2.8155        0.0183          0.4981            3.4000
S-BSE     –                  6.7461          3.7369                    4.0488        0.0226          –                 –
PCRLB     0.1108             1.7534          1.2166                    2.3393        0.0134          –                 –

Figure 8 RMSE of temporal bias estimates of the BP-SBDF and SP-SBDF methods in Scenario II.

Figure 9 RMSE of range bias estimates of the BP-SBDF, SP-SBDF and S-BSE methods in Scenario II.

Figure 10 RMSE of azimuth bias estimates of the BP-SBDF, SP-SBDF and S-BSE methods in Scenario II.

Figure 11 RMSE of position estimates of the BP-SBDF, SP-SBDF and S-BSE methods in Scenario II.

Figure 12 RMSE of velocity estimates of the BP-SBDF, SP-SBDF and S-BSE methods in Scenario II.

Figure 13 Consistency test of the proposed methods in Scenario II.

From Figures 3–6, it can be seen that the spatial bias and target state RMSEs of the S-BSE method are larger than those of the BP-SBDF and SP-SBDF methods. As shown in Table 1, the improvements in the time-averaged RMSEs of the range bias, azimuth bias, position and velocity of the BP-SBDF method over the S-BSE method are about 4.5030 m, 2.0021×10⁻⁴ rad, 1.1078 m and 0.0023 m/s, respectively. Accordingly, the improvements of the SP-SBDF method are about 4.6122 m, 2.0206×10⁻⁴ rad, 1.2333 m and 0.0043 m/s, respectively. The S-BSE method does not consider or compensate for the temporal bias between sensors, which leads to estimation errors higher than those of the proposed methods. In contrast, the proposed BP-SBDF and SP-SBDF methods properly compensate for the temporal bias while providing accurate spatiotemporal bias and target state estimates, both of which reach the steady state rapidly, as shown in Figures 2–6 and Table 1. Note that deviations still exist between the RMSEs and the theoretical lower bounds; the main reason may lie in the high nonlinearity of the measurement equations.

Additionally, we can see that the SP-SBDF method performs slightly better than the BP-SBDF method. The BP-SBDF method estimates the state once using all measurements collected in a fusion period, based on the prior state updated in the previous fusion period. The SP-SBDF method, in contrast, updates the state as soon as each measurement is received, using the state updated by the previous measurement as prior information, which is more accurate than the state estimate from the previous fusion period. This results in the slight superiority of the SP-SBDF method over the BP-SBDF method in estimation accuracy.

As shown in Table 1, the SP-SBDF method requires more time than the BP-SBDF method to handle all measurements from the two sensors. This can be explained by the different measurement processing schemes of the two methods. The BP-SBDF method invokes the UKF only at the measurement times of sensor 1 to generate the augmented state estimates, so only 400 calls of the UKF are required. In contrast, the SP-SBDF method calls the UKF 1465 times when handling the same measurements, which results in a longer overall running time. If we focus on a single filtering process, however, the running time required by the SP-SBDF method is less than that of the BP-SBDF method, because the measurement dimension in the BP-SBDF method is higher than that in the SP-SBDF method.

The consistency of the methods is examined based on the evaluation of the NEES, as shown in Figure 7.

Here, we use the two-sided 99% probability region. The results in Figure 7 show the inconsistency of the S-BSE method, since its NEES values mostly fall outside the 99% region. The proposed methods are consistent, since their NEES values fall within the probability region. Therefore, the proposed methods can fuse the multisensor measurements to provide accurate and consistent state estimation while compensating for the spatiotemporal biases.
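The NEES underlying this test is computed per run and per time step as the squared estimation error normalized by the filter covariance; a minimal sketch (our own helper, assuming the true state is available in simulation):

```python
import numpy as np

def nees(x_true, x_est, P):
    """Normalized estimation error squared:
    (x - x_hat)^T P^{-1} (x - x_hat)."""
    e = np.asarray(x_true) - np.asarray(x_est)
    return float(e @ np.linalg.solve(P, e))
```

In practice the NEES is averaged over the Monte Carlo runs and compared against the two-sided 99% probability region of the chi-square distribution whose degrees of freedom equal the number of runs times the state dimension.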

5.3 The Scenario with Relatively Large Temporal Bias

This scenario aims to evaluate the effects on estimation performance when the temporal bias increases. Additionally, since the time delays of the sensors differ significantly, the measurements may be reported in the wrong order according to their time stamps. This scenario is therefore also used to evaluate whether the proposed methods can still perform well when the measurements arrive out of order. The RMSEs of the spatiotemporal bias and state estimates and the NEES of the tracking filters are shown in Figures 8–13. The time-averaged RMSEs and the average running times of the proposed methods are listed in Table 2 for comparison.

Table 2 Performance comparison in the scenario with relatively large temporal bias.

Method    Temporal Bias (s)  Range Bias (m)  Azimuth Bias (×10⁻⁴ rad)  Position (m)  Velocity (m/s)  Overall Time (s)  Single Time (×10⁻⁴ s)
BP-SBDF   0.2223             3.1206          2.0088                    2.9882        0.0204          0.2245            5.6125
SP-SBDF   0.1680             2.3662          1.7675                    2.8165        0.0183          0.5054            3.4498
S-BSE     –                  26.9249         14.2627                   12.4416       0.0320          –                 –
PCRLB     0.1258             1.9367          1.2223                    2.3867        0.0141          –                 –

From Figures 9–13, it can be seen that the improper processing of the temporal bias degrades the performance of the S-BSE method, resulting in estimation errors much higher than the PCRLB and NEES values outside the 99% probability region. Additionally, as can be seen in Figures 9 and 10, the estimation errors of the range bias of the S-BSE method become larger over time, while those of the azimuth bias become smaller. This is because the target in this scenario is moving away from the sensors: the impact of the temporal bias on the range bias estimation grows with the target range, while the impact on the azimuth bias shrinks.

In contrast, the proposed methods provide accurate and consistent spatiotemporal bias and target state estimation simultaneously. As shown in Table 2, the improvements of the BP-SBDF and SP-SBDF methods over the S-BSE method are about 23.8043–24.5587 m in range bias RMSE, 12.2539×10⁻⁴–12.4952×10⁻⁴ rad in azimuth bias RMSE, 9.4534–9.6251 m in position RMSE, and 0.0116–0.0137 m/s in velocity RMSE, respectively. Since the temporal bias is properly compensated by the proposed methods, the spatiotemporal bias and target state estimation errors remain small, which also shows that the proposed methods perform well even when the measurements are reported in the wrong order. These results confirm the necessity of considering the temporal bias between sensors to perform correct data fusion. Additionally, the SP-SBDF method still performs slightly better but requires slightly more computation than the BP-SBDF method when handling all measurements from the two sensors (0.5054 s versus 0.2245 s). Also, the running time required by the SP-SBDF method in a single filtering process remains less than that of the BP-SBDF method (3.4498×10⁻⁴ s versus 5.6125×10⁻⁴ s). These results agree with those in Section 5.2, which means the proposed methods perform well irrespective of whether the temporal bias is small or large.

6. Conclusions

In this paper, two spatiotemporal bias compensation methods were proposed to compensate for the spatiotemporal biases and fuse the multisensor measurements to produce accurate target state estimates. The general case where sensors have different and varying sampling periods was considered. The augmented state vector consists of the target states and the spatiotemporal biases of the multiple sensors. The measurement equations for the batch processing and sequential processing schemes were formulated as functions of both target states and spatiotemporal biases according to their relationship, which enables simultaneous spatiotemporal bias estimation and data fusion. The UKF was employed to handle the nonlinearity of the measurements and estimate the spatiotemporal biases and target states simultaneously. Simulation results demonstrated that the proposed methods can provide accurate spatiotemporal bias and target state estimation simultaneously. Due to the high nonlinearities in the measurement equations, the performance of the proposed methods does not reach the PCRLB; further improving the performance of the spatiotemporal bias estimation is a topic of future efforts.


Data Availability Statement
Data will be made available on request.

Funding
This work was supported by the National Natural Science Foundation of China under Grant 61671181.

Conflicts of Interest
The authors declare no conflicts of interest.

Ethical Approval and Consent to Participate
Not applicable.

References
  1. Bar-Shalom, Y., Willett, P. K., & Tian, X. (2011). Tracking and data fusion (Vol. 11). Storrs, CT, USA: YBS Publishing.
    [Google Scholar]
  2. Bar-Shalom, Y., Li, X. R., & Kirubarajan, T. (2004). Estimation with applications to tracking and navigation: theory algorithms and software. John Wiley & Sons.
    [Google Scholar]
  3. Ge, Q., Shao, T., Yang, Q., Shen, X., & Wen, C. (2016). Multisensor nonlinear fusion methods based on adaptive ensemble fifth-degree iterated cubature information filter for biomechatronics. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 46(7), 912-925.
    [CrossRef]   [Google Scholar]
  4. Khaleghi, B., Khamis, A., Karray, F. O., & Razavi, S. N. (2013). Multisensor data fusion: A review of the state-of-the-art. Information Fusion, 14(1), 28-44.
    [CrossRef]   [Google Scholar]
  5. Chen, B., Hu, G., Ho, D. W., Zhang, W. A., & Yu, L. (2016). Distributed robust fusion estimation with application to state monitoring systems. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(11), 2994-3005.
    [CrossRef]   [Google Scholar]
  6. Yu, D., Xia, Y., Li, L., Xing, Z., & Zhu, C. (2019). Distributed covariance intersection fusion estimation with delayed measurements and unknown inputs. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 51(8), 5165-5173.
    [CrossRef]   [Google Scholar]
  7. Lin, X., Pan, X., Sun, W., Liu, L., & Chen, X. (2022). Multi-scale asynchronous fusion algorithm for multi-sensor integrated navigation system. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 236(9), 1709-1723.
    [CrossRef]   [Google Scholar]
  8. Lin, H., & Sun, S. (2017). Distributed fusion estimator for multisensor multirate systems with correlated noises. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 48(7), 1131-1139.
    [CrossRef]   [Google Scholar]
  9. Burke, J. (1966). The SAGE real quality control fraction and its interface with BUIC II/BUIC III. MITRE Corporation, Technical report 308.
    [Google Scholar]
  10. Leung, H., Blanchette, M. & Harrison, C. (1994). A least squares fusion of multiple radar data. In Proceedings of Radar 94, Paris, France, (pp. 498–508.)
    [Google Scholar]
  11. Zhou, Y., Leung, H., & Blanchette, M. (1999). Sensor alignment with earth-centered earth-fixed (ECEF) coordinate system. IEEE Transactions on Aerospace and Electronic systems, 35(2), 410-418.
    [CrossRef]   [Google Scholar]
  12. Zheng, Z. W., & Zhu, Y. S. (2004). New least squares registration algorithm for data fusion. IEEE Transactions on Aerospace and Electronic Systems, 40(4), 1410-1416.
    [CrossRef]   [Google Scholar]
  13. Zhou, Y., Leung, H., & Yip, P. C. (1997). An exact maximum likelihood registration algorithm for data fusion. IEEE Transactions on Signal Processing, 45(6), 1560-1573.
    [CrossRef]   [Google Scholar]
  14. Okello, N., & Ristic, B. (2003). Maximum likelihood registration for multiple dissimilar sensors. IEEE Transactions on Aerospace and Electronic Systems, 39(3), 1074-1083.
    [CrossRef]   [Google Scholar]
  15. Rafati, A., Moshiri, B., & Rezaei, J. (2007, July). A new algorithm for general asynchronous sensor bias estimation in multisensor-multitarget systems. In 2007 10th International Conference on Information Fusion (pp. 1-8). IEEE.
    [CrossRef]   [Google Scholar]
  16. Helmick, R. E., & Rice, T. R. (1993). Removal of alignment errors in an integrated system of two 3-D sensors. IEEE Transactions on Aerospace and Electronic systems, 29(4), 1333-1343.
    [CrossRef]   [Google Scholar]
  17. Nabaa, N., & Bishop, R. H. (1999). Solution to a multisensor tracking problem with sensor registration errors. IEEE Transactions on Aerospace and Electronic systems, 35(1), 354-363.
    [CrossRef]   [Google Scholar]
  18. Song, Q., He, Y., & Yang, J. (2007). An augmented state target tracking algorithm with systematic errors based on the unscented Kalman filter. Journal of Projectiles, Rockets, Missiles and Guidance, no. 3, (pp. 311–316).
    [Google Scholar]
  19. Liu, J., Zuo, Y., & Xue, A. (2013, July). An ASEKF algorithm for 2D and 3D radar registration. In Proceedings of the 32nd Chinese Control Conference (pp. 4758-4761). IEEE. https://ieeexplore.ieee.org/abstract/document/6640261
    [Google Scholar]
  20. Furgale, P., Rehder, J., & Siegwart, R. (2013, November). Unified temporal and spatial calibration for multi-sensor systems. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1280-1286). IEEE.
    [CrossRef]   [Google Scholar]
  21. Rehder, J., Beardsley, P., Siegwart, R., & Furgale, P. (2014, September). Spatio-temporal laser to visual/inertial calibration with applications to hand-held, large scale scanning. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 459-465). IEEE.
    [CrossRef]   [Google Scholar]
  22. Rehder, J., Siegwart, R., & Furgale, P. (2016). A general approach to spatiotemporal calibration in multisensor systems. IEEE Transactions on Robotics, 32(2), 383-398.
    [CrossRef]   [Google Scholar]
  23. Nocedal, J., & Wright, S. J. (Eds.). (1999). Numerical optimization. New York, NY: Springer New York.
    [Google Scholar]
  24. You, H., Hongwei, Z., & Xiaoming, T. (2013). Joint systematic error estimation algorithm for radar and automatic dependent surveillance broadcasting. IET Radar, Sonar & Navigation, 7(4), 361-370.
    [CrossRef]   [Google Scholar]
  25. Li, S., Cheng, Y., Brown, D., Tharmarasa, R., Zhou, G., & Kirubarajan, T. (2019). Comprehensive time-offset estimation for multisensor target tracking. IEEE Transactions on Aerospace and Electronic Systems, 56(3), 2351-2373.
    [CrossRef]   [Google Scholar]
  26. Li, M., & Mourikis, A. I. (2013, May). 3-D motion estimation and online temporal calibration for camera-IMU systems. In 2013 IEEE International Conference on Robotics and Automation (pp. 5709-5716). IEEE.
    [CrossRef]   [Google Scholar]
  27. Li, W., & Leung, H. (2004). Simultaneous registration and fusion of multiple dissimilar sensors for cooperative driving. IEEE Transactions on Intelligent Transportation Systems, 5(2), 84-98.
    [CrossRef]   [Google Scholar]
  28. Li, W., Leung, H., & Zhou, Y. (2004). Space-time registration of radar and ESM using unscented Kalman filter. IEEE Transactions on Aerospace and Electronic Systems, 40(3), 824-836.
    [CrossRef]   [Google Scholar]
  29. Huang, D., & Leung, H. (2005). An expectation-maximization-based interacting multiple model approach for cooperative driving systems. IEEE Transactions on Intelligent Transportation Systems, 6(2), 206-228.
    [CrossRef]   [Google Scholar]
  30. Bu, S., & Zhou, G.(2016). Spatiotemporal registration for multi-sensor fusion systems. In Proceedings of the International Conference on Artificial Intelligence and Computer Science, Guilin, China (pp. 333–339).
    [Google Scholar]
  31. Bu, S., Zhou, C., & Zhou, G. (2019). Simultaneous spatiotemporal bias and state estimation for asynchronous multi‐sensor system. The Journal of Engineering, 2019(19), 5824-5828.
    [CrossRef]   [Google Scholar]
  32. Tichavsky, P., Muravchik, C. H., & Nehorai, A. (1998). Posterior Cramér-Rao bounds for discrete-time nonlinear filtering. IEEE Transactions on Signal Processing, 46(5), 1386-1396.
    [CrossRef]   [Google Scholar]
  33. Zhou, G., Yu, C., Cui, N., & Quan, T. (2012). A tracking filter in spherical coordinates enhanced by de-noising of converted Doppler measurements. Chinese Journal of Aeronautics, 25(5), 757-765.
    [CrossRef]   [Google Scholar]
  34. Blair, W. D. (2008, June). Design of nearly constant velocity track filters for tracking maneuvering targets. In 2008 11th International Conference on Information Fusion (pp. 1-7). IEEE.
    [Google Scholar]
  35. You, H., Jianjuan, X., & Xin, G. (2016). Radar data processing with applications. John Wiley & Sons.
    [Google Scholar]
  36. Angrisani, L., D'Apuzzo, M., & Moriello, R. S. L. (2005, May). The unscented transform: a powerful tool for measurement uncertainty evaluation. In Proceedings of the 2005 IEEE International Workshop on Advanced Methods for Uncertainty Estimation in Measurement, 2005. (pp. 27-32). IEEE.
    [CrossRef]   [Google Scholar]
  37. Challa, S. (2011). Fundamentals of object tracking. Cambridge University Press.
    [Google Scholar]
  38. Lerro, D., & Bar-Shalom, Y. (1993). Tracking with debiased consistent converted measurements versus EKF. IEEE Transactions on Aerospace and Electronic systems, 29(3), 1015-1022.
    [CrossRef]   [Google Scholar]
  39. Longbin, M., Xiaoquan, S., Yiyu, Z., Kang, S. Z., & Bar-Shalom, Y. (1998). Unbiased converted measurements for tracking. IEEE Transactions on Aerospace and Electronic Systems, 34(3), 1023-1027.
    [CrossRef]   [Google Scholar]
  40. Duan, Z., Han, C., & Li, X. R. (2004). Comments on "Unbiased converted measurements for tracking". IEEE Transactions on Aerospace and Electronic Systems, 40(4), 1374.
    [CrossRef]   [Google Scholar]
  41. Van Trees, H. L. (2004). Detection, estimation, and modulation theory, part I: detection, estimation, and linear modulation theory. John Wiley & Sons.
    [Google Scholar]

Cite This Article
APA Style
Zhou, G., Bu, S., & Kirubarajan, T. (2024). Simultaneous Spatiotemporal Bias Compensation and Data Fusion for Asynchronous Multisensor Systems. Chinese Journal of Information Fusion, 1(1), 16–32. https://doi.org/10.62762/CJIF.2024.361881


Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
CC BY Copyright © 2024 by the Author(s). Published by Institute of Central Computation and Knowledge. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
Chinese Journal of Information Fusion

ISSN: 2998-3371 (Online) | ISSN: 2998-3363 (Print)

Email: [email protected]

All published articles are preserved permanently in Portico:
https://www.portico.org/publishers/icck/