Article

Optimal Linear Filter Based on Feedback Structure for Sensing Network with Correlated Noises and Data Packet Dropout

1 School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing 210018, China
2 North Information Control Research Academy Group Co., Ltd., Nanjing 211153, China
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(12), 5673; https://doi.org/10.3390/s23125673
Submission received: 10 April 2023 / Revised: 12 June 2023 / Accepted: 15 June 2023 / Published: 17 June 2023
(This article belongs to the Special Issue Intelligent Sensing, Control and Optimization of Networks)

Abstract

This paper addresses correlated noise and packet dropout in information fusion for distributed sensing networks. To handle the correlation of noises in sensor-network information fusion, a matrix-weighted fusion method with a feedback structure is proposed that accounts for the interrelationship between multi-sensor measurement noise and estimation noise and achieves optimal estimation in the linear minimum-variance sense. Building on this, a predictor with a feedback structure is used to compensate the current state for packet dropout occurring during multi-sensor information fusion, which reduces the covariance of the fusion results. Simulation results show that the algorithm solves the noise-correlation and packet-dropout problems of information fusion in sensor networks and that the feedback effectively reduces the fusion covariance.

1. Introduction

Unmanned aerial vehicles (UAVs) have received considerable attention in recent years, and their use has grown rapidly in both military [1,2,3] and civilian applications. However, as the battlefield environment faced by UAVs becomes increasingly complex, an individual UAV is limited in reconnaissance coverage and strike capability when performing reconnaissance or attack missions, and it is increasingly difficult for a single UAV to complete such missions alone; the trend has therefore been toward multi-UAV cooperative operations [4]. In cooperative operations, onboard sensors collect and acquire large amounts of battlefield situational data, which are fused to support cooperative search, target assignment, coordinated maneuvers, and strikes against enemy targets with various airborne weapons. As the size and number of UAV fleets increase, unreliable communication links between sensors, random time delays, and packet dropout (or uncertain observations) become common in the data transmission of real networked systems. Moreover, accurate or complete information about the system model is usually unavailable in practice, so uncertainty is inevitable in UAV navigation and positioning. Navigation and positioning technology based on multi-sensor data fusion has therefore become a research hot spot [5].
For multi-sensor information fusion, the Kalman filter is the most widespread and well-known technique. It was first applied in the early 1960s to aerospace and military systems such as guidance, navigation, and control, and is now used in an extremely wide range of systems across almost all areas of engineering. Kalman filters can be divided into two main categories. The first is the centralized Kalman filter (CKF) [6], in which all sensor measurements are sent to a central site for processing. Its advantage is minimal information loss; however, the central filter can become computationally overloaded as the volume of data grows. Consequently, when serious data failures occur, the entire centralized filter may become unreliable or suffer poor accuracy and stability. The second category is the distributed Kalman filter (DKF), in which local estimators from all sensors are combined into a globally optimal or suboptimal state estimator according to certain information fusion criteria. Its advantage is that it no longer requires a fusion center with a large amount of memory. Accordingly, various distributed and parallel versions and applications of the Kalman filter have been reported over recent decades, such as [7,8,9], to improve accuracy. Hashmipour et al. [7] described a parallel Kalman filtering structure for multi-sensor networks amenable to parallel processing. Carlson [8] presented the well-known federated square-root filter, which assumes the initial estimation error cross-covariance matrices among the local subsystems to be zero, i.e., that the local estimation errors are uncorrelated at the initial time, which does not hold in general. Ogle et al. [9], in turn, described a multi-sensor optimal information fusion estimator in the maximum-likelihood sense under the assumption of normal distributions.
However, all of the above works assume that the process noise and measurement noise are uncorrelated at any given moment. In actual sensor applications, the operating environment of the sensor network, as well as the sensors themselves, always introduce correlation between measurement noise and estimation noise; as a result, cross-correlated noise arises in many engineering applications [10]. For example, correlated noise appears when a continuous-time system is discretized [11]; singular systems can be transformed into normal systems with correlated noise [12]; networked systems with random transmission delays [13] or packet dropout [14] can be transformed into systems with finite-step correlated noise; and signals measured or transmitted in a common noise environment usually carry correlated noise. However, the problem of reducing the impact of excessive covariance on the sensor system while achieving optimal estimation remains unsolved. Since no matrix-weighted fusion method incorporating a feedback structure has been described in the previous literature, we consider whether such a method could resolve the cross-correlation between multi-sensor measurement noise and estimation noise while also reducing the covariance generated during sensor data fusion.
In addition, in UAV navigation and positioning, the UAV sensor network inevitably suffers packet dropout during transmission, on top of the correlated measurement and estimation noise. In this case, the traditional Kalman filter is no longer applicable. To date, a variety of filtering approaches have been developed for systems with random time delay and multiple packet dropouts, including optimal full-order and reduced-order filters in the linear minimum-variance sense computed using the completion level method [15]. An optimal linear estimator for a unified model with random one-step sensor delay, multiple packet dropouts, and uncertain observations was described in [16], and adaptive filtering for a similar model with an optimal linear estimator was described in [17]. Moreover, with the development of transmission through UAV sensor networks, the joint consideration of network packet dropout and correlated noise [18,19,20] has become an active research area. A suboptimal Kalman-type filter was designed in [18] for systems with multiple packet dropouts and finite-step autocorrelated measurement noise. For the same system model, an optimal linear filter in the minimum mean-square error sense with better accuracy than [18] was proposed in [19]. Filters, predictors, and smoothers for systems with packet dropout and correlated noise were also described in [20]. However, the impact of packet dropout on the fusion results has not been deeply analyzed under an optimal fusion architecture with feedback.
Based on the above discussion, this paper focuses on noise correlation in information fusion for distributed UAV sensor networks and on packet-dropout compensation during transmission. Unlike [18,19,20], we use the predictive estimate as an optimal compensator. The proposed predictive-compensation estimator is more accurate than an estimator that simply reuses the latest previously received measurement. Moreover, we build on this a filter with a feedback structure, which reduces the covariance of each local tracking error while maintaining the optimality of the trajectory fusion. In summary, the main contributions of this paper are as follows:
(1)
To address the unresolved issue in [9,11,13] of the impact of excessive covariance on sensor systems, this paper proposes a matrix-weighted fusion method with a feedback structure. The method not only handles the cross-correlation of measurement and estimation noise during multi-sensor data fusion, but also achieves optimal estimation in the linear minimum-variance sense.
(2)
The impact of packet dropout on fusion results during sensor data transmission was not analyzed in depth in [14,15,16]. Building on the matrix-weighted fusion method with a feedback structure proposed in this paper, we propose an estimation and compensation method with a feedback structure that reduces the covariance generated during multi-sensor information fusion.
(3)
Finally, a Kalman smoothing algorithm is added to refine the Kalman filter fusion results through forward and backward filtering, yielding better accuracy.
The rest of this paper is organized as follows: Section 2 formulates the studied problem. Section 3 designs the optimal linear estimators with feedback, including the filter and the predictor. Section 4 provides the optimal information fusion criterion in the linear minimum-variance sense. Section 5 gives the Kalman optimal smoother. Section 6 reports a tracking example. Section 7 concludes the paper.

2. Problem Formulation

In this paper, we consider the following discrete-time stochastic system with correlated noise and multiple packet dropouts for UAV sensor networks:
x_i(t+1) = \Phi x_i(t) + \Gamma w_i(t) \quad (1)
z_i(t) = H x_i(t) + v_i(t) \quad (2)
y_i(t) = \xi(t) z_i(t) + (1 - \xi(t)) \hat{z}_i(t|t-1) \quad (3)
where $x_i(t) \in \mathbb{R}^n$, $i = 1, 2, \dots, l$, is the state, $z_i(t) \in \mathbb{R}^{m_i}$ is the measurement to be transmitted to the data processing center through the UAV sensor network, and $y_i(t) \in \mathbb{R}^{m_i}$ is the measurement received by the data processing center. $\Phi$, $\Gamma$, $H$ are time-varying matrices with compatible dimensions. A sequence of independent Bernoulli-distributed variables $\{\xi(t) \in \mathbb{R}\}$ with probability $\mathrm{Prob}\{\xi(t) = 1\} = \beta(t)$, $0 < \beta(t) \le 1$, describes the packet dropout and is uncorrelated with the other random variables. From (3), the measured value $z_i(t)$ of the sensor is received when $\xi(t) = 1$ and is lost when $\xi(t) = 0$. If the measurement $z_i(t)$ is lost, its predictor $\hat{z}_i(t|t-1)$, built from previously received information, is used as a compensator.
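The arrival model (3) can be sketched in a few lines of Python; the function name and the use of NumPy's random generator are our own illustrative choices, not from the paper:

```python
import numpy as np

def received_measurement(z, z_pred, beta, rng):
    """Bernoulli packet-arrival model: with probability beta the true
    measurement z arrives (xi = 1); otherwise its one-step prediction
    z_pred is used as the compensator (xi = 0)."""
    xi = rng.random() < beta       # xi(t) ~ Bernoulli(beta)
    return (z if xi else z_pred), xi
```

With beta = 1 every packet arrives and the model reduces to y_i(t) = z_i(t).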
Assumption 1.
$w_i(t)$ and $v_i(t)$, i = 1, 2, …, l, are correlated white noises with zero mean and
E \left\{ \begin{bmatrix} w_i(t) \\ v_i(t) \end{bmatrix} \begin{bmatrix} w_i^T(k) & v_i^T(k) \end{bmatrix} \right\} = \begin{bmatrix} Q_i(t) & S_i(t) \\ S_i^T(t) & R_i(t) \end{bmatrix} \delta_{tk}, \qquad E[v_i(t) v_j^T(k)] = S_{ij}(t) \delta_{tk}, \quad i \ne j
where E denotes the mathematical expectation, the superscript T indicates the transpose, and $\delta_{tk}$ is the Kronecker delta function.
Assumption 2.
The initial state $x(0)$ is independent of $w_i(t)$ and $v_i(t)$, i = 1, 2, …, l, and
E[x(0)] = \mu_0, \qquad E[(x(0) - \mu_0)(x(0) - \mu_0)^T] = P_0
Assumption 3.
The optimal matrix weights $\bar{A}_i(t)$, i = 1, 2, …, l, are chosen to minimize the trace of the fused filtering error variance, with $\hat{x}(t|t) = \bar{A}_1 \hat{x}_1(t|t) + \bar{A}_2 \hat{x}_2(t|t) + \cdots + \bar{A}_l \hat{x}_l(t|t)$, where $\bar{A}_i(t)$ are the weights and $\hat{x}_i(t|t)$, i = 1, 2, …, l, are the local filters.
Assumption 4.
When there is feedback, the fusion center broadcasts its latest estimate to the local sensors. Thus, for all i, the local state $x_i(t)$ is replaced by the multi-sensor fused state $x(t)$, and the state equation (1) can be rewritten as
x_i(t+1) = \Phi x(t) + \Gamma w_i(t)
Given a UAV sensor network, the communication topology among sensors is described by an undirected graph $G = (V, E, A)$, which consists of a node set $V = \{1, 2, \dots, n\}$ and an edge set $E \subseteq V \times V$. For an undirected graph $G$, $(i, j) \in E \Leftrightarrow (j, i) \in E$, that is, nodes i and j can sense each other. Graph $G$ characterizes the communication topology among sensors and is connected if there exists a path involving all nodes.

3. Optimal Matrix Weight Fusion Kalman Filter with Feedback Structure

Under Assumptions 1 and 2, for the i-th local sensor subsystem of Systems (1)–(3), a local optimal Kalman filter with multiple sensors is proposed, as described below:
\hat{x}_i(t|t) = \hat{x}_i(t|t-1) + K_{x_i}(t|t)\, \varepsilon_i(t)
\hat{x}_i(t+1|t) = \Phi \hat{x}_i(t|t) + \Gamma \hat{w}_i(t|t)
The innovation $\varepsilon_i(t)$ and its covariance matrix $Q_{\varepsilon_i}(t)$ can be obtained as
\varepsilon_i(t) = z_i(t) - H \hat{x}_i(t|t-1) - \hat{v}_i(t|t-1)
Q_{\varepsilon_i}(t) = H P_{x_i}(t|t-1) H^T + P_{v_i}(t|t-1)
where the gain matrix $K_{x_i}(t|t)$ can be obtained by
K_{x_i}(t|t) = P_{x_i}(t|t-1) H^T Q_{\varepsilon_i}^{-1}(t)
The filtering error covariance matrix $P_{x_i}(t|t)$ and the prediction error covariance matrix $P_{x_i}(t+1|t)$ are given by
P_{x_i}(t|t) = P_{x_i}(t|t-1) - K_{x_i}(t|t) Q_{\varepsilon_i}(t) K_{x_i}^T(t|t)
P_{x_i}(t+1|t) = \Phi P_{x_i}(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T
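For the special case of uncorrelated noises (so the noise estimates $\hat{w}_i$, $\hat{v}_i$ vanish), one cycle of these recursions can be sketched as follows; the function name and argument layout are illustrative, not the paper's:

```python
import numpy as np

def kf_step(x_pred, P_pred, z, Phi, Gamma, H, Q, R):
    """One measurement update plus one-step prediction of the local
    Kalman filter above, with the noise filters w_hat, v_hat set to zero."""
    innov = z - H @ x_pred                        # innovation eps_i(t)
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # filter gain
    x_filt = x_pred + K @ innov                   # filtered state
    P_filt = P_pred - K @ S @ K.T                 # filtering covariance
    x_next = Phi @ x_filt                         # one-step prediction
    P_next = Phi @ P_filt @ Phi.T + Gamma @ Q @ Gamma.T
    return x_filt, P_filt, x_next, P_next
```

In the correlated-noise case of the paper, the noise filters from Section 4 would be added to the state prediction and the innovation.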
To obtain fusion results with higher accuracy, the local filtered values obtained above are fused with matrix weights.
Lemma 1.
Under Assumptions 1 and 2, the optimal fusion for distributed Kalman filters can be obtained in the unified form [8]
\hat{x} = \bar{A}_1 \hat{x}_1 + \bar{A}_2 \hat{x}_2 + \cdots + \bar{A}_l \hat{x}_l
where $\hat{x}_i$, i = 1, 2, …, l, are unbiased estimators of the n-dimensional stochastic vector x. In addition, the optimal matrix weights $\bar{A}_i$, i = 1, 2, …, l, are calculated by
\bar{A} = \Sigma^{-1} e (e^T \Sigma^{-1} e)^{-1}
where $\Sigma = (P_{ij})$, i, j = 1, 2, …, l, is an $nl \times nl$ symmetric positive definite matrix, and $\bar{A} = [\bar{A}_1, \bar{A}_2, \dots, \bar{A}_l]^T$ and $e = [I_n, \dots, I_n]^T$ are both $nl \times n$ matrices. The corresponding variance of the optimal information fusion estimator can be obtained by
P_x = (e^T \Sigma^{-1} e)^{-1}
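A minimal numerical sketch of this fusion rule (our own function name; Sigma is the stacked joint error covariance formed from the blocks $P_{ij}$):

```python
import numpy as np

def fuse(x_locals, Sigma):
    """Matrix-weighted fusion of l unbiased local estimates:
    x_hat = (e^T Sigma^-1 e)^-1 e^T Sigma^-1 [x_1; ...; x_l],
    P     = (e^T Sigma^-1 e)^-1,  with e = [I_n, ..., I_n]^T."""
    l, n = len(x_locals), x_locals[0].shape[0]
    e = np.tile(np.eye(n), (l, 1))               # nl x n stack of identities
    Si = np.linalg.inv(Sigma)
    P = np.linalg.inv(e.T @ Si @ e)              # fused error covariance
    x = P @ e.T @ Si @ np.concatenate(x_locals)  # fused estimate
    return x, P
```

For two uncorrelated scalar estimates with equal variance, this reduces to the plain average with half the variance.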
Lemma 2.
The local Kalman filtering error cross-covariance between the i-th and the j-th sensor subsystems has the following recursive form, from [11]:
P_{x_i x_j}(t+1|t+1) = [I_n - K_{x_i}(t+1|t+1) H] \{ \Phi P_{x_i x_j}(t|t) \Phi^T + E[\tilde{w}_i(t|t) \tilde{w}_j^T(t|t)] + \Phi E[\tilde{x}_i(t|t) \tilde{w}_j^T(t|t)] + E[\tilde{w}_i(t|t) \tilde{x}_j^T(t|t)] \Phi^T \} [I_n - K_{x_j}(t+1|t+1) H]^T + K_{x_i}(t+1|t+1) S_{ij}(t+1) K_{x_j}^T(t+1|t+1)
where $P_{x_i x_j}$, i, j = 1, 2, …, l, are the filtering error cross-covariance matrices between the i-th and j-th sensor subsystems, $K_{x_i}$ and $K_{x_j}$ are the filtering gain matrices, and the initial values are $P_{x_i x_j}(0|0) = P_0$.
Next, we will show that the inclusion of a feedback structure reduces the covariance of the multi-sensor in the transmission process when optimal estimates are obtained.
Theorem 1.
When feedback exists, under Assumption 4, the optimal estimate at the previous moment is taken as the prior value at the next moment; then
\hat{x}_i(t+1|t) = \Phi \hat{x}_i(t|t) + \Gamma \hat{w}_i(t|t) = \Phi \hat{x}(t|t) + \Gamma \hat{w}_i(t|t) = \Phi \hat{x}_f(t|t) + \Gamma \hat{w}_i(t|t)
P_{x_i}(t+1|t) = \Phi P_{x_i}(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T = \Phi P_x(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T = \Phi P_{x_f}(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T
where the subscript "f" denotes the corresponding quantities in the feedback case. $\hat{x}(t|t)$ and $P_x(t|t)$ can be obtained from Lemmas 1 and 2. In addition, the trajectory fusion formula with feedback is the same as that without feedback, and the estimation error covariance matrices satisfy $P_{x_i}(t+1|t+1) \ge P_{x_if}(t+1|t+1)$.
Proof. 
After adding feedback, the predicted value of each sensor is as follows:
\hat{x}_{if}(t+1|t) = \Phi \hat{x}_f(t|t) + \Gamma \hat{w}_i(t|t) = \hat{x}_f(t+1|t)
P_{x_if}(t+1|t) = \Phi P_{x_f}(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T = P_{x_f}(t+1|t)
P_{x_i}(t+1|t+1) = P_{x_i}(t+1|t) - K_{x_i}(t+1|t+1) Q_{\varepsilon_i}(t+1) K_{x_i}^T(t+1|t+1)
Local covariance after adding feedback:
P_{x_if}(t+1|t+1) = P_{x_if}(t+1|t) - K_{x_i}(t+1|t+1) Q_{\varepsilon_i}(t+1) K_{x_i}^T(t+1|t+1) = P_{x_f}(t+1|t) - K_{x_i}(t+1|t+1) Q_{\varepsilon_i}(t+1) K_{x_i}^T(t+1|t+1)
Taking the inverses of (20) and (21) and subtracting gives
P_{x_i}^{-1}(t+1|t+1) - P_{x_if}^{-1}(t+1|t+1) = P_{x_i}^{-1}(t+1|t) - P_{x_f}^{-1}(t+1|t) = P_{x_i}^{-1}(t+1|t) - P_x^{-1}(t+1|t) = [\Phi P_{x_i}(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T]^{-1} - [\Phi P_x(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T]^{-1}
P_x(t|t) = (e^T \Sigma^{-1} e)^{-1} = [(\Sigma^{-1/2} e)^T (\Sigma^{-1/2} e_i)]^T [(\Sigma^{-1/2} e)^T (\Sigma^{-1/2} e)]^{-1} [(\Sigma^{-1/2} e)^T (\Sigma^{-1/2} e_i)] \le (\Sigma^{-1/2} e_i)^T (\Sigma^{-1/2} e_i) = P_{x_i}(t|t)
so that
P_{x_i}^{-1}(t+1|t+1) \le P_{x_if}^{-1}(t+1|t+1)
P_{x_i}(t+1|t+1) \ge P_{x_if}(t+1|t+1)
Next
P_{x_if}^{-1}(t|t) \hat{x}_{if}(t|t) = P_{x_if}^{-1}(t|t) \hat{x}(t|t-1) + H^T P_{v_i}^{-1}(t) (z_i(t) - H \hat{x}(t|t-1)) = P_{x_if}^{-1}(t|t) \hat{x}(t|t-1) + H^T P_{v_i}^{-1}(t) z_i(t) - H^T P_{v_i}^{-1}(t) H \hat{x}(t|t-1) = P_{x_if}^{-1}(t|t) \hat{x}(t|t-1) + H^T P_{v_i}^{-1}(t) z_i(t) - P_{x_if}^{-1}(t|t) \hat{x}(t|t-1) + P_{x_i}^{-1}(t|t-1) \hat{x}(t|t-1) = H^T P_{v_i}^{-1}(t) z_i(t) + P_{x_i}^{-1}(t|t-1) \hat{x}(t|t-1)
P_x^{-1}(t|t) \hat{x}(t|t) = \sum_{i=1}^{l} H^T P_{v_i}^{-1}(t) z_i(t) + P_x^{-1}(t|t-1) \hat{x}(t|t-1)
Trajectory fusion with feedback is expressed as
P_{x_f}^{-1}(t|t) = P_{x_f}^{-1}(t|t-1) + \sum_{i=1}^{l} [P_{x_if}^{-1}(t|t) - P_{x_f}^{-1}(t|t-1)] = \sum_{i=1}^{l} P_{x_if}^{-1}(t|t) - (l-1) P_{x_f}^{-1}(t|t-1)
P_{x_f}^{-1}(t|t) \hat{x}_f(t|t) = P_{x_f}^{-1}(t|t-1) \hat{x}_f(t|t-1) + \sum_{i=1}^{l} [P_{x_if}^{-1}(t|t) \hat{x}_{if}(t|t) - P_{x_f}^{-1}(t|t-1) \hat{x}_f(t|t-1)] = \sum_{i=1}^{l} P_{x_if}^{-1}(t|t) \hat{x}_{if}(t|t) - (l-1) P_{x_f}^{-1}(t|t-1) \hat{x}_f(t|t-1) = \sum_{i=1}^{l} H^T P_{v_i}^{-1}(t) z_i(t) + P_x^{-1}(t|t-1) \hat{x}(t|t-1)
Then
\hat{x}_f(t|t) = P_x(t|t) \left[ P_x^{-1}(t|t-1) \hat{x}(t|t-1) + \sum_{i=1}^{l} H^T P_{v_i}^{-1}(t) z_i(t) \right] = \hat{x}(t|t)
This completes the proof. From the above, feedback does not affect the global tracking performance at the fusion center but reduces the covariance of each local sensor; thus, feedback improves the local tracking performance of the sensor network. □

4. Optimal Linear Estimators with Dropout

In this section, we use the prediction estimate as the optimal compensator for Systems (1)–(3). Section 4.1 derives the prediction gain matrices and the cross-correlation and autocorrelation covariance matrices of the measurement and estimation noise required for information fusion when packets are dropped, and Section 4.2 presents the linear optimal filter with feedback.

4.1. Preliminary Lemmas

Theorem 2.
Under Assumptions 1 and 2 and Systems (1)–(3), the innovation $\varepsilon_i(t)$ and its covariance matrix $Q_{\varepsilon_i}(t)$ are calculated by
\varepsilon_i(t) = y_i(t) - H \hat{x}_i(t|t-1) - \hat{v}_i(t|t-1)
Q_{\varepsilon_i}(t) = \beta(t) [H P_{x_i}(t|t-1) H^T + P_{v_i}(t|t-1)]
where $y_i(t)$ can be obtained from (3) with $\hat{z}_i(t|t-1) = H \hat{x}_i(t|t-1) + \hat{v}_i(t|t-1)$. The predicted state $\hat{x}_i(t|t-1)$ can be calculated following (5), and its prediction error covariance matrix $P_{x_i}(t|t-1)$ can be obtained from (11). $\beta(t)$ describes the packet dropout phenomenon. The process noise filter $\hat{w}_i(t|t)$ and the measurement noise predictor $\hat{v}_i(t|t-1)$ are both calculated using Theorem 3. The covariance matrix $P_{v_i}(t|t-1)$ can be calculated following Theorem 4.
Proof. 
From the projection theorem in [21], the innovation $\varepsilon_i(t) = y_i(t) - \hat{y}_i(t|t-1)$ can be obtained. Projecting each term on both sides of (3) onto the linear space generated by the measurements $(y_i(0), y_i(1), \dots, y_i(t-1))$ gives $\hat{y}_i(t|t-1) = H \hat{x}_i(t|t-1) + \hat{v}_i(t|t-1)$. Then, (31) can be obtained. Additionally, using (2) and (3), the innovation can be rewritten as
\varepsilon_i(t) = \xi(t) [H \tilde{x}_i(t|t-1) + \tilde{v}_i(t|t-1)]
Substituting (33) into the covariance matrix $Q_{\varepsilon_i}(t) = E[\varepsilon_i(t) \varepsilon_i^T(t)]$, we can obtain
Q_{\varepsilon_i}(t) = E[\varepsilon_i(t) \varepsilon_i^T(t)] = \beta(t) \{ H P_{x_i}(t|t-1) H^T + P_{v_i}(t|t-1) + H E[\tilde{x}_i(t|t-1) \tilde{v}_i^T(t|t-1)] + E[\tilde{v}_i(t|t-1) \tilde{x}_i^T(t|t-1)] H^T \}
From (34) and $E[\tilde{x}_i(t|t-1) \tilde{v}_i^T(t|t-1)] = 0$, (32) can be obtained. This completes the proof. □
Theorem 3.
Under Assumptions 1 and 2 and Systems (1)–(3), the process noise filter $\hat{w}_i(t|t)$ and its gain matrix $K_{w_i}(t|t-\tau)$ are calculated by
\hat{w}_i(t|t) = \sum_{\tau=0}^{N_t} K_{w_i}(t|t-\tau)\, \varepsilon_i(t-\tau)
K_{w_i}(t|t-\tau) = \beta(t-\tau) [P_{w_i x_i}(t, t-\tau|t-\tau-1) H^T + P_{w_i v_i}(t, t-\tau|t-\tau-1)] Q_{\varepsilon_i}^{-1}(t-\tau) = \beta(t-\tau) P_{w_i v_i}(t, t-\tau|t-\tau-1) Q_{\varepsilon_i}^{-1}(t-\tau)
The measurement noise predictor $\hat{v}_i(t|t-1)$ and its gain matrix $K_{v_i}(t|t-\tau)$ are calculated by
\hat{v}_i(t|t-1) = \sum_{\tau=1}^{N_t} K_{v_i}(t|t-\tau)\, \varepsilon_i(t-\tau), \quad t \ge 1
K_{v_i}(t|t-\tau) = \beta(t-\tau) P_{v_i}(t, t-\tau|t-\tau-1) Q_{\varepsilon_i}^{-1}(t-\tau), \quad t \ge \tau
where $\hat{v}_i(0|-1) = 0$, and the covariance matrices $P_{w_i v_i}(t, t-\tau|t-\tau-1)$ and $P_{v_i}(t, t-\tau|t-\tau-1)$ are calculated following Theorem 4.
Proof. 
From the projection theorem in [21], we obtain
\hat{v}_i(t|t-1) = \hat{v}_i(t|t-N_t-1) + \sum_{\tau=1}^{N_t} K_{v_i}(t|t-\tau)\, \varepsilon_i(t-\tau)
which yields the measurement noise predictor (37) by noting $\hat{v}_i(t|t-N_t-1) = 0$ and $N_t = \min\{t, N\}$. Using (33), the $\tau$-step prediction gain matrix of the measurement noise is calculated by
K_{v_i}(t|t-\tau) = E[v_i(t) \varepsilon_i^T(t-\tau)] Q_{\varepsilon_i}^{-1}(t-\tau) = \beta(t-\tau) E\{ v_i(t) [H \tilde{x}_i(t-\tau|t-\tau-1) + \tilde{v}_i(t-\tau|t-\tau-1)]^T \} Q_{\varepsilon_i}^{-1}(t-\tau)
From (40), and noting $E[v_i(t) \tilde{x}_i^T(t-\tau|t-\tau-1)] = 0$ and $E[v_i(t) \tilde{v}_i^T(t-\tau|t-\tau-1)] = P_{v_i}(t, t-\tau|t-\tau-1)$, we obtain (38). Similarly, (35) and (36) can be derived. This completes the proof. □
Theorem 4.
For Systems (1)–(3) under Assumptions 1 and 2, the following covariance matrices can be obtained. The estimation error cross-covariance matrix of the process noise, $P_{w_i}(t, t-k|t-k)$, and that of the measurement noise, $P_{v_i}(t, t-k|t-k-1)$, are calculated by
P_{w_i}(t, t-k|t-k) = Q_i(k) - \sum_{l=k+1}^{N_t} K_{w_i}(t|t-l) Q_{\varepsilon_i}(t-l) K_{w_i}^T(t-k|t-l)
P_{v_i}(t, t-k|t-k-1) = R_i(k) - \sum_{l=k+1}^{N_t} K_{v_i}(t|t-l) Q_{\varepsilon_i}(t-l) K_{v_i}^T(t-k|t-l)
where $t \ge k+1$ and $P_{v_i}(k, 0|-1) = R_i(k)$. The estimation error cross-covariance matrix $P_{w_i v_i}(t, t-k|t-k-1)$ between the process noise and the measurement noise is calculated as
P_{w_i v_i}(t, t-k|t-k-1) = S_i(k) - \sum_{l=k+1}^{N_t} K_{w_i}(t|t-l) Q_{\varepsilon_i}(t-l) K_{v_i}^T(t-k|t-l)
where $t \ge k+1$ and $P_{w_i v_i}(k, 0|-1) = S_i(k)$.
Proof. 
Subtracting (35) from $w_i(t)$, the filtering error equation of the process noise can be obtained:
\tilde{w}_i(t|t) = w_i(t) - \sum_{\tau=0}^{N_t} K_{w_i}(t|t-\tau)\, \varepsilon_i(t-\tau)
Substituting (44) into $P_{w_i}(t, t-k|t-k) = E[w_i(t) \tilde{w}_i^T(t-k|t-k)]$, we can obtain (41). Subtracting (37) from $v_i(t)$, the measurement noise prediction error $\tilde{v}_i(t|t-1)$ can be obtained:
\tilde{v}_i(t|t-1) = v_i(t) - \sum_{\tau=1}^{N_t} K_{v_i}(t|t-\tau)\, \varepsilon_i(t-\tau)
Substituting (45) into $P_{w_i v_i}(t, t-k|t-k-1) = E[w_i(t) \tilde{v}_i^T(t-k|t-k-1)]$, we obtain (43). Similarly, substituting (45) into $P_{v_i}(t, t-k|t-k-1) = E[v_i(t) \tilde{v}_i^T(t-k|t-k-1)]$, we can derive (42). This completes the proof. With the above, we have derived the prediction gain matrices and the noise cross-correlation and autocorrelation covariance matrices of the measurement and estimation noise required for information fusion when measurement packets are dropped. □

4.2. Linear Optimal Filter with Feedback

Theorem 5.
Under Assumptions 1 and 2 and Systems (1)–(3), the optimal linear local filter and local predictor are calculated by
\hat{x}_i(t|t) = \hat{x}_i(t|t-1) + K_{x_i}^d(t|t)\, \varepsilon_i(t)
\hat{x}_i(t+1|t) = \Phi \hat{x}_i(t|t) + \Gamma \hat{w}_i(t|t)
The gain matrix $K_{x_i}^d(t|t)$ can be obtained as
K_{x_i}^d(t|t) = \beta(t) P_{x_i}(t|t-1) H^T Q_{\varepsilon_i}^{-1}(t)
The filtering error covariance matrix $P_{x_i}(t|t)$ and the prediction error covariance matrix $P_{x_i}(t+1|t)$ are calculated by
P_{x_i}(t|t) = P_{x_i}(t|t-1) - K_{x_i}^d(t|t) Q_{\varepsilon_i}(t) [K_{x_i}^d(t|t)]^T
P_{x_i}(t+1|t) = \Phi P_{x_i}(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T
Proof. 
According to the projection theorem from [21], we have (46), and the gain matrix $K_{x_i}^d(t|t)$ is defined by
K_{x_i}^d(t|t) = E[x_i(t) \varepsilon_i^T(t)] Q_{\varepsilon_i}^{-1}(t)
Substituting (33) into (51), we can obtain
K_{x_i}^d(t|t) = \beta(t) E\{ x_i(t) [H \tilde{x}_i(t|t-1) + \tilde{v}_i(t|t-1)]^T \} Q_{\varepsilon_i}^{-1}(t)
Noting that $\hat{x}_i(t|t-1) \perp \tilde{x}_i(t|t-1)$ and $\hat{x}_i(t|t-1) \perp \tilde{v}_i(t|t-1)$, we can obtain (48). From (46), the filtering error equation for the state can be obtained:
\tilde{x}_i(t|t) = \tilde{x}_i(t|t-1) - K_{x_i}^d(t|t)\, \varepsilon_i(t)
Substituting (53) into the filtering error covariance matrix $P_{x_i}(t|t) = E[\tilde{x}_i(t|t) \tilde{x}_i^T(t|t)]$ and using $\hat{x}_i(t|t-1) \perp \varepsilon_i(t)$, the following can be obtained:
P_{x_i}(t|t) = P_{x_i}(t|t-1) + K_{x_i}^d(t|t) Q_{\varepsilon_i}(t) [K_{x_i}^d(t|t)]^T - E[x_i(t) \varepsilon_i^T(t)] [K_{x_i}^d(t|t)]^T - K_{x_i}^d(t|t) E[\varepsilon_i(t) x_i^T(t)]
From (54), and noting $E[x_i(t) \varepsilon_i^T(t)] = K_{x_i}^d(t|t) Q_{\varepsilon_i}(t)$, we can obtain (49). Subtracting (47) from (1), the prediction error equation $\tilde{x}_i(t+1|t) = \Phi \tilde{x}_i(t|t) + \Gamma \tilde{w}_i(t|t)$ can be obtained. Substituting it into the error covariance matrix $P_{x_i}(t+1|t) = E[\tilde{x}_i(t+1|t) \tilde{x}_i^T(t+1|t)]$ yields (50). □
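Compared with the dropout-free update, only the innovation covariance and the gain change: both are scaled by the arrival probability $\beta(t)$. A sketch of the measurement update of Theorem 5 for the special case of uncorrelated measurement noise (function and variable names are our own):

```python
import numpy as np

def kf_update_dropout(x_pred, P_pred, y, beta, H, Pv):
    """Measurement update under packet dropout: Q_eps = beta (H P H^T + Pv),
    gain K = beta P H^T Q_eps^{-1}; y is the received (possibly
    prediction-compensated) measurement."""
    Q_eps = beta * (H @ P_pred @ H.T + Pv)
    K = beta * P_pred @ H.T @ np.linalg.inv(Q_eps)
    x_filt = x_pred + K @ (y - H @ x_pred)
    P_filt = P_pred - K @ Q_eps @ K.T
    return x_filt, P_filt
```

Note that the beta factors cancel inside K, but the subtracted term K Q_eps K^T shrinks by beta, so the filtered covariance stays larger when packets are more likely to be dropped.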
Theorem 6.
When feedback exists, under Assumption 4, the predictor at the next moment takes the same form as (16) and (17):
\hat{x}_i(t+1|t) = \Phi \hat{x}_i(t|t) + \Gamma \hat{w}_i(t|t) = \Phi \hat{x}(t|t) + \Gamma \hat{w}_i(t|t)
P_{x_i}(t+1|t) = \Phi P_{x_i}(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T + \Phi P_{x_i w_i}(t|t) \Gamma^T + \Gamma P_{x_i w_i}^T(t|t) \Phi^T = \Phi P_x(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T
Furthermore, it can be concluded that the estimation error covariance matrices satisfy $P_{x_i}(t+1|t+1) \ge P_{x_if}(t+1|t+1)$.
Proof. 
After adding feedback, the predicted value of each sensor is as follows
\hat{x}_{if}(t+1|t) = \Phi \hat{x}_i(t|t) + \Gamma \hat{w}_i(t|t) = \Phi \hat{x}_f(t|t) + \Gamma \hat{w}_i(t|t) = \hat{x}_f(t+1|t)
P_{x_if}(t+1|t) = \Phi P_{x_f}(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T = P_{x_f}(t+1|t)
P_{x_i}(t+1|t+1) = P_{x_i}(t+1|t) - K_{x_i}^d(t+1|t+1) Q_{\varepsilon_i}(t+1) [K_{x_i}^d(t+1|t+1)]^T
Local covariance after adding feedback:
P_{x_if}(t+1|t+1) = P_{x_if}(t+1|t) - K_{x_i}^d(t+1|t+1) Q_{\varepsilon_i}(t+1) [K_{x_i}^d(t+1|t+1)]^T = P_{x_f}(t+1|t) - K_{x_i}^d(t+1|t+1) Q_{\varepsilon_i}(t+1) [K_{x_i}^d(t+1|t+1)]^T
It can be seen that (59) and (60) have the same form as (20) and (21). Similarly, it can be concluded that $P_{x_i}(t+1|t+1) \ge P_{x_if}(t+1|t+1)$.
This proof is completed. □

5. Kalman Smoothing Algorithm

In this section, to further enhance the fusion effect, we will optimize the fusion result using a smoothing process, thereby proposing the Kalman smoothing algorithm.
Theorem 7.
The smoothed estimate $\hat{x}_s(t|t)$ and its covariance $P_s(t|t)$ can be written as
\hat{x}_s(t|t) = \hat{x}^+(t|t) + K^-(t|t) [\hat{x}_s(t+1|t+1) - \hat{x}^-(t+1|t)]
P_s(t|t) = P^+(t|t) + K^-(t|t) [P_s(t+1|t+1) - P^-(t+1|t)] K^{-T}(t|t)
$\hat{x}^+(t|t)$ and $P^+(t|t)$ denote the fused posterior estimate at time t, $\hat{x}^-(t+1|t)$ and $P^-(t+1|t)$ denote the fused prior (one-step prediction) estimate, and $K^-(t|t)$ is the backward (reverse) filtering gain matrix.
The smoothing algorithm consists of forward filtering and backward filtering. The forward filtering consists of the classical Kalman filter, which is used to estimate the state at each moment. The backward filtering reuses some data on the basis of the forward filtering to obtain a more accurate state estimate.
Proof. 
From (46)–(50), the local optimal Kalman filter can be obtained as
\hat{x}_i(t|t-1) = \Phi \hat{x}_i(t-1|t-1) + \Gamma \hat{w}_i(t-1|t-1)
P_{x_i}(t|t-1) = \Phi P_{x_i}(t-1|t-1) \Phi^T + \Gamma P_{w_i}(t-1|t-1) \Gamma^T
The gain matrix of the forward recursion is
K_{x_i}(t|t) = \beta(t) [P_{x_i}(t|t-1) H^T + P_{x_i v_i}(t|t-1)] Q_{\varepsilon_i}^{-1}(t) = \beta(t) P_{x_i}(t|t-1) H^T Q_{\varepsilon_i}^{-1}(t)
The fused posterior estimate at time t is
\hat{x}^+(t|t) = \hat{x}_i(t|t-1) + K_{x_i}(t|t)\, \varepsilon_i(t)
P^+(t|t) = P_{x_i}(t|t-1) - K_{x_i}(t|t) Q_{\varepsilon_i}(t) K_{x_i}^T(t|t)
The Kalman filter forward recursion is
\hat{x}^-(t+1|t) = \Phi \hat{x}^+(t|t) + \Gamma \hat{w}_i(t|t)
P^-(t+1|t) = \Phi P^+(t|t) \Phi^T + \Gamma P_{w_i}(t|t) \Gamma^T
The gain matrix of the backward recursion is
K^-(t|t) = P^+(t|t) \Phi^T [P^-(t+1|t)]^{-1}
Therefore, the fused estimate of the backward (reverse) filtering is
\hat{x}_s(t|t) = \hat{x}^+(t|t) + K^-(t|t) [\hat{x}_s(t+1|t+1) - \hat{x}^-(t+1|t)]
P_s(t|t) = P^+(t|t) + K^-(t|t) [P_s(t+1|t+1) - P^-(t+1|t)] K^{-T}(t|t)
The forward recursion is performed from the initial time up to the final time, and the backward recursion is then performed from the final time back to time t. This completes the Kalman smoothing process and the proof. □
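The backward pass over a stored forward-filter trajectory can be sketched as follows; here the standard Rauch-Tung-Striebel form of the backward gain, K = P^+ Phi^T (P^-)^{-1}, is used, and the container and variable names are our own:

```python
import numpy as np

def smooth(x_filt, P_filt, x_pred, P_pred, Phi):
    """Backward smoothing recursion: run after a forward pass that stored
    the posterior (x_filt, P_filt) and the one-step prediction
    (x_pred, P_pred) at every time step."""
    N = len(x_filt)
    xs, Ps = [None] * N, [None] * N
    xs[-1], Ps[-1] = x_filt[-1], P_filt[-1]       # initialize at final time
    for t in range(N - 2, -1, -1):
        K = P_filt[t] @ Phi.T @ np.linalg.inv(P_pred[t + 1])
        xs[t] = x_filt[t] + K @ (xs[t + 1] - x_pred[t + 1])
        Ps[t] = P_filt[t] + K @ (Ps[t + 1] - P_pred[t + 1]) @ K.T
    return xs, Ps
```

Since the smoothed covariance at t+1 never exceeds the prediction covariance, the correction term is negative semi-definite, so smoothing never increases the error covariance.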

6. Simulation Example

Consider a radar tracking system with three sensors:
x_i(t+1) = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix} x_i(t) + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} w_i(t)
z_i(t) = H_i x_i(t) + v_i(t)
y_i(t) = \xi(t) z_i(t) + (1 - \xi(t)) \hat{z}_i(t|t-1)
v_i(t) = \gamma_i w(t) + \zeta_i(t), \quad i = 1, 2, 3
where T is the sampling period. The state is $x(t) = [s(t), \dot{s}(t), \ddot{s}(t)]^T$, where $s(t)$, $\dot{s}(t)$, $\ddot{s}(t)$ are the position, velocity, and acceleration of the target at time t; $y_i(t)$, i = 1, 2, 3, are the measurement signals; and $v_i(t)$, i = 1, 2, 3, are the measurement noises of the three sensors, which are correlated with the Gaussian white noise $w(t)$ with zero mean and variance $\sigma_w^2$. The coefficients $\gamma_i$ are constant scalars, and $\zeta_i(t)$, i = 1, 2, 3, are Gaussian white noises with zero means and variances $\sigma_{\zeta_i}^2$, independent of $w(t)$. Our aim is to find the optimal information fusion distributed Kalman filter $\hat{x}_0(t|t)$.
We set $T = 0.01$, $H_1 = [1, 0, 0]$, $H_2 = [0, 1, 0]$, $H_3 = [0, 0, 1]$; $\sigma_w^2 = 1$, $\sigma_{\zeta_1}^2 = 5$, $\sigma_{\zeta_2}^2 = 8$, $\sigma_{\zeta_3}^2 = 8$; $\gamma_1 = 2$, $\gamma_2 = 1$, $\gamma_3 = 1$; and initial values $x(0) = [0, 0, 0]^{T}$, $P_0 = 0.1 I_3$. For each sensor subsystem, applying (46)–(50) yields the local optimal Kalman filter $x_i(t|t)$ and corresponding variances $P_{x_i}(t|t)$, i = 1, 2, 3. Then, the optimal information fusion filter $x(t|t)$ and corresponding variance $P_x(t|t)$ can be obtained from Lemma 1. Substituting $x(t|t)$ and $P_x(t|t)$ into (55) and (56) yields the predicted estimation error covariance matrices $P_{x_i}(t+1|t)$ and the optimal fusion prediction $x_i(t+1|t)$.
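As an illustration only (not the authors' code), the simulation model and parameter values above can be set up as follows; the function name `simulate` is hypothetical, and the shared noise $w(t)$ appears both in the state equation and, scaled by $\gamma_i$, in each sensor noise, which produces the stated correlation.

```python
import numpy as np

# Model x(t+1) = Phi x(t) + Gamma w(t), z_i(t) = H_i x(t) + v_i(t),
# v_i(t) = gamma_i w(t) + zeta_i(t), with the parameter values from the text.
T = 0.01
Phi = np.array([[1.0, T, T**2 / 2],
                [0.0, 1.0, T],
                [0.0, 0.0, 1.0]])
Gamma = np.array([0.0, 0.0, 1.0])
H = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]
gamma = [2.0, 1.0, 1.0]
sigma_w = 1.0                                   # sigma_w^2 = 1
sigma_zeta = [np.sqrt(5.0), np.sqrt(8.0), np.sqrt(8.0)]

def simulate(n_steps, seed=0):
    """Generate the true state trajectory and the three correlated
    sensor measurement sequences."""
    rng = np.random.default_rng(seed)
    x = np.zeros(3)                             # x(0) = [0, 0, 0]^T
    states, meas = [], []
    for _ in range(n_steps):
        w = rng.normal(0.0, sigma_w)
        x = Phi @ x + Gamma * w                 # state driven by w(t)
        # Each sensor noise shares the same w(t), giving the correlation.
        z = [H[i] @ x + gamma[i] * w + rng.normal(0.0, sigma_zeta[i])
             for i in range(3)]
        states.append(x.copy())
        meas.append(z)
    return np.array(states), np.array(meas)
```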
The simulation is divided into two parts. First, the tracking performance of distributed Kalman filtering is compared with that of distributed Kalman filtering with dropout for the same target motion. Second, the tracking performance of the distributed Kalman filter with dropout, the distributed Kalman filter with dropout and feedback, and the smoothing algorithm are compared. In addition, a comparison of the local covariances at different times is given in Table 1.

6.1. The Tracking Effects of Distributed Kalman Filtering with Dropout

Dropout during transmission is inevitable due to performance differences between sensors and external interference. It is therefore assumed that dropout occurs at t = 0.6–0.8 s and is compensated by the predicted value $z_i(t|t-1)$ from the previous moment. Figure 1 compares the tracking performance of distributed Kalman filtering with and without dropout, and Figure 2 shows the corresponding MSE. The effect of the dropout at 0.6–0.8 s is clearly visible in Figure 2.
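The compensation rule $y_i(t) = \xi_i(t) z_i(t) + (1 - \xi_i(t)) z_i(t|t-1)$ can be sketched as a small helper; this is an illustrative sketch, not the authors' implementation, and it assumes $z_i(t|t-1) = H_i x(t|t-1)$.

```python
import numpy as np

def received_measurement(z, xi, H_i, x_pred):
    """Dropout compensation: if the packet arrives (xi == 1) use the raw
    measurement z_i(t); otherwise (xi == 0) substitute the one-step
    measurement prediction z_i(t|t-1) = H_i x(t|t-1)."""
    z_pred = H_i @ x_pred
    return xi * z + (1 - xi) * z_pred
```

With a Bernoulli arrival indicator $\xi_i(t)$, the fusion center always has a usable measurement, at the cost of a larger error covariance during the dropout interval.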

6.2. The Tracking Performance of Distributed Kalman Filtering with Dropout and Distributed Kalman Filtering with Dropout after Adding Feedback and the Smoothing Algorithm

In this section, the tracking trajectories under four different conditions are reported. Figure 3 shows that the tracking performance of the distributed Kalman filter with feedback is consistent with that of the distributed Kalman filter without feedback when multiple sensors experience dropout during transmission; this confirms Theorem 4 in Section 3. Figure 3 also shows that the tracking trajectory of the optimal fusion Kalman filter with the smoothing algorithm is close to the true value, indicating that the smoothing algorithm improves the tracking performance of the Kalman filter. The clearer comparison in Figure 4 shows that adding the smoothing algorithm reduces the filtering error.
Furthermore, the optimal fusion covariance and local covariances were collected at five time nodes. $P_a$, $P_b$, and $P_c$ denote, for the three sensors respectively, the difference between the local covariance with feedback and the local covariance without feedback. Table 1 shows that the error value at each time point is consistent with the proved result $P_{x_i}(t+1|t+1) \geq P_{x_i}^{f}(t+1|t+1)$.

7. Conclusions

In this paper, the correlated-noise and packet-dropout estimation problems of information fusion in distributed sensing networks are investigated. In contrast to previous studies, a matrix weight fusion method combined with a feedback structure is proposed to handle the correlation between multi-sensor measurement noise and estimation noise, achieving optimal estimation in the sense of linear minimum variance. In addition, for packet dropout during fusion, a loss estimation compensation method with a feedback structure is proposed, which reduces the covariance of the fusion process. Finally, the simulation shows that the error between the local covariance with the feedback structure and the local covariance without it at the selected time nodes lies between 0 and 0.3, verifying that the local covariance with feedback is smaller and thereby demonstrating the effectiveness of the algorithm.

Author Contributions

Conceptualization, K.D.; Methodology, H.Y.; Software, Q.L.; Formal analysis, H.Y.; Investigation, W.S.; Resources, Q.L.; Data curation, W.S.; Writing—original draft, W.S. and H.Z.; Writing—review & editing, H.Y. and K.D.; Supervision, H.Z.; Project administration, K.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Strengthening Plan Technical Field Fund, grant number 2021-JCJQ-JJ-0597 (Corresponding authors: He Zhang and Keren Dai).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviation

The following abbreviations are used in this manuscript:
UAV	unmanned aerial vehicle
KF	Kalman filter
CKF	centralized Kalman filter
DKF	distributed Kalman filter
MSE	mean square error

Figure 1. Comparison of the tracking effect between distributed Kalman filtering and distributed Kalman filtering with dropout.
Figure 2. MSE of distributed Kalman filtering and distributed Kalman filtering with dropout.
Figure 3. Comparison of the tracking effect between the distributed Kalman filter with dropout, feedback, and the smoothing algorithm.
Figure 4. MSE of the distributed Kalman filter with dropout, feedback and the smoothing algorithm.
Table 1. The error between the local covariance with feedback and the local covariance with data packet dropout at different times. Each entry is a 3 × 3 error matrix, written row by row as [row 1; row 2; row 3].

t = 0.2:
P_a = [0.000 0.000 0.010; 0.000 0.020 0.009; 0.010 0.009 0.009]
P_b = [0.000 0.000 0.033; 0.000 0.010 0.011; 0.033 0.011 0.010]
P_c = [0.000 0.000 0.000; 0.000 0.008 0.005; 0.000 0.005 0.008]

t = 0.4:
P_a = [0.000 0.010 0.006; 0.010 0.018 0.010; 0.006 0.010 0.053]
P_b = [0.000 0.011 0.032; 0.011 0.010 0.008; 0.032 0.008 0.260]
P_c = [0.000 0.006 0.000; 0.006 0.008 0.001; 0.008 0.001 0.170]

t = 0.6:
P_a = [0.000 0.000 0.019; 0.000 0.005 0.006; 0.019 0.006 0.183]
P_b = [0.000 0.000 0.021; 0.000 0.023 0.021; 0.021 0.021 0.300]
P_c = [0.000 0.000 0.008; 0.000 0.006 0.008; 0.008 0.008 0.122]

t = 0.8:
P_a = [0.008 0.000 0.006; 0.000 0.008 0.009; 0.006 0.009 0.190]
P_b = [0.000 0.005 0.005; 0.005 0.023 0.005; 0.005 0.005 0.300]
P_c = [0.018 0.011 0.012; 0.011 0.008 0.008; 0.012 0.008 0.092]

t = 1.0:
P_a = [0.010 0.000 0.021; 0.000 0.010 0.011; 0.021 0.011 0.221]
P_b = [0.000 0.000 0.006; 0.000 0.008 0.005; 0.006 0.005 0.251]
P_c = [0.000 0.000 0.021; 0.000 0.014 0.013; 0.021 0.013 0.082]

Shang, W.; Yu, H.; Li, Q.; Zhang, H.; Dai, K. Optimal Linear Filter Based on Feedback Structure for Sensing Network with Correlated Noises and Data Packet Dropout. Sensors 2023, 23, 5673. https://doi.org/10.3390/s23125673