Article

Multi-Centroid Extraction Method for High-Dynamic Star Sensors Based on Projection Distribution of Star Trail

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 Daheng College, University of Chinese Academy of Sciences, Beijing 100039, China
3 Key Laboratory of Space-Based Dynamic Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
4 DFH Satellite Co., Ltd., Beijing 100094, China
5 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
6 Engineering Research Center of Mine Digitalization of Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(2), 266; https://doi.org/10.3390/rs17020266
Submission received: 30 November 2024 / Revised: 6 January 2025 / Accepted: 8 January 2025 / Published: 13 January 2025

Abstract
To improve the centroid extraction accuracy and efficiency of high-dynamic star sensors, this paper proposes a multi-centroid localization method based on the prior distribution of star trail projections. First, the mapping relationship between attitude information and star trails is constructed based on a geometric imaging model, and an endpoint centroid group extraction strategy is designed from the perspectives of time synchronization and computational complexity. Then, the endpoint position parameters are determined by fitting the star trail grayscale projection with a line spread function, and accurate centroid localization is achieved through principal axis analysis and inter-frame correlation. Finally, the effectiveness of the proposed method under different dynamic scenarios was tested using numerical simulations and semi-physical experiments. The experimental results show that when the three-axis angular velocity reaches 8°/s, the centroid extraction accuracy of the proposed method remains better than 0.1 pixels, an improvement of over 30% compared to existing methods, while the attitude measurement frequency is simultaneously doubled. This demonstrates the superiority of the method in high-dynamic attitude measurement tasks.


1. Introduction

To meet the increasingly complex demands for spatial information acquisition, dynamic imaging mode has become the mainstream trend in the development of remote sensing technology [1,2,3]. Unlike traditional static imaging modes, dynamic imaging mode can overcome the contradiction between the field of view and the focal length of optical payloads, achieving high-resolution wide-field optical imaging [4,5,6]. The core of dynamic imaging mode lies in high-precision attitude control under maneuvering conditions, a characteristic that places more stringent requirements on the satellite’s attitude measurement capabilities during high-dynamic operations [7,8,9].
Star sensors are currently the most precise devices for attitude measurement in the aerospace field [10,11,12]. By capturing star images, they accurately extract the centroid positions of stars, using star image recognition and attitude determination algorithms to provide high-precision attitude measurements for spacecraft [13,14]. Under static imaging conditions, the measurement accuracy of star sensors can reach the arcsecond level, perfectly meeting the attitude measurement needs in static imaging [15].
However, the attitude measurement frequency of star sensors is limited by exposure time, making it impossible to provide high-frequency attitude measurement results [16]. In addition, under dynamic conditions, star spots no longer exhibit Gaussian distributions but instead form complex trajectories, known as star trails, whose shapes vary with coordinates and velocity. This increases the uncertainty in centroid localization, thereby reducing the accuracy of attitude determination [17,18,19]. These challenges severely limit the application of star sensors in dynamic optical imaging.
To address these issues, many studies have focused on improving the centroid localization accuracy and measurement frequency of star sensors under dynamic conditions. The main methods for centroid localization under static conditions include the grayscale weight method [20,21] and the Gaussian fitting method [22,23]. To enhance the centroid localization accuracy of star sensors in dynamic conditions, researchers have analyzed and compensated for centroid localization errors under dynamic conditions [24,25].
As the dynamic imaging models of star sensors become increasingly sophisticated, some imaging models have been applied to the centroid extraction of high-dynamic star images [26]. Some researchers have developed motion models for specific scenarios to deconvolve and restore star point trajectories, aiming to improve centroid positioning results by increasing the signal-to-noise ratio [27,28]. Some scholars have attempted to directly fit star point trails using prior information from established models to obtain the centroid coordinates [29,30]. However, these methods rely on the accurate modeling of the mission scenario, which limits their applicability.
In addition to improving accuracy, researchers have attempted to break through the limitations of imaging frame rates on measurement frequency. By controlling exposure, star trails are imaged as several short segments, and centroids are processed independently for different segments to estimate the attitude at different times within a single image, thus increasing the update rate of star sensors [31,32,33]. However, the implementation of multi-exposure star sensors often relies on electronic shutters or image intensifiers, not only increasing hardware complexity but also depending on the accurate estimation of motion parameters.
These two types of methods cannot simultaneously improve both the measurement frequency and centroid localization accuracy, making it difficult to meet the requirements of high-dynamic attitude measurements. In essence, star trails are the cumulative energy tracks of star points moving on the image plane during the exposure period, recording the changes in centroid positions over time [17,34]. Therefore, if this prior information can be used as a constraint to recover multiple centroid coordinates from star trails, it may be possible to improve both the measurement accuracy and the frequency of dynamic star sensors.
This paper proposes a new method for extracting centroids from star trails, which aims to simultaneously improve the centroid localization accuracy and double the measurement frequency of high-dynamic star sensors by extracting two precise endpoint centroid coordinates from the star trails. To achieve this, this paper first models the moving star image based on geometric imaging models and line spread functions, analyzes the projection distribution characteristics of star trails, and develops a centroid extraction strategy from the perspectives of time synchronization and algorithmic complexity. Subsequently, this paper details the algorithmic process for extracting endpoint centroids from star trails. Finally, this paper demonstrates an improvement in centroid localization accuracy and the frequency of high-dynamic star sensors through experiments, proving the correctness and application prospects of the proposed method.
The structure of this paper is as follows: Section 1 introduces the research background and significance, Section 2 provides the theoretical basis for the method proposed in this paper, Section 3 presents the specific centroid extraction process, Section 4 conducts numerical simulations and semi-physical experiments, Section 5 discusses the results, and Section 6 draws conclusions.

2. Models

To achieve centroid extraction from star trails, it is necessary to model the motion of star points on the image plane during attitude maneuvers. For this purpose, this section establishes, based on the geometric imaging model of the star sensor, the mapping relationship between attitude information and the star trail, analyzes the time synchronization of the trail end points, and derives the projection distribution characteristics of the trail grayscale values. This provides a theoretical foundation for the methods proposed in the following sections.

2.1. The Star Trail Model

The coordinate relationships in the star sensor are illustrated in Figure 1. Here, O_I X_I Y_I Z_I represents the inertial space coordinate system. The star sensor is simplified to a pinhole imaging model with a focal length of f. Taking the pinhole as the origin O_C and the optical axis as the Z_C axis, the camera coordinate system O_C X_C Y_C Z_C is established. Translating the X_C Y_C plane along the Z_C axis by f yields the image plane, which the Z_C axis intersects at O_P, establishing the two-dimensional image plane coordinate system O_P X_P Y_P. To simplify the description of pixel coordinates in image processing, the image plane coordinate system is translated by (W/2, H/2) to obtain the image coordinate system OXY, where W and H are the width and height of the sensor, respectively.
At time t, the unit observation vector of the i-th star within the field of view of the star sensor, expressed in the camera coordinate system, is w_i(t):

w_i(t) = \frac{1}{\sqrt{x_i(t)^2 + y_i(t)^2 + f^2}} \begin{bmatrix} x_i(t) \\ y_i(t) \\ f \end{bmatrix}   (1)
where (x_i(t), y_i(t)) are the centroid coordinates of the star on the image plane, i.e., its horizontal and vertical coordinates. The unit vector of the star in the inertial coordinate system is r_i, described as

r_i = \begin{bmatrix} \cos\delta_i \cos\alpha_i \\ \cos\delta_i \sin\alpha_i \\ \sin\delta_i \end{bmatrix}   (2)
where \alpha_i and \delta_i are the right ascension and declination of the navigation star in the inertial coordinate system. The observation vector w_i(t) and the star vector r_i satisfy the following relationship:

w_i(t) = M(t)\, r_i   (3)
where M(t) is the attitude matrix at time t, representing the transformation from the inertial coordinate system to the camera coordinate system. The attitude matrix can be defined in terms of Euler angles:

M(t) = R_z\!\left(\alpha(t) - \frac{\pi}{2}\right) R_x\!\left(\beta(t) + \frac{\pi}{2}\right) R_z\!\left(\gamma(t)\right)   (4)

where \alpha, \beta, \gamma are the attitude angles of the camera coordinate system in the inertial coordinate system, and R_z, R_x are the rotation matrices about the Z and X axes.
Let the total exposure time of the star sensor be T. The observation vectors of star r_i at times t_0 and t_1 = t_0 + \Delta t have the following relationship:

w_i(t_1) = M(t_1) M(t_0)^T w_i(t_0) = T_{t_0}^{t_1} w_i(t_0)   (5)
where T_{t_0}^{t_1} is the state transition matrix from time t_0 to t_1. When the time interval \Delta t is sufficiently small, the transition matrix approximately satisfies the following expression:

T_{t_0}^{t_1} \approx \begin{bmatrix} 1 & \Delta\xi_z & -\Delta\xi_y \\ -\Delta\xi_z & 1 & \Delta\xi_x \\ \Delta\xi_y & -\Delta\xi_x & 1 \end{bmatrix}   (6)
where \Delta\xi_x, \Delta\xi_y, and \Delta\xi_z are the changes in the attitude angles around the three axes during the time interval \Delta t:

(\Delta\xi_x, \Delta\xi_y, \Delta\xi_z) = (\omega_x, \omega_y, \omega_z)\,\Delta t   (7)

where \omega_x, \omega_y, and \omega_z are the angular rates around the X, Y, and Z axes of the camera coordinate system, respectively.
Let (x_i(t_0), y_i(t_0)) be the position of the centroid in the image coordinate system at time t_0, and (x_i(t_1), y_i(t_1)) be the position at time t_1. Based on Equations (1), (5) and (6), the displacement of the centroid between these two time points is

\begin{bmatrix} x_i(t_1) \\ y_i(t_1) \end{bmatrix} = \kappa \begin{bmatrix} x_i(t_0) + f\,\Delta\xi_y + y_i(t_0)\,\Delta\xi_z \\ y_i(t_0) - f\,\Delta\xi_x - x_i(t_0)\,\Delta\xi_z \end{bmatrix}   (8)

where \kappa is the coefficient

\kappa = \frac{1}{\left( x_i(t_0)\,\Delta\xi_y + y_i(t_0)\,\Delta\xi_x \right)/f + 1}   (9)

Since x_i and y_i are on the order of millimeters, \Delta\xi_x and \Delta\xi_y are on the order of milliradians, and the focal length f is more than 10 mm, one can take \kappa = 1 within the time interval \Delta t.
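As a concrete illustration, the sketch below propagates a single centroid across the exposure with the small-angle model of Equations (7) and (8), taking κ = 1. It is a minimal example and not the authors' code; the focal length, starting position, and angular rates are assumed values chosen to mirror the simulation settings of Section 4.1, and the sign convention follows the reconstruction of Equation (8) above.

```python
import numpy as np

def propagate_centroid(x0, y0, f, omega, dt):
    """Propagate a star centroid on the image plane over a small time step dt.

    Implements the small-angle displacement of Equation (8) with kappa ~= 1.
    x0, y0 : centroid position at t0 (same length unit as f, e.g. mm)
    f      : focal length
    omega  : (wx, wy, wz) angular rates in rad/s about the camera axes
    dt     : time step in seconds (small enough for the linearization to hold)
    """
    dxi_x, dxi_y, dxi_z = np.asarray(omega) * dt          # Equation (7)
    x1 = x0 + f * dxi_y + y0 * dxi_z                      # X displacement
    y1 = y0 - f * dxi_x - x0 * dxi_z                      # Y displacement
    return x1, y1

# Example (hypothetical values): f = 42 mm, centroid 2 mm off-axis,
# three-axis rate of 8 deg/s, 0.1 ms steps over a 100 ms exposure.
if __name__ == "__main__":
    w = np.deg2rad([8.0, 8.0, 8.0])
    x, y = 2.0, 1.0
    for _ in range(1000):
        x, y = propagate_centroid(x, y, 42.0, w, 1e-4)
    print(f"centroid after exposure: ({x:.4f}, {y:.4f}) mm")
```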
An ideal star sensor optically images a distant star onto the image plane as a Gaussian-type spot, whose energy is distributed over the two-dimensional plane centered at the centroid:

E_s(x, y) = \frac{\eta\,\Delta t}{2\pi\rho^2} \exp\!\left( -\frac{(x - x_i)^2 + (y - y_i)^2}{2\rho^2} \right)   (10)

where \rho is the radius of the Gaussian spot, and \eta is a proportionality coefficient related to the energy of the star, the optical system, and the detector performance. During attitude maneuvers, the trail of the star point is formed by the accumulation of the Gaussian spot over the exposure time:

E_d(x, y) = \frac{\eta}{2\pi\rho^2} \int_{t_0}^{t_0 + T} \exp\!\left( -\frac{(x - x_i(t))^2 + (y - y_i(t))^2}{2\rho^2} \right) dt   (11)
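For reference, the following sketch renders a star trail by numerically accumulating Equation (11) on a small pixel grid. It is a simplified illustration under assumed parameters (grid size, spot radius, uniform image-plane motion), not the simulation code used in Section 4.

```python
import numpy as np

def render_trail(grid_w, grid_h, path_xy, rho=1.5, eta=1.0, dt=1e-4):
    """Accumulate a Gaussian spot along a sampled centroid path (Equation (11)).

    grid_w, grid_h : image size in pixels
    path_xy        : (K, 2) array of centroid positions (pixels), one per time step dt
    rho            : Gaussian spot radius in pixels (assumed value)
    """
    y, x = np.mgrid[0:grid_h, 0:grid_w].astype(float)
    img = np.zeros((grid_h, grid_w))
    for cx, cy in path_xy:
        img += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * rho ** 2))
    return eta * dt / (2 * np.pi * rho ** 2) * img

# Uniform motion from (8, 10) to (40, 28) over 100 ms, sampled every 0.1 ms.
t = np.linspace(0.0, 1.0, 1000)
path = np.stack([8 + 32 * t, 10 + 18 * t], axis=1)
trail = render_trail(64, 48, path)
print(trail.shape, trail.max())
```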

2.2. Time Synchronization of Endpoint Centroid Groups

The star sensor begins exposure at time t_start and ends exposure at time t_end. Define the set of all centroids on the image plane at any time t during the exposure period as a two-dimensional point set W_t, referred to as the centroid group at timestamp t:

W_t = \{ (x_i(t), y_i(t)) \mid i = 1, 2, \ldots, N \}   (12)
where N is the number of centroids within the field of view. This paper assumes that N > 4, which satisfies the minimum star count requirement for star map recognition, and that no centroids enter or leave the field of view during the exposure period. Attitude determination is then based on solving the three-axis attitude angles \alpha(t), \beta(t), \gamma(t) from the centroid group W_t.
When the attitude is stable, the centroids do not move during the exposure, and it can be considered that there is only one centroid group, corresponding to the time t_mid = (t_start + t_end)/2. However, as shown in Figure 2, during attitude maneuvers the centroid coordinates become functions of time, so a different centroid group corresponds to each moment of the exposure. When extracting a centroid group, it is therefore necessary to consider not only its spatial position but also the specific moment to which it corresponds.
To address this, this paper considers two specific moments during the exposure process to eliminate the time uncertainty of the centroid groups. These are the starting centroid group W_{t_start} and the ending centroid group W_{t_end}, corresponding to the start and end of the exposure and located at the starting and ending points of each star trail. The start and end times of the exposure are controlled and well defined, while the coordinates of the corresponding centroids are retained at the two end points of the star trails.

2.3. Projection Distribution Characteristics of the Star Trail

Although Equation (11) provides a model for the star trail, the starting and ending centroid coordinates appear in the model as time-varying parameters, making it extremely difficult to directly solve. To simplify this solution process, this paper proposes using the projection distribution characteristics of the star trail as a prior, converting the two-dimensional time-dependent fitting into two one-dimensional fittings.
From Equation (8), it is known that the displacement of the centroid can be decomposed into components along the X-axis and Y-axis of the image coordinate system. Taking the X-axis direction as an example, the displacement consists of two parts: a constant displacement f·Δξ_y caused by the angular rate about the Y-axis, which is proportional to the focal length of the star sensor, and a time-varying displacement y_i(t_0)·Δξ_z caused by the rotation around the optical axis, which is proportional to the Y-axis coordinate of the centroid.
Considering a practical imaging system, the focal length f is typically 1~2 orders of magnitude larger than the image plane dimensions. Therefore, the constant displacement is the dominant part of the centroid displacement. This means that during the exposure period, the displacement speed of the star points along the X-axis and Y-axis can be considered constant.
To study the uniform motion of star points on the image plane, we first discuss the motion of a one-dimensional point spread function. Let the energy distribution of a static star point be represented by f_PSF:

f_{PSF}(x \mid x_1) = \frac{1}{\sqrt{2\pi}\,\rho} \exp\!\left( -\frac{(x - x_1)^2}{2\rho^2} \right)   (13)
where x_1 represents the one-dimensional center coordinate of f_PSF, and \rho represents the spot radius. The integral of the motion along the X-axis can be expressed as the line spread function f_LSF:

f_{LSF}(x \mid x_1, x_2) = \int_{x - x_2}^{x - x_1} f_{PSF}(u \mid 0)\, du = \frac{1}{2}\left[ \operatorname{erf}\!\left( \frac{x - x_1}{\sqrt{2}\rho} \right) - \operatorname{erf}\!\left( \frac{x - x_2}{\sqrt{2}\rho} \right) \right]   (14)
where x_1 and x_2 represent the center coordinates at the start and end of the f_PSF movement, respectively, known as the start point and end point of the f_LSF. The term erf refers to the error function, i.e., the integral of the Gaussian function:

\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp(-t^2)\, dt   (15)
Figure 3 illustrates the process of forming LSFs through the integration of PSFs with different spot radii.
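As a small numeric illustration of Equations (13)–(15), the sketch below evaluates an LSF directly from the error function and, as a cross-check, by numerically accumulating PSFs whose center sweeps between the two endpoints; the spot radius and endpoints are assumed values.

```python
import numpy as np
from scipy.special import erf

def lsf(x, x1, x2, rho):
    """Line spread function of Equation (14): a PSF smeared from x1 to x2."""
    return 0.5 * (erf((x - x1) / (np.sqrt(2) * rho))
                  - erf((x - x2) / (np.sqrt(2) * rho)))

def lsf_numeric(x, x1, x2, rho, steps=2001):
    """Cross-check: integrate Gaussian PSFs whose center moves from x1 to x2."""
    centers = np.linspace(x1, x2, steps)
    psf = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * rho ** 2))
    psf /= np.sqrt(2 * np.pi) * rho
    return np.trapz(psf, centers, axis=1)   # integrate over the moving center

x = np.linspace(0, 40, 401)
diff = np.abs(lsf(x, 10, 30, 1.5) - lsf_numeric(x, 10, 30, 1.5)).max()
print(f"max deviation between analytic and numeric LSF: {diff:.2e}")
```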
Furthermore, the one-dimensional uniform motion of a two-dimensional star point is discussed. Let the start point of the two-dimensional Gaussian star point be (x_1, y_0) and the end point be (x_2, y_0) within the exposure time T. The energy distribution based on f_LSF is

E_m(x, y) = \int_{x - x_2}^{x - x_1} f_{PSF}(x' \mid 0)\, f_{PSF}(y \mid y_0)\, dx' = f_{LSF}(x)\, f_{PSF}(y) = \frac{1}{2\sqrt{2\pi}\,\rho} \left[ \operatorname{erf}\!\left( \frac{x - x_1}{\sqrt{2}\rho} \right) - \operatorname{erf}\!\left( \frac{x - x_2}{\sqrt{2}\rho} \right) \right] \exp\!\left( -\frac{(y - y_0)^2}{2\rho^2} \right)   (16)
The two-dimensional grayscale distribution of the star point is shown in Figure 4. From Equation (16), it can be seen that the grayscale distribution is separable along the coordinate axes. On the cross-section along the velocity direction (parallel to the XOZ plane, shown in orange), the grayscale distribution follows f_LSF, and on the cross-section perpendicular to the velocity direction (parallel to the ZOY plane, shown in blue), it follows f_PSF, i.e.,

E_m^x(x, y_i) = f_{PSF}(y_i)\, f_{LSF}(x)
E_m^y(x_i, y) = f_{LSF}(x_i)\, f_{PSF}(y)   (17)
For a specific row or column, f_PSF(y_i) and f_LSF(x_i) act as constant coefficients. Summing the star trail grayscale along a coordinate direction therefore yields the marginal projection distribution of the star trail along that direction:

E_p^x(x) = \sum_i^{h} E_m^x(x, y_i) = f_{LSF}(x) \sum_i^{h} f_{PSF}(y_i)
E_p^y(y) = \sum_i^{w} E_m^y(x_i, y) = f_{PSF}(y) \sum_i^{w} f_{LSF}(x_i)   (18)
where w and h are the width and height of the effective pixel region of the star trail, respectively. The accumulation process only involves the superposition of coefficients, preserving the structural properties of f_PSF and f_LSF; that is, the centroid information of the start point and end point is retained in the projection distribution.
If the image point has a two-dimensional uniform motion relative to the image plane, let the start point of the image point be (x_1, y_1) and the end point be (x_2, y_2). As shown in Figure 5, the grayscale distribution of the star trail is no longer parallel to the coordinate axes, and the velocity of the star point on the image plane is decomposed into v_x along the X-axis and v_y along the Y-axis.
The cumulative projections of the star trail along the two axis directions are the result of the two component motions, analogous to the Y-direction analysis in Equation (18):

E_p^x(x) = \sum_i^{h} E_m^x(x, y_i) = f_{LSF}(x) \sum_i^{h} f_{PSF}(y_i)
E_p^y(y) = \sum_i^{w} E_m^y(x_i, y) = f_{LSF}(y) \sum_i^{w} f_{PSF}(x_i)   (19)

It can be seen that the projection distribution E_p^x(x), obtained by summing along the Y-axis, and the projection distribution E_p^y(y), obtained by summing along the X-axis, both still conform to the f_LSF form.
In this paper, we only consider the situation where the projections of the star trail along both the X-axis and Y-axis directions conform to an f_LSF. This implies that the star points must have a certain displacement in both directions. Therefore, it is necessary to impose a constraint on the minimum displacement l_min (in pixels) of the star points in both directions [29]:

\omega_x t_{exposure} \geq 2 \arcsin\!\left( \frac{l_{min}\, a \cos^2\phi}{2f} \right)
\omega_y t_{exposure} \geq 2 \arcsin\!\left( \frac{l_{min}\, a \cos^2\phi}{2f} \right)   (20)

where t_exposure is the exposure time, a is the pixel size, and \phi is the half field-of-view angle. Assume a is 5.5 μm, \phi is 8°, the focal length f is 40 mm, and the exposure time t_exposure is 100 ms. To obtain star trails longer than 20 pixels in both directions, the angular velocity about each of the two axes needs to be greater than about 1.6°/s.
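For a quick check of the numbers quoted above, the following sketch evaluates Equation (20) with the stated parameters and solves it for the minimum angular rate; it is an illustrative calculation only.

```python
import numpy as np

def min_angular_rate(l_min_px, pixel_size, half_fov_deg, focal_length, t_exposure):
    """Minimum angular rate (deg/s) that smears a star by at least l_min_px pixels.

    Direct evaluation of Equation (20), rearranged for the angular rate.
    All lengths share one unit (metres here).
    """
    phi = np.deg2rad(half_fov_deg)
    angle = 2 * np.arcsin(l_min_px * pixel_size * np.cos(phi) ** 2 / (2 * focal_length))
    return np.rad2deg(angle) / t_exposure

# Parameter values quoted in the text: a = 5.5 um, phi = 8 deg, f = 40 mm, 100 ms.
print(min_angular_rate(20, 5.5e-6, 8.0, 40e-3, 0.1))   # ~1.55 deg/s, i.e. about 1.6 deg/s
```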
Based on the above constraints, to solve the endpoint centroids of the star trail, one can fit its grayscale projection distribution using f L S F to obtain the four positional parameters that constitute the centroid coordinates. Although the final centroid coordinates cannot be obtained directly, the introduction of this prior significantly simplifies the calculation process compared to directly performing a time-dependent fitting on the two-dimensional distribution of the star trail. The determination of the endpoint centroid coordinates will be completed in conjunction with other features of the star trail in the following sections.

3. Methods

The modeling and analysis of star trails in the previous section indicated that the end points of star trails exhibit good time synchronization and grayscale distribution characteristics. This section will detail the multi-centroid extraction algorithm.
As shown in Figure 6, the high-dynamic star sensor begins exposure at time t_start and ends exposure at time t_end, capturing a dynamic star image in which an image coordinate system OXY is established. The start and end coordinates of the grayscale distribution of each star trail are determined using the following three steps:
  • Parameter Fitting: The f_LSF is used to fit the grayscale projection of the star trail along the X-axis, identifying two positional parameters θ_x1 and θ_x2 as the start or end positions of the star's movement along the X-axis. The same process is repeated for the Y-axis to find θ_y1 and θ_y2.
  • Coordinate Generation: Based on the angle φ between the principal axis of the grayscale distribution and the X-axis, the four positional parameters from Step 1 are combined into the endpoint coordinates (x, y)_{t1} and (x, y)_{t2}. Note that at this stage, it is still not determined which of t_1 and t_2 corresponds to t_start or t_end.
  • Timestamp Determination: By combining the results from the previous frame, the temporal order of the two endpoint coordinates (x, y)_{t1} and (x, y)_{t2} is determined, thus obtaining (x, y)_{start} and (x, y)_{end} for the star trail.
The (x, y)_{start} and (x, y)_{end} of all the star trails are then composed into the centroid groups W_{t_start} and W_{t_end}, respectively, completing the multi-centroid extraction for the dynamic star image.

3.1. Fitting Centroid Position Parameters of Star Trails

Let the dynamic star image I(u, v) have dimensions U × V, and let S be the set of star trails:

S = \{ S_i \mid i = 1, 2, \ldots, N \}   (21)

where i denotes the index of the star trail in the star image, and N is the number of trails in the star image. The subset S_i is the set of two-dimensional coordinates that belong to the connected region of a single star trail:

S_i = \{ (u, v) \mid (u, v) \ \text{belongs to star trail}\ i \}   (22)
The projection distributions of a single star trail S_i in the row and column directions are calculated and normalized:

P_x(v) = \mathrm{Norm}\!\left( \sum_{u=1}^{U} I(u, v) \right)
P_y(u) = \mathrm{Norm}\!\left( \sum_{v=1}^{V} I(u, v) \right)   (23)

where P_x(v) and P_y(u) are the projection distributions in the row and column directions, respectively.
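A minimal sketch of this projection step is given below. It assumes the star-trail region has already been segmented (e.g., by thresholding and connected-component labeling, which the paper does not detail here) and normalizes each profile to a unit peak, which is one reasonable reading of Norm(·).

```python
import numpy as np

def trail_projections(image, trail_mask):
    """Row/column grayscale projections of one star trail region (Equation (23)).

    image      : 2-D array I(u, v), u = row index, v = column index
    trail_mask : boolean array of the same shape, True inside the trail S_i
    Returns (P_x over columns v, P_y over rows u), each normalized to a unit peak.
    """
    region = np.where(trail_mask, image, 0.0)   # keep only pixels of this trail
    p_x = region.sum(axis=0)                    # sum over rows u    -> profile along v
    p_y = region.sum(axis=1)                    # sum over columns v -> profile along u
    return p_x / p_x.max(), p_y / p_y.max()
```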
To solve for the start and end points of the projection distributions, a fitting function f(X; θ) is designed based on the one-dimensional f_LSF:

f(X; \theta) = \theta_A \left[ \operatorname{erf}\!\left( \frac{X - \theta_1}{\sqrt{2}\,\theta_\rho} \right) - \operatorname{erf}\!\left( \frac{X - \theta_2}{\sqrt{2}\,\theta_\rho} \right) \right]   (24)

where θ = [θ_A, θ_1, θ_2, θ_ρ] are the parameters to be fitted, with θ_A being the amplitude of the f_LSF, θ_1 and θ_2 the start and end point coordinates of the f_LSF, and θ_ρ the spot radius. Parameters θ_1 and θ_2 form the endpoint coordinates of interest, while θ_A and θ_ρ are auxiliary variables introduced to facilitate the convergence of the fit and are not used further.
Since the grayscale projections along the X direction and the Y direction must be fitted separately, θ_x = [θ_xA, θ_x1, θ_x2, θ_xρ] and θ_y = [θ_yA, θ_y1, θ_y2, θ_yρ] are used to denote the two sets of parameters to be fitted. Based on the mean squared error, the error functions in the row direction error_x(θ_x) and the column direction error_y(θ_y) are defined as follows:

error_x(\theta_x) = \frac{1}{V} \sum_{v=1}^{V} \left[ P_x(v) - f(v; \theta_x) \right]^2
error_y(\theta_y) = \frac{1}{U} \sum_{u=1}^{U} \left[ P_y(u) - f(u; \theta_y) \right]^2   (25)

Nonlinear least squares fitting is used to find the parameter combinations that minimize the error functions. The fitting process can be expressed as

\theta_x^{*} = \arg\min_{\theta_x}\ error_x(\theta_x), \quad \theta_y^{*} = \arg\min_{\theta_y}\ error_y(\theta_y)
\text{s.t.} \quad 0.9 \leq \theta_A \leq 1, \quad 0.5 \leq \theta_\rho \leq 5
\min_{S_i}(v) < \theta_{x1} < \theta_{x2} < \max_{S_i}(v)
\min_{S_i}(u) < \theta_{y1} < \theta_{y2} < \max_{S_i}(u)   (26)

where the amplitude of the f_LSF is constrained to lie within [0.9, 1], and the spot radius is assumed to vary between 0.5 and 5 pixels. The start and end points of the trail must not exceed the coordinate range of the subset S_i. To accelerate the convergence of the fitting process, the initial values of the parameters θ_x and θ_y, denoted θ_x^(0) and θ_y^(0), are set as follows:

\theta_x^{(0)} = \left[\, 1,\ \min_{S_i}(v),\ \max_{S_i}(v),\ 5 \,\right]
\theta_y^{(0)} = \left[\, 1,\ \min_{S_i}(u),\ \max_{S_i}(u),\ 5 \,\right]   (27)
An example implementation is shown in Figure 7. The fitting results θ_x1, θ_x2, θ_y1, and θ_y2 are the four positional parameters that constitute the centroid coordinates.
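The sketch below fits one projection profile with a box-constrained nonlinear least squares solver. It is not the authors' implementation: the 1/2 factor in the model and the unit-peak normalization of the profile are assumed conventions chosen so that the amplitude bound [0.9, 1] is internally consistent, and the ordering constraint θ_1 < θ_2 is only encouraged by the initial values rather than enforced explicitly.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def f_lsf(X, theta):
    """Fitting function based on Equation (24); theta = [A, start, end, rho].
    The 1/2 factor (an assumed normalization) keeps the plateau height equal to A
    for a unit-peak profile."""
    A, t1, t2, rho = theta
    return 0.5 * A * (erf((X - t1) / (np.sqrt(2) * rho))
                      - erf((X - t2) / (np.sqrt(2) * rho)))

def fit_projection(profile, coords):
    """Fit one normalized projection profile; return the (start, end) positions.

    Bounds and initial values follow Equations (26) and (27). least_squares
    only supports per-parameter box bounds, not the joint ordering constraint.
    """
    lo, hi = float(coords.min()), float(coords.max())
    theta0 = np.array([1.0, lo, hi, 5.0])                  # Equation (27)
    bounds = ([0.9, lo, lo, 0.5], [1.0, hi, hi, 5.0])      # Equation (26)
    res = least_squares(lambda th: f_lsf(coords, th) - profile,
                        theta0, bounds=bounds)
    return res.x[1], res.x[2]

# Synthetic check with assumed endpoints 12.3 and 34.7 and rho = 1.2.
v = np.arange(0, 50, dtype=float)
profile = f_lsf(v, [1.0, 12.3, 34.7, 1.2])
profile /= profile.max()
print(fit_projection(profile, v))    # should recover roughly (12.3, 34.7)
```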

3.2. Generating Centroid Coordinates

Because the projection statistics reduce the dimensionality of the grayscale distribution description, the fitting process can only ascertain the numerical values that make up the endpoint centroids, leaving the pairing of the endpoint coordinates ambiguous, as illustrated on the right of Figure 8. The principal axis direction of the star trail grayscale distribution must therefore be combined with these values for an accurate determination.
In this paper, the principal axis direction of the star trail is analyzed based on eigenvectors. First, the weighted centroid (x̄, ȳ) of the pixels within the star trail S_i is calculated using the moment of inertia method:

\bar{x} = \frac{\sum I(u, v)\, v}{\sum I(u, v)}, \quad \bar{y} = \frac{\sum I(u, v)\, u}{\sum I(u, v)}   (28)
The covariance matrix C is then calculated:

C = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{xy} & \sigma_{yy} \end{bmatrix}, \quad
\sigma_{xx} = \frac{1}{N} \sum_{(u,v) \in S_i} (v - \bar{x})^2, \quad
\sigma_{yy} = \frac{1}{N} \sum_{(u,v) \in S_i} (u - \bar{y})^2, \quad
\sigma_{xy} = \frac{1}{N} \sum_{(u,v) \in S_i} (v - \bar{x})(u - \bar{y})   (29)

where N here denotes the number of pixels in S_i. The eigenvalues and eigenvectors of the covariance matrix C are solved, and the eigenvector a = (a_x, a_y) corresponding to the larger eigenvalue is selected; then, the angle φ between the principal axis direction and the X-axis is given by

\varphi = \arctan\left( a_y / a_x \right)   (30)
Based on the angle of the principal axis, the discussion is categorized as follows, as shown in Figure 8.
  • If the principal axis direction is in the 1st and 3rd quadrants, meaning that the signs of v_x and v_y are the same, then (θ_x1, θ_y1) and (θ_x2, θ_y2) are taken as the end points of the star trail.
  • If the principal axis direction is in the 2nd and 4th quadrants, meaning that the signs of v_x and v_y are opposite, then (θ_x2, θ_y1) and (θ_x1, θ_y2) are taken as the end points of the star trail, i.e.,
(\theta_{x1}, \theta_{y1}),\ (\theta_{x2}, \theta_{y2}), \quad \varphi \in \left( -\frac{\pi}{2},\, 0 \right] \cup \left[ \frac{\pi}{2},\, \pi \right)
(\theta_{x2}, \theta_{y1}),\ (\theta_{x1}, \theta_{y2}), \quad \varphi \in \left( -\pi,\, -\frac{\pi}{2} \right] \cup \left[ 0,\, \frac{\pi}{2} \right)   (31)
The processing procedures of Section 3.1 and Section 3.2 are applied to each star trail, and it is agreed that the coordinates corresponding to θ_y1 form the centroid group W_{t1}, while the coordinates corresponding to θ_y2 form the centroid group W_{t2}. At this point, the construction of the two centroid groups is complete.
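A compact sketch of the principal axis analysis and endpoint pairing is given below. The quadrant convention in the final branch follows the reconstruction of Equation (31) above and is stated as an assumption; the atan2 form is used only so that the full angular interval of that convention can be expressed.

```python
import numpy as np

def pair_endpoints(image, trail_mask, tx1, tx2, ty1, ty2):
    """Combine the four fitted positional parameters into two endpoint coordinates.

    Follows the principal-axis analysis of Section 3.2: the intensity-weighted
    centroid and covariance of the trail pixels give the principal axis angle,
    whose quadrant decides how the X and Y parameters are paired.
    """
    u, v = np.nonzero(trail_mask)             # u = row (y direction), v = column (x direction)
    w = image[u, v].astype(float)
    x_bar = np.sum(w * v) / np.sum(w)         # Equation (28)
    y_bar = np.sum(w * u) / np.sum(w)
    dv, du = v - x_bar, u - y_bar
    cov = np.array([[np.mean(dv * dv), np.mean(dv * du)],
                    [np.mean(dv * du), np.mean(du * du)]])   # Equation (29)
    vals, vecs = np.linalg.eigh(cov)
    ax, ay = vecs[:, np.argmax(vals)]         # eigenvector of the larger eigenvalue
    phi = np.arctan2(ay, ax)                  # principal axis angle, cf. Equation (30)

    # Pairing per the assumed interval convention of Equation (31).
    if (-np.pi / 2 < phi <= 0) or (np.pi / 2 <= phi < np.pi):
        return (tx1, ty1), (tx2, ty2)
    return (tx2, ty1), (tx1, ty2)
```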

3.3. Determining Timestamps for Centroid Groups

The start and end exposure timestamps for a dynamic star image come from the onboard computer’s time system, typically with microsecond-level precision. These two timestamps correspond to two constructed centroid groups, but the temporal order of these centroid groups has not yet been determined. This paper determines the temporal order of the centroid groups by analyzing inter-frame associations, thereby achieving the matching of centroid groups with their corresponding timestamps.
In the continuous star images captured by a star sensor under dynamic conditions, the centroids move along approximately linear paths on the image plane. Therefore, for centroid i, if t_0 is taken as the reference time and (x_i(t_0), y_i(t_0)) as the reference position, the distance between the star point and the reference position at time t is positively correlated with the time difference t − t_0. Based on this principle, temporal proximity can be inferred by analyzing distances on the image plane.
Let the previous centroid group with a known timestamp, as determined by the star sensor, be W_{t_0} with timestamp t_0. For the i-th centroid in W_{t_0}, take its coordinates in W_{t_1} and W_{t_2}, and calculate the Euclidean distances d_i(t_1) and d_i(t_2) from its position at t_0 to its positions at t_1 and t_2, respectively. These distances represent the displacements over the time intervals t_1 − t_0 and t_2 − t_0:

d_i(t_1) = \left\| \left( x_i(t_1), y_i(t_1) \right) - \left( x_i(t_0), y_i(t_0) \right) \right\|_2
d_i(t_2) = \left\| \left( x_i(t_2), y_i(t_2) \right) - \left( x_i(t_0), y_i(t_0) \right) \right\|_2   (32)
For the two centroid groups, the average displacements D_{t_1} and D_{t_2} of all centroids relative to their coordinates at time t_0 are calculated:

D_{t_1} = \frac{1}{N} \sum_{i=1}^{N} d_i(t_1), \quad D_{t_2} = \frac{1}{N} \sum_{i=1}^{N} d_i(t_2)   (33)
The centroid group with the smaller average displacement is the one closer in time to t_0, as shown in Figure 9:

W_{t_{start}} = W_{t_2},\ W_{t_{end}} = W_{t_1}, \quad D_{t_1} > D_{t_2}
W_{t_{start}} = W_{t_1},\ W_{t_{end}} = W_{t_2}, \quad D_{t_1} < D_{t_2}   (34)
To determine the timestamps of the centroid groups of the current dynamic star image, the W_{t_end} from the previous frame can be used as W_{t_0}. For the first dynamic star image from which multiple centroids are extracted, there is no preceding W_{t_end} to serve as a reference. Therefore, centroid extraction can be performed on the preceding star image using a classical method, such as the weighted centroid method (Equation (28)), and the resulting centroid group W_{t_mid} can be used as W_{t_0}. Since W_{t_mid} does not participate in the attitude information calculation, it does not require particularly high centroid extraction accuracy.
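A compact sketch of this inter-frame ordering step is shown below; it assumes the rows of the reference group and the two candidate groups refer to the same stars in the same order, i.e., that the star-to-star association has already been made.

```python
import numpy as np

def order_centroid_groups(w_t0, w_a, w_b):
    """Assign start/end roles to two endpoint centroid groups (Equations (32)-(34)).

    w_t0     : (N, 2) reference centroid group from the previous frame
    w_a, w_b : (N, 2) endpoint centroid groups with unknown temporal order
    Returns (w_start, w_end): the group closer to the reference is the start group.
    """
    d_a = np.linalg.norm(w_a - w_t0, axis=1).mean()   # average displacement D
    d_b = np.linalg.norm(w_b - w_t0, axis=1).mean()
    return (w_a, w_b) if d_a < d_b else (w_b, w_a)
```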

4. Experiment and Results

This paper employs numerical simulations and semi-physical experiments to validate the feasibility of the proposed method. In the numerical simulation experiments, the centroid localization accuracy of the proposed method is compared with that of the classical centroid localization method across various high-dynamic scenarios. The performance of centroid localization under noisy conditions is also tested, and the effect of multiple centroid extraction on doubling the attitude frequency is demonstrated. For the semi-physical experiments, a setup for capturing dynamic star images was constructed using a turntable and a star simulator, through which the effectiveness of the proposed method in real images was verified.

4.1. Numerical Simulation Experiment

To verify the impact of the three-axis angular velocity on the centroid extraction accuracy, simulations of star trails are conducted. The simulation parameters are as follows: focal length f = 42 mm, pixel size a = 5.5 μm, number of pixels 2048 × 2048, exposure time 100 ms, simulation step Δt = 0.1 ms, and quantization bit depth 8 bit. The numerical simulation of the star trail is based on Equation (11), with Gaussian-distributed noise added:

E(x, y) = E_d(x, y) + N(\mu, \sigma)   (35)

4.1.1. Impact of Three-Axis Angular Velocity on Accuracy

In the analysis of Section 2.3, the Z-axis angular velocity was neglected. To verify the impact of the three-axis angular velocities on centroid extraction, a simulation experiment was conducted to test the extraction accuracy with ω_x = ω_y set to 3°/s, 5°/s, 8°/s, and 10°/s while the Z-axis angular velocity ω_z is varied, with the Gaussian noise variance set to 0.3.
To validate the effectiveness of the algorithm across the entire image plane, experiments were repeated with a star trail at different positions on the image plane, as shown in Figure 10, which include the center of the image (a), the right edge of the image (b), and the lower left corner of the image (c).
The proposed method is compared with two other extraction approaches: the weighted centroid method [26] and a fitting method [30] based on a dynamic image model. The former represents a classical method, while the latter is an improved method specifically strengthened for star trails.
To evaluate the precision of the centroid extraction process, the Euclidean distance ε between the true centroid position and the computed centroid position within the image coordinate system is utilized as a metric
\varepsilon = \sqrt{ \left( x_s(t_{ref}) - x_t \right)^2 + \left( y_s(t_{ref}) - y_t \right)^2 }   (36)

where x_t and y_t denote the centroid coordinates obtained at time t using a given centroid extraction method, while x_s(t_ref) and y_s(t_ref) are the true values from the simulation.
However, since different centroid extraction methods aim to determine the centroid coordinates at different target moments, the true values corresponding to these specific moments t_ref should be selected for comparison. For the weighted centroid method and the fitting method, which aim to determine the centroid coordinates at the midpoint of the exposure time [26,30], the reference time t_ref is set to t_mid = (t_start + t_end)/2. In contrast, the proposed method determines the centroid positions at both the start and end of the exposure, so the reference time t_ref is set to t_start or t_end accordingly.
The subplot in Figure 10 illustrates the process of this experiment. According to Equation (4), the centroid coordinates are calculated with a step size of 0.1 ms. For an exposure duration of 100 ms, this produces 1000 independent centroid coordinates, shown as the red dots in the subplot. The accumulation of point spread functions centered on these centroid coordinates forms a star trail. Among these centroid coordinates, the coordinate at t_mid is selected as the true value for comparison with the weighted centroid method and the fitting method, while the centroid coordinates at t_start and t_end are chosen as the true values for the proposed method, shown as the green squares.
The experimental results are illustrated in Figure 11, where the horizontal axis represents the angular velocity in degrees per second and the vertical axis shows the centroid extraction error in pixels. Particular attention is drawn to the angular velocity of ±8°/s and the centroid extraction error of 0.1 pixels, which are marked by gray dotted lines. Different methods are distinguished by color: the proposed method is represented by the orange and green curves, the weighted centroid method by black, and the fitting method by brown. Note that, since the proposed method extracts two centroids from each star trail, each plot contains two results for it.
From the experimental results, it can be observed that as ω_z increases, the errors of all three centroid localization methods increase. However, the error of the proposed method is less sensitive to changes in ω_x and ω_y. With ω_x and ω_y both reaching up to 10°/s, the average centroid extraction accuracy of the proposed method shows a significant improvement, with enhancements of 70% over the weighted centroid method and 31% over the fitting method. In dynamic scenarios where ω_x = ω_y ≤ 10°/s and ω_z ≤ 8°/s, the centroid extraction accuracy of the proposed method is better than 0.1 pixels.

4.1.2. Impact of Noise on Accuracy

To evaluate the performance of the proposed method under noisy conditions, star trails were simulated with different levels of Gaussian noise. The image plane positions were randomized, and a total of 30 simulations were conducted. The three-axis angular velocities were all set to 8°/s, and the noise variance was varied from 0.1 to 2.0.
Figure 12 presents a boxplot that displays the central tendency and distribution of the centroid extraction error for the proposed method under varying variances of Gaussian noise. The horizontal axis denotes the increasing variance of noise, while the vertical axis represents the centroid extraction error in pixels. On each box, the black marker indicates the mean error, and the lower and upper edges of the box correspond to the 25% and 75% percentiles of the data distribution, respectively. The whiskers extend to the furthest points that are not considered outliers, with outliers individually marked by hollow circles.
The results indicate that the centroid extraction error increases with the increase in Gaussian noise variance. However, under conditions where the three-axis angular velocities are as high as 8°/s and the Gaussian noise variance is 2.0, the centroid extraction error does not exceed 0.25 pixels.

4.1.3. High-Frame-Rate Attitude Determination Experiment

The proposed method can extract two centroid groups from a dynamic star image, and these two centroid groups can be independently used for attitude determination, thereby doubling the attitude measurement frequency of the star sensor. To verify the effectiveness of the algorithm in doubling the attitude measurement frequency in a sequence of star images, a numerical simulation experiment was conducted in this section.
The experiment simulates a star sensor that operates at an imaging frame rate of 8 Hz and has an exposure duration of 100 ms. Within the simulation period of 1000 ms, it acquires a total of eight images. Stars with magnitude Mv5.5 from the Tycho-2 catalog are used, with initial attitude angles [2, 5, 0] deg and three-axis angular velocities [−8, −8, 8] deg/s.
Figure 13 shows the 1st, 3rd, 6th, and 8th dynamic star images out of the 8 images, captured during the intervals of 0–100 ms, 250–350 ms, 625–725 ms, and 875–975 ms, respectively. The images have been inverted, where deeper colors indicate higher energy levels. It is evident that the high-dynamic attitude maneuvers cause the star points to appear as distinct trails.
The proposed method was employed to process these dynamic star images in order to obtain multiple centroid groups. In each star image of Figure 13, the green and orange points, respectively, represent the starting centroids and ending centroids of the star trails extracted by the proposed method. Additionally, these four subplots display the endpoint grayscale distribution details of the same star, with the true values of the centroid coordinates marked by “×”. All the green points and orange points in each image constitute the starting centroid group W t s t a r t and the ending centroid group W t e n d . For example, in Figure 13c, the centroid group W 725   ms , composed of all the orange points, corresponds to the centroid positions at the end of the exposure for that star image, which occurs at 725 ms.
The centroid groups are applied to star image recognition and attitude determination algorithms [13], and the results of attitude determination are compared with the attitude data at corresponding moments to validate the effectiveness of the centroid groups. Figure 14 illustrates the high-frame-rate attitude determination results. From top to bottom, it shows the attitude angle calculation results ω x , ω y , and ω z , respectively. The horizontal axis in the figure represents the time of the simulation in seconds, while the left vertical axis indicates angular velocity in degrees, and the right vertical axis shows the angular error Δ ω in arcseconds. The green straight lines represent the attitude angles with uniform change. Green squares denote the attitude angles calculated from the starting centroid group W t s t a r t , and orange squares denote those calculated from the ending centroid group W t e n d . Black asterisks “*” indicate the attitude errors at the current moment.
This experiment shows that the principal axis analysis in Section 3.2 and the timestamp analysis in Section 3.3 ensure the coordinate positions and temporal sequence of the multi-centroid groups. The generated centroid groups satisfy the algorithmic requirements for subsequent attitude measurement tasks, resulting in superior accuracy with attitude errors of less than 8 arcseconds in both the X and Y directions, and less than 20 arcseconds along the Z-axis. Thanks to the extraction of dual centroid groups from a single star image, the proposed method achieves a measurement rate of 16 Hz at an imaging frame rate of 8 Hz, thereby doubling the efficiency of attitude information measurement, which demonstrates its potential for application in dynamic conditions where attitude information changes rapidly.

4.2. Semi-Physical Experiment

To verify the effectiveness of the proposed algorithm in a real situation, a semi-physical experimental system was designed to acquire dynamic star images that contain real noise.
As shown in Figure 15, the semi-physical experimental system for dynamic star images consists of the system controller, the drivers, a three-axis turntable, a star sensor, and a star simulator. The star simulator is fixed on a multi-degree-of-freedom adjustment platform, and the star sensor is mounted on the three-axis turntable. The coordinate system is defined as shown in Figure 15, with the Z-axis coinciding with the turntable's axis of rotation. Initially, the entrance pupil of the star sensor is aligned with the exit pupil of the star simulator.
At the start of the simulation, the system controller drives the star simulator to display a static star image in a specified direction. Simultaneously, the turntable is controlled through the driver to move at a set angular velocity, and the star sensor is controlled to capture images. After the controller receives the images from the star sensor, it preprocesses them and uses the proposed method to calculate the centroids and determine the attitude. Based on the attitude determination results, the angular velocity is calculated, and the calculated results are compared with the control parameters of the turntable.
The specifications of the star sensor are as follows: focal length f = 41.8 mm, pixel size a = 5.5 μm, number of pixels 2048 × 2048, exposure time 166 ms, and principal point coordinates [u_0, v_0] = [1035.23, 1133.77].
The angular velocities for the X-axis and Y-axis of the turntable are set to 5°/s. According to Rodrigues' rotation formula, the resultant angular velocity is 7.07°/s. Therefore, the rotation angle ψ_0 during the exposure time is

\psi_0 = 7.07^\circ/\mathrm{s} \times 0.166\ \mathrm{s} = 1.17^\circ   (37)
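The quoted resultant rate and rotation angle can be reproduced with the short calculation below; the zero Z-axis rate is an assumption, since the text only specifies the X-axis and Y-axis commands.

```python
import numpy as np

# Resultant angular rate of 5 deg/s about X and 5 deg/s about Y, and the
# rotation angle accumulated over the 166 ms exposure (Equation (37)).
omega = np.array([5.0, 5.0, 0.0])   # deg/s; Z-axis rate assumed zero
rate = np.linalg.norm(omega)        # ~7.07 deg/s
psi_0 = rate * 0.166                # ~1.17 deg
print(f"resultant rate = {rate:.2f} deg/s, rotation angle = {psi_0:.2f} deg")
```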
The experimental results are shown in Figure 16 where three consecutive dynamic star images were selected for the test. Based on the proposed method, two centroid groups were extracted from each dynamic star image. The green and orange points in each image represent the starting centroid group and the ending centroid group, respectively. The attitude determination algorithm was then used to calculate the corresponding three-axis attitude angles for each centroid group, and the rotation angles were solved using Rodrigues’ theorem.
Ultimately, the rotation angles ψ recovered from the three dynamic star images were 1.16°, 1.28°, and 1.19°, differing from the actual control value ψ_0 by 0.01°, −0.11°, and −0.02°, respectively.

5. Discussion

This paper proposes a novel centroid extraction method that extracts two centroid groups from a single star image and validates the precision and noise resistance of the proposed algorithm through experiments.
In Section 4.1.1, we compare the performance of the proposed method with that of traditional centroid extraction methods under high-dynamic conditions where angular velocities are present along all three axes. The results show that the proposed method achieves higher centroid extraction accuracy in complex dynamic scenarios. This is because, under the influence of three-axis angular velocities, star trails form curves with small curvatures, leading to significant errors in classical centroid localization methods, which rely on the overall features of the star trails. In contrast, the proposed method focuses on the edge features of the star trail end points, making it less sensitive to overall deformations. However, since the star trail model in this paper assumes uniform linear motion across the image plane, the error increases with the increase in Z-axis angular velocity. Future work could reduce this error through more refined modeling.
In Section 4.1.2, we evaluate the performance of the algorithm under different Gaussian variances, and the results show that the proposed method still performs well. The proposed method is based on fitting the projection distribution, and the summation process of gray values along the X-axis (or Y-axis) during the projection calculation enhances the algorithm’s robustness to noise.
By extracting two independent centroid groups from dynamic star images, the star sensor can perform two attitude measurements in a single imaging process. With the imaging frame rate remaining unchanged, the proposed method achieves a doubling of the measurement frame rate. This has significant potential for applications in high-dynamic scenarios where attitude information changes rapidly, as further demonstrated by the experiments.
The experiments in Section 4.1.3, which involve accurate multi-centroid extraction from consecutive star images, achieve a doubling of the attitude measurement frequency, validating the effectiveness of the proposed algorithmic procedure. The semi-physical simulation experiments in Section 4.2, based on accurate centroid extraction from real star images, complete the angular velocity estimation task for a single star image, expanding the application scenarios of star sensors in high-dynamic tasks.

6. Conclusions

To improve the centroid extraction accuracy and efficiency of high-dynamic star sensors, this paper proposes a multi-centroid localization method based on the prior distribution of star trail projections. This method successfully achieves high-precision extraction of the endpoint centroid coordinates of dynamic star trails by establishing an accurate mapping relationship between attitude information and star trails, thereby enhancing the centroid extraction accuracy and frequency of high-dynamic star sensors. Experimental results show that when the three-axis angular velocity reaches 8°/s, the centroid extraction accuracy of this method remains better than 0.1 pixels, and the attitude measurement frequency is doubled compared to the frame rate, demonstrating the superiority of this method in high-dynamic attitude measurement tasks.

Author Contributions

Conceptualization, X.T., Q.C. and X.Y.; data curation, R.D.; formal analysis, T.X.; funding acquisition, Q.C., Z.F. and X.Y.; investigation, X.T., Q.C. and T.X.; methodology, X.T. and Z.F.; project administration, X.T. and X.Y.; software, X.T.; supervision, X.T. and X.Y.; validation, X.T., Q.C., Z.F. and R.D.; visualization, X.T. and T.X.; writing—original draft, X.T.; writing—review and editing, X.T., Z.F. and T.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded in part by the National Natural Science Foundation of China under Grant 62171430, in part by the Key Research and Development Program of Jilin Province under Grant 20230201061GX, and in part by the Natural Science Foundation of Jiangsu Province under Grant BK20241641.

Data Availability Statement

The data, experimental codes, and images used to support the findings of this research are available from the corresponding author upon reasonable request due to privacy reasons.

Acknowledgments

The authors are sincerely grateful for the constructive comments and suggestions of the manuscript reviewers.

Conflicts of Interest

Qipeng Cao was employed by DFH Satellite Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Fu, Z.; Lei, Y.; Tang, X.; Xu, T.; Tian, G.; Gao, S.; Du, J.; Yang, X. Oriented Clustering Reppoints for Resident Space Objects Detection in Time Delay Integration Images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5650715. [Google Scholar] [CrossRef]
  2. Jiamin, D.; Xiubin, Y.; Meili, Z.; Xingyu, T. Fast Multispectral Fusion and High-Precision Interdetector Image Stitching of Agile Satellites Based on Velocity Vector Field. IEEE Sens. J. 2022, 22, 22134–22147. [Google Scholar] [CrossRef]
  3. Xu, T.; Yang, X.; Wang, S.; Han, J.; Chang, L.; Yue, W. Imaging Velocity Fields Analysis of Space Camera for Dynamic Circular Scanning. IEEE Access 2020, 8, 191574–191585. [Google Scholar] [CrossRef]
  4. Lu, Z.; Shen, X.; Li, D.; Chen, Y. Integrated Imaging Mission Planning Modeling Method for Multi-Type Targets for Super-Agile Earth Observation Satellite. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4156–4169. [Google Scholar] [CrossRef]
  5. Lu, Z.; Shen, X.; Li, D.; Chen, Y.; Li, D. A Mission Planning Modeling Method of Multipoint Target Imaging Within a Single Pass for Super-Agile Earth Observation Satellite. IEEE Syst. J. 2022, 16, 1921–1932. [Google Scholar] [CrossRef]
  6. Wu, D.; Chen, Y.; Li, Q.; Xu, Z.; Feng, H.; Man, Y. Attitude Scheduling and Verification for Dynamic Imaging of Agile Satellites. Optik 2020, 206, 164365. [Google Scholar] [CrossRef]
  7. Hasan, M.N.; Haris, M.; Qin, S. Fault-Tolerant Spacecraft Attitude Control: A Critical Assessment. Prog. Aerosp. Sci. 2022, 130, 100806. [Google Scholar] [CrossRef]
  8. Liu, Y.; Zhu, L.; Yuan, X. Attitude Optimization for Onboard Localization in Array Based Satellite Systems. IEEE Signal Process. Lett. 2024, 31, 326–330. [Google Scholar] [CrossRef]
  9. Wang, X.; Wu, G.; Xing, L.; Pedrycz, W. Agile Earth Observation Satellite Scheduling Over 20 Years: Formulations, Methods, and Future Directions. IEEE Syst. J. 2021, 15, 3881–3892. [Google Scholar] [CrossRef]
  10. Wu, L.; Jin, Y.; Guo, H.; Wang, J.; Zhang, F.; Zhang, Q.; Li, M. On-Orbit Calibration Method for Star Sensors Based on Microvariation in Intrinsic Parameters. IEEE Sens. J. 2023, 23, 18916–18925. [Google Scholar] [CrossRef]
  11. Wang, X.; Chen, X.; Li, Z.; Zhu, J. An Optical System of Star Sensors with Accuracy Performance Varying with the Field of View. Sensors 2023, 23, 8663. [Google Scholar] [CrossRef] [PubMed]
  12. Yi, J.; Ma, Y.; Long, H.; Lu, K.; Zhao, R. On-Orbit High-Precision Calibration for Deep-Coupled Parameters of Star Sensor and Gyroscope Systems. Opt. Express OE 2024, 32, 32187–32209. [Google Scholar] [CrossRef] [PubMed]
  13. Liebe, C.C. Accuracy Performance of Star Trackers—A Tutorial. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 587–599. [Google Scholar] [CrossRef]
  14. Han, J.; Yang, X.; Xu, T.; Fu, Z.; Chang, L.; Yang, C.; Jin, G. An End-to-End Identification Algorithm for Smearing Star Image. Remote Sens. 2021, 13, 4541. [Google Scholar] [CrossRef]
  15. Xiaolin, N.; Zonghe, D.; Pingping, C.; Wu, W.R.; Fang, J.C.; Liu, G. Spacecraft Angular Velocity Estimation Method Using Optical Flow of Stars. Sci. China Inf. Sci. 2018, 62, 112203. [Google Scholar]
  16. He, L.; Ma, Y.; Zhao, R.; Hou, Y.; Zhu, Z. High Update Rate Attitude Measurement Method of Star Sensors Based on Star Point Correction of Rolling Shutter Exposure. Sensors 2021, 21, 5724. [Google Scholar] [CrossRef]
  17. Wang, Z.; Jiang, J.; Zhang, G. Global Field-of-View Imaging Model and Parameter Optimization for High Dynamic Star Tracker. Opt. Express 2018, 26, 33314. [Google Scholar] [CrossRef]
  18. Weina, Z.; Wei, Q.; Lei, G. Blurred Star Image Processing for Star Sensors under Dynamic Conditions. Sensors 2012, 12, 6712–6726. [Google Scholar] [CrossRef]
  19. Tan, Y.; Chen, X.; Shi, F.; Wang, J.; Cao, M.; Liu, Q.; Wang, C. Dynamic Simulation Analysis of Working Equipment of Small Backhoe Hydraulic Excavator. J. Phys. Conf. Ser. 2024, 2691, 012007. [Google Scholar] [CrossRef]
  20. Ares, J.; Arines, J. Influence of Thresholding on Centroid Statistics: Full Analytical Description. Appl. Opt. AO 2004, 43, 5796–5805. [Google Scholar] [CrossRef]
  21. Xu, W.; Li, Q.; Feng, H.; Xu, Z.; Chen, Y. A Novel Star Image Thresholding Method for Effective Segmentation and Centroid Statistics. Optik 2013, 124, 4673–4677. [Google Scholar] [CrossRef]
  22. Kim, K. A New Star Identification Using Patterns in the Form of Gaussian Mixture Models. Adv. Space Res. 2024, 74, 319–331. [Google Scholar] [CrossRef]
  23. Wang, H.Y.; Xu, E.S.; Li, Z.F.; Li, J.J.; Qin, T. Gaussian Analytic Centroiding Method of Star Image of Star Tracker. Adv. Space Res. 2015, 56, 2196–2205. [Google Scholar] [CrossRef]
  24. Li, T.; Zhang, C.; Kong, L.; Wang, J.; Pang, Y.; Li, H.; Wu, J.; Yang, Y.; Tian, L. Centroiding Error Compensation of Star Sensor Images for Hypersonic Navigation Based on Angular Distance. IEEE Trans. Instrum. Meas. 2024, 73, 8508211. [Google Scholar] [CrossRef]
  25. Tan, W.; Qin, S.; Myers, R.M.; Morris, T.J.; Jiang, G.; Zhao, Y.; Wang, X.; Ma, L.; Dai, D. Centroid Error Compensation Method for a Star Tracker under Complex Dynamic Conditions. Opt. Express 2017, 25, 33559. [Google Scholar] [CrossRef]
  26. Yan, J.; Jiang, J.; Zhang, G. Dynamic Imaging Model and Parameter Optimization for a Star Tracker. Opt. Express 2016, 24, 5961. [Google Scholar] [CrossRef]
  27. Fan, Q.; Zhang, M.; Xue, Y. Restoration of Motion-Blurred Star Images with Elliptical Star Streaks. Meas. Sci. Technol. 2023, 34, 065403. [Google Scholar] [CrossRef]
  28. Mu, Z.; Wang, J.; He, X.; Wei, Z.; He, J.; Zhang, L.; Lv, Y.; He, D. Restoration Method of a Blurred Star Image for a Star Sensor Under Dynamic Conditions. Sensors 2019, 19, 4127. [Google Scholar] [CrossRef]
  29. Wan, X.; Wang, Y.; Wang, G.; Wei, X.; Wang, W.; Li, J.; Zhang, G. A Spatial–Temporal Star Spots Extraction for High Dynamics Star Sensors. IEEE Sens. J. 2023, 23, 13060–13071. [Google Scholar] [CrossRef]
  30. Mu, S.; Wang, L.; Li, G. A Method of Star Spot Center-of-Mass Localization Algorithm for Star Sensor Under Highly Dynamic Conditions. IEEE Sens. J. 2023, 23, 14957–14966. [Google Scholar] [CrossRef]
  31. Yan, J.; Jiang, J.; Zhang, G. Modeling of Intensified High Dynamic Star Tracker. Opt. Express 2017, 25, 927. [Google Scholar] [CrossRef]
  32. Ma, Y.; Jiang, J.; Wang, G.; Li, J.; Wang, Z. Dynamic Star Positioning Accuracy Improving Method Using Coded Exposure for Star Sensor. IEEE Trans. Instrum. Meas. 2024, 73, 5015412. [Google Scholar] [CrossRef]
  33. Yu, W.; Qu, H.; Zhang, Y. A High-Accuracy Star Centroid Extraction Method Based on Kalman Filter for Multi-Exposure Imaging Star Sensors. Sensors 2023, 23, 7823. [Google Scholar] [CrossRef] [PubMed]
  34. Yang, H.; Jin, Y.; Hu, Y.; Zhang, D.; Yu, Y.; Liu, J.; Li, J.; Jiang, X.; Yu, X. Image Degradation Model for Dynamic Star Maps in Multiple Scenarios. Photonics 2022, 9, 673. [Google Scholar] [CrossRef]
Figure 1. Star sensor coordinate relationships.
Figure 2. The process of centroid coordinates and centroid groups changing over time during the exposure period. For clarity, only two centroids are plotted as an illustrative representation.
Figure 3. One-dimensional point spread functions (PSFs) and line spread functions (LSFs) at different spot radii. (a) PSFs with different central positions. (b) LSFs are obtained by integrating the PSFs in (a) along the X-axis.
Figure 4. The trail of a star moving along the X-axis. Each horizontal line (parallel to the star’s motion, shown in orange) follows an LSF in grayscale, and each vertical line (perpendicular to the star’s motion, shown in blue) follows a PSF in grayscale. The respective structures are still preserved in the projection distributions.
Figure 5. The trail of a star moving along both the X-axis and Y-axis. The projection distributions in both directions conform to the LSF structure, containing the coordinate information of the start and end points.
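As a rough numerical illustration of the projection step depicted in Figures 4 and 5, the two grayscale projection distributions of a trail window can be computed by summing the window along each image axis (a minimal NumPy sketch; `window` is assumed to be a background-subtracted sub-image containing a single trail, and the function name is hypothetical):

```python
import numpy as np

def projection_distributions(window: np.ndarray):
    """Project a star-trail grayscale window onto the X and Y axes.

    For a Gaussian-like PSF, both projections retain the line-spread
    structure and therefore the start/end coordinates of the trail.
    """
    proj_x = window.sum(axis=0)  # collapse rows: profile along the X (column) direction
    proj_y = window.sum(axis=1)  # collapse columns: profile along the Y (row) direction
    return proj_x, proj_y
```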
Figure 6. Schematic diagram of the multi-centroid extraction method for the star trail.
Figure 7. An example of determining coordinate parameters through grayscale projection distribution fitting.
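A minimal sketch of how the fitting in Figure 7 could be realized, assuming a Gaussian PSF so that each projection follows the erf-difference profile given above; the function names, initial-guess heuristics, and parameterization are illustrative rather than the paper's actual implementation:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def lsf_model(x, a, x_s, x_e, sigma, b):
    """Erf-difference profile of a Gaussian PSF swept from x_s to x_e, plus background b."""
    return a * (erf((x - x_s) / (np.sqrt(2) * sigma))
                - erf((x - x_e) / (np.sqrt(2) * sigma))) + b

def fit_endpoints(profile):
    """Fit a 1-D projection profile and return the estimated endpoint coordinates (x_s, x_e)."""
    x = np.arange(profile.size, dtype=float)
    half = 0.5 * (profile.max() + profile.min())
    above = np.flatnonzero(profile > half)          # crude half-maximum crossings
    p0 = [(profile.max() - profile.min()) / 2.0,    # amplitude
          float(above[0]), float(above[-1]),        # endpoint guesses
          1.5,                                      # PSF width in pixels (assumed)
          float(profile.min())]                     # background level
    popt, _ = curve_fit(lsf_model, x, profile, p0=p0)
    return popt[1], popt[2]
```

Applying the same fit to both the X and Y projections would yield the four positional parameters referred to in Figure 8.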
Figure 8. Schematic diagram of principal axis analysis. (Left): Two possible scenarios for the endpoint centroid coordinates once the four positional parameters have been determined, identified by a green stripe and an orange stripe. (Right): The different ranges of principal axis angles correspond to the two possible scenarios, shown in green and orange, respectively.
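One common way to estimate the trail's principal axis, which can then be used to decide how the X endpoints pair with the Y endpoints, is through second-order central image moments; the sketch below is a standard moment-based orientation estimate and is not claimed to be the paper's exact procedure:

```python
import numpy as np

def principal_axis_angle(window: np.ndarray) -> float:
    """Orientation (radians) of a trail window's principal axis from image moments."""
    y, x = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    m00 = window.sum()
    xc = (x * window).sum() / m00                     # intensity-weighted centroid
    yc = (y * window).sum() / m00
    mu20 = ((x - xc) ** 2 * window).sum()             # second-order central moments
    mu02 = ((y - yc) ** 2 * window).sum()
    mu11 = ((x - xc) * (y - yc) * window).sum()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # angle in (-pi/2, pi/2]
```

Subject to the image-coordinate convention, the sign of this angle distinguishes between the two stripe scenarios in the left panel.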
Figure 9. Schematic diagram of the temporal sequence analysis of centroid groups. (Left): Two endpoint centroid groups without determined temporal order. (Middle): The centroid group at time t1 is closer to the centroid group at time t0. (Right): The centroid group at time t2 is closer to the centroid group at time t0.
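The inter-frame correlation illustrated in Figure 9 can be sketched as a simple proximity test: of the two endpoint centroid groups in the current frame, the one lying closer to the previous frame's ending group is treated as the starting group (a minimal sketch with hypothetical names; the arrays are assumed to hold the same stars, matched by index):

```python
import numpy as np

def order_endpoint_groups(prev_end, group_a, group_b):
    """Assign start/end roles to the two (N, 2) endpoint centroid groups of a frame.

    prev_end is the (N, 2) ending centroid group of the previous frame; the group
    lying closer to it on average is taken as the start of the current exposure.
    """
    dist_a = np.linalg.norm(group_a - prev_end, axis=1).mean()
    dist_b = np.linalg.norm(group_b - prev_end, axis=1).mean()
    return (group_a, group_b) if dist_a <= dist_b else (group_b, group_a)
```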
Figure 10. Three image plane positions used in the simulation experiments, where a, b, and c denote the center, the right edge, and the lower left corner of the image plane coordinate system, respectively. The enlarged inset shows the grayscale distribution of the star trail; the continuous red dots superimposed on it are the centroid positions generated from the attitude information, and the green squares mark the centroid coordinates selected at specific reference times.
Figure 11. The impact of three-axis attitude angles on the accuracy of the centroid extraction method proposed in this paper and the classical methods. (a–d), (e–h), and (i–l) display the results for the center, right edge, and lower left corner of the image plane, respectively.
Figure 12. The impact of Gaussian noise with different variances on the centroid localization accuracy of the proposed method.
Figure 13. Multi-centroid extraction from consecutive star images. (a–d) show the results of multi-centroid extraction from the 1st, 3rd, 6th, and 8th images, respectively, out of 8 simulated dynamic star images.
Figure 14. High-frame-rate attitude determination results. (a–c) show the attitude angle calculation results for the X-axis, Y-axis, and Z-axis, respectively.
Figure 15. Semi-physical experimental system for dynamic star images.
Figure 16. Semi-physical experimental results. The green and orange points in each image represent the starting centroid group and the ending centroid group, respectively. The three-axis attitude angles are likewise shown in green and orange, corresponding to the start and end of the exposure, respectively.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
