aDepartment of Precision Instrument, Tsinghua University, Beijing 100084, China
bState Key Laboratory of Precision Measurement Technology and Instruments, Tsinghua University, Beijing 100084, China
cBeijing Advanced Innovation Center for Integrated Circuits, Tsinghua University, Beijing 100084, China
dJoint International Research Laboratory of Advanced Photonics and Electronics, Beijing Information Science & Technology University, Beijing 100192, China
eBeijing Institute of Control Engineering, Beijing 100190, China
Subpixel localization techniques for estimating the positions of point-like images captured by pixelated image sensors have been widely used in diverse optical measurement fields. With unavoidable imaging noise, there is a precision limit (PL) when estimating the target positions on image sensors, which depends on the detected photon count, noise, point spread function (PSF) radius, and PSF’s intra-pixel position. Previous studies have clearly reported the effects of the first three parameters on the PL but have neglected the intra-pixel position information. Here, we develop a localization PL analysis framework for revealing the effect of the intra-pixel position of small PSFs. To accurately estimate the PL in practical applications, we provide effective PSF (ePSF) modeling approaches and apply the Cramér-Rao lower bound. Based on the characteristics of small PSFs, we first derive simplified equations for finding the best PL and the best intra-pixel region for an arbitrary small PSF; we then verify these equations on real PSFs. Next, we use the typical Gaussian PSF to perform a further analysis and find that the final optimum of the PL is achieved at the pixel boundaries when the Gaussian radius is as small as possible, indicating that the optimum is ultimately limited by light diffraction. Finally, we apply the maximum likelihood method. Its combination with ePSF modeling allows us to successfully reach the PL in experiments, making the above theoretical analysis effective. This work provides a new perspective on combining image sensor position control with PSF engineering to make full use of information theory, thereby paving the way for thoroughly understanding and achieving the final optimum of the PL in optical localization.
Haiyang Zhan, Fei Xing, Jingyu Bao, Ting Sun, Zhenzhen Chen, Zheng You, Li Yuan.
Analyzing the Effect of the Intra-Pixel Position of Small PSFs for Optimizing the PL of Optical Subpixel Localization.
Engineering, 2023, 27(8): 140-149. DOI: 10.1016/j.eng.2023.03.009
In optical measurement, a point-like source is a basic and typical optical target. In Fig. 1(a), photons from a point source pass through an optical system and appear on a pixelated image sensor as a spot known as the point spread function (PSF), and then form a digital image after pixelization. From the pixel intensities, the position (x*, y*) of the PSF (or the target) on the image sensor can be computed with subpixel precision. This process, which is known as subpixel localization, has a wide range of applications in diverse fields [1], [2], [3], [4], [5], [6], [7], [8], [9]. In astronomy, celestial bodies are imaged and localized by means of telescopes [10] or star trackers [11] for universe exploration applications such as black hole detection [12], [13] and spacecraft navigation [14], [15], [16]. In biological microscopy, combined with intermittently active fluorescent probes, the subpixel localization of single molecules produces nanoscale reconstructed images that transcend the diffraction limit (DL), enabling biologists to observe cellular structures with unprecedented resolution [17], [18], [19], [20]. One of the most basic questions in these optical measurement applications concerns the precision and the precision limit (PL) of subpixel localization.
Due to unavoidable imaging noise such as photon shot noise and pixel dark noise, subpixel localization has a theoretical PL, which is mainly related to the detected photon count N, the pixel dark noise σd2, the PSF radius r, and the PSF’s intra-pixel position (x, y) representing the relative PSF position with respect to the pixel boundary. The Cramér-Rao lower bound (CRLB) in statistics, which describes the minimum variance of an unbiased estimator when estimating a parameter, has been widely applied for revealing and optimizing the localization PL [21], [22], [23]. Existing studies have clearly demonstrated the effects of the first three arguments on the PL function [23], [24], [25], but they usually neglect the intra-pixel position information by assuming that the PSF is randomly located in a square pixel and then averaging (i.e., taking the root mean square (RMS) of) the PL over all intra-pixel positions. Such studies generally conclude that, with a certain detected photon count and noise, the PL(N, σd2, r) reaches its optimum when the PSF has a moderate size (e.g., Fig. 1(b)). The results of these studies work for the typical PSFs of telescopes and microscopes with sizes comparable to or larger than the pixel size [21], [26], in which case the PL is almost unchanged at different intra-pixel positions. However, when it comes to very small PSFs (e.g., r < 0.3 pixels), the intra-pixel position effect can no longer be ignored; that is, the PL significantly varies with the PSF’s intra-pixel position, and very high precision can be achieved near the pixel boundaries (Fig. 1(c)). The optimization of the localization precision limit PL(N, σd2, r, x, y) that embodies the effect of the intra-pixel position of small PSFs has not been well studied theoretically or experimentally. 
In addition, a PL analysis usually assumes the PSFs to be ideal analytical functions such as Gaussian functions [23], [24], [25], [27], and thus may fail to accurately predict the localization performance in real applications due to the deviation of real PSFs caused by nonideal imaging conditions such as optical aberration, pixelation, and pixel nonuniform response.
A theoretical PL analysis is useful only when the estimated PL can be achieved in real applications. The maximum likelihood estimation (MLE) has been applied for realizing the PL in many optical measurement systems [28], [29], [30], [31]. However, to the best of our knowledge, it has not been commonly used in studies of small-PSF systems such as star sensors. Compared with a telescope, a star sensor can have a much smaller F-number and usually works with a short exposure time, making it a typical small-PSF optical system that applies subpixel localization techniques. The conventional localization methods applied in star sensors are mainly divided into two categories: centroiding methods and fitting methods. The center of gravity (CG) method—a typical centroiding method—calculates the moment of the intensities at the image sensor pixels [32]. It is a fast yet biased estimator, presenting an S-shape periodic systematic error (usually 0.03-0.10 pixels) [33]. Compensating for this error and determining the influence of the threshold have been researched in many studies [33], [34], [35], [36], [37].
Fitting methods fit the known PSFs to the measured pixel data by means of least-squares fit; among these methods, the Gaussian fitting (GF) method is commonly used [32]. Some refinement methods such as the Gaussian grid method and the Gaussian analytic method have been proposed to reduce the computation cost [38], [39], [40]. These GF-based methods achieve high accuracy in ideal cases, but two reasons hinder them from realizing the PL in real applications. The main reason is that real PSFs—especially small real PSFs—deviate from Gaussian functions due to nonideal imaging conditions, and fitting an inaccurate model introduces systematic errors (Section S1 in Appendix A). The second reason is that, unlike the MLE, the least-squares fit does not make full use of noise information (i.e., the noise probability density function) [27]. To sum up, in this field, techniques have not been well established for realizing the localization PL provided by information theory.
Here, we develop a localization PL analysis framework for revealing the effect of the intra-pixel position of small PSFs in real localization applications. In the framework, we first provide effective PSF (ePSF) modeling approaches to accurately reconstruct real PSFs. Then, we apply the CRLB on the model to obtain an accurate estimation of the localization PL. Based on the characteristics of small PSFs, we derive simplified equations for finding the best PL achieved near the pixel boundaries for an arbitrary small PSF. Taking the typical Gaussian PSF as an example, we derive the optimum of the PL(N, σd2, r, x, y) and show at which PSF radius and intra-pixel position the optimum is achieved. We also reveal that this localization precision optimum is ultimately limited by physical light diffraction. Finally, we apply the MLE method to realize the theoretical PL in real applications. This work provides deep physical insights into optical subpixel localization, extends the high-precision localization literature to small PSFs, and provides a new perspective on combining image sensor position control (on the subpixel scale) with PSF engineering to optimize the localization performance. We hope that this work will contribute to the development of subpixel localization toward achieving the final optimum of the PL in a wide range of optical measurement fields.
2. Material and methods
2.1. ePSF modeling and noise analysis
In the framework, we first provide ePSF modeling approaches and a noise analysis to accurately describe the probability density function of a pixel value in real localization applications. This is a key requirement for the subsequent PL estimation and realization. To avoid the usual deviation between an analytical PSF and a real one, we work through the experimental ePSF method to model the accurate relationship between the measured pixel intensities and the target position [41], [42], which can be described as follows:
$P_{i j}=\varphi_{i j}(\Delta x, \Delta y)+s_{i j}$
where Pij is the value of the pixel (i, j), sij is the background value at the pixel, and φij(Δx, Δy) is the ePSF model, in which Δx = x* − (j + 0.5) and Δy = y* − (i + 0.5). Because i and j represent the row and column label of the pixel, we use (j + 0.5, i + 0.5) to describe the position of the pixel center; thus, (Δx, Δy) represents the target’s relative position with respect to the pixel center. It should be noted that ePSFs corresponding to different pixels are slightly different due to optical aberration and pixel response non-uniformity, so φij(Δx, Δy) is labeled with the pixel index ij.
Here, we introduce two ways to establish ePSFs according to the application requirements and conditions. The first way is to form very accurate ePSFs for cases with sufficient calibration conditions, where each ePSF φij(Δx, Δy) corresponds to one target and one specific pixel. Precise subpixel-scale relative movement between the target and the image sensor is required. To form the relative movement, a turntable can be utilized to rotate the entire optical system, or a nanoscale displacement stage can be used to move only the image sensor. Fig. 2(a) shows the ePSF modeling process for a 1D localization case. In this 1D localization case, a 2D PSF moves on a 2D image sensor but only along 1D, and the position in this dimension will need to be estimated. At the left of Fig. 2(a), we assume that the target moves in the x direction with a subpixel step Δh, and its y* is fixed. At each step, multiple images are sampled and then averaged to reduce the noise. Then, the average pixel value of pixel (i, j) as a function of the target position can be obtained, but the function is now discrete with an interval of Δh (Fig. 2(a), center). We assume that the ePSF changes smoothly as the target moves slowly, so we utilize cubic spline interpolation with the not-a-knot end condition to reconstruct the continuous function. Finally, through a simple transform according to Eq. (1), the ePSF model φij(Δx) can be obtained (Fig. 2(a), right). It should be noted that the ePSFs corresponding to adjacent pixels in the x direction (e.g., φ23(Δx), φ24(Δx), and φ25(Δx)) are slightly different, although this is not obvious in the figure. The calibration interval Δh is related to the PSF size and is set as 0.05-0.25 pixels in our cases. Some simulations analyzing the influence of Δh are shown in Section S2 in Appendix A. For 2D localization cases, the target moves in two directions, and the modeling process for φij(Δx, Δy) remains the same. 
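The first modeling approach can be sketched as follows for a noise-free 1D scan; the Gaussian `pixel_value` model, the pixel index `j`, and all parameter values are our own assumptions, standing in for calibrated scan data:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.special import erf

# Hypothetical setup: a 1D Gaussian PSF (radius r, flux I, both assumed)
# scanned across the sensor in steps of dh; pixel j integrates the PSF
# over [j, j + 1].
r, I, dh = 0.3, 300.0, 0.05
j = 4                                   # pixel whose ePSF we model

def pixel_value(xs, j):
    # flux collected by pixel j for a Gaussian PSF centred at xs
    return 0.5 * I * (erf((j + 1 - xs) / (np.sqrt(2) * r))
                      - erf((j - xs) / (np.sqrt(2) * r)))

# calibration scan: target positions and (already averaged, noise-free here)
# pixel values at each step
xs = np.arange(j - 2, j + 3 + dh, dh)
P = pixel_value(xs, j)

# transform to the pixel-centred coordinate dx = x* - (j + 0.5), per Eq. (1)
dx = xs - (j + 0.5)
phi = CubicSpline(dx, P, bc_type='not-a-knot')   # ePSF model phi_j(dx)

# the spline reproduces the true pixel response between calibration points
x_test = j + 0.37
print(abs(float(phi(x_test - (j + 0.5))) - pixel_value(x_test, j)))
```

In practice the scanned values would be averages of many noisy frames; the `not-a-knot` end condition matches the interpolation named in the text.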
The advantage of this approach is that the ePSFs are very accurate experimental models embodying the pixel non-uniformity information; however, this approach requires a great deal of calibration work. In this paper, we apply this ePSF modeling approach in laboratory experiments in order to accurately estimate and realize the localization PL and verify our simplified equations describing the effect of small PSFs’ intra-pixel position.
The second way is to form an approximate ePSF φ(Δx, Δy) corresponding to one target, while neglecting optical and pixel non-uniformity. In many cases, the condition that a target moves on the image sensor with a subpixel step is not satisfied. The displacement of the target in different images may be several pixels and is unknown (Fig. 2(b), top). After experimental tests, we adopted simplified modeling procedures of the method developed for star images captured by the Hubble Space Telescope [41]. For each image, the pixel values of the target within the region of interest (ROI) are extracted, along with an initial position estimated via the conventional CG method. Neglecting pixel non-uniformity, the pixel values then become the samples of one ePSF φ(Δx, Δy) through a simple transform using Eq. (1). For example, in Fig. 2(b), the position of the target in image 1 is estimated to be (1.4, 1.4) using the CG method. The value of the pixel (1, 1) can be remapped to φ(−0.1, −0.1) by P11 − s11 = φ(1.4 − 1.5, 1.4 − 1.5). The pixel value P12 can be remapped to φ(−1.1, −0.1) by P12 − s12 = φ(1.4 − 2.5, 1.4 − 1.5). If the ROI of the target includes 3 × 3 pixels, nine samples of φ(Δx, Δy) can be obtained from each image.
After we have all the samples from multiple images, we evaluate the ePSF at q × q grid points with an interval of Δh. In the center part of Fig. 2(b), we set Δh as 0.25 pixels and draw blue dashed lines with an interval of Δh. The intersection points of the blue dashed lines represent the grid points where the ePSF is evaluated. For each grid point, the samples within Δh in Δx and Δy are averaged, and the samples that are 2.5σ away from the mean are rejected (where σ is the standard deviation of these samples). The final mean is used as the ePSF value at this grid point. Finally, cubic spline interpolation is performed to obtain the final ePSF (Fig. 2(b), bottom). In the paper, we apply this approach to real star observations. For convenience, the ePSFs developed here include the target flux, so that each ePSF corresponds to one target. It is certainly possible to normalize an ePSF in order to make it adapt to multiple targets with the same intensity distribution but a different flux.
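The grid-averaging step of this second approach can be sketched as below; the sample values, the `true_epsf` stand-in, and the noise level are all hypothetical:

```python
import numpy as np

# Sketch of the grid-averaging step: samples (dx, dy, val) are assumed to
# have been remapped from many images via Eq. (1) and a CG initial position;
# here they are faked from a hypothetical Gaussian "true" ePSF plus noise.
rng = np.random.default_rng(0)
dh, q = 0.25, 29                        # grid interval and q x q grid (7 x 7 ROI)
grid = np.linspace(-3.5, 3.5, q)

def true_epsf(dx, dy, I=300.0, r=0.8):
    return I / (2 * np.pi * r**2) * np.exp(-(dx**2 + dy**2) / (2 * r**2))

dx = rng.uniform(-3.5, 3.5, 20000)
dy = rng.uniform(-3.5, 3.5, 20000)
val = true_epsf(dx, dy) + rng.normal(0.0, 1.0, dx.size)

epsf = np.zeros((q, q))
for a, gy in enumerate(grid):
    for b, gx in enumerate(grid):
        # collect the samples within dh of this grid point in both axes
        m = (np.abs(dx - gx) < dh) & (np.abs(dy - gy) < dh)
        s = val[m]
        if s.size:
            keep = np.abs(s - s.mean()) <= 2.5 * s.std()   # 2.5-sigma rejection
            epsf[a, b] = s[keep].mean()

# epsf would then be interpolated with cubic splines to give the final model
print(epsf[q // 2, q // 2], true_epsf(0.0, 0.0))
```

The grid value at each point is a windowed average, so it carries a small smoothing bias relative to the underlying function; the final cubic spline interpolation is applied to this grid as described above.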
Next, the noise is analyzed based on the pixel response model in the European Machine Vision Association (EMVA) 1288 standard [43]. As shown in Fig. 2(c), photons hit a pixel and produce the light signal lij (in electrons, with mean value $ \mu_{l_{i j}}$ and variance $ \sigma_{l_{i j}}^{2}$). The number of light-induced electrons fluctuates (which is known as “shot noise”) and follows a Poisson distribution, so that $ \mu_{l_{i j}}=\sigma_{l_{i j}}^{2}$. Aside from the light-induced signal, the pixel value is also induced by the dark signal dij (with mean value $ \mu_{d_{i j}}$ and variance $ \sigma_{d_{i j}}^{2}$). The sum of the two signals is then amplified by the pixel system gain K and is digitized and converted into the pixel value Pij. Some assumptions and approximations are made in this work. The quantization noise (with variance σq2) is quite small compared with the light noise and the dark noise, so it is neglected here. Since the noise of the dark signal is mainly caused by thermally induced Poisson-distributed electrons, we assume that the dark signal includes two parts: the constant part d0,ij (with mean $ \mu_{d_{0, i j}}$ and variance 0) and the Poisson part d1,ij (with mean value $ \mu_{d_{1, i j}}$ and variance $ \sigma_{d_{i j}}^{2}$, where $\mu_{d_{1, i j}}=\sigma_{d_{i j}}^{2}$). On the pixel value side, the background sij induced by the dark signal also has two corresponding parts: the constant-part-induced one s0,ij and the Poisson-part-induced one s1,ij. In this model, the sum of the light signal lij and the dark Poisson signal d1,ij, which is given by (lij + d1,ij) or (Pij − s0,ij)/K, is called the fluctuating electronic signal (FES). The FES follows a Poisson distribution. The relationship between its mean value (μFES) or variance (σ2FES) and the ePSF (φij(Δx, Δy)) is as follows:
$\mu_{\mathrm{FES}}=\sigma_{\mathrm{FES}}^{2}=\varphi_{i j}(\Delta x, \Delta y) / K+\sigma_{d_{i j}}^{2}$
Then, we have the following probability density function for a pixel value:
$\begin{array}{l} p\left[P_{i j} \mid\left(x^{*}, y^{*}\right)\right]=p\left[\mathrm{FES}=\left(P_{i j}-s_{0, i j}\right) / K \mid\left(x^{*}, y^{*}\right)\right] \\ =\frac{\left[\varphi_{i j}(\Delta x, \Delta y) / K+\sigma_{d_{i j}}^{2}\right]^{\left(P_{i j}-s_{0, i j}\right) / K}}{\left[\left(P_{i j}-s_{0, i j}\right) / K\right]!} \mathrm{e}^{-\left[\varphi_{i j}(\Delta x, \Delta y) / K+\sigma_{d_{i j}}^{2}\right]} \end{array}$
2.2. PL estimation
With clear knowledge of the probability density functions for the measured pixel values, the PL in real localization applications can be accurately estimated following the CRLB calculation process [23]. The PL also reflects the accuracy limit for an unbiased estimator [20]. Based on Eq. (3), the joint probability density function for the pixel values in an arbitrary ROI is
$p\left[\mathrm{ROI} \mid\left(x^{*}, y^{*}\right)\right]=\prod_{(i, j) \in \mathrm{ROI}} p\left[P_{i j} \mid\left(x^{*}, y^{*}\right)\right]$
Applying the CRLB to this joint likelihood gives the minimum variance for estimating x*:
$\mathrm{CRLB}_{x}=\left\{\sum_{(i, j) \in \mathrm{ROI}} \frac{\left[\frac{1}{K} \frac{\partial \varphi_{i j}(\Delta x, \Delta y)}{\partial x^{*}}\right]^{2}}{\varphi_{i j}(\Delta x, \Delta y) / K+\sigma_{d_{i j}}^{2}}\right\}^{-1}$
The calculation details are shown in Section S3 in Appendix A. The PL for estimating x* is $\mathrm{PL}_{x}=\sqrt{\mathrm{CRLB}_{x}}$. Estimating y* is analogous (just exchange the positions of x* and y* in Eq. (5)). In real applications, the K and $\sigma_{d_{i j}}^{2}$ of the image sensor can be obtained using the photon transfer method [43]. Notably, φij(Δx, Δy) and its derivatives vary with the target position. Thus, the target’s intra-pixel position has an effect on the localization PL.
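The PL computation can be sketched numerically as follows; a 1D Gaussian PSF (our own stand-in) replaces a calibrated ePSF, and all parameter values are assumed:

```python
import numpy as np
from scipy.special import erf

# Numerical CRLB/PL sketch under the Poisson FES model. Assumed parameters:
# flux I [DN], gain K, dark-noise variance sigma_d2 [e-], Gaussian radius r.
I, K, sigma_d2, r = 300.0, 0.2, 25.0, 0.2

def phi(xs, j):
    # flux collected by pixel j (in DN) for a Gaussian PSF centred at xs
    return 0.5 * I * (erf((j + 1 - xs) / (np.sqrt(2) * r))
                      - erf((j - xs) / (np.sqrt(2) * r)))

def dphi_dx(xs, j):
    # analytic derivative of phi with respect to the target position xs
    g = lambda u: np.exp(-u**2 / (2 * r**2)) / (np.sqrt(2 * np.pi) * r)
    return I * (g(j - xs) - g(j + 1 - xs))

def pl_x(xs, pixels=range(-5, 6)):
    # PL_x = 1 / sqrt(Fisher information), summed over the ROI pixels
    info = 0.0
    for j in pixels:
        mu = phi(xs, j) / K + sigma_d2          # mean FES (electrons)
        info += (dphi_dx(xs, j) / K) ** 2 / mu
    return 1.0 / np.sqrt(info)

print(pl_x(0.0), pl_x(0.5))   # pixel boundary vs pixel centre
```

For this small PSF the PL at the boundary (`xs = 0.0`) is far better than at the pixel centre (`xs = 0.5`), in line with Fig. 1(c).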
2.3. Analysis of the effect of small PSFs’ intra-pixel position on the PL
Due to the complicated form of Eq. (5), it is difficult to analytically research the effect of intra-pixel position on the PL. Nevertheless, for very small PSFs, some approximations can be made to simplify the equation.
Consider a case for estimating the x* of a very small PSF. The influence of y* is small, so we first assume that the PSF flux is only distributed in one row of the pixel array. When the PSF is located at most positions within the pixel (i, j) except around the pixel boundaries, the flux I (unit: digital number (DN)) of the PSF is almost concentrated in this pixel. The following approximations can be made:
$\varphi_{i j} \approx I, \quad \varphi_{m n} \approx 0 \ (m n \neq i j), \quad \frac{\partial \varphi_{i j}}{\partial x^{*}} \approx \frac{\partial \varphi_{m n}}{\partial x^{*}} \approx 0$
where we write φij(Δx, Δy) as φij for convenience. Then, the reciprocal of Eq. (5), which represents the amount of information embodied in the measured pixel data, can be reduced to the following:
$\frac{1}{\mathrm{CRLB}_{x}} \approx \frac{\left[\frac{1}{K} \frac{\partial \varphi_{i j}}{\partial x^{*}}\right]^{2}}{I / K+\sigma_{d_{i j}}^{2}} \approx 0$
Thus, it is impossible to precisely localize a very small PSF concentrating in one pixel. This case is always avoided by defocusing the optics in small-PSF systems.
However, when the PSF is located near the pixel boundaries, the result is totally different. The flux is distributed in two adjacent pixels. We assume that the larger pixel value is less than ten times the smaller pixel value. Otherwise, it can be approximated as the case above. Here, the following approximations are made:
$\varphi_{i j}+\varphi_{i, j+1} \approx I, \quad \frac{\partial \varphi_{i, j+1}}{\partial x^{*}} \approx-\frac{\partial \varphi_{i j}}{\partial x^{*}}$
We also assume that the dark noise is uniform for different pixels, the variance of which is then represented by σd2. The reciprocal of Eq. (5) becomes
$\frac{1}{\mathrm{CRLB}_{x}} \approx \frac{1}{K^{2}}\left|\frac{\partial \varphi_{i j}}{\partial x^{*}}\right|^{2}\left(\frac{1}{\varphi_{i j} / K+\sigma_{\mathrm{d}}^{2}}+\frac{1}{N-\varphi_{i j} / K+\sigma_{\mathrm{d}}^{2}}\right)$
where N = I/K, and $\left|\frac{\partial \varphi_{i j}}{\partial x^{*}}\right|$ is the absolute value of $\frac{\partial \varphi_{i j}}{\partial x^{*}}$. Notably, $\left|\frac{\partial \varphi_{i j}}{\partial x^{*}}\right|$ of small PSFs can be very large near the pixel boundaries, so the PL performance there can be very high. Eq. (11) can be used to find the best PL and the corresponding intra-pixel position for an arbitrary small ePSF. Moreover, because the ePSF is an accurate experimental model, Eq. (11) can provide an accurate estimation in practical small-PSF localization applications.
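A quick numerical check of the two-pixel simplification is sketched below (hypothetical Gaussian PSF, assumed parameter values): near a boundary, the two adjacent pixels carry essentially all of the position information, so the simplified expression matches the full CRLB sum.

```python
import numpy as np
from scipy.special import erf

# Assumed parameters: flux I [DN], gain K, dark variance sd2 [e-], radius r.
I, K, sd2, r = 300.0, 0.2, 25.0, 0.2

phi = lambda xs, j: 0.5 * I * (erf((j + 1 - xs) / (np.sqrt(2) * r))
                               - erf((j - xs) / (np.sqrt(2) * r)))
g = lambda u: np.exp(-u**2 / (2 * r**2)) / (np.sqrt(2 * np.pi) * r)
dphi = lambda xs, j: I * (g(j - xs) - g(j + 1 - xs))

def pl_full(xs):
    # full Fisher-information sum over an 11-pixel ROI
    info = sum((dphi(xs, j) / K) ** 2 / (phi(xs, j) / K + sd2)
               for j in range(-5, 6))
    return info ** -0.5

def pl_two_pixel(xs, j=0):
    # simplified form: only pixels j and j + 1, with opposite derivatives
    info = (dphi(xs, j) / K) ** 2 * (1 / (phi(xs, j) / K + sd2)
                                     + 1 / (phi(xs, j + 1) / K + sd2))
    return info ** -0.5

xs = 0.9    # 0.1 pixels from the boundary at x* = 1
print(pl_full(xs), pl_two_pixel(xs))   # nearly identical
```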
Next, we use the typical Gaussian function as the PSF to perform a further analysis on the final optimum of the PL. Because we have assumed that the PSF is distributed in one row of the pixel array, φij can be represented by the integral of a 1D Gaussian function, that is
$\varphi_{i j}=\int_{j}^{j+1} \frac{I}{\sqrt{2 \pi} r} \mathrm{e}^{-\frac{\left(u-x^{*}\right)^{2}}{2 r^{2}}} \mathrm{d} u=\frac{I}{2}\left[\operatorname{erf}\left(\frac{j+1-x^{*}}{\sqrt{2} r}\right)-\operatorname{erf}\left(\frac{j-x^{*}}{\sqrt{2} r}\right)\right]$
where x* ≈ j + 1 means that the PSF is located near the pixel boundary j + 1. By combining Eqs. (11), (12), we can compute the PL near the pixel boundary for a Gaussian function. We then find that the information (the reciprocal of Eq. (5)) reaches its maximum when r → 0 and x* = j + 1; that is, when the PSF is located exactly at the pixel boundary with φij = φi,j+1 = I/2. The corresponding optimum of the PL is
$\left[\mathrm{PL}_{x}\right]_{\text {optimum }} \approx\left\{\begin{array}{c} \frac{r}{N} \sqrt{\frac{\pi\left(N+2 \sigma_{\mathrm{d}}^{2}\right)}{2}}, \text { with } \sigma_{\mathrm{d}}^{2} \\ r \sqrt{\frac{\pi}{2 N}}, \text { without } \sigma_{\mathrm{d}}^{2} \end{array}\right.$
So far, the optimization problem of PLx(N, σd2, r, x) has been solved for Gaussian functions. That is, with certain photons detected, the PL reaches the optimum at the pixel boundaries when the PSF radius is as small as possible, and Eq. (14) can be used to estimate this optimum.
It should be noted that, physically, r cannot be infinitely small due to light diffraction. The DL (unit: pixels) is
$[\mathrm{DL}]=\frac{1.22 \lambda F}{a}$
where λ represents the wavelength, F is the F-number of the optical system, and a is the pixel size. The widely known DL describes the distance from the peak to the first zero of an Airy disk model. The Airy disk is usually approximated as a Gaussian function, for which the region within three times the Gaussian radius from the peak covers almost all the energy. Thus, the relationship between the DL of the Airy disk and the radius of the approximated Gaussian function is
$[\mathrm{DL}] \approx 3 r$
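As a numeric illustration of these scales: with assumed values λ = 550 nm and F = 1.2 (our choices), the 5.3 µm pixel size from Section 3.2, and N = 1500 photoelectrons as in the simulations, the numbers work out as follows.

```python
import numpy as np

# Worked numbers for the diffraction-limited optimum; lam and F are assumed.
lam, F, a, N = 550e-9, 1.2, 5.3e-6, 1500
DL = 1.22 * lam * F / a                     # diffraction limit in pixels
r_min = DL / 3                              # smallest meaningful Gaussian radius
pl_opt = r_min * np.sqrt(np.pi / (2 * N))   # Eq. (14), negligible dark noise
print(DL, r_min, pl_opt)
```

With these numbers the diffraction-limited optimum of the PL is on the order of 10⁻³ pixels.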
By combining Eqs. (14), (16), the optimum of the localization PL is ultimately limited by light diffraction, which can be described as follows:
$\left[\mathrm{PL}_{x}\right]_{\text {optimum }} \gtrsim \frac{[\mathrm{DL}]}{3} \sqrt{\frac{\pi}{2 N}}=\frac{1.22 \lambda F}{3 a} \sqrt{\frac{\pi}{2 N}}$
Finally, the MLE is applied to the established model of the observed pixel intensities to reach the localization PL at each intra-pixel position. The position that maximizes the probability of the measured pixel values is the position we estimate. The minus logarithm of the probability is used as the cost function:
$\chi\left(x^{*}, y^{*}\right)=\sum_{(i, j) \in \mathrm{ROI}}\left\{\frac{\varphi_{i j}(\Delta x, \Delta y)}{K}+\sigma_{d_{i j}}^{2}-\frac{P_{i j}-s_{0, i j}}{K} \ln \left[\frac{\varphi_{i j}(\Delta x, \Delta y)}{K}+\sigma_{d_{i j}}^{2}\right]\right\}$
where position-independent terms are dropped. The position is then updated with Newton iterations:
$\left[\begin{array}{l} x^{*[k+1]} \\ y^{*[k+1]} \end{array}\right]=\left[\begin{array}{l} x^{*[k]} \\ y^{*[k]} \end{array}\right]-\boldsymbol{H}^{-1} \boldsymbol{J}$
where J and H are the Jacobian and Hessian matrices, respectively, of the cost function χ (the computation details are shown in Section S3 in Appendix A). The initial iterative value $\left(x^{*[0]}, y^{*[0]}\right)$ can be calculated using the CG method. The ePSF model φij(Δx, Δy) and its derivatives can be determined and tabulated ahead of time. Typically, the localization procedure is fast and reaches convergence in very few iterations (usually 2–4 iterations in our experiment, depending on the convergence condition).
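A minimal 1D sketch of this MLE procedure follows, with an analytic Gaussian standing in for the tabulated ePSF and numerical differences in place of the analytic Jacobian and Hessian; all parameter values are assumed.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)
# Assumed parameters: flux I [DN], gain K, dark variance sd2 [e-], radius r.
I, K, sd2, r, x_true = 300.0, 0.2, 25.0, 0.25, 3.42
pix = np.arange(0, 8)                        # 1D ROI of 8 pixels

phi = lambda xs, j: 0.5 * I * (erf((j + 1 - xs) / (np.sqrt(2) * r))
                               - erf((j - xs) / (np.sqrt(2) * r)))
mu = lambda xs: phi(xs, pix) / K + sd2       # mean FES per pixel (electrons)

k_obs = rng.poisson(mu(x_true))              # one noisy frame of FES counts

def chi(xs):
    # negative log-likelihood, up to position-independent terms
    m = mu(xs)
    return np.sum(m - k_obs * np.log(m))

# initial value from the CG method on the background-subtracted signal
sig = np.maximum(k_obs - sd2, 0.0)
x = np.sum((pix + 0.5) * sig) / np.sum(sig)

for _ in range(10):                          # Newton iterations
    eps = 1e-4
    f0, fp, fm = chi(x), chi(x + eps), chi(x - eps)
    J = (fp - fm) / (2 * eps)                # numerical Jacobian (gradient)
    H = (fp - 2 * f0 + fm) / eps**2          # numerical Hessian (curvature)
    step = J / H
    x -= step
    if abs(step) < 1e-8:
        break

print(x_true, x)
```

In the paper's 2D case, J and H are the analytic first and second derivatives of χ evaluated from the tabulated ePSF; as noted above, convergence typically takes only a few iterations.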
3. Results
3.1. Numerical simulations
We first use numerical data to characterize the performance of the framework. We integrate Gaussian functions with different radii over pixels and generate simulated spots at different intra-pixel positions. Poisson-distributed light noise and dark noise are introduced. The flux I is set as 300 DN. The pixel system gain K is set as 0.2, so the photoelectron count N is 1500. The variance $\sigma_{d_{i j}}^{2}$ representing the dark noise is set as 25 (corresponding to a 1 DN standard deviation in a pixel value, according to the noise model) for noise 1 and 250 for noise 2. The parameters (except for noise 2) are close to those measured in our experiments.
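The spot generation just described can be sketched as follows (variable names are ours; the 2D PSF is the separable product of 1D Gaussian pixel integrals):

```python
import numpy as np
from scipy.special import erf

# A separable Gaussian PSF is integrated over a 7 x 7 pixel grid, and the
# Poisson FES (light electrons plus dark Poisson electrons) is drawn.
rng = np.random.default_rng(2)
I, K, sd2, r = 300.0, 0.2, 25.0, 0.2
x_true, y_true = 3.0, 3.5                # spot on a pixel boundary in x

def pix_frac(c, j):
    # fraction of a 1D Gaussian (centre c, radius r) falling in pixel j
    return 0.5 * (erf((j + 1 - c) / (np.sqrt(2) * r))
                  - erf((j - c) / (np.sqrt(2) * r)))

jj, ii = np.meshgrid(np.arange(7), np.arange(7))
phi = I * pix_frac(x_true, jj) * pix_frac(y_true, ii)   # noise-free spot [DN]

fes = rng.poisson(phi / K + sd2)         # fluctuating electronic signal [e-]
img = K * fes                            # pixel values above the offset s0
print(img.round(1))
```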
The ePSFs are modeled using the first approach in Section 2.1. The PL for estimating x* is computed at each intra-pixel position using Eq. (5). For each PSF, the average PL over all intra-pixel positions is computed (by taking the RMS), and the best PL and best intra-pixel position are found. Here, for small Gaussian PSFs, the best PL is obtained at the pixel boundaries. As reported in previous studies, the average PL as a function of the PSF radius is shown in red curves in Fig. 3(a). We then plot the best PL over the PSF radius as blue curves for comparison. The result shows that, although a very small PSF has a bad average PL, it can achieve very high performance at the pixel boundaries, and the performance continues to improve as the PSF size decreases, until the PSF size is limited by light diffraction. The best PL estimated by Eq. (14) is shown as black curves in Fig. 3(a). The first approximation in Eq. (14) is used for noise 2 (high-level noise), and the second is used for noise 1 (low-level noise). The black curves show excellent agreement with the blue curves for PSFs with r < 0.3 pixels. The equation is verified to be effective for small Gaussian PSFs.
The developed MLE localization method is then tested on the simulated images. Here, we directly compare the localization accuracy with the PL, because the precision reflects the accuracy for an unbiased estimator. In Figs. 3(b) and (c), the localization results in the x direction for two spots (r = 0.2 pixels, r = 0.3 pixels) along a trajectory (x = 0-1 pixels, y = 0.5 pixels) with typical noise level (noise 1) are shown. At each position, multiple images (repeat number n = 500) are localized, and the RMS error is calculated to represent the accuracy. The PL (red curves) is computed using Eq. (5). The result demonstrates that the combination of MLE and ePSF modeling greatly improves the localization accuracy compared with the CG method and achieves the PL at each intra-pixel position. In the enlarged view of the localization result in Figs. 3(b) and (c), the estimations using Eqs. (14), (11) (the second approximation) are shown. As is verified in Fig. 3(a), Eq. (14) accurately predicts the best PL obtained at the pixel boundaries. Eq. (11), which describes the PL near the pixel boundaries for an arbitrary small PSF, is verified to be effective within 0.25 pixels from the pixel boundaries in this result. Localization results for more spots are shown in Section S4 in Appendix A, in which the unbiasedness of the developed method is also verified at each intra-pixel position.
3.2. Experimental results
Next, we performed localization experiments in the laboratory to test the framework. The experimental setup is shown in Fig. 4(a). A small-PSF optical navigation instrument (a Tsinghua University pico-type star sensor) is used for image acquisition, and a high-accuracy three-axis turntable is used to generate relative movement between the optical instrument and a point source. The photons from the point source go through a collimator and arrive at the complementary metal–oxide–semiconductor (CMOS) image sensor of the star sensor fixed on the turntable. The rotating direction of the turntable in the experiments makes the spot move in the x direction. The focal length of the optical system is 25 mm, and the pixel size of the image sensor is 5.3 µm × 5.3 µm. The rotating step is set as 0.0005°, so the step of the spot in the image is about 25 mm × tan(0.0005°)/5.3 µm = 0.041 pixels. We rotated the turntable many times to give the spot a total displacement of about 10 pixels, and we sampled 30 images at each position. The images sampled at half of the positions were used for ePSF modeling; those at the other half were used for localization testing. The background sij was measured to be 37 DN, the system gain K was 0.14, and the variance $\sigma_{d_{i j}}^{2}$ of the dark signal was 54.40. Differences in these parameters across pixels are neglected here. The calculating window of the spot was set as a 5 × 5 pixel region centered on the brightest pixel. The real position reference was set according to the rotation information of the high-accuracy turntable (Section S5 in Appendix A).
We first tested the framework on a very focused spot (Fig. 4(a)) in the x direction. The ePSF was obtained using the first approach in Section 2.1. The PL at each position was obtained using Eq. (5). Conventional methods and the developed method of using the MLE to fit ePSF were used to localize the spot. The localization RMS error and the mean error of 30 repeat measurements at each position were calculated. The results show that, compared with the CG and GF method, the MLE method improves the accuracy from approximately 0.100 to 0.011 pixels, by about one order of magnitude, whereas the RMS value of the PL curve is 0.010 pixels (Fig. 4(b)). The MLE localization result shows good agreement with the PL, making the theoretical PL analysis meaningful and effective in real applications. The result from Eq. (11) (the second approximation) is also shown on the right in Fig. 4(b). The simplified equation was verified to be effective for estimating the best intra-pixel regions and the corresponding PL performance for the real PSF. A high accuracy of better than 0.005 pixels near the pixel boundaries was successfully predicted and achieved.
In addition, in contrast to the conventional methods, the localization error of the developed method can be effectively decreased by utilizing the unbiasedness. This method achieves an error of 0.004 pixels through 30 repeat measurements (Fig. 4(c)). However, it is not obvious that the multiple measuring result is better near the pixel boundaries, and the performance obtained from n measurements is not improved by $\sqrt{n}$. This is mainly because the turntable inevitably introduces extra position errors (Section S6 in Appendix A). The limited performance of the experimental setup—instead of the developed method—hinders a further increase in localization performance.
Another less-focused spot (Fig. 5(a)) was generated by adjusting the optics of the collimator. The conventional methods show better accuracy for this spot (about 0.050 pixels) than for the former spot. The MLE method exhibits a stable, good performance, with a localization error of 0.010 pixels from a single measurement (Fig. 5(b)) and an error of 0.003 pixels from 30 repeat measurements (Fig. 5(c)). The MLE result also shows good agreement with the PL, and Eq. (11) is verified to be effective. Although the average localization performance is still about 0.010 pixels, no local region reaches an accuracy of 0.005 pixels, unlike the case in Fig. 4(b). As expected, the intra-pixel position effect is not as obvious for this less-focused spot.
3.3. Real night sky observation
Ground-based real night sky experiments are commonly conducted to test the accuracy of star sensors [44]. Unlike on-orbit cases, ground-based observation is influenced by atmospheric “seeing” conditions, and additional complex noise is introduced into star images. Since our framework does not include an atmospheric noise model, the noise introduces a noticeable deviation, especially to the localization result from a single measurement. Moreover, due to the rotation of the earth, it is not possible to fix the positions of the stars and perform repeat measurements using a fixed optical system. We cannot obtain the mathematical expectation of the localization accuracy at one certain position and then compare it with the PL estimated at this position. In this challenging case, we can verify two things: ① The combination of MLE and ePSF modeling can improve the localization and navigation performance of the star sensor, and the average localization accuracy over multiple positions can approach the average estimated PL; and ② the intra-pixel positions of stars have an effect on the localization performance, and the high-performance region can be approximately estimated. In the experiment, a star sensor was fixed on a platform with its camera pointing to the zenith, and real star images were sampled. The images were sampled with a time interval of 366 ms, and 1000 frames were analyzed. We utilized the second way to determine the approximate ePSF for each star from multiple images (Fig. 6(a)). The ROI of the ePSF was set as 7 × 7 pixels, and we evaluated the ePSF at 29 × 29 grid points (where Δh is 0.25 pixels).
Then, the 2D positions of the stars in the images were determined using the developed method. The reconstructed trajectories of two stars are shown in Fig. 6(b), which clearly demonstrate the effectiveness of the MLE and of the ePSF modeling in reducing the random error and the systematic error, respectively. We used quadratic polynomials to fit the localization results in the x and y directions and took the RMS of the residuals as the average localization accuracy. For the two stars, the accuracy in the x direction from the conventional method (CG) is 0.065 and 0.084 pixels, while that from the developed method (MLE) is 0.021 and 0.026 pixels, in agreement with the estimated average PLs of 0.024 and 0.026 pixels (see Section S7 in Appendix A for the results for other stars). The attitude determination accuracy of the star sensor is thereby greatly improved (Section S7).
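The accuracy metric used above can be sketched as follows (a small illustrative helper, not the code used in the experiment):

```python
import numpy as np

def trajectory_rms(t, pos):
    """Fit a quadratic polynomial to a 1D localization trajectory
    (star position vs. time along one axis) and return the RMS of the
    residuals, used as the average localization accuracy in pixels."""
    coeffs = np.polyfit(t, pos, 2)            # quadratic trend of the track
    residuals = pos - np.polyval(coeffs, t)   # localization scatter
    return np.sqrt(np.mean(residuals ** 2))
```

The quadratic removes the smooth apparent motion of the star across the detector, so the residual scatter reflects the localization error itself.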
Furthermore, we equally divided the estimated PLs into three levels; each pixel was then divided into three corresponding regions: a region with low errors (RLE), a region with medium errors (RME), and a region with high errors (RHE). An example is shown in Fig. 6(c): the PL for localizing a real star in the y direction is estimated, and every pixel is divided into the three subpixel regions according to the PL result. We then evaluated the experimental localization performance in these three predicted regions; the results for eight stars are shown in Fig. 6(d). Although the spot size is affected by the atmosphere, the accuracy in the RLE is better than that in the RHE (the RMS values of the RHE and RLE curves are 0.025 and 0.020 pixels at the left side of Fig. 6(d), and 0.026 and 0.020 pixels at the right side). The results for all stars are given in Section S7. This experiment demonstrates that the combination of MLE and ePSF modeling greatly improves the localization and navigation performance of the star sensor, and it verifies the potential of locating spots in specific intra-pixel regions for better performance, even in cases with complex noise.
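The three-level division of the estimated PLs can be sketched as follows (an illustrative implementation assuming equal-width levels between the minimum and maximum PL of the map):

```python
import numpy as np

def split_error_regions(pl_map):
    """Label each intra-pixel evaluation point of a PL map with one of
    three equal-width error levels: 0 = RLE (low errors),
    1 = RME (medium errors), 2 = RHE (high errors)."""
    edges = np.linspace(pl_map.min(), pl_map.max(), 4)  # 3 equal-width bins
    return np.digitize(pl_map, edges[1:3])              # interior edges only
```

Experimental localization results can then be grouped by these labels to compare the accuracy achieved in each predicted region.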
4. Discussion and conclusions
In subpixel localization, the optimization problem of PL(N, σ_d², r) has been well studied: the PL reaches its optimum with a high detected photon number N, a low pixel dark noise σ_d², and a moderate PSF radius r. The intra-pixel position (x, y) of the target is usually neglected, because it has almost no effect on the PL for the typical PSFs used in telescopes or microscopes. However, the PL varies significantly with the intra-pixel position for very small PSFs, and very high precision can be achieved near the pixel boundaries. This is because the pixel intensity gradient there is very large, so slight position variations are easy to identify.
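The pixel-gradient argument can be illustrated numerically. The sketch below computes the 1D CRLB for a pixel-integrated Gaussian PSF under a Poisson-plus-dark-noise pixel model; all parameter values (flux, dark noise, Gaussian radius) are illustrative choices, not values taken from the paper:

```python
import numpy as np
from math import erf, sqrt

def pixel_values(x0, n_pix, sigma, flux):
    """Expected counts of a 1D array of unit pixels (edges at integers)
    for a Gaussian PSF of radius sigma centred at x0."""
    cdf = np.array([0.5 * (1.0 + erf((e - x0) / (sqrt(2.0) * sigma)))
                    for e in range(n_pix + 1)])
    return flux * np.diff(cdf)

def pl_crlb(x0, n_pix=7, sigma=0.3, flux=5000.0, dark_var=4.0, eps=1e-4):
    """CRLB on the x-localization error (in pixels) under a Poisson +
    Gaussian-dark-noise pixel model:
        I(x) = sum_k (dmu_k/dx)^2 / (mu_k + sigma_d^2),  PL = I**-0.5.
    The derivative is taken by central finite differences."""
    mu = pixel_values(x0, n_pix, sigma, flux)
    dmu = (pixel_values(x0 + eps, n_pix, sigma, flux)
           - pixel_values(x0 - eps, n_pix, sigma, flux)) / (2.0 * eps)
    fisher = np.sum(dmu ** 2 / (mu + dark_var))
    return 1.0 / np.sqrt(fisher)
```

For these illustrative parameters, `pl_crlb(3.0)` (a pixel boundary, where the split flux produces a large intensity gradient in two pixels) is smaller than `pl_crlb(3.5)` (a pixel centre, where the gradient vanishes by symmetry), consistent with the argument above.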
In this work, we developed a localization PL analysis framework embodying the effect of the intra-pixel position (x, y) in real localization applications. To accurately estimate the PL in practical cases, we provided experimental ePSF modeling approaches and applied the CRLB to the ePSF. Based on the characteristics of small PSFs, a simplified equation (Eq. (11)) was derived to describe the PL near the pixel boundaries for an arbitrary small ePSF; the equation was verified on real ePSFs in laboratory experiments. Then, we used a typical Gaussian PSF to further analyze the optimization problem of PL(N, σ_d², r, x, y). By deriving Eq. (14), we found that the final optimum of the PL is achieved at the pixel boundaries when the Gaussian radius is as small as possible, until it is limited by light diffraction. Although it is well known that subpixel localization methods can transcend the DL, our work reveals that physical diffraction does limit the final optimum of the localization PL (Eq. (17)). Finally, the MLE method was applied; its combination with ePSF modeling successfully reached the PL in experiments, making the theoretical PL analysis meaningful and effective. This framework provides deep physical insights into subpixel localization theory and gives accurate and detailed guidance for practical localization experiments. It is applicable to general cameras and is not restricted to point sources; rather, it can be extended to general optical targets with sharp image features.
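A minimal sketch of MLE localization under a Poisson pixel model is given below. It substitutes a pixel-integrated Gaussian for the measured ePSF and assumes known flux and background for simplicity, and it maximizes the likelihood by a coarse-to-fine grid search rather than the fitting procedure of the paper, so it illustrates the principle rather than reproducing our implementation:

```python
import numpy as np
from math import erf, sqrt

def expected_image(x0, y0, flux, bg, n=7, sigma=0.8):
    """Expected counts over an n x n ROI for a pixel-integrated,
    separable 2D Gaussian PSF (a stand-in for a measured ePSF)."""
    def cdf1d(c):
        e = np.array([0.5 * (1.0 + erf((k - c) / (sqrt(2.0) * sigma)))
                      for k in range(n + 1)])
        return np.diff(e)
    return flux * np.outer(cdf1d(y0), cdf1d(x0)) + bg

def mle_locate(img, flux, bg, sigma=0.8, span=1.0, step=0.05, refine=3):
    """Maximum-likelihood 2D localization under a Poisson pixel model:
    maximize sum(k * log(mu) - mu) over (x0, y0) by a coarse-to-fine
    grid search, started from the centre-of-gravity (CG) estimate."""
    n = img.shape[0]
    yy, xx = np.mgrid[0:n, 0:n] + 0.5
    cx, cy = (xx * img).sum() / img.sum(), (yy * img).sum() / img.sum()
    for _ in range(refine):
        grid = np.arange(-span, span + step / 2, step)
        best, best_ll = (cx, cy), -np.inf
        for dx in grid:
            for dy in grid:
                mu = np.clip(expected_image(cx + dx, cy + dy, flux, bg,
                                            n, sigma), 1e-9, None)
                ll = np.sum(img * np.log(mu) - mu)
                if ll > best_ll:
                    best_ll, best = ll, (cx + dx, cy + dy)
        cx, cy = best
        span, step = step, step / 10  # shrink the search window
    return cx, cy
```

On a noiseless synthetic spot, the routine recovers the true subpixel position to well below 0.01 pixels, whereas the CG starting estimate is biased by the background and ROI truncation; in practice, flux and background would be fitted jointly with the position.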
The main limitation of this work is that, although we theoretically and experimentally revealed the effect of the intra-pixel position of small PSFs, it is not always possible to retrieve the best localization performance using a fixed optical system. Developing an optical system with an accurately movable image sensor, such as a pixel-shifting-based high-resolution camera [45], is our future research direction. We hope that this work will pave the way for combining PSF engineering with image sensor position control in order to make full use of information theory and achieve the final optimum of the localization PL in optical measurement.
Acknowledgments
The authors would like to acknowledge the support from the National Natural Science Foundation of China (51827806), the National Key Research and Development Program of China (2016YFB0501201), and the Xplorer Prize funded by the Tencent Foundation.
Compliance with ethics guidelines
Haiyang Zhan, Fei Xing, Jingyu Bao, Ting Sun, Zhenzhen Chen, Zheng You, and Li Yuan declare that they have no conflict of interest or financial conflicts to disclose.
References
[1] A. Yildiz, J.N. Forkey, S.A. McKinney, T. Ha, Y.E. Goldman, P.R. Selvin. Myosin V walks hand-over-hand: single fluorophore imaging with 1.5-nm localization. Science, 300 (5628) (2003), pp. 2061-2065.
[2] T. Matsuda, A. Miyawaki, T. Nagai. Direct measurement of protein dynamics inside cells using a rationally designed photoconvertible protein. Nat Methods, 5 (4) (2008), pp. 339-345. DOI: 10.1038/nmeth.1193
[3] C.R. Copeland, J. Geist, C.D. McGray, V.A. Aksyuk, J.A. Liddle, B.R. Ilic, et al. Subnanometer localization accuracy in widefield optical microscopy. Light Sci Appl, 7 (1) (2018), p. 31.
[4] Y. Wang, J. Lin, Q. Zhang, X. Chen, H. Luan, M. Gu. Fluorescence nanoscopy in neuroscience. Engineering, 16 (2022), pp. 29-38.
[5] P.P. Mathai, J.A. Liddle, S.M. Stavis. Optical tracking of nanoscale particles in microscale environments. Appl Phys Rev, 3 (1) (2016), Article 011105.
[6] M. Wei, F. Xing, Z. You. A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images. Light Sci Appl, 7 (1) (2018), p. 18006. DOI: 10.1038/lsa.2018.6
[7] L. Kong, P. Zhou. A light field measurement system through PSF estimation by a morphology-based method. Int J Extrem Manuf, 3 (4) (2021), Article 045201. DOI: 10.1088/2631-7990/ac1455
[8] Y. Chen, Z. Shu, S. Zhang, P. Zeng, H. Liang, M. Zheng, et al. Sub-10 nm fabrication: methods and applications. Int J Extrem Manuf, 3 (3) (2021), Article 032002.
[9] J. Wang, X. Ji, X. Zhang, Z. Sun, T. Wang. Real-time robust individual X point localization for stereoscopic tracking. Pattern Recogn Lett, 112 (2018), pp. 138-144.
[10] L.P.D. Silva, M. Auvergne, D. Toublanc, J. Rowe, R. Kuschnig, J. Matthews. Estimation of a super-resolved PSF for the data reduction of undersampled stellar observations—deriving an accurate model for fitting photometry with Corot space telescope. Astron Astrophys, 452 (1) (2006), pp. 363-369.
[11] C.C. Liebe. Accuracy performance of star trackers—a tutorial. IEEE Trans Aerosp Electron Syst, 38 (2) (2002), pp. 587-599.
[12] R. Genzel, F. Eisenhauer, S. Gillessen. The galactic center massive black hole and nuclear star cluster. Rev Mod Phys, 82 (4) (2010), pp. 3121-3195.
[13] T. Do, A. Ghez, M. Morris, J. Lu, S. Chappell, A. Feldmeier-Krause, et al. Observational constraints on the formation and evolution of the Milky Way nuclear star cluster with Keck and Gemini. Proc Int Astron Union, 11 (S322) (2016), pp. 222-230.
[14] S. Du, M. Wang, X. Chen, S. Fang, H. Su. A high-accuracy extraction algorithm of planet centroid image in deep-space autonomous optical navigation. J Navigation, 69 (4) (2016), pp. 828-844.
[15] S. Zhang, F. Xing, T. Sun, Z. You, M. Wei. Novel approach to improve the attitude update rate of a star tracker. Opt Express, 26 (5) (2018), pp. 5164-5181. DOI: 10.1364/oe.26.005164
[16] J. Jiang, H. Wang, G. Zhang. High-accuracy synchronous extraction algorithm of star and celestial body features for optical navigation sensor. IEEE Sens J, 18 (2) (2018), pp. 713-723.
[17] E. Betzig, G.H. Patterson, R. Sougrat, O.W. Lindwasser, S. Olenych, J.S. Bonifacino, et al. Imaging intracellular fluorescent proteins at nanometer resolution. Science, 313 (5793) (2006), pp. 1642-1645. DOI: 10.1126/science.1127344
[18] M. Bates, B. Huang, G.T. Dempsey, X. Zhuang. Multicolor super-resolution imaging with photo-switchable fluorescent probes. Science, 317 (5845) (2007), pp. 1749-1753. DOI: 10.1126/science.1146598
[19] S. Manley, J.M. Gillette, G.H. Patterson, H. Shroff, H.F. Hess, E. Betzig, et al. High-density mapping of single-molecule trajectories with photoactivated localization microscopy. Nat Methods, 5 (2) (2008), pp. 155-157. DOI: 10.1038/nmeth.1176
[20] A.R. Small, R. Parthasarathy. Superresolution localization methods. Annu Rev Phys Chem, 65 (2014), pp. 107-125. DOI: 10.1146/annurev-physchem-040513-103735
[21] M. Lelek, M.T. Gyparaki, G. Beliu, F. Schueder, J. Griffie, S. Manley, et al. Single-molecule localization microscopy. Nat Rev Methods Primers, 1 (2021), p. 40.
[22] S. Burov, P. Figliozzi, B. Lin, S.A. Rice, N.F. Scherer, A.R. Dinner. Single-pixel interior filling function approach for detecting and correcting errors in particle tracking. Proc Natl Acad Sci USA, 114 (2) (2016), pp. 221-226.
[23] K.A. Winick. Cramér-Rao lower bounds on the performance of charge-coupled-device optical position estimators. J Opt Soc Am A Opt Image Sci Vis, 3 (11) (1986), pp. 1809-1815.
[24] H. Chen, C. Rao. Accuracy analysis on centroid estimation algorithm limited by photon noise for point object. Opt Commun, 282 (8) (2009), pp. 1526-1530.
[25] H. Jia, J. Yang, X. Li. Minimum variance unbiased subpixel centroid estimation of point image limited by photon shot noise. J Opt Soc Am A Opt Image Sci Vis, 27 (9) (2010), pp. 2038-2045.
[26] M. Davidson, L. Lindegren. Early PSF/LSF model. Gaia data release 2 documentation. Madrid: European Space Agency; 2019.
[27] K.I. Mortensen, L.S. Churchman, J.A. Spudich, H. Flyvbjerg. Optimized localization analysis for single-molecule tracking and super-resolution microscopy. Nat Methods, 7 (5) (2010), pp. 377-381. DOI: 10.1038/nmeth.1447
[28] F. Huang, T.M.P. Hartwich, F.E. Rivera-Molina, Y. Lin, W.C. Duim, J.J. Long, et al. Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms. Nat Methods, 10 (7) (2013), pp. 653-658. DOI: 10.1038/nmeth.2488
[30] A.A. Abdo, M. Ackermann, M. Ajello, W.B. Atwood, M. Axelsson, L. Baldini, et al. Fermi/large area telescope bright gamma-ray source list. Astrophys J Suppl Ser, 183 (2009), pp. 44-46.
[31] C. Fabricius, L. Lindegren. Astrometric image parameters determination. Gaia data release 2 documentation. Madrid: European Space Agency; 2019.
[32] R.C. Stone. A comparison of digital centering algorithms. Astron J, 97 (4) (1989), pp. 1227-1237.
[33] X. Wei, J. Xu, J. Li, J. Yan, G. Zhang. S-curve centroiding error correction for star sensor. Acta Astronaut, 99 (2014), pp. 231-241.
[34] G. Rufino, D. Accardo. Enhancement of the centroiding algorithm for the star tracker measure refinement. Acta Astronaut, 53 (2) (2003), pp. 135-147.
[35] H. Jia, J. Yang, X. Li, J. Yang, M. Yang, Y. Liu, et al. Systematic error analysis and compensation for high accuracy star centroid estimation of star tracker. Sci China Technol Sci, 53 (2010), pp. 3145-3152. DOI: 10.1007/s11431-010-4129-7
[36] J. Ares, J. Arines. Influence of thresholding on centroid statistics: full analytical description. Appl Opt, 43 (31) (2004), pp. 5796-5805.
[37] X. Ma, C. Rao, H. Zheng. Error analysis of CCD-based point source centroid computation under the background light. Opt Express, 17 (10) (2009), pp. 8525-8541.
[38] Y. Zhang, J. Jiang, G. Zhang, Y. Lu. Accurate and robust synchronous extraction algorithm for star centroid and nearby celestial body edge. IEEE Access, 7 (2019), pp. 126742-126752. DOI: 10.1109/access.2019.2939148
[39] T. Delabie, J.D. Schutter, B. Vandenbussche. An accurate and efficient Gaussian fit centroiding algorithm for star trackers. J Astronaut Sci, 61 (1) (2014), pp. 60-84. DOI: 10.1007/s40295-015-0034-4
[40] H. Wang, E. Xu, Z. Li, J. Li, T. Qin. Gaussian analytic centroiding method of star image of star tracker. Adv Space Res, 56 (10) (2015), pp. 2196-2205.
[41] J. Anderson, I.R. King. Toward high-precision astrometry with WFPC2. I. Deriving an accurate point-spread function. Publ Astron Soc Pac, 112 (776) (2000), pp. 1360-1382.
[42] C. Zhai, M. Shao, R. Goullioud, B. Nemati. Micro-pixel accuracy centroid displacement estimation and detector calibration. Proc R Soc A, 467 (2136) (2011), pp. 3550-3569. DOI: 10.1098/rspa.2011.0255
[43] European Machine Vision Association. EMVA standard 1288: standard for characterization of image sensors and cameras, release 3.1. Report. Barcelona: European Machine Vision Association (EMVA); 2016.
[44] T. Sun, F. Xing, X. Wang, Z. You, D. Chu. An accuracy measurement method for star trackers based on direct astronomic observation. Sci Rep, 6 (2016), p. 22593.
[45] P.L. Freeman. Image shifting apparatus for enhanced image resolution. United States patent US 7420592 B2. 2008.