A&A, Volume 684, April 2024
Article Number: A89
Number of page(s): 9
Section: Planets and planetary systems
DOI: https://doi.org/10.1051/0004-6361/202348665
Published online: 05 April 2024
Advancements in the 3D shape reconstruction of Phobos: An analysis of shape models and future exploration directions★
1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430072, Hubei Province, PR China
e-mail: jgyan@whu.edu.cn; huangxf@whu.edu.cn
2 Wuhan Wisdom Technology Co., Ltd., Wuhan 430000, Hubei Province, PR China
3 Institute of Planetary Research, German Aerospace Center (DLR), Rutherfordstr. 2, 12489 Berlin, Germany
4 Geodesy Observatory of Tahiti, University of French Polynesia, BP 6570, 98702 Faa’a, Tahiti, French Polynesia, France

Received: 19 November 2023
Accepted: 31 January 2024
Aims. Our research focuses on developing a high-precision and relatively high-resolution shape model of Phobos.
Methods. We employed advanced photogrammetric techniques combined with novel computer vision methods to reconstruct the 3D shape of Phobos from nearly 900 Mars Express/SRC and Viking Orbiter images. This research also involved a comparison of the newly developed shape model with previous models to identify differences for future missions.
Results. This shape model was used to generate new measurements of the volume (5740 ± 30) km³, the surface area (1629 ± 8) km², and the bulk density (1847 ± 11) kg m⁻³ of Phobos. By comparing our reconstructed shape model with prior models, we have identified key differences, especially in areas such as the Öpik crater and near the Shklovsky crater. These findings highlight critical areas that warrant further investigation in future missions dedicated to exploring Phobos.
Key words: methods: data analysis / techniques: image processing / planets and satellites: individual: Phobos / planets and satellites: surfaces
The shape models are available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/684/A89
© The Authors 2024
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This article is published in open access under the Subscribe to Open model. Subscribe to A&A to support open access publication.
1 Introduction
The reconstruction of 3D models is a central focus in missions exploring small celestial bodies. The determination of a 3D shape representation, which ideally combines intricate detail, precision, and comprehensiveness, is fundamental for conducting thorough cartography and scientific analyses of these cosmic entities (Preusker et al. 2017). Phobos is one of the two natural satellites of Mars and has consistently intrigued planetary scientists. Its irregular, potato-like shape has sparked compelling questions about its origin, evolution, and composition. This distinctive shape, combined with the prevalence of impact craters and distinct grooves on its surface, has inspired a variety of hypotheses about its formation (Basilevsky et al. 2014). Precise cartography and careful reconstruction are crucial in furthering scientific inquiries related to Phobos, providing vital insights for understanding this unique celestial body.
Historically, Phobos shape models were largely approximations, such as triaxial ellipsoids or spherical harmonics models (Duxbury 1974, 1991; Simonelli et al. 1993; Thomas 1989; Turner 1978). The first detailed shape model of Phobos was created by Willner et al. (2010). They employed stereo-photogrammetric (SPG) processing on Mars Express (MEX)/Super Resolution Channel (SRC) images to develop a new global control point network. This led to the creation of a spherical harmonic model of Phobos with degree and order 17, based on network control points. Subsequently, Willner et al. (2014) extended the image dataset using stereo-photogrammetric methods to derive a global digital terrain model (DTM) with a resolution of 100 m pixel⁻¹ and a spherical harmonic model with degree and order 45. Gaskell (2011) and Ernst et al. (2023) focused on reconstructing shape models of Phobos using stereo-photoclinometry (SPC). This method facilitated the integration of data with varying resolutions and the continuous updating of shape models with new imagery, culminating in a comprehensive solution for the positioning of maplet centers, and the location and rotation of the celestial body (Al Asad et al. 2021; Barnouin et al. 2020; Gaskell et al. 2008, 2023).
In comparison to the stereo-photogrammetric results of Willner et al. (2014), the stereo-photoclinometric findings of Gaskell (2011) and Ernst et al. (2023) demonstrated better albedo consistency and finer surface detail. This enhancement can be attributed to the assumptions inherent in the stereo-photoclinometric surface reflection model. The photogrammetric shape model of Phobos, created by Willner et al. (2010, 2014), exhibits a marginally higher level of accuracy than the models of Gaskell (2011) and Ernst et al. (2023). High-resolution MEX/SRC images of Phobos have continued to be acquired as recently as 2019, and this updated imagery offers an opportunity to enhance the shape model reconstruction. In light of future missions such as the Martian Moons eXploration (MMX), which aims at a more comprehensive exploration and at sample return objectives, there is a pressing need for an updated shape model (Kuramoto et al. 2022).
An enhanced shape model, offering improved precision and resolution, could be of assistance in identifying potential landing sites and enabling comprehensive geological studies.
Our approach, while similar to the stereo-photogrammetric methods used by Willner et al. (2010, 2014), incorporates advanced computer vision techniques, which sets it apart from previous efforts. In this paper, we detail the refinement of the photogrammetric 3D reconstruction process that we have undertaken, which has led to the development of a high-precision and relatively high-resolution shape model of Phobos. This shape model not only updates some of the physical characteristics of Phobos, but also sets a foundation for more targeted exploratory missions. By analyzing the differences between various existing models, we have pinpointed key areas that hold promise for future exploration of Phobos.
2 Image data
A number of Mars spacecraft have acquired images of Phobos over the course of their explorations. In the 1970s, Viking 1 and 2 provided some of the earliest comprehensive views of this Martian moon (Duxbury 1989). Owing to the eccentric orbital design and flexible maneuverability of MEX, a large number of detailed observations of Phobos have been obtained (Pätzold et al. 2016). Considering the importance of comprehensive coverage for the reconstruction of a global shape model, we predominantly used image data from MEX, augmented by images from the Viking Orbiters (VO).
The MEX spacecraft is equipped with two imaging systems, the High Resolution Stereo Camera (HRSC) and the Super Resolution Channel (SRC). Although the SRC is integrated with the HRSC, it functions as a discrete 1024 × 1024 framing camera (with an effective pixel area of 1008 × 1018), which stands apart from the HRSC line-scan configuration (Oberst et al. 2008). With its separate optics and a longer focal length of 988.5 mm, the SRC is particularly advantageous for the high-precision reconstruction of global shape models. The image resolution also plays a critical role in determining the accuracy of the resultant 3D shape model; a finer resolution equates to a higher accuracy. With a fixed focal length and pixel size, the image resolution is determined by the flight altitude (Zimmerman et al. 2020). The pertinent formula is expressed as
R = cH / f, (1)
where R is the image resolution, c is the camera pixel size, H is the flight altitude, and f is the focal length. Using Eq. (1), we filtered the SRC images using a flight-altitude threshold of 5600 km, resulting in a final image set with an average resolution of approximately 25 m. The most recent Phobos shape reconstruction is that of Ernst et al. (2023); according to the image list they provided, the latest SRC image they used was acquired in 2016. Our image dataset comprises a total of 129 images acquired after this date, again with an average resolution of approximately 25 m. Many of these images were captured under favorable lighting conditions, which is anticipated to positively impact our subsequent photogrammetric reconstruction efforts.
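To illustrate this selection step, the short Python sketch below evaluates Eq. (1) and applies the altitude threshold; the pixel size and the image list are illustrative assumptions, not mission values.

```python
# Minimal sketch of the altitude filter built on Eq. (1).
SRC_FOCAL_LENGTH_M = 0.9885    # SRC focal length of 988.5 mm (Oberst et al. 2008)
PIXEL_SIZE_M = 9.0e-6          # assumed detector pixel size; verify against SRC specs
ALTITUDE_THRESHOLD_M = 5.6e6   # 5600 km flight-altitude cutoff used in the text

def image_resolution(flight_altitude_m: float) -> float:
    """Ground resolution R = c * H / f from Eq. (1), in meters per pixel."""
    return PIXEL_SIZE_M * flight_altitude_m / SRC_FOCAL_LENGTH_M

# Hypothetical (image id, flight altitude in meters) pairs.
candidates = [("HC046_0003_SR2", 1.2e6), ("HXXXX_0001_SR2", 7.5e6)]
selected = [(name, image_resolution(h))
            for name, h in candidates if h <= ALTITUDE_THRESHOLD_M]
```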
The SRC images encompassed approximately 80% of the Phobos surface, and VO images were used to address coverage deficiencies in the remainder, predominantly the trailing hemisphere. While the average resolution of VO images is 20 m, their quality is substantially compromised by reseau marks and noise, necessitating preliminary preprocessing (Wellman et al. 1976). We removed these reseau marks and, in response to the prevalent salt-and-pepper noise, applied the BM3D method (Dabov et al. 2007) for mild denoising to enhance the efficiency of the ensuing image-matching process. We conducted a thorough screening of MEX/SRC and VO images and eliminated those that were significantly impacted by noise or excessive blurriness. This process resulted in an initial dataset comprising 920 SRC images and 36 VO images, fewer than were used by Ernst et al. (2023). This difference reflects the methods: in contrast to the image matching in stereo-photogrammetry, the overall reconstruction completeness of stereo-photoclinometry increases with the availability of images under varied illumination conditions (Liu & Wu 2020; Kirk 1987).
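A minimal preprocessing sketch, assuming OpenCV and the open-source bm3d Python package, is shown below; the construction of the reseau-mark mask is omitted, and the noise level is an illustrative guess rather than the value used in our pipeline.

```python
import numpy as np
import cv2   # opencv-python
import bm3d  # assumed: the 'bm3d' package on PyPI

def preprocess_vo_image(img_u8: np.ndarray, reseau_mask: np.ndarray) -> np.ndarray:
    """Remove reseau marks by inpainting, then apply mild BM3D denoising.

    img_u8: 8-bit grayscale VO image; reseau_mask: 255 where a mark is present.
    """
    filled = cv2.inpaint(img_u8, reseau_mask, 3, cv2.INPAINT_TELEA)
    # BM3D operates on floats in [0, 1]; sigma_psd controls the denoising strength.
    denoised = bm3d.bm3d(filled.astype(np.float32) / 255.0, sigma_psd=0.05)
    return (np.clip(denoised, 0.0, 1.0) * 255.0).astype(np.uint8)
```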
3 Methods
Using advanced computer vision technologies, we propose a method aimed at reconstructing a detailed high-precision 3D shape model of Phobos. This method encompasses two primary stages: initially, aerial triangulation is used to obtain a sparse point cloud of the Phobos surface, followed by a transformation of this point cloud to a mesh. The procedural flowchart, depicted in Fig. 1, sequentially outlines these stages.
3.1 Image matching
Image-matching techniques are crucial for identifying conjugate points across multiple images. They are categorized into intensity-based and feature-based techniques. Intensity-based methods, also known as area-based methods, compute similarity measurements within sliding rectangular windows over pixel intensities. Although intensity-based methods are mature, they often struggle with geometric distortions, variations in illumination, and sensor differences (Ma et al. 2021). In contrast, feature-based matching methods, which are more suitable for spacecraft images with geometric changes and diverse illumination conditions, focus on accurate feature point detection and matching (Li et al. 2022). We employed SuperPoint (DeTone et al. 2018) for the detection and SuperGlue (Sarlin et al. 2020) for the matching, providing an edge over several classic feature-based methods, such as the scale-invariant feature transform (SIFT; Lowe 2004), D2-Net (Dusmanu et al. 2019), and R2D2 (Revaud et al. 2019), particularly in handling geometric distortions, variations in illumination, and scenarios with a low signal-to-noise ratio (Brockers et al. 2022; Wan et al. 2022; Zheng et al. 2022).
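As a sketch of this detection-and-matching step, the snippet below assumes the reference PyTorch implementation from the public SuperGluePretrainedNetwork repository is on the Python path; the configuration keys follow that repository's demo scripts, and the choice of pretrained weights is an assumption.

```python
import torch
from models.matching import Matching  # from the SuperGluePretrainedNetwork repo

device = "cuda" if torch.cuda.is_available() else "cpu"
config = {
    "superpoint": {"max_keypoints": 1024},
    "superglue": {"weights": "outdoor"},  # pretrained weight set is an assumption
}
matching = Matching(config).eval().to(device)

def match_pair(img0: torch.Tensor, img1: torch.Tensor):
    """img0/img1: grayscale tensors of shape (1, 1, H, W) with values in [0, 1]."""
    with torch.no_grad():
        pred = matching({"image0": img0.to(device), "image1": img1.to(device)})
    kpts0, kpts1 = pred["keypoints0"][0], pred["keypoints1"][0]
    matches = pred["matches0"][0]        # index into kpts1, or -1 if unmatched
    valid = matches > -1
    return kpts0[valid], kpts1[matches[valid]]  # conjugate point pairs
```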
Feature point detection is crucial for image matching, and its effectiveness is greatly enhanced by deep learning, particularly through convolutional neural networks (CNNs; Alzubaidi et al. 2021; Xu et al. 2020). The accuracy of these algorithms intrinsically relies on ground-truth data, often annotated by experts (Bojanić et al. 2019). Our preference was for SuperPoint, a self-supervised method that combines key point detection and description, which is well suited for tasks such as planetary image processing. The dual-network structure of SuperPoint includes the base detector for identifying corner points (these points merely serve as preliminary feature point candidates and not as final outputs) and the SuperPoint network for the final feature point determination, using the diverse MS-COCO 2014 dataset for training (Lin et al. 2014). Training SuperPoint involves three steps: training the base detector on a synthetic dataset, applying homographic adaptation to real images for the self-labeling of interest points, and using geometric transformations to identify final feature points and descriptors.
Following the detection of feature points from images, it becomes crucial to determine the correspondences between these points. This is achieved by assessing the similarities of descriptors, a process often referred to as feature point matching. The establishment of reliable and accurate correspondences between images poses a considerable challenge, particularly in planetary images that have varying viewpoints and scales. We chose the SuperGlue matching algorithm for its advanced attention mechanism, enhancing its ability to accurately match feature points.
SuperGlue combines an attentional graph neural network (GNN; Vaswani et al. 2017) and the Sinkhorn algorithm (Cuturi 2013; Knight 2008) to solve an optimal transport problem, mimicking a human-like trial-and-error matching process. This process involves two stages, self-attention and cross-attention, which together create matching vectors encapsulating features and descriptors. The Sinkhorn algorithm is then applied to these vectors to maximize their similarities, efficiently identifying the conjugate points between images.
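The Sinkhorn step itself is compact enough to write down. The sketch below iteratively normalizes a matching-score matrix toward a doubly stochastic assignment; it omits the extra "dustbin" row and column that SuperGlue adds for unmatched points.

```python
import torch

def sinkhorn(scores: torch.Tensor, n_iters: int = 20) -> torch.Tensor:
    """Normalize a score matrix in log space (Cuturi 2013).

    Alternating row and column normalizations drive the result toward a
    doubly stochastic matrix whose entries act as soft match probabilities.
    """
    log_p = scores
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # rows
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # columns
    return log_p.exp()

# Mutual best matches above a confidence threshold yield the conjugate points.
```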
Fig. 1 Flowchart of our reconstruction process.
3.2 Bundle adjustment
After the completion of image matching, the next step is to carry out bundle adjustment. This process aims to minimize the reprojection error of corresponding image rays in 3D space by adjusting the intrinsic camera parameters along with the associated positional and rotational data (Agarwal et al. 2010; Wu et al. 2011). This involves forming stereo models through the convergence of image rays at the reconstruction target (Yastikli 2007). Each image is matched to maximize the number of conjugate points for establishing stereo models. We incorporated multiple images into a stereo model only when a single image contained sufficient feature points that aligned with the feature points in at least two other images.
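As a rough illustration of the quantity being minimized, the sketch below sets up reprojection residuals for a simple pinhole model that scipy.optimize.least_squares can refine; the actual SRC camera model and our parameterization are not spelled out here, so this is a generic stand-in rather than our implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs_uv, f):
    """Residuals between observed and projected image coordinates.

    params packs 6 values per camera (rotation vector + translation) followed
    by 3 values per surface point. cam_idx/pt_idx assign each observation in
    obs_uv to a camera and a point; f is the focal length in pixel units.
    """
    cams = params[: n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6 :].reshape(n_pts, 3)
    rot = Rotation.from_rotvec(cams[cam_idx, :3])
    p_cam = rot.apply(pts[pt_idx]) + cams[cam_idx, 3:]  # world -> camera frame
    proj = f * p_cam[:, :2] / p_cam[:, 2:3]             # pinhole projection
    return (proj - obs_uv).ravel()

# scipy.optimize.least_squares(reprojection_residuals, x0, args=(...)) then
# refines cameras and points jointly; a sparse Jacobian keeps this tractable.
```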
In traditional photogrammetric bundle adjustment, control information such as the initial camera position and orientation is crucial for the high-precision integration of stereo models. However, the existing control networks of Phobos, also derived photogrammetrically (Oberst et al. 2014; Willner et al. 2010), are not applicable for our photogrammetric bundle adjustment process. Therefore, our stereo model adjustments mainly depended on the SRC positions and orientations, posing challenges under geometric constraints and highlighting the need for accurate and robust image matching. The prior datasets for position and orientation were sourced from the SPICE kernels (Acton 1996). These kernels define the alignment of the Phobos body-fixed coordinate system relative to the J2000.0 frame and include state information of the MEX throughout its mission, detailing the placement of individual payloads on the spacecraft (Costa 2013; Scholten et al. 2005). Our primary reference for selecting and using these kernels was MEX_OPS_V321_20230405_001.TM, which was created by the European Space Agency (ESA). Through the SPICE built-in interface, we extracted the positional and orientational information of the SRC camera relative to the Phobos body-fixed coordinate frame.
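The kernel queries can be sketched with the spiceypy wrapper around the SPICE toolkit; the epoch below is illustrative, and the SRC instrument frame name is an assumption that should be checked against the MEX frames kernel.

```python
import spiceypy as spice

spice.furnsh("MEX_OPS_V321_20230405_001.TM")  # ESA meta-kernel named above
et = spice.str2et("2019-07-17T12:00:00")      # illustrative epoch

# MEX position relative to Phobos in the Phobos body-fixed frame, corrected
# for light time and stellar aberration.
pos, _ = spice.spkpos("MEX", et, "IAU_PHOBOS", "LT+S", "PHOBOS")

# Orientation: rotation matrix from the (assumed) SRC camera frame to the
# Phobos body-fixed frame at the same epoch.
cmat = spice.pxform("MEX_SRC", "IAU_PHOBOS", et)

spice.kclear()
```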
3.3 Point cloud to mesh
After photogrammetric bundle adjustment, a denser 3D point cloud is necessary for a precise shape reconstruction because a sparse point cloud is insufficient to characterize the shape. Dense matching is required to establish dense point clouds from the image data at the pixel level. For this purpose, we employed the PatchMatch algorithm, which creates refined depth maps from the initial sparse point cloud; these maps were then filtered and merged into a dense point cloud (Shen 2013). We chose PatchMatch because it is efficient, produces a highly precise dense point cloud, and provides depth maps with acceptable error levels (Barnes et al. 2009; Bleyer et al. 2011).
While the dense point cloud provides a spatial outline of the Phobos shape, it lacks a detailed visual representation. To make it more practical, we transformed this point cloud into a triangular irregular network (TIN), a process often referred to as point cloud networking. We opted not to represent the shape model in the form of a DTM because of the requirements of the subsequent texture mapping. We used the Delaunay triangulation method to connect points and form an irregular triangular mesh (Tsai 1993). It is vital in this mesh formation that the triangles do not overlap and that they interconnect seamlessly. Additionally, it may be necessary to adjust the original 3D surface points for mesh alignment, meaning that the triangle vertices after construction might differ slightly from those in the original dense point cloud. The resulting mesh model thus offers a more effective representation of the Phobos shape.
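A compact point-cloud-to-mesh sketch with the Open3D library is given below. Note that it substitutes Poisson surface reconstruction for the Delaunay-based networking described above, as a common alternative that also yields a triangle mesh; the file names are hypothetical.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("phobos_dense.ply")  # hypothetical dense cloud
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(30)    # consistent outward normals

# Poisson reconstruction as a stand-in for Delaunay-based networking.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=341724)
o3d.io.write_triangle_mesh("phobos_shape.obj", mesh)
```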
The 3D mesh model we derived from the surface of Phobos depicted its shape with considerable accuracy. Nevertheless, the model lacked the detailed textures of the surface, highlighting the necessity of texture mapping. Each model facet, corresponding to images from various viewpoints, requires high-quality images selected based on orthogonality and resolution for an effective texture mapping (Li & Cheng 2015). Color adjustments are needed in areas with luminance disparities to maintain uniformity. We integrated Real-ESRGAN, a super-resolution algorithm based on generative adversarial networks (GAN; Wang et al. 2021), which enhanced the texture mapping by addressing issues such as blur and noise. For SRC and VO images with lower resolution, the super-resolution function of Real-ESRGAN can elevate the quality and detail of the texture mapping.
4 Results
In this research, we present and critically assess the results of three interrelated processes: image matching, bundle adjustment, and the transformation from point cloud to mesh model. These processes are cascaded: the quality of the image matching directly influences the subsequent bundle adjustment; the accuracy of the sparse point cloud generated through bundle adjustment plays a pivotal role in determining the precision of the final shape mesh model; and this shape model is instrumental in deriving specific physical parameters of Phobos, such as its volume and bulk density.
Our method for validating the image-matching performance of SuperPoint and SuperGlue on Phobos imagery is explained from two fundamental perspectives. The first perspective discusses our preference for feature-based image-matching methods over the intensity-based alternatives. The second part elaborates on the reasons for selecting SuperPoint and SuperGlue from among various feature-based approaches.
Willner et al. (2010, 2014) employed two different intensity-based matching methods to create a DTM of Phobos using HRSC line-scan images. The process involved initially identifying conjugate points at the pixel level using the normalized cross-correlation (NCC) method (Heinrichs et al. 2007; Tsai & Lin 2003). This step was followed by the application of least-squares image matching to refine the accuracy to the subpixel level (Ackermann 1984; Bethmann & Luhmann 2010). This workflow proves particularly effective for HRSC images, as each scan line in these images contains position and orientation information. This allows for a more dynamic search within adjacent scan lines, which not only simplifies the search process, but also improves its precision. However, the efficiency of this method diminishes for extensive sets of frame images such as those from SRC and VOs because the rectangular search window is sensitive to rotation and scaling variations. In addition, the matching results are also subject to the size of the search window: an excessively large window may result in unnecessary computations in areas with fewer targeted points and additional false-match results, while a window that is too small might fail to capture sufficient grayscale information around the targeted points, leading to no matches or potential mismatches (Fan et al. 2010). We conducted NCC and least-squares image matching on two SRC images, setting the search window size to 9 pixels, the step size to 50 (to sparsify the matched points for clarity), and the threshold to 0.98.
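The following sketch mirrors that experiment with OpenCV's normalized template matching (the zero-mean NCC variant); the subpixel least-squares refinement is omitted, and the search runs over the full right image for simplicity.

```python
import cv2
import numpy as np

WIN, STEP, THRESH = 9, 50, 0.98  # window size, step, and threshold from the text

def ncc_match(left: np.ndarray, right: np.ndarray):
    """Match windows from the left image against the right image via NCC."""
    half = WIN // 2
    matches = []
    for y in range(half, left.shape[0] - half, STEP):
        for x in range(half, left.shape[1] - half, STEP):
            template = left[y - half : y + half + 1, x - half : x + half + 1]
            scores = cv2.matchTemplate(right, template, cv2.TM_CCOEFF_NORMED)
            _, peak, _, loc = cv2.minMaxLoc(scores)
            if peak >= THRESH:
                matches.append(((x, y), (loc[0] + half, loc[1] + half)))
    return matches
```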
Despite setting a relatively high threshold of 0.98, we observed a significant number of mismatches. We speculate that this may be attributed to factors such as noise interference, along with rotation and scaling differences between the images. In contrast, feature-based image-matching methods often demonstrate robustness across various conditions, including scaling, rotation, and low signal-to-noise ratios. In addition to the SuperPoint and SuperGlue combination we employed, one of the most classic alternatives is SIFT combined with Brute Force matching. The SIFT method (Lowe 2004) is particularly noted for its precision and rotational invariance, making it a prominent approach in feature point detection, while Brute Force offers reliable feature point matching without regard to computational efficiency. To assess the image-matching efficacy of SuperPoint and SuperGlue on Phobos images, we performed both qualitative and quantitative evaluations and compared them with the SIFT and Brute Force method (as illustrated in Fig. 2).
Figure 2 illustrates the distribution of key points for the different feature-based image-matching algorithms. Using the SuperPoint and SuperGlue method, we extracted 953 and 878 key points from the two images, resulting in a total of 285 matched conjugate pairs. In comparison, the SIFT and Brute Force approach yielded less optimal outcomes, with 629 and 256 extracted key points and 143 matched conjugate pairs. Additionally, a considerable number of mismatches were observed with the SIFT and Brute Force method, indicating that further filtering of feature points is required.
Following the image-matching phase, we advanced to bundle adjustment. In this process, a total of 865 SRC images and 26 VO images were interconnected through stereo models, with each surface point being observable in an average of 3.4 images. The enhanced feature point detection and matching processes contributed positively to the accuracy of the bundle adjustment: the average reprojection error is 0.628 pixels, and the maximum error is 2.976 pixels. As shown in Fig. 3, the surface points with poorer accuracy are mainly located in craters and in the trailing hemisphere, which is likely a consequence of the greater reliance on VO images in this area.
The calculated reprojection error, in conjunction with the resolution of the image dataset employed in our study, suggests an overall positional accuracy of approximately 16.2 m. This compares favorably with the accuracy of 36 m reported by Ernst et al. (2023) for their shape model, whose dataset has a representative resolution of 20 m; for the relevant SPC-derived accuracy metrics, we refer to Ernst et al. (2023) and Al Asad et al. (2021). Willner et al. (2014) did not specify the accuracy of their model. The Phobos control network, which was likewise derived photogrammetrically with the participation of Willner et al. (2014), has an accuracy of 13 m (Burmeister et al. 2018). Control points are special surface points that are more easily reidentified through image matching in a larger number of images than average; consequently, their accuracy is better than that of the general surface points. We therefore cautiously infer that the accuracy of our shape model may outperform that of Willner et al. (2014), although the exact margin is not quantified. Taken together, these comparisons indicate that the positional accuracy of our shape model is better than that of other current models.
Figure 4 shows our reconstructed untextured and textured shape models of Phobos. These models feature 171 863 vertices and 341 724 facets, which slightly exceed the 137 439 vertices and 274 874 facets of the model by Willner et al. (2014). Our shape model and that of Willner et al. (2014) are both numerically inferior to the 1 579 014 vertices and 3 145 728 facets of the shape model of Ernst et al. (2023). This discrepancy arises because the stereo-photoclinometric method computes the surface gradient of Phobos on a pixel-by-pixel basis, thus offering a more detailed characterization of the shape (Al Asad et al. 2021; Barnouin et al. 2020). In contrast, the stereo-photogrammetric method we used, as did Willner et al. (2014), calculates surface points based on collinearity equations with redundant observations, leading to shape models with a higher precision but a lower resolution. Certain parameters of Phobos were updated, as presented in Table 1. The bulk density calculation for our shape model was based on GM = (0.70765 ± 0.0075) × 10⁻³ km³ s⁻² from Yang et al. (2019). The Phobos parameters from Willner et al. (2014) and Ernst et al. (2023) are also provided as references.
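The volume, surface area, and bulk density follow from the mesh by elementary geometry. A minimal sketch of these measurements is shown below; it assumes a closed mesh with consistently oriented facets and uses the GM value of Yang et al. (2019).

```python
import numpy as np

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
GM = 0.70765e-3 * 1e9    # GM of Phobos, km^3 s^-2 converted to m^3 s^-2

def mesh_volume_area(vertices: np.ndarray, facets: np.ndarray):
    """Volume and surface area of a closed, consistently oriented triangle mesh.

    vertices: (N, 3) array in meters; facets: (M, 3) array of vertex indices.
    The volume is the sum of signed tetrahedra spanned by each facet and the
    origin (divergence theorem).
    """
    a, b, c = (vertices[facets[:, i]] for i in range(3))
    volume = abs(np.einsum("ij,ij->i", a, np.cross(b, c)).sum()) / 6.0
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()
    return volume, area

# volume, area = mesh_volume_area(V, F)
# bulk_density = GM / (G * volume)  # ~1847 kg m^-3 for the values in Table 1
```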
Fig. 2 Comparison of image matching between SuperPoint and SuperGlue and SIFT and Brute Force (HC046_0003_SR2, left, and HC069_0004_SR2, right). (a) Results of SuperPoint and SuperGlue. (b) Results of SIFT and Brute Force.
Fig. 3 Three views of the reprojection error distribution.
Table 1 Shape model parameters of Phobos.
Fig. 4 Six orthographic views of our shape model along the primary axes of the body-fixed Phobos coordinate frame (the axes point toward the viewer; north is up for the ±X and ±Y views, and +Y is up for the ±Z views).
5 Discussion
5.1 Comparisons with existing models
To validate the enhancements in our reconstructed shape model of Phobos, a thorough evaluation of the model is essential. On Earth or the Moon, a common and reliable validation technique involves comparing surface point coordinates derived from the images with those obtained from other, more accurate measurements, such as Global Navigation Satellite System (GNSS) or laser systems; the root mean square error between these sets of data is then calculated to assess the model (Benassi et al. 2017). However, this method is not applicable to Phobos because the existing control network for Phobos is also image based, developed through stereo-photogrammetry (Burmeister et al. 2018; Oberst et al. 2014; Willner et al. 2014). An alternative approach that is frequently used to reconstruct the shapes of small objects is to compare real images with synthetic images generated from the model under identical lighting and observational geometries (Ernst et al. 2023; Jorda et al. 2016). This technique has been effective in evaluating stereo-photoclinometric shape models. Nevertheless, our stereo-photogrammetric method does not yield information on the surface albedo, which limits the applicability of this evaluation method for our shape model.
In an alternative approach to evaluating our shape model, we compared it with other existing models, specifically those of Willner et al. (2014) and Ernst et al. (2023). Both Gaskell (2011) and Ernst et al. (2023) employed the SPC method; because the work of Ernst et al. (2023) involved Gaskell and benefitted from a richer dataset, we exclusively used the results from Ernst et al. (2023) for our subsequent comparisons between the models. This approach does not directly quantify the strengths and weaknesses of these models. However, it does facilitate an analysis of the regions in which the various shape models differ notably. These regions of significant differences are likely to be of particular interest and a focus of future detailed investigations. The metric used for this comparison is the Hausdorff distance, a tool commonly employed in computer vision for shape recognition and for comparing differences between models (Aspert et al. 2002; Zhang et al. 2017; Chen et al. 2023). The Hausdorff distance measures the maximum distance from a surface point on one model to the nearest surface point on the other model; a higher value indicates more substantial discrepancies between the models (shown in Fig. 5).
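A vertex-based sketch of this comparison with SciPy is shown below; dedicated mesh-comparison tools such as that of Aspert et al. (2002) sample the surfaces densely, so comparing mesh vertices only, as here, is a simplification that assumes the two models have comparable resolutions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Maximum of the two directed Hausdorff distances between (N, 3) point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

def mean_nearest_distance(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Mean distance from each point in pts_a to its nearest neighbor in pts_b."""
    dists, _ = cKDTree(pts_b).query(pts_a)
    return float(dists.mean())
```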
Figure 5 presents the difference distribution between our Phobos shape model and the models developed by Willner et al. (2014) and Ernst et al. (2023). Because Ernst et al. (2023) did not center their shape model, we aligned each model on the center of figure for consistency. The mean differences are 42 m and 44 m, respectively, which can be considered acceptable given the positional accuracies of these models. According to the topographic zoning outlined by Wählisch et al. (2014), the regions exhibiting larger differences, denoted by green areas in Fig. 5, are predominantly located in the trailing hemisphere (the −Y view in Fig. 5). Furthermore, the regions with the most significant differences (indicated by warm colors) are also situated in the trailing hemisphere, specifically in the Öpik crater and near the Shklovsky crater, with discrepancies reaching up to 468 m and 474 m. We also compared the Willner et al. (2014) model with the Ernst et al. (2023) model; the results show an overall consistency, with widely distributed differences in the trailing hemisphere, a mean difference of 47 m, and a maximum difference of 347 m.
Fig. 5 Six orthographic views of the differences between two shape models along the primary axes of the body-fixed Phobos coordinate frame (the axes point toward the viewer; north is up for the ±X and ±Y views, and +Y is up for the ±Z views).
5.2 Analyses of the differences
The pronounced differences in the trailing hemisphere are primarily due to the uneven coverage of the MEX/SRC images and the constraints of the VO images. The highly elliptical orbit of MEX is nearly perpendicular to the orbital plane of Phobos, which significantly enhances observations of this Martian moon. This advantageous positioning allows MEX/SRC to capture detailed images of the far side of Phobos, particularly its anti-Mars hemisphere. Nevertheless, because Phobos is tidally locked to Mars, the trailing side of Phobos is unfavorably illuminated, particularly during close encounters with MEX. Additionally, the limited slew capabilities of the HRSC/SRC further complicate observations (Gwinner et al. 2016; Jacobson 2010). Consequently, as Fig. 6 illustrates, the SRC frequently captures images of the trailing hemisphere from greater distances, and the resolution of images in these regions is generally coarser than 50 m, affecting the accuracy of the bundle adjustment.
The VO images provide high-resolution coverage of the trailing hemisphere of Phobos. However, the presence of reseau marks and image noise in these images still poses challenges to the accuracy of the shape model (Wellman et al. 1976). It is crucial to handle these marks and noise with the utmost care, as shape reconstruction fundamentally involves converting 2D spacecraft image information into 3D spatial information about the celestial object, and any preprocessing of the images unavoidably reduces this vital information (Ballabeni et al. 2015; Szeliski 2022). Furthermore, the limited quality of the VO navigation data complicates the reconstruction process, potentially leading to a reduced positional accuracy and even systematic errors in areas covered solely by VO images. While it is feasible to adjust the weights of the VO navigation data during the bundle adjustment and to leverage MEX navigation data, the extent of these improvements remains limited. One approach could be to adjust the block in a first run with the SRC information fixed and without any VO navigation data; this would tie the VO images to the SRC data and provide weights for the positions and orientations of the SRC and VO images in a second run.
In summary, the differences observed between different models of the trailing hemisphere of Phobos are largely due to the uneven coverage by MEX/SRC images and the limitations in the quality of VO images. A promising solution to address this situation is the execution of close orbital observations. The upcoming MMX mission, led by the Japan Aerospace Exploration Agency (JAXA), plans to land on Phobos and return surface samples to Earth (Kuramoto et al. 2022). In the interim, our high-precision shape model will be instrumental in aiding the lander touchdown. The mission orbiter is expected to send back wide-coverage high-resolution images, which will significantly enhance the accuracy of the shape reconstruction. Because the trailing hemisphere of Phobos is sparsely grooved and these grooves are intricately linked to its formation and evolution, a detailed examination of this region could provide valuable insights into the history of this Martian moon (Murray & Heggie 2014).
Fig. 6 Schematic of the Phobos and MEX orbits and SRC imaging.
6 Summary
In the absence of a more precise network of control points on Phobos, the reconstruction of a 3D shape model of Phobos relies strongly on robust and accurate image matching. We refined the image-matching process and, following aerial triangulation and the transformation from point cloud to mesh, successfully derived a high-precision and relatively high-resolution shape model of Phobos. By measuring the new shape model, we updated some of the Phobos parameters: the volume is (5740 ± 30) km³, the surface area is (1629 ± 8) km², and the bulk density is (1847 ± 11) kg m⁻³. Our reconstructed shape model is optimized and enhanced compared to the models of Willner et al. (2014) and Ernst et al. (2023), achieving a satisfactory precision while maintaining a relatively high resolution. Through our comparative analyses, we identified the differences between the existing shape models of Phobos. In particular, the Öpik crater and the vicinity of the Shklovsky crater, where these differences are most pronounced, will be areas of particular interest in future explorations of Phobos.
There is potential for a further optimization of the shape reconstruction of Phobos. For instance, a fusion of stereo-photogrammetry and stereo-photoclinometry could be advantageous: the SPC method can incorporate images from any illumination and observation geometry, while the SPG collinearity-equation-based calculations yield more accurate surface points. This type of fusion has previously been applied to create DTMs for Mars and the Moon (Jiang et al. 2017; Liu & Wu 2023), but its full potential has not yet been harnessed for reconstructing small-body shape models. Finally, high-resolution images of the trailing hemisphere of Phobos are crucial to advance the accuracy of its shape reconstruction.
Acknowledgements
We are grateful to NASA and ESA for providing image data. The authors thank the HRSC Experiment team at DLR, Institute of Planetary Research, Berlin, and at Freie Universität Berlin, the HRSC Science Team, as well as the Mars Express Project teams at ESTEC, ESOC, and ESAC for their successful planning and acquisition of data as well as for making processed data available to the HRSC team and scientific community. We also thank Jiageng Zhong and Tao Zhang for their advice in image processing. This work is supported by the National Key Research and Development Program of China (No. 2022YFF0503202) and National Natural Science Foundation of China (No. 42241116, 42030110). Jean-Pierre Barriot was funded by a DAR grant in planetology from the French Space Agency (CNES), France.
References
- Ackermann, F. 1984, Photogramm. Rec., 11, 429
- Acton, C. H., Jr 1996, Planet. Space Sci., 44, 65
- Agarwal, S., Snavely, N., Seitz, S. M., & Szeliski, R. 2010, in Computer Vision – ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5–11, 2010, Proceedings, Part II (Springer), 29
- Al Asad, M. M., Philpott, L. C., Johnson, C. L., et al. 2021, Planet. Sci. J., 2, 82
- Alzubaidi, L., Zhang, J., Humaidi, A. J., et al. 2021, J. Big Data, 8, 53
- Aspert, N., Santa-Cruz, D., & Ebrahimi, T. 2002, Proc. IEEE International Conference on Multimedia and Expo, 1, 705
- Ballabeni, A., Apollonio, F. I., Gaiani, M., & Remondino, F. 2015, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-5/W4, 315
- Barnes, C., Shechtman, E., Finkelstein, A., & Goldman, D. B. 2009, ACM Trans. Graph., 28, 24
- Barnouin, O. S., Daly, M. G., Palmer, E. E., et al. 2020, Planet. Space Sci., 180, 104764
- Basilevsky, A., Lorenz, C., Shingareva, T., et al. 2014, Planet. Space Sci., 102, 95
- Benassi, F., Dall’Asta, E., Diotri, F., et al. 2017, Remote Sensing, 9, 172
- Bethmann, F., & Luhmann, T. 2010, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 38, 86
- Bleyer, M., Rhemann, C., & Rother, C. 2011, BMVC, 11, 1
- Bojanić, D., Bartol, K., Pribanić, T., et al. 2019, in 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA) (IEEE), 64
- Brockers, R., Proença, P., Delaune, J., et al. 2022, in 2022 IEEE Aerospace Conference (AERO) (IEEE), 1
- Burmeister, S., Willner, K., Schmidt, V., & Oberst, J. 2018, J. Geodesy, 92, 963
- Chen, M., Huang, X., Yan, J., Lei, Z., & Barriot, J. P. 2023, Icarus, 401, 115566
- Costa, M. 2013, Planetary Science Informatics and Data Analytics Conference, 2082, 6008
- Cuturi, M. 2013, Adv. Neural Inform. Process. Syst., 26
- Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. 2007, IEEE Trans. Image Process., 16, 2080
- DeTone, D., Malisiewicz, T., & Rabinovich, A. 2018, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 224
- Dusmanu, M., Rocco, I., Pajdla, T., et al. 2019, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8092
- Duxbury, T. C. 1974, Icarus, 23, 290
- Duxbury, T. C. 1989, Icarus, 78, 169
- Duxbury, T. C. 1991, Planet. Space Sci., 39, 355
- Ernst, C. M., Daly, R. T., Gaskell, R. W., et al. 2023, Earth Planets Space, 75, 103
- Fan, X., Rhody, H., & Saber, E. 2010, IEEE Trans. Geosci. Remote Sensing, 48, 2580
- Gaskell, R. 2011, NASA Planetary Data System, VO1
- Gaskell, R. W., Barnouin-Jha, O. S., Scheeres, D. J., et al. 2008, Meteor. Planet. Sci., 43, 1049
- Gaskell, R. W., Barnouin, O. S., Daly, M. G., et al. 2023, Planet. Sci. J., 4, 63
- Gwinner, K., Jaumann, R., Hauber, E., et al. 2016, Planet. Space Sci., 126, 93
- Heinrichs, M., Rodehorst, V., & Hellwich, O. 2007, Differences (SSD), 2, 1
- Jacobson, R. 2010, AJ, 139, 668
- Jiang, C., Douté, S., Luo, B., & Zhang, L. 2017, ISPRS J. Photogramm. Remote Sensing, 130, 418
- Jorda, L., Gaskell, R., Capanna, C., et al. 2016, Icarus, 277, 257
- Kirk, R. 1987, Ph.D. Thesis, California Institute of Technology, Pasadena, USA
- Knight, P. A. 2008, SIAM J. Matrix Anal. Applic., 30, 261
- Kuramoto, K., Kawakatsu, Y., Fujimoto, M., et al. 2022, Earth Planets Space, 74, 12
- Li, Y., & Cheng, J. 2015, Remote Sensing Inform., 30, 31
- Li, Z., Liu, Y., Sun, Y., et al. 2022, Acta Geod. Cartogr. Sin., 51, 1437
- Lin, T.-Y., Maire, M., Belongie, S., et al. 2014, in Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part V (Springer), 740
- Liu, W. C., & Wu, B. 2020, ISPRS J. Photogramm. Remote Sensing, 159, 153
- Liu, W. C., & Wu, B. 2023, ISPRS J. Photogramm. Remote Sensing, 204, 237
- Lowe, D. G. 2004, Int. J. Comput. Vis., 60, 91
- Ma, J., Jiang, X., Fan, A., Jiang, J., & Yan, J. 2021, Int. J. Comput. Vis., 129, 23
- Murray, J., & Heggie, D. 2014, Planet. Space Sci., 102, 119
- Oberst, J., Schwarz, G., Behnke, T., et al. 2008, Planet. Space Sci., 56, 473
- Oberst, J., Zubarev, A., Nadezhdina, I., Shishkina, L., & Rambaux, N. 2014, Planet. Space Sci., 102, 45
- Pätzold, M., Häusler, B., Tyler, G. L., et al. 2016, Planet. Space Sci., 127, 44
- Preusker, F., Scholten, F., Matz, K.-D., et al. 2017, A&A, 607, L1
- Revaud, J., De Souza, C., Humenberger, M., & Weinzaepfel, P. 2019, Adv. Neural Inform. Process. Syst., 32
- Sarlin, P.-E., DeTone, D., Malisiewicz, T., & Rabinovich, A. 2020, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4938
- Scholten, F., Gwinner, K., Roatsch, T., et al. 2005, Photogramm. Eng. Remote Sensing, 71, 1143
- Shen, S. 2013, IEEE Trans. Image Process., 22, 1901
- Simonelli, D. P., Thomas, P. C., Carcich, B. T., & Veverka, J. 1993, Icarus, 103, 49
- Szeliski, R. 2022, Computer Vision: Algorithms and Applications (Springer Nature)
- Thomas, P. C. 1989, Icarus, 77, 248
- Tsai, V. J. 1993, Int. J. Geogr. Inform. Sci., 7, 501
- Tsai, D.-M., & Lin, C.-T. 2003, Pattern Recognit. Lett., 24, 2625
- Turner, R. J. 1978, Icarus, 33, 116
- Vaswani, A., Shazeer, N., Parmar, N., et al. 2017, Adv. Neural Inform. Process. Syst., 30
- Wählisch, M., Stooke, P. J., Karachevtseva, I. P., et al. 2014, Planet. Space Sci., 102, 60
- Wan, X., Shao, Y., Zhang, S., & Li, S. 2022, IEEE Trans. Geosci. Remote Sensing, 60, 1
- Wang, X., Xie, L., Dong, C., & Shan, Y. 2021, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 1905
- Wellman, J. B., Landauer, F. P., Norris, D. D., & Thorpe, T. E. 1976, J. Spacecraft Rockets, 13, 660
- Willner, K., Oberst, J., Hussmann, H., et al. 2010, Earth Planet. Sci. Lett., 294, 541
- Willner, K., Shi, X., & Oberst, J. 2014, Planet. Space Sci., 102, 51
- Wu, C., Agarwal, S., Curless, B., & Seitz, S. M. 2011, in CVPR 2011 (IEEE), 3057
- Xu, Z., Yu, J., Yu, C., et al. 2020, in 2020 IEEE 28th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM) (IEEE), 33
- Yang, X., Yan, J. G., Andert, T., et al. 2019, MNRAS, 490, 2007
- Yastikli, N. 2007, J. Cultural Heritage, 8, 423
- Zhang, D., He, F., Han, S., et al. 2017, Integr. Comput. Aided Eng., 24, 261
- Zheng, Y., Birdal, T., Xia, F., et al. 2022, arXiv e-prints [arXiv:2207.06333]
- Zimmerman, T., Jansen, K., & Miller, J. 2020, Remote Sensing, 12, 2305