[1] 崔平远, 高艾, 朱圣英. 深空探测器自主导航与制导[M]. 北京: 中国宇航出版社, 2016: 4-10.
CUI P Y, GAO A, ZHU S Y. Autonomous navigation and guidance of deep space probe[M]. Beijing: China Astronautic Publishing House, 2016: 4-10.
[2] 宁晓琳, 蔡洪炜, 吴伟仁, 等. 月球车的惯性/天文组合导航新方法[J]. 系统工程与电子技术, 2011, 33(8): 1837-1844.
NING X L, CAI H W, WU W R, et al. INS/CNS integrated navigation method for lunar rover[J]. Systems Engineering and Electronics, 2011, 33(8): 1837-1844.
[3] 吴伟仁, 于登云. 深空探测发展与未来关键技术[J]. 深空探测学报(中英文), 2014, 1(1): 5-17.
WU W R, YU D Y. Development of deep space exploration and its future key technologies[J]. Journal of Deep Space Exploration, 2014, 1(1): 5-17.
[4] 于正湜, 崔平远. 行星着陆自主导航与制导控制研究现状与趋势[J]. 深空探测学报, 2016, 3(4): 345-355.
YU Z S, CUI P Y. Research status and developing trend of the autonomous navigation, guidance, and control for planetary landing[J]. Journal of Deep Space Exploration, 2016, 3(4): 345-355.
[5] WU W R, LIU W W, QIAO D, et al. Investigation on the development of deep space exploration[J]. Science China Technological Sciences, 2012, 55(4): 1086-1091. doi: 10.1007/s11431-012-4759-z
[6] RADFORD A, METZ L, CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[EB/OL]. [2022-11-17]. https://arxiv.org/abs/1511.06434.
[7] NG T T, CHANG S F, SUN Q. A data set of authentic and spliced image blocks[R]. New York: Columbia University, 2004.
[8] JOLY A, GOEAU H, BONNET P, et al. Interactive plant identification based on social image data[J]. Ecological Informatics, 2014, 23: 22-34. doi: 10.1016/j.ecoinf.2013.07.006
[9] HAO P, LI C, RAHAMAN M M, et al. A comparison of deep learning classification methods on small-scale image data set: from convolutional neural networks to visual transformers[EB/OL]. [2022-11-17]. https://arxiv.org/abs/2107.07699v1.
[10] 陈坤, 王璐, 储珺. 月球表面图像的SIFT特征提取与匹配[J]. 计算机与现代化, 2011(7): 20-23, 26.
CHEN K, WANG L, CHU J. SIFT feature extraction and matching of lunar surface image[J]. Computer and Modernization, 2011(7): 20-23, 26.
[11] 欧阳自远. 月球探测的进展与中国的月球探测[J]. 地质科技情报, 2004, 23(4): 1-5.
OUYANG Z Y. International lunar exploration progress and Chinese lunar exploration[J]. Bulletin of Geologic Science and Technology, 2004, 23(4): 1-5.
[12] 秦同, 朱圣英, 崔平远, 等. 行星着陆动力下降段相对视觉导航方法[J]. 宇航学报, 2019, 40(2): 164-173.
QIN T, ZHU S Y, CUI P Y, et al. Relative optical navigation in powered descent phase of planetary landings[J]. Journal of Astronautics, 2019, 40(2): 164-173.
[13] HE J, WANG C, JIANG D, et al. CycleGAN with an improved loss function for cell detection using partly labeled images[J]. IEEE Journal of Biomedical and Health Informatics, 2020, 24(9): 2473-2480.
[14] SPEYERER E J, ROBINSON M S, DENEVI B W. Lunar Reconnaissance Orbiter Camera global morphological map of the Moon[C]//Proc. of the 42nd Annual Lunar and Planetary Science Conference, 2011(1608): 2387.
[15] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proc. of the Advances in Neural Information Processing Systems, 2014: 2672-2680.
[16] WANG K F, GOU C, DUAN Y J, et al. Generative adversarial networks: introduction and outlook[J]. IEEE/CAA Journal of Automatica Sinica, 2017, 4(4): 588-598.
[17] MIRZA M, OSINDERO S. Conditional generative adversarial nets[EB/OL]. https://arxiv.org/abs/1411.1784.
[18] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proc. of the IEEE International Conference on Computer Vision, 2017: 2223-2232.
[19] CHU C, ZHMOGINOV A, SANDLER M. CycleGAN, a master of steganography[EB/OL]. [2022-07-13]. https://arxiv.org/abs/1712.02950.
[20] MAHMOOD F, BORDERS D, CHEN R J, et al. Deep adversarial training for multi-organ nuclei segmentation in histopathology images[J]. IEEE Trans. on Medical Imaging, 2019, 39(11): 3257-3267.
[21] ALMAHAIRI A, RAJESHWAR S, SORDONI A, et al. Augmented CycleGAN: learning many-to-many mappings from unpaired data[C]//Proc. of the International Conference on Machine Learning, 2018: 195-204.
[22] KANEKO T, KAMEOKA H, TANAKA K, et al. CycleGAN-VC2: improved CycleGAN-based non-parallel voice conversion[C]//Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2019: 6820-6824.
[23] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 1125-1134.
[24] LIU X, HSIEH C J. Rob-GAN: generator, discriminator, and adversarial attacker[C]//Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 11234-11243.
[25] CHANG H, LU J, YU F, et al. PairedCycleGAN: asymmetric style transfer for applying and removing makeup[C]//Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
[26] YASUNO T, FUJII J, FUKAMI S. One-class steel detector using patch GAN discriminator for visualising anomalous feature map[EB/OL]. [2022-11-17]. https://arxiv.org/abs/2107.00143.
[27] DEMIR U, UNAL G. Patch-based image inpainting with generative adversarial networks[EB/OL]. [2022-11-17]. https://arxiv.org/abs/1803.07422v1.
[28] CHOI Y, CHOI M, KIM M, et al. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation[C]//Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 8789-8797.
[29] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proc. of the European Conference on Computer Vision, 2018: 3-19.
[30] WANG J S, GAI S, HUANG X, et al. From coarse to fine: a two stage conditional generative adversarial network for single image rain removal[J]. Digital Signal Processing, 2021, 111: 102985.