Academic Editor
Gongjian Zhou
Harbin Institute of Technology, China
Chinese Journal of Information Fusion, Volume 2, Issue 4, 2025: 275-295

Open Access | Review Article | 30 October 2025
Research Progress and Prospect of Radar Data Intelligent Processing
1 Institute of Information Fusion, Naval Aviation University, Yantai 264001, China
* Corresponding Authors: Shutao Xia, [email protected]; Pingliang Xu, [email protected]
Received: 04 March 2025, Accepted: 21 April 2025, Published: 30 October 2025
Abstract
As the application domains of artificial intelligence continue to expand, radar data processing has fully entered the era of intelligence, with new results emerging in target detection, target tracking, and target recognition. At this time of rapid development in artificial intelligence technology, it is necessary to consider the future direction of intelligent radar data processing. To this end, drawing on our team's research over the past ten years, this paper analyzes the main progress and challenges of radar data intelligent processing, examines new requirements for autonomous, multi-modal, and multi-mode processing, and explores new opportunities in the era of large models. Furthermore, following a research path of "1 domain foundation model + 3 core task models + N typical application implementations", the research conceptions and main contents of a radar data large model are put forward to enable the development of intelligent radar.

Keywords
artificial intelligence
deep learning
large model
radar data processing
target detection
target tracking
target identification

Data Availability Statement
Not applicable.

Funding
This work was supported by the National Natural Science Foundation of China under Grant 62171453 and Grant U2433216.

Conflicts of Interest
The authors declare no conflicts of interest.

Ethical Approval and Consent to Participate
Not applicable.

Cite This Article
APA Style
Cui, Y., He, Y., Xia, S., Xiong, W., Xu, P., Liu, Y., Wang, Z., Gu, X., Sun, W., Xiu, J., Kong, Z., Li, H., Lu, Y., & Tang, H. (2025). Research Progress and Prospect of Radar Data Intelligent Processing. Chinese Journal of Information Fusion, 2(4), 275–295. https://doi.org/10.62762/CJIF.2025.861974

Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
CC BY Copyright © 2025 by the Author(s). Published by Institute of Central Computation and Knowledge. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
Chinese Journal of Information Fusion

ISSN: 2998-3371 (Online) | ISSN: 2998-3363 (Print)

Email: [email protected]

All published articles are preserved permanently at Portico:
https://www.portico.org/publishers/icck/