Frontiers in Biomedical Signal Processing, Volume 1, Issue 2, 2026: 79-104

Open Access | Research Article | 02 March 2026
Adaptive Manifold Concept with Regularized Autoencoders (AMRAE) for Effective Dimensionality Reduction
Seyed-Ali Sadegh-Zadeh, Nasrin Sadeghzadeh, Saeed Shiry Ghidary, Sobhan Movahedi, Kaveh Kavianpour, Mohammad Amin Barati, Alireza Soleimani Mamalo
1 Department of Computing, School of Digital, Technologies and Arts, Staffordshire University, Stoke-on-Trent ST4 2DE, United Kingdom
2 Department of Mathematics, Faculty of Basic Sciences, University of Qom, Qom 37161-46611, Iran
3 Science and Research Branch, Islamic Azad University, Tehran, Iran
4 Department of Mathematics and Computer Science, Amirkabir University of Technology, Tehran 15875-4413, Iran
5 Department of Mechanical Engineering, School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran 14179-35840, Iran
6 Student Research Committee, Urmia University of Medical Sciences, Urmia, Iran
* Corresponding Author: Alireza Soleimani Mamalo, [email protected]
ARK: ark:/57805/fbsp.2025.800185
Received: 31 March 2025, Accepted: 07 July 2025, Published: 02 March 2026  
Abstract
The Adaptive Manifold Concept with Regularized Autoencoders (AMRAE) algorithm is introduced as a novel dimensionality reduction technique that integrates manifold learning with autoencoders to capture the intrinsic geometry of high-dimensional data effectively. The study evaluates the impact of various adjustments and enhancements within the "Manifold Adjustment Box", including different regularization techniques, activation functions, and architectural choices, across diverse datasets. Key findings demonstrate that configurations such as Leaky ReLU Activation and Batch Norm Layer consistently improve accuracy; these results highlight the flexibility and robustness of AMRAE. The findings underscore the significance of AMRAE in addressing the limitations of traditional dimensionality reduction methods, with potential applications in fields requiring high-dimensional data analysis. The paper summarizes the objectives, methodology, results, and conclusions, showcasing AMRAE's efficacy in improving data representation and machine learning performance.
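To make the architectural vocabulary of the abstract concrete, the sketch below shows the general pattern AMRAE builds on: an encoder/decoder pair with a Leaky ReLU activation, a batch-normalization step, and an L2 weight penalty added to the reconstruction loss. This is a minimal illustrative assumption, not the authors' implementation; the layer sizes, the penalty weight `lam`, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: pass positives through, scale negatives by alpha.
    return np.where(x > 0, x, alpha * x)

def batch_norm(x, eps=1e-5):
    # Normalize each feature over the batch (inference-style,
    # without learned scale/shift parameters).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def init_layer(n_in, n_out):
    # He-style initialization for the weight matrix, zero bias.
    return rng.normal(0, np.sqrt(2.0 / n_in), (n_in, n_out)), np.zeros(n_out)

# Illustrative sizes: 20-dimensional input compressed to a 2-dimensional latent code.
W_enc, b_enc = init_layer(20, 2)
W_dec, b_dec = init_layer(2, 20)

def encode(X):
    return leaky_relu(batch_norm(X @ W_enc + b_enc))

def decode(Z):
    return Z @ W_dec + b_dec

def regularized_loss(X, lam=1e-3):
    # Reconstruction error plus an L2 penalty on the weights --
    # the "regularized autoencoder" objective in its simplest form.
    X_hat = decode(encode(X))
    recon = np.mean((X - X_hat) ** 2)
    penalty = lam * (np.sum(W_enc ** 2) + np.sum(W_dec ** 2))
    return recon + penalty

X = rng.normal(size=(64, 20))
Z = encode(X)
print(Z.shape)                    # latent representation: (64, 2)
print(regularized_loss(X) > 0)    # prints True
```

Training such a model (the gradient updates, and the manifold-geometry terms that distinguish AMRAE) is omitted; the point is only how the activation, normalization, and regularization choices evaluated in the paper slot into an autoencoder objective.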

Graphical Abstract
Adaptive Manifold Concept with Regularized Autoencoders (AMRAE) for Effective Dimensionality Reduction

Keywords
dimensionality reduction
manifold concept regularized autoencoders
machine learning
high-dimensional data

Data Availability Statement
Data will be made available on request.

Funding
This work received no external funding.

Conflicts of Interest
The authors declare no conflicts of interest.

AI Use Statement
The authors declare that no generative AI was used in the preparation of this manuscript.

Ethical Approval and Consent to Participate
Not applicable.


Cite This Article
APA Style
Sadegh-Zadeh, S. A., Sadeghzadeh, N., Ghidary, S. S., Movahedi, S., Kavianpour, K., Barati, M. A., & Mamalo, A. S. (2026). Adaptive Manifold Concept with Regularized Autoencoders (AMRAE) for Effective Dimensionality Reduction. Frontiers in Biomedical Signal Processing, 1(2), 79–104. https://doi.org/10.62762/FBSP.2025.800185
BibTeX Format
@article{SadeghZadeh2026Adaptive,
  author = {Seyed-Ali Sadegh-Zadeh and Nasrin Sadeghzadeh and Saeed Shiry Ghidary and Sobhan Movahedi and Kaveh Kavianpour and Mohammad Amin Barati and Alireza Soleimani Mamalo},
  title = {Adaptive Manifold Concept with Regularized Autoencoders (AMRAE) for Effective Dimensionality Reduction},
  journal = {Frontiers in Biomedical Signal Processing},
  year = {2026},
  volume = {1},
  number = {2},
  pages = {79-104},
  doi = {10.62762/FBSP.2025.800185},
  url = {https://www.icck.org/article/abs/FBSP.2025.800185},
  keywords = {dimensionality reduction, manifold concept regularized autoencoders, machine learning, high-dimensional data},
  issn = {request pending},
  publisher = {Institute of Central Computation and Knowledge}
}


Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
CC BY Copyright © 2026 by the Author(s). Published by Institute of Central Computation and Knowledge. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
ISSN: request pending (Online) | ISSN: request pending (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/icck/