Towards a Secure Future: Security Challenges for Deep Learning, AI, and Foundation Models in the Next Decade
Article Information
Abstract
This editorial argues that as deep learning and foundation models permeate high-stakes domains, AI security must evolve into a rigorous, end-to-end discipline—prioritizing adaptive robustness, lifecycle integrity, privacy safeguards, and system-level governance—to ensure these powerful systems remain trustworthy under adversarial pressure.
Cite This Article
TY  - JOUR
AU  - Jiang, Donghua
AU  - Ahmad, Jawad
PY  - 2026
DA  - 2026/03/02
TI  - Towards a Secure Future: Security Challenges for Deep Learning, AI, and Foundation Models in the Next Decade
JO  - ICCK Transactions on Information Security and Cryptography
T2  - ICCK Transactions on Information Security and Cryptography
JF  - ICCK Transactions on Information Security and Cryptography
VL  - 2
IS  - 2
SP  - 70
EP  - 72
DO  - 10.62762/TISC.2025.709595
UR  - https://www.icck.org/article/abs/TISC.2025.709595
KW  - AI Security
KW  - Trustworthy AI
AB  - This editorial argues that as deep learning and foundation models permeate high-stakes domains, AI security must evolve into a rigorous, end-to-end discipline—prioritizing adaptive robustness, lifecycle integrity, privacy safeguards, and system-level governance—to ensure these powerful systems remain trustworthy under adversarial pressure.
SN  - 3070-2429
PB  - Institute of Central Computation and Knowledge
LA  - English
ER  -
@article{Jiang2026Towards,
author = {Donghua Jiang and Jawad Ahmad},
title = {Towards a Secure Future: Security Challenges for Deep Learning, AI, and Foundation Models in the Next Decade},
journal = {ICCK Transactions on Information Security and Cryptography},
year = {2026},
volume = {2},
number = {2},
pages = {70--72},
doi = {10.62762/TISC.2025.709595},
url = {https://www.icck.org/article/abs/TISC.2025.709595},
abstract = {This editorial argues that as deep learning and foundation models permeate high-stakes domains, AI security must evolve into a rigorous, end-to-end discipline—prioritizing adaptive robustness, lifecycle integrity, privacy safeguards, and system-level governance—to ensure these powerful systems remain trustworthy under adversarial pressure.},
keywords = {AI Security, Trustworthy AI},
issn = {3070-2429},
publisher = {Institute of Central Computation and Knowledge}
}
Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and Permissions
Copyright © 2026 by the Author(s). Published by Institute of Central Computation and Knowledge. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.