Next-Generation Computing Systems and Technologies, Volume 1, Issue 2, 2025: 91-101

Open Access | Research Article | 21 December 2025
A Framework for Secure and Interoperable Clinical Summarization Using the Model Context Protocol: Integrating MIMIC-III and FHIR with TinyLlama
Chandra Shekhar Behera 1 and Swarupananda Bissoyi 1,*
1 Department of Computer Applications, Maharaja Sriram Chandra Bhanja Deo University, Baripada 757003, India
* Corresponding Author: Swarupananda Bissoyi, [email protected]
ARK: ark:/57805/ngcst.2025.784852
Received: 04 October 2025, Accepted: 05 November 2025, Published: 21 December 2025  
Abstract
This research presents a new framework for clinical summarization that combines the TinyLlama model with MIMIC-III and FHIR data using the Model Context Protocol (MCP). Unlike cloud-based models such as Med-PaLM, our approach uses local processing to cut costs and protect patient data with AES-256 encryption and strict access controls, meeting HIPAA and GDPR standards. It retrieves FHIR-compliant data from public servers (e.g., hapi.fhir.org) for interoperability across hospital systems. Tested on discharge summaries, it achieves ROUGE-L F1 scores of 0.96 for MIMIC-III and 0.84 for FHIR, beating baselines such as BioBERT (0.61, p < 0.001) due to efficient preprocessing and MCP's accurate data grounding. ROUGE, BLEU, and BERTScore metrics, along with visualizations, confirm its reliability. The entire pipeline code is available at https://github.com/shekhar-ai99/clinical-mcp for transparency and reproducibility.
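The abstract mentions retrieving FHIR-compliant data from public servers such as hapi.fhir.org. As a rough illustrative sketch (not the authors' actual pipeline; the Bundle content below is invented for illustration), this is roughly what extracting patient fields from a FHIR R4 searchset Bundle, such as one returned by `GET https://hapi.fhir.org/baseR4/Patient?_count=1`, can look like in Python:

```python
import json

# A minimal FHIR R4 searchset Bundle of the shape returned by public test
# servers; the resource content here is invented for illustration.
bundle_json = """
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Patient",
                  "id": "example",
                  "name": [{"family": "Doe", "given": ["Jane"]}],
                  "birthDate": "1980-01-01"}}
  ]
}
"""

def patients_from_bundle(raw: str):
    """Extract (id, display name) pairs from a FHIR searchset Bundle."""
    bundle = json.loads(raw)
    out = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") != "Patient":
            continue  # a searchset may interleave other resource types
        first = (res.get("name") or [{}])[0]
        display = " ".join(first.get("given", []) + [first.get("family", "")]).strip()
        out.append((res.get("id"), display))
    return out
```

A real client would also follow pagination by requesting the Bundle's `link` entry whose `relation` is `"next"` until the result set is exhausted.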
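The reported ROUGE-L F1 scores are derived from the longest common subsequence (LCS) between a generated summary and its reference. Published results presumably come from a standard package (e.g., the Python `rouge-score` library); the following self-contained sketch, which skips stemming and tokenization refinements, shows the underlying computation:

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    # Precision = LCS / candidate length; recall = LCS / reference length.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return 2 * p * rec / (p + rec)
```

An identical candidate and reference yield an F1 of 1.0; extra or missing tokens lower precision or recall respectively.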

Graphical Abstract

Keywords
model context protocol
tinyLlama
clinical summarization
FHIR
interoperability
MIMIC-III
healthcare AI
data security
ROUGE
BERTScore

Data Availability Statement
The source code for this study is publicly available at https://github.com/shekhar-ai99/clinical-mcp (accessed on 20 December 2025).

Funding
This work received no external funding.

Conflicts of Interest
The authors declare no conflicts of interest.

Ethical Approval and Consent to Participate
This work uses publicly available, de-identified datasets; their use does not constitute human subjects research requiring additional ethical approval.

References
  1. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
  2. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
  3. Johnson, A. E., Pollard, T. J., Shen, L., Lehman, L. W. H., Feng, M., Ghassemi, M., ... & Mark, R. G. (2016). MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1), 1–9.
  4. Anthropic. (n.d.). Introducing the Model Context Protocol. Retrieved from https://www.anthropic.com/news/model-context-protocol (accessed on 20 December 2025).
  5. Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., & Kang, J. (2020). BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), 1234–1240.
  6. Gu, Y., Tinn, R., Cheng, H., Lucas, M., Usuyama, N., Liu, X., ... & Poon, H. (2021). Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare, 3(1), 1–23.
  7. Singhal, K., Azizi, S., Tu, T., Mahdavi, S. S., Wei, J., Chung, H. W., ... & Natarajan, V. (2023). Large language models encode clinical knowledge. Nature, 620(7972), 172–180.
  8. Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., ... & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. In Proceedings of the 34th International Conference on Neural Information Processing Systems (pp. 9459–9474).
  9. Makary, M. A., & Daniel, M. (2016). Medical error—the third leading cause of death in the US. BMJ, 353, i2139.
  10. Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 311–318).
  11. Lin, C. Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Proceedings of the Workshop on Text Summarization Branches Out (pp. 74–81).
  12. Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., & Artzi, Y. (2019). BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.
  13. Bender, D., & Sartipi, K. (2013). HL7 FHIR: An agile and RESTful approach to healthcare information exchange. In 2013 IEEE 26th International Symposium on Computer-Based Medical Systems (pp. 326–331). IEEE.
  14. Lehne, M., Sass, J., Essenwanger, A., Schepers, J., & Thun, S. (2019). Why digital medicine depends on interoperability. NPJ Digital Medicine, 2(1), 79.
  15. Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. New England Journal of Medicine, 380(14), 1347–1358.
  16. Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56.
  17. Esteva, A., Robicquet, A., Ramsundar, B., Kuleshov, V., DePristo, M., Chou, K., ... & Dean, J. (2019). A guide to deep learning in healthcare. Nature Medicine, 25(1), 24–29.
  18. Hinton, G. (2018). Deep learning—A technology with the potential to transform health care. JAMA, 320(11), 1101–1102.
  19. Li, Z., Roberts, K., Jiang, X., & Long, Q. (2019). Distributed learning from multiple EHR databases: Contextual embedding models for medical events. Journal of Biomedical Informatics, 92, 103138.
  20. Zhang, P., Zeng, G., Wang, T., & Lu, W. (2024). TinyLlama: An open-source small language model. arXiv preprint arXiv:2401.02385.
  21. Singhal, K., Tu, T., Gottweis, J., Sayres, R., Wulczyn, E., Amin, M., ... & Natarajan, V. (2025). Toward expert-level medical question answering with large language models. Nature Medicine, 31(3), 943–950.
  22. Wang, G., Yang, G., Du, Z., Fan, L., & Li, X. (2023). ClinicalGPT: Large language models finetuned with diverse medical data and comprehensive evaluation. arXiv preprint arXiv:2306.09968.
  23. Luo, R., Sun, L., Xia, Y., Qin, T., Zhang, S., Poon, H., & Liu, T. Y. (2022). BioGPT: Generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, 23(6), bbac409.
  24. Jiang, L. Y., Liu, X. C., Nejatian, N. P., Nasir-Moin, M., Wang, D., Abidin, A., ... & Oermann, E. K. (2023). Health system-scale language models are all-purpose prediction engines. Nature, 619(7969), 357–362.
  25. Li, Y., Wang, H., Yerebakan, H. Z., Shinagawa, Y., & Luo, Y. (2024). FHIR-GPT enhances health interoperability with large language models. NEJM AI, 1(8), AIcs2300301.

Cite This Article
APA Style
Behera, C. S., & Bissoyi, S. (2025). A Framework for Secure and Interoperable Clinical Summarization Using the Model Context Protocol: Integrating MIMIC-III and FHIR with TinyLlama. Next-Generation Computing Systems and Technologies, 1(2), 91–101. https://doi.org/10.62762/NGCST.2025.784852
RIS Format
TY  - JOUR
AU  - Behera, Chandra Shekhar
AU  - Bissoyi, Swarupananda
PY  - 2025
DA  - 2025/12/21
TI  - A Framework for Secure and Interoperable Clinical Summarization Using the Model Context Protocol: Integrating MIMIC-III and FHIR with TinyLlama
JO  - Next-Generation Computing Systems and Technologies
T2  - Next-Generation Computing Systems and Technologies
JF  - Next-Generation Computing Systems and Technologies
VL  - 1
IS  - 2
SP  - 91
EP  - 101
DO  - 10.62762/NGCST.2025.784852
UR  - https://www.icck.org/article/abs/NGCST.2025.784852
KW  - model context protocol
KW  - tinyLlama
KW  - clinical summarization
KW  - FHIR
KW  - interoperability
KW  - MIMIC-III
KW  - healthcare AI
KW  - data security
KW  - ROUGE
KW  - BERTScore
AB  - This research presents a new framework for clinical summarization that combines the TinyLlama model with MIMIC-III and FHIR data using the Model Context Protocol (MCP). Unlike cloud-based models such as Med-PaLM, our approach uses local processing to cut costs and protect patient data with AES-256 encryption and strict access controls, meeting HIPAA and GDPR standards. It retrieves FHIR-compliant data from public servers (e.g., hapi.fhir.org) for interoperability across hospital systems. Tested on discharge summaries, it achieves ROUGE-L F1 scores of 0.96 for MIMIC-III and 0.84 for FHIR, beating baselines such as BioBERT (0.61, p < 0.001) due to efficient preprocessing and MCP's accurate data grounding. ROUGE, BLEU, and BERTScore metrics, along with visualizations, confirm its reliability. The entire pipeline code is available at https://github.com/shekhar-ai99/clinical-mcp for transparency and reproducibility.
SN  - 3070-3328
PB  - Institute of Central Computation and Knowledge
LA  - English
ER  - 
BibTeX Format
@article{Behera2025A,
  author = {Chandra Shekhar Behera and Swarupananda Bissoyi},
  title = {A Framework for Secure and Interoperable Clinical Summarization Using the Model Context Protocol: Integrating MIMIC-III and FHIR with TinyLlama},
  journal = {Next-Generation Computing Systems and Technologies},
  year = {2025},
  volume = {1},
  number = {2},
  pages = {91-101},
  doi = {10.62762/NGCST.2025.784852},
  url = {https://www.icck.org/article/abs/NGCST.2025.784852},
  abstract = {This research presents a new framework for clinical summarization that combines the TinyLlama model with MIMIC-III and FHIR data using the Model Context Protocol (MCP). Unlike cloud-based models like Med-PaLM, our approach uses local processing to cut costs and protect patient data with AES-256 encryption and strict access controls, meeting HIPAA and GDPR standards. It retrieves FHIR-compliant data from public servers (e.g., \texttt{hapi.fhir.org}) for interoperability across hospital systems. Tested on discharge summaries, it achieves ROUGE-L F1 scores of 0.96 for MIMIC-III and 0.84 for FHIR, beating baselines like BioBERT (0.61, p < 0.001) due to efficient preprocessing and MCP’s accurate data grounding. ROUGE, BLEU and BERTScore metrics, along with visualizations, confirm its reliability. The entire pipeline code is available at \url{https://github.com/shekhar-ai99/clinical-mcp} for transparency and reproducibility.},
  keywords = {model context protocol, tinyLlama, clinical summarization, FHIR, interoperability, MIMIC-III, healthcare AI, data security, ROUGE, BERTScore},
  issn = {3070-3328},
  publisher = {Institute of Central Computation and Knowledge}
}


Publisher's Note
ICCK stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and Permissions
CC BY Copyright © 2025 by the Author(s). Published by Institute of Central Computation and Knowledge. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
Next-Generation Computing Systems and Technologies

ISSN: 3070-3328 (Online)

Email: [email protected]

All published articles are preserved permanently in Portico:
https://www.portico.org/publishers/icck/