Impact Factor
7.883
Call For Paper
Volume: 12 Issue 03 March 2026
LICENSE
Securing Genertaive Si Systems:prompt Injection Attacks-a Review
-
Author(s):
Sakshi Lokhande | Arvind Gautam
-
Keywords:
Generative AI, Large Language Models, Prompt Injection, Blockchain, Zero-Trust Architecture.
-
Abstract:
Amid The Fast Uptake Of Large Language Models Like ChatGPT, Llama, And DeepSeek In Education,healthcare, And Finance Sectors Among Others, New Security Vulnerabilities Have Emerged. One Of The Most Critical Amongthese Is The Prompt Injection Attack, Which Consists Of Inserting Malicious Commands In User Input To Alter The Modelbehavior. Prompt Injection Is A Form Of Attack Where Attackers Exploit AI Models By Injecting Malicious Inputs That Causethem To Behave Abnormally Or Exfiltrate Sensitive Information. In This Article, We Analyze The Characteristics Of Promptinjection Attacks, Investigate Existing Defense Methods, And Introduce A Synergistic Defense System That Leverages Inputsanitization, Prompt Sanitization, Contextual Isolation, Blockchain-based Logging And Auditing, Zero Trust Architecture Andmixed Encodings To Mitigate Threat. This Enhances The Robustness And Guarantees The Security Of LLM Applications Inpractical Application.
Other Details
-
Paper id:
IJSARTV11I10104198
-
Published in:
Volume: 11 Issue: 10 October 2025
-
Publication Date:
2025-10-29
Download Article