NGINX Modern Apps > Class 16 - Beyond Models: A Practical Guide to Protecting Your AI-Powered Apps > Protecting the ChatBot with AI Gateway Source | Edit on
LLM07: System Prompt Leakage¶
Definition: The system prompt leakage vulnerability in LLMs refers to the risk that the system prompts or instructions used to steer the behavior of the model can also contain sensitive information that was not intended to be discovered. System prompts are designed to guide the model’s output based on the requirements of the application, but may inadvertently contain secrets. When discovered, this information can be used to facilitate other attacks.
Attack¶
The system prompt is defining to the LLM how to behave and what capabilities it has. If an attacker get’s a hold of this information it has a similar impact if it would get the source code of any proprietary application.
Let’s see how we can attack our ChatBot using this attack vector.
Start a new conversation and paste the bellow prompt.

Protect¶
We will now configure the AI Gateway to protect the AI Assistant by using again prompt-injection
processor.
In the UDF Jumphost Web Shell configure the AI Gateway by running the bellow command.
Restart the chat and run the attack again.
You will see that this time AI Gateway is blocking it.
Inspect the AI Gateway logs. You will see similar logs as bellow. The processor has blocked the request with a prompt injection confidence level of confidence:0.9920624494552612
2025/01/12 11:35:25 INFO running processor name=prompt-injection 2025/01/12 11:35:25 INFO processor error response name=prompt-injection metadata="&{RequestID:88e718031ae9605df12a5b9be89b34dd StepID:01945a4c-1df0-7351-8c2b-8da3f8c832f4 ProcessorID:f5:prompt-injection ProcessorVersion:v1 Result:map[confidence:0.9920624494552612 detected:true rejection_reason:Possible Prompt Injection detected] Tags:map[attacks-detected:[prompt-injection]]}" 2025/01/12 11:35:25 ERROR failed to executeStages: failed to chain.Process for stage protect: failed to runProcessor: processor prompt-injection returned error: external processor returned 422 with rejection_reason: Possible Prompt Injection detected