Back to all articles

Grafana resolves indirect prompt injection vulnerability in AI assistant

Security researchers identified a prompt injection vulnerability in Grafana's AI components that could have allowed unauthorized parties to exfiltrate sensitive data. Grafana quickly patched the underlying issue in its Markdown renderer, demonstrating the value of coordinated disclosure in securing AI integrations.

Triage Security Media Team
2 min read

Observability platform Grafana recently resolved a vulnerability that could have allowed unauthorized parties to manipulate its AI capabilities into exposing sensitive data.

Grafana serves as a central hub for compiling and tracking business data, including telemetry, infrastructure health, and financial metrics. Because the platform connects to highly sensitive organizational information, securing its components is a priority for defending business environments.

Security researchers at AI security vendor Noma recently published findings on "GrafanaGhost," an indirect prompt injection vulnerability that could enable a threat actor to exfiltrate data. Noma followed responsible disclosure protocols, and Grafana rapidly patched the core technical issue to protect its users.

Mechanics of the indirect prompt injection

The vulnerability stems from how Grafana's AI components process external information. To evaluate the AI's security boundaries, Noma researchers looked for user-facing areas where indirect prompts are processed. They identified image tags as a viable path for unauthorized instructions.

While Grafana employs protections to prevent external image rendering from untrusted domains, researchers bypassed these safeguards using protocol-relative URLs (which circumvented domain validation) and the keyword "INTENT" (which instructed the AI model to bypass its standard guardrails). By hiding these instructions on a controlled web page, the researchers demonstrated that the AI could ingest the prompt as benign, inadvertently sending requested sensitive data back to an external server as soon as the image began to load.

Sasi Levi, security research lead at Noma Security, noted that this technique does not necessarily require an affected user to click a malicious link.

"[The threat actor needs] to get their indirect prompt stored in a location that Grafana's AI components will later retrieve and process," Levi told Dark Reading. "Once that [injected prompt] is sitting in the data store, it waits and fires automatically when any user performs a normal interaction with their Grafana instance (like browsing entry logs). The user is the unwitting trigger, not the target of a phishing attempt."

Vendor response and remediation

Grafana Labs chief information security officer Joe McManus confirmed that Noma's research identified an issue with Grafana's image renderer in its Markdown component, which the company quickly patched.

However, Grafana and Noma differ on the exact interaction requirements for the vulnerability. McManus stated that the technique was not "zero-click" and could not operate autonomously in the background.

"Any successful execution of this [technique] would have required significant user interaction — specifically, the end user would have to repeatedly instruct our AI assistant to follow malicious instructions contained in logs, even after the AI assistant made the user aware of the malicious instructions," McManus said. He also noted there is no evidence of this bug being used in the wild, and no data was exposed from Grafana Cloud.

In response, Levi maintained that the sequence requires fewer than two steps and that the AI processed the indirect prompt autonomously, interpreting the log content as legitimate context without generating a warning or asking the user to confirm.

Despite the differing perspectives on the execution mechanics, both teams emphasized their shared commitment to user protection. The vulnerability is documented and fully patched, ensuring that Grafana users remain secure against this specific prompt injection technique.