
PIPE - Prompt Injection Primer for Engineers
Date : 2023-08-24
Introduction
Prompt injection is the highest profile vulnerability in AI-powered features and applications. It's also one of the most misunderstood. The impact varies greatly depending on who will use the feature, what data is accessible, and what functionality is exposed to the LLM. This guide aims to assist developers in creating secure AI-powered applications and features by helping them understand the actual risks of prompt injection.
PIPE can be read in a dedicated GitHub repo, and a pdf version can be downloaded from there
Recently on :
Artificial Intelligence
Security | Surveillance | Privacy
PITTI - 2026-01-14
Cultural, Ideological and Political Bias in LLMs
Transcription of a talk given during the work sessions organized by Technoréalisme on December 9, 2025, in Paris. The talk pres...
WEB - 2025-11-13
Measuring political bias in Claude
Anthropic gives insights into their evaluation methods to measure political bias in models.
WEB - 2025-10-09
Defining and evaluating political bias in LLMs
OpenAI created a political bias evaluation that mirrors real-world usage to stress-test their models’ ability to remain objecti...
WEB - 2025-07-23
Preventing Woke AI In Federal Government
Citing concerns that ideological agendas like Diversity, Equity, and Inclusion (DEI) are compromising accuracy, this executive ...
WEB - 2025-07-10
America’s AI Action Plan
To win the global race for technological dominance, the US outlined a bold national strategy for unleashing innovation, buildin...