---
title: n8n Self-Hosted AI Starter Kit
emoji: 🚀
colorFrom: green
colorTo: blue
sdk: docker
app_port: 5678
pinned: true
# Add hardware requests if needed, e.g. for Ollama with larger models:
# hardware: cpu-upgrade  # or gpu-small / gpu-a10g-small, etc.
# For GPU, Ollama should pick it up if the drivers are compatible.
# This Dockerfile is built for CPU by default for wider compatibility.
persistent_storage:
  # Request persistent storage; adjust the size as needed for
  # n8n data, the Postgres database, Qdrant vectors, and Ollama models.
  # Example: 10 GiB for models and data.
  data:
    type: "disk"
    size: 10 # GiB
    mount_path: "/data" # Where the volume is mounted in the container
---

# n8n Self-Hosted AI Starter Kit on Hugging Face Spaces

This Space deploys the [n8n Self-Hosted AI Starter Kit](https://github.com/n8n-io/self-hosted-ai-starter-kit) in a single-container environment. It includes:

* **n8n**: low-code automation platform
* **Ollama**: runs local LLMs
* **Qdrant**: vector database
* **PostgreSQL**: database for n8n

## Setup

1. **Secrets**:
   Before this Space can run correctly, you **must** configure the following secrets in your Space settings:
   * `N8N_ENCRYPTION_KEY`: a long, random string for encrypting sensitive n8n data. Generate one with `openssl rand -hex 32`.
   * `DB_POSTGRESDB_PASSWORD`: the password for the PostgreSQL `n8n` user and database. Choose a strong password.

2. **Wait for the Build**: The first time, the Docker image builds and the services start, which can take a few minutes. The `start.sh` script also pulls a default Ollama model (`llama3.1:8b` or a similar small model), which can take additional time.

3. **Access n8n**: Once the Space is running, click the "Open app" button or use the direct URL to access n8n on port 5678.
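
As a quick sketch (assuming a local shell with OpenSSL available), suitable values for both secrets can be generated before pasting them into the Space settings:

```shell
# N8N_ENCRYPTION_KEY: 32 random bytes, printed as 64 hex characters
openssl rand -hex 32

# DB_POSTGRESDB_PASSWORD: 24 random bytes, base64-encoded
# (any sufficiently long random string works)
openssl rand -base64 24
```

Paste each value into the matching secret in the Space settings before the first start.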

## How to Use

1. **n8n First-Time Setup**: When you first access n8n, you'll be prompted to create an owner account.
2. **Local Services**:
   * **Ollama**: running on `http://localhost:11434` (internally within the Space). The default model (`llama3.1:8b` or similar) should be available. You can use the "Ollama" node in n8n and set its "Base URL" to `http://localhost:11434`.
   * **Qdrant**: running on `http://localhost:6333` (internally). Use the "Qdrant" node in n8n and set its "URL" to `http://localhost:6333`.
   * **PostgreSQL**: n8n is already configured to use the internal PostgreSQL instance.
3. **Persistent Storage**:
   * n8n workflows and credentials are stored in `/data/.n8n`.
   * PostgreSQL data is stored in `/data/postgres`.
   * Qdrant vector data is stored in `/data/qdrant_storage`.
   * Ollama models are stored in `/data/.ollama`.

   This data will persist across Space restarts if you've configured persistent storage for `/data`.
4. **Accessing Local Files in n8n**:
   The original starter kit mentions a `/data/shared` directory for n8n to access local files. This directory is available in this Space. When using n8n nodes like "Read/Write Files from Disk", use paths like `/data/shared/yourfile.txt`.
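
For debugging, the internal services above can be checked against their standard REST endpoints from a shell inside the container (a sketch; the services may still be starting up, hence the fallback messages):

```shell
# Ollama's REST API: list the models it has pulled
curl -sf http://localhost:11434/api/tags || echo "Ollama not reachable yet"

# Qdrant's REST API: list collections
curl -sf http://localhost:6333/collections || echo "Qdrant not reachable yet"
```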

## Important Notes & Limitations

* **Single Container**: All services (n8n, Postgres, Qdrant, Ollama) run in a single Docker container. This is less robust than a multi-container `docker-compose` setup but necessary for this Hugging Face Space deployment model.
* **Resource Usage**: Running all these services, especially Ollama with LLMs, can be resource-intensive. Ensure your Space has adequate CPU, RAM, and storage, and consider upgrading the hardware if performance is slow.
* **Ollama Model Management**:
  * A default model is pulled on startup.
  * To pull other models, you would typically exec into the container (possible via the `hf` CLI or other tools, but not directly via the Space UI) and run `ollama pull <modelname>`. Alternatively, some n8n Ollama nodes might trigger a download if the model isn't present (behavior may vary).
* **Initial Startup Time**: The first startup might be slow due to database initialization and the model download. Check the logs for progress.
* **WEBHOOK_URL / N8N_EDITOR_BASE_URL**: These are automatically configured by the `start.sh` script to use the Hugging Face Space URL.
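
As an illustration of that last point, the URL derivation can look roughly like the following (a sketch, not the actual `start.sh`; `SPACE_HOST` is the hostname environment variable Hugging Face Spaces injects at runtime, and the fallback value here is a made-up placeholder):

```shell
# Derive the public n8n URLs from the Space hostname.
# SPACE_HOST is provided by Hugging Face Spaces; the fallback below
# is a hypothetical placeholder for running outside a Space.
SPACE_HOST="${SPACE_HOST:-your-username-your-space.hf.space}"

export WEBHOOK_URL="https://${SPACE_HOST}/"
export N8N_EDITOR_BASE_URL="https://${SPACE_HOST}/"
```

With these set before n8n starts, webhook nodes advertise the public Space URL instead of `localhost`.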

## Troubleshooting

* Check the Space logs for any errors during startup or operation.
* Ensure the secrets are correctly set.
* If n8n is inaccessible, one of the backend services might have failed to start.