
PDS on a pi
installing a pds on my raspberry pi; Photo by Ritam Baishya on Unsplash
I want to self-host my own pds for atproto, and I wanna run it on my own hardware in my room, since that feels fun and whimsical. I’ve got a pi 5 sat on my floor which I’m going to use for this, and then I’ll route it out of my vps using tailscale and caddy. Currently the pi doesn’t have ethernet or constant power (I attached a screen to it and forgot how to take it off, so I turn it off at night lmfao), so I’m going to create a test account (at://test.vielle.dev) and use that for now. This’ll also be a chance to create a did:web account to mess with.
Note: I’m writing this blog post while I do it, so it’s going to be messy.
The plan is to have all my services running on docker compose, which can then be accessed via tailscale on the pi host. That goes into tailscale on the vps, which goes into caddy, which goes out to the net. A little complex, but it should work. Here’s a graph:
pi (docker compose) → tailscale → vps (tailscale) → caddy → the internet
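Before wiring anything up, a quick sanity check of the tailscale leg of that path looks something like this (just a sketch; it assumes both machines are already joined to the same tailnet and the pi’s tailscale hostname is pi):
# on the vps: confirm the pi is reachable over the tailnet
tailscale status | grep pi    # should list the pi with its 100.x.y.z address
tailscale ping pi             # round-trips a packet over tailscale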
1. Get a dummy service on the pi
I’m gonna get a dummy ping/pong HTTP thing on the pi which I can use to test forwarding. Presumably if this works, then the PDS will work too when I add it.
ping-pong/main.ts
// using Deno
// HTTP server on 0.0.0.0:8000 which replies with "Hello! " plus the requested URL
Deno.serve({ port: 8000, hostname: "0.0.0.0" }, (req) => {
  return new Response("Hello! " + req.url);
});
ping-pong/Dockerfile
FROM denoland/deno:latest
WORKDIR /app
COPY . .
CMD ["deno", "run", "--allow-net", "main.ts"]
compose.yml
services:
  pingpong:
    build: ./ping-pong
    restart: unless-stopped
    ports:
      - 8000:8000
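To check the dummy service actually responds end to end, something like this should do it (assuming the pi is reachable as pi over the tailnet):
# on the pi: build and start the container
docker compose up --build -d pingpong
# on the vps: hit it over the tailnet
curl http://pi:8000/    # expect: Hello! http://pi:8000/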
It works, so now I’m going to add a caddy rule to route port 8000 of my pi (http://pi:8000) to https://pds.vielle.dev:443.
2. Routing traffic from vps -> pi
All I need to do is add this code to my Caddyfile and run it locally to test.
pds.{$HOST:localhost} {
  reverse_proxy pi:8000
}
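To try that locally before touching the vps, something along these lines should work (a sketch; it assumes caddy is installed on the machine and HOST is unset, so the site falls back to pds.localhost):
# run caddy in the foreground with the updated Caddyfile
caddy run --config ./Caddyfile
# in another terminal; -k because the locally-issued cert won't be trusted
curl -k https://pds.localhost/    # should proxy through to the pi's port 8000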
Once I’m ready to deploy, this SHOULD just be a matter of pushing the commit to master and assuming my shitty cicd works it’ll be deployed, along with this post!
3. Installing the pds in the container
For this one, we’re gonna look at the pds setup from https://github.com/bluesky-social/pds/. This has 3 parts:
- A caddy container
- The pds
- Watchtower
3.1. Caddyfile
Starting with the caddy container, let’s make sure our vps isn’t missing anything from the reference caddy config.
The config is below, found in the pds repo:
{
  email ${PDS_ADMIN_EMAIL}

  on_demand_tls {
    ask http://localhost:3000/tls-check
  }
}

*.${PDS_HOSTNAME}, ${PDS_HOSTNAME} {
  tls {
    on_demand
  }

  reverse_proxy http://localhost:3000
}
This uses the global config for two things: an email address, which the install script substitutes in further up (I’ll configure it with environment variables instead), and on-demand TLS, where the ask endpoint lives on the pds. That means the pds gets to decide which subdomains receive certs, with any level of wildcarding. I don’t plan on using on-demand TLS for anything else on my site right now, but since the ask endpoint can’t be scoped per-site, I might need to write a custom handler myself.
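For context, caddy’s on-demand ask check is just an HTTP GET with a domain query parameter, and it only issues a cert if the endpoint answers with a 2xx. Once the pds is sitting behind pi:8000 you can poke it directly to see what it would approve (hypothetical domain values):
# simulate caddy's on-demand tls "ask" request
curl -i "http://pi:8000/tls-check?domain=pds.vielle.dev"     # 2xx: caddy will issue a cert
curl -i "http://pi:8000/tls-check?domain=not-my-domain.com"  # non-2xx: caddy refuses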
Anyway, here’s the updated Caddyfile config:
{
  email {$PDS_ADMIN_EMAIL:404@vielle.dev}

  on_demand_tls {
    ask pi:8000/tls-check
  }
}

# ...

*.pds.{$HOST:localhost}, pds.{$HOST:localhost} {
  tls {
    on_demand
  }

  reverse_proxy pi:8000
}
This refuses to connect, but I’m going to assume it’s because the on-demand TLS is failing, so there’s just no cert I can tell Firefox to accept.
3.2. Watchtower
Googling it, Watchtower seems to be a tool that automatically updates running containers when their images change.
I’m just going to add it straight into the compose file lmao
services:
  pingpong:
    build: ./ping-pong
    restart: unless-stopped
    ports:
      - 8000:8000
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower:latest
    network_mode: host
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
    restart: unless-stopped
    environment:
      WATCHTOWER_CLEANUP: true
      WATCHTOWER_SCHEDULE: "@midnight"
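Not from the docs, but a quick way to confirm it’s actually running and polling is to just tail its logs:
docker compose up -d watchtower
docker compose logs --tail 20 watchtower    # should mention the cleanup setting and the next scheduled run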
3.3. The PDS
The reference compose file configures the pds like this:
pds:
  container_name: pds
  image: ghcr.io/bluesky-social/pds:0.4
  network_mode: host
  restart: unless-stopped
  volumes:
    - type: bind
      source: /pds
      target: /pds
  env_file:
    - /pds/pds.env
This is self-explanatory; we make a pds directory, bind it to /pds, and make sure there’s at least a pds.env file in there, which is the only file we need, based on installer.sh.
In installer.sh, they generate the env file like so:
cat <<PDS_CONFIG >"${PDS_DATADIR}/pds.env"
PDS_HOSTNAME=${PDS_HOSTNAME}
PDS_JWT_SECRET=$(eval "${GENERATE_SECURE_SECRET_CMD}")
PDS_ADMIN_PASSWORD=${PDS_ADMIN_PASSWORD}
PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX=$(eval "${GENERATE_K256_PRIVATE_KEY_CMD}")
PDS_DATA_DIRECTORY=${PDS_DATADIR}
PDS_BLOBSTORE_DISK_LOCATION=${PDS_DATADIR}/blocks
PDS_BLOB_UPLOAD_LIMIT=52428800
PDS_DID_PLC_URL=${PDS_DID_PLC_URL}
PDS_BSKY_APP_VIEW_URL=${PDS_BSKY_APP_VIEW_URL}
PDS_BSKY_APP_VIEW_DID=${PDS_BSKY_APP_VIEW_DID}
PDS_REPORT_SERVICE_URL=${PDS_REPORT_SERVICE_URL}
PDS_REPORT_SERVICE_DID=${PDS_REPORT_SERVICE_DID}
PDS_CRAWLERS=${PDS_CRAWLERS}
LOG_ENABLED=true
PDS_CONFIG
Let’s write our own pds.env file first:
- PDS_HOSTNAME is just pds.vielle.dev
- PDS_JWT_SECRET is just openssl rand --hex 16
- PDS_ADMIN_PASSWORD is also just openssl rand --hex 16, but must be different
- PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX is openssl ecparam --name secp256k1 --genkey --noout --outform DER | tail --bytes=+8 | head --bytes=32 | xxd --plain --cols 32
- PDS_DATA_DIRECTORY is PDS_DATADIR, which is currently hardcoded to /pds in the shell script, so we can hardcode it too
- PDS_BLOBSTORE_DISK_LOCATION is PDS_DATADIR/blocks, so /pds/blocks
- PDS_BLOB_UPLOAD_LIMIT is set to 52428800
- PDS_DID_PLC_URL is https://plc.directory
- PDS_BSKY_APP_VIEW_URL is https://api.bsky.app
- PDS_BSKY_APP_VIEW_DID is did:web:api.bsky.app
- PDS_REPORT_SERVICE_URL is https://mod.bsky.app
- PDS_REPORT_SERVICE_DID is did:plc:ar7c4by46qjdydhdevvrndac
- PDS_CRAWLERS is https://bsky.network, but I’ll expand this to include things like https://atproto.africa/ & bad example’s EU relays
- LOG_ENABLED is true
So to generate all this, I’m just going to run
mkdir ./pds
cat <<EOF > ./pds/pds.env
PDS_HOSTNAME=pds.vielle.dev
PDS_JWT_SECRET=$(eval "openssl rand --hex 16")
PDS_ADMIN_PASSWORD=$(eval "openssl rand --hex 16")
PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX=$(eval "openssl ecparam --name secp256k1 --genkey --noout --outform DER | tail --bytes=+8 | head --bytes=32 | xxd --plain --cols 32")
PDS_DATA_DIRECTORY=./pds # edit: change this to /pds (see below)
PDS_BLOBSTORE_DISK_LOCATION=./pds/blocks # edit: change this to /pds/blocks (see below)
PDS_BLOB_UPLOAD_LIMIT=52428800
PDS_DID_PLC_URL=https://plc.directory
PDS_BSKY_APP_VIEW_URL=https://api.bsky.app
PDS_BSKY_APP_VIEW_DID=did:web:api.bsky.app
PDS_REPORT_SERVICE_URL=https://mod.bsky.app
PDS_REPORT_SERVICE_DID=did:plc:ar7c4by46qjdydhdevvrndac
PDS_CRAWLERS=https://bsky.network,https://atproto.africa,https://relay1.us-east.bsky.network,https://relay.fire.hose.cam,https://relay3.fr.hose.cam,https://relay.hayescmd.net,https://relay.xero.systems
LOG_ENABLED=true
EOF
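Before pointing the container at this, it’s worth eyeballing the generated file; in particular the rotation key should be exactly 64 hex characters (my own quick check, not part of the official installer):
# make sure the plc rotation key actually got generated (64 hex chars)
grep -E '^PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX=[0-9a-f]{64}$' ./pds/pds.env \
  && echo "rotation key looks fine" \
  || echo "rotation key is missing or malformed"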
Pretty much everything’s set up for a pds now; the last step is to add it to the compose file and it should all be online!
pds:
  container_name: pds
  image: ghcr.io/bluesky-social/pds:0.4
  restart: unless-stopped
  # removed network_mode: host since it should still work without it,
  # and instead bound port 3000 of the container to port 8000 on the host
  ports:
    - 8000:3000
  volumes:
    - type: bind
      # source is relative
      source: ./pds
      target: /pds
  # env file path is relative
  env_file:
    - ./pds/pds.env
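With that in place, bringing it up and poking the health endpoint is enough to see whether it’s happy (iirc /xrpc/_health is the same path the official installer checks; here I’m hitting it over the tailnet):
docker compose up -d pds
docker compose logs --tail 50 pds    # config errors show up here
curl http://pi:8000/xrpc/_health     # should return a small json blob with the pds version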
Ok. So. Seems I didn’t configure it correctly: Error: Must configure plc rotation key. I was missing the xxd command and somehow missed that? A simple sudo apt-get install xxd should fix things. Running the env generation again gives an error that the directory for the database doesn’t exist!! How fun!!
The issue was that I had ./pds in the environment variables (which are read inside the container), while the container uses /pds as the data dir. Simple fix, and now it all works!!
Next steps:
Deploy this post + the caddy changes, create a test account or two (rough sketch below), find a permanent setup for the pi, and set up regular backups of the pds; then migrate to it myself!!
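For the account step, the rough shape of it via the admin API is below (a sketch: the handle, email and password are placeholders, and PDS_ADMIN_PASSWORD is the one that was generated into pds.env; needs jq):
# pull the admin password out of the env file
PDS_ADMIN_PASSWORD=$(grep '^PDS_ADMIN_PASSWORD=' ./pds/pds.env | cut -d= -f2)

# 1. mint an invite code via the admin basic-auth endpoint
INVITE=$(curl --fail --silent --request POST \
  --user "admin:${PDS_ADMIN_PASSWORD}" \
  --header "Content-Type: application/json" \
  --data '{"useCount": 1}' \
  "https://pds.vielle.dev/xrpc/com.atproto.server.createInviteCode" | jq --raw-output '.code')

# 2. create the account with that invite code
curl --fail --silent --request POST \
  --header "Content-Type: application/json" \
  --data "{\"email\": \"me@vielle.dev\", \"handle\": \"example.pds.vielle.dev\", \"password\": \"changeme\", \"inviteCode\": \"${INVITE}\"}" \
  "https://pds.vielle.dev/xrpc/com.atproto.server.createAccount"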
Update (17/09/25)
Required changes to get this properly working:
- Tailscale doesn’t let you do subdomains, so I needed to do some rewrites on the URL + modify the default image to allow for handle resolution (see: @vielle.dev/server-config and @vielle.dev/pi-config)
- Tailscale MagicDNS is slow, so you need to use the 100.x.y.z IP addresses directly
- Probably smart to up the timeout for the reverse proxy (I did 3s -> 5s)
- You need to symlink ./pds to /pds to make pdsadmin.sh work properly (see the sketch after this list)
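Concretely, the IP and symlink bits look roughly like this on the pi (a sketch; it assumes the compose project lives in the current directory):
tailscale ip -4                  # the pi's 100.x.y.z address, for the vps-side config
sudo ln -s "$(pwd)/pds" /pds     # so pdsadmin's hardcoded /pds paths line up with ./pds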
Other notes:
- Handles like test.{domain} don’t work because of a reserved handle list which is hardcoded in the official pds package. You need to patch or fork it to override it, afaict.
- Configure PDS_PRIMARY_COLOR for custom theming (ex: PDS_PRIMARY_COLOR="#008282") and PDS_OAUTH_PROVIDER_NAME for a custom name somewhere, idk where (sketch below)
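If you do want the theming, the simplest thing is probably just appending those to the env file and restarting the container (the colour is the example from above; the provider name is made up):
cat <<EOF >> ./pds/pds.env
PDS_PRIMARY_COLOR="#008282"
PDS_OAUTH_PROVIDER_NAME="vielle's pds"
EOF
docker compose restart pds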