Is it easy to self-host immich so that it operates on a READ-ONLY basis with my images? I really only want to use it for the local-AI indexing/search, but not as a backup or photo management solution (Synology Photos works just fine for that).
I think you can use Immich external libraries for this. To be extra safe, you can mount your external images folder as read-only by adding `:ro` to the docker volume mount, so the container won’t be able to modify anything as a precaution.
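For illustration, a read-only external-library mount in `docker-compose.yml` might look like this (the paths are hypothetical; the in-container path just has to match whatever you register as an external library in Immich’s admin settings):

```yaml
services:
  immich-server:
    # ...rest of the official compose service definition...
    volumes:
      # Immich's own storage stays writable
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      # your existing photo collection, mounted read-only
      - /volume1/photo:/mnt/media/photos:ro
```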
This is what I was thinking, too.
Alright, looks like I’ll be setting it up soon! LOL
Yes, pretty easy - that’s exactly how I use it
You can also try out photoprism for that. Immich is best for an all-in-one solution as a replacement for google photos.
Photoprism also has face recognition, maps, and many more features geared towards photography than immich.
I realized after using photoprism that I am too basic for that haha
I don’t think Photoprism has contextual search. Anyway, immich installed and running on my NAS 🤭
Yes, set the external library bind mount in the docker compose project to `:ro` (read only).

Yes, that’s how I use it. It has access to a read-only bind mount of my photo directory. The ML doesn’t write EXIF data to the images; it just keeps that in its database.
Still requires docker?
If you’re using Arch, the AUR package works well
I’m fully aware of the joy of containers, but I just don’t want all that extra faff
Extra faff?
The additional software required to run it in a container plus its configuration, on top of Immich’s configuration.
Just install & configure Immich, done.
That’s… just… Docker Compose… You copy, paste, `docker-compose up`, and you’re done…
Not on NixOS!
services.immich.enable = true;
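For context, a minimal sketch of what that one-liner expands to on NixOS (assuming a recent release that ships the `services.immich` module; check the option names against `nixos-option` or the manual before relying on them):

```nix
{
  services.immich = {
    enable = true;
    host = "127.0.0.1"; # listen locally only, e.g. behind a reverse proxy
    port = 2283;
  };
}
```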
I’m planning to do it with podman. It’s supposed to be quite easy to convert between the two.
Can confirm, works without problems in rootless podman.
Can you give me some pointers? I’m still new to docker and podman; hoping to get this going without too much learning curve to start with!
Edit: Thanks to InnerScientist’s instructions below, plus some extras, I got it working. A 5-day-old Lemmy comment is probably lost to anyone else who wants help, but just in case, here are some pointers.
- I cross-referenced with the official docker compose file, and two other quadlet examples/guides, neither of which worked on their own for me. I used InnerScientist’s files below as my base. Other guides:
- https://github.com/jbtrystram/immich-podman-systemd
- https://github.com/tbelway/immich-podman-quadlets/blob/main/docs/install/podman-quadlet.md
- Comments need to be on their own line (or deleted), not at the end of the line as below
- The environment file from official docker instructions needs some more lines, including duplicating with correct names for the database container (can you put referenced ones in a quadlet like in the docker compose? I don’t know)
- The env file also needs DB_HOSTNAME and REDIS_HOSTNAME, which for me were `systemd-immich-database` and `systemd-immich-redis` (you can find the names with `podman ps` when they’re running, as later in this thread).
- Podman, to my surprise, bypasses ufw (uncomplicated firewall) just like Docker!
- To restrict access to your computer only, you can `PublishPort=127.0.0.1:2283:2283` as below.
- To open to LAN, `PublishPort=2283:2283`
- If, like me, you’re using a reverse-proxy, but don’t want other LAN access, it’s fairly easy to limit with iptables rules. I use nebula, which puts my vpn within a particular ip range. Script below.
iptables script

```shell
#!/bin/bash
iptables -I INPUT -p tcp --dport 2283 -j DROP
iptables -I INPUT -p tcp --dport 2283 -s 192.168.1.0/24 -j ACCEPT
iptables -I INPUT -p tcp --dport 2283 -s localhost -j ACCEPT
```
That puts each line at the top of iptables, before ufw or anything else. So they end up in reverse order from the script.
- packets to port 2283 from localhost, accept!
- packets to port 2283 from 192.168.1.*, accept!
- packets to port 2283 from anywhere [else], drop!
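Pulling the database pointers above together, the extra lines in the env file might look like this (the two hostnames are the container names from earlier in the list; the other values are the usual placeholders from the official `.env` example, duplicated under the names the postgres container itself expects, so adjust them to your setup):

```shell
# hostnames = container names, as shown by `podman ps`
DB_HOSTNAME=systemd-immich-database
REDIS_HOSTNAME=systemd-immich-redis
# credentials Immich uses to connect
DB_USERNAME=postgres
DB_PASSWORD=change-me
DB_DATABASE_NAME=immich
# same values again, under the names the postgres image reads on first start
POSTGRES_USER=postgres
POSTGRES_PASSWORD=change-me
POSTGRES_DB=immich
```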
Sure, I set it up in nixos though this is the short form of that:
- Install Podman and passt + slirp4netns for networking
- Setup subuid and subgid
```shell
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 johndoe
```
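Rootless containers generally need at least 65,536 subordinate IDs per user, which is exactly what the range above allocates. A quick sanity check of the arithmetic (the `grep` at the end is how you’d inspect what is actually allocated on your system; `johndoe` is just the example user from above):

```shell
#!/bin/sh
START=100000
END=165535
# size of the subordinate ID range allocated by the usermod call above
COUNT=$((END - START + 1))
echo "$COUNT" # 65536, the minimum a typical container image expects
# to see what is actually allocated:
# grep johndoe /etc/subuid /etc/subgid
```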
- I’m using quadlets, so we need to create those: `$HOME/.config/containers/systemd/immich-database.container`
```ini
[Unit]
Description=Immich Database
Requires=immich-redis.service immich-network.service

[Container]
AutoUpdate=registry
EnvironmentFile=${immich-config} # add your environment variables file here
Image=registry.hub.docker.com/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0 # hash from the official docker-compose, has to be updated from time to time
Label=registry
Pull=newer # update to newest image, though this image is specified by hash and will never update to another version unless the hash is changed
Network=immich.network # attach to the podman network
UserNS=keep-id:uid=999,gid=999 # makes uid 999 and gid 999 map to the user running the service, so you can access the files in the volume without any special handling; otherwise root would map to your uid and uid 999 would map to some very high uid you can't access without podman. This modifies the image at runtime and may make the systemd service time out; maybe increase the timeout on low-powered machines
Volume=/srv/services/immich/database:/var/lib/postgresql/data # database persistence
Volume=/etc/localtime:/etc/localtime:ro # timezone info
Exec=postgres -c shared_preload_libraries=vectors.so -c 'search_path="$user", public, vectors' -c logging_collector=on -c max_wal_size=2GB -c shared_buffers=512MB -c wal_compression=on # also part of the official docker-compose... last time I checked anyway

[Service]
Restart=always
```
$HOME/.config/containers/systemd/immich-ml.container
```ini
[Unit]
Description=Immich Machine Learning
Requires=immich-redis.service immich-database.service immich-network.service

[Container]
AutoUpdate=registry
EnvironmentFile=${immich-config} # same config as above
Image=ghcr.io/immich-app/immich-machine-learning:release
Label=registry
Pull=newer # auto update on startup
Network=immich.network
Volume=/srv/services/immich/ml-cache:/cache # machine learning cache
Volume=/etc/localtime:/etc/localtime:ro

[Service]
Restart=always
```
`$HOME/.config/containers/systemd/immich.network`

```ini
[Unit]
Description=Immich network

[Network]
DNS=8.8.8.8
Label=app=immich
```

`$HOME/.config/containers/systemd/immich-redis.container`

```ini
[Unit]
Description=Immich Redis
Requires=immich-network.service

[Container]
AutoUpdate=registry
Image=registry.hub.docker.com/library/redis:6.2-alpine@sha256:eaba718fecd1196d88533de7ba49bf903ad33664a92debb24660a922ecd9cac8 # should probably change this to valkey...
Label=registry
Pull=newer # auto update on startup
Network=immich.network
Timezone=Europe/Berlin

[Service]
Restart=always
```
`$HOME/.config/containers/systemd/immich-server.container`

```ini
[Unit]
Description=Immich Server
Requires=immich-redis.service immich-database.service immich-network.service immich-ml.service

[Container]
AutoUpdate=registry
EnvironmentFile=${immich-config} # same config as above
Image=ghcr.io/immich-app/immich-server:release
Label=registry
Pull=newer # auto update on startup
Network=immich.network
PublishPort=127.0.0.1:2283:2283
Volume=/srv/services/immich/upload:/usr/src/app/upload # I think you can put images here to import, though I never used it
Volume=/etc/localtime:/etc/localtime:ro # timezone info
Volume=/srv/services/immich/library:/imageLibrary # the images are stored here once imported

[Service]
Restart=always

[Install]
WantedBy=multi-user.target default.target
```
- `systemctl --user daemon-reload`
- `systemctl --user enable --now immich-server.service`
- Enable linger so systemd user services run even if the user is logged off: `loginctl enable-linger $USER`
- Setup a reverse proxy like caddy so you can make access to it simple like immich.mini-pc.localnet
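For that last step, a minimal Caddyfile sketch (the hostname is just the example above; note that Caddy will try to obtain a public TLS certificate by default, so for a purely local name you may want the `http://` prefix as shown, or `tls internal`):

```
http://immich.mini-pc.localnet {
    reverse_proxy 127.0.0.1:2283
}
```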
Thank you!
Alas, I can’t get it to work. After some tweaking and fixing, I’m stuck: the server doesn’t seem to be talking to redis, with this error:
```
missing 'error' handler on this Redis client
microservices worker error: Error: getaddrinfo ENOTFOUND database, stack: Error: getaddrinfo ENOTFOUND database
    at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26)
microservices worker exited with code 1
Killing api process
```
I’ve been trying to learn bits of podman and docker and how to translate between the two… I think it’s just a bit much for me for now! Thanks anyway, and I’m sure I’ll come back and have another look at your instructions another time.
No Problem!
If you want to fix the issue: it seems like the hostname for one of the databases is wrongly set in the environment file. The hostname of a container is the same as the container name, which you can read using `podman ps`.

It worked! Thank you so much.
My quadlet files are now cobbled together from various sources, cross-referencing yours with the official docker compose and a couple of podman examples I found. But I now have a functioning Immich running in rootless podman, accessed through a VPS reverse proxy!
I’ll edit my first comment in the chain with some tips in case someone else comes across this looking for help.
Docker to rootful podman is easy. Docker to rootless podman can get annoying due to the file permissions and slightly more limited networking
Yeah. I’m also waiting for a native Ubuntu package, I don’t want to deal with Docker.
I am curious, why not Docker? It’s pretty convenient in my setup (docker compose + traefik). If I need to migrate it’s really simple, and if I want to nuke a service I just bring it down and delete the path.
Been hosting on my Synology NAS for a while now. This app kicks hard. I love it.
Recent convert to immich and hugely impressed by the software and project - one of FOSS’s shining stars. Good work everyone.
Reminds me that now that all my data is processed (in particular the heavy ML part) I should move the resulting container data to my (much less powerful but always on) NAS.
If it helps, I have an ML container on my more powerful machine and have my Immich instance pointing at that first, then the local NAS container, in order. If the powerful machine is on, it powers through (so I turn it on if I’m about to dump a batch of photos), and if it’s not, the NAS churns slowly through (e.g. if my phone uploads one or two).
It’s super easy to do! Would recommend.
Ah nice, I was aware of the remote ML instance option but I didn’t know it was optional, i.e. if it’s there rely on it, if not still work. I thought it was either do ML locally ALL the time or do ML remotely all the time.
Is it just an optional ML endpoint on the NAS instance pointing to the ML only container on the more powerful machine on the same LAN?
Pretty much that - the NAS instance is running all the time, and there’s a setting in Immich for the network address for the ML container(s) that accepts a comma (or semi colon) separated list, which is tried in order.
The docs mention that you can balance requests, but you’d have to use an external method for that at the moment.
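So the setting is just an ordered, comma-separated list of ML endpoint URLs, something like the following (hostnames are hypothetical; 3003 is the ML container’s default port):

```
http://desktop.lan:3003,http://nas.lan:3003
```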
Really great software. Works just as you’d want, and has plenty of style and flow too.
Immich sounds so awesome I plan to start using it soon.
I just set it up this week; I was settling for Nextcloud Memories before. Night and day difference.
A few pain points in the process but overall was pretty easy to set up and even add 2FA (though I can’t say authelia was easy to set up to do so), and once it’s off the ground it’s super smooth
Got an authelia config a man can bum?
I can never get it working.
It’s only good for phone photos, though. If you also take pictures with a camera, it doesn’t have any clear way to handle those.
Does it still have breaking changes when upgrading to a newer version?
In the past it felt like I was running an alpha version, where I spent more time fixing it than enjoying its features.