• Showroom7561@lemmy.ca · 13 days ago

    Is it easy to self-host immich so that it operates on a READ-ONLY basis with my images? I really only want to use it for the local-AI indexing/search, but not as a backup or photo management solution (Synology Photos works just fine for that).

    • BakedCatboy@lemmy.ml · 13 days ago

      I think you can use Immich's external libraries for this. To be extra safe, you can also mount your external images folder as read-only by adding :ro to the Docker volume mount, so the container can't modify anything as a precaution.
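      A rough sketch of what that could look like in the compose file, assuming a hypothetical host path /volume1/photo for the existing photo share:

      # excerpt from docker-compose.yml for the immich-server service (paths are placeholders)
      services:
        immich-server:
          volumes:
            # Immich's own upload location stays writable
            - /srv/immich/upload:/usr/src/app/upload
            # existing photo collection mounted read-only so Immich can index but never change it
            - /volume1/photo:/mnt/photos:ro

      You'd then add /mnt/photos as an import path for an external library in Immich's settings.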

      • Showroom7561@lemmy.ca · 13 days ago

        To be extra safe, you can also mount your external images folder as read-only by adding :ro to the Docker volume mount, so the container can't modify anything as a precaution.

        This is what I was thinking, too.

        Alright, looks like I’ll be setting it up soon! LOL

    • JustEnoughDucks@feddit.nl · 13 days ago

      You can also try out PhotoPrism for that. Immich is best as an all-in-one replacement for Google Photos.

      PhotoPrism also has face recognition, maps, and more features geared toward photography than Immich has.

      I realized after using PhotoPrism that I am too basic for that haha

      • Showroom7561@lemmy.ca · 13 days ago

        I don't think PhotoPrism has contextual search. Anyway, Immich is installed and running on my NAS 🤭

    • Jason2357@lemmy.ca · 13 days ago

      Yes, that's how I use it. It has access to a read-only bind mount of my photo directory. The ML doesn't write EXIF data to the images; it just keeps that in its database.

    • SayCyberOnceMore@feddit.uk · 13 days ago

      If you're using Arch, the AUR package works well.

      I'm fully aware of the joy of containers, but I just don't want all that extra faff.

        • milicent_bystandr@lemm.ee · 8 days ago (edited)

          Can you give me some pointers? I’m still new to docker and podman; hoping to get this going without too much learning curve to start with!


          Edit: Thanks to InnerScientist's instructions below, plus some extra work, I got it working. A 5-day-old Lemmy comment is probably lost forever to anyone else who wants help, but just in case, here are some pointers.

          1. I cross-referenced with the official Docker Compose file and two other quadlet examples/guides, neither of which worked on their own for me. I used InnerScientist's files below as my base. Other guides:
          2. Comments need to be on their own line (or deleted), not at the end of the line as below (see the snippet after the iptables notes).
          3. The environment file from the official Docker instructions needs some more lines, including duplicates with the correct names for the database container (can you put referenced ones in a quadlet like in the Docker Compose file? I don't know).
          4. The env file also needs DB_HOSTNAME and REDIS_HOSTNAME, which for me were systemd-immich-database and systemd-immich-redis (you can find the names with podman ps while the containers are running, as mentioned later in this thread).
          5. Podman, to my surprise, bypasses ufw (Uncomplicated Firewall) just like Docker!
          • To restrict access to your computer only, you can PublishPort=127.0.0.1:2283:2283 as below.
          • To open it to the LAN, PublishPort=2283:2283.
          • If, like me, you're using a reverse proxy but don't want other LAN access, it's fairly easy to limit with iptables rules. I use Nebula, which puts my VPN within a particular IP range. Script below.

          iptables script

          #!/bin/bash
          iptables -I INPUT -p tcp --dport 2283 -j DROP
          iptables -I INPUT -p tcp --dport 2283 -s 192.168.1.0/24 -j ACCEPT
          iptables -I INPUT -p tcp --dport 2283 -s localhost -j ACCEPT
          

          Each -I inserts its rule at the top of the INPUT chain, before ufw or anything else, so the rules end up in reverse order from the script.

          • packets to port 2283 from localhost, accept!
          • packets to port 2283 from 192.168.1.*, accept!
          • packets to port 2283 from anywhere [else], drop!
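          For point 2 above: quadlets are systemd unit files, so # only starts a comment at the beginning of a line; a trailing # ... becomes part of the value. The fix is simply moving each comment onto its own line, e.g. for one of the keys from the files below:

          # wrong (comment becomes part of the value):
          #   Pull=newer # auto update on startup
          # right:
          # auto update on startup
          Pull=newer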
          • InnerScientist@lemmy.world · 13 days ago (edited)

            Sure! I set it up on NixOS though, so this is the short form of that:

            1. Install Podman and passt + slirp4netns for networking
            2. Set up subuid and subgid: usermod --add-subuids 100000-165535 --add-subgids 100000-165535 johndoe
            3. I'm using quadlets, so we need to create those: $HOME/.config/containers/systemd/immich-database.container
            [Unit]
            Description=Immich Database
            Requires=immich-redis.service immich-network.service
            
            [Container]
            AutoUpdate=registry
            EnvironmentFile=${immich-config} # add your environment variables file here
            Image=registry.hub.docker.com/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0 # hash from the official docker-compose, has to be updated from time to time
            Label=registry
            Pull=newer # update to newest image, though this image is specified by hash and will never update to another version unless the hash is changed
            Network=immich.network # attach to the podman network
            UserNS=keep-id:uid=999,gid=999 # This makes uid 999 and gid 999 map to the user running the service, this is so that you can access the files in the volume without any special handling otherwise root would map to your uid and the uid 999 would map to some very high uid that you can't access without podman - This modifies the image at runtime and may make the systemd service timeout, maybe increase the timeout on low-powered machines 
            Volume=/srv/services/immich/database:/var/lib/postgresql/data # Database persistance
            Volume=/etc/localtime:/etc/localtime:ro # timezone info
            Exec=postgres -c shared_preload_libraries=vectors.so -c 'search_path="$user", public, vectors' -c logging_collector=on -c max_wal_size=2GB -c shared_buffers=512MB -c wal_compression=on # also part of official docker-compose.....last time i checked anyways
            [Service]
            Restart=always
            

            $HOME/.config/containers/systemd/immich-ml.container

            [Unit]
            Description=Immich Machine Learning
            Requires=immich-redis.service immich-database.service immich-network.service
            
            [Container]
            AutoUpdate=registry
            EnvironmentFile=${immich-config} #same config as above
            Image=ghcr.io/immich-app/immich-machine-learning:release
            Label=registry
            Pull=newer # auto update on startup
            Network=immich.network
            Volume=/srv/services/immich/ml-cache:/cache # machine learning cache
            Volume=/etc/localtime:/etc/localtime:ro
            
            [Service]
            Restart=always
            

            $HOME/.config/containers/systemd/immich.network

            [Unit]
            Description=Immich network
            
            [Network]
            DNS=8.8.8.8
            Label=app=immich
            
            $HOME/.config/containers/systemd/immich-redis.container
            [Unit]
            Description=Immich Redis
            Requires=immich-network.service
            
            [Container]
            AutoUpdate=registry
            Image=registry.hub.docker.com/library/redis:6.2-alpine@sha256:eaba718fecd1196d88533de7ba49bf903ad33664a92debb24660a922ecd9cac8 # should probably change this  to valkey.... 
            Label=registry
            Pull=newer # auto update on startup
            Network=immich.network
            Timezone=Europe/Berlin
            
            [Service]
            Restart=always
            

            $HOME/.config/containers/systemd/immich-server.container

            [Unit]
            Description=Immich Server
            Requires=immich-redis.service immich-database.service immich-network.service immich-ml.service
            
            [Container]
            AutoUpdate=registry
            EnvironmentFile=${immich-config} #same config as above
            Image=ghcr.io/immich-app/immich-server:release
            Label=registry
            Pull=newer # auto update on startup
            Network=immich.network
            PublishPort=127.0.0.1:2283:2283
            Volume=/srv/services/immich/upload:/usr/src/app/upload # i think you can put images here to import, though i never used it
            Volume=/etc/localtime:/etc/localtime:ro # timezone info
            Volume=/srv/services/immich/library:/imageLibrary # here the images are stored once imported
            
            [Service]
            Restart=always
            
            [Install]
            WantedBy=multi-user.target default.target
            
            4. systemctl --user daemon-reload
            5. systemctl --user enable --now immich-server.service
            6. Enable linger so systemd user services keep running even when the user is logged off: loginctl enable-linger $USER
            7. Set up a reverse proxy like Caddy so you can make access to it simple, like immich.mini-pc.localnet (a sketch follows below).
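            A minimal Caddyfile sketch for that last step (immich.mini-pc.localnet is just the example name from above, and 2283 matches the PublishPort binding):

            immich.mini-pc.localnet {
                # hand every request to the Immich server published on localhost
                reverse_proxy 127.0.0.1:2283
            }

            For a LAN-only name like this, Caddy will issue a certificate from its internal CA; prefix the site address with http:// if you'd rather skip TLS entirely.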
            • milicent_bystandr@lemm.ee · 10 days ago

              Thank you!

              Alas, I can't get it to work. After some tweaking and fixing, I'm stuck: the server doesn't seem to be talking to Redis, with this error:

              missing 'error' handler on this Redis client
              microservices worker error: Error: getaddrinfo ENOTFOUND database, stack: Error: getaddrinfo ENOTFOUND database    
                   at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26)     
              microservices worker exited with code 1    
              Killing api process
              

              I’ve been trying to learn bits of podman and docker and how to translate between the two… I think it’s just a bit much for me for now! Thanks anyway, and I’m sure I’ll come back and have another look at your instructions another time.

              • InnerScientist@lemmy.world · 10 days ago

                No problem!

                If you want to fix the issue: that looks like the hostname for one of the databases being set wrongly in the environment file. A container's hostname is the same as the container name, which you can read with podman ps.
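                For example, with the quadlet names used elsewhere in this thread, the relevant lines of the environment file would look roughly like this (a sketch; confirm the actual names with podman ps):

                # hostnames are the container names on the shared immich.network
                DB_HOSTNAME=systemd-immich-database
                REDIS_HOSTNAME=systemd-immich-redis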

                • milicent_bystandr@lemm.ee · 8 days ago

                  It worked! Thank you so much.

                  My quadlet files are now cobbled together from various sources, cross-referencing yours with the official docker compose and a couple of podman examples I found. But I now have a functioning Immich running in rootless podman, accessed through a VPS reverse proxy!

                  I’ll edit my first comment in the chain with some tips in case someone else comes across this looking for help.

      • Sproutling@lemmy.ml · 13 days ago

        Docker to rootful Podman is easy. Docker to rootless Podman can get annoying due to the file permissions and slightly more limited networking.

    • Scrollone@feddit.it · 13 days ago

      Yeah. I'm also waiting for a native Ubuntu package; I don't want to deal with Docker.

    • idriss@lemm.ee · 13 days ago

      I'm curious, why not Docker? It's pretty convenient in my setup (Docker Compose + Traefik). If I need to migrate, it's really simple; if I want to nuke a service, I just bring it down and delete the path.

  • DigitalDilemma@lemmy.ml · 13 days ago

    Recent convert to Immich and hugely impressed by the software and the project - one of FOSS's shining stars. Good work, everyone.

  • utopiah@lemmy.ml · 13 days ago

    Reminds me that, now that all my data is processed (in particular the heavy ML part), I should move the resulting container data to my (much less powerful but always-on) NAS.

    • jeeva@lemmy.world · 13 days ago

      If it helps, I have an ML container on my more powerful machine and have my Immich instance pointing at that first, then the local NAS container, in order. If the powerful machine is on, it powers through (so I turn it on if I'm about to dump a batch of photos), and if it's not, it churns through slowly (e.g. if my phone uploads one or two).

      It’s super easy to do! Would recommend.

      • utopiah@lemmy.ml · 13 days ago

        Ah nice, I was aware of the remote ML instance option, but I didn't know it could fall back, i.e. rely on it if it's there and still work if it's not. I thought it was either do ML locally ALL the time or do ML remotely all the time.

        Is it just an optional ML endpoint on the NAS instance pointing to the ML-only container on the more powerful machine on the same LAN?

        • jeeva@lemmy.world · 10 days ago

          Pretty much that: the NAS instance is running all the time, and there's a setting in Immich for the network address of the ML container(s) that accepts a comma- (or semicolon-) separated list, which is tried in order.

          The docs mention that you can balance requests, but you’d have to use an external method for that at the moment.
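          In case it helps anyone copying this setup: a rough sketch of running just the ML container on the faster machine (image name as in the quadlets above; 3003 is the machine-learning service's default port, but check the current docs):

          # machine-learning container only, on the more powerful box
          docker run -d --name immich-ml \
            -p 3003:3003 \
            -v immich-ml-cache:/cache \
            ghcr.io/immich-app/immich-machine-learning:release

          The always-on instance then lists http://<fast-machine>:3003 first in the machine-learning URL setting, with its local container's address after it as the fallback.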

    • nfreak@lemmy.ml · 13 days ago

      I just set it up this week; I was settling for Nextcloud Memories before. Night-and-day difference.

      There were a few pain points in the process, but overall it was pretty easy to set up and even add 2FA (though I can't say Authelia was easy to configure for that), and once it's off the ground it's super smooth.

  • AnUnusualRelic@lemmy.world · 13 days ago

    It's only good for phone photos, though. If you also take pictures with a camera, it doesn't have any clear way to handle those.

  • zingo@sh.itjust.works · 13 days ago

    Are there still breaking changes when upgrading to a newer version?

    In the past it felt like I was running an alpha version, where I spent more time fixing it than enjoying its features.