Hey there!

I’m thinking about starting a blog about privacy guides, security, self-hosting, and other shenanigans, just for my own pleasure. I have my own server running Unraid and have been looking at self-hosting Ghost as the blog platform. However, I’m wondering how “safe” it is to use one’s own homelab for this. If you have any experience with this topic, I would greatly appreciate some tips.

I understand that it’s relatively cheap to get a VPS, and that is always an option, but it is always more fun to self-host on one’s own bare metal! :)

  • cron@feddit.org · +32/−1 · 1 month ago

    No, for these reasons:

    • Bandwidth at home isn’t plentiful
    • My “uptime” at home isn’t great
    • No redundant hardware; even a simple mainboard failure would take a while to fix

    I have a VPS for these tasks, and I host a few sites for friends and family.

    • daddy32@lemmy.world · +3/−1 · 1 month ago

      Weeeell, there’s a school of thought leaning towards the opinion that using a VPS is still self-hosting ;)

      • cron@feddit.org · +2 · 1 month ago

        I agree, but I understood this question in the context of a homelab.

        And for me, a homelab is not the right place for a public website, for the reasons I mentioned.

  • dan@upvote.au · +29/−3 · edited · 1 month ago

    A VPS still counts as self-hosting :)

    I host my sites on a VPS. Better internet connection and uptime, and you can get pretty good VPSes for less than $40/year.

    The approach I’d take these days is to use a static site generator like Eleventy, Hugo, etc. These generate static HTML files. You can then store those files on literally any host. You can stick them on a VPS and serve them with any web server. You could upload them to a static file hosting service like BunnyCDN storage, Github Pages, Netlify, Cloudflare Pages, etc. Even Amazon S3 and Cloudfront if you want to pay more for the same thing. Note that Github Pages is extremely feature-poor so I’d usually recommend one of the others.
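
    A rough sketch of that workflow with Hugo (Eleventy and friends work the same way); the site name, VPS address, and paths here are made-up examples:

    ```shell
    # Scaffold a site and render it to static HTML in ./public/
    hugo new site blog && cd blog
    hugo

    # Option 1: push to any VPS running a web server
    rsync -avz --delete public/ user@my-vps:/var/www/blog/

    # Option 2: push to a static hosting service, e.g. via Netlify's CLI
    # netlify deploy --dir=public --prod
    ```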

    • JubilantJaguar@lemmy.world · +2 · 1 month ago

      This is a bit fuzzy. You seem to recommend a VPS but then suggest a bunch of page-hosting platforms.

      If someone is using a static site generator, then they’re already running a web server, even if it’s on localhost. The friction of moving the webserver to the VPS is basically zero, and that way they’re not worsening the web’s corporate centralization problem.

      I host my sites on a VPS. Better internet connection and uptime, and you can get pretty good VPSes for less than $40/year.

      I preferred this advice.

      • dan@upvote.au · +2 · edited · 1 month ago

        You seem to recommend a VPS but then suggest a bunch of page-hosting platforms.

        Other comments were talking about the pros and cons of self-hosting, so I tried to give advice for both approaches. I probably could have been clearer about that in my comment though. I edited the comment a bit to try and clarify.

        I have some static sites that I just rsync to my VPS and serve using Nginx. That’s definitely a good option.
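
        For anyone following along, the whole server side of that setup can be one Nginx server block — a sketch, assuming the files are rsynced to /var/www/blog and certificates already exist at the usual Let’s Encrypt paths (the domain is a placeholder):

        ```nginx
        server {
            listen 443 ssl;
            listen [::]:443 ssl;
            server_name blog.example.com;          # placeholder domain

            root /var/www/blog;                    # where rsync drops the files
            index index.html;

            ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;
        }
        ```

        Deploys then amount to `rsync -avz --delete public/ user@vps:/var/www/blog/`.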

        If you want to make it faster by using a CDN and don’t want it to be too hard to set up, you’re going to have to use a CDN service.

        A self-hosted CDN is doable, but way more effort. The anycast approach is to get your own IPv4 and IPv6 range, and get VPSes in multiple countries through a provider that allows BGP sessions (Vultr and HostHatch support this, for example). Then you can have one IP that goes to the server closest to the viewer. The easier approach is Geo DNS, where your DNS server returns a different IP depending on the visitor’s location. You can self-host that using something like PowerDNS.
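
        For a flavour of the Geo DNS option, here’s roughly what a zones file for PowerDNS’s geoip backend can look like — the domains and IPs are placeholders, and the exact schema should be checked against the PowerDNS docs:

        ```yaml
        # zones.yaml, referenced from pdns.conf (launch=geoip)
        domains:
        - domain: example.com
          ttl: 300
          records:
            eu.example.com:
              - a: 192.0.2.10      # VPS in Europe
            na.example.com:
              - a: 198.51.100.10   # VPS in North America
          services:
            # the backend expands placeholders like the visitor's
            # continent code into the matching record name
            www.example.com: '%cn.example.com'
        ```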

        • JubilantJaguar@lemmy.world · +1 · 1 month ago

          I have some static sites that I just rsync to my VPS and serve using Nginx. That’s definitely a good option.

          Agree. And hard to get security wrong cos no database.

          If you want to make it faster by using a CDN and don’t want it to be too hard to set up, you’re going to have to use a CDN service.

          Yes but this can just be a drop-in frontend for the VPS. Point the domain to Cloudflare and tell only Cloudflare where to find the site. This provides IP privacy and also TLS without having to deal with LetsEncrypt. It’s not ideal because… Cloudflare… but at least you’re using standard web tools. To ditch Cloudflare you just unplug them at the domain and you still have a website.

          Perhaps it’s irrational, but I’m bothered by how many people seem to think that Github Pages is the only way to host a static website. I know that’s not your case.

          • dan@upvote.au · +1 · 1 month ago

            That’s not Cloudflare-specific; you can use any CDN that supports origin pull in the same way :)

            It’s not ideal because… Cloudflare… but at least you’re using standard web tools. To ditch Cloudflare you just unplug them at the domain and you still have a website.

            Definitely agree with this! That’s one of the pain points of “cloud” services - they really try to lock you in, making it impossible to switch.

            without having to deal with LetsEncrypt.

            You still need encryption between your CDN and your origin, ideally using a proper certificate. Let’s Encrypt (and other ACME services like ZeroSSL) are pretty easy to use, and can be fully automated. I’m using Let’s Encrypt even for internal servers on my network, using a DNS challenge for verification instead of an HTTP one.
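
            As a sketch of that DNS-challenge setup (the hostname is a placeholder; the manual flow is shown, but a DNS plugin makes it fully automatic):

            ```shell
            # DNS-01 challenge: prove control of the domain via a TXT record,
            # so the host never needs to accept inbound HTTP.
            certbot certonly --manual --preferred-challenges dns \
              -d internal.example.com

            # Fully automated variant with a DNS plugin, e.g. Cloudflare:
            # certbot certonly --dns-cloudflare \
            #   --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
            #   -d internal.example.com
            ```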

            Perhaps it’s irrational but I’m bothered by how many people seem to think that Github Pages is the only way to host a static website

            It’s strange because out of all the possible options, Github Pages is the most basic. You have to store your generated files in a Git repo (which is kinda gross) and it barely supports any features. For example, it doesn’t support server logs or redirects.

            I guess it’s popular because people already use Github and don’t want to look for other services?

            • JubilantJaguar@lemmy.world · +1 · 1 month ago

              You still need encryption between your CDN and your origin, ideally using a proper certificate.

              It can be self-signed though, that’s what I’m doing and it’s partly to outsource the TLS maintenance. But the main reason I’m doing it is to get IP privacy. WHOIS domain privacy is fine, but to me it seems pretty sub-optimal for a personal site to be publicly associated with even a permanent IP address. A VPS is meant to be private, it’s in the name. This is something that doesn’t get talked about much. I don’t see any way to achieve this without a CDN, unfortunately.
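
              For reference, the self-signed origin certificate is a one-liner with openssl (the CN is a placeholder):

              ```shell
              # Self-signed cert for the CDN-to-origin leg, valid one year;
              # the CDN is then told to accept it when pulling from the origin.
              openssl req -x509 -newkey rsa:2048 -nodes \
                -keyout origin-key.pem -out origin-cert.pem \
                -days 365 -subj "/CN=origin.example.com"
              ```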

              I guess it’s popular because people already use Github and don’t want to look for other services?

              Yes, and the general confusion between Git and Github, and between public things and private things. It’s everywhere today. Another example: saying “my Substack” as if blogging was just invented by this private company. So it’s worse than just laziness IMO. It’s a reflexive trusting of the private over the public.

              • dan@upvote.au · +1 · 1 month ago

                it seems pretty sub-optimal for a personal site to be publicly associated with even a permanent IP address

                What’s the downside you see from having a static IP address?

                I don’t see any way to achieve this without a CDN, unfortunately.

                I think you’re looking for a reverse proxy. CDNs are essentially reverse proxies with edge caching (their main feature is that they cache files on servers that are closer to a user), but it sounds like you don’t really care about the caching for your use case?

                I don’t know if any companies provide reverse proxies without a CDN though.

                • JubilantJaguar@lemmy.world · +1 · 1 month ago

                  What’s the downside you see from having a static IP address?

                  What’s the downside to having one’s phone number in the public directory? There’s no security risk and yet plenty of people opt out. It’s personally identifying information.

                  I don’t know if any companies provide reverse proxies without a CDN though.

                  Exactly.

  • Foster Hangdaan@lemmy.fosterhangdaan.com · +18/−1 · 1 month ago

    I self-host everything from my home network including my website. I like to keep all my data local. 😁

    It’s a simple setup: just a static site made with Lume, and served with Caddy. The attack surface is pretty small since it’s just HTML and CSS files (no JavaScript).
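
    For anyone wanting to replicate this, a minimal Caddyfile for a static site is only a few lines — the domain and path are placeholders, and Caddy obtains and renews TLS certificates for the domain automatically:

    ```caddyfile
    blog.example.com {
        # serve pre-built static files only
        root * /srv/blog
        file_server
        encode gzip
    }
    ```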

    • LunchMoneyThief@links.hackliberty.org · +4/−1 · 1 month ago

      I wonder sometimes if the advice against pointing DNS records to your own residential IP amounts to a big scare. Like you say, if it’s just a static page served on an up to date and minimal web server, there’s less leverage for an attacker to abuse.

      I’ve found that ISPs too often block port 80 and 443. Did you luck out with a decent one?

      • Foster Hangdaan@lemmy.fosterhangdaan.com · +6 · edited · 1 month ago

        I wonder sometimes if the advice against pointing DNS records to your own residential IP amounts to a big scare. Like you say, if it’s just a static page served on an up to date and minimal web server, there’s less leverage for an attacker to abuse.

        That advice is a bit old-fashioned in my opinion. There are many tools nowadays that will get you a very secure setup without much effort:

        • Using a reverse proxy with automatic SSL certs like Caddy.
        • Sandboxing services with Podman.
        • Mitigating DoS attacks by using a WAF such as Bunkerweb.

        And of course, besides all these tools, the simplest way of securing public services is to keep them updated.
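
        As one concrete (hypothetical) example of the sandboxing point: the official Caddy image run rootless under Podman, serving a read-only site directory. Paths and the port choice are examples; the image’s default config serves /usr/share/caddy on port 80.

        ```shell
        # Rootless Podman: the container runs in an unprivileged user
        # namespace, and the site files are mounted read-only.
        podman run -d --name blog \
          -p 8080:80 \
          -v ./site:/usr/share/caddy:ro \
          docker.io/library/caddy:2
        ```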

        I’ve found that ISPs too often block port 80 and 443. Did you luck out with a decent one?

        Rogers has been my ISP for several years and I’ve had no issue receiving HTTP/S traffic. The only issue, as with most providers, is that they block port 25 (SMTP). That’s the only thing keeping me from self-hosting my own email server; for that, I have to rely on a VPS.

  • wjs018@lemmy.world · +16 · 1 month ago

    I have hosted a WordPress site on my Unraid box before, but ended up moving it to a VPS. I moved it primarily because a VPS is just going to have more uptime, since I tinker with my homelab too often. So any service that I expect other people to use, I often end up moving to a VPS (mostly wikis for different things). The one exception is anything related to media delivery (Plex, Jellyfin, the *arr stack), because I don’t want to make that as publicly accessible, and it needs close integration with the storage array in Unraid.

    • Sunny' 🌻@slrpnk.net (OP) · +3 · 1 month ago

      Good points here, uptime is a factor I had not taken into consideration. Probably better to get a VPS as you say.

  • eric@lemmy.ca · +9 · 1 month ago

    I have a Hugo site hosted on GitHub and I use CloudFlare Pages to put it on my custom domain. You don’t have to use GitHub to host the repo. Except for the cost of the domain, it’s free.

  • Daniel Quinn@lemmy.ca · +8 · 1 month ago

    I’ve been self-hosting my blog for 21 years, if you can believe it, and much of that time it’s been on a server in my house. I’ve hosted it on everything from a dusty old Pentium 200 MHz with 16MB of RAM (that’s MB, not GB!) to a shared web host (Webfaction), to a proper VPS (Hetzner), to a Raspberry Pi Kubernetes cluster, which is where it is now.

    The site is currently running Python/Django on a few Kubernetes pods on a few Raspberry Pi 4s, so the total power consumption is tiny, and since they’re fanless, it’s all very quiet in my office upstairs.

    In terms of safety, there’s always a risk since you’re opening a port to the world for someone to talk directly to software running in your home. You can mitigate that by (a) keeping your software up to date, and (b) if you’re maintaining the software yourself (like I am), keeping on top of any dependencies that have known exploits. Like, don’t just stand up an instance of Wordpress and forget about it. That shit’s going to get compromised :-)

    The safest option is probably to use a static site generator like Hugo, since then your attack surface is limited to whatever you’re using to serve the static files (probably Nginx), while if you’re running a full-blown application that does publishing etc., then that’s a lot of stuff that could have holes you don’t know about. You may also want to set up something like Cloudflare in front of your site to prevent a DoS attack or something from crippling your home internet, though that may be overkill.

    But yeah, the bandwidth requirements of running a blog are negligible, and the experience of running your own stuff on your own hardware in your own house is pretty great. I recommend it :-)

  • Strit@lemmy.linuxuserspace.show · +6 · 1 month ago

    I host mine just like you want to do. Ghost running in a docker container on my homelab, with reverse proxy and domain pointing to it.

    Haven’t had any issues so far.
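
    For reference, a minimal version of that setup with the official Ghost image — the domain and volume name are placeholders, and the container binds to localhost so only the reverse proxy can reach it:

    ```yaml
    # docker-compose.yml (sketch): the reverse proxy forwards
    # blog.example.com to 127.0.0.1:2368
    services:
      ghost:
        image: ghost:5
        restart: unless-stopped
        ports:
          - "127.0.0.1:2368:2368"
        environment:
          url: https://blog.example.com
        volumes:
          - ghost-content:/var/lib/ghost/content
    volumes:
      ghost-content:
    ```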

  • pythia@lemmy.dbzer0.com · +5 · 1 month ago

    could someone please point me to a “self-host-beginner-tutorial”? I have pretty good ICT knowledge, but when it comes to selfhosting, my knowledge ends…

    • Sunny' 🌻@slrpnk.net (OP) · +4 · 1 month ago

      Here is one off the top of my head: https://perfectmediaserver.com/.

      I’d say it boils down to what you see yourself hosting, what do you need/want? There are many great YT content creators out there documenting their experiences, tips and guides. HardwareHaven, Raid Owl, Jeff Geerling, Christian Lempa, TechnoTim and Wolfgang to mention a few.

      JupiterBroadcasting has a wide variety of podcasts dedicated to both selfhosting and Linux stuff, if that should pique your interest.

      If you need tips for what to selfhost, here is another great resource :) https://github.com/awesome-selfhosted/awesome-selfhosted

  • LainTrain@lemmy.dbzer0.com · +5 · 1 month ago

    Yes, I host everything public with Cloudflare Tunnels. Everything heavier is behind a VPN with DDNS, on an invite basis for friends and fam. For the former it’s hassle-free HTTPS, no reverse proxy, no firewall, no nonsense.

  • K3CAN@lemmy.radio · +3 · 1 month ago

    I self host.

    I use nginx as a reverse proxy with crowdsec. The backends are nginx and mariadb. Everything is running on Debian VMs or LXCs with apparmor profiles and it’s all isolated to an “untrusted” VLAN.

    It’s obviously still “safer” to have someone else host your stuff, like a VPS or Github Pages, etc, but I enjoy selfhosting and I feel like I’ve mitigated most of the risk.

  • Encrypt-Keeper@lemmy.world · +4/−1 · edited · 1 month ago

    There’s nothing wrong with just using a VPS for this. Despite what some mouth-frothing hobbyists will tell you, it’s still well within the realm of self-hosting. There’s no practical difference between hosting a blog on your UnRAID server and on a VPS.

    If you really want to be some kind of purist and only use your own hardware, then you could configure a web server that can reverse proxy on your UnRAID server and forward port 443 in your router to your UnRAID box, but you’d have to change your UnRAID access port to something else. You’d want to keep this web server docker container up to date, and preferably see if you can implement some kind of WAF with it or in front of it. You’d then forward the requests from this web server to your Ghost container.
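
    A sketch of what that reverse-proxy piece can look like with Nginx — the domain and cert paths are placeholders, and it assumes Ghost is bound to its default port 2368 on localhost:

    ```nginx
    server {
        listen 443 ssl;
        server_name blog.example.com;              # placeholder domain

        ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:2368;      # Ghost container's port
            proxy_set_header Host              $host;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
    ```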

    A better idea would be to use a different piece of hardware for this reverse proxy, like a Raspberry Pi or something, and put it on a different subnet in your house. Forward 443 to that, then proxy the connection back to UnRAID, on whatever port you bind the Ghost container to. Then you can tighten the access that Raspberry Pi has. Or hell, host the blog on that hardware as well and don’t allow any traffic to your main LAN.

    There are half a dozen better ways to do this, but they all require you to rely on a third party service to some extent.

  • sntx@lemm.ee · +3 · 1 month ago

    yes: sntx.space, check out the source button in the bottom right corner.

    I’m building/running it the homebrewed, unconventional route. That is, I have just a bit of HTML/CSS and other files I want to serve; I use Nix to build that into a usable website and serve it on one of my homelab machines via nginx. That is made available through a VPS running HAProxy and its public IP. A Nebula overlay network (VPN) connects the two machines.

  • sugar_in_your_tea@sh.itjust.works · +3/−1 · edited · 1 month ago

    I use a VPS and generate static sites using Hugo. Works fine.

    I could host it in my network, but I don’t see a point, and I’d really rather not have a power outage or loss of internet break my site (much more likely at home than at a datacenter). I host pretty much everything else within my network though.