• 2 Posts
  • 273 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • I think you should give Trilium(Next) Notes a try:

    • it has the hierarchical note structure that you are familiar with from Obsidian

    • it has better ways of keeping things organized (attributes can be values or references, and can be shared and inherited, which provides a flexible framework for having note “types” as templates that can be extended, e.g. people vs. colleagues, businesses vs. companies, etc; see the rough sketch after this list)

    • it has the concept of note hoisting (which lets you focus on a note and its sub-notes, so other projects/spaces don’t get in the way of autocomplete and placing references), and workspaces that build further on top of that

    • it can be used standalone (local client/offline-only, like Obsidian), but coupling it with a remote server opens up more interesting use-cases (syncing, sharing notes with others via public URLs, one-user/multi-client editing), which gives the best of both worlds (local-first/online-first) and lets you access your personal notes on devices you don’t necessarily own (which Obsidian doesn’t). The mobile app story isn’t great (it’s a PWA with limited offline capabilities at the moment), but it isn’t worse than the alternatives either (I can’t really work and think long-form on a handheld, no matter the editor experience, but perhaps that’s just me).
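
    To make the attributes/templates point a bit more concrete, here is a rough, from-memory sketch of Trilium’s attribute notation; treat the exact syntax as approximate and check Trilium’s own docs:

```
#type=person                  <- label: a plain value attribute
~employer=ACME Corp           <- relation: a reference to another note
#todo(inheritable)            <- inherited by all sub-notes of this note
~template=Person              <- points the note at a “Person” template note,
                                 pulling in that template’s promoted attributes
```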







  • I don’t think our views are so incompatible; I just think there are two conflicting paradigms supporting a false dichotomy. One is prevalent in the business world, where “the cost of labour dwarfs the cost of hardware” and where it’s acceptable to trade some (= a lot of) efficiency for convenience/saving man-hours. But this is the “self-hosted” community, where people are running things on their own hardware, often in their own house, paying the high price of inefficiency very directly (electricity costs, less living space, more heat/noise, etc.).

    And docker is absolutely fine and relevant in this space, but only when “done right”, i.e. when containers are not just spun up as isolated black boxes, but carefully organized so as to avoid overlapping services and resource wastage (see the compose sketch at the end of this comment), in which case managing containers ends up requiring more effort, not less.

    But this is absolutely not what you suggest. What you suggest would have a much greater wastage impact than a “few percent of cpu usage or a little bit of ram”, because you essentially propose that every container ship its own web server, application server, database, etc. We are no longer talking about a “few percent” of container-stack overhead; we are talking about whole extra machines’ worth of software and compute requirements.

    So, in short, I don’t think there’s a very large overlap between the business world throwing money at its problems and the self-hosting community, and so the behaviours differ (there’s more than one way to use containers, and my observation is that it goes very differently in each). I’m also not hostile to containers in general, but they cannot be recommended in good faith to self-hosters as a solution that is both efficient and convenient (you have to pick one).
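
    As a hypothetical sketch of what “done right” can look like, here is a minimal compose file in which two apps share a single database server and a single reverse proxy instead of each stack bundling its own (the app images, credentials and DATABASE_URL variables are placeholders; a real setup also needs per-app databases/users created in the shared postgres and a proper proxy config):

```yaml
# Two apps behind one proxy and one database server (illustrative only).
services:
  proxy:
    image: caddy:2                 # single reverse proxy for both apps
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro

  db:
    image: postgres:16             # single database server, shared by both apps
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/postgresql/data

  app1:
    image: example/app1            # placeholder image
    environment:
      DATABASE_URL: postgres://app1:change-me@db/app1
    depends_on:
      - db

  app2:
    image: example/app2            # placeholder image
    environment:
      DATABASE_URL: postgres://app2:change-me@db/app2
    depends_on:
      - db

volumes:
  db-data:
```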



  • I don’t care […] because it’s in the container or stack and doesn’t impact anything else running on the system.

    This is obviously not how any of this works: down the line those stacks will very much add up and compete against each other for CPU/memory/IO/…. That’s inherent to the physical nature of the hardware, its architecture and the finiteness of its resources. And here comes the balancing act; it’s just unavoidable.

    You may not notice it because too much hardware has been thrown at the problem, but I wouldn’t exactly call that a winning strategy long term, and especially not in the context of self-hosting, where you directly foot the bill.

    Moreover, those server components which you are needlessly multiplying (web servers, databases, application runtimes, …) have spent decades optimizing for resource pooling (with shared buffers, caching, event scheduling, …). These efforts are all thrown away when each instance serves a single client/container, which further lowers (and quite drastically at that) the headroom for optimization and scaling.
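
    To put rough, purely illustrative numbers on it (defaults vary by image and version): ten stacks each bundling their own PostgreSQL (128 MB default shared_buffers, before per-connection overhead), their own nginx and their own application runtime land somewhere around

    10 × (128 MB buffers + ~50 MB web server + ~300 MB runtime) ≈ 4–5 GB

    of fragmented, mostly idle memory, where a single shared database and reverse proxy would pool the same RAM into caches that actually get used.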



  • I don’t think containers are bad, nor that the performance lost in abstractions is really significant. I just think that running multiple services on a physical machine is a delicate balancing act that requires knowledge of what’s truly going on, and careful sharing of resources, sometimes across containers. By the time you’ve reached that point (and know what every container does and how its services are set up), you’ve defeated the main reason many people use containers in the first place (mostly to fire and forget black boxes that just work), and only added layers of tooling and complexity between yourself and what’s going on.



  • I’d like to share your optimism, but what you suggest leaving us to “deal with” isn’t “AI” (which has been present in web search for decades as increasingly clever summarization techniques…) but LLMs, a very specific and especially inscrutable class of AI designed to “sound convincing”, with no regard for correctness or truthfulness. Effectively, more human time will be wasted reading invented or counterfeit stories (with no easy way to tell them apart); first-hand information will become harder to source and verify as it gets increasingly diluted into AI-generated noise.

    I also haven’t seen any practical advantage to using LLM prompts vs. traditional search engines in the general case: you end up typing more, for the sake of “babysitting” the LLM, and get more to read as a result (which is, again, aggravated by the fact that you are now given a single-source/one-sided view on the matter, with no citations, references, or reproducible steps leading to that conclusion).

    Last but not least, LLMs are an environmental disaster in the making: the computational cost is enormous (in new hardware and electricity), and we are at a point where every company partaking in this new gold rush is selling us a solution in need of a problem, each having to justify the expenditure (so far, none is making a profit out of it, which would be the bare first step towards offsetting the pollution incurred).


  • You can always take a shot at using a third-party client (possibly one acting as a bridge to other/better protocols, like e.g. slidge.im>xmpp or the buggy Matrix equivalent), but you need to keep in mind that they will all require you to authenticate (and remain authenticated) using a smartphone, and that the use of third-party clients is forbidden by WA’s terms and conditions (which may lead to your account being blocked/deleted).





  • The problem I’ve observed with XMPP as an outsider is the lack of a standard. Each server or client has its own supported features and I’m not sure which one to choose.

    That’s a valid concern, but I wouldn’t call it a problem. There are practically 2 types of clients/servers: the ones which are maintained, and which work absolutely fine and well together, and the rest, the unmaintained/abandoned part of the ecosystem.

    And with the protocol being so stable and backwards/forwards compatible in large part, those unmaintained clients will still just work, only without the latest and greatest features (XMPP has the machinery to let clients and servers advertise their supported features, so the experience stays cohesive; see the sketch at the end of this comment).

    Which client would you recommend?

    Depends on which platform you are on and the type of usage. You should be able to pick one of those advertised on https://joinjabber.org (that should keep you away from the fringe/unmaintained stuff). Personally, I use Gajim and monocles.
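
    For reference, that feature-advertisement machinery is XEP-0030 (Service Discovery): a client asks a server (or another client) what it supports and gets back a list of feature namespaces, roughly like this (abridged):

```xml
<!-- query: “what do you support?” -->
<iq type='get' to='example.org' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>

<!-- answer: identity + advertised features -->
<iq type='result' from='example.org' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <identity category='server' type='im'/>
    <feature var='urn:xmpp:mam:2'/>     <!-- message archives -->
    <feature var='urn:xmpp:carbons:2'/> <!-- multi-device message copies -->
  </query>
</iq>
```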


  • They both qualify as “open, federated messaging protocols”, with XMPP being the oldest (about 25 years old) and an internet standard (IETF), but at this point we can consider Matrix to be quite old, too (10 years old). On paper they are quite interchangeable: they both focus on bridging with established protocols, etc.

    Where things differ, though, is that Matrix is practically a single-vendor implementation: the same organization (Element/New Vector/however it’s called these days) develops both the reference client and the reference server. That server is incidentally super complex, not well documented (the code is the documentation), and practically not compatible with the other (semi-official) implementations. This is a red flag, because it also happens that this organization was built on venture-capital money with no financial stability in sight. XMPP is a much more diverse and accessible ecosystem: there are multiple independent teams and corporations implementing servers and clients, and the protocol itself is very stable, versatile and extensible. This is how you can find XMPP today running the backbone of the modern internet, dispatching notifications to all Android devices, serving as the signaling system behind millions of IoT devices, and providing messaging to billions of users (WhatsApp is, by the way, based on XMPP).

    Another significant difference is that, despite 10 years of existence and millions invested into it, Matrix still has not reached stability (and probably never will): the organization recently announced Matrix 2 as (yet another) definitive answer to the protocol’s shortcomings, without changing anything about what makes the protocol so painful to work with, and the requirements (compute, memory, bandwidth) to run Matrix at even a small scale remain orders of magnitude higher than XMPP’s. This has discouraged many organizations (even serious ones, like Mozilla, KDE, …) from running Matrix themselves, and it further contributes to the de facto centralization and single point of control that federated protocols are meant to prevent.