
  • and my point was explaining that that work has likely been done, because the paper I linked is 20 years old and it discusses the deep connection between “similarity” and “compresses well”. I bet if you read the paper, you’d see exactly why I chose to share it, particularly the equations that define NID and NCD.

    The difference between “seeing how well similar images compress” and figuring out “which of these images are similar” is the quantization/classification step, which is trivial compared to computing the distance between every sample and every other sample. My point was that this distance measure (using compressors to measure similarity) has been published for at least 20 years, so you should probably google “normalized compression distance” before spending any time implementing stuff, since it’s very much been done before. There’s a rough sketch of the metric below.
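    In case it helps, here’s a minimal sketch of NCD with zlib standing in for the ideal compressor. The choice of zlib and the toy inputs are just illustrative, not anything prescribed by the paper:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable approximation of the
    normalized information distance, with a real compressor (zlib here)
    standing in for Kolmogorov complexity.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    """
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar inputs compress well together, so their NCD is close to 0;
# unrelated inputs stay closer to 1.
a = b"the quick brown fox jumps over the lazy dog" * 20
b = b"the quick brown fox leaps over the lazy dog" * 20
c = bytes(range(256)) * 5
print(ncd(a, b))  # small: the two strings share most of their structure
print(ncd(a, c))  # larger: little shared structure
```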


  • I think there’s probably a difference between an intro to computer science course and the PhD level papers that discuss the ability of machines to learn and decide, but my experience in this is limited to my PhD in the topic.

    And, no, textbooks are often not peer reviewed in the same way and are generally written by graduate students. They have mistakes in them all the time. Or grand statements taken out of context. Or simplified explanations, because introducing the nuances of PAC-learnability to somebody who doesn’t understand a “for” loop is probably not very productive.

    I came here to share some interesting material from my PhD research topic and you’re calling me an asshole. It sounds like you did not have a wonderful day and I’m sorry for that.

    Did you try learning about how computers learn things and make decisions? It’s pretty neat.


  • You seem very upset, so I hate to inform you that neither one of those is a peer-reviewed source and that they are simplifying things.

    “Learning” is definitely something a machine can do, and it can then use that experience to coordinate actions based on data that is inaccessible to the programmer. If that’s not “making a decision”, then we aren’t speaking the same language. Call it what you want and argue with the entire published field of AI, I guess. That’s certainly an option, but generally I find it useful for words to mean things without getting too pedantic.


  • Yeah. I understand. But first you have to cluster your images so you know which ones are similar and can then do the deduplication. This would be a powerful way to do that. It’s just expensive compared to other clustering algorithms.

    My point in linking the paper is that “the probe” you suggested is a 20-year-old metric that is well understood. Using normalized compression distance as a computable stand-in for Kolmogorov complexity is exactly what the linked paper is about. You don’t need to spend time showing that similar images compress better together than dissimilar ones; the compression length is itself the measure of similarity. There’s a sketch of how you might cluster with it below.
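    To make the cost concrete, here’s a rough sketch of the kind of clustering step I mean. The zlib compressor, the average linkage, and the 0.5 cutoff are all arbitrary illustrative choices, not something from the paper:

```python
import zlib
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform


def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance, with zlib standing in for the ideal compressor.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    return (len(zlib.compress(x + y)) - min(cx, cy)) / max(cx, cy)


def cluster_by_ncd(blobs: list[bytes], threshold: float = 0.5) -> np.ndarray:
    """Group byte blobs (e.g. serialized images) whose pairwise NCD is small.

    The O(n^2) pairwise compression is the expensive part mentioned above.
    """
    n = len(blobs)
    dist = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        dist[i, j] = dist[j, i] = ncd(blobs[i], blobs[j])
    # Average-linkage hierarchical clustering on the precomputed distances,
    # cut into flat clusters at `threshold`.
    tree = linkage(squareform(dist), method="average")
    return fcluster(tree, t=threshold, criterion="distance")


blobs = [b"aaaa bbbb cccc " * 50, b"aaaa bbbb cccd " * 50, b"zzzz 12345 " * 50]
print(cluster_by_ncd(blobs))  # e.g. [1 1 2]: the first two blobs end up together
```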
















  • Look at the top-level comment by the user, lurch. If I’m understanding him correctly, a reboot should fix it in case that happens. Generally you need to run the dd command to brick stuff in the way you’re imagining. It’s short for either disk duplicator or disk destroyer (if you fuck up). I suspect the cdrecord utility would prevent you from doing anything too stupid by accident.