• 0 Posts
  • 48 Comments
Joined 1 year ago
Cake day: June 26th, 2023

  • The issue with Sonnet 3.5, in my limited testing, is that even with explicit, specific, and direct prompting, it can’t perform anywhere near human ability, and will often make very stupid mistakes. I developed a program which essentially lets an AI program, rewrite, and test a game, but Sonnet will consistently take lazy routes, use incorrect syntax, and repeatedly call the same function over and over again for no reason. If you can program the game yourself, it’s a quick way to prototype, but unless you know how to properly format JSON and fix strange artefacts, it’s just not there yet.






  • This might be happening because of the ‘elegant’ (incredibly hacky) way OpenAI encodes multiple languages into their models. Instead of using all character sets, they use a modulo operator on each character to make all Unicode characters representable by a small range of values. On the back end, it somehow detects which language is being spoken, and uses that character set for the response. Seeing as the last line seems to be the same mathematical expression as what you asked, my guess is that your equation just happened to perfectly match some sentence that would make sense in the weird language.
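    A toy sketch of the folding scheme described above (this is the commenter’s guess, not OpenAI’s actual tokenizer; the range size and base-offset recovery are purely illustrative):

    ```python
    # Fold every Unicode code point into a small range with modulo,
    # then recover characters by adding back a per-language base offset.
    # All of this is a hypothetical illustration of the comment above.

    RANGE = 128  # hypothetical size of the folded value range

    def fold(text):
        """Collapse each character's code point into [0, RANGE)."""
        return [ord(c) % RANGE for c in text]

    def unfold(values, base):
        """Reconstruct characters, assuming a known per-language base offset."""
        return "".join(chr(base + v) for v in values)

    # ASCII survives a round trip because its code points already fit:
    print(unfold(fold("hello"), 0))  # -> hello

    # But distinct characters collide after folding, which is the kind of
    # ambiguity that could produce a response in the wrong language:
    print(fold("a"), fold(chr(ord("a") + RANGE)))  # same folded value
    ```

    The collision in the last line is the failure mode the comment is guessing at: once two characters fold to the same value, only the back-end language guess decides which one you get back.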


  • I don’t know about that guy, but I used to have a speech impediment that meant I couldn’t pronounce the letter R. I went to several speech therapists, so I started to enunciate every other letter, but that made people think I had a British accent. Anyway, I eventually learned how to say R, so now I have a speech impediment that makes me sound like a British person doing a fake American accent.


  • stingpie@lemmy.world to Programmer Humor@lemmy.ml · True Story
    3 months ago

    If C++/C were real languages for real programming they’d enforce unreadability in the compiler.

    No sane language designer would say “It is imperative that you write the most unreadable code possible” then write a compiler that says “oh your code doesn’t triple dereference pointers? lol lmao that rocks”

    They have played you all for fools.



  • Yeah. If you interpret the Bible in a much more metaphorical way, it has a lot more internal consistency than the literal interpretation. Like demons don’t make sense literally. If a demon/devil compels you to do something bad, it’s not your fault if you do it. Instead, if demons are more like temptations, it makes perfect sense; you can be blamed for your lack of willpower / desire to do evil.

    It wraps everything up so nicely, I am surprised that it isn’t more common.


  • Recently, I’ve just given up trying to use CUDA for machine learning. Instead, I’ve been using (relatively) CPU-intensive activation functions & architecture to make up the difference. It hasn’t worked, but I can at least consistently inch forward.





  • The word “have” is used in two different ways. One way is to own or hold something, so if I’m holding a pencil, I have it. But another way is to signal different tenses (as in grammatical tense), so you can say “I shouldn’t have done it” or “they have tried it before.” The contraction “'ve” is only used for tense, not for owning something. So, the phrase “they’ve it” is grammatically incorrect.




  • Let’s play a little game, then. We both give each other descriptions of the projects we made, and we each try to build the other’s project based on what we can get out of ChatGPT. We send each other the chat log after a week or something. I’ll start: the hierarchical multiscale LSTM is a stacked LSTM where the layer below returns a boundary state which, if true, causes the layer above it to update. The final layer is another LSTM that takes the hidden state from every layer and returns a final hidden state as an embedding of the whole input sequence.

    I can’t do this myself, because that would break OpenAI’s terms of service, but if you make a model that won’t develop into anything, that’s fine. Now, what does your framework do?

    Here’s the paper I referenced while implementing it: https://arxiv.org/abs/1807.03595
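    The control flow described above can be sketched in pure Python, with a toy `tanh` cell standing in for each LSTM cell (the cell, the boundary rule, and all names here are illustrative stand-ins; see the paper linked above for the real gating):

    ```python
    # Minimal sketch of a boundary-gated stack: layer i+1 only updates
    # when layer i's boundary signal fires. Toy cells, not real LSTMs.
    import math

    def toy_cell(x, h, w=0.5, u=0.3):
        """Stand-in for an LSTM cell: returns a new hidden state."""
        return math.tanh(w * x + u * h)

    def hm_stack_step(x, hiddens, threshold=0.5):
        """One timestep. The bottom layer always updates; each higher
        layer updates only if the layer below it raised a boundary
        (modelled here as the new hidden state crossing a threshold)."""
        new = list(hiddens)
        new[0] = toy_cell(x, hiddens[0])
        boundary = abs(new[0]) > threshold
        for i in range(1, len(hiddens)):
            if not boundary:
                break  # layer below didn't fire; layers above keep their state
            new[i] = toy_cell(new[i - 1], hiddens[i])
            boundary = abs(new[i]) > threshold
        return new

    def embed(sequence, n_layers=3):
        """Run the stack over a sequence, then fold every layer's final
        hidden state through one last cell to get a single embedding."""
        hiddens = [0.0] * n_layers
        for x in sequence:
            hiddens = hm_stack_step(x, hiddens)
        e = 0.0
        for h in hiddens:  # the "final layer" consuming every hidden state
            e = toy_cell(h, e)
        return e

    print(embed([0.1, 0.9, -0.4, 2.0]))
    ```

    The point of the sketch is the data dependency ChatGPT kept getting wrong: upper layers are gated by the boundary state from below, rather than updating unconditionally like a plain stacked LSTM.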


  • Sorry that my personal experience with ChatGPT is ‘wrong.’ If you feel the need to insult everyone who disagrees with you, that seems like a better indication of your ability to communicate than mine. Furthermore, I think we’re talking about different levels of novelty. You haven’t told me the exact nature of the framework you developed, but the things I’ve tried to use ChatGPT for never turn out too well. I do a lot of ML research, and ChatGPT simply doesn’t have the flexibility to help. I was implementing a hierarchical multiscale LSTM, and no matter what I tried, ChatGPT kept getting mixed up and implementing more popular models. ChatGPT, due to the way it learns, can only reliably interpolate between the excerpts of text it’s been trained on. So I don’t doubt ChatGPT was useful for designing your framework, since it is likely similar to other existing frameworks, but for my needs it simply does not work.