• 0 Posts
  • 446 Comments
Joined 2 years ago
Cake day: June 23rd, 2023





  • Tough question.

    1. Lucky Star. Konata is such an iconic character, it’s pretty hard to compete with her immense otaku power. Also, the show is genuinely quite funny, and the live action endings / karaoke are so creative. It feels like everyone involved had a ton of fun making the show. It’s like a big in-joke that you’re invited to be part of. Ultimately, though, it’s that the character designs are all just so likeable.

    2. Azumanga Daioh. It’s kinda the OG, and really popularised the genre. There’s lots of silliness, but they always commit to the bit, and it just works.

    3. Full Metal Panic: Fumoffo. Legit one of the funniest anime I recall ever having seen. That said, it was a long time ago, so I don’t remember too much else about it other than laughing a LOT.





  • enkers@sh.itjust.works to Technology@lemmy.world · “I am disappointed in the AI discourse” · 27 days ago

    Appreciate the correction. Happen to know of any whitepapers or articles I could read on it?

    Here’s the thing, I went out of my way to say I don’t know shit from bananas in this context, and I could very well be wrong. But the article certainly doesn’t sufficiently demonstrate why it’s right.

    Most technical articles I click on go through a step-by-step process to show how they gained their understanding of the subject material, laid out in a manner that less technical people can still follow. The payoff is that you come out feeling you understand a little more than when you went in.

    This article is just full on “trust me bro”. I went in with a mediocre understanding, and came out about the same, but with a nasty taste in my mouth. Nothing of value was learned.


  • enkers@sh.itjust.works to Technology@lemmy.world · “I am disappointed in the AI discourse” · 27 days ago

    I’ll preface this by saying I’m not an expert, and I don’t like to speak authoritatively on things that I’m not an expert in, so it’s possible I’m mistaken. Also I’ve had a drink or two, so that’s not helping, but here we go anyways.

    In the article, the author quips on a tweet, and in doing so seems to fundamentally misunderstand how LLMs work:

    I tabbed over to another tab, and the top post on my Bluesky feed was something along these lines:

    ChatGPT is not a search engine. It does not scan the web for information. You cannot use it as a search engine. LLMs only generate statistically likely sentences.

    The thing is… ChatGPT was over there, in the other tab, searching the web. And the answer I got was pretty good.

    The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It’s not what we would generally consider a true index-based search.

    Training an LLM is a costly and time-consuming process, so it’s fundamentally impossible to regenerate one on anything like the timescale it takes to build or update a simple index.
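    To make that distinction concrete, here’s a toy sketch (my own example, not from the article) of what an index-based search actually is: documents are tokenized into an inverted index once, and adding a new document is a cheap, incremental update. An LLM, by contrast, would need to be retrained before it could “know” the new document.

```python
# Toy inverted index, illustrating what "index-based search" means here.
from collections import defaultdict

index = defaultdict(set)   # term -> set of document ids
docs = {}                  # document id -> original text

def add_document(doc_id, text):
    # Incremental update: new documents become searchable immediately,
    # with no retraining step involved.
    docs[doc_id] = text
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    terms = query.lower().split()
    if not terms:
        return []
    hits = index[terms[0]].copy()
    for t in terms[1:]:
        hits &= index[t]   # keep only documents containing every term
    return [docs[d] for d in hits]

add_document(1, "LLMs only generate statistically likely sentences")
add_document(2, "a search engine scans an index of the web")
print(search("index web"))   # -> ['a search engine scans an index of the web']
```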

    The author fails to address any of these issues, which suggests to me that they don’t know what they’re talking about.

    I suppose I could concede that an LLM can fulfill a similar role to the one a search engine traditionally has, but it’d kinda be like saying that a toaster is an oven. They’re both confined boxes which heat food, but good luck trying to bake 2 pies at once in a toaster.






  • I have a hard time considering something that has an immutable state as sentient, but since there’s no real definition of sentience, that’s a personal decision.

    Technical challenges aside, there’s no explicit reason that LLMs can’t do self-reinforcement of their own models.
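    As a very rough illustration of the shape I mean (a toy sketch, with made-up stand-in functions rather than any real training API): the model periodically generates outputs, scores them, and fine-tunes itself on the ones it prefers.

```python
# Toy self-reinforcement loop. Every name here is a hypothetical stand-in,
# not a real library API; only the shape of the loop matters.
import random

def generate(model, prompt):
    # stand-in for sampling a completion from the model
    return prompt + " " + random.choice(model["phrases"])

def score(output):
    # stand-in for a reward model or preference heuristic
    return len(output)

def fine_tune(model, preferred):
    # stand-in for a gradient update: keep only the preferred behaviour
    model["phrases"] = [p.split(" ", 1)[1] for p in preferred]

model = {"phrases": ["ok", "a longer, more considered answer"]}
for _ in range(3):
    candidates = [generate(model, "Q:") for _ in range(8)]
    preferred = sorted(candidates, key=score, reverse=True)[:2]
    fine_tune(model, preferred)   # the model reinforces its own best outputs
```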

    I think animal brains are also “fairly” deterministic, but their behaviour also depends on the presence of various neurotransmitters, so there’s a temporal/contextual element to it: situationally, our emotions can affect our thoughts, which is something LLMs don’t really have either.

    I guess it’d be possible to forward-feed an “emotional state” as part of the LLM’s context to emulate that sort of animal brain behaviour.
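    Something like this sketch is what I have in mind (entirely hypothetical; call_llm is a placeholder for whatever model API you’d actually use): the “emotional state” is updated after each exchange and prepended to the next prompt, so it persists across turns a bit like neurotransmitter levels persist across moments.

```python
# Sketch of forward-feeding an "emotional state" through an LLM's context.
# call_llm is a hypothetical placeholder, not a real API.
def call_llm(prompt):
    return "(model reply to: ..." + prompt[-40:] + ")"

emotional_state = {"frustration": 0.1, "curiosity": 0.8}

def respond(user_message):
    state_line = ", ".join(f"{k}={v:.1f}" for k, v in emotional_state.items())
    prompt = (
        f"[current emotional state: {state_line}]\n"
        f"User: {user_message}\nAssistant:"
    )
    reply = call_llm(prompt)
    # Toy update rule: the state drifts based on the exchange and is
    # carried forward into the next prompt.
    emotional_state["curiosity"] = max(0.0, emotional_state["curiosity"] - 0.1)
    emotional_state["frustration"] = min(1.0, emotional_state["frustration"] + 0.05)
    return reply

print(respond("Why don’t LLMs have moods?"))
```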