Were you thinking about something like this?
**Tip:** The default source can be referred to as `@DEFAULT_SOURCE@` in commands, for example: `$ pactl set-source-mute @DEFAULT_SOURCE@ toggle`
Default sink:
@DEFAULT_SINK@
Most translators aren’t perfect; they generally cannot understand context.
OP’s feedback can be valuable.
What if they don’t have time? What if they don’t want to read a 10-page EULA? It is their choice, but they most likely don’t know what they are accepting. You know what this means, so you have the power to do something about it (if it is reasonable).
Do you think other people deserve this?
You can export to PDF, and the text is searchable (in Firefox with Ctrl+F).
If you live in Europe or Asia (I think), then probably not.
Thank you for the positivity. I’m trying to help Lemmy grow by posting, but I think reposting from other sites is not the way forward. Lemmy should be Lemmy, not Reddit.
PineTime - Pine64
Idk, I’m struggling to come up with original content; thought this was funny.
I just think it is funny that somebody would try to sue over a “not well lit room”.
Not surprised it is a printer
I did manage to write a back-propagation algorithm, but at this point I don’t fully understand the math behind back-propagation. Generally, back-propagation algorithms take the activation and calculate the delta(?) from the activation and the target output (only on the last layer). I don’t know where tokens come in. From your comment, it sounds like it has something to do with an unsupervised learning network. I am also not a professional; sorry if I didn’t really understand your comment.
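The last-layer delta I mean looks roughly like this, a minimal NumPy sketch assuming sigmoid activations and mean-squared-error loss (the `z` and `target` values are made-up examples):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical pre-activations of the last layer and the target output.
z = np.array([0.5, -1.2, 2.0])
target = np.array([1.0, 0.0, 1.0])

activation = sigmoid(z)

# Output-layer delta for MSE loss with sigmoid:
# delta = (activation - target) * sigmoid'(z),
# where sigmoid'(z) = activation * (1 - activation).
delta = (activation - target) * activation * (1.0 - activation)
```

The delta is then multiplied by the previous layer’s activations to get the weight gradients, and propagated backwards through the hidden layers.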
I have experience in creating supervised learning networks (not large language models). I don’t know what tokens are; I assume they are output nodes. In that case, I don’t think increasing the number of output nodes makes the AI a lot more intelligent. You could measure confidence with the output nodes if they are designed accordingly (one node corresponds to one word, and confidence can be measured with the output strength). AIs are popular because they can overcome unknown circumstances (in most cases), like when you input a question in a slightly different way.
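What I mean by measuring confidence from output strength, as a rough sketch (the words and raw output values are invented; softmax just normalizes the strengths into something probability-like):

```python
import numpy as np

# Hypothetical: one output node per word, with made-up raw strengths.
words = ["cat", "dog", "bird"]
outputs = np.array([2.0, 0.5, 0.1])

# Softmax turns raw node strengths into normalized confidences.
exp = np.exp(outputs - outputs.max())  # subtract max for numerical stability
confidence = exp / exp.sum()

# The predicted word is the node with the highest confidence.
best = words[int(np.argmax(confidence))]
```

The strongest node wins, and the gap between its confidence and the others gives a crude measure of how sure the network is.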
I agree with you that AI has a problem understanding the meaning of words. The AI’s correct answers happened to be correct because the order of the words (the output) happened to match the order of the correct answer’s words. I think “hallucinations” happen when there is no sufficient answer to the given problem, so the AI gives an answer from a few random contexts pieced together in the most likely order. I think you have a mostly good understanding of how AIs work.
There were multiple videos covering this plane crash, here is one:
Cessna Engine Failure and Ditching in Ocean, Filmed From Inside (HD)