embedding-shape
a day ago

latexr
a day ago

If you don’t know the answer to a question, retrying multiple times only serves to amplify your bias; you have no basis for knowing the answer is correct.
zamadatix
a day ago

It's a bit of a crutch for LLMs lacking the ability to just say "I'm not sure", because doing so goes against how they are rewarded in training.
oivey
21 hours ago

observationist

21 hours ago
21 hours agoIf it's out of distribution, you're more likely to get a chaotic distribution around the answer to a question, whereas if it's just not known well, you'll get a normal distribution, with a flatter slope the less well modeled a concept is.
There are all sorts of techniques and methods you can use to get a probabilistically valid assessment of outputs from LLMs, they're just expensive and/or tedious.
Repeated sampling gives you the basis to make a Bayesian model of the output, and you can even work out rigorous numbers specific to the model and your prompt framework by sampling things you know the model has in distribution and comparing the curves against your test case, giving you a measure of relative certainty.
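A minimal sketch of the repeated-sampling idea: ask the same question many times, then use the spread of the answers (majority fraction and normalized entropy) as a crude confidence signal. The `ask` callables here are stand-ins for real model calls, and the tight vs. scattered simulated distributions are assumptions used purely for illustration, not actual model behavior.

```python
import math
import random
from collections import Counter

def answer_entropy(answers):
    """Normalized Shannon entropy of sampled answers.
    0.0 = every sample agreed; 1.0 = answers uniformly scattered."""
    counts = Counter(answers)
    if len(counts) == 1:
        return 0.0
    n = len(answers)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts))

def sample_confidence(ask, prompt, k=10):
    """Ask the same prompt k times; return the majority answer,
    its fraction of the samples, and the answers' entropy."""
    answers = [ask(prompt) for _ in range(k)]
    top, top_count = Counter(answers).most_common(1)[0]
    return top, top_count / k, answer_entropy(answers)

rng = random.Random(0)
# Simulated "in-distribution" model: answers cluster tightly.
in_dist = lambda _: rng.choices(["42", "41"], weights=[9, 1])[0]
# Simulated "out-of-distribution" model: answers scatter chaotically.
ood = lambda _: rng.choice(["7", "13", "42", "99", "0"])

ans, conf, ent = sample_confidence(in_dist, "q", k=20)
ans2, conf2, ent2 = sample_confidence(ood, "q", k=20)
print(ans, conf, ent)
print(ans2, conf2, ent2)
```

Comparing these curves against a question you know is in distribution is what gives you the *relative* certainty the comment describes: `conf` stays high and `ent` low for the well-modeled case, while the scattered case drifts toward high entropy.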
embedding-shape
a day ago

I'm asking for the sake of reproducibility, and to clarify whether they ran the text-by-chance generator more than once, to make sure they didn't just hit one bad case out of ten, since they only tested it once.