mentalgear · a day ago

Without this [fully open approach], I don't see how we (the general population) can maintain any control over - or even an understanding of - these ever larger and more opaque LLM-based long-inference "AI" systems.

Without transparency, Big Tech, autocrats, and eventually the "AI" itself (whether "self-aware" or not) will do whatever they like with us.

Imustaskforhelp · 11 hours ago

I think that although there is a lot the public can do, raising awareness about these issues could be valuable as well.
Y_Y · a day ago
a day ago> Giraffes are not kosher because they do not chew their cud, even though they have split hooves. Both requirements must be satisfied for an animal to be permissible.
HN will have removed the extraneous emojis.
This is at odds with my interpretation of giraffe anatomy and behaviour and of Talmudic law.
Luckily old sycophant GPT5.1 agrees with me:
> Yes. They have split hooves and chew cud, so they meet the anatomical criteria. Ritual slaughter is technically feasible though impractical.
silviot · a day ago

Above the response it says:
> Documents from the training data that have exact text matches with the model response. Powered by infini-gram
so, if I understand correctly, it searches the training data for matches in the LLM output. This is not traceability in my opinion. This is an attempt at guessing.
Checking individual sources, I got texts completely unrelated to the question/answer that merely happen to share an n-gram [1] (I saw sequences of up to 6 words) with the LLM answer.
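To see why shared n-grams are weak evidence, here is a toy sketch (my own illustration with made-up document names, not infini-gram's actual algorithm): a naive exact-match index will happily "link" a response to any document that shares a single stock phrase.

    from collections import defaultdict

    def ngrams(tokens, n):
        # Every contiguous n-token window of the token list.
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def exact_match_docs(response_tokens, corpus, n=6):
        # Index every n-gram of every corpus document,
        index = defaultdict(set)
        for doc_id, text in corpus.items():
            for gram in ngrams(text.split(), n):
                index[gram].add(doc_id)
        # then report any document that shares a window with the response.
        hits = defaultdict(list)
        for gram in ngrams(response_tokens, n):
            for doc_id in index.get(gram, ()):
                hits[doc_id].append(" ".join(gram))
        return dict(hits)

    corpus = {
        "kosher_faq": "both requirements must be satisfied for an animal to be permissible",
        "hoa_bylaws": "both requirements must be satisfied for an application to be approved",
    }
    response = "both requirements must be satisfied for an animal to be permissible"
    print(exact_match_docs(response.split(), corpus, n=6))
    # Both documents get flagged, because both share the window
    # "both requirements must be satisfied for" - but the HOA bylaws
    # obviously aren't where the answer came from.

A match only proves the response shares a word sequence with some training document, not that the document is the source of the answer.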
I think they're being dishonest in their presentation of what Olmo can and can't do.
nickreese · 18 hours ago

Especially in extraction tasks. It shows up as invented data, or as rationalizing around clear roadblocks.
My biggest hack so far is giving them an out named "edge_case" and telling them it is REALLY helpful if they identify edge cases. Simply renaming "fail_closed" or "dead_end" options to "edge_case", with helpful wording, causes qwen models to adhere to their prompting more closely.
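Roughly the shape of it, with invented field names (a simplified sketch, not my exact schema):

    # Two structurally identical extraction schemas; only the framing of the
    # "can't extract it" outcome differs. All field names are illustrative.

    # Framing A: an explicit failure state. Models tend to resist picking it
    # and instead invent a value so they can answer "extracted".
    schema_fail_closed = {
        "type": "object",
        "properties": {
            "invoice_total": {"type": ["number", "null"]},
            "status": {"type": "string", "enum": ["extracted", "fail_closed"]},
        },
        "required": ["invoice_total", "status"],
    }

    # Framing B: the same escape hatch renamed "edge_case", paired with prompt
    # wording like "it is REALLY helpful if you identify edge cases". The model
    # now gets to be helpful by taking the out instead of fabricating a number.
    schema_edge_case = {
        "type": "object",
        "properties": {
            "invoice_total": {"type": ["number", "null"]},
            "status": {"type": "string", "enum": ["extracted", "edge_case"]},
            "edge_case_note": {"type": "string"},
        },
        "required": ["invoice_total", "status"],
    }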
It feels like there are hundreds of these small hacks that people must have discovered... why isn't there a centralized place where people record these learnings?
robrenaud · 13 hours ago

The real open source models need a rebrand. Transparent models, perhaps?
tcsenpai · a day ago

> Thought for 1 seconds
> Okay, the user is asking "Who are u?" with a casual tone ("u" instead of "you", exclamation mark). Let me break this down.
> First, I need to address the question itself. They want to know who I am as an AI assistant. My standard response is that I'm DeepSeek R1, designed to be helpful and handle reasoning step by step. But maybe they're not familiar with AI terms in general. The "u" part might just be their way of speaking, so I should reply politely using "you" form.
> Next, considering the context. They started with an HTML tag " "
----
And it hung there without a reply. It might be an OpenWebUI bug, but it's not promising.