“The more context you give an LLM, the better it performs.” That’s what we thought anyway.

Tencent’s HY Research just dropped a paper that says maybe not. In-context learning, the whole “here are some examples, figure out the pattern” thing, turns out to be a lot messier than the hype suggested.

The paper looks at how LLMs actually learn from in-context examples versus how we assumed they would. The gap between “should work in theory” and “works in practice” is apparently pretty wide.

Look, in-context learning was always oversold. People treated it like you could just dump a few examples and the model would magically get it. But that’s not how it shakes out. Performance is inconsistent. It varies by model. Sometimes adding more examples makes things worse.
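
If you want to see this for yourself, the experiment is cheap to sketch. The snippet below is a minimal, hypothetical harness (call_llm, build_prompt, and accuracy_at_k are made-up names; swap in your own model client and labeled data): vary the number of in-context examples and check whether accuracy actually climbs.

```python
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your model client (OpenAI, Anthropic, a local model)."""
    raise NotImplementedError

def build_prompt(examples, query):
    """Classic few-shot format: labeled examples first, then the unlabeled query."""
    shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nLabel:"

def accuracy_at_k(train, test, k, seed=0):
    """Accuracy on `test` when the prompt carries k examples sampled from `train`."""
    shots = random.Random(seed).sample(train, k)
    correct = sum(
        call_llm(build_prompt(shots, query)).strip() == gold
        for query, gold in test
    )
    return correct / len(test)

# If "more context is always better" held, accuracy_at_k would climb
# monotonically with k. In practice the curve is often noisy and can dip.
```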

This isn’t a knock on LLMs; they’re still genuinely useful. But the narrative that context is a free lunch? That one needs to die.

The real takeaway: if you’re building something that depends on consistent behavior, don’t lean too hard on in-context magic. Fine-tuning or RAG is probably your friend.
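
For the RAG route, the shape of the fix is to ground each query in retrieved text instead of hoping a fixed set of examples carries the pattern. A minimal sketch, assuming a hypothetical retrieve() over your own document store and the same kind of stand-in call_llm() as above:

```python
def retrieve(query: str, k: int = 3) -> list[str]:
    """Hypothetical: return the k most relevant documents from your own store."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Hypothetical: send the prompt to whatever model you use."""
    raise NotImplementedError

def answer(query: str) -> str:
    # Ground the model in retrieved context rather than relying on it to
    # infer the task from a handful of static examples.
    context = "\n\n".join(retrieve(query))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return call_llm(prompt)
```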


Source: Hacker News