Why don't large language models seem to be good at humor?
The question is a profound one. If we figure out how we as humans produce humor, we might also learn how to supply whatever LLMs are lacking in insight and creativity. We should think deeply about this, both technically and philosophically!
My hypothesis: humor is, generally, "surprising," so anyone capable of humor must be capable of thinking outside the box (i.e., giving statistically unlikely responses). Yet those statistically unlikely responses must still be highly relevant to the topic, albeit not necessarily in the expected manner.
LLMs are currently statistical machines. Asking one to do the statistically unlikely is basically giving it a task it is fundamentally not designed to do.
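A toy sketch of why this is hard: the standard knob for making an LLM's output less predictable is sampling temperature, but raising it boosts *all* unlikely tokens indiscriminately, it cannot select the one unlikely token that is also relevant. The distribution below is invented for illustration (the token list and logits are hypothetical, not from any real model); "zeugma" stands in for the single witty-but-improbable continuation.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from a softmax over temperature-scaled logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1

# Hypothetical next-token distribution: bland continuations get high
# logits; the one genuinely witty continuation ("zeugma") gets a low one.
tokens = ["the", "a", "very", "nice", "zeugma"]
logits = [4.0, 3.5, 3.0, 2.5, 0.5]

rng = random.Random(0)
results = {}
for t in (0.7, 2.0):
    counts = {tok: 0 for tok in tokens}
    for _ in range(10_000):
        counts[tokens[sample_with_temperature(logits, t, rng)]] += 1
    results[t] = counts
print(results)
```

At low temperature the witty token almost never appears; at high temperature it appears more often, but only because everything unlikely does, including irrelevant noise. Temperature trades probability for randomness, not for relevance, which is the gap the hypothesis above points at.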
There's also an esoteric aspect to this. Many people who channel spirits say that the spirits are super funny, somewhat cheeky even. The stark contrast seems to suggest that there is a qualitative gap between statistical intelligence and spiritual intelligence. The former takes averages, while the latter makes choices.
"Choices" are "inconsistent" if you apply statistical methods on them. But they are what makes things interesting. What makes choices "not random noise"? It is the choosing of something interesting. Finding the right thing at the right moment. Synchronicity. We can't do this with statistics.
(btw, it's also interesting how women seem to use humor as a proxy for reproductive fitness as well [more so than "raw intelligence"])
When they say, the gods breathed spirit into humans... what did they actually do?