Whose Opinion to Follow in Multihypothesis Social Learning? A Large Deviations Perspective

We consider a multihypothesis social learning problem in which an agent has access to a set of private observations and chooses an opinion from a set of experts to incorporate into its final decision. To model individual biases, we allow the agent and experts to have general loss functions and possibly different decision spaces. We characterize the loss exponents of both the agent and experts, and provide an asymptotically optimal method for the agent to choose the best expert to follow.

We show that, up to asymptotic equivalence, the worst loss exponent for the agent is achieved when it adopts the 0-1 loss function, which assigns a loss of 0 if the true hypothesis is declared and a loss of 1 otherwise. We introduce the concept of hypothesis-loss neutrality, and show that if the agent adopts a particular policy that is hypothesis-loss neutral, then it ignores all experts whose decision spaces are smaller than its own. On the other hand, if the experts have the same decision space as the agent, it is, somewhat counter-intuitively, not necessarily optimal for the agent to choose an expert with the same loss function as its own. We derive sufficient conditions under which it is optimal for an agent with the 0-1 loss function to choose an expert with the same loss function.
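As a minimal illustration (the function names here are hypothetical, not from the paper), the 0-1 loss described above, together with a general loss function given as a table over (decision, hypothesis) pairs, can be sketched as:

```python
def zero_one_loss(decision, true_hypothesis):
    """0-1 loss: 0 if the declared hypothesis is correct, 1 otherwise."""
    return 0 if decision == true_hypothesis else 1

def table_loss(loss_table):
    """Build a general loss function from a table mapping
    (decision, true_hypothesis) pairs to nonnegative losses,
    modeling an individual bias over decisions."""
    return lambda decision, true_hypothesis: loss_table[(decision, true_hypothesis)]

# A biased expert that penalizes missing hypothesis 1 more heavily
# than missing hypothesis 0 (illustrative values only).
biased_loss = table_loss({
    (0, 0): 0.0, (0, 1): 5.0,
    (1, 0): 1.0, (1, 1): 0.0,
})
```

Under the 0-1 loss, all errors are penalized equally, whereas the table-based loss lets different agents and experts weight errors differently, which is what allows their decision spaces and preferences to diverge.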