In ways analogous to humans, autonomous agents require concepts of trust and reputation in order to identify communities of agents with which they can interact reliably. This paper defines a class of attacks, called witness-based collusion attacks, designed to exploit trust and reputation models. Empirical results demonstrate that unidimensional trust models are vulnerable to witness-based collusion attacks in ways that independent multidimensional trust models are not. The paper analyzes the impact of the proportion of witness-based colluding agents on the society. Furthermore, it demonstrates that witness interaction trust is needed to detect colluding agents, in addition to direct interaction trust for detecting malicious agents. By proposing a set of policies, the paper shows how learning agents can reduce their level of encounter risk in a witness-based collusive society.
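
To make the distinction between the two trust dimensions concrete, the following Python sketch keeps direct interaction trust and witness interaction trust as independent values, as an "independent multidimensional" model would. It is an illustration under assumed update rules (simple exponential smoothing with a made-up learning rate and thresholds), not the model or parameters used in the paper.

```python
from dataclasses import dataclass

@dataclass
class MultidimensionalTrust:
    """Two independent trust dimensions held about one acquaintance."""
    direct: float = 0.5   # trust in the agent as an interaction partner
    witness: float = 0.5  # trust in the agent as a witness (referrer)
    rate: float = 0.1     # exponential-smoothing learning rate (assumed)

    def update_direct(self, outcome: float) -> None:
        # outcome in [0, 1]: observed quality of a direct interaction
        self.direct += self.rate * (outcome - self.direct)

    def update_witness(self, accuracy: float) -> None:
        # accuracy in [0, 1]: how well this agent's testimony about a
        # third party matched our own later experience; a colluding
        # witness that inflates its partners' reputations loses
        # witness trust even while its direct behaviour looks benign
        self.witness += self.rate * (accuracy - self.witness)
```

A unidimensional model would fold both signals into one score, letting a colluder buy credibility as a witness through benign direct interactions; keeping the dimensions separate blocks that route, e.g.:

```python
t = MultidimensionalTrust()
t.update_direct(0.9)    # cooperative direct encounter: direct trust rises
t.update_witness(0.1)   # misleading referral: witness trust falls
use_as_witness = t.witness >= 0.6   # hypothetical policy threshold
```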