Artificial societies (distributed systems of autonomous agents) are becoming increasingly important in e-commerce. Agents base their decisions on trust and reputation in ways analogous to human societies. Many definitions of trust and reputation have been proposed, drawing on a variety of information sources; however, system designs have tended to concentrate on direct interactions. Furthermore, trust-updating schemes for direct interactions have tended to decouple the updates applied for positive and negative feedback. Consequently, an agent that follows each run of positive feedback with a single negative interaction can remain undetected as untrustworthy. This con-man style of behaviour is formally described, and desirable characteristics of con-resistant trust schemes are identified. A con-resistant scheme is then proposed and compared with FIRE, Regret and Yu and Singh's model. Simulation experiments demonstrate the utility of the con-resistant scheme.
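The failure mode described above can be illustrated with a minimal sketch. This is not the paper's model, nor FIRE, Regret, or Yu and Singh's scheme: the update rule, the `alpha`/`beta` learning rates, and the interaction patterns are assumptions chosen purely for illustration. Because positive and negative feedback are handled by independent rules with no memory of the alternating pattern, an agent that defects once after every few cooperations keeps its trust value well above neutral.

```python
# Illustrative toy model (assumed, not the paper's scheme): positive and
# negative feedback update trust independently, so a "con-man" that mixes
# frequent cooperation with occasional defection stays trusted.

def update(trust, outcome, alpha=0.1, beta=0.2):
    """Uncoupled update on trust in [-1, 1]: one rate for positive feedback,
    another for negative; no record of the pattern of past outcomes."""
    if outcome > 0:
        return trust + alpha * (1.0 - trust)        # move toward full trust
    return trust - beta * (trust + 1.0) / 2.0       # move toward distrust

def run(pattern, cycles=20):
    """Repeat an interaction pattern (+1 cooperate, -1 defect) and return final trust."""
    trust = 0.0  # neutral starting trust
    for _ in range(cycles):
        for outcome in pattern:
            trust = update(trust, outcome)
    return trust

if __name__ == "__main__":
    honest = run([+1])                   # always cooperates
    con_man = run([+1, +1, +1, +1, -1])  # four cooperations, then one defection
    print(f"honest agent trust:  {honest:.3f}")
    print(f"con-man agent trust: {con_man:.3f}")   # settles around 0.5, far from distrust
```

Under these assumed parameters the con-man's trust converges to roughly 0.5, close enough to the honest agent's value that the periodic defections go unpunished; a con-resistant scheme would instead react to the repeated positive-then-negative cycle itself.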