Artificial societies, i.e., distributed systems of autonomous agents, are becoming increasingly important in open distributed environments, especially in e-commerce. Agents require trust and reputation concepts in order to identify communities of agents with which to interact reliably. We have observed in real environments that adversaries tend to focus on exploiting the trust and reputation model itself. These vulnerabilities reinforce the need for a new evaluation criterion for trust and reputation models, called exploitation resistance, which reflects the ability of a trust model to remain unaffected by agents who try to manipulate it. To examine whether a given trust and reputation model is exploitation resistant, researchers require a flexible, easy-to-use, and general framework. Such a framework should make it possible to specify heterogeneous agents with different trust models and behaviors. This paper introduces the Distributed Analysis of Reputation and Trust (DART) framework, whose environment is decentralized and game-theoretic. The proposed environment model is not only compatible with the characteristics of open distributed systems but also allows agents to engage in different types of interactions. Besides direct, witness, and introduction interactions, agents in our environment model can engage in reporting interactions, which represent a decentralized reporting mechanism in distributed environments. The environment model also provides metrics at both the micro and macro levels for analyzing the implemented trust and reputation models. Using DART, researchers have empirically demonstrated the vulnerability of well-known trust models to both individual and group attacks.
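To make the four interaction types concrete, the following is a minimal sketch of how a DART-style environment might represent them. All names here (InteractionType, Interaction, Agent.update, and the smoothing weight) are illustrative assumptions rather than the framework's actual API, and the exponential-smoothing update merely stands in for a real trust model.

```python
# Illustrative sketch only: these class and method names are hypothetical
# assumptions and are not taken from the actual DART framework.
from dataclasses import dataclass, field
from enum import Enum, auto


class InteractionType(Enum):
    """The four interaction types described in the abstract."""
    DIRECT = auto()        # agent transacts with a partner and observes the outcome
    WITNESS = auto()       # agent asks a third party for its opinion of a partner
    INTRODUCTION = auto()  # agent is referred to a previously unknown partner
    REPORTING = auto()     # agent publishes an outcome to a decentralized reporting mechanism


@dataclass
class Interaction:
    kind: InteractionType
    source: str     # id of the initiating agent
    target: str     # id of the agent being interacted with or rated
    payload: float  # observed outcome or reported rating in [0, 1]


@dataclass
class Agent:
    """A minimal agent that keeps a per-partner trust estimate."""
    agent_id: str
    trust: dict = field(default_factory=dict)  # partner id -> trust in [0, 1]

    def update(self, interaction: Interaction, weight: float = 0.3) -> None:
        # Simple exponential smoothing stands in for a real trust model;
        # a heterogeneous population would plug in different update rules here.
        prior = self.trust.get(interaction.target, 0.5)
        self.trust[interaction.target] = (1 - weight) * prior + weight * interaction.payload


if __name__ == "__main__":
    alice = Agent("alice")
    # A direct interaction with "bob" that went well, followed by a
    # negative witness report about "bob" from a third party.
    alice.update(Interaction(InteractionType.DIRECT, "alice", "bob", 0.9))
    alice.update(Interaction(InteractionType.WITNESS, "carol", "bob", 0.2))
    print(alice.trust["bob"])  # smoothed estimate between the two observations
```

Separating the interaction type from the trust-update rule in this way is one plausible reading of the abstract: it lets heterogeneous agents with different trust models process the same stream of direct, witness, introduction, and reporting interactions, which is what an exploitation-resistance analysis needs to compare them.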