Pattern classification systems are commonly used in adversarial applications, such as biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. Extending pattern classification [1] theory and design methods to adversarial settings is therefore a novel and very relevant research direction, which has not yet been pursued in a systematic way. We address one of the main open issues: evaluating at design phase the security of pattern classifiers, namely, the performance degradation they may incur under potential attacks during operation. We propose an algorithm for the generation of training and testing sets to be used for security evaluation, and develop a framework for the empirical evaluation of classifier security at design phase that extends the model selection and performance evaluation steps of the classical design cycle. Our framework formalizes and generalizes the main ideas proposed in the literature, and we give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier's behavior in adversarial environments, and lead to improved design choices.
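The core idea of design-phase security evaluation, measuring how performance degrades when test data is manipulated by an attacker, can be sketched with a toy spam filter. The classifier, word lists, and "good word" evasion attack below are illustrative assumptions, not the proposed framework's actual algorithm:

```python
# Hedged sketch: compare a toy spam filter's accuracy on a clean test set
# against its accuracy on an attacked test set, to quantify performance
# degradation under attack. All names and data here are hypothetical.

SPAM_WORDS = {"free", "winner", "cash", "offer"}
HAM_WORDS = {"meeting", "schedule", "report", "agenda"}

def classify_spam(message, threshold=2):
    """Flag as spam when spam-word hits outweigh ham-word hits."""
    words = message.lower().split()
    score = sum(w in SPAM_WORDS for w in words) \
          - sum(w in HAM_WORDS for w in words)
    return score >= threshold

def good_word_attack(message):
    """Simulated evasion: the attacker appends legitimate-looking words."""
    return message + " meeting schedule report"

def accuracy(samples):
    return sum(classify_spam(msg) == is_spam
               for msg, is_spam in samples) / len(samples)

# Toy test set of (message, is_spam) pairs.
test_set = [
    ("free cash offer winner", True),
    ("free winner cash now", True),
    ("project meeting tomorrow", False),
    ("please schedule the report", False),
]

clean_acc = accuracy(test_set)

# Attacked test set: only spam samples are manipulated, since the
# attacker controls the malicious data.
attacked_set = [(good_word_attack(m) if y else m, y) for m, y in test_set]
attacked_acc = accuracy(attacked_set)

print(f"accuracy on clean test set:    {clean_acc:.2f}")    # 1.00
print(f"accuracy under evasion attack: {attacked_acc:.2f}")  # 0.50
```

The gap between the two accuracies is the quantity a security evaluation targets: a classifier that looks perfect under the classical design cycle can lose half its accuracy once the adversarial scenario is simulated, which is exactly the kind of insight that should inform design choices.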