The generalization ability of artificial neural networks (ANNs) is greatly dependent on their architectures. Constructive algorithms provide an attractive automatic way of determining a near-optimal ANN architecture for a given problem. Many such algorithms have been proposed in the literature and their effectiveness demonstrated. This research work aims at developing a new constructive algorithm (NCA) for automatically determining ANN architectures. Unlike most previous studies on determining ANN architectures, NCA puts emphasis on both architectural adaptation and functional adaptation in its architecture determination process. It uses a constructive approach to determine the number of hidden layers in an ANN and the number of neurons in each hidden layer. To achieve functional adaptation, NCA trains the hidden neurons of the ANN on different training sets, created by employing a concept similar to that used in boosting algorithms. The purpose of using different training sets is to encourage hidden neurons to learn different parts or aspects of the training data, so that the ANN as a whole can learn the entire training data better. The convergence and computational issues of NCA are analytically studied in this research. Experimental results show that NCA can produce ANN architectures with fewer hidden neurons and better generalization ability than existing constructive and nonconstructive algorithms.
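The general idea of growing a network neuron by neuron, with each new hidden neuron trained on a boosting-style reweighted copy of the training data, can be sketched as below. This is an illustrative sketch only, not NCA itself: the single hidden layer, the weighted-squared-error neuron training, the least-squares output layer, and the exponential reweighting rule are all assumptions chosen for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_neuron(X, y, sample_w, epochs=200, lr=0.5, rng=None):
    """Fit one sigmoid neuron to targets y by gradient descent on a
    per-example weighted squared error (the weighting is what lets
    later neurons focus on examples earlier neurons fit poorly)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        out = sigmoid(X @ w + b)
        grad = sample_w * (out - y) * out * (1 - out)
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def constructive_fit(X, y, max_hidden=5, tol=0.05, rng=None):
    """Grow hidden neurons one at a time; each new neuron sees a
    reweighted training set that emphasizes currently misfit examples
    (a boosting-like scheme, assumed here for illustration)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(y)
    sample_w = np.ones(n)              # uniform importance at the start
    hidden = []                        # list of (w, b) per hidden neuron
    for _ in range(max_hidden):
        hidden.append(train_neuron(X, y, sample_w, rng=rng))
        # Output layer: least-squares readout over all hidden activations
        H = np.column_stack([sigmoid(X @ wi + bi) for wi, bi in hidden])
        H1 = np.column_stack([H, np.ones(n)])
        beta, *_ = np.linalg.lstsq(H1, y, rcond=None)
        err = np.abs(H1 @ beta - y)
        if err.mean() < tol:           # stop growing once the fit is good
            break
        # Boosting-style update: up-weight poorly fitted examples
        sample_w = sample_w * np.exp(err)
        sample_w = sample_w * n / sample_w.sum()
    return hidden, beta

def predict(X, hidden, beta):
    H = np.column_stack([sigmoid(X @ wi + bi) for wi, bi in hidden])
    H1 = np.column_stack([H, np.ones(len(X))])
    return H1 @ beta
```

The key design point mirrored from the abstract is that architecture growth (adding neurons) and functional specialization (each neuron seeing a differently weighted training set) are interleaved in a single loop, with a stopping criterion deciding when the architecture is large enough.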