We present an improvement of backpropagation (BP) learning for Sigma-Pi networks based on adaptive correction of the learning parameters (ACL). Improved convergence is achieved by using the information value, the change of the output error, and the validity of Funahashi's theorem to determine analytically, in each learning step, values for the learning parameters momentum, learning rate, and learning motivation. Applied to neural-network approximation of continuous input-output mappings with high accuracy, the method yields very good results: ACL BP learning requires fewer training periods than other BP-based learning rules.
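To illustrate the general idea of adapting learning parameters from the change of the output error, the following is a minimal sketch in Python. It uses a simple bold-driver-style heuristic (grow the learning rate while the error falls, shrink it and discard momentum when the error rises) as a hypothetical stand-in; the paper's actual ACL rule determines the parameter values analytically, and the function name, constants, and the quadratic stand-in task are assumptions for illustration only.

```python
import numpy as np

def error_driven_step(w, grad, velocity, lr, mu, prev_err, err,
                      lr_up=1.05, lr_down=0.5):
    """One gradient step with error-driven adaptation of the learning rate.

    Hypothetical bold-driver-style heuristic, not the paper's analytic
    ACL rule: if the output error decreased, grow the learning rate;
    if it increased, shrink it and cancel the momentum term.
    """
    if err <= prev_err:
        lr *= lr_up                          # error fell: accelerate
    else:
        lr *= lr_down                        # error rose: decelerate
        velocity = np.zeros_like(velocity)   # discard stale momentum
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity, lr

# Usage: minimise a quadratic f(w) = 0.5 * ||w||^2 as a stand-in
# for the network's output error; its gradient is simply w.
w = np.array([2.0, -1.5])
v = np.zeros_like(w)
lr, mu = 0.1, 0.5
prev_err = np.inf
for _ in range(300):
    err = 0.5 * float(w @ w)
    w, v, lr = error_driven_step(w, w, v, lr, mu, prev_err, err)
    prev_err = err
# After the loop, w is close to the minimum at the origin.
```

The per-step adaptation is what distinguishes this family of rules from plain BP with fixed momentum and learning rate; ACL replaces the heuristic multipliers used here with analytically derived values.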