By modeling prior information as a fuzzy set and applying Zadeh's extension principle, a general approach is developed for rating linear affine estimators in linear regression. This approach is then applied to fuzzy prior information sets whose α-cuts are ellipsoids. For an important and meaningful subclass of such sets, a uniformly best linear affine estimator can be determined explicitly. Remarkably, this uniformly best linear affine estimator is also optimal with respect to a corresponding relative squared error criterion. Two illustrative special cases are discussed, in which a generalized least squares estimator and a general ridge or Kuks-Olman estimator, respectively, turn out to be uniformly best.
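
For orientation, a brief sketch of the underlying setting in generic notation (the symbols below are illustrative assumptions, not the paper's own notation): in the linear model $y = X\beta + \varepsilon$ with $\operatorname{Cov}(\varepsilon) = \sigma^2 W$, fuzzy prior information on $\beta$ may be described by ellipsoidal $\alpha$-cuts of the form
\[
  B_\alpha = \{\, \beta : \beta^\top H_\alpha \beta \le 1 \,\},
  \qquad H_\alpha \ \text{positive definite},
\]
and for a single crisp ellipsoid $\beta^\top H \beta \le 1$ the classical general ridge (Kuks-Olman) estimator takes the form
\[
  \hat{\beta} = \bigl(X^\top W^{-1} X + \sigma^2 H\bigr)^{-1} X^\top W^{-1} y ,
\]
which is known to be linear minimax under this restriction and which reduces to the generalized least squares estimator $\hat{\beta}_{\mathrm{GLS}} = (X^\top W^{-1} X)^{-1} X^\top W^{-1} y$ as the prior restriction vanishes ($H \to 0$).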