A frequently recurring task for humanoid robots is autonomous navigation towards a goal position. Here we present a simulation of a purely vision-based docking behavior in a 3-D physical world. The robot learns sensorimotor laws and visual features simultaneously and exploits both to navigate towards its virtual target region. The control laws are trained with a two-layer network consisting of a feature (sensory) layer that feeds into an action (Q-value) layer. A reinforcement feedback signal (delta) modulates not only the action weights but also, at the same time, the feature weights. Under this influence, the network learns interpretable visual features and successfully assigns goal-directed actions to them. This is a step towards investigating how reinforcement learning can be linked to visual perception.
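To make the described architecture concrete, the following is a minimal sketch (not the paper's implementation) of a two-layer network in which a TD-style error "delta" modulates both the action (Q-value) weights and the feature weights. All names, dimensions, learning rates, and the exact form of the feature-weight update are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_features, n_actions = 64, 16, 4              # assumed dimensions
    W = rng.normal(scale=0.1, size=(n_features, n_pixels))   # feature (sensory) layer
    V = rng.normal(scale=0.1, size=(n_actions, n_features))  # action (Q-value) layer
    alpha_V, alpha_W, gamma, eps = 0.05, 0.01, 0.9, 0.1      # assumed hyperparameters

    def features(x):
        return np.tanh(W @ x)            # feature-layer activations

    def q_values(h):
        return V @ h                     # one Q-value per action

    def step(x, x_next, r, terminal):
        """One epsilon-greedy Q-learning update; delta gates both weight layers."""
        global W, V
        h = features(x)
        q = q_values(h)
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))
        q_next = 0.0 if terminal else np.max(q_values(features(x_next)))
        delta = r + gamma * q_next - q[a]      # reinforcement feedback signal (delta)
        V[a] += alpha_V * delta * h            # action (Q-value) weight update
        # Delta-modulated, Hebbian-style feature-weight update, gated by the
        # action weights of the chosen action (an assumed choice of rule)
        W += alpha_W * delta * np.outer(V[a] * (1 - h**2), x)
        return a, delta

In such a setup the same scalar delta that improves the action values also shapes which visual features the sensory layer extracts, which is the coupling of reinforcement learning and visual perception the abstract refers to.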