Interactive reinforcement learning is a promising way to improve the convergence speed of reinforcement learning methods. In this work, we investigate inter-agent training and present an approach for knowledge transfer in a domestic scenario: a first agent is trained by reinforcement learning and afterwards transfers selected knowledge to a second agent via instructions, making the second agent's training more efficient. We combine this approach with action-space pruning based on knowledge of affordances and show that it significantly improves convergence speed in both classic and interactive reinforcement learning scenarios.
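As a rough illustration of affordance-based action-space pruning, the following sketch masks a tabular Q-learner's candidate actions with a per-state affordance table. The toy domestic task, the action names, and the affordance table are all hypothetical stand-ins, not the paper's actual environment or method; the point is only that restricting exploration to afforded actions shrinks the effective action space.

```python
import random

# Hypothetical toy domestic task: carry an object through a chain of
# locations; only some actions are "afforded" in each state.
N_STATES = 6
ACTIONS = ["grasp", "move", "drop", "wipe"]  # illustrative action set
GOAL = N_STATES - 1

# Assumed affordance table: maps each state to the actions that make
# sense there (e.g. "wipe" is never afforded while carrying an object).
AFFORDANCES = {s: ["grasp", "move"] if s < GOAL - 1 else ["move", "drop"]
               for s in range(N_STATES)}

def step(state, action):
    """Toy dynamics: 'move' advances toward the goal; anything else stalls."""
    if action == "move" and state < GOAL:
        return state + 1, (1.0 if state + 1 == GOAL else 0.0)
    return state, -0.1  # small penalty for non-progress

def train(use_affordances, episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning, optionally pruned by affordances."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    steps_per_episode = []
    for _ in range(episodes):
        s, steps = 0, 0
        while s != GOAL and steps < 100:
            # Pruning: explore and exploit only over afforded actions.
            candidates = AFFORDANCES[s] if use_affordances else ACTIONS
            if rng.random() < eps:
                a = rng.choice(candidates)
            else:
                a = max(candidates, key=lambda b: q[(s, b)])
            s2, r = step(s, a)
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s, steps = s2, steps + 1
        steps_per_episode.append(steps)
    return steps_per_episode
```

With pruning, the epsilon-greedy agent explores two actions per state instead of four, so less experience is wasted on actions that cannot progress the task; the same masking idea applies when a teacher agent's instructions, rather than an affordance table, narrow the learner's choices.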