In the edge-cloud continuum, computational resources may be available both in the cloud and at the edge of the network. If services are deployed on these resources in a distributed fashion, the latency between the participating nodes can become a critical factor. This is especially the case if resources from different clusters are used and runtime dependencies between services exist. In addition, edge-cloud landscapes are volatile, i.e., both the topology and Quality of Service parameters such as latency may change over time. In this paper, we present a service placement strategy for deploying services in the edge-cloud continuum with the aim of reducing latency. Our placement strategy takes the dynamic nature of the edge-cloud continuum into account, and dependencies between services are discovered at runtime. To enable this, we present a framework that extends Kubernetes and identifies service dependencies automatically. We evaluate our approach and show that it reduces latency significantly.