Federated learning offers a solution when multiple parties want to collaboratively train a machine learning model without directly sharing their sensitive data. In federated learning, each party trains a model locally on its private data and sends only the model's weights or updates (gradients) to an aggregator, which averages the locally trained models into a new, more effective global model. However, the models shared during the federated learning process can still leak sensitive information about their training data, for example through membership inference attacks. Differential Privacy (DP) can mitigate these privacy risks by introducing noise into the machine learning models. In this work, we consider two approaches for achieving differential privacy in federated learning: (i) output perturbation of the trained machine learning models and (ii) a differentially private variant of stochastic gradient descent (DP-SGD). We perform an extensive analysis of both approaches in several federated settings and compare their performance in terms of model utility and achieved privacy. We observe that DP-SGD allows for a better trade-off between privacy and utility.
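
To make the two approaches concrete, the following is a minimal NumPy sketch of a single federated round: plain federated averaging, output perturbation of a locally trained weight vector, and one DP-SGD step with per-example gradient clipping and Gaussian noise. It is an illustration under assumed parameters (sensitivity, epsilon, clip_norm, noise_multiplier), not the implementation or privacy accounting evaluated in this work; the Laplace mechanism for output perturbation is likewise shown only as one possible instantiation.

```python
import numpy as np

def federated_average(client_weights):
    """Aggregator step: average the clients' locally trained weight vectors."""
    return np.mean(client_weights, axis=0)

def output_perturbation(weights, sensitivity, epsilon, rng):
    """Approach (i): perturb the trained weights before sharing them,
    using noise calibrated to the model's sensitivity (Laplace mechanism
    shown here for illustration)."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=weights.shape)
    return weights + noise

def dp_sgd_step(weights, per_example_grads, lr, clip_norm, noise_multiplier, rng):
    """Approach (ii): one DP-SGD step. Each per-example gradient is clipped
    to an L2 norm of clip_norm, the clipped gradients are summed, Gaussian
    noise scaled to the clipping norm is added, and the noisy average
    updates the weights."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    grad_sum = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=weights.shape)
    noisy_grad = (grad_sum + noise) / len(per_example_grads)
    return weights - lr * noisy_grad
```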