In developing the system architecture, data curation should be viewed as a foundational component for ensuring data sustainability, particularly given the diverse scales of data that organizations manage, each varying in size, complexity, and resource requirements. This approach treats data curation as a staged service aimed at bridging the "grey area" between small-scale data management and the requirements of large-scale repositories. In this middle ground, datasets often lack the formal structure and resources of major repositories but still require careful curation to meet the FAIR principles: Findability, Accessibility, Interoperability, and Reusability.

To address these needs, we propose an incremental framework within the data curation workflow. Its key element is a specialized software component, the Data Transfer Facilitator (DTF), designed to streamline the transition of data from smaller systems into standardized, large-scale repositories while mitigating common transfer challenges. The architectural design should support adaptable archiving practices and sustainable data management strategies: an incremental approach allows smaller datasets to integrate with larger data infrastructures over time, adapt to evolving standards, and maximize the long-term utility of data resources across multiple disciplines. We present a prototype that illustrates what a DTF can look like for a concrete project, together with further prototypes demonstrating that the DTF can be implemented in practice.
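To make the staged-service idea concrete, the sketch below shows one possible shape of a DTF: a small pipeline that checks a locally managed dataset against a minimal FAIR-oriented metadata checklist and, only if the check passes, hands the data off to a target repository. This is a minimal illustration under stated assumptions, not a description of the prototypes above; the names `DataTransferFacilitator`, `Dataset`, `REQUIRED_FIELDS`, and `deposit` are hypothetical.

```python
# Illustrative sketch of a Data Transfer Facilitator (DTF).
# All class, field, and function names are hypothetical and are not taken
# from the prototypes described in the text.
from dataclasses import dataclass, field
from pathlib import Path
from typing import Callable

# Minimal FAIR-oriented metadata checklist assumed for this example.
REQUIRED_FIELDS = ("identifier", "title", "creator", "license", "format")


@dataclass
class Dataset:
    """A small, locally managed dataset awaiting transfer."""
    path: Path
    metadata: dict[str, str] = field(default_factory=dict)


class DataTransferFacilitator:
    """Staged transfer of a dataset into a larger, standardized repository."""

    def __init__(self, deposit: Callable[[Path, dict[str, str]], str]):
        # `deposit` abstracts the target repository's ingest mechanism
        # (e.g. an HTTP upload); it returns a persistent identifier.
        self.deposit = deposit

    def validate(self, ds: Dataset) -> list[str]:
        """Return the list of missing metadata fields (empty means ready)."""
        return [f for f in REQUIRED_FIELDS if not ds.metadata.get(f)]

    def transfer(self, ds: Dataset) -> str:
        """Validate the dataset, then hand it off to the repository."""
        missing = self.validate(ds)
        if missing:
            raise ValueError(f"dataset not FAIR-ready, missing: {missing}")
        return self.deposit(ds.path, ds.metadata)


if __name__ == "__main__":
    # Stand-in for a real repository ingest call.
    fake_deposit = lambda path, meta: f"doi:10.0000/example-{path.stem}"
    dtf = DataTransferFacilitator(deposit=fake_deposit)
    ds = Dataset(
        path=Path("survey_2023.csv"),
        metadata={
            "identifier": "local-042",
            "title": "2023 field survey",
            "creator": "Example Lab",
            "license": "CC-BY-4.0",
            "format": "text/csv",
        },
    )
    print(dtf.transfer(ds))  # prints the identifier returned by the repository
```

Separating validation from deposit reflects the incremental approach described above: a dataset can be curated against the checklist well before a target repository or its ingest interface is chosen.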