Training deep neural networks to reconstruct nanoporous structures from FIB tomography images using synthetic training data

Link:
Author:
Publisher/Corporate body:
Hamburg University of Technology
Year of publication:
2021
Media type:
Dataset
Keywords:
  • Electron microscopy
  • Synthetic training data
  • 3D reconstruction
  • Semantic segmentation
  • SEM simulation
  • 3D CNN
  • 2D CNN with adjacent slices
  • Machine learning
  • Isotropy analysis
  • 620: Engineering sciences
Description:
  • This dataset contains simulated FIB tomography data of nanoporous and hierarchical nanoporous gold, synthetic FIB-SEM images of hierarchical nanoporous gold, and segmentation results for a real hierarchical nanoporous gold dataset. Abstract of the paper: Focused ion beam (FIB) tomography is a destructive technique used to collect three-dimensional (3D) structural information at a resolution of a few nanometers. For FIB tomography, a material is eroded layer by layer. After the erosion of each layer (whose thickness is on the nanometer scale), the exposed material surface is imaged by a scanning electron microscope. Especially for nanoporous materials, the reconstruction of the 3D microstructure from the information collected during FIB tomography is impaired by the shine-through effect. This effect prevents a unique mapping between voxel intensity values and material phase (e.g., solid or void) and often substantially reduces the accuracy of conventional image segmentation methods. Here we demonstrate how machine learning can be used to tackle this problem. A bottleneck in doing so is the availability of sufficient training data. To overcome this problem, we present a novel approach to generating synthetic FIB-SEM images using Monte Carlo simulations, which can then serve as training data for machine learning. Based on this approach, we compare the performance of different machine learning architectures for segmenting FIB tomography data of nanoporous materials. We demonstrate that two-dimensional (2D) convolutional neural network (CNN) architectures with a group of adjacent slices as input, as well as 3D CNNs, perform best and can improve segmentation performance by more than 100%. (A minimal sketch contrasting the two CNN input conventions follows this description.)
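The abstract contrasts 2D CNNs that receive a group of adjacent FIB-SEM slices as input with 3D CNNs that process the same slices as a small volume. The sketch below is not taken from the dataset or the paper; it is a minimal illustration, assuming PyTorch, of how the two input conventions differ in tensor shape. The slice count, layer widths, and class names (Adjacent2DCNN, Small3DCNN) are hypothetical choices made only for this example.

    # Minimal sketch (not the authors' code) of the two input conventions, assuming PyTorch.
    import torch
    import torch.nn as nn

    class Adjacent2DCNN(nn.Module):
        """2D CNN that stacks n adjacent slices along the channel axis."""
        def __init__(self, n_slices: int = 5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(n_slices, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            )
            # Per-pixel two-class output (e.g., solid vs. void) for the centre slice.
            self.head = nn.Conv2d(16, 2, kernel_size=1)

        def forward(self, x):  # x: (batch, n_slices, H, W)
            return self.head(self.features(x))

    class Small3DCNN(nn.Module):
        """3D CNN that treats the same group of slices as a sub-volume."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            )
            # Per-voxel two-class output for every slice in the sub-volume.
            self.head = nn.Conv3d(16, 2, kernel_size=1)

        def forward(self, x):  # x: (batch, 1, D, H, W)
            return self.head(self.features(x))

    if __name__ == "__main__":
        slices = torch.randn(1, 5, 64, 64)    # 5 adjacent slices as channels
        volume = slices.unsqueeze(1)          # same data as a (1, 1, 5, 64, 64) volume
        print(Adjacent2DCNN()(slices).shape)  # torch.Size([1, 2, 64, 64])
        print(Small3DCNN()(volume).shape)     # torch.Size([1, 2, 5, 64, 64])

The 2D variant predicts a segmentation for the centre slice from its neighbourhood, while the 3D variant labels the whole sub-volume at once. One plausible reading of why both perform well, given the abstract, is that each sees intensity information from neighbouring slices and can therefore disambiguate the shine-through effect that a single-slice method cannot.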
Licenses:
  • info:eu-repo/semantics/openAccess
  • https://creativecommons.org/licenses/by/4.0/
Source system:
TUHH Open Research

Internal metadata
Source dataset
oai:tore.tuhh.de:11420/11060