We develop a novel method based on Deep Convolutional Networks (DCN) to automate the identification and mapping of fracture and fault traces in optical images. The method employs two DCNs in a two-player game: a first network, called the Generator, learns to segment images so that its segmentations resemble the ground truth; a second network, called the Discriminator, measures the differences between the ground truth image and each segmented image and feeds its score back to the Generator; based on these scores, the Generator progressively improves its segmentation. As both networks are conditioned on the ground truth images, the method is called a Conditional Generative Adversarial Network (CGAN). We propose a new loss function for both the Generator and the Discriminator networks to improve their accuracy.
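To make the two-player training loop concrete, the following is a minimal sketch in PyTorch, under stated assumptions: the network bodies (a small stand-in Generator and a patch-style Discriminator), the pixel-wise loss, and the weighting factor are illustrative placeholders, not the architecture or the new loss function proposed in this work.

```python
# Minimal CGAN training-step sketch for fracture-trace segmentation.
# Assumes PyTorch; all layer choices and hyperparameters are illustrative.
import torch
import torch.nn as nn

class Generator(nn.Module):
    # Hypothetical stand-in for the segmentation network.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # fracture-probability map
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    # Scores (image, segmentation) pairs, i.e. it is conditioned on the input image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),  # per-patch realism logits
        )
    def forward(self, image, seg):
        return self.net(torch.cat([image, seg], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss = nn.BCEWithLogitsLoss()  # adversarial term
seg_loss = nn.BCELoss()            # pixel-wise term; the paper proposes its own loss
lambda_seg = 100.0                 # illustrative weighting, not from the paper

def train_step(image, gt_mask):
    # Discriminator: real (image, ground truth) vs. fake (image, G(image)) pairs.
    fake = G(image)
    d_real = D(image, gt_mask)
    d_fake = D(image, fake.detach())
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the Discriminator while matching the ground truth mask.
    d_fake = D(image, fake)
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + \
             lambda_seg * seg_loss(fake, gt_mask)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage with dummy tensors standing in for an optical image and its annotation:
img = torch.rand(1, 3, 64, 64)
mask = torch.randint(0, 2, (1, 1, 64, 64)).float()
print(train_step(img, mask))
```

The Discriminator's score on the fake pair enters the Generator's loss, which is the feedback channel described above: as the Discriminator gets better at spotting implausible segmentations, the Generator is pushed to produce maps closer to the ground truth.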
Using two criteria and a manually annotated optical image, we compare the generalization performance of the proposed method to that of a classical DCN architecture, U-net. The comparison demonstrates the suitability of the proposed CGAN architecture. Further work is, however, needed to improve its efficiency.