
Deep Learning Strategies for Ultrasound in Pregnancy

Pedro H B Diniz 1, Yi Yin 1, Sally Collins 1

Abstract

Ultrasound is one of the most ubiquitous imaging modalities in clinical practice. It is cheap, does not require ionizing radiation, and can be performed at the bedside, making it the most commonly utilized imaging technique in pregnancy. Despite these advantages, it does have some drawbacks, such as relatively low imaging quality, low contrast, and high variability. Under these constraints, automating the interpretation of ultrasound images is challenging. However, successful automated identification of structures within 3D ultrasound volumes has the potential to revolutionize clinical practice. For example, a small placental volume in the first trimester has been shown to correlate with adverse outcomes later in pregnancy. If the placenta could be segmented reliably and automatically from a static 3D ultrasound volume, its estimated volume and other morphological metrics could be used as part of a screening test for increased risk of pregnancy complications, potentially improving clinical outcomes.

Recently, deep learning has emerged, achieving state-of-the-art performance in various research fields, notably medical image analysis involving classification, segmentation, object detection, and tracking tasks. Due to its increased performance with large datasets, it has gained great interest in medical imaging applications. In this review, we present an overview of deep learning methods applied to ultrasound in pregnancy, introducing their architectures and analyzing their strategies. We then present some common problems and provide some perspectives into potential future research.

Keywords: Deep Learning, Ultrasound Imaging, Pregnancy, Segmentation, Morphometry

1. Introduction

In medical imaging, the most commonly employed deep learning methods are convolutional neural networks (CNNs) [1–8]. Compared to classical machine learning algorithms, CNNs have enabled the development of numerous solutions not previously achievable because they do not need a human operator to identify an initial set of features: they can find relevant features within the data itself. In many cases, CNNs are able to identify better features than the human eye.

CNNs do have some disadvantages, however: they need large amounts of data in order to find the right features automatically, and processing large datasets is both computationally costly and time-consuming. Fortunately, a CNN’s training time can be reduced significantly if parallel architectures are used (for example, graphics processing units).
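
To make these two points concrete, the following is a minimal sketch in PyTorch of a small CNN classifier; the layer sizes and the 64×64 grayscale input are illustrative assumptions, not an architecture from the reviewed literature. It shows that features are learned directly from pixel data, and that the same code runs on a GPU when one is available.

```python
# A minimal sketch (not from any reviewed paper) of a small CNN classifier.
# Layer sizes and the 64x64 grayscale input are illustrative assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Convolutional feature extractor: filters are learned from the data,
        # replacing the hand-crafted features of classical pipelines.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Parallel hardware cuts training time: the same code runs on CPU or GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SmallCNN().to(device)
dummy = torch.randn(8, 1, 64, 64, device=device)  # batch of 8 grayscale images
print(model(dummy).shape)  # torch.Size([8, 2])
```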

In medical imaging, deep learning is increasingly used for tasks such as automated lesion detection, segmentation, and registration to assist clinicians in disease diagnosis and surgical planning. Deep learning techniques have the potential to create new screening tools, predict diseases, improve diagnostic accuracy, and accelerate clinical tasks, whilst also reducing costs and human error [9–17]. For example, automated lesion segmentation tools usually run in a few seconds, much faster than human operators, and often provide more reproducible results.

Ultrasound is the most commonly used medical imaging modality for diagnosis and screening in clinical practice [18]. It presents many advantages over other modalities such as X-ray, magnetic resonance imaging (MRI), and computed tomography (CT) because it does not use ionizing radiation, is highly portable, and is relatively cheap [9]. However, ultrasound has its disadvantages. It often has relatively low imaging quality, is prone to artifacts, is highly dependent on operator experience, and shows high inter- and intra-observer variability across different manufacturers’ machines [10]. Nonetheless, its safety profile, non-invasive nature, and convenience make it the primary imaging modality for fetal assessment in pregnancy [20]. This includes early pregnancy dating, screening for fetal structural abnormalities, and the estimation of fetal weight and growth velocity [21]. Although two-dimensional (2D) ultrasound is most commonly used for pregnancy evaluation due to its wide availability and high resolution, most machines also have three-dimensional (3D) probes and software, which have been successfully employed in the detection of fetal structural abnormalities [22].

Ultrasound has a number of limitations when it comes to intrauterine scanning, including small field-of-view, poor image quality under certain conditions (e.g. reduced amniotic fluid), limited soft-tissue acoustic contrast, and beam attenuation caused by adipose tissue [22]. Furthermore, fetal position, gestational age induced effects (poor visualization, skull ossification) and fetal tissue definition can also affect the assessment [20]. As a result, a high level of expertise is essential to ensure adequate image acquisition and appropriate clinical diagnostic performance. Thus ultrasound examination results are highly dependent on the training, experience and skill of the sonographer [23].

A study of the prenatal detection of malformations using ultrasound images demonstrated that the performance sensitivity ranged from 27.5% to 96% among different medical institutions [24]. Even when undertaken correctly by an expert, manual interrogation of ultrasound images is still time-consuming which limits its use as a population-based screening tool.

To address these challenges, automated image analysis tools have been developed which are able to provide faster, more accurate and less subjective ultrasound markers for a variety of diagnoses. In this paper, we review some of the most recent developments in deep learning which have been applied to ultrasound in pregnancy.

2. Deep Learning Applications in Pregnancy Ultrasound

Deep Learning techniques have been used for ultrasound image analysis in pregnancy to address tasks such as classification, object detection, and tissue segmentation. This review covers applications in pregnancy. The reviewed papers were identified using a broad free-text search on the most commonly utilized medical databases (PubMed, Google Scholar etc.). The search was augmented by reviewing the references in the identified papers. The resulting papers were assessed by the authors and filtered for perceived novelty, impact in the field, and publication date (2017–2020). Table 1 lists the literature reviewed in this section.

Table 1:

State-of-the-art works.

Publication | Objective | Approach

Fetal segmentation
Namburete et al. [9] | Segmentation + alignment (brain) | Modified FCN
Torrents-Barrena et al. [25] | Segmentation (whole fetus) | Several approaches
Philip et al. [26] | Segmentation + measurement (heart) | 3D U-Net
Al-Bander et al. [11] | Segmentation (head) | Mask R-CNN + ResNet

Placental segmentation
Qi et al. [27] | Anatomy recognition | ResNet
Looney et al. [28] | Segmentation | Parallel CNN
Looney et al. [29] | Segmentation | OxNNet
Oguz et al. [30] | Segmentation | CNN
Yin et al. [31] | Anatomy recognition | Multi-class FCNN
Hu et al. [32] | Segmentation | Modified U-Net
Torrents-Barrena et al. [33] | Segmentation | cGAN
Zimmer et al. [34] | Segmentation | 3D CNN

2.1. Fetal Segmentation

Ultrasound is the imaging modality most commonly used in routine obstetric examination. Fetal segmentation and volumetric measurement have been explored for many applications, including assessment of fetal health and calculation of gestational age and growth velocity. Ultrasound is also used for structural and functional assessment of the fetal heart, head biometrics, brain development, and cerebral abnormalities. This antenatal assessment allows clinicians to make an early diagnosis of many conditions, facilitating parental choice and enabling appropriate planning for the rest of the pregnancy, including early delivery.

Currently, fetal segmentation and volumetric measurement still rely on manual or semi-automated methods, which are time-consuming and subject to inter-observer variability [11]. Effective fully automated segmentation is required to address these issues. Recent developments to facilitate automated fetal segmentation from 3D ultrasound are presented below:

Namburete et al. [9] developed a methodology to address the challenge of aligning 3D ultrasound images of the fetal brain to form the basis of automated analysis of brain maturation. A multi-task fully convolutional neural network (FCNN) was used to localize the 3D fetal brain, segment structures, and then align them to a referential coordinate system. The network was optimized by simultaneously learning features shared within the input data pertaining to the correlated tasks, and later branching out into task-specific output streams.

The proposed model was trained on a dataset of 599 volumes with gestational ages ranging from 18 to 34 weeks, and then evaluated on a clinical dataset consisting of 140 volumes presenting both healthy and growth-restricted fetuses acquired from different ethnic and geographical groups. The automatically co-aligned volumes showed good correlation between fetal anatomies.
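
The shared-trunk, task-specific-head pattern described above can be sketched as follows. This is a hedged PyTorch illustration, not Namburete et al.’s published network: the layer sizes, the use of 2D rather than 3D convolutions, and the head designs are our own simplifying assumptions.

```python
# Illustrative multi-task FCN: a shared encoder with task-specific heads.
# NOT the published architecture; all sizes are simplifying assumptions.
import torch
import torch.nn as nn

class MultiTaskFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared trunk: features learned here serve all correlated tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Task-specific output streams branch off the shared features.
        self.seg_head = nn.Conv2d(32, 2, 1)           # per-pixel segmentation logits
        self.pose_head = nn.Sequential(               # regresses alignment parameters
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3)
        )

    def forward(self, x):
        shared = self.encoder(x)
        return self.seg_head(shared), self.pose_head(shared)

model = MultiTaskFCN()
seg, pose = model(torch.randn(1, 1, 64, 64))
# Joint training sums per-task losses so the trunk learns shared features:
# loss = seg_loss(seg, seg_target) + pose_loss(pose, pose_target)
print(seg.shape, pose.shape)  # torch.Size([1, 2, 64, 64]) torch.Size([1, 3])
```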

Torrents-Barrena et al. [25] proposed a radiomics-based method to segment different fetal tissues from magnetic resonance imaging and 3D ultrasound. This was the first time that radiomics (the high-throughput extraction of large numbers of image features from radiographic images [35]) had been used for segmentation purposes. First, handcrafted radiomic features were extracted to characterize the uterus, placenta, umbilical cord, fetal lungs, and brain. Then the radiomic features for each anatomical target were optimized using both K-best and Sequential Forward Feature Selection techniques. Finally, a Support Vector Machine with instance balancing was adopted for accurate segmentation using these features as its input. In addition, several state-of-the-art deep learning-based segmentation approaches were studied and validated on a set of 60 axial MRI and 3D ultrasound images from pathological and clinical cases. Their results demonstrated that a combination of 10 selected radiomic features led to the highest tissue segmentation performance.
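
The selection-then-classification stage of such a pipeline might look like the following scikit-learn sketch. The synthetic features stand in for real radiomic features, and k=10 simply mirrors the reported finding; the radiomic extraction itself is not reproduced here.

```python
# Sketch: K-best feature selection followed by a class-balanced SVM.
# Data is synthetic; 40 "radiomic" features are a stand-in assumption.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                 # 500 samples, 40 candidate features
y = (X[:, :3].sum(axis=1) > 0).astype(int)     # toy labels driven by 3 features

pipe = make_pipeline(
    SelectKBest(f_classif, k=10),               # keep the 10 most informative features
    SVC(kernel="rbf", class_weight="balanced"), # balancing for rare tissue classes
)
pipe.fit(X, y)
print(pipe.score(X, y))
```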

Philip et al. [26] proposed a 3D U-Net based fully automated method to segment the fetal annulus (the base of the heart valves). The aim was to build a tool to help fetal medicine experts assess fetal cardiac function. The method was trained and tested on 250 cases, taken at different points in the cardiac cycle to ensure that the technique remained valid throughout. It provided automated measurements of the excursion of the mitral and tricuspid valve annular planes in the form of MAPSE (Mitral Annular Plane Systolic Excursion) and TAPSE (Tricuspid Annular Plane Systolic Excursion), demonstrating the feasibility of automated segmentation of the fetal annulus.

Al-Bander et al. [11] introduced a deep learning-based method to segment the fetal head in ultrasound images. The fetal head boundary was detected by incorporating an object localization scheme into the segmentation, achieved by combining a Mask R-CNN (Regional Convolutional Neural Network) with an FCNN. The proposed model was trained on 999 2D ultrasound images and tested on 335 images captured from 551 pregnant women with a gestational age ranging between 12 and 20 weeks. Finally, an ellipse was fitted to the contour of the detected fetal head using the least-squares fitting algorithm [36]. Figure 1 illustrates examples of fetal head segmentation.

Figure 1: Examples of fetal head segmentation, showing the fitted ellipses on 2D ultrasound sections. The manual annotation is in blue; the automated segmentation result is in red. Source: Al-Bander et al. [11]
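
The final ellipse-fitting step can be illustrated with OpenCV, whose cv2.fitEllipse performs a least-squares ellipse fit. The binary head mask below is synthetic, standing in for the Mask R-CNN output, and the circumference formula is a standard approximation rather than anything specified in the paper.

```python
# Sketch: fit an ellipse to the contour of a (synthetic) fetal-head mask.
import cv2
import numpy as np

mask = np.zeros((256, 256), dtype=np.uint8)
cv2.ellipse(mask, (128, 120), (80, 55), 30, 0, 360, 255, -1)  # synthetic "head" mask

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largest = max(contours, key=cv2.contourArea)          # the segmented head boundary
(cx, cy), (major, minor), angle = cv2.fitEllipse(largest)  # least-squares fit

# Head circumference can then be estimated from the fitted axes
# (Ramanujan's approximation for an ellipse perimeter):
a, b = major / 2, minor / 2
hc = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))
print(f"centre=({cx:.1f},{cy:.1f}), angle={angle:.1f} deg, HC~{hc:.1f} px")
```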

2.2. Placental Segmentation

The placenta is an essential organ which plays a vital role in the healthy growth and development of the fetus. It permits the exchange of respiratory gases, nutrients and waste between mother and fetus. It also synthesizes many substances that maintain the pregnancy, including estrogen, progesterone, cytokines, and growth factors. Furthermore, the placenta also functions as a barrier, protecting the fetus against pathogens and drugs [37].

Abnormal placental function affects the development of the fetus and causes obstetric complications such as pre-eclampsia. Placental insufficiency is associated with adverse pregnancy outcomes like fetal growth restriction (FGR), caused by insufficient transport of nutrients and oxygen through the placenta [38]. A good indicator of future placental function is the size of the placenta in early pregnancy. The placental volume as early as 11 to 13 weeks’ gestation has long been known to correlate with birth weight at term [39]. Poor vascularity of the first-trimester placenta has also been demonstrated to increase the risk of developing pre-eclampsia later in pregnancy [40].

Reliable placental segmentation is the basis of further measurement and analysis with the potential to predict adverse outcomes. However, full automation is a challenging task due to the heterogeneity of ultrasound images, indistinct boundaries, and the placenta’s highly variable shape and position. Manual segmentation is relatively accurate but extremely time-consuming. Semi-automated image analysis tools are faster but still time-consuming, and typically require the operator to manually identify the placenta within the image. An accurate and fully automated technique for placental segmentation providing measurements such as placental volume and vascularity would permit population-based screening for pregnancies at risk of adverse outcomes.

Figure 2 illustrates an example of placenta segmentation.

Figure 2: Placenta segmentation of a first-trimester pregnancy: 2D B-mode plane (left), semi-automated Random Walker result (middle), OxNNet prediction result (right). Source: Looney et al. [29]

Qi et al. [27] proposed a weakly-supervised CNN for anatomy recognition in 2D placental ultrasound images. This was the first successful attempt at multi-structure detection in placental ultrasound images. The CNN was designed to learn discriminative features in Class Activation Maps (one for each class), which are generated by applying Global Average Pooling in the last hidden layer. A dataset of 10,808 image patches from 60 placental ultrasound volumes was used to evaluate the proposed method. Experimental results demonstrated that the proposed method achieved high recognition accuracy and could localize complex anatomical structures around the placenta.
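
The Class Activation Map mechanism can be sketched in a few lines of PyTorch: with Global Average Pooling before the final linear layer, the map for a class is a weighted sum of the last convolutional feature maps. The shapes and the four-class setup below are illustrative assumptions.

```python
# Sketch of Class Activation Maps (CAM) via Global Average Pooling.
# Shapes and the 4 anatomy classes are illustrative assumptions.
import torch
import torch.nn as nn

conv = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
fc = nn.Linear(32, 4)                          # 4 hypothetical anatomy classes

x = torch.randn(1, 1, 64, 64)
fmaps = conv(x)                                # (1, 32, 64, 64) feature maps
logits = fc(fmaps.mean(dim=(2, 3)))            # GAP, then linear -> class scores

# CAM for class c: sum_k w_{c,k} * fmap_k, highlighting the image regions
# responsible for predicting class c (here: an anatomical structure).
c = logits.argmax(dim=1).item()
cam = torch.einsum("k,khw->hw", fc.weight[c], fmaps[0])
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # torch.Size([64, 64])
```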

Looney et al. [28] used a CNN named DeepMedic [41] to automate segmentation of the placenta in 3D ultrasound. This was the first attempt to segment the placenta from 3D ultrasound using a CNN. Their database contained 300 3D ultrasound volumes from the first trimester. The placenta was segmented in a semi-automated manner using the Random Walker method [42] to provide a ‘ground truth’ dataset. The results of the DeepMedic CNN were compared against the semi-automated segmentation, achieving a median Dice Similarity Coefficient (DSC) of 0.73 (first quartile 0.66, third quartile 0.76) and a median Hausdorff distance of 27 mm (first quartile 18 mm, third quartile 36 mm).
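
The two reported metrics can be computed on binary masks with NumPy and SciPy, as in the following sketch. It assumes isotropic voxel spacing; real volumes would scale the coordinates by the spacing to report millimetres.

```python
# Sketch: Dice Similarity Coefficient and Hausdorff distance on binary masks.
# Assumes isotropic voxel spacing (distances reported in voxels, not mm).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    # Symmetric Hausdorff distance between the two foreground point sets.
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

pred = np.zeros((32, 32, 32), bool); pred[8:20, 8:20, 8:20] = True
truth = np.zeros((32, 32, 32), bool); truth[10:22, 10:22, 10:22] = True
print(f"DSC={dice(pred, truth):.3f}, HD={hausdorff(pred, truth):.1f} voxels")
```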

Looney et al. [29] then presented a new 3D FCNN named OxNNet, based on the 2D U-Net architecture, to fully automate segmentation of the placenta in 3D ultrasound volumes. A large dataset, composed of 2,393 first-trimester 3D ultrasound volumes, was used for training and testing purposes. The ground truth dataset was generated using the semi-automated Random Walker method [42], initially seeded by three expert operators. The OxNNet FCNN obtained placental segmentation with state-of-the-art accuracy (median DSC of 0.84, interquartile range 0.09). They also demonstrated that increasing the size of the training set improves the performance of the FCNN. In addition, the placental volumes segmented by OxNNet were correlated with birth weight to predict small-for-gestational-age babies, showing almost identical clinical conclusions to those produced by the validated semi-automated tools.

Oguz et al. [30] combined a CNN with multi-atlas joint label fusion and Random Forest algorithms for fully automated placental segmentation. A dataset of 47 ultrasound volumes from the first trimester was pre-processed by data augmentation. The resulting dataset was used to train a 2D CNN to generate a first 3D prediction. This was used to initialize a multi-atlas joint label fusion algorithm, generating a second prediction. These two predictions were fused together using a Random Forest algorithm, enhancing overall performance. A 4-fold cross-validation was performed, and the proposed method reportedly achieved a mean Dice coefficient of 0.863 (±0.05) on the test folds.
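
The fusion step might be sketched as follows, with synthetic probability maps standing in for the CNN and multi-atlas outputs: per-voxel predictions from the two earlier stages become the features from which a Random Forest produces the final label.

```python
# Sketch: Random Forest fusion of two per-voxel probability maps.
# The "CNN" and "atlas" maps below are synthetic stand-ins, not real outputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_voxels = 5000
truth = rng.integers(0, 2, n_voxels)                               # 1 = placenta voxel
cnn_prob = np.clip(truth + rng.normal(0, 0.35, n_voxels), 0, 1)    # stage-1 output
atlas_prob = np.clip(truth + rng.normal(0, 0.30, n_voxels), 0, 1)  # stage-2 output

X = np.column_stack([cnn_prob, atlas_prob])   # one feature row per voxel
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, truth)
fused = rf.predict(X)                         # final fused segmentation labels
print((fused == truth).mean())
```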

Yin et al. [31] proposed a fully automated method combining deep learning and image processing techniques to extract the vasculature of the placental bed from 3D power Doppler ultrasound scans and estimate its perfusion. A multi-class FCNN was applied to segment the placenta, amniotic fluid, and fetus from the 3D ultrasound volume, providing accurate localization of the utero-placental interface (UPI), where maternal blood enters the placenta from the uterus. A transfer learning technique was applied to initialize the model using parameters optimized by a single-class model [29] trained on 1,200 labelled placental volumes. The vasculature was segmented from the 3D power Doppler signal by a region growing algorithm. Based on the representative vessels at a certain distance from the UPI, the perfusion of the placental bed was estimated using a validated technique known as fractional moving blood volume (FMBV) [43].
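
A basic seeded region growing pass over a power Doppler volume could look like the following sketch. The 6-connectivity, the fixed intensity threshold, and the synthetic volume are our assumptions; the published pipeline’s exact growth criteria may differ.

```python
# Sketch: seeded region growing on a (synthetic) 3D power Doppler volume.
# 6-connectivity and a fixed intensity threshold are simplifying assumptions.
import numpy as np
from collections import deque

def region_grow(vol: np.ndarray, seed: tuple, thresh: float) -> np.ndarray:
    mask = np.zeros(vol.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        # Visit the 6 face-adjacent neighbours of the current voxel.
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not mask[n] and vol[n] >= thresh:
                mask[n] = True
                queue.append(n)
    return mask

vol = np.zeros((32, 32, 32)); vol[10:20, 10:20, 10:20] = 0.9  # synthetic "vessel" signal
vessels = region_grow(vol, seed=(15, 15, 15), thresh=0.5)
print(vessels.sum())  # 1000 voxels
```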

Hu et al. [32] proposed an FCNN based on the U-Net architecture for 2D placental ultrasound segmentation. The U-Net had a novel convolutional layer weighted by automated acoustic shadow detection, which helped it to recognize ultrasound artifacts. The dataset used for evaluation contained 1,364 fetal ultrasound images acquired from 247 patients over 47 months. The dataset was quite diverse, as the images were acquired from different machines operated by different specialists and included fetuses at different gestational ages. The proposed method was first applied across the entire dataset and then to a subset of images containing acoustic shadows. In both cases, the acoustic shadow detection scheme was shown to improve segmentation accuracy.
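
One plausible reading of such a shadow-weighted layer is sketched below: convolutional activations are attenuated where shadow confidence is high, so shadowed regions contribute less evidence. This is an illustration of the idea, not Hu et al.’s exact layer.

```python
# Sketch of a shadow-weighted convolutional layer (an illustrative reading,
# not the published design): activations are scaled down inside shadows.
import torch
import torch.nn as nn

class ShadowWeightedConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)

    def forward(self, x: torch.Tensor, shadow: torch.Tensor) -> torch.Tensor:
        # shadow: (N, 1, H, W) map in [0, 1], where 1 = confident acoustic shadow.
        return self.conv(x) * (1.0 - shadow)

layer = ShadowWeightedConv(1, 16)
image = torch.randn(1, 1, 64, 64)
shadow = torch.zeros(1, 1, 64, 64); shadow[..., :, 40:] = 0.8  # assumed shadow region
out = layer(image, shadow)
print(out.shape)  # torch.Size([1, 16, 64, 64])
```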

Torrents-Barrena et al. [33] proposed the first fully automated framework to segment both the placenta and the fetoplacental vasculature in 3D ultrasound, demonstrating that ultrasound enables the assessment of twin-to-twin transfusion syndrome by providing placental vessel mapping. A conditional Generative Adversarial Network was adopted to identify the placenta, and a Modified Spatial Kernelized Fuzzy C-Means combined with Markov Random Fields was used to extract the vasculature. The method was applied to a dataset of 61 ultrasound volumes, which was heterogeneous due to different placental positions, in singleton or twin pregnancies of 15 to 38 weeks’ gestation. The results achieved a mean Dice coefficient of 0.75±0.12 for the placenta and 0.70±0.14 for its vessels on images that had been pre-processed by down-sampling and cropping.

Zimmer et al. [34] focused on the placenta at late gestational age. Ultrasound scans are typically useful only in the early stages of pregnancy because the limited field of view permits complete capture of only small placentas. To overcome this, a multi-probe system was used to acquire different fields of view, which were then combined with a voxel-wise fusion algorithm to obtain a fused ultrasound volume capturing the whole placenta. The dataset used for evaluation was composed of 127 single 4D (3D+time) ultrasound volumes from 30 patients covering different parts of the placenta. Forty-two fused volumes, which extended the field of view, were derived from these single volumes. Both the single and fused volumes were used to evaluate their 3D CNN based automated segmentation. The best results of placental volume segmentation were comparable to corresponding volumes extracted from MRI, achieving a Dice coefficient of 0.81±0.05.
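
Voxel-wise fusion of co-registered, partially overlapping volumes can be as simple as the following NumPy sketch, which averages the available values per voxel. Registration, which the real system requires first, is assumed to have been done, and the plain mean used here is an assumption rather than the published fusion rule.

```python
# Sketch: voxel-wise fusion of two co-registered partial fields of view.
# Each probe sees part of the placenta (NaN elsewhere); fusion averages
# overlapping voxels. A plain mean is an assumed, simplified fusion rule.
import numpy as np

shape = (32, 32, 32)
probe_a = np.full(shape, np.nan); probe_a[:, :, :20] = 0.6   # left field of view
probe_b = np.full(shape, np.nan); probe_b[:, :, 12:] = 0.8   # right field of view

stack = np.stack([probe_a, probe_b])
fused = np.nanmean(stack, axis=0)   # average where views overlap, copy otherwise
print(np.isnan(fused).sum())        # 0: the fused volume covers the whole target
```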

3. Discussion

The number of applications for deep learning in pregnancy ultrasound has increased rapidly over the last few years and they are beginning to show very promising results. Along with new advances in deep learning methods, new ultrasound applications are being developed to improve computer-aided diagnosis and enable the development of automated screening tools for pregnancy.

A number of deep learning algorithms have been presented in this review, showing novel approaches, state-of-the-art results, and pioneering applications that have contributed to pregnancy ultrasound analysis so far. Some methods rely on sophisticated hybrid approaches, combining different machine learning or image analysis techniques, whilst others rely on smart manipulation of the dataset, such as fusing volumes or applying data augmentation. Still others show that large quality-controlled datasets enable a single deep learning algorithm to be developed successfully. However, it is not currently possible to compare these methods directly, even when they are designed for the same task, because they all use different datasets and evaluation measurements.

Technological advances in medical equipment and image acquisition protocols allow better data acquisition, which will enhance the trained models. However, the size and availability of quality-controlled ground-truth datasets remain a significant issue to be addressed. The performance of deep learning methods usually depends on the number of samples. Most of the presented methods cannot be independently evaluated because their datasets are small and not widely available. In addition, models trained on one dataset might fail on another generated by a different manufacturer’s machine. Large, public, and appropriately quality-controlled ultrasound datasets are needed to compare different deep learning methods and to achieve robust performance in real-world scenarios.

There is also an urgent need to implement deep learning methods to solve relevant clinical problems. Very few papers translate the application of an algorithm into a broader, practical solution that could be widely used in clinical practice. The practical implementation of deep learning methods, and assessment of the correlation between automated results and clinical outcomes, should be a focus of future research.

The field of deep learning in pregnancy ultrasound is still developing. Lack of sufficient high-quality data and practical clinical solutions are some of the key barriers. In addition, the newest deep learning methods tend to be applied first to other more homogeneous medical imaging modalities such as CT or MRI. Therefore, there is a need for researchers to collaborate across modalities to transfer existing deep learning algorithms to the field of pregnancy ultrasound to achieve better performance and create new applications in the future.

References

  • [1]. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014.
  • [2]. Szegedy C et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015; 1–9.
  • [3]. He K et al. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016; 770–778.
  • [4]. Huang G et al. Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017; 4700–4708.
  • [5]. Hu J et al. Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018; 7132–7141.
  • [6]. Wang D et al. Exploring linear relationship in feature map subspace for convnets compression. arXiv preprint arXiv:1803.05729. 2018.
  • [7]. Ren S et al. Faster R-CNN: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems. 2015; 91–99.
  • [8]. He K et al. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2015; 37(9): 1904–1916.
  • [9]. Namburete AI et al. Fully automated alignment of 3D fetal brain ultrasound to a canonical reference space using multi-task learning. Medical Image Analysis. 2018; 46: 1–14.
  • [10]. Liu S et al. Deep learning in medical ultrasound analysis: a review. Engineering. 2019.
  • [11]. Al-Bander B et al. Improving fetal head contour detection by object localisation with deep learning. In: Annual Conference on Medical Image Understanding and Analysis. Springer. 2019; 142–150.
  • [12]. Sa R et al. Intervertebral disc detection in X-ray images using Faster R-CNN. In: 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2017; 564–567.
  • [13]. Li Q et al. Medical image classification with convolutional neural network. In: 13th International Conference on Control Automation Robotics & Vision (ICARCV). IEEE. 2014; 844–848.
  • [14]. Miao S et al. Real-time 2D/3D registration via CNN regression. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE. 2016; 1430–1434.
  • [15]. Diniz JO et al. Spinal cord detection in planning CT for radiotherapy through adaptive template matching, IMSLIC and convolutional neural networks. Computer Methods and Programs in Biomedicine. 2019; 170: 53–67.
  • [16]. Diniz PH et al. Detection of white matter lesion regions in MRI using SLIC0 and convolutional neural network. Computer Methods and Programs in Biomedicine. 2018; 167: 49–63.
  • [17]. Diniz JO et al. Detection of mass regions in mammograms by bilateral analysis adapted to breast density using similarity indexes and convolutional neural networks. Computer Methods and Programs in Biomedicine. 2018; 156: 191–207.
  • [18]. Noble JA, Boukerroui D. Ultrasound image segmentation: a survey. IEEE Transactions on Medical Imaging. 2006; 25(8): 987–1010.
  • [19]. Brattain LJ et al. Machine learning for medical ultrasound: status, methods, and future opportunities. Abdominal Radiology. 2018; 43(4): 786–799.
  • [20]. Rueda S et al. Evaluation and comparison of current fetal ultrasound image segmentation methods for biometric measurements: a grand challenge. IEEE Transactions on Medical Imaging. 2013; 33(4): 797–813.
  • [21]. Reddy UM et al. Prenatal imaging: ultrasonography and magnetic resonance imaging. Obstetrics and Gynecology. 2008; 112(1): 145.
  • [22]. Roy-Lacroix M et al. A comparison of standard two-dimensional ultrasound to three-dimensional volume sonography for routine second-trimester fetal imaging. Journal of Perinatology. 2017; 37(4): 380–386.
  • [23]. Sarris I et al. Intra- and interobserver variability in fetal ultrasound measurements. Ultrasound in Obstetrics & Gynecology. 2012; 39(3): 266–273.
  • [24]. Salomon L et al. A score-based method for quality control of fetal images at routine second-trimester ultrasound examination. Prenatal Diagnosis. 2008; 28(9): 822–827.
  • [25]. Torrents-Barrena J et al. Assessment of radiomics and deep learning for the segmentation of fetal and maternal anatomy in magnetic resonance imaging and ultrasound. Academic Radiology. 2019.
  • [26]. Philip ME et al. Convolutional neural networks for automated fetal cardiac assessment using 4D B-mode ultrasound. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE. 2019; 824–828.
  • [27]. Qi H et al. Weakly supervised learning of placental ultrasound images with residual networks. In: Annual Conference on Medical Image Understanding and Analysis. Springer. 2017; 98–108.
  • [28]. Looney P et al. Automatic 3D ultrasound segmentation of the first trimester placenta using deep learning. In: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017). IEEE. 2017; 279–282.
  • [29]. Looney P et al. Fully automated, real-time 3D ultrasound segmentation to estimate first trimester placental volume using deep learning. JCI Insight. 2018; 3(11).
  • [30]. Oguz B et al. Combining deep learning and multi-atlas label fusion for automated placenta segmentation from 3DUS. In: Data Driven Treatment Response Assessment and Preterm, Perinatal, and Pediatric Image Analysis. Springer. 2018; 138–148.
  • [31]. Yin Y et al. Standardization of blood flow measurements by automated vascular analysis from power Doppler ultrasound scan. In: Medical Imaging 2020: Computer-Aided Diagnosis. International Society for Optics and Photonics. 2020; 11314: 113144C.
  • [32]. Hu R et al. Automated placenta segmentation with a convolutional neural network weighted by acoustic shadow detection. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2019; 6718–6723.
  • [33]. Torrents-Barrena J et al. Automatic segmentation of the placenta and its peripheral vasculature in volumetric ultrasound for TTTS fetal surgery. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE. 2019; 772–775.
  • [34]. Zimmer VA et al. Towards whole placenta segmentation at late gestation using multi-view ultrasound images. In: International Conference on Medical Image Computing and Computer Assisted Intervention. Springer. 2019; 628–636.
  • [35]. Lambin P et al. Radiomics: extracting more information from medical images using advanced feature analysis. European Journal of Cancer. 2012; 48(4): 441–446.
  • [36]. Stigler SM. Gauss and the invention of least squares. The Annals of Statistics. 1981; 9(3): 465–474.
  • [37]. Han M et al. Automatic segmentation of human placenta images with U-Net. IEEE Access. 2019; 7: 180083–180092.
  • [38]. Salavati N et al. The possible role of placental morphometry in the detection of fetal growth restriction. Frontiers in Physiology. 2019; 9: 1884.
  • [39]. Salafia CM et al. Placenta and fetal growth restriction. Clinical Obstetrics and Gynecology. 2006; 49(2): 236–256.
  • [40]. Mathewlynn S, Collins S. Volume and vascularity: Using ultrasound to unlock the secrets of the first trimester placenta. Placenta. 2019.
  • [41]. Kamnitsas K et al. DeepMedic for brain tumor segmentation. In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. Springer. 2016; 138–149.
  • [42]. Grady L. Random walks for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006; 28(11): 1768–1783.
  • [43]. Stevenson GN et al. A technique for the estimation of fractional moving blood volume by using three-dimensional power Doppler US. Radiology. 2015; 274(1): 230–237.
