A weekend, a ‘virtual’ hackathon and ML approaches to automate the analysis of COVID-19 lung CT scans

May 6, 2020

Like many in the bioscience industry, Sensyne is seeking ways to turn its resources and expertise into valuable contributions to COVID-19 research. In our case, one way we did this was to commit our multidisciplinary Discovery Sciences team, comprising clinicians, epidemiologists, biologists, machine learning researchers and data scientists, to a coronavirus-focused weekend virtual hackathon.

The idea was to investigate COVID-19-related data and produce initial analyses to identify promising ideas that could be taken forward in the longer term, whether within the company, with collaborators, or through release into the public domain.

One of the hackathon projects aimed to automate the analysis of lung CT scans from COVID-19 patients. There is a significant focus on the development of new methodologies across the spectrum from prevention, testing and disease-progression monitoring to improving outcomes for COVID-19 patients. As part of this momentum, new public data is being made available by teams around the world, including imaging datasets, such as X-ray and CT, that are used to measure the effect of the disease on the lungs in severe cases.

While X-ray is often associated with detecting the causes, features and effects of disease, CT is being used to assess disease burden in COVID-19 cases, with various severity scores being proposed, for example that of Yang et al (2020). In that scheme, the lung is partitioned into 20 regions and expert radiologists assign a score of 0, 1 or 2 to each region based on the severity of the disease burden.
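As a concrete illustration, here is a minimal sketch of how such a regional score could be aggregated, assuming the per-region scores are simply summed; the scores below are hypothetical and the aggregation is our reading of the scheme, not code from Yang et al:

```python
# A minimal sketch of aggregating a regional CT severity score in the style
# of Yang et al (2020): 20 lung regions, each scored 0, 1 or 2 by a
# radiologist. The per-region scores below are hypothetical.
region_scores = [0, 1, 2, 1, 0, 2, 2, 1, 0, 0,
                 1, 2, 0, 1, 1, 2, 0, 0, 1, 2]

assert len(region_scores) == 20
assert all(score in (0, 1, 2) for score in region_scores)

# Summing gives a total severity between 0 (no involvement) and 40 (maximal).
total_severity = sum(region_scores)
print(f"Total severity score: {total_severity}/40")
```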

However, a CT scan comprises a large number of slices, and quantitative analysis, where the exact amount of pathology is measured, is very time-consuming for an expert. A number of approaches have therefore been proposed for the detection of COVID-19 in chest X-rays (Wang et al, 2020 and Zhang et al, 2020), and other methods have begun exploring quantification of the disease in CT (Gozes et al, 2020). The hackathon team proposed to train a deep learning model to detect and quantify key radiology signs linked to COVID-19, such as regions of ‘ground glass opacity’ (a hazy appearance), ‘consolidation’ (completely obscured lung regions) and ‘pleural effusion’ (excess fluid between the lungs and the pleura, the tissues surrounding the lungs).

Automating the assessment of these key signs could provide fast and reproducible measurements for radiology experts. The Sensyne team’s aim was to assess the viability of automating such assessments by training a convolutional neural network to detect ground glass opacities, consolidation and pleural effusion from a small open-access annotated dataset of COVID-19 cases. This was performed entirely on publicly available data.
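To make the kind of measurement we have in mind concrete, here is a minimal sketch of turning a per-pixel segmentation into quantitative pathology burdens; the label convention and the helper function are illustrative assumptions, not part of the released code:

```python
# A minimal sketch of quantifying pathology burden from a segmentation mask:
# the percentage of lung pixels assigned to each sign. Label values follow
# an assumed convention (0 = background, 1 = ground glass opacity,
# 2 = consolidation, 3 = pleural effusion).
import numpy as np

LABELS = {1: "ground glass opacity", 2: "consolidation", 3: "pleural effusion"}

def pathology_burden(pred: np.ndarray, lung_mask: np.ndarray) -> dict:
    """Percentage of lung pixels carrying each pathology label."""
    lung_pixels = lung_mask.sum()
    return {
        name: 100.0 * ((pred == label) & lung_mask).sum() / lung_pixels
        for label, name in LABELS.items()
    }

# Hypothetical example on a synthetic 512x512 slice.
pred = np.zeros((512, 512), dtype=int)
pred[100:200, 100:200] = 1              # a patch of ground glass
lung = np.ones((512, 512), dtype=bool)  # trivial lung mask for illustration
print(pathology_burden(pred, lung))     # ~3.8% ground glass, 0% for the rest
```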

 

Model

The team used a standard U-Net model derived from the following implementation: https://github.com/zhixuhao/unet. This was adapted to a multi-class segmentation task using a softmax output and a categorical cross-entropy loss, with four channels output from the final layer corresponding to background, ground glass opacity, consolidation and pleural effusion (as defined in the dataset). The code is available here and will continue to be developed.
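A minimal sketch of that adaptation, assuming a Keras backbone in the style of the linked repository (the backbone handle and layer indexing are assumptions for illustration, not the released code):

```python
# A minimal sketch of swapping a U-Net's single-channel sigmoid head for a
# four-channel softmax head with a categorical cross-entropy loss. Assumes
# a Keras backbone in the style of https://github.com/zhixuhao/unet; the
# layer indexing is an assumption for illustration.
from tensorflow.keras import layers, Model

NUM_CLASSES = 4  # background, ground glass opacity, consolidation, pleural effusion

def add_multiclass_head(backbone: Model) -> Model:
    features = backbone.layers[-2].output             # feature map before the old head
    logits = layers.Conv2D(NUM_CLASSES, 1)(features)  # 1x1 conv -> 4 output channels
    probs = layers.Softmax(axis=-1)(logits)           # per-pixel class probabilities
    model = Model(backbone.input, probs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",    # expects one-hot pixel labels
                  metrics=["accuracy"])
    return model
```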

 

Dataset

Sensyne used a publicly available dataset of 100 axial slices from 60 patients, originally published by SIRM. The pre-processing and creation of the dataset are outlined in this blog post.

 

Datasets like these are hugely valuable to the clinical and AI communities. In particular, in this case, they allow the viability of building AI models for COVID-19 disease characteristics to be tested. This is, however, a small dataset of 100 individual slices, which limits validation. It also poses a larger limitation: slices were not mapped to individual patients, so slices in the training and validation sets may be derived from the same patient. A patient-level split, as sketched below, would avoid this leakage if the mapping were available.
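A minimal sketch of such a patient-level split with scikit-learn, assuming (hypothetically) that each slice carried a patient identifier:

```python
# A minimal sketch of a patient-level hold-out split using scikit-learn's
# GroupShuffleSplit. The slice-to-patient mapping below is hypothetical;
# the real dataset does not provide one, which is exactly the limitation.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
slice_ids = np.arange(100)                    # 100 axial slices
patient_ids = rng.integers(0, 60, size=100)   # hypothetical mapping to 60 patients

splitter = GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=0)
train_idx, val_idx = next(splitter.split(slice_ids, groups=patient_ids))

# Grouping guarantees that no patient contributes slices to both sets.
assert not set(patient_ids[train_idx]) & set(patient_ids[val_idx])
```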

 

Results

Ten percent of the slices were held back from training to create the validation set, and example results are shown below. The results are promising for capturing and interpreting COVID-related pathology. The two examples below show the lung CT slice, the automatic pathology detection and the ground truth (blue = ground glass, yellow = consolidation, red = pleural effusion). The examples are interesting because they show a case with predominantly consolidation (above) and a case with ground glass opacity (below).

NOTE: all images are generated by our algorithm, and derived from the CT slices and expert annotations that were provided by http://medicalsegmentation.com/covid19 and originally made public by https://www.sirm.org/en/category/articles/covid-19-database

In addition, cases from the test set without a known ground truth are shown below. These are interesting to review because no expert analysis exists for them, yet the algorithm appears to capture the pathology well.

NOTE: all images are generated by our algorithm, and derived from the CT slices and expert annotations that were provided by http://medicalsegmentation.com/covid19 and originally made public by https://www.sirm.org/en/category/articles/covid-19-database
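For readers who want to reproduce this kind of figure, here is a minimal sketch of overlaying a predicted label map on a CT slice using the colour scheme above; the slice and prediction are synthetic placeholders:

```python
# A minimal sketch of overlaying a predicted label map on a CT slice with
# the colour scheme from the figures: blue = ground glass, yellow =
# consolidation, red = pleural effusion. Both arrays are synthetic.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

ct_slice = np.random.rand(512, 512)        # placeholder CT slice
pred = np.zeros((512, 512), dtype=int)     # placeholder prediction
pred[150:250, 150:250] = 1                 # a patch labelled ground glass

cmap = ListedColormap(["blue", "yellow", "red"])   # labels 1, 2, 3
norm = BoundaryNorm([0.5, 1.5, 2.5, 3.5], cmap.N)

plt.imshow(ct_slice, cmap="gray")
plt.imshow(np.ma.masked_equal(pred, 0), cmap=cmap, norm=norm, alpha=0.4)
plt.axis("off")
plt.savefig("overlay.png", bbox_inches="tight")
```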

Limitations, Learnings and Next Steps

These early results are exciting because they show how AI tools could support decision-making with quantitative analysis. However, the work was developed over one weekend, and although it shows promise, it remains an early attempt at training a deep learning model for the characteristics of COVID-19.

The model has been trained and tested on only a limited number of CT slices. Critically, not knowing whether multiple slices belong to the same patient limits independent validation until genuinely independent cases can be acquired. Nevertheless, even if some slices were acquired from the same patient, the results illustrate the potential to quickly segment a whole volume from a limited number of manual annotations.

Such approaches appear to have a lot of potential, and we expect methods like these to reach clinical readiness; similar tools are also being developed in this space.

Of key importance will be access to datasets across multiple hospitals. This will allow models to be built that are robust to variations between CT scanners and that can later be validated to meet regulatory requirements. This might sound challenging, but because a radiologist can manually review and correct the automatic delineations, any risk of mislabelling can be managed.

A tool like this has the potential to improve the radiology workflow and provide quantitative and reproducible analysis of pathology burden, especially in situations like COVID-19 where every second of the clinicians’ time counts.  

We hope this post encourages more development in this area. In the meantime, we have open-sourced the initial results and the code for comment and to allow others to build on the work.

Ben Irving
Principal Machine Learning Researcher
Sensyne Health