AI-Based 3D Reconstruction from CT Scans

3D Reconstruction of Organ and Tumor from CT Scans

The Challenge

Surgical planning and simulation are established practice in hard-tissue procedures such as hip and knee replacements, but this is not yet the case for soft-tissue procedures.

Currently, the surgical community relies on PADUA and RENAL nephrometry scores for kidney cancer patients; these scores are generated manually from CT scans.

This project attempts to automatically generate 3D models of the kidney and tumor from CT scans of kidney cancer patients using deep learning, with the intent that these models become tools for pre-surgical planning.

Approach

01

Data Preparation: Starting from full-body CT scans, we cropped each scan to the abdominal region that covers the kidney and tumor.
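As a rough illustration of this step, the sketch below crops a NIfTI CT volume along one axis. The file paths, slice bounds, and the assumption that the axial direction is the last axis are all hypothetical; in practice the abdominal bounds would come from annotations or a localizer model rather than hard-coded values.

import nibabel as nib

def crop_abdomen(ct_path: str, out_path: str, z_start: int, z_stop: int) -> None:
    # Crop a CT volume along its last (assumed axial) axis and save the result.
    img = nib.load(ct_path)
    # .slicer keeps the affine consistent with the cropped voxel grid.
    cropped = img.slicer[:, :, z_start:z_stop]
    nib.save(cropped, out_path)

# Hypothetical usage:
# crop_abdomen("case_0001_ct.nii.gz", "case_0001_abdomen.nii.gz", 120, 360)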

02

Base Model Selection: Given its prior success on similar imaging modalities, we started with the nnUnet model and explored a few architectural variations on this dataset.
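For context, nnUnet-style networks are built from simple 3D convolutional blocks (convolution, instance normalization, leaky ReLU) arranged in a U-Net encoder-decoder. The PyTorch sketch below of a heavily reduced network is only meant to illustrate that model family; it is not the actual nnUnet code, and the depth and channel counts are illustrative.

import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    # Two 3x3x3 convolutions, each followed by instance norm and leaky ReLU.
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class TinyUNet3D(nn.Module):
    # One encoder/decoder level; real nnUnet configurations are much deeper.
    def __init__(self, in_ch: int = 1, n_classes: int = 3, base: int = 16):
        super().__init__()
        self.enc = ConvBlock(in_ch, base)
        self.down = nn.MaxPool3d(2)
        self.bottleneck = ConvBlock(base, base * 2)
        self.up = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec = ConvBlock(base * 2, base)
        self.head = nn.Conv3d(base, n_classes, kernel_size=1)  # background, kidney, tumor

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return self.head(d)

# Example: logits = TinyUNet3D()(torch.zeros(1, 1, 64, 64, 64))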

03

Model Training: We trained the cascade variant of nnUnet, which combines a low-resolution stage with a high-resolution refinement stage.
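The cascade idea can be sketched as a two-stage inference procedure: a low-resolution network first segments a downsampled copy of the whole volume, and its upsampled prediction is then fed to a high-resolution network as extra input channels. The sketch below assumes two already-trained PyTorch models and illustrative shapes; it is not the nnUnet implementation itself.

import torch
import torch.nn.functional as F

def cascade_predict(low_res_model, high_res_model, volume: torch.Tensor,
                    low_res_scale: float = 0.5) -> torch.Tensor:
    # volume: (1, 1, D, H, W) CT tensor; returns full-resolution logits.
    # Stage 1: coarse prediction on a downsampled copy of the scan.
    low = F.interpolate(volume, scale_factor=low_res_scale,
                        mode="trilinear", align_corners=False)
    coarse_probs = torch.softmax(low_res_model(low), dim=1)

    # Upsample the coarse probabilities back to the original voxel grid.
    coarse_up = F.interpolate(coarse_probs, size=volume.shape[2:],
                              mode="trilinear", align_corners=False)

    # Stage 2: refine at full resolution with the coarse map as extra channels.
    stacked = torch.cat([volume, coarse_up], dim=1)
    return high_res_model(stacked)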

04

Experiments: We converted the segmented kidney and tumor volumes into point clouds, introduced a small amount of random noise into the point locations, and computed a smooth surface using Laplacian surface reconstruction. From there we evaluated the quality of the segmentations.
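A rough sketch of that experiment, assuming the segmentation is available as a binary voxel mask: extract a surface mesh, perturb the vertex positions slightly, and smooth the result. Marching cubes plus trimesh's Laplacian filter are used here as a stand-in for the Laplacian surface reconstruction described above, and all parameters are illustrative.

import numpy as np
import trimesh
from skimage.measure import marching_cubes

def mask_to_smooth_mesh(mask: np.ndarray, noise_std: float = 0.3) -> trimesh.Trimesh:
    # Binary (D, H, W) mask -> smoothed triangle mesh.
    verts, faces, _, _ = marching_cubes(mask.astype(np.float32), level=0.5)
    # Perturb vertex locations slightly, as in the robustness experiment.
    verts = verts + np.random.normal(scale=noise_std, size=verts.shape)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces, process=True)
    # Laplacian smoothing pulls each vertex toward the mean of its neighbours.
    trimesh.smoothing.filter_laplacian(mesh, lamb=0.5, iterations=10)
    return mesh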

05

Infrastructure and Hardware: This project was completed on Google Cloud Platform (GCP), where we dynamically assigned compute resources, both CPU cores and GPUs, as needed for our computational workloads.

Results

Kidney Dice Coefficient: %
Tumor Dice Coefficient: %
Kidney and Tumor Average Dice Coefficient: %
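For reference, the Dice coefficient reported above measures the volumetric overlap between a predicted segmentation and the ground truth, where 1.0 means perfect overlap. A minimal NumPy sketch for binary masks (illustrative only, not the project's evaluation code):

import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))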

Conclusion

We observed that realistic kidney reconstructions from bulk CT scans are feasible with current state-of-the-art technologies.

A software reconstruction of a patient's kidney, coupled with surgical simulation, could enable non-invasive practice tools built on soft models of that patient's specific anatomy.

A sufficiently accurate model of a patient's anatomy could support a simulated environment in which a surgeon practices, records, or undoes surgical actions planned for that specific patient before performing them.

The surgeon could then execute the recorded plan on the actual patient, avoiding the pitfalls encountered within the simulation.

Featured Work

All Data Inclusive, Deep Learning Models to Predict Critical Events in the Medical Information Mart for Intensive Care III Database (MIMIC III)

Artificial Intelligence and Robotic Surgery: Current Perspective and Future Directions

Augmented Intelligence: A synergy between man and the machine

Building Artificial Intelligence (AI) Based Personalized Predictive Models (PPM)

Predicting intraoperative and postoperative consequential events using machine learning techniques in patients undergoing robotic partial nephrectomy (RPN)

Stereo Correspondence and Reconstruction of Endoscopic Data Challenge