On February 17th a panel discussion about the upcoming USCT Data Challenge 2019 took place at SPIE Medical Imaging 2019 in San Diego, California, USA. The panel discussion was chaired by Christian Boehm (ETH Zurich, Switzerland) and Nicole Ruiter (Karlsruhe Institute of Technology, Germany). As panelists we were happy to welcome:

  • Andreas Fichtner (ETH Zurich, Switzerland)
  • Marko Jakovljevic (Stanford University, USA)
  • Neb Duric (Delphinus Medical Technologies, USA)
  • Xiaoyue Fang (Huazhong University of Science and Technology, China)
  • Mohamed Almekkawy (Pennsylvania State University, USA)

Panelists at the SPIE Medical Imaging 2019 USCT data challenge panel discussion (from left to right): Andreas Fichtner (ETH Zurich, Switzerland), Marko Jakovljevic (Stanford University, USA), Neb Duric (Delphinus Medical Technologies, USA), Xiaoyue Fang (Huazhong University of Science and Technology, China), Mohamed Almekkawy (Pennsylvania State University, USA). Photo courtesy SPIE, Felicia Andreotta.

After a short introduction and recap of the USCT Data Challenge 2017 by Christian Boehm, the new data challenge 2019 was presented. Starting shortly after SPIE 2019, participants are invited to apply their image reconstruction methods to the synthetic data that will be provided. The results of the data challenge will be presented at the 2nd International Workshop on Medical Ultrasound Tomography (MUST), which will take place in Detroit, Michigan, USA in early fall 2019.


Christian Boehm and Nicole Ruiter presenting the USCT data challenge 2019 and a list of topics for the panel discussion. Photo courtesy SPIE, Felicia Andreotta.

Afterwards, panelists and the audience were invited to discuss several topics:

Evaluation criteria for the data challenge 2019

An open question is by which criteria submissions to the data challenge should be evaluated. The panelists widely agreed that a single metric will not be enough to compare the reconstructed images. As Neb Duric pointed out, the problem should be defined first: the aim of the challenge could either be to derive the best image quality independently of the computing time, or to judge methods also by their clinical applicability, i.e. by the computing time. Mohamed Almekkawy suggested ranking the metrics by significance: e.g., resolution might be more important than computing time.

Once the aim of the challenge is defined, several metrics should be specified, e.g. for image resolution, image contrast, computing time, and required computing power. However, as Marko Jakovljevic pointed out, depending on the dataset even a judgement by physicians might be necessary, as they are the primary target group who will later read the images. Andreas Fichtner also brought up that the confidence level of the reconstruction, i.e. how well a model describes the data, may also be an applicable metric.
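To give a flavor of what such image-based metrics could look like, here is a minimal sketch, assuming a known ground-truth sound-speed map and using RMSE and a simple lesion contrast as placeholder metrics; the actual evaluation criteria of the challenge are still to be defined.

```python
import numpy as np

def rmse(reconstruction, ground_truth):
    """Root-mean-square error between reconstruction and ground truth."""
    return np.sqrt(np.mean((reconstruction - ground_truth) ** 2))

def lesion_contrast(image, lesion_mask, background_mask):
    """Absolute difference of mean values inside the lesion vs. the background."""
    return abs(image[lesion_mask].mean() - image[background_mask].mean())

# Illustrative data: homogeneous background with a square inclusion (sound speed in m/s)
truth = np.full((128, 128), 1500.0)
truth[40:60, 40:60] = 1550.0
recon = truth + np.random.normal(0.0, 5.0, truth.shape)   # noisy "reconstruction"

lesion = np.zeros(truth.shape, dtype=bool)
lesion[40:60, 40:60] = True

print("RMSE [m/s]:", rmse(recon, truth))
print("Contrast [m/s]:", lesion_contrast(recon, lesion, ~lesion))
```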

Performance assessment

As already came up when discussing the evaluation criteria, the required computing time and computational resources should be part of the evaluation of submissions. Yet it is still under discussion whether a restriction on computing time or resources should be part of the data challenge. There was consensus that submitters should not be limited in computation time and that computing time should play a secondary role. Comparing computing times, on the other hand, may not be trivial due to different computational infrastructure, sequential versus parallel implementations, etc. Algorithm complexity or the FLOPS of the computing infrastructure may be taken into account. A contribution from the audience suggested that algorithms might compete during the MUST workshop on a defined machine; yet this might not be practical due to severe computational demands and/or organizational restrictions in disclosing the algorithms.
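As a rough idea of how computing time could at least be reported in a comparable way, the following is a minimal sketch, assuming each participant wraps a (here hypothetical) reconstruct function and reports wall-clock time together with basic information about the machine used; any FLOPS-based normalization, as mentioned above, would still have to be added on top.

```python
import time
import platform

def timed_reconstruction(reconstruct, data):
    """Run a reconstruction and return the image plus a simple timing report."""
    start = time.perf_counter()
    image = reconstruct(data)
    elapsed = time.perf_counter() - start
    report = {
        "wall_time_s": elapsed,
        "machine": platform.machine(),       # e.g. 'x86_64'
        "processor": platform.processor(),   # CPU description, platform dependent
        "python": platform.python_version(),
    }
    return image, report

# Example with a dummy reconstruction method
image, report = timed_reconstruction(lambda d: d, data=[0.0] * 1000)
print(report)
```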

How challenging should the challenge be?

Another open question is how challenging the synthetic data should be and which information should be disclosed. Andreas Fichtner suggested that a two-level approach might be applicable, in which both an error-free dataset and one containing errors are provided.

Data format

Currently there is no standard data format for storing and exchanging ultrasound RF data. Christian Boehm presented a proposal for a common data format, based on HDF5, which will be used for the data challenge 2019. Participants of the data challenge are encouraged to give feedback on the data format.
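To give an impression of what an HDF5-based exchange of ultrasound RF data could look like, here is a minimal sketch using h5py; the dataset names, attributes, and values below are purely illustrative and do not reflect the actual layout of the proposed challenge format.

```python
import numpy as np
import h5py

# Illustrative RF data: emitters x receivers x time samples
n_emitters, n_receivers, n_samples = 8, 16, 2048
rf_data = np.random.randn(n_emitters, n_receivers, n_samples).astype(np.float32)

with h5py.File("usct_example.h5", "w") as f:
    dset = f.create_dataset("rf_data", data=rf_data, compression="gzip")
    dset.attrs["sampling_rate_hz"] = 20e6                 # hypothetical value
    dset.attrs["dims"] = "emitter x receiver x time sample"
    f.create_dataset("emitter_positions", data=np.zeros((n_emitters, 3)))
    f.create_dataset("receiver_positions", data=np.zeros((n_receivers, 3)))

with h5py.File("usct_example.h5", "r") as f:
    print(f["rf_data"].shape, f["rf_data"].attrs["sampling_rate_hz"])
```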