Impact Statement
This empirical study addresses the pressing challenge posed by the escalating volume of biological microscopy imaging data and the resulting strain on existing data infrastructure. Effective image compression can significantly reduce data size without discarding necessary information, easing the burden on data-management infrastructure and enabling fast network transmission for data sharing or cloud computing. We investigate both classic and deep-learning-based image compression methods on 2D/3D grayscale bright-field microscopy images and assess their influence on a downstream analysis task. Our findings show that deep-learning-based techniques achieve higher compression ratios while preserving reconstruction quality and having little effect on downstream data analysis. Integrating deep-learning-based compression into existing bioimage analysis pipelines would therefore be highly beneficial for data sharing and storage.
1. Introduction
Image compression is the process of reducing the size of digital images while retaining the information useful for reconstruction. This is achieved by removing redundancies in the image data, resulting in a compressed version of the original image that requires less storage space and can be transmitted more efficiently. In many fields of research, including microscopy, high-resolution images are often acquired and processed, leading to significant challenges in terms of storage and computational resources. In particular, researchers in microscopy image analysis often face infrastructure limitations, such as limited storage capacity or network bandwidth. Image compression can help mitigate such challenges, allowing researchers to store and transmit images efficiently without compromising their quality and validity. Lossless image compression refers to techniques that preserve every bit of information and enable error-free reconstruction, ideal for applications where data integrity is paramount. However, its limited size reduction, such as a compression ratio of 2 $ \sim $ 3 as reported by Walker et al.,(Reference Walker, Li, Mcglothlin and Cai 1 ) is far from sufficient to alleviate the data explosion crisis. In this work, we focus on lossy compression methods, where some information loss may occur but significantly higher compression ratios can be achieved.
Image compression has historically been employed to reduce data burdens in various scenarios. For instance, the WebP format is used by web developers to enhance web performance by reducing webpage loading times.(Reference Ginesu, Pintus and Giusto 2 ) Similarly, Apple’s High Efficiency Image File (HEIF) format optimizes storage on mobile devices, improving data transmission and storage efficiency.(Reference Lainema, Hannuksela, Vadakital and Aksu 3 ) Despite lossy compression techniques (both classic and deep-learning-based) being widely employed in the computer vision field, their feasibility and impact in the field of biological microscopy images remain largely underexplored.
In this paper, we propose a two-phase evaluation pipeline, compression algorithm comparison and downstream task analysis in the context of microscopy images. To fully explore the impact of lossy image compression on downstream image analysis tasks, we employed a set of label-free models, a.k.a., in-silico labeling.(Reference Christiansen, Yang, Ando, Javaherian, Skibinski, Lipnick, Mount, O’Neil, Shah and Lee 4 ) A label-free model denotes a deep-learning approach capable of directly predicting fluorescent images from transmitted light bright-field images.(Reference Ounkomol, Seshamani, Maleckar, Collman and Johnson 5 ) Considering the large amount of bright-field images being used in regular biological studies, it is of great importance that such data compression techniques can be utilized without compromising the prediction quality.
Through intensive experiments, we demonstrated that deep-learning-based compression methods can outperform the classic algorithms in terms of compression ratio, post-compression reconstruction quality, and impact on the downstream label-free task, indicating their great potential in the bioimaging field. Meanwhile, we made a preliminary attempt to build 3D compression models and report the current limitations and possible future directions. Overall, we aim to raise awareness of the importance and potential of deep-learning-based compression techniques and to inform the strategic planning of future data infrastructure for bioimaging.
Specifically, the main contributions of this paper are:
1. Benchmark common classic and deep-learning-based image compression techniques in the context of 2D grayscale bright-field microscopy images.
2. Empirically investigate the impact of data compression on the downstream label-free tasks.
3. Expand the scope of the current compression analysis to 3D microscopy images.
The remainder of this paper is organized as follows: Section 2 introduces classic and deep-learning-based image compression techniques, followed by the method descriptions in Section 3 and experimental settings in Section 4. Results are presented in Section 5, discussion in Section 6, and conclusions in Section 7.
2. Related works
Classic data compression techniques have been well studied over the last few decades, with the development of JPEG,(Reference Wallace 6 ) a popular lossy compression algorithm since 1992, and its successors, JPEG 2000,(Reference Marcellin, Gormish, Bilgin and Boliek 7 ) JPEG XR,(Reference Dufaux, Sullivan and Ebrahimi 8 ) and so forth. In recent years, more powerful algorithms, such as limited error raster compression (LERC), have been proposed. Generally, the compression process involves the following steps: color transform (with optional downsampling), domain transform (e.g., the discrete cosine transform(Reference Ahmed, Natarajan and Rao 9 ) in JPEG), quantization, and further lossless entropy coding (e.g., run-length encoding or Huffman coding(Reference Huffman 10 )).
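The domain-transform and quantization steps can be illustrated with a short sketch: a naive 1D DCT-II (the transform used in JPEG) followed by uniform quantization. For a smooth image block, most high-frequency coefficients quantize to zero, which is exactly what the subsequent entropy coder exploits. The quantization step size of 10 is an arbitrary choice for illustration, not a value from any codec.

```python
import math

def dct2(x):
    """Naive 1D DCT-II (unnormalized), the transform at the heart of JPEG."""
    n = len(x)
    return [
        sum(v * math.cos(math.pi / n * (i + 0.5) * k) for i, v in enumerate(x))
        for k in range(n)
    ]

def quantize(coeffs, step=10.0):
    """Uniform quantization: the lossy step that discards small coefficients."""
    return [round(c / step) for c in coeffs]

# A smooth 8-sample block: after transform + quantization, only the first
# couple of coefficients survive; the rest become zeros that compress well.
signal = [100, 102, 104, 106, 108, 110, 112, 114]
q = quantize(dct2(signal))
```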
Recently, deep-learning-based image compression has gained popularity thanks to its significantly improved compression performance. Roughly speaking, a deep-learning-based compression model consists of two sub-networks: a neural encoder $ f $ that compresses the image data and a neural decoder $ g $ that reconstructs the original image from the compressed representation. In addition, the latent representation is further losslessly compressed by entropy coding techniques (e.g., arithmetic coding(Reference Rissanen and Langdon 11 )), as seen in Figure 1. Specifically, the latent vector $ \mathbf{y}=f\left(\mathbf{x}\right) $ will first be discretized into $ \mathbf{z} $ by a quantizer $ Q $ : $ \mathbf{z}=Q\left(\hskip0.3em f\left(\mathbf{x}\right)\right) $ . Afterward, $ \mathbf{z} $ will be encoded/decoded by the entropy coder ( $ \hskip0.3em {f}_e/{g}_e $ ) and decompressed by the neural decoder $ g $ : $ \hat{\mathbf{X}}=g\left({g}_e\left(\hskip0.3em {f}_e\left(\mathbf{z}\right)\right)\right) $ . The objective is to minimize a loss function expressing the rate–distortion trade-off(Reference Cover 12 , Reference Shannon 13 ):

$ \mathcal{L}=\mathcal{R}+\lambda \mathcal{D}=-{\log}_2P\left(\mathbf{z}\right)+\lambda \rho \left(\mathbf{x},\hat{\mathbf{X}}\right), $
where $ \mathcal{R} $ corresponds to the rate loss term, which reflects the compression ability of the system. $ P $ is the entropy model that provides the prior probability for entropy coding, and $ -{\log}_2P\left(\cdot \right) $ denotes the information content, which approximately estimates the optimal compression ability of the entropy encoder $ {f}_e $ , as defined by Shannon theory.(Reference Shannon 13 , Reference Shannon 14 ) $ \mathcal{D} $ is the distortion term, which controls the reconstruction quality, and $ \rho $ is a norm or perceptual metric, for example, MSE or MS-SSIM.(Reference Wang, Simoncelli and Bovik 15 ) The trade-off between the two terms is controlled by the scale hyper-parameter $ \lambda $ .
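The rate–distortion objective can be made concrete with a toy example. The sketch below quantizes a latent vector, estimates the rate term $ -{\log}_2P\left(\mathbf{z}\right) $ from the empirical symbol distribution (a stand-in for a learned entropy model), measures distortion as MSE, and entropy-codes the symbols with zlib in place of an arithmetic coder. The identity encoder/decoder and the value of $ \lambda $ are illustrative assumptions, not part of any real model.

```python
import math
import zlib
from collections import Counter

def rate_distortion(x, z_hat, x_hat, lam=0.01):
    """Toy rate-distortion objective: loss = R + lam * D.

    R is the average information content -log2 P(z) of the quantized
    symbols under their empirical distribution (standing in for a learned
    entropy model); D is the mean squared error rho(x, x_hat).
    """
    counts = Counter(z_hat)
    n = len(z_hat)
    rate = -sum(math.log2(counts[s] / n) for s in z_hat) / n  # bits/symbol
    dist = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    return rate + lam * dist, rate, dist

# Identity "networks" f and g with a uniform quantizer Q of step 0.25.
x = [0.12, 0.53, 0.48, 0.90, 0.11, 0.52]
z_hat = [round(v * 4) for v in x]   # z = Q(f(x))
x_hat = [s / 4 for s in z_hat]      # x_hat = g(z)

loss, rate, dist = rate_distortion(x, z_hat, x_hat)
# The quantized symbols are then losslessly entropy coded; here zlib
# stands in for an arithmetic coder.
bitstream = zlib.compress(bytes(z_hat))
```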
Because lossless entropy coding entails accurate modeling of the prior probability of the quantized latent representation $ P\left(\mathbf{z}\right) $ , Ballé et al.(Reference Ballé, Minnen, Singh, Hwang and Johnston 16 ) showed that statistical dependencies remain in the latent representation under a fully factorized entropy model, which leads to suboptimal performance and fails to adapt to all images. To improve the entropy model, Ballé et al. proposed a hyperprior approach,(Reference Ballé, Minnen, Singh, Hwang and Johnston 16 ) in which a hyper latent $ \mathbf{h} $ (also called side information) is generated by an auxiliary neural encoder $ {f}_a $ from the latent space $ \mathbf{y} $ : $ \mathbf{h}={f}_a\left(\mathbf{y}\right) $ . The scale parameter of the entropy model is then estimated by the auxiliary decoder $ {g}_a $ from the quantized and losslessly coded hyper latent $ \hat{\mathbf{h}} $ : $ \phi ={g}_a\left(\hat{\mathbf{h}}\right) $ , so that the entropy model can be adaptively adjusted to the input image $ \mathbf{x} $ , further improving the bit-rate. Minnen et al.(Reference Minnen, Ballé and Toderici 17 ) extended this work to obtain a more reliable entropy model by jointly combining the hyperprior with an autoregressive context model.
Besides improvements to the entropy model, much effort has also been devoted to enhancing the network architecture. Ballé et al.(Reference Ballé, Laparra and Simoncelli 18 ) replaced the usual ReLU activation with the proposed generalized divisive normalization (GDN) module to better capture image statistics. Johnston et al.(Reference Johnston, Eban, Gordon and Ballé 19 ) optimized the GDN module in a computationally efficient manner without sacrificing accuracy. Cheng et al.(Reference Cheng, Sun, Takeuchi and Katto 20 ) introduced skip connections and an attention mechanism. Transformer-based auto-encoders have also been reported for data compression in recent years.(Reference Zhu, Yang and Cohen 21 )
3. Methodology
We propose an evaluation pipeline to benchmark the performance of compression models in the bioimage field and to estimate their influence on the downstream label-free generation task. As illustrated in Figure 2, the pipeline contains two parts: a compression part, $ x\overset{g\circ f}{\to}\hat{x} $ , and a downstream label-free part, $ \left(x/\hat{x}\right)\overset{f_l}{\to}\left(y/\hat{y}\right) $ , where the former measures the rate–distortion performance of the compression algorithms and the latter quantifies their influence on the downstream task.
During the compression part, the raw image $ x $ is transformed into the reconstructed image $ \hat{x} $ through the compression algorithm $ g\hskip0.3em \circ \hskip0.3em f $ :

$ \hat{x}=g\left(\hskip0.3em f\left(x\right)\right), $
where $ f $ represents the compression process and $ g $ denotes the decompression process. Note that the compression method can be either a classic strategy (e.g., JPEG) or a deep-learning-based algorithm. The performance of the algorithm can be evaluated through its rate–distortion behavior, as explained in (1) to (3).
In the downstream label-free part, predictions are made by the model $ {f}_l $ from both the raw image $ x $ and the reconstructed image $ \hat{x} $ :

$ y={f}_l\left(x\right),\hskip2em \hat{y}={f}_l\left(\hat{x}\right). $
The influence of compression on the downstream task is measured by:

$ L=\left\{{\rho}_i\left({S}_i\right)\right\},\hskip1em {S}_i\in S,\hskip1em S=\left\{\left(u,v\right)|u,v\in V,u\ne v\right\},\hskip1em V=\left\{y,\hat{y},{y}_t\right\}, $

where the evaluation metric $ L $ is the collection of different metrics $ {\rho}_i $ applied to different image pairs $ {S}_i $ . $ V $ is the collection of the raw prediction $ y $ , the prediction from the reconstructed image $ \hat{y} $ , and the ground truth $ {y}_t $ ; $ S $ is formed by pairwise combinations of elements from $ V $ , and $ \rho $ is the metric used to measure the relation between an image pair. In this study, we utilized four metrics in total: learned perceptual image patch similarity (LPIPS),(Reference Zhang, Isola, Efros, Shechtman and Wang 22 ) SSIM, peak signal-to-noise ratio (PSNR), and Pearson correlation.
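Of the four metrics, PSNR and Pearson correlation are simple enough to sketch directly (LPIPS requires a pretrained network and SSIM windowed local statistics). The illustrative pure-Python version below treats images as flat vectors; it is a minimal sketch, not the implementation used in our experiments.

```python
import math

def psnr(x, y, data_range=255.0):
    """Peak signal-to-noise ratio between two same-sized image vectors."""
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

def pearson(x, y):
    """Pearson correlation coefficient between two image vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```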
To conclude, the proposed two-phase evaluation pipeline allows the compression performance of each algorithm to be fully estimated and its impact on the downstream task to be thoroughly investigated.
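The two-phase pipeline can be summarized as a small driver function; `compress`, `decompress`, and `label_free` are placeholder callables for whatever algorithm and model are under test, not names from our codebase.

```python
def evaluate_pipeline(x, compress, decompress, label_free, metrics):
    """Two-phase evaluation for one image.

    Phase 1 scores the reconstruction x_hat = decompress(compress(x));
    phase 2 scores how that reconstruction shifts the downstream
    label-free prediction.
    """
    x_hat = decompress(compress(x))               # compression part
    y, y_hat = label_free(x), label_free(x_hat)   # label-free part
    phase1 = {name: rho(x, x_hat) for name, rho in metrics.items()}
    phase2 = {name: rho(y, y_hat) for name, rho in metrics.items()}
    return phase1, phase2
```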
4. Experimental settings
4.1. Dataset
The dataset used in this study is the human-induced pluripotent stem cell single-cell image dataset(Reference Viana, Chen, Knijnenburg, Vasan, Yan, Arakaki, Bailey, Berry, Borensztejn and Brown 23 ) released by the Allen Institute for Cell Science. We utilized grayscale bright-field images and their corresponding fluorescent image pairs from the fibrillarin cell line, in which the dense fibrillar component of the nucleolus is endogenously tagged. For the 3D experiments, 500 samples were chosen from the dataset, with 395 used for training and the remaining 105 for evaluation. For the 2D experiments, the middle slice of each 3D sample was extracted, resulting in 2D slices of 624 × 924 pixels.
4.2. Implementation details
During the first, compression part of the proposed two-phase evaluation pipeline, we compared both classic methods and deep-learning-based algorithms. For classic compression, we employed the Python package “tifffile” to apply three classic image compression algorithms, JPEG 2000, JPEG XR, and LERC, focusing on level 8 for the highest image-quality preservation. To enhance compression efficiency, we used a 16 × 16-pixel tile-based approach, facilitating image data access during compression and decompression. This methodology enabled a thorough exploration of the storage-versus-image-quality trade-off.
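To illustrate the tile-based bookkeeping, the sketch below splits an 8-bit grayscale image into 16 × 16 tiles, compresses each tile independently so it can later be decoded without touching the rest of the image, and reports the overall compression ratio. Note that zlib here is only a lossless stand-in for the actual JPEG 2000/JPEG XR/LERC codecs, which we apply through “tifffile”.

```python
import zlib

TILE = 16

def tile_compress(img, tile=TILE):
    """Compress an 8-bit grayscale image (list of rows) tile by tile.

    Each tile is compressed independently, mirroring the tiled access
    pattern used during compression and decompression; zlib stands in
    for the codecs used in the paper.
    """
    h, w = len(img), len(img[0])
    tiles = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            block = bytes(
                img[i][j]
                for i in range(r, min(r + tile, h))
                for j in range(c, min(c + tile, w))
            )
            tiles.append(zlib.compress(block, level=8))
    raw_size = h * w
    comp_size = sum(len(t) for t in tiles)
    return tiles, raw_size / comp_size  # tiles and compression ratio

# A flat 64 x 64 image compresses very well under any lossless codec.
image = [[128] * 64 for _ in range(64)]
_, ratio = tile_compress(image)
```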
Regarding learning-based methods, six pre-trained models proposed in refs. (Reference Ballé, Minnen, Singh, Hwang and Johnston 16 , Reference Minnen, Ballé and Toderici 17 , Reference Cheng, Sun, Takeuchi and Katto 20 ) were applied in 2D compression, with each kind of model trained with two different metrics (MSE and MS-SSIM), resulting in 12 models in total. The pretrained checkpoints were provided by the CompressAI tool.(Reference Bégaint, Racapé, Feltman and Pushparaja 24 ) For the 3D scenario, an adapted bmshj2018-factorized compression model(Reference Ballé, Minnen, Singh, Hwang and Johnston 16 ) was trained and evaluated on our microscopy dataset. For the first 50 epochs, the MSE metric was employed in the reconstruction loss term, followed by the MS-SSIM metric for another 50 epochs to enhance image quality.
For the second, label-free generation part, the pretrained Pix2Pix 2D (with Fnet 2D as the generator) and Fnet 3D models were obtained from the mmv_im2im Python package.(Reference Sonneck, Zhou and Chen 25 ) All label-free 2D/3D models were trained on raw images. Detailed training recipes are listed in Supplementary Tables S3 and S4.
5. Results
In this section, we will present and analyze the performance of the image compression algorithms and their impact on the downstream label-free task, using the proposed two-phase evaluation pipeline.
5.1. Data compression results
First, we compared compression performance in the context of grayscale microscopic bright-field images, based on the first part of the evaluation pipeline. The results show that deep-learning-based compression algorithms perform well in terms of reconstruction quality and compression ratio in both 2D and 3D cases and outperform the classic methods.
The second to fourth rows in Table 1 and Supplementary Table S1 show the quantitative rate–distortion performance of the three traditional compression techniques. Although the classic method LERC achieved the highest scores on all quality metrics for the reconstructed image, it saves only 12.36% of the space, far less than the deep-learning-based methods. Meanwhile, JPEG-2000-LOSSY can achieve compression ratios comparable to the AI-based algorithms, but its quality metrics rank at the bottom, with only 0.158 in correlation and 0.424 in SSIM. These results clearly show that the classic methods cannot balance rate and distortion well.
First column: compression methods, with the second to fourth rows as the classic methods and the fifth to the last as the deep-learning-based methods. The second to last columns indicate the four metrics used to measure reconstruction ability: LPIPS (smaller is better), SSIM, correlation, and PSNR (larger is better).
In addition, the results from the deep-learning models are closely similar and favorable, as shown in Table 1 and Supplementary Table S1 from the fifth row onward. Figure 3 makes evident the trade-off between image quality and compression ratio. Notably, the “mbt2018-ms-ssim-8” method exhibits a slight advantage in terms of SSIM, achieving a value of 0.971, whereas the “mbt2018-mean-ms-ssim-8” method shows a slight edge in correlation, with a score of 0.987. In terms of compression ratio, “cheng2020-anchor-mse-6” outperforms the others, with a compression ratio of 47.298. A sample result is visualized in Figure 4.
As illustrated in Figure 5, the 3D compression result is visually plausible, and the quantitative evaluation metrics are listed in the first row of Table 4. The metrics are relatively high, reaching 0.922 in SSIM and 0.949 in correlation. Regarding the compression ratio, 97.74% of the space is saved.
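Compression ratio and “space saved” are two views of the same quantity, related by $ \mathrm{saved}=1-1/\mathrm{ratio} $ ; the tiny helper below captures the conversion behind figures such as these.

```python
def space_saved(compression_ratio):
    """Fraction of storage saved at a given compression ratio."""
    return 1.0 - 1.0 / compression_ratio

def ratio_from_saving(saved_fraction):
    """Compression ratio implied by a fractional space saving."""
    return 1.0 / (1.0 - saved_fraction)

# For example, saving 97.74% of the space corresponds to a compression
# ratio of roughly 44, and a ratio of 47.298 saves roughly 97.9%.
```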
In brief, the above findings suggest that deep-learning-based compression methods perform well on microscopy images and, on average, outperform the classic methods in terms of reconstruction ability and compression ratio.
5.2. Downstream label-free results
We also conducted an experiment to assess the impact of the aforementioned compression techniques on downstream AI-based bioimage analysis tasks, specifically the label-free task in our study (please refer to the Supplementary Case Study section for the analysis of additional downstream tasks). Our results indicate that in 2D cases, the prediction accuracy is higher when the input image is compressed using deep-learning-based methods, as opposed to traditional methods. Furthermore, this accuracy closely aligns with the predictions derived from the raw image, suggesting that deep-learning-based compression methods have a minimal impact on the downstream task.
Tables 2 and 3 show the influence of data compression on the downstream label-free task in 2D cases. Comparing the accuracy of predictions from compressed input against those from the original input (Table 2), we found that despite a slight degradation in correlation and PSNR, the average SSIM among deep-learning-based methods is close to that of the original prediction and surpasses the classic methods, with the “bmshj2018-hyperprior-ms-ssim-8” model reaching the highest value (0.752). Comparing the similarity between predictions from compressed images and from original images (Table 3), “mbt2018-ms-ssim-8” and LERC ranked highest in SSIM and correlation, respectively.
First column: compression methods, with the second to the fourth rows as the classic methods and fifth to the last as the deep-learning-based methods.
First column: compression methods, with the second to the fourth rows as the classic methods and fifth to the last as the deep-learning-based methods.
In 3D cases, the prediction from the compressed image is not comparable to that from the raw bright-field image (2.54 dB $ \downarrow $ in PSNR and 0.08 $ \downarrow $ in SSIM), as shown in the second and third rows of Table 4, indicating a quality downgrade during compression. This can be attributed primarily to compression not being considered during the training phase of the label-free model; notably, the accuracy gap is mitigated when the label-free model is also trained on compressed images. As illustrated in Figure 5, despite the visually plausible reconstruction, the information lost during compression heavily affects the downstream label-free generation task. For instance, the fibrillarin structure indicated by the arrow is missing from the prediction on the compressed image, although it is clearly visible in the corresponding prediction from the raw image.
The table evaluates both compression performance (first row) and its impact on downstream tasks (rows 2–4). In addition, it compares results from compressed training (fifth row).
Briefly, the above results suggest that in 2D cases the downstream task is only mildly affected when deep-learning-based methods are applied, whereas in 3D cases the prediction accuracy is substantially affected.
5.3. Label-free results with compressed training
Given that the 2D label-free models were all trained on raw, uncompressed images, it is also crucial to measure the impact of compression during the training phase on the downstream label-free task. For this purpose, we devised the following experiment: two label-free models were trained, one on raw uncompressed data and one on data compressed with the mbt2018 (MSE) model. We then compared the performance of these models on test images that were also compressed with the mbt2018 (MSE) model. As illustrated in Figure 6, we observed significant artifacts in the prediction when the model was not trained on the compressed data used as input, consistent with the relatively low quality metrics shown in Table 2. However, the artifacts were largely mitigated when the model was trained on data processed with the same compression algorithm, whose distribution is closer to that of the input. A similar phenomenon is observed in other AI-based compression scenarios (see Supplementary Table S2), where correlation improves when the label-free model is trained on compressed data. This highlights the importance of accounting for compression during training in order to achieve favorable outcomes.
6. Discussion
The AI-based compression methods used in the proposed evaluation pipeline have several shortcomings. First, in 2D cases, only pre-trained models were used; performance would likely improve if the compression models were fine-tuned on the microscopy dataset. In addition, to achieve optimal downstream task performance, the downstream model should also be trained on compressed data. This requirement restricts applicability when the model was already trained beforehand, which is often the case. Furthermore, the encoding and decoding latency is higher than that of traditional compression methods.
Regardless of these drawbacks, the potential for integrating image compression with current data guidelines, while emphasizing the preservation of original data, is promising. Bioimage storage platforms could leverage this approach by enabling users to download compressed latent representations for quick preview and assessment using an offline decoder. This strategy allows biologists to efficiently screen large datasets, conserving storage and bandwidth; researchers can then access the original high-resolution data for in-depth analysis when needed.
7. Conclusion
In this research, we proposed a two-phase evaluation pipeline to benchmark the rate–distortion performance of different data compression techniques in the context of grayscale microscopic bright-field images and fully explored the influence of such compression on the downstream label-free task. We found that AI-based image compression methods can significantly outperform classic compression methods while having only a minor influence on subsequent label-free model predictions. Despite some limitations, we hope that our work raises awareness of deep-learning-based image compression in the bioimaging field and provides insights into integrating it with other AI-based image analysis tasks.
Supplementary material
The supplementary material for this article can be found at http://doi.org/10.1017/S2633903X24000151.
Data availability statement
The codebase has been released at https://github.com/MMV-Lab/data-compression. The data are from the public hiPSC single cell image dataset from the Allen Institute for Cell Science: https://open.quiltdata.com/b/allencell/packages/aics/hipsc_single_cell_image_dataset. The checkpoints and configs are available at https://zenodo.org/records/13134355.
Acknowledgments
We are grateful for the technical assistance from the CompressAI team.
Author contribution
Conceptualization: J.Chen; Y.Z. Data Analysis: Y.Z; J.S. Writing original draft: Y.Z; J.S. Supervision: J.Chen. All authors approved the final submitted draft.
Funding statement
This research was supported by grants from the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) in Germany (grant number 161 L0272); the Ministry of Culture and Science of the State of North Rhine-Westphalia (Ministerium für Kultur und Wissenschaft des Landes Nordrhein-Westfalen, MKW NRW).
Competing interest
The authors declare no competing interests.
Ethical standard
The research meets all ethical guidelines, including adherence to the legal requirements of the study country.