Frei-Chen bases based lossy digital image compression technique

Mahmood Al-khassaweneh (Department of Computer and Mathematical Sciences, Lewis University, Romeoville, Illinois, USA) (Department of Computer Engineering, Yarmouk University, Irbid, Jordan)
Omar AlShorman (Department of Electrical Engineering, Faculty of Engineering, Najran University, Najran, Saudi Arabia) (Project Manager, AlShrouk Trading Company, Najran University, Najran, Saudi Arabia)

Applied Computing and Informatics

ISSN: 2634-1964

Article publication date: 29 July 2020

Issue publication date: 5 January 2024


Abstract

In the big data era, image compression is of significant importance. Compression of large images is required for everyday tasks, including electronic data communications and internet transactions. Two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and a modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied at the first stage, in which each 3 × 3 block is projected onto the nine bases; blocks for which the average subspace yields the highest energy are replaced by a single value that represents the average of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image while enhancing the compression factor. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the resultant decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.


Citation

Al-khassaweneh, M. and AlShorman, O. (2024), "Frei-Chen bases based lossy digital image compression technique", Applied Computing and Informatics, Vol. 20 No. 1/2, pp. 105-118. https://doi.org/10.1016/j.aci.2019.12.004

Publisher: Emerald Publishing Limited

Copyright © 2019, Mahmood Al-khassaweneh and Omar AlShorman

License

Published in Applied Computing and Informatics. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

In recent decades, many algorithms have been proposed in the field of digital data compression [1,2], all of which focus on reducing the redundancy of digital data to increase the efficiency of storing, processing, and transmitting it [3]. Image compression is an image processing application used to reduce the size of digital images [4,5]. The main goals are to allow more images to be stored on the same storage device and more images to be shared or transferred over the same link or network [6].

In image compression there are two main algorithms: the compression algorithm and the decompression algorithm. The result of the compression algorithm is the compressed image, which is smaller than the original image. The decompression algorithm, on the other hand, tries to reconstruct the original image from the compressed image [7,8]. All image compression algorithms aim at a small compressed image (a high compression factor) and a high-quality reconstructed image. The block diagram for image compression is shown in Figure 1. To reduce pixel redundancy, the input image is transformed into another format by applying a compression algorithm (compressor); the decompression algorithm is applied as the inverse of the compressor.

The efficiency of compression algorithms can be measured, depending on the application, using different criteria. The most important criterion is the compression factor, which measures the ratio between the size of the image before and after compression: the larger the compression factor, the more effective the compression algorithm [7]. The compression factor (R) is given by Eq. (1),

(1) R = S(o) / S(c)

where S(o) and S(c) are the sizes of the original and compressed images, respectively.
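Eq. (1) is easy to check numerically. As an illustrative sketch (the paper's experiments were run in MATLAB; this Python fragment is ours), using the 512 × 512 to 171 × 171 reduction reported later in Table 2:

```python
# Eq. (1) in code. The sizes below are taken from the paper's Table 2
# (a 512 x 512 image whose first-stage output is 171 x 171 pixels).
def compression_factor(original_size, compressed_size):
    """R = S(o) / S(c): ratio of original size to compressed size."""
    return original_size / compressed_size

R = compression_factor(512 * 512, 171 * 171)
print(round(R, 4))  # -> 8.9649
```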

Even though a high compression factor is desirable, the quality of the reconstructed image should still be high for the compression algorithm to be usable. Therefore, there is a tradeoff between the compression factor and the compression quality. In other words, a high compression factor should be achieved while producing a decompressed image with minimal distortion [9].

Compression algorithms are divided into two categories based on the decompressed (reconstructed) image. If the reconstructed image is identical to the original one, the compression is lossless; otherwise it is considered lossy. Lossless techniques [10–12] are used when the quality of the decompressed image is the most important criterion. In lossless techniques, also called reversible techniques [9], the correlation between the original and decompressed images is 1; the decompressed image keeps all the details of the original image [13]. In lossy compression techniques, on the other hand, some details are lost during compression/decompression, and thus it is impossible to reconstruct the exact original image from the compressed one. Lossy compression algorithms usually achieve higher compression factors than lossless algorithms. They are commonly used in loss-tolerant applications such as multimedia images [14,15], where the loss of some image data does not cause problems, especially when it is not noticeable to the human eye.

Recently, many algorithms for data and image compression have been proposed. For example, mixed Discrete Cosine Transform (DCT) and Huffman Coding techniques are used in Joint Photographic Experts Group (JPEG) and Moving Picture Experts Group (MPEG) compression algorithms [16].

Furthermore, variable temporal-length coding is used in the three-dimensional discrete cosine transform (3-D DCT) technique [17], mixed Wavelet Transform and RLE techniques [18], the Wavelet Transform [19], lossy JPEG [20,21], lossless JPEG [22], mixed block optimization and RLE [23], Geometric Wavelets (GW) [24] and compression for join-based graph mining algorithms [25]. In addition to image compression [26,27], the Run Length Encoding (RLE) algorithm is used in other applications such as fingerprint analysis [28], motion detection [29], data compression [30,31], video compression [32,33], edge detection in real-time image processing [34,35], and scanning and pattern recognition [36].

In this paper, we integrate Frei-Chen bases and Run-Length Encoding techniques to achieve high compression factors and high-quality decompressed images. The proposed method is applied to grayscale images; however, it can be extended to RGB images. Thus, the Frei-Chen bases technique and the modified Run Length Encoding (RLE) are applied to compress images. The Frei-Chen bases technique is applied at the first stage of compression, in which the nine pixels in most 3×3 blocks of the image are replaced by a single value that represents the average value of these pixels. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image. Additionally, the Frei-Chen bases technique enhances the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the resultant decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate.

2. Methodology

2.1 Frei-Chen basis technique

Frei-Chen bases [37] were originally developed for boundary detection. They were suggested by Werner Frei and Chung-Ching Chen to recognize edge and line features in digital images [38].

In general, the Frei-Chen technique consists of nine orthonormal bases (wi), where i = 1, 2, …, 9. Each basis represents certain features of the image. Any 3×3 sub-image can be written as a weighted sum of these nine orthonormal (see Note 1) Frei-Chen bases (also known as masks). The Frei-Chen bases can be classified into three subspaces: edge, line and average subspaces.

2.1.1 Edge subspace

There are four bases for the edge subspace: w1, w2, w3 and w4. The first pair represents the isotropic average gradient, while the second pair represents the ripple gradient, as shown in Figures 2a and 2b. These bases are used to detect vertical or horizontal edges in the image.

2.1.2 Line subspace

The next four bases, w5, w6, w7 and w8, represent the line subspace, where w5 and w6 represent directional lines and w7 and w8 represent the discrete Laplacian. Figures 3a and 3b show the line subspace bases.

2.1.3 Average subspace

Finally, w9, which is shown in Figure 4, is used to compute the average of an area. When applied to images, this basis gives equal weight to all image pixels.

Therefore, each Frei-Chen basis reveals different details about the image and hence can be used, as we will see, in image compression. Table 1 summarizes the Frei-Chen basis characteristics [37].
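For concreteness, the nine masks (scaled by the orthonormality constants listed in Note 1) can be written out and checked for orthonormality. This NumPy sketch is ours, not the authors' implementation:

```python
import numpy as np

s2 = np.sqrt(2.0)

# The nine Frei-Chen masks with their orthonormality constants:
# 1/sqrt(8) for w1-w4, 1/2 for w5-w6, 1/6 for w7-w8, 1/3 for w9.
W = [
    np.array([[1, s2, 1], [0, 0, 0], [-1, -s2, -1]]) / np.sqrt(8),  # w1: edge (gradient)
    np.array([[1, 0, -1], [s2, 0, -s2], [1, 0, -1]]) / np.sqrt(8),  # w2: edge (gradient)
    np.array([[0, -1, s2], [1, 0, -1], [-s2, 1, 0]]) / np.sqrt(8),  # w3: edge (ripple)
    np.array([[s2, -1, 0], [-1, 0, 1], [0, 1, -s2]]) / np.sqrt(8),  # w4: edge (ripple)
    np.array([[0, 1, 0], [-1, 0, -1], [0, 1, 0]]) / 2.0,            # w5: line
    np.array([[-1, 0, 1], [0, 0, 0], [1, 0, -1]]) / 2.0,            # w6: line
    np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]]) / 6.0,          # w7: discrete Laplacian
    np.array([[-2, 1, -2], [1, 4, 1], [-2, 1, -2]]) / 6.0,          # w8: discrete Laplacian
    np.ones((3, 3)) / 3.0,                                          # w9: average
]

# Orthonormality check: <wi, wj> = 1 if i == j, else 0.
G = np.array([[np.sum(wi * wj) for wj in W] for wi in W])
print(np.allclose(G, np.eye(9)))  # -> True
```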

2.2 Frei-Chen expansion

As mentioned earlier, Frei-Chen bases are masks that can be applied to an image after dividing it into subimages (blocks) of size 3×3. The projection onto each Frei-Chen basis represents a specific characteristic of each subimage. The basis that results in the highest projection value represents the characteristic that best describes that block of the image [38]. This means that we can replace the nine pixel values with one value that corresponds to the basis with the highest projection.

To explain the advantages of using Frei-Chen bases, suppose we have an image I of size N×N. The total energy (T) of image I is given by:

(2) T = ⟨I, I⟩

where ⟨·,·⟩ denotes the inner product (element-wise multiplication followed by summation). The energy (E) for the projection onto each Frei-Chen basis is given by:

(3) Ei = ⟨wi, B⟩

where B is a block (subimage) of the image I. The basis that produces the highest energy indicates which subspace the block belongs to. Using the fact that the Frei-Chen bases are orthonormal, we can reconstruct the block B from the projections Ei = ⟨wi, B⟩ by:

(4) B = Σ_{i=1}^{9} Ei wi = Σ_{i=1}^{9} ⟨wi, B⟩ wi

Equation (4) shows that a block B can be fully reconstructed from its projections onto the Frei-Chen bases. Moreover, the bases with the highest projections would be sufficient to recover B with little distortion.
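This completeness property is easy to verify numerically. Since it depends only on having nine orthonormal 3×3 masks, the sketch below (ours; an arbitrary orthonormal basis built by QR decomposition stands in for the Frei-Chen set) reconstructs a random block exactly from its nine projections:

```python
import numpy as np

# Eq. (4): a 3x3 block is fully recoverable from its nine projection
# coefficients. This holds for ANY complete orthonormal set of nine 3x3
# masks (the Frei-Chen bases are one such set); here a QR decomposition
# of a random 9x9 matrix supplies an illustrative orthonormal basis.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((9, 9)))
W = [q.reshape(3, 3) for q in Q.T]        # nine orthonormal 3x3 masks

B = rng.integers(0, 256, (3, 3)).astype(float)  # a random image block
E = [np.sum(w * B) for w in W]            # projections, Eq. (3)
B_rec = sum(e * w for e, w in zip(E, W))  # reconstruction, Eq. (4)
print(np.allclose(B_rec, B))  # -> True
```

The same check also confirms Parseval's relation: the energies of the nine projections sum to the energy of the block.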

2.3 Frei-Chen compression technique

In this paper, Frei-Chen bases are used for image compression as a first stage. We use the fact that, in natural images, neighboring pixels are very close in value to each other. This means that the projection onto the average subspace, w9, is the highest for most blocks, and thus it is sufficient to estimate a block B using w9 alone. In this case, Eq. (4) reduces to,

(5) B ≈ E9 w9 = ⟨w9, B⟩ w9

This means that the nine pixels which represent block B can be replaced by a single value that corresponds to the projection of w9. The first stage of compression can be summarized as follows:

  • 1. The original grayscale image I of size N×N is divided into blocks, B, of size 3×3.

  • 2. Each block is projected onto all Frei-Chen bases.

  • 3. For each block, if the energy of the w9 projection is the largest, we calculate the average grayscale value (X) of the pixels of that block and replace the block with it. Otherwise, the block pixels remain unchanged.

  • 4. Steps 1–3 are repeated for all blocks.

The Frei-Chen compression technique is summarized in Algorithm 1.
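A minimal NumPy sketch of this stage (an illustrative re-implementation; the paper's code was written in MATLAB) is:

```python
import numpy as np

# Sketch of the first compression stage (Section 2.3): every 3x3 block
# whose average-basis (w9) projection carries the most energy is
# replaced by its mean value; all other blocks are left untouched.
s2 = np.sqrt(2.0)
W = [
    np.array([[1, s2, 1], [0, 0, 0], [-1, -s2, -1]]) / np.sqrt(8),  # w1
    np.array([[1, 0, -1], [s2, 0, -s2], [1, 0, -1]]) / np.sqrt(8),  # w2
    np.array([[0, -1, s2], [1, 0, -1], [-s2, 1, 0]]) / np.sqrt(8),  # w3
    np.array([[s2, -1, 0], [-1, 0, 1], [0, 1, -s2]]) / np.sqrt(8),  # w4
    np.array([[0, 1, 0], [-1, 0, -1], [0, 1, 0]]) / 2.0,            # w5
    np.array([[-1, 0, 1], [0, 0, 0], [1, 0, -1]]) / 2.0,            # w6
    np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]]) / 6.0,          # w7
    np.array([[-2, 1, -2], [1, 4, 1], [-2, 1, -2]]) / 6.0,          # w8
    np.ones((3, 3)) / 3.0,                                          # w9
]

def frei_chen_stage(img):
    out = img.astype(float).copy()
    n, m = out.shape
    for r in range(0, n - n % 3, 3):
        for c in range(0, m - m % 3, 3):
            blk = out[r:r + 3, c:c + 3]
            E = [np.sum(w * blk) ** 2 for w in W]   # projection energies
            if np.argmax(E) == 8:                   # w9 (average) dominates
                out[r:r + 3, c:c + 3] = blk.mean()  # step 3 of the scheme
    return out

# A near-constant block collapses to its mean; a strong edge is preserved.
flat = np.array([[10, 11, 10], [10, 10, 11], [9, 10, 10]], dtype=float)
edge = np.array([[0, 0, 0], [0, 0, 0], [255, 255, 255]], dtype=float)
out = frei_chen_stage(np.hstack([flat, edge]))
```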

2.4 Run length encoding (RLE) technique

After applying Frei-Chen bases in the first stage, the resultant image looks close to the original image. The only difference is that, for those blocks in which the projection onto w9 is the highest, the pixel values are replaced by a single value (X) which represents the average of the nine pixels. Therefore, the image resulting from the first stage contains many adjacent repeated pixels.

To make use of this repetition and redundancy, we propose to use the well-known Run-Length Encoding (RLE) algorithm to further compress the image. RLE is a lossless data compression technique used to reduce the number of redundant bits. The main advantage of using the lossless RLE algorithm is that it increases the compression factor without introducing new distortion to the decompressed image.

The RLE algorithm for image compression is similar to that for data compression. The image resulting from the Frei-Chen stage is scanned to find runs of similar pixels. This scan can be performed horizontally, vertically or in Zig-Zag order, as shown in Figure 5.

To enhance the RLE algorithm, the Block-Block scanning technique is used. This technique is expected to be more efficient since the first stage produces blocks in which all pixels have the same value X. The Block-Block scanning is shown in Figure 6. The image is divided into blocks; the pixels inside each block are scanned either vertically or horizontally, and the blocks themselves are also scanned vertically or horizontally. In this paper, blocks are scanned vertically and the pixels inside each block are scanned horizontally. The result of this scan is stored in a vector which contains the pixel values and their runs. In the resultant vector, runs of the same pixel value are merged to reduce the size of the vector, as shown in Figure 6.
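The scanning-plus-RLE stage can be sketched as follows (our illustrative Python with hypothetical helper names; the exact scan order in the paper follows Figure 6):

```python
import numpy as np

# Sketch of the second stage (Section 2.4): Block-Block scanning
# followed by run-length encoding.
def block_scan(img, k=3):
    """Blocks taken top-to-bottom within each column of blocks;
    pixels inside a block read row by row (horizontally)."""
    n, m = img.shape
    seq = []
    for c in range(0, m, k):
        for r in range(0, n, k):
            seq.extend(img[r:r + k, c:c + k].ravel().tolist())
    return seq

def rle_encode(seq):
    """Encode a sequence as [value, run-length] pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decode(runs):
    """Inverse of rle_encode (used in the first decompression stage)."""
    return [v for v, n in runs for _ in range(n)]

# Two constant 3x3 blocks (the typical first-stage output) compress to
# just two runs: 18 pixel values become 2 (value, run) pairs.
img = np.hstack([np.full((3, 3), 5), np.full((3, 3), 7)])
seq = block_scan(img)
runs = rle_encode(seq)
print(runs)  # -> [[5, 9], [7, 9]]
```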

To reconstruct the image, a two-stage decompression algorithm is applied in reverse order. In the first stage, the vector resulting from the RLE algorithm is used to reconstruct all blocks of the image.

The Frei-Chen bases are then applied to obtain the final decompressed image, as shown in Figure 7.

3. Results and discussion

In this section, we present the experimental results obtained by applying the proposed compression algorithm. The algorithm was implemented and tested using MATLAB. Several images were tested: the average compression ratio over all images is 8.965 with a standard deviation of 0.01, and the correlation factor and Mean Square Error (MSE) were 0.952 and 0.123, respectively. In this paper, however, we show the results for the Barbara, Boat, Cameraman, Lena and Mandrill images shown in Figure 8.

The results of applying the first stage of the proposed compression algorithm using Frei-Chen bases are shown in Table 2. The compression ratio for all images was around 9. This is expected since, in the first stage, we replace the nine pixels of each block in which the average subspace w9 has the highest projection with one value, the average of those pixels. If the projection onto w9 were the highest for every block, the compression ratio would be exactly 9. Moreover, in natural images adjacent pixels have very close values, so the w9 projection is the highest in most blocks, which explains why the compression ratio is close to 9.

Since the first stage replaces nine pixel values with one value, it introduces some distortion when reconstructing the image. To study this distortion, we use two measures: the correlation and the mean square error (MSE) between the original and reconstructed images. The correlation measures how close the two images are to each other: a value of 1 means that both images are identical, while a value close to 0 means they are very different. The MSE measures how much error the Frei-Chen stage introduces. The MSE is given by,

(6) MSE = (1/(N·M)) Σ_i (Pi − Qi)²

where Pi is the pixel value from the decompressed image, Qi is the pixel value from the original image, and N and M are the numbers of rows and columns, respectively. The correlation and MSE results for all test images are shown in Table 3. They show that the first stage of the proposed algorithm produces a highly correlated decompressed image with small MSE.

3.1 RLE compression results

In the second stage of the proposed algorithm, we apply RLE. RLE is a lossless algorithm, so the correlation and MSE between the decompressed and original images are identical to those of the first stage, as shown in Table 3.

RLE is applied to the image resulting from the first stage using different block sizes. Table 4 shows the resulting RLE compression ratios for block sizes 3×3, 5×5 and 7×7.

Table 5 shows the compression factor of the first stage (Frei-Chen) and the second stage (RLE with block size 7×7). It is clear that most of the compression comes from the first stage. To find the overall compression ratio (RT), we use Eq. (7),

(7)RT=RFC×RRLE
where RFC and RRLE are the compression ratios of the Frei-Chen and RLE stages, respectively. The overall compression ratio RT, MSE, and PSNR are listed in Table 6. As mentioned earlier, the MSE values do not change when RLE is applied because it is a lossless algorithm.

Peak signal-to-noise ratio (PSNR) [39,40] is calculated using Eq. (8),

(8) PSNR = 10 log10( 255² / [ (1/(N·M)) Σ_i (Pi − Qi)² ] ) = 10 log10(255² / MSE)
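The evaluation metrics of Eqs. (6) and (8), together with the overall ratio of Eq. (7), can be sketched as follows (our Python helpers with hypothetical names, not part of the paper):

```python
import numpy as np

def mse(P, Q):
    """Eq. (6): mean squared error between decompressed P and original Q."""
    return float(np.mean((np.asarray(P, float) - np.asarray(Q, float)) ** 2))

def psnr(P, Q):
    """Eq. (8): PSNR in dB for 8-bit images (peak value 255)."""
    return 10.0 * np.log10(255.0 ** 2 / mse(P, Q))

def correlation(P, Q):
    """Pearson correlation between two images; values near 1 indicate
    a reconstruction very close to the original."""
    return float(np.corrcoef(np.ravel(P), np.ravel(Q))[0, 1])

# Eq. (7): the overall ratio is the product of the two stage ratios.
# Cameraman from Tables 2 and 4: 8.9649 (Frei-Chen) x 2.321 (RLE).
print(round(8.9649 * 2.321, 4))  # -> 20.8075 (matches Table 6)
```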

To show the effectiveness of the proposed algorithm, we compared it with several algorithms. Tables 7 and 8 show the compression factor R for different compression algorithms applied to the Lena and Cameraman images.

As shown in the results, the proposed algorithm outperforms the other algorithms by a big margin.

4. Conclusion

In this paper, a new algorithm for digital image compression is proposed. The proposed algorithm consists of two stages: a Frei-Chen bases stage and a Run-Length Encoding stage. The main focus of this algorithm is to obtain a higher compression factor while decreasing the distortion of the decompressed image.

The experimental results showed the efficiency of the proposed algorithm in terms of compression factor and MSE. The Frei-Chen stage provides a high compression factor while preserving high correlation values. To improve the compression factor further, the Frei-Chen bases are combined with the well-known RLE. The results of the proposed algorithm outperform other algorithms in the area of image compression. In the future, the proposed compression algorithm can be extended and applied to RGB and medical images.

Figures

Figure 1. Block Diagram for Image Compression/Decompression.

Figure 2a. Edge Subspace (w1 and w2): Isotropic average gradient basis.

Figure 2b. Edge Subspace (w3 and w4): The ripple basis.

Figure 3a. Line Subspace (w5 and w6): Directional line.

Figure 3b. Line Subspace (w7 and w8): Discrete Laplacian.

Figure 4. Average Subspace (w9): Average mask.

Figure 5. Horizontal, Vertical and Zig-Zag scans.

Figure 6. Block-Block scanning technique.

Figure 7. Compression/Decompression outlines.

Figure 8. Test images: a) Barbara, b) Boat, c) Cameraman, d) Lena and e) Mandrill.

Table 1. Frei-Chen basis characteristics.

Basis | Block feature
w1 | Vertical gradient
w2 | Horizontal gradient
w3 | Vertical ripple
w4 | Horizontal ripple
w5 | Vertical line
w6 | Horizontal line
w7 | Vertical discrete Laplacian
w8 | Horizontal discrete Laplacian
w9 | Constant area

Table 2. Frei-Chen compression results.

Image | Original size | Compressed size | R
Barbara | 512 × 512 | 171 × 171 | 8.9649
Boat | 512 × 512 | 171 × 171 | 8.9649
Cameraman | 512 × 512 | 171 × 171 | 8.9649
Mandrill | 225 × 225 | 75 × 75 | 9.0000
Lena | 512 × 512 | 171 × 171 | 8.9649

Table 3. Frei-Chen compression correlation and MSE.

Image | R | Correlation factor | MSE
Barbara | 8.9649 | 0.9512 | 0.1435
Boat | 8.9649 | 0.9620 | 0.0983
Cameraman | 8.9649 | 0.9712 | 0.1261
Mandrill | 9.0000 | 0.8644 | 0.0960
Lena | 8.9649 | 0.9799 | 0.0616

Table 4. RLE results for all block sizes.

Image | R (3×3) | R (5×5) | R (7×7)
Barbara | 1.2019 | 1.1943 | 1.2328
Boat | 1.4475 | 1.4760 | 1.4760
Cameraman | 2.0429 | 2.0977 | 2.321
Mandrill | 1.2873 | 1.2288 | 1.2779
Lena | 1.5344 | 1.4905 | 1.5710

Table 5. R for Frei-Chen and RLE.

Image | R (Frei-Chen) | R (RLE)
Barbara | 8.9649 | 1.2328
Boat | 8.9649 | 1.4760
Cameraman | 8.9649 | 2.321
Mandrill | 9.0000 | 1.2779
Lena | 8.9649 | 1.5710

Table 6. Final results.

Image | R (total) | MSE | Correlation factor | PSNR
Barbara | 11.0519 | 0.1435 | 0.9512 | 56.5622
Boat | 13.2321 | 0.0983 | 0.9620 | 58.2052
Cameraman | 20.8075 | 0.1261 | 0.9712 | 57.1236
Mandrill | 11.5074 | 0.0960 | 0.8644 | 58.3080
Lena | 14.0838 | 0.0616 | 0.9799 | 60.2350

Table 7. Comparison with other compression techniques for Lena.

Technique | R
Proposed method | 14.083
Fractal based [41] | 13.33
Chinese remainder theorem [42] | 6
Artificial bee colony and genetic algorithms [43] | 10
Fuzzy C-means-based JPEG algorithm [44] | 12.77
Searchless fractal image coding [45] | 12.8

Table 8. Comparison with other compression techniques for Cameraman.

Technique | R
Proposed method | 20.80
Neural networks-multilayer perceptrons [46] | 70.75
Curvelet, ridgelet and wavelet [47] | 8.19
Global Structure Transformation [48] | 7
JPEG standard Jesse's scheme [49] | 5.45

Algorithm 1: Frei-Chen compression technique

1: Input: image I, Frei-Chen bases (w1, …, w9)
2: for every 3×3 block B in image I do
3:   project B onto w1, …, w9
4:   if the w9 projection is the largest then
5:     replace B by its average value
6:   else
7:     keep B unchanged
8:   end if
9: end for

Notes

1.

All Frei-Chen bases are multiplied by orthonormality constants. These constants are: 1/√8 (i.e., 1/(2√2)) for w1, w2, w3 and w4; 1/2 for w5 and w6; 1/6 for w7 and w8; and 1/3 for w9.

References

[1] Awdhesh K. Shukla, Akanksha Singh, Balvinder Singh, Amod Kumar, A secure and high-capacity data-hiding method using compression, encryption and optimized pixel value differencing, IEEE Access 6 (2018) 51130–51139.

[2] V.A. Kokovin, S.U. Uvaysov, S.S. Uvaysova, Real-time sorting and lossless compression of data on FPGA, in: 2018 Moscow Workshop on Electronic and Networking Technologies (MWENT), IEEE, 2018, pp. 1–5.

[3] J.F. Kennedy, Random projection and orthonormality for lossy image compression, Image Vis. Comput. 25 (2007) 754–766.

[4] Xuesen Shi, Yuyao Shen, Yongqing Wang, Li Bai, Differential-clustering compression algorithm for real-time aerospace telemetry data, IEEE Access 6 (2018) 57425–57433.

[5] Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, George Toderici, Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4385–4393.

[6] Khursheed Aurangzeb, Musaed Alhussein, Mattias O'Nils, Data reduction using change coding for remote applications of wireless visual sensor networks, IEEE Access (2018).

[7] Omar Al-Shorman, Lossy Digital Image Compression Technique using Run-length Encoding and Frei-Chen Basis, Master's thesis, Yarmouk University, 2012.

[8] V. Crnojevic, V. Senk, Z. Trpovski, Lossy Lempel-Ziv algorithm for image compression, in: Proc. 2003 IEEE Telecommunications in Modern Satellite, Cable and Broadcasting Services Conf., pp. 522–525.

[9] L. Mengyi (2006). Fundamental Data Compression (1st ed.) [Online]. Available: http://www.elsevier.com.

[10] E. Esfahani, S. Samavi, N. Karimi, S. Shirani, Near-lossless image compression based on maximization of run length sequences, in: Proc. 2007 IEEE ICIP Conf., pp. 177–180.

[11] L. Kau, Y. Lin, Least squares-adapted edge-look-ahead prediction with run-length encodings for lossless compression of images, in: Proc. 2008 IEEE ICASSP Conf., pp. 1185–1188.

[12] N. Memon, D. Neuhoff, S. Shende, An analysis of some common scanning techniques for lossless image coding, IEEE Trans. Image Processing 9 (2000) 1837–1848.

[13] M. Al-Wahaib, K. Wong, A lossless image compression algorithm using duplication free run-length coding, in: Proc. 2010 IEEE Network Applications, Protocols and Services Conf., pp. 245–250.

[14] D. Marpe, G. Blattermann, J. Ricke, P. Maab, A two-layered wavelet-based algorithm for efficient lossless and lossy image compression, IEEE Trans. Circuits Syst. Video Technol. 10 (2000) 1094–1102.

[15] F. Payan, M. Antonini, Mean square error approximation for wavelet-based semiregular mesh compression, IEEE Trans. Visualizat. Computer Graphics 12 (4) (2006) 649–657.

[16] G. Lakhani, Optimal Huffman coding of DCT blocks, IEEE Trans. Circuits Systems Video Technol. 14 (2004) 522–527.

[17] Y. Chan, W. Siu, Variable temporal-length 3-D discrete cosine transform coding, IEEE Trans. Image Process. 6 (1997) 758–763.

[18] B. Rajoub, An efficient coding algorithm for the compression of ECG signals using the wavelet transform, IEEE Trans. Biomed. Eng. 49 (2002) 355–362.

[19] F. Marino, T. Acharya, L. Karam, Wavelet-based perceptually lossless coding of R-G-B images, Integr. Comput.-Aided Eng. 7 (2) (2000) 117–134.

[20] Adwitiya Mukhopadhyay, Ashil Raj, Rony P. Shaji, LRJPEG: a luminance reduction based modification for JPEG algorithm to improve medical image compression, in: 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), IEEE, 2018, pp. 617–623.

[21] Alessandro Artusi, Rafal K. Mantiuk, Thomas Richter, Pavel Korshunov, Philippe Hanhart, Touradj Ebrahimi, Massimiliano Agostinelli, JPEG XT: A compression standard for HDR and WCG images [standards in a nutshell], IEEE Signal Process Mag. 33 (2) (2016) 118–124.

[22] G. Carvajal, B. Penna, E. Magli, Unified lossy and near-lossless hyperspectral image compression based on JPEG 2000, IEEE J. Geosci. Remote Sens. (2008) 593–597.

[23] A. Banerjee, A. Halder, An efficient dynamic image compression algorithm based on block optimization, byte compression and run-length encoding along Y-axis, in: Proc. 2010 IEEE ICCSIT Conf., pp. 356–360.

[24] G. Chopra, A.K. Pal, An improved image compression algorithm using binary space partition scheme and geometric wavelets, IEEE Trans. Image Processing 20 (2011) 270–275.

[25] Mostofa Kamal Rasel, Young-Koo Lee, On-the-fly output compression for join-based graph mining algorithms, IEEE Access 6 (2018) 64008–64022.

[26] S. Aviran, P. Siegel, J. Wolf, Optimal parsing trees for run-length coding of biased data, IEEE Trans. Inf. Theory 54 (Feb. 2008) 841–849.

[27] W. Berghorn, T. Boskamp, M. Lang, H. Peitgen, Context conditioning and run-length coding for hybrid, embedded progressive image coding, IEEE Trans. Image Process. 10 (2001) 1791–1800.

[28] J. Shin, H. Hwang, S. Chiena, Detecting fingerprint minutiae by run length encoding scheme, Pattern Recogn. 39 (2006) 1140–1154.

[29] F. Ercal, A systolic image difference algorithm for RLE-compressed images, IEEE Trans. Parallel Distributed Syst. 11 (2000) 433–443.

[30] A.H. El-Maleh, Test data compression for system-on-a-chip using extended frequency-directed run-length code, in: Proc. 2008 IET Comput. Digit. Tech. Conf., pp. 155–163.

[31] B. Ye, Q. Zhao, D. Zhou, X. Wang, M. Luo, Test data compression using alternating variable run-length code, Integration VLSI J. 44 (2011) 103–110.

[32] Shiqi Wang, Xinfeng Zhang, Xianming Liu, Jian Zhang, Siwei Ma, Wen Gao, Utility-driven adaptive preprocessing for screen content video compression, IEEE Trans. Multimedia 19 (3) (2017) 660–667.

[33] Li Li, Zhu Li, Xiang Ma, Haitao Yang, Houqiang Li, Advanced spherical motion model and local padding for 360-degree video compression, IEEE Trans. Image Process. (2018).

[34] C. Messom, G. Gupta, S. Demidenko, Hough transform run length encoding for real-time image processing, IEEE Trans. Instrum. Measurem. 56 (2007) 962–967.

[35] C.H. Messom, S. Demidenko, K. Subramaniam, G. Sen Gupta, Size/position identification in real-time image processing using run length encoding, in: Proc. 2002 IEEE Instrumentation and Measurement Technology Conf., pp. 1055–1059.

[36] L. He, Y. Chao, K. Suzuki, A run-based two-scan labeling algorithm, IEEE Trans. Image Processing 17 (2008) 749–756.

[37] W. Frei, Chung-Ching Chen, Fast boundary detection: a generalization and a new algorithm, IEEE Trans. Comput. C-26 (10) (1977) 988–998, https://doi.org/10.1109/TC.1977.1674733.

[38] Rae-Hong Park, A Fourier interpretation of the Frei-Chen edge masks, Pattern Recogn. Lett. 11 (9) (1990) 631–636, https://doi.org/10.1016/0167-8655(90)90016-U.

[39] R. Kumar, U. Patbhaje, A. Kumar, An efficient technique for image compression and quality retrieval using matrix completion, J. King Saud Univ. – Comp. Inf. Sci. (2019), https://doi.org/10.1016/j.jksuci.2019.08.002.

[40] U. Patbhaje, R. Kumar, A. Kumar, H.-N. Lee, Compression of medical image using wavelet based sparsification and coding, in: 4th IEEE Int. Conf. on Signal Processing and Integrated Networks (SPIN 2017), Noida, India, 2017, pp. 394–398.

[41] S.K. Roy, S. Kumar, B. Chanda, B.B. Chaudhuri, S. Banerjee, Fractal image compression using upper bound on scaling parameter, Chaos, Solitons Fractals 106 (2018) 16–22.

[42] Tejas Duseja, Maroti Deshmukh, Image compression and encryption using Chinese remainder theorem, Multimed. Tools Appl. 78 (12) (2019) 16727–16753, https://doi.org/10.1007/s11042-018-7023-0.

[43] A. Ahamed, C. Eswaran, R. Kannan, Lossy image compression based on vector quantization using artificial bee colony and genetic algorithms, Adv. Sci. Lett. 24 (2) (2018) 1134–1137.

[44] V. Kakollu, G. Narsimha, P.C. Reddy, Fuzzy C-means-based JPEG algorithm for still image compression, in: Smart Intelligent Computing and Applications, Springer, 2019, pp. 447–458.

[45] D.J. Jackson, H. Ren, X. Wu, K.G. Ricks, A hardware architecture for real-time image compression using a searchless fractal image coding method, J. Real-Time Image Proc. 1 (3) (2007) 225–237.

[46] I. Vilovic, An experience in image compression using neural networks, in: Proceedings ELMAR 2006, IEEE, 2006, pp. 95–98.

[47] M. Joshi, R. Manthalkar, Y. Joshi, Image compression using curvelet, ridgelet and wavelet transform, a comparative study, ICGST-GVIP J., 2008, pp. 25–34.

[48] M. Mahasree, D.A. Pabi, P. Aruna, N. Puviarasan, Adoption of Global Structure Transformation in lossy image compression based on Curvelet and Cosine transforms, Int. J. Innov. Res. Comp. Commun. Eng. (2017).

[49] J.D. Kornblum, Using JPEG quantization tables to identify imagery processed by software, Digital Investigation (2008) S21–S25, 2008 Digital Forensic Research Workshop, Elsevier Ltd, https://doi.org/10.1016/j.diin.2008.05.004.

Acknowledgements

Declaration of Competing Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Publisher's note: The publisher wishes to inform readers that the article "Frei-Chen bases based lossy digital image compression technique" was originally published by the previous publisher of Applied Computing and Informatics and the pagination of this article has been subsequently changed. There has been no change to the content of the article. This change was necessary for the journal to transition from the previous publisher to the new one. The publisher sincerely apologises for any inconvenience caused. To access and cite this article, please use Al-khassaweneh, M., AlShorman, O. (2020), "Frei-Chen bases based lossy digital image compression technique", Applied Computing and Informatics, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1016/j.aci.2019.12.004. The original publication date for this paper was 02/01/2020.

Corresponding author

Mahmood Al-khassaweneh can be contacted at: khassaweneh@ieee.org
