1 Introduction

Digital watermarking technology [1,2,3] is an important subfield of today’s information security [4,5,6]. Digital image watermarking has emerged as a significant research focus within digital watermarking and plays an important role in content authentication [7] and copyright protection for digital images [8, 9]. However, most traditional digital watermarking algorithms [10,11,12] require the watermark to be embedded into the carrier. This embedding method has two significant drawbacks: it degrades the original image’s quality, and it is difficult to balance robustness and imperceptibility.

The use of medical images in therapy and medical decision-making is becoming increasingly important as medical imaging technologies advance [13, 14]. However, processing and sharing medical images may weaken data security and increase copyright infringement; hence, securing these images has been an important concern in recent years. Medical image transfer requires confidentiality, integrity, and authenticity. Unauthorized use of these images may disclose patient data. Furthermore, because these images are sensitive to minor changes, an incorrect diagnosis could endanger patients’ lives. For these reasons, researchers have addressed medical image protection with traditional embedded watermarking.

Watermarking technologies in medical images hide the personal information of patients, safeguarding their privacy and guaranteeing secure information transmission [15]. However, medical images have distinct properties. The majority of medical images are gray-scale, single-channel images, and a small change in otherwise very similar background tissue may indicate a specific disease. Any minor alteration might mutilate a medical image and adversely affect a doctor’s diagnosis [16, 17]. Because of this specificity, these issues are challenging to overcome using typical medical watermarking methods. As a result, experts do not want to protect medical images by direct embedding with lossy watermarking techniques. To preserve the integrity of original medical images, lossless watermarking methods have been introduced, such as zero watermarking [18,19,20,21,22,23,24] and reversible watermarking [25,26,27]. Zero-watermark techniques provide a significant advantage over reversible watermarking: no information is embedded inside the original image, so zero watermarking preserves the original medical image from tampering. Reversible watermarking, on the other hand, embeds a watermark and later restores the original image. Furthermore, reversible watermarking techniques are mostly employed to verify image content, whereas zero-watermarking techniques safeguard copyright. High robustness to numerous forms of attack and improved imperceptibility are the main benefits of zero-watermark approaches [28]. Because changes to medical images are not permitted, zero watermarking has drawn the attention of researchers. Fierro-Radilla [29] recently opened a new research direction in zero watermarking by presenting a zero-watermarking technique based on a CNN. Following [29], Han et al. [30] and Mohamed et al. [31] used the VGG19 deep convolutional neural network to present robust zero-watermarking approaches.

This paper proposes a new zero-watermarking algorithm for medical color images using the Resnet50 deep convolutional neural network, which has excellent intrinsic properties. Compared with other CNNs, Resnet50 increases network depth and uses an alternating structure of numerous nonlinear activation layers and convolution layers, which is beneficial for extracting precise features. Consequently, after the preprocessing procedure, we simply use the convolution and max-pooling layers of the pre-trained Resnet50 to extract deep feature maps from color medical images, rather than performing image classification tasks. In contrast to other zero-watermarking algorithms, the suggested scheme extracts high-level features from color medical images to increase the zero watermark’s resistance to geometric attacks. Additionally, a chaotic system based on the logistic and Gaussian chaotic maps, known as LOGAM [32], is used to ensure the proposed scheme’s high level of security. This paper’s contributions and novelty are as follows:

  1. The proposed scheme uses a deep convolutional neural network, Resnet50, to extract robust essential features of medical color images for zero watermarking.

  2. To improve security and equalization, the feature images are scrambled and the original watermark is encrypted using a novel approach, LOGAM [32].

  3. Resnet50 is used to create the feature images, and its beneficial intrinsic properties give the proposed scheme significantly increased robustness against geometric attacks and common signal-processing attacks.

  4. Experimental results demonstrate that the proposed scheme achieves a good balance between robustness and imperceptibility and outperforms other algorithms in terms of security and robustness.

In this paper, the feature map of the original medical color image is first created by the pre-trained CNN (Resnet50) using the output of the convolutional layer with the maximum number of features, known as “res5c_branch2b”; then, using a secure chaotic sequence created by LOGAM, the extracted features are binarized. Next, the binarized features are combined with the owner’s watermark sequence: an XOR operation is performed on the scrambled binarized features of the image and the encrypted binary watermark digits to create a verification key of ownership, also known as a zero watermark.

The rest of this paper is organized as follows. Section 2 reviews the existing digital image watermarking approaches. Section 3 provides preliminary information used in the context of our work. Section 4 describes the proposed zero-watermarking algorithm for medical color images. Section 5 contains an in-depth experimental analysis and comparison. Section 6 concludes this paper and makes suggestions for further research.

2 Literature study

Many approaches and techniques for watermarking medical images have been developed, particularly for transmission across various healthcare networks. A blind watermarking approach for safeguarding medical images was proposed by Thakkar and Srivastava [33] using DWT-SVD combinations; their method was applied in telemedicine applications. Rayachoti et al. [34] developed a watermarking approach based on a region of interest (ROI) to protect medical images in telemedicine using the singular value decomposition (SVD) and Slantlet transform (SLT). For secure telemedicine transmission, Moad et al. [35] introduced a wavelet-based medical image watermarking approach. Yan et al. [36] suggested a multi-medical image watermarking approach using the brainstorming optimization technique and a quantum random walk. For medical image authentication and tampered-pixel detection, Sanivarapu [37] presented a robust medical watermarking tamper detection approach in the transform domain. Based on Schur decomposition and chaotic sequences, Soualmi et al. [38] presented a blind watermarking approach for medical images. Singh et al. [39] suggested a hybrid region-based watermarking technique to assure the validity, authorization, integrity, and confidentiality of medical images transferred over a public network in IoMT. Kahlessenane et al. [40] employed Schur decomposition and four discrete frequency-domain transformations to incorporate image collection data and patient information into the image. Zheng et al. [41] proposed a convolution kernel feature extraction-based robust watermarking method for medical images. Li et al. [42] proposed a robust medical image watermarking algorithm that integrated the discrete cosine transform (DCT) and log-polar transform (LPT), allowing for the lossless embedding of patient information in medical images.
Singh et al. [43] implemented WatMIF, a secure and efficient watermarking approach that depends on multimodal medical image fusion.

The suggested approach comprises three main components: multimodal medical image fusion, host media encryption, and fused mark embedding and extraction. All of the above approaches rely on lossy direct embedding. Furthermore, because they do not achieve geometric invariance, these approaches are vulnerable to geometric attacks.

Zero watermarking has been a significant research topic in the medical field as an efficient means of protecting medical images and safeguarding the copyright of digital images. Xiao et al. [44] developed an approach utilizing improved cellular neural networks and singular value decomposition to overcome the issue of diagonal distortion. An effective medical image watermarking method was developed by Wu et al. [45]; in this method, vectors of low-frequency sub-bands are constructed by employing the discrete cosine transform (DCT), and multi-scale image features are obtained by utilizing the contourlet transform. This method has been proven to be robust in medical applications. Qin et al. [46] employed an RSA pseudo-random sequence with curvelet-DCT, extracting the most concentrated energy from medical images to generate feature vectors. They used the RSA technique to encrypt the watermark, which improved patient privacy protection. To obtain image characteristics of medical images, Wu et al. [47] presented a method that combined singular value decomposition, curvelet transform, and discrete wavelet transform; they also made use of subdivision blocks to further increase the algorithm’s stability. A zero-watermarking approach based on Hessenberg decomposition and the nonsubsampled shearlet transform (NSST) was proposed by Xue et al. [48]. They created a feature matrix by executing operations on the image, including block Hessenberg decomposition and the NSST transformation, and then combined it with QR codes to create a zero-watermark image, which significantly increased resistance to cropping and rotation attacks. Liu et al. [49] developed a technique for medical imaging that uses the DCT with the dual-tree complex wavelet transform to create feature sequences; they used the logistic chaos encryption technique to improve security. Xia et al. [50] created a zero-watermarking method using FoRHFMs; in this method, IoRHFMs were extended to FoRHFMs, which significantly increased robustness and numerical stability. Vaidya et al. [51] presented a watermarking approach using a hybrid transform. This approach improved robustness and imperceptibility against image attacks by combining the best features of the local binary pattern (LBP), DWT, and lifting wavelet transform (LWT) in the hybrid domain. Fang et al. [52] developed watermarking algorithms that extract visual features using an improved Bandelet and DCT (Bandelet-DCT) and incorporate the scale-invariant feature transform (SIFT) for data preparation, which effectively resists several types of attacks. In IoMT applications, for medical image copyright protection and security, Magdy et al. [53] used multi-channel fractional Legendre Fourier moments (MFrLFMs) to provide multiple zero-watermarking approaches without distorting the original medical images. MFrLFMs are frequently used because of their great precision, geometric invariance, resilience to numerous attacks, and numerical stability. Zeng et al. [54] suggested a zero-watermarking approach for medical images based on KAZE-DCT. First, KAZE-DCT is employed to extract medical image feature vectors, and then perceptual hashing is utilized to create medical image feature sequences. The multi-watermark images are then encrypted using chaotic mapping, and the watermarks are embedded and extracted using the zero-watermarking technique. Lastly, the correlation coefficient is utilized to assess the relationship between the extracted and embedded watermarks. A zero-watermarking technique for color medical images utilizing 1D Chebyshev chaotic features and accurate MFrGHMs was provided by Khafaga et al. [55] to accomplish lossless copyright protection. Dai et al. [56] successfully combined the complementary benefits of zero and reversible watermarking in a hybrid reversible zero-watermarking (HRZW) approach to secure medical images. Ali et al. [57] suggested an efficient hybrid method for digital medical images using fragile zero watermarking; visual cryptography and chaotic randomization are key components of their method for preventing unauthorized information breaches.

3 Preliminaries

This section explains Resnet50 feature extraction and the logistic Gaussian map in detail.

3.1 Resnet50 feature extraction

Resnet50 is a 50-layer residual network trained on ImageNet-1k at 224 × 224 resolution. He et al. [58] first introduced it in their paper “Deep Residual Learning for Image Recognition.” The Resnet50 model accepts inputs of size 224 × 224 × 3 or larger, and its convolutional layers mostly use 3 × 3 filters and follow two simple design rules: layers that produce output feature maps of the same size use the same number of filters, and if the feature-map size is halved, the number of filters is doubled to preserve the time complexity per layer. The model ends with an average pooling layer and a fully connected layer with a 1000-way softmax.

If \({T}_{j}\) and \({\delta }_{j}\) are the weights and biases of the jth convolution layer, the feature may be retrieved as follows:

$${Z}_{j}^{\text{out}}=S\left({T}_{j}*{Z}_{j}^{\text{in}}+{\delta }_{j}\right),$$
(1)

where \({Z}_{j}^{\text{in}}\) and \({Z}_{j}^{\text{out}}\) indicate the feature maps input and output, respectively, and \(S\) denotes the rectified linear unit (ReLU). The Resnet50 network architecture, depicted in Fig. 1, is utilized in our zero-watermark technique. The feature map was created using the output of the “res5c_branch2b” layer, as shown in Fig. 1.
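As a minimal illustration of Eq. (1), a single-channel convolution layer (implemented, as in CNN frameworks, as a cross-correlation) followed by ReLU can be sketched in plain Python. The toy helper `conv2d_relu` below is illustrative only and is not part of Resnet50 itself:

```python
def relu(x):
    # S in Eq. (1): the rectified linear unit
    return max(0.0, x)

def conv2d_relu(z_in, kernel, bias):
    """Eq. (1) for one channel: cross-correlate z_in with the kernel,
    add the bias, and apply ReLU ('valid' padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(z_in), len(z_in[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = bias  # delta_j in Eq. (1)
            for a in range(kh):
                for b in range(kw):
                    s += kernel[a][b] * z_in[i + a][j + b]
            row.append(relu(s))
        out.append(row)
    return out
```

A real Resnet50 layer applies this per output channel over many input channels; the sketch keeps only the structure of Eq. (1).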

Fig. 1

ResNet50 network architecture utilized to extract the feature map

3.2 Logistic Gaussian map

3.2.1 Logistic map

The logistic map is a simple and widely studied chaotic dynamical system. In a 1976 publication, biologist Robert May presented the map, in part as a discrete-time demographic model akin to the logistic equation initially developed by Pierre François Verhulst. The logistic map is theoretically expressed by the equation:

$${u}_{i+1}=r{u}_{i}\left(1-{u}_{i}\right),$$
(2)

wherein \({u}_{i}\in \left[0, 1\right]\) denotes the discrete state of the resulting chaotic sequence, and \(r\) denotes the control parameter, with values ranging from \(0\) to 4. When \(r\in (3.569945972\ldots , 4]\), the logistic map operates chaotically.
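Eq. (2) translates directly into a few lines of Python; the function name `logistic_map` is ours:

```python
def logistic_map(u0, r, n):
    """Iterate Eq. (2), u_{i+1} = r * u_i * (1 - u_i), and return n values."""
    seq, u = [], u0
    for _ in range(n):
        u = r * u * (1.0 - u)
        seq.append(u)
    return seq
```

For \(r < 4\), every iterate stays inside \([0, 1]\), since \(r\,u(1-u)\le r/4\).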

3.2.2 Gaussian map

The chaotic Gaussian map (GM) is formally expressed as follows [32]:

$${u}_{i+1}=\text{exp}\left(-\mu {u}_{i}^{2}\right)+c,$$
(3)

where \(\mu \in [4.7, 17]\) and \(c\in [-1, 1]\) are the control parameters. This map, often known as the mouse map, results from several mathematical assumptions and approximations of the Gaussian noise function [59]. On its own it is insufficient, because the system has only a small range of chaotic intervals.

3.2.3 Logistic Gaussian map

The improved dynamical system emerges from the LOGAM established in [32] and may theoretically be expressed as:

$$ {u_{(i + 1)}} = ( - (r - 33){u_i}(1 - {u_i}) + \left( {\frac{r + 37}{4}} \right) + \exp ( - \mu u_i^2))\bmod 1, $$
(4)

where \({u}_{i}\in \left[0, 1\right], r\in \left[0, 4\right], \mu \in \left[4.7, 17\right].\)
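Eq. (4) can be iterated directly. The sketch below assumes the stated parameter ranges and simply returns the raw chaotic sequence; the triple \((u_0, r, \mu)\) plays the role of a secret key:

```python
import math

def logam(u0, r, mu, n):
    """Iterate the LOGAM map of Eq. (4) and return n chaotic values in [0, 1)."""
    seq, u = [], u0
    for _ in range(n):
        u = (-(r - 33.0) * u * (1.0 - u)      # scaled logistic term
             + (r + 37.0) / 4.0               # offset term
             + math.exp(-mu * u * u)) % 1.0   # Gaussian term, wrapped mod 1
        seq.append(u)
    return seq
```

The `mod 1` wrap keeps every iterate in \([0, 1)\), and the same key always reproduces the same sequence, which is what the scrambling and diffusion steps of Sect. 4 rely on.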

4 Proposed zero-watermarking scheme

The proposed scheme is organized into two stages: “embedding and encryption” and “verification and decryption.” The objective of zero-watermark generation is to produce a zero watermark from an important feature of the original image, and the purpose of zero-watermark verification is to validate the original image’s copyright. The proposed scheme’s phases are described in the following two subsections.

4.1 Embedding and encryption scheme

In the experiment, our technique uses a binary image with a specific meaning, \(W\), of size \(P\times Q\) as the initial watermark image, and a color medical image \(I\) of size \(M\times N\) as the carrier. Let \(P=Q=64\) and \(M=N=512\) for ease of computation. Algorithm 1 and Fig. 2 show the watermark embedding technique. The major steps are listed below:

Fig. 2

Overall schematic architecture of the proposed zero-watermark algorithm

(1) The deep feature maps \(FM(k,l,p)\) of the original medical color image \(I\left(i,j\right)\) are extracted using the pre-trained Resnet50:

$$ I\left( {i,j} \right) \to {\text{Resnet50}} \to FM\left( {k,l,p} \right), $$
(5)

where the feature map \(FM\) produced by the Resnet50 network architecture seen in Fig. 1 has matrix dimensions denoted by \(k\), \(l\), and \(p\), \(1\le k\le 16, 1\le l\le 16, 1\le p\le 512\).

(2) We build the \(BF\) feature matrix by randomly selecting \(P\times Q\) features from the feature maps and then binarizing each one:

$$BF\left(r\right)=\left\{\begin{array}{cc}1& \text{if}\,\, F\!M\left(r\right)\ge M\!E,\\ 0& \text{otherwise}.\end{array}\right.$$
(6)

where \(r\) is a random index, \(1\le r\le 16\times 16\times 512\), and \(ME\) is the mean of the selected features.
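Step (2) can be sketched on a flattened feature map. The `seed` argument below is an illustrative stand-in for however the random selection is keyed (the text does not specify this), and the threshold is the mean \(ME\) of the selection, as in Eq. (6):

```python
import random

def binarize_features(fm_flat, pq, seed):
    """Randomly select pq features from the flattened feature maps and
    threshold each at the mean ME of the selection (Eq. 6 sketch)."""
    rng = random.Random(seed)                       # fixed key -> fixed choice
    picked = rng.sample(range(len(fm_flat)), pq)    # pq random indices r
    vals = [fm_flat[r] for r in picked]
    me = sum(vals) / len(vals)                      # ME in Eq. (6)
    return [1 if v >= me else 0 for v in vals]
```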

(3) Binary feature sequence permutation. A decimal chaotic sequence \({CS}_{3}\), created by LOGAM with the secret key \(S{K}_{3}\), is used to scramble the binary feature sequence \(BF\) into \(B{F}_{s}\), which is then reshaped into a \(P\times Q\) matrix (as seen on the left side of Fig. 3). Figure 3 depicts the architecture for using LOGAM in the proposed zero-watermarking scheme. Encryption of the watermark (shown in the right section of Fig. 3) employs bit-operation diffusion and pixel-level scrambling to randomly confuse the pixel coordinates and modify the watermark image’s bit values. Assuming the watermark \(W\) is of size \(P\times Q\), the procedure for watermark encryption using LOGAM is as follows.

Fig. 3

Implementation of the LOGAM framework in the zero-watermarking method

(1) Using the secret keys \(S{K}_{1}=({u}_{0}^{1},{r}^{1}, {\mu }^{1})\), the chaotic system (4) is executed for \(P\times Q\) iterations to obtain \(P\times Q\) values of \({u}_{i+1}\).


    Algorithm 1 Watermarking embedding scheme

(2) A chaotic decimal sequence \(C{S}_{1}\) of length \(P\times Q\) is generated. Similarly, the chaotic decimal sequences \(C{S}_{2}\) and \({CS}_{3}\) are built using \(S{K}_{2}=({u}_{0}^{2},{r}^{2}, {\mu }^{2})\) and \(S{K}_{3}=({u}_{0}^{3},{r}^{3}, {\mu }^{3})\), respectively.

(3) The following equation transforms the decimal chaotic sequence \({CS}_{2}\) into a chaotic binary sequence:

    $${CS}_{2}\left(i\right)=\left\{\begin{array}{cc}1& \text{when}\; {CS}_{2}\left(i\right)\ge ME,\\ 0& \text{when}\; {CS}_{2}\left(i\right)<ME,\end{array} 1\le i\le P\times Q,\right.$$
    (7)

where \(ME\) is the mean of the decimal chaotic sequence \({CS}_{2}\).

(4) The decimal chaotic sequence \(C{S}_{1}\) is sorted in ascending order, and the corresponding index vector \(L=({L}_{1},{L}_{2},\dots , {L}_{P\times Q})\) is computed.

(5) To obtain the shuffled watermark \({W}_{c}\), the original watermark \(W\) is reshaped into a 1D array \({W}_{b}\) of length \(P\times Q\) and then scrambled at the pixel level using the entries of the index vector \(L\).

(6) To generate the encrypted version \({W}_{sc}\), a bit-level XOR operation is applied to the shuffled watermark \({W}_{c}\) using the following formula:

    $${W}_{sc}={W}_{c}\oplus {CS}_{2},$$
    (8)

in which the symbol \(\oplus \) stands for an exclusive OR operator.

(4) Zero-watermark signal construction. To generate the zero-watermark signal \({W}_{\text{zero}}\), an XOR operation is performed on the scrambled binary feature matrix \({BF}_{s}\) and the encrypted watermark image \({W}_{sc}\), as shown below.

$${W}_{\text{zero}}={W}_{sc}\oplus {BF}_{s},$$
(9)

in which the symbol \(\oplus \) represents the exclusive OR operator, \({W}_{sc}\) denotes the encrypted watermark image, and \({BF}_{s}\) is the permuted binary feature matrix.

(5) Lastly, the copyright verification database stores the zero-watermark signal \({W}_{\text{zero}}\), as well as the secret keys \(S{K}_{1}\), \(S{K}_{2}\), and \(S{K}_{3}\).
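Taken together, the scrambling and XOR steps above can be sketched on small bit lists, with the chaotic sequences supplied as inputs. This is a toy illustration of Eqs. (8) and (9), not the full image-sized pipeline:

```python
def xor_bits(a, b):
    """Element-wise XOR of two equal-length bit lists."""
    return [x ^ y for x, y in zip(a, b)]

def permute(bits, chaos):
    """Scramble bits by the ascending-order index of a chaotic sequence."""
    order = sorted(range(len(bits)), key=lambda i: chaos[i])
    return [bits[i] for i in order]

def generate_zero_watermark(bf, w, cs1, cs2_bits, cs3):
    """W_zero = (scrambled W XOR CS2) XOR scrambled BF  (Eqs. 8 and 9)."""
    bf_s = permute(bf, cs3)          # scrambled binary feature sequence BF_s
    w_c = permute(w, cs1)            # pixel-level scrambled watermark W_c
    w_sc = xor_bits(w_c, cs2_bits)   # bit-level diffusion, Eq. (8)
    return xor_bits(w_sc, bf_s)      # zero watermark, Eq. (9)
```

In the actual scheme, `bf`, `w`, and the chaotic sequences all have length \(P\times Q\) and the sequences come from LOGAM under the keys \(SK_1\), \(SK_2\), and \(SK_3\).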

4.2 Verification and decryption scheme

During the verification stage, only the protected image and the corresponding reserved zero-watermark signal are required. The zero-watermark verification process is shown in Algorithm 2 and Fig. 2, and the full process is described below.

Step 1–Step 3: The first three steps of the verification of the proposed zero-watermarking and decryption scheme are the same as the first three steps of the zero-watermarking embedding and encryption procedure.

Step 4: Construct the encrypted watermark image. To construct the encrypted watermark image \({W{\prime}}_{s}\), an XOR operation is performed on the scrambled binary feature matrix \(B{F{\prime}}_{s}\) and the protected image’s corresponding reserved zero-watermark signal \({W}_{\text{zero}}\) as follows:

$${W{\prime}}_{s}={W}_{\text{zero}}\oplus {BF{\prime}}_{s}$$
(10)

Step 5: Recover the verifiable watermark. Lastly, by decrypting the encrypted watermark image \({W}_{s}^{\prime}\) with LOGAM, the verifiable watermark image \({W}^{\prime}\) is retrieved.

In the initial phase of the descrambling process, a binary chaotic sequence \(C{S}_{2}\), constructed by LOGAM with the secret key \(S{K}_{2}\), is used to execute a back-diffusion operation on the scrambled image \({W}_{s}^{\prime}\); this undoes the bit-level confusion introduced during encryption. The watermark image \(W^{\prime}\) is then recovered using an inverse scrambling method, which uses the ascending-order index of the chaotic decimal sequence \(C{S}_{1}\), also generated by LOGAM with the secret key \(S{K}_{1}\), to restore the watermark image’s original structure and appearance. The recovered watermark image will be identical to the original if the secret keys \(S{K}_{1}\) and \(S{K}_{2}\) match those used in the scrambling method. Figure 2 gives a graphical illustration of this reverse scrambling process, demonstrating how accurately the original watermark image can be recovered when the same secret keys are utilized.
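Under the same toy bit-list conventions, Steps 4 and 5 invert the construction; the `inverse_permute` helper undoes a scrambling driven by the ascending-order index of a chaotic sequence:

```python
def inverse_permute(bits, chaos):
    """Undo a permutation driven by the ascending-order index of `chaos`."""
    order = sorted(range(len(bits)), key=lambda i: chaos[i])
    out = [0] * len(bits)
    for pos, src in enumerate(order):
        out[src] = bits[pos]
    return out

def verify_watermark(w_zero, bf_prime_s, cs1, cs2_bits):
    """W' = unscramble((W_zero XOR BF'_s) XOR CS2)  (Eq. 10 plus decryption)."""
    w_sc = [a ^ b for a, b in zip(w_zero, bf_prime_s)]  # Eq. (10)
    w_c = [a ^ b for a, b in zip(w_sc, cs2_bits)]       # undo bit diffusion
    return inverse_permute(w_c, cs1)                    # undo pixel scrambling
```

Because XOR is self-inverse and the permutation is keyed by \(CS_1\), an unattacked image with the correct keys recovers the watermark exactly.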


Algorithm 2 Watermarking extraction scheme

5 Experimental result

Experiments were carried out to assess the suggested scheme. As shown in Fig. 4, 18 medical color MRI images of size \(256\times 256\) were selected from the “Whole-Brain Atlas” [60], and \(32\times 32\) binary images, illustrated in Fig. 5, serve as watermarks. Table 1 lists abbreviations for a number of the attacks used in this section.

Fig. 4

Original color medical images

Fig. 5

Ten original watermarks

Table 1 List of abbreviations for some different attacks

5.1 Evaluation metrics

To assess the visual quality of the attacked images, the peak signal-to-noise ratio (PSNR) between the attacked and original images is calculated as follows:

$$\text{PSNR}=10\times {\text{log}}_{10}\left(\frac{M\times N\times 3\times {255}^{2}}{{\sum }_{k=1}^{3}{\sum }_{x=1}^{M}{\sum }_{y=1}^{N}{\left[{I}_{k}\left(x,y\right)-{{I}^{\prime}}_{k}\left(x,y \right)\right]}^{2}}\right),$$
(11)

in which \(I\left(x,y\right)\) and \(I^{\prime}\left(x,y\right)\) are the original and attacked images of size \(M\times N\), respectively, and \(k\in \{R,G,B\}\). The bit error rate (BER) and normalized cross-correlation (NCC) of the recovered watermark image were used to assess the robustness of the suggested scheme. BER and NCC are defined as follows:

$$\text{BER}=\frac{1}{P\times Q}{\sum }_{i=1}^{P}{\sum }_{j=1}^{Q}[W(i,j)\oplus W^{\prime}(i,j)] ,$$
(12)
$${\text{NCC}} = \frac{{\sum\nolimits_{i = 1}^P {\sum\nolimits_{j = 1}^Q {\left[ {W\left( {i,j} \right)*{W^\prime }\left( {i,j} \right)} \right]} } }}{{\sqrt {\sum\nolimits_{i = 1}^P {\sum\nolimits_{j = 1}^Q {{{\left[ {W\left( {i,j} \right)} \right]}^2}} } } \sqrt {\sum\nolimits_{i = 1}^P {\sum\nolimits_{j = 1}^Q {{{\left[ {{W^\prime }\left( {i,j} \right)} \right]}^2}} } } }}, $$
(13)

wherein \(W^{\prime}(i,j)\) and \(W(i, j)\) represent the recovered and original \(P \times Q\) watermark images. A lower BER corresponds to a higher NCC and indicates better robustness, while a higher PSNR indicates better image quality.
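Eqs. (12) and (13) translate directly into code for binary watermark sequences (a plain-Python sketch over flattened bit lists):

```python
import math

def ber(w, w_rec):
    """Bit error rate, Eq. (12): fraction of differing bits."""
    return sum(a ^ b for a, b in zip(w, w_rec)) / len(w)

def ncc(w, w_rec):
    """Normalized cross-correlation, Eq. (13), for binary watermarks."""
    num = sum(a * b for a, b in zip(w, w_rec))
    den = (math.sqrt(sum(a * a for a in w))
           * math.sqrt(sum(b * b for b in w_rec)))
    return num / den
```

A perfect recovery gives BER = 0 and NCC = 1; each flipped bit raises BER by \(1/(P\times Q)\) and lowers NCC.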

5.2 Feature extraction analysis

Experiments in this part were carried out to verify the efficiency and superiority of the feature matrix extracted by Resnet50. We measured the mean square error (MSE) between the feature matrices extracted from the attacked image and the original image, and then compared the results with the MSEs of the VGG16, VGG19 [61], and Alexnet deep learning models. We used the \(512\times 512\) medical color image “Image 13” as a carrier image and the \(64\times 64\) binary image “Flower” as a watermark. In this part, the carrier image is subjected to various attacks, such as filtering, additive noise, rotation, JPEG compression, scaling, and combination attacks.

According to Table 2, the suggested Resnet50 model outperforms VGG16, VGG19, and AlexNet in terms of mean square error (MSE): it yields the smallest MSE between the features of the attacked and original images. This indicates that the Resnet50 model extracts features with greater accuracy and precision than the other models. Its outstanding MSE values make Resnet50 a practical and dependable choice for a variety of tasks and applications, particularly zero watermarking of medical images.

Table 2 MSE values of the proposed Resnet50 model in comparison with other models

5.3 Ablation experiment

The scheme in this paper uses a special layer, “res5c_branch2b,” to extract high-dimensional robust features from medical images; we refer to the feature map generated by this layer as FM_7. In this ablation experiment, we use NCC and BER to evaluate the performance of the base model feature map FM_7 against six other Resnet50 feature maps. The feature maps FM_1, FM_2, FM_3, FM_4, FM_5, and FM_6 were generated from the following Resnet50 layers: “res4c_branch2b,” “res4f_branch2a,” “res4f_branch2c,” “res5a_branch2a,” “res5a_branch1,” and “res5c_branch2c,” respectively. The medical color image “Image 13” was used as a carrier image with size \(256\times 256\) and the binary image “Flower” as a watermark with size \(32\times 32\). The carrier image was subjected to the various attacks listed in Table 1.

According to the results in Table 3, the NCC values obtained from the base model FM_7 for various attacks were 1 or very close to the ideal value of 1. Moreover, these NCC values were higher than other Resnet50 feature maps, FM_1, FM_2, FM_3, FM_4, FM_5, and FM_6, further highlighting the superior performance of the selected feature map, FM_7. The BER results in Table 4 were consistent with the NCC results in Table 3 and the BER values for FM_7 were zero or very close to the optimal value of zero. The results of Tables 3 and 4 indicate that FM_7 outperforms the other Resnet50 feature maps, FM_1, FM_2, FM_3, FM_4, FM_5, and FM_6. These findings justify our selection of the “res5c_branch2b” layer in the proposed zero-watermarking approach, as it is more resistant to attacks.

Table 3 NCC values of the proposed FM_7 in comparison with other Resnet50 feature maps
Table 4 BER values of the proposed FM_7 in comparison with other Resnet50 feature maps

5.4 Zero-watermark equalization

The equalization specifies that the “1” and “0” bit distribution in a produced zero watermark must be balanced. It can be calculated by dividing the difference in the number of “1” and “0” bits by the total number of bits. Zero watermarking has a high level of security when there is good equalization, which occurs when the numbers “0” and “1” are equal or nearly equal. Table 5 shows the zero-watermark equalization obtained from the eighteen color medical images displayed in Fig. 4. The mean of 18 zero-watermark equalizations is 0.0180, according to Table 5, indicating that the numbers of “0” and “1” in the zero watermarks are roughly equal, and from a statistical standpoint, the suggested zero watermarking has strong security and good equalization.

Table 5 Equalizations of zero-watermark signals produced by eighteen color medical images
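The equalization measure described above can be computed as follows (a minimal sketch over the flattened zero-watermark bits):

```python
def equalization(zw_bits):
    """|#ones - #zeros| / total bits; values near 0 mean a balanced,
    and hence more secure, zero watermark."""
    ones = sum(zw_bits)
    zeros = len(zw_bits) - ones
    return abs(ones - zeros) / len(zw_bits)
```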

5.5 Robustness

Experiments in this section were carried out to verify the robustness of the suggested scheme; the watermarked images are subjected to rotation, LWR changing attacks, scaling, JPEG compression, additive noise, filtering, and other attacks. This section has two main parts. In the first, we used the 256 × 256 medical color image “Image 13” as a carrier image and the 32 × 32 binary image “Flower” as a watermark. In the second, we used six 512 × 512 medical color images (“Image 19” to “Image 24”) as carrier images and the 64 × 64 binary image “Flower” as a watermark.

5.5.1 Analysis of anti-attack performance


(1) Rotation attacks.

The rotation test and rotation with cropping for the color medical image “Image 13” were performed, and the rotation angle is selected as θ = 5°, 10°, 15°, 25°, 30°. Table 6 shows the results of the rotation attacks. The NCC values after the rotation attack exceed 0.9956 at multiple angles, indicating that this method can effectively resist rotation attacks.

Table 6 Robustness against rotation attacks

(2) LWR changing attacks

The \(M\times N\) rectangular image \(I\) is scaled to become the square image \({I}^{*}=\{{g}^{*}\left(s,t\right),0\le s,t<\frac{M+N}{2}\}\) with the size of \(\frac{M+N}{2}\times \frac{M+N}{2}\) to resist the image LWR changing attack before the zero-watermark detection. In LWR changing attack, two parameters are used: the first represents the image’s vertical scaling factor, which takes the values (0.25, 0.5, 0.75, 1.0, 1.0), and the second represents the image’s horizontal scaling factor, which takes the values (1.0, 1.0, 1.0, 0.5, 1.5). Table 7 displays the results achieved after altering the LWR. When the LWR was modified, the PSNR could not be computed between the images, as with the scaling attacks, hence the PSNR value was not provided here.
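A minimal sketch of this square-normalization step, assuming nearest-neighbour interpolation (the paper does not specify the resampling kernel):

```python
def resize_to_square_nn(img):
    """Rescale an M x N image to a ((M+N)//2)-sided square by
    nearest-neighbour sampling before zero-watermark detection."""
    m, n = len(img), len(img[0])
    side = (m + n) // 2
    # g*(s, t) samples the source pixel nearest to the scaled coordinate
    return [[img[(s * m) // side][(t * n) // side] for t in range(side)]
            for s in range(side)]
```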

Table 7 Robustness against LWR attacks

After altering the LWR with the various ratios, the NCC values of the detected watermarks are all 1.0, demonstrating that this method can successfully resist LWR changing attacks. Note that the LWR changing attack becomes a scaling attack when the horizontal scaling factor equals the vertical scaling factor.


(3) Scaling attacks.

The watermarked image is scaled in this experiment utilizing the following scaling factors: 0.25, 0.5, 2, and 4. Table 8 shows the experimental result for scaling attacks. Table 8 indicates that all NCC values are 1 and that the proposed scheme is extremely resistant to scaling attacks.

Table 8 Robustness against scaling attacks

(4) Filtering attacks.

The filtering attacks under consideration are Win_F, Med_F, Gus_F, and Avr_F. Table 9 displays the NCC values for the various filtering attacks. In the test results, all NCC values for filtering attacks exceed 0.9941, showing that the extracted and original watermarks remain highly similar and that the suggested scheme can successfully resist image filtering attacks.
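The four filtering attacks can be sketched with standard SciPy routines, assuming Med_F, Gus_F, Avr_F, and Win_F denote median, Gaussian, average, and Wiener filtering respectively (an interpretation of the abbreviations, not stated explicitly in the text):

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter, uniform_filter
from scipy.signal import wiener

img = np.random.default_rng(2).uniform(0, 255, size=(64, 64))
filter_attacks = {
    "Med_F": median_filter(img, size=3),      # median filtering
    "Gus_F": gaussian_filter(img, sigma=1),   # Gaussian low-pass filtering
    "Avr_F": uniform_filter(img, size=3),     # average (mean) filtering
    "Win_F": wiener(img, mysize=3),           # Wiener filtering
}
for name, out in filter_attacks.items():
    print(name, out.shape)  # each attack preserves the image size
```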

Table 9 Robustness against filtering attack

(5) Noisy attacks.

The robustness measures for the two forms of additive noise are computed by setting the noise density or variance to 0.1, 0.2, 0.3, and 0.5, as shown in Table 10. The NCC values in Table 10 show that the suggested scheme is extremely robust to S&P_N and Gus_N.
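The two noise attacks can be sketched as follows, assuming S&P_N is salt-and-pepper noise parameterized by density and Gus_N is zero-mean Gaussian noise parameterized by variance (the usual conventions; the function names are ours):

```python
import numpy as np

def gaussian_noise(img, var, rng):
    """Add zero-mean Gaussian noise of the given variance (Gus_N);
    variance is on the normalized [0, 1] scale, so it is rescaled to [0, 255]."""
    noisy = img + rng.normal(0.0, np.sqrt(var), img.shape) * 255
    return np.clip(noisy, 0, 255)

def salt_pepper_noise(img, density, rng):
    """Set a `density` fraction of pixels to 0 or 255 at random (S&P_N)."""
    noisy = img.copy()
    mask = rng.random(img.shape) < density
    noisy[mask] = rng.choice([0, 255], size=mask.sum())
    return noisy

rng = np.random.default_rng(3)
img = np.full((64, 64), 128.0)
sp = salt_pepper_noise(img, 0.1, rng)
gn = gaussian_noise(img, 0.1, rng)
print(sp.shape, gn.shape)
```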

Table 10 Robustness against noisy attacks

(6) JPEG compression.

To simulate JPEG compression attacks, the quality factor (QF) is varied between 5 and 90%, and the results are shown in Table 11. Even after compressing the watermarked image with a QF as low as 5%, all NCC values are 1, indicating that the proposed scheme can detect the watermark effectively.
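A JPEG compression attack is simply a lossy round trip through the codec at a chosen quality factor. A minimal sketch using Pillow (the function name `jpeg_attack` is ours):

```python
import io
import numpy as np
from PIL import Image

def jpeg_attack(img, quality):
    """Round-trip the image through JPEG at the given quality factor (1-95)."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

img = np.random.default_rng(4).integers(0, 256, (64, 64), dtype=np.uint8)
attacked = jpeg_attack(img, 5)  # QF as low as 5%, the harshest setting tested
print(attacked.shape)
```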

Table 11 Robustness against JPEG compression

(7) Additional attacks.

Sharpening, histogram, horizontal flipping, vertical flipping, and cropping attacks are also applied, and the results are presented in Table 12. According to Table 12, all NCC values are 1, so the suggested scheme is highly resistant to sharpening, histogram, flipping, and cropping attacks.

Table 12 Robustness against some additional attacks

5.5.2 Robustness evaluation for different medical images

In this subsection, we used six different 512 × 512 medical color images (“Image 19” to “Image 24”) as carrier images and a 64 × 64 binary image (“Flower”) as the watermark. Table 13 reports the BER values and their averages for the various medical images under several types of attacks. The findings in Table 13 show that the suggested scheme performs very well in terms of BER: under the different attacks, the majority of the BER values were zero or very close to zero. The average BER value was 0.0002, and the maximum reported BER value was 0.0010. These findings show that the suggested scheme remains extremely robust to various attacks across different images and dimensions.
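The BER reported in Table 13 is the fraction of watermark bits that differ after extraction. A minimal sketch (a standard definition, assumed here since the paper’s formula is not reproduced in this section):

```python
import numpy as np

def ber(original, extracted):
    """Bit error rate: fraction of differing bits between the
    original and extracted binary watermarks (0 = perfect recovery)."""
    return float(np.mean(original.astype(bool) != extracted.astype(bool)))

rng = np.random.default_rng(5)
w = rng.integers(0, 2, (64, 64))   # 64 x 64 binary watermark, as in this test
w_bad = w.copy()
w_bad[0, :4] ^= 1                  # flip 4 of the 4096 bits
print(ber(w, w), ber(w, w_bad))    # → 0.0 0.0009765625
```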

Table 13 BER value for different medical images under various types of attacks

5.5.3 Comparison of robustness

To thoroughly evaluate the performance of the proposed scheme, it was compared with neural-network-based methods and some representative zero-watermarking schemes. The comparison experiments were conducted on two levels. In the first level, a general comparison was carried out between the robustness of the proposed method and existing zero-watermarking methods [62,63,64,65,66,67]. In the second level, the robustness of the proposed method was compared more specifically against existing approaches that use deep neural network models for zero watermarking [29, 68, 69]. The two comparison levels are explained in the two subsections that follow.

5.5.3.1 Level 1: general zero-watermark comparison

In these experiments, we examined the robustness of the suggested zero-watermarking scheme using 512 × 512 medical images as carrier images and the 64 × 64 binary image “Flower” as the watermark, against attacks such as Gus_N 0.03, S&P_N 0.03, Gus_F 5 × 5, Med_F 5 × 5, J_Com 70%, J_Com 90%, Rot 15°, Rot 25°, Rot 35°, Scaling 0.75, Scaling 1.75, and the combined Rot 25° & J_Com 80%. As the comparative results in Fig. 6 and Tables 14 and 15 show, the proposed scheme outperforms the zero-watermarking algorithms [62,63,64,65,66,67]. Table 14 summarizes the obtained results, and Fig. 6 plots them for readability. The BER values of the suggested scheme under the various attacks are extremely close to zero and lower than those of the compared methods [62,63,64,65,66,67], demonstrating that the suggested scheme withstands these image attacks more effectively than the zero-watermarking techniques [62,63,64,65,66,67]. Lastly, based on the experimental results in Table 15, the NCC values of the proposed approach are very close to the optimum of 1, highlighting its clear gain in resistance against multiple attacks over the zero-watermarking techniques [62,63,64,65,66,67].

Fig. 6
figure 6

BER values for six zero-watermark methods with different kinds of attacks

Table 14 BER values of the proposed scheme in comparison with methods [62,63,64,65,66,67]
Table 15 NCC values of the proposed scheme in comparison with methods [62,63,64,65,66,67]
5.5.3.2 Level 2: comparison with neural network methods

In these experiments, the robustness of the proposed zero-watermarking scheme was investigated using 256 × 256 medical images as carrier images and 32 × 32 binary images as the watermark. The scheme was subjected to various attacks, including Gus_N, Med_F, J_Com, Rot, Scaling, and cropping, to assess its effectiveness. The results in Table 16 reveal a notable decrease in the robustness of most schemes as the severity of the attacks increases; high-intensity Gaussian noise, cropping, and compression attacks in particular have a significant impact on their robustness. In contrast, our proposed scheme exhibits remarkable robustness: the watermark is extracted clearly even under high-intensity attacks, and the scheme outperforms the existing zero-watermarking algorithms [29, 68, 69]. The NCC values obtained in the comparative experiment for the different attacks were 1 or close to the optimal value of 1, and they were higher than those reported in [29, 68, 69], further highlighting the superior performance of the suggested scheme and ensuring the preservation of copyright for medical images. The superiority of our scheme can be attributed to the utilization of more comprehensive and robust model features compared to other approaches. Additionally, the simplicity and effectiveness of the employed method enable the restoration of essential watermark information that may have been lost due to attacks. Consequently, our scheme remains stable against various geometric and high-intensity attacks.

Table 16 NCC values of the proposed scheme in comparison with deep neural network methods [29, 68, 69]

6 Conclusion

In this work, we presented a robust zero-watermarking technique for medical color images based on Resnet50 and a chaotic system. Resnet50, a deep convolutional neural network, outperformed other deep convolutional neural networks in image feature extraction; its ability to learn intricate image features makes it well suited for zero-watermark generation and verification. A logistic Gaussian map was used to confuse and diffuse the feature matrix of the medical image and the watermark image to increase security and equalization. The suggested zero-watermarking scheme can be used to protect the copyright of color images while maintaining strong resistance against geometric and signal-processing attacks and good image visual quality. The experimental outcomes confirm resistance and robustness to many forms of attack. Furthermore, compared with other current strategies, the proposed scheme achieves a balanced trade-off between robustness and imperceptibility, and it offers benefits in terms of security, equalization, and robustness. In terms of mean square error, the proposed Resnet50 model outperformed other models such as VGG16, VGG19, and AlexNet. Future work will follow several directions: the proposed scheme will be extended to handle stereoscopic and 3D images, a new chaotic map will be adopted to increase security, and the suggested scheme will be applied to different real-world applications, such as IoT medical systems.