
Confused yet Successful: Theoretical Comparison of Distinguishers for Monobit Leakages in Terms of Confusion Coefficient and SNR

  • Conference paper
  • Information Security and Cryptology (Inscrypt 2018)
  • Part of the book series: Lecture Notes in Computer Science (LNSC, volume 11449)

Abstract

Many side-channel distinguishers (such as DPA/DoM, CPA, Euclidean Distance, KSA, MIA, etc.) have been devised and studied to extract keys from cryptographic devices. Each has pros and cons and finds applications in various contexts. These distinguishers have been characterized theoretically in order to determine which one is best in a given context, enabling an unambiguous comparison in terms of success rate or of the number of traces required to extract the secret key.

In this paper, we show that in the case of monobit leakages, the theoretical expressions of all these distinguishers depend only on two parameters: the confusion coefficient and the signal-to-noise ratio. We provide closed-form expressions and leverage them to compare the distinguishers in terms of convergence speed when distinguishing between key candidates. This study contrasts with previous works in which only the asymptotic behavior was determined, either when the number of traces tends to infinity or when the signal-to-noise ratio tends to zero.
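To make the first of these two parameters concrete: following Fei et al. [8], the confusion coefficient of a key candidate k is commonly defined as κ(k) = Pr[Y(k) ≠ Y(k*)], where Y(k) ∈ {±1} is the predicted monobit leakage under candidate k and k* is the correct key. The short Python sketch below is ours and purely illustrative (the S-box is a random stand-in, and the key byte and bit index are arbitrary); it estimates κ(k) for such a monobit model.

```python
# Illustrative sketch (not code from the paper): estimate the confusion
# coefficient kappa(k) = Pr[Y(k) != Y(k*)] for a monobit leakage model
# Y(k) = +/-1 given by one bit of S(T xor k), with the byte T uniform.

import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)          # stand-in 8-bit bijective S-box (hypothetical)
T = np.arange(256)                   # all possible plaintext byte values

def y(k, bit=0):
    """Monobit leakage mapped to {-1, +1}: the chosen bit of S(T xor k)."""
    return 2 * ((SBOX[T ^ k] >> bit) & 1) - 1

k_star = 0x2B                        # hypothetical "true" key byte
kappa = np.array([np.mean(y(k) != y(k_star)) for k in range(256)])

print(kappa[k_star])                 # 0.0 for the correct key
print(kappa[(k_star + 1) % 256])     # typically close to 0.5 for a wrong key
```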

Notes

  1. We cover in this paper the following distinguishers: Difference of Means (DoM) [13], Correlation Power Analysis (CPA) [3], Euclidean distance [12, §3], Kolmogorov-Smirnov Analysis (KSA) [22], and Mutual Information Analysis (MIA) [9].

  2. In [10], CPA is treated as a distinguisher, but without the absolute values. The absolute values remove false positives which occur in monobit leakages when there are anti-correlations. Our value of the success exponent is, therefore, different from theirs.

References

  1. Batina, L., Robshaw, M. (eds.): CHES 2014. LNCS, vol. 8731. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44709-3

  2. Blahut, R.E.: Principles and Practice of Information Theory. Addison-Wesley Longman Publishing Co. Inc., Boston (1987)

  3. Brier, É., Clavier, C., Olivier, F.: Correlation power analysis with a leakage model. In: Joye, M., Quisquater, J.-J. (eds.) CHES 2004. LNCS, vol. 3156, pp. 16–29. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28632-5_2

  4. Carlet, C., Heuser, A., Picek, S.: Trade-offs for S-boxes: cryptographic properties and side-channel resilience. In: Gollmann, D., Miyaji, A., Kikuchi, H. (eds.) ACNS 2017. LNCS, vol. 10355, pp. 393–414. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-61204-1_20

  5. Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. Wiley-Interscience, New York (2006). ISBN-10: 0471241954, ISBN-13: 978-0471241959

  6. Daemen, J., Rijmen, V.: Rijndael for AES. In: AES Candidate Conference, pp. 343–348 (2000)

  7. Fei, Y., Ding, A.A., Lao, J., Zhang, L.: A statistics-based success rate model for DPA and CPA. J. Cryptographic Eng. 5(4), 227–243 (2015). https://doi.org/10.1007/s13389-015-0107-0

  8. Fei, Y., Luo, Q., Ding, A.A.: A statistical model for DPA with novel algorithmic confusion analysis. In: Prouff, E., Schaumont, P. (eds.) CHES 2012. LNCS, vol. 7428, pp. 233–250. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33027-8_14

  9. Gierlichs, B., Batina, L., Tuyls, P., Preneel, B.: Mutual information analysis. In: Oswald, E., Rohatgi, P. (eds.) CHES 2008. LNCS, vol. 5154, pp. 426–442. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85053-3_27

  10. Guilley, S., Heuser, A., Rioul, O.: A key to success. In: Biryukov, A., Goyal, V. (eds.) INDOCRYPT 2015. LNCS, vol. 9462, pp. 270–290. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-26617-6_15

  11. Heuser, A., Rioul, O., Guilley, S.: A theoretical study of Kolmogorov-Smirnov distinguishers – side-channel analysis vs. differential cryptanalysis. In: Prouff [17], pp. 9–28. https://doi.org/10.1007/978-3-319-10175-0_2

  12. Heuser, A., Rioul, O., Guilley, S.: Good is not good enough - deriving optimal distinguishers from communication theory. In: Batina and Robshaw [1], pp. 55–74. https://doi.org/10.1007/978-3-662-44709-3_4

  13. Kocher, P., Jaffe, J., Jun, B.: Differential power analysis. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 388–397. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48405-1_25

  14. Lomné, V., Prouff, E., Rivain, M., Roche, T., Thillard, A.: How to estimate the success rate of higher-order side-channel attacks. In: Batina and Robshaw [1], pp. 35–54. https://doi.org/10.1007/978-3-662-44709-3_3

  15. Mangard, S., Oswald, E., Popp, T.: Power Analysis Attacks. Revealing the Secrets of Smart Cards. Springer, Boston (2007). https://doi.org/10.1007/978-0-387-38162-6

  16. Mangard, S., Oswald, E., Standaert, F.: One for all - all for one: unifying standard differential power analysis attacks. IET Inf. Secur. 5(2), 100–110 (2011). https://doi.org/10.1049/iet-ifs.2010.0096

  17. Prouff, E. (ed.): COSADE 2014. LNCS, vol. 8622. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10175-0

  18. Reparaz, O., Gierlichs, B., Verbauwhede, I.: A note on the use of margins to compare distinguishers. In: Prouff [17], pp. 1–8. https://doi.org/10.1007/978-3-319-10175-0_1

  19. Rivain, M.: On the exact success rate of side channel analysis in the Gaussian model. In: Avanzi, R.M., Keliher, L., Sica, F. (eds.) SAC 2008. LNCS, vol. 5381, pp. 165–183. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04159-4_11

  20. Schindler, W., Lemke, K., Paar, C.: A stochastic model for differential side channel cryptanalysis. In: Rao, J.R., Sunar, B. (eds.) CHES 2005. LNCS, vol. 3659, pp. 30–46. Springer, Heidelberg (2005). https://doi.org/10.1007/11545262_3

  21. Whitnall, C., Oswald, E.: A fair evaluation framework for comparing side-channel distinguishers. J. Cryptographic Eng. 1(2), 145–160 (2011)

  22. Whitnall, C., Oswald, E., Mather, L.: An exploration of the Kolmogorov-Smirnov test as a competitor to mutual information analysis. In: Prouff, E. (ed.) CARDIS 2011. LNCS, vol. 7079, pp. 234–251. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-27257-8_15

Appendices

A Proof of Lemma 3

The MIA distinguisher is expressed as

$$\begin{aligned} \mathcal {D}(k) =I(Y(k^*)+N;Y(k))= h( Y(k^*) + N ) - h(Y(k^*) + N \mid Y(k)). \end{aligned}$$
(36)

From Sect. 3.1, \(Y(k^*)\) given Y(k) is a binary random variable with parameter \(\kappa (k)\). As N is Gaussian and independent of Y(k), the pdf of \(Y(k^*) + N\) given Y(k) is a Gaussian mixture that can take one of two forms:

$$\begin{aligned} p_{\kappa (k)}(x)= {\left\{ \begin{array}{ll} \frac{1}{\sqrt{2\pi }\sigma }[ \kappa (k) e^{-\frac{(x-1)^2}{2\sigma ^2}} + (1-\kappa (k)) e^{-\frac{(x+1)^2}{2\sigma ^2}} ] \\ \frac{1}{\sqrt{2\pi }\sigma }[ \kappa (k) e^{-\frac{(x+1)^2}{2\sigma ^2}} + (1-\kappa (k)) e^{-\frac{(x-1)^2}{2\sigma ^2}} ] \end{array}\right. }, \end{aligned}$$
(37)

By symmetry, both forms have the same conditional entropy \(h(Y(k^*) + N \mid Y(k))\), so we can work with either pdf. Letting \(\phi \) denote the standard normal density, we can write

$$\begin{aligned} p_{\kappa (k)}(x)&= p_{1/2}(x) - 2(1/2 - \kappa (k)) \frac{1}{\sigma }\phi (\frac{x}{\sigma }) e^{-\frac{1}{2\sigma ^2}}\sinh (\frac{x}{\sigma ^2}) \end{aligned}$$
(38)
$$\begin{aligned}&= p_{1/2}(x)\bigl (1 - 2(1/2 - \kappa (k))\tanh (\frac{x}{\sigma ^2})\bigr ). \end{aligned}$$
(39)

where

$$\begin{aligned} p_{1/2}(x) = \frac{1}{2\sqrt{2\pi }\sigma }[ e^{-\frac{(x-1)^2}{2\sigma ^2}} + e^{-\frac{(x+1)^2}{2\sigma ^2}} ] = \frac{1}{\sigma }e^{-\frac{1}{2\sigma ^2}}\phi (\frac{x}{\sigma })\cosh (\frac{x}{\sigma ^2}). \end{aligned}$$
(40)
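As a quick numerical sanity check of the identities (37)–(40), the following short snippet (ours, purely illustrative; the values of σ and κ(k) are arbitrary) verifies that the mixture density and its factored form agree pointwise.

```python
# Illustrative check (not from the paper): the Gaussian mixture (37) equals
# the factored form p_{1/2}(x) * (1 - 2(1/2 - kappa) * tanh(x / sigma^2)).

import numpy as np

def phi(u):
    """Standard normal density."""
    return np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)

sigma, kappa = 1.3, 0.3                      # arbitrary illustrative values
x = np.linspace(-6, 6, 2001)

# Direct mixture, Eq. (37), first form.
p_mix = (kappa * np.exp(-(x - 1)**2 / (2 * sigma**2))
         + (1 - kappa) * np.exp(-(x + 1)**2 / (2 * sigma**2))) / (np.sqrt(2 * np.pi) * sigma)

# Eq. (40): p_{1/2}, then Eq. (39): factored form.
p_half = np.exp(-1 / (2 * sigma**2)) * phi(x / sigma) * np.cosh(x / sigma**2) / sigma
p_fact = p_half * (1 - 2 * (0.5 - kappa) * np.tanh(x / sigma**2))

print(np.max(np.abs(p_mix - p_fact)))        # ~1e-16: the identity holds
```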

For notational convenience define \(\epsilon = 2(1/2 - \kappa (k))\), \(p=p_{1/2}(x)\), and \(t=\tanh (\frac{x}{\sigma ^2})\). Then

$$\begin{aligned} I(X;Y(k))&=h( Y(k^*) + N ) - h(Y(k^*) + N \mid Y(k))\end{aligned}$$
(41)
$$\begin{aligned}&= -\int p\log _2 p + \int (p(1 - \epsilon t)) \log _2(p(1 - \epsilon t)) \end{aligned}$$
(42)
$$\begin{aligned}&= -\int \epsilon p t \log _2 p + \int p \log _2(1 - \epsilon t) - \int p\epsilon t \log _2(1-\epsilon t). \end{aligned}$$
(43)

The first term vanishes since p is even and t is odd. We apply a Taylor expansion of the logarithm, writing \(\log _2(1 - \epsilon t) = \log _2(e)\ln (1 - \epsilon t)\):

$$\begin{aligned} I(X;Y(k)) = \log _2(e)\Bigl (\int p\bigl [-\epsilon t - \frac{\epsilon ^2 t^2}{2} - \frac{\epsilon ^3 t^3}{3} + O(\epsilon ^4) \bigr ] - \int \epsilon p t \bigl [-\epsilon t - \frac{\epsilon ^2 t^2}{2} - \frac{\epsilon ^3 t^3}{3} + O(\epsilon ^4) \bigr ]\Bigr ). \end{aligned}$$
(44)

The odd terms of the expansion vanish since t is odd and p is even. We therefore obtain:

$$\begin{aligned} I(X;Y(k)) = \log _2(e)\Bigl (\int p\bigl [ - \frac{\epsilon ^2 t^2}{2} + O(\epsilon ^4) \bigr ] - \int \bigl [-\epsilon ^2 pt^2 + O(\epsilon ^4) \bigr ]\Bigr ) = \log _2(e)\int \frac{\epsilon ^2 p t^2}{2} + O(\epsilon ^4). \end{aligned}$$
(45)

Thus, finally,

$$\begin{aligned} \mathcal {D}(k) = 2\log _2(e)(1/2 - \kappa (k))^2g(\sigma ), \end{aligned}$$
(46)

where

$$\begin{aligned} g(\sigma ) =\frac{1}{\sigma } e^{-\frac{1}{2\sigma ^2}}\int _{\mathbb {R}} \phi (\frac{x}{\sigma })\cosh (\frac{x}{\sigma ^2})\tanh ^2(\frac{x}{\sigma ^2})\mathrm {d}x. \end{aligned}$$
(47)

There are several ways to express \(g(\sigma )\). For example, we have:

$$\begin{aligned} g(\sigma ) = e^{-\frac{1}{2\sigma ^2}}\int _{\mathbb {R}} \phi (x)\cosh (\frac{x}{\sigma })\tanh ^2(\frac{x}{\sigma })\mathrm {d}x. \end{aligned}$$
(48)

This expression can be reduced to:

$$\begin{aligned} g(\sigma ) = \frac{1}{2}\mathbb {E}_X \left[ \tanh ^2(\frac{X}{\sigma }+ \frac{1}{\sigma ^2}) + \tanh ^2(\frac{X}{\sigma }- \frac{1}{\sigma ^2}) \right] , \end{aligned}$$
(49)

where \(X\sim \mathcal {N}(0,1)\). By the dominated convergence theorem (\(\tanh ^2(\frac{X}{\sigma }+ \frac{1}{\sigma ^2})\) is always smaller than 1) when \(\sigma \rightarrow 0\), we obtain \(g(0)=1\) and when \(\sigma \rightarrow \infty \) we obtain the equivalent \(\frac{1}{\sigma ^2}\).

B Proof of Lemma 4

The success exponent is defined by

$$\begin{aligned} \mathsf {SE} = \frac{\mathbb {E}[\widehat{\mathcal {D}}(k^*) - \widehat{\mathcal {D}}(k)]^2}{2\mathrm {Var}(\widehat{\mathcal {D}}(k^*) - \widehat{\mathcal {D}}(k))}. \end{aligned}$$
(50)

where in our case

$$\begin{aligned} \widehat{\mathcal {D}}(k) = \frac{1}{q\sqrt{1+\sigma ^2}} \Bigl |\sum _{i=1}^q X_i Y_i(k) \Bigr |. \end{aligned}$$
(51)

First, for large q we may replace \(\mathbb {E}[|\sum _i X_iY_i(k)|]\) by \(|\mathbb {E}[\sum _i X_iY_i(k)]|\). Since \(\mathbb {E}[X\,Y(k)] = \mathbb {E}[Y(k^*)Y(k)]\) and \(|\mathbb {E}[Y(k^*)Y(k)]| = 2\,|1/2 - \kappa (k)|\), this gives

$$\begin{aligned} \mathbb {E}[\widehat{\mathcal {D}}(k)] = \frac{|\mathbb {E}[X\,Y(k)]|}{\sqrt{1+\sigma ^2}} = \frac{2\,|1/2 - \kappa (k)|}{\sqrt{1+\sigma ^2}}, \end{aligned}$$
(52)

hence

$$\begin{aligned} \mathbb {E}[\widehat{\mathcal {D}}(k^*) - \widehat{\mathcal {D}}(k)] = \frac{2\bigl (1/2 - |1/2 - \kappa (k)|\bigr )}{\sqrt{1+\sigma ^2}}. \end{aligned}$$
(53)

Secondly we have

$$\begin{aligned} \mathrm {Var}(\widehat{\mathcal {D}}(k^*) - \widehat{\mathcal {D}}(k)) = \frac{1}{q^2(1+\sigma ^2)}\mathrm {Var}\Bigl ( \Bigl |\sum _{i=1}^q X_i Y_i(k^*) \Bigr | - \Bigl |\sum _{i=1}^q X_i Y_i(k) \Bigr | \Bigr ). \end{aligned}$$
(54)

To remove the absolute values, we distinguish two cases according to whether each sum is positive or negative, and we assume that q is large enough for the sign of each sum to be fixed.

$$\begin{aligned} \mathrm {Var}(\widehat{\mathcal {D}}(k^*) - \widehat{\mathcal {D}}(k))&= \frac{1}{q^2(1+\sigma ^2)}\mathrm {Var}\Bigl ( \sum _{i=1}^q X_i Y_i(k^*) \mp \sum _{i=1}^q X_i Y_i(k) \Bigr ) \end{aligned}$$
(55)
$$\begin{aligned}&= \frac{1}{q^2(1+\sigma ^2)}\mathrm {Var}\Bigl ( \sum _{i=1}^q X_i \bigl (Y_i(k^*) \mp Y_i(k) \bigr ) \Bigr ) \end{aligned}$$
(56)
$$\begin{aligned}&= \frac{1}{q(1+\sigma ^2)}\mathrm {Var}\bigl ( X \bigl (Y(k^*) \mp Y(k) \bigr ) \bigr ) \end{aligned}$$
(57)
$$\begin{aligned}&= \frac{1}{q(1+\sigma ^2)}\mathrm {Var}\bigl ( (Y(k^*) + N) \bigl (Y(k^*) \mp Y(k) \bigr ) \bigr ) \end{aligned}$$
(58)
$$\begin{aligned}&= \frac{1}{q(1+\sigma ^2)}\mathrm {Var}\bigl ( \mp Y(k^*)Y(k) +N(Y(k^*) \mp Y(k)) \bigr ). \end{aligned}$$
(59)

The variance term is the difference of the two following quantities (the cross term vanishes since N is centered and independent of the Y's):

$$\begin{aligned} \mathbb {E}\bigl [\bigl (\mp Y(k^*)Y(k) +N(Y(k^*) \mp Y(k))\bigr )^2\bigr ]&= 1 + \sigma ^2\,\mathbb {E}\bigl [(Y(k^*) \mp Y(k))^2\bigr ] = 1 + 4\sigma ^2\bigl (1/2 - |1/2 - \kappa (k)|\bigr ), \end{aligned}$$
(60)
$$\begin{aligned} \mathbb {E}\bigl [\mp Y(k^*)Y(k) +N(Y(k^*) \mp Y(k))\bigr ]^2&= \mathbb {E}[Y(k^*)Y(k)]^2 = 4\bigl (1/2 - \kappa (k)\bigr )^2. \end{aligned}$$
(61)

Combining all the above expressions we obtain (33).
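A small simulation sketch (ours, not the authors' code) of the statistic (51): monobit leakages with a prescribed confusion coefficient κ(k) are generated, and the empirical value of D̂(k) is compared with the large-q mean 2|1/2 − κ(k)|/√(1+σ²) used above; all numerical values are arbitrary.

```python
# Illustrative Monte Carlo check (not from the paper) of the normalized CPA
# statistic (51) for a monobit leakage with confusion coefficient kappa.

import numpy as np

rng = np.random.default_rng(2)
q, sigma, kappa = 100_000, 2.0, 0.3                  # arbitrary illustrative values

Y_star = rng.choice([-1, 1], size=q)                 # Y(k*)
flip = rng.random(q) < kappa                         # disagreement with prob. kappa
Y_k = np.where(flip, -Y_star, Y_star)                # Y(k)
X = Y_star + sigma * rng.standard_normal(q)          # traces X = Y(k*) + N

D_hat = np.abs(np.sum(X * Y_k)) / (q * np.sqrt(1 + sigma**2))
print(D_hat, 2 * abs(0.5 - kappa) / np.sqrt(1 + sigma**2))   # close for large q
```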

C Proof of Lemma 5

To derive the success rate of KSA, we first need an estimator of the cumulative distribution function. We take as kernel the simplest possible function \(\varPhi \), namely the Heaviside step function: \(\varPhi (x)=0\) if \(x<0\) and \(\varPhi (x)=1\) if \(x\ge 0\).

With this function and for \(x\in \mathbb {R}\), we can estimate \(F(x|Y(k) = 1) - F(x)\) by the following estimator:

$$\begin{aligned} \tilde{F}(x |Y(k) = 1) - \tilde{F}(x) = \frac{\sum _{i|Y_i(k)=1}\varPhi (x-X_i)}{\sum _{i|Y_i(k)=1}1} - \frac{\sum _i \varPhi (x-X_i)}{q}. \end{aligned}$$
(62)

We suppose that q is large enough to consider that \(\sum _{i|Y_i(k)=1}1 = \frac{q}{2}\) (by the law of large numbers). Therefore we have:

$$\begin{aligned} \tilde{F}(x |Y(k) = 1) - \tilde{F}(x) = 2\frac{\sum _{i|Y_i(k)=1}\varPhi (x-X_i)}{q} - \frac{\sum _i \varPhi (x-X_i)}{q}. \end{aligned}$$
(63)

We notice that \(\sum _{i|Y_i(k)=1}\varPhi (x-X_i) = \frac{1}{2}\sum _i (Y_i(k)+1)\varPhi (x-X_i)\). Therefore

$$\begin{aligned} \tilde{F}(x |Y(k) = 1) - \tilde{F}(x) = \frac{1}{q}\sum _{i=1}^q Y_i(k)\varPhi (x-X_i). \end{aligned}$$
(64)

This estimator is a sum of i.i.d. random variables. We can therefore apply the central limit theorem.

$$\begin{aligned} \mathbb {E}[ \tilde{F}(x |Y(k) = 1) - \tilde{F}(x) ]&= \mathbb {E}[Y(k)\varPhi (x-X_i)] \end{aligned}$$
(65)
$$\begin{aligned}&= \mathbb {E}[Y(k)\varPhi (x- Y(k^*) - N)] \end{aligned}$$
(66)
$$\begin{aligned}&= \frac{1}{2} ( \kappa (k) - 0.5) \Bigl ( \mathrm {erf}\Bigl (\frac{1-x}{\sigma \sqrt{2}}\Bigr ) + \mathrm {erf}\Bigl (\frac{1+x}{\sigma \sqrt{2}} \Bigr ) \Bigr ). \end{aligned}$$
(67)

The maximum of the absolute value is attained at \(x=0\), and we obtain:

$$\begin{aligned} \Vert \mathbb {E}[ \tilde{F}(x |Y(k) = 1) - \tilde{F}(x) ] \Vert _{\infty } = |0.5 - \kappa (k)|\mathrm {erf}\Bigl ( \frac{1}{\sigma \sqrt{2}} \Bigr ). \end{aligned}$$
(68)

We notice that \(\Vert \mathbb {E}[ \tilde{F}(x |Y(k) = 1) - \tilde{F}(x) ] \Vert _{\infty } = \Vert \mathbb {E}[ \tilde{F}(x |Y(k) = -1) - \tilde{F}(x) ] \Vert _{\infty } \). To calculate the variance, we take \(x=0\), since this is the value that maximizes the expectation of the distinguisher.

$$\begin{aligned} \mathrm {Var}(\widehat{\mathcal {D}}(k^*) - \widehat{\mathcal {D}}(k)) = \mathrm {Var}\Bigl ( \frac{1}{q} \Bigl ( \sum _{i=1}^q \varPhi (x-X_i)(Y_i(k^*)-Y_i(k)) \Bigr ) \Bigr ) \end{aligned}$$
(69)

The computation of this variance gives:

$$\begin{aligned} \mathrm {Var}(\widehat{\mathcal {D}}(k^*) - \widehat{\mathcal {D}}(k)) = 2(0.5 - |0.5 - \kappa (k)|) - \mathrm {erf}\Bigl (\frac{1}{\sigma \sqrt{2}} \Bigr )^2 (0.5 - |0.5 - \kappa (k)|)^2. \end{aligned}$$
(70)

Overall, combining (50), (68) and (70), the success exponent is:

$$\begin{aligned} \mathsf {SE} = \frac{\bigl (0.5 - |0.5 - \kappa (k)|\bigr )^2\,\mathrm {erf}\Bigl (\frac{1}{\sigma \sqrt{2}}\Bigr )^2}{2\Bigl (2\bigl (0.5 - |0.5 - \kappa (k)|\bigr ) - \mathrm {erf}\Bigl (\frac{1}{\sigma \sqrt{2}}\Bigr )^2\bigl (0.5 - |0.5 - \kappa (k)|\bigr )^2\Bigr )}. \end{aligned}$$
(71)
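The closed form (67)–(68) can likewise be checked by simulation; the sketch below (ours, with arbitrary illustrative values) estimates (64) at x = 0 and compares it with (κ(k) − 0.5) erf(1/(σ√2)), i.e. the expectation (67) evaluated at x = 0.

```python
# Illustrative simulation (not from the paper): the KSA-type estimator (64) at
# x = 0 versus its expectation (kappa - 0.5) * erf(1 / (sigma * sqrt(2))).

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
q, sigma, kappa = 200_000, 1.5, 0.25                      # arbitrary illustrative values

Y_star = rng.choice([-1, 1], size=q)                      # Y(k*)
Y_k = np.where(rng.random(q) < kappa, -Y_star, Y_star)    # Y(k), disagrees w.p. kappa
X = Y_star + sigma * rng.standard_normal(q)               # traces X = Y(k*) + N

est = np.mean(Y_k * (X <= 0))                 # (1/q) * sum_i Y_i(k) * Phi(0 - X_i)
theory = (kappa - 0.5) * erf(1 / (sigma * sqrt(2)))
print(est, theory)                            # close for large q
```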

D Proof of Lemma 6

For MIA, we refer to [10, Section 5.3] for the theoretical justifications. In order to obtain a simple closed-form expression of the success exponent, we suppose that \(\sigma \gg 1\) and that the probability density functions are all Gaussian. This means that X|Y(k) is a Gaussian random variable of standard deviation \(\sqrt{4\kappa (k)(1-\kappa (k)) + \sigma ^2}\). Moreover, we will keep only the first order approximation in \(\mathsf {SNR}=\sigma ^{-2}\) of the SE.

$$\begin{aligned} h(X|Y(k)) - h(X|Y(k^*))&= \frac{1}{2}\log _2\bigl (2\pi e \cdot (4\kappa (k)(1-\kappa (k)) + \sigma ^2)\bigr ) - \frac{1}{2}\log _2(2 \pi e \cdot \sigma ^2) \end{aligned}$$
(72)
$$\begin{aligned}&= \frac{1}{2}\log _2 \frac{4\kappa (k)(1-\kappa (k)) + \sigma ^2}{\sigma ^2} \end{aligned}$$
(73)
$$\begin{aligned}&\approx \frac{\log _2(e)4\kappa (k)(1-\kappa (k))}{2\sigma ^2} \end{aligned}$$
(74)
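Numerically, the first-order approximation (74) is already accurate for moderately large σ; the following tiny check (ours, with arbitrary values) compares it with the exact difference (73).

```python
# Illustrative comparison (not from the paper) of the exact entropy
# difference (73) with its first-order approximation (74).
import numpy as np

kappa, sigma = 0.3, 10.0                     # arbitrary illustrative values
exact = 0.5 * np.log2((4 * kappa * (1 - kappa) + sigma**2) / sigma**2)
approx = np.log2(np.e) * 4 * kappa * (1 - kappa) / (2 * sigma**2)
print(exact, approx)                         # both are about 6.0e-3 bits
```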

The Fisher information of a Gaussian random variable of standard deviation \(\zeta \) is equal to \(\frac{1}{\zeta ^2}\). Therefore the Fisher information of X knowing \(Y=y(k)\) is:

$$\begin{aligned} F(X|Y(k) = y(k)) = \frac{1}{4\kappa (k)(1-\kappa (k)) + \sigma ^2}. \end{aligned}$$
(75)

As this value does not depend on the value of Y(k), we have:

$$\begin{aligned} F(X|Y(k))&= \frac{1}{4\kappa (k)(1-\kappa (k)) + \sigma ^2} \end{aligned}$$
(76)
$$\begin{aligned} F(X|Y(k)) - F(X|Y(k^*))&= \frac{1}{4\kappa (k)(1-\kappa (k)) + \sigma ^2} - \frac{1}{\sigma ^2}\end{aligned}$$
(77)
$$\begin{aligned}&\approx -\frac{4\kappa (k)(1-\kappa (k))}{\sigma ^4}. \end{aligned}$$
(78)

Last, we have to calculate \(\mathrm {Var}(-\log _2 p(X|Y(k) = y(k)))\). Let \(\zeta ^2 = \sigma ^2 + 4\kappa (k)(1-\kappa (k))\) and let C be the normalization constant, so that \(-\log _2 p(x|Y(k) = y(k)) = \log _2 C + \log _2(e)\frac{(x-\mathbb {E}[X|Y(k)=y(k)])^2}{2\zeta ^2}\). We have:

$$\begin{aligned} \mathrm {Var}(-\log _2 p(X|Y(k) = y(k)))&= \mathrm {Var}\Bigl (\log _2 C + \log _2(e)\frac{(X-\mathbb {E}[X|Y(k)=y(k)])^2}{2\zeta ^2}\Bigr ) \end{aligned}$$
(79)
$$\begin{aligned}&= \frac{\log _2(e)^2}{4\zeta ^4}\,\mathrm {Var}\bigl ((X-\mathbb {E}[X|Y(k)=y(k)])^2\bigr ) \end{aligned}$$
(80)
$$\begin{aligned}&= \frac{\log _2(e)^2}{4\zeta ^4}\cdot 2\zeta ^4 \end{aligned}$$
(81)
$$\begin{aligned}&= \frac{\log _2(e)^2}{2}. \end{aligned}$$
(82)
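The value log₂(e)²/2 obtained above is simply the variance, in bits squared, of the negative log-likelihood of a Gaussian variable, independently of its standard deviation; a short Monte Carlo sketch (ours; ζ and the sample size are arbitrary) confirms this.

```python
# Illustrative Monte Carlo check (not from the paper) that
# Var(-log2 p(X)) = log2(e)^2 / 2 for X Gaussian, whatever zeta.
import numpy as np

rng = np.random.default_rng(4)
zeta = 3.7                                   # arbitrary standard deviation
X = zeta * rng.standard_normal(2_000_000)

neg_log2_p = np.log2(np.sqrt(2 * np.pi) * zeta) + np.log2(np.e) * X**2 / (2 * zeta**2)
print(np.var(neg_log2_p), np.log2(np.e)**2 / 2)   # both close to 1.04
```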

Overall, the success exponent defined in [10, Proposition 6] can be simplified in the case of monobit leakage as:

$$\begin{aligned} \mathsf {SE} \approx \min _{k\ne k^*} 4\frac{\log _2(e)^2\kappa (k)^2(1-\kappa (k))^2}{\sigma ^4}. \end{aligned}$$
(83)

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

de Chérisey, E., Guilley, S., Rioul, O. (2019). Confused yet Successful: Theoretical Comparison of Distinguishers for Monobit Leakages in Terms of Confusion Coefficient and SNR. In: Guo, F., Huang, X., Yung, M. (eds) Information Security and Cryptology. Inscrypt 2018. Lecture Notes in Computer Science, vol 11449. Springer, Cham. https://doi.org/10.1007/978-3-030-14234-6_28
