Miskolc Mathematical Notes, Vol. 17 (2016), No. 1, pp. 635–645. HU e-ISSN 1787-2413. DOI: 10.18514/MMN.2016.1435

ON THE MATRIX NEARNESS PROBLEM FOR (SKEW-)SYMMETRIC MATRICES ASSOCIATED WITH THE MATRIX EQUATIONS $(A_1XB_1,\dots,A_kXB_k)=(C_1,\dots,C_k)$

S. ŞİMŞEK, M. SARDUVAN, AND H. ÖZDEMİR

Received 19 November, 2014

Abstract. Suppose that the system of matrix equations $(A_1XB_1,\dots,A_kXB_k)=(C_1,\dots,C_k)$ with unknown matrix $X$ is given, where $A_i$, $B_i$, and $C_i$, $i=1,2,\dots,k$, are known matrices of suitable sizes. The matrix nearness problem is considered over the general and least squares solutions of this system. The explicit forms of the best approximate solutions of the problems over the sets of symmetric and skew-symmetric matrices are established as well. Moreover, a comparative table based on some numerical examples from the literature is given.

2010 Mathematics Subject Classification: 15A06; 15A09; 15A24; 65F35

Keywords: best approximate solution, Frobenius norm, matrix equations, Moore–Penrose generalized inverse, least squares solutions

1. INTRODUCTION AND NOTATIONS

Let $\mathbb{R}^{m,n}$, $\mathbb{R}^n$, $\mathbb{R}^n_S$, and $\mathbb{R}^n_{SS}$ be the sets of $m\times n$ real matrices, $n\times n$ real matrices, $n\times n$ real symmetric matrices, and $n\times n$ real skew-symmetric matrices, respectively. The symbols $A^T$ and $A^\dagger$ will denote the transpose and the Moore–Penrose generalized inverse of a matrix $A\in\mathbb{R}^{m,n}$, respectively. Further, $\operatorname{vec}(\cdot)$ will stand for the vec operator, i.e., $\operatorname{vec}(A)=(a_1^T,a_2^T,\dots,a_n^T)^T$ for the matrix $A=(a_1,a_2,\dots,a_n)\in\mathbb{R}^{m,n}$, $a_i\in\mathbb{R}^{m,1}$, $i=1,2,\dots,n$; and $A\otimes B$ will stand for the Kronecker product of the matrices $A\in\mathbb{R}^{m,n}$ and $B\in\mathbb{R}^{p,r}$ (see [1]). Moreover, let
$$\mathbb{R}^{m_1,n_1}\times\cdots\times\mathbb{R}^{m_k,n_k}=\{[A_1,\dots,A_k] \mid A_i\in\mathbb{R}^{m_i,n_i},\ i=1,2,\dots,k\}.$$
It is easy to see that $\mathbb{R}^{m_1,n_1}\times\cdots\times\mathbb{R}^{m_k,n_k}$ is a linear space over the real number field.
We define the inner product of $[A_1,A_2,\dots,A_k]$ and $[B_1,B_2,\dots,B_k]\in\mathbb{R}^{m_1,n_1}\times\mathbb{R}^{m_2,n_2}\times\cdots\times\mathbb{R}^{m_k,n_k}$ in this linear space as
$$\langle [A_1,A_2,\dots,A_k],[B_1,B_2,\dots,B_k]\rangle = \operatorname{tr}(B_1^TA_1)+\cdots+\operatorname{tr}(B_k^TA_k);$$
then $\mathbb{R}^{m_1,n_1}\times\mathbb{R}^{m_2,n_2}\times\cdots\times\mathbb{R}^{m_k,n_k}$ is a Hilbert inner product space. Furthermore, let $\|\cdot\|_H$ denote the norm derived from this inner product, i.e.,
$$\|[A_1,A_2,\dots,A_k]\|_H=\langle [A_1,A_2,\dots,A_k],[A_1,A_2,\dots,A_k]\rangle^{1/2}=\left[\operatorname{tr}(A_1^TA_1)+\operatorname{tr}(A_2^TA_2)+\cdots+\operatorname{tr}(A_k^TA_k)\right]^{1/2}=\left(\|A_1\|^2+\cdots+\|A_k\|^2\right)^{1/2},$$
where $\|\cdot\|$ denotes the Frobenius norm (see, for example, [11]).

This work was supported by the Research Fund of Sakarya University, Project Number 2014-0200-001.

© 2016 Miskolc University Press

The well-known linear matrix equation $AXB=C$, where $A$, $B$, $C$ are known matrices of suitable sizes and $X$ is the matrix of unknowns, has been studied in the case of special solution structures, e.g. symmetric, triangular, Hermitian, nonnegative definite, reflexive, diagonal, etc., using matrix decompositions such as the singular value decomposition (SVD), the generalized SVD (GSVD), the quotient SVD, and the canonical correlation decomposition (CCD), in [5, 6, 12, 15–17, 23, 36]. Now, first, consider the following two problems.

Problem 1. For given matrices $A\in\mathbb{R}^{m,n}$, $B\in\mathbb{R}^{n,r}$, and $C\in\mathbb{R}^{m,r}$, find $\hat X\in\Omega$ such that
$$\|A\hat XB-C\|=\min_{X\in\Omega}\|AXB-C\|,$$
where $\Omega$ is any one of the sets of special matrices such as symmetric, skew-symmetric, Hermitian, reflexive, etc.

Problem 2. Let $S_{E_1}$ be the solution set of Problem 1. For a given matrix $X_0\in\mathbb{R}^n$, find $\hat X\in S_{E_1}$ such that
$$\|\hat X-X_0\|=\min_{X\in S_{E_1}}\|X-X_0\|.$$

Problem 2, which is very important in the applied sciences, is known as the matrix nearness problem in the literature, and it has been studied extensively in recent years. For instance, Peng et al. [28] and Huang et al.
[14] presented matrix iteration methods for finding the symmetric and skew-symmetric solutions of Problem 2, respectively. Peng et al. [27] gave necessary and sufficient conditions for the solvability of Problem 2 over reflexive and anti-reflexive matrices. Moreover, they obtained the explicit expression of the optimal approximate solution of Problem 2 when $X$ is a reflexive or an anti-reflexive matrix. In these works, the linear matrix equation $AXB=C$ is assumed to be consistent. However, it is rarely possible to satisfy the consistency condition of the linear matrix equation $AXB=C$, since the matrices $A$, $B$, and $C$ occurring in practice are usually obtained from experiments. When Problem 2 is inconsistent, Qui et al. [32], Lei et al. [18], and Peng [31] established iterative methods over the (skew-)symmetric and the (skew-)symmetric $P$-commuting matrices. On the other hand, Liao et al. [20], Huang et al. [13], and Zhao et al. [38] derived explicit expressions of the least squares solution to Problem 2 when $X$ is a (skew-)symmetric and a $(P,Q)$-orthogonal symmetric matrix, respectively. Moreover, Sarduvan et al. [33] established the explicit forms of the best approximate solutions of Problem 2 when $X$ is a $(P,Q)$-orthogonal (skew-)symmetric matrix. Now, consider the following two problems.

Problem 3. For given matrices $A_1\in\mathbb{R}^{m_1,n}$, $B_1\in\mathbb{R}^{n,p_1}$, $C_1\in\mathbb{R}^{m_1,p_1}$, $A_2\in\mathbb{R}^{m_2,n}$, $B_2\in\mathbb{R}^{n,p_2}$, and $C_2\in\mathbb{R}^{m_2,p_2}$, find $\hat X\in\Omega$ such that
$$\left\|[A_1\hat XB_1-C_1,\ A_2\hat XB_2-C_2]\right\|_H=\min_{X\in\Omega}\left\|[A_1XB_1-C_1,\ A_2XB_2-C_2]\right\|_H,$$
where $\Omega$ is any one of the sets of special matrices such as symmetric, skew-symmetric, Hermitian, reflexive, etc.

Problem 4. Let $S_{E_2}$ be the solution set of Problem 3. For a given matrix $X_0\in\mathbb{R}^n$, find $\hat X\in S_{E_2}$ such that
$$\|\hat X-X_0\|=\min_{X\in S_{E_2}}\|X-X_0\|.$$

Research on solving a pair of matrix equations has been actively ongoing for the past years.
For instance, Mitra [24] and Navarra [25] established conditions for the existence of a solution and a representation of a general common solution to Problem 3. Also, Özgüler et al. [26], Woude [35], Wang [37], and Liu [22] derived necessary and sufficient conditions for the existence of a common solution to Problem 3. Moreover, Dehghan et al. [8] obtained conditions for the existence of $(R,S)$-(skew-)symmetric solutions of Problem 3. Ding et al. [9] presented an iterative method for solving a pair of inconsistent matrix equations. Besides the works on finding conditions for the existence of a common solution to Problem 3, there are some valuable efforts on solving the matrix nearness problem for a pair of matrix equations. For example, in the case that the matrix equations in Problem 3 are consistent, iterative algorithms were presented for solving Problem 4 under certain constraints on the solution, such as symmetric, reflexive, bisymmetric, generalized centro-symmetric, and generalized reflexive matrices, in [3, 7, 29, 30, 34]. Cai et al. [2] and Chen et al. [4] derived iterative algorithms over bisymmetric and symmetric solutions, respectively, in the case that the matrix equations in Problem 4 are inconsistent. It is noteworthy that when the pair of matrix equations is inconsistent, its least squares solutions with minimum norm cannot be obtained by the GSVD and the CCD. In order to overcome this difficulty, Liao and Lei [19] and Liao et al. [21] derived a different approach based on the projection theorem; they could then use the GSVD and the CCD to obtain the solution.

In this paper, the general expressions of the (skew-)symmetric solutions to Problem 4 are established using the Kronecker product and the Moore–Penrose inverse. Moreover, these general expressions are extended to the matrix equations of the form $(A_1XB_1,\dots,A_kXB_k)=(C_1,\dots,C_k)$.
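The two tools just named rest on the standard Kronecker-product identity $\operatorname{vec}(AXB)=(B^T\otimes A)\operatorname{vec}(X)$, which is used throughout the paper. It can be verified directly in a minimal NumPy sketch (matrix sizes are chosen arbitrarily for illustration; column-major flattening matches the column-stacking vec operator):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A in R^{3,4}
X = rng.standard_normal((4, 5))   # X in R^{4,5}
B = rng.standard_normal((5, 2))   # B in R^{5,2}

def vec(M):
    # vec() stacks the columns of M, so NumPy needs Fortran (column-major) order
    return M.flatten(order="F")

# vec(A X B) equals (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))  # → True
```

This identity is what turns each matrix equation $A_iXB_i=C_i$ into an ordinary linear system in $\operatorname{vec}(X)$, to which the Moore–Penrose inverse can be applied.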
Furthermore, a comparative table based on some numerical examples from the literature is given.

2. PRELIMINARY RESULTS

The vector $x_0\in\mathbb{R}^{n,1}$ is a least squares solution (LSS) to the inconsistent system of linear equations $Ax=g$, where $A\in\mathbb{R}^{m,n}$, if and only if $(Ax-g)^T(Ax-g)\ge(Ax_0-g)^T(Ax_0-g)$ for all $x\in\mathbb{R}^{n,1}$ [10]. The vector $x_0\in\mathbb{R}^{n,1}$ is the best approximate solution (BAS) to the inconsistent system of linear equations $Ax=g$, where $A\in\mathbb{R}^{m,n}$, if and only if
(1) $(Ax-g)^T(Ax-g)\ge(Ax_0-g)^T(Ax_0-g)$ for all $x\in\mathbb{R}^{n,1}$,
(2) $x^Tx>x_0^Tx_0$ for all $x\in\mathbb{R}^{n,1}\setminus\{x_0\}$ satisfying $(Ax-g)^T(Ax-g)=(Ax_0-g)^T(Ax_0-g)$ [10].
It is noteworthy that there may be many LSSs for an inconsistent system of linear equations. In addition, an LSS need not be the BAS, while the BAS is always an LSS. Moreover, the BAS is always unique.

If the matrix equation $AXB=C$, where $A\in\mathbb{R}^{m,n}$, $B\in\mathbb{R}^{p,r}$, $C\in\mathbb{R}^{m,r}$ are known nonzero matrices and $X\in\mathbb{R}^{n,p}$ is the matrix of unknowns, is assumed to be inconsistent, then one may ask for a matrix $X$ such that $\|AXB-C\|$ is minimal. A matrix satisfying this condition is called an approximate solution to the matrix equation. The matrix $\hat X\in\mathbb{R}^{n,p}$ is defined to be the BAS to the matrix equation $AXB=C$ if and only if
(1) $\|AXB-C\|\ge\|A\hat XB-C\|$ for all $X\in\mathbb{R}^{n,p}$,
(2) $\|X\|>\|\hat X\|$ for all $X\in\mathbb{R}^{n,p}\setminus\{\hat X\}$ satisfying $\|AXB-C\|=\|A\hat XB-C\|$.
We note that a vector $k\in\mathbb{R}^{mn,1}$ will stand for the vector $\operatorname{vec}(K)$ in the rest of the text, where $K\in\mathbb{R}^{m,n}$. It is known that the matrix equation $AXB=C$ can be written equivalently as
$$\left(B^T\otimes A\right)x=c. \tag{2.1}$$
Consequently, the solutions of the matrix equation $AXB=C$ can be obtained by considering the usual system of linear equations (2.1) instead of the matrix equation $AXB=C$. Now, we give the following lemma, which can be proved easily.

Lemma 1 ([10]).
Suppose that $S_g$ is the set of all solutions to the consistent system of linear equations $Ax=g$, where $A\in\mathbb{R}^{m,n}$ is a known matrix, $g\in\mathbb{R}^{m,1}$ is a known vector, and $x\in\mathbb{R}^{n,1}$ is the vector of unknowns. For a given vector $x_0\in\mathbb{R}^{n,1}$, the vector $\hat x\in S_g$ satisfying
$$\|\hat x-x_0\|=\min_{x\in S_g}\|x-x_0\|$$
is given by
$$\hat x=A^\dagger g+\left(I-A^\dagger A\right)x_0.$$

Lemma 2. Let $S_e$ be the set of all least squares solutions to the system of linear equations $Ax=g$, which need not be consistent, where $A\in\mathbb{R}^{m,n}$ is a known matrix, $g\in\mathbb{R}^{m,1}$ is a known vector, and $x\in\mathbb{R}^{n,1}$ is the vector of unknowns. For a given vector $x_0\in\mathbb{R}^{n,1}$, the vector $\hat x\in S_e$ satisfying
$$\|\hat x-x_0\|=\min_{x\in S_e}\|x-x_0\|$$
is given by
$$\hat x=A^\dagger g+\left(I-A^\dagger A\right)x_0.$$

Proof. If the system is consistent, then the proof is clear from Lemma 1. Now, let the system be inconsistent. Then the normal equations of the system,
$$A^TAx=A^Tg, \tag{2.2}$$
are consistent. So, from Lemma 1, the BAS of the inconsistent system $Ax=g$ is
$$\hat x=(A^TA)^\dagger A^Tg+\left(I-(A^TA)^\dagger(A^TA)\right)x_0$$
or, equivalently, from [10, Theorem 6.2.16],
$$\hat x=A^\dagger g+\left(I-A^\dagger A\right)x_0. \qquad\square$$

It is noteworthy that the structures of $\hat x$ in Lemmas 1 and 2 are exactly the same.

Remark 1. The minimization problem $\min\|X-X_0\|$ is equivalent to the minimization problem
$$\min\left\|X-\tfrac12\left(X_0+X_0^T\right)\right\|$$
over the set $\mathbb{R}^n_S$, since
$$\|X-X_0\|^2=\left\|X-\tfrac12\left(X_0+X_0^T\right)\right\|^2+\left\|\tfrac12\left(X_0-X_0^T\right)\right\|^2,\qquad\forall X\in\mathbb{R}^n_S.$$
So, the matrix $\tfrac12\left(X_0+X_0^T\right)$ is taken instead of the matrix $X_0$ to find the symmetric solutions of Problem 4 if the matrix $X_0$ is not symmetric. Similarly, if the matrix $X_0$ is not skew-symmetric, then the matrix $\tfrac12\left(X_0-X_0^T\right)$ is taken instead of the matrix $X_0$ to find the skew-symmetric solutions of Problem 4.

3. THE (SKEW-)SYMMETRIC SOLUTION OF PROBLEM 4

Our aim is to find a symmetric solution of Problem 4 with an arbitrary matrix $X_0\in\mathbb{R}^n$.
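Before carrying this out, Lemma 2 can be exercised numerically. For a rank-deficient $A$ and a generically inconsistent right-hand side, $\hat x=A^\dagger g+(I-A^\dagger A)x_0$ satisfies the normal equations $A^TAx=A^Tg$, and moving within the least squares solution set $\hat x+\operatorname{null}(A)$ only increases the distance to $x_0$. A NumPy sketch with arbitrarily chosen test dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # rank 2 in R^{5,4}
g = rng.standard_normal(5)        # generically not in the range of A
x0 = rng.standard_normal(4)

Ad = np.linalg.pinv(A)            # Moore-Penrose inverse A^dagger
x_hat = Ad @ g + (np.eye(4) - Ad @ A) @ x0

# x_hat is a least squares solution: it satisfies the normal equations
print(np.allclose(A.T @ A @ x_hat, A.T @ g))  # → True

# Among all LSS x_hat + n with n in null(A), x_hat is the closest to x0,
# because x_hat - x0 lies in range(A^T), which is orthogonal to null(A).
n = (np.eye(4) - Ad @ A) @ rng.standard_normal(4)   # a null-space vector of A
print(np.linalg.norm((x_hat + n) - x0) >= np.linalg.norm(x_hat - x0))  # → True
```

The projector $I-A^\dagger A$ onto $\operatorname{null}(A)$ is exactly the device used in the theorems of this section.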
To do this, let us consider the quartet of matrix equations
$$A_1XB_1=C_1,\qquad B_1^TXA_1^T=C_1^T,\qquad A_2XB_2=C_2,\qquad B_2^TXA_2^T=C_2^T$$
or, equivalently, the usual system of linear equations
$$\begin{bmatrix} B_1^T\otimes A_1\\ A_1\otimes B_1^T\\ B_2^T\otimes A_2\\ A_2\otimes B_2^T \end{bmatrix}x=\begin{bmatrix} \operatorname{vec}(C_1)\\ \operatorname{vec}(C_1^T)\\ \operatorname{vec}(C_2)\\ \operatorname{vec}(C_2^T) \end{bmatrix}. \tag{3.1}$$
In view of Lemma 2, the solution vector of the matrix nearness problem of the system (3.1) is
$$\hat x=\begin{bmatrix} B_1^T\otimes A_1\\ A_1\otimes B_1^T\\ B_2^T\otimes A_2\\ A_2\otimes B_2^T \end{bmatrix}^\dagger \begin{bmatrix} \operatorname{vec}(C_1)\\ \operatorname{vec}(C_1^T)\\ \operatorname{vec}(C_2)\\ \operatorname{vec}(C_2^T) \end{bmatrix} + x_0 - \begin{bmatrix} B_1^T\otimes A_1\\ A_1\otimes B_1^T\\ B_2^T\otimes A_2\\ A_2\otimes B_2^T \end{bmatrix}^\dagger \begin{bmatrix} B_1^T\otimes A_1\\ A_1\otimes B_1^T\\ B_2^T\otimes A_2\\ A_2\otimes B_2^T \end{bmatrix}x_0, \tag{3.2}$$
where $x_0=\operatorname{vec}\!\left(\tfrac12(X_0+X_0^T)\right)$. Thus, within this framework, we have the following theorem.

Theorem 1. Let $A_1\in\mathbb{R}^{m_1,n}$, $B_1\in\mathbb{R}^{n,p_1}$, $C_1\in\mathbb{R}^{m_1,p_1}$, $A_2\in\mathbb{R}^{m_2,n}$, $B_2\in\mathbb{R}^{n,p_2}$, $C_2\in\mathbb{R}^{m_2,p_2}$, and $X_0\in\mathbb{R}^n$ be known matrices, and let $x_0=\operatorname{vec}\!\left(\tfrac12(X_0+X_0^T)\right)$. Then the symmetric solution $\hat X\in\mathbb{R}^n_S$ of Problem 4 is given as in (3.2) in view of $\hat x=\operatorname{vec}(\hat X)$.

If it is required to find a skew-symmetric solution of Problem 4, then $-\operatorname{vec}(C_i^T)$ is taken instead of $\operatorname{vec}(C_i^T)$, $i=1,2$. By continuing with the same idea, Theorem 1 can be extended to $k$ matrix equations, where $k$ is an arbitrary positive integer.

Theorem 2. Let $A_i\in\mathbb{R}^{m_i,n}$, $B_i\in\mathbb{R}^{n,p_i}$, $C_i\in\mathbb{R}^{m_i,p_i}$, $i=1,2,\dots,k$, be known matrices and
$$S_E=\{X \mid X\in\Omega,\ \|[A_1XB_1-C_1,\dots,A_kXB_k-C_k]\|_H=\min\}.$$
For a given matrix $X_0\in\mathbb{R}^n$, the symmetric solution $\hat X\in S_E$ satisfying
$$\|\hat X-X_0\|=\min_{X\in S_E}\|X-X_0\| \tag{3.3}$$
is given by
$$\hat x=\begin{bmatrix} B_1^T\otimes A_1\\ A_1\otimes B_1^T\\ \vdots\\ B_k^T\otimes A_k\\ A_k\otimes B_k^T \end{bmatrix}^\dagger \begin{bmatrix} \operatorname{vec}(C_1)\\ \operatorname{vec}(C_1^T)\\ \vdots\\ \operatorname{vec}(C_k)\\ \operatorname{vec}(C_k^T) \end{bmatrix} + x_0 - \begin{bmatrix} B_1^T\otimes A_1\\ A_1\otimes B_1^T\\ \vdots\\ B_k^T\otimes A_k\\ A_k\otimes B_k^T \end{bmatrix}^\dagger \begin{bmatrix} B_1^T\otimes A_1\\ A_1\otimes B_1^T\\ \vdots\\ B_k^T\otimes A_k\\ A_k\otimes B_k^T \end{bmatrix}x_0, \tag{3.4}$$
where $x_0=\operatorname{vec}\!\left(\tfrac12(X_0+X_0^T)\right)$ and $\hat x=\operatorname{vec}(\hat X)$.
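Formula (3.2) is straightforward to implement. The following NumPy sketch (random matrices of small, arbitrarily chosen sizes, not the paper's numerical examples) builds the stacked system (3.1) for $k=2$ and checks that the resulting $\hat X$ is symmetric and is a least squares solution, as Theorem 1 asserts:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A1, B1, C1 = (rng.standard_normal((2, n)), rng.standard_normal((n, 2)),
              rng.standard_normal((2, 2)))
A2, B2, C2 = (rng.standard_normal((2, n)), rng.standard_normal((n, 2)),
              rng.standard_normal((2, 2)))
X0 = rng.standard_normal((n, n))       # arbitrary, not symmetric

def vec(M):
    return M.flatten(order="F")        # column-stacking vec operator

# Stack each equation with its transposed counterpart, as in (3.1)
M = np.vstack([np.kron(B1.T, A1), np.kron(A1, B1.T),
               np.kron(B2.T, A2), np.kron(A2, B2.T)])
c = np.concatenate([vec(C1), vec(C1.T), vec(C2), vec(C2.T)])

x0 = vec(0.5 * (X0 + X0.T))            # symmetric part of X0, per Remark 1
Md = np.linalg.pinv(M)
x_hat = Md @ c + x0 - Md @ M @ x0      # formula (3.2)
X_hat = x_hat.reshape(n, n, order="F")

print(np.allclose(X_hat, X_hat.T))               # symmetric, per Theorem 1
print(np.allclose(M.T @ M @ x_hat, M.T @ c))     # a least squares solution
```

For the skew-symmetric case one would negate the transposed right-hand sides and use the skew-symmetric part of $X_0$, as stated after Theorem 2.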
Similarly, $-\operatorname{vec}(C_i^T)$ is taken instead of $\operatorname{vec}(C_i^T)$, $i=1,2,\dots,k$, to find the skew-symmetric solution.

TABLE 1. A comparative table for the examples chosen from the literature. [The table reports, for Example 2 in [19] and Example 3 in [21] (each with $\varepsilon\in\{1,\ 0.01,\ 0.0001,\ 0.000001\}$), Example 1 in [29], and Example 4.1 in [34], the values of $\|X-X_0\|$ and $\|[A_1XB_1-C_1,\ A_2XB_2-C_2]\|_H$; the numerical entries could not be recovered reliably from this copy. $\varepsilon$ is as given in [19, 21].]

We close this section with the comparative table (Table 1) consisting of examples chosen from the literature. In each cell, the first value is the result obtained by the method proposed in this work, while the second is the result reported in the referenced work. All the computations have been performed using Matlab 7.5.

4. CONCLUSIONS

Solving systems of matrix equations becomes relatively difficult when matrix decompositions are used. For example, if the matrix equations are inconsistent, the matrix decompositions GSVD and CCD cannot individually be used to solve them; the difficulty lies in the fact that the invariance of the Frobenius norm does not hold for the general nonsingular matrices in these decompositions [19]. For this reason, these kinds of problems are usually solved using iterative methods. However, it is a well-known fact that solving these kinds of problems by elementary methods, which are very simple and elegant, eliminates errors caused by the iteration process.
Due to these facts, in our opinion, it is better to give explicit analytical expressions for the solutions, obtained by elementary methods, rather than the implicit solutions obtained by iterative methods, for the inconsistent matrix equations encountered in most physical problems. If the matrices involved in the problems are large and sparse, the computations, especially in the elementary methods, clearly contain a very large number of terms; therefore, elementary methods may not be practical with current computer technology in such situations. On the other hand, technological development is rapid, so we believe that these difficulties will disappear in the near future. Consequently, within the framework of these considerations, establishing the solutions as in this note is important not only from the mathematical point of view but also practically.

REFERENCES

[1] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, 2nd ed. New York: CMS Books in Mathematics, Springer-Verlag, 2003.
[2] J. Cai and G. Chen, "An iterative algorithm for the least squares bisymmetric solutions of the matrix equations $A_1XB_1=C_1$, $A_2XB_2=C_2$," Math. Comput. Modelling, vol. 50, pp. 1237–1244, 2009, doi: 10.1016/j.mcm.2009.07.004.
[3] J. Cai, G. Chen, and Q. Liu, "An iterative method for the bisymmetric solutions of the consistent matrix equations $A_1XB_1=C_1$, $A_2XB_2=C_2$," Int. J. Comput. Math., vol. 87, no. 12, pp. 2706–2715, 2010, doi: 10.1080/00207160902722357.
[4] Y. Chen, Z. Peng, and T. Zhou, "LSQR iterative common symmetric solutions to matrix equations $AXB=E$ and $CXD=F$," Appl. Math. Comput., vol. 217, pp. 230–236, 2010, doi: 10.1016/j.amc.2010.05.053.
[5] K. E. Chu, "Singular value and generalized singular value decompositions and the solutions of linear matrix equations," Linear Algebra Appl., vol. 88, pp. 83–98, 1987, doi: 10.1016/0024-3795(87)90104-2.
[6] K. E. Chu, "Symmetric solutions of linear matrix equations by matrix decompositions," Linear Algebra Appl., vol. 119, pp. 35–50, 1989, doi: 10.1016/0024-3795(89)90067-0.
[7] M. Dehghan and M. Hajarian, "An iterative algorithm for solving a pair of matrix equations $AYB=E$, $CYD=F$ over generalized centro-symmetric matrices," Comput. Math. Appl., vol. 56, pp. 3246–3260, 2008, doi: 10.1016/j.camwa.2008.07.031.
[8] M. Dehghan and M. Hajarian, "The (R,S)-symmetric and (R,S)-skew symmetric solutions of the pair of matrix equations $A_1XB_1=C_1$ and $A_2XB_2=C_2$," Bull. Iranian Math. Soc., vol. 37, no. 3, pp. 269–279, 2011.
[9] J. Ding, Y. Liu, and F. Ding, "Iterative solutions to matrix equations of the form $A_iXB_i=F_i$," Comput. Math. Appl., vol. 59, pp. 3500–3507, 2010, doi: 10.1016/j.camwa.2010.03.041.
[10] F. A. Graybill, Matrices with Applications in Statistics, 2nd ed. Belmont, CA, USA: Wadsworth Group, 1983.
[11] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, UK: Cambridge University Press, 1985, doi: 10.1017/CBO9780511810817.
[12] D. Hua, "On the symmetric solutions of linear matrix equations," Linear Algebra Appl., vol. 131, pp. 1–7, 1990, doi: 10.1016/0024-3795(90)90370-R.
[13] G. X. Huang, F. Yin, and K. Guo, "The general solutions on the minimum residual problem and the matrix nearness problem for symmetric matrices or anti-symmetric matrices," Appl. Math. Comput., vol. 194, pp. 85–91, 2007, doi: 10.1016/j.amc.2007.04.041.
[14] G. X. Huang, F. Yin, and K. Guo, "An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation $AXB=C$," J. Comput. Appl. Math., vol. 212, pp. 231–244, 2008, doi: 10.1016/j.cam.2006.12.005.
[15] D. S. C. Ilić, "The reflexive solutions of the matrix equation $AXB=C$," Comput. Math. Appl., vol. 51, pp. 897–902, 2006, doi: 10.1016/j.camwa.2005.11.032.
[16] D. S. C. Ilić, "Re-nnd solutions of the matrix equation $AXB=C$," J. Aust. Math. Soc., vol. 84, pp. 63–72, 2008, doi: 10.1017/S1446788708000207.
[17] C. G. Khatri and S. K. Mitra, "Hermitian and nonnegative definite solutions of linear matrix equations," SIAM J. Appl. Math., vol. 31, no. 4, pp. 579–585, 1976, doi: 10.1137/0131050.
[18] Y. Lei and A. P. Liao, "A minimal residual algorithm for the inconsistent matrix equation $AXB=C$ over symmetric matrices," Appl. Math. Comput., vol. 188, pp. 499–513, 2007, doi: 10.1016/j.amc.2006.10.011.
[19] A. P. Liao and Y. Lei, "Least-squares solution with the minimum-norm for the matrix equation $(AXB, GXH)=(C, D)$," Comput. Math. Appl., vol. 50, pp. 539–549, 2005, doi: 10.1016/j.camwa.2005.02.011.
[20] A. P. Liao and Y. Lei, "Optimal approximate solution of the matrix equation $AXB=C$ over symmetric matrices," J. Comput. Math., vol. 25, no. 5, pp. 543–552, 2007.
[21] A. P. Liao, Y. Lei, and S. F. Yuan, "The matrix nearness problem for symmetric matrices associated with the matrix equation $[A^TXA, B^TXB]=[C, D]$," Linear Algebra Appl., vol. 418, pp. 939–954, 2006, doi: 10.1016/j.laa.2006.03.032.
[22] Y. H. Liu, "Ranks of least squares solutions of the matrix equation $AXB=C$," Comput. Math. Appl., vol. 55, pp. 1270–1278, 2008, doi: 10.1016/j.camwa.2007.06.023.
[23] J. R. Magnus, "L-structured matrices and linear matrix equations," Linear Multilinear Algebra, vol. 14, pp. 67–88, 1983, doi: 10.1080/03081088308817543.
[24] S. K. Mitra, "Common solutions to a pair of linear matrix equations $A_1XB_1=C_1$ and $A_2XB_2=C_2$," Proc. Camb. Phil. Soc., vol. 74, pp. 213–216, 1973, doi: 10.1017/S030500410004799X.
[25] A. Navarra, P. L. Odell, and D. M. Young, "A representation of the general common solution to the matrix equations $A_1XB_1=C_1$ and $A_2XB_2=C_2$ with applications," Comput. Math. Appl., vol. 41, pp. 929–935, 2001, doi: 10.1016/S0898-1221(00)00330-8.
[26] A. B. Özgüler and N. Akar, "A common solution to a pair of linear matrix equations over a principal ideal domain," Linear Algebra Appl., vol. 144, pp. 85–99, 1991, doi: 10.1016/0024-3795(91)90063-3.
[27] X. Y. Peng, X. Y. Hu, and L. Zhang, "The reflexive and anti-reflexive solutions of the matrix equation $A^HXB=C$," J. Comput. Appl. Math., vol. 200, pp. 749–760, 2007, doi: 10.1016/j.cam.2006.01.024.
[28] Y. X. Peng, X. Y. Hu, and L. Zhang, "An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation $AXB=C$," Appl. Math. Comput., vol. 160, pp. 763–777, 2005, doi: 10.1016/j.amc.2003.11.030.
[29] Y. X. Peng, X. Y. Hu, and L. Zhang, "An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations $A_1XB_1=C_1$, $A_2XB_2=C_2$," Appl. Math. Comput., vol. 183, pp. 1127–1137, 2006, doi: 10.1016/j.amc.2006.05.124.
[30] Z. H. Peng, X. Y. Hu, and L. Zhang, "An efficient algorithm for the least-squares reflexive solution of the matrix equation $A_1XB_1=C_1$ and $A_2XB_2=C_2$," Appl. Math. Comput., vol. 181, pp. 988–999, 2006, doi: 10.1016/j.amc.2006.01.071.
[31] Z. Y. Peng, "An iterative method for the least squares symmetric solution of the linear matrix equation $AXB=C$," Appl. Math. Comput., vol. 170, pp. 711–723, 2005, doi: 10.1016/j.amc.2004.12.032.
[32] Y. Qui, Z. Zhang, and J. Lu, "Matrix iterative solutions to the least squares problem of $BXA^T=F$ with some linear constraints," Appl. Math. Comput., vol. 185, pp. 284–300, 2007, doi: 10.1016/j.amc.2006.06.097.
[33] M. Sarduvan, S. Şimşek, and H. Özdemir, "On the best approximate (P,Q)-orthogonal symmetric and skew-symmetric solution of the matrix equation $AXB=C$," J. Numer. Math., vol. 22, no. 3, pp. 255–269, 2014, doi: 10.1515/jnma-2014-0011.
[34] X. Sheng and G. Chen, "A finite iterative method for solving a pair of linear matrix equations $(AXB, CXD)=(E, F)$," Appl. Math. Comput., vol. 189, pp. 1350–1358, 2007, doi: 10.1016/j.amc.2006.12.026.
[35] J. W. van der Woude, "On the existence of a common solution $X$ to the matrix equations $A_iXB_j=C_{ij}$, $(i,j)\in\Gamma$," Linear Algebra Appl., vol. 375, pp. 135–145, 2003, doi: 10.1016/S0024-3795(03)00608-6.
[36] Q. Wang and C. Yang, "The re-nonnegative definite solutions to the matrix equation $AXB=C$," Comment. Math. Univ. Carolin., vol. 39, no. 1, pp. 7–13, 1998.
[37] Q. W. Wang, "A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity," Linear Algebra Appl., vol. 384, pp. 43–54, 2004, doi: 10.1016/j.laa.2003.12.039.
[38] L. L. Zhao, G. L. Chen, and Q. B. Liu, "Least squares (P,Q)-orthogonal symmetric solutions of the matrix equation and its optimal approximation," Electron. J. Linear Algebra, vol. 20, pp. 537–551, 2010, doi: 10.13001/1081-3810.1392.

Authors' addresses

S. Şimşek
Kırklareli University, Department of Mathematics, TR39100, Kırklareli, Turkey
E-mail address: sinem.simsek@klu.edu.tr

M. Sarduvan
Sakarya University, Department of Mathematics, TR54187, Sakarya, Turkey
E-mail address: msarduvan@sakarya.edu.tr

H. Özdemir
Sakarya University, Department of Mathematics, TR54187, Sakarya, Turkey
E-mail address: hozdemir@sakarya.edu.tr