Abstract
Bidiagonal factorizations of the change of basis matrices between the monomial and Newton polynomial bases are obtained. The total positivity of these matrices is characterized in terms of the sign of the nodes of the Newton bases. It is shown that algebraic computations with these matrices can be performed to high relative accuracy whenever all nodes have the same sign. Stirling matrices can be regarded as particular cases of these matrices, and, as a consequence, computations to high relative accuracy with collocation and Wronskian matrices of Touchard polynomial bases can also be achieved. The performed numerical experiments confirm the accuracy of the solutions obtained with the proposed factorizations when solving algebraic problems such as the computation of eigenvalues, singular values, and inverses, as well as the solution of some linear systems of equations associated with these matrices.
1 Introduction
The resolution of interpolation or approximation problems in a vector space of functions usually requires linear algebra computations with collocation or Wronskian matrices of a given basis of the space. For example, these matrices appear when imposing Lagrange or Taylor interpolation conditions on the considered basis functions. Unfortunately, as the dimension increases, these matrices may become very ill-conditioned, so standard routines implementing the best traditional numerical methods cannot obtain accurate solutions to the considered problems.
Taking into account the previous considerations, an important topic in numerical linear algebra is the achievement of computations to high relative accuracy (HRA computations), that is, computations whose relative errors are of the order of the machine precision. In recent years, HRA computations have been achieved for totally positive collocation and Wronskian matrices of different polynomial bases (see [7, 8, 12, 13, 24, 25, 26]).
Vandermonde matrices have relevant applications in interpolation and numerical quadrature (see [15, 30]). These matrices are known to be totally positive for increasing sequences of positive nodes, and the HRA resolution of related algebraic problems has been achieved by considering a bidiagonal factorization of them (see [4, 12] and references therein). Let us observe that Vandermonde matrices can be considered as the change of basis matrices between monomial and Lagrange polynomial bases.
The polynomial basis used in the Newton interpolation formula is called the Newton basis. In this paper, for a given sequence of nodes, not necessarily distinct, we factorize the change of basis matrices between the monomial and the Newton bases of the same dimension. These matrices have a triangular structure, and their total positivity will be fully characterized in terms of the sign of the considered nodes. Moreover, HRA computations with the change of basis matrices will be achieved whenever all nodes have the same sign, even when the matrices are not totally positive. Among other applications, the proposed factorization will be used to obtain HRA computations with Wronskian matrices of Newton bases.
Furthermore, this paper shows that second-kind Stirling numbers can be considered divided differences of monomial polynomials at sets of nodes formed by the first consecutive nonnegative integers. Then, the change of basis matrix between the corresponding Newton and monomial bases is a Stirling matrix, that is, a triangular matrix whose entries are given in terms of Stirling numbers. On the other hand, these matrices allow us to define the Touchard polynomial bases.
Touchard polynomials, also called exponential polynomials, generalize the Bell polynomials arising in the enumeration of permutations whose cycles possess certain properties. Algebraic, combinatorial, and probabilistic properties of these polynomials are described in [6, 29, 31, 33]. In this paper, the total positivity of Touchard polynomial bases is proved, and a procedure to obtain HRA computations with their collocation and Wronskian matrices is provided.
This paper is organized as follows. Section 2 recalls basic aspects related to total positivity and HRA. The Neville elimination procedure to derive the parameterization of totally positive matrices leading to HRA algorithms is also described. In addition, conditions that guarantee HRA computations for non-totally positive matrices are also provided. Section 3 focuses on the change of basis matrices relating monomial and Newton polynomial bases and shows some applications of these matrices. Their total positivity is characterized in terms of the sign of the considered nodes, and the bidiagonal decompositions providing HRA computations are also obtained. These findings are applied to achieve accurate computations with Wronskian matrices of Newton bases in Section 4 and with Stirling matrices in Section 5. The total positivity of Touchard polynomial bases is proved in Section 6. Moreover, Wronskian matrices of Touchard bases are proved to be totally positive for positive parameters, guaranteeing the HRA resolution of related algebraic problems. Finally, Section 7 shows the accurate computations obtained when solving relevant algebraic problems with collocation and Wronskian matrices of Touchard polynomials.
2 Notations, basic concepts, and auxiliary results
Given a basis \((b_0, \ldots , b_n)\) of a space U(I) of functions defined on \(I\subseteq \mathbb {R}\) and a sequence of values \(t_0, \ldots , t_{n}\) on I, the corresponding collocation matrix is
If the functions are n-times continuously differentiable at \(t\in I\), we can define the Wronskian matrix at t as follows
where the i-th derivative of b at the value t is denoted by \(b^{(i)}(t)\).
A matrix is said to be totally positive or TP if all its minors are nonnegative and strictly totally positive or STP if all its minors are positive (see [1]). In the literature, TP and STP matrices are also called totally nonnegative and totally positive matrices, respectively (see [14, 20]). Interesting applications of TP and STP matrices can be found in [1, 14, 32]. Let us recall that, by Theorem 3.1 of [1], the product of TP matrices is again a TP matrix.
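For small matrices, total positivity can be checked directly from the definition by inspecting all minors. The following Python sketch (our illustrative code, with exponential cost, so it is only a sanity check on toy examples) makes the TP and STP definitions concrete:

```python
import numpy as np
from itertools import combinations

def is_totally_positive(A, strict=False, tol=1e-10):
    """Check TP (all minors nonnegative) or STP (all minors positive)
    by brute force over every square submatrix."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                minor = np.linalg.det(A[np.ix_(rows, cols)])
                if strict and minor <= tol:
                    return False
                if not strict and minor < -tol:
                    return False
    return True

# The 3x3 symmetric Pascal matrix is a classical TP example.
pascal = np.array([[1, 1, 1], [1, 2, 3], [1, 3, 6]])
```

In practice, total positivity of larger matrices is checked through the Neville elimination characterization recalled below, not by enumerating minors.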
An important topic in numerical linear algebra is the design and analysis of algorithms adapted to the structure of TP matrices, allowing the resolution of related algebraic problems with relative errors of the order of the machine precision, that is, algorithms to high relative accuracy (HRA).
A real value \(y\ne 0\) is said to be computed to HRA whenever the computed value \( \widetilde{y}\) satisfies \( |y-\widetilde{y}\,|/|y| \le ku\),
where u is the unit round-off (or machine precision) and \(k>0\) is a constant that does not depend on the arithmetic precision. Algorithms avoiding inaccurate cancellations can be performed to HRA (see page 52 in [11]). Such algorithms are said to satisfy the non-inaccurate cancellation (NIC) condition: they only compute multiplications, divisions, and additions of numbers with the same sign. Moreover, if the floating-point arithmetic is well implemented, the subtraction of initial data can also be allowed without losing HRA (see page 53 in [11]).
Nowadays, bidiagonal factorizations are very useful for deriving accurate algorithms to perform computations with TP matrices. In fact, the parameterization of TP matrices leading to HRA algorithms is provided by their bidiagonal factorization, which is in turn very closely related to the Neville elimination, NE hereafter (cf. [16,17,18]).
The essence of the NE is to obtain, from a given matrix \( A=(a_{i,j})_{1\le i,j\le n+1}\), an upper triangular matrix by adding to each row a multiple of the previous one. In particular, the NE of A consists of n major steps defining matrices \(A^{(1)}:=A\) and \( A^{(r)} =(a_{i,j}^{(r)})_{1\le i,j\le n+1}\), such that,
\(r=2,\ldots , n+1\), so that \(U:=A^{(n+1)} \) is upper triangular. In more detail, \(A^{(r+1)} \) is computed from \(A^{(r)} \) as follows
The entry
is the (i, j) pivot and \(p_{i,i}\) is called the i-th diagonal pivot of the NE of the matrix A. Furthermore, the value
for \(1\le j < i\le n+1\), is called the (i, j) multiplier of the NE of A. The complete Neville elimination (CNE) of a matrix A can be performed whenever no row swaps are needed in the NE of the matrices A and \(U^T\). In this case, the multipliers of the CNE of A are the multipliers of the NE of A if \(i \ge j\) and the multipliers of the NE of \(A^T\) if \(j \ge i\) (see [18]).
The total positivity of a matrix cannot be immediately deduced from its definition. However, the following result, derived from Corollary 5.5 of [16] and the reasoning on p. 116 of [18], shows that NE characterizes the class of STP and nonsingular TP matrices.
Theorem 1
A given matrix A is STP (respectively, nonsingular TP) if and only if its CNE can be performed with no row and column swaps, the diagonal pivots of the NE of A are positive and the multipliers of the NE of A and \(A^T\) are positive (respectively, nonnegative).
In fact, total positivity of \(A\in \mathbb R^{(n+1)\times (n+1)}\) can be studied by analyzing a bidiagonal factorization of the matrix. In this sense, by Theorem 4.2 and the arguments of p. 116 of [18], a nonsingular TP matrix A admits a factorization of the form
where \(F_i\in \mathbb R^{(n+1)\times (n+1)}\) (respectively, \(G_i\in \mathbb R^{(n+1)\times (n+1)}\)) is the TP, lower (respectively, upper) triangular bidiagonal matrix given by
and \(D=\textrm{diag}\left( p_{1,1},\ldots , p_{n+1,n+1}\right) \) has positive diagonal entries. The diagonal elements of D are the positive diagonal pivots of the NE of the matrix A, and the entries \(m_{i,j}\) and \(\widetilde{m}_{i,j}\) are the multipliers of the NE of the matrices A and \(A^T\), respectively. Under certain conditions, the factorization (6) is unique, and in [2], more general classes of matrices satisfying the bidiagonal factorization were obtained.
Remark 1
The NE of a nonsingular and TP matrix A also provides a bidiagonal factorization of the matrix \(A^{-1}\). In fact, by considering (6), a bidiagonal decomposition for \(A^{-1}\) can be computed as follows:
where \(\widehat{F}_i\) (respectively, \(\widehat{G}_i\)) is the lower (respectively, upper) triangular bidiagonal matrix with the form described by (7), which is obtained by replacing the off-diagonal entries \(\{m_{i+1,1},\ldots , m_{n+1,n+1-i}\} \) and \(\{\widetilde{m}_{i+1,1},\ldots , \widetilde{m}_{n+1,n+1-i}\} \) by the values \(\{-m_{i+1,i},\ldots ,-m_{n+1,i} \}\) and \(\{-\widetilde{m}_{i+1,i},\ldots ,- \widetilde{m}_{n+1,i}\}\), respectively (see Theorem 2.2 of [27]), that is
In the sequel, we shall use the matrix notation introduced in [19], allowing us to store the bidiagonal factorization (6) of A, as well as the bidiagonal factorization (8) of \(A^{-1}\), by means of a matrix \(BD(A)=(BD(A)_{i,j})_{1\le i,j\le n+1}\), whose diagonal entries are the diagonal pivots of the NE of A and the entries above and below its diagonal are the multipliers of the NE of \(A^T\) and A, that is,
If the pivots and multipliers, and so BD(A), are given to HRA, then the Matlab functions TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve available in the software library TNTools in [21] take as input argument BD(A) and compute to HRA the eigenvalues and singular values of A, the inverse matrix \(A^{-1}\) (using the algorithm presented in [27]) and even the solution of linear systems \(Ax=b\), for vectors b with alternating signs.
The following result provides conditions that guarantee that a given matrix is the inverse of a TP matrix. Under these conditions, the resolution of algebraic problems with non-TP matrices can also be performed to HRA.
Theorem 2
Let \(A\in \mathbb {R}^{(n+1)\times (n+1)} \) and \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\). If the CNE of the matrix A can be performed with no row and column swaps, the diagonal pivots of the NE of A are positive and the multipliers of the NE of A and \(A^T\) are nonpositive, then A is the inverse of a TP matrix and the matrix \(A_J:=JAJ\) is nonsingular TP.
Moreover, if the computation of the mentioned diagonal pivots and multipliers satisfies the NIC condition, the eigenvalues and singular values of A, its inverse matrix \(A^{-1}\), as well as the solution of \(A{ x}= { b}\), where the entries of the vector \({ b} = (b_1, \ldots , b_{n+1})^T\) have the same sign, can be obtained to HRA.
Proof
Under the considered hypotheses,
where the matrices \(F_i\) (respectively, \(G_i\)), \(i=1,\ldots ,n\), are lower (respectively, upper) triangular bidiagonal, and have the structure described by (7). The diagonal elements of D are positive and the off-diagonal entries \(m_{i,j}\) and \(\widetilde{m}_{i,j}\) of the bidiagonal factors are nonpositive.
Let us notice that \(A^{-1}\) can be factorized as in (8) and the bidiagonal matrices \(\widehat{F}_i\) and \(\widehat{G}_i\) are TP since \(-m_{i,j}\ge 0\) and \(-\widetilde{m}_{i,j}\ge 0\). Consequently, we can deduce that \(A^{-1}\) is a TP matrix because it is the product of TP matrices. So, using Theorem 3.3 of [1], we can derive that \(A_J\) is TP. In fact,
has nonnegative entries.
If the computation of \(p_{i,i}\), \(m_{i,j}\) and \(\widetilde{m}_{i,j}\) satisfies the NIC condition, (11) can be computed to HRA. This fact guarantees that the eigenvalues and singular values of \(A_J\), the inverse matrix \(A_J^{-1}\) and the solution of \(A_J { x}= { d}\), where the entries of the vector \({ d} = (d_1, \ldots , d_{n+1})^T\) have alternating signs, can also be obtained to HRA (see Section 3 of [12]).
Since J is a unitary matrix, the eigenvalues and singular values of A coincide with those of \(A_J\), and therefore, using (11), their computation to HRA can also be achieved.
For the accurate computation of the inverse matrix \(A^{-1}\), recall that \(A_J^{-1} \) can be computed to HRA. Since \( A^{-1}=JA_J^{-1}J\), by means of an appropriate change of sign of the entries of \(A_J^{-1}\), we can also compute the matrix \(A^{-1}\) to HRA. Finally, for the linear system \(A{ x}= { b}\), since the entries of \(J{ b}\) have alternating signs, we can compute to HRA the solution \({ y}\in \mathbb R^{n+1}\) of the system \(A_J { y}= J { b} \), and then \({ x}=J{y}\).
\(\square \)
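The mechanism of Theorem 2 can be illustrated numerically on a toy example (the matrix below, the inverse of a 3x3 Pascal matrix, is chosen only for illustration, and the eigenvalue and linear-system computations are delegated to standard NumPy routines rather than to HRA algorithms):

```python
import numpy as np

n = 3
J = np.diag([(-1) ** i for i in range(n)])               # J = diag((-1)^{i-1})
P = np.array([[1., 1., 1.], [1., 2., 3.], [1., 3., 6.]])  # TP Pascal matrix
A = np.linalg.inv(P)       # A is the inverse of a TP matrix
A_J = J @ A @ J            # sign-flipped matrix: entrywise nonnegative, TP

# Eigenvalues and singular values are preserved by the J-similarity,
# and A x = b (b of one sign) reduces to A_J y = J b with x = J y.
b = np.array([1.0, 2.0, 3.0])
y = np.linalg.solve(A_J, J @ b)    # J b has alternating signs
x = J @ y                          # x solves A x = b
```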
3 Matrix conversion between Newton and monomial bases
The Lagrange formula of the polynomial interpolant p of a function f at nodes \(t_0,\ldots , t_n\) such that \(t_i\ne t_j\) for \(i\ne j\), is obtained when the interpolant p is expressed in terms of the Lagrange basis \((\ell _0,\ldots , \ell _n)\) of the space \({\textbf{P}}^n\) of polynomials of degree not greater than n,
Denote \(f_i:=f(t_i)\) for \(i=0,\ldots ,n\). The polynomial p can also be written in terms of the monomial basis \((m_0,\ldots , m_n)\) of \({\textbf{P}}^n\),
and, in this representation, the coefficients are the solution of the linear system
where \({ c}=(c_0, \ldots , c_n)^T \) and \({ f}=(f_0, \ldots , f_n)^T \). Then, we can write
where \(V:= M \left[ \begin{array}{c} m_0, \ldots , m_n \\ t_0,\ldots , t_{n} \end{array} \right] \) is the Vandermonde matrix at the nodes \(t_0,\ldots , t_n\). Taking into account (12), (13) and (14), we have
and deduce that
This means that the Vandermonde matrix V is the change of basis matrix between the monomial and the Lagrange basis of the polynomial space \({\textbf{P}}^n\).
This section is devoted to achieving HRA computations for algebraic problems related to the change of basis matrix between the monomial basis and the Newton basis corresponding to not necessarily distinct nodes.
Given nodes \(t_0\le \cdots \le t_n\), the Newton form of the polynomial interpolant p of a function f at \(t_0,\ldots , t_n\) is obtained when p is written as follows:
where \([t_0,\ldots ,t_i ]f\) denotes the divided difference of f at nodes \(t_0,\ldots ,t_i\) and
The polynomial basis \((w_0,\ldots , w_n)\) is the Newton basis of the space \({\textbf{P}}^n\) determined by the nodes \(t_0,\ldots , t_{n-1}\). The Newton polynomial (16) is sometimes called Newton’s divided differences interpolation polynomial because its coefficients can be obtained using Newton’s divided differences method.
Let us recall that, if f is n times continuously differentiable on \([t_0,t_n]\), the divided differences \([t_0,\ldots ,t_i ]f \), \(i=0,\ldots ,n\), can be obtained using the following recursion
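For distinct nodes, this recursion can be implemented in place; the sketch below (an illustrative helper of ours, assuming distinct nodes, since coincident nodes would require derivative values) returns all the coefficients needed in the Newton form (16):

```python
def divided_differences(ts, fs):
    """Return [t_0]f, [t_0,t_1]f, ..., [t_0,...,t_n]f, computed by the
    classical recursion dividing differences of consecutive divided
    differences by the spread of their nodes.  Distinct nodes assumed."""
    d = [float(f) for f in fs]
    n = len(ts)
    for k in range(1, n):                    # order k differences
        for i in range(n - 1, k - 1, -1):    # update in place, bottom up
            d[i] = (d[i] - d[i - 1]) / (ts[i] - ts[i - k])
    return d
```

For example, for \(f(t)=t^2\) at the nodes 0, 1, 2, the returned coefficients are 0, 1, 1.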
Moreover, given two functions f and g defined on an interval containing the nodes \(t_0,\ldots ,t_n\), the following Leibniz-type formula for divided differences is satisfied
This formula has played a relevant role in deriving recurrence relations for B-spline functions (cf. [3]).
Since \(m_i(t)=t^i\), \(i=0,\ldots ,n\), coincides with its interpolant at \(t_0,\ldots , t_n\), taking into account the Newton formula (16) for \(m_i\), \(i=0,\ldots ,n\), we deduce that the change of basis matrix U, satisfying
is upper triangular. In addition, \(U=(u_{i,j})_{1\le i,j \le n+1}\) with \( u_{i,j}= [t_0,\ldots , t_{i-1}]m_{j-1}\), that is,
Let us notice that, by induction, it can be checked that
On the other hand, the collocation matrix of the Newton basis \((w_0,\ldots ,w_n)\) at nodes \(t_0,\ldots ,t_n\) is a lower triangular matrix \(L=(l_{i,j})_{1\le i,j\le n+1}\) whose entries
satisfy the following recurrences
Moreover, taking into account (15) and (19), we obtain the following Crout factorization of Vandermonde matrices
This factorization can be used to solve linear Vandermonde systems \(Vx=f\) by considering the systems \(Ld=f\) and \(Ux=d\). Note that, in Lagrange interpolation problems, the vectors \(d:=(d_1,\ldots ,d_{n+1})^T\) and \(f:=(f_1,\ldots ,f_{n+1})^T\) with \(d_i:=[t_0,\ldots ,t_{i-1}]f\) and \(f_i:=f(t_{i-1})\), \(i=1,\ldots ,n+1\), are related by
So, the matrix U relates the vector solution x with an intermediate vector d of divided differences (see [5]).
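The Crout factorization above can be verified numerically. In the sketch below (illustrative code of ours, with distinct nodes), U is generated column by column as divided differences of the monomials and L as the collocation matrix of the Newton basis:

```python
import numpy as np

def dd_all(ts, fs):
    """All divided differences [t_0,...,t_i]f, i = 0..n (distinct nodes)."""
    d = np.array(fs, dtype=float)
    n = len(ts)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (ts[i] - ts[i - k])
    return d

ts = np.array([1.0, 2.0, 4.0, 5.0])   # arbitrary distinct nodes
n = len(ts)
# u_{i,j} = [t_0,...,t_{i-1}] m_{j-1}: column j holds the divided
# differences of the monomial t^{j-1}.
U = np.column_stack([dd_all(ts, ts ** j) for j in range(n)])
# l_{i,j} = w_{j-1}(t_{i-1}) = prod_{k<j-1} (t_{i-1} - t_k).
L = np.array([[np.prod(ts[i] - ts[:j]) for j in range(n)]
              for i in range(n)])
V = np.vander(ts, increasing=True)    # Vandermonde matrix at the nodes
```

One checks that \(V = LU\), with L lower triangular and U upper triangular.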
The following result provides the pivots and multipliers of the NE of the matrix U in (20) and of its inverse \(U^{-1}\), yielding their decomposition (6). Their total positivity will be analyzed afterwards.
Theorem 3
Let U be the change of basis matrix between the monomial and the Newton basis (17). Then,
where \(G_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices whose structure is described by (7) and their off-diagonal entries are
Moreover,
where \( \widehat{G}_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries
Proof
Let us define the lower triangular matrix \(L:=U^T\). Clearly, \(L=(l_{i,j} )_{1\le i,j\le n+1}\) with
Now, let \(L^{(1)}:=L\) and, for \(r=2,\ldots , n+1\), let \(L^{(r)} =(l_{ij}^{(r)})_{1\le i,j \le n+1}\) be the matrix obtained after \(r-1\) steps of the NE of L. By induction on r, we shall deduce that
First, taking into account formula (26), identities (27) clearly hold for \(r=1\). If (27) holds for some \(r\in \{1,\ldots ,n\}\), we have that
Since \(l_{i,j}^{(r+1)}=l_{i,j}^{(r)}- \left( {l_{i,r}^{(r)}}/{l_{i-1,r}^{(r)}} \right) l_{i-1,j}^{(r)}\), taking into account (27), (28) and applying the Leibniz-type formula for divided differences (18) to \(m_j(t)=t m_{j-1}(t)\), we can write
corresponding to the identity (27) for \(r+1\). Now, from (4) and (27), the pivots \(p_{i,j}\) of the NE of L satisfy
Consequently, the diagonal pivots are \(p_{i,i}=1\), \(i=1,\ldots ,n+1\), and the multipliers satisfy
Then,
and the off-diagonal elements \(m_{i,j}\) of \(F_{i}\), \(i=1,\ldots ,n\), are given by (30). Taking into account that \(U=L^T\), we have
and defining \( G_i:= F_{i}^T\), \(i=1,\ldots ,n\), the factorization (22) for U is obtained. Taking into account (30), formula (23) for the off-diagonal entries \(\widetilde{m}_{i,j}\) is confirmed. Finally, taking into account Remark 1, the factorization (24) for \(U^{-1}\) can be deduced.
\(\square \)
The provided factorization (6) for the matrices U in (20), as well as the factorization (8) of \(U^{-1}\), can be stored by defining \(BD(U)=(BD(U)_{i,j})_{1\le i,j\le n+1}\) with
Taking into account the factorization derived in Theorem 3, computations to HRA with the matrices U and \(U^{-1}\) can be deduced when the interpolation nodes do not change their sign.
Corollary 4
The matrix U in (20) (respectively, \(U^{-1}\)) is TP if and only if the nodes of the Newton basis (17) satisfy \( t_i\ge 0\) (respectively, \( t_i\le 0\)), \(i=0,\ldots ,n-1\).
Moreover, if U (respectively, \(U^{-1}\)) is TP, its bidiagonal factorization (22) (respectively, (24)) can be computed to HRA. Consequently, the eigenvalues and singular values of U (respectively, \(U^{-1}\)), as well as the solution of the linear systems \(U{ x}= { b}\) (respectively, \(U^{-1}{ x}= { b}\)), where the entries of \({ b} = (b_0, \ldots , b_n)^T\) have alternating signs, can be obtained to HRA.
Proof
If U (respectively, \(U^{-1}\)) is TP, by Theorem 1, we can guarantee that its CNE can be performed with no row and column swaps, and the multipliers are nonnegative. From Theorem 3, the decomposition (6) of U (respectively, \(U^{-1}\)) is given in (22) (respectively, in (24)), and we deduce that the nodes are nonnegative (respectively, nonpositive).
If \(t_i\ge 0\) (respectively, \(t_i\le 0\)), \(i=0,\ldots ,n-1\), the off-diagonal entries of the bidiagonal matrix factors in (22) (respectively, in (24)) are nonnegative. Then, we can derive that the bidiagonal matrix factors are TP and conclude that U (respectively, \(U^{-1}\)) is TP since it is the product of TP matrices. In addition, the computation of the bidiagonal factorization (6) satisfies the NIC condition, and so, it can be computed to HRA. This fact guarantees the computation of the mentioned algebraic problems to HRA (see Section 3 of [12]).
\(\square \)
Furthermore, using Theorems 2 and 3, HRA computations with the matrix U can also be obtained when considering the matrices JUJ and \(JU^{-1}J\).
Corollary 5
Let U be the matrix in (20). Then, JUJ (respectively, \(JU^{-1}J\)) is TP if and only if the nodes of the Newton basis (17) satisfy \( t_i\le 0\) (respectively, \( t_i\ge 0\)), \(i=0,\ldots ,n-1\).
Moreover, if JUJ (respectively, \(JU^{-1}J\)) is TP, its decomposition (22) (respectively, (24)) can be computed to HRA. Consequently, the eigenvalues and singular values of U (respectively, of \(U^{-1}\)), as well as the solution of the linear systems \(U{ x}= { b}\) (respectively, \(U^{-1}{ x}= { b}\)), where the entries of \({ b} = (b_0, \ldots , b_n)^T\) have the same sign, can be obtained to HRA.
4 Accurate computations with Wronskian matrices of Newton bases
Let us recall that Corollary 1 of [23] provides the following factorization (6) of \(W:=W(m_{0},\ldots ,m_{n})(t)\), the Wronskian matrix of the monomial basis \((m_{0},\ldots ,m_{n})\) in (13),
where \(D=\text {diag}\{0!,1!,\dots ,n!\}\) and \(G_{i }\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries
For the matrix representation BD(W) of (33), we have
Clearly, the computation of BD(W) satisfies the NIC condition, and therefore, this matrix can be computed to HRA. In addition, taking into consideration the sign of the entries of BD(W), one can derive that the Wronskian matrix of the monomial basis is TP for any \(t>0\). In [23], using (33), computations to HRA when solving algebraic problems related to \(W(m_{0},\ldots ,m_{n})(t)\), \(t>0\), have been achieved.
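For illustration, the Wronskian of the monomial basis and its total positivity for \(t>0\) can be checked numerically (a brute-force sketch of ours over all minors, feasible only for small n):

```python
import numpy as np
from math import factorial
from itertools import combinations

def wronskian_monomials(n, t):
    """(n+1)x(n+1) Wronskian of (1, t, ..., t^n): entry (i, j) is the
    i-th derivative of t^j, i.e. j!/(j-i)! t^(j-i) for j >= i, else 0."""
    W = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(i, n + 1):
            W[i, j] = factorial(j) / factorial(j - i) * t ** (j - i)
    return W

W = wronskian_monomials(3, 1.0)
# Brute-force TP check: every square minor must be nonnegative.
minors_ok = all(
    np.linalg.det(W[np.ix_(r, c)]) > -1e-9
    for k in range(1, 5)
    for r in combinations(range(4), k)
    for c in combinations(range(4), k)
)
```

The diagonal of W recovers the factorials \(0!,\ldots ,n!\), in agreement with the diagonal matrix D above.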
For \(t<0\), taking into account (34), we clearly see that the hypotheses of Theorem 2 hold and deduce that HRA computations can also be obtained when considering the Wronskian matrix \(W(m_{0},\ldots ,m_{n})(t)\).
Corollary 6
Let \(W:=W(m_{0},\ldots ,m_{n})(t)\) be the Wronskian matrix of the monomial polynomial basis in (13) and \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\). Given \(t<0\), the matrix \(W_J:=J W J\) is TP and its bidiagonal factorization (6) is
where \(D=\text {diag}\{0!,1!,\dots ,n!\}\) and \( \widehat{G}_{i}\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries
The bidiagonal decomposition (35) can be computed to HRA. Consequently, the eigenvalues and singular values of W, the matrix \(W^{-1}\), as well as the solution of linear systems \(W{ x}= { d}\), where the entries of d have the same sign, can be obtained to HRA.
Taking into account the factorizations obtained for \(W(m_{0},\ldots ,m_{n})(t)\) and for the change of basis matrix between the monomial and the Newton bases, the total positivity of Wronskian matrices of the Newton basis can be analyzed, and their bidiagonal factorization can be derived.
Theorem 7
Let \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\) and \(W:=W(w_{0},\ldots ,w_{n})(t)\) be the Wronskian matrix of the Newton basis \((w_{0},\ldots ,w_{n})\) defined in (17). Then,
(a) If \(t_i\le 0\), \(i=0,\ldots ,n-1\), W is TP for any \(t > 0\) and its factorization (6) can be computed to HRA.
(b) If \(t_i \ge 0\), \(i=0,\ldots ,n-1\), \(W_J:=JW J\) is TP for any \(t< 0\) and its factorization (6) can be computed to HRA.
Proof
Let \((m_0,\ldots ,m_n)\) be the monomial basis of \({\textbf{P}}^n\) and U be the change of basis matrix such that \((m_0,\ldots ,m_n)=(w_0,\ldots ,w_n) {U}\) (see (20)). Then,
If \(t_i\le 0\), \(i=0,\ldots ,n-1\), by Corollary 4, \(U^{-1}\) is TP and its decomposition (24) can be computed to HRA. If \(t>0\), by Corollary 1 of [23], \(W(m_{0},\ldots ,m_{n})(t)\) is TP. Taking into account that the product of TP matrices is TP, we deduce that W is TP and its factorization (6) can also be computed to HRA (see (34) for \(BD(W(m_{0},\ldots ,m_{n}))\)).
If \(t_i\ge 0\), \(i=0,\ldots ,n-1\), using Corollary 5, we deduce that \(JU^{-1} J\) is TP and its factorization (24) can be obtained to HRA. Moreover, if \(t<0\), using Corollary 6, we deduce that \(JW(m_0,\ldots ,m_n)(t)J\) is TP and its bidiagonal factorization (6) can be computed to HRA. Since \(J=J^{-1}\), from (36), we can write
and deduce that JWJ is TP because it can be written as the product of TP matrices.
Using Algorithm 5.1 of [20], if the decomposition (6) of two nonsingular TP matrices is provided to HRA, then the decomposition (6) of the product is computed to HRA. Consequently, the decomposition (6) of W for the case \(t_i\le 0\), \(i=0,\ldots ,n-1\), and \(t > 0\) as well as the decomposition (6) of \(W_J\) for the case \(t_i\ge 0\), \(i=0,\ldots ,n-1\) and \(t < 0\) can be obtained to HRA.
\(\square \)
5 Applications to Stirling matrices
Stirling numbers of the first kind arise in combinatorics when analyzing permutations. They can be seen as the coefficients s(n, k), \(n,k\in \mathbb {N}\cup \{0\}\), \(k\le n\), in the expansion of the falling factorial, defined as
in terms of powers of the variable x, that is,
First kind Stirling numbers can be computed using the relation
with
Since \(sign(s(n,k))=(-1)^{n-k}\), Stirling numbers of the first kind are also called signed Stirling numbers. The absolute values of the first kind Stirling numbers are known as unsigned Stirling numbers and are usually denoted by c(n, k) or \(\begin{bmatrix}n\\ k\end{bmatrix}\). These numbers satisfy
and can be seen as the coefficients in the expansion of the rising factorial:
Unsigned Stirling numbers can be computed using the relation
with
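The recurrence for the unsigned numbers can be sketched as follows (the classical initial conditions \(c(0,0)=1\) and \(c(n,k)=0\) outside \(0\le k\le n\) are assumed; the function name is ours):

```python
def stirling1_unsigned(n, k):
    """c(n, k) via the recurrence c(n, k) = (n-1) c(n-1, k) + c(n-1, k-1),
    with c(0, 0) = 1 and c(n, k) = 0 outside 0 <= k <= n."""
    if not 0 <= k <= n:
        return 0
    c = [[0] * (n + 1) for _ in range(n + 1)]
    c[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            c[i][j] = (i - 1) * c[i - 1][j] + c[i - 1][j - 1]
    return c[n][k]
```

As a sanity check, \(\sum _k c(n,k)=n!\), since the numbers count permutations of n elements by number of cycles.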
The Stirling numbers of the second kind, denoted by \({\displaystyle S(n,k)}\) or \(\begin{Bmatrix}n\\k\end{Bmatrix}\), count the number of partitions of a set of size n into k disjoint non-empty subsets, and can also be characterized as the coefficients arising when powers of an indeterminate x are expressed in terms of the falling factorials (38), that is,
Second kind Stirling numbers can be computed using the relation
with
or by the explicit formula
Let us observe that, when considering the nodes \(t_i:=i\) for \(i=0,\ldots ,n-1\), the corresponding Newton basis (17) satisfies
and, taking into account (44), Stirling numbers of the second kind can be seen as divided differences of monomials with respect to the set of nodes formed by the first consecutive nonnegative integers. In particular,
where \(m_i(t)=t^i\), \(i=0,\ldots ,n\).
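This divided-difference characterization can be verified directly (an illustrative sketch; `stirling2_dd` is our name, and the distinct integer nodes make the classical recursion applicable):

```python
def divided_difference(ts, fs):
    """[t_0,...,t_m]f via the classical recursion (distinct nodes)."""
    d = [float(f) for f in fs]
    m = len(ts)
    for k in range(1, m):
        for i in range(m - 1, k - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (ts[i] - ts[i - k])
    return d[-1]

def stirling2_dd(n, k):
    """S(n, k) as the divided difference of m_n(t) = t^n
    at the nodes 0, 1, ..., k."""
    ts = list(range(k + 1))
    return round(divided_difference(ts, [t ** n for t in ts]))
```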
Moreover, the change of basis matrix U between the monomial and the Newton basis corresponding to \(t_i:=i\), \(i=0,\ldots , n-1\), and satisfying (19), is upper triangular and \(U=(u_{i,j})_{1\le i,j \le n+1}\), with
We shall say that the matrix U, whose entries are given in (47), is the \((n+1)\times (n+1)\) second kind Stirling matrix. As a direct consequence of Theorem 3 and Corollary 4, we can deduce a bidiagonal decomposition providing HRA computations with second kind Stirling matrices.
Theorem 8
The second kind Stirling matrix U described by (47) is TP and admits the following decomposition
where \(G_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries
Moreover, the decomposition (48) can be computed to HRA. Consequently, the eigenvalues and singular values of U, as well as the solution of the linear systems \(U{ x}= { b}\), where the entries of b have alternating signs, can be obtained to HRA.
Furthermore, using (39), we can also deduce that the inverse of the second kind Stirling matrix U described by (47) is the upper triangular \(\widetilde{U}:=U^{-1}=(\widetilde{u}_{i,j})_{1\le i,j \le n+1}\) such that
where s(n, k) denotes the corresponding (signed) first kind Stirling number provided by (40). We shall say that this matrix is the \((n+1)\times (n+1)\) signed Stirling matrix. Using Theorems 3 and 2, we have the following result.
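The inverse relation between the two Stirling matrices can be checked numerically. The sketch below assumes the natural indexing \(u_{i,j}=S(j-1,i-1)\) and \(\widetilde{u}_{i,j}=s(j-1,i-1)\), which follows from the divided-difference description of the entries of U:

```python
import numpy as np

def stirling2(n, k):
    """S(n, k) via S(n, k) = k S(n-1, k) + S(n-1, k-1)."""
    if not 0 <= k <= n:
        return 0
    S = [[0] * (n + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

def stirling1_signed(n, k):
    """s(n, k) via s(n, k) = s(n-1, k-1) - (n-1) s(n-1, k)."""
    if not 0 <= k <= n:
        return 0
    s = [[0] * (n + 1) for _ in range(n + 1)]
    s[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            s[i][j] = s[i - 1][j - 1] - (i - 1) * s[i - 1][j]
    return s[n][k]

N = 6
U = np.array([[stirling2(j, i) for j in range(N)] for i in range(N)])
Ut = np.array([[stirling1_signed(j, i) for j in range(N)] for i in range(N)])
# U and Ut are upper triangular and mutually inverse.
```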
Theorem 9
Let \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\) and \(\widetilde{U}\) be the signed Stirling matrix described by (50). Then, \(J\widetilde{U}J\) is TP and admits the following factorization
where \(G_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries
Moreover, the decomposition (51) can be computed to HRA. Consequently, the eigenvalues and singular values of \(\widetilde{U}\), as well as the solution of the linear systems \(\widetilde{U}{ x}= { b}\), where the entries of b have the same sign, can be obtained to HRA.
Finally, let us observe that the matrix \(\widehat{U}:=J\widetilde{U}J= (\widehat{u}_{i,j})_{1\le i,j \le n+1} \) in Theorem 9 satisfies
where \(\begin{bmatrix}n\\ k\end{bmatrix}\) denotes the unsigned Stirling numbers of the first kind that can be computed by the recurrence relation (43). We shall say that this matrix is the \((n+1)\times (n+1)\) unsigned Stirling matrix. Then, we have the following result.
Corollary 10
The unsigned Stirling matrix \(\widehat{U} \) described by (52) is TP and admits the following decomposition
where \( {G}_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices of the form (7), whose off-diagonal entries are
Moreover, the decomposition (53) can be computed to HRA. Consequently, eigenvalues and singular values of \(\widehat{U}\), as well as the solution of the linear systems \(\widehat{U}{ x}= { b}\), where the entries of b have alternating signs, can be obtained to HRA.
In order to conclude this section, let us notice that bidiagonal factorizations of matrices formed by other types of Stirling numbers, such as Jacobi-Stirling or q-Stirling numbers, can be found in [10] and [9], respectively.
6 Total positivity and HRA computations with Touchard bases
The Touchard polynomials are also called the exponential polynomials or Bell polynomials and comprise a polynomial sequence defined by
where \(S(n,k)=\begin{Bmatrix}n\\k\end{Bmatrix}\) is the Stirling number of the second kind in (44) (cf. [33]). We shall say that the basis \((T_0,\ldots , T_n)\) of \({\textbf{P}}^n\) is the \((n+1)\)-dimensional Touchard basis.
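The Touchard polynomials are straightforward to evaluate from the second-kind Stirling recurrence (an illustrative sketch with our helper names; \(T_n(1)\) recovers the Bell numbers 1, 1, 2, 5, 15, ...):

```python
def stirling2_row(n):
    """Row (S(n,0), ..., S(n,n)) via S(n,k) = k S(n-1,k) + S(n-1,k-1)."""
    row = [1]
    for i in range(1, n + 1):
        new = [0] * (i + 1)
        for k in range(1, i + 1):
            new[k] = row[k - 1] + (k * row[k] if k < len(row) else 0)
        row = new
    return row

def touchard(n, x):
    """Evaluate the Touchard polynomial T_n(x) = sum_k S(n,k) x^k."""
    return sum(S * x ** k for k, S in enumerate(stirling2_row(n)))
```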
We clearly have
where \((m_{0},\ldots ,m_{n})\) is the monomial basis of \({\textbf{P}}^n\) and \(U=(u_{i,j})_{1\le i,j\le n+1}\) is the \((n+1)\times (n+1)\) second kind Stirling matrix, that is,
(see (47)). Let us recall that the monomial basis \((m_{0},\ldots ,m_{n})\) of \({\textbf{P}}^n\) is STP on \((0,+\infty )\) and so, given \(0<t_0< \cdots <t_{n}\), the corresponding collocation matrix
is STP (see Section 3 of [19]); V is the Vandermonde matrix at the considered nodes.
Taking into account the total positivity of Vandermonde matrices at increasing positive nodes and the total positivity of the second kind Stirling matrices, we can deduce the total positivity of Touchard bases, as well as factorizations providing computations to HRA for their collocation matrices.
Theorem 11
The basis \((T_{0},\ldots ,T_{n})\) of Touchard polynomials defined in (54) is STP on \((0,\infty )\). Moreover, given \(0< t_{0}< \cdots < t_{n }\), the collocation matrix
and its bidiagonal factorization (6) can be computed to HRA.
Proof
Given \(0< t_{0}< \cdots < t_{n }\), by formula (55), the collocation matrix (58) of the Touchard basis satisfies
where V is the Vandermonde matrix (57) and U is the \((n+1)\times (n+1)\) second kind Stirling matrix described by (56).
It is well known that V is STP for \(0< t_{0}< \cdots < t_{n}\) and that its decomposition (6) can be computed to HRA (see [19] or Theorem 3 of [22]). By Theorem 8, U is a nonsingular TP matrix, and its decomposition (6) can also be computed to HRA. Therefore, T is an STP matrix, since it is the product of an STP matrix and a nonsingular TP matrix (see Theorem 3.1 of [1]). Moreover, by Algorithm 5.1 of [20], if the decomposition (6) of two nonsingular TP matrices is provided to HRA, then the decomposition of their product can be obtained to HRA. Consequently, T and its decomposition (6) can be obtained to HRA.
\(\square \)
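The identity \(T=V\cdot U\) used in the proof can be checked directly in exact rational arithmetic. The sketch below assumes the indexing \(u_{i,j}=S(j-1,i-1)\), so that column \(j\) of U stores the monomial coefficients of \(T_{j-1}\); this convention is our reading of (55)-(56), and the function names are ours:

```python
from fractions import Fraction

def stirling2(N):
    """Table S[n][k] of Stirling numbers of the second kind."""
    S = [[0] * (N + 1) for _ in range(N + 1)]
    S[0][0] = 1
    for n in range(1, N + 1):
        for k in range(1, n + 1):
            S[n][k] = S[n - 1][k - 1] + k * S[n - 1][k]
    return S

def collocation_touchard(nodes):
    """Collocation matrix T = V * U with T[i][j] = T_j(t_i):
    V is the Vandermonde matrix at the nodes and u_{i,j} = S(j-1, i-1)
    (hypothetical indexing: column j of U holds the monomial
    coefficients of the Touchard polynomial T_{j-1})."""
    n = len(nodes) - 1
    S = stirling2(n)
    V = [[Fraction(t) ** j for j in range(n + 1)] for t in nodes]
    U = [[Fraction(S[j][i]) for j in range(n + 1)] for i in range(n + 1)]
    return [[sum(V[r][k] * U[k][c] for k in range(n + 1))
             for c in range(n + 1)] for r in range(n + 1)]
```

The entries of the product are indeed the values \(T_{j}(t_i)\); for instance, with nodes \(1,2,3\), the last column holds \(T_2(t)=t+t^2\) evaluated at the nodes.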
Now, we can also analyze the total positivity of Wronskian matrices of Touchard bases.
Theorem 12
Let \((T_{0},\ldots ,T_{n})\) be the Touchard polynomial basis in (54). For any \(t> 0\), the Wronskian matrix \(W:=W(T_{0},\ldots ,T_{n})(t)\) is nonsingular and TP. Furthermore, W and its bidiagonal decomposition (6) can be computed to HRA.
Proof
Using formula (55), it can be checked that
Following the reasoning in the proof of Theorem 11, the result readily follows.
\(\square \)
7 Numerical experiments
To aid the understanding of the numerical experiments carried out, we provide the pseudocode of several algorithms. First, using Theorem 8, we present Algorithm 1 for computing to HRA the matrix form BD(U) (48) of the bidiagonal decomposition of the second kind Stirling matrix U in (47). We also provide the pseudocode of Algorithms 2 and 3 for computing to HRA the matrix form (9) of the bidiagonal decomposition of the collocation and Wronskian matrices of Touchard bases. Taking into account (59), Algorithm 2 requires BD(U) and the bidiagonal decomposition of the Vandermonde matrix implemented in the Matlab function TNVandBD available in [21]. In addition, following (60), Algorithm 3 requires BD(U) and the bidiagonal decomposition (34) of the Wronskian matrix of the monomial basis. Finally, let us observe that both algorithms call the Matlab function TNProduct available in [21]: given \(A=BD(F)\) and \(B =BD(G)\) to HRA, TNProduct(A,B) computes \( BD(F\cdot G)\) to HRA. The computational cost of the mentioned function and algorithms is \(O(n^3)\) arithmetic operations.
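The multipliers and pivots stored in the matrix form BD(·) of a bidiagonal decomposition are the quantities produced by Neville elimination, which creates zeros column by column using consecutive rows (see [16, 17]). The following Python sketch (illustrative only; names are ours) computes the multipliers of the lower part together with the diagonal pivots; the upper part of BD(·) would be obtained analogously from the transpose. For a strictly totally positive matrix, such as a Vandermonde matrix at increasing positive nodes, all these quantities are positive:

```python
from fractions import Fraction

def neville_multipliers(A):
    """Neville elimination of the lower part of A, zeroing entries
    below the diagonal with consecutive-row operations.  Returns the
    multipliers (in elimination order) and the diagonal pivots; for a
    strictly totally positive matrix all of them are positive, and
    they are the data stored in the matrix form BD(A)."""
    A = [[Fraction(x) for x in row] for row in A]
    n = len(A)
    mult = []
    for j in range(n - 1):
        for i in range(n - 1, j, -1):
            m = A[i][j] / A[i - 1][j]
            A[i] = [a - m * b for a, b in zip(A[i], A[i - 1])]
            mult.append(m)
    return mult, [A[i][i] for i in range(n)]
```

Exact rational arithmetic is used here only to make the illustration transparent; the point of the bidiagonal factorizations in the paper is precisely that these quantities can be obtained to HRA in floating point.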
Let us illustrate with a simple example the bidiagonal decompositions obtained by Algorithms 2 and 3 for the collocation and Wronskian matrices of Touchard bases. For \(n+1=10\), Algorithm 1 computes the following matrix storing the bidiagonal decomposition of the second kind Stirling matrix U:
Let us now consider the sequence of parameters \(\widetilde{\textbf{t}}:=[2,4,6,8,10,12,14,16,18,20]\). The bidiagonal factorization of the corresponding Vandermonde matrix V computed by TNVandBD is as follows:
Using BD(V) and BD(U), Algorithm 2 computes \(BD(V\cdot U)=BD(T)\) for the bidiagonal factorization of the collocation matrix T at the parameters \(\widetilde{\textbf{t}}\) of the \((n+1)\)-dimensional Touchard basis, obtaining the following:
On the other hand, let \(t:=2\). The bidiagonal factorization of the Wronskian matrix \(\widetilde{W}\) of the monomial basis at t can be represented by the following:
Using \(BD(\widetilde{W})\) and BD(U), Algorithm 3 computes \( BD(\widetilde{W} \cdot U)=BD(W)\) for the bidiagonal factorization of the Wronskian matrix W of the \((n+1)\)-dimensional Touchard basis at t.
To test the accuracy in floating point arithmetic provided by the proposed bidiagonal factorizations, for dimensions \(n+1=5,6, \ldots , 20\), we have solved algebraic problems related to collocation matrices \(T_n\) of Touchard bases with nodes \(t_{i}=1+(i+1)/(n+1)\), \(i=0,\ldots , n,\) and to Wronskian matrices \(W_n\) of Touchard bases at \(t=20\). To analyze the accuracy of the results when calculating the relative errors, we have taken the solutions obtained in Mathematica using 100-digit arithmetic as the exact solutions.
In addition, we have computed the 2-norm condition number of all the considered matrices. Figure 1 depicts the conditioning obtained in Mathematica. It can be observed that the conditioning increases drastically with the size of the matrices. Due to this ill-conditioning, standard routines do not obtain accurate solutions because they can suffer from inaccurate cancellations. In contrast, the algorithms using the factorizations obtained in this paper exploit the structure of the considered matrices, obtaining, as we will see, numerical results to HRA.
Computation of eigenvalues and singular values. Given \(B=BD(A)\) to HRA, the Matlab functions TNEigenValues(B) and TNSingularValues(B) available in [21] compute the eigenvalues and singular values of A to HRA. Their computational cost is \(O(n^3)\) arithmetic operations (see [19]).
Algorithm 4 uses the bidiagonal decompositions provided by Algorithms 2 and 3 to compute the eigenvalues and singular values of collocation matrices and singular values of Wronskian matrices of Touchard bases to HRA. Let us note that the eigenvalues of the Wronskian matrices of Touchard bases are exact.
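The eigenvalues of the Wronskian matrices are exact because \(W(T_{0},\ldots ,T_{n})(t)\) is upper triangular: \(T_j^{(i)}=0\) for \(i>j\), and the diagonal entry is \(T_j^{(j)}(t)=j!\,S(j,j)=j!\), independent of t, so the eigenvalues are \(0!,1!,\ldots ,n!\). The following Python sketch (function names ours) builds W in exact arithmetic and exhibits this structure:

```python
import math
from fractions import Fraction

def stirling2(N):
    """Table S[n][k] of Stirling numbers of the second kind."""
    S = [[0] * (N + 1) for _ in range(N + 1)]
    S[0][0] = 1
    for n in range(1, N + 1):
        for k in range(1, n + 1):
            S[n][k] = S[n - 1][k - 1] + k * S[n - 1][k]
    return S

def touchard_wronskian(n, t):
    """W[i][j] = T_j^{(i)}(t): the derivative of order i of the
    Touchard polynomial T_j, evaluated at t."""
    S = stirling2(n)
    t = Fraction(t)
    W = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        for i in range(n + 1):
            # d^i/dt^i of t^k is k!/(k-i)! * t^(k-i) for k >= i, else 0
            W[i][j] = sum(Fraction(S[j][k]) *
                          Fraction(math.factorial(k), math.factorial(k - i)) *
                          t ** (k - i)
                          for k in range(i, j + 1))
    return W
```

Since W is triangular, its eigenvalues can be read off the diagonal with no rounding error at all, consistent with the remark above.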
Let us observe that ill-conditioned matrices have extremely small singular values. Moreover, small relative perturbations in the entries of a totally positive matrix can produce enormous relative perturbations in its small eigenvalues and singular values. Hence, traditional methods only guarantee relative accuracy in the computation of the largest eigenvalues or singular values of an ill-conditioned TP matrix (cf. [19]). In this context, we have compared the smallest eigenvalue and singular value obtained using Algorithm 4 with those given by the Matlab commands eig and svd. The values provided by Mathematica using 100-digit arithmetic have been considered the exact solutions, and the relative error e of each approximation has been computed as \(e:= \vert a-\tilde{a} \vert / \vert a \vert \), where a denotes the smallest eigenvalue or singular value computed in Mathematica and \(\tilde{a}\) the corresponding value computed in Matlab.
In Figs. 2 and 3, the relative errors are shown. Note that our approach computes the smallest eigenvalue and singular value accurately regardless of the 2-norm condition number of the considered matrices. In contrast, the Matlab commands eig and svd return results that are not accurate at all.
Computation of inverses. Given \(B=BD(A)\) to HRA, the function TNInverseExpand(B) available in [21] returns \( A^{-1}\) to HRA, requiring \(O(n^2)\) arithmetic operations (see [27]).
Algorithm 5 uses the bidiagonal decomposition provided by Algorithms 2 and 3 to compute the inverse of these matrices to HRA.
For all the considered matrices, we have compared the inverses obtained using Algorithm 5 with those given by the Matlab command inv. To assess the accuracy of the two methods, we have compared both Matlab approximations \(\widetilde{A}^{-1}\) with the inverse \(A ^{-1}\) computed by Mathematica using 100-digit arithmetic, using the formula \(e=\Vert A ^{-1}-\widetilde{A} ^{-1} \Vert /\Vert A ^{-1}\Vert \) for the corresponding relative error.
The achieved relative errors are shown in Fig. 4. Note that, in contrast to the Matlab command inv, our algorithm provides HRA results.
Resolution of linear systems. Given \(B=BD(A)\) to HRA and a vector d with alternating signs, the Matlab function TNSolve(B, d) available in [21] returns the solution c of \(Ac=d\) to HRA. It requires \(O(n^2)\) arithmetic operations (see [21]).
Algorithm 6 uses the bidiagonal decomposition provided by Algorithms 2 and 3 to compute to HRA the solution of the linear systems \(T_nc=d\) and \(W_nc=d\) where \({ d }=((-1)^{i+1}d_i)_{1\le i\le n+1}\) and \(d_i\), \(i=1,\ldots ,n+1\), are random nonnegative integer values.
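The alternating-sign restriction on d is what keeps the solves through the bidiagonal factors free of subtractive cancellation. As a toy illustration (this is not the TNSolve algorithm of [21]), consider back-substitution for a unit-diagonal upper bidiagonal system: if the off-diagonal entries are positive and d alternates in sign, the solution also alternates, so each step \(d_i-u_i x_{i+1}\) combines numbers of the same sign:

```python
def solve_unit_upper_bidiagonal(u, d):
    """Back-substitution for U x = d, where U is upper bidiagonal with
    unit diagonal and off-diagonal entries u[0..n-2] > 0.  When d
    alternates in sign, each update d[i] - u[i]*x[i+1] adds quantities
    of the same sign, so no cancellation occurs."""
    n = len(d)
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - u[i] * x[i + 1]
    return x
```

Solving a general system \(Ac=d\) via \(BD(A)\) chains such bidiagonal solves together, which is why the alternating-sign hypothesis propagates through all the factors.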
For all the considered matrices, we have compared the solution obtained using Algorithm 6 with that given by the Matlab command \(\setminus \). The solution provided by Mathematica using 100-digit arithmetic has been considered the exact solution c. Then, we have computed in Mathematica the relative error of the Matlab approximation \(\tilde{c}\), using the formula \(e=\Vert c-\tilde{c}\Vert / \Vert c\Vert \).
As opposed to the results obtained with the command \(\setminus \), the proposed algorithm preserves the accuracy for all the considered dimensions. Figure 5 illustrates the relative errors.
8 Conclusions
In this paper, we have focused on the change of basis matrices between the monomial and the Newton bases corresponding to a given sequence of nodes, showing that their total positivity can be characterized in terms of the sign of the nodes. If the nodes have the same sign, algebraic problems related to these matrices can be solved to high relative accuracy using the bidiagonal factorization (6) provided by Theorem 3, even when the matrix is not totally positive. As an interesting application, the Stirling numbers of the second kind can be regarded as divided differences of monomial polynomials at nodes given by the first consecutive nonnegative integers. Stirling matrices can then be considered particular cases of the above-mentioned change of basis matrices and, consequently, algorithms to high relative accuracy have been derived for the resolution of algebraic problems with collocation and Wronskian matrices of Touchard polynomial bases.
Data availability
The authors confirm that the data supporting the findings of this study are available within the manuscript. The Matlab and Mathematica codes to run the numerical experiments are available upon request.
References
Ando, T.: Totally positive matrices. Linear Algebra Appl. 90, 165–219 (1987)
Barreras, A., Peña, J.M.: Accurate computations of matrices with bidiagonal decomposition using methods for totally positive matrices. Numer. Linear Algebra Appl. 20, 413–424 (2013)
de Boor, C.: A practical guide to splines. Springer-Verlag, New York (1978)
Björck, Å., Pereyra, V.: Solution of Vandermonde systems of equations. Math. Comp. 24, 893–903 (1970)
Carnicer, J.M., Khiar, Y., Peña, J.M.: Factorization of Vandermonde matrices and the Newton formula. Monografías Matemáticas García de Galdeano 40, 53–60 (2015)
Chrysaphinou, O.: On Touchard polynomials. Discret. Math. 54, 143–152 (1985)
Delgado, J., Orera, H., Peña, J.M.: Accurate computations with Laguerre matrices. Numer. Linear Algebra Appl. 26, e2217 (2019)
Delgado, J., Orera, H., Peña, J.M.: Accurate algorithms for Bessel matrices. J. Sci. Comput. 80, 1264–1278 (2019)
Delgado, J., Orera, H., Peña, J.M.: High relative accuracy with matrices of q-integers. Numer. Linear Algebra Appl. 28, e2383 (2021)
Delgado, J., Peña, J.M.: Fast and accurate algorithms for Jacobi-Stirling matrices. Appl. Math. Comput. 236, 253–259 (2014)
Demmel, J., Gu, M., Eisenstat, S., Slapnicar, I., Veselic, K., Drmac, Z.: Computing the singular value decomposition with high relative accuracy. Linear Algebra Appl. 299, 21–80 (1999)
Demmel, J., Koev, P.: The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 27, 42–52 (2005)
Demmel, J., Koev, P.: Accurate SVDs of polynomial Vandermonde matrices involving orthonormal polynomials. Linear Algebra Appl. 417, 382–396 (2006)
Fallat, S.M., Johnson, C.R.: Totally nonnegative matrices. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ (2011)
Finck, T., Heinig, G., Rost, K.: An inversion formula and fast algorithms for Cauchy-Vandermonde matrices. Linear Algebra Appl. 183, 179–191 (1993)
Gasca, M., Peña, J.M.: Total positivity and Neville elimination. Linear Algebra Appl. 165, 25–44 (1992)
Gasca, M., Peña, J.M.: A matricial description of Neville elimination with applications to total positivity. Linear Algebra Appl. 202, 33–53 (1994)
Gasca, M., Peña, J.M.: On factorizations of totally positive matrices. In: Gasca, M., Micchelli, C.A. (eds.) Total positivity and its applications, pp. 109–130. Kluwer Academic Publishers, Dordrecht, The Netherlands (1996)
Koev, P.: Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 27, 1–23 (2005)
Koev, P.: Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 29, 731–751 (2007)
Koev, P.: http://math.mit.edu/~plamen/software/TNTool.html. Accessed 16 August 2022
Mainar, E., Peña, J.M.: Accurate computations with collocation matrices of a general class of bases. Numer. Linear Algebra Appl. 25, e2184 (2018)
Mainar, E., Peña, J.M., Rubio, B.: Accurate computations with Wronskian matrices. Calcolo 58, 1 (2021)
Mainar, E., Peña, J.M., Rubio, B.: Accurate computations with collocation and Wronskian matrices of Jacobi polynomials. J. Sci. Comput. 87, 77 (2021)
Mainar, E., Peña, J.M., Rubio, B.: Accurate computations with matrices related to bases \(\{t^i e^{\lambda t} \}\). Adv. Comput. Math. 48, 38 (2022)
Mainar, E., Peña, J.M., Rubio, B.: Accurate computations with Gram and Wronskian matrices of geometric and Poisson bases. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat. 116, 126 (2022)
Marco, A., Martínez, J.J.: Accurate computation of the Moore-Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 350, 299–308 (2019)
Marco, A., Martínez, J.J.: Accurate computations with totally positive Bernstein-Vandermonde matrices. Electron. J. Linear Algebra 26, 357–380 (2013)
Nazir, A., Usman, M., Mohyud-Din, S.T., Tauseef, M.D.: Touchard polynomials method for integral equations. Int. J. Mod. Theor. Phys. 3, 74–89 (2014)
Oruc, H., Phillips, G.M.: Explicit factorization of the Vandermonde matrix. Linear Algebra Appl. 315, 113–123 (2000)
Paris, R.B.: The asymptotics of the Touchard Polynomials: a uniform approximation. Math. Aeterna 6, 765–779 (2016)
Pinkus, A.: Totally positive matrices, Cambridge Tracts in Mathematics, 181. Cambridge University Press, Cambridge (2010)
Touchard, J.: Sur les cycles des substitutions. Acta Math. 70(1), 243–297 (1939)
Acknowledgements
We thank the anonymous referees for their helpful comments and suggestions, which have improved this paper.
Funding
This work was partially supported by Spanish research grants PGC2018-096321-B-I00 and RED2022-134176-T (MCI/AEI) and by Gobierno de Aragón (E41_23R). Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Author information
Authors and Affiliations
Contributions
The authors contributed equally to this work.
Corresponding author
Ethics declarations
Ethics approval
Not applicable
Conflict of interest
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Mainar, E., Peña, J.M. & Rubio, B. High relative accuracy through Newton bases. Numer Algor 95, 747–772 (2024). https://doi.org/10.1007/s11075-023-01588-9