Algorithm - Multiply Polynomials - Stack Overflow

This document discusses an algorithm for multiplying two polynomials in Θ(n^(lg 3)) time using divide and conquer. It works by recursively dividing each polynomial into high and low halves, computing three products of those halves, and combining the results. The key steps are the recursive calls that compute the products of the divided polynomial halves. Left shifting in the algorithm corresponds to multiplying a polynomial by a power of the variable.

Multiply polynomials

Asked 4 years, 11 months ago Active 4 years, 11 months ago Viewed 6k times

Show how to multiply two linear polynomials $ax+b$ and $cx+d$ using only three
multiplications.
Give a divide-and-conquer algorithm for multiplying two polynomials of degree-bound n that runs in time Θ(n^(lg 3)). The algorithm should divide the input polynomial coefficients into a high half and a low half.
$(ax+b)(cx+d) = ac\,x^2 + ((a+b)(c+d) - ac - bd)\,x + bd$
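(Aside, not part of the original question: a minimal Python check of this three-multiplication identity. The helper name mul_linear is made up for this sketch.)

def mul_linear(a, b, c, d):
    # Multiply (a*x + b) by (c*x + d) using only three multiplications.
    ac = a * c                 # multiplication 1
    bd = b * d                 # multiplication 2
    cross = (a + b) * (c + d)  # multiplication 3
    # Coefficients of x^2, x, 1 respectively.
    return ac, cross - ac - bd, bd

# Sanity check against the schoolbook expansion (the x coefficient is ad + bc):
a, b, c, d = 3, 5, 2, 7
assert mul_linear(a, b, c, d) == (a * c, a * d + b * c, b * d)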
We let p and q be the vectors of coefficients of the first and second polynomials P and Q, respectively.

We assume that both of these vectors are of length n=max{ length(P), length(Q)} and we
let m=ceil(n/2).

Then $P = A x^m + B$, where $A = p_m + p_{m+1} x + \dots + p_{n-1} x^{n-1-m}$ and $B = p_0 + p_1 x + \dots + p_{m-1} x^{m-1}$, and $Q = C x^m + D$, where $C = q_m + q_{m+1} x + \dots + q_{n-1} x^{n-1-m}$ and $D = q_0 + q_1 x + \dots + q_{m-1} x^{m-1}$.

Using the previous result, it holds that $(Ax^m+B)(Cx^m+D) = AC\,x^{2m} + ((A+B)(C+D) - AC - BD)\,x^m + BD$. (1)

I found the following algorithm (link)

Algorithm(p, q){
    n = size(p)
    m = ceil(n/2)
    if n == 1 return p*q
    else{
        a = p[m, n-1]
        b = p[0, m-1]
        c = q[m, n-1]
        d = q[0, m-1]
        tmp1 = Algorithm(a+b, c+d)
        tmp2 = Algorithm(a, c)
        tmp3 = Algorithm(b, d)
        return tmp2<<n + (tmp1-tmp2-tmp3)<<m + tmp3
    }
}

So we suppose that a,b,c,d are vectors, right?

Could you explain me why we make these recursive calls:

tmp1=Algorithm(a+b,c+d)
tmp2=Algorithm(a,c)
tmp3=Algorithm(b,d)

I haven't really understood it... Also, how could we otherwise shift a number to the left by a specific number of digits?

algorithm multiplication polynomial-math

asked Jun 9 '15 at 21:16
Mary Star
339 4 22

It should be mentioned somewhere that this algorithm is known as Karatsuba multiplication. – Lutz Lehmann Aug 24 '15 at 11:05

1 I'm voting to close this question as off-topic because it appears to be about math in general, not about
programming. – TylerH Apr 10 at 18:15

2 Answers Active Oldest Votes

Let's say you have two polynomials of maximum degree n, and you want to find the polynomial that is their product. Furthermore, you want to use a divide and conquer approach in order to optimize your solution.

How can we break the problem of multiplying two polynomials of degree n into analogous
subproblems that involve less work? Well, we can turn them into a larger number of smaller
polynomials. Given a polynomial of degree n, you can turn it into two smaller polynomials by
following the reasoning below.

Given a polynomial P of degree n as follows:

$P = p_0 + p_1 x + p_2 x^2 + \dots + p_{n-1} x^{n-1}$

Let:

$m = \lceil n/2 \rceil$

We can factor out x^m from the "second half" of the polynomial. Or, continuing with our notation:

$P = (p_0 + p_1 x + \dots + p_{m-1} x^{m-1}) + x^m (p_m + p_{m+1} x + \dots + p_{n-1} x^{n-1-m})$

Or, if we let:

$B = p_0 + p_1 x + \dots + p_{m-1} x^{m-1}$ and $A = p_m + p_{m+1} x + \dots + p_{n-1} x^{n-1-m}$,

We have:

$P = A x^m + B$
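(A small Python illustration, not from the original answer: splitting a coefficient vector into the low part B and the high part A, and checking that P(x) = B(x) + x^m * A(x) at a sample point. The helper names are invented for this sketch.)

import math

def evaluate(p, x):
    # Evaluate a coefficient vector p (p[i] is the coefficient of x^i) at x, via Horner's rule.
    result = 0
    for coeff in reversed(p):
        result = result * x + coeff
    return result

p = [5, 0, 2, 7, 1]            # P(x) = 5 + 2x^2 + 7x^3 + x^4
m = math.ceil(len(p) / 2)      # m = 3
B, A = p[:m], p[m:]            # low half B = 5 + 2x^2, high half A = 7 + x
x = 2
assert evaluate(p, x) == evaluate(B, x) + x**m * evaluate(A, x)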

We've managed to express our polynomial of degree n in terms of two smaller polynomials of degree m. Why is this useful?
Well, let us return to the original problem for a second. Recalling that we are given two
polynomials to find the product of, let us split them as seen above. Let A, B denote the first
and second "halves" of the first polynomial P, and let C, D denote the two halves of the
second polynomial Q.*

Rearranging the product of P and Q:

$PQ = (A x^m + B)(C x^m + D) = AC\,x^{2m} + ((A+B)(C+D) - AC - BD)\,x^m + BD$

Awesome! We've managed to express the product of our two large polynomials as a sum of three products of smaller... well... polynomials. That's not so great, is it? So when you want to compute AC, or BD, or (A+B)(C+D), you have the same problem you started out with: "take these two polynomials and multiply them".

But this is the very problem we are solving! Assuming our method is properly implemented to return (the coefficient vector of) the product of two polynomials, we can just pass it the three products we need to compute. This is why you have three recursive calls to Algorithm, one for each of the pairs (A, C), (B, D), and (A+B, C+D).

Finally, regarding the left shift: I am fairly certain this is referring to leftward shifting within the
coefficient vector, and not bitwise left shift. Since the index within the coefficient vector
represents the power of x of the term the coefficient applies to, we can represent multiplication
of some polynomial by a given power of x by shifting the terms in its coefficient vector
leftwards by a corresponding amount.

*Note that both polynomials are treated as having the exact same degree, i.e. the degree of
the higher order polynomial, among P and Q. All the missing terms in the other polynomial
have coefficient zero.
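(Not part of the original answer, but perhaps useful: a compact runnable Python sketch of the divide-and-conquer scheme above, working on coefficient lists stored low-to-high. It pads both inputs to the same length, combines the three recursive products with explicit zero-padding instead of the << operator, and shifts by 2*m rather than n, which is equivalent once n is padded to an even length. All function names are made up for this illustration.)

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def poly_sub(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0) for i in range(n)]

def shift(p, k):
    # Multiply a polynomial by x^k: prepend k zero coefficients.
    return [0] * k + p

def poly_mul(p, q):
    # Divide-and-conquer (Karatsuba) multiplication of coefficient lists,
    # where p[i] is the coefficient of x^i.
    n = max(len(p), len(q))
    if n <= 1:                                  # base case: single coefficients
        return [(p[0] if p else 0) * (q[0] if q else 0)]
    m = (n + 1) // 2                            # m = ceil(n/2)
    p = p + [0] * (n - len(p))                  # pad both to length n
    q = q + [0] * (n - len(q))
    b, a = p[:m], p[m:]                         # P = A x^m + B
    d, c = q[:m], q[m:]                         # Q = C x^m + D
    t1 = poly_mul(poly_add(a, b), poly_add(c, d))   # (A+B)(C+D)
    t2 = poly_mul(a, c)                             # AC
    t3 = poly_mul(b, d)                             # BD
    middle = poly_sub(poly_sub(t1, t2), t3)         # (A+B)(C+D) - AC - BD
    return poly_add(poly_add(shift(t2, 2 * m), shift(middle, m)), t3)

# (3x + 5)(2x + 7) = 6x^2 + 31x + 35
print(poly_mul([5, 3], [7, 2]))   # [35, 31, 6]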

answered Jun 9 '15 at 22:13


Asad Saeeduddin
40.8k 5 73 116

2 Should the limit on the second summations be n-m instead of m-n ? – mjkaufer Sep 5 '17 at 20:53

The recursive calls are for polynomial multiplication, which has to be done when you compute AC, etc., in order to evaluate the formula AC x^n, in which AC is a polynomial. (You should probably think in terms of replacing n with a larger even number, if necessary, in which case the leading coefficient of the polynomial might be 0. Then you always have n=2m.) Just walk through it with a product of second-degree polynomials; it should be obvious.

As for the shift operator, it's pseudocode, so there is some latitude in interpretation. For
polynomials, the shift operator << n means multiply by x^n , which also corresponds to a left
shift of the vector of coefficients.

The reason the shift operator is used here is that the really interesting part is when we interpret this in the context of integer multiplication. We can represent integers as polynomials; e.g., 6 = 1 * 2^2 + 1 * 2^1 + 0 * 2^0, which corresponds to the polynomial 1 * x^2 + 1 * x^1 + 0 * x^0 under the correspondence x=2. Then integer multiplication corresponds to polynomial multiplication (but with carries), and multiplication by x corresponds to multiplication by 2, which corresponds to left shift. (We don't have to worry about the carries because they're taken care of by the + operator in the context of integer addition.)
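(An illustrative check, not from the original answer: with coefficients stored low-to-high, multiplying by x^k means prepending k zeros to the coefficient vector, and under the correspondence x=2 that is exactly an integer left shift. The helper names are invented for this sketch.)

def shift_poly(p, k):
    # Multiply the polynomial with coefficient list p (p[i] is the x^i coefficient)
    # by x^k: prepend k zero coefficients.
    return [0] * k + p

def eval_at(p, x):
    return sum(coeff * x**i for i, coeff in enumerate(p))

six = [0, 1, 1]                                   # 6 = 0*2^0 + 1*2^1 + 1*2^2
assert eval_at(six, 2) == 6
assert eval_at(shift_poly(six, 3), 2) == 6 << 3   # multiplying by x^3 at x=2 is a 3-bit left shift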
edited Jun 9 '15 at 22:21 answered Jun 9 '15 at 22:15
Edward Doolittle
3,602 2 10 21

Why, if we replace n with a larger even number, might the leading coefficient of the polynomial be 0?
Also, would it be right if we returned the following? tmp2*x^n+(tmp1-tmp2-tmp3)*x^m+tmp3 –
Mary Star Jun 9 '15 at 22:32

Yes, leading coefficient might be 0, but that's OK in this context. It would be right to return the
polynomial for the polynomial question, but the idea here is to make the same algorithm do double duty
(or multiple duty) in multiple different contexts; that's the idea behind generic programming. I strongly
recommend Stepanov's From Math to Generic Programming. – Edward Doolittle Jun 9 '15 at 22:39

