MATLAB Array Manipulation Tips and Tricks
Copyright 2000–2003 Peter J. Acklam. All rights reserved. Any material in this document may be reproduced or duplicated for personal or educational use. MATLAB is a trademark of The MathWorks, Inc. TeX is a trademark of the American Mathematical Society. Adobe, Acrobat, Acrobat Reader, and PostScript are trademarks of Adobe Systems Incorporated.
Contents

Preface

1 High-level vs low-level code
  1.1 Introduction
  1.2 Advantages and disadvantages
    1.2.1 Portability
    1.2.2 Verbosity
    1.2.3 Speed
    1.2.4 Obscurity
    1.2.5 Difficulty
  1.3 Words of warning

2 Operators, functions and special characters
  2.1 Operators
  2.2 Built-in functions
  2.3 M-file functions

3 Basic array properties
  3.1 Size
    3.1.1 Size along a specific dimension
    3.1.2 Size along multiple dimensions
  3.2 Dimensions
    3.2.1 Number of dimensions
    3.2.2 Singleton dimensions
  3.3 Number of elements
    3.3.1 Empty arrays

4 Array indices and subscripts

5 Creating basic vectors, matrices and arrays
  5.1 Creating a constant array
    5.1.1 When the class is determined by the scalar to replicate
    5.1.2 When the class is stored in a string variable
  5.2 Special vectors
    5.2.1 Uniformly spaced elements

6 Shifting
  6.1 Vectors

7 Replicating elements and arrays
  7.1 Creating a constant array
  7.2 Replicating elements in vectors
    7.2.1 Replicate each element a constant number of times
    7.2.2 Replicate each element a variable number of times
  7.3 Using KRON for replicating elements
    7.3.1 KRON with a matrix of ones
    7.3.2 KRON with an identity matrix

8 Reshaping arrays
  8.1 Subdividing 2D matrix
    8.1.1 Create 4D array
    8.1.2 Create 3D array (columns first)
    8.1.3 Create 3D array (rows first)
    8.1.4 Create 2D matrix (columns first, column output)
    8.1.5 Create 2D matrix (columns first, row output)
    8.1.6 Create 2D matrix (rows first, column output)
    8.1.7 Create 2D matrix (rows first, row output)
  8.2 Stacking and unstacking pages

9 Rotating matrices and arrays
  9.1 Rotating 2D matrices
  9.2 Rotating ND arrays
  9.3 Rotating ND arrays around an arbitrary axis
  9.4 Block-rotating 2D matrices
    9.4.1 Inner vs. outer block rotation
    9.4.2 Inner block rotation 90 degrees counterclockwise
    9.4.3 Inner block rotation 180 degrees
    9.4.4 Inner block rotation 90 degrees clockwise
    9.4.5 Outer block rotation 90 degrees counterclockwise
    9.4.6 Outer block rotation 180 degrees
    9.4.7 Outer block rotation 90 degrees clockwise
  9.5 Blocktransposing a 2D matrix
    9.5.1 Inner blocktransposing
    9.5.2 Outer blocktransposing

10 Basic arithmetic operations
  10.1 Multiply arrays
    10.1.1 Multiply each 2D slice with the same matrix (element-by-element)
    10.1.2 Multiply each 2D slice with the same matrix (left)
    10.1.3 Multiply each 2D slice with the same matrix (right)
    10.1.4 Multiply matrix with every element of a vector
    10.1.5 Multiply each 2D slice with corresponding element of a vector
    10.1.6 Outer product of all rows in a matrix
    10.1.7 Keeping only diagonal elements of multiplication
    10.1.8 Products involving the Kronecker product
  10.2 Divide arrays
    10.2.1 Divide each 2D slice with the same matrix (element-by-element)
    10.2.2 Divide each 2D slice with the same matrix (left)
    10.2.3 Divide each 2D slice with the same matrix (right)

11 More complicated arithmetic operations
  11.1 Calculating distances
    11.1.1 Euclidean distance
    11.1.2 Distance between two points
    11.1.3 Euclidean distance vector
    11.1.4 Euclidean distance matrix
    11.1.5 Special case when both matrices are identical
    11.1.6 Mahalanobis distance

12 Statistics, probability and combinatorics
  12.1 Discrete uniform sampling with replacement
  12.2 Discrete weighted sampling with replacement
  12.3 Discrete uniform sampling without replacement
  12.4 Combinations
    12.4.1 Counting combinations
    12.4.2 Generating combinations
  12.5 Permutations
    12.5.1 Counting permutations
    12.5.2 Generating permutations

13 Identifying types of arrays
  13.1 Numeric array
  13.2 Real array
  13.3 Identify real or purely imaginary elements
  13.4 Array of negative, non-negative or positive values
  13.5 Array of integers
  13.6 Scalar
  13.7 Vector
  13.8 Matrix
  13.9 Array slice

14 Logical operators and comparisons
  14.1 List of logical operators
  14.2 Rules for logical operators
  14.3 Quick tests before slow ones

15 Miscellaneous
  15.1 Accessing elements on the diagonal
  15.2 Creating index vector from index limits
  15.3 Matrix with different incremental runs
  15.4 Finding indices
    15.4.1 First non-zero element in each column
    15.4.2 First non-zero element in each row
    15.4.3 Last non-zero element in each row
  15.5 Run-length encoding and decoding
    15.5.1 Run-length encoding
    15.5.2 Run-length decoding
  15.6 Counting bits

Glossary

A MATLAB resources
Preface
The essence
This document is intended to be a compilation of tips and tricks mainly related to efficient ways of manipulating arrays in MATLAB. Here, manipulating arrays includes replicating and rotating arrays or parts of arrays, inserting, extracting, replacing, permuting and shifting arrays or parts of arrays, generating combinations and permutations of elements, run-length encoding and decoding, arithmetic operations like multiplying and dividing arrays, calculating distance matrices and more. A few other issues related to writing fast MATLAB code are also covered. I want to thank Ken Doniger and Dr. Denis Gilbert for their contributions, suggestions, and corrections.
Intended audience
This document is mainly intended for those of you who already know the basics of MATLAB and would like to dig further into the material regarding manipulating arrays efficiently.
Organization
Instead of just providing a compilation of questions and answers, I have organized the material into sections and attempted to give general answers, where possible. That way, a solution for a particular problem doesn't just answer that one problem, but rather that problem and all similar problems. Many of the sections start off with a general description of what the section is about and what kinds of problems are solved there. Following that are implementations which may be used to solve the given problem.
Typographical conventions
All MATLAB code is set in a monospaced font, like this, and the rest is set in a proportional font. An ellipsis (...) is sometimes used to indicate omitted code. It should be apparent from the context whether the ellipsis indicates omitted code or is the line continuation symbol used in MATLAB. MATLAB functions are, like other MATLAB code, set in a monospaced font, but in addition, the text is hyperlinked to the documentation pages at The MathWorks web site. Thus, depending on the PDF document reader, clicking the function name will open a web browser window showing the appropriate documentation page.
Credits
To the extent possible, I have given credit to what I believe is the author of a particular solution. In many cases there is no single author, since several people have been tweaking and trimming each other's solutions. If I have given credit to the wrong person, please let me know. In particular, I do not claim to be the sole author of a solution even when there is no other name mentioned.
Chapter 1
High-level vs low-level code
The use of a higher-level operator makes the code more compact and easier to read, but this is not always the case. Before you start using high-level functions extensively, you ought to consider the advantages and disadvantages.
1.2.1 Portability
Low-level code looks much the same in most programming languages. Thus, someone who is used to writing low-level code in some other language will quite easily be able to do the same in MATLAB. And vice versa, low-level MATLAB code is more easily ported to other languages than high-level MATLAB code.
1.2.2 Verbosity
The whole purpose of a high-level function is to do more than the low-level equivalent. Thus, using high-level functions results in more compact code. Compact code requires less coding, and generally, the less you have to write the less likely it is that you make a mistake. Also, it is easier to get an overview of compact code; having to wade through vast amounts of code makes it easier to lose the big picture.
1.2.3 Speed
Traditionally, low-level MATLAB code ran more slowly than high-level code, so among MATLAB users there has always been a great desire to speed up execution by replacing low-level code with high-level code. This is clearly seen in the MATLAB newsgroup on Usenet, comp.soft-sys.matlab, where many postings are about how to translate a low-level construction into the high-level equivalent. In MATLAB 6.5 an accelerator was introduced. The accelerator makes low-level code run much faster. At this time, not all code will be accelerated, but the accelerator is still under development and it is likely that more code will be accelerated in future releases of MATLAB. The MATLAB documentation contains specific information about what code is accelerated.
1.2.4 Obscurity
High-level code is more compact than low-level code, but sometimes the code is so compact that it becomes quite obscure. Although it might impress someone that a lot can be done with a minimum of code, it is a nightmare to maintain undocumented high-level code. You should always document your code, and this is even more important if your extensive use of high-level code makes the code obscure.
1.2.5 Difficulty
Writing efficient high-level code requires a different way of thinking than writing low-level code. It requires a higher level of abstraction which some people find difficult to master. As with everything else in life, if you want to be good at it, you must practice.
Chapter 2
Operators, functions and special characters
Type help ops at the command prompt and take a look at the list of operators, functions and special characters, and look at the associated help pages. When manipulating arrays in MATLAB there are some operators and functions that are particularly useful.
2.1 Operators
In addition to the arithmetic operators, MATLAB provides a couple of other useful operators:

:    The colon operator. Type help colon for more information.
.'   Non-conjugate transpose. Type help transpose for more information.
'    Complex conjugate transpose. Type help ctranspose for more information.
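As a quick illustration of these operators (a small made-up example, not from the original text):

x = (1:4).';          % the colon operator builds the vector 1,2,3,4; .' makes it a column
A = [ 1 2i; 3 4 ];
B = A.';              % non-conjugate transpose: B(2,1) is 2i
C = A';               % complex conjugate transpose: C(2,1) is -2i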
Some of these functions are shorthands for combinations of other built-in functions, like

length(x)   is   max(size(x))
ndims(x)    is   length(size(x))
numel(x)    is   prod(size(x))
Others are shorthands for frequently used tests, like

isempty(x)    is   numel(x) == 0
isinf(x)      is   abs(x) == Inf
isfinite(x)   is   abs(x) ~= Inf
Others are shorthands for frequently used functions which could have been written with low-level code, like diag, eye, find, sum, cumsum, cumprod, sort, tril, triu, etc.
Chapter 3
Basic array properties
The length of the size vector sx is the number of dimensions in x. That is, length(size(x)) is identical to ndims(x) (see section 3.2.1). No built-in array class in MATLAB has less than two dimensions. To change the size of an array without changing the number of elements, use reshape.
This will return one for all singleton dimensions (see section 3.2.2), and, in particular, it will return one for all dim greater than ndims(x).
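A small example of this behaviour:

x = ones(2,3);
size(x, 1)            % 2
size(x, 2)            % 3
size(x, 5)            % 1, since dimension 5 is a (trailing) singleton dimension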
which is the essential part of the function mttsize in the MTT Toolbox. Code like the following is sometimes seen, unfortunately. It might be more intuitive than the above, but it is more fragile since it might use a lot of memory when dims contains a large value.
sx = size(x);                   % get size along all dimensions
n = max(dims(:)) - ndims(x);    % number of dimensions to append
sx = [ sx ones(1, n) ];         % pad size vector
siz = sx(dims);                 % extract dimensions of interest
An unlikely scenario perhaps, but imagine what happens if x and dims both are scalars and that dims is a billion. The above code would require more than 8 GB of memory. The suggested solution further above requires a negligible amount of memory. There is no reason to write fragile code when it can easily be avoided.
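The memory-safe solution referred to above did not survive intact here, so the following is only a sketch of the idea (my own reconstruction, not necessarily identical to mttsize), assuming dims is a vector of dimension numbers: only the dimensions that actually exist in x are read from size(x); the rest are set to one.

sx = size(x);                   % size along existing dimensions
siz = ones(1, length(dims));    % dimensions beyond ndims(x) are singletons
k = dims <= ndims(x);           % which requested dimensions exist in x
siz(k) = sx(dims(k));           % fill in the true lengths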
3.2 Dimensions
3.2.1 Number of dimensions
The number of dimensions of an array is the number of the highest non-singleton dimension (see section 3.2.2), but never less than two, since arrays in MATLAB always have at least two dimensions. The function which returns the number of dimensions is ndims, so the number of dimensions of an array x is
dx = ndims(x); % number of dimensions
One may also say that ndims(x) is the largest value of dim, no less than two, for which size(x,dim) is different from one. Here are a few examples
x = ones(2,1)         % 2-dimensional
x = ones(2,1,1,1)     % 2-dimensional
x = ones(1,0)         % 2-dimensional
x = ones(1,2,3,0,0)   % 5-dimensional
x = ones(2,3,0,0,1)   % 4-dimensional
x = ones(3,0,0,1,2)   % 5-dimensional
Chapter 4
Array indices and subscripts
Chapter 5
Creating basic vectors, matrices and arrays
Following are three other ways to achieve the same, all based on what repmat uses internally. Note that for these to work, the array X should not already exist
X(prod(siz)) = val;     % array of right class and num. of elements
X = reshape(X, siz);    % reshape to specified size
X(:) = X(end);          % fill val into X (redundant if val is zero)
If the size is given as a cell vector siz = {m n p q ...}, there is no need to reshape
X(siz{:}) = val;        % array of right class and size
X(:) = X(end);          % fill val into X (redundant if val is zero)
but this solution requires more memory since it creates an index array. Since an index array is used, it only works if val is a variable, whereas the other solutions above also work when val is a function returning a scalar value, e.g., if val is Inf or NaN:
X = NaN(ones(siz));     % this won't work unless NaN is a variable
X = repmat(NaN, siz);   % here NaN may be a function or a variable
Avoid using

X = val*ones(siz);

since it does unnecessary multiplications and only works for classes for which the multiplication operator is defined.
As a special case, to create an array of class cls with only zeros, you can use
X = repmat(feval(cls, 0), siz); % a nice one-liner
or
X(prod(siz)) = feval(cls, 0);
X = reshape(X, siz);
Avoid using
X = feval(cls, zeros(siz)); % might require a lot more memory
since it first creates an array of class double which might require many times more memory than X if an array of class cls requires less memory per element than a double array.
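For example (illustrative values for cls and siz):

cls = 'uint8';
siz = [2 3];
X = repmat(feval(cls, 0), siz);   % 2-by-3 array of zeros of class uint8
class(X)                          % returns 'uint8'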
If the difference upper-lower is not a multiple of step, the last element of X, X(end), will be less than upper. So the condition X(end) <= upper is always satisfied.
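The colon expression that the preceding paragraph refers to looks like this (the variable names lower, step and upper are illustrative):

lower = 0;  step = 0.3;  upper = 1;
X = lower:step:upper;     % gives [0 0.3 0.6 0.9], so X(end) is less than upper here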
Chapter 6
Shifting
6.1 Vectors
To shift and rotate the elements of a vector, use
X([ end 1:end-1 ]);           % shift right/down 1 element
X([ end-k+1:end 1:end-k ]);   % shift right/down k elements
X([ 2:end 1 ]);               % shift left/up 1 element
X([ k+1:end 1:k ]);           % shift left/up k elements
Note that these only work if k is non-negative. If k is an arbitrary integer one may use something like
X( mod((1:end)-k-1, end)+1 );   % shift right/down k elements
X( mod((1:end)+k-1, end)+1 );   % shift left/up k elements
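For example, on a small vector:

X = [ 10 20 30 40 50 ];
X([ end 1:end-1 ])              % [ 50 10 20 30 40 ], shifted right by one
k = 2;
X( mod((1:end)-k-1, end)+1 )    % [ 40 50 10 20 30 ], shifted right by two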
Chapter 7
Replicating elements and arrays
If A is a column-vector, use
B = A(:,ones(1,N)).';
B = B(:);
but this requires unnecessary arithmetic. The only advantage is that it works regardless of whether A is a row or column vector.
or simply
B = repmat(A, [m n]);
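For example, replicating a 2-by-2 matrix twice in each direction:

A = [ 1 2; 3 4 ];
B = repmat(A, [2 2]);   % B is [ 1 2 1 2; 3 4 3 4; 1 2 1 2; 3 4 3 4 ]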
Chapter 8
Reshaping arrays
8.1 Subdividing 2D matrix
Assume X is an m-by-n matrix.
Now,
X = [ Y(:,:,1,1)    Y(:,:,1,2)    ...   Y(:,:,1,n/q)
      Y(:,:,2,1)    Y(:,:,2,2)    ...   Y(:,:,2,n/q)
      ...           ...           ...   ...
      Y(:,:,m/p,1)  Y(:,:,m/p,2)  ...   Y(:,:,m/p,n/q) ];
into
Y = cat( 3, A, C, B, D );
use
Y = reshape( X, [ p m/p q n/q ] );
Y = permute( Y, [ 1 3 2 4 ] );
Y = reshape( Y, [ p q m*n/(p*q) ] );
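As a concrete illustration (with values of my own choosing), subdividing a 4-by-4 matrix into 2-by-2 blocks:

X = [  1  2  5  6
       3  4  7  8
       9 10 13 14
      11 12 15 16 ];               % a 4-by-4 matrix of 2-by-2 blocks
p = 2;  q = 2;  [m, n] = size(X);
Y = reshape( X, [ p m/p q n/q ] );
Y = permute( Y, [ 1 3 2 4 ] );
Y = reshape( Y, [ p q m*n/(p*q) ] );
% Y(:,:,1) is [1 2; 3 4] (top left), Y(:,:,2) is [9 10; 11 12] (bottom left),
% Y(:,:,3) is [5 6; 7 8] (top right), Y(:,:,4) is [13 14; 15 16] (bottom right)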
into
Y = cat( 3, A, B, C, D );
use
Y = reshape( X, [ p m/p n ] );
Y = permute( Y, [ 1 3 2 ] );
Y = reshape( Y, [ p q m*n/(p*q) ] );
Now,
X = [ Y(:,:,1)               Y(:,:,2)               ...   Y(:,:,n/q)
      Y(:,:,n/q+1)           Y(:,:,n/q+2)           ...   Y(:,:,2*n/q)
      ...                    ...                    ...   ...
      Y(:,:,(m/p-1)*n/q+1)   Y(:,:,(m/p-1)*n/q+2)   ...   Y(:,:,m/p*n/q) ];
into
Y = [ A C B D ];
into
Y = [ A C B D ];
use
Y = reshape( X, [ p m/p q n/q ] );
Y = permute( Y, [ 1 3 2 4 ] );
Y = reshape( Y, [ p m*n/p ] );
into
Y = [ A B C D ];
use
Y = reshape( X, [ p m/p q n/q ] );
Y = permute( Y, [ 1 4 2 3 ] );
Y = reshape( Y, [ m*n/q q ] );
into
Y = [ A B C D ];
use
Y = reshape( X, [ p m/p n ] );
Y = permute( Y, [ 1 3 2 ] );
Y = reshape( Y, [ p m*n/p ] );
into
Y = [ A B C ... ];
use
Y = permute( X, [ 1 3 2 ] );
Y = reshape( Y, [ m*p n ] );
Chapter 9
Rotating matrices and arrays
or the one-liner
Y = reshape( X(end:-1:1,end:-1:1,:), size(X) );
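For a plain 2-D matrix this one-liner agrees with rot90(X, 2), which is an easy way to convince yourself that it performs a 180 degree rotation:

X = magic(3);
Y = reshape( X(end:-1:1,end:-1:1,:), size(X) );
isequal(Y, rot90(X, 2))       % returns 1 (true)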
or the one-liner
Y = permute(reshape(X(end:-1:1,:), size(X)), [2 1 3:ndims(X)]);
However, an outer block rotation 90 degrees counterclockwise will have the following effect
[ A B C          [ C F I
  D E F    =>      B E H
  G H I ]          A D G ]
In all the examples below, it is assumed that X is an m-by-n matrix of p-by-q blocks.
use
Y = reshape( X, [ p m/p q n/q ] );
Y = Y(:,:,q:-1:1,:);               % or Y = Y(:,:,end:-1:1,:);
Y = permute( Y, [ 3 2 1 4 ] );
Y = reshape( Y, [ q*m/p p*n/q ] );
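As a quick sanity check (my own example, not from the original text): when X consists of a single block, that is m equals p and n equals q, inner block rotation reduces to an ordinary rot90:

X = [ 1 2; 3 4; 5 6 ];  p = 3;  q = 2;  [m, n] = size(X);
Y = reshape( X, [ p m/p q n/q ] );
Y = Y(:,:,q:-1:1,:);
Y = permute( Y, [ 3 2 1 4 ] );
Y = reshape( Y, [ q*m/p p*n/q ] );
isequal(Y, rot90(X))          % returns 1 (true)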
use
Y = reshape( X, [ p q n/q ] );
Y = Y(:,q:-1:1,:);                 % or Y = Y(:,end:-1:1,:);
Y = permute( Y, [ 2 1 3 ] );
Y = reshape( Y, [ q m*n/q ] );     % or Y = Y(:,:);
use
Y = X(:,q:-1:1);                   % or Y = X(:,end:-1:1);
Y = reshape( Y, [ p m/p q ] );
Y = permute( Y, [ 3 2 1 ] );
Y = reshape( Y, [ q*m/p p ] );
use
Y = reshape( X, [ p m/p q n/q ] );
Y = Y(p:-1:1,:,q:-1:1,:);          % or Y = Y(end:-1:1,:,end:-1:1,:);
Y = reshape( Y, [ m n ] );
use
Y = reshape( X, [ p q n/q ] );
Y = Y(p:-1:1,q:-1:1,:);            % or Y = Y(end:-1:1,end:-1:1,:);
Y = reshape( Y, [ m n ] );         % or Y = Y(:,:);
use
Y = reshape( X, [ p m/p q ] );
Y = Y(p:-1:1,:,q:-1:1);            % or Y = Y(end:-1:1,:,end:-1:1);
Y = reshape( Y, [ m n ] );
use
Y = reshape( X, [ p m/p q n/q ] );
Y = Y(p:-1:1,:,:,:);               % or Y = Y(end:-1:1,:,:,:);
Y = permute( Y, [ 3 2 1 4 ] );
Y = reshape( Y, [ q*m/p p*n/q ] );
use
Y = X(p:-1:1,:);                   % or Y = X(end:-1:1,:);
Y = reshape( Y, [ p q n/q ] );
Y = permute( Y, [ 2 1 3 ] );
Y = reshape( Y, [ q m*n/q ] );     % or Y = Y(:,:);
use
Y = reshape( X, [ p m/p q ] );
Y = Y(p:-1:1,:,:);                 % or Y = Y(end:-1:1,:,:);
Y = permute( Y, [ 3 2 1 ] );
Y = reshape( Y, [ q*m/p p ] );
use
Y = reshape( X, [ p m/p q n/q ] );
Y = Y(:,:,:,n/q:-1:1);             % or Y = Y(:,:,:,end:-1:1);
Y = permute( Y, [ 1 4 3 2 ] );
Y = reshape( Y, [ p*n/q q*m/p ] );
use
Y = reshape( X, [ p q n/q ] );
Y = Y(:,:,n/q:-1:1);               % or Y = Y(:,:,end:-1:1);
Y = permute( Y, [ 1 3 2 ] );
Y = reshape( Y, [ m*n/q q ] );
use
Y = reshape( X, [ p m/p q ] );
Y = permute( Y, [ 1 3 2 ] );
Y = reshape( Y, [ p n*m/p ] );     % or Y = Y(:,:);
use
Y = reshape( X, [ p m/p q n/q ] );
Y = Y(:,m/p:-1:1,:,n/q:-1:1);      % or Y = Y(:,end:-1:1,:,end:-1:1);
Y = reshape( Y, [ m n ] );
use
Y = reshape( X, [ p q n/q ] );
Y = Y(:,:,n/q:-1:1);               % or Y = Y(:,:,end:-1:1);
Y = reshape( Y, [ m n ] );         % or Y = Y(:,:);
use
Y = reshape( X, [ p m/p q ] );
Y = Y(:,m/p:-1:1,:);               % or Y = Y(:,end:-1:1,:);
Y = reshape( Y, [ m n ] );
use
Y = reshape( X, [ p m/p q n/q ] );
Y = Y(:,m/p:-1:1,:,:);             % or Y = Y(:,end:-1:1,:,:);
Y = permute( Y, [ 1 4 3 2 ] );
Y = reshape( Y, [ p*n/q q*m/p ] );
use
Y = reshape( X, [ p q n/q ] );
Y = permute( Y, [ 1 3 2 ] );
Y = reshape( Y, [ m*n/q q ] );
use
Y = reshape( X, [ p m/p q ] );
Y = Y(:,m/p:-1:1,:);               % or Y = Y(:,end:-1:1,:);
Y = permute( Y, [ 1 3 2 ] );
Y = reshape( Y, [ p n*m/p ] );
use
Y = reshape( X, [ p m/p q n/q ] );
Y = permute( Y, [ 3 2 1 4 ] );
Y = reshape( Y, [ q*m/p p*n/q ] );
use
Y = reshape( X, [ p m/p q n/q ] );
Y = permute( Y, [ 1 4 3 2 ] );
Y = reshape( Y, [ p*n/q q*m/p ] );
Chapter 10
Basic arithmetic operations
for all i=1,...,p, j=1,...,q, etc. This can be done with nested for-loops, or by the following vectorized code
sx = size(X);
Z = X .* repmat(Y, [1 1 sx(3:end)]);
for all i=1,...,p, j=1,...,q, etc. This can be done with nested for-loops, or by the following vectorized code
sx = size(X);
sy = size(Y);
Z = reshape(Y * X(:,:), [sy(1) sx(2:end)]);
The above works by reshaping X so that all 2D slices X(:,:,i,j,...) are placed next to each other (horizontal concatenation), multiplying with Y, and then reshaping back again. The X(:,:) is simply a short-hand for reshape(X, [sx(1) prod(sx)/sx(1)]).
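A quick way to convince yourself (with arbitrary, made-up sizes) that each slice really is multiplied from the left:

Y = rand(2, 3);  X = rand(3, 4, 5);
sx = size(X);  sy = size(Y);
Z = reshape(Y * X(:,:), [sy(1) sx(2:end)]);
size(Z)                       % [2 4 5]
norm(Z(:,:,3) - Y*X(:,:,3))   % essentially zero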
for all i=1,...,p, j=1,...,q, etc. This can be done with nested for-loops, or by vectorized code. First create the variables
sx = size(X);
sy = size(Y);
dx = ndims(X);
Note how the complex conjugate transpose (') on the 2D slices of X was replaced by a combination of permute and conj. Actually, because the signs will cancel each other, we can simplify the above by removing the calls to conj and replacing the complex conjugate transpose (') with the non-conjugate transpose (.'). The code above then becomes
Xt = permute(X, [2 1 3:dx]);
Z = Y.' * Xt(:,:);
Z = reshape(Z, [sy(2) sx(1) sx(3:dx)]);
Z = permute(Z, [2 1 3:dx]);
An alternative method is to perform the multiplication X(:,:,i,j,...) * Y directly but that requires that we stack all 2D slices X(:,:,i,j,...) on top of each other (vertical concatenation), multiply, and unstack. The code is then
Xt = permute(X, [1 3:dx 2]);
Xt = reshape(Xt, [prod(sx)/sx(2) sx(2)]);
Z = Xt * Y;
Z = reshape(Z, [sx(1) sx(3:dx) sy(2)]);
Z = permute(Z, [1 dx 2:dx-1]);
The first two lines perform the stacking and the last two perform the unstacking.
For the more general problem where X is an m-by-n-by-p-by-q-by-... array and v is a p-by-q-by-... array, the for-loop
may be written as
sx = size(X);
Z = X .* repmat(reshape(v, [1 1 sx(3:end)]), [sx(1) sx(2)]);
a non-for-loop solution is
j = 1:n;
Y = reshape(repmat(X.', n, 1) .* X(:,j(ones(n, 1),:)).', [n n m]);
Note the use of the non-conjugate transpose in the second factor to ensure that it works correctly also for complex matrices.
Solution (1) does a lot of unnecessary work, since we only keep the n diagonal elements of the n^2 computed elements. Solution (2) only computes the elements of interest and is significantly faster if n is large.
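The two solutions themselves did not survive here, but a common way to express the idea in solution (2), computing only the diagonal, is the following sketch (my own, assuming A is m-by-n and B is n-by-m):

d = sum(A .* B.', 2);         % same values as diag(A*B), without forming the full product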
for all i=1,...,p, j=1,...,q, etc. This can be done with nested for-loops, or by the following vectorized code
sx = size(X);
Z = X./repmat(Y, [1 1 sx(3:end)]);
for all i=1,...,p, j=1,...,q, etc. This can be done with nested for-loops, or by the following vectorized code
Z = reshape(Y\X(:,:), size(X));
for all i=1,...,p, j=1,...,q, etc. This can be done with nested for-loops, or by the following vectorized code
sx = size(X);
dx = ndims(X);
Xt = reshape(permute(X, [1 3:dx 2]), [prod(sx)/sx(2) sx(2)]);
Z = Xt/Y;
Z = permute(reshape(Z, sx([1 3:dx 2])), [1 dx 2:dx-1]);
The third line above builds a 2D matrix which is a vertical concatenation (stacking) of all 2D slices X(:,:,i,j,...). The fourth line does the actual division. The fifth line does the opposite of the third line. The five lines above might be simplified a little by introducing a dimension permutation vector
sx = size(X);
dx = ndims(X);
v = [1 3:dx 2];
Xt = reshape(permute(X, v), [prod(sx)/sx(2) sx(2)]);
Z = Xt/Y;
Z = ipermute(reshape(Z, sx(v)), v);
If you don't care about readability, this code may also be written as
sx = size(X);
dx = ndims(X);
v = [1 3:dx 2];
Z = ipermute(reshape(reshape(permute(X, v), ...
    [prod(sx)/sx(2) sx(2)])/Y, sx(v)), v);
Chapter 11
More complicated arithmetic operations
The following code inlines the call to repmat, but requires two temporary variables unless one doesn't mind changing X and Y
Xt = permute(X, [1 3 2]);
Yt = permute(Y, [3 1 2]);
D = sqrt(sum(abs( Xt(:,ones(1,n),:) ...
                - Yt(ones(1,m),:,:) ).^2, 3));
The distance matrix may also be calculated without the use of a 3-D array:
i = (1:m).';               % index vector for x
i = i(:,ones(1,n));        % index matrix for x
j = 1:n;                   % index vector for y
j = j(ones(1,m),:);        % index matrix for y
D = zeros(m, n);           % initialise output matrix
D(:) = sqrt(sum(abs(X(i(:),:) - Y(j(:),:)).^2, 2));
One might want to take advantage of the fact that D will be symmetric. The following code first creates the indices for the upper triangular part of D. Then it computes the upper triangular part of D and finally lets the lower triangular part of D be a mirror image of the upper triangular part.
[ i j ] = find(triu(ones(m), 1));     % trick to get indices
D = zeros(m, m);                      % initialise output matrix
D( i + m*(j-1) ) = sqrt(sum(abs( X(i,:) - X(j,:) ).^2, 2));
D( j + m*(i-1) ) = D( i + m*(j-1) );
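For instance, with three made-up points in the plane:

X = [ 0 0; 3 0; 0 4 ];
m = size(X, 1);
[ i j ] = find(triu(ones(m), 1));
D = zeros(m, m);
D( i + m*(j-1) ) = sqrt(sum(abs( X(i,:) - X(j,:) ).^2, 2));
D( j + m*(i-1) ) = D( i + m*(j-1) );
% D is [ 0 3 4; 3 0 5; 4 5 0 ]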
Assume Y is an ny-by-p matrix containing a set of vectors and X is an nx-by-p matrix containing another set of vectors, then the Mahalanobis distance from each vector Y(j,:) (for j=1,...,ny) to the set of vectors in X can be calculated with
nx = size(X, 1);    % size of set in X
ny = size(Y, 1);    % size of set in Y
m = mean(X);
C = cov(X);
d = zeros(ny, 1);
for j = 1:ny
   d(j) = (Y(j,:) - m) / C * (Y(j,:) - m)';
end
which is computed more efficiently with the following code which does some inlining of functions (mean and cov) and vectorization
nx = size(X, 1);                     % size of set in X
ny = size(Y, 1);                     % size of set in Y
m  = sum(X, 1)/nx;                   % centroid (mean)
Xc = X - m(ones(nx,1),:);            % distance to centroid of X
C  = (Xc' * Xc)/(nx - 1);            % variance matrix
Yc = Y - m(ones(ny,1),:);            % distance to centroid of X
d  = real(sum(Yc/C.*conj(Yc), 2));   % Mahalanobis distances
The call to conj is to make sure it also works for the complex case. The call to real is to remove numerical noise. The Statistics Toolbox contains the function mahal for calculating the Mahalanobis distances, but mahal computes the distances by doing an orthogonal-triangular (QR) decomposition of the matrix C. The code above returns the same as d = mahal(Y, X).

Special case when both matrices are identical

If Y and X are identical in the code above, the code may be simplified somewhat. The for-loop solution becomes
n = size(X, 1);     % size of set in X
m = mean(X);
C = cov(X);
d = zeros(n, 1);
for j = 1:n
   d(j) = (X(j,:) - m) / C * (X(j,:) - m)';
end
Again, to make it work in the complex case, the last line must be written as
d = real(sum(Xc/C.*conj(Xc), 2)); % Mahalanobis distances
Chapter 12
Statistics, probability and combinatorics
Note that the number of times through the loop depends on the number of probabilities and not the sample size, so it should be quite fast even for large samples.
12.4 Combinations
Combinations is what you get when you pick k elements, without replacement, from a sample of size n, and consider the order of the elements to be irrelevant.
which
which may overflow. Unfortunately, both n and k have to be scalars. If n and/or k are vectors, one may use the fact that

   nchoosek(n, k) = n! / ( k! (n-k)! ) = gamma(n+1) / ( gamma(k+1) gamma(n-k+1) )

and calculate this with
round(exp(gammaln(n+1) - gammaln(k+1) - gammaln(n-k+1)))
where the round is just to remove any numerical noise that might have been introduced by gammaln and exp.
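A quick check of the formula against nchoosek in the scalar case:

n = 10;  k = 3;
round(exp(gammaln(n+1) - gammaln(k+1) - gammaln(n-k+1)))   % 120
nchoosek(n, k)                                             % also 120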
12.5 Permutations
12.5.1 Counting permutations
p = prod(n-k+1:n);
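For example, the number of ways to arrange k = 3 out of n = 10 elements:

n = 10;  k = 3;
p = prod(n-k+1:n);    % 8*9*10 = 720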
Chapter 13
Identifying types of arrays
since, by default, isnan and isinf are only defined for class double. A solution that works is to use the following, where tf is either true or false
tf = isnumeric(x);
if isa(x, 'double')
   tf = tf & ~any(isnan(x(:))) & ~any(isinf(x(:)));
end
If one is only interested in arrays of class double, the above may be written as
isa(x, 'double') & ~any(isnan(x(:))) & ~any(isinf(x(:)))
Note that there is no need to call isnumeric in the above, since a double array is always numeric.
The essence is that isreal returns false (i.e., 0) if space has been allocated for an imaginary part. It doesn't care whether the imaginary part is zero; if it is present, isreal returns false. To see if an array x is real in the sense that it has no non-zero imaginary part, use
~any(imag(x(:)))
Note that x might be real without being numeric; for instance, isreal('a') returns true, but isnumeric('a') returns false.
% see if x contains only (possibly complex) integers
all(x(:) == round(x(:)))

% see if x contains only real integers
isreal(x) & all(x(:) == round(x(:)))
13.6 Scalar
To see if an array x is scalar, i.e., an array with exactly one element, use
all(size(x) == 1)      % is a scalar
prod(size(x)) == 1     % is a scalar
any(size(x) ~= 1)      % is not a scalar
prod(size(x)) ~= 1     % is not a scalar
13.7 Vector
An array x is a non-empty vector if the following is true
~isempty(x) & sum(size(x) > 1) <= 1   % is a non-empty vector
isempty(x) | sum(size(x) > 1) > 1     % is not a non-empty vector
An array x is a possibly empty row or column vector if the following is true (the two methods are equivalent)
ndims(x) <= 2 & sum(size(x) > 1) <= 1
ndims(x) <= 2 & ( size(x,1) <= 1 | size(x,2) <= 1 )
13.8 Matrix
An array x is a possibly empty matrix if the following is true
ndims(x) == 2     % is a possibly empty matrix
ndims(x) > 2      % is not a possibly empty matrix
Chapter 14
Logical operators and comparisons
but if x is a large array, the above might be very slow since it has to look at each element at least once (the isinf test). The following is faster and requires less typing
Note how the last three tests get simplified because, since we have put the test for scalarness before them, we can safely assume that x is scalar. The last three tests aren't even performed at all unless x is a scalar.
Chapter 15
Miscellaneous
This section contains things that don't fit anywhere else.
% m-by-n matrix where m >= n   (4)
% m-by-n matrix where m <= n   (5)
To get the linear index values of the elements on the following anti-diagonals
(1) [ 0 0 3 0 2 0 1 0 0 ] (2) [ 0 0 0 1 0 0 2 0 0 3 0 0 ] (3) [ 0 0 3 0 0 2 0 0 1 0 0 0 ] (4) [ 0 0 1 0 0 2 0 0 3 0 0 0 ] (5) [ 0 0 0 3 0 0 2 0 0 1 0 0 ]
which unfortunately requires a lot of memory copying since a new x has to be allocated each time through the loop. A better for-loop solution is one that allocates the required space and then fills in the elements afterwards. This for-loop solution may be several times faster than the first one
m   = length(lo);      % length of input vectors
len = hi - lo + 1;     % length of each "run"
n   = sum(len);        % length of index vector
lst = cumsum(len);     % last index in each run
idx = zeros(1, n);     % initialize index vector
for i = 1:m
   idx(lst(i)-len(i)+1:lst(i)) = lo(i):hi(i);
end
Neither of the for-loop solutions above can compete with the solution below, which has no for-loops. It uses cumsum rather than the colon operator to do the incrementing in each run and may be many times faster than the for-loop solutions above.
m   = length(lo);      % length of input vectors
len = hi - lo + 1;     % length of each "run"
n   = sum(len);        % length of index vector
idx = ones(1, n);      % initialize index vector
idx(1) = lo(1);
len(1) = len(1)+1;
idx(cumsum(len(1:end-1))) = lo(2:m) - hi(1:m-1);
idx = cumsum(idx);
It fails, however, if lo(i) > hi(i) for any i. Such a case would create an empty vector anyway, so the problem can be solved by a simple pre-processing step which removes the elements for which lo(i) > hi(i)
i = lo <= hi;
lo = lo(i);
hi = hi(i);
There also exists a one-line solution which is very compact, but not as fast as the no-for-loop solution above
x = eval([ '[' sprintf('%d:%d,', [lo ; hi]) ']' ]);
How does one create the matrix where the ith column contains the vector 1:a(i) possibly padded with zeros:
b = [ 1 2 3 0
      1 2 0 0
      1 2 3 4 ];
or
m = max(a);
aa = a(:);
aa = aa(:,ones(m, 1));
bb = 1:m;
bb = bb(ones(length(a), 1),:);
b = bb .* (bb <= aa);
or
m = size(x, 1);
j = zeros(m, 1);
for i = 1:m
   k = [ 0 find(x(i,:) ~= 0) ];
   j(i) = k(end);
end
Which of the two above is faster depends on the data. For more or less sorted data, the first one seems to be faster in most cases. For random data, the second one seems to be faster. The two steps required to get both the run-lengths and the values may be combined into
i = [ find(x(1:end-1) ~= x(2:end)) length(x) ];
len = diff([ 0 i ]);
val = x(i);
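For example, encoding a small vector:

x = [ 4 4 5 5 5 6 ];
i = [ find(x(1:end-1) ~= x(2:end)) length(x) ];
len = diff([ 0 i ]);    % [ 2 3 1 ]
val = x(i);             % [ 4 5 6 ]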
the above method requires approximately 2*length(len)+sum(len) flops. There is a way that only requires approximately length(len)+sum(len) flops, but it is slightly slower (not sure why, though).
len(1) = len(1)+1;
i = cumsum(len);               % length(len) flops
j = zeros(1, i(end)-1);
j(i(1:end-1)) = 1;
j(1) = 1;
x = val(cumsum(j));            % sum(len) flops
The following method requires approximately length(len)+sum(len) flops and only four lines of code, but is slower than the two methods suggested above.
i = cumsum([ 1 len ]);         % length(len) flops
j = zeros(1, i(end)-1);
j(i(1:end-1)) = 1;
x = val(cumsum(j));            % sum(len) flops
or
The following solution is slower, but requires less memory than the above so it is able to handle larger arrays
nsetbits = zeros(size(x));
k = find(x);
while length(k)
   nsetbits = nsetbits + bitand(x, 1);
   x = bitshift(x, -1);
   k = k(logical(x(k)));
end
Glossary
null-operation: an operation which has no effect on the operand
operand: an argument on which an operator is applied
singleton dimension: a dimension along which the length is one
subscript context: an expression used as an array subscript is in a subscript context
vectorization: taking advantage of the fact that many operators and functions can perform the same operation on several elements in an array without requiring the use of a for-loop
Appendix A
MATLAB resources
The MathWorks home page
On The MathWorks web page one can find the complete set of MATLAB documentation in addition to technical solutions and lots of other information. http://www.mathworks.com/