
Data Field Haskell

2001, Electronic Notes in Theoretical Computer Science

Jonas Holmerin (1) and Björn Lisper (2)

(1) Department of Numerical Analysis and Computing Science, Royal Institute of Technology, SE-100 44 Stockholm, SWEDEN, joho@nada.kth.se
(2) Dept. of Computer Engineering, Mälardalen University, P.O. Box 883, SE-721 23 Västerås, SWEDEN, bjorn.lisper@mdh.se

Abstract. Data fields provide a flexible and highly general model for indexed collections of data. Data Field Haskell is a Haskell dialect that provides an instance of data fields. It can be used for very generic collection-oriented programming, with a special emphasis on multidimensional structures. We give a brief description of the data field model and its underlying theory. We then describe Data Field Haskell, and an implementation.

1 Introduction

Indexed data structures are important in many computing applications. The canonical indexed data structure is the array, but other indexed structures like hash tables and explicitly parallel entities are also common. In many applications the indexing capability provides an important part of the model: when solving partial differential equations, for instance, the index is often closely related to a physical coordinate, and explicitly parallel algorithms often use processor ID's as indices.

Since the time of APL [5] it has been recognised that a programming model that provides operations directly on data structures can be very convenient. This style of programming is often called collection-oriented programming [27]. Modern array and data parallel languages like Fortran 90 [3] provide support for this programming style, as do higher-order functional languages, which usually offer collection-oriented list operations. However, these languages typically restrict the scope of collection-oriented operations to a single kind of data type, and the semantics of these operations can be somewhat ad-hoc.

Data fields ("field" should be understood as in physics, as an entity that is a function of space and possibly time) model indexed data structures as partial functions supplied with explicit information about their domains. This leads to a programming model that is highly uniform over different indexed data structures, and where the operations are designed according to common semantical principles.

Data Field Haskell is a Haskell dialect where the arrays have been replaced by an instance of data fields. This particular instance consists of multidimensional array-like data fields, which can be sparse, dense, or sparse in some dimensions and dense in others. Thus, this dialect is targeted towards rapid prototyping of parallel algorithms, which may involve sparse structures, but we believe it is useful for a wide range of applications amenable to collection-oriented programming.

There is reason to believe that, in a few years' time, the advances in semiconductor technology will force the replacement of current processor architectures with multiprocessors on a chip [7]. When this happens, parallelism will become central also in mainstream computing. The collection-oriented paradigm provides an attractive parallel programming model due to its conceptual simplicity. This motivates a continued investigation in general collection-oriented programming models, and is one reason for the development of Data Field Haskell.

The rest of this paper is organised as follows. Section 2 gives a brief description of the underlying data field model. Section 3 describes Data Field Haskell, and Section 4 gives a simple example of its use.
Section 5 contains a description of the current implementation. Section 6 provides an account of related work. In Section 7, finally, the story is wrapped up. The limited space does not allow a complete description of Data Field Haskell here; see [1, 11] for the details. Various versions of the data field model have been described elsewhere [8, 16-18]. The use of Data Field Haskell for rapid prototyping of parallel algorithms has been reported in [12, 19]. The contribution of this paper is a more thorough description of the language and of an implementation.

2 The Data Field Model

The concept of data fields is based on the more abstract model of indexed data structures as functions with finite domain [8, 16]. An array with range [1..n], for instance, can be seen as a function from {1, ..., n}, but we could also model "irregular" indexed structures as functions with non-contiguous, possibly non-numerical domains. In order to give partial functions conventional function types, they are seen as functions that return a distinguished error value, with algebraic properties similar to ⊥, when called with an argument outside their domains.

The partial function model is simple and powerful, and most types of collection-oriented operations [27] can be defined as higher order functions operating on partial functions [8, 18]. However, certain operations require explicit information about the function domains. Thus, we consider entities (f, b), the data fields, where f is a function and b is a bound, a set representation that bounds the domain of the corresponding function. We require that the following operations are defined for bounds:

- For each bound, an interpretation as a predicate (or set).
- A predicate classifying each bound as either finite or infinite, depending on whether its set is surely finite or possibly infinite.
- For every bound b defining a finite set, size(b), which yields the size of the set, and enum(b), which is a function enumerating its elements.
- Binary operations ⊓ and ⊔ on bounds ("intersection" and "union").
- The bounds all and nothing, representing the universal and empty set, respectively.

These operations are chosen to support the operations on partial functions that require the domain of the functions, without revealing the inner structure of the bounds. They must have certain properties; see [18].

The theory of data fields also defines φ-abstraction, a syntax for convenient definition of data fields that parallels λ-abstraction for functions. The meaning of φx.t is a data field (λx.t, b) where b provides an upper approximation to the domain of λx.t. The purpose of φ-abstraction is to provide a formal semantics for collection-oriented operations where the bound of the result is implicitly given by the bounds of the operands. Such operations are convenient to use and common in array languages, and the data field model extends them beyond arrays.
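To make these abstract requirements concrete, the following plain Haskell sketch (our own illustration, with invented names such as Bound, contains, allB, and Field) models a bound as a record of the required operations and a data field as a function paired with a bound. It is not the Bounds/Datafield implementation of Data Field Haskell described in Section 3.

-- A bound: a set representation supporting the operations required above.
data Bound a = Bound
  { contains :: a -> Bool   -- interpretation of the bound as a predicate/set
  , isFinite :: Bool        -- True if surely finite, False if possibly infinite
  , enum     :: [a]         -- enumeration of the elements; meaningful only if finite
  }

size :: Bound a -> Int
size = length . enum

-- "Intersection" and "union" on bounds.
meet, join :: Bound a -> Bound a -> Bound a
meet b1 b2 = Bound (\x -> contains b1 x && contains b2 x)
                   (isFinite b1 || isFinite b2)
                   (if isFinite b1 then filter (contains b2) (enum b1)
                                   else filter (contains b1) (enum b2))
join b1 b2 = Bound (\x -> contains b1 x || contains b2 x)
                   (isFinite b1 && isFinite b2)
                   (enum b1 ++ filter (\x -> not (contains b1 x)) (enum b2))

-- The universal bound ("all") and the empty bound ("nothing").
allB, nothingB :: Bound a
allB     = Bound (const True)  False (error "enum: infinite bound")
nothingB = Bound (const False) True  []

-- A data field: a (partial) function together with a bound on its domain.
data Field a b = Field (a -> b) (Bound a)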
3 Data Field Haskell

Data Field Haskell is a Haskell dialect where the arrays have been replaced by an instance of data fields, a variation of the sparse/dense arrays of [17, 18]. The new data types are Datafield a b for sparse/dense array data fields and Bounds a for the corresponding bounds. a must belong to the classes Ix (array index types) and Pord (types with partial ordering, which is convenient when defining certain operations on bounds: see [11]). Pord has the same instances as Ix: thus, possible index types for data fields are the same as for Haskell arrays (integers, characters, enumerations, and single-constructor data types whose components are index types). We will omit qualifications "(Pord a, Ix a) =>" when they are evident.

3.1 Basic Operations on Data Fields

datafield builds data fields from functions and bounds, and bounds provides the bounds of a data field:

datafield :: (a -> b) -> Bounds a -> Datafield a b
bounds    :: Datafield a b -> Bounds a

As for Haskell arrays, the infix operation ! is used for indexing. The constant outofBounds represents the error value, and the predicate isoutofBounds tests for this value.

3.2 Bounds

Data Field Haskell has a rich variety of bounds. They are classified as either finite or infinite. There are a number of operations to construct them:

(<:>) :: a -> a -> Bounds a yields dense bounds, i.e., usual array bounds. For instance, (1,1)<:>(10,20) returns a bound representing the rectangle with lower left corner (1,1) and upper right corner (10,20). Dense bounds are finite.

sparse :: [a] -> Bounds a creates sparse bounds that represent general finite sets. sparse [(1,2),(17,9),(1,2),(42,44)], for instance, returns a sparse bound representing {(1,2), (17,9), (42,44)}. Sparse bounds are also finite.

predicate :: (a -> Bool) -> Bounds a forms predicate bounds. For instance, predicate (\x -> f x /= 0) represents the set where the function f is nonzero. Predicate bounds are classified as infinite.

universe (infinite) represents the universal set (the bound all), and empty (finite) represents the empty set (the bound nothing).

(<*>) :: Bounds a -> Bounds b -> Bounds (a,b) defines product bounds representing Cartesian products. It can be used to create conventional multidimensional array bounds, e.g., (1<:>10)<*>(1<:>20) (which equals (1,1)<:>(10,20)), but also other bounds like (sparse [5,7,13])<*>(1<:>10) and (1<:>10)<*>universe. b1<*>b2 is finite precisely when both b1 and b2 are. In general, prod_n forms n-tuples of bounds; <*> is syntactic sugar for `prod_2`. Some two-dimensional bounds are illustrated in Fig. 1.

[Fig. 1. Some two-dimensional bounds: three product bounds, and a sparse two-dimensional bound.]

3.3 Operations on Bounds

The most basic operations on bounds are join and meet (⊔ and ⊓). They essentially compute the union and intersection, respectively, of their arguments seen as sets (if the arguments are both dense, then join may compute an overapproximation of the union, see Fig. 2). The kind of bound computed depends on the arguments as shown in Table 1. join and meet for product bounds are defined elementwise, i.e., the equation

(bx1<*>by1) `meet` (bx2<*>by2) = (bx1 `meet` bx2)<*>(by1 `meet` by2)

holds for meet, and similarly for join.

[Fig. 2. Join of two one-dimensional dense bounds.]

[Table 1. Result "types" of join and meet as a function of the argument "types": E = empty, U = universe, S = sparse, D = dense, P = predicate, × = product bound. An entry "S/P" in the table for join means that the result is sparse if the product bound is finite, and a predicate otherwise.]
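As a small illustration (accepted by Data Field Haskell, not by standard Haskell; the names denseB, sparseB, maskB, prodB, upper, and f are local bindings invented for this example), the constructors and operations above can be combined as follows:

denseB  = (1,1) <:> (10,20)                 -- dense rectangular bound
sparseB = sparse [(1,2), (17,9), (42,44)]   -- finite sparse bound
maskB   = predicate (\(i,j) -> i <= j)      -- infinite predicate bound
prodB   = (1<:>10) <*> (1<:>20)             -- product bound, equals (1,1)<:>(10,20)

upper   = sparseB `meet` maskB              -- the sparse points with i <= j

f = datafield (\(i,j) -> i + j) denseB      -- a data field; f!(3,4) yields 7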
join and meet are used primarily to define higher level data field constructs. An example is the explicit restriction operator on bounds, <\>. It satisfies the following equation:

(datafield f b1) <\> b2 = datafield f (b2 `meet` b1)

We now see why infinite bounds can make sense: for instance, (datafield f (1<:>n)) <\> predicate p will yield a sparse data field, defined for the points in the range 1..n where p is true. This provides a data field counterpart to array operations that are performed for the indices where a "mask" is true [15].

Some more operations on bounds are:

- finite :: Bounds a -> Bool, which tests bounds for finiteness,
- enumerate :: Bounds a -> [a], which returns the list of elements of the set defined by a finite bound in the following order: for finite non-product bounds, in the order given by the "<" operation in the Ord class, and for product bounds, in the lexicographic order defined by the orders of the components,
- size :: Bounds a -> Int, which gives the number of elements in a finite bound, and
- inBounds :: a -> Bounds a -> Bool, which checks for membership in the set defined by a bound.

3.4 Operations on Finite Data Fields

Sometimes it is desirable to force the evaluation of all elements in a data field. There are, for instance, parallel algorithms whose efficiency depends on the compile-time knowledge of which computations to perform. This is similar to strictness declarations for functions, which sometimes are necessary to ensure efficient execution. To this end, we have defined three data field evaluators, all of type

(Pord a, Ix a, Eval a) => Datafield a b -> Datafield a b

that evaluate their respective arguments to different degrees. hstrictTab, for instance, evaluates all elements in a hyperstrict fashion (i.e., to the innermost constructor).

foldlDf, of type

(Pord a, Ix a, Eval a) => (b -> c -> b) -> b -> (Datafield a c) -> b

is the data field equivalent to foldl for lists. It reduces its data field argument in the order given by the enumeration of its bound. The reduction only includes the values indexed by elements in the domain of the corresponding partial function (note that the bound may overapproximate this domain: see [11] for details). As for lists, there are various versions of data field folds [11]. The operations in this section are only meaningful for finite data fields and will yield a runtime error if applied to an infinite data field.

3.5 Forall-abstraction

Data Field Haskell provides a form of φ-abstraction, with the following syntax (described in the metasyntax of the Haskell report [23]):

forall apat1 ... apatn -> exp

Thus, the syntax is analogous to λ-abstraction in Haskell and includes such features as pattern-matching (which is convenient when defining multidimensional data fields). Type inference works in the same way as for λ-abstraction, although the identifiers being abstracted over must be instances of the Pord and Ix classes. The semantics of forall-abstraction is

forall x -> t = datafield (\x -> t) b

where the bound b is a function of the form of t. The limited space prohibits a detailed account of how b is computed: the exact rules are found in [1, 11]. Here, we give an informal description supported by representative examples.

First, if a!x occurs in a strict position in the body of forall x -> ... then bounds a should constrain the bounds of forall x -> .... Thus,

bounds (forall x -> a!x + b!x + 17) = (bounds a) `meet` (bounds b)
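For instance, elementwise addition and a dot product can be written as follows (again Data Field Haskell rather than standard Haskell; addDf and dotDf are names chosen for this illustration):

-- Elementwise addition: by the rule above, the bound of the result is
-- (bounds a) `meet` (bounds b).
addDf a b = forall x -> a!x + b!x

-- Dot product: fold (+) over the elementwise product. The folded data field
-- has bound (bounds a) `meet` (bounds b), which must be finite (Section 3.4).
dotDf a b = foldlDf (+) 0 (forall x -> a!x * b!x)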
The principle generalises to forall-abstraction over tuples, which should have product bounds where each component constrains the respective variable in the tuple. Thus,

bounds (forall (x,y) -> a!x * b!y) = (bounds a) <*> (bounds b)

so this expression yields the outer product of a and b with the expected bounds.

For conditionals, any of the branches could be taken for any value of x. Thus, the bounds from the branches should be joined. Moreover, the conditional is strict in the condition; thus,

bounds (forall x -> if a!x then b!x else c!x) = (bounds a) `meet` ((bounds b) `join` (bounds c))

Multidimensional arrays are important in array languages, and they often provide convenient syntax to select subarrays from matrices. In order to generalise this feature to data fields, components of product bounds of multidimensional data fields occurring in forall-abstraction can constrain the bound of the abstraction. Thus, if bounds a = b1<*>b2, we have

bounds (forall x -> a!(1,x)) = b2

(selection of row one; a more exact bound would be if (inBounds 1 b1) then b2 else empty, but the current version of Data Field Haskell does not compute this), and

bounds (forall x -> a!(x,x)) = b1 `meet` b2

(main diagonal). This feature can be combined with forall-abstraction over tuples, like

bounds (forall (x,y) -> a!(y,x)) = b2 <*> b1

("data field transpose"). If the bound of a is a sparse multidimensional bound, then the smallest enclosing product bound is first computed and the above then applies.

Finally, we allow translations of bounds w.r.t. linear offsets: e.g., if bounds a = 1<:>5, then

bounds (forall x -> a!(x+1)) = 0<:>4

Sparse bounds are translated similarly, and this feature combines with the others. If none of the previous cases apply (e.g., forall x -> a!(f x)), then the bound universe will result.

The "compute bounds first" evaluation order of forall-abstraction gives data fields a lazy flavour. For instance, one may define a two-dimensional data field with finitely many infinitely long columns; rows are then still finite data fields.

3.6 For-abstraction

for-abstraction provides a convenient syntax to define data fields by cases. It essentially defines a data field from a list of pairs of bounds and expressions and can be thought of as a "parallel case" where the different bounds provide the cases. The syntax is

for pat in { e1 -> e1'; ...; en -> en' }

with semantics

(forall pat -> if inBounds pat (e1) then e1'
               else if ...
               else if inBounds pat (en) then en'
               else outofBounds)
  <\> ((e1) `join` (e2) `join` ... `join` (en))

4 A Simple Example

The limited space only allows a short example; see [1, 12, 19] for more examples. Consider the linear equation system Ax = b, where A is an n × n lower-triangular matrix. Equation (1) gives the classical forward-solving algorithm for computing x:

x_i = \frac{b_i - \sum_{j=1}^{i-1} a_{ij} x_j}{a_{ii}}, \quad i = 1, \ldots, n   (1)

This algorithm can be more or less directly expressed in Data Field Haskell:

dfSum = foldlDf (+) 0

fsolv a b = forall i ->
  (b!i - dfSum (for j in 1<:>(i-1) -> a!(i,j) * (fsolv a b)!j)) / a!(i,i)

Note how "dfSum (for j in 1<:>(i-1) -> ...)" corresponds to "\sum_{j=1}^{i-1} ...".

What is the bound of fsolv a b? It will be constrained by the bound of b, and the bounds with respect to i derived from dfSum (...) and a!(i,i). If bounds a = b1<*>b2, then the latter bounds are b1 and b1 `meet` b2, respectively, and we obtain

bounds (fsolv a b) = (bounds b) `meet` b1 `meet` b1 `meet` b2

If (bounds b) = b1 = b2 = 1<:>n, then bounds (fsolv a b) = 1<:>n as expected. The bound of the data field being summed over, finally, is given by the constraints on j: thus, it equals 1<:>(i-1) `meet` b2 `meet` bounds (fsolv a b). With bounds b, b1, and b2 as above, this equals 1<:>(i-1).
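To make the recursion concrete, here is a small standard-Haskell model of the same forward substitution, with a lazily built Data.Map standing in for dense data fields over 1<:>n. The function fsolvRef and its assumptions (a defined on {1..n} x {1..n}, b on {1..n}) are ours; this is only a semantic sketch, not how Data Field Haskell evaluates fsolv.

import qualified Data.Map as M

-- Forward substitution: x is defined recursively; laziness ensures that
-- x M.! i only demands x M.! j for j < i, as in the data field version.
fsolvRef :: M.Map (Int, Int) Double -> M.Map Int Double -> M.Map Int Double
fsolvRef a b = x
  where
    n = maximum (M.keys b)
    x = M.fromList
          [ (i, (b M.! i - sum [ a M.! (i, j) * x M.! j | j <- [1 .. i - 1] ])
                / a M.! (i, i))
          | i <- [1 .. n] ]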
Interestingly, the fsolv code above works also for sparse a: a sparse version of a dense matrix can be created with the very generic function sparsify defined below:

sparsify x = x <\> predicate (\i -> x!i /= 0)

If x has a finite bound, then sparsify x will have a finite sparse bound. If fsolv is given a sparse a, the current version of Data Field Haskell will first create the bounds b1 and b2 by projecting bounds a as indicated in Fig. 3, and then the above works as before. Note that this leads to loose approximations: in particular, for each i, the bound for the summed data field really only needs to contain the j in 1<:>(i-1) where a!(i,j) is defined. It is possible to define a more complex scheme for deriving constraints on bounds arising from the use of sparse multidimensional data fields, which yields exactly this; the details can be found in [18]. However, Data Field Haskell does not yet use this scheme.

[Fig. 3. The two one-dimensional projections b1 and b2 of a sparse, two-dimensional bound (bounds a).]

5 Implementation

Our implementation of Data Field Haskell is based on the NHC compiler [25], which implements Haskell v. 1.3. The execution mechanism is graph reduction, which is performed by a variant of the G-machine. Our implementation consists of:

- modifications to the front-end in order to parse and type-check forall- and for-abstractions,
- automatic derivation of instances for the new type class Pord, and for the Eval class, which has been slightly modified [11],
- a program transformation of intermediate code with forall- and for-abstractions into intermediate code without forall- and for-abstractions,
- the abstract data types for Datafield and Bounds, implemented in Haskell, and
- simple exception handling (used to implement outofBounds), implemented mostly by modifications to the back-end.

Portability and development time were deemed more important than execution speed; thus we have strived to make most of the implementation in Haskell itself. We have not implemented any advanced optimizations.

The front-end modifications are quite straightforward, as is the automatic derivation of instances for the Pord and Eval classes. for- and forall-abstractions are translated into intermediate code that uses the datafield function to build data fields. In this process, calls to join and meet are also introduced. These operations obey the following equations, and we perform the corresponding simplification of expressions for bounds in the translation:

universe `meet` x = x
empty `join` x = x
x `meet` universe = x
x `join` empty = x

The implementation of the abstract data types for data fields and bounds was not entirely straightforward to do in Haskell. The problem is Bounds a. Ideally, one would define this as an algebraic data type with constructors for the different kinds of bounds. However, product bounds do not fit into this scheme since they require that a is a tuple type. It would indeed be possible to define a type PBounds_n a1 ... an = ... that includes product bounds, but this type could then not be used for bounds over non-tuple types, and one would have to use different types for bounds and data fields over tuple types and non-tuple types. Overloading the operations on data fields and bounds through the class system does not work, since the type constructors Bounds and PBounds_n have different arities. Pattern-matching in type declarations, like

data PBounds_n (a1,...,an) = ...

would make it possible to define a constructor class for bounds, but this is not allowed in Haskell.
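The difficulty can be seen from a naive sketch (our own illustration, not the representation actually used in the implementation) of such an algebraic data type:

-- Constructors for the non-product bounds are unproblematic.
data NaiveBounds a
  = EmptyB
  | UniverseB
  | DenseB a a
  | SparseB [a]
  | PredicateB (a -> Bool)
-- A product constructor, however, would need a result type over a pair,
--   ProductB :: NaiveBounds a -> NaiveBounds b -> NaiveBounds (a, b)
-- which an ordinary data declaration for NaiveBounds a cannot express:
-- every constructor must produce a NaiveBounds a for the same parameter a.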
Thus, we have reverted to a low-level implementation of data fields and bounds, done in Haskell but with incorrect types. The implementation has some similarities with how dictionaries are used to implement overloading in Haskell. Coercion functions, which are manually given (incorrect) function types, are used as interfaces between the Datafield and Bounds types and their implementations.

Sparse bounds and tabulated data fields are represented by an abstract data type for sets, which is based on balanced binary trees. If n is the number of elements stored in the tree, then membership tests (and lookups) are done in time O(log n), unions, intersections, enumerations, and folds in time O(n), and the size is calculated in time O(log^2 n).

The production of ordinary error values in Haskell results in immediate termination. outofBounds must be handled in a less strict fashion, since data fields represent partial functions where the bounds may overapproximate the partial function domain, and certain operations should only be performed over the elements in this domain. Thus, it must be possible to just skip occurrences of outofBounds rather than terminating directly when it appears. We wanted the implementation of this to be reasonably efficient. Therefore we have introduced a simple exception handling mechanism. On the Haskell kernel level a function handle is introduced that adheres to the following:

handle x y = y   -- if x evaluates to outofBounds
handle x y = x   -- otherwise

isoutofBounds can now be defined as:

isoutofBounds x = handle (seq x False) True

handle is implemented by catching exceptions, and outofBounds is implemented by throwing them.

The exception handling was implemented by modifying the G-machine of NHC. The basic G-machine, as described in [24], has four-tuples <S, G, C, D> as states. Here, S is a stack of node names, G is the graph, C is the sequence of G-code being executed, and D is the dump, a stack of pairs of code sequences and stacks. The G-machine of NHC adheres to this scheme, although its instruction set and low-level representations are somewhat different. Our modified G-machine has five-tuples <S, G, C, D, E> as states. The new component E, the exception stack, consists of quadruples (n, S, C, D) of a node name, a stack, a code sequence and a dump. (S, C, D) saves the current state when the handling of an exception is set up, and n points to the node to be evaluated on failure. We also need three new instructions: HANDLE, REMOVEHANDLER, and FAIL. The code generated for outofBounds is simply FAIL, and the code for handle x y is

<code that puts x on the stack>
<code that puts y on the stack>
HANDLE

The idea is to abort the evaluation of x if FAIL is executed, restore the machine state to what it was before the evaluation of x began, and evaluate y. The semantics of the instructions as transitions of the modified G-machine is shown in Figure 4.

[Fig. 4. State transitions for HANDLE, REMOVEHANDLER and FAIL.]

The description above is for exception handling in the basic G-machine. Our actual solution for the G-machine of NHC is slightly different, due to the internal details of this G-machine, but the basic idea is the same. See [11].
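For comparison, the behaviour of outofBounds, handle, and isoutofBounds described above can be modelled in present-day GHC Haskell with ordinary exceptions, roughly as follows. This is only our own model of the intended semantics (handleDF is named so as not to clash with Control.Exception.handle, and unsafePerformIO merely mimics handle's pure type); it is not the G-machine mechanism used in the actual implementation.

import Control.Exception (Exception, throw, evaluate, catch)
import System.IO.Unsafe (unsafePerformIO)

data OutOfBounds = OutOfBounds deriving Show
instance Exception OutOfBounds

outofBounds :: a
outofBounds = throw OutOfBounds              -- "throwing" the exception

-- handleDF x y yields y if evaluating x (to WHNF) raises OutOfBounds,
-- and x otherwise.
handleDF :: a -> a -> a
handleDF x y = unsafePerformIO
  (evaluate x `catch` (\OutOfBounds -> return y))

isoutofBounds :: a -> Bool
isoutofBounds x = handleDF (x `seq` False) True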
6 Related Work

There is a wealth of collection-oriented languages and it is impossible to give a full account here. An excellent survey of collection-oriented languages up to around 1990 is found in [27]. Array and data parallel languages like Fortran 90, HPF [15], and *Lisp [29] have been important sources of inspiration for Data Field Haskell. The language closest to Data Field Haskell is probably FIDIL [26], whose implicit intersection rule corresponds to the propagation of bounds from strict positions below a forall-abstraction. The arrays in FIDIL resemble data fields also in other respects; for instance, they can have a wider variety of shapes than traditional array bounds.

Examples of functional data parallel and array languages are Connection Machine Lisp [28], Id [4], Sisal [6], NESL [2], Data Parallel Haskell [10], and pH [21]. These languages are intended for direct parallel implementation, whereas Data Field Haskell targets collection-oriented programming in general, with more emphasis on expressiveness than efficiency. Haskell itself [23] is to some extent collection-oriented through its set of collective list operations, and it has been suggested for data parallel programming [22]. FISh [13] is an imperative array language, which shares some features with Data Field Haskell such as advanced polymorphism. It is, however, restricted to regular arrays and certain recursion patterns, which enables the generation of good code but makes it less suitable for specification of sparse or dynamic algorithms. A survey of the research in parallel functional programming is found in [9].

"Bulk types", like the ones provided by the STL C++ library [20], provide generic collection-orientation and are similar in this respect to data fields. Peyton Jones [14] has used the class system of Haskell to define bulk types. Bulk types do not provide any particular support for multidimensional structures, and there is no counterpart to forall-abstraction and implicit derivation of bounds for expressions.

7 Conclusions and Further Research

We have defined and implemented Data Field Haskell, a Haskell dialect where data fields replace arrays. Data fields are designed with the abstract view of indexed structures as partial functions in mind. This leads to the view of bounds as set representations, and to the design of forall-abstraction, which is inspired by λ-abstraction. The intention has been to create a language that supports collection-oriented programming at a very high level. Although our initial inspiration comes from array and data parallel programming, we believe that the data field concept is general enough to support collection-oriented programming in a variety of applications.

Data Field Haskell is designed for expressiveness rather than speed. We believe this is the right place to start, and then investigate how restricted sublanguages can be given an efficient implementation and how performance-enhancing features like mutable data fields could be introduced. Parallel implementations are also certainly possible. The efficiency of our current implementation can also be greatly improved. We have furthermore found some cases of forall-abstraction where it would be natural to have a tighter bound. We plan to upgrade our implementation to Haskell 98: in this process, we may fix some of the current deficiencies.

Another desirable feature is elemental intrinsics overloading, which refers to the ability in some array languages to apply certain "scalar" operators to arrays, with the meaning that the operator is applied to each element.
For data fields, it would be natural to resolve this overloading into forall-expressions, e.g.,

a+b  →  forall x -> a!x + b!x

provided that a and b have the proper data field type. To some extent this is possible to do within the class system of Haskell, but the resulting overloading has certain restrictions and is also likely to lead to inefficiencies. We are investigating another scheme for elemental intrinsics overloading that is less restricted, but it is still only defined for explicitly typed languages [30]. An obvious goal is to extend this scheme to implicitly typed languages.

The low-level representation of data fields and bounds is somewhat unsatisfactory, since it hurts the portability of the implementation. If Haskell's algebraic type declarations allowed pattern matching on type parameters, then it would be possible to define classes for bounds and data fields. We could then do away with the low-level representations. This would also make it possible for users to define their own types of bounds. The formal data field model [18] was specifically designed to support the development of abstract data types for bounds and data fields, and the ability to define new types of bounds would be an important enhancement of the language.

References

1. Data Field Haskell homepage. http://www.it.kth.se/labs/paradis/dfh/.
2. Guy E. Blelloch. Programming parallel algorithms. Comm. ACM, 39(3), March 1996.
3. Walter S. Brainerd, Charles H. Goldberg, and Jeanne C. Adams. Programmer's Guide to FORTRAN 90. Programming Languages. McGraw-Hill, 1990.
4. Kattamuri Ekanadham. A perspective on Id. In Boleslaw K. Szymanski, editor, Parallel Functional Languages and Compilers, chapter 6, pages 197-253. Addison-Wesley, 1991.
5. A. D. Falkoff and K. E. Iverson. The Design of APL. IBM Journal of Research and Development, pages 324-333, July 1973.
6. John T. Feo, David C. Cann, and Rodney R. Oldehoeft. A report on the Sisal language project. J. Parallel Distrib. Comput., 10:349-366, 1990.
7. Tom R. Halfhill. Sun reveals secrets of "magic". Microprocessor Report, pages 13-17, August 1999.
8. Per Hammarlund and Björn Lisper. On the relation between functional and data parallel programming languages. In Proc. Sixth Conference on Functional Programming Languages and Computer Architecture, pages 210-222. ACM Press, June 1993.
9. Kevin Hammond and Greg Michaelson, editors. Research Directions in Parallel Functional Programming. Springer-Verlag, 1999.
10. Jonathan M. D. Hill. Data Parallel Haskell: Mixing old and new glue. Tech. Rep. 611, Queen Mary and Westfield College, December 1992.
11. Jonas Holmerin. Implementing data fields in Haskell. Technical Report TRITA-IT R 99:04, Dept. of Teleinformatics, KTH, Stockholm, November 1999. ftp://ftp.it.kth.se/Reports/paradis/DFH-report.ps.gz.
12. Jonas Holmerin and Björn Lisper. Development of parallel algorithms in Data Field Haskell. Accepted to Euro-Par 2000, 2000.
13. C. Barry Jay and P. A. Steckler. The functional imperative: shape! In Chris Hankin, editor, Proc. 7th European Symposium on Programming, volume 1381 of Lecture Notes in Comput. Sci., pages 139-153, Lisbon, Portugal, March 1998. Springer-Verlag.
14. Simon Peyton Jones. Bulk types with class. In Electronic Proceedings of the 1996 Glasgow Functional Programming Workshop, Ullapool, July 1996.
15. Charles H. Koelbel, David B. Loveman, Robert S. Schreiber, Guy L. Steele, Jr., and Mary E. Zosel. The High Performance Fortran Handbook. Scientific and Engineering Computation. MIT Press, Cambridge, MA, 1994.
16. Björn Lisper. Data parallelism and functional programming. In Guy-René Perrin and Alain Darte, editors, The Data Parallel Programming Model: Foundations, HPF Realization, and Scientific Applications, volume 1132 of Lecture Notes in Comput. Sci., pages 220-251, Les Menuires, France, March 1996. Springer-Verlag.
17. Björn Lisper. Data fields. In Proc. Workshop on Generic Programming, Marstrand, Sweden, June 1998. http://wsinwp01.win.tue.nl:1234/WGPProceedings/.
18. Björn Lisper and Per Hammarlund. The data field model. Submitted. Preliminary version available as Tech. Rep. TRITA-IT R 99:02, Dept. of Teleinformatics, KTH, Stockholm, 2000.
19. Björn Lisper and Jonas Holmerin. Development and verification of parallel algorithms in the data field model. In Sergei Gorlatch and Christian Lengauer, editors, Proc. 2nd Int. Workshop on Constructive Methods for Parallel Programming, pages 115-130, Ponte de Lima, Portugal, July 2000.
20. David R. Musser and Atul Saini. STL Tutorial and Reference Guide. Addison-Wesley, Reading, MA, 1996.
21. Rishiyur S. Nikhil, Arvind, James E. Hicks, Shail Aditya, Lennart Augustsson, Jan-Willem Maessen, and Y. Zhou. pH language reference manual, version 1.0. Technical Report CSG-Memo-369, Massachusetts Institute of Technology, Laboratory for Computer Science, January 1995.
22. John T. O'Donnell. Data parallelism. In Hammond and Michaelson [9], chapter 7, pages 191-206.
23. John Peterson, Kevin Hammond, Lennart Augustsson, Brian Boutel, Warren Burton, Joseph Fasel, Andrew D. Gordon, John Hughes, Paul Hudak, Thomas Johnsson, Mark Jones, Erik Meijer, Simon L. Peyton Jones, Alastair Reid, and Philip Wadler. Report on the programming language Haskell: A non-strict purely functional language, version 1.4, April 1997. http://www.haskell.org/definition/.
24. Simon L. Peyton Jones. The Implementation of Functional Programming Languages. Prentice-Hall International Series in Computer Science. Prentice Hall, 1987.
25. Niklas Röjemo. Garbage Collection, and Memory Efficiency, in Lazy Functional Languages. PhD thesis, Department of Computing Science, Chalmers University of Technology, Gothenburg, Sweden, 1995.
26. Luigi Semenzato and Paul Hilfinger. Arrays in FIDIL. In Lenore M. R. Mullin, Michael Jenkins, Gaëtan Hains, Robert Bernecky, and Guang Gao, editors, Arrays, Functional Languages, and Parallel Systems, chapter 10, pages 155-169. Kluwer Academic Publishers, Boston, 1991.
27. Jay M. Sipelstein and Guy E. Blelloch. Collection-oriented languages. Proc. IEEE, 79(4):504-523, April 1991.
28. Guy L. Steele and W. D. Hillis. Connection Machine LISP: Fine grained parallel symbolic programming. In Proc. 1986 ACM Conference on LISP and Functional Programming, pages 279-297, Cambridge, MA, 1986. ACM.
29. Thinking Machines Corporation, Cambridge, MA. Getting Started in *Lisp, June 1991.
30. Claes Thornberg. Towards Polymorphic Type Inference with Elemental Function Overloading. Licentiate thesis, Dept. of Teleinformatics, KTH, Stockholm, May 1999. Research Report TRITA-IT R 99:03.