A Fast Fourier Transform Compiler

Matteo Frigo
MIT Laboratory for Computer Science
545 Technology Square NE43-203
Cambridge, MA 02139

February 16, 1999
Abstract

The FFTW library for computing the discrete Fourier transform (DFT) has gained a wide acceptance in both academia and industry, because it provides excellent performance on a variety of machines (even competitive with or faster than equivalent libraries supplied by vendors). In FFTW, most of the performance-critical code was generated automatically by a special-purpose compiler, called genfft, that outputs C code. Written in Objective Caml, genfft can produce DFT programs for any input length, and it can specialize the DFT program for the common case where the input data are real instead of complex. Unexpectedly, genfft "discovered" algorithms that were previously unknown, and it was able to reduce the arithmetic complexity of some other existing algorithms. This paper describes the internals of this special-purpose compiler in some detail, and it argues that a specialized compiler is a valuable tool.

1 Introduction

Recently, Steven G. Johnson and I released Version 2.0 of the FFTW library [FJ98, FJ], a comprehensive collection of fast C routines for computing the discrete Fourier transform (DFT) in one or more dimensions, of both real and complex data, and of arbitrary input size. The DFT [DV90] is one of the most important computational problems, and many real-world applications require that the transform be computed as quickly as possible. FFTW is one of the fastest DFT programs available (see Figures 1 and 2) because of two unique features. First, FFTW automatically adapts the computation to the hardware. Second, the inner loop of FFTW
(which amounts to 95% of the total code) was generated automatically by a special-purpose compiler written in Objective Caml [Ler98]. This paper explains how this compiler works.

FFTW does not implement a single DFT algorithm, but it is structured as a library of codelets: sequences of C code that can be composed in many ways. In FFTW's lingo, a composition of codelets is called a plan. You can imagine the plan as a sort of bytecode that dictates which codelets should be executed in what order. (In the current FFTW implementation, however, a plan has a tree structure.) The precise plan used by FFTW depends on the size of the input (where "the input" is an array of complex numbers), and on which codelets happen to run fast on the underlying hardware. The user need not choose a plan by hand, however, because FFTW chooses the fastest plan automatically. The machinery required for this choice is described elsewhere [FJ97, FJ98].

Codelets form the computational kernel of FFTW. You can imagine a codelet as a fragment of C code that computes a Fourier transform of a fixed size (say, 16, or 19).[1] FFTW's codelets are produced automatically by the FFTW codelet generator, unimaginatively called genfft. genfft is an unusual special-purpose compiler. While a normal compiler accepts C code (say) and outputs numbers, genfft inputs the single integer n (the size of the transform) and outputs C code. The codelet generator contains optimizations that are advantageous for DFT programs but not appropriate for a general compiler, and conversely, it does not contain optimizations that are not required for the DFT programs it generates (for example loop unrolling).

genfft operates in four phases.

1. In the creation phase, genfft produces a directed acyclic graph (dag) of the codelet, according to some well-known algorithm for the DFT [DV90]. The generator contains many such algorithms and it applies the most appropriate.

2. In the simplifier, genfft applies local rewriting rules to each node of the dag, in order to simplify it. In traditional compiler terminology, this phase performs algebraic transformations and common-subexpression elimination, but it also performs other transformations that are specific to the DFT. For example, it turns out that if all floating-point constants are made positive, the generated code runs faster. (See Section 5.) Another important transformation is dag transposition, which derives from the theory of linear networks [CO75]. Moreover, besides noticing common subexpressions, the simplifier also attempts to create them. The simplifier is written in monadic style [Wad97], which allowed me to deal with the dag as if it were a tree, making the implementation much easier.

3. In the scheduler, genfft produces a topological sort of the dag (a "schedule") that, for transforms of size 2^k, provably minimizes the asymptotic number of register spills, no matter how many registers the target machine has. This truly remarkable fact can be derived from the analysis of the red-blue pebbling game of Hong and Kung [HK81], as we shall see in Section 6. For transforms of other sizes the scheduling strategy is no longer provably good, but it still works well in practice. Again, the scheduler depends heavily on the topological structure of DFT dags, and it would not be appropriate in a general-purpose compiler.

4. Finally, the schedule is unparsed to C. (It would be easy to produce FORTRAN or other languages by changing the unparser.) The unparser is rather obvious and uninteresting, except for one subtlety discussed in Section 7.

Although the creation phase uses algorithms that have been known for several years, the output of genfft is at times completely unexpected. For example, for a complex transform of size 13, the generator employs an algorithm due to Rader, in the form presented by Tolimieri and others [TAL97]. In its most sophisticated variant, this algorithm performs 214 real (floating-point) additions and 76 real multiplications. (See [TAL97, Page 161].) The generated code in FFTW for the same algorithm, however, contains only 176 real additions and 68 real multiplications, because genfft found certain simplifications that the authors of [TAL97] did not notice.[2]

The generator specializes the dag automatically for the case where the input data are real, which occurs frequently in applications. This specialization is nontrivial, and in the past the design of an efficient real DFT algorithm required a serious effort that was well worth a publication [SJHB87]. genfft, however, automatically derives real DFT programs from the complex algorithms, and the resulting programs have the same arithmetic complexity as those discussed by [SJHB87, Table II].[3] The generator also produces real variants of the Rader's algorithm mentioned above, which to my knowledge do not appear anywhere in the literature.

Footnote 1: In the actual FFTW system, some codelets perform more tasks, however. For the purposes of this paper, we consider only the generation of transforms of a fixed size.

Footnote 2: Buried somewhere in the computation dag generated by the algorithm are statements that the generator simplifies, provided the intermediate results are not needed elsewhere. Incidentally, [SB96] reports an algorithm with 188 additions and 40 multiplications, using a more involved DFT algorithm that I have not implemented yet. To my knowledge, the program generated by genfft performs the lowest known number of additions for this problem.

Footnote 3: In fact, genfft saves a few operations in certain cases.

The author was supported in part by the Defense Advanced Research Projects Agency (DARPA) under Grant F30602-97-1-0270, and by a Digital Equipment Corporation fellowship.

To appear in Proceedings of the 1999 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), Atlanta, Georgia, May 1999.

Figure 1: Graph of the performance of FFTW versus Sun's Performance Library on a 167 MHz UltraSPARC processor in single precision. The graph plots the speed in "mflops" (higher is better) versus the size of the transform. This figure shows sizes that are powers of two, while Figure 2 shows other sizes that can be factored into powers of 2, 3, 5, and 7. This distinction is important because the DFT algorithm depends on the factorization of the size, and most implementations of the DFT are optimized for the case of powers of two. See [FJ97] for additional experimental results. FFTW was compiled with Sun's C compiler (WorkShop Compilers 4.2 30 Oct 1996 C 4.2). [Plot not reproduced.]

Figure 2: See caption of Figure 1. [Plot not reproduced.]
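The four phases described above form a straight-line pipeline. The following toy OCaml rendering is only an illustration of that structure: the phase names come from the paper, but every type, signature, and function body below is a stand-in invented for this sketch, not genfft's code.

```ocaml
(* Toy rendering of genfft's four-phase pipeline. All types and bodies
   here are invented stand-ins; only the phase names are the paper's. *)
type dag = Node of string * dag list       (* stand-in for the codelet dag *)
type schedule = string list                (* a topological order of nodes *)

let create (n : int) : dag =               (* phase 1: build the dag *)
  Node (Printf.sprintf "dft_%d" n, [])

let simplify (d : dag) : dag =             (* phase 2: local rewriting rules *)
  d

let schedule_of (Node (s, _) : dag) : schedule =  (* phase 3: topological sort *)
  [s]

let unparse (sched : schedule) : string =  (* phase 4: emit C *)
  String.concat "\n" (List.map (fun s -> "/* " ^ s ^ " */") sched)

(* The whole generator is the composition of the four phases. *)
let codelet n = unparse (schedule_of (simplify (create n)))
```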
I found a special-purpose compiler such as the FFTW codelet generator to be a valuable tool, for a variety of reasons that I now discuss briefly.

Performance was the main goal of the FFTW project, and it could not have been achieved without genfft. For example, the codelet that performs a DFT of size 64 is used routinely by FFTW on the Alpha processor. The codelet is about twice as fast as Digital's DXML library on the same machine. The codelet consists of about 2400 lines of code, including 912 additions and 248 multiplications. Writing such a program by hand would be a formidable task for any programmer. At least for the DFT problem, these long sequences of straight-line code seem to be necessary in order to take full advantage of large CPU register sets and the scheduling capabilities of C compilers.

The generator is effective because it can apply problem-specific code improvements. For example, the scheduler is effective only for DFT dags, and it would perform poorly for other computations. Moreover, the simplifier performs certain improvements that depend on the DFT being a linear transformation.

Rapid turnaround was essential to achieve the performance goals. For example, the scheduler described in Section 6 is the result of a trial-and-error process in search of the best result, since the schedule of a codelet interacts with C compilers (in particular, with register allocators) in nonobvious ways. I could usually implement a new idea and regenerate the whole system within minutes, until I found the solution described in this paper.

Achieving correctness has been surprisingly easy. The DFT algorithms in genfft are encoded straightforwardly using a high-level language. The simplification phase transforms this high-level algorithm into optimized code by applying simple algebraic rules that are easy to verify. In the rare cases during development when the generator contained a bug, the output was completely incorrect, making the bug manifest.

Finally, genfft derived some new algorithms, as in the example discussed above. While this paper does not focus on these algorithms per se, they are of independent theoretical and practical interest in the Digital Signal Processing community.

This paper, of necessity, brings together concepts from both the Programming Languages and the Signal Processing literatures. The paper, however, is biased towards a reader familiar with compilers [ASU86], programming languages, and monads [Wad97], while the required signal-processing knowledge is kept to the bare minimum. (For example, I am not describing some advanced number-theoretical DFT algorithms used by the generator.) Readers unfamiliar with the discrete Fourier transform, however, are encouraged to read the good tutorial by Duhamel and Vetterli [DV90].

The rest of the paper is organized as follows. Section 2 gives some background on the discrete Fourier transform and on algorithms for computing it. Section 3 overviews related work on automatic generation of DFT programs. The remaining sections follow the evolution of a codelet within genfft. Section 4 describes what the codelet dag looks like and how it is constructed. Section 5 presents the dag simplifier. Section 6 describes the scheduler and proves that it minimizes the number of transfers between memory and registers. Section 7 discusses some pragmatic aspects of genfft, such as running time and memory requirements, and it discusses the interaction of genfft's output with C compilers.

2 Background

This section defines the discrete Fourier transform (DFT), and mentions the most common algorithms to compute it.

Let X be an array of n complex numbers. The (forward) discrete Fourier transform of X is the array Y given by

    Y[i] = Σ_{j=0}^{n−1} X[j] ω_n^{−ij},    (1)

where ω_n = e^{2π√−1/n} is a primitive nth root of unity, and 0 ≤ i < n. In case X is a real vector, the transform has the hermitian symmetry

    Y[n − i] = Y*[i],

where Y*[i] is the complex conjugate of Y[i].

The backward DFT flips the sign at the exponent of ω_n, and it is defined in the following equation:

    Y[i] = Σ_{j=0}^{n−1} X[j] ω_n^{ij}.    (2)

The backward transform is the "scaled inverse" of the forward DFT, in the sense that computing the backward transform of the forward transform yields the original array multiplied by n.

If n can be factored into n = n1 n2, Equation (1) can be rewritten as follows. Let j = j1 n2 + j2, and i = i1 + i2 n1. We then have,

    Y[i1 + i2 n1] =
      Σ_{j2=0}^{n2−1} [ ( Σ_{j1=0}^{n1−1} X[j1 n2 + j2] ω_{n1}^{−i1 j1} ) ω_n^{−i1 j2} ] ω_{n2}^{−i2 j2}.    (3)

This formula yields the Cooley-Tukey fast Fourier transform algorithm (FFT) [CT65].
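To make Equations (1) and (3) concrete, here is a small numerical check (not from the paper; the function names dft, cooley_tukey, and omega are mine) that the factored Cooley-Tukey form agrees with the direct definition for n = 6 = 2 × 3:

```ocaml
(* Sketch: Equation (1) directly, and the Cooley-Tukey identity (3)
   for n = n1*n2. Function names are illustrative, not genfft's. *)
open Complex

let pi = 4.0 *. atan 1.0

(* omega n k = exp(2*pi*i*k/n); omega n (-(i*j)) is w_n^{-ij} *)
let omega n k =
  let a = 2.0 *. pi *. float_of_int k /. float_of_int n in
  { re = cos a; im = sin a }

(* Equation (1): Y[i] = sum_j X[j] w_n^{-ij} *)
let dft x =
  let n = Array.length x in
  Array.init n (fun i ->
    let acc = ref zero in
    for j = 0 to n - 1 do
      acc := add !acc (mul x.(j) (omega n (-(i * j))))
    done;
    !acc)

(* Equation (3): split n = n1*n2, with j = j1*n2 + j2 and i = i1 + i2*n1 *)
let cooley_tukey n1 n2 x =
  let n = n1 * n2 in
  Array.init n (fun i ->
    let i1 = i mod n1 and i2 = i / n1 in
    let acc = ref zero in
    for j2 = 0 to n2 - 1 do
      let inner = ref zero in
      for j1 = 0 to n1 - 1 do
        inner := add !inner (mul x.(j1 * n2 + j2) (omega n1 (-(i1 * j1))))
      done;
      (* twiddle factor w_n^{-i1*j2}, then the outer transform of size n2 *)
      acc := add !acc
        (mul (mul !inner (omega n (-(i1 * j2)))) (omega n2 (-(i2 * j2))))
    done;
    !acc)

let () =
  let x = Array.init 6 (fun j ->
    { re = float_of_int j; im = 0.1 *. float_of_int j }) in
  let y1 = dft x and y2 = cooley_tukey 2 3 x in
  Array.iteri (fun i y -> assert (norm (sub y y2.(i)) < 1e-9)) y1
```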
The algorithm computes n2 transforms of size n1 (the inner sum), it multiplies the result by the so-called twiddle factors ω_n^{−i1 j2}, and finally it computes n1 transforms of size n2 (the outer sum). If gcd(n1, n2) = 1, the prime factor algorithm can be applied, which avoids the multiplications by the twiddle factors at the expense of a more involved computation of indices. (See [OS89, page 619].) If n is a multiple of 4, the split-radix algorithm [DV90] can save some operations with respect to Cooley-Tukey. If n is prime, genfft uses either Equation (1) directly, or Rader's algorithm [Rad68], which converts the transform into a circular convolution of size n − 1. The circular convolution can be computed recursively using two Fourier transforms, or by means of a clever technique due to Winograd [Win78] (genfft does not employ this technique yet, however). Other algorithms are known for prime sizes, and this is still the subject of active research. See [TAL97] for a recent compendium on the topic.

Any algorithm for the forward DFT can be readily adapted to compute the backward DFT, the difference being that certain complex constants become conjugate. For the purposes of this paper, we do not distinguish between forward and backward transform, and we simply refer to both as the "complex DFT".

In the case when the input is purely real, the transform can be computed with roughly half the number of operations of the complex case, and the hermitian output requires half the storage of a complex array of the same size. In general, keeping track of the hermitian symmetry throughout the recursion is nontrivial, however. This bookkeeping is relatively easy for the split-radix algorithm, and it becomes particularly nasty for the prime factor and the Rader algorithms. The topic is discussed in detail in [SJHB87]. In the real transform case, it becomes important to distinguish the forward transform, which takes a real input and produces an hermitian output, from the backward transform, whose input is hermitian and whose output is real, requiring a different algorithm. We refer to these cases as the "real to complex" and "complex to real" DFT, respectively.

In the DFT literature, unlike in most of Computer Science, it is customary to report the exact number of arithmetic operations performed by the various algorithms, instead of their asymptotic complexity. Indeed, the time complexity of all DFT algorithms of interest is O(n log n), and a detailed count of the exact number of operations is usually doable (which by no means implies that the analysis is easy to carry out). It is no problem for me to follow this convention in this paper, since genfft produces an exact arithmetic count anyway.

In the literature, the term FFT ("fast Fourier transform") refers to either the Cooley-Tukey algorithm or to any algorithm for the DFT, depending on the author. In this paper, FFT denotes any algorithm.

3 Related work

Researchers have been generating FFT programs for at least twenty years, possibly to avoid the tedium of getting all the implementation details right by hand. To my knowledge, the first generator of FFT programs was FOURGEN, written by J. A. Maruhn [Mar76]. It was written in PL/I and it generated FORTRAN.[4] FOURGEN is limited to transforms whose size is a power of two.

Footnote 4: Maruhn argues that PL/I is more suited than FORTRAN to this program-generation task, and has the following curious remark: "One peculiar difficulty is that some FORTRAN systems produce an output format for floating-point numbers without the exponent delimiter 'E', and this makes them illegal in FORTRAN statements."

Perez and Takaoka [PT87] present a generator of Pascal programs implementing a prime factor FFT algorithm. This program is limited to complex transforms of size n, where n must be factorable into mutually prime factors drawn from a fixed set of small integers.

Johnson[5] and Burrus [JB83] applied dynamic programming to the automatic design of DFT modules. Selesnick and Burrus [SB96] used a program to generate MATLAB subroutines for DFTs of certain prime sizes. In many cases, these subroutines are the best known in terms of arithmetic complexity.

Footnote 5: Unrelated to Steven G. Johnson, the other author of FFTW.

The EXTENT system by Gupta and others [GHSJ96] generates FORTRAN code in response to an input expressed in a tensor product language. Using the tensor product abstraction one can express concisely a variety of algorithms that includes the FFT and matrix multiplication (including Strassen's algorithm).

Another program generating Haskell FFT subroutines is part of the nofib benchmark suite for Haskell [Par92]. Unlike my program, it is limited in the transform sizes it handles. The program is not documented at all, but apparently it can be traced back to [HV92].

Veldhuizen [Vel95] used a template metaprograms technique to generate C++ programs. The technique exploits the template facility of C++ to force the compiler to perform computations at compile time.

All these systems are restricted to complex transforms, and the FFT algorithm is known a priori. To my knowledge, the FFTW generator is the only one that produces real algorithms, and in fact, it can derive real algorithms by specializing a complex algorithm. Also, my generator is the only one that addresses the problem of scheduling the program efficiently.

4 Creation of the expression dag

This section describes the creation of an expression dag. We first define the node data type, which encodes a directed acyclic graph (dag) of a codelet. We then describe a few ancillary data structures and functions that provide complex arithmetic. Finally, the bulk of the section describes the function fftgen, which actually produces the expression dag.

We start by defining the node data type, which encodes an arithmetic expression dag. Each node of the dag represents an operator, and the node's children represent the operands. This is the same representation as the one generally used in compilers [ASU86, Section 5.2]. A node in the dag can have more than one "parent", in which case the node represents a common subexpression. The definition of node is given in Figure 3, and it is straightforward. A node is either a real number (encoded by the abstract data type Number.number), a load of an input variable, a store of an expression into an output node, the sum of the children nodes, the product of two nodes, or the sign negation of a node. For example, the expression a − b, where a and b are input variables, is represented by Plus [Load a; Uminus (Load b)].

Figure 3: Objective Caml code that defines the node data type, which encodes an expression dag. [Listing not reproduced.]

The Number structure maintains floating-point constants with arbitrarily high precision (currently, 50 decimal digits), in case the user wants to use the quadruple-precision floating-point unit on a processor such as the UltraSPARC. Number is implemented on top of Objective Caml's arbitrary-precision rationals. The Variable structure encodes the input/output nodes of the dag, and the temporary variables of the generated C code. Variables can be considered an abstract data type that is never used explicitly in this paper.

The node data type encodes expressions over real numbers, since the final C output contains only real expressions. For creating the expression dag of the codelet, however, it is convenient to express the algorithms in terms of complex numbers. The generator contains a structure called Complex, which implements complex expressions on top of the node data type, in a straightforward way.[6] The type Complex.expr (not shown) is essentially a pair of nodes.

Footnote 6: One subtlety is that a complex multiplication by a constant can be implemented with either 4 real multiplications and 2 real additions, or 3 real multiplications and 3 real additions [Knu98, Exercise 4.6.4-41]. The current generator uses the former algorithm, since the operation count of the dag is generally dominated by additions. On most CPUs, it is advantageous to move work from the floating-point adder to the multiplier.

We now describe the function fftgen, which creates a dag for a DFT of size n. In the current implementation, fftgen uses one of the following algorithms.

- A split-radix algorithm [DV90], if n is a multiple of 4. Otherwise,
- A prime factor algorithm (as described in [OS89, page 619]), if n factors into n1 n2 with gcd(n1, n2) = 1. Otherwise,
- The Cooley-Tukey FFT algorithm (Equation (3)), if n factors into n1 n2. Otherwise,
- (n is a prime number) Rader's algorithm for transforms of prime length [Rad68]. Otherwise,
- Direct application of the definition of DFT (Equation (1)).

We now look at the operation of fftgen more closely. The first argument to fftgen is the size of the transform. The second argument is a function f with type int -> Complex.expr. The application (f i) returns a complex expression that contains the i-th input. The third argument is either 1 or −1, and it determines the direction of the transform.

Depending on the size of the requested transform, fftgen dispatches one of the algorithms mentioned above. We now discuss how the Cooley-Tukey FFT algorithm is implemented. The implementation of the other algorithms is similar, and it is not shown in this paper.

The Objective Caml code that implements the Cooley-Tukey algorithm can be found in Figure 4. In order to understand the code, recall Equation (3). This equation translates almost verbatim into Objective Caml. With reference to Figure 4, the function application (tmp1 j2) computes the inner sum of Equation (3) for a given value of j2, and it returns a function of i1. (tmp1 is curried over i1, and therefore i1 does not appear explicitly in the definition.) Next, (tmp1 j2 i1) is multiplied by the twiddle factors, yielding tmp2, that is, the expression in square braces in Equation (3). Next, tmp3 computes the outer summation, which is itself a DFT of size n2. (Again, tmp3 is a function of i1 and i2, curried over i2.) In order to obtain the i-th element of the output of the transform, the index i is finally mapped into i1 and i2, and (tmp3 i1 i2) is returned.

Figure 4: Fragment of the FFTW codelet generator that implements the Cooley-Tukey FFT algorithm. The infix operator @* computes the complex product. The function (exp n k) computes the constant exp(2πik/n). [Listing not reproduced.]

Observe that the code in Figure 4 does not actually perform any computation. Instead, it builds a symbolic expression dag that specifies the computation. The other DFT algorithms are implemented in a similar fashion.

At the top level, the generator invokes fftgen with the size n and the direction specified by the user. The function f is set to a function that loads the i-th input variable. Recall now that fftgen returns a function from output indices to complex expressions, where the result of applying it to i is a complex expression that computes the i-th element of the output array. The top level builds a list of Store expressions that store these results into the i-th output variable, for all 0 ≤ i < n. This list of Stores is the codelet dag that forms the input of subsequent phases of the generator.

We conclude this section with a few remarks. According to the description given in this section, fftgen contains no special support for the case where the input is real. This statement is not completely true. In the actual implementation, fftgen maintains certain symmetries explicitly (for example, if the input is real, then the output is known to have hermitian symmetry). These additional constraints do not change the final output, but they speed up the generation process, since they avoid computing and simplifying the same expression twice. For the same reason, the actual implementation memoizes expressions such as tmp1 in Figure 4, so that they are only computed once.[7]

Footnote 7: These performance improvements were important for a user of FFTW who needed a hard-coded transform of size 101, and had not obtained an answer after the generator had run for three days. See Section 7 for more details.

At this stage, the generated dag contains many redundant computations, such as multiplications by 0 or 1, additions of 0, and so forth. fftgen makes no attempt to eliminate these redundancies. Figure 5 shows a possible C translation of a codelet dag at this stage of the generation process.

Figure 5: C translation of a dag for a complex DFT of size 2, as generated by fftgen. Variable declarations have been omitted from the figure. The code contains many common subexpressions, and redundant multiplications by 0 or 1. [Listing not reproduced.]
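Since the listings of Figures 3 and 4 are not recoverable from this copy, the following self-contained sketch reconstructs their flavor from the prose above. The identifier names (cexpr, cexp, tmp1, ...) are approximations, and the dispatch logic is deliberately simplified (it only splits off factors of 2), so this is an illustration of the technique, not genfft's actual code.

```ocaml
(* Reconstruction sketch based on the prose; not genfft's exact code. *)
type node =
  | Num of float            (* the real generator uses Number.number *)
  | Load of string          (* load of an input variable *)
  | Store of string * node  (* store of an expression into an output *)
  | Plus of node list       (* sum of the children nodes *)
  | Times of node * node    (* product of two nodes *)
  | Uminus of node          (* sign negation *)

(* A complex expression is essentially a pair of real nodes. *)
type cexpr = { re : node; im : node }

let ( @+ ) a b = { re = Plus [a.re; b.re]; im = Plus [a.im; b.im] }
let ( @* ) a b =            (* complex product, like Figure 4's @* *)
  { re = Plus [Times (a.re, b.re); Uminus (Times (a.im, b.im))];
    im = Plus [Times (a.re, b.im); Times (a.im, b.re)] }

let pi = 4.0 *. atan 1.0
let cexp n k =              (* the constant exp(2*pi*i*k/n) *)
  let a = 2.0 *. pi *. float_of_int k /. float_of_int n in
  { re = Num (cos a); im = Num (sin a) }

let czero = { re = Num 0.0; im = Num 0.0 }

(* fftgen builds a function from output index to complex expression.
   sign = -1 gives the forward transform of Equation (1). *)
let rec fftgen n (input : int -> cexpr) sign : int -> cexpr =
  if n > 2 && n mod 2 = 0 then cooley_tukey 2 (n / 2) input sign
  else direct n input sign  (* the real fftgen picks among more algorithms *)

and cooley_tukey n1 n2 input sign =
  let n = n1 * n2 in
  let tmp1 j2 = fftgen n1 (fun j1 -> input (j1 * n2 + j2)) sign in
  let tmp2 i1 j2 = cexp n (sign * i1 * j2) @* tmp1 j2 i1 in
  let tmp3 i1 = fftgen n2 (tmp2 i1) sign in
  fun i -> tmp3 (i mod n1) (i / n1)

and direct n input sign i =
  let acc = ref czero in
  for j = 0 to n - 1 do
    acc := !acc @+ (cexp n (sign * i * j) @* input j)
  done;
  !acc

(* A tiny evaluator, so the symbolic dag can be checked numerically. *)
let rec eval env = function
  | Num x -> x
  | Load v -> env v
  | Store (_, e) -> eval env e
  | Plus l -> List.fold_left (fun a e -> a +. eval env e) 0.0 l
  | Times (a, b) -> eval env a *. eval env b
  | Uminus e -> -.(eval env e)
```

Note that, as the text observes, fftgen here performs no arithmetic at all: applying the resulting function to an output index merely builds a symbolic dag, which still contains redundant multiplications by 0 and 1 until the simplifier runs.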
5 The simplifier

In this section, we present FFTW's simplifier, which transforms code such as the one in Figure 5 into simpler code. The simplifier transforms a dag of nodes (see Section 4) into another dag of nodes. We first discuss how the simplifier transforms the dag, and then how the simplifier is actually implemented. The implementation benefits heavily from the use of monads.

5.1 What the simplifier does

We first illustrate the improvements applied by the simplifier to the dag. The simplifier traverses the dag bottom-up, and it applies a series of local improvements to every node. For explanation purposes, these improvements can be subdivided into three categories: algebraic transformations, common-subexpression elimination, and DFT-specific improvements. Since the first two kinds are well-known [ASU86], I just discuss them briefly. We then consider the third kind in more detail.

Algebraic transformations reduce the arithmetic complexity of the dag. Like a traditional compiler, the simplifier performs constant folding, and it simplifies multiplications by 0, 1, or −1, and additions of 0. Moreover, the simplifier applies the distributive property systematically. Expressions of the form ax + bx are transformed into (a + b)x. In the same way, expressions of the form k1 (k2 x) are transformed into (k1 k2) x. In general, these two transformations have the potential of destroying common subexpressions, and they might increase the operation count. This does not appear to be the case for all DFT dags I have studied, although I do not fully understand the reason for this phenomenon.

Common-subexpression elimination is also applied systematically. Not only does the simplifier eliminate common subexpressions, it also attempts to create new ones. For example, it is common for a DFT dag (especially in the case of real input) to contain both a − b and b − a as subexpressions, for some a and b. The simplifier converts both expressions to either a − b and −(a − b), or b − a and −(b − a), depending on which expression is encountered first during the dag traversal.

The simplifier applies two kinds of DFT-specific improvements. First, all numeric constants are made positive, possibly propagating a minus sign to other nodes of the dag. This curious transformation is effective because constants generally appear in pairs c and −c in a DFT dag.
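The algebraic rules described above can be sketched concretely. The following minimal, self-contained rewriter is an illustration only: genfft's simplifier is monadic and memoizes nodes so that the dag is not duplicated during traversal, and its rule set is larger than the subset shown here (constant folding; multiplications by 0, 1, and −1; additions of 0; collapsing nested constant products).

```ocaml
(* Toy bottom-up rewriter for a cut-down expression type; the type and
   rules are illustrative stand-ins, not genfft's simplifier. *)
type node =
  | Num of float
  | Load of string
  | Plus of node * node
  | Times of node * node
  | Uminus of node

let rec simplify e =
  match e with
  | Num _ | Load _ -> e
  | Uminus a ->
      (match simplify a with
       | Num x -> Num (-.x)                  (* constant folding *)
       | Uminus b -> b                       (* --x -> x *)
       | a' -> Uminus a')
  | Plus (a, b) ->
      (match simplify a, simplify b with
       | Num x, Num y -> Num (x +. y)        (* constant folding *)
       | Num 0.0, b' -> b'                   (* 0 + x -> x *)
       | a', Num 0.0 -> a'                   (* x + 0 -> x *)
       | a', b' -> Plus (a', b'))
  | Times (a, b) ->
      (match simplify a, simplify b with
       | Num x, Num y -> Num (x *. y)        (* constant folding *)
       | Num 0.0, _ | _, Num 0.0 -> Num 0.0  (* 0 * x -> 0 *)
       | Num 1.0, b' -> b'                   (* 1 * x -> x *)
       | Num x, b' when x = -1.0 -> Uminus b'  (* -1 * x -> -x *)
       | Num x, Times (Num y, c) ->
           Times (Num (x *. y), c)           (* k1*(k2*x) -> (k1*k2)*x *)
       | a', b' -> Times (a', b'))
```

For example, the redundant expression 1 * (a + 0), typical of a freshly generated dag, rewrites to just a.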
To my knowl6} B7 6 ý 4 7 6 8¥5 6 6 } 7 ArCu} s 5 6} @i7 6 f 5 x 7} @s6 5 D} 7 B} 6 8u} ÏÒ)(' îÚ' 5 7 ¦5 2ô0 39¡( 7 6 7 A} 6 ¦5 ts 5 V ä} 6 6 1ô0 E¡( 5 f 9 V x , where is a complex expression that computes the th element of the output array. The top level builds a list of expressions that store into the th output variable, for all . This list s is the codelet dag that forms the input of subseof quent phases of the generator. We conclude this section with a few remarks. According to the description given in this section, contains no special support for the case where the input is real. This statement is not completely true. In the actual implementation, maintains certain symmetries explicitly (for example, if the input is real, then the output is known to have hermitian symmetry). These additional constraints do not change the ﬁnal output, but they speed up the generation process, since they avoid computing and simplifying the same expression twice. For the same reason, the actual implementation memoizes expressions such as in Figure 4, so that they are only computed once.7 At this stage, the generated dag contains many redundant computations, such as multiplications by or , additions of , and so forth. makes no attempt to eliminate these redundancies. Figure 5 shows a possible C translation of a codelet dag at this stage of the generation process. Figure 5: C translation of a dag for a complex DFT of size 2, as generated by . Variable declarations have been omitted from the ﬁgure. The code contains many common subexpression (e.g., and ), and redundant multiplications by or . ×ÙÙ #A¶ É B ÙD¶ É Ù B ³ ¼Ù l&¶ ¼ © × Ù$ Ù ! B #AØ ¶ É ¼ ÙD¶ É Ù B â ¼Ù &¶ ¼ © × $Ù Ù é B #Ø A««¶ É ¼ ÙDC#¶ É Ùå B ³ Ù¼ è l7¶ ¼ © × $Ù Ù " B #Ø A£¶ É ¼ ÙÙè B D«¶ É â Ù¼ å C¶ ¼ © $Ø ¼ ³ Ù! 
ç¶ © $Ø BÉ ·å ¸¼ â Ø AØ ¶ Ù © $Ø B·å É ¸¼ Ø ³ AÙ " lØ ¶ © % BØ ·å É ¸ Ø â AÙ é Ø «¶ © % BØ É·å ¸ ×#£AØå B Ù Ø Û ×#£å Õ B ÙÛ ×#£å Õ B ÙÛ ×#£å Õ B ÙÛ ×#£Û Õ B Ù ×#£Û Õ B Ù© ×#£Û Õ B Ù© ×Ù © #£Û Õ B © Õ BÉå £â ³ Ø¶ ÙÛ £!å B DØ DB ¯ ¨ § BÉå Õ £â » »³ Ø Ø¶ ÙÛ £!å B DØ DB ¯ ¨¦£ §¥ Ø B ÉÕ å » »³ Á ¼ £Û B DØ DB ¯ ¨ Ø¶ Ù § B © #Õ å » »³ Ø É ¼ Ø¶ Ù £Û B DØ DB ¯ ¨¦£ §¥ Ø ¶ 8 © § » ¸ " D!°#Õ »¨wÁ¶ B » ¶8 § ¸ ! D!° Ø ¨¶ ¼ B » D!° Ø ¨¦¤¶ ¼ B ¶8 §¥£ ¸ » D!° Á ¨¦¤«¶ ¼ B ¶8 Ø §¥£ ¸ é » D!° Á¨«¶ ¼ B ¶8 Ø § ¸ è » ¶8 § ¸ D!° Ø ¨¶ ¼ B » D!° Ø ¨¦¤¶ ¼ B ¶8 §¥£ ¸ » D!° Á ¨¦¤í#¶ ¼ B ¶8 Ø §¥£ ¸ å Ø Á ¼ » ò Úñ Iz ßp Ù° A{B DB ¯ ¶ » »Ø å° è° å 7cÚ¶ B ¼ ÙÙå8 ð AC#·° C#D¯ ° «¶ ·A·U8 @ Ùå8 ´ é B ãâ ° Ø ¼Ø ¼ » Ø !° 8 85 D!° C7«¶ B «E7ADAwç7Í«¶ wCDÄ Ùå° è è8 865B@@ ¸ å° é B B6 8 å ° è æ å B É¼ ï Ø Ù è æ É å ° É 8 5 !°7ËAç#¶ «A·7{Ct° 8¼¶Ü CC6 ¼ ¸ è æ å ËÍè B B 6 ÈAl7îØ «¶ wCDÄ ° 8!°U8D5t° DAw«l·æ B D!{DÍ8¼ @ ÙÙèæ ³ è8 É å ¶8° ãâ åæ Ë å 8 8 6 5 B» @ @ ¸ è æ å B» B 6 #Ø E7ADAwÈAí#¶ wØ CDÄ ¸ 85 Dt° B A!{« #çC6 E«CD0D977ÈCDÄ ¶8° è8 å 8 µ Bëµ6Ä¯¯ê ¼ê6½ B6 Ë» ì 0» ö Ï ôó tõ!!Ò x V 865B@ 97DDA@ y Èx 6´¯ 0C«8 6¯ 7½C7B Ç ÙA°{B ¶CB ¯ »Ø ûú ùø D£2¥÷ 865B@ E7ADA@ » 6´¯ 0C«8
¢ ÿý ¡2þü
865B@ 97ADD@ 6½¯ 0C7B B CB ¯ ¶ »» x Ç Figure 6: Illustration of “network” transposition. Each graph deﬁnes an algorithm for computing a linear function. These graphs are called linear networks, and they can be interpreted as follows. Data are ﬂowing in the network, from input nodes to output nodes. An edge multiplies data by some constant (possibly ), and each node is understood to compute the sum of all incoming edges. In this example, the network on the left computes and . The network on the right is the “transposed” form of the ﬁrst network, obtained by reversing all edges. The new netand . work computes the linear function In general, if a network computes for some matrix , . (See [CO75] for a the transposed network computes proof.) These linear networks are similar to but not the same as expression dags normally used in compilers and in , because in the latter case the nodes and not the edges perform computation. A network can be easily transformed into an expression dag, however. The converse is not true in general, but it is true for DFT dags where all multiplications are by constants. 5.2 Implementation of the simpliﬁer The simpliﬁer is written in monadic style [Wad97]. The monad performs two important functions. First, it allows the simpliﬁer to treat the expression dag as if it were a tree, which makes the implementation considerably easier. Second, the monad performs commonsubexpression elimination. We now discuss these two topics. Treating dags as trees. Recall that the goal of the sim 8 Although one might imagine iterating this process, three passes seem to be sufﬁcient in all cases. 7 ip rÚVR ÔR 9tV Ó ª fi T CX 9Eq·si Ô Ó R p T TÓ iRV T t!UsÔ i T ª Ó Ô CX ip X T rtsUuÔ Ó 9R in the edge, every C compiler would store both and program text, and it would load both constants into a register at runtime. 
Making all constants positive reduces the number of loads of constants by a factor of two, and this transformation alone speeds up the generated codelets by 10–15% on most machines. This transformation has the additional effect of converting subexpressions into a canonical form, which helps common-subexpression elimination.

The second DFT-specific improvement is not local to nodes, and is instead applied to the whole dag. The transformation is based on the fact that a dag computing a linear function can be “reversed,” yielding a transposed dag [CO75]. This transposition process is well known in the signal processing literature [OS89, page 309], and it operates as shown in Figure 6. It turns out that in certain cases the transposed dag exposes some simplifications that are not present in the original dag. (An example will be shown later.) Accordingly, the simplifier performs three passes over the dag: it first simplifies the original dag; it then transposes the result and simplifies the transposed dag; finally, it transposes again and performs one more simplification pass.8 Figure 7 shows the savings in arithmetic complexity that derive from dag transposition for codelets of various sizes. As can be seen in the figure, transposition can reduce the number of multiplications, but it does not reduce the number of additions.

                              not transposed    transposed
                        size    adds   muls    adds   muls
    complex to complex     5      32     16      32     12
                          10      84     32      84     24
                          13     176     88     176     68
                          15     156     68     156     56
    real to complex        5      12      8      12      6
                          10      34     16      34     12
                          13      76     44      76     34
                          15      64     31      64     25
    complex to real        5      12      9      12      7
                           9      32     20      32     18
                          10      34     18      34     14
                          12      38     14      38     10
                          13      76     43      76     35
                          15      64     37      64     31
                          16      58     22      58     18
                          32     156     62     156     54
                          64     394    166     394    146
                         128     956    414     956    374

Figure 7: Summary of the benefits of dag transposition. The table shows the number of additions and multiplications for codelets of various sizes, with and without dag transposition. Sizes for which the transposition has no effect are not reported in this table.

Figure 8 shows a simple case where transposition is beneficial. The network in the figure computes two outputs that share common subexpressions. It is not safe to simplify this expression, since the rewriting would destroy the common subexpressions. (The rewriting saves one operation but destroys two common subexpressions, which might increase the operation count by one.) Indeed, the whole point of most FFT algorithms is to create common subexpressions. When the network is transposed, however, it computes the transposed expressions, and these can be safely simplified because each transformation saves one operation and destroys only one common subexpression. Consequently, the operation count cannot increase. In a sense, transposition provides a simple and elegant way to detect which dag nodes have more than one parent, which would be difficult to detect while the dag is being traversed.

Figure 8: A linear network where dag transposition exposes some optimization possibilities. See the text for an explanation.

6 The scheduler

In this section we discuss the scheduler, which produces a topological sort of the dag in an attempt to maximize register usage. For transforms whose size is a power of 2, we prove that a schedule exists that is asymptotically optimal in this respect, even though the schedule is independent of the number of registers. This fact is derived from the red-blue pebbling game of Hong and Kung [HK81].

Even after simplification, a codelet dag of a large transform still contains hundreds or even thousands of nodes, and there is no way to execute it fully within the register set of any existing processor. The scheduler attempts to reorder the dag in such a way that the register allocators commonly used in compilers [Muc97, Section 16] can minimize the number of register spills.
Note that the FFTW codelet generator does not address the instruction scheduling problem; that is, the maximization of pipeline usage is left to the C compiler.

The simplifier, however, is written as if it were simplifying an expression tree. The map from trees to dags is accomplished by memoization, which is performed implicitly by a monad. The monad maintains a table of all previously simplified dag nodes, along with their simplified versions. Whenever a node is visited for the second time, the monad returns the value in the table.

In order to continue reading this section, you really should be familiar with monads [Wad97]. In any case, here is a very brief summary of monads. The idea of a monadic-style program is to convert all expressions of the form “bind a name to a result and continue” into something built from a sequencing operator. The code should be read “call the first computation, then name the result, and return the rest of the computation applied to that result.” The advantage of this transformation is that the meanings of “then” (the infix operator >>=) and “return” (the function return) can be defined so that they perform all sorts of interesting activities, such as carrying state around, performing I/O, or acting nondeterministically. In the specific case of the FFTW simplifier, >>= is defined so as to keep track of a few tables used for memoization, and return performs common-subexpression elimination.

The core of the simplifier is the top-level function shown in Figure 9. It dispatches on its argument (a dag node) and calls a simplifier function for the appropriate case. If the node has subnodes, the subnodes are simplified first. For example, a product node has two subnodes; the function first calls itself recursively on each of them, and then passes control to the routine shown in Figure 10. If both subnodes are constants, that routine computes the product directly. In the same way, it takes care of the cases where either subnode is 0 or 1, and so on. The neat trick of using memoization for graph traversal was invented by Joanna Kulik in her master’s thesis [Kul95], as far as I can tell.

Common-subexpression elimination (CSE) is performed behind the scenes by the monadic operator return. The CSE algorithm is essentially the classical bottom-up construction from [ASU86, page 592]. The monad maintains a table of all nodes produced during the traversal of the dag. Each time a new node is constructed and returned, return checks whether the node appears elsewhere in the dag. If so, the new node is discarded and return yields the old node instead. (Two nodes are considered the same if they compute equivalent expressions; for example, a + b is the same as b + a.)

It is worth remarking that the simplifier interleaves common-subexpression elimination with algebraic transformations. To see why interleaving is important, consider for example the expression a − b, where a and b are distinct nodes of the dag that compute the same subexpression. CSE rewrites the expression to a − a, which is then simplified to 0. This pattern occurs frequently in DFT dags.

Figure 9: The top-level simplifier function, written in monadic style. See the text for an explanation.
Figure 10: Code for the function that simplifies the product of two expressions. The comments (delimited with (* and *)) briefly discuss the various simplifications. Even though it operates on a dag, this is exactly the code one would write to simplify a tree.

Figure 11: Illustration of the scheduling problem. The butterfly graph (in black) represents an abstraction of the data flow of the fast Fourier transform algorithm on 8 inputs. (In practice, the graph is more complicated because the data are complex, and the real and imaginary parts interact in nontrivial ways.) The boxes denote two different execution orders that are explained in the text.

9 The same result holds for any two-level memory, such as L1 cache versus L2 cache, or physical memory versus disk.
10 We say “cache-oblivious” and not “register-oblivious” since this notion first arose from the analysis of the caching behavior of Cilk [FLR98] programs using shared memory. Work is still in progress to understand and define cache-obliviousness formally, and this concept does not yet appear in the literature. Simple divide-and-conquer cache-oblivious algorithms for matrix multiplication and LU decomposition are described in [BFJ+96].

Figure 11 illustrates the scheduling problem. Suppose a processor has 4 registers, and consider a “column-major” execution order that first executes all nodes in the diagonally-striped box (say, top-down), and then proceeds to the next column of nodes. Since there are 8 values to propagate from column to column, and the machine has 4 registers, at least four registers must be spilled if this strategy is adopted. A different strategy would be to execute all operations in the gray box before executing any other node. These operations can be performed fully within registers once the input nodes have been loaded.
It is clear that different schedules lead to different behaviors with respect to register spills. A lower bound on the number of register spills incurred by any execution of the FFT graph was first proved by Hong and Kung [HK81] in the context of the so-called “red-blue pebbling game.” Paraphrased in compiler terminology, Theorem 2.1 from [HK81] states that the execution of the FFT graph of size n on a machine with R registers (where R < n) requires at least Ω(n log n / log R) register spills.9 Aggarwal and Vitter [AV88] generalize this result to disk I/O, where a single I/O operation can transfer a block of several elements. In addition, Aggarwal and Vitter give a schedule that matches the lower bound. Their schedule is constructed as in the example that follows.

With reference to Figure 11, assume again that the machine has 4 registers. The schedule loads the four topmost input nodes of the dag, and then executes all nodes in the gray box, which can be done completely using the 4 registers. Then, the four outputs of the gray box are written to some temporary memory location, and the same process is repeated for the four inputs in the bottom part of the dag. Finally, the schedule executes the rightmost column of the dag.

In general, the algorithm proceeds by partitioning the dag into blocks that have R input nodes and depth log R. Consequently, there are (n log n)/(R log R) such blocks, and each block can be executed with O(R) transfers between registers and memory. The total number of transfers is thus at most O(n log n / log R), and the algorithm matches the lower bound.

Unfortunately, Aggarwal and Vitter’s algorithm depends on R, and a schedule for a given value of R does not work well for other values. The aim of the FFTW generator is to produce portable code, however, and the generator cannot make any assumption about R. It is perhaps surprising that a schedule exists that matches the asymptotic lower bound for all values of R.
In other words, a single sequential order of execution of an FFT dag exists that, for all R, incurs O(n log n / log R) register spills on a machine with R registers. We say that such a schedule is cache-oblivious.10

We now show that the Cooley-Tukey FFT becomes cache-oblivious when the factors of n are chosen appropriately (as in [VS94a, VS94b]). We first formulate a recursive algorithm, which is easier to understand and analyze than a dag. Then, we examine how the computation dag of the algorithm should be scheduled in order to mimic the register/cache behavior of the cache-oblivious algorithm.

Consider the Cooley-Tukey algorithm applied to a transform of size n. Assume for simplicity that log2 n is itself a power of two, although the result holds for any positive integer n. At every stage of the recursion, we have a choice of the factors n1 and n2 of n = n1 n2. Choose n1 = n2 = sqrt(n). The algorithm computes sqrt(n) transforms of size sqrt(n), followed by multiplications by the twiddle factors, followed by sqrt(n) more transforms of size sqrt(n). When n <= R, the transform can be computed fully within registers. Thus, the number T(n) of transfers between memory and registers when computing a transform of size n satisfies the recurrence

    T(n) = O(n)                              when n <= R,
    T(n) = 2 sqrt(n) T(sqrt(n)) + O(n)       otherwise.

The recurrence has solution T(n) = O(n log n / log R), which matches the lower bound.

We now reexamine the operation of the cache-oblivious FFT algorithm in terms of an FFT dag such as the one in Figure 11. Partitioning a problem of size n into sqrt(n) problems of size sqrt(n) is equivalent to cutting the dag with a vertical line that partitions the dag into two halves of (roughly) equal size. Every node in the first half is executed before any node in the second half. Each half consists of sqrt(n) connected components, which are scheduled recursively in the same way.

The scheduler uses this recursive partitioning technique for transforms of all sizes (not just powers of 2). The scheduler cuts the dag roughly into two halves. “Half a dag” is not well defined, however, except in the power-of-2 case, and therefore the scheduler uses a simple heuristic (described below) to compute the two halves in the general case. The cut induces a set of connected components that are scheduled recursively. The scheduler guarantees that all components in the first half of the dag (the one containing the inputs) are executed before the second half is scheduled. For the special case where n is a power of 2, because of the previous analysis, we know that this schedule of the dag allows the register allocator of the C compiler to minimize the number of register spills (up to some constant factor). Little is known about the optimality of this scheduling strategy for general n, for which neither the lower-bound nor the upper-bound analyses hold.

Finally, we discuss the heuristic used to cut the dag into two halves. The heuristic consists of “burning the candle at both ends.” Initially, the scheduler colors the input nodes red, the output nodes blue, and all other nodes black. After this initial step, the scheduler alternates between a red and a blue coloring phase. In a red phase, any node whose predecessors are all red becomes red. In a blue phase, any node whose successors are all blue becomes blue. This alternation continues while black nodes exist. When the coloring is done, the red nodes form the first “half” of the dag, and the blue nodes the second. When n is a power of two, the FFT dag has a regular structure like the one shown in Figure 11, and this process has the effect of cutting the dag in the middle with a vertical line, yielding the desired optimal behavior.

7 Pragmatic aspects of genfft

This section discusses briefly the running time and the memory requirements of genfft, and also some problems that arise in the interaction of the scheduler with C compilers.

The FFTW codelet generator is not optimized for speed, since it is intended to be run only once. Indeed, users of FFTW can download a distribution of generated C code and never run genfft at all. Nevertheless, the resources needed by genfft are quite modest. Generation of C code for a transform of size 64 (the biggest used in FFTW) takes about 75 seconds on a 200 MHz Pentium Pro running Linux 2.2 and the native-code compiler of Objective Caml 2.01; genfft needs less than 3 MB of memory to complete the generation. The resulting codelet contains 912 additions and 248 multiplications. On the same machine, the whole FFTW system can be regenerated in about 15 minutes. The system contains about 55,000 lines of code in 120 files, consisting of various kinds of codelets for forward, backward, real-to-complex, and complex-to-real transforms. The sizes of these transforms in the standard FFTW distribution include all integers up to 16 and all powers of two up to 64. A few FFTW users needed fast hard-coded transforms of uncommon sizes (such as 19 and 23), and they were able to run the generator to produce a system tailored to their needs.

The biggest program generated so far was for a complex transform of size 101, which required slightly less than two hours of CPU time on the Pentium Pro machine, and about 10 MB of memory. Again, a user had a special need for such a transform, which would be formidable to code by hand. In order to achieve this running time, I was forced to replace a linked-list implementation of associative tables by hashing, and to avoid generating “obvious” common subexpressions more than once when the dag is created. The naive generator was somewhat more elegant, but it had not produced an answer after three days.

The long sequences of straight-line code produced by genfft can push C compilers (in particular, register allocators) to their limits. The combined effect of genfft and of the C compiler can lead to performance problems. The following discussion presents two cases that I found particularly surprising; it is not intended to blame any particular compiler or vendor.

The optimizer of the egcs compiler performs an instruction scheduling pass, followed by register allocation, followed by another instruction scheduling pass. On some architectures, including the SPARC and PowerPC processors, egcs employs the so-called “Haifa scheduler,” which usually produces better code than the normal scheduler. The first pass of the Haifa scheduler, however, has the unfortunate effect of destroying the schedule that genfft computed (as explained in Section 6). In egcs, the first instruction scheduling pass can be disabled with the option -fno-schedule-insns.
On a 167 MHz UltraSPARC I, the compiled code is between 50% and 100% faster, and about half the size, when this option is used. Inspection of the assembly code reveals that the difference consists entirely of register spills and reloads.

Digital’s C compiler for the Alpha (DEC C V5.6-071 on Digital UNIX V4.0 Rev. 878) seems to be particularly sensitive to the way local variables are declared. For example, Figure 12 illustrates two ways to declare temporary variables in a C program; let’s call them the “left” and the “right” style. genfft can be programmed to produce code in either way, and for most compilers I have tried there is no appreciable performance difference between the two styles. Digital’s C compiler, however, appears to produce better code with the right style (the right side of Figure 12). For a transform of size 64, for example, and the same compiler flags, a 467 MHz Alpha achieves about 450 MFLOPS with the left style and 600 MFLOPS with the right style. (Different sizes lead to similar results.) I could not determine the exact source of this difference.

Figure 12: Two possible declarations of local variables in C. On the left side, variables are declared in the topmost lexical scope. On the right side, variables are declared in a private lexical scope that encompasses the lifetime of the variable.

8 Conclusion

In my opinion, the main contribution of this paper is to present a real-world application of domain-specific compilers and of advanced programming techniques, such as monads. In this respect, the FFTW experience has been very successful: the current release FFTW-2.0.1 is being downloaded by more than 100 people every week, and a few users have been motivated to learn ML after their experience with FFTW. In the rest of this concluding section, I offer some ideas about future work and possible developments of the FFTW system.

The current genfft program is somewhat specialized to computing linear functions, using algorithms whose control structure is independent of the input. Even with this restriction, the field of applicability of genfft is potentially huge. For example, signal processing FIR and IIR filters fall into this category, as do other kinds of transforms used in image processing (for example, the discrete cosine transform used in JPEG). I am confident that the techniques described in this paper will prove valuable in this sort of application.

Recently, I modified genfft to generate crystallographic Fourier transforms [ACT90]. In this particular application, the input consists of 2D or 3D data with certain symmetries. For example, the input data set might be invariant with respect to rotations of 60 degrees, and it is desirable to have a special-purpose FFT algorithm that does not execute redundant computations. Preliminary investigation shows that genfft is able to exploit most symmetries. I am currently working on this problem.

In its present form, genfft is somewhat unsatisfactory because it intermixes programming and metaprogramming. At the programming level, one specifies a DFT algorithm, as in Figure 4. At the metaprogramming level, one specifies how the program should be simplified and scheduled. In the current implementation, the two levels are confused together in a single binary program. It would be nice to have a clean separation of these two levels, but I currently do not know how to do it.

9 Acknowledgments

I am grateful to Compaq for awarding me the Digital Equipment Corporation fellowship. Thanks to Compaq, Intel, and Sun Microsystems for donations of hardware that was used to develop genfft. Prof. Charles E. Leiserson of MIT provided continuous support and encouragement. FFTW would not exist without him. Charles also proposed the name “codelets” for the basic FFT blocks. Thanks to Steven G. Johnson for sharing with me the development of FFTW. Thanks to Steven and the anonymous referees for helpful comments on this paper. The notion of cache obliviousness arose in discussions with Charles Leiserson, Harald Prokop, Sridhar “Cheema” Ramachandran, and Keith Randall.

References

[ACT90] Myoung An, James W. Cooley, and Richard Tolimieri. Factorization method for crystallographic Fourier transforms. Advances in Applied Mathematics, 11:358–371, 1990.
[ASU86] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley, March 1986.
[AV88] Alok Aggarwal and Jeffrey Scott Vitter. The input/output complexity of sorting and related problems. Communications of the ACM, 31(9):1116–1127, September 1988.
[BFJ+96] Robert D. Blumofe, Matteo Frigo, Christopher F. Joerg, Charles E. Leiserson, and Keith H. Randall. An analysis of dag-consistent distributed shared-memory algorithms. In Proceedings of the Eighth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA), pages 297–308, Padua, Italy, June 1996.
[CO75] R. E. Crochiere and A. V. Oppenheim. Analysis of linear digital networks. Proceedings of the IEEE, 63:581–595, April 1975.
[CT65] J. W. Cooley and J. W. Tukey. An algorithm for the machine computation of the complex Fourier series. Mathematics of Computation, 19:297–301, April 1965.
[DV90] P. Duhamel and M. Vetterli. Fast Fourier transforms: a tutorial review and a state of the art. Signal Processing, 19:259–299, April 1990.
[FJ] Matteo Frigo and Steven G. Johnson. The FFTW web page.
[FJ97] Matteo Frigo and Steven G. Johnson. The fastest Fourier transform in the West. Technical Report MIT-LCS-TR-728, MIT Lab for Computer Science, September 1997. The description of the codelet generator given in this report is no longer current.
[FJ98] Matteo Frigo and Steven G. Johnson. FFTW: An adaptive software architecture for the FFT. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, pages 1381–1384, Seattle, WA, May 1998.
[FLR98] Matteo Frigo, Charles E. Leiserson, and Keith H. Randall. The implementation of the Cilk-5 multithreaded language. In Proceedings of the ACM SIGPLAN ’98 Conference on Programming Language Design and Implementation (PLDI), pages 212–223, Montreal, Canada, June 1998. ACM.
[GHSJ96] S. K. S. Gupta, C.-H. Huang, P. Sadayappan, and R. W. Johnson. A framework for generating distributed-memory parallel programs for block recursive algorithms. Journal of Parallel and Distributed Computing, 34(2):137–153, 1 May 1996.
[HK81] Jia-Wei Hong and H. T. Kung. I/O complexity: the red-blue pebbling game. In Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing, pages 326–333, Milwaukee, 1981.
[HV92] P. H. Hartel and W. G. Vree. Arrays in a lazy functional language—a case study: the fast Fourier transform. In G. Hains and L. M. R. Mullin, editors, Arrays, Functional Languages, and Parallel Systems (ATABLE), pages 52–66, June 1992.
[JB83] H. W. Johnson and C. S. Burrus. The design of optimal DFT algorithms using dynamic programming. IEEE Transactions on Acoustics, Speech and Signal Processing, 31:378–387, April 1983.
[Knu98] Donald E. Knuth. The Art of Computer Programming, volume 2 (Seminumerical Algorithms). Addison-Wesley, 3rd edition, 1998.
[Kul95] Joanna L. Kulik. Implementing compiler optimizations using parallel graph reduction. Master’s thesis, Massachusetts Institute of Technology, February 1995.
[Ler98] Xavier Leroy. The Objective Caml system release 2.00. Institut National de Recherche en Informatique et en Automatique (INRIA), August 1998.
[Mar76] J. A. Maruhn. FOURGEN: a fast Fourier transform program generator. Computer Physics Communications, 12:147–162, 1976.
[Muc97] Steven S. Muchnick. Advanced Compiler Design and Implementation. Morgan Kaufmann, 1997.
[OS89] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[Par92] Will Partain. The nofib benchmark suite of Haskell programs. In J. Launchbury and P. M. Sansom, editors, Functional Programming, Workshops in Computing, pages 195–202. Springer Verlag, 1992.
[PT87] F. Perez and T. Takaoka. A prime factor FFT algorithm implementation using a program generation technique. IEEE Transactions on Acoustics, Speech and Signal Processing, 35(8):1221–1223, August 1987.
[Rad68] C. M. Rader. Discrete Fourier transforms when the number of data samples is prime. Proceedings of the IEEE, 56:1107–1108, June 1968.
[SB96] I. Selesnick and C. S. Burrus. Automatic generation of prime length FFT programs. IEEE Transactions on Signal Processing, pages 14–24, January 1996.
[SJHB87] H. V. Sorensen, D. L. Jones, M. T. Heideman, and C. S. Burrus. Real-valued fast Fourier transform algorithms. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-35(6):849–863, June 1987.
[TAL97] Richard Tolimieri, Myoung An, and Chao Lu. Algorithms for Discrete Fourier Transform and Convolution. Springer Verlag, 1997.
[Vel95] Todd Veldhuizen. Using C++ template metaprograms. C++ Report, 7(4):36–43, May 1995. Reprinted in C++ Gems, ed. Stanley Lippman.
[VS94a] J. S. Vitter and E. A. M. Shriver. Optimal algorithms for parallel memory I: Two-level memories. Algorithmica, 12(2–3):110–147, 1994. Double special issue on Large-Scale Memories.
[VS94b] J. S. Vitter and E. A. M. Shriver. Optimal algorithms for parallel memory II: Hierarchical multilevel memories. Algorithmica, 12(2–3):148–169, 1994. Double special issue on Large-Scale Memories.
[Wad97] Philip Wadler. How to declare an imperative. ACM Computing Surveys, 29(3):240–263, September 1997.
[Win78] S. Winograd. On computing the discrete Fourier transform. Mathematics of Computation, 32(1):175–199, January 1978.