Clang - The C, C++ Compiler: Synopsis
SYNOPSIS
clang [options] filename ...
DESCRIPTION
clang is a C, C++, and Objective-C compiler which encompasses preprocessing, parsing,
optimization, code generation, assembly, and linking. Depending on which high-level
mode setting is passed, Clang will stop before doing a full link. While Clang is highly
integrated, it is important to understand the stages of compilation, to understand how to
invoke it. These stages are:
Driver
The clang executable is actually a small driver which controls the overall execution
of other tools such as the compiler, assembler and linker. Typically, you do not
need to interact with the driver, but you transparently use it to run the other tools.
Preprocessing
This stage handles tokenization of the input source file, macro expansion, #include
expansion and handling of other preprocessor directives. The output of this stage
is typically called a ".i" (for C), ".ii" (for C++), ".mi" (for Objective-C), or ".mii" (for
Objective-C++) file.
Assembler
This stage runs the target assembler to translate the output of the compiler into a
target object file. The output of this stage is typically called a ".o" file or "object" file.
Linker
This stage runs the target linker to merge multiple object files into an executable
or dynamic library. The output of this stage is typically called an "a.out", ".dylib", or
".so" file.
-march=<cpu>
Specify that Clang should generate code for a specific processor family member
and later. For example, if you specify -march=i486, the compiler is allowed to
generate instructions that are valid on i486 and later processors, but which may
not exist on earlier ones.
-march=znver1
Use this architecture flag to enable the best code generation and tuning for AMD's
Zen-based x86 architecture. All x86 Zen ISA and associated intrinsics are
supported.
-march=znver2
Use this architecture flag to enable the best code generation and tuning for AMD's
Zen2-based x86 architecture. All x86 Zen2 ISA and associated intrinsics are
supported.
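For example, assuming a source file named foo.c (the file name is illustrative), a
hypothetical invocation that tunes the generated code for Zen2 processors is:
    clang -O3 -march=znver2 -c foo.c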
-O0 Means “no optimization”: this level compiles the fastest and generates the
most debuggable code.
-O3 Like -O2, except that it enables optimizations that take longer to perform or
that may generate larger code (in an attempt to make the program run faster).
The -O3 level in AOCC includes more optimizations than the base LLVM version
on which AOCC is built, such as improved handling of indirect calls and
advanced vectorization.
-Ofast Enables all the optimizations from -O3 along with other aggressive
optimizations that may violate strict compliance with language standards.
The -Ofast level in AOCC includes more optimizations than the base LLVM
version on which AOCC is built, such as partial unswitching and improvements
to inlining and unrolling.
-Oz Like -Os (and thus -O2), but reduces code size further.
-O Equivalent to -O2.
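For example, a hypothetical invocation selecting one of these levels (the file name is
illustrative):
    clang -Ofast foo.c -o foo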
The following optimizations are not present in LLVM and are specific to AOCC:
-fstruct-layout=[1,2,3,4,5]
Analyzes the whole program to determine whether the structures in the code can be
peeled and whether the pointers in a structure can be compressed. If feasible, this
optimization transforms the code to enable these improvements. The transformation
is likely to improve cache utilization and memory bandwidth and, in turn, the
scalability of programs executed on multiple cores.
This optimization is effective only under -flto, as whole-program analysis is
required. You can choose the level of aggressiveness with which the optimization is
applied, with 1 being the least aggressive and 5 the most aggressive level; a sketch
follows the list below.
• -fstruct-layout=1 enables structure peeling.
• -fstruct-layout=2 enables structure peeling and pointer compression when the size
fits within 64 KB and 4 GB.
• -fstruct-layout=3 enables structure peeling and pointer compression when the size
fits within 64 KB.
• -fstruct-layout=4 enables data compression in addition to level 2.
• -fstruct-layout=5 enables data compression in addition to level 3.
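A minimal sketch of a candidate for this optimization, assuming -flto (the structure,
field names, and file name are hypothetical):

    /* struct.c: only "key" is touched in the hot loop, so structure peeling can
     * split the hot field away from the cold payload under whole-program analysis.
     * Possible invocation: clang -O3 -flto -fstruct-layout=1 struct.c -o struct */
    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        long key;          /* hot field, read on every iteration */
        char payload[120]; /* cold data that dilutes cache lines */
    };

    static long sum_keys(const struct node *nodes, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; ++i)
            s += nodes[i].key;   /* only key is accessed here */
        return s;
    }

    int main(void) {
        size_t n = 1u << 20;
        struct node *nodes = calloc(n, sizeof *nodes);
        if (!nodes)
            return 1;
        printf("%ld\n", sum_keys(nodes, n));
        free(nodes);
        return 0;
    }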
-fitodcalls
Promotes indirect calls to direct calls by placing conditional calls. Applications or
benchmarks that have a small and deterministic set of target functions for function
pointers passed as call parameters benefit from this optimization. Indirect-to-direct
call promotion transforms the code to use all possible determined targets under
runtime checks and falls back to the original code for all other cases. The compiler
introduces runtime checks for each of these possible function pointer targets,
followed by direct calls to the targets.
This is a link-time optimization, which is invoked as -flto -fitodcalls.
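A minimal sketch of the kind of indirect call this targets (function and file names are
hypothetical):

    /* itod.c: "op" has a small, deterministic set of targets (add, sub), so the
     * indirect call can be guarded by runtime checks and promoted to direct calls.
     * Possible invocation: clang -O3 -flto -fitodcalls itod.c -o itod */
    #include <stdio.h>

    static int add(int a, int b) { return a + b; }
    static int sub(int a, int b) { return a - b; }

    static int apply(int (*op)(int, int), int a, int b) {
        return op(a, b);   /* indirect call eligible for promotion */
    }

    int main(void) {
        printf("%d %d\n", apply(add, 3, 2), apply(sub, 3, 2));
        return 0;
    }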
-fitodcallsbyclone
Performs value specialization for functions with function pointers passed as an
argument. It does this specialization by generating a clone of the function. The
cloning of the function happens in the call chain as needed to allow conversion of an
indirect function call to a direct call. This complements the -fitodcalls optimization and
is also a link-time optimization, invoked as -flto -fitodcallsbyclone.
-fremap-arrays
Transforms the data layout of a single-dimensional array to provide better cache
locality. This optimization is effective only under -flto, as whole-program analysis
is required; it can be invoked as -flto -fremap-arrays.
-finline-aggressive
Enables improved inlining capability through better heuristics. This optimization is
more effective when used with -flto, as whole-program analysis is required; it can
be invoked as -flto -finline-aggressive.
-enable-partial-unswitch
Enables partial loop unswitching, an enhancement to the existing loop-unswitching
optimization in LLVM. Whereas the original loop unswitching works only for
conditions that are completely loop invariant, partial loop unswitching hoists a
condition out of the loop for the path on which the condition remains invariant; the
original loop is retained for the path on which the condition varies.
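A minimal sketch of a partially loop-invariant condition (the loop and file names are
hypothetical; optimizer options such as this one are passed through -mllvm, see Driver
Options):

    /* unswitch.c: "flag" is loop invariant, so the compound condition is invariant
     * along the path where flag is false; partial unswitching can hoist that test
     * out of the loop while keeping the original loop for the varying path.
     * Possible invocation: clang -O3 -mllvm -enable-partial-unswitch -c unswitch.c */
    void scale(double *a, const double *b, int n, int flag) {
        for (int i = 0; i < n; ++i) {
            if (flag && b[i] > 0.0)   /* partially loop-invariant condition */
                a[i] = b[i] * 2.0;
            else
                a[i] = b[i];
        }
    }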
-aggressive-loop-unswitch
Experimental option that enables an aggressive loop-unswitching heuristic
(including -enable-partial-unswitch) based on the usage of the branch conditional
values. Loop unswitching leads to code bloat; the bloat is better justified when the
hoisted condition is executed more often. This heuristic therefore prioritizes the
conditions based on the number of times they are used within the loop. The heuristic
can be controlled with the following options (an example invocation follows the note
below):
• -unswitch-identical-branches-min-count=<n>
Enables unswitching of a loop with respect to a branch conditional value (B), where B
appears in at least <n> compares in the loop. This option is enabled with
-aggressive-loop-unswitch. The default value is 3.
• -unswitch-identical-branches-max-count=<n>
Enables unswitching of a loop with respect to a branch conditional value (B), where B
appears in at most <n> compares in the loop. This option is enabled with
-aggressive-loop-unswitch. The default value is 6.
Note: These options may facilitate more unswitching in some workloads. Since loop
unswitching inherently leads to code bloat, facilitating more unswitching may significantly
increase the code size and hence may also lead to longer compilation times.
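A hypothetical invocation combining these knobs (each optimizer option is assumed to be
passed through -mllvm, as described under Driver Options; the file name is illustrative):
    clang -O3 -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-min-count=2 -mllvm -unswitch-identical-branches-max-count=8 -c loop.c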
-enable-strided-vectorization
Enables strided memory vectorization as an enhancement to the interleaved
vectorization framework present in LLVM. It enables effective use of gather- and
scatter-type instruction patterns. This flag must be used together with the
interleaved vectorization flag.
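A minimal sketch of a strided access pattern this targets (the loop and file names are
hypothetical; the flag is passed through -mllvm, see Driver Options):

    /* strided.c: loads from src have a constant stride of 2, a pattern that
     * strided vectorization can map to gather-style code.
     * Possible invocation: clang -O3 -mllvm -enable-strided-vectorization -c strided.c */
    void pack_even(float *dst, const float *src, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] = src[2 * i];   /* strided load, stride = 2 */
    }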
-enable-epilog-vectorization
Enables vectorization of epilog iterations as an enhancement to the existing
vectorization framework. This enables generation of an additional epilog vector
loop version for the remainder iterations of the original vector loop. The vector size
or factor of the original loop should be large enough to allow effective epilog
vectorization of the remaining iterations. This optimization takes effect only
when the original vector loop is vectorized with a vector width or factor of sixteen.
This vectorization width of sixteen may be overridden with the
-min-width-epilog-vectorization command-line option.
-vectorize-memory-aggressively
This option instructs the loop vectorizer to assume that memory accesses do not
alias. By default, the loop vectorizer generates runtime checks for all unique
memory accesses when the compiler cannot prove the absence of aliasing, and the
result of the runtime check determines whether the vectorized loop version or the
scalar loop version is executed: if the runtime check detects any memory aliasing,
the scalar loop is executed; otherwise, the vector loop is executed. This option
forces the loop vectorizer not to generate these runtime alias checks by assuming
that memory accesses do not alias. The responsibility for correct usage of this
option is left to the user; it may be used only if memory accesses in loops do not
overlap.
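A minimal sketch of a loop where such runtime checks would normally be emitted (names
are hypothetical; the flag is passed through -mllvm, see Driver Options):

    /* axpy.c: without further information the compiler cannot prove that dst and
     * src do not overlap, so it would normally guard the vector loop with runtime
     * alias checks; this option asserts the accesses never alias, which is only
     * safe if the caller guarantees it.
     * Possible invocation: clang -O3 -mllvm -vectorize-memory-aggressively -c axpy.c */
    void axpy(float *dst, const float *src, float a, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] += a * src[i];   /* assumed not to overlap with src */
    }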
-enable-redundant-movs
Removes any redundant mov operations, including redundant loads from memory and
stores to memory. This may be invoked by -Wl,-plugin-opt=-enable-redundant-movs.
-merge-constant
Attempts to promote frequently occurring constants to registers. The aim is to
reduce the size of the instruction encoding for instructions that use constants,
thereby improving performance.
-function-specialize
Optimizes functions with compile-time constant formal arguments.
-lv-function-specialization
Generates specialized function versions when the loops inside the function are
vectorizable and the arguments are not aliased with each other.
-enable-vectorize-compares
-inline-recursion=[1,2,3,4]
Enables inlining for recursive functions based on heuristics, with level 4 being the most
aggressive. The default level is 2. Higher levels may lead to code bloat due to the
expansion of recursive functions at call sites.
• Levels 1-2: Enable inlining for recursive functions using heuristics with an inline
depth of 1. Level 2 uses more aggressive heuristics.
• Level 3: Enables inlining for all recursive functions with an inline depth of 1.
• Level 4: Enables inlining for all recursive functions with an inline depth of 10.
This is more effective with -flto, as whole-program analysis is required; it can be
invoked as -flto -inline-recursion=[1,2,3,4].
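A minimal sketch of a self-recursive call that call-site expansion targets (function and
file names are hypothetical):

    /* fact.c: the recursive call can be expanded at the call site up to the
     * configured inline depth.
     * Possible invocation: clang -O3 -flto -inline-recursion=3 fact.c -o fact */
    #include <stdio.h>

    static unsigned long factorial(unsigned int n) {
        return (n <= 1) ? 1UL : n * factorial(n - 1);   /* self-recursive call */
    }

    int main(void) {
        printf("%lu\n", factorial(10));
        return 0;
    }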
-reduce-array-computations=[1,2,3]
Performs array dataflow analysis and optimizes away unused array computations.
This optimization is effective with -flto, as whole-program analysis is required; it
can be invoked as -flto -reduce-array-computations=[1,2,3].
-global-vectorize-slp
Vectorizes the straight-line code inside a basic block with data-reordering vector
operations.
-region-vectorize
Experimental flag for enabling vectorization on certain loops with complex control
flow, which the normal vectorizer cannot handle.
This optimization is effective with -flto, as whole-program analysis is required; it
can be invoked as -flto -region-vectorize.
-nt-store
Generates non-temporal store instructions for array accesses in a loop with a large
trip count.
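A minimal sketch of a streaming store loop (names are hypothetical; the flag is assumed to
be passed through -mllvm, see Driver Options):

    /* fill.c: the trip count is known at compile time and is large, and the loop
     * only writes, so the stores are candidates for non-temporal instructions.
     * Possible invocation: clang -O3 -mllvm -nt-store -c fill.c */
    #define N (1 << 26)
    void fill(double *a) {
        for (long i = 0; i < N; ++i)
            a[i] = 0.0;   /* candidate for non-temporal stores */
    }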
-nt-store-aggressive
This is an experimental option to generate non-temporal store instructions for array
accesses in a loop whose iteration count cannot be determined at compile time. In
this case, the compiler assumes the iteration count to be huge.
-enable-X86-prefetching
Enables the generation of x86 prefetch instructions for memory references inside
a loop, or inside the innermost loop of a loop nest, to prefetch the second
dimension of multidimensional array/memory references. This is an experimental
pass whose profitability is being improved.
-suppress-fmas
Identifies reduction patterns on FMA candidates and suppresses FMA generation
for them, as FMA is not profitable on reduction patterns.
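A minimal sketch of the reduction pattern in question (names are hypothetical; the flag is
assumed to be passed through -mllvm, see Driver Options):

    /* dot.c: the multiply-add feeds a single accumulator, i.e. a reduction, so the
     * fused multiply-add that would normally be formed here may be suppressed.
     * Possible invocation: clang -O3 -mllvm -suppress-fmas -c dot.c */
    double dot(const double *a, const double *b, int n) {
        double s = 0.0;
        for (int i = 0; i < n; ++i)
            s += a[i] * b[i];   /* reduction over an FMA-shaped expression */
        return s;
    }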
Driver Options
-mllvm <options>
The -mllvm prefix is required so that the option can pass through the compiler front
end and be applied in the optimizer, where these optimizations are implemented.
For example: -mllvm -enable-strided-vectorization
-fuse-ld=lld
Invokes the lld linker from the compiler driver; lld is the preferred linker.
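For example, a hypothetical link-time-optimized build that links with lld (file names are
illustrative):
    clang -O3 -flto -fuse-ld=lld foo.c bar.c -o app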