//===- Threads.h ------------------------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// LLD supports threads to distribute workloads to multiple cores. Using
// multiple cores is most effective when more than one core is idle. At the
// last step of a build, it is often the case that a linker is the only
// active process on a computer, so we are naturally interested in using
// threads wisely to reduce the latency of delivering results to users.
//
// That said, we don't want to do overly clever things with threads.
// Complex multi-threaded algorithms are sometimes extremely hard to
// reason about and can easily mess up the entire design.
//
// Fortunately, when a linker links large programs (which is when the link
// time matters most), it spends most of its time working on a massive
// number of small pieces of data of the same kind, and there are
// opportunities for large-scale parallelism there. Here are two examples:
//
//  - We have hundreds of thousands of input sections that need to be
//    copied to a result file at the last step of the link. Once we fix a
//    file layout, each section can be copied to its destination and its
//    relocations can be applied independently.
//
//  - We have tens of millions of small strings when constructing a
//    mergeable string section.
//
// For cases such as the former, we can just use parallelForEach
// instead of std::for_each (or a plain for loop). Because tasks are
// completely independent from each other, we can run them in parallel
// without any coordination between them. That's very easy to understand
// and reason about.
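//
// For illustration only (OutputSection, outputSections, and buf are
// hypothetical placeholder names, not necessarily the linker's real API),
// the section-copy step described above could be written as:
//
//   parallelForEach(outputSections, [&](OutputSection *sec) {
//     // Each section writes to a disjoint region of the output buffer,
//     // so no locking is needed.
//     sec->writeTo(buf);
//   });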
//
// For cases such as the latter, we can use parallel algorithms to
// deal with massive data. We have to write a tailored algorithm for each
// problem, but the complexity of multi-threading is isolated in a single
// pass and doesn't affect the linker's overall design.
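//
// As a sketch of such a tailored pass (shards, shardCount, and
// deduplicate are hypothetical names, not the actual implementation):
// partition strings by hash so that each shard can be processed without
// coordination, then combine the per-shard results.
//
//   parallelForEachN(0, shardCount, [&](size_t i) {
//     shards[i].deduplicate(); // Shards share no strings, so no locks.
//   });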
//
// The above approach seems to be working fairly well. As an example, when
// linking Chromium (output size 1.6 GB), using 4 cores reduces latency to
// 75% compared to single core (from 12.66 seconds to 9.55 seconds) on my
// Ivy Bridge Xeon 2.8 GHz machine. Using 40 cores reduces it to 63% (from
// 12.66 seconds to 7.95 seconds). Because of Amdahl's law, the speedup is
// not linear, but it still gets faster as you add more cores.
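//
// As a back-of-the-envelope check, Amdahl's law gives speedup(n) =
// 1 / (s + (1 - s) / n), where s is the serial fraction of the work.
// The measured speedups above (about 1.33x on 4 cores and 1.59x on 40
// cores) both solve to s of roughly 0.6-0.7, which is why extra cores
// yield diminishing returns here.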
//
// On a final note, if you are trying to optimize, keep the axiom "don't
// guess, measure!" in mind. Some important passes of the linker are not
// that slow. For example, resolving all symbols is not a very heavy pass,
// although it would be very hard to parallelize. You want to first
// identify a slow pass and then optimize it.
//
//===----------------------------------------------------------------------===//

#ifndef LLD_COMMON_THREADS_H
#define LLD_COMMON_THREADS_H

#include "llvm/Support/Parallel.h"

namespace lld {

// When false, the parallel* helpers below fall back to sequential
// execution.
extern bool threadsEnabled;

template <typename R, class FuncTy> void parallelForEach(R &&range, FuncTy fn) {
  if (threadsEnabled)
    for_each(llvm::parallel::par, std::begin(range), std::end(range), fn);
  else
    for_each(llvm::parallel::seq, std::begin(range), std::end(range), fn);
}

inline void parallelForEachN(size_t begin, size_t end,
                             llvm::function_ref<void(size_t)> fn) {
  if (threadsEnabled)
    for_each_n(llvm::parallel::par, begin, end, fn);
  else
    for_each_n(llvm::parallel::seq, begin, end, fn);
}

template <typename R, class FuncTy> void parallelSort(R &&range, FuncTy fn) {
  if (threadsEnabled)
    sort(llvm::parallel::par, std::begin(range), std::end(range), fn);
  else
    sort(llvm::parallel::seq, std::begin(range), std::end(range), fn);
}

} // namespace lld

#endif