Pull in r219009 from upstream llvm trunk (by Adam Nemet):

  [ISel] Keep matching state consistent when folding during X86 address match
  In the X86 backend, matching an address is initiated by the 'addr' complex
  pattern and its friends. During this process we may reassociate and-of-shift
  into shift-of-and (FoldMaskedShiftToScaledMask) to allow folding of the
  shift into the scale of the address.
  However, as demonstrated by the testcase, this can trigger CSE not only of
  the shift and the AND, which the code is prepared for, but also of the
  underlying load node. In the testcase this node is sitting in the
  RecordedNodes and MatchScope data structures of the matcher and becomes a
  deleted node upon CSE. Returning from the complex pattern function, we try
  to access it again and hit an assert because the node is no longer a load,
  even though this was checked before.
  Now, obviously, changing the DAG this late is bending the rules, but I think
  it makes sense somewhat. Outside of addresses we prefer and-of-shift because
  it may lead to smaller immediates (FoldMaskAndShiftToScale is an even better
  example because it creates a non-canonical node). We currently don't
  recognize addresses during DAGCombiner, where arguably this canonicalization
  should be performed. On the other hand, having this in the matcher allows us
  to cover all the cases where an address can be used in an instruction.
  I've also talked a little bit to Dan Gohman on llvm-dev, who added the RAUW
  for the new shift node in FoldMaskedShiftToScaledMask. This RAUW is
  responsible for initiating the recursive CSE on users
  (http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-September/076903.html), but
  it is not strictly necessary since the shift is hooked into the visited
  user. Of course it's safer to keep the DAG consistent at all times (e.g. for
  an accurate number of uses, etc.).
  So rather than changing the fundamentals, I've decided to continue along the
  lines of the previous patches and detect the CSE. This patch installs a very
  targeted DAGUpdateListener for the duration of a complex-pattern match and
  updates the matching state accordingly. (Previous patches used HandleSDNode
  to detect the CSE, but that's not practical here.) The listener is only
  installed on X86.
  I tested that there is no measurable overhead due to this while running
  through the spec2k BC files with llc. The only thing we pay for is the
  creation of the listener. The callback never triggers in spec2k since this
  is a corner case.
  Fixes rdar://problem/18206171
This fixes a possible crash in x86 code generation when compiling recent
llvm/clang trunk sources.
Introduced here: http://svnweb.freebsd.org/changeset/base/286033
Index: include/llvm/CodeGen/SelectionDAGISel.h
===================================================================
--- include/llvm/CodeGen/SelectionDAGISel.h
+++ include/llvm/CodeGen/SelectionDAGISel.h
@@ -238,6 +238,12 @@ class SelectionDAGISel : public MachineFunctionPas
                            const unsigned char *MatcherTable,
+  /// \brief Return true if complex patterns for this target can mutate the
+  /// DAG.
+  virtual bool ComplexPatternFuncMutatesDAG() const {
+    return false;
+  }
   // Calls to these functions are generated by tblgen.
Index: lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
===================================================================
--- lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
+++ lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
@@ -2345,6 +2345,45 @@ struct MatchScope {
   bool HasChainNodesMatched, HasGlueResultNodesMatched;
+/// \brief A DAG update listener to keep the matching state
+/// (i.e. RecordedNodes and MatchScope) up to date if the target is allowed to
+/// change the DAG while matching. X86 addressing mode matcher is an example
+/// for this.
+class MatchStateUpdater : public SelectionDAG::DAGUpdateListener
+{
+  SmallVectorImpl<std::pair<SDValue, SDNode*> > &RecordedNodes;
+  SmallVectorImpl<MatchScope> &MatchScopes;
+public:
+  MatchStateUpdater(SelectionDAG &DAG,
+                    SmallVectorImpl<std::pair<SDValue, SDNode*> > &RN,
+                    SmallVectorImpl<MatchScope> &MS) :
+    SelectionDAG::DAGUpdateListener(DAG),
+    RecordedNodes(RN), MatchScopes(MS) { }
+
+  void NodeDeleted(SDNode *N, SDNode *E) {
+    // Some early-returns here to avoid the search if we deleted the node or
+    // if the update comes from MorphNodeTo (MorphNodeTo is the last thing we
+    // do, so it's unnecessary to update matching state at that point).
+    // Neither of these can occur currently because we only install this
+    // update listener while matching a complex pattern.
+    if (!E || E->isMachineOpcode())
+      return;
+
+    // Performing linear search here does not matter because we almost never
+    // run this code. You'd have to have a CSE during complex pattern
+    // matching.
+    for (SmallVectorImpl<std::pair<SDValue, SDNode*> >::iterator I =
+           RecordedNodes.begin(), IE = RecordedNodes.end(); I != IE; ++I)
+      if (I->first.getNode() == N)
+        I->first.setNode(E);
+
+    for (SmallVectorImpl<MatchScope>::iterator I = MatchScopes.begin(),
+           IE = MatchScopes.end(); I != IE; ++I)
+      for (SmallVector<SDValue, 4>::iterator J = I->NodeStack.begin(),
+             JE = I->NodeStack.end(); J != JE; ++J)
+        if (J->getNode() == N)
+          J->setNode(E);
+  }
+};
 SDNode *SelectionDAGISel::
@@ -2599,6 +2638,14 @@ SelectCodeCommon(SDNode *NodeToMatch, const unsign
       unsigned CPNum = MatcherTable[MatcherIndex++];
       unsigned RecNo = MatcherTable[MatcherIndex++];
       assert(RecNo < RecordedNodes.size() && "Invalid CheckComplexPat");
+
+      // If target can modify DAG during matching, keep the matching state
+      // consistent.
+      OwningPtr<MatchStateUpdater> MSU;
+      if (ComplexPatternFuncMutatesDAG())
+        MSU.reset(new MatchStateUpdater(*CurDAG, RecordedNodes,
+                                        MatchScopes));
+
       if (!CheckComplexPattern(NodeToMatch, RecordedNodes[RecNo].second,
                                RecordedNodes[RecNo].first, CPNum,
Index: lib/Target/X86/X86ISelDAGToDAG.cpp
===================================================================
--- lib/Target/X86/X86ISelDAGToDAG.cpp
+++ lib/Target/X86/X86ISelDAGToDAG.cpp
@@ -290,6 +290,13 @@ namespace {
     const X86InstrInfo *getInstrInfo() const {
       return getTargetMachine().getInstrInfo();
     }
+
+    /// \brief Address-mode matching performs shift-of-and to and-of-shift
+    /// reassociation in order to expose more scaled addressing
+    /// opportunities.
+    bool ComplexPatternFuncMutatesDAG() const {
+      return true;
+    }
Index: test/CodeGen/X86/addr-mode-matcher.ll
===================================================================
--- test/CodeGen/X86/addr-mode-matcher.ll
+++ test/CodeGen/X86/addr-mode-matcher.ll
+; RUN: llc < %s | FileCheck %s
+
+; This testcase used to hit an assert during ISel. For details, see the big
+; comment inside the function.
+
+; The AND should be turned into a subreg access.
+; The shift (leal) should be folded into the scale of the address in the load.
+; CHECK: movl {{.*}},4),
+
+target datalayout = "e-m:o-p:32:32-f64:32:64-f80:128-n8:16:32-S128"
+target triple = "i386-apple-macosx10.6.0"
+
+define void @foo(i32 %a) {
+bb:
+  br label %bb1692
+
+bb1692:
+  %tmp1694 = phi i32 [ 0, %bb ], [ %tmp1745, %bb1692 ]
+  %xor = xor i32 0, %tmp1694
+
+; %load1 = (load (and (shl %xor, 2), 1020))
+  %tmp1701 = shl i32 %xor, 2
+  %tmp1702 = and i32 %tmp1701, 1020
+  %tmp1703 = getelementptr inbounds [1028 x i8]* null, i32 0, i32 %tmp1702
+  %tmp1704 = bitcast i8* %tmp1703 to i32*
+  %load1 = load i32* %tmp1704, align 4
+
+; %load2 = (load (shl (and %xor, 255), 2))
+  %tmp1698 = and i32 %xor, 255
+  %tmp1706 = shl i32 %tmp1698, 2
+  %tmp1707 = getelementptr inbounds [1028 x i8]* null, i32 0, i32 %tmp1706
+  %tmp1708 = bitcast i8* %tmp1707 to i32*
+  %load2 = load i32* %tmp1708, align 4
+
+  %tmp1710 = or i32 %load2, %a
+
+; While matching the xor we address-match %load1. The and-of-shift
+; reassociation in address matching transforms this into a shift-of-and, and
+; the resulting node becomes identical to %load2. CSE replaces %load1, which
+; leaves its references in MatchScope and RecordedNodes stale.
+  %tmp1711 = xor i32 %load1, %tmp1710
+
+  %tmp1744 = getelementptr inbounds [256 x i32]* null, i32 0, i32 %tmp1711
+  store i32 0, i32* %tmp1744, align 4
+  %tmp1745 = add i32 %tmp1694, 1
+  indirectbr i8* undef, [label %bb1756, label %bb1692]
+
+bb1756:
+  br label %bb5721
+
+bb5721:
+  indirectbr i8* undef, [label %bb5721, label %bb5736]
+
+bb5736:
+  ret void
+}