.\" Copyright (c) 2010 The FreeBSD Foundation
.\" All rights reserved.
.\" This software was developed by Pawel Jakub Dawidek under sponsorship from
.\" the FreeBSD Foundation.
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.Nd "Highly Available Storage daemon"
daemon is responsible for managing highly available GEOM providers.
allows the transparent storage of data on two physically separated machines
connected over a TCP/IP network.
Only one machine (cluster node) can actively use storage provided by
This machine is called the primary node.
daemon operates at the block level, which makes it transparent to file
systems and applications.
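.Pp
Each replicated resource is described in the
.Xr hast.conf 5
configuration file.
A minimal two-node configuration might look like the following sketch;
the resource name, host names, addresses and backing providers are only
illustrative:
.Bd -literal -offset indent
resource shared {
	on nodeA {
		local /dev/da0
		remote 10.0.0.2
	}
	on nodeB {
		local /dev/da0
		remote 10.0.0.1
	}
}
.Ed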
daemon, which starts a new worker process as soon as the role for the given
resource changes to primary, or as soon as the role for the given
resource changes to secondary and the remote (primary) node
successfully connects to it.
Every worker process gets a new process title (see
.Xr setproctitle 3 ) ,
which describes its role and the resource it controls.
.Bd -literal -offset indent
hastd: <resource name> (<role>)
.Ed
operates in the primary role for the given resource, a corresponding
disk-like device (GEOM provider) is created.
File systems and applications can use this provider to send I/O requests.
Every write, delete and flush operation
.Dv ( BIO_WRITE , BIO_DELETE , BIO_FLUSH )
is sent to the local component and replicated to the remote (secondary) node if it
are handled locally unless an I/O error occurs or the local version of the data
is not yet up-to-date (synchronization is in progress).
daemon uses the GEOM Gate class to receive I/O requests from the
in-kernel GEOM infrastructure.
module is loaded automatically if the kernel was not compiled with the
.Bd -ragged -offset indent
.Cd "options GEOM_GATE"
.Ed
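.Pp
If the kernel was built without this option, the geom_gate kernel module
can also be loaded by hand before the daemon is started, for example:
.Bd -literal -offset indent
# kldload geom_gate
.Ed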
The connection between two
daemons is always initiated from the one running as primary to the one
running as secondary.
is unable to connect or the connection fails, it will try to re-establish
the connection every few seconds.
Once the connection is established, the primary
will synchronize every extent that was modified during the connection outage
It is possible that, in case of a connection outage between the nodes,
the primary role for the given resource will be configured on both nodes.
This in turn leads to incompatible data modifications.
Such a condition is called split-brain and cannot be automatically
daemon, as this will most likely lead to data corruption or loss of
Even though it cannot be fixed by
itself, it will be detected and further connections between independently
modified nodes will not be possible.
Once this situation is manually resolved by an administrator, the resource
on one of the nodes can be initialized (erasing local data), which makes
a connection to the remote node possible again.
Connection of a freshly initialized component will trigger full resource
synchronization.
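.Pp
For example, assuming the resource is named
.Dq shared
and the administrator has decided that the data on nodeB is to be
discarded, the split-brain condition could be resolved on nodeB with a
sequence like:
.Bd -literal -offset indent
nodeB# hastctl role init shared
nodeB# hastctl create shared
nodeB# hastctl role secondary shared
.Ed
.Pp
The freshly initialized node then reconnects and receives a full copy of
the data from the primary.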
daemon itself never picks up its role automatically.
The role has to be configured with the
control utility by additional software like
that can reliably manage role separation and switch the secondary node to
the primary role in case of the original primary's failure.
daemon can be started with the following command line arguments:
.Bl -tag -width ".Fl P Ar pidfile"
Specify an alternative location of the configuration file.
The default location is
Print or log debugging information.
This option can be specified multiple times to raise the verbosity
daemon in the foreground.
starts in the background.
Specify an alternative location of the file where the main process PID will be
The default location is
.Pa /var/run/hastd.pid .
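.Pp
For example, to run the daemon in the foreground with extra debugging
output and explicit configuration and PID file locations (the default
paths are used here only for illustration):
.Bd -literal -offset indent
# hastd -F -dd -c /etc/hast.conf -P /var/run/hastd.pid
.Ed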
.Bl -tag -width ".Pa /var/run/hastd.pid" -compact
.It Pa /etc/hast.conf
The configuration file for
.It Pa /var/run/hastctl
Control socket used by the
control utility to communicate with
.It Pa /var/run/hastd.pid
The default location of the
Exit status is 0 on success, or one of the values described in
Set the role for resource
Create a file system on the
provider and mount it.
.Bd -literal -offset indent
nodeB# hastctl role secondary shared
nodeA# hastctl role primary shared
nodeA# newfs -U /dev/hast/shared
nodeA# mount -o noatime /dev/hast/shared /shared
.Ed
.An Pawel Jakub Dawidek Aq pjd@FreeBSD.org
under sponsorship of the FreeBSD Foundation.