Not all of these are, strictly speaking, “frequently asked;” some represent trouble found that seemed worth documenting.
1. Slony-I FAQ: Building and Installing Slony-I

1.1. I am using Frotznik Freenix 4.5, with its FFPM (Frotznik Freenix Package Manager) package management system. It comes with FFPM packages for PostgreSQL 7.4.7, which are what I am using for my databases, but they don't include Slony-I in the packaging. How do I add Slony-I to this?
Frotznik Freenix is new to me, so it's a bit dangerous to give really hard-and-fast definitive answers. The answers differ somewhat between the various combinations of PostgreSQL and Slony-I versions; the newer versions are generally somewhat easier to cope with than the older ones. In general, you almost certainly need to compile Slony-I from sources; depending on the versions of both Slony-I and PostgreSQL, you may also need to compile PostgreSQL from scratch. (Whether you actually need to use the resulting PostgreSQL build is another matter; you probably don't.)
In effect, the “worst case” scenario takes place if you are using a version of Slony-I earlier than 1.1 with an “elderly” version of PostgreSQL, in which case you can expect to need to compile PostgreSQL from scratch in order to have everything the Slony-I compile needs, even though you are using a “packaged” version of PostgreSQL. If you are running a recent PostgreSQL and a recent Slony-I, the codependencies can be fairly small, and you may not need extra PostgreSQL sources. These improvements should ease the production of Slony-I packages, so that you might soon even be able to hope to avoid compiling Slony-I at all.
1.2. I tried building Slony-I 1.1 and got the following error message:
configure: error: Headers for libpqserver are not found in the includeserverdir. This is the path to postgres.h. Please specify the includeserverdir with --with-pgincludeserverdir=<dir>
You are almost certainly running PostgreSQL 7.4 or earlier, where the server headers are not installed by default if you just do a make install of PostgreSQL. You need to install the server headers as well, which is done via the command make install-all-headers.
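A minimal sketch of the fix, assuming a PostgreSQL 7.4 source tree and an example install prefix of /usr/local/pgsql (adjust the paths to your own installation):
# in the PostgreSQL source tree, after the usual build and install:
make install-all-headers
# then re-run the Slony-I configure, pointing it at the server include directory:
./configure --with-pgincludeserverdir=/usr/local/pgsql/include/server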
1.3. Slony-I seemed to compile fine; now, when I run a slon, some events are moving around, but no replication is taking place. Slony logs might look like the following:
DEBUG1 remoteListenThread_1: connected to 'host=host004 dbname=pgbenchrep user=postgres port=5432'
ERROR remoteListenThread_1: "select ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3, ev_data4, ev_data5, ev_data6, ev_data7, ev_data8 from "_pgbenchtest".sl_event e where (e.ev_origin = '1' and e.ev_seqno > '1') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Operation now in progress
Alternatively, it may appear like...
ERROR remoteListenThread_2: "select ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3, ev_data4, ev_data5, ev_data6, ev_data7, ev_data8 from "_sl_p2t2".sl_event e where (e.ev_origin = '2' and e.ev_seqno > '0') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Error 0
On AIX and Solaris (and possibly elsewhere), both Slony-I and PostgreSQL must be compiled with the --enable-thread-safety option. What breaks here is that libc (threadsafe) and libpq (non-threadsafe) use different memory locations for errno, thereby leading to the request failing. Problems like this crop up with disadmirable regularity on AIX and Solaris; it may take something of an “object code audit” to make sure that ALL of the necessary components have been compiled and linked with thread safety enabled; the problem has turned up when just one component in the chain was built without it.

Note that with libpq version 7.4.2 on Solaris, a further thread patch was required; a similar patch is also needed for PostgreSQL version 8.0.
2. Slony-I FAQ: Connection Issues

2.1. I looked for the _clustername namespace on one of my nodes, and it wasn't there.
If the DSNs are wrong, then the slon instances can't connect to the nodes, and nothing gets set up on them; the nodes remain entirely untouched. Recheck the connection configuration. By the way, since slon links to libpq, you could also have password information stored in $HOME/.pgpass, partially filling in right/wrong authentication information there.
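As a quick sanity check, you can hand the exact DSN from your slon configuration to psql and verify both that it connects and that the cluster schema exists; this is a sketch using the conninfo and cluster name that appear in the logs earlier in this FAQ, so substitute your own:
psql "host=host004 dbname=pgbenchrep user=postgres port=5432" -c "select nspname from pg_catalog.pg_namespace where nspname = '_pgbenchtest';"
If the connection fails, or the namespace doesn't come back, the problem is in the conninfo (or in pg_hba.conf / .pgpass), not in Slony-I itself.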
2.2. I created a “superuser” account, slony, to run replication activities, making it a superuser via a direct update to pg_shadow. Unfortunately, I ran into a problem the next time I subscribed to a new set.
DEBUG1 copy_set 28661
DEBUG1 remoteWorkerThread_1: connected to provider DB
DEBUG2 remoteWorkerThread_78: forward confirm 1,594436 received by 78
DEBUG2 remoteWorkerThread_1: copy table public.billing_discount
ERROR remoteWorkerThread_1: "select "_mycluster".setAddTable_int(28661, 51, 'public.billing_discount', 'billing_discount_pkey', 'Table public.billing_discount with candidate primary key billing_discount_pkey'); " PGRES_FATAL_ERROR ERROR: permission denied for relation pg_class CONTEXT: PL/pgSQL function "altertableforreplication" line 23 at select into variables PL/pgSQL function "setaddtable_int" line 76 at perform
WARN remoteWorkerThread_1: data copy for set 28661 failed - sleep 60 seconds
This continued to fail, over and over, until I restarted the slon to connect as postgres instead.
The problem is fairly self-evident: permission is being denied on the system table pg_class.
The “fix” is thus:
update pg_shadow set usesuper = 't', usecatupd='t' where usename = 'slony';
In version 8.1 and higher, you may also need the following:
update pg_authid set rolcatupdate = 't', rolsuper='t' where rolname = 'slony';
2.3. I'm trying to get a slave subscribed, and get the following messages in the logs:
DEBUG1 copy_set 1
DEBUG1 remoteWorkerThread_1: connected to provider DB
WARN remoteWorkerThread_1: transactions earlier than XID 127314958 are still in progress
WARN remoteWorkerThread_1: data copy for set 1 failed - sleep 60 seconds
There is evidently some reasonably old outstanding transaction blocking Slony-I from processing the sync. You might want to take a look at pg_locks to see what's up:
sampledb=# select * from pg_locks where transaction is not null order by transaction;
relation | database | transaction | pid | mode | granted
----------+----------+-------------+---------+---------------+---------
| | 127314921 | 2605100 | ExclusiveLock | t
| | 127326504 | 5660904 | ExclusiveLock | t
(2 rows)
See? 127314921 is indeed older than 127314958, and it's still running. A long-running G/L report, a runaway RT3 query, a pg_dump: all will open up transactions that may run for substantial periods of time. Until they complete or are interrupted, you will continue to see the message “data copy for set 1 failed - sleep 60 seconds”.
By the way, if there is more than one database on the PostgreSQL cluster, and activity is taking place on the OTHER database, that will also lead to “transactions earlier than XID whatever” being found to be still in progress. The fact that it's a separate database on the cluster is irrelevant; Slony-I will wait until those old transactions terminate.
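To see what those old transactions are actually doing, you can look the process IDs up in pg_stat_activity; a sketch (on the PostgreSQL releases of that era the pid column is called procpid, and current_query is only populated if stats_command_string is enabled):
select procpid, usename, query_start, current_query
from pg_stat_activity
where procpid in (2605100, 5660904);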
2.4. Same as the above. What I forgot to mention, as well, was that I was trying to add TWO subscribers, concurrently.
That doesn't work out: Slony-I cannot run the subscription COPYs for both new subscribers concurrently. Note what is running on the provider:
$ ps -aef | egrep '[2]605100'
postgres 2605100 205018 0 18:53:43 pts/3 3:13 postgres: postgres sampledb localhost COPY
This happens to be the COPY transaction involved in setting up the subscription for the first node. This has the (perhaps unfortunate) implication that you cannot populate two slaves concurrently from a single provider. You have to subscribe one to the set, and only once it has completed setting up the subscription (copying table contents and such) can the second subscriber start setting up its subscription.
2.5. We got bitten by something we didn't foresee when completely uninstalling a slony replication cluster from the master and slave...
Warning: MAKE SURE YOU STOP YOUR APPLICATION RUNNING AGAINST YOUR MASTER DATABASE WHEN REMOVING THE WHOLE SLONY CLUSTER, or at least re-cycle all your open connections after the event!
The connections “remember” or refer to OIDs which are removed by the uninstall node script. And you will get lots of errors as a result...
There are two notable areas of PostgreSQL that cache query plans and OIDs: prepared statements, and PL/pgSQL functions. The problem isn't particularly a Slony-I one; it would occur any time such significant changes are made to the database schema. It shouldn't be expected to lead to data loss, but you'll see a wide range of OID-related errors.
The problem occurs when you are using some sort of “connection pool” that keeps recycling old connections. If you restart the application after the uninstall, the new connections will create new query plans, and the errors will go away. Likewise, if your connection pool drops the old connections and creates new ones, the new ones will have new query plans, and the errors will go away.
In our code we drop the connection on any error we cannot map to an expected condition; that eventually recycles all connections after just one error per connection. Of course, if the error surfaces as something recognized, such as a constraint violation, this won't help, and if the problem is persistent, the connections will keep being recycled, which defeats the benefit of the pooling. In the latter case the pooling code could also alert an administrator to take a look...
2.6. I upgraded my cluster to Slony-I version 1.2. I'm now getting the following notice in the logs:
NOTICE: Slony-I: log switch to sl_log_2 still in progress - sl_log_1 not truncated
Both sl_log_1 and sl_log_2 keep growing, and sl_log_1 never gets truncated. What's wrong?
This is symptomatic of the same issue as above with dropping replication: if there are still old connections lingering that are using old query plans referencing the old stored functions, inserts keep landing in sl_log_1, and the log switch can never complete. Closing those connections and opening new ones will resolve the issue.
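A sketch of how to spot lingering connections that predate the upgrade; backend_start is available in pg_stat_activity on PostgreSQL 8.1 and later (on older versions, look at the process start times in ps instead):
select procpid, usename, backend_start, current_query
from pg_stat_activity
order by backend_start;
Any backend whose backend_start is earlier than the moment the cluster was upgraded to 1.2 is a candidate for being closed and reopened.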
In the longer term, there is an item on the PostgreSQL TODO list to implement dependency checking that would flush cached query plans when dependent objects change.
2.7. I pointed a subscribing node to a different provider and it stopped replicating
We noticed this happening when we wanted to re-initialize a node, in a configuration where node 1 was the origin and node 3 was subscribed via node 2. The subscription for node 3 was changed to have node 1 as provider, and we did DROP SET / SUBSCRIBE SET for node 2 to get it repopulating. Unfortunately, replication suddenly stopped to node 3. The problem was that there was not a suitable set of “listener paths” in sl_listen to allow the events from node 1 to propagate to node 3. The events were going through node 2, and blocking behind the SUBSCRIBE SET event that node 2 was working on. The following slonik script dropped out the listen paths where node 3 had to go through node 2, and added in direct listens between nodes 1 and 3.
cluster name = oxrslive;
node 1 admin conninfo='host=32.85.68.220 dbname=oxrslive user=postgres port=5432';
node 2 admin conninfo='host=32.85.68.216 dbname=oxrslive user=postgres port=5432';
node 3 admin conninfo='host=32.85.68.244 dbname=oxrslive user=postgres port=5432';
node 4 admin conninfo='host=10.28.103.132 dbname=oxrslive user=postgres port=5432';
try {
  store listen (origin = 1, receiver = 3, provider = 1);
  store listen (origin = 3, receiver = 1, provider = 3);
  drop listen (origin = 1, receiver = 3, provider = 2);
  drop listen (origin = 3, receiver = 1, provider = 2);
}
Immediately after this script was run, SYNC events started propagating to node 3 again.
The issues of “listener paths” are discussed further at Section 9, “Slony-I listen paths”.
2.8. I was starting a slon, and got the following “FATAL” messages in its logs. What's up???
2006-03-29 16:01:34 UTC CONFIG main: slon version 1.2.0 starting up
2006-03-29 16:01:34 UTC DEBUG2 slon: watchdog process started
2006-03-29 16:01:34 UTC DEBUG2 slon: watchdog ready - pid = 28326
2006-03-29 16:01:34 UTC DEBUG2 slon: worker process created - pid = 28327
2006-03-29 16:01:34 UTC CONFIG main: local node id = 1
2006-03-29 16:01:34 UTC DEBUG2 main: main process started
2006-03-29 16:01:34 UTC CONFIG main: launching sched_start_mainloop
2006-03-29 16:01:34 UTC CONFIG main: loading current cluster configuration
2006-03-29 16:01:34 UTC CONFIG storeSet: set_id=1 set_origin=1 set_comment='test set'
2006-03-29 16:01:34 UTC DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
2006-03-29 16:01:34 UTC DEBUG2 main: last local event sequence = 7
2006-03-29 16:01:34 UTC CONFIG main: configuration complete - starting threads
2006-03-29 16:01:34 UTC DEBUG1 localListenThread: thread starts
2006-03-29 16:01:34 UTC FATAL localListenThread: "select "_test1538".cleanupNodelock(); insert into "_test1538".sl_nodelock values ( 1, 0, "pg_catalog".pg_backend_pid()); " - ERROR: duplicate key violates unique constraint "sl_nodelock-pkey"
2006-03-29 16:01:34 UTC FATAL Do you already have a slon running against this node?
2006-03-29 16:01:34 UTC FATAL Or perhaps a residual idle backend connection from a dead slon?
The table sl_nodelock is used as an “interlock” to ensure that only one slon process is servicing a given node at any one time.
This error message is typically a sign that you have started up a second slon process for a given node. The slon asks the obvious question: “Do you already have a slon running against this node?”
Supposing you experience some sort of network outage, the connection between slon and database may fail, and the slon may figure this out long before the PostgreSQL instance it was connected to does. The result is that there will be some number of idle connections left on the database server, which won't be closed out until TCP/IP timeouts complete, which seems to normally take about two hours. For that two hour period, the slon will try to connect, over and over, and will get the above fatal message, over and over. An administrator may clean this out by logging onto the database server and terminating the leftover backend that is holding the node lock (or by simply waiting out the timeout).
You can mostly avoid this by making sure that slon processes always run somewhere near the server that each one manages. If the slon runs on the same server as the database it manages, any “networking failure” that could interrupt local connections would be likely to be serious enough to threaten the entire server.
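To identify the leftover backend, you can check which connection is holding the node lock; this is a sketch that assumes a cluster schema named "_test1538" (as in the log above) and the sl_nodelock column names of the 1.2-era schema, so verify them against your own installation:
select nl.nl_nodeid, nl.nl_backendpid, a.usename, a.query_start
from "_test1538".sl_nodelock nl
left join pg_stat_activity a on a.procpid = nl.nl_backendpid;
If nl_backendpid shows up in pg_stat_activity as an idle connection that no running slon owns any more, that is the backend to terminate.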
2.9. When can I shut down slon processes?
Generally, it's no big deal to shut down a slon process. Each one is “merely” a PostgreSQL client, managing one node, which spawns threads to manage receiving events from other nodes.
The “event listening” threads are no big deal; they are doing nothing fancier than periodically checking remote nodes to see if they have work to be done on this node. If you kill off the slon, these threads will be closed, which should have little or no impact on much of anything. Events generated while the slon is down will be picked up when it is restarted.
The “node managing” thread is a bit more interesting; most of the time, you can expect, on a subscriber, for this thread to be busy applying SYNC events. The only situation where this will cause particular “heartburn” is if the event being processed was one which takes a long time to process, such as the COPY_SET for a large replication set; in that case the work in progress rolls back and has to start over when the slon returns.
The other thing that might cause trouble is if the slon runs fairly distant from the nodes that it connects to; you could discover that database connections are left hanging around on those nodes, much as described in the question about residual idle backends above.
There is one other case that could cause trouble: when the slon managing the origin node is not running, no SYNC events are generated there. Updates keep accumulating, and when that slon is restarted they get grouped into one very large SYNC (see the related question in the Performance section).
2.10. Are there risks to doing so? How about benefits?
In short, if you don't have something like an 18-hour COPY_SET of a large data set under way, shutting down slon processes is fairly benign; events simply queue up and are processed when the slon is restarted.
3. Slony-I FAQ: Configuration Issues

3.1. Slonik fails - cannot load PostgreSQL library - PGRES_FATAL_ERROR LOAD '$libdir/xxid';
When I run the sample setup script I get an error message similar to:
PGRES_FATAL_ERROR load '$libdir/xxid'; - ERROR: LOAD: could not open file '$libdir/xxid': No such file or directory
Evidently, you haven't got the xxid.so library in the $libdir directory in use by the PostgreSQL instance you are connecting to.
This may also point to there being some other mismatch between the PostgreSQL binary instance and the Slony-I instance. If you compiled Slony-I yourself, on a machine that may have multiple PostgreSQL builds “lying around,” it's possible that the slon or slonik binaries are asking to load something that isn't actually in the library directory for the PostgreSQL database cluster that it's hitting. Long and short: This points to a need to “audit” what installations of PostgreSQL and Slony-I you have in place on the machine(s). Unfortunately, just about any mismatch will cause things not to link up quite right. See also thread safety concerning threading issues on Solaris ... Life is simplest if you only have one set of PostgreSQL binaries on a given server; in that case, there isn't a “wrong place” in which Slony-I components might get installed. If you have several software installs, you'll have to verify that the right versions of Slony-I components are associated with the right PostgreSQL binaries. |
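A quick way to check whether a given database can actually see the Slony-I support library is to try loading it by hand from psql, connected to that database as a superuser (a sketch; xxid is the library named in the error above):
LOAD '$libdir/xxid';
show dynamic_library_path;
If the LOAD fails, compare the library directory reported by pg_config --pkglibdir for the PostgreSQL build actually serving that database against the directory into which the Slony-I install put xxid.so and slony1_funcs.so.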
|
3.2. I tried creating a CLUSTER NAME with a "-" in it. That didn't work.
Slony-I uses the same rules for unquoted identifiers as the PostgreSQL main parser, so no, you probably shouldn't put a "-" in your identifier name. You may be able to defeat this by putting “quotes” around identifier names, but it's still liable to bite you somewhere, so this is probably not worth working around.
3.3. ps finds passwords on command line. If I run a ps command, I, and everyone else, can see passwords on the command line.
Take the passwords out of the Slony configuration, and put them into $HOME/.pgpass instead.
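The format of $HOME/.pgpass is one line per entry, hostname:port:database:username:password, with * usable as a wildcard in the first four fields; libpq ignores the file unless it is readable only by its owner (chmod 600). A hypothetical entry matching the DSNs used elsewhere in this FAQ:
host004:5432:pgbenchrep:postgres:examplepassword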
3.4. Table indexes with FQ namespace names
set add table (set id = 1, origin = 1, id = 27, full qualified name = 'nspace.some_table', key = 'key_on_whatever', comment = 'Table some_table in namespace nspace with a candidate primary key');
If you have a table in a namespace, the table name must be fully qualified, but the key (the name of the candidate primary key index) must not be: the index is assumed to live in the same namespace as its table, so specify key = 'key_on_whatever' rather than key = 'nspace.key_on_whatever', as in the example above.
3.5. Replication has fallen behind, and it appears that the queries to draw data from sl_log_1/sl_log_2 are taking a long time to pull just a few SYNCs.
Until version 1.1.1, there was only one index on sl_log_1/sl_log_2, and if there were multiple replication sets, some of the columns on the index would not provide meaningful selectivity. If there is no index on column log_xid, consider adding one, along the lines of the sketch below.
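A sketch of the kind of index that can help on a 1.0/1.1-era schema, using the cluster name that appears elsewhere in this FAQ; note that log_xid is of the Slony-I xxid type, so you may need to name its operator class explicitly, as shown (check the existing index definitions on sl_log_1 in your own schema for the exact operator class name before copying this):
create index sl_log_1_xid_idx on "_oxrslive".sl_log_1 (log_xid "_oxrslive".xxid_ops);
On 1.1.1 and later the schema ships with more suitable indexes, so this mostly matters for older installations.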
3.6. I need to rename a column that is in the primary key for one of my replicated tables. That seems pretty dangerous, doesn't it? I have to drop the table out of replication and recreate it, right?
Actually, this is a scenario which works out remarkably cleanly. Slony-I does indeed make intense use of the primary key columns, but it does so in a manner that allows this sort of change to be made very nearly transparently. Suppose you revise a column name with an ALTER TABLE ... RENAME COLUMN statement. The ideal and proper handling of this change is to use EXECUTE SCRIPT to deploy the alteration, which ensures it is applied at exactly the right point in the transaction stream on each node. Interestingly, that isn't forcibly necessary. As long as the alteration is applied on the replication set's origin before it is applied on the subscribers, things won't break irreparably; some SYNC events may fail on a subscriber in the window before the change reaches it, but they will be retried and will succeed once the schemas agree again.
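A sketch of the EXECUTE SCRIPT approach, assuming the DDL has been saved to /tmp/rename_column.sql, that the table belongs to set 1, and that node 1 is the set origin (the file name and IDs are examples; the usual cluster name / admin conninfo preamble is omitted):
execute script (
  set id = 1,
  filename = '/tmp/rename_column.sql',
  event node = 1
);
Slony-I then applies the script on the origin and replays it on each subscriber at the same position in the event stream, so the rename lands everywhere between the same two SYNCs.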
3.7. I have a PostgreSQL 7.2-based system that I really, really want to use Slony-I to help me upgrade it to 8.0. What is involved in getting Slony-I to work for that?
Rod Taylor has reported the following. This is approximately what you need to do: hack up a copy of Slony-I so that it can live without the features missing from 7.2 (notably namespaces), build it against both databases, and use that hacked-up version to replicate out of the 7.2 system. Of course, now that you have done all of the above, it's not compatible with standard Slony. So you either need to implement 7.2 support in a less hackish way, or you can also hack up Slony to work without schemas on newer versions of PostgreSQL so they can talk to each other. Almost immediately after getting the DB upgraded from 7.2 to 7.4, we deinstalled the hacked-up Slony (by hand for the most part), and started a migration from 7.4 to 7.4 on a different machine using the regular Slony. This was primarily to ensure we didn't keep our system catalogues, which had been manually fiddled with. All that said, we upgraded a few hundred GB from 7.2 to 7.4 with about 30 minutes of actual downtime (versus 48 hours for a dump/restore cycle) and no data loss.
That represents a sufficiently ugly set of “hackery” that the developers are exceedingly reluctant to let it anywhere near to the production code. If someone were interested in “productionizing” this, it would probably make sense to do so based on the Slony-I 1.0 branch, with the express plan of not trying to keep much in the way of forwards compatibility or long term maintainability of replicas. You should only head down this road if you are sufficiently comfortable with PostgreSQL and Slony-I that you are prepared to hack pretty heavily with the code. |
|
3.8. I had a network “glitch” that led to my using FAILOVER to fail over to an alternate node. The failure wasn't a disk problem that would corrupt databases; why do I need to rebuild the failed node from scratch?
The action of FAILOVER is to abandon the failed node so that no more Slony-I activity goes to or from that node. As soon as that takes place, the failed node will progressively fall further and further out of sync. |
|
The big problem with trying to recover the failed node is that it may contain updates that never made it out of the origin. If they get retried, on the new origin, you may find that you have conflicting updates. In any case, you do have a sort of “logical” corruption of the data even if there never was a disk failure making it “physical.” |
|
As discussed in Section 8, “Doing switchover and failover with Slony-I”, using FAILOVER should be considered a last resort, as it implies that you are abandoning the origin node as corrupted.
|
3.9.
After notification of a subscription on another node, replication falls over on one of the subscribers, with the following error message: ERROR remoteWorkerThread_1: "begin transaction; set transaction isolation level serializable; lock table "_livesystem".sl_config_lock; select "_livesystem".enableSubscription(25506, 1, 501); notify "_livesystem_Event"; notify "_livesystem_Confirm"; insert into "_livesystem".sl_event (ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type , ev_data1, ev_data2, ev_data3, ev_data4 ) values ('1', '4896546', '2005-01-23 16:08:55.037395', '1745281261', '1745281262', '', 'ENABLE_SUBSCRIPTION', '25506', '1', '501', 't'); insert into "_livesystem".sl_confirm (con_origin, con_received, con_seqno, con_timestamp) values (1, 4, '4896546', CURRENT_TIMESTAMP); commit transaction;" PGRES_FATAL_ERROR ERROR: insert or update on table "sl_subscribe" violates foreign key constraint "sl_subscribe-sl_path-ref" DETAIL: Key (sub_provider,sub_receiver)=(1,501) is not present in table "sl_path". This is then followed by a series of failed syncs as the slon shuts down: DEBUG2 remoteListenThread_1: queue event 1,4897517 SYNC DEBUG2 remoteListenThread_1: queue event 1,4897518 SYNC DEBUG2 remoteListenThread_1: queue event 1,4897519 SYNC DEBUG2 remoteListenThread_1: queue event 1,4897520 SYNC DEBUG2 remoteWorker_event: ignore new events due to shutdown DEBUG2 remoteListenThread_1: queue event 1,4897521 SYNC DEBUG2 remoteWorker_event: ignore new events due to shutdown DEBUG2 remoteListenThread_1: queue event 1,4897522 SYNC DEBUG2 remoteWorker_event: ignore new events due to shutdown DEBUG2 remoteListenThread_1: queue event 1,4897523 SYNC |
If you see a slon shutting down with ignore new events due to shutdown log entries, you typically need to step back in the log to before they started failing to see indication of the root cause of the problem. |
|
In this particular case, the problem was that some of the STORE PATH commands had not yet made it to node 4 before the SUBSCRIBE SET command propagated. This demonstrates yet another example of the need to not do things in a rush; you need to be sure things are working right before making further configuration changes. |
|
3.10. I just used MOVE SET to move the origin to a new node. Unfortunately, some subscribers are still pointing to the former origin node, so I can't take it out of service for maintenance without stopping them from getting updates. What do I do?
You need to use SUBSCRIBE SET to alter the subscriptions for those nodes, having them subscribe to a provider that will be sticking around during the maintenance.
Warning: What you don't want to do is to UNSUBSCRIBE SET; that would require reloading all data for those nodes from scratch later.
|
3.11. See 3.9 above; this is the same scenario (some STORE PATH commands had not yet propagated before the SUBSCRIBE SET command did), and the same explanation and resolution apply.
3.12. Is the ordering of tables in a set significant?
Most of the time, it isn't. You might imagine it of some value to order the tables in some particular way in order that “parent” entries would make it in before their “children” in some foreign key relationship; that isn't the case since foreign key constraint triggers are turned off on subscriber nodes. |
|
(Jan Wieck comments:) The order of table ID's is only significant during a LOCK SET in preparation of switchover. If that order is different from the order in which an application is acquiring its locks, it can lead to deadlocks that abort either the application or slon. |
|
(David Parker) I ran into one other case where the ordering of tables in the set was significant: in the presence of inherited tables. If a child table appears before its parent in a set, then the initial subscription will end up deleting that child table's contents after it has possibly already received data, because the copy process performs a delete, not a delete only, so deleting the parent's contents also removes the rows already copied into the child.
3.13. If you have a slonik script something like this, it will hang on you and never complete, because you can't have a WAIT FOR EVENT inside a “try” block:
try {
  echo 'Moving set 1 to node 3';
  lock set (id=1, origin=1);
  echo 'Set locked';
  wait for event (origin = 1, confirmed = 3);
  echo 'Moving set';
  move set (id=1, old origin=1, new origin=3);
  echo 'Set moved - waiting for event to be confirmed by node 3';
  wait for event (origin = 1, confirmed = 3);
  echo 'Confirmed';
} on error {
  echo 'Could not move set for cluster foo';
  unlock set (id=1, origin=1);
  exit -1;
}
You must not invoke WAIT FOR EVENT inside a “try” block.
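A sketch of the same script restructured so that the wait happens outside the try block (IDs as in the example above; the cluster name / admin conninfo preamble is unchanged and omitted here):
try {
  echo 'Moving set 1 to node 3';
  lock set (id = 1, origin = 1);
  move set (id = 1, old origin = 1, new origin = 3);
} on error {
  echo 'Could not move set for cluster foo';
  unlock set (id = 1, origin = 1);
  exit -1;
}
wait for event (origin = 1, confirmed = 3);
echo 'Set moved and confirmed by node 3';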
3.14. Slony-I: cannot add table to currently subscribed set 1
I tried to add a table to a set, and got the following message:
Slony-I: cannot add table to currently subscribed set 1
You cannot add tables to sets that already have subscribers. The workaround is to create ANOTHER set, add the new tables to that new set, subscribe the same nodes that subscribe to "set 1" to the new set, and then merge the sets together, as in the sketch below.
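A sketch of that workaround in slonik, assuming the existing set is 1 with origin node 1 and a single subscriber node 2, and that set ID 999 and table ID 201 are free (all IDs and names here are examples; the cluster name / admin conninfo preamble is omitted):
create set (id = 999, origin = 1, comment = 'temporary set for newly added tables');
set add table (set id = 999, origin = 1, id = 201, fully qualified name = 'public.my_new_table', comment = 'newly added table');
subscribe set (id = 999, provider = 1, receiver = 2, forward = yes);
merge set (id = 1, add id = 999, origin = 1);
Note that the MERGE SET must not be issued until the subscription of set 999 has actually completed on node 2; Slony-I will refuse the merge until the subscriptions of the two sets match.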
3.15. ERROR: duplicate key violates unique constraint "sl_table-pkey"
I tried setting up a second replication set, and got the following error:
stdin:9: Could not create subscription set 2 for oxrslive!
stdin:11: PGRES_FATAL_ERROR select "_oxrslive".setAddTable(2, 1, 'public.replic_test', 'replic_test__Slony-I_oxrslive_rowID_key', 'Table public.replic_test without primary key'); - ERROR: duplicate key violates unique constraint "sl_table-pkey" CONTEXT: PL/pgSQL function "setaddtable_int" line 71 at SQL statement
The table IDs used in SET ADD TABLE are required to be unique ACROSS ALL SETS. Thus, you can't restart numbering at 1 for a second set; if you are numbering them consecutively, a subsequent set has to start with IDs after where the previous set(s) left off. |
|
4. Slony-I FAQ: Performance Issues

4.1. Replication has been slowing down, I'm seeing sl_log_1 grow steadily, and it does not seem to be getting purged out.
There are actually a number of possible causes for this sort of thing. There is a question involving similar pathology further down, where the problem is that pg_listener grows because it is not being vacuumed.
Another “proximate cause” for this growth is for there to be a connection to the node that sits “IDLE in transaction” for a very long time.
That open transaction will have multiple negative effects, all of which will adversely affect performance: vacuums on every table in the database cannot reclaim dead tuples newer than that transaction, and the Slony-I cleanup logic cannot safely purge entries from sl_log_1 and sl_seqlog that are newer than it, so both keep growing.
You can monitor for this condition inside the database only if the PostgreSQL statistics collector is set to capture command strings (the stats_command_string parameter on the PostgreSQL releases of that era); in that case pg_stat_activity will show which backends are sitting “IDLE in transaction.”
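A sketch of such a query; on the PostgreSQL releases of that era the pid column is procpid, and idle-in-transaction backends show a current_query of '<IDLE> in transaction':
select procpid, usename, query_start, current_query
from pg_stat_activity
where current_query = '<IDLE> in transaction'
order by query_start;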
You should also be able to search for “ idle in transaction ” in the process table to find processes that are thus holding on to an ancient transaction. |
|
It is also possible (though rarer) for the problem to be a transaction that is, for some other reason, being held open for a very long time. The pg_locks view and the monitoring queries above should help you track down what is holding things open.
There have been proposals for PostgreSQL to grow a timeout parameter that would automatically abort transactions left idle for too long; until something of the sort exists, you need to watch for this condition yourself.
4.2. After dropping a node, sl_log_1 isn't getting purged out anymore.
This is a common scenario in versions before 1.0.5, as the “clean up” that takes place when purging the node does not include purging out old entries from the Slony-I table, sl_confirm, for the recently departed node. The node is no longer around to update confirmations of what syncs have been applied on it, and therefore the cleanup thread that purges log entries thinks that it can't safely delete entries newer than the final sl_confirm entry, which rather curtails the ability to purge out old logs. Diagnosis: Run the following query to see if there are any “phantom/obsolete/blocking” sl_confirm entries: oxrsbar=# select * from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node); con_origin | con_received | con_seqno | con_timestamp ------------+--------------+-----------+---------------------------- 4 | 501 | 83999 | 2004-11-09 19:57:08.195969 1 | 2 | 3345790 | 2004-11-14 10:33:43.850265 2 | 501 | 102718 | 2004-11-14 10:33:47.702086 501 | 2 | 6577 | 2004-11-14 10:34:45.717003 4 | 5 | 83999 | 2004-11-14 21:11:11.111686 4 | 3 | 83999 | 2004-11-24 16:32:39.020194 (6 rows) In version 1.0.5, the DROP NODE function purges out entries in sl_confirm for the departing node. In earlier versions, this needs to be done manually. Supposing the node number is 3, then the query would be: delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3; Alternatively, to go after “all phantoms,” you could use oxrsbar=# delete from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node); DELETE 6 General “due diligence” dictates starting with a
test of the delete wrapped in BEGIN/ROLLBACK, to verify that it removes only what you expect. You'll need to run this on each node that remains. Note that as of 1.0.5, this is no longer an issue at all, as unneeded entries are purged from sl_confirm both when a node is dropped and as part of the routine cleanup cycle.
|
4.3. The slon spent the weekend out of commission [for some reason], and it's taking a long time to get a sync through.
You might want to take a look at the sl_log_1/sl_log_2 tables, and do a summary to see if there are any really enormous Slony-I transactions in there. Up until at least 1.0.2, there needs to be a slon connected to the origin in order for SYNC events to be generated.
If none are being generated, then all of the updates until the next one is generated will collect into one rather enormous Slony-I transaction.
Conclusion: even if there is not going to be a subscriber around, you really want to have a slon running to service the origin node. Slony-I 1.1 provides a stored procedure that allows SYNC events to be generated on the origin from a cron job even when no slon is running.
4.4. Some nodes start consistently falling behind
I have been running Slony-I on a node for a while, and am seeing system performance suffering. I'm seeing long running queries of the form:
fetch 100 from LOG;
This can be characteristic of pg_listener (the table underlying the NOTIFY/LISTEN machinery, which Slony-I of that era used heavily) having become bloated with dead tuples. You quite likely need to do a VACUUM FULL on pg_listener to bring it back down to a sane size; see the sketch below. Slon daemons already vacuum a number of Slony-I tables on a regular basis, and later versions add pg_listener to that list, which keeps this from recurring.
There is, however, still a scenario where this will still “bite.” Under MVCC, vacuums cannot delete tuples that were made “obsolete” at any time after the start time of the eldest transaction that is still open. Long running transactions will cause trouble, and should be avoided, even on subscriber nodes.
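A sketch of the cleanup, run as a superuser on the afflicted node; the VERBOSE output reports how many dead row versions were found, which tells you whether this really was the problem:
vacuum full verbose pg_listener;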
4.5. I have submitted a MOVE SET / EXECUTE SCRIPT request, and it seems to be stuck on one of my nodes. Slony-I logs aren't displaying any errors or warnings.
Is it possible that you are running pg_autovacuum, and it has taken out locks on some tables in the replication set? That would somewhat-invisibly block Slony-I from performing operations that require acquiring exclusive locks. You might check for these sorts of locks using the query sketched below.
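A sketch of such a check (the pid column in pg_stat_activity was procpid on the PostgreSQL releases of that era); look for ungranted lock requests, and for other backends holding locks on your replicated tables:
select l.relation::regclass as locked_table, l.pid, l.mode, l.granted, a.current_query
from pg_locks l
left join pg_stat_activity a on a.procpid = l.pid
where l.relation is not null
order by l.granted, l.relation;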
5. Slony-I FAQ: Slony-I Bugs in Elder Versions

5.1. The slon processes servicing my subscribers are growing to enormous size, challenging system resources both in terms of swap space as well as moving towards breaking past the 2GB maximum process size on my system. By the way, the data that I am replicating includes some rather large records; we have records that are tens of megabytes in size. Perhaps that is somehow relevant?
Yes, those very large records are at the root of the problem. The problem is that slon normally draws in about 100 records at a time when a subscriber is processing the query which loads data from the provider. Thus, if the average record size is 10MB, this will draw in 1000MB of data, which is then transformed into INSERT statements in the slon's memory. That obviously leads to slon growing to a fairly tremendous size.
The number of records that are fetched is controlled by the value SLON_DATA_FETCH_SIZE, defined in slon.h:
#ifdef SLON_CHECK_CMDTUPLES
#define SLON_COMMANDS_PER_LINE 1
#define SLON_DATA_FETCH_SIZE 100
#define SLON_WORKLINES_PER_HELPER (SLON_DATA_FETCH_SIZE * 4)
#else
#define SLON_COMMANDS_PER_LINE 10
#define SLON_DATA_FETCH_SIZE 10
#define SLON_WORKLINES_PER_HELPER (SLON_DATA_FETCH_SIZE * 50)
#endif
If you are experiencing this problem, you might modify the definition of SLON_DATA_FETCH_SIZE, lowering it, and rebuild slon so that fewer of those enormous rows are held in memory at once.
In version 1.2, the configuration values sync_max_rowsize and sync_max_largemem are associated with a new algorithm that changes the logic: rather than fetching 100 rows' worth of data at a time, rows no larger than sync_max_rowsize continue to be fetched in groups, larger rows are fetched individually, and the amount of memory devoted to such oversized rows at any one time is limited to roughly sync_max_largemem.
This should alleviate the problems people have been experiencing when they sporadically have runs of very large tuples.
|
5.2. I am trying to replicate UNICODE data from PostgreSQL 8.0 to PostgreSQL 8.1, and am running into problems.
PostgreSQL 8.1 is quite a lot more strict about what UTF-8 mappings of Unicode characters it accepts than version 8.0 was. If you intend to use Slony-I to update an older database to 8.1, and might have invalid UTF-8 values, you may be in for an unpleasant surprise.
Let us suppose we have a database running 8.0, encoded in UTF-8. That database will happily accept certain byte sequences that are not in fact valid UTF-8, which 8.1 rejects.
If you replicate into a PostgreSQL 8.1 instance, it will complain about this, either at subscribe time, where Slony-I will complain about detecting an invalid Unicode sequence during the COPY of the data, which will prevent the subscription from proceeding, or, upon adding data later, where this will hang up replication fairly much irretrievably. (You could hack on the contents of sl_log_1, but that quickly gets really unattractive...)
There have been discussions as to what might be done about this. No compelling strategy has yet emerged, as all are unattractive. If you are using Unicode with PostgreSQL 8.0, you run a considerable risk of corrupting data.
If you use replication for a one-time conversion, there is a risk of failure due to the issues mentioned earlier; if that happens, it appears likely that the best answer is to fix the data on the 8.0 system and retry. In view of the risks, running replication between these versions is not something you should keep doing any longer than is necessary to migrate to 8.1.
For more details, see the discussion on the postgresql-hackers mailing list.
|
5.3. I am running Slony-I 1.1 and have a 4+ node setup where there are two subscription sets, 1 and 2, that do not share any nodes. I am discovering that confirmations for set 1 never get to the nodes subscribing to set 2, and that confirmations for set 2 never get to nodes subscribing to set 1. As a result, sl_log_1 grows and grows and is never purged. This was reported as Slony-I bug 1485.
Apparently the code that generates the listener paths does not cover this case, so confirmations never propagate between the two sub-clusters. In the interim, you'll want to manually add some sl_listen entries, using STORE LISTEN or the underlying storeListen() function, so that each node has a path by which to hear events and confirmations from every other node.
5.4. I am finding some multibyte columns (Unicode, Big5) are being truncated a bit, clipping off the last character. Why?
This was a bug present until a little after Slony-I version 1.1.0; the way in which column values were being captured by the log trigger could clip off the last byte of a column value that ended in a multibyte character. Upgrading past that point resolves it.
5.5. Bug #1226 indicates an error condition that can come up if you have a replication set that consists solely of sequences.
The short answer is that having a replication set consisting only of sequences is not a best practice. |
|
The problem with a sequence-only set comes up only if you have a case where the only subscriptions that are active for a particular subscriber to a particular provider are for “sequence-only” sets. If a node gets into that state, replication will fail, as the query that looks for data from sl_log_1 has no tables to find, and the query will be malformed, and fail. If a replication set with tables is added back to the mix, everything will work out fine; it just seems scary. This problem should be resolved some time after Slony-I 1.1.0. |
|
5.6. I need to drop a table from a replication set
This can be accomplished several ways, not all equally desirable ;-). You could drop the whole replication set and recreate it without the offending table, though that means recopying a great deal of data. If you are running 1.0.5 or later, there is a SET DROP TABLE command in Slonik that does exactly this. In earlier versions, you can accomplish much the same thing by hand, by restoring the table's original triggers and then removing the table's entry from sl_table on each node, as in the sketch below.
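For the pre-1.0.5 manual approach, the essential steps look like this sketch, assuming the table to be dropped has tab_id 40 in a cluster named "_slonyschema"; alterTableRestore() is the 1.x function that puts the table's original triggers back, but verify the ID and the exact function name in your own schema before running anything like this:
-- find the Slony-I ID of the table
select * from "_slonyschema".sl_table;
-- then, on each node:
select "_slonyschema".alterTableRestore(40);
delete from "_slonyschema".sl_table where tab_id = 40;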
5.7. I need to drop a sequence from a replication set
If you are running 1.0.5 or later, there is a SET DROP SEQUENCE command in Slonik to allow you to do this, parallelling SET DROP TABLE. If you are running 1.0.2 or earlier, the process is a bit more manual. Supposing I want to get rid of the two sequences listed below,
oxrsorg=# select * from _oxrsorg.sl_sequence where seq_id in (93,59);
seq_id | seq_reloid | seq_set | seq_comment
--------+------------+---------+-------------------------------------
93 | 107451516 | 1 | Sequence public.whois_cachemgmt_seq
59 | 107451860 | 1 | Sequence public.epp_whoi_cach_seq_
(2 rows)
The data that needs to be deleted to stop Slony-I from continuing to replicate these is thus:
delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
delete from _oxrsorg.sl_sequence where seq_id in (93,59);
Those two queries could be submitted to all of the nodes via ddlscript_complete() / EXECUTE SCRIPT, thus eliminating the sequence everywhere “at once,” or they may be applied by hand to each of the nodes. Similarly to SET DROP TABLE, this is implemented in Slony-I version 1.0.5 as SET DROP SEQUENCE.
|
6. Slony-I FAQ: Hopefully Obsolete Issues

6.1. slon does not restart after crash
After an immediate stop of PostgreSQL (simulation of a system crash), slon does not start again, because it thinks another process is already serving the cluster on this node. The logs claim that “Another slon daemon is serving this node already”.
The problem is that the system table pg_listener, used by PostgreSQL to manage event notifications, contains entries pointing to backends that no longer exist. The “trash” in that table needs to be thrown away.
It's handy to keep a slonik script similar to the following to run in such cases:
twcsds004[/opt/twcsds004/OXRS/slony-scripts]$ cat restart_org.slonik
cluster name = oxrsorg ;
node 1 admin conninfo = 'host=32.85.68.220 dbname=oxrsorg user=postgres port=5532';
node 2 admin conninfo = 'host=32.85.68.216 dbname=oxrsorg user=postgres port=5532';
node 3 admin conninfo = 'host=32.85.68.244 dbname=oxrsorg user=postgres port=5532';
node 4 admin conninfo = 'host=10.28.103.132 dbname=oxrsorg user=postgres port=5532';
restart node 1;
restart node 2;
restart node 3;
restart node 4;
RESTART NODE cleans up dead notifications so that you can restart the node.
As of version 1.0.5, the startup process of slon looks for this condition and automatically cleans it up. As of version 8.1 of PostgreSQL, the functions that manipulate pg_listener no longer support this usage, so later Slony-I versions handle the interlock through the sl_nodelock table described earlier instead.
6.2. I tried the following query, which did not work:
sdb=# explain select query_start, current_query from pg_locks join pg_stat_activity on pid = procpid where granted = true and transaction in (select transaction from pg_locks where granted = false);
ERROR: could not find hash function for hash operator 716373
It appears the Slony-I xxid data type and its operators are involved. What's up?
Slony-I defined an XXID data type, and operators on that type, in order to allow manipulation of transaction IDs that are used to group together updates that are associated with the same transaction.
Operators were not available for PostgreSQL 7.3 and earlier versions; in order to support version 7.3, custom functions had to be added. The = operator was marked as supporting hashing, but for that to work properly a matching hash operator class would also have had to be supplied, and it wasn't, so queries (like the one above) that try to use a hash join on a column of that type fail.
This has not been considered a “release-critical” bug, as Slony-I does not internally generate queries likely to use hash joins. This problem shouldn't injure Slony-I's ability to continue replicating. |
|
Future releases of Slony-I (e.g. 1.0.6, 1.1) will omit the HASHES indicator from the definition of the = operator, so newly created clusters will not exhibit this problem.
Supposing you wish to repair an existing instance, so that your own queries will not run afoul of this problem, you may do so as follows: /* cbbrowne@[local]/dba2 slony_test1=*/ \x Expanded display is on. /* cbbrowne@[local]/dba2 slony_test1=*/ select * from pg_operator where oprname = '=' and oprnamespace = (select oid from pg_namespace where nspname = 'public'); -[ RECORD 1 ]+------------- oprname | = oprnamespace | 2200 oprowner | 1 oprkind | b oprcanhash | t oprleft | 82122344 oprright | 82122344 oprresult | 16 oprcom | 82122365 oprnegate | 82122363 oprlsortop | 82122362 oprrsortop | 82122362 oprltcmpop | 82122362 oprgtcmpop | 82122360 oprcode | "_T1".xxideq oprrest | eqsel oprjoin | eqjoinsel /* cbbrowne@[local]/dba2 slony_test1=*/ update pg_operator set oprcanhash = 'f' where oprname = '=' and oprnamespace = 2200 ; UPDATE 1 |
|
6.3. I can do a pg_dump and reload the data much faster than the SUBSCRIBE SET runs. Why is that?
Slony-I depends on there being an already existent index on the primary key, and it leaves all indexes alone whilst using the PostgreSQL COPY command to load the data, so every row loaded has to update every index on the table as it arrives. When you reload a pg_dump, by contrast, the indexes are created only after all the data has been loaded, which is a great deal cheaper.
If you can drop unnecessary indices while the COPY takes place, and recreate them afterwards, the subscription can be expected to run considerably faster.
Slony-I version 1.1.5 and later versions should handle this automatically; it “thumps” on the indexes in the PostgreSQL catalog to hide them, in much the same way triggers are hidden, and then “fixes” the index pointers and reindexes the table. |
|
6.4.
Replication Fails - Unique Constraint Violation Replication has been running for a while, successfully, when a node encounters a “glitch,” and replication logs are filled with repetitions of the following: DEBUG2 remoteWorkerThread_1: syncing set 2 with 5 table(s) from provider 1 DEBUG2 remoteWorkerThread_1: syncing set 1 with 41 table(s) from provider 1 DEBUG2 remoteWorkerThread_1: syncing set 5 with 1 table(s) from provider 1 DEBUG2 remoteWorkerThread_1: syncing set 3 with 1 table(s) from provider 1 DEBUG2 remoteHelperThread_1_1: 0.135 seconds delay for first row DEBUG2 remoteHelperThread_1_1: 0.343 seconds until close cursor ERROR remoteWorkerThread_1: "insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '34', '35090538', 'D', '_rserv_ts=''9275244'''); delete from only public.epp_domain_host where _rserv_ts='9275244';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '34', '35090539', 'D', '_rserv_ts=''9275245'''); delete from only public.epp_domain_host where _rserv_ts='9275245';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090540', 'D', '_rserv_ts=''24240590'''); delete from only public.epp_domain_contact where _rserv_ts='24240590';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090541', 'D', '_rserv_ts=''24240591'''); delete from only public.epp_domain_contact where _rserv_ts='24240591';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090542', 'D', '_rserv_ts=''24240589'''); delete from only public.epp_domain_contact where _rserv_ts='24240589';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '11', '35090543', 'D', '_rserv_ts=''36968002'''); delete from only public.epp_domain_status where _rserv_ts='36968002';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '11', '35090544', 'D', '_rserv_ts=''36968003'''); delete from only public.epp_domain_status where _rserv_ts='36968003';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090549', 'I', '(contact_id,status,reason,_rserv_ts) values (''6972897'',''64'','''',''31044208'')'); insert into public.contact_status (contact_id,status,reason,_rserv_ts) values ('6972897','64','','31044208');insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090550', 'D', '_rserv_ts=''18139332'''); delete from only public.contact_status where _rserv_ts='18139332';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090551', 'D', '_rserv_ts=''18139333'''); delete from only public.contact_status where _rserv_ts='18139333';" ERROR: duplicate key violates unique constraint "contact_status_pkey" - qualification was: ERROR remoteWorkerThread_1: SYNC aborted The transaction rolls back, and
Slony-I tries again, and again, and again.
The problem is with one of the last SQL statements, the one with log_cmdtype = 'I': the INSERT into public.contact_status, whose key evidently already exists on the subscriber, violating contact_status_pkey.
A certain cause for this has been difficult to arrive at. By the time we notice that there is a problem, the seemingly missed delete transaction has been cleaned out of sl_log_1, so there appears to be no recovery possible. What has seemed necessary, at this point, is to drop the replication set (or even the node), and restart replication from scratch on that node. In Slony-I 1.0.5, the handling of purges of sl_log_1 became more conservative, refusing to purge entries that haven't been successfully synced for at least 10 minutes on all nodes. It was not certain that that would prevent the “glitch” from taking place, but it seemed plausible that it might leave enough sl_log_1 data to be able to do something about recovering from the condition or at least diagnosing it more exactly. And perhaps the problem was that sl_log_1 was being purged too aggressively, and this would resolve the issue completely. It is a shame to have to reconstruct a large replication node for this; if you discover that this problem recurs, it may be an idea to break replication down into multiple sets in order to diminish the work involved in restarting replication. If only one set has broken, you may only need to unsubscribe/drop and resubscribe the one set. In one case we found two lines in the SQL error message in the log file that contained identical insertions into sl_log_1. This ought to be impossible as is a primary key on sl_log_1. The latest (somewhat) punctured theory that comes from that was that perhaps this PK index has been corrupted (representing a PostgreSQL bug), and that perhaps the problem might be alleviated by running the query: # reindex table _slonyschema.sl_log_1; On at least one occasion, this has resolved the problem, so it is worth trying this. |
|
This problem has been found to represent a PostgreSQL bug as opposed to one in Slony-I. Version 7.4.8 was released with two resolutions to race conditions that should resolve the issue. Thus, if you are running a version of PostgreSQL earlier than 7.4.8, you should consider upgrading to resolve this. |
|
6.5. I started doing a backup using pg_dump, and suddenly Slony stops.
Ouch. What happens here is a conflict between pg_dump, which takes out an AccessShareLock on all of the tables in the schema, including the Slony-I ones, and the Slony-I createEvent() call, which wants an AccessExclusiveLock on sl_event.
The initial query that will be blocked is thus:
select "_slonyschema".createEvent('_slonyschema', 'SYNC', NULL);
(You can see this in pg_stat_activity, if you have query display turned on.)
The actual query combination that is causing the lock is from the function Slony_I_ClusterStatus(), found in slony1_funcs.c, and is localized in the code that does:
LOCK TABLE %s.sl_event;
INSERT INTO %s.sl_event (...stuff...)
SELECT currval('%s.sl_event_seq');
The LOCK TABLE statement has to wait until pg_dump (or whatever else holds pertinent locks on sl_event) completes and releases its locks. Every subsequent query submitted that touches sl_event will block behind that pending LOCK TABLE request.
There are a number of possible answers to this: the most common are to exclude the Slony-I cluster schema from the pg_dump, or to take your dumps from a node where blocking event creation for the duration of the dump does little harm.
7. Slony-I FAQ: Oddities and Heavy Slony-I Hacking

7.1. What happens with rules and triggers on Slony-I-replicated tables?
Firstly, let's look at how things are handled absent the special handling of the STORE TRIGGER Slonik command. The function alterTableForReplication(integer) prepares each table for replication: on the origin it adds the trigger that logs updates into sl_log_1, and on subscribers it disables the table's existing triggers and rules so that they do not fire when replicated data is applied.
A somewhat unfortunate side-effect is that this handling of the rules and triggers somewhat “tramples” on them. The rules and triggers are still there, but are no longer properly tied to their tables, so a pg_dump taken from a subscriber will not have them attached to the tables in a usable way.
Now, consider how STORE TRIGGER enters into things. Simply put, this command causes Slony-I to restore the named trigger, re-attaching it to its table so that it continues to fire on the subscriber.
This implies that if you plan to draw backups from a subscriber node, you will need to draw the schema from the origin node. It is straightforward to do this: % pg_dump -h originnode.example.info -p 5432 --schema-only --schema=public ourdb > schema_backup.sql % pg_dump -h subscribernode.example.info -p 5432 --data-only --schema=public ourdb > data_backup.sql |
|
7.2. I was trying to request EXECUTE SCRIPT or MOVE SET, and found messages as follows on one of the subscribers:
NOTICE: Slony-I: multiple instances of trigger defrazzle on table frobozz
NOTICE: Slony-I: multiple instances of trigger derez on table tron
ERROR: Slony-I: Unable to disable triggers
The trouble would seem to be that you have added triggers on tables whose names conflict with triggers that were hidden by Slony-I. Slony-I hides triggers (save for those “unhidden” via STORE TRIGGER) by repointing them to the primary key of the table. In the case of foreign key triggers, or other triggers used to do data validation, it should be quite unnecessary to run them on a subscriber, as equivalent triggers should have been invoked on the origin node. In contrast, triggers that do some form of “cache invalidation” are ones you might want to have run on a subscriber. The Right Way to handle such triggers is normally to use STORE TRIGGER, which tells Slony-I that a trigger should not get deactivated. |
|
But some intrepid DBA might take matters into their own hands and install a trigger by hand on a subscriber, and the above condition generally has that as its cause. What to do? What to do?
The answer is normally fairly simple: drop out the “extra” trigger on the subscriber before the event that tries to restore them runs. Ideally, if the DBA is particularly intrepid, and aware of this issue, that should take place before there is ever a chance for the error message to appear. If the DBA is not that intrepid, the answer is to connect to the offending node and drop the “visible” version of the trigger using the SQL DROP TRIGGER command.
7.3. Behaviour: all the subscriber nodes start to fall behind the origin, and all the logs on the subscriber nodes have the following error message repeating in them (when I encountered it, there was a nice long SQL statement above each entry):
ERROR remoteWorkerThread_1: helper 1 finished with error
ERROR remoteWorkerThread_1: SYNC aborted
Cause: you have likely issued ALTER TABLE statements directly against the databases rather than deploying them via the slonik EXECUTE SCRIPT command, so the log trigger on the altered table no longer matches the table's actual structure. The solution is to rebuild the trigger on the affected table and fix the already-captured entries in sl_log_1 by hand.
7.4. Node #1 was dropped via DROP NODE, and the slon for one of the other nodes is repeatedly failing with the error message:
ERROR remoteWorkerThread_3: "begin transaction; set transaction isolation level serializable; lock table "_mailermailer".sl_config_lock; select "_mailermailer".storeListen_int(2, 1, 3); notify "_mailermailer_Event"; notify "_mailermailer_Confirm"; insert into "_mailermailer".sl_event (ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type , ev_data1, ev_data2, ev_data3 ) values ('3', '2215', '2005-02-18 10:30:42.529048', '3286814', '3286815', '', 'STORE_LISTEN', '2', '1', '3'); insert into "_mailermailer".sl_confirm (con_origin, con_received, con_seqno, con_timestamp) values (3, 2, '2215', CURRENT_TIMESTAMP); commit transaction;" PGRES_FATAL_ERROR ERROR: insert or update on table "sl_listen" violates foreign key constraint "sl_listen-sl_path-ref" DETAIL: Key (li_provider,li_receiver)=(1,3) is not present in table "sl_path".
DEBUG1 syncThread: thread done
Evidently, a STORE LISTEN request hadn't propagated yet before node 1 was dropped.
This points to a case where you'll need to do “event surgery” on one or more of the nodes: a STORE_LISTEN event that refers to the now-dropped node is still sitting in sl_event, and it can no longer be applied.
Let's assume, for exposition purposes, that the remaining nodes are #2 and #3, and that the above error is being reported on node #3. That implies that the event is stored on node #2, as it wouldn't be on node #3 if it had not already been processed successfully. The easiest way to cope with this situation is to delete the offending sl_event entry on node #2. You'll connect to node #2's database and search for STORE_LISTEN events that refer to node #1. There may be several entries, only some of which need to be purged.
-# begin; -- Don't straight delete them; open a transaction so you can respond to OOPS
BEGIN;
-# delete from sl_event where ev_type = 'STORE_LISTEN' and
-# (ev_data1 = '1' or ev_data2 = '1' or ev_data3 = '1');
DELETE 3
-# -- Seems OK...
-# commit;
COMMIT
The next time the slon for node 3 starts up, it will no longer find the “offensive” STORE_LISTEN events, and replication can proceed.