How deferred processing works in Citadel

One advantage of the Citadel system over other, less tightly integrated groupware packages is its ability to defer potentially resource-intensive operations until off-hours, improving the system's interactive performance during the hours when users are online and active. This facility is used primarily for performing “delete” operations in batch mode. This article explains the technological underpinnings and is intended mainly for developers.

Data model

In order to understand what's going on under the covers, there are several things you need to know about Citadel's data model:

Synchronous operations

Here are some activities which are performed synchronously – in other words, the user must wait while they are completed.

Asynchronous operations (or, what happens during a purger run)

Here's where the magic happens. We run a nightly batch job, affectionately known as The Dreaded Auto-Purger, which is responsible for cleaning everything up. It does a lot of work, in a very specific order, to ensure that it doesn't have to run twice to get everything. The code can all be found in modules/expire/serv_expire.c. Here's how it works.

  1. Purge users. If the system is configured to automatically delete inactive accounts, the user file is scanned and each account's date of last login is checked. Accounts which have not been accessed in the configured amount of time are deleted. If the system is using an external source of authentication (such as a PAM database), we instead delete accounts which no longer exist on the host system. Either way, note that we only delete the account itself – we are not yet deleting the rooms or messages which belong to it.
  2. Purge messages. For rooms which are configured to automatically expire messages older than a certain age, and for rooms which are configured to keep no more than a specific maximum number of messages online, we go into those rooms and delete the old messages. This works just like an interactive delete: the message pointer is removed and the message's reference count is decremented.
  3. Purge rooms. The system may be configured to automatically expire rooms which have not been accessed in a certain amount of time; if so, these rooms are deleted now. We also delete any rooms which exist in a namespace belonging to a user who does not exist. The latter condition conveniently removes both rooms which were deleted and rooms which belonged to a user who was deleted. Before deleting a room, we of course delete every message in it (again with the same operation: remove the pointer, decrement the reference count).
  4. Purge visits. The “visits” table contains records describing the relationship between one user and one room: access control, seen/unseen message flags, and other per-user room state. At this time we delete any record which refers to a user or room that no longer exists.
  5. Purge use table. The “use table” keeps track of the message IDs of messages which recently arrived over a network – whether by Citadel networking, RSS aggregation, or POP3 aggregation. In the latter two cases, these records are refreshed every time a message re-appears. We keep this data around to prevent the same message from being imported multiple times. At this time, we delete any records which are older than a certain age.
  6. Purge EUID Index Table. This table is simply an index of messages by EUID, for rooms which require it. We delete records which are no longer in use.
  7. Purge stale OpenID associations. The OpenID Associations table maps OpenID identifiers to user numbers. At this time we delete any records which point to a user who no longer exists.
  8. Process the reference count adjustment queue. By this point we have accumulated a lot of data in the reference count adjustment queue (which, you will remember, is a flat file called refcount_adjustments.dat). Now it is time to process that data. We begin by renaming the file to a temporary name, so that a fresh queue file can be created and written to by sessions that are still active on the system.
  9. Reference count adjustments in the temporary file are then processed one at a time. The reference count for each message is kept in the message's metadata record, and we adjust it by whatever value each record specifies.
  10. When a message's reference count reaches zero, we know that there are no longer any references to the message anywhere on the system.
  11. Before deleting the message from disk, however, we first must remove it from the full-text index. That operation is performed at this time.
  12. After the message is de-indexed, it is finally deleted from the message database. Remember, however, that you will not see an immediate reduction in disk utilization on the host system, because Berkeley DB does not shrink its files when records are deleted. The space is simply marked as unused, and new messages can be stored there later. Therefore, on a well-managed system with a fairly consistent traffic rate and a sensible expire policy, disk utilization will initially grow until it reaches an equilibrium of new messages vs. expiring messages, and then hold steady. On the other hand, if you have no expire policy and your users never empty their trash folders, you can expect disk utilization to grow indefinitely.
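The file rotation in step 8 can be sketched as follows. This is a hedged illustration, not the actual serv_expire.c code: the temporary file name and the function name are invented for this sketch. The underlying idea, however, is the one described above – rename() is atomic on POSIX filesystems, so an active session either appends its adjustment to the old file (and it is processed in this run) or triggers creation of a fresh queue file (and it waits for the next run); no adjustments are lost either way.

```c
#include <assert.h>
#include <stdio.h>

/* The live queue name comes from the article; the working name is a
 * hypothetical choice for this sketch. */
#define LIVE_QUEUE "refcount_adjustments.dat"
#define WORK_QUEUE "refcount_adjustments.tmp"

/* Rotate the adjustment queue out from under active sessions.  After this
 * call, the purger processes WORK_QUEUE at leisure while new adjustments
 * accumulate in a freshly created LIVE_QUEUE.  Returns 0 on success. */
int rotate_refcount_queue(void) {
    return rename(LIVE_QUEUE, WORK_QUEUE);
}
```

The design choice worth noting is that a single rename() stands in for any locking scheme: because the operation is atomic, writers never see a half-rotated queue.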
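The heart of steps 9 through 12 – apply each queued delta, and when a message's count reaches zero, de-index it and only then delete it – can be sketched in miniature. Everything here is a stand-in: real Citadel keeps the reference count in each message's metadata record in Berkeley DB, not in arrays, and the function name is invented for illustration.

```c
#include <assert.h>

#define MAX_MSGS 16

/* Toy stand-ins for the message store and full-text index. */
static int refcount[MAX_MSGS];  /* how many room pointers reference the message */
static int indexed[MAX_MSGS];   /* 1 if present in the full-text index */
static int on_disk[MAX_MSGS];   /* 1 if the message body is still stored */

/* Apply one queued reference count adjustment.  When the count reaches
 * zero, there are no remaining references anywhere on the system, so the
 * message is removed from the full-text index first and then deleted from
 * the message database. */
static void apply_adjustment(int msgnum, int delta) {
    refcount[msgnum] += delta;
    if (refcount[msgnum] <= 0 && on_disk[msgnum]) {
        indexed[msgnum] = 0;    /* de-index before deleting... */
        on_disk[msgnum] = 0;    /* ...then drop the message itself */
    }
}
```

The ordering in the conditional mirrors the ordering the article insists on: de-indexing must happen while the message record still exists, because the indexer needs the message's content to know which index entries to remove.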