
This is a fork of Garlik’s 4store that attempts to solve the Single Client Problem by improving the locking mechanism.

As of 2009/01/17 it works, for values of “work” that include passing all tests and running simultaneous 4s-import, 4s-query and scripts using Py4s. For outstanding issues see TODO. As it is now in good enough shape for other people to start testing it (understanding it to be bleeding edge), the ticket system has been enabled for this repository. If you run into any problems with this code, please open a ticket here.

Currently 4store causes a second client to block in fs_backend_open_files_intl because each of the various hash files is always opened with O_RDWR, which causes it to take flock(fd, LOCK_EX).
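
A minimal standalone illustration of the problem (not 4store code): when every hash file is opened read-write and then exclusively locked, the second process to reach the flock(2) call simply sleeps until the first one releases the lock or exits.

```c
/* Illustration only: why a second backend process blocks when each
 * hash file is opened O_RDWR and exclusively flock()ed. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.hash", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* The first client to get here takes the exclusive lock; any
     * later client blocks inside this flock(2) call. */
    if (flock(fd, LOCK_EX) < 0) {
        perror("flock");
        return 1;
    }

    printf("holding exclusive lock on example.hash (pid %d)\n", (int)getpid());
    sleep(30);            /* simulate a long-lived backend process */

    flock(fd, LOCK_UN);   /* released on close/exit anyway */
    close(fd);
    return 0;
}
```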

We start by introducing a Lockable File Type and then modify the various hashing routines to use it.
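
A rough sketch of what such a lockable file type might look like follows; the struct layout and the lock/unlock helper names are assumptions for illustration, not the fork’s actual definitions.

```c
/* Sketch of a lockable file type: a file descriptor plus the flock
 * level currently held, with helpers to change it. */
#include <sys/file.h>

typedef struct {
    int fd;         /* file descriptor of the underlying hash file */
    int lock_level; /* LOCK_UN, LOCK_SH or LOCK_EX currently held */
} fs_lockable_t;

/* Acquire (or convert to) the requested lock level. */
int fs_lockable_lock(fs_lockable_t *fp, int level)
{
    if (fp->lock_level == level)
        return 0;                 /* already held at this level */
    if (flock(fp->fd, level) < 0)
        return -1;
    fp->lock_level = level;
    return 0;
}

/* Release whatever lock is currently held. */
int fs_lockable_unlock(fs_lockable_t *fp)
{
    if (flock(fp->fd, LOCK_UN) < 0)
        return -1;
    fp->lock_level = LOCK_UN;
    return 0;
}
```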

As a general rule, locking is done by the top-level message handling routines in server.c, coupled with calls to fs_assert(fs_lockable_test(fp, LOCK_XX)) in the low-level routines that require locking, to make sure the necessary locks are in place.
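
The division of labour might look roughly like the sketch below, building on the fs_lockable_t sketch above. fs_assert() and fs_lockable_test() are the names used in the text; mapping fs_assert() to assert(3) and the handler/lookup functions themselves are invented here purely for illustration.

```c
/* Top-level handlers lock; low-level routines only assert. */
#include <assert.h>
#include <sys/file.h>

#define fs_assert(x) assert(x)

/* Check, without acquiring anything, that at least `level` is held. */
int fs_lockable_test(fs_lockable_t *fp, int level)
{
    if (level == LOCK_SH)
        return fp->lock_level == LOCK_SH || fp->lock_level == LOCK_EX;
    return fp->lock_level == level;
}

/* Low-level routine: assumes the caller has already locked. */
int hash_lookup(fs_lockable_t *fp)
{
    fs_assert(fs_lockable_test(fp, LOCK_SH));
    /* ... perform the actual read ... */
    return 0;
}

/* Top-level message handler (as in server.c): acquire, call, release. */
int handle_query(fs_lockable_t *fp)
{
    if (fs_lockable_lock(fp, LOCK_SH) < 0)
        return -1;
    int rc = hash_lookup(fp);
    fs_lockable_unlock(fp);
    return rc;
}
```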

When multiple locks must be held, a strict ordering in their acquisition must be observed to avoid the possibility of deadlock:

  1. if the model lock is required, lock be->models
  2. if the resource lock is required, lock be->res
  3. if the predicate lock is required, lock be->predicates

(is this the optimal order?)

The locks should be released in the reverse order in which they were acquired; while this is good style, it is not strictly necessary. An example of this can be found in handle_start_import and handle_stop_import; a rough sketch of the pattern follows.
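
The sketch below shows the fixed acquisition order and reverse-order release for an operation that needs all three locks, loosely modelled on the handle_start_import / handle_stop_import pattern described above. The fs_backend struct here is a simplified stand-in for the real backend structure, with only the fields named in the list; the lock helpers are the sketch from earlier.

```c
/* Acquire in the fixed order models, res, predicates; release in
 * reverse. Simplified stand-in for the real fs_backend. */
typedef struct {
    fs_lockable_t *models;
    fs_lockable_t *res;
    fs_lockable_t *predicates;
} fs_backend;

int start_import_sketch(fs_backend *be)
{
    if (fs_lockable_lock(be->models, LOCK_EX) < 0) return -1;
    if (fs_lockable_lock(be->res, LOCK_EX) < 0) goto unlock_models;
    if (fs_lockable_lock(be->predicates, LOCK_EX) < 0) goto unlock_res;
    return 0;

unlock_res:
    fs_lockable_unlock(be->res);
unlock_models:
    fs_lockable_unlock(be->models);
    return -1;
}

int stop_import_sketch(fs_backend *be)
{
    /* release in reverse order of acquisition */
    fs_lockable_unlock(be->predicates);
    fs_lockable_unlock(be->res);
    fs_lockable_unlock(be->models);
    return 0;
}
```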

Performance

Read performance should be unaffected except for the cost of an extra call to flock(LOCK_SH) and fstat(2). Write performance through an import process should likewise be unaffected, since the locks are acquired once at the start of the import; however, the import locks everything exclusively and blocks all other access for its duration. Small writes should have minimal impact. Writes outside an import process introduce a small amount of overhead, since other processes must re-read the header metadata from the various hash files when they next acquire a lock.

Performance of the tests from make test shows a slight but noticeable deterioration, with the bulk import running at 20-25k triples/sec depending on the other work happening on my laptop.
