A hash table provides O(1) lookup, but does not support range scans. Nonetheless, a hash table is suitable for a number of index requirements. For example, it would be a better fit for the ID2TERM and TERM2ID indices in the triple/quad store lexicon. A hash table is also required for various kinds of analytic query operations, including hash joins, high data volume DISTINCT operations, and high data volume fact tables for aggregation queries. The Java hash table implementations are suitable for transient data structures as long as the data scale is small when compared to the scale of a database index. However, Java garbage collectors are strongly challenged when presented with large numbers of long-lived objects, such as would be generated by a hash join. Therefore, we need to implement a persistence-capable hash table to support analytic query, getting the data out of the Java heap and into either direct ByteBuffers or onto the disk. The hash table can also be used to improve lexicon performance.
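To make the direct ByteBuffer idea concrete, the following is a minimal sketch (not an implementation proposal) of a fixed-capacity open-addressing hash table whose entries live in a direct ByteBuffer, so the garbage collector never sees the individual entries. The class and method names are illustrative only; keys and values are fixed-width longs, with key 0 reserved as the empty-slot marker, and there is no resize or delete support.

```java
import java.nio.ByteBuffer;

/**
 * Illustrative sketch: a fixed-capacity, open-addressing hash table whose
 * entries are stored in a direct ByteBuffer rather than on the Java heap.
 * Each 16-byte entry is an 8-byte key followed by an 8-byte value; key 0 is
 * reserved to mark an empty slot (allocateDirect zero-fills the buffer).
 */
class OffHeapLongMap {
    private static final int ENTRY_BYTES = 16; // 8-byte key + 8-byte value
    private final ByteBuffer buf;
    private final int capacity;

    OffHeapLongMap(int capacity) {
        this.capacity = capacity;
        // allocateDirect keeps the entry data outside the GC-managed heap.
        this.buf = ByteBuffer.allocateDirect(capacity * ENTRY_BYTES);
    }

    void put(long key, long value) {
        if (key == 0) throw new IllegalArgumentException("key 0 is reserved");
        int i = indexOf(key);
        buf.putLong(i * ENTRY_BYTES, key);
        buf.putLong(i * ENTRY_BYTES + 8, value);
    }

    /** Returns the value for the key, or null if absent. */
    Long get(long key) {
        int i = indexOf(key);
        return buf.getLong(i * ENTRY_BYTES) == key
                ? buf.getLong(i * ENTRY_BYTES + 8) : null;
    }

    /** Linear probing: find the slot holding the key, or the first empty slot. */
    private int indexOf(long key) {
        // Fibonacci-style mixing; >>> 33 keeps the result non-negative.
        int i = (int) ((key * 0x9E3779B97F4A7C15L >>> 33) % capacity);
        while (true) {
            long k = buf.getLong(i * ENTRY_BYTES);
            if (k == key || k == 0) return i;
            i = (i + 1) % capacity;
        }
    }
}
```

A production version would of course need variable-length records, deletion, growth, and a spill-to-disk path, which is exactly where the dynamic hashing schemes discussed below come in.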
There are two broad categories of hash maps: static and dynamic. On the whole, static hashing is problematic unless the maximum number of records is known in advance. The broad class of dynamic hashing includes extendible (or extensible) hashing, which is based on an address table, on the one hand, and linear hashing and spiral hashing, which do not use an address table, on the other. (There are also some hybrid designs. For example, in bounded disorder a B+Tree is combined with hash tables for the leaf nodes, which are substantially larger than the B+Tree page size, being made up of multiple pages themselves. Bounded disorder trees are an interesting hybrid which supports out-of-order key-range scans and has many of the benefits of a hash table: access to a record in one or two IOs, assuming that the upper part of the B+Tree nodes is wired into memory. However, the index segment already possesses many of these characteristics, as the nodes region is fully buffered in memory and we do one IO (guaranteed) per leaf.)
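The role of the address table in extendible hashing can be illustrated with a small in-memory sketch (all names here are illustrative and the buckets are plain heap maps rather than pages): the directory maps the low globalDepth bits of a key's hash (here, the key itself) to a bucket, and when a bucket overflows it is split, doubling the directory only when the overflowing bucket's local depth has caught up with the global depth.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch of extendible hashing. The address table (directory)
 * maps the low "globalDepth" bits of the key to a bucket; several directory
 * slots may alias the same bucket when its localDepth < globalDepth.
 */
class ExtendibleHashMap {
    static final int BUCKET_CAPACITY = 4;

    static class Bucket {
        int localDepth = 0;
        final Map<Long, Long> entries = new HashMap<>();
    }

    private int globalDepth = 0;
    private final List<Bucket> directory = new ArrayList<>();

    ExtendibleHashMap() { directory.add(new Bucket()); }

    int directorySize() { return directory.size(); }

    Long get(long key) { return bucketFor(key).entries.get(key); }

    void put(long key, long value) {
        Bucket b = bucketFor(key);
        b.entries.put(key, value);
        while (b.entries.size() > BUCKET_CAPACITY) {
            split(b);
            b = bucketFor(key); // re-resolve: the split may have moved the key
        }
    }

    private Bucket bucketFor(long key) {
        // The low globalDepth bits of the key index the address table.
        return directory.get((int) (key & ((1L << globalDepth) - 1)));
    }

    private void split(Bucket b) {
        if (b.localDepth == globalDepth) {
            // Double the directory: each new slot initially aliases the
            // bucket of the slot sharing its low globalDepth bits.
            int oldSize = directory.size();
            for (int i = 0; i < oldSize; i++) directory.add(directory.get(i));
            globalDepth++;
        }
        // Redistribute the bucket's entries on the next hash bit.
        Bucket b0 = new Bucket(), b1 = new Bucket();
        b0.localDepth = b1.localDepth = b.localDepth + 1;
        long bit = 1L << b.localDepth;
        for (Map.Entry<Long, Long> e : b.entries.entrySet()) {
            ((e.getKey() & bit) == 0 ? b0 : b1).entries.put(e.getKey(), e.getValue());
        }
        // Repoint every directory slot that referenced the old bucket.
        for (int i = 0; i < directory.size(); i++) {
            if (directory.get(i) == b) {
                directory.set(i, (i & bit) == 0 ? b0 : b1);
            }
        }
    }
}
```

The key trade-off this sketch makes visible is that a split usually touches only one bucket plus a few directory slots, but the directory itself doubles in the worst case; linear and spiral hashing avoid the directory entirely at the cost of splitting buckets in a fixed order rather than on demand.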
Considering the dynamic hashing algorithms, it seems likely that the choice of algorithm will come down to whether we have a bare file index format for hashing (extended spiral hashing might be the best choice if we develop a new persistence store specifically for this purpose), whether we use the RWStore for page allocation management, and whether or not the hash table is integrated with the MVCC architecture (based on copy-on-write combined with eventual release or recycling of deleted records).