Modular¹ Multi-Modal² Transactional³ Database
For Artificial Intelligence⁴ and Semantic Search⁵

²: can store Blobs • Documents • Graphs • 🔜 Features • 🔜 Texts
³: guarantees Atomicity • Consistency • Isolation • Durability
⁴: comes with Pandas and NetworkX APIs and 🔜 PyTorch data-loaders
⁵: brings vector search integrated with USearch and UForm

Packages: PyPI • CMake • Docker Hub
YouTube intro • Discord chat • Full documentation
Installing UStore is a breeze, and the usage is about as simple as a Python dict.
```sh
$ pip install ukv
$ python
```

```python
from ukv import umem

db = umem.DataBase()
db.main[42] = 'Hi'
```
We have just created an in-memory embedded transactional database and added one entry to its `main` collection.
Would you prefer that data on disk? Change one line:

```python
from ukv import rocksdb

db = rocksdb.DataBase('/some-folder/')
```
Would you prefer to connect to a remote UStore server? UStore comes with an Apache Arrow Flight RPC interface!
```python
from ukv import flight_client

db = flight_client.DataBase('grpc://0.0.0.0:38709')
```
Are you storing a NetworkX-like MultiDiGraph? Or a Pandas-like DataFrame?
```python
import pandas as pd
from ukv import rocksdb

db = rocksdb.DataBase()

# Upsert rows through a Pandas-like table interface.
users_table = db['users'].table
users_table.merge(pd.DataFrame([
    {'id': 1, 'name': 'Lex', 'lastname': 'Fridman'},
    {'id': 2, 'name': 'Joe', 'lastname': 'Rogan'},
]))

# Link the same entities through a NetworkX-like graph interface.
friends_graph = db['friends'].graph
friends_graph.add_edge(1, 2)

assert friends_graph.has_edge(1, 2) and \
       friends_graph.has_node(1) and \
       friends_graph.number_of_edges(1, 2) == 1
```
Function calls may look identical, but the underlying implementation can be addressing hundreds of terabytes of data placed somewhere in persistent memory on a remote machine.
Is someone else concurrently updating those collections? Bundle your operations to guarantee consistency!
```python
db = rocksdb.DataBase()
with db.transact() as txn:
    txn['users'].table.merge(...)
    txn['friends'].graph.add_edge(1, 2)
```
So far we have only covered the tip of the UStore. You may use it to...
- Get C99, Python, GoLang, or Java wrappers for RocksDB or LevelDB.
- Serve them via Apache Arrow Flight RPC to Spark, Kafka, or PyTorch.
- Store Documents and Graphs in an embedded DB, avoiding networking overheads.
- Tier DBMS between in-memory and persistent backends under one API.
But UStore can do more. Here is the map:

- Basic Usage
- Advanced Usage: production, performance tuning, and administration
- For contributors and advanced users looking to fork, extend, wrap, or distribute and, potentially, monetize alternative builds of UStore
## Basic Usage
UStore is intended not just as a database, but as a "build your own database" toolkit and an open standard for NoSQL potentially-transactional databases, defining zero-copy binary interfaces for "Create, Read, Update, Delete" operations, or CRUD for short.
A few simple C99 headers can link almost any underlying storage engine to numerous high-level language drivers, extending their support for binary string values to graphs, flexible-schema documents, and other modalities, aiming to replace MongoDB, Neo4J, Pinecone, and ElasticSearch with a single ACID-transactional system.
Redis, for example, provides RediSearch, RedisJSON, and RedisGraph with similar objectives. UStore does it better, allowing you to plug in your favorite Key-Value Store (KVS), whether embedded, standalone, or sharded, such as FoundationDB, multiplying its functionality.
Binary Large Objects can be placed inside UStore. Performance will vary vastly depending on the underlying technology. The in-memory UCSet will be the fastest, but the least suited for larger objects. The persistent UDisk, when properly configured, can entirely bypass the Linux kernel, including the filesystem layer, directly addressing block devices.

Modern persistent IO on high-end servers can exceed 100 GB/s per socket when built on user-space drivers like SPDK. This is close to the real-world throughput of high-end RAM and unlocks use cases uncommon to databases. One may now put a gigabyte-sized video file in an ACID-transactional database, right next to its metadata, instead of using a separate object store like MinIO.
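A minimal sketch of what that looks like with the dict-like API from the snippets above; the payload here is a stand-in, and we assume named collections accept item assignment just like `db.main`:

```python
from ukv import umem

db = umem.DataBase()

video: bytes = b'\x00' * (1024 * 1024)   # imagine a real gigabyte-scale MP4 here
db.main[42] = video                      # the blob itself
db['metadata'][42] = 'Lecture #42'       # its metadata, one key away
```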
JSON is the most commonly used document format these days. UStore document collections support JSON, as well as MessagePack, and BSON, used by MongoDB.
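Storing a document might look like this in Python; the `.docs` accessor is an assumption, mirroring the `.table` and `.graph` accessors shown earlier:

```python
from ukv import umem

db = umem.DataBase()

# `.docs` is assumed here by analogy with `.table` and `.graph`.
people = db['people'].docs
people[1] = {'name': 'Lex', 'lastname': 'Fridman'}
```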
UStore doesn't scale horizontally yet, but it provides much higher single-node performance and almost linear vertical scalability on many-core systems, thanks to the open-source simdjson and yyjson libraries.
Moreover, you don't need a custom query language like MQL to interact with the data. Instead, we prioritize open RFC standards to truly avoid vendor lock-in:
- JSON Pointer: RFC 6901 to address nested fields.
- JSON Patch: RFC 6902 for field-level updates.
- JSON MergePatch: RFC 7386 for document-level updates.
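To make these semantics concrete, here is a minimal illustration of all three RFCs using the off-the-shelf `jsonpointer` and `jsonpatch` Python packages (not part of UStore), with RFC 7386 spelled out by hand:

```python
# pip install jsonpointer jsonpatch
from jsonpointer import resolve_pointer  # RFC 6901
import jsonpatch                         # RFC 6902

doc = {'user': {'name': 'Lex', 'links': [1, 2]}}

# RFC 6901: address a nested field by path.
assert resolve_pointer(doc, '/user/name') == 'Lex'

# RFC 6902: field-level updates as a list of operations.
patched = jsonpatch.apply_patch(
    doc, [{'op': 'replace', 'path': '/user/name', 'value': 'Joe'}])
assert patched['user']['name'] == 'Joe'

# RFC 7386: document-level merge, where `None` deletes a key.
def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

assert merge_patch(doc, {'user': {'links': None}}) == {'user': {'name': 'Lex'}}
```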
Modern Graph databases, like Neo4J, struggle with large workloads. They require too much RAM, and their algorithms observe data one entry at a time. We optimize on both fronts:

- Using delta-coding to compress the inverted indexes.
- Adapting classical graph algorithms to high-latency storage, processing graphs in a batch-like or edge-centric fashion, as sketched below.
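For instance, edges can be submitted in batches rather than one at a time; this sketch assumes the Python SDK mirrors NetworkX's `add_edges_from`:

```python
from ukv import umem

db = umem.DataBase()
graph = db['friends'].graph

# One round-trip to storage for many edges, instead of one per edge;
# `add_edges_from` is assumed to mirror the NetworkX method of the same name.
graph.add_edges_from([(1, 2), (2, 3), (3, 1)])
```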
Feature Stores and Vector Databases, like Pinecone, Milvus, and USearch, provide standalone indexes for vector search. UStore implements vector search as a separate modality, on par with Documents and Graphs. Features include:
- 8-bit integer quantization.
- 16-bit floating-point quantization.
- Cosine, Inner Product, and Euclidean metrics.
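As a back-of-the-envelope illustration of what 8-bit quantization under a cosine metric means, in plain NumPy rather than UStore's actual implementation:

```python
import numpy as np

def quantize_i8(x: np.ndarray) -> np.ndarray:
    """Scale a float vector into the [-127, 127] range of 8-bit integers."""
    return np.round(x * (127.0 / np.abs(x).max())).astype(np.int8)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(np.float32), b.astype(np.float32)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a, b = np.random.rand(256), np.random.rand(256)

# Cosine similarity is scale-invariant, so it survives quantization with
# little error, while the index shrinks 4x relative to 32-bit floats.
assert abs(cosine(a, b) - cosine(quantize_i8(a), quantize_i8(b))) < 0.05
```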
UStore looks very different in Python and in C++. Our Python SDK mimics other Python libraries, namely Pandas and NetworkX. Similarly, the C++ library provides the interface C++ developers expect.

People use different languages for different purposes, so some C-level functionality isn't implemented for every language, either because there was no demand for it or because we haven't gotten to it yet.
| Name | Transact | Collections | Batches | Docs | Graphs | Copies |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| C99 Standard | ✓ | ✓ | ✓ | ✓ | ✓ | 0 |
| C++ SDK | ✓ | ✓ | ✓ | ✓ | ✓ | 0 |
| Python SDK | ✓ | ✓ | ✓ | ✓ | ✓ | 0-1 |
| GoLang SDK | ✓ | ✓ | ✓ | ✗ | ✗ | 1 |
| Java SDK | ✓ | ✓ | ✗ | ✗ | ✗ | 1 |
| Arrow Flight API | ✓ | ✓ | ✓ | ✓ | ✓ | 0-2 |
Some frontends here have entire ecosystems around them! Apache Arrow Flight API, for instance, has its own drivers for C, C++, C#, Go, Java, JavaScript, Julia, MATLAB, Python, R, Ruby and Rust.
- Transactions are ACI(D) by default. What does that mean?
- Why not use LevelDB or RocksDB interface? Answered
- Why not use SQL, MQL or CYPHER? Answered
- Does UStore support Time-To-Live? Answered
- Does UStore support compression? Answered
- Does UStore support queues? Answered
- How can I add drivers for language X? Answered
- How can I add database X as an engine? Answered
The following engines can be used almost interchangeably. Historically, LevelDB was the first one. RocksDB then improved on its functionality and performance, and now serves as the foundation for half of the DBMS startups.
| | LevelDB | RocksDB | UDisk | UCSet |
| :--- | :---: | :---: | :---: | :---: |
| Speed | 1x | 2x | 10x | 30x |
| Persistent | ✓ | ✓ | ✓ | ✗ |
| Transactional | ✗ | ✓ | ✓ | ✓ |
| Block Device Support | ✗ | ✗ | ✓ | ✗ |
| Encryption | ✗ | ✗ | ✓ | ✗ |
| Watches | ✗ | ✓ | ✓ | ✓ |
| Snapshots | ✓ | ✓ | ✓ | ✗ |
| Random Sampling | ✗ | ✗ | ✓ | ✓ |
| Bulk Enumeration | ✗ | ✗ | ✓ | ✓ |
| Named Collections | ✗ | ✓ | ✓ | ✓ |
| Open-Source | ✓ | ✓ | ✗ | ✓ |
| Compatibility | Any | Any | Linux | Any |
| Maintainer | | | Unum | Unum |
UCSet and UDisk are both designed and maintained by Unum.
Both are feature-complete, but the most crucial feature our alternatives provide is performance.
Being fast in memory is easy.
The core logic of UCSet can be found in the templated header-only ucset library.
Designing UDisk was a much more challenging, 7-year-long endeavour. It included inventing new tree-like structures, implementing partial kernel bypass with io_uring, complete bypass with SPDK, CUDA GPU acceleration, and even a custom internal filesystem. UDisk is the first engine designed from scratch with parallel architectures and kernel bypass in mind.
Atomicity is always guaranteed: even for non-transactional writes, either all updates pass or all fail.

Consistency is implemented in the strictest possible form, "Strict Serializability", meaning that:
- reads are "Serializable",
- writes are "Linearizable".
The default behavior, however, can be tweaked at the level of specific operations. For that, the ::ustore_option_transaction_dont_watch_k flag can be passed to ustore_transaction_init() or to any transactional read/write operation, controlling the consistency checks during staging.
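Schematically, in C it might look like the following. The struct-of-arguments convention matches UStore's C99 layer, but the exact field names here are assumptions:

```c
#include <ustore/db.h>

/* A sketch only: treat the struct fields below as assumptions. */
ustore_database_t db = NULL;     /* assumed to be opened elsewhere */
ustore_transaction_t txn = NULL;
ustore_error_t error = NULL;

ustore_transaction_init_t init = {
    .db = db,
    .error = &error,
    /* Skip watching the keys we touch: faster staging, weaker checks. */
    .options = ustore_option_transaction_dont_watch_k,
    .transaction = &txn,
};
ustore_transaction_init(&init);
```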
| | Reads | Writes |
| :--- | :---: | :---: |
| Head | Strict Serial | Strict Serial |
| Transactions over Snapshots | Serial | Strict Serial |
| Transactions w/out Snapshots | Strict Serial | Strict Serial |
| Transactions w/out Watches | Strict Serial | Sequential |
If this topic is new to you, please check out the Jepsen.io blog on consistency.
Isolation guarantees depend on whether the transaction runs over a snapshot:

| | Reads | Writes |
| :--- | :---: | :---: |
| Transactions over Snapshots | ✓ | ✓ |
| Transactions w/out Snapshots | ✗ | ✓ |
Durability doesn't apply to in-memory systems by definition. In hybrid or persistent systems, we prefer to disable it by default, as almost every DBMS built on top of a KVS implements its own durability mechanism. That is even more true in distributed databases, where three separate Write-Ahead Logs may exist:
- in KVS,
- in DBMS,
- in Distributed Consensus implementation.
If you still need durability, flush writes on commits with an optional flag. In the C driver, you would call ustore_transaction_commit() with the ::ustore_option_write_flush_k flag.
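Again schematically, under the same assumptions about the struct-of-arguments convention:

```c
/* A sketch only: field names are assumptions. */
ustore_transaction_commit_t commit = {
    .db = db,
    .error = &error,
    .transaction = txn,
    /* Flush to stable storage before acknowledging the commit. */
    .options = ustore_option_write_flush_k,
};
ustore_transaction_commit(&commit);
```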
The entire DBMS fits into a sub-100 MB Docker image. Run the following command to pull and run the container, exposing the Apache Arrow Flight server on port 38709. Client SDKs will also communicate through that same port by default.

```sh
docker run -d --rm --name ustore-test -p 38709:38709 unum/ustore
```
The default configuration file can be retrieved with:

```sh
cat /var/lib/ustore/config.json
```
The simplest way to connect and test would be the following command:

```sh
python ...
```
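For instance, reusing the Flight client shown earlier as a smoke test, assuming the dict-like semantics from the top of this README:

```python
from ukv import flight_client

# Connect to the container started above, on the default Flight port.
db = flight_client.DataBase('grpc://0.0.0.0:38709')
db.main[42] = 'Hi'
```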
Pre-packaged UStore images are available on multiple platforms:

- Docker Hub image: v0.7
- RedHat OpenShift operator: v0.7
- Amazon AWS Marketplace images:
  - Free Community Edition: v0.4
  - In-Memory Edition: 🔜
  - Performance Edition: 🔜
Don't hesitate to commercialize and redistribute UStore.
Tuning databases is as much art as it is science. Projects like RocksDB provide dozens of knobs to optimize the behavior. We allow forwarding specialized configuration files to the underlying engine.
```json
{
    "version": "1.0",
    "directory": "./tmp/"
}
```
This simpler procedure is enough for roughly 80% of users. It can be extended to utilize multiple devices or directories, or to forward a specialized engine config:
```json
{
    "version": "1.0",
    "directory": "/var/lib/ustore",
    "data_directories": [
        {
            "path": "/dev/nvme0p0/",
            "max_size": "100GB"
        },
        {
            "path": "/dev/nvme1p0/",
            "max_size": "100GB"
        }
    ],
    "engine": {
        "config_file_path": "./engine_rocksdb.ini"
    }
}
```
Database collections can also be configured with JSON files.
As of the current version, 64-bit signed integers are used, allowing unique keys in the range [0, 2^63). 128-bit builds with UUIDs are coming, but variable-length keys are highly discouraged. Why?
Using variable-length keys forces numerous limitations onto the design of a Key-Value Store. Firstly, it implies slow character-wise comparisons, a performance killer on modern superscalar CPUs. Secondly, it forces keys and values to be joined on disk to minimize the metadata needed for navigation. Lastly, it violates our simple logical view of a KVS as a "persistent memory allocator", putting a lot more responsibility on it.
The recommended approach to dealing with string keys is:

- Choose a mechanism to generate unique integer keys (UIDs), e.g., monotonically increasing values.
- Use the "paths" modality to build a persistent hash map from strings to UIDs.
- Use those UIDs to address the rest of the data in the binary, document, and graph modalities.
This results in a single conversion point from string to integer representations and keeps most of the system snappy, with C-level interfaces simpler than they could have been otherwise.
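Here is what that pattern might look like; the `.paths` accessor is hypothetical, mirroring the `.table` and `.graph` accessors above, and a monotonic counter stands in for a real UID generator:

```python
from itertools import count

from ukv import umem

db = umem.DataBase()
uids = count(1)                      # toy monotonic UID generator

# Hypothetical `.paths` accessor: a persistent map from strings to UIDs.
paths = db['ids'].paths
uid = next(uids)
paths['lex.fridman'] = str(uid)      # the single string -> integer conversion point

# Everything else is addressed by the integer UID.
db.main[uid] = b'profile blob'
```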
As of today, we can only address values of 4 GB or smaller. Why? Key-Value Stores are generally intended for high-frequency operations. Accessing and modifying files of 4 GB and larger thousands of times per second is impossible on modern hardware, so we stick to smaller length types, making the use of the Apache Arrow representation slightly easier and allowing the KVS to compress indexes better.
Our development roadmap is public and is hosted within the GitHub repository. Upcoming tasks include:
- Builds for Arm and macOS.
- Persistent Snapshots.
- Continuous Replication.
- Document-schema validation.
- Richer drivers for GoLang, Java, JavaScript.
- Improved Vector Search.
- Collection-level configuration.
- Owning and non-owning C++ wrappers.
- Horizontal Scaling.