A file-based database system designed for storage, retrieval, and management of data. It offers multiple ways to read and store data, along with encryption functionality. DataDepot is an ideal solution for small to medium-sized projects that require a lightweight yet powerful data storage option without the overhead of traditional databases.
npm
npm install datadepot
yarn
yarn add datadepot
pnpm
pnpm add datadepot
cjs
const { DataDepot, Depot } = require("datadepot");
esm
import { DataDepot, Depot } from "datadepot";
const store = new Depot();
With TypeScript, you may declare the storage type
const store = new Depot<Record<string, unknown>>();
store.setItem("data", { foo: "foo", bar: "bar" });
store.updateItem("data", { foo: "bar" });
store.getItem("data"); // { foo: 'bar' }
store.removeItem("data");
Setting a key that already exists, or updating, getting, or removing a non-existent key, will throw an exception.
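These rules can be sketched with a minimal Map-based mock. `StrictStore` below is a hypothetical stand-in that mirrors the described behavior, not DataDepot's actual implementation:

```javascript
// Minimal sketch of the key-existence rules, modeled with a plain Map.
// StrictStore is a hypothetical stand-in, not DataDepot's implementation.
class StrictStore {
  constructor() {
    this.map = new Map();
  }
  setItem(key, value) {
    if (this.map.has(key)) throw new Error(`Key "${key}" already exists`);
    this.map.set(key, value);
  }
  updateItem(key, value) {
    if (!this.map.has(key)) throw new Error(`Key "${key}" does not exist`);
    this.map.set(key, value);
  }
  getItem(key) {
    if (!this.map.has(key)) throw new Error(`Key "${key}" does not exist`);
    return this.map.get(key);
  }
  removeItem(key) {
    if (!this.map.has(key)) throw new Error(`Key "${key}" does not exist`);
    this.map.delete(key);
  }
}

const s = new StrictStore();
s.setItem("data", { foo: "foo" });
let duplicateThrew = false;
try {
  s.setItem("data", { foo: "bar" }); // duplicate key: throws
} catch (e) {
  duplicateThrew = true;
}
```

If you want a set-or-update behavior, check for the key first and branch between `setItem()` and `updateItem()`.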
store.load({ data: { foo: "foo", bar: "bar" } });
const json = store.export(); // { data: { foo: 'foo', bar: 'bar' } }
The method load() loads the given data, overwriting all existing data in the db/storage container.
The method export() exports all keys and their data in JSON form.
Use these two operations sparingly.
store.clear();
After calling the member method clear(), all data in the db/storage container is removed, but the container remains usable.
store.destory();
After calling the member method destory(), the db/storage container is destroyed; accessing it afterwards in any way will throw an exception.
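The difference between the two can be sketched with a plain class. `MockDepot` is a hypothetical illustration of the described semantics, not DataDepot's code (it keeps the library's `destory` spelling):

```javascript
// Sketch of the clear()/destory() distinction: clear() empties the store,
// destory() makes it permanently unusable. MockDepot is an illustration,
// not DataDepot's implementation; "destory" mirrors the library's spelling.
class MockDepot {
  constructor() {
    this.map = new Map();
    this.destroyed = false;
  }
  assertAlive() {
    if (this.destroyed) throw new Error("store destroyed");
  }
  setItem(key, value) {
    this.assertAlive();
    this.map.set(key, value);
  }
  clear() {
    this.assertAlive();
    this.map.clear(); // empties the store; still usable afterwards
  }
  destory() {
    this.assertAlive();
    this.map.clear();
    this.destroyed = true; // any further access throws
  }
  size() {
    this.assertAlive();
    return this.map.size;
  }
}

const m = new MockDepot();
m.setItem("a", 1);
m.clear();
m.setItem("a", 1); // fine: cleared, not destroyed
m.destory();
let accessThrew = false;
try {
  m.size(); // destroyed: throws
} catch (e) {
  accessThrew = true;
}
```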
Write into a chunk file with default options
DataDepot.write(store, ".");
All the data in the db/storage container will be serialized, compressed, and saved into a single chunk file in the specified directory. The chunk name will be a Unix timestamp.
Specifying options
DataDepot.write(store, ".", {
chunkName: "sampleChunk",
key: "sampleKey",
});
The above example saves the data in a file named sampleChunk. The file is symmetrically encrypted with the AES algorithm, and the key is required when reading the file.
Dump into multiple files
DataDepot.write(store, ".", {
chunkName: "sampleChunk",
key: "sampleKey",
maxChunkCount: 3
});
With the maxChunkCount option, the data will be split evenly across n chunk files.
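The even split can be illustrated as below. `splitByCount` is a hypothetical helper showing the idea, not part of DataDepot's API:

```javascript
// Illustration of count-based chunking: split a serialized payload evenly
// into at most maxChunkCount pieces. splitByCount is a hypothetical helper,
// not part of DataDepot's API.
function splitByCount(serialized, maxChunkCount) {
  const size = Math.ceil(serialized.length / maxChunkCount);
  const chunks = [];
  for (let i = 0; i < serialized.length; i += size) {
    chunks.push(serialized.slice(i, i + size));
  }
  return chunks;
}

const chunks = splitByCount("x".repeat(10), 3); // piece lengths: 4, 4, 2
```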
DataDepot.write(store, ".", {
chunkName: "sampleChunk",
key: "sampleKey",
maxChunkSize: 1000,
});
With the maxChunkSize option, you can specify the maximum size of a chunk file; when that size is reached, writing moves on to the next chunk file.
However, you may specify only one of the maxChunkCount and maxChunkSize options; specifying both will throw an exception.
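The mutual-exclusion rule amounts to a check like the following. `validateWriteOptions` is a hypothetical helper illustrating the rule, not DataDepot's actual code:

```javascript
// Sketch of the mutual-exclusion rule for write options.
// validateWriteOptions is a hypothetical helper, not DataDepot's API.
function validateWriteOptions(options = {}) {
  if ("maxChunkCount" in options && "maxChunkSize" in options) {
    throw new Error("Specify only one of maxChunkCount or maxChunkSize");
  }
  return options;
}

let bothThrew = false;
try {
  validateWriteOptions({ maxChunkCount: 3, maxChunkSize: 1000 }); // throws
} catch (e) {
  bothThrew = true;
}
const ok = validateWriteOptions({ maxChunkSize: 1000 }); // fine
```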
Read from chunk file(s)
DataDepot.load(store, ".", "sampleChunk", "sampleKey");
You don't need to worry about multiple files; it will find all chunk files belonging to the same chunk name. The key can be omitted if none was specified when writing.
Write and read from a JSON file
DataDepot.writeToJson(store, "sample.json");
DataDepot.loadFromJson(store, "sample.json");
class SampleClass {
constructor(a, b) {
this.a = a;
this.b = b;
}
}
DataDepot.insertAnObject(store, "object", new SampleClass("a", "b"));
store.getItem("object"); // { a: 'a', b: 'b' }
DataDepot.loadFromObject(store, new SampleClass("a", "b"));
store.getItem("a"); // a
You may insert an object directly into the db/storage container, or use an object as the data source (overwriting all data in the db/storage container).