Add support for separate name table (ServerDBAdapter support only) #113

Merged
lmg-anon merged 2 commits into lmg-anon:namestore from saood06:main
Oct 28, 2025

Conversation

@saood06
Contributor

@saood06 saood06 commented May 16, 2025

As mentioned in #102 (comment), with a large number of entries the initial load time can take minutes: even though the sessions query only returns the name, the json_extract call is very costly because it parses each full JSON blob. This change brings startup time down to the realm of seconds, which is much more tolerable.
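To illustrate the difference (a minimal sketch, not mikupad's actual schema — the table layout and row count here are assumptions), extracting the name with json_extract forces SQLite to parse every session's full JSON blob, while a dedicated names table only reads the small name strings:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sessions (key TEXT PRIMARY KEY, data TEXT);
CREATE TABLE names (key TEXT PRIMARY KEY, data TEXT);
""")

# Simulate sessions whose JSON blobs are large (here: a long prompt field).
for i in range(100):
    blob = json.dumps({"name": f"Session {i}", "prompt": "x" * 10_000})
    db.execute("INSERT INTO sessions VALUES (?, ?)", (f"session-{i}", blob))
    db.execute("INSERT INTO names VALUES (?, ?)", (f"session-{i}", f"Session {i}"))

# Slow path: every row's full JSON blob must be parsed just to get the name.
slow = db.execute(
    "SELECT key, json_extract(data, '$.name') FROM sessions ORDER BY key"
).fetchall()

# Fast path: the names table stores only the small name strings.
fast = db.execute("SELECT key, data FROM names ORDER BY key").fetchall()

assert slow == fast  # same result, far less work per row
```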

I also found that the data is highly compressible via: phiresky/sqlite-zstd in my case, tested across two different databases:

Compressed 14752 rows Total size of entries before: 31.04GB, afterwards: 3.40GB, (average: before=2.10MB, after=230.77kB)
Compressed 8042 rows Total size of entries before: 8.62GB, afterwards: 581.33MB, (average: before=1.07MB, after=72.29kB)

I used the following migration script (after making a backup):

BEGIN TRANSACTION;

CREATE TABLE names (
    key TEXT PRIMARY KEY,
    data TEXT
);

INSERT INTO names (key, data)
SELECT
    sessions.key,
    json_extract(sessions.data, '$.name')
FROM
    sessions
WHERE
    json_extract(sessions.data, '$.name') IS NOT NULL;

UPDATE sessions
SET data = json_remove(data, '$.name')
WHERE
    json_extract(data, '$.name') IS NOT NULL;

COMMIT;
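After running a migration like this, it may be worth sanity-checking that every name landed in the new table and was stripped from the session blobs. A minimal self-contained check (a sketch against the same assumed schema, using a toy in-memory database rather than the real file):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (key TEXT PRIMARY KEY, data TEXT)")
db.execute("INSERT INTO sessions VALUES ('a', ?)",
           (json.dumps({"name": "First", "prompt": "..."}),))
db.execute("INSERT INTO sessions VALUES ('b', ?)",
           (json.dumps({"prompt": "no name here"}),))

# Same statements as the migration script above.
db.executescript("""
BEGIN;
CREATE TABLE names (key TEXT PRIMARY KEY, data TEXT);
INSERT INTO names (key, data)
    SELECT key, json_extract(data, '$.name') FROM sessions
    WHERE json_extract(data, '$.name') IS NOT NULL;
UPDATE sessions SET data = json_remove(data, '$.name')
    WHERE json_extract(data, '$.name') IS NOT NULL;
COMMIT;
""")

# No blob still carries $.name, and exactly the named sessions migrated.
leftover = db.execute("SELECT COUNT(*) FROM sessions "
                      "WHERE json_extract(data, '$.name') IS NOT NULL").fetchone()[0]
assert leftover == 0
assert db.execute("SELECT key, data FROM names").fetchall() == [("a", "First")]
```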

And the following compression script:

const sqlite3 = require('sqlite3');

const db = new sqlite3.Database('../../web-session-storage.db', (err) => {
    if (err) {
        console.error(err.message);
        throw err;
    }
});

db.loadExtension('../../libsqlite_zstd.so', (err) => {
    if (err) {
        console.error(err.message);
        throw err;
    }
});

// serialize() guarantees these statements run in order:
// enable transparent compression, compress the existing rows,
// then reclaim the freed space.
db.serialize(() => {
    db.get(`SELECT zstd_enable_transparent('{"table": "sessions","column": "data", "compression_level": 19, "dict_chooser": "''a''", "train_dict_samples_ratio": 10}')`, (err) => {
        if (err) console.error(err.message);
    });

    db.get(`SELECT zstd_incremental_maintenance(null, 1)`, (err) => {
        if (err) console.error(err.message);
    });

    db.run(`VACUUM`, (err) => {
        if (err) console.error(err.message);
    });
});

The train_dict_samples_ratio should be set according to your data size (see phiresky/sqlite-zstd#16 and phiresky/sqlite-zstd#11), and compression_level according to your preference (I set 19 and waited patiently).

In the code you say "TODO: Remove saveQueue", but I think saveQueue can be kept if its interval is set to 10 minutes (similar to how desktop office suites do autosave). That is how I am using it.
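As an illustration of that suggestion (a sketch only; mikupad's actual saveQueue is JavaScript and its internals are not shown in this thread), an autosave-style queue just coalesces dirty state and flushes at most once per interval:

```python
import threading
import time

class SaveQueue:
    """Coalesce saves and flush at most once per interval (autosave-style)."""

    def __init__(self, flush, interval_seconds):
        self._flush = flush
        self._interval = interval_seconds
        self._pending = None
        self._lock = threading.Lock()
        self._timer = None

    def queue(self, state):
        with self._lock:
            self._pending = state      # later saves overwrite earlier ones
            if self._timer is None:    # only one flush scheduled at a time
                self._timer = threading.Timer(self._interval, self._run)
                self._timer.start()

    def _run(self):
        with self._lock:
            state, self._pending = self._pending, None
            self._timer = None
        self._flush(state)

saved = []
q = SaveQueue(saved.append, interval_seconds=0.05)  # 600 for 10 minutes
q.queue("v1")
q.queue("v2")            # coalesced with v1; only the latest is written
time.sleep(0.2)
assert saved == ["v2"]   # one flush, carrying the newest state
```

The point of the long interval is that each flush rewrites a potentially large JSON blob, so writing once every 10 minutes instead of on every keystroke keeps the benefit of crash recovery without the I/O cost.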

@mpetruc

mpetruc commented Oct 26, 2025

@saood06 I've merged your PR and tried to use your version. However, I'm getting this error in the console:

Uncaught (in promise) DOMException: IDBDatabase.transaction: 'Names' is not a known object store name
    loadFromDatabase file:///D:/llm_frontends/mikupad/mikupad.html:4870
    loadFromDatabase file:///D:/llm_frontends/mikupad/mikupad.html:4869
    loadFromDatabase file:///D:/llm_frontends/mikupad/mikupad.html:5128
    loadFromDatabase file:///D:/llm_frontends/mikupad/mikupad.html:5267

and the page doesn't display anything. Would you mind letting me know how to enable your PR? Thanks.

@lmg-anon lmg-anon changed the base branch from main to namestore October 28, 2025 21:19
@lmg-anon
Owner

Thank you for your contribution! This sounds like a great change; I'm going to finish it and merge it to main later.

@lmg-anon lmg-anon marked this pull request as ready for review October 28, 2025 21:21
@lmg-anon lmg-anon merged commit 6f341fa into lmg-anon:namestore Oct 28, 2025
@saood06 saood06 mentioned this pull request Dec 29, 2025