concurrency - Idea on building a live, serverless, multi-user, local-network-shared database with SQLite


I need input on an idea I had. If you're interested/experienced in SQLite and lowest-budget-high-end solutions (win ;)), this is for you. While fleshing out the question I have gotten pretty excited about the idea, so bear with me. There's a tl;dr below.

I have developed a VB.NET (3.5) desktop application with SQLite. The data is structured in an EAV model, because I need flexibility (the semantic structure of the data is stored separately in an XML file). The application performs above expectations, even with - in the scenario at hand - a large database (about 120 MB main file).

As expected, performance is abysmal when the SQLite file(s) reside on a network folder. It's bearable, but I have higher goals. The expected scenarios (for now) are a maximum of 10 users needing to access the database concurrently within a local Windows network. 98% of the required concurrency is heavy read access; inserts and updates are sparse and small.

The software is used exclusively within environments where the budget is low and the technical infrastructure (support and hardware) even lower, so the goal is to avoid using a database server. I do not want to implement my own SQLite "server" à la SQLitening (i.e. tell one instance of the application to automagically "share" the database on the network), because I want the database to be able to reside on an independent network drive.

My first impulse, after realizing that the situation cannot be amended by optimizing queries, was that a "lazy synchronization" approach would be painless to implement. The database is "wiki-esque" with (almost) only inserts, so there won't be (much of) a conflict issue at all: whoever comes "last" wins, each field has a change history with timestamp and user ID, and changes can be rolled back individually. Entries are "marked deleted" and can be discarded upon a "cleanup" action. This, though, comes at the cost of never having a "live" view of the data other users change or enter during a session, and the synching process might take a while, with users potentially blocking each other at "rush hour". We're talking maybe a couple of minutes worst-case, which wouldn't be a big deal but wouldn't be cool, either.

tl;dr: How can I implement a live, serverless, local-network-shared database with SQLite, for an end-user application, that performs like it would on a local data file, in a scenario with

  • many selects,
  • few inserts,
  • no deletes, and
  • hardly any updates per user session.

Let's furthermore assume that

  • there is always sufficient hard drive space available
  • due to archive mechanisms (and data privacy constraints) the database never grows above 200 MB
  • we can efficiently tell whether a file has been changed, and by whom, in the "shared directory" where the database files reside
  • we can copy files sufficiently fast to and from the shared directory
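The "has the file been changed" assumption could be backed by a chunked CRC32 compared against the value recorded in the logfile. A sketch; the helper name and chunk size are my own choices:

```python
import os
import tempfile
import zlib

def file_crc32(path, chunk_size=1 << 16):
    """CRC32 of a file, read in chunks so a 200 MB file never sits in RAM."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

# Demo on a throwaway file: the CRC changes when the content changes.
path = os.path.join(tempfile.mkdtemp(), "main.db")
with open(path, "wb") as f:
    f.write(b"original contents")
before = file_crc32(path)
with open(path, "ab") as f:
    f.write(b" plus an insert")
after = file_crc32(path)
```

Comparing `before` against the stored value answers "changed?"; the "by whom" part would come from the logfile entries written alongside it.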

Now, what I had in mind is implementing differencing files for each session, with locally cached read access.

At the start of a client session:

  • check the big file and the session-specific files (see below) in the shared directory for changes since the last session (CRC + logfile in the shared directory)
  • copy the big (200 MB) current database file to a local path before each session, if it has changed or is not yet cached
  • also copy the session-specific files (see below) if changed or not yet cached
  • all inserts made during the session are written to a small, session-specific file in the shared directory
    • this is limited to a suitable size, 2 MB per file or smaller; after that, a new file is created
  • read access (SELECTs) is performed sequentially
    • on the local copy of the main file
    • on the local copies of the session files
    • on the current (i.e. new) session files in the shared cache
  • on detection of a new session file, the session files are copied to the local cache again
  • finally, the session files are periodically merged into the big file
    • this could happen at the end of every session or, if I'm not mistaken, just whenever
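The "read sequentially across main file and session files" step maps naturally onto SQLite's ATTACH DATABASE: query the local main copy, then UNION in the attached session files. A minimal sketch under my own assumed file names and a one-table stand-in schema:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
main_path = os.path.join(tmp, "main.db")          # local copy of the big file
session_path = os.path.join(tmp, "session_001.db")  # one small session file

# Local copy of the big file: two pre-existing rows.
con = sqlite3.connect(main_path)
con.execute("CREATE TABLE entries (id INTEGER, value TEXT)")
con.executemany("INSERT INTO entries VALUES (?,?)", [(1, "old-a"), (2, "old-b")])
con.commit()
con.close()

# A session file holding this session's inserts.
s = sqlite3.connect(session_path)
s.execute("CREATE TABLE entries (id INTEGER, value TEXT)")
s.execute("INSERT INTO entries VALUES (3, 'new-c')")
s.commit()
s.close()

# Read across both: the main copy first, then the attached session file.
con = sqlite3.connect(main_path)
con.execute("ATTACH DATABASE ? AS session1", (session_path,))
rows = con.execute("""
    SELECT id, value FROM entries
    UNION ALL
    SELECT id, value FROM session1.entries
    ORDER BY id
""").fetchall()
```

Note that SQLite caps the number of simultaneously attached databases, so the client would have to either limit how many session files exist at once or fall back to querying them one after another.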

This would

  • eliminate write concurrency
  • eliminate read concurrency on the big file and the local session files
  • reduce the needed concurrency to reads on the small session files
  • minimize network usage to accessing the current session files (2 MB per concurrent user)
  • preserve a live view of the current data state for every client
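The periodic merge of session files back into the big file, described above, could likewise be an ATTACH plus `INSERT ... SELECT` in one transaction, after which the session file is deleted. Again a sketch with my own assumed names and schema:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
main_path = os.path.join(tmp, "main.db")
session_path = os.path.join(tmp, "session_001.db")

# Set up a main file with one row and a session file with two.
for path, rows in [(main_path, [(1, "a")]),
                   (session_path, [(2, "b"), (3, "c")])]:
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE entries (id INTEGER, value TEXT)")
    con.executemany("INSERT INTO entries VALUES (?,?)", rows)
    con.commit()
    con.close()

# Merge: attach the session file and fold its rows into the main file.
con = sqlite3.connect(main_path)
con.execute("ATTACH DATABASE ? AS s", (session_path,))
with con:  # runs the copy inside a transaction
    con.execute("INSERT INTO entries SELECT id, value FROM s.entries")
con.execute("DETACH DATABASE s")
merged = con.execute("SELECT COUNT(*) FROM entries").fetchone()[0]
con.close()

# The now-redundant session file can be removed.
os.remove(session_path)
```

In the real protocol the delete would only happen once every client has been told (via the logfile) that the merge took place, otherwise a client could still be reading the session file.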

This sounds beyond awesome to me.

Questions: Is there a name for such a "protocol" so I can research it further?

Would you consider this a viable approach for SQLite, or a wild goose chase - am I overlooking obvious drawbacks?

If you're on board: how would you size the session files (n * page_size?)?
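For context on the n * page_size question: the page size can be read with `PRAGMA page_size`, so a byte cap could be rounded to a whole number of pages. A tiny sketch (the 2 MB cap is the figure from the question, the variable names are mine):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Default page size varies by SQLite version (commonly 1024 or 4096 bytes).
page_size = con.execute("PRAGMA page_size").fetchone()[0]

# Express a 2 MB session-file cap as n * page_size.
cap_bytes = 2 * 1024 * 1024
n_pages = cap_bytes // page_size
```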

Thanks for your input!

Christoph

It seems to me you're trying to reinvent the wheel!

Your application's overhead, such as copying the big database file around, will surely outweigh the ultra-tuned SQLite engine. Also, a lot of privacy concerns would arise with such an approach (maybe not in your case, in an isolated server/client environment).

also, "custom rdbms" sure break acid compliance.

Lastly, your requirements seem to fit the situations in which SQLite already works well.

