The Why
node-gyp is great, but it's not as portable and doesn't scale as well as I'd like. If you try to install sqlite3 via npm on Arch Linux ARM, as an example, it won't work even if the native sqlite library is there and perfectly usable ... moreover ... what really bothers me is that node-gyp does not update within the system the way any other system package would. You need to rebuild, recompile, redo everything, even if you ship a specific Linux distribution that trusts the package manager for updates and does not want to bother users with build tasks.
This is quite common on embedded hardware and the related Linux distros, so I asked myself:
why on earth can't I simply pacman -Syu once in a while and automagically get the latest version of sqlite3, the one the whole system is already using and trusting anyhow, together with every other update including the node one?
The What
The repository is here! So here's the thing: dblite is nothing more than a spawned process over the sqlite3 shell, with piped and handled IO.
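To give an idea of the mechanism, here is a minimal sketch of the concept, not dblite's actual source: spawn the system sqlite3 shell once and keep talking to it through stdin/stdout. The -csv flag and the demo table are just illustrative choices here.

// minimal sketch of the idea, NOT dblite's implementation:
// spawn the system sqlite3 shell once, write SQL to its stdin
// and collect whatever it prints on stdout
var spawn = require('child_process').spawn;
var sqlite = spawn('sqlite3', ['-csv', ':memory:']);

sqlite.stdout.on('data', function (chunk) {
  console.log('out: ' + chunk);
});
sqlite.stderr.on('data', function (chunk) {
  console.error('err: ' + chunk);
});

sqlite.stdin.write("CREATE TABLE demo (info TEXT);\n");
sqlite.stdin.write("INSERT INTO demo VALUES ('hello');\n");
sqlite.stdin.write("SELECT rowid, info FROM demo;\n");
sqlite.stdin.write(".quit\n");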
Anything you could write directly in the sqlite3 shell will just work through this module, and everything that produces a result, such as SELECT or PRAGMA, is parsed asynchronously, only once the output has been fully flushed, at light speed and without blowing up memory, into an Array of rows, where each row can either be transformed into an object or left as a simple Array of fields. Here is the equivalent of the first sqlite3 usage example, written with dblite:
// node dblite.test.js
var dblite = require('dblite');
var db = dblite(':memory:');
var start = Date.now();

db.query('CREATE TABLE lorem (info TEXT)');
db.query('BEGIN');
for (var i = 0; i < 10; i++) {
  db.query(
    'INSERT INTO lorem VALUES (?)',
    ['Ipsum ' + i]
  );
}
db.query('COMMIT');
db.query(
  'SELECT rowid, info FROM lorem',
  // fields each row is retrieved as
  ['id', 'info'],
  // callback invoked once retrieved
  function (rows) {
    rows.forEach(eachRow);
  }
);

function eachRow(row, i, rows) {
  console.log(row.id + ": " + row.info);
  if ((i + 1) === rows.length) {
    start = Date.now() - start;
    console.log(start);
    db.close();
  }
}

An interesting note: on my MacBook Pro the code above runs in about 4~5 milliseconds, against about 15~21 milliseconds using the sqlite3 module: 3X faster!
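For reference, the sqlite3 bindings example this is compared against looks roughly like the snippet below; it follows that module's documented usage, but take it as an approximation rather than the exact code used for the timing.

// roughly the equivalent with the node-gyp based sqlite3 bindings,
// shown here only for comparison with the dblite version above
var sqlite3 = require('sqlite3').verbose();
var db = new sqlite3.Database(':memory:');

db.serialize(function () {
  db.run('CREATE TABLE lorem (info TEXT)');
  var stmt = db.prepare('INSERT INTO lorem VALUES (?)');
  for (var i = 0; i < 10; i++) {
    stmt.run('Ipsum ' + i);
  }
  stmt.finalize();
  db.each('SELECT rowid AS id, info FROM lorem', function (err, row) {
    console.log(row.id + ': ' + row.info);
  });
});

db.close();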
An Intuitive API ... Like, For Real!
I'd like to do a test now: I write down some code and you think about what it does. After that, I tell you what it actually does, and hopefully you'll realize it's most likely what you thought ... deal?

db.query(
  'INSERT INTO table VALUES (?, ?)',
  [null, 'some text']
);
db.query(
  'INSERT INTO table VALUES (:id, :value)',
  { id: 123, value: "wat's up?" }
);

I believe you understood these are just inserts with automatically addressed and escaped values, am I correct?
Let's do something else!
db.query(
  'SELECT * FROM table WHERE id = ?',
  [123],
  function (rows) {
    console.log(rows.length);
    console.log(rows[0]);
  }
);

What do you say? A select with an id that will produce an output like this?
1                     // the rows length
['123', "wat's up?"]  // the row itself

OK, OK, you got that ... how about this one then?
db.query(
  'SELECT * FROM table WHERE id = ?',
  [123],
  { id: Number, text: String },
  function (rows) {
    console.log(rows.length);
    console.log(rows[0]);
  }
);

Would you ever bet this is the result in the console?
1                               // still the rows length
{ id: 123, text: "wat's up?" }  // the row

How about all together?
db.query(
  'SELECT * FROM table WHERE id = :id AND value = :value',
  { id: 123, value: "wat's up?" },
  { index: Number, value: String },
  function (rows) {
    console.log(rows.length);
    console.log(rows[0]);
  }
);

Yep, validation will populate the resulting row as {index: 123, value: "wat's up?"}, since this is how properties can be remapped in query results: the object's property names decide the output keys, and their values add validations to the result.
db.query(
  'INSERT INTO users VALUES (?, ?, ?)',
  [null, 'WebReflection', '1978-05-17']
);

// what can we do with that date as string?
db.query(
  'SELECT * FROM users WHERE name = ?',
  ['WebReflection'],
  { id: Number, name: String, bday: Date },
  function (rows) {
    rows[0];
    /* {
      id: 35,
      name: 'WebReflection',
      bday: [object Date]
    } */
  }
);

To summarize, here is how the query method works: a SQL statement, optional fields to escape for the query, optional fields used to populate results as objects instead of arrays, and optional validation for each field, where the default is always String. I believe this is straightforward enough, but if I am wrong please tell me your idea of an intuitive API after playing a little bit with this query one, thanks :)
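One point worth reiterating with a small sketch (it just reuses the users table and the field names from the examples above): since the default validation is String, every value comes back as a string unless you ask for something else.

// sketch only, reusing the hypothetical "users" table from above
db.query(
  'SELECT id FROM users WHERE name = ?',
  ['WebReflection'],
  function (rows) {
    // no fields descriptor: rows[0] is something like ['35'], a string
  }
);
db.query(
  'SELECT id FROM users WHERE name = ?',
  ['WebReflection'],
  { id: Number },
  function (rows) {
    // with validation: rows[0] is {id: 35}, a real Number
  }
);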
The Target
Raspberry Pi, Cubieboard, and other ARM-based hardware are the main tested platforms: if it goes fast there, it goes fast everywhere. As written and tested in the main GitHub project page, it takes 0.178 seconds to perform 100 inserts onto an SD card on a Raspberry Pi, while it takes on average 30 milliseconds to fetch 200+ rows at once, and memory consumption is taken into account too.
I will properly test the sqlite3 module's performance against this one, but I believe there are many cases where this wrapper around a single spawned process could surprise in terms of performance, delegating all the horsepower to the native sqlite3 shell without bindings in between. Enjoy!
4 comments:
Yes !!
ab -c100 -n10000 -k http://127.0.0.1:7001/item/45001
Concurrency Level: 100
Time taken for tests: 4.777 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 10000
Total transferred: 20862080 bytes
HTML transferred: 19090000 bytes
Requests per second: 2093.34 [#/sec] (mean)
Time per request: 47.771 [ms] (mean)
Time per request: 0.478 [ms] (mean, across all concurrent requests)
Transfer rate: 4264.79 [Kbytes/sec] received
that seems to be like ... very good performance, right? :)
the github repo for this project is at:
https://github.com/WebReflection/dblite/
there are a few links but I guess I should change the CSS color for them :D