It provides serialization and deserialization using a defined data model. That's it.
Mapping data from A to B.
For example, your database query returns snake_cased keys and embedded relationships, but you want to send the data to the client with camelCase keys and sideloaded relationships. quick-model can do that.
Another use case: a client sends data to your server in a client-friendly format, and you need to deserialize it into yet another format and insert it into some third-party database, like a SalesForce table with a wacky__c schema.
```sh
npm install quick-model
```

```js
const { Model, Transforms } = require('quick-model');

const personModel = new Model({
  name: Model.attr(Transforms.stringTransform)
});

const bookModel = new Model({
  title: Model.attr(Transforms.stringTransform),
  author: Model.one(personModel)
});
```

Imagine your database query returns an object (continuing the example above):
```js
const book = await db.books.findByTitle('foundation');

console.log(book);
// {
//   book_title: 'Foundation',
//   book_author: {
//     full_name: 'Isaac Asimov'
//   }
// }
```

Notice the differences between the database result and the defined model:

```
database    -> model
book_title  -> title
book_author -> author
full_name   -> name
```
This is where quick-model shines. Let's write a quick serializer!
```js
const { Serializer } = require('quick-model');

const personSerializer = new Serializer({
  model: personModel,

  keyForNonSerializedAttribute(attribute) {
    const { name } = attribute;
    // attribute.name is the key you defined when creating an attr() in your model.
    if (name === 'name') {
      return 'full_name';
    }
    // default behavior
    return name;
  }
});

const bookSerializer = new Serializer({
  model: bookModel,

  serializers: {
    author: personSerializer
  },

  keyForNonSerializedAttribute(attribute) {
    const { name } = attribute;
    return `book_${name}`;
  },

  keyForNonSerializedRelationship(relationship) {
    const { name } = relationship;
    if (name === 'author') {
      return 'book_author';
    }
    return name;
  }
});
```

Now you can call `bookSerializer.serialize(dataFromDatabase)`.
It will properly extract the fields from the raw data and, by default, return an object that resembles how the model was defined:
```js
bookSerializer.serialize({
  book_title: 'Foundation',
  book_author: {
    full_name: 'Isaac Asimov'
  }
});
// returns:
// {
//   title: 'Foundation',
//   author: {
//     name: 'Isaac Asimov'
//   }
// }
```

This is a simple example, but you can model some really unfriendly data and serialize it into something friendly :)
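To make the mapping concrete, here is what that serialization amounts to for this particular model, written as a dependency-free sketch. This is only an illustration of the key renaming — not quick-model's internals — and `serializeBook` is a hypothetical helper name:

```javascript
// Illustration only: hand-rolls the key mapping that the serializer
// performs for this example. Not quick-model's actual implementation.
function serializeBook(raw) {
  return {
    title: raw.book_title,
    author: {
      name: raw.book_author.full_name
    }
  };
}

const result = serializeBook({
  book_title: 'Foundation',
  book_author: { full_name: 'Isaac Asimov' }
});

console.log(result);
// { title: 'Foundation', author: { name: 'Isaac Asimov' } }
```

The point of the library is that you declare this mapping once (via the model and the `keyFor…` hooks) instead of hand-writing a function like this for every shape of data.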
Full docs coming soon...
Some ideas I'd personally like to explore (or see explored) in the future:

- Building a JSON API Serializer
- Building an XML Serializer
- Implement primaryKey in model and write tests!
- Create a map-compact util & test
- Get tests working for both directories: lib, utils
- Test for deserializeAttribute, keyForDeserializedRelationship
- Test for serialize
- Test for deserialize
- Cache for camelize and underscore
- Tests for camelize and underscore
- Move utils into files that represent the data type they operate on/with:
  - array
  - string
  - function
  - object
- deepAssign when overwriting a transform's deserializer, serializer, and validator objects. I.e. don't overwrite the entire object, just merge the defined properties:

  ```js
  deepAssign({ serializer: { foo: 'bar' } }, { serializer: { baz: 'boo' } });
  // returns:
  // {
  //   serializer: {
  //     foo: 'bar',
  //     baz: 'boo'
  //   }
  // }
  ```

- In each directory, combine the tests for that directory and place them in a tests/ directory relative to where each test currently is
- Split deserialize, serialize, and normalize functionality into separate mixins
- Re-organize serializer tests into the appropriate mixins/tests/*-test.js file
- Support { include: [], exclude: [] } options during serialize, deserialize, and normalize
- serializer.serialize/deserialize should accept a hash of 3rd-party serializers in the event of embedded relationships, so that serializing an embedded relationship can use the correct serializer
- serializer.normalize
- Accept a hash of filter functions that can be applied to attributes or relationships, e.g. { filters: { password(x) { return x.replace('.+', '*') } } }
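The deepAssign merge mentioned in the roadmap above could be sketched like this. This is a plain-JS illustration of the intended merge semantics, not code from the library:

```javascript
// Recursively merge source into target, merging nested plain objects
// instead of overwriting them wholesale.
function deepAssign(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    const existing = target[key];
    const bothPlainObjects =
      value && typeof value === 'object' && !Array.isArray(value) &&
      existing && typeof existing === 'object' && !Array.isArray(existing);

    if (bothPlainObjects) {
      deepAssign(existing, value); // merge one level deeper
    } else {
      target[key] = value; // primitives, arrays, and new keys overwrite
    }
  }
  return target;
}

const merged = deepAssign(
  { serializer: { foo: 'bar' } },
  { serializer: { baz: 'boo' } }
);

console.log(merged);
// { serializer: { foo: 'bar', baz: 'boo' } }
```

Unlike Object.assign, which would replace the entire serializer object, this keeps foo while adding baz — which is the behavior the roadmap item asks for.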