Continuing in this issue the conversation started in the latest PR
(@OliverDavis comment)
How about this as a return object for both the annotate and forget endpoints?
{ "data": [current return object], "stats": {name, value for model fit stats} }
RE: I think that makes a lot of sense. Appending the new stats object to the current backend response is also very quick to do.
I think we should just decide what the stats object should contain, and which metric we want to calculate.
A first hypothesis could be to generate a per-class score (e.g. per-class precision), along with an overall score for all the faces currently on the board.
In other words, the stats object can have the following structure:
{
  "stats": {
    "happy":    <happy score in [0, 1]>,
    "sad":      <sad score in [0, 1]>,
    "angry":    <angry score in [0, 1]>,
    "disgust":  <disgust score in [0, 1]>,
    "fear":     <fear score in [0, 1]>,
    "surprise": <surprise score in [0, 1]>,
    "overall":  <overall global score in [0, 1]>
  }
}
When a new face comes in, updated scores for the 24 existing faces plus the new one will be returned.
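To make the proposal concrete, here is a minimal Python sketch of how the stats object could be computed, assuming we have true and predicted labels for the faces on the board. The function name `compute_stats` and the plain precision/accuracy metrics are illustrative assumptions, not a final design:

```python
from collections import defaultdict

# Emotion classes from the proposed stats object above.
CLASSES = ["happy", "sad", "angry", "disgust", "fear", "surprise"]

def compute_stats(true_labels, predicted_labels):
    """Per-class precision plus an overall accuracy over all faces on the board.

    Illustrative sketch: class scores are precision (TP / (TP + FP)),
    the overall score is plain accuracy over all faces.
    """
    tp = defaultdict(int)  # true positives per class
    fp = defaultdict(int)  # false positives per class
    correct = 0
    for truth, pred in zip(true_labels, predicted_labels):
        if truth == pred:
            tp[pred] += 1
            correct += 1
        else:
            fp[pred] += 1
    stats = {}
    for cls in CLASSES:
        denom = tp[cls] + fp[cls]
        # 0.0 when the class was never predicted (undefined precision).
        stats[cls] = tp[cls] / denom if denom else 0.0
    stats["overall"] = correct / len(true_labels) if true_labels else 0.0
    return stats
```

The backend would then attach this dict under the `stats` key alongside the existing `data` payload. Whether precision is the right per-class metric (versus recall or F1) is exactly the open question above.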
Would that make any sense?