Implements Sessions in the lock client #26
Conversation
…imer implementation, splitting of domains for acquire and release, addition of context
…n the lock client: gracefulShutdown, maintaining list of current locks
…criptor and LockDescriptor separation, solving #29
Still needs to handle some race conditions when session timeouts occur, and add tests specific to sessions.
suraj44 left a comment
One suggestion: have a mutex per process. Really good work though! Such clean, CLEAN code!
```go
}

// Parse parses an ID from a byte slice.
func Parse(idBytes []byte) (ID, error) {
```
I didn't understand what this function does exactly. Why are there two versions of an ID, string and bytes?
Also, it doesn't seem to be used elsewhere.
It's recommended to pass the ID as type ID, but sometimes we do need its string form. Thus the bytes.
```go
close := make(chan struct{}, 1)
go func() {
	for {
		sc.mu.Lock()
```
I think instead of having a single mutex for the whole client, it would make sense to have a mutex per process, because if MANY clients are connected and they all call acquire() at around the same time, they'll all just be waiting (because others might get the lock first), but ideally we'd like them to acquire their respective locks, right?
I like the way you've done this though! really nice :)
You really missed the point of locks in that question. Locks are meant to serialise things. As you can see, we have a COMMON map, session*, which when used from multiple goroutines causes a race. Thus a lock, and thus a single lock.
```go
	}
}()
ld := lockservice.NewLockDescriptor(d.ID(), s.ProcessID().String())
err := sc.acquire(ctx, ld)
```
Shouldn't there be a lock() and unlock() before and after acquire()?
ohhhhh nice, you've handled the timeout in acquire() :D
So this is a principle I've used generally while writing Raft w.r.t. locks: if a function needs something to be locked, let it lock it inside itself, and don't lock it before it's called. It keeps the locks cleaner and within the scope of the problem.
This closes #24 and #29