Introduce lightweight API framework for Go code + test suite#63
Conversation
    if err := pgx.BeginFunc(ctx, a.dbPool, func(tx pgx.Tx) error {
        for _, jobID := range jobIDs {

    func (a *jobCancelEndpoint) Execute(ctx context.Context, req *jobCancelRequest) (*jobCancelResponse, error) {
So each endpoint gets an Execute function like this one, with the main benefits compared to the alternative being:
- Error handling gets much easier; the handler can just return an error or API error as the second return value instead of having to worry about writing the error back or remembering to call return.
- Makes general testing much easier because most tests can just send a strongly typed request struct and receive a typed response.
- Moves marshaling/unmarshaling and other common API endpoint features down a layer so the handler code can be made much more succinct.
    resp, err := endpoint.Execute(ctx, &jobCancelRequest{JobIDs: []int64String{int64String(insertRes1.Job.ID), int64String(insertRes2.Job.ID)}})
    require.NoError(t, err)
    require.Equal(t, &jobCancelResponse{Status: "ok"}, resp)
There are two classes of new API tests in this PR. This is the first, which zeroes in on the endpoint and tests just that, with the benefits being:
- Strongly typed request/response, so each test doesn't have to marshal and unmarshal JSON everywhere, which adds a ton of noise.
- Similarly, much faster because we get to skip the entire HTTP layer in most cases.
- In case of an error, it's much clearer to interpret because you get a real error instead of a 500 response.
Most test cases, including edges like not-founds, will be written in this style because it's more convenient and lighter weight.
    makeAPICall(t, "JobCancel", http.MethodPost, makeURL("/api/jobs/cancel"), mustMarshalJSON(t, &jobCancelRequest{JobIDs: []int64String{int64String(job.ID)}}))
    makeAPICall(t, "JobGet", http.MethodGet, makeURL("/api/jobs/%d", job.ID), nil)
This is the second style of test.
It's fine for most test cases to skip the whole HTTP rigamarole, but it's still important to make sure the entire stack is exercised somewhere to protect against problems like an endpoint not being mounted by mistake.
This style of test, which I usually call an "integration" test, runs the entire thing end-to-end. As noted above, it's ~always better to write the lighter weight tests around each endpoint, so here we just have one call per endpoint to make sure that each works and responds with a status code >= 200 && < 300.
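Only `makeAPICall`'s call sites appear in the diff above, so its body is an assumption, but a helper like it might look roughly like this sketch: issue a real HTTP request against a server with the full mux mounted and fail unless the status lands in the 2xx range.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// is2xx mirrors the success check described above.
func is2xx(code int) bool { return code >= 200 && code < 300 }

// makeAPICall is a hypothetical sketch of the helper used in the tests:
// it issues a request and reports an error unless the response status is 2xx.
func makeAPICall(baseURL, method, path string, body io.Reader) (int, error) {
	req, err := http.NewRequest(method, baseURL+path, body)
	if err != nil {
		return 0, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if !is2xx(resp.StatusCode) {
		return resp.StatusCode, fmt.Errorf("expected 2xx status, got %d", resp.StatusCode)
	}
	return resp.StatusCode, nil
}

func main() {
	// Stand-in for the real mux with every endpoint mounted; the point of
	// the integration suite is catching an endpoint that was never mounted.
	mux := http.NewServeMux()
	mux.HandleFunc("POST /api/jobs/cancel", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"status":"ok"}`)
	})
	server := httptest.NewServer(mux)
	defer server.Close()

	status, err := makeAPICall(server.URL, http.MethodPost, "/api/jobs/cancel", strings.NewReader(`{"ids":["1"]}`))
	fmt.Println(status, err) // 200 <nil>

	// An unmounted endpoint fails the 2xx check (404 from the mux).
	_, err = makeAPICall(server.URL, http.MethodGet, "/api/not-mounted", nil)
	fmt.Println(err != nil) // true
}
```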
    @@ -0,0 +1,117 @@
    package riverinternaltest
This and all files in the diff below this one were copied basically wholesale from the main River project.
@bgentry This isn't done, but I wanted to make sure you're okay with the broad design before going any further. This is basically how I've set up other API projects in Go. There's definitely a little more abstraction compared to the raw handlers, but IMO it has huge benefits for writing much more succinct implementations and for testability. It's also potentially introspectable in case we ever want to build an OpenAPI spec or anything like that (it's possible to use reflect to iterate over the endpoint/request/response structs and extract information about them).
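The reflect-based introspection mentioned above could start out as simply as this sketch (the request type mirrors `jobCancelRequest` from the PR; the helper name is hypothetical): walk a request struct's fields and pull out each `json` tag, which is one building block for generating an OpenAPI spec.

```go
package main

import (
	"fmt"
	"reflect"
)

// Mirrors the PR's jobCancelRequest, with a plain string slice standing in
// for []int64String.
type jobCancelRequest struct {
	JobIDs []string `json:"ids"`
}

// fieldJSONNames walks a struct (or pointer to struct) with reflect and
// collects the `json` tag of each exported field.
func fieldJSONNames(v any) []string {
	t := reflect.TypeOf(v)
	if t.Kind() == reflect.Pointer {
		t = t.Elem()
	}
	var names []string
	for i := 0; i < t.NumField(); i++ {
		if tag := t.Field(i).Tag.Get("json"); tag != "" {
			names = append(names, tag)
		}
	}
	return names
}

func main() {
	fmt.Println(fieldJSONNames(&jobCancelRequest{})) // [ids]
}
```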
bgentry
left a comment
Looks awesome! I have no objections to any of it, it's a great model!
api_handler.go
Outdated
    a.logger.ErrorContext(ctx, "error decoding job IDs", slog.String("error", err.Error()))
    http.Error(rw, http.StatusText(http.StatusBadRequest), http.StatusBadRequest)
    return
    // withSetBundle is an interface that's automatically implemented by an types

Suggested change:

    // withSetBundle is an interface that's automatically implemented by types
@bgentry Okay, great! Let me add a little more testing and docs for the core API infrastructure and then I'll kick it back to you. I might do the rest of the endpoint conversion in a different PR since, with the tests especially, it'll be a big diff unto itself.
Force-pushed from 9c9e094 to 4cc5bc0.
@bgentry Okay, added docs and tests for all the core infrastructure. I tried to write a few more API endpoint tests too, but found I needed the changes in riverqueue/river#402 for anything queue-related, so I'm going to push those for now. Mind taking another look?
bgentry
left a comment
LGTM w/ one comment on naming
    .env
    .env.*
    !.env.example
    .tool-versions
Hah, actually I do use asdf! I figured I'd keep this out of the repo so that not everyone gets pinned to one particular Node version, though — I always find it kind of annoying when I go to a repo and am forced to install a new version because there's a .tool-versions in there. Does that make sense in this instance? The project should work on a variety of Node versions, shouldn't it?
internal/apiendpoint/api_endpoint.go
Outdated
    // or `POST`, and may contain Go 1.22 path variables like `{name}`, whose
    // values should be extracted by an endpoint request struct's custom
    // ExtractRaw implementation.
    Path string
From the Path name, I definitely didn't expect that this would take a compound string of METHOD /path, though it makes sense because that's what ServeMux.HandleFunc() takes. However they use the term pattern which I think is more appropriate than "path". What do you think about renaming to something like that or another term?
Hmm, TBH "pattern" sounds a little weird to me (sounds like a regex?), but you're right, it is what Go's using now, so I guess it's okay. Changed to Pattern.
Put in a lightweight API framework for River UI's Go code to make
writing endpoints more succinct and better enable testing. Endpoints are
defined as a type that embeds a struct declaring their request and
response types along with metadata that declares their path and success
status code:
    type jobCancelEndpoint struct {
        apiBundle
        apiendpoint.Endpoint[jobCancelRequest, jobCancelResponse]
    }

    func (*jobCancelEndpoint) Meta() *apiendpoint.EndpointMeta {
        return &apiendpoint.EndpointMeta{
            Path:       "POST /api/jobs/cancel",
            StatusCode: http.StatusOK,
        }
    }
The request/response types know how to unmarshal and marshal themselves from
and to JSON, or encapsulate any path/query parameters they need to capture:
    type jobCancelRequest struct {
        JobIDs []int64String `json:"ids"`
    }

    type jobCancelResponse struct {
        Status string `json:"status"`
    }
Endpoints define an `Execute` function that takes a request struct and
returns a response struct along with a possible error:
    func (a *jobCancelEndpoint) Execute(ctx context.Context, req *jobCancelRequest) (*jobCancelResponse, error) {
        ...
        return &jobCancelResponse{Status: "ok"}, nil
    }
This makes the endpoints a lot easier to write because serialization
code gets removed, and errors can be returned succinctly according to
normal Go practices instead of each one having to be handled in a custom
way and being a liability in case of a forgotten `return` after writing it
back in the response. The underlying API framework takes care of writing
back errors that should be user-facing (anything in the newly added
`apierror` package), or logging an internal error and returning a generic
message. Context deadline code also gets pushed down.
The newly added test suite shows that the `Execute` shape also makes
tests easier and more succinct to write because structs can be sent and
read directly without having to go through a JSON/HTTP layer, and errors
can be handled directly without having to worry about them being
converted to a server error, which makes debugging broken tests a lot
easier.
    resp, err := endpoint.Execute(ctx, &jobCancelRequest{JobIDs: []int64String{int64String(insertRes1.Job.ID), int64String(insertRes2.Job.ID)}})
    require.NoError(t, err)
    require.Equal(t, &jobCancelResponse{Status: "ok"}, resp)
We also add a suite of integration-level tests that test each endpoint
through the entire HTTP/JSON stack to make sure that everything works at
that level. This suite is written much more sparingly -- one test per
endpoint -- because the vast majority of endpoint tests should be
written in the handler-level suite for the reasons mentioned above.
Force-pushed from 4cc5bc0 to 3fff4c2.
Thanks for the review!
Follows up #63 with one more test case that checks to make sure the right thing happens if a timeout occurs. (The API endpoint executor adds a five-second timeout to every run by default.)
A general problem with Go is that validating things is a very noisy
affair involving long lists of if statements combined with error returns
for every field on a struct.
A technique that we've been using for a while is to use the Go validator
framework [1] that allows fields to be tagged with succinct validation
syntax for a variety of different things. e.g.
    type jobCancelRequest struct {
        JobIDs []int64String `json:"ids" validate:"required,min=1,max=1000"`
    }
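To illustrate the noise those tags remove, here's a sketch of roughly the hand-rolled equivalent that `validate:"required,min=1,max=1000"` replaces. This is stdlib-only and hypothetical (a plain `[]int64` stands in for `[]int64String`); the real library generates checks like these from the tag.

```go
package main

import (
	"errors"
	"fmt"
)

type jobCancelRequest struct {
	JobIDs []int64 // stand-in for []int64String
}

// validate spells out by hand what the validator tag expresses in one line:
// the field is required, with between 1 and 1000 elements.
func (r *jobCancelRequest) validate() error {
	if r.JobIDs == nil {
		return errors.New("ids: field is required")
	}
	if len(r.JobIDs) < 1 {
		return errors.New("ids: must contain at least 1 element")
	}
	if len(r.JobIDs) > 1000 {
		return errors.New("ids: must contain at most 1000 elements")
	}
	return nil
}

func main() {
	fmt.Println((&jobCancelRequest{JobIDs: []int64{1, 2}}).validate()) // <nil>
	fmt.Println((&jobCancelRequest{}).validate())
}
```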
I'm not sure the use of something like this is necessary for a project
that's a dependency like core River itself, but for internal use on more
of an "application" project like River UI, it might be helpful.
We combine the validator with the new API framework from #63 so that
incoming request structs are validated automatically for every endpoint,
which shaves a lot of lines of otherwise necessary validation code out
of individual API endpoint definitions.
[1] https://github.com/go-playground/validator?tab=readme-ov-file