Conversation
I tried to make `JSON.stringify(this.visualization.options.columnMapping)` a variable to avoid repeating it, but if I make it a `let` the linter throws an error, and if I make it a `const` it doesn't update with the UI and the logic breaks. :( Updated based on PR comments.
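A small Python analogy (not the actual frontend code) of why the `const` version goes stale: assigning the serialized string to a variable snapshots it once, while a zero-argument function re-serializes on every call and so tracks later changes to the options object.

```python
import json

# Hypothetical stand-in for this.visualization.options
options = {"columnMapping": {"a": "x"}}

# Snapshot taken once -- like a `const` holding JSON.stringify(...):
# it will not reflect later UI changes to the options object.
snapshot = json.dumps(options["columnMapping"])

# A function re-serializes on each call, so it stays current.
def mapping_json():
    return json.dumps(options["columnMapping"])

# Simulate the UI mutating the mapping:
options["columnMapping"]["a"] = "y"

print(snapshot)        # stale value
print(mapping_json())  # fresh value
```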
In the long run we'll be able to install additional dependencies via our own Dockerfile that builds images based on the Redash image but installs additional Python dependencies. Until we maintain a fork with lots of changes of our own, we need to do it this way. redash-stmo contains the ability to hook up our own Dockerflow library. Refs #13. Refs #37.
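A minimal sketch of the longer-term idea described above (the base image tag and install command are assumptions, not a committed setup):

```dockerfile
# Hypothetical derived image: start from the upstream Redash image and
# layer our extra Python dependencies on top.
FROM redash/redash:latest

# redash-stmo hooks up our own Dockerflow library, among other things.
RUN pip install redash-stmo
```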
Extend the Remote User Auth backend with the ability to pass remote user groups via a configurable request header, similar to the REMOTE_USER header. Refs #37. If enabled, the feature checks the header value against a configured list of group names, including the ability to use UNIX shell-style wildcards.
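A sketch of the group check described above, using Python's `fnmatch` for the UNIX shell-style wildcards; the function name, header delimiter, and allowed-pattern list are illustrative assumptions, not the actual Redash settings:

```python
from fnmatch import fnmatch

def remote_groups_allowed(header_value, allowed_patterns, delimiter=","):
    """Return True if any group in the header matches an allowed pattern.

    Patterns use UNIX shell-style wildcards (e.g. 'stmo-*').
    """
    groups = [g.strip() for g in header_value.split(delimiter) if g.strip()]
    return any(
        fnmatch(group, pattern)
        for group in groups
        for pattern in allowed_patterns
    )

print(remote_groups_allowed("admins,stmo-users", ["stmo-*"]))  # True
print(remote_groups_allowed("guests", ["stmo-*"]))             # False
```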
* Use --max-tasks-per-child as per the Celery documentation
* Set --max-memory-per-child to 1/4 of total system memory
* Split the exec command over multiple lines
* Fix memory variable typo
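The "1/4 of total system memory" value can be derived at container start; a sketch (POSIX-only, and the variable names are assumptions, not the actual entrypoint script):

```python
import os

# Total physical memory = page size (bytes) * number of physical pages.
page_size = os.sysconf("SC_PAGE_SIZE")   # bytes per page
num_pages = os.sysconf("SC_PHYS_PAGES")  # total physical pages
total_kib = page_size * num_pages // 1024

# Celery's --max-memory-per-child takes a value in KiB; use a quarter
# of system memory, mirroring the worker setting described above.
max_memory_per_child = total_kib // 4
print(max_memory_per_child)
```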
It'd be neat to try this out eventually, not sure how I can do this locally to be honest :(
@jezdez I haven't looked into the cause of the failing tests here, but I've tried this out locally and it's working, in that less data is passed along per request, which is good. However, I'm not entirely sure this will resolve issue #785, for a couple of reasons.
That said, I think once these tests are fixed we can land this PR anyway (perhaps it should be done upstream?). It may or may not fix the issue, but it's still useful to be sending around less data. I also think that if we could connect locally to the BigQuery data source that is failing on stage, it might be easier to reproduce and debug the issue. Is there any chance we could do that, @jasonthomas?
For reference, here are some differences in the data that is passed along with the change in this PR:

[Before] Fetching a dataset
[After] Fetching a dataset
[Before] Fetching table names
[After] Fetching table names
[Before] Fetching a table schema:
[After] Fetching a table schema:
@emtwo I've sent you an email with the JSON key. The project ID is also listed in the JSON key file. One thing to mention is that the Redash documentation [1] recommends either using the BigQuery Admin role or creating a custom role for the service account with the required permissions. I've created a custom role with the following permissions: The documentation also includes 'bigquery.datasets.list' and 'bigquery.tables.getData', which are not included in the above permissions. 'bigquery.datasets.list' does not exist, and 'bigquery.tables.getData' would give access to all table data [2]. Instead, the service account user has access to specific datasets/tables. [1] https://redash.io/help/data-sources/setup/bigquery
As per today's STMO meeting, we decided to test the BigQuery data source with a GCP service account that has the 'BigQuery Admin' role, to rule out that this is a permission issue. I created a new service account, created a new BigQuery data source on stage STMO, and was unable to view the schema. The error was the same as above.
Note: bug #785 has a fix that was merged upstream: getredash#3382 |
Hey @emtwo, @jasonthomas, do you think it would be a good idea to open a PR upstream?
Yes definitely. |
Closing in favor of getredash#3673. |
Refs #785.
I used the Google API Explorer to build the values for the `fields` parameter, e.g. https://developers.google.com/apis-explorer/#p/bigquery/v2/bigquery.tables.get
More info for the fields parameter: https://developers.google.com/api-client-library/python/guide/performance#partial-response-fields-parameter
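To illustrate what the `fields` partial-response parameter does, here is a small local sketch: the server keeps only the keys you name, so far less data crosses the wire. The helper and the sample `tables.get`-style payload below are made up for demonstration; the real filtering happens server-side when you pass `fields` to the API call.

```python
def apply_fields(response, fields):
    """Keep only the top-level keys named in a comma-separated fields string.

    Toy model of Google's partial-response filtering (the real syntax also
    supports nested selectors like 'schema(fields(name,type))').
    """
    wanted = {f.strip() for f in fields.split(",")}
    return {k: v for k, v in response.items() if k in wanted}

# Made-up sample of a full tables.get response:
full_table = {
    "id": "project:dataset.table",
    "schema": {"fields": [{"name": "col_a", "type": "STRING"}]},
    "numRows": "12345",
    "location": "US",
}

# With fields="id,schema", only those two keys would come back:
print(apply_fields(full_table, "id,schema"))
```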