diff --git a/.dependency_license b/.dependency_license index da5267c0ea..48ce53ee44 100644 --- a/.dependency_license +++ b/.dependency_license @@ -105,6 +105,10 @@ traffic_portal/app/src/assets/js/chartjs/angular-chart\..*, BSD traffic_portal/app/src/assets/css/jsonformatter\..*, Apache traffic_portal/app/src/assets/js/jsonformatter\..*, Apache traffic_portal/app/src/assets/js/fast-json-patch\..*, MIT +traffic_portal/app/src/assets/css/colReorder.dataTables\..*, MIT +traffic_portal/app/src/assets/js/colReorder.dataTables\..*, MIT +traffic_ops/traffic_ops_golang/vendor/github\.com/dgrijalva/.*, MIT +traffic_ops/traffic_ops_golang/vendor/github\.com/lestrrat-go/.*, MIT # Ignored - Do not report. \.DS_Store, Ignore # Created automatically OSX. diff --git a/.rat-excludes b/.rat-excludes index 9946f24d04..3cc1216374 100644 --- a/.rat-excludes +++ b/.rat-excludes @@ -41,6 +41,7 @@ j[mM]enu(?:\.jquery)?\.(?:css|js)(?: MIT. Properly documented in LICENSE ){0} downloadjs\-min.*\.js(?: MIT. Properly documented in LICENSE ){0} jquery\-ui\..*(?: MIT. Properly documented in LICENSE ){0} jquery\.dataTables\..*(?: MIT. Properly documented in LICENSE ){0} +colReorder\.dataTables\..*(?: MIT. Properly documented in LICENSE ){0} .*underscore.*(?: MIT. Properly documented in LICENSE ){0} .*moment.*(?: MIT. Properly documented in LICENSE ){0} .*jsdiff.*(?: BSD 3-clause. Properly documented in LICENSE ){0} diff --git a/CHANGELOG.md b/CHANGELOG.md index 182e190329..0e5e1945a5 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -19,6 +19,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/). - /api/1.4/cdns/dnsseckeys/refresh `GET` - /api/1.1/cdns/name/:name/dnsseckeys `GET` - /api/1.4/cdns/name/:name/dnsseckeys `GET` + - /api/1.4/user/login/oauth `POST` - To support reusing a single riak cluster connection, an optional parameter is added to riak.conf: "HealthCheckInterval". 
This option takes a 'Duration' value (e.g. 10s, 5m) which affects how often the riak cluster is health checked. Default is currently set to: "HealthCheckInterval": "5s". - Added a new Go db/admin binary to replace the Perl db/admin.pl script which is now deprecated and will be removed in a future release. The new db/admin binary is essentially a drop-in replacement for db/admin.pl since it supports all of the same commands and options; therefore, it should be used in place of db/admin.pl for all the same tasks. - Added an API 1.4 endpoint, /api/1.4/cdns/dnsseckeys/refresh, to perform necessary behavior previously served outside the API under `/internal`. @@ -31,6 +32,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/). - In Traffic Portal, removes the need to specify line breaks using `__RETURN__` in delivery service edge/mid header rewrite rules, regex remap expressions, raw remap text and traffic router additional request/response headers. - In Traffic Portal, provides the ability to clone delivery service assignments from one cache to another cache of the same type. Issue #2963. - Traffic Ops now allows each delivery service to have a set of query parameter keys to be retained for consistent hash generation by Traffic Router. +- Added an API 1.4 endpoint, /api/1.4/user/login/oauth, to handle SSO login using OAuth. +- Added /#!/sso page to Traffic Portal to catch redirects back from the OAuth provider and POST the token into the API. ### Changed - Traffic Router, added TLS certificate validation on certificates imported from Traffic Ops @@ -47,13 +50,17 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/). - Modified Traffic Router logging format to include an additional field for DNS log entries, namely `rhi`. This defaults to '-' and is only used when EDNS0 client subnet extensions are enabled and a client subnet is present in the request. 
When enabled and a subnet is present, the subnet appears in the `chi` field and the resolver address is in the `rhi` field. - Changed traffic_ops_ort.pl so that hdr_rw-.config files are compared with strict ordering and line duplication when detecting configuration changes. - Traffic Ops (golang), Traffic Monitor, Traffic Stats are now compiled using Go version 1.11. Grove was already being compiled with this version which improves performance for TLS when RSA certificates are used. +- Fixed issue #3497: TO API clients that don't specify the latest minor version will overwrite/default any fields introduced in later versions. - Issue 3476: Traffic Router returns partial result for CLIENT_STEERING Delivery Services when Regional Geoblocking or Anonymous Blocking is enabled. - Upgraded Traffic Portal to AngularJS 1.7.8 - Issue 3275: Improved the snapshot diff performance and experience. - Issue #3605: Fixed Traffic Monitor custom ports in health polling URL. - Issue 3587: Fixed Traffic Ops Golang reverse proxy and Riak logs to be consistent with the format of other error logs. - Database migrations have been collapsed. Rollbacks to migrations that previously existed are no longer possible. +- Issue #3750: Fixed Grove access log fractional seconds. - Issue #3646: Fixed Traffic Monitor Thresholds. +- Added fields to traffic_portal_properties.json to configure SSO through OAuth. +- Added a field to cdn.conf to configure whitelisted URLs for the JSON key set URL returned from the OAuth provider. ## [3.0.0] - 2018-10-30 ### Added diff --git a/ISSUE_TEMPLATE.md b/ISSUE_TEMPLATE.md new file mode 100644 index 0000000000..bcc1e825c7 --- /dev/null +++ b/ISSUE_TEMPLATE.md @@ -0,0 +1,68 @@ + + + + +## I'm submitting a ... + + +- [ ] bug report +- [ ] new feature / enhancement request +- [ ] improvement request (usability, performance, tech debt, etc.) +- [ ] other + +## Traffic Control components affected ... 
+ +- [ ] CDN in a Box +- [ ] Documentation +- [ ] Grove +- [ ] Traffic Control Client +- [ ] Traffic Monitor +- [ ] Traffic Ops +- [ ] Traffic Ops ORT +- [ ] Traffic Portal +- [ ] Traffic Router +- [ ] Traffic Stats +- [ ] Traffic Vault +- [ ] unknown + +## Current behavior: + + +## Expected / new behavior: + + +## Minimal reproduction of the problem with instructions: + + +## Anything else: + + + \ No newline at end of file diff --git a/LICENSE b/LICENSE index e50a46b6e6..4a0a44249d 100644 --- a/LICENSE +++ b/LICENSE @@ -243,10 +243,15 @@ For the JSON formatter component: For the DataTables component: @traffic_portal/app/src/assets/css/jquery.dataTables.min_1.10.9.css -@traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.16.js +@traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.19_patched.js @traffic_portal/app/src/assets/images/sort_* ./licenses/MIT-datatables +For the DataTables ColReorder component: +@traffic_portal/app/src/assets/css/colReorder.dataTables.min_1.5.1.css +@traffic_portal/app/src/assets/js/colReorder.dataTables.min_1.5.1.js +./licenses/MIT-ColReorder + For the moment.js component: @traffic_portal/app/src/assets/js/moment-min_2.22.1.js ./licenses/MIT-momentjs @@ -427,3 +432,11 @@ The modern-go/concurrent component is used under the Apache 2.0 license: The modern-go/reflect2 component is used under the Apache 2.0 license: @vendor/github.com/modern-go/reflect2/* ./vendor/github.com/modern-go/reflect2/LICENSE + +For the lestrrat-go/jwx component: +@traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/* +./traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/LICENSE + +For the dgrijalva/jwt-go component: +@traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/* +./traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/LICENSE diff --git a/PULL_REQUEST_TEMPLATE.md b/PULL_REQUEST_TEMPLATE.md index 073d719bc2..901e34dd19 100644 --- a/PULL_REQUEST_TEMPLATE.md +++ b/PULL_REQUEST_TEMPLATE.md 
@@ -46,7 +46,7 @@ it includes tests (and most should), outline here the steps needed to run the tests. If not, lay out the manual testing procedure and please explain why tests are unnecessary for this Pull Request. --> -## If this is a bug fix, what versions of Traffic Ops are affected? +## If this is a bug fix, what versions of Traffic Control are affected? Snapshot CRConfig`, perform :guilabel:`Diff CRConfig` and click :guilabel:`Write CRConfig`. +#. Go to :ref:`the Traffic Portal CDNs view `, click on :guilabel:`Diff CDN Config Snapshot`, and click :guilabel:`Perform Snapshot`. - .. figure:: regionalgeo/03.png - :scale: 70% + .. figure:: anonymous_blocking/03.png + :width: 40% :align: center diff --git a/docs/source/admin/quick_howto/anonymous_blocking/01.png b/docs/source/admin/quick_howto/anonymous_blocking/01.png index bda89db115..5b785e4b3d 100644 Binary files a/docs/source/admin/quick_howto/anonymous_blocking/01.png and b/docs/source/admin/quick_howto/anonymous_blocking/01.png differ diff --git a/docs/source/admin/quick_howto/anonymous_blocking/02.png b/docs/source/admin/quick_howto/anonymous_blocking/02.png index 0b74046936..60df6df564 100644 Binary files a/docs/source/admin/quick_howto/anonymous_blocking/02.png and b/docs/source/admin/quick_howto/anonymous_blocking/02.png differ diff --git a/docs/source/admin/quick_howto/anonymous_blocking/03.png b/docs/source/admin/quick_howto/anonymous_blocking/03.png new file mode 100644 index 0000000000..92363e348c Binary files /dev/null and b/docs/source/admin/quick_howto/anonymous_blocking/03.png differ diff --git a/docs/source/admin/quick_howto/ciab.rst b/docs/source/admin/quick_howto/ciab.rst index 2683a8f839..abee210113 100644 --- a/docs/source/admin/quick_howto/ciab.rst +++ b/docs/source/admin/quick_howto/ciab.rst @@ -201,9 +201,9 @@ The enroller runs within CDN in a Box using :option:`--dir` which provides the a Auto Snapshot/Queue-Updates --------------------------- -An automatic snapshot of the current 
Traffic Ops CDN configuration/toplogy will be performed once the "enroller" has finished loading all of the data and a minimum number of servers have been enrolled. To enable this feature, set the boolean ``AUTO_SNAPQUEUE_ENABLED`` to ``true`` [8]_. The snapshot and queue-updates actions will not be performed until all servers in ``AUTO_SNAPQUEUE_SERVERS`` (comma-delimited string) have been enrolled. The current enrolled servers will be polled every ``AUTO_SNAPQUEUE_POLL_INTERVAL`` seconds, and each action (snapshot and queue-updates) will be delayed ``AUTO_SNAPQUEUE_ACTION_WAIT`` seconds [9]_. +An automatic :term:`Snapshot` of the current Traffic Ops CDN configuration/topology will be performed once the "enroller" has finished loading all of the data and a minimum number of servers have been enrolled. To enable this feature, set the boolean ``AUTO_SNAPQUEUE_ENABLED`` to ``true`` [8]_. The :term:`Snapshot` and :term:`Queue Updates` actions will not be performed until all servers in ``AUTO_SNAPQUEUE_SERVERS`` (comma-delimited string) have been enrolled. The current enrolled servers will be polled every ``AUTO_SNAPQUEUE_POLL_INTERVAL`` seconds, and each action (:term:`Snapshot` and :term:`Queue Updates`) will be delayed ``AUTO_SNAPQUEUE_ACTION_WAIT`` seconds [9]_. -.. [8] Automatic Snapshot/Queue-Updates is enabled by default in :file:`infrastructure/cdn-in-a-box/variables.env`. +.. [8] Automatic :term:`Snapshot`/:term:`Queue Updates` is enabled by default in `variables.env`_. .. [9] Server poll interval and delay action wait are defaulted to a value of 2 seconds. 
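The auto Snapshot/Queue-Updates sequencing described above reduces to a small poll-then-act loop. The sketch below is illustrative only; the function name and callbacks are hypothetical stand-ins, not the actual CDN in a Box enroller code:

```python
import time

def auto_snapqueue(get_enrolled, do_snapshot, do_queue_updates,
                   snapqueue_servers, poll_interval=2, action_wait=2):
    """Illustrative sketch (not the real enroller) of automatic
    Snapshot/Queue-Updates: poll until every server named in
    AUTO_SNAPQUEUE_SERVERS (a comma-delimited string) is enrolled,
    then perform each action after a short delay. The defaults mirror
    the documented 2-second poll interval and action wait."""
    required = {name.strip() for name in snapqueue_servers.split(",")}
    while not required.issubset(get_enrolled()):
        time.sleep(poll_interval)  # AUTO_SNAPQUEUE_POLL_INTERVAL
    time.sleep(action_wait)        # AUTO_SNAPQUEUE_ACTION_WAIT before Snapshot
    do_snapshot()
    time.sleep(action_wait)        # AUTO_SNAPQUEUE_ACTION_WAIT before Queue Updates
    do_queue_updates()
```

Here ``get_enrolled`` would return the set of currently enrolled server names; in the real container this behavior is driven by the environment variables in `variables.env`_.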
Mock Origin Service diff --git a/docs/source/admin/quick_howto/ds_requests.rst b/docs/source/admin/quick_howto/ds_requests.rst index 833db821f9..e069ffbeff 100644 --- a/docs/source/admin/quick_howto/ds_requests.rst +++ b/docs/source/admin/quick_howto/ds_requests.rst @@ -18,7 +18,7 @@ ************************* Delivery Service Requests ************************* -When enabled in :file:`traffic_portal_properties.json`, Delivery Service Requests are created when *all* users attempt to create, update or delete a :term:`Delivery Service`. This allows users with higher level permissions ("operations" or "admin") to review the changes for completeness and accuracy before deploying the changes. In addition, most :term:`Delivery Service` changes require cache configuration updates (aka queue updates) and/or a CDN :term:`Snapshot`. Both of these actions are reserved for users with elevated permissions. +When enabled in :file:`traffic_portal_properties.json`, Delivery Service Requests are created when *all* users attempt to create, update or delete a :term:`Delivery Service`. This allows users with higher level permissions ("operations" or "admin") to review the changes for completeness and accuracy before deploying the changes. In addition, most :term:`Delivery Service` changes require configuration updates (i.e. :term:`Queue Updates`) and/or a CDN :term:`Snapshot`. Both of these actions are reserved for users with elevated permissions. A list of the Delivery Service requests associated with your :term:`Tenant` can be found under :menuselection:`Services --> Delivery Service Requests` @@ -53,10 +53,10 @@ Reject the Delivery Service Request Rejecting a Delivery Service Request will set status to 'rejected' and the request can no longer be modified. This will auto-assign the request to the user doing the rejection. 
Fulfill the Delivery Service Request - Fulfilling a Delivery Service Request will show the requested changes and, once committed, will apply the desired changes and set status to 'pending'. The request is pending because many types of changes will require :term:`cache server` configuration updates (aka queue updates) and/or a CDN snapshot. Once queue updates and/or CDN snapshot is complete, the request should be marked 'complete'. + Fulfilling a Delivery Service Request will show the requested changes and, once committed, will apply the desired changes and set status to 'pending'. The request is pending because many types of changes will require :term:`cache server` configuration updates (i.e. :term:`Queue Updates`) and/or a CDN :term:`Snapshot`. Once :term:`Queue Updates` and/or CDN :term:`Snapshot` is complete, the request should be marked 'complete'. Complete the Delivery Service Request - Only after the Delivery Service Request has been fulfilled and the changes have been applied can a Delivery Service Request be marked as 'complete'. Marking a Delivery Service Request as 'complete' is currently a manual step because some changes require :term:`cache server` configuration updates (aka queue updates) and/or a CDN :term:`Snapshot`. Once that is done and the changes have been deployed, the request status should be changed from 'pending' to 'complete'. + Only after the Delivery Service Request has been fulfilled and the changes have been applied can a Delivery Service Request be marked as 'complete'. Marking a Delivery Service Request as 'complete' is currently a manual step because some changes require :term:`cache server` configuration updates (i.e. :term:`Queue Updates`) and/or a CDN :term:`Snapshot`. Once that is done and the changes have been deployed, the request status should be changed from 'pending' to 'complete'. 
Delete the Delivery Service request Delivery Service Requests with a status of 'draft' or 'submitted' can always be deleted entirely if appropriate. diff --git a/docs/source/admin/quick_howto/index.rst b/docs/source/admin/quick_howto/index.rst index 01a46c880f..1ea4fdb209 100644 --- a/docs/source/admin/quick_howto/index.rst +++ b/docs/source/admin/quick_howto/index.rst @@ -27,6 +27,7 @@ Traffic Control is a complicated system, and documenting it is not trivial. Some ds_requests federations multi_site + oauth_login regionalgeo static_dns steering diff --git a/docs/source/admin/quick_howto/multi_site.rst b/docs/source/admin/quick_howto/multi_site.rst index f919e6bd00..565f39ca09 100644 --- a/docs/source/admin/quick_howto/multi_site.rst +++ b/docs/source/admin/quick_howto/multi_site.rst @@ -58,15 +58,15 @@ The following steps will take you through the procedure of setting up an :abbr:` - This :abbr:`OSBU (Origin Server Base URL)` must be valid - :abbr:`ATS (Apache Traffic Server)` will perform a DNS lookup on this :abbr:`FQDN (Fully Qualified Domain Name)` even if IPs, not DNS, are used in the :file:`parent.config`. - The :abbr:`OSBU (Origin Server Base URL)` entered as the "Origin Server Base URL" will be sent to the origins as a host header. All origins must be configured to respond to this host. -#. Create a delivery service profile. This must be done to set the :abbr:`MSO (Multi-Site Origin)` algorithm. Also, as of :abbr:`ATS (Apache Traffic Server)` 6.x, multi-site options must be set as parameters within the :file:`parent.config`. Header rewrite parameters will be ignored. See `ATS parent.config `_ for more details. These parameters are now handled by the creation of a :term:`Delivery Service` profile. +#. Create a delivery service profile. This must be done to set the :abbr:`MSO (Multi-Site Origin)` algorithm. Also, as of :abbr:`ATS (Apache Traffic Server)` 6.x, multi-site options must be set as parameters within the :file:`parent.config`. 
Header rewrite parameters will be ignored. See `ATS parent.config `_ for more details. These :term:`Parameters` are now handled by the creation of a :term:`Delivery Service` :term:`Profile`. - a) Create a profile of the type ``DS_PROFILE`` for the :term:`Delivery Service` in question. + a) Create a :term:`Profile` of the :ref:`profile-type` ``DS_PROFILE`` for the :term:`Delivery Service` in question. .. figure:: multi_site/ds_profile.png :scale: 50% :align: center - #) Click :guilabel:`Show profile parameters` to bring up the parameters screen for the profile. Create parameters for the following: + #) Click :guilabel:`Show profile parameters` to bring up the :term:`Parameters` screen for the :term:`Profile`. Create the following :term:`Parameters`: +----------------------------------------+------------------+--------------------------+-------------------------+ | Parameter Name | Config File Name | Value | ATS parent.config value | @@ -97,6 +97,6 @@ The following steps will take you through the procedure of setting up an :abbr:` #) In the :term:`Delivery Service` page, select the newly created ``DS_PROFILE`` and save the :term:`Delivery Service`. -#. Turn on parent_proxy_routing in the MID profile. +#. Turn on parent_proxy_routing in the MID :term:`Profile`. .. Note:: Support for multisite configurations with single-layer CDNs is now available. If a :term:`Cache Group`\ s defined parents are either blank or of the type ``ORG_LOC``, that :term:`cache server`'s ``parent.config`` will be generated as a top layer cache, even if it is an edge. In the past, ``parent.config`` generation was strictly determined by cache type. The new method examines the parent :term:`Cache Group` definitions and generates the :file:`parent.config` accordingly. 
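The OAuth login guide added below has Traffic Ops check the OAuth provider's JSON key set URL against the ``whitelisted_oauth_urls`` array in :file:`cdn.conf`, which may contain ``*`` wildcards. A minimal sketch of that kind of check, assuming shell-style wildcard matching against the URL's host (the function and its exact matching semantics are illustrative assumptions, not the actual Traffic Ops implementation):

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def url_is_whitelisted(url, whitelisted_oauth_urls):
    """Return True if the URL's hostname matches any whitelist entry.
    Entries may contain '*' wildcards, e.g. "*.oauth.com". Hypothetical
    sketch only; the real matching rules may differ."""
    host = urlparse(url).hostname or url
    return any(fnmatch(host, pattern) for pattern in whitelisted_oauth_urls)

# e.g. url_is_whitelisted("https://keys.oauth.com/jwks",
#                         ["example.oauth.com", "*.oauth.com"]) is True
```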
diff --git a/docs/source/admin/quick_howto/oauth_login.rst b/docs/source/admin/quick_howto/oauth_login.rst new file mode 100644 index 0000000000..6a1349abfa --- /dev/null +++ b/docs/source/admin/quick_howto/oauth_login.rst @@ -0,0 +1,90 @@ +.. +.. +.. Licensed under the Apache License, Version 2.0 (the "License"); +.. you may not use this file except in compliance with the License. +.. You may obtain a copy of the License at +.. +.. http://www.apache.org/licenses/LICENSE-2.0 +.. +.. Unless required by applicable law or agreed to in writing, software +.. distributed under the License is distributed on an "AS IS" BASIS, +.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +.. See the License for the specific language governing permissions and +.. limitations under the License. +.. +.. _oauth_login: + +********************* +Configure OAuth Login +********************* + +An opt-in configuration for SSO using OAuth is supported and can be configured through the :file:`/opt/traffic_portal/public/traffic_portal_properties.json` and :file:`/opt/traffic_ops/app/conf/cdn.conf` files. OAuth uses a third-party provider to authenticate the user. Once enabled, the Traffic Portal Login page will no longer accept a username and password but will instead authenticate using OAuth. This will redirect to the ``oAuthUrl`` from :file:`/opt/traffic_portal/public/traffic_portal_properties.json`, which will authenticate the user and then redirect to the new ``/sso`` page with an authorization code. The new ``/sso`` page will then construct the full URL to exchange the authorization code for a JSON Web Token, and ``POST`` this information to the :ref:`to-api-user-login-oauth` API endpoint. The :ref:`to-api-user-login-oauth` API endpoint will ``POST`` to the URL provided and receive a JSON Web Token. 
The :ref:`to-api-user-login-oauth` API endpoint will decode the token, validate that the current time is between the issued time and the expiration time, and validate that the public key set URL is allowed by the list of whitelisted URLs read from :file:`/opt/traffic_ops/app/conf/cdn.conf`. It will then authorize the user from the database and return a Mojolicious cookie as per the normal login workflow. + +.. Note:: Ensure that the usernames in the Traffic Ops database match the value returned in the ``sub`` field in the response from the OAuth provider when setting up with the OAuth provider. The ``sub`` field is used to reference the roles in the Traffic Ops database in order to authorize the user. + +.. Note:: OAuth providers sometimes do not return the public key set URL but instead require a locally stored key. This functionality is not currently supported and will require further development. + +.. Note:: The ``POST`` from the API to the OAuth provider to exchange the code for a token expects the response to have the token in JSON format with ``access_token`` as the desired field (and can include other fields). It also supports a response with just the token itself as the body. Further development work will need to be done to allow other response forms or other response fields. + +To configure OAuth login: + +- Set up authentication with a third-party OAuth provider. + +- Update :file:`/opt/traffic_portal/public/traffic_portal_properties.json` and ensure the following properties are set up correctly: + + .. 
table:: OAuth Configuration Property Definitions In traffic_portal_properties.json + + +------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------+ + | Name | Type | Description | + +==============================+============+===========================================================================================================================================+ + | enabled | boolean | Allow OAuth SSO login | + +------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------+ + | oAuthUrl | string | URL to your OAuth provider | + +------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------+ + | redirectUriParameterOverride | string | Query parameter override if the oAuth provider requires a different key for the redirect_uri parameter, defaults to ``redirect_uri`` | + +------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------+ + | clientId | string | Client id registered with OAuth provider, passed in with `client_id` parameter | + +------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------+ + | oAuthCodeTokenUrl | string | URL to your OAuth provider's endpoint for exchanging the code (from oAuthUrl) for a token | + +------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------+ + | clientSecret | string | Client secret registered 
with OAuth provider to verify client, passed in with `client_secret` parameter | + +------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------+ + + + .. code-block:: json + :caption: Example OAuth Configuration Properties In traffic_portal_properties.json + + { + "oAuth": { + "_comment": "Opt-in OAuth properties for SSO login. See http://traffic-control-cdn.readthedocs.io/en/release-4.0.0/admin/quick_howto/oauth_login.html for more details. redirectUriParameterOverride defaults to redirect_uri if left blank.", + "enabled": true, + "oAuthUrl": "example.oauth.com", + "redirectUriParameterOverride": "", + "clientId": "", + "oAuthCodeTokenUrl": "example.oauth.com/oauth/token", + "clientSecret": "" + } + } + +- Update :file:`/opt/traffic_ops/app/conf/cdn.conf` property traffic_ops_golang.whitelisted_oauth_urls to contain all allowed domains for the JSON key set (Use ``*`` for wildcard): + + .. table:: OAuth Configuration Property Definitions In cdn.conf + + +--------------------------+--------------------+-----------------------------------------------------------------------------------------------------------------+ + | Name | Type | Description | + +==========================+====================+=================================================================================================================+ + | whitelisted_oauth_urls | Array of strings | List of whitelisted URLs for the JSON public key set returned by OAuth provider. Can contain ``*`` wildcards. | + +--------------------------+--------------------+-----------------------------------------------------------------------------------------------------------------+ + + + .. 
code-block:: json + :caption: Example OAuth Configuration Properties In cdn.conf + + { + "traffic_ops_golang": { + "whitelisted_oauth_urls": [ + "example.oauth.com", + "*.oauth.com" + ] + } + } \ No newline at end of file diff --git a/docs/source/admin/quick_howto/regionalgeo.rst b/docs/source/admin/quick_howto/regionalgeo.rst index eaa9abf340..121518b21f 100644 --- a/docs/source/admin/quick_howto/regionalgeo.rst +++ b/docs/source/admin/quick_howto/regionalgeo.rst @@ -56,7 +56,7 @@ Configure Regional Geo-blocking (RGB) An optional element that is an array of :abbr:`CIDR (Classless Inter-Domain Routing)` blocks indicating the IPv4 subnets that are allowed by the rule. If this list exists and the value is not empty, client IP will be matched against the :abbr:`CIDR (Classless Inter-Domain Routing)` list, bypassing the value of ``geoLocation``. If there is no match in the white list, Traffic Router defers to the value of ``geoLocation`` to determine if content ought to be blocked. -#. Add :abbr:`RGB (Regional Geographic-based Blocking)` parameters in Traffic Portal to the :term:`Delivery Service`'s Traffic Router(s)'s profile(s). The ``configFile`` field should be set to ``CRConfig.json``, and the following two parameter name/values need to be specified: +#. Add :abbr:`RGB (Regional Geographic-based Blocking)` :term:`Parameters` in Traffic Portal to the :term:`Delivery Service`'s Traffic Router(s)'s :term:`Profile`\ (s). The :ref:`parameter-config-file` value should be set to ``CRConfig.json``, and the following two :term:`Parameter` :ref:`parameter-name`/:ref:`parameter-value` pairs need to be specified: ``regional_geoblocking.polling.url`` The URL of the RGB configuration file. Traffic Router will fetch the file from this URL using an HTTP ``GET`` request. @@ -64,19 +64,19 @@ Configure Regional Geo-blocking (RGB) The interval on which Traffic Router polls the :abbr:`RGB (Regional Geographic-based Blocking)` configuration file. .. 
figure:: regionalgeo/01.png - :scale: 100% + :width: 40% :align: center -#. Enable RGB for a :term:`Delivery Service` +#. Enable :abbr:`RGB (Regional Geographic-based Blocking)` for a :term:`Delivery Service` using the :ref:`Delivery Services view in Traffic Portal ` (don't forget to save changes!) .. figure:: regionalgeo/02.png - :scale: 100% + :width: 40% :align: center -#. Go to :menuselection:`Tools --> Snapshot CRConfig`, perform :guilabel:`Diff CRConfig` and click :guilabel:`Write CRConfig`. +#. Go to :ref:`the Traffic Portal CDNs view `, click on :guilabel:`Diff CDN Config Snapshot`, and click :guilabel:`Perform Snapshot`. .. figure:: regionalgeo/03.png - :scale: 70% + :width: 40% :align: center Traffic Router Access Log diff --git a/docs/source/admin/quick_howto/regionalgeo/01.png b/docs/source/admin/quick_howto/regionalgeo/01.png index 0443a17a58..f9b11c5aaf 100644 Binary files a/docs/source/admin/quick_howto/regionalgeo/01.png and b/docs/source/admin/quick_howto/regionalgeo/01.png differ diff --git a/docs/source/admin/quick_howto/regionalgeo/02.png b/docs/source/admin/quick_howto/regionalgeo/02.png index 553092067a..9684ab1d4f 100644 Binary files a/docs/source/admin/quick_howto/regionalgeo/02.png and b/docs/source/admin/quick_howto/regionalgeo/02.png differ diff --git a/docs/source/admin/quick_howto/regionalgeo/03.png b/docs/source/admin/quick_howto/regionalgeo/03.png index ce2676b85b..713ec68347 100644 Binary files a/docs/source/admin/quick_howto/regionalgeo/03.png and b/docs/source/admin/quick_howto/regionalgeo/03.png differ diff --git a/docs/source/admin/traffic_ops/configuration.rst b/docs/source/admin/traffic_ops/configuration.rst index 8d754948f6..fc9d232b56 100644 --- a/docs/source/admin/traffic_ops/configuration.rst +++ b/docs/source/admin/traffic_ops/configuration.rst @@ -130,147 +130,6 @@ You will need to update the file :file:`/opt/traffic_ops/app/conf/cdn.conf` with ... -Content Delivery Networks -========================= - -.. 
_param-prof: - -Profile Parameters ------------------- -Many of the settings for the different servers in a Traffic Control CDN are controlled by parameters in the :menuselection:`Configure --> Parameters` view of Traffic Portal. Parameters are grouped in profiles and profiles are assigned to a server or a :term:`Delivery Service`. For a typical cache there are hundreds of configuration settings to apply. The Traffic Portal :menuselection:`Parameters` view contains the defined settings. To make life easier, Traffic Portal allows for duplication, comparison, import and export of profiles. Traffic Ops also has a "Global profile" - the parameters in this profile are going to be applied to all servers in the Traffic Ops instance, or apply to Traffic Ops themselves. These parameters are explained in the :ref:`global-profile-parameters` table. - -.. _global-profile-parameters: -.. table:: Global Profile Parameters - - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | Name | ConfigFile | Value | - +==========================+===============+=======================================================================================================================================+ - | tm.url | global | The URL at which this Traffic Ops instance services requests | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | tm.rev_proxy.url | global | Not required. The URL where a caching proxy for configuration files generated by Traffic Ops may be found. Requires a minimum | - | | | :term:`ORT` version of 2.1. When configured, :term:`ORT` will request configuration files via this | - | | | :abbr:`FQDN (Fully Qualified Domain Name)`, which should be set up as a reverse proxy to the Traffic Ops server(s). 
The suggested | - | | | cache lifetime for these files is 3 minutes or less. This setting allows for greater scalability of a CDN maintained by Traffic Ops | - | | | by caching configuration files of profile and CDN scope, as generating these is a very computationally expensive process | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | tm.toolname | global | The name of the Traffic Ops tool. Usually "Traffic Ops" - this will appear in the comment headers of the generated files | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | tm.infourl | global | This is the "for more information go here" URL, which used to be visible in the "About" page of the now-deprecated Traffic Ops UI | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | tm.logourl | global | This is the URL of the logo for Traffic Ops and can be relative if the logo is under :file:`traffic_ops/app/public` | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | tm.instance_name | global | The name of the Traffic Ops instance - typically to distinguish instances when multiple are active | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | tm.traffic_mon_fwd_proxy | global | When collecting stats from Traffic Monitor, Traffic Ops will use this forward proxy instead of the actual Traffic Monitor host. 
| - | | | This can be any of the MID tier caches, or a forward cache specifically deployed for this purpose. Setting | - | | | this variable can significantly lighten the load on the Traffic Monitor system and it is recommended to | - | | | set this parameter on a production system. | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | geolocation.polling.url | CRConfig.json | The location of a geographic IP mapping database for Traffic Router instances to use | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | geolocation6.polling.url | CRConfig.json | The location of a geographic IPv6 mapping database for Traffic Router instances to use | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - | maxmind.default.override | CRConfig.json | The destination geographic coordinates to use for client location when the geographic IP mapping database returns a default location | - | | | that matches the country code. This parameter can be specified multiple times with different values to support default overrides for | - | | | multiple countries. The reason for the name "maxmind" is because MaxMind's GeoIP2 database is the default geographic IP mapping | - | | | database implementation used by Comcast production servers (and the only officially supported implementation at the time of this | - | | | writing). The format of this Parameter's value is: ``;,``, e.g. 
``US;37.751,-97.822`` | - +--------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------+ - -These parameters should be set to reflect the local environment. - -After running the :program:`postinstall` script, Traffic Ops has the :ref:`tbl-default-profiles` pre-loaded. - -.. _tbl-default-profiles: -.. table:: Default Profiles - - +----------+-------------------------------------------------------------------------------------------------------------------------------------------------+ - | Name | Description | - +==========+=================================================================================================================================================+ - | EDGE1 | The profile to be applied to the latest supported version of :abbr:`ATS (Apache Traffic Server)`, when running as an Edge-tier cache | - +----------+-------------------------------------------------------------------------------------------------------------------------------------------------+ - | TR1 | The profile to be applied to the latest version of Traffic Router | - +----------+-------------------------------------------------------------------------------------------------------------------------------------------------+ - | TM1 | The profile to be applied to the latest version of Traffic Monitor | - +----------+-------------------------------------------------------------------------------------------------------------------------------------------------+ - | MID1 | The profile to be applied to the latest supported version of :abbr:`ATS (Apache Traffic Server)`, when running as a Mid-tier cache | - +----------+-------------------------------------------------------------------------------------------------------------------------------------------------+ - | RIAK_ALL | "Riak" profile for all CDNs to be applied to the Traffic Vault servers ("Riak" being the name of 
the underlying database used by Traffic Vault) | - +----------+-------------------------------------------------------------------------------------------------------------------------------------------------+ - -.. Note:: The "EDGE1" and "MID1" profiles contain some information that is specific to the hardware being used (most notably the disk configuration), so some parameters will have to be changed to reflect your configuration. Future releases of Traffic Control will separate the hardware and software profiles so it is easier to "mix-and-match" different hardware configurations. The :ref:`cache-server-hardware-parameters` table tabulates the cache parameters that are likely to need changes from the default profiles shipped with Traffic Ops. - -.. _cache-server-hardware-parameters: -.. table:: Cache Server Hardware Parameters - - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | Name | ConfigFile | Description | - +===========================================+===================+==============================================================================================================================================+ - | allow_ip | astats.config | This is a comma-separated list of IPv4 :abbr:`CIDR (Classless Inter-Domain Routing)` blocks that will have access to the 'astats' statistics | - | | | on the cache servers. 
The Traffic Monitor IP addresses have to be included in this if they are using IPv4 to monitor the cache servers | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | allow_ip6 | astats.config | This is a comma-separated list of IPv6 :abbr:`CIDR (Classless Inter-Domain Routing)` blocks that will have access to the 'astats' statistics | - | | | on the cache servers. The Traffic Monitor IP addresses have to be included in this if they are using IPv6 to monitor the cache servers | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | Drive_Prefix | storage.config | The device path start of the disks. For example, if storage devices ``/dev/sda`` through ``/dev/sdf`` are to be used for caching, this | - | | | should be set to ``/dev/sd`` | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | Drive_Letters | storage.config | A comma-separated list of the letter part of the storage devices to be used for caching. 
For example, if storage devices ``/dev/sda`` | - | | | through ``/dev/sdf`` are to be used for caching, this should be set to ``a,b,c,d,e,f`` | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | purge_allow_ip | ip_allow.config | The IP address range that is allowed to execute the PURGE method on the caches (not related to :ref:`purge`) | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | coalesce_masklen_v4 | ip_allow.config | The mask length to use when coalescing IPv4 networks into one line using | - | | | `the NetAddr\:\:IP Perl library `_ | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | coalesce_number_v4 | ip_allow.config | The number to use when coalescing IPv4 networks into one line using | - | | | `the NetAddr\:\:IP Perl library `_ | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | coalesce_masklen_v6 | ip_allow.config | The mask length to use when coalescing IPv6 networks into one line using | - | | | `the NetAddr\:\:IP Perl library. 
`_ | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | health.threshold.loadavg | rascal.properties | The Unix 'load average' (as given by :manpage:`uptime(1)`) at which Traffic Router will stop sending traffic to this cache | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | health.threshold.availableBandwidthInKbps | rascal.properties | The amount of bandwidth (in kilobits per second) that Traffic Router will try to keep available on the cache. For example ">1500000" means | - | | | "stop sending new traffic to this cache server when traffic is at 8.5Gbps on a 10Gbps interface" | - +-------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - -The :ref:`plugin-parameters` table contains all Traffic Server plug-ins that must be configured as global parameters. - -.. _plugin-parameters: -.. 
table:: Plugin Parameters - - +------------------+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+ - | Name | ConfigFile | Description | - +==================+===============+===========================================================================================================================================+ - | astats_over_http | package | The package version for the :abbr:`ATS (Apache Traffic Server)` | - | | | `astats_over_http plugin `_ | - +------------------+---------------+----------------------------------------------------------+--------------------------------------------------------------------------------+ - | trafficserver | package | The package version of :abbr:`ATS (Apache Traffic Server)` | - +------------------+---------------+----------------------------------------------------------+--------------------------------------------------------------------------------+ - | regex_revalidate | plugin.config | The configuration to be used for the :abbr:`ATS (Apache Traffic Server)` `regex_revalidate plugin`_ | - +------------------+---------------+----------------------------------------------------------+--------------------------------------------------------------------------------+ - | remap_stats | plugin.config | The configuration to be used for the :abbr:`ATS (Apache Traffic Server)` | - | | | `remap_stats plugin `_ - value should be left blank | - +------------------+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+ - -Cache server parameters for special configurations, which are unlikely to need changes but may be useful in particular circumstances, may be found in the :ref:`special-parameters` table. - -.. _special-parameters: -.. 
table:: Special Parameters - - +--------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Name | ConfigFile | Description | - +==============+===================+=============================================================================================================================================================================+ - | not_a_parent | parent.config | This is a boolean flag and is considered ``true`` if it exists and has any value except ``false``. This prevents cache servers with this parameter in their profile from being | - | | | inserted into the ``parent.config`` files generated for other cache servers that have the affected cache server(s)'s :term:`Cache Group` as a parent of their own | - | | | :term:`Cache Group`. This is primarily useful for when Edge-tier cache servers are configured to have a :term:`Cache Group` of other Edge-tier cache servers as parents (a | - | | | highly unusual configuration), and it is necessary to exclude some - but not all - Edge-tier cache servers in the parent :term:`Cache Group` from the ``parent.config`` (for | - | | | example because they lack necessary capabilities), but still have all Edge-tier cache servers in the same :term:`Cache Group` in order to take traffic from ordinary | - | | | :term:`Delivery Service`\ s at that :term:`Cache Group`\ 's geographic location. Once again, this is a highly unusual scenario, and under ordinary circumstances this parameter | - | | | should not exist. 
| - +--------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - Regions, Locations and Cache Groups =================================== All servers have to have a :term:`Physical Location`, which defines their geographic latitude and longitude. Each :term:`Physical Location` is part of a :term:`Region`, and each :term:`Region` is part of a :term:`Division`. For example, ``Denver`` could be the name of a :term:`Physical Location` in the ``Mile High`` :term:`Region` and that :term:`Region` could be part of the ``West`` :term:`Division`. The hierarchy between these terms is illustrated graphically in :ref:`topography-hierarchy`. @@ -289,32 +148,7 @@ All servers also have to be part of a :term:`Cache Group`. A :term:`Cache Group` Configuring Content Purge ========================= -Purging cached content using :abbr:`ATS (Apache Traffic Server)` is not simple; there is no file system from which to delete files and/or directories, and in large caches it can be hard to delete content matching a simple regular expression from the cache. This is why Traffic Control uses the `Regex Revalidate Plugin `_ to purge content from the cache. The cached content is not actually removed, instead a check that runs before each request on each cache server is serviced to see if this request matches a list of regular expressions. If it does, the cache server is forced to send the request upstream to its parents (possibly other caches, possibly the origin) without checking for the response in its cache. The Regex Revalidate Plugin will monitor its configuration file, and will pick up changes to it without needing to alert :abbr:`ATS (Apache Traffic Server). 
Changes to this file need to be distributed to the highest tier (Mid-tier) cache servers in the CDN before they are distributed to the lower tiers, to prevent filling the lower tiers with the content that should be purged from the higher tiers without hitting the origin. This is why the :term:`ORT` script will - by default - push out configuration changes to Mid-tier cache servers first, confirm that they have all been updated, and then push out the changes to the lower tiers. In large CDNs, this can make the distribution and time to activation of the purge too long, and because of that there is the option to not distribute the ``regex_revalidate.config`` file using the :term:`ORT` script, but to do this using other means. By default, Traffic Ops will use :term:`ORT` to distribute the ``regex_revalidate.config`` file. Content Purge is controlled by the parameters in the profile of the cache server specified in the :ref:`content-purge-parameters` table. - -.. _content-purge-parameters: -.. table:: Content Purge Parameters - - +----------------------+-------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Name | ConfigFile | Description | - +======================+=========================+================================================================================================================================================================+ - | location | regex_revalidate.config | Where in the file system the ``regex_revalidate.config`` file should be located on the cache server.
The presence of this parameter tells ORT to distribute this | - | | | file; delete this parameter from the profile if this file is distributed using other means | - +----------------------+-------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | maxRevalDurationDays | regex_revalidate.config | The maximum duration for which a purge shall be active. To prevent a build-up of many checks before each request, this is the longest duration (in days) for which | - | | | the system will allow content purges to remain active | - +----------------------+-------------------------+--------------------------------------------------+-------------------------------------------------------------------------------------------------------------+ - | regex_revalidate | plugin.config | The configuration to be used for the `regex_revalidate plugin`_ | - +----------------------+-------------------------+--------------------------------------------------+-------------------------------------------------------------------------------------------------------------+ - | use_reval_pending | global | Configures Traffic Ops to use a separate ``reval_pending`` flag for each cache server. When this flag is in use, :term:`ORT` will check for a new | - | | | ``regex_revalidate.config`` every 60 seconds in "SYNCDS" mode during the dispersal timer. This will also allow :term:`ORT` to be run in "REVALIDATE" mode, | - | | | which will check for and clear the ``reval_pending`` flag. This can be set to run via a :manpage:`cron(8)` task. Enable with a value of ``1``. | - +----------------------+-------------------------+--------------------------------------------------+-------------------------------------------------------------------------------------------------------------+ - -.. versionadded:: 2.1 - ``use_reval_pending`` was unavailable prior to Traffic Ops version 2.1.
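As a point of reference for operators who distribute the file by other means: a ``regex_revalidate.config`` file is a plain-text list of purge rules, one per line, consisting of a URL regular expression followed by the UNIX epoch time at which the rule expires. The hostnames and timestamps below are fabricated examples, not output from a real Traffic Ops instance; the authoritative format is defined by the `regex_revalidate plugin`_ itself:

```
# <URL regular expression> <expiry as UNIX epoch seconds>
http://video\.example\.com/vod/.*\.ts 1569888000
http://images\.example\.com/assets/logo\.png 1569974400
```

Each cache server consults this list before answering from cache; a request matching an unexpired rule is revalidated upstream instead of being served stale.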
- - -.. Note:: The :abbr:`TTL (Time To Live)` entered by the administrator in the purge request should be longer than the :abbr:`TTL (Time To Live)` of the content to ensure the bad content will not be used. If the CDN is serving content of unknown, or unlimited :abbr:`TTL (Time To Live)`, the administrator should consider using `proxy-config-http-cache-guaranteed-min-lifetime `_ to limit the maximum time an object can be in the cache before it is considered stale, and set that to the same value as ``maxRevalDurationDays`` (note that the former is in seconds and the latter is in days, so convert appropriately). +Purging cached content using :abbr:`ATS (Apache Traffic Server)` is not simple; there is no file system from which to delete files and/or directories, and in large caches it can be hard to delete content matching a simple regular expression from the cache. This is why Traffic Control uses the `Regex Revalidate Plugin `_ to purge content from the cache. The cached content is not actually removed; instead, a check runs on each cache server before each request is serviced to see whether the request matches a list of regular expressions. If it does, the cache server is forced to send the request upstream to its parents (possibly other caches, possibly the origin) without checking for the response in its cache. The Regex Revalidate Plugin will monitor its configuration file, and will pick up changes to it without needing to alert :abbr:`ATS (Apache Traffic Server)`. Changes to this file need to be distributed to the highest tier (Mid-tier) cache servers in the CDN before they are distributed to the lower tiers, to prevent filling the lower tiers with the content that should be purged from the higher tiers without hitting the origin. This is why the :term:`ORT` script will - by default - push out configuration changes to Mid-tier cache servers first, confirm that they have all been updated, and then push out the changes to the lower tiers.
In large CDNs, this can make the distribution and time to activation of the purge too long; because of that, there is the option of distributing the ``regex_revalidate.config`` file by other means rather than with the :term:`ORT` script. By default, Traffic Ops will use :term:`ORT` to distribute the ``regex_revalidate.config`` file. .. _Creating-CentOS-Kickstart: diff --git a/docs/source/admin/traffic_ops/using.rst b/docs/source/admin/traffic_ops/using.rst index 6dc0f63ec8..99fc0a5215 100644 --- a/docs/source/admin/traffic_ops/using.rst +++ b/docs/source/admin/traffic_ops/using.rst @@ -13,12 +13,6 @@ .. limitations under the License. .. -.. |graph| image:: images/graph.png -.. |info| image:: images/info.png -.. |checkmark| image:: images/good.png -.. |X| image:: images/bad.png -.. |clock| image:: images/clock-black.png - .. _to-using: ******************* @@ -38,113 +32,6 @@ The Traffic Ops Menu The following tabs are available in the menu at the top of the Traffic Ops user interface. -.. index:: - Health Tab - -Health ------ -Information on the health of the system. Hover over this tab to get to the following options: - -+---------------+------------------------------------------------------------------------------------------------------------------------------------+ -| Option | Description | -+===============+====================================================================================================================================+ -| Table View | A real-time view into the main performance indicators of the CDNs managed by Traffic Control. | -| | This view is sourced directly from the Traffic Monitor data and is updated every 10 seconds. | -| | This is the default screen of Traffic Ops. | -| | See :ref:`health-table` for details.
| -+---------------+------------------------------------------------------------------------------------------------------------------------------------+ -| Graph View | A real-time graphical view into the main performance indicators of the CDNs managed by Traffic Control. | -| | This view is sourced from the Traffic Monitor data and is updated every 10 seconds. | -| | On loading, this screen will show a history of 24 hours of data from Traffic Stats. | -| | See :ref:`health-graph` for details. | -+---------------+------------------------------------------------------------------------------------------------------------------------------------+ -| Server Checks | A table showing the results of the periodic check extension scripts that are run. See :ref:`server-checks` | -+---------------+------------------------------------------------------------------------------------------------------------------------------------+ -| Daily Summary | A graph displaying the daily peaks of bandwidth, overall bytes served per day, and overall bytes served since initial installation | -| | per CDN. | -+---------------+------------------------------------------------------------------------------------------------------------------------------------+ - -Servers ------- -The main Servers table. This is where you Create/Read/Update/Delete servers of all types. Click the main tab to get to the main table, and hover over it to get these sub-options: - -+-------------------+--------------------------------------+ -| Option | Description | -+===================+======================================+ -| Upload Server CSV | Bulk add of servers from a CSV file. | -+-------------------+--------------------------------------+ - -Parameters ---------- -Parameters and Profiles can be edited here.
Hover over the tab to get the following options: - -+-----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Option | Description | -+=============================+=====================================================================================================================================================================================+ -| Global Profile | The table of global parameters. See :ref:`param-prof`. This is where you Create/Read/Update/Delete parameters in the Global profile | -+-----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| All :term:`Cache Group`\ s | The table of all parameters *that are assigned to a Cache Group* - this may be slow to pull up, as there can be thousands of parameters. | -+-----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| All Profiles | The table of all parameters *that are assigned to a profile* - this may be slow to pull up, as there can be thousands of parameters. | -+-----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Select Profile | Select the parameter list by profile first, then get a table of just the parameters for that profile.
| -+-----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Export Profile | Profiles can be exported from one Traffic Ops instance to another using 'Select Profile' and under the "Profile Details" dialog for the desired profile | -+-----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Import Profile | Profiles can be imported from one Traffic Ops instance to another using the button "Import Profile" after using the "Export Profile" feature | -+-----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Orphaned Parameters | A table of parameters that are not associated with any profile or :term:`Cache Group`. These parameters should either be deleted or associated with a profile or :term:`Cache Group`. | -+-----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Tools ----- -Tools for working with Traffic Ops and its servers. Hover over this tab to get the following options: - -+--------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| Option | Description | -+====================+===================================================================================================================================+ -| Generate ISO | Generate a bootable image for any of the servers in the Servers table (or any server for that matter).
See :ref:`generate-iso` | -+--------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| Queue Updates | Send updates to the caches. See :ref:`queue-updates` | -+--------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| DB Dump | Back up the database to a ``.sql`` file. | -+--------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| Snapshot CRConfig | Send updates to the Traffic Monitor / Traffic Router servers. See :ref:`queue-updates` | -+--------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| Invalidate Content | Invalidate or purge content from all caches in the CDN. See :ref:`purge` | -+--------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| Manage DNSSEC keys | Manage DNSSEC Keys for a chosen CDN. | -+--------------------+-----------------------------------------------------------------------------------------------------------------------------------+ - - -Misc ---- -Miscellaneous editing options.
Hover over this tab to get the following options: - -+------------------------------+-------------------------------------------------------------------------------------------+ -| Option | Description | -+==============================+===========================================================================================+ -| CDNs | Create/Read/Update/Delete CDNs | -+------------------------------+-------------------------------------------------------------------------------------------+ -| :term:`Cache Group`\ s | Create/Read/Update/Delete :term:`Cache Group`\ s | -+------------------------------+-------------------------------------------------------------------------------------------+ -| Users | Create/Read/Update/Delete users | -+------------------------------+-------------------------------------------------------------------------------------------+ -| Profiles | Create/Read/Update/Delete profiles. See :ref:`working-with-profiles` | -+------------------------------+-------------------------------------------------------------------------------------------+ -| Networks (ASNs) | Create/Read/Update/Delete Autonomous System Numbers. See :ref:`asn-czf` | -+------------------------------+-------------------------------------------------------------------------------------------+ -| Hardware | Get detailed hardware information (note: this should be moved to a Traffic Ops Extension) | -+------------------------------+-------------------------------------------------------------------------------------------+ -| Data Types | Create/Read/Update/Delete data types | -+------------------------------+-------------------------------------------------------------------------------------------+ -| Divisions | Create/Read/Update/Delete divisions | -+------------------------------+-------------------------------------------------------------------------------------------+ -| Regions | Create/Read/Update/Delete regions |
-+------------------------------+-------------------------------------------------------------------------------------------+ -| Physical Locations | Create/Read/Update/Delete locations | -+------------------------------+-------------------------------------------------------------------------------------------+ - .. index:: Change Log @@ -167,489 +54,6 @@ Help for Traffic Ops and Traffic Control. Hover over this tab to get the followi | Logout | Logout from Traffic Ops | +---------------+---------------------------------------------------------------------+ - -.. index:: - Edge Health - Health - -Health ====== - -.. _health-table: - -The Health Table ---------------- -The Health table is the default landing screen for Traffic Ops; it displays the status of the EDGE caches in a table form directly from Traffic Monitor (bypassing Traffic Stats), sorted by Mbps Out. The columns in this table are: - - -:Profile: the Profile of this server or ALL, meaning this row shows data for multiple servers, and the row shows the sum of all values. -:Edge Cache Group: the edge :term:`Cache Group` short name or ALL, meaning this row shows data for multiple servers, and the row shows the sum of all values. -:Host Name: the host name of the server or ALL, meaning this row shows data for multiple servers, and the row shows the sum of all values. -:Healthy: indicates if this cache is healthy according to the Health Protocol. A row with ALL in any of the columns will always show a |checkmark|; this column is valid only for individual EDGE caches. -:Admin: shows the administrative status of the server. -:Connections: the number of connections this cache (or group of caches) has open (``ats.proxy.process.http.current_client_connections`` from ATS). -:Mbps Out: the bandwidth being served out of this cache (or group of caches). - -Since the top line has ALL, ALL, ALL, it shows the total connections and bandwidth for all caches managed by this instance of Traffic Ops. - -..
_health-graph: - -Graph View ---------- -The Graph View shows a live view of the last 24 hours of bits per second served and open connections at the edge in a graph. This data is sourced from Traffic Stats. If there are 2 CDNs configured, this view will show the statistics for both, and the graphs are stacked. On the left-hand side, the totals and immediate values as well as the percentage of total possible capacity are displayed. This view is updated every 10 seconds. - - -.. _server-checks: - -Server Checks ------------- -The server checks page is intended to give an overview of the Servers managed by Traffic Control as well as their status. This data comes from `Traffic Ops extensions `_. - -+------+-----------------------------------------------------------------------+ -| Name | Description | -+======+=======================================================================+ -| ILO | Ping the iLO interface for EDGE or MID servers | -+------+-----------------------------------------------------------------------+ -| 10G | Ping the IPv4 address of the EDGE or MID servers | -+------+-----------------------------------------------------------------------+ -| 10G6 | Ping the IPv6 address of the EDGE or MID servers | -+------+-----------------------------------------------------------------------+ -| MTU | Ping the EDGE or MID using the configured MTU from Traffic Ops | -+------+-----------------------------------------------------------------------+ -| FQDN | DNS check that matches what the DNS server responds with compared to | -| | what Traffic Ops has. | -+------+-----------------------------------------------------------------------+ -| DSCP | Checks the DSCP value of packets from the edge server to the Traffic | -| | Ops server. | -+------+-----------------------------------------------------------------------+ -| RTR | Content Router checks. Checks the health of the Content Routers. | -| | Checks the health of the caches using the Content Routers. 
| -+------+-----------------------------------------------------------------------+ -| CHR | Cache Hit Ratio in percent. | -+------+-----------------------------------------------------------------------+ -| CDU | Total Cache Disk Usage in percent. | -+------+-----------------------------------------------------------------------+ -| ORT | Operational Readiness Test. Uses the ORT script on the edge and mid | -| | servers to determine if the configuration in Traffic Ops matches the | -| | configuration on the edge or mid. The user that this script runs as | -| | must have an ssh key on the edge servers. | -+------+-----------------------------------------------------------------------+ - -Daily Summary ------------- -Displays daily max Gbps and bytes served for all CDNs. In order for the graphs to appear, the 'daily_bw_url' and 'daily_served_url' parameters need to be created, assigned to the global profile, and have the URL of a Grafana graph as their value. For more information on configuring Grafana, see the `Traffic Stats <../traffic_stats.html>`_ section. - -.. _server: - -Server -====== -This view shows a table of all the servers in Traffic Ops. The table columns show the most important details of the server. The **IPAddr** column is clickable to launch an ``ssh://`` link to this server. The |graph| icon will link to a Traffic Stats graph of this server for caches, and the |info| icon will link to the server status pages for other server types. 
- - -Server Types ------------ -These are the types of servers that can be managed in Traffic Ops: - -+---------------+---------------------------------------------+ -| Name | Description | -+===============+=============================================+ -| EDGE | Edge Cache | -+---------------+---------------------------------------------+ -| MID | Mid Tier Cache | -+---------------+---------------------------------------------+ -| ORG | Origin | -+---------------+---------------------------------------------+ -| CCR | Traffic Router | -+---------------+---------------------------------------------+ -| RASCAL | Rascal health polling & reporting | -+---------------+---------------------------------------------+ -| TOOLS_SERVER | Ops hosts for management | -+---------------+---------------------------------------------+ -| RIAK | Riak keystore | -+---------------+---------------------------------------------+ -| SPLUNK | SPLUNK indexer, search head, etc. | -+---------------+---------------------------------------------+ -| TRAFFIC_STATS | traffic_stats server | -+---------------+---------------------------------------------+ -| INFLUXDB | InfluxDB server | -+---------------+---------------------------------------------+ - -.. _asn-czf: - -The Coverage Zone File and ASN Table ------------------------------------- -The Coverage Zone File (CZF) should contain a cache group name to network prefix mapping in the form: - -.. code-block:: json - - { - "coverageZones": { - "cache-group-01": { - "coordinates": { - "latitude": 1.1, - "longitude": 2.2 - }, - "network6": [ - "1234:5678::/64", - "1234:5679::/64" - ], - "network": [ - "192.168.8.0/24", - "192.168.9.0/24" - ] - }, - "cache-group-02": { - "coordinates": { - "latitude": 3.3, - "longitude": 4.4 - }, - "network6": [ - "1234:567a::/64", - "1234:567b::/64" - ], - "network": [ - "192.168.4.0/24", - "192.168.5.0/24" - ] - } - } - } - -.. 
_deep-czf: - -The Deep Coverage Zone File ---------------------------- -The Deep Coverage Zone File (DCZF) format is similar to the CZF format but adds a ``caches`` list under each ``deepCoverageZone``: - -.. code-block:: json - - { - "deepCoverageZones": { - "location-01": { - "coordinates": { - "latitude": 5.5, - "longitude": 6.6 - }, - "network6": [ - "1234:5678::/64", - "1234:5679::/64" - ], - "network": [ - "192.168.8.0/24", - "192.168.9.0/24" - ], - "caches": [ - "edge-01", - "edge-02" - ] - }, - "location-02": { - "coordinates": { - "latitude": 7.7, - "longitude": 8.8 - }, - "network6": [ - "1234:567a::/64", - "1234:567b::/64" - ], - "network": [ - "192.168.4.0/24", - "192.168.5.0/24" - ], - "caches": [ - "edge-02", - "edge-03" - ] - } - } - } - -Each entry in the ``caches`` list is the hostname of an edge cache registered in Traffic Ops which will be used for "deep" caching in that Deep Coverage Zone. Unlike a regular CZF, coverage zones in the DCZF do not map to a :term:`Cache Group` in Traffic Ops, so currently the deep coverage zone name only needs to be unique. - -If the Traffic Router gets a DCZF "hit" for a requested :term:`Delivery Service` that has Deep Caching enabled, the client will be routed to an available "deep" cache from that zone's ``caches`` list. - -.. note:: The ``"coordinates"`` section is optional. - - -.. _working-with-profiles: - -Parameters and Profiles -======================= -Parameters are shared between profiles if the set of ``{ name, config_file, value }`` is the same. To change a value in one profile but not in others, the parameter has to be removed from the profile you want to change it in, and a new parameter entry has to be created (**Add Parameter** button at the bottom of the Parameters view), and assigned to that profile. It is easy to create new profiles from the **Misc > Profiles** view - just use the **Add/Copy Profile** button at the bottom of the profile view to copy an existing profile to a new one. 
Profiles can be exported from one system and imported to another using the profile view as well. A parameter that is not assigned to any profile serves no function. To find such parameters, use the **Parameters > Orphaned Parameters** view. It is easy to create orphaned parameters by removing all profile assignments, or by not assigning a profile immediately after creating the parameter. - -.. seealso:: :ref:`param-prof` in the *Configuring Traffic Ops* section. - -.. _ccr-profile: - -Traffic Router Profile ---------------------- - -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| Name | Config_file | Description | -+=========================================+========================+==================================================================================================================================================+ -| location | dns.zone | Location to store the DNS zone files in the local file system of Traffic Router. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| location | http-log4j.properties | Location to find the log4j.properties file for Traffic Router. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| location | dns-log4j.properties | Location to find the dns-log4j.properties file for Traffic Router. 
| -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| location | geolocation.properties | Location to find the geolocation.properties file for Traffic Router. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| CDN_name | rascal-config.txt | The human readable name of the CDN for this profile. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| CoverageZoneJsonURL | CRConfig.xml | The location (URL) to retrieve the coverage zone map file in JSON format from. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| ecsEnable | CRConfig.json | Boolean value to enable or disable EDNS0 client subnet extensions. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| geolocation.polling.url | CRConfig.json | The location (URL) to retrieve the geo database file from. 
| -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| geolocation.polling.interval | CRConfig.json | How often to refresh the geolocation database in ms | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| coveragezone.polling.interval | CRConfig.json | How often to refresh the coverage zone map in ms | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| coveragezone.polling.url | CRConfig.json | The location (URL) to retrieve the coverage zone map file in JSON format from. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| deepcoveragezone.polling.interval | CRConfig.json | How often to refresh the deep coverage zone map in ms | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| deepcoveragezone.polling.url | CRConfig.json | The location (URL) to retrieve the deep coverage zone map file in JSON format from. 
| -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| client.steering.forced.diversity | CRConfig.json | Enable the Client Steering Forced Diversity feature (value = "true") to diversify CLIENT_STEERING results by including more unique edge caches | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.soa.expire | CRConfig.json | The value for the expire field the Traffic Router DNS Server will respond with on Start of Authority (SOA) records. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.soa.minimum | CRConfig.json | The value for the minimum field the Traffic Router DNS Server will respond with on SOA records. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.soa.admin | CRConfig.json | The DNS Start of Authority admin. Should be a valid support email address if DNS is not working correctly. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.soa.retry | CRConfig.json | The value for the retry field the Traffic Router DNS Server will respond with on SOA records. 
| -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.soa.refresh | CRConfig.json | The value for the refresh field the Traffic Router DNS Server will respond with on SOA records. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.ttls.NS | CRConfig.json | The TTL the Traffic Router DNS Server will respond with on NS records. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.ttls.SOA | CRConfig.json | The TTL the Traffic Router DNS Server will respond with on SOA records. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.ttls.AAAA | CRConfig.json | The Time To Live (TTL) the Traffic Router DNS Server will respond with on AAAA records. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.ttls.A | CRConfig.json | The TTL the Traffic Router DNS Server will respond with on A records. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.ttls.DNSKEY | CRConfig.json | The TTL the Traffic Router DNS Server will respond with on DNSKEY records. 
| -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| tld.ttls.DS | CRConfig.json | The TTL the Traffic Router DNS Server will respond with on DS records. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| api.port | server.xml | The TCP port Traffic Router listens on for API (REST) access. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| api.cache-control.max-age | CRConfig.json | The value of the ``Cache-Control: max-age=`` header in the API responses of Traffic Router. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| api.auth.url | CRConfig.json | The API authentication URL (https://${tmHostname}/api/1.1/user/login; ${tmHostname} is a search and replace token used by Traffic Router to | -| | | construct the correct URL) | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| consistent.dns.routing | CRConfig.json | Control whether DNS :term:`Delivery Service`\ s use consistent hashing on the edge FQDN to select caches for answers. 
May improve performance if | -| | | set to true; defaults to false | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| dnssec.enabled | CRConfig.json | Whether DNSSEC is enabled; this parameter is updated via the DNSSEC administration user interface. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| dnssec.allow.expired.keys | CRConfig.json | Allow Traffic Router to use expired DNSSEC keys to sign zones; default is true. This helps prevent DNSSEC related outages due to failed Traffic | -| | | Control components or connectivity issues. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| dynamic.cache.primer.enabled | CRConfig.json | Allow Traffic Router to attempt to prime the dynamic zone cache; defaults to true | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| dynamic.cache.primer.limit | CRConfig.json | Limit the number of permutations to prime when dynamic zone cache priming is enabled; defaults to 500 | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| keystore.maintenance.interval | CRConfig.json | The interval in seconds at which Traffic Router will check the keystore API for new DNSSEC keys | 
-+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| keystore.api.url | CRConfig.json | The keystore API URL (https://${tmHostname}/api/1.1/cdns/name/${cdnName}/dnsseckeys.json; ${tmHostname} and ${cdnName} are search and replace | -| | | tokens used by Traffic Router to construct the correct URL) | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| keystore.fetch.timeout | CRConfig.json | The timeout in milliseconds for requests to the keystore API | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| keystore.fetch.retries | CRConfig.json | The number of times Traffic Router will attempt to load keys before giving up; defaults to 5 | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| keystore.fetch.wait | CRConfig.json | The number of milliseconds Traffic Router will wait before a retry | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| signaturemanager.expiration.multiplier | CRConfig.json | Multiplier used in conjunction with a zone's maximum TTL to calculate DNSSEC signature durations; defaults to 5 | 
-+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| zonemanager.threadpool.scale | CRConfig.json | Multiplier used to determine the number of cores to use for zone signing operations; defaults to 0.75 | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| zonemanager.cache.maintenance.interval | CRConfig.json | The interval in seconds at which Traffic Router will check for zones that need to be resigned or if dynamic zones need to be expired from cache | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| zonemanager.dynamic.response.expiration | CRConfig.json | A string (e.g.: 300s) that defines how long a dynamic zone remains valid before expiring | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| DNSKEY.generation.multiplier | CRConfig.json | Used to determine when new keys need to be regenerated. Keys are regenerated if expiration is less than the generation multiplier * the TTL. If | -| | | the parameter does not exist, the default is 10. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ -| DNSKEY.effective.multiplier | CRConfig.json | Used when creating an effective date for a new key set. 
New keys are generated with an effective date of old key expiration - (effective | -| | | multiplier * TTL). Default is 2. | -+-----------------------------------------+------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ - -Tools -===== - -.. index:: - ISO - Generate ISO - -.. _generate-iso: - -Generate ISO ------------ -Generate ISO is a tool for building custom ISOs for installing caches on remote hosts. Currently it only supports CentOS 7, but if you're brave and pure of heart you MIGHT be able to get it to work with other Unix-like OSes. - -The interface is *mostly* self-explanatory as it's got hints. - -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| Field | Explanation | -+===============================+=================================================================================================================================+ -|Choose a server from list: | This option gets all the server names currently in the Traffic Ops database and will autofill known values. | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| OS Version: | There needs to be an _osversions.cfg_ file in the ISO directory that maps the name of a directory to a name that shows up here. | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| Hostname: | This is the FQDN of the server to be installed. It is required. 
| -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| Root password: | If you don't put anything here it will default to the salted MD5 of "Fred". Whatever you put here is MD5 hashed and written to disk. | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| DHCP: | If yes, other IP settings will be ignored | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| IP Address: | Required if DHCP=no | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| Netmask: | Required if DHCP=no | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| Gateway: | Required if DHCP=no | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| IPV6 Address: | Optional. /64 is assumed if prefix is omitted | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| IPV6 Gateway: | Ignored if an IPV4 gateway is specified | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| Network Device: | Optional. Typical values are bond0, eth4, etc. 
Note: if you enter bond0, a LACP bonding config will be written | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| MTU: | If unsure, set to 1500 | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ -| Specify disk for OS install: | Optional. Typical values are "sda". | -+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------+ - - -When you click the **Download ISO** button the following occurs (all paths relative to the top level of the directory specified in _osversions.cfg_): - -#. Reads /etc/resolv.conf to get a list of nameservers. This is a rather ugly hack that is in place until we get a way of configuring it in the interface. -#. Writes a file, ks_scripts/state.out, that contains the directory from _osversions.cfg_ and the mkisofs string that we'll call later. -#. Writes a file, ks_scripts/network.cfg, containing key=value pairs that set up networking. -#. Creates an MD5 hash of the password you specify and writes it to ks_scripts/password.cfg. Note that if you do not specify a password "Fred" is used. Also note that we have experienced some issues with web browsers autofilling that field. -#. Writes out a disk configuration file to ks_scripts/disk.cfg. -#. mkisofs is called against the directory configured in _osversions.cfg_ and an ISO is generated in memory and delivered to your web browser. - -You now have a customized ISO that can be used to install Red Hat and derivative Linux distributions with some modifications to your ks.cfg file. - -Kickstart/Anaconda will mount the ISO at /mnt/stage2 during the install process (at least with 6). 
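The salted-MD5 password step described above can be sketched in shell. This is a hypothetical reconstruction, not the actual Generate ISO code: the file path matches the documented ks_scripts/password.cfg, ``rootpw --iscrypted`` is standard kickstart syntax, and ``openssl passwd -1`` is one common way to produce a salted MD5 hash.

```shell
#!/bin/sh
# Hypothetical sketch of the password.cfg generation step.
# "Fred" is the documented default when no password is supplied.
PASS="${1:-Fred}"
# `openssl passwd -1` emits a salted MD5 crypt hash ($1$salt$hash).
HASH="$(openssl passwd -1 "$PASS")"
printf 'rootpw --iscrypted %s\n' "$HASH" > /tmp/password.cfg
cat /tmp/password.cfg
```

Because the salt is random, the hash differs on every run even for the same password; kickstart only needs the ``rootpw --iscrypted`` line to be well formed.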
- -You can directly include the password file anywhere in your ks.cfg file (usually in the top) by doing %include /mnt/stage2/ks_scripts/password.cfg - -What we currently do is have 2 scripts, one to do hard drive configuration and one to do network configuration. Both are relatively specific to the environment they were created in, and both are *probably* wrong for other organizations, however they are currently living in the "misc" directory as examples of how to do things. - -We trigger those in a %pre section in ks.cfg and they will write config files to /tmp. We will then include those files in the appropriate places using %include. - -For example this is a section of our ks.cfg file: :: - - %include /mnt/stage2/ks_scripts/packages.txt - - %pre - python /mnt/stage2/ks_scripts/create_network_line.py - bash /mnt/stage2/ks_scripts/drive_config.sh - %end - -These two scripts will then run _before_ anaconda sets up its internal structures, then a bit further up in the ks.cfg file (outside of the %pre %end block) we do an :: - - %include /mnt/stage2/ks_scripts/password.cfg - ... - %include /tmp/network_line - - %include /tmp/drive_config - ... - -This snarfs up the contents and inlines them. - -If you only have one kind of hardware on your CDN it is probably best to just put the drive config right in the ks.cfg. - -If you have simple networking needs (we use bonded interfaces in most, but not all locations and we have several types of hardware meaning different ethernet interface names at the OS level etc.) then something like this: - -.. code-block:: bash - - #!/bin/bash - source /mnt/stage2/ks_scripts/network.cfg - echo "network --bootproto=static --activate --ipv6=$IPV6ADDR --ip=$IPADDR --netmask=$NETMASK --gateway=$GATEWAY --ipv6gateway=$GATEWAY --nameserver=$NAMESERVER --mtu=$MTU --hostname=$HOSTNAME" >> /tmp/network.cfg - -.. Note:: This is an example and may not work at all. - -You could also put this in the %pre section. Lots of ways to solve it. 
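The ks_scripts/network.cfg file sourced by the script above is just a set of key=value shell assignments. The following is a hypothetical sample; the key names are taken from the variables that script consumes, and the real file written by Traffic Ops may contain additional keys or different values.

```shell
#!/bin/sh
# Hypothetical example of the key=value pairs in ks_scripts/network.cfg,
# using the variable names the example script above expects.
cat > /tmp/network.cfg <<'EOF'
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
IPV6ADDR=2001:db8::10/64
NAMESERVER=192.168.1.53
MTU=1500
HOSTNAME=edge-01.example.com
EOF
# Sourcing the file makes each key available as a shell variable,
# which is all the kickstart helper script relies on.
. /tmp/network.cfg
echo "network --bootproto=static --ip=$IPADDR --netmask=$NETMASK --gateway=$GATEWAY --hostname=$HOSTNAME"
```

All host names and addresses here are placeholders; substitute the values for the server being installed.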
- -We have included the two scripts we use in the "misc" directory of the git repo: - -* kickstart_create_network_line.py -* kickstart_drive_config.sh - -These scripts were written to support a very narrow set of expectations and environment and are almost certainly not suitable to just drop in, but they might provide a good starting point. - -.. _queue-updates: - -Queue Updates and Snapshot CRConfig ------------------------------------ -When changing delivery services special care has to be taken so that Traffic Router will not send traffic to caches for delivery services that the cache doesn't know about yet. In general, when adding delivery services, or adding servers to a delivery service, it is best to update the caches before updating Traffic Router and Traffic Monitor. When deleting delivery services, or deleting server assignments to delivery services, it is best to update Traffic Router and Traffic Monitor first and then the caches. Updating the cache configuration is done through the *Queue Updates* menu, and updating Traffic Monitor and Traffic Router config is done through the *Snapshot CRConfig* menu. - -.. index:: - Cache Updates - Queue Updates - -Queue Updates -""""""""""""" -Every 15 minutes the caches should run a *syncds* to get all changes needed from Traffic Ops. The files that will be updated by the syncds job are: - -- records.config -- remap.config -- parent.config -- cache.config -- hosting.config -- url\_sig\_(.*)\.config -- hdr\_rw\_(.*)\.config -- regex_revalidate.config -- ip_allow.config - -A cache will only get updated when the update flag is set for it. To set the update flag, use the *Queue Updates* menu - here you can schedule updates for a whole CDN or a :term:`Cache Group`: - -#. Click **Tools > Queue Updates**. -#. Select the CDN to queue updates for or select All. -#. Select the :term:`Cache Group` to queue updates for or select All. -#. Click the **Queue Updates** button. -#. When the Queue Updates for this Server? 
(all) window opens, click **OK**. - -To schedule updates for just one cache, use the "Server Checks" page, and click the |checkmark| in the *UPD* column. The UPD column of Server Checks page will change show a |clock| when updates are pending for that cache. - -.. index:: - Snapshot CRConfig - -.. _snapshot-crconfig: - -Snapshot CRConfig -""""""""""""""""" -Every 60 seconds Traffic Monitor will check with Traffic Ops to see if a new CRConfig snapshot exists; Traffic Monitor polls Traffic Ops for a new CRConfig, and Traffic Router polls Traffic Monitor for the same file. This is necessary to ensure that Traffic Monitor sees configuration changes first, which helps to ensure that the health and state of caches and delivery services propagates properly to Traffic Router. See :ref:`ccr-profile` for more information on the CRConfig file. - -To create a new snapshot, use the *Tools > Snapshot CRConfig* menu: - - #. Click **Tools > Snapshot CRConfig**. - #. Verify the selection of the correct CDN from the Choose CDN drop down and click **Diff CRConfig**. - On initial selection of this, the CRConfig Diff window says the following: - - There is no existing CRConfig for [cdn] to diff against... Is this the first snapshot??? - If you are not sure why you are getting this message, please do not proceed! - To proceed writing the snapshot anyway click the 'Write CRConfig' button below. - - If there is an older version of the CRConfig, a window will pop up showing the differences - between the active CRConfig and the CRConfig about to be written. - - #. Click **Write CRConfig**. - #. When the This will push out a new CRConfig.json. Are you sure? window opens, click **OK**. - #. The "Successfully wrote CRConfig.json!" window opens, click **OK**. - -.. Note:: Snapshotting the CDN also deletes all HTTPS certificates for every :term:`Delivery Service` which has been deleted since the last :term:`Snapshot`. - .. 
index:: Invalidate Content Purge @@ -690,7 +94,7 @@ The Manage DNSSEC Keys screen also allows a user to perform the following action Activate/Deactivate DNSSEC for a CDN ------------------------------------ -Fairly straight forward, this button set the **dnssec.enabled** param to either **true** or **false** on the Traffic Router profile for the CDN. The Activate/Deactivate option is only available if DNSSEC keys exist for CDN. In order to active DNSSEC for a CDN a user must first generate keys and then click the **Active DNSSEC** button. +Fairly straightforward, this button sets the :term:`Parameter` with the :ref:`parameter-name` "dnssec.enabled" to have a :ref:`parameter-value` of either "true" or "false" on the Traffic Router :term:`Profile` for the CDN. The Activate/Deactivate option is only available if DNSSEC keys exist for the CDN. In order to activate DNSSEC for a CDN a user must first generate keys and then click the :guilabel:`Active DNSSEC` button. Generate Keys ------------- diff --git a/docs/source/admin/traffic_portal/default_profiles.rst b/docs/source/admin/traffic_portal/default_profiles.rst deleted file mode 100644 index 42fa2e6853..0000000000 --- a/docs/source/admin/traffic_portal/default_profiles.rst +++ /dev/null @@ -1,46 +0,0 @@ -.. -.. -.. Licensed under the Apache License, Version 2.0 (the "License"); -.. you may not use this file except in compliance with the License. -.. You may obtain a copy of the License at -.. -.. http://www.apache.org/licenses/LICENSE-2.0 -.. -.. Unless required by applicable law or agreed to in writing, software -.. distributed under the License is distributed on an "AS IS" BASIS, -.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -.. See the License for the specific language governing permissions and -.. limitations under the License. -.. - -.. 
_default-profiles: - -**************** -Default Profiles -**************** -Traffic Ops has the concept of :ref:`working-with-profiles`, which are an integral component of Traffic Ops. To get started, a set of default Traffic Ops profiles are provided. These can be imported into Traffic Ops, and are required by the Traffic Control components Traffic Router, Traffic Monitor, and Apache Traffic Server (Edge-tier and Mid-tier caches). Download Default Profiles from `here `_ - -.. _to-profiles-min-needed: - -Minimum Traffic Ops Profiles Needed -=================================== - -- :file:`EDGE_ATS_{version}_{platform}_PROFILE.traffic_ops` -- :file:`MID_ATS_{version}_{platform}_PROFILE.traffic_ops` -- :file:`TRAFFIC_MONITOR_PROFILE.traffic_ops` -- :file:`TRAFFIC_ROUTER_PROFILE.traffic_ops` -- :file:`TRAFFIC_STATS_PROFILE.traffic_ops` -- :file:`EDGE_GROVE_PROFILE.traffic_ops` - -.. note:: Despite that these have the ``.traffic_ops`` extension, they use JSON to store data. If your syntax highlighting doesn't work in some editor or viewer, try changing the extension to ``.json``. - -.. warning:: These profiles will likely need to be modified to suit your system. Many of them contain hardware-specific parameters and parameter values. - -Steps to Import a Profile -========================= -#. Sign into Traffic Portal -#. Navigate to :menuselection:`Configure --> Profiles` -#. Click on :menuselection:`More --> Import Profile` -#. Drag and drop your desired profile into the upload pane -#. Click :guilabel:`Import` -#. Continue these steps for each of the `Minimum Traffic Ops Profiles Needed`_. 
diff --git a/docs/source/admin/traffic_portal/installation.rst b/docs/source/admin/traffic_portal/installation.rst index e357cafe0c..7220f21d6a 100644 --- a/docs/source/admin/traffic_portal/installation.rst +++ b/docs/source/admin/traffic_portal/installation.rst @@ -41,6 +41,11 @@ Configuring Traffic Portal - Optional: update :file:`/opt/traffic_portal/public/resources/assets/css/custom.css` to customize Traffic Portal styling. +Configuring OAuth Through Traffic Portal +======================================== +See :ref:`oauth_login`. + + Starting Traffic Portal ======================= The Traffic Portal RPM comes with a :manpage:`systemd(1)` unit file, so under normal circumstances Traffic Portal may be started with :manpage:`systemctl(1)`. diff --git a/docs/source/admin/traffic_portal/usingtrafficportal.rst b/docs/source/admin/traffic_portal/usingtrafficportal.rst index 73e488f90c..e51eb60dfc 100644 --- a/docs/source/admin/traffic_portal/usingtrafficportal.rst +++ b/docs/source/admin/traffic_portal/usingtrafficportal.rst @@ -38,33 +38,33 @@ Current Connections The current number of connections to all of your CDNs. Healthy Caches - Displays the number of healthy caches across all CDNs. Click the link to view the healthy caches on the cache stats page. + Displays the number of healthy :term:`cache servers` across all CDNs. Click the link to view the healthy caches on the cache stats page. Unhealthy Caches - Displays the number of unhealthy caches across all CDNs. Click the link to view the unhealthy caches on the cache stats page. + Displays the number of unhealthy :term:`cache servers` across all CDNs. Click the link to view the unhealthy caches on the cache stats page. Online Caches - Displays the number of :term:`cache server` s with ONLINE status. Traffic Monitor will not monitor the state of ONLINE servers [1]_. + Displays the number of :term:`cache servers` with ONLINE :term:`Status`. Traffic Monitor will not monitor the state of ONLINE servers. 
Reported Caches - Displays the number of :term:`cache server` s with REPORTED status [1]_. + Displays the number of :term:`cache servers` with REPORTED :term:`Status`. Offline Caches - Displays the number of :term:`cache server` s with OFFLINE status [1]_. + Displays the number of :term:`cache servers` with OFFLINE :term:`Status`. Admin Down Caches - Displays the number of caches with ADMIN_DOWN status [1]_. + Displays the number of caches with ADMIN_DOWN :term:`Status`. -Each component of this view is updated on the intervals defined in the :file:`traffic_portal_properties.json` configuration file. +Each component of this view is updated on the intervals defined in the :atc-file:`traffic_portal/app/src/traffic_portal_properties.json` configuration file. -.. [1] For more information, see :ref:`health-proto`. +.. _tp-cdns: CDNs ==== A table of CDNs with the following columns: :Name: The name of the CDN -:Domain: The CDN's Top-Level Domain (TLD) +:Domain: The CDN's :abbr:`TLD (Top-Level Domain)` :DNSSEC Enabled: 'true' if :ref:`tr-dnssec` is enabled on this CDN, 'false' otherwise. CDN management includes the ability to (where applicable): @@ -72,18 +72,18 @@ CDN management includes the ability to (where applicable): - create a new CDN - update an existing CDN - delete an existing CDN -- queue/clear updates on all servers in a CDN -- diff CDN snapshots -- create a CDN snapshot +- :term:`Queue Updates` on all servers in a CDN, or clear such updates +- Compare CDN :term:`Snapshots` +- create a CDN :term:`Snapshot` - manage a CDN's DNSSEC keys -- manage a CDN's federations -- view :term:`Delivery Service`\ s of a CDN -- view CDN profiles +- manage a CDN's :term:`Federations` +- view :term:`Delivery Services` of a CDN +- view CDN :term:`Profiles` - view servers within a CDN Monitor ======= -The :guilabel:`Monitor` section of Traffic Portal is used to display statistics regarding the various :term:`cache server` s within all CDNs visible to the user. 
It retrieves this information through the Traffic Ops API from Traffic Monitor instances. +The :guilabel:`Monitor` section of Traffic Portal is used to display statistics regarding the various :term:`cache servers` within all CDNs visible to the user. It retrieves this information through the :ref:`to-api` from Traffic Monitor instances. .. figure:: ./images/tp_menu_monitor.png :align: center @@ -94,41 +94,43 @@ The :guilabel:`Monitor` section of Traffic Portal is used to display statistics Cache Checks ------------ -A real-time view into the status of each cache. The :menuselection:`Monitor --> Cache Checks` page is intended to give an overview of the caches managed by Traffic Control as well as their status. +A real-time view into the status of each :term:`cache server`. The :menuselection:`Monitor --> Cache Checks` page is intended to give an overview of the caches managed by Traffic Control as well as their status. -:Hostname: Cache host name -:Profile: The name of the profile applied to the cache -:Status: The status of the cache (one of: ONLINE, REPORTED, ADMIN_DOWN, OFFLINE) +.. warning:: Several of these columns may be empty by default - particularly in the :ref:`ciab` environment - and require :ref:`Traffic Ops Extensions ` to be installed/enabled/configured in order to work. + +:Hostname: The (short) hostname of the :term:`cache server` +:Profile: The :ref:`profile-name` of the :term:`Profile` used by the :term:`cache server` +:Status: The :term:`Status` of the :term:`cache server` .. 
seealso:: :ref:`health-proto` -:UPD: Configuration updates pending for an Edge-tier or Mid-tier :term:`cache server` -:RVL: Content invalidation requests are pending for this server and/or its parent(s) -:ILO: Ping the iLO interface for Edge-tier or Mid-tier :term:`cache server` s -:10G: Ping the IPv4 address of the Edge-tier or Mid-tier :term:`cache server` s -:FQDN: DNS check that matches what the DNS servers responds with compared to what Traffic Ops has -:DSCP: Checks the :abbr:`DSCP (Differentiated Services Code Point)` value of packets from the Edge-tier :term:`cache server` to the Traffic Ops server -:10G6: Ping the IPv6 address of the Edge-tier or Mid-tier :term:`cache server` s -:MTU: Ping the Edge-tier or Mid-tier using the configured :abbr:`MTU (Maximum Transmission Unit)` from Traffic Ops -:RTR: Content Router checks. Checks the health of the Traffic Router servers. Also checks the health of the :term:`cache server` s using the Traffic Routers -:CHR: Cache Hit Ratio percent -:CDU: Total Cache Disk Usage percent -:ORT: Operational Readiness Test - uses the :term:`ORT` script on the Edge-tier and Mid-tier :term:`cache server` s to determine if the configuration in Traffic Ops matches the configuration on the Edge-tier or Mid-tier. The user as whom this script runs must have an SSH key on the Edge-tier servers. 
+:UPD: Displays whether or not this :term:`cache server` has configuration updates pending +:RVL: Displays whether or not this :term:`cache server` (or one or more of its :term:`parents`) has content invalidation requests pending +:ILO: Indicates the status of an :abbr:`iLO (Integrated Lights-Out)` interface for this :term:`cache server` +:10G: Indicates whether or not the IPv4 address of this :term:`cache server` is reachable via ICMP "pings" +:FQDN: Checks that the DNS servers' responses for this server's FQDN match what Traffic Ops has configured +:DSCP: Checks the :abbr:`DSCP (Differentiated Services Code Point)` value of packets received from this :term:`cache server` +:10G6: Indicates whether or not the IPv6 address of this :term:`cache server` is reachable via ICMP "pings" +:MTU: Checks the :abbr:`MTU (Maximum Transmission Unit)` by sending ICMP "pings" from the Traffic Ops server +:RTR: Checks the reachability of the :term:`cache server` from the CDN's configured Traffic Routers +:CHR: Cache-Hit Ratio (percent) +:CDU: Total Cache-Disk Usage (percent) +:ORT: Uses the :term:`ORT` script on the :term:`cache server` to determine if the configuration in Traffic Ops matches the configuration on the :term:`cache server` itself. The user as whom this script runs must have an SSH key on each server. Cache Stats ----------- A table showing the results of the periodic :ref:`to-check-ext` that are run. These can be grouped by :term:`Cache Group` and/or :term:`Profile`. 
-:Profile: Name of the :term:`Profile` applied to the Edge-tier or Mid-tier :term:`cache server` +:Profile: :ref:`profile-name` of the :term:`Profile` applied to the Edge-tier or Mid-tier :term:`cache server`, or the special name "ALL" indicating that this row is a group of all :term:`cache servers` within a single :term:`Cache Group` :Host: 'ALL' for entries grouped by :term:`Cache Group`, or the hostname of a particular :term:`cache server` -:Cache Group: Name of the :term:`Cache Group` to which this server belongs, or the name of the :term:`Cache Group` that is grouped for entries grouped by :term:`Cache Group` +:Cache Group: Name of the :term:`Cache Group` to which this server belongs, or the name of the :term:`Cache Group` that is grouped for entries grouped by :term:`Cache Group`, or the special name "ALL" indicating that this row is an aggregate across all :term:`Cache Groups` :Healthy: True/False as determined by Traffic Monitor .. seealso:: :ref:`health-proto` :Status: Status of the :term:`cache server` or :term:`Cache Group` -:Connections: Number of connections to this :term:`cache server` or :term:`Cache Group` +:Connections: Number of currently open connections to this :term:`cache server` or :term:`Cache Group` :MbpsOut: Data flow rate outward from the CDN (toward client) in Megabits per second .. _tp-services: @@ -267,15 +269,15 @@ A table of all servers (of all kinds) across all :term:`Delivery Services` and C :UPD: 'true' when updates to the server's configuration are pending, 'false' otherwise :Host: The hostname of the server -:Domain: The server's domain. (The :abbr:`FQDN (Fully Qualified Domain Name)` of the server is given by 'Host.Domain') +:Domain: The server's domain. (The :abbr:`FQDN (Fully Qualified Domain Name)` of the server is given by :file:`{Host}.{Domain}`) :IP: The server's IPv4 address :IPv6: The server's IPv6 address -:Status: The server's status +:Status: The server's :term:`Status` .. 
seealso:: :ref:`health-proto` -:Type: The type of server e.g. EDGE for an Edge-tier :term:`cache server` -:Profile: The name of the server's :term:`Profile` +:Type: The :term:`Type` of server e.g. EDGE for an :term:`Edge-tier cache server` +:Profile: The :ref:`profile-name` of the server's :term:`Profile` :CDN: The name of the CDN to which this server is assigned (if any) :Cache Group: The name of the :term:`Cache Group` to which this server belongs :Phys Location: The name of the :term:`Physical Location` to which this server belongs @@ -288,7 +290,7 @@ Server management includes the ability to (where applicable): - create a new server - update an existing server - delete an existing server -- queue/clear updates on a server +- :term:`Queue Updates` on a server, or clear such updates - update server status - view server :term:`Delivery Services` - view server configuration files @@ -319,7 +321,7 @@ A table of all :term:`origins`. These are automatically created for the :term:`o :Coordinate: The name of the geographic coordinate pair that defines the physical location of this :term:`origin server`. :term:`Origins` created for :term:`Delivery Services` automatically will **not** have associated Coordinates. This can be rectified on the details pages for said :term:`origins` :Cachegroup: The name of the :term:`Cache Group` to which this :term:`origin` belongs, if any. -:Profile: The name of a :term:`Profile` used by this :term:`origin`. +:Profile: The :ref:`profile-name` of a :term:`Profile` used by this :term:`origin`. :term:`Origin` management includes the ability to (where applicable): @@ -327,17 +329,17 @@ A table of all :term:`origins`. These are automatically created for the :term:`o - update an existing :term:`origin` - delete an existing :term:`origin` -.. _tp-profiles-page: +.. _tp-configure-profiles: Profiles -------- -A table of all :term:`Profile`\ s. 
From here you can see :term:`Parameter`\ s, servers and :term:`Delivery Service`\ s assigned to each :term:`Profile`. Each entry in the table has these fields: +A table of all :term:`Profiles`. From here you can see :term:`Parameters`, servers and :term:`Delivery Services` assigned to each :term:`Profile`. Each entry in the table has these fields: -:Name: The name of the :term:`Profile` -:Type: The type of this :term:`Profile`, which indicates the kinds of objects to which the :term:`Profile` may be assigned -:Routing Disabled: For :term:`Profile`\ s applied to :term:`cache server` s (Edge-tier or Mid-tier) this indicates that Traffic Router will refuse to provide routes to these machines -:Description: A user-defined description of the :term:`Profile`, typically indicating its purpose -:CDN: The CDN to which this :term:`Profile` is restricted. To use the same :term:`Profile` across multiple CDNs, clone the :term:`Profile` and change the clone's CDN field. +:Name: The :ref:`profile-name` of the :term:`Profile` +:Type: The :ref:`profile-type` of this :term:`Profile`, which indicates the kinds of objects to which the :term:`Profile` may be assigned +:Routing Disabled: The :ref:`profile-routing-disabled` setting of this :term:`Profile` +:Description: This :term:`Profile`'s :ref:`profile-description` +:CDN: The :ref:`profile-cdn` to which this :term:`Profile` is restricted. To use the same :term:`Profile` across multiple CDNs, clone the :term:`Profile` and change the clone's :ref:`profile-cdn` field. :term:`Profile` management includes the ability to (where applicable): @@ -346,29 +348,29 @@ A table of all :term:`Profile`\ s. 
From here you can see :term:`Parameter`\ s, s - delete an existing :term:`Profile` - clone a :term:`Profile` - export a :term:`Profile` -- view :term:`Profile` :term:`Parameter`\ s -- view :term:`Profile` :term:`Delivery Service`\ s +- view :term:`Profile` :term:`Parameters` +- view :term:`Profile` :term:`Delivery Services` - view :term:`Profile` servers -.. seealso:: :ref:`working-with-profiles` +.. _tp-configure-parameters: Parameters ---------- -This page displays a table of :term:`Parameter`\ s from all :term:`Profile`\ s with the following columns: +This page displays a table of :term:`Parameters` from all :term:`Profiles` with the following columns: -:Name: The name of the :term:`Parameter` -:Config File: The configuration file where this :term:`Parameter` is stored, possibly the special value ``location``, indicating that this :term:`Parameter` actually names the location of a configuration file rather than its contents, or ``package`` to indicate that this :term:`Parameter` specifies a package to be installed rather than anything to do with configuration files -:Value: The value of the :term:`Parameter`. The meaning of this depends on the value of 'Config File' -:Secure: When this is 'true', a user requesting to see this :term:`Parameter` will see the value ``********`` instead of its actual value if the user's permission role isn't 'admin' -:Profiles: The number of :term:`Profile`\ s currently using this :term:`Parameter` +:Name: The :ref:`parameter-name` of the :term:`Parameter` +:Config File: The :ref:`parameter-config-file` to which the :term:`Parameter` belongs. +:Value: The :ref:`parameter-value` of the :term:`Parameter`. 
+:Secure: Whether or not the :term:`Parameter` is :ref:`parameter-secure` +:Profiles: The number of :term:`Profiles` currently using this :term:`Parameter` :term:`Parameter` management includes the ability to (where applicable): - create a new :term:`Parameter` - update an existing :term:`Parameter` - delete an existing :term:`Parameter` -- view :term:`Parameter` :term:`Profile`\ s -- manage assignments of a :term:`Parameter` to one or more :term:`Profile`\ s and/or :term:`Delivery Service`\ s +- view :term:`Parameter` :term:`Profiles` +- manage assignments of a :term:`Parameter` to one or more :term:`Profiles` and/or :term:`Delivery Services` .. _tp-configure-types: @@ -416,7 +418,7 @@ Topology Cache Groups ------------ -This page is a table of :term:`Cache Group`\ s, each entry of which has the following fields: +This page is a table of :term:`Cache Groups`, each entry of which has the following fields: :Name: The full name of this :term:`Cache Group` :Short Name: A shorter, more human-friendly name for this :term:`Cache Group` @@ -429,12 +431,12 @@ This page is a table of :term:`Cache Group`\ s, each entry of which has the foll - create a new :term:`Cache Group` - update an existing :term:`Cache Group` - delete an existing :term:`Cache Group` -- queue/clear updates for all servers in a :term:`Cache Group` +- :term:`Queue Updates` for all servers in a :term:`Cache Group`, or clear such updates - view :term:`Cache Group` :abbr:`ASN (Autonomous System Number)`\ s .. seealso:: `The Wikipedia page on Autonomous System Numbers `_ -- view and assign :term:`Cache Group` :term:`Parameter`\ s +- view and assign :term:`Cache Group` :term:`Parameters` - view :term:`Cache Group` servers Coordinates @@ -535,6 +537,8 @@ Invalidate content includes the ability to (where applicable): - create a new invalidate content job +.. 
_tp-tools-generate-iso: + Generate ISO ------------ Generates a boot-able system image for any of the servers in the Servers table (or any server for that matter). Currently it only supports CentOS 7, but if you're brave and pure of heart you MIGHT be able to get it to work with other Unix-like Operating Systems. The interface is *mostly* self-explanatory, but here is a short explanation of the fields in that form. @@ -551,11 +555,23 @@ DHCP If this is 'no' the IP settings of the system must be specified, and the following extra fields will appear: IP Address - The resultant system's IPv4 Address + The resultant system's IPv4 address + IPv6 Address + The resultant system's IPv6 address Network Subnet The system's network subnet mask Network Gateway - The system's network gateway's IPv4 Address + The system's network gateway's IPv4 address + IPv6 Gateway + The system's network gateway's IPv6 address + Management IP Address + An optional IP address (IPv4 or IPv6) of a "management" server for the resultant system (e.g. for :abbr:`ILO (Integrated Lights-Out)`) + Management IP Netmask + The subnet mask (IPv4 or IPv6) used by a "management" server for the resultant system (e.g. for :abbr:`ILO (Integrated Lights-Out)`) - only needed if the Management IP Address is provided + Management IP Gateway + The IP address (IPv4 or IPv6) of the network gateway used by a "management" server for the resultant system (e.g. for :abbr:`ILO (Integrated Lights-Out)`) - only needed if the Management IP Address is provided + Management Interface + The network interface used by a "management" server for the resultant system (e.g. for :abbr:`ILO (Integrated Lights-Out)`) - only needed if the Management IP Address is provided. Must not be the same as "Interface Name". Network MTU The system's network's :abbr:`MTU (Maximum Transmission Unit)`. 
Despite being a text field, this can only be 1500 or 9000 - it should almost always be 1500 @@ -565,7 +581,7 @@ Network MTU Disk for OS Install The disk on which to install the base system. A reasonable default is ``sda`` (the ``/dev/`` prefix is not necessary) Root Password - The password to be used for the root user. Input is MD5 hashed before being written to disk + The password to be used for the root user. Input is hashed using MD5 before being written to disk Confirm Root Password Repeat the 'Root Password' to be sure it's right Interface Name @@ -576,6 +592,8 @@ Interface Name Stream ISO If this is 'yes', then the download will start immediately as the ISO is written directly to the socket connection to Traffic Ops. If this is 'no', then the download will begin only *after* the ISO has finished being generated. For almost all use cases, this should be 'yes'. +.. impl-detail:: Traffic Ops uses Red Hat's `Kickstart ` to create these ISOs, so many configuration options not available here can be tweaked in the :ref:`Kickstart configuration file `. + User Admin ========== This section offers administrative functionality for users and their permissions. @@ -603,6 +621,8 @@ User management includes the ability to (where applicable): - update an existing user - view :term:`Delivery Service`\ s visible to a user +.. Note:: If OAuth is enabled, creating/deleting a user here will update the user's :term:`role` but the user needs to be created/deleted with the OAuth provider as well. + Tenants ------- Each entry in the table of :term:`Tenant`\ s on this page has the following entries: diff --git a/docs/source/admin/traffic_router.rst b/docs/source/admin/traffic_router.rst index cb0b92df5b..3f21242419 100644 --- a/docs/source/admin/traffic_router.rst +++ b/docs/source/admin/traffic_router.rst @@ -32,9 +32,9 @@ Requirements Installing Traffic Router ========================= -#. 
If no suitable :term:`Profile` exists, create a new :term:`Profile` for Traffic Router via the :guilabel:`+` button on the :ref:`tp-profiles-page` page in Traffic Portal +#. If no suitable :term:`Profile` exists, create a new :term:`Profile` for Traffic Router via the :guilabel:`+` button on the :ref:`tp-configure-profiles` page in Traffic Portal - .. warning:: Traffic Ops will *only* recognize a profile as assignable to a Traffic Router if its name starts with the prefix ``ccr-``. The reason for this is a legacy limitation related to the old name for Traffic Router (Comcast Cloud Router), and will (hopefully) be rectified in the future as the old Perl parts of Traffic Ops are re-written in Go. + .. warning:: Traffic Ops will *only* recognize a :term:`Profile` as assignable to a Traffic Router if its :ref:`profile-name` starts with the prefix ``ccr-``. The reason for this is a legacy limitation related to the old name for Traffic Router (Comcast Cloud Router), and will (hopefully) be rectified in the future as the old Perl parts of Traffic Ops are re-written in Go. #. Enter the Traffic Router server into Traffic Portal on the :ref:`tp-configure-servers` page (or via the :ref:`to-api`), assign to it a Traffic Router :term:`Profile`, and ensure that its status is set to ``ONLINE``. #. Ensure the :abbr:`FQDN (Fully Qualified Domain Name)` of the Traffic Router is resolvable in DNS. This :abbr:`FQDN (Fully Qualified Domain Name)` must be resolvable by the clients expected to use this CDN. @@ -81,8 +81,6 @@ Installing Traffic Router #. Perform a CDN :term:`Snapshot`. - .. SeeAlso:: :ref:`snapshot-crconfig` - .. Note:: Once the :term:`Snapshot` is taken, live traffic will be sent to the new Traffic Routers provided that their status has been set to ``ONLINE``. #. 
Ensure that the parent domain (e.g.: ``cdn.local``) for the CDN's top level domain (e.g.: ``ciab.cdn.local``) contains a delegation (Name Server records) for the new Traffic Router, and that the value specified matches the :abbr:`FQDN (Fully Qualified Domain Name)` of the Traffic Router. @@ -95,13 +93,13 @@ Configuring Traffic Router .. versionchanged:: 3.0 Traffic Router 3.0 has been converted to a formal Tomcat instance, meaning that is now installed separately from the Tomcat servlet engine. The Traffic Router installation package contains all of the Traffic Router-specific software, configuration and startup scripts including some additional configuration files needed for Tomcat. These new configuration files can all be found in the :file:`/opt/traffic_router/conf` directory and generally serve to override Tomcat's default settings. -For the most part, the configuration files and :term:`Parameter`\ s used by Traffic Router are used to bring it online and start communicating with various Traffic Control components. Once Traffic Router is successfully communicating with Traffic Control, configuration should mostly be performed in Traffic Portal, and will be distributed throughout Traffic Control via CDN :term:`Snapshot` process. See :ref:`snapshot-crconfig` for more information. Please see the :term:`Parameter` documentation for Traffic Router in the Using Traffic Ops guide documented under :ref:`ccr-profile` for :term:`Parameter`\ s that influence the behavior of Traffic Router via the :term:`Snapshot`. +For the most part, the configuration files and :term:`Parameters` used by Traffic Router are used to bring it online and start communicating with various Traffic Control components. Once Traffic Router is successfully communicating with Traffic Control, configuration should mostly be performed in Traffic Portal, and will be distributed throughout Traffic Control via CDN :term:`Snapshot` process. .. _tr-config-files: -.. table:: Traffic Router Parameters +.. 
table:: Traffic Router Configuration File Parameters +----------------------------+-------------------------------------------+----------------------------------------------------------------------------------+----------------------------------------------------+ - | ConfigFile | Parameter Name | Description | Default Value | + | Configuration File | Parameter Name | Description | Default Value | +============================+===========================================+==================================================================================+====================================================+ | traffic_monitor.properties | traffic_monitor.bootstrap.hosts | Semicolon-delimited Traffic Monitor | N/A | | | | :abbr:`FQDN (Fully Qualified Domain Name)`\ s with port numbers as necessary | | @@ -169,6 +167,123 @@ For the most part, the configuration files and :term:`Parameter`\ s used by Traf | | | of Tomcat | | +----------------------------+-------------------------------------------+----------------------------------------------------------------------------------+----------------------------------------------------+ +.. _tr-profile: + +The Traffic Router Profile +-------------------------- +Much of a Traffic Router's configuration can be obtained through the :term:`Parameters` on its :term:`Profile`. The :term:`Parameters` of a Traffic Router's :term:`Profile` that have meaning (others are just ignored) are detailed in the :ref:`tr-profile-parameters`. + +.. _tr-profile-parameters: + +.. 
table:: The Parameters of a Traffic Router Profile + + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | :ref:`parameter-name` | :ref:`parameter-config-file` | :ref:`parameter-value` Description | + +=========================================+==============================+=======================================================================================================================================+ + | location | dns.zone | Location to store the DNS zone files in the local file system of Traffic Router. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | location | http-log4j.properties | Location to find the log4j.properties file for Traffic Router. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | location | dns-log4j.properties | Location to find the dns-log4j.properties file for Traffic Router. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | location | geolocation.properties | Location to find the log4j.properties file for Traffic Router. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | CDN_name | rascal-config.txt | The human readable name of the CDN for this :term:`Profile`. 
| + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | CoverageZoneJsonURL | CRConfig.xml | The location (URL) where a :term:`Coverage Zone Map` may be found. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | ecsEnable | CRConfig.json | Boolean value to enable or disable EDNS0 client subnet extensions. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | geolocation.polling.url | CRConfig.json | The location (URL) where a geographic IP mapping database may be found. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | geolocation.polling.interval | CRConfig.json | How often - in milliseconds - Traffic Router should check for an updated geographic IP mapping database. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | coveragezone.polling.interval | CRConfig.json | How often - in milliseconds - Traffic Router should check for an updated :term:`Coverage Zone Map`. 
| + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | coveragezone.polling.url | CRConfig.json | The location (URL) where a :term:`Coverage Zone Map` may be found. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | deepcoveragezone.polling.interval | CRConfig.json | How often - in milliseconds - Traffic Router should check for an updated :term:`Deep Coverage Zone Map` | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | deepcoveragezone.polling.url | CRConfig.json | The location (URL) where a :term:`Deep Coverage Zone Map` may be found. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | client.steering.forced.diversity | CRConfig.json | When this :term:`Parameter` exists and is exactly "true", it enables the "Client Steering Forced Diversity" feature to diversify | + | | | CLIENT_STEERING results by including more unique :term:`Edge-Tier Cache Servers` in the response to the client's request. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.soa.expire | CRConfig.json | The value for the "expire" field the Traffic Router DNS Server will respond with on :abbr:`SOA (Start of Authority)` records. 
| + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.soa.minimum | CRConfig.json | The value for the minimum field the Traffic Router DNS Server will respond with on :abbr:`SOA (Start of Authority)` records. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.soa.admin | CRConfig.json | The DNS Start of Authority administration email address, which clients will be directed to contact for support if DNS is not working | + | | | correctly. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.soa.retry | CRConfig.json | The value for the "retry" field the Traffic Router DNS Server will respond with on :abbr:`SOA (Start of Authority)` records. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.soa.refresh | CRConfig.json | The value for the "refresh" field the Traffic Router DNS Server will respond with on :abbr:`SOA (Start of Authority)` records. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.ttls.NS | CRConfig.json | The :abbr:`TTL (Time To Live)` the Traffic Router DNS Server will respond with on NS records. 
| + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.ttls.SOA | CRConfig.json | The :abbr:`TTL (Time To Live)` the Traffic Router DNS Server will respond with on :abbr:`SOA (Start of Authority)` records. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.ttls.AAAA | CRConfig.json | The :abbr:`TTL (Time To Live)` the Traffic Router DNS Server will respond with on AAAA records. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.ttls.A | CRConfig.json | The :abbr:`TTL (Time To Live)` the Traffic Router DNS Server will respond with on A records. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.ttls.DNSKEY | CRConfig.json | The :abbr:`TTL (Time To Live)` the Traffic Router DNS Server will respond with on DNSKEY records. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tld.ttls.DS | CRConfig.json | The :abbr:`TTL (Time To Live)` the Traffic Router DNS Server will respond with on DS records. 
| + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | api.port | server.xml | The TCP port on which Traffic Router serves the :ref:`tr-api`. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | api.cache-control.max-age | CRConfig.json | The value of the ``Cache-Control: max-age=`` HTTP header in responses from the :ref:`tr-api`. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | api.auth.url | CRConfig.json | The URL of the authentication endpoint of the :ref:`to-api` (:ref:`to-api-user-login`). The actual | + | | | :abbr:`FQDN (Fully Qualified Domain Name)` can be substituted with ``${tmHostname}`` to have Traffic Router automatically fill it in, | + | | | e.g. ``https://${tmHostname}/api/1.1/user/login``. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | consistent.dns.routing | CRConfig.json | Control whether :ref:`DNS-routed ` :term:`Delivery Services` use `Consistent Hashing`. May improve performance if set to | + | | | "true"; defaults to "false". | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | dnssec.enabled | CRConfig.json | Whether DNSSEC is enabled; this parameter is updated via the DNSSEC administration user interface in Traffic Portal. 
| + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | dnssec.allow.expired.keys | CRConfig.json | Allow Traffic Router to use expired DNSSEC keys to sign zones; default is "true". This helps prevent DNSSEC-related outages due to | + | | | failed Traffic Control components or connectivity issues. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | dynamic.cache.primer.enabled | CRConfig.json | Allow Traffic Router to attempt to prime the dynamic zone cache; defaults to "true". | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | dynamic.cache.primer.limit | CRConfig.json | Limit the number of permutations to prime when dynamic zone cache priming is enabled; defaults to "500". | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | keystore.maintenance.interval | CRConfig.json | The interval in seconds on which Traffic Router will check the :ref:`to-api` for new DNSSEC keys. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | keystore.api.url | CRConfig.json | The URL of the DNSSEC key management endpoint of the :ref:`to-api` (:ref:`to-api-cdns-name-name-dnsseckeys`). 
The actual | + | | | :abbr:`FQDN (Fully Qualified Domain Name)` may be substituted with ``${tmHostname}``, and the name of a CDN may be substituted with | + | | | ``${cdnName}``, to have Traffic Router automatically fill them in. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | keystore.fetch.timeout | CRConfig.json | The timeout in milliseconds for requests to the DNSSEC Key management endpoint of the :ref:`to-api` | + | | | (:ref:`to-api-cdns-name-name-dnsseckeys`). | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | keystore.fetch.retries | CRConfig.json | The number of times Traffic Router will attempt to load DNSSEC keys before giving up; defaults to "5". | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | keystore.fetch.wait | CRConfig.json | The number of milliseconds Traffic Router will wait between attempts to load DNSSEC keys. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | signaturemanager.expiration.multiplier | CRConfig.json | Multiplier used in conjunction with a zone's maximum :abbr:`TTL (Time To Live)` to calculate DNSSEC signature durations; defaults to | + | | | "5". 
| + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | zonemanager.threadpool.scale | CRConfig.json | Multiplier used to determine the number of CPU cores to use for zone signing operations; defaults to "0.75". | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | zonemanager.cache.maintenance.interval | CRConfig.json | The interval in seconds on which Traffic Router will check for zones that need to be re-signed or if dynamic zones need to be expired | + | | | from its cache. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | zonemanager.dynamic.response.expiration | CRConfig.json | A duration (e.g.: "300s") that defines how long a dynamic zone will remain valid before expiring. | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | DNSKEY.generation.multiplier | CRConfig.json | Used to determine when new DNSSEC keys need to be generated. Keys are re-generated if expiration is less than the generation | + | | | multiplier multiplied by the :abbr:`TTL (Time To Live)`. If this :term:`Parameter` does not exist, the default is "10". 
| + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | DNSKEY.effective.multiplier | CRConfig.json | Used when creating an effective date for a new key set. New keys are generated with an effective date that is the effective | + | | | multiplier multiplied by the :abbr:`TTL (Time To Live)` less than the old key's expiration date. Default is "2". | + +-----------------------------------------+------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + +.. deprecated:: ATCv4.0 + The use of "CRConfig.xml" as a :ref:`Parameter "Config File" value ` has no known meaning, and its use for configuring Traffic Router is deprecated. All configuration that previously used that value should instead use the equivalent :term:`Parameter` with the :ref:`parameter-config-file` value "CRConfig.json". .. _consistent-hashing: @@ -237,7 +352,7 @@ Overview Operation --------- -Upon startup or a configuration change, Traffic Router obtains keys from the 'keystore' API in Traffic Ops which returns :abbr:`KSK (Key Signing Key)`\ s and :abbr:`ZSK (Zone Signing Key)`\ s for each :term:`Delivery Service` that is a sub-domain of the CDN's :abbr:`TLD (Top Level Domain)` in addition to the keys for the CDN :abbr:`TLD (Top Level Domain)` itself. Each key has timing information that allows Traffic Router to determine key validity (expiration, inception, and effective dates) in addition to the appropriate :abbr:`TTL (Time To Live)` to use for the DNSKEY record(s). All :abbr:`TTL (Time To Live)`\ s are configurable :term:`Parameter`\ s; see the :ref:`ccr-profile` documentation for more information. 
+Upon startup or a configuration change, Traffic Router obtains keys from the 'keystore' API in Traffic Ops which returns :abbr:`KSK (Key Signing Key)`\ s and :abbr:`ZSK (Zone Signing Key)`\ s for each :term:`Delivery Service` that is a sub-domain of the CDN's :abbr:`TLD (Top Level Domain)` in addition to the keys for the CDN :abbr:`TLD (Top Level Domain)` itself. Each key has timing information that allows Traffic Router to determine key validity (expiration, inception, and effective dates) in addition to the appropriate :abbr:`TTL (Time To Live)` to use for the DNSKEY record(s). All :abbr:`TTL (Time To Live)`\ s are configurable :term:`Parameters` in :ref:`tr-profile`. Once Traffic Router obtains the key data from the API, it converts each public key into the appropriate record types (DNSKEY, DS) to place in zones and uses the private key to sign zones. DNSKEY records are added to each :term:`Delivery Service`'s zone (e.g.: ``demo1.mycdn.ciab.test``) for every valid key that exists, in addition to the CDN :abbr:`TLD (Top Level Domain)`'s zone. A DS record is generated from each zone's :abbr:`KSK (Key Signing Key)` and is placed in the CDN :abbr:`TLD (Top Level Domain)`'s zone (e.g.: ``mycdn.ciab.test``); the DS record for the CDN :abbr:`TLD (Top Level Domain)` must be placed in its parent zone, which is not managed by Traffic Control. 
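The key-timing behavior described above, combined with the ``DNSKEY.generation.multiplier`` and ``DNSKEY.effective.multiplier`` :term:`Parameters` from the Traffic Router :term:`Profile` table, can be sketched in a few lines. This is only an illustrative model of the arithmetic those :term:`Parameter` descriptions imply; the function names and the simplified validity check are assumptions, not Traffic Router's actual Java implementation.

```python
from datetime import datetime, timedelta

# Illustrative defaults taken from the Parameter table above (assumed names).
GENERATION_MULTIPLIER = 10  # DNSKEY.generation.multiplier default
EFFECTIVE_MULTIPLIER = 2    # DNSKEY.effective.multiplier default

def key_is_valid(now, inception, effective, expiration):
    """A key may only be used for signing inside its timing window (simplified)."""
    return inception <= now and effective <= now < expiration

def needs_regeneration(now, expiration, ttl_seconds,
                       multiplier=GENERATION_MULTIPLIER):
    """Re-generate when less than multiplier * TTL remains before expiration."""
    return expiration - now < timedelta(seconds=multiplier * ttl_seconds)

def new_key_effective_date(old_expiration, ttl_seconds,
                           multiplier=EFFECTIVE_MULTIPLIER):
    """New keys become effective multiplier * TTL before the old key expires."""
    return old_expiration - timedelta(seconds=multiplier * ttl_seconds)

now = datetime(2019, 1, 1, 12, 0, 0)
expiration = now + timedelta(hours=1)  # one hour of validity left
ttl = 3600                             # DNSKEY TTL, in seconds

print(needs_regeneration(now, expiration, ttl))  # True: 1h left < 10 * 1h
print(new_key_effective_date(expiration, ttl))   # 2019-01-01 11:00:00
```

Under these assumed defaults, a key with one hour of validity remaining and a one-hour TTL is already inside the ten-TTL regeneration window, and its replacement would be made effective two TTLs before the old key expires.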
@@ -352,7 +467,7 @@ DS_BYPASS DS_CLIENT_GEO_UNSUPPORTED Traffic Router did not find a resource supported by coverage zone data and was unable to determine the geographic location of the requesting client DS_CZ_BACKUP_CG - Traffic Router found a backup cache via fall-back (CRconfig's ``edgeLocation``) or via coordinates (:abbr:`CZF (Coverage Zone File)`) configuration + Traffic Router found a backup cache via fall-back (through the ``edgeLocation`` field of a :term:`Snapshot`) or via coordinates (:term:`Coverage Zone File`) configuration DS_CZ_ONLY The selected :term:`Delivery Service` only supports resource lookup based on coverage zone data DS_NO_BYPASS @@ -454,12 +569,14 @@ Deep Caching is a feature that enables clients to be routed to the closest possi What You Need ------------- #. Edge cache deployed in "deep" locations and registered in Traffic Ops -#. A :abbr:`DCZF (Deep Coverage Zone File)` mapping these deep cache hostnames to specific network prefixes (see :ref:`deep-czf` for details) -#. Deep caching parameters in the Traffic Router Profile (see :ref:`ccr-profile` for details): +#. A :term:`Deep Coverage Zone File` mapping these deep cache hostnames to specific network prefixes +#. Deep caching :term:`Parameters` in the Traffic Router :term:`Profile` - ``deepcoveragezone.polling.interval`` - ``deepcoveragezone.polling.url`` + .. seealso:: See :ref:`tr-profile` for details. + #. Deep Caching enabled on one or more HTTP :term:`Delivery Service`\ s (i.e. 'Deep Caching' field on the :term:`Delivery Service` details page (under :guilabel:`Advanced Options`) set to ``ALWAYS``) How it Works @@ -475,11 +592,11 @@ Overview -------- A Steering :term:`Delivery Service` is a :term:`Delivery Service` that is used to route a client to another :term:`Delivery Service`. The :ref:`Type ` of a Steering :term:`Delivery Service` is either STEERING or CLIENT_STEERING. 
A Steering :term:`Delivery Service` will have target :term:`Delivery Service`\ s configured for it with weights assigned to them. Traffic Router uses the weights to make a consistent hash ring which it then uses to make sure that requests are routed to a target based on the configured weights. This consistent hash ring is separate from the consistent hash ring used in cache selection. -Special regular expressions - referred to as 'filters' - can also be configured for target :term:`Delivery Service`\ s to pin traffic to a specific :term:`Delivery Service`. For example, if the filter :regexp:`.*/news/.*` for a target called ``target-ds-1`` is created, any requests to Traffic Router with "news" in them will be routed to ``target-ds-1``. This will happen regardless of the configured weights. +Special regular expressions - referred to as 'filters' - can also be configured for target :term:`Delivery Services` to pin traffic to a specific :term:`Delivery Service`. For example, if the filter :regexp:`.*/news/.*` for a target called ``target-ds-1`` is created, any requests to Traffic Router with "news" in them will be routed to ``target-ds-1``. This will happen regardless of the configured weights. Some other points of interest """"""""""""""""""""""""""""" -- Steering is currently only available for HTTP :term:`Delivery Service`\ s that are a part of the same CDN. +- Steering is currently only available for HTTP :term:`Delivery Services` that are a part of the same CDN. - A new role called STEERING has been added to the Traffic Ops database. Only users with the Steering :term:`Role` or higher can modify steering assignments for a :term:`Delivery Service`. - Traffic Router uses the steering endpoints of the :ref:`to-api` to poll for steering assignments, the assignments are then used when routing traffic. @@ -487,22 +604,22 @@ A couple simple use-cases for Steering are: - Migrating traffic from one :term:`Delivery Service` to another over time. 
- Trying out new functionality for a subset of traffic with an experimental :term:`Delivery Service`. -- Load balancing between :term:`Delivery Service`\ s +- Load balancing between :term:`Delivery Services` The Difference Between STEERING and CLIENT_STEERING --------------------------------------------------- -The only difference between the STEERING and CLIENT_STEERING :term:`Delivery Service` :term:`Type`\ s is that CLIENT_STEERING explicitly allows a client to bypass Steering by choosing a destination :term:`Delivery Service`. A client can accomplish this by providing the ``X-TC-Steering-Option`` HTTP header with a value of the ``xml_id`` of the target :term:`Delivery Service` to which they desire to be routed. When Traffic Router receives this header it will route to the requested target :term:`Delivery Service` regardless of weight configuration. This header is ignored by STEERING :term:`Delivery Service`\ s. +The only difference between the STEERING and CLIENT_STEERING :term:`Delivery Service` :term:`Type`\ s is that CLIENT_STEERING explicitly allows a client to bypass Steering by choosing a destination :term:`Delivery Service`. A client can accomplish this by providing the ``X-TC-Steering-Option`` HTTP header with a value of the ``xml_id`` of the target :term:`Delivery Service` to which they desire to be routed. When Traffic Router receives this header it will route to the requested target :term:`Delivery Service` regardless of weight configuration. This header is ignored by STEERING :term:`Delivery Services`. Configuration ------------- The following needs to be completed for Steering to work correctly: -#. Two target :term:`Delivery Service`\ s are created in Traffic Ops. They must both be HTTP :term:`Delivery Service`\ s part of the same CDN. +#. Two target :term:`Delivery Services` are created in Traffic Ops. They must both be HTTP :term:`Delivery Services` part of the same CDN. #. 
A :term:`Delivery Service` with type STEERING or CLIENT_STEERING is created in Traffic Portal. -#. Target :term:`Delivery Service`\ s are assigned to the Steering :term:`Delivery Service` using Traffic Portal. +#. Target :term:`Delivery Services` are assigned to the Steering :term:`Delivery Service` using Traffic Portal. #. A user with the role of Steering is created. -#. The Steering user assigns weights to the target :term:`Delivery Service`\ s. -#. If desired, the Steering user can create filters for the target :term:`Delivery Service`\ s. +#. The Steering user assigns weights to the target :term:`Delivery Services`. +#. If desired, the Steering user can create filters for the target :term:`Delivery Services`. .. seealso:: For more information see :ref:`steering-qht`. diff --git a/docs/source/admin/traffic_stats.rst b/docs/source/admin/traffic_stats.rst index 1dd2d60a64..bfe232b81a 100644 --- a/docs/source/admin/traffic_stats.rst +++ b/docs/source/admin/traffic_stats.rst @@ -60,7 +60,7 @@ influxPassword pollingInterval The interval at which Traffic Monitor is polled and stats are stored in InfluxDB statusToMon - The status of Traffic Monitor to poll (poll ONLINE or OFFLINE traffic monitors) + The status of Traffic Monitor to poll (poll ONLINE or OFFLINE Traffic Monitors) seelogConfig The absolute path of the seelog configuration file dailySummaryPollingInterval @@ -68,9 +68,9 @@ dailySummaryPollingInterval cacheRetentionPolicy The default retention policy for cache stats dsRetentionPolicy - The default retention policy for :term:`Delivery Service`\ stats + The default retention policy for :term:`Delivery Service` statistics dailySummaryRetentionPolicy - The retention policy to be used for the daily stats + The retention policy to be used for the daily statistics influxUrls An array of InfluxDB hosts for Traffic Stats to write stats to. 
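The options listed above live in Traffic Stats's JSON configuration file. The snippet below is only a hypothetical sketch: every value, the seelog path, and the InfluxDB hostname are invented for illustration, and a real deployment needs additional keys (such as Traffic Ops connection credentials) that are not shown here.

```json
{
	"influxPassword": "influx-password",
	"pollingInterval": 10,
	"statusToMon": "ONLINE",
	"seelogConfig": "/opt/traffic_stats/conf/traffic_stats_seelog.xml",
	"dailySummaryPollingInterval": 60,
	"cacheRetentionPolicy": "daily",
	"dsRetentionPolicy": "daily",
	"dailySummaryRetentionPolicy": "indefinite",
	"influxUrls": ["http://influxdb.example.net:8086"]
}
```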
@@ -86,7 +86,7 @@ To easily create databases, retention policies, and continuous queries, run :pro Configuring Grafana ------------------- -In Traffic Portal the :menuselection:`Other --> Grafana` menu item can be configured to display Grafana graphs using InfluxDB data. In order for this to work correctly, you will need two things: +In Traffic Portal the :menuselection:`Other --> Grafana` menu item can be configured to display Grafana graphs using InfluxDB data (when not configured, this menu item will not appear). In order for this to work correctly, you will need two things: #. A :term:`Parameter` with the graph URL (more information below) #. The graphs created in Grafana. See below for how to create some simple graphs in Grafana. These instructions assume that InfluxDB has been configured and that data has been written to it. If this is not true, you will not see any graphs. @@ -112,41 +112,52 @@ To create a graph in Grafana, you can follow these basic steps: In order for Traffic Portal users to see Grafana graphs, Grafana will need to allow anonymous access. Information on how to configure anonymous access can be found on the configuration page of the `Grafana Website `_. -Traffic Portal uses custom dashboards to display information about individual :term:`Delivery Service`\ s or :term:`Cache Group`\ s. In order for the custom graphs to display correctly, the `traffic_ops_*.js `_ files need to be in the ``/usr/share/grafana/public/dashboards/`` directory on the Grafana server. If your Grafana server is the same as your Traffic Stats server the RPM install process will take care of putting the files in place. If your Grafana server is different from your Traffic Stats server, you will need to manually copy the files to the correct directory. +Traffic Portal uses custom dashboards to display information about individual :term:`Delivery Services` or :term:`Cache Groups`. 
In order for the custom graphs to display correctly, the JavaScript files in :atc-file:`traffic_stats/grafana/` need to be in the :file:`/usr/share/grafana/public/dashboards/` directory on the Grafana server. If your Grafana server is the same as your Traffic Stats server, the RPM install process will take care of putting the files in place. If your Grafana server is different from your Traffic Stats server, you will need to manually copy the files to the correct directory. -More information on custom scripted graphs can be found in the `scripted dashboards `_ section of the Grafana documentation. +.. seealso:: More information on custom scripted graphs can be found in the `scripted dashboards `_ section of the Grafana documentation. Configuring Traffic Portal for Traffic Stats -------------------------------------------- -- The InfluxDB servers need to be added to Traffic Portal with profile = InfluxDB. Make sure to use port 8086 in the configuration. -- The traffic stats server should be added to Traffic Ops with profile = Traffic Stats. -- :term:`Parameter`\ s for which stats will be collected are added with the release, but any changes can be made via parameters that are assigned to the Traffic Stats profile. +- The InfluxDB servers need to be added to Traffic Portal with a :term:`Profile` that has the :ref:`profile-type` InfluxDB. Make sure to use port 8086 in the configuration. +- The Traffic Stats server should be added to Traffic Ops with a :term:`Profile` that has the :ref:`profile-type` TRAFFIC_STATS. +- :term:`Parameters` for which stats will be collected are added with the release, but any changes can be made via :term:`Parameters` that are assigned to the Traffic Stats :term:`Profile`. Configuring Traffic Portal to use Grafana Dashboards ---------------------------------------------------- -To configure Traffic Portal to use Grafana Dashboards, you need to enter the following :term:`Parameter`\ s and assign them to the GLOBAL profile. 
This assumes you followed the above instructions to install and configure InfluxDB and Grafana. You will need to place 'cdn-stats','deliveryservice-stats', and 'daily-summary' with the name of your dashboards. +To configure Traffic Portal to use Grafana Dashboards, you need to enter the following :term:`Parameters` and assign them to the special GLOBAL :term:`Profile`. This assumes you followed instructions in the Installation_, `Configuring Traffic Stats`_, `Configuring InfluxDB`_, and `Configuring Grafana`_ sections. .. table:: Traffic Stats Parameters - +---------------------------+----------------------------------------------------------------------------------------------------+ - | parameter name | parameter value | - +===========================+====================================================================================================+ - | all_graph_url | ``https:///dashboard/db/deliveryservice-stats`` | - +---------------------------+----------------------------------------------------------------------------------------------------+ - | cachegroup_graph_url | ``https:///dashboard/script/traffic_ops_cachegroup.js?which=`` | - +---------------------------+----------------------------------------------------------------------------------------------------+ - | deliveryservice_graph_url | ``https:///dashboard/script/traffic_ops_deliveryservice.js?which=`` | - +---------------------------+----------------------------------------------------------------------------------------------------+ - | server_graph_url | ``https:///dashboard/script/traffic_ops_server.js?which=`` | - +---------------------------+----------------------------------------------------------------------------------------------------+ - | visual_status_panel_1 | ``https:///dashboard-solo/db/cdn-stats?panelId=2&fullscreen&from=now-24h&to=now-60s`` | - +---------------------------+----------------------------------------------------------------------------------------------------+ - | 
visual_status_panel_2 | ``https:///dashboard-solo/db/cdn-stats?panelId=1&fullscreen&from=now-24h&to=now-60s`` | - +---------------------------+----------------------------------------------------------------------------------------------------+ - | daily_bw_url | ``https:///dashboard-solo/db/daily-summary?panelId=1&fullscreen&from=now-3y&to=now`` | - +---------------------------+----------------------------------------------------------------------------------------------------+ - | daily_served_url | ``https:///dashboard-solo/db/daily-summary?panelId=2&fullscreen&from=now-3y&to=now`` | - +---------------------------+----------------------------------------------------------------------------------------------------+ + +---------------------------+--------------------------------------------------------------------------------------------------------------------+ + | parameter name | parameter value | + +===========================+====================================================================================================================+ + | all_graph_url | :file:`https://{grafanaHost}/dashboard/db/{deliveryservice-stats-dashboard}` | + +---------------------------+--------------------------------------------------------------------------------------------------------------------+ + | cachegroup_graph_url | :file:`https://{grafanaHost}/dashboard/script/traffic_ops_cachegroup.js?which=` | + +---------------------------+--------------------------------------------------------------------------------------------------------------------+ + | deliveryservice_graph_url | :file:`https://{grafanaHost}/dashboard/script/traffic_ops_deliveryservice.js?which=` | + +---------------------------+--------------------------------------------------------------------------------------------------------------------+ + | server_graph_url | :file:`https://{grafanaHost}/dashboard/script/traffic_ops_server.js?which=` | + 
+---------------------------+--------------------------------------------------------------------------------------------------------------------+ + | visual_status_panel_1 | :file:`https://{grafanaHost}/dashboard-solo/db/{cdn-stats-dashboard}?panelId=2&fullscreen&from=now-24h&to=now-60s` | + +---------------------------+--------------------------------------------------------------------------------------------------------------------+ + | visual_status_panel_2 | :file:`https://{grafanaHost}/dashboard-solo/db/{cdn-stats-dashboard}?panelId=1&fullscreen&from=now-24h&to=now-60s` | + +---------------------------+--------------------------------------------------------------------------------------------------------------------+ + | daily_bw_url | :file:`https://{grafanaHost}/dashboard-solo/db/{daily-summary-dashboard}?panelId=1&fullscreen&from=now-3y&to=now` | + +---------------------------+--------------------------------------------------------------------------------------------------------------------+ + | daily_served_url | :file:`https://{grafanaHost}/dashboard-solo/db/{daily-summary-dashboard}?panelId=2&fullscreen&from=now-3y&to=now` | + +---------------------------+--------------------------------------------------------------------------------------------------------------------+ + +where + +grafanaHost + is the :abbr:`FQDN (Fully Qualified Domain Name)` of the Grafana server (again, usually the same as the Traffic Stats server), +cdn-stats-dashboard + is the name of the Dashboard providing CDN-level statistics, +deliveryservice-stats-dashboard + is the name of the Dashboard providing :term:`Delivery Service`-level statistics, and +daily-summary-dashboard + is the name of the Dashboard providing a daily summary of general statistics that would be of interest to administrators using Traffic Portal InfluxDB Tools ============== diff --git a/docs/source/api/cachegroup_parameterID_parameter.rst b/docs/source/api/cachegroup_parameterID_parameter.rst index 
eabeee6d7e..4db7783f13 100644 --- a/docs/source/api/cachegroup_parameterID_parameter.rst +++ b/docs/source/api/cachegroup_parameterID_parameter.rst @@ -25,7 +25,7 @@ ``GET`` ======= -Extract identifying information about all cachegroups with a specific parameter +Extract identifying information about all :term:`Cache Groups` with a specific :term:`Parameter` :Auth. Required: Yes :Roles Required: None @@ -43,10 +43,10 @@ Request Structure Response Structure ------------------ -:cachegroups: An array of all Cache Groups with an associated parameter identifiable by the ``parameter_id`` request path parameter +:cachegroups: An array of all :term:`Cache Groups` with an associated :term:`Parameter` identifiable by the ``parameter_id`` request path parameter - :id: The numeric ID of the Cache Group - :name: The human-readable name of the Cache Group + :id: The integral, unique identifier of the :term:`Cache Group` + :name: The human-readable name of the :term:`Cache Group` .. code-block:: json :caption: Response Example diff --git a/docs/source/api/cachegroupparameters.rst b/docs/source/api/cachegroupparameters.rst index a847175dcd..37dc58d09f 100644 --- a/docs/source/api/cachegroupparameters.rst +++ b/docs/source/api/cachegroupparameters.rst @@ -21,7 +21,7 @@ ``GET`` ======= -Extract information about parameters associated with :term:`Cache Group`\ s +Extract information about :term:`Parameters` associated with :term:`Cache Groups` :Auth. 
Required: Yes :Roles Required: None @@ -33,10 +33,10 @@ No available parameters Response Structure ------------------ -:cachegroupParameters: An array of identifying information for parameters assigned to :term:`Cache Group` profiles +:cachegroupParameters: An array of identifying information for :term:`Parameters` assigned to :term:`Cache Group` :term:`Profiles` - :parameter: Numeric ID of the parameter - :last_updated: Date and time of last modification in ISO format + :parameter: The :term:`Parameter`'s :ref:`parameter-id` + :last_updated: Date and time of last modification in an ISO-like format :cachegroup: Name of the :term:`Cache Group` .. code-block:: http @@ -68,7 +68,7 @@ Response Structure ``POST`` ======== -Assign parameter(s) to :term:`Cache Group`\ (s). +Assign :term:`Parameter`\ (s) to :term:`Cache Group`\ (s). :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -78,8 +78,8 @@ Request Structure ----------------- The request data can take the form of either a single object or an array of one or more objects. -:cacheGroupId: Integral, unique identifier for the :term:`Cache Group` to which a parameter is being assigned -:parameterId: Integral, unique identifier for the Parameter being assigned +:cacheGroupId: Integral, unique identifier for the :term:`Cache Group` to which a :term:`Parameter` is being assigned +:parameterId: Integral, unique identifier for the :term:`Parameter` being assigned .. code-block:: http :caption: Request Example @@ -99,8 +99,8 @@ The request data can take the form of either a single object or an array of one Response Structure ------------------ -:parameter: Numeric ID of the parameter -:last_updated: Date and time of last modification in ISO format +:parameter: Integral, unique identifier of the :term:`Parameter` +:last_updated: Date and time of last modification in an ISO-like format :cachegroup: Name of the :term:`Cache Group` .. 
code-block:: http diff --git a/docs/source/api/cachegroupparameters_id_parameterID.rst b/docs/source/api/cachegroupparameters_id_parameterID.rst index 257ec874ca..918d735445 100644 --- a/docs/source/api/cachegroupparameters_id_parameterID.rst +++ b/docs/source/api/cachegroupparameters_id_parameterID.rst @@ -21,7 +21,7 @@ ``DELETE`` ========== -De-associate a parameter with a :term:`Cache Group` +De-associate a :term:`Parameter` with a :term:`Cache Group` :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -31,13 +31,13 @@ Request Structure ----------------- .. table:: Request Path Parameters - +-------------+-------------------------------------------------------------------------------------------------+ - | Name | Description | - +=============+=================================================================================================+ - | ID | Unique identifier for the :term:`Cache Group` which will have the parameter association deleted | - +-------------+-------------------------------------------------------------------------------------------------+ - | parameterID | Unique identifier for the parameter which will be removed from a :term:`Cache Group` | - +-------------+-------------------------------------------------------------------------------------------------+ + +-------------+---------------------------------------------------------------------------------------------------------+ + | Name | Description | + +=============+=========================================================================================================+ + | ID | Unique identifier for the :term:`Cache Group` which will have the :term:`Parameter` association deleted | + +-------------+---------------------------------------------------------------------------------------------------------+ + | parameterID | Unique identifier for the :term:`Parameter` which will be removed from a :term:`Cache Group` | + 
+-------------+---------------------------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example diff --git a/docs/source/api/cachegroups_id_parameters.rst b/docs/source/api/cachegroups_id_parameters.rst index 757d02cafc..2999af0f95 100644 --- a/docs/source/api/cachegroups_id_parameters.rst +++ b/docs/source/api/cachegroups_id_parameters.rst @@ -18,9 +18,7 @@ ********************************* ``cachegroups/{{ID}}/parameters`` ********************************* -Gets all the parameters associated with a :term:`Cache Group` - -.. seealso:: :ref:`param-prof` +Gets all the :term:`Parameters` associated with a :term:`Cache Group` ``GET`` ======= @@ -41,12 +39,12 @@ Request Structure Response Structure ------------------ -:configFile: Configuration file associated with the parameter -:id: A numeric, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last updated, in an ISO-like format -:name: Name of the parameter -:secure: If ``true``, the parameter value is only visible to "admin"-role users -:value: Value of the parameter +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value describing whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. 
code-block:: http :caption: Response Example diff --git a/docs/source/api/cachegroups_id_queue_update.rst b/docs/source/api/cachegroups_id_queue_update.rst index 89f95fedc8..51bbf45b50 100644 --- a/docs/source/api/cachegroups_id_queue_update.rst +++ b/docs/source/api/cachegroups_id_queue_update.rst @@ -21,7 +21,7 @@ ``POST`` ======== -Queue or dequeue updates for all servers assigned to a :term:`Cache Group` limited to a specific CDN. +:term:`Queue` or "dequeue" updates for all servers assigned to a :term:`Cache Group` limited to a specific CDN. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -38,8 +38,8 @@ Request Structure +------+---------------------------------------------------------------------------------------------------------+ :action: The action to perform; one of "queue" or "dequeue" -:cdn: The full name of the CDN in need of update queue/dequeue\ [1]_ -:cdnId: The integral, unique identifier for the CDN in need of update queue/dequeue\ [1]_ +:cdn: The full name of the CDN in need of :term:`Queue Updates`, or a "dequeue" thereof\ [#required]_ +:cdnId: The integral, unique identifier for the CDN in need of :term:`Queue Updates`, or a "dequeue" thereof\ [#required]_ .. code-block:: http :caption: Request Example @@ -54,12 +54,11 @@ Request Structure {"action": "queue", "cdn": "CDN-in-a-Box"} -.. [1] Either 'cdn' or 'cdnID' *must* be in the request data (but not both). 
Response Structure ------------------ :action: The action processed, one of "queue" or "dequeue" -:cachegroupId: The integral, unique identifier of the :term:`Cache Group` for which updates were queued/dequeued +:cachegroupId: The integral, unique identifier of the :term:`Cache Group` for which :term:`Queue Updates` was performed or cleared :cachegroupName: The name of the :term:`Cache Group` for which updates were queued/dequeued :cdn: The name of the CDN to which the queue/dequeue operation was restricted :serverNames: An array of the (short) hostnames of the servers within the :term:`Cache Group` which are also assigned to the CDN specified in the ``"cdn"`` field @@ -88,3 +87,5 @@ Response Structure "cdn": "CDN-in-a-Box", "cachegroupID": 8 }} + +.. [#required] Either 'cdn' or 'cdnID' *must* be in the request data (but not both). diff --git a/docs/source/api/cachegroups_id_unassigned_parameters.rst b/docs/source/api/cachegroups_id_unassigned_parameters.rst index bcf4b6f135..75fc8a1548 100644 --- a/docs/source/api/cachegroups_id_unassigned_parameters.rst +++ b/docs/source/api/cachegroups_id_unassigned_parameters.rst @@ -18,12 +18,11 @@ ******************************************** ``cachegroups/{{id}}/unassigned_parameters`` ******************************************** -Gets all the parameters NOT associated with a specific :term:`Cache Group` - -.. seealso:: :ref:`param-prof` ``GET`` ======= +Gets all the :term:`Parameters` *not* associated with a specific :term:`Cache Group` + :Auth. Required: Yes :Roles Required: None :Response Type: Array @@ -32,21 +31,21 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +------------------+----------+------------------------+ - | Name | Required | Description | - +==================+==========+========================+ - | ``id`` | yes | :term:`Cache Group` ID | - +------------------+----------+------------------------+ + +------------------+----------+---------------------------------------------------------+ + | Name | Required | Description | + +==================+==========+=========================================================+ + | ``id`` | yes | An integral, unique identifier of a :term:`Cache Group` | + +------------------+----------+---------------------------------------------------------+ Response Structure ------------------ -:configFile: Configuration file associated with the parameter -:id: A numeric, unique identifier for this parameter -:lastUpdated: The Time / Date this entry was last updated -:name: Name of the parameter -:secure: Is the parameter value only visible to admin users -:value: Value of the parameter +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. 
code-block:: json :caption: Response Example diff --git a/docs/source/api/cachegroups_parameterID_parameter_available.rst b/docs/source/api/cachegroups_parameterID_parameter_available.rst index 242a22c298..f2646131c7 100644 --- a/docs/source/api/cachegroups_parameterID_parameter_available.rst +++ b/docs/source/api/cachegroups_parameterID_parameter_available.rst @@ -25,7 +25,7 @@ ``GET`` ======= -Gets a list of :term:`Cache Group`\ s which are available to have a specific parameter assigned to them +Gets a list of :term:`Cache Groups` which are available to have a specific :term:`Parameter` assigned to them :Auth. Required: Yes :Roles Required: None @@ -38,7 +38,7 @@ Request Structure +------------------+----------+--------------------------------------------------------------+ | Name | Required | Description | +==================+==========+==============================================================+ - | ``parameter ID`` | yes | The integral, unique identifier of the parameter of interest | + | ``parameter ID`` | yes | The :ref:`parameter-id` of the :term:`Parameter` of interest | +------------------+----------+--------------------------------------------------------------+ Response Structure diff --git a/docs/source/api/caches_stats.rst b/docs/source/api/caches_stats.rst index 1369875b00..ec84a0ce2a 100644 --- a/docs/source/api/caches_stats.rst +++ b/docs/source/api/caches_stats.rst @@ -46,7 +46,7 @@ Response Structure :hostname: The (short) hostname of the cache :ip: The IP address of the cache :kbps: Cache upload speed (to clients) in Kilobits per second -:profile: The name of the profile in use by this cache +:profile: The :ref:`profile-name` of the :term:`Profile` in use by this :term:`cache server` :status: The status of the cache .. 
code-block:: http diff --git a/docs/source/api/cdns_domains.rst b/docs/source/api/cdns_domains.rst index ec56610bed..7af0dac31a 100644 --- a/docs/source/api/cdns_domains.rst +++ b/docs/source/api/cdns_domains.rst @@ -21,7 +21,7 @@ ``GET`` ======= -Gets a list of domains and their related Traffic Router profiles for all CDNs. +Gets a list of domains and their related Traffic Router :term:`Profiles` for all CDNs. :Auth. Required: Yes :Roles Required: None @@ -33,11 +33,11 @@ No parameters available. Response Structure ------------------ -:domainName: The top-level domain (TLD) assigned to this CDN -:parameterId: The integral, unique identifier for the parameter that sets this TLD on the Traffic Router +:domainName: The :abbr:`TLD (Top-Level Domain)` assigned to this CDN +:parameterId: The :ref:`parameter-id` for the :term:`Parameter` that sets this :abbr:`TLD (Top-Level Domain)` on the Traffic Router :profileDescription: A short, human-readable description of the Traffic Router's profile -:profileId: The integral, unique identifier for the profile assigned to the Traffic Router responsible for serving ``domainName`` -:profileName: The name of the profile assigned to the Traffic Router responsible for serving ``domainName`` +:profileId: The :ref:`profile-id` of the :term:`Profile` assigned to the Traffic Router responsible for serving ``domainName`` +:profileName: The :ref:`profile-name` of the :term:`Profile` assigned to the Traffic Router responsible for serving ``domainName`` .. code-block:: json :caption: Response Example diff --git a/docs/source/api/cdns_id_queue_update.rst b/docs/source/api/cdns_id_queue_update.rst index a83c79c515..79ea2ad55e 100644 --- a/docs/source/api/cdns_id_queue_update.rst +++ b/docs/source/api/cdns_id_queue_update.rst @@ -21,7 +21,7 @@ ``POST`` ======== -Queue or dequeue updates for all servers assigned to a specific CDN. +:term:`Queue` or "dequeue" updates for all servers assigned to a specific CDN. :Auth. 
Required: Yes :Roles Required: "admin" or "operations" @@ -55,7 +55,7 @@ Request Structure Response Structure ------------------ :action: The action processed, either ``"queue"`` or ``"dequeue"`` -:cdnId: The integral, unique identifier for the CDN on which updates were (de)queued +:cdnId: The integral, unique identifier for the CDN on which :term:`Queue Updates` was performed or cleared .. code-block:: http :caption: Response Example diff --git a/docs/source/api/cdns_id_snapshot.rst b/docs/source/api/cdns_id_snapshot.rst index b7076de2ce..d7df4ca35d 100644 --- a/docs/source/api/cdns_id_snapshot.rst +++ b/docs/source/api/cdns_id_snapshot.rst @@ -23,7 +23,7 @@ ``PUT`` ======= -Performs a CDN snapshot. Effectively, this propagates the new *configuration* of the CDN to its *operating state*, which replaces the output of the :ref:`to-api-cdns-name-snapshot` endpoint with the output of the :ref:`to-api-cdns-name-snapshot-new` endpoint. +Performs a CDN :term:`Snapshot`. Effectively, this propagates the new *configuration* of the CDN to its *operating state*, which replaces the output of the :ref:`to-api-cdns-name-snapshot` endpoint with the output of the :ref:`to-api-cdns-name-snapshot-new` endpoint. .. Note:: Snapshotting the CDN also deletes all HTTPS certificates for every :term:`Delivery Service` which has been deleted since the last CDN :term:`Snapshot`. @@ -35,11 +35,11 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +------+--------------------------------------------------------------------------------+ - | Name | Description | - +======+================================================================================+ - | ID | The integral, unique identifier of the CDN for which a snapshot shall be taken | - +------+--------------------------------------------------------------------------------+ + +------+----------------------------------------------------------------------------------------+ + | Name | Description | + +======+========================================================================================+ + | ID | The integral, unique identifier of the CDN for which a :term:`Snapshot` shall be taken | + +------+----------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example diff --git a/docs/source/api/cdns_name_configs_monitoring.rst b/docs/source/api/cdns_name_configs_monitoring.rst index b3e1665130..82babcc0aa 100644 --- a/docs/source/api/cdns_name_configs_monitoring.rst +++ b/docs/source/api/cdns_name_configs_monitoring.rst @@ -57,7 +57,7 @@ Response Structure :health.polling.interval: An interval in milliseconds on which to poll for health statistics :health.threadPool: The number of threads to be used for health polling :health.timepad: A 'padding time' to add to requests to spread them out for Traffic Control systems that use a large number of Traffic Monitors - :tm.crConfig.polling.url: The URL from which a CRConfig can be obtained + :tm.crConfig.polling.url: The URL from which a :term:`Snapshot` can be obtained :tm.dataServer.polling.url: The URL from which a list of data servers can be obtained :tm.healthParams.polling.url: The URL from which a list of health-polling parameters can be obtained :tm.polling.interval: The interval at which to poll for configuration updates @@ -69,10 +69,10 @@ Response Structure :totalTpsThreshold: A threshold amount of 
transactions per second that this :term:`Delivery Service` is configured to handle :xmlId: An integral, unique identifier for this Delivery Service (named "xmlId" for legacy reasons) -:profiles: An array of the profiles in use by the :term:`cache server` s and :term:`Delivery Service`\ s belonging to this CDN +:profiles: An array of the :term:`Profiles` in use by the :term:`cache servers` and :term:`Delivery Services` belonging to this CDN - :name: The profile's name - :parameters: An array of the parameters in this profile that relate to monitoring configuration. This can be ``null`` if the servers using this profile cannot be monitored (e.g. Traffic Routers) + :name: The :term:`Profile`'s :ref:`profile-name` + :parameters: An array of the :term:`Parameters` in this :term:`Profile` that relate to monitoring configuration. This can be ``null`` if the servers using this :term:`Profile` cannot be monitored (e.g. Traffic Routers) :health.connection.timeout: A timeout value, in milliseconds, to wait before giving up on a health check request :health.polling.url: A URL to request for polling health. 
Substitutions can be made in a shell-like syntax using the properties of an object from the ``"trafficServers"`` array @@ -81,7 +81,7 @@ Response Structure :health.threshold.queryTime: The highest allowed length of time for completing health queries (after connection has been established) in milliseconds :history.count: The number of past events to store; once this number is reached, the oldest event will be forgotten before a new one can be added - :type: The type of the profile + :type: The :ref:`profile-type` of the :term:`Profile` :trafficMonitors: An array of objects representing each Traffic Monitor that monitors this CDN (this is used by Traffic Monitor's "peer polling" function) @@ -90,10 +90,10 @@ Response Structure :ip6: The IPv6 address of this Traffic Monitor - when applicable :ip: The IP address of this Traffic Monitor :port: The port on which this Traffic Monitor listens for incoming connections - :profile: The name of the profile assigned to this Traffic Monitor + :profile: The :ref:`profile-name` of the :term:`Profile` assigned to this Traffic Monitor :status: The status of the server running this Traffic Monitor instance -:trafficServers: An array of objects that represent the caches being monitored within this CDN +:trafficServers: An array of objects that represent the :term:`cache servers` being monitored within this CDN :cacheGroup: The Cache Group to which this cache belongs :fqdn: A Fully Qualified Domain Name (FQDN) that resolves to the :term:`cache server`'s IP (or IPv6) address @@ -103,7 +103,7 @@ Response Structure :ip6: The cache's IPv6 address - when applicable :ip: The cache's IP address :port: The port on which the cache listens for incoming connections - :profile: The name of the profile assigned to this cache + :profile: The :ref:`profile-name` of the :term:`Profile` assigned to this :term:`cache server` :status: The status of the Cache :type: The type of the cache - should be either ``EDGE`` or ``MID`` diff --git 
a/docs/source/api/cdns_name_snapshot.rst b/docs/source/api/cdns_name_snapshot.rst index 846c6bd29f..e80f9016b5 100644 --- a/docs/source/api/cdns_name_snapshot.rst +++ b/docs/source/api/cdns_name_snapshot.rst @@ -22,7 +22,7 @@ ``GET`` ======= -Retrieves the *current* snapshot for a CDN, which represents the current *operating state* of the CDN, **not** the current *configuration* of the CDN. The contents of this snapshot are currently used by Traffic Monitor and Traffic Router. +Retrieves the *current* :term:`Snapshot` for a CDN, which represents the current *operating state* of the CDN, **not** the current *configuration* of the CDN. The contents of this :term:`Snapshot` are currently used by Traffic Monitor and Traffic Router. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -32,11 +32,11 @@ Request Structure ----------------- .. table:: Request Path Parameters - +------+------------------------------------------------------------+ - | Name | Description | - +======+============================================================+ - | name | The name of the CDN for which a snapshot shall be returned | - +------+------------------------------------------------------------+ + +------+--------------------------------------------------------------------+ + | Name | Description | + +======+====================================================================+ + | name | The name of the CDN for which a :term:`Snapshot` shall be returned | + +------+--------------------------------------------------------------------+ .. 
code-block:: http :caption: Request Example @@ -115,7 +115,7 @@ Response Structure :ip6: This Traffic Router's IPv6 address :location: The name of the Cache Group to which this Traffic Router belongs :port: The port number on which this Traffic Router listens for incoming HTTP requests - :profile: The name of the profile used by this Traffic Router + :profile: The :ref:`profile-name` of the :term:`Profile` used by this Traffic Router :status: The health status of this Traffic Router .. seealso:: :ref:`health-proto` @@ -136,7 +136,7 @@ Response Structure :ip: The server's IPv4 address :locationId: This field is exactly the same as ``cacheGroup`` and only exists for legacy compatibility reasons :port: The port on which this :term:`cache server` listens for incoming HTTP requests - :profile: The name of the profile used by the :term:`cache server` + :profile: The :ref:`profile-name` of the :term:`Profile` used by the :term:`cache server` :routingDisabled: An integer representing the boolean concept of whether or not Traffic Routers should route client traffic to this :term:`cache server`; one of: 0 @@ -301,9 +301,9 @@ Response Structure :httpsPort: The port number on which this Traffic Monitor listens for incoming HTTPS requests :ip6: This Traffic Monitor's IPv6 address :ip: This Traffic Monitor's IPv4 address - :location: The name of the Cache Group to which this Traffic Monitor belongs + :location: The name of the :term:`Cache Group` to which this Traffic Monitor belongs :port: The port number on which this Traffic Monitor listens for incoming HTTP requests - :profile: The name of the profile used by this Traffic Monitor + :profile: The :ref:`profile-name` of the :term:`Profile` used by this Traffic Monitor .. note:: For legacy reasons, this must always start with "RASCAL-". 
@@ -316,7 +316,7 @@ Response Structure :CDN_name: The name of this CDN :date: The UNIX epoch timestamp date in the Traffic Ops server's own timezone :tm_host: The FQDN of the Traffic Ops server - :tm_path: A path relative to the root of the Traffic Ops server where a request may be replaced to have this snapshot overwritten by the current *configured state* of the CDN + :tm_path: A path relative to the root of the Traffic Ops server where a request may be placed to have this :term:`Snapshot` overwritten by the current *configured state* of the CDN .. deprecated:: 1.1 This field is still present for legacy compatibility reasons, but its contents should be ignored. Instead, make a ``PUT`` request to :ref:`to-api-snapshot-name`. diff --git a/docs/source/api/cdns_name_snapshot_new.rst b/docs/source/api/cdns_name_snapshot_new.rst index 352e771f30..6c807894d1 100644 --- a/docs/source/api/cdns_name_snapshot_new.rst +++ b/docs/source/api/cdns_name_snapshot_new.rst @@ -21,7 +21,7 @@ ``GET`` ======= -Retrieves the *pending* snapshot for a CDN, which represents the current *configuration* of the CDN, **not** the current *operating state* of the CDN. The contents of this snapshot are currently used by Traffic Monitor and Traffic Router. +Retrieves the *pending* :term:`Snapshot` for a CDN, which represents the current *configuration* of the CDN, **not** the current *operating state* of the CDN. The contents of this :term:`Snapshot` are currently used by Traffic Monitor and Traffic Router. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -31,11 +31,11 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +------+------------------------------------------------------------+ - | Name | Description | - +======+============================================================+ - | name | The name of the CDN for which a snapshot shall be returned | - +------+------------------------------------------------------------+ + +------+--------------------------------------------------------------------+ + | Name | Description | + +======+====================================================================+ + | name | The name of the CDN for which a :term:`Snapshot` shall be returned | + +------+--------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -112,17 +112,17 @@ Response Structure :httpsPort: The port number on which this Traffic Router listens for incoming HTTPS requests :ip: This Traffic Router's IPv4 address :ip6: This Traffic Router's IPv6 address - :location: The name of the Cache Group to which this Traffic Router belongs + :location: The name of the :term:`Cache Group` to which this Traffic Router belongs :port: The port number on which this Traffic Router listens for incoming HTTP requests - :profile: The name of the profile used by this Traffic Router + :profile: The :ref:`profile-name` of the :term:`Profile` used by this Traffic Router :status: The health status of this Traffic Router .. 
seealso:: :ref:`health-proto` -:contentServers: An object containing keys which are the (short) hostnames of the Edge-Tier :term:`cache server` s in the CDN; the values corresponding to those keys are routing information for said servers +:contentServers: An object containing keys which are the (short) hostnames of the :term:`Edge-Tier cache servers` in the CDN; the values corresponding to those keys are routing information for said servers - :cacheGroup: The name of the Cache Group to which the server belongs - :deliveryServices: An object containing keys which are the names of :term:`Delivery Service`\ s to which this :term:`cache server` is assigned; the values corresponding to those keys are arrays of FQDNs that resolve to this :term:`cache server` + :cacheGroup: The name of the :term:`Cache Group` to which the server belongs + :deliveryServices: An object containing keys which are the names of :term:`Delivery Services` to which this :term:`cache server` is assigned; the values corresponding to those keys are arrays of FQDNs that resolve to this :term:`cache server` .. note:: Only Edge-tier :term:`cache server` s can be assigned to a Delivery Service, and therefore this field will only be present when ``type`` is ``"EDGE"``. 
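As a rough illustration of the ``contentServers`` shape described above (the hostnames, Cache Group names, and Delivery Service names here are invented, and only the fields under discussion are shown), a client might walk the object like this:

```python
# Hypothetical fragment of a CDN Snapshot's "contentServers" object:
# keys are (short) server hostnames; EDGE servers carry a "deliveryServices"
# mapping of Delivery Service names to FQDNs resolving to that server.
snapshot = {
    "contentServers": {
        "edge": {
            "cacheGroup": "CDN_in_a_Box_Edge",
            "deliveryServices": {"demo1": ["edge.demo1.mycdn.ciab.test"]},
            "type": "EDGE",
        },
        "mid": {
            "cacheGroup": "CDN_in_a_Box_Mid",
            "type": "MID",  # no "deliveryServices" key: only EDGE servers have one
        },
    }
}

# Collect every FQDN that resolves to a server assigned to Delivery Service "demo1"
fqdns = [
    fqdn
    for server in snapshot["contentServers"].values()
    for fqdn in server.get("deliveryServices", {}).get("demo1", [])
]
```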
@@ -135,7 +135,7 @@ Response Structure :ip: The server's IPv4 address :locationId: This field is exactly the same as ``cacheGroup`` and only exists for legacy compatibility reasons :port: The port on which this :term:`cache server` listens for incoming HTTP requests - :profile: The name of the profile used by the :term:`cache server` + :profile: The :ref:`profile-name` of the :term:`Profile` used by the :term:`cache server` :routingDisabled: An integer representing the boolean concept of whether or not Traffic Routers should route client traffic to this :term:`cache server`; one of: 0 @@ -300,13 +300,10 @@ Response Structure :httpsPort: The port number on which this Traffic Monitor listens for incoming HTTPS requests :ip6: This Traffic Monitor's IPv6 address :ip: This Traffic Monitor's IPv4 address - :location: The name of the Cache Group to which this Traffic Monitor belongs + :location: The name of the :term:`Cache Group` to which this Traffic Monitor belongs :port: The port number on which this Traffic Monitor listens for incoming HTTP requests - :profile: The name of the profile used by this Traffic Monitor - - .. note:: For legacy reasons, this must always start with "RASCAL-". - - :status: The health status of this Traffic Monitor + :profile: The :ref:`profile-name` of the :term:`Profile` used by this Traffic Monitor + :status: The health status of this Traffic Monitor .. seealso:: :ref:`health-proto` @@ -315,7 +312,7 @@ Response Structure :CDN_name: The name of this CDN :date: The UNIX epoch timestamp date in the Traffic Ops server's own timezone :tm_host: The FQDN of the Traffic Ops server - :tm_path: A path relative to the root of the Traffic Ops server where a request may be replaced to have this snapshot overwritten by the current *configured state* of the CDN + :tm_path: A path relative to the root of the Traffic Ops server where a request may be placed to have this :term:`Snapshot` overwritten by the current *configured state* of the CDN .. 
deprecated:: 1.1 This field is still present for legacy compatibility reasons, but its contents should be ignored. Instead, make a ``PUT`` request to :ref:`to-api-snapshot-name`. diff --git a/docs/source/api/deliveryservices.rst b/docs/source/api/deliveryservices.rst index 4dfa8e29d2..bf1309ccad 100644 --- a/docs/source/api/deliveryservices.rst +++ b/docs/source/api/deliveryservices.rst @@ -21,183 +21,132 @@ ``GET`` ======= -Retrieves all :term:`Delivery Services` +Retrieves :term:`Delivery Services` :Auth. Required: Yes -:Roles Required: None\ [1]_ +:Roles Required: None\ [#tenancy]_ :Response Type: Array Request Structure ----------------- .. table:: Request Query Parameters - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | Name | Required | Description | - +=============+==========+================================================================================================================================================+ - | cdn | no | Show only the :term:`Delivery Service`\ s belonging to the CDN identified by this integral, unique identifier | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | id | no | Show only the :term:`Delivery Service` that has this integral, unique identifier | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | logsEnabled | no | If true, return only :term:`Delivery Service`\ s with logging enabled, otherwise return only :term:`Delivery Service`\ s with logging disabled | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | profile | 
no | Return only :term:`Delivery Service`\ s using the :term:`Profile` identified by this integral, unique identifier | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | tenant | no | Show only the :term:`Delivery Service`\ s belonging to the :term:`Tenant` identified by this integral, unique identifier | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | type | no | Return only :term:`Delivery Service`\ s of the :term:`Delivery Service` type identified by this integral, unique identifier | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | Name | Required | Description | + +=============+==========+============================================================================================================================================+ + | cdn | no | Show only the :term:`Delivery Services` belonging to the :ref:`ds-cdn` identified by this integral, unique identifier | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | id | no | Show only the :term:`Delivery Service` that has this integral, unique identifier | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | logsEnabled | no | Show only the :term:`Delivery Services` that have :ref:`ds-logs-enabled` set or not based on this 
boolean | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | profile | no | Return only :term:`Delivery Services` using the :term:`Profile` that has this :ref:`profile-id` | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | tenant | no | Show only the :term:`Delivery Services` belonging to the :term:`Tenant` identified by this integral, unique identifier | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | type | no | Return only :term:`Delivery Services` of the :term:`Delivery Service` :ref:`ds-types` identified by this integral, unique identifier | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ Response Structure ------------------ -:active: ``true`` if the :term:`Delivery Service` is active, ``false`` otherwise -:anonymousBlockingEnabled: ``true`` if :ref:`Anonymous Blocking ` has been configured for the :term:`Delivery Service`, ``false`` otherwise -:cacheurl: A setting for a deprecated feature of now-unsupported Trafficserver versions +:active: A boolean that defines :ref:`ds-active`. +:anonymousBlockingEnabled: A boolean that defines :ref:`ds-anonymous-blocking` +:cacheurl: A :ref:`ds-cacheurl` .. 
deprecated:: ATCv3.0 This field has been deprecated in Traffic Control 3.x and is subject to removal in Traffic Control 4.x or later -:ccrDnsTtl: The Time To Live (TTL) of the DNS response for A or AAAA record queries requesting the IP address of the Traffic Router - named "ccrDnsTtl" for legacy reasons -:cdnId: The integral, unique identifier of the CDN to which the :term:`Delivery Service` belongs -:cdnName: Name of the CDN to which the :term:`Delivery Service` belongs -:checkPath: The path portion of the URL to check connections to this :term:`Delivery Service`'s origin server -:consistentHashRegex: If defined, this is a regular expression used for the Pattern-Based Consistent Hashing feature.\ [#httpOnly]_ +:ccrDnsTtl: The :ref:`ds-dns-ttl` - named "ccrDnsTtl" for legacy reasons +:cdnId: The integral, unique identifier of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:cdnName: Name of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:checkPath: A :ref:`ds-check-path` +:consistentHashRegex: A :ref:`ds-consistent-hashing-regex` .. versionadded:: 1.4 -:consistentHashQueryParams: A set (actually array due to limitations of JSON) of query parameters which will be considered by Traffic Router when using a client request to consistently find an :term:`Edge-tier cache server` to which to redirect them.\ [#httpOnly]_ +:consistentHashQueryParams: An array of :ref:`ds-consistent-hashing-qparams` .. 
versionadded:: 1.4 -:displayName: The display name of the :term:`Delivery Service` -:dnsBypassCname: Domain name to overflow requests for HTTP :term:`Delivery Service`\ s - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassIp: The IPv4 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassIp6: The IPv6 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassTtl: The time for which a DNS bypass of this :term:`Delivery Service`\ shall remain active\ [4]_ -:dscp: The Differentiated Services Code Point (DSCP) with which to mark traffic as it leaves the CDN and reaches clients -:edgeHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:fqPacingRate: The Fair-Queuing Pacing Rate in Bytes per second set on the all TCP connection sockets in the :term:`Delivery Service` (see ``man tc-fc_codel`` for more information) - Linux only -:geoLimit: The setting that determines how content is geographically limited - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - None - no limitations - 1 - Only route when the client's IP is found in the Coverage Zone File (CZF) - 2 - Only route when the client's IP is found in the CZF, or when the client can be determined to be from the United States of America - - .. 
warning:: This does not prevent access to content or make content secure; it merely prevents routing to the content through Traffic Router - -:geoLimitCountries: A string containing a comma-separated list of country codes (e.g. "US,AU") which are allowed to request content through this :term:`Delivery Service` -:geoLimitRedirectUrl: A URL to which clients blocked by :ref:`Regional Geographic Blocking ` or the ``geoLimit`` settings will be re-directed -:geoProvider: An integer that represents the provider of a database for mapping IPs to geographic locations; currently only the following values are supported: - - 0 - The "Maxmind" GeoIP2 database (default) - 1 - Neustar - -:globalMaxMbps: The maximum global bandwidth allowed on this :term:`Delivery Service`. If exceeded, traffic will be routed to ``dnsBypassIp`` (or ``dnsBypassIp6`` for IPv6 traffic) for DNS :term:`Delivery Service`\ s and to ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:globalMaxTps: The maximum global transactions per second allowed on this :term:`Delivery Service`. When this is exceeded traffic will be sent to the ``dnsBypassIp`` (and/or ``dnsBypassIp6``) for DNS :term:`Delivery Service`\ s and to the httpBypassFqdn for HTTP :term:`Delivery Service`\ s -:httpBypassFqdn: The HTTP destination to use for bypass on an HTTP :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` +:deepCachingType: The :ref:`ds-deep-caching` setting for this :term:`Delivery Service` + + .. 
versionadded:: 1.3 + +:displayName: The :ref:`ds-display-name` +:dnsBypassCname: A :ref:`ds-dns-bypass-cname` +:dnsBypassIp: A :ref:`ds-dns-bypass-ip` +:dnsBypassIp6: A :ref:`ds-dns-bypass-ipv6` +:dnsBypassTtl: The :ref:`ds-dns-bypass-ttl` +:dscp: A :ref:`ds-dscp` to be used within the :term:`Delivery Service` +:edgeHeaderRewrite: A set of :ref:`ds-edge-header-rw-rules` +:exampleURLs: An array of :ref:`ds-example-urls` +:fqPacingRate: The :ref:`ds-fqpr` + + .. versionadded:: 1.3 + +:geoLimit: An integer that defines the :ref:`ds-geo-limit` +:geoLimitCountries: A string containing a comma-separated list defining the :ref:`ds-geo-limit-countries` +:geoLimitRedirectUrl: A :ref:`ds-geo-limit-redirect-url` +:geoProvider: The :ref:`ds-geo-provider` +:globalMaxMbps: The :ref:`ds-global-max-mbps` +:globalMaxTps: The :ref:`ds-global-max-tps` +:httpBypassFqdn: A :ref:`ds-http-bypass-fqdn` :id: An integral, unique identifier for this :term:`Delivery Service` -:infoUrl: This is a string which is expected to contain at least one URL pointing to more information about the :term:`Delivery Service`. Historically, this has been used to link relevant JIRA tickets -:initialDispersion: The number of caches between which traffic requesting the same object will be randomly split - meaning that if 4 clients all request the same object (one after another), then if this is above 4 there is a possibility that all 4 are cache misses. 
For most use-cases, this should be 1\ [#httpOnly]_ -:ipv6RoutingEnabled: If ``true``, clients that connect to Traffic Router using IPv6 will be given the IPv6 address of a suitable Edge-tier cache; if ``false`` all addresses will be IPv4, regardless of the client connection -:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in a ``ctime``-like format -:logsEnabled: If ``true``, logging is enabled for this :term:`Delivery Service`, otherwise it is disabled -:longDesc: A description of the :term:`Delivery Service` -:longDesc1: A field used when more detailed information that that provided by ``longDesc`` is desired -:longDesc2: A field used when even more detailed information that that provided by either ``longDesc`` or ``longDesc1`` is desired -:matchList: An array of methods used by Traffic Router to determine whether or not a request can be serviced by this :term:`Delivery Service` +:infoUrl: An :ref:`ds-info-url` +:initialDispersion: The :ref:`ds-initial-dispersion` +:ipv6RoutingEnabled: A boolean that defines the :ref:`ds-ipv6-routing` setting on this :term:`Delivery Service` +:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in :rfc:`3339` format +:logsEnabled: A boolean that defines the :ref:`ds-logs-enabled` setting on this :term:`Delivery Service` +:longDesc: The :ref:`ds-longdesc` of this :term:`Delivery Service` +:longDesc1: The :ref:`ds-longdesc2` of this :term:`Delivery Service` +:longDesc2: The :ref:`ds-longdesc3` of this :term:`Delivery Service` +:matchList: The :term:`Delivery Service`'s :ref:`ds-matchlist` :pattern: A regular expression - the use of this pattern is dependent on the ``type`` field (backslashes are escaped) - :setNumber: An integral, unique identifier for the set of types to which the ``type`` field belongs - :type: The type of match performed using ``pattern`` to determine whether or not to use this :term:`Delivery Service` + :setNumber: An integer that 
provides explicit ordering of :ref:`ds-matchlist` items - this is used as a priority ranking by Traffic Router, and is not guaranteed to correspond to the ordering of items in the array. + :type: The type of match performed using ``pattern``. - HOST_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``Host:`` HTTP header of an HTTP request, or the name requested for resolution in a DNS request - HEADER_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches an HTTP header (both the name and value) in an HTTP request\ [#httpOnly]_ - PATH_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the request path of this :term:`Delivery Service`'s URL\ [#httpOnly]_ - STEERING_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``xml_id`` of one of this :term:`Delivery Service`'s "Steering" target :term:`Delivery Service` - -:maxDnsAnswers: The maximum number of IPs to put in responses to A/AAAA DNS record requests (0 means all available)\ [4]_ -:maxOriginConnections: The maximum number of connections allowed to the origin (0 means no maximum). +:maxDnsAnswers: The :ref:`ds-max-dns-answers` allowed for this :term:`Delivery Service` +:maxOriginConnections: The :ref:`ds-max-origin-connections` .. versionadded:: 1.4 -:midHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:missLat: The latitude to use when the client cannot be found in the CZF or a geographic IP lookup -:missLong: The longitude to use when the client cannot be found in the CZF or a geographic IP lookup -:multiSiteOrigin: ``true`` if the Multi Site Origin feature is enabled for this :term:`Delivery Service`, ``false`` otherwise\ [3]_ -:orgServerFqdn: The URL of the :term:`Delivery Service`'s origin server for use in retrieving content from the origin server - - .. note:: Despite the field name, this must truly be a full URL - including the protocol (e.g. 
``http://`` or ``https://``) - **NOT** merely the server's Fully Qualified Domain Name (FQDN) - -:originShield: An "origin shield" is a forward proxy that sits between Mid-tier caches and the origin and performs further caching beyond what's offered by a standard CDN. This field is a string of FQDNs to use as origin shields, delimited by ``|`` -:profileDescription: The description of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:profileId: The integral, unique identifier for the Traffic Router profile with which this :term:`Delivery Service` is associated -:profileName: The name of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:protocol: The protocol which clients will use to communicate with Edge-tier :term:`cache server` s\ [#httpOnly]_ - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - HTTP - 1 - HTTPS - 2 - Both HTTP and HTTPS - -:qstringIgnore: Tells caches whether or not to consider URLs with different query parameter strings to be distinct - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - URLs with different query parameter strings will be considered distinct for caching purposes, and query strings will be passed upstream to the origin - 1 - URLs with different query parameter strings will be considered identical for caching purposes, and query strings will be passed upstream to the origin - 2 - Query strings are stripped out by Edge-tier caches, and thus are neither taken into consideration for caching purposes, nor passed upstream in requests to the origin - -:rangeRequestHandling: Tells caches how to handle range requests\ [7]_ - this is an integer on the interval [0,2] where the values have these meanings: - - 0 - Range requests will not be cached, but range requests that request ranges of content already cached will be served from the cache - 1 - Use the `background_fetch plugin `_ to service the range 
request while caching the whole object - 2 - Use the `experimental cache_range_requests plugin `_ to treat unique ranges as unique objects - -:regexRemap: A regular expression remap rule to apply to this :term:`Delivery Service` at the Edge tier +:midHeaderRewrite: A set of :ref:`ds-mid-header-rw-rules` +:missLat: The :ref:`ds-geo-miss-default-latitude` used by this :term:`Delivery Service` +:missLong: The :ref:`ds-geo-miss-default-longitude` used by this :term:`Delivery Service` +:multiSiteOrigin: A boolean that defines the use of :ref:`ds-multi-site-origin` by this :term:`Delivery Service` +:orgServerFqdn: The :ref:`ds-origin-url` +:originShield: A :ref:`ds-origin-shield` string +:profileDescription: The :ref:`profile-description` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileId: The :ref:`profile-id` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileName: The :ref:`profile-name` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:protocol: An integral, unique identifier that corresponds to the :ref:`ds-protocol` used by this :term:`Delivery Service` +:qstringIgnore: An integral, unique identifier that corresponds to the :ref:`ds-qstring-handling` setting on this :term:`Delivery Service` +:rangeRequestHandling: An integral, unique identifier that corresponds to the :ref:`ds-range-request-handling` setting on this :term:`Delivery Service` +:regexRemap: A :ref:`ds-regex-remap` +:regionalGeoBlocking: A boolean defining the :ref:`ds-regionalgeo` setting on this :term:`Delivery Service` +:remapText: :ref:`ds-raw-remap` +:signed: ``true`` if and only if ``signingAlgorithm`` is not ``null``, ``false`` otherwise +:signingAlgorithm: Either a :ref:`ds-signing-algorithm` or ``null`` to indicate URL/URI signing is not implemented on this :term:`Delivery Service` - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ + .. 
versionadded:: 1.3 -:regionalGeoBlocking: ``true`` if Regional Geo Blocking is in use within this :term:`Delivery Service`, ``false`` otherwise - see :ref:`regionalgeo-qht` for more information -:remapText: Additional, raw text to add to the remap line for caches +:sslKeyVersion: This integer indicates the :ref:`ds-ssl-key-version` +:tenantId: The integral, unique identifier of the :ref:`ds-tenant` who owns this :term:`Delivery Service` - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ + .. versionadded:: 1.3 -:signed: ``true`` if token-based authentication is enabled for this :term:`Delivery Service`, ``false`` otherwise -:signingAlgorithm: Type of URL signing method to sign the URLs, basically comes down to one of two plugins or ``null``: +:trRequestHeaders: If defined, this defines the :ref:`ds-tr-req-headers` used by Traffic Router for this :term:`Delivery Service` - ``null`` - Token-based authentication is not enabled for this :term:`Delivery Service` - url_sig: - URL Signing token-based authentication is enabled for this :term:`Delivery Service` - uri_signing - URI Signing token-based authentication is enabled for this :term:`Delivery Service` + .. versionadded:: 1.3 - .. seealso:: `The Apache Trafficserver documentation for the url_sig plugin `_ and `the draft RFC for uri_signing `_ - note, however that the current implementation of uri_signing uses Draft 12 of that RFC document, **NOT** the latest +:trResponseHeaders: If defined, this defines the :ref:`ds-tr-resp-headers` used by Traffic Router for this :term:`Delivery Service` -:sslKeyVersion: This integer indicates the generation of keys in use by the :term:`Delivery Service` - if any - and is incremented by the Traffic Portal client whenever new keys are generated + .. versionadded:: 1.3 - .. warning:: This number will not be correct if keys are manually replaced using the API, as the key generation API does not increment it! 
- -:tenantId: The integral, unique identifier of the tenant who owns this :term:`Delivery Service` -:trRequestHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router access logs for requests - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:trResponseHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router responses - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:type: The name of the routing type of this :term:`Delivery Service` e.g. "HTTP" -:typeId: The integral, unique identifier of the routing type of this :term:`Delivery Service` -:xmlId: A unique string that describes this :term:`Delivery Service` - exists for legacy reasons +:type: The :ref:`ds-types` of this :term:`Delivery Service` +:typeId: The integral, unique identifier of the :ref:`ds-types` of this :term:`Delivery Service` +:xmlId: This :term:`Delivery Service`'s :ref:`ds-xmlid` .. code-block:: http :caption: Response Example @@ -295,160 +244,103 @@ Response Structure "maxOriginConnections": 0 }]} -.. [1] Users with the roles "admin" and/or "operations" will be able to see *all* :term:`Delivery Services`, whereas any other user will only see the :term:`Delivery Services` their Tenant is allowed to see. -.. [#httpOnly] This only applies to HTTP-:ref:`routed ` :term:`Delivery Services` -.. [3] See :ref:`ds-multi-site-origin` -.. [4] This only applies to DNS-routed :term:`Delivery Services` ``POST`` ======== Allows users to create a :term:`Delivery Service`. :Auth.
Required: Yes -:Roles Required: "admin" or "operations" +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: Array Request Structure ----------------- -:active: If ``true``, the :term:`Delivery Service` will immediately become active and serves traffic -:anonymousBlockingEnabled: An optional field which, if defined and ``true`` will cause :ref:`Anonymous Blocking ` to be used with the new :term:`Delivery Service` -:cacheurl: An optional setting for a deprecated feature of now-unsupported Trafficserver versions (read: "Don't use this") +:active: A boolean that defines :ref:`ds-active`. +:anonymousBlockingEnabled: A boolean that defines :ref:`ds-anonymous-blocking` +:cacheurl: A :ref:`ds-cacheurl` .. deprecated:: ATCv3.0 This field has been deprecated in Traffic Control 3.x and is subject to removal in Traffic Control 4.x or later -:ccrDnsTtl: The Time To Live (TTL) in seconds of the DNS response for A or AAAA record queries requesting the IP address of the Traffic Router - named "ccrDnsTtl" for legacy reasons -:cdnId: The integral, unique identifier for the CDN to which this :term:`Delivery Service`\ shall be assigned -:checkPath: The path portion of the URL which will be used to check connections to this :term:`Delivery Service`'s origin server -:consistentHashRegex: If defined, this is a regular expression used for the Pattern-Based Consistent Hashing feature. It is only applicable for HTTP and Steering Delivery Services +:ccrDnsTtl: The :ref:`ds-dns-ttl` - named "ccrDnsTtl" for legacy reasons +:cdnId: The integral, unique identifier of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:checkPath: A :ref:`ds-check-path` +:consistentHashRegex: A :ref:`ds-consistent-hashing-regex` .. 
versionadded:: 1.4 -:consistentHashQueryParams: If defined, this is a set (encoded as an array, but duplicates are **not** allowed and order is not preserved) of query parameters which will be considered by Traffic Router when using a client request to consistently find an :term:`Edge-tier cache server` to which to redirect them.\ [#httpOnly]_ +:consistentHashQueryParams: An array of :ref:`ds-consistent-hashing-qparams` .. versionadded:: 1.4 -:deepCachingType: A string describing when to do Deep Caching for this :term:`Delivery Service`: - - NEVER - Deep Caching will never be used by this :term:`Delivery Service` (default) - ALWAYS - Deep Caching will always be used by this :term:`Delivery Service` - -:displayName: The human-friendly name for this :term:`Delivery Service` -:dnsBypassCname: Domain name to overflow requests for HTTP :term:`Delivery Service`\ s - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:dnsBypassIp: The IPv4 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:dnsBypassIp6: The IPv6 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:dnsBypassTtl: The time for which a DNS bypass of this :term:`Delivery Service`\ shall remain active -:dscp: The Differentiated Services Code Point (DSCP) with which to mark downstream (EDGE -> customer) traffic. 
This should be zero in most cases -:edgeHeaderRewrite: An optional string which, if present, defines rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:fqPacingRate: An optional integer which, if present, sets the Fair-Queuing Pacing Rate in bytes per second set on the all TCP connection sockets in the :term:`Delivery Service` (see ``man tc-fc_codel`` for more information) - Linux only, defaults to 0 meaning "disabled" -:geoLimit: The setting that determines how content is geographically limited - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - None - no limitations - 1 - Only route when the client's IP is found in the Coverage Zone File (CZF) - 2 - Only route when the client's IP is found in the CZF, or when the client can be determined to be from the United States of America - - .. warning:: This does not prevent access to content or make content secure; it merely prevents routing to the content through Traffic Router - -:geoLimitCountries: A string containing a comma-separated list of country codes (e.g. "US,AU") which are allowed to request content through this :term:`Delivery Service`\ [5]_ -:geoLimitRedirectUrl: A URL to which clients blocked by :ref:`Regional Geographic Blocking ` or the ``geoLimit`` settings will be re-directed\ [5]_ -:geoProvider: An integer that represents the provider of a database for mapping IPs to geographic locations; currently only the following values are supported: - - 0 - The "Maxmind" GeoIP2 database (default) - 1 - Neustar - -:globalMaxMbps: An optional integer that will set the maximum global bandwidth allowed on this :term:`Delivery Service`. 
If exceeded, traffic will be routed to ``dnsBypassIp`` (or ``dnsBypassIp6`` for IPv6 traffic) for DNS :term:`Delivery Service`\ s and to ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:globalMaxTps: An optional integer that will set the maximum global transactions per second allowed on this :term:`Delivery Service`. When this is exceeded traffic will be sent to the ``dnsBpassIp`` (and/or ``dnsBypassIp6``)for DNS :term:`Delivery Service`\ s and to the ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:httpBypassFqdn: An optional Fully Qualified Domain Name (FQDN) to use for bypass on an HTTP :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [#httpOnly]_ -:infoUrl: An optional string which, if present, is expected to contain at least one URL pointing to more information about the :term:`Delivery Service`. Historically, this has been used to link relevant JIRA tickets -:initialDispersion: The number of caches between which traffic requesting the same object will be randomly split - meaning that if 4 clients all request the same object (one after another), then if this is above 4 there is a possibility that all 4 are cache misses. 
For most use-cases, this should be 1\ [#httpOnly]_\ [6]_ -:ipv6RoutingEnabled: If ``true``, clients that connect to Traffic Router using IPv6 will be given the IPv6 address of a suitable :term:`Edge-tier cache server`; if ``false`` all addresses will be IPv4, regardless of the client connection - optional for ANY_MAP-:ref:`ds-types` :term:`Delivery Services` -:logsEnabled: If ``true``, logging is enabled for this :term:`Delivery Service`, otherwise it is disabled -:longDesc: An optional description of the :term:`Delivery Service` -:longDesc1: An optional field used when more detailed information that that provided by ``longDesc`` is desired -:longDesc2: An optional field used when even more detailed information that that provided by either ``longDesc`` or ``longDesc1`` is desired -:maxDnsAnswers: An optional field which, when present, specifies the maximum number of IPs to put in responses to A/AAAA DNS record requests - defaults to 0, meaning "no limit"\ [4]_ -:maxOriginConnections: The maximum number of connections allowed to the origin (0 means no maximum). +:deepCachingType: The :ref:`ds-deep-caching` setting for this :term:`Delivery Service` + + .. versionadded:: 1.3 + +:displayName: The :ref:`ds-display-name` +:dnsBypassCname: A :ref:`ds-dns-bypass-cname` +:dnsBypassIp: A :ref:`ds-dns-bypass-ip` +:dnsBypassIp6: A :ref:`ds-dns-bypass-ipv6` +:dnsBypassTtl: The :ref:`ds-dns-bypass-ttl` +:dscp: A :ref:`ds-dscp` to be used within the :term:`Delivery Service` +:edgeHeaderRewrite: A set of :ref:`ds-edge-header-rw-rules` +:fqPacingRate: The :ref:`ds-fqpr` + + .. 
versionadded:: 1.3 + +:geoLimit: An integer that defines the :ref:`ds-geo-limit` +:geoLimitCountries: A string containing a comma-separated list defining the :ref:`ds-geo-limit-countries`\ [#geolimit]_ +:geoLimitRedirectUrl: A :ref:`ds-geo-limit-redirect-url`\ [#geolimit]_ +:geoProvider: The :ref:`ds-geo-provider` +:globalMaxMbps: The :ref:`ds-global-max-mbps` +:globalMaxTps: The :ref:`ds-global-max-tps` +:httpBypassFqdn: A :ref:`ds-http-bypass-fqdn` +:infoUrl: An :ref:`ds-info-url` +:initialDispersion: The :ref:`ds-initial-dispersion` +:ipv6RoutingEnabled: A boolean that defines the :ref:`ds-ipv6-routing` setting on this :term:`Delivery Service` +:logsEnabled: A boolean that defines the :ref:`ds-logs-enabled` setting on this :term:`Delivery Service` +:longDesc: The :ref:`ds-longdesc` of this :term:`Delivery Service` +:longDesc1: An optional field containing the :ref:`ds-longdesc2` of this :term:`Delivery Service` +:longDesc2: An optional field containing the :ref:`ds-longdesc3` of this :term:`Delivery Service` +:maxDnsAnswers: The :ref:`ds-max-dns-answers` allowed for this :term:`Delivery Service` +:maxOriginConnections: The :ref:`ds-max-origin-connections` .. versionadded:: 1.4 -:midHeaderRewrite: An optional string containing rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:missLat: The latitude to use when the client cannot be found in the CZF or a geographic IP lookup\ [7]_ -:missLong: The longitude to use when the client cannot be found in the CZF or a geographic IP lookup\ [7]_ -:multiSiteOrigin: ``true`` if the Multi Site Origin feature is enabled for this :term:`Delivery Service`, ``false`` otherwise\ [3]_\ [7]_ -:orgServerFqdn: The URL of the :term:`Delivery Service`'s origin server for use in retrieving content from the origin server\ [7]_ - - .. note:: Despite the field name, this must truly be a full URL - including the protocol (e.g. 
``http://`` or ``https://``) - **NOT** merely the server's Fully Qualified Domain Name (FQDN) - -:originShield: An "origin shield" is a forward proxy that sits between Mid-tier caches and the origin and performs further caching beyond what's offered by a standard CDN. This optional field is a string of FQDNs to use as origin shields, delimited by ``|`` -:profileId: An optional, integral, unique identifier for the Traffic Router profile with which this :term:`Delivery Service`\ shall be associated -:protocol: The protocol which clients will use to communicate with Edge-tier :term:`cache server` s - this is an (optional for ANY_MAP :term:`Delivery Service`\ s) integer on the interval [0,2] where the values have these meanings: - - 0 - HTTP - 1 - HTTPS - 2 - Both HTTP and HTTPS - -:qstringIgnore: Tells caches whether or not to consider URLs with different query parameter strings to be distinct\ [7]_ - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - URLs with different query parameter strings will be considered distinct for caching purposes, and query strings will be passed upstream to the origin - 1 - URLs with different query parameter strings will be considered identical for caching purposes, and query strings will be passed upstream to the origin - 2 - Query strings are stripped out by Edge-tier caches, and thus are neither taken into consideration for caching purposes, nor passed upstream in requests to the origin - -:rangeRequestHandling: Tells caches how to handle range requests\ [7]_ - this is an integer on the interval [0,2] where the values have these meanings: - - 0 - Range requests will not be cached, but range requests that request ranges of content already cached will be served from the cache - 1 - Use the `background_fetch plugin `_ to service the range request while caching the whole object - 2 - Use the `experimental cache_range_requests plugin `_ to treat unique ranges as unique objects - -:regexRemap: An 
optional, regular expression remap rule to apply to this :term:`Delivery Service` at the Edge tier +:midHeaderRewrite: A set of :ref:`ds-mid-header-rw-rules` +:missLat: The :ref:`ds-geo-miss-default-latitude` used by this :term:`Delivery Service` +:missLong: The :ref:`ds-geo-miss-default-longitude` used by this :term:`Delivery Service` +:multiSiteOrigin: A boolean that defines the use of :ref:`ds-multi-site-origin` by this :term:`Delivery Service` +:orgServerFqdn: The :ref:`ds-origin-url` +:originShield: A :ref:`ds-origin-shield` string +:profileId: An optional :ref:`profile-id` of a :ref:`ds-profile` with which this :term:`Delivery Service` shall be associated +:protocol: An integral, unique identifier that corresponds to the :ref:`ds-protocol` used by this :term:`Delivery Service` +:qstringIgnore: An integral, unique identifier that corresponds to the :ref:`ds-qstring-handling` setting on this :term:`Delivery Service` +:rangeRequestHandling: An integral, unique identifier that corresponds to the :ref:`ds-range-request-handling` setting on this :term:`Delivery Service` +:regexRemap: A :ref:`ds-regex-remap` +:regionalGeoBlocking: A boolean defining the :ref:`ds-regionalgeo` setting on this :term:`Delivery Service` +:remapText: :ref:`ds-raw-remap` +:signed: ``true`` if and only if ``signingAlgorithm`` is not ``null``, ``false`` otherwise +:signingAlgorithm: Either a :ref:`ds-signing-algorithm` or ``null`` to indicate URL/URI signing is not implemented on this :term:`Delivery Service` - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ + .. 
versionadded:: 1.3 -:regionalGeoBlocking: ``true`` if Regional Geo Blocking is in use within this :term:`Delivery Service`, ``false`` otherwise - see :ref:`regionalgeo-qht` for more information -:remapText: Optional, raw text to add to the remap line for caches +:sslKeyVersion: This integer indicates the :ref:`ds-ssl-key-version` +:tenantId: The integral, unique identifier of the :ref:`ds-tenant` who owns this :term:`Delivery Service` - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ + .. versionadded:: 1.3 -:routingName: The routing name of this :term:`Delivery Service`, used as the top-level part of the FQDN used by clients to request content from the :term:`Delivery Service` e.g. ``routingName.xml_id.CDNName.com`` -:signed: An optional field which should be ``true`` if token-based authentication will be enabled for this :term:`Delivery Service`, ``false`` (default) otherwise -:signingAlgorithm: Type of URL signing method to sign the URLs, basically comes down to one of two plugins or ``null``: +:trRequestHeaders: If defined, this defines the :ref:`ds-tr-req-headers` used by Traffic Router for this :term:`Delivery Service` - ``null`` - Token-based authentication is not enabled for this :term:`Delivery Service` - url_sig: - URL Signing token-based authentication is enabled for this :term:`Delivery Service` - uri_signing - URI Signing token-based authentication is enabled for this :term:`Delivery Service` + .. versionadded:: 1.3 - .. 
seealso:: `The Apache Trafficserver documentation for the url_sig plugin `_ and `the draft RFC for uri_signing `_ - note, however that the current implementation of uri_signing uses Draft 12 of that RFC document, **NOT** the latest +:trResponseHeaders: If defined, this defines the :ref:`ds-tr-resp-headers` used by Traffic Router for this :term:`Delivery Service` -:sslKeyVersion: This optional integer indicates the generation of keys to be used by the :term:`Delivery Service` - if any - and is incremented by the Traffic Portal client whenever new keys are generated - - .. warning:: This number will not be correct if keys are manually replaced using the API, as the key generation API does not increment it! - -:tenantId: An optional, integral, unique identifier of the tenant who will own this :term:`Delivery Service` -:trRequestHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router access logs for requests - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:trResponseHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router responses - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:typeId: The integral, unique identifier for the routing type of this :term:`Delivery Service` -:xmlId: A unique string that describes this :term:`Delivery Service` - exists for legacy reasons - - .. note:: This should almost never be different from the :term:`Delivery Service`'s ``displayName`` + .. versionadded:: 1.3 +:type: The :ref:`ds-types` of this :term:`Delivery Service` +:typeId: The integral, unique identifier of the :ref:`ds-types` of this :term:`Delivery Service` +:xmlId: This :term:`Delivery Service`'s :ref:`ds-xmlid` .. 
code-block:: http :caption: Request Example @@ -465,22 +357,17 @@ Request Structure "active": false, "anonymousBlockingEnabled": false, "cdnId": 2, - "cdnName": "CDN-in-a-Box", "deepCachingType": "NEVER", "displayName": "test", - "exampleURLs": [ - "http://test.test.mycdn.ciab.test" - ], "dscp": 0, "geoLimit": 0, "geoProvider": 0, "initialDispersion": 1, "ipv6RoutingEnabled": false, - "lastUpdated": "2018-11-14 18:21:17+00", "logsEnabled": true, - "longDesc": "A :term:`Delivery Service` created expressly for API documentation examples", - "missLat": -1, - "missLong": -1, + "longDesc": "A Delivery Service created expressly for API documentation examples", + "missLat": 0, + "missLong": 0, "maxOriginConnections": 0, "multiSiteOrigin": false, "orgServerFqdn": "http://origin.infra.ciab.test", @@ -496,161 +383,107 @@ Request Structure "xmlId": "test" } -.. [5] These fields must be defined if and only if ``geoLimit`` is non-zero -.. [6] These fields are required for HTTP-routed :term:`Delivery Service`\ s, and optional for all others -.. [7] These fields are required for HTTP-routed and DNS-routed :term:`Delivery Service`\ s, but are optional for (and in fact may have no effect on) STEERING and ANY_MAP :term:`Delivery Service`\ s Response Structure ------------------ -:active: ``true`` if the :term:`Delivery Service` is active, ``false`` otherwise -:anonymousBlockingEnabled: ``true`` if :ref:`Anonymous Blocking ` has been configured for the :term:`Delivery Service`, ``false`` otherwise -:cacheurl: A setting for a deprecated feature of now-unsupported Trafficserver versions +:active: A boolean that defines :ref:`ds-active`. +:anonymousBlockingEnabled: A boolean that defines :ref:`ds-anonymous-blocking` +:cacheurl: A :ref:`ds-cacheurl` .. 
deprecated:: ATCv3.0 This field has been deprecated in Traffic Control 3.x and is subject to removal in Traffic Control 4.x or later -:ccrDnsTtl: The Time To Live (TTL) of the DNS response for A or AAAA record queries requesting the IP address of the Traffic Router - named "ccrDnsTtl" for legacy reasons -:cdnId: The integral, unique identifier of the CDN to which the :term:`Delivery Service` belongs -:cdnName: Name of the CDN to which the :term:`Delivery Service` belongs -:checkPath: The path portion of the URL to check connections to this :term:`Delivery Service`'s origin server -:consistentHashRegex: If defined, this is a regular expression used for the Pattern-Based Consistent Hashing feature.\ [#httpOnly]_ +:ccrDnsTtl: The :ref:`ds-dns-ttl` - named "ccrDnsTtl" for legacy reasons +:cdnId: The integral, unique identifier of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:cdnName: Name of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:checkPath: A :ref:`ds-check-path` +:consistentHashRegex: A :ref:`ds-consistent-hashing-regex` .. versionadded:: 1.4 -:consistentHashQueryParams: A set (actually array due to limitations of JSON) of query parameters which will be considered by Traffic Router when using a client request to consistently find an :term:`Edge-tier cache server` to which to redirect them.\ [#httpOnly]_ +:consistentHashQueryParams: An array of :ref:`ds-consistent-hashing-qparams` .. 
versionadded:: 1.4 -:displayName: The display name of the :term:`Delivery Service` -:dnsBypassCname: Domain name to overflow requests for HTTP :term:`Delivery Service`\ s - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassIp: The IPv4 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassIp6: The IPv6 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassTtl: The time for which a DNS bypass of this :term:`Delivery Service`\ shall remain active\ [4]_ -:dscp: The Differentiated Services Code Point (DSCP) with which to mark traffic as it leaves the CDN and reaches clients -:edgeHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:fqPacingRate: The Fair-Queuing Pacing Rate in Bytes per second set on the all TCP connection sockets in the :term:`Delivery Service` (see ``man tc-fc_codel`` for more information) - Linux only -:geoLimit: The setting that determines how content is geographically limited - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - None - no limitations - 1 - Only route when the client's IP is found in the Coverage Zone File (CZF) - 2 - Only route when the client's IP is found in the CZF, or when the client can be determined to be from the United States of America - - .. 
warning:: This does not prevent access to content or make content secure; it merely prevents routing to the content through Traffic Router - -:geoLimitCountries: A string containing a comma-separated list of country codes (e.g. "US,AU") which are allowed to request content through this :term:`Delivery Service` -:geoLimitRedirectUrl: A URL to which clients blocked by :ref:`Regional Geographic Blocking ` or the ``geoLimit`` settings will be re-directed -:geoProvider: An integer that represents the provider of a database for mapping IPs to geographic locations; currently only the following values are supported: - - 0 - The "Maxmind" GeoIP2 database (default) - 1 - Neustar - -:globalMaxMbps: The maximum global bandwidth allowed on this :term:`Delivery Service`. If exceeded, traffic will be routed to ``dnsBypassIp`` (or ``dnsBypassIp6`` for IPv6 traffic) for DNS :term:`Delivery Service`\ s and to ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:globalMaxTps: The maximum global transactions per second allowed on this :term:`Delivery Service`. When this is exceeded traffic will be sent to the ``dnsBypassIp`` (and/or ``dnsBypassIp6``) for DNS :term:`Delivery Service`\ s and to the httpBypassFqdn for HTTP :term:`Delivery Service`\ s -:httpBypassFqdn: The HTTP destination to use for bypass on an HTTP :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` +:deepCachingType: The :ref:`ds-deep-caching` setting for this :term:`Delivery Service` + + .. 
versionadded:: 1.3 + +:displayName: The :ref:`ds-display-name` +:dnsBypassCname: A :ref:`ds-dns-bypass-cname` +:dnsBypassIp: A :ref:`ds-dns-bypass-ip` +:dnsBypassIp6: A :ref:`ds-dns-bypass-ipv6` +:dnsBypassTtl: The :ref:`ds-dns-bypass-ttl` +:dscp: A :ref:`ds-dscp` to be used within the :term:`Delivery Service` +:edgeHeaderRewrite: A set of :ref:`ds-edge-header-rw-rules` +:exampleURLs: An array of :ref:`ds-example-urls` +:fqPacingRate: The :ref:`ds-fqpr` + + .. versionadded:: 1.3 + +:geoLimit: An integer that defines the :ref:`ds-geo-limit` +:geoLimitCountries: A string containing a comma-separated list defining the :ref:`ds-geo-limit-countries` +:geoLimitRedirectUrl: A :ref:`ds-geo-limit-redirect-url` +:geoProvider: The :ref:`ds-geo-provider` +:globalMaxMbps: The :ref:`ds-global-max-mbps` +:globalMaxTps: The :ref:`ds-global-max-tps` +:httpBypassFqdn: A :ref:`ds-http-bypass-fqdn` :id: An integral, unique identifier for this :term:`Delivery Service` -:infoUrl: This is a string which is expected to contain at least one URL pointing to more information about the :term:`Delivery Service`. Historically, this has been used to link relevant JIRA tickets -:initialDispersion: The number of caches between which traffic requesting the same object will be randomly split - meaning that if 4 clients all request the same object (one after another), then if this is above 4 there is a possibility that all 4 are cache misses. 
For most use-cases, this should be 1\ [#httpOnly]_ -:ipv6RoutingEnabled: If ``true``, clients that connect to Traffic Router using IPv6 will be given the IPv6 address of a suitable :term:`Edge-tier cache server`; if ``false`` all addresses will be IPv4, regardless of the client connection -:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in a ``ctime``-like format -:logsEnabled: If ``true``, logging is enabled for this :term:`Delivery Service`, otherwise it is disabled -:longDesc: A description of the :term:`Delivery Service` -:longDesc1: A field used when more detailed information that that provided by ``longDesc`` is desired -:longDesc2: A field used when even more detailed information that that provided by either ``longDesc`` or ``longDesc1`` is desired -:matchList: An array of methods used by Traffic Router to determine whether or not a request can be serviced by this :term:`Delivery Service` +:infoUrl: An :ref:`ds-info-url` +:initialDispersion: The :ref:`ds-initial-dispersion` +:ipv6RoutingEnabled: A boolean that defines the :ref:`ds-ipv6-routing` setting on this :term:`Delivery Service` +:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in :rfc:`3339` format +:logsEnabled: A boolean that defines the :ref:`ds-logs-enabled` setting on this :term:`Delivery Service` +:longDesc: The :ref:`ds-longdesc` of this :term:`Delivery Service` +:longDesc1: The :ref:`ds-longdesc2` of this :term:`Delivery Service` +:longDesc2: The :ref:`ds-longdesc3` of this :term:`Delivery Service` +:matchList: The :term:`Delivery Service`'s :ref:`ds-matchlist` :pattern: A regular expression - the use of this pattern is dependent on the ``type`` field (backslashes are escaped) - :setNumber: An integral, unique identifier for the set of types to which the ``type`` field belongs - :type: The type of match performed using ``pattern`` to determine whether or not to use this :term:`Delivery Service` - - HOST_REGEXP - 
Use the :term:`Delivery Service` if ``pattern`` matches the ``Host:`` HTTP header of an HTTP request, or the name requested for resolution in a DNS request - HEADER_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches an HTTP header (both the name and value) in an HTTP request\ [#httpOnly]_ - PATH_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the request path of this :term:`Delivery Service`'s URL\ [#httpOnly]_ - STEERING_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``xml_id`` of one of this :term:`Delivery Service`'s "Steering" target :term:`Delivery Service` + :setNumber: An integer that provides explicit ordering of :ref:`ds-matchlist` items - this is used as a priority ranking by Traffic Router, and is not guaranteed to correspond to the ordering of items in the array. + :type: The type of match performed using ``pattern``. -:maxDnsAnswers: The maximum number of IPs to put in responses to A/AAAA DNS record requests (0 means all available)\ [4]_ -:maxOriginConnections: The maximum number of connections allowed to the origin (0 means no maximum). +:maxDnsAnswers: The :ref:`ds-max-dns-answers` allowed for this :term:`Delivery Service` +:maxOriginConnections: The :ref:`ds-max-origin-connections` .. versionadded:: 1.4 -:midHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:missLat: The latitude to use when the client cannot be found in the CZF or a geographic IP lookup -:missLong: The longitude to use when the client cannot be found in the CZF or a geographic IP lookup -:multiSiteOrigin: ``true`` if the Multi Site Origin feature is enabled for this :term:`Delivery Service`, ``false`` otherwise\ [3]_ -:orgServerFqdn: The URL of the :term:`Delivery Service`'s origin server for use in retrieving content from the origin server - - .. 
note:: Despite the field name, this must truly be a full URL - including the protocol (e.g. ``http://`` or ``https://``) - **NOT** merely the server's Fully Qualified Domain Name (FQDN) - -:originShield: An "origin shield" is a forward proxy that sits between Mid-tier caches and the origin and performs further caching beyond what's offered by a standard CDN. This field is a string of FQDNs to use as origin shields, delimited by ``|`` -:profileDescription: The description of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:profileId: The integral, unique identifier for the Traffic Router profile with which this :term:`Delivery Service` is associated -:profileName: The name of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:protocol: The protocol which clients will use to communicate with Edge-tier :term:`cache server` s\ [#httpOnly]_ - this is an integer on the interval [0-2] where the values have these meanings: +:midHeaderRewrite: A set of :ref:`ds-mid-header-rw-rules` +:missLat: The :ref:`ds-geo-miss-default-latitude` used by this :term:`Delivery Service` +:missLong: The :ref:`ds-geo-miss-default-longitude` used by this :term:`Delivery Service` +:multiSiteOrigin: A boolean that defines the use of :ref:`ds-multi-site-origin` by this :term:`Delivery Service` +:orgServerFqdn: The :ref:`ds-origin-url` +:originShield: A :ref:`ds-origin-shield` string +:profileDescription: The :ref:`profile-description` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileId: The :ref:`profile-id` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileName: The :ref:`profile-name` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:protocol: An integral, unique identifier that corresponds to the :ref:`ds-protocol` used by this :term:`Delivery Service` +:qstringIgnore: An integral, unique identifier that corresponds to 
the :ref:`ds-qstring-handling` setting on this :term:`Delivery Service` +:rangeRequestHandling: An integral, unique identifier that corresponds to the :ref:`ds-range-request-handling` setting on this :term:`Delivery Service` +:regexRemap: A :ref:`ds-regex-remap` +:regionalGeoBlocking: A boolean defining the :ref:`ds-regionalgeo` setting on this :term:`Delivery Service` +:remapText: :ref:`ds-raw-remap` +:signed: ``true`` if and only if ``signingAlgorithm`` is not ``null``, ``false`` otherwise +:signingAlgorithm: Either a :ref:`ds-signing-algorithm` or ``null`` to indicate URL/URI signing is not implemented on this :term:`Delivery Service` - 0 - HTTP - 1 - HTTPS - 2 - Both HTTP and HTTPS + .. versionadded:: 1.3 -:qstringIgnore: Tells caches whether or not to consider URLs with different query parameter strings to be distinct - this is an integer on the interval [0-2] where the values have these meanings: +:sslKeyVersion: This integer indicates the :ref:`ds-ssl-key-version` +:tenantId: The integral, unique identifier of the :ref:`ds-tenant` who owns this :term:`Delivery Service` - 0 - URLs with different query parameter strings will be considered distinct for caching purposes, and query strings will be passed upstream to the origin - 1 - URLs with different query parameter strings will be considered identical for caching purposes, and query strings will be passed upstream to the origin - 2 - Query strings are stripped out by Edge-tier caches, and thus are neither taken into consideration for caching purposes, nor passed upstream in requests to the origin + .. 
versionadded:: 1.3 -:rangeRequestHandling: Tells caches how to handle range requests\ [7]_ - this is an integer on the interval [0,2] where the values have these meanings: +:trRequestHeaders: If defined, this defines the :ref:`ds-tr-req-headers` used by Traffic Router for this :term:`Delivery Service` - 0 - Range requests will not be cached, but range requests that request ranges of content already cached will be served from the cache - 1 - Use the `background_fetch plugin `_ to service the range request while caching the whole object - 2 - Use the `experimental cache_range_requests plugin `_ to treat unique ranges as unique objects + .. versionadded:: 1.3 -:regexRemap: A regular expression remap rule to apply to this :term:`Delivery Service` at the Edge tier +:trResponseHeaders: If defined, this defines the :ref:`ds-tr-resp-headers` used by Traffic Router for this :term:`Delivery Service` - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ + .. versionadded:: 1.3 -:regionalGeoBlocking: ``true`` if Regional Geo Blocking is in use within this :term:`Delivery Service`, ``false`` otherwise - see :ref:`regionalgeo-qht` for more information -:remapText: Additional, raw text to add to the remap line for caches - - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ - -:signed: ``true`` if token-based authentication is enabled for this :term:`Delivery Service`, ``false`` otherwise -:signingAlgorithm: Type of URL signing method to sign the URLs, basically comes down to one of two plugins or ``null``: - - ``null`` - Token-based authentication is not enabled for this :term:`Delivery Service` - url_sig: - URL Signing token-based authentication is enabled for this :term:`Delivery Service` - uri_signing - URI Signing token-based authentication is enabled for this :term:`Delivery Service` - - .. 
seealso:: `The Apache Trafficserver documentation for the url_sig plugin `_ and `the draft RFC for uri_signing `_ - note, however that the current implementation of uri_signing uses Draft 12 of that RFC document, **NOT** the latest - -:sslKeyVersion: This integer indicates the generation of keys in use by the :term:`Delivery Service` - if any - and is incremented by the Traffic Portal client whenever new keys are generated - - .. warning:: This number will not be correct if keys are manually replaced using the API, as the key generation API does not increment it! - -:tenantId: The integral, unique identifier of the tenant who owns this :term:`Delivery Service` -:trRequestHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router access logs for requests - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:trResponseHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router responses - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:type: The name of the routing type of this :term:`Delivery Service` e.g. "HTTP" -:typeId: The integral, unique identifier of the routing type of this :term:`Delivery Service` -:xmlId: A unique string that describes this :term:`Delivery Service` - exists for legacy reasons +:type: The :ref:`ds-types` of this :term:`Delivery Service` +:typeId: The integral, unique identifier of the :ref:`ds-types` of this :term:`Delivery Service` +:xmlId: This :term:`Delivery Service`'s :ref:`ds-xmlid` .. 
code-block:: http :caption: Response Example @@ -702,7 +535,7 @@ Response Structure "ipv6RoutingEnabled": false, "lastUpdated": "2018-11-19 19:45:49+00", "logsEnabled": true, - "longDesc": "A :term:`Delivery Service` created expressly for API documentation examples", + "longDesc": "A Delivery Service created expressly for API documentation examples", "longDesc1": null, "longDesc2": null, "matchList": [ @@ -744,3 +577,6 @@ Response Structure "tenant": "root" } ]} + +.. [#tenancy] Only those :term:`Delivery Services` assigned to :term:`Tenants` that are the requesting user's :term:`Tenant` or children thereof will appear in the output of a ``GET`` request, and the same constraints are placed on the allowed values of the ``tenantId`` field of a ``POST`` request to create a new :term:`Delivery Service` +.. [#geoLimit] These fields must be defined if and only if ``geoLimit`` is non-zero diff --git a/docs/source/api/deliveryservices_hostname_name_sslkeys.rst b/docs/source/api/deliveryservices_hostname_name_sslkeys.rst index 72141c57bd..f50d740b5b 100644 --- a/docs/source/api/deliveryservices_hostname_name_sslkeys.rst +++ b/docs/source/api/deliveryservices_hostname_name_sslkeys.rst @@ -54,7 +54,7 @@ Cookie: mojolicious=... Response Structure ------------------ -:businessUnit: An optional field which, if present, contains the business unit entered by the user when generating the SSL certificate\ [1]_ +:businessUnit: An optional field which, if present, contains the business unit entered by the user when generating the SSL certificate\ [#optional]_ :certificate: An object containing the actual generated key, certificate, and signature of the SSL keys :crt: Base 64-encoded (or not if the ``decode`` query parameter was given and ``true``) certificate for the :term:`Delivery Service` identified by ``deliveryservice`` @@ -64,12 +64,12 @@ Response Structure .. caution:: There's almost certainly no good reason to request the private key! 
Even when "base 64-encoded" do not let **ANYONE** see this who would be unable to request it themselves! :cdn: The CDN of the :term:`Delivery Service` for which the certs were generated -:city: An optional field which, if present, contains the city entered by the user when generating the SSL certificate\ [1]_ -:country: An optional field which, if present, contains the country entered by the user when generating the SSL certificate\ [1]_ +:city: An optional field which, if present, contains the city entered by the user when generating the SSL certificate\ [#optional]_ +:country: An optional field which, if present, contains the country entered by the user when generating the SSL certificate\ [#optional]_ :deliveryservice: The 'xml_id' of the :term:`Delivery Service` for which the certificate was generated :hostname: The hostname generated by Traffic Ops that is used as the common name when generating the certificate - this will be an :abbr:`FQDN (Fully Qualified Domain Name)` for DNS :term:`Delivery Service`\ s and a wildcard URL for HTTP :term:`Delivery Service`\ s -:organization: An optional field which, if present, contains the organization entered by the user when generating certificate\ [1]_ -:state: An optional field which, if present, contains the state entered by the user when generating certificate\ [1]_ +:organization: An optional field which, if present, contains the organization entered by the user when generating certificate\ [#optional]_ +:state: An optional field which, if present, contains the state entered by the user when generating certificate\ [#optional]_ :version: The version of the certificate record in Traffic Vault .. code- block:: http @@ -103,4 +103,4 @@ Response Structure .. note:: The response example uses abbreviated values for the ``crt``, ``key``, and ``csr``, as these will generally be very large, base64-encoded SSL keys and certificates. 
Note that in general the output of this request should **not** be made available, as the ``key`` field contains the *private* SSL key corresponding to the certificate. -.. [1] These optional fields will be present in the response if and only if they were specified during key generation; they are optional during key generation and thus cannot be guaranteed to exist or not exist. +.. [#optional] These optional fields will be present in the response if and only if they were specified during key generation; they are optional during key generation and thus cannot be guaranteed to exist or not exist. diff --git a/docs/source/api/deliveryservices_id.rst b/docs/source/api/deliveryservices_id.rst index 8984f2311c..31c713cdcd 100644 --- a/docs/source/api/deliveryservices_id.rst +++ b/docs/source/api/deliveryservices_id.rst @@ -18,34 +18,37 @@ *************************** ``deliveryservices/{{ID}}`` *************************** -.. deprecated:: 1.1 - Use the ``id`` query parameter of :ref:`to-api-deliveryservices` instead ``GET`` ======= +.. caution:: + It's often much better to the ``id`` query parameter of a ``GET`` request to :ref:`to-api-deliveryservices` instead. + Retrieves a specific :term:`Delivery Service` :Auth. Required: Yes -:Roles Required: None\ [1]_ +:Roles Required: None\ [#tenancy]_ :Response Type: Array Request Structure ----------------- .. 
table:: Request Query Parameters - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | Name | Required | Description | - +=============+==========+================================================================================================================================================+ - | cdn | no | Show only the :term:`Delivery Service`\ s belonging to the CDN identified by this integral, unique identifier | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | logsEnabled | no | If true, return only :term:`Delivery Service`\ s with logging enabled, otherwise return only :term:`Delivery Service`\ s with logging disabled | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | profile | no | Return only :term:`Delivery Service`\ s using the profile identified by this integral, unique identifier | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | tenant | no | Show only the :term:`Delivery Service`\ s belonging to the tenant identified by this integral, unique identifier | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ - | type | no | Return only :term:`Delivery Service`\ s of the :term:`Delivery Service` type identified by this integral, unique identifier | - +-------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------+ + 
+-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | Name | Required | Description | + +=============+==========+============================================================================================================================================+ + | cdn | no | Show only the :term:`Delivery Services` belonging to the CDN identified by this integral, unique identifier | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | logsEnabled | no | If true, return only :term:`Delivery Services` with logging enabled, otherwise return only :term:`Delivery Services` with logging disabled | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | profile | no | Return only :term:`Delivery Services` using the :term:`Profile` with this :ref:`profile-id` | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | tenant | no | Show only the :term:`Delivery Services` belonging to the tenant identified by this integral, unique identifier | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | type | no | Return only :term:`Delivery Services` of the :ref:`Delivery Service Type ` identified by this integral, unique identifier | + +-------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------+ + +.. 
tip:: Since this method of this endpoint by definition should only ever return one :term:`Delivery Service`, using these query parameters is typically useless, unless for some reason the only desired information is the value of one of these specific fields. .. table:: Request Path Parameters @@ -58,164 +61,104 @@ Request Structure Response Structure ------------------ -.. versionchanged:: 1.3 - Removed ``fqPacingRate`` field, added fields: ``deepCachingType``, ``signingAlgorithm``, and ``tenant``. - -:active: ``true`` if the :term:`Delivery Service` is active, ``false`` otherwise -:anonymousBlockingEnabled: ``true`` if :ref:`Anonymous Blocking ` has been configured for the :term:`Delivery Service`, ``false`` otherwise -:cacheurl: A setting for a deprecated feature of now-unsupported Trafficserver versions +:active: A boolean that defines :ref:`ds-active`. +:anonymousBlockingEnabled: A boolean that defines :ref:`ds-anonymous-blocking` +:cacheurl: A :ref:`ds-cacheurl` .. deprecated:: ATCv3.0 This field has been deprecated in Traffic Control 3.x and is subject to removal in Traffic Control 4.x or later -:ccrDnsTtl: The Time To Live (TTL) of the DNS response for A or AAAA record queries requesting the IP address of the Traffic Router - named "ccrDnsTtl" for legacy reasons -:cdnId: The integral, unique identifier of the CDN to which the :term:`Delivery Service` belongs -:cdnName: Name of the CDN to which the :term:`Delivery Service` belongs -:checkPath: The path portion of the URL to check connections to this :term:`Delivery Service`'s origin server -:consistentHashRegex: If defined, this is a regular expression used for the Pattern-Based Consistent Hashing feature.\ [#httpOnly]_ +:ccrDnsTtl: The :ref:`ds-dns-ttl` - named "ccrDnsTtl" for legacy reasons +:cdnId: The integral, unique identifier of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:cdnName: Name of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:checkPath: A 
:ref:`ds-check-path` +:consistentHashRegex: A :ref:`ds-consistent-hashing-regex` .. versionadded:: 1.4 -:consistentHashQueryParams: A set (actually array due to limitations of JSON) of query parameters which will be considered by Traffic Router when using a client request to consistently find an :term:`Edge-tier cache server` to which to redirect them.\ [#httpOnly]_ +:consistentHashQueryParams: An array of :ref:`ds-consistent-hashing-qparams` .. versionadded:: 1.4 -:deepCachingType: A string that describes when "Deep Caching" will be used by this :term:`Delivery Service` - one of: +:deepCachingType: The :ref:`ds-deep-caching` setting for this :term:`Delivery Service` - ALWAYS - "Deep Caching" will always be used with this :term:`Delivery Service` - NEVER - "Deep Caching" will never be used with this :term:`Delivery Service` + .. versionadded:: 1.3 + +:displayName: The :ref:`ds-display-name` +:dnsBypassCname: A :ref:`ds-dns-bypass-cname` +:dnsBypassIp: A :ref:`ds-dns-bypass-ip` +:dnsBypassIp6: A :ref:`ds-dns-bypass-ipv6` +:dnsBypassTtl: The :ref:`ds-dns-bypass-ttl` +:dscp: A :ref:`ds-dscp` to be used within the :term:`Delivery Service` +:edgeHeaderRewrite: A set of :ref:`ds-edge-header-rw-rules` +:exampleURLs: An array of :ref:`ds-example-urls` +:fqPacingRate: The :ref:`ds-fqpr` .. 
versionadded:: 1.3 -:displayName: The display name of the :term:`Delivery Service` -:dnsBypassCname: Domain name to overflow requests for HTTP :term:`Delivery Service`\ s - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:dnsBypassIp: The IPv4 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:dnsBypassIp6: The IPv6 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:dnsBypassTtl: The time for which a DNS bypass of this :term:`Delivery Service`\ shall remain active -:dscp: The :abbr:`FQDN (Differentiated Services Code Point)` with which to mark traffic as it leaves the CDN and reaches clients -:edgeHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite :abbr:`ATS (Apache Traffic Server)` plugin -:fqPacingRate: The Fair-Queuing Pacing Rate in Bytes per second set on the all TCP connection sockets in the :term:`Delivery Service` (see ``man tc-fc_codel`` for more information) - Linux only - - .. deprecated:: 1.3 - This field is only present/available in API versions 1.2 and lower - it has been removed in API version 1.3 - -:geoLimit: The setting that determines how content is geographically limited - this is an integer on the interval [0-2] where the values have these meanings: -:geoLimitCountries: A string containing a comma-separated list of country codes (e.g. 
"US,AU") which are allowed to request content through this :term:`Delivery Service` -:geoLimitRedirectUrl: A URL to which clients blocked by :ref:`Regional Geographic Blocking ` or the ``geoLimit`` settings will be re-directed - - 0 - None - no limitations - 1 - Only route when the client's IP is found in the :term:`Coverage Zone File` - 2 - Only route when the client's IP is found in the :term:`Coverage Zone File`, or when the client can be determined to be from the United States of America - - .. warning:: This does not prevent access to content or make content secure; it merely prevents routing to the content through Traffic Router - -:geoProvider: An integer that represents the provider of a database for mapping IPs to geographic locations; currently only ``0`` - which represents MaxMind - is supported -:globalMaxMbps: The maximum global bandwidth allowed on this :term:`Delivery Service`. If exceeded, traffic will be routed to ``dnsBypassIp`` (or ``dnsBypassIp6`` for IPv6 traffic) for DNS :term:`Delivery Service`\ s and to ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:globalMaxTps: The maximum global transactions per second allowed on this :term:`Delivery Service`. When this is exceeded traffic will be sent to the dnsByPassIp* for DNS :term:`Delivery Service`\ s and to the httpBypassFqdn for HTTP :term:`Delivery Service`\ s -:httpBypassFqdn: The HTTP destination to use for bypass on an HTTP :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:id: An integral, unique identifier for this :term:`Delivery Service` -:infoUrl: This is a string which is expected to contain at least one URL pointing to more information about the :term:`Delivery Service`. 
Historically, this has been used to link relevant JIRA tickets -:initialDispersion: The number of caches between which traffic requesting the same object will be randomly split - meaning that if 4 clients all request the same object (one after another), then if this is above 4 there is a possibility that all 4 are cache misses. For most use-cases, this should be 1 -:ipv6RoutingEnabled: If ``true``, clients that connect to Traffic Router using IPv6 will be given the IPv6 address of a suitable :term:`Edge-tier cache server`; if ``false`` all addresses will be IPv4, regardless of the client connection -:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in a ``ctime``-like format -:logsEnabled: If ``true``, logging is enabled for this :term:`Delivery Service`, otherwise it is disabled -:longDesc: A description of the :term:`Delivery Service` -:longDesc1: A field used when more detailed information that that provided by ``longDesc`` is desired -:longDesc2: A field used when even more detailed information that that provided by either ``longDesc`` or ``longDesc1`` is desired -:matchList: An array of methods used by Traffic Router to determine whether or not a request can be serviced by this :term:`Delivery Service` +:geoLimit: An integer that defines the :ref:`ds-geo-limit` +:geoLimitCountries: A string containing a comma-separated list defining the :ref:`ds-geo-limit-countries` +:geoLimitRedirectUrl: A :ref:`ds-geo-limit-redirect-url` +:geoProvider: The :ref:`ds-geo-provider` +:globalMaxMbps: The :ref:`ds-global-max-mbps` +:globalMaxTps: The :ref:`ds-global-max-tps` +:httpBypassFqdn: A :ref:`ds-http-bypass-fqdn` +:id: An integral, unique identifier for this :term:`Delivery Service` +:infoUrl: An :ref:`ds-info-url` +:initialDispersion: The :ref:`ds-initial-dispersion` +:ipv6RoutingEnabled: A boolean that defines the :ref:`ds-ipv6-routing` setting on this :term:`Delivery Service` +:lastUpdated: The date and time at which this 
:term:`Delivery Service` was last updated, in :rfc:`3339` format +:logsEnabled: A boolean that defines the :ref:`ds-logs-enabled` setting on this :term:`Delivery Service` +:longDesc: The :ref:`ds-longdesc` of this :term:`Delivery Service` +:longDesc1: The :ref:`ds-longdesc2` of this :term:`Delivery Service` +:longDesc2: The :ref:`ds-longdesc3` of this :term:`Delivery Service` +:matchList: The :term:`Delivery Service`'s :ref:`ds-matchlist` :pattern: A regular expression - the use of this pattern is dependent on the ``type`` field (backslashes are escaped) - :setNumber: An integral, unique identifier for the set of types to which the ``type`` field belongs - :type: The type of match performed using ``pattern`` to determine whether or not to use this :term:`Delivery Service` - - HOST_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``Host:`` HTTP header of an HTTP request, or the name requested for resolution in a DNS request - HEADER_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches an HTTP header (both the name and value) in an HTTP request\ [#httpOnly]_ - PATH_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the request path of this :term:`Delivery Service`'s URL\ [#httpOnly]_ - STEERING_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``xml_id`` of one of this :term:`Delivery Service`'s "Steering" target :term:`Delivery Services` - -:maxDnsAnswers: The maximum number of IPs to put in a A/AAAA response for a DNS :term:`Delivery Service` (0 means all available) -:midHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:missLat: The latitude to use when the client cannot be found in the CZF or a geographic IP lookup -:missLong: The longitude to use when the client cannot be found in the CZF or a geographic IP lookup -:multiSiteOrigin: ``true`` if the Multi Site Origin feature is enabled for this 
:term:`Delivery Service`, ``false`` otherwise\ [3]_ -:originShield: An "origin shield" is a forward proxy that sits between Mid-tier caches and the origin and performs further caching beyond what's offered by a standard CDN. This field is a string of FQDNs to use as origin shields, delimited by ``|`` -:orgServerFqdn: The origin server's Fully Qualified Domain Name (FQDN) - including the protocol (e.g. http:// or https://) - for use in retrieving content from the origin server -:profileDescription: The description of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:profileId: The integral, unique identifier for the Traffic Router profile with which this :term:`Delivery Service` is associated -:profileName: The name of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:protocol: The protocol which clients will use to communicate with Edge-tier :term:`cache server` s\ [#httpOnly]_ - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - HTTP - 1 - HTTPS - 2 - Both HTTP and HTTPS - -:qstringIgnore: Tells caches whether or not to consider URLs with different query parameter strings to be distinct - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - URLs with different query parameter strings will be considered distinct for caching purposes, and query strings will be passed upstream to the origin - 1 - URLs with different query parameter strings will be considered identical for caching purposes, and query strings will be passed upstream to the origin - 2 - Query strings are stripped out by Edge-tier caches, and thus are neither taken into consideration for caching purposes, nor passed upstream in requests to the origin - -:rangeRequestHandling: Tells caches how to handle range requests\ [#httpOnly]_ - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - Range requests will not be cached, but range 
requests that request ranges of content already cached will be served from the cache - 1 - Use the `background_fetch plugin `_ to service the range request while caching the whole object - 2 - Use the `experimental cache_range_requests plugin `_ to treat unique ranges as unique objects - -:regexRemap: A regular expression remap rule to apply to this :term:`Delivery Service` at the Edge tier - - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ - -:regionalGeoBlocking: ``true`` if Regional Geo Blocking is in use within this :term:`Delivery Service`, ``false`` otherwise - see :ref:`regionalgeo-qht` for more information -:remapText: Additional, raw text to add to the remap line for caches - - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ - -:signed: ``true`` if token-based authentication is enabled for this :term:`Delivery Service`, ``false`` otherwise -:signingAlgorithm: Type of URL signing method to sign the URLs, basically comes down to one of two plugins or ``null``: - - ``null`` - Token-based authentication is not enabled for this :term:`Delivery Service` - url_sig: - URL Signing token-based authentication is enabled for this :term:`Delivery Service` - uri_signing - URI Signing token-based authentication is enabled for this :term:`Delivery Service` - - .. seealso:: `The Apache Trafficserver documentation for the url_sig plugin `_ and `the draft RFC for uri_signing `_ - note, however that the current implementation of uri_signing uses Draft 12 of that RFC document, NOT the latest. + :setNumber: An integer that provides explicit ordering of :ref:`ds-matchlist` items - this is used as a priority ranking by Traffic Router, and is not guaranteed to correspond to the ordering of items in the array. + :type: The type of match performed using ``pattern``. 
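Several :term:`Delivery Service` fields (``protocol``, ``geoLimit``, ``qstringIgnore``, and so on) are integral identifiers whose meanings are spelled out in the descriptions above. A minimal sketch of how an API client might decode them into readable names, using the value meanings documented for this endpoint; the enum member names themselves are illustrative, not part of the API:

```python
from enum import IntEnum

class Protocol(IntEnum):
    """Values of the ``protocol`` field, per this endpoint's documentation."""
    HTTP = 0
    HTTPS = 1
    HTTP_AND_HTTPS = 2

class GeoLimit(IntEnum):
    """Values of the ``geoLimit`` field."""
    NONE = 0        # no limitations
    CZF_ONLY = 1    # route only when the client IP is in the Coverage Zone File
    CZF_OR_US = 2   # CZF, or client determined to be in the United States

class QStringIgnore(IntEnum):
    """Values of the ``qstringIgnore`` field (member names are illustrative)."""
    USE = 0     # query strings distinct for caching, passed to the origin
    IGNORE = 1  # identical for caching, still passed to the origin
    DROP = 2    # stripped at the Edge tier, never passed to the origin

def describe(ds: dict) -> dict:
    """Decode the integral fields of a Delivery Service JSON object."""
    return {
        "protocol": Protocol(ds["protocol"]).name,
        "geoLimit": GeoLimit(ds["geoLimit"]).name,
        "qstringIgnore": QStringIgnore(ds["qstringIgnore"]).name,
    }

# Values taken from the "demo1" response example in this document
demo1 = {"protocol": 2, "geoLimit": 0, "qstringIgnore": 0}
print(describe(demo1))
```

Using ``IntEnum`` keeps the decoded values comparable to the raw integers the API returns, so existing numeric checks keep working.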
+ +:maxDnsAnswers: The :ref:`ds-max-dns-answers` allowed for this :term:`Delivery Service` +:maxOriginConnections: The :ref:`ds-max-origin-connections` + + .. versionadded:: 1.4 + +:midHeaderRewrite: A set of :ref:`ds-mid-header-rw-rules` +:missLat: The :ref:`ds-geo-miss-default-latitude` used by this :term:`Delivery Service` +:missLong: The :ref:`ds-geo-miss-default-longitude` used by this :term:`Delivery Service` +:multiSiteOrigin: A boolean that defines the use of :ref:`ds-multi-site-origin` by this :term:`Delivery Service` +:orgServerFqdn: The :ref:`ds-origin-url` +:originShield: A :ref:`ds-origin-shield` string +:profileDescription: The :ref:`profile-description` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileId: The :ref:`profile-id` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileName: The :ref:`profile-name` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:protocol: An integral, unique identifier that corresponds to the :ref:`ds-protocol` used by this :term:`Delivery Service` +:qstringIgnore: An integral, unique identifier that corresponds to the :ref:`ds-qstring-handling` setting on this :term:`Delivery Service` +:rangeRequestHandling: An integral, unique identifier that corresponds to the :ref:`ds-range-request-handling` setting on this :term:`Delivery Service` +:regexRemap: A :ref:`ds-regex-remap` +:regionalGeoBlocking: A boolean defining the :ref:`ds-regionalgeo` setting on this :term:`Delivery Service` +:remapText: :ref:`ds-raw-remap` +:signed: ``true`` if and only if ``signingAlgorithm`` is not ``null``, ``false`` otherwise +:signingAlgorithm: Either a :ref:`ds-signing-algorithm` or ``null`` to indicate URL/URI signing is not implemented on this :term:`Delivery Service` + + .. 
versionadded:: 1.3 + +:sslKeyVersion: This integer indicates the :ref:`ds-ssl-key-version` +:tenantId: The integral, unique identifier of the :ref:`ds-tenant` who owns this :term:`Delivery Service` .. versionadded:: 1.3 -:sslKeyVersion: This integer indicates the generation of keys in use by the :term:`Delivery Service` - if any - and is incremented by the Traffic Portal client whenever new keys are generated +:trRequestHeaders: If defined, this defines the :ref:`ds-tr-req-headers` used by Traffic Router for this :term:`Delivery Service` - .. warning:: This number will not be correct if keys are manually replaced using the API, as the key generation API does not increment it! + .. versionadded:: 1.3 -:tenant: The name of the tenant who owns this :term:`Delivery Service` +:trResponseHeaders: If defined, this defines the :ref:`ds-tr-resp-headers` used by Traffic Router for this :term:`Delivery Service` .. versionadded:: 1.3 -:tenantId: The integral, unique identifier of the tenant who owns this :term:`Delivery Service` -:trRequestHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router access logs for requests - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:trResponseHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router responses - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:type: The name of the routing type of this :term:`Delivery Service` e.g. 
"HTTP" -:typeId: The integral, unique identifier of the routing type of this :term:`Delivery Service` -:xmlId: A unique string that describes this :term:`Delivery Service` - exists for legacy reasons +:type: The :ref:`ds-types` of this :term:`Delivery Service` +:typeId: The integral, unique identifier of the :ref:`ds-types` of this :term:`Delivery Service` +:xmlId: This :term:`Delivery Service`'s :ref:`ds-xmlid` .. code-block:: http :caption: Response Example @@ -227,230 +170,189 @@ Response Structure Access-Control-Allow-Origin: * Content-Type: application/json Set-Cookie: mojolicious=...; Path=/; HttpOnly - Whole-Content-Sha512: Mw4ZsiNKfnxZvN+LsfAzxIZjgGTzcBLcZK24mMdhN1XMRBtwEj9VI3ExNvWKv3dp0f3HRRCUTx6C+ST8bRL9jA== + Whole-Content-Sha512: SYwzDioAWWqHo6IDYpwUMVZBp9rHHqQLfqzysMYuPJPlDGIrjM2z3CO5/3621VOVUoBTFzGeA9V3wo4K2TjeDQ== X-Server-Name: traffic_ops_golang/ - Date: Wed, 14 Nov 2018 21:43:36 GMT - Content-Length: 1290 - - { "response": [ - { - "active": true, - "anonymousBlockingEnabled": false, - "cacheurl": null, - "ccrDnsTtl": null, - "cdnId": 2, - "cdnName": "CDN-in-a-Box", - "checkPath": null, - "displayName": "Demo 1", - "dnsBypassCname": null, - "dnsBypassIp": null, - "dnsBypassIp6": null, - "dnsBypassTtl": null, - "dscp": 0, - "edgeHeaderRewrite": null, - "geoLimit": 0, - "geoLimitCountries": null, - "geoLimitRedirectURL": null, - "geoProvider": 0, - "globalMaxMbps": null, - "globalMaxTps": null, - "httpBypassFqdn": null, - "id": 1, - "infoUrl": null, - "initialDispersion": 1, - "ipv6RoutingEnabled": true, - "lastUpdated": "2018-11-14 18:21:17+00", - "logsEnabled": true, - "longDesc": "Apachecon North America 2018", - "longDesc1": null, - "longDesc2": null, - "matchList": [ - { - "type": "HOST_REGEXP", - "setNumber": 0, - "pattern": ".*\\.demo1\\..*" - } - ], - "maxDnsAnswers": null, - "midHeaderRewrite": null, - "missLat": 42, - "missLong": -88, - "multiSiteOrigin": false, - "originShield": null, - "orgServerFqdn": "http://origin.infra.ciab.test", - 
"profileDescription": null, - "profileId": null, - "profileName": null, - "protocol": 0, - "qstringIgnore": 0, - "rangeRequestHandling": 0, - "regexRemap": null, - "regionalGeoBlocking": false, - "remapText": null, - "routingName": "video", - "signed": false, - "sslKeyVersion": null, - "tenantId": 1, - "type": "HTTP", - "typeId": 1, - "xmlId": "demo1", - "exampleURLs": [ - "http://video.demo1.mycdn.ciab.test" - ], - "deepCachingType": "NEVER", - "signingAlgorithm": null, - "tenant": "root" - } - ]} + Date: Mon, 10 Jun 2019 13:43:48 GMT + Content-Length: 1500 + { "response": [{ + "active": true, + "anonymousBlockingEnabled": false, + "cacheurl": null, + "ccrDnsTtl": null, + "cdnId": 2, + "cdnName": "CDN-in-a-Box", + "checkPath": null, + "displayName": "Demo 1", + "dnsBypassCname": null, + "dnsBypassIp": null, + "dnsBypassIp6": null, + "dnsBypassTtl": null, + "dscp": 0, + "edgeHeaderRewrite": null, + "geoLimit": 0, + "geoLimitCountries": null, + "geoLimitRedirectURL": null, + "geoProvider": 0, + "globalMaxMbps": null, + "globalMaxTps": null, + "httpBypassFqdn": null, + "id": 1, + "infoUrl": null, + "initialDispersion": 1, + "ipv6RoutingEnabled": true, + "lastUpdated": "2019-06-10 13:05:19+00", + "logsEnabled": true, + "longDesc": "Apachecon North America 2018", + "longDesc1": null, + "longDesc2": null, + "matchList": [ + { + "type": "HOST_REGEXP", + "setNumber": 0, + "pattern": ".*\\.demo1\\..*" + } + ], + "maxDnsAnswers": null, + "midHeaderRewrite": null, + "missLat": 42, + "missLong": -88, + "multiSiteOrigin": false, + "originShield": null, + "orgServerFqdn": "http://origin.infra.ciab.test", + "profileDescription": null, + "profileId": null, + "profileName": null, + "protocol": 2, + "qstringIgnore": 0, + "rangeRequestHandling": 0, + "regexRemap": null, + "regionalGeoBlocking": false, + "remapText": null, + "routingName": "video", + "signed": false, + "sslKeyVersion": 1, + "tenantId": 1, + "type": "HTTP", + "typeId": 1, + "xmlId": "demo1", + "exampleURLs": [ + 
"http://video.demo1.mycdn.ciab.test", + "https://video.demo1.mycdn.ciab.test" + ], + "deepCachingType": "NEVER", + "fqPacingRate": null, + "signingAlgorithm": null, + "tenant": "root", + "trResponseHeaders": null, + "trRequestHeaders": null, + "consistentHashRegex": null, + "consistentHashQueryParams": [ + "abc", + "pdq", + "xxx", + "zyx" + ], + "maxOriginConnections": 0 + }]} -.. [1] Users with the :term:`Roles` "admin" and/or "operation" will be able to see *all* :term:`Delivery Services`, whereas any other user will only see the :term:`Delivery Services` their :term:`Tenant` is allowed to see. -.. [#httpOnly] This only applies to HTTP :term:`Delivery Services` -.. [3] See :ref:`ds-multi-site-origin` -.. [4] This only applies to DNS-:ref:`routed ` :term:`Delivery Services` ``PUT`` ======= Allows users to edit an existing :term:`Delivery Service`. :Auth. Required: Yes -:Roles Required: "admin" or "operations"\ [10]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: **NOT PRESENT** - Despite returning a ``200 OK`` response (rather than e.g. a ``204 NO CONTENT`` response), this endpoint does **not** return a representation of the modified resource in its payload, and instead returns nothing - not even a success message. Request Structure ----------------- -:active: If ``true``, the :term:`Delivery Service` will immediately become active and serves traffic -:anonymousBlockingEnabled: An optional field which, if defined and ``true`` will cause :ref:`Anonymous Blocking ` to be used with the new :term:`Delivery Service` -:cacheurl: An optional setting for a deprecated feature of now-unsupported Trafficserver versions (read: "Don't use this") +:active: A boolean that defines :ref:`ds-active`. +:anonymousBlockingEnabled: A boolean that defines :ref:`ds-anonymous-blocking` +:cacheurl: A :ref:`ds-cacheurl` .. 
deprecated:: ATCv3.0 This field has been deprecated in Traffic Control 3.x and is subject to removal in Traffic Control 4.x or later -:ccrDnsTtl: The Time To Live (TTL) in seconds of the DNS response for A or AAAA record queries requesting the IP address of the Traffic Router - named "ccrDnsTtl" for legacy reasons -:cdnId: The integral, unique identifier for the CDN to which this :term:`Delivery Service`\ shall be assigned -:checkPath: The path portion of the URL which will be used to check connections to this :term:`Delivery Service`'s origin server -:consistentHashRegex: If defined, this is a regular expression used for the Pattern-Based Consistent Hashing feature.\ [#httpOnly]_ +:ccrDnsTtl: The :ref:`ds-dns-ttl` - named "ccrDnsTtl" for legacy reasons +:cdnId: The integral, unique identifier of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:checkPath: A :ref:`ds-check-path` +:consistentHashRegex: A :ref:`ds-consistent-hashing-regex` .. versionadded:: 1.4 -:consistentHashQueryParams: A set (actually array due to limitations of JSON) of query parameters which will be considered by Traffic Router when using a client request to consistently find an :term:`Edge-tier cache server` to which to redirect them.\ [#httpOnly]_ +:consistentHashQueryParams: An array of :ref:`ds-consistent-hashing-qparams` + + .. versionadded:: 1.4 + +:deepCachingType: The :ref:`ds-deep-caching` setting for this :term:`Delivery Service` + + .. versionadded:: 1.3 + +:displayName: The :ref:`ds-display-name` +:dnsBypassCname: A :ref:`ds-dns-bypass-cname` +:dnsBypassIp: A :ref:`ds-dns-bypass-ip` +:dnsBypassIp6: A :ref:`ds-dns-bypass-ipv6` +:dnsBypassTtl: The :ref:`ds-dns-bypass-ttl` +:dscp: A :ref:`ds-dscp` to be used within the :term:`Delivery Service` +:edgeHeaderRewrite: A set of :ref:`ds-edge-header-rw-rules` +:fqPacingRate: The :ref:`ds-fqpr` + + .. 
versionadded:: 1.3 + +:geoLimit: An integer that defines the :ref:`ds-geo-limit` +:geoLimitCountries: A string containing a comma-separated list defining the :ref:`ds-geo-limit-countries`\ [#geolimit]_ +:geoLimitRedirectUrl: A :ref:`ds-geo-limit-redirect-url`\ [#geolimit]_ +:geoProvider: The :ref:`ds-geo-provider` +:globalMaxMbps: The :ref:`ds-global-max-mbps` +:globalMaxTps: The :ref:`ds-global-max-tps` +:httpBypassFqdn: A :ref:`ds-http-bypass-fqdn` +:infoUrl: An :ref:`ds-info-url` +:initialDispersion: The :ref:`ds-initial-dispersion` +:ipv6RoutingEnabled: A boolean that defines the :ref:`ds-ipv6-routing` setting on this :term:`Delivery Service` +:logsEnabled: A boolean that defines the :ref:`ds-logs-enabled` setting on this :term:`Delivery Service` +:longDesc: The :ref:`ds-longdesc` of this :term:`Delivery Service` +:longDesc1: An optional field containing the :ref:`ds-longdesc2` of this :term:`Delivery Service` +:longDesc2: An optional field containing the :ref:`ds-longdesc3` of this :term:`Delivery Service` +:maxDnsAnswers: The :ref:`ds-max-dns-answers` allowed for this :term:`Delivery Service` +:maxOriginConnections: The :ref:`ds-max-origin-connections` .. 
versionadded:: 1.4 -:deepCachingType: A string describing when to do Deep Caching for this :term:`Delivery Service`: - - NEVER - Deep Caching will never be used by this :term:`Delivery Service` (default) - ALWAYS - Deep Caching will always be used by this :term:`Delivery Service` - -:displayName: The human-friendly name for this :term:`Delivery Service` -:dnsBypassCname: Domain name to overflow requests for HTTP :term:`Delivery Service`\ s - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:dnsBypassIp: The IPv4 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:dnsBypassIp6: The IPv6 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:dnsBypassTtl: The time for which a DNS bypass of this :term:`Delivery Service`\ shall remain active -:dscp: The Differentiated Services Code Point (DSCP) with which to mark downstream (EDGE -> customer) traffic. 
This should be zero in most cases -:edgeHeaderRewrite: An optional string which, if present, defines rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:fqPacingRate: An optional integer which, if present, sets the Fair-Queuing Pacing Rate in bytes per second set on the all TCP connection sockets in the :term:`Delivery Service` (see ``man tc-fc_codel`` for more information) - Linux only, defaults to 0 meaning "disabled" -:geoLimit: The setting that determines how content is geographically limited - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - None - no limitations - 1 - Only route when the client's IP is found in the Coverage Zone File (CZF) - 2 - Only route when the client's IP is found in the CZF, or when the client can be determined to be from the United States of America - - .. warning:: This does not prevent access to content or make content secure; it merely prevents routing to the content through Traffic Router - -:geoLimitCountries: A string containing a comma-separated list of country codes (e.g. "US,AU") which are allowed to request content through this :term:`Delivery Service`\ [5]_ -:geoLimitRedirectUrl: A URL to which clients blocked by :ref:`Regional Geographic Blocking ` or the ``geoLimit`` settings will be re-directed\ [5]_ -:geoProvider: An integer that represents the provider of a database for mapping IPs to geographic locations; currently only the following values are supported: - - 0 - The "Maxmind" GeoIP2 database (default) - 1 - Neustar - -:globalMaxMbps: An optional integer that will set the maximum global bandwidth allowed on this :term:`Delivery Service`. 
If exceeded, traffic will be routed to ``dnsBypassIp`` (or ``dnsBypassIp6`` for IPv6 traffic) for DNS :term:`Delivery Service`\ s and to ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:globalMaxTps: An optional integer that will set the maximum global transactions per second allowed on this :term:`Delivery Service`. When this is exceeded traffic will be sent to the ``dnsBpassIp`` (and/or ``dnsBypassIp6``)for DNS :term:`Delivery Service`\ s and to the ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:httpBypassFqdn: An optional Fully Qualified Domain Name (FQDN) to use for bypass on an HTTP :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [#httpOnly]_ -:infoUrl: An optional string which, if present, is expected to contain at least one URL pointing to more information about the :term:`Delivery Service`. Historically, this has been used to link relevant JIRA tickets -:initialDispersion: The number of caches between which traffic requesting the same object will be randomly split - meaning that if 4 clients all request the same object (one after another), then if this is above 4 there is a possibility that all 4 are cache misses. 
For most use-cases, this should be 1\ [#httpOnly]_\ [6]_ -:ipv6RoutingEnabled: If ``true``, clients that connect to Traffic Router using IPv6 will be given the IPv6 address of a suitable :term:`Edge-tier cache server`; if ``false`` all addresses will be IPv4, regardless of the client connection - optional for ANY_MAP-:ref:`ds-types` :term:`Delivery Services` -:logsEnabled: If ``true``, logging is enabled for this :term:`Delivery Service`, otherwise it is disabled -:longDesc: An optional description of the :term:`Delivery Service` -:longDesc1: An optional field used when more detailed information that that provided by ``longDesc`` is desired -:longDesc2: An optional field used when even more detailed information that that provided by either ``longDesc`` or ``longDesc1`` is desired -:maxDnsAnswers: An optional field which, when present, specifies the maximum number of IPs to put in responses to A/AAAA DNS record requests - defaults to 0, meaning "no limit"\ [4]_ -:midHeaderRewrite: An optional string containing rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:missLat: The latitude to use when the client cannot be found in the CZF or a geographic IP lookup\ [7]_ -:missLong: The longitude to use when the client cannot be found in the CZF or a geographic IP lookup\ [7]_ -:multiSiteOrigin: ``true`` if the Multi Site Origin feature is enabled for this :term:`Delivery Service`, ``false`` otherwise\ [3]_\ [7]_ -:orgServerFqdn: The URL of the :term:`Delivery Service`'s origin server for use in retrieving content from the origin server\ [7]_ - - .. note:: Despite the field name, this must truly be a full URL - including the protocol (e.g. 
``http://`` or ``https://``) - **NOT** merely the server's Fully Qualified Domain Name (FQDN) - -:originShield: An "origin shield" is a forward proxy that sits between Mid-tier caches and the origin and performs further caching beyond what's offered by a standard CDN. This optional field is a string of FQDNs to use as origin shields, delimited by ``|`` -:profileId: An optional, integral, unique identifier for the Traffic Router profile with which this :term:`Delivery Service`\ shall be associated -:protocol: The protocol which clients will use to communicate with Edge-tier :term:`cache server` s - this is an (optional for ANY_MAP :term:`Delivery Service`\ s) integer on the interval [0,2] where the values have these meanings: - - 0 - HTTP - 1 - HTTPS - 2 - Both HTTP and HTTPS - -:qstringIgnore: Tells caches whether or not to consider URLs with different query parameter strings to be distinct\ [7]_ - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - URLs with different query parameter strings will be considered distinct for caching purposes, and query strings will be passed upstream to the origin - 1 - URLs with different query parameter strings will be considered identical for caching purposes, and query strings will be passed upstream to the origin - 2 - Query strings are stripped out by Edge-tier caches, and thus are neither taken into consideration for caching purposes, nor passed upstream in requests to the origin - -:rangeRequestHandling: Tells caches how to handle range requests\ [7]_ - this is an integer on the interval [0,2] where the values have these meanings: - - 0 - Range requests will not be cached, but range requests that request ranges of content already cached will be served from the cache - 1 - Use the `background_fetch plugin `_ to service the range request while caching the whole object - 2 - Use the `experimental cache_range_requests plugin `_ to treat unique ranges as unique objects - -:regexRemap: An 
optional, regular expression remap rule to apply to this :term:`Delivery Service` at the Edge tier - - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ - -:regionalGeoBlocking: ``true`` if Regional Geo Blocking is in use within this :term:`Delivery Service`, ``false`` otherwise - see :ref:`regionalgeo-qht` for more information -:remapText: Optional, raw text to add to the remap line for caches - - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ - -:routingName: The routing name of this :term:`Delivery Service`, used as the top-level part of the FQDN used by clients to request content from the :term:`Delivery Service` e.g. ``routingName.xml_id.CDNName.com`` -:signed: An optional field which should be ``true`` if token-based authentication\ [8]_ will be enabled for this :term:`Delivery Service`, ``false`` (default) otherwise -:signingAlgorithm: Type of URL signing method to sign the URLs\ [8]_, basically comes down to one of two plugins or ``null``: - - ``null`` - Token-based authentication is not enabled for this :term:`Delivery Service` - url_sig: - URL Signing token-based authentication is enabled for this :term:`Delivery Service` - uri_signing - URI Signing token-based authentication is enabled for this :term:`Delivery Service` - - .. seealso:: `The Apache Trafficserver documentation for the url_sig plugin `_ and `the draft RFC for uri_signing `_ - note, however that the current implementation of uri_signing uses Draft 12 of that RFC document, **NOT** the latest - -:sslKeyVersion: This optional integer indicates the generation of keys to be used by the :term:`Delivery Service` - if any - and is incremented by the Traffic Portal client whenever new keys are generated - - .. warning:: This number will not be correct if keys are manually replaced using the API, as the key generation API does not increment it! 
- -:tenantId: An optional, integral, unique identifier of the tenant who will own this :term:`Delivery Service` -:trRequestHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router access logs for requests - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:trResponseHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router responses - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:typeId: The integral, unique identifier for the routing type of this :term:`Delivery Service` -:xmlId: A unique string that describes this :term:`Delivery Service` - exists for legacy reasons +:midHeaderRewrite: A set of :ref:`ds-mid-header-rw-rules` +:missLat: The :ref:`ds-geo-miss-default-latitude` used by this :term:`Delivery Service` +:missLong: The :ref:`ds-geo-miss-default-longitude` used by this :term:`Delivery Service` +:multiSiteOrigin: A boolean that defines the use of :ref:`ds-multi-site-origin` by this :term:`Delivery Service` +:orgServerFqdn: The :ref:`ds-origin-url` +:originShield: A :ref:`ds-origin-shield` string +:profileId: An optional :ref:`profile-id` of the :ref:`ds-profile` with which this :term:`Delivery Service` will be associated +:protocol: An integral, unique identifier that corresponds to the :ref:`ds-protocol` used by this :term:`Delivery Service` +:qstringIgnore: An integral, unique identifier that corresponds to the :ref:`ds-qstring-handling` setting on this :term:`Delivery Service` +:rangeRequestHandling: An integral, unique identifier that corresponds to the :ref:`ds-range-request-handling` setting on this :term:`Delivery Service` +:regexRemap: A :ref:`ds-regex-remap` +:regionalGeoBlocking: A boolean defining the :ref:`ds-regionalgeo` setting on this :term:`Delivery Service` +:remapText: :ref:`ds-raw-remap` +:routingName: The :ref:`ds-routing-name` 
of this :term:`Delivery Service` +:signed: ``true`` if and only if ``signingAlgorithm`` is not ``null``, ``false`` otherwise +:signingAlgorithm: Either a :ref:`ds-signing-algorithm` or ``null`` to indicate URL/URI signing is not implemented on this :term:`Delivery Service` + + .. versionadded:: 1.3 + +:sslKeyVersion: This integer indicates the :ref:`ds-ssl-key-version` +:tenantId: The integral, unique identifier of the :ref:`ds-tenant` who owns this :term:`Delivery Service` + + .. versionadded:: 1.3 + +:trRequestHeaders: If defined, this defines the :ref:`ds-tr-req-headers` used by Traffic Router for this :term:`Delivery Service` + + .. versionadded:: 1.3 + +:trResponseHeaders: If defined, this defines the :ref:`ds-tr-resp-headers` used by Traffic Router for this :term:`Delivery Service` + + .. versionadded:: 1.3 + +:typeId: The integral, unique identifier of the :ref:`ds-types` of this :term:`Delivery Service` +:xmlId: This :term:`Delivery Service`'s :ref:`ds-xmlid` .. note:: While this field **must** be present, it is **not** allowed to change; this must be the same as the ``xml_id`` the :term:`Delivery Service` already has. This should almost never be different from the :term:`Delivery Service`'s ``displayName``. @@ -473,9 +375,6 @@ Request Structure "cdnName": "CDN-in-a-Box", "deepCachingType": "NEVER", "displayName": "demo", - "exampleURLs": [ - "http://video.demo.mycdn.ciab.test" - ], "dscp": 0, "geoLimit": 0, "geoProvider": 0, @@ -483,7 +382,7 @@ Request Structure "ipv6RoutingEnabled": false, "lastUpdated": "2018-11-14 18:21:17+00", "logsEnabled": true, - "longDesc": "A :term:`Delivery Service` created expressly for API documentation examples", + "longDesc": "A Delivery Service created expressly for API documentation examples", "missLat": -1, "missLong": -1, "multiSiteOrigin": false, @@ -500,10 +399,6 @@ Request Structure "xmlId": "demo1" } -.. [5] These fields must be defined if and only if ``geoLimit`` is non-zero -.. 
[6] These fields are required for HTTP-routed :term:`Delivery Service`\ s, and optional for all others -.. [7] These fields are required for HTTP-routed and DNS-routed :term:`Delivery Service`\ s, but are optional for (and in fact may have no effect on) STEERING and ANY_MAP :term:`Delivery Service`\ s -.. [8] See "token-based-auth" TODO --- wat for more information Response Structure ------------------ @@ -523,25 +418,23 @@ Response Structure Content-Type: text/plain; charset=utf-8 -.. [10] Users with the roles "admin" and/or "operation" will be able to edit *all* :term:`Delivery Service`\ s, whereas any other user will only be able to edit the :term:`Delivery Service`\ s their Tenant is allowed to edit. - ``DELETE`` ========== Deletes the target :term:`Delivery Service` :Auth. Required: Yes -:Roles Required: "admin" or "operations"\ [11]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: ``undefined`` Request Structure ----------------- .. table:: Request Path Parameters - +------+---------------------------------------------------------------------------------+ - | Name | Description | - +======+=================================================================================+ - | ID | The integral, unique identifier of the :term:`Delivery Service` to be retrieved | - +------+---------------------------------------------------------------------------------+ + +------+-------------------------------------------------------------------------------+ + | Name | Description | + +======+===============================================================================+ + | ID | The integral, unique identifier of the :term:`Delivery Service` to be deleted | + +------+-------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -577,4 +470,6 @@ Response Structure } ]} -.. 
[11] Users with the roles "admin" and/or "operation" will be able to delete *all* :term:`Delivery Service`\ s, whereas any other user will only be able to delete the :term:`Delivery Service`\ s their Tenant is allowed to delete. + +.. [#tenancy] Only those :term:`Delivery Services` assigned to :term:`Tenants` that are the requesting user's :term:`Tenant` or children thereof will appear in the output of a ``GET`` request, and the same constraints are placed on the allowed values of the ``tenantId`` field of a ``PUT`` request to update a :term:`Delivery Service`. Furthermore, the only :term:`Delivery Services` a user may delete are those assigned to a :term:`Tenant` that is either the same :term:`Tenant` as the user's :term:`Tenant`, or a descendant thereof. +.. [#geoLimit] These fields must be defined if and only if ``geoLimit`` is non-zero diff --git a/docs/source/api/deliveryservices_id_capacity.rst b/docs/source/api/deliveryservices_id_capacity.rst index 5070c3f85a..58b1c23dbe 100644 --- a/docs/source/api/deliveryservices_id_capacity.rst +++ b/docs/source/api/deliveryservices_id_capacity.rst @@ -26,7 +26,7 @@ Retrieves the usage percentages of the servers associated with a :term:`Delivery Service` :Auth. 
Required: Yes -:Roles Required: "admin" or "operations"\ [1]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: Object Request Structure @@ -41,8 +41,8 @@ Request Structure Response Structure ------------------ -:availablePercent: The percent of servers assigned to this :term:`Delivery Service` that is available - the allowed traffic level in terms of data per time period for all :term:`cache server`\ s that remains unused -:unavailablePercent: The percent of servers assigned to this :term:`Delivery Service` that is unavailable - the allowed traffic level in terms of data per time period for all :term:`cache server`\ s that can't be used because the servers are deemed unhealthy +:availablePercent: The percent of servers assigned to this :term:`Delivery Service` that is available - the allowed traffic level in terms of data per time period for all :term:`cache servers` that remains unused +:unavailablePercent: The percent of servers assigned to this :term:`Delivery Service` that is unavailable - the allowed traffic level in terms of data per time period for all :term:`cache servers` that can't be used because the servers are deemed unhealthy :utilizedPercent: The percent of servers assigned to this :term:`Delivery Service` that is currently in use - the allowed traffic level in terms of data per time period that is currently devoted to servicing requests :maintenancePercent: The percent of servers assigned to this :term:`Delivery Service` that is unavailable due to server maintenance - the allowed traffic level in terms of data per time period that is unavailable because servers have intentionally been marked offline by administrators @@ -70,4 +70,4 @@ Response Structure "maintenancePercent": 0 }} -.. [1] Users with the roles "admin" and/or "operations" will be able to see details for *all* :term:`Delivery Service`\ s, whereas any other user will only see details for the :term:`Delivery Service`\ s their Tenant is allowed to see. +.. 
[#tenancy] Users will only be able to see capacity details for the :term:`Delivery Services` their :term:`Tenant` is allowed to see. diff --git a/docs/source/api/deliveryservices_id_health.rst b/docs/source/api/deliveryservices_id_health.rst index 73666fe4a3..dc044104b5 100644 --- a/docs/source/api/deliveryservices_id_health.rst +++ b/docs/source/api/deliveryservices_id_health.rst @@ -23,10 +23,10 @@ ``GET`` ======= -Retrieves the health of all :term:`Cache Group`\ s assigned to a particular :term:`Delivery Service` +Retrieves the health of all :term:`Cache Groups` assigned to a particular :term:`Delivery Service` :Auth. Required: Yes -:Roles Required: "admin" or "operations"\ [1]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: Object Request Structure @@ -45,11 +45,11 @@ Response Structure :cachegroups: An array of objects that represent the health of each :term:`Cache Group` assigned to this :term:`Delivery Service` :name: The name of the :term:`Cache Group` represented by this object - :offline: The number of offline :term:`cache server`\ s within this :term:`Cache Group` - :online: The number of online :term:`cache server`\ s within this :term:`Cache Group` + :offline: The number of offline :term:`cache servers` within this :term:`Cache Group` + :online: The number of online :term:`cache servers` within this :term:`Cache Group` -:totalOffline: Total number of offline :term:`cache server`\ s assigned to this :term:`Delivery Service` -:totalOnline: Total number of online :term:`cache server`\ s assigned to this :term:`Delivery Service` +:totalOffline: Total number of offline :term:`cache servers` assigned to this :term:`Delivery Service` +:totalOnline: Total number of online :term:`cache servers` assigned to this :term:`Delivery Service` .. code-block:: http :caption: Response Example @@ -80,4 +80,4 @@ Response Structure ] }} -.. 
[1] Users with the roles "admin" and/or "operations" will be able to the see :term:`Cache Group`\ s associated with *any* :term:`Delivery Service`\ s, whereas any other user will only be able to see the :term:`Cache Group`\ s associated with :term:`Delivery Service`\ s their Tenant is allowed to see. +.. [#tenancy] Users will only be able to see :term:`Cache Group` health details for the :term:`Delivery Services` their :term:`Tenant` is allowed to see. diff --git a/docs/source/api/deliveryservices_id_regexes.rst b/docs/source/api/deliveryservices_id_regexes.rst index 48255c849b..9d1a9c6664 100644 --- a/docs/source/api/deliveryservices_id_regexes.rst +++ b/docs/source/api/deliveryservices_id_regexes.rst @@ -24,7 +24,7 @@ Retrieves routing regular expressions for a specific :term:`Delivery Service`. :Auth. Required: Yes -:Roles Required: None\ [1]_ +:Roles Required: None\ [#tenancy]_ :Response Type: Array Request Structure @@ -32,7 +32,7 @@ Request Structure .. table:: Request Path Parameters +------+---------------------------------------------------------------------------------+ - | Name | Description | + | Name | Description | +======+=================================================================================+ | ID | The integral, unique identifier of the :term:`Delivery Service` being inspected | +------+---------------------------------------------------------------------------------+ @@ -79,14 +79,13 @@ Response Structure } ]} -.. [1] If tenancy is used, then users (regardless of role) will only be able to see the routing regular expressions used by :term:`Delivery Service`\ s their tenant has permissions to see. ``POST`` ======== Creates a routing regular expression for a :term:`Delivery Service`. :Auth. Required: Yes -:Roles Required: "admin" or "operations"\ [2]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: Object Request Structure @@ -161,4 +160,4 @@ Response Structure }} -.. 
[2] If tenancy is used, then users (regardless of role) will only be able to edit the routing regular expressions used by :term:`Delivery Service`\ s their tenant has permissions to edit. Assuming tenancy is satisfied, a routing regular expression can only be created by a user with the "admin" or "operations" role. +.. [#tenancy] Users will only be able to view and create regular expressions for the :term:`Delivery Services` their :term:`Tenant` is allowed to see. diff --git a/docs/source/api/deliveryservices_id_regexes_rid.rst b/docs/source/api/deliveryservices_id_regexes_rid.rst index 94e4c15935..924a7ebe87 100644 --- a/docs/source/api/deliveryservices_id_regexes_rid.rst +++ b/docs/source/api/deliveryservices_id_regexes_rid.rst @@ -24,7 +24,7 @@ Retrieves a specific routing regular expression for a specific :term:`Delivery Service`. :Auth. Required: Yes -:Roles Required: None\ [1]_ +:Roles Required: None\ [#tenancy]_ :Response Type: Array Request Structure @@ -81,15 +81,13 @@ Response Structure } ]} -.. [1] If tenancy is used, then users (regardless of role) will only be able to see the routing regular expressions used by :term:`Delivery Service`\ s their tenant has permissions to see. - ``PUT`` ======= Updates a routing regular expression. :Auth. Required: Yes -:Roles Required: "admin" or "operations"\ [2]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: Object Request Structure @@ -166,14 +164,13 @@ Response Structure }} -.. [2] If tenancy is used, then users (regardless of role) will only be able to edit the routing regular expressions used by :term:`Delivery Service`\ s their tenant has permissions to edit. Assuming tenancy is satisfied, a routing regular expression can only be edited by a user with the "admin" or "operations" role. ``DELETE`` ========== Deletes a routing regular expression. :Auth. 
Required: Yes -:Roles Required: "admin" or "operations"\ [3]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: ``undefined`` Request Structure @@ -221,4 +218,4 @@ Response Structure } ]} -.. [3] If tenancy is used, then users (regardless of role) will only be able to delete the routing regular expressions used by :term:`Delivery Service`\ s their tenant has permissions to delete. Assuming tenancy is satisfied, a routing regular expression can only be deleted by a user with the "admin" or "operations" role. +.. [#tenancy] Users will only be able to view, update, and delete regular expressions for the :term:`Delivery Services` their :term:`Tenant` is allowed to see. diff --git a/docs/source/api/deliveryservices_id_routing.rst b/docs/source/api/deliveryservices_id_routing.rst index 5f1bcb0127..7f47186a3f 100644 --- a/docs/source/api/deliveryservices_id_routing.rst +++ b/docs/source/api/deliveryservices_id_routing.rst @@ -24,7 +24,7 @@ Retrieves routing method statistics for a particular :term:`Delivery Service` :Auth. 
Required: Yes -:Roles Required: "admin" or "operations"\ [1]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: Object Request Structure @@ -48,7 +48,7 @@ Request Structure Response Structure ------------------ -:cz: The percent of requests to the Traffic Router for this :term:`Delivery Service` that were satisfied by a coverage zone file (CZF) +:cz: The percent of requests to the Traffic Router for this :term:`Delivery Service` that were satisfied by a :term:`Coverage Zone File` :dsr: The percent of requests to the Traffic Router for this :term:`Delivery Service` that were satisfied by sending the client to an overflow :term:`Delivery Service` :err: The percent of requests to the Traffic Router for this :term:`Delivery Service` that resulted in an error :fed: The percent of requests to the Traffic Router for this :term:`Delivery Service` that were satisfied by sending the client to a federated CDN @@ -56,7 +56,7 @@ Response Structure :miss: The percent of requests to the Traffic Router for this :term:`Delivery Service` that could not be satisfied :regionalAlternate: The percent of requests to the Traffic Router for this :term:`Delivery Service` that were satisfied by sending the client to the alternate, Regional Geo-blocking URL :regionalDenied: The percent of Traffic Router requests for this :term:`Delivery Service` that were denied due to geographic location policy -:staticRoute: The percent of requests to the Traffic Router for this :term:`Delivery Service` that were satisfied with pre-configured DNS entries +:staticRoute: The percent of requests to the Traffic Router for this :term:`Delivery Service` that were satisfied with :ref:`ds-static-dns-entries` .. code-block:: http :caption: Response Example @@ -88,4 +88,4 @@ Response Structure "miss": 0 }} -.. 
[1] Users with the roles "admin" and/or "operations" will be able to see details for *all* :term:`Delivery Service`\ s, whereas any other user will only see details for the :term:`Delivery Service`\ s their Tenant is allowed to see. +.. [#tenancy] Users will only be able to view routing details for the :term:`Delivery Services` their :term:`Tenant` is allowed to see. diff --git a/docs/source/api/deliveryservices_id_safe.rst b/docs/source/api/deliveryservices_id_safe.rst index cf0f0426e3..ad56197870 100644 --- a/docs/source/api/deliveryservices_id_safe.rst +++ b/docs/source/api/deliveryservices_id_safe.rst @@ -24,7 +24,7 @@ Allows a user to edit metadata fields of a :term:`Delivery Service`. :Auth. Required: Yes -:Roles Required: "admin" or "operations"\ [1]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: Array Request Structure @@ -37,11 +37,11 @@ Request Structure | ID | The integral, unique identifier of the :term:`Delivery Service` being modified | +------+--------------------------------------------------------------------------------+ -:displayName: The human-friendly name for this :term:`Delivery Service` -:infoUrl: A string which is expected to contain at least one URL pointing to more information about the :term:`Delivery Service`. Historically, this has been used to link relevant JIRA tickets -:longDesc: A description of the :term:`Delivery Service` -:longDesc1: A field used when more detailed information that that provided by ``longDesc`` is desired -:longDesc2: A field used when even more detailed information that that provided by either ``longDesc`` or ``longDesc1`` is desired +:displayName: The :ref:`ds-display-name` +:infoUrl: An :ref:`ds-info-url` +:longDesc: The :ref:`ds-longdesc` of this :term:`Delivery Service` +:longDesc1: The :ref:`ds-longdesc2` of this :term:`Delivery Service` +:longDesc2: The :ref:`ds-longdesc3` of this :term:`Delivery Service` .. 
note:: All of these fields are optional; this ``PUT`` behaves more like a ``PATCH`` @@ -59,7 +59,7 @@ Request Structure { "displayName": "demo", "infoUrl": "www.info.com", - "longDesc": "A :term:`Delivery Service` created for the CDN-in-a-Box project", + "longDesc": "A Delivery Service created for the CDN-in-a-Box project", "longDesc1": null, "longDesc2": null } @@ -67,263 +67,200 @@ Request Structure Response Structure ------------------ -.. versionchanged:: 1.3 - Removed ``fqPacingRate`` field, added fields: ``deepCachingType``, ``signingAlgorithm``, and ``tenant``. - -:active: ``true`` if the :term:`Delivery Service` is active, ``false`` otherwise -:anonymousBlockingEnabled: ``true`` if :ref:`Anonymous Blocking ` has been configured for the :term:`Delivery Service`, ``false`` otherwise -:cacheurl: A setting for a deprecated feature of now-unsupported Trafficserver versions +:active: A boolean that defines :ref:`ds-active`. +:anonymousBlockingEnabled: A boolean that defines :ref:`ds-anonymous-blocking` +:cacheurl: A :ref:`ds-cacheurl` .. 
deprecated:: ATCv3.0 This field has been deprecated in Traffic Control 3.x and is subject to removal in Traffic Control 4.x or later -:ccrDnsTtl: The Time To Live (TTL) of the DNS response for A or AAAA record queries requesting the IP address of the Traffic Router - named "ccrDnsTtl" for legacy reasons -:cdnId: The integral, unique identifier of the CDN to which the :term:`Delivery Service` belongs -:cdnName: Name of the CDN to which the :term:`Delivery Service` belongs -:checkPath: The path portion of the URL to check connections to this :term:`Delivery Service`'s origin server -:consistentHashRegex: If defined, this is a regular expression used for the Pattern-Based Consistent Hashing feature.\ [#httpOnly]_ +:ccrDnsTtl: The :ref:`ds-dns-ttl` - named "ccrDnsTtl" for legacy reasons +:cdnId: The integral, unique identifier of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:cdnName: Name of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:checkPath: A :ref:`ds-check-path` +:consistentHashRegex: A :ref:`ds-consistent-hashing-regex` .. versionadded:: 1.4 -:consistentHashQueryParams: A set (actually array due to limitations of JSON) of query parameters which will be considered by Traffic Router when using a client request to consistently find an :term:`Edge-tier cache server` to which to redirect them.\ [#httpOnly]_ +:consistentHashQueryParams: An array of :ref:`ds-consistent-hashing-qparams` .. versionadded:: 1.4 -:deepCachingType: A string that describes when "Deep Caching" will be used by this :term:`Delivery Service` - one of: +:deepCachingType: The :ref:`ds-deep-caching` setting for this :term:`Delivery Service` + + .. 
versionadded:: 1.3 - ALWAYS - "Deep Caching" will always be used with this :term:`Delivery Service` - NEVER - "Deep Caching" will never be used with this :term:`Delivery Service` +:displayName: The :ref:`ds-display-name` +:dnsBypassCname: A :ref:`ds-dns-bypass-cname` +:dnsBypassIp: A :ref:`ds-dns-bypass-ip` +:dnsBypassIp6: A :ref:`ds-dns-bypass-ipv6` +:dnsBypassTtl: The :ref:`ds-dns-bypass-ttl` +:dscp: A :ref:`ds-dscp` to be used within the :term:`Delivery Service` +:edgeHeaderRewrite: A set of :ref:`ds-edge-header-rw-rules` +:exampleURLs: An array of :ref:`ds-example-urls` +:fqPacingRate: The :ref:`ds-fqpr` .. versionadded:: 1.3 -:displayName: The display name of the :term:`Delivery Service` -:dnsBypassCname: Domain name to overflow requests for HTTP :term:`Delivery Service`\ s - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassIp: The IPv4 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassIp6: The IPv6 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassTtl: The time for which a DNS bypass of this :term:`Delivery Service`\ shall remain active\ [4]_ -:dscp: The Differentiated Services Code Point (DSCP) with which to mark traffic as it leaves the CDN and reaches clients -:edgeHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:fqPacingRate: The Fair-Queuing Pacing Rate in Bytes per second set on the all TCP connection 
sockets in the :term:`Delivery Service` (see ``man tc-fc_codel`` for more information) - Linux only - - .. deprecated:: 1.3 - This field is only present/available in API versions 1.2 and lower - it has been removed in API version 1.3 - -:geoLimit: The setting that determines how content is geographically limited - this is an integer on the interval [0-2] where the values have these meanings: -:geoLimitCountries: A string containing a comma-separated list of country codes (e.g. "US,AU") which are allowed to request content through this :term:`Delivery Service` -:geoLimitRedirectUrl: A URL to which clients blocked by :ref:`Regional Geographic Blocking ` or the ``geoLimit`` settings will be re-directed - - 0 - None - no limitations - 1 - Only route when the client's IP is found in the Coverage Zone File (CZF) - 2 - Only route when the client's IP is found in the CZF, or when the client can be determined to be from the United States of America - - .. warning:: This does not prevent access to content or make content secure; it merely prevents routing to the content through Traffic Router - -:geoProvider: An integer that represents the provider of a database for mapping IPs to geographic locations; currently only ``0`` - which represents MaxMind - is supported -:globalMaxMbps: The maximum global bandwidth allowed on this :term:`Delivery Service`. If exceeded, traffic will be routed to ``dnsBypassIp`` (or ``dnsBypassIp6`` for IPv6 traffic) for DNS :term:`Delivery Service`\ s and to ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:globalMaxTps: The maximum global transactions per second allowed on this :term:`Delivery Service`. 
When this is exceeded traffic will be sent to the dnsByPassIp* for DNS :term:`Delivery Service`\ s and to the httpBypassFqdn for HTTP :term:`Delivery Service`\ s -:httpBypassFqdn: The HTTP destination to use for bypass on an HTTP :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:id: An integral, unique identifier for this :term:`Delivery Service` -:infoUrl: This is a string which is expected to contain at least one URL pointing to more information about the :term:`Delivery Service`. Historically, this has been used to link relevant JIRA tickets -:initialDispersion: The number of caches between which traffic requesting the same object will be randomly split - meaning that if 4 clients all request the same object (one after another), then if this is above 4 there is a possibility that all 4 are cache misses. For most use-cases, this should be 1 -:ipv6RoutingEnabled: If ``true``, clients that connect to Traffic Router using IPv6 will be given the IPv6 address of a suitable :term:`Edge-tier cache server`; if ``false`` all addresses will be IPv4, regardless of the client connection -:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in a ``ctime``-like format -:logsEnabled: If ``true``, logging is enabled for this :term:`Delivery Service`, otherwise it is disabled -:longDesc: A description of the :term:`Delivery Service` -:longDesc1: A field used when more detailed information that that provided by ``longDesc`` is desired -:longDesc2: A field used when even more detailed information that that provided by either ``longDesc`` or ``longDesc1`` is desired -:matchList: An array of methods used by Traffic Router to determine whether or not a request can be serviced by this :term:`Delivery Service` +:geoLimit: An integer that defines the :ref:`ds-geo-limit` +:geoLimitCountries: A 
string containing a comma-separated list defining the :ref:`ds-geo-limit-countries` +:geoLimitRedirectUrl: A :ref:`ds-geo-limit-redirect-url` +:geoProvider: The :ref:`ds-geo-provider` +:globalMaxMbps: The :ref:`ds-global-max-mbps` +:globalMaxTps: The :ref:`ds-global-max-tps` +:httpBypassFqdn: A :ref:`ds-http-bypass-fqdn` +:id: An integral, unique identifier for this :term:`Delivery Service` +:infoUrl: An :ref:`ds-info-url` +:initialDispersion: The :ref:`ds-initial-dispersion` +:ipv6RoutingEnabled: A boolean that defines the :ref:`ds-ipv6-routing` setting on this :term:`Delivery Service` +:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in :rfc:`3339` format +:logsEnabled: A boolean that defines the :ref:`ds-logs-enabled` setting on this :term:`Delivery Service` +:longDesc: The :ref:`ds-longdesc` of this :term:`Delivery Service` +:longDesc1: The :ref:`ds-longdesc2` of this :term:`Delivery Service` +:longDesc2: The :ref:`ds-longdesc3` of this :term:`Delivery Service` +:matchList: The :term:`Delivery Service`'s :ref:`ds-matchlist` :pattern: A regular expression - the use of this pattern is dependent on the ``type`` field (backslashes are escaped) - :setNumber: An integral, unique identifier for the set of types to which the ``type`` field belongs - :type: The type of match performed using ``pattern`` to determine whether or not to use this :term:`Delivery Service` - - HOST_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``Host:`` HTTP header of an HTTP request, or the name requested for resolution in a DNS request - HEADER_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches an HTTP header (both the name and value) in an HTTP request\ [#httpOnly]_ - PATH_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the request path of this :term:`Delivery Service`'s URL\ [#httpOnly]_ - STEERING_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``xml_id`` of one of this 
:term:`Delivery Service`'s "Steering" target :term:`Delivery Services` - -:maxDnsAnswers: The maximum number of IPs to put in a A/AAAA response for a DNS :term:`Delivery Service` (0 means all available)\ [4]_ -:midHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:missLat: The latitude to use when the client cannot be found in the CZF or a geographic IP lookup -:missLong: The longitude to use when the client cannot be found in the CZF or a geographic IP lookup -:multiSiteOrigin: ``true`` if the Multi Site Origin feature is enabled for this :term:`Delivery Service`, ``false`` otherwise\ [3]_ -:originShield: An "origin shield" is a forward proxy that sits between Mid-tier caches and the origin and performs further caching beyond what's offered by a standard CDN. This field is a string of FQDNs to use as origin shields, delimited by ``|`` -:orgServerFqdn: The origin server's Fully Qualified Domain Name (FQDN) - including the protocol (e.g. 
http:// or https://) - for use in retrieving content from the origin server -:profileDescription: The description of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:profileId: The integral, unique identifier for the Traffic Router profile with which this :term:`Delivery Service` is associated -:profileName: The name of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:protocol: The protocol which clients will use to communicate with Edge-tier :term:`cache server`\ s\ [#httpOnly]_ - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - HTTP - 1 - HTTPS - 2 - Both HTTP and HTTPS - -:qstringIgnore: Tells caches whether or not to consider URLs with different query parameter strings to be distinct - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - URLs with different query parameter strings will be considered distinct for caching purposes, and query strings will be passed upstream to the origin - 1 - URLs with different query parameter strings will be considered identical for caching purposes, and query strings will be passed upstream to the origin - 2 - Query strings are stripped out by Edge-tier caches, and thus are neither taken into consideration for caching purposes, nor passed upstream in requests to the origin - -:rangeRequestHandling: Tells caches how to handle range requests\ [5]_ - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - Range requests will not be cached, but range requests that request ranges of content already cached will be served from the cache - 1 - Use the `background_fetch plugin `_ to service the range request while caching the whole object - 2 - Use the `experimental cache_range_requests plugin `_ to treat unique ranges as unique objects - -:regexRemap: A regular expression remap rule to apply to this :term:`Delivery Service` at the Edge tier - - .. 
seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ - -:regionalGeoBlocking: ``true`` if Regional Geo Blocking is in use within this :term:`Delivery Service`, ``false`` otherwise - see :ref:`regionalgeo-qht` for more information -:remapText: Additional, raw text to add to the remap line for caches - - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ - -:signed: ``true`` if token-based authentication is enabled for this :term:`Delivery Service`, ``false`` otherwise -:signingAlgorithm: Type of URL signing method to sign the URLs, basically comes down to one of two plugins or ``null``: - - ``null`` - Token-based authentication is not enabled for this :term:`Delivery Service` - url_sig: - URL Signing token-based authentication is enabled for this :term:`Delivery Service` - uri_signing - URI Signing token-based authentication is enabled for this :term:`Delivery Service` - - .. seealso:: `The Apache Trafficserver documentation for the url_sig plugin `_ and `the draft RFC for uri_signing `_ - note, however that the current implementation of uri_signing uses Draft 12 of that RFC document, NOT the latest. + :setNumber: An integer that provides explicit ordering of :ref:`ds-matchlist` items - this is used as a priority ranking by Traffic Router, and is not guaranteed to correspond to the ordering of items in the array. + :type: The type of match performed using ``pattern``. + +:maxDnsAnswers: The :ref:`ds-max-dns-answers` allowed for this :term:`Delivery Service` +:maxOriginConnections: The :ref:`ds-max-origin-connections` + + .. 
versionadded:: 1.4 + +:midHeaderRewrite: A set of :ref:`ds-mid-header-rw-rules` +:missLat: The :ref:`ds-geo-miss-default-latitude` used by this :term:`Delivery Service` +:missLong: The :ref:`ds-geo-miss-default-longitude` used by this :term:`Delivery Service` +:multiSiteOrigin: A boolean that defines the use of :ref:`ds-multi-site-origin` by this :term:`Delivery Service` +:orgServerFqdn: The :ref:`ds-origin-url` +:originShield: A :ref:`ds-origin-shield` string +:profileDescription: The :ref:`profile-description` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileId: The :ref:`profile-id` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileName: The :ref:`profile-name` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:protocol: An integral, unique identifier that corresponds to the :ref:`ds-protocol` used by this :term:`Delivery Service` +:qstringIgnore: An integral, unique identifier that corresponds to the :ref:`ds-qstring-handling` setting on this :term:`Delivery Service` +:rangeRequestHandling: An integral, unique identifier that corresponds to the :ref:`ds-range-request-handling` setting on this :term:`Delivery Service` +:regexRemap: A :ref:`ds-regex-remap` +:regionalGeoBlocking: A boolean defining the :ref:`ds-regionalgeo` setting on this :term:`Delivery Service` +:remapText: :ref:`ds-raw-remap` +:signed: ``true`` if and only if ``signingAlgorithm`` is not ``null``, ``false`` otherwise +:signingAlgorithm: Either a :ref:`ds-signing-algorithm` or ``null`` to indicate URL/URI signing is not implemented on this :term:`Delivery Service` .. 
versionadded:: 1.3 -:sslKeyVersion: This integer indicates the generation of keys in use by the :term:`Delivery Service` - if any - and is incremented by the Traffic Portal client whenever new keys are generated +:sslKeyVersion: This integer indicates the :ref:`ds-ssl-key-version` +:tenantId: The integral, unique identifier of the :ref:`ds-tenant` who owns this :term:`Delivery Service` - .. warning:: This number will not be correct if keys are manually replaced using the API, as the key generation API does not increment it! + .. versionadded:: 1.3 + +:trRequestHeaders: If defined, this defines the :ref:`ds-tr-req-headers` used by Traffic Router for this :term:`Delivery Service` + + .. versionadded:: 1.3 -:tenant: The name of the tenant who owns this :term:`Delivery Service` +:trResponseHeaders: If defined, this defines the :ref:`ds-tr-resp-headers` used by Traffic Router for this :term:`Delivery Service` .. versionadded:: 1.3 -:tenantId: The integral, unique identifier of the :term:`Tenant` who owns this :term:`Delivery Service` -:trRequestHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router access logs for requests - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:trResponseHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router responses - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:type: The name of the routing type of this :term:`Delivery Service` e.g. 
"HTTP" -:typeId: The integral, unique identifier of the routing type of this :term:`Delivery Service` -:xmlId: A unique string that describes this :term:`Delivery Service` - exists for legacy reasons +:type: The :ref:`ds-types` of this :term:`Delivery Service` +:typeId: The integral, unique identifier of the :ref:`ds-types` of this :term:`Delivery Service` +:xmlId: This :term:`Delivery Service`'s :ref:`ds-xmlid` .. code-block:: http :caption: Response Example HTTP/1.1 200 OK Access-Control-Allow-Credentials: true - Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept + Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Set-Cookie, Cookie Access-Control-Allow-Methods: POST,GET,OPTIONS,PUT,DELETE Access-Control-Allow-Origin: * - Cache-Control: no-cache, no-store, max-age=0, must-revalidate Content-Type: application/json - Date: Mon, 19 Nov 2018 19:29:40 GMT - Server: Mojolicious (Perl) - Set-Cookie: mojolicious=...; expires=Mon, 19 Nov 2018 23:29:40 GMT; path=/; HttpOnly - Vary: Accept-Encoding - Whole-Content-Sha512: wSCPoNQbFTN0FonjXYH13jwTvOwo0ltSD0ACRQ4d/eaWIfzNyAFAD/RapflUP2PIqttb6NlnHkZve0j6ETJ+gw== - Content-Length: 1439 - - { "alerts": [ - { - "level": "success", - "text": "Deliveryservice safe update was successful." 
- } - ], - "response": [ - { - "profileId": null, - "protocol": 0, - "deepCachingType": "NEVER", - "regionalGeoBlocking": 0, - "routingName": "video", - "orgServerFqdn": "http://origin.infra.ciab.test", - "cdnId": 2, - "geoProvider": 0, - "longDesc2": null, - "globalMaxMbps": null, - "dnsBypassIp6": null, - "geoLimit": 0, - "maxDnsAnswers": null, - "id": 1, - "sslKeyVersion": null, - "midHeaderRewrite": null, - "geoLimitRedirectURL": null, - "active": 1, - "logsEnabled": 1, - "initialDispersion": 1, - "regexRemap": null, - "geoLimitCountries": null, - "missLat": 42, - "anonymousBlockingEnabled": 0, - "longDesc": "A :term:`Delivery Service` created for the CDN-in-a-Box project", - "matchList": [ - { - "pattern": ".*\\.demo1\\..*", - "setNumber": 0, - "type": "HOST_REGEXP" - } - ], - "rangeRequestHandling": 0, - "profileName": null, - "dnsBypassCname": null, - "globalMaxTps": null, - "type": "HTTP", - "httpBypassFqdn": null, - "infoUrl": "www.info.com", - "signingAlgorithm": null, - "missLong": -88, - "trRequestHeaders": null, - "trResponseHeaders": null, - "exampleURLs": [ - "http://video.demo1.mycdn.ciab.test" - ], - "remapText": null, - "longDesc1": null, - "displayName": "demo", - "qstringIgnore": 0, - "multiSiteOrigin": 0, - "xmlId": "demo1", - "lastUpdated": "2018-11-19 16:26:57.310527+00", - "ipv6RoutingEnabled": 1, - "ccrDnsTtl": null, - "dscp": 0, - "dnsBypassIp": null, - "dnsBypassTtl": null, - "originShield": null, - "cacheurl": null, - "edgeHeaderRewrite": null, - "profileDescription": null, - "typeId": 1, - "cdnName": "CDN-in-a-Box", - "signed": false, - "checkPath": null, - "fqPacingRate": null - } - ]} - - -.. [1] Users with the "admin" or "operations" roles will be able to edit *any*:term:`Delivery Service`, whereas other users will only be able to edit :term:`Delivery Services` that their tenant has permissions to edit. -.. [#httpOnly] This only applies to HTTP-:ref:`routed ` :term:`Delivery Services` -.. [3] See :ref:`ds-multi-site-origin` -.. 
[4] This only applies to DNS-:ref:`routed ` :term:`Delivery Services` -.. [5] These fields are required for HTTP-:ref:`routed ` and DNS-:ref:`routed ` :term:`Delivery Services`, but are optional for (and in fact may have no effect on) STEERING and ANY_MAP :term:`Delivery Services` + Set-Cookie: mojolicious=...; Path=/; HttpOnly + Whole-Content-Sha512: mCLMjvACRKHNGP/OSx4javkOtxxzyiDdQzsV78IamUhVmvyKyKaCeOKRmpsG69w+nhh3OkPZ6e9MMeJpcJSKcA== + X-Server-Name: traffic_ops_golang/ + Date: Thu, 15 Nov 2018 19:04:29 GMT + Transfer-Encoding: chunked + + { "response": [{ + "active": true, + "anonymousBlockingEnabled": false, + "cacheurl": null, + "ccrDnsTtl": null, + "cdnId": 2, + "cdnName": "CDN-in-a-Box", + "checkPath": null, + "displayName": "demo", + "dnsBypassCname": null, + "dnsBypassIp": null, + "dnsBypassIp6": null, + "dnsBypassTtl": null, + "dscp": 0, + "edgeHeaderRewrite": null, + "geoLimit": 0, + "geoLimitCountries": null, + "geoLimitRedirectURL": null, + "geoProvider": 0, + "globalMaxMbps": null, + "globalMaxTps": null, + "httpBypassFqdn": null, + "id": 1, + "infoUrl": "www.info.com", + "initialDispersion": 1, + "ipv6RoutingEnabled": true, + "lastUpdated": "2019-05-15 14:32:05+00", + "logsEnabled": true, + "longDesc": "A Delivery Service created for the CDN-in-a-Box project", + "longDesc1": null, + "longDesc2": null, + "matchList": [ + { + "type": "HOST_REGEXP", + "setNumber": 0, + "pattern": ".*\\.demo1\\..*" + } + ], + "maxDnsAnswers": null, + "midHeaderRewrite": null, + "missLat": 42, + "missLong": -88, + "multiSiteOrigin": false, + "originShield": null, + "orgServerFqdn": "http://origin.infra.ciab.test", + "profileDescription": null, + "profileId": null, + "profileName": null, + "protocol": 2, + "qstringIgnore": 0, + "rangeRequestHandling": 0, + "regexRemap": null, + "regionalGeoBlocking": false, + "remapText": null, + "routingName": "video", + "signed": false, + "sslKeyVersion": null, + "tenantId": 1, + "type": "HTTP", + "typeId": 1, + "xmlId": "demo1", + 
"exampleURLs": [ + "http://video.demo1.mycdn.ciab.test", + "https://video.demo1.mycdn.ciab.test" + ], + "deepCachingType": "NEVER", + "fqPacingRate": null, + "signingAlgorithm": null, + "tenant": "root", + "trResponseHeaders": null, + "trRequestHeaders": null, + "consistentHashRegex": null, + "consistentHashQueryParams": [ + "abc", + "pdq", + "xxx", + "zyx" + ], + "maxOriginConnections": 0 + }]} + + +.. [#tenancy] Only those :term:`Delivery Services` assigned to :term:`Tenants` that are the requesting user's :term:`Tenant` or children thereof may be modified with this endpoint. diff --git a/docs/source/api/deliveryservices_id_servers.rst b/docs/source/api/deliveryservices_id_servers.rst index 11500c83d5..7b72440061 100644 --- a/docs/source/api/deliveryservices_id_servers.rst +++ b/docs/source/api/deliveryservices_id_servers.rst @@ -71,9 +71,9 @@ Response Structure :offlineReason: A user-entered reason why the server is in ADMIN_DOWN or OFFLINE status (will be empty if not offline) :physLocation: The name of the physical location at which the server resides :physLocationId: An integral, unique identifier for the physical location at which the server resides -:profile: The name of the profile assigned to this server -:profileDesc: A description of the profile assigned to this server -:profileId: An integral, unique identifier for the profile assigned to this server +:profile: The :ref:`profile-name` of the :term:`Profile` assigned to this server +:profileDesc: A :ref:`profile-description` of the :term:`Profile` assigned to this server +:profileId: The :ref:`profile-id` of the :term:`Profile` assigned to this server :rack: A string indicating "rack" location :routerHostName: The human-readable name of the router :routerPortName: The human-readable name of the router port diff --git a/docs/source/api/deliveryservices_id_servers_eligible.rst b/docs/source/api/deliveryservices_id_servers_eligible.rst index 7cf42d5ff3..5380eb8041 100644 --- 
a/docs/source/api/deliveryservices_id_servers_eligible.rst +++ b/docs/source/api/deliveryservices_id_servers_eligible.rst @@ -72,9 +72,9 @@ Response Structure :offlineReason: A user-entered reason why the server is in ADMIN_DOWN or OFFLINE status (will be empty if not offline) :physLocation: The name of the physical location at which the server resides :physLocationId: An integral, unique identifier for the physical location at which the server resides -:profile: The name of the profile assigned to this server -:profileDesc: A description of the profile assigned to this server -:profileId: An integral, unique identifier for the profile assigned to this server +:profile: The :ref:`profile-name` of the :term:`Profile` assigned to this server +:profileDesc: A :ref:`profile-description` of the :term:`Profile` assigned to this server +:profileId: The :ref:`profile-id` of the :term:`Profile` assigned to this server :rack: A string indicating "rack" location :routerHostName: The human-readable name of the router :routerPortName: The human-readable name of the router port diff --git a/docs/source/api/deliveryservices_id_unassigned_servers.rst b/docs/source/api/deliveryservices_id_unassigned_servers.rst index fe390a367d..2b55f537fe 100644 --- a/docs/source/api/deliveryservices_id_unassigned_servers.rst +++ b/docs/source/api/deliveryservices_id_unassigned_servers.rst @@ -69,9 +69,9 @@ Response Structure :offlineReason: A user-entered reason why the server is in ADMIN_DOWN or OFFLINE status :physLocation: The physical location name :physLocationId: The physical location id -:profile: The assigned profile name -:profileDesc: The assigned profile description -:profileId: The assigned profile Id +:profile: The :ref:`profile-name` of the :term:`Profile` assigned to this server +:profileDesc: A :ref:`profile-description` of the :term:`Profile` assigned to this server +:profileId: The :ref:`profile-id` of the :term:`Profile` assigned to this server :rack: A string indicating rack 
location :routerHostName: The human readable name of the router :routerPortName: The human readable name of the router port diff --git a/docs/source/api/deliveryservices_sslkeys_add.rst b/docs/source/api/deliveryservices_sslkeys_add.rst index 62f783388c..87d56656a4 100644 --- a/docs/source/api/deliveryservices_sslkeys_add.rst +++ b/docs/source/api/deliveryservices_sslkeys_add.rst @@ -38,7 +38,7 @@ Request Structure :csr: The csr file for the :term:`Delivery Service` identified by ``key`` :key: The private key for the :term:`Delivery Service` identified by ``key`` -:key: The 'xml_id' of the :term:`Delivery Service` to which these keys will be assigned +:key: The :ref:`ds-xmlid` of the :term:`Delivery Service` to which these keys will be assigned :version: An integer that defines the "version" of the key - which may be thought of as the sequential generation; that is, the higher the number the more recent the key .. code-block:: http diff --git a/docs/source/api/deliveryservices_sslkeys_generate.rst b/docs/source/api/deliveryservices_sslkeys_generate.rst index 20ed278b98..a2f8fc5c0b 100644 --- a/docs/source/api/deliveryservices_sslkeys_generate.rst +++ b/docs/source/api/deliveryservices_sslkeys_generate.rst @@ -35,11 +35,11 @@ Request Structure .. 
note:: In most cases, this must be the same as the :term:`Delivery Service` URL' -:key: The 'xml_id' of the :term:`Delivery Service` for which keys will be generated +:key: The :ref:`ds-xmlid` of the :term:`Delivery Service` for which keys will be generated :organization: An optional field which, if present, will represent the organization for which the SSL certificate was generated :state: An optional field which, if present, will represent the resident state or province of the generated SSL certificate :businessUnit: An optional field which, if present, will represent the business unit for which the SSL certificate was generated -:version: version of the keys being generated +:version: The version of the keys being generated .. code-block:: http :caption: Request Example diff --git a/docs/source/api/deliveryservices_xmlid_servers.rst b/docs/source/api/deliveryservices_xmlid_servers.rst index d15ccfadd3..cbe0160e99 100644 --- a/docs/source/api/deliveryservices_xmlid_servers.rst +++ b/docs/source/api/deliveryservices_xmlid_servers.rst @@ -21,10 +21,10 @@ ``POST`` ======== -Assigns :term:`cache server`\ s to a :term:`Delivery Service`. +Assigns :term:`cache servers` to a :term:`Delivery Service`. :Auth. Required: Yes -:Roles Required: "admin" or "operations"\ [1]_ +:Roles Required: "admin" or "operations"\ [#tenancy]_ :Response Type: Object Request Structure @@ -37,7 +37,7 @@ Request Structure | xml_id | The 'xml_id' of the :term:`Delivery Service` whose server assignments are being edited | +--------+----------------------------------------------------------------------------------------+ -:serverNames: An array of hostname of :term:`cache server`\ s to assign to this :term:`Delivery Service` +:serverNames: An array of hostnames of :term:`cache servers` to assign to this :term:`Delivery Service` ..
code-block:: http :caption: Request Example @@ -54,8 +54,8 @@ Request Structure Response Structure ------------------ -:xml_id: The 'xml_id' of the :term:`Delivery Service` to which the servers in ``serverNames`` have been assigned -:serverNames: An array of hostnames of :term:`cache server`\ s assigned to :term:`Delivery Service` identified by ``xml_id`` +:xml_id: The :ref:`ds-xmlid` of the :term:`Delivery Service` to which the servers in ``serverNames`` have been assigned +:serverNames: An array of hostnames of :term:`cache servers` assigned to the :term:`Delivery Service` identified by ``xml_id`` .. code-block:: http :caption: Response Example @@ -79,4 +79,4 @@ Response Structure "xmlId": "test" }} -.. [1] Users with the roles "admin" and/or "operation" will be able to edit the server assignments of *all* :term:`Delivery Service`\ s, whereas any other user will only be able to edit the server assignments of the :term:`Delivery Service`\ s their Tenant is allowed to edit. +.. [#tenancy] Users can only assign servers to :term:`Delivery Services` that are visible to their :term:`Tenant`. diff --git a/docs/source/api/deliveryservices_xmlid_urisignkeys.rst b/docs/source/api/deliveryservices_xmlid_urisignkeys.rst index 039136e073..ebd77ff2b8 100644 --- a/docs/source/api/deliveryservices_xmlid_urisignkeys.rst +++ b/docs/source/api/deliveryservices_xmlid_urisignkeys.rst @@ -19,15 +19,18 @@ ``deliveryservices/{{xml_id}}/urisignkeys`` ******************************************* -DELETE +``DELETE`` +========== +Deletes URISigning objects for a :term:`Delivery Service`. - Deletes URISigning objects for a delivery service. +:Auth. Required: Yes +:Roles Required: admin\ [#tenancy]_ +:Response Type: ``undefined`` - Authentication Required: Yes +Request Structure +----------------- - Role(s) Required: admin - - **Request Route Parameters** +..
table:: Request Path Parameters +-----------+----------+----------------------------------------+ | Name | Required | Description | @@ -35,15 +38,21 @@ DELETE | xml_id | yes | xml_id of the desired delivery service | +-----------+----------+----------------------------------------+ -**GET deliveryservices/:xml_id/urisignkeys** - - Retrieves one or more URISigning objects for a delivery service. +Response Structure +------------------ +TBD - Authentication Required: Yes +``GET`` +======= +Retrieves one or more URISigning objects for a delivery service. - Role(s) Required: admin +:Auth. Required: Yes +:Roles Required: admin\ [#tenancy]_ +:Response Type: ``undefined`` - **Request Route Parameters** +Request Structure +----------------- +.. table:: Request Route Parameters +-----------+----------+----------------------------------------+ | Name | Required | Description | @@ -51,58 +60,50 @@ DELETE | xml_id | yes | xml_id of the desired delivery service | +-----------+----------+----------------------------------------+ - **Response Properties** - - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Type | Description | - +=====================+========+=========================================================================================================================================+ - | ``Issuer`` | string | a string describing the issuer of the URI signing object. Multiple URISigning objects may be returned in a response, see example | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``renewal_kid`` | string | a string naming the jwt key used for renewals. 
| - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``keys`` | string | json array of jwt symmetric keys . | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``alg`` | string | this parameter repeats for each jwt key in the array and specifies the jwa encryption algorithm to use with this key, :rfc:`7518` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``kid`` | string | this parameter repeats for each jwt key in the array and specifies the unique id for the key as defined in :rfc:`7516` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``kty`` | string | this parameter repeats for each jwt key in the array and specifies the key type as defined in :rfc:`7516` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``k`` | string | this parameter repeats for each jwt key in the array and specifies the base64 encoded symmetric key see :rfc:`7516` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - - **Response Example** :: - - { - "Kabletown URI Authority": { - "renewal_kid": "Second Key", - "keys": [ - { - "alg": "HS256", - "kid": "First Key", - "kty": "oct", - "k": "Kh_RkUMj-fzbD37qBnDf_3e_RvQ3RP9PaSmVEpE24AM" - }, - { - "alg": "HS256", - "kid": "Second Key", - "kty": "oct", - "k": 
"fZBpDBNbk2GqhwoB_DGBAsBxqQZVix04rIoLJ7p_RlE" - } - ] +Response Structure +------------------ + +:Issuer: A string describing the issuer of the URI signing object. Multiple URISigning objects may be returned in a response; see the example below +:renewal_kid: A string naming the JWT key used for renewals +:keys: A JSON array of JWT symmetric keys +:alg: This parameter repeats for each JWT key in the array and specifies the JWA encryption algorithm to use with this key, per :rfc:`7518` +:kid: This parameter repeats for each JWT key in the array and specifies the unique ID for the key as defined in :rfc:`7516` +:kty: This parameter repeats for each JWT key in the array and specifies the key type as defined in :rfc:`7516` +:k: This parameter repeats for each JWT key in the array and specifies the base64-encoded symmetric key; see :rfc:`7516` + +.. code-block:: json + :caption: Response Example + + { "Kabletown URI Authority": { + "renewal_kid": "Second Key", + "keys": [ + { + "alg": "HS256", + "kid": "First Key", + "kty": "oct", + "k": "Kh_RkUMj-fzbD37qBnDf_3e_RvQ3RP9PaSmVEpE24AM" + }, + { + "alg": "HS256", + "kid": "Second Key", + "kty": "oct", + "k": "fZBpDBNbk2GqhwoB_DGBAsBxqQZVix04rIoLJ7p_RlE" } - } - + ] + }} -**POST deliveryservices/:xml_id/urisignkeys** - Assigns URISigning objects to a delivery service. +``POST`` +======== +Assigns URISigning objects to a delivery service. - Authentication Required: Yes +:Auth. Required: Yes +:Roles Required: admin\ [#tenancy]_ +:Response Type: ``undefined`` - Role(s) Required: admin - - **Request Route Parameters** +Request Structure +----------------- +..
table:: Request Path Parameters +-----------+----------+----------------------------------------+ | Name | Required | Description | @@ -110,57 +111,48 @@ DELETE | xml_id | yes | xml_id of the desired delivery service | +-----------+----------+----------------------------------------+ - **Request Properties** - - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Type | Description | - +=====================+========+=========================================================================================================================================+ - | ``Issuer`` | string | a string describing the issuer of the URI signing object. Multiple URISigning objects may be returned in a response, see example | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``renewal_kid`` | string | a string naming the jwt key used for renewals. | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``keys`` | string | json array of jwt symmetric keys . 
| - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``alg`` | string | this parameter repeats for each jwt key in the array and specifies the jwa encryption algorithm to use with this key, :rfc:`7518` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``kid`` | string | this parameter repeats for each jwt key in the array and specifies the unique id for the key as defined in :rfc:`7516` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``kty`` | string | this parameter repeats for each jwt key in the array and specifies the key type as defined in :rfc:`7516` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``k`` | string | this parameter repeats for each jwt key in the array and specifies the base64 encoded symmetric key see :rfc:`7516` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - - **Request Example** :: - - { - "Kabletown URI Authority": { - "renewal_kid": "Second Key", - "keys": [ - { - "alg": "HS256", - "kid": "First Key", - "kty": "oct", - "k": "Kh_RkUMj-fzbD37qBnDf_3e_RvQ3RP9PaSmVEpE24AM" - }, - { - "alg": "HS256", - "kid": "Second Key", - "kty": "oct", - "k": "fZBpDBNbk2GqhwoB_DGBAsBxqQZVix04rIoLJ7p_RlE" - } - ] +Request Structure +----------------- +:Issuer: a string describing the issuer of the URI signing object. 
Multiple URISigning objects may be returned in a response, see example +:renewal_kid: a string naming the jwt key used for renewals +:keys: json array of jwt symmetric keys +:alg: this parameter repeats for each jwt key in the array and specifies the jwa encryption algorithm to use with this key, :rfc:`7518` +:kid: this parameter repeats for each jwt key in the array and specifies the unique id for the key as defined in :rfc:`7516` +:kty: this parameter repeats for each jwt key in the array and specifies the key type as defined in :rfc:`7516` +:k: this parameter repeats for each jwt key in the array and specifies the base64 encoded symmetric key see :rfc:`7516` + +.. code-block:: json + :caption: Request Example + + { "Kabletown URI Authority": { + "renewal_kid": "Second Key", + "keys": [ + { + "alg": "HS256", + "kid": "First Key", + "kty": "oct", + "k": "Kh_RkUMj-fzbD37qBnDf_3e_RvQ3RP9PaSmVEpE24AM" + }, + { + "alg": "HS256", + "kid": "Second Key", + "kty": "oct", + "k": "fZBpDBNbk2GqhwoB_DGBAsBxqQZVix04rIoLJ7p_RlE" } - } - -**PUT deliveryservices/:xml_id/urisignkeys** - - updates URISigning objects on a delivery service. + ] + }} - Authentication Required: Yes +``PUT`` +======= +Updates URISigning objects on a delivery service. - Role(s) Required: admin +:Auth. Required: Yes +:Roles Required: admin\ [#tenancy]_ +:Response Type: ``undefined`` - **Request Route Parameters** +Request Structure +----------------- +..
table:: Request Path Parameters +-----------+----------+----------------------------------------+ | Name | Required | Description | @@ -168,46 +160,35 @@ DELETE | xml_id | yes | xml_id of the desired delivery service | +-----------+----------+----------------------------------------+ - **Request Properties** - - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Type | Description | - +=====================+========+=========================================================================================================================================+ - | ``Issuer`` | string | a string describing the issuer of the URI signing object. Multiple URISigning objects may be returned in a response, see example | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``renewal_kid`` | string | a string naming the jwt key used for renewals. | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``keys`` | string | json array of jwt symmetric keys . 
| - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``alg`` | string | this parameter repeats for each jwt key in the array and specifies the jwa encryption algorithm to use with this key, :rfc:`7518` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``kid`` | string | this parameter repeats for each jwt key in the array and specifies the unique id for the key as defined in :rfc:`7516` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``kty`` | string | this parameter repeats for each jwt key in the array and specifies the key type as defined in :rfc:`7516` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ``k`` | string | this parameter repeats for each jwt key in the array and specifies the base64 encoded symmetric key see :rfc:`7516` | - +---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------+ - - **Request Example** :: - - { - "Kabletown URI Authority": { - "renewal_kid": "Second Key", - "keys": [ - { - "alg": "HS256", - "kid": "First Key", - "kty": "oct", - "k": "Kh_RkUMj-fzbD37qBnDf_3e_RvQ3RP9PaSmVEpE24AM" - }, - { - "alg": "HS256", - "kid": "Second Key", - "kty": "oct", - "k": "fZBpDBNbk2GqhwoB_DGBAsBxqQZVix04rIoLJ7p_RlE" - } - ] +Request Structure +----------------- +:Issuer: a string describing the issuer of the URI signing object. 
Multiple URISigning objects may be returned in a response, see example +:renewal_kid: a string naming the jwt key used for renewals +:keys: json array of jwt symmetric keys +:alg: this parameter repeats for each jwt key in the array and specifies the jwa encryption algorithm to use with this key, :rfc:`7518` +:kid: this parameter repeats for each jwt key in the array and specifies the unique id for the key as defined in :rfc:`7516` +:kty: this parameter repeats for each jwt key in the array and specifies the key type as defined in :rfc:`7516` +:k: this parameter repeats for each jwt key in the array and specifies the base64 encoded symmetric key see :rfc:`7516` + +.. code-block:: json + :caption: Request Example + + { "Kabletown URI Authority": { + "renewal_kid": "Second Key", + "keys": [ + { + "alg": "HS256", + "kid": "First Key", + "kty": "oct", + "k": "Kh_RkUMj-fzbD37qBnDf_3e_RvQ3RP9PaSmVEpE24AM" + }, + { + "alg": "HS256", + "kid": "Second Key", + "kty": "oct", + "k": "fZBpDBNbk2GqhwoB_DGBAsBxqQZVix04rIoLJ7p_RlE" } - } + ] + }} -| +.. [#tenancy] URI Signing Keys can only be created, viewed, deleted, or modified on :term:`Delivery Services` that either match the requesting user's :term:`Tenant` or are descendants thereof. 
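The ``k`` values in the URISigning objects above are base64url-encoded HS256 secrets, and ``alg: HS256`` means plain HMAC-SHA256. As a minimal, illustrative sketch of how a consumer of this endpoint might decode and use one of the documented example keys (this is not part of Traffic Ops itself; the helper names here are invented for illustration):

```python
import base64
import hashlib
import hmac


def b64url_decode(data: str) -> bytes:
    """Decode an unpadded base64url string, as used for the "k" field."""
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))


def hs256_sign(signing_input: bytes, key: bytes) -> bytes:
    """HS256 is simply HMAC-SHA256 over the token's signing input."""
    return hmac.new(key, signing_input, hashlib.sha256).digest()


# "k" value taken from the example URISigning object above.
key = b64url_decode("Kh_RkUMj-fzbD37qBnDf_3e_RvQ3RP9PaSmVEpE24AM")
print(len(key))  # 32 bytes, i.e. a 256-bit key, as expected for HS256

sig = hs256_sign(b"header.payload", key)
# Verification recomputes the MAC and compares it in constant time.
print(hmac.compare_digest(sig, hs256_sign(b"header.payload", key)))
```

A real URI-signing implementation would build the full signing input from the token's header and claims; the sketch only shows the key handling implied by the ``k``/``alg`` fields.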
diff --git a/docs/source/api/deliveryserviceserver.rst b/docs/source/api/deliveryserviceserver.rst index 996c5c5519..5dc56e9a90 100644 --- a/docs/source/api/deliveryserviceserver.rst +++ b/docs/source/api/deliveryserviceserver.rst @@ -107,9 +107,9 @@ Assign a set of one or more servers to a :term:`Delivery Service` Request Structure ----------------- -:deliveryService: The integral, unique identifier of the :term:`Delivery Service` to which the servers identified in the ``servers`` array will be assigned -:replace: If ``true``, any existing assignments for a server identified in the ``servers`` array will be overwritten by this request -:servers: An array of integral, unique identifiers for servers which are to be assigned to the :term:`Delivery Service` identified by ``deliveryService`` +:dsId: The integral, unique identifier of the :term:`Delivery Service` to which the servers identified in the ``servers`` array will be assigned +:replace: If ``true``, any existing assignments for a server identified in the ``servers`` array will be overwritten by this request +:servers: An array of integral, unique identifiers for servers which are to be assigned to the :term:`Delivery Service` identified by ``dsId`` ..
code-block:: http :caption: Request Example @@ -126,9 +126,9 @@ Request Structure Response Structure ------------------ -:deliveryService: The integral, unique identifier of the :term:`Delivery Service` to which the servers identified by the elements of the ``servers`` array have been assigned -:replace: If ``true``, any existing assignments for a server identified in the ``servers`` array have been overwritten by this request -:servers: An array of integral, unique identifiers for servers which have been assigned to the :term:`Delivery Service` identified by ``deliveryService`` +:dsId: The integral, unique identifier of the :term:`Delivery Service` to which the servers identified by the elements of the ``servers`` array have been assigned +:replace: If ``true``, any existing assignments for a server identified in the ``servers`` array have been overwritten by this request +:servers: An array of integral, unique identifiers for servers which have been assigned to the :term:`Delivery Service` identified by ``dsId`` .. code-block:: http :caption: Response Example diff --git a/docs/source/api/origins.rst b/docs/source/api/origins.rst index ece78f9495..bbd93db4b0 100644 --- a/docs/source/api/origins.rst +++ b/docs/source/api/origins.rst @@ -32,26 +32,26 @@ Request Structure ----------------- ..
table:: Request Query Parameters - +-----------------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Name | Required | Description | - +=================+==========+=====================================================================================================================================================================+ - | cachegroup | no | Return only :term:`origin`\ s within the :term:`Cache Group` identified by this integral, unique identifier | - +-----------------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | coordinate | no | Return only :term:`origin`\ s located at the geographic coordinates identified by this integral, unique identifier | - +-----------------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | deliveryservice | no | Return only :term:`origin`\ s that belong to the :term:`Delivery Service` identified by this integral, unique identifier | - +-----------------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | id | no | Return only the :term:`origin` that has this integral, unique identifier | - +-----------------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | name | no | Return only :term:`origin`\ s by this name | - 
+-----------------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | profileId | no | Return only :term:`origin`\ s which use the :term:`Profile` identified by this integral, unique identifier | - +-----------------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | primary | no | If ``true``, return only :term:`origin`\ s which are the the primary :term:`origin` of the :term:`Delivery Service` to which they belong - if ``false`` return only | - | | | :term:`origin`\ s which are *not* the primary :term:`origin` of the :term:`Delivery Service` to which they belong | - +-----------------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | tenant | no | Return only :term:`origin`\ s belonging to the tenant identified by this integral, unique identifier | - +-----------------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Name | Required | Description | + +=================+==========+===================================================================================================================================================================+ + | cachegroup | no | Return only :term:`origins` within the :term:`Cache Group` identified by this integral, unique identifier | + 
+-----------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | coordinate | no | Return only :term:`origins` located at the geographic coordinates identified by this integral, unique identifier | + +-----------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | deliveryservice | no | Return only :term:`origins` that belong to the :term:`Delivery Service` identified by this integral, unique identifier | + +-----------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | id | no | Return only the :term:`origin` that has this integral, unique identifier | + +-----------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | name | no | Return only :term:`origins` by this name | + +-----------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | profileId | no | Return only :term:`origins` which use the :term:`Profile` that has this :ref:`profile-id` | + +-----------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | primary | no | If ``true``, return only :term:`origins` which are the primary :term:`origin` of the :term:`Delivery Service` to which they belong - if ``false`` return only | + | | | :term:`origins` which are *not* the primary :term:`origin` of the :term:`Delivery
Service` to which they belong | + +-----------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | tenant | no | Return only :term:`origins` belonging to the tenant identified by this integral, unique identifier | + +-----------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+ .. note:: Several fields of origin definitions which are filterable by Query Parameters are allowed to be ``null``. ``null`` values in these fields will be filtered *out* appropriately by such Query Parameters, but do note that ``null`` is not a valid value accepted by any of these Query Parameters, and attempting to pass it will result in an error. @@ -80,8 +80,8 @@ Response Structure :lastUpdated: The date and time at which this :term:`origin` was last modified :name: The name of the :term:`origin` :port: The TCP port on which the :term:`origin` listens -:profile: The name of the :term:`Profile` used by this :term:`origin` -:profileId: An integral, unique identifier for the :term:`Profile` used by this :term:`origin` +:profile: The :ref:`profile-name` of the :term:`Profile` used by this :term:`origin` +:profileId: The :ref:`profile-id` of the :term:`Profile` used by this :term:`origin` :protocol: The protocol used by this origin - will be one of 'http' or 'https' :tenant: The name of the :term:`Tenant` that owns this :term:`origin` :tenantId: An integral, unique identifier for the :term:`Tenant` that owns this :term:`origin` @@ -149,7 +149,7 @@ Request Structure :name: A human-friendly name of the :term:`Origin` :port: An optional port number on which the :term:`origin` listens for incoming TCP connections -:profileId: An optional, integral, unique identifier for a :term:`Profile` that the new :term:`origin` shall use 
+:profileId: An optional :ref:`profile-id` of a :term:`Profile` that shall be used by this :term:`origin` :protocol: The protocol used by the origin - must be one of 'http' or 'https' :tenantId: An optional\ [1]_, integral, unique identifier for the :term:`Tenant` which shall own the new :term:`origin` @@ -191,8 +191,8 @@ Response Structure :lastUpdated: The date and time at which this :term:`origin` was last modified :name: The name of the :term:`origin` :port: The TCP port on which the :term:`origin` listens -:profile: The name of the :term:`Profile` used by this :term:`origin` -:profileId: An integral, unique identifier for the :term:`Profile` used by this :term:`origin` +:profile: The :ref:`profile-name` of the :term:`Profile` used by this :term:`origin` +:profileId: The :ref:`profile-id` of the :term:`Profile` used by this :term:`origin` :protocol: The protocol used by this origin - will be one of 'http' or 'https' :tenant: The name of the :term:`Tenant` that owns this :term:`origin` :tenantId: An integral, unique identifier for the :term:`Tenant` that owns this :term:`origin` @@ -267,7 +267,7 @@ Request Structure :isPrimary: An optional boolean which, if ``true`` will set this :term:`origin` as the 'primary' origin served by the :term:`Delivery Service` identified by ``deliveryServiceID`` :name: A human-friendly name of the :term:`Origin` :port: An optional port number on which the :term:`origin` listens for incoming TCP connections -:profileId: An optional, integral, unique identifier for a :term:`Profile` that the new :term:`origin` shall use +:profileId: An optional :ref:`profile-id` of the :term:`Profile` that shall be used by this :term:`origin` :protocol: The protocol used by the :term:`origin` - must be one of 'http' or 'https' :tenantId: An optional\ [1]_, integral, unique identifier for the :term:`Tenant` which shall own the new :term:`origin` @@ -309,8 +309,8 @@ Response Structure :lastUpdated: The date and time at which this :term:`origin` was last
modified :name: The name of the :term:`origin` :port: The TCP port on which the :term:`origin` listens -:profile: The name of the :term:`Profile` used by this :term:`origin` -:profileId: An integral, unique identifier for the :term:`Profile` used by this :term:`origin` +:profile: The :ref:`profile-name` of the :term:`Profile` used by this :term:`origin` +:profileId: The :ref:`profile-id` of the :term:`Profile` used by this :term:`origin` :protocol: The protocol used by this origin - will be one of 'http' or 'https' :tenant: The name of the :term:`Tenant` that owns this :term:`origin` :tenantId: An integral, unique identifier for the :term:`Tenant` that owns this :term:`origin` diff --git a/docs/source/api/osversions.rst b/docs/source/api/osversions.rst index 4a53a83285..1025b6df96 100644 --- a/docs/source/api/osversions.rst +++ b/docs/source/api/osversions.rst @@ -18,7 +18,7 @@ ************** ``osversions`` ************** -.. seealso:: :ref:`generate-iso` +.. seealso:: :ref:`tp-tools-generate-iso` ``GET`` ======= @@ -34,7 +34,7 @@ No parameters available. Response Structure ------------------ -This endpoint has no constant keys in its ``response``. Instead, each key in the ``response`` object is the name of an OS, and the value is a string that names the directory where the ISO source can be found. These directories sit under `/var/www/files/` on the Traffic Ops host machine by default, or at the location defined by the ``kickstart.files.location`` parameter, if it is defined. +This endpoint has no constant keys in its ``response``. Instead, each key in the ``response`` object is the name of an OS, and the value is a string that names the directory where the ISO source can be found. These directories sit under `/var/www/files/` on the Traffic Ops host machine by default, or at the location defined by the ``kickstart.files.location`` :term:`Parameter` of the Traffic Ops server's :term:`Profile`, if it is defined. ..
code-block:: http :caption: Response Example diff --git a/docs/source/api/parameterprofile.rst b/docs/source/api/parameterprofile.rst index 1505abe88c..8583578bca 100644 --- a/docs/source/api/parameterprofile.rst +++ b/docs/source/api/parameterprofile.rst @@ -23,7 +23,7 @@ ``POST`` ======== -Create one or more parameter/profile assignments. +Create one or more :term:`Parameter`/:term:`Profile` assignments. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -31,9 +31,9 @@ Create one or more parameter/profile assignments. Request Structure ----------------- -:paramId: The integral, unique identifier for the parameter to be assigned to the profiles identified within the ``profileIds`` array -:profileIds: An array of integral, unique identifiers for profiles to which the parameter identified by ``paramId`` shall be assigned -:replace: An optional boolean (default: false) which, if ``true``, will cause any conflicting profile/parameter assignments to be overridden. +:paramId: The :ref:`parameter-id` of the :term:`Parameter` to be assigned to the :term:`Profiles` identified within the ``profileIds`` array +:profileIds: An array of :term:`Profile` :ref:`IDs <profile-id>` to which the :term:`Parameter` identified by ``paramId`` shall be assigned +:replace: An optional boolean (default: false) which, if ``true``, will cause any conflicting :term:`Profile`/:term:`Parameter` assignments to be overridden. .. code-block:: http :caption: Request Example @@ -53,9 +53,9 @@ Request Structure Response Structure ------------------ -:paramId: The integral, unique identifier for the parameter which has been assigned to the profiles identified within the ``profileIds`` array -:profileIds: An array of integral, unique identifiers for profiles to which the parameter identified by ``paramId`` has been assigned -:replace: An optional boolean (default: false) which, if ``true``, caused any conflicting profile/parameter assignments to be overridden.
+:paramId: The :ref:`parameter-id` of the :term:`Parameter` which has been assigned to the :term:`Profiles` identified within the ``profileIds`` array +:profileIds: An array of :term:`Profile` :ref:`IDs <profile-id>` to which the :term:`Parameter` identified by ``paramId`` has been assigned +:replace: An optional boolean (default: false) which, if ``true``, caused any conflicting :term:`Profile`/:term:`Parameter` assignments to be overridden. .. code-block:: http :caption: Response Example diff --git a/docs/source/api/parameters.rst b/docs/source/api/parameters.rst index 68717f12b5..33257cd6cc 100644 --- a/docs/source/api/parameters.rst +++ b/docs/source/api/parameters.rst @@ -21,7 +21,7 @@ ``GET`` ======= -Gets all parameters configured in Traffic Ops +Gets all :term:`Parameters` configured in Traffic Ops :Auth. Required: Yes :Roles Required: None @@ -31,15 +31,15 @@ Request Structure ----------------- .. table:: Request Query Parameters - +------------+----------+--------------------------------------------------+ - | Name | Required | Description | - +============+==========+==================================================+ - | id | no | Filter parameters by integral, unique identifier | - +------------+----------+--------------------------------------------------+ - | name | no | Filter parameters by name | - +------------+----------+--------------------------------------------------+ - | configFile | no | Filter parameters by configuration file | - +------------+----------+--------------------------------------------------+ + +------------+----------+------------------------------------------------------------+ + | Name | Required | Description | + +============+==========+============================================================+ + | id | no | Filter :term:`Parameters` by :ref:`parameter-id` | + +------------+----------+------------------------------------------------------------+ + | name | no | Filter :term:`Parameters` by :ref:`parameter-name` |
+------------+----------+------------------------------------------------------------+ + | configFile | no | Filter :term:`Parameters` by :ref:`parameter-config-file` | + +------------+----------+------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -52,13 +52,13 @@ Request Structure Response Structure ------------------ -:configFile: The *base* filename to which this parameter belongs -:id: An integral, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last modified in ISO format -:name: The parameter name -:profiles: An array of profile names that use this parameter -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:profiles: An array of :term:`Profile` :ref:`Names <profile-name>` that use this :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Response Example @@ -92,7 +92,7 @@ Response Structure ``POST`` ======== -Creates one or more new parameters. +Creates one or more new :term:`Parameters`. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -100,14 +100,14 @@ Creates one or more new parameters. Request Structure ----------------- -The request body may be in one of two formats, a single parameter object or an array of parameter objects.
Each parameter object shall have the following keys: +The request body may be in one of two formats, a single :term:`Parameter` object or an array of :term:`Parameter` objects. Each :term:`Parameter` object shall have the following keys: -.. caution:: At the time of this writing, there is a bug in the Go rewrite of this endpoint such that the "array format" will not be accepted by the server. Watch `GitHub issue #3093 `_ for further developments +.. caution:: At the time of this writing, there is a bug in the Go rewrite of this endpoint such that the "array format" will not be accepted by the server. Watch :issue:`3093` for further developments -:name: Parameter name -:configFile: The *base* filename of the configuration file to which this parameter shall belong e.g. "foo" not "/path/to/foo" -:secure: A boolean value which, when ``true`` will prohibit users who do not have the "admin" role from viewing the parameter's ``value`` (at the time of this writing the obfuscation value is defined to be ``"********"``) -:value: Parameter value +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. 
code-block:: http :caption: Request Example - Single Object Format @@ -153,13 +153,13 @@ The request body may be in one of two formats, a single parameter object or an a Response Structure ------------------ -:configFile: The *base* filename to which this parameter belongs -:id: An integral, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last modified in ISO format -:name: The parameter name -:profiles: An array of profile names that use this parameter - should be ``null`` immediately after parameter creation -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:profiles: An array of :term:`Profile` :ref:`Names <profile-name>` that use this :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Response Example - Single Object Format diff --git a/docs/source/api/parameters_id.rst b/docs/source/api/parameters_id.rst index 00a1da4156..426c1ddc63 100644 --- a/docs/source/api/parameters_id.rst +++ b/docs/source/api/parameters_id.rst @@ -21,7 +21,7 @@ ``GET`` ======= -Gets details about a specific parameter +Gets details about a specific :term:`Parameter` ..
deprecated:: 1.1 Use the ``id`` query parameter of the :ref:`to-api-parameters` endpoint instead @@ -37,7 +37,7 @@ Request Structure +------+------------------------------------------------------------------------+ | Name | Description | +======+========================================================================+ - | ID | The integral, unique identifier of the parameter which will be deleted | + | ID | The :ref:`parameter-id` of the :term:`Parameter` which will be fetched | +------+------------------------------------------------------------------------+ .. code-block:: http @@ -51,13 +51,13 @@ Request Structure Response Structure ------------------ -:configFile: The *base* filename to which this parameter belongs -:id: An integral, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last modified in ISO format -:name: The parameter name -:profiles: An array of profile names that use this parameter -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:profiles: An array of :term:`Profile` :ref:`Names <profile-name>` that use this :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Response Example @@ -91,7 +91,7 @@ Response Structure ``PUT`` ======= -Replaces a parameter. +Replaces a :term:`Parameter`. :Auth.
Required: Yes :Roles Required: "admin" or "operations" @@ -104,13 +104,13 @@ Request Structure +------+------------------------------------------------------------------------+ | Name | Description | +======+========================================================================+ - | ID | The integral, unique identifier of the parameter which will be deleted | + | ID | The :ref:`parameter-id` of the :term:`Parameter` which will be deleted | +------+------------------------------------------------------------------------+ -:name: Parameter name -:configFile: The *base* filename of the configuration file to which this parameter shall belong e.g. "foo" not "/path/to/foo" -:secure: A boolean value which, when ``true`` will prohibit users who do not have the "admin" role from viewing the parameter's ``value`` (at the time of this writing the obfuscation value is defined to be ``"********"``) -:value: Parameter value +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. 
code-block:: http :caption: Request Example @@ -132,13 +132,13 @@ Request Structure Response Structure ------------------ -:configFile: The *base* filename to which this parameter belongs -:id: An integral, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last modified in ISO format -:name: The parameter name -:profiles: An array of profile names that use this parameter -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:profiles: An array of :term:`Profile` :ref:`Names ` that use this :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Response Example @@ -173,7 +173,7 @@ Response Structure ``DELETE`` ========== -Deletes the specified parameter. If, however, the parameter is associated with one or more profiles, deletion will fail. +Deletes the specified :term:`Parameter`. If, however, the :term:`Parameter` is associated with one or more :term:`Profiles`, deletion will fail. :Auth. 
Required: Yes :Roles Required: "admin" or "operations" @@ -186,7 +186,7 @@ Request Structure +------+------------------------------------------------------------------------+ | Name | Description | +======+========================================================================+ - | ID | The integral, unique identifier of the parameter which will be deleted | + | ID | The :ref:`parameter-id` of the :term:`Parameter` which will be deleted | +------+------------------------------------------------------------------------+ .. code-block:: http diff --git a/docs/source/api/parameters_id_profiles.rst b/docs/source/api/parameters_id_profiles.rst index 314c5a8b07..4d512fb020 100644 --- a/docs/source/api/parameters_id_profiles.rst +++ b/docs/source/api/parameters_id_profiles.rst @@ -23,7 +23,7 @@ ``GET`` ======= -Retrieves all profiles assigned to the parameter. +Retrieves all :term:`Profiles` assigned to a specific :term:`Parameter`. :Auth. Required: Yes :Roles Required: None @@ -33,11 +33,11 @@ Request Structure ----------------- .. table:: Request Path Parameters - +------+--------------------------------------------------------------------------------------------+ - | Name | Description | - +======+============================================================================================+ - | ID | An integral, unique identifier that specifies for which parameter shall profiles be listed | - +------+--------------------------------------------------------------------------------------------+ + +------+---------------------------------------------------------------------------------------------+ + | Name | Description | + +======+=============================================================================================+ + | ID | The :ref:`parameter-id` of the :term:`Parameter` for which :term:`Profiles` shall be listed | + +------+---------------------------------------------------------------------------------------------+ .. 
code-block:: http :caption: Request Structure @@ -50,18 +50,12 @@ Request Structure Response Structure ------------------ -:description: A description of profile -:id: An integral, unique identifier for this profile -:lastUpdated: The date and time at which this profile was last updated -:name: Profile name -:routingDisabled: An integer that defines whether or not Traffic Routers will route to servers using these profiles - can only be one of: - - 0 - Traffic Routers will route traffic to these servers normally - 1 - Traffic Routers will ignore these servers, and not route traffic to them - -:type: The profile's type +:description: The :term:`Profile`'s :ref:`profile-description` +:id: The :term:`Profile`'s :ref:`profile-id` +:lastUpdated: The date and time at which this :term:`Profile` was last updated, in an ISO-like format +:name: The :term:`Profile`'s :ref:`profile-name` +:routingDisabled: The :term:`Profile`'s :ref:`profile-routing-disabled` setting +:type: The :term:`Profile`'s :ref:`profile-type` .. code-block:: http :caption: Response Example diff --git a/docs/source/api/parameters_id_unassigned_profiles.rst b/docs/source/api/parameters_id_unassigned_profiles.rst index 60f01231f1..aacdd37e57 100644 --- a/docs/source/api/parameters_id_unassigned_profiles.rst +++ b/docs/source/api/parameters_id_unassigned_profiles.rst @@ -22,7 +22,7 @@ ``GET`` ======= -Retrieves all profiles to which the specified parameter is NOT assigned to the parameter. +Retrieves all :term:`Profiles` to which the specified :term:`Parameter` is *not* assigned. :Auth. Required: Yes :Roles Required: None @@ -32,11 +32,11 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +------+-------------------------------------------------------------------------------------------------------+ - | Name | Description | - +======+=======================================================================================================+ - | ID | An integral, unique identifier that specifies for which parameter unassigned profiles shall be listed | - +------+-------------------------------------------------------------------------------------------------------+ + +------+--------------------------------------------------------------------------------------------------------+ + | Name | Description | + +======+========================================================================================================+ + | ID | The :ref:`parameter-id` of the :term:`Parameter` for which unassigned :term:`Profiles` shall be listed | + +------+--------------------------------------------------------------------------------------------------------+ .. 
code-block:: http :caption: Request Example @@ -49,18 +49,12 @@ Request Structure Response Structure ------------------ -:description: A description of profile -:id: An integral, unique identifier for this profile -:lastUpdated: The date and time at which this profile was last updated -:name: Profile name -:routingDisabled: An integer that defines whether or not Traffic Routers will route to servers using these profiles - can only be one of: - - 0 - Traffic Routers will route traffic to these servers normally - 1 - Traffic Routers will ignore these servers, and not route traffic to them - -:type: The profile's type +:description: The :term:`Profile`'s :ref:`profile-description` +:id: The :term:`Profile`'s :ref:`profile-id` +:lastUpdated: The date and time at which this :term:`Profile` was last updated, in an ISO-like format +:name: The :term:`Profile`'s :ref:`profile-name` +:routingDisabled: The :term:`Profile`'s :ref:`profile-routing-disabled` setting +:type: The :term:`Profile`'s :ref:`profile-type` .. code-block:: http :caption: Response Example diff --git a/docs/source/api/parameters_profile_name.rst b/docs/source/api/parameters_profile_name.rst index b58158abbc..262efb6d9d 100644 --- a/docs/source/api/parameters_profile_name.rst +++ b/docs/source/api/parameters_profile_name.rst @@ -23,7 +23,7 @@ ``GET`` ======= -Gets details about a specific profile's parameters +Gets details about a specific :term:`Profile`'s :term:`Parameters` :Auth. Required: Yes :Roles Required: None @@ -33,11 +33,11 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +------+-------------------------------------------------------------+ - | Name | Description | - +======+=============================================================+ - | name | The name of the profile for which parameters will be listed | - +------+-------------------------------------------------------------+ + +------+--------------------------------------------------------------------------------------------+ + | Name | Description | + +======+============================================================================================+ + | name | The :ref:`profile-name` of the :term:`Profile` for which :term:`Parameters` will be listed | + +------+--------------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -50,13 +50,13 @@ Request Structure Response Structure ------------------ -:configFile: The *base* filename to which this parameter belongs -:id: An integral, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last modified in ISO format -:name: The parameter name -:profiles: An array of profile names that use this parameter -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:profiles: An array of :term:`Profile` :ref:`Names ` that use this :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: 
The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Response Example diff --git a/docs/source/api/parameters_validate.rst b/docs/source/api/parameters_validate.rst index e7ebb8c926..c3dd0d3d54 100644 --- a/docs/source/api/parameters_validate.rst +++ b/docs/source/api/parameters_validate.rst @@ -19,11 +19,11 @@ ``parameters/validate`` *********************** .. deprecated:: 1.1 - To check for the existence of a parameter with specific name, value etc., use the query parameters of the :ref:`to-api-parameters` endpoint instead. + To check for the existence of a :term:`Parameter` with a specific :ref:`parameter-name`, :ref:`parameter-value` etc., use the query parameters of the :ref:`to-api-parameters` endpoint instead. ``POST`` ======== -Returns a successful response and message if a parameter matching the one in the payload exists, and an error response and message if no such parameter is found. +Returns a successful response and message if a :term:`Parameter` matching the one in the payload exists, and an error response and message if no such :term:`Parameter` is found. :Auth. Required: Yes :Roles Required: None @@ -31,10 +31,10 @@ Returns a successful response and message if a parameter matching the one in the Request Structure ----------------- -:name: Parameter name -:configFile: The *base* filename of the configuration file to which this parameter belongs e.g. 
"foo" not "/path/to/foo" -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Request Example @@ -56,13 +56,11 @@ Request Structure Response Structure ------------------ -:configFile: The *base* filename to which this parameter belongs -:id: An integral, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last modified in ISO format -:name: The parameter name -:profiles: An array of profile names that use this parameter -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Response Example - Parameter Found @@ -119,4 +117,4 @@ Response Structure } ]} -.. 
note:: This endpoint returns a client-side error response when the parameter was not found - as such any API tools that wish to use this endpoint should be aware that a client-side error response code may not actually mean that an error occurred. However, neither can it be said that a ``400`` response code means that the parameter wasn't found; that response code is also returned in the event of _true_ client-side errors e.g. a malformed JSON payload in the request. +.. note:: This endpoint returns a client-side error response when the parameter was not found - as such any API tools that wish to use this endpoint should be aware that a client-side error response code may not actually mean that an error occurred. However, neither can it be said that a ``400`` response code means that the :term:`Parameter` wasn't found; that response code is also returned in the event of _true_ client-side errors e.g. a malformed JSON payload in the request. diff --git a/docs/source/api/profileparameter.rst b/docs/source/api/profileparameter.rst index 88c1afdea6..908885b2d8 100644 --- a/docs/source/api/profileparameter.rst +++ b/docs/source/api/profileparameter.rst @@ -18,12 +18,11 @@ ******************** ``profileparameter`` ******************** -.. deprecated:: 1.1 - Use :ref:`to-api-profileparameters` instead. +.. seealso:: :ref:`to-api-profileparameters`. ``POST`` ======== -Create one or more profile/parameter assignments. +Create one or more :term:`Profile`/:term:`Parameter` assignments. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -31,9 +30,9 @@ Create one or more profile/parameter assignments. 
Request Structure ----------------- -:paramIds: An array of integral, unique identifiers for parameters which shall be assigned to the profile identified by ``profileId`` -:profileId: The integral, unique identifier of a profile to which parameters will be assigned -:replace: An optional boolean (default: false) which, if ``true``, will cause any conflicting profile/parameter assignments to be overridden. +:profileId: The :ref:`profile-id` of the :term:`Profile` to which the :term:`Parameters` identified within the ``parameterIds`` array will be assigned +:paramIds: An array of :term:`Parameter` :ref:`IDs ` which shall be assigned to the :term:`Profile` identified by ``profileId`` +:replace: An optional boolean (default: false) which, if ``true``, will cause any conflicting :term:`Profile`/:term:`Parameter` assignments to be overridden. .. code-block:: http :caption: Request Example @@ -53,9 +52,9 @@ Request Structure Response Structure ------------------ -:paramIds: An array of integral, unique identifiers for parameters which have been assigned to the profile identified by ``profileId`` -:profileId: The integral, unique identifier of a profile to which parameters have been assigned -:replace: An optional boolean (default: false) which, if ``true``, caused any conflicting profile/parameter assignments to be overridden. +:profileId: The :ref:`profile-id` of the :term:`Profile` to which the :term:`Parameters` identified within the ``parameterIds`` array are assigned +:paramIds: An array of :term:`Parameter` :ref:`IDs ` which have been assigned to the :term:`Profile` identified by ``profileId`` +:replace: An optional boolean (default: false) which, if ``true``, indicates that any conflicting :term:`Profile`/:term:`Parameter` assignments have been overridden. .. 
code-block:: http :caption: Response Example diff --git a/docs/source/api/profileparameters.rst b/docs/source/api/profileparameters.rst index 348f5ae66a..949c8a7672 100644 --- a/docs/source/api/profileparameters.rst +++ b/docs/source/api/profileparameters.rst @@ -22,9 +22,9 @@ ``GET`` ======= .. deprecated:: 1.1 - To get the profiles associated with a particular parameter, use the ``param`` query parameter of :ref:`to-api-profiles` instead. To see the parameters associated with a particular profile, refer to the ``params`` key in the response of a ``GET`` request to :ref:`to-api-profiles-id` instead. + To get the :term:`Profiles` associated with a particular :term:`Parameter`, use the ``param`` query parameter of :ref:`to-api-profiles` instead. To see the :term:`Parameters` associated with a particular :term:`Profile`, refer to the ``params`` key in the response of a ``GET`` request to :ref:`to-api-profiles-id` instead. -Retrieves all parameter/profile assignments. +Retrieves all :term:`Parameter`/:term:`Profile` assignments. :Auth. Required: Yes :Roles Required: None @@ -36,9 +36,9 @@ No parameters available Response Structure ------------------ -:lastUpdated: The date and time at which this profile/parameter association was last modified -:parameter: An integral, unique identifier for a parameter assigned to ``profile`` -:profile: The name of the profile to which the parameter identified by ``parameter`` is assigned +:lastUpdated: The date and time at which this :term:`Profile`/:term:`Parameter` association was last modified, in an ISO-like format +:parameter: The :ref:`parameter-id` of a :term:`Parameter` assigned to ``profile`` +:profile: The :ref:`profile-name` of the :term:`Profile` to which the :term:`Parameter` identified by ``parameter`` is assigned .. code-block:: http :caption: Response Structure @@ -72,7 +72,7 @@ Response Structure ``POST`` ======== -Associate parameter to profile. +Associate a :term:`Parameter` to a :term:`Profile`. :Auth. 
Required: Yes :Roles Required: "admin" or "operations" @@ -83,14 +83,14 @@ Request Structure This endpoint accepts two formats for the request payload: Single Object Format - For assigning a single parameter to a single profile + For assigning a single :term:`Parameter` to a single :term:`Profile` Array Format - For making multiple assignments of parameters to profiles simultaneously + For making multiple assignments of :term:`Parameters` to :term:`Profiles` simultaneously Single Object Format """""""""""""""""""" -:parameterId: The integral, unique identifier of a parameter to assign to some profile -:profileId: The integral, unique identifier of the profile to which the parameter identified by ``parameterId`` will be assigned +:parameterId: The :ref:`parameter-id` of a :term:`Parameter` to assign to some :term:`Profile` +:profileId: The :ref:`profile-id` of the :term:`Profile` to which the :term:`Parameter` identified by ``parameterId`` will be assigned .. code-block:: http :caption: Request Example - Single Object Format @@ -110,10 +110,10 @@ Single Object Format Array Format """""""""""" -.. caution:: Array format is broken as of the time of this writing. Follow `GitHub Issue #3103 `_ for further developments. +.. caution:: Array format is broken as of the time of this writing. Follow :issue:`3103` for further developments. -:parameterId: The integral, unique identifier of a parameter to assign to some profile -:profileId: The integral, unique identifier of the profile to which the parameter identified by ``parameterId`` will be assigned +:parameterId: The :ref:`parameter-id` of a :term:`Parameter` to assign to some :term:`Profile` +:profileId: The :ref:`profile-id` of the :term:`Profile` to which the :term:`Parameter` identified by ``parameterId`` will be assigned .. 
code-block:: http :caption: Request Example - Array Format @@ -137,11 +137,11 @@ Array Format Response Structure ------------------ -:lastUpdated: The date and time at which the profile/parameter assignment was last modified, in ISO format -:parameter: Name of the parameter which is assigned to ``profile`` -:parameterId: The integral, unique identifier of the assigned parameter -:profile: Name of the profile to which the parameter is assigned -:profileId: The integral, unique identifier of the profile to which the parameter identified by ``parameterId`` is assigned +:lastUpdated: The date and time at which the :term:`Profile`/:term:`Parameter` assignment was last modified, in an ISO-like format +:parameter: :ref:`parameter-name` of the :term:`Parameter` which is assigned to ``profile`` +:parameterId: The :ref:`parameter-id` of the assigned :term:`Parameter` +:profile: :ref:`profile-name` of the :term:`Profile` to which the :term:`Parameter` is assigned +:profileId: The :ref:`profile-id` of the :term:`Profile` to which the :term:`Parameter` identified by ``parameterId`` is assigned .. code-block:: http :caption: Response Example - Single Object Format diff --git a/docs/source/api/profileparameters_profileID_parameterID.rst b/docs/source/api/profileparameters_profileID_parameterID.rst index 47628f9bb3..28abfe0c59 100644 --- a/docs/source/api/profileparameters_profileID_parameterID.rst +++ b/docs/source/api/profileparameters_profileID_parameterID.rst @@ -21,7 +21,7 @@ ``DELETE`` ========== -Deletes a profile/parameter association. +Deletes a :term:`Profile`/:term:`Parameter` association. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -31,13 +31,13 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +-------------+----------------------------------------------------------------------------------------------------------------------+ - | Name | Description | - +=============+======================================================================================================================+ - | profileID | The integral, unique identifier of the profile from which a parameter shall be removed | - +-------------+----------------------------------------------------------------------------------------------------------------------+ - | parameterID | The integral, unique identifier of the parameter which shall be removed from the profile identified by ``profileID`` | - +-------------+----------------------------------------------------------------------------------------------------------------------+ + +-------------+------------------------------------------------------------------------------------------------------------------------------+ + | Name | Description | + +=============+==============================================================================================================================+ + | profileID | The :ref:`profile-id` of the :term:`Profile` from which a :term:`Parameter` shall be removed | + +-------------+------------------------------------------------------------------------------------------------------------------------------+ + | parameterID | The :ref:`parameter-id` of the :term:`Parameter` which shall be removed from the :term:`Profile` identified by ``profileID`` | + +-------------+------------------------------------------------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example diff --git a/docs/source/api/profiles.rst b/docs/source/api/profiles.rst index ace4bb8bf6..042028bf6e 100644 --- a/docs/source/api/profiles.rst +++ b/docs/source/api/profiles.rst @@ -29,17 +29,17 @@ Request Structure ----------------- .. 
table:: Request Query Parameters - +-------+----------+------------------------------------------------------------------------------------------------+ - | Name | Required | Description | - +=======+==========+================================================================================================+ - | cdn | no | Used to filter profiles by the integral, unique identifier of the CDN to which they belong | - +-------+----------+------------------------------------------------------------------------------------------------+ - | id | no | Filters profiles by integral, unique identifier | - +-------+----------+------------------------------------------------------------------------------------------------+ - | name | no | Filters profiles by name | - +-------+----------+------------------------------------------------------------------------------------------------+ - | param | no | Used to filter profiles by the integral, unique identifier of a parameter associated with them | - +-------+----------+------------------------------------------------------------------------------------------------+ + +-------+----------+--------------------------------------------------------------------------------------------------------+ + | Name | Required | Description | + +=======+==========+========================================================================================================+ + | cdn | no | Used to filter :term:`Profiles` by the integral, unique identifier of the CDN to which they belong | + +-------+----------+--------------------------------------------------------------------------------------------------------+ + | id | no | Filters :term:`Profiles` by :ref:`profile-id` | + +-------+----------+--------------------------------------------------------------------------------------------------------+ + | name | no | Filters :term:`Profiles` by :ref:`profile-name` | + 
+-------+----------+--------------------------------------------------------------------------------------------------------+ + | param | no | Used to filter :term:`Profiles` by the :ref:`parameter-id` of a :term:`Parameter` associated with them | + +-------+----------+--------------------------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -52,14 +52,14 @@ Request Structure Response Structure ------------------ -:cdn: The integral, unique identifier of the CDN to which this profile belongs -:cdnName: The CDN name -:description: A description of the profile -:id: The integral, unique identifier of this profile -:lastUpdated: The date and time at which this profile was last updated -:name: The name of the profile -:routingDisabled: A boolean which, if ``true`` will disable Traffic Router's routing to servers using this profile -:type: The name of the 'type' of the profile +:cdn: The integral, unique identifier of the :ref:`profile-cdn` to which this :term:`Profile` belongs +:cdnName: The name of the :ref:`profile-cdn` to which this :term:`Profile` belongs +:description: The :term:`Profile`'s :ref:`profile-description` +:id: The :term:`Profile`'s :ref:`profile-id` +:lastUpdated: The date and time at which this :term:`Profile` was last updated, in an ISO-like format +:name: The :term:`Profile`'s :ref:`profile-name` +:routingDisabled: The :term:`Profile`'s :ref:`profile-routing-disabled` setting +:type: The :term:`Profile`'s :ref:`profile-type` .. code-block:: http :caption: Response Example @@ -91,7 +91,7 @@ Response Structure ``POST`` ======== -Creates a new profile. +Creates a new :term:`Profile`. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -99,11 +99,11 @@ Creates a new profile. 
Request Structure ----------------- -:name: Name of the new profile -:description: A description of the new profile -:cdn: The integral, unique identifier of the CDN to which the profile shall be assigned -:type: The type of the profile -:routingDisabled: A boolean which, if ``true``, will prevent the Traffic Router from directing traffic to any servers assigned this profile +:cdn: The integral, unique identifier of the :ref:`profile-cdn` to which this :term:`Profile` shall belong +:description: The :term:`Profile`'s :ref:`profile-description` +:name: The :term:`Profile`'s :ref:`profile-name` +:routingDisabled: The :term:`Profile`'s :ref:`profile-routing-disabled` setting +:type: The :term:`Profile`'s :ref:`profile-type` .. code-block:: http :caption: Request Example @@ -126,14 +126,14 @@ Request Structure Response Structure ------------------ -:cdn: The integral, unique identifier of the CDN to which this profile belongs -:cdnName: The CDN name -:description: A description of the profile -:id: The integral, unique identifier of this profile -:lastUpdated: The date and time at which this profile was last updated -:name: The name of the profile -:routingDisabled: A boolean which, if ``true`` will disable Traffic Router's routing to servers using this profile -:type: The name of the 'type' of the profile +:cdn: The integral, unique identifier of the :ref:`profile-cdn` to which this :term:`Profile` belongs +:cdnName: The name of the :ref:`profile-cdn` to which this :term:`Profile` belongs +:description: The :term:`Profile`'s :ref:`profile-description` +:id: The :term:`Profile`'s :ref:`profile-id` +:lastUpdated: The date and time at which this :term:`Profile` was last updated, in an ISO-like format +:name: The :term:`Profile`'s :ref:`profile-name` +:routingDisabled: The :term:`Profile`'s :ref:`profile-routing-disabled` setting +:type: The :term:`Profile`'s :ref:`profile-type` .. 
code-block:: http :caption: Response Example diff --git a/docs/source/api/profiles_id.rst b/docs/source/api/profiles_id.rst index 6cca196bfe..4a91223880 100644 --- a/docs/source/api/profiles_id.rst +++ b/docs/source/api/profiles_id.rst @@ -32,11 +32,11 @@ Request Structure ----------------- .. table:: Request Path Parameters - +-----------+----------------------------------------------------------------+ - | Parameter | Description | - +===========+================================================================+ - | id | The integral, unique identifier of the profile to be retrieved | - +-----------+----------------------------------------------------------------+ + +-----------+--------------------------------------------------------------+ + | Parameter | Description | + +===========+==============================================================+ + | id | The :ref:`profile-id` of the :term:`Profile` to be retrieved | + +-----------+--------------------------------------------------------------+ .. 
code-block:: http :caption: Request Example @@ -49,24 +49,24 @@ Request Structure Response Structure ------------------ -:cdn: The integral, unique identifier of the CDN to which this profile belongs -:cdnName: The CDN name -:description: A description of the profile -:id: The integral, unique identifier of this profile -:lastUpdated: The date and time at which this profile was last updated -:name: The name of the profile -:params: An array of parameters in use by this profile - - :configFile: The *base* filename to which this parameter belongs - :id: An integral, unique identifier for this parameter - :lastUpdated: The date and time at which this parameter was last modified in ISO format - :name: The parameter name - :profiles: An array of profile names that use this parameter - :secure: When ``true``, the parameter value is visible only to "admin"-role users - :value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing - -:routingDisabled: A boolean which, if ``true`` will disable Traffic Router's routing to servers using this profile -:type: The name of the 'type' of the profile +:cdn: The integral, unique identifier of the :ref:`profile-cdn` to which this :term:`Profile` belongs +:cdnName: The name of the :ref:`profile-cdn` to which this :term:`Profile` belongs +:description: The :term:`Profile`'s :ref:`profile-description` +:id: The :term:`Profile`'s :ref:`profile-id` +:lastUpdated: The date and time at which this :term:`Profile` was last updated, in an ISO-like format +:name: The :term:`Profile`'s :ref:`profile-name` +:params: An array of :term:`Parameters` in use by this :term:`Profile` + + :configFile: The :term:`Parameter`'s :ref:`parameter-config-file` + :id: The :term:`Parameter`'s :ref:`parameter-id` + :lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an 
ISO-like format + :name: :ref:`parameter-name` of the :term:`Parameter` + :profiles: An array of :term:`Profile` :ref:`Names <profile-name>` that use this :term:`Parameter` + :secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` + :value: The :term:`Parameter`'s :ref:`parameter-value` + +:routingDisabled: The :term:`Profile`'s :ref:`profile-routing-disabled` setting +:type: The :term:`Profile`'s :ref:`profile-type` .. code-block:: http :caption: Response Example @@ -119,7 +119,7 @@ Response Structure ``PUT`` ======= -Replaces the specified profile with the one in the response payload +Replaces the specified :term:`Profile` with the one in the request payload :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -129,21 +129,20 @@ Request Structure ----------------- .. table:: Request Path Parameters - +------+---------------------------------------------------------------+ - | Name | Description | - +======+===============================================================+ - | ID | The integral, unique identifier of the profile being modified | - +------+---------------------------------------------------------------+ + +------+-------------------------------------------------------------+ + | Name | Description | + +======+=============================================================+ + | ID | The :ref:`profile-id` of the :term:`Profile` being modified | + +------+-------------------------------------------------------------+ -:name: New of the name profile -:description: A new description of the new profile -:cdn: The integral, unique identifier of the CDN to which the profile shall be assigned -:type: The type of the profile +:cdn: The integral, unique identifier of the :ref:`profile-cdn` to which this :term:`Profile` will belong +:description: The :term:`Profile`'s new :ref:`profile-description` +:name: The :term:`Profile`'s new :ref:`profile-name` +:routingDisabled: The :term:`Profile`'s new 
:ref:`profile-routing-disabled` setting +:type: The :term:`Profile`'s new :ref:`profile-type` .. warning:: Changing this will likely break something, be **VERY** careful when modifying this value -:routingDisabled: A boolean which, if ``true``, will prevent the Traffic Router from directing traffic to any servers assigned this profile - .. code-block:: http :caption: Request Example @@ -165,14 +164,14 @@ Request Structure Response Structure ------------------ -:cdn: The integral, unique identifier of the CDN to which this profile belongs -:cdnName: The CDN name -:description: A description of the profile -:id: The integral, unique identifier of this profile -:lastUpdated: The date and time at which this profile was last updated -:name: The name of the profile -:routingDisabled: A boolean which, if ``true`` will disable Traffic Router's routing to servers using this profile -:type: The name of the 'type' of the profile +:cdn: The integral, unique identifier of the :ref:`profile-cdn` to which this :term:`Profile` belongs +:cdnName: The name of the :ref:`profile-cdn` to which this :term:`Profile` belongs +:description: The :term:`Profile`'s :ref:`profile-description` +:id: The :term:`Profile`'s :ref:`profile-id` +:lastUpdated: The date and time at which this :term:`Profile` was last updated, in an ISO-like format +:name: The :term:`Profile`'s :ref:`profile-name` +:routingDisabled: The :term:`Profile`'s :ref:`profile-routing-disabled` setting +:type: The :term:`Profile`'s :ref:`profile-type` .. code-block:: http :caption: Response Example @@ -209,7 +208,7 @@ Response Structure ``DELETE`` ========== -Allows user to delete a profile. +Allows a user to delete a :term:`Profile`. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -219,11 +218,11 @@ Request Structure ----------------- ..
table:: Request Path Parameters - +------+--------------------------------------------------------------+ - | Name | Description | - +======+==============================================================+ - | ID | The integral, unique identifier of the profile being deleted | - +------+--------------------------------------------------------------+ + +------+------------------------------------------------------------+ + | Name | Description | + +======+============================================================+ + | ID | The :ref:`profile-id` of the :term:`Profile` being deleted | + +------+------------------------------------------------------------+ .. code-block:: http :caption: Request Example diff --git a/docs/source/api/profiles_id_parameters.rst b/docs/source/api/profiles_id_parameters.rst index 033c783d39..e7cd0f1106 100644 --- a/docs/source/api/profiles_id_parameters.rst +++ b/docs/source/api/profiles_id_parameters.rst @@ -24,7 +24,7 @@ .. deprecated:: 1.1 Refer to the ``params`` key in the response of :ref:`to-api-profiles-id` instead -Retrieves all parameters assigned to the profile. +Retrieves all :term:`Parameters` assigned to the :term:`Profile`. :Auth. Required: Yes :Roles Required: None @@ -34,11 +34,11 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +------+------------------------------------------------------------------------------------+ - | Name | Description | - +======+====================================================================================+ - | ID | An integral, unique identifier for the profile for which parameters will be listed | - +------+------------------------------------------------------------------------------------+ + +------+------------------------------------------------------------------------------------------+ + | Name | Description | + +======+==========================================================================================+ + | ID | The :ref:`profile-id` of the :term:`Profile` for which :term:`Parameters` will be listed | + +------+------------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -51,13 +51,13 @@ Request Structure Response Structure ------------------ -:configFile: The *base* filename to which this parameter belongs -:id: An integral, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last modified in ISO format -:name: The parameter name -:profiles: An array of profile names that use this parameter -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:profiles: An array of :term:`Profile` :ref:`Names <profile-name>` that use this :term:`Parameter` +:secure: A boolean value that 
describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Response Example @@ -114,7 +114,7 @@ Response Structure .. deprecated:: 1.1 Use :ref:`to-api-profiles-name-name-parameters` instead -Associate parameters to a profile. If the parameter does not exist, create it and associate to the profile. If the parameter already exists, associate it to the profile. If the parameter is already associated with the profile, keep the association. +Associates :term:`Parameters` to a :term:`Profile`. If the :term:`Parameter` does not exist, creates it and associates it to the :term:`Profile`. If the :term:`Parameter` already exists, associates it to the :term:`Profile`. If the :term:`Parameter` is already associated with the :term:`Profile`, keeps the association. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -124,27 +124,27 @@ Request Structure ----------------- .. table:: Request Path Parameters - +------+-------------------------------------------------------------------------------------+ - | Name | Description | - +======+=====================================================================================+ - | ID | An integral, unique identifier for the profile to which parameters will be assigned | - +------+-------------------------------------------------------------------------------------+ + +------+-------------------------------------------------------------------------------------------+ + | Name | Description | + +======+===========================================================================================+ + | ID | The :ref:`profile-id` of the :term:`Profile` to which :term:`Parameters` will be assigned | + +------+-------------------------------------------------------------------------------------------+ This endpoint accepts two formats for the request payload: Single Object Format - For assigning a single parameter to a single profile 
+ For assigning a single :term:`Parameter` to a single :term:`Profile` Parameter Array Format - For making multiple assignments of parameters to profiles simultaneously + For making multiple assignments of :term:`Parameters` to :term:`Profiles` simultaneously -.. warning:: Most API endpoints dealing with parameters treat ``secure`` as a boolean value, whereas this endpoint takes the legacy approach of treating it as an integer. Be careful when passing data back and forth, as boolean values will **not** be accepted by this endpoint! +.. warning:: Most API endpoints dealing with :term:`Parameters` treat :ref:`parameter-secure` as a boolean value, whereas this endpoint takes the legacy approach of treating it as an integer. Be careful when passing data back and forth, as boolean values will **not** be accepted by this endpoint! Single Parameter Format """"""""""""""""""""""" -:configFile: The *base* filename of the configuration file to which this parameter shall belong e.g. "foo" not "/path/to/foo" -:name: Parameter name -:secure: An integer which, when any number other than ``0``, will prohibit users who do not have the "admin" role from viewing the parameter's ``value`` (at the time of this writing the obfuscation value is defined to be ``"********"``) -:value: Parameter value +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Response Example - Single Parameter Format @@ -167,10 +167,10 @@ Single Parameter Format Parameter Array Format """""""""""""""""""""" -:configFile: The *base* filename of the configuration file to which this parameter shall belong e.g. 
"foo" not "/path/to/foo" -:name: Parameter name -:secure: An integer which, when any number other than ``0``, will prohibit users who do not have the "admin" role from viewing the parameter's ``value`` (at the time of this writing the obfuscation value is defined to be ``"********"``) -:value: Parameter value +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Request Example - Parameter Array Format @@ -198,15 +198,15 @@ Parameter Array Format Response Structure ------------------ -:parameters: An array of objects representing the parameters which have been assigned +:parameters: An array of objects representing the :term:`Parameters` which have been assigned - :configFile: The *base* filename of the configuration file to which this parameter shall belong e.g. 
"foo" not "/path/to/foo" - :name: Parameter name - :secure: An integer which, when any number other than ``0``, will prohibit users who do not have the "admin" role from viewing the parameter's ``value`` (at the time of this writing the obfuscation value is defined to be ``"********"``) - :value: Parameter value + :configFile: The :term:`Parameter`'s :ref:`parameter-config-file` + :name: :ref:`parameter-name` of the :term:`Parameter` + :secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` + :value: The :term:`Parameter`'s :ref:`parameter-value` -:profileId: The integral, unique identifier for the profile to which the parameter(s) have been assigned -:profileName: Name of the profile to which the parameter(s) have been assigned +:profileId: The :ref:`profile-id` of the :term:`Profile` to which the :term:`Parameter`\ (s) have been assigned +:profileName: :ref:`profile-name` of the :term:`Profile` to which the :term:`Parameter`\ (s) have been assigned .. code-block:: http :caption: Response Example - Single Parameter Format diff --git a/docs/source/api/profiles_id_unassigned_parameters.rst b/docs/source/api/profiles_id_unassigned_parameters.rst index b992b380a6..8266d9ce43 100644 --- a/docs/source/api/profiles_id_unassigned_parameters.rst +++ b/docs/source/api/profiles_id_unassigned_parameters.rst @@ -22,7 +22,7 @@ ``GET`` ======= -Retrieves all parameters NOT assigned to the specified profile. +Retrieves all :term:`Parameters` *not* assigned to the specified :term:`Profile`. :Auth. Required: Yes :Roles Required: None @@ -32,11 +32,11 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +------+-----------------------------------------------------------------------------------------------+ - | Name | Description | - +======+===============================================================================================+ - | ID | The integral, unique identifier of the profile for which unassigned parameters will be listed | - +------+-----------------------------------------------------------------------------------------------+ + +------+-----------------------------------------------------------------------------------------------------+ + | Name | Description | + +======+=====================================================================================================+ + | ID | The :ref:`profile-id` of the :term:`Profile` for which unassigned :term:`Parameters` will be listed | + +------+-----------------------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -49,13 +49,13 @@ Request Structure Response Structure ------------------ -:configFile: The *base* filename to which this parameter belongs -:id: An integral, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last modified in ISO format -:name: The parameter name -:profiles: An array of profile names that use this parameter -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:profiles: An array of 
:term:`Profile` :ref:`Names <profile-name>` that use this :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Response Example diff --git a/docs/source/api/profiles_name_name_copy_copy.rst b/docs/source/api/profiles_name_name_copy_copy.rst index 7431d2b1c0..8a3d3633b4 100644 --- a/docs/source/api/profiles_name_name_copy_copy.rst +++ b/docs/source/api/profiles_name_name_copy_copy.rst @@ -21,7 +21,7 @@ ``POST`` ======== -Copy profile to a new profile. The new profile name must not exist. +Copies a :term:`Profile` to a new :term:`Profile`. The new :term:`Profile`'s :ref:`profile-name` must not already exist. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -31,13 +31,13 @@ Request Structure ----------------- .. table:: Request Path Parameters - +------+------------------------------------------------------+ - | Name | Description | - +======+======================================================+ - | name | The name of the new profile | - +------+------------------------------------------------------+ - | copy | The name of profile from which the copy will be made | - +------+------------------------------------------------------+ + +------+-----------------------------------------------------------------------------+ + | Name | Description | + +======+=============================================================================+ + | name | The :ref:`profile-name` of the new :term:`Profile` | + +------+-----------------------------------------------------------------------------+ + | copy | The :ref:`profile-name` of :term:`Profile` from which the copy will be made | + +------+-----------------------------------------------------------------------------+ ..
code-block:: http :caption: Request Example @@ -50,11 +50,11 @@ Request Structure Response Structure ------------------ -:description: The description of the new profile -:id: An integral, unique identifier for the new profile -:idCopyFrom: The integral, unique identifier for the profile from which the copy was made -:name: The name of the new profile -:profileCopyFrom: The name of the profile from which the copy was made +:description: The new :term:`Profile`'s :ref:`profile-description` +:id: The :ref:`profile-id` of the new :term:`Profile` +:idCopyFrom: The :ref:`profile-id` of the :term:`Profile` from which the copy was made +:name: The :ref:`profile-name` of the new :term:`Profile` +:profileCopyFrom: The :ref:`profile-name` of the :term:`Profile` from which the copy was made .. code-block:: http :caption: Response Example diff --git a/docs/source/api/profiles_name_name_parameters.rst b/docs/source/api/profiles_name_name_parameters.rst index a1072a4983..6000269c27 100644 --- a/docs/source/api/profiles_name_name_parameters.rst +++ b/docs/source/api/profiles_name_name_parameters.rst @@ -21,7 +21,7 @@ ``GET`` ======= -Retrieves all parameters associated with a given profile +Retrieves all :term:`Parameters` associated with a given :term:`Profile` :Auth. Required: Yes :Roles Required: None @@ -31,11 +31,11 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +------+-------------------------------------------------------------+ - | Name | Description | - +======+=============================================================+ - | name | The name of the profile for which parameters will be listed | - +------+-------------------------------------------------------------+ + +------+--------------------------------------------------------------------------------------------+ + | Name | Description | + +======+============================================================================================+ + | name | The :ref:`profile-name` of the :term:`Profile` for which :term:`Parameters` will be listed | + +------+--------------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -48,15 +48,16 @@ Request Structure Response Structure ------------------ -:configFile: The *base* filename to which this parameter belongs -:id: An integral, unique identifier for this parameter -:lastUpdated: The date and time at which this parameter was last modified in ISO format -:name: The parameter name -:profiles: An array of profile names that use this parameter -:secure: When ``true``, the parameter value is visible only to "admin"-role users -:value: The parameter value - if ``secure`` is true and the user does not have the "admin" role this will be obfuscated (at the time of this writing the obfuscation value is defined to be ``"********"``) but **not** missing +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:id: The :term:`Parameter`'s :ref:`parameter-id` +:lastUpdated: The date and time at which this :term:`Parameter` was last updated, in an ISO-like format +:name: :ref:`parameter-name` of the :term:`Parameter` +:profiles: An array of :term:`Profile` :ref:`Names <profile-name>` that use this :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: 
The :term:`Parameter`'s :ref:`parameter-value` -**Response Example** :: +.. code-block:: http + :caption: Response Example HTTP/1.1 200 OK Access-Control-Allow-Credentials: true @@ -107,7 +108,7 @@ Response Structure ``POST`` ======== -Associate parameters to a profile. If the parameter does not exist, create it and associate to the profile. If the parameter already exists, associate it to the profile. If the parameter is already associated with the profile, keep the association. +Associates :term:`Parameters` to a :term:`Profile`. If the :term:`Parameter` does not exist, creates it and associates it to the :term:`Profile`. If the :term:`Parameter` already exists, associates it to the :term:`Profile`. If the :term:`Parameter` is already associated with the :term:`Profile`, keeps the association. :Auth. Required: Yes :Roles Required: "admin" or "operations" @@ -117,11 +118,11 @@ Request Structure ----------------- .. table:: Request Path Parameters - +------+--------------------------------------------------------------+ - | Name | Description | - +======+==============================================================+ - | name | The name of the profile to which parameters will be assigned | - +------+--------------------------------------------------------------+ + +------+---------------------------------------------------------------------------------------------+ + | Name | Description | + +======+=============================================================================================+ + | name | The :ref:`profile-name` of the :term:`Profile` to which :term:`Parameters` will be assigned | + +------+---------------------------------------------------------------------------------------------+ This endpoint accepts two formats for the request payload: @@ -134,10 +135,10 @@ Parameter Array Format Single Parameter Format """"""""""""""""""""""" -:configFile: The *base* filename of the configuration file to which this parameter shall belong e.g. 
"foo" not "/path/to/foo" -:name: Parameter name -:secure: An integer which, when any number other than ``0``, will prohibit users who do not have the "admin" role from viewing the parameter's ``value`` (at the time of this writing the obfuscation value is defined to be ``"********"``) -:value: Parameter value +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Request Example - Single Parameter Format @@ -159,10 +160,10 @@ Single Parameter Format Parameter Array Format """""""""""""""""""""" -:configFile: The *base* filename of the configuration file to which this parameter shall belong e.g. "foo" not "/path/to/foo" -:name: Parameter name -:secure: An integer which, when any number other than ``0``, will prohibit users who do not have the "admin" role from viewing the parameter's ``value`` (at the time of this writing the obfuscation value is defined to be ``"********"``) -:value: Parameter value +:configFile: The :term:`Parameter`'s :ref:`parameter-config-file` +:name: :ref:`parameter-name` of the :term:`Parameter` +:secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` +:value: The :term:`Parameter`'s :ref:`parameter-value` .. code-block:: http :caption: Request Example - Parameter Array Format @@ -190,15 +191,15 @@ Parameter Array Format Response Structure ------------------ -:parameters: An array of objects representing the parameters which have been assigned +:parameters: An array of objects representing the :term:`Parameters` which have been assigned - :configFile: The *base* filename of the configuration file to which this parameter shall belong e.g. 
"foo" not "/path/to/foo" - :name: Parameter name - :secure: An integer which, when any number other than ``0``, will prohibit users who do not have the "admin" role from viewing the parameter's ``value`` (at the time of this writing the obfuscation value is defined to be ``"********"``) - :value: Parameter value + :configFile: The :term:`Parameter`'s :ref:`parameter-config-file` + :name: :ref:`parameter-name` of the :term:`Parameter` + :secure: A boolean value that describes whether or not the :term:`Parameter` is :ref:`parameter-secure` + :value: The :term:`Parameter`'s :ref:`parameter-value` -:profileId: The integral, unique identifier for the profile to which the parameter(s) have been assigned -:profileName: Name of the profile to which the parameter(s) have been assigned +:profileId: The :ref:`profile-id` of the :term:`Profile` to which the :term:`Parameter`\ (s) have been assigned +:profileName: :ref:`profile-name` of the :term:`Profile` to which the :term:`Parameter`\ (s) have been assigned .. code-block:: http :caption: Response Example diff --git a/docs/source/api/profiles_profile_configfiles_ats_filename.rst b/docs/source/api/profiles_profile_configfiles_ats_filename.rst index 69548fbf71..b5ef81099e 100644 --- a/docs/source/api/profiles_profile_configfiles_ats_filename.rst +++ b/docs/source/api/profiles_profile_configfiles_ats_filename.rst @@ -33,13 +33,13 @@ Request Structure ----------------- .. 
table:: Request Path Parameters - +-----------+-------------------+--------------------------------------------------------------+ - | Parameter | Type | Description | - +===========+===================+==============================================================+ - | profile | string or integer | Either the name or integral, unique, identifier of a profile | - +-----------+-------------------+--------------------------------------------------------------+ - | filename | string | The name of a configuration file used by ``profile`` | - +-----------+-------------------+--------------------------------------------------------------+ + +-----------+-------------------+--------------------------------------------------------------------------+ + | Parameter | Type | Description | + +===========+===================+==========================================================================+ + | profile | string or integer | Either the :ref:`profile-name` or :ref:`profile-id` of a :term:`Profile` | + +-----------+-------------------+--------------------------------------------------------------------------+ + | filename | string | The name of a configuration file used by ``profile`` | + +-----------+-------------------+--------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -52,7 +52,7 @@ Request Structure Response Structure ------------------ -.. note:: If the file identified by ``filename`` doesn't exist at the profile, a JSON response will be returned and the ``alerts`` array will contain a ``"level": "error"`` node which suggests other scopes to check for the configuration file. +.. note:: If the file identified by ``filename`` doesn't exist at the :term:`Profile`, a JSON response will be returned and the ``alerts`` array will contain a ``"level": "error"`` node which suggests other scopes to check for the configuration file. .. 
code-block:: http :caption: Response Example diff --git a/docs/source/api/profiles_trimmed.rst b/docs/source/api/profiles_trimmed.rst index 92ef4d076b..4dcc96f268 100644 --- a/docs/source/api/profiles_trimmed.rst +++ b/docs/source/api/profiles_trimmed.rst @@ -31,7 +31,7 @@ No parameters available Response Structure ------------------ -:name: The name of the profile +:name: The :ref:`profile-name` of the :term:`Profile` .. code-block:: http :caption: Response Example diff --git a/docs/source/api/servers.rst b/docs/source/api/servers.rst index 614c85f107..cae3c90e1b 100644 --- a/docs/source/api/servers.rst +++ b/docs/source/api/servers.rst @@ -42,7 +42,7 @@ Request Structure +------------+----------+-------------------------------------------------------------------------------------------------------------------+ | id | no | Return only the server with this integral, unique identifier | +------------+----------+-------------------------------------------------------------------------------------------------------------------+ - | profileId | no | Return only those servers that are using the profile identified by this integral, unique identifier | + | profileId | no | Return only those servers that are using the :term:`Profile` that has this :ref:`profile-id` | +------------+----------+-------------------------------------------------------------------------------------------------------------------+ | status | no | Return only those servers with this status - see :ref:`health-proto` | +------------+----------+-------------------------------------------------------------------------------------------------------------------+ @@ -92,9 +92,9 @@ Response Structure :offlineReason: A user-entered reason why the server is in ADMIN_DOWN or OFFLINE status :physLocation: The name of the physical location where the server resides :physLocationId: An integral, unique identifier for the physical location where the server resides -:profile: The name of the profile this server 
uses -:profileDesc: A description of the profile this server uses -:profileId: An integral, unique identifier for the profile used by this server +:profile: The :ref:`profile-name` of the :term:`Profile` used by this server +:profileDesc: A :ref:`profile-description` of the :term:`Profile` used by this server +:profileId: The :ref:`profile-id` of the :term:`Profile` used by this server :revalPending: A boolean value which, if ``true`` indicates that this server has pending content invalidation/revalidation :rack: A string indicating "server rack" location :routerHostName: The human-readable name of the router responsible for reaching this server @@ -216,7 +216,7 @@ Request Structure :mgmtIpGateway: An optional IPv4 address of a gateway used by some network interface on the server used for 'management' :mgmtIpNetmask: An optional IPv4 subnet mask used by some network interface on the server used for 'management' :physLocationId: An integral, unique identifier for the physical location where the server resides -:profileId: An integral, unique identifier for the profile used by this server +:profileId: The :ref:`profile-id` of the :term:`Profile` that shall be used by this server :revalPending: A boolean value which, if ``true`` indicates that this server has pending content invalidation/revalidation :rack: An optional string indicating "server rack" location :routerHostName: An optional string containing the human-readable name of the router responsible for reaching this server @@ -311,9 +311,9 @@ Response Structure :offlineReason: A user-entered reason why the server is in ADMIN_DOWN or OFFLINE status :physLocation: The name of the physical location where the server resides :physLocationId: An integral, unique identifier for the physical location where the server resides -:profile: The name of the profile this server uses -:profileDesc: A description of the profile this server uses -:profileId: An integral, unique identifier for the profile used by this server +:profile: 
The :ref:`profile-name` of the :term:`Profile` used by this server +:profileDesc: A :ref:`profile-description` of the :term:`Profile` used by this server +:profileId: The :ref:`profile-id` of the :term:`Profile` used by this server :revalPending: A boolean value which, if ``true`` indicates that this server has pending content invalidation/revalidation :rack: A string indicating "server rack" location :routerHostName: The human-readable name of the router responsible for reaching this server diff --git a/docs/source/api/servers_hostname_name_details.rst b/docs/source/api/servers_hostname_name_details.rst index ee8e2aa24c..9578818e83 100644 --- a/docs/source/api/servers_hostname_name_details.rst +++ b/docs/source/api/servers_hostname_name_details.rst @@ -67,8 +67,8 @@ Response Structure :ipNetmask: The IPv4 subnet mask used by ``interfaceName`` :offlineReason: A user-entered reason why the server is in ADMIN_DOWN or OFFLINE status :physLocation: The name of the physical location where the server resides -:profile: The name of the profile this server uses -:profileDesc: A description of the profile this server uses +:profile: The :ref:`profile-name` of the :term:`Profile` used by this server +:profileDesc: A :ref:`profile-description` of the :term:`Profile` used by this server :rack: A string indicating "server rack" location :routerHostName: The human-readable name of the router responsible for reaching this server :routerPortName: The human-readable name of the port used by the router responsible for reaching this server diff --git a/docs/source/api/servers_id.rst b/docs/source/api/servers_id.rst index e298ad5717..ad98e5699a 100644 --- a/docs/source/api/servers_id.rst +++ b/docs/source/api/servers_id.rst @@ -83,9 +83,9 @@ Response Structure :offlineReason: A user-entered reason why the server is in ADMIN_DOWN or OFFLINE status :physLocation: The name of the physical location where the server resides :physLocationId: An integral, unique identifier for the physical
location where the server resides -:profile: The name of the profile this server uses -:profileDesc: A description of the profile this server uses -:profileId: An integral, unique identifier for the profile used by this server +:profile: The :ref:`profile-name` of the :term:`Profile` used by this server +:profileDesc: A :ref:`profile-description` of the :term:`Profile` used by this server +:profileId: The :ref:`profile-id` of the :term:`Profile` used by this server :revalPending: A boolean value which, if ``true`` indicates that this server has pending content invalidation/revalidation :rack: A string indicating "server rack" location :routerHostName: The human-readable name of the router responsible for reaching this server @@ -215,7 +215,7 @@ Request Structure :mgmtIpGateway: An optional IPv4 address of a gateway used by some network interface on the server used for 'management' :mgmtIpNetmask: An optional IPv4 subnet mask used by some network interface on the server used for 'management' :physLocationId: An integral, unique identifier for the physical location where the server resides -:profileId: An integral, unique identifier for the profile used by this server +:profileId: The :ref:`profile-id` of the :term:`Profile` that shall be used by this server :revalPending: A boolean value which, if ``true`` indicates that this server has pending content invalidation/revalidation :rack: An optional string indicating "server rack" location :routerHostName: An optional string containing the human-readable name of the router responsible for reaching this server @@ -310,9 +310,9 @@ Response Structure :offlineReason: A user-entered reason why the server is in ADMIN_DOWN or OFFLINE status :physLocation: The name of the physical location where the server resides :physLocationId: An integral, unique identifier for the physical location where the server resides -:profile: The name of the profile this server uses -:profileDesc: A description of the profile this server uses
-:profileId: An integral, unique identifier for the profile used by this server +:profile: The :ref:`profile-name` of the :term:`Profile` used by this server +:profileDesc: A :ref:`profile-description` of the :term:`Profile` used by this server +:profileId: The :ref:`profile-id` of the :term:`Profile` used by this server :revalPending: A boolean value which, if ``true`` indicates that this server has pending content invalidation/revalidation :rack: A string indicating "server rack" location :routerHostName: The human-readable name of the router responsible for reaching this server diff --git a/docs/source/api/servers_id_deliveryservices.rst b/docs/source/api/servers_id_deliveryservices.rst index 4157cf8745..fa4d1b341c 100644 --- a/docs/source/api/servers_id_deliveryservices.rst +++ b/docs/source/api/servers_id_deliveryservices.rst @@ -21,21 +21,21 @@ ``GET`` ======= -Retrieves all :term:`Delivery Service`\ s assigned to a specific server. +Retrieves all :term:`Delivery Services` assigned to a specific server. :Auth. Required: Yes -:Roles Required: None +:Roles Required: None\ [#tenancy]_ :Response Type: Array Request Structure ----------------- ..
table:: Request Path Parameters - +------+--------------------------------------------------------------------------------------------------------------+ - | Name | Description | - +======+==============================================================================================================+ - | ID | The integral, unique identifier of the server for which assigned :term:`Delivery Service`\ s shall be listed | - +------+--------------------------------------------------------------------------------------------------------------+ + +------+------------------------------------------------------------------------------------------------------------+ + | Name | Description | + +======+============================================================================================================+ + | ID | The integral, unique identifier of the server for which assigned :term:`Delivery Services` shall be listed | + +------+------------------------------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -48,152 +48,104 @@ Request Structure Response Structure ------------------ -:active: ``true`` if the :term:`Delivery Service` is active, ``false`` otherwise -:anonymousBlockingEnabled: ``true`` if :ref:`Anonymous Blocking ` has been configured for the :term:`Delivery Service`, ``false`` otherwise -:cacheurl: A setting for a deprecated feature of now-unsupported :abbr:`ATS (Apache Traffic Server)` versions +:active: A boolean that defines :ref:`ds-active`. +:anonymousBlockingEnabled: A boolean that defines :ref:`ds-anonymous-blocking` +:cacheurl: A :ref:`ds-cacheurl` .. 
deprecated:: ATCv3.0 This field has been deprecated in Traffic Control 3.x and is subject to removal in Traffic Control 4.x or later -:ccrDnsTtl: The :abbr:`TTL (Time To Live)` of the DNS response for A or AAAA record queries requesting the IP address of the Traffic Router - named "ccrDnsTtl" for legacy reasons -:cdnId: The integral, unique identifier of the CDN to which the :term:`Delivery Service` belongs -:cdnName: Name of the CDN to which the :term:`Delivery Service` belongs -:checkPath: The path portion of the URL to check connections to this :term:`Delivery Service`'s origin server -:consistentHashRegex: If defined, this is a regex used for the Pattern-Based Consistent Hashing feature. It is only applicable for HTTP and Steering Delivery Services - - .. versionadded:: 1.5 - -:displayName: The display name of the :term:`Delivery Service` -:dnsBypassCname: Domain name to overflow requests for HTTP :term:`Delivery Service`\ s - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [3]_ -:dnsBypassIp: The IPv4 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [3]_ -:dnsBypassIp6: The IPv6 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [3]_ -:dnsBypassTtl: The time for which a DNS bypass of this :term:`Delivery Service`\ shall remain active\ [3]_ -:dscp: The :abbr:`DSCP (Differentiated Services Code Point)` with which to mark traffic as it leaves the CDN and reaches clients -:edgeHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache 
level - used by the Header Rewrite :abbr:`ATS (Apache Traffic Server)` plugin -:fqPacingRate: The Fair-Queuing Pacing Rate in Bytes per second set on the all TCP connection sockets in the :term:`Delivery Service` (see :manpage:`tc-fq_codel(8)` for more information) - Linux only -:geoLimit: The setting that determines how content is geographically limited - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - None - no limitations - 1 - Only route when the client's IP is found in the :term:`Coverage Zone File` - 2 - Only route when the client's IP is found in the :term:`Coverage Zone File`, or when the client can be determined to be from the United States of America - - .. warning:: This does not prevent access to content or make content secure; it merely prevents routing to the content through Traffic Router - -:geoLimitCountries: A string containing a comma-separated list of country codes (e.g. "US,AU") which are allowed to request content through this :term:`Delivery Service` -:geoLimitRedirectUrl: A URL to which clients blocked by :ref:`Regional Geographic Blocking ` or the ``geoLimit`` settings will be re-directed -:geoProvider: An integer that represents the provider of a database for mapping IPs to geographic locations; currently only the following values are supported: - - 0 - `The "Maxmind" GeoIP2 database (default) `_ - 1 - `Neustar GeoPoint IP address database `_ - - .. warning:: It's not clear whether Neustar databases are actually supported; this is an old option and compatibility may have been broken over time. - -:globalMaxMbps: The maximum global bandwidth allowed on this :term:`Delivery Service`. If exceeded, traffic will be routed to ``dnsBypassIp`` (or ``dnsBypassIp6`` for IPv6 traffic) for DNS :term:`Delivery Service`\ s and to ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:globalMaxTps: The maximum global transactions per second allowed on this :term:`Delivery Service`. 
When this is exceeded traffic will be sent to the ``dnsBypassIp`` (and/or ``dnsBypassIp6``) for DNS :term:`Delivery Service`\ s and to the httpBypassFqdn for HTTP :term:`Delivery Service`\ s -:httpBypassFqdn: The HTTP destination to use for bypass on an HTTP :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` -:id: An integral, unique identifier for this :term:`Delivery Service` -:infoUrl: This is a string which is expected to contain at least one URL pointing to more information about the :term:`Delivery Service`. Historically, this has been used to link relevant JIRA tickets -:initialDispersion: The number of :term:`cache server`\ s between which traffic requesting the same object will be randomly split - meaning that if 4 clients all request the same object (one after another), then if this is above 4 there is a possibility that all 4 are cache misses. 
For most use-cases, this should be 1\ [#httpOnly]_ -:ipv6RoutingEnabled: If ``true``, clients that connect to Traffic Router using IPv6 will be given the IPv6 address of a suitable :term:`Edge-tier cache server`; if ``false`` all addresses will be IPv4, regardless of the client connection -:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in a :manpage:`ctime(3)`-like format -:logsEnabled: If ``true``, logging is enabled for this :term:`Delivery Service`, otherwise it is disabled -:longDesc: A description of the :term:`Delivery Service` -:longDesc1: A field used when more detailed information that that provided by ``longDesc`` is desired -:longDesc2: A field used when even more detailed information that that provided by either ``longDesc`` or ``longDesc1`` is desired -:matchList: An array of methods used by Traffic Router to determine whether or not a request can be serviced by this :term:`Delivery Service` - - :pattern: A regular expression - the use of this pattern is dependent on the ``type`` field (backslashes are escaped) - :setNumber: An integral, unique identifier for the set of types to which the ``type`` field belongs - :type: The :term:`Type` of match performed using ``pattern`` to determine whether or not to use this :term:`Delivery Service` +:ccrDnsTtl: The :ref:`ds-dns-ttl` - named "ccrDnsTtl" for legacy reasons +:cdnId: The integral, unique identifier of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:cdnName: Name of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:checkPath: A :ref:`ds-check-path` +:consistentHashRegex: A :ref:`ds-consistent-hashing-regex` - HOST_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``Host:`` HTTP header of an HTTP request, or the name requested for resolution in a DNS request - HEADER_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches an HTTP header (both the name and value) in an HTTP request\ [#httpOnly]_ - 
PATH_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the request path of this :term:`Delivery Service`'s URL\ [#httpOnly]_ - STEERING_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``xml_id`` of one of this :term:`Delivery Service`'s "Steering" target :term:`Delivery Services` + .. versionadded:: 1.4 -:maxDnsAnswers: The maximum number of IPs to put in responses to A/AAAA DNS record requests (0 means all available)\ [3]_ -:midHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite :abbr:`ATS (Apache Traffic Server)` plugin -:missLat: The latitude to use when the client cannot be found in the :term:`Coverage Zone File` or a geographic IP lookup -:missLong: The longitude to use when the client cannot be found in the :term:`Coverage Zone File` or a geographic IP lookup -:multiSiteOrigin: ``true`` if the Multi Site Origin feature is enabled for this :term:`Delivery Service`, ``false`` otherwise\ [2]_ -:orgServerFqdn: The URL of the :term:`Delivery Service`'s origin server for use in retrieving content from the :term:`origin server` +:consistentHashQueryParams: An array of :ref:`ds-consistent-hashing-qparams` - .. note:: Despite the field name, this must truly be a full URL - including the protocol (e.g. ``http://`` or ``https://``) - **NOT** merely the server's :abbr:`FQDN (Fully Qualified Domain Name)` + .. versionadded:: 1.4 -:originShield: An "origin shield" is a forward proxy that sits between Mid-tier :term:`cache server`\ s and the :term:`origin` and performs further caching beyond what's offered by a standard CDN. 
This field is a string of :abbr:`FQDN (Fully Qualified Domain Name)`\ s to use as origin shields, delimited by ``|`` -:profileDescription: The description of the Traffic Router :term:`Profile` with which this :term:`Delivery Service` is associated -:profileId: The integral, unique identifier for the Traffic Router :term:`Profile` with which this :term:`Delivery Service` is associated -:profileName: The name of the Traffic Router :term:`Profile` with which this :term:`Delivery Service` is associated -:protocol: The protocol which clients will use to communicate with Edge-tier :term:`cache server`\ s\ [#httpOnly]_ - this is an integer on the interval [0-2] where the values have these meanings: +:deepCachingType: The :ref:`ds-deep-caching` setting for this :term:`Delivery Service` - 0 - HTTP - 1 - HTTPS - 2 - Both HTTP and HTTPS + .. versionadded:: 1.3 -:qstringIgnore: Tells :term:`cache server`\ s whether or not to consider URLs with different query parameter strings to be distinct - this is an integer on the interval [0-2] where the values have these meanings: +:displayName: The :ref:`ds-display-name` +:dnsBypassCname: A :ref:`ds-dns-bypass-cname` +:dnsBypassIp: A :ref:`ds-dns-bypass-ip` +:dnsBypassIp6: A :ref:`ds-dns-bypass-ipv6` +:dnsBypassTtl: The :ref:`ds-dns-bypass-ttl` +:dscp: A :ref:`ds-dscp` to be used within the :term:`Delivery Service` +:edgeHeaderRewrite: A set of :ref:`ds-edge-header-rw-rules` +:exampleURLs: An array of :ref:`ds-example-urls` +:fqPacingRate: The :ref:`ds-fqpr` - 0 - URLs with different query parameter strings will be considered distinct for caching purposes, and query strings will be passed upstream to the :term:`origin` - 1 - URLs with different query parameter strings will be considered identical for caching purposes, and query strings will be passed upstream to the :term:`origin` - 2 - Query strings are stripped out by Edge-tier :term:`cache server`\ s, and thus are neither taken into consideration for caching purposes, nor passed 
upstream in requests to the :term:`origin` + .. versionadded:: 1.3 -:rangeRequestHandling: Tells caches how to handle range requests\ [4]_ - this is an integer on the interval [0,2] where the values have these meanings: - - 0 - Range requests will not be cached, but range requests that request ranges of content already cached will be served from the :term:`cache server` - 1 - Use the `background_fetch plugin `_ to service the range request while caching the whole object - 2 - Use the `experimental cache_range_requests plugin `_ to treat unique ranges as unique objects +:geoLimit: An integer that defines the :ref:`ds-geo-limit` +:geoLimitCountries: A string containing a comma-separated list defining the :ref:`ds-geo-limit-countries` +:geoLimitRedirectUrl: A :ref:`ds-geo-limit-redirect-url` +:geoProvider: The :ref:`ds-geo-provider` +:globalMaxMbps: The :ref:`ds-global-max-mbps` +:globalMaxTps: The :ref:`ds-global-max-tps` +:httpBypassFqdn: A :ref:`ds-http-bypass-fqdn` +:id: An integral, unique identifier for this :term:`Delivery Service` +:infoUrl: An :ref:`ds-info-url` +:initialDispersion: The :ref:`ds-initial-dispersion` +:ipv6RoutingEnabled: A boolean that defines the :ref:`ds-ipv6-routing` setting on this :term:`Delivery Service` +:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in :rfc:`3339` format +:logsEnabled: A boolean that defines the :ref:`ds-logs-enabled` setting on this :term:`Delivery Service` +:longDesc: The :ref:`ds-longdesc` of this :term:`Delivery Service` +:longDesc1: The :ref:`ds-longdesc2` of this :term:`Delivery Service` +:longDesc2: The :ref:`ds-longdesc3` of this :term:`Delivery Service` +:matchList: The :term:`Delivery Service`'s :ref:`ds-matchlist` -:regexRemap: A regular expression "remap rule" to apply to this :term:`Delivery Service` at the Edge tier + :pattern: A regular expression - the use of this pattern is dependent on the ``type`` field (backslashes are escaped) + :setNumber: An integer that 
provides explicit ordering of :ref:`ds-matchlist` items - this is used as a priority ranking by Traffic Router, and is not guaranteed to correspond to the ordering of items in the array. + :type: The type of match performed using ``pattern``. - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ +:maxDnsAnswers: The :ref:`ds-max-dns-answers` allowed for this :term:`Delivery Service` +:maxOriginConnections: The :ref:`ds-max-origin-connections` -:regionalGeoBlocking: ``true`` if Regional Geo Blocking is in use within this :term:`Delivery Service`, ``false`` otherwise + .. versionadded:: 1.4 - .. seealso:: See :ref:`regionalgeo-qht` for more information +:midHeaderRewrite: A set of :ref:`ds-mid-header-rw-rules` +:missLat: The :ref:`ds-geo-miss-default-latitude` used by this :term:`Delivery Service` +:missLong: The :ref:`ds-geo-miss-default-longitude` used by this :term:`Delivery Service` +:multiSiteOrigin: A boolean that defines the use of :ref:`ds-multi-site-origin` by this :term:`Delivery Service` +:orgServerFqdn: The :ref:`ds-origin-url` +:originShield: A :ref:`ds-origin-shield` string +:profileDescription: The :ref:`profile-description` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileId: The :ref:`profile-id` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileName: The :ref:`profile-name` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:protocol: An integral, unique identifier that corresponds to the :ref:`ds-protocol` used by this :term:`Delivery Service` +:qstringIgnore: An integral, unique identifier that corresponds to the :ref:`ds-qstring-handling` setting on this :term:`Delivery Service` +:rangeRequestHandling: An integral, unique identifier that corresponds to the :ref:`ds-range-request-handling` setting on this :term:`Delivery Service` +:regexRemap: A :ref:`ds-regex-remap` +:regionalGeoBlocking: A boolean defining 
the :ref:`ds-regionalgeo` setting on this :term:`Delivery Service` +:remapText: :ref:`ds-raw-remap` +:signed: ``true`` if and only if ``signingAlgorithm`` is not ``null``, ``false`` otherwise +:signingAlgorithm: Either a :ref:`ds-signing-algorithm` or ``null`` to indicate URL/URI signing is not implemented on this :term:`Delivery Service` -:remapText: Additional, raw text to add to the line for this :term:`Delivery Service` for :term:`cache server`\ s + .. versionadded:: 1.3 - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ +:sslKeyVersion: This integer indicates the :ref:`ds-ssl-key-version` +:tenantId: The integral, unique identifier of the :ref:`ds-tenant` who owns this :term:`Delivery Service` -:signed: ``true`` if token-based authentication is enabled for this :term:`Delivery Service`, ``false`` otherwise -:signingAlgorithm: Type of URL signing method to sign the URLs, basically comes down to one of two plugins or ``null``: + .. versionadded:: 1.3 - ``null`` - Token-based authentication is not enabled for this :term:`Delivery Service` - url_sig: - URL Signing token-based authentication is enabled for this :term:`Delivery Service` - uri_signing - URI Signing token-based authentication is enabled for this :term:`Delivery Service` +:trRequestHeaders: If defined, this defines the :ref:`ds-tr-req-headers` used by Traffic Router for this :term:`Delivery Service` - .. seealso:: `The Apache Trafficserver documentation for the url_sig plugin `_ and `the draft RFC for uri_signing `_ - note, however that the current implementation of uri_signing uses Draft 12 of that RFC document, **NOT** the latest + .. 
versionadded:: 1.3 -:sslKeyVersion: This integer indicates the generation of keys in use by the :term:`Delivery Service` - if any - and is incremented by the Traffic Portal client whenever new keys are generated +:trResponseHeaders: If defined, this defines the :ref:`ds-tr-resp-headers` used by Traffic Router for this :term:`Delivery Service` - .. warning:: This number will not be correct if keys are manually replaced using the API, as the key generation API does not increment it! + .. versionadded:: 1.3 -:tenantId: The integral, unique identifier of the :term:`Tenant` who owns this :term:`Delivery Service` -:trRequestHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router access logs for requests - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:trResponseHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router responses - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:type: The name of the routing type of this :term:`Delivery Service` e.g. "HTTP" -:typeId: The integral, unique identifier of the routing type of this :term:`Delivery Service` -:xmlId: A unique string that describes this :term:`Delivery Service` - exists for legacy reasons, but is used heavily by Traffic Control components +:type: The :ref:`ds-types` of this :term:`Delivery Service` +:typeId: The integral, unique identifier of the :ref:`ds-types` of this :term:`Delivery Service` +:xmlId: This :term:`Delivery Service`'s :ref:`ds-xmlid` .. 
code-block:: http :caption: Response Example @@ -205,71 +157,91 @@ Response Structure Access-Control-Allow-Origin: * Content-Type: application/json Set-Cookie: mojolicious=...; Path=/; HttpOnly - Whole-Content-Sha512: heK6DafnKW6KdyqQ7lTJQcStli3ixkWYjnbQ2EzR8ZU6Tlij3Takr6CNr0BcD5yWFVN1D8mvMPcj5XLP3FTt5w== + Whole-Content-Sha512: CFmtW41aoDezCYxtAXnS54dfFOD6jdxDJ2/LMpbBqnndy5kac7JQhdFAWF109sl95XVSUV85JHFzXZTw/mJabQ== X-Server-Name: traffic_ops_golang/ - Date: Mon, 10 Dec 2018 16:53:04 GMT - Content-Length: 1129 - - { "response": [ - { - "active": true, - "cacheurl": null, - "ccrDnsTtl": null, - "cdnId": 2, - "checkPath": null, - "deepCachingType": null, - "displayName": "Demo 1", - "dnsBypassCname": null, - "dnsBypassIp": null, - "dnsBypassIp6": null, - "dnsBypassTtl": null, - "dscp": 0, - "edgeHeaderRewrite": null, - "fqPacingRate": null, - "geoLimit": 0, - "geoLimitCountries": null, - "geoLimitRedirectURL": null, - "geoProvider": 0, - "globalMaxMbps": null, - "globalMaxTps": null, - "httpBypassFqdn": null, - "id": 1, - "infoUrl": null, - "initialDispersion": 1, - "ipv6RoutingEnabled": true, - "lastUpdated": "2018-12-05 17:51:00+00", - "logsEnabled": true, - "longDesc": "Apachecon North America 2018", - "longDesc1": null, - "longDesc2": null, - "maxDnsAnswers": null, - "midHeaderRewrite": null, - "missLat": 42, - "missLong": -88, - "multiSiteOrigin": false, - "multiSiteOriginAlgo": null, - "originShield": null, - "orgServerFqdn": "http://origin.infra.ciab.test", - "profileDescription": null, - "profileId": null, - "protocol": 0, - "qstringIgnore": 0, - "rangeRequestHandling": 0, - "regexRemap": null, - "regionalGeoBlocking": false, - "remapText": null, - "routingName": "video", - "signingAlgorithm": null, - "sslKeyVersion": null, - "trRequestHeaders": null, - "trResponseHeaders": null, - "tenantId": 1, - "typeId": 1, - "xmlId": "demo1" - } - ]} - -.. [#httpOnly] This only applies to HTTP-:ref:`routed ` :term:`Delivery Services` -.. 
[2] See :ref:`ds-multi-site-origin` -.. [3] This only applies to DNS-routed :term:`Delivery Services` -.. [4] These fields are required for HTTP-routed and DNS-routed :term:`Delivery Services`, but are optional for (and in fact may have no effect on) STEERING and ANY_MAP :term:`Delivery Services` + Date: Mon, 10 Jun 2019 17:01:30 GMT + Content-Length: 1500 + + { "response": [ { + "active": true, + "anonymousBlockingEnabled": false, + "cacheurl": null, + "ccrDnsTtl": null, + "cdnId": 2, + "cdnName": "CDN-in-a-Box", + "checkPath": null, + "displayName": "Demo 1", + "dnsBypassCname": null, + "dnsBypassIp": null, + "dnsBypassIp6": null, + "dnsBypassTtl": null, + "dscp": 0, + "edgeHeaderRewrite": null, + "geoLimit": 0, + "geoLimitCountries": null, + "geoLimitRedirectURL": null, + "geoProvider": 0, + "globalMaxMbps": null, + "globalMaxTps": null, + "httpBypassFqdn": null, + "id": 1, + "infoUrl": null, + "initialDispersion": 1, + "ipv6RoutingEnabled": true, + "lastUpdated": "2019-06-10 15:14:29+00", + "logsEnabled": true, + "longDesc": "Apachecon North America 2018", + "longDesc1": null, + "longDesc2": null, + "matchList": [ + { + "type": "HOST_REGEXP", + "setNumber": 0, + "pattern": ".*\\.demo1\\..*" + } + ], + "maxDnsAnswers": null, + "midHeaderRewrite": null, + "missLat": 42, + "missLong": -88, + "multiSiteOrigin": false, + "originShield": null, + "orgServerFqdn": "http://origin.infra.ciab.test", + "profileDescription": null, + "profileId": null, + "profileName": null, + "protocol": 2, + "qstringIgnore": 0, + "rangeRequestHandling": 0, + "regexRemap": null, + "regionalGeoBlocking": false, + "remapText": null, + "routingName": "video", + "signed": false, + "sslKeyVersion": 1, + "tenantId": 1, + "type": "HTTP", + "typeId": 1, + "xmlId": "demo1", + "exampleURLs": [ + "http://video.demo1.mycdn.ciab.test", + "https://video.demo1.mycdn.ciab.test" + ], + "deepCachingType": "NEVER", + "fqPacingRate": null, + "signingAlgorithm": null, + "tenant": "root", + "trResponseHeaders": 
null, + "trRequestHeaders": null, + "consistentHashRegex": null, + "consistentHashQueryParams": [ + "abc", + "pdq", + "xxx", + "zyx" + ], + "maxOriginConnections": 0 + }]} + + +.. [#tenancy] Only the :term:`Delivery Services` visible to the requesting user's :term:`Tenant` will appear, regardless of their :term:`Role` or the :term:`Delivery Services`' actual 'server assignment' status. diff --git a/docs/source/api/servers_id_queue_update.rst b/docs/source/api/servers_id_queue_update.rst index 383e4a3ea2..1ac729a3a9 100644 --- a/docs/source/api/servers_id_queue_update.rst +++ b/docs/source/api/servers_id_queue_update.rst @@ -18,12 +18,11 @@ ******************************* ``servers/{{ID}}/queue_update`` ******************************* -.. deprecated:: 1.1 - Use the ``PUT`` method of the :ref:`to-api-servers-id` endpoint instead. +.. caution:: In the vast majority of cases, it is advisable that the ``PUT`` method of the :ref:`to-api-servers-id` endpoint be used instead. ``POST`` ======== -Queue or dequeue updates for a specific server. +:term:`Queue` or dequeue updates for a specific server. :Auth. 
Required: Yes :Roles Required: "admin" or "operations" @@ -42,7 +41,7 @@ Request Structure :action: A string describing what action to take regarding server updates; one of: queue - Enqueue updates for the server, propagating configuration changes to the actual server + :term:`Queue Updates` for the server, propagating configuration changes to the actual server dequeue Cancels any pending updates on the server @@ -66,7 +65,7 @@ Response Structure :action: The action processed, one of: queue - Enqueued updates for the server, propagating configuration changes to the actual server + :term:`Queue Updates` was performed on the server, propagating configuration changes to the actual server dequeue Canceled any pending updates on the server diff --git a/docs/source/api/servers_server_configfiles_ats.rst b/docs/source/api/servers_server_configfiles_ats.rst index 913bfbce85..50af45a347 100644 --- a/docs/source/api/servers_server_configfiles_ats.rst +++ b/docs/source/api/servers_server_configfiles_ats.rst @@ -45,8 +45,8 @@ Response Structure :cdnId: The integral, unique, identifier of the CDN to which ``server`` is assigned :cdnName: The name of the CDN to which ``server`` is assigned - :profileId: The integral, unique, identifier of the profile used by ``server`` - :profileName: The name of the profile used by ``server`` + :profileName: The :ref:`profile-name` of the :term:`Profile` used by ``server`` + :profileId: The :ref:`profile-id` of the :term:`Profile` used by ``server`` :serverId: An integral, unique, identifier for ``server`` :serverIpv4: IPv4 address of the server :serverTcpPort: The port number on which ``server`` listens for incoming TCP connections @@ -63,7 +63,7 @@ Response Structure "cdns" The file is used by all caches in the CDN "profiles" - The file is used by all servers with the same profile + The file is used by all servers with the same :term:`Profile` "servers" The most specific grouping of servers which use this file is simply a collection of
distinct servers diff --git a/docs/source/api/snapshot_name.rst b/docs/source/api/snapshot_name.rst index 0141bf1c82..19377b4154 100644 --- a/docs/source/api/snapshot_name.rst +++ b/docs/source/api/snapshot_name.rst @@ -21,7 +21,7 @@ ``PUT`` ======= -Performs a CDN snapshot. Effectively, this propagates the new *configuration* of the CDN to its *operating state*, which replaces the output of the :ref:`to-api-cdns-name-snapshot` endpoint with the output of the :ref:`to-api-cdns-name-snapshot-new` endpoint. +Performs a CDN :term:`Snapshot`. Effectively, this propagates the new *configuration* of the CDN to its *operating state*, which replaces the output of the :ref:`to-api-cdns-name-snapshot` endpoint with the output of the :ref:`to-api-cdns-name-snapshot-new` endpoint. .. Note:: Snapshotting the CDN also deletes all HTTPS certificates for every :term:`Delivery Service` which has been deleted since the last :term:`Snapshot`. @@ -33,11 +33,11 @@ Request Structure ----------------- .. table:: Request Path Parameters - +------+---------------------------------------------------------+ - | Name | Description | - +======+=========================================================+ - | name | The name of the CDN for which a snapshot shall be taken | - +------+---------------------------------------------------------+ + +------+-----------------------------------------------------------------+ + | Name | Description | + +======+=================================================================+ + | name | The name of the CDN for which a :term:`Snapshot` shall be taken | + +------+-----------------------------------------------------------------+ .. 
code-block:: http :caption: Request Example diff --git a/docs/source/api/system_info.rst b/docs/source/api/system_info.rst index 60512d9a94..d4b37ff07a 100644 --- a/docs/source/api/system_info.rst +++ b/docs/source/api/system_info.rst @@ -33,7 +33,7 @@ Response Structure ------------------ :parameters: An object containing information about the Traffic Ops server - .. note:: These are all the parameters in the ``GLOBAL`` profile, so the keys below are merely those present by default required for Traffic Control to operate + .. note:: These are all the :term:`Parameters` in :ref:`the-global-profile`, so the keys below are merely those that are present by default and required for Traffic Control to operate :default_geo_miss_latitude: The default latitude used when geographic lookup of an IP address fails :default_geo_miss_longitude: The default longitude used when geographic lookup of an IP address fails diff --git a/docs/source/api/user_id_deliveryservices_available.rst b/docs/source/api/user_id_deliveryservices_available.rst index 00f3b253b6..fc6889ca27 100644 --- a/docs/source/api/user_id_deliveryservices_available.rst +++ b/docs/source/api/user_id_deliveryservices_available.rst @@ -19,10 +19,10 @@ ``GET`` ======= -Lists identifying information for all of the :term:`Delivery Service`\ s assigned to a user - **not**, as the name implies, the :term:`Delivery Service`\ s *available* to be assigned to that user. +Lists identifying information for all of the :term:`Delivery Services` assigned to a user - **not**, as the name implies, the :term:`Delivery Services` *available* to be assigned to that user. :Auth.
Required: Yes -:Roles Required: None +:Roles Required: None\ [#tenancy]_ :Response Type: Array Request Structure @@ -46,9 +46,9 @@ Request Structure Response Structure ------------------ -:displayName: This :term:`Delivery Service`'s name +:displayName: This :term:`Delivery Service`'s :ref:`ds-display-name` :id: The integral, unique identifier of this :term:`Delivery Service` -:xmlId: The 'xml_id' which (also) uniquely identifies this :term:`Delivery Service` +:xmlId: The :ref:`ds-xmlid` which (also) uniquely identifies this :term:`Delivery Service` .. code-block:: http :caption: Response Example @@ -72,3 +72,5 @@ Response Structure "xmlId": "demo1" } ]} + +.. [#tenancy] Only the :term:`Delivery Services` visible to the requesting user's :term:`Tenant` will appear, regardless of :term:`Role` or actual 'assignment' status. diff --git a/docs/source/api/user_login_oauth.rst b/docs/source/api/user_login_oauth.rst new file mode 100644 index 0000000000..867b79d758 --- /dev/null +++ b/docs/source/api/user_login_oauth.rst @@ -0,0 +1,81 @@ +.. +.. +.. Licensed under the Apache License, Version 2.0 (the "License"); +.. you may not use this file except in compliance with the License. +.. You may obtain a copy of the License at +.. +.. http://www.apache.org/licenses/LICENSE-2.0 +.. +.. Unless required by applicable law or agreed to in writing, software +.. distributed under the License is distributed on an "AS IS" BASIS, +.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +.. See the License for the specific language governing permissions and +.. limitations under the License. +.. + +.. _to-api-user-login-oauth: + +******************** +``user/login/oauth`` +******************** +.. versionadded:: 1.4 + +``POST`` +======== + +Authentication of a user by exchanging a code for an encrypted JSON Web Token from an OAuth service. Traffic Ops will ``POST`` to the authCodeTokenUrl to exchange the code for an encrypted JSON Web Token. 
It will then decode and validate the token, validate the key set domain, and send back a session cookie. + +:Auth. Required: No +:Roles Required: None +:Response Type: ``undefined`` + +Request Structure +----------------- +:authCodeTokenUrl: The URL of the OAuth provider's endpoint at which the code is exchanged for a token +:code: The authorization code returned by the OAuth provider +:clientId: The client ID registered with the OAuth provider +:clientSecret: The client secret used to authenticate with the OAuth provider +:redirectUri: The redirect URI to which the OAuth provider returned the user + +.. code-block:: http + :caption: Request Example + + POST /api/1.4/user/login/oauth HTTP/1.1 + Host: trafficops.infra.ciab.test + User-Agent: curl/7.47.0 + Accept: */* + Cookie: mojolicious=... + Content-Length: 26 + Content-Type: application/json + + { + "authCodeTokenUrl": "https://url-to-convert-code-to-token.com", + "code": "AbCd123", + "clientId": "oauthClientId", + "clientSecret": "clientSecret", + "redirectUri": "https://traffic-portal.com/sso" + } + +Response Structure +------------------ +.. code-block:: http + :caption: Response Example + + HTTP/1.1 200 OK + Access-Control-Allow-Credentials: true + Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Set-Cookie, Cookie + Access-Control-Allow-Methods: POST,GET,OPTIONS,PUT,DELETE + Access-Control-Allow-Origin: * + Content-Type: application/json + Set-Cookie: mojolicious=...; Path=/; Expires=Thu, 13 Dec 2018 21:21:33 GMT; HttpOnly + Whole-Content-Sha512: UdO6T3tMNctnVusDXzRjVwwYOnD7jmnBzPEB9PvOt2bHajTv3SKTPiIZjDzvhU6EX4p+JoG4fA5wlhgxpsejIw== + X-Server-Name: traffic_ops_golang/ + Date: Thu, 13 Dec 2018 15:21:33 GMT + Content-Length: 65 + + { "alerts": [ + { + "text": "Successfully logged in.", + "level": "success" + } + ]} diff --git a/docs/source/api/users_id_deliveryservices.rst b/docs/source/api/users_id_deliveryservices.rst index 9934b31e1b..34cb3fcaab 100644 --- a/docs/source/api/users_id_deliveryservices.rst +++ b/docs/source/api/users_id_deliveryservices.rst @@ -18,24 +18,25 @@ ********************************* ``users/{{ID}}/deliveryservices`` ********************************* +..
caution:: This endpoint has several issues related to tenancy and newer :term:`Delivery Service` fields. For these and other reasons, the assigning of :term:`Delivery Services` to users is strongly discouraged. ``GET`` ======= -Retrieves all :term:`Delivery Service`\ s assigned to the user. +Retrieves all :term:`Delivery Services` assigned to the user. :Auth. Required: Yes -:Roles Required: None\ [1]_ +:Roles Required: None\ [#tenancy]_ :Response Type: Array Request Structure ----------------- .. table:: Request Path Parameters - +------+---------------------------------------------------------------------------------------------------+ - | Name | Description | - +======+===================================================================================================+ - | ID | The integral, unique identifier of the users whose :term:`Delivery Service`\ s shall be retrieved | - +------+---------------------------------------------------------------------------------------------------+ + +------+-------------------------------------------------------------------------------------------------+ + | Name | Description | + +======+=================================================================================================+ + | ID | The integral, unique identifier of the users whose :term:`Delivery Services` shall be retrieved | + +------+-------------------------------------------------------------------------------------------------+ .. code-block:: http :caption: Request Example @@ -48,152 +49,106 @@ Request Structure Response Structure ------------------ -:active: ``true`` if the :term:`Delivery Service` is active, ``false`` otherwise -:anonymousBlockingEnabled: ``true`` if :ref:`Anonymous Blocking ` has been configured for the :term:`Delivery Service`, ``false`` otherwise -:cacheurl: A setting for a deprecated feature of now-unsupported Trafficserver versions +:active: A boolean that defines :ref:`ds-active`. 
+:anonymousBlockingEnabled: A boolean that defines :ref:`ds-anonymous-blocking` +:cacheurl: A :ref:`ds-cacheurl` .. deprecated:: ATCv3.0 This field has been deprecated in Traffic Control 3.x and is subject to removal in Traffic Control 4.x or later -:ccrDnsTtl: The Time To Live (TTL) of the DNS response for A or AAAA record queries requesting the IP address of the Traffic Router - named "ccrDnsTtl" for legacy reasons -:cdnId: The integral, unique identifier of the CDN to which the :term:`Delivery Service` belongs -:cdnName: Name of the CDN to which the :term:`Delivery Service` belongs -:checkPath: The path portion of the URL to check connections to this :term:`Delivery Service`'s origin server -:consistentHashRegex: If defined, this is a regular expression used for the Pattern-Based Consistent Hashing feature.\ [#httpOnly]_ +:ccrDnsTtl: The :ref:`ds-dns-ttl` - named "ccrDnsTtl" for legacy reasons +:cdnId: The integral, unique identifier of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:cdnName: Name of the :ref:`ds-cdn` to which the :term:`Delivery Service` belongs +:checkPath: A :ref:`ds-check-path` +:consistentHashRegex: A :ref:`ds-consistent-hashing-regex` .. versionadded:: 1.4 -:consistentHashQueryParams: A set (actually array due to limitations of JSON) of query parameters which will be considered by Traffic Router when using a client request to consistently find an :term:`Edge-tier cache server` to which to redirect them.\ [#httpOnly]_ +:consistentHashQueryParams: An array of :ref:`ds-consistent-hashing-qparams` .. versionadded:: 1.4 + .. caution:: This field will always appear to be ``null`` - even when the :term:`Delivery Service` in question has :ref:`ds-consistent-hashing-qparams` assigned to it. 
-:displayName: The display name of the :term:`Delivery Service` -:dnsBypassCname: Domain name to overflow requests for HTTP :term:`Delivery Service`\ s - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassIp: The IPv4 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassIp6: The IPv6 IP to use for bypass on a DNS :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service`\ [4]_ -:dnsBypassTtl: The time for which a DNS bypass of this :term:`Delivery Service`\ shall remain active\ [4]_ -:dscp: The Differentiated Services Code Point (DSCP) with which to mark traffic as it leaves the CDN and reaches clients -:edgeHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:fqPacingRate: The Fair-Queuing Pacing Rate in Bytes per second set on the all TCP connection sockets in the :term:`Delivery Service` (see ``man tc-fc_codel`` for more information) - Linux only -:geoLimit: The setting that determines how content is geographically limited - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - None - no limitations - 1 - Only route when the client's IP is found in the Coverage Zone File (CZF) - 2 - Only route when the client's IP is found in the CZF, or when the client can be determined to be from the United States of America - - .. 
warning:: This does not prevent access to content or make content secure; it merely prevents routing to the content through Traffic Router - -:geoLimitCountries: A string containing a comma-separated list of country codes (e.g. "US,AU") which are allowed to request content through this :term:`Delivery Service` -:geoLimitRedirectUrl: A URL to which clients blocked by :ref:`Regional Geographic Blocking ` or the ``geoLimit`` settings will be re-directed -:geoProvider: An integer that represents the provider of a database for mapping IPs to geographic locations; currently only the following values are supported: - - 0 - The "Maxmind" GeoIP2 database (default) - 1 - Neustar - -:globalMaxMbps: The maximum global bandwidth allowed on this :term:`Delivery Service`. If exceeded, traffic will be routed to ``dnsBypassIp`` (or ``dnsBypassIp6`` for IPv6 traffic) for DNS :term:`Delivery Service`\ s and to ``httpBypassFqdn`` for HTTP :term:`Delivery Service`\ s -:globalMaxTps: The maximum global transactions per second allowed on this :term:`Delivery Service`. When this is exceeded traffic will be sent to the ``dnsBypassIp`` (and/or ``dnsBypassIp6``) for DNS :term:`Delivery Service`\ s and to the httpBypassFqdn for HTTP :term:`Delivery Service`\ s -:httpBypassFqdn: The HTTP destination to use for bypass on an HTTP :term:`Delivery Service` - bypass starts when the traffic on this :term:`Delivery Service` exceeds ``globalMaxMbps``, or when more than ``globalMaxTps`` is being exceeded within the :term:`Delivery Service` +:deepCachingType: The :ref:`ds-deep-caching` setting for this :term:`Delivery Service` + + .. 
versionadded:: 1.3 + +:displayName: The :ref:`ds-display-name` +:dnsBypassCname: A :ref:`ds-dns-bypass-cname` +:dnsBypassIp: A :ref:`ds-dns-bypass-ip` +:dnsBypassIp6: A :ref:`ds-dns-bypass-ipv6` +:dnsBypassTtl: The :ref:`ds-dns-bypass-ttl` +:dscp: A :ref:`ds-dscp` to be used within the :term:`Delivery Service` +:edgeHeaderRewrite: A set of :ref:`ds-edge-header-rw-rules` +:exampleURLs: An array of :ref:`ds-example-urls` +:fqPacingRate: The :ref:`ds-fqpr` + + .. versionadded:: 1.3 + +:geoLimit: An integer that defines the :ref:`ds-geo-limit` +:geoLimitCountries: A string containing a comma-separated list defining the :ref:`ds-geo-limit-countries` +:geoLimitRedirectUrl: A :ref:`ds-geo-limit-redirect-url` +:geoProvider: The :ref:`ds-geo-provider` +:globalMaxMbps: The :ref:`ds-global-max-mbps` +:globalMaxTps: The :ref:`ds-global-max-tps` +:httpBypassFqdn: A :ref:`ds-http-bypass-fqdn` :id: An integral, unique identifier for this :term:`Delivery Service` -:infoUrl: This is a string which is expected to contain at least one URL pointing to more information about the :term:`Delivery Service`. Historically, this has been used to link relevant JIRA tickets -:initialDispersion: The number of caches between which traffic requesting the same object will be randomly split - meaning that if 4 clients all request the same object (one after another), then if this is above 4 there is a possibility that all 4 are cache misses. 
For most use-cases, this should be 1\ [#httpOnly]_ -:ipv6RoutingEnabled: If ``true``, clients that connect to Traffic Router using IPv6 will be given the IPv6 address of a suitable :term:`Edge-tier cache server`; if ``false`` all addresses will be IPv4, regardless of the client connection -:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in a ``ctime``-like format -:logsEnabled: If ``true``, logging is enabled for this :term:`Delivery Service`, otherwise it is disabled -:longDesc: A description of the :term:`Delivery Service` -:longDesc1: A field used when more detailed information that that provided by ``longDesc`` is desired -:longDesc2: A field used when even more detailed information that that provided by either ``longDesc`` or ``longDesc1`` is desired -:matchList: An array of methods used by Traffic Router to determine whether or not a request can be serviced by this :term:`Delivery Service` +:infoUrl: An :ref:`ds-info-url` +:initialDispersion: The :ref:`ds-initial-dispersion` +:ipv6RoutingEnabled: A boolean that defines the :ref:`ds-ipv6-routing` setting on this :term:`Delivery Service` +:lastUpdated: The date and time at which this :term:`Delivery Service` was last updated, in :rfc:`3339` format +:logsEnabled: A boolean that defines the :ref:`ds-logs-enabled` setting on this :term:`Delivery Service` +:longDesc: The :ref:`ds-longdesc` of this :term:`Delivery Service` +:longDesc1: The :ref:`ds-longdesc2` of this :term:`Delivery Service` +:longDesc2: The :ref:`ds-longdesc3` of this :term:`Delivery Service` +:matchList: The :term:`Delivery Service`'s :ref:`ds-matchlist` :pattern: A regular expression - the use of this pattern is dependent on the ``type`` field (backslashes are escaped) - :setNumber: An integral, unique identifier for the set of types to which the ``type`` field belongs - :type: The type of match performed using ``pattern`` to determine whether or not to use this :term:`Delivery Service` - - HOST_REGEXP - 
Use the :term:`Delivery Service` if ``pattern`` matches the ``Host:`` HTTP header of an HTTP request, or the name requested for resolution in a DNS request - HEADER_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches an HTTP header (both the name and value) in an HTTP request\ [#httpOnly]_ - PATH_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the request path of this :term:`Delivery Service`'s URL\ [#httpOnly]_ - STEERING_REGEXP - Use the :term:`Delivery Service` if ``pattern`` matches the ``xml_id`` of one of this :term:`Delivery Service`'s "Steering" target :term:`Delivery Services` - -:maxDnsAnswers: The maximum number of IPs to put in responses to A/AAAA DNS record requests (0 means all available)\ [4]_ -:midHeaderRewrite: Rewrite operations to be performed on TCP headers at the Edge-tier cache level - used by the Header Rewrite Apache Trafficserver plugin -:missLat: The latitude to use when the client cannot be found in the CZF or a geographic IP lookup -:missLong: The longitude to use when the client cannot be found in the CZF or a geographic IP lookup -:multiSiteOrigin: ``true`` if the Multi Site Origin feature is enabled for this :term:`Delivery Service`, ``false`` otherwise\ [3]_ -:orgServerFqdn: The URL of the :term:`Delivery Service`'s origin server for use in retrieving content from the origin server - - .. note:: Despite the field name, this must truly be a full URL - including the protocol (e.g. ``http://`` or ``https://``) - **NOT** merely the server's Fully Qualified Domain Name (FQDN) - -:originShield: An "origin shield" is a forward proxy that sits between Mid-tier caches and the origin and performs further caching beyond what's offered by a standard CDN. 
This field is a string of FQDNs to use as origin shields, delimited by ``|`` -:profileDescription: The description of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:profileId: The integral, unique identifier for the Traffic Router profile with which this :term:`Delivery Service` is associated -:profileName: The name of the Traffic Router Profile with which this :term:`Delivery Service` is associated -:protocol: The protocol which clients will use to communicate with Edge-tier :term:`cache server`\ s\ [#httpOnly]_ - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - HTTP - 1 - HTTPS - 2 - Both HTTP and HTTPS - -:qstringIgnore: Tells caches whether or not to consider URLs with different query parameter strings to be distinct - this is an integer on the interval [0-2] where the values have these meanings: - - 0 - URLs with different query parameter strings will be considered distinct for caching purposes, and query strings will be passed upstream to the origin - 1 - URLs with different query parameter strings will be considered identical for caching purposes, and query strings will be passed upstream to the origin - 2 - Query strings are stripped out by Edge-tier caches, and thus are neither taken into consideration for caching purposes, nor passed upstream in requests to the origin - -:rangeRequestHandling: Tells caches how to handle range requests - this is an integer on the interval [0,2] where the values have these meanings: - - 0 - Range requests will not be cached, but range requests that request ranges of content already cached will be served from the cache - 1 - Use the `background_fetch plugin `_ to service the range request while caching the whole object - 2 - Use the `experimental cache_range_requests plugin `_ to treat unique ranges as unique objects - -:regexRemap: A regular expression remap rule to apply to this :term:`Delivery Service` at the Edge tier - - .. 
seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ - -:regionalGeoBlocking: ``true`` if Regional Geo Blocking is in use within this :term:`Delivery Service`, ``false`` otherwise - see :ref:`regionalgeo-qht` for more information -:remapText: Additional, raw text to add to the remap line for caches - - .. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ - -:signed: ``true`` if token-based authentication is enabled for this :term:`Delivery Service`, ``false`` otherwise -:signingAlgorithm: Type of URL signing method to sign the URLs, basically comes down to one of two plugins or ``null``: - - ``null`` - Token-based authentication is not enabled for this :term:`Delivery Service` - url_sig: - URL Signing token-based authentication is enabled for this :term:`Delivery Service` - uri_signing - URI Signing token-based authentication is enabled for this :term:`Delivery Service` - - .. seealso:: `The Apache Trafficserver documentation for the url_sig plugin `_ and `the draft RFC for uri_signing `_ - note, however that the current implementation of uri_signing uses Draft 12 of that RFC document, **NOT** the latest - -:sslKeyVersion: This integer indicates the generation of keys in use by the :term:`Delivery Service` - if any - and is incremented by the Traffic Portal client whenever new keys are generated - - .. warning:: This number will not be correct if keys are manually replaced using the API, as the key generation API does not increment it! 
- -:tenantId: The integral, unique identifier of the tenant who owns this :term:`Delivery Service` -:trRequestHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router access logs for requests - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:trResponseHeaders: If defined, this takes the form of a string of HTTP headers to be included in Traffic Router responses - it's a template where ``__RETURN__`` translates to a carriage return and line feed (``\r\n``)\ [#httpOnly]_ -:type: The name of the routing type of this :term:`Delivery Service` e.g. "HTTP" -:typeId: The integral, unique identifier of the routing type of this :term:`Delivery Service` -:xmlId: A unique string that describes this :term:`Delivery Service` - exists for legacy reasons + :setNumber: An integer that provides explicit ordering of :ref:`ds-matchlist` items - this is used as a priority ranking by Traffic Router, and is not guaranteed to correspond to the ordering of items in the array. + :type: The type of match performed using ``pattern``. + +:maxDnsAnswers: The :ref:`ds-max-dns-answers` allowed for this :term:`Delivery Service` +:maxOriginConnections: The :ref:`ds-max-origin-connections` + + .. 
versionadded:: 1.4 + +:midHeaderRewrite: A set of :ref:`ds-mid-header-rw-rules` +:missLat: The :ref:`ds-geo-miss-default-latitude` used by this :term:`Delivery Service` +:missLong: The :ref:`ds-geo-miss-default-longitude` used by this :term:`Delivery Service` +:multiSiteOrigin: A boolean that defines the use of :ref:`ds-multi-site-origin` by this :term:`Delivery Service` +:orgServerFqdn: The :ref:`ds-origin-url` +:originShield: A :ref:`ds-origin-shield` string +:profileDescription: The :ref:`profile-description` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileId: The :ref:`profile-id` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:profileName: The :ref:`profile-name` of the :ref:`ds-profile` with which this :term:`Delivery Service` is associated +:protocol: An integral, unique identifier that corresponds to the :ref:`ds-protocol` used by this :term:`Delivery Service` +:qstringIgnore: An integral, unique identifier that corresponds to the :ref:`ds-qstring-handling` setting on this :term:`Delivery Service` +:rangeRequestHandling: An integral, unique identifier that corresponds to the :ref:`ds-range-request-handling` setting on this :term:`Delivery Service` +:regexRemap: A :ref:`ds-regex-remap` +:regionalGeoBlocking: A boolean defining the :ref:`ds-regionalgeo` setting on this :term:`Delivery Service` +:remapText: :ref:`ds-raw-remap` +:signed: ``true`` if and only if ``signingAlgorithm`` is not ``null``, ``false`` otherwise +:signingAlgorithm: Either a :ref:`ds-signing-algorithm` or ``null`` to indicate URL/URI signing is not implemented on this :term:`Delivery Service` + + .. versionadded:: 1.3 + +:sslKeyVersion: This integer indicates the :ref:`ds-ssl-key-version` +:tenantId: The integral, unique identifier of the :ref:`ds-tenant` who owns this :term:`Delivery Service` + + .. 
versionadded:: 1.3 + +:trRequestHeaders: If defined, this defines the :ref:`ds-tr-req-headers` used by Traffic Router for this :term:`Delivery Service` + + .. versionadded:: 1.3 + +:trResponseHeaders: If defined, this defines the :ref:`ds-tr-resp-headers` used by Traffic Router for this :term:`Delivery Service` + + .. versionadded:: 1.3 + +:type: The :ref:`ds-types` of this :term:`Delivery Service` +:typeId: The integral, unique identifier of the :ref:`ds-types` of this :term:`Delivery Service` +:xmlId: This :term:`Delivery Service`'s :ref:`ds-xmlid` .. code-block:: http :caption: Response Example @@ -205,12 +160,12 @@ Response Structure Access-Control-Allow-Origin: * Content-Type: application/json Set-Cookie: mojolicious=...; Path=/; HttpOnly - Whole-Content-Sha512: bAq7+0tpGE/POGmM5qF/FFjgAuOV5eZmpoOD8AOGHswLviGv8y2ukIEasQuhAPKVBlAPqalueTUx7ZasGxIjAw== + Whole-Content-Sha512: /YG9PdSw9PAkVLfbTcOfEUbJe14UTkWQp2P9x632RbmsbbAQvbluT5QIMLJ4OatmEGwWKs47NUaRLUc8z0/qSA== X-Server-Name: traffic_ops_golang/ - Date: Thu, 13 Dec 2018 19:29:06 GMT - Content-Length: 1194 + Date: Mon, 10 Jun 2019 16:50:25 GMT + Content-Length: 1348 - { "response": [{ + {"response": [{ "active": true, "anonymousBlockingEnabled": false, "cacheurl": null, @@ -236,7 +191,7 @@ Response Structure "infoUrl": null, "initialDispersion": 1, "ipv6RoutingEnabled": true, - "lastUpdated": "2018-12-12 16:26:44+00", + "lastUpdated": "2019-06-10 15:14:29+00", "logsEnabled": true, "longDesc": "Apachecon North America 2018", "longDesc1": null, @@ -252,7 +207,7 @@ Response Structure "profileDescription": null, "profileId": null, "profileName": null, - "protocol": 0, + "protocol": 2, "qstringIgnore": 0, "rangeRequestHandling": 0, "regexRemap": null, @@ -260,18 +215,21 @@ Response Structure "remapText": null, "routingName": "video", "signed": false, - "sslKeyVersion": null, + "sslKeyVersion": 1, "tenantId": 1, "type": "HTTP", "typeId": 1, "xmlId": "demo1", "exampleURLs": null, "deepCachingType": "NEVER", + 
"fqPacingRate": null, "signingAlgorithm": null, - "tenant": "root" + "tenant": "root", + "trResponseHeaders": null, + "trRequestHeaders": null, + "consistentHashRegex": null, + "consistentHashQueryParams": null, + "maxOriginConnections": null }]} -.. [1] Users with the :term:`Roles` "admin" and/or "operations" will be able to see *all* :term:`Delivery Services`, whereas any other user will only see the :term:`Delivery Services` their :term:`Tenant` is allowed to see. -.. [#httpOnly] This only applies to HTTP-:ref:`routed ` :term:`Delivery Services` -.. [3] See :ref:`ds-multi-site-origin` -.. [4] This only applies to DNS-:ref:`routed ` :term:`Delivery Services` +.. [#tenancy] While it is totally possible to assign a :term:`Delivery Service` to a user who's :term:`Tenant` does not have permission to own said :term:`Delivery Service`, users that request this endpoint will only see :term:`Delivery Services` that their :term:`Tenant` has permission to see. This means that there's no real guarantee that the output of this endpoint shows all of the :term:`Delivery Services` assigned to the user requested, even if the user is requesting their own assigned :term:`Delivery Services`. diff --git a/docs/source/development/documentation_guidelines.rst b/docs/source/development/documentation_guidelines.rst index fbcb013f17..d1a776b9b2 100644 --- a/docs/source/development/documentation_guidelines.rst +++ b/docs/source/development/documentation_guidelines.rst @@ -189,7 +189,7 @@ Documenting API Routes ---------------------- Follow all of the formatting conventions in `Formatting`_. Maintain the structural format of the API documentation as outlined in the :ref:`to-api` section. API routes that have variable paths e.g. :ref:`to-api-profiles-id` should use `mustache templates `_ **not** the Mojolicious-specific ``:param`` syntax. This keeps the templates generic, familiar, and reflects the inability of a request path to contain procedural instructions or program logic. 
Please do not include the ``/api/1.x/`` part of the request path for Traffic Ops API endpoints. If an endpoint is unavailable prior to a specific version, use the ``.. versionadded`` directive to indicate that version. Likewise, do not make a new page for an endpoint when it changes across versions, instead call out the changes using the ``.. versionchanged`` directive. If an endpoint should not be used because newer endpoints provide the same functionality in a better way, use the ``.. deprecated`` directive to link to them and explain why they are better. -When documenting an API route, be sure to include *all* methods, request/response JSON payload fields, path parameters, and query parameters, whether they are optional or not. When describing a field in a JSON payload, remember that JSON does not have "hashes" it has "objects" or even "maps". When documenting path parameters such as profile ID in :ref:`to-api-profiles-id`, consider that the endpoint path cannot be formed without defining **all** path parameters, and so to label them as "required" is superfluous. +When documenting an API route, be sure to include *all* methods, request/response JSON payload fields, path parameters, and query parameters, whether they are optional or not. When describing a field in a JSON payload, remember that JSON does not have "hashes" it has "objects" or even "maps". When documenting path parameters such as :term:`Profile` :ref:`profile-id` in :ref:`to-api-profiles-id`, consider that the endpoint path cannot be formed without defining **all** path parameters, and so to label them as "required" is superfluous. The "Response Example" must **always** exist. "TODO" is **not** an acceptable Response Example for new endpoints. The "Request Example" must only exist if the request requires data in the body (most commonly this will be for ``PATCH``, ``POST`` and ``PUT`` methods). 
It is, however, strongly advised that a request example be given if the endpoint takes Query Parameters or Path Parameters, and it is required if the Response Example is a response to a request that used a query or path parameter. If the Request Example *is* present, then the Response Example **must** be the appropriate response **to that request**. When generating Request/Response Examples, attempt to use the :ref:`ciab` environment whenever possible to provide a common basis and familiarity to new users who likely set up "CDN in a Box" as a primer for understanding CDNs/Traffic Control. Responses are sometimes hundreds of lines long, and in those cases only as much as is required for an understanding of the structure needs to be included in the example - along with a note mentioning that the output was trimmed. Also always attempt to place structure explanations before any example so that the content of the example can be understood by the reader (though in general the placement of a floating environment like a code listing is not known at compile-time). Whenever possible, the Request and Response examples should include the *complete HTTP stack*, which captures behavior like Query Parameters, Path Parameters and HTTP cookie operations like those used by e.g. :ref:`to-api-logs`. A few caveats to the "include all headers" rule: diff --git a/docs/source/development/traffic_router/traffic_router_api.rst b/docs/source/development/traffic_router/traffic_router_api.rst index bbf037866f..5efabb4ec8 100644 --- a/docs/source/development/traffic_router/traffic_router_api.rst +++ b/docs/source/development/traffic_router/traffic_router_api.rst @@ -18,7 +18,7 @@ ****************** Traffic Router API ****************** -By default, Traffic Router serves its API via HTTP (not HTTPS) on port 3333. 
This can be configured in :file:`/opt/traffic_router/conf/server.xml` or by setting a :term:`Parameter` named ``api.port`` with ``configFile`` ``server.xml`` on the Traffic Router's :term:`Profile`. +By default, Traffic Router serves its API via HTTP (not HTTPS) on port 3333. This can be configured in :file:`/opt/traffic_router/conf/server.xml` or by setting a :term:`Parameter` with the :ref:`parameter-name` "api.port", and the :ref:`parameter-config-file` "server.xml" on the Traffic Router's :term:`Profile`. Traffic Router API endpoints only respond to ``GET`` requests. diff --git a/docs/source/glossary.rst b/docs/source/glossary.rst index b01a7db0cd..97f99c403e 100644 --- a/docs/source/glossary.rst +++ b/docs/source/glossary.rst @@ -44,7 +44,7 @@ Glossary .. Note:: Often the Edge-tier to Mid-tier relationship is based on network distance, and does not necessarily match the geographic distance. - .. seealso:: A :dfn:`Cache Group` serves a particular part of the network as defined in the coverage zone file. See :ref:`asn-czf` for details. + .. seealso:: A :dfn:`Cache Group` serves a particular part of the network as defined in the :term:`Coverage Zone File` (or :term:`Deep Coverage Zone File`, when applicable). Consider the example CDN in :numref:`fig-cg_hierarchy`. Here some country/province/region has been divided into quarters: Northeast, Southeast, Northwest, and Southwest. The arrows in the diagram indicate the flow of requests. If a client in the Northwest, for example, were to make a request to the :term:`Delivery Service`, it would first be directed to some :term:`cache server` in the "Northwest" Edge-tier :dfn:`Cache Group`. Should the requested content not be in cache, the Edge-tier server will select a parent from the "West" :dfn:`Cache Group` and pass the request up, caching the result for future use. All Mid-tier :dfn:`Cache Groups` (usually) answer to a single :term:`origin` that provides canonical content. 
If requested content is not in the Mid-tier cache, then the request will be passed up to the :term:`origin` and the result cached. @@ -64,10 +64,46 @@ Glossary Coverage Zone Map The :abbr:`CZM (Coverage Zone Map)` or :abbr:`CZF (Coverage Zone File)` is a file that maps network prefixes to :term:`Cache Groups`. Traffic Router uses the :abbr:`CZM (Coverage Zone Map)` to determine what :term:`Cache Group` is closest to the client. If the client IP address is not in this :abbr:`CZM (Coverage Zone Map)`, it falls back to geographic mapping, using a `MaxMind GeoIP2 database `_ to find the client's location, and the geographic coordinates from Traffic Ops for the :term:`Cache Group`. Traffic Router is inserted into the HTTP retrieval process by making it the authoritative DNS server for the domain of the CDN :term:`Delivery Service`. In the example of the :term:`reverse proxy`, the client was given the ``http://www-origin-cache.cdn.com/foo/bar/fun.html`` URL. In a Traffic Control CDN, URLs start with a routing name, which is configurable per-:term:`Delivery Service`, e.g. ``http://foo.mydeliveryservice.cdn.com/fun/example.html`` with the chosen routing name ``foo``. + .. code-block:: json + :caption: Example Coverage Zone File + + { "coverageZones": { + "cache-group-01": { + "network6": [ + "1234:5678::/64", + "1234:5679::/64" + ], + "network": [ + "192.168.8.0/24", + "192.168.9.0/24" + ] + } + }} + + Deep Coverage Zone File Deep Coverage Zone Map The :abbr:`DCZF (Deep Coverage Zone File)` or :abbr:`DCZM (Deep Coverage Zone Map)` maps network prefixes to "locations" - almost like the :term:`Coverage Zone File`. Location names must be unique, and within the file are simply used to group :term:`Edge-tier cache servers`. When a mapping is performed by Traffic Router, it will only look in the :abbr:`DCZF (Deep Coverage Zone File)` if the :term:`Delivery Service` to which a client is being directed makes use of :ref:`ds-deep-caching`. 
If the client's IP address cannot be matched by entries in this file, Traffic Router will first fall back to the regular :term:`Coverage Zone File`. Then, failing that, it will perform geographic mapping using a database provided by the :term:`Delivery Service`'s :ref:`ds-geo-provider`. + .. code-block:: json + :caption: Example Deep Coverage Zone File + + { "coverageZones": { + "cache-group-01": { + "network6": [ + "1234:5678::/64", + "1234:5679::/64" + ], + "network": [ + "192.168.8.0/24", + "192.168.9.0/24" + ], + "caches": [ + "edge" + ] + } + }} + Delivery Service Delivery Services :dfn:`Delivery Services` are often referred to as a :term:`reverse proxy` "remap rule" that exists on Edge-tier :term:`cache servers`. In most cases, a :dfn:`Delivery Service` is a one-to-one mapping to an :abbr:`FQDN (Fully Qualified Domain Name)` that is used as a hostname to deliver the content. Many options and settings regarding how to optimize the content delivery exist, which are configurable on a :dfn:`Delivery Service` basis. Some examples of these :dfn:`Delivery Service`\ settings are: @@ -85,7 +121,7 @@ Glossary Division Divisions - A group of :term:`Region`\ s. + A group of :term:`Regions`. Edge Edge-tier @@ -209,11 +245,15 @@ Glossary The source of content for the CDN. Usually a redundant HTTP/1.1 webserver. ORT - The "Operational Readiness Test" script that stitches the configuration configured in Traffic Portal and generated by Traffic Ops into the :term:`cache server`\ s. See :ref:`traffic-ops-ort` for more information. + The "Operational Readiness Test" script that stitches the configuration configured in Traffic Portal and generated by Traffic Ops into the :term:`cache servers`. + + .. seealso:: See :ref:`traffic-ops-ort` for a Python implementation of ORT that is (theoretically) compatible with the one actually provided in Apache Traffic Control releases. 
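The lookup order described above (Deep Coverage Zone File first when deep caching is enabled, then the regular Coverage Zone File, then a geographic database) can be sketched roughly as follows. This is an illustrative Python sketch modeled on the example JSON files above; the zone data and function names are stand-ins, not Traffic Router's actual (Java) implementation:

```python
import ipaddress

# Toy data modeled on the example Coverage Zone / Deep Coverage Zone files.
DEEP_COVERAGE_ZONES = {"deep-location-01": ["192.168.8.0/24", "1234:5678::/64"]}
COVERAGE_ZONES = {
    "cache-group-01": [
        "192.168.8.0/24", "192.168.9.0/24",
        "1234:5678::/64", "1234:5679::/64",
    ]
}

def match_zone(client_ip, zones):
    """Return the first zone whose network list contains client_ip, or None."""
    addr = ipaddress.ip_address(client_ip)
    for name, networks in zones.items():
        for cidr in networks:
            net = ipaddress.ip_network(cidr)
            if net.version == addr.version and addr in net:
                return name
    return None

def route(client_ip, deep_caching_enabled):
    """Mimic the lookup order: DCZF (only if deep caching), then CZF, then geo."""
    if deep_caching_enabled:
        loc = match_zone(client_ip, DEEP_COVERAGE_ZONES)
        if loc is not None:
            return ("deep", loc)
    cg = match_zone(client_ip, COVERAGE_ZONES)
    if cg is not None:
        return ("czf", cg)
    return ("geo", None)  # fall back to e.g. a MaxMind lookup
```

Note how a client that misses the deep zones (e.g. an address in ``192.168.9.0/24``) still resolves through the regular Coverage Zone File, as described above.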
Parameter Parameters - Typically refers to a line in a configuration file, but in practice can represent any arbitrary configuration option + Typically refers to a line in a configuration file, but in practice can represent any arbitrary configuration option. + + .. seealso:: The :ref:`profiles-and-parameters` overview section. parent parents @@ -225,75 +265,24 @@ Glossary Profile Profiles - A :dfn:`Profile` is, most generally, a group of :term:`Parameter`\ s that will be applied to a server. :dfn:`Profiles` are typically re-used by all Edge-Tier :term:`cache server`\ s within a CDN or :term:`Cache Group`. A :dfn:`Profile` will, in addition to configuration :term:`Parameter`\ s, define the CDN to which a server belongs and the "Type" of the profile - which determines some behaviors of Traffic Control components. The allowed "Types" of :dfn:`Profiles` are **not** the same as :term:`Type`\ s, and are maintained as a PostgreSQL "Enum" in :file:`traffic_ops/app/db/migrations/20170205101432_cdn_table_domain_name.go`. The only allowed values are: - - UNK_PROFILE - A catch-all type that can be assigned to anything without imbuing it with any special meaning or behavior - TR_PROFILE - A Traffic Router Profile. - - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``CCR_`` or ``TR_``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! - - TM_PROFILE - A Traffic Monitor Profile. - - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``RASCAL_`` or ``TM_``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! - - TS_PROFILE - A Traffic Stats Profile - - .. warning:: For legacy reasons, the names of Profiles of this type *must* be ``TRAFFIC_STATS``. 
This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! - - TP_PROFILE - A Traffic Portal Profile - - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``TRAFFIC_PORTAL``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! - - INFLUXDB_PROFILE - A Profile used with `InfluxDB `_, which is used by Traffic Stats. - - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``INFLUXDB``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! - - RIAK_PROFILE - A Profile used for each `Riak `_ server in a Traffic Stats cluster. - - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``RIAK``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! - - SPLUNK_PROFILE - - A Profile meant to be used with `Splunk `_ servers. - - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``SPLUNK``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! - - ORG_PROFILE - Origin Profile. - - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``MSO``, or contain either ``ORG`` or ``ORIGIN`` anywhere in the name. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! - - KAFKA_PROFILE - A Profile for `Kafka `_ servers. - - .. 
warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``KAFKA``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! - - LOGSTASH_PROFILE - A Profile for `Logstash `_ servers. - - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``LOGSTASH_``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + A :dfn:`Profile` is, most generally, a group of :term:`Parameters` that will be applied to a server. :dfn:`Profiles` are typically re-used by all :term:`Edge-Tier cache servers` within a CDN or :term:`Cache Group`. A :dfn:`Profile` will, in addition to configuration :term:`Parameters`, define the CDN to which a server belongs and the :ref:`"Type" ` of the Profile - which determines some behaviors of Traffic Control components. The allowed :ref:`"Types" ` of :dfn:`Profiles` are **not** the same as :term:`Types`, and are maintained as a PostgreSQL "Enum" in :atc-file:`traffic_ops/app/db/create_tables.sql`. - ES_PROFILE - A Profile for `ElasticSearch `_ servers. + .. tip:: A :dfn:`Profile` of the wrong type assigned to a Traffic Control component *will* (in general) cause it to function incorrectly, regardless of the :term:`Parameters` assigned to it. - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``ELASTICSEARCH``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + .. seealso:: The :ref:`profiles-and-parameters` overview section. - ATS_PROFILE - A Profile that can be used with either an Edge-tier or Mid-tier :term:`cache server`\ ` (but not both, in general). 
+ Queue + Queue Updates + Queue Server Updates + :dfn:`Queuing Updates` is an action that signals to various ATC components - most notably :term:`cache servers` - that any configuration changes that are pending are to be applied now. Specifically, Traffic Monitor and Traffic Router are updated through a CDN :term:`Snapshot`, and *not* :dfn:`Queued Updates`. In particular, :term:`ORT` will notice that the server on which it's running has new configuration, and will request the new configuration from Traffic Ops. - .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``EDGE`` or ``MID``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + Updates may be queued on a server-by-server basis (in Traffic Portal's :ref:`tp-configure-servers` view), a Cache Group-wide basis (in Traffic Portal's :ref:`tp-configure-cache-groups` view), or on a CDN-wide basis (in Traffic Portal's :ref:`tp-cdns` view). Usually the CDN-wide version is easiest, and unless there are special circumstances or the user really knows what they are doing, it is recommended that the full CDN-wide :dfn:`Queue Updates` be used. + This is similar to taking a CDN :term:`Snapshot`, but this configuration change affects only servers, and not routing. - .. tip:: A :dfn:`Profile` of the wrong type assigned to a Traffic Control component *will* (in general) cause it to function incorrectly, regardless of the :term:`Parameter`\ s assigned to it. + That seems like a vague difference because it is - in general the rule to follow is that changes to :term:`Profiles` and :term:`Parameters` require only that updates be queued, changes to the assignments of :term:`cache servers` to :term:`Delivery Services` require both a :term:`Snapshot` *and* a :dfn:`Queue Updates`, and changes to only a :term:`Delivery Service` itself (usually) entail a :term:`Snapshot` only.
These aren't exhaustive rules, and a grasp of what changes require which action(s) will take time to form. In general, when doing both :dfn:`Queuing Updates` as well as taking a CDN :term:`Snapshot`, it is advisable to first :dfn:`Queue Updates` and *then* take the :term:`Snapshot`, as otherwise Traffic Router may route clients to :term:`Edge-tier cache servers` that are not equipped to service their request(s). However, when modifying the assignment(s) of :term:`cache servers` to one or more :term:`Delivery Services`, a :term:`Snapshot` ought to be taken before updates are queued. - .. danger:: Nearly all of these :dfn:`Profile` types have strict naming requirements, and it may be noted that some of said requirements are prefixes ending with ``_``, while others are either not prefixes or do not end with ``_``. This is exactly true; some requirements **need** that ``_`` and some may or may not have it. It is our suggestion, therefore, that for the time being all prefixes use the ``_`` notation to separate words, so as to avoid causing headaches remembering when that matters and when it does not. + .. warning:: Updates to :term:`Parameters` with certain :ref:`parameter-config-file` values may require running :term:`ORT` in a different mode, occasionally manually. Though the server may appear to no longer have pending updates in these cases, until this manual intervention is performed the configuration *will* **not** *be correct*. Region Regions @@ -389,6 +378,8 @@ Glossary Snapshot Snapshots + CDN Snapshot + CDN Snapshots Previously called a "CRConfig" or "CRConfig.json" (and still called such in many places), this is a rather large set of routing information generated from a CDN's configuration and topology. 
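The rules of thumb above can be loosely encoded as a small helper, purely as a sketch; the change categories and action names here are invented for illustration and are not an actual ATC API:

```python
def required_actions(change):
    """Map a kind of configuration change to the ATC actions it needs,
    in the recommended order (queue before snapshot, except when changing
    cache server assignments to Delivery Services)."""
    if change in ("profile", "parameter"):
        return ["queue updates"]
    if change == "cache-to-ds-assignment":
        # Per the advice above: snapshot first, then queue updates,
        # when modifying cache server assignments.
        return ["snapshot", "queue updates"]
    if change == "delivery-service":
        return ["snapshot"]
    return []  # unknown change kinds need case-by-case judgment
```

As the text notes, these rules aren't exhaustive; the helper only captures the common cases.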
Status diff --git a/docs/source/overview/delivery_services.rst b/docs/source/overview/delivery_services.rst index 653e1e4e8f..c49b4e1fb5 100644 --- a/docs/source/overview/delivery_services.rst +++ b/docs/source/overview/delivery_services.rst @@ -24,9 +24,13 @@ Delivery Services are modeled several times over, in the Traffic Ops database, i .. seealso:: The API reference for Delivery Service-related endpoints such as :ref:`to-api-deliveryservices` contains definitions of the Delivery Service object(s) returned and/or accepted by those endpoints. +.. _ds-active: + Active ------ -Whether or not this Delivery Service is active on the CDN and can be served. When a Delivery Service is not "active", Traffic Router will not be made aware of its existence - i.e. it will not appear in CDN :term:`Snapshot`\ s. Setting a Delivery Service to be "active" (or "inactive") will require that a new :term:`Snapshot` be taken. +Whether or not this Delivery Service is active on the CDN and can be served. When a Delivery Service is not "active", Traffic Router will not be made aware of its existence - i.e. it will not appear in CDN :term:`Snapshots`. Setting a Delivery Service to be "active" (or "inactive") will require that a new :term:`Snapshot` be taken. + +.. _ds-anonymous-blocking: Anonymous Blocking ------------------ @@ -45,6 +49,8 @@ Enables/Disables blocking of anonymized IP address - proxies, :abbr:`TOR (The On .. seealso:: The :ref:`anonymous_blocking-qht` "Quick-How-To" guide. +.. _ds-cacheurl: + Cache URL Expression -------------------- .. deprecated:: 3.0 @@ -54,10 +60,14 @@ Manipulates the cache key of the incoming requests. Normally, the cache key is t .. warning:: This field provides access to a feature that was only present in :abbr:`ATS (Apache Traffic Server)` 6.X and earlier. 
As :term:`cache server`\ s must now use :abbr:`ATS (Apache Traffic Server)` 7.1.X, this field **must** be blank unless all :term:`cache servers` can be guaranteed to use that older :abbr:`ATS (Apache Traffic Server)` version (**NOT** recommended). +.. _ds-cdn: + CDN --- A CDN to which this Delivery Service belongs. Only :term:`cache servers` within this CDN are available to route content for this Delivery Service. Additionally, only Traffic Routers assigned to this CDN will perform said routing. Most often ``cdn``/``CDN`` refers to the *name* of the CDN to which the Delivery Service belongs, but occasionally (most notably in the payloads and/or query parameters of certain :ref:`to-api` endpoints) it actually refers to the *integral, unique identifier* of said CDN. +.. _ds-check-path: + Check Path ---------- A request path on the :term:`origin server` which is used by certain :ref:`Traffic Ops Extensions ` to indicate the "health" of the :term:`origin`. @@ -107,30 +117,42 @@ NEVER .. impl-detail:: Traffic Ops and Traffic Ops client Go code use an empty string as the name of the enumeration member that represents "NEVER". +.. _ds-display-name: + Display Name ------------ The "name" of the Delivery Service. Since nearly any use of a string-based identification method for Delivery Services (e.g. in Traffic Portal tables) uses xml_id_, this is of limited use. For that reason and for consistency's sake it is suggested that this be the same as the xml_id_. However, unlike the xml_id_, this can contain any UTF-8 characters without restriction. +.. _ds-dns-bypass-cname: + DNS Bypass CNAME ---------------- When the limits placed on this Delivery Service by the `Global Max Mbps`_ and/or `Global Max Tps`_ are exceeded, a DNS-:ref:`Routed ` Delivery Service will direct excess traffic to the host referred to by this :abbr:`CNAME (Canonical Name)` record. .. note:: IPv6 traffic will be redirected if and only if `IPv6 Routing Enabled`_ is "true" for this Delivery Service. +..
_ds-dns-bypass-ip: + DNS Bypass IP ------------- When the limits placed on this Delivery Service by the `Global Max Mbps`_ and/or `Global Max Tps`_ are exceeded, a DNS-:ref:`Routed ` Delivery Service will direct excess IPv4 traffic to this IPv4 address. +.. _ds-dns-bypass-ipv6: + DNS Bypass IPv6 --------------- When the limits placed on this Delivery Service by the `Global Max Mbps`_ and/or `Global Max Tps`_ are exceeded, a DNS-:ref:`Routed ` Delivery Service will direct excess IPv6 traffic to this IPv6 address. .. note:: This requires an accompanying configuration of `IPv6 Routing Enabled`_ such that IPv6 traffic is allowed at all. +.. _ds-dns-bypass-ttl: + DNS Bypass TTL -------------- When the limits placed on this Delivery Service by the `Global Max Mbps`_ and/or `Global Max Tps`_ are exceeded, a DNS-:ref:`Routed ` Delivery Service will direct excess traffic to their `DNS Bypass IP`_, `DNS Bypass IPv6`_, or `DNS Bypass CNAME`_. +.. _ds-dns-ttl: + DNS TTL ------- The :abbr:`TTL (Time To Live)` on the DNS record for the Traffic Router A and AAAA records. DNS-:ref:`Routed ` Delivery Services will send this :abbr:`TTL (Time To Live)` along with their record responses to clients requesting access to this Delivery Service. Setting this too high or too low will result in poor caching performance. @@ -159,18 +181,45 @@ The :abbr:`DSCP (Differentiated Services Code Point)` which will be used to mark .. impl-detail:: DSCP settings only apply on :term:`cache servers` that run :abbr:`ATS (Apache Traffic Server)`. The implementation uses the `ATS Header Rewrite Plugin `_ to create a rule that will mark traffic bound outward from the CDN to the client. +.. _ds-edge-header-rw-rules: + Edge Header Rewrite Rules ------------------------- This field in general contains the contents of a configuration file used by the `ATS Header Rewrite Plugin `_ when serving content for this Delivery Service - on :term:`Edge-tier cache server`\ s. ..
tip:: Because this ultimately is the contents of an :abbr:`ATS (Apache Traffic Server)` configuration file, it can make use of the :ref:`ort-special-strings`. +.. _ds-example-urls: + +Example URLs +------------ +The Example URLs of a Delivery Service are the scheme/host specifications that clients can use to request content through it. These are determined by Traffic Ops from the Delivery Service's configuration, and are read-only in virtually every context. The only reason a Delivery Service should ever have no Example URLs is if it is an ANY_MAP-`Type`_ Delivery Service (since they are not routed). For example, a Delivery Service that can deliver HTTP and HTTPS content, has a `Routing Name`_ of "cdn", an `xml_id`_ of "demo1", and belonging to a `CDN`_ that is authoritative for the `mycdn.ciab.test` domain would have two Example URLs: + +- `https://cdn.demo1.mycdn.ciab.test` +- `http://cdn.demo1.mycdn.ciab.test` + +Note that these are irrespective of request path; meaning a client can request e.g. `https://cdn.demo1.mycdn.ciab.test/index.html` through this Delivery Service. + +.. warning:: This list does not consider any `Static DNS Entries`_ configured on the Delivery Service, those are + +.. table:: Aliases + + +-----------------------+----------------------+-----------------------------+ + | Name | Use(s) | Type(s) | + +=======================+======================+=============================+ + | Delivery Service URLs | Traffic Portal forms | unchanged (list of strings) | + +-----------------------+----------------------+-----------------------------+ + +.. _ds-fqpr: + Fair-Queuing Pacing Rate Bps ---------------------------- The maximum bytes per second a :term:`cache server` will deliver on any single TCP connection. This uses the Linux kernel’s Fair-Queuing :manpage:`setsockopt(2)` (``SO_MAX_PACING_RATE``) to limit the rate of delivery. Traffic exceeding this speed will only be rate-limited and not diverted. 
This option requires extra configuration on all :term:`cache servers` assigned to this Delivery Service - specifically, the line ``net.core.default_qdisc = fq`` must exist in :file:`/etc/sysctl.conf`. .. seealso:: :manpage:`tc-fq_codel(8)` +.. seealso:: This is implemented using the `ATS fq_pacing plugin `_. + .. table:: Aliases +--------------+---------------------------------------------------------------------------------+---------------------------------------+ @@ -179,6 +228,8 @@ The maximum bytes per second a :term:`cache server` will deliver on any single T | FQPacingRate | Traffic Ops source code, Delivery Service objects returned by the :ref:`to-api` | unchanged (``int``, ``integer`` etc.) | +--------------+---------------------------------------------------------------------------------+---------------------------------------+ +.. _ds-geo-limit: + Geo Limit --------- Limits access to a Delivery Service by geographic location. The only practical difference between this and `Regional Geoblocking`_ is the configuration method; as opposed to `Regional Geoblocking`_, GeoLimit configuration is handled by country-wide codes and the :term:`Coverage Zone File`. When a client is denied access to a requested resource on an HTTP-:ref:`Routed ` Delivery Service, they will receive a ``503 Service Unavailable`` instead of the usual ``302 Found`` response - unless `Geo Limit Redirect URL`_ is defined, in which case a ``302 Found`` response pointing to that URL will be returned by Traffic Router. If the Delivery Service is a DNS-:ref:`Routed ` Delivery Service, the IP address of the *resolver* for the client DNS request is what is checked. If the IP address of this resolver is found to be in a restricted location, the Traffic Router will respond with an ``NXDOMAIN`` response, causing the name resolution to fail. This is nearly always an integral, unique identifier for a behavior set to be followed by Traffic Router.
The defined values are: @@ -203,6 +254,8 @@ Limits access to a Delivery Service by geographic location. The only practical d .. danger:: Geographic access limiting is **not** sufficient to guarantee access is properly restricted. The limiting is implemented by Traffic Router, which means that direct requests to :term:`Edge-tier cache server`\ s will bypass it entirely. +.. _ds-geo-limit-countries: + Geo Limit Countries ------------------- When `Geo Limit`_ is being used with this Delivery Service (and is set to exactly ``2``), this is optionally a list of country codes to which access to content provided by the Delivery Service will be restricted. Normally, this is a comma-delimited string of said country codes. When creating a Delivery Service with this field or modifying the Geo Limit Countries field on an existing Delivery Service, any amount of whitespace between country codes is permissible, as it will be removed on submission, but responses from the :ref:`to-api` should never include such whitespace. @@ -216,6 +269,8 @@ When `Geo Limit`_ is being used with this Delivery Service (and is set to exactl | | | country code - one should exist for each allowed country code | +------------------+---------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+ +.. _ds-geo-limit-redirect-url: + Geo Limit Redirect URL ---------------------- If `Geo Limit`_ is being used with this Delivery Service, this is optionally a URL to which clients will be redirected when Traffic Router determines that they are not in a geographic zone that permits their access to the Delivery Service content. This changes the response from Traffic Router from ``503 Service Unavailable`` to ``302 Found`` with a provided location that will be this URL. There is no restriction on the provided URL; it may even be the path to a resource served by this Delivery Service. 
In fact, this field need not even be a full URL, it can be a relative path. Both of these cases are handled specially by Traffic Router. @@ -264,6 +319,8 @@ This is nearly always the integral, unique identifier of a provider for a databa | geoProvider | Traffic Ops and Traffic Ops client code, :ref:`to-api` requests and responses | unchanged (integral, unique identifier) | +-------------+-------------------------------------------------------------------------------+-----------------------------------------+ +.. _ds-geo-miss-default-latitude: + Geo Miss Default Latitude ------------------------- Default Latitude for this Delivery Service. When the geographic location of the client cannot be determined, they will be routed as if they were at this latitude. @@ -276,6 +333,8 @@ Default Latitude for this Delivery Service. When the geographic location of the | missLat | In :ref:`to-api` responses and Traffic Ops source code | unchanged (numeric) | +---------+--------------------------------------------------------+---------------------+ +.. _ds-geo-miss-default-longitude: + Geo Miss Default Longitude -------------------------- Default Longitude for this Delivery Service. When the geographic location of the client cannot be determined, they will be routed as if they were at this longitude. @@ -288,6 +347,8 @@ Default Longitude for this Delivery Service. When the geographic location of the | missLong | In :ref:`to-api` responses and Traffic Ops source code | unchanged (numeric) | +----------+--------------------------------------------------------+---------------------+ +.. _ds-global-max-mbps: + Global Max Mbps --------------- The maximum :abbr:`Mbps (Megabits per second)` this Delivery Service can serve across all :term:`Edge-tier cache server`\ s before traffic will be diverted to the bypass destination. 
For a DNS-:ref:`Routed ` Delivery Service, the `DNS Bypass IP`_ or `DNS Bypass IPv6`_ will be used (depending on whether this was an A or AAAA request), and for HTTP-:ref:`Routed ` Delivery Services the `HTTP Bypass FQDN`_ will be used. @@ -300,6 +361,8 @@ The maximum :abbr:`Mbps (Megabits per second)` this Delivery Service can serve a | totalKbpsThreshold | In :ref:`to-api` responses - most notably :ref:`to-api-cdns-name-configs-monitoring` | unchanged (numeric), but converted from :abbr:`Mbps (Megabits per second)` to :abbr:`Kbps (kilobits per second)` | +--------------------+--------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+ +.. _ds-global-max-tps: + Global Max TPS -------------- The maximum :abbr:`TPS (Transactions per Second)` this Delivery Service can serve across all :term:`Edge-tier cache server`\ s before traffic will be diverted to the bypass destination. For a DNS-:ref:`Routed ` Delivery Service, the `DNS Bypass IP`_ or `DNS Bypass IPv6`_ will be used (depending on whether this was an A or AAAA request), and for HTTP-:ref:`Routed ` Delivery Services the `HTTP Bypass FQDN`_ will be used. @@ -312,14 +375,20 @@ The maximum :abbr:`TPS (Transactions per Second)` this Delivery Service can serv | totalTpsThreshold | In :ref:`to-api` responses - most notably :ref:`to-api-cdns-name-configs-monitoring` | unchanged (numeric) | +-------------------+--------------------------------------------------------------------------------------+---------------------+ +.. _ds-http-bypass-fqdn: + HTTP Bypass FQDN ---------------- When the limits placed on this Delivery Service by the `Global Max Mbps`_ and/or `Global Max Tps`_ are exceeded, an HTTP-:ref:`Routed ` Delivery Service will direct excess traffic to this :abbr:`FQDN (Fully Qualified Domain Name)`. +..
_ds-ipv6-routing: + IPv6 Routing Enabled -------------------- A boolean value that controls whether or not clients using IPv6 can be routed to this Delivery Service by Traffic Router. When creating a Delivery Service in Traffic Portal, this will default to "true". +.. _ds-info-url: + Info URL -------- This should be a URL (though neither the :ref:`to-api` nor the Traffic Ops Database in any way enforce the validity of said URL) to which administrators or others may refer for further information regarding a Delivery Service - e.g. a related JIRA ticket. @@ -330,12 +399,16 @@ Initial Dispersion ------------------ The number of :term:`Edge-tier cache servers` across which a particular asset will be distributed within each :term:`Cache Group`. For most use-cases, this should be 1, meaning that all clients requesting a particular asset will be directed to 1 :term:`cache server` per :term:`Cache Group`. Depending on the popularity and size of assets, consider increasing this number in order to spread the request load across more than 1 :term:`cache server`. The larger this number, the more copies of a particular asset are stored in a :term:`Cache Group`, which can "pollute" caches (if load distribution is unnecessary) and decreases caching efficiency (due to cache misses if the asset is not requested enough to stay "fresh" in all the caches). +.. _ds-logs-enabled: + Logs Enabled ------------ A boolean switch that can be toggled to enable/disable logging for a Delivery Service. .. note:: This doesn't actually do anything. It was part of the functionality for a planned Traffic Control component named "Traffic Logs" - which was never created. +.. _ds-longdesc: + Long Description ---------------- Free text field that has no strictly defined purpose, but it is suggested that it contain a short description of the Delivery Service and its purpose. 
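The bypass selection described under Global Max Mbps, Global Max TPS, and HTTP Bypass FQDN can be sketched as follows. The dictionary keys here are informal stand-ins rather than exact Traffic Ops field names, and the logic is a simplification of what Traffic Router actually does:

```python
def bypass_destination(ds, current_mbps, current_tps, dns_record_type="A"):
    """Return the bypass destination if a configured limit is exceeded,
    else None (meaning: route normally). `ds` is a plain dict standing in
    for a Delivery Service."""
    over = False
    if ds.get("globalMaxMbps") and current_mbps > ds["globalMaxMbps"]:
        over = True
    if ds.get("globalMaxTps") and current_tps > ds["globalMaxTps"]:
        over = True
    if not over:
        return None
    if ds["routing"] == "HTTP":
        return ds.get("httpBypassFqdn")
    # DNS-routed: pick the v4 or v6 bypass based on the requested record type.
    if dns_record_type == "AAAA":
        return ds.get("dnsBypassIp6") or ds.get("dnsBypassCname")
    return ds.get("dnsBypassIp") or ds.get("dnsBypassCname")
```

For example, a DNS-routed Delivery Service over its Mbps limit would divert A queries to its DNS Bypass IP and AAAA queries to its DNS Bypass IPv6, falling back to the DNS Bypass CNAME if the corresponding address is unset.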
@@ -348,6 +421,8 @@ Free text field that has no strictly defined purpose, but it is suggested that i | longDesc | Traffic Control source code and :ref:`to-api` responses | unchanged (``string``, ``String`` etc.) | +----------+---------------------------------------------------------+-----------------------------------------+ +.. _ds-longdesc2: + Long Description 2 ------------------ Free text field that has no strictly defined purpose. @@ -360,6 +435,8 @@ Free text field that has no strictly defined purpose. | longDesc1\ [#cardinality]_ | Traffic Control source code and :ref:`to-api` responses | unchanged (``string``, ``String`` etc.) | +----------------------------+---------------------------------------------------------+-----------------------------------------+ +.. _ds-longdesc3: + Long Description 3 ------------------ Free text field that has no strictly defined purpose. @@ -372,6 +449,8 @@ Free text field that has no strictly defined purpose. | longDesc2\ [#cardinality]_ | Traffic Control source code and :ref:`to-api` responses | unchanged (``string``, ``String`` etc.) | +----------------------------+---------------------------------------------------------+-----------------------------------------+ +.. _ds-matchlist: + Match List ---------- A Match List is a set of regular expressions used by Traffic Router to determine whether a given request from a client should be served by this Delivery Service. Under normal circumstances this field should only ever be read-only as its contents should be generated by Traffic Ops based on the Delivery Service's configuration. These regular expressions can each be one of the following types: @@ -395,16 +474,28 @@ STEERING_REGEXP | deliveryservice_regex | Traffic Ops database | unique, integral identifier for a regular expression | +-----------------------+----------------------+------------------------------------------------------+ +.. 
_ds-max-dns-answers: + Max DNS Answers --------------- The maximum number of :term:`Edge-tier cache server` IP addresses that the Traffic Router will include in responses to DNS requests for DNS-:ref:`Routed ` Delivery Services. The :ref:`to-api` restricts this value to the range [1, 15], but no matching restraints are placed on the actual data as stored in the Traffic Ops Database. When provided, the :term:`cache server` IP addresses included are rotated in each response to spread traffic evenly. This number should scale according to the amount of traffic the Delivery Service is expected to serve. +.. _ds-max-origin-connections: + +Max Origin Connections +---------------------- +The maximum number of TCP connections individual :term:`Mid-tier cache servers` are allowed to make to the `Origin Server Base URL`_. A value of ``0`` in this field indicates that there is no maximum. + +.. _ds-mid-header-rw-rules: + Mid Header Rewrite Rules ------------------------ This field in general contains the contents of a configuration file used by the `ATS Header Rewrite Plugin `_ when serving content for this Delivery Service - on :term:`Mid-tier cache servers`. .. tip:: Because this ultimately is the contents of an :abbr:`ATS (Apache Traffic Server)` configuration file, it can make use of the :ref:`ort-special-strings`. +.. _ds-origin-url: + Origin Server Base URL ---------------------- The Origin Server’s base URL which includes the protocol (http or https). Example: ``http://movies.origin.com``. Must not include paths, query parameters, document fragment identifiers, or username/password URL fields. @@ -417,13 +508,17 @@ The Origin Server’s base URL which includes the protocol (http or https). Exam | orgServerFqdn | :ref:`to-api` responses and in Traffic Control source code | unchanged (usually ``str``, ``string`` etc.) | +---------------+------------------------------------------------------------+----------------------------------------------+ +..
_ds-origin-shield: + Origin Shield ------------- An experimental feature that allows administrators to list additional forward proxies that sit between the :term:`Mid-tier` and the :term:`origin`. In most scenarios, this is represented (and required to be input) as a pipe (``|``)-delimited string. +.. _ds-profile: + Profile ------- -Either the name of a :term:`Profile` used by this Delivery Service, or an integral, unique identifier for said :term:`Profile`. +Either the :ref:`profile-name` of a :term:`Profile` used by this Delivery Service, or the :ref:`profile-id` of said :term:`Profile`. .. table:: Aliases @@ -435,6 +530,8 @@ Either the name of a :term:`Profile` used by this Delivery Service, or an integr | profileName | In Traffic Control source code and some :ref:`to-api` responses dealing with Delivery Services | Unlike the more general "Profile", this is *always* a name (``str``, ``string``, etc.) | +-------------+------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------+ +.. _ds-protocol: + Protocol -------- The protocol with which to serve content from this Delivery Service. This defines the way the Delivery Service will handle client requests that are either HTTP or HTTPS, which is distinct from what protocols are used to direct traffic. For example, this can be used to direct clients to only request content using HTTP, or to allow clients to use either HTTP or HTTPS, etc. Normally, this will be the name of the protocol handling, but occasionally this will appear as the integral, unique identifier of the protocol handling instead. The integral, unique identifiers and their associated names and meanings are: @@ -496,6 +593,8 @@ The Delivery Service's Query String Handling can be set directly as a field on t .. 
seealso:: When implemented as a :term:`Parameter` (``psel.qstring_handling``), its value must be a valid value for the ``qstring`` field of a line in the :abbr:`ATS (Apache Traffic Server)` ``parent.config`` configuration file. For a description of valid values, see the `documentation for parent.config `_ +.. _ds-range-request-handling: + Range Request Handling ---------------------- Describes how HTTP "Range Requests" should be handled by the Delivery Service at the :term:`Edge-tier`. This is nearly always an integral, unique identifier for the behavior set required of the :term:`Edge-tier cache server`\ s. The valid values and their respective meanings are: @@ -524,6 +623,8 @@ For HTTP and DNS-:ref:`Routed ` Delivery Services, this will be added .. note:: This field **must** be defined on ANY_MAP-`Type`_ Delivery Services, but is otherwise optional. +.. seealso:: `The Apache Traffic Server documentation for the Regex Remap plugin `_ + .. table:: Aliases +-----------+-----------------------------------------------------------------+---------------------------------------+ @@ -532,6 +633,8 @@ For HTTP and DNS-:ref:`Routed ` Delivery Services, this will be added | remapText | In Traffic Ops source code and :ref:`to-api` requests/responses | unchanged (``text``, ``string`` etc.) | +-----------+-----------------------------------------------------------------+---------------------------------------+ +.. _ds-regex-remap: + Regex Remap Expression ---------------------- Allows remapping of incoming request URLs using regular expressions to search and replace text. In a more literal sense, this is the raw contents of a configuration file used by the `ATS regex_remap plugin `_. At its most basic, the contents of this field should consist of ``map`` followed by a regular expression and then a "template URL" - all space-separated. The regular expression matches a client's request *path* (i.e.
not a full URL - ``/path/to/content`` **not** ``https://origin.example.com/path/to/content``) and when such a match occurs, the request is transformed into a request for the template URL. The most basic usage of the template URL is to use ``$1``-``$9`` to insert the corresponding regular expression capture group. For example, a regular expression of :regexp:`^/a/(.*)` and a template URL of ``https://origin.example.com/b/$1`` maps requests for :term:`origin` content under path ``/a/`` to the same sub-paths under path ``b``. Note that since it's a full URL, this mapping can be made to another server entirely. @@ -544,6 +647,8 @@ Allows remapping of incoming requests URL using regular expressions to search an .. tip:: It is, of course, entirely possible to write a Regex Remap Expression that reproduces the desired `Query String Handling`_ as well as any other desired behavior. +.. seealso:: `The Apache Trafficserver documentation for the Regex Remap plugin `_ + .. table:: Aliases +------------+----------------------------------------------------------------------------+-----------------------------+ @@ -552,26 +657,36 @@ Allows remapping of incoming requests URL using regular expressions to search an | regexRemap | Traffic Ops source code and database, and :ref:`to-api` requests/responses | unchanged (``string`` etc.) | +------------+----------------------------------------------------------------------------+-----------------------------+ +.. _ds-regionalgeo: + Regional Geoblocking -------------------- A boolean value that defines whether or not :ref:`Regional Geoblocking ` is active on this Delivery Service. The actual configuration of :ref:`Regional Geoblocking ` is done in the :term:`Profile` used by the Traffic Router serving the Delivery Service. Rules for this Delivery Service may exist, but they will not actually be used unless this field is ``true``. .. 
tip:: :ref:`Regional Geoblocking ` is configured primarily with respect to Canadian postal codes, so unless specifically Canadian regions should be allowed/disallowed to access content, `Geo Limit`_ is probably a better setting for controlling access to content according to geographic location. +.. _ds-routing-name: + Routing Name ------------ A DNS label in the Delivery Service's domain that forms the :abbr:`FQDN (Fully Qualified Domain Name)` that is used by clients to request content. All together, the constructed :abbr:`FQDN (Fully Qualified Domain Name)` looks like: :file:`{Delivery Service Routing Name}.{Delivery Service xml_id}.{CDN Subdomain}.{CDN Domain}.{Top-Level Domain}`\ [#xmlValid]_. +.. _ds-servers: + Servers ------- Servers can be assigned to Delivery Services using the :ref:`tp-configure-servers` and :ref:`tp-services-delivery-service` Traffic Portal sections, or by directly using the :ref:`to-api-deliveryserviceserver` endpoint. Only :term:`Edge-tier cache servers` can be assigned to a Delivery Service, and once they are so assigned they will begin to serve content for the Delivery Service (after updates are queued and then applied). Any servers assigned to a Delivery Service must also belong to the same CDN_ as the Delivery Service itself. At least one server must be assigned to a Delivery Service in order for it to serve any content. +.. _ds-signing-algorithm: + Signing Algorithm ----------------- URLs/URIs may be signed using one of two algorithms before a request for the content to which they refer is sent to the :term:`origin` (which in practice can be any upstream network). At the time of this writing, this field is restricted within the Traffic Ops Database to one of two values (or ``NULL``/"None", to indicate no signing should be done). .. seealso:: The url_sig `README `_. +.. seealso:: `The draft RFC for uri_signing `_ - note, however, that the current implementation of uri_signing uses Draft 12 of that RFC document, **NOT** the latest.
+ url_sig URL signing will be implemented in this Delivery Service using the `url_sig Apache Traffic Server plugin `_. (Aliased as "URL Signature Keys" in Traffic Portal forms) uri_signing @@ -588,12 +703,24 @@ uri_signing Keys for either algorithm can be generated within :ref:`Traffic Portal `. +.. _ds-ssl-key-version: + +SSL Key Version +--------------- +An integer that describes the version of the SSL key(s) - if any - used by this Delivery Service. This is incremented whenever Traffic Portal generates new SSL keys for the Delivery Service. + +.. warning:: This number will not be correct if keys are manually replaced using the API, as the key generation API does not increment it! + +.. _ds-static-dns-entries: + Static DNS Entries ------------------ Static DNS Entries can be added *under* a Delivery Service's domain. These DNS records can be configured in the :ref:`tp-services-delivery-service` section of Traffic Portal, and can be any valid CNAME, A or AAAA DNS record - provided the associated hostname falls within the DNS domain for the Delivery Service. For example, a Delivery Service with xml_id_ "demo1" and belonging to a CDN_ with domain "mycdn.ciab.test" could have Static DNS Entries for hostnames "foo.demo1.mycdn.ciab.test" or "foo.bar.demo1.mycdn.ciab.test" but not "foo.bar.mycdn.ciab.test" or "foo.bar.test". .. note:: The `Routing Name`_ of a Delivery Service is not part of the :abbr:`SOA (Start of Authority)` record for the Delivery Service's domain, and so there is no need to place Static DNS Entries below a domain containing it. +.. _ds-tenant: + Tenant ------ The :term:`Tenant` who owns this Delivery Service. They (and their parents, if any) are the only ones allowed to make changes to this Delivery Service. 
Typically, ``tenant``/``Tenant`` refers to the *name* of the owning :term:`Tenant`, but occasionally (most notably in the payloads and/or query parameters of certain :ref:`to-api` requests) it actually refers to the *integral, unique identifier* of said :term:`Tenant`. @@ -606,6 +733,8 @@ The :term:`Tenant` who owns this Delivery Service. They (and their parents, if a | TenantID | Go code and :ref:`to-api` requests/responses | Integral, unique identifier (``bigint``, ``int`` etc.) | +----------+----------------------------------------------+--------------------------------------------------------+ +.. _ds-tr-resp-headers: + Traffic Router Additional Response Headers ------------------------------------------ List of HTTP header ``{{name}}:{{value}}`` pairs separated by ``__RETURN__`` or simply on separate lines. Listed pairs will be included in all HTTP responses from Traffic Router for HTTP-:ref:`Routed ` Delivery Services. @@ -621,6 +750,8 @@ List of HTTP header ``{{name}}:{{value}}`` pairs separated by ``__RETURN__`` or | trResponseHeaders | Traffic Control source code and Delivery Service objects returned by the :ref:`to-api` | unchanged (``string`` etc.) | +-------------------+----------------------------------------------------------------------------------------+-----------------------------+ +.. _ds-tr-req-headers: + Traffic Router Log Request Headers ---------------------------------- List of HTTP header names separated by ``__RETURN__`` or simply on separate lines. The listed headers will be logged for all HTTP requests to Traffic Router for HTTP-:ref:`Routed ` Delivery Services. diff --git a/docs/source/overview/index.rst b/docs/source/overview/index.rst index bdb53932d6..f8e94a83e2 100644 --- a/docs/source/overview/index.rst +++ b/docs/source/overview/index.rst @@ -13,18 +13,13 @@ .. limitations under the License. ..
+************************ Traffic Control Overview ************************ Introduces the Traffic Control architecture, components, and their integration. .. toctree:: :maxdepth: 2 + :glob: - introduction.rst - traffic_ops - traffic_portal - traffic_router - traffic_monitor - traffic_stats - traffic_vault - delivery_services + * diff --git a/docs/source/overview/profiles_and_parameters.rst b/docs/source/overview/profiles_and_parameters.rst new file mode 100644 index 0000000000..bb739a6595 --- /dev/null +++ b/docs/source/overview/profiles_and_parameters.rst @@ -0,0 +1,651 @@ +.. +.. Licensed under the Apache License, Version 2.0 (the "License"); +.. you may not use this file except in compliance with the License. +.. You may obtain a copy of the License at +.. +.. http://www.apache.org/licenses/LICENSE-2.0 +.. +.. Unless required by applicable law or agreed to in writing, software +.. distributed under the License is distributed on an "AS IS" BASIS, +.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +.. See the License for the specific language governing permissions and +.. limitations under the License. +.. + +.. _Apache Traffic Server configuration files: https://docs.trafficserver.apache.org/en/7.1.x/admin-guide/files/index.en.html + +.. _profiles-and-parameters: + +*********************** +Profiles and Parameters +*********************** +:dfn:`Profiles` are a collection of configuration options, defined partially by the Profile's Type_ (not to be confused with the more general ":term:`Type`" used by many other things in Traffic Control) and partially by the :dfn:`Parameters` set on them. Mainly, Profiles and Parameters are used to configure :term:`cache servers`, but they can also be used to configure parts of (nearly) any Traffic Control component, and can even be linked with more abstract concepts like :ref:`Delivery Services ` and :term:`Cache Groups`. 
The vast majority of configuration done within a Traffic Control CDN must be done through Profiles_ and Parameters_, which can be achieved either through the :ref:`to-api` or in the :ref:`tp-configure-profiles` view of Traffic Portal. For ease of use, Traffic Portal allows for duplication, comparison, import and export of Profiles_ including all of their associated Parameters_. + +.. _profiles: + +Profiles +======== + +Properties +---------- +Profile objects as represented in e.g. the :ref:`to-api` or in the :ref:`Traffic Portal Profiles view ` have several properties that describe their general operation. In certain contexts, the Parameters_ assigned to a Profile (and/or the integral, unique identifiers thereof) may appear as properties of the Profile, but that will not appear in this section as a description of Parameters_ is provided in the section of that name. + +.. _profile-cdn: + +CDN +""" +A Profile is restricted to operate within a single CDN. Often, "CDN" (or "cdn") refers to the integral, unique identifier of the CDN, but occasionally it refers to the *name* of said CDN. It may also appear as e.g. ``cdnId`` or ``cdnName`` in :ref:`to-api` payloads and responses. A Profile may only be assigned to a server, :term:`Delivery Service`, or :term:`Cache Group` within the same CDN as the Profile itself. + +.. _profile-description: + +Description +""""""""""" +Profiles may have a description provided by the creating user (or Traffic Control itself in the case of the `Default Profiles`_). The :ref:`to-api` does not enforce length requirements on the description (though Traffic Portal does), and so it's possible for Profiles to have empty descriptions, though it is strongly recommended that Profiles have meaningful descriptions. + +.. _profile-id: + +ID +"" +An integral, unique identifier for the Profile. + +.. _profile-name: + +Name +"""" +Ostensibly this is simply the Profile's name. 
However, the name of a Profile has drastic consequences for how Traffic Control treats it. Particularly, the name of a Profile is heavily conflated with its Type_. These relationships are discussed further in the Type_ section, on a Type-by-Type basis. + +.. _profile-routing-disabled: + +Routing Disabled +"""""""""""""""" +This property can - and in fact *must* - exist on a Profile of any Type_, but it only has meaning on a Profile that has a name matching the constraints placed on the names of ATS_PROFILE-`Type`_ Profiles. This means that it will also have meaning on Profiles of Type_ UNK_PROFILE that for whatever reason have names beginning with ``EDGE`` or ``MID``. When this field is defined as ``1`` (may be displayed as ``true`` in e.g. Traffic Portal), Traffic Router will not be informed of any :term:`Delivery Services` to which the :term:`cache server` using this Profile may be assigned. Effectively, this means that client traffic cannot be routed to them, although existing connections would be uninterrupted. + +.. _profile-type: + +Type +"""" +A Profile's :dfn:`Type` determines how its configured Parameters_ are treated by various components, and often even determines how the object using the Profile is treated (particularly when it is a server). Unlike the more general ":term:`Type`" employed by Traffic Control, the allowed Types of Profiles are set in stone, and they are as follows. + +.. danger:: Nearly all of these Profile Types have strict naming requirements, and it may be noted that some of said requirements are prefixes ending with ``_``, while others are either not prefixes or do not end with ``_``. This is intentional: some requirements **require** the trailing ``_`` and some do not. It is our suggestion, therefore, that for the time being all prefixes use the ``_`` notation to separate words, so as to avoid having to remember when it matters and when it does not.
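The per-Type naming constraints that follow can be sketched as simple pattern checks. The snippet below is purely illustrative - ``NAME_PATTERNS`` and ``name_matches_type`` are hypothetical helpers, not code from Traffic Control - and it covers only the Types whose names are constrained:

```python
import re

# Hypothetical encoding of the legacy naming rules described below;
# Types absent from this table (e.g. UNK_PROFILE) are unconstrained.
NAME_PATTERNS = {
    "ATS_PROFILE": r"^(EDGE|MID)",       # prefix, no trailing underscore required
    "ES_PROFILE": r"^ELASTICSEARCH",
    "INFLUXDB_PROFILE": r"^INFLUXDB",
    "KAFKA_PROFILE": r"^KAFKA",
    "LOGSTASH_PROFILE": r"^LOGSTASH_",   # prefix, trailing underscore required
    "ORG_PROFILE": r"^MSO|ORG|ORIGIN",   # MSO prefix, or ORG/ORIGIN anywhere
    "RIAK_PROFILE": r"^RIAK",
    "SPLUNK_PROFILE": r"^SPLUNK",
    "TM_PROFILE": r"^(RASCAL_|TM_)",
    "TR_PROFILE": r"^(CCR_|TR_)",
    "TS_PROFILE": r"^TRAFFIC_STATS$",    # the name must be exactly this
}

def name_matches_type(name: str, profile_type: str) -> bool:
    """Return True when `name` satisfies the legacy constraint for `profile_type`."""
    pattern = NAME_PATTERNS.get(profile_type)
    return pattern is None or re.search(pattern, name) is not None

print(name_matches_type("EDGE_ATS_721", "ATS_PROFILE"))       # True
print(name_matches_type("LOGSTASH7", "LOGSTASH_PROFILE"))     # False: missing the trailing underscore
print(name_matches_type("MY_ORIGIN_PROFILE", "ORG_PROFILE"))  # True: ORIGIN may appear anywhere
```

Note that, as the warnings below repeat, neither the :ref:`to-api` nor Traffic Portal enforces these rules - components simply misbehave when they are violated - which is why a check like this is worth doing by hand when naming Profiles.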
+ +ATS_PROFILE + A Profile that can be used with either an Edge-tier or Mid-tier :term:`cache server` (but not both, in general). This is the only Profile type that will ultimately pass its Parameters_ on to :term:`ORT` in the form of generated configuration files. For this reason, it can make use of the :ref:`ort-special-strings` in the values of some of its Parameters_. + + .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``EDGE`` or ``MID``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + +DS_PROFILE + A Profile that, rather than applying to a server, is instead :ref:`used by a Delivery Service `. + +ES_PROFILE + A Profile for `ElasticSearch `_ servers. This has no known special meaning to any component of Traffic Control, but if ElasticSearch is in use the use of this Profile Type is suggested regardless. + + .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``ELASTICSEARCH``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + +GROVE_PROFILE + A Profile for use with the experimental Grove HTTP caching proxy. + +INFLUXDB_PROFILE + A Profile used with `InfluxDB `_, which is used by Traffic Stats. + + .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``INFLUXDB``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + +KAFKA_PROFILE + A Profile for `Kafka `_ servers. This has no known special meaning to any component of Traffic Control, but if Kafka is in use the use of this Profile Type is suggested regardless. + + .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``KAFKA``. 
This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + +LOGSTASH_PROFILE + A Profile for `Logstash `_ servers. This has no known special meaning to any component of Traffic Control, but if Logstash is in use the use of this Profile Type is suggested regardless. + + .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``LOGSTASH_``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + +ORG_PROFILE + A Profile that may be used by either :term:`origin servers` or :term:`origins` (no, they aren't the same thing). + + .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``MSO``, or contain either ``ORG`` or ``ORIGIN`` anywhere in the name. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + +RIAK_PROFILE + A Profile used for each `Riak `_ server in a Traffic Stats cluster. + + .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``RIAK``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + +SPLUNK_PROFILE + A Profile meant to be used with `Splunk `_ servers. This has no known special meaning to any component of Traffic Control, but if Splunk is in use the use of this Profile Type is suggested regardless. + + .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``SPLUNK``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + +TM_PROFILE + A Traffic Monitor Profile. + + .. 
warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``RASCAL_`` or ``TM_``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + +TP_PROFILE + A Traffic Portal Profile. This has no known special meaning to any Traffic Control component(s) (not even Traffic Portal itself), but its use is suggested for the profiles used by any and all Traffic Portal servers anyway. + +TR_PROFILE + A Traffic Router Profile. + + .. warning:: For legacy reasons, the names of Profiles of this type *must* begin with ``CCR_`` or ``TR_``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! + + .. seealso:: :ref:`tr-profile` + +TS_PROFILE + A Traffic Stats Profile. + + .. caution:: For legacy reasons, the names of Profiles of this type *must* be ``TRAFFIC_STATS``. This is **not** enforced by the :ref:`to-api` or Traffic Portal, but certain Traffic Control operations/components expect this and will fail to work otherwise! Furthermore, because Profile names must be unique, this means that only one TS_PROFILE-Type Profile can exist at a time. + +UNK_PROFILE + A catch-all type that can be assigned to anything without imbuing it with any special meaning or behavior. + +.. tip:: A Profile of the wrong type assigned to a Traffic Control component *will* (in general) cause it to function incorrectly, regardless of the Parameters_ assigned to it. + +.. _default-profiles: + +Default Profiles +---------------- +Traffic Control comes with some pre-installed Profiles for its basic components, but users are free to define their own as needed. Additionally, these default Profiles can be modified or even removed completely. One of these Profiles is `The GLOBAL Profile`_, which has a dedicated section. 
+ +INFLUXDB + A Profile used by InfluxDB servers that store Traffic Stats information. It has a Type_ of UNK_PROFILE and is assigned to the special "ALL" CDN_. +RIAK_ALL + This Profile is used by Traffic Vault, which is, generally speaking, the only Traffic Vault instance in a Traffic Control deployment, as it can store keys for an arbitrary number of CDNs. It has a Type_ of UNK_PROFILE and is assigned to the special "ALL" CDN_. +TRAFFIC_ANALYTICS + A default Profile that was intended for use with the now-unplanned "Traffic Analytics" :abbr:`ATC (Apache Traffic Control)` component. It has a Type_ of UNK_PROFILE and is assigned to the special "ALL" CDN_. +TRAFFIC_OPS + A Profile used by the Traffic Ops server itself. It's suggested that any and all "mirrors" of Traffic Ops for a given Traffic Control instance be recorded separately and all assigned to this Profile for record-keeping purposes. It has a Type_ of UNK_PROFILE and is assigned to the special "ALL" CDN_. +TRAFFIC_OPS_DB + A Profile used by the PostgreSQL database server that stores all of the data needed by Traffic Ops. It has a Type_ of UNK_PROFILE and is assigned to the special "ALL" CDN_. +TRAFFIC_PORTAL + A Profile used by Traffic Portal servers. This profile name has no known special meaning to any Traffic Control components (not even Traffic Portal itself), but its use is suggested for Traffic Portal servers anyway. It has a Type_ of UNK_PROFILE and is assigned to the special "ALL" CDN_. +TRAFFIC_STATS + This is the **only** Profile used by Traffic Stats (though InfluxDB servers have their own Profile(s)). It has a Type_ of UNK_PROFILE and is assigned to the special "ALL" CDN_. + +In addition to these Profiles, each release of Apache Traffic Control is accompanied by a set of suggested Profiles suitable for import in the :ref:`tp-configure-profiles` view of Traffic Portal. They may be found on `the Profiles Downloads Index page `_.
These Profiles are typically built from production Profiles by a company using Traffic Control, and as such are typically highly specific to the hardware and network infrastructure available to them. **None of the Profiles bundled with a release are suitable for immediate use without modification**, and in fact many of them cannot actually be imported directly into a new Traffic Control environment, because Profiles with the same :ref:`Names ` already exist (as above). + +Administrators may alternatively wish to consult the Profiles and Parameters_ available in the :ref:`ciab` environment, as they might be more familiar with them. Furthermore, those Profiles are built with a minimum running Traffic Control system in mind, and thus may be easier to look through. The Profiles and their associated Parameters_ may be found within the :atc-file:`infrastructure/cdn-in-a-box/traffic_ops_data/profiles/` directory. + +.. _the-global-profile: + +The GLOBAL Profile +------------------ +There is a special Profile of Type_ UNK_PROFILE that holds global configuration information - its :ref:`profile-name` is "GLOBAL", its Type_ is UNK_PROFILE and it is assigned to the special "ALL" CDN_. The Parameters_ that may be configured on this Profile are laid out in the :ref:`global-profile-parameters` Table. + +.. _global-profile-parameters: +.. table:: Global Profile Parameters + + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | :ref:`parameter-name` | `Config File`_ | Value_ | + +==========================+=========================+=======================================================================================================================================+ + | tm.url | global | The URL at which this Traffic Ops instance services requests. 
| + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tm.rev_proxy.url | global | Not required. The URL where a caching proxy for configuration files generated by Traffic Ops may be found. Requires a minimum | + | | | :term:`ORT` version of 2.1. When configured, :term:`ORT` will request configuration files via this | + | | | :abbr:`FQDN (Fully Qualified Domain Name)`, which should be set up as a reverse proxy to the Traffic Ops server(s). The suggested | + | | | cache lifetime for these files is 3 minutes or less. This setting allows for greater scalability of a CDN maintained by Traffic Ops | + | | | by caching configuration files of profile and CDN scope, as generating these is a very computationally expensive process. | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tm.toolname | global | The name of the Traffic Ops tool. Usually "Traffic Ops" - this will appear in the comment headers of generated configuration files. | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tm.infourl | global | This is the "for more information go here" URL, which used to be visible in the "About" page of the now-deprecated Traffic Ops UI. | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tm.logourl | global | This is the URL of the logo for Traffic Ops and can be relative if the logo is under :file:`traffic_ops/app/public`. 
| + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tm.instance_name | global | The name of the Traffic Ops instance - typically to distinguish instances when multiple are active. | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | tm.traffic_mon_fwd_proxy | global | When collecting stats from Traffic Monitor, Traffic Ops will use this forward proxy instead of the actual Traffic Monitor host. | + | | | Setting this :ref:`Parameter ` can significantly lighten the load on the Traffic Monitor system and it is therefore | + | | | recommended that this be set on a production system. | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | use_reval_pending | global | When this Parameter is present and its Value_ is exactly "1", Traffic Ops will separately keep track of :term:`cache servers`' | + | | | updates and pending content invalidation jobs. This behavior should be enabled by default, and disabling it, while still possible, is | + | | | **EXTREMELY DISCOURAGED**. | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | use_tenancy | global | This :ref:`Parameter `, when it exists and has a Value_ of exactly "1" enables the use :term:`Tenants` in Traffic | + | | | Control. This should be enabled by default, and while disabling this is still possible, it is **EXTREMELY DISCOURAGED**. 
| + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | geolocation.polling.url | CRConfig.json | The location of a geographic IP mapping database for Traffic Router instances to use. | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | geolocation6.polling.url | CRConfig.json | The location of a geographic IPv6 mapping database for Traffic Router instances to use. | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | maxmind.default.override | CRConfig.json | The destination geographic coordinates to use for client location when the geographic IP mapping database returns a default location | + | | | that matches the country code. This parameter can be specified multiple times with different values to support default overrides for | + | | | multiple countries. The reason for the name "maxmind" is because the default geographic IP mapping database used by Traffic Control | + | | | is MaxMind's GeoIP2 database. The format of this :ref:`Parameter `'s Value_ is: | + | | | :file:`{Country Code};{Latitude},{Longitude}`, e.g. ``US;37.751,-97.822`` | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | maxRevalDurationDays | regex_revalidate.config | This :ref:`Parameter ` sets the maximum duration, in days, for which a content invalidation job may run. This is | + | | | **extremely** important, as there is currently no way to delete a content invalidation job once it has been created. 
Furthermore, | + | | | while there is no restriction placed on creating multiple Parameters_ with this :ref:`parameter-name` and `Config File`_ - | + | | | potentially with differing :ref:`Values ` - this is **EXTREMELY DISCOURAGED as any** :ref:`Parameter ` | + | | | **that has both that** :ref:`parameter-name` **and** `Config File`_ **might be used when generating any given** | + | | | `regex_revalidate.config`_ **file for any given** :term:`cache server` **and whenever such** Parameters_ **exist, the actual maximum | + | | | duration for content invalidation jobs is undefined, and CAN and WILL differ from server to server, and configuration file to | + | | | configuration file.** | + +--------------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + + +.. note:: Since the Traffic Ops UI has been removed, the tm.logourl has no real meaning, and in fact most Traffic Ops distributions neither set this :ref:`Parameter `, nor provide a logo. + +Some of these Parameters_ have the `Config File`_ value global_, while others have `CRConfig.json`_. This is not a typo, and the distinction is that those that use global_ are typically configuration options relating to Traffic Control as a whole or to Traffic Ops itself, whereas `CRConfig.json`_ is used by configuration options that are set globally, but pertain mainly to routing and are thus communicated to Traffic Routers through :term:`CDN Snapshots` (which historically were called "CRConfig Snapshots" or simply "the CRConfig"). +When a :ref:`Parameter ` has a `Config File`_ value that *isn't* one of global_ or `CRConfig.json`_, it refers to the global configuration of said `Config File`_ across all servers that use it across all CDNs configured in Traffic Control. This can be used to easily apply extremely common configuration to a great many servers in one place. + +.. 
_parameters:
+
+Parameters
+==========
+A :dfn:`Parameter` is usually a way to set a line in a configuration file that will appear on the servers using Profiles_ that have said Parameter. More generally, though, a Parameter merely describes some kind of configuration for some aspect of some thing. There are many Parameters that *must* exist for Traffic Control to work properly, such as those on `The GLOBAL Profile`_ or the `Default Profiles`_. Some Traffic Control components can be associated with Profiles_ that only have a few allowed (or actually just meaningful - others are ignored and don't cause problems) Parameters, but some can have any number of Parameters to describe custom configuration of things of which Traffic Control itself may not even be aware (most notably :term:`cache servers`). For most Parameters, the meaning of each Parameter's various properties is very heavily tied to the allowed contents of `Apache Traffic Server configuration files`_.
+
+Properties
+----------
+When represented in Traffic Portal (in the :ref:`tp-configure-parameters` view) or in :ref:`to-api` request and/or response payloads, a Parameter has several properties that define it. In some of these contexts, the Profiles_ to which a Parameter is assigned (and/or the integral, unique identifiers thereof) are represented as a property of the Parameter. However, an explanation of this "property" is not provided here, as the Profiles_ section exists for the purpose of explaining those.
+
+.. _parameter-config-file:
+
+Config File
+"""""""""""
+This (usually) names the configuration file to which the Parameter belongs. Note that it is only the *name of* the file and **not** the *full path to* the file - e.g. ``remap.config`` not ``/opt/trafficserver/etc/trafficserver/remap.config``. To define the full path to any given configuration file, Traffic Ops relies on a reserved :ref:`parameter-name` value: :ref:`"location" `.
+
+..
seealso:: This section is only meant to cover the special handling of Parameters assigned to specific Config File values. It is **not** meant to be a primer on Apache Traffic Server configuration files, nor is it intended to be exhaustive of the manners in which said files may be manipulated by Traffic Control. For more information, consult the documentation for `Apache Traffic Server configuration files`_.
+
+Certain Config Files are handled specially by Traffic Ops's configuration file generation. Specifically, the format of the configuration is tailored to be correct when the syntax of a configuration file is known. However, these configuration files **must** have :ref:`"location" ` Parameters on the :ref:`Profile ` of servers, or they will not be generated. The Config File values that are special in this way are detailed within this section. When a `Config File`_ is none of these special values, each Parameter assigned to a given server's :ref:`Profile ` with the same `Config File`_ value will create a single line in the resulting configuration file (with the possible exception being when the :ref:`parameter-name` is "header").
+
+12M_facts
+'''''''''
+This legacy file is generated entirely from a :ref:`Profile `'s metadata, and cannot be affected by Parameters.
+
+.. tip:: This Config File serves an unknown and likely historical purpose, so most users/administrators/developers don't need to worry about it.
+
+50-ats.rules
+''''''''''''
+Parameters have no meaning when assigned to this Config File (except :ref:`"location" `), but it *is* affected by Parameters that are on the same :ref:`Profile ` with the Config File ``storage.config`` - **NOT this Config File**.
For each letter in the special "Drive Letters" Parameter, a line will be added of the form :file:`KERNEL=="{Prefix}{Letter}", OWNER="ats"` where ``Prefix`` is the Value_ of the Parameter with the :ref:`parameter-name` "Drive Prefix" and the Config File ``storage.config`` - but with the first instance of ``/dev/`` removed - , and ``Letter`` is the drive letter. Also, if the Parameter with the :ref:`parameter-name` "RAM Drive Prefix" exists on the same Profile assigned to the server, a line will be inserted for each letter in the special "RAM Drive Letters" Parameter of the form :file:`KERNEL=="{Prefix}{Letter}", OWNER="ats"` where ``Prefix`` is the Value_ of the "RAM Drive Prefix" Parameter - but with the first instance of ``/dev/`` removed -, and ``Letter`` is the drive letter. + +.. tip:: This Config File serves an unknown and likely historical purpose, so most users/administrators/developers don't need to worry about it. + +astats.config +''''''''''''' +This configuration file will be generated with a line for each Parameter with this Config File value on the :term:`cache server`'s :ref:`Profile ` in the form :file:`{Name}={Value}` where ``Name`` is the Parameter's :ref:`parameter-name` with trailing characters that match :regexp:`__\\d+$` stripped, and ``Value`` is its Value_. + +bg_fetch.config +''''''''''''''' +This configuration file always generates static contents besides the header, and cannot be affected by any Parameters (besides its :ref:`"location" ` Parameter). + +.. seealso:: For an explanation of the contents of this file, consult `the Background Fetch Apache Traffic Server plugin's official documentation `_. + +cache.config +'''''''''''' +This configuration is built entirely from :term:`Delivery Service` configuration, and cannot be affected by Parameters. + +.. 
seealso:: `The Apache Traffic Server cache.config documentation `_
+
+:file:`cacheurl{anything}.config`
+'''''''''''''''''''''''''''''''''
+Config Files that match this pattern - where ``anything`` is a string of zero or more characters - can only be generated by providing a :ref:`location ` and their contents will be fully determined by properties of :term:`Delivery Services`.
+
+.. seealso:: `The official documentation for the Cache URL Apache Traffic Server plugin `_.
+
+.. deprecated:: ATCv3.0
+	This configuration file is only used by Apache Traffic Server version 6.x, whose use is deprecated both by that project and Traffic Control. These Config Files will have no special meaning at some point in the future.
+
+chkconfig
+'''''''''
+This isn't really a configuration file at all. Specifically, it is a valid configuration file for the legacy `chkconfig utility `_ - but it is never written to disk on any :term:`cache server`. Though all Traffic Control-supported systems are now using :manpage:`systemd(8)`, :term:`ORT` still uses ``chkconfig``-style configuration to set the status of services on its host system(s). This means that any Parameter with this Config File value should have a :ref:`parameter-name` that is the name of a service on the :term:`cache servers` using the :ref:`Profile ` to which the Parameter is assigned, and its Value_ should be a valid ``chkconfig`` configuration line for that service.
+
+CRConfig.json
+'''''''''''''
+In general, the term "CRConfig" refers to :term:`CDN Snapshots`, which historically were called "CRConfig Snapshots" or simply "the CRConfig". Parameters with this Config File should only be on either `The GLOBAL Profile`_ where they will affect global routing configuration, or on a Traffic Router's :ref:`Profile ` where they will affect routing configuration for that Traffic Router only.
+
+.. seealso:: For the available configuration Parameters for a Traffic Router Profile, see :ref:`tr-profile`.
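
As a concrete illustration of the ``chkconfig`` convention described above, a Parameter like the following (a hypothetical example - the :ref:`parameter-name` must match a service that actually exists on the :term:`cache servers` using the Profile) would ensure Apache Traffic Server is enabled in run levels 2 through 5:

```text
Name:        trafficserver
Config File: chkconfig
Value:       0:off 1:off 2:on 3:on 4:on 5:on 6:off
```

The Value_ here is ordinary ``chkconfig`` run-level syntax; :term:`ORT` interprets it to decide whether the named service should be enabled.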
+ +drop_qstring.config +''''''''''''''''''' +This configuration file will be generated with a single line that is exactly: :regexp:`/([^?]+) \$s://\$t/\$1\n` **unless** a Parameter exists on the :ref:`Profile ` with this Config File value, and the :ref:`parameter-name` "content". In the latter case, the contents of the file will be exactly the Parameter's Value_ (with terminating newline appended). + +global +'''''' +In general, this Config File isn't actually handled specially by Traffic Ops when generating server configuration files. However, this is the Config File value typically used for Parameters assigned to `The GLOBAL Profile`_ for truly "global" configuration options, and it is suggested that this precedent be maintained - i.e. don't create Parameters with this Config File. + +:file:`hdr_rw_{anything}.config` +'''''''''''''''''''''''''''''''' +Config Files that match this pattern - where ``anything`` is zero or more characters - are written specially by Traffic Ops to accommodate the :ref:`ds-dscp` setting of :term:`Delivery Services`. + +.. tip:: The ``anything`` in those file names is typically a :term:`Delivery Service`'s :ref:`ds-xmlid` - though the inability to affect the file's contents is utterly independent of whether or not a :term:`Delivery Service` with that :ref:`ds-xmlid` actually exists. + +.. seealso:: For information on the contents of files like this, consult `the Header Rewrite Apache Traffic Server plugin's documentation `_ + +hosting.config +'''''''''''''' +This configuration file is mainly generated based on the assignments of :term:`cache servers` to :term:`Delivery Services` and the :term:`Cache Group` hierarchy, but there are a couple of Parameter :ref:`Names ` that can affect it when assigned to this Config File. 
When a Parameter assigned to the ``storage.config`` Config File - **NOT this Config File** - with the :ref:`parameter-name` "RAM_Drive_Prefix" *exists*, it will cause lines to be generated in this configuration file for each :term:`Delivery Service` that is of one of the :ref:`Types ` DNS_LIVE (only if the server is an :term:`Edge-Tier Cache Server`), HTTP_LIVE (only if the server is an :term:`Edge-Tier Cache Server`), DNS_LIVE_NATNL, or HTTP_LIVE_NATNL to which the :term:`cache server` to which the :ref:`Profile ` containing that Parameter belongs is assigned. Specifically, it will cause each of them to use ``volume=1`` **UNLESS** the Parameter with the :ref:`parameter-name` "Drive_Prefix" associated with Config File ``storage.config`` - again, **NOT this Config File** - *also* exists, in which case they will use ``volume=2``.
+
+.. caution:: If a Parameter with Config File ``storage.config`` and :ref:`parameter-name` "RAM_Drive_Prefix" does *not* exist on a :ref:`Profile `, then the :term:`cache servers` using that :ref:`Profile ` will **be incapable of serving traffic for** :term:`Delivery Services` **of the aforementioned** :ref:`Types `, **even when a** :ref:`"location" ` **Parameter exists**.
+
+.. seealso:: For an explanation of the syntax of this configuration file, refer to `the Apache Traffic Server hosting.config documentation `_.
+
+ip_allow.config
+'''''''''''''''
+This configuration file is mostly generated from various server data, but can be affected by a Parameter that has a :ref:`parameter-name` of "purge_allow_ip", which will cause the insertion of a line with :file:`src_ip={VALUE} action=ip_allow method=ALL` where ``VALUE`` is the Parameter's Value_. Additionally, Parameters with :ref:`Names ` like :file:`coalesce_{masklen|number}_v{4|6}` cause Traffic Ops to generate coalesced IP ranges in different ways.
In the case that ``number`` was used, the Parameter's Value_ sets the maximum number of IP addresses that may be coalesced into a single range. If ``masklen`` was used, the lines that are generated are coalesced into :abbr:`CIDR (Classless Inter-Domain Routing)` ranges using mask lengths determined by the Value_ of the Parameter (using '4' sets the mask length of IPv4 address coalescing while using '6' sets the mask length to use when coalescing IPv6 addresses). This is not recommended, as the default mask lengths allow for maximum coalescence. Furthermore, if two Parameters on the same :ref:`Profile ` assigned to a server having Config File values of ``ip_allow.config`` and :ref:`Names ` that are both "coalesce_masklen_v4" but each has a different Value_, then the actual mask length used to coalesce IPv4 addresses is undefined (but will be one of the two). All forms of the "coalescence Parameters" have this problem.
+
+.. impl-detail:: At the time of this writing, coalescence is implemented through the `NetAddr\:\:IP Perl library `_.
+
+.. seealso:: `The Apache Traffic Server ip_allow.config documentation `_ explains the syntax and meaning of lines in that file.
+
+logging.config
+''''''''''''''
+This configuration file can only be affected by Parameters with specific :ref:`Names `. Specifically, for each Parameter assigned to this Config File on the :ref:`Profile ` used by the :term:`cache server` with the name :file:`LogFormat{N}.Name` - where ``N`` is either the empty string or a natural number on the interval [1,9] - the text in :ref:`logging.config-format-snippet` will be inserted. In that snippet, ``NAME`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFormat{N}.Name`, and ``FORMAT`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFormat{N}.Format` for the same value of ``N``\ [#logs-format]_.
+
+.. _logging.config-format-snippet:
+
+..
code-block:: text + :caption: Log Format Snippet + + NAME = format { + Format = 'FORMAT ' + } + +.. tip:: The order in which these Parameters are considered is exactly the numerical ordering implied by ``N`` (starting with it being empty). However, each section is generated for all values of ``N`` before moving on to the next. + +Furthermore, for a given value of ``N`` - as before restricted to either the empty string or a natural number on the interval [1,9] -, if a Parameter exists on the :term:`cache server`'s :ref:`Profile ` having this Config File value with the :ref:`parameter-name` :file:`LogFilter{N}.Name`, a line of the format :file:`{NAME} = filter.{TYPE}.('{FILTER}')` will be inserted, where ``NAME`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFilter{N}.Name`, ``TYPE`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFilter{N}.Type`, and ``FILTER`` is the Value_ of the Parameter with the name :file:`LogFilter{N}.Filter`\ [#logs-filter]_. + +.. note:: When, for a given value of ``N``, a Parameter with the :ref:`parameter-name` :file:`LogFilter{N}.Name` exists, but a Parameter with the :ref:`parameter-name` :file:`LogFilter{N}.Type` does *not* exist, the value of ``TYPE`` will be ``accept``. + +Finally, for a given value of ``N``, if a Parameter exists on the :term:`cache server`'s :ref:`Profile ` having this Config File value with the :ref:`parameter-name` :file:`LogObject{N}.Filename`, the text in :ref:`logging.config-object-snippet` will be inserted. In that snippet, ``TYPE`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Type` + +.. _logging.config-object-snippet: + +.. code-block:: text + :caption: Log Object Snippet + + log.TYPE { + Format = FORMAT, + Filename = 'FILENAME', + +.. 
note:: When, for a given value of ``N`` a Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Filename` exists, but a Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Type` does *not* exist, the value of ``TYPE`` in :ref:`logging.config-object-snippet` will be ``ascii``. + +At this point, if the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Type` is **exactly** ``pipe``, a line of the format :file:`\ \ Filters = { FILTERS }` will be inserted where ``FILTERS`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Filters`, followed by a line containing only a closing "curly brace" (:kbd:`}`) - *if and* **only** *if said Parameter is* **not** *empty*. If, however, the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Type` is **not** exactly ``pipe``, then the text in :ref:`logging.config-object-not-pipe-snippet` is inserted. + +.. _logging.config-object-not-pipe-snippet: + +.. code-block:: text + :caption: Log Object (not a "pipe") Snippet + + RollingEnabled = ROLLING, + RollingIntervalSec = INTERVAL, + RollingOffsetHr = OFFSET, + RollingSizeMb = SIZE + } + +In this snippet, ``ROLLING`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingEnabled`, ``INTERVAL`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingIntervalSec`, ``OFFSET`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingOffsetHr`, and ``SIZE`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.SizeMb` - all still having the same value of ``N``, and the Config File value ``logging.config``, of course. + +.. warning:: The contents of these fields are not validated by Traffic Control - handle with care! + +.. 
seealso:: `The Apache Traffic Server documentation for the logging.config configuration file `_ + +logging.yaml +'''''''''''' +This is a YAML-format configuration file used by :term:`cache servers` that use Apache Traffic Server version 8 or higher - for lower versions, users/administrators/developers should instead be configuring ``logging.config``. This configuration always starts with (after the header) the single line: :literal:`format:\ `. Afterward, for every Parameter assigned to this Config File with a :ref:`parameter-name` like :file:`LogFormat{N}.Name` where ``N`` is either the empty string or a natural number on the interval [1,9], the YAML fragment shown in :ref:`logging.yaml-format-snippet` will be inserted. In this snippet, ``NAME`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFormat{N}.Name`, and for the same value of ``N`` ``FORMAT`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFormat{N}.Format`. + +.. _logging.yaml-format-snippet: + +.. code-block:: yaml + :caption: Log Format Snippet + + - name: NAME + format: 'FORMAT' + +.. tip:: The order in which these Parameters are considered is exactly the numerical ordering implied by ``N`` (starting with it being empty). However, each section is generated for all values of ``N`` before moving on to the next. + +After this, a single line containing only ``filters:`` is inserted. Then, for each Parameter on the :term:`cache server`'s :ref:`Profile ` with a :ref:`parameter-name` like :file:`LogFilter{N}.Name` where ``N`` is either the empty string or a natural number on the interval [1,9], the YAML fragment in :ref:`logging.yaml-filter-snippet` will be inserted. 
In that snippet, ``NAME`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFilter{N}.Name`, ``TYPE`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFilter{N}.Type` for the same value of ``N``, and ``FILTER`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFilter{N}.Filter` for the same value of ``N``. + +.. _logging.yaml-filter-snippet: + +.. code-block:: yaml + :caption: Log Filter Snippet + + - name: NAME + action: TYPE + condition: FILTER + +.. note:: When, for a given value of ``N``, a Parameter with the :ref:`parameter-name` :file:`LogFilter{N}.Name` exists, but a Parameter with the :ref:`parameter-name` :file:`LogFilter{N}.Type` does *not* exist, the value of ``TYPE`` in :ref:`logging.yaml-filter-snippet` will be ``accept``. + +At this point, a single line containing only ``logs:`` is inserted. Finally, for each Parameter on the :term:`cache server`'s :ref:`Profile ` assigned to this Config File with a :ref:`parameter-name` like :file:`LogObject{N}.Filename` where ``N`` is once again either an empty string or a natural number on the interval [1,9] the YAML fragment in :ref:`logging.yaml-object-snippet` will be inserted. In this snippet, for a given value of ``N`` ``TYPE`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Type`, ``FILENAME`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Filename`, ``FORMAT`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Format`. + +.. _logging.yaml-object-snippet: + +.. code-block:: yaml + :caption: Log Object Snippet + + - mode: TYPE + filename: FILENAME + format: FORMAT + ROLLING_OR_FILTERS + +.. 
note:: When, for a given value of ``N`` a Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Filename` exists, but a Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Type` does *not* exist, the value of ``TYPE`` in :ref:`logging.yaml-object-snippet` will be ``ascii``. + +``ROLLING_OR_FILTERS`` will be one of two YAML fragments based on the Value_ of the Parameter with the name :file:`LogObject{N}.Type`. In particular, if it is exactly ``pipe``, then ``ROLLING_OR_FILTERS`` will be :file:`filters: [{FILTERS}]` where ``FILTERS`` is the Value_ of the Parameter assigned to this Config File with the :ref:`parameter-name` :file:`LogObject{N}.Filters` for the same value of ``N``. If, however, the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Type` is **not** exactly ``pipe``, ``ROLLING_OR_FILTERS`` will have the format given by :ref:`logging.yaml-object-not-pipe-snippet`. In that snippet, ``ROLLING`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingEnabled`, ``INTERVAL`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingIntervalSec`, ``OFFSET`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingOffsetHr`, and ``SIZE`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingSizeMb` - all for the same value of ``N`` and assigned to the ``logging.yaml`` Config File, obviously. + +.. _logging.yaml-object-not-pipe-snippet: + +.. code-block:: yaml + :caption: Log Object (not a "pipe") Snippet + + rolling_enabled: ROLLING + rolling_interval_sec: INTERVAL + rolling_offset_hr: OFFSET + rolling_size_mb: SIZE + + +.. seealso:: For an explanation of YAML syntax, refer to the `official specification thereof `_. For an explanation of the syntax of a valid Apache Traffic Server ``logging.yaml`` configuration file, refer to `that project's dedicated documentation `_. 
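
The ``formats:`` generation just described can be sketched in a few lines of Python. This is an illustrative model only - the function name ``format_fragments`` and the dictionary shape are invented here and are not part of Traffic Ops's actual implementation:

```python
def format_fragments(params):
    """Emit one '- name/format' YAML entry per LogFormatN.Name Parameter.

    ``params`` maps Parameter Names to Values; all entries are assumed to
    be assigned to the ``logging.yaml`` Config File.
    """
    lines = []
    # N ranges over the empty string, then 1 through 9, in that order,
    # matching the ordering described above.
    for n in [""] + [str(i) for i in range(1, 10)]:
        name = params.get(f"LogFormat{n}.Name")
        if name is None:
            continue
        fmt = params.get(f"LogFormat{n}.Format", "")
        lines.append(f"  - name: {name}")
        lines.append(f"    format: '{fmt}'")
    return "\n".join(lines)

# A single hypothetical LogFormat Parameter pair:
example = {
    "LogFormat.Name": "custom_ats_2",
    "LogFormat.Format": "%<cqtq> %<ttms> %<chi>",
}
print("formats:")
print(format_fragments(example))
```

The ``filters:`` and ``logs:`` sections follow the same pattern, keyed on :file:`LogFilter{N}.Name` and :file:`LogObject{N}.Filename` respectively.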
+
+logs_xml.config
+'''''''''''''''
+This configuration file is somewhat more complex than most Config Files, in that it generates XML document tree segments\ [#xml-caveat]_ for each Parameter on the :term:`cache server`'s :ref:`Profile ` rather than simply a plain-text line. Specifically, up to ten of the document fragment shown in :ref:`logs_xml-format-snippet` will be inserted, one for each Parameter with this Config File value on the :term:`cache server`'s :ref:`Profile ` that has a :ref:`parameter-name` like :file:`LogFormat{N}.Name` where ``N`` is either the empty string or a natural number on the range [1,9]. In that snippet, the string ``NAME`` is actually the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFormat{N}.Name`, and ``FORMAT`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogFormat{N}.Format`\ [#logs-format]_, where again ``N`` is either the empty string or a natural number on the interval [1,9] - same-valued ``N`` Parameters are associated.
+
+.. _logs_xml-format-snippet:
+
+.. code-block:: text
+	:caption: LogFormat Snippet
+
+	<LogFormat>
+		<Name = "NAME"/>
+		<Format = "FORMAT"/>
+	</LogFormat>
+
+.. tip:: The order in which these Parameters are considered is exactly the numerical ordering implied by ``N`` (starting with it being empty).
+
+Furthermore, for a given value of ``N``, if a Parameter exists on the :term:`cache server`'s :ref:`Profile ` having this Config File value with the :ref:`parameter-name` :file:`LogObject{N}.Filename`, the document fragment shown in :ref:`logs_xml-object-snippet` will be inserted. In that snippet, ``OBJ_FORMAT`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Format`, ``FILENAME`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Filename`, ``ROLLING`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingEnabled`, ``INTERVAL`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingIntervalSec`, ``OFFSET`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingOffsetHr`, ``SIZE`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.RollingSizeMb`, and ``HEADER`` is the Value_ of the Parameter with the :ref:`parameter-name` :file:`LogObject{N}.Header` - all having the same value of ``N``, and the Config File value ``logs_xml.config``, of course.
+
+.. _logs_xml-object-snippet:
+
+.. code-block:: text
+	:caption: LogObject Snippet
+
+	<LogObject>
+		<Format = "OBJ_FORMAT"/>
+		<Filename = "FILENAME"/>
+		<RollingEnabled = ROLLING/>
+		<RollingIntervalSec = INTERVAL/>
+		<RollingOffsetHr = OFFSET/>
+		<RollingSizeMb = SIZE/>
+		<Header = "HEADER"/>
+	</LogObject>
+
+
+.. warning:: The contents of these fields are not validated by Traffic Control - handle with care!
+
+.. seealso:: The `Apache Traffic Server documentation on the logs_xml.config configuration file `_
+
+.. deprecated:: ATCv3.0
+
+	This file is only used by Apache Traffic Server version 6.x. The use of Apache Traffic Server version < 7.1 has been deprecated, and will not be supported in the future. Developers are encouraged to instead configure the `logging.config`_ configuration file.
+
+package
+'''''''
+This is a special, reserved Config File that isn't a file at all. When a Parameter's Config File is ``package``, its name is interpreted as the name of a package. :term:`ORT` on the server using the :ref:`Profile ` that has this Parameter will attempt to install a package by that name, interpreting the Parameter's Value_ as a version string if it is not empty. The package manager used will be :manpage:`yum(8)`, regardless of system (though the Python version of :term:`ORT` will attempt to use the host system's package manager - :manpage:`yum(8)`, :manpage:`apt(8)` and ``pacman`` are supported), but that shouldn't be a problem because only CentOS 7 is supported.
+
+The current implementation of :term:`ORT` will expect Parameters to exist on a :term:`cache server`'s :ref:`Profile ` with the :ref:`Names ` ``astats_over_http`` and ``trafficserver`` before being run for the first time, as both of these are required for a :term:`cache server` to operate within a Traffic Control CDN. It is possible to install these outside of :term:`ORT` - and indeed even outside of :manpage:`yum(8)` - but such configuration is not officially supported.
+
+packages
+''''''''
+This Config File is reserved, and is used by :term:`ORT` to pull bulk information about all of the Parameters with Config File values of package_. It doesn't actually correspond to any configuration file.
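
To make the ``package`` convention concrete, here is a minimal sketch of how such Parameters could be translated into :manpage:`yum(8)` arguments. The function and the tuple shape are assumptions for illustration, not :term:`ORT`'s actual code:

```python
def yum_args(package_params):
    """Build 'yum install' arguments from Parameters whose Config File is
    'package'. Each Parameter's Name is a package name; a non-empty Value
    is treated as a version string and pinned as 'name-version'.
    """
    args = ["install", "-y"]
    for name, version in package_params:
        args.append(f"{name}-{version}" if version else name)
    return args

# e.g. the two Parameters ORT expects before its first run - one with a
# hypothetical pinned version, one with an empty Value:
print(yum_args([("trafficserver", "8.0.2"), ("astats_over_http", "")]))
```

An empty Value_ simply installs whatever version the repository offers, mirroring the "version string if it is not empty" rule described above.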
+
+parent.config
+'''''''''''''
+This configuration file is generated entirely from :term:`Cache Group` relationships, as well as :term:`Delivery Service` configuration. This file *can* be affected by Parameters on the server's :ref:`Profile ` if and only if its :ref:`parameter-name` is one of the following:
+
+- ``algorithm``
+- ``qstring``
+- ``psel.qstring_handling``
+- ``not_a_parent`` - unlike the other Parameters listed (which have a 1:1 correspondence with Apache Traffic Server configuration options), this Parameter affects the generation of :term:`parent` relationships between :term:`cache servers`. When a Parameter with this :ref:`parameter-name` and Config File exists on a :ref:`Profile ` used by a :term:`cache server`, it will not be added as a :term:`parent` of any other :term:`cache server`, regardless of :term:`Cache Group` hierarchy. Under ordinary circumstances, there's no real reason for this Parameter to exist.
+
+Additionally, :term:`Delivery Service` :ref:`Profiles ` can have special Parameters with the :ref:`parameter-name` "mso.parent_retry" to configure :ref:`multi-site-origin-qht`.
+
+.. seealso:: To see how the :ref:`Values ` of these Parameters are interpreted, refer to the `Apache Traffic Server documentation on the parent.config configuration file `_
+
+plugin.config
+'''''''''''''
+For each Parameter with this Config File value on the same :ref:`Profile `, a line in the resulting configuration file is produced in the format :file:`{NAME} {VALUE}` where ``NAME`` is the Parameter's :ref:`parameter-name` with trailing characters matching the regular expression :regexp:`__\\d+$` stripped out and ``VALUE`` is the Parameter's Value_.
+
+.. caution:: In order for Parameters for Config Files relating to Apache Traffic Server plugins - e.g. `regex_revalidate.config`_ - to have any effect, a Parameter must exist with this Config File value to instruct Apache Traffic Server to load the plugin. Typically, this is more easily achieved by assigning these Parameters to `The GLOBAL Profile`_ than on a server-by-server basis.
+
+.. seealso:: `The Apache Traffic Server documentation on the plugin.config configuration file `_ explains what Value_ and :ref:`parameter-name` a Parameter should have to be valid.
+
+rascal.properties
+'''''''''''''''''
+This Config File is meant to be on Parameters assigned to either Traffic Monitor Profiles_ or :term:`cache server` Profiles_. Its allowed :ref:`Parameter Names ` are all configuration options for Traffic Monitor. The :ref:`Names ` with meaning are as follows.
+
+health.threshold.loadavg
+	The Value_ of this Parameter sets the "load average" above which the associated :ref:`Profile `'s :term:`cache server` will be considered "unhealthy".
+
+	.. seealso:: :ref:`health-proto`
+
+	.. seealso:: The definition of a "load average" can be found in the documentation for the Linux/Unix command :manpage:`uptime(1)`.
+
+	.. caution:: If more than one Parameter with this :ref:`parameter-name` and Config File exist on the same :ref:`Profile ` with different :ref:`Values `, the actual Value_ used by any given Traffic Monitor instance is undefined (though it will be the Value_ of one of those Parameters).
+
+health.threshold.availableBandwidthInKbps
+	The Value_ of this Parameter sets the amount of bandwidth (in kilobits per second) that Traffic Control will try to keep available on the :term:`cache server`. For example, a Value_ of ">1500000" indicates that the :term:`cache server` will be marked "unhealthy" if its available remaining bandwidth on the network interface used by the caching proxy falls below 1.5Gbps.
+
+	.. seealso:: :ref:`health-proto`
+
+	.. caution:: If more than one Parameter with this :ref:`parameter-name` and Config File exist on the same :ref:`Profile ` with different :ref:`Values `, the actual Value_ used by any given Traffic Monitor instance is undefined (though it will be the Value_ of one of those Parameters).
+
+records.config
+''''''''''''''
+For each Parameter with this Config File value on the same :ref:`Profile `, a line in the resulting configuration file is produced in the format :file:`{NAME} {VALUE}` where ``NAME`` is the Parameter's :ref:`parameter-name` with trailing characters matching the regular expression :regexp:`__\\d+$` stripped out and ``VALUE`` is the Parameter's Value_.
+
+.. seealso:: `The Apache Traffic Server records.config documentation `_
+
+:file:`regex_remap_{anything}.config`
+'''''''''''''''''''''''''''''''''''''
+Config Files matching this pattern - where ``anything`` is zero or more characters - are generated entirely from :term:`Delivery Service` configuration, which cannot be affected by any Parameters (except :ref:`"location" `).
+
+.. seealso:: For the syntax of configuration files for the "Regex Remap" plugin, see `the Regex Remap plugin's official documentation `_. For instructions on how to enable a plugin, consult the `plugin.config documentation `_.
+
+regex_revalidate.config
+'''''''''''''''''''''''
+This configuration file can only be affected by the special ``maxRevalDurationDays`` Parameter, which is discussed in the `The GLOBAL Profile`_ section.
+
+.. seealso:: For the syntax of configuration files for the "Regex Revalidate" plugin, see `the Regex Revalidate plugin's official documentation `_. For instructions on how to enable a plugin, consult the `plugin.config documentation `_.
+
+remap.config
+''''''''''''
+This configuration file can only be affected by one Parameter on a :ref:`Profile ` assigned to a :term:`Delivery Service`. Then, for every Parameter assigned to that :ref:`Profile ` that has the Config File value "cachekey.config" - **NOT this Config File** - a parameter will be added to the line for that :term:`Delivery Service` like so: :file:`pparam=--{Name}={Value}` where ``Name`` is the Parameter's :ref:`parameter-name`, and ``Value`` is its Value_.
+
+.. seealso:: For an explanation of the syntax of this configuration file, refer to `the Apache Traffic Server remap.config documentation `_.
+
+:file:`set_dscp_{anything}.config`
+''''''''''''''''''''''''''''''''''
+Configuration files matching this pattern - where ``anything`` is a string of zero or more characters - are generated entirely from a :ref:`"location" ` Parameter.
+
+.. tip:: ``anything`` in that Config File name only has meaning if it is a natural number - specifically, one of the values of :ref:`ds-dscp` on the :term:`Delivery Services` to which the :term:`cache server` using the :ref:`Profile ` on which the Parameter(s) exist(s) is assigned.
+
+ssl_multicert.config
+''''''''''''''''''''
+This configuration file is generated from the SSL keys of :term:`Delivery Services`, and is unaffected by any Parameters (except :ref:`"location" `).
+
+.. seealso:: `The official ssl_multicert.config documentation `_
+
+storage.config
+''''''''''''''
+This configuration file can only be affected by a handful of Parameters. If a Parameter with the :ref:`parameter-name` "Drive Prefix" exists, the generated configuration file will have a line inserted in the format :file:`{PREFIX}{LETTER} volume=1` for each letter in the comma-delimited list that is the Value_ of the Parameter on the same :ref:`Profile ` with the :ref:`parameter-name` "Drive Letters", where ``PREFIX`` is the Value_ of the Parameter with the :ref:`parameter-name` "Drive Prefix", and ``LETTER`` is each of the aforementioned letters in turn. Additionally, if a Parameter on the same :ref:`Profile ` exists with the :ref:`parameter-name` "RAM Drive Prefix", then for each letter in the comma-delimited list that is the Value_ of the Parameter on the same :ref:`Profile ` with the :ref:`parameter-name` "RAM Drive Letters", a line will be generated in the format :file:`{PREFIX}{LETTER} volume={i}` where ``PREFIX`` is the Value_ of the Parameter with the :ref:`parameter-name` "RAM Drive Prefix", ``LETTER`` is each of the aforementioned letters in turn, and ``i`` is 1 *if and* **only** *if* a Parameter does **not** exist on the same :ref:`Profile ` with the :ref:`parameter-name` "Drive Prefix" and is 2 otherwise. Finally, if a Parameter exists on the same :ref:`Profile ` with the :ref:`parameter-name` "SSD Drive Prefix", then a line is inserted for each letter in the comma-delimited list that is the Value_ of the Parameter on the same :ref:`Profile ` with the :ref:`parameter-name` "SSD Drive Letters" in the format :file:`{PREFIX}{LETTER} volume={i}` where ``PREFIX`` is the Value_ of the Parameter with the :ref:`parameter-name` "SSD Drive Prefix", ``LETTER`` is each of the aforementioned letters in turn, and ``i`` is 1 *if and* **only** *if* **both** a Parameter with the :ref:`parameter-name` "Drive Prefix" and a Parameter with the :ref:`parameter-name` "RAM Drive Prefix" *don't exist on the same* :ref:`Profile `, or 2 if only **one** of them exists, or otherwise 3.
+
+.. seealso:: `The Apache Traffic Server storage.config file documentation `_.
+
+traffic_stats.config
+''''''''''''''''''''
+This Config File value is only handled specially when the :ref:`Profile ` to which it is assigned is of the special TRAFFIC_STATS Type_. In that case, the :ref:`parameter-name` of any Parameters with this Config File is restricted to one of "CacheStats" or "DsStats". When it is "CacheStats", the Value_ is interpreted specially based on whether or not it starts with "ats.". If it does, then what follows must be the name of one of `the core Apache Traffic Server statistics `_. This signifies to Traffic Stats that it should store that statistic for :term:`cache servers` within Traffic Control. Additionally, the special statistics "bandwidth" and "maxKbps" are supported as :ref:`Names ` - and in fact it is suggested that they exist in every Traffic Control deployment.
+
+When the Parameter :ref:`parameter-name` is "DsStats", the allowed :ref:`Values ` are:
+
+- kbps
+- status_4xx
+- status_5xx
+- tps_2xx
+- tps_3xx
+- tps_4xx
+- tps_5xx
+- tps_total
+
+.. seealso:: For more information on the statistics gathered by Traffic Stats, see :ref:`ts-admin`. For information about how these statistics are gathered, consult the only known documentation of the "astats_over_http" Apache Traffic Server plugin: :atc-file:`traffic_server/plugins/astats_over_http/README.md`.
+
+sysctl.config
+'''''''''''''
+For each Parameter with this Config File value on the same :ref:`Profile `, a line in the resulting configuration file is produced in the format :file:`{NAME} = {VALUE}` where ``NAME`` is the Parameter's :ref:`parameter-name` with trailing characters matching the regular expression :regexp:`__\\d+$` stripped out and ``VALUE`` is the Parameter's Value_.
+
+:file:`uri_signing_{anything}.config`
+'''''''''''''''''''''''''''''''''''''
+Config Files matching this pattern - where ``anything`` is zero or more characters - are generated entirely from the URI Signing Keys configured on a :term:`Delivery Service` through either the :ref:`to-api` or the :ref:`tp-services-delivery-service` view in Traffic Portal.
+
+.. seealso:: `The draft RFC for uri_signing `_ - note, however, that the current implementation of uri_signing uses Draft 12 of that RFC document, **NOT** the latest.
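The :file:`{NAME} = {VALUE}` line generation described for ``sysctl.config`` - including the trailing-:regexp:`__\\d+$` stripping it shares with ``records.config`` and ``plugin.config`` - can be sketched in shell. This is a hypothetical illustration; the Parameter Name and Value below are made up.

```shell
# Hypothetical sketch of generating one sysctl.config line from a Parameter.
# The trailing "__<digits>" suffix exists only so that several Parameters with
# the same logical name can coexist on one Profile; it is stripped on output.
param_name="net.core.somaxconn__1"   # made-up Parameter Name
param_value="1024"                   # made-up Parameter Value

stripped=$(printf '%s' "$param_name" | sed -E 's/__[0-9]+$//')
line="$stripped = $param_value"
echo "$line"
```

Running this prints ``net.core.somaxconn = 1024``; the same stripping applied to ``records.config`` or ``plugin.config`` Parameters simply omits the ``=``.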
+
+:file:`url_sig_{anything}.config`
+'''''''''''''''''''''''''''''''''
+Config Files that match this pattern - where ``anything`` is zero or more characters - are mostly generated using the URL Signature Keys as configured either through the :ref:`to-api` or the :ref:`tp-services-delivery-service` view in Traffic Portal. However, if no such keys have been configured, they may be provided by fall-back Parameters. In this case, for each Parameter assigned to this Config File on the same :ref:`Profile `, a line is inserted into the resulting configuration file in the format :file:`{NAME} = {VALUE}` where ``NAME`` is the Parameter's :ref:`parameter-name` and ``VALUE`` is the Parameter's Value_.
+
+.. seealso:: `The Apache Traffic Server documentation for the url_sig plugin `_.
+
+volume.config
+'''''''''''''
+This Config File is peculiar in that it depends only on the existence of Parameters, and not each Parameter's actual Value_. The Parameters that affect the generated configuration file are the Parameters with the :ref:`Names ` "Drive Prefix", "RAM Drive Prefix", and "SSD Drive Prefix". Each of these Parameters must be assigned to the ``storage.config`` Config File - **NOT this Config File** - and, of course, be on the same :ref:`Profile `. The contents of the generated Config File will be between zero and three lines (excluding headers), where the number of lines is equal to the number of the aforementioned Parameters that actually exist on the same :ref:`Profile `. Each line has the format :file:`volume={i} scheme=http size={SIZE}%` where ``i`` is a natural number that ranges from 1 to the number of those Parameters that exist. ``SIZE`` is :math:`100 / N` - where :math:`N` is the number of those special Parameters that exist - truncated to a natural number, e.g. :math:`100 / 3 = 33`.
+
+.. seealso:: `The Apache Traffic Server volume.config file documentation `_.
+
+.. _parameter-id:
+
+ID
+""
+An integral, unique identifier for a Parameter.
Note that Parameters must have a unique combination of `Config File`_, :ref:`parameter-name`, and Value_, and so those should be used for identifying a unique Parameter whenever possible. + +.. impl-detail:: If two Profiles_ have been assigned Parameters that have the same values for `Config File`_, :ref:`parameter-name`, and Value_ then Traffic Ops actually only stores one Parameter object and merely *links* it to both Profiles_. This can be seen by inspecting the Parameters' IDs, as they will be the same. There are many cases where a user or developer must rely on this implementation detail, but both are encouraged to do so only when absolutely necessary. + +.. _parameter-name: + +Name +"""" +The Name of a Parameter has different meanings depending on the type of any and all Profiles_ to which it is assigned, as well as the `Config File`_ to which the Parameter belongs, but most generally it is used in `Apache Traffic Server configuration files`_ as the name of a configuration option in a name/value pair. Traffic Ops interprets the Name and Value_ of a Parameter in intelligent ways depending on the type of object to which the :ref:`Profile ` using the Parameter is assigned. For example, if `Config File`_ is ``records.config`` and the Parameter's :ref:`Profile ` is assigned to a :term:`cache server`, then a single line is placed in the configuration file specified by `Config File`_, and that line will have the contents :file:`{Name} {Value}`. However, if the `Config File`_ of the Parameter is something without special meaning to Traffic Ops e.g. "foo", then a line containing **only** the Parameter's Value_ would be inserted into that file (presuming it also has a Parameter with a Name of :ref:`"location" ` and a `Config File`_ of "foo"). Additionally, there are a few Names that are treated specially by Traffic Control. + +.. 
_parameter-name-location: + +location + The Value_ of this Parameter is to be interpreted as a path under which the configuration file specified by `Config File`_ shall be found (or written, if not found). Any configuration file that is to exist on a server must have an associated "location" Parameter, even if the contents of the file cannot be affected by Parameters. + + .. caution:: If a single :ref:`Profile ` has multiple "location" Parameters for the same `Config File`_ with different :ref:`Values `, the actual location of the generated configuration file is undefined (but will be one of those Parameters' :ref:`Values `). + +header + If the :ref:`Profile ` containing this Parameter is assigned to a server, **and** if the `Config File`_ is not one of the special values that Traffic Ops uses to determine special syntax formatting, then the Value_ of this Parameter will be used instead of the typical Traffic Ops header - *unless* it is the special string "none", in which case no header will be inserted at all. + + .. caution:: If a single :ref:`Profile ` has multiple "header" Parameters for the same `Config File`_ with different :ref:`Values `, the actual header is undefined (but will be one of those Parameters' :ref:`Values `). + +.. _parameter-secure: + +Secure +"""""" +When this is 'true', a user requesting to see this Parameter will see the value ``********`` instead of its actual value if the user's permission :term:`Role` isn't 'admin'. + +.. _parameter-value: + +Value +""""" +In general, a Parameter's :dfn:`Value` can be anything, and in the vast majority of cases the Value is *in no way validated by Traffic Control*. Usually, though, the Value has a special meaning depending on the values of the Parameter's `Config File`_ and/or :ref:`parameter-name`. + +.. 
[#xml-caveat] The contents of this file are not valid XML, but are rather XML-like so developers writing procedures that will consume and parse it should be aware of this, and note the actual syntax as specified in the `Apache Traffic Server documentation for logs_xml.config `_ +.. [#logs-format] This Value_ may safely contain double quotes (:kbd:`"`) as they will be backslash-escaped in the generated output. +.. [#logs-filter] This Value_ may safely contain backslashes (:kbd:`\\`) and single quotes (:kbd:`'`), as they will be backslash-escaped in the generated output. diff --git a/docs/source/tools/compare.rst b/docs/source/tools/compare.rst index 4a12634b47..107940a7f0 100644 --- a/docs/source/tools/compare.rst +++ b/docs/source/tools/compare.rst @@ -82,7 +82,7 @@ traffic_ops/testing/compare/compare.go Print version information and exit .. versionchanged:: 3.0.0 - Removed the ``-s`` command line switch to compare CDN :term:`Snapshot`\ s - this is now the responsibility of the :program:`genConfigRoutes.py` script. + Removed the ``-s`` command line switch to compare CDN :term:`Snapshots` - this is now the responsibility of the :program:`genConfigRoutes.py` script. .. program:: genConfigRoutes.py diff --git a/grove/plugin/ats_log.go b/grove/plugin/ats_log.go index b40f7a40ec..9cfc393ace 100644 --- a/grove/plugin/ats_log.go +++ b/grove/plugin/ats_log.go @@ -117,10 +117,10 @@ func atsEventLogStr( ) string { unixNano := timestamp.UnixNano() unixSec := unixNano / NSPerSec - unixFrac := 1 / (unixNano % NSPerSec) + unixFrac := (unixNano / (NSPerSec / 1000)) - (unixSec * 1000) // gives fractional seconds to three decimal points, like the ATS logs. unixFracStr := strconv.FormatInt(unixFrac, 10) - if len(unixFracStr) > 3 { - unixFracStr = unixFracStr[:3] + for len(unixFracStr) < 3 { + unixFracStr = "0" + unixFracStr // leading zeros, so e.g. 
a fraction of '42' becomes '1234.042' not '1234.42' } cfsc := "FIN" if !clientRespSuccess { diff --git a/grove/plugin/ats_log_test.go b/grove/plugin/ats_log_test.go new file mode 100644 index 0000000000..757310e9ae --- /dev/null +++ b/grove/plugin/ats_log_test.go @@ -0,0 +1,57 @@ +package plugin + +/* + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +import ( + "fmt" + "strings" + "testing" + "time" +) + +func TestATSLogTimeFractionalSeconds(t *testing.T) { + testTimes := []int64{ + 1563936732547355432, + 1563937732000355432, + 1563936732000000000, + 1463136732999000000, + 1563916732009000000, + 1503936232090000000, + 1563936732099000000, + 1563936722900000000, + 1563236282909000000, + } + for _, testTime := range testTimes { + timestamp := time.Unix(0, testTime) + + logStr := atsEventLogStr(timestamp, "", "", "", "", "", "", "", "", "", 0, 0, 0, 0, 0, false, false, "", "", "", "", "", 0) + + logFields := strings.Fields(logStr) + if len(logFields) < 1 { + t.Fatalf("atsEventLogStr expected >1 fields, actual %v", len(logFields)) + } + + timeField := logFields[0] + + // the time field should be the Unix timestamp in seconds, as a float with 3 decimal places. 
+		unixNano := timestamp.UnixNano()
+		unixSec := float64(unixNano) / float64(NSPerSec)
+		unixSecThreeDecimalPts := fmt.Sprintf("%.3f", unixSec)
+
+		if timeField != unixSecThreeDecimalPts {
+			t.Errorf("atsEventLogStr time expected '%v' actual '%v'", unixSecThreeDecimalPts, timeField)
+		}
+	}
+}
diff --git a/infrastructure/cdn-in-a-box/README.md b/infrastructure/cdn-in-a-box/README.md
index af21634e27..0a79957d1c 100644
--- a/infrastructure/cdn-in-a-box/README.md
+++ b/infrastructure/cdn-in-a-box/README.md
@@ -166,9 +166,9 @@ To expose the ports of each service on the host, add the `docker-compose.expose-
 If you scroll back through the output ( or use `docker-compose logs trafficops-perl | grep "User defined signal 2"` ) and see a line that says something like `/run.sh: line 79: 118 User defined signal 2 $TO_DIR/local/bin/hypnotoad script/cdn` then you've hit a mysterious known error. We don't know what this is or why it happens, but your best bet is to send up a quick prayer and restart the stack.
 
-### Traffic Monitor is stuck waiting for a valid CRConfig
+### Traffic Monitor is stuck waiting for a valid Snapshot
 
-Often times you must snap the CDN in order for a valid CRConfig to be generated. This can be done by logging into the Traffic Portal and clicking the camera icon, then clicking the perform snapshot button.
+Oftentimes you must take a CDN [Snapshot](https://traffic-control-cdn.readthedocs.io/en/latest/glossary.html#term-snapshot) in order for a valid Snapshot to be generated. This can be done through [Traffic Portal's "CDNs" view](https://traffic-control-cdn.readthedocs.io/en/latest/admin/traffic_portal/usingtrafficportal.html#cdns), clicking on the "CDN-in-a-Box" CDN, then pressing the camera button, and finally the "Perform Snapshot" button.
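The troubleshooting advice above boils down to the same check the CDN-in-a-Box startup scripts perform: poll the Snapshot endpoint until its ``config`` object is non-empty. A rough sketch of that check - run here against a canned response so it needs no live Traffic Ops instance - might look like:

```shell
# Sketch of the "is there a valid Snapshot yet?" check used by CDN-in-a-Box.
# A real script would fetch /api/1.4/cdns/$CDN_NAME/snapshot with an
# authenticated GET; the canned string below stands in for that response.
canned_response='{"response":{"config":{"domainName":"mycdn.ciab.test"}}}'

case "$canned_response" in
	*'"config":{}'*|*'"config":null'*) snapshot_status="waiting" ;;
	*'"config":{'*)                    snapshot_status="valid" ;;
	*)                                 snapshot_status="unknown" ;;
esac
echo "Snapshot status: $snapshot_status"
```

The scripts in this patch do the equivalent with `jq -c -e '.response.config|length'`, looping with a short sleep until the length is greater than zero.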
### I'm seeing a failure to open a socket and/or set a socket option
diff --git a/infrastructure/cdn-in-a-box/traffic-ops-overrides.tgz b/infrastructure/cdn-in-a-box/traffic-ops-overrides.tgz
new file mode 100644
index 0000000000..8128e85ba5
Binary files /dev/null and b/infrastructure/cdn-in-a-box/traffic-ops-overrides.tgz differ
diff --git a/infrastructure/cdn-in-a-box/traffic_monitor/Dockerfile b/infrastructure/cdn-in-a-box/traffic_monitor/Dockerfile
index 998ab86cfc..9cf40d70e6 100644
--- a/infrastructure/cdn-in-a-box/traffic_monitor/Dockerfile
+++ b/infrastructure/cdn-in-a-box/traffic_monitor/Dockerfile
@@ -40,7 +40,7 @@ RUN yum install -y epel-release && \
 	yum clean all
 
 RUN mkdir -p /opt/traffic_monitor/conf
-ADD traffic_monitor/traffic_monitor.cfg /opt/traffic_monitor/conf/
+ADD traffic_monitor/traffic_monitor.cfg /opt/traffic_monitor/conf/traffic_monitor.cfg.template
 
 ADD enroller/server_template.json \
 	traffic_ops/to-access.sh \
diff --git a/infrastructure/cdn-in-a-box/traffic_monitor/run.sh b/infrastructure/cdn-in-a-box/traffic_monitor/run.sh
index b40bfc6fc7..a1032fca78 100755
--- a/infrastructure/cdn-in-a-box/traffic_monitor/run.sh
+++ b/infrastructure/cdn-in-a-box/traffic_monitor/run.sh
@@ -101,15 +101,12 @@ export TO_PASSWORD="$TM_PASSWORD"
 export TO_USER=$TO_ADMIN_USER
 export TO_PASSWORD=$TO_ADMIN_PASSWORD
 
-touch /opt/traffic_monitor/var/log/traffic_monitor.log
-
-# Do not start until there is a valid CRConfig available
-until [ $(to-get "/CRConfig-Snapshots/$CDN_NAME/CRConfig.json" 2>/dev/null | jq -c -e '.config|length') -gt 0 ] ; do
-	echo "Waiting on valid CRConfig...";
-	sleep 3;
+# Do not start until a valid Snapshot has been taken
+until [ $(to-get "/api/1.4/cdns/$CDN_NAME/snapshot" 2>/dev/null | jq -c -e '.response.config|length') -gt 0 ] ; do
+	echo "Waiting on valid Snapshot...";
+	sleep 3;
 done
 
+envsubst < /opt/traffic_monitor/conf/traffic_monitor.cfg.template > /opt/traffic_monitor/conf/traffic_monitor.cfg
 cd /opt/traffic_monitor
-/opt/traffic_monitor/bin/traffic_monitor -opsCfg /opt/traffic_monitor/conf/traffic_ops.cfg -config /opt/traffic_monitor/conf/traffic_monitor.cfg & -disown -exec tail -f /opt/traffic_monitor/var/log/traffic_monitor.log +/opt/traffic_monitor/bin/traffic_monitor -opsCfg /opt/traffic_monitor/conf/traffic_ops.cfg -config /opt/traffic_monitor/conf/traffic_monitor.cfg diff --git a/infrastructure/cdn-in-a-box/traffic_monitor/traffic_monitor.cfg b/infrastructure/cdn-in-a-box/traffic_monitor/traffic_monitor.cfg index b5223b1f71..91ea4fe336 100644 --- a/infrastructure/cdn-in-a-box/traffic_monitor/traffic_monitor.cfg +++ b/infrastructure/cdn-in-a-box/traffic_monitor/traffic_monitor.cfg @@ -10,11 +10,11 @@ "max_health_history": 5, "health_flush_interval_ms": 20, "stat_flush_interval_ms": 20, - "log_location_event": "/opt/traffic_monitor/var/log/traffic_monitor.log", - "log_location_error": "/opt/traffic_monitor/var/log/traffic_monitor.log", - "log_location_warning": "/opt/traffic_monitor/var/log/traffic_monitor.log", - "log_location_info": "/opt/traffic_monitor/var/log/traffic_monitor.log", - "log_location_debug": "/opt/traffic_monitor/var/log/traffic_monitor.log", + "log_location_event": "$TM_LOG_EVENT", + "log_location_error": "$TM_LOG_ERROR", + "log_location_warning": "$TM_LOG_WARNING", + "log_location_info": "$TM_LOG_INFO", + "log_location_debug": "$TM_LOG_DEBUG", "serve_read_timeout_ms": 10000, "serve_write_timeout_ms": 10000, "http_poll_no_sleep": false, diff --git a/infrastructure/cdn-in-a-box/traffic_ops/config.sh b/infrastructure/cdn-in-a-box/traffic_ops/config.sh index 22619adde1..2697169711 100755 --- a/infrastructure/cdn-in-a-box/traffic_ops/config.sh +++ b/infrastructure/cdn-in-a-box/traffic_ops/config.sh @@ -80,7 +80,7 @@ cat <<-EOF >/opt/traffic_ops/app/conf/cdn.conf "workers" : 12 }, "traffic_ops_golang" : { - "insecure": true, + "insecure": true, "port" : "$TO_PORT", "proxy_timeout" : 60, "proxy_keep_alive" : 60, @@ -90,15 +90,16 @@ cat <<-EOF 
>/opt/traffic_ops/app/conf/cdn.conf "read_header_timeout" : 60, "write_timeout" : 60, "idle_timeout" : 60, - "log_location_error": "stdout", - "log_location_warning": "stdout", - "log_location_info": "stdout", - "log_location_debug": "stdout", - "log_location_event": "stdout", + "log_location_error": "$TO_LOG_ERROR", + "log_location_warning": "$TO_LOG_WARNING", + "log_location_info": "$TO_LOG_INFO", + "log_location_debug": "$TO_LOG_DEBUG", + "log_location_event": "$TO_LOG_EVENT", "max_db_connections": 20, "backend_max_connections": { "mojolicious": 4 - } + }, + "whitelisted_oauth_urls": [] }, "cors" : { "access_control_allow_origin" : "*" diff --git a/infrastructure/cdn-in-a-box/traffic_ops/run-go.sh b/infrastructure/cdn-in-a-box/traffic_ops/run-go.sh index b38a8d0d4f..6a542c797d 100755 --- a/infrastructure/cdn-in-a-box/traffic_ops/run-go.sh +++ b/infrastructure/cdn-in-a-box/traffic_ops/run-go.sh @@ -87,94 +87,12 @@ while true; do done ### Add SSL keys for demo1 delivery service -demo1_sslkeys_verified=false -demo1_version=1 -while [[ "$demo1_sslkeys_verified" = false ]]; do - while true; do - sslkeys_response=$(to-get "api/1.4/deliveryservices/xmlId/$ds_name/sslkeys?decode=true") - echo "CDN SSLKeys=$sslkeys_response" - [[ -n "$sslkeys_response" ]] && break - sleep 2 - done - demo1_crt="$(sed -n -e '/-----BEGIN CERTIFICATE-----/,$p' $X509_DEMO1_CERT_FILE | jq -s -R '.')" - demo1_csr="$(sed -n -e '/-----BEGIN CERTIFICATE REQUEST-----/,$p' $X509_DEMO1_REQUEST_FILE | jq -s -R '.')" - demo1_key="$(sed -n -e '/-----BEGIN PRIVATE KEY-----/,$p' $X509_DEMO1_KEY_FILE | jq -s -R '.')" - demo1_json_request=$(jq -n \ - --arg cdn "$CDN_NAME" \ - --arg hostname "*.demo1.mycdn.ciab.test" \ - --arg dsname "$ds_name" \ - --argjson crt "$demo1_crt" \ - --argjson csr "$demo1_csr" \ - --argjson key "$demo1_key" \ - --argjson version $demo1_version \ - "{ cdn: \$cdn, - certificate: { - crt: \$crt, - csr: \$csr, - key: \$key - }, - deliveryservice: \$dsname, - hostname: \$hostname, - 
key: \$dsname, - version: $demo1_version - }") - - demo1_json_response=$(to-post 'api/1.4/deliveryservices/sslkeys/add' "$demo1_json_request") - - if [[ -n "$demo1_json_response" ]] ; then - sleep 2 - cdn_sslkeys_response=$(to-get "api/1.3/cdns/name/$CDN_NAME/sslkeys.json" | jq '.response[] | length') - echo "cdn_sslkeys_response=$cdn_sslkeys_response" - - if [ -n "$cdn_sslkeys_response" ] ; then - if ((cdn_sslkeys_response==0)); then - sleep 2 # Submit it again because the first time doesn't work ! - demo1_json_response=$(to-post 'api/1.4/deliveryservices/sslkeys/add' "$demo1_json_request") - - if [[ -n "$demo1_json_response" ]] ; then - demo1_sslkeys_verified=true - fi - elif ((cdn_sslkeys_response>0)); then - demo1_sslkeys_verified=true - fi - fi - fi - - ((demo_version+=1)) -done +to-add-sslkeys $CDN_NAME $ds_name "*.demo1.mycdn.ciab.test" $X509_DEMO1_CERT_FILE $X509_DEMO1_REQUEST_FILE $X509_DEMO1_KEY_FILE ### Automatic Queue/Snapshot ### -while [[ "$AUTO_SNAPQUEUE_ENABLED" = true ]] ; do +if [[ "$AUTO_SNAPQUEUE_ENABLED" = true ]]; then # AUTO_SNAPQUEUE_SERVERS should be a comma delimited list of expected docker service names to be enrolled - see varibles.env - expected_servers_json=$(echo "$AUTO_SNAPQUEUE_SERVERS" | tr ',' '\n' | jq -R . 
| jq -M -c -e -s '.|sort') - expected_servers_list=$(jq -r -n --argjson expected "$expected_servers_json" '$expected|join(",")') - expected_servers_total=$(jq -r -n --argjson expected "$expected_servers_json" '$expected|length') - - current_servers_json=$(to-get 'api/1.4/servers' 2>/dev/null | jq -c -e '[.response[] | .xmppId] | sort') - [ -z "$current_servers_json" ] && current_servers_json='[]' - current_servers_list=$(jq -r -n --argjson current "$current_servers_json" '$current|join(",")') - current_servers_total=$(jq -r -n --argjson current "$current_servers_json" '$current|length') - - remain_servers_json=$(jq -n --argjson expected "$expected_servers_json" --argjson current "$current_servers_json" '$expected-$current') - remain_servers_list=$(jq -r -n --argjson remain "$remain_servers_json" '$remain|join(",")') - remain_servers_total=$(jq -r -n --argjson remain "$remain_servers_json" '$remain|length') - - echo "AUTO-SNAPQUEUE - Expected Servers ($expected_servers_total): $expected_servers_list" - echo "AUTO-SNAPQUEUE - Current Servers ($current_servers_total): $current_servers_list" - echo "AUTO-SNAPQUEUE - Remain Servers ($remain_servers_total): $remain_servers_list" - - if ((remain_servers_total == 0)) ; then - echo "AUTO-SNAPQUEUE - All expected servers enrolled." - sleep $AUTO_SNAPQUEUE_ACTION_WAIT - echo "AUTO-SNAPQUEUE - Do automatic snapshot..." - to-put 'api/1.3/cdns/2/snapshot' - sleep $AUTO_SNAPQUEUE_ACTION_WAIT - echo "AUTO-SNAPQUEUE - Do queue updates..." 
- to-post 'api/1.3/cdns/2/queue_update' '{"action":"queue"}' - break - fi - - sleep $AUTO_SNAPQUEUE_POLL_INTERVAL -done + to-auto-snapqueue $AUTO_SNAPQUEUE_SERVERS $CDN_NAME +fi exec tail -f /dev/null diff --git a/infrastructure/cdn-in-a-box/traffic_ops/set-to-ips-from-dns.sh b/infrastructure/cdn-in-a-box/traffic_ops/set-to-ips-from-dns.sh index cdf86cd54c..1df6a8f6a9 100755 --- a/infrastructure/cdn-in-a-box/traffic_ops/set-to-ips-from-dns.sh +++ b/infrastructure/cdn-in-a-box/traffic_ops/set-to-ips-from-dns.sh @@ -22,7 +22,7 @@ base_data_dir="/traffic_ops_data" servers_dir="${base_data_dir}/servers" profiles_dir="${base_data_dir}/profiles" -service_names='db trafficops trafficops-perl trafficportal trafficmonitor trafficvault trafficrouter edge mid origin enroller socksproxy client vnc dns' +service_names='db trafficops trafficops-perl trafficportal trafficmonitor trafficvault trafficrouter enroller dns' service_domain='infra.ciab.test' @@ -38,11 +38,22 @@ done service_ips="${gateway_ip}" service_ip6s="${gateway_ip6}" +INTERFACE=$(ip link | awk '/\/ && !/LOOPBACK/ {sub(/@.*/, "", $2); print $2}') +NETMASK=$(route | awk -v INTERFACE=$INTERFACE '$8 ~ INTERFACE && $1 !~ "default" {print $3}') +DIG_IP_RETRY=10 for service_name in $service_names; do service_fqdn="${service_name}.${service_domain}" - service_ip="$(dig +short ${service_fqdn} A)" + for (( i=1; i<=DIG_IP_RETRY; i++ )); do + service_ip="$(dig +short ${service_fqdn} A)" + if [ -z "${service_ip}" ]; then + printf "service \"${service_fqdn}\" not found in dns, count=$i, waiting ...\n" + sleep 3 + else + break + fi + done # # TODO add a way to determine if a service wasn't built in the Compose, @@ -54,6 +65,7 @@ for service_name in $service_names; do if [ -z "${service_ip}" ]; then # TODO sleep and try again? Up to n times? 
printf "setting ips from dns: service \"${service_fqdn}\" not found in dns, skipping!\n" + continue fi service_ip6="$(dig +short $service_name AAAA)" @@ -65,12 +77,13 @@ for service_name in $service_names; do # not all services have server files printf "setting ips from dns: checking file for dir '${servers_dir}' service '${service_name}'\n" - service_file="$(ls ${servers_dir}/*${service_name}* 2>/dev/null)" + service_file="$(ls ${servers_dir}/*-${service_name}* 2>/dev/null)" printf "setting ips from dns: trying service file '${service_file}'\n" if [ -n "${service_file}" ]; then printf "setting ips from dns: service file '${service_file}' exists, adding IPs\n" cat "${service_file}" | jq '. + {"ipAddress":"'"${service_ip}"'"}' > "${service_file}.tmp" && mv "${service_file}.tmp" "${service_file}" cat "${service_file}" | jq '. + {"ipGateway":"'"${gateway_ip}"'"}' > "${service_file}.tmp" && mv "${service_file}.tmp" "${service_file}" + cat "${service_file}" | jq '. + {"ipNetmask":"'"${NETMASK}"'"}' > "${service_file}.tmp" && mv "${service_file}.tmp" "${service_file}" if [ -n "${service_ip6}" ]; then cat "${service_file}" | jq '. 
+ {"ip6Address":"'"${service_ip6}"'"}' > "${service_file}.tmp" && mv "${service_file}.tmp" "${service_file}" fi diff --git a/infrastructure/cdn-in-a-box/traffic_ops/to-access.sh b/infrastructure/cdn-in-a-box/traffic_ops/to-access.sh index 7e3b2ce343..ad83504bcd 100644 --- a/infrastructure/cdn-in-a-box/traffic_ops/to-access.sh +++ b/infrastructure/cdn-in-a-box/traffic_ops/to-access.sh @@ -269,3 +269,91 @@ function testenrolled() { tmp=$(echo $tmp | jq '.response[]|select(.hostName=="'"$MY_HOSTNAME"'")') echo "$tmp" } + +# Add SSL keys +# args: +# cdn_name +# deliveryservice_name +# hostname +# crt_path +# csr_path +# key_path +to-add-sslkeys() { + demo1_crt="$(sed -n -e '/-----BEGIN CERTIFICATE-----/,$p' $4 | jq -s -R '.')" + demo1_csr="$(sed -n -e '/-----BEGIN CERTIFICATE REQUEST-----/,$p' $5 | jq -s -R '.')" + demo1_key="$(sed -n -e '/-----BEGIN PRIVATE KEY-----/,$p' $6 | jq -s -R '.')" + json_request=$(jq -n \ + --arg cdn "$1" \ + --arg dsname "$2" \ + --arg hostname "$3" \ + --argjson crt "$demo1_crt" \ + --argjson csr "$demo1_csr" \ + --argjson key "$demo1_key" \ + "{ cdn: \$cdn, + certificate: { + crt: \$crt, + csr: \$csr, + key: \$key + }, + deliveryservice: \$dsname, + hostname: \$hostname, + key: \$dsname, + version: 1 + }") + + while true; do + json_response=$(to-post 'api/1.4/deliveryservices/sslkeys/add' "$json_request") + if [[ -n "$json_response" ]] ; then + sleep 3 + cdn_sslkeys_response=$(to-get "api/1.3/cdns/name/$1/sslkeys.json" | jq '.response[] | length') + if ((cdn_sslkeys_response>0)); then + break + else + # The first submission does not always take effect; submit it again
+ sleep 3 + fi + else + sleep 3 + fi + done +} + +# AUTO_SNAPQUEUE +# args: +# expected_servers - should be a comma-delimited list of expected docker service names to be enrolled +# cdn_name +to-auto-snapqueue() { + while true; do + # AUTO_SNAPQUEUE_SERVERS should be a comma-delimited list of expected docker service names to be enrolled - see variables.env + expected_servers_json=$(echo "$1" | tr ',' '\n' | jq -R . | jq -M -c -e -s '.|sort') + expected_servers_list=$(jq -r -n --argjson expected "$expected_servers_json" '$expected|join(",")') + expected_servers_total=$(jq -r -n --argjson expected "$expected_servers_json" '$expected|length') + + current_servers_json=$(to-get 'api/1.4/servers' 2>/dev/null | jq -c -e '[.response[] | .xmppId] | sort') + [ -z "$current_servers_json" ] && current_servers_json='[]' + current_servers_list=$(jq -r -n --argjson current "$current_servers_json" '$current|join(",")') + current_servers_total=$(jq -r -n --argjson current "$current_servers_json" '$current|length') + + remain_servers_json=$(jq -n --argjson expected "$expected_servers_json" --argjson current "$current_servers_json" '$expected-$current') + remain_servers_list=$(jq -r -n --argjson remain "$remain_servers_json" '$remain|join(",")') + remain_servers_total=$(jq -r -n --argjson remain "$remain_servers_json" '$remain|length') + + echo "AUTO-SNAPQUEUE - Expected Servers ($expected_servers_total): $expected_servers_list" + echo "AUTO-SNAPQUEUE - Current Servers ($current_servers_total): $current_servers_list" + echo "AUTO-SNAPQUEUE - Remain Servers ($remain_servers_total): $remain_servers_list" + + if ((remain_servers_total == 0)) ; then + echo "AUTO-SNAPQUEUE - All expected servers enrolled." + sleep $AUTO_SNAPQUEUE_ACTION_WAIT + echo "AUTO-SNAPQUEUE - Do automatic snapshot..." + cdn_id=$(to-get "api/1.3/cdns?name=$2" | jq '.response[0].id') + to-put "api/1.3/cdns/$cdn_id/snapshot" + sleep $AUTO_SNAPQUEUE_ACTION_WAIT + echo "AUTO-SNAPQUEUE - Do queue updates..."
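The remaining-server computation above relies on jq's array subtraction (`$expected-$current`). A minimal sketch of just that step, with hypothetical service names standing in for `AUTO_SNAPQUEUE_SERVERS` and the enrolled list (jq must be installed):

```shell
#!/bin/sh
# Comma-delimited expected names, as AUTO_SNAPQUEUE_SERVERS provides them.
expected_csv='edge,mid,origin'
expected_json=$(echo "$expected_csv" | tr ',' '\n' | jq -R . | jq -M -c -e -s '.|sort')

# Pretend only these servers have enrolled so far (normally read from api/1.4/servers).
current_json='["edge","origin"]'

# jq's `-` on arrays drops every element of the right operand from the left.
remain_json=$(jq -n -c --argjson expected "$expected_json" --argjson current "$current_json" '$expected-$current')
remain_total=$(jq -r -n --argjson remain "$remain_json" '$remain|length')

echo "$remain_json"   # ["mid"]
echo "$remain_total"  # 1
```

When the subtraction yields an empty array the poll loop knows every expected server is enrolled and it is safe to snapshot and queue updates.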
+ to-post "api/1.3/cdns/$cdn_id/queue_update" '{"action":"queue"}' + break + fi + + sleep $AUTO_SNAPQUEUE_POLL_INTERVAL + done +} diff --git a/infrastructure/cdn-in-a-box/traffic_ops_data/servers/010-dns_server.json b/infrastructure/cdn-in-a-box/traffic_ops_data/servers/010-dns_server.json index 9058fd3354..ff24cce4f4 100644 --- a/infrastructure/cdn-in-a-box/traffic_ops_data/servers/010-dns_server.json +++ b/infrastructure/cdn-in-a-box/traffic_ops_data/servers/010-dns_server.json @@ -3,7 +3,6 @@ "domainName": "infra.ciab.test", "cachegroup": "CDN_in_a_Box_Edge", "interfaceName": "eth0", - "ipNetmask": "255.255.255.0", "interfaceMtu": 1500, "type": "BIND", "physLocation": "Apachecon North America 2018", diff --git a/infrastructure/cdn-in-a-box/traffic_ops_data/servers/020-db_server.json b/infrastructure/cdn-in-a-box/traffic_ops_data/servers/020-db_server.json index 05b3502cdf..39fa73dfe0 100644 --- a/infrastructure/cdn-in-a-box/traffic_ops_data/servers/020-db_server.json +++ b/infrastructure/cdn-in-a-box/traffic_ops_data/servers/020-db_server.json @@ -3,7 +3,6 @@ "domainName": "infra.ciab.test", "cachegroup": "CDN_in_a_Box_Edge", "interfaceName": "eth0", - "ipNetmask": "255.255.255.0", "interfaceMtu": 1500, "type": "TRAFFIC_OPS_DB", "physLocation": "Apachecon North America 2018", diff --git a/infrastructure/cdn-in-a-box/traffic_ops_data/servers/030-enroller_server.json b/infrastructure/cdn-in-a-box/traffic_ops_data/servers/030-enroller_server.json index d4618c905c..4070e90566 100644 --- a/infrastructure/cdn-in-a-box/traffic_ops_data/servers/030-enroller_server.json +++ b/infrastructure/cdn-in-a-box/traffic_ops_data/servers/030-enroller_server.json @@ -3,7 +3,6 @@ "domainName": "infra.ciab.test", "cachegroup": "CDN_in_a_Box_Edge", "interfaceName": "eth0", - "ipNetmask": "255.255.255.0", "interfaceMtu": 1500, "type": "ENROLLER", "physLocation": "Apachecon North America 2018", diff --git a/infrastructure/cdn-in-a-box/variables.env 
b/infrastructure/cdn-in-a-box/variables.env index 098788f294..c5140b2edd 100644 --- a/infrastructure/cdn-in-a-box/variables.env +++ b/infrastructure/cdn-in-a-box/variables.env @@ -61,6 +61,11 @@ TM_PORT=80 TM_EMAIL=tmonitor@cdn.example.com TM_PASSWORD=jhdslvhdfsuklvfhsuvlhs TM_USER=tmon +TM_LOG_EVENT=stdout +TM_LOG_ERROR=stdout +TM_LOG_WARNING=stdout +TM_LOG_INFO=stdout +TM_LOG_DEBUG=stdout TO_ADMIN_PASSWORD=twelve TO_ADMIN_USER=admin TO_EMAIL=cdnadmin@example.com @@ -69,6 +74,11 @@ TO_PORT=443 TO_PERL_HOST=trafficops-perl TO_PERL_PORT=443 TO_SECRET=blahblah +TO_LOG_ERROR=stdout +TO_LOG_WARNING=stdout +TO_LOG_INFO=stdout +TO_LOG_DEBUG=stdout +TO_LOG_EVENT=stdout TP_HOST=trafficportal TP_EMAIL=tp@cdn.example.com TR_HOST=trafficrouter diff --git a/lib/go-tc/deliveryservices.go b/lib/go-tc/deliveryservices.go index 2dcf0cf32a..296440f4cc 100644 --- a/lib/go-tc/deliveryservices.go +++ b/lib/go-tc/deliveryservices.go @@ -13,7 +13,7 @@ import ( "github.com/apache/trafficcontrol/lib/go-util" "github.com/asaskevich/govalidator" - "github.com/go-ozzo/ozzo-validation" + validation "github.com/go-ozzo/ozzo-validation" ) /* @@ -43,6 +43,11 @@ type DeliveryServicesResponse struct { Response []DeliveryService `json:"response"` } +// DeliveryServicesNullableResponse ... +type DeliveryServicesNullableResponse struct { + Response []DeliveryServiceNullable `json:"response"` +} + // CreateDeliveryServiceResponse ... type CreateDeliveryServiceResponse struct { Response []DeliveryService `json:"response"` @@ -61,6 +66,12 @@ type UpdateDeliveryServiceResponse struct { Alerts []DeliveryServiceAlert `json:"alerts"` } +// UpdateDeliveryServiceNullableResponse ... +type UpdateDeliveryServiceNullableResponse struct { + Response []DeliveryServiceNullable `json:"response"` + Alerts []DeliveryServiceAlert `json:"alerts"` +} + // DeliveryServiceResponse ... 
type DeliveryServiceResponse struct { Response DeliveryService `json:"response"` @@ -75,6 +86,7 @@ type DeleteDeliveryServiceResponse struct { type DeliveryService struct { DeliveryServiceV13 MaxOriginConnections int `json:"maxOriginConnections" db:"max_origin_connections"` + ConsistentHashRegex string `json:"consistentHashRegex"` ConsistentHashQueryParams []string `json:"consistentHashQueryParams"` } @@ -83,7 +95,7 @@ type DeliveryServiceV13 struct { DeepCachingType DeepCachingType `json:"deepCachingType"` FQPacingRate int `json:"fqPacingRate,omitempty"` SigningAlgorithm string `json:"signingAlgorithm" db:"signing_algorithm"` - Tenant string `json:"tenant,omitempty"` + Tenant string `json:"tenant"` TRRequestHeaders string `json:"trRequestHeaders,omitempty"` TRResponseHeaders string `json:"trResponseHeaders,omitempty"` } @@ -108,7 +120,6 @@ type DeliveryServiceV11 struct { EdgeHeaderRewrite string `json:"edgeHeaderRewrite"` ExampleURLs []string `json:"exampleURLs"` GeoLimit int `json:"geoLimit"` - FQPacingRate int `json:"fqPacingRate"` GeoProvider int `json:"geoProvider"` GlobalMaxMBPS int `json:"globalMaxMbps"` GlobalMaxTPS int `json:"globalMaxTps"` @@ -143,10 +154,12 @@ type DeliveryServiceV11 struct { TypeID int `json:"typeId"` Type DSType `json:"type"` TRResponseHeaders string `json:"trResponseHeaders"` - TenantID int `json:"tenantId,omitempty"` + TenantID int `json:"tenantId"` XMLID string `json:"xmlId"` } +type DeliveryServiceNullableV14 DeliveryServiceNullable // this type alias should always alias the latest minor version of the deliveryservices endpoints + type DeliveryServiceNullable struct { DeliveryServiceNullableV13 ConsistentHashRegex *string `json:"consistentHashRegex"` @@ -157,7 +170,7 @@ type DeliveryServiceNullable struct { type DeliveryServiceNullableV13 struct { DeliveryServiceNullableV12 DeepCachingType *DeepCachingType `json:"deepCachingType" db:"deep_caching_type"` - FQPacingRate *int `json:"fqPacingRate"` + FQPacingRate *int 
`json:"fqPacingRate" db:"fq_pacing_rate"` SigningAlgorithm *string `json:"signingAlgorithm" db:"signing_algorithm"` Tenant *string `json:"tenant"` TRResponseHeaders *string `json:"trResponseHeaders"` @@ -187,7 +200,6 @@ type DeliveryServiceNullableV11 struct { DNSBypassTTL *int `json:"dnsBypassTtl" db:"dns_bypass_ttl"` DSCP *int `json:"dscp" db:"dscp"` EdgeHeaderRewrite *string `json:"edgeHeaderRewrite" db:"edge_header_rewrite"` - FQPacingRate *int `json:"fqPacingRate" db:"fq_pacing_rate"` GeoLimit *int `json:"geoLimit" db:"geo_limit"` GeoLimitCountries *string `json:"geoLimitCountries" db:"geo_limit_countries"` GeoLimitRedirectURL *string `json:"geoLimitRedirectURL" db:"geolimit_redirect_url"` @@ -231,42 +243,6 @@ type DeliveryServiceNullableV11 struct { ExampleURLs []string `json:"exampleURLs"` } -// NewDeliveryServiceNullableFromV12 creates a new V13 DS from a V12 DS, filling new fields with appropriate defaults. -func NewDeliveryServiceNullableFromV12(ds DeliveryServiceNullableV12) DeliveryServiceNullable { - newDSv13 := DeliveryServiceNullableV13{DeliveryServiceNullableV12: ds} - newDS := DeliveryServiceNullable{DeliveryServiceNullableV13: newDSv13} - newDS.Sanitize() - return newDS -} - -// NewDeliveryServiceNullableFromV13 creates a new V14 DS from a V13 DS, filling new fields with appropriate defaults. 
-func NewDeliveryServiceNullableFromV13(ds DeliveryServiceNullableV13) DeliveryServiceNullable { - newDS := DeliveryServiceNullable{DeliveryServiceNullableV13: ds} - newDS.Sanitize() - return newDS -} - -func (ds *DeliveryServiceNullableV12) Sanitize() { - if ds.GeoLimitCountries != nil { - *ds.GeoLimitCountries = strings.ToUpper(strings.Replace(*ds.GeoLimitCountries, " ", "", -1)) - } - if ds.ProfileID != nil && *ds.ProfileID == -1 { - ds.ProfileID = nil - } - if ds.EdgeHeaderRewrite != nil && strings.TrimSpace(*ds.EdgeHeaderRewrite) == "" { - ds.EdgeHeaderRewrite = nil - } - if ds.MidHeaderRewrite != nil && strings.TrimSpace(*ds.MidHeaderRewrite) == "" { - ds.MidHeaderRewrite = nil - } - if ds.RoutingName == nil || *ds.RoutingName == "" { - ds.RoutingName = util.StrPtr(DefaultRoutingName) - } - if ds.AnonymousBlockingEnabled == nil { - ds.AnonymousBlockingEnabled = util.BoolPtr(false) - } -} - func requiredIfMatchesTypeName(patterns []string, typeName string) func(interface{}) error { return func(value interface{}) error { switch v := value.(type) { @@ -302,53 +278,6 @@ func requiredIfMatchesTypeName(patterns []string, typeName string) func(interfac } } -func (ds *DeliveryServiceNullableV12) validateTypeFields(tx *sql.Tx) error { - // Validate the TypeName related fields below - err := error(nil) - DNSRegexType := "^DNS.*$" - HTTPRegexType := "^HTTP.*$" - SteeringRegexType := "^STEERING.*$" - latitudeErr := "Must be a floating point number within the range +-90" - longitudeErr := "Must be a floating point number within the range +-180" - - typeName, err := ValidateTypeID(tx, ds.TypeID, "deliveryservice") - if err != nil { - return err - } - - errs := validation.Errors{ - "initialDispersion": validation.Validate(ds.InitialDispersion, - validation.By(requiredIfMatchesTypeName([]string{HTTPRegexType}, typeName)), - validation.By(tovalidate.IsGreaterThanZero)), - "ipv6RoutingEnabled": validation.Validate(ds.IPV6RoutingEnabled, - 
validation.By(requiredIfMatchesTypeName([]string{SteeringRegexType, DNSRegexType, HTTPRegexType}, typeName))), - "missLat": validation.Validate(ds.MissLat, - validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName)), - validation.Min(-90.0).Error(latitudeErr), - validation.Max(90.0).Error(latitudeErr)), - "missLong": validation.Validate(ds.MissLong, - validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName)), - validation.Min(-180.0).Error(longitudeErr), - validation.Max(180.0).Error(longitudeErr)), - "multiSiteOrigin": validation.Validate(ds.MultiSiteOrigin, - validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName))), - "orgServerFqdn": validation.Validate(ds.OrgServerFQDN, - validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName)), - validation.NewStringRule(validateOrgServerFQDN, "must start with http:// or https:// and be followed by a valid hostname with an optional port (no trailing slash)")), - "protocol": validation.Validate(ds.Protocol, - validation.By(requiredIfMatchesTypeName([]string{SteeringRegexType, DNSRegexType, HTTPRegexType}, typeName))), - "qstringIgnore": validation.Validate(ds.QStringIgnore, - validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName))), - "rangeRequestHandling": validation.Validate(ds.RangeRequestHandling, - validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName))), - } - toErrs := tovalidate.ToErrors(errs) - if len(toErrs) > 0 { - return errors.New(util.JoinErrsStr(toErrs)) - } - return nil -} - func validateOrgServerFQDN(orgServerFQDN string) bool { _, fqdn, port, err := ParseOrgServerFQDN(orgServerFQDN) if err != nil || !govalidator.IsHost(*fqdn) || (port != nil && !govalidator.IsPort(*port)) { @@ -378,36 +307,25 @@ func ParseOrgServerFQDN(orgServerFQDN string) (*string, *string, *string, error) return &protocol, &FQDN, port, nil } 
-func (ds *DeliveryServiceNullableV12) Validate(tx *sql.Tx) error { - ds.Sanitize() - isDNSName := validation.NewStringRule(govalidator.IsDNSName, "must be a valid hostname") - noPeriods := validation.NewStringRule(tovalidate.NoPeriods, "cannot contain periods") - noSpaces := validation.NewStringRule(tovalidate.NoSpaces, "cannot contain spaces") - errs := validation.Errors{ - "active": validation.Validate(ds.Active, validation.NotNil), - "cdnId": validation.Validate(ds.CDNID, validation.Required), - "displayName": validation.Validate(ds.DisplayName, validation.Required, validation.Length(1, 48)), - "dscp": validation.Validate(ds.DSCP, validation.NotNil, validation.Min(0)), - "geoLimit": validation.Validate(ds.GeoLimit, validation.NotNil), - "geoProvider": validation.Validate(ds.GeoProvider, validation.NotNil), - "logsEnabled": validation.Validate(ds.LogsEnabled, validation.NotNil), - "regionalGeoBlocking": validation.Validate(ds.RegionalGeoBlocking, validation.NotNil), - "routingName": validation.Validate(ds.RoutingName, isDNSName, noPeriods, validation.Length(1, 48)), - "typeId": validation.Validate(ds.TypeID, validation.Required, validation.Min(1)), - "xmlId": validation.Validate(ds.XMLID, noSpaces, noPeriods, validation.Length(1, 48)), +func (ds *DeliveryServiceNullable) Sanitize() { + if ds.GeoLimitCountries != nil { + *ds.GeoLimitCountries = strings.ToUpper(strings.Replace(*ds.GeoLimitCountries, " ", "", -1)) } - toErrs := tovalidate.ToErrors(errs) - if err := ds.validateTypeFields(tx); err != nil { - toErrs = append(toErrs, errors.New("type fields: "+err.Error())) + if ds.ProfileID != nil && *ds.ProfileID == -1 { + ds.ProfileID = nil } - if len(toErrs) > 0 { - return util.JoinErrs(toErrs) + if ds.EdgeHeaderRewrite != nil && strings.TrimSpace(*ds.EdgeHeaderRewrite) == "" { + ds.EdgeHeaderRewrite = nil + } + if ds.MidHeaderRewrite != nil && strings.TrimSpace(*ds.MidHeaderRewrite) == "" { + ds.MidHeaderRewrite = nil + } + if ds.RoutingName == nil || 
*ds.RoutingName == "" { + ds.RoutingName = util.StrPtr(DefaultRoutingName) + } + if ds.AnonymousBlockingEnabled == nil { + ds.AnonymousBlockingEnabled = util.BoolPtr(false) } - return nil -} - -func (ds *DeliveryServiceNullable) Sanitize() { - ds.DeliveryServiceNullableV12.Sanitize() signedAlgorithm := SigningAlgorithmURLSig if ds.Signed && (ds.SigningAlgorithm == nil || *ds.SigningAlgorithm == "") { ds.SigningAlgorithm = &signedAlgorithm @@ -428,6 +346,11 @@ func (ds *DeliveryServiceNullable) Sanitize() { func (ds *DeliveryServiceNullable) validateTypeFields(tx *sql.Tx) error { // Validate the TypeName related fields below err := error(nil) + DNSRegexType := "^DNS.*$" + HTTPRegexType := "^HTTP.*$" + SteeringRegexType := "^STEERING.*$" + latitudeErr := "Must be a floating point number within the range +-90" + longitudeErr := "Must be a floating point number within the range +-180" typeName, err := ValidateTypeID(tx, ds.TypeID, "deliveryservice") if err != nil { @@ -443,6 +366,30 @@ func (ds *DeliveryServiceNullable) validateTypeFields(tx *sql.Tx) error { } return fmt.Errorf("consistentHashQueryParams not allowed for '%s' deliveryservice type", typeName) })), + "initialDispersion": validation.Validate(ds.InitialDispersion, + validation.By(requiredIfMatchesTypeName([]string{HTTPRegexType}, typeName)), + validation.By(tovalidate.IsGreaterThanZero)), + "ipv6RoutingEnabled": validation.Validate(ds.IPV6RoutingEnabled, + validation.By(requiredIfMatchesTypeName([]string{SteeringRegexType, DNSRegexType, HTTPRegexType}, typeName))), + "missLat": validation.Validate(ds.MissLat, + validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName)), + validation.Min(-90.0).Error(latitudeErr), + validation.Max(90.0).Error(latitudeErr)), + "missLong": validation.Validate(ds.MissLong, + validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName)), + validation.Min(-180.0).Error(longitudeErr), + 
validation.Max(180.0).Error(longitudeErr)), + "multiSiteOrigin": validation.Validate(ds.MultiSiteOrigin, + validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName))), + "orgServerFqdn": validation.Validate(ds.OrgServerFQDN, + validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName)), + validation.NewStringRule(validateOrgServerFQDN, "must start with http:// or https:// and be followed by a valid hostname with an optional port (no trailing slash)")), + "protocol": validation.Validate(ds.Protocol, + validation.By(requiredIfMatchesTypeName([]string{SteeringRegexType, DNSRegexType, HTTPRegexType}, typeName))), + "qstringIgnore": validation.Validate(ds.QStringIgnore, + validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName))), + "rangeRequestHandling": validation.Validate(ds.RangeRequestHandling, + validation.By(requiredIfMatchesTypeName([]string{DNSRegexType, HTTPRegexType}, typeName))), } toErrs := tovalidate.ToErrors(errs) if len(toErrs) > 0 { @@ -455,19 +402,30 @@ func (ds *DeliveryServiceNullable) Validate(tx *sql.Tx) error { ds.Sanitize() neverOrAlways := validation.NewStringRule(tovalidate.IsOneOfStringICase("NEVER", "ALWAYS"), "must be one of 'NEVER' or 'ALWAYS'") + isDNSName := validation.NewStringRule(govalidator.IsDNSName, "must be a valid hostname") + noPeriods := validation.NewStringRule(tovalidate.NoPeriods, "cannot contain periods") + noSpaces := validation.NewStringRule(tovalidate.NoSpaces, "cannot contain spaces") errs := tovalidate.ToErrors(validation.Errors{ - "deepCachingType": validation.Validate(ds.DeepCachingType, neverOrAlways), + "active": validation.Validate(ds.Active, validation.NotNil), + "cdnId": validation.Validate(ds.CDNID, validation.Required), + "deepCachingType": validation.Validate(ds.DeepCachingType, neverOrAlways), + "displayName": validation.Validate(ds.DisplayName, validation.Required, validation.Length(1, 48)), + "dscp": 
validation.Validate(ds.DSCP, validation.NotNil, validation.Min(0)), + "geoLimit": validation.Validate(ds.GeoLimit, validation.NotNil), + "geoProvider": validation.Validate(ds.GeoProvider, validation.NotNil), + "logsEnabled": validation.Validate(ds.LogsEnabled, validation.NotNil), + "regionalGeoBlocking": validation.Validate(ds.RegionalGeoBlocking, validation.NotNil), + "routingName": validation.Validate(ds.RoutingName, isDNSName, noPeriods, validation.Length(1, 48)), + "typeId": validation.Validate(ds.TypeID, validation.Required, validation.Min(1)), + "xmlId": validation.Validate(ds.XMLID, noSpaces, noPeriods, validation.Length(1, 48)), }) - if v12Err := ds.DeliveryServiceNullableV12.Validate(tx); v12Err != nil { - errs = append(errs, v12Err) - } if err := ds.validateTypeFields(tx); err != nil { errs = append(errs, errors.New("type fields: "+err.Error())) } if len(errs) == 0 { return nil } - return util.JoinErrs(errs) // don't add context, so versions chain well + return util.JoinErrs(errs) } // Value implements the driver.Valuer interface diff --git a/licenses/MIT-ColReorder b/licenses/MIT-ColReorder new file mode 100644 index 0000000000..b5a9fd8604 --- /dev/null +++ b/licenses/MIT-ColReorder @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (C) 2008-2019, SpryMedia Ltd. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/licenses/MIT-datatables b/licenses/MIT-datatables index 675cfd5678..19ecb75be7 100644 --- a/licenses/MIT-datatables +++ b/licenses/MIT-datatables @@ -1,7 +1,7 @@ /** * @summary DataTables * @description Paginate, search and order HTML tables - * @version 1.10.4 + * @version 1.10.19 * @file jquery.dataTables.js * @author SpryMedia Ltd (www.sprymedia.co.uk) * @contact www.sprymedia.co.uk/contact diff --git a/traffic_monitor/srvhttp/srvhttp.go b/traffic_monitor/srvhttp/srvhttp.go index 45b06a746b..f6992f10f7 100644 --- a/traffic_monitor/srvhttp/srvhttp.go +++ b/traffic_monitor/srvhttp/srvhttp.go @@ -25,6 +25,7 @@ import ( "net" "net/http" "net/url" + "strings" "sync" "time" @@ -186,7 +187,16 @@ func (s *Server) RunHTTPSRedirect(addr string, addrForRedirect string, readTimeo } func (s *Server) redirectTLS(w http.ResponseWriter, r *http.Request) { - host, _, _ := net.SplitHostPort(r.Host) + host, _, err := net.SplitHostPort(r.Host) + if err != nil { + if strings.Contains(err.Error(), "missing port in address") { + host = r.Host + } else { + w.WriteHeader(http.StatusInternalServerError) + w.Write([]byte(`{"error": "getting host from request: ` + err.Error() + `"}`)) + return + } + } http.Redirect(w, r, "https://"+host+s.addrToRedirect+r.RequestURI, http.StatusMovedPermanently) } diff --git a/traffic_ops/app/conf/cdn.conf b/traffic_ops/app/conf/cdn.conf index 8852291f7e..fdb0e43194 100644 --- a/traffic_ops/app/conf/cdn.conf +++ b/traffic_ops/app/conf/cdn.conf @@ -31,7 +31,8 @@ 
"backend_max_connections": { "mojolicious": 4 }, - "profiling_enabled": false + "whitelisted_oauth_urls": [], + "profiling_enabled": false }, "cors" : { "access_control_allow_origin" : "*" diff --git a/traffic_ops/app/db/seeds.sql b/traffic_ops/app/db/seeds.sql index 3c53325540..5bdb7d65e9 100644 --- a/traffic_ops/app/db/seeds.sql +++ b/traffic_ops/app/db/seeds.sql @@ -397,6 +397,7 @@ INSERT INTO role_capability (role_id, cap_name) SELECT (SELECT id FROM role WHER -- auth insert into api_capability (http_method, route, capability) values ('POST', 'user/login', 'auth') ON CONFLICT (http_method, route, capability) DO NOTHING; +insert into api_capability (http_method, route, capability) values ('POST', 'user/login/oauth', 'auth') ON CONFLICT (http_method, route, capability) DO NOTHING; insert into api_capability (http_method, route, capability) values ('POST', 'user/login/token', 'auth') ON CONFLICT (http_method, route, capability) DO NOTHING; insert into api_capability (http_method, route, capability) values ('POST', 'user/logout', 'auth') ON CONFLICT (http_method, route, capability) DO NOTHING; insert into api_capability (http_method, route, capability) values ('POST', 'user/reset_password', 'auth') ON CONFLICT (http_method, route, capability) DO NOTHING; diff --git a/traffic_ops/bin/traffic_ops_ort.pl b/traffic_ops/bin/traffic_ops_ort.pl index 47a2a1f2f9..4bcc5df666 100755 --- a/traffic_ops/bin/traffic_ops_ort.pl +++ b/traffic_ops/bin/traffic_ops_ort.pl @@ -504,7 +504,7 @@ sub start_service { } my $running_string = ""; if ( $pkg_name eq "trafficserver" ) { - $running_string = "traffic_manager"; + $running_string = "traffic_manager|traffic_cop"; } else { $running_string = $pkg_name; @@ -599,7 +599,7 @@ sub restart_service { } my $running_string = ""; if ( $pkg_name eq "trafficserver" ) { - $running_string = "traffic_manager"; + $running_string = "traffic_manager|traffic_cop"; } if ( $running_string ne "" ) { if ( $pkg_running =~ m/$running_string \(pid (\d+)\) is 
running.../ ) { @@ -2945,7 +2945,7 @@ sub adv_processing_udev { } ( my @df_lines ) = split( /\n/, `/bin/df` ); foreach my $l (@df_lines) { - if ( $l =~ m/$dev_path/ ) { + if ( $l =~ m/$dev_path\d/ ) { ( $log_level >> $FATAL ) && print "FATAL Device /dev/$dev has an active partition and a file system!!\n"; } } diff --git a/traffic_ops/testing/api/v14/deliveryservicematches_test.go b/traffic_ops/testing/api/v14/deliveryservicematches_test.go index 42c8004809..cc8925662b 100644 --- a/traffic_ops/testing/api/v14/deliveryservicematches_test.go +++ b/traffic_ops/testing/api/v14/deliveryservicematches_test.go @@ -39,7 +39,7 @@ func GetTestDeliveryServiceMatches(t *testing.T) { } for _, ds := range testData.DeliveryServices { - if ds.Type == tc.DSTypeAnyMap { + if ds.Type == tc.DSTypeAnyMap || len(ds.MatchList) == 0 { continue // ANY_MAP DSes don't require matchLists } if _, ok := dsMatchMap[tc.DeliveryServiceName(ds.XMLID)]; !ok { diff --git a/traffic_ops/testing/api/v14/deliveryservices_test.go b/traffic_ops/testing/api/v14/deliveryservices_test.go index 6f73655587..0bfe4d59e4 100644 --- a/traffic_ops/testing/api/v14/deliveryservices_test.go +++ b/traffic_ops/testing/api/v14/deliveryservices_test.go @@ -16,6 +16,13 @@ package v14 */ import ( + "bytes" + "encoding/json" + "fmt" + "io" + "io/ioutil" + "net/http" + "reflect" "strconv" "testing" "time" @@ -30,6 +37,7 @@ func TestDeliveryServices(t *testing.T) { UpdateTestDeliveryServices(t) UpdateNullableTestDeliveryServices(t) GetTestDeliveryServices(t) + DeliveryServiceMinorVersionsTest(t) DeliveryServiceTenancyTest(t) }) } @@ -74,8 +82,8 @@ func GetTestDeliveryServices(t *testing.T) { cnt++ } } - if cnt > 1 { - t.Errorf("exactly 1 deliveryservice should have more than one query param; found %d", cnt) + if cnt > 2 { + t.Errorf("exactly 2 deliveryservices should have more than one query param; found %d", cnt) } } @@ -222,6 +230,289 @@ func DeleteTestDeliveryServices(t *testing.T) { } } +func 
DeliveryServiceMinorVersionsTest(t *testing.T) { + testDS := testData.DeliveryServices[4] + if testDS.XMLID != "ds-test-minor-versions" { + t.Errorf("expected XMLID: ds-test-minor-versions, actual: %s\n", testDS.XMLID) + } + + dses, _, err := TOSession.GetDeliveryServicesNullable() + if err != nil { + t.Errorf("cannot GET DeliveryServices: %v - %v\n", err, dses) + } + ds := tc.DeliveryServiceNullable{} + for _, d := range dses { + if *d.XMLID == testDS.XMLID { + ds = d + break + } + } + // GET latest, verify expected values for 1.3 and 1.4 fields + if ds.DeepCachingType == nil { + t.Errorf("expected DeepCachingType: %s, actual: nil\n", testDS.DeepCachingType.String()) + } else if *ds.DeepCachingType != testDS.DeepCachingType { + t.Errorf("expected DeepCachingType: %s, actual: %s\n", testDS.DeepCachingType.String(), ds.DeepCachingType.String()) + } + if ds.FQPacingRate == nil { + t.Errorf("expected FQPacingRate: %d, actual: nil\n", testDS.FQPacingRate) + } else if *ds.FQPacingRate != testDS.FQPacingRate { + t.Errorf("expected FQPacingRate: %d, actual: %d\n", testDS.FQPacingRate, *ds.FQPacingRate) + } + if ds.SigningAlgorithm == nil { + t.Errorf("expected SigningAlgorithm: %s, actual: nil\n", testDS.SigningAlgorithm) + } else if *ds.SigningAlgorithm != testDS.SigningAlgorithm { + t.Errorf("expected SigningAlgorithm: %s, actual: %s\n", testDS.SigningAlgorithm, *ds.SigningAlgorithm) + } + if ds.Tenant == nil { + t.Errorf("expected Tenant: %s, actual: nil\n", testDS.Tenant) + } else if *ds.Tenant != testDS.Tenant { + t.Errorf("expected Tenant: %s, actual: %s\n", testDS.Tenant, *ds.Tenant) + } + if ds.TRRequestHeaders == nil { + t.Errorf("expected TRRequestHeaders: %s, actual: nil\n", testDS.TRRequestHeaders) + } else if *ds.TRRequestHeaders != testDS.TRRequestHeaders { + t.Errorf("expected TRRequestHeaders: %s, actual: %s\n", testDS.TRRequestHeaders, *ds.TRRequestHeaders) + } + if ds.TRResponseHeaders == nil { + t.Errorf("expected TRResponseHeaders: %s, actual: nil\n", 
testDS.TRResponseHeaders) + } else if *ds.TRResponseHeaders != testDS.TRResponseHeaders { + t.Errorf("expected TRResponseHeaders: %s, actual: %s\n", testDS.TRResponseHeaders, *ds.TRResponseHeaders) + } + if ds.ConsistentHashRegex == nil { + t.Errorf("expected ConsistentHashRegex: %s, actual: nil\n", testDS.ConsistentHashRegex) + } else if *ds.ConsistentHashRegex != testDS.ConsistentHashRegex { + t.Errorf("expected ConsistentHashRegex: %s, actual: %s\n", testDS.ConsistentHashRegex, *ds.ConsistentHashRegex) + } + if ds.ConsistentHashQueryParams == nil { + t.Errorf("expected ConsistentHashQueryParams: %v, actual: nil\n", testDS.ConsistentHashQueryParams) + } else if !reflect.DeepEqual(ds.ConsistentHashQueryParams, testDS.ConsistentHashQueryParams) { + t.Errorf("expected ConsistentHashQueryParams: %v, actual: %v\n", testDS.ConsistentHashQueryParams, ds.ConsistentHashQueryParams) + } + if ds.MaxOriginConnections == nil { + t.Errorf("expected MaxOriginConnections: %d, actual: nil\n", testDS.MaxOriginConnections) + } else if *ds.MaxOriginConnections != testDS.MaxOriginConnections { + t.Errorf("expected MaxOriginConnections: %d, actual: %d\n", testDS.MaxOriginConnections, *ds.MaxOriginConnections) + } + + // GET 1.1, verify 1.3 and 1.4 fields are nil + data := tc.DeliveryServicesNullableResponse{} + if err = makeV11Request(http.MethodGet, "deliveryservices/"+strconv.Itoa(*ds.ID), nil, &data); err != nil { + t.Errorf("cannot GET 1.1 deliveryservice: %s\n", err.Error()) + } + respDS := data.Response[0] + if !dsV13FieldsAreNil(respDS) || !dsV14FieldsAreNil(respDS) { + t.Errorf("expected 1.3 and 1.4 values to be nil, actual: non-nil") + } + + // GET 1.3, verify 1.3 fields are non-nil and 1.4 fields are nil + data = tc.DeliveryServicesNullableResponse{} + if err = makeV13Request(http.MethodGet, "deliveryservices/"+strconv.Itoa(*ds.ID), nil, &data); err != nil { + t.Errorf("cannot GET 1.3 deliveryservice: %s\n", err.Error()) + } + respDS = data.Response[0] + if 
dsV13FieldsAreNil(respDS) { + t.Errorf("expected 1.3 values to be non-nil, actual: nil\n") + } + if !dsV14FieldsAreNil(respDS) { + t.Errorf("expected 1.4 values to be nil, actual: non-nil") + } + if _, err = TOSession.DeleteDeliveryService(strconv.Itoa(*ds.ID)); err != nil { + t.Errorf("cannot DELETE deliveryservice: %s\n", err.Error()) + } + + ds.ID = nil + dsBody, err := json.Marshal(ds) + if err != nil { + t.Errorf("cannot POST deliveryservice, failed to marshal JSON: %s\n", err.Error()) + } + dsV11Body, err := json.Marshal(ds.DeliveryServiceNullableV11) + if err != nil { + t.Errorf("cannot POST deliveryservice, failed to marshal JSON: %s\n", err.Error()) + } + + // POST 1.3 w/ 1.4 data, verify 1.4 fields were ignored + postDSResp := tc.CreateDeliveryServiceNullableResponse{} + if err = makeV13Request(http.MethodPost, "deliveryservices", bytes.NewBuffer(dsBody), &postDSResp); err != nil { + t.Errorf("cannot POST 1.3 deliveryservice, failed to make request: %s\n", err.Error()) + } + if !dsV14FieldsAreNil(postDSResp.Response[0]) { + t.Errorf("POST 1.3 expected 1.4 values to be nil, actual: non-nil") + } + respID := postDSResp.Response[0].ID + getDS, _, err := TOSession.GetDeliveryServiceNullable(strconv.Itoa(*respID)) + if err != nil { + t.Errorf("cannot GET deliveryservice: %s\n", err.Error()) + } + if !dsV14FieldsAreNilOrDefault(*getDS) { + t.Errorf("POST 1.3 expected 1.4 values to be nil/default, actual: non-nil/default") + } + if _, err = TOSession.DeleteDeliveryService(strconv.Itoa(*respID)); err != nil { + t.Errorf("cannot DELETE deliveryservice: %s\n", err.Error()) + } + + // POST 1.1 w/ 1.4 data, verify 1.3 and 1.4 fields were ignored + postDSResp = tc.CreateDeliveryServiceNullableResponse{} + if err = makeV11Request(http.MethodPost, "deliveryservices", bytes.NewBuffer(dsBody), &postDSResp); err != nil { + t.Errorf("cannot POST 1.1 deliveryservice, failed to make request: %s\n", err.Error()) + } + if !dsV13FieldsAreNil(postDSResp.Response[0]) || 
!dsV14FieldsAreNil(postDSResp.Response[0]) { + t.Errorf("POST 1.1 expected 1.3 and 1.4 values to be nil, actual: non-nil %++v\n", postDSResp.Response[0]) + } + respID = postDSResp.Response[0].ID + getDS, _, err = TOSession.GetDeliveryServiceNullable(strconv.Itoa(*respID)) + if err != nil { + t.Errorf("cannot GET deliveryservice: %s\n", err.Error()) + } + if !dsV13FieldsAreNilOrDefault(*getDS) || !dsV14FieldsAreNilOrDefault(*getDS) { + t.Errorf("POST 1.1 expected 1.3 and 1.4 values to be nil/default, actual: non-nil/default %++v\n", *getDS) + } + + // PUT 1.4 w/ 1.4 data, then verify that a PUT 1.1 with 1.1 data preserves the existing 1.3 and 1.4 data + if _, err = TOSession.UpdateDeliveryServiceNullable(strconv.Itoa(*respID), &ds); err != nil { + t.Errorf("cannot PUT deliveryservice: %s\n", err.Error()) + } + putDSResp := tc.UpdateDeliveryServiceNullableResponse{} + if err = makeV11Request(http.MethodPut, "deliveryservices/"+strconv.Itoa(*respID), bytes.NewBuffer(dsV11Body), &putDSResp); err != nil { + t.Errorf("cannot PUT 1.1 deliveryservice, failed to make request: %s\n", err.Error()) + } + if !dsV13FieldsAreNil(putDSResp.Response[0]) || !dsV14FieldsAreNil(putDSResp.Response[0]) { + t.Errorf("PUT 1.1 expected 1.3 and 1.4 values to be nil, actual: non-nil %++v\n", putDSResp.Response[0]) + } + getDS, _, err = TOSession.GetDeliveryServiceNullable(strconv.Itoa(*respID)) + if err != nil { + t.Errorf("cannot GET deliveryservice: %s\n", err.Error()) + } + if getDS.FQPacingRate == nil { + t.Errorf("expected FQPacingRate: %d, actual: nil\n", testDS.FQPacingRate) + } else if *getDS.FQPacingRate != testDS.FQPacingRate { + t.Errorf("expected FQPacingRate: %d, actual: %d\n", testDS.FQPacingRate, *getDS.FQPacingRate) + } + if getDS.MaxOriginConnections == nil { + t.Errorf("expected MaxOriginConnections: %d, actual: nil\n", testDS.MaxOriginConnections) + } else if *getDS.MaxOriginConnections != testDS.MaxOriginConnections { + t.Errorf("expected MaxOriginConnections: %d, actual: 
%d\n", testDS.MaxOriginConnections, *getDS.MaxOriginConnections) + } + + // PUT 1.3 w/ 1.1 data, verify that 1.4 fields were preserved + putDSResp = tc.UpdateDeliveryServiceNullableResponse{} + if err = makeV13Request(http.MethodPut, "deliveryservices/"+strconv.Itoa(*respID), bytes.NewBuffer(dsV11Body), &putDSResp); err != nil { + t.Errorf("cannot PUT 1.3 deliveryservice, failed to make request: %s\n", err.Error()) + } + if !dsV14FieldsAreNil(putDSResp.Response[0]) { + t.Errorf("PUT 1.3 expected 1.4 values to be nil, actual: non-nil %++v\n", putDSResp.Response[0]) + } + getDS, _, err = TOSession.GetDeliveryServiceNullable(strconv.Itoa(*respID)) + if err != nil { + t.Errorf("cannot GET deliveryservice: %s\n", err.Error()) + } + if getDS.MaxOriginConnections == nil { + t.Errorf("expected MaxOriginConnections: %d, actual: nil\n", testDS.MaxOriginConnections) + } else if *getDS.MaxOriginConnections != testDS.MaxOriginConnections { + t.Errorf("expected MaxOriginConnections: %d, actual: %d\n", testDS.MaxOriginConnections, *getDS.MaxOriginConnections) + } + + // DELETE+POST 1.1 again, so that 1.3 and 1.4 fields are back to nil/default + if _, err = TOSession.DeleteDeliveryService(strconv.Itoa(*respID)); err != nil { + t.Errorf("cannot DELETE deliveryservice: %s\n", err.Error()) + } + postDSResp = tc.CreateDeliveryServiceNullableResponse{} + if err = makeV11Request(http.MethodPost, "deliveryservices", bytes.NewBuffer(dsV11Body), &postDSResp); err != nil { + t.Errorf("cannot POST 1.1 deliveryservice, failed to make request: %s\n", err.Error()) + } + respID = postDSResp.Response[0].ID + + // PUT 1.1 w/ 1.4 data - make sure 1.3 and 1.4 fields were ignored + putDSResp = tc.UpdateDeliveryServiceNullableResponse{} + if err = makeV11Request(http.MethodPut, "deliveryservices/"+strconv.Itoa(*respID), bytes.NewBuffer(dsBody), &putDSResp); err != nil { + t.Errorf("cannot PUT 1.1 deliveryservice, failed to make request: %s\n", err.Error()) + } + if 
!dsV13FieldsAreNil(putDSResp.Response[0]) || !dsV14FieldsAreNil(putDSResp.Response[0]) { + t.Errorf("PUT 1.1 expected 1.3 and 1.4 values to be nil, actual: non-nil %++v\n", putDSResp.Response[0]) + } + respID = putDSResp.Response[0].ID + getDS, _, err = TOSession.GetDeliveryServiceNullable(strconv.Itoa(*respID)) + if err != nil { + t.Errorf("cannot GET deliveryservice: %s\n", err.Error()) + } + if !dsV13FieldsAreNilOrDefault(*getDS) || !dsV14FieldsAreNilOrDefault(*getDS) { + t.Errorf("PUT 1.1 expected 1.3 and 1.4 values to be nil/default, actual: non-nil/default %++v\n", *getDS) + } + + // PUT 1.3 w/ 1.4 data, make sure 1.4 fields were ignored + putDSResp = tc.UpdateDeliveryServiceNullableResponse{} + if err = makeV13Request(http.MethodPut, "deliveryservices/"+strconv.Itoa(*respID), bytes.NewBuffer(dsBody), &putDSResp); err != nil { + t.Errorf("cannot PUT 1.3 deliveryservice, failed to make request: %s\n", err.Error()) + } + if !dsV14FieldsAreNil(putDSResp.Response[0]) { + t.Errorf("PUT 1.3 expected 1.4 values to be nil, actual: non-nil\n") + } + respID = putDSResp.Response[0].ID + getDS, _, err = TOSession.GetDeliveryServiceNullable(strconv.Itoa(*respID)) + if err != nil { + t.Errorf("cannot GET deliveryservice: %s\n", err.Error()) + } + if !dsV14FieldsAreNilOrDefault(*getDS) { + t.Errorf("PUT 1.3 expected 1.4 values to be nil/default, actual: non-nil/default\n") + } +} + +func dsV13FieldsAreNilOrDefault(ds tc.DeliveryServiceNullable) bool { + return (ds.DeepCachingType == nil || *ds.DeepCachingType == tc.DeepCachingTypeNever) && + (ds.FQPacingRate == nil || *ds.FQPacingRate == 0) && + (ds.TRRequestHeaders == nil || *ds.TRRequestHeaders == "") && + (ds.TRResponseHeaders == nil || *ds.TRResponseHeaders == "") +} + +func dsV14FieldsAreNilOrDefault(ds tc.DeliveryServiceNullable) bool { + return (ds.ConsistentHashRegex == nil || *ds.ConsistentHashRegex == "") && + (ds.ConsistentHashQueryParams == nil || len(ds.ConsistentHashQueryParams) == 0) && + 
(ds.MaxOriginConnections == nil || *ds.MaxOriginConnections == 0) +} + +func dsV13FieldsAreNil(ds tc.DeliveryServiceNullable) bool { + return ds.DeepCachingType == nil && + ds.FQPacingRate == nil && + ds.SigningAlgorithm == nil && + ds.Tenant == nil && + ds.TRRequestHeaders == nil && + ds.TRResponseHeaders == nil +} + +func dsV14FieldsAreNil(ds tc.DeliveryServiceNullable) bool { + return ds.ConsistentHashRegex == nil && + (ds.ConsistentHashQueryParams == nil || len(ds.ConsistentHashQueryParams) == 0) && + ds.MaxOriginConnections == nil +} + +func makeV11Request(method string, path string, body io.Reader, respStruct interface{}) error { + return makeRequest("1.1", method, path, body, respStruct) +} + +func makeV13Request(method string, path string, body io.Reader, respStruct interface{}) error { + return makeRequest("1.3", method, path, body, respStruct) +} + +// TODO: move this helper function into a better location +func makeRequest(version string, method string, path string, body io.Reader, respStruct interface{}) error { + req, err := http.NewRequest(method, TOSession.URL+"/api/"+version+"/"+path, body) + if err != nil { + return fmt.Errorf("failed to create request: %s", err.Error()) + } + resp, err := TOSession.Client.Do(req) + if err != nil { + return fmt.Errorf("running request: %s", err.Error()) + } + defer resp.Body.Close() + bts, err := ioutil.ReadAll(resp.Body) + if err != nil { + return fmt.Errorf("reading body: %s", err.Error()) + } + if err = json.Unmarshal(bts, respStruct); err != nil { + return fmt.Errorf("unmarshalling body '%s': %s", string(bts), err.Error()) + } + return nil +} + func DeliveryServiceTenancyTest(t *testing.T) { dses, _, err := TOSession.GetDeliveryServicesNullable() if err != nil { diff --git a/traffic_ops/testing/api/v14/tc-fixtures.json b/traffic_ops/testing/api/v14/tc-fixtures.json index b76a8f420a..e0eb4e034b 100644 --- a/traffic_ops/testing/api/v14/tc-fixtures.json +++ b/traffic_ops/testing/api/v14/tc-fixtures.json @@ 
-491,6 +491,65 @@ "type": "ANY_MAP", "xmlId": "anymap-ds", "anonymousBlockingEnabled": true + }, + { + "active": true, + "cacheurl": "cacheUrl1", + "ccrDnsTtl": 3600, + "cdnName": "cdn1", + "checkPath": "", + "consistentHashQueryParams": ["a", "b", "c"], + "consistentHashRegex": "foo", + "deepCachingType": "ALWAYS", + "displayName": "ds-test-minor-versions", + "dnsBypassCname": null, + "dnsBypassIp": "", + "dnsBypassIp6": "", + "dnsBypassTtl": 30, + "dscp": 40, + "edgeHeaderRewrite": "edgeHeader1", + "fqPacingRate": 42, + "geoLimit": 0, + "geoLimitCountries": "", + "geoLimitRedirectURL": null, + "geoProvider": 0, + "globalMaxMbps": 0, + "globalMaxTps": 0, + "httpBypassFqdn": "", + "infoUrl": "TBD", + "initialDispersion": 1, + "ipv6RoutingEnabled": true, + "logsEnabled": false, + "longDesc": "d s 1", + "longDesc1": "ds1", + "longDesc2": "ds1", + "maxDnsAnswers": 0, + "maxOriginConnections": 1000, + "midHeaderRewrite": "midHeader1", + "missLat": 41.881944, + "missLong": -87.627778, + "multiSiteOrigin": false, + "orgServerFqdn": "http://origin-test-minor-version.example.net", + "originShield": null, + "profileDescription": null, + "profileName": null, + "protocol": 2, + "qstringIgnore": 1, + "rangeRequestHandling": 0, + "regexRemap": "rr1", + "regionalGeoBlocking": false, + "remapText": "@plugin=tslua.so @pparam=/opt/trafficserver/etc/trafficserver/remapPlugin1.lua", + "routingName": "cdn", + "signed": true, + "signingAlgorithm": "url_sig", + "sslKeyVersion": 2, + "tenantId": 1, + "tenant": "root", + "trRequestHeaders": "X-Foo", + "trResponseHeaders": "Access-Control-Allow-Origin: *", + "type": "HTTP_LIVE", + "xmlId": "ds-test-minor-versions", + "anonymousBlockingEnabled": true } ], "divisions": [ diff --git a/traffic_ops/testing/api/v14/withobjs.go b/traffic_ops/testing/api/v14/withobjs.go index e39d0d8c52..3211a5bccd 100644 --- a/traffic_ops/testing/api/v14/withobjs.go +++ b/traffic_ops/testing/api/v14/withobjs.go @@ -89,5 +89,5 @@ var withFuncs 
= map[TCObj]TCObjFuncs{ Tenants: {CreateTestTenants, DeleteTestTenants}, Types: {CreateTestTypes, DeleteTestTypes}, Users: {CreateTestUsers, ForceDeleteTestUsers}, - UsersDeliveryServices: {CreateTestUsersDeliveryServices, DeleteTestUsersDeliveryServices}, + UsersDeliveryServices: {CreateTestUsersDeliveryServices, DeleteTestUsersDeliveryServices}, } diff --git a/traffic_ops/traffic_ops_golang/README.md b/traffic_ops/traffic_ops_golang/README.md index a368998127..4bfb952967 100644 --- a/traffic_ops/traffic_ops_golang/README.md +++ b/traffic_ops/traffic_ops_golang/README.md @@ -101,7 +101,6 @@ Most structs do not have versioning. If you are adding a field to a struct with 1. In `lib/go-tc`, rename the old struct to be the previous minor version. - For example, if you are adding a field to Delivery Service and existing minor version is 1.4 (so your new minor version is 1.5), in `lib/go-tc/deliveryservices.go` rename `type DeliveryServiceNullable struct` to `type DeliveryServiceNullableV14 struct`. - - Also rename any `Sanitize` and `Validate` functions to the old object. 2. In `lib/go-tc`, create a new struct with an unversioned name, and anonymously embed the previous struct (that you just renamed), along with your new field. - For example: @@ -112,58 +111,84 @@ type DeliveryServiceNullable struct { } ``` -3. Create a `Sanitize` function on the new struct, e.g. `func (ds *DeliveryServiceNullable) Sanitize()`, which sets your new field to a default value, if it is null. - - It must always be possible to create objects with previous API versions. Therefore, this step is not optional. - - The new `Sanitize` function must call the previous version's `Sanitize` as well, in order to sanitize all previous versions. E.g. +3. In `lib/go-tc`, change the struct's type alias to the new minor version. + - For example: +```go +type DeliveryServiceNullableV15 DeliveryServiceNullable +``` + +4. Update the `Sanitize` function on the unversioned struct, e.g. 
`func (ds *DeliveryServiceNullable) Sanitize()`, which sets your new field to a default value, if it is null. ```go func (ds *DeliveryServiceNullable) Sanitize() { - ds.DeliveryServiceNullableV14.Sanitize() + if ds.MyNewField == nil { ... } ``` -4. Create a `Validate` function, which immediately calls the `Sanitize` function, as well as doing any other validation on your new field. - - `Validate` is used to `Sanitize` by the API frameworks. If a `Validate` function doesn't exist, your new field won't be checked and made valid, and may result in nil panics. Therefore, this step is not optional. +5. Update the `Validate` function on the unversioned struct to add validation for your new field. - For example, if your new field is a port, `Validate` should verify it is between 0 and 65535. - Almost all fields can be invalid! Don't skip this step. Proper validation is essential to Traffic Control functioning properly and rejecting invalid input. - For example: +6. Add new versioned Create and Update handlers for the new version in e.g. `deliveryservice/deliveryservices.go`. The added Create and Update handlers will decode requests into the latest version of the struct and should pass it to an underlying versioned `create` or `update` function: + For example: ```go -func (ds *DeliveryServiceNullableV14) Validate(tx *sql.Tx) error { - ds.Sanitize() +func CreateV15(w http.ResponseWriter, r *http.Request) { + ... 
+ ds := tc.DeliveryServiceNullableV15{} + if err := json.NewDecoder(r.Body).Decode(&ds); err != nil { + api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("decoding: "+err.Error()), nil) + return + } + + res, status, userErr, sysErr := createV15(w, r, inf, ds) + if userErr != nil || sysErr != nil { + api.HandleErr(w, r, inf.Tx.Tx, status, userErr, sysErr) + return + } + api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice creation was successful.", []tc.DeliveryServiceNullableV15{*res}) +} + +func createV15(w http.ResponseWriter, r *http.Request, inf *api.APIInfo, reqDS tc.DeliveryServiceNullableV15) (*tc.DeliveryServiceNullableV15, int, error, error) { + ... +} ``` +NOTE: the underlying `create` and `update` functions are chained together so that requests for previous minor versions are upgraded into requests of the next latest version until they are finally handled at the latest minor version. -5. Create a func to convert the previous version to the new latest struct. For example, `func NewDeliveryServiceNullableFromV14(ds DeliveryServiceNullableV14) DeliveryServiceNullable`. This function will typically do nothing more than create the latest object with the older version, and sanitize new fields. E.g. -```go -func NewDeliveryServiceNullableFromV14(ds DeliveryServiceNullableV14) DeliveryServiceNullable { - newDS := DeliveryServiceNullable{DeliveryServiceNullableV14: ds} - newDS.Sanitize() - return newDS -} +Example call chains: ``` + CreateV12 -> createV12 -> createV13 -> createV14 -> createV15 + CreateV13 -> createV13 -> createV14 -> createV15 + CreateV14 -> createV14 -> createV15 + CreateV15 -> createV15 + ``` -6. In `traffic_ops/traffic_ops_golang`, copy the existing previous version file, e.g. `cp traffic_ops/traffic_ops_golang/deliveryservice/deliveryservicesv1{3,4}.go`. - If the object has no previous version, see `deliveryservice` for an example. 
The "CRUDer" version file should contain only boilerplate, no logic, and no reference to other versions except the latest. Hence, it should be possible to copy and rename, with no logic changes. The logic and latest version should all be in the main file, e.g. `deliveryservice/deliveryservices.go`. +In this example you would rename the existing `createV14` function to `createV15` and update its signature to accept and return a V15 struct. Then you would create a new `createV14` function, in which you would simply create a V15 struct, insert the V14 struct into it, and pass it to the `createV15` function. By doing that, the V14 request would essentially be upgraded into a V15 request for the underlying `createV15` handler to use. -7. In the new version file, rename all instances of the previous version to the new version, e.g. `sed -i 's/v13/v14/' deliveryservicesv14.go`. +For an `updateV14` function, you would follow the same pattern as the create function, but you also have to take into account any existing 1.5 fields that may already exist in the resource. So, you have to read existing 1.5 fields from the DB into your V15 struct before passing it to `updateV15`. That is how an "update" request can be upgraded from a 1.4 request to a 1.5 request. -8. Add the logic for your new field to the latest version file, e.g. `deliveryservice/deliveryservices.go`. +7. Modify the `createV15` and `updateV15` functions (and associated INSERT and UPDATE SQL queries) to create and update the new field in e.g. `deliveryservice/deliveryservices.go`. -9. Add your new version to `traffic_ops/traffic_ops_golang/routing/routes.go`, and add the versioned object to the previous route. +8. Modify the `Read` function (and associated SELECT SQL query) to read structs of the new version. 
For example in `deliveryservice/deliveryservices.go`, you would update the `switch` statement so that `version.Minor >= 5` returns structs of `DeliveryServiceNullable` (the latest version of the struct), and `version.Minor >= 4` returns structs of the embedded `DeliveryServiceNullableV14`. The SELECT SQL query should always be updated to read all of the latest fields, and the `Read` handler should always return the proper versioned struct for the requested API version. + +NOTE: the `Delete` handler should not need any modification when adding a new minor version of an API endpoint. + +9. Add the routes for your new `CreateV15` and `UpdateV15` handlers to `traffic_ops/traffic_ops_golang/routing/routes.go`. - The new latest route must go above the previous version. If the new version is below the old, the new version will never be routed to! For example, Change: ```go -{1.4, http.MethodGet, `deliveryservices/{id}/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryService{}), auth.PrivLevelReadOnly, Authenticated, nil}, + {1.4, http.MethodPost, `deliveryservices/?(\.json)?$`, deliveryservice.CreateV14, auth.PrivLevelOperations, Authenticated, nil}, ``` To: ```go -{1.5, http.MethodGet, `deliveryservices/{id}/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryService{}), auth.PrivLevelReadOnly, Authenticated, nil}, -{1.4, http.MethodGet, `deliveryservices/{id}/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryServiceV14{}), auth.PrivLevelReadOnly, Authenticated, nil}, + {1.5, http.MethodPost, `deliveryservices/?(\.json)?$`, deliveryservice.CreateV15, auth.PrivLevelOperations, Authenticated, nil}, + {1.4, http.MethodPost, `deliveryservices/?(\.json)?$`, deliveryservice.CreateV14, auth.PrivLevelOperations, Authenticated, nil}, ``` +NOTE: the `Read` and `Delete` handlers should always point to the lowest minor version since they are meant to handle requests of any minor version, so the routes for these handlers should not change when adding a new minor 
version. + ## Converting Routes to Traffic Ops Golang Traffic Ops is moving to Go! You can help! diff --git a/traffic_ops/traffic_ops_golang/api/api.go b/traffic_ops/traffic_ops_golang/api/api.go index 03577e28e5..8f21f82684 100644 --- a/traffic_ops/traffic_ops_golang/api/api.go +++ b/traffic_ops/traffic_ops_golang/api/api.go @@ -287,6 +287,7 @@ type APIInfo struct { IntParams map[string]int User *auth.CurrentUser ReqID uint64 + Version *Version Tx *sqlx.Tx Config *config.Config } @@ -333,6 +334,7 @@ func NewInfo(r *http.Request, requiredParams []string, intParamNames []string) ( if err != nil { return &APIInfo{Tx: &sqlx.Tx{}}, errors.New("getting reqID: " + err.Error()), nil, http.StatusInternalServerError } + version := getRequestedAPIVersion(r.URL.Path) user, err := auth.GetCurrentUser(r.Context()) if err != nil { @@ -350,6 +352,7 @@ func NewInfo(r *http.Request, requiredParams []string, intParamNames []string) ( return &APIInfo{ Config: cfg, ReqID: reqID, + Version: version, Params: params, IntParams: intParams, User: user, @@ -379,6 +382,40 @@ func (val APIInfoImpl) APIInfo() *APIInfo { return val.ReqInfo } +type Version struct { + Major uint64 + Minor uint64 +} + +// getRequestedAPIVersion returns a pointer to the requested API Version from the request if it exists or returns nil otherwise. +func getRequestedAPIVersion(path string) *Version { + pathParts := strings.Split(path, "/") + if len(pathParts) < 2 { + return nil // path doesn't start with `/api`, so it's not an api request + } + if strings.ToLower(pathParts[1]) != "api" { + return nil // path doesn't start with `/api`, so it's not an api request + } + if len(pathParts) < 3 { + return nil // path starts with `/api` but not `/api/{version}`, so it's an api request, and an unknown/nonexistent version. 
+ } + version := pathParts[2] + + versionParts := strings.Split(version, ".") + if len(versionParts) != 2 { + return nil + } + majorVersion, err := strconv.ParseUint(versionParts[0], 10, 64) + if err != nil { + return nil + } + minorVersion, err := strconv.ParseUint(versionParts[1], 10, 64) + if err != nil { + return nil + } + return &Version{Major: majorVersion, Minor: minorVersion} +} + // GetDB returns the database from the context. This should very rarely be needed, rather `NewInfo` should always be used to get a transaction, except in extenuating circumstances. func GetDB(ctx context.Context) (*sqlx.DB, error) { val := ctx.Value(DBContextKey) diff --git a/traffic_ops/traffic_ops_golang/config/config.go b/traffic_ops/traffic_ops_golang/config/config.go index ea4e931eea..99d1f003a8 100644 --- a/traffic_ops/traffic_ops_golang/config/config.go +++ b/traffic_ops/traffic_ops_golang/config/config.go @@ -87,6 +87,7 @@ type ConfigTrafficOpsGolang struct { ProfilingEnabled bool `json:"profiling_enabled"` ProfilingLocation string `json:"profiling_location"` RiakPort *uint `json:"riak_port"` + WhitelistedOAuthUrls []string `json:"whitelisted_oauth_urls"` // CRConfigUseRequestHost is whether to use the client request host header in the CRConfig. If false, uses the tm.url parameter. // This defaults to false. Traffic Ops used to always use the host header, setting this true will resume that legacy behavior. 
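
The `getRequestedAPIVersion` helper added to `api.go` above parses the `/api/{major}.{minor}/...` prefix and returns nil for anything else. The following is a minimal, self-contained sketch of that parsing behavior; the `parseAPIVersion` name here is a stand-in for illustration, not the real exported API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Version mirrors the Major/Minor pair added to api.APIInfo in this change.
type Version struct {
	Major uint64
	Minor uint64
}

// parseAPIVersion returns nil for any path not of the form
// /api/{major}.{minor}/..., and a parsed Version otherwise.
func parseAPIVersion(path string) *Version {
	parts := strings.Split(path, "/")
	// parts[0] is "" for a leading slash; parts[1] must be "api",
	// and parts[2] must hold the version.
	if len(parts) < 3 || strings.ToLower(parts[1]) != "api" {
		return nil
	}
	verParts := strings.Split(parts[2], ".")
	if len(verParts) != 2 {
		return nil
	}
	major, err := strconv.ParseUint(verParts[0], 10, 64)
	if err != nil {
		return nil
	}
	minor, err := strconv.ParseUint(verParts[1], 10, 64)
	if err != nil {
		return nil
	}
	return &Version{Major: major, Minor: minor}
}

func main() {
	fmt.Println(parseAPIVersion("/api/1.4/deliveryservices")) // &{1 4}
	fmt.Println(parseAPIVersion("/api/banana/foo"))           // <nil>
	fmt.Println(parseAPIVersion("/ping"))                     // <nil>
}
```

Because the handlers can read `inf.Version` instead of re-parsing the path, the chained `create`/`update` functions described in the README can branch on the requested minor version without extra routing entries.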
diff --git a/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservices.go b/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservices.go index ab5c1ef06d..313cabd741 100644 --- a/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservices.go +++ b/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservices.go @@ -64,12 +64,38 @@ func (ds *TODeliveryService) SetKeys(keys map[string]interface{}) { ds.ID = &i } +func (ds TODeliveryService) GetKeys() (map[string]interface{}, bool) { + if ds.ID == nil { + return map[string]interface{}{"id": 0}, false + } + return map[string]interface{}{"id": *ds.ID}, true +} + +func (ds TODeliveryService) GetKeyFieldsInfo() []api.KeyFieldInfo { + return []api.KeyFieldInfo{{"id", api.GetIntKey}} +} + +func (ds *TODeliveryService) GetAuditName() string { + if ds.XMLID != nil { + return *ds.XMLID + } + return "" +} + +func (ds *TODeliveryService) GetType() string { + return "ds" +} + +// IsTenantAuthorized checks that the user is authorized for both the delivery service's existing tenant, and the new tenant they're changing it to (if different). +func (ds *TODeliveryService) IsTenantAuthorized(user *auth.CurrentUser) (bool, error) { + return isTenantAuthorized(ds.ReqInfo, &ds.DeliveryServiceNullable) +} + func (ds *TODeliveryService) Validate() error { return ds.DeliveryServiceNullable.Validate(ds.APIInfo().Tx.Tx) } -// TODO allow users to post names (type, cdn, etc) and get the IDs from the names. This isn't trivial to do in a single query, without dynamically building the entire insert query, and ideally inserting would be one query. But it'd be much more convenient for users. Alternatively, remove IDs from the database entirely and use real candidate keys. 
-func Create(w http.ResponseWriter, r *http.Request) { +func CreateV12(w http.ResponseWriter, r *http.Request) { inf, userErr, sysErr, errCode := api.NewInfo(r, nil, nil) if userErr != nil || sysErr != nil { api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) @@ -77,37 +103,98 @@ func Create(w http.ResponseWriter, r *http.Request) { } defer inf.Close() - ds := tc.DeliveryServiceNullable{} - if err := api.Parse(r.Body, inf.Tx.Tx, &ds); err != nil { + ds := tc.DeliveryServiceNullableV12{} + if err := json.NewDecoder(r.Body).Decode(&ds); err != nil { api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("decoding: "+err.Error()), nil) return } - if ds.RoutingName == nil || *ds.RoutingName == "" { - ds.RoutingName = util.StrPtr("cdn") + res, status, userErr, sysErr := createV12(w, r, inf, ds) + if userErr != nil || sysErr != nil { + api.HandleErr(w, r, inf.Tx.Tx, status, userErr, sysErr) + return + } + api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice creation was successful.", []tc.DeliveryServiceNullableV12{*res}) +} + +func CreateV13(w http.ResponseWriter, r *http.Request) { + inf, userErr, sysErr, errCode := api.NewInfo(r, nil, nil) + if userErr != nil || sysErr != nil { + api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) + return + } + defer inf.Close() + + ds := tc.DeliveryServiceNullableV13{} + if err := json.NewDecoder(r.Body).Decode(&ds); err != nil { + api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("decoding: "+err.Error()), nil) + return } - if err := ds.Validate(inf.Tx.Tx); err != nil { - api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("invalid request: "+err.Error()), nil) + + res, status, userErr, sysErr := createV13(w, r, inf, ds) + if userErr != nil || sysErr != nil { + api.HandleErr(w, r, inf.Tx.Tx, status, userErr, sysErr) return } - ds, errCode, userErr, sysErr = create(inf, ds) + api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice creation was successful.", 
[]tc.DeliveryServiceNullableV13{*res}) +} + +// TODO allow users to post names (type, cdn, etc) and get the IDs from the names. This isn't trivial to do in a single query, without dynamically building the entire insert query, and ideally inserting would be one query. But it'd be much more convenient for users. Alternatively, remove IDs from the database entirely and use real candidate keys. +func CreateV14(w http.ResponseWriter, r *http.Request) { + inf, userErr, sysErr, errCode := api.NewInfo(r, nil, nil) if userErr != nil || sysErr != nil { api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) return } - api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice creation was successful.", []tc.DeliveryServiceNullable{ds}) + defer inf.Close() + + ds := tc.DeliveryServiceNullableV14{} + if err := json.NewDecoder(r.Body).Decode(&ds); err != nil { + api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("decoding: "+err.Error()), nil) + return + } + + res, status, userErr, sysErr := createV14(w, r, inf, ds) + if userErr != nil || sysErr != nil { + api.HandleErr(w, r, inf.Tx.Tx, status, userErr, sysErr) + return + } + api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice creation was successful.", []tc.DeliveryServiceNullableV14{*res}) +} + +func createV12(w http.ResponseWriter, r *http.Request, inf *api.APIInfo, reqDS tc.DeliveryServiceNullableV12) (*tc.DeliveryServiceNullableV12, int, error, error) { + dsV13 := tc.DeliveryServiceNullableV13{DeliveryServiceNullableV12: reqDS} + res, status, userErr, sysErr := createV13(w, r, inf, dsV13) + if res != nil { + return &res.DeliveryServiceNullableV12, status, userErr, sysErr + } + return nil, status, userErr, sysErr +} + +func createV13(w http.ResponseWriter, r *http.Request, inf *api.APIInfo, reqDS tc.DeliveryServiceNullableV13) (*tc.DeliveryServiceNullableV13, int, error, error) { + dsV14 := tc.DeliveryServiceNullableV14{DeliveryServiceNullableV13: reqDS} + res, status, userErr, sysErr := 
createV14(w, r, inf, dsV14) + if res != nil { + return &res.DeliveryServiceNullableV13, status, userErr, sysErr + } + return nil, status, userErr, sysErr } // create creates the given ds in the database, and returns the DS with its id and other fields created on insert set. On error, the HTTP status code, user error, and system error are returned. The status code SHOULD NOT be used, if both errors are nil. -func create(inf *api.APIInfo, ds tc.DeliveryServiceNullable) (tc.DeliveryServiceNullable, int, error, error) { +func createV14(w http.ResponseWriter, r *http.Request, inf *api.APIInfo, reqDS tc.DeliveryServiceNullableV14) (*tc.DeliveryServiceNullableV14, int, error, error) { + ds := tc.DeliveryServiceNullable(reqDS) user := inf.User tx := inf.Tx.Tx cfg := inf.Config + if err := ds.Validate(tx); err != nil { + return nil, http.StatusBadRequest, errors.New("invalid request: " + err.Error()), nil + } + if authorized, err := isTenantAuthorized(inf, &ds); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("checking tenant: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("checking tenant: " + err.Error()) } else if !authorized { - return tc.DeliveryServiceNullable{}, http.StatusForbidden, errors.New("not authorized on this tenant"), nil + return nil, http.StatusForbidden, errors.New("not authorized on this tenant"), nil } // TODO change DeepCachingType to implement sql.Valuer and sql.Scanner, so sqlx struct scan can be used. 
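
The `createV12` and `createV13` wrappers above follow one pattern throughout: embed the older request struct in the next minor version, delegate to that version's handler, then return the embedded older struct from the result. A minimal sketch of that pattern, using hypothetical `DSV12`/`DSV13` stand-ins rather than the real `tc` types:

```go
package main

import "fmt"

// DSV12 and DSV13 are trimmed-down, hypothetical versions of the request
// structs: each older struct is anonymously embedded in the next minor
// version, so an older request can be "upgraded" in place.
type DSV12 struct{ XMLID string }

type DSV13 struct {
	DSV12
	DeepCachingType string // 1.3-only field; zero value when upgraded from 1.2
}

// createV13 stands in for the newest underlying handler.
func createV13(ds DSV13) (*DSV13, error) {
	return &ds, nil
}

// createV12 upgrades a 1.2 request into a 1.3 request, delegates, and
// strips the result back down to the 1.2 shape for the response.
func createV12(ds DSV12) (*DSV12, error) {
	res, err := createV13(DSV13{DSV12: ds})
	if res != nil {
		return &res.DSV12, err
	}
	return nil, err
}

func main() {
	res, _ := createV12(DSV12{XMLID: "demo-ds"})
	fmt.Println(res.XMLID) // demo-ds
}
```

Each link in the chain is mechanical, which is why only the newest `createV1x` needs real logic; the older handlers shrink to a few lines of wrapping and unwrapping.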
@@ -172,88 +259,126 @@ func create(inf *api.APIInfo, ds tc.DeliveryServiceNullable) (tc.DeliveryService if err != nil { usrErr, sysErr, code := api.ParseDBError(err) - return tc.DeliveryServiceNullable{}, code, usrErr, sysErr + return nil, code, usrErr, sysErr } defer resultRows.Close() id := 0 lastUpdated := tc.TimeNoMod{} if !resultRows.Next() { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("no deliveryservice request inserted, no id was returned") + return nil, http.StatusInternalServerError, nil, errors.New("no deliveryservice request inserted, no id was returned") } if err := resultRows.Scan(&id, &lastUpdated); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("could not scan id from insert: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("could not scan id from insert: " + err.Error()) } if resultRows.Next() { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("too many ids returned from deliveryservice request insert") + return nil, http.StatusInternalServerError, nil, errors.New("too many ids returned from deliveryservice request insert") } ds.ID = &id if ds.ID == nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("missing id after insert") + return nil, http.StatusInternalServerError, nil, errors.New("missing id after insert") } if ds.XMLID == nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("missing xml_id after insert") + return nil, http.StatusInternalServerError, nil, errors.New("missing xml_id after insert") } if ds.TypeID == nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("missing type after insert") + return nil, http.StatusInternalServerError, nil, errors.New("missing type after insert") } dsType, err := getTypeFromID(*ds.TypeID, tx) if err != nil { - return 
tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("getting delivery service type: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("getting delivery service type: " + err.Error()) } ds.Type = &dsType if err := createDefaultRegex(tx, *ds.ID, *ds.XMLID); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("creating default regex: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("creating default regex: " + err.Error()) } if c, err := createConsistentHashQueryParams(tx, *ds.ID, ds.ConsistentHashQueryParams); err != nil { usrErr, sysErr, code := api.ParseDBError(err) - return tc.DeliveryServiceNullable{}, code, usrErr, sysErr + return nil, code, usrErr, sysErr } else { api.CreateChangeLogRawTx(api.ApiChange, fmt.Sprintf("Created %d consistent hash query params for delivery service: %s", c, *ds.XMLID), user, tx) } matchlists, err := GetDeliveryServicesMatchLists([]string{*ds.XMLID}, tx) if err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("creating DS: reading matchlists: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("creating DS: reading matchlists: " + err.Error()) } if matchlist, ok := matchlists[*ds.XMLID]; !ok { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("creating DS: reading matchlists: not found") + return nil, http.StatusInternalServerError, nil, errors.New("creating DS: reading matchlists: not found") } else { ds.MatchList = &matchlist } cdnName, cdnDomain, dnssecEnabled, err := getCDNNameDomainDNSSecEnabled(*ds.ID, tx) if err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("creating DS: getting CDN info: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("creating DS: getting CDN info: " + err.Error()) } ds.ExampleURLs = 
MakeExampleURLs(ds.Protocol, *ds.Type, *ds.RoutingName, *ds.MatchList, cdnDomain) if err := EnsureParams(tx, *ds.ID, *ds.XMLID, ds.EdgeHeaderRewrite, ds.MidHeaderRewrite, ds.RegexRemap, ds.CacheURL, ds.SigningAlgorithm, dsType, ds.MaxOriginConnections); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("ensuring ds parameters:: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("ensuring ds parameters:: " + err.Error()) } if dnssecEnabled { if err := PutDNSSecKeys(tx, cfg, *ds.XMLID, cdnName, ds.ExampleURLs); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("creating DNSSEC keys: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("creating DNSSEC keys: " + err.Error()) } } if err := createPrimaryOrigin(tx, user, ds); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("creating delivery service: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("creating delivery service: " + err.Error()) } ds.LastUpdated = &lastUpdated if err := api.CreateChangeLogRawErr(api.ApiChange, "Created ds: "+*ds.XMLID+" id: "+strconv.Itoa(*ds.ID), user, tx); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("error writing to audit log: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("error writing to audit log: " + err.Error()) + } + + dsLatest := tc.DeliveryServiceNullableV14(ds) + return &dsLatest, http.StatusOK, nil, nil +} + +func createDefaultRegex(tx *sql.Tx, dsID int, xmlID string) error { + regexStr := `.*\.` + xmlID + `\..*` + regexID := 0 + if err := tx.QueryRow(`INSERT INTO regex (type, pattern) VALUES ((select id from type where name = 'HOST_REGEXP'), $1::text) RETURNING id`, regexStr).Scan(&regexID); err != nil { + return errors.New("insert regex: " + err.Error()) + } + 
if _, err := tx.Exec(`INSERT INTO deliveryservice_regex (deliveryservice, regex, set_number) VALUES ($1::bigint, $2::bigint, 0)`, dsID, regexID); err != nil { + return errors.New("executing parameter query to insert location: " + err.Error()) + } + return nil +} + +func createConsistentHashQueryParams(tx *sql.Tx, dsID int, consistentHashQueryParams []string) (int, error) { + if len(consistentHashQueryParams) == 0 { + return 0, nil + } + c := 0 + q := `INSERT INTO deliveryservice_consistent_hash_query_param (name, deliveryservice_id) VALUES ($1, $2)` + for _, k := range consistentHashQueryParams { + if _, err := tx.Exec(q, k, dsID); err != nil { + return c, err + } + c++ } - return ds, http.StatusOK, nil, nil + + return c, nil } func (ds *TODeliveryService) Read() ([]interface{}, error, error, int) { + version := ds.APIInfo().Version + if version == nil { + return nil, nil, errors.New("TODeliveryService.Read called with nil API version"), http.StatusInternalServerError + } + if version.Major != 1 || version.Minor < 1 { + return nil, nil, fmt.Errorf("TODeliveryService.Read called with invalid API version: %d.%d", version.Major, version.Minor), http.StatusInternalServerError + } + returnable := []interface{}{} dses, errs, _ := readGetDeliveryServices(ds.APIInfo().Params, ds.APIInfo().Tx, ds.APIInfo().User) if len(errs) > 0 { @@ -266,12 +391,22 @@ func (ds *TODeliveryService) Read() ([]interface{}, error, error, int) { } for _, ds := range dses { - returnable = append(returnable, ds) + switch { + // NOTE: it's required to handle minor version cases in a descending >= manner + case version.Minor >= 4: + returnable = append(returnable, ds) + case version.Minor >= 3: + returnable = append(returnable, ds.DeliveryServiceNullableV13) + case version.Minor >= 1: + returnable = append(returnable, ds.DeliveryServiceNullableV12) + default: + return nil, nil, fmt.Errorf("TODeliveryService.Read called with invalid API version: %d.%d", version.Major, version.Minor), 
http.StatusInternalServerError + } } return returnable, nil, nil, http.StatusOK } -func Update(w http.ResponseWriter, r *http.Request) { +func UpdateV12(w http.ResponseWriter, r *http.Request) { inf, userErr, sysErr, errCode := api.NewInfo(r, nil, []string{"id"}) if userErr != nil || sysErr != nil { api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) return } @@ -281,78 +416,169 @@ func Update(w http.ResponseWriter, r *http.Request) { id := inf.IntParams["id"] - ds := tc.DeliveryServiceNullable{} + ds := tc.DeliveryServiceNullableV12{} if err := json.NewDecoder(r.Body).Decode(&ds); err != nil { api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("malformed JSON: "+err.Error()), nil) return } ds.ID = &id - if err := ds.Validate(inf.Tx.Tx); err != nil { - api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("invalid request: "+err.Error()), nil) + res, status, userErr, sysErr := updateV12(w, r, inf, &ds) + if userErr != nil || sysErr != nil { + api.HandleErr(w, r, inf.Tx.Tx, status, userErr, sysErr) return } + api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice update was successful.", []tc.DeliveryServiceNullableV12{*res}) +} - ds, errCode, userErr, sysErr = update(inf, &ds) +func UpdateV13(w http.ResponseWriter, r *http.Request) { + inf, userErr, sysErr, errCode := api.NewInfo(r, nil, []string{"id"}) if userErr != nil || sysErr != nil { api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) return } - api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice update was successful.", []tc.DeliveryServiceNullable{ds}) -} + defer inf.Close() -func createDefaultRegex(tx *sql.Tx, dsID int, xmlID string) error { - regexStr := `.*\.` + xmlID + `\..*` - regexID := 0 - if err := tx.QueryRow(`INSERT INTO regex (type, pattern) VALUES ((select id from type where name = 'HOST_REGEXP'), $1::text) RETURNING id`, regexStr).Scan(&regexID); err != nil { - return errors.New("insert regex: " + err.Error()) + id := inf.IntParams["id"] + + ds := 
tc.DeliveryServiceNullableV13{} + if err := json.NewDecoder(r.Body).Decode(&ds); err != nil { + api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("malformed JSON: "+err.Error()), nil) + return } - if _, err := tx.Exec(`INSERT INTO deliveryservice_regex (deliveryservice, regex, set_number) VALUES ($1::bigint, $2::bigint, 0)`, dsID, regexID); err != nil { - return errors.New("executing parameter query to insert location: " + err.Error()) + ds.ID = &id + + res, status, userErr, sysErr := updateV13(w, r, inf, &ds) + if userErr != nil || sysErr != nil { + api.HandleErr(w, r, inf.Tx.Tx, status, userErr, sysErr) + return } - return nil + api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice update was successful.", []tc.DeliveryServiceNullableV13{*res}) } -func createConsistentHashQueryParams(tx *sql.Tx, dsID int, consistentHashQueryParams []string) (int, error) { - if len(consistentHashQueryParams) == 0 { - return 0, nil +func UpdateV14(w http.ResponseWriter, r *http.Request) { + inf, userErr, sysErr, errCode := api.NewInfo(r, nil, []string{"id"}) + if userErr != nil || sysErr != nil { + api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) + return } - c := 0 - q := `INSERT INTO deliveryservice_consistent_hash_query_param (name, deliveryservice_id) VALUES ($1, $2)` - for _, k := range consistentHashQueryParams { - if _, err := tx.Exec(q, k, dsID); err != nil { - return c, err + defer inf.Close() + + id := inf.IntParams["id"] + + ds := tc.DeliveryServiceNullableV14{} + if err := json.NewDecoder(r.Body).Decode(&ds); err != nil { + api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("malformed JSON: "+err.Error()), nil) + return + } + ds.ID = &id + + res, status, userErr, sysErr := updateV14(w, r, inf, &ds) + if userErr != nil || sysErr != nil { + api.HandleErr(w, r, inf.Tx.Tx, status, userErr, sysErr) + return + } + api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice update was successful.", 
[]tc.DeliveryServiceNullableV14{*res}) +} + +func updateV12(w http.ResponseWriter, r *http.Request, inf *api.APIInfo, reqDS *tc.DeliveryServiceNullableV12) (*tc.DeliveryServiceNullableV12, int, error, error) { + dsV13 := tc.DeliveryServiceNullableV13{DeliveryServiceNullableV12: *reqDS} + // query the DB for existing 1.3 fields in order to "upgrade" this 1.2 request into a 1.3 request + query := ` +SELECT + ds.deep_caching_type, + ds.fq_pacing_rate, + ds.signing_algorithm, + ds.tr_response_headers, + ds.tr_request_headers +FROM + deliveryservice ds +WHERE + ds.id = $1` + if err := inf.Tx.Tx.QueryRow(query, *reqDS.ID).Scan( + &dsV13.DeepCachingType, + &dsV13.FQPacingRate, + &dsV13.SigningAlgorithm, + &dsV13.TRResponseHeaders, + &dsV13.TRRequestHeaders, + ); err != nil { + if err == sql.ErrNoRows { + return nil, http.StatusNotFound, fmt.Errorf("delivery service ID %d not found", *dsV13.ID), nil } - c++ + return nil, http.StatusInternalServerError, nil, fmt.Errorf("querying delivery service ID %d: %s", *dsV13.ID, err.Error()) + } + if dsV13.DeepCachingType != nil { + *dsV13.DeepCachingType = tc.DeepCachingTypeFromString(string(*dsV13.DeepCachingType)) } - return c, nil + res, status, userErr, sysErr := updateV13(w, r, inf, &dsV13) + if res != nil { + return &res.DeliveryServiceNullableV12, status, userErr, sysErr + } + return nil, status, userErr, sysErr } -func update(inf *api.APIInfo, ds *tc.DeliveryServiceNullable) (tc.DeliveryServiceNullable, int, error, error) { +func updateV13(w http.ResponseWriter, r *http.Request, inf *api.APIInfo, reqDS *tc.DeliveryServiceNullableV13) (*tc.DeliveryServiceNullableV13, int, error, error) { + dsV14 := tc.DeliveryServiceNullableV14{DeliveryServiceNullableV13: *reqDS} + // query the DB for existing 1.4 fields in order to "upgrade" this 1.3 request into a 1.4 request + query := ` +SELECT + ds.consistent_hash_regex, + ds.max_origin_connections, + (SELECT ARRAY_AGG(name ORDER BY name) + FROM deliveryservice_consistent_hash_query_param 
+ WHERE deliveryservice_id = ds.id) AS query_keys +FROM + deliveryservice ds +WHERE + ds.id = $1` + if err := inf.Tx.Tx.QueryRow(query, *reqDS.ID).Scan( + &dsV14.ConsistentHashRegex, + &dsV14.MaxOriginConnections, + pq.Array(&dsV14.ConsistentHashQueryParams), + ); err != nil { + if err == sql.ErrNoRows { + return nil, http.StatusNotFound, fmt.Errorf("delivery service ID %d not found", *dsV14.ID), nil + } + return nil, http.StatusInternalServerError, nil, fmt.Errorf("querying delivery service ID %d: %s", *dsV14.ID, err.Error()) + } + res, status, userErr, sysErr := updateV14(w, r, inf, &dsV14) + if res != nil { + return &res.DeliveryServiceNullableV13, status, userErr, sysErr + } + return nil, status, userErr, sysErr +} + +func updateV14(w http.ResponseWriter, r *http.Request, inf *api.APIInfo, reqDS *tc.DeliveryServiceNullableV14) (*tc.DeliveryServiceNullableV14, int, error, error) { + converted := tc.DeliveryServiceNullable(*reqDS) + ds := &converted tx := inf.Tx.Tx cfg := inf.Config user := inf.User + if err := ds.Validate(tx); err != nil { + return nil, http.StatusBadRequest, errors.New("invalid request: " + err.Error()), nil + } + if authorized, err := isTenantAuthorized(inf, ds); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("checking tenant: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("checking tenant: " + err.Error()) } else if !authorized { - return tc.DeliveryServiceNullable{}, http.StatusForbidden, errors.New("not authorized on this tenant"), nil + return nil, http.StatusForbidden, errors.New("not authorized on this tenant"), nil } if ds.XMLID == nil { - return tc.DeliveryServiceNullable{}, http.StatusBadRequest, errors.New("missing xml_id"), nil + return nil, http.StatusBadRequest, errors.New("missing xml_id"), nil } if ds.ID == nil { - return tc.DeliveryServiceNullable{}, http.StatusBadRequest, errors.New("missing id"), nil + return nil, http.StatusBadRequest, 
errors.New("missing id"), nil } dsType, ok, err := getDSType(tx, *ds.XMLID) if !ok { - return tc.DeliveryServiceNullable{}, http.StatusNotFound, errors.New("delivery service '" + *ds.XMLID + "' not found"), nil + return nil, http.StatusNotFound, errors.New("delivery service '" + *ds.XMLID + "' not found"), nil } if err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("getting delivery service type during update: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("getting delivery service type during update: " + err.Error()) } // oldHostName will be used to determine if SSL Keys need updating - this will be empty if the DS doesn't have SSL keys, because DS types without SSL keys may not have regexes, and thus will fail to get a host name. @@ -360,7 +586,7 @@ func update(inf *api.APIInfo, ds *tc.DeliveryServiceNullable) (tc.DeliveryServic if dsType.HasSSLKeys() { oldHostName, err = getOldHostName(*ds.ID, tx) if err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("getting existing delivery service hostname: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("getting existing delivery service hostname: " + err.Error()) } } @@ -427,53 +653,53 @@ func update(inf *api.APIInfo, ds *tc.DeliveryServiceNullable) (tc.DeliveryServic if err != nil { usrErr, sysErr, code := api.ParseDBError(err) - return tc.DeliveryServiceNullable{}, code, usrErr, sysErr + return nil, code, usrErr, sysErr } defer resultRows.Close() if !resultRows.Next() { - return tc.DeliveryServiceNullable{}, http.StatusNotFound, errors.New("no delivery service found with this id"), nil + return nil, http.StatusNotFound, errors.New("no delivery service found with this id"), nil } lastUpdated := tc.TimeNoMod{} if err := resultRows.Scan(&lastUpdated); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("scan updating 
delivery service: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("scan updating delivery service: " + err.Error()) } if resultRows.Next() { xmlID := "" if ds.XMLID != nil { xmlID = *ds.XMLID } - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("updating delivery service " + xmlID + ": " + "this update affected too many rows: > 1") + return nil, http.StatusInternalServerError, nil, errors.New("updating delivery service " + xmlID + ": " + "this update affected too many rows: > 1") } if ds.ID == nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("missing id after update") + return nil, http.StatusInternalServerError, nil, errors.New("missing id after update") } if ds.XMLID == nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("missing xml_id after update") + return nil, http.StatusInternalServerError, nil, errors.New("missing xml_id after update") } if ds.TypeID == nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("missing type after update") + return nil, http.StatusInternalServerError, nil, errors.New("missing type after update") } if ds.RoutingName == nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("missing routing name after update") + return nil, http.StatusInternalServerError, nil, errors.New("missing routing name after update") } newDSType, err := getTypeFromID(*ds.TypeID, tx) if err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("getting delivery service type after update: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("getting delivery service type after update: " + err.Error()) } ds.Type = &newDSType cdnDomain, err := getCDNDomain(*ds.ID, tx) // need to get the domain again, in case it changed. 
if err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("getting CDN domain after update: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("getting CDN domain after update: " + err.Error()) } matchLists, err := GetDeliveryServicesMatchLists([]string{*ds.XMLID}, tx) if err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("getting matchlists after update: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("getting matchlists after update: " + err.Error()) } if ml, ok := matchLists[*ds.XMLID]; !ok { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("no matchlists after update") + return nil, http.StatusInternalServerError, nil, errors.New("no matchlists after update") } else { ds.MatchList = &ml } @@ -483,22 +709,22 @@ func update(inf *api.APIInfo, ds *tc.DeliveryServiceNullable) (tc.DeliveryServic if dsType.HasSSLKeys() { newHostName, err = getHostName(ds.Protocol, *ds.Type, *ds.RoutingName, *ds.MatchList, cdnDomain) if err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("getting hostname after update: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("getting hostname after update: " + err.Error()) } } if newDSType.HasSSLKeys() && oldHostName != newHostName { if err := updateSSLKeys(ds, newHostName, tx, cfg); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("updating delivery service " + *ds.XMLID + ": updating SSL keys: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("updating delivery service " + *ds.XMLID + ": updating SSL keys: " + err.Error()) } } if err := EnsureParams(tx, *ds.ID, *ds.XMLID, ds.EdgeHeaderRewrite, ds.MidHeaderRewrite, ds.RegexRemap, ds.CacheURL, ds.SigningAlgorithm, newDSType, ds.MaxOriginConnections); 
err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("ensuring ds parameters:: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("ensuring ds parameters:: " + err.Error()) } if err := updatePrimaryOrigin(tx, user, *ds); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, errors.New("updating delivery service: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("updating delivery service: " + err.Error()) } ds.LastUpdated = &lastUpdated @@ -506,22 +732,69 @@ func update(inf *api.APIInfo, ds *tc.DeliveryServiceNullable) (tc.DeliveryServic // the update may change or delete the query params -- delete existing and re-add if any provided q := `DELETE FROM deliveryservice_consistent_hash_query_param WHERE deliveryservice_id = $1` if res, err := tx.Exec(q, *ds.ID); err != nil { - return tc.DeliveryServiceNullable{}, http.StatusInternalServerError, nil, fmt.Errorf("deleting consistent hash query params for ds %s: %s", *ds.XMLID, err.Error()) + return nil, http.StatusInternalServerError, nil, fmt.Errorf("deleting consistent hash query params for ds %s: %s", *ds.XMLID, err.Error()) } else if c, _ := res.RowsAffected(); c > 0 { api.CreateChangeLogRawTx(api.ApiChange, fmt.Sprintf("Deleted %d consistent hash query params for delivery service: %s", c, *ds.XMLID), user, tx) } if c, err := createConsistentHashQueryParams(tx, *ds.ID, ds.ConsistentHashQueryParams); err != nil { usrErr, sysErr, code := api.ParseDBError(err) - return tc.DeliveryServiceNullable{}, code, usrErr, sysErr + return nil, code, usrErr, sysErr } else { api.CreateChangeLogRawTx(api.ApiChange, fmt.Sprintf("Created %d consistent hash query params for delivery service: %s", c, *ds.XMLID), user, tx) } if err := api.CreateChangeLogRawErr(api.ApiChange, "Updated ds: "+*ds.XMLID+" id: "+strconv.Itoa(*ds.ID), user, tx); err != nil { - return tc.DeliveryServiceNullable{}, 
http.StatusInternalServerError, nil, errors.New("writing change log entry: " + err.Error()) + return nil, http.StatusInternalServerError, nil, errors.New("writing change log entry: " + err.Error()) } - return *ds, http.StatusOK, nil, nil + dsLatest := tc.DeliveryServiceNullableV14(*ds) + return &dsLatest, http.StatusOK, nil, nil +} + +//Delete is the DeliveryService implementation of the Deleter interface. +func (ds *TODeliveryService) Delete() (error, error, int) { + if ds.ID == nil { + return errors.New("missing id"), nil, http.StatusBadRequest + } + + xmlID, ok, err := GetXMLID(ds.ReqInfo.Tx.Tx, *ds.ID) + if err != nil { + return nil, errors.New("ds delete: getting xmlid: " + err.Error()), http.StatusInternalServerError + } else if !ok { + return errors.New("delivery service not found"), nil, http.StatusNotFound + } + ds.XMLID = &xmlID + + // Note ds regexes MUST be deleted before the ds, because there's a ON DELETE CASCADE on deliveryservice_regex (but not on regex). + // Likewise, it MUST happen in a transaction with the later DS delete, so they aren't deleted if the DS delete fails. 
+ if _, err := ds.ReqInfo.Tx.Tx.Exec(`DELETE FROM regex WHERE id IN (SELECT regex FROM deliveryservice_regex WHERE deliveryservice=$1)`, *ds.ID); err != nil { + return nil, errors.New("TODeliveryService.Delete deleting regexes for delivery service: " + err.Error()), http.StatusInternalServerError + } + + if _, err := ds.ReqInfo.Tx.Tx.Exec(`DELETE FROM deliveryservice_regex WHERE deliveryservice=$1`, *ds.ID); err != nil { + return nil, errors.New("TODeliveryService.Delete deleting delivery service regexes: " + err.Error()), http.StatusInternalServerError + } + + userErr, sysErr, errCode := api.GenericDelete(ds) + if userErr != nil || sysErr != nil { + return userErr, sysErr, errCode + } + + paramConfigFilePrefixes := []string{"hdr_rw_", "hdr_rw_mid_", "regex_remap_", "cacheurl_"} + configFiles := []string{} + for _, prefix := range paramConfigFilePrefixes { + configFiles = append(configFiles, prefix+*ds.XMLID+".config") + } + + if _, err := ds.ReqInfo.Tx.Tx.Exec(`DELETE FROM parameter WHERE name = 'location' AND config_file = ANY($1)`, pq.Array(configFiles)); err != nil { + return nil, errors.New("TODeliveryService.Delete deleting delivery service parameteres: " + err.Error()), http.StatusInternalServerError + } + + return nil, nil, http.StatusOK +} + +func (v *TODeliveryService) DeleteQuery() string { + return `DELETE FROM deliveryservice WHERE id = :id` } func readGetDeliveryServices(params map[string]string, tx *sqlx.Tx, user *auth.CurrentUser) ([]tc.DeliveryServiceNullable, []error, tc.ApiErrorType) { @@ -1138,7 +1411,7 @@ func getTenantID(tx *sql.Tx, ds *tc.DeliveryServiceNullable) (*int, error) { existingID, _, err := getDSTenantIDByID(tx, *ds.ID) // ignore exists return - if the DS is new, we only need to check the user input tenant return existingID, err } - existingID, _, err := getDSTenantIDByName(tx, *ds.XMLID) // ignore exists return - if the DS is new, we only need to check the user input tenant + existingID, _, err := getDSTenantIDByName(tx, 
tc.DeliveryServiceName(*ds.XMLID)) // ignore exists return - if the DS is new, we only need to check the user input tenant return existingID, err } @@ -1186,32 +1459,8 @@ func getDSTenantIDByID(tx *sql.Tx, id int) (*int, bool, error) { return tenantID, true, nil } -// GetDSTenantIDByIDTx returns the tenant ID, whether the delivery service exists, and any error. -func GetDSTenantIDByIDTx(tx *sql.Tx, id int) (*int, bool, error) { - tenantID := (*int)(nil) - if err := tx.QueryRow(`SELECT tenant_id FROM deliveryservice where id = $1`, id).Scan(&tenantID); err != nil { - if err == sql.ErrNoRows { - return nil, false, nil - } - return nil, false, fmt.Errorf("querying tenant ID for delivery service ID '%v': %v", id, err) - } - return tenantID, true, nil -} - // getDSTenantIDByName returns the tenant ID, whether the delivery service exists, and any error. -func getDSTenantIDByName(tx *sql.Tx, name string) (*int, bool, error) { - tenantID := (*int)(nil) - if err := tx.QueryRow(`SELECT tenant_id FROM deliveryservice where xml_id = $1`, name).Scan(&tenantID); err != nil { - if err == sql.ErrNoRows { - return nil, false, nil - } - return nil, false, fmt.Errorf("querying tenant ID for delivery service name '%v': %v", name, err) - } - return tenantID, true, nil -} - -// GetDSTenantIDByNameTx returns the tenant ID, whether the delivery service exists, and any error. 
-func GetDSTenantIDByNameTx(tx *sql.Tx, ds tc.DeliveryServiceName) (*int, bool, error) { +func getDSTenantIDByName(tx *sql.Tx, ds tc.DeliveryServiceName) (*int, bool, error) { tenantID := (*int)(nil) if err := tx.QueryRow(`SELECT tenant_id FROM deliveryservice where xml_id = $1`, ds).Scan(&tenantID); err != nil { if err == sql.ErrNoRows { diff --git a/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservicesv12.go b/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservicesv12.go deleted file mode 100644 index 68d9c21097..0000000000 --- a/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservicesv12.go +++ /dev/null @@ -1,191 +0,0 @@ -package deliveryservice - -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -import ( - "encoding/json" - "errors" - "net/http" - - "github.com/apache/trafficcontrol/lib/go-tc" - "github.com/apache/trafficcontrol/lib/go-util" - "github.com/apache/trafficcontrol/traffic_ops/traffic_ops_golang/api" - "github.com/apache/trafficcontrol/traffic_ops/traffic_ops_golang/auth" - - "github.com/lib/pq" -) - -type TODeliveryServiceV12 struct { - api.APIInfoImpl - tc.DeliveryServiceNullableV12 -} - -func (ds TODeliveryServiceV12) MarshalJSON() ([]byte, error) { - return json.Marshal(ds.DeliveryServiceNullableV12) -} - -func (ds *TODeliveryServiceV12) UnmarshalJSON(data []byte) error { - return json.Unmarshal(data, ds.DeliveryServiceNullableV12) -} - -func (v *TODeliveryServiceV12) DeleteQuery() string { - return `DELETE FROM deliveryservice WHERE id = :id` -} - -func (ds TODeliveryServiceV12) GetKeyFieldsInfo() []api.KeyFieldInfo { - return []api.KeyFieldInfo{{"id", api.GetIntKey}} -} - -func (ds TODeliveryServiceV12) GetKeys() (map[string]interface{}, bool) { - if ds.ID == nil { - return map[string]interface{}{"id": 0}, false - } - return map[string]interface{}{"id": *ds.ID}, true -} - -func (ds *TODeliveryServiceV12) SetKeys(keys map[string]interface{}) { - i, _ := keys["id"].(int) //this utilizes the non panicking type assertion, if the thrown away ok variable is false i will be the zero of the type, 0 here. - ds.ID = &i -} - -func (ds *TODeliveryServiceV12) GetAuditName() string { - if ds.XMLID != nil { - return *ds.XMLID - } - return "" -} - -func (ds *TODeliveryServiceV12) GetType() string { - return "ds" -} - -// IsTenantAuthorized checks that the user is authorized for both the delivery service's existing tenant, and the new tenant they're changing it to (if different). 
-func (ds *TODeliveryServiceV12) IsTenantAuthorized(user *auth.CurrentUser) (bool, error) { - tcDS := tc.NewDeliveryServiceNullableFromV12(ds.DeliveryServiceNullableV12) - return isTenantAuthorized(ds.ReqInfo, &tcDS) -} - -func (ds *TODeliveryServiceV12) Validate() error { - return ds.DeliveryServiceNullableV12.Validate(ds.ReqInfo.Tx.Tx) -} - -func CreateV12(w http.ResponseWriter, r *http.Request) { - inf, userErr, sysErr, errCode := api.NewInfo(r, nil, nil) - if userErr != nil || sysErr != nil { - api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) - return - } - defer inf.Close() - ds := tc.DeliveryServiceNullableV12{} - if err := api.Parse(r.Body, inf.Tx.Tx, &ds); err != nil { - api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("decoding: "+err.Error()), nil) - return - } - tcDS := tc.NewDeliveryServiceNullableFromV12(ds) - tcDS, errCode, userErr, sysErr = create(inf, tcDS) - if userErr != nil || sysErr != nil { - api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) - return - } - api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice creation was successful.", []tc.DeliveryServiceNullableV12{tcDS.DeliveryServiceNullableV12}) -} - -func (ds *TODeliveryServiceV12) Read() ([]interface{}, error, error, int) { - returnable := []interface{}{} - dses, errs, _ := readGetDeliveryServices(ds.APIInfo().Params, ds.APIInfo().Tx, ds.APIInfo().User) - if len(errs) > 0 { - for _, err := range errs { - if err.Error() == `id cannot parse to integer` { - return nil, errors.New("Resource not found."), nil, http.StatusNotFound //matches perl response - } - } - return nil, nil, errors.New("reading ds v12: " + util.JoinErrsStr(errs)), http.StatusInternalServerError - } - - for _, ds := range dses { - returnable = append(returnable, ds.DeliveryServiceNullableV12) - } - return returnable, nil, nil, http.StatusOK -} - -func UpdateV12(w http.ResponseWriter, r *http.Request) { - inf, userErr, sysErr, errCode := api.NewInfo(r, []string{"id"}, []string{"id"}) 
- if userErr != nil || sysErr != nil { - api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) - return - } - defer inf.Close() - - ds := tc.DeliveryServiceNullableV12{} - ds.ID = util.IntPtr(inf.IntParams["id"]) - if err := api.Parse(r.Body, inf.Tx.Tx, &ds); err != nil { - api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("decoding: "+err.Error()), nil) - return - } - tcDS := tc.NewDeliveryServiceNullableFromV12(ds) - tcDS, errCode, userErr, sysErr = update(inf, &tcDS) - if userErr != nil || sysErr != nil { - api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) - return - } - api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice update was successful.", []tc.DeliveryServiceNullableV12{tcDS.DeliveryServiceNullableV12}) -} - -//Delete is the DeliveryService implementation of the Deleter interface. -func (ds *TODeliveryServiceV12) Delete() (error, error, int) { - if ds.ID == nil { - return errors.New("missing id"), nil, http.StatusBadRequest - } - - xmlID, ok, err := GetXMLID(ds.ReqInfo.Tx.Tx, *ds.ID) - if err != nil { - return nil, errors.New("dsv12 delete: getting xmlid: " + err.Error()), http.StatusInternalServerError - } else if !ok { - return errors.New("delivery service not found"), nil, http.StatusNotFound - } - ds.XMLID = &xmlID - - // Note ds regexes MUST be deleted before the ds, because there's a ON DELETE CASCADE on deliveryservice_regex (but not on regex). - // Likewise, it MUST happen in a transaction with the later DS delete, so they aren't deleted if the DS delete fails. 
- if _, err := ds.ReqInfo.Tx.Tx.Exec(`DELETE FROM regex WHERE id IN (SELECT regex FROM deliveryservice_regex WHERE deliveryservice=$1)`, *ds.ID); err != nil { - return nil, errors.New("TODeliveryServiceV12.Delete deleting regexes for delivery service: " + err.Error()), http.StatusInternalServerError - } - - if _, err := ds.ReqInfo.Tx.Tx.Exec(`DELETE FROM deliveryservice_regex WHERE deliveryservice=$1`, *ds.ID); err != nil { - return nil, errors.New("TODeliveryServiceV12.Delete deleting delivery service regexes: " + err.Error()), http.StatusInternalServerError - } - - userErr, sysErr, errCode := api.GenericDelete(ds) - if userErr != nil || sysErr != nil { - return userErr, sysErr, errCode - } - - paramConfigFilePrefixes := []string{"hdr_rw_", "hdr_rw_mid_", "regex_remap_", "cacheurl_"} - configFiles := []string{} - for _, prefix := range paramConfigFilePrefixes { - configFiles = append(configFiles, prefix+*ds.XMLID+".config") - } - - if _, err := ds.ReqInfo.Tx.Tx.Exec(`DELETE FROM parameter WHERE name = 'location' AND config_file = ANY($1)`, pq.Array(configFiles)); err != nil { - return nil, errors.New("TODeliveryServiceV12.Delete deleting delivery service parameteres: " + err.Error()), http.StatusInternalServerError - } - - return nil, nil, http.StatusOK -} diff --git a/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservicesv13.go b/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservicesv13.go deleted file mode 100644 index 6497b60b5e..0000000000 --- a/traffic_ops/traffic_ops_golang/deliveryservice/deliveryservicesv13.go +++ /dev/null @@ -1,135 +0,0 @@ -package deliveryservice - -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -import ( - "encoding/json" - "errors" - "net/http" - - "github.com/apache/trafficcontrol/lib/go-tc" - "github.com/apache/trafficcontrol/lib/go-util" - "github.com/apache/trafficcontrol/traffic_ops/traffic_ops_golang/api" -) - -//we need a type alias to define functions on - -type TODeliveryServiceV13 struct { - api.APIInfoImpl - tc.DeliveryServiceNullableV13 -} - -func (ds TODeliveryServiceV13) MarshalJSON() ([]byte, error) { - return json.Marshal(ds.DeliveryServiceNullableV13) -} - -func (ds *TODeliveryServiceV13) UnmarshalJSON(data []byte) error { - return json.Unmarshal(data, ds.DeliveryServiceNullableV13) -} - -func (ds *TODeliveryServiceV13) APIInfo() *api.APIInfo { return ds.ReqInfo } - -func (ds *TODeliveryServiceV13) SetKeys(keys map[string]interface{}) { - i, _ := keys["id"].(int) //this utilizes the non panicking type assertion, if the thrown away ok variable is false i will be the zero of the type, 0 here. - ds.ID = &i -} - -func (ds *TODeliveryServiceV13) Validate() error { - return ds.DeliveryServiceNullableV13.Validate(ds.APIInfo().Tx.Tx) -} - -// TODO allow users to post names (type, cdn, etc) and get the IDs from the names. This isn't trivial to do in a single query, without dynamically building the entire insert query, and ideally inserting would be one query. But it'd be much more convenient for users. 
Alternatively, remove IDs from the database entirely and use real candidate keys. -func CreateV13(w http.ResponseWriter, r *http.Request) { - inf, userErr, sysErr, errCode := api.NewInfo(r, nil, nil) - if userErr != nil || sysErr != nil { - api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) - return - } - defer inf.Close() - - ds := tc.DeliveryServiceNullableV13{} - if err := api.Parse(r.Body, inf.Tx.Tx, &ds); err != nil { - api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("decoding: "+err.Error()), nil) - return - } - - if ds.RoutingName == nil || *ds.RoutingName == "" { - ds.RoutingName = util.StrPtr("cdn") - } - if err := ds.Validate(inf.Tx.Tx); err != nil { - api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("invalid request: "+err.Error()), nil) - return - } - tcDS := tc.NewDeliveryServiceNullableFromV13(ds) - tcDS, errCode, userErr, sysErr = create(inf, tcDS) - if userErr != nil || sysErr != nil { - api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) - return - } - api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice creation was successful.", []tc.DeliveryServiceNullableV13{tcDS.DeliveryServiceNullableV13}) -} - -func (ds *TODeliveryServiceV13) Read() ([]interface{}, error, error, int) { - returnable := []interface{}{} - dses, errs, _ := readGetDeliveryServices(ds.APIInfo().Params, ds.APIInfo().Tx, ds.APIInfo().User) - if len(errs) > 0 { - for _, err := range errs { - if err.Error() == `id cannot parse to integer` { // TODO create const for string - return nil, errors.New("Resource not found."), nil, http.StatusNotFound //matches perl response - } - } - return nil, nil, errors.New("reading dses: " + util.JoinErrsStr(errs)), http.StatusInternalServerError - } - - for _, ds := range dses { - returnable = append(returnable, ds.DeliveryServiceNullableV13) - } - return returnable, nil, nil, http.StatusOK -} - -func UpdateV13(w http.ResponseWriter, r *http.Request) { - inf, userErr, sysErr, errCode := 
api.NewInfo(r, nil, []string{"id"}) - if userErr != nil || sysErr != nil { - api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) - return - } - defer inf.Close() - - id := inf.IntParams["id"] - - ds := tc.DeliveryServiceNullable{} - if err := json.NewDecoder(r.Body).Decode(&ds); err != nil { - api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("malformed JSON: "+err.Error()), nil) - return - } - ds.ID = &id - - if err := ds.Validate(inf.Tx.Tx); err != nil { - api.HandleErr(w, r, inf.Tx.Tx, http.StatusBadRequest, errors.New("invalid request: "+err.Error()), nil) - return - } - - ds, errCode, userErr, sysErr = update(inf, &ds) - if userErr != nil || sysErr != nil { - api.HandleErr(w, r, inf.Tx.Tx, errCode, userErr, sysErr) - return - } - api.WriteRespAlertObj(w, r, tc.SuccessLevel, "Deliveryservice update was successful.", []tc.DeliveryServiceNullable{ds}) -} diff --git a/traffic_ops/traffic_ops_golang/deliveryservice/eligible.go b/traffic_ops/traffic_ops_golang/deliveryservice/eligible.go index dab511ad2d..acd3922a46 100644 --- a/traffic_ops/traffic_ops_golang/deliveryservice/eligible.go +++ b/traffic_ops/traffic_ops_golang/deliveryservice/eligible.go @@ -38,7 +38,7 @@ func GetServersEligible(w http.ResponseWriter, r *http.Request) { } defer inf.Close() - dsTenantID, ok, err := GetDSTenantIDByIDTx(inf.Tx.Tx, inf.IntParams["id"]) + dsTenantID, ok, err := getDSTenantIDByID(inf.Tx.Tx, inf.IntParams["id"]) if err != nil { api.HandleErr(w, r, inf.Tx.Tx, http.StatusInternalServerError, nil, errors.New("checking tenant: "+err.Error())) return diff --git a/traffic_ops/traffic_ops_golang/deliveryservice/urlkey.go b/traffic_ops/traffic_ops_golang/deliveryservice/urlkey.go index 26facf947d..fcdd1c8191 100644 --- a/traffic_ops/traffic_ops_golang/deliveryservice/urlkey.go +++ b/traffic_ops/traffic_ops_golang/deliveryservice/urlkey.go @@ -57,7 +57,7 @@ func GetURLKeysByID(w http.ResponseWriter, r *http.Request) { return } - dsTenantID, ok, err := 
GetDSTenantIDByIDTx(inf.Tx.Tx, inf.IntParams["id"]) + dsTenantID, ok, err := getDSTenantIDByID(inf.Tx.Tx, inf.IntParams["id"]) if err != nil { api.HandleErr(w, r, inf.Tx.Tx, http.StatusInternalServerError, nil, errors.New("checking tenant: "+err.Error())) return @@ -101,7 +101,7 @@ func GetURLKeysByName(w http.ResponseWriter, r *http.Request) { ds := tc.DeliveryServiceName(inf.Params["name"]) - dsTenantID, ok, err := GetDSTenantIDByNameTx(inf.Tx.Tx, ds) + dsTenantID, ok, err := getDSTenantIDByName(inf.Tx.Tx, ds) if err != nil { api.HandleErr(w, r, inf.Tx.Tx, http.StatusInternalServerError, nil, errors.New("checking tenant: "+err.Error())) return @@ -146,7 +146,7 @@ func CopyURLKeys(w http.ResponseWriter, r *http.Request) { ds := tc.DeliveryServiceName(inf.Params["name"]) copyDS := tc.DeliveryServiceName(inf.Params["copy-name"]) - dsTenantID, ok, err := GetDSTenantIDByNameTx(inf.Tx.Tx, ds) + dsTenantID, ok, err := getDSTenantIDByName(inf.Tx.Tx, ds) if err != nil { api.HandleErr(w, r, inf.Tx.Tx, http.StatusInternalServerError, nil, errors.New("checking tenant: "+err.Error())) return @@ -164,7 +164,7 @@ func CopyURLKeys(w http.ResponseWriter, r *http.Request) { } { - copyDSTenantID, ok, err := GetDSTenantIDByNameTx(inf.Tx.Tx, copyDS) + copyDSTenantID, ok, err := getDSTenantIDByName(inf.Tx.Tx, copyDS) if err != nil { api.HandleErr(w, r, inf.Tx.Tx, http.StatusInternalServerError, nil, errors.New("checking tenant: "+err.Error())) return @@ -214,7 +214,7 @@ func GenerateURLKeys(w http.ResponseWriter, r *http.Request) { ds := tc.DeliveryServiceName(inf.Params["name"]) - dsTenantID, ok, err := GetDSTenantIDByNameTx(inf.Tx.Tx, ds) + dsTenantID, ok, err := getDSTenantIDByName(inf.Tx.Tx, ds) if err != nil { api.HandleErr(w, r, inf.Tx.Tx, http.StatusInternalServerError, nil, errors.New("checking tenant: "+err.Error())) return diff --git a/traffic_ops/traffic_ops_golang/login/login.go b/traffic_ops/traffic_ops_golang/login/login.go index 1a59722da7..5cd33d21ac 100644 --- 
a/traffic_ops/traffic_ops_golang/login/login.go +++ b/traffic_ops/traffic_ops_golang/login/login.go @@ -20,9 +20,15 @@ package login */ import ( + "bytes" "encoding/json" + "errors" "fmt" + "github.com/dgrijalva/jwt-go" + "github.com/lestrrat-go/jwx/jwk" "net/http" + "net/url" + "path/filepath" "time" "github.com/apache/trafficcontrol/lib/go-log" @@ -104,3 +110,186 @@ func LoginHandler(db *sqlx.DB, cfg config.Config) http.HandlerFunc { fmt.Fprintf(w, "%s", respBts) } } + +// OauthLoginHandler accepts a JSON web token previously obtained from an OAuth provider, decodes it, validates it, authorizes the user against the database, and returns the login result as either an error or a success message. +func OauthLoginHandler(db *sqlx.DB, cfg config.Config) http.HandlerFunc { + return func(w http.ResponseWriter, r *http.Request) { + handleErrs := tc.GetHandleErrorsFunc(w, r) + defer r.Body.Close() + authenticated := false + resp := struct { + tc.Alerts + }{} + + form := auth.PasswordForm{} + parameters := struct { + AuthCodeTokenUrl string `json:"authCodeTokenUrl"` + Code string `json:"code"` + ClientId string `json:"clientId"` + ClientSecret string `json:"clientSecret"` + RedirectUri string `json:"redirectUri"` + }{} + + if err := json.NewDecoder(r.Body).Decode(&parameters); err != nil { + handleErrs(http.StatusBadRequest, err) + return + } + + data := url.Values{} + data.Add("code", parameters.Code) + data.Add("client_id", parameters.ClientId) + data.Add("client_secret", parameters.ClientSecret) + data.Add("grant_type", "authorization_code") // Required by RFC6749 section 4.1.3 + data.Add("redirect_uri", parameters.RedirectUri) + + req, err := http.NewRequest(http.MethodPost, parameters.AuthCodeTokenUrl, bytes.NewBufferString(data.Encode())) + if err != nil { + log.Errorf("constructing token request to oauth provider: %s", err.Error()) + return + } + req.Header.Set("Content-Type", "application/x-www-form-urlencoded") + + client := http.Client{ + Timeout: 30 * time.Second,
} + response, err := client.Do(req) + if err != nil { + log.Errorf("making token request to oauth provider: %s", err.Error()) + return + } + defer response.Body.Close() + + buf := new(bytes.Buffer) + buf.ReadFrom(response.Body) + encodedToken := "" + + var result map[string]interface{} + if err := json.Unmarshal(buf.Bytes(), &result); err != nil { + log.Warnf("Error parsing JSON response from OAuth: %s", err.Error()) + encodedToken = buf.String() + } else if _, ok := result["access_token"]; !ok { + sysErr := fmt.Errorf("Missing access token in response: %s", buf.String()) + usrErr := errors.New("Bad response from OAuth2.0 provider") + api.HandleErr(w, r, nil, http.StatusBadGateway, usrErr, sysErr) + return + } else { + switch t := result["access_token"].(type) { + case string: + encodedToken = t + default: + sysErr := fmt.Errorf("Incorrect type of access_token! Expected 'string', got '%T'", t) + usrErr := errors.New("Bad response from OAuth2.0 provider") + api.HandleErr(w, r, nil, http.StatusBadGateway, usrErr, sysErr) + return + } + } + + if encodedToken == "" { + log.Errorf("Token not found in request but is required") + handleErrs(http.StatusBadRequest, errors.New("Token not found in request but is required")) + return + } + + decodedToken, err := jwt.Parse(encodedToken, func(unverifiedToken *jwt.Token) (interface{}, error) { + publicKeyUrl := unverifiedToken.Header["jku"].(string) + publicKeyId := unverifiedToken.Header["kid"].(string) + + matched, err := VerifyUrlOnWhiteList(publicKeyUrl, cfg.ConfigTrafficOpsGolang.WhitelistedOAuthUrls) + if err != nil { + return nil, err + } + if !matched { + return nil, errors.New("Key URL from token is not included in the whitelisted urls. 
Received: " + publicKeyUrl) + } + + keys, err := jwk.FetchHTTP(publicKeyUrl) + if err != nil { + return nil, errors.New("Error fetching JSON key set with message: " + err.Error()) + } + + keyById := keys.LookupKeyID(publicKeyId) + if len(keyById) == 0 { + return nil, errors.New("No public key found for id: " + publicKeyId + " at url: " + publicKeyUrl) + } + + selectedKey, err := keyById[0].Materialize() + if err != nil { + return nil, errors.New("Error materializing key from JSON key set with message: " + err.Error()) + } + + return selectedKey, nil + }) + if err != nil { + handleErrs(http.StatusInternalServerError, errors.New("Error decoding token with message: "+err.Error())) + log.Errorf("Error decoding token: %s\n", err.Error()) + return + } + + authenticated = decodedToken.Valid + + userId := decodedToken.Claims.(jwt.MapClaims)["sub"].(string) + form.Username = userId + + userAllowed, err, blockingErr := auth.CheckLocalUserIsAllowed(form, db, time.Duration(cfg.DBQueryTimeoutSeconds)*time.Second) + if blockingErr != nil { + api.HandleErr(w, r, nil, http.StatusServiceUnavailable, nil, fmt.Errorf("error checking local user: %s", blockingErr.Error())) + return + } + if err != nil { + log.Errorf("checking local user: %s\n", err.Error()) + } + + if userAllowed && authenticated { + expiry := time.Now().Add(time.Hour * 6) + cookie := tocookie.New(userId, expiry, cfg.Secrets[0]) + httpCookie := http.Cookie{Name: "mojolicious", Value: cookie, Path: "/", Expires: expiry, HttpOnly: true} + http.SetCookie(w, &httpCookie) + resp = struct { + tc.Alerts + }{tc.CreateAlerts(tc.SuccessLevel, "Successfully logged in.")} + } else { + resp = struct { + tc.Alerts + }{tc.CreateAlerts(tc.ErrorLevel, "Invalid username or password.")} + } + + respBts, err := json.Marshal(resp) + if err != nil { + handleErrs(http.StatusInternalServerError, err) + return + } + w.Header().Set(tc.ContentType, tc.ApplicationJson) + if !authenticated { + w.WriteHeader(http.StatusUnauthorized) + }
+ if authenticated && !userAllowed { + w.WriteHeader(http.StatusForbidden) + } + fmt.Fprintf(w, "%s", respBts) + + } +} + +func VerifyUrlOnWhiteList(urlString string, whiteListedUrls []string) (bool, error) { + urlParsed, err := url.Parse(urlString) + if err != nil { + return false, err + } + + for _, listing := range whiteListedUrls { + if listing == "" { + continue + } + + matched, err := filepath.Match(listing, urlParsed.Hostname()) + if err != nil { + return false, err + } + + if matched { + return true, nil + } + } + return false, nil +} diff --git a/traffic_ops/traffic_ops_golang/login/login_test.go b/traffic_ops/traffic_ops_golang/login/login_test.go new file mode 100644 index 0000000000..cd522a8f8c --- /dev/null +++ b/traffic_ops/traffic_ops_golang/login/login_test.go @@ -0,0 +1,52 @@ +package login + +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +import "testing" + +func TestVerifyUrlOnWhiteList(t *testing.T) { + type TestResult struct { + Whitelist []string + ExpectedResult bool + } + + testCases := []TestResult{ + {Whitelist: []string{}, ExpectedResult: false}, + {Whitelist: []string{""}, ExpectedResult: false}, + {Whitelist: []string{"*"}, ExpectedResult: true}, + {Whitelist: []string{"test.wrong"}, ExpectedResult: false}, + {Whitelist: []string{"test.right.com"}, ExpectedResult: true}, + {Whitelist: []string{"*.right.com"}, ExpectedResult: true}, + {Whitelist: []string{"test.wrong", "test.right.com"}, ExpectedResult: true}, + {Whitelist: []string{"test.wrong", "*.right.*"}, ExpectedResult: true}, + {Whitelist: []string{"test.wrong", "*right*"}, ExpectedResult: true}, + {Whitelist: []string{"test.wrong", "*right"}, ExpectedResult: false}, + } + + url := "https://test.right.com/other/parts" + + for _, result := range testCases { + if matched, _ := VerifyUrlOnWhiteList(url, result.Whitelist); matched != result.ExpectedResult { + t.Errorf("for whitelist: %v, expected: %v, actual: %v", result.Whitelist, result.ExpectedResult, matched) + } + } +} diff --git 
a/traffic_ops/traffic_ops_golang/riaksvc/riak_services.go b/traffic_ops/traffic_ops_golang/riaksvc/riak_services.go index e0b5bede57..daafc3bf49 100644 --- a/traffic_ops/traffic_ops_golang/riaksvc/riak_services.go +++ b/traffic_ops/traffic_ops_golang/riaksvc/riak_services.go @@ -321,10 +321,15 @@ func GetPooledCluster(tx *sql.Tx, authOptions *riak.AuthOptions, riakPort *uint) newcluster, err := GetRiakCluster(newservers, authOptions) if err == nil { if err := newcluster.Start(); err == nil { - log.Infoln("New cluster started") + log.Infof("New riak cluster started: %p\n", newcluster) if sharedCluster != nil { - runtime.SetFinalizer(sharedCluster, sharedCluster.Stop()) + runtime.SetFinalizer(sharedCluster, func(c *riak.Cluster) { + log.Infof("running finalizer for riak sharedcluster (%p)\n", c) + if err := c.Stop(); err != nil { + log.Errorf("in finalizer for riak sharedcluster (%p): stopping cluster: %s\n", c, err.Error()) + } + }) } sharedCluster = newcluster diff --git a/traffic_ops/traffic_ops_golang/routing/routes.go b/traffic_ops/traffic_ops_golang/routing/routes.go index f8fb469187..f85dfcb1f5 100644 --- a/traffic_ops/traffic_ops_golang/routing/routes.go +++ b/traffic_ops/traffic_ops_golang/routing/routes.go @@ -162,9 +162,9 @@ func Routes(d ServerData) ([]Route, []RawRoute, http.Handler, error) { {1.1, http.MethodGet, `users/{id}/deliveryservices/?(\.json)?$`, user.GetDSes, auth.PrivLevelReadOnly, Authenticated, nil}, {1.1, http.MethodGet, `user/{id}/deliveryservices/available/?(\.json)?$`, user.GetAvailableDSes, auth.PrivLevelReadOnly, Authenticated, nil}, {1.1, http.MethodPost, `user/login/?$`, login.LoginHandler(d.DB, d.Config), 0, NoAuth, nil}, + {1.4, http.MethodPost, `user/login/oauth/?$`, login.OauthLoginHandler(d.DB, d.Config), 0, NoAuth, nil}, //User: CRUD - //Incrementing version for users because change to Nullable struct. 
{1.1, http.MethodGet, `users/?(\.json)?$`, api.ReadHandler(&user.TOUser{}), auth.PrivLevelReadOnly, Authenticated, nil}, {1.1, http.MethodGet, `users/{id}$`, api.ReadHandler(&user.TOUser{}), auth.PrivLevelReadOnly, Authenticated, nil}, {1.1, http.MethodPut, `users/{id}$`, api.UpdateHandler(&user.TOUser{}), auth.PrivLevelOperations, Authenticated, nil}, @@ -219,7 +219,6 @@ func Routes(d ServerData) ([]Route, []RawRoute, http.Handler, error) { {1.1, http.MethodGet, `servers/{id}/deliveryservices$`, api.ReadHandler(&dsserver.TODSSDeliveryService{}), auth.PrivLevelReadOnly, Authenticated, nil}, {1.1, http.MethodGet, `deliveryservices/{id}/servers$`, dsserver.GetReadAssigned, auth.PrivLevelReadOnly, Authenticated, nil}, {1.1, http.MethodGet, `deliveryservices/{id}/unassigned_servers$`, dsserver.GetReadUnassigned, auth.PrivLevelReadOnly, Authenticated, nil}, - //{1.1, http.MethodGet, `deliveryservices/{id}/servers/eligible$`, dsserver.GetReadHandler(d.Tx, tc.Eligible),auth.PrivLevelReadOnly, Authenticated, nil}, {1.1, http.MethodGet, `deliveryservice_matches/?(\.json)?$`, deliveryservice.GetMatches, auth.PrivLevelReadOnly, Authenticated, nil}, @@ -229,8 +228,8 @@ func Routes(d ServerData) ([]Route, []RawRoute, http.Handler, error) { {1.1, http.MethodGet, `servers/totals$`, handlerToFunc(proxyHandler), 0, NoAuth, []Middleware{}}, //Server Details - {1.2, http.MethodGet, `servers/details/?(\.json)?$`, server.GetDetailParamHandler, auth.PrivLevelReadOnly, Authenticated, nil}, - {1.2, http.MethodGet, `servers/hostname/{hostName}/details/?(\.json)?$`, server.GetDetailHandler, auth.PrivLevelReadOnly, Authenticated, nil}, + {1.1, http.MethodGet, `servers/details/?(\.json)?$`, server.GetDetailParamHandler, auth.PrivLevelReadOnly, Authenticated, nil}, + {1.1, http.MethodGet, `servers/hostname/{hostName}/details/?(\.json)?$`, server.GetDetailHandler, auth.PrivLevelReadOnly, Authenticated, nil}, //Server: CRUD {1.1, http.MethodGet, `servers/?(\.json)?$`, 
api.ReadHandler(&server.TOServer{}), auth.PrivLevelReadOnly, Authenticated, nil}, @@ -266,13 +265,6 @@ func Routes(d ServerData) ([]Route, []RawRoute, http.Handler, error) { {1.3, http.MethodPost, `coordinates/?$`, api.CreateHandler(&coordinate.TOCoordinate{}), auth.PrivLevelOperations, Authenticated, nil}, {1.3, http.MethodDelete, `coordinates/?$`, api.DeleteHandler(&coordinate.TOCoordinate{}), auth.PrivLevelOperations, Authenticated, nil}, - //Servers - // explicitly passed to legacy system until fully implemented. Auth handled by legacy system. - {1.2, http.MethodGet, `servers/checks$`, handlerToFunc(proxyHandler), 0, NoAuth, []Middleware{}}, - {1.2, http.MethodGet, `servers/details$`, handlerToFunc(proxyHandler), 0, NoAuth, []Middleware{}}, - {1.2, http.MethodGet, `servers/status$`, handlerToFunc(proxyHandler), 0, NoAuth, []Middleware{}}, - {1.2, http.MethodGet, `servers/totals$`, handlerToFunc(proxyHandler), 0, NoAuth, []Middleware{}}, - //ASNs {1.3, http.MethodGet, `asns/?(\.json)?$`, api.ReadHandler(&asn.TOASNV11{}), auth.PrivLevelReadOnly, Authenticated, nil}, {1.3, http.MethodPut, `asns/?$`, api.UpdateHandler(&asn.TOASNV11{}), auth.PrivLevelOperations, Authenticated, nil}, @@ -391,23 +383,18 @@ func Routes(d ServerData) ([]Route, []RawRoute, http.Handler, error) { {1.1, http.MethodPost, `federations/{id}/deliveryservices?(\.json)?$`, federations.PostDSes, auth.PrivLevelAdmin, Authenticated, nil}, ////DeliveryServices - {1.4, http.MethodGet, `deliveryservices/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryService{}), auth.PrivLevelReadOnly, Authenticated, nil}, - {1.3, http.MethodGet, `deliveryservices/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryServiceV13{}), auth.PrivLevelReadOnly, Authenticated, nil}, - {1.1, http.MethodGet, `deliveryservices/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryServiceV12{}), auth.PrivLevelReadOnly, Authenticated, nil}, - - {1.4, http.MethodGet, `deliveryservices/{id}/?(\.json)?$`, 
api.ReadHandler(&deliveryservice.TODeliveryService{}), auth.PrivLevelReadOnly, Authenticated, nil}, - {1.3, http.MethodGet, `deliveryservices/{id}/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryServiceV13{}), auth.PrivLevelReadOnly, Authenticated, nil}, - {1.1, http.MethodGet, `deliveryservices/{id}/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryServiceV12{}), auth.PrivLevelReadOnly, Authenticated, nil}, + {1.1, http.MethodGet, `deliveryservices/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryService{}), auth.PrivLevelReadOnly, Authenticated, nil}, + {1.1, http.MethodGet, `deliveryservices/{id}/?(\.json)?$`, api.ReadHandler(&deliveryservice.TODeliveryService{}), auth.PrivLevelReadOnly, Authenticated, nil}, - {1.4, http.MethodPost, `deliveryservices/?(\.json)?$`, deliveryservice.Create, auth.PrivLevelOperations, Authenticated, nil}, + {1.4, http.MethodPost, `deliveryservices/?(\.json)?$`, deliveryservice.CreateV14, auth.PrivLevelOperations, Authenticated, nil}, {1.3, http.MethodPost, `deliveryservices/?(\.json)?$`, deliveryservice.CreateV13, auth.PrivLevelOperations, Authenticated, nil}, {1.1, http.MethodPost, `deliveryservices/?(\.json)?$`, deliveryservice.CreateV12, auth.PrivLevelOperations, Authenticated, nil}, - {1.4, http.MethodPut, `deliveryservices/{id}/?(\.json)?$`, deliveryservice.Update, auth.PrivLevelOperations, Authenticated, nil}, + {1.4, http.MethodPut, `deliveryservices/{id}/?(\.json)?$`, deliveryservice.UpdateV14, auth.PrivLevelOperations, Authenticated, nil}, {1.3, http.MethodPut, `deliveryservices/{id}/?(\.json)?$`, deliveryservice.UpdateV13, auth.PrivLevelOperations, Authenticated, nil}, {1.1, http.MethodPut, `deliveryservices/{id}/?(\.json)?$`, deliveryservice.UpdateV12, auth.PrivLevelOperations, Authenticated, nil}, - {1.1, http.MethodDelete, `deliveryservices/{id}/?(\.json)?$`, api.DeleteHandler(&deliveryservice.TODeliveryServiceV12{}), auth.PrivLevelOperations, Authenticated, nil}, + {1.1, http.MethodDelete, 
`deliveryservices/{id}/?(\.json)?$`, api.DeleteHandler(&deliveryservice.TODeliveryService{}), auth.PrivLevelOperations, Authenticated, nil}, {1.1, http.MethodGet, `deliveryservices/{id}/servers/eligible/?(\.json)?$`, deliveryservice.GetServersEligible, auth.PrivLevelReadOnly, Authenticated, nil}, diff --git a/traffic_ops/traffic_ops_golang/server/servers_assignment.go b/traffic_ops/traffic_ops_golang/server/servers_assignment.go index 5be31f4a81..582cc3dc0a 100644 --- a/traffic_ops/traffic_ops_golang/server/servers_assignment.go +++ b/traffic_ops/traffic_ops_golang/server/servers_assignment.go @@ -68,8 +68,8 @@ func AssignDeliveryServicesToServerHandler(w http.ResponseWriter, r *http.Reques return } - if err := api.CreateChangeLogRawErr(api.ApiChange, "Assigned "+strconv.Itoa(len(assignedDSes))+" delivery services to server id: "+strconv.Itoa(server) , inf.User, inf.Tx.Tx); err != nil { - api.HandleErr(w, r, inf.Tx.Tx, http.StatusInternalServerError, nil, errors.New("error writing to change log: " + err.Error())) + if err := api.CreateChangeLogRawErr(api.ApiChange, "Assigned "+strconv.Itoa(len(assignedDSes))+" delivery services to server id: "+strconv.Itoa(server), inf.User, inf.Tx.Tx); err != nil { + api.HandleErr(w, r, inf.Tx.Tx, http.StatusInternalServerError, nil, errors.New("error writing to change log: "+err.Error())) return } diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/.travis.yml b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/.travis.yml new file mode 100644 index 0000000000..1027f56cd9 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/.travis.yml @@ -0,0 +1,13 @@ +language: go + +script: + - go vet ./... + - go test -v ./... 
+ +go: + - 1.3 + - 1.4 + - 1.5 + - 1.6 + - 1.7 + - tip diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/LICENSE b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/LICENSE new file mode 100644 index 0000000000..df83a9c2f0 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/LICENSE @@ -0,0 +1,8 @@ +Copyright (c) 2012 Dave Grijalva + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/MIGRATION_GUIDE.md b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/MIGRATION_GUIDE.md new file mode 100644 index 0000000000..7fc1f793cb --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/MIGRATION_GUIDE.md @@ -0,0 +1,97 @@ +## Migration Guide from v2 -> v3 + +Version 3 adds several new, frequently requested features. To do so, it introduces a few breaking changes. We've worked to keep these as minimal as possible. 
This guide explains the breaking changes and how you can quickly update your code. + +### `Token.Claims` is now an interface type + +The most requested feature from the 2.0 version of this library was the ability to provide a custom type to the JSON parser for claims. This was implemented by introducing a new interface, `Claims`, to replace `map[string]interface{}`. We also included two concrete implementations of `Claims`: `MapClaims` and `StandardClaims`. + +`MapClaims` is an alias for `map[string]interface{}` with built in validation behavior. It is the default claims type when using `Parse`. The usage is unchanged except you must type cast the claims property. + +The old example for parsing a token looked like this.. + +```go + if token, err := jwt.Parse(tokenString, keyLookupFunc); err == nil { + fmt.Printf("Token for user %v expires %v", token.Claims["user"], token.Claims["exp"]) + } +``` + +is now directly mapped to... + +```go + if token, err := jwt.Parse(tokenString, keyLookupFunc); err == nil { + claims := token.Claims.(jwt.MapClaims) + fmt.Printf("Token for user %v expires %v", claims["user"], claims["exp"]) + } +``` + +`StandardClaims` is designed to be embedded in your custom type. You can supply a custom claims type with the new `ParseWithClaims` function. Here's an example of using a custom claims type. + +```go + type MyCustomClaims struct { + User string + *StandardClaims + } + + if token, err := jwt.ParseWithClaims(tokenString, &MyCustomClaims{}, keyLookupFunc); err == nil { + claims := token.Claims.(*MyCustomClaims) + fmt.Printf("Token for user %v expires %v", claims.User, claims.StandardClaims.ExpiresAt) + } +``` + +### `ParseFromRequest` has been moved + +To keep this library focused on the tokens without becoming overburdened with complex request processing logic, `ParseFromRequest` and its new companion `ParseFromRequestWithClaims` have been moved to a subpackage, `request`. 
The method signatures have also been augmented to receive a new argument: `Extractor`. + +`Extractors` do the work of picking the token string out of a request. The interface is simple and composable. + +This simple parsing example: + +```go + if token, err := jwt.ParseFromRequest(req, keyLookupFunc); err == nil { + fmt.Printf("Token for user %v expires %v", token.Claims["user"], token.Claims["exp"]) + } +``` + +is directly mapped to: + +```go + if token, err := request.ParseFromRequest(req, request.OAuth2Extractor, keyLookupFunc); err == nil { + claims := token.Claims.(jwt.MapClaims) + fmt.Printf("Token for user %v expires %v", claims["user"], claims["exp"]) + } +``` + +There are several concrete `Extractor` types provided for your convenience: + +* `HeaderExtractor` will search a list of headers until one contains content. +* `ArgumentExtractor` will search a list of keys in request query and form arguments until one contains content. +* `MultiExtractor` will try a list of `Extractors` in order until one returns content. +* `AuthorizationHeaderExtractor` will look in the `Authorization` header for a `Bearer` token. +* `OAuth2Extractor` searches the places an OAuth2 token would be specified (per the spec): the `Authorization` header and the `access_token` argument. +* `PostExtractionFilter` wraps an `Extractor`, allowing you to process the content before it's parsed. A simple example is stripping the `Bearer ` text from a header. + + +### RSA signing methods no longer accept `[]byte` keys + +Due to a [critical vulnerability](https://auth0.com/blog/2015/03/31/critical-vulnerabilities-in-json-web-token-libraries/), we've decided the convenience of accepting `[]byte` instead of `rsa.PublicKey` or `rsa.PrivateKey` isn't worth the risk of misuse. + +To replace this behavior, we've added two helper methods: `ParseRSAPrivateKeyFromPEM(key []byte) (*rsa.PrivateKey, error)` and `ParseRSAPublicKeyFromPEM(key []byte) (*rsa.PublicKey, error)`.
These are just simple helpers for unpacking PEM encoded PKCS1 and PKCS8 keys. If your keys are encoded any other way, all you need to do is convert them to the `crypto/rsa` package's types. + +```go + func keyLookupFunc(token *jwt.Token) (interface{}, error) { + // Don't forget to validate the alg is what you expect: + if _, ok := token.Method.(*jwt.SigningMethodRSA); !ok { + return nil, fmt.Errorf("Unexpected signing method: %v", token.Header["alg"]) + } + + // Look up key + key, err := lookupPublicKey(token.Header["kid"]) + if err != nil { + return nil, err + } + + // Unpack key from PEM encoded PKCS8 + return jwt.ParseRSAPublicKeyFromPEM(key) + } +``` diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/README.md b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/README.md new file mode 100644 index 0000000000..d7749077fd --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/README.md @@ -0,0 +1,104 @@ +# jwt-go + +[![Build Status](https://travis-ci.org/dgrijalva/jwt-go.svg?branch=master)](https://travis-ci.org/dgrijalva/jwt-go) +[![GoDoc](https://godoc.org/github.com/dgrijalva/jwt-go?status.svg)](https://godoc.org/github.com/dgrijalva/jwt-go) + +A [go](http://www.golang.org) (or 'golang' for search engine friendliness) implementation of [JSON Web Tokens](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html) + +**NEW VERSION COMING:** There have been a lot of improvements suggested since the version 3.0.0 released in 2016. I'm working now on cutting two different releases: 3.2.0 will contain any non-breaking changes or enhancements. 4.0.0 will follow shortly which will include breaking changes. See the 4.0.0 milestone to get an idea of what's coming. If you have other ideas, or would like to participate in 4.0.0, now's the time. If you depend on this library and don't want to be interrupted, I recommend you use your dependency management tool to pin to version 3.
+ +**SECURITY NOTICE:** Some older versions of Go have a security issue in the `crypto/elliptic` package. The recommendation is to upgrade to at least 1.8.3. See issue #216 for more detail. + +**SECURITY NOTICE:** It's important that you [validate the `alg` presented is what you expect](https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/). This library attempts to make it easy to do the right thing by requiring key types match the expected alg, but you should take the extra step to verify it in your usage. See the examples provided. + +## What the heck is a JWT? + +JWT.io has [a great introduction](https://jwt.io/introduction) to JSON Web Tokens. + +In short, it's a signed JSON object that does something useful (for example, authentication). It's commonly used for `Bearer` tokens in OAuth 2. A token is made of three parts, separated by `.`'s. The first two parts are JSON objects that have been [base64url](http://tools.ietf.org/html/rfc4648) encoded. The last part is the signature, encoded the same way. + +The first part is called the header. It contains the necessary information for verifying the last part, the signature. For example, which signing method and which key were used. + +The part in the middle is the interesting bit. It's called the Claims and contains the actual stuff you care about. Refer to [the RFC](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html) for information about reserved keys and the proper way to add your own. + +## What's in the box? + +This library supports the parsing and verification as well as the generation and signing of JWTs. Currently supported signing algorithms are HMAC SHA, RSA, RSA-PSS, and ECDSA, though hooks are present for adding your own.
+ +## Examples + +See [the project documentation](https://godoc.org/github.com/dgrijalva/jwt-go) for examples of usage: + +* [Simple example of parsing and validating a token](https://godoc.org/github.com/dgrijalva/jwt-go#example-Parse--Hmac) +* [Simple example of building and signing a token](https://godoc.org/github.com/dgrijalva/jwt-go#example-New--Hmac) +* [Directory of Examples](https://godoc.org/github.com/dgrijalva/jwt-go#pkg-examples) + +## Extensions + +This library publishes all the necessary components for adding your own signing methods. Simply implement the `SigningMethod` interface and register a factory method using `RegisterSigningMethod`. + +Here's an example of an extension that integrates with multiple Google Cloud Platform signing tools (AppEngine, IAM API, Cloud KMS): https://github.com/someone1/gcp-jwt-go + +## Compliance + +This library was last reviewed to comply with [RFC 7519](http://www.rfc-editor.org/info/rfc7519) dated May 2015 with a few notable differences: + +* In order to protect against accidental use of [Unsecured JWTs](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html#UnsecuredJWT), tokens using `alg=none` will only be accepted if the constant `jwt.UnsafeAllowNoneSignatureType` is provided as the key. + +## Project Status & Versioning + +This library is considered production ready. Feedback and feature requests are appreciated. The API should be considered stable. There should be very few backwards-incompatible changes outside of major version updates (and only with good reason). + +This project uses [Semantic Versioning 2.0.0](http://semver.org). Accepted pull requests will land on `master`. Periodically, versions will be tagged from `master`. You can find all the releases on [the project releases page](https://github.com/dgrijalva/jwt-go/releases). + +While we try to make it obvious when we make breaking changes, there isn't a great mechanism for pushing announcements out to users.
You may want to use this alternative package instead: `gopkg.in/dgrijalva/jwt-go.v3`. It will do the right thing WRT semantic versioning. + +**BREAKING CHANGES:** +* Version 3.0.0 includes _a lot_ of changes from the 2.x line, including a few that break the API. We've tried to break as few things as possible, so there should just be a few type signature changes. A full list of breaking changes is available in `VERSION_HISTORY.md`. See `MIGRATION_GUIDE.md` for more information on updating your code. + +## Usage Tips + +### Signing vs Encryption + +A token is simply a JSON object that is signed by its author. This tells you exactly two things about the data: + +* The author of the token was in possession of the signing secret +* The data has not been modified since it was signed + +It's important to know that JWT does not provide encryption, which means anyone who has access to the token can read its contents. If you need to protect (encrypt) the data, there is a companion spec, `JWE`, that provides this functionality. JWE is currently outside the scope of this library. + +### Choosing a Signing Method + +There are several signing methods available, and you should probably take the time to learn about the various options before choosing one. The principal design decision is most likely going to be symmetric vs asymmetric. + +Symmetric signing methods, such as HMAC, use only a single secret. This is probably the simplest signing method to use since any `[]byte` can be used as a valid secret. They are also slightly computationally faster to use, though this is rarely enough to matter. Symmetric signing methods work best when both producers and consumers of tokens are trusted, or even the same system. Since the same secret is used to both sign and validate tokens, you can't easily distribute the key for validation. + +Asymmetric signing methods, such as RSA, use different keys for signing and verifying tokens.
This makes it possible to produce tokens with a private key, and allow any consumer to access the public key for verification. + +### Signing Methods and Key Types + +Each signing method expects a different object type for its signing keys. See the package documentation for details. Here are the most common ones: + +* The [HMAC signing methods](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodHMAC) (`HS256`,`HS384`,`HS512`) expect `[]byte` values for signing and validation +* The [RSA signing methods](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodRSA) (`RS256`,`RS384`,`RS512`) expect `*rsa.PrivateKey` for signing and `*rsa.PublicKey` for validation +* The [ECDSA signing methods](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodECDSA) (`ES256`,`ES384`,`ES512`) expect `*ecdsa.PrivateKey` for signing and `*ecdsa.PublicKey` for validation + +### JWT and OAuth + +It's worth mentioning that OAuth and JWT are not the same thing. A JWT is simply a signed JSON object. It can be used anywhere such a thing is useful. There is some confusion, though, as JWT is the most common type of bearer token used in OAuth2 authentication. + +Without going too far down the rabbit hole, here's a description of the interaction of these technologies: + +* OAuth is a protocol for allowing an identity provider to be separate from the service a user is logging in to. For example, whenever you use Facebook to log into a different service (Yelp, Spotify, etc), you are using OAuth. +* OAuth defines several options for passing around authentication data. One popular method is called a "bearer token". A bearer token is simply a string that _should_ only be held by an authenticated user. Thus, simply presenting this token proves your identity. You can probably derive from here why a JWT might make a good bearer token. +* Because bearer tokens are used for authentication, it's important they're kept secret.
This is why transactions that use bearer tokens typically happen over SSL. + +### Troubleshooting + +This library uses descriptive error messages whenever possible. If you are not getting the expected result, have a look at the errors. The most common place people get stuck is providing the correct type of key to the parser. See the above section on signing methods and key types. + +## More + +Documentation can be found [on godoc.org](http://godoc.org/github.com/dgrijalva/jwt-go). + +The command line utility included in this project (cmd/jwt) provides a straightforward example of token creation and parsing as well as a useful tool for debugging your own integration. You'll also find several implementation examples in the documentation. diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/VERSION_HISTORY.md b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/VERSION_HISTORY.md new file mode 100644 index 0000000000..6370298313 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/VERSION_HISTORY.md @@ -0,0 +1,118 @@ +## `jwt-go` Version History + +#### 3.2.0 + +* Added method `ParseUnverified` to allow users to split up the tasks of parsing and validation +* HMAC signing method returns `ErrInvalidKeyType` instead of `ErrInvalidKey` where appropriate +* Added options to `request.ParseFromRequest`, which allows for an arbitrary list of modifiers to parsing behavior. Initial set include `WithClaims` and `WithParser`. Existing usage of this function will continue to work as before. +* Deprecated `ParseFromRequestWithClaims` to simplify API in the future. + +#### 3.1.0 + +* Improvements to `jwt` command line tool +* Added `SkipClaimsValidation` option to `Parser` +* Documentation updates + +#### 3.0.0 + +* **Compatibility Breaking Changes**: See MIGRATION_GUIDE.md for tips on updating your code + * Dropped support for `[]byte` keys when using RSA signing methods. 
This convenience feature could contribute to security vulnerabilities involving mismatched key types with signing methods. + * `ParseFromRequest` has been moved to `request` subpackage and usage has changed + * The `Claims` property on `Token` is now type `Claims` instead of `map[string]interface{}`. The default value is type `MapClaims`, which is an alias to `map[string]interface{}`. This makes it possible to use a custom type when decoding claims. +* Other Additions and Changes + * Added `Claims` interface type to allow users to decode the claims into a custom type + * Added `ParseWithClaims`, which takes a third argument of type `Claims`. Use this function instead of `Parse` if you have a custom type you'd like to decode into. + * Dramatically improved the functionality and flexibility of `ParseFromRequest`, which is now in the `request` subpackage + * Added `ParseFromRequestWithClaims` which is the `FromRequest` equivalent of `ParseWithClaims` + * Added new interface type `Extractor`, which is used for extracting JWT strings from http requests. Used with `ParseFromRequest` and `ParseFromRequestWithClaims`. + * Added several new, more specific, validation errors to error type bitmask + * Moved examples from README to executable example files + * Signing method registry is now thread safe + * Added new property to `ValidationError`, which contains the raw error returned by calls made by parse/verify (such as those returned by keyfunc or json parser) + +#### 2.7.0 + +This will likely be the last backwards compatible release before 3.0.0, excluding essential bug fixes. 
+ +* Added new option `-show` to the `jwt` command that will just output the decoded token without verifying +* Error text for expired tokens includes how long it's been expired +* Fixed incorrect error returned from `ParseRSAPublicKeyFromPEM` +* Documentation updates + +#### 2.6.0 + +* Exposed inner error within ValidationError +* Fixed validation errors when using UseJSONNumber flag +* Added several unit tests + +#### 2.5.0 + +* Added support for signing method none. You shouldn't use this. The API tries to make this clear. +* Updated/fixed some documentation +* Added more helpful error message when trying to parse tokens that begin with `BEARER ` + +#### 2.4.0 + +* Added new type, Parser, to allow for configuration of various parsing parameters + * You can now specify a list of valid signing methods. Anything outside this set will be rejected. + * You can now opt to use the `json.Number` type instead of `float64` when parsing token JSON +* Added support for [Travis CI](https://travis-ci.org/dgrijalva/jwt-go) +* Fixed some bugs with ECDSA parsing + +#### 2.3.0 + +* Added support for ECDSA signing methods +* Added support for RSA PSS signing methods (requires go v1.4) + +#### 2.2.0 + +* Gracefully handle a `nil` `Keyfunc` being passed to `Parse`. Result will now be the parsed token and an error, instead of a panic. + +#### 2.1.0 + +Backwards compatible API change that was missed in 2.0.0. + +* The `SignedString` method on `Token` now takes `interface{}` instead of `[]byte` + +#### 2.0.0 + +There were two major reasons for breaking backwards compatibility with this update. The first was a refactor required to expand the width of the RSA and HMAC-SHA signing implementations. There will likely be no required code changes to support this change. + +The second update, while unfortunately requiring a small change in integration, is required to open up this library to other signing methods. 
Not all keys used for all signing methods have a single standard on-disk representation. Requiring `[]byte` as the type for all keys proved too limiting. Additionally, this implementation allows for pre-parsed tokens to be reused, which might matter in an application that parses a high volume of tokens with a small set of keys. Backwards compatibility has been maintained for passing `[]byte` to the RSA signing methods, but they will also accept `*rsa.PublicKey` and `*rsa.PrivateKey`. + +It is likely the only integration change required here will be to change `func(t *jwt.Token) ([]byte, error)` to `func(t *jwt.Token) (interface{}, error)` when calling `Parse`. + +* **Compatibility Breaking Changes** + * `SigningMethodHS256` is now `*SigningMethodHMAC` instead of `type struct` + * `SigningMethodRS256` is now `*SigningMethodRSA` instead of `type struct` + * `KeyFunc` now returns `interface{}` instead of `[]byte` + * `SigningMethod.Sign` now takes `interface{}` instead of `[]byte` for the key + * `SigningMethod.Verify` now takes `interface{}` instead of `[]byte` for the key +* Renamed type `SigningMethodHS256` to `SigningMethodHMAC`. Specific sizes are now just instances of this type. + * Added public package global `SigningMethodHS256` + * Added public package global `SigningMethodHS384` + * Added public package global `SigningMethodHS512` +* Renamed type `SigningMethodRS256` to `SigningMethodRSA`. Specific sizes are now just instances of this type. + * Added public package global `SigningMethodRS256` + * Added public package global `SigningMethodRS384` + * Added public package global `SigningMethodRS512` +* Moved sample private key for HMAC tests from an inline value to a file on disk. Value is unchanged.
+* Refactored the RSA implementation to be easier to read +* Exposed helper methods `ParseRSAPrivateKeyFromPEM` and `ParseRSAPublicKeyFromPEM` + +#### 1.0.2 + +* Fixed bug in parsing public keys from certificates +* Added more tests around the parsing of keys for RS256 +* Code refactoring in RS256 implementation. No functional changes + +#### 1.0.1 + +* Fixed panic if RS256 signing method was passed an invalid key + +#### 1.0.0 + +* First versioned release +* API stabilized +* Supports creating, signing, parsing, and validating JWT tokens +* Supports RS256 and HS256 signing methods \ No newline at end of file diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/claims.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/claims.go new file mode 100644 index 0000000000..f0228f02e0 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/claims.go @@ -0,0 +1,134 @@ +package jwt + +import ( + "crypto/subtle" + "fmt" + "time" +) + +// For a type to be a Claims object, it must just have a Valid method that determines +// if the token is invalid for any supported reason +type Claims interface { + Valid() error +} + +// Structured version of Claims Section, as referenced at +// https://tools.ietf.org/html/rfc7519#section-4.1 +// See examples for how to use this with your own claim types +type StandardClaims struct { + Audience string `json:"aud,omitempty"` + ExpiresAt int64 `json:"exp,omitempty"` + Id string `json:"jti,omitempty"` + IssuedAt int64 `json:"iat,omitempty"` + Issuer string `json:"iss,omitempty"` + NotBefore int64 `json:"nbf,omitempty"` + Subject string `json:"sub,omitempty"` +} + +// Validates time based claims "exp, iat, nbf". +// There is no accounting for clock skew. +// As well, if any of the above claims are not in the token, it will still +// be considered a valid claim. 
+func (c StandardClaims) Valid() error { + vErr := new(ValidationError) + now := TimeFunc().Unix() + + // The claims below are optional, by default, so if they are set to the + // default value in Go, let's not fail the verification for them. + if c.VerifyExpiresAt(now, false) == false { + delta := time.Unix(now, 0).Sub(time.Unix(c.ExpiresAt, 0)) + vErr.Inner = fmt.Errorf("token is expired by %v", delta) + vErr.Errors |= ValidationErrorExpired + } + + if c.VerifyIssuedAt(now, false) == false { + vErr.Inner = fmt.Errorf("Token used before issued") + vErr.Errors |= ValidationErrorIssuedAt + } + + if c.VerifyNotBefore(now, false) == false { + vErr.Inner = fmt.Errorf("token is not valid yet") + vErr.Errors |= ValidationErrorNotValidYet + } + + if vErr.valid() { + return nil + } + + return vErr +} + +// Compares the aud claim against cmp. +// If required is false, this method will return true if the value matches or is unset +func (c *StandardClaims) VerifyAudience(cmp string, req bool) bool { + return verifyAud(c.Audience, cmp, req) +} + +// Compares the exp claim against cmp. +// If required is false, this method will return true if the value matches or is unset +func (c *StandardClaims) VerifyExpiresAt(cmp int64, req bool) bool { + return verifyExp(c.ExpiresAt, cmp, req) +} + +// Compares the iat claim against cmp. +// If required is false, this method will return true if the value matches or is unset +func (c *StandardClaims) VerifyIssuedAt(cmp int64, req bool) bool { + return verifyIat(c.IssuedAt, cmp, req) +} + +// Compares the iss claim against cmp. +// If required is false, this method will return true if the value matches or is unset +func (c *StandardClaims) VerifyIssuer(cmp string, req bool) bool { + return verifyIss(c.Issuer, cmp, req) +} + +// Compares the nbf claim against cmp. 
+// If required is false, this method will return true if the value matches or is unset +func (c *StandardClaims) VerifyNotBefore(cmp int64, req bool) bool { + return verifyNbf(c.NotBefore, cmp, req) +} + +// ----- helpers + +func verifyAud(aud string, cmp string, required bool) bool { + if aud == "" { + return !required + } + if subtle.ConstantTimeCompare([]byte(aud), []byte(cmp)) != 0 { + return true + } else { + return false + } +} + +func verifyExp(exp int64, now int64, required bool) bool { + if exp == 0 { + return !required + } + return now <= exp +} + +func verifyIat(iat int64, now int64, required bool) bool { + if iat == 0 { + return !required + } + return now >= iat +} + +func verifyIss(iss string, cmp string, required bool) bool { + if iss == "" { + return !required + } + if subtle.ConstantTimeCompare([]byte(iss), []byte(cmp)) != 0 { + return true + } else { + return false + } +} + +func verifyNbf(nbf int64, now int64, required bool) bool { + if nbf == 0 { + return !required + } + return now >= nbf +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/cmd/jwt/README.md b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/cmd/jwt/README.md new file mode 100644 index 0000000000..41eb89da55 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/cmd/jwt/README.md @@ -0,0 +1,18 @@ +`jwt` command-line tool +======================= + +This is a simple tool to sign, verify and show JSON Web Tokens from +the command line. 
+ +The following will create and sign a token, then verify it and output the original claims: + + echo {\"foo\":\"bar\"} | ./jwt -key ../../test/sample_key -alg RS256 -sign - | ./jwt -key ../../test/sample_key.pub -alg RS256 -verify - + +To simply display a token, use: + + echo $JWT | ./jwt -show - + +You can install this tool with the following command: + + go install github.com/dgrijalva/jwt-go/cmd/jwt + diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/cmd/jwt/app.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/cmd/jwt/app.go new file mode 100644 index 0000000000..cdf2500832 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/cmd/jwt/app.go @@ -0,0 +1,282 @@ +// A useful example app. You can use this to debug your tokens on the command line. +// This is also a great place to look at how you might use this library. +// +// Example usage: +// The following will create and sign a token, then verify it and output the original claims. 
+// echo {\"foo\":\"bar\"} | bin/jwt -key test/sample_key -alg RS256 -sign - | bin/jwt -key test/sample_key.pub -verify - +package main + +import ( + "encoding/json" + "flag" + "fmt" + "io" + "io/ioutil" + "os" + "regexp" + "strings" + + jwt "github.com/dgrijalva/jwt-go" +) + +var ( + // Options + flagAlg = flag.String("alg", "", "signing algorithm identifier") + flagKey = flag.String("key", "", "path to key file or '-' to read from stdin") + flagCompact = flag.Bool("compact", false, "output compact JSON") + flagDebug = flag.Bool("debug", false, "print out all kinds of debug data") + flagClaims = make(ArgList) + flagHead = make(ArgList) + + // Modes - exactly one of these is required + flagSign = flag.String("sign", "", "path to claims object to sign, '-' to read from stdin, or '+' to use only -claim args") + flagVerify = flag.String("verify", "", "path to JWT token to verify or '-' to read from stdin") + flagShow = flag.String("show", "", "path to JWT file or '-' to read from stdin") +) + +func main() { + // Plug in Var flags + flag.Var(flagClaims, "claim", "add additional claims. may be used more than once") + flag.Var(flagHead, "header", "add additional header params. may be used more than once") + + // Usage message if you ask for -help or if you mess up inputs. + flag.Usage = func() { + fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0]) + fmt.Fprintf(os.Stderr, " One of the following flags is required: sign, verify\n") + flag.PrintDefaults() + } + + // Parse command line options + flag.Parse() + + // Do the thing. 
If something goes wrong, print error to stderr + // and exit with a non-zero status code + if err := start(); err != nil { + fmt.Fprintf(os.Stderr, "Error: %v\n", err) + os.Exit(1) + } +} + +// Figure out which thing to do and then do that +func start() error { + if *flagSign != "" { + return signToken() + } else if *flagVerify != "" { + return verifyToken() + } else if *flagShow != "" { + return showToken() + } else { + flag.Usage() + return fmt.Errorf("None of the required flags are present. What do you want me to do?") + } +} + +// Helper func: Read input from specified file or stdin +func loadData(p string) ([]byte, error) { + if p == "" { + return nil, fmt.Errorf("No path specified") + } + + var rdr io.Reader + if p == "-" { + rdr = os.Stdin + } else if p == "+" { + return []byte("{}"), nil + } else { + if f, err := os.Open(p); err == nil { + rdr = f + defer f.Close() + } else { + return nil, err + } + } + return ioutil.ReadAll(rdr) +} + +// Print a json object in accordance with the prophecy (or the command line options) +func printJSON(j interface{}) error { + var out []byte + var err error + + if *flagCompact == false { + out, err = json.MarshalIndent(j, "", " ") + } else { + out, err = json.Marshal(j) + } + + if err == nil { + fmt.Println(string(out)) + } + + return err +} + +// Verify a token and output the claims. This is a great example +// of how to verify and view a token. +func verifyToken() error { + // get the token + tokData, err := loadData(*flagVerify) + if err != nil { + return fmt.Errorf("Couldn't read token: %v", err) + } + + // trim possible whitespace from token + tokData = regexp.MustCompile(`\s*$`).ReplaceAll(tokData, []byte{}) + if *flagDebug { + fmt.Fprintf(os.Stderr, "Token len: %v bytes\n", len(tokData)) + } + + // Parse the token. 
Load the key from command line option + token, err := jwt.Parse(string(tokData), func(t *jwt.Token) (interface{}, error) { + data, err := loadData(*flagKey) + if err != nil { + return nil, err + } + if isEs() { + return jwt.ParseECPublicKeyFromPEM(data) + } else if isRs() { + return jwt.ParseRSAPublicKeyFromPEM(data) + } + return data, nil + }) + + // Print some debug data + if *flagDebug && token != nil { + fmt.Fprintf(os.Stderr, "Header:\n%v\n", token.Header) + fmt.Fprintf(os.Stderr, "Claims:\n%v\n", token.Claims) + } + + // Print an error if we can't parse for some reason + if err != nil { + return fmt.Errorf("Couldn't parse token: %v", err) + } + + // Is token invalid? + if !token.Valid { + return fmt.Errorf("Token is invalid") + } + + // Print the token details + if err := printJSON(token.Claims); err != nil { + return fmt.Errorf("Failed to output claims: %v", err) + } + + return nil +} + +// Create, sign, and output a token. This is a great, simple example of +// how to use this library to create and sign a token. 
+func signToken() error { + // get the token data from command line arguments + tokData, err := loadData(*flagSign) + if err != nil { + return fmt.Errorf("Couldn't read token: %v", err) + } else if *flagDebug { + fmt.Fprintf(os.Stderr, "Token: %v bytes", len(tokData)) + } + + // parse the JSON of the claims + var claims jwt.MapClaims + if err := json.Unmarshal(tokData, &claims); err != nil { + return fmt.Errorf("Couldn't parse claims JSON: %v", err) + } + + // add command line claims + if len(flagClaims) > 0 { + for k, v := range flagClaims { + claims[k] = v + } + } + + // get the key + var key interface{} + key, err = loadData(*flagKey) + if err != nil { + return fmt.Errorf("Couldn't read key: %v", err) + } + + // get the signing alg + alg := jwt.GetSigningMethod(*flagAlg) + if alg == nil { + return fmt.Errorf("Couldn't find signing method: %v", *flagAlg) + } + + // create a new token + token := jwt.NewWithClaims(alg, claims) + + // add command line headers + if len(flagHead) > 0 { + for k, v := range flagHead { + token.Header[k] = v + } + } + + if isEs() { + if k, ok := key.([]byte); !ok { + return fmt.Errorf("Couldn't convert key data to key") + } else { + key, err = jwt.ParseECPrivateKeyFromPEM(k) + if err != nil { + return err + } + } + } else if isRs() { + if k, ok := key.([]byte); !ok { + return fmt.Errorf("Couldn't convert key data to key") + } else { + key, err = jwt.ParseRSAPrivateKeyFromPEM(k) + if err != nil { + return err + } + } + } + + if out, err := token.SignedString(key); err == nil { + fmt.Println(out) + } else { + return fmt.Errorf("Error signing token: %v", err) + } + + return nil +} + +// showToken pretty-prints the token on the command line. 
+func showToken() error { + // get the token + tokData, err := loadData(*flagShow) + if err != nil { + return fmt.Errorf("Couldn't read token: %v", err) + } + + // trim possible whitespace from token + tokData = regexp.MustCompile(`\s*$`).ReplaceAll(tokData, []byte{}) + if *flagDebug { + fmt.Fprintf(os.Stderr, "Token len: %v bytes\n", len(tokData)) + } + + token, err := jwt.Parse(string(tokData), nil) + if token == nil { + return fmt.Errorf("malformed token: %v", err) + } + + // Print the token details + fmt.Println("Header:") + if err := printJSON(token.Header); err != nil { + return fmt.Errorf("Failed to output header: %v", err) + } + + fmt.Println("Claims:") + if err := printJSON(token.Claims); err != nil { + return fmt.Errorf("Failed to output claims: %v", err) + } + + return nil +} + +func isEs() bool { + return strings.HasPrefix(*flagAlg, "ES") +} + +func isRs() bool { + return strings.HasPrefix(*flagAlg, "RS") || strings.HasPrefix(*flagAlg, "PS") +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/cmd/jwt/args.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/cmd/jwt/args.go new file mode 100644 index 0000000000..a5bba5b10c --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/cmd/jwt/args.go @@ -0,0 +1,23 @@ +package main + +import ( + "encoding/json" + "fmt" + "strings" +) + +type ArgList map[string]string + +func (l ArgList) String() string { + data, _ := json.Marshal(l) + return string(data) +} + +func (l ArgList) Set(arg string) error { + parts := strings.SplitN(arg, "=", 2) + if len(parts) != 2 { + return fmt.Errorf("Invalid argument '%v'. Must use format 'key=value'. 
%v", arg, parts) + } + l[parts[0]] = parts[1] + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/doc.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/doc.go new file mode 100644 index 0000000000..a86dc1a3b3 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/doc.go @@ -0,0 +1,4 @@ +// Package jwt is a Go implementation of JSON Web Tokens: http://self-issued.info/docs/draft-jones-json-web-token.html +// +// See README.md for more info. +package jwt diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/ecdsa.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/ecdsa.go new file mode 100644 index 0000000000..f977381240 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/ecdsa.go @@ -0,0 +1,148 @@ +package jwt + +import ( + "crypto" + "crypto/ecdsa" + "crypto/rand" + "errors" + "math/big" +) + +var ( + // Sadly this is missing from crypto/ecdsa compared to crypto/rsa + ErrECDSAVerification = errors.New("crypto/ecdsa: verification error") +) + +// Implements the ECDSA family of signing methods signing methods +// Expects *ecdsa.PrivateKey for signing and *ecdsa.PublicKey for verification +type SigningMethodECDSA struct { + Name string + Hash crypto.Hash + KeySize int + CurveBits int +} + +// Specific instances for EC256 and company +var ( + SigningMethodES256 *SigningMethodECDSA + SigningMethodES384 *SigningMethodECDSA + SigningMethodES512 *SigningMethodECDSA +) + +func init() { + // ES256 + SigningMethodES256 = &SigningMethodECDSA{"ES256", crypto.SHA256, 32, 256} + RegisterSigningMethod(SigningMethodES256.Alg(), func() SigningMethod { + return SigningMethodES256 + }) + + // ES384 + SigningMethodES384 = &SigningMethodECDSA{"ES384", crypto.SHA384, 48, 384} + RegisterSigningMethod(SigningMethodES384.Alg(), func() SigningMethod { + return SigningMethodES384 + }) + + // ES512 + SigningMethodES512 = 
&SigningMethodECDSA{"ES512", crypto.SHA512, 66, 521} + RegisterSigningMethod(SigningMethodES512.Alg(), func() SigningMethod { + return SigningMethodES512 + }) +} + +func (m *SigningMethodECDSA) Alg() string { + return m.Name +} + +// Implements the Verify method from SigningMethod +// For this verify method, key must be an ecdsa.PublicKey struct +func (m *SigningMethodECDSA) Verify(signingString, signature string, key interface{}) error { + var err error + + // Decode the signature + var sig []byte + if sig, err = DecodeSegment(signature); err != nil { + return err + } + + // Get the key + var ecdsaKey *ecdsa.PublicKey + switch k := key.(type) { + case *ecdsa.PublicKey: + ecdsaKey = k + default: + return ErrInvalidKeyType + } + + if len(sig) != 2*m.KeySize { + return ErrECDSAVerification + } + + r := big.NewInt(0).SetBytes(sig[:m.KeySize]) + s := big.NewInt(0).SetBytes(sig[m.KeySize:]) + + // Create hasher + if !m.Hash.Available() { + return ErrHashUnavailable + } + hasher := m.Hash.New() + hasher.Write([]byte(signingString)) + + // Verify the signature + if verifystatus := ecdsa.Verify(ecdsaKey, hasher.Sum(nil), r, s); verifystatus == true { + return nil + } else { + return ErrECDSAVerification + } +} + +// Implements the Sign method from SigningMethod +// For this signing method, key must be an ecdsa.PrivateKey struct +func (m *SigningMethodECDSA) Sign(signingString string, key interface{}) (string, error) { + // Get the key + var ecdsaKey *ecdsa.PrivateKey + switch k := key.(type) { + case *ecdsa.PrivateKey: + ecdsaKey = k + default: + return "", ErrInvalidKeyType + } + + // Create the hasher + if !m.Hash.Available() { + return "", ErrHashUnavailable + } + + hasher := m.Hash.New() + hasher.Write([]byte(signingString)) + + // Sign the string and return r, s + if r, s, err := ecdsa.Sign(rand.Reader, ecdsaKey, hasher.Sum(nil)); err == nil { + curveBits := ecdsaKey.Curve.Params().BitSize + + if m.CurveBits != curveBits { + return "", ErrInvalidKey + } + + keyBytes 
:= curveBits / 8 + if curveBits%8 > 0 { + keyBytes += 1 + } + + // We serialize the outpus (r and s) into big-endian byte arrays and pad + // them with zeros on the left to make sure the sizes work out. Both arrays + // must be keyBytes long, and the output must be 2*keyBytes long. + rBytes := r.Bytes() + rBytesPadded := make([]byte, keyBytes) + copy(rBytesPadded[keyBytes-len(rBytes):], rBytes) + + sBytes := s.Bytes() + sBytesPadded := make([]byte, keyBytes) + copy(sBytesPadded[keyBytes-len(sBytes):], sBytes) + + out := append(rBytesPadded, sBytesPadded...) + + return EncodeSegment(out), nil + } else { + return "", err + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/ecdsa_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/ecdsa_test.go new file mode 100644 index 0000000000..753047b1ec --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/ecdsa_test.go @@ -0,0 +1,100 @@ +package jwt_test + +import ( + "crypto/ecdsa" + "io/ioutil" + "strings" + "testing" + + "github.com/dgrijalva/jwt-go" +) + +var ecdsaTestData = []struct { + name string + keys map[string]string + tokenString string + alg string + claims map[string]interface{} + valid bool +}{ + { + "Basic ES256", + map[string]string{"private": "test/ec256-private.pem", "public": "test/ec256-public.pem"}, + "eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJmb28iOiJiYXIifQ.feG39E-bn8HXAKhzDZq7yEAPWYDhZlwTn3sePJnU9VrGMmwdXAIEyoOnrjreYlVM_Z4N13eK9-TmMTWyfKJtHQ", + "ES256", + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "Basic ES384", + map[string]string{"private": "test/ec384-private.pem", "public": "test/ec384-public.pem"}, + "eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzM4NCJ9.eyJmb28iOiJiYXIifQ.ngAfKMbJUh0WWubSIYe5GMsA-aHNKwFbJk_wq3lq23aPp8H2anb1rRILIzVR0gUf4a8WzDtrzmiikuPWyCS6CN4-PwdgTk-5nehC7JXqlaBZU05p3toM3nWCwm_LXcld", + "ES384", + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "Basic ES512", + 
map[string]string{"private": "test/ec512-private.pem", "public": "test/ec512-public.pem"}, + "eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzUxMiJ9.eyJmb28iOiJiYXIifQ.AAU0TvGQOcdg2OvrwY73NHKgfk26UDekh9Prz-L_iWuTBIBqOFCWwwLsRiHB1JOddfKAls5do1W0jR_F30JpVd-6AJeTjGKA4C1A1H6gIKwRY0o_tFDIydZCl_lMBMeG5VNFAjO86-WCSKwc3hqaGkq1MugPRq_qrF9AVbuEB4JPLyL5", + "ES512", + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "basic ES256 invalid: foo => bar", + map[string]string{"private": "test/ec256-private.pem", "public": "test/ec256-public.pem"}, + "eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.MEQCIHoSJnmGlPaVQDqacx_2XlXEhhqtWceVopjomc2PJLtdAiAUTeGPoNYxZw0z8mgOnnIcjoxRuNDVZvybRZF3wR1l8W", + "ES256", + map[string]interface{}{"foo": "bar"}, + false, + }, +} + +func TestECDSAVerify(t *testing.T) { + for _, data := range ecdsaTestData { + var err error + + key, _ := ioutil.ReadFile(data.keys["public"]) + + var ecdsaKey *ecdsa.PublicKey + if ecdsaKey, err = jwt.ParseECPublicKeyFromPEM(key); err != nil { + t.Errorf("Unable to parse ECDSA public key: %v", err) + } + + parts := strings.Split(data.tokenString, ".") + + method := jwt.GetSigningMethod(data.alg) + err = method.Verify(strings.Join(parts[0:2], "."), parts[2], ecdsaKey) + if data.valid && err != nil { + t.Errorf("[%v] Error while verifying key: %v", data.name, err) + } + if !data.valid && err == nil { + t.Errorf("[%v] Invalid key passed validation", data.name) + } + } +} + +func TestECDSASign(t *testing.T) { + for _, data := range ecdsaTestData { + var err error + key, _ := ioutil.ReadFile(data.keys["private"]) + + var ecdsaKey *ecdsa.PrivateKey + if ecdsaKey, err = jwt.ParseECPrivateKeyFromPEM(key); err != nil { + t.Errorf("Unable to parse ECDSA private key: %v", err) + } + + if data.valid { + parts := strings.Split(data.tokenString, ".") + method := jwt.GetSigningMethod(data.alg) + sig, err := method.Sign(strings.Join(parts[0:2], "."), ecdsaKey) + if err != nil { + t.Errorf("[%v] Error signing token: %v", data.name, err) + } 
+ if sig == parts[2] { + t.Errorf("[%v] Identical signatures\nbefore:\n%v\nafter:\n%v", data.name, parts[2], sig) + } + } + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/ecdsa_utils.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/ecdsa_utils.go new file mode 100644 index 0000000000..d19624b726 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/ecdsa_utils.go @@ -0,0 +1,67 @@ +package jwt + +import ( + "crypto/ecdsa" + "crypto/x509" + "encoding/pem" + "errors" +) + +var ( + ErrNotECPublicKey = errors.New("Key is not a valid ECDSA public key") + ErrNotECPrivateKey = errors.New("Key is not a valid ECDSA private key") +) + +// Parse PEM encoded Elliptic Curve Private Key Structure +func ParseECPrivateKeyFromPEM(key []byte) (*ecdsa.PrivateKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + // Parse the key + var parsedKey interface{} + if parsedKey, err = x509.ParseECPrivateKey(block.Bytes); err != nil { + return nil, err + } + + var pkey *ecdsa.PrivateKey + var ok bool + if pkey, ok = parsedKey.(*ecdsa.PrivateKey); !ok { + return nil, ErrNotECPrivateKey + } + + return pkey, nil +} + +// Parse PEM encoded PKCS1 or PKCS8 public key +func ParseECPublicKeyFromPEM(key []byte) (*ecdsa.PublicKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + // Parse the key + var parsedKey interface{} + if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil { + if cert, err := x509.ParseCertificate(block.Bytes); err == nil { + parsedKey = cert.PublicKey + } else { + return nil, err + } + } + + var pkey *ecdsa.PublicKey + var ok bool + if pkey, ok = parsedKey.(*ecdsa.PublicKey); !ok { + return nil, ErrNotECPublicKey + } + + return pkey, nil +} diff --git 
a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/errors.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/errors.go new file mode 100644 index 0000000000..1c93024aad --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/errors.go @@ -0,0 +1,59 @@ +package jwt + +import ( + "errors" +) + +// Error constants +var ( + ErrInvalidKey = errors.New("key is invalid") + ErrInvalidKeyType = errors.New("key is of invalid type") + ErrHashUnavailable = errors.New("the requested hash function is unavailable") +) + +// The errors that might occur when parsing and validating a token +const ( + ValidationErrorMalformed uint32 = 1 << iota // Token is malformed + ValidationErrorUnverifiable // Token could not be verified because of signing problems + ValidationErrorSignatureInvalid // Signature validation failed + + // Standard Claim validation errors + ValidationErrorAudience // AUD validation failed + ValidationErrorExpired // EXP validation failed + ValidationErrorIssuedAt // IAT validation failed + ValidationErrorIssuer // ISS validation failed + ValidationErrorNotValidYet // NBF validation failed + ValidationErrorId // JTI validation failed + ValidationErrorClaimsInvalid // Generic claims validation error +) + +// Helper for constructing a ValidationError with a string error message +func NewValidationError(errorText string, errorFlags uint32) *ValidationError { + return &ValidationError{ + text: errorText, + Errors: errorFlags, + } +} + +// The error from Parse if token is not valid +type ValidationError struct { + Inner error // stores the error returned by external dependencies, i.e.: KeyFunc + Errors uint32 // bitfield. see ValidationError... 
constants + text string // errors that do not have a valid error just have text +} + +// Validation error is an error type +func (e ValidationError) Error() string { + if e.Inner != nil { + return e.Inner.Error() + } else if e.text != "" { + return e.text + } else { + return "token is invalid" + } +} + +// No errors +func (e *ValidationError) valid() bool { + return e.Errors == 0 +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/example_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/example_test.go new file mode 100644 index 0000000000..ae8b788a0b --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/example_test.go @@ -0,0 +1,114 @@ +package jwt_test + +import ( + "fmt" + "github.com/dgrijalva/jwt-go" + "time" +) + +// Example (atypical) using the StandardClaims type by itself to parse a token. +// The StandardClaims type is designed to be embedded into your custom types +// to provide standard validation features. You can use it alone, but there's +// no way to retrieve other fields after parsing. +// See the CustomClaimsType example for intended usage. +func ExampleNewWithClaims_standardClaims() { + mySigningKey := []byte("AllYourBase") + + // Create the Claims + claims := &jwt.StandardClaims{ + ExpiresAt: 15000, + Issuer: "test", + } + + token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims) + ss, err := token.SignedString(mySigningKey) + fmt.Printf("%v %v", ss, err) + //Output: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1MDAwLCJpc3MiOiJ0ZXN0In0.QsODzZu3lUZMVdhbO76u3Jv02iYCvEHcYVUI1kOWEU0 +} + +// Example creating a token using a custom claims type. The StandardClaim is embedded +// in the custom type to allow for easy encoding, parsing and validation of standard claims. 
+func ExampleNewWithClaims_customClaimsType() { + mySigningKey := []byte("AllYourBase") + + type MyCustomClaims struct { + Foo string `json:"foo"` + jwt.StandardClaims + } + + // Create the Claims + claims := MyCustomClaims{ + "bar", + jwt.StandardClaims{ + ExpiresAt: 15000, + Issuer: "test", + }, + } + + token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims) + ss, err := token.SignedString(mySigningKey) + fmt.Printf("%v %v", ss, err) + //Output: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIiLCJleHAiOjE1MDAwLCJpc3MiOiJ0ZXN0In0.HE7fK0xOQwFEr4WDgRWj4teRPZ6i3GLwD5YCm6Pwu_c +} + +// Example creating a token using a custom claims type. The StandardClaim is embedded +// in the custom type to allow for easy encoding, parsing and validation of standard claims. +func ExampleParseWithClaims_customClaimsType() { + tokenString := "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIiLCJleHAiOjE1MDAwLCJpc3MiOiJ0ZXN0In0.HE7fK0xOQwFEr4WDgRWj4teRPZ6i3GLwD5YCm6Pwu_c" + + type MyCustomClaims struct { + Foo string `json:"foo"` + jwt.StandardClaims + } + + // sample token is expired. override time so it parses as valid + at(time.Unix(0, 0), func() { + token, err := jwt.ParseWithClaims(tokenString, &MyCustomClaims{}, func(token *jwt.Token) (interface{}, error) { + return []byte("AllYourBase"), nil + }) + + if claims, ok := token.Claims.(*MyCustomClaims); ok && token.Valid { + fmt.Printf("%v %v", claims.Foo, claims.StandardClaims.ExpiresAt) + } else { + fmt.Println(err) + } + }) + + // Output: bar 15000 +} + +// Override time value for tests. Restore default value after. +func at(t time.Time, f func()) { + jwt.TimeFunc = func() time.Time { + return t + } + f() + jwt.TimeFunc = time.Now +} + +// An example of parsing the error types using bitfield checks +func ExampleParse_errorChecking() { + // Token from another example. 
This token is expired + var tokenString = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIiLCJleHAiOjE1MDAwLCJpc3MiOiJ0ZXN0In0.HE7fK0xOQwFEr4WDgRWj4teRPZ6i3GLwD5YCm6Pwu_c" + + token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) { + return []byte("AllYourBase"), nil + }) + + if token.Valid { + fmt.Println("You look nice today") + } else if ve, ok := err.(*jwt.ValidationError); ok { + if ve.Errors&jwt.ValidationErrorMalformed != 0 { + fmt.Println("That's not even a token") + } else if ve.Errors&(jwt.ValidationErrorExpired|jwt.ValidationErrorNotValidYet) != 0 { + // Token is either expired or not active yet + fmt.Println("Timing is everything") + } else { + fmt.Println("Couldn't handle this token:", err) + } + } else { + fmt.Println("Couldn't handle this token:", err) + } + + // Output: Timing is everything +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/hmac.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/hmac.go new file mode 100644 index 0000000000..addbe5d401 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/hmac.go @@ -0,0 +1,95 @@ +package jwt + +import ( + "crypto" + "crypto/hmac" + "errors" +) + +// Implements the HMAC-SHA family of signing methods signing methods +// Expects key type of []byte for both signing and validation +type SigningMethodHMAC struct { + Name string + Hash crypto.Hash +} + +// Specific instances for HS256 and company +var ( + SigningMethodHS256 *SigningMethodHMAC + SigningMethodHS384 *SigningMethodHMAC + SigningMethodHS512 *SigningMethodHMAC + ErrSignatureInvalid = errors.New("signature is invalid") +) + +func init() { + // HS256 + SigningMethodHS256 = &SigningMethodHMAC{"HS256", crypto.SHA256} + RegisterSigningMethod(SigningMethodHS256.Alg(), func() SigningMethod { + return SigningMethodHS256 + }) + + // HS384 + SigningMethodHS384 = &SigningMethodHMAC{"HS384", crypto.SHA384} + 
RegisterSigningMethod(SigningMethodHS384.Alg(), func() SigningMethod { + return SigningMethodHS384 + }) + + // HS512 + SigningMethodHS512 = &SigningMethodHMAC{"HS512", crypto.SHA512} + RegisterSigningMethod(SigningMethodHS512.Alg(), func() SigningMethod { + return SigningMethodHS512 + }) +} + +func (m *SigningMethodHMAC) Alg() string { + return m.Name +} + +// Verify the signature of HSXXX tokens. Returns nil if the signature is valid. +func (m *SigningMethodHMAC) Verify(signingString, signature string, key interface{}) error { + // Verify the key is the right type + keyBytes, ok := key.([]byte) + if !ok { + return ErrInvalidKeyType + } + + // Decode signature, for comparison + sig, err := DecodeSegment(signature) + if err != nil { + return err + } + + // Can we use the specified hashing method? + if !m.Hash.Available() { + return ErrHashUnavailable + } + + // This signing method is symmetric, so we validate the signature + // by reproducing the signature from the signing string and key, then + // comparing that against the provided signature. + hasher := hmac.New(m.Hash.New, keyBytes) + hasher.Write([]byte(signingString)) + if !hmac.Equal(sig, hasher.Sum(nil)) { + return ErrSignatureInvalid + } + + // No validation errors. Signature is good. + return nil +} + +// Implements the Sign method from SigningMethod for this signing method. 
+// Key must be []byte +func (m *SigningMethodHMAC) Sign(signingString string, key interface{}) (string, error) { + if keyBytes, ok := key.([]byte); ok { + if !m.Hash.Available() { + return "", ErrHashUnavailable + } + + hasher := hmac.New(m.Hash.New, keyBytes) + hasher.Write([]byte(signingString)) + + return EncodeSegment(hasher.Sum(nil)), nil + } + + return "", ErrInvalidKeyType +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/hmac_example_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/hmac_example_test.go new file mode 100644 index 0000000000..0027831474 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/hmac_example_test.go @@ -0,0 +1,66 @@ +package jwt_test + +import ( + "fmt" + "github.com/dgrijalva/jwt-go" + "io/ioutil" + "time" +) + +// For HMAC signing method, the key can be any []byte. It is recommended to generate +// a key using crypto/rand or something equivalent. You need the same key for signing +// and validating. +var hmacSampleSecret []byte + +func init() { + // Load sample key data + if keyData, e := ioutil.ReadFile("test/hmacTestKey"); e == nil { + hmacSampleSecret = keyData + } else { + panic(e) + } +} + +// Example creating, signing, and encoding a JWT token using the HMAC signing method +func ExampleNew_hmac() { + // Create a new token object, specifying signing method and the claims + // you would like it to contain. 
+ token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{ + "foo": "bar", + "nbf": time.Date(2015, 10, 10, 12, 0, 0, 0, time.UTC).Unix(), + }) + + // Sign and get the complete encoded token as a string using the secret + tokenString, err := token.SignedString(hmacSampleSecret) + + fmt.Println(tokenString, err) + // Output: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIiLCJuYmYiOjE0NDQ0Nzg0MDB9.u1riaD1rW97opCoAuRCTy4w58Br-Zk-bh7vLiRIsrpU +} + +// Example parsing and validating a token using the HMAC signing method +func ExampleParse_hmac() { + // sample token string taken from the New example + tokenString := "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIiLCJuYmYiOjE0NDQ0Nzg0MDB9.u1riaD1rW97opCoAuRCTy4w58Br-Zk-bh7vLiRIsrpU" + + // Parse takes the token string and a function for looking up the key. The latter is especially + // useful if you use multiple keys for your application. The standard is to use 'kid' in the + // head of the token to identify which key to use, but the parsed token (head and claims) is provided + // to the callback, providing flexibility. + token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) { + // Don't forget to validate the alg is what you expect: + if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok { + return nil, fmt.Errorf("Unexpected signing method: %v", token.Header["alg"]) + } + + // hmacSampleSecret is a []byte containing your secret, e.g. 
[]byte("my_secret_key") + return hmacSampleSecret, nil + }) + + if claims, ok := token.Claims.(jwt.MapClaims); ok && token.Valid { + fmt.Println(claims["foo"], claims["nbf"]) + } else { + fmt.Println(err) + } + + // Output: bar 1.4444784e+09 +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/hmac_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/hmac_test.go new file mode 100644 index 0000000000..c7e114f4f9 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/hmac_test.go @@ -0,0 +1,91 @@ +package jwt_test + +import ( + "github.com/dgrijalva/jwt-go" + "io/ioutil" + "strings" + "testing" +) + +var hmacTestData = []struct { + name string + tokenString string + alg string + claims map[string]interface{} + valid bool +}{ + { + "web sample", + "eyJ0eXAiOiJKV1QiLA0KICJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ.dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk", + "HS256", + map[string]interface{}{"iss": "joe", "exp": 1300819380, "http://example.com/is_root": true}, + true, + }, + { + "HS384", + "eyJhbGciOiJIUzM4NCIsInR5cCI6IkpXVCJ9.eyJleHAiOjEuMzAwODE5MzhlKzA5LCJodHRwOi8vZXhhbXBsZS5jb20vaXNfcm9vdCI6dHJ1ZSwiaXNzIjoiam9lIn0.KWZEuOD5lbBxZ34g7F-SlVLAQ_r5KApWNWlZIIMyQVz5Zs58a7XdNzj5_0EcNoOy", + "HS384", + map[string]interface{}{"iss": "joe", "exp": 1300819380, "http://example.com/is_root": true}, + true, + }, + { + "HS512", + "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjEuMzAwODE5MzhlKzA5LCJodHRwOi8vZXhhbXBsZS5jb20vaXNfcm9vdCI6dHJ1ZSwiaXNzIjoiam9lIn0.CN7YijRX6Aw1n2jyI2Id1w90ja-DEMYiWixhYCyHnrZ1VfJRaFQz1bEbjjA5Fn4CLYaUG432dEYmSbS4Saokmw", + "HS512", + map[string]interface{}{"iss": "joe", "exp": 1300819380, "http://example.com/is_root": true}, + true, + }, + { + "web sample: invalid", + 
"eyJ0eXAiOiJKV1QiLA0KICJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ.dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXo", + "HS256", + map[string]interface{}{"iss": "joe", "exp": 1300819380, "http://example.com/is_root": true}, + false, + }, +} + +// Sample data from http://tools.ietf.org/html/draft-jones-json-web-signature-04#appendix-A.1 +var hmacTestKey, _ = ioutil.ReadFile("test/hmacTestKey") + +func TestHMACVerify(t *testing.T) { + for _, data := range hmacTestData { + parts := strings.Split(data.tokenString, ".") + + method := jwt.GetSigningMethod(data.alg) + err := method.Verify(strings.Join(parts[0:2], "."), parts[2], hmacTestKey) + if data.valid && err != nil { + t.Errorf("[%v] Error while verifying key: %v", data.name, err) + } + if !data.valid && err == nil { + t.Errorf("[%v] Invalid key passed validation", data.name) + } + } +} + +func TestHMACSign(t *testing.T) { + for _, data := range hmacTestData { + if data.valid { + parts := strings.Split(data.tokenString, ".") + method := jwt.GetSigningMethod(data.alg) + sig, err := method.Sign(strings.Join(parts[0:2], "."), hmacTestKey) + if err != nil { + t.Errorf("[%v] Error signing token: %v", data.name, err) + } + if sig != parts[2] { + t.Errorf("[%v] Incorrect signature.\nwas:\n%v\nexpecting:\n%v", data.name, sig, parts[2]) + } + } + } +} + +func BenchmarkHS256Signing(b *testing.B) { + benchmarkSigning(b, jwt.SigningMethodHS256, hmacTestKey) +} + +func BenchmarkHS384Signing(b *testing.B) { + benchmarkSigning(b, jwt.SigningMethodHS384, hmacTestKey) +} + +func BenchmarkHS512Signing(b *testing.B) { + benchmarkSigning(b, jwt.SigningMethodHS512, hmacTestKey) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/http_example_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/http_example_test.go new file mode 100644 index 0000000000..82e9c50a41 --- /dev/null +++ 
b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/http_example_test.go @@ -0,0 +1,216 @@ +package jwt_test + +// Example HTTP auth using asymmetric crypto/RSA keys +// This is based on a (now outdated) example at https://gist.github.com/cryptix/45c33ecf0ae54828e63b + +import ( + "bytes" + "crypto/rsa" + "fmt" + "github.com/dgrijalva/jwt-go" + "github.com/dgrijalva/jwt-go/request" + "io" + "io/ioutil" + "log" + "net" + "net/http" + "net/url" + "strings" + "time" +) + +// location of the files used for signing and verification +const ( + privKeyPath = "test/sample_key" // openssl genrsa -out app.rsa keysize + pubKeyPath = "test/sample_key.pub" // openssl rsa -in app.rsa -pubout > app.rsa.pub +) + +var ( + verifyKey *rsa.PublicKey + signKey *rsa.PrivateKey + serverPort int + // storing sample username/password pairs + // don't do this on a real server + users = map[string]string{ + "test": "known", + } +) + +// read the key files before starting http handlers +func init() { + signBytes, err := ioutil.ReadFile(privKeyPath) + fatal(err) + + signKey, err = jwt.ParseRSAPrivateKeyFromPEM(signBytes) + fatal(err) + + verifyBytes, err := ioutil.ReadFile(pubKeyPath) + fatal(err) + + verifyKey, err = jwt.ParseRSAPublicKeyFromPEM(verifyBytes) + fatal(err) + + http.HandleFunc("/authenticate", authHandler) + http.HandleFunc("/restricted", restrictedHandler) + + // Setup listener + listener, err := net.ListenTCP("tcp", &net.TCPAddr{}) + serverPort = listener.Addr().(*net.TCPAddr).Port + + log.Println("Listening...") + go func() { + fatal(http.Serve(listener, nil)) + }() +} + +var start func() + +func fatal(err error) { + if err != nil { + log.Fatal(err) + } +} + +// Define some custom types were going to use within our tokens +type CustomerInfo struct { + Name string + Kind string +} + +type CustomClaimsExample struct { + *jwt.StandardClaims + TokenType string + CustomerInfo +} + +func Example_getTokenViaHTTP() { + // See func authHandler for an example auth 
handler that produces a token + res, err := http.PostForm(fmt.Sprintf("http://localhost:%v/authenticate", serverPort), url.Values{ + "user": {"test"}, + "pass": {"known"}, + }) + if err != nil { + fatal(err) + } + + if res.StatusCode != 200 { + fmt.Println("Unexpected status code", res.StatusCode) + } + + // Read the token out of the response body + buf := new(bytes.Buffer) + io.Copy(buf, res.Body) + res.Body.Close() + tokenString := strings.TrimSpace(buf.String()) + + // Parse the token + token, err := jwt.ParseWithClaims(tokenString, &CustomClaimsExample{}, func(token *jwt.Token) (interface{}, error) { + // since we only use the one private key to sign the tokens, + // we also only use its public counter part to verify + return verifyKey, nil + }) + fatal(err) + + claims := token.Claims.(*CustomClaimsExample) + fmt.Println(claims.CustomerInfo.Name) + + //Output: test +} + +func Example_useTokenViaHTTP() { + + // Make a sample token + // In a real world situation, this token will have been acquired from + // some other API call (see Example_getTokenViaHTTP) + token, err := createToken("foo") + fatal(err) + + // Make request. 
See func restrictedHandler for example request processor + req, err := http.NewRequest("GET", fmt.Sprintf("http://localhost:%v/restricted", serverPort), nil) + fatal(err) + req.Header.Set("Authorization", fmt.Sprintf("Bearer %v", token)) + res, err := http.DefaultClient.Do(req) + fatal(err) + + // Read the response body + buf := new(bytes.Buffer) + io.Copy(buf, res.Body) + res.Body.Close() + fmt.Println(buf.String()) + + // Output: Welcome, foo +} + +func createToken(user string) (string, error) { + // create a signer for rsa 256 + t := jwt.New(jwt.GetSigningMethod("RS256")) + + // set our claims + t.Claims = &CustomClaimsExample{ + &jwt.StandardClaims{ + // set the expire time + // see http://tools.ietf.org/html/draft-ietf-oauth-json-web-token-20#section-4.1.4 + ExpiresAt: time.Now().Add(time.Minute * 1).Unix(), + }, + "level1", + CustomerInfo{user, "human"}, + } + + // Creat token string + return t.SignedString(signKey) +} + +// reads the form values, checks them and creates the token +func authHandler(w http.ResponseWriter, r *http.Request) { + // make sure its post + if r.Method != "POST" { + w.WriteHeader(http.StatusBadRequest) + fmt.Fprintln(w, "No POST", r.Method) + return + } + + user := r.FormValue("user") + pass := r.FormValue("pass") + + log.Printf("Authenticate: user[%s] pass[%s]\n", user, pass) + + // check values + if user != "test" || pass != "known" { + w.WriteHeader(http.StatusForbidden) + fmt.Fprintln(w, "Wrong info") + return + } + + tokenString, err := createToken(user) + if err != nil { + w.WriteHeader(http.StatusInternalServerError) + fmt.Fprintln(w, "Sorry, error while Signing Token!") + log.Printf("Token Signing error: %v\n", err) + return + } + + w.Header().Set("Content-Type", "application/jwt") + w.WriteHeader(http.StatusOK) + fmt.Fprintln(w, tokenString) +} + +// only accessible with a valid token +func restrictedHandler(w http.ResponseWriter, r *http.Request) { + // Get token from request + token, err := 
request.ParseFromRequestWithClaims(r, request.OAuth2Extractor, &CustomClaimsExample{}, func(token *jwt.Token) (interface{}, error) { + // since we only use the one private key to sign the tokens, + // we also only use its public counter part to verify + return verifyKey, nil + }) + + // If the token is missing or invalid, return error + if err != nil { + w.WriteHeader(http.StatusUnauthorized) + fmt.Fprintln(w, "Invalid token:", err) + return + } + + // Token is valid + fmt.Fprintln(w, "Welcome,", token.Claims.(*CustomClaimsExample).Name) + return +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/map_claims.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/map_claims.go new file mode 100644 index 0000000000..291213c460 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/map_claims.go @@ -0,0 +1,94 @@ +package jwt + +import ( + "encoding/json" + "errors" + // "fmt" +) + +// Claims type that uses the map[string]interface{} for JSON decoding +// This is the default claims type if you don't supply one +type MapClaims map[string]interface{} + +// Compares the aud claim against cmp. +// If required is false, this method will return true if the value matches or is unset +func (m MapClaims) VerifyAudience(cmp string, req bool) bool { + aud, _ := m["aud"].(string) + return verifyAud(aud, cmp, req) +} + +// Compares the exp claim against cmp. +// If required is false, this method will return true if the value matches or is unset +func (m MapClaims) VerifyExpiresAt(cmp int64, req bool) bool { + switch exp := m["exp"].(type) { + case float64: + return verifyExp(int64(exp), cmp, req) + case json.Number: + v, _ := exp.Int64() + return verifyExp(v, cmp, req) + } + return req == false +} + +// Compares the iat claim against cmp. 
+// If required is false, this method will return true if the value matches or is unset
+func (m MapClaims) VerifyIssuedAt(cmp int64, req bool) bool {
+	switch iat := m["iat"].(type) {
+	case float64:
+		return verifyIat(int64(iat), cmp, req)
+	case json.Number:
+		v, _ := iat.Int64()
+		return verifyIat(v, cmp, req)
+	}
+	return req == false
+}
+
+// Compares the iss claim against cmp.
+// If required is false, this method will return true if the value matches or is unset
+func (m MapClaims) VerifyIssuer(cmp string, req bool) bool {
+	iss, _ := m["iss"].(string)
+	return verifyIss(iss, cmp, req)
+}
+
+// Compares the nbf claim against cmp.
+// If required is false, this method will return true if the value matches or is unset
+func (m MapClaims) VerifyNotBefore(cmp int64, req bool) bool {
+	switch nbf := m["nbf"].(type) {
+	case float64:
+		return verifyNbf(int64(nbf), cmp, req)
+	case json.Number:
+		v, _ := nbf.Int64()
+		return verifyNbf(v, cmp, req)
+	}
+	return req == false
+}
+
+// Validates time based claims "exp, iat, nbf".
+// There is no accounting for clock skew.
+// As well, if any of the above claims are not in the token, it will still
+// be considered a valid claim.
+func (m MapClaims) Valid() error {
+	vErr := new(ValidationError)
+	now := TimeFunc().Unix()
+
+	if m.VerifyExpiresAt(now, false) == false {
+		vErr.Inner = errors.New("Token is expired")
+		vErr.Errors |= ValidationErrorExpired
+	}
+
+	if m.VerifyIssuedAt(now, false) == false {
+		vErr.Inner = errors.New("Token used before issued")
+		vErr.Errors |= ValidationErrorIssuedAt
+	}
+
+	if m.VerifyNotBefore(now, false) == false {
+		vErr.Inner = errors.New("Token is not valid yet")
+		vErr.Errors |= ValidationErrorNotValidYet
+	}
+
+	if vErr.valid() {
+		return nil
+	}
+
+	return vErr
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/none.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/none.go
new file mode 100644
index 0000000000..f04d189d06
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/none.go
@@ -0,0 +1,52 @@
+package jwt
+
+// Implements the none signing method. This is required by the spec
+// but you probably should never use it.
+var SigningMethodNone *signingMethodNone
+
+const UnsafeAllowNoneSignatureType unsafeNoneMagicConstant = "none signing method allowed"
+
+var NoneSignatureTypeDisallowedError error
+
+type signingMethodNone struct{}
+type unsafeNoneMagicConstant string
+
+func init() {
+	SigningMethodNone = &signingMethodNone{}
+	NoneSignatureTypeDisallowedError = NewValidationError("'none' signature type is not allowed", ValidationErrorSignatureInvalid)
+
+	RegisterSigningMethod(SigningMethodNone.Alg(), func() SigningMethod {
+		return SigningMethodNone
+	})
+}
+
+func (m *signingMethodNone) Alg() string {
+	return "none"
+}
+
+// Only allow 'none' alg type if UnsafeAllowNoneSignatureType is specified as the key
+func (m *signingMethodNone) Verify(signingString, signature string, key interface{}) (err error) {
+	// Key must be UnsafeAllowNoneSignatureType to prevent accidentally
+	// accepting 'none' signing method
+	if _, ok := key.(unsafeNoneMagicConstant); !ok {
+		return NoneSignatureTypeDisallowedError
+	}
+	// If signing method is none, signature must be an empty string
+	if signature != "" {
+		return NewValidationError(
+			"'none' signing method with non-empty signature",
+			ValidationErrorSignatureInvalid,
+		)
+	}
+
+	// Accept 'none' signing method.
+ return nil +} + +// Only allow 'none' signing if UnsafeAllowNoneSignatureType is specified as the key +func (m *signingMethodNone) Sign(signingString string, key interface{}) (string, error) { + if _, ok := key.(unsafeNoneMagicConstant); ok { + return "", nil + } + return "", NoneSignatureTypeDisallowedError +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/none_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/none_test.go new file mode 100644 index 0000000000..29a69efef7 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/none_test.go @@ -0,0 +1,72 @@ +package jwt_test + +import ( + "github.com/dgrijalva/jwt-go" + "strings" + "testing" +) + +var noneTestData = []struct { + name string + tokenString string + alg string + key interface{} + claims map[string]interface{} + valid bool +}{ + { + "Basic", + "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIifQ.", + "none", + jwt.UnsafeAllowNoneSignatureType, + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "Basic - no key", + "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIifQ.", + "none", + nil, + map[string]interface{}{"foo": "bar"}, + false, + }, + { + "Signed", + "eyJhbGciOiJSUzM4NCIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.W-jEzRfBigtCWsinvVVuldiuilzVdU5ty0MvpLaSaqK9PlAWWlDQ1VIQ_qSKzwL5IXaZkvZFJXT3yL3n7OUVu7zCNJzdwznbC8Z-b0z2lYvcklJYi2VOFRcGbJtXUqgjk2oGsiqUMUMOLP70TTefkpsgqDxbRh9CDUfpOJgW-dU7cmgaoswe3wjUAUi6B6G2YEaiuXC0XScQYSYVKIzgKXJV8Zw-7AN_DBUI4GkTpsvQ9fVVjZM9csQiEXhYekyrKu1nu_POpQonGd8yqkIyXPECNmmqH5jH4sFiF67XhD7_JpkvLziBpI-uh86evBUadmHhb9Otqw3uV3NTaXLzJw", + "none", + jwt.UnsafeAllowNoneSignatureType, + map[string]interface{}{"foo": "bar"}, + false, + }, +} + +func TestNoneVerify(t *testing.T) { + for _, data := range noneTestData { + parts := strings.Split(data.tokenString, ".") + + method := jwt.GetSigningMethod(data.alg) + err := method.Verify(strings.Join(parts[0:2], "."), parts[2], data.key) + if 
data.valid && err != nil { + t.Errorf("[%v] Error while verifying key: %v", data.name, err) + } + if !data.valid && err == nil { + t.Errorf("[%v] Invalid key passed validation", data.name) + } + } +} + +func TestNoneSign(t *testing.T) { + for _, data := range noneTestData { + if data.valid { + parts := strings.Split(data.tokenString, ".") + method := jwt.GetSigningMethod(data.alg) + sig, err := method.Sign(strings.Join(parts[0:2], "."), data.key) + if err != nil { + t.Errorf("[%v] Error signing token: %v", data.name, err) + } + if sig != parts[2] { + t.Errorf("[%v] Incorrect signature.\nwas:\n%v\nexpecting:\n%v", data.name, sig, parts[2]) + } + } + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/parser.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/parser.go new file mode 100644 index 0000000000..d6901d9adb --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/parser.go @@ -0,0 +1,148 @@ +package jwt + +import ( + "bytes" + "encoding/json" + "fmt" + "strings" +) + +type Parser struct { + ValidMethods []string // If populated, only these methods will be considered valid + UseJSONNumber bool // Use JSON Number format in JSON decoder + SkipClaimsValidation bool // Skip claims validation during token parsing +} + +// Parse, validate, and return a token. +// keyFunc will receive the parsed token and should return the key for validating. 
+// If everything is kosher, err will be nil
+func (p *Parser) Parse(tokenString string, keyFunc Keyfunc) (*Token, error) {
+	return p.ParseWithClaims(tokenString, MapClaims{}, keyFunc)
+}
+
+func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) {
+	token, parts, err := p.ParseUnverified(tokenString, claims)
+	if err != nil {
+		return token, err
+	}
+
+	// Verify signing method is in the required set
+	if p.ValidMethods != nil {
+		var signingMethodValid = false
+		var alg = token.Method.Alg()
+		for _, m := range p.ValidMethods {
+			if m == alg {
+				signingMethodValid = true
+				break
+			}
+		}
+		if !signingMethodValid {
+			// signing method is not in the listed set
+			return token, NewValidationError(fmt.Sprintf("signing method %v is invalid", alg), ValidationErrorSignatureInvalid)
+		}
+	}
+
+	// Lookup key
+	var key interface{}
+	if keyFunc == nil {
+		// keyFunc was not provided. short circuiting validation
+		return token, NewValidationError("no Keyfunc was provided.", ValidationErrorUnverifiable)
+	}
+	if key, err = keyFunc(token); err != nil {
+		// keyFunc returned an error
+		if ve, ok := err.(*ValidationError); ok {
+			return token, ve
+		}
+		return token, &ValidationError{Inner: err, Errors: ValidationErrorUnverifiable}
+	}
+
+	vErr := &ValidationError{}
+
+	// Validate Claims
+	if !p.SkipClaimsValidation {
+		if err := token.Claims.Valid(); err != nil {
+
+			// If the Claims Valid returned an error, check if it is a validation error,
+			// If it was another error type, create a ValidationError with a generic ClaimsInvalid flag set
+			if e, ok := err.(*ValidationError); !ok {
+				vErr = &ValidationError{Inner: err, Errors: ValidationErrorClaimsInvalid}
+			} else {
+				vErr = e
+			}
+		}
+	}
+
+	// Perform validation
+	token.Signature = parts[2]
+	if err = token.Method.Verify(strings.Join(parts[0:2], "."), token.Signature, key); err != nil {
+		vErr.Inner = err
+		vErr.Errors |= ValidationErrorSignatureInvalid
+	}
+
+	if vErr.valid() {
+		token.Valid = true
+		return token, nil
+	}
+
+	return token, vErr
+}
+
+// WARNING: Don't use this method unless you know what you're doing
+//
+// This method parses the token but doesn't validate the signature. It's only
+// ever useful in cases where you know the signature is valid (because it has
+// been checked previously in the stack) and you want to extract values from
+// it.
+func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Token, parts []string, err error) {
+	parts = strings.Split(tokenString, ".")
+	if len(parts) != 3 {
+		return nil, parts, NewValidationError("token contains an invalid number of segments", ValidationErrorMalformed)
+	}
+
+	token = &Token{Raw: tokenString}
+
+	// parse Header
+	var headerBytes []byte
+	if headerBytes, err = DecodeSegment(parts[0]); err != nil {
+		if strings.HasPrefix(strings.ToLower(tokenString), "bearer ") {
+			return token, parts, NewValidationError("tokenstring should not contain 'bearer '", ValidationErrorMalformed)
+		}
+		return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
+	}
+	if err = json.Unmarshal(headerBytes, &token.Header); err != nil {
+		return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
+	}
+
+	// parse Claims
+	var claimBytes []byte
+	token.Claims = claims
+
+	if claimBytes, err = DecodeSegment(parts[1]); err != nil {
+		return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
+	}
+	dec := json.NewDecoder(bytes.NewBuffer(claimBytes))
+	if p.UseJSONNumber {
+		dec.UseNumber()
+	}
+	// JSON Decode. Special case for map type to avoid weird pointer behavior
+	if c, ok := token.Claims.(MapClaims); ok {
+		err = dec.Decode(&c)
+	} else {
+		err = dec.Decode(&claims)
+	}
+	// Handle decode error
+	if err != nil {
+		return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
+	}
+
+	// Lookup signature method
+	if method, ok := token.Header["alg"].(string); ok {
+		if token.Method = GetSigningMethod(method); token.Method == nil {
+			return token, parts, NewValidationError("signing method (alg) is unavailable.", ValidationErrorUnverifiable)
+		}
+	} else {
+		return token, parts, NewValidationError("signing method (alg) is unspecified.", ValidationErrorUnverifiable)
+	}
+
+	return token, parts, nil
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/parser_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/parser_test.go
new file mode 100644
index 0000000000..390779785d
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/parser_test.go
@@ -0,0 +1,301 @@
+package jwt_test
+
+import (
+	"crypto/rsa"
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"testing"
+	"time"
+
+	"github.com/dgrijalva/jwt-go"
+	"github.com/dgrijalva/jwt-go/test"
+)
+
+var keyFuncError error = fmt.Errorf("error loading key")
+
+var (
+	jwtTestDefaultKey *rsa.PublicKey
+	defaultKeyFunc    jwt.Keyfunc = func(t *jwt.Token) (interface{}, error) { return jwtTestDefaultKey, nil }
+	emptyKeyFunc      jwt.Keyfunc = func(t *jwt.Token) (interface{}, error) { return nil, nil }
+	errorKeyFunc      jwt.Keyfunc = func(t *jwt.Token) (interface{}, error) { return nil, keyFuncError }
+	nilKeyFunc        jwt.Keyfunc = nil
+)
+
+func init() {
+	jwtTestDefaultKey = test.LoadRSAPublicKeyFromDisk("test/sample_key.pub")
+}
+
+var jwtTestData = []struct {
+	name        string
+	tokenString string
+	keyfunc     jwt.Keyfunc
+	claims      jwt.Claims
+	valid       bool
+	errors      uint32
+	parser      *jwt.Parser
+}{
+	{
+		"basic",
"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIifQ.FhkiHkoESI_cG3NPigFrxEk9Z60_oXrOT2vGm9Pn6RDgYNovYORQmmA0zs1AoAOf09ly2Nx2YAg6ABqAYga1AcMFkJljwxTT5fYphTuqpWdy4BELeSYJx5Ty2gmr8e7RonuUztrdD5WfPqLKMm1Ozp_T6zALpRmwTIW0QPnaBXaQD90FplAg46Iy1UlDKr-Eupy0i5SLch5Q-p2ZpaL_5fnTIUDlxC3pWhJTyx_71qDI-mAA_5lE_VdroOeflG56sSmDxopPEG3bFlSu1eowyBfxtu0_CuVd-M42RU75Zc4Gsj6uV77MBtbMrf4_7M_NUTSgoIF3fRqxrj0NzihIBg", + defaultKeyFunc, + jwt.MapClaims{"foo": "bar"}, + true, + 0, + nil, + }, + { + "basic expired", + "", // autogen + defaultKeyFunc, + jwt.MapClaims{"foo": "bar", "exp": float64(time.Now().Unix() - 100)}, + false, + jwt.ValidationErrorExpired, + nil, + }, + { + "basic nbf", + "", // autogen + defaultKeyFunc, + jwt.MapClaims{"foo": "bar", "nbf": float64(time.Now().Unix() + 100)}, + false, + jwt.ValidationErrorNotValidYet, + nil, + }, + { + "expired and nbf", + "", // autogen + defaultKeyFunc, + jwt.MapClaims{"foo": "bar", "nbf": float64(time.Now().Unix() + 100), "exp": float64(time.Now().Unix() - 100)}, + false, + jwt.ValidationErrorNotValidYet | jwt.ValidationErrorExpired, + nil, + }, + { + "basic invalid", + "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIifQ.EhkiHkoESI_cG3NPigFrxEk9Z60_oXrOT2vGm9Pn6RDgYNovYORQmmA0zs1AoAOf09ly2Nx2YAg6ABqAYga1AcMFkJljwxTT5fYphTuqpWdy4BELeSYJx5Ty2gmr8e7RonuUztrdD5WfPqLKMm1Ozp_T6zALpRmwTIW0QPnaBXaQD90FplAg46Iy1UlDKr-Eupy0i5SLch5Q-p2ZpaL_5fnTIUDlxC3pWhJTyx_71qDI-mAA_5lE_VdroOeflG56sSmDxopPEG3bFlSu1eowyBfxtu0_CuVd-M42RU75Zc4Gsj6uV77MBtbMrf4_7M_NUTSgoIF3fRqxrj0NzihIBg", + defaultKeyFunc, + jwt.MapClaims{"foo": "bar"}, + false, + jwt.ValidationErrorSignatureInvalid, + nil, + }, + { + "basic nokeyfunc", + 
"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIifQ.FhkiHkoESI_cG3NPigFrxEk9Z60_oXrOT2vGm9Pn6RDgYNovYORQmmA0zs1AoAOf09ly2Nx2YAg6ABqAYga1AcMFkJljwxTT5fYphTuqpWdy4BELeSYJx5Ty2gmr8e7RonuUztrdD5WfPqLKMm1Ozp_T6zALpRmwTIW0QPnaBXaQD90FplAg46Iy1UlDKr-Eupy0i5SLch5Q-p2ZpaL_5fnTIUDlxC3pWhJTyx_71qDI-mAA_5lE_VdroOeflG56sSmDxopPEG3bFlSu1eowyBfxtu0_CuVd-M42RU75Zc4Gsj6uV77MBtbMrf4_7M_NUTSgoIF3fRqxrj0NzihIBg", + nilKeyFunc, + jwt.MapClaims{"foo": "bar"}, + false, + jwt.ValidationErrorUnverifiable, + nil, + }, + { + "basic nokey", + "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIifQ.FhkiHkoESI_cG3NPigFrxEk9Z60_oXrOT2vGm9Pn6RDgYNovYORQmmA0zs1AoAOf09ly2Nx2YAg6ABqAYga1AcMFkJljwxTT5fYphTuqpWdy4BELeSYJx5Ty2gmr8e7RonuUztrdD5WfPqLKMm1Ozp_T6zALpRmwTIW0QPnaBXaQD90FplAg46Iy1UlDKr-Eupy0i5SLch5Q-p2ZpaL_5fnTIUDlxC3pWhJTyx_71qDI-mAA_5lE_VdroOeflG56sSmDxopPEG3bFlSu1eowyBfxtu0_CuVd-M42RU75Zc4Gsj6uV77MBtbMrf4_7M_NUTSgoIF3fRqxrj0NzihIBg", + emptyKeyFunc, + jwt.MapClaims{"foo": "bar"}, + false, + jwt.ValidationErrorSignatureInvalid, + nil, + }, + { + "basic errorkey", + "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIifQ.FhkiHkoESI_cG3NPigFrxEk9Z60_oXrOT2vGm9Pn6RDgYNovYORQmmA0zs1AoAOf09ly2Nx2YAg6ABqAYga1AcMFkJljwxTT5fYphTuqpWdy4BELeSYJx5Ty2gmr8e7RonuUztrdD5WfPqLKMm1Ozp_T6zALpRmwTIW0QPnaBXaQD90FplAg46Iy1UlDKr-Eupy0i5SLch5Q-p2ZpaL_5fnTIUDlxC3pWhJTyx_71qDI-mAA_5lE_VdroOeflG56sSmDxopPEG3bFlSu1eowyBfxtu0_CuVd-M42RU75Zc4Gsj6uV77MBtbMrf4_7M_NUTSgoIF3fRqxrj0NzihIBg", + errorKeyFunc, + jwt.MapClaims{"foo": "bar"}, + false, + jwt.ValidationErrorUnverifiable, + nil, + }, + { + "invalid signing method", + "", + defaultKeyFunc, + jwt.MapClaims{"foo": "bar"}, + false, + jwt.ValidationErrorSignatureInvalid, + &jwt.Parser{ValidMethods: []string{"HS256"}}, + }, + { + "valid signing method", + "", + defaultKeyFunc, + jwt.MapClaims{"foo": "bar"}, + true, + 0, + &jwt.Parser{ValidMethods: []string{"RS256", "HS256"}}, + }, + { + "JSON Number", + "", + defaultKeyFunc, + jwt.MapClaims{"foo": 
json.Number("123.4")}, + true, + 0, + &jwt.Parser{UseJSONNumber: true}, + }, + { + "Standard Claims", + "", + defaultKeyFunc, + &jwt.StandardClaims{ + ExpiresAt: time.Now().Add(time.Second * 10).Unix(), + }, + true, + 0, + &jwt.Parser{UseJSONNumber: true}, + }, + { + "JSON Number - basic expired", + "", // autogen + defaultKeyFunc, + jwt.MapClaims{"foo": "bar", "exp": json.Number(fmt.Sprintf("%v", time.Now().Unix()-100))}, + false, + jwt.ValidationErrorExpired, + &jwt.Parser{UseJSONNumber: true}, + }, + { + "JSON Number - basic nbf", + "", // autogen + defaultKeyFunc, + jwt.MapClaims{"foo": "bar", "nbf": json.Number(fmt.Sprintf("%v", time.Now().Unix()+100))}, + false, + jwt.ValidationErrorNotValidYet, + &jwt.Parser{UseJSONNumber: true}, + }, + { + "JSON Number - expired and nbf", + "", // autogen + defaultKeyFunc, + jwt.MapClaims{"foo": "bar", "nbf": json.Number(fmt.Sprintf("%v", time.Now().Unix()+100)), "exp": json.Number(fmt.Sprintf("%v", time.Now().Unix()-100))}, + false, + jwt.ValidationErrorNotValidYet | jwt.ValidationErrorExpired, + &jwt.Parser{UseJSONNumber: true}, + }, + { + "SkipClaimsValidation during token parsing", + "", // autogen + defaultKeyFunc, + jwt.MapClaims{"foo": "bar", "nbf": json.Number(fmt.Sprintf("%v", time.Now().Unix()+100))}, + true, + 0, + &jwt.Parser{UseJSONNumber: true, SkipClaimsValidation: true}, + }, +} + +func TestParser_Parse(t *testing.T) { + privateKey := test.LoadRSAPrivateKeyFromDisk("test/sample_key") + + // Iterate over test data set and run tests + for _, data := range jwtTestData { + // If the token string is blank, use helper function to generate string + if data.tokenString == "" { + data.tokenString = test.MakeSampleToken(data.claims, privateKey) + } + + // Parse the token + var token *jwt.Token + var err error + var parser = data.parser + if parser == nil { + parser = new(jwt.Parser) + } + // Figure out correct claims type + switch data.claims.(type) { + case jwt.MapClaims: + token, err = 
parser.ParseWithClaims(data.tokenString, jwt.MapClaims{}, data.keyfunc) + case *jwt.StandardClaims: + token, err = parser.ParseWithClaims(data.tokenString, &jwt.StandardClaims{}, data.keyfunc) + } + + // Verify result matches expectation + if !reflect.DeepEqual(data.claims, token.Claims) { + t.Errorf("[%v] Claims mismatch. Expecting: %v Got: %v", data.name, data.claims, token.Claims) + } + + if data.valid && err != nil { + t.Errorf("[%v] Error while verifying token: %T:%v", data.name, err, err) + } + + if !data.valid && err == nil { + t.Errorf("[%v] Invalid token passed validation", data.name) + } + + if (err == nil && !token.Valid) || (err != nil && token.Valid) { + t.Errorf("[%v] Inconsistent behavior between returned error and token.Valid", data.name) + } + + if data.errors != 0 { + if err == nil { + t.Errorf("[%v] Expecting error. Didn't get one.", data.name) + } else { + + ve := err.(*jwt.ValidationError) + // compare the bitfield part of the error + if e := ve.Errors; e != data.errors { + t.Errorf("[%v] Errors don't match expectation. %v != %v", data.name, e, data.errors) + } + + if err.Error() == keyFuncError.Error() && ve.Inner != keyFuncError { + t.Errorf("[%v] Inner error does not match expectation. 
%v != %v", data.name, ve.Inner, keyFuncError) + } + } + } + if data.valid && token.Signature == "" { + t.Errorf("[%v] Signature is left unpopulated after parsing", data.name) + } + } +} + +func TestParser_ParseUnverified(t *testing.T) { + privateKey := test.LoadRSAPrivateKeyFromDisk("test/sample_key") + + // Iterate over test data set and run tests + for _, data := range jwtTestData { + // If the token string is blank, use helper function to generate string + if data.tokenString == "" { + data.tokenString = test.MakeSampleToken(data.claims, privateKey) + } + + // Parse the token + var token *jwt.Token + var err error + var parser = data.parser + if parser == nil { + parser = new(jwt.Parser) + } + // Figure out correct claims type + switch data.claims.(type) { + case jwt.MapClaims: + token, _, err = parser.ParseUnverified(data.tokenString, jwt.MapClaims{}) + case *jwt.StandardClaims: + token, _, err = parser.ParseUnverified(data.tokenString, &jwt.StandardClaims{}) + } + + if err != nil { + t.Errorf("[%v] Invalid token", data.name) + } + + // Verify result matches expectation + if !reflect.DeepEqual(data.claims, token.Claims) { + t.Errorf("[%v] Claims mismatch. 
Expecting: %v Got: %v", data.name, data.claims, token.Claims) + } + + if data.valid && err != nil { + t.Errorf("[%v] Error while verifying token: %T:%v", data.name, err, err) + } + } +} + +// Helper method for benchmarking various methods +func benchmarkSigning(b *testing.B, method jwt.SigningMethod, key interface{}) { + t := jwt.New(method) + b.RunParallel(func(pb *testing.PB) { + for pb.Next() { + if _, err := t.SignedString(key); err != nil { + b.Fatal(err) + } + } + }) + +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/doc.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/doc.go new file mode 100644 index 0000000000..c01069c984 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/doc.go @@ -0,0 +1,7 @@ +// Utility package for extracting JWT tokens from +// HTTP requests. +// +// The main function is ParseFromRequest and it's WithClaims variant. +// See examples for how to use the various Extractor implementations +// or roll your own. +package request diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/extractor.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/extractor.go new file mode 100644 index 0000000000..14414fe2f2 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/extractor.go @@ -0,0 +1,81 @@ +package request + +import ( + "errors" + "net/http" +) + +// Errors +var ( + ErrNoTokenInRequest = errors.New("no token present in request") +) + +// Interface for extracting a token from an HTTP request. +// The ExtractToken method should return a token string or an error. +// If no token is present, you must return ErrNoTokenInRequest. +type Extractor interface { + ExtractToken(*http.Request) (string, error) +} + +// Extractor for finding a token in a header. 
Looks at each specified
+// header in order until there's a match
+type HeaderExtractor []string
+
+func (e HeaderExtractor) ExtractToken(req *http.Request) (string, error) {
+	// loop over header names and return the first one that contains data
+	for _, header := range e {
+		if ah := req.Header.Get(header); ah != "" {
+			return ah, nil
+		}
+	}
+	return "", ErrNoTokenInRequest
+}
+
+// Extract token from request arguments. This includes a POSTed form or
+// GET URL arguments. Argument names are tried in order until there's a match.
+// This extractor calls `ParseMultipartForm` on the request
+type ArgumentExtractor []string
+
+func (e ArgumentExtractor) ExtractToken(req *http.Request) (string, error) {
+	// Make sure form is parsed
+	req.ParseMultipartForm(10e6)
+
+	// loop over arg names and return the first one that contains data
+	for _, arg := range e {
+		if ah := req.Form.Get(arg); ah != "" {
+			return ah, nil
+		}
+	}
+
+	return "", ErrNoTokenInRequest
+}
+
+// Tries Extractors in order until one returns a token string or an error occurs
+type MultiExtractor []Extractor
+
+func (e MultiExtractor) ExtractToken(req *http.Request) (string, error) {
+	// loop over header names and return the first one that contains data
+	for _, extractor := range e {
+		if tok, err := extractor.ExtractToken(req); tok != "" {
+			return tok, nil
+		} else if err != ErrNoTokenInRequest {
+			return "", err
+		}
+	}
+	return "", ErrNoTokenInRequest
+}
+
+// Wrap an Extractor in this to post-process the value before it's handed off.
+// See AuthorizationHeaderExtractor for an example
+type PostExtractionFilter struct {
+	Extractor
+	Filter func(string) (string, error)
+}
+
+func (e *PostExtractionFilter) ExtractToken(req *http.Request) (string, error) {
+	if tok, err := e.Extractor.ExtractToken(req); tok != "" {
+		return e.Filter(tok)
+	} else {
+		return "", err
+	}
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/extractor_example_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/extractor_example_test.go
new file mode 100644
index 0000000000..a994ffe586
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/extractor_example_test.go
@@ -0,0 +1,32 @@
+package request
+
+import (
+	"fmt"
+	"net/url"
+)
+
+const (
+	exampleTokenA = "A"
+)
+
+func ExampleHeaderExtractor() {
+	req := makeExampleRequest("GET", "/", map[string]string{"Token": exampleTokenA}, nil)
+	tokenString, err := HeaderExtractor{"Token"}.ExtractToken(req)
+	if err == nil {
+		fmt.Println(tokenString)
+	} else {
+		fmt.Println(err)
+	}
+	//Output: A
+}
+
+func ExampleArgumentExtractor() {
+	req := makeExampleRequest("GET", "/", nil, url.Values{"token": {extractorTestTokenA}})
+	tokenString, err := ArgumentExtractor{"token"}.ExtractToken(req)
+	if err == nil {
+		fmt.Println(tokenString)
+	} else {
+		fmt.Println(err)
+	}
+	//Output: A
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/extractor_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/extractor_test.go
new file mode 100644
index 0000000000..e3bbb0a3eb
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/extractor_test.go
@@ -0,0 +1,91 @@
+package request
+
+import (
+	"fmt"
+	"net/http"
+	"net/url"
+	"testing"
+)
+
+var extractorTestTokenA = "A"
+var extractorTestTokenB = "B"
+
+var extractorTestData = []struct {
+	name      string
+	extractor Extractor
+	headers   map[string]string
+	query     url.Values
+	token     string
+	err       error
+}{
+	{
+		name:      "simple header",
+		extractor: HeaderExtractor{"Foo"},
+		headers:   map[string]string{"Foo": extractorTestTokenA},
+		query:     nil,
+		token:     extractorTestTokenA,
+		err:       nil,
+	},
+	{
+		name:      "simple argument",
+		extractor: ArgumentExtractor{"token"},
+		headers:   map[string]string{},
+		query:     url.Values{"token": {extractorTestTokenA}},
+		token:     extractorTestTokenA,
+		err:       nil,
+	},
+	{
+		name: "multiple extractors",
+		extractor: MultiExtractor{
+			HeaderExtractor{"Foo"},
+			ArgumentExtractor{"token"},
+		},
+		headers: map[string]string{"Foo": extractorTestTokenA},
+		query:   url.Values{"token": {extractorTestTokenB}},
+		token:   extractorTestTokenA,
+		err:     nil,
+	},
+	{
+		name:      "simple miss",
+		extractor: HeaderExtractor{"This-Header-Is-Not-Set"},
+		headers:   map[string]string{"Foo": extractorTestTokenA},
+		query:     nil,
+		token:     "",
+		err:       ErrNoTokenInRequest,
+	},
+	{
+		name:      "filter",
+		extractor: AuthorizationHeaderExtractor,
+		headers:   map[string]string{"Authorization": "Bearer " + extractorTestTokenA},
+		query:     nil,
+		token:     extractorTestTokenA,
+		err:       nil,
+	},
+}
+
+func TestExtractor(t *testing.T) {
+	// Bearer token request
+	for _, data := range extractorTestData {
+		// Make request from test struct
+		r := makeExampleRequest("GET", "/", data.headers, data.query)
+
+		// Test extractor
+		token, err := data.extractor.ExtractToken(r)
+		if token != data.token {
+			t.Errorf("[%v] Expected token '%v'. Got '%v'", data.name, data.token, token)
+			continue
+		}
+		if err != data.err {
+			t.Errorf("[%v] Expected error '%v'. Got '%v'", data.name, data.err, err)
+			continue
+		}
+	}
+}
+
+func makeExampleRequest(method, path string, headers map[string]string, urlArgs url.Values) *http.Request {
+	r, _ := http.NewRequest(method, fmt.Sprintf("%v?%v", path, urlArgs.Encode()), nil)
+	for k, v := range headers {
+		r.Header.Set(k, v)
+	}
+	return r
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/oauth2.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/oauth2.go
new file mode 100644
index 0000000000..5948694a51
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/oauth2.go
@@ -0,0 +1,28 @@
+package request
+
+import (
+	"strings"
+)
+
+// Strips 'Bearer ' prefix from bearer token string
+func stripBearerPrefixFromTokenString(tok string) (string, error) {
+	// Should be a bearer token
+	if len(tok) > 6 && strings.ToUpper(tok[0:7]) == "BEARER " {
+		return tok[7:], nil
+	}
+	return tok, nil
+}
+
+// Extract bearer token from Authorization header
+// Uses PostExtractionFilter to strip "Bearer " prefix from header
+var AuthorizationHeaderExtractor = &PostExtractionFilter{
+	HeaderExtractor{"Authorization"},
+	stripBearerPrefixFromTokenString,
+}
+
+// Extractor for OAuth2 access tokens. Looks in 'Authorization'
+// header then 'access_token' argument for a token.
+var OAuth2Extractor = &MultiExtractor{
+	AuthorizationHeaderExtractor,
+	ArgumentExtractor{"access_token"},
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/request.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/request.go
new file mode 100644
index 0000000000..70525cface
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/request.go
@@ -0,0 +1,68 @@
+package request
+
+import (
+	"github.com/dgrijalva/jwt-go"
+	"net/http"
+)
+
+// Extract and parse a JWT token from an HTTP request.
+// This behaves the same as Parse, but accepts a request and an extractor
+// instead of a token string. The Extractor interface allows you to define
+// the logic for extracting a token. Several useful implementations are provided.
+//
+// You can provide options to modify parsing behavior
+func ParseFromRequest(req *http.Request, extractor Extractor, keyFunc jwt.Keyfunc, options ...ParseFromRequestOption) (token *jwt.Token, err error) {
+	// Create basic parser struct
+	p := &fromRequestParser{req, extractor, nil, nil}
+
+	// Handle options
+	for _, option := range options {
+		option(p)
+	}
+
+	// Set defaults
+	if p.claims == nil {
+		p.claims = jwt.MapClaims{}
+	}
+	if p.parser == nil {
+		p.parser = &jwt.Parser{}
+	}
+
+	// perform extract
+	tokenString, err := p.extractor.ExtractToken(req)
+	if err != nil {
+		return nil, err
+	}
+
+	// perform parse
+	return p.parser.ParseWithClaims(tokenString, p.claims, keyFunc)
+}
+
+// ParseFromRequest but with custom Claims type
+// DEPRECATED: use ParseFromRequest and the WithClaims option
+func ParseFromRequestWithClaims(req *http.Request, extractor Extractor, claims jwt.Claims, keyFunc jwt.Keyfunc) (token *jwt.Token, err error) {
+	return ParseFromRequest(req, extractor, keyFunc, WithClaims(claims))
+}
+
+type fromRequestParser struct {
+	req       *http.Request
+	extractor Extractor
+	claims    jwt.Claims
+	parser    *jwt.Parser
+}
+
+type ParseFromRequestOption func(*fromRequestParser)
+
+// Parse with custom claims
+func WithClaims(claims jwt.Claims) ParseFromRequestOption {
+	return func(p *fromRequestParser) {
+		p.claims = claims
+	}
+}
+
+// Parse using a custom parser
+func WithParser(parser *jwt.Parser) ParseFromRequestOption {
+	return func(p *fromRequestParser) {
+		p.parser = parser
+	}
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/request_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/request_test.go
new file mode 100644 index
0000000000..b4365cd869 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/request/request_test.go @@ -0,0 +1,103 @@ +package request + +import ( + "fmt" + "github.com/dgrijalva/jwt-go" + "github.com/dgrijalva/jwt-go/test" + "net/http" + "net/url" + "reflect" + "strings" + "testing" +) + +var requestTestData = []struct { + name string + claims jwt.MapClaims + extractor Extractor + headers map[string]string + query url.Values + valid bool +}{ + { + "authorization bearer token", + jwt.MapClaims{"foo": "bar"}, + AuthorizationHeaderExtractor, + map[string]string{"Authorization": "Bearer %v"}, + url.Values{}, + true, + }, + { + "oauth bearer token - header", + jwt.MapClaims{"foo": "bar"}, + OAuth2Extractor, + map[string]string{"Authorization": "Bearer %v"}, + url.Values{}, + true, + }, + { + "oauth bearer token - url", + jwt.MapClaims{"foo": "bar"}, + OAuth2Extractor, + map[string]string{}, + url.Values{"access_token": {"%v"}}, + true, + }, + { + "url token", + jwt.MapClaims{"foo": "bar"}, + ArgumentExtractor{"token"}, + map[string]string{}, + url.Values{"token": {"%v"}}, + true, + }, +} + +func TestParseRequest(t *testing.T) { + // load keys from disk + privateKey := test.LoadRSAPrivateKeyFromDisk("../test/sample_key") + publicKey := test.LoadRSAPublicKeyFromDisk("../test/sample_key.pub") + keyfunc := func(*jwt.Token) (interface{}, error) { + return publicKey, nil + } + + // Bearer token request + for _, data := range requestTestData { + // Make token from claims + tokenString := test.MakeSampleToken(data.claims, privateKey) + + // Make query string + for k, vv := range data.query { + for i, v := range vv { + if strings.Contains(v, "%v") { + data.query[k][i] = fmt.Sprintf(v, tokenString) + } + } + } + + // Make request from test struct + r, _ := http.NewRequest("GET", fmt.Sprintf("/?%v", data.query.Encode()), nil) + for k, v := range data.headers { + if strings.Contains(v, "%v") { + r.Header.Set(k, fmt.Sprintf(v, tokenString)) + } else 
{ + r.Header.Set(k, tokenString) + } + } + token, err := ParseFromRequestWithClaims(r, data.extractor, jwt.MapClaims{}, keyfunc) + + if token == nil { + t.Errorf("[%v] Token was not found: %v", data.name, err) + continue + } + if !reflect.DeepEqual(data.claims, token.Claims) { + t.Errorf("[%v] Claims mismatch. Expecting: %v Got: %v", data.name, data.claims, token.Claims) + } + if data.valid && err != nil { + t.Errorf("[%v] Error while verifying token: %v", data.name, err) + } + if !data.valid && err == nil { + t.Errorf("[%v] Invalid token passed validation", data.name) + } + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa.go new file mode 100644 index 0000000000..e4caf1ca4a --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa.go @@ -0,0 +1,101 @@ +package jwt + +import ( + "crypto" + "crypto/rand" + "crypto/rsa" +) + +// Implements the RSA family of signing methods signing methods +// Expects *rsa.PrivateKey for signing and *rsa.PublicKey for validation +type SigningMethodRSA struct { + Name string + Hash crypto.Hash +} + +// Specific instances for RS256 and company +var ( + SigningMethodRS256 *SigningMethodRSA + SigningMethodRS384 *SigningMethodRSA + SigningMethodRS512 *SigningMethodRSA +) + +func init() { + // RS256 + SigningMethodRS256 = &SigningMethodRSA{"RS256", crypto.SHA256} + RegisterSigningMethod(SigningMethodRS256.Alg(), func() SigningMethod { + return SigningMethodRS256 + }) + + // RS384 + SigningMethodRS384 = &SigningMethodRSA{"RS384", crypto.SHA384} + RegisterSigningMethod(SigningMethodRS384.Alg(), func() SigningMethod { + return SigningMethodRS384 + }) + + // RS512 + SigningMethodRS512 = &SigningMethodRSA{"RS512", crypto.SHA512} + RegisterSigningMethod(SigningMethodRS512.Alg(), func() SigningMethod { + return SigningMethodRS512 + }) +} + +func (m *SigningMethodRSA) Alg() string { + return m.Name +} + 
+// Implements the Verify method from SigningMethod +// For this signing method, must be an *rsa.PublicKey structure. +func (m *SigningMethodRSA) Verify(signingString, signature string, key interface{}) error { + var err error + + // Decode the signature + var sig []byte + if sig, err = DecodeSegment(signature); err != nil { + return err + } + + var rsaKey *rsa.PublicKey + var ok bool + + if rsaKey, ok = key.(*rsa.PublicKey); !ok { + return ErrInvalidKeyType + } + + // Create hasher + if !m.Hash.Available() { + return ErrHashUnavailable + } + hasher := m.Hash.New() + hasher.Write([]byte(signingString)) + + // Verify the signature + return rsa.VerifyPKCS1v15(rsaKey, m.Hash, hasher.Sum(nil), sig) +} + +// Implements the Sign method from SigningMethod +// For this signing method, must be an *rsa.PrivateKey structure. +func (m *SigningMethodRSA) Sign(signingString string, key interface{}) (string, error) { + var rsaKey *rsa.PrivateKey + var ok bool + + // Validate type of key + if rsaKey, ok = key.(*rsa.PrivateKey); !ok { + return "", ErrInvalidKey + } + + // Create the hasher + if !m.Hash.Available() { + return "", ErrHashUnavailable + } + + hasher := m.Hash.New() + hasher.Write([]byte(signingString)) + + // Sign the string and return the encoded bytes + if sigBytes, err := rsa.SignPKCS1v15(rand.Reader, rsaKey, m.Hash, hasher.Sum(nil)); err == nil { + return EncodeSegment(sigBytes), nil + } else { + return "", err + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_pss.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_pss.go new file mode 100644 index 0000000000..10ee9db8a4 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_pss.go @@ -0,0 +1,126 @@ +// +build go1.4 + +package jwt + +import ( + "crypto" + "crypto/rand" + "crypto/rsa" +) + +// Implements the RSAPSS family of signing methods signing methods +type SigningMethodRSAPSS struct { + *SigningMethodRSA + Options 
*rsa.PSSOptions +} + +// Specific instances for RS/PS and company +var ( + SigningMethodPS256 *SigningMethodRSAPSS + SigningMethodPS384 *SigningMethodRSAPSS + SigningMethodPS512 *SigningMethodRSAPSS +) + +func init() { + // PS256 + SigningMethodPS256 = &SigningMethodRSAPSS{ + &SigningMethodRSA{ + Name: "PS256", + Hash: crypto.SHA256, + }, + &rsa.PSSOptions{ + SaltLength: rsa.PSSSaltLengthAuto, + Hash: crypto.SHA256, + }, + } + RegisterSigningMethod(SigningMethodPS256.Alg(), func() SigningMethod { + return SigningMethodPS256 + }) + + // PS384 + SigningMethodPS384 = &SigningMethodRSAPSS{ + &SigningMethodRSA{ + Name: "PS384", + Hash: crypto.SHA384, + }, + &rsa.PSSOptions{ + SaltLength: rsa.PSSSaltLengthAuto, + Hash: crypto.SHA384, + }, + } + RegisterSigningMethod(SigningMethodPS384.Alg(), func() SigningMethod { + return SigningMethodPS384 + }) + + // PS512 + SigningMethodPS512 = &SigningMethodRSAPSS{ + &SigningMethodRSA{ + Name: "PS512", + Hash: crypto.SHA512, + }, + &rsa.PSSOptions{ + SaltLength: rsa.PSSSaltLengthAuto, + Hash: crypto.SHA512, + }, + } + RegisterSigningMethod(SigningMethodPS512.Alg(), func() SigningMethod { + return SigningMethodPS512 + }) +} + +// Implements the Verify method from SigningMethod +// For this verify method, key must be an rsa.PublicKey struct +func (m *SigningMethodRSAPSS) Verify(signingString, signature string, key interface{}) error { + var err error + + // Decode the signature + var sig []byte + if sig, err = DecodeSegment(signature); err != nil { + return err + } + + var rsaKey *rsa.PublicKey + switch k := key.(type) { + case *rsa.PublicKey: + rsaKey = k + default: + return ErrInvalidKey + } + + // Create hasher + if !m.Hash.Available() { + return ErrHashUnavailable + } + hasher := m.Hash.New() + hasher.Write([]byte(signingString)) + + return rsa.VerifyPSS(rsaKey, m.Hash, hasher.Sum(nil), sig, m.Options) +} + +// Implements the Sign method from SigningMethod +// For this signing method, key must be an rsa.PrivateKey struct +func (m 
*SigningMethodRSAPSS) Sign(signingString string, key interface{}) (string, error) { + var rsaKey *rsa.PrivateKey + + switch k := key.(type) { + case *rsa.PrivateKey: + rsaKey = k + default: + return "", ErrInvalidKeyType + } + + // Create the hasher + if !m.Hash.Available() { + return "", ErrHashUnavailable + } + + hasher := m.Hash.New() + hasher.Write([]byte(signingString)) + + // Sign the string and return the encoded bytes + if sigBytes, err := rsa.SignPSS(rand.Reader, rsaKey, m.Hash, hasher.Sum(nil), m.Options); err == nil { + return EncodeSegment(sigBytes), nil + } else { + return "", err + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_pss_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_pss_test.go new file mode 100644 index 0000000000..9045aaf349 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_pss_test.go @@ -0,0 +1,96 @@ +// +build go1.4 + +package jwt_test + +import ( + "crypto/rsa" + "io/ioutil" + "strings" + "testing" + + "github.com/dgrijalva/jwt-go" +) + +var rsaPSSTestData = []struct { + name string + tokenString string + alg string + claims map[string]interface{} + valid bool +}{ + { + "Basic PS256", + "eyJhbGciOiJQUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.PPG4xyDVY8ffp4CcxofNmsTDXsrVG2npdQuibLhJbv4ClyPTUtR5giNSvuxo03kB6I8VXVr0Y9X7UxhJVEoJOmULAwRWaUsDnIewQa101cVhMa6iR8X37kfFoiZ6NkS-c7henVkkQWu2HtotkEtQvN5hFlk8IevXXPmvZlhQhwzB1sGzGYnoi1zOfuL98d3BIjUjtlwii5w6gYG2AEEzp7HnHCsb3jIwUPdq86Oe6hIFjtBwduIK90ca4UqzARpcfwxHwVLMpatKask00AgGVI0ysdk0BLMjmLutquD03XbThHScC2C2_Pp4cHWgMzvbgLU2RYYZcZRKr46QeNgz9w", + "PS256", + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "Basic PS384", + 
"eyJhbGciOiJQUzM4NCIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.w7-qqgj97gK4fJsq_DCqdYQiylJjzWONvD0qWWWhqEOFk2P1eDULPnqHRnjgTXoO4HAw4YIWCsZPet7nR3Xxq4ZhMqvKW8b7KlfRTb9cH8zqFvzMmybQ4jv2hKc3bXYqVow3AoR7hN_CWXI3Dv6Kd2X5xhtxRHI6IL39oTVDUQ74LACe-9t4c3QRPuj6Pq1H4FAT2E2kW_0KOc6EQhCLWEhm2Z2__OZskDC8AiPpP8Kv4k2vB7l0IKQu8Pr4RcNBlqJdq8dA5D3hk5TLxP8V5nG1Ib80MOMMqoS3FQvSLyolFX-R_jZ3-zfq6Ebsqr0yEb0AH2CfsECF7935Pa0FKQ", + "PS384", + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "Basic PS512", + "eyJhbGciOiJQUzUxMiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.GX1HWGzFaJevuSLavqqFYaW8_TpvcjQ8KfC5fXiSDzSiT9UD9nB_ikSmDNyDILNdtjZLSvVKfXxZJqCfefxAtiozEDDdJthZ-F0uO4SPFHlGiXszvKeodh7BuTWRI2wL9-ZO4mFa8nq3GMeQAfo9cx11i7nfN8n2YNQ9SHGovG7_T_AvaMZB_jT6jkDHpwGR9mz7x1sycckEo6teLdHRnH_ZdlHlxqknmyTu8Odr5Xh0sJFOL8BepWbbvIIn-P161rRHHiDWFv6nhlHwZnVzjx7HQrWSGb6-s2cdLie9QL_8XaMcUpjLkfOMKkDOfHo6AvpL7Jbwi83Z2ZTHjJWB-A", + "PS512", + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "basic PS256 invalid: foo => bar", + "eyJhbGciOiJQUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.PPG4xyDVY8ffp4CcxofNmsTDXsrVG2npdQuibLhJbv4ClyPTUtR5giNSvuxo03kB6I8VXVr0Y9X7UxhJVEoJOmULAwRWaUsDnIewQa101cVhMa6iR8X37kfFoiZ6NkS-c7henVkkQWu2HtotkEtQvN5hFlk8IevXXPmvZlhQhwzB1sGzGYnoi1zOfuL98d3BIjUjtlwii5w6gYG2AEEzp7HnHCsb3jIwUPdq86Oe6hIFjtBwduIK90ca4UqzARpcfwxHwVLMpatKask00AgGVI0ysdk0BLMjmLutquD03XbThHScC2C2_Pp4cHWgMzvbgLU2RYYZcZRKr46QeNgz9W", + "PS256", + map[string]interface{}{"foo": "bar"}, + false, + }, +} + +func TestRSAPSSVerify(t *testing.T) { + var err error + + key, _ := ioutil.ReadFile("test/sample_key.pub") + var rsaPSSKey *rsa.PublicKey + if rsaPSSKey, err = jwt.ParseRSAPublicKeyFromPEM(key); err != nil { + t.Errorf("Unable to parse RSA public key: %v", err) + } + + for _, data := range rsaPSSTestData { + parts := strings.Split(data.tokenString, ".") + + method := jwt.GetSigningMethod(data.alg) + err := method.Verify(strings.Join(parts[0:2], "."), parts[2], rsaPSSKey) + if data.valid && err != nil { + t.Errorf("[%v] 
Error while verifying key: %v", data.name, err) + } + if !data.valid && err == nil { + t.Errorf("[%v] Invalid key passed validation", data.name) + } + } +} + +func TestRSAPSSSign(t *testing.T) { + var err error + + key, _ := ioutil.ReadFile("test/sample_key") + var rsaPSSKey *rsa.PrivateKey + if rsaPSSKey, err = jwt.ParseRSAPrivateKeyFromPEM(key); err != nil { + t.Errorf("Unable to parse RSA private key: %v", err) + } + + for _, data := range rsaPSSTestData { + if data.valid { + parts := strings.Split(data.tokenString, ".") + method := jwt.GetSigningMethod(data.alg) + sig, err := method.Sign(strings.Join(parts[0:2], "."), rsaPSSKey) + if err != nil { + t.Errorf("[%v] Error signing token: %v", data.name, err) + } + if sig == parts[2] { + t.Errorf("[%v] Signatures shouldn't match\nnew:\n%v\noriginal:\n%v", data.name, sig, parts[2]) + } + } + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_test.go new file mode 100644 index 0000000000..7f67c5db15 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_test.go @@ -0,0 +1,185 @@ +package jwt_test + +import ( + "github.com/dgrijalva/jwt-go" + "io/ioutil" + "strings" + "testing" +) + +var rsaTestData = []struct { + name string + tokenString string + alg string + claims map[string]interface{} + valid bool +}{ + { + "Basic RS256", + "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIifQ.FhkiHkoESI_cG3NPigFrxEk9Z60_oXrOT2vGm9Pn6RDgYNovYORQmmA0zs1AoAOf09ly2Nx2YAg6ABqAYga1AcMFkJljwxTT5fYphTuqpWdy4BELeSYJx5Ty2gmr8e7RonuUztrdD5WfPqLKMm1Ozp_T6zALpRmwTIW0QPnaBXaQD90FplAg46Iy1UlDKr-Eupy0i5SLch5Q-p2ZpaL_5fnTIUDlxC3pWhJTyx_71qDI-mAA_5lE_VdroOeflG56sSmDxopPEG3bFlSu1eowyBfxtu0_CuVd-M42RU75Zc4Gsj6uV77MBtbMrf4_7M_NUTSgoIF3fRqxrj0NzihIBg", + "RS256", + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "Basic RS384", + 
"eyJhbGciOiJSUzM4NCIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.W-jEzRfBigtCWsinvVVuldiuilzVdU5ty0MvpLaSaqK9PlAWWlDQ1VIQ_qSKzwL5IXaZkvZFJXT3yL3n7OUVu7zCNJzdwznbC8Z-b0z2lYvcklJYi2VOFRcGbJtXUqgjk2oGsiqUMUMOLP70TTefkpsgqDxbRh9CDUfpOJgW-dU7cmgaoswe3wjUAUi6B6G2YEaiuXC0XScQYSYVKIzgKXJV8Zw-7AN_DBUI4GkTpsvQ9fVVjZM9csQiEXhYekyrKu1nu_POpQonGd8yqkIyXPECNmmqH5jH4sFiF67XhD7_JpkvLziBpI-uh86evBUadmHhb9Otqw3uV3NTaXLzJw", + "RS384", + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "Basic RS512", + "eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.zBlLlmRrUxx4SJPUbV37Q1joRcI9EW13grnKduK3wtYKmDXbgDpF1cZ6B-2Jsm5RB8REmMiLpGms-EjXhgnyh2TSHE-9W2gA_jvshegLWtwRVDX40ODSkTb7OVuaWgiy9y7llvcknFBTIg-FnVPVpXMmeV_pvwQyhaz1SSwSPrDyxEmksz1hq7YONXhXPpGaNbMMeDTNP_1oj8DZaqTIL9TwV8_1wb2Odt_Fy58Ke2RVFijsOLdnyEAjt2n9Mxihu9i3PhNBkkxa2GbnXBfq3kzvZ_xxGGopLdHhJjcGWXO-NiwI9_tiu14NRv4L2xC0ItD9Yz68v2ZIZEp_DuzwRQ", + "RS512", + map[string]interface{}{"foo": "bar"}, + true, + }, + { + "basic invalid: foo => bar", + "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIifQ.EhkiHkoESI_cG3NPigFrxEk9Z60_oXrOT2vGm9Pn6RDgYNovYORQmmA0zs1AoAOf09ly2Nx2YAg6ABqAYga1AcMFkJljwxTT5fYphTuqpWdy4BELeSYJx5Ty2gmr8e7RonuUztrdD5WfPqLKMm1Ozp_T6zALpRmwTIW0QPnaBXaQD90FplAg46Iy1UlDKr-Eupy0i5SLch5Q-p2ZpaL_5fnTIUDlxC3pWhJTyx_71qDI-mAA_5lE_VdroOeflG56sSmDxopPEG3bFlSu1eowyBfxtu0_CuVd-M42RU75Zc4Gsj6uV77MBtbMrf4_7M_NUTSgoIF3fRqxrj0NzihIBg", + "RS256", + map[string]interface{}{"foo": "bar"}, + false, + }, +} + +func TestRSAVerify(t *testing.T) { + keyData, _ := ioutil.ReadFile("test/sample_key.pub") + key, _ := jwt.ParseRSAPublicKeyFromPEM(keyData) + + for _, data := range rsaTestData { + parts := strings.Split(data.tokenString, ".") + + method := jwt.GetSigningMethod(data.alg) + err := method.Verify(strings.Join(parts[0:2], "."), parts[2], key) + if data.valid && err != nil { + t.Errorf("[%v] Error while verifying key: %v", data.name, err) + } + if !data.valid && err == nil { + t.Errorf("[%v] Invalid key passed validation", data.name) 
+ } + } +} + +func TestRSASign(t *testing.T) { + keyData, _ := ioutil.ReadFile("test/sample_key") + key, _ := jwt.ParseRSAPrivateKeyFromPEM(keyData) + + for _, data := range rsaTestData { + if data.valid { + parts := strings.Split(data.tokenString, ".") + method := jwt.GetSigningMethod(data.alg) + sig, err := method.Sign(strings.Join(parts[0:2], "."), key) + if err != nil { + t.Errorf("[%v] Error signing token: %v", data.name, err) + } + if sig != parts[2] { + t.Errorf("[%v] Incorrect signature.\nwas:\n%v\nexpecting:\n%v", data.name, sig, parts[2]) + } + } + } +} + +func TestRSAVerifyWithPreParsedPrivateKey(t *testing.T) { + key, _ := ioutil.ReadFile("test/sample_key.pub") + parsedKey, err := jwt.ParseRSAPublicKeyFromPEM(key) + if err != nil { + t.Fatal(err) + } + testData := rsaTestData[0] + parts := strings.Split(testData.tokenString, ".") + err = jwt.SigningMethodRS256.Verify(strings.Join(parts[0:2], "."), parts[2], parsedKey) + if err != nil { + t.Errorf("[%v] Error while verifying key: %v", testData.name, err) + } +} + +func TestRSAWithPreParsedPrivateKey(t *testing.T) { + key, _ := ioutil.ReadFile("test/sample_key") + parsedKey, err := jwt.ParseRSAPrivateKeyFromPEM(key) + if err != nil { + t.Fatal(err) + } + testData := rsaTestData[0] + parts := strings.Split(testData.tokenString, ".") + sig, err := jwt.SigningMethodRS256.Sign(strings.Join(parts[0:2], "."), parsedKey) + if err != nil { + t.Errorf("[%v] Error signing token: %v", testData.name, err) + } + if sig != parts[2] { + t.Errorf("[%v] Incorrect signature.\nwas:\n%v\nexpecting:\n%v", testData.name, sig, parts[2]) + } +} + +func TestRSAKeyParsing(t *testing.T) { + key, _ := ioutil.ReadFile("test/sample_key") + secureKey, _ := ioutil.ReadFile("test/privateSecure.pem") + pubKey, _ := ioutil.ReadFile("test/sample_key.pub") + badKey := []byte("All your base are belong to key") + + // Test parsePrivateKey + if _, e := jwt.ParseRSAPrivateKeyFromPEM(key); e != nil { + t.Errorf("Failed to parse valid private key: 
%v", e) + } + + if k, e := jwt.ParseRSAPrivateKeyFromPEM(pubKey); e == nil { + t.Errorf("Parsed public key as valid private key: %v", k) + } + + if k, e := jwt.ParseRSAPrivateKeyFromPEM(badKey); e == nil { + t.Errorf("Parsed invalid key as valid private key: %v", k) + } + + if _, e := jwt.ParseRSAPrivateKeyFromPEMWithPassword(secureKey, "password"); e != nil { + t.Errorf("Failed to parse valid private key with password: %v", e) + } + + if k, e := jwt.ParseRSAPrivateKeyFromPEMWithPassword(secureKey, "123132"); e == nil { + t.Errorf("Parsed private key with invalid password %v", k) + } + + // Test parsePublicKey + if _, e := jwt.ParseRSAPublicKeyFromPEM(pubKey); e != nil { + t.Errorf("Failed to parse valid public key: %v", e) + } + + if k, e := jwt.ParseRSAPublicKeyFromPEM(key); e == nil { + t.Errorf("Parsed private key as valid public key: %v", k) + } + + if k, e := jwt.ParseRSAPublicKeyFromPEM(badKey); e == nil { + t.Errorf("Parsed invalid key as valid private key: %v", k) + } + +} + +func BenchmarkRS256Signing(b *testing.B) { + key, _ := ioutil.ReadFile("test/sample_key") + parsedKey, err := jwt.ParseRSAPrivateKeyFromPEM(key) + if err != nil { + b.Fatal(err) + } + + benchmarkSigning(b, jwt.SigningMethodRS256, parsedKey) +} + +func BenchmarkRS384Signing(b *testing.B) { + key, _ := ioutil.ReadFile("test/sample_key") + parsedKey, err := jwt.ParseRSAPrivateKeyFromPEM(key) + if err != nil { + b.Fatal(err) + } + + benchmarkSigning(b, jwt.SigningMethodRS384, parsedKey) +} + +func BenchmarkRS512Signing(b *testing.B) { + key, _ := ioutil.ReadFile("test/sample_key") + parsedKey, err := jwt.ParseRSAPrivateKeyFromPEM(key) + if err != nil { + b.Fatal(err) + } + + benchmarkSigning(b, jwt.SigningMethodRS512, parsedKey) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_utils.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_utils.go new file mode 100644 index 0000000000..a5ababf956 --- /dev/null +++ 
b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/rsa_utils.go @@ -0,0 +1,101 @@ +package jwt + +import ( + "crypto/rsa" + "crypto/x509" + "encoding/pem" + "errors" +) + +var ( + ErrKeyMustBePEMEncoded = errors.New("Invalid Key: Key must be PEM encoded PKCS1 or PKCS8 private key") + ErrNotRSAPrivateKey = errors.New("Key is not a valid RSA private key") + ErrNotRSAPublicKey = errors.New("Key is not a valid RSA public key") +) + +// Parse PEM encoded PKCS1 or PKCS8 private key +func ParseRSAPrivateKeyFromPEM(key []byte) (*rsa.PrivateKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + var parsedKey interface{} + if parsedKey, err = x509.ParsePKCS1PrivateKey(block.Bytes); err != nil { + if parsedKey, err = x509.ParsePKCS8PrivateKey(block.Bytes); err != nil { + return nil, err + } + } + + var pkey *rsa.PrivateKey + var ok bool + if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok { + return nil, ErrNotRSAPrivateKey + } + + return pkey, nil +} + +// Parse PEM encoded PKCS1 or PKCS8 private key protected with password +func ParseRSAPrivateKeyFromPEMWithPassword(key []byte, password string) (*rsa.PrivateKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + var parsedKey interface{} + + var blockDecrypted []byte + if blockDecrypted, err = x509.DecryptPEMBlock(block, []byte(password)); err != nil { + return nil, err + } + + if parsedKey, err = x509.ParsePKCS1PrivateKey(blockDecrypted); err != nil { + if parsedKey, err = x509.ParsePKCS8PrivateKey(blockDecrypted); err != nil { + return nil, err + } + } + + var pkey *rsa.PrivateKey + var ok bool + if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok { + return nil, ErrNotRSAPrivateKey + } + + return pkey, nil +} + +// Parse PEM encoded PKCS1 or PKCS8 public key +func 
ParseRSAPublicKeyFromPEM(key []byte) (*rsa.PublicKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + // Parse the key + var parsedKey interface{} + if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil { + if cert, err := x509.ParseCertificate(block.Bytes); err == nil { + parsedKey = cert.PublicKey + } else { + return nil, err + } + } + + var pkey *rsa.PublicKey + var ok bool + if pkey, ok = parsedKey.(*rsa.PublicKey); !ok { + return nil, ErrNotRSAPublicKey + } + + return pkey, nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/signing_method.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/signing_method.go new file mode 100644 index 0000000000..ed1f212b21 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/signing_method.go @@ -0,0 +1,35 @@ +package jwt + +import ( + "sync" +) + +var signingMethods = map[string]func() SigningMethod{} +var signingMethodLock = new(sync.RWMutex) + +// Implement SigningMethod to add new methods for signing or verifying tokens. +type SigningMethod interface { + Verify(signingString, signature string, key interface{}) error // Returns nil if signature is valid + Sign(signingString string, key interface{}) (string, error) // Returns encoded signature or error + Alg() string // returns the alg identifier for this method (example: 'HS256') +} + +// Register the "alg" name and a factory function for signing method. 
+// This is typically done during init() in the method's implementation +func RegisterSigningMethod(alg string, f func() SigningMethod) { + signingMethodLock.Lock() + defer signingMethodLock.Unlock() + + signingMethods[alg] = f +} + +// Get a signing method from an "alg" string +func GetSigningMethod(alg string) (method SigningMethod) { + signingMethodLock.RLock() + defer signingMethodLock.RUnlock() + + if methodF, ok := signingMethods[alg]; ok { + method = methodF() + } + return +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec256-private.pem b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec256-private.pem new file mode 100644 index 0000000000..a6882b3e53 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec256-private.pem @@ -0,0 +1,5 @@ +-----BEGIN EC PRIVATE KEY----- +MHcCAQEEIAh5qA3rmqQQuu0vbKV/+zouz/y/Iy2pLpIcWUSyImSwoAoGCCqGSM49 +AwEHoUQDQgAEYD54V/vp+54P9DXarYqx4MPcm+HKRIQzNasYSoRQHQ/6S6Ps8tpM +cT+KvIIC8W/e9k0W7Cm72M1P9jU7SLf/vg== +-----END EC PRIVATE KEY----- diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec256-public.pem b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec256-public.pem new file mode 100644 index 0000000000..7191361e72 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec256-public.pem @@ -0,0 +1,4 @@ +-----BEGIN PUBLIC KEY----- +MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEYD54V/vp+54P9DXarYqx4MPcm+HK +RIQzNasYSoRQHQ/6S6Ps8tpMcT+KvIIC8W/e9k0W7Cm72M1P9jU7SLf/vg== +-----END PUBLIC KEY----- diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec384-private.pem b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec384-private.pem new file mode 100644 index 0000000000..a86c823e56 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec384-private.pem @@ -0,0 +1,6 
@@ +-----BEGIN EC PRIVATE KEY----- +MIGkAgEBBDCaCvMHKhcG/qT7xsNLYnDT7sE/D+TtWIol1ROdaK1a564vx5pHbsRy +SEKcIxISi1igBwYFK4EEACKhZANiAATYa7rJaU7feLMqrAx6adZFNQOpaUH/Uylb +ZLriOLON5YFVwtVUpO1FfEXZUIQpptRPtc5ixIPY658yhBSb6irfIJUSP9aYTflJ +GKk/mDkK4t8mWBzhiD5B6jg9cEGhGgA= +-----END EC PRIVATE KEY----- diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec384-public.pem b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec384-public.pem new file mode 100644 index 0000000000..e80d005644 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec384-public.pem @@ -0,0 +1,5 @@ +-----BEGIN PUBLIC KEY----- +MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAE2Gu6yWlO33izKqwMemnWRTUDqWlB/1Mp +W2S64jizjeWBVcLVVKTtRXxF2VCEKabUT7XOYsSD2OufMoQUm+oq3yCVEj/WmE35 +SRipP5g5CuLfJlgc4Yg+Qeo4PXBBoRoA +-----END PUBLIC KEY----- diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec512-private.pem b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec512-private.pem new file mode 100644 index 0000000000..213afaf13c --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec512-private.pem @@ -0,0 +1,7 @@ +-----BEGIN EC PRIVATE KEY----- +MIHcAgEBBEIB0pE4uFaWRx7t03BsYlYvF1YvKaBGyvoakxnodm9ou0R9wC+sJAjH +QZZJikOg4SwNqgQ/hyrOuDK2oAVHhgVGcYmgBwYFK4EEACOhgYkDgYYABAAJXIuw +12MUzpHggia9POBFYXSxaOGKGbMjIyDI+6q7wi7LMw3HgbaOmgIqFG72o8JBQwYN +4IbXHf+f86CRY1AA2wHzbHvt6IhkCXTNxBEffa1yMUgu8n9cKKF2iLgyQKcKqW33 +8fGOw/n3Rm2Yd/EB56u2rnD29qS+nOM9eGS+gy39OQ== +-----END EC PRIVATE KEY----- diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec512-public.pem b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec512-public.pem new file mode 100644 index 0000000000..02ea022031 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/ec512-public.pem @@ -0,0 +1,6 @@ +-----BEGIN 
PUBLIC KEY----- +MIGbMBAGByqGSM49AgEGBSuBBAAjA4GGAAQACVyLsNdjFM6R4IImvTzgRWF0sWjh +ihmzIyMgyPuqu8IuyzMNx4G2jpoCKhRu9qPCQUMGDeCG1x3/n/OgkWNQANsB82x7 +7eiIZAl0zcQRH32tcjFILvJ/XCihdoi4MkCnCqlt9/HxjsP590ZtmHfxAeertq5w +9vakvpzjPXhkvoMt/Tk= +-----END PUBLIC KEY----- diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/helpers.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/helpers.go new file mode 100644 index 0000000000..f84c3ef63e --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/helpers.go @@ -0,0 +1,42 @@ +package test + +import ( + "crypto/rsa" + "github.com/dgrijalva/jwt-go" + "io/ioutil" +) + +func LoadRSAPrivateKeyFromDisk(location string) *rsa.PrivateKey { + keyData, e := ioutil.ReadFile(location) + if e != nil { + panic(e.Error()) + } + key, e := jwt.ParseRSAPrivateKeyFromPEM(keyData) + if e != nil { + panic(e.Error()) + } + return key +} + +func LoadRSAPublicKeyFromDisk(location string) *rsa.PublicKey { + keyData, e := ioutil.ReadFile(location) + if e != nil { + panic(e.Error()) + } + key, e := jwt.ParseRSAPublicKeyFromPEM(keyData) + if e != nil { + panic(e.Error()) + } + return key +} + +func MakeSampleToken(c jwt.Claims, key interface{}) string { + token := jwt.NewWithClaims(jwt.SigningMethodRS256, c) + s, e := token.SignedString(key) + + if e != nil { + panic(e.Error()) + } + + return s +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/hmacTestKey b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/hmacTestKey new file mode 100644 index 0000000000..435b8ddb37 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/hmacTestKey @@ -0,0 +1 @@ +#5K+¥¼ƒ~ew{¦Z³(æðTÉ(©„²ÒP.¿ÓûZ’ÒGï–Š´Ãwb="=.!r.OÀÍšõgЀ£ \ No newline at end of file diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/privateSecure.pem 
b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/privateSecure.pem new file mode 100644 index 0000000000..8537e07437 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/privateSecure.pem @@ -0,0 +1,30 @@ +-----BEGIN RSA PRIVATE KEY----- +Proc-Type: 4,ENCRYPTED +DEK-Info: DES-EDE3-CBC,7487BB8910A3741B + +iL7m48mbFSIy1Y5xbXWwPTR07ufxu7o+myGUE+AdDeWWISkd5W6Gl44oX/jgXldS +mL/ntUXoZzQz2WKEYLwssAtSTGF+QgSIMvV5faiP+pLYvWgk0oVr42po00CvADFL +eDAJC7LgagYifS1l4EAK4MY8RGCHyJWEN5JAr0fc/Haa3WfWZ009kOWAp8MDuYxB +hQlCKUmnUpXCp5c6jwbjlyinLj8XwzzjZ/rVRsY+t2Z0Vcd5qzR5BV8IJCqbG5Py +z15/EFgMG2N2eYMsiEKgdXeKW2H5XIoWyun/3pBigWaDnTtiWSt9kz2MplqYfIT7 +F+0XE3gdDGalAeN3YwFPHCkxxBmcI+s6lQG9INmf2/gkJQ+MOZBVXKmGLv6Qis3l +0eyUz1yZvNzf0zlcUBjiPulLF3peThHMEzhSsATfPomyg5NJ0X7ttd0ybnq+sPe4 +qg2OJ8qNhYrqnx7Xlvj61+B2NAZVHvIioma1FzqX8DxQYrnR5S6DJExDqvzNxEz6 +5VPQlH2Ig4hTvNzla84WgJ6USc/2SS4ehCReiNvfeNG9sPZKQnr/Ss8KPIYsKGcC +Pz/vEqbWDmJwHb7KixCQKPt1EbD+/uf0YnhskOWM15YiFbYAOZKJ5rcbz2Zu66vg +GAmqcBsHeFR3s/bObEzjxOmMfSr1vzvr4ActNJWVtfNKZNobSehZiMSHL54AXAZW +Yj48pwTbf7b1sbF0FeCuwTFiYxM+yiZVO5ciYOfmo4HUg53PjknKpcKtEFSj02P1 +8JRBSb++V0IeMDyZLl12zgURDsvualbJMMBBR8emIpF13h0qdyah431gDhHGBnnC +J5UDGq21/flFjzz0x/Okjwf7mPK5pcmF+uW7AxtHqws6m93yD5+RFmfZ8cb/8CL8 +jmsQslj+OIE64ykkRoJWpNBKyQjL3CnPnLmAB6TQKxegR94C7/hP1FvRW+W0AgZy +g2QczKQU3KBQP18Ui1HTbkOUJT0Lsy4FnmJFCB/STPRo6NlJiATKHq/cqHWQUvZd +d4oTMb1opKfs7AI9wiJBuskpGAECdRnVduml3dT4p//3BiP6K9ImWMSJeFpjFAFs +AbBMKyitMs0Fyn9AJRPl23TKVQ3cYeSTxus4wLmx5ECSsHRV6g06nYjBp4GWEqSX +RVclXF3zmy3b1+O5s2chJN6TrypzYSEYXJb1vvQLK0lNXqwxZAFV7Roi6xSG0fSY +EAtdUifLonu43EkrLh55KEwkXdVV8xneUjh+TF8VgJKMnqDFfeHFdmN53YYh3n3F +kpYSmVLRzQmLbH9dY+7kqvnsQm8y76vjug3p4IbEbHp/fNGf+gv7KDng1HyCl9A+ +Ow/Hlr0NqCAIhminScbRsZ4SgbRTRgGEYZXvyOtQa/uL6I8t2NR4W7ynispMs0QL +RD61i3++bQXuTi4i8dg3yqIfe9S22NHSzZY/lAHAmmc3r5NrQ1TM1hsSxXawT5CU +anWFjbH6YQ/QplkkAqZMpropWn6ZdNDg/+BUjukDs0HZrbdGy846WxQUvE7G2bAw 
+IFQ1SymBZBtfnZXhfAXOHoWh017p6HsIkb2xmFrigMj7Jh10VVhdWg== +-----END RSA PRIVATE KEY----- diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/sample_key b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/sample_key new file mode 100644 index 0000000000..abdbade312 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/sample_key @@ -0,0 +1,27 @@ +-----BEGIN RSA PRIVATE KEY----- +MIIEowIBAAKCAQEA4f5wg5l2hKsTeNem/V41fGnJm6gOdrj8ym3rFkEU/wT8RDtn +SgFEZOQpHEgQ7JL38xUfU0Y3g6aYw9QT0hJ7mCpz9Er5qLaMXJwZxzHzAahlfA0i +cqabvJOMvQtzD6uQv6wPEyZtDTWiQi9AXwBpHssPnpYGIn20ZZuNlX2BrClciHhC +PUIIZOQn/MmqTD31jSyjoQoV7MhhMTATKJx2XrHhR+1DcKJzQBSTAGnpYVaqpsAR +ap+nwRipr3nUTuxyGohBTSmjJ2usSeQXHI3bODIRe1AuTyHceAbewn8b462yEWKA +Rdpd9AjQW5SIVPfdsz5B6GlYQ5LdYKtznTuy7wIDAQABAoIBAQCwia1k7+2oZ2d3 +n6agCAbqIE1QXfCmh41ZqJHbOY3oRQG3X1wpcGH4Gk+O+zDVTV2JszdcOt7E5dAy +MaomETAhRxB7hlIOnEN7WKm+dGNrKRvV0wDU5ReFMRHg31/Lnu8c+5BvGjZX+ky9 +POIhFFYJqwCRlopGSUIxmVj5rSgtzk3iWOQXr+ah1bjEXvlxDOWkHN6YfpV5ThdE +KdBIPGEVqa63r9n2h+qazKrtiRqJqGnOrHzOECYbRFYhexsNFz7YT02xdfSHn7gM +IvabDDP/Qp0PjE1jdouiMaFHYnLBbgvlnZW9yuVf/rpXTUq/njxIXMmvmEyyvSDn +FcFikB8pAoGBAPF77hK4m3/rdGT7X8a/gwvZ2R121aBcdPwEaUhvj/36dx596zvY +mEOjrWfZhF083/nYWE2kVquj2wjs+otCLfifEEgXcVPTnEOPO9Zg3uNSL0nNQghj +FuD3iGLTUBCtM66oTe0jLSslHe8gLGEQqyMzHOzYxNqibxcOZIe8Qt0NAoGBAO+U +I5+XWjWEgDmvyC3TrOSf/KCGjtu0TSv30ipv27bDLMrpvPmD/5lpptTFwcxvVhCs +2b+chCjlghFSWFbBULBrfci2FtliClOVMYrlNBdUSJhf3aYSG2Doe6Bgt1n2CpNn +/iu37Y3NfemZBJA7hNl4dYe+f+uzM87cdQ214+jrAoGAXA0XxX8ll2+ToOLJsaNT +OvNB9h9Uc5qK5X5w+7G7O998BN2PC/MWp8H+2fVqpXgNENpNXttkRm1hk1dych86 +EunfdPuqsX+as44oCyJGFHVBnWpm33eWQw9YqANRI+pCJzP08I5WK3osnPiwshd+ +hR54yjgfYhBFNI7B95PmEQkCgYBzFSz7h1+s34Ycr8SvxsOBWxymG5zaCsUbPsL0 +4aCgLScCHb9J+E86aVbbVFdglYa5Id7DPTL61ixhl7WZjujspeXZGSbmq0Kcnckb +mDgqkLECiOJW2NHP/j0McAkDLL4tysF8TLDO8gvuvzNC+WQ6drO2ThrypLVZQ+ry +eBIPmwKBgEZxhqa0gVvHQG/7Od69KWj4eJP28kq13RhKay8JOoN0vPmspXJo1HY3 
+CKuHRG+AP579dncdUnOMvfXOtkdM4vk0+hWASBQzM9xzVcztCa+koAugjVaLS9A+ +9uQoqEeVNTckxx0S2bYevRy7hGQmUJTyQm3j1zEUR5jpdbL83Fbq +-----END RSA PRIVATE KEY----- diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/sample_key.pub b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/sample_key.pub new file mode 100644 index 0000000000..03dc982acb --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/test/sample_key.pub @@ -0,0 +1,9 @@ +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4f5wg5l2hKsTeNem/V41 +fGnJm6gOdrj8ym3rFkEU/wT8RDtnSgFEZOQpHEgQ7JL38xUfU0Y3g6aYw9QT0hJ7 +mCpz9Er5qLaMXJwZxzHzAahlfA0icqabvJOMvQtzD6uQv6wPEyZtDTWiQi9AXwBp +HssPnpYGIn20ZZuNlX2BrClciHhCPUIIZOQn/MmqTD31jSyjoQoV7MhhMTATKJx2 +XrHhR+1DcKJzQBSTAGnpYVaqpsARap+nwRipr3nUTuxyGohBTSmjJ2usSeQXHI3b +ODIRe1AuTyHceAbewn8b462yEWKARdpd9AjQW5SIVPfdsz5B6GlYQ5LdYKtznTuy +7wIDAQAB +-----END PUBLIC KEY----- diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/token.go b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/token.go new file mode 100644 index 0000000000..d637e0867c --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/dgrijalva/jwt-go/token.go @@ -0,0 +1,108 @@ +package jwt + +import ( + "encoding/base64" + "encoding/json" + "strings" + "time" +) + +// TimeFunc provides the current time when parsing token to validate "exp" claim (expiration time). +// You can override it to use another time value. This is useful for testing or if your +// server uses a different time zone than your tokens. +var TimeFunc = time.Now + +// Parse methods use this callback function to supply +// the key for verification. The function receives the parsed, +// but unverified Token. This allows you to use properties in the +// Header of the token (such as `kid`) to identify which key to use. +type Keyfunc func(*Token) (interface{}, error) + +// A JWT Token. 
Different fields will be used depending on whether you're +// creating or parsing/verifying a token. +type Token struct { + Raw string // The raw token. Populated when you Parse a token + Method SigningMethod // The signing method used or to be used + Header map[string]interface{} // The first segment of the token + Claims Claims // The second segment of the token + Signature string // The third segment of the token. Populated when you Parse a token + Valid bool // Is the token valid? Populated when you Parse/Verify a token +} + +// Create a new Token. Takes a signing method +func New(method SigningMethod) *Token { + return NewWithClaims(method, MapClaims{}) +} + +func NewWithClaims(method SigningMethod, claims Claims) *Token { + return &Token{ + Header: map[string]interface{}{ + "typ": "JWT", + "alg": method.Alg(), + }, + Claims: claims, + Method: method, + } +} + +// Get the complete, signed token +func (t *Token) SignedString(key interface{}) (string, error) { + var sig, sstr string + var err error + if sstr, err = t.SigningString(); err != nil { + return "", err + } + if sig, err = t.Method.Sign(sstr, key); err != nil { + return "", err + } + return strings.Join([]string{sstr, sig}, "."), nil +} + +// Generate the signing string. This is the +// most expensive part of the whole deal. Unless you +// need this for something special, just go straight for +// the SignedString. +func (t *Token) SigningString() (string, error) { + var err error + parts := make([]string, 2) + for i, _ := range parts { + var jsonValue []byte + if i == 0 { + if jsonValue, err = json.Marshal(t.Header); err != nil { + return "", err + } + } else { + if jsonValue, err = json.Marshal(t.Claims); err != nil { + return "", err + } + } + + parts[i] = EncodeSegment(jsonValue) + } + return strings.Join(parts, "."), nil +} + +// Parse, validate, and return a token. +// keyFunc will receive the parsed token and should return the key for validating. 
+// If everything is kosher, err will be nil +func Parse(tokenString string, keyFunc Keyfunc) (*Token, error) { + return new(Parser).Parse(tokenString, keyFunc) +} + +func ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) { + return new(Parser).ParseWithClaims(tokenString, claims, keyFunc) +} + +// Encode JWT specific base64url encoding with padding stripped +func EncodeSegment(seg []byte) string { + return strings.TrimRight(base64.URLEncoding.EncodeToString(seg), "=") +} + +// Decode JWT specific base64url encoding with padding stripped +func DecodeSegment(seg string) ([]byte, error) { + if l := len(seg) % 4; l > 0 { + seg += strings.Repeat("=", 4-l) + } + + return base64.URLEncoding.DecodeString(seg) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/.travis.yml b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/.travis.yml new file mode 100644 index 0000000000..921806e55d --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/.travis.yml @@ -0,0 +1,12 @@ +language: go +sudo: false +before_script: + - go get -t -u ./... +script: + - make generate + - go test -v ./... + - ./scripts/check-diff.sh +go: + - 1.11.x + - 1.12.x + - tip diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/Changes b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/Changes new file mode 100644 index 0000000000..eddd7e2d2f --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/Changes @@ -0,0 +1,5 @@ +Changes +======= + +v0.9.0 - 22 May 2019 + * Start tagging versions for good measure. 
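The `EncodeSegment`/`DecodeSegment` pair in the vendored `token.go` above implements JWT's base64url convention: padding is stripped on encode, so decode must first restore enough `=` characters to make the length a multiple of four. A standalone sketch of the same round trip (mirroring the vendored functions, not importing them):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// encodeSegment mirrors jwt-go's EncodeSegment: base64url, padding stripped.
func encodeSegment(seg []byte) string {
	return strings.TrimRight(base64.URLEncoding.EncodeToString(seg), "=")
}

// decodeSegment mirrors jwt-go's DecodeSegment: restore '=' padding, then decode.
func decodeSegment(seg string) ([]byte, error) {
	if l := len(seg) % 4; l > 0 {
		seg += strings.Repeat("=", 4-l)
	}
	return base64.URLEncoding.DecodeString(seg)
}

func main() {
	enc := encodeSegment([]byte("ab")) // standard encoding would be "YWI="
	fmt.Println(enc)                   // YWI
	dec, err := decodeSegment(enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(dec)) // ab
}
```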
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/LICENSE b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/LICENSE new file mode 100644 index 0000000000..205e33a7f1 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/LICENSE @@ -0,0 +1,22 @@ +The MIT License (MIT) + +Copyright (c) 2015 lestrrat + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/Makefile b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/Makefile new file mode 100644 index 0000000000..b6a96ab4df --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/Makefile @@ -0,0 +1,16 @@ +.PHONY: generate realclean cover viewcover + +generate: + @$(MAKE) generate-jwa generate-jwk generate-jws generate-jwt + +generate-%: + @cd $(patsubst generate-%,%,$@); go generate + +realclean: + rm coverage.out + +cover: + go test -v -coverpkg=./... 
-coverprofile=coverage.out ./... + +viewcover: + go tool cover -html=coverage.out diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/README.md b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/README.md new file mode 100644 index 0000000000..f2db982586 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/README.md @@ -0,0 +1,295 @@ +# jwx + +Implementation of various JWx technologies + +[![Build Status](https://travis-ci.org/lestrrat-go/jwx.svg?branch=master)](https://travis-ci.org/lestrrat-go/jwx) +[![GoDoc](https://godoc.org/github.com/lestrrat-go/jwx?status.svg)](https://godoc.org/github.com/lestrrat-go/jwx) + +## Status + +### Done + +PR/issues welcome. + +| Package name | Notes | +|-----------------------------------------------------------|-------------------------------------------------| +| [jwt](https://github.com/lestrrat-go/jwx/tree/master/jwt) | [RFC 7519](https://tools.ietf.org/html/rfc7519) | +| [jwk](https://github.com/lestrrat-go/jwx/tree/master/jwk) | [RFC 7517](https://tools.ietf.org/html/rfc7517) + [RFC 7638](https://tools.ietf.org/html/rfc7638) | +| [jwa](https://github.com/lestrrat-go/jwx/tree/master/jwa) | [RFC 7518](https://tools.ietf.org/html/rfc7518) | +| [jws](https://github.com/lestrrat-go/jwx/tree/master/jws) | [RFC 7515](https://tools.ietf.org/html/rfc7515) | +| [jwe](https://github.com/lestrrat-go/jwx/tree/master/jwe) | [RFC 7516](https://tools.ietf.org/html/rfc7516) | + +### In progress: + +* jwe - more algorithms + +## Why? + +My goal was to write a server that heavily uses JWK and JWT. At first glance +the libraries that already exist seemed sufficient, but soon I realized that + +1. To completely implement the protocols, I needed the entire JWT, JWK, JWS, JWE (and JWA, by necessity). +2. 
Most of the existing libraries dealt with only the subset of the JWx specifications needed for their authors' specific use cases. + +For example, one library looked like it had most of JWS, JWE, and JWK covered, but it lacked the ability to include private claims in its JWT responses. Another library supported all the private claims, but lacked the flexibility to generate different response formats. + +Because I was writing the server side (and the client side for testing), I needed the *entire* JOSE toolset to properly implement my server, **and** it needed to be *flexible* enough to fulfill the entire spec I was writing. + +So here's go-jwx. This library is extensible, customizable, and hopefully well organized to the point that it is easy for you to slice and dice it. + +As of this writing (Nov 2015), it still lacks a few of the JWE algorithms described in JWA (which I believe are less frequently used), but in general you should be able to do pretty much everything the specifications allow. 
+ +## Synopsis + +### JWT + +See the examples here as well: [https://github.com/lestrrat-go/jwx/jwt](./jwt/README.md) + +```go +func ExampleJWT() { + const aLongLongTimeAgo = 233431200 + + t := jwt.New() + t.Set(jwt.SubjectKey, `https://github.com/lestrrat-go/jwx/jwt`) + t.Set(jwt.AudienceKey, `Golang Users`) + t.Set(jwt.IssuedAtKey, time.Unix(aLongLongTimeAgo, 0)) + t.Set(`privateClaimKey`, `Hello, World!`) + + buf, err := json.MarshalIndent(t, "", " ") + if err != nil { + fmt.Printf("failed to generate JSON: %s\n", err) + return + } + + fmt.Printf("%s\n", buf) + fmt.Printf("aud -> '%s'\n", t.Audience()) + fmt.Printf("iat -> '%s'\n", t.IssuedAt().Format(time.RFC3339)) + if v, ok := t.Get(`privateClaimKey`); ok { + fmt.Printf("privateClaimKey -> '%s'\n", v) + } + fmt.Printf("sub -> '%s'\n", t.Subject()) +} +``` + +### JWK + +See the examples here as well: https://godoc.org/github.com/lestrrat-go/jwx/jwk#pkg-examples + +Create a JWK file from RSA public key: + +```go +import( + "crypto/rand" + "crypto/rsa" + "encoding/json" + "log" + "os" + + "github.com/lestrrat-go/jwx/jwk" +) + +func main() { + privkey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + log.Printf("failed to generate private key: %s", err) + return + } + + key, err := jwk.New(&privkey.PublicKey) + if err != nil { + log.Printf("failed to create JWK: %s", err) + return + } + + jsonbuf, err := json.MarshalIndent(key, "", " ") + if err != nil { + log.Printf("failed to generate JSON: %s", err) + return + } + + os.Stdout.Write(jsonbuf) +} +``` + +Parse and use a JWK key: + +```go +import( + "log" + + "github.com/lestrrat-go/jwx/jwk" +) + +func main() { + set, err := jwk.Fetch("https://foobar.domain/jwk.json") + if err != nil { + log.Printf("failed to parse JWK: %s", err) + return + } + + // If you KNOW you have exactly one key, you can just + // use set.Keys[0] + keys := set.LookupKeyID("mykey") + if len(keys) == 0 { + log.Printf("failed to lookup key: %s", err) + return + } + + key, err := 
keys[0].Materialize() + if err != nil { + log.Printf("failed to create public key: %s", err) + return + } + + // Use key for jws.Verify() or whatever +} +``` + +### JWS + +See also `VerifyWithJWK` and `VerifyWithJKU` + +```go +import( + "crypto/rand" + "crypto/rsa" + "log" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jws" +) + +func main() { + privkey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + log.Printf("failed to generate private key: %s", err) + return + } + + buf, err := jws.Sign([]byte("Lorem ipsum"), jwa.RS256, privkey) + if err != nil { + log.Printf("failed to create JWS message: %s", err) + return + } + + // When you receive a JWS message, you can verify the signature + // and grab the payload sent in the message in one go: + verified, err := jws.Verify(buf, jwa.RS256, &privkey.PublicKey) + if err != nil { + log.Printf("failed to verify message: %s", err) + return + } + + log.Printf("signed message verified! -> %s", verified) +} +``` + +Supported signature algorithms: + +| Algorithm | Supported? 
| Constant in go-jwx | +|:----------------------------------------|:-----------|:-------------------| +| HMAC using SHA-256 | YES | jwa.HS256 | +| HMAC using SHA-384 | YES | jwa.HS384 | +| HMAC using SHA-512 | YES | jwa.HS512 | +| RSASSA-PKCS-v1.5 using SHA-256 | YES | jwa.RS256 | +| RSASSA-PKCS-v1.5 using SHA-384 | YES | jwa.RS384 | +| RSASSA-PKCS-v1.5 using SHA-512 | YES | jwa.RS512 | +| ECDSA using P-256 and SHA-256 | YES | jwa.ES256 | +| ECDSA using P-384 and SHA-384 | YES | jwa.ES384 | +| ECDSA using P-521 and SHA-512 | YES | jwa.ES512 | +| RSASSA-PSS using SHA256 and MGF1-SHA256 | YES | jwa.PS256 | +| RSASSA-PSS using SHA384 and MGF1-SHA384 | YES | jwa.PS384 | +| RSASSA-PSS using SHA512 and MGF1-SHA512 | YES | jwa.PS512 | + +### JWE + +See the examples here as well: https://godoc.org/github.com/lestrrat-go/jwx/jwe#pkg-examples + +```go +import( + "crypto/rand" + "crypto/rsa" + "log" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwe" +) + +func main() { + privkey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + log.Printf("failed to generate private key: %s", err) + return + } + + payload := []byte("Lorem Ipsum") + + encrypted, err := jwe.Encrypt(payload, jwa.RSA1_5, &privkey.PublicKey, jwa.A128CBC_HS256, jwa.NoCompress) + if err != nil { + log.Printf("failed to encrypt payload: %s", err) + return + } + + decrypted, err := jwe.Decrypt(encrypted, jwa.RSA1_5, privkey) + if err != nil { + log.Printf("failed to decrypt: %s", err) + return + } + + if string(decrypted) != "Lorem Ipsum" { + log.Printf("WHAT?!") + return + } +} +``` + +Supported key encryption algorithm: + +| Algorithm | Supported? 
| Constant in go-jwx | +|:-----------------------------------------|:-----------|:-----------------------| +| RSA-PKCS1v1.5 | YES | jwa.RSA1_5 | +| RSA-OAEP-SHA1 | YES | jwa.RSA_OAEP | +| RSA-OAEP-SHA256 | YES | jwa.RSA_OAEP_256 | +| AES key wrap (128) | YES | jwa.A128KW | +| AES key wrap (192) | YES | jwa.A192KW | +| AES key wrap (256) | YES | jwa.A256KW | +| Direct encryption | NO | jwa.DIRECT | +| ECDH-ES | YES | jwa.ECDH_ES | +| ECDH-ES + AES key wrap (128) | YES | jwa.ECDH_ES_A128KW | +| ECDH-ES + AES key wrap (192) | YES | jwa.ECDH_ES_A192KW | +| ECDH-ES + AES key wrap (256) | YES | jwa.ECDH_ES_A256KW | +| AES-GCM key wrap (128) | NO | jwa.A128GCMKW | +| AES-GCM key wrap (192) | NO | jwa.A192GCMKW | +| AES-GCM key wrap (256) | NO | jwa.A256GCMKW | +| PBES2 + HMAC-SHA256 + AES key wrap (128) | NO | jwa.PBES2_HS256_A128KW | +| PBES2 + HMAC-SHA384 + AES key wrap (192) | NO | jwa.PBES2_HS384_A192KW | +| PBES2 + HMAC-SHA512 + AES key wrap (256) | NO | jwa.PBES2_HS512_A256KW | + +Supported content encryption algorithm: + +| Algorithm | Supported? | Constant in go-jwx | +|:----------------------------|:-----------|:-----------------------| +| AES-CBC + HMAC-SHA256 (128) | YES | jwa.A128CBC_HS256 | +| AES-CBC + HMAC-SHA384 (192) | YES | jwa.A192CBC_HS384 | +| AES-CBC + HMAC-SHA512 (256) | YES | jwa.A256CBC_HS512 | +| AES-GCM (128) | YES | jwa.A128GCM | +| AES-GCM (192) | YES | jwa.A192GCM | +| AES-GCM (256) | YES | jwa.A256GCM | + +PRs welcome to support missing algorithms! + +## Other related libraries: + +* https://github.com/dgrijalva/jwt-go +* https://github.com/square/go-jose +* https://github.com/coreos/oidc +* https://golang.org/x/oauth2 + +## Contributions + +PRs welcome! 
+ +## Credits + +* Work on this library was generously sponsored by HDE Inc (https://www.hde.co.jp) +* Lots of code, especially JWE was taken from go-jose library (https://github.com/square/go-jose) diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/buffer/buffer.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/buffer/buffer.go new file mode 100644 index 0000000000..fbef9a4f59 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/buffer/buffer.go @@ -0,0 +1,118 @@ +// Package buffer provides a very thin wrapper around []byte buffer called +// `Buffer`, to provide functionalities that are often used within the jwx +// related packages +package buffer + +import ( + "encoding/base64" + "encoding/binary" + "encoding/json" + + "github.com/pkg/errors" +) + +// Buffer wraps `[]byte` and provides functions that are often used in +// the jwx related packages. One notable difference is that while +// encoding/json marshalls `[]byte` using base64.StdEncoding, this +// module uses base64.RawURLEncoding as mandated by the spec +type Buffer []byte + +// FromUint creates a `Buffer` from an unsigned int +func FromUint(v uint64) Buffer { + data := make([]byte, 8) + binary.BigEndian.PutUint64(data, v) + + i := 0 + for ; i < len(data); i++ { + if data[i] != 0x0 { + break + } + } + return Buffer(data[i:]) +} + +// FromBase64 constructs a new Buffer from a base64 encoded data +func FromBase64(v []byte) (Buffer, error) { + b := Buffer{} + if err := b.Base64Decode(v); err != nil { + return Buffer(nil), errors.Wrap(err, "failed to decode from base64") + } + + return b, nil +} + +// FromNData constructs a new Buffer from a "n:data" format +// (I made that name up) +func FromNData(v []byte) (Buffer, error) { + size := binary.BigEndian.Uint32(v) + buf := make([]byte, int(size)) + copy(buf, v[4:4+size]) + return Buffer(buf), nil +} + +// Bytes returns the raw bytes that comprises the Buffer +func (b Buffer) Bytes() 
[]byte { + return []byte(b) +} + +// NData returns Datalen || Data, where Datalen is a 32 bit counter for +// the length of the following data, and Data is the octets that comprise +// the buffer data +func (b Buffer) NData() []byte { + buf := make([]byte, 4+b.Len()) + binary.BigEndian.PutUint32(buf, uint32(b.Len())) + + copy(buf[4:], b.Bytes()) + return buf +} + +// Len returns the number of bytes that the Buffer holds +func (b Buffer) Len() int { + return len(b) +} + +func (b *Buffer) SetBytes(b2 []byte) { + *b = make([]byte, len(b2)) + copy(*b, b2) +} + +// Base64Encode encodes the contents of the Buffer using base64.RawURLEncoding +func (b Buffer) Base64Encode() ([]byte, error) { + enc := base64.RawURLEncoding + out := make([]byte, enc.EncodedLen(len(b))) + enc.Encode(out, b) + return out, nil +} + +// Base64Decode decodes the contents of the Buffer using base64.RawURLEncoding +func (b *Buffer) Base64Decode(v []byte) error { + enc := base64.RawURLEncoding + out := make([]byte, enc.DecodedLen(len(v))) + n, err := enc.Decode(out, v) + if err != nil { + return errors.Wrap(err, "failed to decode from base64") + } + out = out[:n] + *b = Buffer(out) + return nil +} + +// MarshalJSON marshals the buffer into JSON format after encoding the buffer +// with base64.RawURLEncoding +func (b Buffer) MarshalJSON() ([]byte, error) { + v, err := b.Base64Encode() + if err != nil { + return nil, errors.Wrap(err, "failed to encode to base64") + } + return json.Marshal(string(v)) +} + +// UnmarshalJSON unmarshals from a JSON string into a Buffer, after decoding it +// with base64.RawURLEncoding +func (b *Buffer) UnmarshalJSON(data []byte) error { + var x string + if err := json.Unmarshal(data, &x); err != nil { + return errors.Wrap(err, "failed to unmarshal JSON") + } + return b.Base64Decode([]byte(x)) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/buffer/buffer_test.go 
b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/buffer/buffer_test.go new file mode 100644 index 0000000000..4889c0a104 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/buffer/buffer_test.go @@ -0,0 +1,93 @@ +package buffer + +import ( + "encoding/json" + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestBuffer_FromUint(t *testing.T) { + b := FromUint(1) + if !assert.Equal(t, []byte{1}, b.Bytes(), "should be left trimmed") { + return + } +} + +func TestBuffer_Convert(t *testing.T) { + v1 := []byte{'a', 'b', 'c'} + b := Buffer(v1) + + if !assert.Equal(t, v1, b.Bytes()) { + return + } + + v2 := "abc" + b = Buffer(v2) + if !assert.Equal(t, []byte(v2), b.Bytes()) { + return + } + +} + +func TestBuffer_Base64Encode(t *testing.T) { + b := Buffer{'a', 'b', 'c'} + v, err := b.Base64Encode() + if !assert.NoError(t, err, "Base64 encode is successful") { + return + } + if !assert.Equal(t, []byte{'Y', 'W', 'J', 'j'}, v) { + return + } +} + +func TestJSON(t *testing.T) { + b1 := Buffer{'a', 'b', 'c'} + + jsontxt, err := json.Marshal(b1) + if !assert.NoError(t, err) { + return + } + + if !assert.Equal(t, `"YWJj"`, string(jsontxt)) { + return + } + + var b2 Buffer + if !assert.NoError(t, json.Unmarshal(jsontxt, &b2)) { + return + } + + if !assert.Equal(t, b1, b2) { + return + } +} + +func TestFunky(t *testing.T) { + s := `QD4_B3ghg0PNu-c_EAlXn3Xlb0gzAFPJSYQSI1cZZ8sPIxISgPMtNJTzgncC281IaKDXLV1aEnYuH5eH-4u4f383zlyBCGKSKSQWmqKNE7xcIqleFVNsfzOucTL4QRxfbcyHcli_symC_RGWJ6GdocE0VgyYN8t9_0sm_Nq5lcwtYEQs_hNlf1ileCjjdsUfC05zTbbrLpMjgI3IK5_QxOU81FLei4LMx3iQ1kqrIGH5FxxQMKGdx_fDaRQ-YBAA2YVqn7rs3TcwQ7NUjjz8JyDE168NlMV1WxoDC9nwOe0O6K4NzFuWpoGHTh0M-0lT5M3dy9kEBYgPtWoe_u9dogA` + b := Buffer{} + if !assert.NoError(t, b.Base64Decode([]byte(s)), "Base64Decode should work") { + return + } + + if !assert.Equal(t, 257, b.Len(), "Should 257 bytes") { + return + } +} + +func TestBuffer_NData(t *testing.T) { + payload := 
[]byte("Alice") + nd := Buffer(payload).NData() + if !assert.Equal(t, []byte{0, 0, 0, 5, 65, 108, 105, 99, 101}, nd, "NData mathces") { + return + } + + b1, err := FromNData(nd) + if !assert.NoError(t, err, "FromNData succeeds") { + return + } + + if !assert.Equal(t, payload, b1.Bytes(), "payload matches") { + return + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/cmd/jwx/jwx.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/cmd/jwx/jwx.go new file mode 100644 index 0000000000..8c9d2d8fc1 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/cmd/jwx/jwx.go @@ -0,0 +1,152 @@ +package main + +import ( + "bytes" + "encoding/json" + "flag" + "fmt" + "io" + "log" + "os" + + "github.com/lestrrat-go/jwx/jwk" + "github.com/lestrrat-go/jwx/jws" + "github.com/pkg/errors" +) + +func main() { + os.Exit(_main()) +} + +type JWKConfig struct { + JWKLocation string + Payload string +} + +type JWEConfig struct { + Algorithm string +} + +func _main() int { + var f func() int + + if len(os.Args) < 2 { + f = doHelp + } else { + switch os.Args[1] { + case "jwk": + f = doJWK + case "jwe": + f = doJWE + default: + f = doHelp + } + + os.Args = os.Args[1:] + } + return f() +} + +func doHelp() int { + fmt.Println(`jwx [command] [args]`) + return 0 +} + +func doJWE() int { + c := JWEConfig{} + flag.StringVar(&c.Algorithm, "alg", "", "Key encryption algorithm") + flag.Parse() + + return 0 +} + +func doJWK() int { + c := JWKConfig{} + flag.StringVar(&c.JWKLocation, "jwk", "", "JWK location, either a local file or a URL") + flag.Parse() + + if c.JWKLocation == "" { + fmt.Printf("-jwk must be specified\n") + return 1 + } + + key, err := jwk.Fetch(c.JWKLocation) + if err != nil { + log.Printf("%s", err) + return 0 + } + + keybuf, err := json.MarshalIndent(key, "", " ") + if err != nil { + log.Printf("%s", err) + return 0 + } + log.Printf("=== JWK ===") + for _, l := range bytes.Split(keybuf, []byte{'\n'}) { + 
log.Printf("%s", l) + } + + // TODO make it flexible + pubkey, err := (key.Keys[0]).(*jwk.RSAPublicKey).Materialize() + if err != nil { + log.Printf("%s", err) + return 0 + } + + var src io.Reader + if c.Payload == "" { + src = os.Stdin + } else { + f, err := os.Open(c.Payload) + if err != nil { + log.Printf("%s", errors.Wrap(err, "failed to open file "+c.Payload)) + return 1 + } + src = f + defer f.Close() + } + + var buf bytes.Buffer + src = io.TeeReader(src, &buf) + + message, err := jws.Parse(src) + if err != nil { + log.Printf("%s", err) + return 0 + } + + log.Printf("=== Payload ===") + // See if this is JSON. if it is, display it nicely + m := map[string]interface{}{} + if err := json.Unmarshal(message.Payload(), &m); err == nil { + payloadbuf, err := json.MarshalIndent(m, "", " ") + if err != nil { + log.Printf("%s", errors.Wrap(err, "failed to marshal payload")) + return 0 + } + for _, l := range bytes.Split(payloadbuf, []byte{'\n'}) { + log.Printf("%s", l) + } + } else { + log.Printf("%s", message.Payload()) + } + + for i, sig := range message.Signatures() { + log.Printf("=== Signature %d ===", i) + sigbuf, err := json.MarshalIndent(sig, "", " ") + if err != nil { + log.Printf("%s", errors.Wrap(err, "failed to marshal signature as JSON")) + return 0 + } + for _, l := range bytes.Split(sigbuf, []byte{'\n'}) { + log.Printf("%s", l) + } + + alg := sig.ProtectedHeaders().Algorithm() + if _, err := jws.Verify(buf.Bytes(), alg, pubkey); err == nil { + log.Printf("=== Verified with signature %d! 
===", i) + } + } + + return 1 +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/base64/base64.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/base64/base64.go new file mode 100644 index 0000000000..59aef5329e --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/base64/base64.go @@ -0,0 +1,28 @@ +package base64 + +import ( + "encoding/base64" + "encoding/binary" +) + +func EncodeToString(src []byte) string { + return base64.RawURLEncoding.EncodeToString(src) +} + +func EncodeUint64ToString(v uint64) string { + data := make([]byte, 8) + binary.BigEndian.PutUint64(data, v) + + i := 0 + for ; i < len(data); i++ { + if data[i] != 0x0 { + break + } + } + + return EncodeToString(data[i:]) +} + +func DecodeString(src string) ([]byte, error) { + return base64.RawURLEncoding.DecodeString(src) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/concatkdf/concatkdf.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/concatkdf/concatkdf.go new file mode 100644 index 0000000000..5a89b55f35 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/concatkdf/concatkdf.go @@ -0,0 +1,55 @@ +package concatkdf + +import ( + "crypto" + "encoding/binary" + "hash" + + "github.com/lestrrat-go/jwx/buffer" +) + +type KDF struct { + buf []byte + hash hash.Hash + otherinfo []byte + round uint32 + z []byte +} + +func New(hash crypto.Hash, alg, Z, apu, apv, pubinfo, privinfo []byte) *KDF { + algbuf := buffer.Buffer(alg).NData() + apubuf := buffer.Buffer(apu).NData() + apvbuf := buffer.Buffer(apv).NData() + + concat := make([]byte, len(algbuf)+len(apubuf)+len(apvbuf)+len(pubinfo)+len(privinfo)) + n := copy(concat, algbuf) + n += copy(concat[n:], apubuf) + n += copy(concat[n:], apvbuf) + n += copy(concat[n:], pubinfo) + n += copy(concat[n:], privinfo) + + return &KDF{ + hash: hash.New(), 
+ otherinfo: concat, + round: 1, + z: Z, + } +} + +func (k *KDF) Read(buf []byte) (int, error) { + h := k.hash + for len(buf) > len(k.buf) { + h.Reset() + + binary.Write(h, binary.BigEndian, k.round) + h.Write(k.z) + h.Write(k.otherinfo) + + k.buf = append(k.buf, h.Sum(nil)...) + k.round++ + } + + n := copy(buf, k.buf[:len(buf)]) + k.buf = k.buf[len(buf):] + return n, nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/concatkdf/concatkdf_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/concatkdf/concatkdf_test.go new file mode 100644 index 0000000000..f5c1bc76d6 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/concatkdf/concatkdf_test.go @@ -0,0 +1,42 @@ +package concatkdf + +import ( + "crypto" + "testing" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/stretchr/testify/assert" +) + +// https://tools.ietf.org/html/rfc7518#appendix-C +func TestAppendix(t *testing.T) { + z := []byte{158, 86, 217, 29, 129, 113, 53, 211, 114, 131, 66, 131, 191, 132, + 38, 156, 251, 49, 110, 163, 218, 128, 106, 72, 246, 218, 167, 121, + 140, 254, 144, 196} + alg := []byte(jwa.A128GCM.String()) + apu := []byte{65, 108, 105, 99, 101} + apv := []byte{66, 111, 98} + pub := []byte{0, 0, 0, 128} + priv := []byte(nil) + expected := []byte{86, 170, 141, 234, 248, 35, 109, 32, 92, 34, 40, 205, 113, 167, 16, 26} + + kdf := New(crypto.SHA256, alg, z, apu, apv, pub, priv) + + out := make([]byte, 16) // 128bits + + n, err := kdf.Read(out[:5]) + if !assert.Equal(t, 5, n, "first read bytes matches") || + !assert.NoError(t, err, "first read successful") { + return + } + + n, err = kdf.Read(out[5:]) + if !assert.Equal(t, 11, n, "second read bytes matches") || + !assert.NoError(t, err, "second read successful") { + return + } + + if !assert.Equal(t, expected, out, "generated value matches") { + return + } +} diff --git 
a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/debug/debug_off.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/debug/debug_off.go new file mode 100644 index 0000000000..2c0b7f6f2a --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/debug/debug_off.go @@ -0,0 +1,8 @@ +//+build !debug + +package debug + +const Enabled = false + +// Printf is no op unless you compile with the `debug` tag +func Printf(f string, args ...interface{}) {} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/debug/debug_on.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/debug/debug_on.go new file mode 100644 index 0000000000..72257f5254 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/debug/debug_on.go @@ -0,0 +1,16 @@ +//+build debug + +package debug + +import ( + "log" + "os" +) + +const Enabled = true + +var logger = log.New(os.Stdout, "|DEBUG| ", 0) + +func Printf(f string, args ...interface{}) { + logger.Printf(f, args...) 
+} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/ecdsautil/ecdsautil.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/ecdsautil/ecdsautil.go new file mode 100644 index 0000000000..7b97bcd2ac --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/ecdsautil/ecdsautil.go @@ -0,0 +1,94 @@ +package ecdsautil + +import ( + "crypto/ecdsa" + "crypto/elliptic" + "encoding/json" + "github.com/lestrrat-go/jwx/buffer" + "github.com/pkg/errors" + "math/big" +) + +type curve struct { + elliptic.Curve +} + +type rawkey struct { + Curve curve `json:"crv"` + D buffer.Buffer `json:"d"` + X buffer.Buffer `json:"x"` + Y buffer.Buffer `json:"y"` +} + +func (c *curve) UnmarshalJSON(data []byte) error { + var name string + if err := json.Unmarshal(data, &name); err != nil { + return errors.Wrap(err, `failed to unmarshal ecdsa curve`) + } + + switch name { + case "P-256": + *c = curve{elliptic.P256()} + case "P-384": + *c = curve{elliptic.P384()} + case "P-521": + *c = curve{elliptic.P521()} + default: + return errors.New("Unsupported curve") + } + return nil +} + +func NewRawKeyFromPublicKey(pubkey *ecdsa.PublicKey) *rawkey { + r := &rawkey{} + r.Curve = curve{pubkey.Curve} + r.X = buffer.Buffer(pubkey.X.Bytes()) + r.Y = buffer.Buffer(pubkey.Y.Bytes()) + return r +} + +func NewRawKeyFromPrivateKey(privkey *ecdsa.PrivateKey) *rawkey { + r := NewRawKeyFromPublicKey(&privkey.PublicKey) + r.D = buffer.Buffer(privkey.D.Bytes()) + return r +} + +func PublicKeyFromJSON(data []byte) (*ecdsa.PublicKey, error) { + r := rawkey{} + if err := json.Unmarshal(data, &r); err != nil { + return nil, errors.Wrap(err, `failed to unmarshal ecdsa public key`) + } + + return r.GeneratePublicKey() +} + +func PrivateKeyFromJSON(data []byte) (*ecdsa.PrivateKey, error) { + r := rawkey{} + if err := json.Unmarshal(data, &r); err != nil { + return nil, errors.Wrap(err, `failed to unmarshal ecdsa private 
key`) + } + + return r.GeneratePrivateKey() +} + +func (r rawkey) GeneratePublicKey() (*ecdsa.PublicKey, error) { + return &ecdsa.PublicKey{ + Curve: r.Curve.Curve, + X: (&big.Int{}).SetBytes(r.X.Bytes()), + Y: (&big.Int{}).SetBytes(r.Y.Bytes()), + }, nil +} + +func (r rawkey) GeneratePrivateKey() (*ecdsa.PrivateKey, error) { + pubkey, err := r.GeneratePublicKey() + if err != nil { + return nil, errors.Wrap(err, `failed to generate public key`) + } + + privkey := &ecdsa.PrivateKey{ + PublicKey: *pubkey, + D: (&big.Int{}).SetBytes(r.D.Bytes()), + } + + return privkey, nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/ecdsautil/ecdsautil_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/ecdsautil/ecdsautil_test.go new file mode 100644 index 0000000000..1deebbcc51 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/ecdsautil/ecdsautil_test.go @@ -0,0 +1,38 @@ +package ecdsautil + +import ( + "bytes" + "crypto/x509" + "encoding/json" + "encoding/pem" + "testing" +) + +func TestKeyConversions(t *testing.T) { + + const ECPrivateKey = `-----BEGIN PRIVATE KEY----- +MHcCAQEEIModxWofWpbtNo6KlPEUzX6M5BoqVhcriHM/jtQYcDMDoAoGCCqGSM49 +AwEHoUQDQgAE2mFHYH1k9QGmpzTirHWgGtRRKmFh8deqKNZVUuxEH4sIQrj2zOkP +7YzeEixA9G+d7ZEPp221fqA5i0u+PchowA== +-----END PRIVATE KEY-----` + + expectedECRawKeyBytes := []byte{123, 34, 99, 114, 118, 34, 58, 123, 34, 67, 117, 114, 118, 101, 34, 58, 123, 34, 80, 34, 58, 49, 49, 53, 55, 57, 50, 48, 56, 57, 50, 49, 48, 51, 53, 54, 50, 52, 56, 55, 54, 50, 54, 57, 55, 52, 52, 54, 57, 52, 57, 52, 48, 55, 53, 55, 51, 53, 51, 48, 48, 56, 54, 49, 52, 51, 52, 49, 53, 50, 57, 48, 51, 49, 52, 49, 57, 53, 53, 51, 51, 54, 51, 49, 51, 48, 56, 56, 54, 55, 48, 57, 55, 56, 53, 51, 57, 53, 49, 44, 34, 78, 34, 58, 49, 49, 53, 55, 57, 50, 48, 56, 57, 50, 49, 48, 51, 53, 54, 50, 52, 56, 55, 54, 50, 54, 57, 55, 52, 52, 54, 57, 52, 57, 52, 48, 55, 53, 55, 51, 53, 
50, 57, 57, 57, 54, 57, 53, 53, 50, 50, 52, 49, 51, 53, 55, 54, 48, 51, 52, 50, 52, 50, 50, 50, 53, 57, 48, 54, 49, 48, 54, 56, 53, 49, 50, 48, 52, 52, 51, 54, 57, 44, 34, 66, 34, 58, 52, 49, 48, 53, 56, 51, 54, 51, 55, 50, 53, 49, 53, 50, 49, 52, 50, 49, 50, 57, 51, 50, 54, 49, 50, 57, 55, 56, 48, 48, 52, 55, 50, 54, 56, 52, 48, 57, 49, 49, 52, 52, 52, 49, 48, 49, 53, 57, 57, 51, 55, 50, 53, 53, 53, 52, 56, 51, 53, 50, 53, 54, 51, 49, 52, 48, 51, 57, 52, 54, 55, 52, 48, 49, 50, 57, 49, 44, 34, 71, 120, 34, 58, 52, 56, 52, 51, 57, 53, 54, 49, 50, 57, 51, 57, 48, 54, 52, 53, 49, 55, 53, 57, 48, 53, 50, 53, 56, 53, 50, 53, 50, 55, 57, 55, 57, 49, 52, 50, 48, 50, 55, 54, 50, 57, 52, 57, 53, 50, 54, 48, 52, 49, 55, 52, 55, 57, 57, 53, 56, 52, 52, 48, 56, 48, 55, 49, 55, 48, 56, 50, 52, 48, 52, 54, 51, 53, 50, 56, 54, 44, 34, 71, 121, 34, 58, 51, 54, 49, 51, 52, 50, 53, 48, 57, 53, 54, 55, 52, 57, 55, 57, 53, 55, 57, 56, 53, 56, 53, 49, 50, 55, 57, 49, 57, 53, 56, 55, 56, 56, 49, 57, 53, 54, 54, 49, 49, 49, 48, 54, 54, 55, 50, 57, 56, 53, 48, 49, 53, 48, 55, 49, 56, 55, 55, 49, 57, 56, 50, 53, 51, 53, 54, 56, 52, 49, 52, 52, 48, 53, 49, 48, 57, 44, 34, 66, 105, 116, 83, 105, 122, 101, 34, 58, 50, 53, 54, 44, 34, 78, 97, 109, 101, 34, 58, 34, 80, 45, 50, 53, 54, 34, 125, 125, 44, 34, 100, 34, 58, 34, 121, 104, 51, 70, 97, 104, 57, 97, 108, 117, 48, 50, 106, 111, 113, 85, 56, 82, 84, 78, 102, 111, 122, 107, 71, 105, 112, 87, 70, 121, 117, 73, 99, 122, 45, 79, 49, 66, 104, 119, 77, 119, 77, 34, 44, 34, 120, 34, 58, 34, 50, 109, 70, 72, 89, 72, 49, 107, 57, 81, 71, 109, 112, 122, 84, 105, 114, 72, 87, 103, 71, 116, 82, 82, 75, 109, 70, 104, 56, 100, 101, 113, 75, 78, 90, 86, 85, 117, 120, 69, 72, 52, 115, 34, 44, 34, 121, 34, 58, 34, 67, 69, 75, 52, 57, 115, 122, 112, 68, 45, 50, 77, 51, 104, 73, 115, 81, 80, 82, 118, 110, 101, 50, 82, 68, 54, 100, 116, 116, 88, 54, 103, 79, 89, 116, 76, 118, 106, 51, 73, 97, 77, 65, 34, 125} + + t.Run("RawKeyFromPrivateKey", func(t 
*testing.T) { + + block, _ := pem.Decode([]byte(ECPrivateKey)) + privateKey, err := x509.ParseECPrivateKey(block.Bytes) + if err != nil { + t.Fatal("Failed to parse EC Private Key") + } + rawKey := NewRawKeyFromPrivateKey(privateKey) + rawKeyBytes, err := json.Marshal(rawKey) + if err != nil { + t.Fatal("Failed to json marshal EC Raw Key") + } + if bytes.Compare(expectedECRawKeyBytes, rawKeyBytes) != 0 { + t.Fatal("Keys do not match") + } + }) + +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/option/option.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/option/option.go new file mode 100644 index 0000000000..9259dc51b4 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/option/option.go @@ -0,0 +1,25 @@ +package option + +type Interface interface { + Name() string + Value() interface{} +} + +type Option struct { + name string + value interface{} +} + +func New(name string, value interface{}) *Option { + return &Option{ + name: name, + value: value, + } +} + +func (o *Option) Name() string { + return o.name +} +func (o *Option) Value() interface{} { + return o.value +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/padbuf/padbuf.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/padbuf/padbuf.go new file mode 100644 index 0000000000..f35598e8a4 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/padbuf/padbuf.go @@ -0,0 +1,61 @@ +// Package padbuf implements a simple buffer that knows how to pad/unpad +// itself so that the buffer size aligns with an arbitrary block size. 
+package padbuf + +import "errors" + +type PadBuffer []byte + +func (pb PadBuffer) Len() int { + return len(pb) +} + +func (pb PadBuffer) Pad(n int) PadBuffer { + rem := n - pb.Len()%n + if rem == 0 { + return pb + } + + newpb := pb.Resize(pb.Len() + rem) + + pad := make([]byte, rem) + for i := 0; i < rem; i++ { + pad[i] = byte(rem) + } + copy(newpb[pb.Len():], pad) + + return newpb +} + +func (pb PadBuffer) Resize(newlen int) PadBuffer { + if pb.Len() == newlen { + return pb + } + + buf := make([]byte, newlen) + copy(buf, pb) + return PadBuffer(buf) +} + +func (pb PadBuffer) Unpad(n int) (PadBuffer, error) { + rem := pb.Len() % n + if rem != 0 { + return pb, errors.New("buffer should be multiple block size") + } + + last := pb[pb.Len()-1] + + count := 0 + for i := pb.Len() - 1; i >= 0; i-- { + if pb[i] != last { + break + } + count++ + } + + if count != int(last) { + return pb, errors.New("invalid padding") + } + + return PadBuffer(pb[:pb.Len()-int(last)]), nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/padbuf/padbuf_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/padbuf/padbuf_test.go new file mode 100644 index 0000000000..f6049c173a --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/padbuf/padbuf_test.go @@ -0,0 +1,29 @@ +package padbuf + +import ( + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestPadBuffer(t *testing.T) { + for i := 0; i < 256; i++ { + buf := make([]byte, i) + pb := PadBuffer(buf) + + pb = pb.Pad(16) + + if !assert.Equal(t, pb.Len()%16, 0, "pb should be multiple of 16") { + return + } + + pb, err := pb.Unpad(16) + if !assert.NoError(t, err, "Unpad return successfully") { + return + } + + if !assert.Len(t, pb, i, "Unpad should result in len = %d", i) { + return + } + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/rsautil/rsautil.go 
b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/rsautil/rsautil.go new file mode 100644 index 0000000000..5a99341330 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/rsautil/rsautil.go @@ -0,0 +1,98 @@ +package rsautil + +import ( + "crypto/rsa" + "encoding/json" + "math/big" + + "github.com/lestrrat-go/jwx/buffer" + "github.com/pkg/errors" +) + +type rawkey struct { + N buffer.Buffer `json:"n"` + E buffer.Buffer `json:"e"` + D buffer.Buffer `json:"d"` + P buffer.Buffer `json:"p"` + Q buffer.Buffer `json:"q"` + Dp buffer.Buffer `json:"dp"` + Dq buffer.Buffer `json:"dq"` + Qi buffer.Buffer `json:"qi"` + CRTValues []rsa.CRTValue `json:"crtvalues"` +} + +func NewRawKeyFromPublicKey(pubkey *rsa.PublicKey) *rawkey { + r := &rawkey{} + r.N = buffer.Buffer(pubkey.N.Bytes()) + r.E = buffer.FromUint(uint64(pubkey.E)) + return r +} + +func NewRawKeyFromPrivateKey(privkey *rsa.PrivateKey) *rawkey { + r := NewRawKeyFromPublicKey(&privkey.PublicKey) + r.D = buffer.Buffer(privkey.D.Bytes()) + r.P = buffer.Buffer(privkey.Primes[0].Bytes()) + r.Q = buffer.Buffer(privkey.Primes[1].Bytes()) + r.Dp = buffer.Buffer(privkey.Precomputed.Dp.Bytes()) + r.Dq = buffer.Buffer(privkey.Precomputed.Dq.Bytes()) + r.Qi = buffer.Buffer(privkey.Precomputed.Qinv.Bytes()) + r.CRTValues = make([]rsa.CRTValue, len(privkey.Precomputed.CRTValues)) + copy(r.CRTValues, privkey.Precomputed.CRTValues) + return r +} + +func PublicKeyFromJSON(data []byte) (*rsa.PublicKey, error) { + r := rawkey{} + if err := json.Unmarshal(data, &r); err != nil { + return nil, errors.Wrap(err, `failed to unmarshal public key`) + } + + return r.GeneratePublicKey() +} + +func PrivateKeyFromJSON(data []byte) (*rsa.PrivateKey, error) { + r := rawkey{} + if err := json.Unmarshal(data, &r); err != nil { + return nil, errors.Wrap(err, `failed to unmarshal private key`) + } + + return r.GeneratePrivateKey() +} + +func (r rawkey) GeneratePublicKey() 
(*rsa.PublicKey, error) { + return &rsa.PublicKey{ + N: (&big.Int{}).SetBytes(r.N.Bytes()), + E: int((&big.Int{}).SetBytes(r.E.Bytes()).Int64()), + }, nil +} + +func (r rawkey) GeneratePrivateKey() (*rsa.PrivateKey, error) { + pubkey, err := r.GeneratePublicKey() + if err != nil { + return nil, errors.Wrap(err, `failed to generate public key`) + } + + privkey := &rsa.PrivateKey{ + PublicKey: *pubkey, + D: (&big.Int{}).SetBytes(r.D.Bytes()), + Primes: []*big.Int{ + (&big.Int{}).SetBytes(r.P.Bytes()), + (&big.Int{}).SetBytes(r.Q.Bytes()), + }, + } + + if r.Dp.Len() > 0 { + privkey.Precomputed.Dp = (&big.Int{}).SetBytes(r.Dp.Bytes()) + } + if r.Dq.Len() > 0 { + privkey.Precomputed.Dq = (&big.Int{}).SetBytes(r.Dq.Bytes()) + } + if r.Qi.Len() > 0 { + privkey.Precomputed.Qinv = (&big.Int{}).SetBytes(r.Qi.Bytes()) + } + + privkey.Precomputed.CRTValues = make([]rsa.CRTValue, len(r.CRTValues)) + copy(privkey.Precomputed.CRTValues, r.CRTValues) + + return privkey, nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/rsautil/rsautil_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/rsautil/rsautil_test.go new file mode 100644 index 0000000000..17b429bc0d --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/internal/rsautil/rsautil_test.go @@ -0,0 +1,59 @@ +package rsautil + +import ( + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "encoding/json" + "encoding/pem" + "reflect" + "testing" +) + +func TestRSAUtil(t *testing.T) { + + t.Run("RoundTripNewRawKeyFromPrivateKey", func(t *testing.T) { + + privateKey, err := rsa.GenerateKey(rand.Reader, 512) + if err != nil { + t.Fatalf("Error generating private key: %s", err.Error()) + } + rawKey := NewRawKeyFromPrivateKey(privateKey) + keyBytes, err := json.Marshal(rawKey) + realizedPrivateKey, err := PrivateKeyFromJSON(keyBytes) + if !reflect.DeepEqual(realizedPrivateKey, privateKey) { + t.Fatalf("Mismatched private keys") + } + 
}) + + t.Run("PublicKeyFromJSON", func(t *testing.T) { + const jwkPublicKey = `{ + "e":"AQAB", + "kty":"RSA", + "n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw" + }` + + const expectedPEM = `-----BEGIN PUBLIC KEY----- +MIIBCgKCAQEA0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4 +cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc/BJECPebWKRXjBZCiFV4n3oknjhMst +n64tZ/2W+5JsGY4Hc5n9yBXArwl93lqt7/RN5w6Cf0h4QyQ5v+65YGjQR0/FDW2Q +vzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbIS +D08qNLyrdkt+bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ+G/xBniIqbw +0Ls1jF44+csFCur+kEgU8awapJzKnqDKgwIDAQAB +-----END PUBLIC KEY----- +` + + publicKey := []byte(jwkPublicKey) + rsaPublicKey, err := PublicKeyFromJSON(publicKey) + if err != nil { + t.Fatalf("Failed to construct RSA public key from JSON: %s", err.Error()) + } + publicKeyBytes := x509.MarshalPKCS1PublicKey(rsaPublicKey) + pemBlock := &pem.Block{Type: "PUBLIC KEY", Bytes: publicKeyBytes} + realizedPublicKeyPem := pem.EncodeToMemory(pemBlock) + if !reflect.DeepEqual(realizedPublicKeyPem, []byte(expectedPEM)) { + t.Fatal("Mismatched public keys") + } + }) + +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/compression.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/compression.go new file mode 100644 index 0000000000..8cd23f1a4d --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/compression.go @@ -0,0 +1,43 @@ +// this file was auto-generated by internal/cmd/gentypes/main.go: DO NOT EDIT + +package jwa + +import ( + "github.com/pkg/errors" +) + +// CompressionAlgorithm represents the compression algorithms as described in 
https://tools.ietf.org/html/rfc7518#section-7.3 +type CompressionAlgorithm string + +// Supported values for CompressionAlgorithm +const ( + Deflate CompressionAlgorithm = "DEF" // DEFLATE (RFC 1951) + NoCompress CompressionAlgorithm = "" // No compression +) + +// Accept is used when conversion from values given by +// outside sources (such as JSON payloads) is required +func (v *CompressionAlgorithm) Accept(value interface{}) error { + var tmp CompressionAlgorithm + switch x := value.(type) { + case string: + tmp = CompressionAlgorithm(x) + case CompressionAlgorithm: + tmp = x + default: + return errors.Errorf(`invalid type for jwa.CompressionAlgorithm: %T`, value) + } + switch tmp { + case Deflate, NoCompress: + default: + return errors.Errorf(`invalid jwa.CompressionAlgorithm value`) + } + + *v = tmp + return nil +} + +// String returns the string representation of a CompressionAlgorithm +func (v CompressionAlgorithm) String() string { + return string(v) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/content_encryption.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/content_encryption.go new file mode 100644 index 0000000000..9c18086771 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/content_encryption.go @@ -0,0 +1,47 @@ +// this file was auto-generated by internal/cmd/gentypes/main.go: DO NOT EDIT + +package jwa + +import ( + "github.com/pkg/errors" +) + +// ContentEncryptionAlgorithm represents the various encryption algorithms as described in https://tools.ietf.org/html/rfc7518#section-5 +type ContentEncryptionAlgorithm string + +// Supported values for ContentEncryptionAlgorithm +const ( + A128CBC_HS256 ContentEncryptionAlgorithm = "A128CBC-HS256" // AES-CBC + HMAC-SHA256 (128) + A128GCM ContentEncryptionAlgorithm = "A128GCM" // AES-GCM (128) + A192CBC_HS384 ContentEncryptionAlgorithm = "A192CBC-HS384" // AES-CBC + HMAC-SHA384 (192) + A192GCM 
ContentEncryptionAlgorithm = "A192GCM" // AES-GCM (192) + A256CBC_HS512 ContentEncryptionAlgorithm = "A256CBC-HS512" // AES-CBC + HMAC-SHA512 (256) + A256GCM ContentEncryptionAlgorithm = "A256GCM" // AES-GCM (256) +) + +// Accept is used when conversion from values given by +// outside sources (such as JSON payloads) is required +func (v *ContentEncryptionAlgorithm) Accept(value interface{}) error { + var tmp ContentEncryptionAlgorithm + switch x := value.(type) { + case string: + tmp = ContentEncryptionAlgorithm(x) + case ContentEncryptionAlgorithm: + tmp = x + default: + return errors.Errorf(`invalid type for jwa.ContentEncryptionAlgorithm: %T`, value) + } + switch tmp { + case A128CBC_HS256, A128GCM, A192CBC_HS384, A192GCM, A256CBC_HS512, A256GCM: + default: + return errors.Errorf(`invalid jwa.ContentEncryptionAlgorithm value`) + } + + *v = tmp + return nil +} + +// String returns the string representation of a ContentEncryptionAlgorithm +func (v ContentEncryptionAlgorithm) String() string { + return string(v) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/elliptic.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/elliptic.go new file mode 100644 index 0000000000..e9d8779107 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/elliptic.go @@ -0,0 +1,44 @@ +// this file was auto-generated by internal/cmd/gentypes/main.go: DO NOT EDIT + +package jwa + +import ( + "github.com/pkg/errors" +) + +// EllipticCurveAlgorithm represents the algorithms used for EC keys +type EllipticCurveAlgorithm string + +// Supported values for EllipticCurveAlgorithm +const ( + P256 EllipticCurveAlgorithm = "P-256" + P384 EllipticCurveAlgorithm = "P-384" + P521 EllipticCurveAlgorithm = "P-521" +) + +// Accept is used when conversion from values given by +// outside sources (such as JSON payloads) is required +func (v *EllipticCurveAlgorithm) Accept(value interface{}) error { + var tmp 
EllipticCurveAlgorithm + switch x := value.(type) { + case string: + tmp = EllipticCurveAlgorithm(x) + case EllipticCurveAlgorithm: + tmp = x + default: + return errors.Errorf(`invalid type for jwa.EllipticCurveAlgorithm: %T`, value) + } + switch tmp { + case P256, P384, P521: + default: + return errors.Errorf(`invalid jwa.EllipticCurveAlgorithm value`) + } + + *v = tmp + return nil +} + +// String returns the string representation of a EllipticCurveAlgorithm +func (v EllipticCurveAlgorithm) String() string { + return string(v) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/internal/cmd/gentypes/main.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/internal/cmd/gentypes/main.go new file mode 100644 index 0000000000..aaf2a401dc --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/internal/cmd/gentypes/main.go @@ -0,0 +1,399 @@ +package main + +import ( + "bytes" + "fmt" + "go/format" + "log" + "os" + "sort" + "strconv" + + "github.com/pkg/errors" +) + +func main() { + if err := _main(); err != nil { + log.Printf("%s", err) + os.Exit(1) + } +} + +func _main() error { + typs := []typ{ + { + name: `CompressionAlgorithm`, + comment: `CompressionAlgorithm represents the compression algorithms as described in https://tools.ietf.org/html/rfc7518#section-7.3`, + filename: `compression.go`, + elements: []element{ + { + name: `NoCompress`, + value: ``, + comment: `No compression`, + }, + { + name: `Deflate`, + value: `DEF`, + comment: `DEFLATE (RFC 1951)`, + }, + }, + }, + { + name: `ContentEncryptionAlgorithm`, + comment: `ContentEncryptionAlgorithm represents the various encryption algorithms as described in https://tools.ietf.org/html/rfc7518#section-5`, + filename: `content_encryption.go`, + elements: []element{ + { + name: `A128CBC_HS256`, + value: `A128CBC-HS256`, + comment: `AES-CBC + HMAC-SHA256 (128)`, + }, + { + name: `A192CBC_HS384`, + value: `A192CBC-HS384`, + 
comment: `AES-CBC + HMAC-SHA384 (192)`, + }, + { + name: `A256CBC_HS512`, + value: `A256CBC-HS512`, + comment: `AES-CBC + HMAC-SHA512 (256)`, + }, + { + name: `A128GCM`, + value: `A128GCM`, + comment: `AES-GCM (128)`, + }, + { + name: `A192GCM`, + value: `A192GCM`, + comment: `AES-GCM (192)`, + }, + { + name: `A256GCM`, + value: `A256GCM`, + comment: `AES-GCM (256)`, + }, + }, + }, + { + name: `KeyType`, + comment: `KeyType represents the key type ("kty") that are supported`, + filename: "key_type.go", + elements: []element{ + { + name: `InvalidKeyType`, + value: ``, + comment: `Invalid KeyType`, + invalid: true, + }, + { + name: `EC`, + value: `EC`, + comment: `Elliptic Curve`, + }, + { + name: `RSA`, + value: `RSA`, + comment: `RSA`, + }, + { + name: `OctetSeq`, + value: `oct`, + comment: `Octet sequence (used to represent symmetric keys)`, + }, + }, + }, + { + name: `EllipticCurveAlgorithm`, + comment: ` EllipticCurveAlgorithm represents the algorithms used for EC keys`, + filename: `elliptic.go`, + elements: []element{ + { + name: `P256`, + value: `P-256`, + }, + { + name: `P384`, + value: `P-384`, + }, + { + name: `P521`, + value: `P-521`, + }, + }, + }, + { + name: `SignatureAlgorithm`, + comment: `SignatureAlgorithm represents the various signature algorithms as described in https://tools.ietf.org/html/rfc7518#section-3.1`, + filename: `signature.go`, + elements: []element{ + { + name: `NoSignature`, + value: "none", + }, + { + name: `HS256`, + value: "HS256", + comment: `HMAC using SHA-256`, + }, + { + name: `HS384`, + value: `HS384`, + comment: `HMAC using SHA-384`, + }, + { + name: `HS512`, + value: "HS512", + comment: `HMAC using SHA-512`, + }, + { + name: `RS256`, + value: `RS256`, + comment: `RSASSA-PKCS-v1.5 using SHA-256`, + }, + { + name: `RS384`, + value: `RS384`, + comment: `RSASSA-PKCS-v1.5 using SHA-384`, + }, + { + name: `RS512`, + value: `RS512`, + comment: `RSASSA-PKCS-v1.5 using SHA-512`, + }, + { + name: `ES256`, + value: `ES256`, + 
comment: `ECDSA using P-256 and SHA-256`, + }, + { + name: `ES384`, + value: `ES384`, + comment: `ECDSA using P-384 and SHA-384`, + }, + { + name: `ES512`, + value: "ES512", + comment: `ECDSA using P-521 and SHA-512`, + }, + { + name: `PS256`, + value: `PS256`, + comment: `RSASSA-PSS using SHA256 and MGF1-SHA256`, + }, + { + name: `PS384`, + value: `PS384`, + comment: `RSASSA-PSS using SHA384 and MGF1-SHA384`, + }, + { + name: `PS512`, + value: `PS512`, + comment: `RSASSA-PSS using SHA512 and MGF1-SHA512`, + }, + }, + }, + { + name: `KeyEncryptionAlgorithm`, + comment: `KeyEncryptionAlgorithm represents the various encryption algorithms as described in https://tools.ietf.org/html/rfc7518#section-4.1`, + filename: `key_encryption.go`, + elements: []element{ + { + name: `RSA1_5`, + value: "RSA1_5", + comment: `RSA-PKCS1v1.5`, + }, + { + name: `RSA_OAEP`, + value: "RSA-OAEP", + comment: `RSA-OAEP-SHA1`, + }, + { + name: `RSA_OAEP_256`, + value: "RSA-OAEP-256", + comment: `RSA-OAEP-SHA256`, + }, + { + name: `A128KW`, + value: "A128KW", + comment: `AES key wrap (128)`, + }, + { + name: `A192KW`, + value: "A192KW", + comment: `AES key wrap (192)`, + }, + { + name: `A256KW`, + value: "A256KW", + comment: `AES key wrap (256)`, + }, + { + name: `DIRECT`, + value: "dir", + comment: `Direct encryption`, + }, + { + name: `ECDH_ES`, + value: "ECDH-ES", + comment: `ECDH-ES`, + }, + { + name: `ECDH_ES_A128KW`, + value: "ECDH-ES+A128KW", + comment: `ECDH-ES + AES key wrap (128)`, + }, + { + name: `ECDH_ES_A192KW`, + value: "ECDH-ES+A192KW", + comment: `ECDH-ES + AES key wrap (192)`, + }, + { + name: `ECDH_ES_A256KW`, + value: "ECDH-ES+A256KW", + comment: `ECDH-ES + AES key wrap (256)`, + }, + { + name: `A128GCMKW`, + value: "A128GCMKW", + comment: `AES-GCM key wrap (128)`, + }, + { + name: `A192GCMKW`, + value: "A192GCMKW", + comment: `AES-GCM key wrap (192)`, + }, + { + name: `A256GCMKW`, + value: "A256GCMKW", + comment: `AES-GCM key wrap (256)`, + }, + { + name: 
`PBES2_HS256_A128KW`, + value: "PBES2-HS256+A128KW", + comment: `PBES2 + HMAC-SHA256 + AES key wrap (128)`, + }, + { + name: `PBES2_HS384_A192KW`, + value: "PBES2-HS384+A192KW", + comment: `PBES2 + HMAC-SHA384 + AES key wrap (192)`, + }, + { + name: `PBES2_HS512_A256KW`, + value: "PBES2-HS512+A256KW", + comment: `PBES2 + HMAC-SHA512 + AES key wrap (256)`, + }, + }, + }, + } + + sort.Slice(typs, func(i, j int) bool { + return typs[i].name < typs[j].name + }) + + for _, t := range typs { + sort.Slice(t.elements, func(i, j int) bool { + return t.elements[i].name < t.elements[j].name + }) + if err := t.Generate(); err != nil { + return errors.Wrap(err, `failed to generate file`) + } + } + return nil +} + +type typ struct { + name string + comment string + filename string + elements []element +} + +type element struct { + name string + value string + comment string + invalid bool +} + +func (t typ) Generate() error { + var buf bytes.Buffer + + fmt.Fprintf(&buf, "// this file was auto-generated by internal/cmd/gentypes/main.go: DO NOT EDIT") + fmt.Fprintf(&buf, "\n\npackage jwa") + fmt.Fprintf(&buf, "\n\nimport (") + for _, pkg := range []string{"github.com/pkg/errors"} { + fmt.Fprintf(&buf, "\n%s", strconv.Quote(pkg)) + } + fmt.Fprintf(&buf, "\n)") + fmt.Fprintf(&buf, "\n\n// %s", t.comment) + fmt.Fprintf(&buf, "\ntype %s string", t.name) + + fmt.Fprintf(&buf, "\n\n// Supported values for %s", t.name) + fmt.Fprintf(&buf, "\nconst (") + for _, e := range t.elements { + fmt.Fprintf(&buf, "\n%s %s = %s", e.name, t.name, strconv.Quote(e.value)) + if len(e.comment) > 0 { + fmt.Fprintf(&buf, " // %s", e.comment) + } + } + fmt.Fprintf(&buf, "\n)") // end const + + fmt.Fprintf(&buf, "\n\n// Accept is used when conversion from values given by") + fmt.Fprintf(&buf, "\n// outside sources (such as JSON payloads) is required") + fmt.Fprintf(&buf, "\nfunc (v *%s) Accept(value interface{}) error {", t.name) + fmt.Fprintf(&buf, "\nvar tmp %s", t.name) + fmt.Fprintf(&buf, "\nswitch x := 
value.(type) {") + fmt.Fprintf(&buf, "\ncase string:") + fmt.Fprintf(&buf, "\ntmp = %s(x)", t.name) + fmt.Fprintf(&buf, "\ncase %s:", t.name) + fmt.Fprintf(&buf, "\ntmp = x") + fmt.Fprintf(&buf, "\ndefault:") + fmt.Fprintf(&buf, "\nreturn errors.Errorf(`invalid type for jwa.%s: %%T`, value)", t.name) + fmt.Fprintf(&buf, "\n}") + + fmt.Fprintf(&buf, "\nswitch tmp {") + fmt.Fprintf(&buf, "\ncase ") + var valids []element + for _, e := range t.elements { + if e.invalid { + continue + } + valids = append(valids, e) + } + + for i, e := range valids { + fmt.Fprintf(&buf, "%s", e.name) + if i < len(valids)-1 { + fmt.Fprintf(&buf, ", ") + } + } + fmt.Fprintf(&buf, ":") + fmt.Fprintf(&buf, "\ndefault:") + fmt.Fprintf(&buf, "\nreturn errors.Errorf(`invalid jwa.%s value`)", t.name) + fmt.Fprintf(&buf, "\n}") + + fmt.Fprintf(&buf, "\n\n*v = tmp") + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") // func (v *%s) Accept(v interface{}) + + fmt.Fprintf(&buf, "\n\n// String returns the string representation of a %s", t.name) + fmt.Fprintf(&buf, "\nfunc (v %s) String() string {", t.name) + fmt.Fprintf(&buf, "\nreturn string(v)") + fmt.Fprintf(&buf, "\n}") + + formatted, err := format.Source(buf.Bytes()) + if err != nil { + os.Stdout.Write(buf.Bytes()) + return errors.Wrap(err, `failed to format source`) + } + + f, err := os.Create(t.filename) + if err != nil { + return errors.Wrapf(err, `failed to create %s`, t.filename) + } + defer f.Close() + f.Write(formatted) + + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/jwa.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/jwa.go new file mode 100644 index 0000000000..5e2b351a7a --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/jwa.go @@ -0,0 +1,17 @@ +//go:generate go run internal/cmd/gentypes/main.go + +// Package jwa defines the various algorithm described in https://tools.ietf.org/html/rfc7518 +package jwa + +// 
Size returns the size of the EllipticCurveAlgorithm +func (crv EllipticCurveAlgorithm) Size() int { + switch crv { + case P256: + return 32 + case P384: + return 48 + case P521: + return 66 + } + return 0 +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/key_encryption.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/key_encryption.go new file mode 100644 index 0000000000..dbb6eeddc5 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/key_encryption.go @@ -0,0 +1,58 @@ +// this file was auto-generated by internal/cmd/gentypes/main.go: DO NOT EDIT + +package jwa + +import ( + "github.com/pkg/errors" +) + +// KeyEncryptionAlgorithm represents the various encryption algorithms as described in https://tools.ietf.org/html/rfc7518#section-4.1 +type KeyEncryptionAlgorithm string + +// Supported values for KeyEncryptionAlgorithm +const ( + A128GCMKW KeyEncryptionAlgorithm = "A128GCMKW" // AES-GCM key wrap (128) + A128KW KeyEncryptionAlgorithm = "A128KW" // AES key wrap (128) + A192GCMKW KeyEncryptionAlgorithm = "A192GCMKW" // AES-GCM key wrap (192) + A192KW KeyEncryptionAlgorithm = "A192KW" // AES key wrap (192) + A256GCMKW KeyEncryptionAlgorithm = "A256GCMKW" // AES-GCM key wrap (256) + A256KW KeyEncryptionAlgorithm = "A256KW" // AES key wrap (256) + DIRECT KeyEncryptionAlgorithm = "dir" // Direct encryption + ECDH_ES KeyEncryptionAlgorithm = "ECDH-ES" // ECDH-ES + ECDH_ES_A128KW KeyEncryptionAlgorithm = "ECDH-ES+A128KW" // ECDH-ES + AES key wrap (128) + ECDH_ES_A192KW KeyEncryptionAlgorithm = "ECDH-ES+A192KW" // ECDH-ES + AES key wrap (192) + ECDH_ES_A256KW KeyEncryptionAlgorithm = "ECDH-ES+A256KW" // ECDH-ES + AES key wrap (256) + PBES2_HS256_A128KW KeyEncryptionAlgorithm = "PBES2-HS256+A128KW" // PBES2 + HMAC-SHA256 + AES key wrap (128) + PBES2_HS384_A192KW KeyEncryptionAlgorithm = "PBES2-HS384+A192KW" // PBES2 + HMAC-SHA384 + AES key wrap (192) + PBES2_HS512_A256KW 
KeyEncryptionAlgorithm = "PBES2-HS512+A256KW" // PBES2 + HMAC-SHA512 + AES key wrap (256) + RSA1_5 KeyEncryptionAlgorithm = "RSA1_5" // RSA-PKCS1v1.5 + RSA_OAEP KeyEncryptionAlgorithm = "RSA-OAEP" // RSA-OAEP-SHA1 + RSA_OAEP_256 KeyEncryptionAlgorithm = "RSA-OAEP-256" // RSA-OAEP-SHA256 +) + +// Accept is used when conversion from values given by +// outside sources (such as JSON payloads) is required +func (v *KeyEncryptionAlgorithm) Accept(value interface{}) error { + var tmp KeyEncryptionAlgorithm + switch x := value.(type) { + case string: + tmp = KeyEncryptionAlgorithm(x) + case KeyEncryptionAlgorithm: + tmp = x + default: + return errors.Errorf(`invalid type for jwa.KeyEncryptionAlgorithm: %T`, value) + } + switch tmp { + case A128GCMKW, A128KW, A192GCMKW, A192KW, A256GCMKW, A256KW, DIRECT, ECDH_ES, ECDH_ES_A128KW, ECDH_ES_A192KW, ECDH_ES_A256KW, PBES2_HS256_A128KW, PBES2_HS384_A192KW, PBES2_HS512_A256KW, RSA1_5, RSA_OAEP, RSA_OAEP_256: + default: + return errors.Errorf(`invalid jwa.KeyEncryptionAlgorithm value`) + } + + *v = tmp + return nil +} + +// String returns the string representation of a KeyEncryptionAlgorithm +func (v KeyEncryptionAlgorithm) String() string { + return string(v) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/key_type.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/key_type.go new file mode 100644 index 0000000000..747fe09b69 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/key_type.go @@ -0,0 +1,45 @@ +// this file was auto-generated by internal/cmd/gentypes/main.go: DO NOT EDIT + +package jwa + +import ( + "github.com/pkg/errors" +) + +// KeyType represents the key type ("kty") that are supported +type KeyType string + +// Supported values for KeyType +const ( + EC KeyType = "EC" // Elliptic Curve + InvalidKeyType KeyType = "" // Invalid KeyType + OctetSeq KeyType = "oct" // Octet sequence (used to represent symmetric keys) + RSA 
KeyType = "RSA" // RSA +) + +// Accept is used when conversion from values given by +// outside sources (such as JSON payloads) is required +func (v *KeyType) Accept(value interface{}) error { + var tmp KeyType + switch x := value.(type) { + case string: + tmp = KeyType(x) + case KeyType: + tmp = x + default: + return errors.Errorf(`invalid type for jwa.KeyType: %T`, value) + } + switch tmp { + case EC, OctetSeq, RSA: + default: + return errors.Errorf(`invalid jwa.KeyType value`) + } + + *v = tmp + return nil +} + +// String returns the string representation of a KeyType +func (v KeyType) String() string { + return string(v) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/signature.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/signature.go new file mode 100644 index 0000000000..e661fa7971 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwa/signature.go @@ -0,0 +1,54 @@ +// this file was auto-generated by internal/cmd/gentypes/main.go: DO NOT EDIT + +package jwa + +import ( + "github.com/pkg/errors" +) + +// SignatureAlgorithm represents the various signature algorithms as described in https://tools.ietf.org/html/rfc7518#section-3.1 +type SignatureAlgorithm string + +// Supported values for SignatureAlgorithm +const ( + ES256 SignatureAlgorithm = "ES256" // ECDSA using P-256 and SHA-256 + ES384 SignatureAlgorithm = "ES384" // ECDSA using P-384 and SHA-384 + ES512 SignatureAlgorithm = "ES512" // ECDSA using P-521 and SHA-512 + HS256 SignatureAlgorithm = "HS256" // HMAC using SHA-256 + HS384 SignatureAlgorithm = "HS384" // HMAC using SHA-384 + HS512 SignatureAlgorithm = "HS512" // HMAC using SHA-512 + NoSignature SignatureAlgorithm = "none" + PS256 SignatureAlgorithm = "PS256" // RSASSA-PSS using SHA256 and MGF1-SHA256 + PS384 SignatureAlgorithm = "PS384" // RSASSA-PSS using SHA384 and MGF1-SHA384 + PS512 SignatureAlgorithm = "PS512" // RSASSA-PSS using SHA512 and 
MGF1-SHA512 + RS256 SignatureAlgorithm = "RS256" // RSASSA-PKCS-v1.5 using SHA-256 + RS384 SignatureAlgorithm = "RS384" // RSASSA-PKCS-v1.5 using SHA-384 + RS512 SignatureAlgorithm = "RS512" // RSASSA-PKCS-v1.5 using SHA-512 +) + +// Accept is used when conversion from values given by +// outside sources (such as JSON payloads) is required +func (v *SignatureAlgorithm) Accept(value interface{}) error { + var tmp SignatureAlgorithm + switch x := value.(type) { + case string: + tmp = SignatureAlgorithm(x) + case SignatureAlgorithm: + tmp = x + default: + return errors.Errorf(`invalid type for jwa.SignatureAlgorithm: %T`, value) + } + switch tmp { + case ES256, ES384, ES512, HS256, HS384, HS512, NoSignature, PS256, PS384, PS512, RS256, RS384, RS512: + default: + return errors.Errorf(`invalid jwa.SignatureAlgorithm value`) + } + + *v = tmp + return nil +} + +// String returns the string representation of a SignatureAlgorithm +func (v SignatureAlgorithm) String() string { + return string(v) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/aescbc/aescbc.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/aescbc/aescbc.go new file mode 100644 index 0000000000..3401367efa --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/aescbc/aescbc.go @@ -0,0 +1,183 @@ +package aescbc + +import ( + "crypto/cipher" + "crypto/hmac" + "crypto/sha256" + "crypto/sha512" + "crypto/subtle" + "encoding/binary" + "fmt" + "hash" + + "github.com/lestrrat-go/jwx/internal/debug" + "github.com/lestrrat-go/jwx/internal/padbuf" + "github.com/pkg/errors" +) + +const ( + NonceSize = 16 +) + +type AesCbcHmac struct { + blockCipher cipher.Block + hash func() hash.Hash + keysize int + tagsize int + integrityKey []byte +} + +type BlockCipherFunc func([]byte) (cipher.Block, error) + +func New(key []byte, f BlockCipherFunc) (*AesCbcHmac, error) { + keysize := len(key) / 2 + ikey := key[:keysize] + ekey := 
key[keysize:] + + if debug.Enabled { + debug.Printf("New: keysize = %d", keysize) + debug.Printf("New: cek (key) = %x (%d)\n", key, len(key)) + debug.Printf("New: ikey = %x (%d)\n", ikey, len(ikey)) + debug.Printf("New: ekey = %x (%d)\n", ekey, len(ekey)) + } + + bc, err := f(ekey) + if err != nil { + return nil, errors.Wrap(err, `failed to execute block cipher function`) + } + + var hfunc func() hash.Hash + switch keysize { + case 16: + hfunc = sha256.New + case 24: + hfunc = sha512.New384 + case 32: + hfunc = sha512.New + default: + return nil, errors.Errorf("unsupported key size %d", keysize) + } + + return &AesCbcHmac{ + blockCipher: bc, + hash: hfunc, + integrityKey: ikey, + keysize: keysize, + tagsize: NonceSize, + }, nil +} + +// NonceSize fulfills the crypto.AEAD interface +func (c AesCbcHmac) NonceSize() int { + return NonceSize +} + +// Overhead fulfills the crypto.AEAD interface +func (c AesCbcHmac) Overhead() int { + return c.blockCipher.BlockSize() + c.tagsize +} + +func (c AesCbcHmac) ComputeAuthTag(aad, nonce, ciphertext []byte) []byte { + if debug.Enabled { + debug.Printf("ComputeAuthTag: aad = %x (%d)\n", aad, len(aad)) + debug.Printf("ComputeAuthTag: ciphertext = %x (%d)\n", ciphertext, len(ciphertext)) + debug.Printf("ComputeAuthTag: iv (nonce) = %x (%d)\n", nonce, len(nonce)) + debug.Printf("ComputeAuthTag: integrity = %x (%d)\n", c.integrityKey, len(c.integrityKey)) + } + + buf := make([]byte, len(aad)+len(nonce)+len(ciphertext)+8) + n := 0 + n += copy(buf, aad) + n += copy(buf[n:], nonce) + n += copy(buf[n:], ciphertext) + binary.BigEndian.PutUint64(buf[n:], uint64(len(aad)*8)) + + h := hmac.New(c.hash, c.integrityKey) + h.Write(buf) + s := h.Sum(nil) + if debug.Enabled { + debug.Printf("ComputeAuthTag: buf = %x (%d)\n", buf, len(buf)) + debug.Printf("ComputeAuthTag: computed = %x (%d)\n", s[:c.keysize], len(s[:c.keysize])) + } + return s[:c.tagsize] +} + +func ensureSize(dst []byte, n int) []byte { + // if the dst buffer has enough length 
just copy the relevant parts to it. + // Otherwise create a new slice that's big enough, and operate on that + // Note: I think go-jose has a bug in that it checks for cap(), but not len(). + ret := dst + if diff := n - len(dst); diff > 0 { + // dst is not big enough + ret = make([]byte, n) + copy(ret, dst) + } + return ret +} + +// Seal fulfills the crypto.AEAD interface +func (c AesCbcHmac) Seal(dst, nonce, plaintext, data []byte) []byte { + ctlen := len(plaintext) + ciphertext := make([]byte, ctlen+c.Overhead())[:ctlen] + copy(ciphertext, plaintext) + ciphertext = padbuf.PadBuffer(ciphertext).Pad(c.blockCipher.BlockSize()) + + cbc := cipher.NewCBCEncrypter(c.blockCipher, nonce) + cbc.CryptBlocks(ciphertext, ciphertext) + + authtag := c.ComputeAuthTag(data, nonce, ciphertext) + + retlen := len(dst) + len(ciphertext) + len(authtag) + + ret := ensureSize(dst, retlen) + out := ret[len(dst):] + n := copy(out, ciphertext) + n += copy(out[n:], authtag) + + if debug.Enabled { + debug.Printf("Seal: ciphertext = %x (%d)\n", ciphertext, len(ciphertext)) + debug.Printf("Seal: authtag = %x (%d)\n", authtag, len(authtag)) + debug.Printf("Seal: ret = %x (%d)\n", ret, len(ret)) + } + return ret +} + +// Open fulfills the crypto.AEAD interface +func (c AesCbcHmac) Open(dst, nonce, ciphertext, data []byte) ([]byte, error) { + if len(ciphertext) < c.keysize { + return nil, errors.New("invalid ciphertext (too short)") + } + + tagOffset := len(ciphertext) - c.tagsize + if tagOffset%c.blockCipher.BlockSize() != 0 { + return nil, fmt.Errorf( + "invalid ciphertext (invalid length: %d %% %d != 0)", + tagOffset, + c.blockCipher.BlockSize(), + ) + } + tag := ciphertext[tagOffset:] + ciphertext = ciphertext[:tagOffset] + + expectedTag := c.ComputeAuthTag(data, nonce, ciphertext) + if subtle.ConstantTimeCompare(expectedTag, tag) != 1 { + if debug.Enabled { + debug.Printf("provided tag = %x\n", tag) + debug.Printf("expected tag = %x\n", expectedTag) + } + return nil, errors.New("invalid 
ciphertext (tag mismatch)") + } + + cbc := cipher.NewCBCDecrypter(c.blockCipher, nonce) + buf := make([]byte, tagOffset) + cbc.CryptBlocks(buf, ciphertext) + + plaintext, err := padbuf.PadBuffer(buf).Unpad(c.blockCipher.BlockSize()) + if err != nil { + return nil, errors.Wrap(err, `failed to generate plaintext from decrypted blocks`) + } + ret := ensureSize(dst, len(plaintext)) + out := ret[len(dst):] + copy(out, plaintext) + return ret, nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/aescbc/aescbc_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/aescbc/aescbc_test.go new file mode 100644 index 0000000000..2e6f7b157d --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/aescbc/aescbc_test.go @@ -0,0 +1,60 @@ +package aescbc + +import ( + "crypto/aes" + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestVectorsAESCBC128(t *testing.T) { + // Source: http://tools.ietf.org/html/draft-ietf-jose-json-web-encryption-29#appendix-A.2 + plaintext := []byte{ + 76, 105, 118, 101, 32, 108, 111, 110, 103, 32, 97, 110, 100, 32, + 112, 114, 111, 115, 112, 101, 114, 46} + + aad := []byte{ + 101, 121, 74, 104, 98, 71, 99, 105, 79, 105, 74, 83, 85, 48, 69, + 120, 88, 122, 85, 105, 76, 67, 74, 108, 98, 109, 77, 105, 79, 105, + 74, 66, 77, 84, 73, 52, 81, 48, 74, 68, 76, 85, 104, 84, 77, 106, 85, + 50, 73, 110, 48} + + ciphertext := []byte{ + 40, 57, 83, 181, 119, 33, 133, 148, 198, 185, 243, 24, 152, 230, 6, + 75, 129, 223, 127, 19, 210, 82, 183, 230, 168, 33, 215, 104, 143, + 112, 56, 102} + + authtag := []byte{ + 246, 17, 244, 190, 4, 95, 98, 3, 231, 0, 115, 157, 242, 203, 100, + 191} + + key := []byte{ + 4, 211, 31, 197, 84, 157, 252, 254, 11, 100, 157, 250, 63, 170, 106, 206, + 107, 124, 212, 45, 111, 107, 9, 219, 200, 177, 0, 240, 143, 156, 44, 207} + + nonce := []byte{ + 3, 22, 60, 12, 43, 67, 104, 105, 108, 108, 105, 99, 111, 116, 104, 101} + + enc, err := 
New(key, aes.NewCipher) + out := enc.Seal(nil, nonce, plaintext, aad) + if !assert.NoError(t, err, "enc.Seal") { + return + } + + if !assert.Equal(t, ciphertext, out[:len(out)-enc.keysize], "Ciphertext tag should match") { + return + } + + if !assert.Equal(t, authtag, out[len(out)-enc.keysize:], "Auth tag should match") { + return + } + + out, err = enc.Open(nil, nonce, out, aad) + if !assert.NoError(t, err, "Open should succeed") { + return + } + + if !assert.Equal(t, plaintext, out, "Open should get us original text") { + return + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/cipher.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/cipher.go new file mode 100644 index 0000000000..7e83658db2 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/cipher.go @@ -0,0 +1,188 @@ +package jwe + +import ( + "crypto/aes" + "crypto/cipher" + "crypto/rsa" + "fmt" + + "github.com/lestrrat-go/jwx/internal/debug" + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwe/aescbc" + "github.com/pkg/errors" +) + +const ( + TagSize = 16 +) + +func (f AeadFetchFunc) AeadFetch(key []byte) (cipher.AEAD, error) { + return f(key) +} + +var GcmAeadFetch = AeadFetchFunc(func(key []byte) (cipher.AEAD, error) { + aescipher, err := aes.NewCipher(key) + if err != nil { + if debug.Enabled { + debug.Printf("GcmAeadFetch: failed to create cipher") + } + return nil, errors.Wrap(err, "cipher: failed to create AES cipher for GCM") + } + + aead, err := cipher.NewGCM(aescipher) + if err != nil { + return nil, errors.Wrap(err, `failed to create GCM for cipher`) + } + return aead, nil +}) +var CbcAeadFetch = AeadFetchFunc(func(key []byte) (cipher.AEAD, error) { + if debug.Enabled { + debug.Printf("CbcAeadFetch: fetching key (%d)", len(key)) + } + aead, err := aescbc.New(key, aes.NewCipher) + if err != nil { + if debug.Enabled { + debug.Printf("CbcAeadFetch: failed to create aead fetcher %v (%d): 
%s", key, len(key), err) + } + return nil, errors.Wrap(err, "cipher: failed to create AES cipher for CBC") + } + return aead, nil +}) + +func (c AesContentCipher) KeySize() int { + return c.keysize +} + +func (c AesContentCipher) TagSize() int { + return c.tagsize +} + +func NewAesContentCipher(alg jwa.ContentEncryptionAlgorithm) (*AesContentCipher, error) { + var keysize int + var fetcher AeadFetcher + switch alg { + case jwa.A128GCM: + keysize = 16 + fetcher = GcmAeadFetch + case jwa.A192GCM: + keysize = 24 + fetcher = GcmAeadFetch + case jwa.A256GCM: + keysize = 32 + fetcher = GcmAeadFetch + case jwa.A128CBC_HS256: + keysize = 16 * 2 + fetcher = CbcAeadFetch + case jwa.A192CBC_HS384: + keysize = 24 * 2 + fetcher = CbcAeadFetch + case jwa.A256CBC_HS512: + keysize = 32 * 2 + fetcher = CbcAeadFetch + default: + return nil, errors.Wrap(ErrUnsupportedAlgorithm, "failed to create AES content cipher") + } + + return &AesContentCipher{ + keysize: keysize, + tagsize: TagSize, + AeadFetcher: fetcher, + }, nil +} + +func (c AesContentCipher) encrypt(cek, plaintext, aad []byte) (iv, ciphertext, tag []byte, err error) { + var aead cipher.AEAD + aead, err = c.AeadFetch(cek) + if err != nil { + if debug.Enabled { + debug.Printf("AeadFetch failed: %s", err) + } + return nil, nil, nil, errors.Wrap(err, "failed to fetch AEAD") + } + + // Seal may panic (argh!), so protect ourselves from that + defer func() { + if e := recover(); e != nil { + switch e.(type) { + case error: + err = e.(error) + case string: + err = errors.New(e.(string)) + default: + err = fmt.Errorf("%s", e) + } + err = errors.Wrap(err, "failed to descrypt") + } + }() + + var bs ByteSource + if c.NonceGenerator == nil { + bs, err = NewRandomKeyGenerate(aead.NonceSize()).KeyGenerate() + } else { + bs, err = c.NonceGenerator.KeyGenerate() + } + if err != nil { + return nil, nil, nil, errors.Wrap(err, "failed to generate nonce") + } + iv = bs.Bytes() + + combined := aead.Seal(nil, iv, plaintext, aad) + tagoffset := 
len(combined) - c.TagSize() + if debug.Enabled { + debug.Printf("tagsize = %d", c.TagSize()) + } + tag = combined[tagoffset:] + ciphertext = make([]byte, tagoffset) + copy(ciphertext, combined[:tagoffset]) + + if debug.Enabled { + debug.Printf("encrypt: combined = %x (%d)\n", combined, len(combined)) + debug.Printf("encrypt: ciphertext = %x (%d)\n", ciphertext, len(ciphertext)) + debug.Printf("encrypt: tag = %x (%d)\n", tag, len(tag)) + debug.Printf("finally ciphertext = %x\n", ciphertext) + } + return +} + +func (c AesContentCipher) decrypt(cek, iv, ciphertxt, tag, aad []byte) (plaintext []byte, err error) { + aead, err := c.AeadFetch(cek) + if err != nil { + if debug.Enabled { + debug.Printf("AeadFetch failed for %v: %s", cek, err) + } + return nil, errors.Wrap(err, "failed to fetch AEAD data") + } + + // Open may panic (argh!), so protect ourselves from that + defer func() { + if e := recover(); e != nil { + switch e.(type) { + case error: + err = e.(error) + case string: + err = errors.New(e.(string)) + default: + err = fmt.Errorf("%s", e) + } + err = errors.Wrap(err, "failed to decrypt") + return + } + }() + + combined := make([]byte, len(ciphertxt)+len(tag)) + copy(combined, ciphertxt) + copy(combined[len(ciphertxt):], tag) + + if debug.Enabled { + debug.Printf("AesContentCipher.decrypt: combined = %x (%d)", combined, len(combined)) + } + + plaintext, err = aead.Open(nil, iv, combined, aad) + return +} + +func NewRsaContentCipher(alg jwa.ContentEncryptionAlgorithm, pubkey *rsa.PublicKey) (*RsaContentCipher, error) { + return &RsaContentCipher{ + pubkey: pubkey, + }, nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/cipher_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/cipher_test.go new file mode 100644 index 0000000000..660444746e --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/cipher_test.go @@ -0,0 +1,26 @@ +package jwe + +import ( + "testing" + 
+ "github.com/lestrrat-go/jwx/jwa" + "github.com/stretchr/testify/assert" +) + +func TestAesContentCipher(t *testing.T) { + algs := []jwa.ContentEncryptionAlgorithm{ + jwa.A128GCM, + jwa.A192GCM, + jwa.A256GCM, + jwa.A128CBC_HS256, + jwa.A192CBC_HS384, + jwa.A256CBC_HS512, + } + for _, alg := range algs { + c, err := NewAesContentCipher(alg) + if !assert.NoError(t, err, "BuildCipher for %s succeeds", alg) { + return + } + t.Logf("keysize = %d", c.KeySize()) + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/content_crypt.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/content_crypt.go new file mode 100644 index 0000000000..3d0e15675a --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/content_crypt.go @@ -0,0 +1,59 @@ +package jwe + +import ( + "github.com/lestrrat-go/jwx/internal/debug" + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +func (c GenericContentCrypt) Algorithm() jwa.ContentEncryptionAlgorithm { + return c.alg +} + +func (c GenericContentCrypt) Encrypt(cek, plaintext, aad []byte) ([]byte, []byte, []byte, error) { + if debug.Enabled { + debug.Printf("ContentCrypt.Encrypt: cek = %x (%d)", cek, len(cek)) + debug.Printf("ContentCrypt.Encrypt: ciphertext = %x (%d)", plaintext, len(plaintext)) + debug.Printf("ContentCrypt.Encrypt: aad = %x (%d)", aad, len(aad)) + } + iv, encrypted, tag, err := c.cipher.encrypt(cek, plaintext, aad) + if err != nil { + if debug.Enabled { + debug.Printf("cipher.encrypt failed") + } + + return nil, nil, nil, errors.Wrap(err, `failed to crypt content`) + } + + return iv, encrypted, tag, nil +} + +func (c GenericContentCrypt) Decrypt(cek, iv, ciphertext, tag, aad []byte) ([]byte, error) { + return c.cipher.decrypt(cek, iv, ciphertext, tag, aad) +} + +func NewAesCrypt(alg jwa.ContentEncryptionAlgorithm) (*GenericContentCrypt, error) { + if debug.Enabled { + debug.Printf("AES Crypt: alg = %s", alg) + } + cipher, err 
:= NewAesContentCipher(alg) + if err != nil { + return nil, errors.Wrap(err, `aes crypt: failed to create content cipher`) + } + + if debug.Enabled { + debug.Printf("AES Crypt: cipher.keysize = %d", cipher.KeySize()) + } + + return &GenericContentCrypt{ + alg: alg, + cipher: cipher, + cekgen: NewRandomKeyGenerate(cipher.KeySize() * 2), + keysize: cipher.KeySize() * 2, + tagsize: 16, + }, nil +} + +func (c GenericContentCrypt) KeySize() int { + return c.keysize +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/doc_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/doc_test.go new file mode 100644 index 0000000000..08266f58c2 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/doc_test.go @@ -0,0 +1,36 @@ +package jwe + +import ( + "crypto/rand" + "crypto/rsa" + "log" + + "github.com/lestrrat-go/jwx/jwa" +) + +func ExampleEncrypt() { + privkey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + log.Printf("failed to generate private key: %s", err) + return + } + + payload := []byte("Lorem Ipsum") + + encrypted, err := Encrypt(payload, jwa.RSA1_5, &privkey.PublicKey, jwa.A128CBC_HS256, jwa.NoCompress) + if err != nil { + log.Printf("failed to encrypt payload: %s", err) + return + } + + decrypted, err := Decrypt(encrypted, jwa.RSA1_5, privkey) + if err != nil { + log.Printf("failed to decrypt: %s", err) + return + } + + if string(decrypted) != "Lorem Ipsum" { + log.Printf("WHAT?!") + return + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/encrypt.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/encrypt.go new file mode 100644 index 0000000000..b6ef1dfbe6 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/encrypt.go @@ -0,0 +1,105 @@ +package jwe + +import ( + "github.com/lestrrat-go/jwx/internal/debug" + "github.com/pkg/errors" +) + +// NewMultiEncrypt 
creates a new Encrypt struct. The caller is responsible +// for instantiating valid inputs for ContentEncrypter, KeyGenerator, +// and KeyEncrypters. +func NewMultiEncrypt(cc ContentEncrypter, kg KeyGenerator, ke ...KeyEncrypter) *MultiEncrypt { + e := &MultiEncrypt{ + ContentEncrypter: cc, + KeyGenerator: kg, + KeyEncrypters: ke, + } + return e +} + +// Encrypt takes the plaintext and encrypts into a JWE message. +func (e MultiEncrypt) Encrypt(plaintext []byte) (*Message, error) { + bk, err := e.KeyGenerator.KeyGenerate() + if err != nil { + if debug.Enabled { + debug.Printf("Failed to generate key: %s", err) + } + return nil, errors.Wrap(err, "failed to generate key") + } + cek := bk.Bytes() + + if debug.Enabled { + debug.Printf("Encrypt: generated cek len = %d", len(cek)) + } + + protected := NewEncodedHeader() + protected.Set("enc", e.ContentEncrypter.Algorithm()) + + // In JWE, multiple recipients may exist -- they receive an + // encrypted version of the CEK, using their key encryption + // algorithm of choice. 
+ recipients := make([]Recipient, len(e.KeyEncrypters)) + for i, enc := range e.KeyEncrypters { + r := NewRecipient() + r.Header.Set("alg", enc.Algorithm()) + if v := enc.Kid(); v != "" { + r.Header.Set("kid", v) + } + enckey, err := enc.KeyEncrypt(cek) + if err != nil { + if debug.Enabled { + debug.Printf("Failed to encrypt key: %s", err) + } + return nil, errors.Wrap(err, `failed to encrypt key`) + } + r.EncryptedKey = enckey.Bytes() + if hp, ok := enckey.(HeaderPopulater); ok { + hp.HeaderPopulate(r.Header) + } + if debug.Enabled { + debug.Printf("Encrypt: encrypted_key = %x (%d)", enckey.Bytes(), len(enckey.Bytes())) + } + recipients[i] = *r + } + + // If there's only one recipient, you want to include that in the + // protected header + if len(recipients) == 1 { + protected.Header, err = protected.Header.Merge(recipients[0].Header) + if err != nil { + return nil, errors.Wrap(err, "failed to merge protected headers") + } + } + + aad, err := protected.Base64Encode() + if err != nil { + return nil, errors.Wrap(err, "failed to base64 encode protected headers") + } + + // ...on the other hand, there's only one content cipher. 
+ iv, ciphertext, tag, err := e.ContentEncrypter.Encrypt(cek, plaintext, aad) + if err != nil { + if debug.Enabled { + debug.Printf("Failed to encrypt: %s", err) + } + return nil, errors.Wrap(err, "failed to encrypt payload") + } + + if debug.Enabled { + debug.Printf("Encrypt.Encrypt: cek = %x (%d)", cek, len(cek)) + debug.Printf("Encrypt.Encrypt: aad = %x", aad) + debug.Printf("Encrypt.Encrypt: ciphertext = %x", ciphertext) + debug.Printf("Encrypt.Encrypt: iv = %x", iv) + debug.Printf("Encrypt.Encrypt: tag = %x", tag) + } + + msg := NewMessage() + msg.AuthenticatedData.Base64Decode(aad) + msg.CipherText = ciphertext + msg.InitializationVector = iv + msg.ProtectedHeader = protected + msg.Recipients = recipients + msg.Tag = tag + + return msg, nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/interface.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/interface.go new file mode 100644 index 0000000000..65d478df59 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/interface.go @@ -0,0 +1,280 @@ +package jwe + +import ( + "crypto/cipher" + "crypto/ecdsa" + "crypto/rsa" + "errors" + "fmt" + "net/url" + + "github.com/lestrrat-go/jwx/buffer" + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" +) + +// Errors used in JWE +var ( + ErrInvalidBlockSize = errors.New("keywrap input must be 8 byte blocks") + ErrInvalidCompactPartsCount = errors.New("compact JWE format must have five parts") + ErrInvalidHeaderValue = errors.New("invalid value for header key") + ErrUnsupportedAlgorithm = errors.New("unsupported algorithm") + ErrMissingPrivateKey = errors.New("missing private key") +) + +type errUnsupportedAlgorithm struct { + alg string + purpose string +} + +// NewErrUnsupportedAlgorithm creates a new UnsupportedAlgorithm error +func NewErrUnsupportedAlgorithm(alg, purpose string) errUnsupportedAlgorithm { + return errUnsupportedAlgorithm{alg: alg, purpose: 
purpose} +} + +// Error returns the string representation of the error +func (e errUnsupportedAlgorithm) Error() string { + return fmt.Sprintf("unsupported algorithm '%s' for %s", e.alg, e.purpose) +} + +// EssentialHeader is a set of headers that are already defined in RFC 7516` +type EssentialHeader struct { + AgreementPartyUInfo buffer.Buffer `json:"apu,omitempty"` + AgreementPartyVInfo buffer.Buffer `json:"apv,omitempty"` + Algorithm jwa.KeyEncryptionAlgorithm `json:"alg,omitempty"` + ContentEncryption jwa.ContentEncryptionAlgorithm `json:"enc,omitempty"` + ContentType string `json:"cty,omitempty"` + Compression jwa.CompressionAlgorithm `json:"zip,omitempty"` + Critical []string `json:"crit,omitempty"` + EphemeralPublicKey *jwk.ECDSAPublicKey `json:"epk,omitempty"` + Jwk jwk.Key `json:"jwk,omitempty"` // public key + JwkSetURL *url.URL `json:"jku,omitempty"` + KeyID string `json:"kid,omitempty"` + Type string `json:"typ,omitempty"` // e.g. "JWT" + X509Url *url.URL `json:"x5u,omitempty"` + X509CertChain []string `json:"x5c,omitempty"` + X509CertThumbprint string `json:"x5t,omitempty"` + X509CertThumbprintS256 string `json:"x5t#S256,omitempty"` +} + +// Header represents a jws header. +type Header struct { + *EssentialHeader `json:"-"` + PrivateParams map[string]interface{} `json:"-"` +} + +// EncodedHeader represents a header value that is base64 encoded +// in JSON format +type EncodedHeader struct { + *Header +} + +// ByteSource is an interface for things that return a byte sequence. +// This is used for KeyGenerator so that the result of computations can +// carry more than just the generate byte sequence. +type ByteSource interface { + Bytes() []byte +} + +// KeyEncrypter is an interface for things that can encrypt keys +type KeyEncrypter interface { + Algorithm() jwa.KeyEncryptionAlgorithm + KeyEncrypt([]byte) (ByteSource, error) + // Kid returns the key id for this KeyEncrypter. 
This exists so that + // you can pass in a KeyEncrypter to MultiEncrypt, you can rest assured + // that the generated key will have the proper key ID. + Kid() string +} + +// KeyDecrypter is an interface for things that can decrypt keys +type KeyDecrypter interface { + Algorithm() jwa.KeyEncryptionAlgorithm + KeyDecrypt([]byte) ([]byte, error) +} + +// Recipient holds the encrypted key and hints to decrypt the key +type Recipient struct { + Header *Header `json:"header"` + EncryptedKey buffer.Buffer `json:"encrypted_key"` +} + +// Message contains the entire encrypted JWE message +type Message struct { + AuthenticatedData buffer.Buffer `json:"aad,omitempty"` + CipherText buffer.Buffer `json:"ciphertext"` + InitializationVector buffer.Buffer `json:"iv,omitempty"` + ProtectedHeader *EncodedHeader `json:"protected"` + Recipients []Recipient `json:"recipients"` + Tag buffer.Buffer `json:"tag,omitempty"` + UnprotectedHeader *Header `json:"unprotected,omitempty"` +} + +// Encrypter is the top level structure that encrypts the given +// payload to a JWE message +type Encrypter interface { + Encrypt([]byte) (*Message, error) +} + +// ContentEncrypter encrypts the content using the content using the +// encrypted key +type ContentEncrypter interface { + Algorithm() jwa.ContentEncryptionAlgorithm + Encrypt([]byte, []byte, []byte) ([]byte, []byte, []byte, error) +} + +// MultiEncrypt is the default Encrypter implementation. +type MultiEncrypt struct { + ContentEncrypter ContentEncrypter + KeyGenerator KeyGenerator // KeyGenerator creates the random CEK. + KeyEncrypters []KeyEncrypter +} + +// KeyWrapEncrypt encrypts content encryption keys using AES-CGM key wrap. +// Contrary to what the name implies, it also decrypt encrypted keys +type KeyWrapEncrypt struct { + alg jwa.KeyEncryptionAlgorithm + sharedkey []byte + KeyID string +} + +// EcdhesKeyWrapEncrypt encrypts content encryption keys using ECDH-ES. 
+type EcdhesKeyWrapEncrypt struct { + algorithm jwa.KeyEncryptionAlgorithm + generator KeyGenerator + KeyID string +} + +// EcdhesKeyWrapDecrypt decrypts keys using ECDH-ES. +type EcdhesKeyWrapDecrypt struct { + algorithm jwa.KeyEncryptionAlgorithm + apu []byte + apv []byte + privkey *ecdsa.PrivateKey + pubkey *ecdsa.PublicKey +} + +// ByteKey is a generated key that only has the key's byte buffer +// as its instance data. If a ke needs to do more, such as providing +// values to be set in a JWE header, that key type wraps a ByteKey +type ByteKey []byte + +// ByteWithECPrivateKey holds the EC-DSA private key that generated +// the key along witht he key itself. This is required to set the +// proper values in the JWE headers +type ByteWithECPrivateKey struct { + ByteKey + PrivateKey *ecdsa.PrivateKey +} + +// HeaderPopulater is an interface for things that may modify the +// JWE header. e.g. ByteWithECPrivateKey +type HeaderPopulater interface { + HeaderPopulate(*Header) +} + +// KeyGenerator generates the raw content encryption keys +type KeyGenerator interface { + KeySize() int + KeyGenerate() (ByteSource, error) +} + +// ContentCipher knows how to encrypt/decrypt the content given a content +// encryption key and other data +type ContentCipher interface { + KeySize() int + encrypt(cek, aad, plaintext []byte) ([]byte, []byte, []byte, error) + decrypt(cek, iv, aad, ciphertext, tag []byte) ([]byte, error) +} + +// GenericContentCrypt encrypts a message by applying all the necessary +// modifications to the keys and the contents +type GenericContentCrypt struct { + alg jwa.ContentEncryptionAlgorithm + keysize int + tagsize int + cipher ContentCipher + cekgen KeyGenerator +} + +// StaticKeyGenerate uses a static byte buffer to provide keys. 
+type StaticKeyGenerate []byte + +// RandomKeyGenerate generates random keys +type RandomKeyGenerate struct { + keysize int +} + +// EcdhesKeyGenerate generates keys using ECDH-ES algorithm +type EcdhesKeyGenerate struct { + algorithm jwa.KeyEncryptionAlgorithm + keysize int + pubkey *ecdsa.PublicKey +} + +// Serializer converts an encrypted message into a byte buffer +type Serializer interface { + Serialize(*Message) ([]byte, error) +} + +// CompactSerialize serializes the message into JWE compact serialized format +type CompactSerialize struct{} + +// JSONSerialize serializes the message into JWE JSON serialized format. If you +// set `Pretty` to true, `json.MarshalIndent` is used instead of `json.Marshal` +type JSONSerialize struct { + Pretty bool +} + +// AeadFetcher is an interface for things that can fetch AEAD ciphers +type AeadFetcher interface { + AeadFetch([]byte) (cipher.AEAD, error) +} + +// AeadFetchFunc fetches a AEAD cipher from the given key, and is +// represented by a function +type AeadFetchFunc func([]byte) (cipher.AEAD, error) + +// AesContentCipher represents a cipher based on AES +type AesContentCipher struct { + AeadFetcher + NonceGenerator KeyGenerator + keysize int + tagsize int +} + +// RsaContentCipher represents a cipher based on RSA +type RsaContentCipher struct { + pubkey *rsa.PublicKey +} + +// RSAPKCS15KeyDecrypt decrypts keys using RSA PKCS1v15 algorithm +type RSAPKCS15KeyDecrypt struct { + alg jwa.KeyEncryptionAlgorithm + privkey *rsa.PrivateKey + generator KeyGenerator +} + +// RSAPKCSKeyEncrypt encrypts keys using RSA PKCS1v15 algorithm +type RSAPKCSKeyEncrypt struct { + alg jwa.KeyEncryptionAlgorithm + pubkey *rsa.PublicKey + KeyID string +} + +// RSAOAEPKeyEncrypt encrypts keys using RSA OAEP algorithm +type RSAOAEPKeyEncrypt struct { + alg jwa.KeyEncryptionAlgorithm + pubkey *rsa.PublicKey + KeyID string +} + +// RSAOAEPKeyDecrypt decrypts keys using RSA OAEP algorithm +type RSAOAEPKeyDecrypt struct { + alg 
jwa.KeyEncryptionAlgorithm + privkey *rsa.PrivateKey +} + +// DirectDecrypt does not encryption (Note: Unimplemented) +type DirectDecrypt struct { + Key []byte +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/jwe.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/jwe.go new file mode 100644 index 0000000000..203fabdc03 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/jwe.go @@ -0,0 +1,285 @@ +// Package jwe implements JWE as described in https://tools.ietf.org/html/rfc7516 +package jwe + +import ( + "bytes" + "crypto/ecdsa" + "crypto/rsa" + "encoding/json" + + "github.com/lestrrat-go/jwx/buffer" + "github.com/lestrrat-go/jwx/internal/debug" + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" + "github.com/pkg/errors" +) + +// Encrypt takes the plaintext payload and encrypts it in JWE compact format. +func Encrypt(payload []byte, keyalg jwa.KeyEncryptionAlgorithm, key interface{}, contentalg jwa.ContentEncryptionAlgorithm, compressalg jwa.CompressionAlgorithm) ([]byte, error) { + contentcrypt, err := NewAesCrypt(contentalg) + if err != nil { + return nil, errors.Wrap(err, `failed to create AES encrypter`) + } + + var keyenc KeyEncrypter + var keysize int + switch keyalg { + case jwa.RSA1_5: + pubkey, ok := key.(*rsa.PublicKey) + if !ok { + return nil, errors.New("invalid key: *rsa.PublicKey required") + } + keyenc, err = NewRSAPKCSKeyEncrypt(keyalg, pubkey) + if err != nil { + return nil, errors.Wrap(err, "failed to create RSA PKCS encrypter") + } + keysize = contentcrypt.KeySize() / 2 + case jwa.RSA_OAEP, jwa.RSA_OAEP_256: + pubkey, ok := key.(*rsa.PublicKey) + if !ok { + return nil, errors.New("invalid key: *rsa.PublicKey required") + } + keyenc, err = NewRSAOAEPKeyEncrypt(keyalg, pubkey) + if err != nil { + return nil, errors.Wrap(err, "failed to create RSA OAEP encrypter") + } + keysize = contentcrypt.KeySize() / 2 + case jwa.A128KW, jwa.A192KW, 
jwa.A256KW: + sharedkey, ok := key.([]byte) + if !ok { + return nil, errors.New("invalid key: []byte required") + } + keyenc, err = NewKeyWrapEncrypt(keyalg, sharedkey) + if err != nil { + return nil, errors.Wrap(err, "failed to create key wrap encrypter") + } + keysize = contentcrypt.KeySize() + switch aesKeySize := keysize / 2; aesKeySize { + case 16, 24, 32: + default: + return nil, errors.Errorf("unsupported keysize %d (from content encryption algorithm %s). consider using content encryption that uses 32, 48, or 64 byte keys", keysize, contentalg) + } + case jwa.ECDH_ES_A128KW, jwa.ECDH_ES_A192KW, jwa.ECDH_ES_A256KW: + pubkey, ok := key.(*ecdsa.PublicKey) + if !ok { + return nil, errors.New("invalid key: *ecdsa.PublicKey required") + } + keyenc, err = NewEcdhesKeyWrapEncrypt(keyalg, pubkey) + if err != nil { + return nil, errors.Wrap(err, "failed to create ECDHS key wrap encrypter") + } + keysize = contentcrypt.KeySize() / 2 + case jwa.ECDH_ES: + fallthrough + case jwa.A128GCMKW, jwa.A192GCMKW, jwa.A256GCMKW: + fallthrough + case jwa.PBES2_HS256_A128KW, jwa.PBES2_HS384_A192KW, jwa.PBES2_HS512_A256KW: + fallthrough + default: + if debug.Enabled { + debug.Printf("Encrypt: unknown key encryption algorithm: %s", keyalg) + } + return nil, errors.Wrap(ErrUnsupportedAlgorithm, "failed to create encrypter") + } + + if debug.Enabled { + debug.Printf("Encrypt: keysize = %d", keysize) + } + enc := NewMultiEncrypt(contentcrypt, NewRandomKeyGenerate(keysize), keyenc) + msg, err := enc.Encrypt(payload) + if err != nil { + if debug.Enabled { + debug.Printf("Encrypt: failed to encrypt: %s", err) + } + return nil, errors.Wrap(err, "failed to encrypt payload") + } + + return CompactSerialize{}.Serialize(msg) +} + +// Decrypt takes the key encryption algorithm and the corresponding +// key to decrypt the JWE message, and returns the decrypted payload. +// The JWE message can be either compact or full JSON format. 
+func Decrypt(buf []byte, alg jwa.KeyEncryptionAlgorithm, key interface{}) ([]byte, error) { + msg, err := Parse(buf) + if err != nil { + return nil, errors.Wrap(err, "failed to parse buffer for Decrypt") + } + + return msg.Decrypt(alg, key) +} + +// Parse parses the JWE message into a Message object. The JWE message +// can be either compact or full JSON format. +func Parse(buf []byte) (*Message, error) { + buf = bytes.TrimSpace(buf) + if len(buf) == 0 { + return nil, errors.New("empty buffer") + } + + if buf[0] == '{' { + return parseJSON(buf) + } + return parseCompact(buf) +} + +// ParseString is the same as Parse, but takes a string. +func ParseString(s string) (*Message, error) { + return Parse([]byte(s)) +} + +func parseJSON(buf []byte) (*Message, error) { + m := struct { + *Message + *Recipient + }{} + + if err := json.Unmarshal(buf, &m); err != nil { + return nil, errors.Wrap(err, "failed to parse JSON") + } + + // if the "signature" field exist, treat it as a flattened + if m.Recipient != nil { + if len(m.Message.Recipients) != 0 { + return nil, errors.New("invalid message: mixed flattened/full json serialization") + } + + m.Message.Recipients = []Recipient{*m.Recipient} + } + + return m.Message, nil +} + +func parseCompact(buf []byte) (*Message, error) { + if debug.Enabled { + debug.Printf("Parse(Compact): buf = '%s'", buf) + } + parts := bytes.Split(buf, []byte{'.'}) + if len(parts) != 5 { + return nil, ErrInvalidCompactPartsCount + } + + hdrbuf := buffer.Buffer{} + if err := hdrbuf.Base64Decode(parts[0]); err != nil { + return nil, errors.Wrap(err, `failed to parse first part of compact form`) + } + if debug.Enabled { + debug.Printf("hdrbuf = %s", hdrbuf) + } + + hdr := NewHeader() + if err := json.Unmarshal(hdrbuf, hdr); err != nil { + return nil, errors.Wrap(err, "failed to parse header JSON") + } + + // We need the protected header to contain the content encryption + // algorithm. 
XXX probably other headers need to go there too + protected := NewEncodedHeader() + protected.ContentEncryption = hdr.ContentEncryption + hdr.ContentEncryption = "" + + enckeybuf := buffer.Buffer{} + if err := enckeybuf.Base64Decode(parts[1]); err != nil { + return nil, errors.Wrap(err, "failed to base64 decode encryption key") + } + + ivbuf := buffer.Buffer{} + if err := ivbuf.Base64Decode(parts[2]); err != nil { + return nil, errors.Wrap(err, "failed to base64 decode iv") + } + + ctbuf := buffer.Buffer{} + if err := ctbuf.Base64Decode(parts[3]); err != nil { + return nil, errors.Wrap(err, "failed to base64 decode content") + } + + tagbuf := buffer.Buffer{} + if err := tagbuf.Base64Decode(parts[4]); err != nil { + return nil, errors.Wrap(err, "failed to base64 decode tag") + } + + m := NewMessage() + m.AuthenticatedData.SetBytes(hdrbuf.Bytes()) + m.ProtectedHeader = protected + m.Tag = tagbuf + m.CipherText = ctbuf + m.InitializationVector = ivbuf + m.Recipients = []Recipient{ + { + Header: hdr, + EncryptedKey: enckeybuf, + }, + } + return m, nil +} + +// BuildKeyDecrypter creates a new KeyDecrypter instance from the given +// parameters. It is used by the Message.Decrypt method to create +// key decrypter(s) from the given message. `keysize` is only used by +// some decrypters. Pass the value from ContentCipher.KeySize(). 
+func BuildKeyDecrypter(alg jwa.KeyEncryptionAlgorithm, h *Header, key interface{}, keysize int) (KeyDecrypter, error) { + switch alg { + case jwa.RSA1_5: + privkey, ok := key.(*rsa.PrivateKey) + if !ok { + return nil, errors.New("*rsa.PrivateKey is required as the key to build this key decrypter") + } + return NewRSAPKCS15KeyDecrypt(alg, privkey, keysize/2), nil + case jwa.RSA_OAEP, jwa.RSA_OAEP_256: + privkey, ok := key.(*rsa.PrivateKey) + if !ok { + return nil, errors.New("*rsa.PrivateKey is required as the key to build this key decrypter") + } + return NewRSAOAEPKeyDecrypt(alg, privkey) + case jwa.A128KW, jwa.A192KW, jwa.A256KW: + sharedkey, ok := key.([]byte) + if !ok { + return nil, errors.New("[]byte is required as the key to build this key decrypter") + } + return NewKeyWrapEncrypt(alg, sharedkey) + case jwa.ECDH_ES_A128KW, jwa.ECDH_ES_A192KW, jwa.ECDH_ES_A256KW: + epkif, err := h.Get("epk") + if err != nil { + return nil, errors.Wrap(err, "failed to get 'epk' field") + } + if epkif == nil { + return nil, errors.New("'epk' header is required as the key to build this key decrypter") + } + + epk, ok := epkif.(*jwk.ECDSAPublicKey) + if !ok { + return nil, errors.New("'epk' header is required as the key to build this key decrypter") + } + + pubkey, err := epk.Materialize() + if err != nil { + return nil, errors.Wrap(err, "failed to get public key") + } + + privkey, ok := key.(*ecdsa.PrivateKey) + if !ok { + return nil, errors.New("*ecdsa.PrivateKey is required as the key to build this key decrypter") + } + apuif, err := h.Get("apu") + if err != nil { + return nil, errors.New("'apu' key is required for this key decrypter") + } + apu, ok := apuif.(buffer.Buffer) + if !ok { + return nil, errors.New("'apu' key is required for this key decrypter") + } + + apvif, err := h.Get("apv") + if err != nil { + return nil, errors.New("'apv' key is required for this key decrypter") + } + apv, ok := apvif.(buffer.Buffer) + if !ok { + return nil, errors.New("'apv' key is 
required for this key decrypter") + } + + return NewEcdhesKeyWrapDecrypt(alg, pubkey.(*ecdsa.PublicKey), apu.Bytes(), apv.Bytes(), privkey), nil + } + + return nil, NewErrUnsupportedAlgorithm(string(alg), "key decryption") +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/jwe_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/jwe_test.go new file mode 100644 index 0000000000..46c36e40a4 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/jwe_test.go @@ -0,0 +1,327 @@ +package jwe_test + +import ( + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/rsa" + "encoding/json" + "net/url" + "testing" + + "github.com/lestrrat-go/jwx/internal/rsautil" + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwe" + "github.com/stretchr/testify/assert" +) + +const ( + examplePayload = `The true sign of intelligence is not knowledge but imagination.` +) + +var rsaPrivKey *rsa.PrivateKey + +func init() { + var jwkstr = []byte(` + {"kty":"RSA", + "n":"oahUIoWw0K0usKNuOR6H4wkf4oBUXHTxRvgb48E-BVvxkeDNjbC4he8rUWcJoZmds2h7M70imEVhRU5djINXtqllXI4DFqcI1DgjT9LewND8MW2Krf3Spsk_ZkoFnilakGygTwpZ3uesH-PFABNIUYpOiN15dsQRkgr0vEhxN92i2asbOenSZeyaxziK72UwxrrKoExv6kc5twXTq4h-QChLOln0_mtUZwfsRaMStPs6mS6XrgxnxbWhojf663tuEQueGC-FCMfra36C9knDFGzKsNa7LZK2djYgyD3JR_MB_4NUJW_TqOQtwHYbxevoJArm-L5StowjzGy-_bq6Gw", + "e":"AQAB", + "d":"kLdtIj6GbDks_ApCSTYQtelcNttlKiOyPzMrXHeI-yk1F7-kpDxY4-WY5NWV5KntaEeXS1j82E375xxhWMHXyvjYecPT9fpwR_M9gV8n9Hrh2anTpTD93Dt62ypW3yDsJzBnTnrYu1iwWRgBKrEYY46qAZIrA2xAwnm2X7uGR1hghkqDp0Vqj3kbSCz1XyfCs6_LehBwtxHIyh8Ripy40p24moOAbgxVw3rxT_vlt3UVe4WO3JkJOzlpUf-KTVI2Ptgm-dARxTEtE-id-4OJr0h-K-VFs3VSndVTIznSxfyrj8ILL6MG_Uv8YAu7VILSB3lOW085-4qE3DzgrTjgyQ", + "p":"1r52Xk46c-LsfB5P442p7atdPUrxQSy4mti_tZI3Mgf2EuFVbUoDBvaRQ-SWxkbkmoEzL7JXroSBjSrK3YIQgYdMgyAEPTPjXv_hI2_1eTSPVZfzL0lffNn03IXqWF5MDFuoUYE0hzb2vhrlN_rKrbfDIwUbTrjjgieRbwC6Cl0", + 
"q":"wLb35x7hmQWZsWJmB_vle87ihgZ19S8lBEROLIsZG4ayZVe9Hi9gDVCOBmUDdaDYVTSNx_8Fyw1YYa9XGrGnDew00J28cRUoeBB_jKI1oma0Orv1T9aXIWxKwd4gvxFImOWr3QRL9KEBRzk2RatUBnmDZJTIAfwTs0g68UZHvtc", + "dp":"ZK-YwE7diUh0qR1tR7w8WHtolDx3MZ_OTowiFvgfeQ3SiresXjm9gZ5KLhMXvo-uz-KUJWDxS5pFQ_M0evdo1dKiRTjVw_x4NyqyXPM5nULPkcpU827rnpZzAJKpdhWAgqrXGKAECQH0Xt4taznjnd_zVpAmZZq60WPMBMfKcuE", + "dq":"Dq0gfgJ1DdFGXiLvQEZnuKEN0UUmsJBxkjydc3j4ZYdBiMRAy86x0vHCjywcMlYYg4yoC4YZa9hNVcsjqA3FeiL19rk8g6Qn29Tt0cj8qqyFpz9vNDBUfCAiJVeESOjJDZPYHdHY8v1b-o-Z2X5tvLx-TCekf7oxyeKDUqKWjis", + "qi":"VIMpMYbPf47dT1w_zDUXfPimsSegnMOA1zTaX7aGk_8urY6R8-ZW1FxU7AlWAyLWybqq6t16VFd7hQd0y6flUK4SlOydB61gwanOsXGOAOv82cHq0E3eL4HrtZkUuKvnPrMnsUUFlfUdybVzxyjz9JF_XyaY14ardLSjf4L_FNY" + }`) + + var err error + rsaPrivKey, err = rsautil.PrivateKeyFromJSON(jwkstr) + if err != nil { + panic(err) + } +} + +func TestSanityCheck_JWEExamplePayload(t *testing.T) { + expected := []byte{ + 84, 104, 101, 32, 116, 114, 117, 101, 32, 115, 105, 103, 110, 32, + 111, 102, 32, 105, 110, 116, 101, 108, 108, 105, 103, 101, 110, 99, + 101, 32, 105, 115, 32, 110, 111, 116, 32, 107, 110, 111, 119, 108, + 101, 100, 103, 101, 32, 98, 117, 116, 32, 105, 109, 97, 103, 105, + 110, 97, 116, 105, 111, 110, 46, + } + assert.Equal(t, expected, []byte(examplePayload), "examplePayload OK") +} + +func TestParse_Compact(t *testing.T) { + s := `eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkEyNTZHQ00ifQ.OKOawDo13gRp2ojaHV7LFpZcgV7T6DVZKTyKOMTYUmKoTCVJRgckCL9kiMT03JGeipsEdY3mx_etLbbWSrFr05kLzcSr4qKAq7YN7e9jwQRb23nfa6c9d-StnImGyFDbSv04uVuxIp5Zms1gNxKKK2Da14B8S4rzVRltdYwam_lDp5XnZAYpQdb76FdIKLaVmqgfwX7XWRxv2322i-vDxRfqNzo_tETKzpVLzfiwQyeyPGLBIO56YJ7eObdv0je81860ppamavo35UgoRdbYaBcoh9QcfylQr66oc6vFWXRcZ_ZT2LawVCWTIy3brGPi6UklfCpIMfIjf7iGdXKHzg.48V1_ALb6US04U3b.5eym8TW_c8SuK0ltJ3rpYIzOeDQz7TALvtu6UG9oMo4vpzs9tX_EFShS8iB7j6jiSdiwkIr3ajwQzaBtQD_A.XFBoMYUZodetZdvTiFvSkQ` + + msg, err := jwe.Parse([]byte(s)) + if !assert.NoError(t, err, "Parsing JWE is successful") { + return + } + + if 
!assert.Len(t, msg.Recipients, 1, "There is exactly 1 recipient") { + return + } +} + +// This test parses the example found in https://tools.ietf.org/html/rfc7516#appendix-A.1, +// and checks if we can roundtrip to the same compact serialization format. +func TestParse_RSAES_OAEP_AES_GCM(t *testing.T) { + const payload = `The true sign of intelligence is not knowledge but imagination.` + const serialized = `eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkEyNTZHQ00ifQ.OKOawDo13gRp2ojaHV7LFpZcgV7T6DVZKTyKOMTYUmKoTCVJRgckCL9kiMT03JGeipsEdY3mx_etLbbWSrFr05kLzcSr4qKAq7YN7e9jwQRb23nfa6c9d-StnImGyFDbSv04uVuxIp5Zms1gNxKKK2Da14B8S4rzVRltdYwam_lDp5XnZAYpQdb76FdIKLaVmqgfwX7XWRxv2322i-vDxRfqNzo_tETKzpVLzfiwQyeyPGLBIO56YJ7eObdv0je81860ppamavo35UgoRdbYaBcoh9QcfylQr66oc6vFWXRcZ_ZT2LawVCWTIy3brGPi6UklfCpIMfIjf7iGdXKHzg.48V1_ALb6US04U3b.5eym8TW_c8SuK0ltJ3rpYIzOeDQz7TALvtu6UG9oMo4vpzs9tX_EFShS8iB7j6jiSdiwkIr3ajwQzaBtQD_A.XFBoMYUZodetZdvTiFvSkQ` + var jwkstr = []byte(` + {"kty":"RSA", + "n":"oahUIoWw0K0usKNuOR6H4wkf4oBUXHTxRvgb48E-BVvxkeDNjbC4he8rUWcJoZmds2h7M70imEVhRU5djINXtqllXI4DFqcI1DgjT9LewND8MW2Krf3Spsk_ZkoFnilakGygTwpZ3uesH-PFABNIUYpOiN15dsQRkgr0vEhxN92i2asbOenSZeyaxziK72UwxrrKoExv6kc5twXTq4h-QChLOln0_mtUZwfsRaMStPs6mS6XrgxnxbWhojf663tuEQueGC-FCMfra36C9knDFGzKsNa7LZK2djYgyD3JR_MB_4NUJW_TqOQtwHYbxevoJArm-L5StowjzGy-_bq6Gw", + "e":"AQAB", + "d":"kLdtIj6GbDks_ApCSTYQtelcNttlKiOyPzMrXHeI-yk1F7-kpDxY4-WY5NWV5KntaEeXS1j82E375xxhWMHXyvjYecPT9fpwR_M9gV8n9Hrh2anTpTD93Dt62ypW3yDsJzBnTnrYu1iwWRgBKrEYY46qAZIrA2xAwnm2X7uGR1hghkqDp0Vqj3kbSCz1XyfCs6_LehBwtxHIyh8Ripy40p24moOAbgxVw3rxT_vlt3UVe4WO3JkJOzlpUf-KTVI2Ptgm-dARxTEtE-id-4OJr0h-K-VFs3VSndVTIznSxfyrj8ILL6MG_Uv8YAu7VILSB3lOW085-4qE3DzgrTjgyQ", + "p":"1r52Xk46c-LsfB5P442p7atdPUrxQSy4mti_tZI3Mgf2EuFVbUoDBvaRQ-SWxkbkmoEzL7JXroSBjSrK3YIQgYdMgyAEPTPjXv_hI2_1eTSPVZfzL0lffNn03IXqWF5MDFuoUYE0hzb2vhrlN_rKrbfDIwUbTrjjgieRbwC6Cl0", + 
"q":"wLb35x7hmQWZsWJmB_vle87ihgZ19S8lBEROLIsZG4ayZVe9Hi9gDVCOBmUDdaDYVTSNx_8Fyw1YYa9XGrGnDew00J28cRUoeBB_jKI1oma0Orv1T9aXIWxKwd4gvxFImOWr3QRL9KEBRzk2RatUBnmDZJTIAfwTs0g68UZHvtc", + "dp":"ZK-YwE7diUh0qR1tR7w8WHtolDx3MZ_OTowiFvgfeQ3SiresXjm9gZ5KLhMXvo-uz-KUJWDxS5pFQ_M0evdo1dKiRTjVw_x4NyqyXPM5nULPkcpU827rnpZzAJKpdhWAgqrXGKAECQH0Xt4taznjnd_zVpAmZZq60WPMBMfKcuE", + "dq":"Dq0gfgJ1DdFGXiLvQEZnuKEN0UUmsJBxkjydc3j4ZYdBiMRAy86x0vHCjywcMlYYg4yoC4YZa9hNVcsjqA3FeiL19rk8g6Qn29Tt0cj8qqyFpz9vNDBUfCAiJVeESOjJDZPYHdHY8v1b-o-Z2X5tvLx-TCekf7oxyeKDUqKWjis", + "qi":"VIMpMYbPf47dT1w_zDUXfPimsSegnMOA1zTaX7aGk_8urY6R8-ZW1FxU7AlWAyLWybqq6t16VFd7hQd0y6flUK4SlOydB61gwanOsXGOAOv82cHq0E3eL4HrtZkUuKvnPrMnsUUFlfUdybVzxyjz9JF_XyaY14ardLSjf4L_FNY" + }`) + privkey, err := rsautil.PrivateKeyFromJSON(jwkstr) + if !assert.NoError(t, err, "PrivateKey created") { + return + } + + msg, err := jwe.ParseString(serialized) + if !assert.NoError(t, err, "parse successful") { + return + } + t.Logf("------ ParseString done") + + plaintext, err := msg.Decrypt(jwa.RSA_OAEP, privkey) + if !assert.NoError(t, err, "Decrypt message succeeded") { + return + } + + if !assert.Equal(t, payload, string(plaintext), "decrypted value does not match") { + return + } + + jsonbuf, err := jwe.CompactSerialize{}.Serialize(msg) + if !assert.NoError(t, err, "Compact serialize succeeded") { + return + } + + if !assert.Equal(t, serialized, string(jsonbuf), "Compact serialize matches") { + jsonbuf, _ = jwe.JSONSerialize{Pretty: true}.Serialize(msg) + t.Logf("%s", jsonbuf) + return + } + + encrypted, err := jwe.Encrypt(plaintext, jwa.RSA_OAEP, &privkey.PublicKey, jwa.A256GCM, jwa.NoCompress) + if !assert.NoError(t, err, "jwe.Encrypt should succeed") { + return + } + + plaintext, err = jwe.Decrypt(encrypted, jwa.RSA_OAEP, privkey) + if !assert.NoError(t, err, "jwe.Decrypt should succeed") { + return + } + + if !assert.Equal(t, payload, string(plaintext), "jwe.Decrypt should produce the same plaintext") { + return + } +} + +// 
https://tools.ietf.org/html/rfc7516#appendix-A.1. +func TestRoundtrip_RSAES_OAEP_AES_GCM(t *testing.T) { + var plaintext = []byte{ + 84, 104, 101, 32, 116, 114, 117, 101, 32, 115, 105, 103, 110, 32, + 111, 102, 32, 105, 110, 116, 101, 108, 108, 105, 103, 101, 110, 99, + 101, 32, 105, 115, 32, 110, 111, 116, 32, 107, 110, 111, 119, 108, + 101, 100, 103, 101, 32, 98, 117, 116, 32, 105, 109, 97, 103, 105, + 110, 97, 116, 105, 111, 110, 46, + } + + max := 100 + if testing.Short() { + max = 1 + } + + for i := 0; i < max; i++ { + encrypted, err := jwe.Encrypt(plaintext, jwa.RSA_OAEP, &rsaPrivKey.PublicKey, jwa.A256GCM, jwa.NoCompress) + if !assert.NoError(t, err, "Encrypt should succeed") { + return + } + + decrypted, err := jwe.Decrypt(encrypted, jwa.RSA_OAEP, rsaPrivKey) + if !assert.NoError(t, err, "Decrypt should succeed") { + return + } + + if !assert.Equal(t, plaintext, decrypted, "Decrypted content should match") { + return + } + } +} + +func TestRoundtrip_RSA1_5_A128CBC_HS256(t *testing.T) { + var plaintext = []byte{ + 76, 105, 118, 101, 32, 108, 111, 110, 103, 32, 97, 110, 100, 32, + 112, 114, 111, 115, 112, 101, 114, 46, + } + + max := 100 + if testing.Short() { + max = 1 + } + + for i := 0; i < max; i++ { + encrypted, err := jwe.Encrypt(plaintext, jwa.RSA1_5, &rsaPrivKey.PublicKey, jwa.A128CBC_HS256, jwa.NoCompress) + if !assert.NoError(t, err, "Encrypt is successful") { + return + } + + decrypted, err := jwe.Decrypt(encrypted, jwa.RSA1_5, rsaPrivKey) + if !assert.NoError(t, err, "Decrypt successful") { + return + } + + if !assert.Equal(t, plaintext, decrypted, "Decrypted correct plaintext") { + return + } + } +} + +// https://tools.ietf.org/html/rfc7516#appendix-A.3. Note that cek is dynamically +// generated, so the encrypted values will NOT match that of the RFC. 
+func TestEncode_A128KW_A128CBC_HS256(t *testing.T) { + var plaintext = []byte{ + 76, 105, 118, 101, 32, 108, 111, 110, 103, 32, 97, 110, 100, 32, + 112, 114, 111, 115, 112, 101, 114, 46, + } + var sharedkey = []byte{ + 25, 172, 32, 130, 225, 114, 26, 181, 138, 106, 254, 192, 95, 133, 74, 82, + } + + max := 100 + if testing.Short() { + max = 1 + } + + for i := 0; i < max; i++ { + encrypted, err := jwe.Encrypt(plaintext, jwa.A128KW, sharedkey, jwa.A128CBC_HS256, jwa.NoCompress) + if !assert.NoError(t, err, "Encrypt is successful") { + return + } + + decrypted, err := jwe.Decrypt(encrypted, jwa.A128KW, sharedkey) + if !assert.NoError(t, err, "Decrypt successful") { + return + } + + if !assert.Equal(t, plaintext, decrypted, "Decrypted correct plaintext") { + return + } + } +} + +func TestEncode_ECDHES(t *testing.T) { + plaintext := []byte("Lorem ipsum") + privkey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if !assert.NoError(t, err, "ecdsa key generated") { + return + } + encrypted, err := jwe.Encrypt(plaintext, jwa.ECDH_ES_A128KW, &privkey.PublicKey, jwa.A128CBC_HS256, jwa.NoCompress) + if !assert.NoError(t, err, "Encrypt succeeds") { + return + } + + t.Logf("encrypted = %s", encrypted) + + msg, _ := jwe.Parse(encrypted) + jsonbuf, _ := json.MarshalIndent(msg, "", " ") + t.Logf("%s", jsonbuf) + + decrypted, err := jwe.Decrypt(encrypted, jwa.ECDH_ES_A128KW, privkey) + if !assert.NoError(t, err, "Decrypt succeeds") { + return + } + t.Logf("%s", decrypted) +} + +func TestEncode_ECDH_ES_A256KW_A192KW_A128KW(t *testing.T) { + plaintext := []byte("Lorem ipsum") + privkey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if !assert.NoError(t, err, "ecdsa key generated") { + return + } + + algorithms := []jwa.KeyEncryptionAlgorithm{jwa.ECDH_ES_A256KW, jwa.ECDH_ES_A192KW, jwa.ECDH_ES_A128KW} + + for i := 0; i < len(algorithms); i++ { + encrypted, err := jwe.Encrypt(plaintext, algorithms[i], &privkey.PublicKey, jwa.A256GCM, jwa.NoCompress) + if 
!assert.NoError(t, err, "Encrypt succeeds") { + return + } + + t.Logf("encrypted = %s", encrypted) + + msg, _ := jwe.Parse(encrypted) + jsonbuf, _ := json.MarshalIndent(msg, "", " ") + t.Logf("%s", jsonbuf) + + decrypted, err := jwe.Decrypt(encrypted, algorithms[i], privkey) + if !assert.NoError(t, err, "Decrypt succeeds") { + return + } + t.Logf("%s", decrypted) + } +} + +func Test_A256KW_A256CBC_HS512(t *testing.T) { + var keysize = 32 + var key = make([]byte, keysize) + for i := 0; i < keysize; i++ { + key[i] = byte(i) + } + _, err := jwe.Encrypt([]byte(examplePayload), jwa.A256KW, key, jwa.A256CBC_HS512, jwa.NoCompress) + if !assert.Error(t, err, "should fail to encrypt payload") { + return + } +} + +func TestHeaders(t *testing.T) { + h := jwe.NewHeader() + + data := map[string]struct { + Value interface{} + Expected interface{} + }{ + "kid": {Value: "kid blah"}, + "enc": {Value: jwa.A128GCM}, + "cty": {Value: "application/json"}, + "typ": {Value: "typ blah"}, + "x5t": {Value: "x5t blah"}, + "x5t#256": {Value: "x5t#256 blah"}, + "crit": {Value: []string{"crit blah"}}, + "jku": { + Value: "http://github.com/lestrrat-go/jwx", + Expected: &url.URL{Scheme: "http", Host: "github.com", Path: "/lestrrat-go/jwx"}, + }, + "x5u": { + Value: "http://github.com/lestrrat-go/jwx", + Expected: &url.URL{Scheme: "http", Host: "github.com", Path: "/lestrrat-go/jwx"}, + }, + } + + for name, testcase := range data { + h.Set(name, testcase.Value) + got, err := h.Get(name) + if !assert.NoError(t, err, "value should exist") { + return + } + + expected := testcase.Expected + if expected == nil { + expected = testcase.Value + } + if !assert.Equal(t, expected, got, "value should match") { + return + } + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/key_encrypt.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/key_encrypt.go new file mode 100644 index 0000000000..7916ce9658 --- /dev/null +++ 
b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/key_encrypt.go @@ -0,0 +1,435 @@ +package jwe + +import ( + "crypto" + "crypto/aes" + "crypto/cipher" + "crypto/ecdsa" + "crypto/rand" + "crypto/rsa" + "crypto/sha1" + "crypto/sha256" + "crypto/subtle" + "encoding/binary" + "fmt" + "hash" + + "github.com/lestrrat-go/jwx/internal/concatkdf" + "github.com/lestrrat-go/jwx/internal/debug" + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +// NewKeyWrapEncrypt creates a key-wrap encrypter using AES key wrap (RFC 3394). +// Although the name suggests otherwise, this does the decryption as well. +func NewKeyWrapEncrypt(alg jwa.KeyEncryptionAlgorithm, sharedkey []byte) (KeyWrapEncrypt, error) { + return KeyWrapEncrypt{ + alg: alg, + sharedkey: sharedkey, + }, nil +} + +// Algorithm returns the key encryption algorithm being used +func (kw KeyWrapEncrypt) Algorithm() jwa.KeyEncryptionAlgorithm { + return kw.alg +} + +// Kid returns the key ID associated with this encrypter +func (kw KeyWrapEncrypt) Kid() string { + return kw.KeyID +} + +// KeyDecrypt decrypts the encrypted key using AES key unwrap +func (kw KeyWrapEncrypt) KeyDecrypt(enckey []byte) ([]byte, error) { + block, err := aes.NewCipher(kw.sharedkey) + if err != nil { + return nil, errors.Wrap(err, "failed to create cipher from shared key") + } + + cek, err := keyunwrap(block, enckey) + if err != nil { + return nil, errors.Wrap(err, "failed to unwrap data") + } + return cek, nil +} + +// KeyEncrypt encrypts the given content encryption key +func (kw KeyWrapEncrypt) KeyEncrypt(cek []byte) (ByteSource, error) { + block, err := aes.NewCipher(kw.sharedkey) + if err != nil { + return nil, errors.Wrap(err, "failed to create cipher from shared key") + } + encrypted, err := keywrap(block, cek) + if err != nil { + return nil, errors.Wrap(err, `keywrap: failed to wrap key`) + } + return ByteKey(encrypted), nil +} + +// NewEcdhesKeyWrapEncrypt creates a new key encrypter based on ECDH-ES +func
NewEcdhesKeyWrapEncrypt(alg jwa.KeyEncryptionAlgorithm, key *ecdsa.PublicKey) (*EcdhesKeyWrapEncrypt, error) { + generator, err := NewEcdhesKeyGenerate(alg, key) + if err != nil { + return nil, errors.Wrap(err, "failed to create key generator") + } + return &EcdhesKeyWrapEncrypt{ + algorithm: alg, + generator: generator, + }, nil +} + +// Algorithm returns the key encryption algorithm being used +func (kw EcdhesKeyWrapEncrypt) Algorithm() jwa.KeyEncryptionAlgorithm { + return kw.algorithm +} + +// Kid returns the key ID associated with this encrypter +func (kw EcdhesKeyWrapEncrypt) Kid() string { + return kw.KeyID +} + +// KeyEncrypt encrypts the content encryption key using ECDH-ES +func (kw EcdhesKeyWrapEncrypt) KeyEncrypt(cek []byte) (ByteSource, error) { + kg, err := kw.generator.KeyGenerate() + if err != nil { + return nil, errors.Wrap(err, "failed to create key generator") + } + + bwpk, ok := kg.(ByteWithECPrivateKey) + if !ok { + return nil, errors.New("key generator generated invalid key (expected ByteWithECPrivateKey)") + } + + block, err := aes.NewCipher(bwpk.Bytes()) + if err != nil { + return nil, errors.Wrap(err, "failed to generate cipher from generated key") + } + + jek, err := keywrap(block, cek) + if err != nil { + return nil, errors.Wrap(err, "failed to wrap data") + } + + bwpk.ByteKey = ByteKey(jek) + + return bwpk, nil +} + +// NewEcdhesKeyWrapDecrypt creates a new key decrypter using ECDH-ES +func NewEcdhesKeyWrapDecrypt(alg jwa.KeyEncryptionAlgorithm, pubkey *ecdsa.PublicKey, apu, apv []byte, privkey *ecdsa.PrivateKey) *EcdhesKeyWrapDecrypt { + return &EcdhesKeyWrapDecrypt{ + algorithm: alg, + apu: apu, + apv: apv, + privkey: privkey, + pubkey: pubkey, + } +} + +// Algorithm returns the key encryption algorithm being used +func (kw EcdhesKeyWrapDecrypt) Algorithm() jwa.KeyEncryptionAlgorithm { + return kw.algorithm +} + +// KeyDecrypt decrypts the encrypted key using ECDH-ES +func (kw EcdhesKeyWrapDecrypt) KeyDecrypt(enckey []byte) ([]byte, 
error) { + var keysize uint32 + switch kw.algorithm { + case jwa.ECDH_ES_A128KW: + keysize = 16 + case jwa.ECDH_ES_A192KW: + keysize = 24 + case jwa.ECDH_ES_A256KW: + keysize = 32 + default: + return nil, errors.Wrap(ErrUnsupportedAlgorithm, "invalid ECDH-ES key wrap algorithm") + } + + privkey := kw.privkey + pubkey := kw.pubkey + + pubinfo := make([]byte, 4) + binary.BigEndian.PutUint32(pubinfo, keysize*8) + + z, _ := privkey.PublicKey.Curve.ScalarMult(pubkey.X, pubkey.Y, privkey.D.Bytes()) + kdf := concatkdf.New(crypto.SHA256, []byte(kw.algorithm.String()), z.Bytes(), kw.apu, kw.apv, pubinfo, []byte{}) + kek := make([]byte, keysize) + kdf.Read(kek) + + block, err := aes.NewCipher(kek) + if err != nil { + return nil, errors.Wrap(err, "failed to create cipher for ECDH-ES key wrap") + } + + return keyunwrap(block, enckey) +} + +// NewRSAOAEPKeyEncrypt creates a new key encrypter using RSA OAEP +func NewRSAOAEPKeyEncrypt(alg jwa.KeyEncryptionAlgorithm, pubkey *rsa.PublicKey) (*RSAOAEPKeyEncrypt, error) { + switch alg { + case jwa.RSA_OAEP, jwa.RSA_OAEP_256: + default: + return nil, errors.Wrap(ErrUnsupportedAlgorithm, "invalid RSA OAEP encrypt algorithm") + } + return &RSAOAEPKeyEncrypt{ + alg: alg, + pubkey: pubkey, + }, nil +} + +// NewRSAPKCSKeyEncrypt creates a new key encrypter using PKCS1v15 +func NewRSAPKCSKeyEncrypt(alg jwa.KeyEncryptionAlgorithm, pubkey *rsa.PublicKey) (*RSAPKCSKeyEncrypt, error) { + switch alg { + case jwa.RSA1_5: + default: + return nil, errors.Wrap(ErrUnsupportedAlgorithm, "invalid RSA PKCS encrypt algorithm") + } + + return &RSAPKCSKeyEncrypt{ + alg: alg, + pubkey: pubkey, + }, nil +} + +// Algorithm returns the key encryption algorithm being used +func (e RSAPKCSKeyEncrypt) Algorithm() jwa.KeyEncryptionAlgorithm { + return e.alg +} + +// Kid returns the key ID associated with this encrypter +func (e RSAPKCSKeyEncrypt) Kid() string { + return e.KeyID +} + +// Algorithm returns the key encryption algorithm being used +func (e 
RSAOAEPKeyEncrypt) Algorithm() jwa.KeyEncryptionAlgorithm { + return e.alg +} + +// Kid returns the key ID associated with this encrypter +func (e RSAOAEPKeyEncrypt) Kid() string { + return e.KeyID +} + +// KeyEncrypt encrypts the content encryption key using RSA PKCS1v15 +func (e RSAPKCSKeyEncrypt) KeyEncrypt(cek []byte) (ByteSource, error) { + if e.alg != jwa.RSA1_5 { + return nil, errors.Wrap(ErrUnsupportedAlgorithm, "invalid RSA PKCS encrypt algorithm") + } + encrypted, err := rsa.EncryptPKCS1v15(rand.Reader, e.pubkey, cek) + if err != nil { + return nil, errors.Wrap(err, "failed to encrypt using PKCS1v15") + } + return ByteKey(encrypted), nil +} + +// KeyEncrypt encrypts the content encryption key using RSA OAEP +func (e RSAOAEPKeyEncrypt) KeyEncrypt(cek []byte) (ByteSource, error) { + var hash hash.Hash + switch e.alg { + case jwa.RSA_OAEP: + hash = sha1.New() + case jwa.RSA_OAEP_256: + hash = sha256.New() + default: + return nil, errors.New("failed to generate key encrypter for RSA-OAEP: RSA_OAEP/RSA_OAEP_256 required") + } + encrypted, err := rsa.EncryptOAEP(hash, rand.Reader, e.pubkey, cek, []byte{}) + if err != nil { + return nil, errors.Wrap(err, `failed to OAEP encrypt`) + } + return ByteKey(encrypted), nil +} + +// NewRSAPKCS15KeyDecrypt creates a new decrypter using RSA PKCS1v15 +func NewRSAPKCS15KeyDecrypt(alg jwa.KeyEncryptionAlgorithm, privkey *rsa.PrivateKey, keysize int) *RSAPKCS15KeyDecrypt { + generator := NewRandomKeyGenerate(keysize * 2) + return &RSAPKCS15KeyDecrypt{ + alg: alg, + privkey: privkey, + generator: generator, + } +} + +// Algorithm returns the key encryption algorithm being used +func (d RSAPKCS15KeyDecrypt) Algorithm() jwa.KeyEncryptionAlgorithm { + return d.alg +} + +// KeyDecrypt decrypts the encrypted key using RSA PKCS1v1.5 +func (d RSAPKCS15KeyDecrypt) KeyDecrypt(enckey []byte) ([]byte, error) { + if debug.Enabled { + debug.Printf("START PKCS.KeyDecrypt") + } + // Hey, these notes and workarounds were stolen from go-jose
+ defer func() { + // DecryptPKCS1v15SessionKey sometimes panics on an invalid payload + // because of an index out of bounds error, which we want to ignore. + // This has been fixed in Go 1.3.1 (released 2014/08/13), the recover() + // only exists for preventing crashes with unpatched versions. + // See: https://groups.google.com/forum/#!topic/golang-dev/7ihX6Y6kx9k + // See: https://code.google.com/p/go/source/detail?r=58ee390ff31602edb66af41ed10901ec95904d33 + _ = recover() + }() + + // Perform some input validation. + expectedlen := d.privkey.PublicKey.N.BitLen() / 8 + if expectedlen != len(enckey) { + // Input size is incorrect, the encrypted payload should always match + // the size of the public modulus (e.g. using a 2048 bit key will + // produce 256 bytes of output). Reject this since it's invalid input. + return nil, fmt.Errorf( + "input size for key decrypt is incorrect (expected %d, got %d)", + expectedlen, + len(enckey), + ) + } + + var err error + + bk, err := d.generator.KeyGenerate() + if err != nil { + return nil, errors.New("failed to generate key") + } + cek := bk.Bytes() + + // When decrypting an RSA-PKCS1v1.5 payload, we must take precautions to + // prevent chosen-ciphertext attacks as described in RFC 3218, "Preventing + // the Million Message Attack on Cryptographic Message Syntax". We are + // therefore deliberately ignoring errors here.
+ err = rsa.DecryptPKCS1v15SessionKey(rand.Reader, d.privkey, enckey, cek) + if err != nil { + return nil, errors.Wrap(err, "failed to decrypt via PKCS1v15") + } + + return cek, nil +} + +// NewRSAOAEPKeyDecrypt creates a new key decrypter using RSA OAEP +func NewRSAOAEPKeyDecrypt(alg jwa.KeyEncryptionAlgorithm, privkey *rsa.PrivateKey) (*RSAOAEPKeyDecrypt, error) { + switch alg { + case jwa.RSA_OAEP, jwa.RSA_OAEP_256: + default: + return nil, errors.Wrap(ErrUnsupportedAlgorithm, "invalid RSA OAEP decrypt algorithm") + } + + return &RSAOAEPKeyDecrypt{ + alg: alg, + privkey: privkey, + }, nil +} + +// Algorithm returns the key encryption algorithm being used +func (d RSAOAEPKeyDecrypt) Algorithm() jwa.KeyEncryptionAlgorithm { + return d.alg +} + +// KeyDecrypt decrypts the encrypted key using RSA OAEP +func (d RSAOAEPKeyDecrypt) KeyDecrypt(enckey []byte) ([]byte, error) { + if debug.Enabled { + debug.Printf("START OAEP.KeyDecrypt") + } + var hash hash.Hash + switch d.alg { + case jwa.RSA_OAEP: + hash = sha1.New() + case jwa.RSA_OAEP_256: + hash = sha256.New() + default: + return nil, errors.New("failed to generate key encrypter for RSA-OAEP: RSA_OAEP/RSA_OAEP_256 required") + } + return rsa.DecryptOAEP(hash, rand.Reader, d.privkey, enckey, []byte{}) +} + +// Decrypt for DirectDecrypt does not do anything other than +// return a copy of the embedded key +func (d DirectDecrypt) Decrypt() ([]byte, error) { + cek := make([]byte, len(d.Key)) + copy(cek, d.Key) + return cek, nil +} + +var keywrapDefaultIV = []byte{0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6} + +const keywrapChunkLen = 8 + +func keywrap(kek cipher.Block, cek []byte) ([]byte, error) { + if len(cek)%8 != 0 { + return nil, ErrInvalidBlockSize + } + + n := len(cek) / keywrapChunkLen + r := make([][]byte, n) + + for i := 0; i < n; i++ { + r[i] = make([]byte, keywrapChunkLen) + copy(r[i], cek[i*keywrapChunkLen:]) + } + + buffer := make([]byte, keywrapChunkLen*2) + tBytes := make([]byte, keywrapChunkLen) +
copy(buffer, keywrapDefaultIV) + + for t := 0; t < 6*n; t++ { + copy(buffer[keywrapChunkLen:], r[t%n]) + + kek.Encrypt(buffer, buffer) + + binary.BigEndian.PutUint64(tBytes, uint64(t+1)) + + for i := 0; i < keywrapChunkLen; i++ { + buffer[i] = buffer[i] ^ tBytes[i] + } + copy(r[t%n], buffer[keywrapChunkLen:]) + } + + out := make([]byte, (n+1)*keywrapChunkLen) + copy(out, buffer[:keywrapChunkLen]) + for i := range r { + copy(out[(i+1)*8:], r[i]) + } + + return out, nil +} + +func keyunwrap(block cipher.Block, ciphertxt []byte) ([]byte, error) { + if len(ciphertxt)%keywrapChunkLen != 0 { + return nil, ErrInvalidBlockSize + } + + n := (len(ciphertxt) / keywrapChunkLen) - 1 + r := make([][]byte, n) + + for i := range r { + r[i] = make([]byte, keywrapChunkLen) + copy(r[i], ciphertxt[(i+1)*keywrapChunkLen:]) + } + + buffer := make([]byte, keywrapChunkLen*2) + tBytes := make([]byte, keywrapChunkLen) + copy(buffer[:keywrapChunkLen], ciphertxt[:keywrapChunkLen]) + + for t := 6*n - 1; t >= 0; t-- { + binary.BigEndian.PutUint64(tBytes, uint64(t+1)) + + for i := 0; i < keywrapChunkLen; i++ { + buffer[i] = buffer[i] ^ tBytes[i] + } + copy(buffer[keywrapChunkLen:], r[t%n]) + + block.Decrypt(buffer, buffer) + + copy(r[t%n], buffer[keywrapChunkLen:]) + } + + if subtle.ConstantTimeCompare(buffer[:keywrapChunkLen], keywrapDefaultIV) == 0 { + return nil, errors.New("keywrap: failed to unwrap key") + } + + out := make([]byte, n*keywrapChunkLen) + for i := range r { + copy(out[i*keywrapChunkLen:], r[i]) + } + + return out, nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/key_generate.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/key_generate.go new file mode 100644 index 0000000000..8c63fac938 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/key_generate.go @@ -0,0 +1,109 @@ +package jwe + +import ( + "crypto" + "crypto/ecdsa" + "crypto/rand" + "encoding/binary" + "io" + + 
"github.com/lestrrat-go/jwx/internal/concatkdf" + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" + "github.com/pkg/errors" +) + +// Bytes returns the byte from this ByteKey +func (k ByteKey) Bytes() []byte { + return []byte(k) +} + +// KeySize returns the size of the key +func (g StaticKeyGenerate) KeySize() int { + return len(g) +} + +// KeyGenerate returns the key +func (g StaticKeyGenerate) KeyGenerate() (ByteSource, error) { + buf := make([]byte, g.KeySize()) + copy(buf, g) + return ByteKey(buf), nil +} + +// NewRandomKeyGenerate creates a new KeyGenerator that returns +// random bytes +func NewRandomKeyGenerate(n int) RandomKeyGenerate { + return RandomKeyGenerate{keysize: n} +} + +// KeySize returns the key size +func (g RandomKeyGenerate) KeySize() int { + return g.keysize +} + +// KeyGenerate generates a random new key +func (g RandomKeyGenerate) KeyGenerate() (ByteSource, error) { + buf := make([]byte, g.keysize) + if _, err := io.ReadFull(rand.Reader, buf); err != nil { + return nil, errors.Wrap(err, "failed to read from rand.Reader") + } + return ByteKey(buf), nil +} + +// NewEcdhesKeyGenerate creates a new key generator using ECDH-ES +func NewEcdhesKeyGenerate(alg jwa.KeyEncryptionAlgorithm, pubkey *ecdsa.PublicKey) (*EcdhesKeyGenerate, error) { + var keysize int + switch alg { + case jwa.ECDH_ES: + return nil, errors.New("unimplemented") + case jwa.ECDH_ES_A128KW: + keysize = 16 + case jwa.ECDH_ES_A192KW: + keysize = 24 + case jwa.ECDH_ES_A256KW: + keysize = 32 + default: + return nil, errors.Wrap(ErrUnsupportedAlgorithm, "invalid ECDH-ES key generation algorithm") + } + + return &EcdhesKeyGenerate{ + algorithm: alg, + keysize: keysize, + pubkey: pubkey, + }, nil +} + +// KeySize returns the key size associated with this generator +func (g EcdhesKeyGenerate) KeySize() int { + return g.keysize +} + +// KeyGenerate generates new keys using ECDH-ES +func (g EcdhesKeyGenerate) KeyGenerate() (ByteSource, error) { + priv, err := 
ecdsa.GenerateKey(g.pubkey.Curve, rand.Reader) + if err != nil { + return nil, errors.Wrap(err, "failed to generate key for ECDH-ES") + } + + pubinfo := make([]byte, 4) + binary.BigEndian.PutUint32(pubinfo, uint32(g.keysize)*8) + + z, _ := priv.PublicKey.Curve.ScalarMult(g.pubkey.X, g.pubkey.Y, priv.D.Bytes()) + kdf := concatkdf.New(crypto.SHA256, []byte(g.algorithm.String()), z.Bytes(), []byte{}, []byte{}, pubinfo, []byte{}) + kek := make([]byte, g.keysize) + kdf.Read(kek) + + return ByteWithECPrivateKey{ + PrivateKey: priv, + ByteKey: ByteKey(kek), + }, nil +} + +// HeaderPopulate populates the header with the required EC-DSA public key +// information ('epk' key) +func (k ByteWithECPrivateKey) HeaderPopulate(h *Header) { + key, err := jwk.New(&k.PrivateKey.PublicKey) + if err == nil { + h.Set("epk", key) + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/keywrap_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/keywrap_test.go new file mode 100644 index 0000000000..e522bf7dc6 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/keywrap_test.go @@ -0,0 +1,75 @@ +package jwe + +import ( + "crypto/aes" + "encoding/hex" + "testing" + + "github.com/stretchr/testify/assert" +) + +func mustHexDecode(s string) []byte { + b, err := hex.DecodeString(s) + if err != nil { + panic(err) + } + return b +} + +type vector struct { + Kek string + Data string + Expected string +} + +func TestRFC3394_Wrap(t *testing.T) { + vectors := []vector{ + vector{ + Kek: "000102030405060708090A0B0C0D0E0F", + Data: "00112233445566778899AABBCCDDEEFF", + Expected: "1FA68B0A8112B447AEF34BD8FB5A7B829D3E862371D2CFE5", + }, + vector{ + Kek: "000102030405060708090A0B0C0D0E0F1011121314151617", + Data: "00112233445566778899AABBCCDDEEFF", + Expected: "96778B25AE6CA435F92B5B97C050AED2468AB8A17AD84E5D", + }, + vector{ + Kek: "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F", + Data: 
"00112233445566778899AABBCCDDEEFF0001020304050607", + Expected: "A8F9BC1612C68B3FF6E6F4FBE30E71E4769C8B80A32CB8958CD5D17D6B254DA1", + }, + } + + for _, v := range vectors { + t.Logf("kek = %s", v.Kek) + t.Logf("data = %s", v.Data) + t.Logf("expected = %s", v.Expected) + + kek := mustHexDecode(v.Kek) + data := mustHexDecode(v.Data) + expected := mustHexDecode(v.Expected) + + block, err := aes.NewCipher(kek) + if !assert.NoError(t, err, "NewCipher is successful") { + return + } + out, err := keywrap(block, data) + if !assert.NoError(t, err, "Wrap is successful") { + return + } + + if !assert.Equal(t, expected, out, "Wrap generates expected output") { + return + } + + unwrapped, err := keyunwrap(block, out) + if !assert.NoError(t, err, "Unwrap is successful") { + return + } + + if !assert.Equal(t, data, unwrapped, "Unwrapped data matches") { + return + } + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/lowlevel_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/lowlevel_test.go new file mode 100644 index 0000000000..bd7832815a --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/lowlevel_test.go @@ -0,0 +1,122 @@ +package jwe + +import ( + "testing" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/stretchr/testify/assert" +) + +// This test uses Appendix 3 to verify some low level tools for +// KeyWrap and CBC HMAC encryption. 
+// This test uses a static cek so that we can validate the results +// against the contents in the above Appendix +func TestLowLevelParts_A128KW_A128CBCHS256(t *testing.T) { + var plaintext = []byte{ + 76, 105, 118, 101, 32, 108, 111, 110, 103, 32, 97, 110, 100, 32, + 112, 114, 111, 115, 112, 101, 114, 46, + } + var cek = []byte{ + 4, 211, 31, 197, 84, 157, 252, 254, 11, 100, 157, 250, 63, 170, 106, + 206, 107, 124, 212, 45, 111, 107, 9, 219, 200, 177, 0, 240, 143, 156, + 44, 207, + } + var iv = []byte{ + 3, 22, 60, 12, 43, 67, 104, 105, 108, 108, 105, 99, 111, 116, 104, + 101, + } + var sharedkey = []byte{ + 25, 172, 32, 130, 225, 114, 26, 181, 138, 106, 254, 192, 95, 133, 74, 82, + } + var encsharedkey = []byte{ + 232, 160, 123, 211, 183, 76, 245, 132, 200, 128, 123, 75, 190, 216, + 22, 67, 201, 138, 193, 186, 9, 91, 122, 31, 246, 90, 28, 139, 57, 3, + 76, 124, 193, 11, 98, 37, 173, 61, 104, 57, + } + var aad = []byte{ + 101, 121, 74, 104, 98, 71, 99, 105, 79, 105, 74, 66, 77, 84, 73, 52, + 83, 49, 99, 105, 76, 67, 74, 108, 98, 109, 77, 105, 79, 105, 74, 66, + 77, 84, 73, 52, 81, 48, 74, 68, 76, 85, 104, 84, 77, 106, 85, 50, 73, + 110, 48, + } + var ciphertext = []byte{ + 40, 57, 83, 181, 119, 33, 133, 148, 198, 185, 243, 24, 152, 230, 6, + 75, 129, 223, 127, 19, 210, 82, 183, 230, 168, 33, 215, 104, 143, + 112, 56, 102, + } + var authtag = []byte{ + 83, 73, 191, 98, 104, 205, 211, 128, 201, 189, 199, 133, 32, 38, + 194, 85, + } + + const compactExpected = `eyJhbGciOiJBMTI4S1ciLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0.6KB707dM9YTIgHtLvtgWQ8mKwboJW3of9locizkDTHzBC2IlrT1oOQ.AxY8DCtDaGlsbGljb3RoZQ.KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY.U0m_YmjN04DJvceFICbCVQ` + + k, err := NewKeyWrapEncrypt(jwa.A128KW, sharedkey) + if !assert.NoError(t, err, "Create key wrap") { + return + } + + enckey, err := k.KeyEncrypt(cek) + if !assert.NoError(t, err, "Failed to encrypt key") { + return + } + if !assert.Equal(t, encsharedkey, enckey.Bytes(), "encrypted keys match") { + return + } + 
+ cipher, err := NewAesContentCipher(jwa.A128CBC_HS256) + if !assert.NoError(t, err, "NewAesContentCipher is successful") { + return + } + cipher.NonceGenerator = StaticKeyGenerate(iv) + + iv, encrypted, tag, err := cipher.encrypt(cek, plaintext, aad) + if !assert.NoError(t, err, "encrypt() successful") { + return + } + + if !assert.Equal(t, ciphertext, encrypted, "Generated cipher text does not match") { + return + } + + if !assert.Equal(t, tag, authtag, "Generated tag text does not match") { + return + } + + data, err := cipher.decrypt(cek, iv, encrypted, tag, aad) + if !assert.NoError(t, err, "decrypt successful") { + return + } + + if !assert.Equal(t, plaintext, data, "decrypt works") { + return + } + + r := NewRecipient() + r.Header.Set("alg", jwa.A128KW) + r.EncryptedKey = enckey.Bytes() + + protected := NewEncodedHeader() + protected.Set("enc", jwa.A128CBC_HS256) + + msg := NewMessage() + msg.ProtectedHeader = protected + msg.AuthenticatedData = aad + msg.CipherText = ciphertext + msg.InitializationVector = iv + msg.Tag = tag + msg.Recipients = []Recipient{*r} + + serialized, err := CompactSerialize{}.Serialize(msg) + if !assert.NoError(t, err, "compact serialization is successful") { + return + } + + if !assert.Equal(t, compactExpected, string(serialized), "compact serialization matches") { + serialized, err = JSONSerialize{Pretty: true}.Serialize(msg) + if !assert.NoError(t, err, "JSON serialization is successful") { + return + } + t.Logf("%s", serialized) + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/message.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/message.go new file mode 100644 index 0000000000..10fda981e6 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/message.go @@ -0,0 +1,559 @@ +package jwe + +import ( + "bytes" + "compress/flate" + "encoding/json" + "net/url" + + "github.com/lestrrat-go/jwx/buffer" + 
"github.com/lestrrat-go/jwx/internal/debug" + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" + "github.com/pkg/errors" +) + +// NewRecipient creates a Recipient object +func NewRecipient() *Recipient { + return &Recipient{ + Header: NewHeader(), + } +} + +// NewHeader creates a new Header object +func NewHeader() *Header { + return &Header{ + EssentialHeader: &EssentialHeader{}, + PrivateParams: map[string]interface{}{}, + } +} + +// NewEncodedHeader creates a new encoded Header object +func NewEncodedHeader() *EncodedHeader { + return &EncodedHeader{ + Header: NewHeader(), + } +} + +// Get returns the header key +func (h *Header) Get(key string) (interface{}, error) { + switch key { + case "alg": + return h.Algorithm, nil + case "apu": + return h.AgreementPartyUInfo, nil + case "apv": + return h.AgreementPartyVInfo, nil + case "enc": + return h.ContentEncryption, nil + case "epk": + return h.EphemeralPublicKey, nil + case "cty": + return h.ContentType, nil + case "kid": + return h.KeyID, nil + case "typ": + return h.Type, nil + case "x5t": + return h.X509CertThumbprint, nil + case "x5t#256": + return h.X509CertThumbprintS256, nil + case "x5c": + return h.X509CertChain, nil + case "crit": + return h.Critical, nil + case "jku": + return h.JwkSetURL, nil + case "x5u": + return h.X509Url, nil + default: + v, ok := h.PrivateParams[key] + if !ok { + return nil, errors.New("invalid header name") + } + return v, nil + } +} + +// Set sets the value of the given key to the given value. If it's +// one of the known keys, it will be set in EssentialHeader field. +// Otherwise, it is set in PrivateParams field. 
+func (h *Header) Set(key string, value interface{}) error { + switch key { + case "alg": + var v jwa.KeyEncryptionAlgorithm + s, ok := value.(string) + if ok { + v = jwa.KeyEncryptionAlgorithm(s) + } else { + v, ok = value.(jwa.KeyEncryptionAlgorithm) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'alg'") + } + } + h.Algorithm = v + case "apu": + var v buffer.Buffer + switch value.(type) { + case buffer.Buffer: + v = value.(buffer.Buffer) + case []byte: + v = buffer.Buffer(value.([]byte)) + case string: + v = buffer.Buffer(value.(string)) + default: + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'apu'") + } + h.AgreementPartyUInfo = v + case "apv": + var v buffer.Buffer + switch value.(type) { + case buffer.Buffer: + v = value.(buffer.Buffer) + case []byte: + v = buffer.Buffer(value.([]byte)) + case string: + v = buffer.Buffer(value.(string)) + default: + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'apv'") + } + h.AgreementPartyVInfo = v + case "enc": + var v jwa.ContentEncryptionAlgorithm + s, ok := value.(string) + if ok { + v = jwa.ContentEncryptionAlgorithm(s) + } else { + v, ok = value.(jwa.ContentEncryptionAlgorithm) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'enc'") + } + } + h.ContentEncryption = v + case "cty": + v, ok := value.(string) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'cty'") + } + h.ContentType = v + case "epk": + v, ok := value.(*jwk.ECDSAPublicKey) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'epk'") + } + h.EphemeralPublicKey = v + case "kid": + v, ok := value.(string) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'kid'") + } + h.KeyID = v + case "typ": + v, ok := value.(string) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'typ'") + } + h.Type = v + case "x5t": + v, ok := 
value.(string) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'x5t'") + } + h.X509CertThumbprint = v + case "x5t#256": + v, ok := value.(string) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'x5t#256'") + } + h.X509CertThumbprintS256 = v + case "x5c": + v, ok := value.([]string) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'x5c'") + } + h.X509CertChain = v + case "crit": + v, ok := value.([]string) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'crit'") + } + h.Critical = v + case "jku": + v, ok := value.(string) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'jku'") + } + u, err := url.Parse(v) + if err != nil { + return errors.Wrap(errors.Wrap(err, "failed to parse new value for 'jku' header"), "invalid header value") + } + h.JwkSetURL = u + case "x5u": + v, ok := value.(string) + if !ok { + return errors.Wrap(ErrInvalidHeaderValue, "invalid header value for 'x5u'") + } + u, err := url.Parse(v) + if err != nil { + return errors.Wrap(errors.Wrap(err, "failed to parse new value for 'x5u' header"), "invalid header value") + } + h.X509Url = u + default: + h.PrivateParams[key] = value + } + return nil +} + +// Merge merges the current header with another. +func (h *Header) Merge(h2 *Header) (*Header, error) { + if h2 == nil { + return nil, errors.New("merge target is nil") + } + + h3 := NewHeader() + if err := h3.Copy(h); err != nil { + return nil, errors.Wrap(err, "failed to copy header values") + } + + h3.EssentialHeader.Merge(h2.EssentialHeader) + + for k, v := range h2.PrivateParams { + h3.PrivateParams[k] = v + } + + return h3, nil +} + +// Merge merges the current header with another. 
+func (h *EssentialHeader) Merge(h2 *EssentialHeader) {
+	if h2.AgreementPartyUInfo.Len() != 0 {
+		h.AgreementPartyUInfo = h2.AgreementPartyUInfo
+	}
+
+	if h2.AgreementPartyVInfo.Len() != 0 {
+		h.AgreementPartyVInfo = h2.AgreementPartyVInfo
+	}
+
+	if h2.Algorithm != "" {
+		h.Algorithm = h2.Algorithm
+	}
+
+	if h2.ContentEncryption != "" {
+		h.ContentEncryption = h2.ContentEncryption
+	}
+
+	if h2.ContentType != "" {
+		h.ContentType = h2.ContentType
+	}
+
+	if h2.Compression != "" {
+		h.Compression = h2.Compression
+	}
+
+	if h2.Critical != nil {
+		h.Critical = h2.Critical
+	}
+
+	if h2.EphemeralPublicKey != nil {
+		h.EphemeralPublicKey = h2.EphemeralPublicKey
+	}
+
+	if h2.Jwk != nil {
+		h.Jwk = h2.Jwk
+	}
+
+	if h2.JwkSetURL != nil {
+		h.JwkSetURL = h2.JwkSetURL
+	}
+
+	if h2.KeyID != "" {
+		h.KeyID = h2.KeyID
+	}
+
+	if h2.Type != "" {
+		h.Type = h2.Type
+	}
+
+	if h2.X509Url != nil {
+		h.X509Url = h2.X509Url
+	}
+
+	if h2.X509CertChain != nil {
+		h.X509CertChain = h2.X509CertChain
+	}
+
+	if h2.X509CertThumbprint != "" {
+		h.X509CertThumbprint = h2.X509CertThumbprint
+	}
+
+	if h2.X509CertThumbprintS256 != "" {
+		h.X509CertThumbprintS256 = h2.X509CertThumbprintS256
+	}
+}
+
+// Copy copies the other header over this one
+func (h *Header) Copy(h2 *Header) error {
+	if h == nil {
+		return errors.New("copy destination is nil")
+	}
+	if h2 == nil {
+		return errors.New("copy target is nil")
+	}
+
+	h.EssentialHeader.Copy(h2.EssentialHeader)
+
+	for k, v := range h2.PrivateParams {
+		h.PrivateParams[k] = v
+	}
+
+	return nil
+}
+
+// Copy copies the other header over this one
+func (h *EssentialHeader) Copy(h2 *EssentialHeader) {
+	h.AgreementPartyUInfo = h2.AgreementPartyUInfo
+	h.AgreementPartyVInfo = h2.AgreementPartyVInfo
+	h.Algorithm = h2.Algorithm
+	h.ContentEncryption = h2.ContentEncryption
+	h.ContentType = h2.ContentType
+	h.Compression = h2.Compression
+	h.Critical = h2.Critical
+	h.EphemeralPublicKey = h2.EphemeralPublicKey
+	h.Jwk = h2.Jwk
+	h.JwkSetURL =
h2.JwkSetURL + h.KeyID = h2.KeyID + h.Type = h2.Type + h.X509Url = h2.X509Url + h.X509CertChain = h2.X509CertChain + h.X509CertThumbprint = h2.X509CertThumbprint + h.X509CertThumbprintS256 = h2.X509CertThumbprintS256 +} + +func mergeMarshal(e interface{}, p map[string]interface{}) ([]byte, error) { + buf, err := json.Marshal(e) + if err != nil { + return nil, errors.Wrap(err, `failed to marshal e`) + } + + if len(p) == 0 { + return buf, nil + } + + ext, err := json.Marshal(p) + if err != nil { + return nil, errors.Wrap(err, `failed to marshal p`) + } + + if len(buf) < 2 { + return nil, errors.New(`invalid json`) + } + + if buf[0] != '{' || buf[len(buf)-1] != '}' { + return nil, errors.New("invalid JSON") + } + buf[len(buf)-1] = ',' + buf = append(buf, ext[1:]...) + return buf, nil +} + +// MarshalJSON generates the JSON representation of this header +func (h Header) MarshalJSON() ([]byte, error) { + return mergeMarshal(h.EssentialHeader, h.PrivateParams) +} + +// UnmarshalJSON parses the JSON buffer into a Header +func (h *Header) UnmarshalJSON(data []byte) error { + if h.EssentialHeader == nil { + h.EssentialHeader = &EssentialHeader{} + } + if h.PrivateParams == nil { + h.PrivateParams = map[string]interface{}{} + } + + if err := json.Unmarshal(data, h.EssentialHeader); err != nil { + return errors.Wrap(err, "failed to parse JSON (essential) headers") + } + + m := map[string]interface{}{} + if err := json.Unmarshal(data, &m); err != nil { + return errors.Wrap(err, "failed to parse JSON headers") + } + for _, n := range []string{"alg", "apu", "apv", "enc", "cty", "zip", "crit", "epk", "jwk", "jku", "kid", "typ", "x5u", "x5c", "x5t", "x5t#S256"} { + delete(m, n) + } + + for name, value := range m { + if err := h.Set(name, value); err != nil { + return errors.Wrap(err, "failed to set header field '"+name+"'") + } + } + return nil +} + +// Base64Encode creates the base64 encoded version of the JSON +// representation of this header +func (e EncodedHeader) 
Base64Encode() ([]byte, error) {
+	buf, err := json.Marshal(e.Header)
+	if err != nil {
+		return nil, errors.Wrap(err, "failed to marshal encoded header into JSON")
+	}
+
+	buf, err = buffer.Buffer(buf).Base64Encode()
+	if err != nil {
+		return nil, errors.Wrap(err, "failed to base64 encode encoded header")
+	}
+
+	return buf, nil
+}
+
+// MarshalJSON generates the JSON representation of this header
+func (e EncodedHeader) MarshalJSON() ([]byte, error) {
+	buf, err := e.Base64Encode()
+	if err != nil {
+		return nil, errors.Wrap(err, "failed to base64 encode encoded header")
+	}
+	return json.Marshal(string(buf))
+}
+
+// UnmarshalJSON parses the JSON buffer into a Header
+func (e *EncodedHeader) UnmarshalJSON(buf []byte) error {
+	b := buffer.Buffer{}
+	// base64 JSON string -> JSON object representation of header
+	if err := json.Unmarshal(buf, &b); err != nil {
+		return errors.Wrap(err, "failed to unmarshal buffer")
+	}
+
+	if err := json.Unmarshal(b.Bytes(), &e.Header); err != nil {
+		return errors.Wrap(err, "failed to unmarshal buffer")
+	}
+
+	return nil
+}
+
+// NewMessage creates a new message
+func NewMessage() *Message {
+	return &Message{
+		ProtectedHeader:   NewEncodedHeader(),
+		UnprotectedHeader: NewHeader(),
+	}
+}
+
+// Decrypt decrypts the message using the specified algorithm and key
+func (m *Message) Decrypt(alg jwa.KeyEncryptionAlgorithm, key interface{}) ([]byte, error) {
+	var err error
+
+	if len(m.Recipients) == 0 {
+		return nil, errors.New("no recipients, can not proceed with decrypt")
+	}
+
+	enc := m.ProtectedHeader.ContentEncryption
+
+	h := NewHeader()
+	if err := h.Copy(m.ProtectedHeader.Header); err != nil {
+		return nil, errors.Wrap(err, `failed to copy protected headers`)
+	}
+	h, err = h.Merge(m.UnprotectedHeader)
+	if err != nil {
+		if debug.Enabled {
+			debug.Printf("failed to merge unprotected header")
+		}
+		return nil, errors.Wrap(err, "failed to merge headers for message decryption")
+	}
+
+	aad, err :=
m.AuthenticatedData.Base64Encode()
+	if err != nil {
+		return nil, errors.Wrap(err, "failed to base64 encode authenticated data for message decryption")
+	}
+	ciphertext := m.CipherText.Bytes()
+	iv := m.InitializationVector.Bytes()
+	tag := m.Tag.Bytes()
+
+	cipher, err := buildContentCipher(enc)
+	if err != nil {
+		return nil, errors.Wrap(err, "unsupported content cipher algorithm '"+enc.String()+"'")
+	}
+	keysize := cipher.KeySize()
+
+	var plaintext []byte
+	for _, recipient := range m.Recipients {
+		if debug.Enabled {
+			debug.Printf("Attempting to check if we can decode for recipient (alg = %s)", recipient.Header.Algorithm)
+		}
+		if recipient.Header.Algorithm != alg {
+			continue
+		}
+
+		h2 := NewHeader()
+		if err := h2.Copy(h); err != nil {
+			if debug.Enabled {
+				debug.Printf("failed to copy header: %s", err)
+			}
+			continue
+		}
+
+		h2, err := h2.Merge(recipient.Header)
+		if err != nil {
+			if debug.Enabled {
+				debug.Printf("Failed to merge! %s", err)
+			}
+			continue
+		}
+
+		k, err := BuildKeyDecrypter(h2.Algorithm, h2, key, keysize)
+		if err != nil {
+			if debug.Enabled {
+				debug.Printf("failed to create key decrypter: %s", err)
+			}
+			continue
+		}
+
+		cek, err := k.KeyDecrypt(recipient.EncryptedKey.Bytes())
+		if err != nil {
+			if debug.Enabled {
+				debug.Printf("failed to decrypt key: %s", err)
+			}
+			continue
+		}
+
+		plaintext, err = cipher.decrypt(cek, iv, ciphertext, tag, aad)
+		if err == nil {
+			break
+		}
+		if debug.Enabled {
+			debug.Printf("DecryptMessage: failed to decrypt using %s: %s", h2.Algorithm, err)
+		}
+		// Keep looping because there might be another key with the same algo
+	}
+
+	if plaintext == nil {
+		return nil, errors.New("failed to find matching recipient to decrypt key")
+	}
+
+	if h.Compression == jwa.Deflate {
+		// The payload was DEFLATE-compressed before encryption, so it must be
+		// inflated (decompressed) here via flate.NewReader
+		output := bytes.Buffer{}
+		w := flate.NewReader(bytes.NewReader(plaintext))
+		if _, err := output.ReadFrom(w); err != nil {
+			return nil, errors.Wrap(err, `failed to read from compression reader`)
+ } + if err := w.Close(); err != nil { + return nil, errors.Wrap(err, "failed to close compression writer") + } + plaintext = output.Bytes() + } + + return plaintext, nil +} + +func buildContentCipher(alg jwa.ContentEncryptionAlgorithm) (ContentCipher, error) { + switch alg { + case jwa.A128GCM, jwa.A192GCM, jwa.A256GCM, jwa.A128CBC_HS256, jwa.A192CBC_HS384, jwa.A256CBC_HS512: + return NewAesContentCipher(alg) + } + + return nil, ErrUnsupportedAlgorithm +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/out b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/out new file mode 100644 index 0000000000..2453b774de --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/out @@ -0,0 +1,27 @@ +=== RUN TestEncode_A128KW_A128CBC_HS256 +|DEBUG| AES Crypt: alg = A128CBC-HS256 +|DEBUG| AES Crypt: cipher.keysize = 32 +2017/04/07 07:32:00 contentcrypt.KeySize = 64 +|DEBUG| Encrypt: keysize = 64 +|DEBUG| Encrypt: generated cek len = 64 +|DEBUG| Encrypt: encrypted_key = ce6b0d820526b80d4f8167b94aaea7a25a43ea1a7194c881babfb2aa15bf496b9536a7677d1499ed0e152240722c22e82bfb8b687bff8f24f8e1e66541869148a5bc4b76f01f077e (72) +|DEBUG| ContentCrypt.Encrypt: cek = 8e070931dc0fb523fffc886cb70dcf435d63a40c662851eb74b2566d197d9a90f4bf5246ba7bd684718cbc3abb586e8b0f4c8d68b54d2ee00a93c476a589b0b4 (64) +|DEBUG| ContentCrypt.Encrypt: ciphertext = 4c697665206c6f6e6720616e642070726f737065722e (22) +|DEBUG| ContentCrypt.Encrypt: aad = 65794a68624763694f694a424d544934533163694c434a6c626d4d694f694a424d54493451304a444c5568544d6a5532496e30 (51) +|DEBUG| CbcAeadFetch: fetching key (64) +|DEBUG| New: cek (key) = 8e070931dc0fb523fffc886cb70dcf435d63a40c662851eb74b2566d197d9a90f4bf5246ba7bd684718cbc3abb586e8b0f4c8d68b54d2ee00a93c476a589b0b4 (64) +|DEBUG| New: ikey = 8e070931dc0fb523fffc886cb70dcf43 (16) +|DEBUG| New: ekey = 5d63a40c662851eb74b2566d197d9a90f4bf5246ba7bd684718cbc3abb586e8b0f4c8d68b54d2ee00a93c476a589b0b4 (48) 
+|DEBUG| CbcAeadFetch: failed to create aead fetcher [142 7 9 49 220 15 181 35 255 252 136 108 183 13 207 67 93 99 164 12 102 40 81 235 116 178 86 109 25 125 154 144 244 191 82 70 186 123 214 132 113 140 188 58 187 88 110 139 15 76 141 104 181 77 46 224 10 147 196 118 165 137 176 180] (64): crypto/aes: invalid key size 48 +|DEBUG| AeadFetch failed: cipher: failed to create AES cipher for CBC: crypto/aes: invalid key size 48 +|DEBUG| cipher.encrypt failed +|DEBUG| Failed to encrypt: failed to crypt content: failed to fetch AEAD for AesContentCipher.encrypt: cipher: failed to create AES cipher for CBC: crypto/aes: invalid key size 48 +|DEBUG| Encrypt: failed to encrypt: failed to crypt content: failed to fetch AEAD for AesContentCipher.encrypt: cipher: failed to create AES cipher for CBC: crypto/aes: invalid key size 48 +--- FAIL: TestEncode_A128KW_A128CBC_HS256 (0.00s) + assertions.go:219: Error Trace: jwe_test.go:320 + Error: Received unexpected error "failed to crypt content: failed to fetch AEAD for AesContentCipher.encrypt: cipher: failed to create AES cipher for CBC: crypto/aes: invalid key size 48" + Messages: Encrypt is successful + +FAIL +exit status 1 +FAIL github.com/lestrrat-go/jwx/jwe 0.013s diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/out.256 b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/out.256 new file mode 100644 index 0000000000..2d751284b3 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/out.256 @@ -0,0 +1,28 @@ +=== RUN Test_A256KW_A256CBC_HS512 +|DEBUG| AES Crypt: alg = A256CBC-HS512 +|DEBUG| AES Crypt: cipher.keysize = 64 +2017/04/07 07:32:55 contentcrypt.KeySize = 128 +|DEBUG| Encrypt: keysize = 128 +|DEBUG| Encrypt: generated cek len = 128 +|DEBUG| Encrypt: encrypted_key = 
fe1409b4a2580c25b431ccb29de26e19457d8ba5e1724cd55601ea5ee66836159f299c2b561e501893e3d06da5a9012dff4a2b46ffb1ccdbc859da1dac776d50fea0e23354b3065a8b941989965c9ac584b40e5a9e61ed573fd7d29ae6b95cc9e024b8a57f6218ecb8428734a866f23217fe2fc240d4ab487e4eaca8339e92331436edabbd2f5486 (136) +|DEBUG| ContentCrypt.Encrypt: cek = b70ae35958ec04c8256fd927ef8f63541ac1d77eac41c39fe173ad92c166879e678d91445fe858be19d2731d2cdd498b0e481ff78204b675b7016208798277167f040064591d5251dc5c691b3431bb2d2deacbca60630637f2f2c73502d451eff4086a8e1a3b2d2068f6806896df306b55b2ef63d8b22dfcf5d1aa6336774465 (128) +|DEBUG| ContentCrypt.Encrypt: ciphertext = 5468652074727565207369676e206f6620696e74656c6c6967656e6365206973206e6f74206b6e6f776c656467652062757420696d6167696e6174696f6e2e (63) +|DEBUG| ContentCrypt.Encrypt: aad = 65794a68624763694f694a424d6a5532533163694c434a6c626d4d694f694a424d6a553251304a444c5568544e544579496e30 (51) +|DEBUG| CbcAeadFetch: fetching key (128) +|DEBUG| New: keysize = 64 +|DEBUG| New: cek (key) = b70ae35958ec04c8256fd927ef8f63541ac1d77eac41c39fe173ad92c166879e678d91445fe858be19d2731d2cdd498b0e481ff78204b675b7016208798277167f040064591d5251dc5c691b3431bb2d2deacbca60630637f2f2c73502d451eff4086a8e1a3b2d2068f6806896df306b55b2ef63d8b22dfcf5d1aa6336774465 (128) +|DEBUG| New: ikey = b70ae35958ec04c8256fd927ef8f63541ac1d77eac41c39fe173ad92c166879e678d91445fe858be19d2731d2cdd498b0e481ff78204b675b701620879827716 (64) +|DEBUG| New: ekey = 7f040064591d5251dc5c691b3431bb2d2deacbca60630637f2f2c73502d451eff4086a8e1a3b2d2068f6806896df306b55b2ef63d8b22dfcf5d1aa6336774465 (64) +|DEBUG| CbcAeadFetch: failed to create aead fetcher [183 10 227 89 88 236 4 200 37 111 217 39 239 143 99 84 26 193 215 126 172 65 195 159 225 115 173 146 193 102 135 158 103 141 145 68 95 232 88 190 25 210 115 29 44 221 73 139 14 72 31 247 130 4 182 117 183 1 98 8 121 130 119 22 127 4 0 100 89 29 82 81 220 92 105 27 52 49 187 45 45 234 203 202 96 99 6 55 242 242 199 53 2 212 81 239 244 8 106 142 26 59 45 32 104 246 128 104 150 
223 48 107 85 178 239 99 216 178 45 252 245 209 170 99 54 119 68 101] (128): crypto/aes: invalid key size 64 +|DEBUG| AeadFetch failed: cipher: failed to create AES cipher for CBC: crypto/aes: invalid key size 64 +|DEBUG| cipher.encrypt failed +|DEBUG| Failed to encrypt: failed to crypt content: failed to fetch AEAD for AesContentCipher.encrypt: cipher: failed to create AES cipher for CBC: crypto/aes: invalid key size 64 +|DEBUG| Encrypt: failed to encrypt: failed to crypt content: failed to fetch AEAD for AesContentCipher.encrypt: cipher: failed to create AES cipher for CBC: crypto/aes: invalid key size 64 +--- FAIL: Test_A256KW_A256CBC_HS512 (0.00s) + assertions.go:219: Error Trace: jwe_test.go:366 + Error: Received unexpected error "failed to crypt content: failed to fetch AEAD for AesContentCipher.encrypt: cipher: failed to create AES cipher for CBC: crypto/aes: invalid key size 64" + Messages: failed to encrypt payload + +FAIL +exit status 1 +FAIL github.com/lestrrat-go/jwx/jwe 0.012s diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/serializer.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/serializer.go new file mode 100644 index 0000000000..97bffe7f5c --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/serializer.go @@ -0,0 +1,73 @@ +package jwe + +import ( + "encoding/json" + + "github.com/pkg/errors" +) + +// Serialize converts the message into a JWE compact serialize format byte buffer +func (s CompactSerialize) Serialize(m *Message) ([]byte, error) { + if len(m.Recipients) != 1 { + return nil, errors.New("wrong number of recipients for compact serialization") + } + + recipient := m.Recipients[0] + + // The protected header must be a merge between the message-wide + // protected header AND the recipient header + hcopy := NewHeader() + // There's something wrong if m.ProtectedHeader.Header is nil, but + // it could happen + if m.ProtectedHeader == nil || 
m.ProtectedHeader.Header == nil { + return nil, errors.New("invalid protected header") + } + err := hcopy.Copy(m.ProtectedHeader.Header) + if err != nil { + return nil, errors.Wrap(err, "failed to copy protected header") + } + hcopy, err = hcopy.Merge(m.UnprotectedHeader) + if err != nil { + return nil, errors.Wrap(err, "failed to merge unprotected header") + } + hcopy, err = hcopy.Merge(recipient.Header) + if err != nil { + return nil, errors.Wrap(err, "failed to merge recipient header") + } + + protected, err := EncodedHeader{Header: hcopy}.Base64Encode() + if err != nil { + return nil, errors.Wrap(err, "failed to encode header") + } + + encryptedKey, err := recipient.EncryptedKey.Base64Encode() + if err != nil { + return nil, errors.Wrap(err, "failed to encode encryption key") + } + + iv, err := m.InitializationVector.Base64Encode() + if err != nil { + return nil, errors.Wrap(err, "failed to encode iv") + } + + cipher, err := m.CipherText.Base64Encode() + if err != nil { + return nil, errors.Wrap(err, "failed to encode cipher text") + } + + tag, err := m.Tag.Base64Encode() + if err != nil { + return nil, errors.Wrap(err, "failed to encode tag") + } + + buf := append(append(append(append(append(append(append(append(protected, '.'), encryptedKey...), '.'), iv...), '.'), cipher...), '.'), tag...) 
+ return buf, nil +} + +// Serialize converts the message into a JWE JSON serialize format byte buffer +func (s JSONSerialize) Serialize(m *Message) ([]byte, error) { + if s.Pretty { + return json.MarshalIndent(m, "", " ") + } + return json.Marshal(m) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/speed_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/speed_test.go new file mode 100644 index 0000000000..9db6f6d8e4 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwe/speed_test.go @@ -0,0 +1,50 @@ +package jwe + +import ( + "bytes" + "testing" +) + +var s = []byte(`eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkEyNTZHQ00ifQ.OKOawDo13gRp2ojaHV7LFpZcgV7T6DVZKTyKOMTYUmKoTCVJRgckCL9kiMT03JGeipsEdY3mx_etLbbWSrFr05kLzcSr4qKAq7YN7e9jwQRb23nfa6c9d-StnImGyFDbSv04uVuxIp5Zms1gNxKKK2Da14B8S4rzVRltdYwam_lDp5XnZAYpQdb76FdIKLaVmqgfwX7XWRxv2322i-vDxRfqNzo_tETKzpVLzfiwQyeyPGLBIO56YJ7eObdv0je81860ppamavo35UgoRdbYaBcoh9QcfylQr66oc6vFWXRcZ_ZT2LawVCWTIy3brGPi6UklfCpIMfIjf7iGdXKHzg.48V1_ALb6US04U3b.5eym8TW_c8SuK0ltJ3rpYIzOeDQz7TALvtu6UG9oMo4vpzs9tX_EFShS8iB7j6jiSdiwkIr3ajwQzaBtQD_A.XFBoMYUZodetZdvTiFvSkQ`) + +func BenchmarkSplitLib(b *testing.B) { + for i := 0; i < b.N; i++ { + SplitLib(s) + } +} + +func BenchmarkSplitManual(b *testing.B) { + ret := make([][]byte, 5) + for i := 0; i < b.N; i++ { + SplitManual(ret, s) + } +} + +func SplitLib(buf []byte) [][]byte { + return bytes.Split(buf, []byte{'.'}) +} + +func SplitManual(parts [][]byte, buf []byte) { + bufi := 0 + for len(buf) > 0 { + i := bytes.IndexByte(buf, '.') + if i == -1 { + return + } + + parts[bufi] = buf[:i] + bufi++ + if len(buf) > i { + buf = buf[i+1:] + } + if bufi == 4 { + break + } + } + + if i := bytes.IndexByte(buf, '.'); i != -1 { + return + } + + parts[4] = buf +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/certchain.go 
b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/certchain.go new file mode 100644 index 0000000000..2aa7dca821 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/certchain.go @@ -0,0 +1,49 @@ +package jwk + +import ( + "crypto/x509" + "encoding/base64" + + "github.com/pkg/errors" +) + +func (c CertificateChain) Get() []*x509.Certificate { + return c.certs +} + +func (c *CertificateChain) Accept(v interface{}) error { + switch x := v.(type) { + case string: + return c.Accept([]string{x}) + case []interface{}: + l := make([]string, len(x)) + for i, e := range x { + if es, ok := e.(string); ok { + l[i] = es + } else { + return errors.Errorf(`invalid list element type: expected string, got %T`, v) + } + } + return c.Accept(l) + case []string: + certs := make([]*x509.Certificate, len(x)) + for i, e := range x { + buf, err := base64.StdEncoding.DecodeString(e) + if err != nil { + return errors.Wrap(err, `failed to base64 decode list element`) + } + cert, err := x509.ParseCertificate(buf) + if err != nil { + return errors.Wrap(err, `failed to parse certificate`) + } + certs[i] = cert + } + + *c = CertificateChain{ + certs: certs, + } + return nil + default: + return errors.Errorf(`invalid value %T`, v) + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/doc_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/doc_test.go new file mode 100644 index 0000000000..c66577d904 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/doc_test.go @@ -0,0 +1,31 @@ +package jwk_test + +import ( + "log" + + "github.com/lestrrat-go/jwx/jwk" +) + +func Example() { + set, err := jwk.Fetch("https://foobar.domain/json") + if err != nil { + log.Printf("failed to parse JWK: %s", err) + return + } + + // If you KNOW you have exactly one key, you can just + // use set.Keys[0] + keys := set.LookupKeyID("mykey") + if len(keys) == 0 { + 
log.Printf("failed to lookup key %q", "mykey") + return + } + + key, err := keys[0].Materialize() + if err != nil { + log.Printf("failed to generate public key: %s", err) + return + } + // Use key for jws.Verify() or whatever + _ = key +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/ecdsa.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/ecdsa.go new file mode 100644 index 0000000000..36c1eea841 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/ecdsa.go @@ -0,0 +1,274 @@ +package jwk + +import ( + "crypto" + "crypto/ecdsa" + "crypto/elliptic" + "encoding/json" + "fmt" + "math/big" + + "github.com/lestrrat-go/jwx/internal/base64" + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +func newECDSAPublicKey(key *ecdsa.PublicKey) (*ECDSAPublicKey, error) { + if key == nil { + return nil, errors.New(`non-nil ecdsa.PublicKey required`) + } + + var hdr StandardHeaders + hdr.Set(KeyTypeKey, jwa.EC) + return &ECDSAPublicKey{ + headers: &hdr, + key: key, + }, nil +} + +func newECDSAPrivateKey(key *ecdsa.PrivateKey) (*ECDSAPrivateKey, error) { + if key == nil { + return nil, errors.New(`non-nil ecdsa.PrivateKey required`) + } + + var hdr StandardHeaders + hdr.Set(KeyTypeKey, jwa.EC) + return &ECDSAPrivateKey{ + headers: &hdr, + key: key, + }, nil +} + +func (k ECDSAPrivateKey) PublicKey() (*ECDSAPublicKey, error) { + return newECDSAPublicKey(&k.key.PublicKey) +} + +// Materialize returns the EC-DSA public key represented by this JWK +func (k ECDSAPublicKey) Materialize() (interface{}, error) { + return k.key, nil +} + +func (k ECDSAPublicKey) Curve() jwa.EllipticCurveAlgorithm { + return jwa.EllipticCurveAlgorithm(k.key.Curve.Params().Name) +} + +func (k ECDSAPrivateKey) Curve() jwa.EllipticCurveAlgorithm { + return jwa.EllipticCurveAlgorithm(k.key.PublicKey.Curve.Params().Name) +} + +func ecdsaThumbprint(hash crypto.Hash, crv, x, y string) []byte { + h := hash.New()
+ fmt.Fprintf(h, `{"crv":"`) + fmt.Fprintf(h, crv) + fmt.Fprintf(h, `","kty":"EC","x":"`) + fmt.Fprintf(h, x) + fmt.Fprintf(h, `","y":"`) + fmt.Fprintf(h, y) + fmt.Fprintf(h, `"}`) + return h.Sum(nil) +} + +// Thumbprint returns the JWK thumbprint using the indicated +// hashing algorithm, according to RFC 7638 +func (k ECDSAPublicKey) Thumbprint(hash crypto.Hash) ([]byte, error) { + return ecdsaThumbprint( + hash, + k.key.Curve.Params().Name, + base64.EncodeToString(k.key.X.Bytes()), + base64.EncodeToString(k.key.Y.Bytes()), + ), nil +} + +// Thumbprint returns the JWK thumbprint using the indicated +// hashing algorithm, according to RFC 7638 +func (k ECDSAPrivateKey) Thumbprint(hash crypto.Hash) ([]byte, error) { + return ecdsaThumbprint( + hash, + k.key.Curve.Params().Name, + base64.EncodeToString(k.key.X.Bytes()), + base64.EncodeToString(k.key.Y.Bytes()), + ), nil +} + +// Materialize returns the EC-DSA private key represented by this JWK +func (k ECDSAPrivateKey) Materialize() (interface{}, error) { + return k.key, nil +} + +func (k ECDSAPublicKey) MarshalJSON() (buf []byte, err error) { + + m := make(map[string]interface{}) + if err := k.PopulateMap(m); err != nil { + return nil, errors.Wrap(err, `failed to populate public key values`) + } + + return json.Marshal(m) +} + +func (k ECDSAPublicKey) PopulateMap(m map[string]interface{}) (err error) { + + if err := k.headers.PopulateMap(m); err != nil { + return errors.Wrap(err, `failed to populate header values`) + } + + const ( + xKey = `x` + yKey = `y` + crvKey = `crv` + ) + m[xKey] = base64.EncodeToString(k.key.X.Bytes()) + m[yKey] = base64.EncodeToString(k.key.Y.Bytes()) + m[crvKey] = k.key.Curve.Params().Name + + return nil +} + +func (k ECDSAPrivateKey) MarshalJSON() (buf []byte, err error) { + + m := make(map[string]interface{}) + if err := k.PopulateMap(m); err != nil { + return nil, errors.Wrap(err, `failed to populate public key values`) + } + + return json.Marshal(m) +} + +func (k ECDSAPrivateKey) 
PopulateMap(m map[string]interface{}) (err error) { + + if err := k.headers.PopulateMap(m); err != nil { + return errors.Wrap(err, `failed to populate header values`) + } + + pubkey, err := newECDSAPublicKey(&k.key.PublicKey) + if err != nil { + return errors.Wrap(err, `failed to construct public key from private key`) + } + + if err := pubkey.PopulateMap(m); err != nil { + return errors.Wrap(err, `failed to populate public key values`) + } + + m[`d`] = base64.EncodeToString(k.key.D.Bytes()) + + return nil +} + +func (k *ECDSAPublicKey) UnmarshalJSON(data []byte) (err error) { + + m := map[string]interface{}{} + if err := json.Unmarshal(data, &m); err != nil { + return errors.Wrap(err, `failed to unmarshal public key`) + } + + if err := k.ExtractMap(m); err != nil { + return errors.Wrap(err, `failed to extract data from map`) + } + return nil +} + +func (k *ECDSAPublicKey) ExtractMap(m map[string]interface{}) (err error) { + + const ( + xKey = `x` + yKey = `y` + crvKey = `crv` + ) + + crvname, ok := m[crvKey] + if !ok { + return errors.Errorf(`failed to get required key crv`) + } + delete(m, crvKey) + + var crv jwa.EllipticCurveAlgorithm + if err := crv.Accept(crvname); err != nil { + return errors.Wrap(err, `failed to accept value for crv key`) + } + + var curve elliptic.Curve + switch crv { + case jwa.P256: + curve = elliptic.P256() + case jwa.P384: + curve = elliptic.P384() + case jwa.P521: + curve = elliptic.P521() + default: + return errors.Errorf(`invalid curve name %s`, crv) + } + + xbuf, err := getRequiredKey(m, xKey) + if err != nil { + return errors.Wrapf(err, `failed to get required key %s`, xKey) + } + delete(m, xKey) + + ybuf, err := getRequiredKey(m, yKey) + if err != nil { + return errors.Wrapf(err, `failed to get required key %s`, yKey) + } + delete(m, yKey) + + var x, y big.Int + x.SetBytes(xbuf) + y.SetBytes(ybuf) + + var hdrs StandardHeaders + if err := hdrs.ExtractMap(m); err != nil { + return errors.Wrap(err, `failed to extract header values`) 
+ } + + *k = ECDSAPublicKey{ + headers: &hdrs, + key: &ecdsa.PublicKey{ + Curve: curve, + X: &x, + Y: &y, + }, + } + return nil +} + +func (k *ECDSAPrivateKey) UnmarshalJSON(data []byte) (err error) { + + m := map[string]interface{}{} + if err := json.Unmarshal(data, &m); err != nil { + return errors.Wrap(err, `failed to unmarshal public key`) + } + + if err := k.ExtractMap(m); err != nil { + return errors.Wrap(err, `failed to extract data from map`) + } + return nil +} + +func (k *ECDSAPrivateKey) ExtractMap(m map[string]interface{}) (err error) { + + const ( + dKey = `d` + ) + + dbuf, err := getRequiredKey(m, dKey) + if err != nil { + return errors.Wrapf(err, `failed to get required key %s`, dKey) + } + delete(m, dKey) + + var pubkey ECDSAPublicKey + if err := pubkey.ExtractMap(m); err != nil { + return errors.Wrap(err, `failed to extract public key values`) + } + + var d big.Int + d.SetBytes(dbuf) + + *k = ECDSAPrivateKey{ + headers: pubkey.headers, + key: &ecdsa.PrivateKey{ + PublicKey: *(pubkey.key), + D: &d, + }, + } + pubkey.headers = nil + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/ecdsa_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/ecdsa_test.go new file mode 100644 index 0000000000..6ed17391bc --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/ecdsa_test.go @@ -0,0 +1,228 @@ +package jwk_test + +import ( + "bytes" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "encoding/json" + "reflect" + "testing" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" + "github.com/stretchr/testify/assert" +) + +func TestECDSA(t *testing.T) { + t.Run("Parse Private Key", func(t *testing.T) { + s := `{"keys": + [ + {"kty":"EC", + "crv":"P-256", + "key_ops": ["verify"], + "x":"MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4", + "y":"4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM", + "d":"870MB6gfuTJ4HtUnUvYMyJpr5eUZNP4Bk43bVdj3eAE" 
+ } + ] + }` + set, err := jwk.ParseString(s) + if !assert.NoError(t, err, "Parsing private key is successful") { + return + } + + if !assert.Len(t, set.Keys, 1, `should be 1 key`) { + return + } + + rawKey, err := set.Keys[0].Materialize() + if !assert.NoError(t, err, "Materialize should succeed") { + return + } + + if !assert.IsType(t, &ecdsa.PrivateKey{}, rawKey, `should be *ecdsa.PrivateKey`) { + return + } + + rawPrivKey := rawKey.(*ecdsa.PrivateKey) + + pubkey, err := set.Keys[0].(*jwk.ECDSAPrivateKey).PublicKey() + if !assert.NoError(t, err, "Should be able to get ECDSA public key") { + return + } + + rawKey, err = pubkey.Materialize() + if !assert.NoError(t, err, "Materialize should succeed") { + return + } + + if !assert.IsType(t, &ecdsa.PublicKey{}, rawKey, `should be *ecdsa.PublicKey`) { + return + } + + rawPubKey := rawKey.(*ecdsa.PublicKey) + + if !assert.Equal(t, elliptic.P256(), rawPubKey.Curve, "Curve matches") { + return + } + + if !assert.NotEmpty(t, rawPubKey.X, "X exists") { + return + } + + if !assert.NotEmpty(t, rawPubKey.Y, "Y exists") { + return + } + + if !assert.NotEmpty(t, rawPrivKey.D, "D exists") { + return + } + }) + t.Run("Initialization", func(t *testing.T) { + // Generate an ECDSA P-256 test key. + ecPrk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if !assert.NoError(t, err, "Failed to generate EC P-256 key") { + return + } + // Test initialization of a private EC JWK. + prk, err := jwk.New(ecPrk) + if !assert.NoError(t, err, `jwk.New should succeed`) { + return + } + + if !assert.NoError(t, prk.Set(jwk.KeyIDKey, "MyKey"), "Set private key ID success") { + return + } + + if !assert.Equal(t, prk.KeyType(), jwa.EC, "Private key type match") { + return + } + + if !assert.Equal(t, prk.KeyID(), "MyKey", "Private key ID match") { + return + } + + // Test initialization of a public EC JWK. 
+ puk, err := jwk.New(&ecPrk.PublicKey) + if !assert.NoError(t, err, `jwk.New should succeed`) { + return + } + + if !assert.NoError(t, puk.Set(jwk.KeyIDKey, "MyKey"), " Set public key ID success") { + return + } + + if !assert.Equal(t, puk.KeyType(), jwa.EC, "Public key type match") { + return + } + + if !assert.Equal(t, prk.KeyID(), "MyKey", "Public key ID match") { + return + } + }) + t.Run("Marshall Unmarshal Public Key", func(t *testing.T) { + s := `{"keys": + [ + {"kty":"EC", + "crv":"P-256", + "key_ops": ["verify"], + "x":"MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4", + "y":"4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM", + "d":"870MB6gfuTJ4HtUnUvYMyJpr5eUZNP4Bk43bVdj3eAE" + } + ] + }` + expectedPublicKeyBytes := []byte{123, 34, 99, 114, 118, 34, 58, 34, 80, 45, 50, 53, 54, 34, 44, 34, 107, 116, 121, 34, 58, 34, 69, 67, 34, 44, 34, 120, 34, 58, 34, 77, 75, 66, 67, 84, 78, 73, 99, 75, 85, 83, 68, 105, 105, 49, 49, 121, 83, 115, 51, 53, 50, 54, 105, 68, 90, 56, 65, 105, 84, 111, 55, 84, 117, 54, 75, 80, 65, 113, 118, 55, 68, 52, 34, 44, 34, 121, 34, 58, 34, 52, 69, 116, 108, 54, 83, 82, 87, 50, 89, 105, 76, 85, 114, 78, 53, 118, 102, 118, 86, 72, 117, 104, 112, 55, 120, 56, 80, 120, 108, 116, 109, 87, 87, 108, 98, 98, 77, 52, 73, 70, 121, 77, 34, 125} + + set, err := jwk.ParseString(s) + if !assert.NoError(t, err, "Parsing private key is successful") { + return + } + + if !assert.Len(t, set.Keys, 1, `should be 1 key`) { + return + } + + eCDSAPrivateKey := set.Keys[0].(*jwk.ECDSAPrivateKey) + + //Coverage for Curve() function + ellipticCurveAlgorithm := eCDSAPrivateKey.Curve() + if ellipticCurveAlgorithm.String() != "P-256" { + t.Fatal("ellipticCurveAlgorithm does not match") + + } + pubKey, err := set.Keys[0].(*jwk.ECDSAPrivateKey).PublicKey() + rawPubKey, err := pubKey.Materialize() + if err != nil { + t.Fatal("Failed to create raw ECDSAPublicKey") + } + eCDSAPublicKey, err := jwk.New(rawPubKey) + if err != nil { + t.Fatal("Failed to create ECDSAPublicKey") + } 
+ + // verify marshal + pubKeyBytes, err := json.Marshal(eCDSAPublicKey) + if err != nil { + t.Fatal("Failed to marshal ECDSAPublicKey") + } + + if bytes.Compare(pubKeyBytes, expectedPublicKeyBytes) != 0 { + t.Fatal("Expected and created ECDSA Public Keys do not match") + } + + // verify unmarshal + eCDSAPublicKey2 := &jwk.ECDSAPublicKey{} + err = json.Unmarshal(expectedPublicKeyBytes, eCDSAPublicKey2) + if err != nil { + t.Fatal("Failed to unmarshal ECDSAPublicKey") + } + pECDSAPublicKey := eCDSAPublicKey.(*jwk.ECDSAPublicKey) + if !reflect.DeepEqual(pECDSAPublicKey, eCDSAPublicKey2) { + t.Fatal("ECDSA Public Keys do not match") + } + }) + t.Run("Marshall Unmarshal Private Key", func(t *testing.T) { + s := `{"keys": + [ + {"kty":"EC", + "crv":"P-256", + "key_ops": ["verify"], + "x":"MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4", + "y":"4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM", + "d":"870MB6gfuTJ4HtUnUvYMyJpr5eUZNP4Bk43bVdj3eAE" + } + ] + }` + expectedPrivKey := []byte{123, 34, 99, 114, 118, 34, 58, 34, 80, 45, 50, 53, 54, 34, 44, 34, 100, 34, 58, 34, 56, 55, 48, 77, 66, 54, 103, 102, 117, 84, 74, 52, 72, 116, 85, 110, 85, 118, 89, 77, 121, 74, 112, 114, 53, 101, 85, 90, 78, 80, 52, 66, 107, 52, 51, 98, 86, 100, 106, 51, 101, 65, 69, 34, 44, 34, 107, 101, 121, 95, 111, 112, 115, 34, 58, 91, 34, 118, 101, 114, 105, 102, 121, 34, 93, 44, 34, 107, 116, 121, 34, 58, 34, 69, 67, 34, 44, 34, 120, 34, 58, 34, 77, 75, 66, 67, 84, 78, 73, 99, 75, 85, 83, 68, 105, 105, 49, 49, 121, 83, 115, 51, 53, 50, 54, 105, 68, 90, 56, 65, 105, 84, 111, 55, 84, 117, 54, 75, 80, 65, 113, 118, 55, 68, 52, 34, 44, 34, 121, 34, 58, 34, 52, 69, 116, 108, 54, 83, 82, 87, 50, 89, 105, 76, 85, 114, 78, 53, 118, 102, 118, 86, 72, 117, 104, 112, 55, 120, 56, 80, 120, 108, 116, 109, 87, 87, 108, 98, 98, 77, 52, 73, 70, 121, 77, 34, 125} + + set, err := jwk.ParseString(s) + if err != nil { + t.Fatal("Failed to parse JWK ECDSA") + } + eCDSAPrivateKey := set.Keys[0].(*jwk.ECDSAPrivateKey) + + 
privKeyBytes, err := json.Marshal(eCDSAPrivateKey) + if err != nil { + t.Fatal("Failed to marshal ECDSAPrivateKey") + } + // verify marshal + + if bytes.Compare(privKeyBytes, expectedPrivKey) != 0 { + t.Fatal("ECDSAPrivate in bytes do not match") + } + + // verify unmarshal + + expECDSAPrivateKey := &jwk.ECDSAPrivateKey{} + err = json.Unmarshal(expectedPrivKey, expECDSAPrivateKey) + if err != nil { + t.Fatal("Failed to unmarshal ECDSAPublicKey") + } + + if !reflect.DeepEqual(expECDSAPrivateKey, eCDSAPrivateKey) { + t.Fatal("ECDSAPrivate Keys do not match") + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/headers_gen.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/headers_gen.go new file mode 100644 index 0000000000..954596ce5d --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/headers_gen.go @@ -0,0 +1,371 @@ +// This file is auto-generated. DO NOT EDIT + +package jwk + +import ( + "crypto/x509" + "fmt" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +const ( + AlgorithmKey = "alg" + KeyIDKey = "kid" + KeyTypeKey = "kty" + KeyUsageKey = "use" + KeyOpsKey = "key_ops" + X509CertChainKey = "x5c" + X509CertThumbprintKey = "x5t" + X509CertThumbprintS256Key = "x5t#S256" + X509URLKey = "x5u" +) + +type Headers interface { + Remove(string) + Get(string) (interface{}, bool) + Set(string, interface{}) error + PopulateMap(map[string]interface{}) error + ExtractMap(map[string]interface{}) error + Walk(func(string, interface{}) error) error + Algorithm() string + KeyID() string + KeyType() jwa.KeyType + KeyUsage() string + KeyOps() KeyOperationList + X509CertChain() []*x509.Certificate + X509CertThumbprint() string + X509CertThumbprintS256() string + X509URL() string +} + +type StandardHeaders struct { + algorithm *string // https://tools.ietf.org/html/rfc7517#section-4.4 + keyID *string // https://tools.ietf.org/html/rfc7515#section-4.1.4 + keyType 
*jwa.KeyType // https://tools.ietf.org/html/rfc7517#section-4.1 + keyUsage *string // https://tools.ietf.org/html/rfc7517#section-4.2 + keyops KeyOperationList // https://tools.ietf.org/html/rfc7517#section-4.3 + x509CertChain *CertificateChain // https://tools.ietf.org/html/rfc7515#section-4.1.6 + x509CertThumbprint *string // https://tools.ietf.org/html/rfc7515#section-4.1.7 + x509CertThumbprintS256 *string // https://tools.ietf.org/html/rfc7515#section-4.1.8 + x509URL *string // https://tools.ietf.org/html/rfc7515#section-4.1.5 + privateParams map[string]interface{} +} + +func (h *StandardHeaders) Remove(s string) { + delete(h.privateParams, s) +} + +func (h *StandardHeaders) Algorithm() string { + if v := h.algorithm; v != nil { + return *v + } + return "" +} + +func (h *StandardHeaders) KeyID() string { + if v := h.keyID; v != nil { + return *v + } + return "" +} + +func (h *StandardHeaders) KeyType() jwa.KeyType { + if v := h.keyType; v != nil { + return *v + } + return jwa.InvalidKeyType +} + +func (h *StandardHeaders) KeyUsage() string { + if v := h.keyUsage; v != nil { + return *v + } + return "" +} + +func (h *StandardHeaders) KeyOps() KeyOperationList { + return h.keyops +} + +func (h *StandardHeaders) X509CertChain() []*x509.Certificate { + return h.x509CertChain.Get() +} + +func (h *StandardHeaders) X509CertThumbprint() string { + if v := h.x509CertThumbprint; v != nil { + return *v + } + return "" +} + +func (h *StandardHeaders) X509CertThumbprintS256() string { + if v := h.x509CertThumbprintS256; v != nil { + return *v + } + return "" +} + +func (h *StandardHeaders) X509URL() string { + if v := h.x509URL; v != nil { + return *v + } + return "" +} + +func (h *StandardHeaders) Get(name string) (interface{}, bool) { + switch name { + case AlgorithmKey: + v := h.algorithm + if v == nil { + return nil, false + } + return *v, true + case KeyIDKey: + v := h.keyID + if v == nil { + return nil, false + } + return *v, true + case KeyTypeKey: + v := h.keyType + 
if v == nil { + return nil, false + } + return *v, true + case KeyUsageKey: + v := h.keyUsage + if v == nil { + return nil, false + } + return *v, true + case KeyOpsKey: + v := h.keyops + if v == nil { + return nil, false + } + return v, true + case X509CertChainKey: + v := h.x509CertChain + if v == nil { + return nil, false + } + return v.Get(), true + case X509CertThumbprintKey: + v := h.x509CertThumbprint + if v == nil { + return nil, false + } + return *v, true + case X509CertThumbprintS256Key: + v := h.x509CertThumbprintS256 + if v == nil { + return nil, false + } + return *v, true + case X509URLKey: + v := h.x509URL + if v == nil { + return nil, false + } + return *v, true + default: + v, ok := h.privateParams[name] + return v, ok + } +} + +func (h *StandardHeaders) Set(name string, value interface{}) error { + switch name { + case AlgorithmKey: + switch v := value.(type) { + case string: + h.algorithm = &v + return nil + case fmt.Stringer: + s := v.String() + h.algorithm = &s + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, AlgorithmKey, value) + case KeyIDKey: + if v, ok := value.(string); ok { + h.keyID = &v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, KeyIDKey, value) + case KeyTypeKey: + var acceptor jwa.KeyType + if err := acceptor.Accept(value); err != nil { + return errors.Wrapf(err, `invalid value for %s key`, KeyTypeKey) + } + h.keyType = &acceptor + return nil + case KeyUsageKey: + if v, ok := value.(string); ok { + h.keyUsage = &v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, KeyUsageKey, value) + case KeyOpsKey: + if err := h.keyops.Accept(value); err != nil { + return errors.Wrapf(err, `invalid value for %s key`, KeyOpsKey) + } + return nil + case X509CertChainKey: + var acceptor CertificateChain + if err := acceptor.Accept(value); err != nil { + return errors.Wrapf(err, `invalid value for %s key`, X509CertChainKey) + } + h.x509CertChain = &acceptor + return nil + case 
X509CertThumbprintKey: + if v, ok := value.(string); ok { + h.x509CertThumbprint = &v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, X509CertThumbprintKey, value) + case X509CertThumbprintS256Key: + if v, ok := value.(string); ok { + h.x509CertThumbprintS256 = &v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, X509CertThumbprintS256Key, value) + case X509URLKey: + if v, ok := value.(string); ok { + h.x509URL = &v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, X509URLKey, value) + default: + if h.privateParams == nil { + h.privateParams = map[string]interface{}{} + } + h.privateParams[name] = value + } + return nil +} + +// PopulateMap populates a map with appropriate values that represent +// the headers as a JSON object. This exists primarily because JWKs are +// represented as flat objects instead of differentiating the different +// parts of the message in separate sub objects. +func (h StandardHeaders) PopulateMap(m map[string]interface{}) error { + for k, v := range h.privateParams { + m[k] = v + } + if v, ok := h.Get(AlgorithmKey); ok { + m[AlgorithmKey] = v + } + if v, ok := h.Get(KeyIDKey); ok { + m[KeyIDKey] = v + } + if v, ok := h.Get(KeyTypeKey); ok { + m[KeyTypeKey] = v + } + if v, ok := h.Get(KeyUsageKey); ok { + m[KeyUsageKey] = v + } + if v, ok := h.Get(KeyOpsKey); ok { + m[KeyOpsKey] = v + } + if v, ok := h.Get(X509CertChainKey); ok { + m[X509CertChainKey] = v + } + if v, ok := h.Get(X509CertThumbprintKey); ok { + m[X509CertThumbprintKey] = v + } + if v, ok := h.Get(X509CertThumbprintS256Key); ok { + m[X509CertThumbprintS256Key] = v + } + if v, ok := h.Get(X509URLKey); ok { + m[X509URLKey] = v + } + + return nil +} + +// ExtractMap populates the appropriate values from a map that represent +// the headers as a JSON object. 
This exists primarily because JWKs are +// represented as flat objects instead of differentiating the different +// parts of the message in separate sub objects. +func (h *StandardHeaders) ExtractMap(m map[string]interface{}) (err error) { + if v, ok := m[AlgorithmKey]; ok { + if err := h.Set(AlgorithmKey, v); err != nil { + return errors.Wrapf(err, `failed to set value for key %s`, AlgorithmKey) + } + delete(m, AlgorithmKey) + } + if v, ok := m[KeyIDKey]; ok { + if err := h.Set(KeyIDKey, v); err != nil { + return errors.Wrapf(err, `failed to set value for key %s`, KeyIDKey) + } + delete(m, KeyIDKey) + } + if v, ok := m[KeyTypeKey]; ok { + if err := h.Set(KeyTypeKey, v); err != nil { + return errors.Wrapf(err, `failed to set value for key %s`, KeyTypeKey) + } + delete(m, KeyTypeKey) + } + if v, ok := m[KeyUsageKey]; ok { + if err := h.Set(KeyUsageKey, v); err != nil { + return errors.Wrapf(err, `failed to set value for key %s`, KeyUsageKey) + } + delete(m, KeyUsageKey) + } + if v, ok := m[KeyOpsKey]; ok { + if err := h.Set(KeyOpsKey, v); err != nil { + return errors.Wrapf(err, `failed to set value for key %s`, KeyOpsKey) + } + delete(m, KeyOpsKey) + } + if v, ok := m[X509CertChainKey]; ok { + if err := h.Set(X509CertChainKey, v); err != nil { + return errors.Wrapf(err, `failed to set value for key %s`, X509CertChainKey) + } + delete(m, X509CertChainKey) + } + if v, ok := m[X509CertThumbprintKey]; ok { + if err := h.Set(X509CertThumbprintKey, v); err != nil { + return errors.Wrapf(err, `failed to set value for key %s`, X509CertThumbprintKey) + } + delete(m, X509CertThumbprintKey) + } + if v, ok := m[X509CertThumbprintS256Key]; ok { + if err := h.Set(X509CertThumbprintS256Key, v); err != nil { + return errors.Wrapf(err, `failed to set value for key %s`, X509CertThumbprintS256Key) + } + delete(m, X509CertThumbprintS256Key) + } + if v, ok := m[X509URLKey]; ok { + if err := h.Set(X509URLKey, v); err != nil { + return errors.Wrapf(err, `failed to set value for key %s`, 
X509URLKey) + } + delete(m, X509URLKey) + } + // Fix: A nil map is different from an empty map as far as deep.equal is concerned + if len(m) > 0 { + h.privateParams = m + } + + return nil +} + +func (h StandardHeaders) Walk(f func(string, interface{}) error) error { + for _, key := range []string{AlgorithmKey, KeyIDKey, KeyTypeKey, KeyUsageKey, KeyOpsKey, X509CertChainKey, X509CertThumbprintKey, X509CertThumbprintS256Key, X509URLKey} { + if v, ok := h.Get(key); ok { + if err := f(key, v); err != nil { + return errors.Wrapf(err, `walk function returned error for %s`, key) + } + } + } + + for k, v := range h.privateParams { + if err := f(k, v); err != nil { + return errors.Wrapf(err, `walk function returned error for %s`, k) + } + } + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/headers_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/headers_test.go new file mode 100644 index 0000000000..a7bdcc639d --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/headers_test.go @@ -0,0 +1,161 @@ +package jwk_test + +import ( + "fmt" + "testing" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" + "github.com/stretchr/testify/assert" +) + +func TestHeader(t *testing.T) { + t.Run("Roundtrip", func(t *testing.T) { + values := map[string]interface{}{ + jwk.KeyIDKey: "helloworld01", + jwk.KeyTypeKey: jwa.RSA, + jwk.KeyOpsKey: jwk.KeyOperationList{jwk.KeyOpSign}, + jwk.KeyUsageKey: "sig", + jwk.X509CertThumbprintKey: "thumbprint", + jwk.X509CertThumbprintS256Key: "thumbprint256", + jwk.X509URLKey: "cert1", + } + + var h jwk.StandardHeaders + for k, v := range values { + if !assert.NoError(t, h.Set(k, v), "Set works for '%s'", k) { + return + } + + got, ok := h.Get(k) + if !assert.True(t, ok, "Get works for '%s'", k) { + return + } + + if !assert.Equal(t, v, got, "values match '%s'", k) { + return + } + + if !assert.NoError(t, h.Set(k, v), "Set works
for '%s'", k) { + return + } + } + }) + t.Run("RoundtripError", func(t *testing.T) { + + type dummyStruct struct { + dummy1 int + dummy2 float64 + } + dummy := &dummyStruct{1, 3.4} + values := map[string]interface{}{ + jwk.AlgorithmKey: dummy, + jwk.KeyIDKey: dummy, + jwk.KeyTypeKey: dummy, + jwk.KeyUsageKey: dummy, + jwk.KeyOpsKey: dummy, + jwk.X509CertChainKey: dummy, + jwk.X509CertThumbprintKey: dummy, + jwk.X509CertThumbprintS256Key: dummy, + jwk.X509URLKey: dummy, + } + + var h jwk.StandardHeaders + for k, v := range values { + err := h.Set(k, v) + if err == nil { + t.Fatalf("Setting %s value should have failed", k) + } + } + err := h.Set("Default", dummy) + if err != nil { + t.Fatalf("Setting %s value failed", "default") + } + if h.Algorithm() != "" { + t.Fatalf("Algorithm should be empty string") + } + if h.KeyID() != "" { + t.Fatalf("KeyID should be empty string") + } + if h.KeyType() != "" { + t.Fatalf("KeyType should be empty string") + } + if h.KeyUsage() != "" { + t.Fatalf("KeyUsage should be empty string") + } + if h.KeyOps() != nil { + t.Fatalf("KeyOps should be empty string") + } + }) + t.Run("ExtractMapError", func(t *testing.T) { + + type dummyStruct struct { + dummy1 int + dummy2 float64 + } + dummy := &dummyStruct{1, 3.4} + values := map[string]interface{}{ + jwk.AlgorithmKey: dummy, + jwk.KeyIDKey: dummy, + jwk.KeyTypeKey: dummy, + jwk.KeyUsageKey: dummy, + jwk.KeyOpsKey: dummy, + jwk.X509CertChainKey: dummy, + jwk.X509CertThumbprintKey: dummy, + jwk.X509CertThumbprintS256Key: dummy, + jwk.X509URLKey: dummy, + } + + var h jwk.StandardHeaders + for k, _ := range values { + err := h.ExtractMap(values) + if err == nil { + t.Fatalf("Extracting %s value should have failed", k) + } + delete(values, k) + } + }) + + t.Run("Algorithm", func(t *testing.T) { + var h jwk.StandardHeaders + for _, value := range []interface{}{jwa.RS256, jwa.RSA1_5} { + if !assert.NoError(t, h.Set("alg", value), "Set for alg should succeed") { + return + } + + got, ok := 
h.Get("alg") + if !assert.True(t, ok, "Get for alg should succeed") { + return + } + + if !assert.Equal(t, value.(fmt.Stringer).String(), got, "values match") { + return + } + } + }) + t.Run("KeyType", func(t *testing.T) { + var h jwk.StandardHeaders + for _, value := range []interface{}{jwa.RSA, "RSA"} { + if !assert.NoError(t, h.Set(jwk.KeyTypeKey, value), "Set for kty should succeed") { + return + } + + got, ok := h.Get(jwk.KeyTypeKey) + if !assert.True(t, ok, "Get for kty should succeed") { + return + } + + var s string + switch value.(type) { + case jwa.KeyType: + s = value.(jwa.KeyType).String() + case string: + s = value.(string) + } + + if !assert.Equal(t, jwa.KeyType(s), got, "values match") { + return + } + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/interface.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/interface.go new file mode 100644 index 0000000000..d664befe32 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/interface.go @@ -0,0 +1,105 @@ +package jwk + +import ( + "crypto" + "crypto/ecdsa" + "crypto/rsa" + "crypto/x509" + "errors" +) + +// KeyUsageType is used to denote what this key should be used for +type KeyUsageType string + +const ( + // ForSignature is the value used in the headers to indicate that + // this key should be used for signatures + ForSignature KeyUsageType = "sig" + // ForEncryption is the value used in the headers to indicate that + // this key should be used for encryption + ForEncryption KeyUsageType = "enc" +) + +type CertificateChain struct { + certs []*x509.Certificate +} + +// Errors related to JWK +var ( + ErrInvalidHeaderName = errors.New("invalid header name") + ErrInvalidHeaderValue = errors.New("invalid value for header key") + ErrUnsupportedKty = errors.New("unsupported kty") + ErrUnsupportedCurve = errors.New("unsupported curve") +) + +type KeyOperation string +type KeyOperationList []KeyOperation +
+const ( + KeyOpSign KeyOperation = "sign" // (compute digital signature or MAC) + KeyOpVerify = "verify" // (verify digital signature or MAC) + KeyOpEncrypt = "encrypt" // (encrypt content) + KeyOpDecrypt = "decrypt" // (decrypt content and validate decryption, if applicable) + KeyOpWrapKey = "wrapKey" // (encrypt key) + KeyOpUnwrapKey = "unwrapKey" // (decrypt key and validate decryption, if applicable) + KeyOpDeriveKey = "deriveKey" // (derive key) + KeyOpDeriveBits = "deriveBits" // (derive bits not to be used as a key) +) + +// Set is a convenience struct to allow generating and parsing +// JWK sets as opposed to single JWKs +type Set struct { + Keys []Key `json:"keys"` +} + +// Key defines the minimal interface for each of the +// key types. Their use and implementation differ significantly +// between the key types, so you should use type assertions +// to perform more specific tasks with each key +type Key interface { + Headers + + // Materialize creates the corresponding key. For example, + // RSA types would create *rsa.PublicKey or *rsa.PrivateKey, + // EC types would create *ecdsa.PublicKey or *ecdsa.PrivateKey, + // and OctetSeq types create a []byte key.
+ Materialize() (interface{}, error) + + // Thumbprint returns the JWK thumbprint using the indicated + // hashing algorithm, according to RFC 7638 + Thumbprint(crypto.Hash) ([]byte, error) +} + +type headers interface { + Headers +} + +// RSAPublicKey is a type of JWK generated from RSA public keys +type RSAPublicKey struct { + headers + key *rsa.PublicKey +} + +// RSAPrivateKey is a type of JWK generated from RSA private keys +type RSAPrivateKey struct { + headers + key *rsa.PrivateKey +} + +// SymmetricKey is a type of JWK generated from symmetric keys +type SymmetricKey struct { + headers + key []byte +} + +// ECDSAPublicKey is a type of JWK generated from ECDSA public keys +type ECDSAPublicKey struct { + headers + key *ecdsa.PublicKey +} + +// ECDSAPrivateKey is a type of JWK generated from ECDSA private keys +type ECDSAPrivateKey struct { + headers + key *ecdsa.PrivateKey +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/internal/cmd/genheader/main.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/internal/cmd/genheader/main.go new file mode 100644 index 0000000000..87e25f1ebd --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/internal/cmd/genheader/main.go @@ -0,0 +1,368 @@ +package main + +import ( + "bytes" + "fmt" + "go/format" + "log" + "os" + "sort" + "strconv" + "strings" + + "github.com/pkg/errors" +) + +func main() { + if err := _main(); err != nil { + log.Printf("%s", err) + os.Exit(1) + } +} + +func _main() error { + return generateHeaders() +} + +type headerField struct { + name string + method string + typ string + returnType string + key string + comment string + hasAccept bool + hasGet bool + noDeref bool + isList bool +} + +func (f headerField) IsList() bool { + return f.isList || strings.HasPrefix(f.typ, "[]") +} + +func (f headerField) IsPointer() bool { + return strings.HasPrefix(f.typ, "*") +} + +func (f headerField) PointerElem() string {
return strings.TrimPrefix(f.typ, "*") +} + +var zerovals = map[string]string{ + "string": `""`, + "jwa.KeyType": "jwa.InvalidKeyType", +} + +func zeroval(s string) string { + if v, ok := zerovals[s]; ok { + return v + } + return "nil" +} + +func generateHeaders() error { + fields := []headerField{ + { + name: `keyType`, + method: `KeyType`, + typ: `*jwa.KeyType`, + key: `kty`, + comment: `https://tools.ietf.org/html/rfc7517#section-4.1`, + hasAccept: true, + }, + { + name: `keyUsage`, + method: `KeyUsage`, + key: `use`, + typ: `*string`, + comment: `https://tools.ietf.org/html/rfc7517#section-4.2`, + }, + { + name: `keyops`, + method: `KeyOps`, + typ: `KeyOperationList`, + key: `key_ops`, + comment: `https://tools.ietf.org/html/rfc7517#section-4.3`, + hasAccept: true, + }, + { + name: `algorithm`, + method: `Algorithm`, + typ: `*string`, + key: `alg`, + comment: `https://tools.ietf.org/html/rfc7517#section-4.4`, + }, + { + name: `keyID`, + method: `KeyID`, + typ: `*string`, + key: `kid`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.4`, + }, + { + name: `x509URL`, + method: `X509URL`, + typ: `*string`, + key: `x5u`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.5`, + }, + { + name: `x509CertChain`, + method: `X509CertChain`, + typ: `*CertificateChain`, + key: `x5c`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.6`, + hasAccept: true, + hasGet: true, + noDeref: true, + returnType: `[]*x509.Certificate`, + }, + { + name: `x509CertThumbprint`, + method: `X509CertThumbprint`, + typ: `*string`, + key: `x5t`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.7`, + }, + { + name: `x509CertThumbprintS256`, + method: `X509CertThumbprintS256`, + typ: `*string`, + key: `x5t#S256`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.8`, + }, + } + + sort.Slice(fields, func(i, j int) bool { + return fields[i].name < fields[j].name + }) + + var buf bytes.Buffer + + fmt.Fprintf(&buf, "\n// This file is auto-generated. 
DO NOT EDIT") + fmt.Fprintf(&buf, "\n\npackage jwk") + fmt.Fprintf(&buf, "\n\nimport (") + for _, pkg := range []string{"crypto/x509", "fmt"} { + fmt.Fprintf(&buf, "\n%s", strconv.Quote(pkg)) + } + fmt.Fprintf(&buf, "\n\n") + for _, pkg := range []string{"github.com/lestrrat-go/jwx/jwa", "github.com/pkg/errors"} { + fmt.Fprintf(&buf, "\n%s", strconv.Quote(pkg)) + } + fmt.Fprintf(&buf, "\n)") + + fmt.Fprintf(&buf, "\n\nconst (") + for _, f := range fields { + fmt.Fprintf(&buf, "\n%sKey = %s", f.method, strconv.Quote(f.key)) + } + fmt.Fprintf(&buf, "\n)") // end const + + fmt.Fprintf(&buf, "\n\ntype Headers interface {") + fmt.Fprintf(&buf, "\nRemove(string)") + fmt.Fprintf(&buf, "\nGet(string) (interface{}, bool)") + fmt.Fprintf(&buf, "\nSet(string, interface{}) error") + fmt.Fprintf(&buf, "\nPopulateMap(map[string]interface{}) error") + fmt.Fprintf(&buf, "\nExtractMap(map[string]interface{}) error") + fmt.Fprintf(&buf, "\nWalk(func(string, interface{}) error) error") + for _, f := range fields { + fmt.Fprintf(&buf, "\n%s() ", f.method) + if f.returnType != "" { + fmt.Fprintf(&buf, "%s", f.returnType) + } else if f.IsPointer() && f.noDeref { + fmt.Fprintf(&buf, "%s", f.typ) + } else { + fmt.Fprintf(&buf, "%s", f.PointerElem()) + } + } + fmt.Fprintf(&buf, "\n}") // end type Headers interface + fmt.Fprintf(&buf, "\n\ntype StandardHeaders struct {") + for _, f := range fields { + fmt.Fprintf(&buf, "\n%s %s // %s", f.name, f.typ, f.comment) + } + fmt.Fprintf(&buf, "\nprivateParams map[string]interface{}") + fmt.Fprintf(&buf, "\n}") // end type StandardHeaders + + fmt.Fprintf(&buf, "\n\nfunc (h *StandardHeaders) Remove(s string) {") + fmt.Fprintf(&buf, "\ndelete(h.privateParams, s)") + fmt.Fprintf(&buf, "\n}") // func Remove(s string) + + for _, f := range fields { + fmt.Fprintf(&buf, "\n\nfunc (h *StandardHeaders) %s() ", f.method) + if f.returnType != "" { + fmt.Fprintf(&buf, "%s", f.returnType) + } else if f.IsPointer() && f.noDeref { + fmt.Fprintf(&buf, "%s", f.typ) 
+ } else { + fmt.Fprintf(&buf, "%s", f.PointerElem()) + } + fmt.Fprintf(&buf, " {") + + if f.hasGet { + fmt.Fprintf(&buf, "\nreturn h.%s.Get()", f.name) + } else if !f.IsPointer() { + fmt.Fprintf(&buf, "\nreturn h.%s", f.name) + } else { + fmt.Fprintf(&buf, "\nif v := h.%s; v != %s {", f.name, zeroval(f.typ)) + if f.IsPointer() && !f.noDeref { + fmt.Fprintf(&buf, "\nreturn *v") + } else { + fmt.Fprintf(&buf, "\nreturn v") + } + fmt.Fprintf(&buf, "\n}") // if h.%s != %s + fmt.Fprintf(&buf, "\nreturn %s", zeroval(f.PointerElem())) + } + fmt.Fprintf(&buf, "\n}") // func (h *StandardHeaders) %s() %s + } + + fmt.Fprintf(&buf, "\n\nfunc (h *StandardHeaders) Get(name string) (interface{}, bool) {") + fmt.Fprintf(&buf, "\nswitch name {") + for _, f := range fields { + fmt.Fprintf(&buf, "\ncase %sKey:", f.method) + fmt.Fprintf(&buf, "\nv := h.%s", f.name) + if f.IsList() { + fmt.Fprintf(&buf, "\nif len(v) == 0 {") + } else { + fmt.Fprintf(&buf, "\nif v == %s {", zeroval(f.typ)) + } + fmt.Fprintf(&buf, "\nreturn nil, false") + fmt.Fprintf(&buf, "\n}") // end if h.%s == nil + if f.hasGet { + fmt.Fprintf(&buf, "\nreturn v.Get(), true") + } else if f.IsPointer() && !f.noDeref { + fmt.Fprintf(&buf, "\nreturn *v, true") + } else { + fmt.Fprintf(&buf, "\nreturn v, true") + } + } + fmt.Fprintf(&buf, "\ndefault:") + fmt.Fprintf(&buf, "\nv, ok := h.privateParams[name]") + fmt.Fprintf(&buf, "\nreturn v, ok") + fmt.Fprintf(&buf, "\n}") // end switch name + fmt.Fprintf(&buf, "\n}") // func (h *StandardHeaders) Get(name string) (interface{}, bool) + + fmt.Fprintf(&buf, "\n\nfunc (h *StandardHeaders) Set(name string, value interface{}) error {") + fmt.Fprintf(&buf, "\nswitch name {") + for _, f := range fields { + fmt.Fprintf(&buf, "\ncase %sKey:", f.method) + if f.name == "algorithm" { + fmt.Fprintf(&buf, "\nswitch v := value.(type) {") + fmt.Fprintf(&buf, "\ncase string:") + fmt.Fprintf(&buf, "\nh.algorithm = &v") + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\ncase 
fmt.Stringer:") + fmt.Fprintf(&buf, "\ns := v.String()") + fmt.Fprintf(&buf, "\nh.algorithm = &s") + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") + fmt.Fprintf(&buf, "\nreturn errors.Errorf(`invalid value for %%s key: %%T`, AlgorithmKey, value)") + } else if f.hasAccept { + if f.IsPointer() { + fmt.Fprintf(&buf, "\nvar acceptor %s", f.PointerElem()) + fmt.Fprintf(&buf, "\nif err := acceptor.Accept(value); err != nil {") + fmt.Fprintf(&buf, "\nreturn errors.Wrapf(err, `invalid value for %%s key`, %sKey)", f.method) + fmt.Fprintf(&buf, "\n}") // end if err := h.%s.Accept(value) + fmt.Fprintf(&buf, "\nh.%s = &acceptor", f.name) + } else { + fmt.Fprintf(&buf, "\nif err := h.%s.Accept(value); err != nil {", f.name) + fmt.Fprintf(&buf, "\nreturn errors.Wrapf(err, `invalid value for %%s key`, %sKey)", f.method) + fmt.Fprintf(&buf, "\n}") // end if err := h.%s.Accept(value) + } + fmt.Fprintf(&buf, "\nreturn nil") + } else { + if f.IsPointer() { + fmt.Fprintf(&buf, "\nif v, ok := value.(%s); ok {", f.PointerElem()) + fmt.Fprintf(&buf, "\nh.%s = &v", f.name) + } else { + fmt.Fprintf(&buf, "\nif v, ok := value.(%s); ok {", f.typ) + fmt.Fprintf(&buf, "\nh.%s = v", f.name) + } + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") // end if v, ok := value.(%s) + fmt.Fprintf(&buf, "\nreturn errors.Errorf(`invalid value for %%s key: %%T`, %sKey, value)", f.method) + } + } + fmt.Fprintf(&buf, "\ndefault:") + fmt.Fprintf(&buf, "\nif h.privateParams == nil {") + fmt.Fprintf(&buf, "\nh.privateParams = map[string]interface{}{}") + fmt.Fprintf(&buf, "\n}") // end if h.privateParams == nil + fmt.Fprintf(&buf, "\nh.privateParams[name] = value") + fmt.Fprintf(&buf, "\n}") // end switch name + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") // end func (h *StandardHeaders) Set(name string, value interface{}) + + fmt.Fprintf(&buf, "\n\n// PopulateMap populates a map with appropriate values that represent") + fmt.Fprintf(&buf, "\n// the headers as a JSON 
object. This exists primarily because JWKs are")
+ fmt.Fprintf(&buf, "\n// represented as flat objects instead of differentiating the different")
+ fmt.Fprintf(&buf, "\n// parts of the message in separate sub objects.")
+ fmt.Fprintf(&buf, "\nfunc (h StandardHeaders) PopulateMap(m map[string]interface{}) error {")
+ fmt.Fprintf(&buf, "\nfor k, v := range h.privateParams {")
+ fmt.Fprintf(&buf, "\nm[k] = v")
+ fmt.Fprintf(&buf, "\n}") // end for k, v := range h.privateParams
+ for _, f := range fields {
+ fmt.Fprintf(&buf, "\nif v, ok := h.Get(%sKey); ok {", f.method)
+ fmt.Fprintf(&buf, "\nm[%sKey] = v", f.method)
+ fmt.Fprintf(&buf, "\n}") // end if v, ok := h.Get(%sKey); ok
+ }
+ fmt.Fprintf(&buf, "\n\nreturn nil")
+ fmt.Fprintf(&buf, "\n}") // func (h StandardHeaders) PopulateMap(m map[string]interface{})
+
+ fmt.Fprintf(&buf, "\n\n// ExtractMap populates the appropriate values from a map that represent")
+ fmt.Fprintf(&buf, "\n// the headers as a JSON object. This exists primarily because JWKs are")
+ fmt.Fprintf(&buf, "\n// represented as flat objects instead of differentiating the different")
+ fmt.Fprintf(&buf, "\n// parts of the message in separate sub objects.")
+ fmt.Fprintf(&buf, "\nfunc (h *StandardHeaders) ExtractMap(m map[string]interface{}) (err error) {")
+ for _, f := range fields {
+ fmt.Fprintf(&buf, "\nif v, ok := m[%sKey]; ok {", f.method)
+ fmt.Fprintf(&buf, "\nif err := h.Set(%sKey, v); err != nil {", f.method)
+ fmt.Fprintf(&buf, "\nreturn errors.Wrapf(err, `failed to set value for key %%s`, %sKey)", f.method)
+ fmt.Fprintf(&buf, "\n}")
+ fmt.Fprintf(&buf, "\ndelete(m, %sKey)", f.method)
+ fmt.Fprintf(&buf, "\n}") // end if v, ok := m[%sKey]
+ }
+ fmt.Fprintf(&buf, "\n// Fix: A nil map is different from an empty map as far as deep.equal is concerned")
+ fmt.Fprintf(&buf, "\nif len(m) > 0 {")
+ fmt.Fprintf(&buf, "\nh.privateParams = m")
+ fmt.Fprintf(&buf, "\n}")
+ fmt.Fprintf(&buf, "\n\nreturn nil")
+ fmt.Fprintf(&buf, "\n}") // end func (h 
*StandardHeaders) ExtractMap(m map[string]interface{}) error + + fmt.Fprintf(&buf, "\n\nfunc (h StandardHeaders) Walk(f func(string, interface{}) error) error {") + fmt.Fprintf(&buf, "\nfor _, key := range []string{") + for i, field := range fields { + fmt.Fprintf(&buf, "%sKey", field.method) + if i < len(fields)-1 { + fmt.Fprintf(&buf, ", ") + } + } + fmt.Fprintf(&buf, "} {") + fmt.Fprintf(&buf, "\nif v, ok := h.Get(key); ok {") + fmt.Fprintf(&buf, "\nif err := f(key, v); err != nil {") + fmt.Fprintf(&buf, "\nreturn errors.Wrapf(err, `walk function returned error for %%s`, key)") + fmt.Fprintf(&buf, "\n}") // end if err := f(key, v); err != nil + fmt.Fprintf(&buf, "\n}") // end if v, ok := h.Get(key); ok + fmt.Fprintf(&buf, "\n}") // end for _, key := range []string{} + + fmt.Fprintf(&buf, "\n\nfor k, v := range h.privateParams {") + fmt.Fprintf(&buf, "\nif err := f(k, v); err != nil {") + fmt.Fprintf(&buf, "\nreturn errors.Wrapf(err, `walk function returned error for %%s`, k)") + fmt.Fprintf(&buf, "\n}") // end if err := f(key, v); err != nil + fmt.Fprintf(&buf, "\n}") // end for k, v := range h.privateParams + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") // end func (h StandardHeaders) Walk(f func(string, interface{}) error) + + formatted, err := format.Source(buf.Bytes()) + if err != nil { + buf.WriteTo(os.Stdout) + return errors.Wrap(err, `failed to format code`) + } + + f, err := os.Create("headers_gen.go") + if err != nil { + return errors.Wrap(err, `failed to open headers_gen.go`) + } + defer f.Close() + f.Write(formatted) + + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/jwk.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/jwk.go new file mode 100644 index 0000000000..741e683de7 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/jwk.go @@ -0,0 +1,271 @@ +//go:generate go run internal/cmd/genheader/main.go + +// Package jwk 
implements JWK as described in https://tools.ietf.org/html/rfc7517
+package jwk
+
+import (
+ "bytes"
+ "crypto/ecdsa"
+ "crypto/rsa"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net/http"
+ "net/url"
+ "os"
+ "strings"
+
+ "github.com/lestrrat-go/jwx/internal/base64"
+ "github.com/lestrrat-go/jwx/jwa"
+ "github.com/pkg/errors"
+)
+
+// GetPublicKey returns the public key based on the private key type.
+// For rsa key types *rsa.PublicKey is returned; for ecdsa key types *ecdsa.PublicKey;
+// for byte slice (raw) keys, the key itself is returned. If the corresponding
+// public key cannot be deduced, an error is returned
+func GetPublicKey(key interface{}) (interface{}, error) {
+ if key == nil {
+ return nil, errors.New(`jwk.GetPublicKey requires a non-nil key`)
+ }
+
+ switch v := key.(type) {
+ // Mental note: although Public() is defined in both types,
+ // you can not coalesce the clauses for rsa.PrivateKey and
+ // ecdsa.PrivateKey, as then `v` becomes interface{}
+ // b/c the compiler cannot deduce the exact type.
+ case *rsa.PrivateKey:
+ return v.Public(), nil
+ case *ecdsa.PrivateKey:
+ return v.Public(), nil
+ case []byte:
+ return v, nil
+ default:
+ return nil, errors.Errorf(`invalid key type %T`, key)
+ }
+}
+
+// New creates a jwk.Key from the given key.
+func New(key interface{}) (Key, error) {
+ if key == nil {
+ return nil, errors.New(`jwk.New requires a non-nil key`)
+ }
+
+ switch v := key.(type) {
+ case *rsa.PrivateKey:
+ return newRSAPrivateKey(v)
+ case *rsa.PublicKey:
+ return newRSAPublicKey(v)
+ case *ecdsa.PrivateKey:
+ return newECDSAPrivateKey(v)
+ case *ecdsa.PublicKey:
+ return newECDSAPublicKey(v)
+ case []byte:
+ return newSymmetricKey(v)
+ default:
+ return nil, errors.Errorf(`invalid key type %T`, key)
+ }
+}
+
+// Fetch fetches a JWK resource specified by a URL
+func Fetch(urlstring string, options ...Option) (*Set, error) {
+ u, err := url.Parse(urlstring)
+ if err != nil {
+ return nil, errors.Wrap(err, `failed to parse url`)
+ }
+
+ switch u.Scheme {
+ case "http", "https":
+ return FetchHTTP(urlstring, options...)
+ case "file":
+ f, err := os.Open(u.Path)
+ if err != nil {
+ return nil, errors.Wrap(err, `failed to open jwk file`)
+ }
+ defer f.Close()
+
+ buf, err := ioutil.ReadAll(f)
+ if err != nil {
+ return nil, errors.Wrap(err, `failed to read content from jwk file`)
+ }
+ return ParseBytes(buf)
+ }
+ return nil, errors.Errorf(`invalid url scheme %s`, u.Scheme)
+}
+
+// FetchHTTP fetches the remote JWK and parses its contents
+func FetchHTTP(jwkurl string, options ...Option) (*Set, error) {
+ var httpcl HTTPClient = http.DefaultClient
+ for _, option := range options {
+ switch option.Name() {
+ case optkeyHTTPClient:
+ httpcl = option.Value().(HTTPClient)
+ }
+ }
+
+ res, err := httpcl.Get(jwkurl)
+ if err != nil {
+ return nil, errors.Wrap(err, "failed to fetch remote JWK")
+ }
+ defer res.Body.Close()
+
+ if res.StatusCode != http.StatusOK {
+ return nil, fmt.Errorf("failed to fetch remote JWK (status = %d)", res.StatusCode)
+ }
+
+ buf, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ return nil, errors.Wrap(err, "failed to read JWK HTTP response body")
+ }
+
+ return ParseBytes(buf)
+}
+
+func (set *Set) UnmarshalJSON(data []byte) error {
+ v, err := ParseBytes(data)
+ if err != 
nil { + return errors.Wrap(err, `failed to parse jwk.Set`) + } + *set = *v + return nil +} + +// Parse parses JWK from the incoming io.Reader. +func Parse(in io.Reader) (*Set, error) { + m := make(map[string]interface{}) + if err := json.NewDecoder(in).Decode(&m); err != nil { + return nil, errors.Wrap(err, "failed to unmarshal JWK") + } + + // We must change what the underlying structure that gets decoded + // out of this JSON is based on parameters within the already parsed + // JSON (m). In order to do this, we have to go through the tedious + // task of parsing the contents of this map :/ + if _, ok := m["keys"]; ok { + var set Set + if err := set.ExtractMap(m); err != nil { + return nil, errors.Wrap(err, `failed to extract from map`) + } + return &set, nil + } + + k, err := constructKey(m) + if err != nil { + return nil, errors.Wrap(err, `failed to construct key from keys`) + } + return &Set{Keys: []Key{k}}, nil +} + +// ParseBytes parses JWK from the incoming byte buffer. +func ParseBytes(buf []byte) (*Set, error) { + return Parse(bytes.NewReader(buf)) +} + +// ParseString parses JWK from the incoming string. +func ParseString(s string) (*Set, error) { + return Parse(strings.NewReader(s)) +} + +// LookupKeyID looks for keys matching the given key id. 
Note that the +// Set *may* contain multiple keys with the same key id +func (s Set) LookupKeyID(kid string) []Key { + var keys []Key + for _, key := range s.Keys { + if key.KeyID() == kid { + keys = append(keys, key) + } + } + return keys +} + +func (s *Set) ExtractMap(m map[string]interface{}) error { + raw, ok := m["keys"] + if !ok { + return errors.New("missing 'keys' parameter") + } + + v, ok := raw.([]interface{}) + if !ok { + return errors.New("invalid 'keys' parameter") + } + + var ks Set + for _, c := range v { + conf, ok := c.(map[string]interface{}) + if !ok { + return errors.New("invalid element in 'keys'") + } + + k, err := constructKey(conf) + if err != nil { + return errors.Wrap(err, `failed to construct key from map`) + } + ks.Keys = append(ks.Keys, k) + } + + *s = ks + return nil +} + +func constructKey(m map[string]interface{}) (Key, error) { + kty, ok := m[KeyTypeKey].(string) + if !ok { + return nil, errors.Errorf(`unsupported kty type %T`, m[KeyTypeKey]) + } + + var key Key + switch jwa.KeyType(kty) { + case jwa.RSA: + if _, ok := m["d"]; ok { + key = &RSAPrivateKey{} + } else { + key = &RSAPublicKey{} + } + case jwa.EC: + if _, ok := m["d"]; ok { + key = &ECDSAPrivateKey{} + } else { + key = &ECDSAPublicKey{} + } + case jwa.OctetSeq: + key = &SymmetricKey{} + default: + return nil, errors.Errorf(`invalid kty %s`, kty) + } + + if err := key.ExtractMap(m); err != nil { + return nil, errors.Wrap(err, `failed to extract key from map`) + } + + return key, nil +} + +func getRequiredKey(m map[string]interface{}, key string) ([]byte, error) { + return getKey(m, key, true) +} + +func getOptionalKey(m map[string]interface{}, key string) ([]byte, error) { + return getKey(m, key, false) +} + +func getKey(m map[string]interface{}, key string, required bool) ([]byte, error) { + v, ok := m[key] + if !ok { + if !required { + return nil, errors.Errorf(`missing parameter '%s'`, key) + } + return nil, errors.Errorf(`missing required parameter '%s'`, key) + } + + 
vs, ok := v.(string) + if !ok { + return nil, errors.Errorf(`invalid type for parameter '%s': %T`, key, v) + } + + buf, err := base64.DecodeString(vs) + if err != nil { + return nil, errors.Wrapf(err, `failed to base64 decode key %s`, key) + } + return buf, nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/jwk_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/jwk_test.go new file mode 100644 index 0000000000..ef249ab396 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/jwk_test.go @@ -0,0 +1,563 @@ +package jwk_test + +import ( + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/rsa" + "encoding/json" + "io" + "io/ioutil" + "net/http" + "net/http/httptest" + "os" + "testing" + + "github.com/lestrrat-go/jwx/internal/base64" + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" + "github.com/pkg/errors" + "github.com/stretchr/testify/assert" +) + +func TestNew(t *testing.T) { + k, err := jwk.New(nil) + if !assert.Nil(t, k, "key should be nil") { + return + } + if !assert.Error(t, err, "nil key should cause an error") { + return + } +} + +func TestParse(t *testing.T) { + verify := func(t *testing.T, src string, expected interface{}) { + t.Run("json.Unmarshal", func(t *testing.T) { + var set jwk.Set + if err := json.Unmarshal([]byte(src), &set); !assert.NoError(t, err, `json.Unmarshal should succeed`) { + return + } + + if !assert.True(t, len(set.Keys) > 0, "set.Keys should be greater than 0") { + return + } + for _, key := range set.Keys { + if !assert.IsType(t, expected, key, "key should be a jwk.RSAPublicKey") { + return + } + } + }) + t.Run("jwk.Parse", func(t *testing.T) { + set, err := jwk.ParseBytes([]byte(src)) + if !assert.NoError(t, err, `jwk.Parse should succeed`) { + return + } + + if !assert.True(t, len(set.Keys) > 0, "set.Keys should be greater than 0") { + return + } + for _, key := range set.Keys { + if !assert.IsType(t, 
expected, key, "key should be a jwk.RSAPublicKey") { + return + } + + switch key := key.(type) { + case *jwk.RSAPrivateKey, *jwk.ECDSAPrivateKey: + realKey, err := key.(jwk.Key).Materialize() + if !assert.NoError(t, err, "failed to get underlying private key") { + return + } + + if _, err := jwk.GetPublicKey(realKey); !assert.NoError(t, err, `failed to get public key from underlying private key`) { + return + } + } + } + }) + } + + t.Run("RSA Public Key", func(t *testing.T) { + const src = `{ + "e":"AQAB", + "kty":"RSA", + "n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw" + }` + verify(t, src, &jwk.RSAPublicKey{}) + }) + t.Run("RSA Private Key", func(t *testing.T) { + const src = `{ + "kty":"RSA", + "n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw", + "e":"AQAB", + "d":"X4cTteJY_gn4FYPsXB8rdXix5vwsg1FLN5E3EaG6RJoVH-HLLKD9M7dx5oo7GURknchnrRweUkC7hT5fJLM0WbFAKNLWY2vv7B6NqXSzUvxT0_YSfqijwp3RTzlBaCxWp4doFk5N2o8Gy_nHNKroADIkJ46pRUohsXywbReAdYaMwFs9tv8d_cPVY3i07a3t8MN6TNwm0dSawm9v47UiCl3Sk5ZiG7xojPLu4sbg1U2jx4IBTNBznbJSzFHK66jT8bgkuqsk0GjskDJk19Z4qwjwbsnn4j2WBii3RL-Us2lGVkY8fkFzme1z0HbIkfz0Y6mqnOYtqc0X4jfcKoAC8Q", + "p":"83i-7IvMGXoMXCskv73TKr8637FiO7Z27zv8oj6pbWUQyLPQBQxtPVnwD20R-60eTDmD2ujnMt5PoqMrm8RfmNhVWDtjjMmCMjOpSXicFHj7XOuVIYQyqVWlWEh6dN36GVZYk93N8Bc9vY41xy8B9RzzOGVQzXvNEvn7O0nVbfs", + 
"q":"3dfOR9cuYq-0S-mkFLzgItgMEfFzB2q3hWehMuG0oCuqnb3vobLyumqjVZQO1dIrdwgTnCdpYzBcOfW5r370AFXjiWft_NGEiovonizhKpo9VVS78TzFgxkIdrecRezsZ-1kYd_s1qDbxtkDEgfAITAG9LUnADun4vIcb6yelxk", + "dp":"G4sPXkc6Ya9y8oJW9_ILj4xuppu0lzi_H7VTkS8xj5SdX3coE0oimYwxIi2emTAue0UOa5dpgFGyBJ4c8tQ2VF402XRugKDTP8akYhFo5tAA77Qe_NmtuYZc3C3m3I24G2GvR5sSDxUyAN2zq8Lfn9EUms6rY3Ob8YeiKkTiBj0", + "dq":"s9lAH9fggBsoFR8Oac2R_E2gw282rT2kGOAhvIllETE1efrA6huUUvMfBcMpn8lqeW6vzznYY5SSQF7pMdC_agI3nG8Ibp1BUb0JUiraRNqUfLhcQb_d9GF4Dh7e74WbRsobRonujTYN1xCaP6TO61jvWrX-L18txXw494Q_cgk", + "qi":"GyM_p6JrXySiz1toFgKbWV-JdI3jQ4ypu9rbMWx3rQJBfmt0FoYzgUIZEVFEcOqwemRN81zoDAaa-Bk0KWNGDjJHZDdDmFhW3AN7lI-puxk_mHZGJ11rxyR8O55XLSe3SPmRfKwZI6yU24ZxvQKFYItdldUKGzO6Ia6zTKhAVRU", + "alg":"RS256", + "kid":"2011-04-29" + }` + verify(t, src, &jwk.RSAPrivateKey{}) + }) + t.Run("ECDSA Private Key", func(t *testing.T) { + const src = `{ + "kty" : "EC", + "crv" : "P-256", + "x" : "SVqB4JcUD6lsfvqMr-OKUNUphdNn64Eay60978ZlL74", + "y" : "lf0u0pMj4lGAzZix5u4Cm5CMQIgMNpkwy163wtKYVKI", + "d" : "0g5vAEKzugrXaRbgKG0Tj2qJ5lMP4Bezds1_sTybkfk" + }` + verify(t, src, &jwk.ECDSAPrivateKey{}) + }) + t.Run("Invalid ECDSA Private Key", func(t *testing.T) { + const src = `{ + "kty" : "EC", + "crv" : "P-256", + "y" : "lf0u0pMj4lGAzZix5u4Cm5CMQIgMNpkwy163wtKYVKI", + "d" : "0g5vAEKzugrXaRbgKG0Tj2qJ5lMP4Bezds1_sTybkfk" + }` + _, err := jwk.ParseString(src) + if !assert.Error(t, err, `jwk.ParseString should fail`) { + return + } + }) +} + +func TestRoundtrip(t *testing.T) { + generateRSA := func(use string, keyID string) (jwk.Key, error) { + key, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + return nil, errors.Wrap(err, `failed to generate RSA private key`) + } + + k, err := jwk.New(key) + if err != nil { + return nil, errors.Wrap(err, `failed to generate jwk.RSAPrivateKey`) + } + + k.Set(jwk.KeyUsageKey, use) + k.Set(jwk.KeyIDKey, keyID) + return k, nil + } + + generateECDSA := func(use, keyID string) (jwk.Key, error) { + key, err := 
ecdsa.GenerateKey(elliptic.P521(), rand.Reader) + if err != nil { + return nil, errors.Wrap(err, `failed to generate ECDSA private key`) + } + + k, err := jwk.New(key) + if err != nil { + return nil, errors.Wrap(err, `failed to generate jwk.ECDSAPrivateKey`) + } + + k.Set(jwk.KeyUsageKey, use) + k.Set(jwk.KeyIDKey, keyID) + return k, nil + } + + generateSymmetric := func(use, keyID string) (jwk.Key, error) { + sharedKey := make([]byte, 64) + rand.Read(sharedKey) + + key, err := jwk.New(sharedKey) + if err != nil { + return nil, errors.Wrap(err, `failed to generate jwk.SymmetricKey`) + } + + key.Set(jwk.KeyUsageKey, use) + key.Set(jwk.KeyIDKey, keyID) + return key, nil + } + + tests := []struct { + use string + keyID string + generate func(string, string) (jwk.Key, error) + }{ + { + use: "enc", + keyID: "enc1", + generate: generateRSA, + }, + { + use: "enc", + keyID: "enc2", + generate: generateRSA, + }, + { + use: "sig", + keyID: "sig1", + generate: generateRSA, + }, + { + use: "sig", + keyID: "sig2", + generate: generateRSA, + }, + { + use: "sig", + keyID: "sig3", + generate: generateSymmetric, + }, + { + use: "enc", + keyID: "enc4", + generate: generateECDSA, + }, + { + use: "enc", + keyID: "enc5", + generate: generateECDSA, + }, + { + use: "sig", + keyID: "sig4", + generate: generateECDSA, + }, + { + use: "sig", + keyID: "sig5", + generate: generateECDSA, + }, + } + + var ks1 jwk.Set + for _, tc := range tests { + key, err := tc.generate(tc.use, tc.keyID) + if !assert.NoError(t, err, `tc.generate should succeed`) { + return + } + ks1.Keys = append(ks1.Keys, key) + + } + + buf, err := json.MarshalIndent(ks1, "", " ") + if !assert.NoError(t, err, "JSON marshal succeeded") { + return + } + + ks2, err := jwk.ParseBytes(buf) + if !assert.NoError(t, err, "JSON unmarshal succeeded") { + t.Logf("%s", buf) + return + } + + for _, tc := range tests { + keys := ks2.LookupKeyID(tc.keyID) + if !assert.Len(t, keys, 1, "Should be 1 key") { + return + } + key1 := keys[0] + + 
keys = ks1.LookupKeyID(tc.keyID) + if !assert.Len(t, keys, 1, "Should be 1 key") { + return + } + + key2 := keys[0] + + pk1json, _ := json.Marshal(key1) + pk2json, _ := json.Marshal(key2) + if !assert.Equal(t, pk1json, pk2json, "Keys should match (kid = %s)", tc.keyID) { + return + } + } +} + +/* + +func TestJwksSerializationPadding(t *testing.T) { + x := new(big.Int) + y := new(big.Int) + + e := &EssentialHeader{} + e.KeyType = jwa.EC + x.SetString("123520477547912006148785171019615806128401248503564636913311359802381551887648525354374204836279603443398171853465", 10) + y.SetString("13515585925570416130130241699780319456178918334914981404162640338265336278264431930522217750520011829472589865088261", 10) + pubKey := &ecdsa.PublicKey{ + Curve: elliptic.P384(), + X: x, + Y: y, + } + jwkPubKey := NewEcdsaPublicKey(pubKey) + jwkPubKey.EssentialHeader = e + jwkJSON, err := json.Marshal(jwkPubKey) + if !assert.NoError(t, err, "JWK Marshalled") { + return + } + + _, err = Parse(jwkJSON) + if !assert.NoError(t, err, "JWK Parsed") { + return + } + +} + +func TestRSAPrivateKey(t *testing.T) { + key, err := rsa.GenerateKey(rand.Reader, 2048) + if !assert.NoError(t, err, "RSA key generated") { + return + } + + k1, err := NewRSAPrivateKey(key) + if !assert.NoError(t, err, "JWK RSA key generated") { + return + } + + jsonbuf, err := json.MarshalIndent(k1, "", " ") + if !assert.NoError(t, err, "Marshal to JSON succeeded") { + return + } + + t.Logf("%s", jsonbuf) + + k2 := &RSAPrivateKey{} + if !assert.NoError(t, json.Unmarshal(jsonbuf, k2), "Unmarshal from JSON succeeded") { + return + } + + if !assert.Equal(t, k1, k2, "keys match") { + return + } + + k3, err := Parse(jsonbuf) + if !assert.NoError(t, err, "Parse should succeed") { + return + } + + if !assert.Equal(t, k1, k3.Keys[0], "keys match") { + return + } +} +*/ + +func TestAppendix(t *testing.T) { + t.Run("A1", func(t *testing.T) { + var jwksrc = []byte(`{"keys": + [ + {"kty":"EC", + "crv":"P-256", + 
"x":"MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4", + "y":"4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM", + "use":"enc", + "kid":"1"}, + + {"kty":"RSA", + "n": "0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw", + "e":"AQAB", + "alg":"RS256", + "kid":"2011-04-29"} + ] + }`) + + set, err := jwk.ParseBytes(jwksrc) + if !assert.NoError(t, err, "Parse should succeed") { + return + } + + if !assert.Len(t, set.Keys, 2, "There should be 2 keys") { + return + } + + { + key, ok := set.Keys[0].(*jwk.ECDSAPublicKey) + if !assert.True(t, ok, "set.Keys[0] should be a EcdsaPublicKey") { + return + } + + if !assert.Equal(t, jwa.P256, key.Curve(), "curve is P-256") { + return + } + } + }) + + t.Run("A3", func(t *testing.T) { + const ( + key1 = `GawgguFyGrWKav7AX4VKUg` + key2 = `AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow` + ) + + buf1, err := base64.DecodeString(key1) + if !assert.NoError(t, err, "failed to decode key1") { + return + } + + buf2, err := base64.DecodeString(key2) + if !assert.NoError(t, err, "failed to decode key2") { + return + } + + var jwksrc = []byte(`{"keys": + [ + {"kty":"oct", + "alg":"A128KW", + "k":"` + key1 + `"}, + + {"kty":"oct", + "k":"` + key2 + `", + "kid":"HMAC key used in JWS spec Appendix A.1 example"} + ] + }`) + set, err := jwk.ParseBytes(jwksrc) + if !assert.NoError(t, err, "Parse should succeed") { + return + } + + tests := []struct { + headers map[string]interface{} + key []byte + }{ + { + headers: map[string]interface{}{ + jwk.KeyTypeKey: jwa.OctetSeq, + jwk.AlgorithmKey: jwa.A128KW.String(), + }, + key: buf1, + }, + { + headers: map[string]interface{}{ + jwk.KeyTypeKey: jwa.OctetSeq, + jwk.KeyIDKey: "HMAC key 
used in JWS spec Appendix A.1 example", + }, + key: buf2, + }, + } + + for i, data := range tests { + key, ok := set.Keys[i].(*jwk.SymmetricKey) + if !assert.True(t, ok, "set.Keys[%d] should be a SymmetricKey", i) { + return + } + + ckey, err := key.Materialize() + if !assert.NoError(t, err, "materialized key") { + return + } + + if !assert.Equal(t, data.key, ckey, `key byte sequence should match`) { + return + } + + for k, expected := range data.headers { + t.Run(k, func(t *testing.T) { + if v, ok := key.Get(k); assert.True(t, ok, "getting %s from jwk.Key should succeed", k) { + if !assert.Equal(t, expected, v, "value for %s should match", k) { + return + } + } + }) + } + } + }) + t.Run("B", func(t *testing.T) { + var jwksrc = []byte(`{"keys": + [ + {"kty":"RSA", + "use":"sig", + "kid":"1b94c", + "n":"vrjOfz9Ccdgx5nQudyhdoR17V-IubWMeOZCwX_jj0hgAsz2J_pqYW08PLbK_PdiVGKPrqzmDIsLI7sA25VEnHU1uCLNwBuUiCO11_-7dYbsr4iJmG0Qu2j8DsVyT1azpJC_NG84Ty5KKthuCaPod7iI7w0LK9orSMhBEwwZDCxTWq4aYWAchc8t-emd9qOvWtVMDC2BXksRngh6X5bUYLy6AyHKvj-nUy1wgzjYQDwHMTplCoLtU-o-8SNnZ1tmRoGE9uJkBLdh5gFENabWnU5m1ZqZPdwS-qo-meMvVfJb6jJVWRpl2SUtCnYG2C32qvbWbjZ_jBPD5eunqsIo1vQ", + "e":"AQAB", + "x5c": [ + 
"MIIE3jCCA8agAwIBAgICAwEwDQYJKoZIhvcNAQEFBQAwYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRoZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3MgMiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjExMTYwMTU0MzdaFw0yNjExMTYwMTU0MzdaMIHKMQswCQYDVQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMKU2NvdHRzZGFsZTEaMBgGA1UEChMRR29EYWRkeS5jb20sIEluYy4xMzAxBgNVBAsTKmh0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3NpdG9yeTEwMC4GA1UEAxMnR28gRGFkZHkgU2VjdXJlIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MREwDwYDVQQFEwgwNzk2OTI4NzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMQt1RWMnCZM7DI161+4WQFapmGBWTtwY6vj3D3HKrjJM9N55DrtPDAjhI6zMBS2sofDPZVUBJ7fmd0LJR4h3mUpfjWoqVTr9vcyOdQmVZWt7/v+WIbXnvQAjYwqDL1CBM6nPwT27oDyqu9SoWlm2r4arV3aLGbqGmu75RpRSgAvSMeYddi5Kcju+GZtCpyz8/x4fKL4o/K1w/O5epHBp+YlLpyo7RJlbmr2EkRTcDCVw5wrWCs9CHRK8r5RsL+H0EwnWGu1NcWdrxcx+AuP7q2BNgWJCJjPOq8lh8BJ6qf9Z/dFjpfMFDniNoW1fho3/Rb2cRGadDAW/hOUoz+EDU8CAwEAAaOCATIwggEuMB0GA1UdDgQWBBT9rGEyk2xF1uLuhV+auud2mWjM5zAfBgNVHSMEGDAWgBTSxLDSkdRMEXGzYcs9of7dqGrU4zASBgNVHRMBAf8ECDAGAQH/AgEAMDMGCCsGAQUFBwEBBCcwJTAjBggrBgEFBQcwAYYXaHR0cDovL29jc3AuZ29kYWRkeS5jb20wRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDovL2NlcnRpZmljYXRlcy5nb2RhZGR5LmNvbS9yZXBvc2l0b3J5L2dkcm9vdC5jcmwwSwYDVR0gBEQwQjBABgRVHSAAMDgwNgYIKwYBBQUHAgEWKmh0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3NpdG9yeTAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQEFBQADggEBANKGwOy9+aG2Z+5mC6IGOgRQjhVyrEp0lVPLN8tESe8HkGsz2ZbwlFalEzAFPIUyIXvJxwqoJKSQ3kbTJSMUA2fCENZvD117esyfxVgqwcSeIaha86ykRvOe5GPLL5CkKSkB2XIsKd83ASe8T+5o0yGPwLPk9Qnt0hCqU7S+8MxZC9Y7lhyVJEnfzuz9p0iRFEUOOjZv2kWzRaJBydTXRE4+uXR21aITVSzGh6O1mawGhId/dQb8vxRMDsxuxN89txJx9OjxUUAiKEngHUuHqDTMBqLdElrRhjZkAzVvb3du6/KFUJheqwNTrZEjYx8WnM25sgVjOuH0aBsXBTWVU+4=", + 
"MIIE+zCCBGSgAwIBAgICAQ0wDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENsYXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTA0MDYyOTE3MDYyMFoXDTI0MDYyOTE3MDYyMFowYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRoZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3MgMiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASAwDQYJKoZIhvcNAQEBBQADggENADCCAQgCggEBAN6d1+pXGEmhW+vXX0iG6r7d/+TvZxz0ZWizV3GgXne77ZtJ6XCAPVYYYwhv2vLM0D9/AlQiVBDYsoHUwHU9S3/Hd8M+eKsaA7Ugay9qK7HFiH7Eux6wwdhFJ2+qN1j3hybX2C32qRe3H3I2TqYXP2WYktsqbl2i/ojgC95/5Y0V4evLOtXiEqITLdiOr18SPaAIBQi2XKVlOARFmR6jYGB0xUGlcmIbYsUfb18aQr4CUWWoriMYavx4A6lNf4DD+qta/KFApMoZFv6yyO9ecw3ud72a9nmYvLEHZ6IVDd2gWMZEewo+YihfukEHU1jPEX44dMX4/7VpkI+EdOqXG68CAQOjggHhMIIB3TAdBgNVHQ4EFgQU0sSw0pHUTBFxs2HLPaH+3ahq1OMwgdIGA1UdIwSByjCBx6GBwaSBvjCBuzEkMCIGA1UEBxMbVmFsaUNlcnQgVmFsaWRhdGlvbiBOZXR3b3JrMRcwFQYDVQQKEw5WYWxpQ2VydCwgSW5jLjE1MDMGA1UECxMsVmFsaUNlcnQgQ2xhc3MgMiBQb2xpY3kgVmFsaWRhdGlvbiBBdXRob3JpdHkxITAfBgNVBAMTGGh0dHA6Ly93d3cudmFsaWNlcnQuY29tLzEgMB4GCSqGSIb3DQEJARYRaW5mb0B2YWxpY2VydC5jb22CAQEwDwYDVR0TAQH/BAUwAwEB/zAzBggrBgEFBQcBAQQnMCUwIwYIKwYBBQUHMAGGF2h0dHA6Ly9vY3NwLmdvZGFkZHkuY29tMEQGA1UdHwQ9MDswOaA3oDWGM2h0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3NpdG9yeS9yb290LmNybDBLBgNVHSAERDBCMEAGBFUdIAAwODA2BggrBgEFBQcCARYqaHR0cDovL2NlcnRpZmljYXRlcy5nb2RhZGR5LmNvbS9yZXBvc2l0b3J5MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQUFAAOBgQC1QPmnHfbq/qQaQlpE9xXUhUaJwL6e4+PrxeNYiY+Sn1eocSxI0YGyeR+sBjUZsE4OWBsUs5iB0QQeyAfJg594RAoYC5jcdnplDQ1tgMQLARzLrUc+cb53S8wGd9D0VmsfSxOaFIqII6hR8INMqzW/Rn453HWkrugp++85j09VZw==", + 
"MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENsYXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNjAwMTk1NFoXDTE5MDYyNjAwMTk1NFowgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENsYXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDOOnHK5avIWZJV16vYdA757tn2VUdZZUcOBVXc65g2PFxTXdMwzzjsvUGJ7SVCCSRrCl6zfN1SLUzm1NZ9WlmpZdRJEy0kTRxQb7XBhVQ7/nHk01xC+YDgkRoKWzk2Z/M/VXwbP7RfZHM047QSv4dk+NoS/zcnwbNDu+97bi5p9wIDAQABMA0GCSqGSIb3DQEBBQUAA4GBADt/UG9vUJSZSWI4OB9L+KXIPqeCgfYrx+jFzug6EILLGACOTb2oWH+heQC1u+mNr0HZDzTuIYEZoDJJKPTEjlbVUjP9UNV+mWwD5MlM/Mtsq2azSiGM5bUMMj4QssxsodyamEwCW/POuZ6lcg5Ktz885hZo+L7tdEy8W9ViH0Pd" + ] + }]}`) + + set, err := jwk.ParseBytes(jwksrc) + if !assert.NoError(t, err, "Parse should succeed") { + return + } + if !assert.Len(t, set.Keys, 1, "There should be 1 key") { + return + } + + { + key, ok := set.Keys[0].(*jwk.RSAPublicKey) + if !assert.True(t, ok, "set.Keys[0] should be a jwk.RSAPublicKey") { + return + } + if !assert.Len(t, key.X509CertChain(), 3, "key.X509CertChain should be 3 cert") { + return + } + } + }) +} + +func TestFetch(t *testing.T) { + const jwksrc = `{"keys": + [ + {"kty":"EC", + "crv":"P-256", + "x":"MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4", + "y":"4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM", + "use":"enc", + "kid":"1"}, + + {"kty":"RSA", + "n": 
"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw", + "e":"AQAB", + "alg":"RS256", + "kid":"2011-04-29"} + ] + }` + + verify := func(t *testing.T, set *jwk.Set) { + key, ok := set.Keys[0].(*jwk.ECDSAPublicKey) + if !assert.True(t, ok, "set.Keys[0] should be a EcdsaPublicKey") { + return + } + + if !assert.Equal(t, jwa.P256, key.Curve(), "curve is P-256") { + return + } + } + t.Run("HTTP", func(t *testing.T) { + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + switch r.URL.Path { + case "/": + w.WriteHeader(http.StatusOK) + io.WriteString(w, jwksrc) + default: + w.WriteHeader(http.StatusNotFound) + } + })) + defer srv.Close() + + cl := srv.Client() + + set, err := jwk.Fetch(srv.URL, jwk.WithHTTPClient(cl)) + if !assert.NoError(t, err, `failed to fetch jwk`) { + return + } + verify(t, set) + }) + t.Run("Local File", func(t *testing.T) { + f, err := ioutil.TempFile("", "jwk-fetch-test") + if !assert.NoError(t, err, `failed to generate temporary file`) { + return + } + defer f.Close() + defer os.Remove(f.Name()) + + io.WriteString(f, jwksrc) + f.Sync() + + set, err := jwk.Fetch("file://" + f.Name()) + if !assert.NoError(t, err, `failed to fetch jwk`) { + return + } + verify(t, set) + }) + t.Run("Invalid Scheme", func(t *testing.T) { + set, err := jwk.Fetch("gopher://foo/bar") + if !assert.Nil(t, set, `set should be nil`) { + return + } + if !assert.Error(t, err, `invalid sche,e should be an error`) { + return + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/key_ops.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/key_ops.go new file mode 100644 index 
0000000000..ed3316d3b0 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/key_ops.go @@ -0,0 +1,45 @@ +package jwk + +import "github.com/pkg/errors" + +func (ops *KeyOperationList) Get() KeyOperationList { + if ops == nil { + return nil + } + return *ops +} + +func (ops *KeyOperationList) Accept(v interface{}) error { + switch x := v.(type) { + case string: + return ops.Accept([]string{x}) + case []interface{}: + l := make([]string, len(x)) + for i, e := range x { + if es, ok := e.(string); ok { + l[i] = es + } else { + return errors.Errorf(`invalid list element type: expected string, got %T`, e) + } + } + return ops.Accept(l) + case []string: + list := make([]KeyOperation, len(x)) + for i, e := range x { + switch e := KeyOperation(e); e { + case KeyOpSign, KeyOpVerify, KeyOpEncrypt, KeyOpDecrypt, KeyOpWrapKey, KeyOpUnwrapKey, KeyOpDeriveKey, KeyOpDeriveBits: + list[i] = e + default: + return errors.Errorf(`invalid keyoperation %v`, e) + } + } + + *ops = list + return nil + case KeyOperationList: + *ops = x + return nil + default: + return errors.Errorf(`invalid value %T`, v) + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/option.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/option.go new file mode 100644 index 0000000000..dd55019a90 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/option.go @@ -0,0 +1,21 @@ +package jwk + +import ( + "net/http" + + "github.com/lestrrat-go/jwx/internal/option" +) + +type Option = option.Interface + +const ( + optkeyHTTPClient = `http-client` +) + +type HTTPClient interface { + Get(string) (*http.Response, error) +} + +func WithHTTPClient(cl HTTPClient) Option { + return option.New(optkeyHTTPClient, cl) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/rsa.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/rsa.go new file mode 
100644 index 0000000000..d8f92adc07 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/rsa.go @@ -0,0 +1,311 @@ +package jwk + +import ( + "bytes" + "crypto" + "crypto/rsa" + "encoding/json" + "math/big" + + "github.com/lestrrat-go/jwx/internal/base64" + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +func newRSAPublicKey(key *rsa.PublicKey) (*RSAPublicKey, error) { + if key == nil { + return nil, errors.New(`non-nil rsa.PublicKey required`) + } + + var hdr StandardHeaders + hdr.Set(KeyTypeKey, jwa.RSA) + return &RSAPublicKey{ + headers: &hdr, + key: key, + }, nil +} + +func newRSAPrivateKey(key *rsa.PrivateKey) (*RSAPrivateKey, error) { + if key == nil { + return nil, errors.New(`non-nil rsa.PrivateKey required`) + } + + if len(key.Primes) < 2 { + return nil, errors.New("two primes required for RSA private key") + } + + var hdr StandardHeaders + hdr.Set(KeyTypeKey, jwa.RSA) + return &RSAPrivateKey{ + headers: &hdr, + key: key, + }, nil +} + +func (k RSAPrivateKey) PublicKey() (*RSAPublicKey, error) { + return newRSAPublicKey(&k.key.PublicKey) +} + +func (k *RSAPublicKey) Materialize() (interface{}, error) { + if k.key == nil { + return nil, errors.New(`key has no rsa.PublicKey associated with it`) + } + return k.key, nil +} + +func (k *RSAPrivateKey) Materialize() (interface{}, error) { + if k.key == nil { + return nil, errors.New(`key has no rsa.PrivateKey associated with it`) + } + return k.key, nil +} + +func (k RSAPublicKey) MarshalJSON() (buf []byte, err error) { + + m := map[string]interface{}{} + if err := k.PopulateMap(m); err != nil { + return nil, errors.Wrap(err, `failed to populate public key values`) + } + + return json.Marshal(m) +} + +func (k RSAPublicKey) PopulateMap(m map[string]interface{}) (err error) { + + if err := k.headers.PopulateMap(m); err != nil { + return errors.Wrap(err, `failed to populate header values`) + } + + m[`n`] = base64.EncodeToString(k.key.N.Bytes()) + m[`e`] = 
base64.EncodeUint64ToString(uint64(k.key.E)) + + return nil +} + +func (k *RSAPublicKey) UnmarshalJSON(data []byte) (err error) { + + m := map[string]interface{}{} + if err := json.Unmarshal(data, &m); err != nil { + return errors.Wrap(err, `failed to unmarshal public key`) + } + + if err := k.ExtractMap(m); err != nil { + return errors.Wrap(err, `failed to extract data from map`) + } + return nil +} + +func (k *RSAPublicKey) ExtractMap(m map[string]interface{}) (err error) { + + const ( + eKey = `e` + nKey = `n` + ) + + nbuf, err := getRequiredKey(m, nKey) + if err != nil { + return errors.Wrapf(err, `failed to get required key %s`, nKey) + } + delete(m, nKey) + + ebuf, err := getRequiredKey(m, eKey) + if err != nil { + return errors.Wrapf(err, `failed to get required key %s`, eKey) + } + delete(m, eKey) + + var n, e big.Int + n.SetBytes(nbuf) + e.SetBytes(ebuf) + + var hdrs StandardHeaders + if err := hdrs.ExtractMap(m); err != nil { + return errors.Wrap(err, `failed to extract header values`) + } + + *k = RSAPublicKey{ + headers: &hdrs, + key: &rsa.PublicKey{E: int(e.Int64()), N: &n}, + } + return nil +} + +func (k RSAPrivateKey) MarshalJSON() (buf []byte, err error) { + + m := make(map[string]interface{}) + if err := k.PopulateMap(m); err != nil { + return nil, errors.Wrap(err, `failed to populate private key values`) + } + + return json.Marshal(m) +} + +func (k RSAPrivateKey) PopulateMap(m map[string]interface{}) (err error) { + + const ( + dKey = `d` + pKey = `p` + qKey = `q` + dpKey = `dp` + dqKey = `dq` + qiKey = `qi` + ) + + if err := k.headers.PopulateMap(m); err != nil { + return errors.Wrap(err, `failed to populate header values`) + } + + pubkey, _ := newRSAPublicKey(&k.key.PublicKey) + if err := pubkey.PopulateMap(m); err != nil { + return errors.Wrap(err, `failed to populate public key values`) + } + + m[dKey] = 
base64.EncodeToString(k.key.D.Bytes()) + m[pKey] = base64.EncodeToString(k.key.Primes[0].Bytes()) + m[qKey] = base64.EncodeToString(k.key.Primes[1].Bytes()) + if v := k.key.Precomputed.Dp; v != nil { + m[dpKey] = base64.EncodeToString(v.Bytes()) + } + if v := k.key.Precomputed.Dq; v != nil { + m[dqKey] = base64.EncodeToString(v.Bytes()) + } + if v := k.key.Precomputed.Qinv; v != nil { + m[qiKey] = base64.EncodeToString(v.Bytes()) + } + return nil +} + +func (k *RSAPrivateKey) UnmarshalJSON(data []byte) (err error) { + + m := map[string]interface{}{} + if err := json.Unmarshal(data, &m); err != nil { + return errors.Wrap(err, `failed to unmarshal public key`) + } + + var key RSAPrivateKey + if err := key.ExtractMap(m); err != nil { + return errors.Wrap(err, `failed to extract data from map`) + } + *k = key + + return nil +} + +func (k *RSAPrivateKey) ExtractMap(m map[string]interface{}) (err error) { + + const ( + dKey = `d` + pKey = `p` + qKey = `q` + dpKey = `dp` + dqKey = `dq` + qiKey = `qi` + ) + + dbuf, err := getRequiredKey(m, dKey) + if err != nil { + return errors.Wrap(err, `failed to get required key`) + } + delete(m, dKey) + + pbuf, err := getRequiredKey(m, pKey) + if err != nil { + return errors.Wrap(err, `failed to get required key`) + } + delete(m, pKey) + + qbuf, err := getRequiredKey(m, qKey) + if err != nil { + return errors.Wrap(err, `failed to get required key`) + } + delete(m, qKey) + + var d, q, p big.Int + d.SetBytes(dbuf) + q.SetBytes(qbuf) + p.SetBytes(pbuf) + + var dp, dq, qi *big.Int + + dpbuf, err := getOptionalKey(m, dpKey) + if err == nil { + delete(m, dpKey) + + dp = &big.Int{} + dp.SetBytes(dpbuf) + } + + dqbuf, err := getOptionalKey(m, dqKey) + if err == nil { + delete(m, dqKey) + + dq = &big.Int{} + dq.SetBytes(dqbuf) + } + + qibuf, err := getOptionalKey(m, qiKey) + if err == nil { + delete(m, qiKey) + + qi = &big.Int{} + qi.SetBytes(qibuf) + } + + var pubkey RSAPublicKey + if err := pubkey.ExtractMap(m); err != nil { + return 
errors.Wrap(err, `failed to extract fields for public key`) + } + + materialized, err := pubkey.Materialize() + if err != nil { + return errors.Wrap(err, `failed to materialize RSA public key`) + } + rsaPubkey := materialized.(*rsa.PublicKey) + + var key rsa.PrivateKey + key.PublicKey = *rsaPubkey + key.D = &d + key.Primes = []*big.Int{&p, &q} + + if dp != nil { + key.Precomputed.Dp = dp + } + if dq != nil { + key.Precomputed.Dq = dq + } + if qi != nil { + key.Precomputed.Qinv = qi + } + + *k = RSAPrivateKey{ + headers: pubkey.headers, + key: &key, + } + return nil +} + +// Thumbprint returns the JWK thumbprint using the indicated +// hashing algorithm, according to RFC 7638 +func (k RSAPrivateKey) Thumbprint(hash crypto.Hash) ([]byte, error) { + return rsaThumbprint(hash, &k.key.PublicKey) +} + +func (k RSAPublicKey) Thumbprint(hash crypto.Hash) ([]byte, error) { + return rsaThumbprint(hash, k.key) +} + +func rsaThumbprint(hash crypto.Hash, key *rsa.PublicKey) ([]byte, error) { + var buf bytes.Buffer + buf.WriteString(`{"e":"`) + buf.WriteString(base64.EncodeUint64ToString(uint64(key.E))) + buf.WriteString(`","kty":"RSA","n":"`) + buf.WriteString(base64.EncodeToString(key.N.Bytes())) + buf.WriteString(`"}`) + + h := hash.New() + buf.WriteTo(h) + return h.Sum(nil), nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/rsa_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/rsa_test.go new file mode 100644 index 0000000000..d3d1cb1dc5 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/rsa_test.go @@ -0,0 +1,184 @@ +package jwk_test + +import ( + "crypto" + "crypto/rsa" + "encoding/json" + "strings" + "testing" + + "github.com/lestrrat-go/jwx/jwk" + "github.com/stretchr/testify/assert" +) + +func TestRSA(t *testing.T) { + verify := func(t *testing.T, key jwk.Key) { + t.Helper() + + rsaKey, err := key.Materialize() + if !assert.NoError(t, err, `Materialize() should 
succeed`) { + return + } + + newKey, err := jwk.New(rsaKey) + if !assert.NoError(t, err, `jwk.New should succeed`) { + return + } + + key.Walk(func(k string, v interface{}) error { + return newKey.Set(k, v) + }) + + jsonbuf1, err := json.Marshal(key) + if !assert.NoError(t, err, `json.Marshal should succeed`) { + return + } + + jsonbuf2, err := json.Marshal(newKey) + if !assert.NoError(t, err, `json.Marshal should succeed`) { + return + } + + if !assert.Equal(t, jsonbuf1, jsonbuf2, `generated JSON buffers should match`) { + t.Logf("%s", jsonbuf1) + t.Logf("%s", jsonbuf2) + return + } + } + t.Run("Public Key", func(t *testing.T) { + const src = `{ + "e":"AQAB", + "kty":"RSA", + "n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw" + }` + + var key jwk.RSAPublicKey + if !assert.NoError(t, json.Unmarshal([]byte(src), &key), `json.Unmarshal should succeed`) { + return + } + verify(t, &key) + }) + t.Run("Private Key", func(t *testing.T) { + const src = `{ + "kty":"RSA", + "n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw", + "e":"AQAB", + "d":"X4cTteJY_gn4FYPsXB8rdXix5vwsg1FLN5E3EaG6RJoVH-HLLKD9M7dx5oo7GURknchnrRweUkC7hT5fJLM0WbFAKNLWY2vv7B6NqXSzUvxT0_YSfqijwp3RTzlBaCxWp4doFk5N2o8Gy_nHNKroADIkJ46pRUohsXywbReAdYaMwFs9tv8d_cPVY3i07a3t8MN6TNwm0dSawm9v47UiCl3Sk5ZiG7xojPLu4sbg1U2jx4IBTNBznbJSzFHK66jT8bgkuqsk0GjskDJk19Z4qwjwbsnn4j2WBii3RL-Us2lGVkY8fkFzme1z0HbIkfz0Y6mqnOYtqc0X4jfcKoAC8Q", + 
"p":"83i-7IvMGXoMXCskv73TKr8637FiO7Z27zv8oj6pbWUQyLPQBQxtPVnwD20R-60eTDmD2ujnMt5PoqMrm8RfmNhVWDtjjMmCMjOpSXicFHj7XOuVIYQyqVWlWEh6dN36GVZYk93N8Bc9vY41xy8B9RzzOGVQzXvNEvn7O0nVbfs", + "q":"3dfOR9cuYq-0S-mkFLzgItgMEfFzB2q3hWehMuG0oCuqnb3vobLyumqjVZQO1dIrdwgTnCdpYzBcOfW5r370AFXjiWft_NGEiovonizhKpo9VVS78TzFgxkIdrecRezsZ-1kYd_s1qDbxtkDEgfAITAG9LUnADun4vIcb6yelxk", + "dp":"G4sPXkc6Ya9y8oJW9_ILj4xuppu0lzi_H7VTkS8xj5SdX3coE0oimYwxIi2emTAue0UOa5dpgFGyBJ4c8tQ2VF402XRugKDTP8akYhFo5tAA77Qe_NmtuYZc3C3m3I24G2GvR5sSDxUyAN2zq8Lfn9EUms6rY3Ob8YeiKkTiBj0", + "dq":"s9lAH9fggBsoFR8Oac2R_E2gw282rT2kGOAhvIllETE1efrA6huUUvMfBcMpn8lqeW6vzznYY5SSQF7pMdC_agI3nG8Ibp1BUb0JUiraRNqUfLhcQb_d9GF4Dh7e74WbRsobRonujTYN1xCaP6TO61jvWrX-L18txXw494Q_cgk", + "qi":"GyM_p6JrXySiz1toFgKbWV-JdI3jQ4ypu9rbMWx3rQJBfmt0FoYzgUIZEVFEcOqwemRN81zoDAaa-Bk0KWNGDjJHZDdDmFhW3AN7lI-puxk_mHZGJ11rxyR8O55XLSe3SPmRfKwZI6yU24ZxvQKFYItdldUKGzO6Ia6zTKhAVRU", + "alg":"RS256", + "kid":"2011-04-29" + }` + var key jwk.RSAPrivateKey + if !assert.NoError(t, json.Unmarshal([]byte(src), &key), `json.Unmarshal should succeed`) { + return + } + verify(t, &key) + }) + t.Run("Private Key", func(t *testing.T) { + s := `{"keys": + [ + {"kty":"RSA", + "n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw", + "e":"AQAB", + "d":"X4cTteJY_gn4FYPsXB8rdXix5vwsg1FLN5E3EaG6RJoVH-HLLKD9M7dx5oo7GURknchnrRweUkC7hT5fJLM0WbFAKNLWY2vv7B6NqXSzUvxT0_YSfqijwp3RTzlBaCxWp4doFk5N2o8Gy_nHNKroADIkJ46pRUohsXywbReAdYaMwFs9tv8d_cPVY3i07a3t8MN6TNwm0dSawm9v47UiCl3Sk5ZiG7xojPLu4sbg1U2jx4IBTNBznbJSzFHK66jT8bgkuqsk0GjskDJk19Z4qwjwbsnn4j2WBii3RL-Us2lGVkY8fkFzme1z0HbIkfz0Y6mqnOYtqc0X4jfcKoAC8Q", + 
"p":"83i-7IvMGXoMXCskv73TKr8637FiO7Z27zv8oj6pbWUQyLPQBQxtPVnwD20R-60eTDmD2ujnMt5PoqMrm8RfmNhVWDtjjMmCMjOpSXicFHj7XOuVIYQyqVWlWEh6dN36GVZYk93N8Bc9vY41xy8B9RzzOGVQzXvNEvn7O0nVbfs", + "q":"3dfOR9cuYq-0S-mkFLzgItgMEfFzB2q3hWehMuG0oCuqnb3vobLyumqjVZQO1dIrdwgTnCdpYzBcOfW5r370AFXjiWft_NGEiovonizhKpo9VVS78TzFgxkIdrecRezsZ-1kYd_s1qDbxtkDEgfAITAG9LUnADun4vIcb6yelxk", + "dp":"G4sPXkc6Ya9y8oJW9_ILj4xuppu0lzi_H7VTkS8xj5SdX3coE0oimYwxIi2emTAue0UOa5dpgFGyBJ4c8tQ2VF402XRugKDTP8akYhFo5tAA77Qe_NmtuYZc3C3m3I24G2GvR5sSDxUyAN2zq8Lfn9EUms6rY3Ob8YeiKkTiBj0", + "dq":"s9lAH9fggBsoFR8Oac2R_E2gw282rT2kGOAhvIllETE1efrA6huUUvMfBcMpn8lqeW6vzznYY5SSQF7pMdC_agI3nG8Ibp1BUb0JUiraRNqUfLhcQb_d9GF4Dh7e74WbRsobRonujTYN1xCaP6TO61jvWrX-L18txXw494Q_cgk", + "qi":"GyM_p6JrXySiz1toFgKbWV-JdI3jQ4ypu9rbMWx3rQJBfmt0FoYzgUIZEVFEcOqwemRN81zoDAaa-Bk0KWNGDjJHZDdDmFhW3AN7lI-puxk_mHZGJ11rxyR8O55XLSe3SPmRfKwZI6yU24ZxvQKFYItdldUKGzO6Ia6zTKhAVRU", + "alg":"RS256", + "kid":"2011-04-29"} + ] + }` + set, err := jwk.ParseString(s) + if !assert.NoError(t, err, "Parsing private key is successful") { + return + } + + rsakey, ok := set.Keys[0].(*jwk.RSAPrivateKey) + if !assert.True(t, ok, "Type assertion for RSAPrivateKey is successful") { + return + } + + var privkey *rsa.PrivateKey + var pubkey *rsa.PublicKey + + { + pkey, err := rsakey.PublicKey() + if !assert.NoError(t, err, "rsakey.PublickKey is successful") { + return + } + + mkey, err := pkey.Materialize() + if !assert.NoError(t, err, "RSAPublickKey.Materialize is successful") { + return + } + var ok bool + pubkey, ok = mkey.(*rsa.PublicKey) + if !assert.True(t, ok, "Materialized key is a *rsa.PublicKey") { + return + } + } + + if !assert.NotEmpty(t, pubkey.N, "N exists") { + return + } + + if !assert.NotEmpty(t, pubkey.E, "E exists") { + return + } + + { + mkey, err := rsakey.Materialize() + if !assert.NoError(t, err, "RSAPrivateKey.Materialize is successful") { + return + } + var ok bool + privkey, ok = mkey.(*rsa.PrivateKey) + if !assert.True(t, ok, "Materialized key 
is a *rsa.PrivateKey") { + return + } + } + + if !assert.NotEmpty(t, privkey.Precomputed.Dp, "Dp exists") { + return + } + + if !assert.NotEmpty(t, privkey.Precomputed.Dq, "Dq exists") { + return + } + + if !assert.NotEmpty(t, privkey.Precomputed.Qinv, "Qinv exists") { + return + } + }) + t.Run("Thumbprint", func(t *testing.T) { + expected := []byte{55, 54, 203, 177, 120, 124, 184, 48, 156, 119, 238, + 140, 55, 5, 197, 225, 111, 251, 158, 133, 151, 21, 144, 31, 30, 76, 89, + 177, 17, 130, 245, 123, + } + const src = `{ + "kty":"RSA", + "e": "AQAB", + "n": "0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw" + }` + + var key jwk.RSAPublicKey + if err := json.NewDecoder(strings.NewReader(src)).Decode(&key); !assert.NoError(t, err, `json.Unmarshal should succeed`) { + return + } + + tp, err := key.Thumbprint(crypto.SHA256) + if !assert.NoError(t, err, "Thumbprint should succeed") { + return + } + + if !assert.Equal(t, expected, tp, "Thumbprint should match") { + return + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/symmetric.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/symmetric.go new file mode 100644 index 0000000000..59e5132e94 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwk/symmetric.go @@ -0,0 +1,87 @@ +package jwk + +import ( + "crypto" + "encoding/json" + "fmt" + "github.com/lestrrat-go/jwx/internal/base64" + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +func newSymmetricKey(key []byte) (*SymmetricKey, error) { + if len(key) == 0 { + return nil, errors.New(`non-empty []byte key required`) + } + + var hdr StandardHeaders + hdr.Set(KeyTypeKey, 
jwa.OctetSeq) + return &SymmetricKey{ + headers: &hdr, + key: key, + }, nil +} + +// Materialize returns the octets for this symmetric key. +// Since this is a symmetric key, this just calls Octets +func (s SymmetricKey) Materialize() (interface{}, error) { + return s.Octets(), nil +} + +// Octets returns the octets in the key +func (s SymmetricKey) Octets() []byte { + return s.key +} + +// Thumbprint returns the JWK thumbprint using the indicated +// hashing algorithm, according to RFC 7638 +func (s SymmetricKey) Thumbprint(hash crypto.Hash) ([]byte, error) { + h := hash.New() + fmt.Fprintf(h, `{"k":"`) + fmt.Fprint(h, base64.EncodeToString(s.key)) + fmt.Fprintf(h, `","kty":"oct"}`) + return h.Sum(nil), nil +} + +func (s *SymmetricKey) ExtractMap(m map[string]interface{}) (err error) { + + const kKey = `k` + + kbuf, err := getRequiredKey(m, kKey) + if err != nil { + return errors.Wrapf(err, `failed to get required key '%s'`, kKey) + } + delete(m, kKey) + + var hdrs StandardHeaders + if err := hdrs.ExtractMap(m); err != nil { + return errors.Wrap(err, `failed to extract header values`) + } + + *s = SymmetricKey{ + headers: &hdrs, + key: kbuf, + } + return nil +} + +func (s SymmetricKey) MarshalJSON() (buf []byte, err error) { + + m := make(map[string]interface{}) + if err := s.PopulateMap(m); err != nil { + return nil, errors.Wrap(err, `failed to populate symmetric key values`) + } + + return json.Marshal(m) +} + +func (s SymmetricKey) PopulateMap(m map[string]interface{}) (err error) { + + if err := s.headers.PopulateMap(m); err != nil { + return errors.Wrap(err, `failed to populate header values`) + } + + const kKey = `k` + m[kKey] = base64.EncodeToString(s.key) + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/doc_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/doc_test.go new file mode 100644 index 0000000000..65de2e3c2a --- /dev/null +++ 
b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/doc_test.go @@ -0,0 +1,63 @@ +package jws_test + +import ( + "crypto/rand" + "crypto/rsa" + "log" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jws" +) + +func ExampleSign_JWSCompact() { + privkey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + log.Printf("failed to create private key: %s", err) + return + } + + buf, err := jws.Sign([]byte("Lorem ipsum"), jwa.RS256, privkey) + if err != nil { + log.Printf("failed to sign payload: %s", err) + return + } + + log.Printf("%s", buf) + + verified, err := jws.Verify(buf, jwa.RS256, &privkey.PublicKey) + if err != nil { + log.Printf("failed to verify JWS message: %s", err) + return + } + log.Printf("message verified!") + + // Do something with `verified` .... + _ = verified +} + +func ExampleSign_JWSJSON() { + key, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + log.Printf("failed to create private key: %s", err) + return + } + + payload := "Lorem ipsum" + + //TODO fix formatter + buf, err := jws.Sign([]byte(payload), jwa.RS256, key) + if err != nil { + log.Printf("failed to sign payload: %s", err) + return + } + + verified, err := jws.Verify(buf, jwa.RS256, &key.PublicKey) + if err != nil { + log.Printf("failed to verify JWS message: %s", err) + return + } + log.Printf("message verified!") + + // Do something with `verified` .... + _ = verified +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/headers.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/headers.go new file mode 100644 index 0000000000..37b54365f6 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/headers.go @@ -0,0 +1,198 @@ +// This file is auto-generated. 
DO NOT EDIT +package jws + +import ( + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" + "github.com/pkg/errors" +) + +const ( + AlgorithmKey = "alg" + ContentTypeKey = "cty" + CriticalKey = "crit" + JWKKey = "jwk" + JWKSetURLKey = "jku" + KeyIDKey = "kid" + TypeKey = "typ" + X509CertChainKey = "x5c" + X509CertThumbprintKey = "x5t" + X509CertThumbprintS256Key = "x5t#S256" + X509URLKey = "x5u" +) + +type Headers interface { + Get(string) (interface{}, bool) + Set(string, interface{}) error + Algorithm() jwa.SignatureAlgorithm +} + +type StandardHeaders struct { + JWSalgorithm jwa.SignatureAlgorithm `json:"alg,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.1 + JWScontentType string `json:"cty,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.10 + JWScritical []string `json:"crit,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.11 + JWSjwk *jwk.Set `json:"jwk,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.3 + JWSjwkSetURL string `json:"jku,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.2 + JWSkeyID string `json:"kid,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.4 + JWStyp string `json:"typ,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.9 + JWSx509CertChain []string `json:"x5c,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.6 + JWSx509CertThumbprint string `json:"x5t,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.7 + JWSx509CertThumbprintS256 string `json:"x5t#S256,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.8 + JWSx509URL string `json:"x5u,omitempty"` // https://tools.ietf.org/html/rfc7515#section-4.1.5 + privateParams map[string]interface{} +} + +func (h *StandardHeaders) Algorithm() jwa.SignatureAlgorithm { + return h.JWSalgorithm +} + +func (h *StandardHeaders) Get(name string) (interface{}, bool) { + switch name { + case AlgorithmKey: + v := h.JWSalgorithm + if v == "" { + return nil, 
false + } + return v, true + case ContentTypeKey: + v := h.JWScontentType + if v == "" { + return nil, false + } + return v, true + case CriticalKey: + v := h.JWScritical + if len(v) == 0 { + return nil, false + } + return v, true + case JWKKey: + v := h.JWSjwk + if v == nil { + return nil, false + } + return v, true + case JWKSetURLKey: + v := h.JWSjwkSetURL + if v == "" { + return nil, false + } + return v, true + case KeyIDKey: + v := h.JWSkeyID + if v == "" { + return nil, false + } + return v, true + case TypeKey: + v := h.JWStyp + if v == "" { + return nil, false + } + return v, true + case X509CertChainKey: + v := h.JWSx509CertChain + if len(v) == 0 { + return nil, false + } + return v, true + case X509CertThumbprintKey: + v := h.JWSx509CertThumbprint + if v == "" { + return nil, false + } + return v, true + case X509CertThumbprintS256Key: + v := h.JWSx509CertThumbprintS256 + if v == "" { + return nil, false + } + return v, true + case X509URLKey: + v := h.JWSx509URL + if v == "" { + return nil, false + } + return v, true + default: + v, ok := h.privateParams[name] + return v, ok + } +} + +func (h *StandardHeaders) Set(name string, value interface{}) error { + switch name { + case AlgorithmKey: + if err := h.JWSalgorithm.Accept(value); err != nil { + return errors.Wrapf(err, `invalid value for %s key`, AlgorithmKey) + } + return nil + case ContentTypeKey: + if v, ok := value.(string); ok { + h.JWScontentType = v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, ContentTypeKey, value) + case CriticalKey: + if v, ok := value.([]string); ok { + h.JWScritical = v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, CriticalKey, value) + case JWKKey: + v, ok := value.(*jwk.Set) + if ok { + h.JWSjwk = v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, JWKKey, value) + case JWKSetURLKey: + if v, ok := value.(string); ok { + h.JWSjwkSetURL = v + return nil + } + return errors.Errorf(`invalid value for 
%s key: %T`, JWKSetURLKey, value) + case KeyIDKey: + if v, ok := value.(string); ok { + h.JWSkeyID = v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, KeyIDKey, value) + case TypeKey: + if v, ok := value.(string); ok { + h.JWStyp = v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, TypeKey, value) + case X509CertChainKey: + if v, ok := value.([]string); ok { + h.JWSx509CertChain = v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, X509CertChainKey, value) + case X509CertThumbprintKey: + if v, ok := value.(string); ok { + h.JWSx509CertThumbprint = v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, X509CertThumbprintKey, value) + case X509CertThumbprintS256Key: + if v, ok := value.(string); ok { + h.JWSx509CertThumbprintS256 = v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, X509CertThumbprintS256Key, value) + case X509URLKey: + if v, ok := value.(string); ok { + h.JWSx509URL = v + return nil + } + return errors.Errorf(`invalid value for %s key: %T`, X509URLKey, value) + default: + if h.privateParams == nil { + h.privateParams = map[string]interface{}{} + } + h.privateParams[name] = value + } + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/headers_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/headers_test.go new file mode 100644 index 0000000000..34e068e994 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/headers_test.go @@ -0,0 +1,128 @@ +package jws_test + +import ( + "encoding/json" + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" + "github.com/lestrrat-go/jwx/jws" + "reflect" + "testing" +) + +func TestHeader(t *testing.T) { + publicKey := `{"kty":"RSA", + "n": 
"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw", + "e":"AQAB", + "alg":"RS256", + "kid":"2011-04-29"}` + jwkPublicKeySet, err := jwk.ParseString(publicKey) + if err != nil { + t.Fatal("Failed to parse RSA public key") + } + certChain := []string{ + "MIIE3jCCA8agAwIBAgICAwEwDQYJKoZIhvcNAQEFBQAwYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRoZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3MgMiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjExMTYwMTU0MzdaFw0yNjExMTYwMTU0MzdaMIHKMQswCQYDVQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMKU2NvdHRzZGFsZTEaMBgGA1UEChMRR29EYWRkeS5jb20sIEluYy4xMzAxBgNVBAsTKmh0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3NpdG9yeTEwMC4GA1UEAxMnR28gRGFkZHkgU2VjdXJlIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MREwDwYDVQQFEwgwNzk2OTI4NzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMQt1RWMnCZM7DI161+4WQFapmGBWTtwY6vj3D3HKrjJM9N55DrtPDAjhI6zMBS2sofDPZVUBJ7fmd0LJR4h3mUpfjWoqVTr9vcyOdQmVZWt7/v+WIbXnvQAjYwqDL1CBM6nPwT27oDyqu9SoWlm2r4arV3aLGbqGmu75RpRSgAvSMeYddi5Kcju+GZtCpyz8/x4fKL4o/K1w/O5epHBp+YlLpyo7RJlbmr2EkRTcDCVw5wrWCs9CHRK8r5RsL+H0EwnWGu1NcWdrxcx+AuP7q2BNgWJCJjPOq8lh8BJ6qf9Z/dFjpfMFDniNoW1fho3/Rb2cRGadDAW/hOUoz+EDU8CAwEAAaOCATIwggEuMB0GA1UdDgQWBBT9rGEyk2xF1uLuhV+auud2mWjM5zAfBgNVHSMEGDAWgBTSxLDSkdRMEXGzYcs9of7dqGrU4zASBgNVHRMBAf8ECDAGAQH/AgEAMDMGCCsGAQUFBwEBBCcwJTAjBggrBgEFBQcwAYYXaHR0cDovL29jc3AuZ29kYWRkeS5jb20wRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDovL2NlcnRpZmljYXRlcy5nb2RhZGR5LmNvbS9yZXBvc2l0b3J5L2dkcm9vdC5jcmwwSwYDVR0gBEQwQjBABgRVHSAAMDgwNgYIKwYBBQUHAgEWKmh0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3NpdG9yeTAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQEFBQADggEBANKGwOy9+aG2Z+5mC6IGOgRQjhVyrEp0lVPLN8tESe8HkGsz2ZbwlFalEzAFPIUyIXvJxwqoJKSQ3kbTJSMUA2fCENZvD117esyfxVgqwcSeIaha86ykRvOe5GPLL5CkK
SkB2XIsKd83ASe8T+5o0yGPwLPk9Qnt0hCqU7S+8MxZC9Y7lhyVJEnfzuz9p0iRFEUOOjZv2kWzRaJBydTXRE4+uXR21aITVSzGh6O1mawGhId/dQb8vxRMDsxuxN89txJx9OjxUUAiKEngHUuHqDTMBqLdElrRhjZkAzVvb3du6/KFUJheqwNTrZEjYx8WnM25sgVjOuH0aBsXBTWVU+4=", + "MIIE+zCCBGSgAwIBAgICAQ0wDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENsYXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTA0MDYyOTE3MDYyMFoXDTI0MDYyOTE3MDYyMFowYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRoZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3MgMiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASAwDQYJKoZIhvcNAQEBBQADggENADCCAQgCggEBAN6d1+pXGEmhW+vXX0iG6r7d/+TvZxz0ZWizV3GgXne77ZtJ6XCAPVYYYwhv2vLM0D9/AlQiVBDYsoHUwHU9S3/Hd8M+eKsaA7Ugay9qK7HFiH7Eux6wwdhFJ2+qN1j3hybX2C32qRe3H3I2TqYXP2WYktsqbl2i/ojgC95/5Y0V4evLOtXiEqITLdiOr18SPaAIBQi2XKVlOARFmR6jYGB0xUGlcmIbYsUfb18aQr4CUWWoriMYavx4A6lNf4DD+qta/KFApMoZFv6yyO9ecw3ud72a9nmYvLEHZ6IVDd2gWMZEewo+YihfukEHU1jPEX44dMX4/7VpkI+EdOqXG68CAQOjggHhMIIB3TAdBgNVHQ4EFgQU0sSw0pHUTBFxs2HLPaH+3ahq1OMwgdIGA1UdIwSByjCBx6GBwaSBvjCBuzEkMCIGA1UEBxMbVmFsaUNlcnQgVmFsaWRhdGlvbiBOZXR3b3JrMRcwFQYDVQQKEw5WYWxpQ2VydCwgSW5jLjE1MDMGA1UECxMsVmFsaUNlcnQgQ2xhc3MgMiBQb2xpY3kgVmFsaWRhdGlvbiBBdXRob3JpdHkxITAfBgNVBAMTGGh0dHA6Ly93d3cudmFsaWNlcnQuY29tLzEgMB4GCSqGSIb3DQEJARYRaW5mb0B2YWxpY2VydC5jb22CAQEwDwYDVR0TAQH/BAUwAwEB/zAzBggrBgEFBQcBAQQnMCUwIwYIKwYBBQUHMAGGF2h0dHA6Ly9vY3NwLmdvZGFkZHkuY29tMEQGA1UdHwQ9MDswOaA3oDWGM2h0dHA6Ly9jZXJ0aWZpY2F0ZXMuZ29kYWRkeS5jb20vcmVwb3NpdG9yeS9yb290LmNybDBLBgNVHSAERDBCMEAGBFUdIAAwODA2BggrBgEFBQcCARYqaHR0cDovL2NlcnRpZmljYXRlcy5nb2RhZGR5LmNvbS9yZXBvc2l0b3J5MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQUFAAOBgQC1QPmnHfbq/qQaQlpE9xXUhUaJwL6e4+PrxeNYiY+Sn1eocSxI0YGyeR+sBjUZsE4OWBsUs5iB0QQeyAfJg594RAoYC5jcdnplDQ1tgMQLARzLrUc+cb53S8wGd9D0VmsfSxOaFIqII6hR8INMqzW/Rn453HWkrugp++85j09VZw==", + 
"MIIC5zCCAlACAQEwDQYJKoZIhvcNAQEFBQAwgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENsYXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMB4XDTk5MDYyNjAwMTk1NFoXDTE5MDYyNjAwMTk1NFowgbsxJDAiBgNVBAcTG1ZhbGlDZXJ0IFZhbGlkYXRpb24gTmV0d29yazEXMBUGA1UEChMOVmFsaUNlcnQsIEluYy4xNTAzBgNVBAsTLFZhbGlDZXJ0IENsYXNzIDIgUG9saWN5IFZhbGlkYXRpb24gQXV0aG9yaXR5MSEwHwYDVQQDExhodHRwOi8vd3d3LnZhbGljZXJ0LmNvbS8xIDAeBgkqhkiG9w0BCQEWEWluZm9AdmFsaWNlcnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDOOnHK5avIWZJV16vYdA757tn2VUdZZUcOBVXc65g2PFxTXdMwzzjsvUGJ7SVCCSRrCl6zfN1SLUzm1NZ9WlmpZdRJEy0kTRxQb7XBhVQ7/nHk01xC+YDgkRoKWzk2Z/M/VXwbP7RfZHM047QSv4dk+NoS/zcnwbNDu+97bi5p9wIDAQABMA0GCSqGSIb3DQEBBQUAA4GBADt/UG9vUJSZSWI4OB9L+KXIPqeCgfYrx+jFzug6EILLGACOTb2oWH+heQC1u+mNr0HZDzTuIYEZoDJJKPTEjlbVUjP9UNV+mWwD5MlM/Mtsq2azSiGM5bUMMj4QssxsodyamEwCW/POuZ6lcg5Ktz885hZo+L7tdEy8W9ViH0Pd", + } + values := map[string]interface{}{ + jws.AlgorithmKey: jwa.ES256, + jws.ContentTypeKey: "example", + jws.CriticalKey: []string{"exp"}, + jws.JWKKey: jwkPublicKeySet, + jws.JWKSetURLKey: "https://www.jwk.com/key.json", + jws.TypeKey: "JWT", + jws.KeyIDKey: "e9bc097a-ce51-4036-9562-d2ade882db0d", + jws.X509CertChainKey: certChain, + jws.X509CertThumbprintKey: "QzY0NjREMjkyQTI4RTU2RkE4MUJBRDExNzY1MUY1N0I4QjFCODlBOQ", + jws.X509URLKey: "https://www.x509.com/key.pem", + } + t.Run("Roundtrip", func(t *testing.T) { + + var h jws.StandardHeaders + for k, v := range values { + err := h.Set(k, v) + if err != nil { + t.Fatalf("Set failed for %s", k) + } + got, ok := h.Get(k) + if !ok { + t.Fatalf("Set failed for %s", k) + } + //fmt.Println(reflect.TypeOf(got).String()) + //fmt.Println(reflect.TypeOf(v).String()) + if !reflect.DeepEqual(v, got) { + t.Fatalf("Values do not match: (%v, %v)", v, got) + } + } + }) + t.Run("JSON Marshal Unmarshal", func(t *testing.T) { + + var h 
jws.StandardHeaders + for k, v := range values { + err := h.Set(k, v) + if err != nil { + t.Fatalf("Set failed for %s", k) + } + got, ok := h.Get(k) + if !ok { + t.Fatalf("Set failed for %s", k) + } + if !reflect.DeepEqual(v, got) { + t.Fatalf("Values do not match: (%v, %v)", v, got) + } + } + hByte, err := json.Marshal(h) + if err != nil { + t.Fatal("Failed to JSON marshal") + } + var hNew jws.StandardHeaders + err = json.Unmarshal(hByte, &hNew) + if err != nil { + t.Fatal("Failed to JSON marshal") + } + }) + t.Run("RoundtripError", func(t *testing.T) { + + type dummyStruct struct { + dummy1 int + dummy2 float64 + } + dummy := &dummyStruct{1, 3.4} + + values := map[string]interface{}{ + jws.AlgorithmKey: dummy, + jws.ContentTypeKey: dummy, + jws.CriticalKey: dummy, + jws.JWKKey: dummy, + jws.JWKSetURLKey: dummy, + jws.KeyIDKey: dummy, + jws.TypeKey: dummy, + jws.X509CertChainKey: dummy, + jws.X509CertThumbprintKey: dummy, + jws.X509CertThumbprintS256Key: dummy, + jws.X509URLKey: dummy, + } + + var h jws.StandardHeaders + for k, v := range values { + err := h.Set(k, v) + if err == nil { + t.Fatalf("Setting %s value should have failed", k) + } + } + err := h.Set("default", dummy) // private params + if err != nil { + t.Fatalf("Setting %s value failed", "default") + } + for k, _ := range values { + _, ok := h.Get(k) + if ok { + t.Fatalf("Getting %s value should have failed", k) + } + } + _, ok := h.Get("default") + if !ok { + t.Fatal("Failed to get default value") + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/interface.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/interface.go new file mode 100644 index 0000000000..60eb1c7b01 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/interface.go @@ -0,0 +1,91 @@ +package jws + +import ( + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" +) + +type EncodedSignature struct { + Protected string 
`json:"protected,omitempty"` + Headers Headers `json:"header,omitempty"` + Signature string `json:"signature,omitempty"` +} + +type EncodedSignatureUnmarshalProxy struct { + Protected string `json:"protected,omitempty"` + Headers *StandardHeaders `json:"header,omitempty"` + Signature string `json:"signature,omitempty"` +} + +type EncodedMessage struct { + Payload string `json:"payload"` + Signatures []*EncodedSignature `json:"signatures,omitempty"` +} + +type EncodedMessageUnmarshalProxy struct { + Payload string `json:"payload"` + Signatures []*EncodedSignatureUnmarshalProxy `json:"signatures,omitempty"` +} + +type FullEncodedMessage struct { + *EncodedSignature // embedded to pick up flattened JSON message + *EncodedMessage +} + +type FullEncodedMessageUnmarshalProxy struct { + *EncodedSignatureUnmarshalProxy // embedded to pick up flattened JSON message + *EncodedMessageUnmarshalProxy +} + +// PayloadSigner generates signature for the given payload +type PayloadSigner interface { + Sign([]byte) ([]byte, error) + Algorithm() jwa.SignatureAlgorithm + ProtectedHeader() Headers + PublicHeader() Headers +} + +// Message represents a full JWS encoded message. Flattened serialization +// is not supported as a struct, but rather it's represented as a +// Message struct with only one `signature` element. +// +// Do not expect to use the Message object to verify or construct +// signed payloads with it. You should only use this when you actually +// want to programmatically view the contents for the full JWS payload. 
+// +// To sign and verify, use the appropriate `Sign()` and `Verify()` functions +type Message struct { + payload []byte `json:"payload"` + signatures []*Signature `json:"signatures,omitempty"` +} + +type Signature struct { + headers Headers `json:"header,omitempty"` // Unprotected Headers + protected Headers `json:"protected,omitempty"` // Protected Headers + signature []byte `json:"signature,omitempty"` // Signature +} + +// JWKAcceptor decides which keys can be accepted +// by functions that iterate over a JWK key set. +type JWKAcceptor interface { + Accept(jwk.Key) bool +} + +// JWKAcceptFunc is an implementation of JWKAcceptor +// using a plain function +type JWKAcceptFunc func(jwk.Key) bool + +// Accept executes the provided function to determine if the +// given key can be used +func (f JWKAcceptFunc) Accept(key jwk.Key) bool { + return f(key) +} + +// DefaultJWKAcceptor is the default acceptor that is used +// in functions like VerifyWithJWKSet +var DefaultJWKAcceptor = JWKAcceptFunc(func(key jwk.Key) bool { + if u := key.KeyUsage(); u != "" && u != "enc" && u != "sig" { + return false + } + return true +}) diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/internal/cmd/genheader/main.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/internal/cmd/genheader/main.go new file mode 100644 index 0000000000..68f4e50c5e --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/internal/cmd/genheader/main.go @@ -0,0 +1,268 @@ +package main + +import ( + "bytes" + "fmt" + "go/format" + "log" + "os" + "sort" + "strconv" + "strings" + + "github.com/pkg/errors" +) + +func main() { + if err := _main(); err != nil { + log.Printf("%s", err) + os.Exit(1) + } +} + +func _main() error { + return generateHeaders() +} + +type headerField struct { + name string + method string + typ string + key string + comment string + hasAccept bool + noDeref bool + jsonTag string +} + +func (f 
headerField) IsPointer() bool { + return strings.HasPrefix(f.typ, "*") +} + +func (f headerField) PointerElem() string { + return strings.TrimPrefix(f.typ, "*") +} + +var zerovals = map[string]string{ + "string": `""`, + "jwa.SignatureAlgorithm": `""`, + "[]string": "0", +} + +func zeroval(s string) string { + if v, ok := zerovals[s]; ok { + return v + } + return "nil" +} + +func generateHeaders() error { + fields := []headerField{ + { + name: `JWSalgorithm`, + method: `Algorithm`, + typ: `jwa.SignatureAlgorithm`, + key: `alg`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.1`, + hasAccept: true, + jsonTag: "`" + `json:"alg,omitempty"` + "`", + }, + { + name: `JWScontentType`, + method: `ContentType`, + typ: `string`, + key: `cty`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.10`, + jsonTag: "`" + `json:"cty,omitempty"` + "`", + }, + { + name: `JWScritical`, + method: `Critical`, + typ: `[]string`, + key: `crit`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.11`, + jsonTag: "`" + `json:"crit,omitempty"` + "`", + }, + { + name: `JWSjwk`, + method: `JWK`, + typ: `*jwk.Set`, + key: `jwk`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.3`, + jsonTag: "`" + `json:"jwk,omitempty"` + "`", + }, + { + name: `JWSjwkSetURL`, + method: `JWKSetURL`, + typ: `string`, + key: `jku`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.2`, + jsonTag: "`" + `json:"jku,omitempty"` + "`", + }, + { + name: `JWSkeyID`, + method: `KeyID`, + typ: `string`, + key: `kid`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.4`, + jsonTag: "`" + `json:"kid,omitempty"` + "`", + }, + { + name: `JWStyp`, + method: `Type`, + typ: `string`, + key: `typ`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.9`, + jsonTag: "`" + `json:"typ,omitempty"` + "`", + }, + { + name: `JWSx509CertChain`, + method: `X509CertChain`, + typ: `[]string`, + key: `x5c`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.6`, + 
jsonTag: "`" + `json:"x5c,omitempty"` + "`", + }, + { + name: `JWSx509CertThumbprint`, + method: `X509CertThumbprint`, + typ: `string`, + key: `x5t`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.7`, + jsonTag: "`" + `json:"x5t,omitempty"` + "`", + }, + { + name: `JWSx509CertThumbprintS256`, + method: `X509CertThumbprintS256`, + typ: `string`, + key: `x5t#S256`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.8`, + jsonTag: "`" + `json:"x5t#S256,omitempty"` + "`", + }, + { + name: `JWSx509URL`, + method: `X509URL`, + typ: `string`, + key: `x5u`, + comment: `https://tools.ietf.org/html/rfc7515#section-4.1.5`, + jsonTag: "`" + `json:"x5u,omitempty"` + "`", + }, + } + + sort.Slice(fields, func(i, j int) bool { + return fields[i].name < fields[j].name + }) + + var buf bytes.Buffer + + fmt.Fprintf(&buf, "\n// This file is auto-generated. DO NOT EDIT") + fmt.Fprintf(&buf, "\npackage jws") + fmt.Fprintf(&buf, "\n\nimport (") + for _, pkg := range []string{"github.com/lestrrat-go/jwx/jwa", "github.com/lestrrat-go/jwx/jwk", "github.com/pkg/errors"} { + fmt.Fprintf(&buf, "\n%s", strconv.Quote(pkg)) + } + fmt.Fprintf(&buf, "\n)") + + fmt.Fprintf(&buf, "\n\nconst (") + for _, f := range fields { + fmt.Fprintf(&buf, "\n%sKey = %s", f.method, strconv.Quote(f.key)) + } + fmt.Fprintf(&buf, "\n)") // end const + + fmt.Fprintf(&buf, "\n\ntype Headers interface {") + fmt.Fprintf(&buf, "\nGet(string) (interface{}, bool)") + fmt.Fprintf(&buf, "\nSet(string, interface{}) error") + fmt.Fprintf(&buf, "\nAlgorithm() jwa.SignatureAlgorithm") + + /* for _, f := range fields { + fmt.Fprintf(&buf, "\n%s() %s", f.method, f.PointerElem()) + }*/ + fmt.Fprintf(&buf, "\n}") // end type Headers interface + fmt.Fprintf(&buf, "\n\ntype StandardHeaders struct {") + for _, f := range fields { + fmt.Fprintf(&buf, "\n%s %s %s // %s", f.name, f.typ, f.jsonTag, f.comment) + } + fmt.Fprintf(&buf, "\nprivateParams map[string]interface{}") + fmt.Fprintf(&buf, "\n}") // end type 
StandardHeaders + + fmt.Fprintf(&buf, "\n\nfunc (h *StandardHeaders) Algorithm() jwa.SignatureAlgorithm {") + fmt.Fprintf(&buf, "\nreturn h.JWSalgorithm") + fmt.Fprintf(&buf, "\n}") // func (h *StandardHeaders) %s() %s + + fmt.Fprintf(&buf, "\n\nfunc (h *StandardHeaders) Get(name string) (interface{}, bool) {") + fmt.Fprintf(&buf, "\nswitch name {") + for _, f := range fields { + fmt.Fprintf(&buf, "\ncase %sKey:", f.method) + fmt.Fprintf(&buf, "\nv := h.%s", f.name) + + if f.typ == "[]string" { + fmt.Fprintf(&buf, "\nif len(v) == %s {", zeroval(f.typ)) + } else { + fmt.Fprintf(&buf, "\nif v == %s {", zeroval(f.typ)) + } + fmt.Fprintf(&buf, "\nreturn nil, false") + fmt.Fprintf(&buf, "\n}") // end if h.%s == nil + fmt.Fprintf(&buf, "\nreturn v, true") + + } + fmt.Fprintf(&buf, "\ndefault:") + fmt.Fprintf(&buf, "\nv, ok := h.privateParams[name]") + fmt.Fprintf(&buf, "\nreturn v, ok") + fmt.Fprintf(&buf, "\n}") // end switch name + fmt.Fprintf(&buf, "\n}") // func (h *StandardHeaders) Get(name string) (interface{}, bool) + + fmt.Fprintf(&buf, "\n\nfunc (h *StandardHeaders) Set(name string, value interface{}) error {") + fmt.Fprintf(&buf, "\nswitch name {") + for _, f := range fields { + fmt.Fprintf(&buf, "\ncase %sKey:", f.method) + if f.hasAccept { + if f.IsPointer() { + fmt.Fprintf(&buf, "\nvar acceptor %s", f.PointerElem()) + fmt.Fprintf(&buf, "\nif err := acceptor.Accept(value); err != nil {") + fmt.Fprintf(&buf, "\nreturn errors.Wrapf(err, `invalid value for %%s key`, %sKey)", f.method) + fmt.Fprintf(&buf, "\n}") // end if err := h.%s.Accept(value) + fmt.Fprintf(&buf, "\nh.%s = &acceptor", f.name) + } else { + fmt.Fprintf(&buf, "\nif err := h.%s.Accept(value); err != nil {", f.name) + fmt.Fprintf(&buf, "\nreturn errors.Wrapf(err, `invalid value for %%s key`, %sKey)", f.method) + fmt.Fprintf(&buf, "\n}") // end if err := h.%s.Accept(value) + } + fmt.Fprintf(&buf, "\nreturn nil") + } else { + if f.name == "JWSjwk" { + fmt.Fprintf(&buf, "\nv, ok := value.(%s)", 
f.typ) + fmt.Fprintf(&buf, "\nif ok {") + fmt.Fprintf(&buf, "\nh.%s = v", f.name) + } else { + fmt.Fprintf(&buf, "\nif v, ok := value.(%s); ok {", f.typ) + fmt.Fprintf(&buf, "\nh.%s = v", f.name) + } + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") // end if v, ok := value.(%s) + fmt.Fprintf(&buf, "\nreturn errors.Errorf(`invalid value for %%s key: %%T`, %sKey, value)", f.method) + } + } + fmt.Fprintf(&buf, "\ndefault:") + fmt.Fprintf(&buf, "\nif h.privateParams == nil {") + fmt.Fprintf(&buf, "\nh.privateParams = map[string]interface{}{}") + fmt.Fprintf(&buf, "\n}") // end if h.privateParams == nil + fmt.Fprintf(&buf, "\nh.privateParams[name] = value") + fmt.Fprintf(&buf, "\n}") // end switch name + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") // end func (h *StandardHeaders) Set(name string, value interface{}) + + formatted, err := format.Source(buf.Bytes()) + if err != nil { + buf.WriteTo(os.Stdout) + return errors.Wrap(err, `failed to format code`) + } + + f, err := os.Create("headers.go") + if err != nil { + return errors.Wrap(err, `failed to open headers.go`) + } + defer f.Close() + f.Write(formatted) + + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/jws.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/jws.go new file mode 100644 index 0000000000..a8ca7ec737 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/jws.go @@ -0,0 +1,567 @@ +//go:generate go run internal/cmd/genheader/main.go + +// Package jws implements the digital signature on JSON based data +// structures as described in https://tools.ietf.org/html/rfc7515 +// +// If you do not care about the details, the only things that you +// would need to use are the following functions: +// +// jws.Sign(payload, algorithm, key) +// jws.Verify(encodedjws, algorithm, key) +// +// To sign, simply use `jws.Sign`. 
`payload` is a []byte buffer that +// contains whatever data you want to sign. `alg` is one of the +// jwa.SignatureAlgorithm constants from package jwa. For RSA and +// ECDSA family of algorithms, you will need to prepare a private key. +// For HMAC family, you just need a []byte value. The `jws.Sign` +// function will return the encoded JWS message on success. +// +// To verify, use `jws.Verify`. It will parse the `encodedjws` buffer +// and verify the result using `algorithm` and `key`. Upon successful +// verification, the original payload is returned, so you can work on it. +package jws + +import ( + "bufio" + "bytes" + "encoding/base64" + "encoding/json" + "io" + "strings" + "unicode" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwk" + "github.com/lestrrat-go/jwx/jws/sign" + "github.com/lestrrat-go/jwx/jws/verify" + "github.com/pkg/errors" +) + +// Sign is a short way to generate a JWS in compact serialization +// for a given payload. If you need more control over the signature +// generation process, you should manually create signers and tweak +// the message. 
+/* +func Sign(payload []byte, alg jwa.SignatureAlgorithm, key interface{}, options ...Option) ([]byte, error) { + signer, err := sign.New(alg) + if err != nil { + return nil, errors.Wrap(err, "failed to create signer") + } + + msg, err := SignMulti(payload, WithSigner(signer, key)) + if err != nil { + return nil, errors.Wrap(err, "failed to sign payload") + } + return msg, nil +} +*/ + +type payloadSigner struct { + signer sign.Signer + key interface{} + protected Headers + public Headers +} + +func (s *payloadSigner) Sign(payload []byte) ([]byte, error) { + return s.signer.Sign(payload, s.key) +} + +func (s *payloadSigner) Algorithm() jwa.SignatureAlgorithm { + return s.signer.Algorithm() +} + +func (s *payloadSigner) ProtectedHeader() Headers { + return s.protected +} + +func (s *payloadSigner) PublicHeader() Headers { + return s.public +} + +// Sign generates a signature for the given payload, and serializes +// it in compact serialization format. In this format you may NOT use +// multiple signers. +// +// If you would like to pass custom headers, use the WithHeaders option. 
+func Sign(payload []byte, alg jwa.SignatureAlgorithm, key interface{}, options ...Option) ([]byte, error) { + var hdrs Headers = &StandardHeaders{} + for _, o := range options { + switch o.Name() { + case optkeyHeaders: + hdrs = o.Value().(Headers) + } + } + + signer, err := sign.New(alg) + if err != nil { + return nil, errors.Wrap(err, `failed to create signer`) + } + + hdrs.Set(AlgorithmKey, signer.Algorithm()) + + hdrbuf, err := json.Marshal(hdrs) + if err != nil { + return nil, errors.Wrap(err, `failed to marshal headers`) + } + + var buf bytes.Buffer + enc := base64.NewEncoder(base64.RawURLEncoding, &buf) + if _, err := enc.Write(hdrbuf); err != nil { + return nil, errors.Wrap(err, `failed to write headers as base64`) + } + if err := enc.Close(); err != nil { + return nil, errors.Wrap(err, `failed to finalize writing headers as base64`) + } + + buf.WriteByte('.') + enc = base64.NewEncoder(base64.RawURLEncoding, &buf) + if _, err := enc.Write(payload); err != nil { + return nil, errors.Wrap(err, `failed to write payload as base64`) + } + if err := enc.Close(); err != nil { + return nil, errors.Wrap(err, `failed to finalize writing payload as base64`) + } + + signature, err := signer.Sign(buf.Bytes(), key) + if err != nil { + return nil, errors.Wrap(err, `failed to sign payload`) + } + + buf.WriteByte('.') + enc = base64.NewEncoder(base64.RawURLEncoding, &buf) + if _, err := enc.Write(signature); err != nil { + return nil, errors.Wrap(err, `failed to write signature as base64`) + } + if err := enc.Close(); err != nil { + return nil, errors.Wrap(err, `failed to finalize writing signature as base64`) + } + + return buf.Bytes(), nil +} + +// SignLiteral generates a signature for the given payload and headers, and serializes +// it in compact serialization format. In this format you may NOT use +// multiple signers. 
+// +func SignLiteral(payload []byte, alg jwa.SignatureAlgorithm, key interface{}, headers []byte) ([]byte, error) { + + signer, err := sign.New(alg) + if err != nil { + return nil, errors.Wrap(err, `failed to create signer`) + } + + var buf bytes.Buffer + enc := base64.NewEncoder(base64.RawURLEncoding, &buf) + if _, err := enc.Write(headers); err != nil { + return nil, errors.Wrap(err, `failed to write headers as base64`) + } + if err := enc.Close(); err != nil { + return nil, errors.Wrap(err, `failed to finalize writing headers as base64`) + } + + buf.WriteByte('.') + enc = base64.NewEncoder(base64.RawURLEncoding, &buf) + if _, err := enc.Write(payload); err != nil { + return nil, errors.Wrap(err, `failed to write payload as base64`) + } + if err := enc.Close(); err != nil { + return nil, errors.Wrap(err, `failed to finalize writing payload as base64`) + } + + signature, err := signer.Sign(buf.Bytes(), key) + if err != nil { + return nil, errors.Wrap(err, `failed to sign payload`) + } + + buf.WriteByte('.') + enc = base64.NewEncoder(base64.RawURLEncoding, &buf) + if _, err := enc.Write(signature); err != nil { + return nil, errors.Wrap(err, `failed to write signature as base64`) + } + if err := enc.Close(); err != nil { + return nil, errors.Wrap(err, `failed to finalize writing signature as base64`) + } + + return buf.Bytes(), nil +} + +// SignMulti accepts multiple signers via the options parameter, +// and creates a JWS in JSON serialization format that contains +// signatures from applying aforementioned signers. 
+func SignMulti(payload []byte, options ...Option) ([]byte, error) { + var signers []PayloadSigner + for _, o := range options { + switch o.Name() { + case optkeyPayloadSigner: + signers = append(signers, o.Value().(PayloadSigner)) + } + } + + if len(signers) == 0 { + return nil, errors.New(`no signers provided`) + } + + var result EncodedMessage + + result.Payload = base64.RawURLEncoding.EncodeToString(payload) + + for _, signer := range signers { + protected := signer.ProtectedHeader() + if protected == nil { + protected = &StandardHeaders{} + } + + protected.Set(AlgorithmKey, signer.Algorithm()) + + hdrbuf, err := json.Marshal(protected) + if err != nil { + return nil, errors.Wrap(err, `failed to marshal headers`) + } + encodedHeader := base64.RawURLEncoding.EncodeToString(hdrbuf) + var buf bytes.Buffer + buf.WriteString(encodedHeader) + buf.WriteByte('.') + buf.WriteString(result.Payload) + signature, err := signer.Sign(buf.Bytes()) + if err != nil { + return nil, errors.Wrap(err, `failed to sign payload`) + } + + result.Signatures = append(result.Signatures, &EncodedSignature{ + Headers: signer.PublicHeader(), + Protected: encodedHeader, + Signature: base64.RawURLEncoding.EncodeToString(signature), + }) + } + + return json.Marshal(result) +} + +// Verify checks if the given JWS message is verifiable using `alg` and `key`. +// If the verification is successful, `err` is nil, and the content of the +// payload that was signed is returned. If you need more fine-grained +// control of the verification process, manually call `Parse`, generate a +// verifier, and call `Verify` on the parsed JWS message object. 
+func Verify(buf []byte, alg jwa.SignatureAlgorithm, key interface{}) (ret []byte, err error) { + + verifier, err := verify.New(alg) + if err != nil { + return nil, errors.Wrap(err, "failed to create verifier") + } + + buf = bytes.TrimSpace(buf) + if len(buf) == 0 { + return nil, errors.New(`attempt to verify empty buffer`) + } + + if buf[0] == '{' { + + var v FullEncodedMessage + if err := json.Unmarshal(buf, &v); err != nil { + return nil, errors.Wrap(err, `failed to unmarshal JWS message`) + } + + // There's something wrong if the Message part is not initialized + if v.EncodedMessage == nil { + return nil, errors.New(`invalid JWS message format`) + } + + // if we're using the flattened serialization format, then m.Signature + // will be non-nil + msg := v.EncodedMessage + if v.EncodedSignature != nil { + msg.Signatures[0] = v.EncodedSignature + } + + var buf bytes.Buffer + for _, sig := range msg.Signatures { + buf.Reset() + buf.WriteString(sig.Protected) + buf.WriteByte('.') + buf.WriteString(msg.Payload) + decodedSignature, err := base64.RawURLEncoding.DecodeString(sig.Signature) + if err != nil { + continue + } + + if err := verifier.Verify(buf.Bytes(), decodedSignature, key); err == nil { + // verified! 
+ decodedPayload, err := base64.RawURLEncoding.DecodeString(msg.Payload) + if err != nil { + return nil, errors.Wrap(err, `message verified, failed to decode payload`) + } + return decodedPayload, nil + } + } + return nil, errors.New(`could not verify with any of the signatures`) + } + + protected, payload, signature, err := SplitCompact(bytes.NewReader(buf)) + if err != nil { + return nil, errors.Wrap(err, `failed to extract from compact serialization format`) + } + + var verifyBuf bytes.Buffer + verifyBuf.Write(protected) + verifyBuf.WriteByte('.') + verifyBuf.Write(payload) + + decodedSignature := make([]byte, base64.RawURLEncoding.DecodedLen(len(signature))) + if _, err := base64.RawURLEncoding.Decode(decodedSignature, signature); err != nil { + return nil, errors.Wrap(err, `failed to decode signature`) + } + if err := verifier.Verify(verifyBuf.Bytes(), decodedSignature, key); err != nil { + return nil, errors.Wrap(err, `failed to verify message`) + } + + decodedPayload := make([]byte, base64.RawURLEncoding.DecodedLen(len(payload))) + if _, err := base64.RawURLEncoding.Decode(decodedPayload, payload); err != nil { + return nil, errors.Wrap(err, `message verified, failed to decode payload`) + } + return decodedPayload, nil +} + +// VerifyWithJKU verifies the JWS message using a remote JWK +// file located at the given URL. 
+func VerifyWithJKU(buf []byte, jwkurl string) ([]byte, error) { + key, err := jwk.FetchHTTP(jwkurl) + if err != nil { + return nil, errors.Wrap(err, `failed to fetch jwk via HTTP`) + } + + return VerifyWithJWKSet(buf, key, nil) +} + +// VerifyWithJWK verifies the JWS message using the specified JWK +func VerifyWithJWK(buf []byte, key jwk.Key) (payload []byte, err error) { + + keyval, err := key.Materialize() + if err != nil { + return nil, errors.Wrap(err, `failed to materialize jwk.Key`) + } + + payload, err = Verify(buf, jwa.SignatureAlgorithm(key.Algorithm()), keyval) + if err != nil { + return nil, errors.Wrap(err, "failed to verify message") + } + return payload, nil +} + +// VerifyWithJWKSet verifies the JWS message using JWK key set. +// By default it will only pick up keys that have the "use" key +// set to either "sig" or "enc", but you can override it by +// providing a keyaccept function. +func VerifyWithJWKSet(buf []byte, keyset *jwk.Set, keyaccept JWKAcceptFunc) (payload []byte, err error) { + + if keyaccept == nil { + keyaccept = DefaultJWKAcceptor + } + + for _, key := range keyset.Keys { + if !keyaccept(key) { + continue + } + + payload, err := VerifyWithJWK(buf, key) + if err == nil { + return payload, nil + } + } + + return nil, errors.New("failed to verify with any of the keys") +} + +// Parse parses contents from the given source and creates a jws.Message +// struct. The input can be in either compact or full JSON serialization. 
+func Parse(src io.Reader) (m *Message, err error) { + + rdr := bufio.NewReader(src) + var first rune + for { + r, _, err := rdr.ReadRune() + if err != nil { + return nil, errors.Wrap(err, `failed to read rune`) + } + if !unicode.IsSpace(r) { + first = r + rdr.UnreadRune() + break + } + } + + var parser func(io.Reader) (*Message, error) + if first == '{' { + parser = parseJSON + } else { + parser = parseCompact + } + + m, err = parser(rdr) + if err != nil { + return nil, errors.Wrap(err, `failed to parse jws message`) + } + + return m, nil +} + +// ParseString is the same as Parse, but takes in a string +func ParseString(s string) (*Message, error) { + return Parse(strings.NewReader(s)) +} + +func parseJSON(src io.Reader) (result *Message, err error) { + + var wrapper FullEncodedMessageUnmarshalProxy + + if err := json.NewDecoder(src).Decode(&wrapper); err != nil { + return nil, errors.Wrap(err, `failed to unmarshal jws message`) + } + + if wrapper.EncodedMessageUnmarshalProxy == nil { + return nil, errors.New(`invalid payload (probably empty)`) + } + + // if the "signature" field exists, treat it as flattened serialization + if wrapper.EncodedSignatureUnmarshalProxy != nil { + if len(wrapper.Signatures) != 0 { + return nil, errors.New("invalid message: mixed flattened/full json serialization") + } + + wrapper.Signatures = append(wrapper.Signatures, wrapper.EncodedSignatureUnmarshalProxy) + } + + var plain Message + plain.payload, err = base64.RawURLEncoding.DecodeString(wrapper.Payload) + if err != nil { + return nil, errors.Wrap(err, `failed to decode payload`) + } + + for i, sig := range wrapper.Signatures { + var plainSig Signature + + plainSig.headers = sig.Headers + + if l := len(sig.Protected); l > 0 { + plainSig.protected = new(StandardHeaders) + hdrbuf, err := base64.RawURLEncoding.DecodeString(sig.Protected) + if err != nil { + return nil, errors.Wrapf(err, `failed to base64 decode protected header for signature #%d`, i+1) + } + if err := json.Unmarshal(hdrbuf, 
&plainSig.protected); err != nil { + return nil, errors.Wrapf(err, `failed to unmarshal protected header for signature #%d`, i+1) + } + } + + plainSig.signature, err = base64.RawURLEncoding.DecodeString(sig.Signature) + if err != nil { + return nil, errors.Wrapf(err, `failed to decode signature #%d`, i) + } + + plain.signatures = append(plain.signatures, &plainSig) + } + + return &plain, nil +} + +// SplitCompact splits a JWT and returns its three parts +// separately: protected headers, payload and signature. +func SplitCompact(rdr io.Reader) ([]byte, []byte, []byte, error) { + var protected []byte + var payload []byte + var signature []byte + var periods int = 0 + var state int = 0 + + buf := make([]byte, 4096) + var sofar []byte + + for { + // read next bytes + n, err := rdr.Read(buf) + // return on unexpected read error + if err != nil && err != io.EOF { + return nil, nil, nil, err + } + + // append to current buffer + sofar = append(sofar, buf[:n]...) + // loop to capture multiple '.' in current buffer + for loop := true; loop; { + var i = bytes.IndexByte(sofar, '.') + if i == -1 && err != io.EOF { + // no '.' found -> exit and read next bytes (outer loop) + loop = false + continue + } else if i == -1 && err == io.EOF { + // no '.' found -> process rest and exit + i = len(sofar) + loop = false + } else { + // '.' found + periods++ + } + + // Reaching this point means we have found a '.' or EOF and process the rest of the buffer + switch state { + case 0: + protected = sofar[:i] + state++ + case 1: + payload = sofar[:i] + state++ + case 2: + signature = sofar[:i] + } + // Shorten current buffer + if len(sofar) > i { + sofar = sofar[i+1:] + } + } + // Exit on EOF + if err == io.EOF { + break + } + } + if periods != 2 { + return nil, nil, nil, errors.New(`invalid number of segments`) + } + + return protected, payload, signature, nil +} + +// parseCompact parses a JWS value serialized via compact serialization. 
+func parseCompact(rdr io.Reader) (m *Message, err error) {
+
+	protected, payload, signature, err := SplitCompact(rdr)
+	if err != nil {
+		return nil, errors.Wrap(err, `invalid compact serialization format`)
+	}
+
+	decodedHeader := make([]byte, base64.RawURLEncoding.DecodedLen(len(protected)))
+	if _, err := base64.RawURLEncoding.Decode(decodedHeader, protected); err != nil {
+		return nil, errors.Wrap(err, `failed to decode headers`)
+	}
+	var hdr StandardHeaders
+	if err := json.Unmarshal(decodedHeader, &hdr); err != nil {
+		return nil, errors.Wrap(err, `failed to parse JOSE headers`)
+	}
+
+	decodedPayload := make([]byte, base64.RawURLEncoding.DecodedLen(len(payload)))
+	if _, err = base64.RawURLEncoding.Decode(decodedPayload, payload); err != nil {
+		return nil, errors.Wrap(err, `failed to decode payload`)
+	}
+
+	decodedSignature := make([]byte, base64.RawURLEncoding.DecodedLen(len(signature)))
+	if _, err := base64.RawURLEncoding.Decode(decodedSignature, signature); err != nil {
+		return nil, errors.Wrap(err, `failed to decode signature`)
+	}
+
+	var msg Message
+	msg.payload = decodedPayload
+	msg.signatures = append(msg.signatures, &Signature{
+		protected: &hdr,
+		signature: decodedSignature,
+	})
+	return &msg, nil
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/jws_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/jws_test.go
new file mode 100644
index 0000000000..aee2381039
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/jws_test.go
@@ -0,0 +1,925 @@
+package jws_test
+
+import (
+	"bytes"
+	"crypto/ecdsa"
+	"crypto/rand"
+	"crypto/rsa"
+	"crypto/sha512"
+	"encoding/base64"
+	"encoding/json"
+	"math/big"
+	"strings"
+	"testing"
+
+	"github.com/lestrrat-go/jwx/buffer"
+	"github.com/lestrrat-go/jwx/internal/ecdsautil"
+	"github.com/lestrrat-go/jwx/internal/rsautil"
+	"github.com/lestrrat-go/jwx/jwa"
+	"github.com/lestrrat-go/jwx/jwk"
+	"github.com/lestrrat-go/jwx/jws"
+	"github.com/lestrrat-go/jwx/jws/sign"
+	"github.com/lestrrat-go/jwx/jws/verify"
+	"github.com/stretchr/testify/assert"
+)
+
+const examplePayload = `{"iss":"joe",` + "\r\n" + ` "exp":1300819380,` + "\r\n" + ` "http://example.com/is_root":true}`
+const exampleCompactSerialization = `eyJ0eXAiOiJKV1QiLA0KICJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ.dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk`
+
+func TestParse(t *testing.T) {
+	t.Run("Empty bytes.Buffer", func(t *testing.T) {
+		_, err := jws.Parse(&bytes.Buffer{})
+		if !assert.Error(t, err, "Parsing an empty buffer should result in an error") {
+			return
+		}
+	})
+	t.Run("Compact missing parts", func(t *testing.T) {
+		incoming := strings.Join(
+			(strings.Split(
+				exampleCompactSerialization,
+				".",
+			))[:2],
+			".",
+		)
+		_, err := jws.ParseString(incoming)
+		if !assert.Error(t, err, "Parsing compact serialization with less than 3 parts should be an error") {
+			return
+		}
+	})
+	t.Run("Compact bad header", func(t *testing.T) {
+		parts := strings.Split(exampleCompactSerialization, ".")
+		parts[0] = "%badvalue%"
+		incoming := strings.Join(parts, ".")
+
+		_, err := jws.ParseString(incoming)
+		if !assert.Error(t, err, "Parsing compact serialization with bad header should be an error") {
+			return
+		}
+	})
+	t.Run("Compact bad payload", func(t *testing.T) {
+		parts := strings.Split(exampleCompactSerialization, ".")
+		parts[1] = "%badvalue%"
+		incoming := strings.Join(parts, ".")
+
+		_, err := jws.ParseString(incoming)
+		if !assert.Error(t, err, "Parsing compact serialization with bad payload should be an error") {
+			return
+		}
+	})
+	t.Run("Compact bad signature", func(t *testing.T) {
+		parts := strings.Split(exampleCompactSerialization, ".")
+		parts[2] = "%badvalue%"
+		incoming := strings.Join(parts, ".")
+
+		t.Logf("incoming = '%s'", incoming)
+		_, err := jws.ParseString(incoming)
+		if !assert.Error(t, err, "Parsing compact serialization with bad signature should be an error") {
+			return
+		}
+	})
+}
+
+func TestRoundtrip(t *testing.T) {
+	payload := []byte("Lorem ipsum")
+	sharedkey := []byte("Avracadabra")
+
+	hmacAlgorithms := []jwa.SignatureAlgorithm{jwa.HS256, jwa.HS384, jwa.HS512}
+	for _, alg := range hmacAlgorithms {
+		t.Run("HMAC "+alg.String(), func(t *testing.T) {
+			signed, err := jws.Sign(payload, alg, sharedkey)
+			if !assert.NoError(t, err, "Sign succeeds") {
+				return
+			}
+
+			verified, err := jws.Verify(signed, alg, sharedkey)
+			if !assert.NoError(t, err, "Verify succeeded") {
+				return
+			}
+
+			if !assert.Equal(t, payload, verified, "verified payload matches") {
+				return
+			}
+		})
+	}
+	t.Run("HMAC SignMulti", func(t *testing.T) {
+		var signed []byte
+		t.Run("Sign", func(t *testing.T) {
+			var options []jws.Option
+			for _, alg := range hmacAlgorithms {
+				signer, err := sign.New(alg)
+				if !assert.NoError(t, err, `sign.New should succeed`) {
+					return
+				}
+				options = append(options, jws.WithSigner(signer, sharedkey, nil, nil))
+			}
+			var err error
+			signed, err = jws.SignMulti(payload, options...)
+			if !assert.NoError(t, err, `jws.SignMulti should succeed`) {
+				return
+			}
+		})
+		for _, alg := range hmacAlgorithms {
+			t.Run("Verify "+alg.String(), func(t *testing.T) {
+				verified, err := jws.Verify(signed, alg, sharedkey)
+				if !assert.NoError(t, err, "Verify succeeded") {
+					return
+				}
+
+				if !assert.Equal(t, payload, verified, "verified payload matches") {
+					return
+				}
+			})
+		}
+	})
+}
+
+func TestVerifyWithJWKSet(t *testing.T) {
+	payload := []byte("Hello, World!")
+	key, err := rsa.GenerateKey(rand.Reader, 2048)
+	if !assert.NoError(t, err, "RSA key generated") {
+		return
+	}
+
+	jwkKey, err := jwk.New(&key.PublicKey)
+	if !assert.NoError(t, err, "JWK public key generated") {
+		return
+	}
+	err = jwkKey.Set(jwk.AlgorithmKey, jwa.RS256)
+	if !assert.NoError(t, err, "Algorithm set successfully") {
+		return
+	}
+
+	buf, err := jws.Sign(payload, jwa.RS256, key)
+	if !assert.NoError(t, err, "Signature generated successfully") {
+		return
+	}
+
+	verified, err := jws.VerifyWithJWKSet(buf, &jwk.Set{Keys: []jwk.Key{jwkKey}}, nil)
+	if !assert.NoError(t, err, "Verify is successful") {
+		return
+	}
+
+	verified, err = jws.VerifyWithJWK(buf, jwkKey)
+	if !assert.NoError(t, err, "Verify is successful") {
+		return
+	}
+
+	if !assert.Equal(t, payload, verified, "Verified payload is the same") {
+		return
+	}
+}
+
+func TestRoundtrip_RSACompact(t *testing.T) {
+	payload := []byte("Hello, World!")
+	for _, alg := range []jwa.SignatureAlgorithm{jwa.RS256, jwa.RS384, jwa.RS512, jwa.PS256, jwa.PS384, jwa.PS512} {
+		key, err := rsa.GenerateKey(rand.Reader, 2048)
+		if !assert.NoError(t, err, "RSA key generated") {
+			return
+		}
+
+		buf, err := jws.Sign(payload, alg, key)
+		if !assert.NoError(t, err, "(%s) Signature generated successfully", alg) {
+			return
+		}
+
+		parsers := map[string]func([]byte) (*jws.Message, error){
+			"Parse(io.Reader)": func(b []byte) (*jws.Message, error) { return jws.Parse(bytes.NewReader(b)) },
+			"Parse(string)":    func(b []byte) (*jws.Message, error) { return jws.ParseString(string(b)) },
+		}
+		for name, f := range parsers {
+			m, err := f(buf)
+			if !assert.NoError(t, err, "(%s) %s is successful", alg, name) {
+				return
+			}
+
+			if !assert.Equal(t, payload, m.Payload(), "(%s) %s: Payload is decoded", alg, name) {
+				return
+			}
+		}
+
+		verified, err := jws.Verify(buf, alg, &key.PublicKey)
+		if !assert.NoError(t, err, "(%s) Verify is successful", alg) {
+			return
+		}
+
+		if !assert.Equal(t, payload, verified, "(%s) Verified payload is the same", alg) {
+			return
+		}
+	}
+}
+
+func TestEncode(t *testing.T) {
+	// HS256Compact tests that https://tools.ietf.org/html/rfc7515#appendix-A.1 works
+	t.Run("HS256Compact", func(t *testing.T) {
+		const hdr = `{"typ":"JWT",` + "\r\n" + ` "alg":"HS256"}`
+		const hmacKey = `AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow`
+		const expected = `eyJ0eXAiOiJKV1QiLA0KICJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ.dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk`
+
+		hmacKeyDecoded := buffer.Buffer{}
+		err := hmacKeyDecoded.Base64Decode([]byte(hmacKey))
+		if !assert.NoError(t, err, "HMAC base64 decoded successful") {
+			return
+		}
+
+		hdrbuf, err := buffer.Buffer(hdr).Base64Encode()
+		if !assert.NoError(t, err, "base64 encode successful") {
+			return
+		}
+		payload, err := buffer.Buffer(examplePayload).Base64Encode()
+		if !assert.NoError(t, err, "base64 encode successful") {
+			return
+		}
+
+		signingInput := bytes.Join(
+			[][]byte{
+				hdrbuf,
+				payload,
+			},
+			[]byte{'.'},
+		)
+
+		sign, err := sign.New(jwa.HS256)
+		if !assert.NoError(t, err, "HMAC signer created successfully") {
+			return
+		}
+
+		signature, err := sign.Sign(signingInput, hmacKeyDecoded.Bytes())
+		if !assert.NoError(t, err, "PayloadSign is successful") {
+			return
+		}
+		sigbuf, err := buffer.Buffer(signature).Base64Encode()
+		if !assert.NoError(t, err, "base64 encode successful") {
+			return
+		}
+
+		encoded := bytes.Join(
+			[][]byte{
+				signingInput,
+				sigbuf,
+			},
+			[]byte{'.'},
+		)
+		if !assert.Equal(t, expected, string(encoded), "generated compact serialization should match") {
+			return
+		}
+
+		msg, err := jws.Parse(bytes.NewReader(encoded))
+		if !assert.NoError(t, err, "Parsing compact encoded serialization succeeds") {
+			return
+		}
+
+		signatures := msg.Signatures()
+		if !assert.Len(t, signatures, 1, `there should be exactly one signature`) {
+			return
+		}
+
+		algorithm := signatures[0].ProtectedHeaders().Algorithm()
+		if algorithm != jwa.HS256 {
+			t.Fatal("Algorithm in header does not match")
+		}
+
+		v, err := verify.New(jwa.HS256)
+		if !assert.NoError(t, err, "HmacVerify created") {
+			return
+		}
+
+		if !assert.NoError(t, v.Verify(signingInput, signature, hmacKeyDecoded.Bytes()), "Verify succeeds") {
+			return
+		}
+	})
+	t.Run("HS256CompactLiteral", func(t *testing.T) {
+		const hdr = `{"typ":"JWT",` + "\r\n" + ` "alg":"HS256"}`
+		const jwksrc = `{
+"kty":"oct",
+"k":"AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow"
+}`
+
+		hdrBytes := []byte(hdr)
+
+		hdrBuf, err := buffer.Buffer(hdr).Base64Encode()
+		if err != nil {
+			t.Fatal("Failed to base64 encode protected header")
+		}
+		standardHeaders := &jws.StandardHeaders{}
+		err = json.Unmarshal(hdrBytes, standardHeaders)
+		if err != nil {
+			t.Fatal("Failed to parse protected header")
+		}
+		alg := standardHeaders.Algorithm()
+
+		payload, err := buffer.Buffer(examplePayload).Base64Encode()
+		if err != nil {
+			t.Fatal("Failed to base64 encode payload")
+		}
+
+		keys, _ := jwk.ParseString(jwksrc)
+		key, err := keys.Keys[0].Materialize()
+		if err != nil {
+			t.Fatal("Failed to parse key")
+		}
+		var jwsCompact []byte
+		jwsCompact, err = jws.SignLiteral([]byte(examplePayload), alg, key, hdrBytes)
+		if err != nil {
+			t.Fatal("Failed to sign message")
+		}
+
+		msg, err := jws.Parse(bytes.NewReader(jwsCompact))
+		if !assert.NoError(t, err, "Parsing compact encoded serialization succeeds") {
+			return
+		}
+
+		signatures := msg.Signatures()
+		if !assert.Len(t, signatures, 1, `there should be exactly one signature`) {
+			return
+		}
+
+		algorithm := signatures[0].ProtectedHeaders().Algorithm()
+		if algorithm != alg {
+			t.Fatal("Algorithm in header does not match")
+		}
+
+		v, err := verify.New(alg)
+		if !assert.NoError(t, err, "HmacVerify created") {
+			return
+		}
+
+		signingInput := bytes.Join(
+			[][]byte{
+				hdrBuf,
+				payload,
+			},
+			[]byte{'.'},
+		)
+
+		if !assert.NoError(t, v.Verify(signingInput, signatures[0].Signature(), key), "Verify succeeds") {
+			return
+		}
+	})
+	t.Run("ES512Compact", func(t *testing.T) {
+		// ES512Compact tests that https://tools.ietf.org/html/rfc7515#appendix-A.4 works
+		hdr := []byte{123, 34, 97, 108, 103, 34, 58, 34, 69, 83, 53, 49, 50, 34, 125}
+		const jwksrc = `{
+"kty":"EC",
+"crv":"P-521",
+"x":"AekpBQ8ST8a8VcfVOTNl353vSrDCLLJXmPk06wTjxrrjcBpXp5EOnYG_NjFZ6OvLFV1jSfS9tsz4qUxcWceqwQGk",
+"y":"ADSmRA43Z1DSNx_RvcLI87cdL07l6jQyyBXMoxVg_l2Th-x3S1WDhjDly79ajL4Kkd0AZMaZmh9ubmf63e3kyMj2",
+"d":"AY5pb7A0UFiB3RELSD64fTLOSV_jazdF7fLYyuTw8lOfRhWg6Y6rUrPAxerEzgdRhajnu0ferB0d53vM9mE15j2C"
+}`
+
+		// "Payload"
+		jwsPayload := []byte{80, 97, 121, 108, 111, 97, 100}
+
+		standardHeaders := &jws.StandardHeaders{}
+		err := json.Unmarshal(hdr, standardHeaders)
+		if err != nil {
+			t.Fatal("Failed to parse header")
+		}
+		alg := standardHeaders.Algorithm()
+
+		keys, err := jwk.ParseString(jwksrc)
+		if err != nil {
+			t.Fatal("Failed to parse JWK")
+		}
+		key, err := keys.Keys[0].Materialize()
+		if err != nil {
+			t.Fatal("Failed to create private key")
+		}
+		var jwsCompact []byte
+		jwsCompact, err = jws.Sign(jwsPayload, alg, key)
+		if err != nil {
+			t.Fatal("Failed to sign message")
+		}
+
+		// Verify with standard ecdsa library
+		_, _, jwsSignature, err := jws.SplitCompact(bytes.NewReader(jwsCompact))
+		if err != nil {
+			t.Fatal("Failed to split compact JWT")
+		}
+		decodedJwsSignature := make([]byte, base64.RawURLEncoding.DecodedLen(len(jwsSignature)))
+		decodedLen, err := base64.RawURLEncoding.Decode(decodedJwsSignature, jwsSignature)
+		if err != nil {
+			t.Fatal("Failed to decode signature")
+		}
+		r, s := &big.Int{}, &big.Int{}
+		n := decodedLen / 2
+		r.SetBytes(decodedJwsSignature[:n])
+		s.SetBytes(decodedJwsSignature[n:])
+		signingHdr, err := buffer.Buffer(hdr).Base64Encode()
+		if err != nil {
+			t.Fatal("Failed to base64 encode headers")
+		}
+		signingPayload, err := buffer.Buffer(jwsPayload).Base64Encode()
+		if err != nil {
+			t.Fatal("Failed to base64 encode payload")
+		}
+		jwsSigningInput := bytes.Join(
+			[][]byte{
+				signingHdr,
+				signingPayload,
+			},
+			[]byte{'.'},
+		)
+		hashed512 := sha512.Sum512(jwsSigningInput)
+		ecdsaPrivateKey := key.(*ecdsa.PrivateKey)
+		verified := ecdsa.Verify(&ecdsaPrivateKey.PublicKey, hashed512[:], r, s)
+		if !verified {
+			t.Fatal("Failed to verify message")
+		}
+
+		// Verify with API library
+
+		publicKey, err := jwk.GetPublicKey(key)
+		if err != nil {
+			t.Fatal("Failed to get public from private key")
+		}
+		verifiedPayload, err := jws.Verify(jwsCompact, alg, publicKey)
+		if err != nil || string(verifiedPayload) != string(jwsPayload) {
+			t.Fatal("Failed to verify message")
+		}
+	})
+	t.Run("RS256Compact", func(t *testing.T) {
+		// RS256Compact tests that https://tools.ietf.org/html/rfc7515#appendix-A.2 works
+		const hdr = `{"alg":"RS256"}`
+		const expected = `eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ.cC4hiUPoj9Eetdgtv3hF80EGrhuB__dzERat0XF9g2VtQgr9PJbu3XOiZj5RZmh7AAuHIm4Bh-0Qc_lF5YKt_O8W2Fp5jujGbds9uJdbF9CUAr7t1dnZcAcQjbKBYNX4BAynRFdiuB--f_nZLgrnbyTyWzO75vRK5h6xBArLIARNPvkSjtQBMHlb1L07Qe7K0GarZRmB_eSN9383LcOLn6_dO--xi12jzDwusC-eOkHWEsqtFZESc6BfI7noOPqvhJ1phCnvWh6IeYI2w9QOYEUipUTI8np6LbgGY9Fs98rqVt5AXLIhWkWywlVmtVrBp0igcN_IoypGlUPQGe77Rw`
+		const jwksrc = `{
+ "kty":"RSA",
+ "n":"ofgWCuLjybRlzo0tZWJjNiuSfb4p4fAkd_wWJcyQoTbji9k0l8W26mPddxHmfHQp-Vaw-4qPCJrcS2mJPMEzP1Pt0Bm4d4QlL-yRT-SFd2lZS-pCgNMsD1W_YpRPEwOWvG6b32690r2jZ47soMZo9wGzjb_7OMg0LOL-bSf63kpaSHSXndS5z5rexMdbBYUsLA9e-KXBdQOS-UTo7WTBEMa2R2CapHg665xsmtdVMTBQY4uDZlxvb3qCo5ZwKh9kG4LT6_I5IhlJH7aGhyxXFvUK-DWNmoudF8NAco9_h9iaGNj8q2ethFkMLs91kzk2PAcDTW9gb54h4FRWyuXpoQ",
+ "e":"AQAB",
+ "d":"Eq5xpGnNCivDflJsRQBXHx1hdR1k6Ulwe2JZD50LpXyWPEAeP88vLNO97IjlA7_GQ5sLKMgvfTeXZx9SE-7YwVol2NXOoAJe46sui395IW_GO-pWJ1O0BkTGoVEn2bKVRUCgu-GjBVaYLU6f3l9kJfFNS3E0QbVdxzubSu3Mkqzjkn439X0M_V51gfpRLI9JYanrC4D4qAdGcopV_0ZHHzQlBjudU2QvXt4ehNYTCBr6XCLQUShb1juUO1ZdiYoFaFQT5Tw8bGUl_x_jTj3ccPDVZFD9pIuhLhBOneufuBiB4cS98l2SR_RQyGWSeWjnczT0QU91p1DhOVRuOopznQ",
+ "p":"4BzEEOtIpmVdVEZNCqS7baC4crd0pqnRH_5IB3jw3bcxGn6QLvnEtfdUdiYrqBdss1l58BQ3KhooKeQTa9AB0Hw_Py5PJdTJNPY8cQn7ouZ2KKDcmnPGBY5t7yLc1QlQ5xHdwW1VhvKn-nXqhJTBgIPgtldC-KDV5z-y2XDwGUc",
+ "q":"uQPEfgmVtjL0Uyyx88GZFF1fOunH3-7cepKmtH4pxhtCoHqpWmT8YAmZxaewHgHAjLYsp1ZSe7zFYHj7C6ul7TjeLQeZD_YwD66t62wDmpe_HlB-TnBA-njbglfIsRLtXlnDzQkv5dTltRJ11BKBBypeeF6689rjcJIDEz9RWdc",
+ "dp":"BwKfV3Akq5_MFZDFZCnW-wzl-CCo83WoZvnLQwCTeDv8uzluRSnm71I3QCLdhrqE2e9YkxvuxdBfpT_PI7Yz-FOKnu1R6HsJeDCjn12Sk3vmAktV2zb34MCdy7cpdTh_YVr7tss2u6vneTwrA86rZtu5Mbr1C1XsmvkxHQAdYo0",
+ "dq":"h_96-mK1R_7glhsum81dZxjTnYynPbZpHziZjeeHcXYsXaaMwkOlODsWa7I9xXDoRwbKgB719rrmI2oKr6N3Do9U0ajaHF-NKJnwgjMd2w9cjz3_-kyNlxAr2v4IKhGNpmM5iIgOS1VZnOZ68m6_pbLBSp3nssTdlqvd0tIiTHU",
+ "qi":"IYd7DHOhrWvxkwPQsRM2tOgrjbcrfvtQJipd-DlcxyVuuM9sQLdgjVk2oy26F0EmpScGLq2MowX7fhd_QJQ3ydy5cY7YIBi87w93IKLEdfnbJtoOPLUW0ITrJReOgo1cq9SbsxYawBgfp_gh6A5603k2-ZQwVK0JKSHuLFkuQ3U"
+ }`
+
+		privkey, err := rsautil.PrivateKeyFromJSON([]byte(jwksrc))
+		if !assert.NoError(t, err, "parsing jwk should be successful") {
+			return
+		}
+
+		sign, err := sign.New(jwa.RS256)
+		if !assert.NoError(t, err, "RsaSign created successfully") {
+			return
+		}
+
+		hdrbuf, err := buffer.Buffer(hdr).Base64Encode()
+		if !assert.NoError(t, err, "base64 encode successful") {
+			return
+		}
+		payload, err := buffer.Buffer(examplePayload).Base64Encode()
+		if !assert.NoError(t, err, "base64 encode successful") {
+			return
+		}
+
+		signingInput := bytes.Join(
+			[][]byte{
+				hdrbuf,
+				payload,
+			},
+			[]byte{'.'},
+		)
+		signature, err := sign.Sign(signingInput, privkey)
+		if !assert.NoError(t, err, "PayloadSign is successful") {
+			return
+		}
+		sigbuf, err := buffer.Buffer(signature).Base64Encode()
+		if !assert.NoError(t, err, "base64 encode successful") {
+			return
+		}
+
+		encoded := bytes.Join(
+			[][]byte{
+				signingInput,
+				sigbuf,
+			},
+			[]byte{'.'},
+		)
+
+		if !assert.Equal(t, expected, string(encoded), "generated compact serialization should match") {
+			return
+		}
+
+		msg, err := jws.Parse(bytes.NewReader(encoded))
+		if !assert.NoError(t, err, "Parsing compact encoded serialization succeeds") {
+			return
+		}
+
+		signatures := msg.Signatures()
+		if !assert.Len(t, signatures, 1, `there should be exactly one signature`) {
+			return
+		}
+
+		algorithm := signatures[0].ProtectedHeaders().Algorithm()
+		if algorithm != jwa.RS256 {
+			t.Fatal("Algorithm in header does not match")
+		}
+
+		v, err := verify.New(jwa.RS256)
+		if !assert.NoError(t, err, "Verify created") {
+			return
+		}
+
+		if !assert.NoError(t, v.Verify(signingInput, signature, &privkey.PublicKey), "Verify succeeds") {
+			return
+		}
+	})
+	t.Run("ES256Compact", func(t *testing.T) {
+		// ES256Compact tests that https://tools.ietf.org/html/rfc7515#appendix-A.3 works
+		const hdr = `{"alg":"ES256"}`
+		const jwksrc = `{
+ "kty":"EC",
+ "crv":"P-256",
+ "x":"f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
+ "y":"x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0",
+ "d":"jpsQnnGQmL-YBIffH1136cspYG6-0iY7X1fCE9-E9LI"
+ }`
+
+		privkey, err := ecdsautil.PrivateKeyFromJSON([]byte(jwksrc))
+		if !assert.NoError(t, err, "parsing jwk should be successful") {
+			return
+		}
+
+		signer, err := sign.New(jwa.ES256)
+		if !assert.NoError(t, err, "EcdsaSign created successfully") {
+			return
+		}
+
+		hdrbuf, err := buffer.Buffer(hdr).Base64Encode()
+		if !assert.NoError(t, err, "base64 encode successful") {
+			return
+		}
+		payload, err := buffer.Buffer(examplePayload).Base64Encode()
+		if !assert.NoError(t, err, "base64 encode successful") {
+			return
+		}
+
+		signingInput := bytes.Join(
+			[][]byte{
+				hdrbuf,
+				payload,
+			},
+			[]byte{'.'},
+		)
+		signature, err := signer.Sign(signingInput, privkey)
+		if !assert.NoError(t, err, "PayloadSign is successful") {
+			return
+		}
+		sigbuf, err := buffer.Buffer(signature).Base64Encode()
+		if !assert.NoError(t, err, "base64 encode successful") {
+			return
+		}
+
+		encoded := bytes.Join(
+			[][]byte{
+				signingInput,
+				sigbuf,
+			},
+			[]byte{'.'},
+		)
+
+		// The signature contains a random factor, so unfortunately we can't match
+		// the output against a fixed expected outcome. We'll waive doing an
+		// exact match, and just try to verify using the signature
+
+		msg, err := jws.Parse(bytes.NewReader(encoded))
+		if !assert.NoError(t, err, "Parsing compact encoded serialization succeeds") {
+			return
+		}
+
+		signatures := msg.Signatures()
+		if !assert.Len(t, signatures, 1, `there should be exactly one signature`) {
+			return
+		}
+
+		algorithm := signatures[0].ProtectedHeaders().Algorithm()
+		if algorithm != jwa.ES256 {
+			t.Fatal("Algorithm in header does not match")
+		}
+
+		v, err := verify.New(jwa.ES256)
+		if !assert.NoError(t, err, "EcdsaVerify created") {
+			return
+		}
+		if !assert.NoError(t, v.Verify(signingInput, signature, &privkey.PublicKey), "Verify succeeds") {
+			return
+		}
+	})
+	t.Run("UnsecuredCompact", func(t *testing.T) {
+		s := `eyJhbGciOiJub25lIn0.eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ.`
+
+		m, err := jws.Parse(strings.NewReader(s))
+		if !assert.NoError(t, err, "Parsing compact serialization") {
+			return
+		}
+
+		{
+			v := map[string]interface{}{}
+			if !assert.NoError(t, json.Unmarshal(m.Payload(), &v), "Unmarshal payload") {
+				return
+			}
+			if !assert.Equal(t, v["iss"], "joe", "iss matches") {
+				return
+			}
+			if !assert.Equal(t, int(v["exp"].(float64)), 1300819380, "exp matches") {
+				return
+			}
+			if !assert.Equal(t, v["http://example.com/is_root"], true, "'http://example.com/is_root' matches") {
+				return
+			}
+		}
+
+		if !assert.Len(t, m.Signatures(), 1, "There should be 1 signature") {
+			return
+		}
+
+		signatures := m.Signatures()
+		algorithm := signatures[0].ProtectedHeaders().Algorithm()
+		if algorithm != jwa.NoSignature {
+			t.Fatal("Algorithm in header does not match")
+		}
+
+		if !assert.Empty(t, signatures[0].Signature(), "Signature should be empty") {
+			return
+		}
+	})
+	t.Run("CompleteJSON", func(t *testing.T) {
+		s := `{
+ "payload": "eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ",
+ "signatures":[
+ {
+ "header": {"kid":"2010-12-29"},
+ "protected":"eyJhbGciOiJSUzI1NiJ9",
+ "signature": "cC4hiUPoj9Eetdgtv3hF80EGrhuB__dzERat0XF9g2VtQgr9PJbu3XOiZj5RZmh7AAuHIm4Bh-0Qc_lF5YKt_O8W2Fp5jujGbds9uJdbF9CUAr7t1dnZcAcQjbKBYNX4BAynRFdiuB--f_nZLgrnbyTyWzO75vRK5h6xBArLIARNPvkSjtQBMHlb1L07Qe7K0GarZRmB_eSN9383LcOLn6_dO--xi12jzDwusC-eOkHWEsqtFZESc6BfI7noOPqvhJ1phCnvWh6IeYI2w9QOYEUipUTI8np6LbgGY9Fs98rqVt5AXLIhWkWywlVmtVrBp0igcN_IoypGlUPQGe77Rw"
+ },
+ {
+ "header": {"kid":"e9bc097a-ce51-4036-9562-d2ade882db0d"},
+ "protected":"eyJhbGciOiJFUzI1NiJ9",
+ "signature": "DtEhU3ljbEg8L38VWAfUAqOyKAM6-Xx-F4GawxaepmXFCgfTjDxw5djxLa8ISlSApmWQxfKTUJqPP3-Kg6NU1Q"
+ }
+ ]
+ }`
+
+		m, err := jws.Parse(strings.NewReader(s))
+		if !assert.NoError(t, err, "Unmarshal complete json serialization") {
+			return
+		}
+
+		if !assert.Len(t, m.Signatures(), 2, "There should be 2 signatures") {
+			return
+		}
+
+		var sigs []*jws.Signature
+		sigs = m.LookupSignature("2010-12-29")
+		if !assert.Len(t, sigs, 1, "There should be 1 signature with kid = '2010-12-29'") {
+			return
+		}
+
+		jsonbuf, err := json.Marshal(m)
+		if !assert.NoError(t, err, "Marshal JSON is successful") {
+			return
+		}
+
+		b := &bytes.Buffer{}
+		json.Compact(b, jsonbuf)
+
+		if !assert.Equal(t, b.Bytes(), jsonbuf, "generated json matches") {
+			return
+		}
+	})
+	t.Run("Protected Header lookup", func(t *testing.T) {
+		s := `{
+ "payload": "eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ",
+ "signatures":[
+ {
+ "header": {"cty":"example"},
+ "protected":"eyJhbGciOiJFUzI1NiIsImtpZCI6ImU5YmMwOTdhLWNlNTEtNDAzNi05NTYyLWQyYWRlODgyZGIwZCJ9",
+ "signature": "JcLb1udPAV72TayGv6eawZKlIQQ3K1NzB0fU7wwYoFypGxEczdCQU-V9jp4WwY2ueJKYeE4fF6jigB0PdSKR0Q"
+ }
+ ]
+ }`
+
+		// Protected Header is {"alg":"ES256","kid":"e9bc097a-ce51-4036-9562-d2ade882db0d"}
+		// This protected header combination forces the parser/unmarshal to go through the code path that populates and looks up protected header fields.
+		// The signature is valid.
+
+		m, err := jws.Parse(strings.NewReader(s))
+		if !assert.NoError(t, err, "Unmarshal complete json serialization") {
+			return
+		}
+		if len(m.Signatures()) != 1 {
+			t.Fatal("There should be 1 signature")
+		}
+
+		var sigs []*jws.Signature
+		sigs = m.LookupSignature("e9bc097a-ce51-4036-9562-d2ade882db0d")
+		if !assert.Len(t, sigs, 1, "There should be 1 signature with kid = 'e9bc097a-ce51-4036-9562-d2ade882db0d'") {
+			return
+		}
+	})
+	t.Run("FlattenedJSON", func(t *testing.T) {
+		s := `{
+ "payload": "eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ",
+ "protected":"eyJhbGciOiJFUzI1NiJ9",
+ "header": {
+ "kid":"e9bc097a-ce51-4036-9562-d2ade882db0d"
+ },
+ "signature": "DtEhU3ljbEg8L38VWAfUAqOyKAM6-Xx-F4GawxaepmXFCgfTjDxw5djxLa8ISlSApmWQxfKTUJqPP3-Kg6NU1Q"
+ }`
+
+		m, err := jws.Parse(strings.NewReader(s))
+		if !assert.NoError(t, err, "Parsing flattened json serialization") {
+			return
+		}
+
+		if !assert.Len(t, m.Signatures(), 1, "There should be 1 signature") {
+			return
+		}
+
+		jsonbuf, _ := json.MarshalIndent(m, "", "  ")
+		t.Logf("%s", jsonbuf)
+	})
+	t.Run("SplitCompact short", func(t *testing.T) {
+		// Create string with X.Y.Z
+		numX := 100
+		numY := 100
+		numZ := 100
+		var largeString = ""
+		for i := 0; i < numX; i++ {
+			largeString += "X"
+		}
+		largeString += "."
+		for i := 0; i < numY; i++ {
+			largeString += "Y"
+		}
+		largeString += "."
+		for i := 0; i < numZ; i++ {
+			largeString += "Z"
+		}
+		x, y, z, err := jws.SplitCompact(strings.NewReader(largeString))
+		if !assert.NoError(t, err, "SplitCompact short string split") {
+			return
+		}
+		if !assert.Len(t, x, numX, "Length of header") {
+			return
+		}
+		if !assert.Len(t, y, numY, "Length of payload") {
+			return
+		}
+		if !assert.Len(t, z, numZ, "Length of signature") {
+			return
+		}
+	})
+	t.Run("SplitCompact long", func(t *testing.T) {
+		// Create string with X.Y.Z
+		numX := 8000
+		numY := 8000
+		numZ := 8000
+		var largeString = ""
+		for i := 0; i < numX; i++ {
+			largeString += "X"
+		}
+		largeString += "."
+		for i := 0; i < numY; i++ {
+			largeString += "Y"
+		}
+		largeString += "."
+		for i := 0; i < numZ; i++ {
+			largeString += "Z"
+		}
+		x, y, z, err := jws.SplitCompact(strings.NewReader(largeString))
+		if !assert.NoError(t, err, "SplitCompact long string split") {
+			return
+		}
+		if !assert.Len(t, x, numX, "Length of header") {
+			return
+		}
+		if !assert.Len(t, y, numY, "Length of payload") {
+			return
+		}
+		if !assert.Len(t, z, numZ, "Length of signature") {
+			return
+		}
+	})
+}
+
+/*
+func TestSign_HeaderValues(t *testing.T) {
+	const jwksrc = `{
+ "kty":"EC",
+ "crv":"P-256",
+ "x":"f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
+ "y":"x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0",
+ "d":"jpsQnnGQmL-YBIffH1136cspYG6-0iY7X1fCE9-E9LI"
+ }`
+
+	privkey, err := ecdsautil.PrivateKeyFromJSON([]byte(jwksrc))
+	if !assert.NoError(t, err, "parsing jwk should be successful") {
+		return
+	}
+
+	payload := []byte("Hello, World!")
+
+	hdr := jws.NewHeader()
+	hdr.KeyID = "helloworld01"
+	encoded, err := jws.Sign(payload, jwa.ES256, privkey, jws.WithPublicHeaders(hdr))
+	if !assert.NoError(t, err, "Sign should succeed") {
+		return
+	}
+
+	// Although we set KeyID to the public header, in compact serialization
+	// there's no difference
+	msg, err := jws.Parse(bytes.NewReader(encoded))
+	if !assert.NoError(t, err, `parse should succeed`) {
+		return
+	}
+
+	if !assert.Equal(t, hdr.KeyID, msg.Signatures[0].ProtectedHeader.KeyID, "KeyID should match") {
+		return
+	}
+
+	verified, err := jws.Verify(encoded, jwa.ES256, &privkey.PublicKey)
+	if !assert.NoError(t, err, "Verify should succeed") {
+		return
+	}
+	if !assert.Equal(t, verified, payload, "Payload should match") {
+		return
+	}
+}
+*/
+
+func TestPublicHeaders(t *testing.T) {
+	key, err := rsa.GenerateKey(rand.Reader, 2048)
+	if !assert.NoError(t, err, "GenerateKey should succeed") {
+		return
+	}
+
+	signer, err := sign.New(jwa.RS256)
+	if !assert.NoError(t, err, "rsasign.NewSigner should succeed") {
+		return
+	}
+	_ = signer // TODO
+
+	pubkey := key.PublicKey
+	pubjwk, err := jwk.New(&pubkey)
+	if !assert.NoError(t, err, "NewRsaPublicKey should succeed") {
+		return
+	}
+	_ = pubjwk // TODO
+
+	/*
+		if !assert.NoError(t, signer.UnprotectedHeaders().Set("jwk", pubjwk), "Set('jwk') should succeed") {
+			return
+		}
+	*/
+}
+
+func TestDecode_ES384Compact_NoSigTrim(t *testing.T) {
+	incoming := "eyJhbGciOiJFUzM4NCIsInR5cCI6IkpXVCIsImtpZCI6IjE5MzFmZTQ0YmFhMWNhZTkyZWUzNzYzOTQ0MDU1OGMwODdlMTRlNjk5ZWU5NjVhM2Q1OGU1MmU2NGY4MDE0NWIifQ.eyJpc3MiOiJicmt0LWNsaS0xLjAuN3ByZTEiLCJpYXQiOjE0ODQ2OTU1MjAsImp0aSI6IjgxYjczY2Y3In0.DdFi0KmPHSv4PfIMGcWGMSRLmZsfRPQ3muLFW6Ly2HpiLFFQWZ0VEanyrFV263wjlp3udfedgw_vrBLz3XC8CkbvCo_xeHMzaTr_yfhjoheSj8gWRLwB-22rOnUX_M0A"
+	t.Logf("incoming = '%s'", incoming)
+	const jwksrc = `{
+ "kty":"EC",
+ "crv":"P-384",
+ "x":"YHVZ4gc1RDoqxKm4NzaN_Y1r7R7h3RM3JMteC478apSKUiLVb4UNytqWaLoE6ygH",
+ "y":"CRKSqP-aYTIsqJfg_wZEEYUayUR5JhZaS2m4NLk2t1DfXZgfApAJ2lBO0vWKnUMp"
+ }`
+
+	pubkey, err := ecdsautil.PublicKeyFromJSON([]byte(jwksrc))
+	if !assert.NoError(t, err, "parsing jwk should be successful") {
+		return
+	}
+	v, err := verify.New(jwa.ES384)
+	if !assert.NoError(t, err, "EcdsaVerify created") {
+		return
+	}
+
+	protected, payload, signature, err := jws.SplitCompact(strings.NewReader(incoming))
+	if !assert.NoError(t, err, `jws.SplitCompact should succeed`) {
+		return
+	}
+
+	var buf bytes.Buffer
+	buf.Write(protected)
+	buf.WriteByte('.')
+	buf.Write(payload)
+
+	decodedSignature := make([]byte, base64.RawURLEncoding.DecodedLen(len(signature)))
+	if _, err := base64.RawURLEncoding.Decode(decodedSignature, signature); !assert.NoError(t, err, `decoding signature should succeed`) {
+		return
+	}
+
+	if !assert.NoError(t, v.Verify(buf.Bytes(), decodedSignature, pubkey), "Verify succeeds") {
+		return
+	}
+}
+
+func TestGHIssue126(t *testing.T) {
+	_, err := jws.Verify([]byte("{}"), jwa.ES384, nil)
+	if !assert.Error(t, err, "Verify should fail") {
+		return
+	}
+
+	if !assert.Equal(t, err.Error(), `invalid JWS message format`) {
+		return
+	}
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/message.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/message.go
new file mode 100644
index 0000000000..fca60855b2
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/message.go
@@ -0,0 +1,45 @@
+package jws
+
+func (s Signature) PublicHeaders() Headers {
+	return s.headers
+}
+
+func (s Signature) ProtectedHeaders() Headers {
+	return s.protected
+}
+
+func (s Signature) Signature() []byte {
+	return s.signature
+}
+
+func (m Message) Payload() []byte {
+	return m.payload
+}
+
+func (m Message) Signatures() []*Signature {
+	return m.signatures
+}
+
+// LookupSignature looks up a particular signature entry using
+// the `kid` value
+func (m Message) LookupSignature(kid string) []*Signature {
+	var sigs []*Signature
+	for _, sig := range m.signatures {
+		if hdr := sig.PublicHeaders(); hdr != nil {
+			hdrKeyId, ok := hdr.Get(KeyIDKey)
+			if ok && hdrKeyId == kid {
+				sigs = append(sigs, sig)
+				continue
+			}
+		}
+
+		if hdr := sig.ProtectedHeaders(); hdr != nil {
+			hdrKeyId, ok := hdr.Get(KeyIDKey)
+			if ok && hdrKeyId == kid {
+				sigs = append(sigs, sig)
+				continue
+			}
+		}
+	}
+	return sigs
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/option.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/option.go
new file mode 100644
index 0000000000..5a0d49147d
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/option.go
@@ -0,0 +1,26 @@
+package jws
+
+import (
+	"github.com/lestrrat-go/jwx/internal/option"
+	"github.com/lestrrat-go/jwx/jws/sign"
+)
+
+type Option = option.Interface
+
+const (
+	optkeyPayloadSigner = `payload-signer`
+	optkeyHeaders       = `headers`
+)
+
+func WithSigner(signer sign.Signer, key interface{}, public, protected Headers) Option {
+	return option.New(optkeyPayloadSigner, &payloadSigner{
+		signer:    signer,
+		key:       key,
+		protected: protected,
+		public:    public,
+	})
+}
+
+func WithHeaders(h Headers) Option {
+	return option.New(optkeyHeaders, h)
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/ecdsa.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/ecdsa.go
new file mode 100644
index 0000000000..a4aca4a2d8
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/ecdsa.go
@@ -0,0 +1,81 @@
+package sign
+
+import (
+	"crypto"
+	"crypto/ecdsa"
+	"crypto/rand"
+
+	"github.com/lestrrat-go/jwx/jwa"
+	"github.com/pkg/errors"
+)
+
+var ecdsaSignFuncs = map[jwa.SignatureAlgorithm]ecdsaSignFunc{}
+
+func init() {
+	algs := map[jwa.SignatureAlgorithm]crypto.Hash{
+		jwa.ES256: crypto.SHA256,
+		jwa.ES384: crypto.SHA384,
+		jwa.ES512: crypto.SHA512,
+	}
+
+	for alg, h := range algs {
+		ecdsaSignFuncs[alg] = makeECDSASignFunc(h)
+	}
+}
+
+func makeECDSASignFunc(hash crypto.Hash) ecdsaSignFunc {
+	return ecdsaSignFunc(func(payload []byte, key *ecdsa.PrivateKey) ([]byte, error) {
+		curveBits := key.Curve.Params().BitSize
+		keyBytes := curveBits /
8 + // Curve bits do not need to be a multiple of 8. + if curveBits%8 > 0 { + keyBytes += 1 + } + h := hash.New() + h.Write(payload) + r, s, err := ecdsa.Sign(rand.Reader, key, h.Sum(nil)) + if err != nil { + return nil, errors.Wrap(err, "failed to sign payload using ecdsa") + } + + rBytes := r.Bytes() + rBytesPadded := make([]byte, keyBytes) + copy(rBytesPadded[keyBytes-len(rBytes):], rBytes) + + sBytes := s.Bytes() + sBytesPadded := make([]byte, keyBytes) + copy(sBytesPadded[keyBytes-len(sBytes):], sBytes) + + out := append(rBytesPadded, sBytesPadded...) + return out, nil + }) +} + +func newECDSA(alg jwa.SignatureAlgorithm) (*ECDSASigner, error) { + signfn, ok := ecdsaSignFuncs[alg] + if !ok { + return nil, errors.Errorf(`unsupported algorithm while trying to create ECDSA signer: %s`, alg) + } + + return &ECDSASigner{ + alg: alg, + sign: signfn, + }, nil +} + +func (s ECDSASigner) Algorithm() jwa.SignatureAlgorithm { + return s.alg +} + +func (s ECDSASigner) Sign(payload []byte, key interface{}) ([]byte, error) { + if key == nil { + return nil, errors.New(`missing private key while signing payload`) + } + + ecdsakey, ok := key.(*ecdsa.PrivateKey) + if !ok { + return nil, errors.Errorf(`invalid key type %T. 
*ecdsa.PrivateKey is required`, key) + } + + return s.sign(payload, ecdsakey) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/ecdsa_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/ecdsa_test.go new file mode 100644 index 0000000000..325b10f425 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/ecdsa_test.go @@ -0,0 +1,34 @@ +package sign + +import ( + "github.com/lestrrat-go/jwx/jwa" + "testing" +) + +func TestECDSASign(t *testing.T) { + type dummyStruct struct { + dummy1 int + dummy2 float64 + } + dummy := &dummyStruct{1, 3.4} + t.Run("ECDSA Creation Error", func(t *testing.T) { + _, err := newECDSA(jwa.HS256) + if err == nil { + t.Fatal("ECDSA Object creation should fail") + } + }) + t.Run("ECDSA Sign Error", func(t *testing.T) { + signer, err := newECDSA(jwa.ES512) + if err != nil { + t.Fatalf("Signer creation failure: %v", jwa.ES512) + } + _, err = signer.Sign([]byte("payload"), dummy) + if err == nil { + t.Fatal("ECDSA Sign should fail") + } + _, err = signer.Sign([]byte("payload"), []byte("")) + if err == nil { + t.Fatal("ECDSA Sign should fail") + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/hmac.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/hmac.go new file mode 100644 index 0000000000..0921f77cd4 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/hmac.go @@ -0,0 +1,63 @@ +package sign + +import ( + "crypto/hmac" + "crypto/sha256" + "crypto/sha512" + "hash" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +var HMACSignFuncs = map[jwa.SignatureAlgorithm]hmacSignFunc{} + +func init() { + algs := map[jwa.SignatureAlgorithm]func() hash.Hash{ + jwa.HS256: sha256.New, + jwa.HS384: sha512.New384, + jwa.HS512: sha512.New, + } + + for alg, h := range algs { + HMACSignFuncs[alg] = 
makeHMACSignFunc(h) + + } +} + +func newHMAC(alg jwa.SignatureAlgorithm) (*HMACSigner, error) { + signer, ok := HMACSignFuncs[alg] + if !ok { + return nil, errors.Errorf(`unsupported algorithm while trying to create HMAC signer: %s`, alg) + } + + return &HMACSigner{ + alg: alg, + sign: signer, + }, nil +} + +func makeHMACSignFunc(hfunc func() hash.Hash) hmacSignFunc { + return hmacSignFunc(func(payload []byte, key []byte) ([]byte, error) { + h := hmac.New(hfunc, key) + h.Write(payload) + return h.Sum(nil), nil + }) +} + +func (s HMACSigner) Algorithm() jwa.SignatureAlgorithm { + return s.alg +} + +func (s HMACSigner) Sign(payload []byte, key interface{}) ([]byte, error) { + hmackey, ok := key.([]byte) + if !ok { + return nil, errors.Errorf(`invalid key type %T. []byte is required`, key) + } + + if len(hmackey) == 0 { + return nil, errors.New(`missing key while signing payload`) + } + + return s.sign(payload, hmackey) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/hmac_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/hmac_test.go new file mode 100644 index 0000000000..b9d7ebf117 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/hmac_test.go @@ -0,0 +1,34 @@ +package sign + +import ( + "github.com/lestrrat-go/jwx/jwa" + "testing" +) + +func TestHMACSign(t *testing.T) { + type dummyStruct struct { + dummy1 int + dummy2 float64 + } + dummy := &dummyStruct{1, 3.4} + t.Run("HMAC Creation Error", func(t *testing.T) { + _, err := newHMAC(jwa.ES256) + if err == nil { + t.Fatal("HMAC Object creation should fail") + } + }) + t.Run("HMAC Sign Error", func(t *testing.T) { + signer, err := newHMAC(jwa.HS512) + if err != nil { + t.Fatalf("Signer creation failure: %v", jwa.HS512) + } + _, err = signer.Sign([]byte("payload"), dummy) + if err == nil { + t.Fatal("HMAC Object creation should fail") + } + _, err = signer.Sign([]byte("payload"), []byte("")) + if err 
== nil { + t.Fatal("HMAC Object creation should fail") + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/interface.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/interface.go new file mode 100644 index 0000000000..f5371a6287 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/interface.go @@ -0,0 +1,44 @@ +package sign + +import ( + "crypto/ecdsa" + "crypto/rsa" + + "github.com/lestrrat-go/jwx/jwa" +) + +type Signer interface { + // Sign creates a signature for the given `payload`. + // `key` is the key used for signing the payload, and is usually + // the private key type associated with the signature method. For example, + // for `jwa.RSXXX` and `jwa.PSXXX` types, you need to pass the + // `*"crypto/rsa".PrivateKey` type. + // Check the documentation for each signer for details + Sign(payload []byte, key interface{}) ([]byte, error) + + Algorithm() jwa.SignatureAlgorithm +} + +type rsaSignFunc func([]byte, *rsa.PrivateKey) ([]byte, error) + +// RSASigner uses crypto/rsa to sign the payloads. +type RSASigner struct { + alg jwa.SignatureAlgorithm + sign rsaSignFunc +} + +type ecdsaSignFunc func([]byte, *ecdsa.PrivateKey) ([]byte, error) + +// ECDSASigner uses crypto/ecdsa to sign the payloads. +type ECDSASigner struct { + alg jwa.SignatureAlgorithm + sign ecdsaSignFunc +} + +type hmacSignFunc func([]byte, []byte) ([]byte, error) + +// HMACSigner uses crypto/hmac to sign the payloads. 
+type HMACSigner struct { + alg jwa.SignatureAlgorithm + sign hmacSignFunc +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/rsa.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/rsa.go new file mode 100644 index 0000000000..1e29390501 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/rsa.go @@ -0,0 +1,95 @@ +package sign + +import ( + "crypto" + "crypto/rand" + "crypto/rsa" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +var rsaSignFuncs = map[jwa.SignatureAlgorithm]rsaSignFunc{} + +func init() { + algs := map[jwa.SignatureAlgorithm]struct { + Hash crypto.Hash + SignFunc func(crypto.Hash) rsaSignFunc + }{ + jwa.RS256: { + Hash: crypto.SHA256, + SignFunc: makeSignPKCS1v15, + }, + jwa.RS384: { + Hash: crypto.SHA384, + SignFunc: makeSignPKCS1v15, + }, + jwa.RS512: { + Hash: crypto.SHA512, + SignFunc: makeSignPKCS1v15, + }, + jwa.PS256: { + Hash: crypto.SHA256, + SignFunc: makeSignPSS, + }, + jwa.PS384: { + Hash: crypto.SHA384, + SignFunc: makeSignPSS, + }, + jwa.PS512: { + Hash: crypto.SHA512, + SignFunc: makeSignPSS, + }, + } + + for alg, item := range algs { + rsaSignFuncs[alg] = item.SignFunc(item.Hash) + } +} + +func makeSignPKCS1v15(hash crypto.Hash) rsaSignFunc { + return rsaSignFunc(func(payload []byte, key *rsa.PrivateKey) ([]byte, error) { + h := hash.New() + h.Write(payload) + return rsa.SignPKCS1v15(rand.Reader, key, hash, h.Sum(nil)) + }) +} + +func makeSignPSS(hash crypto.Hash) rsaSignFunc { + return rsaSignFunc(func(payload []byte, key *rsa.PrivateKey) ([]byte, error) { + h := hash.New() + h.Write(payload) + return rsa.SignPSS(rand.Reader, key, hash, h.Sum(nil), &rsa.PSSOptions{ + SaltLength: rsa.PSSSaltLengthAuto, + }) + }) +} + +func newRSA(alg jwa.SignatureAlgorithm) (*RSASigner, error) { + signfn, ok := rsaSignFuncs[alg] + if !ok { + return nil, errors.Errorf(`unsupported algorithm while trying to create RSA signer: 
%s`, alg) + } + return &RSASigner{ + alg: alg, + sign: signfn, + }, nil +} + +func (s RSASigner) Algorithm() jwa.SignatureAlgorithm { + return s.alg +} + +// Sign creates a signature using crypto/rsa. key must be a non-nil instance of +// `*"crypto/rsa".PrivateKey`. +func (s RSASigner) Sign(payload []byte, key interface{}) ([]byte, error) { + if key == nil { + return nil, errors.New(`missing private key while signing payload`) + } + rsakey, ok := key.(*rsa.PrivateKey) + if !ok { + return nil, errors.Errorf(`invalid key type %T. *rsa.PrivateKey is required`, key) + } + + return s.sign(payload, rsakey) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/sign.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/sign.go new file mode 100644 index 0000000000..1b4474b558 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/sign/sign.go @@ -0,0 +1,20 @@ +package sign + +import ( + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +// New creates a signer that signs payloads using the given signature algorithm. 
+func New(alg jwa.SignatureAlgorithm) (Signer, error) { + switch alg { + case jwa.RS256, jwa.RS384, jwa.RS512, jwa.PS256, jwa.PS384, jwa.PS512: + return newRSA(alg) + case jwa.ES256, jwa.ES384, jwa.ES512: + return newECDSA(alg) + case jwa.HS256, jwa.HS384, jwa.HS512: + return newHMAC(alg) + default: + return nil, errors.Errorf(`unsupported signature algorithm %s`, alg) + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/signer.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/signer.go new file mode 100644 index 0000000000..1351a3b326 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/signer.go @@ -0,0 +1 @@ +package jws diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/signer_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/signer_test.go new file mode 100644 index 0000000000..fa4c01b2dc --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/signer_test.go @@ -0,0 +1,100 @@ +package jws_test + +import ( + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/rsa" + "strings" + "testing" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jws" + "github.com/lestrrat-go/jwx/jws/sign" + "github.com/lestrrat-go/jwx/jws/verify" + "github.com/stretchr/testify/assert" +) + +func TestSign(t *testing.T) { + t.Run("Bad algorithm", func(t *testing.T) { + _, err := jws.Sign([]byte(nil), jwa.SignatureAlgorithm("FooBar"), nil) + if !assert.Error(t, err, "Unknown algorithm should return error") { + return + } + }) + t.Run("No private key", func(t *testing.T) { + _, err := jws.Sign([]byte{'a', 'b', 'c'}, jwa.RS256, nil) + if !assert.Error(t, err, "Sign with no private key should return error") { + return + } + }) + t.Run("RSA verify with no public key", func(t *testing.T) { + _, err := jws.Verify([]byte(nil), jwa.RS256, nil) + if !assert.Error(t, err, "Verify with no 
private key should return error") { + return + } + }) + t.Run("RSA roundtrip", func(t *testing.T) { + rsakey, err := rsa.GenerateKey(rand.Reader, 2048) + if !assert.NoError(t, err, "RSA key generated") { + return + } + + signer, err := sign.New(jwa.RS256) + if !assert.NoError(t, err, `creating a signer should succeed`) { + return + } + + payload := []byte("Hello, world") + + signed, err := signer.Sign(payload, rsakey) + if !assert.NoError(t, err, "Payload signed") { + return + } + + verifier, err := verify.New(jwa.RS256) + if !assert.NoError(t, err, "creating a verifier should succeed") { + return + } + + if !assert.NoError(t, verifier.Verify(payload, signed, &rsakey.PublicKey), "Payload verified") { + return + } + }) +} +func TestSignMulti(t *testing.T) { + rsakey, err := rsa.GenerateKey(rand.Reader, 2048) + if !assert.NoError(t, err, "RSA key generated") { + return + } + + dsakey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if !assert.NoError(t, err, "ECDSA key generated") { + return + } + + s1, err := sign.New(jwa.RS256) + if !assert.NoError(t, err, "RSA Signer created") { + return + } + var s1hdr jws.StandardHeaders + s1hdr.Set(jws.KeyIDKey, "2010-12-29") + + s2, err := sign.New(jwa.ES256) + if !assert.NoError(t, err, "DSA Signer created") { + return + } + var s2hdr jws.StandardHeaders + s2hdr.Set(jws.KeyIDKey, "e9bc097a-ce51-4036-9562-d2ade882db0d") + + v := strings.Join([]string{`{"iss":"joe",`, ` "exp":1300819380,`, ` "http://example.com/is_root":true}`}, "\r\n") + m, err := jws.SignMulti([]byte(v), + jws.WithSigner(s1, rsakey, &s1hdr, nil), + jws.WithSigner(s2, dsakey, &s2hdr, nil), + ) + if !assert.NoError(t, err, "jws.SignMulti should succeed") { + return + } + + t.Logf("%s", m) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/ecdsa.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/ecdsa.go new file mode 100644 index 0000000000..879e1e346f --- /dev/null +++ 
b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/ecdsa.go @@ -0,0 +1,65 @@ +package verify + +import ( + "crypto" + "crypto/ecdsa" + "math/big" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +var ecdsaVerifyFuncs = map[jwa.SignatureAlgorithm]ecdsaVerifyFunc{} + +func init() { + algs := map[jwa.SignatureAlgorithm]crypto.Hash{ + jwa.ES256: crypto.SHA256, + jwa.ES384: crypto.SHA384, + jwa.ES512: crypto.SHA512, + } + + for alg, h := range algs { + ecdsaVerifyFuncs[alg] = makeECDSAVerifyFunc(h) + } +} + +func makeECDSAVerifyFunc(hash crypto.Hash) ecdsaVerifyFunc { + return ecdsaVerifyFunc(func(payload []byte, signature []byte, key *ecdsa.PublicKey) error { + + r, s := &big.Int{}, &big.Int{} + n := len(signature) / 2 + r.SetBytes(signature[:n]) + s.SetBytes(signature[n:]) + + h := hash.New() + h.Write(payload) + + if !ecdsa.Verify(key, h.Sum(nil), r, s) { + return errors.New(`failed to verify signature using ecdsa`) + } + return nil + }) +} + +func newECDSA(alg jwa.SignatureAlgorithm) (*ECDSAVerifier, error) { + verifyfn, ok := ecdsaVerifyFuncs[alg] + if !ok { + return nil, errors.Errorf(`unsupported algorithm while trying to create ECDSA verifier: %s`, alg) + } + + return &ECDSAVerifier{ + verify: verifyfn, + }, nil +} + +func (v ECDSAVerifier) Verify(payload []byte, signature []byte, key interface{}) error { + if key == nil { + return errors.New(`missing public key while verifying payload`) + } + ecdsakey, ok := key.(*ecdsa.PublicKey) + if !ok { + return errors.Errorf(`invalid key type %T. 
*ecdsa.PublicKey is required`, key) + } + + return v.verify(payload, signature, ecdsakey) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/ecdsa_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/ecdsa_test.go new file mode 100644 index 0000000000..465ed0211a --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/ecdsa_test.go @@ -0,0 +1,34 @@ +package verify + +import ( + "github.com/lestrrat-go/jwx/jwa" + "testing" +) + +func TestECDSAVerify(t *testing.T) { + type dummyStruct struct { + dummy1 int + dummy2 float64 + } + dummy := &dummyStruct{1, 3.4} + t.Run("ECDSA Verifier Creation Error", func(t *testing.T) { + _, err := newECDSA(jwa.HS256) + if err == nil { + t.Fatal("ECDSA Verifier Object creation should fail") + } + }) + t.Run("ECDSA Verifier Sign Error", func(t *testing.T) { + pVerifier, err := newECDSA(jwa.ES512) + if err != nil { + t.Fatalf("Signer creation failure: %v", jwa.ES512) + } + err = pVerifier.Verify([]byte("payload"), []byte("signature"), dummy) + if err == nil { + t.Fatal("ECDSA Verification should fail") + } + err = pVerifier.Verify([]byte("payload"), []byte("signature"), nil) + if err == nil { + t.Fatal("ECDSA Verification should fail") + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/hmac.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/hmac.go new file mode 100644 index 0000000000..268ce25921 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/hmac.go @@ -0,0 +1,33 @@ +package verify + +import ( + "crypto/hmac" + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jws/sign" + "github.com/pkg/errors" +) + +func newHMAC(alg jwa.SignatureAlgorithm) (*HMACVerifier, error) { + _, ok := sign.HMACSignFuncs[alg] + if !ok { + return nil, errors.Errorf(`unsupported algorithm while trying to create 
HMAC verifier: %s`, alg) + } + s, err := sign.New(alg) + if err != nil { + return nil, errors.Wrap(err, `failed to generate HMAC signer`) + } + return &HMACVerifier{signer: s}, nil +} + +func (v HMACVerifier) Verify(payload, signature []byte, key interface{}) (err error) { + + expected, err := v.signer.Sign(payload, key) + if err != nil { + return errors.Wrap(err, `failed to generate signature`) + } + + if !hmac.Equal(signature, expected) { + return errors.New(`failed to match hmac signature`) + } + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/hmac_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/hmac_test.go new file mode 100644 index 0000000000..bd676a458f --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/hmac_test.go @@ -0,0 +1,31 @@ +package verify + +import ( + "github.com/lestrrat-go/jwx/jwa" + "testing" +) + +func TestHMACVerify(t *testing.T) { + type dummyStruct struct { + dummy1 int + dummy2 float64 + } + dummy := &dummyStruct{1, 3.4} + t.Run("HMAC Verifier Creation Error", func(t *testing.T) { + _, err := newHMAC(jwa.ES256) + if err == nil { + t.Fatal("HMAC Verifier Object creation should fail") + } + }) + t.Run("HMAC Verifier Sign Error", func(t *testing.T) { + pVerifier, err := newHMAC(jwa.HS512) + if err != nil { + t.Fatalf("Signer creation failure: %v", jwa.HS512) + } + err = pVerifier.Verify([]byte("payload"), []byte("signature"), dummy) + if err == nil { + t.Fatal("HMAC Verification should fail") + } + + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/interface.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/interface.go new file mode 100644 index 0000000000..a17b63e725 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/interface.go @@ -0,0 +1,35 @@ +package verify + +import ( + 
"crypto/ecdsa" + "crypto/rsa" + + "github.com/lestrrat-go/jwx/jws/sign" +) + +type Verifier interface { + // Verify checks whether the payload and signature are valid for + // the given key. + // `key` is the key used for verifying the payload, and is usually + // the public key associated with the signature method. For example, + // for `jwa.RSXXX` and `jwa.PSXXX` types, you need to pass the + // `*"crypto/rsa".PublicKey` type. + // Check the documentation for each verifier for details + Verify(payload []byte, signature []byte, key interface{}) error +} + +type rsaVerifyFunc func([]byte, []byte, *rsa.PublicKey) error + +type RSAVerifier struct { + verify rsaVerifyFunc +} + +type ecdsaVerifyFunc func([]byte, []byte, *ecdsa.PublicKey) error + +type ECDSAVerifier struct { + verify ecdsaVerifyFunc +} + +type HMACVerifier struct { + signer sign.Signer +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/rsa.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/rsa.go new file mode 100644 index 0000000000..4bbf861b05 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/rsa.go @@ -0,0 +1,86 @@ +package verify + +import ( + "crypto" + "crypto/rsa" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +var rsaVerifyFuncs = map[jwa.SignatureAlgorithm]rsaVerifyFunc{} + +func init() { + algs := map[jwa.SignatureAlgorithm]struct { + Hash crypto.Hash + VerifyFunc func(crypto.Hash) rsaVerifyFunc + }{ + jwa.RS256: { + Hash: crypto.SHA256, + VerifyFunc: makeVerifyPKCS1v15, + }, + jwa.RS384: { + Hash: crypto.SHA384, + VerifyFunc: makeVerifyPKCS1v15, + }, + jwa.RS512: { + Hash: crypto.SHA512, + VerifyFunc: makeVerifyPKCS1v15, + }, + jwa.PS256: { + Hash: crypto.SHA256, + VerifyFunc: makeVerifyPSS, + }, + jwa.PS384: { + Hash: crypto.SHA384, + VerifyFunc: makeVerifyPSS, + }, + jwa.PS512: { + Hash: crypto.SHA512, + VerifyFunc: makeVerifyPSS, + }, + } + + for alg, 
item := range algs { + rsaVerifyFuncs[alg] = item.VerifyFunc(item.Hash) + } +} + +func makeVerifyPKCS1v15(hash crypto.Hash) rsaVerifyFunc { + return rsaVerifyFunc(func(payload, signature []byte, key *rsa.PublicKey) error { + h := hash.New() + h.Write(payload) + return rsa.VerifyPKCS1v15(key, hash, h.Sum(nil), signature) + }) +} + +func makeVerifyPSS(hash crypto.Hash) rsaVerifyFunc { + return rsaVerifyFunc(func(payload, signature []byte, key *rsa.PublicKey) error { + h := hash.New() + h.Write(payload) + return rsa.VerifyPSS(key, hash, h.Sum(nil), signature, nil) + }) +} + +func newRSA(alg jwa.SignatureAlgorithm) (*RSAVerifier, error) { + verifyfn, ok := rsaVerifyFuncs[alg] + if !ok { + return nil, errors.Errorf(`unsupported algorithm while trying to create RSA verifier: %s`, alg) + } + + return &RSAVerifier{ + verify: verifyfn, + }, nil +} + +func (v RSAVerifier) Verify(payload, signature []byte, key interface{}) error { + if key == nil { + return errors.New(`missing public key while verifying payload`) + } + rsakey, ok := key.(*rsa.PublicKey) + if !ok { + return errors.Errorf(`invalid key type %T. 
*rsa.PublicKey is required`, key) + } + + return v.verify(payload, signature, rsakey) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/rsa_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/rsa_test.go new file mode 100644 index 0000000000..1126aa27dc --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/rsa_test.go @@ -0,0 +1,34 @@ +package verify + +import ( + "github.com/lestrrat-go/jwx/jwa" + "testing" +) + +func TestRSAVerify(t *testing.T) { + type dummyStruct struct { + dummy1 int + dummy2 float64 + } + dummy := &dummyStruct{1, 3.4} + t.Run("RSA Verifier Creation Error", func(t *testing.T) { + _, err := newRSA(jwa.HS256) + if err == nil { + t.Fatal("RSA Verifier Object creation should fail") + } + }) + t.Run("RSA Verifier Sign Error", func(t *testing.T) { + pVerifier, err := newRSA(jwa.PS512) + if err != nil { + t.Fatalf("Signer creation failure: %v", jwa.PS512) + } + err = pVerifier.Verify([]byte("payload"), []byte("signature"), dummy) + if err == nil { + t.Fatal("RSA Verification should fail") + } + err = pVerifier.Verify([]byte("payload"), []byte("signature"), nil) + if err == nil { + t.Fatal("RSA Verification should fail") + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/verify.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/verify.go new file mode 100644 index 0000000000..368e7b454d --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jws/verify/verify.go @@ -0,0 +1,21 @@ +package verify + +import ( + "github.com/lestrrat-go/jwx/jwa" + "github.com/pkg/errors" +) + +// New creates a new JWS verifier using the specified algorithm +// and the public key +func New(alg jwa.SignatureAlgorithm) (Verifier, error) { + switch alg { + case jwa.RS256, jwa.RS384, jwa.RS512, jwa.PS256, jwa.PS384, jwa.PS512: + return newRSA(alg) + 
case jwa.ES256, jwa.ES384, jwa.ES512: + return newECDSA(alg) + case jwa.HS256, jwa.HS384, jwa.HS512: + return newHMAC(alg) + default: + return nil, errors.Errorf(`unsupported signature algorithm: %s`, alg) + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/README.md b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/README.md new file mode 100644 index 0000000000..962bd21b7a --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/README.md @@ -0,0 +1,78 @@ +# jwt + +JWT tokens + +# SYNOPSIS + +```go +package jwt_test + +import ( + "bytes" + "crypto/rand" + "crypto/rsa" + "encoding/json" + "fmt" + "time" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwt" +) + +func ExampleSignAndParse() { + privKey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + fmt.Printf("failed to generate private key: %s\n", err) + return + } + + var payload []byte + { // Create signed payload + token := jwt.New() + token.Set(`foo`, `bar`) + payload, err = token.Sign(jwa.RS256, privKey) + if err != nil { + fmt.Printf("failed to generate signed payload: %s\n", err) + return + } + } + + { // Parse signed payload + // Use jwt.ParseVerify if you want to make absolutely sure that you + // are going to verify the signatures every time + token, err := jwt.Parse(bytes.NewReader(payload), jwt.WithVerify(jwa.RS256, &privKey.PublicKey)) + if err != nil { + fmt.Printf("failed to parse JWT token: %s\n", err) + return + } + buf, err := json.MarshalIndent(token, "", " ") + if err != nil { + fmt.Printf("failed to generate JSON: %s\n", err) + return + } + fmt.Printf("%s\n", buf) + } +} + +func ExampleToken() { + t := jwt.New() + t.Set(jwt.SubjectKey, `https://github.com/lestrrat-go/jwx/jwt`) + t.Set(jwt.AudienceKey, `Golang Users`) + t.Set(jwt.IssuedAtKey, time.Unix(aLongLongTimeAgo, 0)) + t.Set(`privateClaimKey`, `Hello, World!`) + + buf, err := json.MarshalIndent(t, "", " ") + if err 
!= nil { + fmt.Printf("failed to generate JSON: %s\n", err) + return + } + + fmt.Printf("%s\n", buf) + fmt.Printf("aud -> '%s'\n", t.Audience()) + fmt.Printf("iat -> '%s'\n", t.IssuedAt().Format(time.RFC3339)) + if v, ok := t.Get(`privateClaimKey`); ok { + fmt.Printf("privateClaimKey -> '%s'\n", v) + } + fmt.Printf("sub -> '%s'\n", t.Subject()) +} +``` \ No newline at end of file diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/example_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/example_test.go new file mode 100644 index 0000000000..b52748db79 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/example_test.go @@ -0,0 +1,88 @@ +package jwt_test + +import ( + "bytes" + "crypto/rand" + "crypto/rsa" + "encoding/json" + "fmt" + "time" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwt" +) + +func ExampleSignAndParse() { + privKey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + fmt.Printf("failed to generate private key: %s\n", err) + return + } + + var payload []byte + { // Create signed payload + token := jwt.New() + token.Set(`foo`, `bar`) + payload, err = token.Sign(jwa.RS256, privKey) + if err != nil { + fmt.Printf("failed to generate signed payload: %s\n", err) + return + } + } + + { // Parse signed payload + // Use jwt.ParseVerify if you want to make absolutely sure that you + // are going to verify the signatures every time + token, err := jwt.Parse(bytes.NewReader(payload), jwt.WithVerify(jwa.RS256, &privKey.PublicKey)) + if err != nil { + fmt.Printf("failed to parse JWT token: %s\n", err) + return + } + buf, err := json.MarshalIndent(token, "", " ") + if err != nil { + fmt.Printf("failed to generate JSON: %s\n", err) + return + } + fmt.Printf("%s\n", buf) + } + // OUTPUT: + // { + // "foo": "bar" + // } +} + +func ExampleToken() { + t := jwt.New() + t.Set(jwt.SubjectKey, `https://github.com/lestrrat-go/jwx/jwt`) + 
t.Set(jwt.AudienceKey, `Golang Users`) + t.Set(jwt.IssuedAtKey, time.Unix(aLongLongTimeAgo, 0)) + t.Set(`privateClaimKey`, `Hello, World!`) + + buf, err := json.MarshalIndent(t, "", " ") + if err != nil { + fmt.Printf("failed to generate JSON: %s\n", err) + return + } + + fmt.Printf("%s\n", buf) + fmt.Printf("aud -> '%s'\n", t.Audience()) + fmt.Printf("iat -> '%s'\n", t.IssuedAt().Format(time.RFC3339)) + if v, ok := t.Get(`privateClaimKey`); ok { + fmt.Printf("privateClaimKey -> '%s'\n", v) + } + fmt.Printf("sub -> '%s'\n", t.Subject()) + + // OUTPUT: + // { + // "aud": [ + // "Golang Users" + // ], + // "iat": 233431200, + // "sub": "https://github.com/lestrrat-go/jwx/jwt", + // "privateClaimKey": "Hello, World!" + // } + // aud -> '[Golang Users]' + // iat -> '1977-05-25T18:00:00Z' + // privateClaimKey -> 'Hello, World!' + // sub -> 'https://github.com/lestrrat-go/jwx/jwt' +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/interface.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/interface.go new file mode 100644 index 0000000000..730224eeaa --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/interface.go @@ -0,0 +1,3 @@ +package jwt + +type StringList []string diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/internal/cmd/gentoken/main.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/internal/cmd/gentoken/main.go new file mode 100644 index 0000000000..73c9a7b33d --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/internal/cmd/gentoken/main.go @@ -0,0 +1,370 @@ +package main + +import ( + "bytes" + "fmt" + "go/format" + "log" + "os" + "strconv" + "strings" + + "github.com/pkg/errors" +) + +func main() { + if err := _main(); err != nil { + log.Printf("%s", err) + os.Exit(1) + } +} + +func _main() error { + if err := generateToken(); err != nil { + return errors.Wrap(err, 
`failed to generate token file`) + } + return nil +} + +type tokenField struct { + Name string + JSONKey string + Type string + Comment string + isList bool + hasAccept bool + noDeref bool + elemType string +} + +func (t tokenField) UpperName() string { + return strings.Title(t.Name) +} + +func (t tokenField) IsList() bool { + return t.isList || strings.HasPrefix(t.Type, `[]`) +} + +func (t tokenField) ListElem() string { + if t.elemType != "" { + return t.elemType + } + return strings.TrimPrefix(t.Type, `[]`) +} + +func (t tokenField) IsPointer() bool { + return strings.HasPrefix(t.Type, `*`) +} + +func (t tokenField) PointerElem() string { + return strings.TrimPrefix(t.Type, `*`) +} + +var zerovals = map[string]string{ + `string`: `""`, +} + +func zeroval(s string) string { + if v, ok := zerovals[s]; ok { + return v + } + return `nil` +} + +func generateToken() error { + var buf bytes.Buffer + + var fields = []tokenField{ + { + Name: "audience", + JSONKey: "aud", + Type: "StringList", + Comment: `https://tools.ietf.org/html/rfc7519#section-4.1.3`, + isList: true, + hasAccept: true, + elemType: `string`, + }, + { + Name: "expiration", + JSONKey: "exp", + Type: "*types.NumericDate", + Comment: `https://tools.ietf.org/html/rfc7519#section-4.1.4`, + hasAccept: true, + noDeref: true, + }, + { + Name: "issuedAt", + JSONKey: "iat", + Type: "*types.NumericDate", + Comment: `https://tools.ietf.org/html/rfc7519#section-4.1.6`, + hasAccept: true, + noDeref: true, + }, + { + Name: "issuer", + JSONKey: "iss", + Type: "*string", + Comment: `https://tools.ietf.org/html/rfc7519#section-4.1.1`, + }, + { + Name: "jwtID", + JSONKey: "jti", + Type: "*string", + Comment: `https://tools.ietf.org/html/rfc7519#section-4.1.7`, + }, + { + Name: "notBefore", + JSONKey: "nbf", + Type: "*types.NumericDate", + Comment: `https://tools.ietf.org/html/rfc7519#section-4.1.5`, + hasAccept: true, + noDeref: true, + }, + { + Name: "subject", + JSONKey: "sub", + Type: "*string", + Comment: 
`https://tools.ietf.org/html/rfc7519#section-4.1.2`, + }, + } + + fmt.Fprintf(&buf, "\n// This file is auto-generated. DO NOT EDIT") + fmt.Fprintf(&buf, "\npackage jwt") + fmt.Fprintf(&buf, "\n\nimport (") + for _, pkg := range []string{"bytes", "encoding/json", "time", "github.com/pkg/errors", "github.com/lestrrat-go/jwx/jwt/internal/types"} { + fmt.Fprintf(&buf, "\n%s", strconv.Quote(pkg)) + } + fmt.Fprintf(&buf, "\n)") // end of import + + fmt.Fprintf(&buf, "\n\n// Key names for standard claims") + fmt.Fprintf(&buf, "\nconst (") + for _, field := range fields { + fmt.Fprintf(&buf, "\n%sKey = %s", field.UpperName(), strconv.Quote(field.JSONKey)) + } + fmt.Fprintf(&buf, "\n)") // end const + + fmt.Fprintf(&buf, "\n\n// Token represents a JWT token. The object has convenience accessors") + fmt.Fprintf(&buf, "\n// to %d standard claims including ", len(fields)) + for i, field := range fields { + fmt.Fprintf(&buf, "%s", strconv.Quote(field.JSONKey)) + switch { + case i < len(fields)-2: + fmt.Fprintf(&buf, ", ") + case i == len(fields)-2: + fmt.Fprintf(&buf, " and ") + } + } + fmt.Fprintf(&buf, "\n// which are type-aware (to an extent). Other claims may be accessed via the `Get`/`Set`") + fmt.Fprintf(&buf, "\n// methods but their types are not taken into consideration at all. 
If you have non-standard") + fmt.Fprintf(&buf, "\n// claims that you must frequently access, consider wrapping the token in a wrapper") + fmt.Fprintf(&buf, "\n// by embedding the jwt.Token type in it") + fmt.Fprintf(&buf, "\ntype Token struct {") + for _, field := range fields { + fmt.Fprintf(&buf, "\n%s %s `json:\"%s,omitempty\"` // %s", field.Name, field.Type, field.JSONKey, field.Comment) + } + fmt.Fprintf(&buf, "\nprivateClaims map[string]interface{} `json:\"-\"`") + fmt.Fprintf(&buf, "\n}") // end type Token + + fmt.Fprintf(&buf, "\n\nfunc (t *Token) Get(s string) (interface{}, bool) {") + fmt.Fprintf(&buf, "\nswitch s {") + for _, field := range fields { + fmt.Fprintf(&buf, "\ncase %sKey:", field.UpperName()) + switch { + case field.IsList(): + fmt.Fprintf(&buf, "\nif len(t.%s) == 0 {", field.Name) + fmt.Fprintf(&buf, "\nreturn nil, false") + fmt.Fprintf(&buf, "\n}") // end if len(t.%s) == 0 + fmt.Fprintf(&buf, "\nreturn ") + // some types such as `aud` need explicit conversion + var pre, post string + if field.Type == "StringList" { + pre = "[]string(" + post = ")" + } + fmt.Fprintf(&buf, "%st.%s%s, true", pre, field.Name, post) + case field.IsPointer(): + fmt.Fprintf(&buf, "\nif t.%s == nil {", field.Name) + fmt.Fprintf(&buf, "\nreturn nil, false") + fmt.Fprintf(&buf, "\n} else {") + if field.noDeref { + if field.Type == "*types.NumericDate" { + fmt.Fprintf(&buf, "\nreturn t.%s.Get(), true", field.Name) + } else { + fmt.Fprintf(&buf, "\nreturn t.%s, true", field.Name) + } + } else { + fmt.Fprintf(&buf, "\nreturn *(t.%s), true", field.Name) + } + fmt.Fprintf(&buf, "\n}") // end if t.%s != nil + } + } + fmt.Fprintf(&buf, "\n}") // end switch + fmt.Fprintf(&buf, "\nif v, ok := t.privateClaims[s]; ok {") + fmt.Fprintf(&buf, "\nreturn v, true") + fmt.Fprintf(&buf, "\n}") // end if v, ok := t.privateClaims[s] + fmt.Fprintf(&buf, "\nreturn nil, false") + fmt.Fprintf(&buf, "\n}") // end of Get + + fmt.Fprintf(&buf, "\n\nfunc (t *Token) Set(name string, v 
interface{}) error {") + fmt.Fprintf(&buf, "\nswitch name {") + for _, field := range fields { + fmt.Fprintf(&buf, "\ncase %sKey:", field.UpperName()) + switch { + case field.hasAccept: + if field.IsPointer() { + fmt.Fprintf(&buf, "\nvar x %s", field.PointerElem()) + } else { + fmt.Fprintf(&buf, "\nvar x %s", field.Type) + } + fmt.Fprintf(&buf, "\nif err := x.Accept(v); err != nil {") + fmt.Fprintf(&buf, "\nreturn errors.Wrap(err, `invalid value for '%s' key`)", field.Name) + fmt.Fprintf(&buf, "\n}") + if field.IsPointer() { + fmt.Fprintf(&buf, "\nt.%s = &x", field.Name) + } else { + fmt.Fprintf(&buf, "\nt.%s = x", field.Name) + } + case field.IsPointer(): + fmt.Fprintf(&buf, "\nx, ok := v.(%s)", field.PointerElem()) + fmt.Fprintf(&buf, "\nif !ok {") + fmt.Fprintf(&buf, "\nreturn errors.Errorf(`invalid type for '%s' key: %%T`, v)", field.Name) + fmt.Fprintf(&buf, "\n}") // end if !ok + fmt.Fprintf(&buf, "\nt.%s = &x", field.Name) + case field.IsList(): + fmt.Fprintf(&buf, "\nswitch x := v.(type) {") + fmt.Fprintf(&buf, "\ncase %s:", field.ListElem()) + fmt.Fprintf(&buf, "\nt.%s = []string{x}", field.Name) + fmt.Fprintf(&buf, "\ncase %s:", field.Type) + fmt.Fprintf(&buf, "\nt.%s = x", field.Name) + fmt.Fprintf(&buf, "\ndefault:") + fmt.Fprintf(&buf, "\nreturn errors.Errorf(`invalid type for '%s' key: %%T`, v)", field.Name) + fmt.Fprintf(&buf, "\n}") // end of switch x := v.(type) {") + } + } + fmt.Fprintf(&buf, "\ndefault:") + fmt.Fprintf(&buf, "\nif t.privateClaims == nil {") + fmt.Fprintf(&buf, "\nt.privateClaims = make(map[string]interface{})") + fmt.Fprintf(&buf, "\n}") // end if h.privateParams == nil + fmt.Fprintf(&buf, "\nt.privateClaims[name] = v") + fmt.Fprintf(&buf, "\n}") // end switch name + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") // end func (h *StandardHeaders) Set(name string, value interface{}) + + for _, field := range fields { + switch { + case field.IsList(): + fmt.Fprintf(&buf, "\n\nfunc (t Token) %s() %s {", 
field.UpperName(), field.Type) + fmt.Fprintf(&buf, "\nif v, ok := t.Get(%sKey); ok {", field.UpperName()) + fmt.Fprintf(&buf, "\nreturn v.([]string)") + fmt.Fprintf(&buf, "\n}") // end if v, ok := t.Get(%sKey) + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") // end func (t Token) %s() %s + case field.Type == "*types.NumericDate": + fmt.Fprintf(&buf, "\n\nfunc (t Token) %s() time.Time {", field.UpperName()) + fmt.Fprintf(&buf, "\nif v, ok := t.Get(%sKey); ok {", field.UpperName()) + fmt.Fprintf(&buf, "\nreturn v.(time.Time)") + fmt.Fprintf(&buf, "\n}") + fmt.Fprintf(&buf, "\nreturn time.Time{}") + fmt.Fprintf(&buf, "\n}") // end func (t Token) %s() + case field.IsPointer(): + fmt.Fprintf(&buf, "\n\n// %s is a convenience function to retrieve the corresponding value store in the token", field.UpperName()) + fmt.Fprintf(&buf, "\n// if there is a problem retrieving the value, the zero value is returned. If you need to differentiate between existing/non-existing values, use `Get` instead") + fmt.Fprintf(&buf, "\n\nfunc (t Token) %s() %s {", field.UpperName(), field.PointerElem()) + fmt.Fprintf(&buf, "\nif v, ok := t.Get(%sKey); ok {", field.UpperName()) + fmt.Fprintf(&buf, "\nreturn v.(%s)", field.PointerElem()) + fmt.Fprintf(&buf, "\n}") // end if v, ok := t.Get(%sKey) + fmt.Fprintf(&buf, "\nreturn %s", zeroval(field.PointerElem())) + fmt.Fprintf(&buf, "\n}") // end func (t Token) %s() %s + } + } + + // JSON related stuff + fmt.Fprintf(&buf, "\n\n// this is almost identical to json.Encoder.Encode(), but we use Marshal") + fmt.Fprintf(&buf, "\n// to avoid having to remove the trailing newline for each successive") + fmt.Fprintf(&buf, "\n// call to Encode()") + fmt.Fprintf(&buf, "\nfunc writeJSON(buf *bytes.Buffer, v interface{}, keyName string) error {") + fmt.Fprintf(&buf, "\nif enc, err := json.Marshal(v); err != nil {") + fmt.Fprintf(&buf, "\nreturn errors.Wrapf(err, `failed to encode '%%s'`, keyName)") + fmt.Fprintf(&buf, "\n} else {") + 
fmt.Fprintf(&buf, "\nbuf.Write(enc)")
+	fmt.Fprintf(&buf, "\n}")
+	fmt.Fprintf(&buf, "\nreturn nil")
+	fmt.Fprintf(&buf, "\n}")
+
+	fmt.Fprintf(&buf, "\n\n// MarshalJSON serializes the token in JSON format. This exists to")
+	fmt.Fprintf(&buf, "\n// allow flattening of private claims.")
+	fmt.Fprintf(&buf, "\nfunc (t Token) MarshalJSON() ([]byte, error) {")
+	fmt.Fprintf(&buf, "\nvar buf bytes.Buffer")
+	fmt.Fprintf(&buf, "\nbuf.WriteRune('{')")
+
+	for i, field := range fields {
+		if strings.HasPrefix(field.Type, "*") {
+			fmt.Fprintf(&buf, "\nif t.%s != nil {", field.Name)
+		} else {
+			fmt.Fprintf(&buf, "\nif len(t.%s) > 0 {", field.Name)
+		}
+		if i > 0 {
+			fmt.Fprintf(&buf, "\nif buf.Len() > 1 {")
+			fmt.Fprintf(&buf, "\nbuf.WriteRune(',')")
+			fmt.Fprintf(&buf, "\n}")
+		}
+		fmt.Fprintf(&buf, "\nbuf.WriteRune('\"')")
+		fmt.Fprintf(&buf, "\nbuf.WriteString(%sKey)", field.UpperName())
+		fmt.Fprintf(&buf, "\nbuf.WriteString(`\":`)")
+		fmt.Fprintf(&buf, "\nif err := writeJSON(&buf, t.%s, %sKey); err != nil {", field.Name, field.UpperName())
+		fmt.Fprintf(&buf, "\nreturn nil, err")
+		fmt.Fprintf(&buf, "\n}")
+		fmt.Fprintf(&buf, "\n}")
+	}
+
+	fmt.Fprintf(&buf, "\nif len(t.privateClaims) == 0 {")
+	fmt.Fprintf(&buf, "\nbuf.WriteRune('}')")
+	fmt.Fprintf(&buf, "\nreturn buf.Bytes(), nil")
+	fmt.Fprintf(&buf, "\n}")
+
+	fmt.Fprintf(&buf, "\n// If private claims exist, they need to be flattened and included in the token")
+	fmt.Fprintf(&buf, "\npcjson, err := json.Marshal(t.privateClaims)")
+	fmt.Fprintf(&buf, "\nif err != nil {")
+	fmt.Fprintf(&buf, "\nreturn nil, errors.Wrap(err, `failed to marshal private claims`)")
+	fmt.Fprintf(&buf, "\n}")
+
+	fmt.Fprintf(&buf, "\n// remove '{' from the private claims")
+	fmt.Fprintf(&buf, "\npcjson = pcjson[1:]")
+	fmt.Fprintf(&buf, "\nif buf.Len() > 1 {")
+	fmt.Fprintf(&buf, "\nbuf.WriteRune(',')")
+	fmt.Fprintf(&buf, "\n}")
+	fmt.Fprintf(&buf, "\nbuf.Write(pcjson)")
+	fmt.Fprintf(&buf, "\nreturn buf.Bytes(), nil")
+	fmt.Fprintf(&buf,
"\n}") + + fmt.Fprintf(&buf, "\n\n// UnmarshalJSON deserializes data from a JSON data buffer into a Token") + fmt.Fprintf(&buf, "\nfunc (t *Token) UnmarshalJSON(data []byte) error {") + fmt.Fprintf(&buf, "\nvar m map[string]interface{}") + fmt.Fprintf(&buf, "\nif err := json.Unmarshal(data, &m); err != nil {") + fmt.Fprintf(&buf, "\nreturn errors.Wrap(err, `failed to unmarshal token`)") + fmt.Fprintf(&buf, "\n}") + fmt.Fprintf(&buf, "\nfor name, value := range m {") + fmt.Fprintf(&buf, "\nif err := t.Set(name, value); err != nil {") + fmt.Fprintf(&buf, "\nreturn errors.Wrapf(err, `failed to set value for %%s`, name)") + fmt.Fprintf(&buf, "\n}") + fmt.Fprintf(&buf, "\n}") + fmt.Fprintf(&buf, "\nreturn nil") + fmt.Fprintf(&buf, "\n}") + + formatted, err := format.Source(buf.Bytes()) + if err != nil { + log.Printf("%s", buf.Bytes()) + log.Printf("%s", err) + return errors.Wrap(err, `failed to format source`) + } + + filename := "token_gen.go" + f, err := os.Create(filename) + if err != nil { + return errors.Wrapf(err, `failed to open file %s for writing`, filename) + } + defer f.Close() + + f.Write(formatted) + return nil +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/internal/types/date.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/internal/types/date.go new file mode 100644 index 0000000000..2f0052103e --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/internal/types/date.go @@ -0,0 +1,83 @@ +package types + +import ( + "encoding/json" + "strconv" + "time" + + "github.com/pkg/errors" +) + +// NumericDate represents the date format used in the 'nbf' claim +type NumericDate struct { + time.Time +} + +func (n *NumericDate) Get() time.Time { + if n == nil { + return (time.Time{}).UTC() + } + return n.Time +} + +func numericToTime(v interface{}, t *time.Time) bool { + var n int64 + switch x := v.(type) { + case int64: + n = x + case int32: + n = int64(x) + case int16: 
+ n = int64(x) + case int8: + n = int64(x) + case int: + n = int64(x) + case float32: + n = int64(x) + case float64: + n = int64(x) + default: + return false + } + + *t = time.Unix(n, 0) + return true +} + +func (n *NumericDate) Accept(v interface{}) error { + var t time.Time + + switch x := v.(type) { + case string: + i, err := strconv.ParseInt(string(x[:]), 10, 64) + if err != nil { + return errors.Errorf(`invalid epoch value`) + } + t = time.Unix(i, 0) + + case json.Number: + intval, err := x.Int64() + if err != nil { + return errors.Wrap(err, `failed to convert json value to int64`) + } + t = time.Unix(intval, 0) + case time.Time: + t = x + default: + if !numericToTime(v, &t) { + return errors.Errorf(`invalid type %T`, v) + } + } + n.Time = t.UTC() + return nil +} + +// MarshalJSON translates from internal representation to JSON NumericDate +// See https://tools.ietf.org/html/rfc7519#page-6 +func (n *NumericDate) MarshalJSON() ([]byte, error) { + if n.IsZero() { + return json.Marshal(nil) + } + return json.Marshal(n.Unix()) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/internal/types/date_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/internal/types/date_test.go new file mode 100644 index 0000000000..4c1b84e333 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/internal/types/date_test.go @@ -0,0 +1,37 @@ +package types_test + +import ( + "encoding/json" + "fmt" + "reflect" + "testing" + "time" + + "github.com/lestrrat-go/jwx/jwt" +) + +func TestDate(t *testing.T) { + t.Run("Accept values", func(t *testing.T) { + // NumericDate allows assignment from various different Go types, + // so that it's easier for the devs, and conversion to/from JSON + // use of "127" is just to allow use of int8's + now := time.Unix(127, 0).UTC() + for _, ut := range []interface{}{int64(127), int32(127), int16(127), int8(127), float32(127), float64(127), json.Number("127")} { 
+		t.Run(fmt.Sprintf("%T", ut), func(t *testing.T) {
+			var t1 jwt.Token
+			err := t1.Set(jwt.IssuedAtKey, ut)
+			if err != nil {
+				t.Fatalf("Failed to set IssuedAt value: %v", ut)
+			}
+			v, ok := t1.Get(jwt.IssuedAtKey)
+			if !ok {
+				t.Fatal("Failed to retrieve IssuedAt value")
+			}
+			realized := v.(time.Time)
+			if !reflect.DeepEqual(now, realized) {
+				t.Fatalf("Token time mismatch. Expected:Realized (%v:%v)", now, realized)
+			}
+		})
+	}
+	})
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/jwt.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/jwt.go
new file mode 100644
index 0000000000..fd804fba65
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/jwt.go
@@ -0,0 +1,106 @@
+//go:generate go run internal/cmd/gentoken/main.go
+
+// Package jwt implements JSON Web Tokens as described in https://tools.ietf.org/html/rfc7519
+package jwt
+
+import (
+	"bytes"
+	"encoding/json"
+	"io"
+	"io/ioutil"
+	"strings"
+
+	"github.com/lestrrat-go/jwx/jwa"
+	"github.com/lestrrat-go/jwx/jws"
+	"github.com/pkg/errors"
+)
+
+// ParseString calls Parse with the given string
+func ParseString(s string, options ...Option) (*Token, error) {
+	return Parse(strings.NewReader(s), options...)
+}
+
+// ParseBytes calls Parse with the given byte sequence
+func ParseBytes(s []byte, options ...Option) (*Token, error) {
+	return Parse(bytes.NewReader(s), options...)
+}
+
+// Parse parses the JWT token payload and creates a new `jwt.Token` object.
+// The token must be encoded in either JSON format or compact format.
+//
+// If the token is signed and you want to verify the payload, you must
+// pass the jwt.WithVerify(alg, key) option. If you do not specify these
+// parameters, no verification will be performed.
+func Parse(src io.Reader, options ...Option) (*Token, error) { + var params VerifyParameters + for _, o := range options { + switch o.Name() { + case optkeyVerify: + params = o.Value().(VerifyParameters) + } + } + + if params != nil { + return ParseVerify(src, params.Algorithm(), params.Key()) + } + + m, err := jws.Parse(src) + if err != nil { + return nil, errors.Wrap(err, `invalid jws message`) + } + + token := New() + if err := json.Unmarshal(m.Payload(), token); err != nil { + return nil, errors.Wrap(err, `failed to parse token`) + } + return token, nil +} + +// ParseVerify is a function that is similar to Parse(), but does not +// allow for parsing without signature verification parameters. +func ParseVerify(src io.Reader, alg jwa.SignatureAlgorithm, key interface{}) (*Token, error) { + data, err := ioutil.ReadAll(src) + if err != nil { + return nil, errors.Wrap(err, `failed to read token from source`) + } + + v, err := jws.Verify(data, alg, key) + if err != nil { + return nil, errors.Wrap(err, `failed to verify jws signature`) + } + + var token Token + if err := json.Unmarshal(v, &token); err != nil { + return nil, errors.Wrap(err, `failed to parse token`) + } + return &token, nil +} + +// New creates a new empty JWT token +func New() *Token { + return &Token{} +} + +// Sign is a convenience function to create a signed JWT token serialized in +// compact form. 
`key` must match the key type required by the given
+// signature method `method`
+func (t *Token) Sign(method jwa.SignatureAlgorithm, key interface{}) ([]byte, error) {
+	buf, err := json.Marshal(t)
+	if err != nil {
+		return nil, errors.Wrap(err, `failed to marshal token`)
+	}
+
+	var hdr jws.StandardHeaders
+	if err := hdr.Set(`alg`, method.String()); err != nil {
+		return nil, errors.Wrap(err, `failed to sign payload`)
+	}
+	if err := hdr.Set(`typ`, `JWT`); err != nil {
+		return nil, errors.Wrap(err, `failed to sign payload`)
+	}
+	sign, err := jws.Sign(buf, method, key, jws.WithHeaders(&hdr))
+	if err != nil {
+		return nil, errors.Wrap(err, `failed to sign payload`)
+	}
+
+	return sign, nil
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/jwt_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/jwt_test.go
new file mode 100644
index 0000000000..b10b114cb0
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/jwt_test.go
@@ -0,0 +1,253 @@
+package jwt_test
+
+import (
+	"bytes"
+	"crypto/ecdsa"
+	"crypto/elliptic"
+	"crypto/rand"
+	"crypto/rsa"
+	"encoding/json"
+	"strings"
+	"testing"
+	"time"
+
+	"github.com/lestrrat-go/jwx/jwa"
+	"github.com/lestrrat-go/jwx/jws"
+	"github.com/lestrrat-go/jwx/jwt"
+	"github.com/stretchr/testify/assert"
+)
+
+func TestJWTParse(t *testing.T) {
+
+	alg := jwa.RS256
+	key, err := rsa.GenerateKey(rand.Reader, 2048)
+	if err != nil {
+		t.Fatal("Failed to generate RSA key")
+	}
+	t1 := jwt.New()
+	signed, err := t1.Sign(alg, key)
+	if err != nil {
+		t.Fatal("Failed to sign JWT")
+	}
+
+	t.Run("Parse (no signature verification)", func(t *testing.T) {
+		t2, err := jwt.Parse(bytes.NewReader(signed))
+		if !assert.NoError(t, err, `jwt.Parse should succeed`) {
+			return
+		}
+		if !assert.Equal(t, t1, t2, `t1 == t2`) {
+			return
+		}
+	})
+	t.Run("ParseString (no signature verification)", func(t *testing.T) {
+		t2, err := jwt.ParseString(string(signed))
+		if
!assert.NoError(t, err, `jwt.ParseString should succeed`) { + return + } + if !assert.Equal(t, t1, t2, `t1 == t2`) { + return + } + }) + t.Run("ParseBytes (no signature verification)", func(t *testing.T) { + t2, err := jwt.ParseBytes(signed) + if !assert.NoError(t, err, `jwt.ParseBytes should succeed`) { + return + } + if !assert.Equal(t, t1, t2, `t1 == t2`) { + return + } + }) + t.Run("Parse (correct signature key)", func(t *testing.T) { + t2, err := jwt.Parse(bytes.NewReader(signed), jwt.WithVerify(alg, &key.PublicKey)) + if !assert.NoError(t, err, `jwt.Parse should succeed`) { + return + } + if !assert.Equal(t, t1, t2, `t1 == t2`) { + return + } + }) + t.Run("parse (wrong signature algorithm)", func(t *testing.T) { + _, err := jwt.Parse(bytes.NewReader(signed), jwt.WithVerify(jwa.RS512, &key.PublicKey)) + if !assert.Error(t, err, `jwt.Parse should fail`) { + return + } + }) + t.Run("parse (wrong signature key)", func(t *testing.T) { + pubkey := key.PublicKey + pubkey.E = 0 // bogus value + _, err := jwt.Parse(bytes.NewReader(signed), jwt.WithVerify(alg, &pubkey)) + if !assert.Error(t, err, `jwt.Parse should fail`) { + return + } + }) +} + +func TestJWTParseVerify(t *testing.T) { + alg := jwa.RS256 + key, err := rsa.GenerateKey(rand.Reader, 2048) + if !assert.NoError(t, err, "RSA key generated") { + return + } + + t1 := jwt.New() + signed, err := t1.Sign(alg, key) + + t.Run("parse (no signature verification)", func(t *testing.T) { + _, err := jwt.ParseVerify(bytes.NewReader(signed), "", nil) + if !assert.Error(t, err, `jwt.ParseVerify should fail`) { + return + } + }) + t.Run("parse (correct signature key)", func(t *testing.T) { + t2, err := jwt.ParseVerify(bytes.NewReader(signed), alg, &key.PublicKey) + if !assert.NoError(t, err, `jwt.ParseVerify should succeed`) { + return + } + if !assert.Equal(t, t1, t2, `t1 == t2`) { + return + } + }) + t.Run("parse (wrong signature algorithm)", func(t *testing.T) { + _, err := jwt.ParseVerify(bytes.NewReader(signed), 
jwa.RS512, &key.PublicKey) + if !assert.Error(t, err, `jwt.ParseVerify should fail`) { + return + } + }) + t.Run("parse (wrong signature key)", func(t *testing.T) { + pubkey := key.PublicKey + pubkey.E = 0 // bogus value + _, err := jwt.ParseVerify(bytes.NewReader(signed), alg, &pubkey) + if !assert.Error(t, err, `jwt.ParseVerify should fail`) { + return + } + }) +} + +func TestVerifyClaims(t *testing.T) { + // GitHub issue #37: tokens are invalid in the second they are created (because Now() is not after IssuedAt()) + t.Run(jwt.IssuedAtKey+"+skew", func(t *testing.T) { + token := jwt.New() + now := time.Now().UTC() + token.Set(jwt.IssuedAtKey, now) + + const DefaultSkew = 0 + + args := []jwt.Option{ + jwt.WithClock(jwt.ClockFunc(func() time.Time { return now })), + jwt.WithAcceptableSkew(DefaultSkew), + } + + if !assert.NoError(t, token.Verify(args...), "token.Verify should validate tokens in the same second they are created") { + if now.Equal(token.IssuedAt()) { + t.Errorf("iat claim failed: iat == now") + } + return + } + }) +} + +const aLongLongTimeAgo = 233431200 +const aLongLongTimeAgoString = "233431200" + +func TestUnmarshal(t *testing.T) { + testcases := []struct { + Title string + Source string + Expected func() *jwt.Token + ExpectedJSON string + }{ + { + Title: "single aud", + Source: `{"aud":"foo"}`, + Expected: func() *jwt.Token { + t := jwt.New() + t.Set("aud", "foo") + return t + }, + ExpectedJSON: `{"aud":["foo"]}`, + }, + { + Title: "multiple aud's", + Source: `{"aud":["foo","bar"]}`, + Expected: func() *jwt.Token { + t := jwt.New() + t.Set("aud", []string{"foo", "bar"}) + return t + }, + ExpectedJSON: `{"aud":["foo","bar"]}`, + }, + { + Title: "issuedAt", + Source: `{"` + jwt.IssuedAtKey + `":` + aLongLongTimeAgoString + `}`, + Expected: func() *jwt.Token { + t := jwt.New() + t.Set(jwt.IssuedAtKey, aLongLongTimeAgo) + return t + }, + ExpectedJSON: `{"` + jwt.IssuedAtKey + `":` + aLongLongTimeAgoString + `}`, + }, + } + + for _, tc := range 
testcases { + t.Run(tc.Title, func(t *testing.T) { + var token jwt.Token + if !assert.NoError(t, json.Unmarshal([]byte(tc.Source), &token), `json.Unmarshal should succeed`) { + return + } + if !assert.Equal(t, tc.Expected(), &token, `token should match expected value`) { + return + } + + var buf bytes.Buffer + if !assert.NoError(t, json.NewEncoder(&buf).Encode(token), `json.Marshal should succeed`) { + return + } + if !assert.Equal(t, tc.ExpectedJSON, strings.TrimSpace(buf.String()), `json should match`) { + return + } + }) + } +} + +func TestGH52(t *testing.T) { + priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if !assert.NoError(t, err) { + return + } + + pub := &priv.PublicKey + if !assert.NoError(t, err) { + return + } + for i := 0; i < 1000; i++ { + tok := jwt.New() + + s, err := tok.Sign(jwa.ES256, priv) + if !assert.NoError(t, err) { + return + } + + if _, err = jws.Verify([]byte(s), jwa.ES256, pub); !assert.NoError(t, err, `test should pass (run %d)`, i) { + return + } + } +} + +func TestUnmarshalJSON(t *testing.T) { + + t.Run("Unmarshal audience with multiple values", func(t *testing.T) { + var t1 jwt.Token + if !assert.NoError(t, json.Unmarshal([]byte(`{"aud":["foo", "bar", "baz"]}`), &t1), `jwt.Parse should succeed`) { + return + } + aud, ok := t1.Get(jwt.AudienceKey) + if !assert.True(t, ok, `jwt.Get(jwt.AudienceKey) should succeed`) { + t.Logf("%#v", t1) + return + } + + if !assert.Equal(t, aud.([]string), []string{"foo", "bar", "baz"}, "audience should match. 
got %v", aud) { + return + } + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/options.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/options.go new file mode 100644 index 0000000000..9f495bbe3a --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/options.go @@ -0,0 +1,37 @@ +package jwt + +import ( + "github.com/lestrrat-go/jwx/internal/option" + "github.com/lestrrat-go/jwx/jwa" +) + +type Option = option.Interface + +const ( + optkeyVerify = `verify` +) + +type VerifyParameters interface { + Algorithm() jwa.SignatureAlgorithm + Key() interface{} +} + +type verifyParams struct { + alg jwa.SignatureAlgorithm + key interface{} +} + +func (p *verifyParams) Algorithm() jwa.SignatureAlgorithm { + return p.alg +} + +func (p *verifyParams) Key() interface{} { + return p.key +} + +func WithVerify(alg jwa.SignatureAlgorithm, key interface{}) Option { + return option.New(optkeyVerify, &verifyParams{ + alg: alg, + key: key, + }) +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/string.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/string.go new file mode 100644 index 0000000000..8bf1a0ea62 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/string.go @@ -0,0 +1,37 @@ +package jwt + +import ( + "encoding/json" + + "github.com/pkg/errors" +) + +func (l *StringList) Accept(v interface{}) error { + switch x := v.(type) { + case string: + *l = StringList([]string{x}) + case []string: + *l = StringList(x) + case []interface{}: + list := make(StringList, len(x)) + for i, e := range x { + if s, ok := e.(string); ok { + list[i] = s + continue + } + return errors.Errorf(`invalid list element type %T`, e) + } + *l = list + default: + return errors.Errorf(`invalid type: %T`, v) + } + return nil +} + +func (l *StringList) UnmarshalJSON(data []byte) error { + var v interface{} + if err := 
json.Unmarshal(data, &v); err != nil {
+		return errors.Wrap(err, `failed to unmarshal data`)
+	}
+	return l.Accept(v)
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/string_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/string_test.go
new file mode 100644
index 0000000000..8458e6ee1c
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/string_test.go
@@ -0,0 +1,18 @@
+package jwt_test
+
+import (
+	"github.com/lestrrat-go/jwx/jwt"
+	"testing"
+)
+
+func TestStringList_Accept(t *testing.T) {
+
+	var x jwt.StringList
+	interfaceList := make([]interface{}, 0)
+	interfaceList = append(interfaceList, "first")
+	interfaceList = append(interfaceList, "second")
+	err := x.Accept(interfaceList)
+	if err != nil {
+		t.Fatalf("Failed to convert []interface{} into StringList: %s", err.Error())
+	}
+}
diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/token_gen.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/token_gen.go
new file mode 100644
index 0000000000..5e2f5ac82d
--- /dev/null
+++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/token_gen.go
@@ -0,0 +1,322 @@
+// This file is auto-generated. DO NOT EDIT
+package jwt
+
+import (
+	"bytes"
+	"encoding/json"
+	"github.com/lestrrat-go/jwx/jwt/internal/types"
+	"github.com/pkg/errors"
+	"time"
+)
+
+// Key names for standard claims
+const (
+	AudienceKey = "aud"
+	ExpirationKey = "exp"
+	IssuedAtKey = "iat"
+	IssuerKey = "iss"
+	JwtIDKey = "jti"
+	NotBeforeKey = "nbf"
+	SubjectKey = "sub"
+)
+
+// Token represents a JWT token. The object has convenience accessors
+// to 7 standard claims including "aud", "exp", "iat", "iss", "jti", "nbf" and "sub"
+// which are type-aware (to an extent). Other claims may be accessed via the `Get`/`Set`
+// methods but their types are not taken into consideration at all.
If you have non-standard +// claims that you must frequently access, consider wrapping the token in a wrapper +// by embedding the jwt.Token type in it +type Token struct { + audience StringList `json:"aud,omitempty"` // https://tools.ietf.org/html/rfc7519#section-4.1.3 + expiration *types.NumericDate `json:"exp,omitempty"` // https://tools.ietf.org/html/rfc7519#section-4.1.4 + issuedAt *types.NumericDate `json:"iat,omitempty"` // https://tools.ietf.org/html/rfc7519#section-4.1.6 + issuer *string `json:"iss,omitempty"` // https://tools.ietf.org/html/rfc7519#section-4.1.1 + jwtID *string `json:"jti,omitempty"` // https://tools.ietf.org/html/rfc7519#section-4.1.7 + notBefore *types.NumericDate `json:"nbf,omitempty"` // https://tools.ietf.org/html/rfc7519#section-4.1.5 + subject *string `json:"sub,omitempty"` // https://tools.ietf.org/html/rfc7519#section-4.1.2 + privateClaims map[string]interface{} `json:"-"` +} + +func (t *Token) Get(s string) (interface{}, bool) { + switch s { + case AudienceKey: + if len(t.audience) == 0 { + return nil, false + } + return []string(t.audience), true + case ExpirationKey: + if t.expiration == nil { + return nil, false + } else { + return t.expiration.Get(), true + } + case IssuedAtKey: + if t.issuedAt == nil { + return nil, false + } else { + return t.issuedAt.Get(), true + } + case IssuerKey: + if t.issuer == nil { + return nil, false + } else { + return *(t.issuer), true + } + case JwtIDKey: + if t.jwtID == nil { + return nil, false + } else { + return *(t.jwtID), true + } + case NotBeforeKey: + if t.notBefore == nil { + return nil, false + } else { + return t.notBefore.Get(), true + } + case SubjectKey: + if t.subject == nil { + return nil, false + } else { + return *(t.subject), true + } + } + if v, ok := t.privateClaims[s]; ok { + return v, true + } + return nil, false +} + +func (t *Token) Set(name string, v interface{}) error { + switch name { + case AudienceKey: + var x StringList + if err := x.Accept(v); err != nil { + 
return errors.Wrap(err, `invalid value for 'audience' key`)
+ }
+ t.audience = x
+ case ExpirationKey:
+ var x types.NumericDate
+ if err := x.Accept(v); err != nil {
+ return errors.Wrap(err, `invalid value for 'expiration' key`)
+ }
+ t.expiration = &x
+ case IssuedAtKey:
+ var x types.NumericDate
+ if err := x.Accept(v); err != nil {
+ return errors.Wrap(err, `invalid value for 'issuedAt' key`)
+ }
+ t.issuedAt = &x
+ case IssuerKey:
+ x, ok := v.(string)
+ if !ok {
+ return errors.Errorf(`invalid type for 'issuer' key: %T`, v)
+ }
+ t.issuer = &x
+ case JwtIDKey:
+ x, ok := v.(string)
+ if !ok {
+ return errors.Errorf(`invalid type for 'jwtID' key: %T`, v)
+ }
+ t.jwtID = &x
+ case NotBeforeKey:
+ var x types.NumericDate
+ if err := x.Accept(v); err != nil {
+ return errors.Wrap(err, `invalid value for 'notBefore' key`)
+ }
+ t.notBefore = &x
+ case SubjectKey:
+ x, ok := v.(string)
+ if !ok {
+ return errors.Errorf(`invalid type for 'subject' key: %T`, v)
+ }
+ t.subject = &x
+ default:
+ if t.privateClaims == nil {
+ t.privateClaims = make(map[string]interface{})
+ }
+ t.privateClaims[name] = v
+ }
+ return nil
+}
+
+func (t Token) Audience() StringList {
+ if v, ok := t.Get(AudienceKey); ok {
+ return v.([]string)
+ }
+ return nil
+}
+
+func (t Token) Expiration() time.Time {
+ if v, ok := t.Get(ExpirationKey); ok {
+ return v.(time.Time)
+ }
+ return time.Time{}
+}
+
+func (t Token) IssuedAt() time.Time {
+ if v, ok := t.Get(IssuedAtKey); ok {
+ return v.(time.Time)
+ }
+ return time.Time{}
+}
+
+// Issuer is a convenience function to retrieve the corresponding value stored in the token.
+// If there is a problem retrieving the value, the zero value is returned.
+// If you need to differentiate between existing/non-existing values, use `Get` instead.
+func (t Token) Issuer() string {
+ if v, ok := t.Get(IssuerKey); ok {
+ return v.(string)
+ }
+ return ""
+}
+
+// JwtID is a convenience function to retrieve the corresponding value stored in the token.
+// If there is a problem retrieving the value, the zero value is returned.
+// If you need to differentiate between existing/non-existing values, use `Get` instead.
+func (t Token) JwtID() string {
+ if v, ok := t.Get(JwtIDKey); ok {
+ return v.(string)
+ }
+ return ""
+}
+
+func (t Token) NotBefore() time.Time {
+ if v, ok := t.Get(NotBeforeKey); ok {
+ return v.(time.Time)
+ }
+ return time.Time{}
+}
+
+// Subject is a convenience function to retrieve the corresponding value stored in the token.
+// If there is a problem retrieving the value, the zero value is returned.
+// If you need to differentiate between existing/non-existing values, use `Get` instead.
+func (t Token) Subject() string {
+ if v, ok := t.Get(SubjectKey); ok {
+ return v.(string)
+ }
+ return ""
+}
+
+// writeJSON is almost identical to json.Encoder.Encode(), but we use Marshal
+// to avoid having to remove the trailing newline for each successive
+// call to Encode()
+func writeJSON(buf *bytes.Buffer, v interface{}, keyName string) error {
+ if enc, err := json.Marshal(v); err != nil {
+ return errors.Wrapf(err, `failed to encode '%s'`, keyName)
+ } else {
+ buf.Write(enc)
+ }
+ return nil
+}
+
+// MarshalJSON serializes the token in JSON format. This exists to
+// allow flattening of private claims.
+func (t Token) MarshalJSON() ([]byte, error) {
+ var buf bytes.Buffer
+ buf.WriteRune('{')
+ if len(t.audience) > 0 {
+ buf.WriteRune('"')
+ buf.WriteString(AudienceKey)
+ buf.WriteString(`":`)
+ if err := writeJSON(&buf, t.audience, AudienceKey); err != nil {
+ return nil, err
+ }
+ }
+ if t.expiration != nil {
+ if buf.Len() > 1 {
+ buf.WriteRune(',')
+ }
+ buf.WriteRune('"')
+ buf.WriteString(ExpirationKey)
+ buf.WriteString(`":`)
+ if err := writeJSON(&buf, t.expiration, ExpirationKey); err != nil {
+ return nil, err
+ }
+ }
+ if t.issuedAt != nil {
+ if buf.Len() > 1 {
+ buf.WriteRune(',')
+ }
+ buf.WriteRune('"')
+ buf.WriteString(IssuedAtKey)
+ buf.WriteString(`":`)
+ if err := writeJSON(&buf, t.issuedAt, IssuedAtKey); err != nil {
+ return nil, err
+ }
+ }
+ if t.issuer != nil {
+ if buf.Len() > 1 {
+ buf.WriteRune(',')
+ }
+ buf.WriteRune('"')
+ buf.WriteString(IssuerKey)
+ buf.WriteString(`":`)
+ if err := writeJSON(&buf, t.issuer, IssuerKey); err != nil {
+ return nil, err
+ }
+ }
+ if t.jwtID != nil {
+ if buf.Len() > 1 {
+ buf.WriteRune(',')
+ }
+ buf.WriteRune('"')
+ buf.WriteString(JwtIDKey)
+ buf.WriteString(`":`)
+ if err := writeJSON(&buf, t.jwtID, JwtIDKey); err != nil {
+ return nil, err
+ }
+ }
+ if t.notBefore != nil {
+ if buf.Len() > 1 {
+ buf.WriteRune(',')
+ }
+ buf.WriteRune('"')
+ buf.WriteString(NotBeforeKey)
+ buf.WriteString(`":`)
+ if err := writeJSON(&buf, t.notBefore, NotBeforeKey); err != nil {
+ return nil, err
+ }
+ }
+ if t.subject != nil {
+ if buf.Len() > 1 {
+ buf.WriteRune(',')
+ }
+ buf.WriteRune('"')
+ buf.WriteString(SubjectKey)
+ buf.WriteString(`":`)
+ if err := writeJSON(&buf, t.subject, SubjectKey); err != nil {
+ return nil, err
+ }
+ }
+ if len(t.privateClaims) == 0 {
+ buf.WriteRune('}')
+ return buf.Bytes(), nil
+ }
+ // If private claims exist, they need to be flattened and included in the token
+ pcjson, err := json.Marshal(t.privateClaims)
+ if err != nil {
+ return nil, errors.Wrap(err, `failed to marshal
private claims`)
+ }
+ // remove '{' from the private claims
+ pcjson = pcjson[1:]
+ if buf.Len() > 1 {
+ buf.WriteRune(',')
+ }
+ buf.Write(pcjson)
+ return buf.Bytes(), nil
+}
+
+// UnmarshalJSON deserializes data from a JSON data buffer into a Token
+func (t *Token) UnmarshalJSON(data []byte) error {
+ var m map[string]interface{}
+ if err := json.Unmarshal(data, &m); err != nil {
+ return errors.Wrap(err, `failed to unmarshal token`)
+ }
+ for name, value := range m {
+ if err := t.Set(name, value); err != nil {
+ return errors.Wrapf(err, `failed to set value for %s`, name)
+ }
+ }
+ return nil
+} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/token_gen_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/token_gen_test.go new file mode 100644 index 0000000000..4bedc1bcd4 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/token_gen_test.go @@ -0,0 +1,157 @@
+package jwt_test
+
+import (
+ "encoding/json"
+ "reflect"
+ "testing"
+ "time"
+
+ "github.com/lestrrat-go/jwx/jwt"
+ "github.com/stretchr/testify/assert"
+)
+
+func TestHeader(t *testing.T) {
+
+ const (
+ tokenTime = 233431200
+ )
+ expectedTokenTime := time.Unix(tokenTime, 0).UTC()
+
+ values := map[string]interface{}{
+ jwt.AudienceKey: []string{"developers", "secops", "tac"},
+ jwt.ExpirationKey: expectedTokenTime,
+ jwt.IssuedAtKey: expectedTokenTime,
+ jwt.IssuerKey: "http://www.example.com",
+ jwt.JwtIDKey: "e9bc097a-ce51-4036-9562-d2ade882db0d",
+ jwt.NotBeforeKey: expectedTokenTime,
+ jwt.SubjectKey: "unit test",
+ }
+
+ t.Run("Roundtrip", func(t *testing.T) {
+
+ var h jwt.Token
+ for k, v := range values {
+ err := h.Set(k, v)
+ if err != nil {
+ t.Fatalf("Set failed for %s", k)
+ }
+ got, ok := h.Get(k)
+ if !ok {
+ t.Fatalf("Get failed for %s", k)
+ }
+ if !reflect.DeepEqual(v, got) {
+ t.Fatalf("Values do not match: (%v, %v)", v, got)
+ }
+ }
+ })
+
+ t.Run("RoundtripError", func(t *testing.T) {
+
+ type dummyStruct struct {
+ dummy1 int
+ dummy2 float64
+ }
+ dummy := &dummyStruct{1, 3.4}
+
+ values := map[string]interface{}{
+ jwt.AudienceKey: dummy,
+ jwt.ExpirationKey: dummy,
+ jwt.IssuedAtKey: dummy,
+ jwt.IssuerKey: dummy,
+ jwt.JwtIDKey: dummy,
+ jwt.NotBeforeKey: dummy,
+ jwt.SubjectKey: dummy,
+ }
+
+ var h jwt.Token
+ for k, v := range values {
+ err := h.Set(k, v)
+ if err == nil {
+ t.Fatalf("Setting %s value should have failed", k)
+ }
+ }
+ err := h.Set("default", dummy) // private params
+ if err != nil {
+ t.Fatalf("Setting %s value failed", "default")
+ }
+ for k := range values {
+ _, ok := h.Get(k)
+ if ok {
+ t.Fatalf("Getting %s value should have failed", k)
+ }
+ }
+ _, ok := h.Get("default")
+ if !ok {
+ t.Fatal("Failed to get default value")
+ }
+ })
+
+ t.Run("GetError", func(t *testing.T) {
+
+ var h jwt.Token
+ issuer := h.Issuer()
+ if issuer != "" {
+ t.Fatalf("Get Issuer should return empty string")
+ }
+ jwtId := h.JwtID()
+ if jwtId != "" {
+ t.Fatalf("Get JWT Id should return empty string")
+ }
+ })
+}
+
+func TestTokenMarshal(t *testing.T) {
+ t1 := jwt.New()
+ err := t1.Set(jwt.JwtIDKey, "AbCdEfG")
+ if err != nil {
+ t.Fatalf("Failed to set JWT ID: %s", err.Error())
+ }
+ err = t1.Set(jwt.SubjectKey, "foobar@example.com")
+ if err != nil {
+ t.Fatalf("Failed to set Subject: %s", err.Error())
+ }
+
+ // Silly fix to remove monotonic element from time.Time obtained
+ // from time.Now().
Without this, the equality comparison goes + // ga-ga for golang tip (1.9) + now := time.Unix(time.Now().Unix(), 0) + err = t1.Set(jwt.IssuedAtKey, now.Unix()) + if err != nil { + t.Fatalf("Failed to set IssuedAt: %s", err.Error()) + } + err = t1.Set(jwt.NotBeforeKey, now.Add(5*time.Second)) + if err != nil { + t.Fatalf("Failed to set NotBefore: %s", err.Error()) + } + err = t1.Set(jwt.ExpirationKey, now.Add(10*time.Second).Unix()) + if err != nil { + t.Fatalf("Failed to set Expiration: %s", err.Error()) + } + err = t1.Set(jwt.AudienceKey, []string{"devops", "secops", "tac"}) + if err != nil { + t.Fatalf("Failed to set audience: %s", err.Error()) + } + err = t1.Set("custom", "MyValue") + if err != nil { + t.Fatalf(`Failed to set private claim "custom": %s`, err.Error()) + } + jsonbuf1, err := json.MarshalIndent(t1, "", " ") + if err != nil { + t.Fatalf("JSON Marshal failed: %s", err.Error()) + } + + t2 := jwt.New() + err = json.Unmarshal(jsonbuf1, t2) + if err != nil { + t.Fatalf("JSON Unmarshal error: %s", err.Error()) + } + + if !assert.Equal(t, t1, t2, "tokens should match") { + return + } + + _, err = json.MarshalIndent(t2, "", " ") + if err != nil { + t.Fatalf("JSON marshal error: %s", err.Error()) + } +} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/verify.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/verify.go new file mode 100644 index 0000000000..03dc8a446e --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/verify.go @@ -0,0 +1,157 @@ +package jwt + +import ( + "errors" + "time" + + "github.com/lestrrat-go/jwx/internal/option" +) + +const ( + optkeyAcceptableSkew = "acceptableSkew" + optkeyClock = "clock" + optkeyIssuer = "issuer" + optkeySubject = "subject" + optkeyAudience = "audience" + optkeyJwtid = "jwtid" +) + +type Clock interface { + Now() time.Time +} +type ClockFunc func() time.Time + +func (f ClockFunc) Now() time.Time { + return f() +} + +// 
WithClock specifies the `Clock` to be used when verifying
+// claims exp and nbf.
+func WithClock(c Clock) Option {
+ return option.New(optkeyClock, c)
+}
+
+// WithAcceptableSkew specifies the duration by which the exp and nbf
+// claims may differ from the verification time. This value should be positive.
+func WithAcceptableSkew(dur time.Duration) Option {
+ return option.New(optkeyAcceptableSkew, dur)
+}
+
+// WithIssuer specifies the expected issuer value. If not specified,
+// the value of issuer is not verified at all.
+func WithIssuer(s string) Option {
+ return option.New(optkeyIssuer, s)
+}
+
+// WithSubject specifies the expected subject value. If not specified,
+// the value of subject is not verified at all.
+func WithSubject(s string) Option {
+ return option.New(optkeySubject, s)
+}
+
+// WithJwtID specifies the expected jti value. If not specified,
+// the value of jti is not verified at all.
+func WithJwtID(s string) Option {
+ return option.New(optkeyJwtid, s)
+}
+
+// WithAudience specifies the expected audience value.
+// Verify will succeed if one of the values in the `aud` element
+// matches this value. If not specified, the value of audience is not
+// verified at all.
+func WithAudience(s string) Option {
+ return option.New(optkeyAudience, s)
+}
+
+// Verify makes sure that the essential claims are satisfied.
+//
+// See the various `WithXXX` functions for optional parameters
+// that can control the behavior of this method.
+func (t *Token) Verify(options ...Option) error { + var issuer string + var subject string + var audience string + var jwtid string + var clock Clock = ClockFunc(time.Now) + var skew time.Duration + for _, o := range options { + switch o.Name() { + case optkeyClock: + clock = o.Value().(Clock) + case optkeyAcceptableSkew: + skew = o.Value().(time.Duration) + case optkeyIssuer: + issuer = o.Value().(string) + case optkeySubject: + subject = o.Value().(string) + case optkeyAudience: + audience = o.Value().(string) + case optkeyJwtid: + jwtid = o.Value().(string) + } + } + + // check for iss + if len(issuer) > 0 { + if v := t.Issuer(); v != "" && v != issuer { + return errors.New(`iss not satisfied`) + } + } + + // check for jti + if len(jwtid) > 0 { + if v := t.JwtID(); v != "" && v != jwtid { + return errors.New(`jti not satisfied`) + } + } + + // check for sub + if len(subject) > 0 { + if v := t.Subject(); v != "" && v != subject { + return errors.New(`sub not satisfied`) + } + } + + // check for aud + if len(audience) > 0 { + var found bool + for _, v := range t.Audience() { + if v == audience { + found = true + break + } + } + if !found { + return errors.New(`aud not satisfied`) + } + } + + // check for exp + if tv := t.Expiration(); !tv.IsZero() { + now := clock.Now().Truncate(time.Second) + ttv := tv.Truncate(time.Second) + if !now.Before(ttv.Add(skew)) { + return errors.New(`exp not satisfied`) + } + } + + // check for iat + if tv := t.IssuedAt(); !tv.IsZero() { + now := clock.Now().Truncate(time.Second) + ttv := tv.Truncate(time.Second) + if now.Before(ttv.Add(-1 * skew)) { + return errors.New(`iat not satisfied`) + } + } + + // check for nbf + if tv := t.NotBefore(); !tv.IsZero() { + now := clock.Now().Truncate(time.Second) + ttv := tv.Truncate(time.Second) + // now cannot be before t, so we check for now > t - skew + if !now.After(ttv.Add(-1 * skew)) { + return errors.New(`nbf not satisfied`) + } + } + return nil +} diff --git 
a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/verify_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/verify_test.go new file mode 100644 index 0000000000..b143db5ab7 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwt/verify_test.go @@ -0,0 +1,128 @@
+package jwt_test
+
+import (
+ "testing"
+ "time"
+
+ "github.com/lestrrat-go/jwx/jwt"
+ "github.com/stretchr/testify/assert"
+)
+
+func TestGHIssue10(t *testing.T) {
+ t.Run(jwt.IssuerKey, func(t *testing.T) {
+ t1 := jwt.New()
+ t1.Set(jwt.IssuerKey, "github.com/lestrrat-go/jwx")
+
+ // This should succeed, because WithIssuer is not provided in the
+ // optional parameters
+ if !assert.NoError(t, t1.Verify(), "t1.Verify should succeed") {
+ return
+ }
+
+ // This should succeed, because WithIssuer is provided with same value
+ if !assert.NoError(t, t1.Verify(jwt.WithIssuer(t1.Issuer())), "t1.Verify should succeed") {
+ return
+ }
+
+ if !assert.Error(t, t1.Verify(jwt.WithIssuer("poop")), "t1.Verify should fail") {
+ return
+ }
+ })
+ t.Run(jwt.AudienceKey, func(t *testing.T) {
+ t1 := jwt.New()
+ err := t1.Set(jwt.AudienceKey, []string{"foo", "bar", "baz"})
+ if err != nil {
+ t.Fatalf("Failed to set audience claim: %s", err.Error())
+ }
+
+ // This should succeed, because WithAudience is not provided in the
+ // optional parameters
+ err = t1.Verify()
+ if err != nil {
+ t.Fatalf("Error verifying claim: %s", err.Error())
+ }
+
+ // This should succeed, because WithAudience is provided, and its
+ // value matches one of the audience values
+ if !assert.NoError(t, t1.Verify(jwt.WithAudience("baz")), "token.Verify should succeed") {
+ return
+ }
+
+ if !assert.Error(t, t1.Verify(jwt.WithAudience("poop")), "token.Verify should fail") {
+ return
+ }
+ })
+ t.Run(jwt.SubjectKey, func(t *testing.T) {
+ t1 := jwt.New()
+ t1.Set(jwt.SubjectKey, "github.com/lestrrat-go/jwx")
+
+ // This should succeed, because WithSubject is not
provided in the + // optional parameters + if !assert.NoError(t, t1.Verify(), "token.Verify should succeed") { + return + } + + // This should succeed, because WithSubject is provided with same value + if !assert.NoError(t, t1.Verify(jwt.WithSubject(t1.Subject())), "token.Verify should succeed") { + return + } + + if !assert.Error(t, t1.Verify(jwt.WithSubject("poop")), "token.Verify should fail") { + return + } + }) + t.Run(jwt.NotBeforeKey, func(t *testing.T) { + t1 := jwt.New() + + // NotBefore is set to future date + tm := time.Now().Add(72 * time.Hour) + t1.Set(jwt.NotBeforeKey, tm) + + // This should fail, because nbf is the future + if !assert.Error(t, t1.Verify(), "token.Verify should fail") { + return + } + + // This should succeed, because we have given reaaaaaaly big skew + // that is well enough to get us accepted + if !assert.NoError(t, t1.Verify(jwt.WithAcceptableSkew(73*time.Hour)), "token.Verify should succeed") { + return + } + + // This should succeed, because we have given a time + // that is well enough into the future + if !assert.NoError(t, t1.Verify(jwt.WithClock(jwt.ClockFunc(func() time.Time { return tm.Add(time.Hour) }))), "token.Verify should succeed") { + return + } + }) + t.Run(jwt.ExpirationKey, func(t *testing.T) { + t1 := jwt.New() + + // issuedat = 1 Hr before current time + tm := time.Now() + t1.Set(jwt.IssuedAtKey, tm.Add(-1*time.Hour)) + + // valid for 2 minutes only from IssuedAt + t1.Set(jwt.ExpirationKey, tm.Add(-58*time.Minute)) + + // This should fail, because exp is set in the past + if !assert.Error(t, t1.Verify(), "token.Verify should fail") { + return + } + + // This should succeed, because we have given big skew + // that is well enough to get us accepted + if !assert.NoError(t, t1.Verify(jwt.WithAcceptableSkew(time.Hour)), "token.Verify should succeed (1)") { + return + } + + // This should succeed, because we have given a time + // that is well enough into the past + clock := jwt.ClockFunc(func() time.Time { + return 
tm.Add(-59 * time.Minute)
+ })
+ if !assert.NoError(t, t1.Verify(jwt.WithClock(clock)), "token.Verify should succeed (2)") {
+ return
+ }
+ })
+} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwx.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwx.go new file mode 100644 index 0000000000..d5ca50951b --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwx.go @@ -0,0 +1,26 @@
+// Package jwx contains tools that deal with the various JWx (JOSE)
+// technologies such as JWT, JWS, JWE, etc., in Go.
+//
+// JWS (https://tools.ietf.org/html/rfc7515)
+// JWE (https://tools.ietf.org/html/rfc7516)
+// JWK (https://tools.ietf.org/html/rfc7517)
+// JWA (https://tools.ietf.org/html/rfc7518)
+// JWT (https://tools.ietf.org/html/rfc7519)
+//
+// The primary focus of this library tool set is to aid in implementing the
+// highly flexible OAuth2 / OpenID Connect protocols. There are many other
+// libraries out there that deal with all or parts of these JWx technologies:
+//
+// https://github.com/dgrijalva/jwt-go
+// https://github.com/square/go-jose
+// https://github.com/coreos/oidc
+// https://golang.org/x/oauth2
+//
+// This library exists because there was a need for a tool set that encompasses
+// the whole set of JWx technologies in a highly customizable manner, in one package.
+//
+// You can find more high-level documentation on GitHub (https://github.com/lestrrat-go/jwx)
+package jwx
+
+// Version describes the version of this library.
+const Version = "0.0.1" diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwx_example_test.go b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwx_example_test.go new file mode 100644 index 0000000000..db294d00f6 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/jwx_example_test.go @@ -0,0 +1,116 @@ +package jwx_test + +import ( + "crypto/rand" + "crypto/rsa" + "encoding/json" + "fmt" + "log" + "time" + + "github.com/lestrrat-go/jwx/jwa" + "github.com/lestrrat-go/jwx/jwe" + "github.com/lestrrat-go/jwx/jwk" + "github.com/lestrrat-go/jwx/jws" + "github.com/lestrrat-go/jwx/jwt" +) + +func ExampleJWT() { + const aLongLongTimeAgo = 233431200 + + t := jwt.New() + t.Set(jwt.SubjectKey, `https://github.com/lestrrat-go/jwx/jwt`) + t.Set(jwt.AudienceKey, `Golang Users`) + t.Set(jwt.IssuedAtKey, time.Unix(aLongLongTimeAgo, 0)) + t.Set(`privateClaimKey`, `Hello, World!`) + + buf, err := json.MarshalIndent(t, "", " ") + if err != nil { + fmt.Printf("failed to generate JSON: %s\n", err) + return + } + + fmt.Printf("%s\n", buf) + fmt.Printf("aud -> '%s'\n", t.Audience()) + fmt.Printf("iat -> '%s'\n", t.IssuedAt().Format(time.RFC3339)) + if v, ok := t.Get(`privateClaimKey`); ok { + fmt.Printf("privateClaimKey -> '%s'\n", v) + } + fmt.Printf("sub -> '%s'\n", t.Subject()) +} + +func ExampleJWK() { + set, err := jwk.FetchHTTP("https://foobar.domain/jwk.json") + if err != nil { + log.Printf("failed to parse JWK: %s", err) + return + } + + // If you KNOW you have exactly one key, you can just + // use set.Keys[0] + keys := set.LookupKeyID("mykey") + if len(keys) == 0 { + log.Printf("failed to lookup key: %s", err) + return + } + + key, err := keys[0].Materialize() + if err != nil { + log.Printf("failed to create public key: %s", err) + return + } + + // Use key for jws.Verify() or whatever + _ = key +} + +func ExampleJWS() { + privkey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + 
log.Printf("failed to generate private key: %s", err)
+ return
+ }
+
+ buf, err := jws.Sign([]byte("Lorem ipsum"), jwa.RS256, privkey)
+ if err != nil {
+ log.Printf("failed to create JWS message: %s", err)
+ return
+ }
+
+ // When you receive a JWS message, you can verify the signature
+ // and grab the payload sent in the message in one go:
+ verified, err := jws.Verify(buf, jwa.RS256, &privkey.PublicKey)
+ if err != nil {
+ log.Printf("failed to verify message: %s", err)
+ return
+ }
+
+ log.Printf("signed message verified! -> %s", verified)
+}
+
+func ExampleJWE() {
+ privkey, err := rsa.GenerateKey(rand.Reader, 2048)
+ if err != nil {
+ log.Printf("failed to generate private key: %s", err)
+ return
+ }
+
+ payload := []byte("Lorem Ipsum")
+
+ encrypted, err := jwe.Encrypt(payload, jwa.RSA1_5, &privkey.PublicKey, jwa.A128CBC_HS256, jwa.NoCompress)
+ if err != nil {
+ log.Printf("failed to encrypt payload: %s", err)
+ return
+ }
+
+ decrypted, err := jwe.Decrypt(encrypted, jwa.RSA1_5, privkey)
+ if err != nil {
+ log.Printf("failed to decrypt: %s", err)
+ return
+ }
+
+ if string(decrypted) != "Lorem Ipsum" {
+ log.Printf("WHAT?!")
+ return
+ }
+} diff --git a/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/scripts/check-diff.sh b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/scripts/check-diff.sh new file mode 100755 index 0000000000..0359eda8d2 --- /dev/null +++ b/traffic_ops/traffic_ops_golang/vendor/github.com/lestrrat-go/jwx/scripts/check-diff.sh @@ -0,0 +1,31 @@
+#!/bin/bash
+
+UNTRACKED=$(git ls-files --others --exclude-standard)
+DIFF=$(git diff)
+
+st=0
+if [ ! -z "$DIFF" ]; then
+ echo "==== START OF DIFF FOUND ==="
+ echo ""
+ echo "$DIFF"
+ echo ""
+ echo "Above diff was found."
+ echo ""
+ echo "==== END OF DIFF FOUND ==="
+ echo ""
+ st=1
+fi
+
+if [ ! -z "$UNTRACKED" ]; then
+ echo "==== START OF UNTRACKED FILES FOUND ==="
+ echo ""
+ echo "$UNTRACKED"
+ echo ""
+ echo "Above untracked files were found."
+ echo "" + echo "==== END OF UNTRACKED FILES FOUND ===" + echo "" + st=1 +fi + +exit $st \ No newline at end of file diff --git a/traffic_portal/app/src/app.js b/traffic_portal/app/src/app.js index 9b4c737ba3..8d890b267f 100644 --- a/traffic_portal/app/src/app.js +++ b/traffic_portal/app/src/app.js @@ -46,6 +46,7 @@ var trafficPortal = angular.module('trafficPortal', [ // public modules require('./modules/public').name, require('./modules/public/login').name, + require('./modules/public/sso').name, // private modules require('./modules/private').name, @@ -457,7 +458,6 @@ var trafficPortal = angular.module('trafficPortal', [ .run(function($log, applicationService) { $log.debug("Application run..."); - applicationService.startup(); }) ; @@ -474,7 +474,7 @@ trafficPortal.factory('authInterceptor', function ($rootScope, $q, $window, $loc if (rejection.status === 401) { $rootScope.$broadcast('trafficPortal::exit'); userModel.resetUser(); - if (url == '/login' || $location.search().redirect) { + if (url === '/login' || url ==='/sso' || $location.search().redirect) { messageModel.setMessages(alerts, false); } else { $timeout(function () { diff --git a/traffic_portal/app/src/assets/css/colReorder.dataTables.min_1.5.1.css b/traffic_portal/app/src/assets/css/colReorder.dataTables.min_1.5.1.css new file mode 100644 index 0000000000..4f83a4085d --- /dev/null +++ b/traffic_portal/app/src/assets/css/colReorder.dataTables.min_1.5.1.css @@ -0,0 +1 @@ +table.DTCR_clonedTable.dataTable{position:absolute !important;background-color:rgba(255,255,255,0.7);z-index:202}div.DTCR_pointer{width:1px;background-color:#0259C4;z-index:201} diff --git a/traffic_portal/app/src/assets/js/colReorder.dataTables.min_1.5.1.js b/traffic_portal/app/src/assets/js/colReorder.dataTables.min_1.5.1.js new file mode 100644 index 0000000000..57a1568776 --- /dev/null +++ b/traffic_portal/app/src/assets/js/colReorder.dataTables.min_1.5.1.js @@ -0,0 +1,29 @@ +/*! 
+ ColReorder 1.5.1 + ©2010-2018 SpryMedia Ltd - datatables.net/license +*/ +(function(e){"function"===typeof define&&define.amd?define(["jquery","datatables.net"],function(o){return e(o,window,document)}):"object"===typeof exports?module.exports=function(o,l){o||(o=window);if(!l||!l.fn.dataTable)l=require("datatables.net")(o,l).$;return e(l,o,o.document)}:e(jQuery,window,document)})(function(e,o,l,r){function q(a){for(var b=[],c=0,f=a.length;cb||b>=l)this.oApi._fnLog(a,1,"ColReorder 'from' index is out of bounds: "+b);else if(0>c||c>=l)this.oApi._fnLog(a,1,"ColReorder 'to' index is out of bounds: "+ + c);else{j=[];d=0;for(h=l;dthis.s.fixed-1&&fMath.pow(Math.pow(this._fnCursorPosition(a,"pageX")-this.s.mouse.startX,2)+Math.pow(this._fnCursorPosition(a,"pageY")- + this.s.mouse.startY,2),0.5))return;this._fnCreateDragNode()}this.dom.drag.css({left:this._fnCursorPosition(a,"pageX")-this.s.mouse.offsetX,top:this._fnCursorPosition(a,"pageY")-this.s.mouse.offsetY});for(var b=!1,c=this.s.mouse.toIndex,f=1,e=this.s.aoTargets.length;f").addClass("DTCR_pointer").css({position:"absolute",top:a?e("div.dataTables_scroll",this.s.dt.nTableWrapper).offset().top:e(this.s.dt.nTable).offset().top,height:a?e("div.dataTables_scroll", + this.s.dt.nTableWrapper).height():e(this.s.dt.nTable).height()}).appendTo("body")},_fnSetColumnIndexes:function(){e.each(this.s.dt.aoColumns,function(a,b){e(b.nTh).attr("data-column-index",a)})},_fnCursorPosition:function(a,b){return-1!==a.type.indexOf("touch")?a.originalEvent.touches[0][b]:a[b]}});i.defaults={aiOrder:null,bEnable:!0,bRealtime:!0,iFixedColumnsLeft:0,iFixedColumnsRight:0,fnReorderCallback:null};i.version="1.5.1";e.fn.dataTable.ColReorder=i;e.fn.DataTable.ColReorder=i;"function"==typeof e.fn.dataTable&& +"function"==typeof e.fn.dataTableExt.fnVersionCheck&&e.fn.dataTableExt.fnVersionCheck("1.10.8")?e.fn.dataTableExt.aoFeatures.push({fnInit:function(a){var b=a.oInstance;a._colReorder?b.oApi._fnLog(a,1,"ColReorder attempted to initialise 
twice. Ignoring second"):(b=a.oInit,new i(a,b.colReorder||b.oColReorder||{}));return null},cFeature:"R",sFeature:"ColReorder"}):alert("Warning: ColReorder requires DataTables 1.10.8 or greater - www.datatables.net/download");e(l).on("preInit.dt.colReorder",function(a, + b){if("dt"===a.namespace){var c=b.oInit.colReorder,f=t.defaults.colReorder;if(c||f)f=e.extend({},c,f),!1!==c&&new i(b,f)}});e.fn.dataTable.Api.register("colReorder.reset()",function(){return this.iterator("table",function(a){a._colReorder.fnReset()})});e.fn.dataTable.Api.register("colReorder.order()",function(a,b){return a?this.iterator("table",function(c){c._colReorder.fnOrder(a,b)}):this.context.length?this.context[0]._colReorder.fnOrder():null});e.fn.dataTable.Api.register("colReorder.transpose()", + function(a,b){return this.context.length&&this.context[0]._colReorder?this.context[0]._colReorder.fnTranspose(a,b):a});e.fn.dataTable.Api.register("colReorder.move()",function(a,b,c,e){this.context.length&&this.context[0]._colReorder.s.dt.oInstance.fnColReorder(a,b,c,e);return this});e.fn.dataTable.Api.register("colReorder.enable()",function(a){return this.iterator("table",function(b){b._colReorder&&b._colReorder.fnEnable(a)})});e.fn.dataTable.Api.register("colReorder.disable()",function(){return this.iterator("table", + function(a){a._colReorder&&a._colReorder.fnDisable()})});return i}); \ No newline at end of file diff --git a/traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.16.js b/traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.16.js deleted file mode 100644 index 204e62ade7..0000000000 --- a/traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.16.js +++ /dev/null @@ -1,164 +0,0 @@ -/*! 
- DataTables 1.10.16 - ©2008-2017 SpryMedia Ltd - datatables.net/license - */ -(function(h){"function"===typeof define&&define.amd?define(["jquery"],function(E){return h(E,window,document)}):"object"===typeof exports?module.exports=function(E,G){E||(E=window);G||(G="undefined"!==typeof window?require("jquery"):require("jquery")(E));return h(G,E,E.document)}:h(jQuery,window,document)})(function(h,E,G,k){function X(a){var b,c,d={};h.each(a,function(e){if((b=e.match(/^([^A-Z]+?)([A-Z])/))&&-1!=="a aa ai ao as b fn i m o s ".indexOf(b[1]+" "))c=e.replace(b[0],b[2].toLowerCase()), - d[c]=e,"o"===b[1]&&X(a[e])});a._hungarianMap=d}function I(a,b,c){a._hungarianMap||X(a);var d;h.each(b,function(e){d=a._hungarianMap[e];if(d!==k&&(c||b[d]===k))"o"===d.charAt(0)?(b[d]||(b[d]={}),h.extend(!0,b[d],b[e]),I(a[d],b[d],c)):b[d]=b[e]})}function Ca(a){var b=m.defaults.oLanguage,c=a.sZeroRecords;!a.sEmptyTable&&(c&&"No data available in table"===b.sEmptyTable)&&F(a,a,"sZeroRecords","sEmptyTable");!a.sLoadingRecords&&(c&&"Loading..."===b.sLoadingRecords)&&F(a,a,"sZeroRecords","sLoadingRecords"); - a.sInfoThousands&&(a.sThousands=a.sInfoThousands);(a=a.sDecimal)&&cb(a)}function db(a){A(a,"ordering","bSort");A(a,"orderMulti","bSortMulti");A(a,"orderClasses","bSortClasses");A(a,"orderCellsTop","bSortCellsTop");A(a,"order","aaSorting");A(a,"orderFixed","aaSortingFixed");A(a,"paging","bPaginate");A(a,"pagingType","sPaginationType");A(a,"pageLength","iDisplayLength");A(a,"searching","bFilter");"boolean"===typeof a.sScrollX&&(a.sScrollX=a.sScrollX?"100%":"");"boolean"===typeof a.scrollX&&(a.scrollX= - a.scrollX?"100%":"");if(a=a.aoSearchCols)for(var b=0,c=a.length;b").css({position:"fixed",top:0,left:-1*h(E).scrollLeft(),height:1,width:1,overflow:"hidden"}).append(h("
").css({position:"absolute", - top:1,left:1,width:100,overflow:"scroll"}).append(h("
").css({width:"100%",height:10}))).appendTo("body"),d=c.children(),e=d.children();b.barWidth=d[0].offsetWidth-d[0].clientWidth;b.bScrollOversize=100===e[0].offsetWidth&&100!==d[0].clientWidth;b.bScrollbarLeft=1!==Math.round(e.offset().left);b.bBounding=c[0].getBoundingClientRect().width?!0:!1;c.remove()}h.extend(a.oBrowser,m.__browser);a.oScroll.iBarWidth=m.__browser.barWidth}function gb(a,b,c,d,e,f){var g,j=!1;c!==k&&(g=c,j=!0);for(;d!== - e;)a.hasOwnProperty(d)&&(g=j?b(g,a[d],d,a):a[d],j=!0,d+=f);return g}function Da(a,b){var c=m.defaults.column,d=a.aoColumns.length,c=h.extend({},m.models.oColumn,c,{nTh:b?b:G.createElement("th"),sTitle:c.sTitle?c.sTitle:b?b.innerHTML:"",aDataSort:c.aDataSort?c.aDataSort:[d],mData:c.mData?c.mData:d,idx:d});a.aoColumns.push(c);c=a.aoPreSearchCols;c[d]=h.extend({},m.models.oSearch,c[d]);ja(a,d,h(b).data())}function ja(a,b,c){var b=a.aoColumns[b],d=a.oClasses,e=h(b.nTh);if(!b.sWidthOrig){b.sWidthOrig= - e.attr("width")||null;var f=(e.attr("style")||"").match(/width:\s*(\d+[pxem%]+)/);f&&(b.sWidthOrig=f[1])}c!==k&&null!==c&&(eb(c),I(m.defaults.column,c),c.mDataProp!==k&&!c.mData&&(c.mData=c.mDataProp),c.sType&&(b._sManualType=c.sType),c.className&&!c.sClass&&(c.sClass=c.className),c.sClass&&e.addClass(c.sClass),h.extend(b,c),F(b,c,"sWidth","sWidthOrig"),c.iDataSort!==k&&(b.aDataSort=[c.iDataSort]),F(b,c,"aDataSort"));var g=b.mData,j=Q(g),i=b.mRender?Q(b.mRender):null,c=function(a){return"string"=== - typeof a&&-1!==a.indexOf("@")};b._bAttrSrc=h.isPlainObject(g)&&(c(g.sort)||c(g.type)||c(g.filter));b._setter=null;b.fnGetData=function(a,b,c){var d=j(a,b,k,c);return i&&b?i(d,b,a,c):d};b.fnSetData=function(a,b,c){return R(g)(a,b,c)};"number"!==typeof g&&(a._rowReadObject=!0);a.oFeatures.bSort||(b.bSortable=!1,e.addClass(d.sSortableNone));a=-1!==h.inArray("asc",b.asSorting);c=-1!==h.inArray("desc",b.asSorting);!b.bSortable||!a&&!c?(b.sSortingClass=d.sSortableNone,b.sSortingClassJUI=""):a&&!c?(b.sSortingClass= - 
d.sSortableAsc,b.sSortingClassJUI=d.sSortJUIAscAllowed):!a&&c?(b.sSortingClass=d.sSortableDesc,b.sSortingClassJUI=d.sSortJUIDescAllowed):(b.sSortingClass=d.sSortable,b.sSortingClassJUI=d.sSortJUI)}function Y(a){if(!1!==a.oFeatures.bAutoWidth){var b=a.aoColumns;Ea(a);for(var c=0,d=b.length;cq[f])d(l.length+q[f],n);else if("string"===typeof q[f]){j=0;for(i=l.length;j< -i;j++)("_all"==q[f]||h(l[j].nTh).hasClass(q[f]))&&d(j,n)}}if(c){e=0;for(a=c.length;eb&&a[e]--; -1!=d&&c===k&&a.splice(d,1)}function ca(a,b,c,d){var e=a.aoData[b],f,g=function(c,d){for(;c.childNodes.length;)c.removeChild(c.firstChild); - c.innerHTML=B(a,b,d,"display")};if("dom"===c||(!c||"auto"===c)&&"dom"===e.src)e._aData=Ha(a,e,d,d===k?k:e._aData).data;else{var j=e.anCells;if(j)if(d!==k)g(j[d],d);else{c=0;for(f=j.length;c").appendTo(g));b=0;for(c=l.length;btr").attr("role","row");h(g).find(">tr>th, >tr>td").addClass(n.sHeaderTH);h(j).find(">tr>th, >tr>td").addClass(n.sFooterTH); - if(null!==j){a=a.aoFooter[0];b=0;for(c=a.length;b=a.fnRecordsDisplay()?0:g,a.iInitDisplayStart= - -1);var g=a._iDisplayStart,n=a.fnDisplayEnd();if(a.bDeferLoading)a.bDeferLoading=!1,a.iDraw++,C(a,!1);else if(j){if(!a.bDestroying&&!kb(a))return}else a.iDraw++;if(0!==i.length){f=j?a.aoData.length:n;for(j=j?0:g;j",{"class":e?d[0]:""}).append(h("",{valign:"top",colSpan:aa(a),"class":a.oClasses.sRowEmpty}).html(c))[0];r(a,"aoHeaderCallback","header",[h(a.nTHead).children("tr")[0],Ja(a),g,n,i]);r(a,"aoFooterCallback","footer",[h(a.nTFoot).children("tr")[0],Ja(a),g,n,i]);d=h(a.nTBody);d.children().detach();d.append(h(b));r(a,"aoDrawCallback","draw",[a]);a.bSorted=!1;a.bFiltered=!1;a.bDrawing=!1}}function S(a,b){var c=a.oFeatures,d=c.bFilter; - c.bSort&&lb(a);d?fa(a,a.oPreviousSearch):a.aiDisplay=a.aiDisplayMaster.slice();!0!==b&&(a._iDisplayStart=0);a._drawHold=b;N(a);a._drawHold=!1}function mb(a){var b=a.oClasses,c=h(a.nTable),c=h("
").insertBefore(c),d=a.oFeatures,e=h("
",{id:a.sTableId+"_wrapper","class":b.sWrapper+(a.nTFoot?"":" "+b.sNoFooter)});a.nHolding=c[0];a.nTableWrapper=e[0];a.nTableReinsertBefore=a.nTable.nextSibling;for(var f=a.sDom.split(""),g,j,i,n,l,q,k=0;k")[0]; - n=f[k+1];if("'"==n||'"'==n){l="";for(q=2;f[k+q]!=n;)l+=f[k+q],q++;"H"==l?l=b.sJUIHeader:"F"==l&&(l=b.sJUIFooter);-1!=l.indexOf(".")?(n=l.split("."),i.id=n[0].substr(1,n[0].length-1),i.className=n[1]):"#"==l.charAt(0)?i.id=l.substr(1,l.length-1):i.className=l;k+=q}e.append(i);e=h(i)}else if(">"==j)e=e.parent();else if("l"==j&&d.bPaginate&&d.bLengthChange)g=nb(a);else if("f"==j&&d.bFilter)g=ob(a);else if("r"==j&&d.bProcessing)g=pb(a);else if("t"==j)g=qb(a);else if("i"==j&&d.bInfo)g=rb(a);else if("p"== - j&&d.bPaginate)g=sb(a);else if(0!==m.ext.feature.length){i=m.ext.feature;q=0;for(n=i.length;q',j=d.sSearch,j=j.match(/_INPUT_/)?j.replace("_INPUT_",g):j+g,b=h("
",{id:!f.f?c+"_filter":null,"class":b.sFilter}).append(h("
").addClass(b.sLength);a.aanFeatures.l||(i[0].id=c+"_length");i.children().append(a.oLanguage.sLengthMenu.replace("_MENU_",e[0].outerHTML));h("select",i).val(a._iDisplayLength).on("change.DT",function(){Qa(a,h(this).val());N(a)});h(a.nTable).on("length.dt.DT",function(b,c,d){a===c&&h("select",i).val(d)});return i[0]}function sb(a){var b=a.sPaginationType,c=m.ext.pager[b],d="function"===typeof c,e=function(a){N(a)}, - b=h("
").addClass(a.oClasses.sPaging+b)[0],f=a.aanFeatures;d||c.fnInit(a,b,e);f.p||(b.id=a.sTableId+"_paginate",a.aoDrawCallback.push({fn:function(a){if(d){var b=a._iDisplayStart,i=a._iDisplayLength,h=a.fnRecordsDisplay(),l=-1===i,b=l?0:Math.ceil(b/i),i=l?1:Math.ceil(h/i),h=c(b,i),k,l=0;for(k=f.p.length;lf&&(d=0)):"first"==b?d=0:"previous"==b?(d=0<=e?d-e:0,0>d&&(d=0)):"next"==b?d+e",{id:!a.aanFeatures.r?a.sTableId+"_processing":null,"class":a.oClasses.sProcessing}).html(a.oLanguage.sProcessing).insertBefore(a.nTable)[0]}function C(a,b){a.oFeatures.bProcessing&&h(a.aanFeatures.r).css("display", - b?"block":"none");r(a,null,"processing",[a,b])}function qb(a){var b=h(a.nTable);b.attr("role","grid");var c=a.oScroll;if(""===c.sX&&""===c.sY)return a.nTable;var d=c.sX,e=c.sY,f=a.oClasses,g=b.children("caption"),j=g.length?g[0]._captionSide:null,i=h(b[0].cloneNode(!1)),n=h(b[0].cloneNode(!1)),l=b.children("tfoot");l.length||(l=null);i=h("
",{"class":f.sScrollWrapper}).append(h("
",{"class":f.sScrollHead}).css({overflow:"hidden",position:"relative",border:0,width:d?!d?null:v(d):"100%"}).append(h("
", - {"class":f.sScrollHeadInner}).css({"box-sizing":"content-box",width:c.sXInner||"100%"}).append(i.removeAttr("id").css("margin-left",0).append("top"===j?g:null).append(b.children("thead"))))).append(h("
",{"class":f.sScrollBody}).css({position:"relative",overflow:"auto",width:!d?null:v(d)}).append(b));l&&i.append(h("
",{"class":f.sScrollFoot}).css({overflow:"hidden",border:0,width:d?!d?null:v(d):"100%"}).append(h("
",{"class":f.sScrollFootInner}).append(n.removeAttr("id").css("margin-left", - 0).append("bottom"===j?g:null).append(b.children("tfoot")))));var b=i.children(),k=b[0],f=b[1],t=l?b[2]:null;if(d)h(f).on("scroll.DT",function(){var a=this.scrollLeft;k.scrollLeft=a;l&&(t.scrollLeft=a)});h(f).css(e&&c.bCollapse?"max-height":"height",e);a.nScrollHead=k;a.nScrollBody=f;a.nScrollFoot=t;a.aoDrawCallback.push({fn:ka,sName:"scrolling"});return i[0]}function ka(a){var b=a.oScroll,c=b.sX,d=b.sXInner,e=b.sY,b=b.iBarWidth,f=h(a.nScrollHead),g=f[0].style,j=f.children("div"),i=j[0].style,n=j.children("table"), - j=a.nScrollBody,l=h(j),q=j.style,t=h(a.nScrollFoot).children("div"),m=t.children("table"),o=h(a.nTHead),p=h(a.nTable),s=p[0],r=s.style,u=a.nTFoot?h(a.nTFoot):null,x=a.oBrowser,T=x.bScrollOversize,Xb=D(a.aoColumns,"nTh"),O,K,P,w,Ta=[],y=[],z=[],A=[],B,C=function(a){a=a.style;a.paddingTop="0";a.paddingBottom="0";a.borderTopWidth="0";a.borderBottomWidth="0";a.height=0};K=j.scrollHeight>j.clientHeight;if(a.scrollBarVis!==K&&a.scrollBarVis!==k)a.scrollBarVis=K,Y(a);else{a.scrollBarVis=K;p.children("thead, tfoot").remove(); - u&&(P=u.clone().prependTo(p),O=u.find("tr"),P=P.find("tr"));w=o.clone().prependTo(p);o=o.find("tr");K=w.find("tr");w.find("th, td").removeAttr("tabindex");c||(q.width="100%",f[0].style.width="100%");h.each(ra(a,w),function(b,c){B=Z(a,b);c.style.width=a.aoColumns[B].sWidth});u&&H(function(a){a.style.width=""},P);f=p.outerWidth();if(""===c){r.width="100%";if(T&&(p.find("tbody").height()>j.offsetHeight||"scroll"==l.css("overflow-y")))r.width=v(p.outerWidth()-b);f=p.outerWidth()}else""!==d&&(r.width= - v(d),f=p.outerWidth());H(C,K);H(function(a){z.push(a.innerHTML);Ta.push(v(h(a).css("width")))},K);H(function(a,b){if(h.inArray(a,Xb)!==-1)a.style.width=Ta[b]},o);h(K).height(0);u&&(H(C,P),H(function(a){A.push(a.innerHTML);y.push(v(h(a).css("width")))},P),H(function(a,b){a.style.width=y[b]},O),h(P).height(0));H(function(a,b){a.innerHTML='
'+z[b]+"
";a.style.width=Ta[b]},K);u&&H(function(a,b){a.innerHTML='
'+ - A[b]+"
";a.style.width=y[b]},P);if(p.outerWidth()j.offsetHeight||"scroll"==l.css("overflow-y")?f+b:f;if(T&&(j.scrollHeight>j.offsetHeight||"scroll"==l.css("overflow-y")))r.width=v(O-b);(""===c||""!==d)&&J(a,1,"Possible column misalignment",6)}else O="100%";q.width=v(O);g.width=v(O);u&&(a.nScrollFoot.style.width=v(O));!e&&T&&(q.height=v(s.offsetHeight+b));c=p.outerWidth();n[0].style.width=v(c);i.width=v(c);d=p.height()>j.clientHeight||"scroll"==l.css("overflow-y");e="padding"+ - (x.bScrollbarLeft?"Left":"Right");i[e]=d?b+"px":"0px";u&&(m[0].style.width=v(c),t[0].style.width=v(c),t[0].style[e]=d?b+"px":"0px");p.children("colgroup").insertBefore(p.children("thead"));l.scroll();if((a.bSorted||a.bFiltered)&&!a._drawHold)j.scrollTop=0}}function H(a,b,c){for(var d=0,e=0,f=b.length,g,j;e").appendTo(j.find("tbody")); - j.find("thead, tfoot").remove();j.append(h(a.nTHead).clone()).append(h(a.nTFoot).clone());j.find("tfoot th, tfoot td").css("width","");n=ra(a,j.find("thead")[0]);for(m=0;m").css({width:o.sWidthOrig,margin:0,padding:0,border:0,height:1}));if(a.aoData.length)for(m=0;m").css(f||e?{position:"absolute",top:0,left:0,height:1,right:0,overflow:"hidden"}:{}).append(j).appendTo(k);f&&g?j.width(g):f?(j.css("width","auto"),j.removeAttr("width"),j.width()").css("width",v(a)).appendTo(b||G.body),d=c[0].offsetWidth;c.remove();return d}function Eb(a,b){var c=Fb(a,b);if(0>c)return null;var d=a.aoData[c];return!d.nTr?h("").html(B(a,c,b,"display"))[0]:d.anCells[b]}function Fb(a,b){for(var c,d=-1,e=-1,f=0,g=a.aoData.length;fd&&(d=c.length,e=f);return e}function v(a){return null===a?"0px":"number"==typeof a?0>a?"0px":a+"px":a.match(/\d$/)?a+"px":a}function V(a){var b,c,d=[],e=a.aoColumns,f,g,j,i;b=a.aaSortingFixed;c=h.isPlainObject(b);var n=[];f=function(a){a.length&&!h.isArray(a[0])?n.push(a):h.merge(n,a)};h.isArray(b)&&f(b);c&&b.pre&&f(b.pre);f(a.aaSorting);c&&b.post&&f(b.post);for(a=0;ae?1:0,0!==c)return"asc"===j.dir?c:-c;c=d[a];e=d[b];return ce?1:0}):i.sort(function(a,b){var 
c,g,j,i,k=h.length,m=f[a]._aSortData,o=f[b]._aSortData;for(j=0;jg?1:0})}a.bSorted=!0}function Hb(a){for(var b,c,d=a.aoColumns,e=V(a),a=a.oLanguage.oAria,f=0,g=d.length;f/g, - "");var i=c.nTh;i.removeAttribute("aria-sort");c.bSortable&&(0e?e+1:3));e=0;for(f=d.length;ee?e+1:3))}a.aLastSort=d}function Gb(a,b){var c=a.aoColumns[b],d=m.ext.order[c.sSortDataType],e;d&&(e=d.call(a.oInstance,a,b,$(a,b)));for(var f,g=m.ext.type.order[c.sType+"-pre"],j=0,i=a.aoData.length;j=f.length?[0,c[1]]:c)}));b.search!== - k&&h.extend(a.oPreviousSearch,Ab(b.search));if(b.columns){d=0;for(e=b.columns.length;d=c&&(b=c-d);b-=b%d;if(-1===d||0>b)b=0;a._iDisplayStart=b}function Ma(a,b){var c=a.renderer,d=m.ext.renderer[b];return h.isPlainObject(c)&&c[b]?d[c[b]]||d._:"string"===typeof c?d[c]||d._:d._}function y(a){return a.oFeatures.bServerSide?"ssp":a.ajax||a.sAjaxSource?"ajax":"dom"}function ha(a,b){var c=[],c=Kb.numbers_length,d=Math.floor(c/2);b<=c?c=W(0,b):a<=d?(c=W(0, - c-2),c.push("ellipsis"),c.push(b-1)):(a>=b-1-d?c=W(b-(c-2),b):(c=W(a-d+2,a+d-1),c.push("ellipsis"),c.push(b-1)),c.splice(0,0,"ellipsis"),c.splice(0,0,0));c.DT_el="span";return c}function cb(a){h.each({num:function(b){return za(b,a)},"num-fmt":function(b){return za(b,a,Wa)},"html-num":function(b){return za(b,a,Aa)},"html-num-fmt":function(b){return za(b,a,Aa,Wa)}},function(b,c){x.type.order[b+a+"-pre"]=c;b.match(/^html\-/)&&(x.type.search[b+a]=x.type.search.html)})}function Lb(a){return function(){var b= - [ya(this[m.ext.iApiIndex])].concat(Array.prototype.slice.call(arguments));return m.ext.internal[a].apply(this,b)}}var m=function(a){this.$=function(a,b){return this.api(!0).$(a,b)};this._=function(a,b){return this.api(!0).rows(a,b).data()};this.api=function(a){return a?new s(ya(this[x.iApiIndex])):new s(this)};this.fnAddData=function(a,b){var c=this.api(!0),d=h.isArray(a)&&(h.isArray(a[0])||h.isPlainObject(a[0]))?c.rows.add(a):c.row.add(a);(b===k||b)&&c.draw();return d.flatten().toArray()};this.fnAdjustColumnSizing= - 
function(a){var b=this.api(!0).columns.adjust(),c=b.settings()[0],d=c.oScroll;a===k||a?b.draw(!1):(""!==d.sX||""!==d.sY)&&ka(c)};this.fnClearTable=function(a){var b=this.api(!0).clear();(a===k||a)&&b.draw()};this.fnClose=function(a){this.api(!0).row(a).child.hide()};this.fnDeleteRow=function(a,b,c){var d=this.api(!0),a=d.rows(a),e=a.settings()[0],h=e.aoData[a[0][0]];a.remove();b&&b.call(this,e,h);(c===k||c)&&d.draw();return h};this.fnDestroy=function(a){this.api(!0).destroy(a)};this.fnDraw=function(a){this.api(!0).draw(a)}; - this.fnFilter=function(a,b,c,d,e,h){e=this.api(!0);null===b||b===k?e.search(a,c,d,h):e.column(b).search(a,c,d,h);e.draw()};this.fnGetData=function(a,b){var c=this.api(!0);if(a!==k){var d=a.nodeName?a.nodeName.toLowerCase():"";return b!==k||"td"==d||"th"==d?c.cell(a,b).data():c.row(a).data()||null}return c.data().toArray()};this.fnGetNodes=function(a){var b=this.api(!0);return a!==k?b.row(a).node():b.rows().nodes().flatten().toArray()};this.fnGetPosition=function(a){var b=this.api(!0),c=a.nodeName.toUpperCase(); - return"TR"==c?b.row(a).index():"TD"==c||"TH"==c?(a=b.cell(a).index(),[a.row,a.columnVisible,a.column]):null};this.fnIsOpen=function(a){return this.api(!0).row(a).child.isShown()};this.fnOpen=function(a,b,c){return this.api(!0).row(a).child(b,c).show().child()[0]};this.fnPageChange=function(a,b){var c=this.api(!0).page(a);(b===k||b)&&c.draw(!1)};this.fnSetColumnVis=function(a,b,c){a=this.api(!0).column(a).visible(b);(c===k||c)&&a.columns.adjust().draw()};this.fnSettings=function(){return ya(this[x.iApiIndex])}; - this.fnSort=function(a){this.api(!0).order(a).draw()};this.fnSortListener=function(a,b,c){this.api(!0).order.listener(a,b,c)};this.fnUpdate=function(a,b,c,d,e){var h=this.api(!0);c===k||null===c?h.row(b).data(a):h.cell(b,c).data(a);(e===k||e)&&h.columns.adjust();(d===k||d)&&h.draw();return 0};this.fnVersionCheck=x.fnVersionCheck;var b=this,c=a===k,d=this.length;c&&(a={});this.oApi=this.internal=x.internal;for(var e in 
m.ext.internal)e&&(this[e]=Lb(e));this.each(function(){var e={},g=1").appendTo(q));p.nTHead=b[0];b=q.children("tbody");b.length===0&&(b=h("").appendTo(q));p.nTBody=b[0];b=q.children("tfoot");if(b.length===0&&a.length>0&&(p.oScroll.sX!==""||p.oScroll.sY!==""))b=h("").appendTo(q);if(b.length===0||b.children().length===0)q.addClass(u.sNoFooter); - else if(b.length>0){p.nTFoot=b[0];da(p.aoFooter,p.nTFoot)}if(g.aaData)for(j=0;j/g,Zb=/^\d{2,4}[\.\/\-]\d{1,2}[\.\/\-]\d{1,2}([T ]{1}\d{1,2}[:\.]\d{2}([\.:]\d{2})?)?$/,$b=RegExp("(\\/|\\.|\\*|\\+|\\?|\\||\\(|\\)|\\[|\\]|\\{|\\}|\\\\|\\$|\\^|\\-)", - "g"),Wa=/[',$£€¥%\u2009\u202F\u20BD\u20a9\u20BArfk]/gi,L=function(a){return!a||!0===a||"-"===a?!0:!1},Nb=function(a){var b=parseInt(a,10);return!isNaN(b)&&isFinite(a)?b:null},Ob=function(a,b){Xa[b]||(Xa[b]=RegExp(Pa(b),"g"));return"string"===typeof a&&"."!==b?a.replace(/\./g,"").replace(Xa[b],"."):a},Ya=function(a,b,c){var d="string"===typeof a;if(L(a))return!0;b&&d&&(a=Ob(a,b));c&&d&&(a=a.replace(Wa,""));return!isNaN(parseFloat(a))&&isFinite(a)},Pb=function(a,b,c){return L(a)?!0:!(L(a)||"string"=== - typeof a)?null:Ya(a.replace(Aa,""),b,c)?!0:null},D=function(a,b,c){var d=[],e=0,f=a.length;if(c!==k)for(;ea.length)){b=a.slice().sort();for(var c=b[0],d=1,e=b.length;d")[0],Wb=va.textContent!==k,Yb=/<.*?>/g,Na=m.util.throttle,Rb=[],w=Array.prototype,ac=function(a){var b,c,d=m.settings,e=h.map(d,function(a){return a.nTable});if(a){if(a.nTable&&a.oApi)return[a];if(a.nodeName&&"table"===a.nodeName.toLowerCase())return b=h.inArray(a,e),-1!==b?[d[b]]:null;if(a&&"function"===typeof a.settings)return a.settings().toArray();"string"===typeof a?c=h(a):a instanceof - h&&(c=a)}else return[];if(c)return c.map(function(){b=h.inArray(this,e);return-1!==b?d[b]:null}).toArray()};s=function(a,b){if(!(this instanceof s))return new s(a,b);var c=[],d=function(a){(a=ac(a))&&(c=c.concat(a))};if(h.isArray(a))for(var e=0,f=a.length;ea?new s(b[a],this[a]):null},filter:function(a){var 
b=[];if(w.filter)b=w.filter.call(this,a,this);else for(var c=0,d=this.length;c").addClass(b),h("td",c).addClass(b).html(a)[0].colSpan=aa(d),e.push(c[0]))};f(a,b);c._details&&c._details.detach();c._details=h(e);c._detailsShow&& - c._details.insertAfter(c.nTr)}return this});o(["row().child.show()","row().child().show()"],function(){Tb(this,!0);return this});o(["row().child.hide()","row().child().hide()"],function(){Tb(this,!1);return this});o(["row().child.remove()","row().child().remove()"],function(){bb(this);return this});o("row().child.isShown()",function(){var a=this.context;return a.length&&this.length?a[0].aoData[this[0]]._detailsShow||!1:!1});var bc=/^([^:]+):(name|visIdx|visible)$/,Ub=function(a,b,c,d,e){for(var c= - [],d=0,f=e.length;d=0?b:g.length+b];if(typeof a==="function"){var e=Ba(c,f);return h.map(g,function(b,f){return a(f,Ub(c,f,0,0,e),i[f])?f:null})}var k=typeof a==="string"?a.match(bc):"";if(k)switch(k[2]){case "visIdx":case "visible":b= - parseInt(k[1],10);if(b<0){var m=h.map(g,function(a,b){return a.bVisible?b:null});return[m[m.length+b]]}return[Z(c,b)];case "name":return h.map(j,function(a,b){return a===k[1]?b:null});default:return[]}if(a.nodeName&&a._DT_CellIndex)return[a._DT_CellIndex.column];b=h(i).filter(a).map(function(){return h.inArray(this,i)}).toArray();if(b.length||!a.nodeName)return b;b=h(a).closest("*[data-dt-column]");return b.length?[b.data("dt-column")]:[]},c,f)},1);c.selector.cols=a;c.selector.opts=b;return c});u("columns().header()", - "column().header()",function(){return this.iterator("column",function(a,b){return a.aoColumns[b].nTh},1)});u("columns().footer()","column().footer()",function(){return this.iterator("column",function(a,b){return a.aoColumns[b].nTf},1)});u("columns().data()","column().data()",function(){return this.iterator("column-rows",Ub,1)});u("columns().dataSrc()","column().dataSrc()",function(){return this.iterator("column",function(a,b){return 
a.aoColumns[b].mData},1)});u("columns().cache()","column().cache()", - function(a){return this.iterator("column-rows",function(b,c,d,e,f){return ia(b.aoData,f,"search"===a?"_aFilterData":"_aSortData",c)},1)});u("columns().nodes()","column().nodes()",function(){return this.iterator("column-rows",function(a,b,c,d,e){return ia(a.aoData,e,"anCells",b)},1)});u("columns().visible()","column().visible()",function(a,b){var c=this.iterator("column",function(b,c){if(a===k)return b.aoColumns[c].bVisible;var f=b.aoColumns,g=f[c],j=b.aoData,i,n,l;if(a!==k&&g.bVisible!==a){if(a){var m= - h.inArray(!0,D(f,"bVisible"),c+1);i=0;for(n=j.length;id;return!0};m.isDataTable=m.fnIsDataTable=function(a){var b=h(a).get(0),c=!1;if(a instanceof m.Api)return!0;h.each(m.settings,function(a,e){var f=e.nScrollHead?h("table",e.nScrollHead)[0]:null,g=e.nScrollFoot? - h("table",e.nScrollFoot)[0]:null;if(e.nTable===b||f===b||g===b)c=!0});return c};m.tables=m.fnTables=function(a){var b=!1;h.isPlainObject(a)&&(b=a.api,a=a.visible);var c=h.map(m.settings,function(b){if(!a||a&&h(b.nTable).is(":visible"))return b.nTable});return b?new s(c):c};m.camelToHungarian=I;o("$()",function(a,b){var c=this.rows(b).nodes(),c=h(c);return h([].concat(c.filter(a).toArray(),c.find(a).toArray()))});h.each(["on","one","off"],function(a,b){o(b+"()",function(){var a=Array.prototype.slice.call(arguments); - a[0]=h.map(a[0].split(/\s/),function(a){return!a.match(/\.dt\b/)?a+".dt":a}).join(" ");var d=h(this.tables().nodes());d[b].apply(d,a);return this})});o("clear()",function(){return this.iterator("table",function(a){na(a)})});o("settings()",function(){return new s(this.context,this.context)});o("init()",function(){var a=this.context;return a.length?a[0].oInit:null});o("data()",function(){return this.iterator("table",function(a){return D(a.aoData,"_aData")}).flatten()});o("destroy()",function(a){a=a|| - !1;return this.iterator("table",function(b){var 
c=b.nTableWrapper.parentNode,d=b.oClasses,e=b.nTable,f=b.nTBody,g=b.nTHead,j=b.nTFoot,i=h(e),f=h(f),k=h(b.nTableWrapper),l=h.map(b.aoData,function(a){return a.nTr}),o;b.bDestroying=!0;r(b,"aoDestroyCallback","destroy",[b]);a||(new s(b)).columns().visible(!0);k.off(".DT").find(":not(tbody *)").off(".DT");h(E).off(".DT-"+b.sInstance);e!=g.parentNode&&(i.children("thead").detach(),i.append(g));j&&e!=j.parentNode&&(i.children("tfoot").detach(),i.append(j)); - b.aaSorting=[];b.aaSortingFixed=[];wa(b);h(l).removeClass(b.asStripeClasses.join(" "));h("th, td",g).removeClass(d.sSortable+" "+d.sSortableAsc+" "+d.sSortableDesc+" "+d.sSortableNone);f.children().detach();f.append(l);g=a?"remove":"detach";i[g]();k[g]();!a&&c&&(c.insertBefore(e,b.nTableReinsertBefore),i.css("width",b.sDestroyWidth).removeClass(d.sTable),(o=b.asDestroyStripes.length)&&f.children().each(function(a){h(this).addClass(b.asDestroyStripes[a%o])}));c=h.inArray(b,m.settings);-1!==c&&m.settings.splice(c, - 1)})});h.each(["column","row","cell"],function(a,b){o(b+"s().every()",function(a){var d=this.selector.opts,e=this;return this.iterator(b,function(f,g,h,i,n){a.call(e[b](g,"cell"===b?h:d,"cell"===b?d:k),g,h,i,n)})})});o("i18n()",function(a,b,c){var d=this.context[0],a=Q(a)(d.oLanguage);a===k&&(a=b);c!==k&&h.isPlainObject(a)&&(a=a[c]!==k?a[c]:a._);return a.replace("%d",c)});m.version="1.10.16";m.settings=[];m.models={};m.models.oSearch={bCaseInsensitive:!0,sSearch:"",bRegex:!1,bSmart:!0};m.models.oRow= - {nTr:null,anCells:null,_aData:[],_aSortData:null,_aFilterData:null,_sFilterRow:null,_sRowStripe:"",src:null,idx:-1};m.models.oColumn={idx:null,aDataSort:null,asSorting:null,bSearchable:null,bSortable:null,bVisible:null,_sManualType:null,_bAttrSrc:!1,fnCreatedCell:null,fnGetData:null,fnSetData:null,mData:null,mRender:null,nTh:null,nTf:null,sClass:null,sContentPadding:null,sDefaultContent:null,sName:null,sSortDataType:"std",sSortingClass:null,sSortingClassJUI:null,sTitle:null,sType:null,sWidth:null, - 
sWidthOrig:null};m.defaults={aaData:null,aaSorting:[[0,"asc"]],aaSortingFixed:[],ajax:null,aLengthMenu:[10,25,50,100],aoColumns:null,aoColumnDefs:null,aoSearchCols:[],asStripeClasses:null,bAutoWidth:!0,bDeferRender:!1,bDestroy:!1,bFilter:!0,bInfo:!0,bLengthChange:!0,bPaginate:!0,bProcessing:!1,bRetrieve:!1,bScrollCollapse:!1,bServerSide:!1,bSort:!0,bSortMulti:!0,bSortCellsTop:!1,bSortClasses:!0,bStateSave:!1,fnCreatedRow:null,fnDrawCallback:null,fnFooterCallback:null,fnFormatNumber:function(a){return a.toString().replace(/\B(?=(\d{3})+(?!\d))/g, - this.oLanguage.sThousands)},fnHeaderCallback:null,fnInfoCallback:null,fnInitComplete:null,fnPreDrawCallback:null,fnRowCallback:null,fnServerData:null,fnServerParams:null,fnStateLoadCallback:function(a){try{return JSON.parse((-1===a.iStateDuration?sessionStorage:localStorage).getItem("DataTables_"+a.sInstance+"_"+location.pathname))}catch(b){}},fnStateLoadParams:null,fnStateLoaded:null,fnStateSaveCallback:function(a,b){try{(-1===a.iStateDuration?sessionStorage:localStorage).setItem("DataTables_"+a.sInstance+ - "_"+location.pathname,JSON.stringify(b))}catch(c){}},fnStateSaveParams:null,iStateDuration:7200,iDeferLoading:null,iDisplayLength:10,iDisplayStart:0,iTabIndex:0,oClasses:{},oLanguage:{oAria:{sSortAscending:": activate to sort column ascending",sSortDescending:": activate to sort column descending"},oPaginate:{sFirst:"First",sLast:"Last",sNext:"Next",sPrevious:"Previous"},sEmptyTable:"No data available in table",sInfo:"Showing _START_ to _END_ of _TOTAL_ entries",sInfoEmpty:"Showing 0 to 0 of 0 entries", - sInfoFiltered:"(filtered from _MAX_ total entries)",sInfoPostFix:"",sDecimal:"",sThousands:",",sLengthMenu:"Show _MENU_ entries",sLoadingRecords:"Loading...",sProcessing:"Processing...",sSearch:"Search:",sSearchPlaceholder:"",sUrl:"",sZeroRecords:"No matching records 
found"},oSearch:h.extend({},m.models.oSearch),sAjaxDataProp:"data",sAjaxSource:null,sDom:"lfrtip",searchDelay:null,sPaginationType:"simple_numbers",sScrollX:"",sScrollXInner:"",sScrollY:"",sServerMethod:"GET",renderer:null,rowId:"DT_RowId"}; - X(m.defaults);m.defaults.column={aDataSort:null,iDataSort:-1,asSorting:["asc","desc"],bSearchable:!0,bSortable:!0,bVisible:!0,fnCreatedCell:null,mData:null,mRender:null,sCellType:"td",sClass:"",sContentPadding:"",sDefaultContent:null,sName:"",sSortDataType:"std",sTitle:null,sType:null,sWidth:null};X(m.defaults.column);m.models.oSettings={oFeatures:{bAutoWidth:null,bDeferRender:null,bFilter:null,bInfo:null,bLengthChange:null,bPaginate:null,bProcessing:null,bServerSide:null,bSort:null,bSortMulti:null, - bSortClasses:null,bStateSave:null},oScroll:{bCollapse:null,iBarWidth:0,sX:null,sXInner:null,sY:null},oLanguage:{fnInfoCallback:null},oBrowser:{bScrollOversize:!1,bScrollbarLeft:!1,bBounding:!1,barWidth:0},ajax:null,aanFeatures:[],aoData:[],aiDisplay:[],aiDisplayMaster:[],aIds:{},aoColumns:[],aoHeader:[],aoFooter:[],oPreviousSearch:{},aoPreSearchCols:[],aaSorting:null,aaSortingFixed:[],asStripeClasses:null,asDestroyStripes:[],sDestroyWidth:0,aoRowCallback:[],aoHeaderCallback:[],aoFooterCallback:[], - aoDrawCallback:[],aoRowCreatedCallback:[],aoPreDrawCallback:[],aoInitComplete:[],aoStateSaveParams:[],aoStateLoadParams:[],aoStateLoaded:[],sTableId:"",nTable:null,nTHead:null,nTFoot:null,nTBody:null,nTableWrapper:null,bDeferLoading:!1,bInitialised:!1,aoOpenRows:[],sDom:null,searchDelay:null,sPaginationType:"two_button",iStateDuration:0,aoStateSave:[],aoStateLoad:[],oSavedState:null,oLoadedState:null,sAjaxSource:null,sAjaxDataProp:null,bAjaxDataGet:!0,jqXHR:null,json:k,oAjaxData:k,fnServerData:null, - 
aoServerParams:[],sServerMethod:null,fnFormatNumber:null,aLengthMenu:null,iDraw:0,bDrawing:!1,iDrawError:-1,_iDisplayLength:10,_iDisplayStart:0,_iRecordsTotal:0,_iRecordsDisplay:0,oClasses:{},bFiltered:!1,bSorted:!1,bSortCellsTop:null,oInit:null,aoDestroyCallback:[],fnRecordsTotal:function(){return"ssp"==y(this)?1*this._iRecordsTotal:this.aiDisplayMaster.length},fnRecordsDisplay:function(){return"ssp"==y(this)?1*this._iRecordsDisplay:this.aiDisplay.length},fnDisplayEnd:function(){var a=this._iDisplayLength, - b=this._iDisplayStart,c=b+a,d=this.aiDisplay.length,e=this.oFeatures,f=e.bPaginate;return e.bServerSide?!1===f||-1===a?b+d:Math.min(b+a,this._iRecordsDisplay):!f||c>d||-1===a?d:c},oInstance:null,sInstance:null,iTabIndex:0,nScrollHead:null,nScrollFoot:null,aLastSort:[],oPlugins:{},rowIdFn:null,rowId:null};m.ext=x={buttons:{},classes:{},builder:"-source-",errMode:"alert",feature:[],search:[],selector:{cell:[],column:[],row:[]},internal:{},legacy:{ajax:null},pager:{},renderer:{pageButton:{},header:{}}, - order:{},type:{detect:[],search:{},order:{}},_unique:0,fnVersionCheck:m.fnVersionCheck,iApiIndex:0,oJUIClasses:{},sVersion:m.version};h.extend(x,{afnFiltering:x.search,aTypes:x.type.detect,ofnSearch:x.type.search,oSort:x.type.order,afnSortData:x.order,aoFeatures:x.feature,oApi:x.internal,oStdClasses:x.classes,oPagination:x.pager});h.extend(m.ext.classes,{sTable:"dataTable",sNoFooter:"no-footer",sPageButton:"paginate_button",sPageButtonActive:"current",sPageButtonDisabled:"disabled",sStripeOdd:"odd", - sStripeEven:"even",sRowEmpty:"dataTables_empty",sWrapper:"dataTables_wrapper",sFilter:"dataTables_filter",sInfo:"dataTables_info",sPaging:"dataTables_paginate 
paging_",sLength:"dataTables_length",sProcessing:"dataTables_processing",sSortAsc:"sorting_asc",sSortDesc:"sorting_desc",sSortable:"sorting",sSortableAsc:"sorting_asc_disabled",sSortableDesc:"sorting_desc_disabled",sSortableNone:"sorting_disabled",sSortColumn:"sorting_",sFilterInput:"",sLengthSelect:"",sScrollWrapper:"dataTables_scroll",sScrollHead:"dataTables_scrollHead", - sScrollHeadInner:"dataTables_scrollHeadInner",sScrollBody:"dataTables_scrollBody",sScrollFoot:"dataTables_scrollFoot",sScrollFootInner:"dataTables_scrollFootInner",sHeaderTH:"",sFooterTH:"",sSortJUIAsc:"",sSortJUIDesc:"",sSortJUI:"",sSortJUIAscAllowed:"",sSortJUIDescAllowed:"",sSortJUIWrapper:"",sSortIcon:"",sJUIHeader:"",sJUIFooter:""});var Kb=m.ext.pager;h.extend(Kb,{simple:function(){return["previous","next"]},full:function(){return["first","previous","next","last"]},numbers:function(a,b){return[ha(a, - b)]},simple_numbers:function(a,b){return["previous",ha(a,b),"next"]},full_numbers:function(a,b){return["first","previous",ha(a,b),"next","last"]},first_last_numbers:function(a,b){return["first",ha(a,b),"last"]},_numbers:ha,numbers_length:7});h.extend(!0,m.ext.renderer,{pageButton:{_:function(a,b,c,d,e,f){var g=a.oClasses,j=a.oLanguage.oPaginate,i=a.oLanguage.oAria.paginate||{},n,l,m=0,o=function(b,d){var k,s,u,r,v=function(b){Sa(a,b.data.action,true)};k=0;for(s=d.length;k").appendTo(b);o(u,r)}else{n=null;l="";switch(r){case "ellipsis":b.append('');break;case "first":n=j.sFirst;l=r+(e>0?"":" "+g.sPageButtonDisabled);break;case "previous":n=j.sPrevious;l=r+(e>0?"":" "+g.sPageButtonDisabled);break;case "next":n=j.sNext;l=r+(e",{"class":g.sPageButton+ - " "+l,"aria-controls":a.sTableId,"aria-label":i[r],"data-dt-idx":m,tabindex:a.iTabIndex,id:c===0&&typeof 
r==="string"?a.sTableId+"_"+r:null}).html(n).appendTo(b);Va(u,{action:r},v);m++}}}},s;try{s=h(b).find(G.activeElement).data("dt-idx")}catch(u){}o(h(b).empty(),d);s!==k&&h(b).find("[data-dt-idx="+s+"]").focus()}}});h.extend(m.ext.type.detect,[function(a,b){var c=b.oLanguage.sDecimal;return Ya(a,c)?"num"+c:null},function(a){if(a&&!(a instanceof Date)&&!Zb.test(a))return null;var b=Date.parse(a); - return null!==b&&!isNaN(b)||L(a)?"date":null},function(a,b){var c=b.oLanguage.sDecimal;return Ya(a,c,!0)?"num-fmt"+c:null},function(a,b){var c=b.oLanguage.sDecimal;return Pb(a,c)?"html-num"+c:null},function(a,b){var c=b.oLanguage.sDecimal;return Pb(a,c,!0)?"html-num-fmt"+c:null},function(a){return L(a)||"string"===typeof a&&-1!==a.indexOf("<")?"html":null}]);h.extend(m.ext.type.search,{html:function(a){return L(a)?a:"string"===typeof a?a.replace(Mb," ").replace(Aa,""):""},string:function(a){return L(a)? - a:"string"===typeof a?a.replace(Mb," "):a}});var za=function(a,b,c,d){if(0!==a&&(!a||"-"===a))return-Infinity;b&&(a=Ob(a,b));a.replace&&(c&&(a=a.replace(c,"")),d&&(a=a.replace(d,"")));return 1*a};h.extend(x.type.order,{"date-pre":function(a){return Date.parse(a)||-Infinity},"html-pre":function(a){return L(a)?"":a.replace?a.replace(/<.*?>/g,"").toLowerCase():a+""},"string-pre":function(a){return L(a)?"":"string"===typeof a?a.toLowerCase():!a.toString?"":a.toString()},"string-asc":function(a,b){return a< - b?-1:a>b?1:0},"string-desc":function(a,b){return ab?-1:0}});cb("");h.extend(!0,m.ext.renderer,{header:{_:function(a,b,c,d){h(a.nTable).on("order.dt.DT",function(e,f,g,h){if(a===f){e=c.idx;b.removeClass(c.sSortingClass+" "+d.sSortAsc+" "+d.sSortDesc).addClass(h[e]=="asc"?d.sSortAsc:h[e]=="desc"?d.sSortDesc:c.sSortingClass)}})},jqueryui:function(a,b,c,d){h("
").addClass(d.sSortJUIWrapper).append(b.contents()).append(h("").addClass(d.sSortIcon+" "+c.sSortingClassJUI)).appendTo(b); - h(a.nTable).on("order.dt.DT",function(e,f,g,h){if(a===f){e=c.idx;b.removeClass(d.sSortAsc+" "+d.sSortDesc).addClass(h[e]=="asc"?d.sSortAsc:h[e]=="desc"?d.sSortDesc:c.sSortingClass);b.find("span."+d.sSortIcon).removeClass(d.sSortJUIAsc+" "+d.sSortJUIDesc+" "+d.sSortJUI+" "+d.sSortJUIAscAllowed+" "+d.sSortJUIDescAllowed).addClass(h[e]=="asc"?d.sSortJUIAsc:h[e]=="desc"?d.sSortJUIDesc:c.sSortingClassJUI)}})}}});var Vb=function(a){return"string"===typeof a?a.replace(//g,">").replace(/"/g, - """):a};m.render={number:function(a,b,c,d,e){return{display:function(f){if("number"!==typeof f&&"string"!==typeof f)return f;var g=0>f?"-":"",h=parseFloat(f);if(isNaN(h))return Vb(f);h=h.toFixed(c);f=Math.abs(h);h=parseInt(f,10);f=c?b+(f-h).toFixed(c).substring(2):"";return g+(d||"")+h.toString().replace(/\B(?=(\d{3})+(?!\d))/g,a)+f+(e||"")}}},text:function(){return{display:Vb}}};h.extend(m.ext.internal,{_fnExternApiFunc:Lb,_fnBuildAjax:sa,_fnAjaxUpdate:kb,_fnAjaxParameters:tb,_fnAjaxUpdateDraw:ub, - _fnAjaxDataSrc:ta,_fnAddColumn:Da,_fnColumnOptions:ja,_fnAdjustColumnSizing:Y,_fnVisibleToColumnIndex:Z,_fnColumnIndexToVisible:$,_fnVisbleColumns:aa,_fnGetColumns:la,_fnColumnTypes:Fa,_fnApplyColumnDefs:hb,_fnHungarianMap:X,_fnCamelToHungarian:I,_fnLanguageCompat:Ca,_fnBrowserDetect:fb,_fnAddData:M,_fnAddTr:ma,_fnNodeToDataIndex:function(a,b){return b._DT_RowIndex!==k?b._DT_RowIndex:null},_fnNodeToColumnIndex:function(a,b,c){return h.inArray(c,a.aoData[b].anCells)},_fnGetCellData:B,_fnSetCellData:ib, - 
_fnSplitObjNotation:Ia,_fnGetObjectDataFn:Q,_fnSetObjectDataFn:R,_fnGetDataMaster:Ja,_fnClearTable:na,_fnDeleteIndex:oa,_fnInvalidate:ca,_fnGetRowElements:Ha,_fnCreateTr:Ga,_fnBuildHead:jb,_fnDrawHead:ea,_fnDraw:N,_fnReDraw:S,_fnAddOptionsHtml:mb,_fnDetectHeader:da,_fnGetUniqueThs:ra,_fnFeatureHtmlFilter:ob,_fnFilterComplete:fa,_fnFilterCustom:xb,_fnFilterColumn:wb,_fnFilter:vb,_fnFilterCreateSearch:Oa,_fnEscapeRegex:Pa,_fnFilterData:yb,_fnFeatureHtmlInfo:rb,_fnUpdateInfo:Bb,_fnInfoMacros:Cb,_fnInitialise:ga, - _fnInitComplete:ua,_fnLengthChange:Qa,_fnFeatureHtmlLength:nb,_fnFeatureHtmlPaginate:sb,_fnPageChange:Sa,_fnFeatureHtmlProcessing:pb,_fnProcessingDisplay:C,_fnFeatureHtmlTable:qb,_fnScrollDraw:ka,_fnApplyToChildren:H,_fnCalculateColumnWidths:Ea,_fnThrottle:Na,_fnConvertToWidth:Db,_fnGetWidestNode:Eb,_fnGetMaxLenString:Fb,_fnStringToCss:v,_fnSortFlatten:V,_fnSort:lb,_fnSortAria:Hb,_fnSortListener:Ua,_fnSortAttachListener:La,_fnSortingClasses:wa,_fnSortData:Gb,_fnSaveState:xa,_fnLoadState:Ib,_fnSettingsFromNode:ya, - _fnLog:J,_fnMap:F,_fnBindAction:Va,_fnCallbackReg:z,_fnCallbackFire:r,_fnLengthOverflow:Ra,_fnRenderer:Ma,_fnDataSource:y,_fnRowAttributes:Ka,_fnCalculateEnd:function(){}});h.fn.dataTable=m;m.$=h;h.fn.dataTableSettings=m.settings;h.fn.dataTableExt=m.ext;h.fn.DataTable=function(a){return h(this).dataTable(a).api()};h.each(m,function(a,b){h.fn.DataTable[a]=b});return h.fn.dataTable}); \ No newline at end of file diff --git a/traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.19_patched.js b/traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.19_patched.js new file mode 100644 index 0000000000..c5ee989889 --- /dev/null +++ b/traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.19_patched.js @@ -0,0 +1,31 @@ +/*! 
DataTables 1.10.19 + * ©2008-2018 SpryMedia Ltd - datatables.net/license + */ + +/* the following modifications were made to this file + * --- a/traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.19.js + * +++ b/traffic_portal/app/src/assets/js/jquery.dataTables.min_1.10.19_patched.js + * @@ -6310,6 +6310,7 @@ + * search: _fnSearchToCamel( settings.oPreviousSearch ), + * columns: $.map( settings.aoColumns, function ( col, i ) { + * return { + * + name: col.name, + * visible: col.bVisible, + * search: _fnSearchToCamel( settings.aoPreSearchCols[i] ) + * }; + * @@ -6399,6 +6400,7 @@ + * // Visibility + * if ( col.visible !== undefined ) { + * columns[i].bVisible = col.visible; + * + columns[i].bSearchable = col.visible; // base searchability on visibility + * } + * + * @@ -8575,6 +8577,7 @@ + * // Common actions + * col.bVisible = vis; + * + col.bSearchable = vis; // set searchable to same value as visible + * _fnDrawHead( settings, settings.aoHeader ); + * _fnDrawHead( settings, settings.aoFooter ); + */ + +!function(t){"use strict";"function"==typeof define&&define.amd?define(["jquery"],function(e){return t(e,window,document)}):"object"==typeof exports?module.exports=function(e,n){return e||(e=window),n||(n="undefined"!=typeof window?require("jquery"):require("jquery")(e)),t(n,e,e.document)}:t(jQuery,window,document)}(function(t,e,n,a){"use strict";var r,o,i,l,s=function(e){this.$=function(t,e){return this.api(!0).$(t,e)},this._=function(t,e){return this.api(!0).rows(t,e).data()},this.api=function(t){return new o(t?oe(this[r.iApiIndex]):this)},this.fnAddData=function(e,n){var r=this.api(!0),o=t.isArray(e)&&(t.isArray(e[0])||t.isPlainObject(e[0]))?r.rows.add(e):r.row.add(e);return(n===a||n)&&r.draw(),o.flatten().toArray()},this.fnAdjustColumnSizing=function(t){var e=this.api(!0).columns.adjust(),n=e.settings()[0],r=n.oScroll;t===a||t?e.draw(!1):""===r.sX&&""===r.sY||Bt(n)},this.fnClearTable=function(t){var 
e=this.api(!0).clear();(t===a||t)&&e.draw()},this.fnClose=function(t){this.api(!0).row(t).child.hide()},this.fnDeleteRow=function(t,e,n){var r=this.api(!0),o=r.rows(t),i=o.settings()[0],l=i.aoData[o[0][0]];return o.remove(),e&&e.call(this,i,l),(n===a||n)&&r.draw(),l},this.fnDestroy=function(t){this.api(!0).destroy(t)},this.fnDraw=function(t){this.api(!0).draw(t)},this.fnFilter=function(t,e,n,r,o,i){var l=this.api(!0);null===e||e===a?l.search(t,n,r,i):l.column(e).search(t,n,r,i),l.draw()},this.fnGetData=function(t,e){var n=this.api(!0);if(t!==a){var r=t.nodeName?t.nodeName.toLowerCase():"";return e!==a||"td"==r||"th"==r?n.cell(t,e).data():n.row(t).data()||null}return n.data().toArray()},this.fnGetNodes=function(t){var e=this.api(!0);return t!==a?e.row(t).node():e.rows().nodes().flatten().toArray()},this.fnGetPosition=function(t){var e=this.api(!0),n=t.nodeName.toUpperCase();if("TR"==n)return e.row(t).index();if("TD"==n||"TH"==n){var a=e.cell(t).index();return[a.row,a.columnVisible,a.column]}return null},this.fnIsOpen=function(t){return this.api(!0).row(t).child.isShown()},this.fnOpen=function(t,e,n){return this.api(!0).row(t).child(e,n).show().child()[0]},this.fnPageChange=function(t,e){var n=this.api(!0).page(t);(e===a||e)&&n.draw(!1)},this.fnSetColumnVis=function(t,e,n){var r=this.api(!0).column(t).visible(e);(n===a||n)&&r.columns.adjust().draw()},this.fnSettings=function(){return oe(this[r.iApiIndex])},this.fnSort=function(t){this.api(!0).order(t).draw()},this.fnSortListener=function(t,e,n){this.api(!0).order.listener(t,e,n)},this.fnUpdate=function(t,e,n,r,o){var i=this.api(!0);return n===a||null===n?i.row(e).data(t):i.cell(e,n).data(t),(o===a||o)&&i.columns.adjust(),(r===a||r)&&i.draw(),0},this.fnVersionCheck=r.fnVersionCheck;var n=this,i=e===a,l=this.length;for(var u in i&&(e={}),this.oApi=this.internal=r.internal,s.ext.internal)u&&(this[u]=Pe(u));return this.each(function(){var 
r,o=l>1?se({},e,!0):e,u=0,c=this.getAttribute("id"),f=!1,d=s.defaults,h=t(this);if("table"==this.nodeName.toLowerCase()){L(d),R(d.column),I(d,d,!0),I(d.column,d.column,!0),I(d,t.extend(o,h.data()));var p=s.settings;for(u=0,r=p.length;u").appendTo(h)),S.nTHead=i[0];var l=h.children("tbody");0===l.length&&(l=t("").appendTo(h)),S.nTBody=l[0];var s=h.children("tfoot");if(0===s.length&&n.length>0&&(""!==S.oScroll.sX||""!==S.oScroll.sY)&&(s=t("").appendTo(h)),0===s.length||0===s.children().length?h.addClass(m.sNoFooter):s.length>0&&(S.nTFoot=s[0],ct(S.aoFooter,S.nTFoot)),o.aaData)for(u=0;u/g,d=/^\d{2,4}[\.\/\-]\d{1,2}[\.\/\-]\d{1,2}([T ]{1}\d{1,2}[:\.]\d{2}([\.:]\d{2})?)?$/,h=new RegExp("(\\"+["/",".","*","+","?","|","(",")","[","]","{","}","\\","$","^","-"].join("|\\")+")","g"),p=/[',$£€¥%\u2009\u202F\u20BD\u20a9\u20BArfkɃΞ]/gi,g=function(t){return!t||!0===t||"-"===t},b=function(t){var e=parseInt(t,10);return!isNaN(e)&&isFinite(t)?e:null},v=function(t,e){return u[e]||(u[e]=new RegExp(wt(e),"g")),"string"==typeof t&&"."!==e?t.replace(/\./g,"").replace(u[e],"."):t},S=function(t,e,n){var a="string"==typeof t;return!!g(t)||(e&&a&&(t=v(t,e)),n&&a&&(t=t.replace(p,"")),!isNaN(parseFloat(t))&&isFinite(t))},m=function(t,e,n){return!!g(t)||(function(t){return g(t)||"string"==typeof t}(t)&&!!S(T(t),e,n)||null)},D=function(t,e,n){var r=[],o=0,i=t.length;if(n!==a)for(;o").css({position:"fixed",top:0,left:-1*t(e).scrollLeft(),height:1,width:1,overflow:"hidden"}).append(t("
").css({position:"absolute",top:1,left:1,width:100,overflow:"scroll"}).append(t("
").css({width:"100%",height:10}))).appendTo("body"),o=r.children(),i=o.children();a.barWidth=o[0].offsetWidth-o[0].clientWidth,a.bScrollOversize=100===i[0].offsetWidth&&100!==o[0].clientWidth,a.bScrollbarLeft=1!==Math.round(i.offset().left),a.bBounding=!!r[0].getBoundingClientRect().width,r.remove()}t.extend(n.oBrowser,s.__browser),n.oScroll.iBarWidth=s.__browser.barWidth}function j(t,e,n,r,o,i){var l,s=r,u=!1;for(n!==a&&(l=n,u=!0);s!==o;)t.hasOwnProperty(s)&&(l=u?e(l,t[s],s,t):t[s],u=!0,s+=i);return l}function N(e,a){var r=s.defaults.column,o=e.aoColumns.length,i=t.extend({},s.models.oColumn,r,{nTh:a||n.createElement("th"),sTitle:r.sTitle?r.sTitle:a?a.innerHTML:"",aDataSort:r.aDataSort?r.aDataSort:[o],mData:r.mData?r.mData:o,idx:o});e.aoColumns.push(i);var l=e.aoPreSearchCols;l[o]=t.extend({},s.models.oSearch,l[o]),H(e,o,t(a).data())}function H(e,n,r){var o=e.aoColumns[n],i=e.oClasses,l=t(o.nTh);if(!o.sWidthOrig){o.sWidthOrig=l.attr("width")||null;var u=(l.attr("style")||"").match(/width:\s*(\d+[pxem%]+)/);u&&(o.sWidthOrig=u[1])}r!==a&&null!==r&&(R(r),I(s.defaults.column,r),r.mDataProp===a||r.mData||(r.mData=r.mDataProp),r.sType&&(o._sManualType=r.sType),r.className&&!r.sClass&&(r.sClass=r.className),r.sClass&&l.addClass(r.sClass),t.extend(o,r),le(o,r,"sWidth","sWidthOrig"),r.iDataSort!==a&&(o.aDataSort=[r.iDataSort]),le(o,r,"aDataSort"));var c=o.mData,f=Y(c),d=o.mRender?Y(o.mRender):null,h=function(t){return"string"==typeof t&&-1!==t.indexOf("@")};o._bAttrSrc=t.isPlainObject(c)&&(h(c.sort)||h(c.type)||h(c.filter)),o._setter=null,o.fnGetData=function(t,e,n){var r=f(t,e,a,n);return d&&e?d(r,e,t,n):r},o.fnSetData=function(t,e,n){return Z(c)(t,e,n)},"number"!=typeof c&&(e._rowReadObject=!0),e.oFeatures.bSort||(o.bSortable=!1,l.addClass(i.sSortableNone));var 
p=-1!==t.inArray("asc",o.asSorting),g=-1!==t.inArray("desc",o.asSorting);o.bSortable&&(p||g)?p&&!g?(o.sSortingClass=i.sSortableAsc,o.sSortingClassJUI=i.sSortJUIAscAllowed):!p&&g?(o.sSortingClass=i.sSortableDesc,o.sSortingClassJUI=i.sSortJUIDescAllowed):(o.sSortingClass=i.sSortable,o.sSortingClassJUI=i.sSortJUI):(o.sSortingClass=i.sSortableNone,o.sSortingClassJUI="")}function O(t){if(!1!==t.oFeatures.bAutoWidth){var e=t.aoColumns;Xt(t);for(var n=0,a=e.length;n=0;i--){var p=(d=n[i]).targets!==a?d.targets:d.aTargets;for(t.isArray(p)||(p=[p]),s=0,u=p.length;s=0){for(;h.length<=p[s];)N(e);o(p[s],d)}else if("number"==typeof p[s]&&p[s]<0)o(h.length+p[s],d);else if("string"==typeof p[s])for(c=0,f=h.length;ce&&t[o]--;-1!=r&&n===a&&t.splice(r,1)}function et(t,e,n,r){var o,i,l=t.aoData[e],s=function(n,a){for(;n.childNodes.length;)n.removeChild(n.firstChild);n.innerHTML=J(t,e,a,"display")};if("dom"!==n&&(n&&"auto"!==n||"dom"!==l.src)){var u=l.anCells;if(u)if(r!==a)s(u[r],r);else for(o=0,i=u.length;o").appendTo(l)),n=0,a=f.length;ntr").attr("role","row"),t(l).find(">tr>th, >tr>td").addClass(c.sHeaderTH),t(s).find(">tr>th, >tr>td").addClass(c.sFooterTH),null!==s){var d=e.aoFooter[0];for(n=0,a=d.length;n=0;l--)e.aoColumns[l].bVisible||r||p[o].splice(l,1);g.push([])}for(o=0,i=p.length;o=e.fnRecordsDisplay()?0:u,e.iInitDisplayStart=-1);var d=e._iDisplayStart,h=e.fnDisplayEnd();if(e.bDeferLoading)e.bDeferLoading=!1,e.iDraw++,Wt(e,!1);else if(c){if(!e.bDestroying&&!ht(e))return}else e.iDraw++;if(0!==f.length)for(var p=c?0:d,g=c?e.aoData.length:h,b=p;b",{class:l?i[0]:""}).append(t("",{valign:"top",colSpan:W(e),class:e.oClasses.sRowEmpty}).html(y))[0]}fe(e,"aoHeaderCallback","header",[t(e.nTHead).children("tr")[0],K(e),d,h,f]),fe(e,"aoFooterCallback","footer",[t(e.nTFoot).children("tr")[0],K(e),d,h,f]);var _=t(e.nTBody);_.children().detach(),_.append(t(r)),fe(e,"aoDrawCallback","draw",[e]),e.bSorted=!1,e.bFiltered=!1,e.bDrawing=!1}else Wt(e,!1)}function st(t,e){var 
n=t.oFeatures,a=n.bSort,r=n.bFilter;a&&Zt(t),r?St(t,t.oPreviousSearch):t.aiDisplay=t.aiDisplayMaster.slice(),!0!==e&&(t._iDisplayStart=0),t._drawHold=e,lt(t),t._drawHold=!1}function ut(e){var n=e.oClasses,a=t(e.nTable),r=t("
").insertBefore(a),o=e.oFeatures,i=t("
",{id:e.sTableId+"_wrapper",class:n.sWrapper+(e.nTFoot?"":" "+n.sNoFooter)});e.nHolding=r[0],e.nTableWrapper=i[0],e.nTableReinsertBefore=e.nTable.nextSibling;for(var l,u,c,f,d,h,p=e.sDom.split(""),g=0;g")[0],"'"==(f=p[g+1])||'"'==f){for(d="",h=2;p[g+h]!=f;)d+=p[g+h],h++;if("H"==d?d=n.sJUIHeader:"F"==d&&(d=n.sJUIFooter),-1!=d.indexOf(".")){var b=d.split(".");c.id=b[0].substr(1,b[0].length-1),c.className=b[1]}else"#"==d.charAt(0)?c.id=d.substr(1,d.length-1):c.className=d;g+=h}i.append(c),i=t(c)}else if(">"==u)i=i.parent();else if("l"==u&&o.bPaginate&&o.bLengthChange)l=Ht(e);else if("f"==u&&o.bFilter)l=vt(e);else if("r"==u&&o.bProcessing)l=Mt(e);else if("t"==u)l=Et(e);else if("i"==u&&o.bInfo)l=Ft(e);else if("p"==u&&o.bPaginate)l=Ot(e);else if(0!==s.ext.feature.length)for(var v=s.ext.feature,S=0,m=v.length;S',u=o.sSearch;u=u.match(/_INPUT_/)?u.replace("_INPUT_",s):u+s;var c=t("
",{id:l.f?null:r+"_filter",class:a.sFilter}).append(t("
").addClass(n.sLength);return e.aanFeatures.l||(f[0].id=a+"_length"),f.children().append(e.oLanguage.sLengthMenu.replace("_MENU_",s[0].outerHTML)),t("select",f).val(e._iDisplayLength).on("change.DT",function(n){Nt(e,t(this).val()),lt(e)}),t(e.nTable).on("length.dt.DT",function(n,a,r){e===a&&t("select",f).val(r)}),f[0]}function Ot(e){var n=e.sPaginationType,a=s.ext.pager[n],r="function"==typeof a,o=function(t){lt(t)},i=t("
").addClass(e.oClasses.sPaging+n)[0],l=e.aanFeatures;return r||a.fnInit(e,i,o),l.p||(i.id=e.sTableId+"_paginate",e.aoDrawCallback.push({fn:function(t){if(r){var e,n,i=t._iDisplayStart,s=t._iDisplayLength,u=t.fnRecordsDisplay(),c=-1===s,f=c?0:Math.ceil(i/s),d=c?1:Math.ceil(u/s),h=a(f,d);for(e=0,n=l.p.length;eo&&(a=0):"first"==e?a=0:"previous"==e?(a=r>=0?a-r:0)<0&&(a=0):"next"==e?a+r",{id:e.aanFeatures.r?null:e.sTableId+"_processing",class:e.oClasses.sProcessing}).html(e.oLanguage.sProcessing).insertBefore(e.nTable)[0]}function Wt(e,n){e.oFeatures.bProcessing&&t(e.aanFeatures.r).css("display",n?"block":"none"),fe(e,null,"processing",[e,n])}function Et(e){var n=t(e.nTable);n.attr("role","grid");var a=e.oScroll;if(""===a.sX&&""===a.sY)return e.nTable;var r=a.sX,o=a.sY,i=e.oClasses,l=n.children("caption"),s=l.length?l[0]._captionSide:null,u=t(n[0].cloneNode(!1)),c=t(n[0].cloneNode(!1)),f=n.children("tfoot"),d="
",h=function(t){return t?zt(t):null};f.length||(f=null);var p=t(d,{class:i.sScrollWrapper}).append(t(d,{class:i.sScrollHead}).css({overflow:"hidden",position:"relative",border:0,width:r?h(r):"100%"}).append(t(d,{class:i.sScrollHeadInner}).css({"box-sizing":"content-box",width:a.sXInner||"100%"}).append(u.removeAttr("id").css("margin-left",0).append("top"===s?l:null).append(n.children("thead"))))).append(t(d,{class:i.sScrollBody}).css({position:"relative",overflow:"auto",width:h(r)}).append(n));f&&p.append(t(d,{class:i.sScrollFoot}).css({overflow:"hidden",border:0,width:r?h(r):"100%"}).append(t(d,{class:i.sScrollFootInner}).append(c.removeAttr("id").css("margin-left",0).append("bottom"===s?l:null).append(n.children("tfoot")))));var g=p.children(),b=g[0],v=g[1],S=f?g[2]:null;return r&&t(v).on("scroll.DT",function(t){var e=this.scrollLeft;b.scrollLeft=e,f&&(S.scrollLeft=e)}),t(v).css(o&&a.bCollapse?"max-height":"height",o),e.nScrollHead=b,e.nScrollBody=v,e.nScrollFoot=S,e.aoDrawCallback.push({fn:Bt,sName:"scrolling"}),p[0]}function Bt(e){var n,r,o,i,l,s,u,c,f,d=e.oScroll,h=d.sX,p=d.sXInner,g=d.sY,b=d.iBarWidth,v=t(e.nScrollHead),S=v[0].style,m=v.children("div"),y=m[0].style,_=m.children("table"),w=e.nScrollBody,T=t(w),C=w.style,x=t(e.nScrollFoot).children("div"),I=x.children("table"),A=t(e.nTHead),F=t(e.nTable),L=F[0],R=L.style,P=e.nTFoot?t(e.nTFoot):null,j=e.oBrowser,N=j.bScrollOversize,H=D(e.aoColumns,"nTh"),M=[],W=[],E=[],B=[],U=function(t){var e=t.style;e.paddingTop="0",e.paddingBottom="0",e.borderTopWidth="0",e.borderBottomWidth="0",e.height=0},V=w.scrollHeight>w.clientHeight;if(e.scrollBarVis!==V&&e.scrollBarVis!==a)return e.scrollBarVis=V,void O(e);e.scrollBarVis=V,F.children("thead, tfoot").remove(),P&&(s=P.clone().prependTo(F),r=P.find("tr"),i=s.find("tr")),l=A.clone().prependTo(F),n=A.find("tr"),o=l.find("tr"),l.find("th, 
td").removeAttr("tabindex"),h||(C.width="100%",v[0].style.width="100%"),t.each(ft(e,l),function(t,n){u=k(e,t),n.style.width=e.aoColumns[u].sWidth}),P&&Ut(function(t){t.style.width=""},i),f=F.outerWidth(),""===h?(R.width="100%",N&&(F.find("tbody").height()>w.offsetHeight||"scroll"==T.css("overflow-y"))&&(R.width=zt(F.outerWidth()-b)),f=F.outerWidth()):""!==p&&(R.width=zt(p),f=F.outerWidth()),Ut(U,o),Ut(function(e){E.push(e.innerHTML),M.push(zt(t(e).css("width")))},o),Ut(function(e,n){-1!==t.inArray(e,H)&&(e.style.width=M[n])},n),t(o).height(0),P&&(Ut(U,i),Ut(function(e){B.push(e.innerHTML),W.push(zt(t(e).css("width")))},i),Ut(function(t,e){t.style.width=W[e]},r),t(i).height(0)),Ut(function(t,e){t.innerHTML='
'+E[e]+"
",t.childNodes[0].style.height="0",t.childNodes[0].style.overflow="hidden",t.style.width=M[e]},o),P&&Ut(function(t,e){t.innerHTML='
'+B[e]+"
",t.childNodes[0].style.height="0",t.childNodes[0].style.overflow="hidden",t.style.width=W[e]},i),F.outerWidth()w.offsetHeight||"scroll"==T.css("overflow-y")?f+b:f,N&&(w.scrollHeight>w.offsetHeight||"scroll"==T.css("overflow-y"))&&(R.width=zt(c-b)),""!==h&&""===p||ie(e,1,"Possible column misalignment",6)):c="100%",C.width=zt(c),S.width=zt(c),P&&(e.nScrollFoot.style.width=zt(c)),g||N&&(C.height=zt(L.offsetHeight+b));var X=F.outerWidth();_[0].style.width=zt(X),y.width=zt(X);var J=F.height()>w.clientHeight||"scroll"==T.css("overflow-y"),q="padding"+(j.bScrollbarLeft?"Left":"Right");y[q]=J?b+"px":"0px",P&&(I[0].style.width=zt(X),x[0].style.width=zt(X),x[0].style[q]=J?b+"px":"0px"),F.children("colgroup").insertBefore(F.children("thead")),T.scroll(),!e.bSorted&&!e.bFiltered||e._drawHold||(w.scrollTop=0)}function Ut(t,e,n){for(var a,r,o=0,i=0,l=e.length;i/g;function Xt(n){var a,r,o,i=n.nTable,l=n.aoColumns,s=n.oScroll,u=s.sY,c=s.sX,f=s.sXInner,d=l.length,h=E(n,"bVisible"),p=t("th",n.nTHead),g=i.getAttribute("width"),b=i.parentNode,v=!1,S=n.oBrowser,m=S.bScrollOversize,D=i.style.width;for(D&&-1!==D.indexOf("%")&&(g=D),a=0;a").appendTo(_.find("tbody"));for(_.find("thead, tfoot").remove(),_.append(t(n.nTHead).clone()).append(t(n.nTFoot).clone()),_.find("tfoot th, tfoot td").css("width",""),p=ft(n,_.find("thead")[0]),a=0;a").css({width:r.sWidthOrig,margin:0,padding:0,border:0,height:1}));if(n.aoData.length)for(a=0;a").css(c||u?{position:"absolute",top:0,left:0,height:1,right:0,overflow:"hidden"}:{}).append(_).appendTo(b);c&&f?_.width(f):c?(_.css("width","auto"),_.removeAttr("width"),_.width()").css("width",zt(e)).appendTo(a||n.body),o=r[0].offsetWidth;return r.remove(),o}function Gt(e,n){var a=$t(e,n);if(a<0)return null;var r=e.aoData[a];return r.nTr?r.anCells[n]:t("").html(J(e,a,n,"display"))[0]}function $t(t,e){for(var n,a=-1,r=-1,o=0,i=t.aoData.length;oa&&(a=n.length,r=o);return r}function zt(t){return null===t?"0px":"number"==typeof 
t?t<0?"0px":t+"px":t.match(/\d$/)?t+"px":t}function Yt(e){var n,r,o,i,l,u,c,f=[],d=e.aoColumns,h=e.aaSortingFixed,p=t.isPlainObject(h),g=[],b=function(e){e.length&&!t.isArray(e[0])?g.push(e):t.merge(g,e)};for(t.isArray(h)&&b(h),p&&h.pre&&b(h.pre),b(e.aaSorting),p&&h.post&&b(h.post),n=0;na?1:0))return"asc"===s.dir?l:-l;return(n=i[t])<(a=i[e])?-1:n>a?1:0}):f.sort(function(t,e){var n,a,r,s,c,f=o.length,d=u[t]._aSortData,h=u[e]._aSortData;for(r=0;ra?1:0})}t.bSorted=!0}function Kt(t){for(var e,n,a=t.aoColumns,r=Yt(t),o=t.oLanguage.oAria,i=0,l=a.length;i/g,""),f=s.nTh;f.removeAttribute("aria-sort"),s.bSortable?(r.length>0&&r[0].col==i?(f.setAttribute("aria-sort","asc"==r[0].dir?"ascending":"descending"),n=u[r[0].index+1]||u[0]):n=u[0],e=c+("asc"===n?o.sSortAscending:o.sSortDescending)):e=c,f.setAttribute("aria-label",e)}}function Qt(e,n,r,o){var i,l=e.aoColumns[n],s=e.aaSorting,u=l.asSorting,c=function(e,n){var r=e._idx;return r===a&&(r=t.inArray(e[1],u)),r+10&&n.time<+new Date-1e3*u)r();else if(n.columns&&l.length!==n.columns.length)r();else{if(e.oLoadedState=t.extend(!0,{},n),n.start!==a&&(e._iDisplayStart=n.start,e.iInitDisplayStart=n.start),n.length!==a&&(e._iDisplayLength=n.length),n.order!==a&&(e.aaSorting=[],t.each(n.order,function(t,n){e.aaSorting.push(n[0]>=l.length?[0,n[1]]:n)})),n.search!==a&&t.extend(e.oPreviousSearch,At(n.search)),n.columns)for(o=0,i=n.columns.length;o=n&&(e=n-a),e-=e%a,(-1===a||e<0)&&(e=0),t._iDisplayStart=e}function he(e,n){var a=e.renderer,r=s.ext.renderer[n];return t.isPlainObject(a)&&a[n]?r[a[n]]||r._:"string"==typeof a&&r[a]||r._}function pe(t){return t.oFeatures.bServerSide?"ssp":t.ajax||t.sAjaxSource?"ajax":"dom"}var ge=[],be=Array.prototype;o=function(e,n){if(!(this instanceof o))return new o(e,n);var a=[],r=function(e){var n=function(e){var n,a,r=s.settings,o=t.map(r,function(t,e){return t.nTable});return e?e.nTable&&e.oApi?[e]:e.nodeName&&"table"===e.nodeName.toLowerCase()?-1!==(n=t.inArray(e,o))?[r[n]]:null:e&&"function"==typeof 
e.settings?e.settings().toArray():("string"==typeof e?a=t(e):e instanceof t&&(a=e),a?a.map(function(e){return-1!==(n=t.inArray(this,o))?r[n]:null}).toArray():void 0):[]}(e);n&&(a=a.concat(n))};if(t.isArray(e))for(var i=0,l=e.length;it?new o(e[t],this[t]):null},filter:function(t){var e=[];if(be.filter)e=be.filter.call(this,t,this);else for(var n=0,a=this.length;n0)return t[0].json}),i("ajax.params()",function(){var t=this.context;if(t.length>0)return t[0].oAjaxData}),i("ajax.reload()",function(t,e){return this.iterator("table",function(n){ve(n,!1===e,t)})}),i("ajax.url()",function(e){var n=this.context;return e===a?0===n.length?a:(n=n[0]).ajax?t.isPlainObject(n.ajax)?n.ajax.url:n.ajax:n.sAjaxSource:this.iterator("table",function(n){t.isPlainObject(n.ajax)?n.ajax.url=e:n.ajax=e})}),i("ajax.url().load()",function(t,e){return this.iterator("table",function(n){ve(n,!1===e,t)})});var Se=function(e,n,o,i,l){var s,u,c,f,d,h,p=[],g=typeof n;for(n&&"string"!==g&&"function"!==g&&n.length!==a||(n=[n]),c=0,f=n.length;c0)return t[0]=t[e],t[0].length=1,t.length=1,t.context=[t.context[e]],t;return t.length=0,t},ye=function(e,n){var a,r=[],o=e.aiDisplay,i=e.aiDisplayMaster,l=n.search,s=n.order,u=n.page;if("ssp"==pe(e))return"removed"===l?[]:_(0,i.length);if("current"==u)for(f=e._iDisplayStart,d=e.fnDisplayEnd();f=0&&"applied"==l)&&r.push(f);return r};i("rows()",function(e,n){e===a?e="":t.isPlainObject(e)&&(n=e,e=""),n=me(n);var r=this.iterator("table",function(r){return function(e,n,r){var o;return Se("row",n,function(n){var i=b(n),l=e.aoData;if(null!==i&&!r)return[i];if(o||(o=ye(e,r)),null!==i&&-1!==t.inArray(i,o))return[i];if(null===n||n===a||""===n)return o;if("function"==typeof n)return t.map(o,function(t){var e=l[t];return n(t,e._aData,e.nTr)?t:null});if(n.nodeName){var s=n._DT_RowIndex,u=n._DT_CellIndex;if(s!==a)return l[s]&&l[s].nTr===n?[s]:[];if(u)return l[u.row]&&l[u.row].nTr===n?[u.row]:[];var c=t(n).closest("*[data-dt-row]");return 
c.length?[c.data("dt-row")]:[]}if("string"==typeof n&&"#"===n.charAt(0)){var f=e.aIds[n.replace(/^#/,"")];if(f!==a)return[f.idx]}var d=w(y(e.aoData,o,"nTr"));return t(d).filter(n).map(function(){return this._DT_RowIndex}).toArray()},e,r)}(r,e,n)},1);return r.selector.rows=e,r.selector.opts=n,r}),i("rows().nodes()",function(){return this.iterator("row",function(t,e){return t.aoData[e].nTr||a},1)}),i("rows().data()",function(){return this.iterator(!0,"rows",function(t,e){return y(t.aoData,e,"_aData")},1)}),l("rows().cache()","row().cache()",function(t){return this.iterator("row",function(e,n){var a=e.aoData[n];return"search"===t?a._aFilterData:a._aSortData},1)}),l("rows().invalidate()","row().invalidate()",function(t){return this.iterator("row",function(e,n){et(e,n,t)})}),l("rows().indexes()","row().index()",function(){return this.iterator("row",function(t,e){return e},1)}),l("rows().ids()","row().id()",function(t){for(var e=[],n=this.context,a=0,r=n.length;a0&&e._iRecordsDisplay--,de(e);var h=e.rowIdFn(d._aData);h!==a&&delete e.aIds[h]}),this.iterator("table",function(t){for(var e=0,n=t.aoData.length;e0&&(e.on("draw.dt.DT_details",function(a,r){t===r&&e.rows({page:"current"}).eq(0).each(function(t){var e=n[t];e._detailsShow&&e._details.insertAfter(e.nTr)})}),e.on("column-visibility.dt.DT_details",function(e,a,r,o){if(t===a)for(var i,l=W(a),s=0,u=n.length;s").addClass(a);t("td",s).addClass(a).html(n)[0].colSpan=W(e),o.push(s[0])}};i(a,r),n._details&&n._details.detach(),n._details=t(o),n._detailsShow&&n._details.insertAfter(n.nTr)}(r[0],r[0].aoData[this[0]],e,n),this)}),i(["row().child.show()","row().child().show()"],function(t){return we(this,!0),this}),i(["row().child.hide()","row().child().hide()"],function(){return we(this,!1),this}),i(["row().child.remove()","row().child().remove()"],function(){return _e(this),this}),i("row().child.isShown()",function(){var t=this.context;return t.length&&this.length&&t[0].aoData[this[0]]._detailsShow||!1});var 
Ce=/^([^:]+):(name|visIdx|visible)$/,xe=function(t,e,n,a,r){for(var o=[],i=0,l=r.length;i=0?l:r.length+l];if("function"==typeof n){var s=ye(e,a);return t.map(r,function(t,a){return n(a,xe(e,a,0,0,s),i[a])?a:null})}var u="string"==typeof n?n.match(Ce):"";if(u)switch(u[2]){case"visIdx":case"visible":var c=parseInt(u[1],10);if(c<0){var f=t.map(r,function(t,e){return t.bVisible?e:null});return[f[f.length+c]]}return[k(e,c)];case"name":return t.map(o,function(t,e){return t===u[1]?e:null});default:return[]}if(n.nodeName&&n._DT_CellIndex)return[n._DT_CellIndex.column];var d=t(i).filter(n).map(function(){return t.inArray(this,i)}).toArray();if(d.length||!n.nodeName)return d;var h=t(n).closest("*[data-dt-column]");return h.length?[h.data("dt-column")]:[]},e,a)}(a,e,n)},1);return r.selector.cols=e,r.selector.opts=n,r}),l("columns().header()","column().header()",function(t,e){return this.iterator("column",function(t,e){return t.aoColumns[e].nTh},1)}),l("columns().footer()","column().footer()",function(t,e){return this.iterator("column",function(t,e){return t.aoColumns[e].nTf},1)}),l("columns().data()","column().data()",function(){return this.iterator("column-rows",xe,1)}),l("columns().dataSrc()","column().dataSrc()",function(){return this.iterator("column",function(t,e){return t.aoColumns[e].mData},1)}),l("columns().cache()","column().cache()",function(t){return this.iterator("column-rows",function(e,n,a,r,o){return y(e.aoData,o,"search"===t?"_aFilterData":"_aSortData",n)},1)}),l("columns().nodes()","column().nodes()",function(){return this.iterator("column-rows",function(t,e,n,a,r){return y(t.aoData,r,"anCells",e)},1)}),l("columns().visible()","column().visible()",function(e,n){var r=this.iterator("column",function(n,r){if(e===a)return n.aoColumns[r].bVisible;!function(e,n,r){var o,i,l,s,u=e.aoColumns,c=u[n],f=e.aoData;if(r===a)return c.bVisible;if(c.bVisible!==r){if(r){var 
d=t.inArray(!0,D(u,"bVisible"),n+1);for(i=0,l=f.length;in;return!0},s.isDataTable=s.fnIsDataTable=function(e){var n=t(e).get(0),a=!1;return e instanceof s.Api||(t.each(s.settings,function(e,r){var o=r.nScrollHead?t("table",r.nScrollHead)[0]:null,i=r.nScrollFoot?t("table",r.nScrollFoot)[0]:null;r.nTable!==n&&o!==n&&i!==n||(a=!0)}),a)},s.tables=s.fnTables=function(e){var n=!1;t.isPlainObject(e)&&(n=e.api,e=e.visible);var a=t.map(s.settings,function(n){if(!e||e&&t(n.nTable).is(":visible"))return n.nTable});return n?new o(a):a},s.camelToHungarian=I,i("$()",function(e,n){var a=this.rows(n).nodes(),r=t(a);return t([].concat(r.filter(e).toArray(),r.find(e).toArray()))}),t.each(["on","one","off"],function(e,n){i(n+"()",function(){var e=Array.prototype.slice.call(arguments);e[0]=t.map(e[0].split(/\s/),function(t){return t.match(/\.dt\b/)?t:t+".dt"}).join(" ");var a=t(this.tables().nodes());return a[n].apply(a,e),this})}),i("clear()",function(){return this.iterator("table",function(t){Q(t)})}),i("settings()",function(){return new o(this.context,this.context)}),i("init()",function(){var t=this.context;return t.length?t[0].oInit:null}),i("data()",function(){return this.iterator("table",function(t){return D(t.aoData,"_aData")}).flatten()}),i("destroy()",function(n){return n=n||!1,this.iterator("table",function(a){var r,i=a.nTableWrapper.parentNode,l=a.oClasses,u=a.nTable,c=a.nTBody,f=a.nTHead,d=a.nTFoot,h=t(u),p=t(c),g=t(a.nTableWrapper),b=t.map(a.aoData,function(t){return t.nTr});a.bDestroying=!0,fe(a,"aoDestroyCallback","destroy",[a]),n||new o(a).columns().visible(!0),g.off(".DT").find(":not(tbody *)").off(".DT"),t(e).off(".DT-"+a.sInstance),u!=f.parentNode&&(h.children("thead").detach(),h.append(f)),d&&u!=d.parentNode&&(h.children("tfoot").detach(),h.append(d)),a.aaSorting=[],a.aaSortingFixed=[],ee(a),t(b).removeClass(a.asStripeClasses.join(" ")),t("th, td",f).removeClass(l.sSortable+" "+l.sSortableAsc+" "+l.sSortableDesc+" 
"+l.sSortableNone),p.children().detach(),p.append(b);var v=n?"remove":"detach";h[v](),g[v](),!n&&i&&(i.insertBefore(u,a.nTableReinsertBefore),h.css("width",a.sDestroyWidth).removeClass(l.sTable),(r=a.asDestroyStripes.length)&&p.children().each(function(e){t(this).addClass(a.asDestroyStripes[e%r])}));var S=t.inArray(a,s.settings);-1!==S&&s.settings.splice(S,1)})}),t.each(["column","row","cell"],function(t,e){i(e+"s().every()",function(t){var n=this.selector.opts,r=this;return this.iterator(e,function(o,i,l,s,u){t.call(r[e](i,"cell"===e?l:n,"cell"===e?n:a),i,l,s,u)})})}),i("i18n()",function(e,n,r){var o=this.context[0],i=Y(e)(o.oLanguage);return i===a&&(i=n),r!==a&&t.isPlainObject(i)&&(i=i[r]!==a?i[r]:i._),i.replace("%d",r)}),s.version="1.10.19",s.settings=[],s.models={},s.models.oSearch={bCaseInsensitive:!0,sSearch:"",bRegex:!1,bSmart:!0},s.models.oRow={nTr:null,anCells:null,_aData:[],_aSortData:null,_aFilterData:null,_sFilterRow:null,_sRowStripe:"",src:null,idx:-1},s.models.oColumn={idx:null,aDataSort:null,asSorting:null,bSearchable:null,bSortable:null,bVisible:null,_sManualType:null,_bAttrSrc:!1,fnCreatedCell:null,fnGetData:null,fnSetData:null,mData:null,mRender:null,nTh:null,nTf:null,sClass:null,sContentPadding:null,sDefaultContent:null,sName:null,sSortDataType:"std",sSortingClass:null,sSortingClassJUI:null,sTitle:null,sType:null,sWidth:null,sWidthOrig:null},s.defaults={aaData:null,aaSorting:[[0,"asc"]],aaSortingFixed:[],ajax:null,aLengthMenu:[10,25,50,100],aoColumns:null,aoColumnDefs:null,aoSearchCols:[],asStripeClasses:null,bAutoWidth:!0,bDeferRender:!1,bDestroy:!1,bFilter:!0,bInfo:!0,bLengthChange:!0,bPaginate:!0,bProcessing:!1,bRetrieve:!1,bScrollCollapse:!1,bServerSide:!1,bSort:!0,bSortMulti:!0,bSortCellsTop:!1,bSortClasses:!0,bStateSave:!1,fnCreatedRow:null,fnDrawCallback:null,fnFooterCallback:null,fnFormatNumber:function(t){return 
t.toString().replace(/\B(?=(\d{3})+(?!\d))/g,this.oLanguage.sThousands)},fnHeaderCallback:null,fnInfoCallback:null,fnInitComplete:null,fnPreDrawCallback:null,fnRowCallback:null,fnServerData:null,fnServerParams:null,fnStateLoadCallback:function(t){try{return JSON.parse((-1===t.iStateDuration?sessionStorage:localStorage).getItem("DataTables_"+t.sInstance+"_"+location.pathname))}catch(t){}},fnStateLoadParams:null,fnStateLoaded:null,fnStateSaveCallback:function(t,e){try{(-1===t.iStateDuration?sessionStorage:localStorage).setItem("DataTables_"+t.sInstance+"_"+location.pathname,JSON.stringify(e))}catch(t){}},fnStateSaveParams:null,iStateDuration:7200,iDeferLoading:null,iDisplayLength:10,iDisplayStart:0,iTabIndex:0,oClasses:{},oLanguage:{oAria:{sSortAscending:": activate to sort column ascending",sSortDescending:": activate to sort column descending"},oPaginate:{sFirst:"First",sLast:"Last",sNext:"Next",sPrevious:"Previous"},sEmptyTable:"No data available in table",sInfo:"Showing _START_ to _END_ of _TOTAL_ entries",sInfoEmpty:"Showing 0 to 0 of 0 entries",sInfoFiltered:"(filtered from _MAX_ total entries)",sInfoPostFix:"",sDecimal:"",sThousands:",",sLengthMenu:"Show _MENU_ entries",sLoadingRecords:"Loading...",sProcessing:"Processing...",sSearch:"Search:",sSearchPlaceholder:"",sUrl:"",sZeroRecords:"No matching records 
found"},oSearch:t.extend({},s.models.oSearch),sAjaxDataProp:"data",sAjaxSource:null,sDom:"lfrtip",searchDelay:null,sPaginationType:"simple_numbers",sScrollX:"",sScrollXInner:"",sScrollY:"",sServerMethod:"GET",renderer:null,rowId:"DT_RowId"},x(s.defaults),s.defaults.column={aDataSort:null,iDataSort:-1,asSorting:["asc","desc"],bSearchable:!0,bSortable:!0,bVisible:!0,fnCreatedCell:null,mData:null,mRender:null,sCellType:"td",sClass:"",sContentPadding:"",sDefaultContent:null,sName:"",sSortDataType:"std",sTitle:null,sType:null,sWidth:null},x(s.defaults.column),s.models.oSettings={oFeatures:{bAutoWidth:null,bDeferRender:null,bFilter:null,bInfo:null,bLengthChange:null,bPaginate:null,bProcessing:null,bServerSide:null,bSort:null,bSortMulti:null,bSortClasses:null,bStateSave:null},oScroll:{bCollapse:null,iBarWidth:0,sX:null,sXInner:null,sY:null},oLanguage:{fnInfoCallback:null},oBrowser:{bScrollOversize:!1,bScrollbarLeft:!1,bBounding:!1,barWidth:0},ajax:null,aanFeatures:[],aoData:[],aiDisplay:[],aiDisplayMaster:[],aIds:{},aoColumns:[],aoHeader:[],aoFooter:[],oPreviousSearch:{},aoPreSearchCols:[],aaSorting:null,aaSortingFixed:[],asStripeClasses:null,asDestroyStripes:[],sDestroyWidth:0,aoRowCallback:[],aoHeaderCallback:[],aoFooterCallback:[],aoDrawCallback:[],aoRowCreatedCallback:[],aoPreDrawCallback:[],aoInitComplete:[],aoStateSaveParams:[],aoStateLoadParams:[],aoStateLoaded:[],sTableId:"",nTable:null,nTHead:null,nTFoot:null,nTBody:null,nTableWrapper:null,bDeferLoading:!1,bInitialised:!1,aoOpenRows:[],sDom:null,searchDelay:null,sPaginationType:"two_button",iStateDuration:0,aoStateSave:[],aoStateLoad:[],oSavedState:null,oLoadedState:null,sAjaxSource:null,sAjaxDataProp:null,bAjaxDataGet:!0,jqXHR:null,json:a,oAjaxData:a,fnServerData:null,aoServerParams:[],sServerMethod:null,fnFormatNumber:null,aLengthMenu:null,iDraw:0,bDrawing:!1,iDrawError:-1,_iDisplayLength:10,_iDisplayStart:0,_iRecordsTotal:0,_iRecordsDisplay:0,oClasses:{},bFiltered:!1,bSorted:!1,bSortCellsTop:null,oInit:null,aoD
estroyCallback:[],fnRecordsTotal:function(){return"ssp"==pe(this)?1*this._iRecordsTotal:this.aiDisplayMaster.length},fnRecordsDisplay:function(){return"ssp"==pe(this)?1*this._iRecordsDisplay:this.aiDisplay.length},fnDisplayEnd:function(){var t=this._iDisplayLength,e=this._iDisplayStart,n=e+t,a=this.aiDisplay.length,r=this.oFeatures,o=r.bPaginate;return r.bServerSide?!1===o||-1===t?e+a:Math.min(e+t,this._iRecordsDisplay):!o||n>a||-1===t?a:n},oInstance:null,sInstance:null,iTabIndex:0,nScrollHead:null,nScrollFoot:null,aLastSort:[],oPlugins:{},rowIdFn:null,rowId:null},s.ext=r={buttons:{},classes:{},builder:"-source-",errMode:"alert",feature:[],search:[],selector:{cell:[],column:[],row:[]},internal:{},legacy:{ajax:null},pager:{},renderer:{pageButton:{},header:{}},order:{},type:{detect:[],search:{},order:{}},_unique:0,fnVersionCheck:s.fnVersionCheck,iApiIndex:0,oJUIClasses:{},sVersion:s.version},t.extend(r,{afnFiltering:r.search,aTypes:r.type.detect,ofnSearch:r.type.search,oSort:r.type.order,afnSortData:r.order,aoFeatures:r.feature,oApi:r.internal,oStdClasses:r.classes,oPagination:r.pager}),t.extend(s.ext.classes,{sTable:"dataTable",sNoFooter:"no-footer",sPageButton:"paginate_button",sPageButtonActive:"current",sPageButtonDisabled:"disabled",sStripeOdd:"odd",sStripeEven:"even",sRowEmpty:"dataTables_empty",sWrapper:"dataTables_wrapper",sFilter:"dataTables_filter",sInfo:"dataTables_info",sPaging:"dataTables_paginate 
paging_",sLength:"dataTables_length",sProcessing:"dataTables_processing",sSortAsc:"sorting_asc",sSortDesc:"sorting_desc",sSortable:"sorting",sSortableAsc:"sorting_asc_disabled",sSortableDesc:"sorting_desc_disabled",sSortableNone:"sorting_disabled",sSortColumn:"sorting_",sFilterInput:"",sLengthSelect:"",sScrollWrapper:"dataTables_scroll",sScrollHead:"dataTables_scrollHead",sScrollHeadInner:"dataTables_scrollHeadInner",sScrollBody:"dataTables_scrollBody",sScrollFoot:"dataTables_scrollFoot",sScrollFootInner:"dataTables_scrollFootInner",sHeaderTH:"",sFooterTH:"",sSortJUIAsc:"",sSortJUIDesc:"",sSortJUI:"",sSortJUIAscAllowed:"",sSortJUIDescAllowed:"",sSortJUIWrapper:"",sSortIcon:"",sJUIHeader:"",sJUIFooter:""});var Ie=s.ext.pager;function Ae(t,e){var n=[],a=Ie.numbers_length,r=Math.floor(a/2);return e<=a?n=_(0,e):t<=r?((n=_(0,a-2)).push("ellipsis"),n.push(e-1)):t>=e-1-r?((n=_(e-(a-2),e)).splice(0,0,"ellipsis"),n.splice(0,0,0)):((n=_(t-r+2,t+r-1)).push("ellipsis"),n.push(e-1),n.splice(0,0,"ellipsis"),n.splice(0,0,0)),n.DT_el="span",n}t.extend(Ie,{simple:function(t,e){return["previous","next"]},full:function(t,e){return["first","previous","next","last"]},numbers:function(t,e){return[Ae(t,e)]},simple_numbers:function(t,e){return["previous",Ae(t,e),"next"]},full_numbers:function(t,e){return["first","previous",Ae(t,e),"next","last"]},first_last_numbers:function(t,e){return["first",Ae(t,e),"last"]},_numbers:Ae,numbers_length:7}),t.extend(!0,s.ext.renderer,{pageButton:{_:function(e,r,o,i,l,s){var u,c,f,d=e.oClasses,h=e.oLanguage.oPaginate,p=e.oLanguage.oAria.paginate||{},g=0,b=function(n,a){var r,i,f,v=function(t){kt(e,t.data.action,!0)};for(r=0,i=a.length;r").appendTo(n);b(S,f)}else{switch(u=null,c="",f){case"ellipsis":n.append('');break;case"first":u=h.sFirst,c=f+(l>0?"":" "+d.sPageButtonDisabled);break;case"previous":u=h.sPrevious,c=f+(l>0?"":" "+d.sPageButtonDisabled);break;case"next":u=h.sNext,c=f+(l",{class:d.sPageButton+" 
"+c,"aria-controls":e.sTableId,"aria-label":p[f],"data-dt-idx":g,tabindex:e.iTabIndex,id:0===o&&"string"==typeof f?e.sTableId+"_"+f:null}).html(u).appendTo(n),{action:f},v),g++)}};try{f=t(r).find(n.activeElement).data("dt-idx")}catch(t){}b(t(r).empty(),i),f!==a&&t(r).find("[data-dt-idx="+f+"]").focus()}}}),t.extend(s.ext.type.detect,[function(t,e){var n=e.oLanguage.sDecimal;return S(t,n)?"num"+n:null},function(t,e){if(t&&!(t instanceof Date)&&!d.test(t))return null;var n=Date.parse(t);return null!==n&&!isNaN(n)||g(t)?"date":null},function(t,e){var n=e.oLanguage.sDecimal;return S(t,n,!0)?"num-fmt"+n:null},function(t,e){var n=e.oLanguage.sDecimal;return m(t,n)?"html-num"+n:null},function(t,e){var n=e.oLanguage.sDecimal;return m(t,n,!0)?"html-num-fmt"+n:null},function(t,e){return g(t)||"string"==typeof t&&-1!==t.indexOf("<")?"html":null}]),t.extend(s.ext.type.search,{html:function(t){return g(t)?t:"string"==typeof t?t.replace(c," ").replace(f,""):""},string:function(t){return g(t)?t:"string"==typeof t?t.replace(c," "):t}});var Fe=function(t,e,n,a){return 0===t||t&&"-"!==t?(e&&(t=v(t,e)),t.replace&&(n&&(t=t.replace(n,"")),a&&(t=t.replace(a,""))),1*t):-1/0};function Le(e){t.each({num:function(t){return Fe(t,e)},"num-fmt":function(t){return Fe(t,e,p)},"html-num":function(t){return Fe(t,e,f)},"html-num-fmt":function(t){return Fe(t,e,f,p)}},function(t,n){r.type.order[t+e+"-pre"]=n,t.match(/^html\-/)&&(r.type.search[t+e]=r.type.search.html)})}t.extend(r.type.order,{"date-pre":function(t){var e=Date.parse(t);return isNaN(e)?-1/0:e},"html-pre":function(t){return g(t)?"":t.replace?t.replace(/<.*?>/g,"").toLowerCase():t+""},"string-pre":function(t){return g(t)?"":"string"==typeof t?t.toLowerCase():t.toString?t.toString():""},"string-asc":function(t,e){return te?1:0},"string-desc":function(t,e){return te?-1:0}}),Le(""),t.extend(!0,s.ext.renderer,{header:{_:function(e,n,a,r){t(e.nTable).on("order.dt.DT",function(t,o,i,l){if(e===o){var s=a.idx;n.removeClass(a.sSortingClass+" 
"+r.sSortAsc+" "+r.sSortDesc).addClass("asc"==l[s]?r.sSortAsc:"desc"==l[s]?r.sSortDesc:a.sSortingClass)}})},jqueryui:function(e,n,a,r){t("
").addClass(r.sSortJUIWrapper).append(n.contents()).append(t("").addClass(r.sSortIcon+" "+a.sSortingClassJUI)).appendTo(n),t(e.nTable).on("order.dt.DT",function(t,o,i,l){if(e===o){var s=a.idx;n.removeClass(r.sSortAsc+" "+r.sSortDesc).addClass("asc"==l[s]?r.sSortAsc:"desc"==l[s]?r.sSortDesc:a.sSortingClass),n.find("span."+r.sSortIcon).removeClass(r.sSortJUIAsc+" "+r.sSortJUIDesc+" "+r.sSortJUI+" "+r.sSortJUIAscAllowed+" "+r.sSortJUIDescAllowed).addClass("asc"==l[s]?r.sSortJUIAsc:"desc"==l[s]?r.sSortJUIDesc:a.sSortingClassJUI)}})}}});var Re=function(t){return"string"==typeof t?t.replace(//g,">").replace(/"/g,"""):t};function Pe(t){return function(){var e=[oe(this[s.ext.iApiIndex])].concat(Array.prototype.slice.call(arguments));return s.ext.internal[t].apply(this,e)}}return s.render={number:function(t,e,n,a,r){return{display:function(o){if("number"!=typeof o&&"string"!=typeof o)return o;var i=o<0?"-":"",l=parseFloat(o);if(isNaN(l))return Re(o);l=l.toFixed(n),o=Math.abs(l);var s=parseInt(o,10),u=n?e+(o-s).toFixed(n).substring(2):"";return i+(a||"")+s.toString().replace(/\B(?=(\d{3})+(?!\d))/g,t)+u+(r||"")}}},text:function(){return{display:Re,filter:Re}}},t.extend(s.ext.internal,{_fnExternApiFunc:Pe,_fnBuildAjax:dt,_fnAjaxUpdate:ht,_fnAjaxParameters:pt,_fnAjaxUpdateDraw:gt,_fnAjaxDataSrc:bt,_fnAddColumn:N,_fnColumnOptions:H,_fnAdjustColumnSizing:O,_fnVisibleToColumnIndex:k,_fnColumnIndexToVisible:M,_fnVisbleColumns:W,_fnGetColumns:E,_fnColumnTypes:B,_fnApplyColumnDefs:U,_fnHungarianMap:x,_fnCamelToHungarian:I,_fnLanguageCompat:A,_fnBrowserDetect:P,_fnAddData:V,_fnAddTr:X,_fnNodeToDataIndex:function(t,e){return e._DT_RowIndex!==a?e._DT_RowIndex:null},_fnNodeToColumnIndex:function(e,n,a){return 
t.inArray(a,e.aoData[n].anCells)},_fnGetCellData:J,_fnSetCellData:q,_fnSplitObjNotation:z,_fnGetObjectDataFn:Y,_fnSetObjectDataFn:Z,_fnGetDataMaster:K,_fnClearTable:Q,_fnDeleteIndex:tt,_fnInvalidate:et,_fnGetRowElements:nt,_fnCreateTr:at,_fnBuildHead:ot,_fnDrawHead:it,_fnDraw:lt,_fnReDraw:st,_fnAddOptionsHtml:ut,_fnDetectHeader:ct,_fnGetUniqueThs:ft,_fnFeatureHtmlFilter:vt,_fnFilterComplete:St,_fnFilterCustom:mt,_fnFilterColumn:Dt,_fnFilter:yt,_fnFilterCreateSearch:_t,_fnEscapeRegex:wt,_fnFilterData:xt,_fnFeatureHtmlInfo:Ft,_fnUpdateInfo:Lt,_fnInfoMacros:Rt,_fnInitialise:Pt,_fnInitComplete:jt,_fnLengthChange:Nt,_fnFeatureHtmlLength:Ht,_fnFeatureHtmlPaginate:Ot,_fnPageChange:kt,_fnFeatureHtmlProcessing:Mt,_fnProcessingDisplay:Wt,_fnFeatureHtmlTable:Et,_fnScrollDraw:Bt,_fnApplyToChildren:Ut,_fnCalculateColumnWidths:Xt,_fnThrottle:Jt,_fnConvertToWidth:qt,_fnGetWidestNode:Gt,_fnGetMaxLenString:$t,_fnStringToCss:zt,_fnSortFlatten:Yt,_fnSort:Zt,_fnSortAria:Kt,_fnSortListener:Qt,_fnSortAttachListener:te,_fnSortingClasses:ee,_fnSortData:ne,_fnSaveState:ae,_fnLoadState:re,_fnSettingsFromNode:oe,_fnLog:ie,_fnMap:le,_fnBindAction:ue,_fnCallbackReg:ce,_fnCallbackFire:fe,_fnLengthOverflow:de,_fnRenderer:he,_fnDataSource:pe,_fnRowAttributes:rt,_fnExtend:se,_fnCalculateEnd:function(){}}),t.fn.dataTable=s,s.$=t,t.fn.dataTableSettings=s.settings,t.fn.dataTableExt=s.ext,t.fn.DataTable=function(e){return t(this).dataTable(e).api()},t.each(s,function(e,n){t.fn.DataTable[e]=n}),t.fn.dataTable}); diff --git a/traffic_portal/app/src/common/api/AuthService.js b/traffic_portal/app/src/common/api/AuthService.js index 439390a146..b73ea879fe 100644 --- a/traffic_portal/app/src/common/api/AuthService.js +++ b/traffic_portal/app/src/common/api/AuthService.js @@ -17,16 +17,47 @@ * under the License. 
*/ -var AuthService = function($rootScope, $http, $state, $location, $q, $state, httpService, userModel, messageModel, ENV) { +var AuthService = function($rootScope, $http, $state, $location, userModel, messageModel, ENV, locationUtils) { this.login = function(username, password) { userModel.resetUser(); - return httpService.post(ENV.api['root'] + 'user/login', { u: username, p: password }) + return $http.post(ENV.api['root'] + 'user/login', { u: username, p: password }).then( + function(result) { + $rootScope.$broadcast('authService::login'); + const redirect = decodeURIComponent($location.search().redirect); + if (redirect !== 'undefined') { + $location.search('redirect', null); // remove the redirect query param + $location.url(redirect); + } else { + $location.url('/'); + } + return result; + }, + function(err) { + throw err; + } + ); + }; + + this.tokenLogin = function(token) { + userModel.resetUser(); + return $http.post(ENV.api['root'] + "user/login/token", { t: token }).then( + function(result) { + return result; + }, + function(err) { + throw err; + } + ); + }; + + this.oauthLogin = function(authCodeTokenUrl, code, clientId, clientSecret, redirectUri) { + return $http.post(ENV.api['root'] + 'user/login/oauth', { authCodeTokenUrl: authCodeTokenUrl, code: code, clientId: clientId, clientSecret: clientSecret, redirectUri: redirectUri}) .then( function(result) { $rootScope.$broadcast('authService::login'); - var redirect = decodeURIComponent($location.search().redirect); - if (redirect !== 'undefined') { + const redirect = decodeURIComponent($location.search().redirect); + if (redirect !== 'undefined') { $location.search('redirect', null); // remove the redirect query param $location.url(redirect); } else { @@ -34,47 +65,32 @@ var AuthService = function($rootScope, $http, $state, $location, $q, $state, htt } }, function(fault) { - // do nothing + messageModel.setMessages(fault.data.alerts, true); + locationUtils.navigateToPath('/'); } ); }; - this.tokenLogin =
function(token) { - var deferred = $q.defer(); - - userModel.resetUser(); - - $http.post(ENV.api['root'] + "user/login/token", { t: token }) - .then( - function() { - deferred.resolve(); - }, - function() { - deferred.reject(); - } - ); - - return deferred.promise; - }; - this.logout = function() { userModel.resetUser(); - httpService.post(ENV.api['root'] + 'user/logout'). - then( - function(result) { - $rootScope.$broadcast('trafficPortal::exit'); - if ($state.current.name == 'trafficPortal.public.login') { - messageModel.setMessages(result.alerts, false); - } else { - messageModel.setMessages(result.alerts, true); - $state.go('trafficPortal.public.login'); - } - return result; + return $http.post(ENV.api['root'] + 'user/logout').then( + function(result) { + $rootScope.$broadcast('trafficPortal::exit'); + if ($state.current.name == 'trafficPortal.public.login') { + messageModel.setMessages(result.alerts, false); + } else { + messageModel.setMessages(result.alerts, true); + $state.go('trafficPortal.public.login'); } + return result; + }, + function(err) { + throw err; + } ); }; }; -AuthService.$inject = ['$rootScope', '$http', '$state', '$location', '$q', '$state', 'httpService', 'userModel', 'messageModel', 'ENV']; +AuthService.$inject = ['$rootScope', '$http', '$state', '$location', 'userModel', 'messageModel', 'ENV', 'locationUtils']; module.exports = AuthService; diff --git a/traffic_portal/app/src/common/api/CapabilityService.js b/traffic_portal/app/src/common/api/CapabilityService.js index 9d0855c93f..73c873e29c 100644 --- a/traffic_portal/app/src/common/api/CapabilityService.js +++ b/traffic_portal/app/src/common/api/CapabilityService.js @@ -17,68 +17,67 @@ * under the License. 
*/ -var CapabilityService = function(Restangular, $q, $http, messageModel, ENV) { +var CapabilityService = function($http, messageModel, ENV) { this.getCapabilities = function(queryParams) { - return Restangular.all('capabilities').getList(queryParams); + return $http.get(ENV.api['root'] + 'capabilities', {params: queryParams}).then( + function(result) { + return result.data.response; + }, + function(err) { + throw err; + } + ); }; this.getCapability = function(name) { - return Restangular.one("capabilities", name).get(); + return $http.get(ENV.api['root'] + 'capabilities/' + name).then( + function(result) { + return result.data.response[0]; + }, + function(err) { + throw err; + } + ); }; this.createCapability = function(cap) { - var request = $q.defer(); - - $http.post(ENV.api['root'] + "capabilities", cap) - .then( - function(result) { - request.resolve(result.data); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - request.reject(fault); - } - ); - - return request.promise; + return $http.post(ENV.api['root'] + "capabilities", cap).then( + function(result) { + return result.data; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; this.updateCapability = function(cap) { - var request = $q.defer(); - - $http.put(ENV.api['root'] + "capabilities/" + cap.name, cap) - .then( - function(result) { - request.resolve(result.data); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - request.reject(); - } - ); - - return request.promise; + return $http.put(ENV.api['root'] + "capabilities/" + cap.name, cap).then( + function(result) { + return result.data; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; this.deleteCapability = function(cap) { - var request = $q.defer(); - - $http.delete(ENV.api['root'] + "capabilities/" + cap.name) - .then( - function(result) { - request.resolve(result.data); - }, - function(fault) { 
- messageModel.setMessages(fault.data.alerts, false); - request.reject(fault); - } - ); - - return request.promise; + return $http.delete(ENV.api['root'] + "capabilities/" + cap.name).then( + function(result) { + return result.data; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; }; -CapabilityService.$inject = ['Restangular', '$q', '$http', 'messageModel', 'ENV']; +CapabilityService.$inject = ['$http', 'messageModel', 'ENV']; module.exports = CapabilityService; diff --git a/traffic_portal/app/src/common/api/CoordinateService.js b/traffic_portal/app/src/common/api/CoordinateService.js index ed1fa72beb..fff167837f 100644 --- a/traffic_portal/app/src/common/api/CoordinateService.js +++ b/traffic_portal/app/src/common/api/CoordinateService.js @@ -17,66 +17,60 @@ * under the License. */ -var CoordinateService = function($http, $q, Restangular, locationUtils, messageModel, ENV) { +var CoordinateService = function($http, locationUtils, messageModel, ENV) { this.getCoordinates = function(queryParams) { - return Restangular.all('coordinates').getList(queryParams); + return $http.get(ENV.api['root'] + 'coordinates', {params: queryParams}).then( + function(result) { + return result.data.response; + }, + function (err) { + throw err; + } + ); }; this.createCoordinate = function(coordinate) { - var request = $q.defer(); - - $http.post(ENV.api['root'] + "coordinates", coordinate) - .then( - function(response) { - messageModel.setMessages(response.data.alerts, true); - locationUtils.navigateToPath('/coordinates'); - request.resolve(response); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false) - request.reject(fault); - } - ); - - return request.promise; + return $http.post(ENV.api['root'] + "coordinates", coordinate).then( + function(response) { + messageModel.setMessages(response.data.alerts, true); + locationUtils.navigateToPath('/coordinates'); + return response; + }, + function(err) { + 
messageModel.setMessages(err.data.alerts, false) + throw err; + } + ); }; this.updateCoordinate = function(id, coordinate) { - var request = $q.defer(); - - $http.put(ENV.api['root'] + "coordinates?id=" + id, coordinate) - .then( - function(response) { - messageModel.setMessages(response.data.alerts, false); - request.resolve(); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - request.reject(); - } - ); - return request.promise; + return $http.put(ENV.api['root'] + "coordinates", coordinate, {params: {id: id}}).then( + function(response) { + messageModel.setMessages(response.data.alerts, false); + return response; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; this.deleteCoordinate = function(id) { - var deferred = $q.defer(); - - $http.delete(ENV.api['root'] + "coordinates?id=" + id) - .then( - function(response) { - messageModel.setMessages(response.data.alerts, true); - deferred.resolve(response); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - deferred.reject(fault); - } - ); - return deferred.promise; + return $http.delete(ENV.api['root'] + "coordinates", {params: {id: id}}).then( + function(response) { + messageModel.setMessages(response.data.alerts, true); + return response; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; }; -CoordinateService.$inject = ['$http', '$q', 'Restangular', 'locationUtils', 'messageModel', 'ENV']; +CoordinateService.$inject = ['$http', 'locationUtils', 'messageModel', 'ENV']; module.exports = CoordinateService; diff --git a/traffic_portal/app/src/common/api/ParameterService.js b/traffic_portal/app/src/common/api/ParameterService.js index 691f663b23..bd72fe7dde 100644 --- a/traffic_portal/app/src/common/api/ParameterService.js +++ b/traffic_portal/app/src/common/api/ParameterService.js @@ -17,72 +17,104 @@ * under the License. 
*/ -var ParameterService = function(Restangular, $http, $q, locationUtils, messageModel, ENV) { +var ParameterService = function($http, locationUtils, messageModel, ENV) { this.getParameters = function(queryParams) { - return Restangular.all('parameters').getList(queryParams); + return $http.get(ENV.api['root'] + 'parameters', {params: queryParams}).then( + function (result) { + return result.data.response + }, + function (err) { + throw err; + } + ); }; this.getParameter = function(id) { - return Restangular.one("parameters", id).get(); + return $http.get(ENV.api['root'] + 'parameters', {params: {id: id}}).then( + function (result) { + return result.data.response[0]; + }, + function (err) { + throw err; + } + ); }; this.createParameter = function(parameter) { - return Restangular.service('parameters').post(parameter) - .then( - function() { + return $http.post(ENV.api['root'] + 'parameters', parameter).then( + function(result) { messageModel.setMessages([ { level: 'success', text: 'Parameter created' } ], true); - locationUtils.navigateToPath('/parameters'); + locationUtils.navigateToPath('/parameters/' + result.data.response.id + '/profiles'); + return result; }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; } ); }; this.updateParameter = function(parameter) { - return parameter.put() - .then( - function() { + return $http.put(ENV.api['root'] + 'parameters/' + parameter.id, parameter).then( + function(result) { messageModel.setMessages([ { level: 'success', text: 'Parameter updated' } ], false); + return result; }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; } ); }; this.deleteParameter = function(id) { - var request = $q.defer(); - - $http.delete(ENV.api['root'] + "parameters/" + id) - .then( - function(result) { - 
request.resolve(result.data); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - request.reject(fault); - } - ); - - return request.promise; + return $http.delete(ENV.api['root'] + "parameters/" + id).then( + function(result) { + return result.data; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; this.getProfileParameters = function(profileId) { - return Restangular.one('profiles', profileId).getList('parameters'); + return $http.get(ENV.api['root'] + 'profiles/' + profileId + '/parameters').then( + function (result) { + return result.data.response; + }, + function (err) { + throw err; + } + ); }; this.getProfileUnassignedParams = function(profileId) { - return Restangular.one('profiles', profileId).getList('unassigned_parameters'); + return $http.get(ENV.api['root'] + 'profiles/' + profileId + '/unassigned_parameters').then( + function (result) { + return result.data.response; + }, + function (err) { + throw err; + } + ); }; this.getCacheGroupUnassignedParams = function(cgId) { - return Restangular.one('cachegroups', cgId).getList('unassigned_parameters'); + return $http.get(ENV.api['root'] + 'cachegroups/' + cgId + '/unassigned_parameters').then( + function (result) { + return result.data.response; + }, + function (err) { + throw err; + } + ); }; }; -ParameterService.$inject = ['Restangular', '$http', '$q', 'locationUtils', 'messageModel', 'ENV']; +ParameterService.$inject = ['$http', 'locationUtils', 'messageModel', 'ENV']; module.exports = ParameterService; diff --git a/traffic_portal/app/src/common/api/ProfileService.js b/traffic_portal/app/src/common/api/ProfileService.js index 446ce1f6d9..d949fac6fe 100644 --- a/traffic_portal/app/src/common/api/ProfileService.js +++ b/traffic_portal/app/src/common/api/ProfileService.js @@ -17,109 +17,131 @@ * under the License.
*/ -var ProfileService = function(Restangular, $http, $q, locationUtils, messageModel, ENV) { +var ProfileService = function($http, locationUtils, messageModel, ENV) { this.getProfiles = function(queryParams) { - return Restangular.all('profiles').getList(queryParams); + return $http.get(ENV.api['root'] + 'profiles', {params: queryParams}).then( + function(result) { + return result.data.response; + }, + function(err) { + throw err; + } + ); }; this.getProfile = function(id, queryParams) { - return Restangular.one("profiles", id).get(queryParams); + return $http.get(ENV.api['root'] + 'profiles', {params: angular.extend({id: id}, queryParams)}).then( + function (result) { + return result.data.response[0]; + }, + function (err) { + throw err; + } + ); }; this.createProfile = function(profile) { - return Restangular.service('profiles').post(profile) - .then( - function() { + return $http.post(ENV.api['root'] + 'profiles', profile).then( + function(result) { messageModel.setMessages([ { level: 'success', text: 'Profile created' } ], true); - locationUtils.navigateToPath('/profiles'); + locationUtils.navigateToPath('/profiles/' + result.data.response.id + '/parameters'); + return result; }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; } ); }; this.updateProfile = function(profile) { - return profile.put() - .then( - function() { - messageModel.setMessages([ { level: 'success', text: 'Profile updated' } ], false); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - } + return $http.put(ENV.api['root'] + 'profiles/' + profile.id, profile).then( + function(result) { + messageModel.setMessages([ { level: 'success', text: 'Profile updated' } ], false); + return result; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } ); }; this.deleteProfile = function(id) { - var request = $q.defer(); - - $http.delete(ENV.api['root'] +
"profiles/" + id) - .then( - function(result) { - request.resolve(result.data); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - request.reject(fault); - } - ); - - return request.promise; + return $http.delete(ENV.api['root'] + "profiles/" + id).then( + function(result) { + return result.data; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; this.getParameterProfiles = function(paramId) { - return Restangular.one('parameters', paramId).getList('profiles'); + return $http.get(ENV.api['root'] + 'parameters/' + paramId + '/profiles').then( + function (result) { + return result.data.response; + }, + function (err) { + throw err; + } + ); }; this.getParamUnassignedProfiles = function(paramId) { - return Restangular.one('parameters', paramId).getList('unassigned_profiles'); + return $http.get(ENV.api['root'] + 'parameters/' + paramId + '/unassigned_profiles').then( + function (result) { + return result.data.response; + }, + function (err) { + throw err; + } + ); }; this.cloneProfile = function(sourceName, cloneName) { - return $http.post(ENV.api['root'] + "profiles/name/" + cloneName + "/copy/" + sourceName) - .then( - function(result) { - messageModel.setMessages(result.data.alerts, true); - locationUtils.navigateToPath('/profiles/' + result.data.response.id); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - } - ); + return $http.post(ENV.api['root'] + "profiles/name/" + cloneName + "/copy/" + sourceName).then( + function(result) { + messageModel.setMessages(result.data.alerts, true); + locationUtils.navigateToPath('/profiles/' + result.data.response.id); + return result; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; this.exportProfile = function(id) { - var deferred = $q.defer(); - - $http.get(ENV.api['root'] + "profiles/" + id + "/export") - .then( - function(result) { - deferred.resolve(result.data); 
- }, - function(fault) { - deferred.reject(fault); - } - ); - - return deferred.promise; + return $http.get(ENV.api['root'] + "profiles/" + id + "/export").then( + function(result) { + return result.data; + }, + function(err) { + throw err; + } + ); }; this.importProfile = function(importJSON) { - return $http.post(ENV.api['root'] + "profiles/import", importJSON) - .then( - function(result) { - messageModel.setMessages(result.data.alerts, true); - locationUtils.navigateToPath('/profiles/' + result.data.response.id); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - } - ); + return $http.post(ENV.api['root'] + "profiles/import", importJSON).then( + function(result) { + messageModel.setMessages(result.data.alerts, true); + locationUtils.navigateToPath('/profiles/' + result.data.response.id); + return result; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; }; -ProfileService.$inject = ['Restangular', '$http', '$q', 'locationUtils', 'messageModel', 'ENV']; +ProfileService.$inject = ['$http', 'locationUtils', 'messageModel', 'ENV']; module.exports = ProfileService; diff --git a/traffic_portal/app/src/common/api/TypeService.js b/traffic_portal/app/src/common/api/TypeService.js index aab60002b7..8076b1c391 100644 --- a/traffic_portal/app/src/common/api/TypeService.js +++ b/traffic_portal/app/src/common/api/TypeService.js @@ -17,54 +17,71 @@ * under the License. 
*/ -var TypeService = function(Restangular, locationUtils, messageModel) { +var TypeService = function($http, ENV, locationUtils, messageModel) { this.getTypes = function(queryParams) { - return Restangular.all('types').getList(queryParams); + return $http.get(ENV.api['root'] + 'types', {params: queryParams}).then( + function (result) { + return result.data.response; + }, + function (err) { + throw err; + } + ) }; this.getType = function(id) { - return Restangular.one("types", id).get(); + return $http.get(ENV.api['root'] + 'types', {params: {id: id}}).then( + function (result) { + return result.data.response[0]; + }, + function (err) { + throw err; + } + ) }; this.createType = function(type) { - return Restangular.service('types').post(type) - .then( - function() { - messageModel.setMessages([ { level: 'success', text: 'Type created' } ], true); - locationUtils.navigateToPath('/types'); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - } - ); + return $http.post(ENV.api['root'] + 'types', type).then( + function(result) { + messageModel.setMessages([ { level: 'success', text: 'Type created' } ], true); + locationUtils.navigateToPath('/types'); + return result; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } + ); }; this.updateType = function(type) { - return type.put() - .then( - function() { - messageModel.setMessages([ { level: 'success', text: 'Type updated' } ], false); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, false); - } + return $http.put(ENV.api['root'] + 'types/' + type.id, type).then( + function(result) { + messageModel.setMessages([ { level: 'success', text: 'Type updated' } ], false); + return result; + }, + function(err) { + messageModel.setMessages(err.data.alerts, false); + throw err; + } ); }; this.deleteType = function(id) { - return Restangular.one("types", id).remove() - .then( - function() { - messageModel.setMessages([ { level: 'success', 
text: 'Type deleted' } ], true); - }, - function(fault) { - messageModel.setMessages(fault.data.alerts, true); - } + return $http.delete(ENV.api['root'] + "types/" + id).then( + function(result) { + messageModel.setMessages([ { level: 'success', text: 'Type deleted' } ], true); + return result; + }, + function(err) { + messageModel.setMessages(err.data.alerts, true); + throw err; + } ); }; }; -TypeService.$inject = ['Restangular', 'locationUtils', 'messageModel']; +TypeService.$inject = ['$http', 'ENV', 'locationUtils', 'messageModel']; module.exports = TypeService; diff --git a/traffic_portal/app/src/common/modules/form/_form.scss b/traffic_portal/app/src/common/modules/form/_form.scss index ff9072ca68..10cbd423b0 100644 --- a/traffic_portal/app/src/common/modules/form/_form.scss +++ b/traffic_portal/app/src/common/modules/form/_form.scss @@ -196,3 +196,21 @@ aside.current-value { } } +#deliveryServiceForm, #deliveryServiceURLs { + fieldset { + margin: 0 0 20px 0; + legend { + font-size: 18px; + font-weight: bold; + .fa { + float: right; + margin-right: 10px; + } + } + legend.fieldset-error { + color: #a94442; + } + } +} + + diff --git a/traffic_portal/app/src/common/modules/form/deliveryService/FormDeliveryServiceController.js b/traffic_portal/app/src/common/modules/form/deliveryService/FormDeliveryServiceController.js index 0ada148a1d..7f3648ac23 100644 --- a/traffic_portal/app/src/common/modules/form/deliveryService/FormDeliveryServiceController.js +++ b/traffic_portal/app/src/common/modules/form/deliveryService/FormDeliveryServiceController.js @@ -48,6 +48,12 @@ var FormDeliveryServiceController = function(deliveryService, dsCurrent, origin, $scope.deliveryService = deliveryService; + $scope.showGeneralConfig = true; + + $scope.showCacheConfig = true; + + $scope.showRoutingConfig = true; + $scope.dsCurrent = dsCurrent; // this ds is used primarily for showing the diff between a ds request and the current DS $scope.origin = origin[0]; @@ -59,13 +65,7 @@ var 
FormDeliveryServiceController = function(deliveryService, dsCurrent, origin, $scope.dsRequestsEnabled = propertiesModel.properties.dsRequests.enabled; $scope.edgeFQDNs = function(ds) { - var urlString = ''; - if (_.isArray(ds.exampleURLs) && ds.exampleURLs.length > 0) { - for (var i = 0; i < ds.exampleURLs.length; i++) { - urlString += ds.exampleURLs[i] + '\n'; - } - } - return urlString; + return ds.exampleURLs.join('\n'); }; $scope.DRAFT = 0; @@ -204,10 +204,6 @@ var FormDeliveryServiceController = function(deliveryService, dsCurrent, origin, { value: 4, label: "4 - Latch on Failover" } ]; - $scope.label = function(field, attribute) { - return propertiesModel.properties.deliveryServices.defaults.descriptions[field][attribute]; - }; - $scope.tenantLabel = function(tenant) { return '-'.repeat(tenant.level) + ' ' + tenant.name; }; diff --git a/traffic_portal/app/src/common/modules/form/deliveryService/edit/FormEditDeliveryServiceController.js b/traffic_portal/app/src/common/modules/form/deliveryService/edit/FormEditDeliveryServiceController.js index 9f6f3407c4..e6d297ea38 100644 --- a/traffic_portal/app/src/common/modules/form/deliveryService/edit/FormEditDeliveryServiceController.js +++ b/traffic_portal/app/src/common/modules/form/deliveryService/edit/FormEditDeliveryServiceController.js @@ -59,46 +59,42 @@ var FormEditDeliveryServiceController = function(deliveryService, origin, type, deliveryService: deliveryService }; - // if the user chooses to complete/fulfill the delete request immediately, the ds will be deleted and behind the - // scenes a delivery service request will be created and marked as complete - if (options.status.id == $scope.COMPLETE) { - // first delete the ds - deliveryServiceService.deleteDeliveryService(deliveryService) - .then( - function() { - // then create the ds request - deliveryServiceRequestService.createDeliveryServiceRequest(dsRequest). then( - function(response) { - var comment = { - deliveryServiceRequestId: response.id, - value: options.comment - }; - // then create the ds request comment - deliveryServiceRequestService.createDeliveryServiceRequestComment(comment).
- then( - function() { - var promises = []; - // assign the ds request - promises.push(deliveryServiceRequestService.assignDeliveryServiceRequest(response.id, userModel.user.id)); - // set the status to 'complete' - promises.push(deliveryServiceRequestService.updateDeliveryServiceRequestStatus(response.id, 'complete')); - // and finally navigate to the /delivery-services page - messageModel.setMessages([ { level: 'success', text: 'Delivery service [ ' + deliveryService.xmlId + ' ] deleted' } ], true); - locationUtils.navigateToPath('/delivery-services'); - } - ); - } - ); - }, - function(fault) { - $anchorScroll(); // scrolls window to top - messageModel.setMessages(fault.data.alerts, false); - } - ); - - - + // if the user chooses to complete/fulfill the delete request immediately, a delivery service request will be made and marked as complete, + // then if that is successful, the DS will be deleted + if (options.status.id === $scope.COMPLETE) { + // first create the ds request + deliveryServiceRequestService.createDeliveryServiceRequest(dsRequest) + .then(function(response) { + var comment = { + deliveryServiceRequestId: response.id, + value: options.comment + }; + // then create the ds request comment (returned so the outer catch sees failures) + return deliveryServiceRequestService.createDeliveryServiceRequestComment(comment).
+ then( + function() { + var promises = []; + // assign the ds request + promises.push(deliveryServiceRequestService.assignDeliveryServiceRequest(response.id, userModel.user.id)); + // set the status to 'complete' + promises.push(deliveryServiceRequestService.updateDeliveryServiceRequestStatus(response.id, 'complete')); + // and finally navigate to the /delivery-services page + messageModel.setMessages([ { level: 'success', text: 'Delivery service [ ' + deliveryService.xmlId + ' ] deleted' } ], true); + locationUtils.navigateToPath('/delivery-services'); + } + // then, if all that works, delete the ds + ).then( + function() { + // return the promise so a delete failure reaches the catch below + return deliveryServiceService.deleteDeliveryService(deliveryService); + } + ); + } + // handle any failures just once + ).catch(function(fault) { + $anchorScroll(); // scrolls window to top + messageModel.setMessages(fault.data.alerts, false); + } + ); } else { deliveryServiceRequestService.createDeliveryServiceRequest(dsRequest). then( @@ -123,7 +119,7 @@ var FormEditDeliveryServiceController = function(deliveryService, origin, type, }; var createDeliveryServiceUpdateRequest = function(dsRequest, dsRequestComment, autoFulfilled) { - deliveryServiceRequestService.createDeliveryServiceRequest(dsRequest). + return deliveryServiceRequestService.createDeliveryServiceRequest(dsRequest).
then( function(response) { var comment = { @@ -198,21 +194,19 @@ var FormEditDeliveryServiceController = function(deliveryService, origin, type, status: status, deliveryService: deliveryService }; - // if the user chooses to complete/fulfill the update request immediately, the ds will be updated and behind the - // scenes a delivery service request will be created and marked as complete + // if the user chooses to complete/fulfill the update request immediately, a delivery service request will be made and marked as complete, + // then, if that is successful, the DS will be updated if (options.status.id == $scope.COMPLETE) { - deliveryServiceService.updateDeliveryService(deliveryService). - then( - function() { - $state.reload(); // reloads all the resolves for the view - messageModel.setMessages([ { level: 'success', text: 'Delivery Service [ ' + deliveryService.xmlId + ' ] updated' } ], false); - createDeliveryServiceUpdateRequest(dsRequest, options.comment, true); - }, - function(fault) { - $anchorScroll(); // scrolls window to top - messageModel.setMessages(fault.data.alerts, false); - } - ); + createDeliveryServiceUpdateRequest(dsRequest, options.comment, true).then( + function() { + // return the promise so the next then waits for the update and the catch sees a failure + return deliveryServiceService.updateDeliveryService(deliveryService); + }).then(function() { + $state.reload(); // reloads all the resolves for the view + messageModel.setMessages([ { level: 'success', text: 'Delivery Service [ ' + deliveryService.xmlId + ' ] updated' } ], false); + }).catch(function(fault) { + $anchorScroll(); // scrolls window to top + messageModel.setMessages(fault.data.alerts, false); + }); } else { createDeliveryServiceUpdateRequest(dsRequest, options.comment, false); } diff --git a/traffic_portal/app/src/common/modules/form/deliveryService/form.deliveryService.DNS.tpl.html b/traffic_portal/app/src/common/modules/form/deliveryService/form.deliveryService.DNS.tpl.html index 9643455a63..5f688e7b40 100644 ---
a/traffic_portal/app/src/common/modules/form/deliveryService/form.deliveryService.DNS.tpl.html +++ b/traffic_portal/app/src/common/modules/form/deliveryService/form.deliveryService.DNS.tpl.html @@ -37,9 +37,11 @@
[Template diff garbled in extraction: the HTML markup of form.deliveryService.DNS.tpl.html (and possibly further .tpl.html files whose diff headers were lost) was stripped, leaving only the +/- markers and stray text fragments. The recoverable content shows a "Delivery Service URL(s)" display block being added, and a long run of form fields being removed together with their inline validation messages, including "Required", "Too Long", "View Details", "Must be a valid DNS label (no special characters, capital letters, periods, underscores, or spaces and cannot begin or end with a hyphen)", "Must start with http:// or https:// and be followed by a valid hostname with an optional port (no trailing slash)", and the warning "Changing the routing name may require SSL certificates to be updated".]
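Reviewer's note: the controller changes above hinge on a standard promise-chaining pattern — each step returns the next promise so later steps run in order and a single trailing `.catch` observes a failure from any step. A minimal sketch in plain JavaScript (the helper functions are hypothetical stand-ins, not the actual Traffic Portal services):

```javascript
// Stand-ins for the create-request / create-comment / delete-service calls.
function createRequest()   { return Promise.resolve({ id: 42 }); }
function createComment(id) { return Promise.resolve('comment for ' + id); }
function deleteService()   { return Promise.resolve('deleted'); }

// Chains the three steps; records progress in `log` for illustration.
function fulfillImmediately(log) {
	return createRequest()
		.then(function (response) {
			log.push('request ' + response.id);
			// returning the next promise keeps the chain ordered and lets
			// the final .catch handle a failure from this step too
			return createComment(response.id);
		})
		.then(function (comment) {
			log.push(comment);
			return deleteService();
		})
		.then(function (result) {
			log.push(result);
			return log;
		})
		.catch(function (fault) {
			// single failure handler for every step in the chain
			log.push('failed: ' + fault.message);
			return log;
		});
}
```

If a callback omits the `return`, the following `.then` fires immediately with `undefined` and the trailing `.catch` never sees that step's rejection.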