88 changes: 76 additions & 12 deletions .github/actions/setup-elasticsearch/action.yml
@@ -1,3 +1,4 @@
# For the sake of saving time, only run this step if the test-group is one that will run tests against an Elasticsearch on localhost.
name: Set up local Elasticsearch

description: Install a local Elasticsearch with version that matches prod
@@ -6,20 +7,83 @@ inputs:
token:
description: PAT
required: true
elasticsearch_version:
description: Version of Elasticsearch to install
required: true
# Make sure the version matches production and is available on Docker Hub
default: '8.12.0'

runs:
using: 'composite'
steps:
- name: Install a local Elasticsearch for testing
# For the sake of saving time, only run this step if the test-group
# is one that will run tests against an Elasticsearch on localhost.
uses: getong/elasticsearch-action@95b501ab0c83dee0aac7c39b7cea3723bef14954
with:
# Make sure this matches production
# It might also need to match what's available on Docker hub
elasticsearch version: '8.12.0'
host port: 9200
container port: 9200
host node port: 9300
node port: 9300
discovery type: 'single-node'
# Cache the elasticsearch image to prevent Docker Hub rate limiting
- name: Cache Docker layers
id: cache-docker-layers
uses: actions/cache@v2
with:
path: /tmp/docker-cache
key: ${{ runner.os }}-elasticsearch-${{ inputs.elasticsearch_version }}
restore-keys: |
${{ runner.os }}-elasticsearch-
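The cache key above combines the runner OS with the pinned Elasticsearch version, so bumping the version input automatically invalidates the cache. A minimal sketch of how the key resolves on a Linux runner (variable names are illustrative, not part of the workflow):

```shell
# Illustrative only: mirrors the `key:` expression above for a Linux runner.
ES_VERSION="${INPUT_ELASTICSEARCH_VERSION:-8.12.0}"  # falls back to the action's default
CACHE_KEY="Linux-elasticsearch-${ES_VERSION}"
echo "$CACHE_KEY"
```

On a cache miss, the `restore-keys` prefix `Linux-elasticsearch-` still restores the most recently saved image for any version, which the later steps then replace with a fresh pull.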

- name: Load cached Docker image
shell: bash
if: steps.cache-docker-layers.outputs.cache-hit == 'true'
run: docker load -i /tmp/docker-cache/elasticsearch.tar || echo "Failed to load cached elasticsearch image; it will be pulled instead"

- name: Pull Docker image
shell: bash
if: steps.cache-docker-layers.outputs.cache-hit != 'true'
run: docker pull elasticsearch:${{ inputs.elasticsearch_version }}

- name: Save Docker image to cache
shell: bash
if: steps.cache-docker-layers.outputs.cache-hit != 'true'
run: |
mkdir -p /tmp/docker-cache
docker save -o /tmp/docker-cache/elasticsearch.tar elasticsearch:${{ inputs.elasticsearch_version }}

# Sets up the Elasticsearch container
# Derived from https://github.com/getong/elasticsearch-action
- name: Run Docker container
shell: bash
env:
INPUT_ELASTICSEARCH_VERSION: ${{ inputs.elasticsearch_version }}
INPUT_HOST_PORT: 9200
INPUT_CONTAINER_PORT: 9200
INPUT_HOST_NODE_PORT: 9300
INPUT_NODE_PORT: 9300
INPUT_DISCOVERY_TYPE: 'single-node'
run: |
docker network create elastic

docker run --network elastic \
-e 'node.name=es1' \
-e 'cluster.name=docker-elasticsearch' \
-e 'cluster.initial_master_nodes=es1' \
-e 'discovery.seed_hosts=es1' \
-e 'cluster.routing.allocation.disk.threshold_enabled=false' \
-e 'bootstrap.memory_lock=true' \
-e 'ES_JAVA_OPTS=-Xms1g -Xmx1g' \
-e 'xpack.security.enabled=false' \
-e 'xpack.license.self_generated.type=basic' \
--ulimit nofile=65536:65536 \
--ulimit memlock=-1:-1 \
--name='es1' \
-d \
-p $INPUT_HOST_PORT:$INPUT_CONTAINER_PORT \
-p $INPUT_HOST_NODE_PORT:$INPUT_NODE_PORT \
-e discovery.type=$INPUT_DISCOVERY_TYPE \
elasticsearch:$INPUT_ELASTICSEARCH_VERSION

# Check if Elasticsearch is up and running
for i in {1..120}; do
if curl --silent --fail http://localhost:9200; then
echo "Elasticsearch is up and running"
exit 0
fi
echo "Waiting for Elasticsearch to be ready..."
sleep 1
done
echo "Elasticsearch did not become ready in time"
exit 1
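The poll loop above is a general readiness pattern: retry a probe until it succeeds or a deadline passes. Factored into a reusable helper it might look like this (a sketch, not part of the workflow; `wait_for` is a hypothetical name):

```shell
# Retry a command up to N times, one second apart.
# Returns 0 as soon as the command succeeds, 1 if it never does.
wait_for() {
  local attempts="$1"
  shift
  for i in $(seq 1 "$attempts"); do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example probe, analogous to the curl check above:
wait_for 3 true && echo "ready"
```

Keeping the probe (`curl --silent --fail`) separate from the retry policy makes the timeout easy to tune without touching the health check itself.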
@@ -85,15 +85,15 @@ For related information, see "Voice and tone" in [AUTOTITLE](/contributing/style

Most readers don't consume articles in their entirety. Instead they either _scan_ the page to locate specific information, or _skim_ the page to get a general idea of the concepts.

When scanning or skimming content, readers skip over large chunks of text. They look for elements that are related to their task or that stand out on the page, such as headings, callouts, lists, tables, code blocks, visuals, and the first few words in each section.
When scanning or skimming content, readers skip over large chunks of text. They look for elements that are related to their task or that stand out on the page, such as headings, alerts, lists, tables, code blocks, visuals, and the first few words in each section.

Once the article has a clearly defined purpose and structure, you can apply the following formatting techniques to optimize the content for scanning and skimming. These techniques can also help to make content more understandable for all readers.

* **Use text highlighting** such as boldface and hyperlinks to call attention to the most important points. Use text highlighting sparingly. Do not highlight more than 10% of the total text in an article.
* **Use formatting elements** to separate the content and create space on the page. For example:
* Bulleted lists (with optional run-in subheads)
* Numbered lists
* [Callouts](/contributing/style-guide-and-content-model/style-guide#callouts)
* [Alerts](/contributing/style-guide-and-content-model/style-guide#alerts)
* Tables
* Visuals
* Code blocks and code annotations
1 change: 0 additions & 1 deletion data/reusables/gpg/copy-gpg-key-id.md
@@ -1,7 +1,6 @@
1. From the list of GPG keys, copy the long form of the GPG key ID you'd like to use. In this example, the GPG key ID is `3AA5C34371567BD2`:

```shell copy

$ gpg --list-secret-keys --keyid-format=long
/Users/hubot/.gnupg/secring.gpg
------------------------------------