diff --git a/README.md b/README.md
index 39762a67..a0e3c2b3 100644
--- a/README.md
+++ b/README.md
@@ -8,30 +8,30 @@ SKALE Node CLI, part of the SKALE suite of validator tools, is the command line
## Table of Contents
-1. [Installation](#installation)
-2. [CLI usage](#cli-usage)
- 2.1 [Top level commands](#top-level-commands)
- 2.2 [Node](#node-commands)
- 2.3 [Wallet](#wallet-commands)
- 2.4 [sChains](#schain-commands)
- 2.5 [Health](#health-commands)
- 2.6 [SSL](#ssl-commands)
- 2.7 [Logs](#logs-commands)
- 2.8 [Resources allocation](#resources-allocation-commands)
- 2.9 [Validate](#validate-commands)
-3. [Sync CLI usage](#sync-cli-usage)
- 3.1 [Top level commands](#top-level-commands-sync)
- 3.2 [Sync node commands](#sync-node-commands)
-4. [Exit codes](#exit-codes)
-5. [Development](#development)
+1. [Installation](#installation)
+2. [CLI usage](#cli-usage)\
+ 2.1 [Top level commands](#top-level-commands)\
+ 2.2 [Node](#node-commands)\
+ 2.3 [Wallet](#wallet-commands)\
+ 2.4 [sChains](#schain-commands)\
+ 2.5 [Health](#health-commands)\
+ 2.6 [SSL](#ssl-commands)\
+ 2.7 [Logs](#logs-commands)\
+ 2.8 [Resources allocation](#resources-allocation-commands)\
+ 2.9 [Validate](#validate-commands)
+3. [Sync CLI usage](#sync-cli-usage)\
+ 3.1 [Top level commands](#top-level-commands-sync)\
+ 3.2 [Sync node commands](#sync-node-commands)
+4. [Exit codes](#exit-codes)
+5. [Development](#development)
## Installation
-- Prerequisites
+* Prerequisites
Ensure that the following package is installed: **docker**, **docker-compose** (1.27.4+)
-- Download the executable
+* Download the executable
```shell
VERSION_NUM={put the version number here} && sudo -E bash -c "curl -L https://github.com/skalenetwork/node-cli/releases/download/$VERSION_NUM/skale-$VERSION_NUM-`uname -s`-`uname -m` > /usr/local/bin/skale"
@@ -43,13 +43,13 @@ For Sync node version:
VERSION_NUM={put the version number here} && sudo -E bash -c "curl -L https://github.com/skalenetwork/node-cli/releases/download/$VERSION_NUM/skale-$VERSION_NUM-`uname -s`-`uname -m`-sync > /usr/local/bin/skale"
```
-- Apply executable permissions to the downloaded binary:
+* Apply executable permissions to the downloaded binary:
```shell
chmod +x /usr/local/bin/skale
```
-- Test the installation
+* Test the installation
```shell
skale --help
@@ -77,7 +77,7 @@ skale version
Options:
-- `--short` - prints version only, without additional text.
+* `--short` - prints version only, without additional text.
### Node commands
@@ -107,26 +107,25 @@ skale node init [ENV_FILE]
Arguments:
-- `ENV_FILE` - path to .env file (required parameters are listed in the `skale node init` command)
+* `ENV_FILE` - path to .env file (required parameters are listed in the `skale node init` command)
You should specify the following environment variables:
-- `SGX_SERVER_URL` - SGX server URL
-- `DISK_MOUNTPOINT` - disk mount point for storing sChains data
-- `DOCKER_LVMPY_STREAM` - stream of `docker-lvmpy` to use
-- `CONTAINER_CONFIGS_STREAM` - stream of `skale-node` to use
-- `ENDPOINT` - RPC endpoint of the node in the network where SKALE Manager is deployed
-- `MANAGER_CONTRACTS_ABI_URL` - URL to SKALE Manager contracts ABI and addresses
-- `IMA_CONTRACTS_ABI_URL` - URL to IMA contracts ABI and addresses
-- `FILEBEAT_URL` - URL to the Filebeat log server
-- `ENV_TYPE` - environement type (mainnet, testnet, etc)
-
+* `SGX_SERVER_URL` - SGX server URL
+* `DISK_MOUNTPOINT` - disk mount point for storing sChains data
+* `DOCKER_LVMPY_STREAM` - stream of `docker-lvmpy` to use
+* `CONTAINER_CONFIGS_STREAM` - stream of `skale-node` to use
+* `ENDPOINT` - RPC endpoint of the node in the network where SKALE Manager is deployed
+* `MANAGER_CONTRACTS_ABI_URL` - URL to SKALE Manager contracts ABI and addresses
+* `IMA_CONTRACTS_ABI_URL` - URL to IMA contracts ABI and addresses
+* `FILEBEAT_URL` - URL to the Filebeat log server
+* `ENV_TYPE` - environment type (mainnet, testnet, etc.)
Optional variables:
-- `TG_API_KEY` - Telegram API key
-- `TG_CHAT_ID` - Telegram chat ID
-- `MONITORING_CONTAINERS` - will enable monitoring containers (`filebeat`, `cadvisor`, `prometheus`)
+* `TG_API_KEY` - Telegram API key
+* `TG_CHAT_ID` - Telegram chat ID
+* `MONITORING_CONTAINERS` - will enable monitoring containers (`filebeat`, `cadvisor`, `prometheus`)
#### Node initialization from backup
@@ -138,8 +137,8 @@ skale node restore [BACKUP_PATH] [ENV_FILE]
Arguments:
-- `BACKUP_PATH` - path to the archive with backup data generated by `skale node backup` command
-- `ENV_FILE` - path to .env file (required parameters are listed in the `skale node init` command)
+* `BACKUP_PATH` - path to the archive with backup data generated by `skale node backup` command
+* `ENV_FILE` - path to .env file (required parameters are listed in the `skale node init` command)
#### Node backup
@@ -151,8 +150,7 @@ skale node backup [BACKUP_FOLDER_PATH] [ENV_FILE]
Arguments:
-- `BACKUP_FOLDER_PATH` - path to the folder where the backup file will be saved
-
+* `BACKUP_FOLDER_PATH` - path to the folder where the backup file will be saved
#### Node Registration
@@ -162,13 +160,13 @@ skale node register
Required arguments:
-- `--ip` - public IP for RPC connections and consensus
-- `--domain`/`-d` - SKALE node domain name
-- `--name` - SKALE node name
+* `--ip` - public IP for RPC connections and consensus
+* `--domain`/`-d` - SKALE node domain name
+* `--name` - SKALE node name
Optional arguments:
-- `--port` - public port - beginning of the port range for node SKALE Chains (default: `10000`)
+* `--port` - public port - beginning of the port range for node SKALE Chains (default: `10000`)
#### Node update
@@ -180,11 +178,11 @@ skale node update [ENV_FILEPATH]
Options:
-- `--yes` - update without additional confirmation
+* `--yes` - update without additional confirmation
Arguments:
-- `ENV_FILEPATH` - path to env file where parameters are defined
+* `ENV_FILEPATH` - path to env file where parameters are defined
You can also specify a file with environment variables
which will update parameters in env file used during skale node init.
@@ -199,8 +197,8 @@ skale node turn-off
Options:
-- `--maintenance-on` - set SKALE node into maintenance mode before turning off
-- `--yes` - turn off without additional confirmation
+* `--maintenance-on` - set SKALE node into maintenance mode before turning off
+* `--yes` - turn off without additional confirmation
#### Node turn-on
@@ -212,12 +210,12 @@ skale node turn-on [ENV_FILEPATH]
Options:
-- `--maintenance-off` - turn off maintenance mode after turning on the node
-- `--yes` - turn on without additional confirmation
+* `--maintenance-off` - turn off maintenance mode after turning on the node
+* `--yes` - turn on without additional confirmation
Arguments:
-- `ENV_FILEPATH` - path to env file where parameters are defined
+* `ENV_FILEPATH` - path to env file where parameters are defined
You can also specify a file with environment variables
which will update parameters in env file used during skale node init.
@@ -232,7 +230,7 @@ skale node maintenance-on
Options:
-- `--yes` - set without additional confirmation
+* `--yes` - set without additional confirmation
Switch off maintenance mode
@@ -250,8 +248,8 @@ skale node set-domain
Options:
-- `--domain`/`-d` - SKALE node domain name
-- `--yes` - set without additional confirmation
+* `--domain`/`-d` - SKALE node domain name
+* `--yes` - set without additional confirmation
### Wallet commands
@@ -287,8 +285,8 @@ skale wallet send [ADDRESS] [AMOUNT]
Arguments:
-- `ADDRESS` - Ethereum receiver address
-- `AMOUNT` - Amount of ETH tokens to send
+* `ADDRESS` - Ethereum receiver address
+* `AMOUNT` - Amount of ETH tokens to send
Optional arguments:
@@ -330,7 +328,7 @@ skale schains info SCHAIN_NAME
Options:
-- `--json` - Show info in JSON format
+* `--json` - Show info in JSON format
#### SKALE Chain repair
@@ -354,7 +352,7 @@ skale health containers
Options:
-- `-a/--all` - list all containers (by default - only running)
+* `-a/--all` - list all containers (by default - only running)
#### sChains healthchecks
@@ -366,7 +364,7 @@ skale health schains
Options:
-- `--json` - Show data in JSON format
+* `--json` - Show data in JSON format
#### SGX
@@ -407,13 +405,12 @@ skale ssl upload
##### Options
-- `-c/--cert-path` - Path to the certificate file
-- `-k/--key-path` - Path to the key file
-- `-f/--force` - Overwrite existing certificates
+* `-c/--cert-path` - Path to the certificate file
+* `-k/--key-path` - Path to the key file
+* `-f/--force` - Overwrite existing certificates
Admin API URL: \[GET] `/api/ssl/upload`
-
#### Check ssl certificate
Check ssl certificate be connecting to healthcheck ssl server
@@ -424,11 +421,11 @@ skale ssl check
##### Options
-- `-c/--cert-path` - Path to the certificate file (default: uploaded using `skale ssl upload` certificate)
-- `-k/--key-path` - Path to the key file (default: uploaded using `skale ssl upload` key)
-- `--type/-t` - Check type (`openssl` - openssl cli check, `skaled` - skaled-based check, `all` - both)
-- `--port/-p` - Port to start healthcheck server (defualt: `4536`)
-- `--no-client` - Skip client connection (only make sure server started without errors)
+* `-c/--cert-path` - Path to the certificate file (default: uploaded using `skale ssl upload` certificate)
+* `-k/--key-path` - Path to the key file (default: uploaded using `skale ssl upload` key)
+* `--type/-t` - Check type (`openssl` - openssl cli check, `skaled` - skaled-based check, `all` - both)
+* `--port/-p` - Port to start healthcheck server (default: `4536`)
+* `--no-client` - Skip client connection (only make sure server started without errors)
### Logs commands
@@ -444,7 +441,7 @@ skale logs cli
Options:
-- `--debug` - show debug logs; more detailed output
+* `--debug` - show debug logs; more detailed output
#### Dump Logs
@@ -456,8 +453,7 @@ skale logs dump [PATH]
Optional arguments:
-- `--container`, `-c` - Dump logs only from specified container
-
+* `--container`, `-c` - Dump logs only from specified container
### Resources allocation commands
@@ -470,6 +466,7 @@ Show resources allocation file:
```shell
skale resources-allocation show
```
+
#### Generate/update
Generate/update allocation file:
@@ -480,12 +477,12 @@ skale resources-allocation generate [ENV_FILE]
Arguments:
-- `ENV_FILE` - path to .env file (required parameters are listed in the `skale node init` command)
+* `ENV_FILE` - path to .env file (required parameters are listed in the `skale node init` command)
Options:
-- `--yes` - generate without additional confirmation
-- `-f/--force` - rewrite allocation file if it exists
+* `--yes` - generate without additional confirmation
+* `-f/--force` - rewrite allocation file if it exists
### Validate commands
@@ -501,8 +498,7 @@ skale validate abi
Options:
-- `--json` - show validation result in json format
-
+* `--json` - show validation result in json format
## Sync CLI usage
@@ -526,7 +522,7 @@ skale version
Options:
-- `--short` - prints version only, without additional text.
+* `--short` - prints version only, without additional text.
### Sync node commands
@@ -542,24 +538,26 @@ skale sync-node init [ENV_FILE]
Arguments:
-- `ENV_FILE` - path to .env file (required parameters are listed in the `skale sync-node init` command)
+* `ENV_FILE` - path to .env file (required parameters are listed in the `skale sync-node init` command)
You should specify the following environment variables:
-- `DISK_MOUNTPOINT` - disk mount point for storing sChains data
-- `DOCKER_LVMPY_STREAM` - stream of `docker-lvmpy` to use
-- `CONTAINER_CONFIGS_STREAM` - stream of `skale-node` to use
-- `ENDPOINT` - RPC endpoint of the node in the network where SKALE Manager is deployed
-- `MANAGER_CONTRACTS_ABI_URL` - URL to SKALE Manager contracts ABI and addresses
-- `IMA_CONTRACTS_ABI_URL` - URL to IMA contracts ABI and addresses
-- `SCHAIN_NAME` - name of the SKALE chain to sync
-- `ENV_TYPE` - environement type (mainnet, testnet, etc)
-
+* `DISK_MOUNTPOINT` - disk mount point for storing sChains data
+* `DOCKER_LVMPY_STREAM` - stream of `docker-lvmpy` to use
+* `CONTAINER_CONFIGS_STREAM` - stream of `skale-node` to use
+* `ENDPOINT` - RPC endpoint of the node in the network where SKALE Manager is deployed
+* `MANAGER_CONTRACTS_ABI_URL` - URL to SKALE Manager contracts ABI and addresses
+* `IMA_CONTRACTS_ABI_URL` - URL to IMA contracts ABI and addresses
+* `SCHAIN_NAME` - name of the SKALE chain to sync
+* `ENV_TYPE` - environment type (mainnet, testnet, etc.)
Options:
-- `--archive` - Run sync node in an archive node (disable block rotation)
-- `--historic-state` - Enable historic state (works only in pair with --archive flag)
+* `--indexer` - run sync node in indexer mode (disable block rotation)
+* `--archive` - enable historic state and disable block rotation (can't be used with `--indexer`)
+* `--snapshot` - start sync node from snapshot
+* `--snapshot-from` - specify the IP of the node to take the snapshot from
+* `--yes` - initialize without additional confirmation
#### Sync node update
@@ -571,28 +569,42 @@ skale sync-node update [ENV_FILEPATH]
Options:
-- `--yes` - update without additional confirmation
+* `--yes` - update without additional confirmation
Arguments:
-- `ENV_FILEPATH` - path to env file where parameters are defined
+* `ENV_FILEPATH` - path to env file where parameters are defined
> NOTE: You can just update a file with environment variables used during `skale sync-node init`.
+#### Sync node cleanup
+
+Clean up the full sync SKALE node on the current machine
+
+```shell
+skale sync-node cleanup
+```
+
+Options:
+
+* `--yes` - cleanup without additional confirmation
+
+> WARNING: This command will remove all data from the node.
+
## Exit codes
Exit codes conventions for SKALE CLI tools
-- `0` - Everything is OK
-- `1` - General error exit code
-- `3` - Bad API response**
-- `4` - Script execution error**
-- `5` - Transaction error*
-- `6` - Revert error*
-- `7` - Bad user error**
-- `8` - Node state error**
+* `0` - Everything is OK
+* `1` - General error exit code
+* `3` - Bad API response\*\*
+* `4` - Script execution error\*\*
+* `5` - Transaction error\*
+* `6` - Revert error\*
+* `7` - Bad user error\*\*
+* `8` - Node state error\*\*
-`*` - `validator-cli` only
+`*` - `validator-cli` only\
`**` - `node-cli` only
## Development
@@ -626,10 +638,10 @@ ENV=dev python main.py YOUR_COMMAND
Required environment variables:
-- `ACCESS_KEY_ID` - DO Spaces/AWS S3 API Key ID
-- `SECRET_ACCESS_KEY` - DO Spaces/AWS S3 Secret access key
-- `GITHUB_EMAIL` - Email of GitHub user
-- `GITHUB_OAUTH_TOKEN` - GitHub auth token
+* `ACCESS_KEY_ID` - DO Spaces/AWS S3 API Key ID
+* `SECRET_ACCESS_KEY` - DO Spaces/AWS S3 Secret access key
+* `GITHUB_EMAIL` - Email of GitHub user
+* `GITHUB_OAUTH_TOKEN` - GitHub auth token
## Contributing
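The exit-code table above can be sketched as a small `IntEnum`. This is an illustrative stand-in only: the real enum lives in `node_cli.utils.exit_codes`, and apart from `FAILURE` and `OPERATION_EXECUTION_ERROR` (which appear in this diff) the member names here are assumptions.

```python
from enum import IntEnum

# Illustrative sketch of the SKALE CLI exit-code convention; member
# names other than FAILURE are hypothetical, values come from the table.
class CLIExitCodes(IntEnum):
    OK = 0                      # everything is OK
    FAILURE = 1                 # general error exit code
    BAD_API_RESPONSE = 3        # node-cli only
    SCRIPT_EXECUTION_ERROR = 4  # node-cli only
    TRANSACTION_ERROR = 5       # validator-cli only
    REVERT_ERROR = 6            # validator-cli only
    BAD_USER_ERROR = 7          # node-cli only
    NODE_STATE_ERROR = 8        # node-cli only

# code 2 is deliberately absent: click reserves it for usage errors
print(int(CLIExitCodes.FAILURE))
```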
diff --git a/node_cli/cli/__init__.py b/node_cli/cli/__init__.py
index f3c0c926..e1e6add7 100644
--- a/node_cli/cli/__init__.py
+++ b/node_cli/cli/__init__.py
@@ -1,4 +1,4 @@
-__version__ = '2.6.1'
+__version__ = '2.6.2'
-if __name__ == "__main__":
+if __name__ == '__main__':
print(__version__)
diff --git a/node_cli/cli/sync_node.py b/node_cli/cli/sync_node.py
index 81c5618e..9dc4333a 100644
--- a/node_cli/cli/sync_node.py
+++ b/node_cli/cli/sync_node.py
@@ -21,13 +21,13 @@
import click
-from node_cli.core.node import init_sync, update_sync, repair_sync
+from node_cli.core.node import init_sync, update_sync, cleanup_sync
from node_cli.utils.helper import (
abort_if_false,
error_exit,
safe_load_texts,
streamed_cmd,
- URL_TYPE
+ URL_TYPE,
)
from node_cli.utils.exit_codes import CLIExitCodes
@@ -41,86 +41,54 @@ def sync_node_cli():
pass
-@sync_node_cli.group(help="SKALE sync node commands")
+@sync_node_cli.group(help='SKALE sync node commands')
def sync_node():
pass
@sync_node.command('init', help=TEXTS['init']['help'])
@click.argument('env_file')
+@click.option('--indexer', help=TEXTS['init']['indexer'], is_flag=True)
+@click.option('--archive', help=TEXTS['init']['archive'], is_flag=True)
+@click.option('--snapshot', help=TEXTS['init']['snapshot'], is_flag=True)
@click.option(
- '--archive',
- help=TEXTS['init']['archive'],
- is_flag=True
-)
-@click.option(
- '--historic-state',
- help=TEXTS['init']['historic_state'],
- is_flag=True
-)
-@click.option(
- '--snapshot-from',
- type=URL_TYPE,
- default=None,
- hidden=True,
- help='Ip of the node from to download snapshot from'
+ '--snapshot-from', type=URL_TYPE, default=None, hidden=True, help=TEXTS['init']['snapshot_from']
)
@streamed_cmd
-def _init_sync(env_file, archive, historic_state, snapshot_from: Optional[str]):
- if historic_state and not archive:
+def _init_sync(
+ env_file, indexer: bool, archive: bool, snapshot: bool, snapshot_from: Optional[str]
+) -> None:
+ if indexer and archive:
error_exit(
- '--historic-state can be used only is combination with --archive',
- exit_code=CLIExitCodes.FAILURE
+ 'Cannot use both --indexer and --archive options',
+ exit_code=CLIExitCodes.FAILURE,
)
- init_sync(env_file, archive, historic_state, snapshot_from)
+ init_sync(env_file, indexer, archive, snapshot, snapshot_from)
@sync_node.command('update', help='Update sync node from .env file')
-@click.option('--yes', is_flag=True, callback=abort_if_false,
- expose_value=False,
- prompt='Are you sure you want to update SKALE node software?')
@click.option(
- '--unsafe',
- 'unsafe_ok',
- help='Allow unsafe update',
- hidden=True,
- is_flag=True
+ '--yes',
+ is_flag=True,
+ callback=abort_if_false,
+ expose_value=False,
+ prompt='Are you sure you want to update SKALE node software?',
)
+@click.option('--unsafe', 'unsafe_ok', help='Allow unsafe update', hidden=True, is_flag=True)
@click.argument('env_file')
@streamed_cmd
def _update_sync(env_file, unsafe_ok):
update_sync(env_file)
-@sync_node.command('repair', help='Start sync node from empty database')
-@click.option('--yes', is_flag=True, callback=abort_if_false,
- expose_value=False,
- prompt='Are you sure you want to start sync node from empty database?')
+@sync_node.command('cleanup', help='Remove sync node data and containers')
@click.option(
- '--archive',
- help=TEXTS['init']['archive'],
- is_flag=True
-)
-@click.option(
- '--historic-state',
- help=TEXTS['init']['historic_state'],
- is_flag=True
-)
-@click.option(
- '--snapshot-from',
- type=URL_TYPE,
- default=None,
- hidden=True,
- help='Ip of the node from to download snapshot from'
+ '--yes',
+ is_flag=True,
+ callback=abort_if_false,
+ expose_value=False,
+ prompt='Are you sure you want to remove all node containers and data?',
)
@streamed_cmd
-def _repair_sync(
- archive: str,
- historic_state: str,
- snapshot_from: Optional[str] = None
-) -> None:
- repair_sync(
- archive=archive,
- historic_state=historic_state,
- snapshot_from=snapshot_from
- )
+def _cleanup_sync() -> None:
+ cleanup_sync()
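The `--indexer`/`--archive` conflict check added to `_init_sync` can be sketched as a standalone click command. This is a simplified stand-in, not the real command: `init_cmd` and its body are illustrative, and `click.ClickException` stands in for `error_exit(..., exit_code=CLIExitCodes.FAILURE)` since both produce a non-zero exit.

```python
import click

# Minimal sketch of the mutually-exclusive flag check used by
# `skale sync-node init`; names and body are illustrative stand-ins.
@click.command()
@click.option('--indexer', is_flag=True)
@click.option('--archive', is_flag=True)
def init_cmd(indexer: bool, archive: bool) -> None:
    if indexer and archive:
        # mirrors error_exit('Cannot use both --indexer and --archive options', ...)
        raise click.ClickException('Cannot use both --indexer and --archive options')
    click.echo(f'indexer={indexer} archive={archive}')
```

Invoked with both flags, the command exits non-zero with the error message; with at most one flag it proceeds normally.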
diff --git a/node_cli/core/node.py b/node_cli/core/node.py
index 0d5eeb72..5e1d44ec 100644
--- a/node_cli/core/node.py
+++ b/node_cli/core/node.py
@@ -56,8 +56,8 @@
turn_on_op,
restore_op,
init_sync_op,
- repair_sync_op,
update_sync_op,
+ cleanup_sync_op,
)
from node_cli.utils.print_formatters import (
print_failed_requirements_checks,
@@ -176,11 +176,14 @@ def restore(backup_path, env_filepath, no_snapshot=False, config_only=False):
print('Node is restored from backup')
-def init_sync(env_filepath: str, archive: bool, historic_state: bool, snapshot_from: str) -> None:
+@check_not_inited
+def init_sync(
+ env_filepath: str, indexer: bool, archive: bool, snapshot: bool, snapshot_from: Optional[str]
+) -> None:
env = compose_node_env(env_filepath, sync_node=True)
if env is None:
return
- inited_ok = init_sync_op(env_filepath, env, archive, historic_state, snapshot_from)
+ inited_ok = init_sync_op(env_filepath, env, indexer, archive, snapshot, snapshot_from)
if not inited_ok:
error_exit('Init operation failed', exit_code=CLIExitCodes.OPERATION_EXECUTION_ERROR)
logger.info('Waiting for containers initialization')
@@ -212,16 +215,11 @@ def update_sync(env_filepath: str, unsafe_ok: bool = False) -> None:
@check_inited
@check_user
-def repair_sync(archive: bool, historic_state: bool, snapshot_from: str) -> None:
- env_params = extract_env_params(INIT_ENV_FILEPATH, sync_node=True)
- schain_name = env_params['SCHAIN_NAME']
- repair_sync_op(
- schain_name=schain_name,
- archive=archive,
- historic_state=historic_state,
- snapshot_from=snapshot_from,
- )
- logger.info('Schain was started from scratch')
+def cleanup_sync() -> None:
+ env = compose_node_env(SKALE_DIR_ENV_FILEPATH, save=False, sync_node=True)
+ schain_name = env['SCHAIN_NAME']
+ cleanup_sync_op(env, schain_name)
+ logger.info('Sync node was cleaned up, all containers and data removed')
def compose_node_env(
@@ -230,7 +228,7 @@ def compose_node_env(
sync_schains=None,
pull_config_for_schain=None,
sync_node=False,
- save: bool = True
+ save: bool = True,
):
if env_filepath is not None:
env_params = extract_env_params(env_filepath, sync_node=sync_node, raise_for_status=True)
diff --git a/node_cli/core/schains.py b/node_cli/core/schains.py
index ab646b8f..9b6c625c 100644
--- a/node_cli/core/schains.py
+++ b/node_cli/core/schains.py
@@ -13,21 +13,17 @@
NODE_CONFIG_PATH,
NODE_CLI_STATUS_FILENAME,
SCHAIN_NODE_DATA_PATH,
- SCHAINS_MNT_DIR_SYNC
+ SCHAINS_MNT_DIR_SYNC,
)
from node_cli.configs.env import get_env_config
-from node_cli.utils.helper import (
- get_request,
- error_exit,
- safe_load_yml
-)
+from node_cli.utils.helper import get_request, error_exit, safe_load_yml
from node_cli.utils.exit_codes import CLIExitCodes
from node_cli.utils.print_formatters import (
print_dkg_statuses,
print_firewall_rules,
print_schain_info,
- print_schains
+ print_schains,
)
from node_cli.utils.docker_utils import ensure_volume, is_volume_exists
from node_cli.utils.helper import read_json, run_cmd, save_json
@@ -41,9 +37,7 @@
def get_schain_firewall_rules(schain: str) -> None:
status, payload = get_request(
- blueprint=BLUEPRINT_NAME,
- method='firewall-rules',
- params={'schain_name': schain}
+ blueprint=BLUEPRINT_NAME, method='firewall-rules', params={'schain_name': schain}
)
if status == 'ok':
print_firewall_rules(payload['endpoints'])
@@ -52,10 +46,7 @@ def get_schain_firewall_rules(schain: str) -> None:
def show_schains() -> None:
- status, payload = get_request(
- blueprint=BLUEPRINT_NAME,
- method='list'
- )
+ status, payload = get_request(blueprint=BLUEPRINT_NAME, method='list')
if status == 'ok':
schains = payload
if not schains:
@@ -69,11 +60,7 @@ def show_schains() -> None:
def show_dkg_info(all_: bool = False) -> None:
params = {'all': all_}
- status, payload = get_request(
- blueprint=BLUEPRINT_NAME,
- method='dkg-statuses',
- params=params
- )
+ status, payload = get_request(blueprint=BLUEPRINT_NAME, method='dkg-statuses', params=params)
if status == 'ok':
print_dkg_statuses(payload)
else:
@@ -82,9 +69,7 @@ def show_dkg_info(all_: bool = False) -> None:
def show_config(name: str) -> None:
status, payload = get_request(
- blueprint=BLUEPRINT_NAME,
- method='config',
- params={'schain_name': name}
+ blueprint=BLUEPRINT_NAME, method='config', params={'schain_name': name}
)
if status == 'ok':
pprint.pprint(payload)
@@ -97,9 +82,7 @@ def get_node_cli_schain_status_filepath(schain_name: str) -> str:
def update_node_cli_schain_status(
- schain_name: str,
- repair_ts: Optional[int] = None,
- snapshot_from: Optional[str] = None
+ schain_name: str, repair_ts: Optional[int] = None, snapshot_from: Optional[str] = None
) -> None:
path = get_node_cli_schain_status_filepath(schain_name)
if os.path.isdir(path):
@@ -110,7 +93,7 @@ def update_node_cli_schain_status(
status = {
'schain_name': schain_name,
'repair_ts': repair_ts,
- 'snapshot_from': snapshot_from
+ 'snapshot_from': snapshot_from,
}
os.makedirs(os.path.dirname(path), exist_ok=True)
save_json(path, status)
@@ -121,10 +104,7 @@ def get_node_cli_schain_status(schain_name: str) -> dict:
return read_json(path)
-def toggle_schain_repair_mode(
- schain: str,
- snapshot_from: Optional[str] = None
-) -> None:
+def toggle_schain_repair_mode(schain: str, snapshot_from: Optional[str] = None) -> None:
ts = int(time.time())
update_node_cli_schain_status(schain_name=schain, repair_ts=ts, snapshot_from=snapshot_from)
print('Schain has been set for repair')
@@ -132,9 +112,7 @@ def toggle_schain_repair_mode(
def describe(schain: str, raw=False) -> None:
status, payload = get_request(
- blueprint=BLUEPRINT_NAME,
- method='get',
- params={'schain_name': schain}
+ blueprint=BLUEPRINT_NAME, method='get', params={'schain_name': schain}
)
if status == 'ok':
print_schain_info(payload, raw=raw)
@@ -193,23 +171,18 @@ def rm_btrfs_subvolume(subvolume: str) -> None:
def fillin_snapshot_folder(src_path: str, block_number: int) -> None:
snapshots_dirname = 'snapshots'
- snapshot_folder_path = os.path.join(
- src_path, snapshots_dirname, str(block_number))
+ snapshot_folder_path = os.path.join(src_path, snapshots_dirname, str(block_number))
os.makedirs(snapshot_folder_path, exist_ok=True)
for subvolume in os.listdir(src_path):
if subvolume != snapshots_dirname:
logger.debug('Copying %s to %s', subvolume, snapshot_folder_path)
subvolume_path = os.path.join(src_path, subvolume)
- subvolume_snapshot_path = os.path.join(
- snapshot_folder_path, subvolume)
+ subvolume_snapshot_path = os.path.join(snapshot_folder_path, subvolume)
make_btrfs_snapshot(subvolume_path, subvolume_snapshot_path)
def restore_schain_from_snapshot(
- schain: str,
- snapshot_path: str,
- env_type: Optional[str] = None,
- schain_type: str = 'medium'
+ schain: str, snapshot_path: str, env_type: Optional[str] = None, schain_type: str = 'medium'
) -> None:
if env_type is None:
env_config = get_env_config()
diff --git a/node_cli/operations/__init__.py b/node_cli/operations/__init__.py
index 28e46ff9..5c53ec18 100644
--- a/node_cli/operations/__init__.py
+++ b/node_cli/operations/__init__.py
@@ -25,6 +25,6 @@
turn_off as turn_off_op,
turn_on as turn_on_op,
restore as restore_op,
- repair_sync as repair_sync_op,
- configure_nftables
+ cleanup_sync as cleanup_sync_op,
+ configure_nftables,
)
diff --git a/node_cli/operations/base.py b/node_cli/operations/base.py
index 540f0188..a4ab9b4d 100644
--- a/node_cli/operations/base.py
+++ b/node_cli/operations/base.py
@@ -17,14 +17,25 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see .
+import time
+
import distro
import functools
import logging
from typing import Dict, Optional
from node_cli.cli.info import VERSION
-from node_cli.configs import CONTAINER_CONFIG_PATH, CONTAINER_CONFIG_TMP_PATH
-from node_cli.core.host import ensure_btrfs_kernel_module_autoloaded, link_env_file, prepare_host
+from node_cli.configs import (
+ CONTAINER_CONFIG_PATH,
+ CONTAINER_CONFIG_TMP_PATH,
+ SKALE_DIR,
+ GLOBAL_SKALE_DIR,
+)
+from node_cli.core.host import (
+ ensure_btrfs_kernel_module_autoloaded,
+ link_env_file,
+ prepare_host,
+)
from node_cli.core.docker_config import configure_docker
from node_cli.core.nginx import generate_nginx_config
@@ -37,29 +48,33 @@
download_contracts,
configure_filebeat,
configure_flask,
- unpack_backup_archive
+ unpack_backup_archive,
)
from node_cli.operations.volume import (
cleanup_volume_artifacts,
ensure_filestorage_mapping,
- prepare_block_device
+ prepare_block_device,
)
from node_cli.operations.docker_lvmpy import lvmpy_install # noqa
-from node_cli.operations.skale_node import download_skale_node, sync_skale_node, update_images
+from node_cli.operations.skale_node import (
+ download_skale_node,
+ sync_skale_node,
+ update_images,
+)
from node_cli.core.checks import CheckType, run_checks as run_host_checks
-from node_cli.core.schains import update_node_cli_schain_status, cleanup_sync_datadir
+from node_cli.core.schains import (
+ update_node_cli_schain_status,
+ cleanup_sync_datadir,
+)
from node_cli.utils.docker_utils import (
compose_rm,
compose_up,
docker_cleanup,
remove_dynamic_containers,
- remove_schain_container,
- start_admin,
- stop_admin
)
from node_cli.utils.meta import get_meta_info, update_meta
from node_cli.utils.print_formatters import print_failed_requirements_checks
-from node_cli.utils.helper import str_to_bool
+from node_cli.utils.helper import str_to_bool, rm_dir
logger = logging.getLogger(__name__)
@@ -68,15 +83,12 @@
def checked_host(func):
@functools.wraps(func)
def wrapper(env_filepath: str, env: Dict, *args, **kwargs):
- download_skale_node(
- env['CONTAINER_CONFIGS_STREAM'],
- env.get('CONTAINER_CONFIGS_DIR')
- )
+ download_skale_node(env['CONTAINER_CONFIGS_STREAM'], env.get('CONTAINER_CONFIGS_DIR'))
failed_checks = run_host_checks(
env['DISK_MOUNTPOINT'],
env['ENV_TYPE'],
CONTAINER_CONFIG_TMP_PATH,
- check_type=CheckType.PREINSTALL
+ check_type=CheckType.PREINSTALL,
)
if failed_checks:
print_failed_requirements_checks(failed_checks)
@@ -90,7 +102,7 @@ def wrapper(env_filepath: str, env: Dict, *args, **kwargs):
env['DISK_MOUNTPOINT'],
env['ENV_TYPE'],
CONTAINER_CONFIG_PATH,
- check_type=CheckType.POSTINSTALL
+ check_type=CheckType.POSTINSTALL,
)
if failed_checks:
print_failed_requirements_checks(failed_checks)
@@ -120,11 +132,7 @@ def update(env_filepath: str, env: Dict) -> None:
lvmpy_install(env)
generate_nginx_config()
- prepare_host(
- env_filepath,
- env['ENV_TYPE'],
- allocation=True
- )
+ prepare_host(env_filepath, env['ENV_TYPE'], allocation=True)
init_shared_space_volume(env['ENV_TYPE'])
current_stream = get_meta_info().config_stream
@@ -133,7 +141,7 @@ def update(env_filepath: str, env: Dict) -> None:
logger.info(
'Stream version was changed from %s to %s',
current_stream,
- env['CONTAINER_CONFIGS_STREAM']
+ env['CONTAINER_CONFIGS_STREAM'],
)
docker_cleanup()
@@ -142,7 +150,7 @@ def update(env_filepath: str, env: Dict) -> None:
env['CONTAINER_CONFIGS_STREAM'],
env['DOCKER_LVMPY_STREAM'],
distro.id(),
- distro.version()
+ distro.version(),
)
update_images(env=env)
compose_up(env)
@@ -160,10 +168,7 @@ def init(env_filepath: str, env: dict) -> bool:
enable_monitoring = str_to_bool(env.get('MONITORING_CONTAINERS', 'False'))
configure_nftables(enable_monitoring=enable_monitoring)
- prepare_host(
- env_filepath,
- env_type=env['ENV_TYPE']
- )
+ prepare_host(env_filepath, env_type=env['ENV_TYPE'])
link_env_file()
download_contracts(env)
@@ -179,7 +184,7 @@ def init(env_filepath: str, env: dict) -> bool:
env['CONTAINER_CONFIGS_STREAM'],
env['DOCKER_LVMPY_STREAM'],
distro.id(),
- distro.version()
+ distro.version(),
)
update_resource_allocation(env_type=env['ENV_TYPE'])
update_images(env=env)
@@ -191,15 +196,13 @@ def init(env_filepath: str, env: dict) -> bool:
def init_sync(
env_filepath: str,
env: dict,
+ indexer: bool,
archive: bool,
- historic_state: bool,
- snapshot_from: Optional[str]
+ snapshot: bool,
+ snapshot_from: Optional[str],
) -> bool:
cleanup_volume_artifacts(env['DISK_MOUNTPOINT'])
- download_skale_node(
- env.get('CONTAINER_CONFIGS_STREAM'),
- env.get('CONTAINER_CONFIGS_DIR')
- )
+ download_skale_node(env.get('CONTAINER_CONFIGS_STREAM'), env.get('CONTAINER_CONFIGS_DIR'))
sync_skale_node()
if env.get('SKIP_DOCKER_CONFIG') != 'True':
@@ -214,32 +217,30 @@ def init_sync(
)
node_options = NodeOptions()
- node_options.archive = archive
- node_options.catchup = archive
- node_options.historic_state = historic_state
+ node_options.archive = archive or indexer
+ node_options.catchup = archive or indexer
+ node_options.historic_state = archive
ensure_filestorage_mapping()
link_env_file()
download_contracts(env)
generate_nginx_config()
- prepare_block_device(
- env['DISK_MOUNTPOINT'],
- force=env['ENFORCE_BTRFS'] == 'True'
- )
+ prepare_block_device(env['DISK_MOUNTPOINT'], force=env['ENFORCE_BTRFS'] == 'True')
update_meta(
VERSION,
env['CONTAINER_CONFIGS_STREAM'],
env['DOCKER_LVMPY_STREAM'],
distro.id(),
- distro.version()
+ distro.version(),
)
update_resource_allocation(env_type=env['ENV_TYPE'])
schain_name = env['SCHAIN_NAME']
- if snapshot_from:
- update_node_cli_schain_status(schain_name, snapshot_from=snapshot_from)
+ if snapshot or snapshot_from:
+ ts = int(time.time())
+ update_node_cli_schain_status(schain_name, repair_ts=ts, snapshot_from=snapshot_from)
update_images(env=env, sync_node=True)
@@ -251,10 +252,7 @@ def update_sync(env_filepath: str, env: Dict) -> bool:
compose_rm(env, sync_node=True)
remove_dynamic_containers()
cleanup_volume_artifacts(env['DISK_MOUNTPOINT'])
- download_skale_node(
- env['CONTAINER_CONFIGS_STREAM'],
- env.get('CONTAINER_CONFIGS_DIR')
- )
+ download_skale_node(env['CONTAINER_CONFIGS_STREAM'], env.get('CONTAINER_CONFIGS_DIR'))
sync_skale_node()
if env.get('SKIP_DOCKER_CONFIG') != 'True':
@@ -267,24 +265,17 @@ def update_sync(env_filepath: str, env: Dict) -> bool:
backup_old_contracts()
download_contracts(env)
- prepare_block_device(
- env['DISK_MOUNTPOINT'],
- force=env['ENFORCE_BTRFS'] == 'True'
- )
+ prepare_block_device(env['DISK_MOUNTPOINT'], force=env['ENFORCE_BTRFS'] == 'True')
generate_nginx_config()
- prepare_host(
- env_filepath,
- env['ENV_TYPE'],
- allocation=True
- )
+ prepare_host(env_filepath, env['ENV_TYPE'], allocation=True)
update_meta(
VERSION,
env['CONTAINER_CONFIGS_STREAM'],
env['DOCKER_LVMPY_STREAM'],
distro.id(),
- distro.version()
+ distro.version(),
)
update_images(env=env, sync_node=True)
@@ -292,9 +283,9 @@ def update_sync(env_filepath: str, env: Dict) -> bool:
return True
-def turn_off(env: dict) -> None:
+def turn_off(env: dict, sync_node: bool = False) -> None:
logger.info('Turning off the node...')
- compose_rm(env=env)
+ compose_rm(env=env, sync_node=sync_node)
remove_dynamic_containers()
logger.info('Node was successfully turned off')
@@ -306,7 +297,7 @@ def turn_on(env: dict) -> None:
env['CONTAINER_CONFIGS_STREAM'],
env['DOCKER_LVMPY_STREAM'],
distro.id(),
- distro.version()
+ distro.version(),
)
if env.get('SKIP_DOCKER_CONFIG') != 'True':
configure_docker()
@@ -324,7 +315,7 @@ def restore(env, backup_path, config_only=False):
env['DISK_MOUNTPOINT'],
env['ENV_TYPE'],
CONTAINER_CONFIG_PATH,
- check_type=CheckType.PREINSTALL
+ check_type=CheckType.PREINSTALL,
)
if failed_checks:
print_failed_requirements_checks(failed_checks)
@@ -347,7 +338,7 @@ def restore(env, backup_path, config_only=False):
env['CONTAINER_CONFIGS_STREAM'],
env['DOCKER_LVMPY_STREAM'],
distro.id(),
- distro.version()
+ distro.version(),
)
update_resource_allocation(env_type=env['ENV_TYPE'])
@@ -358,7 +349,7 @@ def restore(env, backup_path, config_only=False):
env['DISK_MOUNTPOINT'],
env['ENV_TYPE'],
CONTAINER_CONFIG_PATH,
- check_type=CheckType.POSTINSTALL
+ check_type=CheckType.POSTINSTALL,
)
if failed_checks:
print_failed_requirements_checks(failed_checks)
@@ -366,24 +357,8 @@ def restore(env, backup_path, config_only=False):
return True
-def repair_sync(
- schain_name: str,
- archive: bool,
- historic_state: bool,
- snapshot_from: Optional[str]
-) -> None:
- stop_admin(sync_node=True)
- remove_schain_container(schain_name=schain_name)
-
- logger.info('Updating node options')
+def cleanup_sync(env, schain_name: str) -> None:
+ turn_off(env, sync_node=True)
cleanup_sync_datadir(schain_name=schain_name)
-
- logger.info('Updating node options')
- node_options = NodeOptions()
- node_options.archive = archive
- node_options.catchup = archive
- node_options.historic_state = historic_state
-
- logger.info('Updating cli status')
- update_node_cli_schain_status(schain_name, snapshot_from=snapshot_from)
- start_admin(sync_node=True)
+ rm_dir(GLOBAL_SKALE_DIR)
+ rm_dir(SKALE_DIR)
diff --git a/node_cli/utils/docker_utils.py b/node_cli/utils/docker_utils.py
index fe271d79..2b75f9a3 100644
--- a/node_cli/utils/docker_utils.py
+++ b/node_cli/utils/docker_utils.py
@@ -33,13 +33,12 @@
SYNC_COMPOSE_PATH,
REMOVED_CONTAINERS_FOLDER_PATH,
SGX_CERTIFICATES_DIR_NAME,
- NGINX_CONTAINER_NAME
+ NGINX_CONTAINER_NAME,
)
logger = logging.getLogger(__name__)
-ADMIN_REMOVE_TIMEOUT = 60
SCHAIN_REMOVE_TIMEOUT = 300
IMA_REMOVE_TIMEOUT = 20
TELEGRAF_REMOVE_TIMEOUT = 20
@@ -53,9 +52,12 @@
'nginx',
'redis',
'watchdog',
- 'filebeat'
+ 'filebeat',
+)
+MONITORING_COMPOSE_SERVICES = (
+ 'node-exporter',
+ 'advisor',
)
-MONITORING_COMPOSE_SERVICES = ('node-exporter', 'advisor',)
TELEGRAF_SERVICES = ('telegraf',)
NOTIFICATION_COMPOSE_SERVICES = ('celery',)
COMPOSE_TIMEOUT = 10
@@ -123,8 +125,7 @@ def safe_rm(container: Container, timeout=DOCKER_DEFAULT_STOP_TIMEOUT, **kwargs)
folder. Then stops and removes container with specified params.
"""
container_name = container.name
- logger.info(
- f'Stopping container: {container_name}, timeout: {timeout}')
+ logger.info(f'Stopping container: {container_name}, timeout: {timeout}')
container.stop(timeout=timeout)
backup_container_logs(container)
logger.info(f'Removing container: {container_name}, kwargs: {kwargs}')
@@ -135,7 +136,7 @@ def safe_rm(container: Container, timeout=DOCKER_DEFAULT_STOP_TIMEOUT, **kwargs)
def stop_container(
container_name: str,
timeout: int = DOCKER_DEFAULT_STOP_TIMEOUT,
- dclient: Optional[DockerClient] = None
+ dclient: Optional[DockerClient] = None,
) -> None:
dc = dclient or docker_client()
container = dc.containers.get(container_name)
@@ -146,7 +147,7 @@ def stop_container(
def rm_container(
container_name: str,
timeout: int = DOCKER_DEFAULT_STOP_TIMEOUT,
- dclient: Optional[DockerClient] = None
+ dclient: Optional[DockerClient] = None,
) -> None:
dc = dclient or docker_client()
container_names = [container.name for container in get_containers()]
@@ -155,26 +156,13 @@ def rm_container(
safe_rm(container)
-def start_container(
- container_name: str,
- dclient: Optional[DockerClient] = None
-) -> None:
+def start_container(container_name: str, dclient: Optional[DockerClient] = None) -> None:
dc = dclient or docker_client()
container = dc.containers.get(container_name)
logger.info('Starting container %s', container_name)
container.start()
-def start_admin(sync_node: bool = False, dclient: Optional[DockerClient] = None) -> None:
- container_name = 'skale_sync_admin' if sync_node else 'skale_admin'
- start_container(container_name=container_name, dclient=dclient)
-
-
-def stop_admin(sync_node: bool = False, dclient: Optional[DockerClient] = None) -> None:
- container_name = 'skale_sync_admin' if sync_node else 'skale_admin'
- stop_container(container_name=container_name, timeout=ADMIN_REMOVE_TIMEOUT, dclient=dclient)
-
-
def remove_schain_container(schain_name: str, dclient: Optional[DockerClient] = None) -> None:
container_name = f'skale_schain_{schain_name}'
rm_container(container_name, timeout=SCHAIN_REMOVE_TIMEOUT, dclient=dclient)
@@ -183,20 +171,19 @@ def remove_schain_container(schain_name: str, dclient: Optional[DockerClient] =
def backup_container_logs(
container: Container,
head: int = DOCKER_DEFAULT_HEAD_LINES,
- tail: int = DOCKER_DEFAULT_TAIL_LINES
+ tail: int = DOCKER_DEFAULT_TAIL_LINES,
) -> None:
logger.info(f'Going to backup container logs: {container.name}')
logs_backup_filepath = get_logs_backup_filepath(container)
save_container_logs(container, logs_backup_filepath, tail)
- logger.info(
- f'Old container logs saved to {logs_backup_filepath}, tail: {tail}')
+ logger.info(f'Old container logs saved to {logs_backup_filepath}, tail: {tail}')
def save_container_logs(
container: Container,
log_filepath: str,
head: int = DOCKER_DEFAULT_HEAD_LINES,
- tail: int = DOCKER_DEFAULT_TAIL_LINES
+ tail: int = DOCKER_DEFAULT_TAIL_LINES,
) -> None:
separator = b'=' * 80 + b'\n'
tail_lines = container.logs(tail=tail)
@@ -211,8 +198,9 @@ def save_container_logs(
def get_logs_backup_filepath(container: Container) -> str:
- container_index = sum(1 for f in os.listdir(REMOVED_CONTAINERS_FOLDER_PATH)
- if f.startswith(f'{container.name}-'))
+ container_index = sum(
+ 1 for f in os.listdir(REMOVED_CONTAINERS_FOLDER_PATH) if f.startswith(f'{container.name}-')
+ )
log_file_name = f'{container.name}-{container_index}.log'
return os.path.join(REMOVED_CONTAINERS_FOLDER_PATH, log_file_name)
@@ -224,11 +212,7 @@ def ensure_volume(name: str, size: int, driver='lvmpy', dutils=None):
return
logger.info('Creating volume %s, size: %d', name, size)
driver_opts = {'size': str(size)} if driver == 'lvmpy' else None
- volume = dutils.volumes.create(
- name=name,
- driver=driver,
- driver_opts=driver_opts
- )
+ volume = dutils.volumes.create(name=name, driver=driver, driver_opts=driver_opts)
return volume
@@ -246,12 +230,15 @@ def compose_rm(env={}, sync_node: bool = False):
compose_path = get_compose_path(sync_node)
run_cmd(
cmd=(
- 'docker', 'compose',
- '-f', compose_path,
+ 'docker',
+ 'compose',
+ '-f',
+ compose_path,
'down',
- '-t', str(COMPOSE_SHUTDOWN_TIMEOUT),
+ '-t',
+ str(COMPOSE_SHUTDOWN_TIMEOUT),
),
- env=env
+ env=env,
)
logger.info('Compose containers removed')
@@ -259,19 +246,13 @@ def compose_rm(env={}, sync_node: bool = False):
def compose_pull(env: dict, sync_node: bool = False):
logger.info('Pulling compose containers')
compose_path = get_compose_path(sync_node)
- run_cmd(
- cmd=('docker', 'compose', '-f', compose_path, 'pull'),
- env=env
- )
+ run_cmd(cmd=('docker', 'compose', '-f', compose_path, 'pull'), env=env)
def compose_build(env: dict, sync_node: bool = False):
logger.info('Building compose containers')
compose_path = get_compose_path(sync_node)
- run_cmd(
- cmd=('docker', 'compose', '-f', compose_path, 'build'),
- env=env
- )
+ run_cmd(cmd=('docker', 'compose', '-f', compose_path, 'build'), env=env)
def get_up_compose_cmd(services):
@@ -329,10 +310,7 @@ def cleanup_unused_images(dclient=None, ignore=None):
ignore = ignore or []
dc = dclient or docker_client()
used = get_used_images(dclient=dc)
- remove_images(
- filter(lambda i: i not in used and i not in ignore, dc.images.list()),
- dclient=dc
- )
+ remove_images(filter(lambda i: i not in used and i not in ignore, dc.images.list()), dclient=dc)
def is_container_running(name: str, dclient: Optional[DockerClient] = None) -> bool:
diff --git a/node_cli/utils/helper.py b/node_cli/utils/helper.py
index 39de440a..7b6da7f2 100644
--- a/node_cli/utils/helper.py
+++ b/node_cli/utils/helper.py
@@ -49,20 +49,27 @@
from node_cli.utils.print_formatters import print_err_response
from node_cli.utils.exit_codes import CLIExitCodes
-from node_cli.configs.env import (
- absent_params as absent_env_params,
- get_env_config
-)
+from node_cli.configs.env import absent_params as absent_env_params, get_env_config
from node_cli.configs import (
- TEXT_FILE, ADMIN_HOST, ADMIN_PORT, HIDE_STREAM_LOG, GLOBAL_SKALE_DIR,
- GLOBAL_SKALE_CONF_FILEPATH, DEFAULT_SSH_PORT
+ TEXT_FILE,
+ ADMIN_HOST,
+ ADMIN_PORT,
+ HIDE_STREAM_LOG,
+ GLOBAL_SKALE_DIR,
+ GLOBAL_SKALE_CONF_FILEPATH,
+ DEFAULT_SSH_PORT,
)
from node_cli.configs.routes import get_route
from node_cli.utils.global_config import read_g_config, get_system_user
from node_cli.configs.cli_logger import (
- FILE_LOG_FORMAT, LOG_BACKUP_COUNT, LOG_FILE_SIZE_BYTES,
- LOG_FILEPATH, STREAM_LOG_FORMAT, DEBUG_LOG_FILEPATH)
+ FILE_LOG_FORMAT,
+ LOG_BACKUP_COUNT,
+ LOG_FILE_SIZE_BYTES,
+ LOG_FILEPATH,
+ STREAM_LOG_FORMAT,
+ DEBUG_LOG_FILEPATH,
+)
logger = logging.getLogger(__name__)
@@ -71,7 +78,7 @@
DEFAULT_ERROR_DATA = {
'status': 'error',
- 'payload': 'Request failed. Check skale_api container logs'
+ 'payload': 'Request failed. Check skale_api container logs',
}
@@ -100,14 +107,7 @@ def init_file(path, content=None):
write_json(path, content)
-def run_cmd(
- cmd,
- env={},
- shell=False,
- secure=False,
- check_code=True,
- separate_stderr=False
-):
+def run_cmd(cmd, env={}, shell=False, secure=False, check_code=True, separate_stderr=False):
if not secure:
logger.debug(f'Running: {cmd}')
else:
@@ -115,13 +115,7 @@ def run_cmd(
stdout, stderr = subprocess.PIPE, subprocess.PIPE
if not separate_stderr:
stderr = subprocess.STDOUT
- res = subprocess.run(
- cmd,
- shell=shell,
- stdout=stdout,
- stderr=stderr,
- env={**env, **os.environ}
- )
+ res = subprocess.run(cmd, shell=shell, stdout=stdout, stderr=stderr, env={**env, **os.environ})
if check_code:
output = res.stdout.decode('utf-8')
if res.returncode:
@@ -152,7 +146,7 @@ def process_template(source, destination, data):
"""
template = read_file(source)
processed_template = Environment().from_string(template).render(data)
- with open(destination, "w") as f:
+ with open(destination, 'w') as f:
f.write(processed_template)
@@ -164,11 +158,13 @@ def extract_env_params(env_filepath, sync_node=False, raise_for_status=True):
env_params = get_env_config(env_filepath, sync_node=sync_node)
absent_params = ', '.join(absent_env_params(env_params))
if absent_params:
- click.echo(f"Your env file({env_filepath}) have some absent params: "
- f"{absent_params}.\n"
- f"You should specify them to make sure that "
- f"all services are working",
- err=True)
+ click.echo(
+ f'Your env file ({env_filepath}) is missing some params: '
+ f'{absent_params}.\n'
+ f'You should specify them to make sure that '
+ f'all services are working',
+ err=True,
+ )
if raise_for_status:
raise InvalidEnvFileError(f'Missing params: {absent_params}')
return None
@@ -260,7 +256,7 @@ def download_dump(path, container_name=None):
error_exit(r.json())
return None
d = r.headers['Content-Disposition']
- fname_q = re.findall("filename=(.+)", d)[0]
+ fname_q = re.findall('filename=(.+)', d)[0]
fname = fname_q.replace('"', '')
filepath = os.path.join(path, fname)
with open(filepath, 'wb') as f:
@@ -271,8 +267,7 @@ def download_dump(path, container_name=None):
def init_default_logger():
f_handler = get_file_handler(LOG_FILEPATH, logging.INFO)
debug_f_handler = get_file_handler(DEBUG_LOG_FILEPATH, logging.DEBUG)
- logging.basicConfig(
- level=logging.DEBUG, handlers=[f_handler, debug_f_handler])
+ logging.basicConfig(level=logging.DEBUG, handlers=[f_handler, debug_f_handler])
def get_stream_handler():
@@ -286,8 +281,8 @@ def get_stream_handler():
def get_file_handler(log_filepath, log_level):
formatter = Formatter(FILE_LOG_FORMAT)
f_handler = py_handlers.RotatingFileHandler(
- log_filepath, maxBytes=LOG_FILE_SIZE_BYTES,
- backupCount=LOG_BACKUP_COUNT)
+ log_filepath, maxBytes=LOG_FILE_SIZE_BYTES, backupCount=LOG_BACKUP_COUNT
+ )
f_handler.setFormatter(formatter)
f_handler.setLevel(log_level)
@@ -306,25 +301,28 @@ def to_camel_case(snake_str):
def validate_abi(abi_filepath: str) -> dict:
if not os.path.isfile(abi_filepath):
- return {'filepath': abi_filepath,
- 'status': 'error',
- 'msg': 'No such file'}
+ return {'filepath': abi_filepath, 'status': 'error', 'msg': 'No such file'}
try:
with open(abi_filepath) as abi_file:
json.load(abi_file)
except Exception:
- return {'filepath': abi_filepath, 'status': 'error',
- 'msg': 'Failed to load abi file as json'}
+ return {
+ 'filepath': abi_filepath,
+ 'status': 'error',
+ 'msg': 'Failed to load abi file as json',
+ }
return {'filepath': abi_filepath, 'status': 'ok', 'msg': ''}
def streamed_cmd(func):
- """ Decorator that allow function to print logs into stderr """
+ """Decorator that allows a function to print logs to stderr"""
+
@wraps(func)
def wrapper(*args, **kwargs):
if HIDE_STREAM_LOG is None:
logging.getLogger('').addHandler(get_stream_handler())
return func(*args, **kwargs)
+
return wrapper
@@ -353,7 +351,7 @@ def rm_dir(folder: str) -> None:
logger.info(f'{folder} exists, removing...')
shutil.rmtree(folder)
else:
- logger.info(f'{folder} doesn\'t exist, skipping...')
+ logger.info(f"{folder} doesn't exist, skipping...")
def safe_mkdir(path: str, print_res: bool = False):
@@ -387,8 +385,7 @@ def convert(self, value, param, ctx):
try:
result = urlparse(value)
except ValueError:
- self.fail(f'Some characters are not allowed in {value}',
- param, ctx)
+ self.fail(f'Some characters are not allowed in {value}', param, ctx)
if not all([result.scheme, result.netloc]):
self.fail(f'Expected valid url. Got {value}', param, ctx)
return value
@@ -401,8 +398,7 @@ def convert(self, value, param, ctx):
try:
ipaddress.ip_address(value)
except ValueError:
- self.fail(f'expected valid ipv4/ipv6 address. Got {value}',
- param, ctx)
+ self.fail(f'expected valid ipv4/ipv6 address. Got {value}', param, ctx)
return value
diff --git a/ruff.toml b/ruff.toml
new file mode 100644
index 00000000..d823f4d6
--- /dev/null
+++ b/ruff.toml
@@ -0,0 +1,4 @@
+line-length = 100
+
+[format]
+quote-style = "single"
diff --git a/tests/cli/sync_node_test.py b/tests/cli/sync_node_test.py
index 3465bfc8..88803d4f 100644
--- a/tests/cli/sync_node_test.py
+++ b/tests/cli/sync_node_test.py
@@ -24,12 +24,13 @@
from node_cli.configs import SKALE_DIR, NODE_DATA_PATH
from node_cli.core.node_options import NodeOptions
-from node_cli.cli.sync_node import _init_sync, _update_sync
+from node_cli.cli.sync_node import _init_sync, _update_sync, _cleanup_sync
from node_cli.utils.meta import CliMeta
from node_cli.utils.helper import init_default_logger
from tests.helper import run_command, subprocess_run_mock
from tests.resources_test import BIG_DISK_SIZE
+from tests.conftest import set_env_var
logger = logging.getLogger(__name__)
init_default_logger()
@@ -37,12 +38,13 @@
def test_init_sync(mocked_g_config):
pathlib.Path(SKALE_DIR).mkdir(parents=True, exist_ok=True)
- with mock.patch('subprocess.run', new=subprocess_run_mock), mock.patch(
- 'node_cli.core.node.init_sync_op'
- ), mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True), mock.patch(
- 'node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE
- ), mock.patch('node_cli.operations.base.configure_nftables'), mock.patch(
- 'node_cli.utils.decorators.is_node_inited', return_value=False
+ with (
+ mock.patch('subprocess.run', new=subprocess_run_mock),
+ mock.patch('node_cli.core.node.init_sync_op'),
+ mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True),
+ mock.patch('node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE),
+ mock.patch('node_cli.operations.base.configure_nftables'),
+ mock.patch('node_cli.utils.decorators.is_node_inited', return_value=False),
):
result = run_command(_init_sync, ['./tests/test-env'])
@@ -57,28 +59,27 @@ def test_init_sync(mocked_g_config):
def test_init_sync_archive(mocked_g_config, clean_node_options):
pathlib.Path(NODE_DATA_PATH).mkdir(parents=True, exist_ok=True)
# with mock.patch('subprocess.run', new=subprocess_run_mock), \
- with mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True), mock.patch(
- 'node_cli.operations.base.cleanup_volume_artifacts'
- ), mock.patch('node_cli.operations.base.download_skale_node'), mock.patch(
- 'node_cli.operations.base.sync_skale_node'
- ), mock.patch('node_cli.operations.base.configure_docker'), mock.patch(
- 'node_cli.operations.base.prepare_host'
- ), mock.patch('node_cli.operations.base.ensure_filestorage_mapping'), mock.patch(
- 'node_cli.operations.base.link_env_file'
- ), mock.patch('node_cli.operations.base.download_contracts'), mock.patch(
- 'node_cli.operations.base.generate_nginx_config'
- ), mock.patch('node_cli.operations.base.prepare_block_device'), mock.patch(
- 'node_cli.operations.base.update_meta'
- ), mock.patch('node_cli.operations.base.update_resource_allocation'), mock.patch(
- 'node_cli.operations.base.update_images'
- ), mock.patch('node_cli.operations.base.compose_up'), mock.patch(
- 'node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE
- ), mock.patch('node_cli.operations.base.configure_nftables'), mock.patch(
- 'node_cli.utils.decorators.is_node_inited', return_value=False
+ with (
+ mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True),
+ mock.patch('node_cli.operations.base.cleanup_volume_artifacts'),
+ mock.patch('node_cli.operations.base.download_skale_node'),
+ mock.patch('node_cli.operations.base.sync_skale_node'),
+ mock.patch('node_cli.operations.base.configure_docker'),
+ mock.patch('node_cli.operations.base.prepare_host'),
+ mock.patch('node_cli.operations.base.ensure_filestorage_mapping'),
+ mock.patch('node_cli.operations.base.link_env_file'),
+ mock.patch('node_cli.operations.base.download_contracts'),
+ mock.patch('node_cli.operations.base.generate_nginx_config'),
+ mock.patch('node_cli.operations.base.prepare_block_device'),
+ mock.patch('node_cli.operations.base.update_meta'),
+ mock.patch('node_cli.operations.base.update_resource_allocation'),
+ mock.patch('node_cli.operations.base.update_images'),
+ mock.patch('node_cli.operations.base.compose_up'),
+ mock.patch('node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE),
+ mock.patch('node_cli.operations.base.configure_nftables'),
+ mock.patch('node_cli.utils.decorators.is_node_inited', return_value=False),
):
- result = run_command(
- _init_sync, ['./tests/test-env', '--archive', '--historic-state']
- )
+ result = run_command(_init_sync, ['./tests/test-env', '--archive'])
node_options = NodeOptions()
assert node_options.archive
@@ -88,32 +89,57 @@ def test_init_sync_archive(mocked_g_config, clean_node_options):
assert result.exit_code == 0
-def test_init_sync_historic_state_fail(mocked_g_config, clean_node_options):
+def test_init_archive_indexer_fail(mocked_g_config, clean_node_options):
pathlib.Path(SKALE_DIR).mkdir(parents=True, exist_ok=True)
- with mock.patch('subprocess.run', new=subprocess_run_mock), mock.patch(
- 'node_cli.core.node.init_sync_op'
- ), mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True), mock.patch(
- 'node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE
- ), mock.patch('node_cli.operations.base.configure_nftables'), mock.patch(
- 'node_cli.utils.decorators.is_node_inited', return_value=False
+ with (
+ mock.patch('subprocess.run', new=subprocess_run_mock),
+ mock.patch('node_cli.core.node.init_sync_op'),
+ mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True),
+ mock.patch('node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE),
+ mock.patch('node_cli.operations.base.configure_nftables'),
+ mock.patch('node_cli.utils.decorators.is_node_inited', return_value=False),
+ mock.patch('node_cli.core.node.compose_node_env', return_value={}),
+ set_env_var('ENV_TYPE', 'devnet'),
):
- result = run_command(_init_sync, ['./tests/test-env', '--historic-state'])
+ result = run_command(_init_sync, ['./tests/test-env', '--archive', '--indexer'])
assert result.exit_code == 1
- assert '--historic-state can be used only' in result.output
+ assert 'Cannot use both' in result.output
def test_update_sync(mocked_g_config):
pathlib.Path(SKALE_DIR).mkdir(parents=True, exist_ok=True)
- with mock.patch('subprocess.run', new=subprocess_run_mock), mock.patch(
- 'node_cli.core.node.update_sync_op'
- ), mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True), mock.patch(
- 'node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE
- ), mock.patch('node_cli.operations.base.configure_nftables'), mock.patch(
- 'node_cli.utils.decorators.is_node_inited', return_value=True
- ), mock.patch(
- 'node_cli.core.node.get_meta_info',
- return_value=CliMeta(version='2.6.0', config_stream='3.0.2')
+ with (
+ mock.patch('subprocess.run', new=subprocess_run_mock),
+ mock.patch('node_cli.core.node.update_sync_op'),
+ mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True),
+ mock.patch('node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE),
+ mock.patch('node_cli.operations.base.configure_nftables'),
+ mock.patch('node_cli.utils.decorators.is_node_inited', return_value=True),
+ mock.patch(
+ 'node_cli.core.node.get_meta_info',
+ return_value=CliMeta(version='2.6.0', config_stream='3.0.2'),
+ ),
):
result = run_command(_update_sync, ['./tests/test-env', '--yes'])
assert result.exit_code == 0
+
+
+def test_cleanup_sync(mocked_g_config):
+ pathlib.Path(SKALE_DIR).mkdir(parents=True, exist_ok=True)
+
+ with (
+ mock.patch('subprocess.run', new=subprocess_run_mock),
+ mock.patch('node_cli.core.node.cleanup_sync_op'),
+ mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True),
+ mock.patch('node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE),
+ mock.patch('node_cli.operations.base.configure_nftables'),
+ mock.patch('node_cli.utils.decorators.is_node_inited', return_value=True),
+ mock.patch('node_cli.core.node.compose_node_env', return_value={'SCHAIN_NAME': 'test'}),
+ mock.patch(
+ 'node_cli.core.node.get_meta_info',
+ return_value=CliMeta(version='2.6.0', config_stream='3.0.2'),
+ ),
+ ):
+ result = run_command(_cleanup_sync, ['--yes'])
+ assert result.exit_code == 0
diff --git a/tests/conftest.py b/tests/conftest.py
index 824ba93d..146c9a22 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -16,12 +16,13 @@
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
-""" SKALE config test """
+"""SKALE config test"""
import json
import os
import pathlib
import shutil
+from contextlib import contextmanager
import docker
import mock
@@ -36,7 +37,7 @@
NGINX_CONTAINER_NAME,
REMOVED_CONTAINERS_FOLDER_PATH,
STATIC_PARAMS_FILEPATH,
- SCHAIN_NODE_DATA_PATH
+ SCHAIN_NODE_DATA_PATH,
)
from node_cli.configs.node_options import NODE_OPTIONS_FILEPATH
from node_cli.configs.ssl import SSL_FOLDER_PATH
@@ -126,11 +127,7 @@
@pytest.fixture
def net_params_file():
with open(STATIC_PARAMS_FILEPATH, 'w') as f:
- yaml.dump(
- yaml.load(TEST_ENV_PARAMS, Loader=yaml.Loader),
- stream=f,
- Dumper=yaml.Dumper
- )
+ yaml.dump(yaml.load(TEST_ENV_PARAMS, Loader=yaml.Loader), stream=f, Dumper=yaml.Dumper)
yield STATIC_PARAMS_FILEPATH
os.remove(STATIC_PARAMS_FILEPATH)
@@ -165,12 +162,7 @@ def dclient():
def simple_image(dclient):
name = 'simple-image'
try:
- dclient.images.build(
- tag=name,
- rm=True,
- nocache=True,
- path='tests/simple_container'
- )
+ dclient.images.build(tag=name, rm=True, nocache=True, path='tests/simple_container')
yield name
finally:
try:
@@ -184,9 +176,7 @@ def simple_image(dclient):
def docker_hc(dclient):
dclient = docker.from_env()
return dclient.api.create_host_config(
- log_config=docker.types.LogConfig(
- type=docker.types.LogConfig.types.JSON
- )
+ log_config=docker.types.LogConfig(type=docker.types.LogConfig.types.JSON)
)
@@ -240,13 +230,7 @@ def nginx_container(dutils, ssl_folder):
'nginx:1.20.2',
name=NGINX_CONTAINER_NAME,
detach=True,
- volumes={
- ssl_folder: {
- 'bind': '/ssl',
- 'mode': 'ro',
- 'propagation': 'slave'
- }
- }
+ volumes={ssl_folder: {'bind': '/ssl', 'mode': 'ro', 'propagation': 'slave'}},
)
yield c
finally:
@@ -321,3 +305,16 @@ def tmp_sync_datadir():
yield TEST_SCHAINS_MNT_DIR_SYNC
finally:
shutil.rmtree(TEST_SCHAINS_MNT_DIR_SYNC)
+
+
+@contextmanager
+def set_env_var(name, value):
+ old_value = os.environ.get(name)
+ os.environ[name] = value
+ try:
+ yield
+ finally:
+ if old_value is None:
+ del os.environ[name]
+ else:
+ os.environ[name] = old_value
diff --git a/tests/core/core_node_test.py b/tests/core/core_node_test.py
index f79c6fa3..a8b7968a 100644
--- a/tests/core/core_node_test.py
+++ b/tests/core/core_node_test.py
@@ -12,7 +12,7 @@
from node_cli.configs import NODE_DATA_PATH
from node_cli.configs.resource_allocation import RESOURCE_ALLOCATION_FILEPATH
from node_cli.core.node import BASE_CONTAINERS_AMOUNT, is_base_containers_alive
-from node_cli.core.node import init, pack_dir, update, is_update_safe, repair_sync
+from node_cli.core.node import init, pack_dir, update, is_update_safe
from node_cli.utils.meta import CliMeta
from tests.helper import response_mock, safe_update_api_response, subprocess_run_mock
@@ -142,14 +142,15 @@ def test_init_node(no_resource_file): # todo: write new init node test
resp_mock = response_mock(requests.codes.created)
assert not os.path.isfile(RESOURCE_ALLOCATION_FILEPATH)
env_filepath = './tests/test-env'
- with mock.patch('subprocess.run', new=subprocess_run_mock), mock.patch(
- 'node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE
- ), mock.patch('node_cli.core.host.prepare_host'), mock.patch(
- 'node_cli.core.host.init_data_dir'
- ), mock.patch('node_cli.operations.base.configure_nftables'), mock.patch(
- 'node_cli.core.node.init_op'
- ), mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True), mock.patch(
- 'node_cli.utils.helper.post_request', resp_mock
+ with (
+ mock.patch('subprocess.run', new=subprocess_run_mock),
+ mock.patch('node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE),
+ mock.patch('node_cli.core.host.prepare_host'),
+ mock.patch('node_cli.core.host.init_data_dir'),
+ mock.patch('node_cli.operations.base.configure_nftables'),
+ mock.patch('node_cli.core.node.init_op'),
+ mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True),
+ mock.patch('node_cli.utils.helper.post_request', resp_mock),
):
init(env_filepath)
assert os.path.isfile(RESOURCE_ALLOCATION_FILEPATH)
@@ -159,23 +160,25 @@ def test_update_node(mocked_g_config, resource_file):
env_filepath = './tests/test-env'
resp_mock = response_mock(requests.codes.created)
os.makedirs(NODE_DATA_PATH, exist_ok=True)
- with mock.patch('subprocess.run', new=subprocess_run_mock), mock.patch(
- 'node_cli.core.node.update_op'
- ), mock.patch('node_cli.core.node.get_flask_secret_key'), mock.patch(
- 'node_cli.core.node.save_env_params'
- ), mock.patch('node_cli.operations.base.configure_nftables'), mock.patch(
- 'node_cli.core.host.prepare_host'
- ), mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True), mock.patch(
- 'node_cli.utils.helper.post_request', resp_mock
- ), mock.patch('node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE), mock.patch(
- 'node_cli.core.host.init_data_dir'
- ), mock.patch(
- 'node_cli.core.node.get_meta_info',
- return_value=CliMeta(
- version='2.6.0', config_stream='3.0.2'
- )
+ with (
+ mock.patch('subprocess.run', new=subprocess_run_mock),
+ mock.patch('node_cli.core.node.update_op'),
+ mock.patch('node_cli.core.node.get_flask_secret_key'),
+ mock.patch('node_cli.core.node.save_env_params'),
+ mock.patch('node_cli.operations.base.configure_nftables'),
+ mock.patch('node_cli.core.host.prepare_host'),
+ mock.patch('node_cli.core.node.is_base_containers_alive', return_value=True),
+ mock.patch('node_cli.utils.helper.post_request', resp_mock),
+ mock.patch('node_cli.core.resources.get_disk_size', return_value=BIG_DISK_SIZE),
+ mock.patch('node_cli.core.host.init_data_dir'),
+ mock.patch(
+ 'node_cli.core.node.get_meta_info',
+ return_value=CliMeta(version='2.6.0', config_stream='3.0.2'),
+ ),
):
- with mock.patch( 'node_cli.utils.helper.requests.get', return_value=safe_update_api_response()): # noqa
+ with mock.patch(
+ 'node_cli.utils.helper.requests.get', return_value=safe_update_api_response()
+ ):
result = update(env_filepath, pull_config_for_schain=None)
assert result is None
@@ -207,10 +210,3 @@ def test_is_update_safe():
'node_cli.utils.helper.requests.get', return_value=safe_update_api_response(safe=False)
):
assert not is_update_safe()
-
-
-def test_repair_sync(tmp_sync_datadir, mocked_g_config, resource_file):
- with mock.patch('node_cli.core.schains.rm_btrfs_subvolume'), \
- mock.patch('node_cli.utils.docker_utils.stop_container'), \
- mock.patch('node_cli.utils.docker_utils.start_container'):
- repair_sync(archive=True, historic_state=True, snapshot_from='127.0.0.1')
diff --git a/text.yml b/text.yml
index cc5c5ec3..80461c76 100644
--- a/text.yml
+++ b/text.yml
@@ -63,8 +63,10 @@ exit:
sync_node:
init:
help: Initialize sync SKALE node
- archive: Run sync node in an archive node (disable block rotation)
- historic_state: Enable historic state (works only in pair with --archive flag)
+ indexer: Run sync node in indexer mode (disable block rotation)
+ archive: Enable historic state and disable block rotation
+ snapshot_from: IP of the node to take a snapshot from
+ snapshot: Start sync node from a snapshot
lvmpy:
help: Lvmpy commands