13 changes: 10 additions & 3 deletions tftrt/examples/image-classification/README.md
@@ -11,8 +11,11 @@ See https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html for mor
When using TF-TRT, you can also control the precision with `--precision`.
float32 is the default (`--precision fp32`) with float16 (`--precision fp16`) or
int8 (`--precision int8`) allowing further performance improvements.
int8 mode requires a calibration step which is done
automatically.

int8 mode requires a calibration step which is done automatically, but you will
also have to specify the directory in which the calibration dataset is stored
with `--calib_data_dir /imagenet_validation_data`. You can use the same data for
both calibration and validation.
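
As a sketch (the flag combination below is illustrative, built only from options documented in this README; the data path is a placeholder), an int8 run that reuses the validation data for calibration might look like:

```
# Sketch: int8 TF-TRT run, calibrating on the same ImageNet validation data
python image_classification.py --data_dir /imagenet_validation_data \
    --calib_data_dir /imagenet_validation_data \
    --model vgg_16 --use_trt --precision int8
```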

## Models

@@ -34,6 +37,10 @@ ImageNet validation dataset, see
[Verified Models](https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html#verified-models)

## Setup
If you are running these examples within the [NVIDIA TensorFlow docker
container](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow), you can
skip these steps by running `./install_dependencies.sh`.
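
A minimal sketch for starting that container (the image tag is a placeholder; check the NGC catalog for current tags, and older Docker setups may need `nvidia-docker run` instead of `--gpus all`):

```
# Sketch: pull and launch the NGC TensorFlow container (tag is a placeholder)
docker pull nvcr.io/nvidia/tensorflow:<xx.xx>-py3
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:<xx.xx>-py3
```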

```
# Clone tensorflow/models (https://github.com/tensorflow/models)
git clone https://github.com/tensorflow/models.git
@@ -78,7 +85,7 @@ for more information.

## Usage

`python inference.py --data_dir /imagenet_validation_data --model vgg_16 [--use_trt]`
`python image_classification.py --data_dir /imagenet_validation_data --model vgg_16 [--use_trt]`

Run with `--help` to see all available options.
