diff --git a/docs/docs/benchmarks/inference-time.md b/docs/docs/benchmarks/inference-time.md index c1f91a3b7b..45c408a8e5 100644 --- a/docs/docs/benchmarks/inference-time.md +++ b/docs/docs/benchmarks/inference-time.md @@ -28,6 +28,28 @@ Times presented in the tables are measured as consecutive runs of the model. Ini | STYLE_TRANSFER_UDNIE | 450 | 600 | 750 | 1650 | 1800 | | STYLE_TRANSFER_RAIN_PRINCESS | 450 | 600 | 750 | 1650 | 1800 | +## OCR + +| Model | iPhone 16 Pro (XNNPACK) [ms] | iPhone 14 Pro Max (XNNPACK) [ms] | iPhone SE 3 (XNNPACK) [ms] | Samsung Galaxy S24 (XNNPACK) [ms] | Samsung Galaxy S21 (XNNPACK) [ms] | +| ----------- | ---------------------------- | -------------------------------- | -------------------------- | --------------------------------- | --------------------------------- | +| CRAFT_800 | 2099 | 2227 | ❌ | 2245 | 7108 | +| CRNN_EN_512 | 70 | 252 | ❌ | 54 | 151 | +| CRNN_EN_256 | 39 | 123 | ❌ | 24 | 78 | +| CRNN_EN_128 | 17 | 83 | ❌ | 14 | 39 | + +❌ - Insufficient RAM. + +## Vertical OCR + +| Model | iPhone 16 Pro (XNNPACK) [ms] | iPhone 14 Pro Max (XNNPACK) [ms] | iPhone SE 3 (XNNPACK) [ms] | Samsung Galaxy S24 (XNNPACK) [ms] | Samsung Galaxy S21 (XNNPACK) [ms] | +| ----------- | ---------------------------- | -------------------------------- | -------------------------- | --------------------------------- | --------------------------------- | +| CRAFT_1280 | 5457 | 5833 | ❌ | 6296 | 14053 | +| CRAFT_320 | 1351 | 1460 | ❌ | 1485 | 3101 | +| CRNN_EN_512 | 39 | 123 | ❌ | 24 | 78 | +| CRNN_EN_64 | 10 | 33 | ❌ | 7 | 18 | + +❌ - Insufficient RAM. + ## LLMs | Model | iPhone 16 Pro (XNNPACK) [tokens/s] | iPhone 13 Pro (XNNPACK) [tokens/s] | iPhone SE 3 (XNNPACK) [tokens/s] | Samsung Galaxy S24 (XNNPACK) [tokens/s] | OnePlus 12 (XNNPACK) [tokens/s] | diff --git a/docs/docs/benchmarks/memory-usage.md b/docs/docs/benchmarks/memory-usage.md index 868a0884b6..2f535ad48b 100644 --- a/docs/docs/benchmarks/memory-usage.md +++ b/docs/docs/benchmarks/memory-usage.md @@ -24,6 +24,19 @@ sidebar_position: 2 | STYLE_TRANSFER_UDNIE | 950 | 350 | | STYLE_TRANSFER_RAIN_PRINCESS | 950 | 350 | +## OCR + +| Model | Android (XNNPACK) [MB] | iOS (XNNPACK) [MB] | +| --------------------------------------------------- | ---------------------- | ------------------ | +| CRAFT_800 + CRNN_EN_512 + CRNN_EN_256 + CRNN_EN_128 | 2100 | 1782 | + +## Vertical OCR + +| Model | Android (XNNPACK) [MB] | iOS (XNNPACK) [MB] | +| ------------------------------------ | ---------------------- | ------------------ | +| CRAFT_1280 + CRAFT_320 + CRNN_EN_512 | 2770 | 3720 | +| CRAFT_1280 + CRAFT_320 + CRNN_EN_64 | 1770 | 2740 | + ## LLMs | Model | Android (XNNPACK) [GB] | iOS (XNNPACK) [GB] | diff --git a/docs/docs/benchmarks/model-size.md b/docs/docs/benchmarks/model-size.md index a80f59d47f..59f1d9bda0 100644 --- a/docs/docs/benchmarks/model-size.md +++ b/docs/docs/benchmarks/model-size.md @@ -24,6 +24,24 @@ sidebar_position: 1 | STYLE_TRANSFER_UDNIE | 6.78 | 5.22 | | STYLE_TRANSFER_RAIN_PRINCESS | 6.78 | 5.22 | +## OCR + +| Model | XNNPACK [MB] | +| ----------- | ------------ | +| CRAFT_800 | 83.1 | +| CRNN_EN_512 | 547 | +| CRNN_EN_256 | 277 | +| CRNN_EN_128 | 142 | + +## Vertical OCR + +| Model | XNNPACK [MB] | +| ----------- | ------------ | +| CRAFT_1280 | 83.1 | +| CRAFT_320 | 83.1 | +| CRNN_EN_512 | 277 | +| CRNN_EN_64 | 74.3 | + ## LLMs | Model | XNNPACK [GB] | diff --git a/docs/docs/computer-vision/useOCR.md b/docs/docs/computer-vision/useOCR.md new file mode 100644 index 0000000000..e2431f49a8 --- 
/dev/null +++ b/docs/docs/computer-vision/useOCR.md @@ -0,0 +1,193 @@ +--- +title: useOCR +sidebar_position: 4 +--- + +Optical character recognition (OCR) is a computer vision technique that detects and recognizes text within an image. It's commonly used to convert different types of documents, such as scanned paper documents, PDF files, or images captured by a digital camera, into editable and searchable data. + +:::caution +It is recommended to use models provided by us, which are available at our [Hugging Face repository](https://huggingface.co/software-mansion). You can also use [constants](https://github.com/software-mansion/react-native-executorch/blob/765305abc289083787eb9613b899d6fcc0e24126/src/constants/modelUrls.ts#L51) shipped with our library. +::: + +## Reference + +```jsx +import { + useOCR, + CRAFT_800, + RECOGNIZER_EN_CRNN_512, + RECOGNIZER_EN_CRNN_256, + RECOGNIZER_EN_CRNN_128 +} from 'react-native-executorch'; + +function App() { + const model = useOCR({ + detectorSource: CRAFT_800, + recognizerSources: { + recognizerLarge: RECOGNIZER_EN_CRNN_512, + recognizerMedium: RECOGNIZER_EN_CRNN_256, + recognizerSmall: RECOGNIZER_EN_CRNN_128 + }, + language: "en", + }); + + ... + for (const ocrDetection of await model.forward("https://url-to-image.jpg")) { + console.log("Bounding box: ", ocrDetection.bbox); + console.log("Bounding label: ", ocrDetection.text); + console.log("Bounding score: ", ocrDetection.score); + } + ... +} +``` + +
+Type definitions + +```typescript +interface RecognizerSources { + recognizerLarge: string | number; + recognizerMedium: string | number; + recognizerSmall: string | number; +} + +type OCRLanguage = 'en'; + +interface Point { + x: number; + y: number; +} + +interface OCRDetection { + bbox: Point[]; + text: string; + score: number; +} +``` + +
+ +### Arguments + +**`detectorSource`** - A string that specifies the location of the detector binary. For more information, take a look at the [loading models](../fundamentals/loading-models.md) section. + +**`recognizerSources`** - An object that specifies the locations of the recognizers' binary files. The recognizer consists of three models, each tailored to process images of a different width. + +- `recognizerLarge` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 512 pixels. +- `recognizerMedium` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 256 pixels. +- `recognizerSmall` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 128 pixels. + +For more information, take a look at the [loading models](../fundamentals/loading-models.md) section. + +**`language`** - A parameter that specifies the language of the text to be recognized by the OCR. + +### Returns + +The hook returns an object with the following properties: + +| Field | Type | Description | +| ------------------ | --------------------------------------------- | -------------------------------------------------------------------------------------------- | +| `forward` | `(input: string) => Promise<OCRDetection[]>` | A function that accepts an image (url, b64) and returns an array of `OCRDetection` objects. | +| `error` | `string \| null` | Contains the error message if the model loading failed. | +| `isGenerating` | `boolean` | Indicates whether the model is currently processing an inference. | +| `isReady` | `boolean` | Indicates whether the model has successfully loaded and is ready for inference. | +| `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1. | + +## Running the model + +To run the model, you can use the `forward` method. It accepts one argument, which is the image. The image can be a remote URL, a local file URI, or a base64-encoded image. The function returns an array of `OCRDetection` objects. Each object contains the coordinates of the bounding box, the text recognized within the box, and the confidence score. For more information, please refer to the reference or type definitions. + +## Detection object + +The detection object is specified as follows: + +```typescript +interface Point { + x: number; + y: number; +} + +interface OCRDetection { + bbox: Point[]; + text: string; + score: number; +} +``` + +The `bbox` property contains information about the bounding box of detected text regions. It is represented as four points, which are the corners of the detected bounding box. +The `text` property contains the text recognized within the detected text region. The `score` represents the confidence score of the recognized text.
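+ +If you need an axis-aligned rectangle rather than the four corner points (for example, to position an overlay view), you can derive one from `bbox`. The helper below is a minimal sketch based on the `OCRDetection` shape above; the function name and the returned fields are illustrative, not part of the library API. + +```typescript +// Convert the four corner points of a detection into a rectangle +// described by its top-left corner, width and height. +function toRect(detection: { bbox: { x: number; y: number }[] }) { + const xs = detection.bbox.map((point) => point.x); + const ys = detection.bbox.map((point) => point.y); + const left = Math.min(...xs); + const top = Math.min(...ys); + return { left, top, width: Math.max(...xs) - left, height: Math.max(...ys) - top }; +} +```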
+ +## Example + +```tsx +import { + useOCR, + CRAFT_800, + RECOGNIZER_EN_CRNN_512, + RECOGNIZER_EN_CRNN_256, + RECOGNIZER_EN_CRNN_128, +} from 'react-native-executorch'; + +function App() { + const model = useOCR({ + detectorSource: CRAFT_800, + recognizerSources: { + recognizerLarge: RECOGNIZER_EN_CRNN_512, + recognizerMedium: RECOGNIZER_EN_CRNN_256, + recognizerSmall: RECOGNIZER_EN_CRNN_128, + }, + language: 'en', + }); + + const runModel = async () => { + const ocrDetections = await model.forward('https://url-to-image.jpg'); + + for (const ocrDetection of ocrDetections) { + console.log('Bounding box: ', ocrDetection.bbox); + console.log('Bounding text: ', ocrDetection.text); + console.log('Bounding score: ', ocrDetection.score); + } + }; +} +``` + +## Supported models + +| Model | Type | +| ------------------------------------------------------ | ---------- | +| [CRAFT_800](https://github.com/clovaai/CRAFT-pytorch) | Detector | +| [CRNN_EN_512](https://www.jaided.ai/easyocr/modelhub/) | Recognizer | +| [CRNN_EN_256](https://www.jaided.ai/easyocr/modelhub/) | Recognizer | +| [CRNN_EN_128](https://www.jaided.ai/easyocr/modelhub/) | Recognizer | + +## Benchmarks + +### Model size + +| Model | XNNPACK [MB] | +| ----------- | ------------ | +| CRAFT_800 | 83.1 | +| CRNN_EN_512 | 547 | +| CRNN_EN_256 | 277 | +| CRNN_EN_128 | 142 | + +### Memory usage + +| Model | Android (XNNPACK) [MB] | iOS (XNNPACK) [MB] | +| --------------------------------------------------- | ---------------------- | ------------------ | +| CRAFT_800 + CRNN_EN_512 + CRNN_EN_256 + CRNN_EN_128 | 2100 | 1782 | + +### Inference time + +:::warning warning +Times presented in the tables are measured as consecutive runs of the model. Initial run times may be up to 2x longer due to model loading and initialization. +::: + +| Model | iPhone 16 Pro (XNNPACK) [ms] | iPhone 14 Pro Max (XNNPACK) [ms] | iPhone SE 3 (XNNPACK) [ms] | Samsung Galaxy S24 (XNNPACK) [ms] | Samsung Galaxy S21 (XNNPACK) [ms] | +| ----------- | ---------------------------- | -------------------------------- | -------------------------- | --------------------------------- | --------------------------------- | +| CRAFT_800 | 2099 | 2227 | ❌ | 2245 | 7108 | +| CRNN_EN_512 | 70 | 252 | ❌ | 54 | 151 | +| CRNN_EN_256 | 39 | 123 | ❌ | 24 | 78 | +| CRNN_EN_128 | 17 | 83 | ❌ | 14 | 39 | + +❌ - Insufficient RAM. diff --git a/docs/docs/computer-vision/useVerticalOCR.md b/docs/docs/computer-vision/useVerticalOCR.md new file mode 100644 index 0000000000..8fb82d507c --- /dev/null +++ b/docs/docs/computer-vision/useVerticalOCR.md @@ -0,0 +1,214 @@ +--- +title: useVerticalOCR +sidebar_position: 5 +--- + +:::danger Experimental +The `useVerticalOCR` hook is currently in an experimental phase. We appreciate feedback from users as we continue to refine and enhance its functionality. +::: + +Optical Character Recognition (OCR) is a computer vision technique used to detect and recognize text within images. It is commonly utilized to convert a variety of documents, such as scanned paper documents, PDF files, or images captured by a digital camera, into editable and searchable data. Traditionally, OCR technology has been optimized for recognizing horizontal text, and integrating support for vertical text recognition often requires significant additional effort from developers. To simplify this, we introduce `useVerticalOCR`, a tool designed to abstract the complexities of vertical text OCR, enabling seamless integration into your applications. 
+ +:::caution +It is recommended to use models provided by us, which are available at our [Hugging Face repository](https://huggingface.co/software-mansion). You can also use [constants](https://github.com/software-mansion/react-native-executorch/blob/765305abc289083787eb9613b899d6fcc0e24126/src/constants/modelUrls.ts#L51) shipped with our library. +::: + +## Reference + +```jsx +import { + DETECTOR_CRAFT_1280, + DETECTOR_CRAFT_320, + RECOGNIZER_EN_CRNN_512, + RECOGNIZER_EN_CRNN_64, + useVerticalOCR, +} from 'react-native-executorch'; + +function App() { + const model = useVerticalOCR({ + detectorSources: { + detectorLarge: DETECTOR_CRAFT_1280, + detectorNarrow: DETECTOR_CRAFT_320, + }, + recognizerSources: { + recognizerLarge: RECOGNIZER_EN_CRNN_512, + recognizerSmall: RECOGNIZER_EN_CRNN_64, + }, + language: 'en', + independentCharacters: true, + }); + + ... + for (const ocrDetection of await model.forward("https://url-to-image.jpg")) { + console.log("Bounding box: ", ocrDetection.bbox); + console.log("Bounding label: ", ocrDetection.text); + console.log("Bounding score: ", ocrDetection.score); + } + ... +} +``` + +
+Type definitions + +```typescript +interface DetectorSources { + detectorLarge: string | number; + detectorNarrow: string | number; +} + +interface RecognizerSources { + recognizerLarge: string | number; + recognizerSmall: string | number; +} + +type OCRLanguage = 'en'; + +interface Point { + x: number; + y: number; +} + +interface OCRDetection { + bbox: Point[]; + text: string; + score: number; +} +``` + +
+ +### Arguments + +**`detectorSources`** - An object that specifies the locations of the detectors' binary files. The detector consists of two models, each tailored to process images of a different width. + +- `detectorLarge` - A string that specifies the location of the detector binary file which accepts input images with a width of 1280 pixels. +- `detectorNarrow` - A string that specifies the location of the detector binary file which accepts input images with a width of 320 pixels. + +For more information, take a look at the [loading models](../fundamentals/loading-models.md) section. + +**`recognizerSources`** - An object that specifies the locations of the recognizers' binary files. The recognizer consists of two models, each tailored to process images of a different width. + +- `recognizerLarge` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 512 pixels. +- `recognizerSmall` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 64 pixels. + +For more information, take a look at the [loading models](../fundamentals/loading-models.md) section. + +**`language`** - A parameter that specifies the language of the text to be recognized by the OCR. + +**`independentCharacters`** - A boolean parameter that indicates whether the text in the image consists of a random sequence of characters. If set to true, the algorithm will scan each character individually instead of reading them as continuous text. + +### Returns + +The hook returns an object with the following properties: + +| Field | Type | Description | +| ------------------ | --------------------------------------------- | -------------------------------------------------------------------------------------------- | +| `forward` | `(input: string) => Promise<OCRDetection[]>` | A function that accepts an image (url, b64) and returns an array of `OCRDetection` objects. | +| `error` | `string \| null` | Contains the error message if the model loading failed. | +| `isGenerating` | `boolean` | Indicates whether the model is currently processing an inference. | +| `isReady` | `boolean` | Indicates whether the model has successfully loaded and is ready for inference. | +| `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1. | + +## Running the model + +To run the model, you can use the `forward` method. It accepts one argument, which is the image. The image can be a remote URL, a local file URI, or a base64-encoded image. The function returns an array of `OCRDetection` objects. Each object contains the coordinates of the bounding box, the text recognized within the box, and the confidence score. For more information, please refer to the reference or type definitions. + +## Detection object + +The detection object is specified as follows: + +```typescript +interface Point { + x: number; + y: number; +} + +interface OCRDetection { + bbox: Point[]; + text: string; + score: number; +} +``` + +The `bbox` property contains information about the bounding box of detected text regions. It is represented as four points, which are the corners of the detected bounding box. +The `text` property contains the text recognized within the detected text region. The `score` represents the confidence score of the recognized text.
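+ +Because the hook returns one `OCRDetection` per recognized text region, you may want to reassemble the results into a single string. The sketch below orders detections from top to bottom by the smallest `y` coordinate of their bounding boxes and concatenates the text; the ordering and joining strategy are assumptions for illustration, not library behavior. + +```typescript +// Join recognized fragments top-to-bottom into one string. +function joinDetections(detections: { bbox: { x: number; y: number }[]; text: string }[]): string { + const topOf = (bbox: { x: number; y: number }[]) => Math.min(...bbox.map((p) => p.y)); + return [...detections].sort((a, b) => topOf(a.bbox) - topOf(b.bbox)).map((d) => d.text).join(''); +} +```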
+ +## Example + +```tsx +import { + DETECTOR_CRAFT_1280, + DETECTOR_CRAFT_320, + RECOGNIZER_EN_CRNN_512, + RECOGNIZER_EN_CRNN_64, + useVerticalOCR, +} from 'react-native-executorch'; + +function App() { + const model = useVerticalOCR({ + detectorSources: { + detectorLarge: DETECTOR_CRAFT_1280, + detectorNarrow: DETECTOR_CRAFT_320, + }, + recognizerSources: { + recognizerLarge: RECOGNIZER_EN_CRNN_512, + recognizerSmall: RECOGNIZER_EN_CRNN_64, + }, + language: 'en', + independentCharacters: true, + }); + + const runModel = async () => { + const ocrDetections = await model.forward('https://url-to-image.jpg'); + + for (const ocrDetection of ocrDetections) { + console.log('Bounding box: ', ocrDetection.bbox); + console.log('Bounding text: ', ocrDetection.text); + console.log('Bounding score: ', ocrDetection.score); + } + }; +} +``` + +## Supported models + +| Model | Type | +| -------------------------------------------------------- | ---------- | +| [CRAFT_1280](https://github.com/clovaai/CRAFT-pytorch) | Detector | +| [CRAFT_NARROW](https://github.com/clovaai/CRAFT-pytorch) | Detector | +| [CRNN_EN_512](https://www.jaided.ai/easyocr/modelhub/) | Recognizer | +| [CRNN_EN_64](https://www.jaided.ai/easyocr/modelhub/) | Recognizer | + +## Benchmarks + +### Model size + +| Model | XNNPACK [MB] | +| ----------- | ------------ | +| CRAFT_1280 | 83.1 | +| CRAFT_320 | 83.1 | +| CRNN_EN_512 | 277 | +| CRNN_EN_64 | 74.3 | + +### Memory usage + +| Model | Android (XNNPACK) [MB] | iOS (XNNPACK) [MB] | +| ------------------------------------ | ---------------------- | ------------------ | +| CRAFT_1280 + CRAFT_320 + CRNN_EN_512 | 2770 | 3720 | +| CRAFT_1280 + CRAFT_320 + CRNN_EN_64 | 1770 | 2740 | + +### Inference time + +:::warning warning +Times presented in the tables are measured as consecutive runs of the model. Initial run times may be up to 2x longer due to model loading and initialization. +::: + +| Model | iPhone 16 Pro (XNNPACK) [ms] | iPhone 14 Pro Max (XNNPACK) [ms] | iPhone SE 3 (XNNPACK) [ms] | Samsung Galaxy S24 (XNNPACK) [ms] | Samsung Galaxy S21 (XNNPACK) [ms] | +| ----------- | ---------------------------- | -------------------------------- | -------------------------- | --------------------------------- | --------------------------------- | +| CRAFT_1280 | 5457 | 5833 | ❌ | 6296 | 14053 | +| CRAFT_320 | 1351 | 1460 | ❌ | 1485 | 3101 | +| CRNN_EN_512 | 39 | 123 | ❌ | 24 | 78 | +| CRNN_EN_64 | 10 | 33 | ❌ | 7 | 18 | + +❌ - Insufficient RAM. diff --git a/docs/docs/hookless-api/ClassificationModule.md b/docs/docs/hookless-api/ClassificationModule.md index 732971db27..2e62cbd4ab 100644 --- a/docs/docs/hookless-api/ClassificationModule.md +++ b/docs/docs/hookless-api/ClassificationModule.md @@ -3,7 +3,7 @@ title: ClassificationModule sidebar_position: 1 --- -Hookless implementation of the [useClassification](../computer-vision/useClassification.mdx) hook. +Hookless implementation of the [useClassification](../computer-vision/useClassification.md) hook. ## Reference diff --git a/docs/docs/hookless-api/OCRModule.md b/docs/docs/hookless-api/OCRModule.md new file mode 100644 index 0000000000..493371196f --- /dev/null +++ b/docs/docs/hookless-api/OCRModule.md @@ -0,0 +1,93 @@ +--- +title: OCRModule +sidebar_position: 6 +--- + +Hookless implementation of the [useOCR](../computer-vision/useOCR.md) hook. 
+ +## Reference + +```typescript +import { + OCRModule, + CRAFT_800, + RECOGNIZER_EN_CRNN_512, + RECOGNIZER_EN_CRNN_256, + RECOGNIZER_EN_CRNN_128, +} from 'react-native-executorch'; + +const imageUri = 'path/to/image.png'; + +// Loading the model +await OCRModule.load({ + detectorSource: CRAFT_800, + recognizerSources: { + recognizerLarge: RECOGNIZER_EN_CRNN_512, + recognizerMedium: RECOGNIZER_EN_CRNN_256, + recognizerSmall: RECOGNIZER_EN_CRNN_128, + }, + language: 'en', +}); + +// Running the model +const ocrDetections = await OCRModule.forward(imageUri); +``` + +### Methods + +| Method | Type | Description | +| -------------------- | ------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------- | +| `load` | `(detectorSource: string, recognizerSources: RecognizerSources, language: OCRLanguage): Promise<void>` | Loads the detector and recognizers, whose sources are represented by `RecognizerSources`. | +| `forward` | `(input: string): Promise<OCRDetection[]>` | Executes the model's forward pass, where `input` can be a fetchable resource or a Base64-encoded string. | +| `onDownloadProgress` | `(callback: (downloadProgress: number) => void): any` | Subscribe to the download progress event. | + +
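+ +To surface the download progress in your UI, you can register a callback before calling `load`. The sketch below uses the `onDownloadProgress` method from the table above; the logging is illustrative. + +```typescript +import { OCRModule } from 'react-native-executorch'; + +// Log progress updates while the model files are being downloaded. +OCRModule.onDownloadProgress((downloadProgress: number) => { + console.log(`OCR model download: ${Math.round(downloadProgress * 100)}%`); +}); +```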
+Type definitions + +```typescript +interface RecognizerSources { + recognizerLarge: string | number; + recognizerMedium: string | number; + recognizerSmall: string | number; +} + +type OCRLanguage = 'en'; + +interface Point { + x: number; + y: number; +} + +interface OCRDetection { + bbox: Point[]; + text: string; + score: number; +} +``` + +
+ +## Loading the model + +To load the model, use the `load` method. It accepts: + +**`detectorSource`** - A string that specifies the location of the detector binary. For more information, take a look at the [loading models](../fundamentals/loading-models.md) section. + +**`recognizerSources`** - An object that specifies the locations of the recognizers' binary files. The recognizer consists of three models, each tailored to process images of a different width. + +- `recognizerLarge` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 512 pixels. +- `recognizerMedium` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 256 pixels. +- `recognizerSmall` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 128 pixels. + +For more information, take a look at the [loading models](../fundamentals/loading-models.md) section. + +**`language`** - A parameter that specifies the language of the text to be recognized by the OCR. + +This method returns a promise, which can resolve to an error or void. + +## Listening for download progress + +To subscribe to the download progress event, you can use the `onDownloadProgress` method. It accepts a callback function that will be called whenever the download progress changes. + +## Running the model + +To run the model, you can use the `forward` method. It accepts one argument, which is the image. The image can be a remote URL, a local file URI, or a base64-encoded image. The method returns a promise, which can resolve either to an error or an array of `OCRDetection` objects. Each object contains the coordinates of the bounding box, the text recognized within the box, and the confidence score. diff --git a/docs/docs/hookless-api/ObjectDetectionModule.md b/docs/docs/hookless-api/ObjectDetectionModule.md index 2cc3504ef4..6c730b7fe0 100644 --- a/docs/docs/hookless-api/ObjectDetectionModule.md +++ b/docs/docs/hookless-api/ObjectDetectionModule.md @@ -3,7 +3,7 @@ title: ObjectDetectionModule sidebar_position: 5 --- -Hookless implementation of the [useObjectDetection](../computer-vision/useObjectDetection.mdx) hook. +Hookless implementation of the [useObjectDetection](../computer-vision/useObjectDetection.md) hook. ## Reference diff --git a/docs/docs/hookless-api/StyleTransferModule.md b/docs/docs/hookless-api/StyleTransferModule.md index f084d8cad5..29c750bee3 100644 --- a/docs/docs/hookless-api/StyleTransferModule.md +++ b/docs/docs/hookless-api/StyleTransferModule.md @@ -3,7 +3,7 @@ title: StyleTransferModule sidebar_position: 4 --- -Hookless implementation of the [useStyleTransfer](../computer-vision/useStyleTransfer.mdx) hook. +Hookless implementation of the [useStyleTransfer](../computer-vision/useStyleTransfer.md) hook. ## Reference diff --git a/docs/docs/hookless-api/VerticalOCRModule.md b/docs/docs/hookless-api/VerticalOCRModule.md new file mode 100644 index 0000000000..d876b82778 --- /dev/null +++ b/docs/docs/hookless-api/VerticalOCRModule.md @@ -0,0 +1,107 @@ +--- +title: VerticalOCRModule +sidebar_position: 7 +--- + +Hookless implementation of the [useVerticalOCR](../computer-vision/useVerticalOCR.md) hook.
+ +## Reference + +```typescript +import { + DETECTOR_CRAFT_1280, + DETECTOR_CRAFT_320, + RECOGNIZER_EN_CRNN_512, + RECOGNIZER_EN_CRNN_64, + VerticalOCRModule, +} from 'react-native-executorch'; + +const imageUri = 'path/to/image.png'; + +// Loading the model +await VerticalOCRModule.load({ + detectorSources: { + detectorLarge: DETECTOR_CRAFT_1280, + detectorNarrow: DETECTOR_CRAFT_320, + }, + recognizerSources: { + recognizerLarge: RECOGNIZER_EN_CRNN_512, + recognizerSmall: RECOGNIZER_EN_CRNN_64, + }, + language: 'en', + independentCharacters: true, +}); + +// Running the model +const ocrDetections = await VerticalOCRModule.forward(imageUri); +``` + +### Methods + +| Method | Type | Description | +| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | +| `load` | `(detectorSources: DetectorSources, recognizerSources: RecognizerSources, language: OCRLanguage, independentCharacters: boolean): Promise<void>` | Loads the detectors and recognizers, whose sources are represented by `DetectorSources` and `RecognizerSources`. | +| `forward` | `(input: string): Promise<OCRDetection[]>` | Executes the model's forward pass, where `input` can be a fetchable resource or a Base64-encoded string. | +| `onDownloadProgress` | `(callback: (downloadProgress: number) => void): any` | Subscribe to the download progress event. | + +
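+ +The detections returned by `forward` are plain objects, so they can be post-processed like any array. Continuing from the reference snippet above, the sketch below keeps only high-confidence results; the `0.5` threshold is an arbitrary example and assumes scores are normalized to the 0-1 range. + +```typescript +// Keep only detections the recognizer is reasonably confident about. +const detections = await VerticalOCRModule.forward(imageUri); +const confidentDetections = detections.filter((detection) => detection.score > 0.5); +console.log(`Kept ${confidentDetections.length} of ${detections.length} detections`); +```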
+Type definitions + +```typescript +interface DetectorSources { + detectorLarge: string | number; + detectorNarrow: string | number; +} + +interface RecognizerSources { + recognizerLarge: string | number; + recognizerSmall: string | number; +} + +type OCRLanguage = 'en'; + +interface Point { + x: number; + y: number; +} + +interface OCRDetection { + bbox: Point[]; + text: string; + score: number; +} +``` + +
+ +## Loading the model + +To load the model, use the `load` method. It accepts: + +**`detectorSources`** - An object that specifies the locations of the detectors' binary files. The detector consists of two models, each tailored to process images of a different width. + +- `detectorLarge` - A string that specifies the location of the detector binary file which accepts input images with a width of 1280 pixels. +- `detectorNarrow` - A string that specifies the location of the detector binary file which accepts input images with a width of 320 pixels. + +For more information, take a look at the [loading models](../fundamentals/loading-models.md) section. + +**`recognizerSources`** - An object that specifies the locations of the recognizers' binary files. The recognizer consists of two models, each tailored to process images of a different width. + +- `recognizerLarge` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 512 pixels. +- `recognizerSmall` - A string that specifies the location of the recognizer binary file which accepts input images with a width of 64 pixels. + +For more information, take a look at the [loading models](../fundamentals/loading-models.md) section. + +**`language`** - A parameter that specifies the language of the text to be recognized by the OCR. + +**`independentCharacters`** - A boolean parameter that indicates whether the text in the image consists of a random sequence of characters. If set to true, the algorithm will scan each character individually instead of reading them as continuous text. + +This method returns a promise, which can resolve to an error or void. + +## Listening for download progress + +To subscribe to the download progress event, you can use the `onDownloadProgress` method. It accepts a callback function that will be called whenever the download progress changes. + +## Running the model + +To run the model, you can use the `forward` method. It accepts one argument, which is the image. The image can be a remote URL, a local file URI, or a base64-encoded image. The method returns a promise, which can resolve either to an error or an array of `OCRDetection` objects. Each object contains the coordinates of the bounding box, the text recognized within the box, and the confidence score. diff --git a/docs/docs/module-api/executorch-bindings.md b/docs/docs/module-api/executorch-bindings.md index 282beaf533..e2e48ab63f 100644 --- a/docs/docs/module-api/executorch-bindings.md +++ b/docs/docs/module-api/executorch-bindings.md @@ -61,7 +61,7 @@ To run model with ExecuTorch Bindings it's essential to specify the shape of the This example demonstrates the integration and usage of the ExecuTorch bindings with a [style transfer model](../computer-vision/useStyleTransfer.md). Specifically, we'll be using the `STYLE_TRANSFER_CANDY` model, which applies artistic style transfer to an input image. -## Importing the Module and loading the model +### Importing the Module and loading the model First, import the necessary functions from the `react-native-executorch` package and initialize the ExecuTorch module with the specified style transfer model. @@ -77,7 +77,7 @@ const executorchModule = useExecutorchModule({ }); ``` -## Setting up input parameters +### Setting up input parameters To prepare the input for the model, define the shape of the input tensor. This shape depends on the model's requirements.
For the `STYLE_TRANSFER_CANDY` model, we need a tensor of shape `[1, 3, 640, 640]`, corresponding to a batch size of 1, 3 color channels (RGB), and dimensions of 640x640 pixels. @@ -88,7 +88,7 @@ const shape = [1, 3, 640, 640]; const input = new Float32Array(1 * 3 * 640 * 640); // fill this array with your image data ``` -## Performing inference +### Performing inference ```typescript try {