The Text Styler API is a service that enhances text using simulated large language model (LLM) integrations. It exposes both synchronous and asynchronous endpoints, so text can be improved either immediately or as a background job, and it uses caching and background processing to keep performance efficient and scalable.
- /improve: Synchronous endpoint that receives a text query parameter and returns the improved text.
- /improve-async: Asynchronous endpoint that processes text improvement jobs in the background.
- /job-status/{job_id}: Endpoint to check the status of a submitted asynchronous job.
git clone https://github.com/petrunov/textstyler-py
cd textstyler-py
Create a virtual environment:
python -m venv .venv
On macOS/Linux, activate it:
source .venv/bin/activate
On Windows, activate it:
.venv\Scripts\activate
Install the required dependencies:
pip install -r requirements.txt
Run a cURL request to improve text synchronously:
curl "http://127.0.0.1:8000/improve?text=I+has+a+apple"
This will return a JSON response with the improved text.
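The `+` signs in the query string are URL-encoded spaces. In Python, the standard library can build such a query string for you (the host and port here simply mirror the example above):

```python
from urllib.parse import urlencode

# urlencode applies quote_plus, so spaces become '+' as in the cURL example.
params = urlencode({"text": "I has a apple"})
url = f"http://127.0.0.1:8000/improve?{params}"
print(url)  # http://127.0.0.1:8000/improve?text=I+has+a+apple
```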
Submit a job for text improvement:
curl -X POST "http://127.0.0.1:8000/improve-async" -H "Content-Type: application/json" -d '{"text": "I has a apple"}'
The response will contain a job_id.
Once you have a job_id, you can check the status of the job by visiting the following URL:
http://127.0.0.1:8000/job-status/<job_id>
Replace <job_id> with the actual job ID returned from the asynchronous request.
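Internally, an asynchronous flow like this is typically backed by a job store that tracks each job's state from queued to done. A minimal sketch of that lifecycle (all names are hypothetical; the real service may structure this differently):

```python
import asyncio
import uuid

# Hypothetical in-memory job store: job_id -> {"status": ..., "result": ...}
jobs: dict[str, dict] = {}

async def process_job(job_id: str, text: str) -> None:
    """Background worker: run the simulated LLM call and store the result."""
    jobs[job_id]["status"] = "processing"
    await asyncio.sleep(0)  # stand-in for the simulated LLM latency
    jobs[job_id] = {"status": "done", "result": text.upper()}  # placeholder 'improvement'

async def submit_and_wait(text: str) -> str:
    """Sketch of POST /improve-async: queue the job, return its id."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "queued", "result": None}
    task = asyncio.create_task(process_job(job_id, text))
    # GET /job-status/<job_id> would read jobs[job_id] at any point here,
    # returning 404 for ids not present in the store.
    await task
    return job_id

job_id = asyncio.run(submit_and_wait("I has a apple"))
print(jobs[job_id]["status"])  # done
```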
Unit tests are written using pytest-asyncio and httpx's AsyncClient for asynchronous testing. To run the tests:
- Ensure your virtual environment is activated.
- Run the tests with:
pytest
Or to run a specific test file:
pytest tests/test_main_async.py
- Endpoint Functionality:
  - Verifies that the /improve endpoint correctly processes valid text.
  - Ensures caching is used to avoid duplicate LLM calls.
  - Validates error responses (e.g., 422 for missing parameters).
- Asynchronous Job Processing:
  - Simulates asynchronous LLM calls by monkey-patching the LLM functions.
  - Ensures jobs are queued and processed correctly.
  - Validates that the job status endpoint returns accurate results.
- Error Conditions:
  - Covers scenarios such as missing query parameters or non-existent job IDs.
  - Ensures appropriate HTTP error codes are returned (422 for missing parameters, 404 for non-existent jobs).
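The project's tests use pytest-asyncio and httpx, but the monkey-patching idea they rely on can be shown with the standard library alone: a fake LLM function is swapped in so tests never wait on the (simulated) backend. The module and function names below are hypothetical stand-ins:

```python
import asyncio
from types import SimpleNamespace

# Stand-in for the application module that holds the LLM function.
app = SimpleNamespace()

async def real_llm(text: str) -> str:
    await asyncio.sleep(0)  # pretend this is slow/external
    return text + " (improved)"

app.call_llm = real_llm

async def handler(text: str) -> str:
    # The handler looks the function up on the module at call time,
    # so a test can patch app.call_llm before invoking the handler.
    return await app.call_llm(text)

async def fake_llm(text: str) -> str:
    return "FAKE:" + text

# Essentially what pytest's monkeypatch.setattr does under the hood:
app.call_llm = fake_llm
print(asyncio.run(handler("hello")))  # FAKE:hello
```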
To ensure consistent code quality, the following tools are integrated into the project:
- Black: Automatic code formatting.
- Flake8: Linting to catch common style issues.
- isort: Consistent import ordering.
- Pre-commit Hooks: Automatically run these tools before commits (configured in .pre-commit-config.yaml).
To manually run the tools:
- Black: black .
- Flake8: flake8 .
- isort: isort .
Key dependency imports (such as the LLM functions) are performed lazily inside the endpoint functions rather than at module load time. This decouples the endpoints from their external dependencies and improves testability, since tests can substitute the LLM functions before they are first imported.
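The pattern looks roughly like this (the `llm` module name and `improve` function are hypothetical): because the import happens inside the handler body, a test can place a substitute module in `sys.modules` before the endpoint is called.

```python
import sys
import types

# Register a stand-in 'llm' module, as a test might do before the
# endpoint is first invoked (the real module name is hypothetical).
fake = types.ModuleType("llm")
fake.improve = lambda text: text.title()
sys.modules["llm"] = fake

def improve_endpoint(text: str) -> str:
    # Lazy import: resolved on each call, not at module load time,
    # so whatever currently sits in sys.modules["llm"] is used.
    from llm import improve
    return improve(text)

print(improve_endpoint("hello world"))  # Hello World
```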
- Replace the in-memory cache with a persistent solution like Redis for better scalability.
- Integrate a robust task queue system (e.g., Celery) for production-grade asynchronous job processing.
- Update validators to ensure compatibility with future versions of Pydantic.
This project is licensed under the MIT License. See LICENSE for more details.