update README.md

三洋三洋
2024-04-06 11:57:50 +00:00
parent 93fc22adf5
commit 840be6b843
3 changed files with 26 additions and 5 deletions

README.md

@@ -44,13 +44,15 @@ python=3.10
 git clone https://github.com/OleehyO/TexTeller
 ```
-2. After [installing pytorch](https://pytorch.org/get-started/locally/#start-locally), install the project's dependencies:
+2. [Installing pytorch](https://pytorch.org/get-started/locally/#start-locally)
+3. Install the project's dependencies:
 ```bash
 pip install -r requirements.txt
 ```
-3. Enter the `TexTeller/src` directory and run the following command in the terminal to start inference:
+4. Enter the `TexTeller/src` directory and run the following command in the terminal to start inference:
 ```bash
 python inference.py -img "/path/to/image.{jpg,png}"
@@ -72,7 +74,10 @@ Go to the `TexTeller/src` directory and run the following command:
 Enter `http://localhost:8501` in a browser to view the web demo.
 > [!TIP]
-> You can change the default configuration of `start_web.sh`, for example, to use GPU for inference (e.g. `USE_CUDA=True`) or to increase the number of beams (e.g. `NUM_BEAM=3`) to achieve higher accuracy
+> You can change the default configuration of `start_web.sh`, for example, to use GPU for inference (e.g. `USE_CUDA=True`) or to increase the number of beams (e.g. `NUM_BEAM=3`) to achieve higher accuracy.
+> [!NOTE]
+> If you are a Windows user, please run the `start_web.bat` file instead.
 ## 📡 API Usage
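
The installation and inference steps above end in a plain CLI call. As a minimal, hypothetical illustration of driving that CLI from Python, the sketch below shells out to `inference.py` with the `-img` flag shown in the hunk; the wrapper function, paths, and output handling are assumptions, not part of the repository.

```python
# Sketch only: a thin wrapper around the CLI shown in the hunk above.
# `inference.py` and its `-img` flag come from the README; the function name,
# paths, and output handling are assumptions for illustration.
import subprocess

def run_texteller(image_path: str, src_dir: str = "TexTeller/src") -> str:
    """Run inference.py on one image and return its stdout, which should contain the predicted LaTeX."""
    result = subprocess.run(
        ["python", "inference.py", "-img", image_path],
        cwd=src_dir,            # the README runs inference from TexTeller/src
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(run_texteller("/path/to/image.png"))
```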

README (Chinese version)

@@ -44,13 +44,15 @@ python=3.10
 git clone https://github.com/OleehyO/TexTeller
 ```
-2. After [installing pytorch](https://pytorch.org/get-started/locally/#start-locally), install this project's dependencies:
+2. [Install pytorch](https://pytorch.org/get-started/locally/#start-locally)
+3. Install this project's dependencies:
 ```bash
 pip install -r requirements.txt
 ```
-3. Enter the `TexTeller/src` directory and run the following command in the terminal to run inference:
+4. Enter the `TexTeller/src` directory and run the following command in the terminal to run inference:
 ```bash
 python inference.py -img "/path/to/image.{jpg,png}"
@@ -102,6 +104,9 @@ python=3.10
 > [!TIP]
 > You can change the default configuration of `start_web.sh`, for example, use GPU for inference (e.g. `USE_CUDA=True`) or increase the number of beams (e.g. `NUM_BEAM=3`) for higher accuracy
+> [!NOTE]
+> If you are a Windows user, please run the `start_web.bat` file instead.
 ## 📡 API Usage
 We use [ray serve](https://github.com/ray-project/ray) to provide an API for TexTeller; with this API you can integrate TexTeller into your own project. To start the server, first enter the `TexTeller/src` directory and then run the following command:
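
The paragraph above introduces the ray serve API, but this excerpt does not show the server start command or its route. Purely for illustration, a client call might look like the sketch below; the address, the `/predict` path, and the multipart field name are assumptions, and the actual interface is whatever the server code in `TexTeller/src` defines.

```python
# Hypothetical client sketch for the ray serve API mentioned above.
# The URL, the /predict route, and the "img" field name are assumptions;
# consult the server code in TexTeller/src for the real interface.
import requests

SERVER_URL = "http://127.0.0.1:8000/predict"  # assumed address and route

with open("/path/to/image.png", "rb") as f:
    response = requests.post(SERVER_URL, files={"img": f})

print(response.status_code)
print(response.text)  # expected to contain the recognized LaTeX
```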

src/start_web.bat Normal file

@@ -0,0 +1,11 @@
@echo off
SETLOCAL ENABLEEXTENSIONS

REM Default configuration for the TexTeller web demo (Windows counterpart of start_web.sh)
set CHECKPOINT_DIR=default
set TOKENIZER_DIR=default
REM True or False (case-sensitive)
set USE_CUDA=False
set NUM_BEAM=1

streamlit run web.py
ENDLOCAL
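
The batch file only exports a few settings and launches streamlit; how `web.py` consumes them is not shown in this commit. A minimal sketch of reading those variables on the Python side, assuming they arrive as plain environment variables with the defaults above, could look like this:

```python
# Sketch, not repository code: reading the settings exported by
# start_web.sh / start_web.bat. The variable names mirror the batch file above;
# the parsing and defaults here are assumptions.
import os

CHECKPOINT_DIR = os.environ.get("CHECKPOINT_DIR", "default")
TOKENIZER_DIR = os.environ.get("TOKENIZER_DIR", "default")
USE_CUDA = os.environ.get("USE_CUDA", "False") == "True"  # case-sensitive, as noted above
NUM_BEAM = int(os.environ.get("NUM_BEAM", "1"))

print(f"checkpoint={CHECKPOINT_DIR} tokenizer={TOKENIZER_DIR} cuda={USE_CUDA} beams={NUM_BEAM}")
```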