From 840be6b843501c35baf4eca4c7966ddfc4f00fd0 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E4=B8=89=E6=B4=8B=E4=B8=89=E6=B4=8B?= <1258009915@qq.com>
Date: Sat, 6 Apr 2024 11:57:50 +0000
Subject: [PATCH] Update README and add start_web.bat for Windows

---
 README.md           | 11 ++++++++---
 assets/README_zh.md |  9 +++++++--
 src/start_web.bat   | 12 ++++++++++++
 3 files changed, 27 insertions(+), 5 deletions(-)
 create mode 100644 src/start_web.bat

diff --git a/README.md b/README.md
index 3884503..8a7d029 100644
--- a/README.md
+++ b/README.md
@@ -44,13 +44,15 @@ python=3.10
 git clone https://github.com/OleehyO/TexTeller
 ```
 
-2. After [installing pytorch](https://pytorch.org/get-started/locally/#start-locally), install the project's dependencies:
+2. [Install PyTorch](https://pytorch.org/get-started/locally/#start-locally)
+
+3. Install the project's dependencies:
 
 ```bash
 pip install -r requirements.txt
 ```
 
-3. Enter the `TexTeller/src` directory and run the following command in the terminal to start inference:
+4. Enter the `TexTeller/src` directory and run the following command in the terminal to start inference:
 
 ```bash
 python inference.py -img "/path/to/image.{jpg,png}"
@@ -72,7 +74,10 @@ Go to the `TexTeller/src` directory and run the following command:
 Enter `http://localhost:8501` in a browser to view the web demo.
 
 > [!TIP]
-> You can change the default configuration of `start_web.sh`, for example, to use GPU for inference (e.g. `USE_CUDA=True`) or to increase the number of beams (e.g. `NUM_BEAM=3`) to achieve higher accuracy
+> You can change the default configuration of `start_web.sh`, for example, to use GPU for inference (e.g. `USE_CUDA=True`) or to increase the number of beams (e.g. `NUM_BEAM=3`) to achieve higher accuracy.
+
+> [!NOTE]
+> If you are a Windows user, run `start_web.bat` instead.
 
 ## 📡 API Usage
 
diff --git a/assets/README_zh.md b/assets/README_zh.md
index 1b511f2..809609b 100644
--- a/assets/README_zh.md
+++ b/assets/README_zh.md
@@ -44,13 +44,15 @@ python=3.10
 git clone https://github.com/OleehyO/TexTeller
 ```
 
-2. [安装pytorch](https://pytorch.org/get-started/locally/#start-locally)后,再安装本项目的依赖包:
+2. [安装pytorch](https://pytorch.org/get-started/locally/#start-locally)
+
+3. 安装本项目的依赖包:
 
 ```bash
 pip install -r requirements.txt
 ```
 
-3. 进入`TexTeller/src`目录,在终端运行以下命令进行推理:
+4. 进入`TexTeller/src`目录,在终端运行以下命令进行推理:
 
 ```bash
 python inference.py -img "/path/to/image.{jpg,png}"
@@ -102,6 +104,9 @@ python=3.10
 > [!TIP]
 > 你可以改变`start_web.sh`的默认配置, 例如使用GPU进行推理(e.g. `USE_CUDA=True`) 或者增加beams的数量(e.g. `NUM_BEAM=3`)来获得更高的精确度
 
+> [!NOTE]
+> 对于Windows用户,请改为运行 `start_web.bat`。
+
 ## 📡 API调用
 
 我们使用[ray serve](https://github.com/ray-project/ray)来对外提供一个TexTeller的API接口,通过使用这个接口,你可以把TexTeller整合到自己的项目里。要想启动server,你需要先进入`TexTeller/src`目录然后运行以下命令:
diff --git a/src/start_web.bat b/src/start_web.bat
new file mode 100644
index 0000000..fd521e4
--- /dev/null
+++ b/src/start_web.bat
@@ -0,0 +1,12 @@
+@echo off
+SETLOCAL ENABLEEXTENSIONS
+
+set CHECKPOINT_DIR=default
+set TOKENIZER_DIR=default
+REM Set USE_CUDA=True to run inference on GPU; the value is case-sensitive.
+set USE_CUDA=False
+set NUM_BEAM=1
+
+streamlit run web.py
+
+ENDLOCAL
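
For context on how these settings take effect: `start_web.bat` (like `start_web.sh`, presumably) only sets environment variables and then launches Streamlit, so `web.py` is expected to read them at startup. The sketch below is an illustration under that assumption, not the project's actual `web.py`; only the variable names `CHECKPOINT_DIR`, `TOKENIZER_DIR`, `USE_CUDA`, and `NUM_BEAM` and their defaults come from the scripts in this patch.

```python
# Hypothetical sketch (not the actual web.py): reading the variables that
# start_web.sh / start_web.bat place in the environment before launching
# "streamlit run web.py". Defaults mirror the batch script above.
import os

checkpoint_dir = os.environ.get("CHECKPOINT_DIR", "default")
tokenizer_dir = os.environ.get("TOKENIZER_DIR", "default")

# A plain string comparison is why USE_CUDA is case-sensitive, and why an
# inline "REM" comment on the same "set" line in a .bat file would corrupt
# the value (the comment text becomes part of the variable).
use_cuda = os.environ.get("USE_CUDA", "False") == "True"
num_beams = int(os.environ.get("NUM_BEAM", "1"))

print(f"checkpoint={checkpoint_dir} tokenizer={tokenizer_dir} "
      f"cuda={use_cuda} beams={num_beams}")
```

Because `set` variables are inherited by child processes of the same `cmd` session, running `start_web.bat` from `TexTeller/src` makes the values visible to the Streamlit process, matching what `start_web.sh` does on Linux/macOS.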