From 09f02166dbd4b13a61bd0fdac0c68db7b60502bb Mon Sep 17 00:00:00 2001
From: 三洋三洋 <1258009915@qq.com>
Date: Sat, 6 Apr 2024 07:43:03 +0000
Subject: [PATCH] update README.md

---
 README.md           | 17 +----------------
 assets/README_zh.md | 17 +----------------
 2 files changed, 2 insertions(+), 32 deletions(-)

diff --git a/README.md b/README.md
index 4b48fe0..3884503 100644
--- a/README.md
+++ b/README.md
@@ -36,16 +36,6 @@ python=3.10
 > [!WARNING]
 > Only CUDA versions >= 12.0 have been fully tested, so it is recommended to use CUDA version >= 12.0

-## 🖼 About Rendering LaTeX as Images
-
-* **Install XeLaTeX** and ensure `xelatex` can be called directly from the command line.
-
-* To ensure correct rendering of the predicted formulas, **include the following packages** in your `.tex` file:
-
-  ```tex
-  \usepackage{multirow,multicol,amsmath,amsfonts,amssymb,mathtools,bm,mathrsfs,wasysym,amsbsy,upgreek,mathalfa,stmaryrd,mathrsfs,dsfont,amsthm,amsmath,multirow}
-  ```
-
 ## 🚀 Getting Started

 1. Clone the repository:
@@ -73,9 +63,7 @@ python=3.10

 ## 🌐 Web Demo

-First, **ensure that [poppler](https://poppler.freedesktop.org/) is correctly installed and added to the `PATH`** (so that the `pdftoppm` command can be directly used in the terminal).
-
-Then, go to the `TexTeller/src` directory and run the following command:
+Go to the `TexTeller/src` directory and run the following command:

 ```bash
 ./start_web.sh
 ```

 Enter `http://localhost:8501` in a browser to view the web demo.

 > [!TIP]
 > You can change the default configuration of `start_web.sh`, for example, to use GPU for inference (e.g. `USE_CUDA=True`) or to increase the number of beams (e.g. `NUM_BEAM=3`) to achieve higher accuracy

-> [!IMPORTANT]
-> If you want to directly render the prediction results as images on the web (for example, to check if the prediction is correct), you need to ensure [xelatex is correctly installed](https://github.com/OleehyO/TexTeller/blob/main/README.md#-about-rendering-latex-as-images)
-
 ## 📡 API Usage

 We use [ray serve](https://github.com/ray-project/ray) to provide an API interface for TexTeller, allowing you to integrate TexTeller into your own projects. To start the server, you first need to enter the `TexTeller/src` directory and then run the following command:

diff --git a/assets/README_zh.md b/assets/README_zh.md
index 67ad91a..1b511f2 100644
--- a/assets/README_zh.md
+++ b/assets/README_zh.md
@@ -36,16 +36,6 @@ python=3.10
 > [!WARNING]
 > Only CUDA versions >= 12.0 have been fully tested, so it is best to use a CUDA version >= 12.0

-## 🖼 About Rendering LaTeX as Images
-
-* **Install XeLaTeX** and make sure `xelatex` can be called directly from the command line.
-
-* To ensure the predicted formulas render correctly, **include the following packages** in your `.tex` file:
-
-  ```tex
-  \usepackage{multirow,multicol,amsmath,amsfonts,amssymb,mathtools,bm,mathrsfs,wasysym,amsbsy,upgreek,mathalfa,stmaryrd,mathrsfs,dsfont,amsthm,amsmath,multirow}
-  ```
-
 ## 🚀 Getting Started

 1. Clone the repository:
@@ -101,9 +91,6 @@ python=3.10

 ## 🌐 Web Demo

-First, **make sure [poppler](https://poppler.freedesktop.org/) is correctly installed and added to the `PATH`** (so that the `pdftoppm` command can be used directly in the terminal).
-
-Then go to the `TexTeller/src` directory and run the following command:
+Go to the `TexTeller/src` directory and run the following command:

 ```bash
 ./start_web.sh
 ```
@@ -114,9 +102,6 @@ python=3.10
 > [!TIP]
 > You can change the default configuration of `start_web.sh`, for example, to use GPU for inference (e.g. `USE_CUDA=True`) or to increase the number of beams (e.g. `NUM_BEAM=3`) to achieve higher accuracy

-> [!IMPORTANT]
-> If you want to directly render the prediction results as images on the web page (for example, to check whether the prediction is correct), you need to make sure [xelatex is correctly installed](https://github.com/OleehyO/TexTeller/blob/main/assets/README_zh.md#-%E5%85%B3%E4%BA%8E%E6%8A%8Alatex%E6%B8%B2%E6%9F%93%E6%88%90%E5%9B%BE%E7%89%87)
-
 ## 📡 API Usage

 We use [ray serve](https://github.com/ray-project/ray) to provide an API interface for TexTeller, so that you can integrate TexTeller into your own projects. To start the server, you first need to enter the `TexTeller/src` directory and then run the following command:
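For context, the sections removed by this patch describe rendering predicted LaTeX into images with two command-line tools: `xelatex` compiles a `.tex` file to a PDF, and poppler's `pdftoppm` converts that PDF to an image. A minimal sketch of that workflow, with placeholder file names, might look like this:

```bash
# Sketch of the rendering workflow covered by the removed sections
# (file names are placeholders).

# Compile the .tex file containing the predicted formula; the file needs the
# \usepackage{...} line quoted in the removed hunks. Produces formula.pdf.
xelatex -interaction=nonstopmode formula.tex

# Convert the first page of the PDF to a 300 DPI PNG (writes formula-1.png).
pdftoppm -png -r 300 -f 1 -l 1 formula.pdf formula
```

Both tools have to be callable from the command line (i.e. on `PATH`), which is exactly what the removed instructions required.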
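The API Usage paragraph ends just before the actual start command, which is not shown in the hunk above. Once a Ray Serve server is running, a request could be sent from the command line roughly as follows; port 8000 is Ray Serve's default HTTP port, while the `/predict` route and the `img` form field are assumptions for illustration only, so check the server code for the real endpoint and payload format:

```bash
# Hypothetical client call to a running TexTeller Ray Serve endpoint.
# The /predict route and the "img" field name are assumptions; adjust them
# to match the actual server implementation.
curl http://localhost:8000/predict -F "img=@/path/to/formula_image.png"
```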