diff --git a/README.md b/README.md
index b7b428f..722001d 100644
--- a/README.md
+++ b/README.md
@@ -25,6 +25,8 @@ TexTeller was trained with **80M image-formula pairs** (previous dataset can be
>[!NOTE]
> If you would like to provide feedback or suggestions for this project, feel free to start a discussion in the [Discussions section](https://github.com/OleehyO/TexTeller/discussions).
+
+
---
@@ -59,6 +61,10 @@ TexTeller was trained with **80M image-formula pairs** (previous dataset can be
## 📮 Change Log
+- [2025-08-15] We have published the [technical report](https://arxiv.org/abs/2508.09220) of TexTeller. The model evaluated in the benchmark (trained from scratch, with its handwritten training subset filtered against the benchmark test sets) is available at https://huggingface.co/OleehyO/TexTeller_en. **Please do not use the open-source TexTeller3.0 directly to reproduce the handwritten-formula experimental results**, as its training data includes the test sets of these benchmarks.
+
+- [2025-08-15] We have open-sourced the [training dataset](https://huggingface.co/datasets/OleehyO/latex-formulas-80M) of TexTeller 3.0. Please note that the handwritten* subset of this dataset was collected from existing open-source handwritten datasets (including both their training and test sets). If you use the handwritten* subset in ablation experiments, please filter out the test-set labels first.
+
- [2024-06-06] **TexTeller3.0 released!** The training data has been increased to **80M** (**10x more than** TexTeller2.0 and also improved in data diversity). TexTeller3.0's new features:
- Supports scanned images, handwritten formulas, and mixed English/Chinese formulas.
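The test-label filtering step requested above can be sketched as follows. This is a minimal, hypothetical sketch, not the project's actual tooling: the `latex` field name, the record layout, and the whitespace normalization are assumptions about the dataset's schema.

```python
import hashlib

def formula_hash(latex: str) -> str:
    # Normalize whitespace so trivially different encodings of the
    # same formula collapse to a single deduplication key.
    return hashlib.sha256(" ".join(latex.split()).encode("utf-8")).hexdigest()

def filter_test_labels(records, test_formulas):
    # Drop any training record whose LaTeX label also appears
    # in a benchmark test set.
    test_hashes = {formula_hash(f) for f in test_formulas}
    return [r for r in records if formula_hash(r["latex"]) not in test_hashes]

# Toy example (the "latex" field name is an assumption):
train = [{"latex": r"\frac{a}{b}"}, {"latex": r"x^2 + y^2"}]
test = [r"x^2  +  y^2"]  # extra spaces still match after normalization
filtered = filter_test_labels(train, test)
print(len(filtered))  # -> 1
```

The same filter can be applied to a Hugging Face `datasets` split via its `filter` method; hashing normalized labels keeps the membership test O(1) per record over an 80M-pair corpus.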
diff --git a/assets/README_zh.md b/assets/README_zh.md
index d8db6b1..dd93c7a 100644
--- a/assets/README_zh.md
+++ b/assets/README_zh.md
@@ -9,9 +9,9 @@
[](https://oleehyo.github.io/TexTeller/)
[](https://arxiv.org/abs/2508.09220)
- [](https://hub.docker.com/r/oleehyo/texteller)
[](https://huggingface.co/datasets/OleehyO/latex-formulas-80M)
[](https://huggingface.co/OleehyO/TexTeller)
+ [](https://hub.docker.com/r/oleehyo/texteller)
[](https://opensource.org/licenses/Apache-2.0)
@@ -59,6 +59,10 @@ TexTeller 使用 **8千万图像-公式对** 进行训练(前代数据集可
## 📮 Change Log
+- [2025-08-15] We have published the [technical report](https://arxiv.org/abs/2508.09220) of TexTeller. The model evaluated in the benchmark (trained from scratch, with its handwritten subset filtered against the test sets) is available at https://huggingface.co/OleehyO/TexTeller_en. **Please do not use the open-source TexTeller3.0 directly to reproduce the handwritten-formula experimental results**, as its training data includes the test sets of these benchmarks.
+
+- [2025-08-15] We have open-sourced the [training dataset](https://huggingface.co/datasets/OleehyO/latex-formulas-80M) of TexTeller 3.0. Note that the handwritten* subset comes from existing open-source handwritten datasets (**including both training and test sets**); please do not use this subset for ablation experiments.
+
- [2024-06-06] **TexTeller3.0 released!** Training data has been increased to **80M** (**10x** that of TexTeller2.0, with improved data diversity). New features of TexTeller3.0:
- Supports scanned images, handwritten formulas, and mixed English/Chinese formulas.