105 Commits

Author SHA1 Message Date
8b47d1efae refactor: update config file 2026-03-31 22:59:21 +08:00
1214c91448 refact: update list api 2026-03-31 22:32:14 +08:00
697da06f14 fix: use musl build tag for go-fitz on Alpine (musl libc) 2026-03-31 21:49:30 +08:00
dd1c8e7ef6 fix: add gcc via aliyun mirror for CGO build in golang:1.23-alpine 2026-03-31 21:47:35 +08:00
3f00c5697a fix: use private registry golang:1.23-alpine builder image 2026-03-31 21:46:50 +08:00
35cb741845 fix: upgrade builder to golang:1.23-alpine for go-fitz/x/sys compatibility 2026-03-31 21:39:59 +08:00
d667d1272e fix: remove redundant apk gcc install, golang:1.20-alpine already has gcc 2026-03-31 21:38:45 +08:00
84ce6f6b92 refactor: replace pdftoppm with go-fitz for in-process PDF rendering
Switch PDF page rendering from external pdftoppm/pdftocairo subprocess calls
to github.com/gen2brain/go-fitz (MuPDF wrapper), eliminating the poppler-utils
runtime dependency. Enable CGO in Dockerfile builder stage and install gcc/musl-dev
for the static MuPDF link; runtime image remains unchanged.
2026-03-31 21:21:17 +08:00
86dacb61a6 fix: add pdf content type support in GetPolicyURL 2026-03-31 19:45:54 +08:00
3e07c29376 fix: downgrade dependencies for go 1.20 compatibility 2026-03-31 19:37:51 +08:00
a02156591d fix go version to 1.20 2026-03-31 19:36:24 +08:00
c7592e72af update go to 1.23 for build compatibility 2026-03-31 19:34:09 +08:00
fb78c4052b Merge branch 'feature/pdf-recognition' into test 2026-03-31 19:30:35 +08:00
ac078a16bc fix: pin go directive to 1.20, add user ownership check on GetPDFTask
- Downgrade go directive in go.mod from 1.23.0 back to 1.20 to match
  Docker builder image (golang:1.20-alpine); re-run go mod tidy with
  go1.20 (via gvm) to keep go.sum consistent
- GetPDFTask now verifies callerUserID matches task.UserID to prevent
  cross-user data exposure of PDF page content

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-31 14:52:20 +08:00
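The ownership check described in this commit can be sketched as follows. This is a minimal illustration, not the service's actual code: the `PDFTask` struct, `ErrForbidden` value, and `getPDFTask` helper are hypothetical names standing in for whatever the real service uses.

```go
package main

import (
	"errors"
	"fmt"
)

// PDFTask is a hypothetical stand-in for the stored task row.
type PDFTask struct {
	TaskUUID string
	UserID   int64
}

var ErrForbidden = errors.New("task does not belong to caller")

// getPDFTask returns the task only when the caller owns it,
// preventing cross-user exposure of recognized page content.
func getPDFTask(task *PDFTask, callerUserID int64) (*PDFTask, error) {
	if task.UserID != callerUserID {
		return nil, ErrForbidden
	}
	return task, nil
}

func main() {
	t := &PDFTask{TaskUUID: "abc", UserID: 42}
	if _, err := getPDFTask(t, 7); err != nil {
		fmt.Println("denied:", err)
	}
}
```

The key point is that the filter happens server-side on the authenticated caller ID, not on anything the client sends in the request body.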
9d712c921a feat: add PDF document recognition with 10-page pre-hook
- Migrate recognition_results table to JSON schema (meta_data + content),
  replacing flat latex/markdown/mathml/mml columns
- Add TaskTypePDF constant and update all formula read/write paths
- Add PDFRecognitionService using pdftoppm (Poppler) for CGO-free page
  rendering; limits processing to first 10 pages (pre-hook)
- Reuse existing downstream OCR endpoint (cloud.texpixel.com) for each
  page image; stores results as [{page_number, markdown}] JSON array
- Add Redis queue + distributed lock for PDF worker goroutine
- Add REST endpoints: POST /v1/pdf/recognition, GET /v1/pdf/recognition/:task_no
- Add .pdf to OSS upload file type whitelist
- Add migrations/pdf_recognition.sql for safe data migration

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-31 14:17:44 +08:00
876e64366b feat: upgrade verification code email with bilingual HTML template
- Chinese domains (qq.com, 163.com, etc.) receive a Chinese email
- All other domains receive an English email
- Prominent code display: 40px monospace with wide letter-spacing
- Clean OpenAI-inspired layout with dark header and card design

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 14:44:17 +08:00
87bee98049 feat: add email_send_log table to track email sends and registration status
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 14:29:30 +08:00
a07e08a761 fix: change register verify_code field to code to match frontend
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 10:05:51 +08:00
f6ccadbcd3 refactor: move deploy scripts into .claude/skills/deploy/
- Added deploy skill (SKILL.md) with dev/prod instructions
- Moved deploy_prod.sh, deploy_dev.sh, dev_deploy.sh, speed_take.sh
- Updated settings.local.json: new script paths, git merge/push permissions, auto-deploy hook on merge to master
- Removed dev_deploy.sh and speed_take.sh from .gitignore

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 10:04:41 +08:00
fa1fbfc0f5 fix: set resend api key in prod config
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 09:55:13 +08:00
4f468caedf Merge branch 'feature/email_service_optimize' into master 2026-03-27 09:50:32 +08:00
f594a3e9fb feat: add email verify code endpoint and require code on register
- POST /v1/user/email/code sends a 6-digit verify code via email (rate-limited, 10min TTL)
- RegisterByEmail now validates verify_code before creating the account
- Added email code cache helpers mirroring SMS pattern
- Added error codes 1007 (email code error) and 1008 (send limit)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 09:50:23 +08:00
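A minimal sketch of the 6-digit code generation (the service's actual helper names are not shown in the log; `crypto/rand` keeps codes unpredictable, and the zero-padding ensures codes like `042917` keep their leading zero):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// genVerifyCode returns a zero-padded 6-digit numeric code drawn
// from a cryptographically secure source.
func genVerifyCode() (string, error) {
	n, err := rand.Int(rand.Reader, big.NewInt(1000000)) // uniform in [0, 999999]
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%06d", n.Int64()), nil
}

func main() {
	code, _ := genVerifyCode()
	fmt.Println(code) // cached with a 10-minute TTL before the email is sent
}
```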
e538553045 fix: remove unused time import in task.go
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 02:41:40 +08:00
135a8151e4 feat: rm binary file 2026-03-27 02:29:28 +08:00
057681561a feat: update pwd dev env 2026-03-27 02:27:48 +08:00
ea3f3fd482 fix: remove unused time import in task.go and fix deploy script
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 01:48:14 +08:00
a29936f31c feat: add email verification code for registration and optimize email service
- Add POST /user/email/code endpoint to send 6-digit verification code
- Require email code verification before completing registration
- Add email code cache with 10min TTL and 5/day send rate limit
- Fix nil client guard, TLS conn leak, domain parsing, and Resend error body in email pkg
- Deploy via ssh inline command using current branch

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 01:47:06 +08:00
9876169c84 refactor: optimize email 2026-03-27 01:23:01 +08:00
5371b1d1c6 feat: add dual-engine email service with aliyun smtp and resend routing
Route Chinese domains (edu.cn, qq.com, 163.com, etc.) via Aliyun SMTP
and international addresses via Resend API.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-25 18:33:17 +08:00
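The routing rule — Chinese mailbox domains via Aliyun SMTP, everything else via Resend — can be sketched like this. The domain list below is an illustrative subset taken from the examples in the commit message; the real service's list and function names may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// chineseDomains is an illustrative subset of domains routed via Aliyun SMTP.
var chineseDomains = []string{"qq.com", "163.com", "126.com", "edu.cn"}

// pickProvider returns "aliyun" for Chinese mailbox domains and "resend" otherwise.
// The suffix check lets subdomains such as tsinghua.edu.cn match edu.cn.
func pickProvider(email string) string {
	at := strings.LastIndex(email, "@")
	if at < 0 {
		return "resend"
	}
	domain := strings.ToLower(email[at+1:])
	for _, d := range chineseDomains {
		if domain == d || strings.HasSuffix(domain, "."+d) {
			return "aliyun"
		}
	}
	return "resend"
}

func main() {
	fmt.Println(pickProvider("a@qq.com"), pickProvider("b@gmail.com"))
}
```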
fcd9816b0b refact: add log for export 2026-03-25 18:28:12 +08:00
liuyuanchuang 18597ba7fa feat: add log for export error 2026-03-12 11:43:26 +08:00
2fafcb9bfd Merge pull request 'feature/google_oath' (#5) from feature/google_oath into master
Reviewed-on: #5
2026-03-06 14:07:42 +08:00
liuyuanchuang b5d177910c feat: add email in userinfo 2026-03-06 14:02:04 +08:00
liuyuanchuang 7df6587fd6 chore: update docker compose 2026-03-06 11:22:41 +08:00
liuyuanchuang 94988790f8 feat: update callback url 2026-03-06 11:10:44 +08:00
liuyuanchuang 45dcef5702 feat: add proxy 2026-03-06 11:03:41 +08:00
liuyuanchuang ed7232e5c0 feat google oauth 2026-03-06 10:28:56 +08:00
liuyuanchuang 8852ee5a3a Merge branch 'test' 2026-02-13 18:04:26 +08:00
liuyuanchuang a7b73b0928 Merge branch 'master' of https://code.texpixel.com/YogeLiu/doc_ai_backed 2026-02-13 18:04:21 +08:00
liuyuanchuang e35f3ed684 fix: update bucket 2026-02-12 19:45:29 +08:00
liuyuanchuang aed09d4341 feat: update bucket 2026-02-12 19:41:53 +08:00
liuyuanchuang a0cf063ff9 Merge branch 'master' into test 2026-02-12 19:40:51 +08:00
liuyuanchuang 323b712c18 fix: rm skd file 2026-02-12 19:40:23 +08:00
6786d174a6 Merge pull request 'feat: add mml' (#4) from test into master
Reviewed-on: #4
2026-02-05 13:48:14 +08:00
liuyuanchuang de6b5d3960 Merge branch 'master' into test 2026-02-05 10:44:39 +08:00
liuyuanchuang 81c2767423 feat: add mml from backend 2026-02-05 10:44:11 +08:00
liuyuanchuang a59fbd0edd refact: update script 2026-01-27 23:46:52 +08:00
liuyuanchuang d1a56a2ab3 fix: panic 2026-01-27 23:23:42 +08:00
liuyuanchuang 41df42dea4 feat: decode uid 2026-01-27 22:28:13 +08:00
liuyuanchuang be3e82fc2e feat: decode user_id 2026-01-27 22:26:25 +08:00
liuyuanchuang 9e01ee79f1 Merge branch 'master' into test 2026-01-27 22:22:48 +08:00
liuyuanchuang 52c9e48a0f fix: rm router db 2026-01-27 22:22:06 +08:00
liuyuanchuang 9b7657cd73 Merge branch 'master' into test 2026-01-27 22:20:27 +08:00
liuyuanchuang a04eedc423 feat: add track point 2026-01-27 22:20:07 +08:00
liuyuanchuang a5f1ad153e refactor: update package path 2026-01-27 21:56:21 +08:00
liuyuanchuang db3beeddb9 Merge branch 'master' of https://code.texpixel.com/YogeLiu/doc_ai_backed into test 2026-01-27 17:40:47 +08:00
eabfd83fdf feat: add scrip; 2026-01-27 17:40:15 +08:00
97c3617731 feat: replace export url 2026-01-25 09:10:54 +08:00
ece026bea2 feat: add new path for recognize 2026-01-25 09:10:54 +08:00
b9124451d2 feat: update default init env 2026-01-25 09:08:51 +08:00
2e158d3fee feat: add new path for recognize 2025-12-31 17:53:12 +08:00
be1047618e Merge branch 'master' into test 2025-12-27 22:22:15 +08:00
3293f1f8a5 fix: downgrade error 2025-12-27 22:21:34 +08:00
ff6795b469 feat: convert markdown to mml 2025-12-27 22:06:48 +08:00
cb461f0134 feat: update req 2025-12-26 21:31:47 +08:00
7c4dfaba54 feat: modify field 2025-12-26 17:27:35 +08:00
5ee1cea0d7 feat: add gls 2025-12-26 17:11:59 +08:00
a538bd6680 fix: modify ip 2025-12-26 16:41:36 +08:00
cd221719cf fix: http req 2025-12-26 16:38:04 +08:00
d0c0d2cbc3 fix: query by task-no 2025-12-26 16:28:49 +08:00
930d782f18 feat: add api for export 2025-12-26 16:24:34 +08:00
bdd21c4b0f Merge branch 'master' into test 2025-12-26 15:48:25 +08:00
0aaafdbaa3 feat: add file export 2025-12-26 15:48:14 +08:00
68a1755a83 Merge branch 'master' into test 2025-12-25 14:02:33 +08:00
bb7403f700 feat: add baidu api 2025-12-25 14:02:29 +08:00
3a86f811d0 feat: add log for time 2025-12-23 22:32:29 +08:00
28295f825b feat: update http retry 2025-12-23 21:12:44 +08:00
e0904f5bfb feat: add mml 2025-12-20 22:57:53 +08:00
073808eb30 feat: add mml 2025-12-20 22:57:14 +08:00
7be0d705fe Merge pull request 'feat: add mathpixel' (#3) from test into master
Reviewed-on: #3
2025-12-20 22:53:23 +08:00
770c334083 fix: update app key 2025-12-20 22:48:02 +08:00
08d5e37d0e fix: udpate app_id 2025-12-20 22:15:56 +08:00
203c2b64c0 feat: add mathpixel 2025-12-20 21:42:58 +08:00
aa7fb1c7ca feat: rm uname 2025-12-19 17:06:38 +08:00
ae2b58149d Merge pull request 'test' (#2) from test into master
Reviewed-on: #2
2025-12-19 16:40:46 +08:00
9e088879c2 feat: update dockerfile 2025-12-19 16:39:35 +08:00
be00a91637 feat: check login for list 2025-12-19 13:59:47 +08:00
4bbbb99634 build: update dockerfile 2025-12-19 10:50:02 +08:00
4bb59ecf7e feat: update vlm url 2025-12-19 09:55:26 +08:00
5a1983f08b feat: update oss download url 2025-12-18 15:14:42 +08:00
8a6da5b627 feat: add list api 2025-12-18 12:39:50 +08:00
d06f2d9df1 feat: update docker 2025-12-17 21:40:18 +08:00
b1a3b7cd17 Merge pull request 'feat: add user register' (#1) from feature/user_login into master
Reviewed-on: #1
2025-12-17 20:45:41 +08:00
f0449bab25 feat: add user register 2025-12-17 20:43:08 +08:00
liuyuanchuang f86898ae5f build: update dockerfile 2025-12-16 11:32:36 +08:00
liuyuanchuang a9db8576eb build: update dockerfile 2025-12-16 11:24:11 +08:00
9ceb5fe92a feat: update url 2025-12-15 23:32:07 +08:00
liuyuanchuang 50922641a9 feat: update ocr model 2025-12-11 19:51:51 +08:00
liuyuanchuang 904ea3d146 feat: update token 2025-12-11 19:43:30 +08:00
liuyuanchuang 696919611c feat: use siliconflow model 2025-12-11 19:39:35 +08:00
liuyuanchuang ea0f5d8765 refact: update dockerfile 2025-12-11 11:22:56 +08:00
0bc77f61e2 feat: update dockerfile 2025-12-10 23:17:24 +08:00
083142491f refact: update oss config 2025-12-10 22:23:05 +08:00
liuyuanchuang 4bd8cef372 refact: modify db config 2025-12-10 20:01:56 +08:00
liuyuanchuang 89b55edf9f build: rm vendor 2025-12-10 19:33:20 +08:00
2417 changed files with 5313 additions and 1049539 deletions


@@ -0,0 +1,40 @@
---
name: deploy
description: Use when deploying this project to dev or prod environments, or when asked to run, ship, release, or push to a server.
---
# Deploy
## Environments
### Dev (`/deploy dev`)
```bash
bash .claude/skills/deploy/deploy_dev.sh
```
Builds and restarts the service on the dev server (ubuntu).
### Prod (`/deploy prod`)
Prod deploy requires being on `master` (or `main`). Steps:
1. Ensure all changes are committed and pushed to `master`
2. Run:
```bash
bash .claude/skills/deploy/deploy_prod.sh
```
`deploy_prod.sh` will:
- Pull latest code on ubuntu build host
- Build `linux/amd64` Docker image and push to registry
- SSH into ECS: stop old container, start new one with `-env=prod`
## Quick Reference
| Target | Command | Branch required |
|--------|---------|-----------------|
| Dev | `bash .claude/skills/deploy/deploy_dev.sh` | any |
| Prod | `bash .claude/skills/deploy/deploy_prod.sh` | `master` or `main` |
## Common Mistakes
- Running `deploy_prod.sh` on a feature branch → script guards against this (exits with error)
- Forgetting to merge/push before deploy → ubuntu build host pulls from remote, so local-only commits won't be included
- Prod logs go to `/app/logs/app.log` inside the container, not stdout — use `docker exec doc_ai tail -f /app/logs/app.log` on ECS to tail them


@@ -0,0 +1,15 @@
#!/bin/bash
# Push the current branch, then rebuild and restart the backend on the dev host.
BRANCH=$(git rev-parse --abbrev-ref HEAD)
git push origin "$BRANCH"
ssh ubuntu "
cd /home/yoge/Dev/doc_ai_backed &&
git checkout $BRANCH &&
git pull origin $BRANCH &&
docker compose -f docker-compose.infra.yml up -d --no-recreate || true &&
docker compose down &&
docker image rm doc_ai_backed-doc_ai:latest || true &&
docker compose up -d
"


@@ -0,0 +1,68 @@
#!/bin/bash
set -euo pipefail
REGISTRY="crpi-8s2ierii2xan4klg.cn-beijing.personal.cr.aliyuncs.com/texpixel/doc_ai_backend"
BUILD_HOST="ubuntu"
BUILD_DIR="~/Dev/doc_ai_backed"
# --- Guard: must be on main/master ---
BRANCH=$(git rev-parse --abbrev-ref HEAD)
if [[ "${BRANCH}" != "main" && "${BRANCH}" != "master" ]]; then
echo "ERROR: must be on main or master branch (current: ${BRANCH})"
exit 1
fi
VERSION=$(git rev-parse --short HEAD)
IMAGE_VERSIONED="${REGISTRY}:${VERSION}"
IMAGE_LATEST="${REGISTRY}:latest"
echo "==> [1/3] Pulling latest code on Ubuntu"
ssh ${BUILD_HOST} "
set -e
cd ${BUILD_DIR}
git fetch origin
git checkout master 2>/dev/null || git checkout main
git pull
"
echo "==> [2/3] Building & pushing image on Ubuntu"
ssh ${BUILD_HOST} "
set -e
cd ${BUILD_DIR}
docker build --platform linux/amd64 \
-t ${IMAGE_VERSIONED} \
-t ${IMAGE_LATEST} \
.
docker push ${IMAGE_VERSIONED}
docker push ${IMAGE_LATEST}
docker rmi ${IMAGE_VERSIONED} ${IMAGE_LATEST} 2>/dev/null || true
"
echo "==> [3/3] Deploying on ECS"
ssh ecs "
set -e
echo '--- Pulling image'
docker pull ${IMAGE_VERSIONED}
echo '--- Stopping old container'
docker stop doc_ai 2>/dev/null || true
docker rm doc_ai 2>/dev/null || true
echo '--- Starting new container'
docker run -d \
--name doc_ai \
-p 8024:8024 \
--restart unless-stopped \
${IMAGE_VERSIONED} \
-env=prod
echo '--- Removing old doc_ai images (keeping current)'
docker images --format '{{.Repository}}:{{.Tag}} {{.ID}}' \
| grep '^${REGISTRY}' \
| grep -v ':${VERSION}' \
| grep -v ':latest' \
| awk '{print \$2}' \
| xargs -r docker rmi || true
"
echo "==> Done. Running version: ${VERSION}"


@@ -0,0 +1,3 @@
docker-compose down
docker image rm doc_ai_backed-doc_ai
docker-compose up -d


@@ -0,0 +1,7 @@
#!/bin/bash
# Measure round-trip latency of an intentionally unauthenticated Mathpix API call.
echo "=== Testing 401 Request Speed ==="
curl -X POST "https://api.mathpix.com/v3/text" \
-H "Content-Type: application/json" \
--data '{}' \
-w "\n\n=== Timing ===\nHTTP Status: %{http_code}\nTotal: %{time_total}s\nConnect: %{time_connect}s\nDNS: %{time_namelookup}s\nTTFB: %{time_starttransfer}s\n"

.gitignore (vendored)

@@ -4,4 +4,8 @@
 *.cursorrules
 *png
 /upload
+texpixel
+/vendor
+doc_ai
 document_ai


@@ -1,16 +1,23 @@
 # Build stage
-FROM registry.cn-beijing.aliyuncs.com/bitwsd/golang AS builder
+FROM crpi-8s2ierii2xan4klg.cn-beijing.personal.cr.aliyuncs.com/texpixel/golang:1.23-alpine AS builder
 WORKDIR /app
 # Copy source code
 COPY . .
-# Build binary
-RUN CGO_ENABLED=0 GOOS=linux go build -mod=vendor -o main ./main.go
+ENV GOPROXY=https://goproxy.cn,direct
+ENV GOSUMDB=off
+# Build binary (CGO required for go-fitz/MuPDF bundled static libs)
+RUN sed -i 's|https://dl-cdn.alpinelinux.org|https://mirrors.aliyun.com|g' /etc/apk/repositories && \
+    apk add --no-cache gcc musl-dev && \
+    go mod download && \
+    CGO_ENABLED=1 GOOS=linux go build -tags musl -ldflags="-s -w" -o doc_ai ./main.go
 # Runtime stage
-FROM registry.cn-beijing.aliyuncs.com/bitwsd/alpine
+FROM crpi-8s2ierii2xan4klg.cn-beijing.personal.cr.aliyuncs.com/texpixel/alpine:latest
 # Set timezone
 RUN apk add --no-cache tzdata && \
@@ -21,7 +28,7 @@ RUN apk add --no-cache tzdata && \
 WORKDIR /app
 # Copy binary from builder
-COPY --from=builder /app/main .
+COPY --from=builder /app/doc_ai .
 # Copy config files
 COPY config/config_*.yaml ./config/
@@ -34,7 +41,7 @@ RUN mkdir -p /data/formula && \
 EXPOSE 8024
 # Set entrypoint
-ENTRYPOINT ["./main"]
+ENTRYPOINT ["./doc_ai"]
 # Default command (can be overridden)
 CMD ["-env", "prod"]


@@ -1,19 +1,74 @@
 package api

 import (
-	"gitea.com/bitwsd/document_ai/api/v1/formula"
-	"gitea.com/bitwsd/document_ai/api/v1/oss"
-	"gitea.com/bitwsd/document_ai/api/v1/task"
-	"gitea.com/bitwsd/document_ai/api/v1/user"
+	"gitea.com/texpixel/document_ai/api/v1/analytics"
+	"gitea.com/texpixel/document_ai/api/v1/formula"
+	"gitea.com/texpixel/document_ai/api/v1/oss"
+	"gitea.com/texpixel/document_ai/api/v1/pdf"
+	"gitea.com/texpixel/document_ai/api/v1/task"
+	"gitea.com/texpixel/document_ai/api/v1/user"
+	"gitea.com/texpixel/document_ai/pkg/common"
 	"github.com/gin-gonic/gin"
 )

 func SetupRouter(engine *gin.RouterGroup) {
 	v1 := engine.Group("/v1")
 	{
-		formula.SetupRouter(v1)
-		oss.SetupRouter(v1)
-		task.SetupRouter(v1)
-		user.SetupRouter(v1)
+		formulaRouter := v1.Group("/formula", common.GetAuthMiddleware())
+		{
+			endpoint := formula.NewFormulaEndpoint()
+			formulaRouter.POST("/recognition", endpoint.CreateTask)
+			formulaRouter.POST("/ai_enhance", endpoint.AIEnhanceRecognition)
+			formulaRouter.GET("/recognition/:task_no", endpoint.GetTaskStatus)
+			formulaRouter.POST("/test_process_mathpix_task", endpoint.TestProcessMathpixTask)
+		}
+		taskRouter := v1.Group("/task", common.GetAuthMiddleware())
+		{
+			endpoint := task.NewTaskEndpoint()
+			taskRouter.POST("/evaluate", endpoint.EvaluateTask)
+			taskRouter.GET("/list", common.MustAuthMiddleware(), endpoint.GetTaskList)
+			taskRouter.POST("/export", endpoint.ExportTask)
+		}
+		ossRouter := v1.Group("/oss", common.GetAuthMiddleware())
+		{
+			endpoint := oss.NewOSSEndpoint()
+			ossRouter.POST("/signature", endpoint.GetPostObjectSignature)
+			ossRouter.POST("/signature_url", endpoint.GetSignatureURL)
+			ossRouter.POST("/file/upload", endpoint.UploadFile)
+		}
+		userEndpoint := user.NewUserEndpoint()
+		userRouter := v1.Group("/user")
+		{
+			userRouter.POST("/sms", userEndpoint.SendVerificationCode)
+			userRouter.POST("/email/code", userEndpoint.SendEmailVerifyCode)
+			userRouter.POST("/register", userEndpoint.RegisterByEmail)
+			userRouter.POST("/login", userEndpoint.LoginByEmail)
+			userRouter.GET("/oauth/google/url", userEndpoint.GetGoogleOAuthUrl)
+			userRouter.POST("/oauth/google/callback", userEndpoint.GoogleOAuthCallback)
+		}
+		userAuthRouter := v1.Group("/user", common.GetAuthMiddleware())
+		{
+			userAuthRouter.GET("/info", common.MustAuthMiddleware(), userEndpoint.GetUserInfo)
+		}
+		pdfRouter := v1.Group("/pdf", common.GetAuthMiddleware())
+		{
+			endpoint := pdf.NewPDFEndpoint()
+			pdfRouter.POST("/recognition", endpoint.CreateTask)
+			pdfRouter.GET("/recognition/:task_no", endpoint.GetTaskStatus)
+		}
+		// analytics tracking routes
+		analyticsRouter := v1.Group("/analytics", common.GetAuthMiddleware())
+		{
+			analyticsHandler := analytics.NewAnalyticsHandler()
+			analyticsRouter.POST("/track", analyticsHandler.TrackEvent)
+		}
 	}
 }


@@ -0,0 +1,50 @@
package analytics
import (
"net/http"
"gitea.com/texpixel/document_ai/internal/model/analytics"
"gitea.com/texpixel/document_ai/internal/service"
"gitea.com/texpixel/document_ai/pkg/common"
"gitea.com/texpixel/document_ai/pkg/log"
"github.com/gin-gonic/gin"
)
type AnalyticsHandler struct {
analyticsService *service.AnalyticsService
}
func NewAnalyticsHandler() *AnalyticsHandler {
return &AnalyticsHandler{
analyticsService: service.NewAnalyticsService(),
}
}
// TrackEvent records a single event
// @Summary Record a single tracking event
// @Description Record a user-behavior analytics event
// @Tags Analytics
// @Accept json
// @Produce json
// @Param request body analytics.TrackEventRequest true "Event payload"
// @Success 200 {object} common.Response
// @Router /api/v1/analytics/track [post]
func (h *AnalyticsHandler) TrackEvent(c *gin.Context) {
var req analytics.TrackEventRequest
if err := c.ShouldBindJSON(&req); err != nil {
log.Error(c.Request.Context(), "bind request failed", "error", err)
c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeParamError, "invalid request"))
return
}
userID := common.GetUserIDFromContext(c)
req.UserID = userID
if err := h.analyticsService.TrackEvent(c.Request.Context(), &req); err != nil {
log.Error(c.Request.Context(), "track event failed", "error", err)
c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeSystemError, "failed to track event"))
return
}
c.JSON(http.StatusOK, common.SuccessResponse(c, "success"))
}


@@ -3,12 +3,14 @@ package formula
 import (
 	"net/http"
 	"path/filepath"
+	"strings"

-	"gitea.com/bitwsd/document_ai/internal/model/formula"
-	"gitea.com/bitwsd/document_ai/internal/service"
-	"gitea.com/bitwsd/document_ai/internal/storage/dao"
-	"gitea.com/bitwsd/document_ai/pkg/common"
-	"gitea.com/bitwsd/document_ai/pkg/utils"
+	"gitea.com/texpixel/document_ai/internal/model/formula"
+	"gitea.com/texpixel/document_ai/internal/service"
+	"gitea.com/texpixel/document_ai/internal/storage/dao"
+	"gitea.com/texpixel/document_ai/pkg/common"
+	"gitea.com/texpixel/document_ai/pkg/constant"
+	"gitea.com/texpixel/document_ai/pkg/utils"
 	"github.com/gin-gonic/gin"
 )
@@ -36,17 +38,20 @@ func NewFormulaEndpoint() *FormulaEndpoint {
 // @Router /v1/formula/recognition [post]
 func (endpoint *FormulaEndpoint) CreateTask(ctx *gin.Context) {
 	var req formula.CreateFormulaRecognitionRequest
+	uid := ctx.GetInt64(constant.ContextUserID)
 	if err := ctx.BindJSON(&req); err != nil {
 		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "Invalid parameters"))
 		return
 	}
+	req.UserID = uid
 	if !utils.InArray(req.TaskType, []string{string(dao.TaskTypeFormula), string(dao.TaskTypeFormula)}) {
 		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "Invalid task type"))
 		return
 	}
-	fileExt := filepath.Ext(req.FileName)
+	fileExt := strings.ToLower(filepath.Ext(req.FileName))
 	if !utils.InArray(fileExt, []string{".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff", ".webp"}) {
 		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "Invalid file type"))
 		return
@@ -116,3 +121,20 @@ func (endpoint *FormulaEndpoint) AIEnhanceRecognition(c *gin.Context) {
 	c.JSON(http.StatusOK, common.SuccessResponse(c, nil))
 }
+
+func (endpoint *FormulaEndpoint) TestProcessMathpixTask(c *gin.Context) {
+	postData := make(map[string]int)
+	if err := c.BindJSON(&postData); err != nil {
+		c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeParamError, "Invalid parameters"))
+		return
+	}
+	taskID := postData["task_id"]
+	err := endpoint.recognitionService.TestProcessMathpixTask(c, int64(taskID))
+	if err != nil {
+		c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeSystemError, err.Error()))
+		return
+	}
+	c.JSON(http.StatusOK, common.SuccessResponse(c, nil))
+}
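The `strings.ToLower(filepath.Ext(...))` change in this hunk is what makes uppercase extensions like `photo.PNG` pass the whitelist, since `filepath.Ext` preserves the original case. A minimal self-contained sketch of the check (the `allowed` list mirrors the one in the diff; `validExt` is an illustrative helper name):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

var allowed = []string{".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff", ".webp"}

// validExt lower-cases the extension before the whitelist check,
// so "photo.PNG" and "photo.png" are treated the same.
func validExt(name string) bool {
	ext := strings.ToLower(filepath.Ext(name))
	for _, a := range allowed {
		if ext == a {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(validExt("photo.PNG"), validExt("doc.txt"))
}
```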


@@ -1,12 +0,0 @@
-package formula
-
-import (
-	"github.com/gin-gonic/gin"
-)
-
-func SetupRouter(engine *gin.RouterGroup) {
-	endpoint := NewFormulaEndpoint()
-	engine.POST("/formula/recognition", endpoint.CreateTask)
-	engine.POST("/formula/ai_enhance", endpoint.AIEnhanceRecognition)
-	engine.GET("/formula/recognition/:task_no", endpoint.GetTaskStatus)
-}


@@ -5,19 +5,26 @@ import (
 	"net/http"
 	"os"
 	"path/filepath"
+	"strings"
 	"time"

-	"gitea.com/bitwsd/document_ai/config"
-	"gitea.com/bitwsd/document_ai/internal/storage/dao"
-	"gitea.com/bitwsd/document_ai/pkg/common"
-	"gitea.com/bitwsd/document_ai/pkg/constant"
-	"gitea.com/bitwsd/document_ai/pkg/oss"
-	"gitea.com/bitwsd/document_ai/pkg/utils"
+	"gitea.com/texpixel/document_ai/config"
+	"gitea.com/texpixel/document_ai/internal/storage/dao"
+	"gitea.com/texpixel/document_ai/pkg/common"
+	"gitea.com/texpixel/document_ai/pkg/oss"
+	"gitea.com/texpixel/document_ai/pkg/utils"
 	"github.com/gin-gonic/gin"
 	"gorm.io/gorm"
 )

-func GetPostObjectSignature(ctx *gin.Context) {
+type OSSEndpoint struct {
+}
+
+func NewOSSEndpoint() *OSSEndpoint {
+	return &OSSEndpoint{}
+}
+
+func (h *OSSEndpoint) GetPostObjectSignature(ctx *gin.Context) {
 	policyToken, err := oss.GetPolicyToken()
 	if err != nil {
 		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, err.Error()))
@@ -36,8 +43,7 @@ func GetPostObjectSignature(ctx *gin.Context) {
 // @Success 200 {object} common.Response{data=map[string]string{"sign_url":string, "repeat":bool, "path":string}} "Signed URL generated successfully"
 // @Failure 200 {object} common.Response "Error response"
 // @Router /signature_url [get]
-func GetSignatureURL(ctx *gin.Context) {
-	userID := ctx.GetInt64(constant.ContextUserID)
+func (h *OSSEndpoint) GetSignatureURL(ctx *gin.Context) {
 	type Req struct {
 		FileHash string `json:"file_hash" binding:"required"`
 		FileName string `json:"file_name" binding:"required"`
@@ -50,7 +56,7 @@ func GetSignatureURL(ctx *gin.Context) {
 	}
 	taskDao := dao.NewRecognitionTaskDao()
 	sess := dao.DB.WithContext(ctx)
-	task, err := taskDao.GetTaskByFileURL(sess, userID, req.FileHash)
+	task, err := taskDao.GetTaskByFileURL(sess, req.FileHash)
 	if err != nil && err != gorm.ErrRecordNotFound {
 		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeDBError, "failed to get task"))
 		return
@@ -59,12 +65,12 @@ func GetSignatureURL(ctx *gin.Context) {
 		ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, gin.H{"sign_url": "", "repeat": true, "path": task.FileURL}))
 		return
 	}
-	extend := filepath.Ext(req.FileName)
+	extend := strings.ToLower(filepath.Ext(req.FileName))
 	if extend == "" {
 		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "invalid file name"))
 		return
 	}
-	if !utils.InArray(extend, []string{".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff", ".webp"}) {
+	if !utils.InArray(extend, []string{".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff", ".webp", ".pdf"}) {
 		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "invalid file type"))
 		return
 	}
@@ -77,7 +83,7 @@ func GetSignatureURL(ctx *gin.Context) {
 	ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, gin.H{"sign_url": url, "repeat": false, "path": path}))
 }

-func UploadFile(ctx *gin.Context) {
+func (h *OSSEndpoint) UploadFile(ctx *gin.Context) {
 	if err := os.MkdirAll(config.GlobalConfig.UploadDir, 0755); err != nil {
 		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, "Failed to create upload directory"))
 		return


@@ -1,12 +0,0 @@
-package oss
-
-import "github.com/gin-gonic/gin"
-
-func SetupRouter(parent *gin.RouterGroup) {
-	router := parent.Group("oss")
-	{
-		router.POST("/signature", GetPostObjectSignature)
-		router.POST("/signature_url", GetSignatureURL)
-		router.POST("/file/upload", UploadFile)
-	}
-}

api/v1/pdf/handler.go (new file)

@@ -0,0 +1,95 @@
package pdf
import (
"net/http"
"path/filepath"
"strings"
pdfmodel "gitea.com/texpixel/document_ai/internal/model/pdf"
"gitea.com/texpixel/document_ai/internal/service"
"gitea.com/texpixel/document_ai/pkg/common"
"gitea.com/texpixel/document_ai/pkg/constant"
"github.com/gin-gonic/gin"
)
type PDFEndpoint struct {
	pdfService *service.PDFRecognitionService
}

func NewPDFEndpoint() *PDFEndpoint {
	return &PDFEndpoint{
		pdfService: service.NewPDFRecognitionService(),
	}
}

// CreateTask godoc
// @Summary Create a PDF recognition task
// @Description Create a new PDF recognition task (max 10 pages processed)
// @Tags PDF
// @Accept json
// @Produce json
// @Param request body pdfmodel.CreatePDFRecognitionRequest true "Create PDF task request"
// @Success 200 {object} common.Response{data=pdfmodel.CreatePDFTaskResponse}
// @Failure 400 {object} common.Response
// @Failure 500 {object} common.Response
// @Router /v1/pdf/recognition [post]
func (e *PDFEndpoint) CreateTask(c *gin.Context) {
	var req pdfmodel.CreatePDFRecognitionRequest
	if err := c.BindJSON(&req); err != nil {
		c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeParamError, "invalid parameters"))
		return
	}
	req.UserID = c.GetInt64(constant.ContextUserID)
	if strings.ToLower(filepath.Ext(req.FileName)) != ".pdf" {
		c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeParamError, "only PDF files are supported"))
		return
	}
	task, err := e.pdfService.CreatePDFTask(c, &req)
	if err != nil {
		if bizErr, ok := err.(*common.BusinessError); ok {
			c.JSON(http.StatusOK, common.ErrorResponse(c, int(bizErr.Code), bizErr.Message))
			return
		}
		c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeSystemError, "failed to create task"))
		return
	}
	c.JSON(http.StatusOK, common.SuccessResponse(c, &pdfmodel.CreatePDFTaskResponse{
		TaskNo: task.TaskUUID,
		Status: int(task.Status),
	}))
}

// GetTaskStatus godoc
// @Summary Get PDF recognition task status and results
// @Description Poll task status; pages field populated when status=2 (completed)
// @Tags PDF
// @Accept json
// @Produce json
// @Param task_no path string true "Task No"
// @Success 200 {object} common.Response{data=pdfmodel.GetPDFTaskResponse}
// @Failure 404 {object} common.Response
// @Failure 500 {object} common.Response
// @Router /v1/pdf/recognition/{task_no} [get]
func (e *PDFEndpoint) GetTaskStatus(c *gin.Context) {
	var req pdfmodel.GetPDFTaskRequest
	if err := c.ShouldBindUri(&req); err != nil {
		c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeParamError, "invalid parameters"))
		return
	}
	resp, err := e.pdfService.GetPDFTask(c, req.TaskNo, c.GetInt64(constant.ContextUserID))
	if err != nil {
		if bizErr, ok := err.(*common.BusinessError); ok {
			c.JSON(http.StatusOK, common.ErrorResponse(c, int(bizErr.Code), bizErr.Message))
			return
		}
		c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeSystemError, "failed to query task"))
		return
	}
	c.JSON(http.StatusOK, common.SuccessResponse(c, resp))
}
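The extension guard in CreateTask lowercases the final suffix returned by filepath.Ext, so mixed-case names pass while multi-extension names are judged by their last extension only. A standalone sketch of the same check (isPDFFilename is our name for it, not part of the handler):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isPDFFilename mirrors the check in CreateTask: compare the lowercased file
// extension against ".pdf". filepath.Ext returns only the final suffix, so
// "Report.PDF" passes and "report.pdf.exe" is rejected.
func isPDFFilename(name string) bool {
	return strings.ToLower(filepath.Ext(name)) == ".pdf"
}

func main() {
	fmt.Println(isPDFFilename("Report.PDF"))     // true
	fmt.Println(isPDFFilename("report.pdf.exe")) // false
	fmt.Println(isPDFFilename("notes.txt"))      // false
}
```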


@@ -3,10 +3,10 @@ package task
 import (
 	"net/http"

-	"gitea.com/bitwsd/core/common/log"
-	"gitea.com/bitwsd/document_ai/internal/model/task"
-	"gitea.com/bitwsd/document_ai/internal/service"
-	"gitea.com/bitwsd/document_ai/pkg/common"
+	"gitea.com/texpixel/document_ai/internal/model/task"
+	"gitea.com/texpixel/document_ai/internal/service"
+	"gitea.com/texpixel/document_ai/pkg/common"
+	"gitea.com/texpixel/document_ai/pkg/log"
 	"github.com/gin-gonic/gin"
 )
@@ -43,6 +43,8 @@ func (h *TaskEndpoint) GetTaskList(c *gin.Context) {
 		return
 	}
+
+	req.UserID = common.GetUserIDFromContext(c)
 	if req.Page <= 0 {
 		req.Page = 1
 	}
@@ -59,3 +61,31 @@ func (h *TaskEndpoint) GetTaskList(c *gin.Context) {
	c.JSON(http.StatusOK, common.SuccessResponse(c, resp))
}

func (h *TaskEndpoint) ExportTask(c *gin.Context) {
	var req task.ExportTaskRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		log.Error(c, "func", "ExportTask", "msg", "Invalid parameters", "error", err)
		c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeParamError, "Invalid parameters"))
		return
	}
	fileData, contentType, err := h.taskService.ExportTask(c, &req)
	if err != nil {
		c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeSystemError, "failed to export task"))
		return
	}

	// set filename based on export type
	var filename string
	switch req.Type {
	case "pdf":
		filename = "texpixel_export.pdf"
	case "docx":
		filename = "texpixel_export.docx"
	default:
		filename = "texpixel_export"
	}
	c.Header("Content-Disposition", "attachment; filename="+filename)
	c.Data(http.StatusOK, contentType, fileData)
}
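ExportTask builds the Content-Disposition header by plain concatenation, which is safe for the fixed names above; if the filename ever becomes user-derived, quoting it avoids a malformed header. A minimal sketch (contentDisposition is a hypothetical helper, not code from the handler):

```go
package main

import "fmt"

// contentDisposition builds the attachment header value with the filename
// quoted via %q, so names containing spaces or semicolons survive intact.
// The handler above concatenates its fixed filenames unquoted, which only
// works because those names contain no special characters.
func contentDisposition(filename string) string {
	return fmt.Sprintf("attachment; filename=%q", filename)
}

func main() {
	fmt.Println(contentDisposition("texpixel_export.pdf"))
	fmt.Println(contentDisposition("my report.pdf"))
}
```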


@@ -1,11 +0,0 @@
package task

import (
	"github.com/gin-gonic/gin"
)

func SetupRouter(engine *gin.RouterGroup) {
	endpoint := NewTaskEndpoint()
	engine.POST("/task/evaluate", endpoint.EvaluateTask)
	engine.GET("/task/list", endpoint.GetTaskList)
}


@@ -1,15 +1,17 @@
 package user

 import (
+	"fmt"
 	"net/http"
+	"net/url"

-	"gitea.com/bitwsd/core/common/log"
-	"gitea.com/bitwsd/document_ai/config"
-	model "gitea.com/bitwsd/document_ai/internal/model/user"
-	"gitea.com/bitwsd/document_ai/internal/service"
-	"gitea.com/bitwsd/document_ai/pkg/common"
-	"gitea.com/bitwsd/document_ai/pkg/constant"
-	"gitea.com/bitwsd/document_ai/pkg/jwt"
+	"gitea.com/texpixel/document_ai/config"
+	model "gitea.com/texpixel/document_ai/internal/model/user"
+	"gitea.com/texpixel/document_ai/internal/service"
+	"gitea.com/texpixel/document_ai/pkg/common"
+	"gitea.com/texpixel/document_ai/pkg/constant"
+	"gitea.com/texpixel/document_ai/pkg/jwt"
+	"gitea.com/texpixel/document_ai/pkg/log"
 	"github.com/gin-gonic/gin"
 )
@@ -55,12 +57,15 @@ func (h *UserEndpoint) LoginByPhoneCode(ctx *gin.Context) {
 	if config.GlobalConfig.Server.IsDebug() {
 		uid := 1
-		token, err := jwt.CreateToken(jwt.User{UserId: int64(uid)})
+		tokenResult, err := jwt.CreateToken(jwt.User{UserId: int64(uid)})
 		if err != nil {
 			ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeUnauthorized, common.CodeUnauthorizedMsg))
 			return
 		}
-		ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.PhoneLoginResponse{Token: token}))
+		ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.PhoneLoginResponse{
+			Token:     tokenResult.Token,
+			ExpiresAt: tokenResult.ExpiresAt,
+		}))
 		return
 	}
@@ -70,13 +75,16 @@
 		return
 	}
-	token, err := jwt.CreateToken(jwt.User{UserId: uid})
+	tokenResult, err := jwt.CreateToken(jwt.User{UserId: uid})
 	if err != nil {
 		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeUnauthorized, common.CodeUnauthorizedMsg))
 		return
 	}
-	ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.PhoneLoginResponse{Token: token}))
+	ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.PhoneLoginResponse{
+		Token:     tokenResult.Token,
+		ExpiresAt: tokenResult.ExpiresAt,
+	}))
 }

 func (h *UserEndpoint) GetUserInfo(ctx *gin.Context) {
func (h *UserEndpoint) GetUserInfo(ctx *gin.Context) { func (h *UserEndpoint) GetUserInfo(ctx *gin.Context) {
@@ -92,14 +100,154 @@
 		return
 	}
-	status := 0
-	if user.ID > 0 {
-		status = 1
-	}
 	ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.UserInfoResponse{
 		Username: user.Username,
-		Phone:    user.Phone,
-		Status:   status,
+		Email:    user.Email,
 	}))
 }
func (h *UserEndpoint) SendEmailVerifyCode(ctx *gin.Context) {
	req := model.EmailVerifyCodeRequest{}
	if err := ctx.ShouldBindJSON(&req); err != nil {
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, common.CodeParamErrorMsg))
		return
	}
	if err := h.userService.SendEmailVerifyCode(ctx, req.Email); err != nil {
		if bizErr, ok := err.(*common.BusinessError); ok {
			ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, int(bizErr.Code), bizErr.Message))
			return
		}
		log.Error(ctx, "func", "SendEmailVerifyCode", "msg", "failed to send email verification code", "error", err)
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
		return
	}
	ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.EmailVerifyCodeResponse{}))
}

func (h *UserEndpoint) RegisterByEmail(ctx *gin.Context) {
	req := model.EmailRegisterRequest{}
	if err := ctx.ShouldBindJSON(&req); err != nil {
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, common.CodeParamErrorMsg))
		return
	}
	uid, err := h.userService.RegisterByEmail(ctx, req.Email, req.Password, req.VerifyCode)
	if err != nil {
		if bizErr, ok := err.(*common.BusinessError); ok {
			ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, int(bizErr.Code), bizErr.Message))
			return
		}
		log.Error(ctx, "func", "RegisterByEmail", "msg", "registration failed", "error", err)
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
		return
	}
	tokenResult, err := jwt.CreateToken(jwt.User{UserId: uid})
	if err != nil {
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
		return
	}
	ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.EmailRegisterResponse{
		Token:     tokenResult.Token,
		ExpiresAt: tokenResult.ExpiresAt,
	}))
}

func (h *UserEndpoint) LoginByEmail(ctx *gin.Context) {
	req := model.EmailLoginRequest{}
	if err := ctx.ShouldBindJSON(&req); err != nil {
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, common.CodeParamErrorMsg))
		return
	}
	uid, err := h.userService.LoginByEmail(ctx, req.Email, req.Password)
	if err != nil {
		if bizErr, ok := err.(*common.BusinessError); ok {
			ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, int(bizErr.Code), bizErr.Message))
			return
		}
		log.Error(ctx, "func", "LoginByEmail", "msg", "login failed", "error", err)
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
		return
	}
	tokenResult, err := jwt.CreateToken(jwt.User{UserId: uid})
	if err != nil {
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
		return
	}
	ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.EmailLoginResponse{
		Token:     tokenResult.Token,
		ExpiresAt: tokenResult.ExpiresAt,
	}))
}

func (h *UserEndpoint) GetGoogleOAuthUrl(ctx *gin.Context) {
	req := model.GoogleAuthUrlRequest{}
	if err := ctx.ShouldBindQuery(&req); err != nil {
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, common.CodeParamErrorMsg))
		return
	}
	googleConfig := config.GlobalConfig.Google
	if googleConfig.ClientID == "" {
		log.Error(ctx, "func", "GetGoogleOAuthUrl", "msg", "Google OAuth not configured")
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
		return
	}
	authURL := fmt.Sprintf(
		"https://accounts.google.com/o/oauth2/v2/auth?client_id=%s&redirect_uri=%s&response_type=code&scope=openid%%20email%%20profile&state=%s",
		url.QueryEscape(googleConfig.ClientID),
		url.QueryEscape(req.RedirectURI),
		url.QueryEscape(req.State),
	)
	ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.GoogleAuthUrlResponse{
		AuthURL: authURL,
	}))
}

func (h *UserEndpoint) GoogleOAuthCallback(ctx *gin.Context) {
	req := model.GoogleOAuthCallbackRequest{}
	if err := ctx.ShouldBindJSON(&req); err != nil {
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, common.CodeParamErrorMsg))
		return
	}
	googleConfig := config.GlobalConfig.Google
	if googleConfig.ClientID == "" || googleConfig.ClientSecret == "" {
		log.Error(ctx, "func", "GoogleOAuthCallback", "msg", "Google OAuth not configured")
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
		return
	}
	userInfo, err := h.userService.ExchangeGoogleCodeAndGetUserInfo(ctx, googleConfig.ClientID, googleConfig.ClientSecret, req.Code, req.RedirectURI)
	if err != nil {
		log.Error(ctx, "func", "GoogleOAuthCallback", "msg", "exchange code failed", "error", err)
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
		return
	}
	uid, err := h.userService.FindOrCreateGoogleUser(ctx, userInfo)
	if err != nil {
		log.Error(ctx, "func", "GoogleOAuthCallback", "msg", "find or create user failed", "error", err)
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
		return
	}
	tokenResult, err := jwt.CreateToken(jwt.User{UserId: uid})
	if err != nil {
		ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeUnauthorized, common.CodeUnauthorizedMsg))
		return
	}
	ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.GoogleOAuthCallbackResponse{
		Token:     tokenResult.Token,
		ExpiresAt: tokenResult.ExpiresAt,
	}))
}


@@ -1,16 +0,0 @@
package user

import (
	"gitea.com/bitwsd/document_ai/pkg/common"
	"github.com/gin-gonic/gin"
)

func SetupRouter(router *gin.RouterGroup) {
	userEndpoint := NewUserEndpoint()
	userRouter := router.Group("/user")
	{
		userRouter.POST("/get/sms", userEndpoint.SendVerificationCode)
		userRouter.POST("/login/phone", userEndpoint.LoginByPhoneCode)
		userRouter.GET("/info", common.GetAuthMiddleware(), userEndpoint.GetUserInfo)
	}
}

cmd/migrate/README.md (new file)

@@ -0,0 +1,73 @@
# Data Migration Tool

Migrates data from the test database to the production database, avoiding ID conflicts and using transactions to keep the data consistent.

## Features

- ✅ Avoids ID conflicts automatically (uses database auto-increment IDs)
- ✅ Wraps each task and its result data in a transaction for consistency
- ✅ Skips tasks that already exist (matched by task_uuid)
- ✅ Preserves original timestamps
- ✅ Handles NULL values
- ✅ Detailed logging and summary statistics

## Usage

### Basic usage

```bash
# Migrate from the dev environment to prod
go run cmd/migrate/main.go -test-env=dev -prod-env=prod

# Migrate from prod to dev (to test reverse migration)
go run cmd/migrate/main.go -test-env=prod -prod-env=dev
```

### Flags

- `-test-env`: source (test) environment config name (dev/prod), default: dev
- `-prod-env`: target (production) environment config name (dev/prod), default: prod

### Build and run

```bash
# Build
go build -o migrate cmd/migrate/main.go
# Run
./migrate -test-env=dev -prod-env=prod
```

## How it works

1. **Connect**: opens connections to both the test and production databases
2. **Read**: reads all tasks and their result data from the test database (LEFT JOIN)
3. **Deduplicate**: checks the production database for existing rows by `task_uuid`
4. **Migrate in transactions**: opens a separate transaction per task:
   - creates the task record (a new ID is auto-generated)
   - if result data exists, creates the result record linked to the new task ID
   - commits, or rolls back on failure
5. **Report**: prints migration statistics

## Notes

1. **Config files**: make sure `config/config_dev.yaml` and `config/config_prod.yaml` exist and are correct
2. **Database permissions**: the database users need read/write access
3. **Network**: both databases must be reachable at the same time
4. **Backups**: back up the production database before migrating
5. **ID conflicts**: handled automatically via database auto-increment IDs; existing data is never overwritten

## Sample output

```
read 100 task records from the test database
[1/100] task created: task_uuid=xxx, new ID=1001
[1/100] result created: task_id=1001
[2/100] skipped existing task: task_uuid=yyy, id=1002
...
migration summary:
  succeeded: 95
  skipped: 3
  failed: 2
data migration complete!
```

cmd/migrate/main.go (new file)

@@ -0,0 +1,270 @@
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"time"

	"gitea.com/texpixel/document_ai/config"
	"gitea.com/texpixel/document_ai/internal/storage/dao"
	"github.com/spf13/viper"
	"gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

func main() {
	// parse command-line flags
	testEnv := flag.String("test-env", "dev", "test environment config (dev/prod)")
	prodEnv := flag.String("prod-env", "prod", "production environment config (dev/prod)")
	flag.Parse()

	// load the test environment config
	testConfigPath := fmt.Sprintf("./config/config_%s.yaml", *testEnv)
	testConfig, err := loadDatabaseConfig(testConfigPath)
	if err != nil {
		log.Fatalf("failed to load test environment config: %v", err)
	}

	// connect to the test database
	testDSN := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8mb4&parseTime=True&loc=Asia%%2FShanghai",
		testConfig.Username, testConfig.Password, testConfig.Host, testConfig.Port, testConfig.DBName)
	testDB, err := gorm.Open(mysql.Open(testDSN), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Info),
	})
	if err != nil {
		log.Fatalf("failed to connect to the test database: %v", err)
	}

	// load the production environment config
	prodConfigPath := fmt.Sprintf("./config/config_%s.yaml", *prodEnv)
	prodConfig, err := loadDatabaseConfig(prodConfigPath)
	if err != nil {
		log.Fatalf("failed to load production environment config: %v", err)
	}

	// connect to the production database
	prodDSN := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8mb4&parseTime=True&loc=Asia%%2FShanghai",
		prodConfig.Username, prodConfig.Password, prodConfig.Host, prodConfig.Port, prodConfig.DBName)
	prodDB, err := gorm.Open(mysql.Open(prodDSN), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Info),
	})
	if err != nil {
		log.Fatalf("failed to connect to the production database: %v", err)
	}

	// run the migration
	if err := migrateData(testDB, prodDB); err != nil {
		log.Fatalf("data migration failed: %v", err)
	}
	log.Println("data migration complete!")
}
func migrateData(testDB, prodDB *gorm.DB) error {
	_ = context.Background() // kept for future use

	// read every task (with its result, if any) from the test database
	type TaskWithResult struct {
		// task columns
		TaskID        int64     `gorm:"column:id"`
		UserID        int64     `gorm:"column:user_id"`
		TaskUUID      string    `gorm:"column:task_uuid"`
		FileName      string    `gorm:"column:file_name"`
		FileHash      string    `gorm:"column:file_hash"`
		FileURL       string    `gorm:"column:file_url"`
		TaskType      string    `gorm:"column:task_type"`
		Status        int       `gorm:"column:status"`
		CompletedAt   time.Time `gorm:"column:completed_at"`
		Remark        string    `gorm:"column:remark"`
		IP            string    `gorm:"column:ip"`
		TaskCreatedAt time.Time `gorm:"column:created_at"`
		TaskUpdatedAt time.Time `gorm:"column:updated_at"`

		// result columns
		ResultID        *int64     `gorm:"column:result_id"`
		ResultTaskID    *int64     `gorm:"column:result_task_id"`
		ResultTaskType  *string    `gorm:"column:result_task_type"`
		Latex           *string    `gorm:"column:latex"`
		Markdown        *string    `gorm:"column:markdown"`
		MathML          *string    `gorm:"column:mathml"`
		ResultCreatedAt *time.Time `gorm:"column:result_created_at"`
		ResultUpdatedAt *time.Time `gorm:"column:result_updated_at"`
	}

	var tasksWithResults []TaskWithResult
	query := `
		SELECT
			t.id,
			t.user_id,
			t.task_uuid,
			t.file_name,
			t.file_hash,
			t.file_url,
			t.task_type,
			t.status,
			t.completed_at,
			t.remark,
			t.ip,
			t.created_at,
			t.updated_at,
			r.id as result_id,
			r.task_id as result_task_id,
			r.task_type as result_task_type,
			r.latex,
			r.markdown,
			r.mathml,
			r.created_at as result_created_at,
			r.updated_at as result_updated_at
		FROM recognition_tasks t
		LEFT JOIN recognition_results r ON t.id = r.task_id
		ORDER BY t.id
	`
	if err := testDB.Raw(query).Scan(&tasksWithResults).Error; err != nil {
		return fmt.Errorf("failed to read test data: %v", err)
	}
	log.Printf("read %d task records from the test database", len(tasksWithResults))

	successCount := 0
	skipCount := 0
	errorCount := 0

	// one transaction per task, so a single failure doesn't affect the rest
	for i, item := range tasksWithResults {
		// begin a transaction
		tx := prodDB.Begin()

		// check whether the production database already has this task_uuid
		var existingTask dao.RecognitionTask
		err := tx.Where("task_uuid = ?", item.TaskUUID).First(&existingTask).Error
		if err == nil {
			log.Printf("[%d/%d] skipped existing task: task_uuid=%s, id=%d", i+1, len(tasksWithResults), item.TaskUUID, existingTask.ID)
			tx.Rollback()
			skipCount++
			continue
		}
		if err != gorm.ErrRecordNotFound {
			log.Printf("[%d/%d] failed to check task existence: task_uuid=%s, error=%v", i+1, len(tasksWithResults), item.TaskUUID, err)
			tx.Rollback()
			errorCount++
			continue
		}

		// create the new task without an ID so the database generates one
		newTask := &dao.RecognitionTask{
			UserID:      item.UserID,
			TaskUUID:    item.TaskUUID,
			FileName:    item.FileName,
			FileHash:    item.FileHash,
			FileURL:     item.FileURL,
			TaskType:    dao.TaskType(item.TaskType),
			Status:      dao.TaskStatus(item.Status),
			CompletedAt: item.CompletedAt,
			Remark:      item.Remark,
			IP:          item.IP,
		}
		// preserve original timestamps
		newTask.CreatedAt = item.TaskCreatedAt
		newTask.UpdatedAt = item.TaskUpdatedAt

		if err := tx.Create(newTask).Error; err != nil {
			log.Printf("[%d/%d] failed to create task: task_uuid=%s, error=%v", i+1, len(tasksWithResults), item.TaskUUID, err)
			tx.Rollback()
			errorCount++
			continue
		}
		log.Printf("[%d/%d] task created: task_uuid=%s, new ID=%d", i+1, len(tasksWithResults), item.TaskUUID, newTask.ID)

		// if result data exists, create the result record
		if item.ResultID != nil {
			// handle columns that may be NULL
			latex := ""
			if item.Latex != nil {
				latex = *item.Latex
			}
			markdown := ""
			if item.Markdown != nil {
				markdown = *item.Markdown
			}
			mathml := ""
			if item.MathML != nil {
				mathml = *item.MathML
			}

			contentJSON, err := dao.MarshalFormulaContent(dao.FormulaContent{
				Latex:    latex,
				Markdown: markdown,
				MathML:   mathml,
			})
			if err != nil {
				log.Printf("[%d/%d] failed to marshal formula content: task_id=%d, error=%v", i+1, len(tasksWithResults), newTask.ID, err)
				tx.Rollback()
				errorCount++
				continue
			}

			newResult := dao.RecognitionResult{
				TaskID:   newTask.ID, // use the newly generated task ID
				TaskType: dao.TaskType(item.TaskType),
				Content:  contentJSON,
			}
			if err := newResult.SetMetaData(dao.ResultMetaData{TotalNum: 1}); err != nil {
				log.Printf("[%d/%d] failed to marshal metadata: task_id=%d, error=%v", i+1, len(tasksWithResults), newTask.ID, err)
				tx.Rollback()
				errorCount++
				continue
			}
			// preserve original timestamps
			if item.ResultCreatedAt != nil {
				newResult.CreatedAt = *item.ResultCreatedAt
			}
			if item.ResultUpdatedAt != nil {
				newResult.UpdatedAt = *item.ResultUpdatedAt
			}

			if err := tx.Create(&newResult).Error; err != nil {
				log.Printf("[%d/%d] failed to create result: task_id=%d, error=%v", i+1, len(tasksWithResults), newTask.ID, err)
				tx.Rollback() // roll back the whole transaction, including the task
				errorCount++
				continue
			}
			log.Printf("[%d/%d] result created: task_id=%d", i+1, len(tasksWithResults), newTask.ID)
		}

		// commit the transaction
		if err := tx.Commit().Error; err != nil {
			log.Printf("[%d/%d] failed to commit transaction: task_uuid=%s, error=%v", i+1, len(tasksWithResults), item.TaskUUID, err)
			errorCount++
			continue
		}
		successCount++
	}

	log.Printf("migration summary:")
	log.Printf("  succeeded: %d", successCount)
	log.Printf("  skipped: %d", skipCount)
	log.Printf("  failed: %d", errorCount)
	return nil
}
// loadDatabaseConfig loads the database section from a config file
func loadDatabaseConfig(configPath string) (config.DatabaseConfig, error) {
	v := viper.New()
	v.SetConfigFile(configPath)
	if err := v.ReadInConfig(); err != nil {
		return config.DatabaseConfig{}, err
	}
	var dbConfig config.DatabaseConfig
	if err := v.UnmarshalKey("database", &dbConfig); err != nil {
		return config.DatabaseConfig{}, err
	}
	return dbConfig, nil
}


@@ -1,7 +1,7 @@
 package config

 import (
-	"gitea.com/bitwsd/core/common/log"
+	"gitea.com/texpixel/document_ai/pkg/log"
 	"github.com/spf13/viper"
 )
@@ -13,6 +13,44 @@ type Config struct {
 	UploadDir string       `mapstructure:"upload_dir"`
 	Limit     LimitConfig  `mapstructure:"limit"`
 	Aliyun    AliyunConfig `mapstructure:"aliyun"`
+	Mathpix   MathpixConfig     `mapstructure:"mathpix"`
+	BaiduOCR  BaiduOCRConfig    `mapstructure:"baidu_ocr"`
+	Google    GoogleOAuthConfig `mapstructure:"google"`
+	Email     EmailConfig       `mapstructure:"email"`
+}
+
+type EmailConfig struct {
+	FromName   string            `mapstructure:"from_name"`
+	FromAddr   string            `mapstructure:"from_addr"`
+	AliyunSMTP AliyunSMTPConfig  `mapstructure:"aliyun_smtp"`
+	Resend     ResendEmailConfig `mapstructure:"resend"`
+}
+
+type AliyunSMTPConfig struct {
+	Host     string `mapstructure:"host"`
+	Port     int    `mapstructure:"port"`
+	Username string `mapstructure:"username"`
+	Password string `mapstructure:"password"`
+}
+
+type ResendEmailConfig struct {
+	APIKey string `mapstructure:"api_key"`
+}
+
+type BaiduOCRConfig struct {
+	Token string `mapstructure:"token"`
+}
+
+type GoogleOAuthConfig struct {
+	ClientID     string `mapstructure:"client_id"`
+	ClientSecret string `mapstructure:"client_secret"`
+	RedirectURI  string `mapstructure:"redirect_uri"`
+	Proxy        string `mapstructure:"proxy"`
+}
+
+type MathpixConfig struct {
+	AppID  string `mapstructure:"app_id"`
+	AppKey string `mapstructure:"app_key"`
 }

 type LimitConfig struct {
type LimitConfig struct { type LimitConfig struct {


@@ -4,17 +4,17 @@ server:
 database:
   driver: mysql
-  host: 182.92.150.161
+  host: localhost
   port: 3006
   username: root
-  password: yoge@coder%%%123321!
+  password: root123
   dbname: doc_ai
   max_idle: 10
   max_open: 100

 redis:
-  addr: 182.92.150.161:6379
-  password: yoge@123321!
+  addr: localhost:6079
+  password: redis123
   db: 0

 limit:
@@ -22,7 +22,7 @@ limit:
 log:
   appName: document_ai
-  level: info # debug, info, warn, error
+  level: info
   format: console # json, console
   outputPath: ./logs/app.log # log file path
   maxSize: 2 # max size per log file (MB)
@@ -30,7 +30,6 @@ log:
   maxBackups: 1 # max number of rotated log files to keep
   compress: false # whether to compress rotated logs

 aliyun:
   sms:
     access_key_id: "LTAI5tB9ur4ExCF4dYPq7hLz"
@@ -39,8 +38,32 @@ aliyun:
     template_code: "SMS_291510729"
   oss:
-    endpoint: oss-cn-beijing.aliyuncs.com
+    endpoint: static.texpixel.com
     inner_endpoint: oss-cn-beijing-internal.aliyuncs.com
-    access_key_id: LTAI5tKogxeiBb4gJGWEePWN
-    access_key_secret: l4oCxtt5iLSQ1DAs40guTzKUfrxXwq
-    bucket_name: bitwsd-doc-ai
+    access_key_id: LTAI5t8qXhow6NCdYDtu1saF
+    access_key_secret: qZ2SwYsNCEBckCVSOszH31yYwXU44A
+    bucket_name: texpixel-doc1
+
+mathpix:
+  app_id: "ocr_eede6f_ea9b5c"
+  app_key: "fb72d251e33ac85c929bfd4eec40d78368d08d82fb2ee1cffb04a8bb967d1db5"
+
+baidu_ocr:
+  token: "e3a47bd2438f1f38840c203fc5939d17a54482d1"
+
+google:
+  client_id: "404402221037-nqdsk11bkpk5a7oh396mrg1ieh28u6q1.apps.googleusercontent.com"
+  client_secret: "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"
+  redirect_uri: "https://app.cloud.texpixel.com:10443/auth/google/callback"
+  proxy: "http://localhost:7890"
+
+email:
+  from_name: "TexPixel Support"
+  from_addr: "support@texpixel.com"
+  aliyun_smtp:
+    host: "smtp.qiye.aliyun.com"
+    port: 465
+    username: "support@texpixel.com"
+    password: "8bPw2W9LlgHSTTfk"
+  resend:
+    api_key: "re_dZxRaFAB_D5YME7u6kdRmDxqw4v1G7t87"

config/config_local.yaml (new file)

@@ -0,0 +1,69 @@
server:
  port: 8024
  mode: debug # debug/release

database:
  driver: mysql
  host: 192.168.5.56
  port: 3006
  username: root
  password: root123
  dbname: doc_ai
  max_idle: 10
  max_open: 100

redis:
  addr: 192.168.5.56:6079
  password: redis123
  db: 0

limit:
  formula_recognition: 3

log:
  appName: document_ai
  level: info
  format: console # json, console
  outputPath: ./logs/app.log # log file path
  maxSize: 2 # max size per log file (MB)
  maxAge: 1 # days to retain logs
  maxBackups: 1 # max number of rotated log files to keep
  compress: false # whether to compress rotated logs

aliyun:
  sms:
    access_key_id: "LTAI5tB9ur4ExCF4dYPq7hLz"
    access_key_secret: "91HulOdaCpwhfBesrUDiKYvyi0Qkx1"
    sign_name: "北京比特智源科技"
    template_code: "SMS_291510729"
  oss:
    endpoint: static.texpixel.com
    inner_endpoint: oss-cn-beijing-internal.aliyuncs.com
    access_key_id: LTAI5t8qXhow6NCdYDtu1saF
    access_key_secret: qZ2SwYsNCEBckCVSOszH31yYwXU44A
    bucket_name: texpixel-doc1

mathpix:
  app_id: "ocr_eede6f_ea9b5c"
  app_key: "fb72d251e33ac85c929bfd4eec40d78368d08d82fb2ee1cffb04a8bb967d1db5"

baidu_ocr:
  token: "e3a47bd2438f1f38840c203fc5939d17a54482d1"

google:
  client_id: "404402221037-nqdsk11bkpk5a7oh396mrg1ieh28u6q1.apps.googleusercontent.com"
  client_secret: "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"
  redirect_uri: "https://app.cloud.texpixel.com:10443/auth/google/callback"
  proxy: "http://192.168.5.56:7890"

email:
  from_name: "TexPixel Support"
  from_addr: "support@texpixel.com"
  aliyun_smtp:
    host: "smtp.qiye.aliyun.com"
    port: 465
    username: "support@texpixel.com"
    password: "8bPw2W9LlgHSTTfk"
  resend:
    api_key: "re_dZxRaFAB_D5YME7u6kdRmDxqw4v1G7t87"


@@ -4,21 +4,21 @@ server:
 database:
   driver: mysql
-  host: rm-bp1uh3e1qop18gz4wto.mysql.rds.aliyuncs.com
-  port: 3306
+  host: 172.31.134.12
+  port: 3006
   username: root
-  password: bitwsdttestESAadb12@3341
+  password: yoge@coder%%%123321!
   dbname: doc_ai
   max_idle: 10
-  max_open: 100
+  max_open: 30

 redis:
-  addr: 172.31.32.138:6379
+  addr: 172.31.134.12:6399
   password: bitwsd@8912WE!
   db: 0

 limit:
-  formula_recognition: 2
+  formula_recognition: 10

 log:
   appName: document_ai
@@ -38,8 +38,32 @@ aliyun:
     template_code: "SMS_291510729"
   oss:
-    endpoint: oss-cn-beijing.aliyuncs.com
+    endpoint: static.texpixel.com
     inner_endpoint: oss-cn-beijing-internal.aliyuncs.com
-    access_key_id: LTAI5tKogxeiBb4gJGWEePWN
-    access_key_secret: l4oCxtt5iLSQ1DAs40guTzKUfrxXwq
-    bucket_name: bitwsd-doc-ai
+    access_key_id: LTAI5t8qXhow6NCdYDtu1saF
+    access_key_secret: qZ2SwYsNCEBckCVSOszH31yYwXU44A
+    bucket_name: texpixel-doc1
+
+mathpix:
+  app_id: "ocr_eede6f_ea9b5c"
+  app_key: "fb72d251e33ac85c929bfd4eec40d78368d08d82fb2ee1cffb04a8bb967d1db5"
+
+baidu_ocr:
+  token: "e3a47bd2438f1f38840c203fc5939d17a54482d1"
+
+google:
+  client_id: "404402221037-nqdsk11bkpk5a7oh396mrg1ieh28u6q1.apps.googleusercontent.com"
+  client_secret: "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"
+  redirect_uri: "https://texpixel.com/auth/google/callback"
+  proxy: "http://100.115.184.74:7890"
+
+email:
+  from_name: "TexPixel Support"
+  from_addr: "support@texpixel.com"
+  aliyun_smtp:
+    host: "smtp.qiye.aliyun.com"
+    port: 465
+    username: "support@texpixel.com"
+    password: "8bPw2W9LlgHSTTfk"
+  resend:
+    api_key: "re_dZxRaFAB_D5YME7u6kdRmDxqw4v1G7t87"

docker-compose.infra.yml (new file)

@@ -0,0 +1,32 @@
services:
  mysql:
    image: mysql:8.0
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: texpixel#pwd123!
      MYSQL_DATABASE: doc_ai
      MYSQL_USER: texpixel
      MYSQL_PASSWORD: texpixel#pwd123!
    ports:
      - "3006:3306"
    volumes:
      - mysql_data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-ptexpixel#pwd123!"]
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 30s
    restart: always

  redis:
    image: redis:latest
    container_name: redis
    command: redis-server --requirepass "yoge@123321!"
    ports:
      - "6079:6379"
    restart: always

volumes:
  mysql_data:
    driver: local


@@ -1,36 +1,10 @@
-version: '3.8'
-
 services:
-  mysql:
-    image: mysql:8.0
-    container_name: mysql
-    environment:
-      MYSQL_ROOT_PASSWORD: 123456 # root password
-      MYSQL_DATABASE: document_ai # default database name
-      MYSQL_USER: bitwsd_document # database user
-      MYSQL_PASSWORD: 123456 # database user password
-    ports:
-      - "3306:3306" # map host port 3306 to container port 3306
+  doc_ai:
+    build: .
+    container_name: doc_ai
+    network_mode: host
     volumes:
-      - mysql_data:/var/lib/mysql # persist MySQL data
-    networks:
-      - backend
+      - ./config:/app/config
+      - ./logs:/app/logs
+    command: ["-env", "dev"]
     restart: always
-
-  redis:
-    image: redis:latest
-    container_name: redis
-    ports:
-      - "6379:6379" # map host port 6379 to container port 6379
-    networks:
-      - backend
-    restart: always
-
-volumes:
-  mysql_data:
-    # persistent MySQL data volume
-    driver: local
-
-networks:
-  backend:
-    driver: bridge

File diff suppressed because it is too large

go.mod

@@ -1,26 +1,32 @@
-module gitea.com/bitwsd/document_ai
+module gitea.com/texpixel/document_ai

 go 1.20

 require (
-	gitea.com/bitwsd/core v0.0.0-20241128075635-8d72a929b914
 	github.com/alibabacloud-go/darabonba-openapi v0.2.1
 	github.com/alibabacloud-go/dysmsapi-20170525/v2 v2.0.18
 	github.com/alibabacloud-go/tea v1.1.19
 	github.com/alibabacloud-go/tea-utils v1.4.5
 	github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
 	github.com/dgrijalva/jwt-go v3.2.0+incompatible
-	github.com/gin-gonic/gin v1.10.0
+	github.com/gen2brain/go-fitz v1.24.15
+	github.com/gin-gonic/gin v1.9.1
 	github.com/google/uuid v1.6.0
+	github.com/jtolds/gls v4.20.0+incompatible
 	github.com/redis/go-redis/v9 v9.7.0
-	github.com/spf13/viper v1.19.0
+	github.com/rs/zerolog v1.33.0
+	github.com/spf13/viper v1.18.2
+	golang.org/x/crypto v0.20.0
+	gopkg.in/natefinch/lumberjack.v2 v2.2.1
+	gorm.io/datatypes v1.2.0
 	gorm.io/driver/mysql v1.5.7
-	gorm.io/gorm v1.25.12
+	gorm.io/gorm v1.30.0
 )

-require github.com/go-sql-driver/mysql v1.7.0 // indirect
+require github.com/go-sql-driver/mysql v1.8.1 // indirect

 require (
+	filippo.io/edwards25519 v1.1.0 // indirect
 	github.com/alibabacloud-go/alibabacloud-gateway-spi v0.0.4 // indirect
 	github.com/alibabacloud-go/debug v0.0.0-20190504072949-9472017b5c68 // indirect
 	github.com/alibabacloud-go/endpoint-util v1.1.0 // indirect
@@ -34,6 +40,7 @@ require (
 	github.com/cloudwego/base64x v0.1.4 // indirect
 	github.com/cloudwego/iasm v0.2.0 // indirect
 	github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
+	github.com/ebitengine/purego v0.8.4 // indirect
 	github.com/fsnotify/fsnotify v1.7.0 // indirect
 	github.com/gabriel-vasile/mimetype v1.4.3 // indirect
 	github.com/gin-contrib/sse v0.1.0 // indirect
@@ -41,20 +48,21 @@ require (
 	github.com/go-playground/universal-translator v0.18.1 // indirect
 	github.com/go-playground/validator/v10 v10.20.0 // indirect
 	github.com/goccy/go-json v0.10.2 // indirect
+	github.com/gopherjs/gopherjs v0.0.0-20200217142428-fce0ec30dd00 // indirect
 	github.com/hashicorp/hcl v1.0.0 // indirect
 	github.com/jinzhu/inflection v1.0.0 // indirect
 	github.com/jinzhu/now v1.1.5 // indirect
 	github.com/json-iterator/go v1.1.12 // indirect
+	github.com/jupiterrider/ffi v0.5.0 // indirect
 	github.com/klauspost/cpuid/v2 v2.2.7 // indirect
 	github.com/leodido/go-urn v1.4.0 // indirect
 	github.com/magiconair/properties v1.8.7 // indirect
 	github.com/mattn/go-colorable v0.1.13 // indirect
-	github.com/mattn/go-isatty v0.0.20 // indirect
+	github.com/mattn/go-isatty v0.0.19 // indirect
 	github.com/mitchellh/mapstructure v1.5.0 // indirect
 	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
 	github.com/modern-go/reflect2 v1.0.2 // indirect
 	github.com/pelletier/go-toml/v2 v2.2.2 // indirect
-	github.com/rs/zerolog v1.33.0 // indirect
 	github.com/sagikazarmark/locafero v0.4.0 // indirect
 	github.com/sagikazarmark/slog-shim v0.1.0 // indirect
 	github.com/sourcegraph/conc v0.3.0 // indirect
@@ -68,14 +76,12 @@ require (
 	go.uber.org/atomic v1.9.0 // indirect
 	go.uber.org/multierr v1.9.0 // indirect
 	golang.org/x/arch v0.8.0 // indirect
-	golang.org/x/crypto v0.23.0 // indirect
 	golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
-	golang.org/x/net v0.25.0 // indirect
+	golang.org/x/net v0.21.0 // indirect
-	golang.org/x/sys v0.20.0 // indirect
+	golang.org/x/sys v0.33.0 // indirect
-	golang.org/x/text v0.15.0 // indirect
+	golang.org/x/text v0.20.0 // indirect
 	golang.org/x/time v0.5.0 // indirect
 	google.golang.org/protobuf v1.34.1 // indirect
 	gopkg.in/ini.v1 v1.67.0 // indirect
-	gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 )

go.sum

@@ -1,5 +1,5 @@
-gitea.com/bitwsd/core v0.0.0-20241128075635-8d72a929b914 h1:3aRCeiuq/PWMr2yjEN9Y5NusfmpdMKiO4i/5tM5qc34=
-gitea.com/bitwsd/core v0.0.0-20241128075635-8d72a929b914/go.mod h1:hbEUo3t/AFGCnQbxwdG4oiw2IHdlRgK02cqd0yicP1Y=
+filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
+filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
 github.com/alibabacloud-go/alibabacloud-gateway-spi v0.0.4 h1:iC9YFYKDGEy3n/FtqJnOkZsene9olVspKmkX5A2YBEo=
 github.com/alibabacloud-go/alibabacloud-gateway-spi v0.0.4/go.mod h1:sCavSAvdzOjul4cEqeVtvlSaSScfNsTQ+46HwlTL1hc=
 github.com/alibabacloud-go/darabonba-openapi v0.1.18/go.mod h1:PB4HffMhJVmAgNKNq3wYbTUlFvPgxJpTzd1F5pTuUsc=
@@ -33,9 +33,7 @@ github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible/go.mod h1:T/Aws4fEfogEE9
 github.com/aliyun/credentials-go v1.1.2 h1:qU1vwGIBb3UJ8BwunHDRFtAhS6jnQLnde/yk0+Ih2GY=
 github.com/aliyun/credentials-go v1.1.2/go.mod h1:ozcZaMR5kLM7pwtCMEpVmQ242suV6qTJya2bDq4X1Tw=
 github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
-github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
 github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
-github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
 github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc0=
 github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4=
 github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM=
@@ -53,43 +51,50 @@ github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSV
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
-github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
 github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
 github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
 github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
+github.com/ebitengine/purego v0.8.4 h1:CF7LEKg5FFOsASUj0+QwaXf8Ht6TlFxg09+S9wz0omw=
+github.com/ebitengine/purego v0.8.4/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
 github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
-github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
 github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
 github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
 github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
 github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk=
+github.com/gen2brain/go-fitz v1.24.15 h1:sJNB1MOWkqnzzENPHggFpgxTwW0+S5WF/rM5wUBpJWo=
+github.com/gen2brain/go-fitz v1.24.15/go.mod h1:SftkiVbTHqF141DuiLwBBM65zP7ig6AVDQpf2WlHamo=
 github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
 github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
-github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU=
-github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
+github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
+github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
 github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
-github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
 github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
 github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
 github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
 github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
 github.com/go-playground/validator/v10 v10.20.0 h1:K9ISHbSaI0lyB2eWMPJo+kOS/FBExVwjEviJTixqxL8=
 github.com/go-playground/validator/v10 v10.20.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
-github.com/go-sql-driver/mysql v1.7.0 h1:ueSltNNllEqE3qcWBTD0iQd3IpL/6U+mJxLkazJ7YPc=
 github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
+github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=
+github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=
 github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
 github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
 github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
+github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9 h1:au07oEsX2xN0ktxqI+Sida1w446QrXBRJ0nee3SNZlA=
+github.com/golang-sql/sqlexp v0.1.0 h1:ZCD6MBpcuOVfGVqsEmY5/4FtYiKz6tSyUv9LPEDei6A=
 github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
-github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
 github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
 github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
+github.com/gopherjs/gopherjs v0.0.0-20200217142428-fce0ec30dd00 h1:l5lAOZEym3oK3SQ2HBHWsJUfbNBiTXJDeW2QDxw9AQ0=
 github.com/gopherjs/gopherjs v0.0.0-20200217142428-fce0ec30dd00/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
 github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
 github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
+github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
+github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a h1:bbPeKD0xmW/Y25WS6cokEszi5g+S0QxI/d45PkRi7Nk=
+github.com/jackc/pgx/v5 v5.3.0 h1:/NQi8KHMpKWHInxXesC8yD4DhkXPrVhmnwYkjp9AmBA=
 github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
 github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
 github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
@@ -97,17 +102,18 @@ github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/
 github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
 github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
 github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
+github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
 github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
+github.com/jupiterrider/ffi v0.5.0 h1:j2nSgpabbV1JOwgP4Kn449sJUHq3cVLAZVBoOYn44V8=
+github.com/jupiterrider/ffi v0.5.0/go.mod h1:x7xdNKo8h0AmLuXfswDUBxUsd2OqUP4ekC8sCnsmbvo=
 github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
 github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM=
 github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
 github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M=
 github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
-github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
 github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
-github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
 github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
 github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
 github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
@@ -115,9 +121,10 @@ github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3v
 github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
 github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
 github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
+github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
 github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
-github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
-github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-sqlite3 v1.14.15 h1:vfoHhTN1af61xCRSWzFIWzx2YskyMTwHLrExkBOjvxI=
+github.com/microsoft/go-mssqldb v0.17.0 h1:Fto83dMZPnYv1Zwx5vHHxpNraeEaUlQ/hhHLgZiaenE=
 github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
 github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
 github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -134,11 +141,9 @@ github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
-github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/redis/go-redis/v9 v9.7.0 h1:HhLSs+B6O021gwzl+locl0zEDnyNkxMtf/Z3NNBMa9E=
 github.com/redis/go-redis/v9 v9.7.0/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
 github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
-github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
 github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
 github.com/rs/zerolog v1.33.0 h1:1cU2KZkvPxNyfgEmhHAz/1A9Bz+llsdYzklWFzgp0r8=
 github.com/rs/zerolog v1.33.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
@@ -157,8 +162,8 @@ github.com/spf13/cast v1.6.0 h1:GEiTHELF+vaR5dhz3VqZfFSzZjYbgeKDpBxQVS4GYJ0=
 github.com/spf13/cast v1.6.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
 github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
 github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
-github.com/spf13/viper v1.19.0 h1:RWq5SEjt8o25SROyN3z2OrDB9l7RPd3lwTWU8EcEdcI=
-github.com/spf13/viper v1.19.0/go.mod h1:GQUN9bilAbhU/jgc1bKs99f/suXKeUMct8Adx5+Ntkg=
+github.com/spf13/viper v1.18.2 h1:LUXCnvUvSM6FXAsj6nnfc8Q2tp1dIgUfY9Kc8GsSOiQ=
+github.com/spf13/viper v1.18.2/go.mod h1:EKmWIqdnk5lOcmR72yw6hS+8OPYcwD0jteitLMVB+yk=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
 github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
@@ -194,8 +199,8 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
 golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20191219195013-becbf705a915/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
 golang.org/x/crypto v0.0.0-20200510223506-06a226fb4e37/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
-golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
-golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
+golang.org/x/crypto v0.20.0 h1:jmAMJJZXr5KiCw05dfYK9QnqaqKLYXijU23lsEdcQqg=
+golang.org/x/crypto v0.20.0/go.mod h1:Xwo95rrVNIoSMx9wa1JroENMToLWn3RNVrTBpLHgZPQ=
 golang.org/x/exp v0.0.0-20230905200255-921286631fa9 h1:GoHiUyI/Tp2nVkLI2mCxVkOjsbSXD66ic0XW0js0R9g=
 golang.org/x/exp v0.0.0-20230905200255-921286631fa9/go.mod h1:S2oDrQGGwySpoQPVqRShND87VCbxmc6bL1Yd2oYrm6k=
 golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -204,8 +209,8 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
 golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
-golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
-golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
+golang.org/x/net v0.21.0 h1:AQyQV4dYCvJ7vGmJyKki9+PBdyvhkSd8EIx/qb0AYv4=
+golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -217,12 +222,12 @@ golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
-golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
+golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
-golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
-golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
+golang.org/x/text v0.20.0 h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug=
+golang.org/x/text v0.20.0/go.mod h1:D4IsuqiFMhST5bX19pQ9ikHC2GsaKyk/oF+pn3ducp4=
 golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
 golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -247,10 +252,15 @@ gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gorm.io/datatypes v1.2.0 h1:5YT+eokWdIxhJgWHdrb2zYUimyk0+TaFth+7a0ybzco=
+gorm.io/datatypes v1.2.0/go.mod h1:o1dh0ZvjIjhH/bngTpypG6lVRJ5chTBxE09FH/71k04=
 gorm.io/driver/mysql v1.5.7 h1:MndhOPYOfEp2rHKgkZIhJ16eVUIRf2HmzgoPmh7FCWo=
 gorm.io/driver/mysql v1.5.7/go.mod h1:sEtPWMiqiN1N1cMXoXmBbd8C6/l+TESwriotuRRpkDM=
+gorm.io/driver/postgres v1.5.0 h1:u2FXTy14l45qc3UeCJ7QaAXZmZfDDv0YrthvmRq1l0U=
+gorm.io/driver/sqlite v1.4.3 h1:HBBcZSDnWi5BW3B3rwvVTc510KGkBkexlOg0QrmLUuU=
+gorm.io/driver/sqlserver v1.4.1 h1:t4r4r6Jam5E6ejqP7N82qAJIJAht27EGT41HyPfXRw0=
 gorm.io/gorm v1.25.7/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
-gorm.io/gorm v1.25.12 h1:I0u8i2hWQItBq1WfE0o2+WuL9+8L21K9e2HHSTE/0f8=
-gorm.io/gorm v1.25.12/go.mod h1:xh7N7RHfYlNc5EmcI/El95gXusucDrQnHXe0+CgWcLQ=
+gorm.io/gorm v1.30.0 h1:qbT5aPv1UH8gI99OsRlvDToLxW5zR7FzS9acZDOZcgs=
+gorm.io/gorm v1.30.0/go.mod h1:8Z33v652h4//uMA76KjeDH8mJXPm1QNCYrMeatR0DOE=
 nullprogram.com/x/optparse v1.0.0/go.mod h1:KdyPE+Igbe0jQUrVfMqDMeJQIJZEuyV7pjYmp6pbG50=
 rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=


@@ -0,0 +1,34 @@
package analytics

import "time"

// TrackEventRequest is the request for reporting a single tracking event.
type TrackEventRequest struct {
	TaskNo     string                 `json:"task_no" binding:"required"`
	UserID     int64                  `json:"user_id"`
	EventName  string                 `json:"event_name" binding:"required"`
	Properties map[string]interface{} `json:"properties"`
	DeviceInfo map[string]interface{} `json:"device_info"`
	MetaData   map[string]interface{} `json:"meta_data"`
}

// BatchTrackEventRequest is the request for reporting tracking events in batch.
type BatchTrackEventRequest struct {
	Events []TrackEventRequest `json:"events" binding:"required,min=1,max=100"`
}

// QueryEventsRequest is the request for querying events.
type QueryEventsRequest struct {
	UserID    *int64     `json:"user_id" form:"user_id"`
	EventName string     `json:"event_name" form:"event_name"`
	StartTime *time.Time `json:"start_time" form:"start_time"`
	EndTime   *time.Time `json:"end_time" form:"end_time"`
	Page      int        `json:"page" form:"page" binding:"required,min=1"`
	PageSize  int        `json:"page_size" form:"page_size" binding:"required,min=1,max=100"`
}

// EventStatsRequest is the request for event statistics over a time range.
type EventStatsRequest struct {
	StartTime time.Time `json:"start_time" form:"start_time" binding:"required"`
	EndTime   time.Time `json:"end_time" form:"end_time" binding:"required"`
}
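The `binding:"required"` tags above are enforced by the framework's model binding at request time. A minimal stdlib-only sketch of what that validation amounts to for `TrackEventRequest` (the `decodeTrackEvent` helper is hypothetical, not part of the service):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// TrackEventRequest mirrors the request type above (trimmed to the
// fields needed for the sketch).
type TrackEventRequest struct {
	TaskNo     string                 `json:"task_no"`
	UserID     int64                  `json:"user_id"`
	EventName  string                 `json:"event_name"`
	Properties map[string]interface{} `json:"properties"`
}

// decodeTrackEvent parses a JSON body and enforces the required fields
// that the binding tags would normally check.
func decodeTrackEvent(body []byte) (*TrackEventRequest, error) {
	var req TrackEventRequest
	if err := json.Unmarshal(body, &req); err != nil {
		return nil, err
	}
	if req.TaskNo == "" || req.EventName == "" {
		return nil, errors.New("task_no and event_name are required")
	}
	return &req, nil
}

func main() {
	body := []byte(`{"task_no":"T-1001","user_id":42,"event_name":"pdf_upload","properties":{"pages":10}}`)
	req, err := decodeTrackEvent(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.EventName, req.UserID) // pdf_upload 42
}
```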


@@ -0,0 +1,36 @@
package analytics

import "time"

// EventResponse is a single event.
type EventResponse struct {
	ID         int64                  `json:"id"`
	UserID     int64                  `json:"user_id"`
	EventName  string                 `json:"event_name"`
	Properties map[string]interface{} `json:"properties"`
	DeviceInfo map[string]interface{} `json:"device_info"`
	MetaData   map[string]interface{} `json:"meta_data"`
	CreatedAt  time.Time              `json:"created_at"`
}

// EventListResponse is a paginated list of events.
type EventListResponse struct {
	Events []*EventResponse `json:"events"`
	Total  int64            `json:"total"`
	Page   int              `json:"page"`
	Size   int              `json:"size"`
}

// EventStatsResponse is the statistics for one event name.
type EventStatsResponse struct {
	EventName   string `json:"event_name"`
	Count       int64  `json:"count"`
	UniqueUsers int64  `json:"unique_users"`
}

// EventStatsListResponse is the list of event statistics for a time range.
type EventStatsListResponse struct {
	Stats     []*EventStatsResponse `json:"stats"`
	StartTime time.Time             `json:"start_time"`
	EndTime   time.Time             `json:"end_time"`
}
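`EventListResponse` carries the usual offset-pagination envelope: `Total` is the full match count while `Events` holds only the requested page. The service presumably pages in the database query; a minimal in-memory sketch of the same contract (the `paginate` helper is hypothetical):

```go
package main

import "fmt"

// EventResponse mirrors the response type above (trimmed for the sketch).
type EventResponse struct {
	ID        int64  `json:"id"`
	EventName string `json:"event_name"`
}

// EventListResponse mirrors the paginated envelope above.
type EventListResponse struct {
	Events []*EventResponse `json:"events"`
	Total  int64            `json:"total"`
	Page   int              `json:"page"`
	Size   int              `json:"size"`
}

// paginate slices one 1-based page out of the full list and fills the
// Total/Page/Size envelope. Out-of-range pages yield an empty Events slice.
func paginate(all []*EventResponse, page, size int) EventListResponse {
	start := (page - 1) * size
	if start > len(all) {
		start = len(all)
	}
	end := start + size
	if end > len(all) {
		end = len(all)
	}
	return EventListResponse{
		Events: all[start:end],
		Total:  int64(len(all)),
		Page:   page,
		Size:   size,
	}
}

func main() {
	all := []*EventResponse{{ID: 1, EventName: "a"}, {ID: 2, EventName: "b"}, {ID: 3, EventName: "c"}}
	resp := paginate(all, 2, 2) // second page of size 2 -> just event 3
	fmt.Println(resp.Total, len(resp.Events), resp.Events[0].ID)
}
```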


@@ -5,6 +5,7 @@ type CreateFormulaRecognitionRequest struct {
 	FileHash string `json:"file_hash" binding:"required"` // file hash
 	FileName string `json:"file_name" binding:"required"` // file name
 	TaskType string `json:"task_type" binding:"required,oneof=FORMULA"` // task type
+	UserID   int64  `json:"user_id"` // user id
 }
 type GetRecognitionStatusRequest struct {


@@ -10,4 +10,20 @@ type GetFormulaTaskResponse struct {
 	Status int    `json:"status"`
 	Count  int    `json:"count"`
 	Latex  string `json:"latex"`
+	Markdown string `json:"markdown"`
+	MathML   string `json:"mathml"`
+	MML      string `json:"mml"`
+}
+// FormulaRecognitionResponse is the response returned by the formula recognition service
+type FormulaRecognitionResponse struct {
+	Result string `json:"result"`
+}
+// ImageOCRResponse is the response returned by the image OCR endpoint
+type ImageOCRResponse struct {
+	Markdown string `json:"markdown"` // content in Markdown format
+	Latex    string `json:"latex"`    // content in LaTeX format (empty when there is no formula)
+	MathML   string `json:"mathml"`   // MathML format (empty when there is no formula)
+	MML      string `json:"mml"`      // MML format (empty when there is no formula)
 }


@@ -0,0 +1,34 @@
package pdf

// CreatePDFRecognitionRequest request to create a PDF recognition task
type CreatePDFRecognitionRequest struct {
	FileURL  string `json:"file_url" binding:"required"`
	FileHash string `json:"file_hash" binding:"required"`
	FileName string `json:"file_name" binding:"required"`
	UserID   int64  `json:"user_id"`
}

// GetPDFTaskRequest URI parameters
type GetPDFTaskRequest struct {
	TaskNo string `uri:"task_no" binding:"required"`
}

// CreatePDFTaskResponse response to task creation
type CreatePDFTaskResponse struct {
	TaskNo string `json:"task_no"`
	Status int    `json:"status"`
}

// PDFPageResult result for a single page
type PDFPageResult struct {
	PageNumber int    `json:"page_number"`
	Markdown   string `json:"markdown"`
}

// GetPDFTaskResponse task status and result query response
type GetPDFTaskResponse struct {
	TaskNo     string          `json:"task_no"`
	Status     int             `json:"status"`
	TotalPages int             `json:"total_pages"`
	Pages      []PDFPageResult `json:"pages"`
}


@@ -8,29 +8,27 @@ type EvaluateTaskRequest struct {
 }
 type TaskListRequest struct {
-	TaskType string `json:"task_type" form:"task_type" binding:"required"`
+	TaskType string `json:"task_type" form:"task_type"`
 	Page     int    `json:"page" form:"page"`
 	PageSize int    `json:"page_size" form:"page_size"`
-}
-type PdfInfo struct {
-	PageCount  int `json:"page_count"`
-	PageWidth  int `json:"page_width"`
-	PageHeight int `json:"page_height"`
+	UserID   int64  `json:"-"`
 }
 type TaskListDTO struct {
 	TaskID   string `json:"task_id"`
 	FileName string `json:"file_name"`
-	Status   string `json:"status"`
-	Path     string `json:"path"`
+	Status    int    `json:"status"`
+	OriginURL string `json:"origin_url"`
 	TaskType  string `json:"task_type"`
 	CreatedAt string `json:"created_at"`
-	PdfInfo   PdfInfo `json:"pdf_info"`
 }
 type TaskListResponse struct {
 	TaskList []*TaskListDTO `json:"task_list"`
-	HasMore  bool           `json:"has_more"`
-	NextPage int            `json:"next_page"`
+	Total    int64          `json:"total"`
+}
+type ExportTaskRequest struct {
+	TaskNo string `json:"task_no" binding:"required"`
+	Type   string `json:"type" binding:"required,oneof=pdf docx"`
 }


@@ -15,10 +15,65 @@ type PhoneLoginRequest struct {
 type PhoneLoginResponse struct {
 	Token     string `json:"token"`
+	ExpiresAt int64  `json:"expires_at"`
 }
 type UserInfoResponse struct {
 	Username string `json:"username"`
-	Phone    string `json:"phone"`
-	Status   int    `json:"status"` // 0: not login, 1: login
+	Email    string `json:"email"`
+}
+type EmailVerifyCodeRequest struct {
+	Email string `json:"email" binding:"required,email"`
+}
+type EmailVerifyCodeResponse struct{}
+type EmailRegisterRequest struct {
+	Email      string `json:"email" binding:"required,email"`
+	Password   string `json:"password" binding:"required,min=6"`
+	VerifyCode string `json:"code" binding:"required"`
+}
+type EmailRegisterResponse struct {
+	Token     string `json:"token"`
+	ExpiresAt int64  `json:"expires_at"`
+}
+type EmailLoginRequest struct {
+	Email    string `json:"email" binding:"required,email"`
+	Password string `json:"password" binding:"required"`
+}
+type EmailLoginResponse struct {
+	Token     string `json:"token"`
+	ExpiresAt int64  `json:"expires_at"`
+}
+type GoogleAuthUrlRequest struct {
+	RedirectURI string `form:"redirect_uri" binding:"required"`
+	State       string `form:"state" binding:"required"`
+}
+type GoogleAuthUrlResponse struct {
+	AuthURL string `json:"auth_url"`
+}
+type GoogleOAuthCallbackRequest struct {
+	Code        string `json:"code" binding:"required"`
+	State       string `json:"state" binding:"required"`
+	RedirectURI string `json:"redirect_uri" binding:"required"`
+}
+type GoogleOAuthCallbackResponse struct {
+	Token     string `json:"token"`
+	ExpiresAt int64  `json:"expires_at"`
+}
+type GoogleUserInfo struct {
+	ID            string `json:"id"`
+	Email         string `json:"email"`
+	Name          string `json:"name"`
+	Picture       string `json:"picture"`
+	VerifiedEmail bool   `json:"verified_email"`
 }


@@ -0,0 +1,232 @@
package service
import (
"context"
"encoding/json"
"fmt"
"time"
"gitea.com/texpixel/document_ai/internal/model/analytics"
"gitea.com/texpixel/document_ai/internal/storage/dao"
"gitea.com/texpixel/document_ai/pkg/log"
"gorm.io/datatypes"
)
type AnalyticsService struct {
eventDao *dao.AnalyticsEventDao
}
func NewAnalyticsService() *AnalyticsService {
return &AnalyticsService{
eventDao: dao.NewAnalyticsEventDao(),
}
}
// TrackEvent records a single event
func (s *AnalyticsService) TrackEvent(ctx context.Context, req *analytics.TrackEventRequest) error {
// convert the maps to JSON
propertiesJSON, err := json.Marshal(req.Properties)
if err != nil {
log.Error(ctx, "marshal properties failed", "error", err)
return fmt.Errorf("invalid properties format")
}
deviceInfoJSON, err := json.Marshal(req.DeviceInfo)
if err != nil {
log.Error(ctx, "marshal device_info failed", "error", err)
return fmt.Errorf("invalid device_info format")
}
metaDataJSON, err := json.Marshal(req.MetaData)
if err != nil {
log.Error(ctx, "marshal meta_data failed", "error", err)
return fmt.Errorf("invalid meta_data format")
}
event := &dao.AnalyticsEvent{
UserID: req.UserID,
EventName: req.EventName,
Properties: datatypes.JSON(propertiesJSON),
DeviceInfo: datatypes.JSON(deviceInfoJSON),
MetaData: datatypes.JSON(metaDataJSON),
CreatedAt: time.Now(),
}
if err := s.eventDao.Create(dao.DB.WithContext(ctx), event); err != nil {
log.Error(ctx, "create analytics event failed", "error", err)
return fmt.Errorf("failed to track event")
}
log.Info(ctx, "event tracked successfully",
"event_id", event.ID,
"user_id", req.UserID,
"event_name", req.EventName)
return nil
}
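TrackEvent accepts free-form property maps and converts them to JSON before they are stored in a datatypes.JSON column. A runnable sketch of just that conversion step (marshalProps is an illustrative helper name, not part of the service):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// marshalProps mirrors the TrackEvent step that turns a free-form property
// map into JSON bytes suitable for a datatypes.JSON column.
func marshalProps(props map[string]interface{}) ([]byte, error) {
	return json.Marshal(props)
}

func main() {
	b, err := marshalProps(map[string]interface{}{"page": "/home", "duration_ms": 120})
	if err != nil {
		panic(err)
	}
	// encoding/json sorts map keys, so the output is deterministic
	fmt.Println(string(b)) // {"duration_ms":120,"page":"/home"}
}
```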
// BatchTrackEvents records multiple events in one call
func (s *AnalyticsService) BatchTrackEvents(ctx context.Context, req *analytics.BatchTrackEventRequest) error {
events := make([]*dao.AnalyticsEvent, 0, len(req.Events))
for _, eventReq := range req.Events {
propertiesJSON, err := json.Marshal(eventReq.Properties)
if err != nil {
log.Error(ctx, "marshal properties failed", "error", err)
continue
}
deviceInfoJSON, err := json.Marshal(eventReq.DeviceInfo)
if err != nil {
log.Error(ctx, "marshal device_info failed", "error", err)
continue
}
metaDataJSON, err := json.Marshal(eventReq.MetaData)
if err != nil {
log.Error(ctx, "marshal meta_data failed", "error", err)
continue
}
event := &dao.AnalyticsEvent{
UserID: eventReq.UserID,
EventName: eventReq.EventName,
Properties: datatypes.JSON(propertiesJSON),
DeviceInfo: datatypes.JSON(deviceInfoJSON),
MetaData: datatypes.JSON(metaDataJSON),
CreatedAt: time.Now(),
}
events = append(events, event)
}
if len(events) == 0 {
return fmt.Errorf("no valid events to track")
}
if err := s.eventDao.BatchCreate(dao.DB.WithContext(ctx), events); err != nil {
log.Error(ctx, "batch create analytics events failed", "error", err)
return fmt.Errorf("failed to batch track events")
}
log.Info(ctx, "batch events tracked successfully", "count", len(events))
return nil
}
// QueryEvents queries events
func (s *AnalyticsService) QueryEvents(ctx context.Context, req *analytics.QueryEventsRequest) (*analytics.EventListResponse, error) {
var events []*dao.AnalyticsEvent
var total int64
var err error
// dispatch on the provided filter combination
if req.UserID != nil && req.EventName != "" {
// events with a given name for one user
events, total, err = s.eventDao.GetUserEventsByName(dao.DB.WithContext(ctx), *req.UserID, req.EventName, req.Page, req.PageSize)
} else if req.UserID != nil {
// all events for one user
events, total, err = s.eventDao.GetUserEvents(dao.DB.WithContext(ctx), *req.UserID, req.Page, req.PageSize)
} else if req.EventName != "" {
// events with a given name
events, total, err = s.eventDao.GetEventsByName(dao.DB.WithContext(ctx), req.EventName, req.Page, req.PageSize)
} else if req.StartTime != nil && req.EndTime != nil {
// events within a time range
events, total, err = s.eventDao.GetEventsByTimeRange(dao.DB.WithContext(ctx), *req.StartTime, *req.EndTime, req.Page, req.PageSize)
} else {
return nil, fmt.Errorf("invalid query parameters")
}
if err != nil {
log.Error(ctx, "query events failed", "error", err)
return nil, fmt.Errorf("failed to query events")
}
// convert to the response format
eventResponses := make([]*analytics.EventResponse, 0, len(events))
for _, event := range events {
var properties, deviceInfo, metaData map[string]interface{}
if len(event.Properties) > 0 {
json.Unmarshal(event.Properties, &properties)
}
if len(event.DeviceInfo) > 0 {
json.Unmarshal(event.DeviceInfo, &deviceInfo)
}
if len(event.MetaData) > 0 {
json.Unmarshal(event.MetaData, &metaData)
}
eventResponses = append(eventResponses, &analytics.EventResponse{
ID: event.ID,
UserID: event.UserID,
EventName: event.EventName,
Properties: properties,
DeviceInfo: deviceInfo,
MetaData: metaData,
CreatedAt: event.CreatedAt,
})
}
return &analytics.EventListResponse{
Events: eventResponses,
Total: total,
Page: req.Page,
Size: req.PageSize,
}, nil
}
// GetEventStats returns per-event statistics
func (s *AnalyticsService) GetEventStats(ctx context.Context, req *analytics.EventStatsRequest) (*analytics.EventStatsListResponse, error) {
results, err := s.eventDao.GetEventStats(dao.DB.WithContext(ctx), req.StartTime, req.EndTime)
if err != nil {
log.Error(ctx, "get event stats failed", "error", err)
return nil, fmt.Errorf("failed to get event stats")
}
stats := make([]*analytics.EventStatsResponse, 0, len(results))
for _, result := range results {
stats = append(stats, &analytics.EventStatsResponse{
EventName: result["event_name"].(string),
Count: result["count"].(int64),
UniqueUsers: result["unique_users"].(int64),
})
}
return &analytics.EventStatsListResponse{
Stats: stats,
StartTime: req.StartTime,
EndTime: req.EndTime,
}, nil
}
// CountUserEvents counts the events of one user
func (s *AnalyticsService) CountUserEvents(ctx context.Context, userID int64) (int64, error) {
count, err := s.eventDao.CountUserEvents(dao.DB.WithContext(ctx), userID)
if err != nil {
log.Error(ctx, "count user events failed", "error", err, "user_id", userID)
return 0, fmt.Errorf("failed to count user events")
}
return count, nil
}
// CountEventsByName counts events with the given name
func (s *AnalyticsService) CountEventsByName(ctx context.Context, eventName string) (int64, error) {
count, err := s.eventDao.CountEventsByName(dao.DB.WithContext(ctx), eventName)
if err != nil {
log.Error(ctx, "count events by name failed", "error", err, "event_name", eventName)
return 0, fmt.Errorf("failed to count events")
}
return count, nil
}
// CleanOldEvents removes old data (can be run on a schedule)
func (s *AnalyticsService) CleanOldEvents(ctx context.Context, retentionDays int) error {
beforeTime := time.Now().AddDate(0, 0, -retentionDays)
if err := s.eventDao.DeleteOldEvents(dao.DB.WithContext(ctx), beforeTime); err != nil {
log.Error(ctx, "clean old events failed", "error", err, "before_time", beforeTime)
return fmt.Errorf("failed to clean old events")
}
log.Info(ctx, "old events cleaned successfully", "retention_days", retentionDays)
return nil
}


@@ -0,0 +1,338 @@
package service
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"fmt"
"image/png"
"io"
"net/http"
"time"
"github.com/gen2brain/go-fitz"
pdfmodel "gitea.com/texpixel/document_ai/internal/model/pdf"
"gitea.com/texpixel/document_ai/internal/storage/cache"
"gitea.com/texpixel/document_ai/internal/storage/dao"
"gitea.com/texpixel/document_ai/pkg/common"
"gitea.com/texpixel/document_ai/pkg/httpclient"
"gitea.com/texpixel/document_ai/pkg/log"
"gitea.com/texpixel/document_ai/pkg/oss"
"gitea.com/texpixel/document_ai/pkg/requestid"
"gitea.com/texpixel/document_ai/pkg/utils"
"gorm.io/gorm"
"gitea.com/texpixel/document_ai/internal/model/formula"
)
const (
pdfMaxPages = 10
pdfOCREndpoint = "https://cloud.texpixel.com:10443/doc_process/v1/image/ocr"
)
// PDFRecognitionService handles PDF recognition tasks
type PDFRecognitionService struct {
db *gorm.DB
queueLimit chan struct{}
stopChan chan struct{}
httpClient *httpclient.Client
}
func NewPDFRecognitionService() *PDFRecognitionService {
s := &PDFRecognitionService{
db: dao.DB,
queueLimit: make(chan struct{}, 3),
stopChan: make(chan struct{}),
httpClient: httpclient.NewClient(nil),
}
utils.SafeGo(func() {
lock, err := cache.GetPDFDistributedLock(context.Background())
if err != nil || !lock {
log.Error(context.Background(), "func", "NewPDFRecognitionService", "msg", "获取PDF分布式锁失败")
return
}
s.processPDFQueue(context.Background())
})
return s
}
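queueLimit is a buffered channel used as a counting semaphore: processOnePDFTask sends to acquire a slot and receives in a deferred function to release it, so at most three tasks are processed concurrently. A runnable sketch of the pattern under that assumption (runBounded and the job counts are illustrative, not part of the service):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runBounded launches jobs goroutines but lets at most limit of them run at
// once, using a buffered channel as a counting semaphore — the same
// queueLimit pattern as processOnePDFTask. It returns the peak observed
// concurrency.
func runBounded(limit, jobs int) int32 {
	sem := make(chan struct{}, limit)
	var inFlight, peak int32
	var wg sync.WaitGroup
	for i := 0; i < jobs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks once limit is reached)
			defer func() { <-sem }() // release the slot
			n := atomic.AddInt32(&inFlight, 1)
			for {
				p := atomic.LoadInt32(&peak)
				if n <= p || atomic.CompareAndSwapInt32(&peak, p, n) {
					break
				}
			}
			atomic.AddInt32(&inFlight, -1)
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	fmt.Println("peak within limit:", runBounded(3, 10) <= 3)
}
```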
// CreatePDFTask creates a recognition task and enqueues it
func (s *PDFRecognitionService) CreatePDFTask(ctx context.Context, req *pdfmodel.CreatePDFRecognitionRequest) (*dao.RecognitionTask, error) {
task := &dao.RecognitionTask{
UserID: req.UserID,
TaskUUID: utils.NewUUID(),
TaskType: dao.TaskTypePDF,
Status: dao.TaskStatusPending,
FileURL: req.FileURL,
FileName: req.FileName,
FileHash: req.FileHash,
IP: common.GetIPFromContext(ctx),
}
if err := dao.NewRecognitionTaskDao().Create(dao.DB.WithContext(ctx), task); err != nil {
log.Error(ctx, "func", "CreatePDFTask", "msg", "创建任务失败", "error", err)
return nil, common.NewError(common.CodeDBError, "创建任务失败", err)
}
if _, err := cache.PushPDFTask(ctx, task.ID); err != nil {
log.Error(ctx, "func", "CreatePDFTask", "msg", "推入队列失败", "error", err)
return nil, common.NewError(common.CodeSystemError, "推入队列失败", err)
}
return task, nil
}
// GetPDFTask queries task status and results
func (s *PDFRecognitionService) GetPDFTask(ctx context.Context, taskNo string, callerUserID int64) (*pdfmodel.GetPDFTaskResponse, error) {
sess := dao.DB.WithContext(ctx)
task, err := dao.NewRecognitionTaskDao().GetByTaskNo(sess, taskNo)
if err != nil {
if err == gorm.ErrRecordNotFound {
return nil, common.NewError(common.CodeNotFound, "任务不存在", err)
}
return nil, common.NewError(common.CodeDBError, "查询任务失败", err)
}
// type check: keep a formula task from being parsed as a PDF
if task.TaskType != dao.TaskTypePDF {
return nil, common.NewError(common.CodeNotFound, "任务不存在", nil)
}
// ownership check: a logged-in user may only query their own tasks
if callerUserID != 0 && task.UserID != 0 && callerUserID != task.UserID {
return nil, common.NewError(common.CodeNotFound, "任务不存在", nil)
}
resp := &pdfmodel.GetPDFTaskResponse{
TaskNo: taskNo,
Status: int(task.Status),
}
if task.Status != dao.TaskStatusCompleted {
return resp, nil
}
result, err := dao.NewRecognitionResultDao().GetByTaskID(sess, task.ID)
if err != nil || result == nil {
return nil, common.NewError(common.CodeDBError, "查询识别结果失败", err)
}
pages, err := result.GetPDFContent()
if err != nil {
return nil, common.NewError(common.CodeSystemError, "解析识别结果失败", err)
}
resp.TotalPages = len(pages)
for _, p := range pages {
resp.Pages = append(resp.Pages, pdfmodel.PDFPageResult{
PageNumber: p.PageNumber,
Markdown: p.Markdown,
})
}
return resp, nil
}
// processPDFQueue consumes the queue continuously
func (s *PDFRecognitionService) processPDFQueue(ctx context.Context) {
for {
select {
case <-s.stopChan:
return
default:
s.processOnePDFTask(ctx)
}
}
}
func (s *PDFRecognitionService) processOnePDFTask(ctx context.Context) {
s.queueLimit <- struct{}{}
defer func() { <-s.queueLimit }()
taskID, err := cache.PopPDFTask(ctx)
if err != nil {
log.Error(ctx, "func", "processOnePDFTask", "msg", "获取任务失败", "error", err)
return
}
task, err := dao.NewRecognitionTaskDao().GetTaskByID(dao.DB.WithContext(ctx), taskID)
if err != nil || task == nil {
log.Error(ctx, "func", "processOnePDFTask", "msg", "任务不存在", "task_id", taskID)
return
}
ctx = context.WithValue(ctx, utils.RequestIDKey, task.TaskUUID)
requestid.SetRequestID(task.TaskUUID, func() {
if err := s.processPDFTask(ctx, taskID, task.FileURL); err != nil {
log.Error(ctx, "func", "processOnePDFTask", "msg", "处理PDF任务失败", "error", err)
}
})
}
// processPDFTask core pipeline: download → pre-hook → per-page OCR → write to DB
func (s *PDFRecognitionService) processPDFTask(ctx context.Context, taskID int64, fileURL string) error {
ctx, cancel := context.WithTimeout(ctx, 10*time.Minute)
defer cancel()
taskDao := dao.NewRecognitionTaskDao()
resultDao := dao.NewRecognitionResultDao()
isSuccess := false
defer func() {
status, remark := dao.TaskStatusFailed, "任务处理失败"
if isSuccess {
status, remark = dao.TaskStatusCompleted, ""
}
_ = taskDao.Update(dao.DB.WithContext(context.Background()),
map[string]interface{}{"id": taskID},
map[string]interface{}{"status": status, "completed_at": time.Now(), "remark": remark},
)
}()
// mark the task as processing
if err := taskDao.Update(dao.DB.WithContext(ctx),
map[string]interface{}{"id": taskID},
map[string]interface{}{"status": dao.TaskStatusProcessing},
); err != nil {
return fmt.Errorf("更新任务状态失败: %w", err)
}
// download the PDF
reader, err := oss.DownloadFile(ctx, fileURL)
if err != nil {
return fmt.Errorf("下载PDF失败: %w", err)
}
defer reader.Close()
pdfBytes, err := io.ReadAll(reader)
if err != nil {
return fmt.Errorf("读取PDF数据失败: %w", err)
}
// pre-hook: render the first pdfMaxPages pages to PNG with go-fitz
pageImages, err := renderPDFPages(ctx, pdfBytes, pdfMaxPages)
if err != nil {
return fmt.Errorf("渲染PDF页面失败: %w", err)
}
processPages := len(pageImages)
log.Info(ctx, "func", "processPDFTask", "msg", "开始处理PDF", "task_id", taskID, "process_pages", processPages)
// run OCR page by page and collect the results
var pages []dao.PDFPageContent
for i, imgBytes := range pageImages {
ocrResult, err := s.callOCR(ctx, imgBytes)
if err != nil {
return fmt.Errorf("OCR第%d页失败: %w", i+1, err)
}
pages = append(pages, dao.PDFPageContent{
PageNumber: i + 1,
Markdown: ocrResult.Markdown,
})
log.Info(ctx, "func", "processPDFTask", "msg", "页面OCR完成", "page", i+1, "total", processPages)
}
// serialize and write to the DB (a single row)
contentJSON, err := dao.MarshalPDFContent(pages)
if err != nil {
return fmt.Errorf("序列化PDF内容失败: %w", err)
}
dbResult := dao.RecognitionResult{
TaskID: taskID,
TaskType: dao.TaskTypePDF,
Content: contentJSON,
}
if err := dbResult.SetMetaData(dao.ResultMetaData{TotalNum: processPages}); err != nil {
return fmt.Errorf("序列化MetaData失败: %w", err)
}
if err := resultDao.Create(dao.DB.WithContext(ctx), dbResult); err != nil {
return fmt.Errorf("保存PDF结果失败: %w", err)
}
isSuccess = true
return nil
}
// renderPDFPages renders a PDF to PNG byte slices with go-fitz, at most maxPages pages
func renderPDFPages(ctx context.Context, pdfBytes []byte, maxPages int) ([][]byte, error) {
doc, err := fitz.NewFromMemory(pdfBytes)
if err != nil {
return nil, fmt.Errorf("打开PDF失败: %w", err)
}
defer doc.Close()
total := doc.NumPage()
if total == 0 {
return nil, fmt.Errorf("PDF不包含任何页面")
}
if maxPages > 0 && total > maxPages {
total = maxPages
}
pages := make([][]byte, 0, total)
for i := 0; i < total; i++ {
select {
case <-ctx.Done():
return nil, ctx.Err()
default:
}
img, err := doc.Image(i)
if err != nil {
return nil, fmt.Errorf("渲染第%d页失败: %w", i+1, err)
}
var buf bytes.Buffer
if err := png.Encode(&buf, img); err != nil {
return nil, fmt.Errorf("编码第%d页PNG失败: %w", i+1, err)
}
pages = append(pages, buf.Bytes())
}
return pages, nil
}
// callOCR calls the same downstream OCR endpoint used by formula recognition
func (s *PDFRecognitionService) callOCR(ctx context.Context, imgBytes []byte) (*formula.ImageOCRResponse, error) {
reqBody := map[string]string{
"image_base64": base64.StdEncoding.EncodeToString(imgBytes),
}
jsonData, err := json.Marshal(reqBody)
if err != nil {
return nil, err
}
headers := map[string]string{
"Content-Type": "application/json",
utils.RequestIDHeaderKey: utils.GetRequestIDFromContext(ctx),
}
resp, err := s.httpClient.RequestWithRetry(ctx, http.MethodPost, pdfOCREndpoint, bytes.NewReader(jsonData), headers)
if err != nil {
return nil, fmt.Errorf("请求OCR接口失败: %w", err)
}
defer resp.Body.Close()
// treat a non-200 response from downstream as failure, so an error body is never stored as a recognition result
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return nil, fmt.Errorf("OCR接口返回非200状态: %d, body: %s", resp.StatusCode, string(body))
}
var ocrResp formula.ImageOCRResponse
if err := json.NewDecoder(resp.Body).Decode(&ocrResp); err != nil {
return nil, fmt.Errorf("解析OCR响应失败: %w", err)
}
return &ocrResp, nil
}
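callOCR wraps the raw page bytes as base64 inside an {"image_base64": ...} JSON object. A self-contained sketch of just that body construction (buildOCRBody is an illustrative helper name; the four sample bytes are the start of the PNG signature):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// buildOCRBody mirrors the request body built in callOCR: raw image bytes are
// base64-encoded and wrapped in a JSON object.
func buildOCRBody(imgBytes []byte) ([]byte, error) {
	return json.Marshal(map[string]string{
		"image_base64": base64.StdEncoding.EncodeToString(imgBytes),
	})
}

func main() {
	body, err := buildOCRBody([]byte{0x89, 0x50, 0x4e, 0x47})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // {"image_base64":"iVBORw=="}
}
```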
func (s *PDFRecognitionService) Stop() {
close(s.stopChan)
}
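Stop works by closing stopChan: a receive on a closed channel is always ready, so the select in processPDFQueue takes the stop case on its next iteration and the loop exits. A runnable sketch of the shutdown pattern (runUntilStopped is illustrative, not part of the service):

```go
package main

import "fmt"

// runUntilStopped sketches the processPDFQueue/Stop pattern: closing the stop
// channel makes the select's <-stop case ready, ending the loop.
func runUntilStopped(stop <-chan struct{}, work func()) int {
	iterations := 0
	for {
		select {
		case <-stop:
			return iterations
		default:
			work()
			iterations++
		}
	}
}

func main() {
	stop := make(chan struct{})
	n := 0
	count := runUntilStopped(stop, func() {
		n++
		if n == 5 {
			close(stop) // Stop() does exactly this
		}
	})
	fmt.Println(count) // 5
}
```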


@@ -7,21 +7,23 @@ import (
 	"encoding/json"
 	"fmt"
 	"io"
-	"mime/multipart"
 	"net/http"
 	"strings"
 	"time"
-	"gitea.com/bitwsd/core/common/log"
-	"gitea.com/bitwsd/document_ai/config"
-	"gitea.com/bitwsd/document_ai/internal/model/formula"
-	"gitea.com/bitwsd/document_ai/internal/storage/cache"
-	"gitea.com/bitwsd/document_ai/internal/storage/dao"
-	"gitea.com/bitwsd/document_ai/pkg/common"
-	"gitea.com/bitwsd/document_ai/pkg/constant"
-	"gitea.com/bitwsd/document_ai/pkg/httpclient"
-	"gitea.com/bitwsd/document_ai/pkg/oss"
-	"gitea.com/bitwsd/document_ai/pkg/utils"
+	"gitea.com/texpixel/document_ai/config"
+	"gitea.com/texpixel/document_ai/internal/model/formula"
+	"gitea.com/texpixel/document_ai/internal/storage/cache"
+	"gitea.com/texpixel/document_ai/internal/storage/dao"
+	"gitea.com/texpixel/document_ai/pkg/log"
+	"gitea.com/texpixel/document_ai/pkg/common"
+	"gitea.com/texpixel/document_ai/pkg/constant"
+	"gitea.com/texpixel/document_ai/pkg/httpclient"
+	"gitea.com/texpixel/document_ai/pkg/oss"
+	"gitea.com/texpixel/document_ai/pkg/requestid"
+	"gitea.com/texpixel/document_ai/pkg/utils"
 	"gorm.io/gorm"
 )
@@ -105,6 +107,7 @@ func (s *RecognitionService) CreateRecognitionTask(ctx context.Context, req *for
 	sess := dao.DB.WithContext(ctx)
 	taskDao := dao.NewRecognitionTaskDao()
 	task := &dao.RecognitionTask{
+		UserID:   req.UserID,
 		TaskUUID: utils.NewUUID(),
 		TaskType: dao.TaskType(req.TaskType),
 		Status:   dao.TaskStatusPending,
@@ -165,8 +168,24 @@ func (s *RecognitionService) GetFormualTask(ctx context.Context, taskNo string)
 		log.Error(ctx, "func", "GetFormualTask", "msg", "查询任务结果失败", "error", err, "task_no", taskNo)
 		return nil, common.NewError(common.CodeDBError, "查询任务结果失败", err)
 	}
-	latex := taskRet.NewContentCodec().GetContent().(string)
-	return &formula.GetFormulaTaskResponse{TaskNo: taskNo, Latex: latex, Status: int(task.Status)}, nil
+	formulaContent, err := taskRet.GetFormulaContent()
+	if err != nil {
+		log.Error(ctx, "func", "GetFormualTask", "msg", "解析公式内容失败", "error", err)
+		return nil, common.NewError(common.CodeSystemError, "解析识别结果失败", err)
+	}
+	markdown := formulaContent.Markdown
+	if markdown == "" {
+		markdown = fmt.Sprintf("$$%s$$", formulaContent.Latex)
+	}
+	return &formula.GetFormulaTaskResponse{
+		TaskNo:   taskNo,
+		Latex:    formulaContent.Latex,
+		Markdown: markdown,
+		MathML:   formulaContent.MathML,
+		MML:      formulaContent.MML,
+		Status:   int(task.Status),
+	}, nil
 }
 func (s *RecognitionService) handleFormulaRecognition(ctx context.Context, taskID int64) error {
@@ -200,13 +219,230 @@ func (s *RecognitionService) processVLFormula(ctx context.Context, taskID int64)
 	log.Info(ctx, "func", "processVLFormulaQueue", "msg", "获取任务成功", "task_id", taskID)
 	// handle the task itself
-	if err := s.processVLFormulaTask(ctx, taskID, task.FileURL); err != nil {
+	if err := s.processVLFormulaTask(ctx, taskID, task.FileURL, utils.ModelVLQwen3VL32BInstruct); err != nil {
 		log.Error(ctx, "func", "processVLFormulaQueue", "msg", "处理任务失败", "error", err)
 		return
 	}
 	log.Info(ctx, "func", "processVLFormulaQueue", "msg", "处理任务成功", "task_id", taskID)
 }
// MathpixRequest full request structure for the Mathpix /v3/text API
type MathpixRequest struct {
	// image source: URL or base64 encoding
	Src string `json:"src"`
	// metadata key/value pairs
	Metadata map[string]interface{} `json:"metadata"`
	// list of tags identifying the result
	Tags []string `json:"tags"`
	// async request flag
	Async bool `json:"async"`
	// callback configuration
	Callback *MathpixCallback `json:"callback"`
	// list of output formats: text, data, html, latex_styled
	Formats []string `json:"formats"`
	// data options
	DataOptions *MathpixDataOptions `json:"data_options,omitempty"`
	// return the detected alphabets
	IncludeDetectedAlphabets *bool `json:"include_detected_alphabets,omitempty"`
	// allowed alphabets
	AlphabetsAllowed *MathpixAlphabetsAllowed `json:"alphabets_allowed,omitempty"`
	// restrict recognition to an image region
	Region *MathpixRegion `json:"region,omitempty"`
	// blue HSV filter mode
	EnableBlueHsvFilter bool `json:"enable_blue_hsv_filter"`
	// confidence threshold
	ConfidenceThreshold float64 `json:"confidence_threshold"`
	// symbol-level confidence threshold, default 0.75
	ConfidenceRateThreshold float64 `json:"confidence_rate_threshold"`
	// include equation tags
	IncludeEquationTags bool `json:"include_equation_tags"`
	// return per-line information
	IncludeLineData bool `json:"include_line_data"`
	// return per-word information
	IncludeWordData bool `json:"include_word_data"`
	// chemical-structure OCR
	IncludeSmiles bool `json:"include_smiles"`
	// InChI data
	IncludeInchi bool `json:"include_inchi"`
	// geometry data
	IncludeGeometryData bool `json:"include_geometry_data"`
	// diagram text extraction
	IncludeDiagramText bool `json:"include_diagram_text"`
	// page info, default true
	IncludePageInfo *bool `json:"include_page_info,omitempty"`
	// auto-rotate confidence threshold, default 0.99
	AutoRotateConfidenceThreshold float64 `json:"auto_rotate_confidence_threshold"`
	// remove extra spaces, default true
	RmSpaces *bool `json:"rm_spaces,omitempty"`
	// remove font commands, default false
	RmFonts bool `json:"rm_fonts"`
	// use aligned/gathered/cases instead of array, default false
	IdiomaticEqnArrays bool `json:"idiomatic_eqn_arrays"`
	// remove unnecessary braces, default false
	IdiomaticBraces bool `json:"idiomatic_braces"`
	// digits always in math mode, default false
	NumbersDefaultToMath bool `json:"numbers_default_to_math"`
	// math fonts always in math mode, default false
	MathFontsDefaultToMath bool `json:"math_fonts_default_to_math"`
	// inline math delimiters, default ["\\(", "\\)"]
	MathInlineDelimiters []string `json:"math_inline_delimiters"`
	// display math delimiters, default ["\\[", "\\]"]
	MathDisplayDelimiters []string `json:"math_display_delimiters"`
	// advanced table handling, default false
	EnableTablesFallback bool `json:"enable_tables_fallback"`
	// full-width punctuation; null means auto-detect
	FullwidthPunctuation *bool `json:"fullwidth_punctuation,omitempty"`
}
// MathpixCallback callback configuration
type MathpixCallback struct {
	URL     string            `json:"url"`
	Headers map[string]string `json:"headers"`
}

// MathpixDataOptions data options
type MathpixDataOptions struct {
	IncludeAsciimath bool `json:"include_asciimath"`
	IncludeMathml    bool `json:"include_mathml"`
	IncludeLatex     bool `json:"include_latex"`
	IncludeTsv       bool `json:"include_tsv"`
}

// MathpixAlphabetsAllowed allowed alphabets
type MathpixAlphabetsAllowed struct {
	En bool `json:"en"`
	Hi bool `json:"hi"`
	Zh bool `json:"zh"`
	Ja bool `json:"ja"`
	Ko bool `json:"ko"`
	Ru bool `json:"ru"`
	Th bool `json:"th"`
	Vi bool `json:"vi"`
}

// MathpixRegion image region
type MathpixRegion struct {
	TopLeftX int `json:"top_left_x"`
	TopLeftY int `json:"top_left_y"`
	Width    int `json:"width"`
	Height   int `json:"height"`
}
// MathpixResponse full response structure of the Mathpix /v3/text API
type MathpixResponse struct {
	// request ID, for debugging
	RequestID string `json:"request_id"`
	// Mathpix Markdown text
	Text string `json:"text"`
	// styled LaTeX (returned only for single-formula images)
	LatexStyled string `json:"latex_styled"`
	// confidence in [0,1]
	Confidence float64 `json:"confidence"`
	// confidence rate in [0,1]
	ConfidenceRate float64 `json:"confidence_rate"`
	// line data
	LineData []map[string]interface{} `json:"line_data"`
	// word data
	WordData []map[string]interface{} `json:"word_data"`
	// list of data objects
	Data []MathpixDataItem `json:"data"`
	// HTML output
	HTML string `json:"html"`
	// detected alphabets
	DetectedAlphabets []map[string]interface{} `json:"detected_alphabets"`
	// whether the content is printed
	IsPrinted bool `json:"is_printed"`
	// whether the content is handwritten
	IsHandwritten bool `json:"is_handwritten"`
	// auto-rotate confidence
	AutoRotateConfidence float64 `json:"auto_rotate_confidence"`
	// geometry data
	GeometryData []map[string]interface{} `json:"geometry_data"`
	// auto-rotate angle, one of {0, 90, -90, 180}
	AutoRotateDegrees int `json:"auto_rotate_degrees"`
	// image width
	ImageWidth int `json:"image_width"`
	// image height
	ImageHeight int `json:"image_height"`
	// error message
	Error string `json:"error"`
	// error details
	ErrorInfo *MathpixErrorInfo `json:"error_info"`
	// API version
	Version string `json:"version"`
}

// MathpixDataItem data item
type MathpixDataItem struct {
	Type  string `json:"type"`
	Value string `json:"value"`
}

// MathpixErrorInfo error details
type MathpixErrorInfo struct {
	ID      string `json:"id"`
	Message string `json:"message"`
}
// BaiduOCRRequest Baidu OCR layout-analysis request structure
type BaiduOCRRequest struct {
	// file content, base64 encoded
	File string `json:"file"`
	// file type: 0=PDF, 1=image
	FileType int `json:"fileType"`
	// enable document orientation classification
	UseDocOrientationClassify bool `json:"useDocOrientationClassify"`
	// enable document unwarping
	UseDocUnwarping bool `json:"useDocUnwarping"`
	// enable chart recognition
	UseChartRecognition bool `json:"useChartRecognition"`
}

// BaiduOCRResponse Baidu OCR layout-analysis response structure
type BaiduOCRResponse struct {
	ErrorCode int             `json:"errorCode"`
	ErrorMsg  string          `json:"errorMsg"`
	Result    *BaiduOCRResult `json:"result"`
}

// BaiduOCRResult Baidu OCR response result
type BaiduOCRResult struct {
	LayoutParsingResults []BaiduLayoutParsingResult `json:"layoutParsingResults"`
}

// BaiduLayoutParsingResult layout-parsing result for a single page
type BaiduLayoutParsingResult struct {
	Markdown     BaiduMarkdownResult `json:"markdown"`
	OutputImages map[string]string   `json:"outputImages"`
}

// BaiduMarkdownResult markdown result
type BaiduMarkdownResult struct {
	Text   string            `json:"text"`
	Images map[string]string `json:"images"`
}
// GetMathML extracts the MathML value from the response
func (r *MathpixResponse) GetMathML() string {
	for _, item := range r.Data {
		if item.Type == "mathml" {
			return item.Value
		}
	}
	return ""
}

// GetAsciiMath extracts the AsciiMath value from the response
func (r *MathpixResponse) GetAsciiMath() string {
	for _, item := range r.Data {
		if item.Type == "asciimath" {
			return item.Value
		}
	}
	return ""
}
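GetMathML and GetAsciiMath both scan the Data list for the first item of a given type. A self-contained sketch of that lookup, generalized over the type string (getByType and the sample values are illustrative; only MathpixDataItem comes from this file):

```go
package main

import "fmt"

// Minimal copy of the MathpixDataItem type above.
type MathpixDataItem struct {
	Type  string `json:"type"`
	Value string `json:"value"`
}

// getByType returns the value of the first item with the given type, or ""
// when none matches — the same scan GetMathML/GetAsciiMath perform.
func getByType(items []MathpixDataItem, t string) string {
	for _, item := range items {
		if item.Type == t {
			return item.Value
		}
	}
	return ""
}

func main() {
	items := []MathpixDataItem{
		{Type: "asciimath", Value: "x^2"},
		{Type: "mathml", Value: "<math>...</math>"},
	}
	fmt.Println(getByType(items, "mathml")) // <math>...</math>
}
```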
func (s *RecognitionService) processFormulaTask(ctx context.Context, taskID int64, fileURL string) (err error) {
	// add a timeout covering the whole task
	ctx, cancel := context.WithTimeout(ctx, 45*time.Second)
@@ -263,19 +499,12 @@ func (s *RecognitionService) processFormulaTask(ctx context.Context, taskID int6
 		return err
 	}
-	downloadURL, err := oss.GetDownloadURL(ctx, fileURL)
-	if err != nil {
-		log.Error(ctx, "func", "processFormulaTask", "msg", "获取下载URL失败", "error", err)
-		return err
-	}
 	// encode the image as base64
 	base64Image := base64.StdEncoding.EncodeToString(imageData)
 	// build the JSON request
 	requestData := map[string]string{
 		"image_base64": base64Image,
-		"img_url":      downloadURL,
 	}
 	jsonData, err := json.Marshal(requestData)
@@ -287,8 +516,8 @@ func (s *RecognitionService) processFormulaTask(ctx context.Context, taskID int6
 	// set the Content-Type header to application/json
 	headers := map[string]string{"Content-Type": "application/json", utils.RequestIDHeaderKey: utils.GetRequestIDFromContext(ctx)}
-	// the request uses the context with timeout
-	resp, err := s.httpClient.RequestWithRetry(ctx, http.MethodPost, s.getURL(ctx), bytes.NewReader(jsonData), headers)
+	// send the request to the new OCR endpoint
+	resp, err := s.httpClient.RequestWithRetry(ctx, http.MethodPost, "https://cloud.texpixel.com:10443/doc_process/v1/image/ocr", bytes.NewReader(jsonData), headers)
 	if err != nil {
 		if ctx.Err() == context.DeadlineExceeded {
 			log.Error(ctx, "func", "processFormulaTask", "msg", "请求超时")
@@ -299,30 +528,50 @@ func (s *RecognitionService) processFormulaTask(ctx context.Context, taskID int6
	}
	defer resp.Body.Close()
	log.Info(ctx, "func", "processFormulaTask", "msg", "请求成功")
	body := &bytes.Buffer{}
	if _, err = body.ReadFrom(resp.Body); err != nil {
		log.Error(ctx, "func", "processFormulaTask", "msg", "读取响应体失败", "error", err)
		return err
	}
	log.Info(ctx, "func", "processFormulaTask", "msg", "响应内容", "body", body.String())

	// Parse the JSON response
	var ocrResp formula.ImageOCRResponse
	if err := json.Unmarshal(body.Bytes(), &ocrResp); err != nil {
		log.Error(ctx, "func", "processFormulaTask", "msg", "解析响应JSON失败", "error", err)
		return err
	}
	contentJSON, err := dao.MarshalFormulaContent(dao.FormulaContent{
		Latex:    ocrResp.Latex,
		Markdown: ocrResp.Markdown,
		MathML:   ocrResp.MathML,
		MML:      ocrResp.MML,
	})
	if err != nil {
		log.Error(ctx, "func", "processFormulaTask", "msg", "序列化公式内容失败", "error", err)
		return err
	}
	result := dao.RecognitionResult{
		TaskID:   taskID,
		TaskType: dao.TaskTypeFormula,
		Content:  contentJSON,
	}
	if err = result.SetMetaData(dao.ResultMetaData{TotalNum: 1}); err != nil {
		log.Error(ctx, "func", "processFormulaTask", "msg", "序列化MetaData失败", "error", err)
		return err
	}
	err = resultDao.Create(tx, result)
	if err != nil {
		log.Error(ctx, "func", "processFormulaTask", "msg", "保存任务结果失败", "error", err)
		return err
	}
	isSuccess = true
	return nil
}
func (s *RecognitionService) processVLFormulaTask(ctx context.Context, taskID int64, fileURL string, model string) error {
	isSuccess := false
	defer func() {
		if !isSuccess {
@@ -349,28 +598,11 @@ func (s *RecognitionService) processVLFormulaTask(ctx context.Context, taskID in
		log.Error(ctx, "func", "processVLFormulaTask", "msg", "读取图片数据失败", "error", err)
		return err
	}
	prompt := `Please perform OCR on the image and output only LaTeX code.`
	base64Image := base64.StdEncoding.EncodeToString(imageData)
	requestBody := formula.VLFormulaRequest{
		Model:       model,
		Stream:      false,
		MaxTokens:   512,
		Temperature: 0.1,
@@ -439,39 +671,31 @@ Important instructions:
	}
	resultDao := dao.NewRecognitionResultDao()
	result, err := resultDao.GetByTaskID(dao.DB.WithContext(ctx), taskID)
	if err != nil {
		log.Error(ctx, "func", "processVLFormulaTask", "msg", "获取任务结果失败", "error", err)
		return err
	}
	if result == nil {
		contentJSON, err := dao.MarshalFormulaContent(dao.FormulaContent{Latex: latex})
		if err != nil {
			log.Error(ctx, "func", "processVLFormulaTask", "msg", "序列化公式内容失败", "error", err)
			return err
		}
		newResult := dao.RecognitionResult{TaskID: taskID, TaskType: dao.TaskTypeFormula, Content: contentJSON}
		_ = newResult.SetMetaData(dao.ResultMetaData{TotalNum: 1})
		err = resultDao.Create(dao.DB.WithContext(ctx), newResult)
		if err != nil {
			log.Error(ctx, "func", "processVLFormulaTask", "msg", "创建任务结果失败", "error", err)
			return err
		}
	} else {
		contentJSON, err := dao.MarshalFormulaContent(dao.FormulaContent{Latex: latex})
		if err != nil {
			log.Error(ctx, "func", "processVLFormulaTask", "msg", "序列化公式内容失败", "error", err)
			return err
		}
		err = resultDao.Update(dao.DB.WithContext(ctx), result.ID, map[string]interface{}{"content": contentJSON})
		if err != nil {
			log.Error(ctx, "func", "processVLFormulaTask", "msg", "更新任务结果失败", "error", err)
			return err
@@ -515,26 +739,432 @@ func (s *RecognitionService) processOneTask(ctx context.Context) {
	}
	ctx = context.WithValue(ctx, utils.RequestIDKey, task.TaskUUID)
	// Use gls to set the request_id so it stays available throughout task processing
	requestid.SetRequestID(task.TaskUUID, func() {
		log.Info(ctx, "func", "processFormulaQueue", "msg", "获取任务成功", "task_id", taskID)
		err = s.processFormulaTask(ctx, taskID, task.FileURL)
		if err != nil {
			log.Error(ctx, "func", "processFormulaQueue", "msg", "处理任务失败", "error", err)
			return
		}
		log.Info(ctx, "func", "processFormulaQueue", "msg", "处理任务成功", "task_id", taskID)
	})
}
// processMathpixTask handles formula recognition via the Mathpix API (used for enhanced recognition)
func (s *RecognitionService) processMathpixTask(ctx context.Context, taskID int64, fileURL string) error {
	isSuccess := false
	logDao := dao.NewRecognitionLogDao()
	defer func() {
		if !isSuccess {
			err := dao.NewRecognitionTaskDao().Update(dao.DB.WithContext(ctx), map[string]interface{}{"id": taskID}, map[string]interface{}{"status": dao.TaskStatusFailed})
			if err != nil {
				log.Error(ctx, "func", "processMathpixTask", "msg", "更新任务状态失败", "error", err)
			}
			return
		}
		err := dao.NewRecognitionTaskDao().Update(dao.DB.WithContext(ctx), map[string]interface{}{"id": taskID}, map[string]interface{}{"status": dao.TaskStatusCompleted})
		if err != nil {
			log.Error(ctx, "func", "processMathpixTask", "msg", "更新任务状态失败", "error", err)
		}
	}()
	// Download the image
imageUrl, err := oss.GetDownloadURL(ctx, fileURL)
if err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "获取图片URL失败", "error", err)
return err
}
	// Build the Mathpix API request
mathpixReq := MathpixRequest{
Src: imageUrl,
Formats: []string{
"text",
"latex_styled",
"data",
"html",
},
DataOptions: &MathpixDataOptions{
IncludeMathml: true,
IncludeAsciimath: true,
IncludeLatex: true,
IncludeTsv: true,
},
MathInlineDelimiters: []string{"$", "$"},
MathDisplayDelimiters: []string{"$$", "$$"},
RmSpaces: &[]bool{true}[0],
}
jsonData, err := json.Marshal(mathpixReq)
if err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "JSON编码失败", "error", err)
return err
}
headers := map[string]string{
"Content-Type": "application/json",
"app_id": config.GlobalConfig.Mathpix.AppID,
"app_key": config.GlobalConfig.Mathpix.AppKey,
}
endpoint := "https://api.mathpix.com/v3/text"
startTime := time.Now()
log.Info(ctx, "func", "processMathpixTask", "msg", "MathpixApi_Start", "start_time", startTime)
resp, err := s.httpClient.RequestWithRetry(ctx, http.MethodPost, endpoint, bytes.NewReader(jsonData), headers)
if err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "Mathpix API 请求失败", "error", err)
return err
}
defer resp.Body.Close()
log.Info(ctx, "func", "processMathpixTask", "msg", "MathpixApi_End", "end_time", time.Now(), "duration", time.Since(startTime))
body := &bytes.Buffer{}
if _, err = body.ReadFrom(resp.Body); err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "读取响应体失败", "error", err)
return err
}
	// Create a log record
recognitionLog := &dao.RecognitionLog{
TaskID: taskID,
Provider: dao.ProviderMathpix,
RequestBody: string(jsonData),
ResponseBody: body.String(),
}
	// Parse the response
var mathpixResp MathpixResponse
if err := json.Unmarshal(body.Bytes(), &mathpixResp); err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "解析响应失败", "error", err)
return err
}
	// Check for errors
if mathpixResp.Error != "" {
errMsg := mathpixResp.Error
if mathpixResp.ErrorInfo != nil {
errMsg = fmt.Sprintf("%s: %s", mathpixResp.ErrorInfo.ID, mathpixResp.ErrorInfo.Message)
}
log.Error(ctx, "func", "processMathpixTask", "msg", "Mathpix API 返回错误", "error", errMsg)
return fmt.Errorf("mathpix error: %s", errMsg)
}
	// Save the log
err = logDao.Create(dao.DB.WithContext(ctx), recognitionLog)
if err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "保存日志失败", "error", err)
}
	// Update or create the recognition result
resultDao := dao.NewRecognitionResultDao()
result, err := resultDao.GetByTaskID(dao.DB.WithContext(ctx), taskID)
if err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "获取任务结果失败", "error", err)
return err
}
log.Info(ctx, "func", "processMathpixTask", "msg", "saveLog", "end_time", time.Now(), "duration", time.Since(startTime))
if result == nil {
		// Create a new result
contentJSON, err := dao.MarshalFormulaContent(dao.FormulaContent{
Latex: mathpixResp.LatexStyled,
Markdown: mathpixResp.Text,
MathML: mathpixResp.GetMathML(),
})
if err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "序列化公式内容失败", "error", err)
return err
}
newResult := dao.RecognitionResult{TaskID: taskID, TaskType: dao.TaskTypeFormula, Content: contentJSON}
_ = newResult.SetMetaData(dao.ResultMetaData{TotalNum: 1})
err = resultDao.Create(dao.DB.WithContext(ctx), newResult)
if err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "创建任务结果失败", "error", err)
return err
}
} else {
		// Update the existing result
contentJSON, err := dao.MarshalFormulaContent(dao.FormulaContent{
Latex: mathpixResp.LatexStyled,
Markdown: mathpixResp.Text,
MathML: mathpixResp.GetMathML(),
})
if err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "序列化公式内容失败", "error", err)
return err
}
err = resultDao.Update(dao.DB.WithContext(ctx), result.ID, map[string]interface{}{
"content": contentJSON,
})
if err != nil {
log.Error(ctx, "func", "processMathpixTask", "msg", "更新任务结果失败", "error", err)
return err
}
}
isSuccess = true
return nil
}
func (s *RecognitionService) processBaiduOCRTask(ctx context.Context, taskID int64, fileURL string) error {
isSuccess := false
logDao := dao.NewRecognitionLogDao()
defer func() {
if !isSuccess {
err := dao.NewRecognitionTaskDao().Update(dao.DB.WithContext(ctx), map[string]interface{}{"id": taskID}, map[string]interface{}{"status": dao.TaskStatusFailed})
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "更新任务状态失败", "error", err)
}
return
}
err := dao.NewRecognitionTaskDao().Update(dao.DB.WithContext(ctx), map[string]interface{}{"id": taskID}, map[string]interface{}{"status": dao.TaskStatusCompleted})
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "更新任务状态失败", "error", err)
}
}()
	// Download the file from OSS
reader, err := oss.DownloadFile(ctx, fileURL)
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "从OSS下载文件失败", "error", err)
return err
}
defer reader.Close()
	// Read the file content
fileBytes, err := io.ReadAll(reader)
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "读取文件内容失败", "error", err)
return err
}
	// Base64-encode the file
fileData := base64.StdEncoding.EncodeToString(fileBytes)
	// Determine fileType from the file extension: 0 = PDF, 1 = image
	fileType := 1 // default to image
lowerFileURL := strings.ToLower(fileURL)
if strings.HasSuffix(lowerFileURL, ".pdf") {
fileType = 0
}
	// Build the Baidu OCR API request
baiduReq := BaiduOCRRequest{
File: fileData,
FileType: fileType,
UseDocOrientationClassify: false,
UseDocUnwarping: false,
UseChartRecognition: false,
}
jsonData, err := json.Marshal(baiduReq)
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "JSON编码失败", "error", err)
return err
}
headers := map[string]string{
"Content-Type": "application/json",
"Authorization": fmt.Sprintf("token %s", config.GlobalConfig.BaiduOCR.Token),
}
endpoint := "https://j5veh2l2r6ubk6cb.aistudio-app.com/layout-parsing"
startTime := time.Now()
log.Info(ctx, "func", "processBaiduOCRTask", "msg", "BaiduOCRApi_Start", "start_time", startTime)
resp, err := s.httpClient.RequestWithRetry(ctx, http.MethodPost, endpoint, bytes.NewReader(jsonData), headers)
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "百度 OCR API 请求失败", "error", err)
return err
}
defer resp.Body.Close()
log.Info(ctx, "func", "processBaiduOCRTask", "msg", "BaiduOCRApi_End", "end_time", time.Now(), "duration", time.Since(startTime))
body := &bytes.Buffer{}
if _, err = body.ReadFrom(resp.Body); err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "读取响应体失败", "error", err)
return err
}
	// Create a log record (omit the base64 payload from the request body to save storage)
requestLogData := map[string]interface{}{
"fileType": fileType,
"useDocOrientationClassify": false,
"useDocUnwarping": false,
"useChartRecognition": false,
"fileSize": len(fileBytes),
}
requestLogBytes, _ := json.Marshal(requestLogData)
recognitionLog := &dao.RecognitionLog{
TaskID: taskID,
Provider: dao.ProviderBaiduOCR,
RequestBody: string(requestLogBytes),
ResponseBody: body.String(),
}
	// Parse the response
var baiduResp BaiduOCRResponse
if err := json.Unmarshal(body.Bytes(), &baiduResp); err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "解析响应失败", "error", err)
return err
}
	// Check for errors
if baiduResp.ErrorCode != 0 {
errMsg := fmt.Sprintf("errorCode: %d, errorMsg: %s", baiduResp.ErrorCode, baiduResp.ErrorMsg)
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "百度 OCR API 返回错误", "error", errMsg)
return fmt.Errorf("baidu ocr error: %s", errMsg)
}
	// Save the log
err = logDao.Create(dao.DB.WithContext(ctx), recognitionLog)
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "保存日志失败", "error", err)
}
	// Merge the markdown results from all pages
var markdownTexts []string
if baiduResp.Result != nil && len(baiduResp.Result.LayoutParsingResults) > 0 {
for _, res := range baiduResp.Result.LayoutParsingResults {
if res.Markdown.Text != "" {
markdownTexts = append(markdownTexts, res.Markdown.Text)
}
}
}
markdownResult := strings.Join(markdownTexts, "\n\n---\n\n")
	latex, mml, e := s.HandleConvert(ctx, markdownResult)
	if e != nil {
		log.Error(ctx, "func", "processBaiduOCRTask", "msg", "转换失败", "error", e)
	}
	// Update or create the recognition result
resultDao := dao.NewRecognitionResultDao()
result, err := resultDao.GetByTaskID(dao.DB.WithContext(ctx), taskID)
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "获取任务结果失败", "error", err)
return err
}
log.Info(ctx, "func", "processBaiduOCRTask", "msg", "saveLog", "end_time", time.Now(), "duration", time.Since(startTime))
if result == nil {
// 创建新结果
contentJSON, err := dao.MarshalFormulaContent(dao.FormulaContent{
Markdown: markdownResult,
Latex: latex,
MathML: mml,
})
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "序列化公式内容失败", "error", err)
return err
}
newResult := dao.RecognitionResult{TaskID: taskID, TaskType: dao.TaskTypeFormula, Content: contentJSON}
_ = newResult.SetMetaData(dao.ResultMetaData{TotalNum: 1})
err = resultDao.Create(dao.DB.WithContext(ctx), newResult)
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "创建任务结果失败", "error", err)
return err
}
} else {
		// Update the existing result
contentJSON, err := dao.MarshalFormulaContent(dao.FormulaContent{
Markdown: markdownResult,
Latex: latex,
MathML: mml,
})
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "序列化公式内容失败", "error", err)
return err
}
err = resultDao.Update(dao.DB.WithContext(ctx), result.ID, map[string]interface{}{
"content": contentJSON,
})
if err != nil {
log.Error(ctx, "func", "processBaiduOCRTask", "msg", "更新任务结果失败", "error", err)
return err
}
}
isSuccess = true
return nil
}
func (s *RecognitionService) TestProcessMathpixTask(ctx context.Context, taskID int64) error {
task, err := dao.NewRecognitionTaskDao().GetTaskByID(dao.DB.WithContext(ctx), taskID)
if err != nil {
log.Error(ctx, "func", "TestProcessMathpixTask", "msg", "获取任务失败", "error", err)
return err
}
	if task == nil {
		log.Error(ctx, "func", "TestProcessMathpixTask", "msg", "任务不存在", "task_id", taskID)
		return errors.New("task not found")
	}
return s.processMathpixTask(ctx, taskID, task.FileURL)
}
// ConvertResponse is the response structure of the Python conversion endpoint
type ConvertResponse struct {
Latex string `json:"latex"`
MathML string `json:"mathml"`
Error string `json:"error,omitempty"`
}
func (s *RecognitionService) HandleConvert(ctx context.Context, markdown string) (latex string, mml string, err error) {
url := "https://cloud.texpixel.com:10443/doc_converter/v1/convert"
	// Build the multipart form
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
_ = writer.WriteField("markdown_input", markdown)
writer.Close()
	// Use the correct Content-Type, which includes the boundary
headers := map[string]string{
"Content-Type": writer.FormDataContentType(),
}
resp, err := s.httpClient.RequestWithRetry(ctx, http.MethodPost, url, body, headers)
if err != nil {
return "", "", err
}
defer resp.Body.Close()
	// Read the response body
respBody, err := io.ReadAll(resp.Body)
if err != nil {
return "", "", err
}
	// Check the HTTP status code
if resp.StatusCode != http.StatusOK {
return "", "", fmt.Errorf("convert failed: status %d, body: %s", resp.StatusCode, string(respBody))
}
	// Parse the JSON response
var convertResp ConvertResponse
if err := json.Unmarshal(respBody, &convertResp); err != nil {
return "", "", fmt.Errorf("unmarshal response failed: %v, body: %s", err, string(respBody))
}
	// Check for a business-level error
if convertResp.Error != "" {
return "", "", fmt.Errorf("convert error: %s", convertResp.Error)
}
return convertResp.Latex, convertResp.MathML, nil
}
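Every `process*Task` handler in this file shares the same lifecycle idiom: set `isSuccess := false`, defer a closure that marks the task failed unless the body flipped the flag, and only set `isSuccess = true` on the last line. A minimal sketch of that pattern, with `processTask` and its status strings as illustrative stand-ins for the real DAO status update:

```go
package main

import "fmt"

// processTask demonstrates the isSuccess/defer pattern: the deferred
// closure runs on every exit path, so any early return (error or not)
// is recorded as "failed" unless isSuccess was explicitly set.
func processTask(fail bool) (status string) {
	isSuccess := false
	defer func() {
		if !isSuccess {
			status = "failed" // every early return lands here
			return
		}
		status = "completed"
	}()
	if fail {
		return // isSuccess still false -> marked failed
	}
	isSuccess = true
	return
}

func main() {
	fmt.Println(processTask(false)) // completed
	fmt.Println(processTask(true))  // failed
}
```

The appeal of the pattern is that new error paths added later are failure-safe by default: forgetting to update the status can only leave a task marked failed, never spuriously completed.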

View File

@@ -1,27 +1,37 @@
package service

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"strings"

	"gitea.com/texpixel/document_ai/internal/model/task"
	"gitea.com/texpixel/document_ai/internal/storage/dao"
	"gitea.com/texpixel/document_ai/pkg/log"
	"gitea.com/texpixel/document_ai/pkg/oss"
)

type TaskService struct {
	recognitionTaskDao   *dao.RecognitionTaskDao
	evaluateTaskDao      *dao.EvaluateTaskDao
	recognitionResultDao *dao.RecognitionResultDao
}

func NewTaskService() *TaskService {
	return &TaskService{
		recognitionTaskDao:   dao.NewRecognitionTaskDao(),
		evaluateTaskDao:      dao.NewEvaluateTaskDao(),
		recognitionResultDao: dao.NewRecognitionResultDao(),
	}
}
func (svc *TaskService) EvaluateTask(ctx context.Context, req *task.EvaluateTaskRequest) error {
	task, err := svc.recognitionTaskDao.GetByTaskNo(dao.DB.WithContext(ctx), req.TaskNo)
	if err != nil {
		log.Error(ctx, "func", "EvaluateTask", "msg", "get task by task no failed", "error", err)
		return err
	}
@@ -36,14 +46,13 @@ func (svc *TaskService) EvaluateTask(ctx context.Context, req *task.EvaluateTask
		return errors.New("task not finished")
	}

	evaluateTask := &dao.EvaluateTask{
		TaskID:    task.ID,
		Satisfied: req.Satisfied,
		Feedback:  req.Feedback,
		Comment:   strings.Join(req.Suggestion, ","),
	}
	err = svc.evaluateTaskDao.Create(dao.DB.WithContext(ctx), evaluateTask)
	if err != nil {
		log.Error(ctx, "func", "EvaluateTask", "msg", "create evaluate task failed", "error", err)
		return err
@@ -53,26 +62,126 @@ func (svc *TaskService) EvaluateTask(ctx context.Context, req *task.EvaluateTask
}

func (svc *TaskService) GetTaskList(ctx context.Context, req *task.TaskListRequest) (*task.TaskListResponse, error) {
	tasks, total, err := svc.recognitionTaskDao.GetTaskList(dao.DB.WithContext(ctx), req.UserID, dao.TaskType(req.TaskType), req.Page, req.PageSize)
	if err != nil {
		log.Error(ctx, "func", "GetTaskList", "msg", "get task list failed", "error", err)
		return nil, err
	}
	resp := &task.TaskListResponse{
		TaskList: make([]*task.TaskListDTO, 0, len(tasks)),
		Total:    total,
	}
	for _, item := range tasks {
		originURL, err := oss.GetDownloadURL(ctx, item.FileURL)
		if err != nil {
			log.Error(ctx, "func", "GetTaskList", "msg", "get origin url failed", "error", err)
		}
		resp.TaskList = append(resp.TaskList, &task.TaskListDTO{
			TaskID:    item.TaskUUID,
			FileName:  item.FileName,
			Status:    int(item.Status),
			OriginURL: originURL,
			TaskType:  item.TaskType.String(),
			CreatedAt: item.CreatedAt.Format("2006-01-02 15:04:05"),
		})
	}
	return resp, nil
}
func (svc *TaskService) ExportTask(ctx context.Context, req *task.ExportTaskRequest) ([]byte, string, error) {
recognitionTask, err := svc.recognitionTaskDao.GetByTaskNo(dao.DB.WithContext(ctx), req.TaskNo)
if err != nil {
log.Error(ctx, "func", "ExportTask", "msg", "get task by task id failed", "error", err)
return nil, "", err
}
if recognitionTask == nil {
log.Error(ctx, "func", "ExportTask", "msg", "task not found")
return nil, "", errors.New("task not found")
}
if recognitionTask.Status != dao.TaskStatusCompleted {
log.Error(ctx, "func", "ExportTask", "msg", "task not finished")
return nil, "", errors.New("task not finished")
}
recognitionResult, err := svc.recognitionResultDao.GetByTaskID(dao.DB.WithContext(ctx), recognitionTask.ID)
if err != nil {
log.Error(ctx, "func", "ExportTask", "msg", "get recognition result by task id failed", "error", err)
return nil, "", err
}
if recognitionResult == nil {
log.Error(ctx, "func", "ExportTask", "msg", "recognition result not found")
return nil, "", errors.New("recognition result not found")
}
var markdown string
switch recognitionResult.TaskType {
case dao.TaskTypeFormula:
fc, err := recognitionResult.GetFormulaContent()
if err != nil || fc.Markdown == "" {
log.Error(ctx, "func", "ExportTask", "msg", "公式结果解析失败或markdown为空", "error", err)
return nil, "", errors.New("markdown not found")
}
markdown = fc.Markdown
default:
log.Error(ctx, "func", "ExportTask", "msg", "不支持的导出任务类型", "task_type", recognitionResult.TaskType)
return nil, "", errors.New("unsupported task type for export")
}
	// Get the filename (strip the extension; match it case-insensitively)
	filename := recognitionTask.FileName
	if idx := strings.LastIndex(filename, "."); idx > 0 {
		filename = filename[:idx]
	}
	if filename == "" {
		filename = "texpixel"
	}
	// Build the JSON request body
requestBody := map[string]string{
"markdown": markdown,
"filename": filename,
}
jsonData, err := json.Marshal(requestBody)
if err != nil {
log.Error(ctx, "func", "ExportTask", "msg", "json marshal failed", "error", err)
return nil, "", err
}
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, "https://cloud.texpixel.com:10443/doc_process/v1/convert/file", bytes.NewReader(jsonData))
if err != nil {
log.Error(ctx, "func", "ExportTask", "msg", "create http request failed", "error", err)
return nil, "", err
}
httpReq.Header.Set("Content-Type", "application/json")
client := &http.Client{}
resp, err := client.Do(httpReq)
if err != nil {
log.Error(ctx, "func", "ExportTask", "msg", "http request failed", "error", err)
return nil, "", err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
respBody, _ := io.ReadAll(resp.Body)
log.Error(ctx, "func", "ExportTask", "msg", "export service returned non-200",
"status", resp.StatusCode,
"body", string(respBody),
"markdown_len", len(markdown),
"filename", filename,
)
return nil, "", fmt.Errorf("export service returned status: %d", resp.StatusCode)
}
fileData, err := io.ReadAll(resp.Body)
if err != nil {
log.Error(ctx, "func", "ExportTask", "msg", "read response body failed", "error", err)
return nil, "", err
}
	// The new endpoint only returns DOCX
contentType := "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
return fileData, contentType, nil
}
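The extension stripping ExportTask performs can also be done with the standard library's `filepath.Ext`, which returns everything from the final dot with its original case, so a plain `TrimSuffix` handles `report.PDF` as well as `report.pdf`. A minimal sketch (the `exportBaseName` helper name is illustrative):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// exportBaseName strips the extension from a filename regardless of its
// case, falling back to "texpixel" when nothing usable remains — the
// same behavior ExportTask wants before building the export request.
func exportBaseName(name string) string {
	base := strings.TrimSuffix(name, filepath.Ext(name))
	if base == "" {
		base = "texpixel"
	}
	return base
}

func main() {
	fmt.Println(exportBaseName("report.PDF")) // report
	fmt.Println(exportBaseName("notes"))      // notes
	fmt.Println(exportBaseName(""))           // texpixel
}
```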

View File

@@ -2,23 +2,33 @@ package service
import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"math/rand"
	"net/http"
	"net/url"

	"gitea.com/texpixel/document_ai/config"
	model "gitea.com/texpixel/document_ai/internal/model/user"
	"gitea.com/texpixel/document_ai/internal/storage/cache"
	"gitea.com/texpixel/document_ai/internal/storage/dao"
	"gitea.com/texpixel/document_ai/pkg/common"
	"gitea.com/texpixel/document_ai/pkg/email"
	"gitea.com/texpixel/document_ai/pkg/log"
	"gitea.com/texpixel/document_ai/pkg/sms"
	"golang.org/x/crypto/bcrypt"
)

type UserService struct {
	userDao         *dao.UserDao
	emailSendLogDao *dao.EmailSendLogDao
}

func NewUserService() *UserService {
	return &UserService{
		userDao:         dao.NewUserDao(),
		emailSendLogDao: dao.NewEmailSendLogDao(),
	}
}
@@ -107,3 +117,237 @@ func (svc *UserService) GetUserInfo(ctx context.Context, uid int64) (*dao.User,
	return user, nil
}
func (svc *UserService) SendEmailVerifyCode(ctx context.Context, emailAddr string) error {
limit, err := cache.GetUserSendEmailLimit(ctx, emailAddr)
if err != nil {
log.Error(ctx, "func", "SendEmailVerifyCode", "msg", "get send email limit error", "error", err)
return err
}
if limit >= cache.UserSendEmailLimitCount {
return common.ErrEmailSendLimit
}
code := fmt.Sprintf("%06d", rand.Intn(1000000))
subject, body := email.BuildVerifyCodeEmail(emailAddr, code)
if err := email.Send(ctx, emailAddr, subject, body); err != nil {
log.Error(ctx, "func", "SendEmailVerifyCode", "msg", "send email error", "error", err)
return err
}
if cacheErr := cache.SetUserEmailCode(ctx, emailAddr, code); cacheErr != nil {
log.Error(ctx, "func", "SendEmailVerifyCode", "msg", "set email code error", "error", cacheErr)
}
if cacheErr := cache.SetUserSendEmailLimit(ctx, emailAddr); cacheErr != nil {
log.Error(ctx, "func", "SendEmailVerifyCode", "msg", "set send email limit error", "error", cacheErr)
}
record := &dao.EmailSendLog{Email: emailAddr, Status: dao.EmailSendStatusSent}
if logErr := svc.emailSendLogDao.Create(dao.DB.WithContext(ctx), record); logErr != nil {
log.Error(ctx, "func", "SendEmailVerifyCode", "msg", "create email send log error", "error", logErr)
}
return nil
}
func (svc *UserService) RegisterByEmail(ctx context.Context, emailAddr, password, verifyCode string) (uid int64, err error) {
storedCode, err := cache.GetUserEmailCode(ctx, emailAddr)
if err != nil {
log.Error(ctx, "func", "RegisterByEmail", "msg", "get email code error", "error", err)
return 0, err
}
if storedCode == "" || storedCode != verifyCode {
return 0, common.ErrEmailCodeError
}
_ = cache.DeleteUserEmailCode(ctx, emailAddr)
uid, err = svc.registerByEmailInternal(ctx, emailAddr, password)
if err != nil {
return 0, err
}
if logErr := svc.emailSendLogDao.MarkRegistered(dao.DB.WithContext(ctx), emailAddr); logErr != nil {
log.Error(ctx, "func", "RegisterByEmail", "msg", "mark email send log registered error", "error", logErr)
}
return uid, nil
}
func (svc *UserService) registerByEmailInternal(ctx context.Context, emailAddr, password string) (uid int64, err error) {
existingUser, err := svc.userDao.GetByEmail(dao.DB.WithContext(ctx), emailAddr)
if err != nil {
log.Error(ctx, "func", "RegisterByEmail", "msg", "get user by email error", "error", err)
return 0, err
}
if existingUser != nil {
log.Warn(ctx, "func", "RegisterByEmail", "msg", "email already registered", "email", emailAddr)
return 0, common.ErrEmailExists
}
hashedPassword, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
if err != nil {
log.Error(ctx, "func", "RegisterByEmail", "msg", "hash password error", "error", err)
return 0, err
}
user := &dao.User{
Email: emailAddr,
Password: string(hashedPassword),
}
if err = svc.userDao.Create(dao.DB.WithContext(ctx), user); err != nil {
log.Error(ctx, "func", "RegisterByEmail", "msg", "create user error", "error", err)
return 0, err
}
if cacheErr := cache.DeleteUserEmailCode(ctx, emailAddr); cacheErr != nil {
log.Error(ctx, "func", "RegisterByEmail", "msg", "delete email code error", "error", cacheErr)
}
return user.ID, nil
}
func (svc *UserService) LoginByEmail(ctx context.Context, email, password string) (uid int64, err error) {
user, err := svc.userDao.GetByEmail(dao.DB.WithContext(ctx), email)
if err != nil {
log.Error(ctx, "func", "LoginByEmail", "msg", "get user by email error", "error", err)
return 0, err
}
if user == nil {
log.Warn(ctx, "func", "LoginByEmail", "msg", "user not found", "email", email)
return 0, common.ErrEmailNotFound
}
err = bcrypt.CompareHashAndPassword([]byte(user.Password), []byte(password))
if err != nil {
log.Warn(ctx, "func", "LoginByEmail", "msg", "password mismatch", "email", email)
return 0, common.ErrPasswordMismatch
}
return user.ID, nil
}
type googleTokenResponse struct {
AccessToken string `json:"access_token"`
IDToken string `json:"id_token"`
ExpiresIn int `json:"expires_in"`
TokenType string `json:"token_type"`
}
func (svc *UserService) googleHTTPClient() *http.Client {
if config.GlobalConfig.Google.Proxy == "" {
return &http.Client{}
}
proxyURL, err := url.Parse(config.GlobalConfig.Google.Proxy)
if err != nil {
// fall back to a direct client when the configured proxy URL is invalid
return &http.Client{}
}
return &http.Client{Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)}}
}
func (svc *UserService) ExchangeGoogleCodeAndGetUserInfo(ctx context.Context, clientID, clientSecret, code, redirectURI string) (*model.GoogleUserInfo, error) {
tokenURL := "https://oauth2.googleapis.com/token"
formData := url.Values{
"client_id": {clientID},
"client_secret": {clientSecret},
"code": {code},
"grant_type": {"authorization_code"},
"redirect_uri": {redirectURI},
}
client := svc.googleHTTPClient()
resp, err := client.PostForm(tokenURL, formData)
if err != nil {
log.Error(ctx, "func", "ExchangeGoogleCodeAndGetUserInfo", "msg", "exchange code failed", "error", err)
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
log.Error(ctx, "func", "ExchangeGoogleCodeAndGetUserInfo", "msg", "token endpoint returned non-200 status", "status", resp.StatusCode)
return nil, errors.New("google token endpoint returned non-200 status")
}
var tokenResp googleTokenResponse
if err := json.NewDecoder(resp.Body).Decode(&tokenResp); err != nil {
log.Error(ctx, "func", "ExchangeGoogleCodeAndGetUserInfo", "msg", "decode token response failed", "error", err)
return nil, err
}
if tokenResp.AccessToken == "" {
log.Error(ctx, "func", "ExchangeGoogleCodeAndGetUserInfo", "msg", "no access token in response")
return nil, errors.New("no access token in response")
}
userInfo, err := svc.getGoogleUserInfo(ctx, tokenResp.AccessToken)
if err != nil {
log.Error(ctx, "func", "ExchangeGoogleCodeAndGetUserInfo", "msg", "get user info failed", "error", err)
return nil, err
}
return &model.GoogleUserInfo{
ID: userInfo.ID,
Email: userInfo.Email,
Name: userInfo.Name,
}, nil
}
func (svc *UserService) getGoogleUserInfo(ctx context.Context, accessToken string) (*model.GoogleUserInfo, error) {
req, err := http.NewRequestWithContext(ctx, "GET", "https://www.googleapis.com/oauth2/v2/userinfo", nil)
if err != nil {
return nil, err
}
req.Header.Set("Authorization", "Bearer "+accessToken)
client := svc.googleHTTPClient()
resp, err := client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, errors.New("google userinfo endpoint returned non-200 status")
}
var userInfo model.GoogleUserInfo
if err := json.NewDecoder(resp.Body).Decode(&userInfo); err != nil {
return nil, err
}
return &userInfo, nil
}
func (svc *UserService) FindOrCreateGoogleUser(ctx context.Context, userInfo *model.GoogleUserInfo) (uid int64, err error) {
existingUser, err := svc.userDao.GetByGoogleID(dao.DB.WithContext(ctx), userInfo.ID)
if err != nil {
log.Error(ctx, "func", "FindOrCreateGoogleUser", "msg", "get user by google id error", "error", err)
return 0, err
}
if existingUser != nil {
return existingUser.ID, nil
}
existingUser, err = svc.userDao.GetByEmail(dao.DB.WithContext(ctx), userInfo.Email)
if err != nil {
log.Error(ctx, "func", "FindOrCreateGoogleUser", "msg", "get user by email error", "error", err)
return 0, err
}
if existingUser != nil {
existingUser.GoogleID = userInfo.ID
err = svc.userDao.Update(dao.DB.WithContext(ctx), existingUser)
if err != nil {
log.Error(ctx, "func", "FindOrCreateGoogleUser", "msg", "update user google id error", "error", err)
return 0, err
}
return existingUser.ID, nil
}
user := &dao.User{
Email: userInfo.Email,
GoogleID: userInfo.ID,
Username: userInfo.Name,
}
err = svc.userDao.Create(dao.DB.WithContext(ctx), user)
if err != nil {
log.Error(ctx, "func", "FindOrCreateGoogleUser", "msg", "create user error", "error", err)
return 0, err
}
return user.ID, nil
}


@@ -5,7 +5,7 @@ import (
 	"fmt"
 	"time"
-	"gitea.com/bitwsd/document_ai/config"
+	"gitea.com/texpixel/document_ai/config"
 	"github.com/redis/go-redis/v9"
 )

internal/storage/cache/pdf.go

@@ -0,0 +1,27 @@
package cache
import (
"context"
"strconv"
)
const (
PDFRecognitionTaskQueue = "pdf_recognition_queue"
PDFRecognitionDistLock = "pdf_recognition_dist_lock"
)
func PushPDFTask(ctx context.Context, taskID int64) (int64, error) {
return RedisClient.LPush(ctx, PDFRecognitionTaskQueue, taskID).Result()
}
func PopPDFTask(ctx context.Context) (int64, error) {
result, err := RedisClient.BRPop(ctx, 0, PDFRecognitionTaskQueue).Result()
if err != nil {
return 0, err
}
return strconv.ParseInt(result[1], 10, 64)
}
func GetPDFDistributedLock(ctx context.Context) (bool, error) {
return RedisClient.SetNX(ctx, PDFRecognitionDistLock, "locked", DefaultLockTimeout).Result()
}


@@ -61,3 +61,55 @@ func SetUserSendSmsLimit(ctx context.Context, phone string) error {
 func DeleteUserSmsCode(ctx context.Context, phone string) error {
 	return RedisClient.Del(ctx, fmt.Sprintf(UserSmsCodePrefix, phone)).Err()
 }
const (
UserEmailCodeTTL = 10 * time.Minute
UserSendEmailLimitTTL = 24 * time.Hour
UserSendEmailLimitCount = 5
)
const (
UserEmailCodePrefix = "user:email_code:%s"
UserSendEmailLimit = "user:send_email_limit:%s"
)
func GetUserEmailCode(ctx context.Context, email string) (string, error) {
code, err := RedisClient.Get(ctx, fmt.Sprintf(UserEmailCodePrefix, email)).Result()
if err != nil {
if err == redis.Nil {
return "", nil
}
return "", err
}
return code, nil
}
func SetUserEmailCode(ctx context.Context, email, code string) error {
return RedisClient.Set(ctx, fmt.Sprintf(UserEmailCodePrefix, email), code, UserEmailCodeTTL).Err()
}
func GetUserSendEmailLimit(ctx context.Context, email string) (int, error) {
limit, err := RedisClient.Get(ctx, fmt.Sprintf(UserSendEmailLimit, email)).Result()
if err != nil {
if err == redis.Nil {
return 0, nil
}
return 0, err
}
return strconv.Atoi(limit)
}
func SetUserSendEmailLimit(ctx context.Context, email string) error {
count, err := RedisClient.Incr(ctx, fmt.Sprintf(UserSendEmailLimit, email)).Result()
if err != nil {
return err
}
if count == 1 {
// set the TTL only on the first increment, so repeated sends do not keep resetting the 24h window
if err := RedisClient.Expire(ctx, fmt.Sprintf(UserSendEmailLimit, email), UserSendEmailLimitTTL).Err(); err != nil {
return err
}
}
if count > UserSendEmailLimitCount {
return errors.New("send email limit")
}
return nil
}
func DeleteUserEmailCode(ctx context.Context, email string) error {
return RedisClient.Del(ctx, fmt.Sprintf(UserEmailCodePrefix, email)).Err()
}


@@ -0,0 +1,170 @@
package dao
import (
"time"
"gorm.io/datatypes"
"gorm.io/gorm"
"gorm.io/gorm/clause"
)
// AnalyticsEvent is the analytics tracking event table
type AnalyticsEvent struct {
ID int64 `gorm:"bigint;primaryKey;autoIncrement;column:id;comment:主键ID" json:"id"`
UserID int64 `gorm:"column:user_id;not null;index:idx_user_id;comment:用户ID" json:"user_id"`
EventName string `gorm:"column:event_name;varchar(128);not null;index:idx_event_name;comment:事件名称" json:"event_name"`
Properties datatypes.JSON `gorm:"column:properties;type:json;comment:事件属性(JSON)" json:"properties"`
DeviceInfo datatypes.JSON `gorm:"column:device_info;type:json;comment:设备信息(JSON)" json:"device_info"`
MetaData datatypes.JSON `gorm:"column:meta_data;type:json;comment:元数据(JSON包含task_id等)" json:"meta_data"`
CreatedAt time.Time `gorm:"column:created_at;comment:创建时间;not null;default:current_timestamp;index:idx_created_at" json:"created_at"`
}
func (e *AnalyticsEvent) TableName() string {
return "analytics_events"
}
// AnalyticsEventDao is the DAO for analytics events
type AnalyticsEventDao struct{}
func NewAnalyticsEventDao() *AnalyticsEventDao {
return &AnalyticsEventDao{}
}
// Create inserts an event record
func (dao *AnalyticsEventDao) Create(tx *gorm.DB, event *AnalyticsEvent) error {
return tx.Create(event).Error
}
// BatchCreate inserts event records in batches
func (dao *AnalyticsEventDao) BatchCreate(tx *gorm.DB, events []*AnalyticsEvent) error {
if len(events) == 0 {
return nil
}
return tx.CreateInBatches(events, 100).Error
}
// GetByID fetches an event by ID
func (dao *AnalyticsEventDao) GetByID(tx *gorm.DB, id int64) (*AnalyticsEvent, error) {
event := &AnalyticsEvent{}
err := tx.Where("id = ?", id).First(event).Error
if err != nil {
if err == gorm.ErrRecordNotFound {
return nil, nil
}
return nil, err
}
return event, nil
}
// GetUserEvents lists a user's events
func (dao *AnalyticsEventDao) GetUserEvents(tx *gorm.DB, userID int64, page, pageSize int) ([]*AnalyticsEvent, int64, error) {
var events []*AnalyticsEvent
var total int64
offset := (page - 1) * pageSize
query := tx.Model(&AnalyticsEvent{}).Where("user_id = ?", userID)
err := query.Count(&total).Error
if err != nil {
return nil, 0, err
}
err = query.Offset(offset).Limit(pageSize).
Order(clause.OrderByColumn{Column: clause.Column{Name: "created_at"}, Desc: true}).
Find(&events).Error
return events, total, err
}
// GetEventsByName lists events by event name
func (dao *AnalyticsEventDao) GetEventsByName(tx *gorm.DB, eventName string, page, pageSize int) ([]*AnalyticsEvent, int64, error) {
var events []*AnalyticsEvent
var total int64
offset := (page - 1) * pageSize
query := tx.Model(&AnalyticsEvent{}).Where("event_name = ?", eventName)
err := query.Count(&total).Error
if err != nil {
return nil, 0, err
}
err = query.Offset(offset).Limit(pageSize).
Order(clause.OrderByColumn{Column: clause.Column{Name: "created_at"}, Desc: true}).
Find(&events).Error
return events, total, err
}
// GetUserEventsByName lists a user's events filtered by event name
func (dao *AnalyticsEventDao) GetUserEventsByName(tx *gorm.DB, userID int64, eventName string, page, pageSize int) ([]*AnalyticsEvent, int64, error) {
var events []*AnalyticsEvent
var total int64
offset := (page - 1) * pageSize
query := tx.Model(&AnalyticsEvent{}).Where("user_id = ? AND event_name = ?", userID, eventName)
err := query.Count(&total).Error
if err != nil {
return nil, 0, err
}
err = query.Offset(offset).Limit(pageSize).
Order(clause.OrderByColumn{Column: clause.Column{Name: "created_at"}, Desc: true}).
Find(&events).Error
return events, total, err
}
// GetEventsByTimeRange lists events within a time range
func (dao *AnalyticsEventDao) GetEventsByTimeRange(tx *gorm.DB, startTime, endTime time.Time, page, pageSize int) ([]*AnalyticsEvent, int64, error) {
var events []*AnalyticsEvent
var total int64
offset := (page - 1) * pageSize
query := tx.Model(&AnalyticsEvent{}).Where("created_at BETWEEN ? AND ?", startTime, endTime)
err := query.Count(&total).Error
if err != nil {
return nil, 0, err
}
err = query.Offset(offset).Limit(pageSize).
Order(clause.OrderByColumn{Column: clause.Column{Name: "created_at"}, Desc: true}).
Find(&events).Error
return events, total, err
}
// CountEventsByName counts events with the given name
func (dao *AnalyticsEventDao) CountEventsByName(tx *gorm.DB, eventName string) (int64, error) {
var count int64
err := tx.Model(&AnalyticsEvent{}).Where("event_name = ?", eventName).Count(&count).Error
return count, err
}
// CountUserEvents counts a user's events
func (dao *AnalyticsEventDao) CountUserEvents(tx *gorm.DB, userID int64) (int64, error) {
var count int64
err := tx.Model(&AnalyticsEvent{}).Where("user_id = ?", userID).Count(&count).Error
return count, err
}
// GetEventStats returns event statistics grouped by event name
func (dao *AnalyticsEventDao) GetEventStats(tx *gorm.DB, startTime, endTime time.Time) ([]map[string]interface{}, error) {
var results []map[string]interface{}
err := tx.Model(&AnalyticsEvent{}).
Select("event_name, COUNT(*) as count, COUNT(DISTINCT user_id) as unique_users").
Where("created_at BETWEEN ? AND ?", startTime, endTime).
Group("event_name").
Order("count DESC").
Find(&results).Error
return results, err
}
// DeleteOldEvents deletes old events (data cleanup)
func (dao *AnalyticsEventDao) DeleteOldEvents(tx *gorm.DB, beforeTime time.Time) error {
return tx.Where("created_at < ?", beforeTime).Delete(&AnalyticsEvent{}).Error
}


@@ -0,0 +1,50 @@
package dao
import (
"gorm.io/gorm"
)
type EmailSendStatus int8
const (
EmailSendStatusSent EmailSendStatus = 0 // sent; user not yet registered
EmailSendStatusRegistered EmailSendStatus = 1 // user completed registration
)
type EmailSendLog struct {
BaseModel
Email string `gorm:"column:email;type:varchar(255);not null;comment:邮箱地址" json:"email"`
Status EmailSendStatus `gorm:"column:status;type:tinyint;not null;default:0;comment:状态: 0=已发送未注册 1=已注册" json:"status"`
}
func (e *EmailSendLog) TableName() string {
return "email_send_log"
}
type EmailSendLogDao struct{}
func NewEmailSendLogDao() *EmailSendLogDao {
return &EmailSendLogDao{}
}
func (d *EmailSendLogDao) Create(tx *gorm.DB, log *EmailSendLog) error {
return tx.Create(log).Error
}
func (d *EmailSendLogDao) GetLatestByEmail(tx *gorm.DB, email string) (*EmailSendLog, error) {
var record EmailSendLog
err := tx.Where("email = ?", email).Order("id DESC").First(&record).Error
if err != nil {
if err == gorm.ErrRecordNotFound {
return nil, nil
}
return nil, err
}
return &record, nil
}
func (d *EmailSendLogDao) MarkRegistered(tx *gorm.DB, email string) error {
return tx.Model(&EmailSendLog{}).
Where("email = ? AND status = ?", email, EmailSendStatusSent).
Update("status", EmailSendStatusRegistered).Error
}


@@ -3,16 +3,19 @@ package dao
 import (
 	"fmt"
-	"gitea.com/bitwsd/document_ai/config"
+	"gitea.com/texpixel/document_ai/config"
 	"gorm.io/driver/mysql"
 	"gorm.io/gorm"
+	"gorm.io/gorm/logger"
 )
 var DB *gorm.DB
 func InitDB(conf config.DatabaseConfig) {
 	dns := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8mb4&parseTime=True&loc=Asia%%2FShanghai", conf.Username, conf.Password, conf.Host, conf.Port, conf.DBName)
-	db, err := gorm.Open(mysql.Open(dns), &gorm.Config{})
+	db, err := gorm.Open(mysql.Open(dns), &gorm.Config{
+		Logger: logger.Default.LogMode(logger.Silent), // disable GORM log output
+	})
 	if err != nil {
 		panic(err)
 	}


@@ -0,0 +1,53 @@
package dao
import (
"gorm.io/gorm"
)
// RecognitionLogProvider identifies the third-party service provider
type RecognitionLogProvider string
const (
ProviderMathpix RecognitionLogProvider = "mathpix"
ProviderSiliconflow RecognitionLogProvider = "siliconflow"
ProviderTexpixel RecognitionLogProvider = "texpixel"
ProviderBaiduOCR RecognitionLogProvider = "baidu_ocr"
)
// RecognitionLog is the recognition call log table, recording third-party API requests and responses
type RecognitionLog struct {
BaseModel
TaskID int64 `gorm:"column:task_id;bigint;not null;default:0;index;comment:关联任务ID" json:"task_id"`
Provider RecognitionLogProvider `gorm:"column:provider;varchar(32);not null;comment:服务提供商" json:"provider"`
RequestBody string `gorm:"column:request_body;type:longtext;comment:请求体" json:"request_body"`
ResponseBody string `gorm:"column:response_body;type:longtext;comment:响应体" json:"response_body"`
}
func (RecognitionLog) TableName() string {
return "recognition_log"
}
type RecognitionLogDao struct{}
func NewRecognitionLogDao() *RecognitionLogDao {
return &RecognitionLogDao{}
}
// Create inserts a log record
func (d *RecognitionLogDao) Create(tx *gorm.DB, log *RecognitionLog) error {
return tx.Create(log).Error
}
// GetByTaskID fetches logs by task ID
func (d *RecognitionLogDao) GetByTaskID(tx *gorm.DB, taskID int64) ([]*RecognitionLog, error) {
var logs []*RecognitionLog
err := tx.Where("task_id = ?", taskID).Order("created_at DESC").Find(&logs).Error
return logs, err
}
// GetByProvider fetches logs by provider
func (d *RecognitionLogDao) GetByProvider(tx *gorm.DB, provider RecognitionLogProvider, limit int) ([]*RecognitionLog, error) {
var logs []*RecognitionLog
err := tx.Where("provider = ?", provider).Order("created_at DESC").Limit(limit).Find(&logs).Error
return logs, err
}


@@ -6,84 +6,99 @@ import (
 	"gorm.io/gorm"
 )
-type JSON []byte
-// ContentCodec defines the content encode/decode interface
-type ContentCodec interface {
-	Encode() (JSON, error)
-	Decode() error
-	GetContent() interface{} // more explicit method name
-}
-type FormulaRecognitionContent struct {
-	content      JSON
-	Latex        string `json:"latex"`
-	AdjustLatex  string `json:"adjust_latex"`
-	EnhanceLatex string `json:"enhance_latex"`
-	MML          string `json:"mml"`
-}
-func (c *FormulaRecognitionContent) Encode() (JSON, error) {
-	b, err := json.Marshal(c)
-	if err != nil {
-		return nil, err
-	}
-	return b, nil
-}
-func (c *FormulaRecognitionContent) Decode() error {
-	return json.Unmarshal(c.content, c)
-}
-// GetContent returns the formula content by priority
-func (c *FormulaRecognitionContent) GetContent() interface{} {
-	c.Decode()
-	if c.EnhanceLatex != "" {
-		return c.EnhanceLatex
-	} else if c.AdjustLatex != "" {
-		return c.AdjustLatex
-	} else {
-		return c.Latex
-	}
-}
+// FormulaContent is the content-field structure for formula recognition
+type FormulaContent struct {
+	Latex    string `json:"latex"`
+	Markdown string `json:"markdown"`
+	MathML   string `json:"mathml"`
+	MML      string `json:"mml"`
+}
+// PDFPageContent is the recognition result for a single PDF page
+type PDFPageContent struct {
+	PageNumber int    `json:"page_number"`
+	Markdown   string `json:"markdown"`
+}
+// ResultMetaData is the structure of the recognition_results.meta_data field
+type ResultMetaData struct {
+	TotalNum int `json:"total_num"`
+}
+// RecognitionResult models the recognition_results table
 type RecognitionResult struct {
 	BaseModel
-	TaskID   int64    `gorm:"column:task_id;bigint;not null;default:0;comment:任务ID" json:"task_id"`
+	TaskID   int64    `gorm:"column:task_id;bigint;not null;default:0;index;comment:任务ID" json:"task_id"`
 	TaskType TaskType `gorm:"column:task_type;varchar(16);not null;comment:任务类型;default:''" json:"task_type"`
-	Content  JSON     `gorm:"column:content;type:json;not null;comment:识别内容" json:"content"`
+	MetaData string   `gorm:"column:meta_data;type:json;comment:元数据" json:"meta_data"`
+	Content  string   `gorm:"column:content;type:json;comment:识别内容JSON" json:"content"`
 }
-// NewContentCodec creates the content codec for the task type
-func (r *RecognitionResult) NewContentCodec() ContentCodec {
-	switch r.TaskType {
-	case TaskTypeFormula:
-		return &FormulaRecognitionContent{content: r.Content}
-	default:
-		return nil
-	}
-}
+// SetMetaData serializes meta and writes it into the MetaData field
+func (r *RecognitionResult) SetMetaData(meta ResultMetaData) error {
+	b, err := json.Marshal(meta)
+	if err != nil {
+		return err
+	}
+	r.MetaData = string(b)
+	return nil
+}
+// GetFormulaContent deserializes a formula result from the Content field
+func (r *RecognitionResult) GetFormulaContent() (*FormulaContent, error) {
+	var c FormulaContent
+	if err := json.Unmarshal([]byte(r.Content), &c); err != nil {
+		return nil, err
+	}
+	return &c, nil
+}
+// GetPDFContent deserializes paged PDF results from the Content field
+func (r *RecognitionResult) GetPDFContent() ([]PDFPageContent, error) {
+	var pages []PDFPageContent
+	if err := json.Unmarshal([]byte(r.Content), &pages); err != nil {
+		return nil, err
+	}
+	return pages, nil
+}
+// MarshalFormulaContent serializes a formula result to a JSON string (for writing to Content)
+func MarshalFormulaContent(c FormulaContent) (string, error) {
+	b, err := json.Marshal(c)
+	return string(b), err
+}
+// MarshalPDFContent serializes paged PDF results to a JSON string (for writing to Content)
+func MarshalPDFContent(pages []PDFPageContent) (string, error) {
+	b, err := json.Marshal(pages)
+	return string(b), err
+}
-type RecognitionResultDao struct {
-}
+type RecognitionResultDao struct{}
 func NewRecognitionResultDao() *RecognitionResultDao {
 	return &RecognitionResultDao{}
 }
-// model methods
 func (dao *RecognitionResultDao) Create(tx *gorm.DB, data RecognitionResult) error {
 	return tx.Create(&data).Error
 }
-func (dao *RecognitionResultDao) GetByTaskID(tx *gorm.DB, taskID int64) (result *RecognitionResult, err error) {
-	result = &RecognitionResult{}
-	err = tx.Where("task_id = ?", taskID).First(result).Error
+func (dao *RecognitionResultDao) GetByTaskID(tx *gorm.DB, taskID int64) (*RecognitionResult, error) {
+	result := &RecognitionResult{}
+	err := tx.Where("task_id = ?", taskID).First(result).Error
 	if err != nil && err == gorm.ErrRecordNotFound {
 		return nil, nil
 	}
-	return
+	return result, err
 }
 func (dao *RecognitionResultDao) Update(tx *gorm.DB, id int64, updates map[string]interface{}) error {
 	return tx.Model(&RecognitionResult{}).Where("id = ?", id).Updates(updates).Error
 }
+func (dao *RecognitionResultDao) GetByTaskIDs(tx *gorm.DB, taskIDs []int64) ([]*RecognitionResult, error) {
+	var results []*RecognitionResult
+	err := tx.Where("task_id IN (?)", taskIDs).Find(&results).Error
+	return results, err
+}
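The new schema stores `Content` and `MetaData` as JSON strings, so a write is marshal-then-insert and a read is select-then-unmarshal. The stdlib sketch below reproduces that round trip; `marshalPDFContent`/`unmarshalPDFContent` are local stand-ins mirroring `MarshalPDFContent` and `GetPDFContent` from the diff above, not the repo's actual functions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PDFPageContent and ResultMetaData reproduce the structs introduced in the
// diff above; Content/MetaData are persisted as JSON strings in MySQL.
type PDFPageContent struct {
	PageNumber int    `json:"page_number"`
	Markdown   string `json:"markdown"`
}

type ResultMetaData struct {
	TotalNum int `json:"total_num"`
}

// marshalPDFContent mirrors MarshalPDFContent: serialize pages for the
// recognition_results.content column.
func marshalPDFContent(pages []PDFPageContent) (string, error) {
	b, err := json.Marshal(pages)
	return string(b), err
}

// unmarshalPDFContent mirrors GetPDFContent: decode the column back into pages.
func unmarshalPDFContent(content string) ([]PDFPageContent, error) {
	var pages []PDFPageContent
	err := json.Unmarshal([]byte(content), &pages)
	return pages, err
}

func main() {
	pages := []PDFPageContent{
		{PageNumber: 1, Markdown: "# Page 1"},
		{PageNumber: 2, Markdown: "$E = mc^2$"},
	}
	content, err := marshalPDFContent(pages)
	if err != nil {
		panic(err)
	}
	fmt.Println(content)

	back, err := unmarshalPDFContent(content)
	if err != nil {
		panic(err)
	}
	meta := ResultMetaData{TotalNum: len(back)}
	b, _ := json.Marshal(meta)
	fmt.Println(string(b)) // {"total_num":2}
}
```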

View File

@@ -20,6 +20,7 @@ const (
 	TaskTypeText   TaskType = "TEXT"
 	TaskTypeTable  TaskType = "TABLE"
 	TaskTypeLayout TaskType = "LAYOUT"
+	TaskTypePDF    TaskType = "PDF"
 )
 func (t TaskType) String() string {
@@ -69,9 +70,9 @@ func (dao *RecognitionTaskDao) GetByTaskNo(tx *gorm.DB, taskUUID string) (task *
 	return
 }
-func (dao *RecognitionTaskDao) GetTaskByFileURL(tx *gorm.DB, userID int64, fileHash string) (task *RecognitionTask, err error) {
+func (dao *RecognitionTaskDao) GetTaskByFileURL(tx *gorm.DB, fileHash string) (task *RecognitionTask, err error) {
 	task = &RecognitionTask{}
-	err = tx.Model(RecognitionTask{}).Where("user_id = ? AND file_hash = ?", userID, fileHash).First(task).Error
+	err = tx.Model(RecognitionTask{}).Where("file_hash = ?", fileHash).Last(task).Error
 	return
 }
@@ -87,8 +88,15 @@ func (dao *RecognitionTaskDao) GetTaskByID(tx *gorm.DB, id int64) (task *Recogni
 	return task, nil
 }
-func (dao *RecognitionTaskDao) GetTaskList(tx *gorm.DB, taskType TaskType, page int, pageSize int) (tasks []*RecognitionTask, err error) {
-	offset := (page - 1) * pageSize
-	err = tx.Model(RecognitionTask{}).Where("task_type = ?", taskType).Offset(offset).Limit(pageSize).Order(clause.OrderByColumn{Column: clause.Column{Name: "id"}, Desc: true}).Find(&tasks).Error
-	return
+func (dao *RecognitionTaskDao) GetTaskList(tx *gorm.DB, userID int64, taskType TaskType, page int, pageSize int) (tasks []*RecognitionTask, total int64, err error) {
+	query := tx.Model(RecognitionTask{}).Where("user_id = ?", userID)
+	if taskType != "" {
+		query = query.Where("task_type = ?", taskType)
+	}
+	err = query.Count(&total).Error
+	if err != nil {
+		return nil, 0, err
+	}
+	err = query.Order(clause.OrderByColumn{Column: clause.Column{Name: "id"}, Desc: true}).Offset((page - 1) * pageSize).Limit(pageSize).Find(&tasks).Error
+	return tasks, total, err
 }


@@ -10,9 +10,11 @@ type User struct {
 	BaseModel
 	Username      string `gorm:"column:username" json:"username"`
 	Phone         string `gorm:"column:phone" json:"phone"`
+	Email         string `gorm:"column:email" json:"email"`
 	Password      string `gorm:"column:password" json:"password"`
 	WechatOpenID  string `gorm:"column:wechat_open_id" json:"wechat_open_id"`
 	WechatUnionID string `gorm:"column:wechat_union_id" json:"wechat_union_id"`
+	GoogleID      string `gorm:"column:google_id" json:"google_id"`
 }
 func (u *User) TableName() string {
@@ -51,3 +53,29 @@ func (dao *UserDao) GetByID(tx *gorm.DB, id int64) (*User, error) {
 	}
 	return &user, nil
 }
+func (dao *UserDao) GetByEmail(tx *gorm.DB, email string) (*User, error) {
+	var user User
+	if err := tx.Where("email = ?", email).First(&user).Error; err != nil {
+		if errors.Is(err, gorm.ErrRecordNotFound) {
+			return nil, nil
+		}
+		return nil, err
+	}
+	return &user, nil
+}
+func (dao *UserDao) GetByGoogleID(tx *gorm.DB, googleID string) (*User, error) {
+	var user User
+	if err := tx.Where("google_id = ?", googleID).First(&user).Error; err != nil {
+		if errors.Is(err, gorm.ErrRecordNotFound) {
+			return nil, nil
+		}
+		return nil, err
+	}
+	return &user, nil
+}
+func (dao *UserDao) Update(tx *gorm.DB, user *User) error {
+	return tx.Save(user).Error
+}

main.go

@@ -10,23 +10,25 @@ import (
 	"syscall"
 	"time"
-	"gitea.com/bitwsd/core/common/cors"
-	"gitea.com/bitwsd/core/common/log"
-	"gitea.com/bitwsd/core/common/middleware"
-	"gitea.com/bitwsd/document_ai/api"
-	"gitea.com/bitwsd/document_ai/config"
-	"gitea.com/bitwsd/document_ai/internal/storage/cache"
-	"gitea.com/bitwsd/document_ai/internal/storage/dao"
-	"gitea.com/bitwsd/document_ai/pkg/common"
-	"gitea.com/bitwsd/document_ai/pkg/sms"
+	"gitea.com/texpixel/document_ai/api"
+	"gitea.com/texpixel/document_ai/config"
+	"gitea.com/texpixel/document_ai/internal/storage/cache"
+	"gitea.com/texpixel/document_ai/internal/storage/dao"
+	"gitea.com/texpixel/document_ai/pkg/common"
+	"gitea.com/texpixel/document_ai/pkg/cors"
+	"gitea.com/texpixel/document_ai/pkg/email"
+	"gitea.com/texpixel/document_ai/pkg/log"
+	"gitea.com/texpixel/document_ai/pkg/middleware"
+	"gitea.com/texpixel/document_ai/pkg/sms"
 	"github.com/gin-gonic/gin"
 )
 func main() {
 	// load config
-	env := "dev"
-	flag.StringVar(&env, "env", "dev", "environment (dev/prod)")
+	env := ""
+	flag.StringVar(&env, "env", "local", "environment (dev/prod/local)")
 	flag.Parse()
 	configPath := fmt.Sprintf("./config/config_%s.yaml", env)
 	if err := config.Init(configPath); err != nil {
 		panic(err)
@@ -41,14 +43,7 @@ func main() {
 	dao.InitDB(config.GlobalConfig.Database)
 	cache.InitRedisClient(config.GlobalConfig.Redis)
 	sms.InitSmsClient()
-
-	// init Redis
-	// cache.InitRedis(config.GlobalConfig.Redis.Addr)
-
-	// init OSS client
-	// if err := oss.InitOSS(config.GlobalConfig.OSS); err != nil {
-	// 	logger.Fatal("Failed to init OSS client", logger.Fields{"error": err})
-	// }
+	email.InitEmailClient()
 	// set gin mode
 	gin.SetMode(config.GlobalConfig.Server.Mode)
@@ -78,6 +73,6 @@ func main() {
 	if err := srv.Shutdown(context.Background()); err != nil {
 		panic(err)
 	}
-	time.Sleep(time.Second * 3)
+	time.Sleep(time.Second * 5)
 	dao.CloseDB()
 }


@@ -0,0 +1,18 @@
-- Analytics tracking event table
CREATE TABLE IF NOT EXISTS `analytics_events` (
`id` BIGINT NOT NULL AUTO_INCREMENT COMMENT '主键ID',
`user_id` BIGINT NOT NULL COMMENT '用户ID',
`event_name` VARCHAR(128) NOT NULL COMMENT '事件名称',
`properties` JSON DEFAULT NULL COMMENT '事件属性(JSON)',
`device_info` JSON DEFAULT NULL COMMENT '设备信息(JSON)',
`meta_data` JSON DEFAULT NULL COMMENT '元数据(JSON包含task_id等)',
`created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
PRIMARY KEY (`id`),
INDEX `idx_user_id` (`user_id`),
INDEX `idx_event_name` (`event_name`),
INDEX `idx_created_at` (`created_at`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='数据埋点事件表';
-- Composite indexes to improve query performance
CREATE INDEX `idx_user_event` ON `analytics_events` (`user_id`, `event_name`);
CREATE INDEX `idx_event_time` ON `analytics_events` (`event_name`, `created_at`);


@@ -0,0 +1,11 @@
CREATE TABLE IF NOT EXISTS `email_send_log` (
`id` BIGINT NOT NULL AUTO_INCREMENT COMMENT '主键ID',
`email` VARCHAR(255) NOT NULL COMMENT '邮箱地址',
`status` TINYINT NOT NULL DEFAULT 0 COMMENT '状态: 0=已发送未注册, 1=已注册',
`created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`updated_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '更新时间',
PRIMARY KEY (`id`),
INDEX `idx_email` (`email`),
INDEX `idx_status` (`status`),
INDEX `idx_created_at` (`created_at`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='邮件发送记录表';


@@ -0,0 +1,32 @@
-- migrations/pdf_recognition.sql
-- Refactor recognition_results to the JSON content schema
-- Execution order: add new columns → backfill historical data → drop old columns
-- Step 1: add the JSON columns (keep the old columns until the backfill completes)
ALTER TABLE `recognition_results`
ADD COLUMN `meta_data` JSON DEFAULT NULL COMMENT '元数据 {"total_num":1}' AFTER `task_type`,
ADD COLUMN `content` JSON DEFAULT NULL COMMENT '识别内容 JSON' AFTER `meta_data`;
-- Step 2: backfill the old columns into the new JSON fields
-- All existing records are single-page FORMULA results, so meta_data.total_num = 1
-- content structure: {"latex":"...","markdown":"...","mathml":"...","mml":"..."}
UPDATE `recognition_results`
SET
`meta_data` = JSON_OBJECT('total_num', 1),
`content` = JSON_OBJECT(
'latex', IFNULL(`latex`, ''),
'markdown', IFNULL(`markdown`, ''),
'mathml', IFNULL(`mathml`, ''),
'mml', IFNULL(`mml`, '')
)
WHERE `content` IS NULL;
-- Step 3: verify the backfill is complete (should return 0)
-- SELECT COUNT(*) FROM `recognition_results` WHERE `content` IS NULL;
-- Step 4: drop the old columns
ALTER TABLE `recognition_results`
DROP COLUMN `latex`,
DROP COLUMN `markdown`,
DROP COLUMN `mathml`,
DROP COLUMN `mml`;


@@ -6,6 +6,7 @@ const (
 	CodeSuccess       = 200
 	CodeParamError    = 400
 	CodeUnauthorized  = 401
+	CodeTokenExpired  = 4011
 	CodeForbidden     = 403
 	CodeNotFound      = 404
 	CodeInvalidStatus = 405
@@ -14,12 +15,18 @@ const (
 	CodeTaskNotComplete = 1001
 	CodeRecordRepeat    = 1002
 	CodeSmsCodeError    = 1003
+	CodeEmailExists      = 1004
+	CodeEmailNotFound    = 1005
+	CodePasswordMismatch = 1006
+	CodeEmailCodeError   = 1007
+	CodeEmailSendLimit   = 1008
 )
 const (
 	CodeSuccessMsg       = "success"
 	CodeParamErrorMsg    = "param error"
 	CodeUnauthorizedMsg  = "unauthorized"
+	CodeTokenExpiredMsg  = "token expired"
 	CodeForbiddenMsg     = "forbidden"
 	CodeNotFoundMsg      = "not found"
 	CodeInvalidStatusMsg = "invalid status"
@@ -28,6 +35,11 @@ const (
 	CodeTaskNotCompleteMsg = "task not complete"
 	CodeRecordRepeatMsg    = "record repeat"
 	CodeSmsCodeErrorMsg    = "sms code error"
+	CodeEmailExistsMsg      = "email already registered"
+	CodeEmailNotFoundMsg    = "email not found"
+	CodePasswordMismatchMsg = "password mismatch"
+	CodeEmailCodeErrorMsg   = "email verify code error"
+	CodeEmailSendLimitMsg   = "email send limit reached"
 )
 type BusinessError struct {
@@ -47,3 +59,12 @@ func NewError(code ErrorCode, message string, err error) *BusinessError {
 		Err: err,
 	}
 }
+// predefined business errors
+var (
+	ErrEmailExists      = NewError(CodeEmailExists, CodeEmailExistsMsg, nil)
+	ErrEmailNotFound    = NewError(CodeEmailNotFound, CodeEmailNotFoundMsg, nil)
+	ErrPasswordMismatch = NewError(CodePasswordMismatch, CodePasswordMismatchMsg, nil)
+	ErrEmailCodeError   = NewError(CodeEmailCodeError, CodeEmailCodeErrorMsg, nil)
+	ErrEmailSendLimit   = NewError(CodeEmailSendLimit, CodeEmailSendLimitMsg, nil)
+)


@@ -4,9 +4,10 @@ import (
 	"context"
 	"net/http"
 	"strings"
+	"time"
-	"gitea.com/bitwsd/document_ai/pkg/constant"
-	"gitea.com/bitwsd/document_ai/pkg/jwt"
+	"gitea.com/texpixel/document_ai/pkg/constant"
+	"gitea.com/texpixel/document_ai/pkg/jwt"
 	"github.com/gin-gonic/gin"
 )
@@ -45,6 +46,30 @@ func AuthMiddleware(ctx *gin.Context) {
 	ctx.Set(constant.ContextUserID, claims.UserId)
 }
+func MustAuthMiddleware() gin.HandlerFunc {
+	return func(ctx *gin.Context) {
+		token := ctx.GetHeader("Authorization")
+		if token == "" {
+			ctx.JSON(http.StatusOK, ErrorResponse(ctx, CodeUnauthorized, CodeUnauthorizedMsg))
+			ctx.Abort()
+			return
+		}
+		token = strings.TrimPrefix(token, "Bearer ")
+		claims, err := jwt.ParseToken(token)
+		if err != nil || claims == nil {
+			ctx.JSON(http.StatusOK, ErrorResponse(ctx, CodeUnauthorized, CodeUnauthorizedMsg))
+			ctx.Abort()
+			return
+		}
+		if claims.ExpiresAt < time.Now().Unix() {
+			ctx.JSON(http.StatusOK, ErrorResponse(ctx, CodeTokenExpired, CodeTokenExpiredMsg))
+			ctx.Abort()
+			return
+		}
+		ctx.Set(constant.ContextUserID, claims.UserId)
+	}
+}
 func GetAuthMiddleware() gin.HandlerFunc {
 	return func(ctx *gin.Context) {
 		token := ctx.GetHeader("Authorization")


@@ -3,7 +3,7 @@ package common
 import (
 	"context"
-	"gitea.com/bitwsd/document_ai/pkg/constant"
+	"gitea.com/texpixel/document_ai/pkg/constant"
 )
 type Response struct {


@@ -19,9 +19,9 @@ type Config struct {
func DefaultConfig() Config {
	return Config{
		AllowOrigins:     []string{"*"},
-		AllowMethods:     []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
-		AllowHeaders:     []string{"Origin", "Content-Type", "Accept"},
-		ExposeHeaders:    []string{"Content-Length"},
+		AllowMethods:     []string{"GET", "POST", "PUT", "DELETE", "OPTIONS", "PATCH"},
+		AllowHeaders:     []string{"Origin", "Content-Type", "Accept", "Authorization", "X-Requested-With"},
+		ExposeHeaders:    []string{"Content-Length", "Content-Type"},
		AllowCredentials: true,
		MaxAge:           86400, // 24 hours
	}
@@ -30,16 +30,30 @@ func DefaultConfig() Config {
func Cors(config Config) gin.HandlerFunc {
	return func(c *gin.Context) {
		origin := c.Request.Header.Get("Origin")
+		if origin == "" {
+			c.Next()
+			return
+		}
+
		// Check whether this origin is allowed
-		allowOrigin := "*"
+		allowOrigin := ""
		for _, o := range config.AllowOrigins {
+			if o == "*" {
+				// With a wildcard entry, echo the actual origin back (compatible with credentials)
+				allowOrigin = origin
+				break
+			}
			if o == origin {
				allowOrigin = origin
				break
			}
		}
+		if allowOrigin == "" {
+			c.Next()
+			return
+		}
+
		c.Header("Access-Control-Allow-Origin", allowOrigin)
		c.Header("Access-Control-Allow-Methods", strings.Join(config.AllowMethods, ","))
		c.Header("Access-Control-Allow-Headers", strings.Join(config.AllowHeaders, ","))
@@ -58,3 +72,4 @@ func Cors(config Config) gin.HandlerFunc {
		c.Next()
	}
}
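The origin-matching rule this change introduces (echo the request origin when the allow-list contains `*`, so `Allow-Credentials: true` still works; set no CORS headers at all for unlisted origins) can be isolated as a pure function. This is a minimal sketch; `resolveAllowOrigin` is our illustrative name, not part of the package:

```go
package main

import "fmt"

// resolveAllowOrigin mirrors the loop in Cors: a "*" entry echoes the actual
// request origin back (browsers reject a literal "*" when credentials are
// allowed), an exact match echoes the origin, and an unlisted origin yields
// "" so the middleware skips the CORS headers entirely.
func resolveAllowOrigin(allowOrigins []string, origin string) string {
	for _, o := range allowOrigins {
		if o == "*" || o == origin {
			return origin
		}
	}
	return ""
}

func main() {
	fmt.Println(resolveAllowOrigin([]string{"*"}, "https://app.example.com"))
	fmt.Println(resolveAllowOrigin([]string{"https://a.example"}, "https://b.example") == "")
}
```

The empty-string result is what drives the new early-return branches: an unlisted origin simply falls through to `c.Next()` with no `Access-Control-*` headers set.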

pkg/email/email.go Normal file

@@ -0,0 +1,160 @@
package email
import (
"bytes"
"context"
"crypto/tls"
"encoding/json"
"fmt"
"io"
"net/http"
"net/mail"
"net/smtp"
"regexp"
"strings"
"sync"
"gitea.com/texpixel/document_ai/config"
"gitea.com/texpixel/document_ai/pkg/log"
)
var (
once sync.Once
client *Client
)
// chineseDomainRe matches email domains that should be routed via Aliyun SMTP.
var chineseDomainRe = regexp.MustCompile(`(?i)(\.edu\.cn|qq\.com|163\.com|126\.com|sina\.com|sohu\.com)$`)
type Client struct {
cfg config.EmailConfig
}
func InitEmailClient() *Client {
once.Do(func() {
client = &Client{cfg: config.GlobalConfig.Email}
})
return client
}
// Send routes the email to the appropriate provider based on the recipient domain.
func Send(ctx context.Context, to, subject, body string) error {
if client == nil {
return fmt.Errorf("email client not initialized, call InitEmailClient first")
}
return client.Send(ctx, to, subject, body)
}
func (c *Client) Send(ctx context.Context, to, subject, body string) error {
if _, err := mail.ParseAddress(to); err != nil {
return fmt.Errorf("invalid email address %q: %w", to, err)
}
domain := to[strings.LastIndex(to, "@")+1:]
if chineseDomainRe.MatchString(domain) {
return c.sendViaAliyunSMTP(ctx, to, subject, body)
}
return c.sendViaResend(ctx, to, subject, body)
}
func (c *Client) sendViaAliyunSMTP(ctx context.Context, to, subject, body string) error {
cfg := c.cfg.AliyunSMTP
addr := fmt.Sprintf("%s:%d", cfg.Host, cfg.Port)
tlsConfig := &tls.Config{ServerName: cfg.Host}
conn, err := tls.Dial("tcp", addr, tlsConfig)
if err != nil {
log.Error(ctx, "func", "sendViaAliyunSMTP", "msg", "tls dial failed", "error", err)
return err
}
smtpClient, err := smtp.NewClient(conn, cfg.Host)
if err != nil {
conn.Close()
log.Error(ctx, "func", "sendViaAliyunSMTP", "msg", "smtp new client failed", "error", err)
return err
}
defer smtpClient.Close()
auth := smtp.PlainAuth("", cfg.Username, cfg.Password, cfg.Host)
if err = smtpClient.Auth(auth); err != nil {
log.Error(ctx, "func", "sendViaAliyunSMTP", "msg", "smtp auth failed", "error", err)
return err
}
from := c.cfg.FromAddr
if err = smtpClient.Mail(from); err != nil {
return err
}
if err = smtpClient.Rcpt(to); err != nil {
return err
}
wc, err := smtpClient.Data()
if err != nil {
return err
}
defer wc.Close()
if _, err = wc.Write([]byte(buildMessage(c.cfg.FromName, from, to, subject, body))); err != nil {
return err
}
log.Info(ctx, "func", "sendViaAliyunSMTP", "msg", "email sent via aliyun smtp", "to", to)
return nil
}
type resendRequest struct {
From string `json:"from"`
To []string `json:"to"`
Subject string `json:"subject"`
Html string `json:"html"`
}
func (c *Client) sendViaResend(ctx context.Context, to, subject, body string) error {
payload := resendRequest{
From: fmt.Sprintf("%s <%s>", c.cfg.FromName, c.cfg.FromAddr),
To: []string{to},
Subject: subject,
Html: body,
}
jsonData, err := json.Marshal(payload)
if err != nil {
return err
}
req, err := http.NewRequestWithContext(ctx, http.MethodPost, "https://api.resend.com/emails", bytes.NewReader(jsonData))
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", "Bearer "+c.cfg.Resend.APIKey)
resp, err := (&http.Client{}).Do(req)
if err != nil {
log.Error(ctx, "func", "sendViaResend", "msg", "http request failed", "error", err)
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
respBody, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
log.Error(ctx, "func", "sendViaResend", "msg", "resend api returned non-2xx", "status", resp.StatusCode, "to", to, "body", string(respBody))
return fmt.Errorf("resend api returned status %d: %s", resp.StatusCode, string(respBody))
}
log.Info(ctx, "func", "sendViaResend", "msg", "email sent via resend", "to", to)
return nil
}
func buildMessage(fromName, fromAddr, to, subject, body string) string {
var buf bytes.Buffer
buf.WriteString(fmt.Sprintf("From: %s <%s>\r\n", fromName, fromAddr))
buf.WriteString(fmt.Sprintf("To: %s\r\n", to))
buf.WriteString(fmt.Sprintf("Subject: %s\r\n", subject))
buf.WriteString("MIME-Version: 1.0\r\n")
buf.WriteString("Content-Type: text/html; charset=UTF-8\r\n")
buf.WriteString("\r\n")
buf.WriteString(body)
return buf.String()
}
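The provider routing in `Send` hinges entirely on the `chineseDomainRe` check against the recipient's domain. A stand-alone sketch of that decision (the helper name `routeViaAliyun` is ours, for illustration only):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same pattern as chineseDomainRe in pkg/email: recipient domains that
// should be delivered through Aliyun SMTP instead of the Resend API.
var chineseDomainRe = regexp.MustCompile(`(?i)(\.edu\.cn|qq\.com|163\.com|126\.com|sina\.com|sohu\.com)$`)

// routeViaAliyun reports whether the recipient's domain matches the
// Chinese-provider list, exactly as Client.Send does.
func routeViaAliyun(to string) bool {
	domain := to[strings.LastIndex(to, "@")+1:]
	return chineseDomainRe.MatchString(domain)
}

func main() {
	fmt.Println(routeViaAliyun("alice@qq.com"))     // true  -> Aliyun SMTP
	fmt.Println(routeViaAliyun("bob@gmail.com"))    // false -> Resend
	fmt.Println(routeViaAliyun("carol@pku.edu.cn")) // true  -> Aliyun SMTP
}
```

Note the `$` anchor: only the domain suffix is matched, so `qq.com.evil.example` would not be routed to Aliyun SMTP.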

pkg/email/template.go Normal file

@@ -0,0 +1,166 @@
package email
import "fmt"
// BuildVerifyCodeEmail returns a locale-appropriate subject and HTML body for
// the verification code email. Chinese domains get a Chinese email; all others
// get an English one.
func BuildVerifyCodeEmail(toEmail, code string) (subject, body string) {
domain := toEmail[lastIndex(toEmail, '@')+1:]
if chineseDomainRe.MatchString(domain) {
return buildVerifyCodeZH(code)
}
return buildVerifyCodeEN(code)
}
func buildVerifyCodeZH(code string) (subject, body string) {
subject = "您的验证码"
body = fmt.Sprintf(`<!DOCTYPE html>
<html lang="zh">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>验证码</title>
</head>
<body style="margin:0;padding:0;background:#f4f4f5;font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif;">
<table width="100%%" cellpadding="0" cellspacing="0" style="background:#f4f4f5;padding:40px 0;">
<tr>
<td align="center">
<table width="480" cellpadding="0" cellspacing="0" style="background:#ffffff;border-radius:12px;overflow:hidden;box-shadow:0 1px 4px rgba(0,0,0,.08);">
<!-- Header -->
<tr>
<td style="background:#0a0a0a;padding:32px 40px;">
<span style="color:#ffffff;font-size:20px;font-weight:700;letter-spacing:-0.3px;">TexPixel</span>
</td>
</tr>
<!-- Body -->
<tr>
<td style="padding:40px 40px 32px;">
<p style="margin:0 0 8px;font-size:24px;font-weight:700;color:#0a0a0a;line-height:1.3;">验证您的邮箱</p>
<p style="margin:0 0 32px;font-size:15px;color:#6b7280;line-height:1.6;">
请使用以下验证码完成注册。验证码仅对您本人有效,请勿分享给他人。
</p>
<!-- Code block -->
<table width="100%%" cellpadding="0" cellspacing="0">
<tr>
<td align="center" style="background:#f9fafb;border:1px solid #e5e7eb;border-radius:8px;padding:28px 0;">
<span style="font-size:40px;font-weight:700;letter-spacing:12px;color:#0a0a0a;font-family:'Courier New',Courier,monospace;">%s</span>
</td>
</tr>
</table>
<p style="margin:24px 0 0;font-size:13px;color:#9ca3af;text-align:center;">
验证码 <strong>10 分钟</strong>内有效,请尽快使用
</p>
</td>
</tr>
<!-- Divider -->
<tr>
<td style="padding:0 40px;">
<div style="height:1px;background:#f3f4f6;"></div>
</td>
</tr>
<!-- Footer -->
<tr>
<td style="padding:24px 40px 32px;">
<p style="margin:0;font-size:12px;color:#9ca3af;line-height:1.7;">
如果您没有请求此验证码,可以忽略本邮件,您的账户仍然安全。<br/>
&copy; 2025 TexPixel. 保留所有权利。
</p>
</td>
</tr>
</table>
</td>
</tr>
</table>
</body>
</html>`, code)
return
}
func buildVerifyCodeEN(code string) (subject, body string) {
subject = "Your verification code"
body = fmt.Sprintf(`<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Verification Code</title>
</head>
<body style="margin:0;padding:0;background:#f4f4f5;font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif;">
<table width="100%%" cellpadding="0" cellspacing="0" style="background:#f4f4f5;padding:40px 0;">
<tr>
<td align="center">
<table width="480" cellpadding="0" cellspacing="0" style="background:#ffffff;border-radius:12px;overflow:hidden;box-shadow:0 1px 4px rgba(0,0,0,.08);">
<!-- Header -->
<tr>
<td style="background:#0a0a0a;padding:32px 40px;">
<span style="color:#ffffff;font-size:20px;font-weight:700;letter-spacing:-0.3px;">TexPixel</span>
</td>
</tr>
<!-- Body -->
<tr>
<td style="padding:40px 40px 32px;">
<p style="margin:0 0 8px;font-size:24px;font-weight:700;color:#0a0a0a;line-height:1.3;">Verify your email address</p>
<p style="margin:0 0 32px;font-size:15px;color:#6b7280;line-height:1.6;">
Use the verification code below to complete your registration. Never share this code with anyone.
</p>
<!-- Code block -->
<table width="100%%" cellpadding="0" cellspacing="0">
<tr>
<td align="center" style="background:#f9fafb;border:1px solid #e5e7eb;border-radius:8px;padding:28px 0;">
<span style="font-size:40px;font-weight:700;letter-spacing:12px;color:#0a0a0a;font-family:'Courier New',Courier,monospace;">%s</span>
</td>
</tr>
</table>
<p style="margin:24px 0 0;font-size:13px;color:#9ca3af;text-align:center;">
This code expires in <strong>10 minutes</strong>
</p>
</td>
</tr>
<!-- Divider -->
<tr>
<td style="padding:0 40px;">
<div style="height:1px;background:#f3f4f6;"></div>
</td>
</tr>
<!-- Footer -->
<tr>
<td style="padding:24px 40px 32px;">
<p style="margin:0;font-size:12px;color:#9ca3af;line-height:1.7;">
If you didn&rsquo;t request this code, you can safely ignore this email. Your account is still secure.<br/>
&copy; 2025 TexPixel. All rights reserved.
</p>
</td>
</tr>
</table>
</td>
</tr>
</table>
</body>
</html>`, code)
return
}
// lastIndex returns the last index of sep in s, or -1.
func lastIndex(s string, sep byte) int {
for i := len(s) - 1; i >= 0; i-- {
if s[i] == sep {
return i
}
}
return -1
}


@@ -10,7 +10,7 @@ import (
	"net/http"
	"time"

-	"gitea.com/bitwsd/core/common/log"
+	"gitea.com/texpixel/document_ai/pkg/log"
)

// RetryConfig retry configuration
@@ -23,9 +23,9 @@ type RetryConfig struct {
// DefaultRetryConfig default retry configuration
var DefaultRetryConfig = RetryConfig{
-	MaxRetries:      2,
+	MaxRetries:      1,
	InitialInterval: 100 * time.Millisecond,
-	MaxInterval:     5 * time.Second,
+	MaxInterval:     30 * time.Second,
	SkipTLSVerify:   true,
}


@@ -18,7 +18,12 @@ type CustomClaims struct {
	jwt.StandardClaims
}

-func CreateToken(user User) (string, error) {
+type TokenResult struct {
+	Token     string `json:"token"`
+	ExpiresAt int64  `json:"expires_at"`
+}
+
+func CreateToken(user User) (*TokenResult, error) {
	expire := time.Now().Add(time.Duration(ValidTime) * time.Second)
	claims := &CustomClaims{
		User: user,
@@ -32,10 +37,13 @@ func CreateToken(user User) (string, error) {
	t, err := token.SignedString(JwtKey)
	if err != nil {
-		return "", err
+		return nil, err
	}
-	return "Bearer " + t, nil
+	return &TokenResult{
+		Token:     "Bearer " + t,
+		ExpiresAt: expire.Unix(),
+	}, nil
}

func ParseToken(signToken string) (*CustomClaims, error) {


@@ -27,3 +27,4 @@ func DefaultLogConfig() *LogConfig {
		Compress: true,
	}
}


@@ -8,6 +8,8 @@ import (
	"runtime"
	"time"

+	"gitea.com/texpixel/document_ai/pkg/requestid"
+
	"github.com/rs/zerolog"
	"gopkg.in/natefinch/lumberjack.v2"
)
@@ -67,8 +69,13 @@ func log(ctx context.Context, level zerolog.Level, logType LogType, kv ...interface{}) {
	// add the log type
	event.Str("type", string(logType))

-	// add the request ID
-	if reqID, exists := ctx.Value("request_id").(string); exists {
+	reqID := requestid.GetRequestID()
+	if reqID == "" {
+		if id, exists := ctx.Value("request_id").(string); exists {
+			reqID = id
+		}
+	}
+	if reqID != "" {
		event.Str("request_id", reqID)
	}


@@ -6,7 +6,7 @@ import (
	"strings"
	"time"

-	"gitea.com/bitwsd/core/common/log"
+	"gitea.com/texpixel/document_ai/pkg/log"
	"github.com/gin-gonic/gin"
)


@@ -0,0 +1,23 @@
package middleware
import (
"gitea.com/texpixel/document_ai/pkg/requestid"
"github.com/gin-gonic/gin"
"github.com/google/uuid"
)
func RequestID() gin.HandlerFunc {
return func(c *gin.Context) {
reqID := c.Request.Header.Get("X-Request-ID")
if reqID == "" {
reqID = uuid.New().String()
}
c.Request.Header.Set("X-Request-ID", reqID)
c.Set("request_id", reqID)
requestid.SetRequestID(reqID, func() {
c.Next()
})
}
}


@@ -12,8 +12,8 @@ import (
	"strings"
	"time"

-	"gitea.com/bitwsd/core/common/log"
-	"gitea.com/bitwsd/document_ai/config"
+	"gitea.com/texpixel/document_ai/config"
+	"gitea.com/texpixel/document_ai/pkg/log"
	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)
@@ -64,8 +64,7 @@ func GetPolicyToken() (string, error) {
}

func GetPolicyURL(ctx context.Context, path string) (string, error) {
-	// Create OSS client
-	client, err := oss.New(config.GlobalConfig.Aliyun.OSS.Endpoint, config.GlobalConfig.Aliyun.OSS.AccessKeyID, config.GlobalConfig.Aliyun.OSS.AccessKeySecret)
+	client, err := oss.New(config.GlobalConfig.Aliyun.OSS.Endpoint, config.GlobalConfig.Aliyun.OSS.AccessKeyID, config.GlobalConfig.Aliyun.OSS.AccessKeySecret, oss.UseCname(true))
	if err != nil {
		log.Error(ctx, "func", "GetPolicyURL", "msg", "create oss client failed", "error", err)
		return "", err
	}
@@ -96,6 +95,8 @@ func GetPolicyURL(ctx context.Context, path string) (string, error) {
		contentType = "image/tiff"
	case ".svg":
		contentType = "image/svg+xml"
+	case ".pdf":
+		contentType = "application/pdf"
	default:
		return "", fmt.Errorf("unsupported file type: %s", ext)
	}
@@ -120,14 +121,16 @@ func GetPolicyURL(ctx context.Context, path string) (string, error) {
// DownloadFile downloads a file from OSS and returns the reader, caller should close the reader
func DownloadFile(ctx context.Context, ossPath string) (io.ReadCloser, error) {
	endpoint := config.GlobalConfig.Aliyun.OSS.InnerEndpoint
+	useCname := false
	if config.GlobalConfig.Server.IsDebug() {
		endpoint = config.GlobalConfig.Aliyun.OSS.Endpoint
+		useCname = true
	}
+	log.Info(ctx, "func", "DownloadFile", "msg", "endpoint", endpoint, "ossPath", ossPath)
	// Create OSS client
-	client, err := oss.New(endpoint,
-		config.GlobalConfig.Aliyun.OSS.AccessKeyID,
-		config.GlobalConfig.Aliyun.OSS.AccessKeySecret)
+	client, err := oss.New(endpoint, config.GlobalConfig.Aliyun.OSS.AccessKeyID, config.GlobalConfig.Aliyun.OSS.AccessKeySecret, oss.UseCname(useCname))
	if err != nil {
		log.Error(ctx, "func", "DownloadFile", "msg", "create oss client failed", "error", err)
		return nil, err
	}
@@ -153,7 +156,7 @@ func DownloadFile(ctx context.Context, ossPath string) (io.ReadCloser, error) {
func GetDownloadURL(ctx context.Context, ossPath string) (string, error) {
	endpoint := config.GlobalConfig.Aliyun.OSS.Endpoint
-	client, err := oss.New(endpoint, config.GlobalConfig.Aliyun.OSS.AccessKeyID, config.GlobalConfig.Aliyun.OSS.AccessKeySecret)
+	client, err := oss.New(endpoint, config.GlobalConfig.Aliyun.OSS.AccessKeyID, config.GlobalConfig.Aliyun.OSS.AccessKeySecret, oss.UseCname(true))
	if err != nil {
		log.Error(ctx, "func", "GetDownloadURL", "msg", "create oss client failed", "error", err)
		return "", err
	}
@@ -165,11 +168,13 @@ func GetDownloadURL(ctx context.Context, ossPath string) (string, error) {
		return "", err
	}

-	signURL, err := bucket.SignURL(ossPath, oss.HTTPGet, 60)
+	signURL, err := bucket.SignURL(ossPath, oss.HTTPGet, 3600)
	if err != nil {
		log.Error(ctx, "func", "GetDownloadURL", "msg", "get object failed", "error", err)
		return "", err
	}
+	signURL = strings.Replace(signURL, "http://", "https://", 1)
	return signURL, nil
}


@@ -0,0 +1,27 @@
package requestid
import (
"github.com/jtolds/gls"
)
// requestIDKey is the key under which request_id is stored in gls
var requestIDKey = gls.GenSym()

// glsMgr is the gls context manager
var glsMgr = gls.NewContextManager()

// SetRequestID stores request_id in gls; it remains visible for the duration of fn
func SetRequestID(requestID string, fn func()) {
	glsMgr.SetValues(gls.Values{requestIDKey: requestID}, fn)
}

// GetRequestID returns the request_id of the current goroutine from gls
func GetRequestID() string {
val, ok := glsMgr.GetValue(requestIDKey)
if !ok {
return ""
}
reqID, _ := val.(string)
return reqID
}


@@ -4,7 +4,7 @@ import (
	"errors"
	"sync"

-	"gitea.com/bitwsd/document_ai/config"
+	"gitea.com/texpixel/document_ai/config"
	openapi "github.com/alibabacloud-go/darabonba-openapi/client"
	dysmsapi "github.com/alibabacloud-go/dysmsapi-20170525/v2/client"
	aliutil "github.com/alibabacloud-go/tea-utils/service"


@@ -23,6 +23,8 @@ func rmDollarSurr(text string) string {
func ToKatex(formula string) string {
	res := formula
+	res = strings.ReplaceAll(res, "\n", "")
+
	// Remove mbox surrounding
	res = changeAll(res, `\mbox `, " ", "{", "}", "", "")
	res = changeAll(res, `\mbox`, " ", "{", "}", "", "")

pkg/utils/model.go Normal file

@@ -0,0 +1,6 @@
package utils
const (
ModelVLDeepSeekOCR = "deepseek-ai/DeepSeek-OCR"
ModelVLQwen3VL32BInstruct = "Qwen/Qwen3-VL-32B-Instruct"
)


@@ -3,7 +3,7 @@ package utils
import (
	"context"

-	"gitea.com/bitwsd/core/common/log"
+	"gitea.com/texpixel/document_ai/pkg/log"
)

func SafeGo(fn func()) {


@@ -1,5 +1,5 @@
package utils

const (
-	SiliconFlowToken = "Bearer sk-akbroznlbxikkbiouzasspbbzwgxubnjjtqlujxmxsnvpmhn"
+	SiliconFlowToken = "Bearer sk-wiggxqscvjdveqvwcdywwpipcinglkzkewkcfjnrgjqbdbmc"
)


@@ -1,18 +0,0 @@
package middleware
import (
"github.com/gin-gonic/gin"
"github.com/google/uuid"
)
func RequestID() gin.HandlerFunc {
return func(c *gin.Context) {
requestID := c.Request.Header.Get("X-Request-ID")
if requestID == "" {
requestID = uuid.New().String()
}
c.Request.Header.Set("X-Request-ID", requestID)
c.Set("request_id", requestID)
c.Next()
}
}


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,305 +0,0 @@
// This file is auto-generated, don't edit it. Thanks.
package client
import (
    "io"
    "github.com/alibabacloud-go/tea/tea"
    credential "github.com/aliyun/credentials-go/credentials"
)
type InterceptorContext struct {
    Request *InterceptorContextRequest `json:"request,omitempty" xml:"request,omitempty" require:"true" type:"Struct"`
    Configuration *InterceptorContextConfiguration `json:"configuration,omitempty" xml:"configuration,omitempty" require:"true" type:"Struct"`
    Response *InterceptorContextResponse `json:"response,omitempty" xml:"response,omitempty" require:"true" type:"Struct"`
}
func (s InterceptorContext) String() string {
    return tea.Prettify(s)
}
func (s InterceptorContext) GoString() string {
    return s.String()
}
func (s *InterceptorContext) SetRequest(v *InterceptorContextRequest) *InterceptorContext {
    s.Request = v
    return s
}
func (s *InterceptorContext) SetConfiguration(v *InterceptorContextConfiguration) *InterceptorContext {
    s.Configuration = v
    return s
}
func (s *InterceptorContext) SetResponse(v *InterceptorContextResponse) *InterceptorContext {
    s.Response = v
    return s
}
type InterceptorContextRequest struct {
    Headers map[string]*string `json:"headers,omitempty" xml:"headers,omitempty"`
    Query map[string]*string `json:"query,omitempty" xml:"query,omitempty"`
    Body interface{} `json:"body,omitempty" xml:"body,omitempty"`
    Stream io.Reader `json:"stream,omitempty" xml:"stream,omitempty"`
    HostMap map[string]*string `json:"hostMap,omitempty" xml:"hostMap,omitempty"`
    Pathname *string `json:"pathname,omitempty" xml:"pathname,omitempty" require:"true"`
    ProductId *string `json:"productId,omitempty" xml:"productId,omitempty" require:"true"`
    Action *string `json:"action,omitempty" xml:"action,omitempty" require:"true"`
    Version *string `json:"version,omitempty" xml:"version,omitempty" require:"true"`
    Protocol *string `json:"protocol,omitempty" xml:"protocol,omitempty" require:"true"`
    Method *string `json:"method,omitempty" xml:"method,omitempty" require:"true"`
    AuthType *string `json:"authType,omitempty" xml:"authType,omitempty" require:"true"`
    BodyType *string `json:"bodyType,omitempty" xml:"bodyType,omitempty" require:"true"`
    ReqBodyType *string `json:"reqBodyType,omitempty" xml:"reqBodyType,omitempty" require:"true"`
    Style *string `json:"style,omitempty" xml:"style,omitempty"`
    Credential credential.Credential `json:"credential,omitempty" xml:"credential,omitempty" require:"true"`
    SignatureVersion *string `json:"signatureVersion,omitempty" xml:"signatureVersion,omitempty"`
    SignatureAlgorithm *string `json:"signatureAlgorithm,omitempty" xml:"signatureAlgorithm,omitempty"`
    UserAgent *string `json:"userAgent,omitempty" xml:"userAgent,omitempty" require:"true"`
}
func (s InterceptorContextRequest) String() string {
    return tea.Prettify(s)
}
func (s InterceptorContextRequest) GoString() string {
    return s.String()
}
func (s *InterceptorContextRequest) SetHeaders(v map[string]*string) *InterceptorContextRequest {
    s.Headers = v
    return s
}
func (s *InterceptorContextRequest) SetQuery(v map[string]*string) *InterceptorContextRequest {
    s.Query = v
    return s
}
func (s *InterceptorContextRequest) SetBody(v interface{}) *InterceptorContextRequest {
    s.Body = v
    return s
}
func (s *InterceptorContextRequest) SetStream(v io.Reader) *InterceptorContextRequest {
    s.Stream = v
    return s
}
func (s *InterceptorContextRequest) SetHostMap(v map[string]*string) *InterceptorContextRequest {
    s.HostMap = v
    return s
}
func (s *InterceptorContextRequest) SetPathname(v string) *InterceptorContextRequest {
    s.Pathname = &v
    return s
}
func (s *InterceptorContextRequest) SetProductId(v string) *InterceptorContextRequest {
    s.ProductId = &v
    return s
}
func (s *InterceptorContextRequest) SetAction(v string) *InterceptorContextRequest {
    s.Action = &v
    return s
}
func (s *InterceptorContextRequest) SetVersion(v string) *InterceptorContextRequest {
    s.Version = &v
    return s
}
func (s *InterceptorContextRequest) SetProtocol(v string) *InterceptorContextRequest {
    s.Protocol = &v
    return s
}
func (s *InterceptorContextRequest) SetMethod(v string) *InterceptorContextRequest {
    s.Method = &v
    return s
}
func (s *InterceptorContextRequest) SetAuthType(v string) *InterceptorContextRequest {
    s.AuthType = &v
    return s
}
func (s *InterceptorContextRequest) SetBodyType(v string) *InterceptorContextRequest {
    s.BodyType = &v
    return s
}
func (s *InterceptorContextRequest) SetReqBodyType(v string) *InterceptorContextRequest {
    s.ReqBodyType = &v
    return s
}
func (s *InterceptorContextRequest) SetStyle(v string) *InterceptorContextRequest {
    s.Style = &v
    return s
}
func (s *InterceptorContextRequest) SetCredential(v credential.Credential) *InterceptorContextRequest {
    s.Credential = v
    return s
}
func (s *InterceptorContextRequest) SetSignatureVersion(v string) *InterceptorContextRequest {
    s.SignatureVersion = &v
    return s
}
func (s *InterceptorContextRequest) SetSignatureAlgorithm(v string) *InterceptorContextRequest {
    s.SignatureAlgorithm = &v
    return s
}
func (s *InterceptorContextRequest) SetUserAgent(v string) *InterceptorContextRequest {
    s.UserAgent = &v
    return s
}
type InterceptorContextConfiguration struct {
    RegionId *string `json:"regionId,omitempty" xml:"regionId,omitempty" require:"true"`
    Endpoint *string `json:"endpoint,omitempty" xml:"endpoint,omitempty"`
    EndpointRule *string `json:"endpointRule,omitempty" xml:"endpointRule,omitempty"`
    EndpointMap map[string]*string `json:"endpointMap,omitempty" xml:"endpointMap,omitempty"`
    EndpointType *string `json:"endpointType,omitempty" xml:"endpointType,omitempty"`
    Network *string `json:"network,omitempty" xml:"network,omitempty"`
    Suffix *string `json:"suffix,omitempty" xml:"suffix,omitempty"`
}
func (s InterceptorContextConfiguration) String() string {
    return tea.Prettify(s)
}
func (s InterceptorContextConfiguration) GoString() string {
    return s.String()
}
func (s *InterceptorContextConfiguration) SetRegionId(v string) *InterceptorContextConfiguration {
    s.RegionId = &v
    return s
}
func (s *InterceptorContextConfiguration) SetEndpoint(v string) *InterceptorContextConfiguration {
    s.Endpoint = &v
    return s
}
func (s *InterceptorContextConfiguration) SetEndpointRule(v string) *InterceptorContextConfiguration {
    s.EndpointRule = &v
    return s
}
func (s *InterceptorContextConfiguration) SetEndpointMap(v map[string]*string) *InterceptorContextConfiguration {
    s.EndpointMap = v
    return s
}
func (s *InterceptorContextConfiguration) SetEndpointType(v string) *InterceptorContextConfiguration {
    s.EndpointType = &v
    return s
}
func (s *InterceptorContextConfiguration) SetNetwork(v string) *InterceptorContextConfiguration {
    s.Network = &v
    return s
}
func (s *InterceptorContextConfiguration) SetSuffix(v string) *InterceptorContextConfiguration {
    s.Suffix = &v
    return s
}
type InterceptorContextResponse struct {
    StatusCode *int `json:"statusCode,omitempty" xml:"statusCode,omitempty"`
    Headers map[string]*string `json:"headers,omitempty" xml:"headers,omitempty"`
    Body io.Reader `json:"body,omitempty" xml:"body,omitempty"`
    DeserializedBody interface{} `json:"deserializedBody,omitempty" xml:"deserializedBody,omitempty"`
}
func (s InterceptorContextResponse) String() string {
    return tea.Prettify(s)
}
func (s InterceptorContextResponse) GoString() string {
    return s.String()
}
func (s *InterceptorContextResponse) SetStatusCode(v int) *InterceptorContextResponse {
    s.StatusCode = &v
    return s
}
func (s *InterceptorContextResponse) SetHeaders(v map[string]*string) *InterceptorContextResponse {
    s.Headers = v
    return s
}
func (s *InterceptorContextResponse) SetBody(v io.Reader) *InterceptorContextResponse {
    s.Body = v
    return s
}
func (s *InterceptorContextResponse) SetDeserializedBody(v interface{}) *InterceptorContextResponse {
    s.DeserializedBody = v
    return s
}
type AttributeMap struct {
    Attributes map[string]interface{} `json:"attributes,omitempty" xml:"attributes,omitempty" require:"true"`
    Key map[string]*string `json:"key,omitempty" xml:"key,omitempty" require:"true"`
}
func (s AttributeMap) String() string {
    return tea.Prettify(s)
}
func (s AttributeMap) GoString() string {
    return s.String()
}
func (s *AttributeMap) SetAttributes(v map[string]interface{}) *AttributeMap {
    s.Attributes = v
    return s
}
func (s *AttributeMap) SetKey(v map[string]*string) *AttributeMap {
    s.Key = v
    return s
}
type ClientInterface interface {
    ModifyConfiguration(context *InterceptorContext, attributeMap *AttributeMap) error
    ModifyRequest(context *InterceptorContext, attributeMap *AttributeMap) error
    ModifyResponse(context *InterceptorContext, attributeMap *AttributeMap) error
}
type Client struct {
}
func NewClient() (*Client, error) {
    client := new(Client)
    err := client.Init()
    return client, err
}
func (client *Client) Init() (_err error) {
    return nil
}
func (client *Client) ModifyConfiguration(context *InterceptorContext, attributeMap *AttributeMap) (_err error) {
    panic("No Support!")
}
func (client *Client) ModifyRequest(context *InterceptorContext, attributeMap *AttributeMap) (_err error) {
    panic("No Support!")
}
func (client *Client) ModifyResponse(context *InterceptorContext, attributeMap *AttributeMap) (_err error) {
    panic("No Support!")
}
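The generated types above exist so SDK hooks can inspect and mutate a request before it is sent: an implementation of ClientInterface receives the mutable InterceptorContext at each phase. A minimal self-contained sketch of the same interceptor pattern, using simplified stand-in types rather than the real tea SDK structs (which carry many more pointer-typed fields):

```go
package main

import "fmt"

// Request and Context are simplified stand-ins for the generated
// InterceptorContextRequest / InterceptorContext types above.
type Request struct {
	Headers map[string]string
	Action  string
}

type Context struct {
	Request *Request
}

// Interceptor mirrors the shape of the generated ClientInterface:
// each hook may mutate the context in place and return an error to abort.
type Interceptor interface {
	ModifyRequest(ctx *Context) error
}

// userAgentInterceptor stamps a User-Agent header onto every request.
type userAgentInterceptor struct{ ua string }

func (i userAgentInterceptor) ModifyRequest(ctx *Context) error {
	if ctx.Request.Headers == nil {
		ctx.Request.Headers = map[string]string{}
	}
	ctx.Request.Headers["user-agent"] = i.ua
	return nil
}

func main() {
	ctx := &Context{Request: &Request{Action: "DescribeInstances"}}
	var ic Interceptor = userAgentInterceptor{ua: "my-app/1.0"}
	if err := ic.ModifyRequest(ctx); err != nil {
		panic(err)
	}
	fmt.Println(ctx.Request.Headers["user-agent"]) // prints "my-app/1.0"
}
```

The base Client above panics with "No Support!" for every hook; concrete interceptors like this sketch are expected to override the phases they care about.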


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

File diff suppressed because it is too large.


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,12 +0,0 @@
package debug
import (
    "reflect"
    "testing"
)
func assertEqual(t *testing.T, a, b interface{}) {
    if !reflect.DeepEqual(a, b) {
        t.Errorf("%v != %v", a, b)
    }
}


@@ -1,36 +0,0 @@
package debug
import (
    "fmt"
    "os"
    "strings"
)
type Debug func(format string, v ...interface{})
var hookGetEnv = func() string {
    return os.Getenv("DEBUG")
}
var hookPrint = func(input string) {
    fmt.Println(input)
}
func Init(flag string) Debug {
    enable := false
    env := hookGetEnv()
    parts := strings.Split(env, ",")
    for _, part := range parts {
        if part == flag {
            enable = true
            break
        }
    }
    return func(format string, v ...interface{}) {
        if enable {
            hookPrint(fmt.Sprintf(format, v...))
        }
    }
}


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

File diff suppressed because it is too large


@@ -1,41 +0,0 @@
// This file is auto-generated, don't edit it. Thanks.
/**
 * Get endpoint
 * @return string
 */
package service

import (
	"fmt"
	"strings"

	"github.com/alibabacloud-go/tea/tea"
)

func GetEndpointRules(product, regionId, endpointType, network, suffix *string) (_result *string, _err error) {
	if tea.StringValue(endpointType) == "regional" {
		if tea.StringValue(regionId) == "" {
			_err = fmt.Errorf("RegionId is empty, please set a valid RegionId")
			return tea.String(""), _err
		}
		_result = tea.String(strings.Replace("<product><suffix><network>.<region_id>.aliyuncs.com",
			"<region_id>", tea.StringValue(regionId), 1))
	} else {
		_result = tea.String("<product><suffix><network>.aliyuncs.com")
	}
	_result = tea.String(strings.Replace(tea.StringValue(_result),
		"<product>", strings.ToLower(tea.StringValue(product)), 1))
	if tea.StringValue(network) == "" || tea.StringValue(network) == "public" {
		_result = tea.String(strings.Replace(tea.StringValue(_result), "<network>", "", 1))
	} else {
		_result = tea.String(strings.Replace(tea.StringValue(_result),
			"<network>", "-"+tea.StringValue(network), 1))
	}
	if tea.StringValue(suffix) == "" {
		_result = tea.String(strings.Replace(tea.StringValue(_result), "<suffix>", "", 1))
	} else {
		_result = tea.String(strings.Replace(tea.StringValue(_result),
			"<suffix>", "-"+tea.StringValue(suffix), 1))
	}
	return _result, nil
}
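For orientation, here is a minimal standalone sketch of the endpoint-template logic above, using plain strings in place of the tea pointer wrappers (`resolveEndpoint` is an illustrative name, not part of the SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveEndpoint mirrors the template substitution in GetEndpointRules:
// fill <region_id> for regional endpoints, then <product>, <network>, <suffix>.
func resolveEndpoint(product, regionID, endpointType, network, suffix string) string {
	result := "<product><suffix><network>.aliyuncs.com"
	if endpointType == "regional" {
		result = strings.Replace("<product><suffix><network>.<region_id>.aliyuncs.com",
			"<region_id>", regionID, 1)
	}
	result = strings.Replace(result, "<product>", strings.ToLower(product), 1)
	if network == "" || network == "public" {
		result = strings.Replace(result, "<network>", "", 1)
	} else {
		result = strings.Replace(result, "<network>", "-"+network, 1)
	}
	if suffix == "" {
		result = strings.Replace(result, "<suffix>", "", 1)
	} else {
		result = strings.Replace(result, "<suffix>", "-"+suffix, 1)
	}
	return result
}

func main() {
	fmt.Println(resolveEndpoint("OCR", "cn-hangzhou", "regional", "", "")) // ocr.cn-hangzhou.aliyuncs.com
	fmt.Println(resolveEndpoint("OCR", "", "central", "vpc", "api"))       // ocr-api-vpc.aliyuncs.com
}
```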


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION


@@ -1,635 +0,0 @@
// This file is auto-generated, don't edit it. Thanks.
/**
 * This is for OpenApi Util
 */
package service

import (
	"bytes"
	"crypto"
	"crypto/hmac"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha1"
	"crypto/sha256"
	"crypto/x509"
	"encoding/base64"
	"encoding/hex"
	"encoding/json"
	"encoding/pem"
	"errors"
	"fmt"
	"hash"
	"io"
	"net/http"
	"net/textproto"
	"net/url"
	"reflect"
	"sort"
	"strconv"
	"strings"
	"time"

	util "github.com/alibabacloud-go/tea-utils/service"
	"github.com/alibabacloud-go/tea/tea"
	"github.com/tjfoc/gmsm/sm3"
)

const (
	PEM_BEGIN = "-----BEGIN RSA PRIVATE KEY-----\n"
	PEM_END   = "\n-----END RSA PRIVATE KEY-----"
)

type Sorter struct {
	Keys []string
	Vals []string
}

func newSorter(m map[string]string) *Sorter {
	hs := &Sorter{
		Keys: make([]string, 0, len(m)),
		Vals: make([]string, 0, len(m)),
	}
	for k, v := range m {
		hs.Keys = append(hs.Keys, k)
		hs.Vals = append(hs.Vals, v)
	}
	return hs
}

// Sort is an additional function for function SignHeader.
func (hs *Sorter) Sort() {
	sort.Sort(hs)
}

// Len is an additional function for function SignHeader.
func (hs *Sorter) Len() int {
	return len(hs.Vals)
}

// Less is an additional function for function SignHeader.
func (hs *Sorter) Less(i, j int) bool {
	return bytes.Compare([]byte(hs.Keys[i]), []byte(hs.Keys[j])) < 0
}

// Swap is an additional function for function SignHeader.
func (hs *Sorter) Swap(i, j int) {
	hs.Vals[i], hs.Vals[j] = hs.Vals[j], hs.Vals[i]
	hs.Keys[i], hs.Keys[j] = hs.Keys[j], hs.Keys[i]
}
/**
 * Convert all params of body other than type of readable into content
 * @param body source Model
 * @param content target Model
 * @return void
 */
func Convert(body interface{}, content interface{}) {
	res := make(map[string]interface{})
	val := reflect.ValueOf(body).Elem()
	dataType := val.Type()
	for i := 0; i < dataType.NumField(); i++ {
		field := dataType.Field(i)
		name, _ := field.Tag.Lookup("json")
		name = strings.Split(name, ",omitempty")[0]
		_, ok := val.Field(i).Interface().(io.Reader)
		if !ok {
			res[name] = val.Field(i).Interface()
		}
	}
	byt, _ := json.Marshal(res)
	json.Unmarshal(byt, content)
}

/**
 * Get the string to be signed according to request
 * @param request which contains signed messages
 * @return the signed string
 */
func GetStringToSign(request *tea.Request) (_result *string) {
	return tea.String(getStringToSign(request))
}
func getStringToSign(request *tea.Request) string {
	resource := tea.StringValue(request.Pathname)
	queryParams := request.Query
	// sort QueryParams by key
	var queryKeys []string
	for key := range queryParams {
		queryKeys = append(queryKeys, key)
	}
	sort.Strings(queryKeys)
	tmp := ""
	for i := 0; i < len(queryKeys); i++ {
		queryKey := queryKeys[i]
		v := tea.StringValue(queryParams[queryKey])
		if v != "" {
			tmp = tmp + "&" + queryKey + "=" + v
		} else {
			tmp = tmp + "&" + queryKey
		}
	}
	if tmp != "" {
		tmp = strings.TrimLeft(tmp, "&")
		resource = resource + "?" + tmp
	}
	return getSignedStr(request, resource)
}

func getSignedStr(req *tea.Request, canonicalizedResource string) string {
	temp := make(map[string]string)
	for k, v := range req.Headers {
		if strings.HasPrefix(strings.ToLower(k), "x-acs-") {
			temp[strings.ToLower(k)] = tea.StringValue(v)
		}
	}
	hs := newSorter(temp)
	// Sort the temp by the ascending order
	hs.Sort()
	// Get the canonicalizedOSSHeaders
	canonicalizedOSSHeaders := ""
	for i := range hs.Keys {
		canonicalizedOSSHeaders += hs.Keys[i] + ":" + hs.Vals[i] + "\n"
	}
	// Give other parameters values
	// when sign URL, date is expires
	date := tea.StringValue(req.Headers["date"])
	accept := tea.StringValue(req.Headers["accept"])
	contentType := tea.StringValue(req.Headers["content-type"])
	contentMd5 := tea.StringValue(req.Headers["content-md5"])
	signStr := tea.StringValue(req.Method) + "\n" + accept + "\n" + contentMd5 + "\n" + contentType + "\n" + date + "\n" + canonicalizedOSSHeaders + canonicalizedResource
	return signStr
}
/**
 * Get signature according to stringToSign, secret
 * @param stringToSign the signed string
 * @param secret accesskey secret
 * @return the signature
 */
func GetROASignature(stringToSign *string, secret *string) (_result *string) {
	h := hmac.New(func() hash.Hash { return sha1.New() }, []byte(tea.StringValue(secret)))
	io.WriteString(h, tea.StringValue(stringToSign))
	signedStr := base64.StdEncoding.EncodeToString(h.Sum(nil))
	return tea.String(signedStr)
}
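The ROA signature above is an HMAC-SHA1 over the string-to-sign, base64-encoded. A self-contained sketch of the same computation using only the standard library (`roaSign` is an illustrative name, plain strings instead of tea pointers):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// roaSign computes base64(HMAC-SHA1(secret, stringToSign)), the same
// construction GetROASignature performs via the tea wrappers.
func roaSign(stringToSign, secret string) string {
	h := hmac.New(sha1.New, []byte(secret))
	h.Write([]byte(stringToSign))
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	// Example string-to-sign shape: METHOD\naccept\ncontent-md5\ncontent-type\ndate\n...resource
	fmt.Println(roaSign("GET\n\n\n\nWed, 01 Jan 2020 00:00:00 GMT\n/", "testsecret"))
}
```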
func GetEndpoint(endpoint *string, server *bool, endpointType *string) *string {
	if tea.StringValue(endpointType) == "internal" {
		strs := strings.Split(tea.StringValue(endpoint), ".")
		strs[0] += "-internal"
		endpoint = tea.String(strings.Join(strs, "."))
	}
	if tea.BoolValue(server) && tea.StringValue(endpointType) == "accelerate" {
		return tea.String("oss-accelerate.aliyuncs.com")
	}
	return endpoint
}
func HexEncode(raw []byte) *string {
	return tea.String(hex.EncodeToString(raw))
}

func Hash(raw []byte, signatureAlgorithm *string) []byte {
	signType := tea.StringValue(signatureAlgorithm)
	if signType == "ACS3-HMAC-SHA256" || signType == "ACS3-RSA-SHA256" {
		h := sha256.New()
		h.Write(raw)
		return h.Sum(nil)
	} else if signType == "ACS3-HMAC-SM3" {
		h := sm3.New()
		h.Write(raw)
		return h.Sum(nil)
	}
	return nil
}

func GetEncodePath(path *string) *string {
	uri := tea.StringValue(path)
	strs := strings.Split(uri, "/")
	for i, v := range strs {
		strs[i] = url.QueryEscape(v)
	}
	uri = strings.Join(strs, "/")
	uri = strings.Replace(uri, "+", "%20", -1)
	uri = strings.Replace(uri, "*", "%2A", -1)
	uri = strings.Replace(uri, "%7E", "~", -1)
	return tea.String(uri)
}

func GetEncodeParam(param *string) *string {
	uri := tea.StringValue(param)
	uri = url.QueryEscape(uri)
	uri = strings.Replace(uri, "+", "%20", -1)
	uri = strings.Replace(uri, "*", "%2A", -1)
	uri = strings.Replace(uri, "%7E", "~", -1)
	return tea.String(uri)
}
func GetAuthorization(request *tea.Request, signatureAlgorithm, payload, acesskey, secret *string) *string {
	canonicalURI := tea.StringValue(request.Pathname)
	if canonicalURI == "" {
		canonicalURI = "/"
	}
	canonicalURI = strings.Replace(canonicalURI, "+", "%20", -1)
	canonicalURI = strings.Replace(canonicalURI, "*", "%2A", -1)
	canonicalURI = strings.Replace(canonicalURI, "%7E", "~", -1)
	method := tea.StringValue(request.Method)
	canonicalQueryString := getCanonicalQueryString(request.Query)
	canonicalheaders, signedHeaders := getCanonicalHeaders(request.Headers)
	canonicalRequest := method + "\n" + canonicalURI + "\n" + canonicalQueryString + "\n" + canonicalheaders + "\n" +
		strings.Join(signedHeaders, ";") + "\n" + tea.StringValue(payload)
	signType := tea.StringValue(signatureAlgorithm)
	StringToSign := signType + "\n" + tea.StringValue(HexEncode(Hash([]byte(canonicalRequest), signatureAlgorithm)))
	signature := tea.StringValue(HexEncode(SignatureMethod(tea.StringValue(secret), StringToSign, signType)))
	auth := signType + " Credential=" + tea.StringValue(acesskey) + ",SignedHeaders=" +
		strings.Join(signedHeaders, ";") + ",Signature=" + signature
	return tea.String(auth)
}

func SignatureMethod(secret, source, signatureAlgorithm string) []byte {
	if signatureAlgorithm == "ACS3-HMAC-SHA256" {
		h := hmac.New(sha256.New, []byte(secret))
		h.Write([]byte(source))
		return h.Sum(nil)
	} else if signatureAlgorithm == "ACS3-HMAC-SM3" {
		h := hmac.New(sm3.New, []byte(secret))
		h.Write([]byte(source))
		return h.Sum(nil)
	} else if signatureAlgorithm == "ACS3-RSA-SHA256" {
		return rsaSign(source, secret)
	}
	return nil
}

func rsaSign(content, secret string) []byte {
	h := crypto.SHA256.New()
	h.Write([]byte(content))
	hashed := h.Sum(nil)
	priv, err := parsePrivateKey(secret)
	if err != nil {
		return nil
	}
	sign, err := rsa.SignPKCS1v15(rand.Reader, priv, crypto.SHA256, hashed)
	if err != nil {
		return nil
	}
	return sign
}
func parsePrivateKey(privateKey string) (*rsa.PrivateKey, error) {
	privateKey = formatPrivateKey(privateKey)
	block, _ := pem.Decode([]byte(privateKey))
	if block == nil {
		return nil, errors.New("PrivateKey is invalid")
	}
	priKey, err := x509.ParsePKCS8PrivateKey(block.Bytes)
	if err != nil {
		return nil, err
	}
	switch priKey.(type) {
	case *rsa.PrivateKey:
		return priKey.(*rsa.PrivateKey), nil
	default:
		return nil, nil
	}
}

func formatPrivateKey(privateKey string) string {
	if !strings.HasPrefix(privateKey, PEM_BEGIN) {
		privateKey = PEM_BEGIN + privateKey
	}
	if !strings.HasSuffix(privateKey, PEM_END) {
		privateKey += PEM_END
	}
	return privateKey
}

func getCanonicalHeaders(headers map[string]*string) (string, []string) {
	tmp := make(map[string]string)
	tmpHeader := http.Header{}
	for k, v := range headers {
		if strings.HasPrefix(strings.ToLower(k), "x-acs-") || strings.ToLower(k) == "host" ||
			strings.ToLower(k) == "content-type" {
			tmp[strings.ToLower(k)] = strings.TrimSpace(tea.StringValue(v))
			tmpHeader.Add(strings.ToLower(k), strings.TrimSpace(tea.StringValue(v)))
		}
	}
	hs := newSorter(tmp)
	// Sort the temp by the ascending order
	hs.Sort()
	canonicalheaders := ""
	for _, key := range hs.Keys {
		vals := tmpHeader[textproto.CanonicalMIMEHeaderKey(key)]
		sort.Strings(vals)
		canonicalheaders += key + ":" + strings.Join(vals, ",") + "\n"
	}
	return canonicalheaders, hs.Keys
}
func getCanonicalQueryString(query map[string]*string) string {
	canonicalQueryString := ""
	if tea.BoolValue(util.IsUnset(query)) {
		return canonicalQueryString
	}
	tmp := make(map[string]string)
	for k, v := range query {
		tmp[k] = tea.StringValue(v)
	}
	hs := newSorter(tmp)
	// Sort the temp by the ascending order
	hs.Sort()
	for i := range hs.Keys {
		if hs.Vals[i] != "" {
			canonicalQueryString += "&" + hs.Keys[i] + "=" + url.QueryEscape(hs.Vals[i])
		} else {
			canonicalQueryString += "&" + hs.Keys[i] + "="
		}
	}
	canonicalQueryString = strings.Replace(canonicalQueryString, "+", "%20", -1)
	canonicalQueryString = strings.Replace(canonicalQueryString, "*", "%2A", -1)
	canonicalQueryString = strings.Replace(canonicalQueryString, "%7E", "~", -1)
	if canonicalQueryString != "" {
		canonicalQueryString = strings.TrimLeft(canonicalQueryString, "&")
	}
	return canonicalQueryString
}
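A standalone sketch of the canonical-query construction above — keys sorted ascending, values percent-encoded with the ACS-specific substitutions for '+', '*' and '%7E' (`canonicalQuery` is an illustrative name, plain map instead of tea pointers):

```go
package main

import (
	"fmt"
	"net/url"
	"sort"
	"strings"
)

// canonicalQuery mirrors getCanonicalQueryString: sort keys, emit
// key=value pairs joined by '&', and apply the ACS escaping variants
// (space -> %20, '*' -> %2A, '~' left unescaped).
func canonicalQuery(query map[string]string) string {
	keys := make([]string, 0, len(query))
	for k := range query {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		if b.Len() > 0 {
			b.WriteByte('&')
		}
		b.WriteString(k)
		b.WriteByte('=')
		v := url.QueryEscape(query[k])
		v = strings.ReplaceAll(v, "+", "%20")
		v = strings.ReplaceAll(v, "*", "%2A")
		v = strings.ReplaceAll(v, "%7E", "~")
		b.WriteString(v)
	}
	return b.String()
}

func main() {
	fmt.Println(canonicalQuery(map[string]string{"b": "2 3", "a": "1"})) // a=1&b=2%203
}
```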
/**
 * Parse filter into a form string
 * @param filter object
 * @return the string
 */
func ToForm(filter map[string]interface{}) (_result *string) {
	tmp := make(map[string]interface{})
	byt, _ := json.Marshal(filter)
	d := json.NewDecoder(bytes.NewReader(byt))
	d.UseNumber()
	_ = d.Decode(&tmp)
	result := make(map[string]*string)
	for key, value := range tmp {
		filterValue := reflect.ValueOf(value)
		flatRepeatedList(filterValue, result, key)
	}
	m := util.AnyifyMapValue(result)
	return util.ToFormString(m)
}

func flatRepeatedList(dataValue reflect.Value, result map[string]*string, prefix string) {
	if !dataValue.IsValid() {
		return
	}
	dataType := dataValue.Type()
	if dataType.Kind().String() == "slice" {
		handleRepeatedParams(dataValue, result, prefix)
	} else if dataType.Kind().String() == "map" {
		handleMap(dataValue, result, prefix)
	} else {
		result[prefix] = tea.String(fmt.Sprintf("%v", dataValue.Interface()))
	}
}
func handleRepeatedParams(repeatedFieldValue reflect.Value, result map[string]*string, prefix string) {
	if repeatedFieldValue.IsValid() && !repeatedFieldValue.IsNil() {
		for m := 0; m < repeatedFieldValue.Len(); m++ {
			elementValue := repeatedFieldValue.Index(m)
			key := prefix + "." + strconv.Itoa(m+1)
			fieldValue := reflect.ValueOf(elementValue.Interface())
			if fieldValue.Kind().String() == "map" {
				handleMap(fieldValue, result, key)
			} else {
				result[key] = tea.String(fmt.Sprintf("%v", fieldValue.Interface()))
			}
		}
	}
}

func handleMap(valueField reflect.Value, result map[string]*string, prefix string) {
	if valueField.IsValid() && valueField.String() != "" {
		valueFieldType := valueField.Type()
		if valueFieldType.Kind().String() == "map" {
			var byt []byte
			byt, _ = json.Marshal(valueField.Interface())
			cache := make(map[string]interface{})
			d := json.NewDecoder(bytes.NewReader(byt))
			d.UseNumber()
			_ = d.Decode(&cache)
			for key, value := range cache {
				pre := ""
				if prefix != "" {
					pre = prefix + "." + key
				} else {
					pre = key
				}
				fieldValue := reflect.ValueOf(value)
				flatRepeatedList(fieldValue, result, pre)
			}
		}
	}
}
/**
 * Get timestamp
 * @return the timestamp string
 */
func GetTimestamp() (_result *string) {
	gmt := time.FixedZone("GMT", 0)
	return tea.String(time.Now().In(gmt).Format("2006-01-02T15:04:05Z"))
}

/**
 * Parse filter into a object which's type is map[string]string
 * @param filter query param
 * @return the object
 */
func Query(filter interface{}) (_result map[string]*string) {
	tmp := make(map[string]interface{})
	byt, _ := json.Marshal(filter)
	d := json.NewDecoder(bytes.NewReader(byt))
	d.UseNumber()
	_ = d.Decode(&tmp)
	result := make(map[string]*string)
	for key, value := range tmp {
		filterValue := reflect.ValueOf(value)
		flatRepeatedList(filterValue, result, key)
	}
	return result
}

/**
 * Get signature according to signedParams, method and secret
 * @param signedParams params which need to be signed
 * @param method http method e.g. GET
 * @param secret AccessKeySecret
 * @return the signature
 */
func GetRPCSignature(signedParams map[string]*string, method *string, secret *string) (_result *string) {
	stringToSign := buildRpcStringToSign(signedParams, tea.StringValue(method))
	signature := sign(stringToSign, tea.StringValue(secret), "&")
	return tea.String(signature)
}

/**
 * Parse array into a string with specified style
 * @param array the array
 * @param prefix the prefix string
 * @style specified style e.g. repeatList
 * @return the string
 */
func ArrayToStringWithSpecifiedStyle(array interface{}, prefix *string, style *string) (_result *string) {
	if tea.BoolValue(util.IsUnset(array)) {
		return tea.String("")
	}
	sty := tea.StringValue(style)
	if sty == "repeatList" {
		tmp := map[string]interface{}{
			tea.StringValue(prefix): array,
		}
		return flatRepeatList(tmp)
	} else if sty == "simple" || sty == "spaceDelimited" || sty == "pipeDelimited" {
		return flatArray(array, sty)
	} else if sty == "json" {
		return util.ToJSONString(array)
	}
	return tea.String("")
}

func ParseToMap(in interface{}) map[string]interface{} {
	if tea.BoolValue(util.IsUnset(in)) {
		return nil
	}
	tmp := make(map[string]interface{})
	byt, _ := json.Marshal(in)
	d := json.NewDecoder(bytes.NewReader(byt))
	d.UseNumber()
	err := d.Decode(&tmp)
	if err != nil {
		return nil
	}
	return tmp
}

func flatRepeatList(filter map[string]interface{}) (_result *string) {
	tmp := make(map[string]interface{})
	byt, _ := json.Marshal(filter)
	d := json.NewDecoder(bytes.NewReader(byt))
	d.UseNumber()
	_ = d.Decode(&tmp)
	result := make(map[string]*string)
for key, value := range tmp {
filterValue := reflect.ValueOf(value)
flatRepeatedList(filterValue, result, key)
}
res := make(map[string]string)
for k, v := range result {
res[k] = tea.StringValue(v)
}
hs := newSorter(res)
hs.Sort()
// Get the canonicalizedOSSHeaders
t := ""
for i := range hs.Keys {
if i == len(hs.Keys)-1 {
t += hs.Keys[i] + "=" + hs.Vals[i]
} else {
t += hs.Keys[i] + "=" + hs.Vals[i] + "&&"
}
}
return tea.String(t)
}
func flatArray(array interface{}, sty string) *string {
t := reflect.ValueOf(array)
strs := make([]string, 0)
for i := 0; i < t.Len(); i++ {
tmp := t.Index(i)
if tmp.Kind() == reflect.Ptr || tmp.Kind() == reflect.Interface {
tmp = tmp.Elem()
}
if tmp.Kind() == reflect.Ptr {
tmp = tmp.Elem()
}
if tmp.Kind() == reflect.String {
strs = append(strs, tmp.String())
} else {
inter := tmp.Interface()
byt, _ := json.Marshal(inter)
strs = append(strs, string(byt))
}
}
str := ""
if sty == "simple" {
str = strings.Join(strs, ",")
} else if sty == "spaceDelimited" {
str = strings.Join(strs, " ")
} else if sty == "pipeDelimited" {
str = strings.Join(strs, "|")
}
return tea.String(str)
}
func buildRpcStringToSign(signedParam map[string]*string, method string) (stringToSign string) {
signParams := make(map[string]string)
for key, value := range signedParam {
signParams[key] = tea.StringValue(value)
}
stringToSign = getUrlFormedMap(signParams)
stringToSign = strings.Replace(stringToSign, "+", "%20", -1)
stringToSign = strings.Replace(stringToSign, "*", "%2A", -1)
stringToSign = strings.Replace(stringToSign, "%7E", "~", -1)
stringToSign = url.QueryEscape(stringToSign)
stringToSign = method + "&%2F&" + stringToSign
return
}
func getUrlFormedMap(source map[string]string) (urlEncoded string) {
urlEncoder := url.Values{}
for key, value := range source {
urlEncoder.Add(key, value)
}
urlEncoded = urlEncoder.Encode()
return
}
func sign(stringToSign, accessKeySecret, secretSuffix string) string {
secret := accessKeySecret + secretSuffix
signedBytes := shaHmac1(stringToSign, secret)
signedString := base64.StdEncoding.EncodeToString(signedBytes)
return signedString
}
func shaHmac1(source, secret string) []byte {
key := []byte(secret)
hmac := hmac.New(sha1.New, key)
hmac.Write([]byte(source))
return hmac.Sum(nil)
}


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,468 +0,0 @@
package service

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "io/ioutil"
    "net/http"
    "net/url"
    "reflect"
    "runtime"
    "strconv"
    "strings"
    "time"

    "github.com/alibabacloud-go/tea/tea"
)

var defaultUserAgent = fmt.Sprintf("AlibabaCloud (%s; %s) Golang/%s Core/%s TeaDSL/1", runtime.GOOS, runtime.GOARCH, strings.Trim(runtime.Version(), "go"), "0.01")

type RuntimeOptions struct {
    Autoretry      *bool   `json:"autoretry" xml:"autoretry"`
    IgnoreSSL      *bool   `json:"ignoreSSL" xml:"ignoreSSL"`
    MaxAttempts    *int    `json:"maxAttempts" xml:"maxAttempts"`
    BackoffPolicy  *string `json:"backoffPolicy" xml:"backoffPolicy"`
    BackoffPeriod  *int    `json:"backoffPeriod" xml:"backoffPeriod"`
    ReadTimeout    *int    `json:"readTimeout" xml:"readTimeout"`
    ConnectTimeout *int    `json:"connectTimeout" xml:"connectTimeout"`
    LocalAddr      *string `json:"localAddr" xml:"localAddr"`
    HttpProxy      *string `json:"httpProxy" xml:"httpProxy"`
    HttpsProxy     *string `json:"httpsProxy" xml:"httpsProxy"`
    NoProxy        *string `json:"noProxy" xml:"noProxy"`
    MaxIdleConns   *int    `json:"maxIdleConns" xml:"maxIdleConns"`
    Socks5Proxy    *string `json:"socks5Proxy" xml:"socks5Proxy"`
    Socks5NetWork  *string `json:"socks5NetWork" xml:"socks5NetWork"`
    KeepAlive      *bool   `json:"keepAlive" xml:"keepAlive"`
}

func (s RuntimeOptions) String() string {
    return tea.Prettify(s)
}

func (s RuntimeOptions) GoString() string {
    return s.String()
}

func (s *RuntimeOptions) SetAutoretry(v bool) *RuntimeOptions {
    s.Autoretry = &v
    return s
}

func (s *RuntimeOptions) SetIgnoreSSL(v bool) *RuntimeOptions {
    s.IgnoreSSL = &v
    return s
}

func (s *RuntimeOptions) SetMaxAttempts(v int) *RuntimeOptions {
    s.MaxAttempts = &v
    return s
}

func (s *RuntimeOptions) SetBackoffPolicy(v string) *RuntimeOptions {
    s.BackoffPolicy = &v
    return s
}

func (s *RuntimeOptions) SetBackoffPeriod(v int) *RuntimeOptions {
    s.BackoffPeriod = &v
    return s
}

func (s *RuntimeOptions) SetReadTimeout(v int) *RuntimeOptions {
    s.ReadTimeout = &v
    return s
}

func (s *RuntimeOptions) SetConnectTimeout(v int) *RuntimeOptions {
    s.ConnectTimeout = &v
    return s
}

func (s *RuntimeOptions) SetHttpProxy(v string) *RuntimeOptions {
    s.HttpProxy = &v
    return s
}

func (s *RuntimeOptions) SetHttpsProxy(v string) *RuntimeOptions {
    s.HttpsProxy = &v
    return s
}

func (s *RuntimeOptions) SetNoProxy(v string) *RuntimeOptions {
    s.NoProxy = &v
    return s
}

func (s *RuntimeOptions) SetMaxIdleConns(v int) *RuntimeOptions {
    s.MaxIdleConns = &v
    return s
}

func (s *RuntimeOptions) SetLocalAddr(v string) *RuntimeOptions {
    s.LocalAddr = &v
    return s
}

func (s *RuntimeOptions) SetSocks5Proxy(v string) *RuntimeOptions {
    s.Socks5Proxy = &v
    return s
}

func (s *RuntimeOptions) SetSocks5NetWork(v string) *RuntimeOptions {
    s.Socks5NetWork = &v
    return s
}

func (s *RuntimeOptions) SetKeepAlive(v bool) *RuntimeOptions {
    s.KeepAlive = &v
    return s
}

func ReadAsString(body io.Reader) (*string, error) {
    byt, err := ioutil.ReadAll(body)
    if err != nil {
        return tea.String(""), err
    }
    r, ok := body.(io.ReadCloser)
    if ok {
        r.Close()
    }
    return tea.String(string(byt)), nil
}

func StringifyMapValue(a map[string]interface{}) map[string]*string {
    res := make(map[string]*string)
    for key, value := range a {
        if value != nil {
            switch value.(type) {
            case string:
                res[key] = tea.String(value.(string))
            default:
                byt, _ := json.Marshal(value)
                res[key] = tea.String(string(byt))
            }
        }
    }
    return res
}

func AnyifyMapValue(a map[string]*string) map[string]interface{} {
    res := make(map[string]interface{})
    for key, value := range a {
        res[key] = tea.StringValue(value)
    }
    return res
}

func ReadAsBytes(body io.Reader) ([]byte, error) {
    byt, err := ioutil.ReadAll(body)
    if err != nil {
        return nil, err
    }
    r, ok := body.(io.ReadCloser)
    if ok {
        r.Close()
    }
    return byt, nil
}

func DefaultString(reaStr, defaultStr *string) *string {
    if reaStr == nil {
        return defaultStr
    }
    return reaStr
}

func ToJSONString(a interface{}) *string {
    switch v := a.(type) {
    case *string:
        return v
    case string:
        return tea.String(v)
    case []byte:
        return tea.String(string(v))
    case io.Reader:
        byt, err := ioutil.ReadAll(v)
        if err != nil {
            return nil
        }
        return tea.String(string(byt))
    }
    byt, err := json.Marshal(a)
    if err != nil {
        return nil
    }
    return tea.String(string(byt))
}

func DefaultNumber(reaNum, defaultNum *int) *int {
    if reaNum == nil {
        return defaultNum
    }
    return reaNum
}

func ReadAsJSON(body io.Reader) (result interface{}, err error) {
    byt, err := ioutil.ReadAll(body)
    if err != nil {
        return
    }
    if string(byt) == "" {
        return
    }
    r, ok := body.(io.ReadCloser)
    if ok {
        r.Close()
    }
    d := json.NewDecoder(bytes.NewReader(byt))
    d.UseNumber()
    err = d.Decode(&result)
    return
}

func GetNonce() *string {
    return tea.String(getUUID())
}

func Empty(val *string) *bool {
    return tea.Bool(val == nil || tea.StringValue(val) == "")
}

func ValidateModel(a interface{}) error {
    if a == nil {
        return nil
    }
    err := tea.Validate(a)
    return err
}

func EqualString(val1, val2 *string) *bool {
    return tea.Bool(tea.StringValue(val1) == tea.StringValue(val2))
}

func EqualNumber(val1, val2 *int) *bool {
    return tea.Bool(tea.IntValue(val1) == tea.IntValue(val2))
}

func IsUnset(val interface{}) *bool {
    if val == nil {
        return tea.Bool(true)
    }
    v := reflect.ValueOf(val)
    if v.Kind() == reflect.Ptr || v.Kind() == reflect.Slice || v.Kind() == reflect.Map {
        return tea.Bool(v.IsNil())
    }
    valType := reflect.TypeOf(val)
    valZero := reflect.Zero(valType)
    return tea.Bool(valZero == v)
}

func ToBytes(a *string) []byte {
    return []byte(tea.StringValue(a))
}

func AssertAsMap(a interface{}) map[string]interface{} {
    r := reflect.ValueOf(a)
    if r.Kind().String() != "map" {
        panic(fmt.Sprintf("%v is not a map[string]interface{}", a))
    }
    res := make(map[string]interface{})
    tmp := r.MapKeys()
    for _, key := range tmp {
        res[key.String()] = r.MapIndex(key).Interface()
    }
    return res
}

func AssertAsNumber(a interface{}) *int {
    res := 0
    switch a.(type) {
    case int:
        tmp := a.(int)
        res = tmp
    case *int:
        tmp := a.(*int)
        res = tea.IntValue(tmp)
    default:
        panic(fmt.Sprintf("%v is not a int", a))
    }
    return tea.Int(res)
}

func AssertAsBoolean(a interface{}) *bool {
    res := false
    switch a.(type) {
    case bool:
        tmp := a.(bool)
        res = tmp
    case *bool:
        tmp := a.(*bool)
        res = tea.BoolValue(tmp)
    default:
        panic(fmt.Sprintf("%v is not a bool", a))
    }
    return tea.Bool(res)
}

func AssertAsString(a interface{}) *string {
    res := ""
    switch a.(type) {
    case string:
        tmp := a.(string)
        res = tmp
    case *string:
        tmp := a.(*string)
        res = tea.StringValue(tmp)
    default:
        panic(fmt.Sprintf("%v is not a string", a))
    }
    return tea.String(res)
}

func AssertAsBytes(a interface{}) []byte {
    res, ok := a.([]byte)
    if !ok {
        panic(fmt.Sprintf("%v is not []byte", a))
    }
    return res
}

func AssertAsReadable(a interface{}) io.Reader {
    res, ok := a.(io.Reader)
    if !ok {
        panic(fmt.Sprintf("%v is not reader", a))
    }
    return res
}

func AssertAsArray(a interface{}) []interface{} {
    r := reflect.ValueOf(a)
    if r.Kind().String() != "array" && r.Kind().String() != "slice" {
        panic(fmt.Sprintf("%v is not a [x]interface{}", a))
    }
    aLen := r.Len()
    res := make([]interface{}, 0)
    for i := 0; i < aLen; i++ {
        res = append(res, r.Index(i).Interface())
    }
    return res
}

func ParseJSON(a *string) interface{} {
    mapTmp := make(map[string]interface{})
    d := json.NewDecoder(bytes.NewReader([]byte(tea.StringValue(a))))
    d.UseNumber()
    err := d.Decode(&mapTmp)
    if err == nil {
        return mapTmp
    }
    sliceTmp := make([]interface{}, 0)
    d = json.NewDecoder(bytes.NewReader([]byte(tea.StringValue(a))))
    d.UseNumber()
    err = d.Decode(&sliceTmp)
    if err == nil {
        return sliceTmp
    }
    if num, err := strconv.Atoi(tea.StringValue(a)); err == nil {
        return num
    }
    if ok, err := strconv.ParseBool(tea.StringValue(a)); err == nil {
        return ok
    }
    if float64Val, err := strconv.ParseFloat(tea.StringValue(a), 64); err == nil {
        return float64Val
    }
    return nil
}

func ToString(a []byte) *string {
    return tea.String(string(a))
}

func ToMap(in interface{}) map[string]interface{} {
    if in == nil {
        return nil
    }
    res := tea.ToMap(in)
    return res
}

func ToFormString(a map[string]interface{}) *string {
    if a == nil {
        return tea.String("")
    }
    res := ""
    urlEncoder := url.Values{}
    for key, value := range a {
        v := fmt.Sprintf("%v", value)
        urlEncoder.Add(key, v)
    }
    res = urlEncoder.Encode()
    return tea.String(res)
}

func GetDateUTCString() *string {
    return tea.String(time.Now().UTC().Format(http.TimeFormat))
}

func GetUserAgent(userAgent *string) *string {
    if userAgent != nil && tea.StringValue(userAgent) != "" {
        return tea.String(defaultUserAgent + " " + tea.StringValue(userAgent))
    }
    return tea.String(defaultUserAgent)
}

func Is2xx(code *int) *bool {
    tmp := tea.IntValue(code)
    return tea.Bool(tmp >= 200 && tmp < 300)
}

func Is3xx(code *int) *bool {
    tmp := tea.IntValue(code)
    return tea.Bool(tmp >= 300 && tmp < 400)
}

func Is4xx(code *int) *bool {
    tmp := tea.IntValue(code)
    return tea.Bool(tmp >= 400 && tmp < 500)
}

func Is5xx(code *int) *bool {
    tmp := tea.IntValue(code)
    return tea.Bool(tmp >= 500 && tmp < 600)
}

func Sleep(millisecond *int) error {
    ms := tea.IntValue(millisecond)
    time.Sleep(time.Duration(ms) * time.Millisecond)
    return nil
}

func ToArray(in interface{}) []map[string]interface{} {
    if tea.BoolValue(IsUnset(in)) {
        return nil
    }
    tmp := make([]map[string]interface{}, 0)
    byt, _ := json.Marshal(in)
    d := json.NewDecoder(bytes.NewReader(byt))
    d.UseNumber()
    err := d.Decode(&tmp)
    if err != nil {
        return nil
    }
    return tmp
}


@@ -1,52 +0,0 @@
package service

import (
    "crypto/md5"
    "crypto/rand"
    "encoding/hex"
    "hash"
    rand2 "math/rand"
)

type UUID [16]byte

const numBytes = "1234567890"

func getUUID() (uuidHex string) {
    uuid := newUUID()
    uuidHex = hex.EncodeToString(uuid[:])
    return
}

func randStringBytes(n int) string {
    b := make([]byte, n)
    for i := range b {
        b[i] = numBytes[rand2.Intn(len(numBytes))]
    }
    return string(b)
}

func newUUID() UUID {
    ns := UUID{}
    safeRandom(ns[:])
    u := newFromHash(md5.New(), ns, randStringBytes(16))
    u[6] = (u[6] & 0x0f) | (byte(2) << 4)
    u[8] = (u[8]&(0xff>>2) | (0x02 << 6))
    return u
}

func newFromHash(h hash.Hash, ns UUID, name string) UUID {
    u := UUID{}
    h.Write(ns[:])
    h.Write([]byte(name))
    copy(u[:], h.Sum(nil))
    return u
}

func safeRandom(dest []byte) {
    if _, err := rand.Read(dest); err != nil {
        panic(err)
    }
}


@@ -1,105 +0,0 @@
package service

import (
    "bytes"
    "encoding/xml"
    "fmt"
    "reflect"
    "strings"

    "github.com/alibabacloud-go/tea/tea"
    v2 "github.com/clbanning/mxj/v2"
)

func ToXML(obj map[string]interface{}) *string {
    return tea.String(mapToXML(obj))
}

func ParseXml(val *string, result interface{}) map[string]interface{} {
    resp := make(map[string]interface{})
    start := getStartElement([]byte(tea.StringValue(val)))
    if result == nil {
        vm, err := v2.NewMapXml([]byte(tea.StringValue(val)))
        if err != nil {
            return nil
        }
        return vm
    }
    out, err := xmlUnmarshal([]byte(tea.StringValue(val)), result)
    if err != nil {
        return resp
    }
    resp[start] = out
    return resp
}

func mapToXML(val map[string]interface{}) string {
    res := ""
    for key, value := range val {
        switch value.(type) {
        case []interface{}:
            for _, v := range value.([]interface{}) {
                switch v.(type) {
                case map[string]interface{}:
                    res += `<` + key + `>`
                    res += mapToXML(v.(map[string]interface{}))
                    res += `</` + key + `>`
                default:
                    if fmt.Sprintf("%v", v) != `<nil>` {
                        res += `<` + key + `>`
                        res += fmt.Sprintf("%v", v)
                        res += `</` + key + `>`
                    }
                }
            }
        case map[string]interface{}:
            res += `<` + key + `>`
            res += mapToXML(value.(map[string]interface{}))
            res += `</` + key + `>`
        default:
            if fmt.Sprintf("%v", value) != `<nil>` {
                res += `<` + key + `>`
                res += fmt.Sprintf("%v", value)
                res += `</` + key + `>`
            }
        }
    }
    return res
}

func getStartElement(body []byte) string {
    d := xml.NewDecoder(bytes.NewReader(body))
    for {
        tok, err := d.Token()
        if err != nil {
            return ""
        }
        if t, ok := tok.(xml.StartElement); ok {
            return t.Name.Local
        }
    }
}

func xmlUnmarshal(body []byte, result interface{}) (interface{}, error) {
    start := getStartElement(body)
    dataValue := reflect.ValueOf(result).Elem()
    dataType := dataValue.Type()
    for i := 0; i < dataType.NumField(); i++ {
        field := dataType.Field(i)
        name, containsNameTag := field.Tag.Lookup("xml")
        name = strings.Replace(name, ",omitempty", "", -1)
        if containsNameTag {
            if name == start {
                realType := dataValue.Field(i).Type()
                realValue := reflect.New(realType).Interface()
                err := xml.Unmarshal(body, realValue)
                if err != nil {
                    return nil, err
                }
                return realValue, nil
            }
        }
    }
    return nil, nil
}

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,333 +0,0 @@
package tea
import (
"encoding/json"
"io"
"math"
"reflect"
"strconv"
"strings"
"unsafe"
jsoniter "github.com/json-iterator/go"
"github.com/modern-go/reflect2"
)
const maxUint = ^uint(0)
const maxInt = int(maxUint >> 1)
const minInt = -maxInt - 1
var jsonParser jsoniter.API
func init() {
jsonParser = jsoniter.Config{
EscapeHTML: true,
SortMapKeys: true,
ValidateJsonRawMessage: true,
CaseSensitive: true,
}.Froze()
jsonParser.RegisterExtension(newBetterFuzzyExtension())
}
func newBetterFuzzyExtension() jsoniter.DecoderExtension {
return jsoniter.DecoderExtension{
reflect2.DefaultTypeOfKind(reflect.String): &nullableFuzzyStringDecoder{},
reflect2.DefaultTypeOfKind(reflect.Bool): &fuzzyBoolDecoder{},
reflect2.DefaultTypeOfKind(reflect.Float32): &nullableFuzzyFloat32Decoder{},
reflect2.DefaultTypeOfKind(reflect.Float64): &nullableFuzzyFloat64Decoder{},
reflect2.DefaultTypeOfKind(reflect.Int): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(maxInt) || val < float64(minInt) {
iter.ReportError("fuzzy decode int", "exceed range")
return
}
*((*int)(ptr)) = int(val)
} else {
*((*int)(ptr)) = iter.ReadInt()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(maxUint) || val < 0 {
iter.ReportError("fuzzy decode uint", "exceed range")
return
}
*((*uint)(ptr)) = uint(val)
} else {
*((*uint)(ptr)) = iter.ReadUint()
}
}},
reflect2.DefaultTypeOfKind(reflect.Int8): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxInt8) || val < float64(math.MinInt8) {
iter.ReportError("fuzzy decode int8", "exceed range")
return
}
*((*int8)(ptr)) = int8(val)
} else {
*((*int8)(ptr)) = iter.ReadInt8()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint8): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxUint8) || val < 0 {
iter.ReportError("fuzzy decode uint8", "exceed range")
return
}
*((*uint8)(ptr)) = uint8(val)
} else {
*((*uint8)(ptr)) = iter.ReadUint8()
}
}},
reflect2.DefaultTypeOfKind(reflect.Int16): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxInt16) || val < float64(math.MinInt16) {
iter.ReportError("fuzzy decode int16", "exceed range")
return
}
*((*int16)(ptr)) = int16(val)
} else {
*((*int16)(ptr)) = iter.ReadInt16()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint16): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxUint16) || val < 0 {
iter.ReportError("fuzzy decode uint16", "exceed range")
return
}
*((*uint16)(ptr)) = uint16(val)
} else {
*((*uint16)(ptr)) = iter.ReadUint16()
}
}},
reflect2.DefaultTypeOfKind(reflect.Int32): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxInt32) || val < float64(math.MinInt32) {
iter.ReportError("fuzzy decode int32", "exceed range")
return
}
*((*int32)(ptr)) = int32(val)
} else {
*((*int32)(ptr)) = iter.ReadInt32()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint32): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxUint32) || val < 0 {
iter.ReportError("fuzzy decode uint32", "exceed range")
return
}
*((*uint32)(ptr)) = uint32(val)
} else {
*((*uint32)(ptr)) = iter.ReadUint32()
}
}},
reflect2.DefaultTypeOfKind(reflect.Int64): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxInt64) || val < float64(math.MinInt64) {
iter.ReportError("fuzzy decode int64", "exceed range")
return
}
*((*int64)(ptr)) = int64(val)
} else {
*((*int64)(ptr)) = iter.ReadInt64()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint64): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxUint64) || val < 0 {
iter.ReportError("fuzzy decode uint64", "exceed range")
return
}
*((*uint64)(ptr)) = uint64(val)
} else {
*((*uint64)(ptr)) = iter.ReadUint64()
}
}},
}
}
type nullableFuzzyStringDecoder struct {
}
func (decoder *nullableFuzzyStringDecoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
switch valueType {
case jsoniter.NumberValue:
var number json.Number
iter.ReadVal(&number)
*((*string)(ptr)) = string(number)
case jsoniter.StringValue:
*((*string)(ptr)) = iter.ReadString()
case jsoniter.BoolValue:
*((*string)(ptr)) = strconv.FormatBool(iter.ReadBool())
case jsoniter.NilValue:
iter.ReadNil()
*((*string)(ptr)) = ""
default:
iter.ReportError("fuzzyStringDecoder", "not number or string or bool")
}
}
type fuzzyBoolDecoder struct {
}
func (decoder *fuzzyBoolDecoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
switch valueType {
case jsoniter.BoolValue:
*((*bool)(ptr)) = iter.ReadBool()
case jsoniter.NumberValue:
var number json.Number
iter.ReadVal(&number)
num, err := number.Int64()
if err != nil {
iter.ReportError("fuzzyBoolDecoder", "get value from json.number failed")
}
if num == 0 {
*((*bool)(ptr)) = false
} else {
*((*bool)(ptr)) = true
}
case jsoniter.StringValue:
strValue := strings.ToLower(iter.ReadString())
if strValue == "true" {
*((*bool)(ptr)) = true
} else if strValue == "false" || strValue == "" {
*((*bool)(ptr)) = false
} else {
iter.ReportError("fuzzyBoolDecoder", "unsupported bool value: "+strValue)
}
case jsoniter.NilValue:
iter.ReadNil()
*((*bool)(ptr)) = false
default:
iter.ReportError("fuzzyBoolDecoder", "not number or string or nil")
}
}
type nullableFuzzyIntegerDecoder struct {
fun func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator)
}
func (decoder *nullableFuzzyIntegerDecoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
var str string
switch valueType {
case jsoniter.NumberValue:
var number json.Number
iter.ReadVal(&number)
str = string(number)
case jsoniter.StringValue:
str = iter.ReadString()
// support empty string
if str == "" {
str = "0"
}
case jsoniter.BoolValue:
if iter.ReadBool() {
str = "1"
} else {
str = "0"
}
case jsoniter.NilValue:
iter.ReadNil()
str = "0"
default:
iter.ReportError("fuzzyIntegerDecoder", "not number or string")
}
newIter := iter.Pool().BorrowIterator([]byte(str))
defer iter.Pool().ReturnIterator(newIter)
isFloat := strings.IndexByte(str, '.') != -1
decoder.fun(isFloat, ptr, newIter)
if newIter.Error != nil && newIter.Error != io.EOF {
iter.Error = newIter.Error
}
}
type nullableFuzzyFloat32Decoder struct {
}
func (decoder *nullableFuzzyFloat32Decoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
var str string
switch valueType {
case jsoniter.NumberValue:
*((*float32)(ptr)) = iter.ReadFloat32()
case jsoniter.StringValue:
str = iter.ReadString()
// support empty string
if str == "" {
*((*float32)(ptr)) = 0
return
}
newIter := iter.Pool().BorrowIterator([]byte(str))
defer iter.Pool().ReturnIterator(newIter)
*((*float32)(ptr)) = newIter.ReadFloat32()
if newIter.Error != nil && newIter.Error != io.EOF {
iter.Error = newIter.Error
}
case jsoniter.BoolValue:
// support bool to float32
if iter.ReadBool() {
*((*float32)(ptr)) = 1
} else {
*((*float32)(ptr)) = 0
}
case jsoniter.NilValue:
iter.ReadNil()
*((*float32)(ptr)) = 0
default:
iter.ReportError("nullableFuzzyFloat32Decoder", "not number or string")
}
}
type nullableFuzzyFloat64Decoder struct {
}
func (decoder *nullableFuzzyFloat64Decoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
var str string
switch valueType {
case jsoniter.NumberValue:
*((*float64)(ptr)) = iter.ReadFloat64()
case jsoniter.StringValue:
str = iter.ReadString()
// support empty string
if str == "" {
*((*float64)(ptr)) = 0
return
}
newIter := iter.Pool().BorrowIterator([]byte(str))
defer iter.Pool().ReturnIterator(newIter)
*((*float64)(ptr)) = newIter.ReadFloat64()
if newIter.Error != nil && newIter.Error != io.EOF {
iter.Error = newIter.Error
}
case jsoniter.BoolValue:
// support bool to float64
if iter.ReadBool() {
*((*float64)(ptr)) = 1
} else {
*((*float64)(ptr)) = 0
}
case jsoniter.NilValue:
// support empty string
iter.ReadNil()
*((*float64)(ptr)) = 0
default:
iter.ReportError("nullableFuzzyFloat64Decoder", "not number or string")
}
}

File diff suppressed because it is too large


@@ -1,491 +0,0 @@
package tea
func String(a string) *string {
return &a
}
func StringValue(a *string) string {
if a == nil {
return ""
}
return *a
}
func Int(a int) *int {
return &a
}
func IntValue(a *int) int {
if a == nil {
return 0
}
return *a
}
func Int8(a int8) *int8 {
return &a
}
func Int8Value(a *int8) int8 {
if a == nil {
return 0
}
return *a
}
func Int16(a int16) *int16 {
return &a
}
func Int16Value(a *int16) int16 {
if a == nil {
return 0
}
return *a
}
func Int32(a int32) *int32 {
return &a
}
func Int32Value(a *int32) int32 {
if a == nil {
return 0
}
return *a
}
func Int64(a int64) *int64 {
return &a
}
func Int64Value(a *int64) int64 {
if a == nil {
return 0
}
return *a
}
func Bool(a bool) *bool {
return &a
}
func BoolValue(a *bool) bool {
if a == nil {
return false
}
return *a
}
func Uint(a uint) *uint {
return &a
}
func UintValue(a *uint) uint {
if a == nil {
return 0
}
return *a
}
func Uint8(a uint8) *uint8 {
return &a
}
func Uint8Value(a *uint8) uint8 {
if a == nil {
return 0
}
return *a
}
func Uint16(a uint16) *uint16 {
return &a
}
func Uint16Value(a *uint16) uint16 {
if a == nil {
return 0
}
return *a
}
func Uint32(a uint32) *uint32 {
return &a
}
func Uint32Value(a *uint32) uint32 {
if a == nil {
return 0
}
return *a
}
func Uint64(a uint64) *uint64 {
return &a
}
func Uint64Value(a *uint64) uint64 {
if a == nil {
return 0
}
return *a
}
func Float32(a float32) *float32 {
return &a
}
func Float32Value(a *float32) float32 {
if a == nil {
return 0
}
return *a
}
func Float64(a float64) *float64 {
return &a
}
func Float64Value(a *float64) float64 {
if a == nil {
return 0
}
return *a
}
func IntSlice(a []int) []*int {
if a == nil {
return nil
}
res := make([]*int, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func IntValueSlice(a []*int) []int {
if a == nil {
return nil
}
res := make([]int, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Int8Slice(a []int8) []*int8 {
if a == nil {
return nil
}
res := make([]*int8, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Int8ValueSlice(a []*int8) []int8 {
if a == nil {
return nil
}
res := make([]int8, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Int16Slice(a []int16) []*int16 {
if a == nil {
return nil
}
res := make([]*int16, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Int16ValueSlice(a []*int16) []int16 {
if a == nil {
return nil
}
res := make([]int16, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Int32Slice(a []int32) []*int32 {
if a == nil {
return nil
}
res := make([]*int32, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Int32ValueSlice(a []*int32) []int32 {
if a == nil {
return nil
}
res := make([]int32, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Int64Slice(a []int64) []*int64 {
if a == nil {
return nil
}
res := make([]*int64, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Int64ValueSlice(a []*int64) []int64 {
if a == nil {
return nil
}
res := make([]int64, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func UintSlice(a []uint) []*uint {
if a == nil {
return nil
}
res := make([]*uint, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func UintValueSlice(a []*uint) []uint {
if a == nil {
return nil
}
res := make([]uint, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Uint8Slice(a []uint8) []*uint8 {
if a == nil {
return nil
}
res := make([]*uint8, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Uint8ValueSlice(a []*uint8) []uint8 {
if a == nil {
return nil
}
res := make([]uint8, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Uint16Slice(a []uint16) []*uint16 {
if a == nil {
return nil
}
res := make([]*uint16, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Uint16ValueSlice(a []*uint16) []uint16 {
if a == nil {
return nil
}
res := make([]uint16, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Uint32Slice(a []uint32) []*uint32 {
if a == nil {
return nil
}
res := make([]*uint32, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Uint32ValueSlice(a []*uint32) []uint32 {
if a == nil {
return nil
}
res := make([]uint32, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Uint64Slice(a []uint64) []*uint64 {
if a == nil {
return nil
}
res := make([]*uint64, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Uint64ValueSlice(a []*uint64) []uint64 {
if a == nil {
return nil
}
res := make([]uint64, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Float32Slice(a []float32) []*float32 {
if a == nil {
return nil
}
res := make([]*float32, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Float32ValueSlice(a []*float32) []float32 {
if a == nil {
return nil
}
res := make([]float32, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Float64Slice(a []float64) []*float64 {
if a == nil {
return nil
}
res := make([]*float64, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Float64ValueSlice(a []*float64) []float64 {
if a == nil {
return nil
}
res := make([]float64, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func StringSlice(a []string) []*string {
if a == nil {
return nil
}
res := make([]*string, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func StringSliceValue(a []*string) []string {
if a == nil {
return nil
}
res := make([]string, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func BoolSlice(a []bool) []*bool {
if a == nil {
return nil
}
res := make([]*bool, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func BoolSliceValue(a []*bool) []bool {
if a == nil {
return nil
}
res := make([]bool, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
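
The pointer helpers above exist because Go has no address-of for literals of basic types, and SDK structs use pointer fields so that "unset" (nil) is distinguishable from the zero value. A sketch of the pattern — `Request` and its `Bucket` field are hypothetical names, not from this package:

```go
package main

import "fmt"

// String / StringValue follow the helper pattern above: the constructor
// returns a pointer so a struct field can stay nil when unset, and the
// Value accessor is nil-safe.
func String(a string) *string { return &a }
func StringValue(a *string) string {
	if a == nil {
		return ""
	}
	return *a
}

type Request struct {
	Bucket *string // nil means "not provided"
}

func main() {
	req := Request{Bucket: String("my-bucket")}
	fmt.Println(StringValue(req.Bucket)) // my-bucket

	var empty Request
	fmt.Println(StringValue(empty.Bucket) == "") // true: nil reads as ""
}
```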


@@ -1,64 +0,0 @@
package utils
import (
"reflect"
"strings"
"testing"
)
func isNil(object interface{}) bool {
if object == nil {
return true
}
value := reflect.ValueOf(object)
kind := value.Kind()
isNilableKind := containsKind(
[]reflect.Kind{
reflect.Chan, reflect.Func,
reflect.Interface, reflect.Map,
reflect.Ptr, reflect.Slice},
kind)
if isNilableKind && value.IsNil() {
return true
}
return false
}
func containsKind(kinds []reflect.Kind, kind reflect.Kind) bool {
for i := 0; i < len(kinds); i++ {
if kind == kinds[i] {
return true
}
}
return false
}
func AssertEqual(t *testing.T, a, b interface{}) {
if !reflect.DeepEqual(a, b) {
t.Errorf("%v != %v", a, b)
}
}
func AssertNil(t *testing.T, object interface{}) {
if !isNil(object) {
t.Errorf("%v is not nil", object)
}
}
func AssertNotNil(t *testing.T, object interface{}) {
if isNil(object) {
t.Errorf("%v is nil", object)
}
}
func AssertContains(t *testing.T, contains string, msgAndArgs ...string) {
for _, value := range msgAndArgs {
if ok := strings.Contains(contains, value); !ok {
t.Errorf("%s does not contain %s", contains, value)
}
}
}
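
The reflection in `isNil` above is there for the typed-nil pitfall: an interface that holds a typed nil pointer does not compare equal to the untyped nil, so a plain `object == nil` check would miss it. A condensed sketch of the same check:

```go
package main

import (
	"fmt"
	"reflect"
)

// isNil (condensed from the helper above) uses reflection because an
// interface holding a typed nil pointer is not == nil.
func isNil(object interface{}) bool {
	if object == nil {
		return true
	}
	v := reflect.ValueOf(object)
	switch v.Kind() {
	case reflect.Chan, reflect.Func, reflect.Interface,
		reflect.Map, reflect.Ptr, reflect.Slice:
		return v.IsNil()
	}
	return false
}

func main() {
	var p *int            // typed nil pointer
	var i interface{} = p // interface now holds (*int)(nil)
	fmt.Println(i == nil) // false: the interface carries a type
	fmt.Println(isNil(i)) // true: reflection sees the nil pointer
}
```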


@@ -1,109 +0,0 @@
package utils
import (
"io"
"log"
"strings"
"time"
)
type Logger struct {
*log.Logger
formatTemplate string
isOpen bool
lastLogMsg string
}
var defaultLoggerTemplate = `{time} {channel}: "{method} {uri} HTTP/{version}" {code} {cost} {hostname}`
var loggerParam = []string{"{time}", "{start_time}", "{ts}", "{channel}", "{pid}", "{host}", "{method}", "{uri}", "{version}", "{target}", "{hostname}", "{code}", "{error}", "{req_headers}", "{res_body}", "{res_headers}", "{cost}"}
var logChannel string
func InitLogMsg(fieldMap map[string]string) {
for _, value := range loggerParam {
fieldMap[value] = ""
}
}
func (logger *Logger) SetFormatTemplate(template string) {
logger.formatTemplate = template
}
func (logger *Logger) GetFormatTemplate() string {
return logger.formatTemplate
}
func NewLogger(level string, channel string, out io.Writer, template string) *Logger {
if level == "" {
level = "info"
}
logChannel = "AlibabaCloud"
if channel != "" {
logChannel = channel
}
log := log.New(out, "["+strings.ToUpper(level)+"]", log.Lshortfile)
if template == "" {
template = defaultLoggerTemplate
}
return &Logger{
Logger: log,
formatTemplate: template,
isOpen: true,
}
}
func (logger *Logger) OpenLogger() {
logger.isOpen = true
}
func (logger *Logger) CloseLogger() {
logger.isOpen = false
}
func (logger *Logger) SetIsopen(isopen bool) {
logger.isOpen = isopen
}
func (logger *Logger) GetIsopen() bool {
return logger.isOpen
}
func (logger *Logger) SetLastLogMsg(lastLogMsg string) {
logger.lastLogMsg = lastLogMsg
}
func (logger *Logger) GetLastLogMsg() string {
return logger.lastLogMsg
}
func SetLogChannel(channel string) {
logChannel = channel
}
func (logger *Logger) PrintLog(fieldMap map[string]string, err error) {
if err != nil {
fieldMap["{error}"] = err.Error()
}
fieldMap["{time}"] = time.Now().Format("2006-01-02 15:04:05")
fieldMap["{ts}"] = getTimeInFormatISO8601()
fieldMap["{channel}"] = logChannel
if logger != nil {
logMsg := logger.formatTemplate
for key, value := range fieldMap {
logMsg = strings.Replace(logMsg, key, value, -1)
}
logger.lastLogMsg = logMsg
if logger.isOpen {
logger.Output(2, logMsg)
}
}
}
func getTimeInFormatISO8601() (timeStr string) {
gmt := time.FixedZone("GMT", 0)
return time.Now().In(gmt).Format("2006-01-02T15:04:05Z")
}
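
`PrintLog` above renders its message by plain string replacement of each `{key}` placeholder in the format template. A minimal sketch of that rendering step — `render` is a hypothetical name for the loop inside `PrintLog`:

```go
package main

import (
	"fmt"
	"strings"
)

// render substitutes every "{key}" placeholder in the template with the
// corresponding fieldMap value, as PrintLog does above.
func render(template string, fieldMap map[string]string) string {
	msg := template
	for key, value := range fieldMap {
		msg = strings.Replace(msg, key, value, -1)
	}
	return msg
}

func main() {
	tmpl := `{time} {channel}: "{method} {uri}" {code}`
	out := render(tmpl, map[string]string{
		"{time}":    "2026-03-31 21:00:00",
		"{channel}": "AlibabaCloud",
		"{method}":  "GET",
		"{uri}":     "/bucket/key",
		"{code}":    "200",
	})
	fmt.Println(out)
}
```

Because placeholders a caller never sets stay as literal `{key}` text, `InitLogMsg` above pre-fills every known key with the empty string.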


@@ -1,60 +0,0 @@
package utils
// ProgressEventType defines transfer progress event type
type ProgressEventType int
const (
// TransferStartedEvent transfer started, set TotalBytes
TransferStartedEvent ProgressEventType = 1 + iota
// TransferDataEvent transfer data, set ConsumedBytes and TotalBytes
TransferDataEvent
// TransferCompletedEvent transfer completed
TransferCompletedEvent
// TransferFailedEvent transfer encounters an error
TransferFailedEvent
)
// ProgressEvent defines progress event
type ProgressEvent struct {
ConsumedBytes int64
TotalBytes int64
RwBytes int64
EventType ProgressEventType
}
// ProgressListener listens progress change
type ProgressListener interface {
ProgressChanged(event *ProgressEvent)
}
// -------------------- Private --------------------
func NewProgressEvent(eventType ProgressEventType, consumed, total int64, rwBytes int64) *ProgressEvent {
return &ProgressEvent{
ConsumedBytes: consumed,
TotalBytes: total,
RwBytes: rwBytes,
EventType: eventType}
}
// publishProgress
func PublishProgress(listener ProgressListener, event *ProgressEvent) {
if listener != nil && event != nil {
listener.ProgressChanged(event)
}
}
func GetProgressListener(obj interface{}) ProgressListener {
if obj == nil {
return nil
}
listener, ok := obj.(ProgressListener)
if !ok {
return nil
}
return listener
}
type ReaderTracker struct {
CompletedBytes int64
}
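
Wiring the listener interface above takes one method: a caller implements `ProgressChanged`, and `PublishProgress` fans each event to it. A self-contained sketch — `percentPrinter` is a hypothetical listener, while the event types mirror the declarations above:

```go
package main

import "fmt"

type ProgressEventType int

const (
	TransferStartedEvent ProgressEventType = 1 + iota
	TransferDataEvent
	TransferCompletedEvent
	TransferFailedEvent
)

type ProgressEvent struct {
	ConsumedBytes, TotalBytes, RwBytes int64
	EventType                          ProgressEventType
}

type ProgressListener interface {
	ProgressChanged(event *ProgressEvent)
}

// percentPrinter is a hypothetical listener that prints transfer progress
// as a percentage of TotalBytes.
type percentPrinter struct{}

func (percentPrinter) ProgressChanged(e *ProgressEvent) {
	if e.EventType == TransferDataEvent && e.TotalBytes > 0 {
		fmt.Printf("%d%%\n", e.ConsumedBytes*100/e.TotalBytes)
	}
}

func main() {
	var l ProgressListener = percentPrinter{}
	l.ProgressChanged(&ProgressEvent{
		ConsumedBytes: 50, TotalBytes: 200, EventType: TransferDataEvent,
	}) // prints 25%
}
```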


@@ -1,14 +0,0 @@
Copyright (c) 2015 aliyun.com
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1,339 +0,0 @@
package oss
import (
"bytes"
"crypto/hmac"
"crypto/sha1"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"fmt"
"hash"
"io"
"net/http"
"sort"
"strconv"
"strings"
"time"
)
// headerSorter defines the key-value structure for storing the sorted data in signHeader.
type headerSorter struct {
Keys []string
Vals []string
}
// getAdditionalHeaderKeys get exist key in http header
func (conn Conn) getAdditionalHeaderKeys(req *http.Request) ([]string, map[string]string) {
var keysList []string
keysMap := make(map[string]string)
srcKeys := make(map[string]string)
for k := range req.Header {
srcKeys[strings.ToLower(k)] = ""
}
for _, v := range conn.config.AdditionalHeaders {
if _, ok := srcKeys[strings.ToLower(v)]; ok {
keysMap[strings.ToLower(v)] = ""
}
}
for k := range keysMap {
keysList = append(keysList, k)
}
sort.Strings(keysList)
return keysList, keysMap
}
// getAdditionalHeaderKeysV4 get exist key in http header
func (conn Conn) getAdditionalHeaderKeysV4(req *http.Request) ([]string, map[string]string) {
var keysList []string
keysMap := make(map[string]string)
srcKeys := make(map[string]string)
for k := range req.Header {
srcKeys[strings.ToLower(k)] = ""
}
for _, v := range conn.config.AdditionalHeaders {
if _, ok := srcKeys[strings.ToLower(v)]; ok {
if !strings.EqualFold(v, HTTPHeaderContentMD5) && !strings.EqualFold(v, HTTPHeaderContentType) {
keysMap[strings.ToLower(v)] = ""
}
}
}
for k := range keysMap {
keysList = append(keysList, k)
}
sort.Strings(keysList)
return keysList, keysMap
}
// signHeader signs the header and sets it as the authorization header.
func (conn Conn) signHeader(req *http.Request, canonicalizedResource string, credentials Credentials) {
akIf := credentials
authorizationStr := ""
if conn.config.AuthVersion == AuthV4 {
strDay := ""
strDate := req.Header.Get(HttpHeaderOssDate)
if strDate == "" {
strDate = req.Header.Get(HTTPHeaderDate)
t, _ := time.Parse(http.TimeFormat, strDate)
strDay = t.Format("20060102")
} else {
t, _ := time.Parse(timeFormatV4, strDate)
strDay = t.Format("20060102")
}
signHeaderProduct := conn.config.GetSignProduct()
signHeaderRegion := conn.config.GetSignRegion()
additionalList, _ := conn.getAdditionalHeaderKeysV4(req)
if len(additionalList) > 0 {
authorizationFmt := "OSS4-HMAC-SHA256 Credential=%v/%v/%v/" + signHeaderProduct + "/aliyun_v4_request,AdditionalHeaders=%v,Signature=%v"
additionnalHeadersStr := strings.Join(additionalList, ";")
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), strDay, signHeaderRegion, additionnalHeadersStr, conn.getSignedStrV4(req, canonicalizedResource, akIf.GetAccessKeySecret(), nil))
} else {
authorizationFmt := "OSS4-HMAC-SHA256 Credential=%v/%v/%v/" + signHeaderProduct + "/aliyun_v4_request,Signature=%v"
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), strDay, signHeaderRegion, conn.getSignedStrV4(req, canonicalizedResource, akIf.GetAccessKeySecret(), nil))
}
} else if conn.config.AuthVersion == AuthV2 {
additionalList, _ := conn.getAdditionalHeaderKeys(req)
if len(additionalList) > 0 {
authorizationFmt := "OSS2 AccessKeyId:%v,AdditionalHeaders:%v,Signature:%v"
additionnalHeadersStr := strings.Join(additionalList, ";")
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), additionnalHeadersStr, conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret()))
} else {
authorizationFmt := "OSS2 AccessKeyId:%v,Signature:%v"
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret()))
}
} else {
// Get the final authorization string
authorizationStr = "OSS " + akIf.GetAccessKeyID() + ":" + conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret())
}
// Give the parameter "Authorization" value
req.Header.Set(HTTPHeaderAuthorization, authorizationStr)
}
func (conn Conn) getSignedStr(req *http.Request, canonicalizedResource string, keySecret string) string {
// Find out the "x-oss-"'s address in header of the request
ossHeadersMap := make(map[string]string)
additionalList, additionalMap := conn.getAdditionalHeaderKeys(req)
for k, v := range req.Header {
if strings.HasPrefix(strings.ToLower(k), "x-oss-") {
ossHeadersMap[strings.ToLower(k)] = v[0]
} else if conn.config.AuthVersion == AuthV2 {
if _, ok := additionalMap[strings.ToLower(k)]; ok {
ossHeadersMap[strings.ToLower(k)] = v[0]
}
}
}
hs := newHeaderSorter(ossHeadersMap)
// Sort the ossHeadersMap by the ascending order
hs.Sort()
// Get the canonicalizedOSSHeaders
canonicalizedOSSHeaders := ""
for i := range hs.Keys {
canonicalizedOSSHeaders += hs.Keys[i] + ":" + hs.Vals[i] + "\n"
}
// Give other parameters values
// when sign URL, date is expires
date := req.Header.Get(HTTPHeaderDate)
contentType := req.Header.Get(HTTPHeaderContentType)
contentMd5 := req.Header.Get(HTTPHeaderContentMD5)
// default is v1 signature
signStr := req.Method + "\n" + contentMd5 + "\n" + contentType + "\n" + date + "\n" + canonicalizedOSSHeaders + canonicalizedResource
h := hmac.New(func() hash.Hash { return sha1.New() }, []byte(keySecret))
// v2 signature
if conn.config.AuthVersion == AuthV2 {
signStr = req.Method + "\n" + contentMd5 + "\n" + contentType + "\n" + date + "\n" + canonicalizedOSSHeaders + strings.Join(additionalList, ";") + "\n" + canonicalizedResource
h = hmac.New(func() hash.Hash { return sha256.New() }, []byte(keySecret))
}
if conn.config.LogLevel >= Debug {
conn.config.WriteLog(Debug, "[Req:%p]signStr:%s\n", req, EscapeLFString(signStr))
}
io.WriteString(h, signStr)
signedStr := base64.StdEncoding.EncodeToString(h.Sum(nil))
return signedStr
}
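
The last step of `getSignedStr` above is the classic OSS v1 signature: HMAC-SHA1 the canonical string with the key secret and base64-encode the digest. A minimal stdlib sketch of just that step — the `sign` helper name and the sample inputs are made up for illustration:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// sign performs the final step of getSignedStr above: HMAC-SHA1 over the
// canonical string, base64-encoded.
func sign(signStr, keySecret string) string {
	h := hmac.New(sha1.New, []byte(keySecret))
	h.Write([]byte(signStr))
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	// Canonical string shape: Method \n Content-MD5 \n Content-Type \n
	// Date \n canonicalizedOSSHeaders + canonicalizedResource.
	signStr := "GET\n\n\nWed, 01 Apr 2026 00:00:00 GMT\n/example-bucket/key"
	fmt.Println(sign(signStr, "exampleSecret"))
}
```

Note the code above swaps in SHA-256 for the v2 scheme while keeping the same HMAC-then-base64 shape.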
func (conn Conn) getSignedStrV4(req *http.Request, canonicalizedResource string, keySecret string, signingTime *time.Time) string {
	// Collect the headers that participate in the V4 signature:
	// Content-MD5, Content-Type, "x-oss-"-prefixed headers, and any additional headers
	ossHeadersMap := make(map[string]string)
	additionalList, additionalMap := conn.getAdditionalHeaderKeysV4(req)
	for k, v := range req.Header {
		lowKey := strings.ToLower(k)
		if strings.EqualFold(lowKey, HTTPHeaderContentMD5) ||
			strings.EqualFold(lowKey, HTTPHeaderContentType) ||
			strings.HasPrefix(lowKey, "x-oss-") {
			ossHeadersMap[lowKey] = strings.Trim(v[0], " ")
		} else {
			if _, ok := additionalMap[lowKey]; ok {
				ossHeadersMap[lowKey] = strings.Trim(v[0], " ")
			}
		}
	}

	// Resolve the signing timestamp and its date scope (e.g. 20210914),
	// preferring an explicit signingTime over the request headers
	signDate := ""
	strDay := ""
	if signingTime != nil {
		signDate = signingTime.Format(timeFormatV4)
		strDay = signingTime.Format(shortTimeFormatV4)
	} else {
		var t time.Time
		// Required parameters
		if date := req.Header.Get(HTTPHeaderDate); date != "" {
			signDate = date
			t, _ = time.Parse(http.TimeFormat, date)
		}
		if ossDate := req.Header.Get(HttpHeaderOssDate); ossDate != "" {
			signDate = ossDate
			t, _ = time.Parse(timeFormatV4, ossDate)
		}
		strDay = t.Format("20060102")
	}

	hs := newHeaderSorter(ossHeadersMap)

	// Sort the ossHeadersMap in ascending key order
	hs.Sort()

	// Build the canonicalizedOSSHeaders string
	canonicalizedOSSHeaders := ""
	for i := range hs.Keys {
		canonicalizedOSSHeaders += hs.Keys[i] + ":" + hs.Vals[i] + "\n"
	}

	signStr := ""

	// The payload hash defaults to DefaultContentSha256 unless the request sets it explicitly
	hashedPayload := DefaultContentSha256
	if val := req.Header.Get(HttpHeaderOssContentSha256); val != "" {
		hashedPayload = val
	}

	// Split the canonicalized resource into the path and its sub-resource query string
	resource := canonicalizedResource
	subResource := ""
	subPos := strings.LastIndex(canonicalizedResource, "?")
	if subPos != -1 {
		subResource = canonicalizedResource[subPos+1:]
		resource = canonicalizedResource[0:subPos]
	}

	// Build and hash the canonical request
	canonicalRequest := req.Method + "\n" + resource + "\n" + subResource + "\n" + canonicalizedOSSHeaders + "\n" + strings.Join(additionalList, ";") + "\n" + hashedPayload
	rh := sha256.New()
	io.WriteString(rh, canonicalRequest)
	hashedRequest := hex.EncodeToString(rh.Sum(nil))

	if conn.config.LogLevel >= Debug {
		conn.config.WriteLog(Debug, "[Req:%p]CanonicalRequest:%s\n", req, EscapeLFString(canonicalRequest))
	}

	// Compose the string to sign from the algorithm, timestamp,
	// credential scope (day/region/product), and the hashed canonical request
	signedStrV4Product := conn.config.GetSignProduct()
	signedStrV4Region := conn.config.GetSignRegion()
	signStr = "OSS4-HMAC-SHA256" + "\n" + signDate + "\n" + strDay + "/" + signedStrV4Region + "/" + signedStrV4Product + "/aliyun_v4_request" + "\n" + hashedRequest

	if conn.config.LogLevel >= Debug {
		conn.config.WriteLog(Debug, "[Req:%p]signStr:%s\n", req, EscapeLFString(signStr))
	}

	// Derive the V4 signing key: a chain of HMAC-SHA256 steps over the day,
	// region, product, and the "aliyun_v4_request" terminator,
	// seeded with "aliyun_v4" + keySecret
	h1 := hmac.New(func() hash.Hash { return sha256.New() }, []byte("aliyun_v4"+keySecret))
	io.WriteString(h1, strDay)
	h1Key := h1.Sum(nil)
	h2 := hmac.New(func() hash.Hash { return sha256.New() }, h1Key)
	io.WriteString(h2, signedStrV4Region)
	h2Key := h2.Sum(nil)
	h3 := hmac.New(func() hash.Hash { return sha256.New() }, h2Key)
	io.WriteString(h3, signedStrV4Product)
	h3Key := h3.Sum(nil)
	h4 := hmac.New(func() hash.Hash { return sha256.New() }, h3Key)
	io.WriteString(h4, "aliyun_v4_request")
	h4Key := h4.Sum(nil)

	// Sign the string to sign with the derived key and hex-encode the result
	h := hmac.New(func() hash.Hash { return sha256.New() }, h4Key)
	io.WriteString(h, signStr)
	return fmt.Sprintf("%x", h.Sum(nil))
}

func (conn Conn) getRtmpSignedStr(bucketName, channelName, playlistName string, expiration int64, keySecret string, params map[string]interface{}) string {
	if params[HTTPParamAccessKeyID] == nil {
		return ""
	}

	canonResource := fmt.Sprintf("/%s/%s", bucketName, channelName)

	// Canonicalize every parameter except the signature-related ones
	canonParamsKeys := []string{}
	for key := range params {
		if key != HTTPParamAccessKeyID && key != HTTPParamSignature && key != HTTPParamExpires && key != HTTPParamSecurityToken {
			canonParamsKeys = append(canonParamsKeys, key)
		}
	}
	sort.Strings(canonParamsKeys)
	canonParamsStr := ""
	for _, key := range canonParamsKeys {
		canonParamsStr = fmt.Sprintf("%s%s:%s\n", canonParamsStr, key, params[key].(string))
	}

	expireStr := strconv.FormatInt(expiration, 10)
	signStr := expireStr + "\n" + canonParamsStr + canonResource
	h := hmac.New(func() hash.Hash { return sha1.New() }, []byte(keySecret))
	io.WriteString(h, signStr)
	signedStr := base64.StdEncoding.EncodeToString(h.Sum(nil))
	return signedStr
}

// newHeaderSorter creates the header sorter used by SignHeader.
func newHeaderSorter(m map[string]string) *headerSorter {
	hs := &headerSorter{
		Keys: make([]string, 0, len(m)),
		Vals: make([]string, 0, len(m)),
	}
	for k, v := range m {
		hs.Keys = append(hs.Keys, k)
		hs.Vals = append(hs.Vals, v)
	}
	return hs
}

// Sort sorts the headers in place in ascending key order.
func (hs *headerSorter) Sort() {
	sort.Sort(hs)
}

// Len implements sort.Interface.
func (hs *headerSorter) Len() int {
	return len(hs.Vals)
}

// Less implements sort.Interface, comparing keys bytewise.
func (hs *headerSorter) Less(i, j int) bool {
	return bytes.Compare([]byte(hs.Keys[i]), []byte(hs.Keys[j])) < 0
}

// Swap implements sort.Interface, swapping keys and values together.
func (hs *headerSorter) Swap(i, j int) {
	hs.Vals[i], hs.Vals[j] = hs.Vals[j], hs.Vals[i]
	hs.Keys[i], hs.Keys[j] = hs.Keys[j], hs.Keys[i]
}
