init repo

liuyuanchuang committed 2025-12-10 18:33:37 +08:00
commit 48e63894eb
2408 changed files with 1053045 additions and 0 deletions


@@ -0,0 +1,17 @@
---
description:
globs:
alwaysApply: false
---
# API Structure
The API follows a versioned structure under the path prefix `/doc_ai/v1/`.
- [api/router.go](mdc:api/router.go): Main router setup that connects all API endpoints
- API endpoints are organized by domain:
- Formula: [api/v1/formula/](mdc:api/v1/formula)
- OSS (Object Storage): [api/v1/oss/](mdc:api/v1/oss)
- Task: [api/v1/task/](mdc:api/v1/task)
- User: [api/v1/user/](mdc:api/v1/user)
Each domain has its own router setup and controller implementation.
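For orientation, here is a minimal sketch of how the prefix and the version group compose. This is a hypothetical main-style wiring, not the actual entry point; only `api.SetupRouter` and the `/doc_ai/v1/` prefix come from this repository, and the port is an assumption taken from the configuration files.
```go
package main

import (
	"gitea.com/bitwsd/document_ai/api"
	"github.com/gin-gonic/gin"
)

func main() {
	engine := gin.Default()
	// api.SetupRouter adds the /v1 group and the per-domain routers, so routes
	// resolve under /doc_ai/v1/, e.g. POST /doc_ai/v1/formula/recognition.
	api.SetupRouter(engine.Group("/doc_ai"))
	_ = engine.Run(":8024") // port is an assumption; see the configuration rule
}
```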


@@ -0,0 +1,27 @@
---
description:
globs:
alwaysApply: false
---
# Configuration System
The application uses a YAML-based configuration system:
- [config/](mdc:config): Configuration directory
- Environment-specific configuration files:
- `config_dev.yaml`: Development environment configuration
- `config_prod.yaml`: Production environment configuration
The configuration is loaded at startup in [main.go](mdc:main.go) using the `config.Init()` function, with the environment specified via command-line flags:
```
go run main.go -env dev # Run with dev configuration
go run main.go -env prod # Run with production configuration
```
Configuration includes settings for:
- Database connection
- Redis cache
- Logging
- Server port and mode
- External service credentials
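A minimal sketch of the startup wiring described above, assuming the `-env` flag maps directly onto the `config_<env>.yaml` filenames; the flag handling inside main.go is an assumption, while `config.Init` and `config.GlobalConfig` come from config/config.go.
```go
package main

import (
	"flag"
	"fmt"
	"log"

	"gitea.com/bitwsd/document_ai/config"
)

func main() {
	// -env selects which YAML file to load (dev or prod).
	env := flag.String("env", "dev", "runtime environment: dev or prod")
	flag.Parse()

	// Assumed mapping: -env dev -> config/config_dev.yaml, -env prod -> config/config_prod.yaml.
	if err := config.Init(fmt.Sprintf("config/config_%s.yaml", *env)); err != nil {
		log.Fatalf("failed to load config: %v", err)
	}
	// After Init, settings are available via config.GlobalConfig,
	// e.g. config.GlobalConfig.Server.Port or config.GlobalConfig.Redis.Addr.
}
```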


@@ -0,0 +1,25 @@
---
description:
globs:
alwaysApply: false
---
# Deployment Configuration
The project includes Docker configuration for containerized deployment:
- [Dockerfile](mdc:Dockerfile): Container definition for the application
- [docker-compose.yml](mdc:docker-compose.yml): Multi-container deployment configuration
The application can be built and deployed using:
```bash
# Build and run with Docker Compose
docker-compose up -d
# Build Docker image directly
docker build -t document_ai .
docker run -p 8080:8080 document_ai
```
The project also includes CI/CD configuration:
- [.gitlab-ci.yml](mdc:.gitlab-ci.yml): GitLab CI/CD pipeline configuration


@@ -0,0 +1,21 @@
---
description:
globs:
alwaysApply: false
---
# Internal Architecture
The internal directory contains the core business logic of the application:
- [internal/model/](mdc:internal/model): Data models and domain entities
- [internal/service/](mdc:internal/service): Business logic services
- [internal/storage/](mdc:internal/storage): Data persistence layer
- [internal/storage/dao/](mdc:internal/storage/dao): Database access objects
- [internal/storage/cache/](mdc:internal/storage/cache): Redis cache implementation
The application follows a layered architecture with clear separation between:
1. HTTP handlers (in api/)
2. Business logic (in internal/service/)
3. Data access (in internal/storage/)
This design promotes maintainability and testability by decoupling components.
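As a hedged illustration of the service-to-storage hop, the sketch below reuses DAO calls that appear elsewhere in this commit (`dao.NewRecognitionTaskDao`, `dao.DB`, `GetTaskByFileURL`); the surrounding service function is hypothetical, not the actual implementation.
```go
package service

import (
	"context"

	"gitea.com/bitwsd/document_ai/internal/storage/dao"
)

// findExistingTask is a hypothetical service-layer helper: business rules live
// here, while persistence goes through the DAO in internal/storage/dao.
func findExistingTask(ctx context.Context, userID int64, fileHash string) (bool, error) {
	taskDao := dao.NewRecognitionTaskDao()
	sess := dao.DB.WithContext(ctx)
	task, err := taskDao.GetTaskByFileURL(sess, userID, fileHash)
	if err != nil {
		return false, err
	}
	return task.ID != 0, nil
}
```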


@@ -0,0 +1,16 @@
---
description:
globs:
alwaysApply: false
---
# Project Overview
Document AI is a Go-based application for document processing and analysis. The project is structured using a clean architecture pattern with:
- [main.go](mdc:main.go): The application entry point that configures and launches the HTTP server
- [config/](mdc:config): Configuration files and initialization
- [api/](mdc:api): API endpoints and HTTP handlers
- [internal/](mdc:internal): Core business logic and implementation
- [pkg/](mdc:pkg): Shared utilities and helper packages
The application uses the Gin web framework for HTTP routing and middleware functionality.


@@ -0,0 +1,18 @@
---
description:
globs:
alwaysApply: false
---
# Utility Packages
The `pkg` directory contains shared utilities and functionality:
- [pkg/common/](mdc:pkg/common): Common middleware and shared functionality
- [pkg/constant/](mdc:pkg/constant): Constants used throughout the application
- [pkg/jwt/](mdc:pkg/jwt): JWT authentication utilities
- [pkg/oss/](mdc:pkg/oss): Object Storage Service client implementation
- [pkg/sms/](mdc:pkg/sms): SMS service integration
- [pkg/utils/](mdc:pkg/utils): General utility functions
- [pkg/httpclient/](mdc:pkg/httpclient): HTTP client utilities
These packages provide reusable components that can be used across different parts of the application without creating circular dependencies.
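A hedged usage sketch combining `pkg/jwt` and `pkg/common`, modeled on the user handlers and router later in this commit; the `/whoami` route is illustrative, and the assumption that the auth middleware stores the user id under `constant.ContextUserID` is inferred from how the user handler reads it.
```go
package main

import (
	"net/http"

	"gitea.com/bitwsd/document_ai/pkg/common"
	"gitea.com/bitwsd/document_ai/pkg/constant"
	"gitea.com/bitwsd/document_ai/pkg/jwt"
	"github.com/gin-gonic/gin"
)

func main() {
	// Issue a token the same way api/v1/user/handler.go does.
	token, err := jwt.CreateToken(jwt.User{UserId: 1})
	if err != nil {
		panic(err)
	}
	_ = token // returned to the client on login

	// Protect a route with the shared auth middleware from pkg/common.
	r := gin.Default()
	r.GET("/whoami", common.GetAuthMiddleware(), func(c *gin.Context) {
		// Assumption: the middleware stores the authenticated user id under
		// constant.ContextUserID, as the user handler reads it.
		uid := c.GetInt64(constant.ContextUserID)
		c.JSON(http.StatusOK, common.SuccessResponse(c, gin.H{"uid": uid}))
	})
	_ = r.Run(":8024")
}
```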

4
.cursorignore Normal file

@@ -0,0 +1,4 @@
go.mod
go.sum
upload/
logs/

7
.gitignore vendored Normal file

@@ -0,0 +1,7 @@
/logs
*.log
.vscode
*.cursorrules
*png
/upload
document_ai

83
.gitlab-ci.yml Normal file

@@ -0,0 +1,83 @@
stages:
- build
- deploy
build_image:
stage: build
image: docker:26.0.0
services:
- name: docker:dind
tags:
- gitlab-release-ci
variables:
CI_DEBUG_TRACE: "true"
IMAGE_NAME: "doc_ai"
before_script:
- echo "start build image"
- docker info || { echo "Docker is not running"; exit 1; }
- docker ps || { echo "Docker daemon is not available"; exit 1; }
- echo "$DOCKER_PASSWORD" | docker login --username=$DOCKER_USERNAME --password-stdin $ACR_REGISTRY_URL
- docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD $ACR_REGISTRY_URL
script:
- docker build -t $ACR_REGISTRY_URL/bitwsd/$IMAGE_NAME:${CI_COMMIT_SHA:0:9} .
- docker push $ACR_REGISTRY_URL/bitwsd/$IMAGE_NAME:${CI_COMMIT_SHA:0:9}
only:
- main
deploy_image:
stage: deploy
variables:
PORT: 8024
IMAGE_NAME: "doc_ai"
tags:
- shell-excutor
script:
- |
for host in $DEPLOY_HOSTS; do
echo "Deploying to $host ..."
# Execute remote commands over SSH
ssh $DEPLOY_USER@$host "
# Stop and remove the old container (if it exists)
docker stop $IMAGE_NAME || true
docker rm $IMAGE_NAME || true
# Log in to the container registry
echo \"$DOCKER_PASSWORD\" | docker login --username=$DOCKER_USERNAME --password-stdin $ACR_REGISTRY_URL
# Pull the new image
docker pull $ACR_REGISTRY_URL/bitwsd/$IMAGE_NAME:${CI_COMMIT_SHA:0:9}
# Start the new container
docker run -d --name $IMAGE_NAME \
-p $PORT:8024 \
--restart unless-stopped \
$ACR_REGISTRY_URL/bitwsd/$IMAGE_NAME:${CI_COMMIT_SHA:0:9}
"
echo "Deployed to $host"
done
only:
- main
deploy_image_dev:
stage: deploy
variables:
PORT: 8024
BRANCH: "test"
APP_NAME: "doc_ai"
ENV: "dev"
tags:
- shell-excutor
before_script:
- echo "$DOCKER_PASSWORD" | docker login --username=$DOCKER_USERNAME --password-stdin $ACR_REGISTRY_URL
- docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD $ACR_REGISTRY_URL
script:
- echo "Working in $CI_PROJECT_DIR"
- git config --global --add safe.directory $CI_PROJECT_DIR
- git pull origin $BRANCH
- docker stop $APP_NAME || true
- docker rm $APP_NAME || true
- docker build -t $APP_NAME .
- docker run -d --name $APP_NAME -p $PORT:8024 --restart unless-stopped $APP_NAME -env dev
- echo "Deployed Go binary on Aliyun ECS with ${ENV} environment"
only:
- test

40
Dockerfile Normal file

@@ -0,0 +1,40 @@
# Build stage
FROM registry.cn-beijing.aliyuncs.com/bitwsd/golang AS builder
WORKDIR /app
# Copy source code
COPY . .
# Build binary
RUN CGO_ENABLED=0 GOOS=linux go build -mod=vendor -o main ./main.go
# Runtime stage
FROM registry.cn-beijing.aliyuncs.com/bitwsd/alpine
# Set timezone
RUN apk add --no-cache tzdata && \
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
echo "Asia/Shanghai" > /etc/timezone
WORKDIR /app
# Copy binary from builder
COPY --from=builder /app/main .
# Copy config files
COPY config/config_*.yaml ./config/
# Create data directory
RUN mkdir -p /data/formula && \
chmod -R 755 /data
# Expose port (update based on your config)
EXPOSE 8024
# Set entrypoint
ENTRYPOINT ["./main"]
# Default command (can be overridden)
CMD ["-env", "prod"]

93
README.md Normal file

@@ -0,0 +1,93 @@
# document_ai
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://cloud.srcstar.com:6005/bitwsd/document_ai.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://cloud.srcstar.com:6005/bitwsd/document_ai/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Set auto-merge](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thanks to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.

19
api/router.go Normal file

@@ -0,0 +1,19 @@
package api
import (
"gitea.com/bitwsd/document_ai/api/v1/formula"
"gitea.com/bitwsd/document_ai/api/v1/oss"
"gitea.com/bitwsd/document_ai/api/v1/task"
"gitea.com/bitwsd/document_ai/api/v1/user"
"github.com/gin-gonic/gin"
)
func SetupRouter(engine *gin.RouterGroup) {
v1 := engine.Group("/v1")
{
formula.SetupRouter(v1)
oss.SetupRouter(v1)
task.SetupRouter(v1)
user.SetupRouter(v1)
}
}

118
api/v1/formula/handler.go Normal file

@@ -0,0 +1,118 @@
package formula
import (
"net/http"
"path/filepath"
"gitea.com/bitwsd/document_ai/internal/model/formula"
"gitea.com/bitwsd/document_ai/internal/service"
"gitea.com/bitwsd/document_ai/internal/storage/dao"
"gitea.com/bitwsd/document_ai/pkg/common"
"gitea.com/bitwsd/document_ai/pkg/utils"
"github.com/gin-gonic/gin"
)
type FormulaEndpoint struct {
recognitionService *service.RecognitionService
}
func NewFormulaEndpoint() *FormulaEndpoint {
return &FormulaEndpoint{
recognitionService: service.NewRecognitionService(),
}
}
// CreateTask godoc
// @Summary Create a formula recognition task
// @Description Create a new formula recognition task from image
// @Tags Formula
// @Accept json
// @Produce json
// @Param request body CreateFormulaRecognitionRequest true "Create task request"
// @Success 200 {object} common.Response{data=CreateTaskResponse}
// @Failure 400 {object} common.Response
// @Failure 500 {object} common.Response
// @Router /v1/formula/recognition [post]
func (endpoint *FormulaEndpoint) CreateTask(ctx *gin.Context) {
var req formula.CreateFormulaRecognitionRequest
if err := ctx.BindJSON(&req); err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "Invalid parameters"))
return
}
if !utils.InArray(req.TaskType, []string{string(dao.TaskTypeFormula)}) {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "Invalid task type"))
return
}
fileExt := filepath.Ext(req.FileName)
if !utils.InArray(fileExt, []string{".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff", ".webp"}) {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "Invalid file type"))
return
}
task, err := endpoint.recognitionService.CreateRecognitionTask(ctx, &req)
if err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, "Failed to create task"))
return
}
ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, &formula.CreateTaskResponse{
TaskNo: task.TaskUUID,
Status: int(task.Status),
}))
}
// GetTaskStatus godoc
// @Summary Get formula recognition task status
// @Description Get the status and results of a formula recognition task
// @Tags Formula
// @Accept json
// @Produce json
// @Param task_no path string true "Task No"
// @Success 200 {object} common.Response{data=GetTaskStatusResponse}
// @Failure 400 {object} common.Response
// @Failure 404 {object} common.Response
// @Failure 500 {object} common.Response
// @Router /v1/formula/recognition/{task_no} [get]
func (endpoint *FormulaEndpoint) GetTaskStatus(c *gin.Context) {
var req formula.GetRecognitionStatusRequest
if err := c.ShouldBindUri(&req); err != nil {
c.JSON(http.StatusBadRequest, common.ErrorResponse(c, common.CodeParamError, "invalid task no"))
return
}
task, err := endpoint.recognitionService.GetFormualTask(c, req.TaskNo)
if err != nil {
c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeSystemError, "failed to get task status"))
return
}
c.JSON(http.StatusOK, common.SuccessResponse(c, task))
}
// AIEnhanceRecognition godoc
// @Summary AI-enhanced recognition
// @Description AI-enhanced recognition
// @Tags Formula
// @Accept json
// @Produce json
// @Param request body formula.AIEnhanceRecognitionRequest true "AI-enhanced recognition request"
// @Success 200 {object} common.Response{data=formula.AIEnhanceRecognitionResponse}
// @Router /v1/formula/ai_enhance [post]
func (endpoint *FormulaEndpoint) AIEnhanceRecognition(c *gin.Context) {
var req formula.AIEnhanceRecognitionRequest
if err := c.BindJSON(&req); err != nil {
c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeParamError, "Invalid parameters"))
return
}
_, err := endpoint.recognitionService.AIEnhanceRecognition(c, &req)
if err != nil {
c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeSystemError, err.Error()))
return
}
c.JSON(http.StatusOK, common.SuccessResponse(c, nil))
}

12
api/v1/formula/router.go Normal file

@@ -0,0 +1,12 @@
package formula
import (
"github.com/gin-gonic/gin"
)
func SetupRouter(engine *gin.RouterGroup) {
endpoint := NewFormulaEndpoint()
engine.POST("/formula/recognition", endpoint.CreateTask)
engine.POST("/formula/ai_enhance", endpoint.AIEnhanceRecognition)
engine.GET("/formula/recognition/:task_no", endpoint.GetTaskStatus)
}

106
api/v1/oss/handler.go Normal file

@@ -0,0 +1,106 @@
package oss
import (
"fmt"
"net/http"
"os"
"path/filepath"
"time"
"gitea.com/bitwsd/document_ai/config"
"gitea.com/bitwsd/document_ai/internal/storage/dao"
"gitea.com/bitwsd/document_ai/pkg/common"
"gitea.com/bitwsd/document_ai/pkg/constant"
"gitea.com/bitwsd/document_ai/pkg/oss"
"gitea.com/bitwsd/document_ai/pkg/utils"
"github.com/gin-gonic/gin"
"gorm.io/gorm"
)
func GetPostObjectSignature(ctx *gin.Context) {
policyToken, err := oss.GetPolicyToken()
if err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, err.Error()))
return
}
ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, policyToken))
}
// GetSignatureURL handles the request to generate a signed URL for a file upload.
// @Summary Generate signed URL for file upload
// @Description This endpoint generates a signed URL for uploading a file to the OSS (Object Storage Service).
// @Tags file
// @Accept json
// @Produce json
// @Param hash query string true "Hash value of the file"
// @Success 200 {object} common.Response "Signed URL generated successfully (sign_url, repeat, path)"
// @Failure 200 {object} common.Response "Error response"
// @Router /signature_url [get]
func GetSignatureURL(ctx *gin.Context) {
userID := ctx.GetInt64(constant.ContextUserID)
type Req struct {
FileHash string `json:"file_hash" binding:"required"`
FileName string `json:"file_name" binding:"required"`
}
req := Req{}
err := ctx.BindJSON(&req)
if err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "param error"))
return
}
taskDao := dao.NewRecognitionTaskDao()
sess := dao.DB.WithContext(ctx)
task, err := taskDao.GetTaskByFileURL(sess, userID, req.FileHash)
if err != nil && err != gorm.ErrRecordNotFound {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeDBError, "failed to get task"))
return
}
if task.ID != 0 {
ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, gin.H{"sign_url": "", "repeat": true, "path": task.FileURL}))
return
}
extend := filepath.Ext(req.FileName)
if extend == "" {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "invalid file name"))
return
}
if !utils.InArray(extend, []string{".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff", ".webp"}) {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "invalid file type"))
return
}
path := filepath.Join(oss.FormulaDir, fmt.Sprintf("%s%s", utils.NewUUID(), extend))
url, err := oss.GetPolicyURL(ctx, path)
if err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, err.Error()))
return
}
ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, gin.H{"sign_url": url, "repeat": false, "path": path}))
}
func UploadFile(ctx *gin.Context) {
if err := os.MkdirAll(config.GlobalConfig.UploadDir, 0755); err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, "Failed to create upload directory"))
return
}
// Get file from form
file, err := ctx.FormFile("file")
if err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, "File is required"))
return
}
// Generate unique filename with timestamp
ext := filepath.Ext(file.Filename)
filename := fmt.Sprintf("%d%s", time.Now().UnixNano(), ext)
savePath := filepath.Join(config.GlobalConfig.UploadDir, filename)
// Save file (named savePath to avoid shadowing the filepath package)
if err := ctx.SaveUploadedFile(file, savePath); err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, "Failed to save file"))
return
}
// Return success with file path
ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, gin.H{"sign_url": savePath}))
}

12
api/v1/oss/router.go Normal file

@@ -0,0 +1,12 @@
package oss
import "github.com/gin-gonic/gin"
func SetupRouter(parent *gin.RouterGroup) {
router := parent.Group("oss")
{
router.POST("/signature", GetPostObjectSignature)
router.POST("/signature_url", GetSignatureURL)
router.POST("/file/upload", UploadFile)
}
}

61
api/v1/task/handler.go Normal file

@@ -0,0 +1,61 @@
package task
import (
"net/http"
"gitea.com/bitwsd/core/common/log"
"gitea.com/bitwsd/document_ai/internal/model/task"
"gitea.com/bitwsd/document_ai/internal/service"
"gitea.com/bitwsd/document_ai/pkg/common"
"github.com/gin-gonic/gin"
)
type TaskEndpoint struct {
taskService *service.TaskService
}
func NewTaskEndpoint() *TaskEndpoint {
return &TaskEndpoint{taskService: service.NewTaskService()}
}
func (h *TaskEndpoint) EvaluateTask(c *gin.Context) {
var req task.EvaluateTaskRequest
if err := c.ShouldBindJSON(&req); err != nil {
log.Error(c, "func", "EvaluateTask", "msg", "Invalid parameters", "error", err)
c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeParamError, "Invalid parameters"))
return
}
err := h.taskService.EvaluateTask(c, &req)
if err != nil {
c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeSystemError, "Failed to evaluate task"))
return
}
c.JSON(http.StatusOK, common.SuccessResponse(c, nil))
}
func (h *TaskEndpoint) GetTaskList(c *gin.Context) {
var req task.TaskListRequest
if err := c.ShouldBindQuery(&req); err != nil {
log.Error(c, "func", "GetTaskList", "msg", "Invalid parameters", "error", err)
c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeParamError, "Invalid parameters"))
return
}
if req.Page <= 0 {
req.Page = 1
}
if req.PageSize <= 0 {
req.PageSize = 10
}
resp, err := h.taskService.GetTaskList(c, &req)
if err != nil {
c.JSON(http.StatusOK, common.ErrorResponse(c, common.CodeSystemError, "Failed to get task list"))
return
}
c.JSON(http.StatusOK, common.SuccessResponse(c, resp))
}

11
api/v1/task/router.go Normal file

@@ -0,0 +1,11 @@
package task
import (
"github.com/gin-gonic/gin"
)
func SetupRouter(engine *gin.RouterGroup) {
endpoint := NewTaskEndpoint()
engine.POST("/task/evaluate", endpoint.EvaluateTask)
engine.GET("/task/list", endpoint.GetTaskList)
}

105
api/v1/user/handler.go Normal file

@@ -0,0 +1,105 @@
package user
import (
"net/http"
"gitea.com/bitwsd/core/common/log"
"gitea.com/bitwsd/document_ai/config"
model "gitea.com/bitwsd/document_ai/internal/model/user"
"gitea.com/bitwsd/document_ai/internal/service"
"gitea.com/bitwsd/document_ai/pkg/common"
"gitea.com/bitwsd/document_ai/pkg/constant"
"gitea.com/bitwsd/document_ai/pkg/jwt"
"github.com/gin-gonic/gin"
)
type UserEndpoint struct {
userService *service.UserService
}
func NewUserEndpoint() *UserEndpoint {
return &UserEndpoint{
userService: service.NewUserService(),
}
}
func (h *UserEndpoint) SendVerificationCode(ctx *gin.Context) {
req := model.SmsSendRequest{}
if err := ctx.ShouldBindJSON(&req); err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, common.CodeParamErrorMsg))
return
}
code, err := h.userService.GetSmsCode(ctx, req.Phone)
if err != nil {
log.Error(ctx, "func", "SendVerificationCode", "msg", "Failed to send verification code", "error", err)
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
return
}
ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.SmsSendResponse{Code: code}))
}
func (h *UserEndpoint) LoginByPhoneCode(ctx *gin.Context) {
req := model.PhoneLoginRequest{}
if err := ctx.ShouldBindJSON(&req); err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, common.CodeParamErrorMsg))
return
}
if req.Code == "" || req.Phone == "" {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeParamError, common.CodeParamErrorMsg))
return
}
if config.GlobalConfig.Server.IsDebug() {
uid := 1
token, err := jwt.CreateToken(jwt.User{UserId: int64(uid)})
if err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeUnauthorized, common.CodeUnauthorizedMsg))
return
}
ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.PhoneLoginResponse{Token: token}))
return
}
uid, err := h.userService.VerifySmsCode(ctx, req.Phone, req.Code)
if err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeUnauthorized, common.CodeSmsCodeErrorMsg))
return
}
token, err := jwt.CreateToken(jwt.User{UserId: uid})
if err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeUnauthorized, common.CodeUnauthorizedMsg))
return
}
ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.PhoneLoginResponse{Token: token}))
}
func (h *UserEndpoint) GetUserInfo(ctx *gin.Context) {
uid := ctx.GetInt64(constant.ContextUserID)
if uid == 0 {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeUnauthorized, common.CodeUnauthorizedMsg))
return
}
user, err := h.userService.GetUserInfo(ctx, uid)
if err != nil {
ctx.JSON(http.StatusOK, common.ErrorResponse(ctx, common.CodeSystemError, common.CodeSystemErrorMsg))
return
}
status := 0
if user.ID > 0 {
status = 1
}
ctx.JSON(http.StatusOK, common.SuccessResponse(ctx, model.UserInfoResponse{
Username: user.Username,
Phone: user.Phone,
Status: status,
}))
}

16
api/v1/user/router.go Normal file

@@ -0,0 +1,16 @@
package user
import (
"gitea.com/bitwsd/document_ai/pkg/common"
"github.com/gin-gonic/gin"
)
func SetupRouter(router *gin.RouterGroup) {
userEndpoint := NewUserEndpoint()
userRouter := router.Group("/user")
{
userRouter.POST("/get/sms", userEndpoint.SendVerificationCode)
userRouter.POST("/login/phone", userEndpoint.LoginByPhoneCode)
userRouter.GET("/info", common.GetAuthMiddleware(), userEndpoint.GetUserInfo)
}
}

77
config/config.go Normal file

@@ -0,0 +1,77 @@
package config
import (
"gitea.com/bitwsd/core/common/log"
"github.com/spf13/viper"
)
type Config struct {
Log log.LogConfig `mapstructure:"log"`
Server ServerConfig `mapstructure:"server"`
Database DatabaseConfig `mapstructure:"database"`
Redis RedisConfig `mapstructure:"redis"`
UploadDir string `mapstructure:"upload_dir"`
Limit LimitConfig `mapstructure:"limit"`
Aliyun AliyunConfig `mapstructure:"aliyun"`
}
type LimitConfig struct {
FormulaRecognition int `mapstructure:"formula_recognition"`
}
type ServerConfig struct {
Port int `mapstructure:"port"`
Mode string `mapstructure:"mode"`
}
func (c *ServerConfig) IsDebug() bool {
return c.Mode == "debug"
}
type RedisConfig struct {
Addr string `mapstructure:"addr"`
Password string `mapstructure:"password"`
DB int `mapstructure:"db"`
}
type DatabaseConfig struct {
Driver string `mapstructure:"driver"`
Host string `mapstructure:"host"`
Port int `mapstructure:"port"`
Username string `mapstructure:"username"`
Password string `mapstructure:"password"`
DBName string `mapstructure:"dbname"`
MaxIdle int `mapstructure:"max_idle"`
MaxOpen int `mapstructure:"max_open"`
}
type OSSConfig struct {
Endpoint string `mapstructure:"endpoint"` // public (internet) endpoint
InnerEndpoint string `mapstructure:"inner_endpoint"` // internal (intranet) endpoint
AccessKeyID string `mapstructure:"access_key_id"`
AccessKeySecret string `mapstructure:"access_key_secret"`
BucketName string `mapstructure:"bucket_name"`
}
type SmsConfig struct {
AccessKeyID string `mapstructure:"access_key_id"`
AccessKeySecret string `mapstructure:"access_key_secret"`
SignName string `mapstructure:"sign_name"`
TemplateCode string `mapstructure:"template_code"`
}
type AliyunConfig struct {
Sms SmsConfig `mapstructure:"sms"`
OSS OSSConfig `mapstructure:"oss"`
}
var GlobalConfig Config
func Init(configPath string) error {
viper.SetConfigFile(configPath)
if err := viper.ReadInConfig(); err != nil {
return err
}
return viper.Unmarshal(&GlobalConfig)
}

46
config/config_dev.yaml Normal file

@@ -0,0 +1,46 @@
server:
port: 8024
mode: debug # debug/release
database:
driver: mysql
host: 182.92.150.161
port: 3006
username: root
password: yoge@coder%%%123321!
dbname: doc_ai
max_idle: 10
max_open: 100
redis:
addr: 182.92.150.161:6379
password: yoge@123321!
db: 0
limit:
formula_recognition: 3
log:
appName: document_ai
level: info # debug, info, warn, error
format: console # json, console
outputPath: ./logs/app.log # log file path
maxSize: 2 # maximum size of a single log file, in MB
maxAge: 1 # number of days to retain logs
maxBackups: 1 # maximum number of old log files to keep
compress: false # whether to compress old logs
aliyun:
sms:
access_key_id: "LTAI5tB9ur4ExCF4dYPq7hLz"
access_key_secret: "91HulOdaCpwhfBesrUDiKYvyi0Qkx1"
sign_name: "北京比特智源科技"
template_code: "SMS_291510729"
oss:
endpoint: oss-cn-beijing.aliyuncs.com
inner_endpoint: oss-cn-beijing-internal.aliyuncs.com
access_key_id: LTAI5tKogxeiBb4gJGWEePWN
access_key_secret: l4oCxtt5iLSQ1DAs40guTzKUfrxXwq
bucket_name: bitwsd-doc-ai

45
config/config_prod.yaml Normal file

@@ -0,0 +1,45 @@
server:
port: 8024
mode: release # debug/release
database:
driver: mysql
host: rm-bp1uh3e1qop18gz4wto.mysql.rds.aliyuncs.com
port: 3306
username: root
password: bitwsdttestESAadb12@3341
dbname: doc_ai
max_idle: 10
max_open: 100
redis:
addr: 172.31.32.138:6379
password: bitwsd@8912WE!
db: 0
limit:
formula_recognition: 2
log:
appName: document_ai
level: info # debug, info, warn, error
format: json # json, console
outputPath: /app/logs/app.log # log file path
maxSize: 10 # maximum size of a single log file, in MB
maxAge: 60 # number of days to retain logs
maxBackups: 100 # maximum number of old log files to keep
compress: false # whether to compress old logs
aliyun:
sms:
access_key_id: "LTAI5tB9ur4ExCF4dYPq7hLz"
access_key_secret: "91HulOdaCpwhfBesrUDiKYvyi0Qkx1"
sign_name: "北京比特智源科技"
template_code: "SMS_291510729"
oss:
endpoint: oss-cn-beijing.aliyuncs.com
inner_endpoint: oss-cn-beijing-internal.aliyuncs.com
access_key_id: LTAI5tKogxeiBb4gJGWEePWN
access_key_secret: l4oCxtt5iLSQ1DAs40guTzKUfrxXwq
bucket_name: bitwsd-doc-ai

36
docker-compose.yml Normal file

@@ -0,0 +1,36 @@
version: '3.8'
services:
mysql:
image: mysql:8.0
container_name: mysql
environment:
MYSQL_ROOT_PASSWORD: 123456 # root user password
MYSQL_DATABASE: document_ai # default database name
MYSQL_USER: bitwsd_document # database username
MYSQL_PASSWORD: 123456 # database user password
ports:
- "3306:3306" # map host port 3306 to container port 3306
volumes:
- mysql_data:/var/lib/mysql # persist MySQL data
networks:
- backend
restart: always
redis:
image: redis:latest
container_name: redis
ports:
- "6379:6379" # map host port 6379 to container port 6379
networks:
- backend
restart: always
volumes:
mysql_data:
# persistent MySQL data volume
driver: local
networks:
backend:
driver: bridge

81
go.mod Normal file

@@ -0,0 +1,81 @@
module gitea.com/bitwsd/document_ai
go 1.20
require (
gitea.com/bitwsd/core v0.0.0-20241128075635-8d72a929b914
github.com/alibabacloud-go/darabonba-openapi v0.2.1
github.com/alibabacloud-go/dysmsapi-20170525/v2 v2.0.18
github.com/alibabacloud-go/tea v1.1.19
github.com/alibabacloud-go/tea-utils v1.4.5
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
github.com/dgrijalva/jwt-go v3.2.0+incompatible
github.com/gin-gonic/gin v1.10.0
github.com/google/uuid v1.6.0
github.com/redis/go-redis/v9 v9.7.0
github.com/spf13/viper v1.19.0
gorm.io/driver/mysql v1.5.7
gorm.io/gorm v1.25.12
)
require github.com/go-sql-driver/mysql v1.7.0 // indirect
require (
github.com/alibabacloud-go/alibabacloud-gateway-spi v0.0.4 // indirect
github.com/alibabacloud-go/debug v0.0.0-20190504072949-9472017b5c68 // indirect
github.com/alibabacloud-go/endpoint-util v1.1.0 // indirect
github.com/alibabacloud-go/openapi-util v0.1.0 // indirect
github.com/alibabacloud-go/tea-xml v1.1.2 // indirect
github.com/aliyun/credentials-go v1.1.2 // indirect
github.com/bytedance/sonic v1.11.6 // indirect
github.com/bytedance/sonic/loader v0.1.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/clbanning/mxj/v2 v2.5.6 // indirect
github.com/cloudwego/base64x v0.1.4 // indirect
github.com/cloudwego/iasm v0.2.0 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.3 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.20.0 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.7 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/rs/zerolog v1.33.0 // indirect
github.com/sagikazarmark/locafero v0.4.0 // indirect
github.com/sagikazarmark/slog-shim v0.1.0 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.11.0 // indirect
github.com/spf13/cast v1.6.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tjfoc/gmsm v1.3.2 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.9.0 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/crypto v0.23.0 // indirect
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.15.0 // indirect
golang.org/x/time v0.5.0 // indirect
google.golang.org/protobuf v1.34.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

256
go.sum Normal file

@@ -0,0 +1,256 @@
gitea.com/bitwsd/core v0.0.0-20241128075635-8d72a929b914 h1:3aRCeiuq/PWMr2yjEN9Y5NusfmpdMKiO4i/5tM5qc34=
gitea.com/bitwsd/core v0.0.0-20241128075635-8d72a929b914/go.mod h1:hbEUo3t/AFGCnQbxwdG4oiw2IHdlRgK02cqd0yicP1Y=
github.com/alibabacloud-go/alibabacloud-gateway-spi v0.0.4 h1:iC9YFYKDGEy3n/FtqJnOkZsene9olVspKmkX5A2YBEo=
github.com/alibabacloud-go/alibabacloud-gateway-spi v0.0.4/go.mod h1:sCavSAvdzOjul4cEqeVtvlSaSScfNsTQ+46HwlTL1hc=
github.com/alibabacloud-go/darabonba-openapi v0.1.18/go.mod h1:PB4HffMhJVmAgNKNq3wYbTUlFvPgxJpTzd1F5pTuUsc=
github.com/alibabacloud-go/darabonba-openapi v0.2.1 h1:WyzxxKvhdVDlwpAMOHgAiCJ+NXa6g5ZWPFEzaK/ewwY=
github.com/alibabacloud-go/darabonba-openapi v0.2.1/go.mod h1:zXOqLbpIqq543oioL9IuuZYOQgHQ5B8/n5OPrnko8aY=
github.com/alibabacloud-go/darabonba-string v1.0.0/go.mod h1:93cTfV3vuPhhEwGGpKKqhVW4jLe7tDpo3LUM0i0g6mA=
github.com/alibabacloud-go/debug v0.0.0-20190504072949-9472017b5c68 h1:NqugFkGxx1TXSh/pBcU00Y6bljgDPaFdh5MUSeJ7e50=
github.com/alibabacloud-go/debug v0.0.0-20190504072949-9472017b5c68/go.mod h1:6pb/Qy8c+lqua8cFpEy7g39NRRqOWc3rOwAy8m5Y2BY=
github.com/alibabacloud-go/dysmsapi-20170525/v2 v2.0.18 h1:hfZA4cgIl6frNdsRmAyj8sn9J1bihQpYbzIVv2T/+Cs=
github.com/alibabacloud-go/dysmsapi-20170525/v2 v2.0.18/go.mod h1:di54xjBFHvKiQQo7st3TUmiMy0ywne5TOHup786Rhes=
github.com/alibabacloud-go/endpoint-util v1.1.0 h1:r/4D3VSw888XGaeNpP994zDUaxdgTSHBbVfZlzf6b5Q=
github.com/alibabacloud-go/endpoint-util v1.1.0/go.mod h1:O5FuCALmCKs2Ff7JFJMudHs0I5EBgecXXxZRyswlEjE=
github.com/alibabacloud-go/openapi-util v0.0.11/go.mod h1:sQuElr4ywwFRlCCberQwKRFhRzIyG4QTP/P4y1CJ6Ws=
github.com/alibabacloud-go/openapi-util v0.1.0 h1:0z75cIULkDrdEhkLWgi9tnLe+KhAFE/r5Pb3312/eAY=
github.com/alibabacloud-go/openapi-util v0.1.0/go.mod h1:sQuElr4ywwFRlCCberQwKRFhRzIyG4QTP/P4y1CJ6Ws=
github.com/alibabacloud-go/tea v1.1.0/go.mod h1:IkGyUSX4Ba1V+k4pCtJUc6jDpZLFph9QMy2VUPTwukg=
github.com/alibabacloud-go/tea v1.1.7/go.mod h1:/tmnEaQMyb4Ky1/5D+SE1BAsa5zj/KeGOFfwYm3N/p4=
github.com/alibabacloud-go/tea v1.1.8/go.mod h1:/tmnEaQMyb4Ky1/5D+SE1BAsa5zj/KeGOFfwYm3N/p4=
github.com/alibabacloud-go/tea v1.1.11/go.mod h1:/tmnEaQMyb4Ky1/5D+SE1BAsa5zj/KeGOFfwYm3N/p4=
github.com/alibabacloud-go/tea v1.1.17/go.mod h1:nXxjm6CIFkBhwW4FQkNrolwbfon8Svy6cujmKFUq98A=
github.com/alibabacloud-go/tea v1.1.19 h1:Xroq0M+pr0mC834Djj3Fl4ZA8+GGoA0i7aWse1vmgf4=
github.com/alibabacloud-go/tea v1.1.19/go.mod h1:nXxjm6CIFkBhwW4FQkNrolwbfon8Svy6cujmKFUq98A=
github.com/alibabacloud-go/tea-utils v1.3.1/go.mod h1:EI/o33aBfj3hETm4RLiAxF/ThQdSngxrpF8rKUDJjPE=
github.com/alibabacloud-go/tea-utils v1.4.3/go.mod h1:KNcT0oXlZZxOXINnZBs6YvgOd5aYp9U67G+E3R8fcQw=
github.com/alibabacloud-go/tea-utils v1.4.5 h1:h0/6Xd2f3bPE4XHTvkpjwxowIwRCJAJOqY6Eq8f3zfA=
github.com/alibabacloud-go/tea-utils v1.4.5/go.mod h1:KNcT0oXlZZxOXINnZBs6YvgOd5aYp9U67G+E3R8fcQw=
github.com/alibabacloud-go/tea-xml v1.1.2 h1:oLxa7JUXm2EDFzMg+7oRsYc+kutgCVwm+bZlhhmvW5M=
github.com/alibabacloud-go/tea-xml v1.1.2/go.mod h1:Rq08vgCcCAjHyRi/M7xlHKUykZCEtyBy9+DPF6GgEu8=
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible h1:8psS8a+wKfiLt1iVDX79F7Y6wUM49Lcha2FMXt4UM8g=
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
github.com/aliyun/credentials-go v1.1.2 h1:qU1vwGIBb3UJ8BwunHDRFtAhS6jnQLnde/yk0+Ih2GY=
github.com/aliyun/credentials-go v1.1.2/go.mod h1:ozcZaMR5kLM7pwtCMEpVmQ242suV6qTJya2bDq4X1Tw=
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc0=
github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4=
github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM=
github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/clbanning/mxj/v2 v2.5.5/go.mod h1:hNiWqW14h+kc+MdF9C6/YoRfjEJoR3ou6tn/Qo+ve2s=
github.com/clbanning/mxj/v2 v2.5.6 h1:Jm4VaCI/+Ug5Q57IzEoZbwx4iQFA6wkXv72juUSeK+g=
github.com/clbanning/mxj/v2 v2.5.6/go.mod h1:hNiWqW14h+kc+MdF9C6/YoRfjEJoR3ou6tn/Qo+ve2s=
github.com/cloudwego/base64x v0.1.4 h1:jwCgWpFanWmN8xoIUHa2rtzmkd5J2plF/dnLS6Xd/0Y=
github.com/cloudwego/base64x v0.1.4/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w=
github.com/cloudwego/iasm v0.2.0 h1:1KNIy1I1H9hNNFEEH3DVnI4UujN+1zjpuk6gwHLTssg=
github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU=
github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.20.0 h1:K9ISHbSaI0lyB2eWMPJo+kOS/FBExVwjEviJTixqxL8=
github.com/go-playground/validator/v10 v10.20.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
github.com/go-sql-driver/mysql v1.7.0 h1:ueSltNNllEqE3qcWBTD0iQd3IpL/6U+mJxLkazJ7YPc=
github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20200217142428-fce0ec30dd00/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM=
github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/redis/go-redis/v9 v9.7.0 h1:HhLSs+B6O021gwzl+locl0zEDnyNkxMtf/Z3NNBMa9E=
github.com/redis/go-redis/v9 v9.7.0/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.33.0 h1:1cU2KZkvPxNyfgEmhHAz/1A9Bz+llsdYzklWFzgp0r8=
github.com/rs/zerolog v1.33.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6keLGt6kNQ=
github.com/sagikazarmark/locafero v0.4.0/go.mod h1:Pe1W6UlPYUk/+wc/6KFhbORCfqzgYEpgQ3O5fPuL3H4=
github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/assertions v1.1.0/go.mod h1:tcbTF8ujkAEcZ8TElKY+i30BzYlVhC/LOxJk7iOWnoo=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
github.com/spf13/cast v1.6.0 h1:GEiTHELF+vaR5dhz3VqZfFSzZjYbgeKDpBxQVS4GYJ0=
github.com/spf13/cast v1.6.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.19.0 h1:RWq5SEjt8o25SROyN3z2OrDB9l7RPd3lwTWU8EcEdcI=
github.com/spf13/viper v1.19.0/go.mod h1:GQUN9bilAbhU/jgc1bKs99f/suXKeUMct8Adx5+Ntkg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tjfoc/gmsm v1.3.2 h1:7JVkAn5bvUJ7HtU08iW6UiD+UTmJTIToHCfeFzkcCxM=
github.com/tjfoc/gmsm v1.3.2/go.mod h1:HaUcFuY0auTiaHB9MHFGCPx5IaLhTUd2atbCFBQXn9w=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.2.12 h1:9LC83zGrHhuUA9l16C9AHXAqEV/2wBQ4nkvumAE65EE=
github.com/ugorji/go/codec v1.2.12/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.30/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/multierr v1.9.0 h1:7fIwc/ZtS0q++VgcfqFDxSBZVv/Xo49/SYnDFupUwlI=
go.uber.org/multierr v1.9.0/go.mod h1:X2jQV1h+kxSjClGpnseKVIxpmcjrj7MNnI0bnlfKTVQ=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc=
golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191219195013-becbf705a915/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200510223506-06a226fb4e37/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 h1:GoHiUyI/Tp2nVkLI2mCxVkOjsbSXD66ic0XW0js0R9g=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9/go.mod h1:S2oDrQGGwySpoQPVqRShND87VCbxmc6bL1Yd2oYrm6k=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200509044756-6aff5f38e54f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200509030707-2212a7e161a5/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg=
google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/ini.v1 v1.56.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gorm.io/driver/mysql v1.5.7 h1:MndhOPYOfEp2rHKgkZIhJ16eVUIRf2HmzgoPmh7FCWo=
gorm.io/driver/mysql v1.5.7/go.mod h1:sEtPWMiqiN1N1cMXoXmBbd8C6/l+TESwriotuRRpkDM=
gorm.io/gorm v1.25.7/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
gorm.io/gorm v1.25.12 h1:I0u8i2hWQItBq1WfE0o2+WuL9+8L21K9e2HHSTE/0f8=
gorm.io/gorm v1.25.12/go.mod h1:xh7N7RHfYlNc5EmcI/El95gXusucDrQnHXe0+CgWcLQ=
nullprogram.com/x/optparse v1.0.0/go.mod h1:KdyPE+Igbe0jQUrVfMqDMeJQIJZEuyV7pjYmp6pbG50=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=

View File

@@ -0,0 +1,69 @@
package formula
type CreateFormulaRecognitionRequest struct {
FileURL string `json:"file_url" binding:"required"` // oss file url
FileHash string `json:"file_hash" binding:"required"` // file hash
FileName string `json:"file_name" binding:"required"` // file name
TaskType string `json:"task_type" binding:"required,oneof=FORMULA"` // task type
}
type GetRecognitionStatusRequest struct {
TaskNo string `uri:"task_no" binding:"required"`
}
type AIEnhanceRecognitionRequest struct {
TaskNo string `json:"task_no" binding:"required"`
}
type VLFormulaRequest struct {
Model string `json:"model"`
Stream bool `json:"stream"`
MaxTokens int `json:"max_tokens"`
Temperature float64 `json:"temperature"`
TopP float64 `json:"top_p"`
TopK int `json:"top_k"`
N int `json:"n"`
FrequencyPenalty float64 `json:"frequency_penalty"`
Messages []Message `json:"messages"`
}
type Message struct {
Role string `json:"role"`
Content []Content `json:"content"`
}
type Content struct {
Text string `json:"text"`
Type string `json:"type"`
ImageURL Image `json:"image_url"`
}
type Image struct {
Detail string `json:"detail"`
URL string `json:"url"`
}
type VLFormulaResponse struct {
ID string `json:"id"`
Object string `json:"object"`
Created int64 `json:"created"`
Model string `json:"model"`
Choices []Choice `json:"choices"`
Usage Usage `json:"usage"`
SystemFingerprint string `json:"system_fingerprint"`
}
type Choice struct {
Index int `json:"index"`
Message struct {
Content string `json:"content"`
Role string `json:"role"`
} `json:"message"`
FinishReason string `json:"finish_reason"`
}
type Usage struct {
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
}

View File

@@ -0,0 +1,13 @@
package formula
type CreateTaskResponse struct {
TaskNo string `json:"task_no"`
Status int `json:"status"`
}
type GetFormulaTaskResponse struct {
TaskNo string `json:"task_no"`
Status int `json:"status"`
Count int `json:"count"`
Latex string `json:"latex"`
}

View File

@@ -0,0 +1,36 @@
package task
type EvaluateTaskRequest struct {
TaskNo string `json:"task_no" binding:"required"` // task number
Satisfied int `json:"satisfied"` // 0: unsatisfied, 1: satisfied
Suggestion []string `json:"suggestion"` // suggestions, e.g. 1. formula cannot be rendered 2. formula rendered incorrectly
Feedback string `json:"feedback"` // free-form feedback
}
type TaskListRequest struct {
TaskType string `json:"task_type" form:"task_type" binding:"required"`
Page int `json:"page" form:"page"`
PageSize int `json:"page_size" form:"page_size"`
}
type PdfInfo struct {
PageCount int `json:"page_count"`
PageWidth int `json:"page_width"`
PageHeight int `json:"page_height"`
}
type TaskListDTO struct {
TaskID string `json:"task_id"`
FileName string `json:"file_name"`
Status string `json:"status"`
Path string `json:"path"`
TaskType string `json:"task_type"`
CreatedAt string `json:"created_at"`
PdfInfo PdfInfo `json:"pdf_info"`
}
type TaskListResponse struct {
TaskList []*TaskListDTO `json:"task_list"`
HasMore bool `json:"has_more"`
NextPage int `json:"next_page"`
}

View File

@@ -0,0 +1,24 @@
package model
type SmsSendRequest struct {
Phone string `json:"phone" binding:"required"`
}
type SmsSendResponse struct {
Code string `json:"code"`
}
type PhoneLoginRequest struct {
Phone string `json:"phone" binding:"required"`
Code string `json:"code" binding:"required"`
}
type PhoneLoginResponse struct {
Token string `json:"token"`
}
type UserInfoResponse struct {
Username string `json:"username"`
Phone string `json:"phone"`
Status int `json:"status"` // 0: not login, 1: login
}

View File

@@ -0,0 +1,540 @@
package service
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"net/http"
"strings"
"time"
"gitea.com/bitwsd/core/common/log"
"gitea.com/bitwsd/document_ai/config"
"gitea.com/bitwsd/document_ai/internal/model/formula"
"gitea.com/bitwsd/document_ai/internal/storage/cache"
"gitea.com/bitwsd/document_ai/internal/storage/dao"
"gitea.com/bitwsd/document_ai/pkg/common"
"gitea.com/bitwsd/document_ai/pkg/constant"
"gitea.com/bitwsd/document_ai/pkg/httpclient"
"gitea.com/bitwsd/document_ai/pkg/oss"
"gitea.com/bitwsd/document_ai/pkg/utils"
"gorm.io/gorm"
)
type RecognitionService struct {
db *gorm.DB
queueLimit chan struct{}
stopChan chan struct{}
httpClient *httpclient.Client
}
func NewRecognitionService() *RecognitionService {
s := &RecognitionService{
db: dao.DB,
queueLimit: make(chan struct{}, config.GlobalConfig.Limit.FormulaRecognition),
stopChan: make(chan struct{}),
httpClient: httpclient.NewClient(nil), // 使用默认配置
}
// 服务启动时就开始处理队列
utils.SafeGo(func() {
lock, err := cache.GetDistributedLock(context.Background())
if err != nil {
log.Error(context.Background(), "func", "NewRecognitionService", "msg", "获取分布式锁失败", "error", err)
return
}
if !lock {
log.Error(context.Background(), "func", "NewRecognitionService", "msg", "获取分布式锁失败")
return
}
s.processFormulaQueue(context.Background())
})
return s
}
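// AIEnhanceRecognition re-runs a completed formula task through the VLM model in the background; usage is capped per client IP per day (constant.VLMFormulaCount).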
func (s *RecognitionService) AIEnhanceRecognition(ctx context.Context, req *formula.AIEnhanceRecognitionRequest) (*dao.RecognitionTask, error) {
count, err := cache.GetVLMFormulaCount(ctx, common.GetIPFromContext(ctx))
if err != nil {
log.Error(ctx, "func", "AIEnhanceRecognition", "msg", "获取VLM公式识别次数失败", "error", err)
return nil, common.NewError(common.CodeSystemError, "系统错误", err)
}
if count >= constant.VLMFormulaCount {
return nil, common.NewError(common.CodeForbidden, "今日VLM公式识别次数已达上限,请明天再试!", nil)
}
taskDao := dao.NewRecognitionTaskDao()
task, err := taskDao.GetByTaskNo(dao.DB.WithContext(ctx), req.TaskNo)
if err != nil {
log.Error(ctx, "func", "AIEnhanceRecognition", "msg", "获取任务失败", "error", err)
return nil, common.NewError(common.CodeDBError, "获取任务失败", err)
}
if task == nil {
log.Error(ctx, "func", "AIEnhanceRecognition", "msg", "任务不存在", "task_no", req.TaskNo)
return nil, common.NewError(common.CodeNotFound, "任务不存在", err)
}
if task.Status == dao.TaskStatusProcessing || task.Status == dao.TaskStatusPending {
log.Error(ctx, "func", "AIEnhanceRecognition", "msg", "任务未完成", "task_no", req.TaskNo)
return nil, common.NewError(common.CodeInvalidStatus, "任务未完成", err)
}
err = taskDao.Update(dao.DB.WithContext(ctx), map[string]interface{}{"id": task.ID}, map[string]interface{}{"status": dao.TaskStatusProcessing})
if err != nil {
log.Error(ctx, "func", "AIEnhanceRecognition", "msg", "更新任务状态失败", "error", err)
return nil, common.NewError(common.CodeDBError, "更新任务状态失败", err)
}
utils.SafeGo(func() {
s.processVLFormula(context.Background(), task.ID)
_, err := cache.IncrVLMFormulaCount(context.Background(), common.GetIPFromContext(ctx))
if err != nil {
log.Error(ctx, "func", "AIEnhanceRecognition", "msg", "增加VLM公式识别次数失败", "error", err)
}
})
return task, nil
}
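// CreateRecognitionTask persists a new recognition task and enqueues it for asynchronous processing by the formula worker.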
func (s *RecognitionService) CreateRecognitionTask(ctx context.Context, req *formula.CreateFormulaRecognitionRequest) (*dao.RecognitionTask, error) {
sess := dao.DB.WithContext(ctx)
taskDao := dao.NewRecognitionTaskDao()
task := &dao.RecognitionTask{
TaskUUID: utils.NewUUID(),
TaskType: dao.TaskType(req.TaskType),
Status: dao.TaskStatusPending,
FileURL: req.FileURL,
FileName: req.FileName,
FileHash: req.FileHash,
IP: common.GetIPFromContext(ctx),
}
if err := taskDao.Create(sess, task); err != nil {
log.Error(ctx, "func", "CreateRecognitionTask", "msg", "创建任务失败", "error", err)
return nil, common.NewError(common.CodeDBError, "创建任务失败", err)
}
err := s.handleFormulaRecognition(ctx, task.ID)
if err != nil {
log.Error(ctx, "func", "CreateRecognitionTask", "msg", "处理任务失败", "error", err)
return nil, common.NewError(common.CodeSystemError, "处理任务失败", err)
}
return task, nil
}
func (s *RecognitionService) GetFormualTask(ctx context.Context, taskNo string) (*formula.GetFormulaTaskResponse, error) {
taskDao := dao.NewRecognitionTaskDao()
resultDao := dao.NewRecognitionResultDao()
sess := dao.DB.WithContext(ctx)
count, err := cache.GetFormulaTaskCount(ctx)
if err != nil {
log.Error(ctx, "func", "GetFormualTask", "msg", "获取任务数量失败", "error", err)
}
if count > int64(config.GlobalConfig.Limit.FormulaRecognition) {
return &formula.GetFormulaTaskResponse{TaskNo: taskNo, Status: int(dao.TaskStatusPending), Count: int(count)}, nil
}
task, err := taskDao.GetByTaskNo(sess, taskNo)
if err != nil {
if err == gorm.ErrRecordNotFound {
log.Info(ctx, "func", "GetFormualTask", "msg", "任务不存在", "task_no", taskNo)
return nil, common.NewError(common.CodeNotFound, "任务不存在", err)
}
log.Error(ctx, "func", "GetFormualTask", "msg", "查询任务失败", "error", err, "task_no", taskNo)
return nil, common.NewError(common.CodeDBError, "查询任务失败", err)
}
if task.Status != dao.TaskStatusCompleted {
return &formula.GetFormulaTaskResponse{TaskNo: taskNo, Status: int(task.Status)}, nil
}
taskRet, err := resultDao.GetByTaskID(sess, task.ID)
if err != nil {
if err == gorm.ErrRecordNotFound {
log.Info(ctx, "func", "GetFormualTask", "msg", "任务结果不存在", "task_no", taskNo)
return nil, common.NewError(common.CodeNotFound, "任务结果不存在", err)
}
log.Error(ctx, "func", "GetFormualTask", "msg", "查询任务结果失败", "error", err, "task_no", taskNo)
return nil, common.NewError(common.CodeDBError, "查询任务结果失败", err)
}
latex := taskRet.NewContentCodec().GetContent().(string)
return &formula.GetFormulaTaskResponse{TaskNo: taskNo, Latex: latex, Status: int(task.Status)}, nil
}
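// handleFormulaRecognition pushes the task ID onto the Redis formula queue; the background worker picks it up from there.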
func (s *RecognitionService) handleFormulaRecognition(ctx context.Context, taskID int64) error {
// 简化为只负责将任务加入队列
_, err := cache.PushFormulaTask(ctx, taskID)
if err != nil {
log.Error(ctx, "func", "handleFormulaRecognition", "msg", "增加任务计数失败", "error", err)
return err
}
log.Info(ctx, "func", "handleFormulaRecognition", "msg", "增加任务计数成功", "task_id", taskID)
return nil
}
// Stop 用于优雅关闭服务
func (s *RecognitionService) Stop() {
close(s.stopChan)
}
func (s *RecognitionService) processVLFormula(ctx context.Context, taskID int64) {
task, err := dao.NewRecognitionTaskDao().GetTaskByID(dao.DB.WithContext(ctx), taskID)
if err != nil {
log.Error(ctx, "func", "processVLFormulaQueue", "msg", "获取任务失败", "error", err)
return
}
if task == nil {
log.Error(ctx, "func", "processVLFormulaQueue", "msg", "任务不存在", "task_id", taskID)
return
}
ctx = context.WithValue(ctx, utils.RequestIDKey, task.TaskUUID)
log.Info(ctx, "func", "processVLFormulaQueue", "msg", "获取任务成功", "task_id", taskID)
// 处理具体任务
if err := s.processVLFormulaTask(ctx, taskID, task.FileURL); err != nil {
log.Error(ctx, "func", "processVLFormulaQueue", "msg", "处理任务失败", "error", err)
return
}
log.Info(ctx, "func", "processVLFormulaQueue", "msg", "处理任务成功", "task_id", taskID)
}
func (s *RecognitionService) processFormulaTask(ctx context.Context, taskID int64, fileURL string) (err error) {
// 为整个任务处理添加超时控制
ctx, cancel := context.WithTimeout(ctx, 45*time.Second)
defer cancel()
tx := dao.DB.Begin()
var (
taskDao = dao.NewRecognitionTaskDao()
resultDao = dao.NewRecognitionResultDao()
)
isSuccess := false
defer func() {
if !isSuccess {
tx.Rollback()
status := dao.TaskStatusFailed
remark := "任务处理失败"
if ctx.Err() == context.DeadlineExceeded {
remark = "任务处理超时"
}
if err != nil {
remark = err.Error()
}
_ = taskDao.Update(dao.DB.WithContext(context.Background()), // 使用新的context避免已取消的context影响状态更新
map[string]interface{}{"id": taskID},
map[string]interface{}{
"status": status,
"completed_at": time.Now(),
"remark": remark,
})
return
}
_ = taskDao.Update(tx, map[string]interface{}{"id": taskID}, map[string]interface{}{"status": dao.TaskStatusCompleted, "completed_at": time.Now()})
tx.Commit()
}()
err = taskDao.Update(dao.DB.WithContext(ctx), map[string]interface{}{"id": taskID}, map[string]interface{}{"status": dao.TaskStatusProcessing})
if err != nil {
log.Error(ctx, "func", "processFormulaTask", "msg", "更新任务状态失败", "error", err)
}
// 下载图片文件
reader, err := oss.DownloadFile(ctx, fileURL)
if err != nil {
log.Error(ctx, "func", "processFormulaTask", "msg", "下载图片文件失败", "error", err)
return err
}
defer reader.Close()
// 读取图片数据
imageData, err := io.ReadAll(reader)
if err != nil {
log.Error(ctx, "func", "processFormulaTask", "msg", "读取图片数据失败", "error", err)
return err
}
downloadURL, err := oss.GetDownloadURL(ctx, fileURL)
if err != nil {
log.Error(ctx, "func", "processFormulaTask", "msg", "获取下载URL失败", "error", err)
return err
}
// 将图片转为base64编码
base64Image := base64.StdEncoding.EncodeToString(imageData)
// 创建JSON请求
requestData := map[string]string{
"image_base64": base64Image,
"img_url": downloadURL,
}
jsonData, err := json.Marshal(requestData)
if err != nil {
log.Error(ctx, "func", "processFormulaTask", "msg", "JSON编码失败", "error", err)
return err
}
// 设置Content-Type头为application/json
headers := map[string]string{"Content-Type": "application/json", utils.RequestIDHeaderKey: utils.GetRequestIDFromContext(ctx)}
// 发送请求时会使用带超时的context
resp, err := s.httpClient.RequestWithRetry(ctx, http.MethodPost, s.getURL(ctx), bytes.NewReader(jsonData), headers)
if err != nil {
if ctx.Err() == context.DeadlineExceeded {
log.Error(ctx, "func", "processFormulaTask", "msg", "请求超时")
return fmt.Errorf("request timeout")
}
log.Error(ctx, "func", "processFormulaTask", "msg", "请求失败", "error", err)
return err
}
defer resp.Body.Close()
log.Info(ctx, "func", "processFormulaTask", "msg", "请求成功", "resp", resp.Body)
body := &bytes.Buffer{}
if _, err = body.ReadFrom(resp.Body); err != nil {
log.Error(ctx, "func", "processFormulaTask", "msg", "读取响应体失败", "error", err)
return err
}
katex := utils.ToKatex(body.String())
content := &dao.FormulaRecognitionContent{Latex: katex}
b, _ := json.Marshal(content)
// Save recognition result
result := &dao.RecognitionResult{
TaskID: taskID,
TaskType: dao.TaskTypeFormula,
Content: b,
}
if err := resultDao.Create(tx, *result); err != nil {
log.Error(ctx, "func", "processFormulaTask", "msg", "保存任务结果失败", "error", err)
return err
}
isSuccess = true
return nil
}
func (s *RecognitionService) processVLFormulaTask(ctx context.Context, taskID int64, fileURL string) error {
isSuccess := false
defer func() {
if !isSuccess {
err := dao.NewRecognitionTaskDao().Update(dao.DB.WithContext(ctx), map[string]interface{}{"id": taskID}, map[string]interface{}{"status": dao.TaskStatusFailed})
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "更新任务状态失败", "error", err)
}
return
}
err := dao.NewRecognitionTaskDao().Update(dao.DB.WithContext(ctx), map[string]interface{}{"id": taskID}, map[string]interface{}{"status": dao.TaskStatusCompleted})
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "更新任务状态失败", "error", err)
}
}()
reader, err := oss.DownloadFile(ctx, fileURL)
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "获取签名URL失败", "error", err)
return err
}
defer reader.Close()
imageData, err := io.ReadAll(reader)
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "读取图片数据失败", "error", err)
return err
}
prompt := `
Please perform OCR on the image and output only LaTeX code.
Important instructions:
* "The image contains mathematical formulas, no plain text."
* "Preserve all layout, symbols, subscripts, summations, parentheses, etc., exactly as shown."
* "Use \[ ... \] or align environments to represent multiline math expressions."
* "Use adaptive symbols such as \left and \right where applicable."
* "Do not include any extra commentary, template answers, or unrelated equations."
* "Only output valid LaTeX code based on the actual content of the image, and not change the original mathematical expression."
* "The output result must be can render by better-react-mathjax."
`
base64Image := base64.StdEncoding.EncodeToString(imageData)
requestBody := formula.VLFormulaRequest{
Model: "Qwen/Qwen2.5-VL-32B-Instruct",
Stream: false,
MaxTokens: 512,
Temperature: 0.1,
TopP: 0.1,
TopK: 50,
FrequencyPenalty: 0.2,
N: 1,
Messages: []formula.Message{
{
Role: "user",
Content: []formula.Content{
{
Type: "text",
Text: prompt,
},
{
Type: "image_url",
ImageURL: formula.Image{
Detail: "auto",
URL: "data:image/jpeg;base64," + base64Image,
},
},
},
},
},
}
// 将请求体转换为JSON
jsonData, err := json.Marshal(requestBody)
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "JSON编码失败", "error", err)
return err
}
headers := map[string]string{
"Content-Type": "application/json",
"Authorization": utils.SiliconFlowToken,
}
resp, err := s.httpClient.RequestWithRetry(ctx, http.MethodPost, "https://api.siliconflow.cn/v1/chat/completions", bytes.NewReader(jsonData), headers)
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "请求VL服务失败", "error", err)
return err
}
defer resp.Body.Close()
// 解析响应
var response formula.VLFormulaResponse
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "解析响应失败", "error", err)
return err
}
// 提取LaTeX代码
var latex string
if len(response.Choices) > 0 {
if response.Choices[0].Message.Content != "" {
latex = strings.ReplaceAll(response.Choices[0].Message.Content, "\n", "")
latex = strings.ReplaceAll(latex, "```latex", "")
latex = strings.ReplaceAll(latex, "```", "")
// 规范化LaTeX代码移除不必要的空格
latex = strings.ReplaceAll(latex, " = ", "=")
latex = strings.ReplaceAll(latex, "\\left[ ", "\\left[")
latex = strings.TrimPrefix(latex, "\\[")
latex = strings.TrimSuffix(latex, "\\]")
}
}
resultDao := dao.NewRecognitionResultDao()
var formulaRes *dao.FormulaRecognitionContent
result, err := resultDao.GetByTaskID(dao.DB.WithContext(ctx), taskID)
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "获取任务结果失败", "error", err)
return err
}
if result == nil {
formulaRes = &dao.FormulaRecognitionContent{EnhanceLatex: latex}
b, err := formulaRes.Encode()
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "编码任务结果失败", "error", err)
return err
}
err = resultDao.Create(dao.DB.WithContext(ctx), dao.RecognitionResult{TaskID: taskID, TaskType: dao.TaskTypeFormula, Content: b})
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "创建任务结果失败", "error", err)
return err
}
} else {
formulaRes = result.NewContentCodec().(*dao.FormulaRecognitionContent)
err = formulaRes.Decode()
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "解码任务结果失败", "error", err)
return err
}
formulaRes.EnhanceLatex = latex
b, err := formulaRes.Encode()
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "编码任务结果失败", "error", err)
return err
}
err = resultDao.Update(dao.DB.WithContext(ctx), result.ID, map[string]interface{}{"content": b})
if err != nil {
log.Error(ctx, "func", "processVLFormulaTask", "msg", "更新任务结果失败", "error", err)
return err
}
}
isSuccess = true
return nil
}
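// processFormulaQueue is the background worker loop: it keeps consuming the Redis formula queue until Stop closes stopChan.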
func (s *RecognitionService) processFormulaQueue(ctx context.Context) {
for {
select {
case <-s.stopChan:
return
default:
s.processOneTask(ctx)
}
}
}
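// processOneTask acquires a slot from queueLimit, blocks on the Redis queue for the next task ID and runs formula recognition for it.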
func (s *RecognitionService) processOneTask(ctx context.Context) {
// 限制队列数量
s.queueLimit <- struct{}{}
defer func() { <-s.queueLimit }()
taskID, err := cache.PopFormulaTask(ctx)
if err != nil {
log.Error(ctx, "func", "processFormulaQueue", "msg", "获取任务失败", "error", err)
return
}
task, err := dao.NewRecognitionTaskDao().GetTaskByID(dao.DB.WithContext(ctx), taskID)
if err != nil {
log.Error(ctx, "func", "processFormulaQueue", "msg", "获取任务失败", "error", err)
return
}
if task == nil {
log.Error(ctx, "func", "processFormulaQueue", "msg", "任务不存在", "task_id", taskID)
return
}
ctx = context.WithValue(ctx, utils.RequestIDKey, task.TaskUUID)
log.Info(ctx, "func", "processFormulaQueue", "msg", "获取任务成功", "task_id", taskID)
// 处理具体任务
if err := s.processFormulaTask(ctx, taskID, task.FileURL); err != nil {
log.Error(ctx, "func", "processFormulaQueue", "msg", "处理任务失败", "error", err)
return
}
log.Info(ctx, "func", "processFormulaQueue", "msg", "处理任务成功", "task_id", taskID)
}
func (s *RecognitionService) getURL(ctx context.Context) string {
// Currently pinned to the 8045 endpoint; the round-robin selection below is disabled.
return "http://cloud.srcstar.com:8045/formula/predict"
// count, err := cache.IncrURLCount(ctx)
// if err != nil {
// log.Error(ctx, "func", "getURL", "msg", "获取URL计数失败", "error", err)
// return "http://cloud.srcstar.com:8026/formula/predict"
// }
// if count%2 == 0 {
// return "http://cloud.srcstar.com:8026/formula/predict"
// }
// return "https://cloud.texpixel.com:1080/formula/predict"
}

78
internal/service/task.go Normal file
View File

@@ -0,0 +1,78 @@
package service
import (
"context"
"errors"
"strings"
"gitea.com/bitwsd/core/common/log"
"gitea.com/bitwsd/document_ai/internal/model/task"
"gitea.com/bitwsd/document_ai/internal/storage/dao"
"gorm.io/gorm"
)
type TaskService struct {
db *gorm.DB
}
func NewTaskService() *TaskService {
return &TaskService{dao.DB}
}
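// EvaluateTask records a user's satisfaction feedback for a completed recognition task.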
func (svc *TaskService) EvaluateTask(ctx context.Context, req *task.EvaluateTaskRequest) error {
taskDao := dao.NewRecognitionTaskDao()
task, err := taskDao.GetByTaskNo(svc.db.WithContext(ctx), req.TaskNo)
if err != nil {
log.Error(ctx, "func", "EvaluateTask", "msg", "get task by task no failed", "error", err)
return err
}
if task == nil {
log.Error(ctx, "func", "EvaluateTask", "msg", "task not found")
return errors.New("task not found")
}
if task.Status != dao.TaskStatusCompleted {
log.Error(ctx, "func", "EvaluateTask", "msg", "task not finished")
return errors.New("task not finished")
}
evaluateTaskDao := dao.NewEvaluateTaskDao()
evaluateTask := &dao.EvaluateTask{
TaskID: task.ID,
Satisfied: req.Satisfied,
Feedback: req.Feedback,
Comment: strings.Join(req.Suggestion, ","),
}
err = evaluateTaskDao.Create(svc.db.WithContext(ctx), evaluateTask)
if err != nil {
log.Error(ctx, "func", "EvaluateTask", "msg", "create evaluate task failed", "error", err)
return err
}
return nil
}
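// GetTaskList returns one page of recognition tasks of the given type, newest first.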
func (svc *TaskService) GetTaskList(ctx context.Context, req *task.TaskListRequest) (*task.TaskListResponse, error) {
taskDao := dao.NewRecognitionTaskDao()
tasks, err := taskDao.GetTaskList(svc.db.WithContext(ctx), dao.TaskType(req.TaskType), req.Page, req.PageSize)
if err != nil {
log.Error(ctx, "func", "GetTaskList", "msg", "get task list failed", "error", err)
return nil, err
}
resp := &task.TaskListResponse{
TaskList: make([]*task.TaskListDTO, 0, len(tasks)),
HasMore: false,
NextPage: 0,
}
for _, item := range tasks {
resp.TaskList = append(resp.TaskList, &task.TaskListDTO{
TaskID: item.TaskUUID,
FileName: item.FileName,
Status: item.Status.String(),
Path: item.FileURL,
TaskType: item.TaskType.String(),
CreatedAt: item.CreatedAt.Format("2006-01-02 15:04:05"),
})
}
if req.PageSize > 0 && len(tasks) == req.PageSize {
resp.HasMore = true
resp.NextPage = req.Page + 1
}
return resp, nil
}

View File

@@ -0,0 +1,109 @@
package service
import (
"context"
"errors"
"fmt"
"math/rand"
"gitea.com/bitwsd/core/common/log"
"gitea.com/bitwsd/document_ai/internal/storage/cache"
"gitea.com/bitwsd/document_ai/internal/storage/dao"
"gitea.com/bitwsd/document_ai/pkg/sms"
)
type UserService struct {
userDao *dao.UserDao
}
func NewUserService() *UserService {
return &UserService{
userDao: dao.NewUserDao(),
}
}
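// GetSmsCode enforces the per-phone send limit, creates the user on first contact, sends a 6-digit verification code via SMS and caches it for later verification.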
func (svc *UserService) GetSmsCode(ctx context.Context, phone string) (string, error) {
limit, err := cache.GetUserSendSmsLimit(ctx, phone)
if err != nil {
log.Error(ctx, "func", "GetSmsCode", "msg", "get user send sms limit error", "error", err)
return "", err
}
if limit >= cache.UserSendSmsLimitCount {
return "", errors.New("sms code send limit reached")
}
user, err := svc.userDao.GetByPhone(dao.DB.WithContext(ctx), phone)
if err != nil {
log.Error(ctx, "func", "GetSmsCode", "msg", "get user error", "error", err)
return "", err
}
if user == nil {
user = &dao.User{Phone: phone}
err = svc.userDao.Create(dao.DB.WithContext(ctx), user)
if err != nil {
log.Error(ctx, "func", "GetSmsCode", "msg", "create user error", "error", err)
return "", err
}
}
code := fmt.Sprintf("%06d", rand.Intn(1000000))
err = sms.SendMessage(&sms.SendSmsRequest{PhoneNumbers: phone, SignName: sms.Signature, TemplateCode: sms.TemplateCode, TemplateParam: fmt.Sprintf(sms.TemplateParam, code)})
if err != nil {
log.Error(ctx, "func", "GetSmsCode", "msg", "send message error", "error", err)
return "", err
}
cacheErr := cache.SetUserSmsCode(ctx, phone, code)
if cacheErr != nil {
log.Error(ctx, "func", "GetSmsCode", "msg", "set user sms code error", "error", cacheErr)
}
cacheErr = cache.SetUserSendSmsLimit(ctx, phone)
if cacheErr != nil {
log.Error(ctx, "func", "GetSmsCode", "msg", "set user send sms limit error", "error", cacheErr)
}
return code, nil
}
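// VerifySmsCode compares the submitted code against the cached one and, on success, deletes the cached code and returns the user ID.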
func (svc *UserService) VerifySmsCode(ctx context.Context, phone, code string) (uid int64, err error) {
user, err := svc.userDao.GetByPhone(dao.DB.WithContext(ctx), phone)
if err != nil {
log.Error(ctx, "func", "VerifySmsCode", "msg", "get user error", "error", err, "phone", phone)
return 0, err
}
if user == nil {
log.Error(ctx, "func", "VerifySmsCode", "msg", "user not found", "phone", phone)
return 0, errors.New("user not found")
}
storedCode, err := cache.GetUserSmsCode(ctx, phone)
if err != nil {
log.Error(ctx, "func", "VerifySmsCode", "msg", "get user sms code error", "error", err)
return 0, err
}
if storedCode != code {
log.Error(ctx, "func", "VerifySmsCode", "msg", "invalid sms code", "phone", phone, "code", code, "storedCode", storedCode)
return 0, errors.New("invalid sms code")
}
cacheErr := cache.DeleteUserSmsCode(ctx, phone)
if cacheErr != nil {
log.Error(ctx, "func", "VerifySmsCode", "msg", "delete user sms code error", "error", cacheErr)
}
return user.ID, nil
}
func (svc *UserService) GetUserInfo(ctx context.Context, uid int64) (*dao.User, error) {
user, err := svc.userDao.GetByID(dao.DB.WithContext(ctx), uid)
if err != nil {
log.Error(ctx, "func", "GetUserInfo", "msg", "get user error", "error", err)
return nil, err
}
if user == nil {
log.Warn(ctx, "func", "GetUserInfo", "msg", "user not found", "uid", uid)
return &dao.User{}, nil
}
return user, nil
}

33
internal/storage/cache/engine.go vendored Normal file
View File

@@ -0,0 +1,33 @@
package cache
import (
"context"
"fmt"
"time"
"gitea.com/bitwsd/document_ai/config"
"github.com/redis/go-redis/v9"
)
var RedisClient *redis.Client
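// InitRedisClient builds the global Redis client from the given config and panics if the initial Ping fails.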
func InitRedisClient(config config.RedisConfig) {
fmt.Println("Initializing Redis client...")
RedisClient = redis.NewClient(&redis.Options{
Addr: config.Addr,
Password: config.Password,
DB: config.DB,
DialTimeout: 10 * time.Second,
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
})
fmt.Println("Pinging Redis server...")
_, err := RedisClient.Ping(context.Background()).Result()
if err != nil {
fmt.Printf("Init redis client failed, err: %v\n", err)
panic(err)
}
fmt.Println("Redis client initialized successfully.")
}

100
internal/storage/cache/formula.go vendored Normal file
View File

@@ -0,0 +1,100 @@
package cache
import (
"context"
"fmt"
"strconv"
"time"
"github.com/redis/go-redis/v9"
)
const (
FormulaRecognitionTaskCount = "formula_recognition_task"
FormulaRecognitionTaskQueue = "formula_recognition_queue"
FormulaRecognitionDistLock = "formula_recognition_dist_lock"
VLMFormulaCount = "vlm_formula_count:%s" // VLM公式识别次数 ip
VLMRecognitionTaskQueue = "vlm_recognition_queue"
DefaultLockTimeout = 60 * time.Second // 默认锁超时时间
)
// TODO: a single Redis list is not a reliable queue; a task ID can be lost if the consumer crashes after BRPop
func PushVLMRecognitionTask(ctx context.Context, taskID int64) (count int64, err error) {
count, err = RedisClient.LPush(ctx, VLMRecognitionTaskQueue, taskID).Result()
if err != nil {
return 0, err
}
return count, nil
}
func PopVLMRecognitionTask(ctx context.Context) (int64, error) {
result, err := RedisClient.BRPop(ctx, 0, VLMRecognitionTaskQueue).Result()
if err != nil {
return 0, err
}
return strconv.ParseInt(result[1], 10, 64)
}
func PushFormulaTask(ctx context.Context, taskID int64) (count int64, err error) {
count, err = RedisClient.LPush(ctx, FormulaRecognitionTaskQueue, taskID).Result()
if err != nil {
return 0, err
}
return count, nil
}
func PopFormulaTask(ctx context.Context) (int64, error) {
result, err := RedisClient.BRPop(ctx, 0, FormulaRecognitionTaskQueue).Result()
if err != nil {
return 0, err
}
return strconv.ParseInt(result[1], 10, 64)
}
func GetFormulaTaskCount(ctx context.Context) (int64, error) {
count, err := RedisClient.LLen(ctx, FormulaRecognitionTaskQueue).Result()
if err != nil {
return 0, err
}
return count, nil
}
// GetDistributedLock 获取分布式锁
func GetDistributedLock(ctx context.Context) (bool, error) {
return RedisClient.SetNX(ctx, FormulaRecognitionDistLock, "locked", DefaultLockTimeout).Result()
}
// ReleaseLock 释放分布式锁
func ReleaseLock(ctx context.Context) error {
return RedisClient.Del(ctx, FormulaRecognitionDistLock).Err()
}
func GetVLMFormulaCount(ctx context.Context, ip string) (int64, error) {
count, err := RedisClient.Get(ctx, fmt.Sprintf(VLMFormulaCount, ip)).Result()
if err != nil {
if err == redis.Nil {
return 0, nil
}
return 0, err
}
return strconv.ParseInt(count, 10, 64)
}
func IncrVLMFormulaCount(ctx context.Context, ip string) (int64, error) {
key := fmt.Sprintf(VLMFormulaCount, ip)
count, err := RedisClient.Incr(ctx, key).Result()
if err != nil {
return 0, err
}
if count == 1 {
now := time.Now()
nextMidnight := time.Date(now.Year(), now.Month(), now.Day()+1, 0, 0, 0, 0, now.Location())
ttl := nextMidnight.Sub(now)
if err := RedisClient.Expire(ctx, key, ttl).Err(); err != nil {
return count, err
}
}
return count, nil
}

12
internal/storage/cache/url.go vendored Normal file
View File

@@ -0,0 +1,12 @@
package cache
import "context"
func IncrURLCount(ctx context.Context) (int64, error) {
key := "formula_recognition:url_count"
count, err := RedisClient.Incr(ctx, key).Result()
if err != nil {
return 0, err
}
return count, nil
}

63
internal/storage/cache/user.go vendored Normal file
View File

@@ -0,0 +1,63 @@
package cache
import (
"context"
"errors"
"fmt"
"strconv"
"time"
"github.com/redis/go-redis/v9"
)
const (
UserSmsCodeTTL = 10 * time.Minute
UserSendSmsLimitTTL = 24 * time.Hour
UserSendSmsLimitCount = 5
)
const (
UserSmsCodePrefix = "user:sms_code:%s"
UserSendSmsLimit = "user:send_sms_limit:%s"
)
func GetUserSmsCode(ctx context.Context, phone string) (string, error) {
code, err := RedisClient.Get(ctx, fmt.Sprintf(UserSmsCodePrefix, phone)).Result()
if err != nil {
if err == redis.Nil {
return "", nil
}
return "", err
}
return code, nil
}
func SetUserSmsCode(ctx context.Context, phone, code string) error {
return RedisClient.Set(ctx, fmt.Sprintf(UserSmsCodePrefix, phone), code, UserSmsCodeTTL).Err()
}
func GetUserSendSmsLimit(ctx context.Context, phone string) (int, error) {
limit, err := RedisClient.Get(ctx, fmt.Sprintf(UserSendSmsLimit, phone)).Result()
if err != nil {
if err == redis.Nil {
return 0, nil
}
return 0, err
}
return strconv.Atoi(limit)
}
func SetUserSendSmsLimit(ctx context.Context, phone string) error {
count, err := RedisClient.Incr(ctx, fmt.Sprintf(UserSendSmsLimit, phone)).Result()
if err != nil {
return err
}
if count > UserSendSmsLimitCount {
return errors.New("send sms limit")
}
return RedisClient.Expire(ctx, fmt.Sprintf(UserSendSmsLimit, phone), UserSendSmsLimitTTL).Err()
}
func DeleteUserSmsCode(ctx context.Context, phone string) error {
return RedisClient.Del(ctx, fmt.Sprintf(UserSmsCodePrefix, phone)).Err()
}

View File

@@ -0,0 +1,11 @@
package dao
import (
"time"
)
type BaseModel struct {
ID int64 `gorm:"bigint;primaryKey;autoIncrement;column:id;comment:主键ID" json:"id"`
CreatedAt time.Time `gorm:"column:created_at;comment:创建时间;not null;default:current_timestamp" json:"created_at"`
UpdatedAt time.Time `gorm:"column:updated_at;comment:更新时间;not null;default:current_timestamp on update current_timestamp" json:"updated_at"`
}

View File

@@ -0,0 +1,31 @@
package dao
import (
"fmt"
"gitea.com/bitwsd/document_ai/config"
"gorm.io/driver/mysql"
"gorm.io/gorm"
)
var DB *gorm.DB
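// InitDB opens the MySQL connection described by conf, configures the connection pool and stores the handle in the package-level DB.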
func InitDB(conf config.DatabaseConfig) {
dsn := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8mb4&parseTime=True&loc=Asia%%2FShanghai", conf.Username, conf.Password, conf.Host, conf.Port, conf.DBName)
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
if err != nil {
panic(err)
}
sqlDB, err := db.DB()
if err != nil {
panic(err)
}
sqlDB.SetMaxIdleConns(10)
sqlDB.SetMaxOpenConns(100)
DB = db
}
func CloseDB() {
sqlDB, _ := DB.DB()
sqlDB.Close()
}

View File

@@ -0,0 +1,26 @@
package dao
import "gorm.io/gorm"
type EvaluateTask struct {
BaseModel
TaskID int64 `gorm:"column:task_id;type:int;not null;comment:任务ID"`
Satisfied int `gorm:"column:satisfied;type:int;not null;comment:满意"`
Feedback string `gorm:"column:feedback;type:text;not null;comment:反馈"`
Comment string `gorm:"column:comment;type:text;not null;comment:评论"`
}
func (EvaluateTask) TableName() string {
return "evaluate_tasks"
}
type EvaluateTaskDao struct {
}
func NewEvaluateTaskDao() *EvaluateTaskDao {
return &EvaluateTaskDao{}
}
func (dao *EvaluateTaskDao) Create(sess *gorm.DB, data *EvaluateTask) error {
return sess.Create(data).Error
}

View File

@@ -0,0 +1,89 @@
package dao
import (
"encoding/json"
"gorm.io/gorm"
)
type JSON []byte
// ContentCodec 定义内容编解码接口
type ContentCodec interface {
Encode() (JSON, error)
Decode() error
GetContent() interface{} // 更明确的方法名
}
type FormulaRecognitionContent struct {
content JSON
Latex string `json:"latex"`
AdjustLatex string `json:"adjust_latex"`
EnhanceLatex string `json:"enhance_latex"`
}
func (c *FormulaRecognitionContent) Encode() (JSON, error) {
b, err := json.Marshal(c)
if err != nil {
return nil, err
}
return b, nil
}
func (c *FormulaRecognitionContent) Decode() error {
return json.Unmarshal(c.content, c)
}
// GetContent returns the formula content by priority: EnhanceLatex, then AdjustLatex, then the raw Latex.
func (c *FormulaRecognitionContent) GetContent() interface{} {
_ = c.Decode() // a decode failure falls back to whatever fields are already set
if c.EnhanceLatex != "" {
return c.EnhanceLatex
} else if c.AdjustLatex != "" {
return c.AdjustLatex
} else {
return c.Latex
}
}
type RecognitionResult struct {
BaseModel
TaskID int64 `gorm:"column:task_id;bigint;not null;default:0;comment:任务ID" json:"task_id"`
TaskType TaskType `gorm:"column:task_type;varchar(16);not null;comment:任务类型;default:''" json:"task_type"`
Content JSON `gorm:"column:content;type:json;not null;comment:识别内容" json:"content"`
}
// NewContentCodec 创建对应任务类型的内容编解码器
func (r *RecognitionResult) NewContentCodec() ContentCodec {
switch r.TaskType {
case TaskTypeFormula:
return &FormulaRecognitionContent{content: r.Content}
default:
return nil
}
}
type RecognitionResultDao struct {
}
func NewRecognitionResultDao() *RecognitionResultDao {
return &RecognitionResultDao{}
}
// 模型方法
func (dao *RecognitionResultDao) Create(tx *gorm.DB, data RecognitionResult) error {
return tx.Create(&data).Error
}
func (dao *RecognitionResultDao) GetByTaskID(tx *gorm.DB, taskID int64) (result *RecognitionResult, err error) {
result = &RecognitionResult{}
err = tx.Where("task_id = ?", taskID).First(result).Error
if err != nil && err == gorm.ErrRecordNotFound {
return nil, nil
}
return
}
func (dao *RecognitionResultDao) Update(tx *gorm.DB, id int64, updates map[string]interface{}) error {
return tx.Model(&RecognitionResult{}).Where("id = ?", id).Updates(updates).Error
}

View File

@@ -0,0 +1,94 @@
package dao
import (
"time"
"gorm.io/gorm"
"gorm.io/gorm/clause"
)
type TaskStatus int
type TaskType string
const (
TaskStatusPending TaskStatus = 0
TaskStatusProcessing TaskStatus = 1
TaskStatusCompleted TaskStatus = 2
TaskStatusFailed TaskStatus = 3
TaskTypeFormula TaskType = "FORMULA"
TaskTypeText TaskType = "TEXT"
TaskTypeTable TaskType = "TABLE"
TaskTypeLayout TaskType = "LAYOUT"
)
func (t TaskType) String() string {
return string(t)
}
func (t TaskStatus) String() string {
return []string{"PENDING", "PROCESSING", "COMPLETED", "FAILED"}[t]
}
type RecognitionTask struct {
BaseModel
UserID int64 `gorm:"column:user_id;not null;default:0;comment:用户ID" json:"user_id"`
TaskUUID string `gorm:"column:task_uuid;varchar(64);not null;default:'';comment:任务唯一标识" json:"task_uuid"`
FileName string `gorm:"column:file_name;varchar(256);not null;default:'';comment:文件名" json:"file_name"`
FileHash string `gorm:"column:file_hash;varchar(64);not null;default:'';comment:文件hash" json:"file_hash"`
FileURL string `gorm:"column:file_url;varchar(128);not null;comment:oss文件地址;default:''" json:"file_url"`
TaskType TaskType `gorm:"column:task_type;varchar(16);not null;comment:任务类型;default:''" json:"task_type"`
Status TaskStatus `gorm:"column:status;tinyint(2);not null;comment:任务状态;default:0" json:"status"`
CompletedAt time.Time `gorm:"column:completed_at;not null;default:current_timestamp;comment:完成时间" json:"completed_at"`
Remark string `gorm:"column:remark;varchar(64);comment:备注;not null;default:''" json:"remark"`
IP string `gorm:"column:ip;varchar(16);comment:IP地址;not null;default:''" json:"ip"`
}
func (t *RecognitionTask) TableName() string {
return "recognition_tasks"
}
type RecognitionTaskDao struct{}
func NewRecognitionTaskDao() *RecognitionTaskDao {
return &RecognitionTaskDao{}
}
// 模型方法
func (dao *RecognitionTaskDao) Create(tx *gorm.DB, data *RecognitionTask) error {
return tx.Create(data).Error
}
func (dao *RecognitionTaskDao) Update(tx *gorm.DB, filter map[string]interface{}, data map[string]interface{}) error {
return tx.Model(RecognitionTask{}).Where(filter).Updates(data).Error
}
func (dao *RecognitionTaskDao) GetByTaskNo(tx *gorm.DB, taskUUID string) (task *RecognitionTask, err error) {
task = &RecognitionTask{}
err = tx.Model(RecognitionTask{}).Where("task_uuid = ?", taskUUID).First(task).Error
return
}
func (dao *RecognitionTaskDao) GetTaskByFileURL(tx *gorm.DB, userID int64, fileHash string) (task *RecognitionTask, err error) {
task = &RecognitionTask{}
err = tx.Model(RecognitionTask{}).Where("user_id = ? AND file_hash = ?", userID, fileHash).First(task).Error
return
}
func (dao *RecognitionTaskDao) GetTaskByID(tx *gorm.DB, id int64) (task *RecognitionTask, err error) {
task = &RecognitionTask{}
err = tx.Model(RecognitionTask{}).Where("id = ?", id).First(task).Error
if err != nil {
if err == gorm.ErrRecordNotFound {
return nil, nil
}
return nil, err
}
return task, nil
}
func (dao *RecognitionTaskDao) GetTaskList(tx *gorm.DB, taskType TaskType, page int, pageSize int) (tasks []*RecognitionTask, err error) {
offset := (page - 1) * pageSize
err = tx.Model(RecognitionTask{}).Where("task_type = ?", taskType).Offset(offset).Limit(pageSize).Order(clause.OrderByColumn{Column: clause.Column{Name: "id"}, Desc: true}).Find(&tasks).Error
return
}

View File

@@ -0,0 +1,53 @@
package dao
import (
"errors"
"gorm.io/gorm"
)
type User struct {
BaseModel
Username string `gorm:"column:username" json:"username"`
Phone string `gorm:"column:phone" json:"phone"`
Password string `gorm:"column:password" json:"password"`
WechatOpenID string `gorm:"column:wechat_open_id" json:"wechat_open_id"`
WechatUnionID string `gorm:"column:wechat_union_id" json:"wechat_union_id"`
}
func (u *User) TableName() string {
return "users"
}
type UserDao struct {
}
func NewUserDao() *UserDao {
return &UserDao{}
}
func (dao *UserDao) Create(tx *gorm.DB, user *User) error {
return tx.Create(user).Error
}
func (dao *UserDao) GetByPhone(tx *gorm.DB, phone string) (*User, error) {
var user User
if err := tx.Where("phone = ?", phone).First(&user).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return nil, nil
}
return nil, err
}
return &user, nil
}
func (dao *UserDao) GetByID(tx *gorm.DB, id int64) (*User, error) {
var user User
if err := tx.Where("id = ?", id).First(&user).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return nil, nil
}
return nil, err
}
return &user, nil
}

83
main.go Normal file
View File

@@ -0,0 +1,83 @@
package main
import (
"context"
"flag"
"fmt"
"net/http"
"os"
"os/signal"
"syscall"
"time"
"gitea.com/bitwsd/core/common/cors"
"gitea.com/bitwsd/core/common/log"
"gitea.com/bitwsd/core/common/middleware"
"gitea.com/bitwsd/document_ai/api"
"gitea.com/bitwsd/document_ai/config"
"gitea.com/bitwsd/document_ai/internal/storage/cache"
"gitea.com/bitwsd/document_ai/internal/storage/dao"
"gitea.com/bitwsd/document_ai/pkg/common"
"gitea.com/bitwsd/document_ai/pkg/sms"
"github.com/gin-gonic/gin"
)
func main() {
// 加载配置
env := "dev"
flag.StringVar(&env, "env", "dev", "environment (dev/prod)")
flag.Parse()
configPath := fmt.Sprintf("./config/config_%s.yaml", env)
if err := config.Init(configPath); err != nil {
panic(err)
}
// 初始化日志
if err := log.Setup(config.GlobalConfig.Log); err != nil {
panic(err)
}
// 初始化数据库
dao.InitDB(config.GlobalConfig.Database)
cache.InitRedisClient(config.GlobalConfig.Redis)
sms.InitSmsClient()
// 初始化Redis
// cache.InitRedis(config.GlobalConfig.Redis.Addr)
// 初始化OSS客户端
// if err := oss.InitOSS(config.GlobalConfig.OSS); err != nil {
// logger.Fatal("Failed to init OSS client", logger.Fields{"error": err})
// }
// 设置gin模式
gin.SetMode(config.GlobalConfig.Server.Mode)
// 设置路由
r := gin.New()
// 使用中间件
r.Use(gin.Recovery(), middleware.RequestID(), middleware.AccessLog(), cors.Cors(cors.DefaultConfig()), common.MiddlewareContext)
router := r.Group("doc_ai")
api.SetupRouter(router)
// 启动服务器
addr := fmt.Sprintf(":%d", config.GlobalConfig.Server.Port)
srv := &http.Server{Addr: addr, Handler: r}
go func() {
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
panic(err)
}
}()
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit
// 优雅地关闭服务器
if err := srv.Shutdown(context.Background()); err != nil {
panic(err)
}
time.Sleep(time.Second * 3)
dao.CloseDB()
}

49
pkg/common/errors.go Normal file
View File

@@ -0,0 +1,49 @@
package common
type ErrorCode int
const (
CodeSuccess = 200
CodeParamError = 400
CodeUnauthorized = 401
CodeForbidden = 403
CodeNotFound = 404
CodeInvalidStatus = 405
CodeDBError = 500
CodeSystemError = 501
CodeTaskNotComplete = 1001
CodeRecordRepeat = 1002
CodeSmsCodeError = 1003
)
const (
CodeSuccessMsg = "success"
CodeParamErrorMsg = "param error"
CodeUnauthorizedMsg = "unauthorized"
CodeForbiddenMsg = "forbidden"
CodeNotFoundMsg = "not found"
CodeInvalidStatusMsg = "invalid status"
CodeDBErrorMsg = "database error"
CodeSystemErrorMsg = "system error"
CodeTaskNotCompleteMsg = "task not complete"
CodeRecordRepeatMsg = "record repeat"
CodeSmsCodeErrorMsg = "sms code error"
)
type BusinessError struct {
Code ErrorCode
Message string
Err error
}
func (e *BusinessError) Error() string {
return e.Message
}
func NewError(code ErrorCode, message string, err error) *BusinessError {
return &BusinessError{
Code: code,
Message: message,
Err: err,
}
}

59
pkg/common/middleware.go Normal file
View File

@@ -0,0 +1,59 @@
package common
import (
"context"
"net/http"
"strings"
"gitea.com/bitwsd/document_ai/pkg/constant"
"gitea.com/bitwsd/document_ai/pkg/jwt"
"github.com/gin-gonic/gin"
)
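// MiddlewareContext stores the client IP in the gin context so handlers and services can read it via GetIPFromContext.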
func MiddlewareContext(c *gin.Context) {
c.Set(constant.ContextIP, c.ClientIP())
}
func GetIPFromContext(ctx context.Context) string {
ip, _ := ctx.Value(constant.ContextIP).(string)
return ip
}
func GetUserIDFromContext(ctx *gin.Context) int64 {
return ctx.GetInt64(constant.ContextUserID)
}
func AuthMiddleware(ctx *gin.Context) {
token := ctx.GetHeader("Authorization")
if token == "" {
ctx.JSON(http.StatusOK, ErrorResponse(ctx, CodeUnauthorized, CodeUnauthorizedMsg))
ctx.Abort()
return
}
token = strings.TrimPrefix(token, "Bearer ")
claims, err := jwt.ParseToken(token)
if err != nil || claims == nil {
ctx.JSON(http.StatusOK, ErrorResponse(ctx, CodeUnauthorized, CodeUnauthorizedMsg))
ctx.Abort()
return
}
ctx.Set(constant.ContextUserID, claims.UserId)
}
func GetAuthMiddleware() gin.HandlerFunc {
return func(ctx *gin.Context) {
token := ctx.GetHeader("Authorization")
if token != "" {
token = strings.TrimPrefix(token, "Bearer ")
claims, err := jwt.ParseToken(token)
if err == nil {
ctx.Set(constant.ContextUserID, claims.UserId)
}
}
}
}

31
pkg/common/response.go Normal file
View File

@@ -0,0 +1,31 @@
package common
import (
"context"
"gitea.com/bitwsd/document_ai/pkg/constant"
)
type Response struct {
RequestID string `json:"request_id"`
Code int `json:"code"`
Message string `json:"message"`
Data interface{} `json:"data"`
}
func ErrorResponse(ctx context.Context, code int, message string) *Response {
return &Response{
RequestID: ctx.Value(constant.ContextRequestID).(string),
Code: code,
Message: message,
}
}
func SuccessResponse(ctx context.Context, data interface{}) *Response {
return &Response{
RequestID: ctx.Value(constant.ContextRequestID).(string),
Code: CodeSuccess,
Message: "success",
Data: data,
}
}

7
pkg/constant/context.go Normal file
View File

@@ -0,0 +1,7 @@
package constant
const (
ContextRequestID = "request_id"
ContextUserID = "user_id"
ContextIP = "ip"
)

5
pkg/constant/header.go Normal file
View File

@@ -0,0 +1,5 @@
package constant
const (
HeaderRequestID = "X-Request-ID"
)

5
pkg/constant/limit.go Normal file
View File

@@ -0,0 +1,5 @@
package constant
const (
VLMFormulaCount = 20
)

136
pkg/httpclient/client.go Normal file
View File

@@ -0,0 +1,136 @@
package httpclient
import (
"context"
"crypto/tls"
"fmt"
"io"
"math"
"net"
"net/http"
"time"
"gitea.com/bitwsd/core/common/log"
)
// RetryConfig 重试配置
type RetryConfig struct {
MaxRetries int // 最大重试次数
InitialInterval time.Duration // 初始重试间隔
MaxInterval time.Duration // 最大重试间隔
SkipTLSVerify bool // 是否跳过TLS验证
}
// DefaultRetryConfig 默认重试配置
var DefaultRetryConfig = RetryConfig{
MaxRetries: 2,
InitialInterval: 100 * time.Millisecond,
MaxInterval: 5 * time.Second,
SkipTLSVerify: true,
}
// Client HTTP客户端封装
type Client struct {
client *http.Client
config RetryConfig
}
// NewClient 创建新的HTTP客户端
func NewClient(config *RetryConfig) *Client {
cfg := DefaultRetryConfig
if config != nil {
cfg = *config
}
tr := &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: cfg.SkipTLSVerify,
},
}
return &Client{
client: &http.Client{
Transport: tr,
},
config: cfg,
}
}
// RequestWithRetry 执行带重试的HTTP请求
func (c *Client) RequestWithRetry(ctx context.Context, method, url string, body io.Reader, headers map[string]string) (*http.Response, error) {
var lastErr error
for attempt := 0; attempt < c.config.MaxRetries; attempt++ {
if attempt > 0 {
backoff := c.calculateBackoff(attempt)
select {
case <-ctx.Done():
return nil, ctx.Err()
case <-time.After(backoff):
}
// 如果body是可以Seek的则重置到开始位置
if seeker, ok := body.(io.Seeker); ok {
_, err := seeker.Seek(0, io.SeekStart)
if err != nil {
return nil, fmt.Errorf("failed to reset request body: %w", err)
}
}
log.Info(ctx, "func", "RequestWithRetry", "msg", "正在重试请求",
"attempt", attempt+1, "max_retries", c.config.MaxRetries, "backoff", backoff)
}
// 检查 context 是否已经取消
if ctx.Err() != nil {
return nil, ctx.Err()
}
req, err := http.NewRequestWithContext(ctx, method, url, body)
if err != nil {
lastErr = fmt.Errorf("create request failed: %w", err)
continue
}
for k, v := range headers {
req.Header.Set(k, v)
}
resp, err := c.client.Do(req)
if err != nil {
lastErr = fmt.Errorf("request failed: %w", err)
// 如果是超时错误,直接返回不再重试
if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
return nil, fmt.Errorf("request timeout: %w", err)
}
log.Error(ctx, "func", "RequestWithRetry", "msg", "请求失败",
"error", err, "attempt", attempt+1)
continue
}
if resp.StatusCode != http.StatusOK {
respBody, _ := io.ReadAll(resp.Body)
resp.Body.Close()
lastErr = fmt.Errorf("unexpected status code: %d", resp.StatusCode)
log.Error(ctx, "func", "RequestWithRetry", "msg", "请求返回非200状态码",
"status", resp.StatusCode, "attempt", attempt+1, "body", string(respBody))
continue
}
return resp, nil
}
return nil, fmt.Errorf("max retries reached: %w", lastErr)
}
// calculateBackoff 计算退避时间
func (c *Client) calculateBackoff(attempt int) time.Duration {
backoff := float64(c.config.InitialInterval) * math.Pow(2, float64(attempt))
if backoff > float64(c.config.MaxInterval) {
backoff = float64(c.config.MaxInterval)
}
return time.Duration(backoff)
}

61
pkg/jwt/jwt.go Normal file
View File

@@ -0,0 +1,61 @@
package jwt
import (
"errors"
"time"
"github.com/dgrijalva/jwt-go"
)
var JwtKey = []byte("bitwsd@hello qinshihuang")
var ValidTime = 3600 * 24 * 7
type User struct {
UserId int64 `json:"user_id"`
}
type CustomClaims struct {
User
jwt.StandardClaims
}
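// CreateToken signs a 7-day HS256 JWT for the user; the returned string already includes the "Bearer " prefix.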
func CreateToken(user User) (string, error) {
expire := time.Now().Add(time.Duration(ValidTime) * time.Second)
claims := &CustomClaims{
User: user,
StandardClaims: jwt.StandardClaims{
ExpiresAt: expire.Unix(),
IssuedAt: time.Now().Unix(),
},
}
token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
t, err := token.SignedString(JwtKey)
if err != nil {
return "", err
}
return "Bearer " + t, nil
}
func ParseToken(signToken string) (*CustomClaims, error) {
token, err := jwt.ParseWithClaims(signToken, &CustomClaims{}, func(token *jwt.Token) (interface{}, error) {
return JwtKey, nil
})
if err != nil {
if ve, ok := err.(*jwt.ValidationError); ok {
if ve.Errors&jwt.ValidationErrorExpired != 0 {
return nil, errors.New("token expired")
}
}
return nil, err
}
claims, _ := token.Claims.(*CustomClaims)
if claims == nil || !token.Valid {
return nil, errors.New("token invalid")
}
return claims, nil
}

30
pkg/oss/config.go Normal file
View File

@@ -0,0 +1,30 @@
package oss
// var (
//
// AccessKeyId = os.Getenv("OSS_ACCESS_KEY_ID")
// AccessKeySecret = os.Getenv("OSS_ACCESS_KEY_SECRET")
// Host = "http://${your-bucket}.${your-endpoint}"
// UploadDir = "user-dir-prefix/"
// ExpireTime = int64(3600)
// Endpoint = os.Getenv("OSS_ENDPOINT")
// BucketName = os.Getenv("OSS_BUCKET_NAME")
//
// )
const (
ExpireTime = int64(600) // 签名有效期
FormulaDir = "formula/"
)
type ConfigStruct struct {
Expiration string `json:"expiration"`
Conditions [][]interface{} `json:"conditions"`
}
type PolicyToken struct {
AccessKeyId string `json:"ossAccessKeyId"`
Host string `json:"host"`
Signature string `json:"signature"`
Policy string `json:"policy"`
Directory string `json:"dir"`
}

175
pkg/oss/policy.go Normal file
View File

@@ -0,0 +1,175 @@
package oss
import (
"context"
"crypto/hmac"
"crypto/sha1"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"path/filepath"
"strings"
"time"
"gitea.com/bitwsd/core/common/log"
"gitea.com/bitwsd/document_ai/config"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
)
func GetGMTISO8601(expireEnd int64) string {
return time.Unix(expireEnd, 0).UTC().Format("2006-01-02T15:04:05Z")
}
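// GetPolicyToken builds a signed browser-upload policy (content length limited to 1KB–3MB) and returns it as a JSON string.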
func GetPolicyToken() (string, error) {
now := time.Now().Unix()
expireEnd := now + ExpireTime
tokenExpire := GetGMTISO8601(expireEnd)
conf := ConfigStruct{
Expiration: tokenExpire,
}
// Add file prefix restriction
// config.Conditions = append(config.Conditions, []interface{}{"starts-with", "$key", FormulaDir})
// Add file size restriction (1KB to 3MB)
minSize := int64(1024)
maxSize := int64(3 * 1024 * 1024)
conf.Conditions = append(conf.Conditions, []interface{}{"content-length-range", minSize, maxSize})
result, err := json.Marshal(conf)
if err != nil {
return "", fmt.Errorf("marshal config error: %w", err)
}
encodedResult := base64.StdEncoding.EncodeToString(result)
h := hmac.New(sha1.New, []byte(config.GlobalConfig.Aliyun.OSS.AccessKeySecret))
io.WriteString(h, encodedResult)
signedStr := base64.StdEncoding.EncodeToString(h.Sum(nil))
policyToken := PolicyToken{
AccessKeyId: config.GlobalConfig.Aliyun.OSS.AccessKeyID,
Host: config.GlobalConfig.Aliyun.OSS.Endpoint,
Signature: signedStr,
Policy: encodedResult,
Directory: FormulaDir,
}
response, err := json.Marshal(policyToken)
if err != nil {
return "", fmt.Errorf("marshal policy token error: %w", err)
}
return string(response), nil
}
func GetPolicyURL(ctx context.Context, path string) (string, error) {
// Create OSS client
client, err := oss.New(config.GlobalConfig.Aliyun.OSS.Endpoint, config.GlobalConfig.Aliyun.OSS.AccessKeyID, config.GlobalConfig.Aliyun.OSS.AccessKeySecret)
if err != nil {
log.Error(ctx, "func", "GetPolicyURL", "msg", "create oss client failed", "error", err)
return "", err
}
// Get bucket instance
bucket, err := client.Bucket(config.GlobalConfig.Aliyun.OSS.BucketName)
if err != nil {
log.Error(ctx, "func", "GetPolicyURL", "msg", "get bucket failed", "error", err)
return "", err
}
// Set options for the signed URL
var contentType string
ext := filepath.Ext(path)
switch ext {
case ".jpg", ".jpeg":
contentType = "image/jpeg"
case ".png":
contentType = "image/png"
case ".gif":
contentType = "image/gif"
case ".bmp":
contentType = "image/bmp"
case ".webp":
contentType = "image/webp"
case ".tiff":
contentType = "image/tiff"
case ".svg":
contentType = "image/svg+xml"
default:
return "", fmt.Errorf("unsupported file type: %s", ext)
}
options := []oss.Option{
oss.ContentType(contentType),
}
// Generate signed URL valid for 10 minutes
signedURL, err := bucket.SignURL(path, oss.HTTPPut, ExpireTime, options...)
if err != nil {
log.Error(ctx, "func", "GetPolicyURL", "msg", "sign url failed", "error", err)
return "", err
}
// http 转 https
signedURL = strings.Replace(signedURL, "http://", "https://", 1)
return signedURL, nil
}
// DownloadFile downloads a file from OSS and returns the reader, caller should close the reader
func DownloadFile(ctx context.Context, ossPath string) (io.ReadCloser, error) {
endpoint := config.GlobalConfig.Aliyun.OSS.InnerEndpoint
if config.GlobalConfig.Server.IsDebug() {
endpoint = config.GlobalConfig.Aliyun.OSS.Endpoint
}
// Create OSS client
client, err := oss.New(endpoint,
config.GlobalConfig.Aliyun.OSS.AccessKeyID,
config.GlobalConfig.Aliyun.OSS.AccessKeySecret)
if err != nil {
log.Error(ctx, "func", "DownloadFile", "msg", "create oss client failed", "error", err)
return nil, err
}
// Get bucket instance
bucket, err := client.Bucket(config.GlobalConfig.Aliyun.OSS.BucketName)
if err != nil {
log.Error(ctx, "func", "DownloadFile", "msg", "get bucket failed", "error", err)
return nil, err
}
// Download the file
reader, err := bucket.GetObject(ossPath)
if err != nil {
log.Error(ctx, "func", "DownloadFile", "msg", "download file failed", "ossPath", ossPath, "error", err)
return nil, err
}
return reader, nil
}
func GetDownloadURL(ctx context.Context, ossPath string) (string, error) {
endpoint := config.GlobalConfig.Aliyun.OSS.Endpoint
client, err := oss.New(endpoint, config.GlobalConfig.Aliyun.OSS.AccessKeyID, config.GlobalConfig.Aliyun.OSS.AccessKeySecret)
if err != nil {
log.Error(ctx, "func", "GetDownloadURL", "msg", "create oss client failed", "error", err)
return "", err
}
bucket, err := client.Bucket(config.GlobalConfig.Aliyun.OSS.BucketName)
if err != nil {
log.Error(ctx, "func", "GetDownloadURL", "msg", "get bucket failed", "error", err)
return "", err
}
signURL, err := bucket.SignURL(ossPath, oss.HTTPGet, 60)
if err != nil {
log.Error(ctx, "func", "GetDownloadURL", "msg", "get object failed", "error", err)
return "", err
}
return signURL, nil
}
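
A minimal usage sketch for the download helper above, not part of the original file: it assumes it lives in the same package so `DownloadFile` is visible, and `saveObjectLocally`, the package name, and the local path are illustrative only.

```go
package oss // assumed package name; adjust to the real one

import (
	"context"
	"io"
	"os"
)

// saveObjectLocally is a hypothetical helper showing how DownloadFile might be
// called; the caller is responsible for closing the returned reader.
func saveObjectLocally(ctx context.Context, ossPath, localPath string) error {
	reader, err := DownloadFile(ctx, ossPath)
	if err != nil {
		return err
	}
	defer reader.Close()

	out, err := os.Create(localPath)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, reader)
	return err
}
```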

65
pkg/sms/sms.go Normal file
View File

@@ -0,0 +1,65 @@
package sms
import (
"errors"
"sync"
"gitea.com/bitwsd/document_ai/config"
openapi "github.com/alibabacloud-go/darabonba-openapi/client"
dysmsapi "github.com/alibabacloud-go/dysmsapi-20170525/v2/client"
aliutil "github.com/alibabacloud-go/tea-utils/service"
"github.com/alibabacloud-go/tea/tea"
)
const (
Signature = "北京比特智源科技"
TemplateCode = "SMS_291510729"
TemplateParam = `{"code":"%s"}`
VerifyCodeLength = 6
VerifyCodeExpire = 3 * 60 // 3 minutes
)
var (
MsgClient *dysmsapi.Client
once sync.Once
)
func InitSmsClient() *dysmsapi.Client {
once.Do(func() {
key := tea.String(config.GlobalConfig.Aliyun.Sms.AccessKeyID)
secret := tea.String(config.GlobalConfig.Aliyun.Sms.AccessKeySecret)
cfg := &openapi.Config{AccessKeyId: key, AccessKeySecret: secret}
client, err := dysmsapi.NewClient(cfg)
if err != nil {
panic(err)
}
MsgClient = client
})
return MsgClient
}
type SendSmsRequest struct {
PhoneNumbers string
SignName string
TemplateCode string
TemplateParam string
}
func SendMessage(req *SendSmsRequest) (err error) {
client := MsgClient
request := &dysmsapi.SendSmsRequest{
PhoneNumbers: tea.String(req.PhoneNumbers),
SignName: tea.String(req.SignName),
TemplateCode: tea.String(req.TemplateCode),
TemplateParam: tea.String(req.TemplateParam),
}
resp, err := client.SendSms(request)
if err != nil {
return
}
if !tea.BoolValue(aliutil.EqualString(resp.Body.Code, tea.String("OK"))) {
err = errors.New(*resp.Body.Code)
return
}
return
}
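
A hedged sketch of how the constants and SendMessage above might be wired together to send a verification code; `sendVerifyCode` is a hypothetical helper and assumes InitSmsClient has already been called.

```go
package sms

import "fmt"

// sendVerifyCode is a hypothetical helper: it fills the template defined above
// with a code and sends it via SendMessage. InitSmsClient must run first.
func sendVerifyCode(phone, code string) error {
	req := &SendSmsRequest{
		PhoneNumbers:  phone,
		SignName:      Signature,
		TemplateCode:  TemplateCode,
		TemplateParam: fmt.Sprintf(TemplateParam, code),
	}
	return SendMessage(req)
}
```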

21
pkg/utils/arr.go Normal file
View File

@@ -0,0 +1,21 @@
package utils
import "math/rand"
func InArray[T comparable](needle T, haystack []T) bool {
for _, item := range haystack {
if item == needle {
return true
}
}
return false
}
func NewRandNumber(length int) (string, error) {
letters := []byte{'0', '1', '2', '3', '4', '5', '6', '7', '8', '9'}
b := make([]byte, length)
for i := range b {
b[i] = letters[rand.Intn(len(letters))]
}
return string(b), nil
}
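
A small illustrative sketch (not in the original file) exercising the two helpers above; the six-digit length mirrors the SMS package's VerifyCodeLength, and the error from NewRandNumber is currently always nil.

```go
package utils

import "fmt"

// exampleArrHelpers is a hypothetical demonstration of InArray and NewRandNumber.
func exampleArrHelpers() {
	code, _ := NewRandNumber(6) // e.g. "402913"; the error is always nil today
	fmt.Println("code:", code)
	fmt.Println(InArray("dev", []string{"dev", "prod"})) // true
}
```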

29
pkg/utils/context.go Normal file
View File

@@ -0,0 +1,29 @@
package utils
import (
"context"
"github.com/google/uuid"
)
type contextKey string
const RequestIDKey contextKey = "request_id"
const RequestIDHeaderKey = "X-Request-ID"
func NewContextWithRequestID(ctx context.Context, requestID string) context.Context {
newCtx := context.Background()
newCtx = context.WithValue(newCtx, RequestIDKey, requestID)
return newCtx
}
func NewUUID() string {
return uuid.New().String()
}
func GetRequestIDFromContext(ctx context.Context) string {
if requestID, ok := ctx.Value(RequestIDKey).(string); ok {
return requestID
}
return ""
}
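
A short sketch, assumed to sit in the same package, showing the intended round trip: attach a request ID to a context and read it back later. `handleJob` is illustrative only.

```go
package utils

import "context"

// handleJob is a hypothetical function showing request-ID propagation.
func handleJob(parent context.Context) string {
	ctx := NewContextWithRequestID(parent, NewUUID())
	// Downstream code can recover the ID without threading it explicitly.
	return GetRequestIDFromContext(ctx)
}
```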

115
pkg/utils/katex.go Normal file
View File

@@ -0,0 +1,115 @@
package utils
import (
"regexp"
"strings"
)
// Helper function to change patterns
func changeAll(text, oldIns, newIns, leftDelim, rightDelim, newLeftDelim, newRightDelim string) string {
pattern := regexp.MustCompile(regexp.QuoteMeta(oldIns) + `\s*` + regexp.QuoteMeta(leftDelim) + `(.*?)` + regexp.QuoteMeta(rightDelim))
return pattern.ReplaceAllString(text, newIns+newLeftDelim+"$1"+newRightDelim)
}
// Helper function to remove dollar surroundings
func rmDollarSurr(text string) string {
if strings.HasPrefix(text, "$") && strings.HasSuffix(text, "$") {
return text[1 : len(text)-1]
}
return text
}
// ToKatex converts LaTeX formula to KaTeX compatible format
func ToKatex(formula string) string {
res := formula
// Remove mbox surrounding
res = changeAll(res, `\mbox `, " ", "{", "}", "", "")
res = changeAll(res, `\mbox`, " ", "{", "}", "", "")
// Remove hbox surrounding
hboxPattern := regexp.MustCompile(`\\hbox to ?-? ?\d+\.\d+(pt)?\{`)
res = hboxPattern.ReplaceAllString(res, `\hbox{`)
res = changeAll(res, `\hbox`, " ", "{", "}", "", " ")
// Remove raise surrounding
raisePattern := regexp.MustCompile(`\\raise ?-? ?\d+\.\d+(pt)?`)
res = raisePattern.ReplaceAllString(res, " ")
// Remove makebox
makeboxPattern := regexp.MustCompile(`\\makebox ?\[\d+\.\d+(pt)?\]\{`)
res = makeboxPattern.ReplaceAllString(res, `\makebox{`)
res = changeAll(res, `\makebox`, " ", "{", "}", "", " ")
// Remove vbox, scalebox, raisebox surrounding
raisebox := regexp.MustCompile(`\\raisebox\{-? ?\d+\.\d+(pt)?\}\{`)
scalebox := regexp.MustCompile(`\\scalebox\{-? ?\d+\.\d+(pt)?\}\{`)
res = raisebox.ReplaceAllString(res, `\raisebox{`)
res = scalebox.ReplaceAllString(res, `\scalebox{`)
res = changeAll(res, `\scalebox`, " ", "{", "}", "", " ")
res = changeAll(res, `\raisebox`, " ", "{", "}", "", " ")
res = changeAll(res, `\vbox`, " ", "{", "}", "", " ")
// Handle size instructions
sizeInstructions := []string{
`\Huge`, `\huge`, `\LARGE`, `\Large`, `\large`,
`\normalsize`, `\small`, `\footnotesize`, `\tiny`,
}
for _, ins := range sizeInstructions {
res = changeAll(res, ins, ins, "$", "$", "{", "}")
}
// Handle boldmath
res = changeAll(res, `\boldmath `, `\bm`, "{", "}", "{", "}")
res = changeAll(res, `\boldmath`, `\bm`, "{", "}", "{", "}")
res = changeAll(res, `\boldmath `, `\bm`, "$", "$", "{", "}")
res = changeAll(res, `\boldmath`, `\bm`, "$", "$", "{", "}")
// Handle other instructions
res = changeAll(res, `\scriptsize`, `\scriptsize`, "$", "$", "{", "}")
res = changeAll(res, `\emph`, `\textit`, "{", "}", "{", "}")
res = changeAll(res, `\emph `, `\textit`, "{", "}", "{", "}")
// Handle math delimiters
delimiters := []string{
`\left`, `\middle`, `\right`, `\big`, `\Big`, `\bigg`, `\Bigg`,
`\bigl`, `\Bigl`, `\biggl`, `\Biggl`, `\bigm`, `\Bigm`, `\biggm`,
`\Biggm`, `\bigr`, `\Bigr`, `\biggr`, `\Biggr`,
}
for _, delim := range delimiters {
res = changeAll(res, delim, delim, "{", "}", "", "")
}
// Handle display math
displayMath := regexp.MustCompile(`\\\[(.*?)\\\]`)
res = displayMath.ReplaceAllString(res, "$1\\newline")
res = strings.TrimSuffix(res, `\newline`)
// Remove multiple spaces
spaces := regexp.MustCompile(`(\\,){1,}`)
res = spaces.ReplaceAllString(res, " ")
res = regexp.MustCompile(`(\\!){1,}`).ReplaceAllString(res, " ")
res = regexp.MustCompile(`(\\;){1,}`).ReplaceAllString(res, " ")
res = regexp.MustCompile(`(\\:){1,}`).ReplaceAllString(res, " ")
res = regexp.MustCompile(`\\vspace\{.*?}`).ReplaceAllString(res, "")
// Merge consecutive text
textPattern := regexp.MustCompile(`(\\text\{[^}]*\}\s*){2,}`)
res = textPattern.ReplaceAllStringFunc(res, func(match string) string {
texts := regexp.MustCompile(`\\text\{([^}]*)\}`).FindAllStringSubmatch(match, -1)
var merged strings.Builder
for _, t := range texts {
merged.WriteString(t[1])
}
return `\text{` + merged.String() + "}"
})
res = strings.ReplaceAll(res, `\bf `, "")
res = rmDollarSurr(res)
// Remove extra spaces
res = regexp.MustCompile(` +`).ReplaceAllString(res, " ")
return strings.TrimSpace(res)
}
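
An illustrative call to ToKatex, not part of the original file; the input formula is made up, and the comment describes the expected normalization rather than a verified output string.

```go
package utils

import "fmt"

// exampleToKatex is a hypothetical demonstration of ToKatex.
func exampleToKatex() {
	raw := `\mbox{area} \approx \Large $\pi r^{2}$ \,\,`
	// ToKatex strips the \mbox wrapper, rewrites \Large $...$ as \Large{...},
	// and collapses the \, spacing commands.
	fmt.Println(ToKatex(raw))
}
```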

18
pkg/utils/routine.go Normal file
View File

@@ -0,0 +1,18 @@
package utils
import (
"context"
"gitea.com/bitwsd/core/common/log"
)
func SafeGo(fn func()) {
go func() {
defer func() {
if err := recover(); err != nil {
log.Error(context.Background(), "panic recover", "err", err)
}
}()
fn()
}()
}
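
A minimal sketch showing the intended use of SafeGo: fire-and-forget work whose panic must not crash the process. The function name and log message are illustrative.

```go
package utils

import (
	"context"

	"gitea.com/bitwsd/core/common/log"
)

// startBackgroundJob is a hypothetical caller of SafeGo.
func startBackgroundJob() {
	SafeGo(func() {
		log.Info(context.Background(), "msg", "background job started")
		// Any panic raised here is recovered and logged by SafeGo instead of
		// taking the whole process down.
	})
}
```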

30
pkg/utils/sms.go Normal file
View File

@@ -0,0 +1,30 @@
package utils
import "strings"
// Validate a mobile phone number.
// Rules:
// 1. Length must be exactly 11
// 2. Must start with 1
// 3. The second digit must be 3, 4, 5, 6, 7, 8, or 9
// 4. All remaining characters must be digits
func ValidatePhone(phone string) bool {
if len(phone) != 11 || !strings.HasPrefix(phone, "1") {
return false
}
// Check the second digit
secondDigit := phone[1]
if secondDigit < '3' || secondDigit > '9' {
return false
}
// Check the remaining digits
for i := 2; i < len(phone); i++ {
if phone[i] < '0' || phone[i] > '9' {
return false
}
}
return true
}
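
A small illustrative check, assumed to be in the same package; the sample numbers are made up and simply exercise the rules documented above.

```go
package utils

import "fmt"

// exampleValidatePhone is a hypothetical demonstration of ValidatePhone.
func exampleValidatePhone() {
	fmt.Println(ValidatePhone("13800138000")) // true: 11 digits, starts with 13
	fmt.Println(ValidatePhone("12345678901")) // false: second digit is 2
	fmt.Println(ValidatePhone("1380013800"))  // false: only 10 digits
}
```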

5
pkg/utils/token.go Normal file
View File

@@ -0,0 +1,5 @@
package utils
const (
SiliconFlowToken = "Bearer sk-akbroznlbxikkbiouzasspbbzwgxubnjjtqlujxmxsnvpmhn"
)

60
vendor/gitea.com/bitwsd/core/common/cors/cors.go generated vendored Normal file
View File

@@ -0,0 +1,60 @@
package cors
import (
"strconv"
"strings"
"github.com/gin-gonic/gin"
)
type Config struct {
AllowOrigins []string
AllowMethods []string
AllowHeaders []string
ExposeHeaders []string
AllowCredentials bool
MaxAge int
}
func DefaultConfig() Config {
return Config{
AllowOrigins: []string{"*"},
AllowMethods: []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
AllowHeaders: []string{"Origin", "Content-Type", "Accept"},
ExposeHeaders: []string{"Content-Length"},
AllowCredentials: true,
MaxAge: 86400, // 24 hours
}
}
func Cors(config Config) gin.HandlerFunc {
return func(c *gin.Context) {
origin := c.Request.Header.Get("Origin")
// Check whether this origin is allowed
allowOrigin := "*"
for _, o := range config.AllowOrigins {
if o == origin {
allowOrigin = origin
break
}
}
c.Header("Access-Control-Allow-Origin", allowOrigin)
c.Header("Access-Control-Allow-Methods", strings.Join(config.AllowMethods, ","))
c.Header("Access-Control-Allow-Headers", strings.Join(config.AllowHeaders, ","))
c.Header("Access-Control-Expose-Headers", strings.Join(config.ExposeHeaders, ","))
c.Header("Access-Control-Max-Age", strconv.Itoa(config.MaxAge))
if config.AllowCredentials {
c.Header("Access-Control-Allow-Credentials", "true")
}
if c.Request.Method == "OPTIONS" {
c.AbortWithStatus(204)
return
}
c.Next()
}
}
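
A hedged wiring example (not from the repository) showing how this middleware might be attached to a Gin engine; the import path follows the vendor directory above, and the pinned origin is an assumption for when credentials are enabled.

```go
package main

import (
	"gitea.com/bitwsd/core/common/cors"
	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.New()

	cfg := cors.DefaultConfig()
	// With AllowCredentials=true, browsers reject a wildcard origin, so a real
	// deployment would likely pin the allowed origins explicitly.
	cfg.AllowOrigins = []string{"https://example.com"}

	r.Use(cors.Cors(cfg))
	_ = r.Run(":8080")
}
```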

29
vendor/gitea.com/bitwsd/core/common/log/log_config.go generated vendored Normal file
View File

@@ -0,0 +1,29 @@
package log
var (
maxSize = 100 // MB
outputPath = "/app/logs/app.log"
)
type LogConfig struct {
AppName string `yaml:"appName"` // Application name
Level string `yaml:"level"` // debug, info, warn, error
Format string `yaml:"format"` // json, console
OutputPath string `yaml:"outputPath"` // Log file path
MaxSize int `yaml:"maxSize"` // Maximum size of a single log file, in MB
MaxAge int `yaml:"maxAge"` // Number of days to keep logs
MaxBackups int `yaml:"maxBackups"` // Maximum number of rotated log files to keep
Compress bool `yaml:"compress"` // Whether to compress rotated logs
}
func DefaultLogConfig() *LogConfig {
return &LogConfig{
Level: "info",
Format: "json",
OutputPath: outputPath,
MaxSize: maxSize,
MaxAge: 7,
MaxBackups: 3,
Compress: true,
}
}

151
vendor/gitea.com/bitwsd/core/common/log/logger.go generated vendored Normal file
View File

@@ -0,0 +1,151 @@
package log
import (
"context"
"fmt"
"os"
"path/filepath"
"runtime"
"time"
"github.com/rs/zerolog"
"gopkg.in/natefinch/lumberjack.v2"
)
type LogType string
const (
TypeAccess LogType = "access"
TypeBusiness LogType = "business"
TypeError LogType = "error"
)
var (
logger zerolog.Logger
)
// Setup initializes the logging configuration
func Setup(conf LogConfig) error {
// Make sure the log directory exists
if err := os.MkdirAll(filepath.Dir(conf.OutputPath), 0755); err != nil {
return fmt.Errorf("create log directory failed: %v", err)
}
// Configure log rotation
writer := &lumberjack.Logger{
Filename: conf.OutputPath,
MaxSize: conf.MaxSize, // MB
MaxAge: conf.MaxAge, // days
MaxBackups: conf.MaxBackups,
Compress: conf.Compress,
}
// Set the log level
level, err := zerolog.ParseLevel(conf.Level)
if err != nil {
level = zerolog.InfoLevel
}
zerolog.SetGlobalLevel(level)
// Initialize the logger and attach app_name
logger = zerolog.New(writer).With().
Timestamp().
Str("app_name", conf.AppName). // 添加 app_name
Logger()
return nil
}
// log is the unified logging entry point
func log(ctx context.Context, level zerolog.Level, logType LogType, kv ...interface{}) {
if len(kv)%2 != 0 {
kv = append(kv, "MISSING")
}
event := logger.WithLevel(level)
// Add the log type
event.Str("type", string(logType))
// Add the request ID
if reqID, exists := ctx.Value("request_id").(string); exists {
event.Str("request_id", reqID)
}
// Add the caller location
if pc, file, line, ok := runtime.Caller(2); ok {
event.Str("caller", fmt.Sprintf("%s:%d %s", filepath.Base(file), line, runtime.FuncForPC(pc).Name()))
}
// Process the key/value pairs
for i := 0; i < len(kv); i += 2 {
key, ok := kv[i].(string)
if !ok {
continue
}
value := kv[i+1]
switch v := value.(type) {
case error:
event.AnErr(key, v)
case int:
event.Int(key, v)
case int64:
event.Int64(key, v)
case float64:
event.Float64(key, v)
case bool:
event.Bool(key, v)
case time.Duration:
event.Dur(key, v)
case time.Time:
event.Time(key, v)
case []byte:
event.Bytes(key, v)
case string:
event.Str(key, v)
default:
event.Interface(key, v)
}
}
event.Send()
}
// Debug writes a debug-level log entry
func Debug(ctx context.Context, kv ...interface{}) {
log(ctx, zerolog.DebugLevel, TypeBusiness, kv...)
}
// Info writes an info-level log entry
func Info(ctx context.Context, kv ...interface{}) {
log(ctx, zerolog.InfoLevel, TypeBusiness, kv...)
}
// Warn writes a warning-level log entry
func Warn(ctx context.Context, kv ...interface{}) {
log(ctx, zerolog.WarnLevel, TypeError, kv...)
}
// Error writes an error-level log entry
func Error(ctx context.Context, kv ...interface{}) {
log(ctx, zerolog.ErrorLevel, TypeError, kv...)
}
func Fatal(ctx context.Context, kv ...interface{}) {
// Capture the stack trace
buf := make([]byte, 4096)
n := runtime.Stack(buf, false)
// Append the stack trace to the kv pairs
newKv := make([]interface{}, 0, len(kv)+2)
newKv = append(newKv, kv...)
newKv = append(newKv, "stack", string(buf[:n]))
log(ctx, zerolog.FatalLevel, TypeError, newKv...)
}
// Access writes an access (request) log entry
func Access(ctx context.Context, kv ...interface{}) {
log(ctx, zerolog.InfoLevel, TypeAccess, kv...)
}
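
A minimal initialization sketch for the logging package above; the app name and local log path are assumptions, not values taken from the repository.

```go
package main

import (
	"context"

	"gitea.com/bitwsd/core/common/log"
)

func main() {
	conf := *log.DefaultLogConfig()
	conf.AppName = "document_ai"       // assumed application name
	conf.OutputPath = "./logs/app.log" // writable path for local runs
	if err := log.Setup(conf); err != nil {
		panic(err)
	}
	log.Info(context.Background(), "msg", "logger initialized", "level", conf.Level)
}
```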

View File

@@ -0,0 +1,74 @@
package middleware
import (
"bytes"
"io"
"strings"
"time"
"gitea.com/bitwsd/core/common/log"
"github.com/gin-gonic/gin"
)
const (
maxBodySize = 1024 * 500 // 500KB limit
)
// Custom ResponseWriter that captures the response body
type bodyWriter struct {
gin.ResponseWriter
body *bytes.Buffer
}
func (w *bodyWriter) Write(b []byte) (int, error) {
w.body.Write(b)
return w.ResponseWriter.Write(b)
}
func AccessLog() gin.HandlerFunc {
return func(c *gin.Context) {
start := time.Now()
path := c.Request.URL.Path
raw := c.Request.URL.RawQuery
// Handle the request body
var reqBody string
if c.Request.Body != nil && (c.Request.Method == "POST" || c.Request.Method == "PUT") {
// Read the request body, capped at maxBodySize
bodyBytes, _ := io.ReadAll(io.LimitReader(c.Request.Body, maxBodySize))
c.Request.Body = io.NopCloser(bytes.NewBuffer(bodyBytes)) // restore the body for downstream handlers
reqBody = string(bodyBytes)
}
// Install the custom ResponseWriter
var responseBody string
if !strings.Contains(c.GetHeader("Accept"), "text/event-stream") {
bw := &bodyWriter{body: &bytes.Buffer{}, ResponseWriter: c.Writer}
c.Writer = bw
}
c.Next()
// Collect the response body (non-SSE only)
if writer, ok := c.Writer.(*bodyWriter); ok {
responseBody = writer.body.String()
if len(responseBody) > maxBodySize {
responseBody = responseBody[:maxBodySize] + "... (truncated)"
}
}
// Write the access log entry
log.Access(c.Request.Context(),
"request_id", c.GetString("request_id"),
"method", c.Request.Method,
"path", path,
"query", raw,
"ip", c.ClientIP(),
"user_agent", c.Request.UserAgent(),
"status", c.Writer.Status(),
"duration", time.Since(start),
"request_body", reqBody,
"response_body", responseBody,
)
}
}

View File

@@ -0,0 +1,18 @@
package middleware
import (
"github.com/gin-gonic/gin"
"github.com/google/uuid"
)
func RequestID() gin.HandlerFunc {
return func(c *gin.Context) {
requestID := c.Request.Header.Get("X-Request-ID")
if requestID == "" {
requestID = uuid.New().String()
}
c.Request.Header.Set("X-Request-ID", requestID)
c.Set("request_id", requestID)
c.Next()
}
}
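
A hedged sketch of how the two middlewares above might be registered; the import path is assumed from the vendor layout (the file headers for these middleware files are not shown), and RequestID is placed first so the access log can see the generated ID.

```go
package main

import (
	"gitea.com/bitwsd/core/common/middleware" // assumed import path
	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.New()
	// RequestID first, so AccessLog can read the request_id it stores on the context.
	r.Use(middleware.RequestID())
	r.Use(middleware.AccessLog())
	_ = r.Run(":8080")
}
```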

View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,305 @@
// This file is auto-generated, don't edit it. Thanks.
package client
import (
"io"
"github.com/alibabacloud-go/tea/tea"
credential "github.com/aliyun/credentials-go/credentials"
)
type InterceptorContext struct {
Request *InterceptorContextRequest `json:"request,omitempty" xml:"request,omitempty" require:"true" type:"Struct"`
Configuration *InterceptorContextConfiguration `json:"configuration,omitempty" xml:"configuration,omitempty" require:"true" type:"Struct"`
Response *InterceptorContextResponse `json:"response,omitempty" xml:"response,omitempty" require:"true" type:"Struct"`
}
func (s InterceptorContext) String() string {
return tea.Prettify(s)
}
func (s InterceptorContext) GoString() string {
return s.String()
}
func (s *InterceptorContext) SetRequest(v *InterceptorContextRequest) *InterceptorContext {
s.Request = v
return s
}
func (s *InterceptorContext) SetConfiguration(v *InterceptorContextConfiguration) *InterceptorContext {
s.Configuration = v
return s
}
func (s *InterceptorContext) SetResponse(v *InterceptorContextResponse) *InterceptorContext {
s.Response = v
return s
}
type InterceptorContextRequest struct {
Headers map[string]*string `json:"headers,omitempty" xml:"headers,omitempty"`
Query map[string]*string `json:"query,omitempty" xml:"query,omitempty"`
Body interface{} `json:"body,omitempty" xml:"body,omitempty"`
Stream io.Reader `json:"stream,omitempty" xml:"stream,omitempty"`
HostMap map[string]*string `json:"hostMap,omitempty" xml:"hostMap,omitempty"`
Pathname *string `json:"pathname,omitempty" xml:"pathname,omitempty" require:"true"`
ProductId *string `json:"productId,omitempty" xml:"productId,omitempty" require:"true"`
Action *string `json:"action,omitempty" xml:"action,omitempty" require:"true"`
Version *string `json:"version,omitempty" xml:"version,omitempty" require:"true"`
Protocol *string `json:"protocol,omitempty" xml:"protocol,omitempty" require:"true"`
Method *string `json:"method,omitempty" xml:"method,omitempty" require:"true"`
AuthType *string `json:"authType,omitempty" xml:"authType,omitempty" require:"true"`
BodyType *string `json:"bodyType,omitempty" xml:"bodyType,omitempty" require:"true"`
ReqBodyType *string `json:"reqBodyType,omitempty" xml:"reqBodyType,omitempty" require:"true"`
Style *string `json:"style,omitempty" xml:"style,omitempty"`
Credential credential.Credential `json:"credential,omitempty" xml:"credential,omitempty" require:"true"`
SignatureVersion *string `json:"signatureVersion,omitempty" xml:"signatureVersion,omitempty"`
SignatureAlgorithm *string `json:"signatureAlgorithm,omitempty" xml:"signatureAlgorithm,omitempty"`
UserAgent *string `json:"userAgent,omitempty" xml:"userAgent,omitempty" require:"true"`
}
func (s InterceptorContextRequest) String() string {
return tea.Prettify(s)
}
func (s InterceptorContextRequest) GoString() string {
return s.String()
}
func (s *InterceptorContextRequest) SetHeaders(v map[string]*string) *InterceptorContextRequest {
s.Headers = v
return s
}
func (s *InterceptorContextRequest) SetQuery(v map[string]*string) *InterceptorContextRequest {
s.Query = v
return s
}
func (s *InterceptorContextRequest) SetBody(v interface{}) *InterceptorContextRequest {
s.Body = v
return s
}
func (s *InterceptorContextRequest) SetStream(v io.Reader) *InterceptorContextRequest {
s.Stream = v
return s
}
func (s *InterceptorContextRequest) SetHostMap(v map[string]*string) *InterceptorContextRequest {
s.HostMap = v
return s
}
func (s *InterceptorContextRequest) SetPathname(v string) *InterceptorContextRequest {
s.Pathname = &v
return s
}
func (s *InterceptorContextRequest) SetProductId(v string) *InterceptorContextRequest {
s.ProductId = &v
return s
}
func (s *InterceptorContextRequest) SetAction(v string) *InterceptorContextRequest {
s.Action = &v
return s
}
func (s *InterceptorContextRequest) SetVersion(v string) *InterceptorContextRequest {
s.Version = &v
return s
}
func (s *InterceptorContextRequest) SetProtocol(v string) *InterceptorContextRequest {
s.Protocol = &v
return s
}
func (s *InterceptorContextRequest) SetMethod(v string) *InterceptorContextRequest {
s.Method = &v
return s
}
func (s *InterceptorContextRequest) SetAuthType(v string) *InterceptorContextRequest {
s.AuthType = &v
return s
}
func (s *InterceptorContextRequest) SetBodyType(v string) *InterceptorContextRequest {
s.BodyType = &v
return s
}
func (s *InterceptorContextRequest) SetReqBodyType(v string) *InterceptorContextRequest {
s.ReqBodyType = &v
return s
}
func (s *InterceptorContextRequest) SetStyle(v string) *InterceptorContextRequest {
s.Style = &v
return s
}
func (s *InterceptorContextRequest) SetCredential(v credential.Credential) *InterceptorContextRequest {
s.Credential = v
return s
}
func (s *InterceptorContextRequest) SetSignatureVersion(v string) *InterceptorContextRequest {
s.SignatureVersion = &v
return s
}
func (s *InterceptorContextRequest) SetSignatureAlgorithm(v string) *InterceptorContextRequest {
s.SignatureAlgorithm = &v
return s
}
func (s *InterceptorContextRequest) SetUserAgent(v string) *InterceptorContextRequest {
s.UserAgent = &v
return s
}
type InterceptorContextConfiguration struct {
RegionId *string `json:"regionId,omitempty" xml:"regionId,omitempty" require:"true"`
Endpoint *string `json:"endpoint,omitempty" xml:"endpoint,omitempty"`
EndpointRule *string `json:"endpointRule,omitempty" xml:"endpointRule,omitempty"`
EndpointMap map[string]*string `json:"endpointMap,omitempty" xml:"endpointMap,omitempty"`
EndpointType *string `json:"endpointType,omitempty" xml:"endpointType,omitempty"`
Network *string `json:"network,omitempty" xml:"network,omitempty"`
Suffix *string `json:"suffix,omitempty" xml:"suffix,omitempty"`
}
func (s InterceptorContextConfiguration) String() string {
return tea.Prettify(s)
}
func (s InterceptorContextConfiguration) GoString() string {
return s.String()
}
func (s *InterceptorContextConfiguration) SetRegionId(v string) *InterceptorContextConfiguration {
s.RegionId = &v
return s
}
func (s *InterceptorContextConfiguration) SetEndpoint(v string) *InterceptorContextConfiguration {
s.Endpoint = &v
return s
}
func (s *InterceptorContextConfiguration) SetEndpointRule(v string) *InterceptorContextConfiguration {
s.EndpointRule = &v
return s
}
func (s *InterceptorContextConfiguration) SetEndpointMap(v map[string]*string) *InterceptorContextConfiguration {
s.EndpointMap = v
return s
}
func (s *InterceptorContextConfiguration) SetEndpointType(v string) *InterceptorContextConfiguration {
s.EndpointType = &v
return s
}
func (s *InterceptorContextConfiguration) SetNetwork(v string) *InterceptorContextConfiguration {
s.Network = &v
return s
}
func (s *InterceptorContextConfiguration) SetSuffix(v string) *InterceptorContextConfiguration {
s.Suffix = &v
return s
}
type InterceptorContextResponse struct {
StatusCode *int `json:"statusCode,omitempty" xml:"statusCode,omitempty"`
Headers map[string]*string `json:"headers,omitempty" xml:"headers,omitempty"`
Body io.Reader `json:"body,omitempty" xml:"body,omitempty"`
DeserializedBody interface{} `json:"deserializedBody,omitempty" xml:"deserializedBody,omitempty"`
}
func (s InterceptorContextResponse) String() string {
return tea.Prettify(s)
}
func (s InterceptorContextResponse) GoString() string {
return s.String()
}
func (s *InterceptorContextResponse) SetStatusCode(v int) *InterceptorContextResponse {
s.StatusCode = &v
return s
}
func (s *InterceptorContextResponse) SetHeaders(v map[string]*string) *InterceptorContextResponse {
s.Headers = v
return s
}
func (s *InterceptorContextResponse) SetBody(v io.Reader) *InterceptorContextResponse {
s.Body = v
return s
}
func (s *InterceptorContextResponse) SetDeserializedBody(v interface{}) *InterceptorContextResponse {
s.DeserializedBody = v
return s
}
type AttributeMap struct {
Attributes map[string]interface{} `json:"attributes,omitempty" xml:"attributes,omitempty" require:"true"`
Key map[string]*string `json:"key,omitempty" xml:"key,omitempty" require:"true"`
}
func (s AttributeMap) String() string {
return tea.Prettify(s)
}
func (s AttributeMap) GoString() string {
return s.String()
}
func (s *AttributeMap) SetAttributes(v map[string]interface{}) *AttributeMap {
s.Attributes = v
return s
}
func (s *AttributeMap) SetKey(v map[string]*string) *AttributeMap {
s.Key = v
return s
}
type ClientInterface interface {
ModifyConfiguration(context *InterceptorContext, attributeMap *AttributeMap) error
ModifyRequest(context *InterceptorContext, attributeMap *AttributeMap) error
ModifyResponse(context *InterceptorContext, attributeMap *AttributeMap) error
}
type Client struct {
}
func NewClient() (*Client, error) {
client := new(Client)
err := client.Init()
return client, err
}
func (client *Client) Init() (_err error) {
return nil
}
func (client *Client) ModifyConfiguration(context *InterceptorContext, attributeMap *AttributeMap) (_err error) {
panic("No Support!")
}
func (client *Client) ModifyRequest(context *InterceptorContext, attributeMap *AttributeMap) (_err error) {
panic("No Support!")
}
func (client *Client) ModifyResponse(context *InterceptorContext, attributeMap *AttributeMap) (_err error) {
panic("No Support!")
}

View File

@@ -0,0 +1,201 @@
Apache License, Version 2.0, January 2004 — http://www.apache.org/licenses/ (full text identical to the license reproduced above; Copyright (c) 2009-present, Alibaba Cloud All rights reserved.)

File diff suppressed because it is too large

201
vendor/github.com/alibabacloud-go/debug/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,201 @@
Apache License, Version 2.0, January 2004 — http://www.apache.org/licenses/ (full text identical to the license reproduced above)

View File

@@ -0,0 +1,12 @@
package debug
import (
"reflect"
"testing"
)
func assertEqual(t *testing.T, a, b interface{}) {
if !reflect.DeepEqual(a, b) {
t.Errorf("%v != %v", a, b)
}
}

36
vendor/github.com/alibabacloud-go/debug/debug/debug.go generated vendored Normal file
View File

@@ -0,0 +1,36 @@
package debug
import (
"fmt"
"os"
"strings"
)
type Debug func(format string, v ...interface{})
var hookGetEnv = func() string {
return os.Getenv("DEBUG")
}
var hookPrint = func(input string) {
fmt.Println(input)
}
func Init(flag string) Debug {
enable := false
env := hookGetEnv()
parts := strings.Split(env, ",")
for _, part := range parts {
if part == flag {
enable = true
break
}
}
return func(format string, v ...interface{}) {
if enable {
hookPrint(fmt.Sprintf(format, v...))
}
}
}
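
An illustrative use of the vendored debug helper above; the flag name "sdk" and the message are assumptions.

```go
package main

import "github.com/alibabacloud-go/debug/debug"

func main() {
	// Prints only when the DEBUG environment variable contains the flag,
	// e.g. DEBUG=sdk ./app
	d := debug.Init("sdk")
	d("request finished in %d ms", 42)
}
```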

View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

File diff suppressed because it is too large


@@ -0,0 +1,41 @@
// This file is auto-generated, don't edit it. Thanks.
/**
* Get endpoint
* @return string
*/
package service
import (
"fmt"
"strings"
"github.com/alibabacloud-go/tea/tea"
)
func GetEndpointRules(product, regionId, endpointType, network, suffix *string) (_result *string, _err error) {
if tea.StringValue(endpointType) == "regional" {
if tea.StringValue(regionId) == "" {
_err = fmt.Errorf("RegionId is empty, please set a valid RegionId")
return tea.String(""), _err
}
_result = tea.String(strings.Replace("<product><suffix><network>.<region_id>.aliyuncs.com",
"<region_id>", tea.StringValue(regionId), 1))
} else {
_result = tea.String("<product><suffix><network>.aliyuncs.com")
}
_result = tea.String(strings.Replace(tea.StringValue(_result),
"<product>", strings.ToLower(tea.StringValue(product)), 1))
if tea.StringValue(network) == "" || tea.StringValue(network) == "public" {
_result = tea.String(strings.Replace(tea.StringValue(_result), "<network>", "", 1))
} else {
_result = tea.String(strings.Replace(tea.StringValue(_result),
"<network>", "-"+tea.StringValue(network), 1))
}
if tea.StringValue(suffix) == "" {
_result = tea.String(strings.Replace(tea.StringValue(_result), "<suffix>", "", 1))
} else {
_result = tea.String(strings.Replace(tea.StringValue(_result),
"<suffix>", "-"+tea.StringValue(suffix), 1))
}
return _result, nil
}
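GetEndpointRules expands the "<product><suffix><network>.<region_id>.aliyuncs.com" template, dropping the suffix and network placeholders when they are empty or "public". A hedged sketch of resolving a regional endpoint; the import path github.com/alibabacloud-go/endpoint-util/service and the product/region values are assumptions:

package main

import (
	"fmt"

	endpointutil "github.com/alibabacloud-go/endpoint-util/service" // assumed import path
	"github.com/alibabacloud-go/tea/tea"
)

func main() {
	// Regional endpoint, public network, empty suffix: the template collapses
	// to "ocr.cn-hangzhou.aliyuncs.com" (product and region are examples).
	endpoint, err := endpointutil.GetEndpointRules(
		tea.String("ocr"), tea.String("cn-hangzhou"),
		tea.String("regional"), tea.String("public"), tea.String(""))
	if err != nil {
		panic(err) // returned when endpointType is "regional" but regionId is empty
	}
	fmt.Println(tea.StringValue(endpoint))
}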

vendor/github.com/alibabacloud-go/openapi-util/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,635 @@
// This file is auto-generated, don't edit it. Thanks.
/**
* This is for OpenApi Util
*/
package service
import (
"bytes"
"crypto"
"crypto/hmac"
"crypto/rand"
"crypto/rsa"
"crypto/sha1"
"crypto/sha256"
"crypto/x509"
"encoding/base64"
"encoding/hex"
"encoding/json"
"encoding/pem"
"errors"
"fmt"
"hash"
"io"
"net/http"
"net/textproto"
"net/url"
"reflect"
"sort"
"strconv"
"strings"
"time"
util "github.com/alibabacloud-go/tea-utils/service"
"github.com/alibabacloud-go/tea/tea"
"github.com/tjfoc/gmsm/sm3"
)
const (
PEM_BEGIN = "-----BEGIN RSA PRIVATE KEY-----\n"
PEM_END = "\n-----END RSA PRIVATE KEY-----"
)
type Sorter struct {
Keys []string
Vals []string
}
func newSorter(m map[string]string) *Sorter {
hs := &Sorter{
Keys: make([]string, 0, len(m)),
Vals: make([]string, 0, len(m)),
}
for k, v := range m {
hs.Keys = append(hs.Keys, k)
hs.Vals = append(hs.Vals, v)
}
return hs
}
// Sort is an additional function for function SignHeader.
func (hs *Sorter) Sort() {
sort.Sort(hs)
}
// Len is an additional function for function SignHeader.
func (hs *Sorter) Len() int {
return len(hs.Vals)
}
// Less is an additional function for function SignHeader.
func (hs *Sorter) Less(i, j int) bool {
return bytes.Compare([]byte(hs.Keys[i]), []byte(hs.Keys[j])) < 0
}
// Swap is an additional function for function SignHeader.
func (hs *Sorter) Swap(i, j int) {
hs.Vals[i], hs.Vals[j] = hs.Vals[j], hs.Vals[i]
hs.Keys[i], hs.Keys[j] = hs.Keys[j], hs.Keys[i]
}
/**
* Convert all params of body, except fields of readable (io.Reader) type, into content
* @param body source Model
* @param content target Model
* @return void
*/
func Convert(body interface{}, content interface{}) {
res := make(map[string]interface{})
val := reflect.ValueOf(body).Elem()
dataType := val.Type()
for i := 0; i < dataType.NumField(); i++ {
field := dataType.Field(i)
name, _ := field.Tag.Lookup("json")
name = strings.Split(name, ",omitempty")[0]
_, ok := val.Field(i).Interface().(io.Reader)
if !ok {
res[name] = val.Field(i).Interface()
}
}
byt, _ := json.Marshal(res)
json.Unmarshal(byt, content)
}
/**
* Get the string to be signed according to request
* @param request which contains signed messages
* @return the signed string
*/
func GetStringToSign(request *tea.Request) (_result *string) {
return tea.String(getStringToSign(request))
}
func getStringToSign(request *tea.Request) string {
resource := tea.StringValue(request.Pathname)
queryParams := request.Query
// sort QueryParams by key
var queryKeys []string
for key := range queryParams {
queryKeys = append(queryKeys, key)
}
sort.Strings(queryKeys)
tmp := ""
for i := 0; i < len(queryKeys); i++ {
queryKey := queryKeys[i]
v := tea.StringValue(queryParams[queryKey])
if v != "" {
tmp = tmp + "&" + queryKey + "=" + v
} else {
tmp = tmp + "&" + queryKey
}
}
if tmp != "" {
tmp = strings.TrimLeft(tmp, "&")
resource = resource + "?" + tmp
}
return getSignedStr(request, resource)
}
func getSignedStr(req *tea.Request, canonicalizedResource string) string {
temp := make(map[string]string)
for k, v := range req.Headers {
if strings.HasPrefix(strings.ToLower(k), "x-acs-") {
temp[strings.ToLower(k)] = tea.StringValue(v)
}
}
hs := newSorter(temp)
// Sort the headers by key in ascending order
hs.Sort()
// Get the canonicalizedOSSHeaders
canonicalizedOSSHeaders := ""
for i := range hs.Keys {
canonicalizedOSSHeaders += hs.Keys[i] + ":" + hs.Vals[i] + "\n"
}
// Fill in the remaining header values
// (when signing a URL, the date header carries the expires value)
date := tea.StringValue(req.Headers["date"])
accept := tea.StringValue(req.Headers["accept"])
contentType := tea.StringValue(req.Headers["content-type"])
contentMd5 := tea.StringValue(req.Headers["content-md5"])
signStr := tea.StringValue(req.Method) + "\n" + accept + "\n" + contentMd5 + "\n" + contentType + "\n" + date + "\n" + canonicalizedOSSHeaders + canonicalizedResource
return signStr
}
/**
* Get signature according to stringToSign, secret
* @param stringToSign the string to sign
* @param secret the AccessKey secret
* @return the signature
*/
func GetROASignature(stringToSign *string, secret *string) (_result *string) {
h := hmac.New(func() hash.Hash { return sha1.New() }, []byte(tea.StringValue(secret)))
io.WriteString(h, tea.StringValue(stringToSign))
signedStr := base64.StdEncoding.EncodeToString(h.Sum(nil))
return tea.String(signedStr)
}
func GetEndpoint(endpoint *string, server *bool, endpointType *string) *string {
if tea.StringValue(endpointType) == "internal" {
strs := strings.Split(tea.StringValue(endpoint), ".")
strs[0] += "-internal"
endpoint = tea.String(strings.Join(strs, "."))
}
if tea.BoolValue(server) && tea.StringValue(endpointType) == "accelerate" {
return tea.String("oss-accelerate.aliyuncs.com")
}
return endpoint
}
func HexEncode(raw []byte) *string {
return tea.String(hex.EncodeToString(raw))
}
func Hash(raw []byte, signatureAlgorithm *string) []byte {
signType := tea.StringValue(signatureAlgorithm)
if signType == "ACS3-HMAC-SHA256" || signType == "ACS3-RSA-SHA256" {
h := sha256.New()
h.Write(raw)
return h.Sum(nil)
} else if signType == "ACS3-HMAC-SM3" {
h := sm3.New()
h.Write(raw)
return h.Sum(nil)
}
return nil
}
func GetEncodePath(path *string) *string {
uri := tea.StringValue(path)
strs := strings.Split(uri, "/")
for i, v := range strs {
strs[i] = url.QueryEscape(v)
}
uri = strings.Join(strs, "/")
uri = strings.Replace(uri, "+", "%20", -1)
uri = strings.Replace(uri, "*", "%2A", -1)
uri = strings.Replace(uri, "%7E", "~", -1)
return tea.String(uri)
}
func GetEncodeParam(param *string) *string {
uri := tea.StringValue(param)
uri = url.QueryEscape(uri)
uri = strings.Replace(uri, "+", "%20", -1)
uri = strings.Replace(uri, "*", "%2A", -1)
uri = strings.Replace(uri, "%7E", "~", -1)
return tea.String(uri)
}
func GetAuthorization(request *tea.Request, signatureAlgorithm, payload, acesskey, secret *string) *string {
canonicalURI := tea.StringValue(request.Pathname)
if canonicalURI == "" {
canonicalURI = "/"
}
canonicalURI = strings.Replace(canonicalURI, "+", "%20", -1)
canonicalURI = strings.Replace(canonicalURI, "*", "%2A", -1)
canonicalURI = strings.Replace(canonicalURI, "%7E", "~", -1)
method := tea.StringValue(request.Method)
canonicalQueryString := getCanonicalQueryString(request.Query)
canonicalheaders, signedHeaders := getCanonicalHeaders(request.Headers)
canonicalRequest := method + "\n" + canonicalURI + "\n" + canonicalQueryString + "\n" + canonicalheaders + "\n" +
strings.Join(signedHeaders, ";") + "\n" + tea.StringValue(payload)
signType := tea.StringValue(signatureAlgorithm)
StringToSign := signType + "\n" + tea.StringValue(HexEncode(Hash([]byte(canonicalRequest), signatureAlgorithm)))
signature := tea.StringValue(HexEncode(SignatureMethod(tea.StringValue(secret), StringToSign, signType)))
auth := signType + " Credential=" + tea.StringValue(acesskey) + ",SignedHeaders=" +
strings.Join(signedHeaders, ";") + ",Signature=" + signature
return tea.String(auth)
}
func SignatureMethod(secret, source, signatureAlgorithm string) []byte {
if signatureAlgorithm == "ACS3-HMAC-SHA256" {
h := hmac.New(sha256.New, []byte(secret))
h.Write([]byte(source))
return h.Sum(nil)
} else if signatureAlgorithm == "ACS3-HMAC-SM3" {
h := hmac.New(sm3.New, []byte(secret))
h.Write([]byte(source))
return h.Sum(nil)
} else if signatureAlgorithm == "ACS3-RSA-SHA256" {
return rsaSign(source, secret)
}
return nil
}
func rsaSign(content, secret string) []byte {
h := crypto.SHA256.New()
h.Write([]byte(content))
hashed := h.Sum(nil)
priv, err := parsePrivateKey(secret)
if err != nil {
return nil
}
sign, err := rsa.SignPKCS1v15(rand.Reader, priv, crypto.SHA256, hashed)
if err != nil {
return nil
}
return sign
}
func parsePrivateKey(privateKey string) (*rsa.PrivateKey, error) {
privateKey = formatPrivateKey(privateKey)
block, _ := pem.Decode([]byte(privateKey))
if block == nil {
return nil, errors.New("PrivateKey is invalid")
}
priKey, err := x509.ParsePKCS8PrivateKey(block.Bytes)
if err != nil {
return nil, err
}
switch priKey.(type) {
case *rsa.PrivateKey:
return priKey.(*rsa.PrivateKey), nil
default:
return nil, nil
}
}
func formatPrivateKey(privateKey string) string {
if !strings.HasPrefix(privateKey, PEM_BEGIN) {
privateKey = PEM_BEGIN + privateKey
}
if !strings.HasSuffix(privateKey, PEM_END) {
privateKey += PEM_END
}
return privateKey
}
func getCanonicalHeaders(headers map[string]*string) (string, []string) {
tmp := make(map[string]string)
tmpHeader := http.Header{}
for k, v := range headers {
if strings.HasPrefix(strings.ToLower(k), "x-acs-") || strings.ToLower(k) == "host" ||
strings.ToLower(k) == "content-type" {
tmp[strings.ToLower(k)] = strings.TrimSpace(tea.StringValue(v))
tmpHeader.Add(strings.ToLower(k), strings.TrimSpace(tea.StringValue(v)))
}
}
hs := newSorter(tmp)
// Sort the headers by key in ascending order
hs.Sort()
canonicalheaders := ""
for _, key := range hs.Keys {
vals := tmpHeader[textproto.CanonicalMIMEHeaderKey(key)]
sort.Strings(vals)
canonicalheaders += key + ":" + strings.Join(vals, ",") + "\n"
}
return canonicalheaders, hs.Keys
}
func getCanonicalQueryString(query map[string]*string) string {
canonicalQueryString := ""
if tea.BoolValue(util.IsUnset(query)) {
return canonicalQueryString
}
tmp := make(map[string]string)
for k, v := range query {
tmp[k] = tea.StringValue(v)
}
hs := newSorter(tmp)
// Sort the query parameters by key in ascending order
hs.Sort()
for i := range hs.Keys {
if hs.Vals[i] != "" {
canonicalQueryString += "&" + hs.Keys[i] + "=" + url.QueryEscape(hs.Vals[i])
} else {
canonicalQueryString += "&" + hs.Keys[i] + "="
}
}
canonicalQueryString = strings.Replace(canonicalQueryString, "+", "%20", -1)
canonicalQueryString = strings.Replace(canonicalQueryString, "*", "%2A", -1)
canonicalQueryString = strings.Replace(canonicalQueryString, "%7E", "~", -1)
if canonicalQueryString != "" {
canonicalQueryString = strings.TrimLeft(canonicalQueryString, "&")
}
return canonicalQueryString
}
/**
* Parse filter into a form string
* @param filter object
* @return the string
*/
func ToForm(filter map[string]interface{}) (_result *string) {
tmp := make(map[string]interface{})
byt, _ := json.Marshal(filter)
d := json.NewDecoder(bytes.NewReader(byt))
d.UseNumber()
_ = d.Decode(&tmp)
result := make(map[string]*string)
for key, value := range tmp {
filterValue := reflect.ValueOf(value)
flatRepeatedList(filterValue, result, key)
}
m := util.AnyifyMapValue(result)
return util.ToFormString(m)
}
func flatRepeatedList(dataValue reflect.Value, result map[string]*string, prefix string) {
if !dataValue.IsValid() {
return
}
dataType := dataValue.Type()
if dataType.Kind().String() == "slice" {
handleRepeatedParams(dataValue, result, prefix)
} else if dataType.Kind().String() == "map" {
handleMap(dataValue, result, prefix)
} else {
result[prefix] = tea.String(fmt.Sprintf("%v", dataValue.Interface()))
}
}
func handleRepeatedParams(repeatedFieldValue reflect.Value, result map[string]*string, prefix string) {
if repeatedFieldValue.IsValid() && !repeatedFieldValue.IsNil() {
for m := 0; m < repeatedFieldValue.Len(); m++ {
elementValue := repeatedFieldValue.Index(m)
key := prefix + "." + strconv.Itoa(m+1)
fieldValue := reflect.ValueOf(elementValue.Interface())
if fieldValue.Kind().String() == "map" {
handleMap(fieldValue, result, key)
} else {
result[key] = tea.String(fmt.Sprintf("%v", fieldValue.Interface()))
}
}
}
}
func handleMap(valueField reflect.Value, result map[string]*string, prefix string) {
if valueField.IsValid() && valueField.String() != "" {
valueFieldType := valueField.Type()
if valueFieldType.Kind().String() == "map" {
var byt []byte
byt, _ = json.Marshal(valueField.Interface())
cache := make(map[string]interface{})
d := json.NewDecoder(bytes.NewReader(byt))
d.UseNumber()
_ = d.Decode(&cache)
for key, value := range cache {
pre := ""
if prefix != "" {
pre = prefix + "." + key
} else {
pre = key
}
fieldValue := reflect.ValueOf(value)
flatRepeatedList(fieldValue, result, pre)
}
}
}
}
/**
* Get timestamp
* @return the timestamp string
*/
func GetTimestamp() (_result *string) {
gmt := time.FixedZone("GMT", 0)
return tea.String(time.Now().In(gmt).Format("2006-01-02T15:04:05Z"))
}
/**
* Parse filter into an object whose type is map[string]*string
* @param filter query param
* @return the object
*/
func Query(filter interface{}) (_result map[string]*string) {
tmp := make(map[string]interface{})
byt, _ := json.Marshal(filter)
d := json.NewDecoder(bytes.NewReader(byt))
d.UseNumber()
_ = d.Decode(&tmp)
result := make(map[string]*string)
for key, value := range tmp {
filterValue := reflect.ValueOf(value)
flatRepeatedList(filterValue, result, key)
}
return result
}
/**
* Get signature according to signedParams, method and secret
* @param signedParams params which need to be signed
* @param method http method e.g. GET
* @param secret AccessKeySecret
* @return the signature
*/
func GetRPCSignature(signedParams map[string]*string, method *string, secret *string) (_result *string) {
stringToSign := buildRpcStringToSign(signedParams, tea.StringValue(method))
signature := sign(stringToSign, tea.StringValue(secret), "&")
return tea.String(signature)
}
/**
* Parse array into a string with specified style
* @param array the array
* @param prefix the prefix string
* @param style specified style, e.g. repeatList
* @return the string
*/
func ArrayToStringWithSpecifiedStyle(array interface{}, prefix *string, style *string) (_result *string) {
if tea.BoolValue(util.IsUnset(array)) {
return tea.String("")
}
sty := tea.StringValue(style)
if sty == "repeatList" {
tmp := map[string]interface{}{
tea.StringValue(prefix): array,
}
return flatRepeatList(tmp)
} else if sty == "simple" || sty == "spaceDelimited" || sty == "pipeDelimited" {
return flatArray(array, sty)
} else if sty == "json" {
return util.ToJSONString(array)
}
return tea.String("")
}
func ParseToMap(in interface{}) map[string]interface{} {
if tea.BoolValue(util.IsUnset(in)) {
return nil
}
tmp := make(map[string]interface{})
byt, _ := json.Marshal(in)
d := json.NewDecoder(bytes.NewReader(byt))
d.UseNumber()
err := d.Decode(&tmp)
if err != nil {
return nil
}
return tmp
}
func flatRepeatList(filter map[string]interface{}) (_result *string) {
tmp := make(map[string]interface{})
byt, _ := json.Marshal(filter)
d := json.NewDecoder(bytes.NewReader(byt))
d.UseNumber()
_ = d.Decode(&tmp)
result := make(map[string]*string)
for key, value := range tmp {
filterValue := reflect.ValueOf(value)
flatRepeatedList(filterValue, result, key)
}
res := make(map[string]string)
for k, v := range result {
res[k] = tea.StringValue(v)
}
hs := newSorter(res)
hs.Sort()
// Join the sorted key=value pairs, separated by "&&"
t := ""
for i := range hs.Keys {
if i == len(hs.Keys)-1 {
t += hs.Keys[i] + "=" + hs.Vals[i]
} else {
t += hs.Keys[i] + "=" + hs.Vals[i] + "&&"
}
}
return tea.String(t)
}
func flatArray(array interface{}, sty string) *string {
t := reflect.ValueOf(array)
strs := make([]string, 0)
for i := 0; i < t.Len(); i++ {
tmp := t.Index(i)
if tmp.Kind() == reflect.Ptr || tmp.Kind() == reflect.Interface {
tmp = tmp.Elem()
}
if tmp.Kind() == reflect.Ptr {
tmp = tmp.Elem()
}
if tmp.Kind() == reflect.String {
strs = append(strs, tmp.String())
} else {
inter := tmp.Interface()
byt, _ := json.Marshal(inter)
strs = append(strs, string(byt))
}
}
str := ""
if sty == "simple" {
str = strings.Join(strs, ",")
} else if sty == "spaceDelimited" {
str = strings.Join(strs, " ")
} else if sty == "pipeDelimited" {
str = strings.Join(strs, "|")
}
return tea.String(str)
}
func buildRpcStringToSign(signedParam map[string]*string, method string) (stringToSign string) {
signParams := make(map[string]string)
for key, value := range signedParam {
signParams[key] = tea.StringValue(value)
}
stringToSign = getUrlFormedMap(signParams)
stringToSign = strings.Replace(stringToSign, "+", "%20", -1)
stringToSign = strings.Replace(stringToSign, "*", "%2A", -1)
stringToSign = strings.Replace(stringToSign, "%7E", "~", -1)
stringToSign = url.QueryEscape(stringToSign)
stringToSign = method + "&%2F&" + stringToSign
return
}
func getUrlFormedMap(source map[string]string) (urlEncoded string) {
urlEncoder := url.Values{}
for key, value := range source {
urlEncoder.Add(key, value)
}
urlEncoded = urlEncoder.Encode()
return
}
func sign(stringToSign, accessKeySecret, secretSuffix string) string {
secret := accessKeySecret + secretSuffix
signedBytes := shaHmac1(stringToSign, secret)
signedString := base64.StdEncoding.EncodeToString(signedBytes)
return signedString
}
func shaHmac1(source, secret string) []byte {
key := []byte(secret)
hmac := hmac.New(sha1.New, key)
hmac.Write([]byte(source))
return hmac.Sum(nil)
}
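Taken together, these helpers cover the RPC signing flow (sorted, URL-encoded parameters signed with HMAC-SHA1) as well as the ROA and ACS3 variants. A small RPC sketch, assuming the import path github.com/alibabacloud-go/openapi-util/service; the action, version, and secret are placeholder values:

package main

import (
	"fmt"

	openapiutil "github.com/alibabacloud-go/openapi-util/service" // assumed import path
	"github.com/alibabacloud-go/tea/tea"
)

func main() {
	// buildRpcStringToSign sorts and percent-encodes these parameters, then
	// GetRPCSignature HMAC-SHA1 signs the result with secret + "&".
	params := map[string]*string{
		"Action":    tea.String("ExampleAction"), // placeholder action name
		"Version":   tea.String("2020-01-01"),    // placeholder API version
		"Timestamp": openapiutil.GetTimestamp(),  // UTC, "2006-01-02T15:04:05Z" layout
	}
	signature := openapiutil.GetRPCSignature(params, tea.String("GET"), tea.String("exampleSecret"))
	fmt.Println(tea.StringValue(signature))
}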

vendor/github.com/alibabacloud-go/tea-utils/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,468 @@
package service
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"reflect"
"runtime"
"strconv"
"strings"
"time"
"github.com/alibabacloud-go/tea/tea"
)
var defaultUserAgent = fmt.Sprintf("AlibabaCloud (%s; %s) Golang/%s Core/%s TeaDSL/1", runtime.GOOS, runtime.GOARCH, strings.Trim(runtime.Version(), "go"), "0.01")
type RuntimeOptions struct {
Autoretry *bool `json:"autoretry" xml:"autoretry"`
IgnoreSSL *bool `json:"ignoreSSL" xml:"ignoreSSL"`
MaxAttempts *int `json:"maxAttempts" xml:"maxAttempts"`
BackoffPolicy *string `json:"backoffPolicy" xml:"backoffPolicy"`
BackoffPeriod *int `json:"backoffPeriod" xml:"backoffPeriod"`
ReadTimeout *int `json:"readTimeout" xml:"readTimeout"`
ConnectTimeout *int `json:"connectTimeout" xml:"connectTimeout"`
LocalAddr *string `json:"localAddr" xml:"localAddr"`
HttpProxy *string `json:"httpProxy" xml:"httpProxy"`
HttpsProxy *string `json:"httpsProxy" xml:"httpsProxy"`
NoProxy *string `json:"noProxy" xml:"noProxy"`
MaxIdleConns *int `json:"maxIdleConns" xml:"maxIdleConns"`
Socks5Proxy *string `json:"socks5Proxy" xml:"socks5Proxy"`
Socks5NetWork *string `json:"socks5NetWork" xml:"socks5NetWork"`
KeepAlive *bool `json:"keepAlive" xml:"keepAlive"`
}
func (s RuntimeOptions) String() string {
return tea.Prettify(s)
}
func (s RuntimeOptions) GoString() string {
return s.String()
}
func (s *RuntimeOptions) SetAutoretry(v bool) *RuntimeOptions {
s.Autoretry = &v
return s
}
func (s *RuntimeOptions) SetIgnoreSSL(v bool) *RuntimeOptions {
s.IgnoreSSL = &v
return s
}
func (s *RuntimeOptions) SetMaxAttempts(v int) *RuntimeOptions {
s.MaxAttempts = &v
return s
}
func (s *RuntimeOptions) SetBackoffPolicy(v string) *RuntimeOptions {
s.BackoffPolicy = &v
return s
}
func (s *RuntimeOptions) SetBackoffPeriod(v int) *RuntimeOptions {
s.BackoffPeriod = &v
return s
}
func (s *RuntimeOptions) SetReadTimeout(v int) *RuntimeOptions {
s.ReadTimeout = &v
return s
}
func (s *RuntimeOptions) SetConnectTimeout(v int) *RuntimeOptions {
s.ConnectTimeout = &v
return s
}
func (s *RuntimeOptions) SetHttpProxy(v string) *RuntimeOptions {
s.HttpProxy = &v
return s
}
func (s *RuntimeOptions) SetHttpsProxy(v string) *RuntimeOptions {
s.HttpsProxy = &v
return s
}
func (s *RuntimeOptions) SetNoProxy(v string) *RuntimeOptions {
s.NoProxy = &v
return s
}
func (s *RuntimeOptions) SetMaxIdleConns(v int) *RuntimeOptions {
s.MaxIdleConns = &v
return s
}
func (s *RuntimeOptions) SetLocalAddr(v string) *RuntimeOptions {
s.LocalAddr = &v
return s
}
func (s *RuntimeOptions) SetSocks5Proxy(v string) *RuntimeOptions {
s.Socks5Proxy = &v
return s
}
func (s *RuntimeOptions) SetSocks5NetWork(v string) *RuntimeOptions {
s.Socks5NetWork = &v
return s
}
func (s *RuntimeOptions) SetKeepAlive(v bool) *RuntimeOptions {
s.KeepAlive = &v
return s
}
func ReadAsString(body io.Reader) (*string, error) {
byt, err := ioutil.ReadAll(body)
if err != nil {
return tea.String(""), err
}
r, ok := body.(io.ReadCloser)
if ok {
r.Close()
}
return tea.String(string(byt)), nil
}
func StringifyMapValue(a map[string]interface{}) map[string]*string {
res := make(map[string]*string)
for key, value := range a {
if value != nil {
switch value.(type) {
case string:
res[key] = tea.String(value.(string))
default:
byt, _ := json.Marshal(value)
res[key] = tea.String(string(byt))
}
}
}
return res
}
func AnyifyMapValue(a map[string]*string) map[string]interface{} {
res := make(map[string]interface{})
for key, value := range a {
res[key] = tea.StringValue(value)
}
return res
}
func ReadAsBytes(body io.Reader) ([]byte, error) {
byt, err := ioutil.ReadAll(body)
if err != nil {
return nil, err
}
r, ok := body.(io.ReadCloser)
if ok {
r.Close()
}
return byt, nil
}
func DefaultString(reaStr, defaultStr *string) *string {
if reaStr == nil {
return defaultStr
}
return reaStr
}
func ToJSONString(a interface{}) *string {
switch v := a.(type) {
case *string:
return v
case string:
return tea.String(v)
case []byte:
return tea.String(string(v))
case io.Reader:
byt, err := ioutil.ReadAll(v)
if err != nil {
return nil
}
return tea.String(string(byt))
}
byt, err := json.Marshal(a)
if err != nil {
return nil
}
return tea.String(string(byt))
}
func DefaultNumber(reaNum, defaultNum *int) *int {
if reaNum == nil {
return defaultNum
}
return reaNum
}
func ReadAsJSON(body io.Reader) (result interface{}, err error) {
byt, err := ioutil.ReadAll(body)
if err != nil {
return
}
if string(byt) == "" {
return
}
r, ok := body.(io.ReadCloser)
if ok {
r.Close()
}
d := json.NewDecoder(bytes.NewReader(byt))
d.UseNumber()
err = d.Decode(&result)
return
}
func GetNonce() *string {
return tea.String(getUUID())
}
func Empty(val *string) *bool {
return tea.Bool(val == nil || tea.StringValue(val) == "")
}
func ValidateModel(a interface{}) error {
if a == nil {
return nil
}
err := tea.Validate(a)
return err
}
func EqualString(val1, val2 *string) *bool {
return tea.Bool(tea.StringValue(val1) == tea.StringValue(val2))
}
func EqualNumber(val1, val2 *int) *bool {
return tea.Bool(tea.IntValue(val1) == tea.IntValue(val2))
}
func IsUnset(val interface{}) *bool {
if val == nil {
return tea.Bool(true)
}
v := reflect.ValueOf(val)
if v.Kind() == reflect.Ptr || v.Kind() == reflect.Slice || v.Kind() == reflect.Map {
return tea.Bool(v.IsNil())
}
valType := reflect.TypeOf(val)
valZero := reflect.Zero(valType)
return tea.Bool(valZero == v)
}
func ToBytes(a *string) []byte {
return []byte(tea.StringValue(a))
}
func AssertAsMap(a interface{}) map[string]interface{} {
r := reflect.ValueOf(a)
if r.Kind().String() != "map" {
panic(fmt.Sprintf("%v is not a map[string]interface{}", a))
}
res := make(map[string]interface{})
tmp := r.MapKeys()
for _, key := range tmp {
res[key.String()] = r.MapIndex(key).Interface()
}
return res
}
func AssertAsNumber(a interface{}) *int {
res := 0
switch a.(type) {
case int:
tmp := a.(int)
res = tmp
case *int:
tmp := a.(*int)
res = tea.IntValue(tmp)
default:
panic(fmt.Sprintf("%v is not a int", a))
}
return tea.Int(res)
}
func AssertAsBoolean(a interface{}) *bool {
res := false
switch a.(type) {
case bool:
tmp := a.(bool)
res = tmp
case *bool:
tmp := a.(*bool)
res = tea.BoolValue(tmp)
default:
panic(fmt.Sprintf("%v is not a bool", a))
}
return tea.Bool(res)
}
func AssertAsString(a interface{}) *string {
res := ""
switch a.(type) {
case string:
tmp := a.(string)
res = tmp
case *string:
tmp := a.(*string)
res = tea.StringValue(tmp)
default:
panic(fmt.Sprintf("%v is not a string", a))
}
return tea.String(res)
}
func AssertAsBytes(a interface{}) []byte {
res, ok := a.([]byte)
if !ok {
panic(fmt.Sprintf("%v is not []byte", a))
}
return res
}
func AssertAsReadable(a interface{}) io.Reader {
res, ok := a.(io.Reader)
if !ok {
panic(fmt.Sprintf("%v is not reader", a))
}
return res
}
func AssertAsArray(a interface{}) []interface{} {
r := reflect.ValueOf(a)
if r.Kind().String() != "array" && r.Kind().String() != "slice" {
panic(fmt.Sprintf("%v is not a [x]interface{}", a))
}
aLen := r.Len()
res := make([]interface{}, 0)
for i := 0; i < aLen; i++ {
res = append(res, r.Index(i).Interface())
}
return res
}
func ParseJSON(a *string) interface{} {
mapTmp := make(map[string]interface{})
d := json.NewDecoder(bytes.NewReader([]byte(tea.StringValue(a))))
d.UseNumber()
err := d.Decode(&mapTmp)
if err == nil {
return mapTmp
}
sliceTmp := make([]interface{}, 0)
d = json.NewDecoder(bytes.NewReader([]byte(tea.StringValue(a))))
d.UseNumber()
err = d.Decode(&sliceTmp)
if err == nil {
return sliceTmp
}
if num, err := strconv.Atoi(tea.StringValue(a)); err == nil {
return num
}
if ok, err := strconv.ParseBool(tea.StringValue(a)); err == nil {
return ok
}
if floa64tVal, err := strconv.ParseFloat(tea.StringValue(a), 64); err == nil {
return floa64tVal
}
return nil
}
func ToString(a []byte) *string {
return tea.String(string(a))
}
func ToMap(in interface{}) map[string]interface{} {
if in == nil {
return nil
}
res := tea.ToMap(in)
return res
}
func ToFormString(a map[string]interface{}) *string {
if a == nil {
return tea.String("")
}
res := ""
urlEncoder := url.Values{}
for key, value := range a {
v := fmt.Sprintf("%v", value)
urlEncoder.Add(key, v)
}
res = urlEncoder.Encode()
return tea.String(res)
}
func GetDateUTCString() *string {
return tea.String(time.Now().UTC().Format(http.TimeFormat))
}
func GetUserAgent(userAgent *string) *string {
if userAgent != nil && tea.StringValue(userAgent) != "" {
return tea.String(defaultUserAgent + " " + tea.StringValue(userAgent))
}
return tea.String(defaultUserAgent)
}
func Is2xx(code *int) *bool {
tmp := tea.IntValue(code)
return tea.Bool(tmp >= 200 && tmp < 300)
}
func Is3xx(code *int) *bool {
tmp := tea.IntValue(code)
return tea.Bool(tmp >= 300 && tmp < 400)
}
func Is4xx(code *int) *bool {
tmp := tea.IntValue(code)
return tea.Bool(tmp >= 400 && tmp < 500)
}
func Is5xx(code *int) *bool {
tmp := tea.IntValue(code)
return tea.Bool(tmp >= 500 && tmp < 600)
}
func Sleep(millisecond *int) error {
ms := tea.IntValue(millisecond)
time.Sleep(time.Duration(ms) * time.Millisecond)
return nil
}
func ToArray(in interface{}) []map[string]interface{} {
if tea.BoolValue(IsUnset(in)) {
return nil
}
tmp := make([]map[string]interface{}, 0)
byt, _ := json.Marshal(in)
d := json.NewDecoder(bytes.NewReader(byt))
d.UseNumber()
err := d.Decode(&tmp)
if err != nil {
return nil
}
return tmp
}
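A minimal usage sketch of the helpers above. The import path for this vendored utility package is not shown in the listing, so the `tea-utils/service` path and the `util` alias below are assumptions; `tea.String`/`tea.Int` come from the tea package vendored later in this commit.

```go
package main

import (
	"fmt"

	util "github.com/alibabacloud-go/tea-utils/service" // assumed vendored path
	"github.com/alibabacloud-go/tea/tea"
)

func main() {
	// ParseJSON tries, in order: JSON object, JSON array, int, bool, float64.
	v := util.ParseJSON(tea.String(`[1, 2, 3]`))
	arr := util.AssertAsArray(v) // panics if v is not a slice or array
	fmt.Println(len(arr))        // 3

	// ToFormString URL-encodes a map as application/x-www-form-urlencoded data.
	form := util.ToFormString(map[string]interface{}{"page": 1, "q": "formula"})
	fmt.Println(tea.StringValue(form)) // page=1&q=formula

	// Is2xx / Is3xx / Is4xx / Is5xx are simple status-code range checks.
	fmt.Println(tea.BoolValue(util.Is2xx(tea.Int(204)))) // true
}
```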

View File

@@ -0,0 +1,52 @@
package service
import (
"crypto/md5"
"crypto/rand"
"encoding/hex"
"hash"
rand2 "math/rand"
)
type UUID [16]byte
const numBytes = "1234567890"
func getUUID() (uuidHex string) {
uuid := newUUID()
uuidHex = hex.EncodeToString(uuid[:])
return
}
func randStringBytes(n int) string {
b := make([]byte, n)
for i := range b {
b[i] = numBytes[rand2.Intn(len(numBytes))]
}
return string(b)
}
func newUUID() UUID {
ns := UUID{}
safeRandom(ns[:])
u := newFromHash(md5.New(), ns, randStringBytes(16))
u[6] = (u[6] & 0x0f) | (byte(2) << 4)
u[8] = (u[8]&(0xff>>2) | (0x02 << 6))
return u
}
func newFromHash(h hash.Hash, ns UUID, name string) UUID {
u := UUID{}
h.Write(ns[:])
h.Write([]byte(name))
copy(u[:], h.Sum(nil))
return u
}
func safeRandom(dest []byte) {
if _, err := rand.Read(dest); err != nil {
panic(err)
}
}
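A short sketch of what the generator above produces. Since getUUID and randStringBytes are unexported, this would have to live in the same vendored package (for example in a _test.go file next to it); the function name demoUUID is illustrative only.

```go
package service

import "fmt"

// Sketch, same package as getUUID/randStringBytes above.
func demoUUID() {
	id := getUUID()      // hex encoding of a 16-byte UUID
	fmt.Println(len(id)) // 32

	fmt.Println(randStringBytes(6)) // e.g. "483920" (digits only)
}
```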

View File

@@ -0,0 +1,105 @@
package service
import (
"bytes"
"encoding/xml"
"fmt"
"reflect"
"strings"
"github.com/alibabacloud-go/tea/tea"
v2 "github.com/clbanning/mxj/v2"
)
func ToXML(obj map[string]interface{}) *string {
return tea.String(mapToXML(obj))
}
func ParseXml(val *string, result interface{}) map[string]interface{} {
resp := make(map[string]interface{})
start := getStartElement([]byte(tea.StringValue(val)))
if result == nil {
vm, err := v2.NewMapXml([]byte(tea.StringValue(val)))
if err != nil {
return nil
}
return vm
}
out, err := xmlUnmarshal([]byte(tea.StringValue(val)), result)
if err != nil {
return resp
}
resp[start] = out
return resp
}
func mapToXML(val map[string]interface{}) string {
res := ""
for key, value := range val {
switch value.(type) {
case []interface{}:
for _, v := range value.([]interface{}) {
switch v.(type) {
case map[string]interface{}:
res += `<` + key + `>`
res += mapToXML(v.(map[string]interface{}))
res += `</` + key + `>`
default:
if fmt.Sprintf("%v", v) != `<nil>` {
res += `<` + key + `>`
res += fmt.Sprintf("%v", v)
res += `</` + key + `>`
}
}
}
case map[string]interface{}:
res += `<` + key + `>`
res += mapToXML(value.(map[string]interface{}))
res += `</` + key + `>`
default:
if fmt.Sprintf("%v", value) != `<nil>` {
res += `<` + key + `>`
res += fmt.Sprintf("%v", value)
res += `</` + key + `>`
}
}
}
return res
}
func getStartElement(body []byte) string {
d := xml.NewDecoder(bytes.NewReader(body))
for {
tok, err := d.Token()
if err != nil {
return ""
}
if t, ok := tok.(xml.StartElement); ok {
return t.Name.Local
}
}
}
func xmlUnmarshal(body []byte, result interface{}) (interface{}, error) {
start := getStartElement(body)
dataValue := reflect.ValueOf(result).Elem()
dataType := dataValue.Type()
for i := 0; i < dataType.NumField(); i++ {
field := dataType.Field(i)
name, containsNameTag := field.Tag.Lookup("xml")
name = strings.Replace(name, ",omitempty", "", -1)
if containsNameTag {
if name == start {
realType := dataValue.Field(i).Type()
realValue := reflect.New(realType).Interface()
err := xml.Unmarshal(body, realValue)
if err != nil {
return nil, err
}
return realValue, nil
}
}
}
return nil, nil
}
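A sketch of round-tripping with the XML helpers above. The vendored import path is not visible in this listing, so the `tea-xml/service` path and the `xmlutil` alias are assumptions.

```go
package main

import (
	"fmt"

	xmlutil "github.com/alibabacloud-go/tea-xml/service" // assumed vendored path
	"github.com/alibabacloud-go/tea/tea"
)

func main() {
	// ToXML serializes nested maps; map iteration order is random,
	// so sibling elements may appear in any order.
	body := xmlutil.ToXML(map[string]interface{}{
		"Error": map[string]interface{}{"Code": "NoSuchKey", "Message": "missing"},
	})
	fmt.Println(tea.StringValue(body))
	// e.g. <Error><Code>NoSuchKey</Code><Message>missing</Message></Error>

	// With a nil result, ParseXml decodes generically (via mxj) into a map.
	parsed := xmlutil.ParseXml(body, nil)
	fmt.Println(parsed["Error"])
}
```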

201
vendor/github.com/alibabacloud-go/tea/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,333 @@
package tea
import (
"encoding/json"
"io"
"math"
"reflect"
"strconv"
"strings"
"unsafe"
jsoniter "github.com/json-iterator/go"
"github.com/modern-go/reflect2"
)
const maxUint = ^uint(0)
const maxInt = int(maxUint >> 1)
const minInt = -maxInt - 1
var jsonParser jsoniter.API
func init() {
jsonParser = jsoniter.Config{
EscapeHTML: true,
SortMapKeys: true,
ValidateJsonRawMessage: true,
CaseSensitive: true,
}.Froze()
jsonParser.RegisterExtension(newBetterFuzzyExtension())
}
func newBetterFuzzyExtension() jsoniter.DecoderExtension {
return jsoniter.DecoderExtension{
reflect2.DefaultTypeOfKind(reflect.String): &nullableFuzzyStringDecoder{},
reflect2.DefaultTypeOfKind(reflect.Bool): &fuzzyBoolDecoder{},
reflect2.DefaultTypeOfKind(reflect.Float32): &nullableFuzzyFloat32Decoder{},
reflect2.DefaultTypeOfKind(reflect.Float64): &nullableFuzzyFloat64Decoder{},
reflect2.DefaultTypeOfKind(reflect.Int): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(maxInt) || val < float64(minInt) {
iter.ReportError("fuzzy decode int", "exceed range")
return
}
*((*int)(ptr)) = int(val)
} else {
*((*int)(ptr)) = iter.ReadInt()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(maxUint) || val < 0 {
iter.ReportError("fuzzy decode uint", "exceed range")
return
}
*((*uint)(ptr)) = uint(val)
} else {
*((*uint)(ptr)) = iter.ReadUint()
}
}},
reflect2.DefaultTypeOfKind(reflect.Int8): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxInt8) || val < float64(math.MinInt8) {
iter.ReportError("fuzzy decode int8", "exceed range")
return
}
*((*int8)(ptr)) = int8(val)
} else {
*((*int8)(ptr)) = iter.ReadInt8()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint8): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxUint8) || val < 0 {
iter.ReportError("fuzzy decode uint8", "exceed range")
return
}
*((*uint8)(ptr)) = uint8(val)
} else {
*((*uint8)(ptr)) = iter.ReadUint8()
}
}},
reflect2.DefaultTypeOfKind(reflect.Int16): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxInt16) || val < float64(math.MinInt16) {
iter.ReportError("fuzzy decode int16", "exceed range")
return
}
*((*int16)(ptr)) = int16(val)
} else {
*((*int16)(ptr)) = iter.ReadInt16()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint16): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxUint16) || val < 0 {
iter.ReportError("fuzzy decode uint16", "exceed range")
return
}
*((*uint16)(ptr)) = uint16(val)
} else {
*((*uint16)(ptr)) = iter.ReadUint16()
}
}},
reflect2.DefaultTypeOfKind(reflect.Int32): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxInt32) || val < float64(math.MinInt32) {
iter.ReportError("fuzzy decode int32", "exceed range")
return
}
*((*int32)(ptr)) = int32(val)
} else {
*((*int32)(ptr)) = iter.ReadInt32()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint32): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxUint32) || val < 0 {
iter.ReportError("fuzzy decode uint32", "exceed range")
return
}
*((*uint32)(ptr)) = uint32(val)
} else {
*((*uint32)(ptr)) = iter.ReadUint32()
}
}},
reflect2.DefaultTypeOfKind(reflect.Int64): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxInt64) || val < float64(math.MinInt64) {
iter.ReportError("fuzzy decode int64", "exceed range")
return
}
*((*int64)(ptr)) = int64(val)
} else {
*((*int64)(ptr)) = iter.ReadInt64()
}
}},
reflect2.DefaultTypeOfKind(reflect.Uint64): &nullableFuzzyIntegerDecoder{func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator) {
if isFloat {
val := iter.ReadFloat64()
if val > float64(math.MaxUint64) || val < 0 {
iter.ReportError("fuzzy decode uint64", "exceed range")
return
}
*((*uint64)(ptr)) = uint64(val)
} else {
*((*uint64)(ptr)) = iter.ReadUint64()
}
}},
}
}
type nullableFuzzyStringDecoder struct {
}
func (decoder *nullableFuzzyStringDecoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
switch valueType {
case jsoniter.NumberValue:
var number json.Number
iter.ReadVal(&number)
*((*string)(ptr)) = string(number)
case jsoniter.StringValue:
*((*string)(ptr)) = iter.ReadString()
case jsoniter.BoolValue:
*((*string)(ptr)) = strconv.FormatBool(iter.ReadBool())
case jsoniter.NilValue:
iter.ReadNil()
*((*string)(ptr)) = ""
default:
iter.ReportError("fuzzyStringDecoder", "not number or string or bool")
}
}
type fuzzyBoolDecoder struct {
}
func (decoder *fuzzyBoolDecoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
switch valueType {
case jsoniter.BoolValue:
*((*bool)(ptr)) = iter.ReadBool()
case jsoniter.NumberValue:
var number json.Number
iter.ReadVal(&number)
num, err := number.Int64()
if err != nil {
iter.ReportError("fuzzyBoolDecoder", "get value from json.number failed")
}
if num == 0 {
*((*bool)(ptr)) = false
} else {
*((*bool)(ptr)) = true
}
case jsoniter.StringValue:
strValue := strings.ToLower(iter.ReadString())
if strValue == "true" {
*((*bool)(ptr)) = true
} else if strValue == "false" || strValue == "" {
*((*bool)(ptr)) = false
} else {
iter.ReportError("fuzzyBoolDecoder", "unsupported bool value: "+strValue)
}
case jsoniter.NilValue:
iter.ReadNil()
*((*bool)(ptr)) = false
default:
iter.ReportError("fuzzyBoolDecoder", "not number or string or nil")
}
}
type nullableFuzzyIntegerDecoder struct {
fun func(isFloat bool, ptr unsafe.Pointer, iter *jsoniter.Iterator)
}
func (decoder *nullableFuzzyIntegerDecoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
var str string
switch valueType {
case jsoniter.NumberValue:
var number json.Number
iter.ReadVal(&number)
str = string(number)
case jsoniter.StringValue:
str = iter.ReadString()
// support empty string
if str == "" {
str = "0"
}
case jsoniter.BoolValue:
if iter.ReadBool() {
str = "1"
} else {
str = "0"
}
case jsoniter.NilValue:
iter.ReadNil()
str = "0"
default:
iter.ReportError("fuzzyIntegerDecoder", "not number or string")
}
newIter := iter.Pool().BorrowIterator([]byte(str))
defer iter.Pool().ReturnIterator(newIter)
isFloat := strings.IndexByte(str, '.') != -1
decoder.fun(isFloat, ptr, newIter)
if newIter.Error != nil && newIter.Error != io.EOF {
iter.Error = newIter.Error
}
}
type nullableFuzzyFloat32Decoder struct {
}
func (decoder *nullableFuzzyFloat32Decoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
var str string
switch valueType {
case jsoniter.NumberValue:
*((*float32)(ptr)) = iter.ReadFloat32()
case jsoniter.StringValue:
str = iter.ReadString()
// support empty string
if str == "" {
*((*float32)(ptr)) = 0
return
}
newIter := iter.Pool().BorrowIterator([]byte(str))
defer iter.Pool().ReturnIterator(newIter)
*((*float32)(ptr)) = newIter.ReadFloat32()
if newIter.Error != nil && newIter.Error != io.EOF {
iter.Error = newIter.Error
}
case jsoniter.BoolValue:
// support bool to float32
if iter.ReadBool() {
*((*float32)(ptr)) = 1
} else {
*((*float32)(ptr)) = 0
}
case jsoniter.NilValue:
iter.ReadNil()
*((*float32)(ptr)) = 0
default:
iter.ReportError("nullableFuzzyFloat32Decoder", "not number or string")
}
}
type nullableFuzzyFloat64Decoder struct {
}
func (decoder *nullableFuzzyFloat64Decoder) Decode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
valueType := iter.WhatIsNext()
var str string
switch valueType {
case jsoniter.NumberValue:
*((*float64)(ptr)) = iter.ReadFloat64()
case jsoniter.StringValue:
str = iter.ReadString()
// support empty string
if str == "" {
*((*float64)(ptr)) = 0
return
}
newIter := iter.Pool().BorrowIterator([]byte(str))
defer iter.Pool().ReturnIterator(newIter)
*((*float64)(ptr)) = newIter.ReadFloat64()
if newIter.Error != nil && newIter.Error != io.EOF {
iter.Error = newIter.Error
}
case jsoniter.BoolValue:
// support bool to float64
if iter.ReadBool() {
*((*float64)(ptr)) = 1
} else {
*((*float64)(ptr)) = 0
}
case jsoniter.NilValue:
// support null
iter.ReadNil()
*((*float64)(ptr)) = 0
default:
iter.ReportError("nullableFuzzyFloat64Decoder", "not number or string")
}
}
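A sketch of the behavior the fuzzy decoders above enable. jsonParser is unexported, so this snippet would have to sit in the same tea package; the struct and function names are illustrative only.

```go
package tea

import "fmt"

// flexiblePayload shows fields whose JSON values may arrive with mixed types.
type flexiblePayload struct {
	Count int     `json:"count"`
	Ratio float64 `json:"ratio"`
	OK    bool    `json:"ok"`
	Name  string  `json:"name"`
}

func demoFuzzyDecode() {
	var p flexiblePayload
	// "count" arrives as a string, "ok" as a number, "name" as a number:
	// the registered fuzzy decoders accept all of them.
	err := jsonParser.Unmarshal([]byte(`{"count":"42","ratio":"0.5","ok":1,"name":7}`), &p)
	fmt.Println(err, p.Count, p.Ratio, p.OK, p.Name) // <nil> 42 0.5 true 7
}
```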

1142
vendor/github.com/alibabacloud-go/tea/tea/tea.go generated vendored Normal file

File diff suppressed because it is too large Load Diff

491
vendor/github.com/alibabacloud-go/tea/tea/trans.go generated vendored Normal file
View File

@@ -0,0 +1,491 @@
package tea
func String(a string) *string {
return &a
}
func StringValue(a *string) string {
if a == nil {
return ""
}
return *a
}
func Int(a int) *int {
return &a
}
func IntValue(a *int) int {
if a == nil {
return 0
}
return *a
}
func Int8(a int8) *int8 {
return &a
}
func Int8Value(a *int8) int8 {
if a == nil {
return 0
}
return *a
}
func Int16(a int16) *int16 {
return &a
}
func Int16Value(a *int16) int16 {
if a == nil {
return 0
}
return *a
}
func Int32(a int32) *int32 {
return &a
}
func Int32Value(a *int32) int32 {
if a == nil {
return 0
}
return *a
}
func Int64(a int64) *int64 {
return &a
}
func Int64Value(a *int64) int64 {
if a == nil {
return 0
}
return *a
}
func Bool(a bool) *bool {
return &a
}
func BoolValue(a *bool) bool {
if a == nil {
return false
}
return *a
}
func Uint(a uint) *uint {
return &a
}
func UintValue(a *uint) uint {
if a == nil {
return 0
}
return *a
}
func Uint8(a uint8) *uint8 {
return &a
}
func Uint8Value(a *uint8) uint8 {
if a == nil {
return 0
}
return *a
}
func Uint16(a uint16) *uint16 {
return &a
}
func Uint16Value(a *uint16) uint16 {
if a == nil {
return 0
}
return *a
}
func Uint32(a uint32) *uint32 {
return &a
}
func Uint32Value(a *uint32) uint32 {
if a == nil {
return 0
}
return *a
}
func Uint64(a uint64) *uint64 {
return &a
}
func Uint64Value(a *uint64) uint64 {
if a == nil {
return 0
}
return *a
}
func Float32(a float32) *float32 {
return &a
}
func Float32Value(a *float32) float32 {
if a == nil {
return 0
}
return *a
}
func Float64(a float64) *float64 {
return &a
}
func Float64Value(a *float64) float64 {
if a == nil {
return 0
}
return *a
}
func IntSlice(a []int) []*int {
if a == nil {
return nil
}
res := make([]*int, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func IntValueSlice(a []*int) []int {
if a == nil {
return nil
}
res := make([]int, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Int8Slice(a []int8) []*int8 {
if a == nil {
return nil
}
res := make([]*int8, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Int8ValueSlice(a []*int8) []int8 {
if a == nil {
return nil
}
res := make([]int8, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Int16Slice(a []int16) []*int16 {
if a == nil {
return nil
}
res := make([]*int16, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Int16ValueSlice(a []*int16) []int16 {
if a == nil {
return nil
}
res := make([]int16, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Int32Slice(a []int32) []*int32 {
if a == nil {
return nil
}
res := make([]*int32, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Int32ValueSlice(a []*int32) []int32 {
if a == nil {
return nil
}
res := make([]int32, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Int64Slice(a []int64) []*int64 {
if a == nil {
return nil
}
res := make([]*int64, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Int64ValueSlice(a []*int64) []int64 {
if a == nil {
return nil
}
res := make([]int64, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func UintSlice(a []uint) []*uint {
if a == nil {
return nil
}
res := make([]*uint, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func UintValueSlice(a []*uint) []uint {
if a == nil {
return nil
}
res := make([]uint, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Uint8Slice(a []uint8) []*uint8 {
if a == nil {
return nil
}
res := make([]*uint8, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Uint8ValueSlice(a []*uint8) []uint8 {
if a == nil {
return nil
}
res := make([]uint8, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Uint16Slice(a []uint16) []*uint16 {
if a == nil {
return nil
}
res := make([]*uint16, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Uint16ValueSlice(a []*uint16) []uint16 {
if a == nil {
return nil
}
res := make([]uint16, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Uint32Slice(a []uint32) []*uint32 {
if a == nil {
return nil
}
res := make([]*uint32, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Uint32ValueSlice(a []*uint32) []uint32 {
if a == nil {
return nil
}
res := make([]uint32, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Uint64Slice(a []uint64) []*uint64 {
if a == nil {
return nil
}
res := make([]*uint64, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Uint64ValueSlice(a []*uint64) []uint64 {
if a == nil {
return nil
}
res := make([]uint64, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Float32Slice(a []float32) []*float32 {
if a == nil {
return nil
}
res := make([]*float32, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Float32ValueSlice(a []*float32) []float32 {
if a == nil {
return nil
}
res := make([]float32, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func Float64Slice(a []float64) []*float64 {
if a == nil {
return nil
}
res := make([]*float64, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func Float64ValueSlice(a []*float64) []float64 {
if a == nil {
return nil
}
res := make([]float64, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func StringSlice(a []string) []*string {
if a == nil {
return nil
}
res := make([]*string, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func StringSliceValue(a []*string) []string {
if a == nil {
return nil
}
res := make([]string, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
func BoolSlice(a []bool) []*bool {
if a == nil {
return nil
}
res := make([]*bool, len(a))
for i := 0; i < len(a); i++ {
res[i] = &a[i]
}
return res
}
func BoolSliceValue(a []*bool) []bool {
if a == nil {
return nil
}
res := make([]bool, len(a))
for i := 0; i < len(a); i++ {
if a[i] != nil {
res[i] = *a[i]
}
}
return res
}
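A minimal sketch of the pointer helpers above, which the SDK uses pervasively for optional fields; only functions shown in trans.go are used here.

```go
package main

import (
	"fmt"

	"github.com/alibabacloud-go/tea/tea"
)

func main() {
	// Pointer constructors and nil-safe accessors.
	name := tea.String("document_ai")
	var missing *int // nil

	fmt.Println(tea.StringValue(name)) // document_ai
	fmt.Println(tea.IntValue(missing)) // 0 (nil-safe default)

	// Slice helpers convert between value slices and pointer slices.
	ids := tea.Int64Slice([]int64{1, 2, 3})
	fmt.Println(tea.Int64ValueSlice(ids)) // [1 2 3]
}
```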

64
vendor/github.com/alibabacloud-go/tea/utils/assert.go generated vendored Normal file
View File

@@ -0,0 +1,64 @@
package utils
import (
"reflect"
"strings"
"testing"
)
func isNil(object interface{}) bool {
if object == nil {
return true
}
value := reflect.ValueOf(object)
kind := value.Kind()
isNilableKind := containsKind(
[]reflect.Kind{
reflect.Chan, reflect.Func,
reflect.Interface, reflect.Map,
reflect.Ptr, reflect.Slice},
kind)
if isNilableKind && value.IsNil() {
return true
}
return false
}
func containsKind(kinds []reflect.Kind, kind reflect.Kind) bool {
for i := 0; i < len(kinds); i++ {
if kind == kinds[i] {
return true
}
}
return false
}
func AssertEqual(t *testing.T, a, b interface{}) {
if !reflect.DeepEqual(a, b) {
t.Errorf("%v != %v", a, b)
}
}
func AssertNil(t *testing.T, object interface{}) {
if !isNil(object) {
t.Errorf("%v is not nil", object)
}
}
func AssertNotNil(t *testing.T, object interface{}) {
if isNil(object) {
t.Errorf("%v is nil", object)
}
}
func AssertContains(t *testing.T, contains string, msgAndArgs ...string) {
for _, value := range msgAndArgs {
if ok := strings.Contains(contains, value); !ok {
t.Errorf("%s does not contain %s", contains, value)
}
}
}
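A small sketch of the assertion helpers above inside an ordinary Go test; the package and test names are placeholders, and in practice this would live in a _test.go file.

```go
package example

import (
	"testing"

	"github.com/alibabacloud-go/tea/utils"
)

// TestHelpers exercises the vendored assertion helpers.
func TestHelpers(t *testing.T) {
	utils.AssertEqual(t, 2+2, 4)
	utils.AssertNil(t, nil)
	utils.AssertNotNil(t, "value")
	utils.AssertContains(t, "hello world", "hello", "world")
}
```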

109
vendor/github.com/alibabacloud-go/tea/utils/logger.go generated vendored Normal file
View File

@@ -0,0 +1,109 @@
package utils
import (
"io"
"log"
"strings"
"time"
)
type Logger struct {
*log.Logger
formatTemplate string
isOpen bool
lastLogMsg string
}
var defaultLoggerTemplate = `{time} {channel}: "{method} {uri} HTTP/{version}" {code} {cost} {hostname}`
var loggerParam = []string{"{time}", "{start_time}", "{ts}", "{channel}", "{pid}", "{host}", "{method}", "{uri}", "{version}", "{target}", "{hostname}", "{code}", "{error}", "{req_headers}", "{res_body}", "{res_headers}", "{cost}"}
var logChannel string
func InitLogMsg(fieldMap map[string]string) {
for _, value := range loggerParam {
fieldMap[value] = ""
}
}
func (logger *Logger) SetFormatTemplate(template string) {
logger.formatTemplate = template
}
func (logger *Logger) GetFormatTemplate() string {
return logger.formatTemplate
}
func NewLogger(level string, channel string, out io.Writer, template string) *Logger {
if level == "" {
level = "info"
}
logChannel = "AlibabaCloud"
if channel != "" {
logChannel = channel
}
log := log.New(out, "["+strings.ToUpper(level)+"]", log.Lshortfile)
if template == "" {
template = defaultLoggerTemplate
}
return &Logger{
Logger: log,
formatTemplate: template,
isOpen: true,
}
}
func (logger *Logger) OpenLogger() {
logger.isOpen = true
}
func (logger *Logger) CloseLogger() {
logger.isOpen = false
}
func (logger *Logger) SetIsopen(isopen bool) {
logger.isOpen = isopen
}
func (logger *Logger) GetIsopen() bool {
return logger.isOpen
}
func (logger *Logger) SetLastLogMsg(lastLogMsg string) {
logger.lastLogMsg = lastLogMsg
}
func (logger *Logger) GetLastLogMsg() string {
return logger.lastLogMsg
}
func SetLogChannel(channel string) {
logChannel = channel
}
func (logger *Logger) PrintLog(fieldMap map[string]string, err error) {
if err != nil {
fieldMap["{error}"] = err.Error()
}
fieldMap["{time}"] = time.Now().Format("2006-01-02 15:04:05")
fieldMap["{ts}"] = getTimeInFormatISO8601()
fieldMap["{channel}"] = logChannel
if logger != nil {
logMsg := logger.formatTemplate
for key, value := range fieldMap {
logMsg = strings.Replace(logMsg, key, value, -1)
}
logger.lastLogMsg = logMsg
if logger.isOpen == true {
logger.Output(2, logMsg)
}
}
}
func getTimeInFormatISO8601() (timeStr string) {
gmt := time.FixedZone("GMT", 0)
return time.Now().In(gmt).Format("2006-01-02T15:04:05Z")
}
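A sketch of how the logger above is typically driven: create it with NewLogger, pre-fill the placeholder map with InitLogMsg, then render a line with PrintLog. The field values below are illustrative.

```go
package main

import (
	"os"

	"github.com/alibabacloud-go/tea/utils"
)

func main() {
	// Empty template falls back to defaultLoggerTemplate.
	logger := utils.NewLogger("info", "DocumentAI", os.Stdout, "")

	fieldMap := make(map[string]string)
	utils.InitLogMsg(fieldMap) // pre-populate every {placeholder} with ""
	fieldMap["{method}"] = "GET"
	fieldMap["{uri}"] = "/doc_ai/v1/task"
	fieldMap["{code}"] = "200"

	logger.PrintLog(fieldMap, nil)
	// e.g. [INFO]... 2025-12-10 18:33:37 DocumentAI: "GET /doc_ai/v1/task HTTP/" 200 ...
}
```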

View File

@@ -0,0 +1,60 @@
package utils
// ProgressEventType defines transfer progress event type
type ProgressEventType int
const (
// TransferStartedEvent transfer started, set TotalBytes
TransferStartedEvent ProgressEventType = 1 + iota
// TransferDataEvent transfer data, set ConsumedBytes and TotalBytes
TransferDataEvent
// TransferCompletedEvent transfer completed
TransferCompletedEvent
// TransferFailedEvent transfer encounters an error
TransferFailedEvent
)
// ProgressEvent defines progress event
type ProgressEvent struct {
ConsumedBytes int64
TotalBytes int64
RwBytes int64
EventType ProgressEventType
}
// ProgressListener listens for progress changes
type ProgressListener interface {
ProgressChanged(event *ProgressEvent)
}
// -------------------- Private --------------------
func NewProgressEvent(eventType ProgressEventType, consumed, total int64, rwBytes int64) *ProgressEvent {
return &ProgressEvent{
ConsumedBytes: consumed,
TotalBytes: total,
RwBytes: rwBytes,
EventType: eventType}
}
// PublishProgress publishes the progress event to the listener
func PublishProgress(listener ProgressListener, event *ProgressEvent) {
if listener != nil && event != nil {
listener.ProgressChanged(event)
}
}
func GetProgressListener(obj interface{}) ProgressListener {
if obj == nil {
return nil
}
listener, ok := obj.(ProgressListener)
if !ok {
return nil
}
return listener
}
type ReaderTracker struct {
CompletedBytes int64
}
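A sketch of wiring a listener to the progress types above, assuming this file belongs to the same tea utils package as the logger; the percentListener type is illustrative only.

```go
package main

import (
	"fmt"

	"github.com/alibabacloud-go/tea/utils"
)

// percentListener is a sketch of a ProgressListener implementation.
type percentListener struct{}

func (l *percentListener) ProgressChanged(event *utils.ProgressEvent) {
	if event.TotalBytes > 0 {
		fmt.Printf("%d%% (%d/%d bytes)\n",
			event.ConsumedBytes*100/event.TotalBytes,
			event.ConsumedBytes, event.TotalBytes)
	}
}

func main() {
	listener := utils.GetProgressListener(&percentListener{})
	utils.PublishProgress(listener,
		utils.NewProgressEvent(utils.TransferDataEvent, 512, 1024, 512))
	// 50% (512/1024 bytes)
}
```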

14
vendor/github.com/aliyun/aliyun-oss-go-sdk/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,14 @@
Copyright (c) 2015 aliyun.com
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

339
vendor/github.com/aliyun/aliyun-oss-go-sdk/oss/auth.go generated vendored Normal file
View File

@@ -0,0 +1,339 @@
package oss
import (
"bytes"
"crypto/hmac"
"crypto/sha1"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"fmt"
"hash"
"io"
"net/http"
"sort"
"strconv"
"strings"
"time"
)
// headerSorter defines the key-value structure for storing the sorted data in signHeader.
type headerSorter struct {
Keys []string
Vals []string
}
// getAdditionalHeaderKeys gets the configured additional header keys that exist in the request header
func (conn Conn) getAdditionalHeaderKeys(req *http.Request) ([]string, map[string]string) {
var keysList []string
keysMap := make(map[string]string)
srcKeys := make(map[string]string)
for k := range req.Header {
srcKeys[strings.ToLower(k)] = ""
}
for _, v := range conn.config.AdditionalHeaders {
if _, ok := srcKeys[strings.ToLower(v)]; ok {
keysMap[strings.ToLower(v)] = ""
}
}
for k := range keysMap {
keysList = append(keysList, k)
}
sort.Strings(keysList)
return keysList, keysMap
}
// getAdditionalHeaderKeysV4 gets the existing additional header keys for V4 signing (Content-MD5 and Content-Type are excluded)
func (conn Conn) getAdditionalHeaderKeysV4(req *http.Request) ([]string, map[string]string) {
var keysList []string
keysMap := make(map[string]string)
srcKeys := make(map[string]string)
for k := range req.Header {
srcKeys[strings.ToLower(k)] = ""
}
for _, v := range conn.config.AdditionalHeaders {
if _, ok := srcKeys[strings.ToLower(v)]; ok {
if !strings.EqualFold(v, HTTPHeaderContentMD5) && !strings.EqualFold(v, HTTPHeaderContentType) {
keysMap[strings.ToLower(v)] = ""
}
}
}
for k := range keysMap {
keysList = append(keysList, k)
}
sort.Strings(keysList)
return keysList, keysMap
}
// signHeader signs the header and sets it as the authorization header.
func (conn Conn) signHeader(req *http.Request, canonicalizedResource string, credentials Credentials) {
akIf := credentials
authorizationStr := ""
if conn.config.AuthVersion == AuthV4 {
strDay := ""
strDate := req.Header.Get(HttpHeaderOssDate)
if strDate == "" {
strDate = req.Header.Get(HTTPHeaderDate)
t, _ := time.Parse(http.TimeFormat, strDate)
strDay = t.Format("20060102")
} else {
t, _ := time.Parse(timeFormatV4, strDate)
strDay = t.Format("20060102")
}
signHeaderProduct := conn.config.GetSignProduct()
signHeaderRegion := conn.config.GetSignRegion()
additionalList, _ := conn.getAdditionalHeaderKeysV4(req)
if len(additionalList) > 0 {
authorizationFmt := "OSS4-HMAC-SHA256 Credential=%v/%v/%v/" + signHeaderProduct + "/aliyun_v4_request,AdditionalHeaders=%v,Signature=%v"
additionalHeadersStr := strings.Join(additionalList, ";")
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), strDay, signHeaderRegion, additionalHeadersStr, conn.getSignedStrV4(req, canonicalizedResource, akIf.GetAccessKeySecret(), nil))
} else {
authorizationFmt := "OSS4-HMAC-SHA256 Credential=%v/%v/%v/" + signHeaderProduct + "/aliyun_v4_request,Signature=%v"
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), strDay, signHeaderRegion, conn.getSignedStrV4(req, canonicalizedResource, akIf.GetAccessKeySecret(), nil))
}
} else if conn.config.AuthVersion == AuthV2 {
additionalList, _ := conn.getAdditionalHeaderKeys(req)
if len(additionalList) > 0 {
authorizationFmt := "OSS2 AccessKeyId:%v,AdditionalHeaders:%v,Signature:%v"
additionalHeadersStr := strings.Join(additionalList, ";")
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), additionalHeadersStr, conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret()))
} else {
authorizationFmt := "OSS2 AccessKeyId:%v,Signature:%v"
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret()))
}
} else {
// Get the final authorization string
authorizationStr = "OSS " + akIf.GetAccessKeyID() + ":" + conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret())
}
// Give the parameter "Authorization" value
req.Header.Set(HTTPHeaderAuthorization, authorizationStr)
}
func (conn Conn) getSignedStr(req *http.Request, canonicalizedResource string, keySecret string) string {
// Find out the "x-oss-"'s address in header of the request
ossHeadersMap := make(map[string]string)
additionalList, additionalMap := conn.getAdditionalHeaderKeys(req)
for k, v := range req.Header {
if strings.HasPrefix(strings.ToLower(k), "x-oss-") {
ossHeadersMap[strings.ToLower(k)] = v[0]
} else if conn.config.AuthVersion == AuthV2 {
if _, ok := additionalMap[strings.ToLower(k)]; ok {
ossHeadersMap[strings.ToLower(k)] = v[0]
}
}
}
hs := newHeaderSorter(ossHeadersMap)
// Sort the ossHeadersMap by the ascending order
hs.Sort()
// Get the canonicalizedOSSHeaders
canonicalizedOSSHeaders := ""
for i := range hs.Keys {
canonicalizedOSSHeaders += hs.Keys[i] + ":" + hs.Vals[i] + "\n"
}
// Give other parameters values
// when signing a URL, the date field carries the expiration time
date := req.Header.Get(HTTPHeaderDate)
contentType := req.Header.Get(HTTPHeaderContentType)
contentMd5 := req.Header.Get(HTTPHeaderContentMD5)
// default is v1 signature
signStr := req.Method + "\n" + contentMd5 + "\n" + contentType + "\n" + date + "\n" + canonicalizedOSSHeaders + canonicalizedResource
h := hmac.New(func() hash.Hash { return sha1.New() }, []byte(keySecret))
// v2 signature
if conn.config.AuthVersion == AuthV2 {
signStr = req.Method + "\n" + contentMd5 + "\n" + contentType + "\n" + date + "\n" + canonicalizedOSSHeaders + strings.Join(additionalList, ";") + "\n" + canonicalizedResource
h = hmac.New(func() hash.Hash { return sha256.New() }, []byte(keySecret))
}
if conn.config.LogLevel >= Debug {
conn.config.WriteLog(Debug, "[Req:%p]signStr:%s\n", req, EscapeLFString(signStr))
}
io.WriteString(h, signStr)
signedStr := base64.StdEncoding.EncodeToString(h.Sum(nil))
return signedStr
}
func (conn Conn) getSignedStrV4(req *http.Request, canonicalizedResource string, keySecret string, signingTime *time.Time) string {
// Find out the "x-oss-"'s address in header of the request
ossHeadersMap := make(map[string]string)
additionalList, additionalMap := conn.getAdditionalHeaderKeysV4(req)
for k, v := range req.Header {
lowKey := strings.ToLower(k)
if strings.EqualFold(lowKey, HTTPHeaderContentMD5) ||
strings.EqualFold(lowKey, HTTPHeaderContentType) ||
strings.HasPrefix(lowKey, "x-oss-") {
ossHeadersMap[lowKey] = strings.Trim(v[0], " ")
} else {
if _, ok := additionalMap[lowKey]; ok {
ossHeadersMap[lowKey] = strings.Trim(v[0], " ")
}
}
}
// get the signing date and day (e.g. 20210914) from signingTime or the request headers
signDate := ""
strDay := ""
if signingTime != nil {
signDate = signingTime.Format(timeFormatV4)
strDay = signingTime.Format(shortTimeFormatV4)
} else {
var t time.Time
// Required parameters
if date := req.Header.Get(HTTPHeaderDate); date != "" {
signDate = date
t, _ = time.Parse(http.TimeFormat, date)
}
if ossDate := req.Header.Get(HttpHeaderOssDate); ossDate != "" {
signDate = ossDate
t, _ = time.Parse(timeFormatV4, ossDate)
}
strDay = t.Format("20060102")
}
hs := newHeaderSorter(ossHeadersMap)
// Sort the ossHeadersMap by the ascending order
hs.Sort()
// Get the canonicalizedOSSHeaders
canonicalizedOSSHeaders := ""
for i := range hs.Keys {
canonicalizedOSSHeaders += hs.Keys[i] + ":" + hs.Vals[i] + "\n"
}
signStr := ""
// v4 signature
hashedPayload := DefaultContentSha256
if val := req.Header.Get(HttpHeaderOssContentSha256); val != "" {
hashedPayload = val
}
// subResource
resource := canonicalizedResource
subResource := ""
subPos := strings.LastIndex(canonicalizedResource, "?")
if subPos != -1 {
subResource = canonicalizedResource[subPos+1:]
resource = canonicalizedResource[0:subPos]
}
// get canonical request
canonicalRequest := req.Method + "\n" + resource + "\n" + subResource + "\n" + canonicalizedOSSHeaders + "\n" + strings.Join(additionalList, ";") + "\n" + hashedPayload
rh := sha256.New()
io.WriteString(rh, canonicalRequest)
hashedRequest := hex.EncodeToString(rh.Sum(nil))
if conn.config.LogLevel >= Debug {
conn.config.WriteLog(Debug, "[Req:%p]CanonicalRequest:%s\n", req, EscapeLFString(canonicalRequest))
}
// Product & Region
signedStrV4Product := conn.config.GetSignProduct()
signedStrV4Region := conn.config.GetSignRegion()
signStr = "OSS4-HMAC-SHA256" + "\n" + signDate + "\n" + strDay + "/" + signedStrV4Region + "/" + signedStrV4Product + "/aliyun_v4_request" + "\n" + hashedRequest
if conn.config.LogLevel >= Debug {
conn.config.WriteLog(Debug, "[Req:%p]signStr:%s\n", req, EscapeLFString(signStr))
}
h1 := hmac.New(func() hash.Hash { return sha256.New() }, []byte("aliyun_v4"+keySecret))
io.WriteString(h1, strDay)
h1Key := h1.Sum(nil)
h2 := hmac.New(func() hash.Hash { return sha256.New() }, h1Key)
io.WriteString(h2, signedStrV4Region)
h2Key := h2.Sum(nil)
h3 := hmac.New(func() hash.Hash { return sha256.New() }, h2Key)
io.WriteString(h3, signedStrV4Product)
h3Key := h3.Sum(nil)
h4 := hmac.New(func() hash.Hash { return sha256.New() }, h3Key)
io.WriteString(h4, "aliyun_v4_request")
h4Key := h4.Sum(nil)
h := hmac.New(func() hash.Hash { return sha256.New() }, h4Key)
io.WriteString(h, signStr)
return fmt.Sprintf("%x", h.Sum(nil))
}
func (conn Conn) getRtmpSignedStr(bucketName, channelName, playlistName string, expiration int64, keySecret string, params map[string]interface{}) string {
if params[HTTPParamAccessKeyID] == nil {
return ""
}
canonResource := fmt.Sprintf("/%s/%s", bucketName, channelName)
canonParamsKeys := []string{}
for key := range params {
if key != HTTPParamAccessKeyID && key != HTTPParamSignature && key != HTTPParamExpires && key != HTTPParamSecurityToken {
canonParamsKeys = append(canonParamsKeys, key)
}
}
sort.Strings(canonParamsKeys)
canonParamsStr := ""
for _, key := range canonParamsKeys {
canonParamsStr = fmt.Sprintf("%s%s:%s\n", canonParamsStr, key, params[key].(string))
}
expireStr := strconv.FormatInt(expiration, 10)
signStr := expireStr + "\n" + canonParamsStr + canonResource
h := hmac.New(func() hash.Hash { return sha1.New() }, []byte(keySecret))
io.WriteString(h, signStr)
signedStr := base64.StdEncoding.EncodeToString(h.Sum(nil))
return signedStr
}
// newHeaderSorter is an additional function for function SignHeader.
func newHeaderSorter(m map[string]string) *headerSorter {
hs := &headerSorter{
Keys: make([]string, 0, len(m)),
Vals: make([]string, 0, len(m)),
}
for k, v := range m {
hs.Keys = append(hs.Keys, k)
hs.Vals = append(hs.Vals, v)
}
return hs
}
// Sort is an additional function for function SignHeader.
func (hs *headerSorter) Sort() {
sort.Sort(hs)
}
// Len is an additional function for function SignHeader.
func (hs *headerSorter) Len() int {
return len(hs.Vals)
}
// Less is an additional function for function SignHeader.
func (hs *headerSorter) Less(i, j int) bool {
return bytes.Compare([]byte(hs.Keys[i]), []byte(hs.Keys[j])) < 0
}
// Swap is an additional function for function SignHeader.
func (hs *headerSorter) Swap(i, j int) {
hs.Vals[i], hs.Vals[j] = hs.Vals[j], hs.Vals[i]
hs.Keys[i], hs.Keys[j] = hs.Keys[j], hs.Keys[i]
}
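The SDK signs requests internally through the methods above; the sketch below only illustrates the V1 string-to-sign layout and HMAC-SHA1 step performed by getSignedStr. The function name, header values, and credentials are placeholders, not part of the SDK API.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// signV1 mirrors the V1 string-to-sign built in getSignedStr:
// VERB\nContent-MD5\nContent-Type\nDate\nCanonicalizedOSSHeaders + CanonicalizedResource
func signV1(method, contentMD5, contentType, date, canonicalizedOSSHeaders, canonicalizedResource, accessKeySecret string) string {
	signStr := method + "\n" + contentMD5 + "\n" + contentType + "\n" + date + "\n" +
		canonicalizedOSSHeaders + canonicalizedResource
	h := hmac.New(sha1.New, []byte(accessKeySecret))
	h.Write([]byte(signStr))
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	sig := signV1("GET", "", "", "Wed, 10 Dec 2025 10:00:00 GMT",
		"x-oss-meta-author:doc_ai\n", "/my-bucket/report.pdf", "exampleSecret")
	// The Authorization header then takes the form "OSS <AccessKeyId>:<signature>".
	fmt.Println("OSS exampleAccessKeyId:" + sig)
}
```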

1321
vendor/github.com/aliyun/aliyun-oss-go-sdk/oss/bucket.go generated vendored Normal file

File diff suppressed because it is too large Load Diff

2956
vendor/github.com/aliyun/aliyun-oss-go-sdk/oss/client.go generated vendored Normal file

File diff suppressed because it is too large Load Diff

301
vendor/github.com/aliyun/aliyun-oss-go-sdk/oss/conf.go generated vendored Normal file
View File

@@ -0,0 +1,301 @@
package oss
import (
"bytes"
"fmt"
"log"
"net"
"os"
"time"
)
// Define the level of the output log
const (
LogOff = iota
Error
Warn
Info
Debug
)
// LogTag Tag for each level of log
var LogTag = []string{"[error]", "[warn]", "[info]", "[debug]"}
// HTTPTimeout defines HTTP timeout.
type HTTPTimeout struct {
ConnectTimeout time.Duration
ReadWriteTimeout time.Duration
HeaderTimeout time.Duration
LongTimeout time.Duration
IdleConnTimeout time.Duration
}
// HTTPMaxConns defines max idle connections and max idle connections per host
type HTTPMaxConns struct {
MaxIdleConns int
MaxIdleConnsPerHost int
MaxConnsPerHost int
}
// Credentials is the interface for getting AccessKeyID, AccessKeySecret and SecurityToken
type Credentials interface {
GetAccessKeyID() string
GetAccessKeySecret() string
GetSecurityToken() string
}
// CredentialsProvider is the interface for obtaining credentials
type CredentialsProvider interface {
GetCredentials() Credentials
}
type CredentialsProviderE interface {
CredentialsProvider
GetCredentialsE() (Credentials, error)
}
type defaultCredentials struct {
config *Config
}
func (defCre *defaultCredentials) GetAccessKeyID() string {
return defCre.config.AccessKeyID
}
func (defCre *defaultCredentials) GetAccessKeySecret() string {
return defCre.config.AccessKeySecret
}
func (defCre *defaultCredentials) GetSecurityToken() string {
return defCre.config.SecurityToken
}
type defaultCredentialsProvider struct {
config *Config
}
func (defBuild *defaultCredentialsProvider) GetCredentials() Credentials {
return &defaultCredentials{config: defBuild.config}
}
type envCredentials struct {
AccessKeyId string
AccessKeySecret string
SecurityToken string
}
type EnvironmentVariableCredentialsProvider struct {
cred Credentials
}
func (credentials *envCredentials) GetAccessKeyID() string {
return credentials.AccessKeyId
}
func (credentials *envCredentials) GetAccessKeySecret() string {
return credentials.AccessKeySecret
}
func (credentials *envCredentials) GetSecurityToken() string {
return credentials.SecurityToken
}
func (defBuild *EnvironmentVariableCredentialsProvider) GetCredentials() Credentials {
var accessID, accessKey, token string
if defBuild.cred == nil {
accessID = os.Getenv("OSS_ACCESS_KEY_ID")
accessKey = os.Getenv("OSS_ACCESS_KEY_SECRET")
token = os.Getenv("OSS_SESSION_TOKEN")
} else {
accessID = defBuild.cred.GetAccessKeyID()
accessKey = defBuild.cred.GetAccessKeySecret()
token = defBuild.cred.GetSecurityToken()
}
return &envCredentials{
AccessKeyId: accessID,
AccessKeySecret: accessKey,
SecurityToken: token,
}
}
func NewEnvironmentVariableCredentialsProvider() (EnvironmentVariableCredentialsProvider, error) {
var provider EnvironmentVariableCredentialsProvider
accessID := os.Getenv("OSS_ACCESS_KEY_ID")
if accessID == "" {
return provider, fmt.Errorf("access key id is empty!")
}
accessKey := os.Getenv("OSS_ACCESS_KEY_SECRET")
if accessKey == "" {
return provider, fmt.Errorf("access key secret is empty!")
}
token := os.Getenv("OSS_SESSION_TOKEN")
envCredential := &envCredentials{
AccessKeyId: accessID,
AccessKeySecret: accessKey,
SecurityToken: token,
}
return EnvironmentVariableCredentialsProvider{
cred: envCredential,
}, nil
}
// Config defines oss configuration
type Config struct {
Endpoint string // OSS endpoint
AccessKeyID string // AccessId
AccessKeySecret string // AccessKey
RetryTimes uint // Retry count; by default it's 5.
UserAgent string // SDK name/version/system information
IsDebug bool // Enable debug mode. Default is false.
Timeout uint // Timeout in seconds. By default it's 60.
SecurityToken string // STS Token
IsCname bool // If cname is in the endpoint.
IsPathStyle bool // If Path Style is in the endpoint.
HTTPTimeout HTTPTimeout // HTTP timeout
HTTPMaxConns HTTPMaxConns // Http max connections
IsUseProxy bool // Flag of using proxy.
ProxyHost string // Flag of using proxy host.
IsAuthProxy bool // Flag of needing authentication.
ProxyUser string // Proxy user
ProxyPassword string // Proxy password
IsEnableMD5 bool // Flag of enabling MD5 for upload.
MD5Threshold int64 // Memory footprint threshold for each MD5 computation (16MB is the default), in byte. When the data is more than that, temp file is used.
IsEnableCRC bool // Flag of enabling CRC for upload.
LogLevel int // Log level
Logger *log.Logger // For write log
UploadLimitSpeed int // Upload limit speed:KB/s, 0 is unlimited
UploadLimiter *OssLimiter // Bandwidth limit reader for upload
DownloadLimitSpeed int // Download limit speed:KB/s, 0 is unlimited
DownloadLimiter *OssLimiter // Bandwidth limit reader for download
CredentialsProvider CredentialsProvider // User provides interface to get AccessKeyID, AccessKeySecret, SecurityToken
LocalAddr net.Addr // local client host info
UserSetUa bool // UserAgent is set by user or not
AuthVersion AuthVersionType // v1 or v2, v4 signature,default is v1
AdditionalHeaders []string // special http headers needed to be sign
RedirectEnabled bool // only effective from go1.7 onward, enable http redirect or not
InsecureSkipVerify bool // for https, Whether to skip verifying the server certificate file
Region string // such as cn-hangzhou
CloudBoxId string //
Product string // oss or oss-cloudbox, default is oss
VerifyObjectStrict bool // a flag for verifying the object name strictly. Default is enabled.
}
// LimitUploadSpeed uploadSpeed:KB/s, 0 is unlimited,default is 0
func (config *Config) LimitUploadSpeed(uploadSpeed int) error {
if uploadSpeed < 0 {
return fmt.Errorf("invalid argument, the value of uploadSpeed is less than 0")
} else if uploadSpeed == 0 {
config.UploadLimitSpeed = 0
config.UploadLimiter = nil
return nil
}
var err error
config.UploadLimiter, err = GetOssLimiter(uploadSpeed)
if err == nil {
config.UploadLimitSpeed = uploadSpeed
}
return err
}
// LimitDownLoadSpeed downloadSpeed:KB/s, 0 is unlimited,default is 0
func (config *Config) LimitDownloadSpeed(downloadSpeed int) error {
if downloadSpeed < 0 {
return fmt.Errorf("invalid argument, the value of downloadSpeed is less than 0")
} else if downloadSpeed == 0 {
config.DownloadLimitSpeed = 0
config.DownloadLimiter = nil
return nil
}
var err error
config.DownloadLimiter, err = GetOssLimiter(downloadSpeed)
if err == nil {
config.DownloadLimitSpeed = downloadSpeed
}
return err
}
// WriteLog output log function
func (config *Config) WriteLog(LogLevel int, format string, a ...interface{}) {
if config.LogLevel < LogLevel || config.Logger == nil {
return
}
var logBuffer bytes.Buffer
logBuffer.WriteString(LogTag[LogLevel-1])
logBuffer.WriteString(fmt.Sprintf(format, a...))
config.Logger.Printf("%s", logBuffer.String())
}
// for get Credentials
func (config *Config) GetCredentials() Credentials {
return config.CredentialsProvider.GetCredentials()
}
// for get Sign Product
func (config *Config) GetSignProduct() string {
if config.CloudBoxId != "" {
return "oss-cloudbox"
}
return "oss"
}
// for get Sign Region
func (config *Config) GetSignRegion() string {
if config.CloudBoxId != "" {
return config.CloudBoxId
}
return config.Region
}
// getDefaultOssConfig gets the default configuration.
func getDefaultOssConfig() *Config {
config := Config{}
config.Endpoint = ""
config.AccessKeyID = ""
config.AccessKeySecret = ""
config.RetryTimes = 5
config.IsDebug = false
config.UserAgent = userAgent()
config.Timeout = 60 // Seconds
config.SecurityToken = ""
config.IsCname = false
config.IsPathStyle = false
config.HTTPTimeout.ConnectTimeout = time.Second * 30 // 30s
config.HTTPTimeout.ReadWriteTimeout = time.Second * 60 // 60s
config.HTTPTimeout.HeaderTimeout = time.Second * 60 // 60s
config.HTTPTimeout.LongTimeout = time.Second * 300 // 300s
config.HTTPTimeout.IdleConnTimeout = time.Second * 50 // 50s
config.HTTPMaxConns.MaxIdleConns = 100
config.HTTPMaxConns.MaxIdleConnsPerHost = 100
config.IsUseProxy = false
config.ProxyHost = ""
config.IsAuthProxy = false
config.ProxyUser = ""
config.ProxyPassword = ""
config.MD5Threshold = 16 * 1024 * 1024 // 16MB
config.IsEnableMD5 = false
config.IsEnableCRC = true
config.LogLevel = LogOff
config.Logger = log.New(os.Stdout, "", log.LstdFlags)
provider := &defaultCredentialsProvider{config: &config}
config.CredentialsProvider = provider
config.AuthVersion = AuthV1
config.RedirectEnabled = true
config.InsecureSkipVerify = false
config.Product = "oss"
config.VerifyObjectStrict = true
return &config
}
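A sketch of the environment-variable credentials provider defined above. It uses only functions visible in this file; the final hand-off to the client constructor lives in client.go, whose diff is suppressed above, so that step is only referenced in a comment.

```go
package main

import (
	"fmt"
	"os"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	// Credentials are read from OSS_ACCESS_KEY_ID / OSS_ACCESS_KEY_SECRET
	// (and optionally OSS_SESSION_TOKEN), as implemented above.
	os.Setenv("OSS_ACCESS_KEY_ID", "exampleId")
	os.Setenv("OSS_ACCESS_KEY_SECRET", "exampleSecret")

	provider, err := oss.NewEnvironmentVariableCredentialsProvider()
	if err != nil {
		panic(err)
	}
	cred := provider.GetCredentials()
	fmt.Println(cred.GetAccessKeyID()) // exampleId

	// The provider would normally be passed to the client constructor in
	// client.go (suppressed above), e.g. via a credentials-provider option.
}
```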

1021
vendor/github.com/aliyun/aliyun-oss-go-sdk/oss/conn.go generated vendored Normal file

File diff suppressed because it is too large Load Diff

273
vendor/github.com/aliyun/aliyun-oss-go-sdk/oss/const.go generated vendored Normal file
View File

@@ -0,0 +1,273 @@
package oss
import "os"
// ACLType bucket/object ACL
type ACLType string
const (
// ACLPrivate definition : private read and write
ACLPrivate ACLType = "private"
// ACLPublicRead definition : public read and private write
ACLPublicRead ACLType = "public-read"
// ACLPublicReadWrite definition : public read and public write
ACLPublicReadWrite ACLType = "public-read-write"
// ACLDefault Object. It's only applicable for object.
ACLDefault ACLType = "default"
)
// bucket versioning status
type VersioningStatus string
const (
// Versioning Status definition: Enabled
VersionEnabled VersioningStatus = "Enabled"
// Versioning Status definition: Suspended
VersionSuspended VersioningStatus = "Suspended"
)
// MetadataDirectiveType specifying whether use the metadata of source object when copying object.
type MetadataDirectiveType string
const (
// MetaCopy the target object's metadata is copied from the source one
MetaCopy MetadataDirectiveType = "COPY"
// MetaReplace the target object's metadata is created as part of the copy request (not same as the source one)
MetaReplace MetadataDirectiveType = "REPLACE"
)
// TaggingDirectiveType specifying whether use the tagging of source object when copying object.
type TaggingDirectiveType string
const (
// TaggingCopy the target object's tagging is copied from the source one
TaggingCopy TaggingDirectiveType = "COPY"
// TaggingReplace the target object's tagging is created as part of the copy request (not same as the source one)
TaggingReplace TaggingDirectiveType = "REPLACE"
)
// AlgorithmType specifying the server side encryption algorithm name
type AlgorithmType string
const (
KMSAlgorithm AlgorithmType = "KMS"
AESAlgorithm AlgorithmType = "AES256"
SM4Algorithm AlgorithmType = "SM4"
)
// StorageClassType bucket storage type
type StorageClassType string
const (
// StorageStandard standard
StorageStandard StorageClassType = "Standard"
// StorageIA infrequent access
StorageIA StorageClassType = "IA"
// StorageArchive archive
StorageArchive StorageClassType = "Archive"
// StorageColdArchive cold archive
StorageColdArchive StorageClassType = "ColdArchive"
// StorageDeepColdArchive deep cold archive
StorageDeepColdArchive StorageClassType = "DeepColdArchive"
)
// DataRedundancyType bucket data redundancy type
type DataRedundancyType string
const (
// RedundancyLRS Local redundancy, default value
RedundancyLRS DataRedundancyType = "LRS"
// RedundancyZRS Same city redundancy
RedundancyZRS DataRedundancyType = "ZRS"
)
// ObjecthashFuncType the hash function type used to compute the object hash
type ObjecthashFuncType string
const (
HashFuncSha1 ObjecthashFuncType = "SHA-1"
HashFuncSha256 ObjecthashFuncType = "SHA-256"
)
// PayerType the type of request payer
type PayerType string
const (
// Requester the requester pays for the request
Requester PayerType = "Requester"
// BucketOwner the bucket owner pays for the request
BucketOwner PayerType = "BucketOwner"
)
// RestoreMode the restore mode for cold archive objects
type RestoreMode string
const (
// RestoreExpedited the object will be restored within 1 hour
RestoreExpedited RestoreMode = "Expedited"
// RestoreStandard the object will be restored within 2-5 hours
RestoreStandard RestoreMode = "Standard"
// RestoreBulk the object will be restored within 5-10 hours
RestoreBulk RestoreMode = "Bulk"
)
// HTTPMethod HTTP request method
type HTTPMethod string
const (
// HTTPGet HTTP GET
HTTPGet HTTPMethod = "GET"
// HTTPPut HTTP PUT
HTTPPut HTTPMethod = "PUT"
// HTTPHead HTTP HEAD
HTTPHead HTTPMethod = "HEAD"
// HTTPPost HTTP POST
HTTPPost HTTPMethod = "POST"
// HTTPDelete HTTP DELETE
HTTPDelete HTTPMethod = "DELETE"
)
// HTTP headers
const (
HTTPHeaderAcceptEncoding string = "Accept-Encoding"
HTTPHeaderAuthorization = "Authorization"
HTTPHeaderCacheControl = "Cache-Control"
HTTPHeaderContentDisposition = "Content-Disposition"
HTTPHeaderContentEncoding = "Content-Encoding"
HTTPHeaderContentLength = "Content-Length"
HTTPHeaderContentMD5 = "Content-MD5"
HTTPHeaderContentType = "Content-Type"
HTTPHeaderContentLanguage = "Content-Language"
HTTPHeaderDate = "Date"
HTTPHeaderEtag = "ETag"
HTTPHeaderExpires = "Expires"
HTTPHeaderHost = "Host"
HTTPHeaderLastModified = "Last-Modified"
HTTPHeaderRange = "Range"
HTTPHeaderLocation = "Location"
HTTPHeaderOrigin = "Origin"
HTTPHeaderServer = "Server"
HTTPHeaderUserAgent = "User-Agent"
HTTPHeaderIfModifiedSince = "If-Modified-Since"
HTTPHeaderIfUnmodifiedSince = "If-Unmodified-Since"
HTTPHeaderIfMatch = "If-Match"
HTTPHeaderIfNoneMatch = "If-None-Match"
HTTPHeaderACReqMethod = "Access-Control-Request-Method"
HTTPHeaderACReqHeaders = "Access-Control-Request-Headers"
HTTPHeaderOssACL = "X-Oss-Acl"
HTTPHeaderOssMetaPrefix = "X-Oss-Meta-"
HTTPHeaderOssObjectACL = "X-Oss-Object-Acl"
HTTPHeaderOssSecurityToken = "X-Oss-Security-Token"
HTTPHeaderOssServerSideEncryption = "X-Oss-Server-Side-Encryption"
HTTPHeaderOssServerSideEncryptionKeyID = "X-Oss-Server-Side-Encryption-Key-Id"
HTTPHeaderOssServerSideDataEncryption = "X-Oss-Server-Side-Data-Encryption"
HTTPHeaderSSECAlgorithm = "X-Oss-Server-Side-Encryption-Customer-Algorithm"
HTTPHeaderSSECKey = "X-Oss-Server-Side-Encryption-Customer-Key"
HTTPHeaderSSECKeyMd5 = "X-Oss-Server-Side-Encryption-Customer-Key-MD5"
HTTPHeaderOssCopySource = "X-Oss-Copy-Source"
HTTPHeaderOssCopySourceRange = "X-Oss-Copy-Source-Range"
HTTPHeaderOssCopySourceIfMatch = "X-Oss-Copy-Source-If-Match"
HTTPHeaderOssCopySourceIfNoneMatch = "X-Oss-Copy-Source-If-None-Match"
HTTPHeaderOssCopySourceIfModifiedSince = "X-Oss-Copy-Source-If-Modified-Since"
HTTPHeaderOssCopySourceIfUnmodifiedSince = "X-Oss-Copy-Source-If-Unmodified-Since"
HTTPHeaderOssMetadataDirective = "X-Oss-Metadata-Directive"
HTTPHeaderOssNextAppendPosition = "X-Oss-Next-Append-Position"
HTTPHeaderOssRequestID = "X-Oss-Request-Id"
HTTPHeaderOssCRC64 = "X-Oss-Hash-Crc64ecma"
HTTPHeaderOssSymlinkTarget = "X-Oss-Symlink-Target"
HTTPHeaderOssStorageClass = "X-Oss-Storage-Class"
HTTPHeaderOssCallback = "X-Oss-Callback"
HTTPHeaderOssCallbackVar = "X-Oss-Callback-Var"
HTTPHeaderOssRequester = "X-Oss-Request-Payer"
HTTPHeaderOssTagging = "X-Oss-Tagging"
HTTPHeaderOssTaggingDirective = "X-Oss-Tagging-Directive"
HTTPHeaderOssTrafficLimit = "X-Oss-Traffic-Limit"
HTTPHeaderOssForbidOverWrite = "X-Oss-Forbid-Overwrite"
HTTPHeaderOssRangeBehavior = "X-Oss-Range-Behavior"
HTTPHeaderOssTaskID = "X-Oss-Task-Id"
HTTPHeaderOssHashCtx = "X-Oss-Hash-Ctx"
HTTPHeaderOssMd5Ctx = "X-Oss-Md5-Ctx"
HTTPHeaderAllowSameActionOverLap = "X-Oss-Allow-Same-Action-Overlap"
HttpHeaderOssDate = "X-Oss-Date"
HttpHeaderOssContentSha256 = "X-Oss-Content-Sha256"
HttpHeaderOssNotification = "X-Oss-Notification"
HTTPHeaderOssEc = "X-Oss-Ec"
HTTPHeaderOssErr = "X-Oss-Err"
)
// HTTP Param
const (
HTTPParamExpires = "Expires"
HTTPParamAccessKeyID = "OSSAccessKeyId"
HTTPParamSignature = "Signature"
HTTPParamSecurityToken = "security-token"
HTTPParamPlaylistName = "playlistName"
HTTPParamSignatureVersion = "x-oss-signature-version"
HTTPParamExpiresV2 = "x-oss-expires"
HTTPParamAccessKeyIDV2 = "x-oss-access-key-id"
HTTPParamSignatureV2 = "x-oss-signature"
HTTPParamAdditionalHeadersV2 = "x-oss-additional-headers"
HTTPParamCredential = "x-oss-credential"
HTTPParamDate = "x-oss-date"
HTTPParamOssSecurityToken = "x-oss-security-token"
)
// Other constants
const (
MaxPartSize = 5 * 1024 * 1024 * 1024 // Max part size, 5GB
MinPartSize = 100 * 1024 // Min part size, 100KB
FilePermMode = os.FileMode(0664) // Default file permission
TempFilePrefix = "oss-go-temp-" // Temp file prefix
TempFileSuffix = ".temp" // Temp file suffix
CheckpointFileSuffix = ".cp" // Checkpoint file suffix
NullVersion = "null"
DefaultContentSha256 = "UNSIGNED-PAYLOAD" // for v4 signature
Version = "v3.0.2" // Go SDK version
)
// FrameType
const (
DataFrameType = 8388609
ContinuousFrameType = 8388612
EndFrameType = 8388613
MetaEndFrameCSVType = 8388614
MetaEndFrameJSONType = 8388615
)
// AuthVersion the version of auth
type AuthVersionType string
const (
// AuthV1 v1
AuthV1 AuthVersionType = "v1"
// AuthV2 v2
AuthV2 AuthVersionType = "v2"
// AuthV4 v4
AuthV4 AuthVersionType = "v4"
)
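
A short, hedged sketch of how the typed constants above are used when writing an object. The bucket and key names are hypothetical, and the endpoint and credentials are placeholders.

```go
package main

import (
	"log"
	"strings"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	client, err := oss.New("https://oss-cn-hangzhou.aliyuncs.com", "<accessKeyID>", "<accessKeySecret>")
	if err != nil {
		log.Fatal(err)
	}
	bucket, err := client.Bucket("my-bucket") // hypothetical bucket name
	if err != nil {
		log.Fatal(err)
	}
	// ACLType and StorageClassType constants are passed through request options.
	err = bucket.PutObject("docs/report.txt", strings.NewReader("hello"),
		oss.ObjectACL(oss.ACLPrivate),
		oss.ObjectStorageClass(oss.StorageIA),
	)
	if err != nil {
		log.Fatal(err)
	}
}
```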

vendor/github.com/aliyun/aliyun-oss-go-sdk/oss/crc.go generated vendored Normal file
View File

@@ -0,0 +1,123 @@
package oss
import (
"hash"
"hash/crc64"
)
// digest represents the partial evaluation of a checksum.
type digest struct {
crc uint64
tab *crc64.Table
}
// NewCRC creates a new hash.Hash64 computing the CRC64 checksum
// using the polynomial represented by the Table.
func NewCRC(tab *crc64.Table, init uint64) hash.Hash64 { return &digest{init, tab} }
// Size returns the number of bytes sum will return.
func (d *digest) Size() int { return crc64.Size }
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (d *digest) BlockSize() int { return 1 }
// Reset resets the hash to its initial state.
func (d *digest) Reset() { d.crc = 0 }
// Write (via the embedded io.Writer interface) adds more data to the running hash.
// It never returns an error.
func (d *digest) Write(p []byte) (n int, err error) {
d.crc = crc64.Update(d.crc, d.tab, p)
return len(p), nil
}
// Sum64 returns CRC64 value.
func (d *digest) Sum64() uint64 { return d.crc }
// Sum returns hash value.
func (d *digest) Sum(in []byte) []byte {
s := d.Sum64()
return append(in, byte(s>>56), byte(s>>48), byte(s>>40), byte(s>>32), byte(s>>24), byte(s>>16), byte(s>>8), byte(s))
}
// gf2Dim dimension of GF(2) vectors (length of CRC)
const gf2Dim int = 64
func gf2MatrixTimes(mat []uint64, vec uint64) uint64 {
var sum uint64
for i := 0; vec != 0; i++ {
if vec&1 != 0 {
sum ^= mat[i]
}
vec >>= 1
}
return sum
}
func gf2MatrixSquare(square []uint64, mat []uint64) {
for n := 0; n < gf2Dim; n++ {
square[n] = gf2MatrixTimes(mat, mat[n])
}
}
// CRC64Combine combines CRC64
func CRC64Combine(crc1 uint64, crc2 uint64, len2 uint64) uint64 {
var even [gf2Dim]uint64 // Even-power-of-two zeros operator
var odd [gf2Dim]uint64 // Odd-power-of-two zeros operator
// Degenerate case
if len2 == 0 {
return crc1
}
// Put operator for one zero bit in odd
odd[0] = crc64.ECMA // CRC64 polynomial
var row uint64 = 1
for n := 1; n < gf2Dim; n++ {
odd[n] = row
row <<= 1
}
// Put operator for two zero bits in even
gf2MatrixSquare(even[:], odd[:])
// Put operator for four zero bits in odd
gf2MatrixSquare(odd[:], even[:])
// Apply len2 zeros to crc1, first square will put the operator for one zero byte, eight zero bits, in even
for {
// Apply zeros operator for this bit of len2
gf2MatrixSquare(even[:], odd[:])
if len2&1 != 0 {
crc1 = gf2MatrixTimes(even[:], crc1)
}
len2 >>= 1
// If no more bits set, then done
if len2 == 0 {
break
}
// Another iteration of the loop with odd and even swapped
gf2MatrixSquare(odd[:], even[:])
if len2&1 != 0 {
crc1 = gf2MatrixTimes(odd[:], crc1)
}
len2 >>= 1
// If no more bits set, then done
if len2 == 0 {
break
}
}
// Return combined CRC
crc1 ^= crc2
return crc1
}
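
`CRC64Combine` lets the SDK derive the checksum of a whole object from per-part checksums without re-reading the data (this is what `combineCRCInParts` in the download code relies on). A minimal, self-contained check of that property using the standard library's ECMA table:

```go
package main

import (
	"fmt"
	"hash/crc64"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	tab := crc64.MakeTable(crc64.ECMA)

	part1 := []byte("hello ")
	part2 := []byte("world")

	crc1 := crc64.Checksum(part1, tab)
	crc2 := crc64.Checksum(part2, tab)

	// Combine the per-part CRCs; only the length of the second part is needed.
	combined := oss.CRC64Combine(crc1, crc2, uint64(len(part2)))

	whole := crc64.Checksum([]byte("hello world"), tab)
	fmt.Println(combined == whole) // true
}
```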

vendor/github.com/aliyun/aliyun-oss-go-sdk/oss/download.go generated vendored Normal file
View File

@@ -0,0 +1,567 @@
package oss
import (
"crypto/md5"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"hash"
"hash/crc64"
"io"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"strconv"
"time"
)
// DownloadFile downloads files with multipart download.
//
// objectKey the object key.
// filePath the local file to download from objectKey in OSS.
// partSize the part size in bytes.
// options object's constraints, check out GetObject for the reference.
//
// error it's nil when the call succeeds, otherwise it's an error object.
//
func (bucket Bucket) DownloadFile(objectKey, filePath string, partSize int64, options ...Option) error {
if partSize < 1 {
return errors.New("oss: part size smaller than 1")
}
uRange, err := GetRangeConfig(options)
if err != nil {
return err
}
cpConf := getCpConfig(options)
routines := getRoutines(options)
var strVersionId string
versionId, _ := FindOption(options, "versionId", nil)
if versionId != nil {
strVersionId = versionId.(string)
}
if cpConf != nil && cpConf.IsEnable {
cpFilePath := getDownloadCpFilePath(cpConf, bucket.BucketName, objectKey, strVersionId, filePath)
if cpFilePath != "" {
return bucket.downloadFileWithCp(objectKey, filePath, partSize, options, cpFilePath, routines, uRange)
}
}
return bucket.downloadFile(objectKey, filePath, partSize, options, routines, uRange)
}
func getDownloadCpFilePath(cpConf *cpConfig, srcBucket, srcObject, versionId, destFile string) string {
if cpConf.FilePath == "" && cpConf.DirPath != "" {
src := fmt.Sprintf("oss://%v/%v", srcBucket, srcObject)
absPath, _ := filepath.Abs(destFile)
cpFileName := getCpFileName(src, absPath, versionId)
cpConf.FilePath = cpConf.DirPath + string(os.PathSeparator) + cpFileName
}
return cpConf.FilePath
}
// downloadWorkerArg is download worker's parameters
type downloadWorkerArg struct {
bucket *Bucket
key string
filePath string
options []Option
hook downloadPartHook
enableCRC bool
}
// downloadPartHook is hook for test
type downloadPartHook func(part downloadPart) error
var downloadPartHooker downloadPartHook = defaultDownloadPartHook
func defaultDownloadPartHook(part downloadPart) error {
return nil
}
// defaultDownloadProgressListener defines the default ProgressListener; it suppresses any ProgressListener passed in the options of GetObject.
type defaultDownloadProgressListener struct {
}
// ProgressChanged no-ops
func (listener *defaultDownloadProgressListener) ProgressChanged(event *ProgressEvent) {
}
// downloadWorker
func downloadWorker(id int, arg downloadWorkerArg, jobs <-chan downloadPart, results chan<- downloadPart, failed chan<- error, die <-chan bool) {
for part := range jobs {
if err := arg.hook(part); err != nil {
failed <- err
break
}
// Resolve options
r := Range(part.Start, part.End)
p := Progress(&defaultDownloadProgressListener{})
var respHeader http.Header
opts := make([]Option, len(arg.options)+3)
// Append in this order; the order cannot be reversed!
opts = append(opts, arg.options...)
opts = append(opts, r, p, GetResponseHeader(&respHeader))
rd, err := arg.bucket.GetObject(arg.key, opts...)
if err != nil {
failed <- err
break
}
defer rd.Close()
var crcCalc hash.Hash64
if arg.enableCRC {
crcCalc = crc64.New(CrcTable())
contentLen := part.End - part.Start + 1
rd = ioutil.NopCloser(TeeReader(rd, crcCalc, contentLen, nil, nil))
}
defer rd.Close()
select {
case <-die:
return
default:
}
fd, err := os.OpenFile(arg.filePath, os.O_WRONLY, FilePermMode)
if err != nil {
failed <- err
break
}
_, err = fd.Seek(part.Start-part.Offset, os.SEEK_SET)
if err != nil {
fd.Close()
failed <- err
break
}
startT := time.Now().UnixNano() / 1000 / 1000 / 1000
_, err = io.Copy(fd, rd)
endT := time.Now().UnixNano() / 1000 / 1000 / 1000
if err != nil {
arg.bucket.Client.Config.WriteLog(Debug, "download part error,cost:%d second,part number:%d,request id:%s,error:%s.\n", endT-startT, part.Index, GetRequestId(respHeader), err.Error())
fd.Close()
failed <- err
break
}
if arg.enableCRC {
part.CRC64 = crcCalc.Sum64()
}
fd.Close()
results <- part
}
}
// downloadScheduler
func downloadScheduler(jobs chan downloadPart, parts []downloadPart) {
for _, part := range parts {
jobs <- part
}
close(jobs)
}
// downloadPart defines download part
type downloadPart struct {
Index int // Part number, starting from 0
Start int64 // Start index
End int64 // End index
Offset int64 // Offset
CRC64 uint64 // CRC check value of part
}
// getDownloadParts gets download parts
func getDownloadParts(objectSize, partSize int64, uRange *UnpackedRange) []downloadPart {
parts := []downloadPart{}
part := downloadPart{}
i := 0
start, end := AdjustRange(uRange, objectSize)
for offset := start; offset < end; offset += partSize {
part.Index = i
part.Start = offset
part.End = GetPartEnd(offset, end, partSize)
part.Offset = start
part.CRC64 = 0
parts = append(parts, part)
i++
}
return parts
}
// getObjectBytes gets object bytes length
func getObjectBytes(parts []downloadPart) int64 {
var ob int64
for _, part := range parts {
ob += (part.End - part.Start + 1)
}
return ob
}
// combineCRCInParts calculates the total CRC of contiguous parts
func combineCRCInParts(dps []downloadPart) uint64 {
if dps == nil || len(dps) == 0 {
return 0
}
crc := dps[0].CRC64
for i := 1; i < len(dps); i++ {
crc = CRC64Combine(crc, dps[i].CRC64, (uint64)(dps[i].End-dps[i].Start+1))
}
return crc
}
// downloadFile downloads file concurrently without checkpoint.
func (bucket Bucket) downloadFile(objectKey, filePath string, partSize int64, options []Option, routines int, uRange *UnpackedRange) error {
tempFilePath := filePath + TempFileSuffix
listener := GetProgressListener(options)
// If the file does not exist, create one. If it exists, the download will overwrite it.
fd, err := os.OpenFile(tempFilePath, os.O_WRONLY|os.O_CREATE, FilePermMode)
if err != nil {
return err
}
fd.Close()
// Get the object's detailed metadata to determine the whole object size.
// The Range header must be removed so the whole object size is returned.
skipOptions := DeleteOption(options, HTTPHeaderRange)
meta, err := bucket.GetObjectDetailedMeta(objectKey, skipOptions...)
if err != nil {
return err
}
objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 64)
if err != nil {
return err
}
enableCRC := false
expectedCRC := (uint64)(0)
if bucket.GetConfig().IsEnableCRC && meta.Get(HTTPHeaderOssCRC64) != "" {
if uRange == nil || (!uRange.HasStart && !uRange.HasEnd) {
enableCRC = true
expectedCRC, _ = strconv.ParseUint(meta.Get(HTTPHeaderOssCRC64), 10, 64)
}
}
// Get the parts of the file
parts := getDownloadParts(objectSize, partSize, uRange)
jobs := make(chan downloadPart, len(parts))
results := make(chan downloadPart, len(parts))
failed := make(chan error)
die := make(chan bool)
var completedBytes int64
totalBytes := getObjectBytes(parts)
event := newProgressEvent(TransferStartedEvent, 0, totalBytes, 0)
publishProgress(listener, event)
// Start the download workers
arg := downloadWorkerArg{&bucket, objectKey, tempFilePath, options, downloadPartHooker, enableCRC}
for w := 1; w <= routines; w++ {
go downloadWorker(w, arg, jobs, results, failed, die)
}
// Download parts concurrently
go downloadScheduler(jobs, parts)
// Wait for the part downloads to finish
completed := 0
for completed < len(parts) {
select {
case part := <-results:
completed++
downBytes := (part.End - part.Start + 1)
completedBytes += downBytes
parts[part.Index].CRC64 = part.CRC64
event = newProgressEvent(TransferDataEvent, completedBytes, totalBytes, downBytes)
publishProgress(listener, event)
case err := <-failed:
close(die)
event = newProgressEvent(TransferFailedEvent, completedBytes, totalBytes, 0)
publishProgress(listener, event)
return err
}
if completed >= len(parts) {
break
}
}
event = newProgressEvent(TransferCompletedEvent, completedBytes, totalBytes, 0)
publishProgress(listener, event)
if enableCRC {
actualCRC := combineCRCInParts(parts)
err = CheckDownloadCRC(actualCRC, expectedCRC)
if err != nil {
return err
}
}
return os.Rename(tempFilePath, filePath)
}
// ----- Concurrent download with checkpoint -----
const downloadCpMagic = "92611BED-89E2-46B6-89E5-72F273D4B0A3"
type downloadCheckpoint struct {
Magic string // Magic
MD5 string // Checkpoint content MD5
FilePath string // Local file
Object string // Key
ObjStat objectStat // Object status
Parts []downloadPart // All download parts
PartStat []bool // Parts' download status
Start int64 // Start point of the file
End int64 // End point of the file
enableCRC bool // Whether has CRC check
CRC uint64 // CRC check value
}
type objectStat struct {
Size int64 // Object size
LastModified string // Last modified time
Etag string // Etag
}
// isValid checks whether the checkpoint data is valid. It returns true when the data and the checkpoint file are valid and the object has not been updated.
func (cp downloadCheckpoint) isValid(meta http.Header, uRange *UnpackedRange) (bool, error) {
// Compare the CP's Magic and the MD5
cpb := cp
cpb.MD5 = ""
js, _ := json.Marshal(cpb)
sum := md5.Sum(js)
b64 := base64.StdEncoding.EncodeToString(sum[:])
if cp.Magic != downloadCpMagic || b64 != cp.MD5 {
return false, nil
}
objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 64)
if err != nil {
return false, err
}
// Compare the object size, last modified time and etag
if cp.ObjStat.Size != objectSize ||
cp.ObjStat.LastModified != meta.Get(HTTPHeaderLastModified) ||
cp.ObjStat.Etag != meta.Get(HTTPHeaderEtag) {
return false, nil
}
// Check the download range
if uRange != nil {
start, end := AdjustRange(uRange, objectSize)
if start != cp.Start || end != cp.End {
return false, nil
}
}
return true, nil
}
// load checkpoint from local file
func (cp *downloadCheckpoint) load(filePath string) error {
contents, err := ioutil.ReadFile(filePath)
if err != nil {
return err
}
err = json.Unmarshal(contents, cp)
return err
}
// dump writes the checkpoint to the given file
func (cp *downloadCheckpoint) dump(filePath string) error {
bcp := *cp
// Calculate MD5
bcp.MD5 = ""
js, err := json.Marshal(bcp)
if err != nil {
return err
}
sum := md5.Sum(js)
b64 := base64.StdEncoding.EncodeToString(sum[:])
bcp.MD5 = b64
// Serialize
js, err = json.Marshal(bcp)
if err != nil {
return err
}
// Dump
return ioutil.WriteFile(filePath, js, FilePermMode)
}
// todoParts gets unfinished parts
func (cp downloadCheckpoint) todoParts() []downloadPart {
dps := []downloadPart{}
for i, ps := range cp.PartStat {
if !ps {
dps = append(dps, cp.Parts[i])
}
}
return dps
}
// getCompletedBytes gets completed size
func (cp downloadCheckpoint) getCompletedBytes() int64 {
var completedBytes int64
for i, part := range cp.Parts {
if cp.PartStat[i] {
completedBytes += (part.End - part.Start + 1)
}
}
return completedBytes
}
// prepare initiates download tasks
func (cp *downloadCheckpoint) prepare(meta http.Header, bucket *Bucket, objectKey, filePath string, partSize int64, uRange *UnpackedRange) error {
// CP
cp.Magic = downloadCpMagic
cp.FilePath = filePath
cp.Object = objectKey
objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 64)
if err != nil {
return err
}
cp.ObjStat.Size = objectSize
cp.ObjStat.LastModified = meta.Get(HTTPHeaderLastModified)
cp.ObjStat.Etag = meta.Get(HTTPHeaderEtag)
if bucket.GetConfig().IsEnableCRC && meta.Get(HTTPHeaderOssCRC64) != "" {
if uRange == nil || (!uRange.HasStart && !uRange.HasEnd) {
cp.enableCRC = true
cp.CRC, _ = strconv.ParseUint(meta.Get(HTTPHeaderOssCRC64), 10, 64)
}
}
// Parts
cp.Parts = getDownloadParts(objectSize, partSize, uRange)
cp.PartStat = make([]bool, len(cp.Parts))
for i := range cp.PartStat {
cp.PartStat[i] = false
}
return nil
}
func (cp *downloadCheckpoint) complete(cpFilePath, downFilepath string) error {
err := os.Rename(downFilepath, cp.FilePath)
if err != nil {
return err
}
return os.Remove(cpFilePath)
}
// downloadFileWithCp downloads files with checkpoint.
func (bucket Bucket) downloadFileWithCp(objectKey, filePath string, partSize int64, options []Option, cpFilePath string, routines int, uRange *UnpackedRange) error {
tempFilePath := filePath + TempFileSuffix
listener := GetProgressListener(options)
// Load checkpoint data.
dcp := downloadCheckpoint{}
err := dcp.load(cpFilePath)
if err != nil {
os.Remove(cpFilePath)
}
// Get the object's detailed metadata to determine the whole object size.
// The Range header must be removed so the whole object size is returned.
skipOptions := DeleteOption(options, HTTPHeaderRange)
meta, err := bucket.GetObjectDetailedMeta(objectKey, skipOptions...)
if err != nil {
return err
}
// Load error or data invalid. Re-initialize the download.
valid, err := dcp.isValid(meta, uRange)
if err != nil || !valid {
if err = dcp.prepare(meta, &bucket, objectKey, filePath, partSize, uRange); err != nil {
return err
}
os.Remove(cpFilePath)
}
// Create the file if it does not exist. Otherwise the parts download will overwrite it.
fd, err := os.OpenFile(tempFilePath, os.O_WRONLY|os.O_CREATE, FilePermMode)
if err != nil {
return err
}
fd.Close()
// Unfinished parts
parts := dcp.todoParts()
jobs := make(chan downloadPart, len(parts))
results := make(chan downloadPart, len(parts))
failed := make(chan error)
die := make(chan bool)
completedBytes := dcp.getCompletedBytes()
event := newProgressEvent(TransferStartedEvent, completedBytes, dcp.ObjStat.Size, 0)
publishProgress(listener, event)
// Start the download workers routine
arg := downloadWorkerArg{&bucket, objectKey, tempFilePath, options, downloadPartHooker, dcp.enableCRC}
for w := 1; w <= routines; w++ {
go downloadWorker(w, arg, jobs, results, failed, die)
}
// Concurrently downloads parts
go downloadScheduler(jobs, parts)
// Wait for the part downloads to finish
completed := 0
for completed < len(parts) {
select {
case part := <-results:
completed++
dcp.PartStat[part.Index] = true
dcp.Parts[part.Index].CRC64 = part.CRC64
dcp.dump(cpFilePath)
downBytes := (part.End - part.Start + 1)
completedBytes += downBytes
event = newProgressEvent(TransferDataEvent, completedBytes, dcp.ObjStat.Size, downBytes)
publishProgress(listener, event)
case err := <-failed:
close(die)
event = newProgressEvent(TransferFailedEvent, completedBytes, dcp.ObjStat.Size, 0)
publishProgress(listener, event)
return err
}
if completed >= len(parts) {
break
}
}
event = newProgressEvent(TransferCompletedEvent, completedBytes, dcp.ObjStat.Size, 0)
publishProgress(listener, event)
if dcp.enableCRC {
actualCRC := combineCRCInParts(dcp.Parts)
err = CheckDownloadCRC(actualCRC, dcp.CRC)
if err != nil {
return err
}
}
return dcp.complete(cpFilePath, tempFilePath)
}
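
A hedged usage sketch of the multipart download entry point above: `DownloadFile` with a part size, a worker count (`Routines`), and a checkpoint file so an interrupted transfer can resume through `downloadFileWithCp`. Object key, paths, and credentials are placeholders.

```go
package main

import (
	"log"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	client, err := oss.New("https://oss-cn-hangzhou.aliyuncs.com", "<accessKeyID>", "<accessKeySecret>")
	if err != nil {
		log.Fatal(err)
	}
	bucket, err := client.Bucket("my-bucket") // hypothetical bucket name
	if err != nil {
		log.Fatal(err)
	}
	err = bucket.DownloadFile(
		"docs/report.pdf", // object key (placeholder)
		"/tmp/report.pdf", // local destination
		1*1024*1024,       // 1 MB parts
		oss.Routines(3),                            // three concurrent part workers
		oss.Checkpoint(true, "/tmp/report.pdf.cp"), // resumable download via checkpoint file
	)
	if err != nil {
		log.Fatal(err)
	}
}
```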

vendor/github.com/aliyun/aliyun-oss-go-sdk/oss/error.go generated vendored Normal file
View File

@@ -0,0 +1,136 @@
package oss
import (
"encoding/xml"
"fmt"
"io/ioutil"
"net/http"
"strconv"
"strings"
)
// ServiceError contains fields of the error response from Oss Service REST API.
type ServiceError struct {
XMLName xml.Name `xml:"Error"`
Code string `xml:"Code"` // The error code returned from OSS to the caller
Message string `xml:"Message"` // The detail error message from OSS
RequestID string `xml:"RequestId"` // The UUID used to uniquely identify the request
HostID string `xml:"HostId"` // The OSS server cluster's Id
Endpoint string `xml:"Endpoint"`
Ec string `xml:"EC"`
RawMessage string // The raw messages from OSS
StatusCode int // HTTP status code
}
// Error implements interface error
func (e ServiceError) Error() string {
errorStr := fmt.Sprintf("oss: service returned error: StatusCode=%d, ErrorCode=%s, ErrorMessage=\"%s\", RequestId=%s", e.StatusCode, e.Code, e.Message, e.RequestID)
if len(e.Endpoint) > 0 {
errorStr = fmt.Sprintf("%s, Endpoint=%s", errorStr, e.Endpoint)
}
if len(e.Ec) > 0 {
errorStr = fmt.Sprintf("%s, Ec=%s", errorStr, e.Ec)
}
return errorStr
}
// UnexpectedStatusCodeError is returned when a storage service responds with neither an error
// nor with an HTTP status code indicating success.
type UnexpectedStatusCodeError struct {
allowed []int // The expected HTTP status codes returned from OSS
got int // The actual HTTP status code from OSS
}
// Error implements interface error
func (e UnexpectedStatusCodeError) Error() string {
s := func(i int) string { return fmt.Sprintf("%d %s", i, http.StatusText(i)) }
got := s(e.got)
expected := []string{}
for _, v := range e.allowed {
expected = append(expected, s(v))
}
return fmt.Sprintf("oss: status code from service response is %s; was expecting %s",
got, strings.Join(expected, " or "))
}
// Got is the actual status code returned by oss.
func (e UnexpectedStatusCodeError) Got() int {
return e.got
}
// CheckRespCode returns UnexpectedStatusError if the given response code is not
// one of the allowed status codes; otherwise nil.
func CheckRespCode(respCode int, allowed []int) error {
for _, v := range allowed {
if respCode == v {
return nil
}
}
return UnexpectedStatusCodeError{allowed, respCode}
}
// CheckCallbackResp returns an error if the given response code is not 200
func CheckCallbackResp(resp *Response) error {
var err error
contentLengthStr := resp.Headers.Get("Content-Length")
contentLength, _ := strconv.Atoi(contentLengthStr)
var bodyBytes []byte
if contentLength > 0 {
bodyBytes, _ = ioutil.ReadAll(resp.Body)
}
if len(bodyBytes) > 0 {
srvErr, errIn := serviceErrFromXML(bodyBytes, resp.StatusCode,
resp.Headers.Get(HTTPHeaderOssRequestID))
if errIn != nil {
if len(resp.Headers.Get(HTTPHeaderOssEc)) > 0 {
err = fmt.Errorf("unknown response body, status code = %d, RequestId = %s, ec = %s", resp.StatusCode, resp.Headers.Get(HTTPHeaderOssRequestID), resp.Headers.Get(HTTPHeaderOssEc))
} else {
err = fmt.Errorf("unknown response body, status code= %d, RequestId = %s", resp.StatusCode, resp.Headers.Get(HTTPHeaderOssRequestID))
}
} else {
err = srvErr
}
}
return err
}
func tryConvertServiceError(data []byte, resp *Response, def error) (err error) {
err = def
if len(data) > 0 {
srvErr, errIn := serviceErrFromXML(data, resp.StatusCode, resp.Headers.Get(HTTPHeaderOssRequestID))
if errIn == nil {
err = srvErr
}
}
return err
}
// CRCCheckError is returned when crc check is inconsistent between client and server
type CRCCheckError struct {
clientCRC uint64 // Calculated CRC64 in client
serverCRC uint64 // Calculated CRC64 in server
operation string // Upload operations such as PutObject/AppendObject/UploadPart, etc
requestID string // The request id of this operation
}
// Error implements interface error
func (e CRCCheckError) Error() string {
return fmt.Sprintf("oss: the crc of %s is inconsistent, client %d but server %d; request id is %s",
e.operation, e.clientCRC, e.serverCRC, e.requestID)
}
func CheckDownloadCRC(clientCRC, serverCRC uint64) error {
if clientCRC == serverCRC {
return nil
}
return CRCCheckError{clientCRC, serverCRC, "DownloadFile", ""}
}
func CheckCRC(resp *Response, operation string) error {
if resp.Headers.Get(HTTPHeaderOssCRC64) == "" || resp.ClientCRC == resp.ServerCRC {
return nil
}
return CRCCheckError{resp.ClientCRC, resp.ServerCRC, operation, resp.Headers.Get(HTTPHeaderOssRequestID)}
}
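
A brief sketch of how callers typically inspect these error types; the bucket, object key, and credentials are placeholders, and the error cases shown are the two concrete types defined above.

```go
package main

import (
	"log"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	client, err := oss.New("https://oss-cn-hangzhou.aliyuncs.com", "<accessKeyID>", "<accessKeySecret>")
	if err != nil {
		log.Fatal(err)
	}
	bucket, err := client.Bucket("my-bucket") // hypothetical bucket name
	if err != nil {
		log.Fatal(err)
	}
	body, err := bucket.GetObject("missing-key") // placeholder object key
	if err != nil {
		switch e := err.(type) {
		case oss.ServiceError:
			// Structured error parsed from the XML error response body.
			log.Printf("status=%d code=%s requestID=%s", e.StatusCode, e.Code, e.RequestID)
		case oss.UnexpectedStatusCodeError:
			log.Printf("unexpected status: %d", e.Got())
		default:
			log.Println(err)
		}
		return
	}
	defer body.Close()
}
```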

View File

@@ -0,0 +1,29 @@
//go:build !go1.7
// +build !go1.7
// "golang.org/x/time/rate" is depended on golang context package go1.7 onward
// this file is only for build,not supports limit upload speed
package oss
import (
"fmt"
"io"
)
const (
perTokenBandwidthSize int = 1024
)
type OssLimiter struct {
}
type LimitSpeedReader struct {
io.ReadCloser
reader io.Reader
ossLimiter *OssLimiter
}
func GetOssLimiter(uploadSpeed int) (ossLimiter *OssLimiter, err error) {
err = fmt.Errorf("rate.Limiter is not supported below version go1.7")
return nil, err
}

Some files were not shown because too many files have changed in this diff.