Compare commits

...

127 Commits

Author SHA1 Message Date
4b938ab18d chore: Bump version
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 16:25:40 +08:00
88356de923 fix: Refactor web reader to use readabilipy (#19789)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 16:23:17 +08:00
5f09900dca chore(api): Upgrade dependencies (#19736)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-15 14:47:15 +08:00
9ac99abf20 docs(CHANGELOG): Update CHANGELOG
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-14 18:03:05 +08:00
32588f562e feat(model): fix and re-add gpt-4.1.
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-14 18:02:32 +08:00
36f8bd3f1a chore: frontend third-party package security issue (#19655) 2025-05-14 14:08:05 +08:00
c919074e06 docs(CHANGELOG.md): Update CHANGELOG.md
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-13 10:31:40 +08:00
88cd9aedb7 add gunicorn keepalive setting (#19537)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: Bowen Liang <liang.bowen.123@qq.com>
2025-05-13 10:28:13 +08:00
16a4f77fb4 fix(config): Allow DB_EXTRAS to set search_path via options
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-13 10:19:08 +08:00
3401c52665 chore(pyproject.toml): Upgrade huggingface-hub, transformers and resend (#19563)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-12 23:21:57 +08:00
4fa3d78ed8 Revert "feat : add GPT4.1 in the model providers" (#19002) 2025-04-28 18:15:24 +08:00
5f7f851b17 fix: Refines None checks in result transformation
Simplifies the code by replacing type checks for None with
direct comparisons, improving readability and consistency in
handling None values during output validation.

Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-28 15:40:14 +08:00
559ab46ee1 fix: Removes redundant token calculations and updates dependencies
Eliminates unnecessary pre-calculation of token limits and recalculation of max tokens
across multiple app runners, simplifying the logic for prompt handling.

Updates tiktoken library from version 0.8.0 to 0.9.0 for improved tokenization performance.

Increases default token limit in TokenBufferMemory to accommodate larger prompt messages.

These changes streamline the token management process and leverage the latest
improvements in the tiktoken library.

Fixes potential token overflow issues and prepares the system for handling larger
inputs more efficiently.

Relates to internal optimization tasks.

Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-28 15:39:12 +08:00
df98223c8c chore: Updates to version 0.15.7 with new model support
Adds support for GPT-4.1 and Amazon Bedrock DeepSeek-R1 models.
Fixes issues with app creation from template categories and
DSL version checks.

Updates version numbers in configuration files and Docker
setup to 0.15.7 for consistency.

Addresses issues #18807, #18868, #18872, #18878, and #18912.

Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-28 14:19:07 +08:00
144f9507f8 feat : add GPT4.1 in the model providers (#18912) 2025-04-27 19:31:20 +08:00
2e097a1ac0 add bedrock deepseek-r1 (#18908) 2025-04-27 19:30:42 +08:00
9f7d8a981f Patch: hotfix/create from template category (#18807) (#18868) 2025-04-27 14:47:18 +08:00
40b31bafd5 fix: check dsl version when create app from explore template (#18872) (#18878) 2025-04-27 14:21:45 +08:00
d38a2c95fb docs(CHANGELOG): Update CHANGELOG.md
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-25 18:31:08 +08:00
7d18e2a0ef feat(app_dsl_service): Refines version compatibility logic
Updates logic to handle various version comparisons, ensuring
more precise status returns based on version differences.
Improves handling of older and newer versions to prevent
mismatches and ensure appropriate compatibility status.

Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-25 18:27:31 +08:00
024f242251 add bedrock claude-sonnet-3.7 (#18788) 2025-04-25 17:35:12 +08:00
bfdce78ca5 chore(*): Bump up to 0.15.6
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-23 14:06:46 +08:00
00c2258352 docs(CHANGELOG): Adds initial changelog for version 0.15.6
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-23 13:55:33 +08:00
a1b3d41712 fix: clickjacking (#18552) 2025-04-22 17:08:52 +08:00
b26e20fe34 fix: fix vertex gemini 2.0 flash 001 schema (#18405)
Co-authored-by: achmad-kautsar <achmad.kautsar@insignia.co.id>
2025-04-19 22:04:13 +08:00
161ff432f1 fix: update reset password token when email code verify success (#18362) 2025-04-18 17:15:15 +08:00
99a9def623 fix: reset_password security issue (#18366) 2025-04-18 05:04:44 -04:00
fe1846c437 fix: change gemini-2.0-flash to validate google api #17082 (#17115) 2025-03-30 13:04:12 +08:00
8e75eb5c63 fix: update version to 0.15.5 in packaging and docker-compose files
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-24 16:47:06 +08:00
970508fcb6 fix: update GitHub Actions workflow to trigger on tags
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-24 16:45:29 +08:00
9283a5414f fix: update yarn.lock 2025-03-24 16:41:07 +08:00
2a2a0e9be9 fix: update DifySandbox image version to 0.2.11 in docker-compose files
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-24 15:37:55 +08:00
061a765b7d fix: sanitize svg to avoid xss (#16608) 2025-03-24 14:48:40 +08:00
acd7fead87 feat: remove Vanna provider and associated assets from the project
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-24 14:34:03 +08:00
bbb080d5b2 fix: update chatbot help doc link on the create app form 2025-03-24 11:28:35 +08:00
c01d8a70f3 fix: upgrade nextjs to v14.2.25. a security patch for CVE-2025-29927. 2025-03-24 10:32:18 +08:00
1ca15989e0 chore: update version to 0.15.4 in configuration and docker files
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-21 16:39:06 +08:00
8b5a3a9424 Merge branch 'release/0.15.4' of github.com:langgenius/dify into release/0.15.4 2025-03-21 16:31:06 +08:00
42ddcf1edd chore: remove 0.15.3 branch config in the build action
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-21 16:30:33 +08:00
21561df10f fix: xss in render svg (#16437) 2025-03-21 15:24:58 +08:00
0e33a3aa5f chore: add ci 2025-02-19 14:34:36 +08:00
d3895bcd6b revert 2025-02-19 14:32:28 +08:00
eeb390650b fix: build failed 2025-02-19 14:32:28 +08:00
ca19bd31d4 chore(*): Bump version to 0.15.3 (#13308)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 15:20:05 +08:00
413dfd5628 feat: add completion mode and context size options for LLM configuration (#13325)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 15:08:53 +08:00
f9515901cc fix: Azure AI Foundry model cannot be used in the workflow (#13323)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 14:52:57 +08:00
3f42fabff8 chore:improve thinking display for llm from xinference and ollama pro… (#13318) 2025-02-07 14:29:29 +08:00
1caa578771 chore(*): Update style of thinking (#13319)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 14:06:35 +08:00
b7c11c1818 Fix the problem of Workflow terminates after parallel tasks execution, merge node not triggered (#12498)
Co-authored-by: Novice Lee <novicelee@NoviPro.local>
2025-02-07 13:56:08 +08:00
3eb3db0663 chore: refactor the OpenAICompatible and improve thinking display (#13299) 2025-02-07 13:28:46 +08:00
be46f32056 fix(credits): require model name equals (#13314)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 13:28:17 +08:00
6e5c915f96 feat(model): add deepseek-r1 for openrouter (#13312) 2025-02-07 12:39:13 +08:00
04d13a8116 feat(credits): Allow to configure model-credit mapping (#13274)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 11:01:31 +08:00
e638ede3f2 Update README_TR.md (#13294) 2025-02-07 09:11:39 +08:00
2348abe4bf feat: added a couple of models not defined in vertex ai, that were already … (#13296) 2025-02-07 09:11:25 +08:00
f7e7a399d9 feat:add think tag display for xinference deepseek r1 (#13291) 2025-02-06 22:04:58 +08:00
ba91f34636 fix: incorrect transferMethod assignment for remote file (#13286) 2025-02-06 19:32:21 +08:00
16865d43a8 feat: add deepseek models for volcengine provider (#13283)
Co-authored-by: zhaoqingyu.1075 <zhaoqingyu.1075@bytedance.com>
2025-02-06 18:20:03 +08:00
0d13aee15c feat:add deepseek r1 think display for ollama provider (#13272) 2025-02-06 15:32:10 +08:00
49b4144ffd fix: add dataset edit permissions (#13223) 2025-02-06 14:26:16 +08:00
186e2d972e chore(deps): bump katex from 0.16.10 to 0.16.21 in /web (#13270)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-06 13:27:07 +08:00
40dd63ecef Upgrade oracle models (#13174)
Co-authored-by: engchina <atjapan2015@gmail.com>
2025-02-06 13:24:27 +08:00
6d66d6da15 feat(model_providers): Support deepseek-r1 for Nvidia Catalog (#13269)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-06 13:03:19 +08:00
03ec3513f3 Fix bug large data no render (#12683)
Co-authored-by: ex_wenyan.wei <ex_wenyan.wei@tcl.com>
2025-02-06 13:00:04 +08:00
87763fc234 feat(model_providers): Support deepseek for Azure AI Foundry (#13267)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-06 12:45:48 +08:00
f6c44cae2e feat(model): add gemini-2.0 model (#13266) 2025-02-06 12:28:59 +08:00
da2ee04fce fix: correct linewrap think display in generic openai api (#13260)
Signed-off-by: xhe <xw897002528@gmail.com>
2025-02-06 10:53:08 +08:00
7673c36af3 feat(model): add gemini-2.0-flash-thinking-exp-01-21 (#13230) 2025-02-06 10:01:00 +08:00
9457b2af2f feat: added models :gemini 2.0 flash 001 and gemini 2.0 pro exp 02-05 (#13247) 2025-02-06 09:58:39 +08:00
7203991032 feat: add parameter "reasoning_effort" and Openai o3-mini (#13243) 2025-02-06 09:29:48 +08:00
5a685f7156 feat: add think display for volcengine and generic openapi (#13234)
Signed-off-by: xhe <xw897002528@gmail.com>
2025-02-06 09:24:40 +08:00
a6a25030ad fix: updated _position.yaml to include the latest model already integ… (#13245) 2025-02-06 09:21:51 +08:00
00458a31d5 feat: added deepseek r1 and v3 to siliconflow (#13238) 2025-02-05 21:59:18 +08:00
c6ddf6d6cc feat(model_providers): Add Groq DeepSeek-R1-Distill-Llama-70b (#13229)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-05 19:15:29 +08:00
34b21b3065 feat: Add o3-mini and o3-mini-2025-01-31 model variants (#13129)
Co-authored-by: crazywoola <427733928@qq.com>
2025-02-05 17:04:45 +08:00
8fbb355cd2 chore: squash system dependencies installation steps (#13206) 2025-02-05 16:42:53 +08:00
e8b3b7e578 Fix new variables in the conversation opener would override prompt_variables (#13191) 2025-02-05 16:16:00 +08:00
59ca44f493 chore(model_runtime): Move deepseek ahead in the providers list. (#13197)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-05 16:08:28 +08:00
9e1457c2c3 fix: mypy checks violation in AzureBlobStorage (#13215) 2025-02-05 15:56:23 +08:00
fac83e14bc Use DefaultAzureCredential for managed identity in azure blob extension (#11559) 2025-02-05 13:43:43 +08:00
a97cec57e4 fix: SSRF proxy file descriptor leak in concurrent requests (#13108) 2025-02-05 13:10:27 +08:00
38c10b47d3 Feat: add linkedin to readme (#13203) 2025-02-05 12:27:58 +08:00
1a2523fd15 feat: bedrock_endpoint_url (#12838) 2025-02-05 12:24:24 +08:00
03243cb422 Modify params for bedrock retrieve generate (#13182) 2025-02-05 12:17:42 +08:00
2ad7ee0344 chore: add tests for build docker image when dockerfile changed (#10732) 2025-02-05 11:40:22 +08:00
55ce3618ce fix: Dollar Sign Handling in Markdown (#13178)
Co-authored-by: crazywoola <427733928@qq.com>
2025-02-05 11:00:56 +08:00
e9e34c1ab2 Install apt dependencies using bookworm source, consistent with base image. Remove unnecessary, error-prone pins (#13176) 2025-02-05 10:07:22 +08:00
d4c916b496 chore(pyproject): Add type stubs into pyproject.toml (#13145)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-04 12:01:28 +08:00
8fbc9c9342 Solve circular dependency issue between workflow/constants.ts file and default.ts file (#13165) 2025-02-04 09:26:01 +08:00
1b6fd9dfe8 fix: set indexing technique from dataset during update-by-text (#13155) 2025-02-03 11:06:03 +08:00
304467e3f5 fix: not install libmagic raise error (#13146) 2025-02-03 11:05:20 +08:00
7452032d81 add azure openai api version 2024-12-01-preview (#13135) 2025-02-03 11:04:20 +08:00
87e2048f1b nitpick: fix small typos in template.en.mdx (#13156) 2025-02-03 11:03:11 +08:00
d876084392 chore: upgrade libldap2 (#13158) 2025-02-03 11:02:14 +08:00
840729afa5 feat: the think tag display of siliconflow's deepseek r1 (#13153) 2025-02-02 21:55:13 +08:00
941ad03f3c pass model and cost so that langfuse can show cost (#13117) 2025-02-02 15:27:27 +08:00
d73d191f99 feature. add feat to modify metadata via dataset api (#13116) 2025-02-02 15:27:12 +08:00
c2664e0283 chore: fix wrong VectorType match case (#13123) 2025-02-02 15:26:59 +08:00
ee61cede4e test(huggingface_hub): Skip the failed test temporarily. (#13142)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-02 14:47:26 +08:00
b47669b80b fix: deduct LLM quota after processing invoke result (#13075)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-02 12:05:11 +08:00
c0d0c63592 feat: switch to chat messages before regenerated (#11301)
Co-authored-by: zuodongxu <192560071+zuodongxu@users.noreply.github.com>
2025-01-31 13:05:10 +08:00
b09c39c8dc refactor: avoid to use extra space when finding model by name (#13043) 2025-01-30 15:08:29 +08:00
b4b09ddc3c add tongyi qwen2.5-14b/7b-instruct-1m model (#13089) 2025-01-29 11:58:01 +08:00
d0a21086bd refactor: Update Firecrawl API parameters and default settings (#13082) 2025-01-29 11:21:05 +08:00
d44882c1b5 refactor: reduce duplicate code by inheritance (#13073) 2025-01-28 10:52:01 +08:00
23c68efa2d fix: fix the formatter is not applied on log file (#12704) 2025-01-28 10:49:58 +08:00
560c5de1b7 Fixed Novita AI color and added DeepSeek R1 model (#13074) 2025-01-28 10:38:54 +08:00
5d91dbd000 Set default LOG_LEVEL to INFO for celery workers and beat (#13066)
Co-authored-by: Abdullah AlOsaimi <189027247+osaimi@users.noreply.github.com>
2025-01-27 17:09:41 +08:00
6c31ee36cd fix qwen-vl blocking mode (#13052) 2025-01-27 11:35:23 +08:00
edc29780ed fix: "Model schema not found" error only in agents (#12655) (#12760) 2025-01-27 11:33:13 +08:00
aad7e4dd1c fix:Improve MIME type detection for remote URL uploads using python-magic (#12693) 2025-01-27 11:33:03 +08:00
a6a727e8a4 feat: add inner API to create workspace without requiring email (#13021) 2025-01-26 15:36:56 +08:00
d1fc65fabc fix: adjust iteration node dark style (#13051) 2025-01-26 11:19:41 +08:00
d4be5ef9de Update Novita AI predefined models (#13045) 2025-01-26 09:25:29 +08:00
1374be5a31 fix: Unexpected tag creation when pressing enter during tag conversion (#13041) 2025-01-25 19:30:26 +08:00
b2bbc28580 support bedrock kb: retrieve and generate (#13027) 2025-01-25 17:28:06 +08:00
59b3e672aa feat: add agent thinking content display of deepseek R1 (#12949) 2025-01-24 20:13:42 +08:00
a2f8bce8f5 chore: add Japanese translation: model_providers/bedrock (#13016) 2025-01-24 18:43:33 +08:00
a2b9adb3a2 Change typo in translation (#13004) 2025-01-24 13:48:21 +08:00
28067640b5 fix: wrong zh_Hans translation: Ohio (#13006) 2025-01-24 13:41:20 +08:00
da67916843 feat: add glm-4-air-0111 (#12997)
Co-authored-by: lowell <lowell.hu@zkteco.in>
2025-01-24 10:04:46 +08:00
e54ce479ad Feat/prompt editor dark theme (#12976) 2025-01-23 16:20:00 +08:00
6024d8a42d refactor: Update Firecrawl to use v1 API (#12574)
Co-authored-by: Ademílson Tonato <ademilson.tonato@refurbed.com>
2025-01-23 11:14:48 +08:00
f565f08aa0 fix: get property of string type variable caused page crash (#12969) 2025-01-23 11:02:29 +08:00
fd4afe09f8 fix: tools translate search (#12950)
Co-authored-by: lowell <lowell.hu@zkteco.in>
2025-01-22 19:27:02 +08:00
dd0904f95c feat: add giteeAI risk control identification. (#12946) 2025-01-22 19:26:25 +08:00
4c3076f2a4 feat: add pg vector index (#12338)
Co-authored-by: huangzhuo <huangzhuo1@xiaomi.com>
2025-01-22 17:07:18 +08:00
272 changed files with 13487 additions and 9798 deletions

View File

@ -5,8 +5,8 @@ on:
branches:
- "main"
- "deploy/dev"
release:
types: [published]
tags:
- "*"
concurrency:
group: build-push-${{ github.head_ref || github.run_id }}

.github/workflows/docker-build.yml (new file, +47 lines)
View File

@ -0,0 +1,47 @@
name: Build docker image
on:
pull_request:
branches:
- "main"
paths:
- api/Dockerfile
- web/Dockerfile
concurrency:
group: docker-build-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
build-docker:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- service_name: "api-amd64"
platform: linux/amd64
context: "api"
- service_name: "api-arm64"
platform: linux/arm64
context: "api"
- service_name: "web-amd64"
platform: linux/amd64
context: "web"
- service_name: "web-arm64"
platform: linux/arm64
context: "web"
steps:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build Docker Image
uses: docker/build-push-action@v6
with:
push: false
context: "{{defaultContext}}:${{ matrix.context }}"
platforms: ${{ matrix.platform }}
cache-from: type=gha
cache-to: type=gha,mode=max

.markdownlint.json (new file, +4 lines)
View File

@ -0,0 +1,4 @@
{
"MD024": false,
"MD013": false
}

CHANGELOG.md (new file, +45 lines)
View File

@ -0,0 +1,45 @@
# Changelog
All notable changes to Dify will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.15.8] - 2025-05-30
### Added
- Added gunicorn keepalive setting (#19537)
### Fixed
- Fixed database configuration to allow DB_EXTRAS to set search_path via options (#16a4f77)
- Fixed frontend third-party package security issues (#19655)
- Updated dependencies: huggingface-hub (~0.16.4 to ~0.31.0), transformers (~4.35.0 to ~4.39.0), and resend (~0.7.0 to ~2.9.0) (#19563)
- Downgrade boto3 from 1.36 to 1.35 (#19736)
## [0.15.7] - 2025-04-27
### Added
- Added support for GPT-4.1 in model providers (#18912)
- Added support for Amazon Bedrock DeepSeek-R1 model (#18908)
- Added support for Amazon Bedrock Claude Sonnet 3.7 model (#18788)
- Refined version compatibility logic in app DSL service
### Fixed
- Fixed issue with creating apps from template categories (#18807, #18868)
- Fixed DSL version check when creating apps from explore templates (#18872, #18878)
## [0.15.6] - 2025-04-22
### Security
- Fixed clickjacking vulnerability (#18552)
- Fixed reset password security issue (#18366)
- Updated reset password token when email code verification succeeds (#18362)
### Fixed
- Fixed Vertex AI Gemini 2.0 Flash 001 schema (#18405)

View File

@ -25,6 +25,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -21,6 +21,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -21,6 +21,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -21,6 +21,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="seguir en X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="seguir en LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Descargas de Docker" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -21,6 +21,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="suivre sur X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="suivre sur LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Tirages Docker" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -21,6 +21,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="X(Twitter)でフォロー"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="LinkedInでフォロー"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -21,6 +21,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -21,6 +21,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -25,6 +25,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -22,6 +22,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -21,6 +21,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="X(Twitter)'da takip et"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="LinkedIn'da takip et"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Çekmeleri" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -62,8 +65,6 @@ Görsel bir arayüz üzerinde güçlü AI iş akışları oluşturun ve test edi
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
Özür dilerim, haklısınız. Daha anlamlı ve akıcı bir çeviri yapmaya çalışayım. İşte güncellenmiş çeviri:
**3. Prompt IDE**:
Komut istemlerini oluşturmak, model performansını karşılaştırmak ve sohbet tabanlı uygulamalara metin-konuşma gibi ek özellikler eklemek için kullanıcı dostu bir arayüz.
@ -150,8 +151,6 @@ Görsel bir arayüz üzerinde güçlü AI iş akışları oluşturun ve test edi
## Dify'ı Kullanma
- **Cloud </br>**
İşte verdiğiniz metnin Türkçe çevirisi, kod bloğu içinde:
-
Herkesin sıfır kurulumla denemesi için bir [Dify Cloud](https://dify.ai) hizmeti sunuyoruz. Bu hizmet, kendi kendine dağıtılan versiyonun tüm yeteneklerini sağlar ve sandbox planında 200 ücretsiz GPT-4 çağrısı içerir.
- **Dify Topluluk Sürümünü Kendi Sunucunuzda Barındırma</br>**
@ -177,8 +176,6 @@ GitHub'da Dify'a yıldız verin ve yeni sürümlerden anında haberdar olun.
>- RAM >= 4GB
</br>
İşte verdiğiniz metnin Türkçe çevirisi, kod bloğu içinde:
Dify sunucusunu başlatmanın en kolay yolu, [docker-compose.yml](docker/docker-compose.yaml) dosyamızı çalıştırmaktır. Kurulum komutunu çalıştırmadan önce, makinenizde [Docker](https://docs.docker.com/get-docker/) ve [Docker Compose](https://docs.docker.com/compose/install/)'un kurulu olduğundan emin olun:
```bash

View File

@ -21,6 +21,9 @@
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="theo dõi trên X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="theo dõi trên LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">

View File

@ -430,4 +430,7 @@ CREATE_TIDB_SERVICE_JOB_ENABLED=false
# Maximum number of submitted thread count in a ThreadPool for parallel node execution
MAX_SUBMIT_COUNT=100
# Lockout duration in seconds
LOGIN_LOCKOUT_DURATION=86400
LOGIN_LOCKOUT_DURATION=86400
# Prevent Clickjacking
ALLOW_EMBED=false

View File

@ -48,16 +48,18 @@ ENV TZ=UTC
WORKDIR /app/api
RUN apt-get update \
&& apt-get install -y --no-install-recommends curl nodejs libgmp-dev libmpfr-dev libmpc-dev \
# if you located in China, you can use aliyun mirror to speed up
# && echo "deb http://mirrors.aliyun.com/debian testing main" > /etc/apt/sources.list \
&& echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list \
&& apt-get update \
# For Security
&& apt-get install -y --no-install-recommends expat=2.6.4-1 libldap-2.5-0=2.5.19+dfsg-1 perl=5.40.0-8 libsqlite3-0=3.46.1-1 zlib1g=1:1.3.dfsg+really1.3.1-1+b1 \
# install a chinese font to support the use of tools like matplotlib
&& apt-get install -y fonts-noto-cjk \
RUN \
apt-get update \
# Install dependencies
&& apt-get install -y --no-install-recommends \
# basic environment
curl nodejs libgmp-dev libmpfr-dev libmpc-dev \
# For Security
expat libldap-2.5-0 perl libsqlite3-0 zlib1g \
# install a chinese font to support the use of tools like matplotlib
fonts-noto-cjk \
# install libmagic to support the use of python-magic guess MIMETYPE
libmagic1 \
&& apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/*
@ -76,7 +78,6 @@ COPY . /app/api/
COPY docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ARG COMMIT_SHA
ENV COMMIT_SHA=${COMMIT_SHA}

View File

@ -1,9 +1,40 @@
from typing import Optional
from pydantic import Field, NonNegativeInt
from pydantic import Field, NonNegativeInt, computed_field
from pydantic_settings import BaseSettings
class HostedCreditConfig(BaseSettings):
HOSTED_MODEL_CREDIT_CONFIG: str = Field(
description="Model credit configuration in format 'model:credits,model:credits', e.g., 'gpt-4:20,gpt-4o:10'",
default="",
)
def get_model_credits(self, model_name: str) -> int:
"""
Get credit value for a specific model name.
Returns 1 if model is not found in configuration (default credit).
:param model_name: The name of the model to search for
:return: The credit value for the model
"""
if not self.HOSTED_MODEL_CREDIT_CONFIG:
return 1
try:
credit_map = dict(
item.strip().split(":", 1) for item in self.HOSTED_MODEL_CREDIT_CONFIG.split(",") if ":" in item
)
# Search for matching model pattern
for pattern, credit in credit_map.items():
if pattern.strip() == model_name:
return int(credit)
return 1 # Default quota if no match found
except (ValueError, AttributeError):
return 1 # Return default quota if parsing fails
class HostedOpenAiConfig(BaseSettings):
"""
Configuration for hosted OpenAI service
@ -202,5 +233,7 @@ class HostedServiceConfig(
HostedZhipuAIConfig,
# moderation
HostedModerationConfig,
# credit config
HostedCreditConfig,
):
pass
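
A minimal usage sketch of the credit lookup added above. The class and field come from the diff; the instantiation below is illustrative only (pydantic-settings also reads the value from the environment in practice):

```python
# Illustrative only: exercises HostedCreditConfig.get_model_credits as defined above,
# assuming the class is importable from its configs module.
config = HostedCreditConfig(HOSTED_MODEL_CREDIT_CONFIG="gpt-4:20,gpt-4o:10")

assert config.get_model_credits("gpt-4") == 20    # explicit mapping
assert config.get_model_credits("gpt-4o") == 10
assert config.get_model_credits("claude-3") == 1  # no match -> default credit
assert HostedCreditConfig().get_model_credits("gpt-4") == 1  # empty config -> default
```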

View File

@ -1,5 +1,5 @@
from typing import Any, Literal, Optional
from urllib.parse import quote_plus
from urllib.parse import parse_qsl, quote_plus
from pydantic import Field, NonNegativeInt, PositiveFloat, PositiveInt, computed_field
from pydantic_settings import BaseSettings
@ -166,14 +166,28 @@ class DatabaseConfig(BaseSettings):
default=False,
)
@computed_field
@computed_field # type: ignore[misc]
@property
def SQLALCHEMY_ENGINE_OPTIONS(self) -> dict[str, Any]:
# Parse DB_EXTRAS for 'options'
db_extras_dict = dict(parse_qsl(self.DB_EXTRAS))
options = db_extras_dict.get("options", "")
# Always include timezone
timezone_opt = "-c timezone=UTC"
if options:
# Merge user options and timezone
merged_options = f"{options} {timezone_opt}"
else:
merged_options = timezone_opt
connect_args = {"options": merged_options}
return {
"pool_size": self.SQLALCHEMY_POOL_SIZE,
"max_overflow": self.SQLALCHEMY_MAX_OVERFLOW,
"pool_recycle": self.SQLALCHEMY_POOL_RECYCLE,
"pool_pre_ping": self.SQLALCHEMY_POOL_PRE_PING,
"connect_args": {"options": "-c timezone=UTC"},
"connect_args": connect_args,
}
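
For illustration, a standalone sketch of the merge the new SQLALCHEMY_ENGINE_OPTIONS performs, assuming DB_EXTRAS carries a query-string-style value such as `options=-c search_path=myschema` (the value is hypothetical):

```python
from urllib.parse import parse_qsl

db_extras = "options=-c search_path=myschema"  # hypothetical DB_EXTRAS value
options = dict(parse_qsl(db_extras)).get("options", "")
# Timezone is always appended, matching the logic in the diff above
merged = f"{options} -c timezone=UTC" if options else "-c timezone=UTC"
print(merged)  # -c search_path=myschema -c timezone=UTC
```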

View File

@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description="Dify version",
default="0.15.2",
default="0.15.8",
)
COMMIT_SHA: str = Field(

View File

@ -1,12 +1,32 @@
import mimetypes
import os
import platform
import re
import urllib.parse
import warnings
from collections.abc import Mapping
from typing import Any
from uuid import uuid4
import httpx
try:
import magic
except ImportError:
if platform.system() == "Windows":
warnings.warn(
"To use python-magic guess MIMETYPE, you need to run `pip install python-magic-bin`", stacklevel=2
)
elif platform.system() == "Darwin":
warnings.warn("To use python-magic guess MIMETYPE, you need to run `brew install libmagic`", stacklevel=2)
elif platform.system() == "Linux":
warnings.warn(
"To use python-magic guess MIMETYPE, you need to run `sudo apt-get install libmagic1`", stacklevel=2
)
else:
warnings.warn("To use python-magic guess MIMETYPE, you need to install `libmagic`", stacklevel=2)
magic = None # type: ignore
from pydantic import BaseModel
from configs import dify_config
@ -47,6 +67,13 @@ def guess_file_info_from_response(response: httpx.Response):
# If guessing fails, use Content-Type from response headers
mimetype = response.headers.get("Content-Type", "application/octet-stream")
# Use python-magic to guess MIME type if still unknown or generic
if mimetype == "application/octet-stream" and magic is not None:
try:
mimetype = magic.from_buffer(response.content[:1024], mime=True)
except magic.MagicException:
pass
extension = os.path.splitext(filename)[1]
# Ensure filename has an extension

View File

@ -8,7 +8,7 @@ from constants.languages import languages
from controllers.console import api
from controllers.console.auth.error import EmailCodeError, InvalidEmailError, InvalidTokenError, PasswordMismatchError
from controllers.console.error import AccountInFreezeError, AccountNotFound, EmailSendIpLimitError
from controllers.console.wraps import setup_required
from controllers.console.wraps import email_password_login_enabled, setup_required
from events.tenant_event import tenant_was_created
from extensions.ext_database import db
from libs.helper import email, extract_remote_ip
@ -22,6 +22,7 @@ from services.feature_service import FeatureService
class ForgotPasswordSendEmailApi(Resource):
@setup_required
@email_password_login_enabled
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=email, required=True, location="json")
@ -53,6 +54,7 @@ class ForgotPasswordSendEmailApi(Resource):
class ForgotPasswordCheckApi(Resource):
@setup_required
@email_password_login_enabled
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=str, required=True, location="json")
@ -72,11 +74,20 @@ class ForgotPasswordCheckApi(Resource):
if args["code"] != token_data.get("code"):
raise EmailCodeError()
return {"is_valid": True, "email": token_data.get("email")}
# Verified, revoke the first token
AccountService.revoke_reset_password_token(args["token"])
# Refresh token data by generating a new token
_, new_token = AccountService.generate_reset_password_token(
user_email, code=args["code"], additional_data={"phase": "reset"}
)
return {"is_valid": True, "email": token_data.get("email"), "token": new_token}
class ForgotPasswordResetApi(Resource):
@setup_required
@email_password_login_enabled
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("token", type=str, required=True, nullable=False, location="json")
@ -95,6 +106,9 @@ class ForgotPasswordResetApi(Resource):
if reset_data is None:
raise InvalidTokenError()
# Must use token in reset phase
if reset_data.get("phase", "") != "reset":
raise InvalidTokenError()
AccountService.revoke_reset_password_token(token)

View File

@ -22,7 +22,7 @@ from controllers.console.error import (
EmailSendIpLimitError,
NotAllowedCreateWorkspace,
)
from controllers.console.wraps import setup_required
from controllers.console.wraps import email_password_login_enabled, setup_required
from events.tenant_event import tenant_was_created
from libs.helper import email, extract_remote_ip
from libs.password import valid_password
@ -38,6 +38,7 @@ class LoginApi(Resource):
"""Resource for user login."""
@setup_required
@email_password_login_enabled
def post(self):
"""Authenticate user and login."""
parser = reqparse.RequestParser()
@ -110,6 +111,7 @@ class LogoutApi(Resource):
class ResetPasswordSendEmailApi(Resource):
@setup_required
@email_password_login_enabled
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=email, required=True, location="json")

View File

@ -620,7 +620,6 @@ class DatasetRetrievalSettingApi(Resource):
match vector_type:
case (
VectorType.RELYT
| VectorType.PGVECTOR
| VectorType.TIDB_VECTOR
| VectorType.CHROMA
| VectorType.TENCENT

View File

@ -50,7 +50,7 @@ class MessageListApi(InstalledAppResource):
try:
return MessageService.pagination_by_first_id(
app_model, current_user, args["conversation_id"], args["first_id"], args["limit"], "desc"
app_model, current_user, args["conversation_id"], args["first_id"], args["limit"]
)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")

View File

@ -154,3 +154,16 @@ def enterprise_license_required(view):
return view(*args, **kwargs)
return decorated
def email_password_login_enabled(view):
@wraps(view)
def decorated(*args, **kwargs):
features = FeatureService.get_system_features()
if features.enable_email_password_login:
return view(*args, **kwargs)
# otherwise, return 403
abort(403)
return decorated

View File

@ -1,3 +1,5 @@
import json
from flask_restful import Resource, reqparse # type: ignore
from controllers.console.wraps import setup_required
@ -29,4 +31,34 @@ class EnterpriseWorkspace(Resource):
return {"message": "enterprise workspace created."}
class EnterpriseWorkspaceNoOwnerEmail(Resource):
@setup_required
@inner_api_only
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("name", type=str, required=True, location="json")
args = parser.parse_args()
tenant = TenantService.create_tenant(args["name"], is_from_dashboard=True)
tenant_was_created.send(tenant)
resp = {
"id": tenant.id,
"name": tenant.name,
"encrypt_public_key": tenant.encrypt_public_key,
"plan": tenant.plan,
"status": tenant.status,
"custom_config": json.loads(tenant.custom_config) if tenant.custom_config else {},
"created_at": tenant.created_at.isoformat() if tenant.created_at else None,
"updated_at": tenant.updated_at.isoformat() if tenant.updated_at else None,
}
return {
"message": "enterprise workspace created.",
"tenant": resp,
}
api.add_resource(EnterpriseWorkspace, "/enterprise/workspace")
api.add_resource(EnterpriseWorkspaceNoOwnerEmail, "/enterprise/workspace/ownerless")

View File

@ -18,6 +18,7 @@ from controllers.service_api.app.error import (
from controllers.service_api.dataset.error import (
ArchivedDocumentImmutableError,
DocumentIndexingError,
InvalidMetadataError,
)
from controllers.service_api.wraps import DatasetApiResource, cloud_edition_billing_resource_check
from core.errors.error import ProviderTokenNotInitError
@ -50,6 +51,9 @@ class DocumentAddByTextApi(DatasetApiResource):
"indexing_technique", type=str, choices=Dataset.INDEXING_TECHNIQUE_LIST, nullable=False, location="json"
)
parser.add_argument("retrieval_model", type=dict, required=False, nullable=False, location="json")
parser.add_argument("doc_type", type=str, required=False, nullable=True, location="json")
parser.add_argument("doc_metadata", type=dict, required=False, nullable=True, location="json")
args = parser.parse_args()
dataset_id = str(dataset_id)
tenant_id = str(tenant_id)
@ -61,6 +65,28 @@ class DocumentAddByTextApi(DatasetApiResource):
if not dataset.indexing_technique and not args["indexing_technique"]:
raise ValueError("indexing_technique is required.")
# Validate metadata if provided
if args.get("doc_type") or args.get("doc_metadata"):
if not args.get("doc_type") or not args.get("doc_metadata"):
raise InvalidMetadataError("Both doc_type and doc_metadata must be provided when adding metadata")
if args["doc_type"] not in DocumentService.DOCUMENT_METADATA_SCHEMA:
raise InvalidMetadataError(
"Invalid doc_type. Must be one of: " + ", ".join(DocumentService.DOCUMENT_METADATA_SCHEMA.keys())
)
if not isinstance(args["doc_metadata"], dict):
raise InvalidMetadataError("doc_metadata must be a dictionary")
# Validate metadata schema based on doc_type
if args["doc_type"] != "others":
metadata_schema = DocumentService.DOCUMENT_METADATA_SCHEMA[args["doc_type"]]
for key, value in args["doc_metadata"].items():
if key in metadata_schema and not isinstance(value, metadata_schema[key]):
raise InvalidMetadataError(f"Invalid type for metadata field {key}")
# set to MetaDataConfig
args["metadata"] = {"doc_type": args["doc_type"], "doc_metadata": args["doc_metadata"]}
text = args.get("text")
name = args.get("name")
if text is None or name is None:
@ -107,6 +133,8 @@ class DocumentUpdateByTextApi(DatasetApiResource):
"doc_language", type=str, default="English", required=False, nullable=False, location="json"
)
parser.add_argument("retrieval_model", type=dict, required=False, nullable=False, location="json")
parser.add_argument("doc_type", type=str, required=False, nullable=True, location="json")
parser.add_argument("doc_metadata", type=dict, required=False, nullable=True, location="json")
args = parser.parse_args()
dataset_id = str(dataset_id)
tenant_id = str(tenant_id)
@ -115,6 +143,32 @@ class DocumentUpdateByTextApi(DatasetApiResource):
if not dataset:
raise ValueError("Dataset is not exist.")
# indexing_technique is already set in dataset since this is an update
args["indexing_technique"] = dataset.indexing_technique
# Validate metadata if provided
if args.get("doc_type") or args.get("doc_metadata"):
if not args.get("doc_type") or not args.get("doc_metadata"):
raise InvalidMetadataError("Both doc_type and doc_metadata must be provided when adding metadata")
if args["doc_type"] not in DocumentService.DOCUMENT_METADATA_SCHEMA:
raise InvalidMetadataError(
"Invalid doc_type. Must be one of: " + ", ".join(DocumentService.DOCUMENT_METADATA_SCHEMA.keys())
)
if not isinstance(args["doc_metadata"], dict):
raise InvalidMetadataError("doc_metadata must be a dictionary")
# Validate metadata schema based on doc_type
if args["doc_type"] != "others":
metadata_schema = DocumentService.DOCUMENT_METADATA_SCHEMA[args["doc_type"]]
for key, value in args["doc_metadata"].items():
if key in metadata_schema and not isinstance(value, metadata_schema[key]):
raise InvalidMetadataError(f"Invalid type for metadata field {key}")
# set to MetaDataConfig
args["metadata"] = {"doc_type": args["doc_type"], "doc_metadata": args["doc_metadata"]}
if args["text"]:
text = args.get("text")
name = args.get("name")
@ -161,6 +215,30 @@ class DocumentAddByFileApi(DatasetApiResource):
args["doc_form"] = "text_model"
if "doc_language" not in args:
args["doc_language"] = "English"
# Validate metadata if provided
if args.get("doc_type") or args.get("doc_metadata"):
if not args.get("doc_type") or not args.get("doc_metadata"):
raise InvalidMetadataError("Both doc_type and doc_metadata must be provided when adding metadata")
if args["doc_type"] not in DocumentService.DOCUMENT_METADATA_SCHEMA:
raise InvalidMetadataError(
"Invalid doc_type. Must be one of: " + ", ".join(DocumentService.DOCUMENT_METADATA_SCHEMA.keys())
)
if not isinstance(args["doc_metadata"], dict):
raise InvalidMetadataError("doc_metadata must be a dictionary")
# Validate metadata schema based on doc_type
if args["doc_type"] != "others":
metadata_schema = DocumentService.DOCUMENT_METADATA_SCHEMA[args["doc_type"]]
for key, value in args["doc_metadata"].items():
if key in metadata_schema and not isinstance(value, metadata_schema[key]):
raise InvalidMetadataError(f"Invalid type for metadata field {key}")
# set to MetaDataConfig
args["metadata"] = {"doc_type": args["doc_type"], "doc_metadata": args["doc_metadata"]}
# get dataset info
dataset_id = str(dataset_id)
tenant_id = str(tenant_id)
@ -228,6 +306,29 @@ class DocumentUpdateByFileApi(DatasetApiResource):
if "doc_language" not in args:
args["doc_language"] = "English"
# Validate metadata if provided
if args.get("doc_type") or args.get("doc_metadata"):
if not args.get("doc_type") or not args.get("doc_metadata"):
raise InvalidMetadataError("Both doc_type and doc_metadata must be provided when adding metadata")
if args["doc_type"] not in DocumentService.DOCUMENT_METADATA_SCHEMA:
raise InvalidMetadataError(
"Invalid doc_type. Must be one of: " + ", ".join(DocumentService.DOCUMENT_METADATA_SCHEMA.keys())
)
if not isinstance(args["doc_metadata"], dict):
raise InvalidMetadataError("doc_metadata must be a dictionary")
# Validate metadata schema based on doc_type
if args["doc_type"] != "others":
metadata_schema = DocumentService.DOCUMENT_METADATA_SCHEMA[args["doc_type"]]
for key, value in args["doc_metadata"].items():
if key in metadata_schema and not isinstance(value, metadata_schema[key]):
raise InvalidMetadataError(f"Invalid type for metadata field {key}")
# set to MetaDataConfig
args["metadata"] = {"doc_type": args["doc_type"], "doc_metadata": args["doc_metadata"]}
# get dataset info
dataset_id = str(dataset_id)
tenant_id = str(tenant_id)

View File

@ -91,7 +91,7 @@ class MessageListApi(WebApiResource):
try:
return MessageService.pagination_by_first_id(
app_model, end_user, args["conversation_id"], args["first_id"], args["limit"], "desc"
app_model, end_user, args["conversation_id"], args["first_id"], args["limit"]
)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")

View File

@ -104,7 +104,6 @@ class CotAgentRunner(BaseAgentRunner, ABC):
# recalc llm max tokens
prompt_messages = self._organize_prompt_messages()
self.recalc_llm_max_tokens(self.model_config, prompt_messages)
# invoke model
chunks = model_instance.invoke_llm(
prompt_messages=prompt_messages,

View File

@ -84,7 +84,6 @@ class FunctionCallAgentRunner(BaseAgentRunner):
# recalc llm max tokens
prompt_messages = self._organize_prompt_messages()
self.recalc_llm_max_tokens(self.model_config, prompt_messages)
# invoke model
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult] = model_instance.invoke_llm(
prompt_messages=prompt_messages,

View File

@ -55,20 +55,6 @@ class AgentChatAppRunner(AppRunner):
query = application_generate_entity.query
files = application_generate_entity.files
# Pre-calculate the number of tokens of the prompt messages,
# and return the rest number of tokens by model context token size limit and max token size limit.
# If the rest number of tokens is not enough, raise exception.
# Include: prompt template, inputs, query(optional), files(optional)
# Not Include: memory, external data, dataset context
self.get_pre_calculate_rest_tokens(
app_record=app_record,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
query=query,
)
memory = None
if application_generate_entity.conversation_id:
# get memory of conversation (read-only)
@ -202,7 +188,7 @@ class AgentChatAppRunner(AppRunner):
# change function call strategy based on LLM model
llm_model = cast(LargeLanguageModel, model_instance.model_type_instance)
model_schema = llm_model.get_model_schema(model_instance.model, model_instance.credentials)
if not model_schema or not model_schema.features:
if not model_schema:
raise ValueError("Model schema not found")
if {ModelFeature.MULTI_TOOL_CALL, ModelFeature.TOOL_CALL}.intersection(model_schema.features or []):

View File

@ -15,10 +15,8 @@ from core.app.features.annotation_reply.annotation_reply import AnnotationReplyF
from core.app.features.hosting_moderation.hosting_moderation import HostingModerationFeature
from core.external_data_tool.external_data_fetch import ExternalDataFetch
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_manager import ModelInstance
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import AssistantPromptMessage, PromptMessage
from core.model_runtime.entities.model_entities import ModelPropertyKey
from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.moderation.input_moderation import InputModeration
from core.prompt.advanced_prompt_transform import AdvancedPromptTransform
@ -31,106 +29,6 @@ if TYPE_CHECKING:
class AppRunner:
def get_pre_calculate_rest_tokens(
self,
app_record: App,
model_config: ModelConfigWithCredentialsEntity,
prompt_template_entity: PromptTemplateEntity,
inputs: Mapping[str, str],
files: Sequence["File"],
query: Optional[str] = None,
) -> int:
"""
Get pre calculate rest tokens
:param app_record: app record
:param model_config: model config entity
:param prompt_template_entity: prompt template entity
:param inputs: inputs
:param files: files
:param query: query
:return:
"""
# Invoke model
model_instance = ModelInstance(
provider_model_bundle=model_config.provider_model_bundle, model=model_config.model
)
model_context_tokens = model_config.model_schema.model_properties.get(ModelPropertyKey.CONTEXT_SIZE)
max_tokens = 0
for parameter_rule in model_config.model_schema.parameter_rules:
if parameter_rule.name == "max_tokens" or (
parameter_rule.use_template and parameter_rule.use_template == "max_tokens"
):
max_tokens = (
model_config.parameters.get(parameter_rule.name)
or model_config.parameters.get(parameter_rule.use_template or "")
) or 0
if model_context_tokens is None:
return -1
if max_tokens is None:
max_tokens = 0
# get prompt messages without memory and context
prompt_messages, stop = self.organize_prompt_messages(
app_record=app_record,
model_config=model_config,
prompt_template_entity=prompt_template_entity,
inputs=inputs,
files=files,
query=query,
)
prompt_tokens = model_instance.get_llm_num_tokens(prompt_messages)
rest_tokens: int = model_context_tokens - max_tokens - prompt_tokens
if rest_tokens < 0:
raise InvokeBadRequestError(
"Query or prefix prompt is too long, you can reduce the prefix prompt, "
"or shrink the max token, or switch to a llm with a larger token limit size."
)
return rest_tokens
def recalc_llm_max_tokens(
self, model_config: ModelConfigWithCredentialsEntity, prompt_messages: list[PromptMessage]
):
# recalc max_tokens if sum(prompt_token + max_tokens) over model token limit
model_instance = ModelInstance(
provider_model_bundle=model_config.provider_model_bundle, model=model_config.model
)
model_context_tokens = model_config.model_schema.model_properties.get(ModelPropertyKey.CONTEXT_SIZE)
max_tokens = 0
for parameter_rule in model_config.model_schema.parameter_rules:
if parameter_rule.name == "max_tokens" or (
parameter_rule.use_template and parameter_rule.use_template == "max_tokens"
):
max_tokens = (
model_config.parameters.get(parameter_rule.name)
or model_config.parameters.get(parameter_rule.use_template or "")
) or 0
if model_context_tokens is None:
return -1
if max_tokens is None:
max_tokens = 0
prompt_tokens = model_instance.get_llm_num_tokens(prompt_messages)
if prompt_tokens + max_tokens > model_context_tokens:
max_tokens = max(model_context_tokens - prompt_tokens, 16)
for parameter_rule in model_config.model_schema.parameter_rules:
if parameter_rule.name == "max_tokens" or (
parameter_rule.use_template and parameter_rule.use_template == "max_tokens"
):
model_config.parameters[parameter_rule.name] = max_tokens
def organize_prompt_messages(
self,
app_record: App,

View File

@ -50,20 +50,6 @@ class ChatAppRunner(AppRunner):
query = application_generate_entity.query
files = application_generate_entity.files
# Pre-calculate the number of tokens of the prompt messages,
# and return the rest number of tokens by model context token size limit and max token size limit.
# If the rest number of tokens is not enough, raise exception.
# Include: prompt template, inputs, query(optional), files(optional)
# Not Include: memory, external data, dataset context
self.get_pre_calculate_rest_tokens(
app_record=app_record,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
query=query,
)
memory = None
if application_generate_entity.conversation_id:
# get memory of conversation (read-only)
@ -194,9 +180,6 @@ class ChatAppRunner(AppRunner):
if hosting_moderation_result:
return
# Re-calculate the max tokens if sum(prompt_token + max_tokens) over model token limit
self.recalc_llm_max_tokens(model_config=application_generate_entity.model_conf, prompt_messages=prompt_messages)
# Invoke model
model_instance = ModelInstance(
provider_model_bundle=application_generate_entity.model_conf.provider_model_bundle,

View File

@ -43,20 +43,6 @@ class CompletionAppRunner(AppRunner):
query = application_generate_entity.query
files = application_generate_entity.files
# Pre-calculate the number of tokens of the prompt messages,
# and return the rest number of tokens by model context token size limit and max token size limit.
# If the rest number of tokens is not enough, raise exception.
# Include: prompt template, inputs, query(optional), files(optional)
# Not Include: memory, external data, dataset context
self.get_pre_calculate_rest_tokens(
app_record=app_record,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
query=query,
)
# organize all inputs and template to prompt messages
# Include: prompt template, inputs, query(optional), files(optional)
prompt_messages, stop = self.organize_prompt_messages(
@ -152,9 +138,6 @@ class CompletionAppRunner(AppRunner):
if hosting_moderation_result:
return
# Re-calculate the max tokens if sum(prompt_token + max_tokens) over model token limit
self.recalc_llm_max_tokens(model_config=application_generate_entity.model_conf, prompt_messages=prompt_messages)
# Invoke model
model_instance = ModelInstance(
provider_model_bundle=application_generate_entity.model_conf.provider_model_bundle,

View File

@ -11,15 +11,6 @@ from configs import dify_config
SSRF_DEFAULT_MAX_RETRIES = dify_config.SSRF_DEFAULT_MAX_RETRIES
proxy_mounts = (
{
"http://": httpx.HTTPTransport(proxy=dify_config.SSRF_PROXY_HTTP_URL),
"https://": httpx.HTTPTransport(proxy=dify_config.SSRF_PROXY_HTTPS_URL),
}
if dify_config.SSRF_PROXY_HTTP_URL and dify_config.SSRF_PROXY_HTTPS_URL
else None
)
BACKOFF_FACTOR = 0.5
STATUS_FORCELIST = [429, 500, 502, 503, 504]
@ -51,7 +42,11 @@ def make_request(method, url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
if dify_config.SSRF_PROXY_ALL_URL:
with httpx.Client(proxy=dify_config.SSRF_PROXY_ALL_URL) as client:
response = client.request(method=method, url=url, **kwargs)
elif proxy_mounts:
elif dify_config.SSRF_PROXY_HTTP_URL and dify_config.SSRF_PROXY_HTTPS_URL:
proxy_mounts = {
"http://": httpx.HTTPTransport(proxy=dify_config.SSRF_PROXY_HTTP_URL),
"https://": httpx.HTTPTransport(proxy=dify_config.SSRF_PROXY_HTTPS_URL),
}
with httpx.Client(mounts=proxy_mounts) as client:
response = client.request(method=method, url=url, **kwargs)
else:
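
The fix above moves transport construction from module import time into the request path. A minimal sketch of why that matters, assuming standard httpx behavior: transports created inside the `with` block are owned by the client and closed on exit, so their file descriptors are released per request instead of accumulating across concurrent calls.

```python
import httpx

def fetch_via_proxy(method: str, url: str, proxy_url: str, **kwargs) -> httpx.Response:
    # Illustrative, not the Dify code: per-call transports are closed by
    # Client.__exit__, so no descriptors outlive the request.
    mounts = {
        "http://": httpx.HTTPTransport(proxy=proxy_url),
        "https://": httpx.HTTPTransport(proxy=proxy_url),
    }
    with httpx.Client(mounts=mounts) as client:
        return client.request(method=method, url=url, **kwargs)
```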

View File

@ -26,7 +26,7 @@ class TokenBufferMemory:
self.model_instance = model_instance
def get_history_prompt_messages(
self, max_token_limit: int = 2000, message_limit: Optional[int] = None
self, max_token_limit: int = 100000, message_limit: Optional[int] = None
) -> Sequence[PromptMessage]:
"""
Get history prompt messages.

View File

@ -1,4 +1,4 @@
from .llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from .llm_entities import LLMMode, LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from .message_entities import (
AssistantPromptMessage,
AudioPromptMessageContent,
@ -23,6 +23,7 @@ __all__ = [
"AudioPromptMessageContent",
"DocumentPromptMessageContent",
"ImagePromptMessageContent",
"LLMMode",
"LLMResult",
"LLMResultChunk",
"LLMResultChunkDelta",

View File

@ -1,5 +1,5 @@
from decimal import Decimal
from enum import Enum
from enum import StrEnum
from typing import Optional
from pydantic import BaseModel
@ -8,7 +8,7 @@ from core.model_runtime.entities.message_entities import AssistantPromptMessage,
from core.model_runtime.entities.model_entities import ModelUsage, PriceInfo
class LLMMode(Enum):
class LLMMode(StrEnum):
"""
Enum class for large language model mode.
"""

View File
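
Note: switching LLMMode from Enum to StrEnum (Python 3.11+) lets members compare equal to, and format as, their raw string values, which is convenient when the mode arrives as a plain string from credentials. A quick illustration with throwaway enums:

    from enum import Enum, StrEnum

    class OldMode(Enum):
        CHAT = "chat"

    class NewMode(StrEnum):
        CHAT = "chat"

    assert OldMode.CHAT != "chat"       # plain Enum members never equal their value
    assert NewMode.CHAT == "chat"       # StrEnum members are strings
    assert f"{NewMode.CHAT}" == "chat"  # and format as their value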

@ -221,13 +221,12 @@ class AIModel(ABC):
:param credentials: model credentials
:return: model schema
"""
# get predefined models (predefined_models)
models = self.predefined_models()
model_map = {model.model: model for model in models}
if model in model_map:
return model_map[model]
# Try to get model schema from predefined models
for predefined_model in self.predefined_models():
if model == predefined_model.model:
return predefined_model
# Try to get model schema from credentials
if credentials:
model_schema = self.get_customizable_model_schema_from_credentials(model, credentials)
if model_schema:

View File
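
Note: the rewrite above replaces a dict built on every call with a linear scan; for the short predefined-model lists involved, that is simpler and avoids a throwaway allocation. A standalone sketch with a minimal stand-in entity:

    from dataclasses import dataclass

    @dataclass
    class ModelEntity:
        model: str

    def find_predefined(model: str, predefined: list[ModelEntity]) -> ModelEntity | None:
        # Return the first predefined entity whose name matches, else None.
        for entity in predefined:
            if entity.model == model:
                return entity
        return None

    print(find_predefined("gpt-4.1", [ModelEntity("gpt-4.1")]))  # -> ModelEntity(model='gpt-4.1')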

@ -30,6 +30,11 @@ from core.model_runtime.model_providers.__base.ai_model import AIModel
logger = logging.getLogger(__name__)
HTML_THINKING_TAG = (
'<details style="color:gray;background-color: #f8f8f8;padding: 8px;border-radius: 4px;" open> '
"<summary> Thinking... </summary>"
)
class LargeLanguageModel(AIModel):
"""
@ -400,6 +405,40 @@ if you are not sure about the structure.
),
)
def _wrap_thinking_by_reasoning_content(self, delta: dict, is_reasoning: bool) -> tuple[str, bool]:
"""
If the reasoning response is from delta.get("reasoning_content"), we wrap
it with HTML details tag.
:param delta: delta dictionary from LLM streaming response
:param is_reasoning: is reasoning
:return: tuple of (processed_content, is_reasoning)
"""
content = delta.get("content") or ""
reasoning_content = delta.get("reasoning_content")
if reasoning_content:
if not is_reasoning:
content = HTML_THINKING_TAG + reasoning_content
is_reasoning = True
else:
content = reasoning_content
elif is_reasoning:
content = "</details>" + content
is_reasoning = False
return content, is_reasoning
def _wrap_thinking_by_tag(self, content: str) -> str:
"""
if the reasoning response is a <think>...</think> block from delta.get("content"),
we replace <think> to <detail>.
:param content: delta.get("content")
:return: processed_content
"""
return content.replace("<think>", HTML_THINKING_TAG).replace("</think>", "</details>")
def _invoke_result_generator(
self,
model: str,

View File
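
Note: a worked example of the reasoning-wrapping logic above, re-implemented standalone so it can be run directly. HTML_THINKING_TAG is copied from the diff; the delta dicts are invented for illustration:

    HTML_THINKING_TAG = (
        '<details style="color:gray;background-color: #f8f8f8;padding: 8px;'
        'border-radius: 4px;" open> <summary> Thinking... </summary>'
    )

    def wrap(deltas: list[dict]) -> str:
        is_reasoning = False
        out = []
        for delta in deltas:
            content = delta.get("content") or ""
            reasoning = delta.get("reasoning_content")
            if reasoning:
                # Open the details block on the first reasoning chunk only.
                content = (HTML_THINKING_TAG + reasoning) if not is_reasoning else reasoning
                is_reasoning = True
            elif is_reasoning:
                # First ordinary chunk after reasoning closes the block.
                content = "</details>" + content
                is_reasoning = False
            out.append(content)
        return "".join(out)

    print(wrap([{"reasoning_content": "step 1"},
                {"reasoning_content": " step 2"},
                {"content": "final answer"}]))
    # -> <details ...> <summary> Thinking... </summary>step 1 step 2</details>final answer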

@ -1,4 +1,5 @@
- openai
- deepseek
- anthropic
- azure_openai
- google
@ -32,7 +33,6 @@
- localai
- volcengine_maas
- openai_api_compatible
- deepseek
- hunyuan
- siliconflow
- perfxcloud

View File

@ -51,6 +51,40 @@ model_credential_schema:
show_on:
- variable: __model_type
value: llm
- variable: mode
show_on:
- variable: __model_type
value: llm
label:
en_US: Completion mode
type: select
required: false
default: chat
placeholder:
zh_Hans: 选择对话类型
en_US: Select completion mode
options:
- value: completion
label:
en_US: Completion
zh_Hans: 补全
- value: chat
label:
en_US: Chat
zh_Hans: 对话
- variable: context_size
label:
zh_Hans: 模型上下文长度
en_US: Model context size
required: true
show_on:
- variable: __model_type
value: llm
type: text-input
default: "4096"
placeholder:
zh_Hans: 在此输入您的模型上下文长度
en_US: Enter your Model context size
- variable: jwt_token
required: true
label:

View File
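
Note: the schema above adds mode and context_size as credential fields for custom models. Downstream code (see the Azure AI Studio diff below) reads them back with defaults, along the lines of this sketch (the credentials dict is hypothetical):

    credentials = {"mode": "chat", "context_size": "4096"}  # as entered in the form
    context_size = int(credentials.get("context_size", "4096"))  # text-input, so cast to int
    mode = credentials.get("mode", "chat")
    print(context_size, mode)  # -> 4096 chat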

@ -1,9 +1,9 @@
import logging
from collections.abc import Generator
from collections.abc import Generator, Sequence
from typing import Any, Optional, Union
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import StreamingChatCompletionsUpdate
from azure.ai.inference.models import StreamingChatCompletionsUpdate, SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import (
ClientAuthenticationError,
@ -20,7 +20,7 @@ from azure.core.exceptions import (
)
from core.model_runtime.callbacks.base_callback import Callback
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.llm_entities import LLMMode, LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import (
AssistantPromptMessage,
PromptMessage,
@ -30,6 +30,7 @@ from core.model_runtime.entities.model_entities import (
AIModelEntity,
FetchFrom,
I18nObject,
ModelPropertyKey,
ModelType,
ParameterRule,
ParameterType,
@ -60,10 +61,10 @@ class AzureAIStudioLargeLanguageModel(LargeLanguageModel):
self,
model: str,
credentials: dict,
prompt_messages: list[PromptMessage],
prompt_messages: Sequence[PromptMessage],
model_parameters: dict,
tools: Optional[list[PromptMessageTool]] = None,
stop: Optional[list[str]] = None,
tools: Optional[Sequence[PromptMessageTool]] = None,
stop: Optional[Sequence[str]] = None,
stream: bool = True,
user: Optional[str] = None,
) -> Union[LLMResult, Generator]:
@ -82,8 +83,8 @@ class AzureAIStudioLargeLanguageModel(LargeLanguageModel):
"""
if not self.client:
endpoint = credentials.get("endpoint")
api_key = credentials.get("api_key")
endpoint = str(credentials.get("endpoint"))
api_key = str(credentials.get("api_key"))
self.client = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(api_key))
messages = [{"role": msg.role.value, "content": msg.content} for msg in prompt_messages]
@ -94,6 +95,7 @@ class AzureAIStudioLargeLanguageModel(LargeLanguageModel):
"temperature": model_parameters.get("temperature", 0),
"top_p": model_parameters.get("top_p", 1),
"stream": stream,
"model": model,
}
if stop:
@ -255,10 +257,16 @@ class AzureAIStudioLargeLanguageModel(LargeLanguageModel):
:return:
"""
try:
endpoint = credentials.get("endpoint")
api_key = credentials.get("api_key")
endpoint = str(credentials.get("endpoint"))
api_key = str(credentials.get("api_key"))
client = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(api_key))
client.get_model_info()
client.complete(
messages=[
SystemMessage(content="I say 'ping', you say 'pong'"),
UserMessage(content="ping"),
],
model=model,
)
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
@ -327,7 +335,10 @@ class AzureAIStudioLargeLanguageModel(LargeLanguageModel):
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_type=ModelType.LLM,
features=[],
model_properties={},
model_properties={
ModelPropertyKey.CONTEXT_SIZE: int(credentials.get("context_size", "4096")),
ModelPropertyKey.MODE: credentials.get("mode", LLMMode.CHAT),
},
parameter_rules=rules,
)

View File
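
Note: the validation change above swaps get_model_info() for a one-round completion, which exercises the deployed model itself. Isolated as a sketch (endpoint, api_key, and model are placeholders; assumes the azure-ai-inference package):

    from azure.ai.inference import ChatCompletionsClient
    from azure.ai.inference.models import SystemMessage, UserMessage
    from azure.core.credentials import AzureKeyCredential

    def validate_credentials(endpoint: str, api_key: str, model: str) -> None:
        client = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(api_key))
        # A minimal ping/pong round trip proves the key, endpoint, and model
        # name all line up, which a metadata call alone may not.
        client.complete(
            messages=[
                SystemMessage(content="I say 'ping', you say 'pong'"),
                UserMessage(content="ping"),
            ],
            model=model,
        )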

@ -53,6 +53,9 @@ model_credential_schema:
type: select
required: true
options:
- label:
en_US: 2024-12-01-preview
value: 2024-12-01-preview
- label:
en_US: 2024-10-01-preview
value: 2024-10-01-preview
@ -135,6 +138,18 @@ model_credential_schema:
show_on:
- variable: __model_type
value: llm
- label:
en_US: o3-mini
value: o3-mini
show_on:
- variable: __model_type
value: llm
- label:
en_US: o3-mini-2025-01-31
value: o3-mini-2025-01-31
show_on:
- variable: __model_type
value: llm
- label:
en_US: o1-preview
value: o1-preview

View File

@ -44,6 +44,7 @@ provider_credential_schema:
label:
en_US: AWS Region
zh_Hans: AWS 地区
ja_JP: AWS リージョン
type: select
default: us-east-1
options:
@ -51,62 +52,86 @@ provider_credential_schema:
label:
en_US: US East (N. Virginia)
zh_Hans: 美国东部 (弗吉尼亚北部)
ja_JP: 米国 (バージニア北部)
- value: us-east-2
label:
en_US: US East (Ohio)
zh_Hans: 美国东部 (弗吉尼亚北部)
zh_Hans: 美国东部 (俄亥俄)
ja_JP: 米国 (オハイオ)
- value: us-west-2
label:
en_US: US West (Oregon)
zh_Hans: 美国西部 (俄勒冈州)
ja_JP: 米国 (オレゴン)
- value: ap-south-1
label:
en_US: Asia Pacific (Mumbai)
zh_Hans: 亚太地区(孟买)
ja_JP: アジアパシフィック (ムンバイ)
- value: ap-southeast-1
label:
en_US: Asia Pacific (Singapore)
zh_Hans: 亚太地区 (新加坡)
ja_JP: アジアパシフィック (シンガポール)
- value: ap-southeast-2
label:
en_US: Asia Pacific (Sydney)
zh_Hans: 亚太地区 (悉尼)
ja_JP: アジアパシフィック (シドニー)
- value: ap-northeast-1
label:
en_US: Asia Pacific (Tokyo)
zh_Hans: 亚太地区 (东京)
ja_JP: アジアパシフィック (東京)
- value: ap-northeast-2
label:
en_US: Asia Pacific (Seoul)
zh_Hans: 亚太地区(首尔)
ja_JP: アジアパシフィック (ソウル)
- value: ca-central-1
label:
en_US: Canada (Central)
zh_Hans: 加拿大(中部)
ja_JP: カナダ (中部)
- value: eu-central-1
label:
en_US: Europe (Frankfurt)
zh_Hans: 欧洲 (法兰克福)
ja_JP: 欧州 (フランクフルト)
- value: eu-west-1
label:
en_US: Europe (Ireland)
zh_Hans: 欧洲(爱尔兰)
ja_JP: 欧州 (アイルランド)
- value: eu-west-2
label:
en_US: Europe (London)
zh_Hans: 欧洲西部 (伦敦)
ja_JP: 欧州 (ロンドン)
- value: eu-west-3
label:
en_US: Europe (Paris)
zh_Hans: 欧洲(巴黎)
ja_JP: 欧州 (パリ)
- value: sa-east-1
label:
en_US: South America (São Paulo)
zh_Hans: 南美洲(圣保罗)
ja_JP: 南米 (サンパウロ)
- value: us-gov-west-1
label:
en_US: AWS GovCloud (US-West)
zh_Hans: AWS GovCloud (US-West)
ja_JP: AWS GovCloud (米国西部)
- variable: bedrock_endpoint_url
label:
zh_Hans: Bedrock Endpoint URL
en_US: Bedrock Endpoint URL
type: text-input
required: false
placeholder:
zh_Hans: 在此输入您的 Bedrock Endpoint URL, 如https://123456.cloudfront.net
en_US: Enter your Bedrock Endpoint URL, e.g. https://123456.cloudfront.net
- variable: model_for_validation
required: false
label:

View File

@ -13,6 +13,7 @@ def get_bedrock_client(service_name: str, credentials: Mapping[str, str]):
client_config = Config(region_name=region_name)
aws_access_key_id = credentials.get("aws_access_key_id")
aws_secret_access_key = credentials.get("aws_secret_access_key")
bedrock_endpoint_url = credentials.get("bedrock_endpoint_url")
if aws_access_key_id and aws_secret_access_key:
# use aksk to call bedrock
@ -21,6 +22,7 @@ def get_bedrock_client(service_name: str, credentials: Mapping[str, str]):
config=client_config,
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
**({"endpoint_url": bedrock_endpoint_url} if bedrock_endpoint_url else {}),
)
else:
# use iam without aksk to call

View File
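
Note: the `**({...} if ... else {})` expansion above is a compact way to pass endpoint_url only when it is configured, leaving boto3's default regional endpoint untouched otherwise. The same pattern in isolation (assumes boto3; the credential keys mirror the diff):

    import boto3
    from botocore.config import Config

    def get_client(service_name: str, credentials: dict):
        config = Config(region_name=credentials.get("aws_region", "us-east-1"))
        endpoint_url = credentials.get("bedrock_endpoint_url")
        return boto3.client(
            service_name=service_name,
            config=config,
            aws_access_key_id=credentials["aws_access_key_id"],
            aws_secret_access_key=credentials["aws_secret_access_key"],
            # Only inject endpoint_url when it is set; an empty kwargs
            # expansion leaves the default endpoint resolution in place.
            **({"endpoint_url": endpoint_url} if endpoint_url else {}),
        )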

@ -0,0 +1,115 @@
model: us.anthropic.claude-3-7-sonnet-20250219-v1:0
label:
en_US: Claude 3.7 Sonnet(US.Cross Region Inference)
icon: icon_s_en.svg
model_type: llm
features:
- agent-thought
- vision
- tool-call
- stream-tool-call
model_properties:
mode: chat
context_size: 200000
# docs: https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html
parameter_rules:
- name: enable_cache
label:
zh_Hans: 启用提示缓存
en_US: Enable Prompt Cache
type: boolean
required: false
default: true
help:
zh_Hans: 启用提示缓存可以提高性能并降低成本。Claude 3.7 Sonnet支持在system、messages和tools字段中使用缓存检查点。
en_US: Enable prompt caching to improve performance and reduce costs. Claude 3.7 Sonnet supports cache checkpoints in system, messages, and tools fields.
- name: reasoning_type
label:
zh_Hans: 推理配置
en_US: Reasoning Type
type: boolean
required: false
default: false
placeholder:
zh_Hans: 设置推理配置
en_US: Set reasoning configuration
help:
zh_Hans: 控制模型的推理能力。启用时temperature将固定为1且top_p将被禁用。
en_US: Controls the model's reasoning capability. When enabled, temperature will be fixed to 1 and top_p will be disabled.
- name: reasoning_budget
show_on:
- variable: reasoning_type
value: true
label:
zh_Hans: 推理预算
en_US: Reasoning Budget
type: int
default: 1024
min: 0
max: 128000
help:
zh_Hans: 推理的预算限制最小1024必须小于max_tokens。仅在推理类型为enabled时可用。
en_US: Budget limit for reasoning (minimum 1024), must be less than max_tokens. Only available when reasoning type is enabled.
- name: max_tokens
use_template: max_tokens
required: true
label:
zh_Hans: 最大token数
en_US: Max Tokens
type: int
default: 8192
min: 1
max: 128000
help:
zh_Hans: 停止前生成的最大令牌数。请注意Anthropic Claude 模型可能会在达到 max_tokens 的值之前停止生成令牌。不同的 Anthropic Claude 模型对此参数具有不同的最大值。
en_US: The maximum number of tokens to generate before stopping. Note that Anthropic Claude models might stop generating tokens before reaching the value of max_tokens. Different Anthropic Claude models have different maximum values for this parameter.
- name: temperature
use_template: temperature
required: false
label:
zh_Hans: 模型温度
en_US: Model Temperature
type: float
default: 1
min: 0.0
max: 1.0
help:
zh_Hans: 生成内容的随机性。当推理功能启用时该值将被固定为1。
en_US: The amount of randomness injected into the response. When reasoning is enabled, this value will be fixed to 1.
- name: top_p
show_on:
- variable: reasoning_type
value: disabled
use_template: top_p
label:
zh_Hans: Top P
en_US: Top P
required: false
type: float
default: 0.999
min: 0.000
max: 1.000
help:
zh_Hans: 在核采样中的概率阈值。当推理功能启用时,该参数将被禁用。
en_US: The probability threshold in nucleus sampling. When reasoning is enabled, this parameter will be disabled.
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
required: false
type: int
default: 0
min: 0
# tip docs from aws has error, max value is 500
max: 500
help:
zh_Hans: 对于每个后续标记,仅从前 K 个选项中进行采样。使用 top_k 删除长尾低概率响应。
en_US: Only sample from the top K options for each subsequent token. Use top_k to remove long tail low probability responses.
- name: response_format
use_template: response_format
pricing:
input: '0.003'
output: '0.015'
unit: '0.001'
currency: USD

View File
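
Note: per the help text above, reasoning_budget must be at least 1024 and strictly below max_tokens, and enabling reasoning pins temperature to 1. A hypothetical validation sketch of those constraints (not Dify code):

    def validate_reasoning(reasoning_budget: int, max_tokens: int) -> None:
        # Constraints stated in the parameter help above.
        if reasoning_budget < 1024:
            raise ValueError("reasoning_budget must be at least 1024")
        if reasoning_budget >= max_tokens:
            raise ValueError("reasoning_budget must be less than max_tokens")

    validate_reasoning(reasoning_budget=1024, max_tokens=8192)  # ok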

@ -58,6 +58,7 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
# TODO There is invoke issue: context limit on Cohere Model, will add them after fixed.
CONVERSE_API_ENABLED_MODEL_INFO = [
{"prefix": "anthropic.claude-v2", "support_system_prompts": True, "support_tool_use": False},
{"prefix": "us.deepseek", "support_system_prompts": True, "support_tool_use": False},
{"prefix": "anthropic.claude-v1", "support_system_prompts": True, "support_tool_use": False},
{"prefix": "us.anthropic.claude-3", "support_system_prompts": True, "support_tool_use": True},
{"prefix": "eu.anthropic.claude-3", "support_system_prompts": True, "support_tool_use": True},

View File

@ -0,0 +1,63 @@
model: us.deepseek.r1-v1:0
label:
en_US: DeepSeek-R1(US.Cross Region Inference)
icon: icon_s_en.svg
model_type: llm
features:
- agent-thought
- vision
- tool-call
- stream-tool-call
model_properties:
mode: chat
context_size: 32768
parameter_rules:
- name: max_tokens
use_template: max_tokens
required: true
label:
zh_Hans: 最大token数
en_US: Max Tokens
type: int
default: 8192
min: 1
max: 128000
help:
zh_Hans: 停止前生成的最大令牌数。
en_US: The maximum number of tokens to generate before stopping.
- name: temperature
use_template: temperature
required: false
label:
zh_Hans: 模型温度
en_US: Model Temperature
type: float
default: 1
min: 0.0
max: 1.0
help:
zh_Hans: 生成内容的随机性。当推理功能启用时该值将被固定为1。
en_US: The amount of randomness injected into the response. When reasoning is enabled, this value will be fixed to 1.
- name: top_p
show_on:
- variable: reasoning_type
value: disabled
use_template: top_p
label:
zh_Hans: Top P
en_US: Top P
required: false
type: float
default: 0.999
min: 0.000
max: 1.000
help:
zh_Hans: 在核采样中的概率阈值。当推理功能启用时,该参数将被禁用。
en_US: The probability threshold in nucleus sampling. When reasoning is enabled, this parameter will be disabled.
- name: response_format
use_template: response_format
pricing:
input: '0.001'
output: '0.005'
unit: '0.001'
currency: USD

View File

@ -677,16 +677,17 @@ class CohereLargeLanguageModel(LargeLanguageModel):
:return: model schema
"""
# get model schema
models = self.predefined_models()
model_map = {model.model: model for model in models}
mode = credentials.get("mode")
base_model_schema = None
for predefined_model in self.predefined_models():
if (
mode == "chat" and predefined_model.model == "command-light-chat"
) or predefined_model.model == "command-light":
base_model_schema = predefined_model
break
if mode == "chat":
base_model_schema = model_map["command-light-chat"]
else:
base_model_schema = model_map["command-light"]
if not base_model_schema:
raise ValueError("Model not found")
base_model_schema = cast(AIModelEntity, base_model_schema)

View File

@ -19,8 +19,8 @@ class GoogleProvider(ModelProvider):
try:
model_instance = self.get_model_instance(ModelType.LLM)
# Use `gemini-pro` model for validate,
model_instance.validate_credentials(model="gemini-pro", credentials=credentials)
# Use `gemini-2.0-flash` model for validate,
model_instance.validate_credentials(model="gemini-2.0-flash", credentials=credentials)
except CredentialsValidateFailedError as ex:
raise ex
except Exception as ex:

View File

@ -1,4 +1,6 @@
- gemini-2.0-flash-001
- gemini-2.0-flash-exp
- gemini-2.0-pro-exp-02-05
- gemini-2.0-flash-thinking-exp-1219
- gemini-2.0-flash-thinking-exp-01-21
- gemini-1.5-pro
@ -17,5 +19,3 @@
- gemini-exp-1206
- gemini-exp-1121
- gemini-exp-1114
- gemini-pro
- gemini-pro-vision

View File

@ -0,0 +1,41 @@
model: gemini-2.0-flash-001
label:
en_US: Gemini 2.0 Flash 001
model_type: llm
features:
- agent-thought
- vision
- tool-call
- stream-tool-call
- document
- video
- audio
model_properties:
mode: chat
context_size: 1048576
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_output_tokens
use_template: max_tokens
default: 8192
min: 1
max: 8192
- name: json_schema
use_template: json_schema
pricing:
input: '0.00'
output: '0.00'
unit: '0.000001'
currency: USD

View File

@ -0,0 +1,41 @@
model: gemini-2.0-pro-exp-02-05
label:
en_US: Gemini 2.0 pro exp 02-05
model_type: llm
features:
- agent-thought
- vision
- tool-call
- stream-tool-call
- document
- video
- audio
model_properties:
mode: chat
context_size: 1048576
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_output_tokens
use_template: max_tokens
default: 8192
min: 1
max: 8192
- name: json_schema
use_template: json_schema
pricing:
input: '0.00'
output: '0.00'
unit: '0.000001'
currency: USD

View File

@ -1,3 +1,4 @@
- deepseek-r1-distill-llama-70b
- llama-3.1-405b-reasoning
- llama-3.3-70b-versatile
- llama-3.1-70b-versatile

View File

@ -0,0 +1,36 @@
model: deepseek-r1-distill-llama-70b
label:
en_US: DeepSeek R1 Distill Llama 70b
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: max_tokens
use_template: max_tokens
default: 512
min: 1
max: 8192
- name: response_format
label:
zh_Hans: 回复格式
en_US: Response Format
type: string
help:
zh_Hans: 指定模型必须输出的格式
en_US: specifying the format that the model must output
required: false
options:
- text
- json_object
pricing:
input: '3.00'
output: '3.00'
unit: '0.000001'
currency: USD

View File

@ -1,19 +1,11 @@
<svg width="162" height="36" viewBox="0 0 162 36" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M2 0C0.895431 0 0 0.895432 0 2V29.1891C0 30.2937 0.895433 31.1891 2 31.1891H5.51171L16.0608 35.1377C16.7145 35.3824 17.4114 34.8991 17.4114 34.2012V11.3669C17.4114 10.533 16.894 9.78665 16.1131 9.49405L5.51171 5.52152H25.58V31.1891H29.0917C30.1963 31.1891 31.0917 30.2937 31.0917 29.1891V2C31.0917 0.895431 30.1963 0 29.0917 0H2ZM14.6022 23.7351C15.0558 23.956 15.4239 23.6812 15.4239 23.1185C15.4239 22.5557 15.0558 21.9204 14.6022 21.6995C14.1486 21.4775 13.7804 21.7545 13.7804 22.3161C13.7804 22.8777 14.1486 23.513 14.6022 23.7351Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M2 0C0.895431 0 0 0.895432 0 2V29.1891C0 30.2937 0.895433 31.1891 2 31.1891H5.51171L16.0608 35.1377C16.7145 35.3824 17.4114 34.8991 17.4114 34.2012V11.3669C17.4114 10.533 16.894 9.78665 16.1131 9.49405L5.51171 5.52152H25.58V31.1891H29.0917C30.1963 31.1891 31.0917 30.2937 31.0917 29.1891V2C31.0917 0.895431 30.1963 0 29.0917 0H2ZM14.6022 23.7351C15.0558 23.956 15.4239 23.6812 15.4239 23.1185C15.4239 22.5557 15.0558 21.9204 14.6022 21.6995C14.1486 21.4775 13.7804 21.7545 13.7804 22.3161C13.7804 22.8777 14.1486 23.513 14.6022 23.7351Z" fill="url(#paint0_linear_1473_71)"/>
<path d="M55.9397 27.8804H59.0566V19.0803C59.0566 14.9105 56.381 12.7172 52.8228 12.7172C51.0023 12.7172 49.3197 13.4483 48.2991 14.6668V12.9609H45.1546V27.8804H48.2991V19.5406C48.2991 16.8059 49.8162 15.3978 52.1332 15.3978C54.4226 15.3978 55.9397 16.8059 55.9397 19.5406V27.8804Z" fill="#11101A"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M69.7881 12.7172C74.1187 12.7172 77.539 15.7228 77.539 20.4071C77.539 25.0915 74.0083 28.1241 69.6502 28.1241C65.3196 28.1241 62.0372 25.0915 62.0372 20.4071C62.0372 15.7228 65.4575 12.7172 69.7881 12.7172ZM69.7342 15.3979C67.362 15.3979 65.2381 17.0225 65.2381 20.4071C65.2381 23.7918 67.2793 25.4435 69.6514 25.4435C71.996 25.4435 74.313 23.7918 74.313 20.4071C74.313 17.0225 72.0788 15.3979 69.7342 15.3979Z" fill="#11101A"/>
<path d="M78.861 12.9609L84.6259 27.8804H88.3772L94.1697 12.9609H90.8321L86.5291 25.1185L82.2261 12.9609H78.861Z" fill="#11101A"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M100.13 9.00761C100.13 10.1178 99.2477 10.9842 98.1443 10.9842C97.0134 10.9842 96.1308 10.1178 96.1308 9.00761C96.1308 7.89745 97.0134 7.03098 98.1443 7.03098C99.2477 7.03098 100.13 7.89745 100.13 9.00761ZM99.6882 27.8804H96.5437V12.9609H99.6882V27.8804Z" fill="#11101A"/>
<path d="M104.322 23.7376C104.322 26.7702 106.004 27.8804 108.708 27.8804H111.19V25.308H109.259C107.935 25.308 107.494 24.8477 107.494 23.7376V15.479H111.19V12.9609H107.494V9.25128H104.322V12.9609H102.529V15.479H104.322V23.7376Z" fill="#11101A"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M120.154 28.1241C116.209 28.1241 113.037 24.9561 113.037 20.353C113.037 15.7498 116.209 12.7172 120.209 12.7172C122.774 12.7172 124.539 13.9086 125.477 15.1271V12.9609H128.649V27.8804H125.477V25.6601C124.512 26.9327 122.691 28.1241 120.154 28.1241ZM120.87 25.4435C123.242 25.4435 125.476 23.6293 125.476 20.4071C125.476 17.212 123.242 15.3979 120.87 15.3979C118.526 15.3979 116.264 17.1308 116.264 20.353C116.264 23.5752 118.526 25.4435 120.87 25.4435Z" fill="#11101A"/>
<path d="M136.043 26.0933C136.043 24.9832 135.16 24.1167 134.057 24.1167C132.926 24.1167 132.043 24.9832 132.043 26.0933C132.043 27.2035 132.926 28.07 134.057 28.07C135.16 28.07 136.043 27.2035 136.043 26.0933Z" fill="#11101A"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M145.502 28.1241C141.558 28.1241 138.386 24.9561 138.386 20.353C138.386 15.7498 141.558 12.7172 145.557 12.7172C148.123 12.7172 149.888 13.9086 150.826 15.1271V12.9609H153.998V27.8804H150.826V25.6601C149.86 26.9327 148.04 28.1241 145.502 28.1241ZM146.219 25.4435C148.591 25.4435 150.825 23.6293 150.825 20.4071C150.825 17.212 148.591 15.3979 146.219 15.3979C143.874 15.3979 141.612 17.1308 141.612 20.353C141.612 23.5752 143.874 25.4435 146.219 25.4435Z" fill="#11101A"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M161.722 9.00761C161.722 10.1178 160.84 10.9842 159.736 10.9842C158.605 10.9842 157.723 10.1178 157.723 9.00761C157.723 7.89745 158.605 7.03098 159.736 7.03098C160.84 7.03098 161.722 7.89745 161.722 9.00761ZM161.28 27.8804H158.136V12.9609H161.28V27.8804Z" fill="#11101A"/>
<svg width="88" height="24" viewBox="0 0 88 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_1923_1287)">
<path d="M24 18.8323V18.8326H14.3246L9.16716 13.6751V18.8326H0V18.8314L9.16716 9.66422V4H9.16774L24 18.8323Z" fill="black"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" d="M73.2505 16.8061H76.5869V18.9145H73.9391C72.0857 18.9145 70.9202 17.8952 70.9202 15.9977V10.3921H69.0316V8.26609H70.9202L71.4677 5.47209H73.2329V8.26609H76.5869V10.3921H73.2505V16.8061ZM33.8133 4.85699L38.6679 15.681H38.809V4.85699H41.3333V18.9145H37.52L32.6654 8.09046H32.5243V18.9145H30V4.85699H33.8133ZM47.812 19.1254C44.7225 19.1254 42.7457 16.9641 42.7457 13.6079C42.7457 10.2517 44.6873 8.05518 47.812 8.05518C50.9367 8.05518 52.8429 10.1635 52.8429 13.6079C52.8429 17.0523 50.9014 19.1254 47.812 19.1254ZM47.812 17.017C49.1891 17.017 50.3363 16.5423 50.3715 15.1894V12.0265C50.3715 10.6383 49.2068 10.1635 47.812 10.1635C46.4172 10.1635 45.2171 10.6383 45.2171 12.0265V15.1894C45.2524 16.5599 46.4348 17.017 47.812 17.017ZM55.5444 8.24846L58.2979 16.6826H58.439L61.1926 8.24846H63.7346L59.9389 18.8968H56.7966L53.0186 8.24846H55.5429H55.5444ZM65.0419 8.26609H67.3722V18.9145H65.0419V8.26609ZM64.9001 4.85699H67.5126V6.86027H64.9001V4.85699ZM82.3064 19.143C79.4639 19.143 77.6458 16.9817 77.6458 13.6079C77.6458 10.2341 79.4286 8.07282 82.3064 8.07282C83.6483 8.07282 84.7425 8.59973 85.3958 9.58373H85.5369L85.9962 8.26609H87.7614V18.9145H85.9962L85.5369 17.6314H85.3958C84.6896 18.5625 83.5072 19.1423 82.3064 19.1423V19.143ZM82.7826 17.017C84.1774 17.017 85.3951 16.5776 85.4304 15.1894V12.0265C85.4304 10.603 84.159 10.1988 82.7297 10.1988C81.3004 10.1988 80.1172 10.6383 80.1172 12.0265V15.1894C80.1525 16.5952 81.3709 17.017 82.7826 17.017Z" fill="black"/>
<defs>
<linearGradient id="paint0_linear_1473_71" x1="31" y1="-2" x2="0.975591" y2="14.2625" gradientUnits="userSpaceOnUse">
<stop stop-color="#2622FF"/>
<stop offset="1" stop-color="#A717FF"/>
</linearGradient>
<clipPath id="clip0_1923_1287">
<rect width="24" height="14.8326" fill="white" transform="translate(0 4)"/>
</clipPath>
</defs>
</svg>

Before

Width:  |  Height:  |  Size: 4.5 KiB

After

Width:  |  Height:  |  Size: 1.9 KiB

View File

@ -1,10 +1,3 @@
<svg width="32" height="36" viewBox="0 0 32 36" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M2 0C0.895431 0 0 0.895432 0 2V29.1891C0 30.2937 0.895433 31.1891 2 31.1891H5.51171L16.0608 35.1377C16.7145 35.3824 17.4114 34.8991 17.4114 34.2012V11.3669C17.4114 10.533 16.894 9.78665 16.1131 9.49405L5.51171 5.52152H25.58V31.1891H29.0917C30.1963 31.1891 31.0917 30.2937 31.0917 29.1891V2C31.0917 0.895431 30.1963 0 29.0917 0H2ZM14.6022 23.7351C15.0558 23.956 15.4239 23.6812 15.4239 23.1185C15.4239 22.5557 15.0558 21.9204 14.6022 21.6995C14.1486 21.4775 13.7804 21.7545 13.7804 22.3161C13.7804 22.8777 14.1486 23.513 14.6022 23.7351Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M2 0C0.895431 0 0 0.895432 0 2V29.1891C0 30.2937 0.895433 31.1891 2 31.1891H5.51171L16.0608 35.1377C16.7145 35.3824 17.4114 34.8991 17.4114 34.2012V11.3669C17.4114 10.533 16.894 9.78665 16.1131 9.49405L5.51171 5.52152H25.58V31.1891H29.0917C30.1963 31.1891 31.0917 30.2937 31.0917 29.1891V2C31.0917 0.895431 30.1963 0 29.0917 0H2ZM14.6022 23.7351C15.0558 23.956 15.4239 23.6812 15.4239 23.1185C15.4239 22.5557 15.0558 21.9204 14.6022 21.6995C14.1486 21.4775 13.7804 21.7545 13.7804 22.3161C13.7804 22.8777 14.1486 23.513 14.6022 23.7351Z" fill="url(#paint0_linear_1473_97)"/>
<defs>
<linearGradient id="paint0_linear_1473_97" x1="31" y1="-2" x2="0.975591" y2="14.2625" gradientUnits="userSpaceOnUse">
<stop stop-color="#2622FF"/>
<stop offset="1" stop-color="#A717FF"/>
</linearGradient>
</defs>
<svg width="24" height="15" viewBox="0 0 24 15" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M24 14.8323V14.8326H14.3246L9.16716 9.67507V14.8326H0V14.8314L9.16716 5.66422V0H9.16774L24 14.8323Z" fill="black"/>
</svg>

Before

Width:  |  Height:  |  Size: 1.5 KiB

After

Width:  |  Height:  |  Size: 228 B

View File

@ -0,0 +1,41 @@
model: Sao10K/L3-8B-Stheno-v3.2
label:
zh_Hans: L3 8B Stheno V3.2
en_US: L3 8B Stheno V3.2
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0005'
output: '0.0005'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
# Deepseek Models
- deepseek/deepseek-r1
- deepseek/deepseek_v3
# LLaMA Models
- meta-llama/llama-3.3-70b-instruct
- meta-llama/llama-3.2-11b-vision-instruct
- meta-llama/llama-3.2-3b-instruct
- meta-llama/llama-3.2-1b-instruct
- meta-llama/llama-3.1-70b-instruct
- meta-llama/llama-3.1-8b-instruct
- meta-llama/llama-3.1-8b-instruct-max
- meta-llama/llama-3.1-8b-instruct-bf16
- meta-llama/llama-3-70b-instruct
- meta-llama/llama-3-8b-instruct
# Mistral Models
- mistralai/mistral-nemo
- mistralai/mistral-7b-instruct
# Qwen Models
- qwen/qwen-2.5-72b-instruct
- qwen/qwen-2-72b-instruct
- qwen/qwen-2-vl-72b-instruct
- qwen/qwen-2-7b-instruct
# Other Models
- sao10k/L3-8B-Stheno-v3.2
- sao10k/l3-70b-euryale-v2.1
- sao10k/l31-70b-euryale-v2.2
- sao10k/l3-8b-lunaris
- jondurbin/airoboros-l2-70b
- cognitivecomputations/dolphin-mixtral-8x22b
- google/gemma-2-9b-it
- nousresearch/hermes-2-pro-llama-3-8b
- sophosympatheia/midnight-rose-70b
- gryphe/mythomax-l2-13b
- nousresearch/nous-hermes-llama2-13b
- openchat/openchat-7b
- teknium/openhermes-2.5-mistral-7b
- microsoft/wizardlm-2-8x22b

View File

@ -1,7 +1,7 @@
model: jondurbin/airoboros-l2-70b
label:
zh_Hans: jondurbin/airoboros-l2-70b
en_US: jondurbin/airoboros-l2-70b
zh_Hans: Airoboros L2 70B
en_US: Airoboros L2 70B
model_type: llm
features:
- agent-thought

View File

@ -0,0 +1,41 @@
model: deepseek/deepseek-r1
label:
zh_Hans: DeepSeek R1
en_US: DeepSeek R1
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 64000
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.04'
output: '0.04'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: deepseek/deepseek_v3
label:
zh_Hans: DeepSeek V3
en_US: DeepSeek V3
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 64000
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0089'
output: '0.0089'
unit: '0.0001'
currency: USD

View File

@ -1,7 +1,7 @@
model: cognitivecomputations/dolphin-mixtral-8x22b
label:
zh_Hans: cognitivecomputations/dolphin-mixtral-8x22b
en_US: cognitivecomputations/dolphin-mixtral-8x22b
zh_Hans: Dolphin Mixtral 8x22B
en_US: Dolphin Mixtral 8x22B
model_type: llm
features:
- agent-thought

View File

@ -1,7 +1,7 @@
model: google/gemma-2-9b-it
label:
zh_Hans: google/gemma-2-9b-it
en_US: google/gemma-2-9b-it
zh_Hans: Gemma 2 9B
en_US: Gemma 2 9B
model_type: llm
features:
- agent-thought

View File

@ -1,7 +1,7 @@
model: nousresearch/hermes-2-pro-llama-3-8b
label:
zh_Hans: nousresearch/hermes-2-pro-llama-3-8b
en_US: nousresearch/hermes-2-pro-llama-3-8b
zh_Hans: Hermes 2 Pro Llama 3 8B
en_US: Hermes 2 Pro Llama 3 8B
model_type: llm
features:
- agent-thought

View File

@ -1,7 +1,7 @@
model: sao10k/l3-70b-euryale-v2.1
label:
zh_Hans: sao10k/l3-70b-euryale-v2.1
en_US: sao10k/l3-70b-euryale-v2.1
zh_Hans: "L3 70B Euryale V2.1\t"
en_US: "L3 70B Euryale V2.1\t"
model_type: llm
features:
- agent-thought

View File

@ -0,0 +1,41 @@
model: sao10k/l3-8b-lunaris
label:
zh_Hans: "Sao10k L3 8B Lunaris"
en_US: "Sao10k L3 8B Lunaris"
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0005'
output: '0.0005'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: sao10k/l31-70b-euryale-v2.2
label:
zh_Hans: L31 70B Euryale V2.2
en_US: L31 70B Euryale V2.2
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 16000
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0148'
output: '0.0148'
unit: '0.0001'
currency: USD

View File

@ -1,7 +1,7 @@
model: meta-llama/llama-3-70b-instruct
label:
zh_Hans: meta-llama/llama-3-70b-instruct
en_US: meta-llama/llama-3-70b-instruct
zh_Hans: Llama3 70b Instruct
en_US: Llama3 70b Instruct
model_type: llm
features:
- agent-thought

View File

@ -1,7 +1,7 @@
model: meta-llama/llama-3-8b-instruct
label:
zh_Hans: meta-llama/llama-3-8b-instruct
en_US: meta-llama/llama-3-8b-instruct
zh_Hans: Llama 3 8B Instruct
en_US: Llama 3 8B Instruct
model_type: llm
features:
- agent-thought
@ -35,7 +35,7 @@ parameter_rules:
max: 2
default: 0
pricing:
input: '0.00063'
output: '0.00063'
input: '0.0004'
output: '0.0004'
unit: '0.0001'
currency: USD

View File

@ -1,13 +1,13 @@
model: meta-llama/llama-3.1-70b-instruct
label:
zh_Hans: meta-llama/llama-3.1-70b-instruct
en_US: meta-llama/llama-3.1-70b-instruct
zh_Hans: Llama 3.1 70B Instruct
en_US: Llama 3.1 70B Instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
context_size: 32768
parameter_rules:
- name: temperature
use_template: temperature
@ -35,7 +35,7 @@ parameter_rules:
max: 2
default: 0
pricing:
input: '0.0055'
output: '0.0076'
input: '0.0034'
output: '0.0039'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: meta-llama/llama-3.1-8b-instruct-bf16
label:
zh_Hans: Llama 3.1 8B Instruct BF16
en_US: Llama 3.1 8B Instruct BF16
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0006'
output: '0.0006'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: meta-llama/llama-3.1-8b-instruct-max
label:
zh_Hans: "Llama3.1 8B Instruct Max\t"
en_US: "Llama3.1 8B Instruct Max\t"
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 16384
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0005'
output: '0.0005'
unit: '0.0001'
currency: USD

View File

@ -1,13 +1,13 @@
model: meta-llama/llama-3.1-8b-instruct
label:
zh_Hans: meta-llama/llama-3.1-8b-instruct
en_US: meta-llama/llama-3.1-8b-instruct
zh_Hans: Llama 3.1 8B Instruct
en_US: Llama 3.1 8B Instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
context_size: 16384
parameter_rules:
- name: temperature
use_template: temperature
@ -35,7 +35,7 @@ parameter_rules:
max: 2
default: 0
pricing:
input: '0.001'
output: '0.001'
input: '0.0005'
output: '0.0005'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: meta-llama/llama-3.2-11b-vision-instruct
label:
zh_Hans: "Llama 3.2 11B Vision Instruct\t"
en_US: "Llama 3.2 11B Vision Instruct\t"
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 32768
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0006'
output: '0.0006'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: meta-llama/llama-3.2-1b-instruct
label:
zh_Hans: "Llama 3.2 1B Instruct\t"
en_US: "Llama 3.2 1B Instruct\t"
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131000
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0002'
output: '0.0002'
unit: '0.0001'
currency: USD

View File

@ -1,7 +1,7 @@
model: Nous-Hermes-2-Mixtral-8x7B-DPO
model: meta-llama/llama-3.2-3b-instruct
label:
zh_Hans: Nous-Hermes-2-Mixtral-8x7B-DPO
en_US: Nous-Hermes-2-Mixtral-8x7B-DPO
zh_Hans: Llama 3.2 3B Instruct
en_US: Llama 3.2 3B Instruct
model_type: llm
features:
- agent-thought
@ -35,7 +35,7 @@ parameter_rules:
max: 2
default: 0
pricing:
input: '0.0027'
output: '0.0027'
input: '0.0003'
output: '0.0005'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: meta-llama/llama-3.3-70b-instruct
label:
zh_Hans: Llama 3.3 70B Instruct
en_US: Llama 3.3 70B Instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0039'
output: '0.0039'
unit: '0.0001'
currency: USD

View File

@ -1,7 +1,7 @@
model: sophosympatheia/midnight-rose-70b
label:
zh_Hans: sophosympatheia/midnight-rose-70b
en_US: sophosympatheia/midnight-rose-70b
zh_Hans: Midnight Rose 70B
en_US: Midnight Rose 70B
model_type: llm
features:
- agent-thought

View File

@ -1,7 +1,7 @@
model: mistralai/mistral-7b-instruct
label:
zh_Hans: mistralai/mistral-7b-instruct
en_US: mistralai/mistral-7b-instruct
zh_Hans: Mistral 7B Instruct
en_US: Mistral 7B Instruct
model_type: llm
features:
- agent-thought

View File

@ -0,0 +1,41 @@
model: mistralai/mistral-nemo
label:
zh_Hans: Mistral Nemo
en_US: Mistral Nemo
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0017'
output: '0.0017'
unit: '0.0001'
currency: USD

View File

@ -1,7 +1,7 @@
model: gryphe/mythomax-l2-13b
label:
zh_Hans: gryphe/mythomax-l2-13b
en_US: gryphe/mythomax-l2-13b
zh_Hans: Mythomax L2 13B
en_US: Mythomax L2 13B
model_type: llm
features:
- agent-thought
@ -35,7 +35,7 @@ parameter_rules:
max: 2
default: 0
pricing:
input: '0.00119'
output: '0.00119'
input: '0.0009'
output: '0.0009'
unit: '0.0001'
currency: USD

View File

@ -1,7 +1,7 @@
model: nousresearch/nous-hermes-llama2-13b
label:
zh_Hans: nousresearch/nous-hermes-llama2-13b
en_US: nousresearch/nous-hermes-llama2-13b
zh_Hans: Nous Hermes Llama2 13B
en_US: Nous Hermes Llama2 13B
model_type: llm
features:
- agent-thought

View File

@ -1,7 +1,7 @@
model: lzlv_70b
model: openchat/openchat-7b
label:
zh_Hans: lzlv_70b
en_US: lzlv_70b
zh_Hans: OpenChat 7B
en_US: OpenChat 7B
model_type: llm
features:
- agent-thought
@ -35,7 +35,7 @@ parameter_rules:
max: 2
default: 0
pricing:
input: '0.0058'
output: '0.0078'
input: '0.0006'
output: '0.0006'
unit: '0.0001'
currency: USD

View File

@ -1,7 +1,7 @@
model: teknium/openhermes-2.5-mistral-7b
label:
zh_Hans: teknium/openhermes-2.5-mistral-7b
en_US: teknium/openhermes-2.5-mistral-7b
zh_Hans: Openhermes2.5 Mistral 7B
en_US: Openhermes2.5 Mistral 7B
model_type: llm
features:
- agent-thought

View File

@ -1,7 +1,7 @@
model: meta-llama/llama-3.1-405b-instruct
model: qwen/qwen-2-72b-instruct
label:
zh_Hans: meta-llama/llama-3.1-405b-instruct
en_US: meta-llama/llama-3.1-405b-instruct
zh_Hans: Qwen2 72B Instruct
en_US: Qwen2 72B Instruct
model_type: llm
features:
- agent-thought
@ -35,7 +35,7 @@ parameter_rules:
max: 2
default: 0
pricing:
input: '0.03'
output: '0.05'
input: '0.0034'
output: '0.0039'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: qwen/qwen-2-7b-instruct
label:
zh_Hans: Qwen 2 7B Instruct
en_US: Qwen 2 7B Instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 32768
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.00054'
output: '0.00054'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: qwen/qwen-2-vl-72b-instruct
label:
zh_Hans: Qwen 2 VL 72B Instruct
en_US: Qwen 2 VL 72B Instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 32768
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0045'
output: '0.0045'
unit: '0.0001'
currency: USD

View File

@ -0,0 +1,41 @@
model: qwen/qwen-2.5-72b-instruct
label:
zh_Hans: Qwen 2.5 72B Instruct
en_US: Qwen 2.5 72B Instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 32000
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0038'
output: '0.004'
unit: '0.0001'
currency: USD

View File

@ -1,7 +1,7 @@
model: microsoft/wizardlm-2-8x22b
label:
zh_Hans: microsoft/wizardlm-2-8x22b
en_US: microsoft/wizardlm-2-8x22b
zh_Hans: Wizardlm 2 8x22B
en_US: Wizardlm 2 8x22B
model_type: llm
features:
- agent-thought
@ -35,7 +35,7 @@ parameter_rules:
max: 2
default: 0
pricing:
input: '0.0064'
output: '0.0064'
input: '0.0062'
output: '0.0062'
unit: '0.0001'
currency: USD

View File

@ -1,6 +1,6 @@
provider: novita
label:
en_US: novita.ai
en_US: Novita AI
description:
en_US: An LLM API that matches various application scenarios with high cost-effectiveness.
zh_Hans: 适配多种海外应用场景的高性价比 LLM API
@ -8,13 +8,13 @@ icon_small:
en_US: icon_s_en.svg
icon_large:
en_US: icon_l_en.svg
background: "#eadeff"
background: "#c7fce2"
help:
title:
en_US: Get your API key from novita.ai
zh_Hans: novita.ai 获取 API Key
en_US: Get your API key from Novita AI
zh_Hans: Novita AI 获取 API Key
url:
en_US: https://novita.ai/settings#key-management?utm_source=dify&utm_medium=ch&utm_campaign=api
en_US: https://novita.ai/settings/key-management?utm_source=dify&utm_medium=ch&utm_campaign=api
supported_model_types:
- llm
configurate_methods:

View File

@ -1,3 +1,4 @@
- deepseek-ai/deepseek-r1
- google/gemma-7b
- google/codegemma-7b
- google/recurrentgemma-2b

View File

@ -0,0 +1,35 @@
model: deepseek-ai/deepseek-r1
label:
en_US: deepseek-ai/deepseek-r1
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 1
default: 0.5
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 1024
default: 1024
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0

View File

@ -83,7 +83,7 @@ class NVIDIALargeLanguageModel(OAIAPICompatLargeLanguageModel):
def _add_custom_parameters(self, credentials: dict, model: str) -> None:
credentials["mode"] = "chat"
if self.MODEL_SUFFIX_MAP[model]:
if self.MODEL_SUFFIX_MAP.get(model):
credentials["server_url"] = f"https://ai.api.nvidia.com/v1/{self.MODEL_SUFFIX_MAP[model]}"
credentials.pop("endpoint_url")
else:

View File
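
Note: the one-line fix above matters because indexing MODEL_SUFFIX_MAP[model] raises KeyError for models missing from the map, while .get() returns None and falls through to the else branch. Demonstrated with a made-up map and an illustrative fallback URL:

    MODEL_SUFFIX_MAP = {"example/chat-model": "chat/completions"}  # hypothetical contents

    def server_url(model: str) -> str:
        # .get() yields None for unknown models instead of raising KeyError.
        if MODEL_SUFFIX_MAP.get(model):
            return f"https://ai.api.nvidia.com/v1/{MODEL_SUFFIX_MAP[model]}"
        return "https://integrate.api.nvidia.com/v1"  # illustrative fallback

    print(server_url("unknown/model"))  # -> https://integrate.api.nvidia.com/v1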

@ -0,0 +1,52 @@
model: cohere.command-r-08-2024
label:
en_US: cohere.command-r-08-2024 v1.7
model_type: llm
features:
- multi-tool-call
- agent-thought
- stream-tool-call
model_properties:
mode: chat
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
default: 1
max: 1.0
- name: topP
use_template: top_p
default: 0.75
min: 0
max: 1
- name: topK
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
default: 0
min: 0
max: 500
- name: presencePenalty
use_template: presence_penalty
min: 0
max: 1
default: 0
- name: frequencyPenalty
use_template: frequency_penalty
min: 0
max: 1
default: 0
- name: maxTokens
use_template: max_tokens
default: 600
max: 4000
pricing:
input: '0.0009'
output: '0.0009'
unit: '0.0001'
currency: USD

View File

@ -50,3 +50,4 @@ pricing:
output: '0.004'
unit: '0.0001'
currency: USD
deprecated: true

View File

@ -0,0 +1,52 @@
model: cohere.command-r-plus-08-2024
label:
en_US: cohere.command-r-plus-08-2024 v1.6
model_type: llm
features:
- multi-tool-call
- agent-thought
- stream-tool-call
model_properties:
mode: chat
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
default: 1
max: 1.0
- name: topP
use_template: top_p
default: 0.75
min: 0
max: 1
- name: topK
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
default: 0
min: 0
max: 500
- name: presencePenalty
use_template: presence_penalty
min: 0
max: 1
default: 0
- name: frequencyPenalty
use_template: frequency_penalty
min: 0
max: 1
default: 0
- name: maxTokens
use_template: max_tokens
default: 600
max: 4000
pricing:
input: '0.0156'
output: '0.0156'
unit: '0.0001'
currency: USD

Some files were not shown because too many files have changed in this diff.