Compare commits

...

29 Commits
0.4.7 ... 0.4.9

Author SHA1 Message Date
c7783dbd6c bump version to 0.4.9 (#2103) 2024-01-19 22:25:23 +08:00
ee9c7e204f delete document cache embedding (#2101)
Co-authored-by: jyong <jyong@dify.ai>
2024-01-19 21:37:54 +08:00
483dcb6340 fix: skip linking /etc/localtime file first in api docker image (#2099) 2024-01-19 21:06:26 +08:00
9ad7b65996 support setting timezone in docker images (#2091) 2024-01-19 20:30:36 +08:00
ec1659cba0 fix: saving error in empty dataset (#2098) 2024-01-19 20:12:04 +08:00
09a8db10d4 Add jina-embeddings-v2-base-de model configuration (#2094) 2024-01-19 18:11:55 +08:00
f3323beaca fix: yarn install command in web Dockerfile (#2084) 2024-01-19 18:11:47 +08:00
275973da8c add feature request copilot (#2095) 2024-01-19 17:55:39 +08:00
e2c89a9487 fix: bypass admin users to use dataset api with API key (#2072) 2024-01-19 17:23:05 +08:00
869690c485 fix notion estimate (#2090)
Co-authored-by: jyong <jyong@dify.ai>
2024-01-19 13:27:12 +08:00
a3c7c07ecc use redis to cache embeddings (#2085)
Co-authored-by: jyong <jyong@dify.ai>
2024-01-18 21:39:12 +08:00
dc8a8af117 bump default NodeJS version to 20 LTS (#2061) 2024-01-18 19:12:40 +08:00
6c28e1e69a fix: version (#2083) 2024-01-18 16:44:09 +08:00
0e1163f698 feat: remove deprecated envs (#2078) 2024-01-18 14:44:37 +08:00
8654415f33 bump version to 0.4.8 (#2074) 2024-01-17 22:51:02 +08:00
1a6ad05a23 feat: service api add llm usage (#2051) 2024-01-17 22:39:47 +08:00
1d91535ba6 fix: azure customize model name duplicate (#2073) 2024-01-17 21:17:59 +08:00
8799c888e3 fix: free quota type apply button missing (#2069)
Co-authored-by: StyleZhang <jasonapring2015@outlook.com>
2024-01-17 15:02:27 +08:00
d7209d9057 feat: add abab5.5s-chat (#2063) 2024-01-16 19:45:21 +08:00
5960103cb8 Fix aspect ratio of buttons in readme (#2062) 2024-01-16 19:25:57 +08:00
2ffea39a5c fix: add sharp package to fix sharp-missing-in-production warning (#2060) 2024-01-16 17:25:18 +08:00
1e76b1bf2d Update README.md (#2058) 2024-01-16 17:24:49 +08:00
2022ca1d52 fix indentation & move contact into community section (#2055) 2024-01-16 16:59:38 +08:00
e1319d1a2d Update contribution guide (#2053) 2024-01-16 15:12:35 +08:00
a61df6cb03 timeout parameter error (#2052)
Co-authored-by: jyong <jyong@dify.ai>
2024-01-16 14:44:47 +08:00
790b885d0a fix multi-dataset retrieve score limit (#2050)
Co-authored-by: jyong <jyong@dify.ai>
2024-01-16 14:14:34 +08:00
1a2eacc5a6 Add jina-embeddings-v2-base-zh model configuration (#2049) 2024-01-16 12:25:42 +08:00
f7a2f7a727 fix: dataset sidebar (#2048) 2024-01-16 12:14:09 +08:00
a4adca595a fix qdrant tag in docker-compose.yaml (#2047) 2024-01-16 10:24:06 +08:00
56 changed files with 884 additions and 508 deletions

View File

@ -14,22 +14,35 @@ body:
required: true
- type: textarea
attributes:
label: Description of the new feature / enhancement
placeholder: What is the expected behavior of the proposed feature?
label: 1. Is this request related to a challenge you're experiencing?
placeholder: Please describe the specific scenario or problem you're facing as clearly as possible. For instance "I was trying to use [feature] for [specific task], and [what happened]... It was frustrating because...."
validations:
required: true
- type: textarea
attributes:
label: Scenario when this would be used?
placeholder: What is the scenario in which this would be used? Why is this important to your workflow as a Dify user?
label: 2. Describe the feature you'd like to see
placeholder: Think about what you want to achieve and how this feature will help you. Sketches, flow diagrams, or any visual representation will be a major plus.
validations:
required: true
- type: textarea
attributes:
label: Supporting information
placeholder: "Having additional evidence, data, tweets, blog posts, research, ... anything is extremely helpful. This information provides context to the scenario that may otherwise be lost."
label: 3. How will this feature improve your workflow or experience?
placeholder: Tell us how this change will benefit your work. This helps us prioritize based on user impact.
validations:
required: true
- type: textarea
attributes:
label: 4. Additional context or comments
placeholder: (Any other information, comments, documentation, links, or screenshots that would provide more clarity. This is the place to add anything else not covered above.)
validations:
required: false
- type: checkboxes
attributes:
label: 5. Can you help us with this feature?
description: Let us know! This is not a commitment, but a starting point for collaboration.
options:
- label: I am interested in contributing to this feature.
required: false
- type: markdown
attributes:
value: Please limit one request per issue.

View File

@ -4,10 +4,6 @@ on:
pull_request:
branches:
- main
push:
branches:
- deploy/dev
- feat/model-runtime
jobs:
test:

View File

@ -4,9 +4,6 @@ on:
pull_request:
branches:
- main
push:
branches:
- deploy/dev
concurrency:
group: dep-${{ github.head_ref || github.run_id }}
@ -24,7 +21,7 @@ jobs:
- name: Setup NodeJS
uses: actions/setup-node@v4
with:
node-version: 18
node-version: 20
cache: yarn
cache-dependency-path: ./web/package.json

View File

@ -1,66 +1,156 @@
# Contributing
So you're looking to contribute to Dify - that's awesome, we can't wait to see what you do. As a startup with limited headcount and funding, we have grand ambitions to design the most intuitive workflow for building and managing LLM applications. Any help from the community counts, truly.
Thanks for your interest in [Dify](https://dify.ai) and for wanting to contribute! Before you begin, read the
[code of conduct](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md) and check out the
[existing issues](https://github.com/langgenius/langgenius-gateway/issues).
This document describes how to set up your development environment to build and test [Dify](https://dify.ai).
We need to be nimble and ship fast given where we are, but we also want to make sure that contributors like you get as smooth a contributing experience as possible. We've assembled this contribution guide for that purpose, aiming to get you familiar with the codebase and how we work with contributors, so you can quickly jump to the fun part.
### Install dependencies
This guide, like Dify itself, is a constant work in progress. We highly appreciate your understanding if at times it lags behind the actual project, and welcome any feedback for us to improve.
You need to install and configure the following dependencies on your machine to build [Dify](https://dify.ai):
In terms of licensing, please take a minute to read our short [License and Contributor Agreement](./license). The community also adheres to the [code of conduct](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Before you jump in
[Find](https://github.com/langgenius/dify/issues?q=is:issue+is:closed) an existing issue, or [open](https://github.com/langgenius/dify/issues/new/choose) a new one. We categorize issues into 2 types:
### Feature requests:
* If you're opening a new feature request, we'd like you to explain what the proposed feature achieves, and include as much context as possible. [@perzeusss](https://github.com/perzeuss) has made a solid [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) that helps you draft out your needs. Feel free to give it a try.
* If you want to pick one up from the existing issues, simply drop a comment below it saying so.
A team member working in the related direction will be looped in. If all looks good, they will give the go-ahead for you to start coding. We ask that you hold off working on the feature until then, so none of your work goes to waste should we propose changes.
Depending on the area the proposed feature falls under, you might talk to different team members. Here's a rundown of the areas each of our team members is working on at the moment:
| Member | Scope |
| ------------------------------------------------------------ | ---------------------------------------------------- |
| [@yeuoly](https://github.com/Yeuoly) | Architecting Agents |
| [@jyong](https://github.com/JohnJyong) | RAG pipeline design |
| [@GarfieldDai](https://github.com/GarfieldDai) | Building workflow orchestrations |
| [@iamjoel](https://github.com/iamjoel) & [@zxhlyh](https://github.com/zxhlyh) | Making our frontend a breeze to use |
| [@guchenhe](https://github.com/guchenhe) & [@crazywoola](https://github.com/crazywoola) | Developer experience, points of contact for anything |
| [@takatost](https://github.com/takatost) | Overall product direction and architecture |
How we prioritize:
| Feature Type | Priority |
| ------------------------------------------------------------ | --------------- |
| Features labeled high-priority by a team member | High Priority |
| Popular feature requests from our [community feedback board](https://feedback.dify.ai/) | Medium Priority |
| Non-core features and minor enhancements | Low Priority |
| Valuable but not immediate | Future-Feature |
### Anything else (e.g. bug report, performance optimization, typo correction):
* Start coding right away.
How we prioritize:
| Issue Type | Priority |
| ------------------------------------------------------------ | --------------- |
| Bugs in core functions (cannot login, applications not working, security loopholes) | Critical |
| Non-critical bugs, performance boosts | Medium Priority |
| Minor fixes (typos, confusing but working UI) | Low Priority |
## Installing
Here are the steps to set up Dify for development:
### 1. Fork this repository
### 2. Clone the repo
Clone the forked repository from your terminal:
```
git clone git@github.com:<github_username>/dify.git
```
### 3. Verify dependencies
Dify requires the following dependencies to build; make sure they're installed on your system:
- [Git](http://git-scm.com/)
- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Node.js v18.x (LTS)](http://nodejs.org)
- [npm](https://www.npmjs.com/) version 8.x.x or [Yarn](https://yarnpkg.com/)
- [Python](https://www.python.org/) version 3.10.x
## Local development
### 4. Installations
To set up a working development environment, just fork the project git repository, install the backend and frontend dependencies using the proper package manager, and run the docker-compose stack.
Dify is composed of a backend and a frontend. Navigate to the backend directory by `cd api/`, then follow the [Backend README](api/README.md) to install it. In a separate terminal, navigate to the frontend directory by `cd web/`, then follow the [Frontend README](web/README.md) to install.
### Fork the repository
Check the [installation FAQ](https://docs.dify.ai/getting-started/faq/install-faq) for a list of common issues and steps to troubleshoot.
You need to fork the [repository](https://github.com/langgenius/dify).
### 5. Visit dify in your browser
### Clone the repo
To validate your setup, head over to [http://localhost:3000](http://localhost:3000) (the default, or your self-configured URL and port) in your browser. You should now see Dify up and running.
Clone your GitHub forked repository:
## Developing
If you are adding a model provider, [this guide](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md) is for you.
To help you quickly navigate where your contribution fits, a brief, annotated outline of Dify's backend & frontend is as follows:
### Backend
Dify's backend is written in Python using [Flask](https://flask.palletsprojects.com/en/3.0.x/). It uses [SQLAlchemy](https://www.sqlalchemy.org/) for ORM and [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html) for task queueing. Authorization logic is handled by Flask-Login.
```
git clone git@github.com:<github_username>/dify.git
[api/]
├── constants // Constant settings used throughout code base.
├── controllers // API route definitions and request handling logic.
├── core // Core application orchestration, model integrations, and tools.
├── docker // Docker & containerization related configurations.
├── events // Event handling and processing
├── extensions // Extensions with 3rd party frameworks/platforms.
├── fields // field definitions for serialization/marshalling.
├── libs // Reusable libraries and helpers.
├── migrations // Scripts for database migration.
├── models // Database models & schema definitions.
├── services // Specifies business logic.
├── storage // Private key storage.
├── tasks // Handling of async tasks and background jobs.
└── tests
```
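To make the outline above concrete, here is a minimal, hedged sketch of how these pieces typically fit together in a Flask app. It is illustrative only, not Dify's actual wiring: `HelloApi` and the in-memory SQLite URL are hypothetical.

```python
from flask import Flask
from flask_login import LoginManager, login_required
from flask_restful import Api, Resource
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'  # stand-in for the real database URL

db = SQLAlchemy(app)               # ORM layer, as in models/
login_manager = LoginManager(app)  # session auth via Flask-Login
api = Api(app)                     # REST routing, as in controllers/

class HelloApi(Resource):          # hypothetical resource for illustration
    @login_required                # authorization handled by Flask-Login
    def get(self):
        return {'hello': 'dify'}

api.add_resource(HelloApi, '/hello')

if __name__ == '__main__':
    app.run(port=5001)
```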
### Install backend
### Frontend
To learn how to install the backend application, please refer to the [Backend README](api/README.md).
The website is bootstrapped on a [Next.js](https://nextjs.org/) boilerplate in TypeScript and uses [Tailwind CSS](https://tailwindcss.com/) for styling. [React-i18next](https://react.i18next.com/) is used for internationalization.
### Install frontend
```
[web/]
├── app // layouts, pages, and components
│ ├── (commonLayout) // common layout used throughout the app
│ ├── (shareLayout) // layouts specifically shared across token-specific sessions
│ ├── activate // activate page
│ ├── components // shared by pages and layouts
│ ├── install // install page
│ ├── signin // signin page
│ └── styles // globally shared styles
├── assets // Static assets
├── bin // scripts run at build time
├── config // adjustable settings and options
├── context // shared contexts used by different portions of the app
├── dictionaries // Language-specific translate files
├── docker // container configurations
├── hooks // Reusable hooks
├── i18n // Internationalization configuration
├── models // describes data models & shapes of API responses
├── public // meta assets like favicon
├── service // specifies shapes of API actions
├── test
├── types // descriptions of function params and return values
└── utils // Shared utility functions
```
To learn how to install the frontend application, please refer to the [Frontend README](web/README.md).
## Submitting your PR
### Visit dify in your browser
At last, time to open a pull request (PR) to our repo. For major features, we first merge them into the `deploy/dev` branch for testing, before they go into the `main` branch. If you run into issues like merge conflicts or don't know how to open a pull request, check out [GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests).
Finally, you can now visit [http://localhost:3000](http://localhost:3000) to view [Dify](https://dify.ai) running in your local environment.
And that's it! Once your PR is merged, you will be featured as a contributor in our [README](https://github.com/langgenius/dify/blob/main/README.md).
## Getting Help
## Create a pull request
After making your changes, open a pull request (PR). Once you submit your pull request, others from the Dify team/community will review it with you.
Have an issue, like a merge conflict, or don't know how to open a pull request? Check out [GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests) on how to resolve merge conflicts and other issues. Once your PR has been merged, you will be proudly listed as a contributor in the [contributor chart](https://github.com/langgenius/langgenius-gateway/graphs/contributors).
## Community channels
Stuck somewhere? Have any questions? Join the [Discord Community Server](https://discord.gg/j3XRWSPBf7). We are here to help!
### Provider Integrations
If you see a model provider not yet supported by Dify that you'd like to use, follow these [steps](api/core/model_runtime/README.md) to submit a PR.
### i18n (Internationalization) Support
We are looking for contributors to help with translations in other languages. If you are interested in helping, please join the [Discord Community Server](https://discord.gg/AhzKf7dNgk) and let us know.
Also check out the [Frontend i18n README](web/i18n/README_EN.md) for more information.
If you ever get stuck or have a burning question while contributing, simply shoot your queries our way via the related GitHub issue, or hop onto our [Discord](https://discord.gg/AhzKf7dNgk) for a quick chat.

View File

@ -26,23 +26,25 @@
![](./images/demo.png)
## Use Cloud Services
[Dify.AI Cloud](https://dify.ai) provides all the capabilities of the open-source version, and includes 200 free requests to OpenAI GPT-3.5.
## Why Dify
## Using our Cloud Services
Dify is model-agnostic and boasts a comprehensive tech stack compared to hardcoded development libraries like LangChain. Unlike OpenAI's Assistants API, Dify allows for full local deployment of services.
You can try out [Dify.AI Cloud](https://dify.ai) now. It provides all the capabilities of the self-deployed version, and includes 200 free requests to OpenAI GPT-3.5.
## Dify vs. LangChain vs. Assistants API
| Feature | Dify.AI | Assistants API | LangChain |
|---------|---------|----------------|-----------|
| **Programming Approach** | API-oriented | API-oriented | Python Code-oriented |
| **Ecosystem Strategy** | Open Source | Closed and Commercial | Open Source |
| **Ecosystem Strategy** | Open Source | Closed Source | Open Source |
| **RAG Engine** | Supported | Supported | Not Supported |
| **Prompt IDE** | Included | Included | None |
| **Supported LLMs** | Rich Variety | Only GPT | Rich Variety |
| **Supported LLMs** | Rich Variety | OpenAI-only | Rich Variety |
| **Local Deployment** | Supported | Not Supported | Not Applicable |
## Features
![](./images/models.png)
@ -59,7 +61,7 @@ Dify is model-agnostic and boasts a comprehensive tech stack compared to hardcod
## Before You Start
**Star us, and you'll get instant notifications for all new releases on GitHub!**
**Star us on GitHub, and be instantly notified of new releases!**
![star-us](https://github.com/langgenius/dify/assets/100913391/95f37259-7370-4456-a9f0-0bc01ef8642f)
@ -103,17 +105,39 @@ If you need to customize the configuration, please refer to the comments in our
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
### Contributors
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
### Translations
We are looking for contributors to help with translating Dify to languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README_EN.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/AhzKf7dNgk).
## Community & Support
We welcome contributions that help make Dify better in various ways: submitting code, filing issues, proposing new ideas, or sharing the interesting and useful AI applications you have created based on Dify. At the same time, we also welcome you to share Dify at events, at conferences, and on social media.
* [Canny](https://feedback.dify.ai/). Best for: sharing feedback and checking out our feature roadmap.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Email Support](mailto:hello@dify.ai?subject=[GitHub]Questions%20About%20Dify). Best for: questions you have about using Dify.AI.
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
* [Business Contact](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry). Best for: business inquiries about licensing Dify.AI for commercial use.
- [Roadmap and Feedback](https://feedback.dify.ai/). Best for: sharing feedback and checking out our feature roadmap.
- [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs and errors you encounter using Dify.AI; see the [Contribution Guide](CONTRIBUTING.md).
- [Email Support](mailto:hello@dify.ai?subject=[GitHub]Questions%20About%20Dify). Best for: questions you have about using Dify.AI.
- [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
- [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
- [Business License](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry). Best for: business inquiries about licensing Dify.AI for commercial use.
### Direct Meetings
**Help us make Dify better. Reach out directly to us**.
| Point of Contact | Purpose |
| :----------------------------------------------------------: | :----------------------------------------------------------: |
| <a href='https://cal.com/guchenhe/15min' target='_blank'><img src='https://i.postimg.cc/fWBqSmjP/Git-Hub-README-Button-3x.png' border='0' alt='Git-Hub-README-Button-3x' height="60" width="214"/></a> | Product design feedback, user experience discussions, feature planning and roadmaps. |
| <a href='https://cal.com/pinkbanana' target='_blank'><img src='https://i.postimg.cc/LsRTh87D/Git-Hub-README-Button-2x.png' border='0' alt='Git-Hub-README-Button-2x' height="60" width="225"/></a> | Technical support, issues, or feature requests |
## Security Disclosure

View File

@ -15,7 +15,6 @@ CONSOLE_WEB_URL=http://127.0.0.1:3000
SERVICE_API_URL=http://127.0.0.1:5001
# Web APP base URL
APP_API_URL=http://127.0.0.1:5001
APP_WEB_URL=http://127.0.0.1:3000
# Files URL
@ -102,10 +101,10 @@ NOTION_CLIENT_ID=you-client-id
NOTION_INTERNAL_SECRET=you-internal-secret
# Hosted Model Credentials
HOSTED_OPENAI_ENABLED=false
HOSTED_OPENAI_API_KEY=
HOSTED_OPENAI_API_BASE=
HOSTED_OPENAI_API_ORGANIZATION=
HOSTED_OPENAI_TRIAL_ENABLED=false
HOSTED_OPENAI_QUOTA_LIMIT=200
HOSTED_OPENAI_PAID_ENABLED=false
@ -114,9 +113,9 @@ HOSTED_AZURE_OPENAI_API_KEY=
HOSTED_AZURE_OPENAI_API_BASE=
HOSTED_AZURE_OPENAI_QUOTA_LIMIT=200
HOSTED_ANTHROPIC_ENABLED=false
HOSTED_ANTHROPIC_API_BASE=
HOSTED_ANTHROPIC_API_KEY=
HOSTED_ANTHROPIC_TRIAL_ENABLED=false
HOSTED_ANTHROPIC_QUOTA_LIMIT=600000
HOSTED_ANTHROPIC_PAID_ENABLED=false

View File

@ -13,17 +13,20 @@ RUN pip install --prefix=/pkg -r requirements.txt
# build stage
FROM python:3.10-slim AS builder
ENV FLASK_APP app.py
ENV EDITION SELF_HOSTED
ENV DEPLOY_ENV PRODUCTION
ENV CONSOLE_API_URL http://127.0.0.1:5001
ENV CONSOLE_WEB_URL http://127.0.0.1:3000
ENV SERVICE_API_URL http://127.0.0.1:5001
ENV APP_API_URL http://127.0.0.1:5001
ENV APP_WEB_URL http://127.0.0.1:3000
EXPOSE 5001
# set timezone
ENV TZ UTC
WORKDIR /app/api
RUN apt-get update \

View File

@ -22,7 +22,6 @@ DEFAULTS = {
'CONSOLE_API_URL': 'https://cloud.dify.ai',
'SERVICE_API_URL': 'https://api.dify.ai',
'APP_WEB_URL': 'https://udify.app',
'APP_API_URL': 'https://udify.app',
'FILES_URL': '',
'STORAGE_TYPE': 'local',
'STORAGE_LOCAL_PATH': 'storage',
@ -39,13 +38,19 @@ DEFAULTS = {
'CELERY_BACKEND': 'database',
'LOG_LEVEL': 'INFO',
'HOSTED_OPENAI_QUOTA_LIMIT': 200,
'HOSTED_OPENAI_ENABLED': 'False',
'HOSTED_OPENAI_TRIAL_ENABLED': 'False',
'HOSTED_OPENAI_PAID_ENABLED': 'False',
'HOSTED_OPENAI_PAID_INCREASE_QUOTA': 1,
'HOSTED_OPENAI_PAID_MIN_QUANTITY': 1,
'HOSTED_OPENAI_PAID_MAX_QUANTITY': 1,
'HOSTED_AZURE_OPENAI_ENABLED': 'False',
'HOSTED_AZURE_OPENAI_QUOTA_LIMIT': 200,
'HOSTED_ANTHROPIC_QUOTA_LIMIT': 600000,
'HOSTED_ANTHROPIC_ENABLED': 'False',
'HOSTED_ANTHROPIC_TRIAL_ENABLED': 'False',
'HOSTED_ANTHROPIC_PAID_ENABLED': 'False',
'HOSTED_ANTHROPIC_PAID_INCREASE_QUOTA': 1,
'HOSTED_ANTHROPIC_PAID_MIN_QUANTITY': 1,
'HOSTED_ANTHROPIC_PAID_MAX_QUANTITY': 1,
'HOSTED_MODERATION_ENABLED': 'False',
'HOSTED_MODERATION_PROVIDERS': '',
'CLEAN_DAY_SETTING': 30,
@ -66,7 +71,8 @@ def get_env(key):
def get_bool_env(key):
return get_env(key).lower() == 'true'
value = get_env(key)
return value.lower() == 'true' if value is not None else False
def get_cors_allow_origins(env, default):
@ -87,7 +93,7 @@ class Config:
# ------------------------
# General Configurations.
# ------------------------
self.CURRENT_VERSION = "0.4.7"
self.CURRENT_VERSION = "0.4.9"
self.COMMIT_SHA = get_env('COMMIT_SHA')
self.EDITION = "SELF_HOSTED"
self.DEPLOY_ENV = get_env('DEPLOY_ENV')
@ -96,35 +102,25 @@ class Config:
# The backend URL prefix of the console API.
# used to concatenate the login authorization callback or notion integration callback.
self.CONSOLE_API_URL = get_env('CONSOLE_URL') if get_env('CONSOLE_URL') else get_env('CONSOLE_API_URL')
self.CONSOLE_API_URL = get_env('CONSOLE_API_URL')
# The front-end URL prefix of the console web.
# used to concatenate some front-end addresses and for CORS configuration use.
self.CONSOLE_WEB_URL = get_env('CONSOLE_URL') if get_env('CONSOLE_URL') else get_env('CONSOLE_WEB_URL')
# WebApp API backend Url prefix.
# used to declare the back-end URL for the front-end API.
self.APP_API_URL = get_env('APP_URL') if get_env('APP_URL') else get_env('APP_API_URL')
self.CONSOLE_WEB_URL = get_env('CONSOLE_WEB_URL')
# WebApp Url prefix.
# used to display WebAPP API Base Url to the front-end.
self.APP_WEB_URL = get_env('APP_URL') if get_env('APP_URL') else get_env('APP_WEB_URL')
self.APP_WEB_URL = get_env('APP_WEB_URL')
# Service API Url prefix.
# used to display Service API Base Url to the front-end.
self.SERVICE_API_URL = get_env('API_URL') if get_env('API_URL') else get_env('SERVICE_API_URL')
self.SERVICE_API_URL = get_env('SERVICE_API_URL')
# File preview or download Url prefix.
# used to display File preview or download Url to the front-end or as Multi-model inputs;
# Url is signed and has expiration time.
self.FILES_URL = get_env('FILES_URL') if get_env('FILES_URL') else self.CONSOLE_API_URL
# Fallback Url prefix.
# Will be deprecated in the future.
self.CONSOLE_URL = get_env('CONSOLE_URL')
self.API_URL = get_env('API_URL')
self.APP_URL = get_env('APP_URL')
# Your App secret key will be used for securely signing the session cookie
# Make sure you are changing this key for your deployment with a strong key.
# You can generate a strong key using `openssl rand -base64 42`.
@ -260,23 +256,35 @@ class Config:
# ------------------------
# Platform Configurations.
# ------------------------
self.HOSTED_OPENAI_ENABLED = get_bool_env('HOSTED_OPENAI_ENABLED')
self.HOSTED_OPENAI_API_KEY = get_env('HOSTED_OPENAI_API_KEY')
self.HOSTED_OPENAI_API_BASE = get_env('HOSTED_OPENAI_API_BASE')
self.HOSTED_OPENAI_API_ORGANIZATION = get_env('HOSTED_OPENAI_API_ORGANIZATION')
self.HOSTED_OPENAI_TRIAL_ENABLED = get_bool_env('HOSTED_OPENAI_TRIAL_ENABLED')
self.HOSTED_OPENAI_QUOTA_LIMIT = int(get_env('HOSTED_OPENAI_QUOTA_LIMIT'))
self.HOSTED_OPENAI_PAID_ENABLED = get_bool_env('HOSTED_OPENAI_PAID_ENABLED')
self.HOSTED_OPENAI_PAID_STRIPE_PRICE_ID = get_env('HOSTED_OPENAI_PAID_STRIPE_PRICE_ID')
self.HOSTED_OPENAI_PAID_INCREASE_QUOTA = int(get_env('HOSTED_OPENAI_PAID_INCREASE_QUOTA'))
self.HOSTED_OPENAI_PAID_MIN_QUANTITY = int(get_env('HOSTED_OPENAI_PAID_MIN_QUANTITY'))
self.HOSTED_OPENAI_PAID_MAX_QUANTITY = int(get_env('HOSTED_OPENAI_PAID_MAX_QUANTITY'))
self.HOSTED_AZURE_OPENAI_ENABLED = get_bool_env('HOSTED_AZURE_OPENAI_ENABLED')
self.HOSTED_AZURE_OPENAI_API_KEY = get_env('HOSTED_AZURE_OPENAI_API_KEY')
self.HOSTED_AZURE_OPENAI_API_BASE = get_env('HOSTED_AZURE_OPENAI_API_BASE')
self.HOSTED_AZURE_OPENAI_QUOTA_LIMIT = int(get_env('HOSTED_AZURE_OPENAI_QUOTA_LIMIT'))
self.HOSTED_ANTHROPIC_ENABLED = get_bool_env('HOSTED_ANTHROPIC_ENABLED')
self.HOSTED_ANTHROPIC_API_BASE = get_env('HOSTED_ANTHROPIC_API_BASE')
self.HOSTED_ANTHROPIC_API_KEY = get_env('HOSTED_ANTHROPIC_API_KEY')
self.HOSTED_ANTHROPIC_TRIAL_ENABLED = get_bool_env('HOSTED_ANTHROPIC_TRIAL_ENABLED')
self.HOSTED_ANTHROPIC_QUOTA_LIMIT = int(get_env('HOSTED_ANTHROPIC_QUOTA_LIMIT'))
self.HOSTED_ANTHROPIC_PAID_ENABLED = get_bool_env('HOSTED_ANTHROPIC_PAID_ENABLED')
self.HOSTED_ANTHROPIC_PAID_STRIPE_PRICE_ID = get_env('HOSTED_ANTHROPIC_PAID_STRIPE_PRICE_ID')
self.HOSTED_ANTHROPIC_PAID_INCREASE_QUOTA = int(get_env('HOSTED_ANTHROPIC_PAID_INCREASE_QUOTA'))
self.HOSTED_ANTHROPIC_PAID_MIN_QUANTITY = int(get_env('HOSTED_ANTHROPIC_PAID_MIN_QUANTITY'))
self.HOSTED_ANTHROPIC_PAID_MAX_QUANTITY = int(get_env('HOSTED_ANTHROPIC_PAID_MAX_QUANTITY'))
self.HOSTED_MINIMAX_ENABLED = get_bool_env('HOSTED_MINIMAX_ENABLED')
self.HOSTED_SPARK_ENABLED = get_bool_env('HOSTED_SPARK_ENABLED')
self.HOSTED_ZHIPUAI_ENABLED = get_bool_env('HOSTED_ZHIPUAI_ENABLED')
self.HOSTED_MODERATION_ENABLED = get_bool_env('HOSTED_MODERATION_ENABLED')
self.HOSTED_MODERATION_PROVIDERS = get_env('HOSTED_MODERATION_PROVIDERS')

View File

@ -163,29 +163,8 @@ def compact_response(response: Union[dict, Generator]) -> Response:
return Response(response=json.dumps(response), status=200, mimetype='application/json')
else:
def generate() -> Generator:
try:
for chunk in response:
yield chunk
except services.errors.conversation.ConversationNotExistsError:
yield "data: " + json.dumps(api.handle_error(NotFound("Conversation Not Exists.")).get_json()) + "\n\n"
except services.errors.conversation.ConversationCompletedError:
yield "data: " + json.dumps(api.handle_error(ConversationCompletedError()).get_json()) + "\n\n"
except services.errors.app_model_config.AppModelConfigBrokenError:
logging.exception("App model config broken.")
yield "data: " + json.dumps(api.handle_error(AppUnavailableError()).get_json()) + "\n\n"
except ProviderTokenNotInitError as ex:
yield "data: " + json.dumps(api.handle_error(ProviderNotInitializeError(ex.description)).get_json()) + "\n\n"
except QuotaExceededError:
yield "data: " + json.dumps(api.handle_error(ProviderQuotaExceededError()).get_json()) + "\n\n"
except ModelCurrentlyNotSupportError:
yield "data: " + json.dumps(api.handle_error(ProviderModelCurrentlyNotSupportError()).get_json()) + "\n\n"
except InvokeError as e:
yield "data: " + json.dumps(api.handle_error(CompletionRequestError(e.description)).get_json()) + "\n\n"
except ValueError as e:
yield "data: " + json.dumps(api.handle_error(e).get_json()) + "\n\n"
except Exception:
logging.exception("internal server error.")
yield "data: " + json.dumps(api.handle_error(InternalServerError()).get_json()) + "\n\n"
for chunk in response:
yield chunk
return Response(stream_with_context(generate()), status=200,
mimetype='text/event-stream')
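This hunk (and the parallel ones in the files below) removes the per-route `try/except` chains from the streaming generators; errors are instead converted into `error` stream events centrally in `GenerateTaskPipeline._error_to_stream_response_data`, shown further down. For reference, a minimal sketch of the underlying Flask server-sent-events pattern, assuming a trivial hypothetical chunk source rather than Dify's real response generator:

```python
import json

from flask import Flask, Response, stream_with_context

app = Flask(__name__)

@app.route('/stream')
def stream():
    # Hypothetical chunk source; the real code iterates a pipeline's response generator.
    chunks = ('data: ' + json.dumps({'event': 'message', 'answer': word}) + '\n\n'
              for word in ('hello', 'world'))

    def generate():
        for chunk in chunks:  # errors now surface upstream as 'error' events,
            yield chunk       # so no per-route exception handling is needed here

    return Response(stream_with_context(generate()), status=200,
                    mimetype='text/event-stream')

if __name__ == '__main__':
    app.run(port=5001)
```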

View File

@ -241,27 +241,8 @@ def compact_response(response: Union[dict, Generator]) -> Response:
return Response(response=json.dumps(response), status=200, mimetype='application/json')
else:
def generate() -> Generator:
try:
for chunk in response:
yield chunk
except MessageNotExistsError:
yield "data: " + json.dumps(api.handle_error(NotFound("Message Not Exists.")).get_json()) + "\n\n"
except MoreLikeThisDisabledError:
yield "data: " + json.dumps(api.handle_error(AppMoreLikeThisDisabledError()).get_json()) + "\n\n"
except ProviderTokenNotInitError as ex:
yield "data: " + json.dumps(api.handle_error(ProviderNotInitializeError(ex.description)).get_json()) + "\n\n"
except QuotaExceededError:
yield "data: " + json.dumps(api.handle_error(ProviderQuotaExceededError()).get_json()) + "\n\n"
except ModelCurrentlyNotSupportError:
yield "data: " + json.dumps(
api.handle_error(ProviderModelCurrentlyNotSupportError()).get_json()) + "\n\n"
except InvokeError as e:
yield "data: " + json.dumps(api.handle_error(CompletionRequestError(e.description)).get_json()) + "\n\n"
except ValueError as e:
yield "data: " + json.dumps(api.handle_error(e).get_json()) + "\n\n"
except Exception:
logging.exception("internal server error.")
yield "data: " + json.dumps(api.handle_error(InternalServerError()).get_json()) + "\n\n"
for chunk in response:
yield chunk
return Response(stream_with_context(generate()), status=200,
mimetype='text/event-stream')

View File

@ -19,7 +19,7 @@ from flask import current_app, request
from flask_login import current_user
from flask_restful import Resource, marshal, marshal_with, reqparse
from libs.login import login_required
from models.dataset import Document, DocumentSegment
from models.dataset import Dataset, Document, DocumentSegment
from models.model import ApiToken, UploadFile
from services.dataset_service import DatasetService, DocumentService
from werkzeug.exceptions import Forbidden, NotFound
@ -97,7 +97,8 @@ class DatasetListApi(Resource):
help='type is required. Name must be between 1 to 40 characters.',
type=_validate_name)
parser.add_argument('indexing_technique', type=str, location='json',
choices=('high_quality', 'economy'),
choices=Dataset.INDEXING_TECHNIQUE_LIST,
nullable=True,
help='Invalid indexing technique.')
args = parser.parse_args()
@ -177,8 +178,9 @@ class DatasetApi(Resource):
location='json', store_missing=False,
type=_validate_description_length)
parser.add_argument('indexing_technique', type=str, location='json',
choices=('high_quality', 'economy'),
help='Invalid indexing technique.')
choices=Dataset.INDEXING_TECHNIQUE_LIST,
nullable=True,
help='Invalid indexing technique.')
parser.add_argument('permission', type=str, location='json', choices=(
'only_me', 'all_team_members'), help='Invalid permission.')
parser.add_argument('retrieval_model', type=dict, location='json', help='Invalid retrieval model.')
@ -256,7 +258,9 @@ class DatasetIndexingEstimateApi(Resource):
parser = reqparse.RequestParser()
parser.add_argument('info_list', type=dict, required=True, nullable=True, location='json')
parser.add_argument('process_rule', type=dict, required=True, nullable=True, location='json')
parser.add_argument('indexing_technique', type=str, required=True, nullable=True, location='json')
parser.add_argument('indexing_technique', type=str, required=True,
choices=Dataset.INDEXING_TECHNIQUE_LIST,
nullable=True, location='json')
parser.add_argument('doc_form', type=str, default='text_model', required=False, nullable=False, location='json')
parser.add_argument('dataset_id', type=str, required=False, nullable=False, location='json')
parser.add_argument('doc_language', type=str, default='English', required=False, nullable=False,
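The recurring change in this file replaces the inline `('high_quality', 'economy')` tuple with the shared `Dataset.INDEXING_TECHNIQUE_LIST` constant. Below is a hedged, standalone sketch of how reqparse's `choices=` validation behaves; the list contents are assumed from the tuple it replaces, and `DemoApi` is hypothetical.

```python
from flask import Flask
from flask_restful import Api, Resource, reqparse

# Assumed to match Dataset.INDEXING_TECHNIQUE_LIST, based on the tuple it replaces.
INDEXING_TECHNIQUE_LIST = ['high_quality', 'economy']

app = Flask(__name__)
api = Api(app)

class DemoApi(Resource):  # hypothetical resource for illustration
    def post(self):
        parser = reqparse.RequestParser()
        # choices= rejects any other value with a 400 response carrying the help text
        parser.add_argument('indexing_technique', type=str, location='json',
                            choices=INDEXING_TECHNIQUE_LIST, nullable=True,
                            help='Invalid indexing technique.')
        args = parser.parse_args()
        return {'indexing_technique': args['indexing_technique']}

api.add_resource(DemoApi, '/demo')

if __name__ == '__main__':
    app.run(port=5001)
```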

View File

@ -158,29 +158,8 @@ def compact_response(response: Union[dict, Generator]) -> Response:
return Response(response=json.dumps(response), status=200, mimetype='application/json')
else:
def generate() -> Generator:
try:
for chunk in response:
yield chunk
except services.errors.conversation.ConversationNotExistsError:
yield "data: " + json.dumps(api.handle_error(NotFound("Conversation Not Exists.")).get_json()) + "\n\n"
except services.errors.conversation.ConversationCompletedError:
yield "data: " + json.dumps(api.handle_error(ConversationCompletedError()).get_json()) + "\n\n"
except services.errors.app_model_config.AppModelConfigBrokenError:
logging.exception("App model config broken.")
yield "data: " + json.dumps(api.handle_error(AppUnavailableError()).get_json()) + "\n\n"
except ProviderTokenNotInitError as ex:
yield "data: " + json.dumps(api.handle_error(ProviderNotInitializeError(ex.description)).get_json()) + "\n\n"
except QuotaExceededError:
yield "data: " + json.dumps(api.handle_error(ProviderQuotaExceededError()).get_json()) + "\n\n"
except ModelCurrentlyNotSupportError:
yield "data: " + json.dumps(api.handle_error(ProviderModelCurrentlyNotSupportError()).get_json()) + "\n\n"
except InvokeError as e:
yield "data: " + json.dumps(api.handle_error(CompletionRequestError(e.description)).get_json()) + "\n\n"
except ValueError as e:
yield "data: " + json.dumps(api.handle_error(e).get_json()) + "\n\n"
except Exception:
logging.exception("internal server error.")
yield "data: " + json.dumps(api.handle_error(InternalServerError()).get_json()) + "\n\n"
for chunk in response:
yield chunk
return Response(stream_with_context(generate()), status=200,
mimetype='text/event-stream')

View File

@ -117,26 +117,8 @@ def compact_response(response: Union[dict, Generator]) -> Response:
return Response(response=json.dumps(response), status=200, mimetype='application/json')
else:
def generate() -> Generator:
try:
for chunk in response:
yield chunk
except MessageNotExistsError:
yield "data: " + json.dumps(api.handle_error(NotFound("Message Not Exists.")).get_json()) + "\n\n"
except MoreLikeThisDisabledError:
yield "data: " + json.dumps(api.handle_error(AppMoreLikeThisDisabledError()).get_json()) + "\n\n"
except ProviderTokenNotInitError as ex:
yield "data: " + json.dumps(api.handle_error(ProviderNotInitializeError(ex.description)).get_json()) + "\n\n"
except QuotaExceededError:
yield "data: " + json.dumps(api.handle_error(ProviderQuotaExceededError()).get_json()) + "\n\n"
except ModelCurrentlyNotSupportError:
yield "data: " + json.dumps(api.handle_error(ProviderModelCurrentlyNotSupportError()).get_json()) + "\n\n"
except InvokeError as e:
yield "data: " + json.dumps(api.handle_error(CompletionRequestError(e.description)).get_json()) + "\n\n"
except ValueError as e:
yield "data: " + json.dumps(api.handle_error(e).get_json()) + "\n\n"
except Exception:
logging.exception("internal server error.")
yield "data: " + json.dumps(api.handle_error(InternalServerError()).get_json()) + "\n\n"
for chunk in response:
yield chunk
return Response(stream_with_context(generate()), status=200,
mimetype='text/event-stream')

View File

@ -109,29 +109,8 @@ def compact_response(response: Union[dict, Generator]) -> Response:
return Response(response=json.dumps(response), status=200, mimetype='application/json')
else:
def generate() -> Generator:
try:
for chunk in response:
yield chunk
except services.errors.conversation.ConversationNotExistsError:
yield "data: " + json.dumps(api.handle_error(NotFound("Conversation Not Exists.")).get_json()) + "\n\n"
except services.errors.conversation.ConversationCompletedError:
yield "data: " + json.dumps(api.handle_error(ConversationCompletedError()).get_json()) + "\n\n"
except services.errors.app_model_config.AppModelConfigBrokenError:
logging.exception("App model config broken.")
yield "data: " + json.dumps(api.handle_error(AppUnavailableError()).get_json()) + "\n\n"
except ProviderTokenNotInitError:
yield "data: " + json.dumps(api.handle_error(ProviderNotInitializeError()).get_json()) + "\n\n"
except QuotaExceededError:
yield "data: " + json.dumps(api.handle_error(ProviderQuotaExceededError()).get_json()) + "\n\n"
except ModelCurrentlyNotSupportError:
yield "data: " + json.dumps(api.handle_error(ProviderModelCurrentlyNotSupportError()).get_json()) + "\n\n"
except InvokeError as e:
yield "data: " + json.dumps(api.handle_error(CompletionRequestError(e.description)).get_json()) + "\n\n"
except ValueError as e:
yield "data: " + json.dumps(api.handle_error(e).get_json()) + "\n\n"
except Exception:
logging.exception("internal server error.")
yield "data: " + json.dumps(api.handle_error(InternalServerError()).get_json()) + "\n\n"
for chunk in response:
yield chunk
return Response(stream_with_context(generate()), status=200,
mimetype='text/event-stream')

View File

@ -6,5 +6,6 @@ bp = Blueprint('service_api', __name__, url_prefix='/v1')
api = ExternalApi(bp)
from . import index
from .app import app, audio, completion, conversation, file, message
from .dataset import dataset, document, segment

View File

@ -79,7 +79,12 @@ class CompletionStopApi(AppApiResource):
if app_model.mode != 'completion':
raise AppUnavailableError()
end_user_id = request.get_json().get('user')
parser = reqparse.RequestParser()
parser.add_argument('user', required=True, nullable=False, type=str, location='json')
args = parser.parse_args()
end_user_id = args.get('user')
ApplicationQueueManager.set_stop_flag(task_id, InvokeFrom.SERVICE_API, end_user_id)
@ -157,29 +162,8 @@ def compact_response(response: Union[dict, Generator]) -> Response:
return Response(response=json.dumps(response), status=200, mimetype='application/json')
else:
def generate() -> Generator:
try:
for chunk in response:
yield chunk
except services.errors.conversation.ConversationNotExistsError:
yield "data: " + json.dumps(api.handle_error(NotFound("Conversation Not Exists.")).get_json()) + "\n\n"
except services.errors.conversation.ConversationCompletedError:
yield "data: " + json.dumps(api.handle_error(ConversationCompletedError()).get_json()) + "\n\n"
except services.errors.app_model_config.AppModelConfigBrokenError:
logging.exception("App model config broken.")
yield "data: " + json.dumps(api.handle_error(AppUnavailableError()).get_json()) + "\n\n"
except ProviderTokenNotInitError as ex:
yield "data: " + json.dumps(api.handle_error(ProviderNotInitializeError(ex.description)).get_json()) + "\n\n"
except QuotaExceededError:
yield "data: " + json.dumps(api.handle_error(ProviderQuotaExceededError()).get_json()) + "\n\n"
except ModelCurrentlyNotSupportError:
yield "data: " + json.dumps(api.handle_error(ProviderModelCurrentlyNotSupportError()).get_json()) + "\n\n"
except InvokeError as e:
yield "data: " + json.dumps(api.handle_error(CompletionRequestError(e.description)).get_json()) + "\n\n"
except ValueError as e:
yield "data: " + json.dumps(api.handle_error(e).get_json()) + "\n\n"
except Exception:
logging.exception("internal server error.")
yield "data: " + json.dumps(api.handle_error(InternalServerError()).get_json()) + "\n\n"
for chunk in response:
yield chunk
return Response(stream_with_context(generate()), status=200,
mimetype='text/event-stream')

View File

@ -86,5 +86,4 @@ class ConversationRenameApi(AppApiResource):
api.add_resource(ConversationRenameApi, '/conversations/<uuid:c_id>/name', endpoint='conversation_name')
api.add_resource(ConversationApi, '/conversations')
api.add_resource(ConversationApi, '/conversations/<uuid:c_id>', endpoint='conversation')
api.add_resource(ConversationDetailApi, '/conversations/<uuid:c_id>', endpoint='conversation_detail')

View File

@ -1,3 +1,4 @@
from models.dataset import Dataset
import services.dataset_service
from controllers.service_api import api
from controllers.service_api.dataset.error import DatasetNameDuplicateError
@ -68,7 +69,7 @@ class DatasetApi(DatasetApiResource):
help='type is required. Name must be between 1 to 40 characters.',
type=_validate_name)
parser.add_argument('indexing_technique', type=str, location='json',
choices=('high_quality', 'economy'),
choices=Dataset.INDEXING_TECHNIQUE_LIST,
help='Invalid indexing technique.')
args = parser.parse_args()

View File

@ -0,0 +1,16 @@
from flask import current_app
from flask_restful import Resource
from controllers.service_api import api
class IndexApi(Resource):
def get(self):
return {
"welcome": "Dify OpenAPI",
"api_version": "v1",
"server_version": current_app.config['CURRENT_VERSION']
}
api.add_resource(IndexApi, '/')

View File

@ -75,7 +75,7 @@ def validate_dataset_token(view=None):
tenant_account_join = db.session.query(Tenant, TenantAccountJoin) \
.filter(Tenant.id == api_token.tenant_id) \
.filter(TenantAccountJoin.tenant_id == Tenant.id) \
.filter(TenantAccountJoin.role == 'owner') \
.filter(TenantAccountJoin.role.in_(['owner', 'admin'])) \
.one_or_none()
if tenant_account_join:
tenant, ta = tenant_account_join
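This one-line change is what lets workspace admins, not just owners, authenticate with a dataset API key (#2072). As a minimal, hedged illustration of the SQLAlchemy `in_()` filter it relies on, with a hypothetical `Member` model standing in for `TenantAccountJoin`:

```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Member(Base):  # hypothetical stand-in for TenantAccountJoin
    __tablename__ = 'members'
    id = Column(Integer, primary_key=True)
    role = Column(String)

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Member(role='owner'), Member(role='admin'), Member(role='normal')])
    # role.in_([...]) compiles to SQL: role IN ('owner', 'admin')
    rows = session.scalars(select(Member).where(Member.role.in_(['owner', 'admin']))).all()
    assert {m.role for m in rows} == {'owner', 'admin'}
```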

View File

@ -146,29 +146,8 @@ def compact_response(response: Union[dict, Generator]) -> Response:
return Response(response=json.dumps(response), status=200, mimetype='application/json')
else:
def generate() -> Generator:
try:
for chunk in response:
yield chunk
except services.errors.conversation.ConversationNotExistsError:
yield "data: " + json.dumps(api.handle_error(NotFound("Conversation Not Exists.")).get_json()) + "\n\n"
except services.errors.conversation.ConversationCompletedError:
yield "data: " + json.dumps(api.handle_error(ConversationCompletedError()).get_json()) + "\n\n"
except services.errors.app_model_config.AppModelConfigBrokenError:
logging.exception("App model config broken.")
yield "data: " + json.dumps(api.handle_error(AppUnavailableError()).get_json()) + "\n\n"
except ProviderTokenNotInitError as ex:
yield "data: " + json.dumps(api.handle_error(ProviderNotInitializeError(ex.description)).get_json()) + "\n\n"
except QuotaExceededError:
yield "data: " + json.dumps(api.handle_error(ProviderQuotaExceededError()).get_json()) + "\n\n"
except ModelCurrentlyNotSupportError:
yield "data: " + json.dumps(api.handle_error(ProviderModelCurrentlyNotSupportError()).get_json()) + "\n\n"
except InvokeError as e:
yield "data: " + json.dumps(api.handle_error(CompletionRequestError(e.description)).get_json()) + "\n\n"
except ValueError as e:
yield "data: " + json.dumps(api.handle_error(e).get_json()) + "\n\n"
except Exception:
logging.exception("internal server error.")
yield "data: " + json.dumps(api.handle_error(InternalServerError()).get_json()) + "\n\n"
for chunk in response:
yield chunk
return Response(stream_with_context(generate()), status=200,
mimetype='text/event-stream')

View File

@ -151,26 +151,8 @@ def compact_response(response: Union[dict, Generator]) -> Response:
return Response(response=json.dumps(response), status=200, mimetype='application/json')
else:
def generate() -> Generator:
try:
for chunk in response:
yield chunk
except MessageNotExistsError:
yield "data: " + json.dumps(api.handle_error(NotFound("Message Not Exists.")).get_json()) + "\n\n"
except MoreLikeThisDisabledError:
yield "data: " + json.dumps(api.handle_error(AppMoreLikeThisDisabledError()).get_json()) + "\n\n"
except ProviderTokenNotInitError as ex:
yield "data: " + json.dumps(api.handle_error(ProviderNotInitializeError(ex.description)).get_json()) + "\n\n"
except QuotaExceededError:
yield "data: " + json.dumps(api.handle_error(ProviderQuotaExceededError()).get_json()) + "\n\n"
except ModelCurrentlyNotSupportError:
yield "data: " + json.dumps(api.handle_error(ProviderModelCurrentlyNotSupportError()).get_json()) + "\n\n"
except InvokeError as e:
yield "data: " + json.dumps(api.handle_error(CompletionRequestError(e.description)).get_json()) + "\n\n"
except ValueError as e:
yield "data: " + json.dumps(api.handle_error(e).get_json()) + "\n\n"
except Exception:
logging.exception("internal server error.")
yield "data: " + json.dumps(api.handle_error(InternalServerError()).get_json()) + "\n\n"
for chunk in response:
yield chunk
return Response(stream_with_context(generate()), status=200,
mimetype='text/event-stream')

View File

@ -5,16 +5,18 @@ from typing import Generator, Optional, Union, cast
from core.app_runner.moderation_handler import ModerationRule, OutputModerationHandler
from core.application_queue_manager import ApplicationQueueManager, PublishFrom
from core.entities.application_entities import ApplicationGenerateEntity
from core.entities.application_entities import ApplicationGenerateEntity, InvokeFrom
from core.entities.queue_entities import (AnnotationReplyEvent, QueueAgentThoughtEvent, QueueErrorEvent,
QueueMessageEndEvent, QueueMessageEvent, QueueMessageReplaceEvent,
QueuePingEvent, QueueRetrieverResourcesEvent, QueueStopEvent)
from core.errors.error import ProviderTokenNotInitError, QuotaExceededError, ModelCurrentlyNotSupportError
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import (AssistantPromptMessage, ImagePromptMessageContent,
PromptMessage, PromptMessageContentType, PromptMessageRole,
TextPromptMessageContent)
from core.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.model_runtime.utils.encoders import jsonable_encoder
from core.prompt.prompt_template import PromptTemplateParser
from events.message_event import message_was_created
from extensions.ext_database import db
@ -135,6 +137,8 @@ class GenerateTaskPipeline:
completion_tokens
)
self._task_state.metadata['usage'] = jsonable_encoder(self._task_state.llm_result.usage)
# response moderation
if self._output_moderation_handler:
self._output_moderation_handler.stop_thread()
@ -145,12 +149,13 @@ class GenerateTaskPipeline:
)
# Save message
self._save_message(event.llm_result)
self._save_message(self._task_state.llm_result)
response = {
'event': 'message',
'task_id': self._application_generate_entity.task_id,
'id': self._message.id,
'message_id': self._message.id,
'mode': self._conversation.mode,
'answer': event.llm_result.message.content,
'metadata': {},
@ -161,7 +166,7 @@ class GenerateTaskPipeline:
response['conversation_id'] = self._conversation.id
if self._task_state.metadata:
response['metadata'] = self._task_state.metadata
response['metadata'] = self._get_response_metadata()
return response
else:
@ -176,7 +181,9 @@ class GenerateTaskPipeline:
event = message.event
if isinstance(event, QueueErrorEvent):
raise self._handle_error(event)
data = self._error_to_stream_response_data(self._handle_error(event))
yield self._yield_response(data)
break
elif isinstance(event, (QueueStopEvent, QueueMessageEndEvent)):
if isinstance(event, QueueMessageEndEvent):
self._task_state.llm_result = event.llm_result
@ -213,6 +220,8 @@ class GenerateTaskPipeline:
completion_tokens
)
self._task_state.metadata['usage'] = jsonable_encoder(self._task_state.llm_result.usage)
# response moderation
if self._output_moderation_handler:
self._output_moderation_handler.stop_thread()
@ -244,13 +253,14 @@ class GenerateTaskPipeline:
'event': 'message_end',
'task_id': self._application_generate_entity.task_id,
'id': self._message.id,
'message_id': self._message.id,
}
if self._conversation.mode == 'chat':
response['conversation_id'] = self._conversation.id
if self._task_state.metadata:
response['metadata'] = self._task_state.metadata
response['metadata'] = self._get_response_metadata()
yield self._yield_response(response)
elif isinstance(event, QueueRetrieverResourcesEvent):
@ -410,6 +420,86 @@ class GenerateTaskPipeline:
else:
return Exception(e.description if getattr(e, 'description', None) is not None else str(e))
def _error_to_stream_response_data(self, e: Exception) -> dict:
"""
Error to stream response.
:param e: exception
:return:
"""
if isinstance(e, ValueError):
data = {
'code': 'invalid_param',
'message': str(e),
'status': 400
}
elif isinstance(e, ProviderTokenNotInitError):
data = {
'code': 'provider_not_initialize',
'message': e.description,
'status': 400
}
elif isinstance(e, QuotaExceededError):
data = {
'code': 'provider_quota_exceeded',
'message': "Your quota for Dify Hosted Model Provider has been exhausted. "
"Please go to Settings -> Model Provider to complete your own provider credentials.",
'status': 400
}
elif isinstance(e, ModelCurrentlyNotSupportError):
data = {
'code': 'model_currently_not_support',
'message': e.description,
'status': 400
}
elif isinstance(e, InvokeError):
data = {
'code': 'completion_request_error',
'message': e.description,
'status': 400
}
else:
logging.error(e)
data = {
'code': 'internal_server_error',
'message': 'Internal Server Error, please contact support.',
'status': 500
}
return {
'event': 'error',
'task_id': self._application_generate_entity.task_id,
'message_id': self._message.id,
**data
}
def _get_response_metadata(self) -> dict:
"""
Get response metadata by invoke from.
:return:
"""
metadata = {}
# show_retrieve_source
if 'retriever_resources' in self._task_state.metadata:
if self._application_generate_entity.invoke_from in [InvokeFrom.DEBUGGER, InvokeFrom.SERVICE_API]:
metadata['retriever_resources'] = self._task_state.metadata['retriever_resources']
else:
metadata['retriever_resources'] = []
for resource in self._task_state.metadata['retriever_resources']:
metadata['retriever_resources'].append({
'segment_id': resource['segment_id'],
'position': resource['position'],
'document_name': resource['document_name'],
'score': resource['score'],
'content': resource['content'],
})
# show usage
if self._application_generate_entity.invoke_from in [InvokeFrom.DEBUGGER, InvokeFrom.SERVICE_API]:
metadata['usage'] = self._task_state.metadata['usage']
return metadata
def _yield_response(self, response: dict) -> str:
"""
Yield response.
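The pipeline now reports errors as first-class stream events instead of letting exceptions escape into each controller. A minimal sketch of the SSE framing involved follows; the `data:` prefix and double newline match the lines the controllers emit above, while the payload values are illustrative:

```python
import json

def yield_response(response: dict) -> str:
    # Frame one JSON payload as a server-sent event, matching the controllers above.
    return "data: " + json.dumps(response) + "\n\n"

print(yield_response({
    'event': 'error',
    'task_id': 'task-123',    # hypothetical IDs
    'message_id': 'msg-456',
    'code': 'provider_quota_exceeded',
    'status': 400,
}), end='')
```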

View File

@ -1,10 +1,16 @@
import base64
import json
import logging
from typing import List, Optional
from typing import List, Optional, cast
import numpy as np
from core.model_manager import ModelInstance
from core.model_runtime.entities.model_entities import ModelPropertyKey
from core.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel
from extensions.ext_database import db
from langchain.embeddings.base import Embeddings
from extensions.ext_redis import redis_client
from libs import helper
from models.dataset import Embedding
from sqlalchemy.exc import IntegrityError
@ -18,47 +24,33 @@ class CacheEmbedding(Embeddings):
self._user = user
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Embed search docs."""
# use doc embedding cache or store if not exists
text_embeddings = [None for _ in range(len(texts))]
embedding_queue_indices = []
for i, text in enumerate(texts):
hash = helper.generate_text_hash(text)
embedding = db.session.query(Embedding).filter_by(model_name=self._model_instance.model, hash=hash).first()
if embedding:
text_embeddings[i] = embedding.get_embedding()
else:
embedding_queue_indices.append(i)
"""Embed search docs in batches of 10."""
text_embeddings = []
try:
model_type_instance = cast(TextEmbeddingModel, self._model_instance.model_type_instance)
model_schema = model_type_instance.get_model_schema(self._model_instance.model, self._model_instance.credentials)
max_chunks = model_schema.model_properties[ModelPropertyKey.MAX_CHUNKS] \
if model_schema and ModelPropertyKey.MAX_CHUNKS in model_schema.model_properties else 1
for i in range(0, len(texts), max_chunks):
batch_texts = texts[i:i + max_chunks]
if embedding_queue_indices:
try:
embedding_result = self._model_instance.invoke_text_embedding(
texts=[texts[i] for i in embedding_queue_indices],
texts=batch_texts,
user=self._user
)
embedding_results = embedding_result.embeddings
except Exception as ex:
logger.error('Failed to embed documents: ', ex)
raise ex
for vector in embedding_result.embeddings:
try:
normalized_embedding = (vector / np.linalg.norm(vector)).tolist()
text_embeddings.append(normalized_embedding)
except IntegrityError:
db.session.rollback()
except Exception as e:
logging.exception('Failed to add embedding to redis')
for i, indice in enumerate(embedding_queue_indices):
hash = helper.generate_text_hash(texts[indice])
try:
embedding = Embedding(model_name=self._model_instance.model, hash=hash)
vector = embedding_results[i]
normalized_embedding = (vector / np.linalg.norm(vector)).tolist()
text_embeddings[indice] = normalized_embedding
embedding.set_embedding(normalized_embedding)
db.session.add(embedding)
db.session.commit()
except IntegrityError:
db.session.rollback()
continue
except:
logging.exception('Failed to add embedding to db')
continue
except Exception as ex:
logger.error('Failed to embed documents: ', ex)
raise ex
return text_embeddings
@ -66,9 +58,12 @@ class CacheEmbedding(Embeddings):
"""Embed query text."""
# use doc embedding cache or store if not exists
hash = helper.generate_text_hash(text)
embedding = db.session.query(Embedding).filter_by(model_name=self._model_instance.model, hash=hash).first()
embedding_cache_key = f'{self._model_instance.provider}_{self._model_instance.model}_{hash}'
embedding = redis_client.get(embedding_cache_key)
if embedding:
return embedding.get_embedding()
redis_client.expire(embedding_cache_key, 600)
return list(np.frombuffer(base64.b64decode(embedding), dtype="float"))
try:
embedding_result = self._model_instance.invoke_text_embedding(
@ -82,13 +77,18 @@ class CacheEmbedding(Embeddings):
raise ex
try:
embedding = Embedding(model_name=self._model_instance.model, hash=hash)
embedding.set_embedding(embedding_results)
db.session.add(embedding)
db.session.commit()
# encode embedding to base64
embedding_vector = np.array(embedding_results)
vector_bytes = embedding_vector.tobytes()
# Transform to Base64
encoded_vector = base64.b64encode(vector_bytes)
# Transform to string
encoded_str = encoded_vector.decode("utf-8")
redis_client.setex(embedding_cache_key, 600, encoded_str)
except IntegrityError:
db.session.rollback()
except:
logging.exception('Failed to add embedding to db')
logging.exception('Failed to add embedding to redis')
return embedding_results
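With this change, `embed_query` caches each vector in Redis as base64-encoded raw NumPy bytes with a 600-second TTL instead of persisting it to the database. A minimal, hedged sketch of that encode/decode round trip, using a plain dict in place of a real Redis client (the cache key shown is hypothetical):

```python
import base64

import numpy as np

cache: dict[str, str] = {}  # stand-in for redis_client; setex would also set a 600s TTL

def put(key: str, vector: list[float]) -> None:
    # Serialize the float64 vector to raw bytes, then base64, as embed_query does.
    cache[key] = base64.b64encode(np.array(vector).tobytes()).decode("utf-8")

def get(key: str) -> list[float] | None:
    encoded = cache.get(key)
    if encoded is None:
        return None
    # np.frombuffer(..., dtype="float") reads the bytes back as float64 values.
    return list(np.frombuffer(base64.b64decode(encoded), dtype="float"))

put("openai_text-embedding-ada-002_abc123", [0.6, 0.8])  # hypothetical cache key
assert get("openai_text-embedding-ada-002_abc123") == [0.6, 0.8]
```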

View File

@ -166,8 +166,7 @@ class DatasetRetrievalFeature:
dataset_ids=[dataset.id for dataset in available_datasets],
tenant_id=tenant_id,
top_k=retrieve_config.top_k or 2,
score_threshold=(retrieve_config.score_threshold or 0.5)
if retrieve_config.reranking_model.get('score_threshold_enabled', False) else None,
score_threshold=retrieve_config.score_threshold,
hit_callbacks=[hit_callback],
return_resource=return_resource,
retriever_from=invoke_from.to_source(),


@@ -1,9 +1,8 @@
import os
from typing import Optional
from core.entities.provider_entities import QuotaUnit, RestrictModel
from core.model_runtime.entities.model_entities import ModelType
from flask import Flask
from flask import Flask, Config
from models.provider import ProviderQuotaType
from pydantic import BaseModel
@@ -48,46 +47,47 @@ class HostingConfiguration:
moderation_config: HostedModerationConfig = None
def init_app(self, app: Flask) -> None:
if app.config.get('EDITION') != 'CLOUD':
config = app.config
if config.get('EDITION') != 'CLOUD':
return
self.provider_map["azure_openai"] = self.init_azure_openai()
self.provider_map["openai"] = self.init_openai()
self.provider_map["anthropic"] = self.init_anthropic()
self.provider_map["minimax"] = self.init_minimax()
self.provider_map["spark"] = self.init_spark()
self.provider_map["zhipuai"] = self.init_zhipuai()
self.provider_map["azure_openai"] = self.init_azure_openai(config)
self.provider_map["openai"] = self.init_openai(config)
self.provider_map["anthropic"] = self.init_anthropic(config)
self.provider_map["minimax"] = self.init_minimax(config)
self.provider_map["spark"] = self.init_spark(config)
self.provider_map["zhipuai"] = self.init_zhipuai(config)
self.moderation_config = self.init_moderation_config()
self.moderation_config = self.init_moderation_config(config)
def init_azure_openai(self) -> HostingProvider:
def init_azure_openai(self, app_config: Config) -> HostingProvider:
quota_unit = QuotaUnit.TIMES
if os.environ.get("HOSTED_AZURE_OPENAI_ENABLED") and os.environ.get("HOSTED_AZURE_OPENAI_ENABLED").lower() == 'true':
if app_config.get("HOSTED_AZURE_OPENAI_ENABLED"):
credentials = {
"openai_api_key": os.environ.get("HOSTED_AZURE_OPENAI_API_KEY"),
"openai_api_base": os.environ.get("HOSTED_AZURE_OPENAI_API_BASE"),
"openai_api_key": app_config.get("HOSTED_AZURE_OPENAI_API_KEY"),
"openai_api_base": app_config.get("HOSTED_AZURE_OPENAI_API_BASE"),
"base_model_name": "gpt-35-turbo"
}
quotas = []
hosted_quota_limit = int(os.environ.get("HOSTED_AZURE_OPENAI_QUOTA_LIMIT", "1000"))
if hosted_quota_limit != -1 or hosted_quota_limit > 0:
trial_quota = TrialHostingQuota(
quota_limit=hosted_quota_limit,
restrict_models=[
RestrictModel(model="gpt-4", base_model_name="gpt-4", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-32k", base_model_name="gpt-4-32k", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-1106-preview", base_model_name="gpt-4-1106-preview", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-vision-preview", base_model_name="gpt-4-vision-preview", model_type=ModelType.LLM),
RestrictModel(model="gpt-35-turbo", base_model_name="gpt-35-turbo", model_type=ModelType.LLM),
RestrictModel(model="gpt-35-turbo-1106", base_model_name="gpt-35-turbo-1106", model_type=ModelType.LLM),
RestrictModel(model="gpt-35-turbo-instruct", base_model_name="gpt-35-turbo-instruct", model_type=ModelType.LLM),
RestrictModel(model="gpt-35-turbo-16k", base_model_name="gpt-35-turbo-16k", model_type=ModelType.LLM),
RestrictModel(model="text-davinci-003", base_model_name="text-davinci-003", model_type=ModelType.LLM),
RestrictModel(model="text-embedding-ada-002", base_model_name="text-embedding-ada-002", model_type=ModelType.TEXT_EMBEDDING),
]
)
quotas.append(trial_quota)
hosted_quota_limit = int(app_config.get("HOSTED_AZURE_OPENAI_QUOTA_LIMIT", "1000"))
trial_quota = TrialHostingQuota(
quota_limit=hosted_quota_limit,
restrict_models=[
RestrictModel(model="gpt-4", base_model_name="gpt-4", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-32k", base_model_name="gpt-4-32k", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-1106-preview", base_model_name="gpt-4-1106-preview", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-vision-preview", base_model_name="gpt-4-vision-preview", model_type=ModelType.LLM),
RestrictModel(model="gpt-35-turbo", base_model_name="gpt-35-turbo", model_type=ModelType.LLM),
RestrictModel(model="gpt-35-turbo-1106", base_model_name="gpt-35-turbo-1106", model_type=ModelType.LLM),
RestrictModel(model="gpt-35-turbo-instruct", base_model_name="gpt-35-turbo-instruct", model_type=ModelType.LLM),
RestrictModel(model="gpt-35-turbo-16k", base_model_name="gpt-35-turbo-16k", model_type=ModelType.LLM),
RestrictModel(model="text-davinci-003", base_model_name="text-davinci-003", model_type=ModelType.LLM),
RestrictModel(model="text-embedding-ada-002", base_model_name="text-embedding-ada-002", model_type=ModelType.TEXT_EMBEDDING),
]
)
quotas.append(trial_quota)
return HostingProvider(
enabled=True,
@@ -101,43 +101,44 @@ class HostingConfiguration:
quota_unit=quota_unit,
)
def init_openai(self) -> HostingProvider:
def init_openai(self, app_config: Config) -> HostingProvider:
quota_unit = QuotaUnit.TIMES
if os.environ.get("HOSTED_OPENAI_ENABLED") and os.environ.get("HOSTED_OPENAI_ENABLED").lower() == 'true':
quotas = []
if app_config.get("HOSTED_OPENAI_TRIAL_ENABLED"):
hosted_quota_limit = int(app_config.get("HOSTED_OPENAI_QUOTA_LIMIT", "200"))
trial_quota = TrialHostingQuota(
quota_limit=hosted_quota_limit,
restrict_models=[
RestrictModel(model="gpt-3.5-turbo", model_type=ModelType.LLM),
RestrictModel(model="gpt-3.5-turbo-1106", model_type=ModelType.LLM),
RestrictModel(model="gpt-3.5-turbo-instruct", model_type=ModelType.LLM),
RestrictModel(model="gpt-3.5-turbo-16k", model_type=ModelType.LLM),
RestrictModel(model="text-davinci-003", model_type=ModelType.LLM),
RestrictModel(model="whisper-1", model_type=ModelType.SPEECH2TEXT),
]
)
quotas.append(trial_quota)
if app_config.get("HOSTED_OPENAI_PAID_ENABLED"):
paid_quota = PaidHostingQuota(
stripe_price_id=app_config.get("HOSTED_OPENAI_PAID_STRIPE_PRICE_ID"),
increase_quota=int(app_config.get("HOSTED_OPENAI_PAID_INCREASE_QUOTA", "1")),
min_quantity=int(app_config.get("HOSTED_OPENAI_PAID_MIN_QUANTITY", "1")),
max_quantity=int(app_config.get("HOSTED_OPENAI_PAID_MAX_QUANTITY", "1"))
)
quotas.append(paid_quota)
if len(quotas) > 0:
credentials = {
"openai_api_key": os.environ.get("HOSTED_OPENAI_API_KEY"),
"openai_api_key": app_config.get("HOSTED_OPENAI_API_KEY"),
}
if os.environ.get("HOSTED_OPENAI_API_BASE"):
credentials["openai_api_base"] = os.environ.get("HOSTED_OPENAI_API_BASE")
if app_config.get("HOSTED_OPENAI_API_BASE"):
credentials["openai_api_base"] = app_config.get("HOSTED_OPENAI_API_BASE")
if os.environ.get("HOSTED_OPENAI_API_ORGANIZATION"):
credentials["openai_organization"] = os.environ.get("HOSTED_OPENAI_API_ORGANIZATION")
quotas = []
hosted_quota_limit = int(os.environ.get("HOSTED_OPENAI_QUOTA_LIMIT", "200"))
if hosted_quota_limit != -1 or hosted_quota_limit > 0:
trial_quota = TrialHostingQuota(
quota_limit=hosted_quota_limit,
restrict_models=[
RestrictModel(model="gpt-3.5-turbo", model_type=ModelType.LLM),
RestrictModel(model="gpt-3.5-turbo-1106", model_type=ModelType.LLM),
RestrictModel(model="gpt-3.5-turbo-instruct", model_type=ModelType.LLM),
RestrictModel(model="gpt-3.5-turbo-16k", model_type=ModelType.LLM),
RestrictModel(model="text-davinci-003", model_type=ModelType.LLM),
]
)
quotas.append(trial_quota)
if os.environ.get("HOSTED_OPENAI_PAID_ENABLED") and os.environ.get(
"HOSTED_OPENAI_PAID_ENABLED").lower() == 'true':
paid_quota = PaidHostingQuota(
stripe_price_id=os.environ.get("HOSTED_OPENAI_PAID_STRIPE_PRICE_ID"),
increase_quota=int(os.environ.get("HOSTED_OPENAI_PAID_INCREASE_QUOTA", "1")),
min_quantity=int(os.environ.get("HOSTED_OPENAI_PAID_MIN_QUANTITY", "1")),
max_quantity=int(os.environ.get("HOSTED_OPENAI_PAID_MAX_QUANTITY", "1"))
)
quotas.append(paid_quota)
if app_config.get("HOSTED_OPENAI_API_ORGANIZATION"):
credentials["openai_organization"] = app_config.get("HOSTED_OPENAI_API_ORGANIZATION")
return HostingProvider(
enabled=True,
@@ -151,33 +152,33 @@ class HostingConfiguration:
quota_unit=quota_unit,
)
def init_anthropic(self) -> HostingProvider:
def init_anthropic(self, app_config: Config) -> HostingProvider:
quota_unit = QuotaUnit.TOKENS
if os.environ.get("HOSTED_ANTHROPIC_ENABLED") and os.environ.get("HOSTED_ANTHROPIC_ENABLED").lower() == 'true':
quotas = []
if app_config.get("HOSTED_ANTHROPIC_TRIAL_ENABLED"):
hosted_quota_limit = int(app_config.get("HOSTED_ANTHROPIC_QUOTA_LIMIT", "0"))
trial_quota = TrialHostingQuota(
quota_limit=hosted_quota_limit
)
quotas.append(trial_quota)
if app_config.get("HOSTED_ANTHROPIC_PAID_ENABLED"):
paid_quota = PaidHostingQuota(
stripe_price_id=app_config.get("HOSTED_ANTHROPIC_PAID_STRIPE_PRICE_ID"),
increase_quota=int(app_config.get("HOSTED_ANTHROPIC_PAID_INCREASE_QUOTA", "1000000")),
min_quantity=int(app_config.get("HOSTED_ANTHROPIC_PAID_MIN_QUANTITY", "20")),
max_quantity=int(app_config.get("HOSTED_ANTHROPIC_PAID_MAX_QUANTITY", "100"))
)
quotas.append(paid_quota)
if len(quotas) > 0:
credentials = {
"anthropic_api_key": os.environ.get("HOSTED_ANTHROPIC_API_KEY"),
"anthropic_api_key": app_config.get("HOSTED_ANTHROPIC_API_KEY"),
}
if os.environ.get("HOSTED_ANTHROPIC_API_BASE"):
credentials["anthropic_api_url"] = os.environ.get("HOSTED_ANTHROPIC_API_BASE")
quotas = []
hosted_quota_limit = int(os.environ.get("HOSTED_ANTHROPIC_QUOTA_LIMIT", "0"))
if hosted_quota_limit != -1 or hosted_quota_limit > 0:
trial_quota = TrialHostingQuota(
quota_limit=hosted_quota_limit
)
quotas.append(trial_quota)
if os.environ.get("HOSTED_ANTHROPIC_PAID_ENABLED") and os.environ.get(
"HOSTED_ANTHROPIC_PAID_ENABLED").lower() == 'true':
paid_quota = PaidHostingQuota(
stripe_price_id=os.environ.get("HOSTED_ANTHROPIC_PAID_STRIPE_PRICE_ID"),
increase_quota=int(os.environ.get("HOSTED_ANTHROPIC_PAID_INCREASE_QUOTA", "1000000")),
min_quantity=int(os.environ.get("HOSTED_ANTHROPIC_PAID_MIN_QUANTITY", "20")),
max_quantity=int(os.environ.get("HOSTED_ANTHROPIC_PAID_MAX_QUANTITY", "100"))
)
quotas.append(paid_quota)
if app_config.get("HOSTED_ANTHROPIC_API_BASE"):
credentials["anthropic_api_url"] = app_config.get("HOSTED_ANTHROPIC_API_BASE")
return HostingProvider(
enabled=True,
@@ -191,9 +192,9 @@ class HostingConfiguration:
quota_unit=quota_unit,
)
def init_minimax(self) -> HostingProvider:
def init_minimax(self, app_config: Config) -> HostingProvider:
quota_unit = QuotaUnit.TOKENS
if os.environ.get("HOSTED_MINIMAX_ENABLED") and os.environ.get("HOSTED_MINIMAX_ENABLED").lower() == 'true':
if app_config.get("HOSTED_MINIMAX_ENABLED"):
quotas = [FreeHostingQuota()]
return HostingProvider(
@@ -208,9 +209,9 @@ class HostingConfiguration:
quota_unit=quota_unit,
)
def init_spark(self) -> HostingProvider:
def init_spark(self, app_config: Config) -> HostingProvider:
quota_unit = QuotaUnit.TOKENS
if os.environ.get("HOSTED_SPARK_ENABLED") and os.environ.get("HOSTED_SPARK_ENABLED").lower() == 'true':
if app_config.get("HOSTED_SPARK_ENABLED"):
quotas = [FreeHostingQuota()]
return HostingProvider(
@@ -225,9 +226,9 @@ class HostingConfiguration:
quota_unit=quota_unit,
)
def init_zhipuai(self) -> HostingProvider:
def init_zhipuai(self, app_config: Config) -> HostingProvider:
quota_unit = QuotaUnit.TOKENS
if os.environ.get("HOSTED_ZHIPUAI_ENABLED") and os.environ.get("HOSTED_ZHIPUAI_ENABLED").lower() == 'true':
if app_config.get("HOSTED_ZHIPUAI_ENABLED"):
quotas = [FreeHostingQuota()]
return HostingProvider(
@@ -242,12 +243,12 @@ class HostingConfiguration:
quota_unit=quota_unit,
)
def init_moderation_config(self) -> HostedModerationConfig:
if os.environ.get("HOSTED_MODERATION_ENABLED") and os.environ.get("HOSTED_MODERATION_ENABLED").lower() == 'true' \
and os.environ.get("HOSTED_MODERATION_PROVIDERS"):
def init_moderation_config(self, app_config: Config) -> HostedModerationConfig:
if app_config.get("HOSTED_MODERATION_ENABLED") \
and app_config.get("HOSTED_MODERATION_PROVIDERS"):
return HostedModerationConfig(
enabled=True,
providers=os.environ.get("HOSTED_MODERATION_PROVIDERS").split(',')
providers=app_config.get("HOSTED_MODERATION_PROVIDERS").split(',')
)
return HostedModerationConfig(
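Note the pattern running through this refactor: every `os.environ.get("X") and os.environ.get("X").lower() == 'true'` check collapses to `app_config.get("X")`. That is only equivalent if the Flask config already holds parsed booleans, since raw environment values are always strings and the string `'false'` is truthy. A sketch of the distinction, with `parse_bool` as a hypothetical helper not taken from the diff:

```python
import os

os.environ["HOSTED_MINIMAX_ENABLED"] = "false"
assert os.environ.get("HOSTED_MINIMAX_ENABLED")  # non-empty string: truthy!

def parse_bool(raw, default=False):
    # hypothetical helper: how a config layer could coerce env strings
    return raw.lower() == "true" if raw is not None else default

config = {"HOSTED_MINIMAX_ENABLED": parse_bool(os.environ.get("HOSTED_MINIMAX_ENABLED"))}
assert not config.get("HOSTED_MINIMAX_ENABLED")  # real False after parsing
```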


@@ -274,6 +274,8 @@ class IndexingRunner:
tokens = 0
preview_texts = []
total_segments = 0
total_price = 0
currency = 'USD'
for file_detail in file_details:
processing_rule = DatasetProcessRule(
@@ -344,11 +346,13 @@ class IndexingRunner:
price_type=PriceType.INPUT,
tokens=tokens
)
total_price = '{:f}'.format(embedding_price_info.total_amount)
currency = embedding_price_info.currency
return {
"total_segments": total_segments,
"tokens": tokens,
"total_price": '{:f}'.format(embedding_price_info.total_amount) if embedding_model_instance else 0,
"currency": embedding_price_info.currency if embedding_model_instance else 'USD',
"total_price": total_price,
"currency": currency,
"preview": preview_texts
}
@@ -388,6 +392,8 @@ class IndexingRunner:
tokens = 0
preview_texts = []
total_segments = 0
total_price = 0
currency = 'USD'
for notion_info in notion_info_list:
workspace_id = notion_info['workspace_id']
data_source_binding = DataSourceBinding.query.filter(
@@ -470,20 +476,22 @@ class IndexingRunner:
"qa_preview": document_qa_list,
"preview": preview_texts
}
embedding_model_type_instance = embedding_model_instance.model_type_instance
embedding_model_type_instance = cast(TextEmbeddingModel, embedding_model_type_instance)
embedding_price_info = embedding_model_type_instance.get_price(
model=embedding_model_instance.model,
credentials=embedding_model_instance.credentials,
price_type=PriceType.INPUT,
tokens=tokens
)
if embedding_model_instance:
embedding_model_type_instance = embedding_model_instance.model_type_instance
embedding_model_type_instance = cast(TextEmbeddingModel, embedding_model_type_instance)
embedding_price_info = embedding_model_type_instance.get_price(
model=embedding_model_instance.model,
credentials=embedding_model_instance.credentials,
price_type=PriceType.INPUT,
tokens=tokens
)
total_price = '{:f}'.format(embedding_price_info.total_amount)
currency = embedding_price_info.currency
return {
"total_segments": total_segments,
"tokens": tokens,
"total_price": '{:f}'.format(embedding_price_info.total_amount) if embedding_model_instance else 0,
"currency": embedding_price_info.currency if embedding_model_instance else 'USD',
"total_price": total_price,
"currency": currency,
"preview": preview_texts
}
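Both estimate paths now initialize `total_price = 0` and `currency = 'USD'` up front and overwrite them only when an embedding model instance exists, rather than branching inside the returned dict. A compressed sketch of that shape; `get_price` here is a hypothetical one-argument simplification of the real call:

```python
def estimate(tokens: int, embedding_model_instance=None) -> dict:
    total_price, currency = 0, "USD"      # defaults for economy indexing
    if embedding_model_instance:          # priced only when a model is set
        price_info = embedding_model_instance.get_price(tokens)  # hypothetical
        total_price = "{:f}".format(price_info.total_amount)
        currency = price_info.currency
    return {"tokens": tokens, "total_price": total_price, "currency": currency}
```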


@@ -1,3 +1,4 @@
import copy
import logging
from typing import Generator, List, Optional, Union, cast
@@ -625,9 +626,10 @@ class AzureOpenAILargeLanguageModel(_CommonAzureOpenAI, LargeLanguageModel):
def _get_ai_model_entity(base_model_name: str, model: str) -> AzureBaseModel:
for ai_model_entity in LLM_BASE_MODELS:
if ai_model_entity.base_model_name == base_model_name:
ai_model_entity.entity.model = model
ai_model_entity.entity.label.en_US = model
ai_model_entity.entity.label.zh_Hans = model
return ai_model_entity
ai_model_entity_copy = copy.deepcopy(ai_model_entity)
ai_model_entity_copy.entity.model = model
ai_model_entity_copy.entity.label.en_US = model
ai_model_entity_copy.entity.label.zh_Hans = model
return ai_model_entity_copy
return None


@@ -1,4 +1,5 @@
import base64
import copy
import time
from typing import Optional, Tuple
@@ -186,9 +187,10 @@ class AzureOpenAITextEmbeddingModel(_CommonAzureOpenAI, TextEmbeddingModel):
def _get_ai_model_entity(base_model_name: str, model: str) -> AzureBaseModel:
for ai_model_entity in EMBEDDING_BASE_MODELS:
if ai_model_entity.base_model_name == base_model_name:
ai_model_entity.entity.model = model
ai_model_entity.entity.label.en_US = model
ai_model_entity.entity.label.zh_Hans = model
return ai_model_entity
ai_model_entity_copy = copy.deepcopy(ai_model_entity)
ai_model_entity_copy.entity.model = model
ai_model_entity_copy.entity.label.en_US = model
ai_model_entity_copy.entity.label.zh_Hans = model
return ai_model_entity_copy
return None
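The `copy.deepcopy` in both `_get_ai_model_entity` helpers above is the substance of the "azure customize model name duplicate" fix (#2073): the old code mutated the shared entity cached in the module-level `LLM_BASE_MODELS` / `EMBEDDING_BASE_MODELS` lists, so every customized deployment name overwrote the single cached object. A reduced illustration, with `Entity` standing in for `AzureBaseModel`:

```python
import copy
from dataclasses import dataclass

@dataclass
class Entity:
    model: str

BASE_MODELS = [Entity(model="gpt-35-turbo")]

def get_entity_buggy(name):
    entity = BASE_MODELS[0]
    entity.model = name                      # mutates the shared cached object
    return entity

def get_entity_fixed(name):
    entity = copy.deepcopy(BASE_MODELS[0])   # per-call copy, cache untouched
    entity.model = name
    return entity

a, b = get_entity_buggy("deploy-a"), get_entity_buggy("deploy-b")
assert a is b and a.model == "deploy-b"      # both callers now see "deploy-b"
assert get_entity_fixed("deploy-c") is not BASE_MODELS[0]
```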


@@ -0,0 +1,9 @@
model: jina-embeddings-v2-base-de
model_type: text-embedding
model_properties:
context_size: 8192
max_chunks: 2048
pricing:
input: '0.001'
unit: '0.001'
currency: USD


@@ -0,0 +1,9 @@
model: jina-embeddings-v2-base-zh
model_type: text-embedding
model_properties:
context_size: 8192
max_chunks: 2048
pricing:
input: '0.001'
unit: '0.001'
currency: USD


@@ -17,7 +17,7 @@ class JinaTextEmbeddingModel(TextEmbeddingModel):
Model class for Jina text embedding model.
"""
api_base: str = 'https://api.jina.ai/v1/embeddings'
models: list[str] = ['jina-embeddings-v2-base-en', 'jina-embeddings-v2-small-en']
models: list[str] = ['jina-embeddings-v2-base-en', 'jina-embeddings-v2-small-en', 'jina-embeddings-v2-base-zh', 'jina-embeddings-v2-base-de']
def _invoke(self, model: str, credentials: dict,
texts: list[str], user: Optional[str] = None) \


@@ -10,8 +10,14 @@ model_properties:
parameter_rules:
- name: temperature
use_template: temperature
min: 0.01
max: 1
default: 0.9
- name: top_p
use_template: top_p
min: 0.01
max: 1
default: 0.95
- name: max_tokens
use_template: max_tokens
required: true


@@ -0,0 +1,35 @@
model: abab5.5s-chat
label:
en_US: Abab5.5s-Chat
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
parameter_rules:
- name: temperature
use_template: temperature
min: 0.01
max: 1
default: 0.9
- name: top_p
use_template: top_p
min: 0.01
max: 1
default: 0.95
- name: max_tokens
use_template: max_tokens
required: true
default: 3072
min: 1
max: 8192
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
pricing:
input: '0.00'
output: '0.005'
unit: '0.001'
currency: RMB


@@ -22,7 +22,7 @@ class MinimaxChatCompletionPro(object):
"""
generate chat completion
"""
if model != 'abab5.5-chat':
if model not in ['abab5.5-chat', 'abab5.5s-chat']:
raise BadRequestError(f'Invalid model: {model}')
if not api_key or not group_id:


@@ -18,6 +18,7 @@ from core.model_runtime.model_providers.minimax.llm.types import MinimaxMessage
class MinimaxLargeLanguageModel(LargeLanguageModel):
model_apis = {
'abab5.5s-chat': MinimaxChatCompletionPro,
'abab5.5-chat': MinimaxChatCompletionPro,
'abab5-chat': MinimaxChatCompletion
}


@@ -151,6 +151,8 @@ def jsonable_encoder(
return str(obj)
if isinstance(obj, (str, int, float, type(None))):
return obj
if isinstance(obj, Decimal):
return format(obj, 'f')
if isinstance(obj, dict):
encoded_dict = {}
allowed_keys = set(obj.keys())
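The new `Decimal` branch matters because `str()` on a small `Decimal` falls back to scientific notation, which would then leak into JSON responses; `format(obj, 'f')` forces fixed-point output:

```python
from decimal import Decimal

price = Decimal("0.0000004")
print(str(price))          # 4E-7       -- what json-bound code would have emitted
print(format(price, "f"))  # 0.0000004  -- what the encoder now emits
```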


@@ -597,18 +597,28 @@ class ProviderManager:
quota_configurations = []
for provider_quota in provider_hosting_configuration.quotas:
if provider_quota.quota_type not in quota_type_to_provider_records_dict:
continue
if provider_quota.quota_type == ProviderQuotaType.FREE:
quota_configuration = QuotaConfiguration(
quota_type=provider_quota.quota_type,
quota_unit=provider_hosting_configuration.quota_unit,
quota_used=0,
quota_limit=0,
is_valid=False,
restrict_models=provider_quota.restrict_models
)
else:
continue
else:
provider_record = quota_type_to_provider_records_dict[provider_quota.quota_type]
provider_record = quota_type_to_provider_records_dict[provider_quota.quota_type]
quota_configuration = QuotaConfiguration(
quota_type=provider_quota.quota_type,
quota_unit=provider_hosting_configuration.quota_unit,
quota_used=provider_record.quota_used,
quota_limit=provider_record.quota_limit,
is_valid=provider_record.quota_limit > provider_record.quota_used or provider_record.quota_limit == -1,
restrict_models=provider_quota.restrict_models
)
quota_configuration = QuotaConfiguration(
quota_type=provider_quota.quota_type,
quota_unit=provider_hosting_configuration.quota_unit,
quota_used=provider_record.quota_used,
quota_limit=provider_record.quota_limit,
is_valid=provider_record.quota_limit > provider_record.quota_used or provider_record.quota_limit == -1,
restrict_models=provider_quota.restrict_models
)
quota_configurations.append(quota_configuration)
@@ -670,6 +680,7 @@ class ProviderManager:
current_using_credentials = cached_provider_credentials
else:
current_using_credentials = {}
quota_configurations = []
return SystemConfiguration(
enabled=True,


@@ -1450,7 +1450,7 @@ class Qdrant(VectorStore):
wal_config=wal_config,
quantization_config=quantization_config,
init_from=init_from,
timeout=timeout, # type: ignore[arg-type]
timeout=int(timeout), # type: ignore[arg-type]
)
is_new_collection = True
if force_recreate:


@@ -23,12 +23,16 @@ def handle(sender, **kwargs):
for quota_configuration in system_configuration.quota_configurations:
if quota_configuration.quota_type == system_configuration.current_quota_type:
quota_unit = quota_configuration.quota_unit
if quota_configuration.quota_limit == -1:
return
break
used_quota = None
if quota_unit:
if quota_unit == QuotaUnit.TOKENS.value:
used_quota = message.message_tokens + message.prompt_tokens
if quota_unit == QuotaUnit.TOKENS:
used_quota = message.message_tokens + message.answer_tokens
else:
used_quota = 1
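Two fixes hide in this small hunk: unlimited quotas (`quota_limit == -1`) now return early, and token usage is computed as `message_tokens + answer_tokens`. The third is subtler: the old code compared the `QuotaUnit` member against `QuotaUnit.TOKENS.value` (a raw string), so the token branch could never match. The enum pitfall in isolation, assuming `QuotaUnit` is a plain `Enum` as the comparison suggests:

```python
from enum import Enum

class QuotaUnit(Enum):
    TOKENS = "tokens"
    TIMES = "times"

quota_unit = QuotaUnit.TOKENS
assert quota_unit != QuotaUnit.TOKENS.value  # member vs raw string: never equal
assert quota_unit == QuotaUnit.TOKENS        # the corrected comparison
```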


@@ -31,6 +31,10 @@ class ExternalApi(Api):
'message': getattr(e, 'description', http_status_message(status_code)),
'status': status_code
}
if default_data['message'] and default_data['message'] == 'Failed to decode JSON object: Expecting value: line 1 column 1 (char 0)':
default_data['message'] = 'Invalid JSON payload received or JSON payload is empty.'
headers = e.get_response().headers
elif isinstance(e, ValueError):
status_code = 400


@@ -17,7 +17,7 @@ class Dataset(db.Model):
db.Index('retrieval_model_idx', "retrieval_model", postgresql_using='gin')
)
INDEXING_TECHNIQUE_LIST = ['high_quality', 'economy']
INDEXING_TECHNIQUE_LIST = ['high_quality', 'economy', None]
id = db.Column(UUID, server_default=db.text('uuid_generate_v4()'))
tenant_id = db.Column(UUID, nullable=False)


@@ -27,10 +27,15 @@ class CompletionService:
auto_generate_name = args['auto_generate_name'] \
if 'auto_generate_name' in args else True
if app_model.mode != 'completion' and not query:
raise ValueError('query is required')
if app_model.mode != 'completion':
if not query:
raise ValueError('query is required')
query = query.replace('\x00', '')
if query:
if not isinstance(query, str):
raise ValueError('query must be a string')
query = query.replace('\x00', '')
conversation_id = args['conversation_id'] if 'conversation_id' in args else None
@@ -230,6 +235,10 @@ class CompletionService:
value = user_inputs[variable]
if value:
if not isinstance(value, str):
raise ValueError(f"{variable} in input form must be a string")
if input_type == "select":
options = input_config["options"] if "options" in input_config else []
if value not in options:
@@ -243,4 +252,3 @@ class CompletionService:
filtered_inputs[variable] = value.replace('\x00', '') if value else None
return filtered_inputs
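The tightened validation keeps the query optional for non-completion checks as before, but any provided query or form value must now be a string, and NUL bytes are stripped before storage (presumably because PostgreSQL text columns reject `\x00`). A condensed mirror of that logic; `sanitize_query` is a hypothetical name:

```python
def sanitize_query(query, required: bool):
    # hypothetical condensation of the CompletionService checks above
    if required and not query:
        raise ValueError("query is required")
    if query:
        if not isinstance(query, str):
            raise ValueError("query must be a string")
        query = query.replace("\x00", "")  # PostgreSQL rejects NUL bytes
    return query
```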


@@ -2,7 +2,7 @@ version: '3.1'
services:
# API service
api:
image: langgenius/dify-api:0.4.7
image: langgenius/dify-api:0.4.9
restart: always
environment:
# Startup mode, 'api' starts the API server.
@@ -23,10 +23,6 @@ services:
# different from console domain.
# example: http://api.dify.ai
SERVICE_API_URL: ''
# The URL prefix for Web APP api server, refers to the Web App base URL of WEB service if web app domain is different from
# console or api domain.
# example: http://udify.app
APP_API_URL: ''
# The URL prefix for Web APP frontend, refers to the Web App base URL of WEB service if web app domain is different from
# console or api domain.
# example: http://udify.app
@@ -131,7 +127,7 @@ services:
# worker service
# The Celery worker for processing the queue.
worker:
image: langgenius/dify-api:0.4.7
image: langgenius/dify-api:0.4.9
restart: always
environment:
# Startup mode, 'worker' starts the Celery worker for processing the queue.
@@ -202,7 +198,7 @@ services:
# Frontend web application.
web:
image: langgenius/dify-web:0.4.7
image: langgenius/dify-web:0.4.9
restart: always
environment:
EDITION: SELF_HOSTED
@@ -287,7 +283,7 @@ services:
# (if uncommented, you need to comment out the weaviate service above,
# and set VECTOR_STORE to qdrant in the api & worker service.)
# qdrant:
# image: langgenius/qdrant:1.7.3
# image: langgenius/qdrant:v1.7.3
# restart: always
# volumes:
# - ./volumes/qdrant:/qdrant/storage


@@ -1,16 +1,19 @@
# base image
FROM node:18.17.0-alpine AS base
FROM node:20.11.0-alpine AS base
LABEL maintainer="takatost@gmail.com"
RUN apk add --no-cache tzdata
# install packages
FROM base as packages
LABEL maintainer="takatost@gmail.com"
WORKDIR /app/web
COPY package.json .
COPY yarn.lock .
RUN yarn --only=prod
RUN yarn install --frozen-lockfile
# build resources
@@ -32,6 +35,10 @@ ENV CONSOLE_API_URL http://127.0.0.1:5001
ENV APP_API_URL http://127.0.0.1:5001
ENV PORT 3000
# set timezone
ENV TZ UTC
RUN ln -s /usr/share/zoneinfo/${TZ} /etc/localtime \
&& echo ${TZ} > /etc/timezone
WORKDIR /app/web
COPY --from=builder /app/web/public ./public
@@ -40,10 +47,9 @@ COPY --from=builder /app/web/.next/static ./.next/static
COPY docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ARG COMMIT_SHA
ENV COMMIT_SHA ${COMMIT_SHA}
EXPOSE 3000
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]


@@ -55,8 +55,8 @@ const LikedItem = ({
isMobile,
}: ILikedItemProps) => {
return (
<Link className={classNames(s.itemWrapper, 'px-0 sm:px-3 justify-center sm:justify-start')} href={`/app/${detail?.id}/overview`}>
<div className={classNames(s.iconWrapper, 'mr-0 sm:mr-2')}>
<Link className={classNames(s.itemWrapper, 'px-0', isMobile && 'justify-center')} href={`/app/${detail?.id}/overview`}>
<div className={classNames(s.iconWrapper, 'mr-0')}>
<AppIcon size='tiny' icon={detail?.icon} background={detail?.icon_background}/>
{type === 'app' && (
<div className={s.statusPoint}>
@@ -64,7 +64,7 @@
</div>
)}
</div>
{!isMobile && <div className={s.appInfo}>{detail?.name || '--'}</div>}
{!isMobile && <div className={classNames(s.appInfo, 'ml-2')}>{detail?.name || '--'}</div>}
</Link>
)
}
@@ -197,14 +197,14 @@ const DatasetDetailLayout: FC<IAppDetailLayoutProps> = (props) => {
return <Loading />
return (
<div className='flex overflow-hidden'>
<div className='grow flex overflow-hidden'>
{!hideSideBar && <AppSideBar
title={datasetRes?.name || '--'}
icon={datasetRes?.icon || 'https://static.dify.ai/images/dataset-default-icon.png'}
icon_background={datasetRes?.icon_background || '#F5F5F5'}
desc={datasetRes?.description || '--'}
navigation={navigation}
extraInfo={<ExtraInfo isMobile={isMobile} relatedApps={relatedApps} />}
extraInfo={mode => <ExtraInfo isMobile={mode === 'collapse'} relatedApps={relatedApps} />}
iconType={datasetRes?.data_source_type === DataSourceType.NOTION ? 'notion' : 'dataset'}
/>}
<DatasetDetailContext.Provider value={{


@@ -20,7 +20,7 @@ export type IAppDetailNavProps = {
icon: NavIcon
selectedIcon: NavIcon
}>
extraInfo?: React.ReactNode
extraInfo?: (modeState: string) => React.ReactNode
}
const AppDetailNav = ({ title, desc, icon, icon_background, navigation, extraInfo, iconType = 'app' }: IAppDetailNavProps) => {
@@ -72,7 +72,7 @@ const AppDetailNav = ({ title, desc, icon, icon_background, navigation, extraInf
<NavLink key={index} mode={modeState} iconMap={{ selected: item.selectedIcon, normal: item.icon }} name={item.name} href={item.href} />
)
})}
{extraInfo ?? null}
{extraInfo && extraInfo(modeState)}
</nav>
{
!isMobile && (


@@ -107,7 +107,6 @@ const PlanItem: FC<Props> = ({
<div>{t('billing.plansCommon.supportItems.emailSupport')}</div>
<div className='mt-3.5 flex items-center space-x-1'>
<div>+ {t('billing.plansCommon.supportItems.logoChange')}</div>
<div>{comingSoon}</div>
</div>
<div className='mt-3.5 flex items-center space-x-1'>
<div className='flex items-center'>
@@ -135,7 +134,6 @@
<div>{t('billing.plansCommon.supportItems.priorityEmail')}</div>
<div className='mt-3.5 flex items-center space-x-1'>
<div>+ {t('billing.plansCommon.supportItems.logoChange')}</div>
<div>{comingSoon}</div>
</div>
<div className='mt-3.5 flex items-center space-x-1'>
<div>+ {t('billing.plansCommon.supportItems.SSOAuthentication')}</div>


@@ -38,7 +38,13 @@ const ModelProviderPage = () => {
const notConfigedProviders: ModelProvider[] = []
providers.forEach((provider) => {
if (provider.custom_configuration.status === CustomConfigurationStatusEnum.active || provider.system_configuration.enabled === true)
if (
provider.custom_configuration.status === CustomConfigurationStatusEnum.active
|| (
provider.system_configuration.enabled === true
&& provider.system_configuration.quota_configurations.find(item => item.quota_type === provider.system_configuration.current_quota_type)
)
)
configedProviders.push(provider)
else
notConfigedProviders.push(provider)


@@ -9,6 +9,8 @@ import type {
import { ConfigurateMethodEnum } from '../declarations'
import {
DEFAULT_BACKGROUND_COLOR,
MODEL_PROVIDER_QUOTA_GET_FREE,
MODEL_PROVIDER_QUOTA_GET_PAID,
modelTypeFormat,
} from '../utils'
import ProviderIcon from '../provider-icon'
@@ -41,7 +43,7 @@ const ProviderAddedCard: FC<ProviderAddedCardProps> = ({
const configurateMethods = provider.configurate_methods.filter(method => method !== ConfigurateMethodEnum.fetchFromRemote)
const systemConfig = provider.system_configuration
const hasModelList = fetched && !!modelList.length
const showQuota = systemConfig.enabled && ['minimax', 'spark', 'zhipuai', 'anthropic', 'openai'].includes(provider.provider) && !IS_CE_EDITION
const showQuota = systemConfig.enabled && [...MODEL_PROVIDER_QUOTA_GET_FREE, ...MODEL_PROVIDER_QUOTA_GET_PAID].includes(provider.provider) && !IS_CE_EDITION
const getModelList = async (providerName: string) => {
if (loading)


@@ -11,6 +11,10 @@ import {
useFreeQuota,
useUpdateModelProviders,
} from '../hooks'
import {
MODEL_PROVIDER_QUOTA_GET_FREE,
MODEL_PROVIDER_QUOTA_GET_PAID,
} from '../utils'
import PriorityUseTip from './priority-use-tip'
import { InfoCircle } from '@/app/components/base/icons/src/vender/line/general'
import Button from '@/app/components/base/button'
@@ -34,7 +38,7 @@ const QuotaPanel: FC<QuotaPanelProps> = ({
const priorityUseType = provider.preferred_provider_type
const systemConfig = provider.system_configuration
const currentQuota = systemConfig.enabled && systemConfig.quota_configurations.find(item => item.quota_type === systemConfig.current_quota_type)
const openaiOrAnthropic = ['openai', 'anthropic'].includes(provider.provider)
const openaiOrAnthropic = MODEL_PROVIDER_QUOTA_GET_PAID.includes(provider.provider)
return (
<div className='group relative shrink-0 min-w-[112px] px-3 py-2 rounded-lg bg-white/[0.3] border-[0.5px] border-black/5'>
@@ -72,7 +76,7 @@ const QuotaPanel: FC<QuotaPanelProps> = ({
)
}
{
!currentQuota && ['minimax', 'spark', 'zhipuai'].includes(provider.provider) && (
!currentQuota && MODEL_PROVIDER_QUOTA_GET_FREE.includes(provider.provider) && (
<Button
className='h-6 bg-white text-xs font-medium rounded-md'
onClick={() => handleFreeQuota(provider.provider)}


@@ -7,6 +7,7 @@ import type {
import { ConfigurateMethodEnum } from '../declarations'
import {
DEFAULT_BACKGROUND_COLOR,
MODEL_PROVIDER_QUOTA_GET_FREE,
modelTypeFormat,
} from '../utils'
import {
@@ -55,7 +56,7 @@ const ProviderCard: FC<ProviderCardProps> = ({
}
const handleFreeQuota = useFreeQuota(handleFreeQuotaSuccess)
const configurateMethods = provider.configurate_methods.filter(method => method !== ConfigurateMethodEnum.fetchFromRemote)
const canGetFreeQuota = ['mininmax', 'spark', 'zhipuai'].includes(provider.provider) && !IS_CE_EDITION
const canGetFreeQuota = MODEL_PROVIDER_QUOTA_GET_FREE.includes(provider.provider) && !IS_CE_EDITION && provider.system_configuration.enabled
return (
<div


@@ -23,6 +23,9 @@ export const languageMaps = {
'zh-Hans': 'zh_Hans'
}
export const MODEL_PROVIDER_QUOTA_GET_FREE = ['minimax', 'spark', 'zhipuai']
export const MODEL_PROVIDER_QUOTA_GET_PAID = ['anthropic', 'openai']
export const DEFAULT_BACKGROUND_COLOR = '#F3F4F6'
export const isNullOrUndefined = (value: any) => {


@@ -4,18 +4,8 @@ set -e
export NEXT_PUBLIC_DEPLOY_ENV=${DEPLOY_ENV}
export NEXT_PUBLIC_EDITION=${EDITION}
if [[ -z "$CONSOLE_URL" ]]; then
export NEXT_PUBLIC_API_PREFIX=${CONSOLE_API_URL}/console/api
else
export NEXT_PUBLIC_API_PREFIX=${CONSOLE_URL}/console/api
fi
if [[ -z "$APP_URL" ]]; then
export NEXT_PUBLIC_PUBLIC_API_PREFIX=${APP_API_URL}/api
else
export NEXT_PUBLIC_PUBLIC_API_PREFIX=${APP_URL}/api
fi
export NEXT_PUBLIC_API_PREFIX=${CONSOLE_API_URL}/console/api
export NEXT_PUBLIC_PUBLIC_API_PREFIX=${APP_API_URL}/api
export NEXT_PUBLIC_SENTRY_DSN=${SENTRY_DSN}
export NEXT_PUBLIC_SITE_ABOUT=${SITE_ABOUT}


@@ -1,6 +1,6 @@
{
"name": "dify-web",
"version": "0.4.7",
"version": "0.4.9",
"private": true,
"scripts": {
"dev": "next dev",
@@ -74,6 +74,7 @@
"remark-math": "^5.1.1",
"scheduler": "^0.23.0",
"server-only": "^0.0.1",
"sharp": "^0.33.2",
"sortablejs": "^1.15.0",
"swr": "^2.1.0",
"use-context-selector": "^1.4.1"


@@ -112,6 +112,13 @@
resolved "https://registry.npmjs.org/@braintree/sanitize-url/-/sanitize-url-6.0.4.tgz"
integrity sha512-s3jaWicZd0pkP0jf5ysyHUI/RE7MHos6qlToFcGWXVp+ykHOy77OUMrfbgJ9it2C5bow7OIQwYYaHjk9XlBQ2A==
"@emnapi/runtime@^0.45.0":
version "0.45.0"
resolved "https://registry.yarnpkg.com/@emnapi/runtime/-/runtime-0.45.0.tgz#e754de04c683263f34fd0c7f32adfe718bbe4ddd"
integrity sha512-Txumi3td7J4A/xTTwlssKieHKTGl3j4A1tglBx72auZ49YK7ePY6XZricgIg9mnZT4xPfA+UPCUdnhRuEFDL+w==
dependencies:
tslib "^2.4.0"
"@emoji-mart/data@^1.1.2":
version "1.1.2"
resolved "https://registry.npmjs.org/@emoji-mart/data/-/data-1.1.2.tgz"
@@ -235,6 +242,119 @@
resolved "https://registry.npmjs.org/@humanwhocodes/object-schema/-/object-schema-1.2.1.tgz"
integrity sha512-ZnQMnLV4e7hDlUvw8H+U8ASL02SS2Gn6+9Ac3wGGLIe7+je2AeAOxPY+izIPJDfFDb7eDjev0Us8MO1iFRN8hA==
"@img/sharp-darwin-arm64@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-darwin-arm64/-/sharp-darwin-arm64-0.33.2.tgz#0a52a82c2169112794dac2c71bfba9e90f7c5bd1"
integrity sha512-itHBs1rPmsmGF9p4qRe++CzCgd+kFYktnsoR1sbIAfsRMrJZau0Tt1AH9KVnufc2/tU02Gf6Ibujx+15qRE03w==
optionalDependencies:
"@img/sharp-libvips-darwin-arm64" "1.0.1"
"@img/sharp-darwin-x64@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-darwin-x64/-/sharp-darwin-x64-0.33.2.tgz#982e26bb9d38a81f75915c4032539aed621d1c21"
integrity sha512-/rK/69Rrp9x5kaWBjVN07KixZanRr+W1OiyKdXcbjQD6KbW+obaTeBBtLUAtbBsnlTTmWthw99xqoOS7SsySDg==
optionalDependencies:
"@img/sharp-libvips-darwin-x64" "1.0.1"
"@img/sharp-libvips-darwin-arm64@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@img/sharp-libvips-darwin-arm64/-/sharp-libvips-darwin-arm64-1.0.1.tgz#81e83ffc2c497b3100e2f253766490f8fad479cd"
integrity sha512-kQyrSNd6lmBV7O0BUiyu/OEw9yeNGFbQhbxswS1i6rMDwBBSX+e+rPzu3S+MwAiGU3HdLze3PanQ4Xkfemgzcw==
"@img/sharp-libvips-darwin-x64@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@img/sharp-libvips-darwin-x64/-/sharp-libvips-darwin-x64-1.0.1.tgz#fc1fcd9d78a178819eefe2c1a1662067a83ab1d6"
integrity sha512-eVU/JYLPVjhhrd8Tk6gosl5pVlvsqiFlt50wotCvdkFGf+mDNBJxMh+bvav+Wt3EBnNZWq8Sp2I7XfSjm8siog==
"@img/sharp-libvips-linux-arm64@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@img/sharp-libvips-linux-arm64/-/sharp-libvips-linux-arm64-1.0.1.tgz#26eb8c556a9b0db95f343fc444abc3effb67ebcf"
integrity sha512-bnGG+MJjdX70mAQcSLxgeJco11G+MxTz+ebxlz8Y3dxyeb3Nkl7LgLI0mXupoO+u1wRNx/iRj5yHtzA4sde1yA==
"@img/sharp-libvips-linux-arm@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@img/sharp-libvips-linux-arm/-/sharp-libvips-linux-arm-1.0.1.tgz#2a377b959ff7dd6528deee486c25461296a4fa8b"
integrity sha512-FtdMvR4R99FTsD53IA3LxYGghQ82t3yt0ZQ93WMZ2xV3dqrb0E8zq4VHaTOuLEAuA83oDawHV3fd+BsAPadHIQ==
"@img/sharp-libvips-linux-s390x@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@img/sharp-libvips-linux-s390x/-/sharp-libvips-linux-s390x-1.0.1.tgz#af28ac9ba929204467ecdf843330d791e9421e10"
integrity sha512-3+rzfAR1YpMOeA2zZNp+aYEzGNWK4zF3+sdMxuCS3ey9HhDbJ66w6hDSHDMoap32DueFwhhs3vwooAB2MaK4XQ==
"@img/sharp-libvips-linux-x64@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@img/sharp-libvips-linux-x64/-/sharp-libvips-linux-x64-1.0.1.tgz#4273d182aa51912e655e1214ea47983d7c1f7f8d"
integrity sha512-3NR1mxFsaSgMMzz1bAnnKbSAI+lHXVTqAHgc1bgzjHuXjo4hlscpUxc0vFSAPKI3yuzdzcZOkq7nDPrP2F8Jgw==
"@img/sharp-libvips-linuxmusl-arm64@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@img/sharp-libvips-linuxmusl-arm64/-/sharp-libvips-linuxmusl-arm64-1.0.1.tgz#d150c92151cea2e8d120ad168b9c358d09c77ce8"
integrity sha512-5aBRcjHDG/T6jwC3Edl3lP8nl9U2Yo8+oTl5drd1dh9Z1EBfzUKAJFUDTDisDjUwc7N4AjnPGfCA3jl3hY8uDg==
"@img/sharp-libvips-linuxmusl-x64@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@img/sharp-libvips-linuxmusl-x64/-/sharp-libvips-linuxmusl-x64-1.0.1.tgz#e297c1a4252c670d93b0f9e51fca40a7a5b6acfd"
integrity sha512-dcT7inI9DBFK6ovfeWRe3hG30h51cBAP5JXlZfx6pzc/Mnf9HFCQDLtYf4MCBjxaaTfjCCjkBxcy3XzOAo5txw==
"@img/sharp-linux-arm64@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-linux-arm64/-/sharp-linux-arm64-0.33.2.tgz#af3409f801a9bee1d11d0c7e971dcd6180f80022"
integrity sha512-pz0NNo882vVfqJ0yNInuG9YH71smP4gRSdeL09ukC2YLE6ZyZePAlWKEHgAzJGTiOh8Qkaov6mMIMlEhmLdKew==
optionalDependencies:
"@img/sharp-libvips-linux-arm64" "1.0.1"
"@img/sharp-linux-arm@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-linux-arm/-/sharp-linux-arm-0.33.2.tgz#181f7466e6ac074042a38bfb679eb82505e17083"
integrity sha512-Fndk/4Zq3vAc4G/qyfXASbS3HBZbKrlnKZLEJzPLrXoJuipFNNwTes71+Ki1hwYW5lch26niRYoZFAtZVf3EGA==
optionalDependencies:
"@img/sharp-libvips-linux-arm" "1.0.1"
"@img/sharp-linux-s390x@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-linux-s390x/-/sharp-linux-s390x-0.33.2.tgz#9c171f49211f96fba84410b3e237b301286fa00f"
integrity sha512-MBoInDXDppMfhSzbMmOQtGfloVAflS2rP1qPcUIiITMi36Mm5YR7r0ASND99razjQUpHTzjrU1flO76hKvP5RA==
optionalDependencies:
"@img/sharp-libvips-linux-s390x" "1.0.1"
"@img/sharp-linux-x64@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-linux-x64/-/sharp-linux-x64-0.33.2.tgz#b956dfc092adc58c2bf0fae2077e6f01a8b2d5d7"
integrity sha512-xUT82H5IbXewKkeF5aiooajoO1tQV4PnKfS/OZtb5DDdxS/FCI/uXTVZ35GQ97RZXsycojz/AJ0asoz6p2/H/A==
optionalDependencies:
"@img/sharp-libvips-linux-x64" "1.0.1"
"@img/sharp-linuxmusl-arm64@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-linuxmusl-arm64/-/sharp-linuxmusl-arm64-0.33.2.tgz#10e0ec5a79d1234c6a71df44c9f3b0bef0bc0f15"
integrity sha512-F+0z8JCu/UnMzg8IYW1TMeiViIWBVg7IWP6nE0p5S5EPQxlLd76c8jYemG21X99UzFwgkRo5yz2DS+zbrnxZeA==
optionalDependencies:
"@img/sharp-libvips-linuxmusl-arm64" "1.0.1"
"@img/sharp-linuxmusl-x64@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-linuxmusl-x64/-/sharp-linuxmusl-x64-0.33.2.tgz#29e0030c24aa27c38201b1fc84e3d172899fcbe0"
integrity sha512-+ZLE3SQmSL+Fn1gmSaM8uFusW5Y3J9VOf+wMGNnTtJUMUxFhv+P4UPaYEYT8tqnyYVaOVGgMN/zsOxn9pSsO2A==
optionalDependencies:
"@img/sharp-libvips-linuxmusl-x64" "1.0.1"
"@img/sharp-wasm32@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-wasm32/-/sharp-wasm32-0.33.2.tgz#38d7c740a22de83a60ad1e6bcfce17462b0d4230"
integrity sha512-fLbTaESVKuQcpm8ffgBD7jLb/CQLcATju/jxtTXR1XCLwbOQt+OL5zPHSDMmp2JZIeq82e18yE0Vv7zh6+6BfQ==
dependencies:
"@emnapi/runtime" "^0.45.0"
"@img/sharp-win32-ia32@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-win32-ia32/-/sharp-win32-ia32-0.33.2.tgz#09456314e223f68e5417c283b45c399635c16202"
integrity sha512-okBpql96hIGuZ4lN3+nsAjGeggxKm7hIRu9zyec0lnfB8E7Z6p95BuRZzDDXZOl2e8UmR4RhYt631i7mfmKU8g==
"@img/sharp-win32-x64@0.33.2":
version "0.33.2"
resolved "https://registry.yarnpkg.com/@img/sharp-win32-x64/-/sharp-win32-x64-0.33.2.tgz#148e96dfd6e68747da41a311b9ee4559bb1b1471"
integrity sha512-E4magOks77DK47FwHUIGH0RYWSgRBfGdK56kIHSVeB9uIS4pPFr4N2kIVsXdQQo4LzOsENKV5KAhRlRL7eMAdg==
"@jridgewell/gen-mapping@^0.3.2":
version "0.3.3"
resolved "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.3.tgz"
@@ -1581,11 +1701,27 @@ color-name@1.1.3:
resolved "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz"
integrity sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==
color-name@~1.1.4:
color-name@^1.0.0, color-name@~1.1.4:
version "1.1.4"
resolved "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz"
integrity sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==
color-string@^1.9.0:
version "1.9.1"
resolved "https://registry.yarnpkg.com/color-string/-/color-string-1.9.1.tgz#4467f9146f036f855b764dfb5bf8582bf342c7a4"
integrity sha512-shrVawQFojnZv6xM40anx4CkoDP+fZsw/ZerEMsW/pyzsRbElpsL/DBVW7q3ExxwusdNXI3lXpuhEZkzs8p5Eg==
dependencies:
color-name "^1.0.0"
simple-swizzle "^0.2.2"
color@^4.2.3:
version "4.2.3"
resolved "https://registry.yarnpkg.com/color/-/color-4.2.3.tgz#d781ecb5e57224ee43ea9627560107c0e0c6463a"
integrity sha512-1rXeuUUiGGrykh+CeBdu5Ie7OJwinCgQY0bc7GCRxy5xVHy+moaqkpL/jqQq0MtQOeYcrqEz4abc5f0KtU7W4A==
dependencies:
color-convert "^2.0.1"
color-string "^1.9.0"
colorette@^2.0.19:
version "2.0.20"
resolved "https://registry.npmjs.org/colorette/-/colorette-2.0.20.tgz"
@@ -2076,6 +2212,11 @@ dequal@^2.0.0, dequal@^2.0.3:
resolved "https://registry.npmjs.org/dequal/-/dequal-2.0.3.tgz"
integrity sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA==
detect-libc@^2.0.2:
version "2.0.2"
resolved "https://registry.yarnpkg.com/detect-libc/-/detect-libc-2.0.2.tgz#8ccf2ba9315350e1241b88d0ac3b0e1fbd99605d"
integrity sha512-UX6sGumvvqSaXgdKGUsgZWqcUyIXZ/vZTrlRT/iobiKhGL0zL4d3osHj3uqllWJK+i+sixDS/3COVEOFbupFyw==
didyoumean@^1.2.2:
version "1.2.2"
resolved "https://registry.npmjs.org/didyoumean/-/didyoumean-1.2.2.tgz"
@@ -3521,6 +3662,11 @@ is-arrayish@^0.2.1:
resolved "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.2.1.tgz"
integrity sha512-zz06S8t0ozoDXMG+ube26zeCTNXcKIPJZJi8hBrF4idCLms4CG9QtK7qBl1boi5ODzFpjswb5JPmHCbMpjaYzg==
is-arrayish@^0.3.1:
version "0.3.2"
resolved "https://registry.yarnpkg.com/is-arrayish/-/is-arrayish-0.3.2.tgz#4574a2ae56f7ab206896fb431eaeed066fdf8f03"
integrity sha512-eVRqCvVlZbuw3GrM63ovNSNAeA1K16kaR/LRY/92w0zxQ5/1YzwblUX652i4Xs9RwAGjW9d9y6X88t8OaAJfWQ==
is-async-function@^2.0.0:
version "2.0.0"
resolved "https://registry.yarnpkg.com/is-async-function/-/is-async-function-2.0.0.tgz#8e4418efd3e5d3a6ebb0164c05ef5afb69aa9646"
@@ -6072,6 +6218,35 @@ set-function-name@^2.0.0, set-function-name@^2.0.1:
functions-have-names "^1.2.3"
has-property-descriptors "^1.0.0"
sharp@^0.33.2:
version "0.33.2"
resolved "https://registry.yarnpkg.com/sharp/-/sharp-0.33.2.tgz#fcd52f2c70effa8a02160b1bfd989a3de55f2dfb"
integrity sha512-WlYOPyyPDiiM07j/UO+E720ju6gtNtHjEGg5vovUk1Lgxyjm2LFO+37Nt/UI3MMh2l6hxTWQWi7qk3cXJTutcQ==
dependencies:
color "^4.2.3"
detect-libc "^2.0.2"
semver "^7.5.4"
optionalDependencies:
"@img/sharp-darwin-arm64" "0.33.2"
"@img/sharp-darwin-x64" "0.33.2"
"@img/sharp-libvips-darwin-arm64" "1.0.1"
"@img/sharp-libvips-darwin-x64" "1.0.1"
"@img/sharp-libvips-linux-arm" "1.0.1"
"@img/sharp-libvips-linux-arm64" "1.0.1"
"@img/sharp-libvips-linux-s390x" "1.0.1"
"@img/sharp-libvips-linux-x64" "1.0.1"
"@img/sharp-libvips-linuxmusl-arm64" "1.0.1"
"@img/sharp-libvips-linuxmusl-x64" "1.0.1"
"@img/sharp-linux-arm" "0.33.2"
"@img/sharp-linux-arm64" "0.33.2"
"@img/sharp-linux-s390x" "0.33.2"
"@img/sharp-linux-x64" "0.33.2"
"@img/sharp-linuxmusl-arm64" "0.33.2"
"@img/sharp-linuxmusl-x64" "0.33.2"
"@img/sharp-wasm32" "0.33.2"
"@img/sharp-win32-ia32" "0.33.2"
"@img/sharp-win32-x64" "0.33.2"
shebang-command@^2.0.0:
version "2.0.0"
resolved "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz"
@@ -6098,6 +6273,13 @@ signal-exit@^3.0.2, signal-exit@^3.0.3, signal-exit@^3.0.7:
resolved "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.7.tgz"
integrity sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==
simple-swizzle@^0.2.2:
version "0.2.2"
resolved "https://registry.yarnpkg.com/simple-swizzle/-/simple-swizzle-0.2.2.tgz#a4da6b635ffcccca33f70d17cb92592de95e557a"
integrity sha512-JA//kQgZtbuY83m+xT+tXJkmJncGMTFT+C+g2h2R9uxkYIrE2yy9sgmcLhCnw57/WSD+Eh3J97FPEDFnbXnDUg==
dependencies:
is-arrayish "^0.3.1"
size-sensor@^1.0.1:
version "1.0.1"
resolved "https://registry.npmjs.org/size-sensor/-/size-sensor-1.0.1.tgz"