Mirror of https://github.com/langgenius/dify.git (synced 2026-01-29 08:16:15 +08:00)

Compare commits: 4 commits, refactor/c...refactor/t

| Author | SHA1 | Date |
|---|---|---|
| | f72aaf9ff2 | |
| | 7f8aaa33f7 | |
| | 2f52e62835 | |
| | 0b3bf03818 | |
AGENTS.md (24 changed lines)
@@ -25,30 +25,6 @@ pnpm type-check:tsgo
 pnpm test
 ```

-### Frontend Linting
-
-ESLint is used for frontend code quality. Available commands:
-
-```bash
-# Lint all files (report only)
-pnpm lint
-
-# Lint and auto-fix issues
-pnpm lint:fix
-
-# Lint specific files or directories
-pnpm lint:fix app/components/base/button/
-pnpm lint:fix app/components/base/button/index.tsx
-
-# Lint quietly (errors only, no warnings)
-pnpm lint:quiet
-
-# Check code complexity
-pnpm lint:complexity
-```
-
-**Important**: Always run `pnpm lint:fix` before committing. The pre-commit hook runs `lint-staged` which only lints staged files.
-
 ## Testing & Quality Practices

 - Follow TDD: red → green → refactor.
agent-notes/.gitkeep (0) Normal file

@@ -0,0 +1,27 @@
+# Notes: `large_language_model.py`
+
+## Purpose
+
+Provides the base `LargeLanguageModel` implementation used by the model runtime to invoke plugin-backed LLMs and to
+bridge plugin daemon streaming semantics back into API-layer entities (`LLMResult`, `LLMResultChunk`).
+
+## Key behaviors / invariants
+
+- `invoke(..., stream=False)` still calls the plugin in streaming mode and then synthesizes a single `LLMResult` from
+  the first yielded `LLMResultChunk`.
+- Plugin invocation is wrapped by `_invoke_llm_via_plugin(...)`, and `stream=False` normalization is handled by
+  `_normalize_non_stream_plugin_result(...)` / `_build_llm_result_from_first_chunk(...)`.
+- Tool call deltas are merged incrementally via `_increase_tool_call(...)` to support multiple provider chunking
+  patterns (IDs anchored to first chunk, every chunk, or missing entirely).
+- A tool-call delta with an empty `id` requires at least one existing tool call; otherwise we raise `ValueError` to
+  surface invalid delta sequences explicitly.
+- Callback invocation is centralized in `_run_callbacks(...)` to ensure consistent error handling/logging.
+- For compatibility with dify issue `#17799`, `prompt_messages` may be removed by the plugin daemon in chunks and must
+  be re-attached in this layer before callbacks/consumers use them.
+- Callback hooks (`on_before_invoke`, `on_new_chunk`, `on_after_invoke`, `on_invoke_error`) must not break invocation
+  unless `callback.raise_error` is true.
+
+## Test focus
+
+- `api/tests/unit_tests/core/model_runtime/__base/test_increase_tool_call.py` validates tool-call delta merging and
+  patches `_gen_tool_call_id` for deterministic IDs.
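The delta-merging invariant described in the note above can be made concrete with a small sketch. This is a hedged illustration using simplified stand-in types, not the actual model-runtime entities or the real `_increase_tool_call` implementation: a delta carrying an `id` is anchored to (or creates) the matching tool call, while a delta with an empty `id` extends the most recent call and fails loudly if none exists yet.

```python
# Illustrative sketch only; the real entities live in the model runtime.
from dataclasses import dataclass


@dataclass
class ToolCallDelta:
    id: str  # may be "" when a provider only sends the ID on the first chunk
    name: str = ""
    arguments: str = ""


@dataclass
class ToolCall:
    id: str
    name: str = ""
    arguments: str = ""


def merge_tool_call_delta(delta: ToolCallDelta, tool_calls: list[ToolCall]) -> None:
    if delta.id:
        # ID present: append to the call anchored to this ID, creating it if new.
        target = next((t for t in tool_calls if t.id == delta.id), None)
        if target is None:
            target = ToolCall(id=delta.id)
            tool_calls.append(target)
    else:
        # Empty ID: must continue an existing call, mirroring the ValueError invariant.
        if not tool_calls:
            raise ValueError("tool call delta has an empty id but no tool call exists yet")
        target = tool_calls[-1]
    target.name += delta.name
    target.arguments += delta.arguments
```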
api/AGENTS.md (120 changed lines)

@@ -1,47 +1,97 @@
 # API Agent Guide

-## Notes for Agent (must-check)
+## Agent Notes (must-check)

-Before changing any backend code under `api/`, you MUST read the surrounding docstrings and comments. These notes contain required context (invariants, edge cases, trade-offs) and are treated as part of the spec.
+Before you start work on any backend file under `api/`, you MUST check whether a related note exists under:

-Look for:
+- `agent-notes/<same-relative-path-as-target-file>.md`

-- The module (file) docstring at the top of a source code file
-- Docstrings on classes and functions/methods
-- Paragraph/block comments for non-obvious logic
+Rules:

-### What to write where
+- **Path mapping**: for a target file `<path>/<name>.py`, the note must be `agent-notes/<path>/<name>.py.md` (same folder structure, same filename, plus `.md`).
+- **Before working**:
+  - If the note exists, read it first and follow any constraints/decisions recorded there.
+  - If the note conflicts with the current code, or references an "origin" file/path that has been deleted, renamed, or migrated, treat the **code as the single source of truth** and update the note to match reality.
+  - If the note does not exist, create it with a short architecture/intent summary and any relevant invariants/edge cases.
+- **During working**:
+  - Keep the note in sync as you discover constraints, make decisions, or change approach.
+  - If you move/rename a file, migrate its note to the new mapped path (and fix any outdated references inside the note).
+  - Record non-obvious edge cases, trade-offs, and the test/verification plan as you go (not just at the end).
+  - Keep notes **coherent**: integrate new findings into the relevant sections and rewrite for clarity; avoid append-only “recent fix” / changelog-style additions unless the note is explicitly intended to be a changelog.
+- **When finishing work**:
+  - Update the related note(s) to reflect what changed, why, and any new edge cases/tests.
+  - If a file is deleted, remove or clearly deprecate the corresponding note so it cannot be mistaken as current guidance.
+  - Keep notes concise and accurate; they are meant to prevent repeated rediscovery.

-- Keep notes scoped: module notes cover module-wide context, class notes cover class-wide context, function/method notes cover behavioural contracts, and paragraph/block comments cover local “why”. Avoid duplicating the same content across scopes unless repetition prevents misuse.
-- **Module (file) docstring**: purpose, boundaries, key invariants, and “gotchas” that a new reader must know before editing.
-  - Include cross-links to the key collaborators (modules/services) when discovery is otherwise hard.
-  - Prefer stable facts (invariants, contracts) over ephemeral “today we…” notes.
-- **Class docstring**: responsibility, lifecycle, invariants, and how it should be used (or not used).
-  - If the class is intentionally stateful, note what state exists and what methods mutate it.
-  - If concurrency/async assumptions matter, state them explicitly.
-- **Function/method docstring**: behavioural contract.
-  - Document arguments, return shape, side effects (DB writes, external I/O, task dispatch), and raised domain exceptions.
-  - Add examples only when they prevent misuse.
-- **Paragraph/block comments**: explain *why* (trade-offs, historical constraints, surprising edge cases), not what the code already states.
-  - Keep comments adjacent to the logic they justify; delete or rewrite comments that no longer match reality.

 ## Skill Index

-### Rules (must follow)
+Start with the section that best matches your need. Each entry lists the problems it solves plus key files/concepts so you know what to expect before opening it.

-In this section, “notes” means module/class/function docstrings plus any relevant paragraph/block comments.
+### Platform Foundations

-- **Before working**
-  - Read the notes in the area you’ll touch; treat them as part of the spec.
-  - If a docstring or comment conflicts with the current code, treat the **code as the single source of truth** and update the docstring or comment to match reality.
-  - If important intent/invariants/edge cases are missing, add them in the closest docstring or comment (module for overall scope, function for behaviour).
-- **During working**
-  - Keep the notes in sync as you discover constraints, make decisions, or change approach.
-  - If you move/rename responsibilities across modules/classes, update the affected docstrings and comments so readers can still find the “why” and the invariants.
-  - Record non-obvious edge cases, trade-offs, and the test/verification plan in the nearest docstring or comment that will stay correct.
-  - Keep the notes **coherent**: integrate new findings into the relevant docstrings and comments; avoid append-only “recent fix” / changelog-style additions.
-- **When finishing**
-  - Update the notes to reflect what changed, why, and any new edge cases/tests.
-  - Remove or rewrite any comments that could be mistaken as current guidance but no longer apply.
-  - Keep docstrings and comments concise and accurate; they are meant to prevent repeated rediscovery.

+#### [Infrastructure Overview](agent_skills/infra.md)
+
+- **When to read this**
+  - You need to understand where a feature belongs in the architecture.
+  - You’re wiring storage, Redis, vector stores, or OTEL.
+  - You’re about to add CLI commands or async jobs.
+- **What it covers**
+  - Configuration stack (`configs/app_config.py`, remote settings)
+  - Storage entry points (`extensions/ext_storage.py`, `core/file/file_manager.py`)
+  - Redis conventions (`extensions/ext_redis.py`)
+  - Plugin runtime topology
+  - Vector-store factory (`core/rag/datasource/vdb/*`)
+  - Observability hooks
+  - SSRF proxy usage
+  - Core CLI commands
+
+### Plugin & Extension Development
+
+#### [Plugin Systems](agent_skills/plugin.md)
+
+- **When to read this**
+  - You’re building or debugging a marketplace plugin.
+  - You need to know how manifests, providers, daemons, and migrations fit together.
+- **What it covers**
+  - Plugin manifests (`core/plugin/entities/plugin.py`)
+  - Installation/upgrade flows (`services/plugin/plugin_service.py`, CLI commands)
+  - Runtime adapters (`core/plugin/impl/*` for tool/model/datasource/trigger/endpoint/agent)
+  - Daemon coordination (`core/plugin/entities/plugin_daemon.py`)
+  - How provider registries surface capabilities to the rest of the platform
+
+#### [Plugin OAuth](agent_skills/plugin_oauth.md)
+
+- **When to read this**
+  - You must integrate OAuth for a plugin or datasource.
+  - You’re handling credential encryption or refresh flows.
+- **Topics**
+  - Credential storage
+  - Encryption helpers (`core/helper/provider_encryption.py`)
+  - OAuth client bootstrap (`services/plugin/oauth_service.py`, `services/plugin/plugin_parameter_service.py`)
+  - How console/API layers expose the flows
+
+### Workflow Entry & Execution
+
+#### [Trigger Concepts](agent_skills/trigger.md)
+
+- **When to read this**
+  - You’re debugging why a workflow didn’t start.
+  - You’re adding a new trigger type or hook.
+  - You need to trace async execution, draft debugging, or webhook/schedule pipelines.
+- **Details**
+  - Start-node taxonomy
+  - Webhook & schedule internals (`core/workflow/nodes/trigger_*`, `services/trigger/*`)
+  - Async orchestration (`services/async_workflow_service.py`, Celery queues)
+  - Debug event bus
+  - Storage/logging interactions
+
+## General Reminders
+
+- All skill docs assume you follow the coding style rules below—run the lint/type/test commands before submitting changes.
+- When you cannot find an answer in these briefs, search the codebase using the paths referenced (e.g., `core/plugin/impl/tool.py`, `services/dataset_service.py`).
+- If you run into cross-cutting concerns (tenancy, configuration, storage), check the infrastructure guide first; it links to most supporting modules.
+- Keep multi-tenancy and configuration central: everything flows through `configs.dify_config` and `tenant_id`.
+- When touching plugins or triggers, consult both the system overview and the specialised doc to ensure you adjust lifecycle, storage, and observability consistently.

 ## Coding Style
@@ -176,7 +226,7 @@ Before opening a PR / submitting:

 - Controllers: parse input via Pydantic, invoke services, return serialised responses; no business logic.
 - Services: coordinate repositories, providers, background tasks; keep side effects explicit.
-- Document non-obvious behaviour with concise docstrings and comments.
+- Document non-obvious behaviour with concise comments.

 ### Miscellaneous
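The path-mapping rule in the api/AGENTS.md hunk above is mechanical enough to sketch. The helper name and example path below are illustrative only, not part of the guide:

```python
# Hedged sketch of the agent-notes path mapping rule (illustrative names).
from pathlib import Path


def note_path_for(target_relative_to_api: str) -> Path:
    """Map a backend file under api/ to its agent-notes markdown file."""
    return Path("agent-notes") / f"{target_relative_to_api}.md"


# services/dataset_service.py -> agent-notes/services/dataset_service.py.md
print(note_path_for("services/dataset_service.py").as_posix())
```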
@@ -36,16 +36,6 @@ class NotionEstimatePayload(BaseModel):
     doc_language: str = Field(default="English")


-class DataSourceNotionListQuery(BaseModel):
-    dataset_id: str | None = Field(default=None, description="Dataset ID")
-    credential_id: str = Field(..., description="Credential ID", min_length=1)
-    datasource_parameters: dict[str, Any] | None = Field(default=None, description="Datasource parameters JSON string")
-
-
-class DataSourceNotionPreviewQuery(BaseModel):
-    credential_id: str = Field(..., description="Credential ID", min_length=1)
-
-
 register_schema_model(console_ns, NotionEstimatePayload)
@@ -146,15 +136,26 @@ class DataSourceNotionListApi(Resource):
     def get(self):
         current_user, current_tenant_id = current_account_with_tenant()

-        query = DataSourceNotionListQuery.model_validate(request.args.to_dict())
+        dataset_id = request.args.get("dataset_id", default=None, type=str)
+        credential_id = request.args.get("credential_id", default=None, type=str)
+        if not credential_id:
+            raise ValueError("Credential id is required.")

         # Get datasource_parameters from query string (optional, for GitHub and other datasources)
-        datasource_parameters = query.datasource_parameters or {}
+        datasource_parameters_str = request.args.get("datasource_parameters", default=None, type=str)
+        datasource_parameters = {}
+        if datasource_parameters_str:
+            try:
+                datasource_parameters = json.loads(datasource_parameters_str)
+                if not isinstance(datasource_parameters, dict):
+                    raise ValueError("datasource_parameters must be a JSON object.")
+            except json.JSONDecodeError:
+                raise ValueError("Invalid datasource_parameters JSON format.")

         datasource_provider_service = DatasourceProviderService()
         credential = datasource_provider_service.get_datasource_credentials(
             tenant_id=current_tenant_id,
-            credential_id=query.credential_id,
+            credential_id=credential_id,
             provider="notion_datasource",
             plugin_id="langgenius/notion_datasource",
         )
@@ -163,8 +164,8 @@ class DataSourceNotionListApi(Resource):
         exist_page_ids = []
         with Session(db.engine) as session:
             # import notion in the exist dataset
-            if query.dataset_id:
-                dataset = DatasetService.get_dataset(query.dataset_id)
+            if dataset_id:
+                dataset = DatasetService.get_dataset(dataset_id)
                 if not dataset:
                     raise NotFound("Dataset not found.")
                 if dataset.data_source_type != "notion_import":
@@ -172,7 +173,7 @@ class DataSourceNotionListApi(Resource):

                 documents = session.scalars(
                     select(Document).filter_by(
-                        dataset_id=query.dataset_id,
+                        dataset_id=dataset_id,
                         tenant_id=current_tenant_id,
                         data_source_type="notion_import",
                         enabled=True,
@@ -239,12 +240,13 @@ class DataSourceNotionApi(Resource):
     def get(self, page_id, page_type):
         _, current_tenant_id = current_account_with_tenant()

-        query = DataSourceNotionPreviewQuery.model_validate(request.args.to_dict())
-
+        credential_id = request.args.get("credential_id", default=None, type=str)
+        if not credential_id:
+            raise ValueError("Credential id is required.")
         datasource_provider_service = DatasourceProviderService()
         credential = datasource_provider_service.get_datasource_credentials(
             tenant_id=current_tenant_id,
-            credential_id=query.credential_id,
+            credential_id=credential_id,
             provider="notion_datasource",
             plugin_id="langgenius/notion_datasource",
         )
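With the manual parsing shown above, a caller passes `datasource_parameters` as URL-encoded JSON. A hedged client-side sketch with hypothetical values (the endpoint URL and credential ID are placeholders):

```python
# Hypothetical request parameters for the endpoint above.
import json
from urllib.parse import urlencode

params = {
    "credential_id": "cred-123",  # required; the endpoint raises ValueError if missing
    "datasource_parameters": json.dumps({"repo": "octocat/hello-world"}),
}
print(urlencode(params))  # credential_id=cred-123&datasource_parameters=%7B%22repo%22...
```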
@@ -176,18 +176,7 @@ class IndexingEstimatePayload(BaseModel):
         return result


-class ConsoleDatasetListQuery(BaseModel):
-    page: int = Field(default=1, description="Page number")
-    limit: int = Field(default=20, description="Number of items per page")
-    keyword: str | None = Field(default=None, description="Search keyword")
-    include_all: bool = Field(default=False, description="Include all datasets")
-    ids: list[str] = Field(default_factory=list, description="Filter by dataset IDs")
-    tag_ids: list[str] = Field(default_factory=list, description="Filter by tag IDs")
-
-
-register_schema_models(
-    console_ns, DatasetCreatePayload, DatasetUpdatePayload, IndexingEstimatePayload, ConsoleDatasetListQuery
-)
+register_schema_models(console_ns, DatasetCreatePayload, DatasetUpdatePayload, IndexingEstimatePayload)


 def _get_retrieval_methods_by_vector_type(vector_type: str | None, is_mock: bool = False) -> dict[str, list[str]]:
@@ -286,19 +275,18 @@ class DatasetListApi(Resource):
     @enterprise_license_required
     def get(self):
         current_user, current_tenant_id = current_account_with_tenant()
-        query = ConsoleDatasetListQuery.model_validate(request.args.to_dict(flat=False))
+        page = request.args.get("page", default=1, type=int)
+        limit = request.args.get("limit", default=20, type=int)
+        ids = request.args.getlist("ids")
         # provider = request.args.get("provider", default="vendor")
-        if query.ids:
-            datasets, total = DatasetService.get_datasets_by_ids(query.ids, current_tenant_id)
+        search = request.args.get("keyword", default=None, type=str)
+        tag_ids = request.args.getlist("tag_ids")
+        include_all = request.args.get("include_all", default="false").lower() == "true"
+        if ids:
+            datasets, total = DatasetService.get_datasets_by_ids(ids, current_tenant_id)
         else:
             datasets, total = DatasetService.get_datasets(
-                query.page,
-                query.limit,
-                current_tenant_id,
-                current_user,
-                query.keyword,
-                query.tag_ids,
-                query.include_all,
+                page, limit, current_tenant_id, current_user, search, tag_ids, include_all
             )

         # check embedding setting
@@ -330,13 +318,7 @@ class DatasetListApi(Resource):
         else:
             item.update({"partial_member_list": []})

-        response = {
-            "data": data,
-            "has_more": len(datasets) == query.limit,
-            "limit": query.limit,
-            "total": total,
-            "page": query.page,
-        }
+        response = {"data": data, "has_more": len(datasets) == limit, "limit": limit, "total": total, "page": page}
         return response, 200

     @console_ns.doc("create_dataset")
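The flattened `response` dict keeps the pagination convention used throughout these endpoints: a page is presumed to have a successor exactly when it came back full. A minimal sketch of that convention (helper name is illustrative):

```python
def build_page_response(data: list, page: int, limit: int, total: int) -> dict:
    return {
        "data": data,
        "has_more": len(data) == limit,  # a full page implies there may be another
        "limit": limit,
        "total": total,
        "page": page,
    }
```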
@@ -98,19 +98,12 @@ class BedrockRetrievalPayload(BaseModel):
     knowledge_id: str


-class ExternalApiTemplateListQuery(BaseModel):
-    page: int = Field(default=1, description="Page number")
-    limit: int = Field(default=20, description="Number of items per page")
-    keyword: str | None = Field(default=None, description="Search keyword")
-
-
 register_schema_models(
     console_ns,
     ExternalKnowledgeApiPayload,
     ExternalDatasetCreatePayload,
     ExternalHitTestingPayload,
     BedrockRetrievalPayload,
-    ExternalApiTemplateListQuery,
 )
@@ -131,17 +124,19 @@ class ExternalApiTemplateListApi(Resource):
     @account_initialization_required
     def get(self):
         _, current_tenant_id = current_account_with_tenant()
-        query = ExternalApiTemplateListQuery.model_validate(request.args.to_dict())
+        page = request.args.get("page", default=1, type=int)
+        limit = request.args.get("limit", default=20, type=int)
+        search = request.args.get("keyword", default=None, type=str)

         external_knowledge_apis, total = ExternalDatasetService.get_external_knowledge_apis(
-            query.page, query.limit, current_tenant_id, query.keyword
+            page, limit, current_tenant_id, search
         )
         response = {
             "data": [item.to_dict() for item in external_knowledge_apis],
-            "has_more": len(external_knowledge_apis) == query.limit,
-            "limit": query.limit,
+            "has_more": len(external_knowledge_apis) == limit,
+            "limit": limit,
             "total": total,
-            "page": query.page,
+            "page": page,
         }
         return response, 200
@@ -3,7 +3,7 @@ from typing import Any

 from flask import request
 from flask_restx import Resource, marshal_with
-from pydantic import BaseModel, Field
+from pydantic import BaseModel
 from sqlalchemy import and_, select
 from werkzeug.exceptions import BadRequest, Forbidden, NotFound

@@ -28,10 +28,6 @@ class InstalledAppUpdatePayload(BaseModel):
     is_pinned: bool | None = None


-class InstalledAppsListQuery(BaseModel):
-    app_id: str | None = Field(default=None, description="App ID to filter by")
-
-
 logger = logging.getLogger(__name__)
@@ -41,13 +37,13 @@ class InstalledAppsListApi(Resource):
     @account_initialization_required
     @marshal_with(installed_app_list_fields)
     def get(self):
-        query = InstalledAppsListQuery.model_validate(request.args.to_dict())
+        app_id = request.args.get("app_id", default=None, type=str)
         current_user, current_tenant_id = current_account_with_tenant()

-        if query.app_id:
+        if app_id:
             installed_apps = db.session.scalars(
                 select(InstalledApp).where(
-                    and_(InstalledApp.tenant_id == current_tenant_id, InstalledApp.app_id == query.app_id)
+                    and_(InstalledApp.tenant_id == current_tenant_id, InstalledApp.app_id == app_id)
                 )
             ).all()
         else:
@@ -40,7 +40,6 @@ register_schema_models(
     TagBasePayload,
     TagBindingPayload,
     TagBindingRemovePayload,
-    TagListQueryParam,
 )
@@ -87,14 +87,6 @@ class TagUnbindingPayload(BaseModel):
     target_id: str


-class DatasetListQuery(BaseModel):
-    page: int = Field(default=1, description="Page number")
-    limit: int = Field(default=20, description="Number of items per page")
-    keyword: str | None = Field(default=None, description="Search keyword")
-    include_all: bool = Field(default=False, description="Include all datasets")
-    tag_ids: list[str] = Field(default_factory=list, description="Filter by tag IDs")
-
-
 register_schema_models(
     service_api_ns,
     DatasetCreatePayload,

@@ -104,7 +96,6 @@ register_schema_models(
     TagDeletePayload,
     TagBindingPayload,
     TagUnbindingPayload,
-    DatasetListQuery,
 )
@@ -122,11 +113,15 @@ class DatasetListApi(DatasetApiResource):
     )
     def get(self, tenant_id):
         """Resource for getting datasets."""
-        query = DatasetListQuery.model_validate(request.args.to_dict(flat=False))
+        page = request.args.get("page", default=1, type=int)
+        limit = request.args.get("limit", default=20, type=int)
+        # provider = request.args.get("provider", default="vendor")
+        search = request.args.get("keyword", default=None, type=str)
+        tag_ids = request.args.getlist("tag_ids")
+        include_all = request.args.get("include_all", default="false").lower() == "true"

         datasets, total = DatasetService.get_datasets(
-            query.page, query.limit, tenant_id, current_user, query.keyword, query.tag_ids, query.include_all
+            page, limit, tenant_id, current_user, search, tag_ids, include_all
         )
         # check embedding setting
         provider_manager = ProviderManager()
@@ -152,13 +147,7 @@ class DatasetListApi(DatasetApiResource):
                 item["embedding_available"] = False
             else:
                 item["embedding_available"] = True
-        response = {
-            "data": data,
-            "has_more": len(datasets) == query.limit,
-            "limit": query.limit,
-            "total": total,
-            "page": query.page,
-        }
+        response = {"data": data, "has_more": len(datasets) == limit, "limit": limit, "total": total, "page": page}
         return response, 200

     @service_api_ns.expect(service_api_ns.models[DatasetCreatePayload.__name__])
@@ -69,14 +69,7 @@ class DocumentTextUpdate(BaseModel):
         return self


-class DocumentListQuery(BaseModel):
-    page: int = Field(default=1, description="Page number")
-    limit: int = Field(default=20, description="Number of items per page")
-    keyword: str | None = Field(default=None, description="Search keyword")
-    status: str | None = Field(default=None, description="Document status filter")
-
-
-for m in [ProcessRule, RetrievalModel, DocumentTextCreatePayload, DocumentTextUpdate, DocumentListQuery]:
+for m in [ProcessRule, RetrievalModel, DocumentTextCreatePayload, DocumentTextUpdate]:
     service_api_ns.schema_model(m.__name__, m.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))  # type: ignore
@@ -467,33 +460,34 @@ class DocumentListApi(DatasetApiResource):
     def get(self, tenant_id, dataset_id):
         dataset_id = str(dataset_id)
         tenant_id = str(tenant_id)
-        query_params = DocumentListQuery.model_validate(request.args.to_dict())
+        page = request.args.get("page", default=1, type=int)
+        limit = request.args.get("limit", default=20, type=int)
+        search = request.args.get("keyword", default=None, type=str)
+        status = request.args.get("status", default=None, type=str)
         dataset = db.session.query(Dataset).where(Dataset.tenant_id == tenant_id, Dataset.id == dataset_id).first()
         if not dataset:
             raise NotFound("Dataset not found.")

         query = select(Document).filter_by(dataset_id=str(dataset_id), tenant_id=tenant_id)

-        if query_params.status:
-            query = DocumentService.apply_display_status_filter(query, query_params.status)
+        if status:
+            query = DocumentService.apply_display_status_filter(query, status)

-        if query_params.keyword:
-            search = f"%{query_params.keyword}%"
+        if search:
+            search = f"%{search}%"
             query = query.where(Document.name.like(search))

         query = query.order_by(desc(Document.created_at), desc(Document.position))

-        paginated_documents = db.paginate(
-            select=query, page=query_params.page, per_page=query_params.limit, max_per_page=100, error_out=False
-        )
+        paginated_documents = db.paginate(select=query, page=page, per_page=limit, max_per_page=100, error_out=False)
        documents = paginated_documents.items

         response = {
             "data": marshal(documents, document_fields),
-            "has_more": len(documents) == query_params.limit,
-            "limit": query_params.limit,
+            "has_more": len(documents) == limit,
+            "limit": limit,
             "total": paginated_documents.total,
-            "page": query_params.page,
+            "page": page,
         }

         return response
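One caveat worth noting about the keyword filter above: the raw keyword is interpolated into a LIKE pattern, so `%` and `_` in user input act as wildcards. A hedged sketch of an escaped variant follows; this is not what the diff does, and the helper name is illustrative (SQLAlchemy's `like()` accepts an `escape` argument):

```python
# Illustrative only: escape LIKE metacharacters before building the pattern.
def escaped_like_pattern(keyword: str) -> str:
    escaped = keyword.replace("\\", "\\\\").replace("%", r"\%").replace("_", r"\_")
    return f"%{escaped}%"

# query = query.where(Document.name.like(escaped_like_pattern(keyword), escape="\\"))
```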
@@ -11,9 +11,7 @@ from controllers.service_api.wraps import DatasetApiResource, cloud_edition_bill
 from fields.dataset_fields import dataset_metadata_fields
 from services.dataset_service import DatasetService
 from services.entities.knowledge_entities.knowledge_entities import (
-    DocumentMetadataOperation,
     MetadataArgs,
-    MetadataDetail,
     MetadataOperationData,
 )
 from services.metadata_service import MetadataService

@@ -24,13 +22,7 @@ class MetadataUpdatePayload(BaseModel):


 register_schema_model(service_api_ns, MetadataUpdatePayload)
-register_schema_models(
-    service_api_ns,
-    MetadataArgs,
-    MetadataDetail,
-    DocumentMetadataOperation,
-    MetadataOperationData,
-)
+register_schema_models(service_api_ns, MetadataArgs, MetadataOperationData)


 @service_api_ns.route("/datasets/<uuid:dataset_id>/metadata")
@@ -236,7 +236,4 @@ class AgentChatAppRunner(AppRunner):
             queue_manager=queue_manager,
             stream=application_generate_entity.stream,
             agent=True,
-            message_id=message.id,
-            user_id=application_generate_entity.user_id,
-            tenant_id=app_config.tenant_id,
         )
@@ -1,8 +1,6 @@
-import base64
 import logging
 import time
 from collections.abc import Generator, Mapping, Sequence
-from mimetypes import guess_extension
 from typing import TYPE_CHECKING, Any, Union

 from core.app.app_config.entities import ExternalDataVariableEntity, PromptTemplateEntity
@@ -13,16 +11,10 @@ from core.app.entities.app_invoke_entities import (
     InvokeFrom,
     ModelConfigWithCredentialsEntity,
 )
-from core.app.entities.queue_entities import (
-    QueueAgentMessageEvent,
-    QueueLLMChunkEvent,
-    QueueMessageEndEvent,
-    QueueMessageFileEvent,
-)
+from core.app.entities.queue_entities import QueueAgentMessageEvent, QueueLLMChunkEvent, QueueMessageEndEvent
 from core.app.features.annotation_reply.annotation_reply import AnnotationReplyFeature
 from core.app.features.hosting_moderation.hosting_moderation import HostingModerationFeature
 from core.external_data_tool.external_data_fetch import ExternalDataFetch
-from core.file.enums import FileTransferMethod, FileType
 from core.memory.token_buffer_memory import TokenBufferMemory
 from core.model_manager import ModelInstance
 from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
@@ -30,7 +22,6 @@ from core.model_runtime.entities.message_entities import (
     AssistantPromptMessage,
-    ImagePromptMessageContent,
     PromptMessage,
     TextPromptMessageContent,
 )
 from core.model_runtime.entities.model_entities import ModelPropertyKey
 from core.model_runtime.errors.invoke import InvokeBadRequestError
@@ -38,10 +29,7 @@ from core.moderation.input_moderation import InputModeration
 from core.prompt.advanced_prompt_transform import AdvancedPromptTransform
 from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig
 from core.prompt.simple_prompt_transform import ModelMode, SimplePromptTransform
-from core.tools.tool_file_manager import ToolFileManager
-from extensions.ext_database import db
-from models.enums import CreatorUserRole
-from models.model import App, AppMode, Message, MessageAnnotation, MessageFile
+from models.model import App, AppMode, Message, MessageAnnotation

 if TYPE_CHECKING:
     from core.file.models import File
@@ -215,9 +203,6 @@ class AppRunner:
         queue_manager: AppQueueManager,
         stream: bool,
         agent: bool = False,
-        message_id: str | None = None,
-        user_id: str | None = None,
-        tenant_id: str | None = None,
     ):
         """
         Handle invoke result
@@ -225,41 +210,21 @@ class AppRunner:
         :param queue_manager: application queue manager
         :param stream: stream
         :param agent: agent
-        :param message_id: message id for multimodal output
-        :param user_id: user id for multimodal output
-        :param tenant_id: tenant id for multimodal output
         :return:
         """
         if not stream and isinstance(invoke_result, LLMResult):
-            self._handle_invoke_result_direct(
-                invoke_result=invoke_result,
-                queue_manager=queue_manager,
-            )
+            self._handle_invoke_result_direct(invoke_result=invoke_result, queue_manager=queue_manager, agent=agent)
         elif stream and isinstance(invoke_result, Generator):
-            self._handle_invoke_result_stream(
-                invoke_result=invoke_result,
-                queue_manager=queue_manager,
-                agent=agent,
-                message_id=message_id,
-                user_id=user_id,
-                tenant_id=tenant_id,
-            )
+            self._handle_invoke_result_stream(invoke_result=invoke_result, queue_manager=queue_manager, agent=agent)
         else:
             raise NotImplementedError(f"unsupported invoke result type: {type(invoke_result)}")

-    def _handle_invoke_result_direct(
-        self,
-        invoke_result: LLMResult,
-        queue_manager: AppQueueManager,
-    ):
+    def _handle_invoke_result_direct(self, invoke_result: LLMResult, queue_manager: AppQueueManager, agent: bool):
         """
         Handle invoke result direct
         :param invoke_result: invoke result
         :param queue_manager: application queue manager
-        :param message_id: message id for multimodal output
-        :param user_id: user id for multimodal output
-        :param tenant_id: tenant id for multimodal output
+        :param agent: agent
         :return:
         """
         queue_manager.publish(
@@ -270,22 +235,13 @@ class AppRunner:
         )

     def _handle_invoke_result_stream(
-        self,
-        invoke_result: Generator[LLMResultChunk, None, None],
-        queue_manager: AppQueueManager,
-        agent: bool,
-        message_id: str | None = None,
-        user_id: str | None = None,
-        tenant_id: str | None = None,
+        self, invoke_result: Generator[LLMResultChunk, None, None], queue_manager: AppQueueManager, agent: bool
     ):
         """
         Handle invoke result
         :param invoke_result: invoke result
         :param queue_manager: application queue manager
         :param agent: agent
-        :param message_id: message id for multimodal output
-        :param user_id: user id for multimodal output
-        :param tenant_id: tenant id for multimodal output
         :return:
         """
         model: str = ""
@@ -303,26 +259,12 @@ class AppRunner:
                 text += message.content
             elif isinstance(message.content, list):
                 for content in message.content:
-                    if isinstance(content, str):
-                        text += content
-                    elif isinstance(content, TextPromptMessageContent):
+                    if not isinstance(content, str):
+                        # TODO(QuantumGhost): Add multimodal output support for easy ui.
+                        _logger.warning("received multimodal output, type=%s", type(content))
                         text += content.data
-                    elif isinstance(content, ImagePromptMessageContent):
-                        if message_id and user_id and tenant_id:
-                            try:
-                                self._handle_multimodal_image_content(
-                                    content=content,
-                                    message_id=message_id,
-                                    user_id=user_id,
-                                    tenant_id=tenant_id,
-                                    queue_manager=queue_manager,
-                                )
-                            except Exception:
-                                _logger.exception("Failed to handle multimodal image output")
-                        else:
-                            _logger.warning("Received multimodal output but missing required parameters")
                     else:
-                        text += content.data if hasattr(content, "data") else str(content)
+                        text += content  # failback to str

             if not model:
                 model = result.model
@@ -347,101 +289,6 @@ class AppRunner:
             PublishFrom.APPLICATION_MANAGER,
         )

-    def _handle_multimodal_image_content(
-        self,
-        content: ImagePromptMessageContent,
-        message_id: str,
-        user_id: str,
-        tenant_id: str,
-        queue_manager: AppQueueManager,
-    ):
-        """
-        Handle multimodal image content from LLM response.
-        Save the image and create a MessageFile record.
-
-        :param content: ImagePromptMessageContent instance
-        :param message_id: message id
-        :param user_id: user id
-        :param tenant_id: tenant id
-        :param queue_manager: queue manager
-        :return:
-        """
-        _logger.info("Handling multimodal image content for message %s", message_id)
-
-        image_url = content.url
-        base64_data = content.base64_data
-
-        _logger.info("Image URL: %s, Base64 data present: %s", image_url, base64_data)
-
-        if not image_url and not base64_data:
-            _logger.warning("Image content has neither URL nor base64 data")
-            return
-
-        tool_file_manager = ToolFileManager()
-
-        # Save the image file
-        try:
-            if image_url:
-                # Download image from URL
-                _logger.info("Downloading image from URL: %s", image_url)
-                tool_file = tool_file_manager.create_file_by_url(
-                    user_id=user_id,
-                    tenant_id=tenant_id,
-                    file_url=image_url,
-                    conversation_id=None,
-                )
-                _logger.info("Image saved successfully, tool_file_id: %s", tool_file.id)
-            elif base64_data:
-                if base64_data.startswith("data:"):
-                    base64_data = base64_data.split(",", 1)[1]
-
-                image_binary = base64.b64decode(base64_data)
-                mimetype = content.mime_type or "image/png"
-                extension = guess_extension(mimetype) or ".png"
-
-                tool_file = tool_file_manager.create_file_by_raw(
-                    user_id=user_id,
-                    tenant_id=tenant_id,
-                    conversation_id=None,
-                    file_binary=image_binary,
-                    mimetype=mimetype,
-                    filename=f"generated_image{extension}",
-                )
-                _logger.info("Image saved successfully, tool_file_id: %s", tool_file.id)
-            else:
-                return
-        except Exception:
-            _logger.exception("Failed to save image file")
-            return
-
-        # Create MessageFile record
-        message_file = MessageFile(
-            message_id=message_id,
-            type=FileType.IMAGE,
-            transfer_method=FileTransferMethod.TOOL_FILE,
-            belongs_to="assistant",
-            url=f"/files/tools/{tool_file.id}",
-            upload_file_id=tool_file.id,
-            created_by_role=(
-                CreatorUserRole.ACCOUNT
-                if queue_manager.invoke_from in {InvokeFrom.DEBUGGER, InvokeFrom.EXPLORE}
-                else CreatorUserRole.END_USER
-            ),
-            created_by=user_id,
-        )
-
-        db.session.add(message_file)
-        db.session.commit()
-        db.session.refresh(message_file)
-
-        # Publish QueueMessageFileEvent
-        queue_manager.publish(
-            QueueMessageFileEvent(message_file_id=message_file.id),
-            PublishFrom.APPLICATION_MANAGER,
-        )
-
-        _logger.info("QueueMessageFileEvent published for message_file_id: %s", message_file.id)
-
     def moderation_for_inputs(
         self,
         *,
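The removed base64 branch strips an optional data-URI prefix before decoding. A self-contained sketch of that behaviour, assuming a payload shaped like `data:image/png;base64,<payload>` (this is an illustration, not the deleted method itself):

```python
import base64


def decode_image_payload(base64_data: str) -> bytes:
    """Decode raw base64 or a full data URI like 'data:image/png;base64,<payload>'."""
    if base64_data.startswith("data:"):
        base64_data = base64_data.split(",", 1)[1]  # drop the 'data:<mime>;base64,' prefix
    return base64.b64decode(base64_data)


# Both forms decode to the same bytes:
assert decode_image_payload("aGk=") == decode_image_payload("data:text/plain;base64,aGk=") == b"hi"
```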
@@ -226,10 +226,5 @@ class ChatAppRunner(AppRunner):

         # handle invoke result
         self._handle_invoke_result(
-            invoke_result=invoke_result,
-            queue_manager=queue_manager,
-            stream=application_generate_entity.stream,
-            message_id=message.id,
-            user_id=application_generate_entity.user_id,
-            tenant_id=app_config.tenant_id,
+            invoke_result=invoke_result, queue_manager=queue_manager, stream=application_generate_entity.stream
         )
@@ -184,10 +184,5 @@ class CompletionAppRunner(AppRunner):

         # handle invoke result
         self._handle_invoke_result(
-            invoke_result=invoke_result,
-            queue_manager=queue_manager,
-            stream=application_generate_entity.stream,
-            message_id=message.id,
-            user_id=application_generate_entity.user_id,
-            tenant_id=app_config.tenant_id,
+            invoke_result=invoke_result, queue_manager=queue_manager, stream=application_generate_entity.stream
         )
@@ -39,7 +39,6 @@ from core.app.entities.task_entities import (
     MessageAudioEndStreamResponse,
     MessageAudioStreamResponse,
     MessageEndStreamResponse,
-    StreamEvent,
     StreamResponse,
 )
 from core.app.task_pipeline.based_generate_task_pipeline import BasedGenerateTaskPipeline

@@ -71,7 +70,6 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline):

     _task_state: EasyUITaskState
     _application_generate_entity: Union[ChatAppGenerateEntity, CompletionAppGenerateEntity, AgentChatAppGenerateEntity]
-    _precomputed_event_type: StreamEvent | None = None

     def __init__(
         self,

@@ -344,15 +342,11 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline):
                 self._task_state.llm_result.message.content = current_content

             if isinstance(event, QueueLLMChunkEvent):
-                # Determine the event type once, on first LLM chunk, and reuse for subsequent chunks
-                if not hasattr(self, "_precomputed_event_type") or self._precomputed_event_type is None:
-                    self._precomputed_event_type = self._message_cycle_manager.get_message_event_type(
-                        message_id=self._message_id
-                    )
+                event_type = self._message_cycle_manager.get_message_event_type(message_id=self._message_id)
                 yield self._message_cycle_manager.message_to_stream_response(
                     answer=cast(str, delta_text),
                     message_id=self._message_id,
-                    event_type=self._precomputed_event_type,
+                    event_type=event_type,
                 )
             else:
                 yield self._agent_message_to_stream_response(
@@ -5,7 +5,7 @@ from threading import Thread
 from typing import Union

 from flask import Flask, current_app
-from sqlalchemy import select
+from sqlalchemy import exists, select
 from sqlalchemy.orm import Session

 from configs import dify_config

@@ -30,7 +30,6 @@ from core.app.entities.task_entities import (
     StreamEvent,
     WorkflowTaskState,
 )
-from core.db.session_factory import session_factory
 from core.llm_generator.llm_generator import LLMGenerator
 from core.tools.signature import sign_tool_file
 from extensions.ext_database import db

@@ -58,15 +57,13 @@ class MessageCycleManager:
         self._message_has_file: set[str] = set()

     def get_message_event_type(self, message_id: str) -> StreamEvent:
-        # Fast path: cached determination from prior QueueMessageFileEvent
         if message_id in self._message_has_file:
             return StreamEvent.MESSAGE_FILE

-        # Use SQLAlchemy 2.x style session.scalar(select(...))
-        with session_factory.create_session() as session:
-            message_file = session.scalar(select(MessageFile).where(MessageFile.message_id == message_id))
+        with Session(db.engine, expire_on_commit=False) as session:
+            has_file = session.query(exists().where(MessageFile.message_id == message_id)).scalar()

-        if message_file:
+        if has_file:
             self._message_has_file.add(message_id)
             return StreamEvent.MESSAGE_FILE
@@ -202,8 +199,6 @@ class MessageCycleManager:
             message_file = session.scalar(select(MessageFile).where(MessageFile.id == event.message_file_id))

             if message_file and message_file.url is not None:
-                self._message_has_file.add(message_file.message_id)
-
                 # get tool file id
                 tool_file_id = message_file.url.split("/")[-1]
                 # trim extension
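The `get_message_event_type` change above adopts a legacy `Query`-style `exists()` probe. For reference, an equivalent EXISTS check in SQLAlchemy 2.x `select()` style looks like the sketch below; this is an illustration, not what the diff adopts, and `session` / `MessageFile` / `message_id` are the names from the surrounding code:

```python
from sqlalchemy import exists, select

# Equivalent EXISTS probe in SQLAlchemy 2.x select() style; issues
# SELECT EXISTS(...) instead of loading a full MessageFile row.
has_file = session.scalar(select(exists().where(MessageFile.message_id == message_id)))
```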
@@ -1,4 +1,4 @@
-from collections.abc import Generator
+from collections.abc import Generator, Mapping
 from typing import Any

 from core.datasource.__base.datasource_plugin import DatasourcePlugin

@@ -34,7 +34,7 @@ class OnlineDocumentDatasourcePlugin(DatasourcePlugin):
     def get_online_document_pages(
         self,
         user_id: str,
-        datasource_parameters: dict[str, Any],
+        datasource_parameters: Mapping[str, Any],
         provider_type: str,
     ) -> Generator[OnlineDocumentPagesMessage, None, None]:
         manager = PluginDatasourceManager()
@@ -64,7 +64,7 @@ dependencies = [
     "pandas[excel,output-formatting,performance]~=2.2.2",
     "psycogreen~=1.0.2",
     "psycopg2-binary~=2.9.6",
-    "pycryptodome==3.23.0",
+    "pycryptodome==3.19.1",
     "pydantic~=2.11.4",
     "pydantic-extra-types~=2.10.3",
     "pydantic-settings~=2.11.0",
@@ -131,7 +131,7 @@ class BillingService:
         headers = {"Content-Type": "application/json", "Billing-Api-Secret-Key": cls.secret_key}

         url = f"{cls.base_url}{endpoint}"
-        response = httpx.request(method, url, json=json, params=params, headers=headers, follow_redirects=True)
+        response = httpx.request(method, url, json=json, params=params, headers=headers)
         if method == "GET" and response.status_code != httpx.codes.OK:
             raise ValueError("Unable to retrieve billing information. Please try again later or contact support.")
         if method == "PUT":

@@ -143,9 +143,6 @@ class BillingService:
             raise ValueError("Invalid arguments.")
         if method == "POST" and response.status_code != httpx.codes.OK:
             raise ValueError(f"Unable to send request to {url}. Please try again later or contact support.")
-        if method == "DELETE" and response.status_code != httpx.codes.OK:
-            logger.error("billing_service: DELETE response: %s %s", response.status_code, response.text)
-            raise ValueError(f"Unable to process delete request {url}. Please try again later or contact support.")
         return response.json()

     @staticmethod

@@ -168,7 +165,7 @@ class BillingService:
     def delete_account(cls, account_id: str):
         """Delete account."""
         params = {"account_id": account_id}
-        return cls._send_request("DELETE", "/account", params=params)
+        return cls._send_request("DELETE", "/account/", params=params)

     @classmethod
     def is_email_in_freeze(cls, email: str) -> bool:
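Worth noting for the first hunk: `httpx` does not follow redirects by default, so dropping `follow_redirects=True` means any 3xx from the billing API is now surfaced to the caller rather than chased. A hedged sketch with a hypothetical URL:

```python
import httpx

# With follow_redirects left at its default (False), a 3xx is returned as-is.
response = httpx.request("GET", "https://billing.example.com/info")  # hypothetical URL
if response.is_redirect:
    # The caller sees the redirect instead of the followed response.
    print("redirected to:", response.headers.get("location"))
```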
@@ -17,7 +17,7 @@ logger = logging.getLogger(__name__)

 RETRY_TIMES_OF_ONE_PLUGIN_IN_ONE_TENANT = 3
 CACHE_REDIS_KEY_PREFIX = "plugin_autoupgrade_check_task:cached_plugin_manifests:"
-CACHE_REDIS_TTL = 60 * 60  # 1 hour
+CACHE_REDIS_TTL = 60 * 15  # 15 minutes


 def _get_redis_cache_key(plugin_id: str) -> str:
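A sketch of how the shortened TTL plays out when the manifest cache is written. The helper below is hypothetical (the task's actual read/write functions are not shown in this diff) and uses the standard redis-py `setex`, which sets the value and expiry atomically:

```python
import json

CACHE_REDIS_KEY_PREFIX = "plugin_autoupgrade_check_task:cached_plugin_manifests:"
CACHE_REDIS_TTL = 60 * 15  # 15 minutes


def cache_plugin_manifest(redis_client, plugin_id: str, manifest: dict) -> None:
    # Hypothetical helper: after CACHE_REDIS_TTL seconds the key expires and
    # the manifest must be re-fetched rather than served from cache.
    redis_client.setex(CACHE_REDIS_KEY_PREFIX + plugin_id, CACHE_REDIS_TTL, json.dumps(manifest))
```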
@@ -1,454 +0,0 @@
-"""Test multimodal image output handling in BaseAppRunner."""
-
-from unittest.mock import MagicMock, patch
-from uuid import uuid4
-
-import pytest
-
-from core.app.apps.base_app_queue_manager import PublishFrom
-from core.app.apps.base_app_runner import AppRunner
-from core.app.entities.app_invoke_entities import InvokeFrom
-from core.app.entities.queue_entities import QueueMessageFileEvent
-from core.file.enums import FileTransferMethod, FileType
-from core.model_runtime.entities.message_entities import ImagePromptMessageContent
-from models.enums import CreatorUserRole
-
-
-class TestBaseAppRunnerMultimodal:
-    """Test that BaseAppRunner correctly handles multimodal image content."""
-
-    @pytest.fixture
-    def mock_user_id(self):
-        """Mock user ID."""
-        return str(uuid4())
-
-    @pytest.fixture
-    def mock_tenant_id(self):
-        """Mock tenant ID."""
-        return str(uuid4())
-
-    @pytest.fixture
-    def mock_message_id(self):
-        """Mock message ID."""
-        return str(uuid4())
-
-    @pytest.fixture
-    def mock_queue_manager(self):
-        """Create a mock queue manager."""
-        manager = MagicMock()
-        manager.invoke_from = InvokeFrom.SERVICE_API
-        return manager
-
-    @pytest.fixture
-    def mock_tool_file(self):
-        """Create a mock tool file."""
-        tool_file = MagicMock()
-        tool_file.id = str(uuid4())
-        return tool_file
-
-    @pytest.fixture
-    def mock_message_file(self):
-        """Create a mock message file."""
-        message_file = MagicMock()
-        message_file.id = str(uuid4())
-        return message_file
-
-    def test_handle_multimodal_image_content_with_url(
-        self,
-        mock_user_id,
-        mock_tenant_id,
-        mock_message_id,
-        mock_queue_manager,
-        mock_tool_file,
-        mock_message_file,
-    ):
-        """Test handling image from URL."""
-        # Arrange
-        image_url = "http://example.com/image.png"
-        content = ImagePromptMessageContent(
-            url=image_url,
-            format="png",
-            mime_type="image/png",
-        )
-
-        with patch("core.app.apps.base_app_runner.ToolFileManager") as mock_mgr_class:
-            # Setup mock tool file manager
-            mock_mgr = MagicMock()
-            mock_mgr.create_file_by_url.return_value = mock_tool_file
-            mock_mgr_class.return_value = mock_mgr
-
-            with patch("core.app.apps.base_app_runner.MessageFile") as mock_msg_file_class:
-                # Setup mock message file
-                mock_msg_file_class.return_value = mock_message_file
-
-                with patch("core.app.apps.base_app_runner.db.session") as mock_session:
-                    mock_session.add = MagicMock()
-                    mock_session.commit = MagicMock()
-                    mock_session.refresh = MagicMock()
-
-                    # Act
-                    # Create a mock runner with the method bound
-                    runner = MagicMock()
-
-                    method = AppRunner._handle_multimodal_image_content
-                    runner._handle_multimodal_image_content = lambda *args, **kwargs: method(runner, *args, **kwargs)
-
-                    runner._handle_multimodal_image_content(
-                        content=content,
-                        message_id=mock_message_id,
-                        user_id=mock_user_id,
-                        tenant_id=mock_tenant_id,
-                        queue_manager=mock_queue_manager,
-                    )
-
-                    # Assert
-                    # Verify tool file was created from URL
-                    mock_mgr.create_file_by_url.assert_called_once_with(
-                        user_id=mock_user_id,
-                        tenant_id=mock_tenant_id,
-                        file_url=image_url,
-                        conversation_id=None,
-                    )
-
-                    # Verify message file was created with correct parameters
-                    mock_msg_file_class.assert_called_once()
-                    call_kwargs = mock_msg_file_class.call_args[1]
-                    assert call_kwargs["message_id"] == mock_message_id
-                    assert call_kwargs["type"] == FileType.IMAGE
-                    assert call_kwargs["transfer_method"] == FileTransferMethod.TOOL_FILE
-                    assert call_kwargs["belongs_to"] == "assistant"
-                    assert call_kwargs["created_by"] == mock_user_id
-
-                    # Verify database operations
-                    mock_session.add.assert_called_once_with(mock_message_file)
-                    mock_session.commit.assert_called_once()
-                    mock_session.refresh.assert_called_once_with(mock_message_file)
-
-                    # Verify event was published
-                    mock_queue_manager.publish.assert_called_once()
-                    publish_call = mock_queue_manager.publish.call_args
-                    assert isinstance(publish_call[0][0], QueueMessageFileEvent)
-                    assert publish_call[0][0].message_file_id == mock_message_file.id
-                    # publish_from might be passed as positional or keyword argument
-                    assert (
-                        publish_call[0][1] == PublishFrom.APPLICATION_MANAGER
-                        or publish_call.kwargs.get("publish_from") == PublishFrom.APPLICATION_MANAGER
-                    )
-
-    def test_handle_multimodal_image_content_with_base64(
-        self,
-        mock_user_id,
-        mock_tenant_id,
-        mock_message_id,
-        mock_queue_manager,
-        mock_tool_file,
-        mock_message_file,
-    ):
-        """Test handling image from base64 data."""
-        # Arrange
-        import base64
-
-        # Create a small test image (1x1 PNG)
-        test_image_data = base64.b64encode(
-            b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01\x08\x02\x00\x00\x00\x90wS\xde"
-        ).decode()
-        content = ImagePromptMessageContent(
-            base64_data=test_image_data,
-            format="png",
-            mime_type="image/png",
-        )
-
-        with patch("core.app.apps.base_app_runner.ToolFileManager") as mock_mgr_class:
-            # Setup mock tool file manager
-            mock_mgr = MagicMock()
-            mock_mgr.create_file_by_raw.return_value = mock_tool_file
-            mock_mgr_class.return_value = mock_mgr
-
-            with patch("core.app.apps.base_app_runner.MessageFile") as mock_msg_file_class:
-                # Setup mock message file
-                mock_msg_file_class.return_value = mock_message_file
-
-                with patch("core.app.apps.base_app_runner.db.session") as mock_session:
-                    mock_session.add = MagicMock()
-                    mock_session.commit = MagicMock()
-                    mock_session.refresh = MagicMock()
-
-                    # Act
-                    # Create a mock runner with the method bound
-                    runner = MagicMock()
-                    method = AppRunner._handle_multimodal_image_content
-                    runner._handle_multimodal_image_content = lambda *args, **kwargs: method(runner, *args, **kwargs)
-
-                    runner._handle_multimodal_image_content(
-                        content=content,
-                        message_id=mock_message_id,
-                        user_id=mock_user_id,
-                        tenant_id=mock_tenant_id,
-                        queue_manager=mock_queue_manager,
-                    )
-
-                    # Assert
-                    # Verify tool file was created from base64
-                    mock_mgr.create_file_by_raw.assert_called_once()
-                    call_kwargs = mock_mgr.create_file_by_raw.call_args[1]
-                    assert call_kwargs["user_id"] == mock_user_id
-                    assert call_kwargs["tenant_id"] == mock_tenant_id
-                    assert call_kwargs["conversation_id"] is None
-                    assert "file_binary" in call_kwargs
-                    assert call_kwargs["mimetype"] == "image/png"
-                    assert call_kwargs["filename"].startswith("generated_image")
-                    assert call_kwargs["filename"].endswith(".png")
-
-                    # Verify message file was created
-                    mock_msg_file_class.assert_called_once()
-
-                    # Verify database operations
-                    mock_session.add.assert_called_once()
-                    mock_session.commit.assert_called_once()
-                    mock_session.refresh.assert_called_once()
-
-                    # Verify event was published
-                    mock_queue_manager.publish.assert_called_once()
-
-    def test_handle_multimodal_image_content_with_base64_data_uri(
-        self,
-        mock_user_id,
-        mock_tenant_id,
-        mock_message_id,
-        mock_queue_manager,
-        mock_tool_file,
-        mock_message_file,
-    ):
-        """Test handling image from base64 data with URI prefix."""
-        # Arrange
-        # Data URI format: data:image/png;base64,<base64_data>
-        test_image_data = (
-            "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="
-        )
-        content = ImagePromptMessageContent(
-            base64_data=f"data:image/png;base64,{test_image_data}",
-            format="png",
-            mime_type="image/png",
-        )
-
-        with patch("core.app.apps.base_app_runner.ToolFileManager") as mock_mgr_class:
-            # Setup mock tool file manager
-            mock_mgr = MagicMock()
-            mock_mgr.create_file_by_raw.return_value = mock_tool_file
-            mock_mgr_class.return_value = mock_mgr
-
-            with patch("core.app.apps.base_app_runner.MessageFile") as mock_msg_file_class:
-                # Setup mock message file
-                mock_msg_file_class.return_value = mock_message_file
-
-                with patch("core.app.apps.base_app_runner.db.session") as mock_session:
-                    mock_session.add = MagicMock()
-                    mock_session.commit = MagicMock()
-                    mock_session.refresh = MagicMock()
-
-                    # Act
-                    # Create a mock runner with the method bound
-                    runner = MagicMock()
-                    method = AppRunner._handle_multimodal_image_content
-                    runner._handle_multimodal_image_content = lambda *args, **kwargs: method(runner, *args, **kwargs)
-
-                    runner._handle_multimodal_image_content(
-                        content=content,
-                        message_id=mock_message_id,
-                        user_id=mock_user_id,
-                        tenant_id=mock_tenant_id,
-                        queue_manager=mock_queue_manager,
-                    )
-
-                    # Assert - verify that base64 data was extracted correctly (without prefix)
-                    mock_mgr.create_file_by_raw.assert_called_once()
-                    call_kwargs = mock_mgr.create_file_by_raw.call_args[1]
-                    # The base64 data should be decoded, so we check the binary was passed
-                    assert "file_binary" in call_kwargs
-
-    def test_handle_multimodal_image_content_without_url_or_base64(
-        self,
-        mock_user_id,
-        mock_tenant_id,
-        mock_message_id,
-        mock_queue_manager,
-    ):
-        """Test handling image content without URL or base64 data."""
-        # Arrange
-        content = ImagePromptMessageContent(
-            url="",
-            base64_data="",
-            format="png",
-            mime_type="image/png",
-        )
-
-        with patch("core.app.apps.base_app_runner.ToolFileManager") as mock_mgr_class:
-            with patch("core.app.apps.base_app_runner.MessageFile") as mock_msg_file_class:
-                with patch("core.app.apps.base_app_runner.db.session") as mock_session:
-                    # Act
-                    # Create a mock runner with the method bound
-                    runner = MagicMock()
-                    method = AppRunner._handle_multimodal_image_content
-                    runner._handle_multimodal_image_content = lambda *args, **kwargs: method(runner, *args, **kwargs)
-
-                    runner._handle_multimodal_image_content(
-                        content=content,
-                        message_id=mock_message_id,
-                        user_id=mock_user_id,
-                        tenant_id=mock_tenant_id,
-                        queue_manager=mock_queue_manager,
-                    )
-
-                    # Assert - should not create any files or publish events
-                    mock_mgr_class.assert_not_called()
-                    mock_msg_file_class.assert_not_called()
-                    mock_session.add.assert_not_called()
-                    mock_queue_manager.publish.assert_not_called()
-
-    def test_handle_multimodal_image_content_with_error(
-        self,
-        mock_user_id,
-        mock_tenant_id,
-        mock_message_id,
-        mock_queue_manager,
-    ):
-        """Test handling image content when an error occurs."""
-        # Arrange
-        image_url = "http://example.com/image.png"
-        content = ImagePromptMessageContent(
-            url=image_url,
-            format="png",
-            mime_type="image/png",
-        )
-
-        with patch("core.app.apps.base_app_runner.ToolFileManager") as mock_mgr_class:
-            # Setup mock to raise exception
-            mock_mgr = MagicMock()
-            mock_mgr.create_file_by_url.side_effect = Exception("Network error")
-            mock_mgr_class.return_value = mock_mgr
-
-            with patch("core.app.apps.base_app_runner.MessageFile") as mock_msg_file_class:
-                with patch("core.app.apps.base_app_runner.db.session") as mock_session:
-                    # Act
-                    # Create a mock runner with the method bound
-                    runner = MagicMock()
-                    method = AppRunner._handle_multimodal_image_content
-                    runner._handle_multimodal_image_content = lambda *args, **kwargs: method(runner, *args, **kwargs)
-
-                    # Should not raise exception, just log it
-                    runner._handle_multimodal_image_content(
-                        content=content,
-                        message_id=mock_message_id,
-                        user_id=mock_user_id,
-                        tenant_id=mock_tenant_id,
-                        queue_manager=mock_queue_manager,
-                    )
-
-                    # Assert - should not create message file or publish event on error
-                    mock_msg_file_class.assert_not_called()
-                    mock_session.add.assert_not_called()
-                    mock_queue_manager.publish.assert_not_called()
-
-    def test_handle_multimodal_image_content_debugger_mode(
-        self,
-        mock_user_id,
-        mock_tenant_id,
-        mock_message_id,
-        mock_queue_manager,
-        mock_tool_file,
-        mock_message_file,
-    ):
-        """Test that debugger mode sets correct created_by_role."""
-        # Arrange
-        image_url = "http://example.com/image.png"
-        content = ImagePromptMessageContent(
-            url=image_url,
-            format="png",
-            mime_type="image/png",
-        )
-        mock_queue_manager.invoke_from = InvokeFrom.DEBUGGER
-
-        with patch("core.app.apps.base_app_runner.ToolFileManager") as mock_mgr_class:
-            # Setup mock tool file manager
-            mock_mgr = MagicMock()
-            mock_mgr.create_file_by_url.return_value = mock_tool_file
-            mock_mgr_class.return_value = mock_mgr
-
-            with patch("core.app.apps.base_app_runner.MessageFile") as mock_msg_file_class:
-                # Setup mock message file
-                mock_msg_file_class.return_value = mock_message_file
-
-                with patch("core.app.apps.base_app_runner.db.session") as mock_session:
-                    mock_session.add = MagicMock()
-                    mock_session.commit = MagicMock()
|
||||
mock_session.refresh = MagicMock()
|
||||
|
||||
# Act
|
||||
# Create a mock runner with the method bound
|
||||
runner = MagicMock()
|
||||
method = AppRunner._handle_multimodal_image_content
|
||||
runner._handle_multimodal_image_content = lambda *args, **kwargs: method(runner, *args, **kwargs)
|
||||
|
||||
runner._handle_multimodal_image_content(
|
||||
content=content,
|
||||
message_id=mock_message_id,
|
||||
user_id=mock_user_id,
|
||||
tenant_id=mock_tenant_id,
|
||||
queue_manager=mock_queue_manager,
|
||||
)
|
||||
|
||||
# Assert - verify created_by_role is ACCOUNT for debugger mode
|
||||
call_kwargs = mock_msg_file_class.call_args[1]
|
||||
assert call_kwargs["created_by_role"] == CreatorUserRole.ACCOUNT
|
||||
|
||||
def test_handle_multimodal_image_content_service_api_mode(
|
||||
self,
|
||||
mock_user_id,
|
||||
mock_tenant_id,
|
||||
mock_message_id,
|
||||
mock_queue_manager,
|
||||
mock_tool_file,
|
||||
mock_message_file,
|
||||
):
|
||||
"""Test that service API mode sets correct created_by_role."""
|
||||
# Arrange
|
||||
image_url = "http://example.com/image.png"
|
||||
content = ImagePromptMessageContent(
|
||||
url=image_url,
|
||||
format="png",
|
||||
mime_type="image/png",
|
||||
)
|
||||
mock_queue_manager.invoke_from = InvokeFrom.SERVICE_API
|
||||
|
||||
with patch("core.app.apps.base_app_runner.ToolFileManager") as mock_mgr_class:
|
||||
# Setup mock tool file manager
|
||||
mock_mgr = MagicMock()
|
||||
mock_mgr.create_file_by_url.return_value = mock_tool_file
|
||||
mock_mgr_class.return_value = mock_mgr
|
||||
|
||||
with patch("core.app.apps.base_app_runner.MessageFile") as mock_msg_file_class:
|
||||
# Setup mock message file
|
||||
mock_msg_file_class.return_value = mock_message_file
|
||||
|
||||
with patch("core.app.apps.base_app_runner.db.session") as mock_session:
|
||||
mock_session.add = MagicMock()
|
||||
mock_session.commit = MagicMock()
|
||||
mock_session.refresh = MagicMock()
|
||||
|
||||
# Act
|
||||
# Create a mock runner with the method bound
|
||||
runner = MagicMock()
|
||||
method = AppRunner._handle_multimodal_image_content
|
||||
runner._handle_multimodal_image_content = lambda *args, **kwargs: method(runner, *args, **kwargs)
|
||||
|
||||
runner._handle_multimodal_image_content(
|
||||
content=content,
|
||||
message_id=mock_message_id,
|
||||
user_id=mock_user_id,
|
||||
tenant_id=mock_tenant_id,
|
||||
queue_manager=mock_queue_manager,
|
||||
)
|
||||
|
||||
# Assert - verify created_by_role is END_USER for service API
|
||||
call_kwargs = mock_msg_file_class.call_args[1]
|
||||
assert call_kwargs["created_by_role"] == CreatorUserRole.END_USER
|
||||
@ -1,6 +1,7 @@
"""Unit tests for the message cycle manager optimization."""

from unittest.mock import Mock, patch
from types import SimpleNamespace
from unittest.mock import ANY, Mock, patch

import pytest
from flask import current_app
@ -27,14 +28,17 @@ class TestMessageCycleManagerOptimization:

    def test_get_message_event_type_with_message_file(self, message_cycle_manager):
        """Test get_message_event_type returns MESSAGE_FILE when message has files."""
        with patch("core.app.task_pipeline.message_cycle_manager.session_factory") as mock_session_factory:
        with (
            patch("core.app.task_pipeline.message_cycle_manager.Session") as mock_session_class,
            patch("core.app.task_pipeline.message_cycle_manager.db", new=SimpleNamespace(engine=Mock())),
        ):
            # Setup mock session and message file
            mock_session = Mock()
            mock_session_factory.create_session.return_value.__enter__.return_value = mock_session
            mock_session_class.return_value.__enter__.return_value = mock_session

            mock_message_file = Mock()
            # Current implementation uses session.scalar(select(...))
            mock_session.scalar.return_value = mock_message_file
            # Current implementation uses session.query(...).scalar()
            mock_session.query.return_value.scalar.return_value = mock_message_file

            # Execute
            with current_app.app_context():
@ -42,16 +46,19 @@ class TestMessageCycleManagerOptimization:

            # Assert
            assert result == StreamEvent.MESSAGE_FILE
            mock_session.scalar.assert_called_once()
            mock_session.query.return_value.scalar.assert_called_once()

    def test_get_message_event_type_without_message_file(self, message_cycle_manager):
        """Test get_message_event_type returns MESSAGE when message has no files."""
        with patch("core.app.task_pipeline.message_cycle_manager.session_factory") as mock_session_factory:
        with (
            patch("core.app.task_pipeline.message_cycle_manager.Session") as mock_session_class,
            patch("core.app.task_pipeline.message_cycle_manager.db", new=SimpleNamespace(engine=Mock())),
        ):
            # Setup mock session and no message file
            mock_session = Mock()
            mock_session_factory.create_session.return_value.__enter__.return_value = mock_session
            # Current implementation uses session.scalar(select(...))
            mock_session.scalar.return_value = None
            mock_session_class.return_value.__enter__.return_value = mock_session
            # Current implementation uses session.query(...).scalar()
            mock_session.query.return_value.scalar.return_value = None

            # Execute
            with current_app.app_context():
@ -59,18 +66,21 @@ class TestMessageCycleManagerOptimization:

            # Assert
            assert result == StreamEvent.MESSAGE
            mock_session.scalar.assert_called_once()
            mock_session.query.return_value.scalar.assert_called_once()

    def test_message_to_stream_response_with_precomputed_event_type(self, message_cycle_manager):
        """MessageCycleManager.message_to_stream_response expects a valid event_type; callers should precompute it."""
        with patch("core.app.task_pipeline.message_cycle_manager.session_factory") as mock_session_factory:
        with (
            patch("core.app.task_pipeline.message_cycle_manager.Session") as mock_session_class,
            patch("core.app.task_pipeline.message_cycle_manager.db", new=SimpleNamespace(engine=Mock())),
        ):
            # Setup mock session and message file
            mock_session = Mock()
            mock_session_factory.create_session.return_value.__enter__.return_value = mock_session
            mock_session_class.return_value.__enter__.return_value = mock_session

            mock_message_file = Mock()
            # Current implementation uses session.scalar(select(...))
            mock_session.scalar.return_value = mock_message_file
            # Current implementation uses session.query(...).scalar()
            mock_session.query.return_value.scalar.return_value = mock_message_file

            # Execute: compute event type once, then pass to message_to_stream_response
            with current_app.app_context():
@ -84,11 +94,11 @@ class TestMessageCycleManagerOptimization:
            assert result.answer == "Hello world"
            assert result.id == "test-message-id"
            assert result.event == StreamEvent.MESSAGE_FILE
            mock_session.scalar.assert_called_once()
            mock_session.query.return_value.scalar.assert_called_once()

    def test_message_to_stream_response_with_event_type_skips_query(self, message_cycle_manager):
        """Test that message_to_stream_response skips database query when event_type is provided."""
        with patch("core.app.task_pipeline.message_cycle_manager.session_factory") as mock_session_factory:
        with patch("core.app.task_pipeline.message_cycle_manager.Session") as mock_session_class:
            # Execute with event_type provided
            result = message_cycle_manager.message_to_stream_response(
                answer="Hello world", message_id="test-message-id", event_type=StreamEvent.MESSAGE
@ -99,8 +109,8 @@ class TestMessageCycleManagerOptimization:
            assert result.answer == "Hello world"
            assert result.id == "test-message-id"
            assert result.event == StreamEvent.MESSAGE
            # Should not open a session when event_type is provided
            mock_session_factory.create_session.assert_not_called()
            # Should not query database when event_type is provided
            mock_session_class.assert_not_called()

    def test_message_to_stream_response_with_from_variable_selector(self, message_cycle_manager):
        """Test message_to_stream_response with from_variable_selector parameter."""
@ -120,21 +130,24 @@ class TestMessageCycleManagerOptimization:
    def test_optimization_usage_example(self, message_cycle_manager):
        """Test the optimization pattern that should be used by callers."""
        # Step 1: Get event type once (this queries database)
        with patch("core.app.task_pipeline.message_cycle_manager.session_factory") as mock_session_factory:
        with (
            patch("core.app.task_pipeline.message_cycle_manager.Session") as mock_session_class,
            patch("core.app.task_pipeline.message_cycle_manager.db", new=SimpleNamespace(engine=Mock())),
        ):
            mock_session = Mock()
            mock_session_factory.create_session.return_value.__enter__.return_value = mock_session
            # Current implementation uses session.scalar(select(...))
            mock_session.scalar.return_value = None  # No files
            mock_session_class.return_value.__enter__.return_value = mock_session
            # Current implementation uses session.query(...).scalar()
            mock_session.query.return_value.scalar.return_value = None  # No files
            with current_app.app_context():
                event_type = message_cycle_manager.get_message_event_type("test-message-id")

            # Should open session once
            mock_session_factory.create_session.assert_called_once()
            # Should query database once
            mock_session_class.assert_called_once_with(ANY, expire_on_commit=False)
            assert event_type == StreamEvent.MESSAGE

        # Step 2: Use event_type for multiple calls (no additional queries)
        with patch("core.app.task_pipeline.message_cycle_manager.session_factory") as mock_session_factory:
            mock_session_factory.create_session.return_value.__enter__.return_value = Mock()
        with patch("core.app.task_pipeline.message_cycle_manager.Session") as mock_session_class:
            mock_session_class.return_value.__enter__.return_value = Mock()

            chunk1_response = message_cycle_manager.message_to_stream_response(
                answer="Chunk 1", message_id="test-message-id", event_type=event_type
@ -144,8 +157,8 @@ class TestMessageCycleManagerOptimization:
                answer="Chunk 2", message_id="test-message-id", event_type=event_type
            )

            # Should not open session again when event_type provided
            mock_session_factory.create_session.assert_not_called()
            # Should not query database again
            mock_session_class.assert_not_called()

            assert chunk1_response.event == StreamEvent.MESSAGE
            assert chunk2_response.event == StreamEvent.MESSAGE
@ -171,26 +171,22 @@ class TestBillingServiceSendRequest:
        "status_code", [httpx.codes.BAD_REQUEST, httpx.codes.INTERNAL_SERVER_ERROR, httpx.codes.NOT_FOUND]
    )
    def test_delete_request_non_200_with_valid_json(self, mock_httpx_request, mock_billing_config, status_code):
        """Test DELETE request with non-200 status code raises ValueError.
        """Test DELETE request with non-200 status code but valid JSON response.

        DELETE now checks status code and raises ValueError for non-200 responses.
        DELETE doesn't check status code, so it returns the error JSON.
        """
        # Arrange
        error_response = {"detail": "Error message"}
        mock_response = MagicMock()
        mock_response.status_code = status_code
        mock_response.text = "Error message"
        mock_response.json.return_value = error_response
        mock_httpx_request.return_value = mock_response

        # Act & Assert
        with patch("services.billing_service.logger") as mock_logger:
            with pytest.raises(ValueError) as exc_info:
                BillingService._send_request("DELETE", "/test", json={"key": "value"})
            assert "Unable to process delete request" in str(exc_info.value)
            # Verify error logging
            mock_logger.error.assert_called_once()
            assert "DELETE response" in str(mock_logger.error.call_args)
        # Act
        result = BillingService._send_request("DELETE", "/test", json={"key": "value"})

        # Assert
        assert result == error_response

    @pytest.mark.parametrize(
        "status_code", [httpx.codes.BAD_REQUEST, httpx.codes.INTERNAL_SERVER_ERROR, httpx.codes.NOT_FOUND]
@ -214,9 +210,9 @@ class TestBillingServiceSendRequest:
        "status_code", [httpx.codes.BAD_REQUEST, httpx.codes.INTERNAL_SERVER_ERROR, httpx.codes.NOT_FOUND]
    )
    def test_delete_request_non_200_with_invalid_json(self, mock_httpx_request, mock_billing_config, status_code):
        """Test DELETE request with non-200 status code raises ValueError before JSON parsing.
        """Test DELETE request with non-200 status code and invalid JSON response raises exception.

        DELETE now checks status code before calling response.json(), so ValueError is raised
        DELETE doesn't check status code, so it calls response.json() which raises JSONDecodeError
        when the response cannot be parsed as JSON (e.g., empty response).
        """
        # Arrange
@ -227,13 +223,8 @@ class TestBillingServiceSendRequest:
        mock_httpx_request.return_value = mock_response

        # Act & Assert
        with patch("services.billing_service.logger") as mock_logger:
            with pytest.raises(ValueError) as exc_info:
                BillingService._send_request("DELETE", "/test", json={"key": "value"})
            assert "Unable to process delete request" in str(exc_info.value)
            # Verify error logging
            mock_logger.error.assert_called_once()
            assert "DELETE response" in str(mock_logger.error.call_args)
        with pytest.raises(json.JSONDecodeError):
            BillingService._send_request("DELETE", "/test", json={"key": "value"})

    def test_retry_on_request_error(self, mock_httpx_request, mock_billing_config):
        """Test that _send_request retries on httpx.RequestError."""
@ -798,7 +789,7 @@ class TestBillingServiceAccountManagement:

        # Assert
        assert result == expected_response
        mock_send_request.assert_called_once_with("DELETE", "/account", params={"account_id": account_id})
        mock_send_request.assert_called_once_with("DELETE", "/account/", params={"account_id": account_id})

    def test_is_email_in_freeze_true(self, mock_send_request):
        """Test checking if email is frozen (returns True)."""
33
api/uv.lock
generated
33
api/uv.lock
generated
@ -1633,7 +1633,7 @@ requires-dist = [
    { name = "pandas", extras = ["excel", "output-formatting", "performance"], specifier = "~=2.2.2" },
    { name = "psycogreen", specifier = "~=1.0.2" },
    { name = "psycopg2-binary", specifier = "~=2.9.6" },
    { name = "pycryptodome", specifier = "==3.23.0" },
    { name = "pycryptodome", specifier = "==3.19.1" },
    { name = "pydantic", specifier = "~=2.11.4" },
    { name = "pydantic-extra-types", specifier = "~=2.10.3" },
    { name = "pydantic-settings", specifier = "~=2.11.0" },
@ -4796,21 +4796,20 @@ wheels = [

[[package]]
name = "pycryptodome"
version = "3.23.0"
version = "3.19.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/8e/a6/8452177684d5e906854776276ddd34eca30d1b1e15aa1ee9cefc289a33f5/pycryptodome-3.23.0.tar.gz", hash = "sha256:447700a657182d60338bab09fdb27518f8856aecd80ae4c6bdddb67ff5da44ef", size = 4921276, upload-time = "2025-05-17T17:21:45.242Z" }
sdist = { url = "https://files.pythonhosted.org/packages/b1/38/42a8855ff1bf568c61ca6557e2203f318fb7afeadaf2eb8ecfdbde107151/pycryptodome-3.19.1.tar.gz", hash = "sha256:8ae0dd1bcfada451c35f9e29a3e5db385caabc190f98e4a80ad02a61098fb776", size = 4782144, upload-time = "2023-12-28T06:52:40.741Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/db/6c/a1f71542c969912bb0e106f64f60a56cc1f0fabecf9396f45accbe63fa68/pycryptodome-3.23.0-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:187058ab80b3281b1de11c2e6842a357a1f71b42cb1e15bce373f3d238135c27", size = 2495627, upload-time = "2025-05-17T17:20:47.139Z" },
    { url = "https://files.pythonhosted.org/packages/6e/4e/a066527e079fc5002390c8acdd3aca431e6ea0a50ffd7201551175b47323/pycryptodome-3.23.0-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:cfb5cd445280c5b0a4e6187a7ce8de5a07b5f3f897f235caa11f1f435f182843", size = 1640362, upload-time = "2025-05-17T17:20:50.392Z" },
    { url = "https://files.pythonhosted.org/packages/50/52/adaf4c8c100a8c49d2bd058e5b551f73dfd8cb89eb4911e25a0c469b6b4e/pycryptodome-3.23.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67bd81fcbe34f43ad9422ee8fd4843c8e7198dd88dd3d40e6de42ee65fbe1490", size = 2182625, upload-time = "2025-05-17T17:20:52.866Z" },
    { url = "https://files.pythonhosted.org/packages/5f/e9/a09476d436d0ff1402ac3867d933c61805ec2326c6ea557aeeac3825604e/pycryptodome-3.23.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c8987bd3307a39bc03df5c8e0e3d8be0c4c3518b7f044b0f4c15d1aa78f52575", size = 2268954, upload-time = "2025-05-17T17:20:55.027Z" },
    { url = "https://files.pythonhosted.org/packages/f9/c5/ffe6474e0c551d54cab931918127c46d70cab8f114e0c2b5a3c071c2f484/pycryptodome-3.23.0-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aa0698f65e5b570426fc31b8162ed4603b0c2841cbb9088e2b01641e3065915b", size = 2308534, upload-time = "2025-05-17T17:20:57.279Z" },
    { url = "https://files.pythonhosted.org/packages/18/28/e199677fc15ecf43010f2463fde4c1a53015d1fe95fb03bca2890836603a/pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:53ecbafc2b55353edcebd64bf5da94a2a2cdf5090a6915bcca6eca6cc452585a", size = 2181853, upload-time = "2025-05-17T17:20:59.322Z" },
    { url = "https://files.pythonhosted.org/packages/ce/ea/4fdb09f2165ce1365c9eaefef36625583371ee514db58dc9b65d3a255c4c/pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_i686.whl", hash = "sha256:156df9667ad9f2ad26255926524e1c136d6664b741547deb0a86a9acf5ea631f", size = 2342465, upload-time = "2025-05-17T17:21:03.83Z" },
    { url = "https://files.pythonhosted.org/packages/22/82/6edc3fc42fe9284aead511394bac167693fb2b0e0395b28b8bedaa07ef04/pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:dea827b4d55ee390dc89b2afe5927d4308a8b538ae91d9c6f7a5090f397af1aa", size = 2267414, upload-time = "2025-05-17T17:21:06.72Z" },
    { url = "https://files.pythonhosted.org/packages/59/fe/aae679b64363eb78326c7fdc9d06ec3de18bac68be4b612fc1fe8902693c/pycryptodome-3.23.0-cp37-abi3-win32.whl", hash = "sha256:507dbead45474b62b2bbe318eb1c4c8ee641077532067fec9c1aa82c31f84886", size = 1768484, upload-time = "2025-05-17T17:21:08.535Z" },
    { url = "https://files.pythonhosted.org/packages/54/2f/e97a1b8294db0daaa87012c24a7bb714147c7ade7656973fd6c736b484ff/pycryptodome-3.23.0-cp37-abi3-win_amd64.whl", hash = "sha256:c75b52aacc6c0c260f204cbdd834f76edc9fb0d8e0da9fbf8352ef58202564e2", size = 1799636, upload-time = "2025-05-17T17:21:10.393Z" },
    { url = "https://files.pythonhosted.org/packages/18/3d/f9441a0d798bf2b1e645adc3265e55706aead1255ccdad3856dbdcffec14/pycryptodome-3.23.0-cp37-abi3-win_arm64.whl", hash = "sha256:11eeeb6917903876f134b56ba11abe95c0b0fd5e3330def218083c7d98bbcb3c", size = 1703675, upload-time = "2025-05-17T17:21:13.146Z" },
    { url = "https://files.pythonhosted.org/packages/a8/ef/4931bc30674f0de0ca0e827b58c8b0c17313a8eae2754976c610b866118b/pycryptodome-3.19.1-cp35-abi3-macosx_10_9_universal2.whl", hash = "sha256:67939a3adbe637281c611596e44500ff309d547e932c449337649921b17b6297", size = 2417027, upload-time = "2023-12-28T06:51:50.138Z" },
    { url = "https://files.pythonhosted.org/packages/67/e6/238c53267fd8d223029c0a0d3730cb1b6594d60f62e40c4184703dc490b1/pycryptodome-3.19.1-cp35-abi3-macosx_10_9_x86_64.whl", hash = "sha256:11ddf6c9b52116b62223b6a9f4741bc4f62bb265392a4463282f7f34bb287180", size = 1579728, upload-time = "2023-12-28T06:51:52.385Z" },
    { url = "https://files.pythonhosted.org/packages/7c/87/7181c42c8d5ba89822a4b824830506d0aeec02959bb893614767e3279846/pycryptodome-3.19.1-cp35-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e3e6f89480616781d2a7f981472d0cdb09b9da9e8196f43c1234eff45c915766", size = 2051440, upload-time = "2023-12-28T06:51:55.751Z" },
    { url = "https://files.pythonhosted.org/packages/34/dd/332c4c0055527d17dac317ed9f9c864fc047b627d82f4b9a56c110afc6fc/pycryptodome-3.19.1-cp35-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:27e1efcb68993b7ce5d1d047a46a601d41281bba9f1971e6be4aa27c69ab8065", size = 2125379, upload-time = "2023-12-28T06:51:58.567Z" },
    { url = "https://files.pythonhosted.org/packages/24/9e/320b885ea336c218ff54ec2b276cd70ba6904e4f5a14a771ed39a2c47d59/pycryptodome-3.19.1-cp35-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c6273ca5a03b672e504995529b8bae56da0ebb691d8ef141c4aa68f60765700", size = 2153951, upload-time = "2023-12-28T06:52:01.699Z" },
    { url = "https://files.pythonhosted.org/packages/f4/54/8ae0c43d1257b41bc9d3277c3f875174fd8ad86b9567f0b8609b99c938ee/pycryptodome-3.19.1-cp35-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:b0bfe61506795877ff974f994397f0c862d037f6f1c0bfc3572195fc00833b96", size = 2044041, upload-time = "2023-12-28T06:52:03.737Z" },
    { url = "https://files.pythonhosted.org/packages/45/93/f8450a92cc38541c3ba1f4cb4e267e15ae6d6678ca617476d52c3a3764d4/pycryptodome-3.19.1-cp35-abi3-musllinux_1_1_i686.whl", hash = "sha256:f34976c5c8eb79e14c7d970fb097482835be8d410a4220f86260695ede4c3e17", size = 2182446, upload-time = "2023-12-28T06:52:05.588Z" },
    { url = "https://files.pythonhosted.org/packages/af/cd/ed6e429fb0792ce368f66e83246264dd3a7a045b0b1e63043ed22a063ce5/pycryptodome-3.19.1-cp35-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:7c9e222d0976f68d0cf6409cfea896676ddc1d98485d601e9508f90f60e2b0a2", size = 2144914, upload-time = "2023-12-28T06:52:07.44Z" },
    { url = "https://files.pythonhosted.org/packages/f6/23/b064bd4cfbf2cc5f25afcde0e7c880df5b20798172793137ba4b62d82e72/pycryptodome-3.19.1-cp35-abi3-win32.whl", hash = "sha256:4805e053571140cb37cf153b5c72cd324bb1e3e837cbe590a19f69b6cf85fd03", size = 1713105, upload-time = "2023-12-28T06:52:09.585Z" },
    { url = "https://files.pythonhosted.org/packages/7d/e0/ded1968a5257ab34216a0f8db7433897a2337d59e6d03be113713b346ea2/pycryptodome-3.19.1-cp35-abi3-win_amd64.whl", hash = "sha256:a470237ee71a1efd63f9becebc0ad84b88ec28e6784a2047684b693f458f41b7", size = 1749222, upload-time = "2023-12-28T06:52:11.534Z" },
]

[[package]]
@ -5004,11 +5003,11 @@ wheels = [

[[package]]
name = "pypdf"
version = "6.6.2"
version = "6.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b8/bb/a44bab1ac3c54dbcf653d7b8bcdee93dddb2d3bf025a3912cacb8149a2f2/pypdf-6.6.2.tar.gz", hash = "sha256:0a3ea3b3303982333404e22d8f75d7b3144f9cf4b2970b96856391a516f9f016", size = 5281850, upload-time = "2026-01-26T11:57:55.964Z" }
sdist = { url = "https://files.pythonhosted.org/packages/d8/f4/801632a8b62a805378b6af2b5a3fcbfd8923abf647e0ed1af846a83433b2/pypdf-6.6.0.tar.gz", hash = "sha256:4c887ef2ea38d86faded61141995a3c7d068c9d6ae8477be7ae5de8a8e16592f", size = 5281063, upload-time = "2026-01-09T11:20:11.786Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/7d/be/549aaf1dfa4ab4aed29b09703d2fb02c4366fc1f05e880948c296c5764b9/pypdf-6.6.2-py3-none-any.whl", hash = "sha256:44c0c9811cfb3b83b28f1c3d054531d5b8b81abaedee0d8cb403650d023832ba", size = 329132, upload-time = "2026-01-26T11:57:54.099Z" },
    { url = "https://files.pythonhosted.org/packages/b2/ba/96f99276194f720e74ed99905a080f6e77810558874e8935e580331b46de/pypdf-6.6.0-py3-none-any.whl", hash = "sha256:bca9091ef6de36c7b1a81e09327c554b7ce51e88dad68f5890c2b4a4417f1fd7", size = 328963, upload-time = "2026-01-09T11:20:09.278Z" },
]

[[package]]
@ -1,15 +1,27 @@
import type { StorybookConfig } from '@storybook/nextjs-vite'
import type { StorybookConfig } from '@storybook/nextjs'
import path from 'node:path'
import { fileURLToPath } from 'node:url'

const storybookDir = path.dirname(fileURLToPath(import.meta.url))

const config: StorybookConfig = {
  stories: ['../app/components/**/*.stories.@(js|jsx|mjs|ts|tsx)'],
  addons: [
    // Not working with Storybook Vite framework
    // '@storybook/addon-onboarding',
    '@storybook/addon-onboarding',
    '@storybook/addon-links',
    '@storybook/addon-docs',
    '@chromatic-com/storybook',
  ],
  framework: '@storybook/nextjs-vite',
  framework: {
    name: '@storybook/nextjs',
    options: {
      builder: {
        useSWC: true,
        lazyCompilation: false,
      },
      nextConfigPath: undefined,
    },
  },
  staticDirs: ['../public'],
  core: {
    disableWhatsNewNotifications: true,
@ -17,5 +29,17 @@ const config: StorybookConfig = {
  docs: {
    defaultName: 'Documentation',
  },
  webpackFinal: async (config) => {
    // Add alias to mock problematic modules with circular dependencies
    config.resolve = config.resolve || {}
    config.resolve.alias = {
      ...config.resolve.alias,
      // Mock the plugin index files to avoid circular dependencies
      [path.resolve(storybookDir, '../app/components/base/prompt-editor/plugins/context-block/index.tsx')]: path.resolve(storybookDir, '__mocks__/context-block.tsx'),
      [path.resolve(storybookDir, '../app/components/base/prompt-editor/plugins/history-block/index.tsx')]: path.resolve(storybookDir, '__mocks__/history-block.tsx'),
      [path.resolve(storybookDir, '../app/components/base/prompt-editor/plugins/query-block/index.tsx')]: path.resolve(storybookDir, '__mocks__/query-block.tsx'),
    }
    return config
  },
}
export default config
@ -1,91 +0,0 @@
'use client'
import type { FC } from 'react'
import type { ModelAndParameter } from './types'
import {
  RiAddLine,
  RiEqualizer2Line,
} from '@remixicon/react'
import { useTranslation } from 'react-i18next'
import ActionButton, { ActionButtonState } from '@/app/components/base/action-button'
import Button from '@/app/components/base/button'
import { RefreshCcw01 } from '@/app/components/base/icons/src/vender/line/arrows'
import TooltipPlus from '@/app/components/base/tooltip'
import { AppModeEnum } from '@/types/app'

type DebugHeaderProps = {
  readonly?: boolean
  mode: AppModeEnum
  debugWithMultipleModel: boolean
  multipleModelConfigs: ModelAndParameter[]
  varListLength: number
  expanded: boolean
  onExpandedChange: (expanded: boolean) => void
  onClearConversation: () => void
  onAddModel: () => void
}

const DebugHeader: FC<DebugHeaderProps> = ({
  readonly,
  mode,
  debugWithMultipleModel,
  multipleModelConfigs,
  varListLength,
  expanded,
  onExpandedChange,
  onClearConversation,
  onAddModel,
}) => {
  const { t } = useTranslation()

  return (
    <div className="flex items-center justify-between px-4 pb-2 pt-3">
      <div className="system-xl-semibold text-text-primary">{t('inputs.title', { ns: 'appDebug' })}</div>
      <div className="flex items-center">
        {debugWithMultipleModel && (
          <>
            <Button
              variant="ghost-accent"
              onClick={onAddModel}
              disabled={multipleModelConfigs.length >= 4}
            >
              <RiAddLine className="mr-1 h-3.5 w-3.5" />
              {t('modelProvider.addModel', { ns: 'common' })}
              (
              {multipleModelConfigs.length}
              /4)
            </Button>
            <div className="mx-2 h-[14px] w-[1px] bg-divider-regular" />
          </>
        )}
        {mode !== AppModeEnum.COMPLETION && (
          <>
            {!readonly && (
              <TooltipPlus popupContent={t('operation.refresh', { ns: 'common' })}>
                <ActionButton onClick={onClearConversation}>
                  <RefreshCcw01 className="h-4 w-4" />
                </ActionButton>
              </TooltipPlus>
            )}
            {varListLength > 0 && (
              <div className="relative ml-1 mr-2">
                <TooltipPlus popupContent={t('panel.userInputField', { ns: 'workflow' })}>
                  <ActionButton
                    state={expanded ? ActionButtonState.Active : undefined}
                    onClick={() => !readonly && onExpandedChange(!expanded)}
                  >
                    <RiEqualizer2Line className="h-4 w-4" />
                  </ActionButton>
                </TooltipPlus>
                {expanded && (
                  <div className="absolute bottom-[-14px] right-[5px] z-10 h-3 w-3 rotate-45 border-l-[0.5px] border-t-[0.5px] border-components-panel-border-subtle bg-components-panel-on-panel-item-bg" />
                )}
              </div>
            )}
          </>
        )}
      </div>
    </div>
  )
}

export default DebugHeader
@ -1,737 +0,0 @@
|
||||
import type { ModelAndParameter } from '../types'
|
||||
import type { ChatConfig } from '@/app/components/base/chat/types'
|
||||
import { render, screen, waitFor } from '@testing-library/react'
|
||||
import { ModelFeatureEnum } from '@/app/components/header/account-setting/model-provider-page/declarations'
|
||||
import { DEFAULT_AGENT_SETTING, DEFAULT_CHAT_PROMPT_CONFIG, DEFAULT_COMPLETION_PROMPT_CONFIG } from '@/config'
|
||||
import { ModelModeType } from '@/types/app'
|
||||
import { APP_CHAT_WITH_MULTIPLE_MODEL, APP_CHAT_WITH_MULTIPLE_MODEL_RESTART } from '../types'
|
||||
import ChatItem from './chat-item'
|
||||
|
||||
const mockUseAppContext = vi.fn()
|
||||
const mockUseDebugConfigurationContext = vi.fn()
|
||||
const mockUseProviderContext = vi.fn()
|
||||
const mockUseFeatures = vi.fn()
|
||||
const mockUseConfigFromDebugContext = vi.fn()
|
||||
const mockUseFormattingChangedSubscription = vi.fn()
|
||||
const mockUseChat = vi.fn()
|
||||
const mockUseEventEmitterContextContext = vi.fn()
|
||||
|
||||
vi.mock('@/context/app-context', () => ({
|
||||
useAppContext: () => mockUseAppContext(),
|
||||
}))
|
||||
|
||||
vi.mock('@/context/debug-configuration', () => ({
|
||||
useDebugConfigurationContext: () => mockUseDebugConfigurationContext(),
|
||||
}))
|
||||
|
||||
vi.mock('@/context/provider-context', () => ({
|
||||
useProviderContext: () => mockUseProviderContext(),
|
||||
}))
|
||||
|
||||
vi.mock('@/app/components/base/features/hooks', () => ({
|
||||
useFeatures: (selector: (state: unknown) => unknown) => mockUseFeatures(selector),
|
||||
}))
|
||||
|
||||
vi.mock('../hooks', () => ({
|
||||
useConfigFromDebugContext: () => mockUseConfigFromDebugContext(),
|
||||
useFormattingChangedSubscription: (chatList: unknown) => mockUseFormattingChangedSubscription(chatList),
|
||||
}))
|
||||
|
||||
vi.mock('@/app/components/base/chat/chat/hooks', () => ({
|
||||
useChat: (...args: unknown[]) => mockUseChat(...args),
|
||||
}))
|
||||
|
||||
vi.mock('@/context/event-emitter', () => ({
|
||||
useEventEmitterContextContext: () => mockUseEventEmitterContextContext(),
|
||||
}))
|
||||
|
||||
const mockStopChatMessageResponding = vi.fn()
|
||||
const mockFetchConversationMessages = vi.fn()
|
||||
const mockFetchSuggestedQuestions = vi.fn()
|
||||
|
||||
vi.mock('@/service/debug', () => ({
|
||||
fetchConversationMessages: (...args: unknown[]) => mockFetchConversationMessages(...args),
|
||||
fetchSuggestedQuestions: (...args: unknown[]) => mockFetchSuggestedQuestions(...args),
|
||||
stopChatMessageResponding: (...args: unknown[]) => mockStopChatMessageResponding(...args),
|
||||
}))
|
||||
|
||||
vi.mock('@/utils', () => ({
|
||||
canFindTool: (collectionId: string, providerId: string) => collectionId === providerId,
|
||||
}))
|
||||
|
||||
vi.mock('@/app/components/base/chat/utils', () => ({
|
||||
getLastAnswer: (chatList: { id: string }[]) => chatList.length > 0 ? chatList[chatList.length - 1] : null,
|
||||
}))
|
||||
|
||||
let capturedChatProps: Record<string, unknown> | null = null
|
||||
vi.mock('@/app/components/base/chat/chat', () => ({
|
||||
default: (props: Record<string, unknown>) => {
|
||||
capturedChatProps = props
|
||||
return <div data-testid="chat-component">Chat</div>
|
||||
},
|
||||
}))
|
||||
|
||||
vi.mock('@/app/components/base/avatar', () => ({
|
||||
default: ({ name }: { name: string }) => <div data-testid="avatar">{name}</div>,
|
||||
}))
|
||||
|
||||
let modelIdCounter = 0
|
||||
|
||||
const createModelAndParameter = (overrides: Partial<ModelAndParameter> = {}): ModelAndParameter => ({
|
||||
id: `model-${++modelIdCounter}`,
|
||||
model: 'gpt-3.5-turbo',
|
||||
provider: 'openai',
|
||||
parameters: { temperature: 0.7 },
|
||||
...overrides,
|
||||
})
|
||||
|
||||
const createDefaultModelConfig = () => ({
|
||||
provider: 'openai',
|
||||
model_id: 'gpt-4',
|
||||
mode: ModelModeType.chat,
|
||||
configs: {
|
||||
prompt_template: 'Hello {{name}}',
|
||||
prompt_variables: [
|
||||
{ key: 'name', name: 'Name', type: 'string' as const },
|
||||
{ key: 'api-var', name: 'API Var', type: 'api' as const },
|
||||
],
|
||||
},
|
||||
chat_prompt_config: DEFAULT_CHAT_PROMPT_CONFIG,
|
||||
completion_prompt_config: DEFAULT_COMPLETION_PROMPT_CONFIG,
|
||||
opening_statement: '',
|
||||
more_like_this: null,
|
||||
suggested_questions: [],
|
||||
suggested_questions_after_answer: null,
|
||||
speech_to_text: null,
|
||||
text_to_speech: null,
|
||||
file_upload: null,
|
||||
retriever_resource: null,
|
||||
sensitive_word_avoidance: null,
|
||||
annotation_reply: null,
|
||||
external_data_tools: [],
|
||||
dataSets: [],
|
||||
agentConfig: DEFAULT_AGENT_SETTING,
|
||||
system_parameters: {
|
||||
audio_file_size_limit: 0,
|
||||
file_size_limit: 0,
|
||||
image_file_size_limit: 0,
|
||||
video_file_size_limit: 0,
|
||||
workflow_file_upload_limit: 0,
|
||||
},
|
||||
})
|
||||
|
||||
const createDefaultFeatures = () => ({
|
||||
moreLikeThis: { enabled: false },
|
||||
opening: { enabled: true, opening_statement: 'Hello', suggested_questions: ['Q1'] },
|
||||
moderation: { enabled: false },
|
||||
speech2text: { enabled: true },
|
||||
text2speech: { enabled: false },
|
||||
file: { enabled: true, image: { enabled: true } },
|
||||
suggested: { enabled: true },
|
||||
citation: { enabled: false },
|
||||
annotationReply: { enabled: false },
|
||||
})
|
||||
|
||||
const createTextGenerationModelList = (models: Array<{
|
||||
provider: string
|
||||
model: string
|
||||
features?: string[]
|
||||
mode?: string
|
||||
}> = []) => {
|
||||
const providerMap = new Map<string, { model: string, features: string[], model_properties: { mode: string } }[]>()
|
||||
|
||||
for (const m of models) {
|
||||
if (!providerMap.has(m.provider)) {
|
||||
providerMap.set(m.provider, [])
|
||||
}
|
||||
providerMap.get(m.provider)!.push({
|
||||
model: m.model,
|
||||
features: m.features ?? [],
|
||||
model_properties: { mode: m.mode ?? 'chat' },
|
||||
})
|
||||
}
|
||||
|
||||
return Array.from(providerMap.entries()).map(([provider, modelsList]) => ({
|
||||
provider,
|
||||
models: modelsList,
|
||||
}))
|
||||
}
|
||||
|
||||
describe('ChatItem', () => {
|
||||
let subscriptionCallback: ((v: { type: string, payload?: { message: string, files?: unknown[] } }) => void) | null = null
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks()
|
||||
modelIdCounter = 0
|
||||
capturedChatProps = null
|
||||
subscriptionCallback = null
|
||||
|
||||
mockUseAppContext.mockReturnValue({
|
||||
userProfile: { avatar_url: 'avatar.png', name: 'Test User' },
|
||||
})
|
||||
|
||||
mockUseDebugConfigurationContext.mockReturnValue({
|
||||
modelConfig: createDefaultModelConfig(),
|
||||
appId: 'test-app-id',
|
||||
inputs: { name: 'World' },
|
||||
collectionList: [],
|
||||
})
|
||||
|
||||
mockUseProviderContext.mockReturnValue({
|
||||
textGenerationModelList: createTextGenerationModelList([
|
||||
{ provider: 'openai', model: 'gpt-3.5-turbo', features: [ModelFeatureEnum.vision], mode: 'chat' },
|
||||
{ provider: 'openai', model: 'gpt-4', features: [], mode: 'chat' },
|
||||
]),
|
||||
})
|
||||
|
||||
const features = createDefaultFeatures()
|
||||
mockUseFeatures.mockImplementation((selector: (state: { features: ReturnType<typeof createDefaultFeatures> }) => unknown) => selector({ features }))
|
||||
|
||||
mockUseConfigFromDebugContext.mockReturnValue({
|
||||
baseConfig: true,
|
||||
})
|
||||
|
||||
mockUseChat.mockReturnValue({
|
||||
chatList: [{ id: 'msg-1', content: 'Hello' }],
|
||||
isResponding: false,
|
||||
handleSend: vi.fn(),
|
||||
suggestedQuestions: [],
|
||||
handleRestart: vi.fn(),
|
||||
})
|
||||
|
||||
mockUseEventEmitterContextContext.mockReturnValue({
|
||||
eventEmitter: {
|
||||
// eslint-disable-next-line react/no-unnecessary-use-prefix -- mocking real API
|
||||
useSubscription: (callback: (v: { type: string, payload?: { message: string, files?: unknown[] } }) => void) => {
|
||||
subscriptionCallback = callback
|
||||
},
|
||||
},
|
||||
})
|
||||
})
|
||||
|
||||
describe('rendering', () => {
|
||||
it('should render Chat component when chatList is not empty', () => {
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
expect(screen.getByTestId('chat-component')).toBeInTheDocument()
|
||||
})
|
||||
|
||||
it('should return null when chatList is empty', () => {
|
||||
mockUseChat.mockReturnValue({
|
||||
chatList: [],
|
||||
isResponding: false,
|
||||
handleSend: vi.fn(),
|
||||
suggestedQuestions: [],
|
||||
handleRestart: vi.fn(),
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
const { container } = render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
expect(container.firstChild).toBeNull()
|
||||
})
|
||||
|
||||
it('should pass correct props to Chat component', () => {
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
expect(capturedChatProps!.noChatInput).toBe(true)
|
||||
expect(capturedChatProps!.noStopResponding).toBe(true)
|
||||
expect(capturedChatProps!.showPromptLog).toBe(true)
|
||||
expect(capturedChatProps!.hideLogModal).toBe(true)
|
||||
expect(capturedChatProps!.noSpacing).toBe(true)
|
||||
expect(capturedChatProps!.chatContainerClassName).toBe('p-4')
|
||||
expect(capturedChatProps!.chatFooterClassName).toBe('p-4 pb-0')
|
||||
})
|
||||
})
|
||||
|
||||
describe('config building', () => {
|
||||
it('should merge configTemplate with features', () => {
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
const config = capturedChatProps!.config as ChatConfig & { baseConfig?: boolean }
|
||||
expect(config.baseConfig).toBe(true)
|
||||
expect(config.more_like_this).toEqual({ enabled: false })
|
||||
expect(config.opening_statement).toBe('Hello')
|
||||
expect(config.suggested_questions).toEqual(['Q1'])
|
||||
expect(config.speech_to_text).toEqual({ enabled: true })
|
||||
expect(config.file_upload).toEqual({ enabled: true, image: { enabled: true } })
|
||||
})
|
||||
|
||||
it('should use empty opening_statement when opening is disabled', () => {
|
||||
const features = createDefaultFeatures()
|
||||
features.opening = { enabled: false, opening_statement: 'Hello', suggested_questions: ['Q1'] }
|
||||
mockUseFeatures.mockImplementation((selector: (state: { features: ReturnType<typeof createDefaultFeatures> }) => unknown) => selector({ features }))
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
const config = capturedChatProps!.config as ChatConfig
|
||||
expect(config.opening_statement).toBe('')
|
||||
expect(config.suggested_questions).toEqual([])
|
||||
})
|
||||
|
||||
it('should use empty string fallback when opening_statement is undefined', () => {
|
||||
const features = createDefaultFeatures()
|
||||
// eslint-disable-next-line ts/no-explicit-any -- Testing edge case with undefined
|
||||
features.opening = { enabled: true, opening_statement: undefined as any, suggested_questions: ['Q1'] }
|
||||
mockUseFeatures.mockImplementation((selector: (state: { features: ReturnType<typeof createDefaultFeatures> }) => unknown) => selector({ features }))
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
const config = capturedChatProps!.config as ChatConfig
|
||||
expect(config.opening_statement).toBe('')
|
||||
})
|
||||
|
||||
it('should use empty array fallback when suggested_questions is undefined', () => {
|
||||
const features = createDefaultFeatures()
|
||||
// eslint-disable-next-line ts/no-explicit-any -- Testing edge case with undefined
|
||||
features.opening = { enabled: true, opening_statement: 'Hello', suggested_questions: undefined as any }
|
||||
mockUseFeatures.mockImplementation((selector: (state: { features: ReturnType<typeof createDefaultFeatures> }) => unknown) => selector({ features }))
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
const config = capturedChatProps!.config as ChatConfig
|
||||
expect(config.suggested_questions).toEqual([])
|
||||
})
|
||||
|
||||
it('should handle undefined opening feature', () => {
|
||||
const features = createDefaultFeatures()
|
||||
// eslint-disable-next-line ts/no-explicit-any -- Testing edge case with undefined
|
||||
features.opening = undefined as any
|
||||
mockUseFeatures.mockImplementation((selector: (state: { features: ReturnType<typeof createDefaultFeatures> }) => unknown) => selector({ features }))
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
const config = capturedChatProps!.config as ChatConfig
|
||||
expect(config.opening_statement).toBe('')
|
||||
expect(config.suggested_questions).toEqual([])
|
||||
})
|
||||
})
|
||||
|
||||
describe('inputsForm transformation', () => {
|
||||
it('should filter out api type variables and map to InputForm', () => {
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
// The useChat is called with inputsForm
|
||||
const useChatCall = mockUseChat.mock.calls[0]
|
||||
const inputsForm = useChatCall[1].inputsForm
|
||||
|
||||
expect(inputsForm).toHaveLength(1)
|
||||
expect(inputsForm[0]).toEqual(expect.objectContaining({
|
||||
key: 'name',
|
||||
label: 'Name',
|
||||
variable: 'name',
|
||||
}))
|
||||
})
|
||||
})
|
||||
|
||||
describe('event subscription', () => {
|
||||
it('should handle APP_CHAT_WITH_MULTIPLE_MODEL event', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseChat.mockReturnValue({
|
||||
chatList: [{ id: 'msg-1' }],
|
||||
isResponding: false,
|
||||
handleSend,
|
||||
suggestedQuestions: [],
|
||||
handleRestart: vi.fn(),
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
// Trigger the event
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test message', files: [{ id: 'file-1' }] },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
expect(handleSend).toHaveBeenCalled()
|
||||
})
|
||||
})
|
||||
|
||||
it('should handle APP_CHAT_WITH_MULTIPLE_MODEL_RESTART event', async () => {
|
||||
const handleRestart = vi.fn()
|
||||
mockUseChat.mockReturnValue({
|
||||
chatList: [{ id: 'msg-1' }],
|
||||
isResponding: false,
|
||||
handleSend: vi.fn(),
|
||||
suggestedQuestions: [],
|
||||
handleRestart,
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
// Trigger the event
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL_RESTART,
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
expect(handleRestart).toHaveBeenCalled()
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
describe('doSend', () => {
|
||||
it('should find current provider and model from textGenerationModelList', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseChat.mockReturnValue({
|
||||
chatList: [{ id: 'msg-1' }],
|
||||
isResponding: false,
|
||||
handleSend,
|
||||
suggestedQuestions: [],
|
||||
handleRestart: vi.fn(),
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-3.5-turbo' })
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test', files: [] },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
expect(handleSend).toHaveBeenCalledWith(
|
||||
'apps/test-app-id/chat-messages',
|
||||
expect.objectContaining({
|
||||
query: 'test',
|
||||
inputs: { name: 'World' },
|
||||
model_config: expect.objectContaining({
|
||||
model: expect.objectContaining({
|
||||
provider: 'openai',
|
||||
name: 'gpt-3.5-turbo',
|
||||
mode: 'chat',
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
expect.any(Object),
|
||||
)
|
||||
})
|
||||
})
|
||||
|
||||
it('should include files when file upload is enabled and vision is supported', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseChat.mockReturnValue({
|
||||
chatList: [{ id: 'msg-1' }],
|
||||
isResponding: false,
|
||||
handleSend,
|
||||
suggestedQuestions: [],
|
||||
handleRestart: vi.fn(),
|
||||
})
|
||||
|
||||
// gpt-3.5-turbo has vision feature
|
||||
const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-3.5-turbo' })
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
const files = [{ id: 'file-1', name: 'image.png' }]
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test', files },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
expect(handleSend).toHaveBeenCalledWith(
|
||||
expect.any(String),
|
||||
expect.objectContaining({
|
||||
files,
|
||||
}),
|
||||
expect.any(Object),
|
||||
)
|
||||
})
|
||||
})
|
||||
|
||||
it('should not include files when vision is not supported', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseChat.mockReturnValue({
|
||||
chatList: [{ id: 'msg-1' }],
|
||||
isResponding: false,
|
||||
handleSend,
|
||||
suggestedQuestions: [],
|
||||
handleRestart: vi.fn(),
|
||||
})
|
||||
|
||||
// gpt-4 does not have vision feature
|
||||
const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
const files = [{ id: 'file-1', name: 'image.png' }]
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test', files },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
const callArgs = handleSend.mock.calls[0][1]
|
||||
expect(callArgs.files).toBeUndefined()
|
||||
})
|
||||
})
|
||||
|
||||
it('should handle provider not found in textGenerationModelList', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseChat.mockReturnValue({
|
||||
chatList: [{ id: 'msg-1' }],
|
||||
isResponding: false,
|
||||
handleSend,
|
||||
suggestedQuestions: [],
|
||||
handleRestart: vi.fn(),
|
||||
})
|
||||
|
||||
// Use a provider that doesn't exist in the list
|
||||
const modelAndParameter = createModelAndParameter({ provider: 'unknown-provider', model: 'unknown-model' })
|
||||
|
||||
render(<ChatItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test', files: [{ id: 'file-1' }] },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
expect(handleSend).toHaveBeenCalled()
|
||||
const callArgs = handleSend.mock.calls[0][1]
|
||||
// Files should not be included when provider/model not found (no vision support)
|
||||
expect(callArgs.files).toBeUndefined()
|
||||
})
|
||||
})
|
||||
|
||||
it('should handle model with no features array', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseChat.mockReturnValue({
|
||||
chatList: [{ id: 'msg-1' }],
|
||||
isResponding: false,
|
||||
handleSend,
|
||||
suggestedQuestions: [],
|
||||
handleRestart: vi.fn(),
|
||||
})
|
||||
|
||||
// Model list where model has no features property
|
||||
mockUseProviderContext.mockReturnValue({
|
||||
textGenerationModelList: [
|
||||
{
|
||||
provider: 'custom',
|
            models: [{ model: 'custom-model', model_properties: { mode: 'chat' } }],
          },
        ],
      })

      const modelAndParameter = createModelAndParameter({ provider: 'custom', model: 'custom-model' })

      render(<ChatItem modelAndParameter={modelAndParameter} />)

      subscriptionCallback?.({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: { message: 'test', files: [{ id: 'file-1' }] },
      })

      await waitFor(() => {
        expect(handleSend).toHaveBeenCalled()
        const callArgs = handleSend.mock.calls[0][1]
        // Files should not be included when features is undefined
        expect(callArgs.files).toBeUndefined()
      })
    })

    it('should handle undefined files parameter', async () => {
      const handleSend = vi.fn()
      mockUseChat.mockReturnValue({
        chatList: [{ id: 'msg-1' }],
        isResponding: false,
        handleSend,
        suggestedQuestions: [],
        handleRestart: vi.fn(),
      })

      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-3.5-turbo' })

      render(<ChatItem modelAndParameter={modelAndParameter} />)

      subscriptionCallback?.({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: { message: 'test', files: undefined },
      })

      await waitFor(() => {
        expect(handleSend).toHaveBeenCalled()
        const callArgs = handleSend.mock.calls[0][1]
        expect(callArgs.files).toBeUndefined()
      })
    })
  })

  describe('tool icons building', () => {
    it('should build tool icons from agent config', () => {
      mockUseDebugConfigurationContext.mockReturnValue({
        modelConfig: {
          ...createDefaultModelConfig(),
          agentConfig: {
            tools: [
              { tool_name: 'search', provider_id: 'provider-1' },
              { tool_name: 'calculator', provider_id: 'provider-2' },
            ],
          },
        },
        appId: 'test-app-id',
        inputs: {},
        collectionList: [
          { id: 'provider-1', icon: 'search-icon' },
          { id: 'provider-2', icon: 'calc-icon' },
        ],
      })

      const modelAndParameter = createModelAndParameter()

      render(<ChatItem modelAndParameter={modelAndParameter} />)

      expect(capturedChatProps!.allToolIcons).toEqual({
        search: 'search-icon',
        calculator: 'calc-icon',
      })
    })

    it('should handle missing tools gracefully', () => {
      mockUseDebugConfigurationContext.mockReturnValue({
        modelConfig: {
          ...createDefaultModelConfig(),
          agentConfig: {
            tools: undefined,
          },
        },
        appId: 'test-app-id',
        inputs: {},
        collectionList: [],
      })

      const modelAndParameter = createModelAndParameter()

      render(<ChatItem modelAndParameter={modelAndParameter} />)

      expect(capturedChatProps!.allToolIcons).toEqual({})
    })
  })

  describe('useFormattingChangedSubscription', () => {
    it('should call useFormattingChangedSubscription with chatList', () => {
      const chatList = [{ id: 'msg-1' }, { id: 'msg-2' }]
      mockUseChat.mockReturnValue({
        chatList,
        isResponding: false,
        handleSend: vi.fn(),
        suggestedQuestions: [],
        handleRestart: vi.fn(),
      })

      const modelAndParameter = createModelAndParameter()

      render(<ChatItem modelAndParameter={modelAndParameter} />)

      expect(mockUseFormattingChangedSubscription).toHaveBeenCalledWith(chatList)
    })
  })

  describe('useChat callbacks', () => {
    it('should pass stopChatMessageResponding callback to useChat', () => {
      const modelAndParameter = createModelAndParameter()

      render(<ChatItem modelAndParameter={modelAndParameter} />)

      // Get the stopResponding callback passed to useChat (4th argument)
      const useChatCall = mockUseChat.mock.calls[0]
      const stopRespondingCallback = useChatCall[3]

      // Invoke it with a taskId
      stopRespondingCallback('test-task-id')

      expect(mockStopChatMessageResponding).toHaveBeenCalledWith('test-app-id', 'test-task-id')
    })

    it('should pass onGetConversationMessages callback to handleSend', async () => {
      const handleSend = vi.fn()
      mockUseChat.mockReturnValue({
        chatList: [{ id: 'msg-1' }],
        isResponding: false,
        handleSend,
        suggestedQuestions: [],
        handleRestart: vi.fn(),
      })

      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-3.5-turbo' })

      render(<ChatItem modelAndParameter={modelAndParameter} />)

      subscriptionCallback?.({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: { message: 'test', files: [] },
      })

      await waitFor(() => {
        expect(handleSend).toHaveBeenCalled()
      })

      // Get the callbacks object (3rd argument to handleSend)
      const callbacks = handleSend.mock.calls[0][2]

      // Invoke onGetConversationMessages
      const mockGetAbortController = vi.fn()
      callbacks.onGetConversationMessages('conv-123', mockGetAbortController)

      expect(mockFetchConversationMessages).toHaveBeenCalledWith('test-app-id', 'conv-123', mockGetAbortController)
    })

    it('should pass onGetSuggestedQuestions callback to handleSend', async () => {
      const handleSend = vi.fn()
      mockUseChat.mockReturnValue({
        chatList: [{ id: 'msg-1' }],
        isResponding: false,
        handleSend,
        suggestedQuestions: [],
        handleRestart: vi.fn(),
      })

      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-3.5-turbo' })

      render(<ChatItem modelAndParameter={modelAndParameter} />)

      subscriptionCallback?.({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: { message: 'test', files: [] },
      })

      await waitFor(() => {
        expect(handleSend).toHaveBeenCalled()
      })

      // Get the callbacks object (3rd argument to handleSend)
      const callbacks = handleSend.mock.calls[0][2]

      // Invoke onGetSuggestedQuestions
      const mockGetAbortController = vi.fn()
      callbacks.onGetSuggestedQuestions('response-item-123', mockGetAbortController)

      expect(mockFetchSuggestedQuestions).toHaveBeenCalledWith('test-app-id', 'response-item-123', mockGetAbortController)
    })
  })
})
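The ChatItem specs above never touch a real event bus: they capture the callback the component hands to `eventEmitter.useSubscription` and invoke it by hand (`subscriptionCallback?.({ ... })`). A minimal sketch of that capture pattern, assuming a vitest setup like the one in these specs (the mock shape is illustrative, not the repo's exact one):

```tsx
// Sketch of the subscription-capture pattern used above: store the callback
// the component registers, then fire events synchronously from inside a test.
let subscriptionCallback: ((event: { type: string, payload?: unknown }) => void) | null = null

vi.mock('@/context/event-emitter', () => ({
  useEventEmitterContextContext: () => ({
    eventEmitter: {
      // The component calls eventEmitter.useSubscription(cb); the mock only records cb.
      useSubscription: (cb: (event: { type: string, payload?: unknown }) => void) => {
        subscriptionCallback = cb
      },
    },
  }),
}))

// In a test, simulate an app-level broadcast without a real emitter:
// subscriptionCallback?.({ type: APP_CHAT_WITH_MULTIPLE_MODEL, payload: { message: 'hi', files: [] } })
```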
@@ -1,599 +0,0 @@
import type { ModelAndParameter } from '../types'
import { fireEvent, render, screen } from '@testing-library/react'
import { ModelStatusEnum } from '@/app/components/header/account-setting/model-provider-page/declarations'
import { AppModeEnum } from '@/types/app'
import DebugItem from './debug-item'

const mockUseTranslation = vi.fn()
const mockUseDebugConfigurationContext = vi.fn()
const mockUseDebugWithMultipleModelContext = vi.fn()
const mockUseProviderContext = vi.fn()

vi.mock('react-i18next', () => ({
  useTranslation: () => mockUseTranslation(),
}))

vi.mock('@/context/debug-configuration', () => ({
  useDebugConfigurationContext: () => mockUseDebugConfigurationContext(),
}))

vi.mock('./context', () => ({
  useDebugWithMultipleModelContext: () => mockUseDebugWithMultipleModelContext(),
}))

vi.mock('@/context/provider-context', () => ({
  useProviderContext: () => mockUseProviderContext(),
}))

vi.mock('./chat-item', () => ({
  default: ({ modelAndParameter }: { modelAndParameter: ModelAndParameter }) => (
    <div data-testid="chat-item" data-model-id={modelAndParameter.id}>ChatItem</div>
  ),
}))

vi.mock('./text-generation-item', () => ({
  default: ({ modelAndParameter }: { modelAndParameter: ModelAndParameter }) => (
    <div data-testid="text-generation-item" data-model-id={modelAndParameter.id}>TextGenerationItem</div>
  ),
}))

vi.mock('./model-parameter-trigger', () => ({
  default: ({ modelAndParameter }: { modelAndParameter: ModelAndParameter }) => (
    <div data-testid="model-parameter-trigger" data-model-id={modelAndParameter.id}>ModelParameterTrigger</div>
  ),
}))

type DropdownItem = { value: string, text: string }
type DropdownProps = {
  items?: DropdownItem[]
  secondItems?: DropdownItem[]
  onSelect: (item: DropdownItem) => void
}
let capturedDropdownProps: DropdownProps | null = null
vi.mock('@/app/components/base/dropdown', () => ({
  default: (props: DropdownProps) => {
    capturedDropdownProps = props
    return (
      <div data-testid="dropdown">
        <button
          type="button"
          data-testid="dropdown-trigger"
          onClick={() => {
            // Mock dropdown menu showing items
          }}
        >
          Dropdown
        </button>
        {props.items?.map((item: DropdownItem) => (
          <button
            key={item.value}
            type="button"
            data-testid={`dropdown-item-${item.value}`}
            onClick={() => props.onSelect(item)}
          >
            {item.text}
          </button>
        ))}
        {props.secondItems?.map((item: DropdownItem) => (
          <button
            key={item.value}
            type="button"
            data-testid={`dropdown-second-item-${item.value}`}
            onClick={() => props.onSelect(item)}
          >
            {item.text}
          </button>
        ))}
      </div>
    )
  },
}))

let modelIdCounter = 0

const createModelAndParameter = (overrides: Partial<ModelAndParameter> = {}): ModelAndParameter => ({
  id: `model-${++modelIdCounter}`,
  model: 'gpt-3.5-turbo',
  provider: 'openai',
  parameters: {},
  ...overrides,
})

const createTextGenerationModelList = (models: Array<{ provider: string, model: string, status?: ModelStatusEnum }> = []) => {
  const providerMap = new Map<string, { model: string, status: ModelStatusEnum, model_properties: { mode: string }, features: string[] }[]>()

  for (const m of models) {
    if (!providerMap.has(m.provider)) {
      providerMap.set(m.provider, [])
    }
    providerMap.get(m.provider)!.push({
      model: m.model,
      status: m.status ?? ModelStatusEnum.active,
      model_properties: { mode: 'chat' },
      features: [],
    })
  }

  return Array.from(providerMap.entries()).map(([provider, modelsList]) => ({
    provider,
    models: modelsList,
  }))
}

describe('DebugItem', () => {
  beforeEach(() => {
    vi.clearAllMocks()
    modelIdCounter = 0
    capturedDropdownProps = null

    mockUseTranslation.mockReturnValue({
      t: (key: string) => key,
    })

    mockUseDebugConfigurationContext.mockReturnValue({
      mode: AppModeEnum.CHAT,
    })

    mockUseDebugWithMultipleModelContext.mockReturnValue({
      multipleModelConfigs: [],
      onMultipleModelConfigsChange: vi.fn(),
      onDebugWithMultipleModelChange: vi.fn(),
    })

    mockUseProviderContext.mockReturnValue({
      textGenerationModelList: [],
    })
  })

  describe('rendering', () => {
    it('should render with index number', () => {
      const modelAndParameter = createModelAndParameter({ id: 'model-a' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.getByText('#1')).toBeInTheDocument()
    })

    it('should render correct index for second model', () => {
      const model1 = createModelAndParameter({ id: 'model-a' })
      const model2 = createModelAndParameter({ id: 'model-b' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [model1, model2],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={model2} />)

      expect(screen.getByText('#2')).toBeInTheDocument()
    })

    it('should render ModelParameterTrigger', () => {
      const modelAndParameter = createModelAndParameter()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('model-parameter-trigger')).toBeInTheDocument()
    })

    it('should render Dropdown', () => {
      const modelAndParameter = createModelAndParameter()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('dropdown')).toBeInTheDocument()
    })

    it('should apply custom className', () => {
      const modelAndParameter = createModelAndParameter()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      const { container } = render(<DebugItem modelAndParameter={modelAndParameter} className="custom-class" />)

      expect(container.firstChild).toHaveClass('custom-class')
    })

    it('should apply custom style', () => {
      const modelAndParameter = createModelAndParameter()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      const { container } = render(<DebugItem modelAndParameter={modelAndParameter} style={{ width: '300px' }} />)

      expect(container.firstChild).toHaveStyle({ width: '300px' })
    })
  })

  describe('ChatItem rendering', () => {
    it('should render ChatItem in CHAT mode with active model', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugConfigurationContext.mockReturnValue({
        mode: AppModeEnum.CHAT,
      })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })
      mockUseProviderContext.mockReturnValue({
        textGenerationModelList: createTextGenerationModelList([
          { provider: 'openai', model: 'gpt-4', status: ModelStatusEnum.active },
        ]),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('chat-item')).toBeInTheDocument()
    })

    it('should render ChatItem in AGENT_CHAT mode with active model', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugConfigurationContext.mockReturnValue({
        mode: AppModeEnum.AGENT_CHAT,
      })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })
      mockUseProviderContext.mockReturnValue({
        textGenerationModelList: createTextGenerationModelList([
          { provider: 'openai', model: 'gpt-4', status: ModelStatusEnum.active },
        ]),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('chat-item')).toBeInTheDocument()
    })

    it('should not render ChatItem when model is not active', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugConfigurationContext.mockReturnValue({
        mode: AppModeEnum.CHAT,
      })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })
      mockUseProviderContext.mockReturnValue({
        textGenerationModelList: createTextGenerationModelList([
          { provider: 'openai', model: 'gpt-4', status: ModelStatusEnum.disabled },
        ]),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.queryByTestId('chat-item')).not.toBeInTheDocument()
    })

    it('should not render ChatItem when provider not found', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'unknown', model: 'model' })
      mockUseDebugConfigurationContext.mockReturnValue({
        mode: AppModeEnum.CHAT,
      })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })
      mockUseProviderContext.mockReturnValue({
        textGenerationModelList: createTextGenerationModelList([
          { provider: 'openai', model: 'gpt-4', status: ModelStatusEnum.active },
        ]),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.queryByTestId('chat-item')).not.toBeInTheDocument()
    })
  })

  describe('TextGenerationItem rendering', () => {
    it('should render TextGenerationItem in COMPLETION mode with active model', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugConfigurationContext.mockReturnValue({
        mode: AppModeEnum.COMPLETION,
      })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })
      mockUseProviderContext.mockReturnValue({
        textGenerationModelList: createTextGenerationModelList([
          { provider: 'openai', model: 'gpt-4', status: ModelStatusEnum.active },
        ]),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('text-generation-item')).toBeInTheDocument()
    })

    it('should not render TextGenerationItem when model is not active', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugConfigurationContext.mockReturnValue({
        mode: AppModeEnum.COMPLETION,
      })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })
      mockUseProviderContext.mockReturnValue({
        textGenerationModelList: createTextGenerationModelList([
          { provider: 'openai', model: 'gpt-4', status: ModelStatusEnum.disabled },
        ]),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.queryByTestId('text-generation-item')).not.toBeInTheDocument()
    })

    it('should not render TextGenerationItem in CHAT mode', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugConfigurationContext.mockReturnValue({
        mode: AppModeEnum.CHAT,
      })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })
      mockUseProviderContext.mockReturnValue({
        textGenerationModelList: createTextGenerationModelList([
          { provider: 'openai', model: 'gpt-4', status: ModelStatusEnum.active },
        ]),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(screen.queryByTestId('text-generation-item')).not.toBeInTheDocument()
    })
  })

  describe('dropdown menu items', () => {
    it('should show duplicate option when less than 4 models', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter, createModelAndParameter()],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(capturedDropdownProps!.items).toContainEqual(
        expect.objectContaining({ value: 'duplicate' }),
      )
    })

    it('should hide duplicate option when 4 or more models', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [
          modelAndParameter,
          createModelAndParameter(),
          createModelAndParameter(),
          createModelAndParameter(),
        ],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(capturedDropdownProps!.items).not.toContainEqual(
        expect.objectContaining({ value: 'duplicate' }),
      )
    })

    it('should show debug-as-single-model when provider and model are set', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(capturedDropdownProps!.items).toContainEqual(
        expect.objectContaining({ value: 'debug-as-single-model' }),
      )
    })

    it('should hide debug-as-single-model when provider is missing', () => {
      const modelAndParameter = createModelAndParameter({ provider: '', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(capturedDropdownProps!.items).not.toContainEqual(
        expect.objectContaining({ value: 'debug-as-single-model' }),
      )
    })

    it('should hide debug-as-single-model when model is missing', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: '' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(capturedDropdownProps!.items).not.toContainEqual(
        expect.objectContaining({ value: 'debug-as-single-model' }),
      )
    })

    it('should show remove option when more than 2 models', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter, createModelAndParameter(), createModelAndParameter()],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(capturedDropdownProps!.secondItems).toContainEqual(
        expect.objectContaining({ value: 'remove' }),
      )
    })

    it('should hide remove option when 2 or fewer models', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter, createModelAndParameter()],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      expect(capturedDropdownProps!.secondItems).toBeUndefined()
    })
  })

  describe('dropdown actions', () => {
    it('should duplicate model when clicking duplicate', () => {
      const modelAndParameter = createModelAndParameter({ id: 'model-a', provider: 'openai', model: 'gpt-4' })
      const model2 = createModelAndParameter({ id: 'model-b' })
      const onMultipleModelConfigsChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter, model2],
        onMultipleModelConfigsChange,
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      fireEvent.click(screen.getByTestId('dropdown-item-duplicate'))

      expect(onMultipleModelConfigsChange).toHaveBeenCalledWith(
        true,
        expect.arrayContaining([
          expect.objectContaining({ id: 'model-a' }),
          expect.objectContaining({ provider: 'openai', model: 'gpt-4' }),
          expect.objectContaining({ id: 'model-b' }),
        ]),
      )
      expect(onMultipleModelConfigsChange.mock.calls[0][1]).toHaveLength(3)
    })

    it('should not duplicate when already at 4 models', () => {
      const modelAndParameter = createModelAndParameter({ id: 'model-a', provider: 'openai', model: 'gpt-4' })
      const onMultipleModelConfigsChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [
          modelAndParameter,
          createModelAndParameter(),
          createModelAndParameter(),
          createModelAndParameter(),
        ],
        onMultipleModelConfigsChange,
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      // Duplicate option should not be rendered when at 4 models
      expect(screen.queryByTestId('dropdown-item-duplicate')).not.toBeInTheDocument()
    })

    it('should early return when trying to duplicate with 4 models via handleSelect', () => {
      const modelAndParameter = createModelAndParameter({ id: 'model-a', provider: 'openai', model: 'gpt-4' })
      const onMultipleModelConfigsChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [
          modelAndParameter,
          createModelAndParameter(),
          createModelAndParameter(),
          createModelAndParameter(),
        ],
        onMultipleModelConfigsChange,
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      // Directly call handleSelect with duplicate action to cover line 42
      capturedDropdownProps!.onSelect({ value: 'duplicate', text: 'Duplicate' })

      // Should not call onMultipleModelConfigsChange due to early return
      expect(onMultipleModelConfigsChange).not.toHaveBeenCalled()
    })

    it('should call onDebugWithMultipleModelChange when clicking debug-as-single-model', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      const onDebugWithMultipleModelChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange,
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      fireEvent.click(screen.getByTestId('dropdown-item-debug-as-single-model'))

      expect(onDebugWithMultipleModelChange).toHaveBeenCalledWith(modelAndParameter)
    })

    it('should remove model when clicking remove', () => {
      const modelAndParameter = createModelAndParameter({ id: 'model-a', provider: 'openai', model: 'gpt-4' })
      const model2 = createModelAndParameter({ id: 'model-b' })
      const model3 = createModelAndParameter({ id: 'model-c' })
      const onMultipleModelConfigsChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter, model2, model3],
        onMultipleModelConfigsChange,
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<DebugItem modelAndParameter={modelAndParameter} />)

      fireEvent.click(screen.getByTestId('dropdown-second-item-remove'))

      expect(onMultipleModelConfigsChange).toHaveBeenCalledWith(
        true,
        [
          expect.objectContaining({ id: 'model-b' }),
          expect.objectContaining({ id: 'model-c' }),
        ],
      )
    })
  })
})
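The DebugItem spec above leans on a captured-props mock: the stubbed Dropdown records whatever props it last received, so tests can assert on the computed menu items or call `onSelect` directly instead of clicking through the DOM. A condensed sketch of the idea (the `./widget` module is a hypothetical placeholder):

```tsx
// Sketch: record the props a mocked child receives so a test can inspect them
// or drive its callbacks directly. `./widget` is a placeholder module name.
type WidgetProps = {
  items?: { value: string, text: string }[]
  onSelect: (item: { value: string, text: string }) => void
}

let capturedProps: WidgetProps | null = null

vi.mock('./widget', () => ({
  default: (props: WidgetProps) => {
    capturedProps = props // overwritten on every render, so it reflects the latest props
    return <div data-testid="widget" />
  },
}))

// After rendering the parent:
// expect(capturedProps!.items).toContainEqual(expect.objectContaining({ value: 'duplicate' }))
// capturedProps!.onSelect({ value: 'duplicate', text: 'Duplicate' })
```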
(File diff suppressed because it is too large.)
@@ -1,7 +1,6 @@
 import type { FC } from 'react'
 import type { DebugWithMultipleModelContextType } from './context'
 import type { InputForm } from '@/app/components/base/chat/chat/type'
-import type { EnableType } from '@/app/components/base/chat/types'
 import type { FileEntity } from '@/app/components/base/file-uploader/types'
 import {
   memo,
@@ -41,7 +40,13 @@ const DebugWithMultipleModel = () => {
     if (checkCanSend && !checkCanSend())
       return

-    eventEmitter?.emit({ type: APP_CHAT_WITH_MULTIPLE_MODEL, payload: { message, files } } as any) // eslint-disable-line ts/no-explicit-any
+    eventEmitter?.emit({
+      type: APP_CHAT_WITH_MULTIPLE_MODEL,
+      payload: {
+        message,
+        files,
+      },
+    } as any)
   }, [eventEmitter, checkCanSend])

   const twoLine = multipleModelConfigs.length === 2
@@ -142,7 +147,7 @@ const DebugWithMultipleModel = () => {
         showFileUpload={false}
         onFeatureBarClick={setShowAppConfigureFeaturesModal}
         onSend={handleSend}
-        speechToTextConfig={speech2text as EnableType}
+        speechToTextConfig={speech2text as any}
         visionConfig={file}
         inputs={inputs}
         inputsForm={inputsForm}
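The hunk above only restructures the `emit` call; the `as any` cast stays because the emitter's event type is untyped. For reference, a hedged sketch of how a discriminated-union event type would let this call typecheck without the cast (the `TypedEmitter` shape is an assumption, not the project's actual API):

```ts
// Sketch (assumed API, not the repo's real emitter): a discriminated union
// over event types makes emit() checkable without `as any`.
type AppEvent =
  | { type: 'APP_CHAT_WITH_MULTIPLE_MODEL', payload: { message: string, files?: unknown[] } }
  | { type: 'APP_FORMATTING_CHANGED' }

type TypedEmitter = { emit: (event: AppEvent) => void }

function emitChat(emitter: TypedEmitter, message: string, files?: unknown[]) {
  // The union narrows the payload shape for this event type; no cast needed.
  emitter.emit({ type: 'APP_CHAT_WITH_MULTIPLE_MODEL', payload: { message, files } })
}
```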
@@ -1,436 +0,0 @@
import type * as React from 'react'
import type { ModelAndParameter } from '../types'
import { fireEvent, render, screen } from '@testing-library/react'
import { ModelStatusEnum } from '@/app/components/header/account-setting/model-provider-page/declarations'
import ModelParameterTrigger from './model-parameter-trigger'

// Mock MODEL_STATUS_TEXT that is imported in the component
vi.mock('@/app/components/header/account-setting/model-provider-page/declarations', async (importOriginal) => {
  const original = await importOriginal() as object
  return {
    ...original,
    MODEL_STATUS_TEXT: {
      'disabled': { en_US: 'Disabled', zh_Hans: '已禁用' },
      'quota-exceeded': { en_US: 'Quota Exceeded', zh_Hans: '配额已用完' },
      'no-configure': { en_US: 'No Configure', zh_Hans: '未配置凭据' },
    },
  }
})

const mockUseTranslation = vi.fn()
const mockUseDebugConfigurationContext = vi.fn()
const mockUseDebugWithMultipleModelContext = vi.fn()
const mockUseLanguage = vi.fn()

vi.mock('react-i18next', () => ({
  useTranslation: () => mockUseTranslation(),
}))

vi.mock('@/context/debug-configuration', () => ({
  useDebugConfigurationContext: () => mockUseDebugConfigurationContext(),
}))

vi.mock('./context', () => ({
  useDebugWithMultipleModelContext: () => mockUseDebugWithMultipleModelContext(),
}))

vi.mock('@/app/components/header/account-setting/model-provider-page/hooks', () => ({
  useLanguage: () => mockUseLanguage(),
}))

type RenderTriggerParams = {
  open: boolean
  currentProvider: { provider: string, icon: string } | null
  currentModel: { model: string, status: ModelStatusEnum } | null
}
type ModalProps = {
  provider: string
  modelId: string
  isAdvancedMode: boolean
  completionParams: Record<string, unknown>
  debugWithMultipleModel: boolean
  setModel: (model: { modelId: string, provider: string }) => void
  onCompletionParamsChange: (params: Record<string, unknown>) => void
  onDebugWithMultipleModelChange: () => void
  renderTrigger: (params: RenderTriggerParams) => React.ReactElement
}
let capturedModalProps: ModalProps | null = null
let mockRenderTriggerFn: ((params: RenderTriggerParams) => React.ReactElement) | null = null

vi.mock('@/app/components/header/account-setting/model-provider-page/model-parameter-modal', () => ({
  default: (props: ModalProps) => {
    capturedModalProps = props
    mockRenderTriggerFn = props.renderTrigger

    // Render the trigger with some mock data
    const triggerElement = props.renderTrigger({
      open: false,
      currentProvider: props.provider
        ? { provider: props.provider, icon: 'provider-icon' }
        : null,
      currentModel: props.modelId
        ? { model: props.modelId, status: ModelStatusEnum.active }
        : null,
    })

    return (
      <div data-testid="model-parameter-modal">
        {triggerElement}
        <button
          type="button"
          data-testid="select-model-btn"
          onClick={() => props.setModel({ modelId: 'new-model', provider: 'new-provider' })}
        >
          Select Model
        </button>
        <button
          type="button"
          data-testid="change-params-btn"
          onClick={() => props.onCompletionParamsChange({ temperature: 0.9 })}
        >
          Change Params
        </button>
        <button
          type="button"
          data-testid="debug-single-btn"
          onClick={() => props.onDebugWithMultipleModelChange()}
        >
          Debug Single
        </button>
      </div>
    )
  },
}))

vi.mock('@/app/components/header/account-setting/model-provider-page/model-icon', () => ({
  default: ({ provider, modelName }: { provider: { provider: string } | null, modelName?: string }) => (
    <div data-testid="model-icon" data-provider={provider?.provider} data-model={modelName}>
      ModelIcon
    </div>
  ),
}))

vi.mock('@/app/components/header/account-setting/model-provider-page/model-name', () => ({
  default: ({ modelItem }: { modelItem: { model: string } | null }) => (
    <div data-testid="model-name" data-model={modelItem?.model}>
      {modelItem?.model}
    </div>
  ),
}))

vi.mock('@/app/components/base/icons/src/vender/line/shapes', () => ({
  CubeOutline: () => <div data-testid="cube-icon">CubeOutline</div>,
}))

vi.mock('@/app/components/base/icons/src/vender/line/alertsAndFeedback', () => ({
  AlertTriangle: () => <div data-testid="alert-icon">AlertTriangle</div>,
}))

vi.mock('@/app/components/base/tooltip', () => ({
  default: ({ children }: { children: React.ReactNode }) => <div data-testid="tooltip">{children}</div>,
}))

let modelIdCounter = 0

const createModelAndParameter = (overrides: Partial<ModelAndParameter> = {}): ModelAndParameter => ({
  id: `model-${++modelIdCounter}`,
  model: 'gpt-3.5-turbo',
  provider: 'openai',
  parameters: { temperature: 0.7 },
  ...overrides,
})

describe('ModelParameterTrigger', () => {
  beforeEach(() => {
    vi.clearAllMocks()
    modelIdCounter = 0
    capturedModalProps = null
    mockRenderTriggerFn = null

    mockUseTranslation.mockReturnValue({
      t: (key: string) => key,
    })

    mockUseDebugConfigurationContext.mockReturnValue({
      isAdvancedMode: false,
    })

    mockUseDebugWithMultipleModelContext.mockReturnValue({
      multipleModelConfigs: [],
      onMultipleModelConfigsChange: vi.fn(),
      onDebugWithMultipleModelChange: vi.fn(),
    })

    mockUseLanguage.mockReturnValue('en_US')
  })

  describe('rendering', () => {
    it('should render ModelParameterModal with correct props', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('model-parameter-modal')).toBeInTheDocument()
      expect(capturedModalProps!.isAdvancedMode).toBe(false)
      expect(capturedModalProps!.provider).toBe('openai')
      expect(capturedModalProps!.modelId).toBe('gpt-4')
      expect(capturedModalProps!.completionParams).toEqual({ temperature: 0.7 })
      expect(capturedModalProps!.debugWithMultipleModel).toBe(true)
    })

    it('should pass isAdvancedMode from context', () => {
      const modelAndParameter = createModelAndParameter()
      mockUseDebugConfigurationContext.mockReturnValue({
        isAdvancedMode: true,
      })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      expect(capturedModalProps!.isAdvancedMode).toBe(true)
    })
  })

  describe('trigger rendering', () => {
    it('should render model icon when provider exists', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('model-icon')).toBeInTheDocument()
    })

    it('should render cube icon when no provider', () => {
      const modelAndParameter = createModelAndParameter({ provider: '', model: '' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('cube-icon')).toBeInTheDocument()
    })

    it('should render model name when model exists', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('model-name')).toBeInTheDocument()
    })

    it('should render select model text when no model', () => {
      const modelAndParameter = createModelAndParameter({ provider: '', model: '' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      expect(screen.getByText('modelProvider.selectModel')).toBeInTheDocument()
    })
  })

  describe('handleSelectModel', () => {
    it('should update model and provider in configs', () => {
      const model1 = createModelAndParameter({ id: 'model-a', provider: 'openai', model: 'gpt-3.5' })
      const model2 = createModelAndParameter({ id: 'model-b' })
      const onMultipleModelConfigsChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [model1, model2],
        onMultipleModelConfigsChange,
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={model1} />)

      fireEvent.click(screen.getByTestId('select-model-btn'))

      expect(onMultipleModelConfigsChange).toHaveBeenCalledWith(
        true,
        [
          expect.objectContaining({ id: 'model-a', model: 'new-model', provider: 'new-provider' }),
          expect.objectContaining({ id: 'model-b' }),
        ],
      )
    })

    it('should update correct model when multiple configs exist', () => {
      const model1 = createModelAndParameter({ id: 'model-a' })
      const model2 = createModelAndParameter({ id: 'model-b' })
      const model3 = createModelAndParameter({ id: 'model-c' })
      const onMultipleModelConfigsChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [model1, model2, model3],
        onMultipleModelConfigsChange,
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={model2} />)

      fireEvent.click(screen.getByTestId('select-model-btn'))

      expect(onMultipleModelConfigsChange).toHaveBeenCalledWith(
        true,
        [
          expect.objectContaining({ id: 'model-a' }),
          expect.objectContaining({ id: 'model-b', model: 'new-model', provider: 'new-provider' }),
          expect.objectContaining({ id: 'model-c' }),
        ],
      )
    })
  })

  describe('handleParamsChange', () => {
    it('should update parameters in configs', () => {
      const model1 = createModelAndParameter({ id: 'model-a', parameters: { temperature: 0.5 } })
      const model2 = createModelAndParameter({ id: 'model-b' })
      const onMultipleModelConfigsChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [model1, model2],
        onMultipleModelConfigsChange,
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={model1} />)

      fireEvent.click(screen.getByTestId('change-params-btn'))

      expect(onMultipleModelConfigsChange).toHaveBeenCalledWith(
        true,
        [
          expect.objectContaining({ id: 'model-a', parameters: { temperature: 0.9 } }),
          expect.objectContaining({ id: 'model-b' }),
        ],
      )
    })
  })

  describe('onDebugWithMultipleModelChange', () => {
    it('should call onDebugWithMultipleModelChange with current modelAndParameter', () => {
      const modelAndParameter = createModelAndParameter({ id: 'model-a', provider: 'openai', model: 'gpt-4' })
      const onDebugWithMultipleModelChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange,
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      fireEvent.click(screen.getByTestId('debug-single-btn'))

      expect(onDebugWithMultipleModelChange).toHaveBeenCalledWith(modelAndParameter)
    })
  })

  describe('index finding', () => {
    it('should find correct index for model in middle of array', () => {
      const model1 = createModelAndParameter({ id: 'model-a' })
      const model2 = createModelAndParameter({ id: 'model-b' })
      const model3 = createModelAndParameter({ id: 'model-c' })
      const onMultipleModelConfigsChange = vi.fn()
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [model1, model2, model3],
        onMultipleModelConfigsChange,
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={model2} />)

      // Verify that the correct index is used by checking the result of handleSelectModel
      fireEvent.click(screen.getByTestId('select-model-btn'))

      // The second model (index 1) should be updated
      const updatedConfigs = onMultipleModelConfigsChange.mock.calls[0][1]
      expect(updatedConfigs[0].id).toBe('model-a')
      expect(updatedConfigs[1].model).toBe('new-model') // This one should be updated
      expect(updatedConfigs[2].id).toBe('model-c')
    })
  })

  describe('renderTrigger styling and states', () => {
    it('should render trigger with open state styling', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      // Call renderTrigger with open=true to test the open styling branch
      const triggerWithOpen = mockRenderTriggerFn!({
        open: true,
        currentProvider: { provider: 'openai', icon: 'provider-icon' },
        currentModel: { model: 'gpt-4', status: ModelStatusEnum.active },
      })

      expect(triggerWithOpen).toBeDefined()
    })

    it('should render warning tooltip when model status is not active', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      // Call renderTrigger with inactive model status to test the warning branch
      const triggerWithInactiveModel = mockRenderTriggerFn!({
        open: false,
        currentProvider: { provider: 'openai', icon: 'provider-icon' },
        currentModel: { model: 'gpt-4', status: ModelStatusEnum.disabled },
      })

      expect(triggerWithInactiveModel).toBeDefined()
    })

    it('should render warning background and tooltip for inactive model', () => {
      const modelAndParameter = createModelAndParameter({ provider: 'openai', model: 'gpt-4' })
      mockUseDebugWithMultipleModelContext.mockReturnValue({
        multipleModelConfigs: [modelAndParameter],
        onMultipleModelConfigsChange: vi.fn(),
        onDebugWithMultipleModelChange: vi.fn(),
      })

      render(<ModelParameterTrigger modelAndParameter={modelAndParameter} />)

      // Test with quota_exceeded status (another inactive status)
      const triggerWithQuotaExceeded = mockRenderTriggerFn!({
        open: false,
        currentProvider: { provider: 'openai', icon: 'provider-icon' },
        currentModel: { model: 'gpt-4', status: ModelStatusEnum.quotaExceeded },
      })

      expect(triggerWithQuotaExceeded).toBeDefined()
    })
  })
})
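The ModelParameterTrigger spec above keeps a module-level reference to the `renderTrigger` render prop so tests can re-invoke it with states (`open: true`, an inactive model) that are awkward to reach through the DOM. A minimal sketch of that render-prop capture, with an illustrative `./modal` module:

```tsx
// Sketch: store the render prop a mocked modal receives so a test can call it
// with arbitrary states and assert on the returned element. `./modal` is a
// placeholder module name.
import type * as React from 'react'

type TriggerParams = { open: boolean }
let renderTriggerFn: ((params: TriggerParams) => React.ReactElement) | null = null

vi.mock('./modal', () => ({
  default: (props: { renderTrigger: (params: TriggerParams) => React.ReactElement }) => {
    renderTriggerFn = props.renderTrigger
    return <div data-testid="modal">{props.renderTrigger({ open: false })}</div>
  },
}))

// Exercise a branch the default render never hits:
// const openTrigger = renderTriggerFn!({ open: true })
// expect(openTrigger).toBeDefined()
```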
@@ -1,621 +0,0 @@
import type { ModelAndParameter } from '../types'
import { render, screen, waitFor } from '@testing-library/react'
import { TransferMethod } from '@/app/components/base/chat/types'
import { DEFAULT_AGENT_SETTING, DEFAULT_CHAT_PROMPT_CONFIG, DEFAULT_COMPLETION_PROMPT_CONFIG } from '@/config'
import { ModelModeType } from '@/types/app'
import { APP_CHAT_WITH_MULTIPLE_MODEL } from '../types'
import TextGenerationItem from './text-generation-item'

const mockUseDebugConfigurationContext = vi.fn()
const mockUseProviderContext = vi.fn()
const mockUseFeatures = vi.fn()
const mockUseTextGeneration = vi.fn()
const mockUseEventEmitterContextContext = vi.fn()
const mockPromptVariablesToUserInputsForm = vi.fn()

vi.mock('@/context/debug-configuration', () => ({
  useDebugConfigurationContext: () => mockUseDebugConfigurationContext(),
}))

vi.mock('@/context/provider-context', () => ({
  useProviderContext: () => mockUseProviderContext(),
}))

vi.mock('@/app/components/base/features/hooks', () => ({
  useFeatures: (selector: (state: unknown) => unknown) => mockUseFeatures(selector),
}))

vi.mock('@/app/components/base/text-generation/hooks', () => ({
  useTextGeneration: () => mockUseTextGeneration(),
}))

vi.mock('@/context/event-emitter', () => ({
  useEventEmitterContextContext: () => mockUseEventEmitterContextContext(),
}))

vi.mock('@/utils/model-config', () => ({
  promptVariablesToUserInputsForm: (vars: unknown) => mockPromptVariablesToUserInputsForm(vars),
}))

let capturedTextGenerationProps: Record<string, unknown> | null = null
vi.mock('@/app/components/app/text-generate/item', () => ({
  default: (props: Record<string, unknown>) => {
    capturedTextGenerationProps = props
    return <div data-testid="text-generation-component">TextGeneration</div>
  },
}))

let modelIdCounter = 0

const createModelAndParameter = (overrides: Partial<ModelAndParameter> = {}): ModelAndParameter => ({
  id: `model-${++modelIdCounter}`,
  model: 'gpt-3.5-turbo',
  provider: 'openai',
  parameters: { temperature: 0.7 },
  ...overrides,
})

const createDefaultModelConfig = () => ({
  provider: 'openai',
  model_id: 'gpt-4',
  mode: ModelModeType.completion,
  configs: {
    prompt_template: 'Hello {{name}}',
    prompt_variables: [
      { key: 'name', name: 'Name', type: 'string' as const, is_context_var: false },
      { key: 'context', name: 'Context', type: 'string' as const, is_context_var: true },
    ],
  },
  chat_prompt_config: DEFAULT_CHAT_PROMPT_CONFIG,
  completion_prompt_config: DEFAULT_COMPLETION_PROMPT_CONFIG,
  opening_statement: '',
  more_like_this: null,
  suggested_questions: [],
  suggested_questions_after_answer: null,
  speech_to_text: null,
  text_to_speech: null,
  file_upload: null,
  retriever_resource: null,
  sensitive_word_avoidance: null,
  annotation_reply: null,
  external_data_tools: [],
  dataSets: [],
  agentConfig: DEFAULT_AGENT_SETTING,
  system_parameters: {
    audio_file_size_limit: 0,
    file_size_limit: 0,
    image_file_size_limit: 0,
    video_file_size_limit: 0,
    workflow_file_upload_limit: 0,
  },
})

const createDefaultFeatures = () => ({
  moreLikeThis: { enabled: true },
  moderation: { enabled: false },
  text2speech: { enabled: true },
  file: { enabled: true },
})

const createTextGenerationModelList = (models: Array<{
  provider: string
  model: string
  mode?: string
}> = []) => {
  const providerMap = new Map<string, { model: string, model_properties: { mode: string } }[]>()

  for (const m of models) {
    if (!providerMap.has(m.provider)) {
      providerMap.set(m.provider, [])
    }
    providerMap.get(m.provider)!.push({
      model: m.model,
      model_properties: { mode: m.mode ?? 'completion' },
    })
  }

  return Array.from(providerMap.entries()).map(([provider, modelsList]) => ({
    provider,
    models: modelsList,
  }))
}

describe('TextGenerationItem', () => {
  let subscriptionCallback: ((v: { type: string, payload?: { message: string, files?: unknown[] } }) => void) | null = null

  beforeEach(() => {
    vi.clearAllMocks()
    modelIdCounter = 0
    capturedTextGenerationProps = null
    subscriptionCallback = null

    mockUseDebugConfigurationContext.mockReturnValue({
      isAdvancedMode: false,
      modelConfig: createDefaultModelConfig(),
      appId: 'test-app-id',
      inputs: { name: 'World' },
      promptMode: 'simple',
      speechToTextConfig: { enabled: true },
      introduction: 'Welcome',
      suggestedQuestionsAfterAnswerConfig: { enabled: false },
      citationConfig: { enabled: false },
      externalDataToolsConfig: [],
      chatPromptConfig: DEFAULT_CHAT_PROMPT_CONFIG,
      completionPromptConfig: DEFAULT_COMPLETION_PROMPT_CONFIG,
      dataSets: [{ id: 'ds-1', name: 'Dataset 1' }],
      datasetConfigs: { retrieval_model: 'single' },
    })

    mockUseProviderContext.mockReturnValue({
      textGenerationModelList: createTextGenerationModelList([
        { provider: 'openai', model: 'gpt-3.5-turbo', mode: 'completion' },
        { provider: 'openai', model: 'gpt-4', mode: 'completion' },
      ]),
    })

    const features = createDefaultFeatures()
    mockUseFeatures.mockImplementation((selector: (state: { features: ReturnType<typeof createDefaultFeatures> }) => unknown) => selector({ features }))

    mockUseTextGeneration.mockReturnValue({
      completion: 'Generated text',
      handleSend: vi.fn(),
      isResponding: false,
      messageId: 'msg-1',
    })

    mockUseEventEmitterContextContext.mockReturnValue({
      eventEmitter: {
        // eslint-disable-next-line react/no-unnecessary-use-prefix -- mocking real API
        useSubscription: (callback: (v: { type: string, payload?: { message: string, files?: unknown[] } }) => void) => {
          subscriptionCallback = callback
        },
      },
    })

    mockPromptVariablesToUserInputsForm.mockReturnValue([
      { key: 'name', label: 'Name', variable: 'name' },
    ])
  })

  describe('rendering', () => {
    it('should render TextGeneration component', () => {
      const modelAndParameter = createModelAndParameter()

      render(<TextGenerationItem modelAndParameter={modelAndParameter} />)

      expect(screen.getByTestId('text-generation-component')).toBeInTheDocument()
    })

    it('should pass correct props to TextGeneration component', () => {
      const modelAndParameter = createModelAndParameter()

      render(<TextGenerationItem modelAndParameter={modelAndParameter} />)

      expect(capturedTextGenerationProps!.content).toBe('Generated text')
      expect(capturedTextGenerationProps!.isLoading).toBe(false)
      expect(capturedTextGenerationProps!.isResponding).toBe(false)
      expect(capturedTextGenerationProps!.messageId).toBe('msg-1')
      expect(capturedTextGenerationProps!.isError).toBe(false)
      expect(capturedTextGenerationProps!.inSidePanel).toBe(true)
      expect(capturedTextGenerationProps!.siteInfo).toBeNull()
    })

    it('should show loading state when no completion and is responding', () => {
      mockUseTextGeneration.mockReturnValue({
        completion: '',
        handleSend: vi.fn(),
        isResponding: true,
        messageId: 'msg-1',
      })

      const modelAndParameter = createModelAndParameter()

      render(<TextGenerationItem modelAndParameter={modelAndParameter} />)

      expect(capturedTextGenerationProps!.isLoading).toBe(true)
    })

    it('should not show loading state when has completion', () => {
      mockUseTextGeneration.mockReturnValue({
        completion: 'Some text',
        handleSend: vi.fn(),
        isResponding: true,
        messageId: 'msg-1',
      })

      const modelAndParameter = createModelAndParameter()

      render(<TextGenerationItem modelAndParameter={modelAndParameter} />)

      expect(capturedTextGenerationProps!.isLoading).toBe(false)
    })
  })

  describe('config building', () => {
    it('should build config with correct pre_prompt in simple mode', () => {
      const modelAndParameter = createModelAndParameter()

      render(<TextGenerationItem modelAndParameter={modelAndParameter} />)

      // The config is built internally, we verify via the handleSend call
      subscriptionCallback?.({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: { message: 'test', files: [] },
      })

      const handleSend = mockUseTextGeneration().handleSend
      expect(handleSend).toHaveBeenCalledWith(
        expect.any(String),
        expect.objectContaining({
          model_config: expect.objectContaining({
            pre_prompt: 'Hello {{name}}',
          }),
        }),
      )
    })

    it('should use empty pre_prompt in advanced mode', () => {
      mockUseDebugConfigurationContext.mockReturnValue({
        ...mockUseDebugConfigurationContext(),
        isAdvancedMode: true,
        modelConfig: createDefaultModelConfig(),
        appId: 'test-app-id',
        inputs: {},
        promptMode: 'advanced',
        speechToTextConfig: { enabled: true },
        introduction: '',
        suggestedQuestionsAfterAnswerConfig: { enabled: false },
        citationConfig: { enabled: false },
        externalDataToolsConfig: [],
        chatPromptConfig: DEFAULT_CHAT_PROMPT_CONFIG,
        completionPromptConfig: DEFAULT_COMPLETION_PROMPT_CONFIG,
        dataSets: [],
        datasetConfigs: { retrieval_model: 'single' },
      })

      const modelAndParameter = createModelAndParameter()

      render(<TextGenerationItem modelAndParameter={modelAndParameter} />)

      subscriptionCallback?.({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: { message: 'test', files: [] },
      })

      const handleSend = mockUseTextGeneration().handleSend
      expect(handleSend).toHaveBeenCalledWith(
        expect.any(String),
        expect.objectContaining({
          model_config: expect.objectContaining({
            pre_prompt: '',
          }),
        }),
      )
    })

    it('should find context variable from prompt_variables', () => {
      const modelAndParameter = createModelAndParameter()

      render(<TextGenerationItem modelAndParameter={modelAndParameter} />)

      subscriptionCallback?.({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: { message: 'test', files: [] },
      })

      const handleSend = mockUseTextGeneration().handleSend
      expect(handleSend).toHaveBeenCalledWith(
        expect.any(String),
        expect.objectContaining({
          model_config: expect.objectContaining({
            dataset_query_variable: 'context',
          }),
        }),
      )
    })

    it('should use empty string for dataset_query_variable when no context var exists', () => {
      const modelConfigWithoutContextVar = {
        ...createDefaultModelConfig(),
        configs: {
          prompt_template: 'Hello {{name}}',
          prompt_variables: [
            { key: 'name', name: 'Name', type: 'string' as const, is_context_var: false },
          ],
        },
      }
      mockUseDebugConfigurationContext.mockReturnValue({
        isAdvancedMode: false,
        modelConfig: modelConfigWithoutContextVar,
        appId: 'test-app-id',
        inputs: { name: 'World' },
        promptMode: 'simple',
        speechToTextConfig: { enabled: true },
        introduction: 'Welcome',
        suggestedQuestionsAfterAnswerConfig: { enabled: false },
        citationConfig: { enabled: false },
        externalDataToolsConfig: [],
        chatPromptConfig: DEFAULT_CHAT_PROMPT_CONFIG,
        completionPromptConfig: DEFAULT_COMPLETION_PROMPT_CONFIG,
        dataSets: [],
        datasetConfigs: { retrieval_model: 'single' },
      })

      const handleSend = vi.fn()
      mockUseTextGeneration.mockReturnValue({
        completion: 'text',
        handleSend,
        isResponding: false,
        messageId: 'msg-1',
      })

      const modelAndParameter = createModelAndParameter()

      render(<TextGenerationItem modelAndParameter={modelAndParameter} />)

      subscriptionCallback?.({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: { message: 'test', files: [] },
      })

      expect(handleSend).toHaveBeenCalledWith(
        expect.any(String),
        expect.objectContaining({
          model_config: expect.objectContaining({
            dataset_query_variable: '',
          }),
        }),
      )
    })
  })

  describe('datasets transformation', () => {
    it('should transform dataSets to postDatasets format', () => {
      mockUseDebugConfigurationContext.mockReturnValue({
        ...mockUseDebugConfigurationContext(),
        isAdvancedMode: false,
        modelConfig: createDefaultModelConfig(),
        appId: 'test-app-id',
        inputs: {},
        promptMode: 'simple',
        speechToTextConfig: { enabled: true },
        introduction: '',
        suggestedQuestionsAfterAnswerConfig: { enabled: false },
        citationConfig: { enabled: false },
        externalDataToolsConfig: [],
        chatPromptConfig: DEFAULT_CHAT_PROMPT_CONFIG,
        completionPromptConfig: DEFAULT_COMPLETION_PROMPT_CONFIG,
        dataSets: [
          { id: 'ds-1', name: 'Dataset 1' },
          { id: 'ds-2', name: 'Dataset 2' },
        ],
        datasetConfigs: { retrieval_model: 'single' },
      })

      const modelAndParameter = createModelAndParameter()

      render(<TextGenerationItem modelAndParameter={modelAndParameter} />)

      subscriptionCallback?.({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: { message: 'test', files: [] },
      })

      const handleSend = mockUseTextGeneration().handleSend
      expect(handleSend).toHaveBeenCalledWith(
        expect.any(String),
        expect.objectContaining({
          model_config: expect.objectContaining({
            dataset_configs: expect.objectContaining({
              datasets: {
                datasets: [
                  { dataset: { enabled: true, id: 'ds-1' } },
                  { dataset: { enabled: true, id: 'ds-2' } },
                ],
              },
            }),
          }),
}),
|
||||
)
|
||||
})
|
||||
})
|
||||
|
||||
describe('event subscription', () => {
|
||||
it('should handle APP_CHAT_WITH_MULTIPLE_MODEL event', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseTextGeneration.mockReturnValue({
|
||||
completion: 'text',
|
||||
handleSend,
|
||||
isResponding: false,
|
||||
messageId: 'msg-1',
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<TextGenerationItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test message', files: [] },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
expect(handleSend).toHaveBeenCalledWith(
|
||||
'apps/test-app-id/completion-messages',
|
||||
expect.any(Object),
|
||||
)
|
||||
})
|
||||
})
|
||||
|
||||
it('should ignore non-matching events', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseTextGeneration.mockReturnValue({
|
||||
completion: 'text',
|
||||
handleSend,
|
||||
isResponding: false,
|
||||
messageId: 'msg-1',
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<TextGenerationItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
subscriptionCallback?.({
|
||||
type: 'SOME_OTHER_EVENT',
|
||||
payload: { message: 'test' },
|
||||
})
|
||||
|
||||
expect(handleSend).not.toHaveBeenCalled()
|
||||
})
|
||||
})
|
||||
|
||||
describe('doSend', () => {
|
||||
it('should build config data with model info', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseTextGeneration.mockReturnValue({
|
||||
completion: 'text',
|
||||
handleSend,
|
||||
isResponding: false,
|
||||
messageId: 'msg-1',
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter({
|
||||
provider: 'openai',
|
||||
model: 'gpt-3.5-turbo',
|
||||
parameters: { temperature: 0.8 },
|
||||
})
|
||||
|
||||
render(<TextGenerationItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test', files: [] },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
expect(handleSend).toHaveBeenCalledWith(
|
||||
expect.any(String),
|
||||
expect.objectContaining({
|
||||
model_config: expect.objectContaining({
|
||||
model: {
|
||||
provider: 'openai',
|
||||
name: 'gpt-3.5-turbo',
|
||||
mode: 'completion',
|
||||
completion_params: { temperature: 0.8 },
|
||||
},
|
||||
}),
|
||||
}),
|
||||
)
|
||||
})
|
||||
})
|
||||
|
||||
it('should process local files by clearing url', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseTextGeneration.mockReturnValue({
|
||||
completion: 'text',
|
||||
handleSend,
|
||||
isResponding: false,
|
||||
messageId: 'msg-1',
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<TextGenerationItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
const files = [
|
||||
{ id: 'file-1', transfer_method: TransferMethod.local_file, url: 'http://example.com/file1' },
|
||||
{ id: 'file-2', transfer_method: TransferMethod.remote_url, url: 'http://example.com/file2' },
|
||||
]
|
||||
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test', files },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
const callArgs = handleSend.mock.calls[0][1]
|
||||
expect(callArgs.files[0].url).toBe('')
|
||||
expect(callArgs.files[1].url).toBe('http://example.com/file2')
|
||||
})
|
||||
})
|
||||
|
||||
it('should not include files when file upload is disabled', async () => {
|
||||
const features = { ...createDefaultFeatures(), file: { enabled: false } }
|
||||
mockUseFeatures.mockImplementation((selector: (state: { features: typeof features }) => unknown) => selector({ features }))
|
||||
|
||||
const handleSend = vi.fn()
|
||||
mockUseTextGeneration.mockReturnValue({
|
||||
completion: 'text',
|
||||
handleSend,
|
||||
isResponding: false,
|
||||
messageId: 'msg-1',
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<TextGenerationItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
const files = [{ id: 'file-1', transfer_method: TransferMethod.remote_url }]
|
||||
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test', files },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
const callArgs = handleSend.mock.calls[0][1]
|
||||
expect(callArgs.files).toBeUndefined()
|
||||
})
|
||||
})
|
||||
|
||||
it('should not include files when no files provided', async () => {
|
||||
const handleSend = vi.fn()
|
||||
mockUseTextGeneration.mockReturnValue({
|
||||
completion: 'text',
|
||||
handleSend,
|
||||
isResponding: false,
|
||||
messageId: 'msg-1',
|
||||
})
|
||||
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<TextGenerationItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test', files: [] },
|
||||
})
|
||||
|
||||
await waitFor(() => {
|
||||
const callArgs = handleSend.mock.calls[0][1]
|
||||
expect(callArgs.files).toBeUndefined()
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
describe('features integration', () => {
|
||||
it('should include features in config', () => {
|
||||
const modelAndParameter = createModelAndParameter()
|
||||
|
||||
render(<TextGenerationItem modelAndParameter={modelAndParameter} />)
|
||||
|
||||
subscriptionCallback?.({
|
||||
type: APP_CHAT_WITH_MULTIPLE_MODEL,
|
||||
payload: { message: 'test', files: [] },
|
||||
})
|
||||
|
||||
const handleSend = mockUseTextGeneration().handleSend
|
||||
expect(handleSend).toHaveBeenCalledWith(
|
||||
expect.any(String),
|
||||
expect.objectContaining({
|
||||
model_config: expect.objectContaining({
|
||||
more_like_this: { enabled: true },
|
||||
sensitive_word_avoidance: { enabled: false },
|
||||
text_to_speech: { enabled: true },
|
||||
file_upload: { enabled: true },
|
||||
}),
|
||||
}),
|
||||
)
|
||||
})
|
||||
})
|
||||
})
@ -6,26 +6,18 @@ import type {
  ChatConfig,
  ChatItem,
} from '@/app/components/base/chat/types'
import type { VisionFile } from '@/types/app'
import { cloneDeep } from 'es-toolkit/object'
import {
  useCallback,
  useEffect,
  useRef,
  useState,
} from 'react'
import { useTranslation } from 'react-i18next'
import { useContext } from 'use-context-selector'
import { ToastContext } from '@/app/components/base/toast'
import { SupportUploadFileTypes } from '@/app/components/workflow/types'
import { DEFAULT_CHAT_PROMPT_CONFIG, DEFAULT_COMPLETION_PROMPT_CONFIG } from '@/config'
import { useDebugConfigurationContext } from '@/context/debug-configuration'
import { useEventEmitterContextContext } from '@/context/event-emitter'
import {
  AgentStrategy,
  AppModeEnum,
  ModelModeType,
  TransferMethod,
} from '@/types/app'
import { promptVariablesToUserInputsForm } from '@/utils/model-config'
import { ORCHESTRATE_CHANGED } from './types'
@ -170,111 +162,3 @@ export const useFormattingChangedSubscription = (chatList: ChatItem[]) => {
    }
  })
}

export const useInputValidation = () => {
  const { t } = useTranslation()
  const { notify } = useContext(ToastContext)
  const {
    isAdvancedMode,
    mode,
    modelModeType,
    hasSetBlockStatus,
    modelConfig,
  } = useDebugConfigurationContext()

  const logError = useCallback((message: string) => {
    notify({ type: 'error', message })
  }, [notify])

  const checkCanSend = useCallback((inputs: Record<string, unknown>, completionFiles: VisionFile[]) => {
    if (isAdvancedMode && mode !== AppModeEnum.COMPLETION) {
      if (modelModeType === ModelModeType.completion) {
        if (!hasSetBlockStatus.history) {
          notify({ type: 'error', message: t('otherError.historyNoBeEmpty', { ns: 'appDebug' }) })
          return false
        }
        if (!hasSetBlockStatus.query) {
          notify({ type: 'error', message: t('otherError.queryNoBeEmpty', { ns: 'appDebug' }) })
          return false
        }
      }
    }
    let hasEmptyInput = ''
    const requiredVars = modelConfig.configs.prompt_variables.filter(({ key, name, required, type }) => {
      if (type !== 'string' && type !== 'paragraph' && type !== 'select' && type !== 'number')
        return false
      const res = (!key || !key.trim()) || (!name || !name.trim()) || (required || required === undefined || required === null)
      return res
    })
    requiredVars.forEach(({ key, name }) => {
      if (hasEmptyInput)
        return

      if (!inputs[key])
        hasEmptyInput = name
    })

    if (hasEmptyInput) {
      logError(t('errorMessage.valueOfVarRequired', { ns: 'appDebug', key: hasEmptyInput }))
      return false
    }

    if (completionFiles.find(item => item.transfer_method === TransferMethod.local_file && !item.upload_file_id)) {
      notify({ type: 'info', message: t('errorMessage.waitForFileUpload', { ns: 'appDebug' }) })
      return false
    }
    return !hasEmptyInput
  }, [
    hasSetBlockStatus.history,
    hasSetBlockStatus.query,
    isAdvancedMode,
    mode,
    modelConfig.configs.prompt_variables,
    t,
    logError,
    notify,
    modelModeType,
  ])

  return { checkCanSend, logError }
}

export const useFormattingChangeConfirm = () => {
  const [isShowFormattingChangeConfirm, setIsShowFormattingChangeConfirm] = useState(false)
  const { formattingChanged, setFormattingChanged } = useDebugConfigurationContext()

  useEffect(() => {
    if (formattingChanged)
      setIsShowFormattingChangeConfirm(true) // eslint-disable-line react-hooks-extra/no-direct-set-state-in-use-effect
  }, [formattingChanged])

  const handleConfirm = useCallback((onClear: () => void) => {
    onClear()
    setIsShowFormattingChangeConfirm(false)
    setFormattingChanged(false)
  }, [setFormattingChanged])

  const handleCancel = useCallback(() => {
    setIsShowFormattingChangeConfirm(false)
    setFormattingChanged(false)
  }, [setFormattingChanged])

  return {
    isShowFormattingChangeConfirm,
    handleConfirm,
    handleCancel,
  }
}

export const useModalWidth = (containerRef: React.RefObject<HTMLDivElement | null>) => {
  const [width, setWidth] = useState(0)

  useEffect(() => {
    if (containerRef.current) {
      const calculatedWidth = document.body.clientWidth - (containerRef.current.clientWidth + 16) - 8
      setWidth(calculatedWidth) // eslint-disable-line react-hooks-extra/no-direct-set-state-in-use-effect
    }
  }, [containerRef])

  return width
}
@ -3,39 +3,54 @@ import type { FC } from 'react'
import type { DebugWithSingleModelRefType } from './debug-with-single-model'
import type { ModelAndParameter } from './types'
import type { ModelParameterModalProps } from '@/app/components/header/account-setting/model-provider-page/model-parameter-modal'
import type { Inputs, PromptVariable } from '@/models/debug'
import type { VisionFile, VisionSettings } from '@/types/app'
import type { Inputs } from '@/models/debug'
import type { ModelConfig as BackendModelConfig, VisionFile, VisionSettings } from '@/types/app'
import {
  RiAddLine,
  RiEqualizer2Line,
  RiSparklingFill,
} from '@remixicon/react'
import { useBoolean } from 'ahooks'
import { noop } from 'es-toolkit/function'
import { cloneDeep } from 'es-toolkit/object'
import { produce, setAutoFreeze } from 'immer'
import * as React from 'react'
import { useCallback, useEffect, useRef, useState } from 'react'
import { useTranslation } from 'react-i18next'
import { useContext } from 'use-context-selector'
import { useShallow } from 'zustand/react/shallow'
import ChatUserInput from '@/app/components/app/configuration/debug/chat-user-input'
import PromptValuePanel from '@/app/components/app/configuration/prompt-value-panel'
import { useStore as useAppStore } from '@/app/components/app/store'
import TextGeneration from '@/app/components/app/text-generate/item'
import ActionButton, { ActionButtonState } from '@/app/components/base/action-button'
import AgentLogModal from '@/app/components/base/agent-log-modal'
import Button from '@/app/components/base/button'
import { useFeatures, useFeaturesStore } from '@/app/components/base/features/hooks'
import { RefreshCcw01 } from '@/app/components/base/icons/src/vender/line/arrows'
import PromptLogModal from '@/app/components/base/prompt-log-modal'
import { ToastContext } from '@/app/components/base/toast'
import TooltipPlus from '@/app/components/base/tooltip'
import { ModelFeatureEnum, ModelTypeEnum } from '@/app/components/header/account-setting/model-provider-page/declarations'
import { useDefaultModel } from '@/app/components/header/account-setting/model-provider-page/hooks'
import { IS_CE_EDITION } from '@/config'
import { DEFAULT_CHAT_PROMPT_CONFIG, DEFAULT_COMPLETION_PROMPT_CONFIG, IS_CE_EDITION } from '@/config'
import ConfigContext from '@/context/debug-configuration'
import { useEventEmitterContextContext } from '@/context/event-emitter'
import { useProviderContext } from '@/context/provider-context'
import { AppModeEnum } from '@/types/app'
import { sendCompletionMessage } from '@/service/debug'
import { AppSourceType } from '@/service/share'
import { AppModeEnum, ModelModeType, TransferMethod } from '@/types/app'
import { formatBooleanInputs, promptVariablesToUserInputsForm } from '@/utils/model-config'
import GroupName from '../base/group-name'
import CannotQueryDataset from '../base/warning-mask/cannot-query-dataset'
import FormattingChanged from '../base/warning-mask/formatting-changed'
import HasNotSetAPIKEY from '../base/warning-mask/has-not-set-api'
import DebugHeader from './debug-header'
import DebugWithMultipleModel from './debug-with-multiple-model'
import DebugWithSingleModel from './debug-with-single-model'
import { useFormattingChangeConfirm, useInputValidation, useModalWidth } from './hooks'
import TextCompletionResult from './text-completion-result'
import {
  APP_CHAT_WITH_MULTIPLE_MODEL,
  APP_CHAT_WITH_MULTIPLE_MODEL_RESTART,
} from './types'
import { useTextCompletion } from './use-text-completion'

type IDebug = {
  isAPIKeySet: boolean
@ -56,17 +71,33 @@ const Debug: FC<IDebug> = ({
  multipleModelConfigs,
  onMultipleModelConfigsChange,
}) => {
  const { t } = useTranslation()
  const {
    readonly,
    appId,
    mode,
    modelModeType,
    hasSetBlockStatus,
    isAdvancedMode,
    promptMode,
    chatPromptConfig,
    completionPromptConfig,
    introduction,
    suggestedQuestionsAfterAnswerConfig,
    speechToTextConfig,
    textToSpeechConfig,
    citationConfig,
    formattingChanged,
    setFormattingChanged,
    dataSets,
    modelConfig,
    completionParams,
    hasSetContextVar,
    datasetConfigs,
    externalDataToolsConfig,
  } = useContext(ConfigContext)
  const { eventEmitter } = useEventEmitterContextContext()
  const { data: text2speechDefaultModel } = useDefaultModel(ModelTypeEnum.textEmbedding)
  const features = useFeatures(s => s.features)
  const featuresStore = useFeaturesStore()

  // Disable immer auto-freeze for this component
  useEffect(() => {
    setAutoFreeze(false)
    return () => {
@ -74,77 +105,226 @@ const Debug: FC<IDebug> = ({
    }
  }, [])

  // UI state
  const [expanded, setExpanded] = useState(true)
  const [isResponding, { setTrue: setRespondingTrue, setFalse: setRespondingFalse }] = useBoolean(false)
  const [isShowFormattingChangeConfirm, setIsShowFormattingChangeConfirm] = useState(false)
  const [isShowCannotQueryDataset, setShowCannotQueryDataset] = useState(false)
  const containerRef = useRef<HTMLDivElement>(null)
  const debugWithSingleModelRef = React.useRef<DebugWithSingleModelRefType>(null!)

  // Hooks
  const { checkCanSend } = useInputValidation()
  const { isShowFormattingChangeConfirm, handleConfirm, handleCancel } = useFormattingChangeConfirm()
  const modalWidth = useModalWidth(containerRef)

  // Wrapper for checkCanSend that uses current completionFiles
  const [completionFilesForValidation, setCompletionFilesForValidation] = useState<VisionFile[]>([])
  const checkCanSendWithFiles = useCallback(() => {
    return checkCanSend(inputs, completionFilesForValidation)
  }, [checkCanSend, inputs, completionFilesForValidation])

  const {
    isResponding,
    completionRes,
    messageId,
    completionFiles,
    setCompletionFiles,
    sendTextCompletion,
  } = useTextCompletion({
    checkCanSend: checkCanSendWithFiles,
    onShowCannotQueryDataset: () => setShowCannotQueryDataset(true),
  })

  // Sync completionFiles for validation
  useEffect(() => {
    setCompletionFilesForValidation(completionFiles as VisionFile[]) // eslint-disable-line react-hooks-extra/no-direct-set-state-in-use-effect
  }, [completionFiles])
    if (formattingChanged)
      setIsShowFormattingChangeConfirm(true)
  }, [formattingChanged])

  // App store for modals
  const { currentLogItem, setCurrentLogItem, showPromptLogModal, setShowPromptLogModal, showAgentLogModal, setShowAgentLogModal } = useAppStore(useShallow(state => ({
    currentLogItem: state.currentLogItem,
    setCurrentLogItem: state.setCurrentLogItem,
    showPromptLogModal: state.showPromptLogModal,
    setShowPromptLogModal: state.setShowPromptLogModal,
    showAgentLogModal: state.showAgentLogModal,
    setShowAgentLogModal: state.setShowAgentLogModal,
  })))

  // Provider context for model list
  const { textGenerationModelList } = useProviderContext()

  // Computed values
  const varList = modelConfig.configs.prompt_variables.map((item: PromptVariable) => ({
    label: item.key,
    value: inputs[item.key],
  }))

  // Handlers
  const handleClearConversation = useCallback(() => {
  const debugWithSingleModelRef = React.useRef<DebugWithSingleModelRefType>(null!)
  const handleClearConversation = () => {
    debugWithSingleModelRef.current?.handleRestart()
  }, [])

  const clearConversation = useCallback(async () => {
  }
  const clearConversation = async () => {
    if (debugWithMultipleModel) {
      eventEmitter?.emit({ type: APP_CHAT_WITH_MULTIPLE_MODEL_RESTART } as any) // eslint-disable-line ts/no-explicit-any
      eventEmitter?.emit({
        type: APP_CHAT_WITH_MULTIPLE_MODEL_RESTART,
      } as any)
      return
    }

    handleClearConversation()
  }, [debugWithMultipleModel, eventEmitter, handleClearConversation])
  }

  const handleFormattingConfirm = useCallback(() => {
    handleConfirm(clearConversation)
  }, [handleConfirm, clearConversation])
  const handleConfirm = () => {
    clearConversation()
    setIsShowFormattingChangeConfirm(false)
    setFormattingChanged(false)
  }

  const handleChangeToSingleModel = useCallback((item: ModelAndParameter) => {
  const handleCancel = () => {
    setIsShowFormattingChangeConfirm(false)
    setFormattingChanged(false)
  }

  const { notify } = useContext(ToastContext)
  const logError = useCallback((message: string) => {
    notify({ type: 'error', message })
  }, [notify])
  const [completionFiles, setCompletionFiles] = useState<VisionFile[]>([])

  const checkCanSend = useCallback(() => {
    if (isAdvancedMode && mode !== AppModeEnum.COMPLETION) {
      if (modelModeType === ModelModeType.completion) {
        if (!hasSetBlockStatus.history) {
          notify({ type: 'error', message: t('otherError.historyNoBeEmpty', { ns: 'appDebug' }) })
          return false
        }
        if (!hasSetBlockStatus.query) {
          notify({ type: 'error', message: t('otherError.queryNoBeEmpty', { ns: 'appDebug' }) })
          return false
        }
      }
    }
    let hasEmptyInput = ''
    const requiredVars = modelConfig.configs.prompt_variables.filter(({ key, name, required, type }) => {
      if (type !== 'string' && type !== 'paragraph' && type !== 'select' && type !== 'number')
        return false
      const res = (!key || !key.trim()) || (!name || !name.trim()) || (required || required === undefined || required === null)
      return res
    }) // compatible with old version
    requiredVars.forEach(({ key, name }) => {
      if (hasEmptyInput)
        return

      if (!inputs[key])
        hasEmptyInput = name
    })

    if (hasEmptyInput) {
      logError(t('errorMessage.valueOfVarRequired', { ns: 'appDebug', key: hasEmptyInput }))
      return false
    }

    if (completionFiles.find(item => item.transfer_method === TransferMethod.local_file && !item.upload_file_id)) {
      notify({ type: 'info', message: t('errorMessage.waitForFileUpload', { ns: 'appDebug' }) })
      return false
    }
    return !hasEmptyInput
  }, [
    completionFiles,
    hasSetBlockStatus.history,
    hasSetBlockStatus.query,
    inputs,
    isAdvancedMode,
    mode,
    modelConfig.configs.prompt_variables,
    t,
    logError,
    notify,
    modelModeType,
  ])

  const [completionRes, setCompletionRes] = useState('')
  const [messageId, setMessageId] = useState<string | null>(null)
  const features = useFeatures(s => s.features)
  const featuresStore = useFeaturesStore()

  const sendTextCompletion = async () => {
    if (isResponding) {
      notify({ type: 'info', message: t('errorMessage.waitForResponse', { ns: 'appDebug' }) })
      return false
    }

    if (dataSets.length > 0 && !hasSetContextVar) {
      setShowCannotQueryDataset(true)
      return true
    }

    if (!checkCanSend())
      return

    const postDatasets = dataSets.map(({ id }) => ({
      dataset: {
        enabled: true,
        id,
      },
    }))
    const contextVar = modelConfig.configs.prompt_variables.find(item => item.is_context_var)?.key

    const postModelConfig: BackendModelConfig = {
      pre_prompt: !isAdvancedMode ? modelConfig.configs.prompt_template : '',
      prompt_type: promptMode,
      chat_prompt_config: isAdvancedMode ? chatPromptConfig : cloneDeep(DEFAULT_CHAT_PROMPT_CONFIG),
      completion_prompt_config: isAdvancedMode ? completionPromptConfig : cloneDeep(DEFAULT_COMPLETION_PROMPT_CONFIG),
      user_input_form: promptVariablesToUserInputsForm(modelConfig.configs.prompt_variables),
      dataset_query_variable: contextVar || '',
      dataset_configs: {
        ...datasetConfigs,
        datasets: {
          datasets: [...postDatasets],
        } as any,
      },
      agent_mode: {
        enabled: false,
        tools: [],
      },
      model: {
        provider: modelConfig.provider,
        name: modelConfig.model_id,
        mode: modelConfig.mode,
        completion_params: completionParams as any,
      },
      more_like_this: features.moreLikeThis as any,
      sensitive_word_avoidance: features.moderation as any,
      text_to_speech: features.text2speech as any,
      file_upload: features.file as any,
      opening_statement: introduction,
      suggested_questions_after_answer: suggestedQuestionsAfterAnswerConfig,
      speech_to_text: speechToTextConfig,
      retriever_resource: citationConfig,
      system_parameters: modelConfig.system_parameters,
      external_data_tools: externalDataToolsConfig,
    }

    const data: Record<string, any> = {
      inputs: formatBooleanInputs(modelConfig.configs.prompt_variables, inputs),
      model_config: postModelConfig,
    }

    if ((features.file as any).enabled && completionFiles && completionFiles?.length > 0) {
      data.files = completionFiles.map((item) => {
        if (item.transfer_method === TransferMethod.local_file) {
          return {
            ...item,
            url: '',
          }
        }
        return item
      })
    }

    setCompletionRes('')
    setMessageId('')
    let res: string[] = []

    setRespondingTrue()
    sendCompletionMessage(appId, data, {
      onData: (data: string, _isFirstMessage: boolean, { messageId }) => {
        res.push(data)
        setCompletionRes(res.join(''))
        setMessageId(messageId)
      },
      onMessageReplace: (messageReplace) => {
        res = [messageReplace.answer]
        setCompletionRes(res.join(''))
      },
      onCompleted() {
        setRespondingFalse()
      },
      onError() {
        setRespondingFalse()
      },
    })
  }

  const handleSendTextCompletion = () => {
    if (debugWithMultipleModel) {
      eventEmitter?.emit({
        type: APP_CHAT_WITH_MULTIPLE_MODEL,
        payload: {
          message: '',
          files: completionFiles,
        },
      } as any)
      return
    }

    sendTextCompletion()
  }

  const varList = modelConfig.configs.prompt_variables.map((item: any) => {
    return {
      label: item.key,
      value: inputs[item.key],
    }
  })

  const { textGenerationModelList } = useProviderContext()
  const handleChangeToSingleModel = (item: ModelAndParameter) => {
    const currentProvider = textGenerationModelList.find(modelItem => modelItem.provider === item.provider)
    const currentModel = currentProvider?.models.find(model => model.model === item.model)

@ -155,18 +335,26 @@ const Debug: FC<IDebug> = ({
      features: currentModel?.features,
    })
    modelParameterParams.onCompletionParamsChange(item.parameters)
    onMultipleModelConfigsChange(false, [])
  }, [modelParameterParams, onMultipleModelConfigsChange, textGenerationModelList])
    onMultipleModelConfigsChange(
      false,
      [],
    )
  }

  const handleVisionConfigInMultipleModel = useCallback(() => {
    if (debugWithMultipleModel && mode) {
      const supportedVision = multipleModelConfigs.some((config) => {
        const currentProvider = textGenerationModelList.find(modelItem => modelItem.provider === config.provider)
        const currentModel = currentProvider?.models.find(model => model.model === config.model)
      const supportedVision = multipleModelConfigs.some((modelConfig) => {
        const currentProvider = textGenerationModelList.find(modelItem => modelItem.provider === modelConfig.provider)
        const currentModel = currentProvider?.models.find(model => model.model === modelConfig.model)

        return currentModel?.features?.includes(ModelFeatureEnum.vision)
      })
      const { features: storeFeatures, setFeatures } = featuresStore!.getState()
      const newFeatures = produce(storeFeatures, (draft) => {
      const {
        features,
        setFeatures,
      } = featuresStore!.getState()

      const newFeatures = produce(features, (draft) => {
        draft.file = {
          ...draft.file,
          enabled: supportedVision,
@ -180,131 +368,210 @@ const Debug: FC<IDebug> = ({
    handleVisionConfigInMultipleModel()
  }, [multipleModelConfigs, mode, handleVisionConfigInMultipleModel])

  const handleSendTextCompletion = useCallback(() => {
    if (debugWithMultipleModel) {
      eventEmitter?.emit({ type: APP_CHAT_WITH_MULTIPLE_MODEL, payload: { message: '', files: completionFiles } } as any) // eslint-disable-line ts/no-explicit-any
      return
    }
    sendTextCompletion()
  }, [completionFiles, debugWithMultipleModel, eventEmitter, sendTextCompletion])
  const { currentLogItem, setCurrentLogItem, showPromptLogModal, setShowPromptLogModal, showAgentLogModal, setShowAgentLogModal } = useAppStore(useShallow(state => ({
    currentLogItem: state.currentLogItem,
    setCurrentLogItem: state.setCurrentLogItem,
    showPromptLogModal: state.showPromptLogModal,
    setShowPromptLogModal: state.setShowPromptLogModal,
    showAgentLogModal: state.showAgentLogModal,
    setShowAgentLogModal: state.setShowAgentLogModal,
  })))
  const [width, setWidth] = useState(0)
  const ref = useRef<HTMLDivElement>(null)

  const handleAddModel = useCallback(() => {
    onMultipleModelConfigsChange(true, [...multipleModelConfigs, { id: `${Date.now()}`, model: '', provider: '', parameters: {} }])
  }, [multipleModelConfigs, onMultipleModelConfigsChange])
  const adjustModalWidth = () => {
    if (ref.current)
      setWidth(document.body.clientWidth - (ref.current?.clientWidth + 16) - 8)
  }

  const handleClosePromptLogModal = useCallback(() => {
    setCurrentLogItem()
    setShowPromptLogModal(false)
  }, [setCurrentLogItem, setShowPromptLogModal])
  useEffect(() => {
    adjustModalWidth()
  }, [])

  const handleCloseAgentLogModal = useCallback(() => {
    setCurrentLogItem()
    setShowAgentLogModal(false)
  }, [setCurrentLogItem, setShowAgentLogModal])

  const isShowTextToSpeech = features.text2speech?.enabled && !!text2speechDefaultModel
  const [expanded, setExpanded] = useState(true)

  return (
    <>
      <div className="shrink-0">
        <DebugHeader
          readonly={readonly}
          mode={mode}
          debugWithMultipleModel={debugWithMultipleModel}
          multipleModelConfigs={multipleModelConfigs}
          varListLength={varList.length}
          expanded={expanded}
          onExpandedChange={setExpanded}
          onClearConversation={clearConversation}
          onAddModel={handleAddModel}
        />
        <div className="flex items-center justify-between px-4 pb-2 pt-3">
          <div className="system-xl-semibold text-text-primary">{t('inputs.title', { ns: 'appDebug' })}</div>
          <div className="flex items-center">
            {
              debugWithMultipleModel
                ? (
                  <>
                    <Button
                      variant="ghost-accent"
                      onClick={() => onMultipleModelConfigsChange(true, [...multipleModelConfigs, { id: `${Date.now()}`, model: '', provider: '', parameters: {} }])}
                      disabled={multipleModelConfigs.length >= 4}
                    >
                      <RiAddLine className="mr-1 h-3.5 w-3.5" />
                      {t('modelProvider.addModel', { ns: 'common' })}
                      (
                      {multipleModelConfigs.length}
                      /4)
                    </Button>
                    <div className="mx-2 h-[14px] w-[1px] bg-divider-regular" />
                  </>
                )
                : null
            }
            {mode !== AppModeEnum.COMPLETION && (
              <>
                {
                  !readonly && (
                    <TooltipPlus
                      popupContent={t('operation.refresh', { ns: 'common' })}
                    >
                      <ActionButton onClick={clearConversation}>
                        <RefreshCcw01 className="h-4 w-4" />
                      </ActionButton>

                    </TooltipPlus>
                  )
                }

                {
                  varList.length > 0 && (
                    <div className="relative ml-1 mr-2">
                      <TooltipPlus
                        popupContent={t('panel.userInputField', { ns: 'workflow' })}
                      >
                        <ActionButton state={expanded ? ActionButtonState.Active : undefined} onClick={() => !readonly && setExpanded(!expanded)}>
                          <RiEqualizer2Line className="h-4 w-4" />
                        </ActionButton>
                      </TooltipPlus>
                      {expanded && <div className="absolute bottom-[-14px] right-[5px] z-10 h-3 w-3 rotate-45 border-l-[0.5px] border-t-[0.5px] border-components-panel-border-subtle bg-components-panel-on-panel-item-bg" />}
                    </div>
                  )
                }
              </>
            )}
          </div>
        </div>
        {mode !== AppModeEnum.COMPLETION && expanded && (
          <div className="mx-3">
            <ChatUserInput inputs={inputs} />
          </div>
        )}
        {mode === AppModeEnum.COMPLETION && (
          <PromptValuePanel
            appType={mode as AppModeEnum}
            onSend={handleSendTextCompletion}
            inputs={inputs}
            visionConfig={{
              ...features.file! as VisionSettings,
              transfer_methods: features.file!.allowed_file_upload_methods || [],
              image_file_size_limit: features.file?.fileUploadConfig?.image_file_size_limit,
            }}
            onVisionFilesChange={setCompletionFiles}
          />
        )}
        {
          mode === AppModeEnum.COMPLETION && (
            <PromptValuePanel
              appType={mode as AppModeEnum}
              onSend={handleSendTextCompletion}
              inputs={inputs}
              visionConfig={{
                ...features.file! as VisionSettings,
                transfer_methods: features.file!.allowed_file_upload_methods || [],
                image_file_size_limit: features.file?.fileUploadConfig?.image_file_size_limit,
              }}
              onVisionFilesChange={setCompletionFiles}
            />
          )
        }
      </div>

      {debugWithMultipleModel && (
        <div className="mt-3 grow overflow-hidden" ref={containerRef}>
          <DebugWithMultipleModel
            multipleModelConfigs={multipleModelConfigs}
            onMultipleModelConfigsChange={onMultipleModelConfigsChange}
            onDebugWithMultipleModelChange={handleChangeToSingleModel}
            checkCanSend={checkCanSendWithFiles}
          />
          {showPromptLogModal && (
            <PromptLogModal
              width={modalWidth}
              currentLogItem={currentLogItem}
              onCancel={handleClosePromptLogModal}
      {
        debugWithMultipleModel && (
          <div className="mt-3 grow overflow-hidden" ref={ref}>
            <DebugWithMultipleModel
              multipleModelConfigs={multipleModelConfigs}
              onMultipleModelConfigsChange={onMultipleModelConfigsChange}
              onDebugWithMultipleModelChange={handleChangeToSingleModel}
              checkCanSend={checkCanSend}
            />
          )}
          {showAgentLogModal && (
            <AgentLogModal
              width={modalWidth}
              currentLogItem={currentLogItem}
              onCancel={handleCloseAgentLogModal}
            />
          )}
        </div>
      )}

      {!debugWithMultipleModel && (
        <div className="flex grow flex-col" ref={containerRef}>
          {mode !== AppModeEnum.COMPLETION && (
            <div className="h-0 grow overflow-hidden">
              <DebugWithSingleModel
                ref={debugWithSingleModelRef}
                checkCanSend={checkCanSendWithFiles}
            {showPromptLogModal && (
              <PromptLogModal
                width={width}
                currentLogItem={currentLogItem}
                onCancel={() => {
                  setCurrentLogItem()
                  setShowPromptLogModal(false)
                }}
              />
            </div>
          )}
          {mode === AppModeEnum.COMPLETION && (
            <TextCompletionResult
              completionRes={completionRes}
              isResponding={isResponding}
              messageId={messageId}
              isShowTextToSpeech={isShowTextToSpeech}
            />
          )}
          {mode === AppModeEnum.COMPLETION && showPromptLogModal && (
            <PromptLogModal
              width={modalWidth}
              currentLogItem={currentLogItem}
              onCancel={handleClosePromptLogModal}
            />
          )}
          {isShowCannotQueryDataset && (
            <CannotQueryDataset onConfirm={() => setShowCannotQueryDataset(false)} />
          )}
        </div>
      )}

      {isShowFormattingChangeConfirm && (
        <FormattingChanged
          onConfirm={handleFormattingConfirm}
          onCancel={handleCancel}
        />
      )}
      {!isAPIKeySet && !readonly && (
        <HasNotSetAPIKEY isTrailFinished={!IS_CE_EDITION} onSetting={onSetting} />
      )}
      )}
      {showAgentLogModal && (
        <AgentLogModal
          width={width}
          currentLogItem={currentLogItem}
          onCancel={() => {
            setCurrentLogItem()
            setShowAgentLogModal(false)
          }}
        />
      )}
    </div>
  )
}
      {
        !debugWithMultipleModel && (
          <div className="flex grow flex-col" ref={ref}>
            {/* Chat */}
            {mode !== AppModeEnum.COMPLETION && (
              <div className="h-0 grow overflow-hidden">
                <DebugWithSingleModel
                  ref={debugWithSingleModelRef}
                  checkCanSend={checkCanSend}
                />
              </div>
            )}
            {/* Text Generation */}
            {mode === AppModeEnum.COMPLETION && (
              <>
                {(completionRes || isResponding) && (
                  <>
                    <div className="mx-4 mt-3"><GroupName name={t('result', { ns: 'appDebug' })} /></div>
                    <div className="mx-3 mb-8">
                      <TextGeneration
                        appSourceType={AppSourceType.webApp}
                        className="mt-2"
                        content={completionRes}
                        isLoading={!completionRes && isResponding}
                        isShowTextToSpeech={textToSpeechConfig.enabled && !!text2speechDefaultModel}
                        isResponding={isResponding}
                        messageId={messageId}
                        isError={false}
                        onRetry={noop}
                        siteInfo={null}
                      />
                    </div>
                  </>
                )}
                {!completionRes && !isResponding && (
                  <div className="flex grow flex-col items-center justify-center gap-2">
                    <RiSparklingFill className="h-12 w-12 text-text-empty-state-icon" />
                    <div className="system-sm-regular text-text-quaternary">{t('noResult', { ns: 'appDebug' })}</div>
                  </div>
                )}
              </>
            )}
            {mode === AppModeEnum.COMPLETION && showPromptLogModal && (
              <PromptLogModal
                width={width}
                currentLogItem={currentLogItem}
                onCancel={() => {
                  setCurrentLogItem()
                  setShowPromptLogModal(false)
                }}
              />
            )}
            {isShowCannotQueryDataset && (
              <CannotQueryDataset
                onConfirm={() => setShowCannotQueryDataset(false)}
              />
            )}
          </div>
        )
      }
      {
        isShowFormattingChangeConfirm && (
          <FormattingChanged
            onConfirm={handleConfirm}
            onCancel={handleCancel}
          />
        )
      }
      {!isAPIKeySet && !readonly && (<HasNotSetAPIKEY isTrailFinished={!IS_CE_EDITION} onSetting={onSetting} />)}
    </>
  )
}

export default React.memo(Debug)
@ -1,57 +0,0 @@
'use client'
import type { FC } from 'react'
import { RiSparklingFill } from '@remixicon/react'
import { noop } from 'es-toolkit/function'
import { useTranslation } from 'react-i18next'
import TextGeneration from '@/app/components/app/text-generate/item'
import { AppSourceType } from '@/service/share'
import GroupName from '../base/group-name'

type TextCompletionResultProps = {
  completionRes: string
  isResponding: boolean
  messageId: string | null
  isShowTextToSpeech?: boolean
}

const TextCompletionResult: FC<TextCompletionResultProps> = ({
  completionRes,
  isResponding,
  messageId,
  isShowTextToSpeech,
}) => {
  const { t } = useTranslation()

  if (!completionRes && !isResponding) {
    return (
      <div className="flex grow flex-col items-center justify-center gap-2">
        <RiSparklingFill className="h-12 w-12 text-text-empty-state-icon" />
        <div className="system-sm-regular text-text-quaternary">{t('noResult', { ns: 'appDebug' })}</div>
      </div>
    )
  }

  return (
    <>
      <div className="mx-4 mt-3">
        <GroupName name={t('result', { ns: 'appDebug' })} />
      </div>
      <div className="mx-3 mb-8">
        <TextGeneration
          appSourceType={AppSourceType.webApp}
          className="mt-2"
          content={completionRes}
          isLoading={!completionRes && isResponding}
          isShowTextToSpeech={isShowTextToSpeech}
          isResponding={isResponding}
          messageId={messageId}
          isError={false}
          onRetry={noop}
          siteInfo={null}
        />
      </div>
    </>
  )
}

export default TextCompletionResult
@ -1,187 +0,0 @@
import type { ModelConfig as BackendModelConfig, VisionFile } from '@/types/app'
import { useBoolean } from 'ahooks'
import { cloneDeep } from 'es-toolkit/object'
import { useCallback, useState } from 'react'
import { useTranslation } from 'react-i18next'
import { useContext } from 'use-context-selector'
import { useFeatures } from '@/app/components/base/features/hooks'
import { ToastContext } from '@/app/components/base/toast'
import { DEFAULT_CHAT_PROMPT_CONFIG, DEFAULT_COMPLETION_PROMPT_CONFIG } from '@/config'
import { useDebugConfigurationContext } from '@/context/debug-configuration'
import { sendCompletionMessage } from '@/service/debug'
import { TransferMethod } from '@/types/app'
import { formatBooleanInputs, promptVariablesToUserInputsForm } from '@/utils/model-config'

type UseTextCompletionOptions = {
  checkCanSend: () => boolean
  onShowCannotQueryDataset: () => void
}

export const useTextCompletion = ({
  checkCanSend,
  onShowCannotQueryDataset,
}: UseTextCompletionOptions) => {
  const { t } = useTranslation()
  const { notify } = useContext(ToastContext)
  const {
    appId,
    isAdvancedMode,
    promptMode,
    chatPromptConfig,
    completionPromptConfig,
    introduction,
    suggestedQuestionsAfterAnswerConfig,
    speechToTextConfig,
    citationConfig,
    dataSets,
    modelConfig,
    completionParams,
    hasSetContextVar,
    datasetConfigs,
    externalDataToolsConfig,
    inputs,
  } = useDebugConfigurationContext()
  const features = useFeatures(s => s.features)

  const [isResponding, { setTrue: setRespondingTrue, setFalse: setRespondingFalse }] = useBoolean(false)
  const [completionRes, setCompletionRes] = useState('')
  const [messageId, setMessageId] = useState<string | null>(null)
  const [completionFiles, setCompletionFiles] = useState<VisionFile[]>([])

  const sendTextCompletion = useCallback(async () => {
    if (isResponding) {
      notify({ type: 'info', message: t('errorMessage.waitForResponse', { ns: 'appDebug' }) })
      return false
    }

    if (dataSets.length > 0 && !hasSetContextVar) {
      onShowCannotQueryDataset()
      return true
    }

    if (!checkCanSend())
      return

    const postDatasets = dataSets.map(({ id }) => ({
      dataset: {
        enabled: true,
        id,
      },
    }))
    const contextVar = modelConfig.configs.prompt_variables.find(item => item.is_context_var)?.key

    const postModelConfig: BackendModelConfig = {
      pre_prompt: !isAdvancedMode ? modelConfig.configs.prompt_template : '',
      prompt_type: promptMode,
      chat_prompt_config: isAdvancedMode ? chatPromptConfig : cloneDeep(DEFAULT_CHAT_PROMPT_CONFIG),
      completion_prompt_config: isAdvancedMode ? completionPromptConfig : cloneDeep(DEFAULT_COMPLETION_PROMPT_CONFIG),
      user_input_form: promptVariablesToUserInputsForm(modelConfig.configs.prompt_variables),
      dataset_query_variable: contextVar || '',
      /* eslint-disable ts/no-explicit-any */
      dataset_configs: {
        ...datasetConfigs,
        datasets: {
          datasets: [...postDatasets],
        } as any,
      },
      agent_mode: {
        enabled: false,
        tools: [],
      },
      model: {
        provider: modelConfig.provider,
        name: modelConfig.model_id,
        mode: modelConfig.mode,
        completion_params: completionParams as any,
      },
      more_like_this: features.moreLikeThis as any,
      sensitive_word_avoidance: features.moderation as any,
      text_to_speech: features.text2speech as any,
      file_upload: features.file as any,
      /* eslint-enable ts/no-explicit-any */
      opening_statement: introduction,
      suggested_questions_after_answer: suggestedQuestionsAfterAnswerConfig,
      speech_to_text: speechToTextConfig,
      retriever_resource: citationConfig,
      system_parameters: modelConfig.system_parameters,
      external_data_tools: externalDataToolsConfig,
    }

    // eslint-disable-next-line ts/no-explicit-any
    const data: Record<string, any> = {
      inputs: formatBooleanInputs(modelConfig.configs.prompt_variables, inputs),
      model_config: postModelConfig,
    }

    // eslint-disable-next-line ts/no-explicit-any
    if ((features.file as any).enabled && completionFiles && completionFiles?.length > 0) {
      data.files = completionFiles.map((item) => {
        if (item.transfer_method === TransferMethod.local_file) {
          return {
            ...item,
            url: '',
          }
        }
        return item
      })
    }

    setCompletionRes('')
    setMessageId('')
    let res: string[] = []

    setRespondingTrue()
    sendCompletionMessage(appId, data, {
      onData: (data: string, _isFirstMessage: boolean, { messageId }) => {
        res.push(data)
        setCompletionRes(res.join(''))
        setMessageId(messageId)
      },
      onMessageReplace: (messageReplace) => {
        res = [messageReplace.answer]
        setCompletionRes(res.join(''))
      },
      onCompleted() {
        setRespondingFalse()
      },
      onError() {
        setRespondingFalse()
      },
    })
  }, [
    appId,
    checkCanSend,
    chatPromptConfig,
    citationConfig,
    completionFiles,
    completionParams,
    completionPromptConfig,
    datasetConfigs,
    dataSets,
    externalDataToolsConfig,
    features,
    hasSetContextVar,
    inputs,
    introduction,
    isAdvancedMode,
    isResponding,
    modelConfig,
    notify,
    onShowCannotQueryDataset,
    promptMode,
    setRespondingFalse,
    setRespondingTrue,
    speechToTextConfig,
    suggestedQuestionsAfterAnswerConfig,
    t,
  ])

  return {
    isResponding,
    completionRes,
    messageId,
    completionFiles,
    setCompletionFiles,
    sendTextCompletion,
  }
}
@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import { RiAddLine, RiDeleteBinLine, RiEditLine, RiMore2Fill, RiSaveLine, RiShareLine } from '@remixicon/react'
import ActionButton, { ActionButtonState } from '.'

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import type { IChatItem } from '@/app/components/base/chat/chat/type'
import type { AgentLogDetailResponse } from '@/models/log'
import { useEffect, useRef } from 'react'

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import type { ReactNode } from 'react'
import AnswerIcon from '.'

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import type { AppIconSelection } from '.'
import { useState } from 'react'
import AppIconPicker from '.'

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import type { ComponentProps } from 'react'
import AppIcon from '.'

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import type { ComponentProps } from 'react'
import { useEffect } from 'react'
import AudioBtn from '.'

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import AudioGallery from '.'

const AUDIO_SOURCES = [

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import { useState } from 'react'
import AutoHeightTextarea from '.'

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import Avatar from '.'

const meta = {

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import Badge from '../badge'

const meta = {

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import { useState } from 'react'
import BlockInput from '.'

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import AddButton from './add-button'

const meta = {

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'

import { RocketLaunchIcon } from '@heroicons/react/20/solid'
import { Button } from '.'

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import SyncButton from './sync-button'

const meta = {

@ -1,4 +1,4 @@
import type { Meta, StoryObj } from '@storybook/nextjs-vite'
import type { Meta, StoryObj } from '@storybook/nextjs'
import type { ChatItem } from '../../types'
import { WorkflowRunningStatus } from '@/app/components/workflow/types'
import Answer from '.'
@@ -1,178 +0,0 @@
-/**
- * Tests for multimodal image file handling in chat hooks.
- * Tests the file object conversion logic without full hook integration.
- */
-
-describe('Multimodal File Handling', () => {
-  describe('File type to MIME type mapping', () => {
-    it('should map image to image/png', () => {
-      const fileType: string = 'image'
-      const expectedMime = 'image/png'
-      const mimeType = fileType === 'image' ? 'image/png' : 'application/octet-stream'
-      expect(mimeType).toBe(expectedMime)
-    })
-
-    it('should map video to video/mp4', () => {
-      const fileType: string = 'video'
-      const expectedMime = 'video/mp4'
-      const mimeType = fileType === 'video' ? 'video/mp4' : 'application/octet-stream'
-      expect(mimeType).toBe(expectedMime)
-    })
-
-    it('should map audio to audio/mpeg', () => {
-      const fileType: string = 'audio'
-      const expectedMime = 'audio/mpeg'
-      const mimeType = fileType === 'audio' ? 'audio/mpeg' : 'application/octet-stream'
-      expect(mimeType).toBe(expectedMime)
-    })
-
-    it('should map unknown to application/octet-stream', () => {
-      const fileType: string = 'unknown'
-      const expectedMime = 'application/octet-stream'
-      const mimeType = ['image', 'video', 'audio'].includes(fileType) ? 'image/png' : 'application/octet-stream'
-      expect(mimeType).toBe(expectedMime)
-    })
-  })
-
-  describe('TransferMethod selection', () => {
-    it('should select remote_url for images', () => {
-      const fileType: string = 'image'
-      const transferMethod = fileType === 'image' ? 'remote_url' : 'local_file'
-      expect(transferMethod).toBe('remote_url')
-    })
-
-    it('should select local_file for non-images', () => {
-      const fileType: string = 'video'
-      const transferMethod = fileType === 'image' ? 'remote_url' : 'local_file'
-      expect(transferMethod).toBe('local_file')
-    })
-  })
-
-  describe('File extension mapping', () => {
-    it('should use .png extension for images', () => {
-      const fileType: string = 'image'
-      const expectedExtension = '.png'
-      const extension = fileType === 'image' ? 'png' : 'bin'
-      expect(extension).toBe(expectedExtension.replace('.', ''))
-    })
-
-    it('should use .mp4 extension for videos', () => {
-      const fileType: string = 'video'
-      const expectedExtension = '.mp4'
-      const extension = fileType === 'video' ? 'mp4' : 'bin'
-      expect(extension).toBe(expectedExtension.replace('.', ''))
-    })
-
-    it('should use .mp3 extension for audio', () => {
-      const fileType: string = 'audio'
-      const expectedExtension = '.mp3'
-      const extension = fileType === 'audio' ? 'mp3' : 'bin'
-      expect(extension).toBe(expectedExtension.replace('.', ''))
-    })
-  })
-
-  describe('File name generation', () => {
-    it('should generate correct file name for images', () => {
-      const fileType: string = 'image'
-      const expectedName = 'generated_image.png'
-      const fileName = `generated_${fileType}.${fileType === 'image' ? 'png' : 'bin'}`
-      expect(fileName).toBe(expectedName)
-    })
-
-    it('should generate correct file name for videos', () => {
-      const fileType: string = 'video'
-      const expectedName = 'generated_video.mp4'
-      const fileName = `generated_${fileType}.${fileType === 'video' ? 'mp4' : 'bin'}`
-      expect(fileName).toBe(expectedName)
-    })
-
-    it('should generate correct file name for audio', () => {
-      const fileType: string = 'audio'
-      const expectedName = 'generated_audio.mp3'
-      const fileName = `generated_${fileType}.${fileType === 'audio' ? 'mp3' : 'bin'}`
-      expect(fileName).toBe(expectedName)
-    })
-  })
-
-  describe('SupportFileType mapping', () => {
-    it('should map image type to image supportFileType', () => {
-      const fileType: string = 'image'
-      const supportFileType = fileType === 'image' ? 'image' : fileType === 'video' ? 'video' : fileType === 'audio' ? 'audio' : 'document'
-      expect(supportFileType).toBe('image')
-    })
-
-    it('should map video type to video supportFileType', () => {
-      const fileType: string = 'video'
-      const supportFileType = fileType === 'image' ? 'image' : fileType === 'video' ? 'video' : fileType === 'audio' ? 'audio' : 'document'
-      expect(supportFileType).toBe('video')
-    })
-
-    it('should map audio type to audio supportFileType', () => {
-      const fileType: string = 'audio'
-      const supportFileType = fileType === 'image' ? 'image' : fileType === 'video' ? 'video' : fileType === 'audio' ? 'audio' : 'document'
-      expect(supportFileType).toBe('audio')
-    })
-
-    it('should map unknown type to document supportFileType', () => {
-      const fileType: string = 'unknown'
-      const supportFileType = fileType === 'image' ? 'image' : fileType === 'video' ? 'video' : fileType === 'audio' ? 'audio' : 'document'
-      expect(supportFileType).toBe('document')
-    })
-  })
-
-  describe('File conversion logic', () => {
-    it('should detect existing transferMethod', () => {
-      const fileWithTransferMethod = {
-        id: 'file-123',
-        transferMethod: 'remote_url' as const,
-        type: 'image/png',
-        name: 'test.png',
-        size: 1024,
-        supportFileType: 'image',
-        progress: 100,
-      }
-      const hasTransferMethod = 'transferMethod' in fileWithTransferMethod
-      expect(hasTransferMethod).toBe(true)
-    })
-
-    it('should detect missing transferMethod', () => {
-      const fileWithoutTransferMethod = {
-        id: 'file-456',
-        type: 'image',
-        url: 'http://example.com/image.png',
-        belongs_to: 'assistant',
-      }
-      const hasTransferMethod = 'transferMethod' in fileWithoutTransferMethod
-      expect(hasTransferMethod).toBe(false)
-    })
-
-    it('should create file with size 0 for generated files', () => {
-      const expectedSize = 0
-      expect(expectedSize).toBe(0)
-    })
-  })
-
-  describe('Agent vs Non-Agent mode logic', () => {
-    it('should check for agent_thoughts to determine mode', () => {
-      const agentResponse: { agent_thoughts?: Array<Record<string, unknown>> } = {
-        agent_thoughts: [{}],
-      }
-      const isAgentMode = agentResponse.agent_thoughts && agentResponse.agent_thoughts.length > 0
-      expect(isAgentMode).toBe(true)
-    })
-
-    it('should detect non-agent mode when agent_thoughts is empty', () => {
-      const nonAgentResponse: { agent_thoughts?: Array<Record<string, unknown>> } = {
-        agent_thoughts: [],
-      }
-      const isAgentMode = nonAgentResponse.agent_thoughts && nonAgentResponse.agent_thoughts.length > 0
-      expect(isAgentMode).toBe(false)
-    })
-
-    it('should detect non-agent mode when agent_thoughts is undefined', () => {
-      const nonAgentResponse: { agent_thoughts?: Array<Record<string, unknown>> } = {}
-      const isAgentMode = nonAgentResponse.agent_thoughts && nonAgentResponse.agent_thoughts.length > 0
-      expect(isAgentMode).toBeFalsy()
-    })
-  })
-})
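The deleted test file above exercised each mapping as a standalone inline ternary. For reference, the same mappings collapse naturally into a small lookup-table helper; this is a hypothetical sketch (the names `mimeTypeFor`, `extensionFor`, `supportFileTypeFor`, and `generatedFileName` are illustrative, not from the codebase):

```ts
// Hypothetical consolidation of the mappings the deleted tests exercised.
type GeneratedFileType = 'image' | 'video' | 'audio'

const MIME_BY_TYPE: Record<GeneratedFileType, string> = {
  image: 'image/png',
  video: 'video/mp4',
  audio: 'audio/mpeg',
}

const EXTENSION_BY_TYPE: Record<GeneratedFileType, string> = {
  image: 'png',
  video: 'mp4',
  audio: 'mp3',
}

const isGeneratedFileType = (t: string): t is GeneratedFileType =>
  t === 'image' || t === 'video' || t === 'audio'

// Unknown types fall back to a generic binary MIME type, as in the tests.
const mimeTypeFor = (t: string): string =>
  isGeneratedFileType(t) ? MIME_BY_TYPE[t] : 'application/octet-stream'

const extensionFor = (t: string): string =>
  isGeneratedFileType(t) ? EXTENSION_BY_TYPE[t] : 'bin'

// Known media types map to themselves; everything else is a 'document'.
const supportFileTypeFor = (t: string): string =>
  isGeneratedFileType(t) ? t : 'document'

const generatedFileName = (t: string): string => `generated_${t}.${extensionFor(t)}`
```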
@@ -419,40 +419,9 @@ export const useChat = (
         }
       },
       onFile(file) {
-        // Convert simple file type to MIME type for non-agent mode
-        // Backend sends: { id, type: "image", belongs_to, url }
-        // Frontend expects: { id, type: "image/png", transferMethod, url, uploadedId, supportFileType, name, size }
-
-        // Determine file type for MIME conversion
-        const fileType = (file as { type?: string }).type || 'image'
-
-        // If file already has transferMethod, use it as base and ensure all required fields exist
-        // Otherwise, create a new complete file object
-        const baseFile = ('transferMethod' in file) ? (file as Partial<FileEntity>) : null
-
-        const convertedFile: FileEntity = {
-          id: baseFile?.id || (file as { id: string }).id,
-          type: baseFile?.type || (fileType === 'image' ? 'image/png' : fileType === 'video' ? 'video/mp4' : fileType === 'audio' ? 'audio/mpeg' : 'application/octet-stream'),
-          transferMethod: (baseFile?.transferMethod as FileEntity['transferMethod']) || (fileType === 'image' ? 'remote_url' : 'local_file'),
-          uploadedId: baseFile?.uploadedId || (file as { id: string }).id,
-          supportFileType: baseFile?.supportFileType || (fileType === 'image' ? 'image' : fileType === 'video' ? 'video' : fileType === 'audio' ? 'audio' : 'document'),
-          progress: baseFile?.progress ?? 100,
-          name: baseFile?.name || `generated_${fileType}.${fileType === 'image' ? 'png' : fileType === 'video' ? 'mp4' : fileType === 'audio' ? 'mp3' : 'bin'}`,
-          url: baseFile?.url || (file as { url?: string }).url,
-          size: baseFile?.size ?? 0, // Generated files don't have a known size
-        }
-
-        // For agent mode, add files to the last thought
         const lastThought = responseItem.agent_thoughts?.[responseItem.agent_thoughts?.length - 1]
-        if (lastThought) {
-          const thought = lastThought as { message_files?: FileEntity[] }
-          responseItem.agent_thoughts![responseItem.agent_thoughts!.length - 1].message_files = [...(thought.message_files ?? []), convertedFile]
-        }
-        // For non-agent mode, add files directly to responseItem.message_files
-        else {
-          const currentFiles = (responseItem.message_files as FileEntity[] | undefined) ?? []
-          responseItem.message_files = [...currentFiles, convertedFile]
-        }
+        if (lastThought)
+          responseItem.agent_thoughts![responseItem.agent_thoughts!.length - 1].message_files = [...(lastThought as any).message_files, file]

         updateCurrentQAOnTree({
           placeholderQuestionId,
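This hunk reverts `onFile` to its original behavior: the incoming file event is attached as-is to the last agent thought, and the non-agent normalization path is dropped along with the test file deleted above. For reference, the removed block amounted to the following standalone conversion, sketched here under a hypothetical `toFileEntity` name with a `FileEntity` shape abridged from the fields visible in the diff:

```ts
type TransferMethod = 'remote_url' | 'local_file'

// Abridged from the fields the removed block populated; not the full app type.
type FileEntity = {
  id: string
  type: string
  transferMethod: TransferMethod
  uploadedId: string
  supportFileType: string
  progress: number
  name: string
  url?: string
  size: number
}

// Hypothetical extraction of the removed inline logic: normalize the backend's
// sparse file event ({ id, type: 'image', url, ... }) into a complete FileEntity.
const toFileEntity = (file: { id: string, type?: string, url?: string } & Partial<FileEntity>): FileEntity => {
  const fileType = file.type || 'image'
  // If the event already carries transferMethod, treat it as a partial base.
  const base = 'transferMethod' in file ? file : null
  return {
    id: base?.id || file.id,
    type: base?.type || (fileType === 'image' ? 'image/png' : fileType === 'video' ? 'video/mp4' : fileType === 'audio' ? 'audio/mpeg' : 'application/octet-stream'),
    transferMethod: base?.transferMethod || (fileType === 'image' ? 'remote_url' : 'local_file'),
    uploadedId: base?.uploadedId || file.id,
    supportFileType: base?.supportFileType || (fileType === 'image' ? 'image' : fileType === 'video' ? 'video' : fileType === 'audio' ? 'audio' : 'document'),
    progress: base?.progress ?? 100,
    name: base?.name || `generated_${fileType}.${fileType === 'image' ? 'png' : fileType === 'video' ? 'mp4' : fileType === 'audio' ? 'mp3' : 'bin'}`,
    url: base?.url || file.url,
    size: base?.size ?? 0, // generated files don't report a size
  }
}
```

After the revert, files arriving on non-agent responses are no longer normalized into `responseItem.message_files`; only agent thoughts accumulate them.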
@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'

 import type { ChatItem } from '../types'
 import { User } from '@/app/components/base/icons/src/public/avatar'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import Checkbox from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { Item } from '.'
 import { useState } from 'react'
 import Chip from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import Confirm from '.'
 import Button from '../button'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useEffect, useState } from 'react'
 import ContentDialog from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import CopyFeedback, { CopyFeedbackNew } from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import CopyIcon from '.'

 const meta = {

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import CornerLabel from '.'

 const meta = {

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { DatePickerProps } from './types'
 import { useState } from 'react'
 import { fn } from 'storybook/test'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useEffect, useState } from 'react'
 import Dialog from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import Divider from '.'

 const meta = {

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import { fn } from 'storybook/test'
 import DrawerPlus from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import { fn } from 'storybook/test'
 import Drawer from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { Item } from '.'
 import { useState } from 'react'
 import { fn } from 'storybook/test'

@@ -1,5 +1,5 @@
 /* eslint-disable tailwindcss/classnames-order */
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import Effect from '.'

 const meta = {

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import EmojiPickerInner from './Inner'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import EmojiPicker from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { Features } from './types'
 import { useState } from 'react'
 import { FeaturesProvider } from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import FileIcon from '.'

 const meta = {

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import FileImageRender from './file-image-render'

 const SAMPLE_IMAGE = 'data:image/svg+xml;utf8,<svg xmlns=\'http://www.w3.org/2000/svg\' width=\'320\' height=\'180\'><defs><linearGradient id=\'grad\' x1=\'0%\' y1=\'0%\' x2=\'100%\' y2=\'100%\'><stop offset=\'0%\' stop-color=\'#FEE2FF\'/><stop offset=\'100%\' stop-color=\'#E0EAFF\'/></linearGradient></defs><rect width=\'320\' height=\'180\' rx=\'18\' fill=\'url(#grad)\'/><text x=\'50%\' y=\'50%\' dominant-baseline=\'middle\' text-anchor=\'middle\' font-family=\'sans-serif\' font-size=\'24\' fill=\'#1F2937\'>Preview</text></svg>'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { FileEntity } from './types'
 import { useState } from 'react'
 import { SupportUploadFileTypes } from '@/app/components/workflow/types'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import FileTypeIcon from './file-type-icon'
 import { FileAppearanceTypeEnum } from './types'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { FileEntity } from '../types'
 import type { FileUpload } from '@/app/components/base/features/types'
 import { useState } from 'react'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { FileEntity } from '../types'
 import type { FileUpload } from '@/app/components/base/features/types'
 import { useState } from 'react'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import { fn } from 'storybook/test'
 import FloatRightContainer from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { FormStoryRender } from '../../../../.storybook/utils/form-story-wrapper'
 import type { FormSchema } from './types'
 import { useStore } from '@tanstack/react-form'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import FullScreenModal from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import GridMask from '.'

 const meta = {
@@ -1,9 +1,9 @@
-/// <reference types="vite/client" />
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import * as React from 'react'

+declare const require: any

 type IconComponent = React.ComponentType<Record<string, unknown>>
 type IconModule = { default: IconComponent }

 type IconEntry = {
   name: string
@@ -12,16 +12,18 @@ type IconEntry = {
   Component: IconComponent
 }

-const iconModules: Record<string, IconModule> = import.meta.glob('./src/**/*.tsx', { eager: true })
+const iconContext = require.context('./src', true, /\.tsx$/)

-const iconEntries: IconEntry[] = Object.entries(iconModules)
-  .filter(([key]) => !key.endsWith('.stories.tsx') && !key.endsWith('.spec.tsx'))
-  .map(([key, mod]) => {
-    const Component = mod.default
+const iconEntries: IconEntry[] = iconContext
+  .keys()
+  .filter((key: string) => !key.endsWith('.stories.tsx') && !key.endsWith('.spec.tsx'))
+  .map((key: string) => {
+    const mod = iconContext(key)
+    const Component = mod.default as IconComponent | undefined
     if (!Component)
       return null

-    const relativePath = key.replace(/^\.\/src\//, '')
+    const relativePath = key.replace(/^\.\//, '')
     const path = `app/components/base/icons/src/${relativePath}`
     const parts = relativePath.split('/')
     const fileName = parts.pop() || ''
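The icon-catalog story above swaps Vite's eager `import.meta.glob` for webpack's `require.context`. A minimal sketch of the equivalence, with the `require` declaration typed a little more narrowly than the diff's `declare const require: any`:

```ts
// Sketch only: under webpack, require.context enumerates matching files and
// loads each module synchronously when the returned context is invoked.
declare const require: {
  context: (dir: string, recursive: boolean, matcher: RegExp) => {
    (key: string): { default?: unknown }
    keys: () => string[]
  }
}

const iconContext = require.context('./src', true, /\.tsx$/)

// Roughly equivalent to `import.meta.glob('./src/**/*.tsx', { eager: true })`,
// except keys look like './button.tsx' rather than './src/button.tsx' — which
// is why the hunk changes key.replace(/^\.\/src\//, '') to key.replace(/^\.\//, '').
const iconModules = Object.fromEntries(
  iconContext.keys().map(key => [key, iconContext(key)] as const),
)
```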
@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import ImageGallery from '.'

 const IMAGE_SOURCES = [

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { ImageFile } from '@/types/app'
 import { useMemo, useState } from 'react'
 import { TransferMethod } from '@/types/app'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import { fn } from 'storybook/test'
 import InlineDeleteConfirm from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import { InputNumber } from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import Input from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { RelatedApp } from '@/models/datasets'
 import { AppModeEnum } from '@/types/app'
 import LinkedAppsPanel from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import ListEmpty from '.'

 const meta = {

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import Loading from '.'

 const meta = {

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { ReactNode } from 'react'
 import { ThemeProvider } from 'next-themes'
 import DifyLogo from './dify-logo'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import CodeBlock from './code-block'

 const SAMPLE_CODE = `const greet = (name: string) => {

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import { ChatContextProvider } from '@/app/components/base/chat/chat/context'
 import ThinkBlock from './think-block'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import { Markdown } from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useState } from 'react'
 import Flowchart from '.'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import type { IChatItem } from '@/app/components/base/chat/chat/type'
 import type { WorkflowRunDetailResponse } from '@/models/log'
 import type { NodeTracing, NodeTracingListResponse } from '@/types/workflow'

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import ModalLikeWrap from '.'

 const meta = {

@@ -1,4 +1,4 @@
-import type { Meta, StoryObj } from '@storybook/nextjs-vite'
+import type { Meta, StoryObj } from '@storybook/nextjs'
 import { useEffect, useState } from 'react'
 import Modal from '.'
Some files were not shown because too many files have changed in this diff.