### What problem does this PR solve?
Enterprise deployments that use an external Identity Provider (e.g.,
Microsoft Entra ID, Okta, Keycloak) need the ability to enforce SSO-only
authentication by hiding the email/password login form. Currently, the
login page always shows the password form alongside OAuth buttons, with
no way to disable it.
This PR adds a `disable_password_login` configuration option under the
existing `authentication` section in `service_conf.yaml`. When set to
`true`, the login page only displays configured OAuth/SSO buttons and
hides the email/password form, "Remember me" checkbox, and "Sign up"
link.
The flag can be set via:
- `service_conf.yaml` (`authentication.disable_password_login: true`)
- Environment variable (`DISABLE_PASSWORD_LOGIN=true`)
Default behavior is unchanged (`false`).
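A minimal sketch of the resolution order (helper name and env-over-file precedence are assumptions, not the actual `common/settings.py` code):
```python
import os

def resolve_disable_password_login(conf: dict) -> bool:
    # Assumed precedence: environment variable over service_conf.yaml;
    # defaults to False so existing deployments keep the password form.
    env = os.environ.get("DISABLE_PASSWORD_LOGIN")
    if env is not None:
        return env.strip().lower() in ("1", "true", "yes")
    auth = conf.get("authentication") or {}
    return bool(auth.get("disable_password_login", False))
```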
### Behavior
| `disable_password_login` | OAuth configured | Result |
|---|---|---|
| `false` (default) | No | Standard email/password form |
| `false` | Yes | Email/password form + SSO buttons below |
| `true` | Yes | **SSO buttons only** (no form, no sign up link) |
| `true` | No | Empty card (admin should configure OAuth first) |
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### Files changed (5)
1. `docker/service_conf.yaml.template` — added `disable_password_login:
false` under authentication
2. `common/settings.py` — added `DISABLE_PASSWORD_LOGIN` global variable
and loader in `init_settings()`
3. `common/config_utils.py` — fixed `TypeError` in `show_configs()` when
authentication section contains non-dict values (e.g., booleans)
4. `api/apps/system_app.py` — exposed `disablePasswordLogin` flag in
`/config` endpoint
5. `web/src/pages/login/index.tsx` — conditionally render password form
based on config flag; OAuth buttons always render when channels exist
---------
Co-authored-by: Ahmad Intisar <ahmadintisar@Ahmads-MacBook-M4-Pro.local>
### What problem does this PR solve?
This PR adds comprehensive **Right-to-Left (RTL) language support**,
primarily targeting Arabic and other RTL scripts (Hebrew, Persian, Urdu,
etc.).
Previously, RTL content had multiple rendering issues:
- Incorrect sentence splitting for Arabic punctuation in citation logic
- Misaligned text in chat messages and markdown components
- Improper positioning of blockquotes and “think” sections
- Incorrect table alignment
- Citation placement ambiguity in RTL prompts
- UI layout inconsistencies when mixing LTR and RTL text
This PR introduces backend and frontend improvements to properly detect,
render, and style RTL content while preserving existing LTR behavior.
#### Backend
- Updated sentence boundary regex in `rag/nlp/search.py` to include
Arabic punctuation:
- `،` (comma)
- `؛` (semicolon)
- `؟` (question mark)
- `۔` (Arabic full stop)
- Ensures citation insertion works correctly in RTL sentences.
- Updated citation prompt instructions to clarify citation placement
rules for RTL languages.
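A minimal sketch of the kind of boundary pattern involved (illustrative only; the actual regex in `rag/nlp/search.py` handles more punctuation and differs in detail):
```python
import re

# Sentence boundaries including the Arabic comma (،), semicolon (؛),
# question mark (؟), and full stop (۔) alongside Latin punctuation.
SENTENCE_BOUNDARY = re.compile(r"[.!?;،؛؟۔]\s*")

text = "هل هذا صحيح؟ نعم، هذا صحيح."
print(SENTENCE_BOUNDARY.split(text))  # splits at ؟ ، and the final .
```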
#### Frontend
- Introduced a new utility: `text-direction.ts`
- Detects text direction based on Unicode ranges.
- Supports Arabic, Hebrew, Syriac, Thaana, and related scripts.
- Provides `getDirAttribute()` for automatic `dir` assignment.
- Applied dynamic `dir` attributes across:
- Markdown rendering
- Chat messages
- Search results
- Tables
- Hover cards and reference popovers
- Added proper RTL styling in LESS:
- Text alignment adjustments
- Blockquote border flipping
- Section indentation correction
- Table direction switching
- Use of `<bdi>` for figure labels to prevent bidirectional conflicts
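A sketch of the Unicode-range approach, transliterated to Python for brevity (the actual `text-direction.ts` utility covers more ranges and edge cases):
```python
import re

# Character blocks for Hebrew, Arabic, Syriac, and Thaana scripts.
RTL_CHARS = re.compile(r"[\u0590-\u05FF\u0600-\u06FF\u0700-\u074F\u0780-\u07BF]")

def get_dir_attribute(text: str) -> str:
    # Simplification: report "rtl" if any RTL-script character is present,
    # rather than inspecting the first strongly-directional character.
    return "rtl" if RTL_CHARS.search(text) else "ltr"

print(get_dir_attribute("مرحبا بالعالم"))  # rtl
print(get_dir_attribute("hello world"))    # ltr
```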
#### DevOps / Environment
- Added Windows backend launch script with retry handling.
- Updated dependency metadata.
- Adjusted development-only React debugging behavior.
---
### Type of change
- [x] Bug Fix (non-breaking change which fixes RTL rendering and
citation issues)
- [x] New Feature (non-breaking change which adds RTL detection and
dynamic direction handling)
---------
Co-authored-by: 6ba3i <isbaaoui09@gmail.com>
Co-authored-by: Ahmad Intisar <ahmadintisar@Ahmads-MacBook-M4-Pro.local>
Co-authored-by: Ahmad Intisar <168020872+ahmadintisar@users.noreply.github.com>
Co-authored-by: Liu An <asiro@qq.com>
### What problem does this PR solve?
The SeaFile connector currently synchronises the entire account — every
library
visible to the authenticated user. This is impractical for users who
only need
a subset of their data indexed, especially on large SeaFile instances
with many
shared libraries.
This PR introduces granular sync scope support, allowing users to choose
between
syncing their entire account, a single library, or a specific directory
within a
library. It also adds support for SeaFile library-scoped API tokens
(`/api/v2.1/via-repo-token/` endpoints), enabling tighter access control
without
exposing account-level credentials.
### Type of change
- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
### Test
```python
from seafile_connector import SeaFileConnector
import logging
import os

logging.basicConfig(level=logging.DEBUG)

URL = os.environ.get("SEAFILE_URL", "https://seafile.example.com")
TOKEN = os.environ.get("SEAFILE_TOKEN", "")
REPO_ID = os.environ.get("SEAFILE_REPO_ID", "")
SYNC_PATH = os.environ.get("SEAFILE_SYNC_PATH", "/Documents")
REPO_TOKEN = os.environ.get("SEAFILE_REPO_TOKEN", "")


def _test_scope(scope, repo_id=None, sync_path=None):
    print(f"\n{'=' * 50}")
    print(f"Testing scope: {scope}")
    print(f"{'=' * 50}")
    creds = {"seafile_token": TOKEN} if TOKEN else {}
    if REPO_TOKEN and scope in ("library", "directory"):
        creds["repo_token"] = REPO_TOKEN
    connector = SeaFileConnector(
        seafile_url=URL,
        batch_size=5,
        sync_scope=scope,
        include_shared=False,
        repo_id=repo_id,
        sync_path=sync_path,
    )
    connector.load_credentials(creds)
    connector.validate_connector_settings()
    count = 0
    for batch in connector.load_from_state():
        for doc in batch:
            count += 1
            print(f"  [{count}] {doc.semantic_identifier} "
                  f"({doc.size_bytes} bytes, {doc.extension})")
    print(f"\n-> {scope} scope: {count} document(s) found.\n")


# 1. Account scope
if TOKEN:
    _test_scope("account")
else:
    print("\nSkipping account scope (set SEAFILE_TOKEN)")

# 2. Library scope
if REPO_ID and (TOKEN or REPO_TOKEN):
    _test_scope("library", repo_id=REPO_ID)
else:
    print("\nSkipping library scope (set SEAFILE_REPO_ID + token)")

# 3. Directory scope
if REPO_ID and SYNC_PATH and (TOKEN or REPO_TOKEN):
    _test_scope("directory", repo_id=REPO_ID, sync_path=SYNC_PATH)
else:
    print("\nSkipping directory scope (set SEAFILE_REPO_ID + SEAFILE_SYNC_PATH + token)")
```
### What problem does this PR solve?
The RDBMS (MySQL/PostgreSQL) connector generates document filenames
from the first 100 characters of the content column
(`semantic_identifier`). When the content contains newline characters
(`\n`), the resulting filename includes those newlines — for example:
`Category: غير صحيح كليًا\nTitle: تفنيد حقائق....txt`
RAGFlow's `filename_type()` function uses `re.match(r".*\.txt$", filename)`
to detect file types, but `.*` does not match newline characters by
default in Python regex. This causes the match to fail, returning
`FileType.OTHER`, which triggers:
`raise RuntimeError("This type of file has not been supported yet!")`
As a result, all documents synced via the MySQL/PostgreSQL connector are
silently discarded. The sync logs report success (e.g., "399 docs
synchronized"), but zero documents actually appear in the dataset. This
is the root cause of issue #13001.
Root cause trace:
1. `rdbms_connector.py` → `_row_to_document()` sets `semantic_identifier` from raw content (may contain `\n`)
2. `connector_service.py` → `duplicate_and_parse()` uses `semantic_identifier` as the filename
3. `file_service.py` → `upload_document()` calls `filename_type(filename)`
4. `file_utils.py` → the `filename_type()` regex `.*\.txt$` fails on newlines → returns `FileType.OTHER`
5. `upload_document()` raises "This type of file has not been supported yet!"
Fix: Sanitize the `semantic_identifier` in `_row_to_document()` by replacing
newlines and carriage returns with spaces before truncating to 100
characters.
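A minimal sketch of the sanitization (variable names illustrative; the actual change lives in `_row_to_document()`):
```python
raw_content = "Category: X\nTitle: Y"  # example row content containing a newline

# Replace newlines/carriage returns with spaces before truncating, so the
# derived filename stays on one line and matches the ".*\.txt$" regex.
clean = raw_content.replace("\r", " ").replace("\n", " ")
semantic_identifier = clean[:100]
print(semantic_identifier)  # Category: X Title: Y
```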
Relates to: #13001, #12817
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
Co-authored-by: Ahmad Intisar <ahmadintisar@Ahmads-MacBook-M4-Pro.local>
## Type of Change
- [x] Bug fix
## Description
Closes #13119
The current IMAP connector uses `split(',')` to parse email headers,
which crashes when a sender's display name contains a comma inside
quotes (e.g., `"Doe, John" <john@example.com>`).
This PR replaces the manual string splitting with Python's standard
`email.utils.getaddresses`. This correctly handles RFC 5322 quoted
strings and prevents the `RuntimeError: Expected a singular address`.
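For illustration, `email.utils.getaddresses` on the problematic header:
```python
from email.utils import getaddresses

# A naive split(',') would break the quoted display name into two pieces;
# getaddresses() parses RFC 5322 address lists correctly.
header = '"Doe, John" <john@example.com>, alice@example.com'
print(getaddresses([header]))
# [('Doe, John', 'john@example.com'), ('', 'alice@example.com')]
```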
## Checklist
- [x] I have checked the code and it works as expected.
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
### What problem does this PR solve?
The `sync_data_source.py` module imports `WebDAVConnector` from
`common.data_source`, but `WebDAVConnector` was never registered in the
package's `__init__.py`. This causes an `ImportError` at startup, crashing
the data sync service:
`ImportError: cannot import name 'WebDAVConnector' from 'common.data_source'`
The `webdav_connector.py` file already exists in the `common/data_source/`
directory — it just wasn't exported. This PR adds the import and
registers it in `__all__` (see the sketch below).
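A sketch of the fix (import style illustrative; the real `__init__.py` lists many other connectors):
```python
# common/data_source/__init__.py
from common.data_source.webdav_connector import WebDAVConnector

__all__ = [
    # ...existing connector exports...
    "WebDAVConnector",
]
```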
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
Co-authored-by: Ahmad Intisar <ahmadintisar@Ahmads-MacBook-M4-Pro.local>
### What problem does this PR solve?
Decouple the memory API into a gateway layer (routing and parameter
parsing) and a service layer (business logic).
### Type of change
- [x] Refactoring
### What problem does this PR solve?
Check whether the table was created with the current Infinity mapping
before migrating the DB.
#13089
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
Fix RDBMS field separation after chunking by wrapping field names in
brackets (`【field】: value`). This ensures fields remain distinguishable
even when `TxtParser` strips newline delimiters during chunk merging.
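A minimal sketch of the wrapping (helper name hypothetical):
```python
def format_row(row: dict) -> str:
    # Wrap each field name in 【 】 so fields remain distinguishable even
    # after newline delimiters are stripped during chunk merging.
    return "\n".join(f"【{k}】: {v}" for k, v in row.items())

print(format_row({"Category": "FAQ", "Title": "Refund policy"}))
# 【Category】: FAQ
# 【Title】: Refund policy
```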
Closes #13001
Co-authored-by: mkdev11 <YOUR_GITHUB_ID+MkDev11@users.noreply.github.com>
### What problem does this PR solve?
Bug: When a filter key doesn't exist in metas or has no matching values,
the filter was skipped entirely, causing AND logic to fail.
Example:
- Filter 1: meeting_series = '宏观早8点' (matches doc1, doc2, doc3)
- Filter 2: date = '2026-03-05' (no matches)
- Expected: [] (AND should return empty)
- Actual: [doc1, doc2, doc3] (Filter 2 was skipped)
Root cause:
Old logic iterated metas.items() first, then filters. If a filter's key
wasn't in metas, it was never processed.
Fix:
Iterate filters first, then look up in metas. If key not found, treat as
no match (empty result), which correctly applies AND logic.
Changes:
- Changed the loop order from `for k in metas: for f in filters` to `for f in filters: if f.key in metas`
- Explicitly handle missing keys as empty results
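A minimal sketch of the corrected loop (the shape of `metas` is assumed as `{key: {value: [doc_ids]}}` for illustration):
```python
def matching_doc_ids(filters, metas):
    # Iterate filters first so every condition contributes (AND semantics);
    # a key absent from metas yields an empty set instead of being skipped.
    result = None
    for f in filters:
        ids = set(metas.get(f["key"], {}).get(f["value"], []))
        result = ids if result is None else result & ids
        if not result:
            return set()
    return result or set()

metas = {"meeting_series": {"宏观早8点": ["doc1", "doc2", "doc3"]}}
filters = [{"key": "meeting_series", "value": "宏观早8点"},
           {"key": "date", "value": "2026-03-05"}]
print(matching_doc_ids(filters, metas))  # set(): AND correctly returns empty
```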
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
Co-authored-by: Clint-chan <Clint-chan@users.noreply.github.com>
## Description
This PR fixes the issue where date metadata conditions with comparison
operators (`>=`, `<=`, `>`, `<`) did not work correctly in the
`/api/v1/retrieval` endpoint.
## Problem
When using metadata conditions like:
```json
{
  "metadata_condition": {
    "conditions": [
      {
        "name": "date",
        "comparison_operator": ">=",
        "value": "2027-01-13"
      }
    ]
  }
}
```
The filtering did not work as expected because:
1. Operators `>=` and `<=` were not mapped to the internal symbols `≥` and `≤`
2. Date strings like `"2027-01-13"` failed to parse with `ast.literal_eval()`
3. Non-standard date formats were incorrectly compared as strings
## Solution
Changes in `common/metadata_utils.py`:
1. Added operator mapping in `convert_conditions()`:
   - `>=` → `≥`
   - `<=` → `≤`
   - `!=` → `≠`
2. Implemented strict date format detection in `meta_filter()`:
   - Only processes dates in `YYYY-MM-DD` format (10 characters, properly formatted)
   - When the query value is a date, only matches data in the same standard format
   - Non-standard formats (e.g., the localized "2026年1月13日", or "2026-1-22") are skipped
3. Maintained backward compatibility:
   - Numeric comparisons still work
   - String comparisons still work
   - Only affects date-formatted queries
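A minimal sketch of the strict detection (illustrative; the actual `meta_filter()` logic differs in detail):
```python
import re
from datetime import datetime

STRICT_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def try_parse_date(s):
    # Only a 10-character YYYY-MM-DD string is treated as a date; anything
    # else (e.g. "2026-1-22" or a localized form) falls back to string logic.
    if isinstance(s, str) and STRICT_DATE.match(s):
        try:
            return datetime.strptime(s, "%Y-%m-%d").date()
        except ValueError:
            return None
    return None

print(try_parse_date("2027-01-13"))  # 2027-01-13
print(try_parse_date("2026-1-22"))   # None
```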
## Testing
All test cases pass (8/8):
- ✅ Date `>=` comparison
- ✅ Date `>` comparison
- ✅ Date `<` comparison
- ✅ Date `<=` comparison
- ✅ Date `=` comparison
- ✅ Date range queries
- ✅ Non-date string comparison (backward compatibility)
- ✅ Numeric comparison (backward compatibility)
## Example Usage
```json
{
  "dataset_ids": ["xxx"],
  "question": "test",
  "metadata_condition": {
    "conditions": [
      {
        "name": "date",
        "comparison_operator": ">=",
        "value": "2027-01-13"
      }
    ]
  }
}
```
## Notes
- Only supports the standard `YYYY-MM-DD` format
- Non-standard date formats in data are treated as data quality issues and will not match
- Users should ensure their date metadata is in the correct format
---------
Co-authored-by: Clint-chan <Clint-chan@users.noreply.github.com>
### What problem does this PR solve?
Fix: addressing style without a default value (#12396, #11510)
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
This PR adds MySQL and PostgreSQL as data source connectors, allowing
users to import data directly from relational databases into RAGFlow for
RAG workflows.
Many users store their knowledge in databases (product catalogs,
documentation, FAQs, etc.) and currently have no way to sync this data
into RAGFlow without exporting to files first. This feature lets them
connect directly to their databases, run SQL queries, and automatically
create documents from the results.
Closes #763, closes #11560
### Type of change
- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
### What this PR does
**New capabilities:**
- Connect to MySQL and PostgreSQL databases
- Run custom SQL queries to extract data
- Map database columns to document content (vectorized) and metadata
(searchable)
- Support incremental sync using a timestamp column
- Full frontend UI with connection form and tooltips
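A sketch of the incremental-sync idea (query shape and names are illustrative, not the connector's actual implementation):
```python
def incremental_query(base_sql: str, ts_column: str, last_sync: str | None) -> str:
    # Wrap the configured SQL and filter on the timestamp column so only
    # rows changed since the last sync are fetched; the first run syncs all.
    if last_sync is None:
        return base_sql
    return (f"SELECT * FROM ({base_sql}) AS t "
            f"WHERE t.{ts_column} > '{last_sync}'")

print(incremental_query("SELECT id, title, body FROM faq", "updated_at",
                        "2026-01-01 00:00:00"))
```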
**Files changed:**
Backend:
- `common/constants.py` - Added MYSQL/POSTGRESQL to FileSource enum
- `common/data_source/config.py` - Added to DocumentSource enum
- `common/data_source/rdbms_connector.py` - New connector (368 lines)
- `common/data_source/__init__.py` - Exported the connector
- `rag/svr/sync_data_source.py` - Added MySQL and PostgreSQL sync
classes
- `pyproject.toml` - Added mysql-connector-python dependency
Frontend:
- `web/src/pages/user-setting/data-source/constant/index.tsx` - Form
fields
- `web/src/locales/en.ts` - English translations
- `web/src/assets/svg/data-source/mysql.svg` - MySQL icon
- `web/src/assets/svg/data-source/postgresql.svg` - PostgreSQL icon
### Testing done
Tested with MySQL 8.0 and PostgreSQL 16:
- Connection validation works correctly
- Full sync imports all query results as documents
- Incremental sync only fetches rows updated since last sync
- Custom SQL queries filter data as expected
- Invalid credentials show clear error messages
- Lint checks pass (`ruff check` returns no errors)
---------
Co-authored-by: mkdev11 <YOUR_GITHUB_ID+MkDev11@users.noreply.github.com>
### What problem does this PR solve?
Add an OceanBase memory store and extract the base class `OBConnectionBase`.
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
---------
Co-authored-by: Cursor <cursoragent@cursor.com>
### What problem does this PR solve?
This PR adds **Seafile** as a new data source connector for RAGFlow.
[Seafile](https://www.seafile.com/) is an open-source, self-hosted file
sync and share platform widely used by enterprises, universities, and
organizations that require data sovereignty and privacy. Users who store
documents in Seafile currently have no way to index and search their
content through RAGFlow.
This connector enables RAGFlow users to:
- Connect to self-hosted Seafile servers via API token
- Index documents from personal and shared libraries
- Support incremental polling for updated files
- Seamlessly integrate Seafile-stored documents into their RAG pipelines
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### Changes included
- `SeaFileConnector` implementing `LoadConnector` and `PollConnector`
interfaces
- Support for API token
- Recursive file traversal across libraries
- Time-based filtering for incremental updates
- Seafile logo (sourced from Simple Icons, CC0)
- Connector configuration and registration
### Testing
- Tested against self-hosted Seafile Community Edition
- Verified authentication (token)
- Verified document ingestion from personal and shared libraries
- Verified incremental polling with time filters
### What problem does this PR solve?
#### Summary
This PR enhances the Semi-automatic metadata filtering mode by allowing
users to explicitly pre-define operators (e.g., contains, =, >, etc.)
for selected metadata keys. While the LLM still dynamically extracts the
filter value from the user's query, it is now strictly constrained to
use the operator specified in the UI configuration.
Using this feature is optional. By default, the operator selection is set
to "automatic", in which case the LLM chooses the operator (as it does
at present).
#### Rationale & Use Case
This enhancement was driven by a concrete challenge I encountered while
working with technical documentation.
In my specific use case, I was trying to filter for software versions
within a technical manual. In this dataset, a single document chunk
often applies to multiple software versions. These versions are stored
as a combined string within the metadata for each chunk.
When using the standard semi-automatic filter, the LLM would
inconsistently choose between the contains and equals operators. When it
chose equals, it would exclude every chunk that applied to more than one
version, even if the version I was searching for was clearly included in
that metadata string. This led to incomplete and frustrating retrieval
results.
By extending the semi-automatic filter to allow pre-defining the
operator for a specific key, I was able to force the use of contains for
the version field. This change immediately led to significantly improved
and more reliable results in my case.
I believe this functionality will be equally useful for others dealing
with "tagged" or multi-value metadata where the relationship between the
query and the field is known, but the specific value needs to remain
dynamic.
#### Key Changes
##### Backend & Core Logic
- `common/metadata_utils.py`: Updated apply_meta_data_filter to support
a mixed data structure for semi_auto (handling both legacy string arrays
and the new object-based format {"key": "...", "op": "..."}).
- `rag/prompts/generator.py`: Extended gen_meta_filter to accept and
pass operator constraints to the LLM.
- `rag/prompts/meta_filter.md`: Updated the system prompt to instruct
the LLM to strictly respect provided operator constraints.
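A minimal sketch of handling the mixed structure (helper name hypothetical; see `apply_meta_data_filter` for the real logic):
```python
def normalize_semi_auto(entries):
    # Accept both the legacy string form ("version") and the new object
    # form ({"key": "version", "op": "contains"}); "automatic" leaves the
    # operator choice to the LLM, matching the previous behavior.
    normalized = []
    for e in entries:
        if isinstance(e, str):
            normalized.append({"key": e, "op": "automatic"})
        else:
            normalized.append({"key": e["key"], "op": e.get("op", "automatic")})
    return normalized

print(normalize_semi_auto(["author", {"key": "version", "op": "contains"}]))
# [{'key': 'author', 'op': 'automatic'}, {'key': 'version', 'op': 'contains'}]
```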
##### Frontend
- `web/src/components/metadata-filter/metadata-semi-auto-fields.tsx`:
Enhanced the UI to include an operator dropdown for each selected
metadata key, utilizing existing operator constants.
- `web/src/components/metadata-filter/index.tsx`: Updated the validation
schema to accommodate the new state structure.
#### Test Plan
- Backward Compatibility: Verified that existing semi-auto filters
stored as simple strings still function correctly.
- Prompt Verification: Confirmed that constraints are correctly rendered
in the LLM system prompt when specified.
- Added unit tests as
`test/unit_test/common/test_apply_semi_auto_meta_data_filter.py`
- Manual End-to-End:
- Configured a "Semi-automatic" filter for a "Version" key with the
"contains" operator.
- Asked a version-specific query.
- Result
<img width="1173" height="704" alt="Screenshot 2026-02-02 145359"
src="https://github.com/user-attachments/assets/510a6a61-a231-4dc2-a7fe-cdfc07219132"
/>
### Type of change
- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
---------
Co-authored-by: Philipp Heyken Soares <philipp.heyken-soares@am.ai>
### What problem does this PR solve?
As title.
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
Co-authored-by: Liu An <asiro@qq.com>
### What problem does this PR solve?
Close #12770
This PR adds OceanBase as a storage backend for the Table Parser. It
enables dynamic table schema storage via JSON and implements OceanBase
SQL execution for text-to-SQL retrieval.
### Type of change
- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
### Changes
- Table Parser stores row data into `chunk_data` when doc engine is
OceanBase. (table.py)
- OceanBase table schema adds `chunk_data` JSON column and migrates if
needed.
- Implemented OceanBase `sql()` to execute text-to-SQL results.
(ob_conn.py)
- Add `DOC_ENGINE_OCEANBASE` flag for engine detection (setting.py)
### Test
1. Set `DOC_ENGINE=oceanbase` (e.g. in `docker/.env`)
<img width="1290" height="783" alt="doc_engine_ob"
src="https://github.com/user-attachments/assets/7d1c609f-7bf2-4b2e-b4cc-4243e72ad4f1"
/>
2. Upload an Excel file to the Knowledge Base (for testing, we use the file below).
<img width="786" height="930" alt="excel"
src="https://github.com/user-attachments/assets/bedf82f2-cd00-426b-8f4d-6978a151231a"
/>
3. Choose **Table** as parsing method.
<img width="2550" height="1134" alt="parse_excel"
src="https://github.com/user-attachments/assets/aba11769-02be-4905-97e1-e24485e24cd0"
/>
4. Ask a natural language query in chat.
<img width="2550" height="1134" alt="query"
src="https://github.com/user-attachments/assets/26a910a6-e503-4ac7-b66a-f5754bbb0e91"
/>
### What problem does this PR solve?
When loading JSON mapping/schema files, the code used
`json.load(open(path))` without closing the file. The file handle stayed
open until garbage collection, which can leak file descriptors under
load (e.g. repeated reconnects or migrations).
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### Change
Replaced `json.load(open(...))` with a context manager so the file is
closed after loading:
`with open(fp_mapping, "r") as f: ... = json.load(f)`
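Expanded (example path; the real one comes from configuration):
```python
import json

fp_mapping = "mapping.json"

# Before: the file handle stayed open until garbage collection.
# mapping = json.load(open(fp_mapping, "r"))

# After: the context manager closes the file as soon as loading finishes.
with open(fp_mapping, "r") as f:
    mapping = json.load(f)
```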
### Files updated
- `rag/utils/opensearch_conn.py` – mapping load (1 place)
- `common/doc_store/es_conn_base.py` – mapping load + `doc_meta_mapping` load (2 places)
- `common/doc_store/infinity_conn_base.py` – schema loads in `_migrate_db`, doc metadata table creation, and SQL field mapping (4 places)
Behavior is unchanged; only resource handling is fixed.
Co-authored-by: Gittensor Miner <miner@gittensor.io>
### What problem does this PR solve?
Fixed thread pool workers and improved the retrieval component.
### Type of change
- [x] Refactoring
- [x] Performance Improvement
### What problem does this PR solve?
##### Summary
This PR fixes a bug in the metadata filtering logic where the contains
and not contains operators were behaving identically to the in and not
in operators. It also standardizes the syntax for string-based
operators.
##### The Issue
On the main branch, the contains operator was implemented as:
`matched = input in value if not isinstance(input, list) else all(i in
value for i in input)`
This logic is identical to the `in` operator. It checks if the metadata
(`input`) exists within the filter (`value`). For a "contains" search,
the logic should be reversed: _we want to check if the filter value
exists within the metadata input_.
##### Solution Presented Here
The operators have been rewritten using str.find():
Contains: `str(input).find(value) >= 0`
Not Contains: `str(input).find(value) == -1`
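For illustration, with multi-value metadata:
```python
metadata = "v1.2, v1.3, v2.0"  # metadata field (input)
wanted = "v1.3"                # filter value

print(str(metadata).find(wanted) >= 0)   # contains -> True
print(str(metadata).find(wanted) == -1)  # not contains -> False
# The old logic asked the reverse question ("is the metadata inside the filter?"):
print(metadata in wanted)                # False -- why matches were missed
```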
##### Advantage
This approach places the metadata (`input`) on the left side of the
expression. This maintains stylistic consistency with the existing "start
with" and "end with" operators in the same file, which also place the
input on the left (e.g., `str(input).lower().startswith(...)`).
##### Considered Alternative
In a previous PR we considered using the standard Python `in` operator:
`value in str(input)`.
The `in` operator is approximately 15% faster because it uses optimized
Python bytecode (`CONTAINS_OP`) and avoids an attribute lookup. However,
following the rejection of that PR, we now propose the change presented here.
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
---------
Co-authored-by: Philipp Heyken Soares <philipp.heyken-soares@am.ai>
### What problem does this PR solve?
Put document metadata in ES/Infinity.
Index name for the metadata: `ragflow_doc_meta_{tenant_id}`
### Type of change
- [x] Refactoring
### What problem does this PR solve?
1) Create a dataset using the table parser on Infinity
2) Answer questions in chat using SQL
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
Add seekdb as a doc_engine; it is the lite version of OceanBase.
Close #12691
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
Fixes Infinity-specific API regressions: preserves the `important_kwd`
round-trip for `[""]`, restores the required highlight key in retrieval
responses, and enforces Infinity guards for the unsupported
`parser_id=tag` and pagerank in `/v1/kb/update`. Also removes a
slow/buggy pandas row-wise apply that was throwing `ValueError` and
causing flakiness.
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Add PaddleOCR as a new PDF parser.
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
Changes:
- Initialize webhook configuration in the webhook function
- Remove the debug print statement from airtable_connector
- Remove a redundant uuid import in imap_connector
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Feat: Bitbucket connector NOT READY TO MERGE
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
issue:
#12313
change:
add Zendesk data source integration with configuration and sync
capabilities
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
issue:
#12217 [#12313](https://github.com/infiniflow/ragflow/issues/12313)
change:
add IMAP data source integration with configuration and sync
capabilities
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
The filter_out function in metadata_utils.py was using a list of tuples
to evaluate conditions. Python eagerly evaluates all tuple elements when
constructing the list, causing "input in value" to be evaluated even
when the operator is "=". When input and value are floats (after numeric
conversion), this causes TypeError: "argument of type 'float' is not
iterable".
This change replaces the tuple list with an if-elif chain, ensuring only
the matching condition is evaluated.
### What problem does this PR solve?
Fixes #12285
When using comparison operators like `=`, `>`, `<` with numeric
metadata, the `filter_out` function throws `TypeError("argument of type
'float' is not iterable")`. This is because Python eagerly evaluates all
tuple elements when constructing a list, causing `input in value` to be
evaluated even when the operator is `=`.
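A minimal reproduction of the difference (simplified from `filter_out`; variable names follow the description above):
```python
def filter_out_eager(op, input, value):
    # Buggy pattern: every tuple element is evaluated when the list is
    # built, so `input in value` runs even when op is "=".
    checks = [("=", input == value), ("in", input in value)]
    return dict(checks)[op]

def filter_out_lazy(op, input, value):
    # Fixed pattern: the if-elif chain evaluates only the matching branch.
    if op == "=":
        return input == value
    elif op == "in":
        return input in value
    return False

print(filter_out_lazy("=", 3.14, 2.0))  # False
# filter_out_eager("=", 3.14, 2.0)      # TypeError: argument of type 'float' is not iterable
```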
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
### What problem does this PR solve?
Use an async task to save memory.
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
---------
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
### What problem does this PR solve?
Feat: GitLab connector
Fix: submit button in dark mode
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
### What problem does this PR solve?
change: Add Asana data source integration and configuration options
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
### What problem does this PR solve?
Fix several text issues.
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
---------
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
### What problem does this PR solve?
Manage messages and use them in the agent.
Issue #4213
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
change:
add Airtable connector and integration for data synchronization
### Type of change
- [x] New Feature (non-breaking change which adds functionality)