Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-05-03 08:47:56 +08:00)
feat(assets): align local API with cloud spec (#12863)
* feat(assets): align local API with cloud spec

  Unify response models, add missing fields, and align input schemas
  with the cloud OpenAPI spec at cloud.comfy.org/openapi.

  - Replace AssetSummary/AssetDetail/AssetUpdated with single Asset model
  - Add is_immutable, metadata (system_metadata), prompt_id fields
  - Support mime_type and preview_id in update endpoint
  - Make CreateFromHashBody.name optional, add mime_type, require >=1 tag
  - Add id/mime_type/preview_id to upload, relax tags to optional
  - Rename total_tags → tags in tag add/remove responses
  - Add GET /api/assets/tags/refine histogram endpoint
  - Add DB migration for system_metadata and prompt_id columns

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fix review issues: tags validation, size nullability, type annotation, hash mismatch check, and add tag histogram tests

  - Remove contradictory min_length=1 from CreateFromHashBody.tags default
  - Restore size field to int|None=None for proper null semantics
  - Add Union type annotation to _build_asset_response result param
  - Add hash mismatch validation on idempotent upload path (409 HASH_MISMATCH)
  - Add unit tests for list_tag_histogram service function

  Amp-Thread-ID: https://ampcode.com/threads/T-019cd993-f43c-704e-b3d7-6cfc3d4d4a80
  Co-authored-by: Amp <amp@ampcode.com>

* Add preview_url to /assets API response using /api/view endpoint

  For input and output assets, generate a preview_url pointing to the
  existing /api/view endpoint using the asset's filename and tag-derived
  type (input/output). Handles subdirectories via subfolder param and
  URL-encodes filenames with spaces, unicode, and special characters.
  This aligns the OSS backend response with the frontend AssetCard
  expectation for thumbnail rendering.
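The preview_url construction described in that last commit can be sketched as follows. This is a minimal illustration only: the helper name and exact query-parameter layout are assumptions, not the project's actual code; only the general shape (the existing /api/view endpoint, a subfolder param, and URL-encoded filenames) comes from the commit message.

```python
from urllib.parse import quote

def build_preview_url(filename: str, asset_type: str, subfolder: str = "") -> str:
    # Hypothetical helper: point preview_url at the existing /api/view
    # endpoint, URL-encoding the filename (spaces, unicode, specials)
    # and passing any subdirectory via the subfolder query param.
    url = f"/api/view?filename={quote(filename)}&type={quote(asset_type)}"
    if subfolder:
        url += f"&subfolder={quote(subfolder)}"
    return url

print(build_preview_url("my image.png", "output"))
# /api/view?filename=my%20image.png&type=output
```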
  Amp-Thread-ID: https://ampcode.com/threads/T-019cda3f-5c2c-751a-a906-ac6c9153ac5c
  Co-authored-by: Amp <amp@ampcode.com>

* chore: remove unused imports from asset_reference queries

  Amp-Thread-ID: https://ampcode.com/threads/T-019cda7d-cb21-77b4-a51b-b965af60208c
  Co-authored-by: Amp <amp@ampcode.com>

* feat: resolve blake3 hashes in /view endpoint via asset database

  Amp-Thread-ID: https://ampcode.com/threads/T-019cda7d-cb21-77b4-a51b-b965af60208c
  Co-authored-by: Amp <amp@ampcode.com>

* Register uploaded images in asset database when --enable-assets is set

  Add register_file_in_place() service function to ingest module for
  registering already-saved files without moving them. Call it from the
  /upload/image endpoint to return asset metadata in the response.

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce023-3384-7560-bacf-de40b0de0dd2
  Co-authored-by: Amp <amp@ampcode.com>

* Exclude None fields from asset API JSON responses

  Add exclude_none=True to model_dump() calls across asset routes to
  keep response payloads clean by omitting unset optional fields.

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce023-3384-7560-bacf-de40b0de0dd2
  Co-authored-by: Amp <amp@ampcode.com>

* Add comment explaining why /view resolves blake3 hashes

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce023-3384-7560-bacf-de40b0de0dd2
  Co-authored-by: Amp <amp@ampcode.com>

* Move blake3 hash resolution to asset_management service

  Extract resolve_hash_to_path() into asset_management.py and remove
  _resolve_blake3_to_path from server.py. Also revert loopback origin
  check to original logic.

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce023-3384-7560-bacf-de40b0de0dd2
  Co-authored-by: Amp <amp@ampcode.com>

* Require at least one tag in UploadAssetSpec

  Enforce non-empty tags at the Pydantic validation layer so uploads
  with no tags are rejected with a 400 before reaching ingest. Adds
  test_upload_empty_tags_rejected to cover this case.
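The "at least one tag" rule can be sketched with Pydantic v2, which the message says enforces it at the validation layer. The field set below is illustrative, not the project's full UploadAssetSpec schema; only the non-empty-tags constraint comes from the commit message.

```python
from pydantic import BaseModel, Field, ValidationError

class UploadAssetSpec(BaseModel):
    # Sketch: at least one tag is required, so an empty tags list fails
    # validation (surfaced as HTTP 400) before reaching ingest.
    name: str
    tags: list[str] = Field(min_length=1)

try:
    UploadAssetSpec(name="notags.bin", tags=[])
except ValidationError:
    print("rejected: tags must be non-empty")
```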
  Amp-Thread-ID: https://ampcode.com/threads/T-019ce377-8bde-7048-bc28-a9df063409f9
  Co-authored-by: Amp <amp@ampcode.com>

* Add owner_id check to resolve_hash_to_path

  Filter asset references by owner visibility so the /view endpoint only
  resolves hashes for assets the requesting user can access. Adds
  table-driven tests for owner visibility cases.

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce377-8bde-7048-bc28-a9df063409f9
  Co-authored-by: Amp <amp@ampcode.com>

* Make ReferenceData.created_at and updated_at required

  Remove None defaults and type: ignore comments. Move fields before
  optional fields to satisfy dataclass ordering.

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce377-8bde-7048-bc28-a9df063409f9
  Co-authored-by: Amp <amp@ampcode.com>

* Fix double commit in create_from_hash

  Move mime_type update into _register_existing_asset so it shares a
  single transaction with reference creation. Log a warning when the
  hash is not found instead of silently returning None.

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce377-8bde-7048-bc28-a9df063409f9
  Co-authored-by: Amp <amp@ampcode.com>

* Add exclude_none=True to create/upload responses

  Align with get/update/list endpoints for consistent JSON output.

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce377-8bde-7048-bc28-a9df063409f9
  Co-authored-by: Amp <amp@ampcode.com>

* Change preview_id to reference asset by reference ID, not content ID

  Clients receive preview_id in API responses but could not dereference
  it through public routes (which use reference IDs). Now preview_id is
  a self-referential FK to asset_references.id so the value is directly
  usable in the public API.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Filter soft-deleted and missing refs from visibility queries

  list_references_by_asset_id and list_tags_with_usage were not
  filtering out deleted_at/is_missing refs, allowing
  /view?filename=blake3:... to serve files through hidden references and
  inflating tag usage counts.
  Add list_all_file_paths_by_asset_id for orphan cleanup which
  intentionally needs unfiltered access.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Pass preview_id and mime_type through all asset creation fast paths

  The duplicate-content upload path and hash-based creation paths were
  silently dropping preview_id and mime_type. This wires both fields
  through _register_existing_asset, create_from_hash, and all route call
  sites so behavior is consistent regardless of whether the asset
  content already exists.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove unimplemented client-provided ID from upload API

  The `id` field on UploadAssetSpec was advertised for idempotent
  creation but never actually honored when creating new references.
  Remove it rather than implementing the feature.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Make asset mime_type immutable after first ingest

  Prevents cross-tenant metadata mutation when multiple references share
  the same content-addressed Asset row. mime_type can now only be set
  when NULL (first ingest); subsequent attempts to change it are
  silently ignored.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Use resolved content_type from asset lookup in /view endpoint

  The /view endpoint was discarding the content_type computed by
  resolve_hash_to_path() and re-guessing from the filename, which
  produced wrong results for extensionless files or mismatched
  extensions.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Merge system+user metadata into filter projection

  Extract rebuild_metadata_projection() to build AssetReferenceMeta rows
  from {**system_metadata, **user_metadata}, so system-generated
  metadata is queryable via metadata_filter and user keys override
  system keys.
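The merge semantics described in that last commit (system keys queryable, user keys winning on collision) reduce to a plain dict merge. The sketch below is illustrative: the real rebuild_metadata_projection() builds AssetReferenceMeta rows from this dict, which is not shown here.

```python
def rebuild_metadata_projection(system_metadata, user_metadata):
    # User-provided keys override system-generated keys on collision;
    # every remaining system key still lands in the queryable projection.
    return {**(system_metadata or {}), **(user_metadata or {})}

merged = rebuild_metadata_projection(
    {"origin": "auto_scan", "source": "scanner"},
    {"origin": "user_upload"},
)
print(merged)
# {'origin': 'user_upload', 'source': 'scanner'}
```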
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Standardize tag ordering to alphabetical across all endpoints

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Derive subfolder tags from path in register_file_in_place

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Reject client-provided id, fix preview URLs, rename tags→total_tags

  - Reject 'id' field in multipart upload with 400 UNSUPPORTED_FIELD
    instead of silently ignoring it
  - Build preview URL from the preview asset's own metadata rather than
    the parent asset's
  - Rename 'tags' to 'total_tags' in TagsAdd/TagsRemove response schemas
    for clarity

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: SQLite migration 0003 FK drop fails on file-backed DBs (MB-2)

  Add naming_convention to Base.metadata so Alembic batch-mode
  reflection can match unnamed FK constraints created by migration 0002.
  Pass naming_convention and render_as_batch=True through env.py online
  config. Add migration roundtrip tests (upgrade/downgrade/cycle from
  baseline).

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce466-1683-7471-b6e1-bb078223cda0
  Co-authored-by: Amp <amp@ampcode.com>

* Fix missing tag count for is_missing references and update test for total_tags field

  - Allow is_missing=True references to be counted in
    list_tags_with_usage when the tag is 'missing', so the missing tag
    count reflects all references that have been tagged as missing
  - Add update_is_missing_by_asset_id query helper for bulk updates by
    asset
  - Update test_add_and_remove_tags to use 'total_tags' matching the API
    schema

  Amp-Thread-ID: https://ampcode.com/threads/T-019ce482-05e7-7324-a1b0-a56a929cc7ef
  Co-authored-by: Amp <amp@ampcode.com>

* Remove unused imports in scanner.py

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Rename prompt_id to job_id on asset_references

  Rename the column in the DB model, migration, and service schemas.
  The API response emits both job_id and prompt_id (deprecated alias)
  for backward compatibility with the cloud API.

  Amp-Thread-ID: https://ampcode.com/threads/T-019cef41-60b0-752a-aa3c-ed7f20fda2f7
  Co-authored-by: Amp <amp@ampcode.com>

* Add index on asset_references.preview_id for FK cascade performance

  Amp-Thread-ID: https://ampcode.com/threads/T-019cef45-a4d2-7548-86d2-d46bcd3db419
  Co-authored-by: Amp <amp@ampcode.com>

* Add clarifying comments for Asset/AssetReference naming and preview_id

  Amp-Thread-ID: https://ampcode.com/threads/T-019cef49-f94e-7348-bf23-9a19ebf65e0d
  Co-authored-by: Amp <amp@ampcode.com>

* Disallow all-null meta rows: add CHECK constraint, skip null values on write

  - convert_metadata_to_rows returns [] for None values instead of an
    all-null row
  - Remove dead None branch from _scalar_to_row
  - Simplify null filter in common.py to just check for row absence
  - Add CHECK constraint ck_asset_reference_meta_has_value to model and
    migration 0003

  Amp-Thread-ID: https://ampcode.com/threads/T-019cef4e-5240-7749-bb25-1f17fcf9c09c
  Co-authored-by: Amp <amp@ampcode.com>

* Remove dead None guards on result.asset in upload handler

  register_file_in_place guarantees a non-None asset, so the
  'if result.asset else None' checks were unreachable.

  Amp-Thread-ID: https://ampcode.com/threads/T-019cef5b-4cf8-723c-8a98-8fb8f333c133
  Co-authored-by: Amp <amp@ampcode.com>

* Remove mime_type from asset update API

  Clients can no longer modify mime_type after asset creation via the
  PUT /api/assets/{id} endpoint. This reduces the risk of mime_type
  spoofing. The internal update_asset_hash_and_mime function remains
  available for server-side use (e.g., enrichment).
  Amp-Thread-ID: https://ampcode.com/threads/T-019cef5d-8d61-75cc-a1c6-2841ac395648
  Co-authored-by: Amp <amp@ampcode.com>

* Fix migration constraint naming double-prefix and NULL in mixed metadata lists

  - Use fully-rendered constraint names in migration 0003 to avoid the
    naming convention doubling the ck_ prefix on batch operations.
  - Add table_args to downgrade so SQLite batch mode can find the CHECK
    constraint (not exposed by SQLite reflection).
  - Fix model CheckConstraint name to use bare 'has_value' (convention
    auto-prefixes).
  - Skip None items when converting metadata lists to rows, preventing
    all-NULL rows that violate the has_value check constraint.

  Amp-Thread-ID: https://ampcode.com/threads/T-019cef87-94f9-7172-a6af-c6282290ce4f
  Co-authored-by: Amp <amp@ampcode.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Amp <amp@ampcode.com>
committed by GitHub
parent 593be209a4
commit 2bd4d82b4f
tests-unit/app_test/test_migrations.py (new file, 57 lines)
@@ -0,0 +1,57 @@
"""Test that Alembic migrations run cleanly on a file-backed SQLite DB.

This catches problems like unnamed FK constraints that prevent batch-mode
drop_constraint from working on real SQLite files (see MB-2).

Migrations 0001 and 0002 are already shipped, so we only exercise
upgrade/downgrade for 0003+.
"""
import os

import pytest
from alembic import command
from alembic.config import Config

# Oldest shipped revision — we upgrade to here as a baseline and never
# downgrade past it.
_BASELINE = "0002_merge_to_asset_references"


def _make_config(db_path: str) -> Config:
    root = os.path.join(os.path.dirname(__file__), "../..")
    config_path = os.path.abspath(os.path.join(root, "alembic.ini"))
    scripts_path = os.path.abspath(os.path.join(root, "alembic_db"))

    cfg = Config(config_path)
    cfg.set_main_option("script_location", scripts_path)
    cfg.set_main_option("sqlalchemy.url", f"sqlite:///{db_path}")
    return cfg


@pytest.fixture
def migration_db(tmp_path):
    """Yield an alembic Config pre-upgraded to the baseline revision."""
    db_path = str(tmp_path / "test_migration.db")
    cfg = _make_config(db_path)
    command.upgrade(cfg, _BASELINE)
    yield cfg


def test_upgrade_to_head(migration_db):
    """Upgrade from baseline to head must succeed on a file-backed DB."""
    command.upgrade(migration_db, "head")


def test_downgrade_to_baseline(migration_db):
    """Upgrade to head then downgrade back to baseline."""
    command.upgrade(migration_db, "head")
    command.downgrade(migration_db, _BASELINE)


def test_upgrade_downgrade_cycle(migration_db):
    """Full cycle: upgrade → downgrade → upgrade again."""
    command.upgrade(migration_db, "head")
    command.downgrade(migration_db, _BASELINE)
    command.upgrade(migration_db, "head")
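The naming_convention mechanism these migration tests exercise can be sketched with a minimal SQLAlchemy example. The table and token patterns below are illustrative, not the project's actual models or convention; the point is that a convention on MetaData gives otherwise-unnamed constraints deterministic names, which is what lets Alembic batch mode reflect and drop them on SQLite.

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table
from sqlalchemy.schema import CreateTable

# Deterministic names for constraints that would otherwise be unnamed.
convention = {
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
}
metadata = MetaData(naming_convention=convention)

assets = Table("assets", metadata, Column("id", Integer, primary_key=True))
refs = Table(
    "asset_references", metadata,
    Column("id", Integer, primary_key=True),
    Column("asset_id", Integer, ForeignKey("assets.id")),
)

# The FK constraint picks up its name from the convention, so the
# rendered DDL contains "fk_asset_references_asset_id_assets".
print(str(CreateTable(refs)))
```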
@@ -10,6 +10,7 @@ from app.assets.database.queries import (
     get_asset_by_hash,
     upsert_asset,
     bulk_insert_assets,
+    update_asset_hash_and_mime,
 )
@@ -142,3 +143,45 @@ class TestBulkInsertAssets:
         session.commit()

         assert session.query(Asset).count() == 200
+
+
+class TestMimeTypeImmutability:
+    """mime_type on Asset is write-once: set on first ingest, never overwritten."""
+
+    @pytest.mark.parametrize(
+        "initial_mime,second_mime,expected_mime",
+        [
+            ("image/png", "image/jpeg", "image/png"),
+            (None, "image/png", "image/png"),
+        ],
+        ids=["preserves_existing", "fills_null"],
+    )
+    def test_upsert_mime_immutability(self, session: Session, initial_mime, second_mime, expected_mime):
+        h = f"blake3:upsert_{initial_mime}_{second_mime}"
+        upsert_asset(session, asset_hash=h, size_bytes=100, mime_type=initial_mime)
+        session.commit()
+
+        asset, created, _ = upsert_asset(session, asset_hash=h, size_bytes=100, mime_type=second_mime)
+        assert created is False
+        assert asset.mime_type == expected_mime
+
+    @pytest.mark.parametrize(
+        "initial_mime,update_mime,update_hash,expected_mime,expected_hash",
+        [
+            (None, "image/png", None, "image/png", "blake3:upd0"),
+            ("image/png", "image/jpeg", None, "image/png", "blake3:upd1"),
+            ("image/png", "image/jpeg", "blake3:upd2_new", "image/png", "blake3:upd2_new"),
+        ],
+        ids=["fills_null", "preserves_existing", "hash_updates_mime_locked"],
+    )
+    def test_update_asset_hash_and_mime_immutability(
+        self, session: Session, initial_mime, update_mime, update_hash, expected_mime, expected_hash,
+    ):
+        h = expected_hash.removesuffix("_new")
+        asset = Asset(hash=h, size_bytes=100, mime_type=initial_mime)
+        session.add(asset)
+        session.flush()
+
+        update_asset_hash_and_mime(session, asset_id=asset.id, mime_type=update_mime, asset_hash=update_hash)
+        assert asset.mime_type == expected_mime
+        assert asset.hash == expected_hash
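The write-once rule these tests encode reduces to a single expression. This is a sketch of the semantics only (the hypothetical helper name is not from the codebase): mime_type may be filled while NULL, and later attempts to change it are ignored.

```python
def apply_write_once_mime(current, incoming):
    # Write-once: keep the existing mime_type if set; otherwise accept
    # the incoming value (first ingest fills the NULL).
    return current if current is not None else incoming

# Mirrors the parametrized cases above:
assert apply_write_once_mime("image/png", "image/jpeg") == "image/png"  # preserves_existing
assert apply_write_once_mime(None, "image/png") == "image/png"          # fills_null
```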
@@ -242,22 +242,24 @@ class TestSetReferencePreview:
         asset = _make_asset(session, "hash1")
         preview_asset = _make_asset(session, "preview_hash")
         ref = _make_reference(session, asset)
+        preview_ref = _make_reference(session, preview_asset, name="preview.png")
         session.commit()

-        set_reference_preview(session, reference_id=ref.id, preview_asset_id=preview_asset.id)
+        set_reference_preview(session, reference_id=ref.id, preview_reference_id=preview_ref.id)
         session.commit()

         session.refresh(ref)
-        assert ref.preview_id == preview_asset.id
+        assert ref.preview_id == preview_ref.id

     def test_clears_preview(self, session: Session):
         asset = _make_asset(session, "hash1")
         preview_asset = _make_asset(session, "preview_hash")
         ref = _make_reference(session, asset)
-        ref.preview_id = preview_asset.id
+        preview_ref = _make_reference(session, preview_asset, name="preview.png")
+        ref.preview_id = preview_ref.id
         session.commit()

-        set_reference_preview(session, reference_id=ref.id, preview_asset_id=None)
+        set_reference_preview(session, reference_id=ref.id, preview_reference_id=None)
         session.commit()

         session.refresh(ref)
@@ -265,15 +267,15 @@ class TestSetReferencePreview:

     def test_raises_for_nonexistent_reference(self, session: Session):
         with pytest.raises(ValueError, match="not found"):
-            set_reference_preview(session, reference_id="nonexistent", preview_asset_id=None)
+            set_reference_preview(session, reference_id="nonexistent", preview_reference_id=None)

     def test_raises_for_nonexistent_preview(self, session: Session):
         asset = _make_asset(session, "hash1")
         ref = _make_reference(session, asset)
         session.commit()

-        with pytest.raises(ValueError, match="Preview Asset"):
-            set_reference_preview(session, reference_id=ref.id, preview_asset_id="nonexistent")
+        with pytest.raises(ValueError, match="Preview AssetReference"):
+            set_reference_preview(session, reference_id=ref.id, preview_reference_id="nonexistent")


 class TestInsertReference:
@@ -351,13 +353,14 @@ class TestUpdateReferenceTimestamps:
         asset = _make_asset(session, "hash1")
         preview_asset = _make_asset(session, "preview_hash")
         ref = _make_reference(session, asset)
+        preview_ref = _make_reference(session, preview_asset, name="preview.png")
         session.commit()

-        update_reference_timestamps(session, ref, preview_id=preview_asset.id)
+        update_reference_timestamps(session, ref, preview_id=preview_ref.id)
         session.commit()

         session.refresh(ref)
-        assert ref.preview_id == preview_asset.id
+        assert ref.preview_id == preview_ref.id


 class TestSetReferenceMetadata:
@@ -20,6 +20,7 @@ def _make_reference(
     asset: Asset,
     name: str,
     metadata: dict | None = None,
+    system_metadata: dict | None = None,
 ) -> AssetReference:
     now = get_utc_now()
     ref = AssetReference(
@@ -27,6 +28,7 @@ def _make_reference(
         name=name,
         asset_id=asset.id,
         user_metadata=metadata,
+        system_metadata=system_metadata,
         created_at=now,
         updated_at=now,
         last_access_time=now,
@@ -34,8 +36,10 @@ def _make_reference(
     session.add(ref)
     session.flush()

-    if metadata:
-        for key, val in metadata.items():
+    # Build merged projection: {**system_metadata, **user_metadata}
+    merged = {**(system_metadata or {}), **(metadata or {})}
+    if merged:
+        for key, val in merged.items():
             for row in convert_metadata_to_rows(key, val):
                 meta_row = AssetReferenceMeta(
                     asset_reference_id=ref.id,
@@ -182,3 +186,46 @@ class TestMetadataFilterEmptyDict:

         refs, _, total = list_references_page(session, metadata_filter={})
         assert total == 2
+
+
+class TestSystemMetadataProjection:
+    """Tests for system_metadata merging into the filter projection."""
+
+    def test_system_metadata_keys_are_filterable(self, session: Session):
+        """system_metadata keys should appear in the merged projection."""
+        asset = _make_asset(session, "hash1")
+        _make_reference(
+            session, asset, "with_sys",
+            system_metadata={"source": "scanner"},
+        )
+        _make_reference(session, asset, "without_sys")
+        session.commit()
+
+        refs, _, total = list_references_page(
+            session, metadata_filter={"source": "scanner"}
+        )
+        assert total == 1
+        assert refs[0].name == "with_sys"
+
+    def test_user_metadata_overrides_system_metadata(self, session: Session):
+        """user_metadata should win when both have the same key."""
+        asset = _make_asset(session, "hash1")
+        _make_reference(
+            session, asset, "overridden",
+            metadata={"origin": "user_upload"},
+            system_metadata={"origin": "auto_scan"},
+        )
+        session.commit()
+
+        # Should match the user value, not the system value
+        refs, _, total = list_references_page(
+            session, metadata_filter={"origin": "user_upload"}
+        )
+        assert total == 1
+        assert refs[0].name == "overridden"
+
+        # Should NOT match the system value (it was overridden)
+        refs, _, total = list_references_page(
+            session, metadata_filter={"origin": "auto_scan"}
+        )
+        assert total == 0
@@ -11,6 +11,7 @@ from app.assets.services import (
     delete_asset_reference,
     set_asset_preview,
 )
+from app.assets.services.asset_management import resolve_hash_to_path


 def _make_asset(session: Session, hash_val: str = "blake3:test", size: int = 1024) -> Asset:
@@ -219,31 +220,33 @@ class TestSetAssetPreview:
         asset = _make_asset(session, hash_val="blake3:main")
         preview_asset = _make_asset(session, hash_val="blake3:preview")
         ref = _make_reference(session, asset)
+        preview_ref = _make_reference(session, preview_asset, name="preview.png")
         ref_id = ref.id
-        preview_id = preview_asset.id
+        preview_ref_id = preview_ref.id
         session.commit()

         set_asset_preview(
             reference_id=ref_id,
-            preview_asset_id=preview_id,
+            preview_reference_id=preview_ref_id,
         )

         # Verify by re-fetching from DB
         session.expire_all()
         updated_ref = session.get(AssetReference, ref_id)
-        assert updated_ref.preview_id == preview_id
+        assert updated_ref.preview_id == preview_ref_id

     def test_clears_preview(self, mock_create_session, session: Session):
         asset = _make_asset(session)
         preview_asset = _make_asset(session, hash_val="blake3:preview")
         ref = _make_reference(session, asset)
-        ref.preview_id = preview_asset.id
+        preview_ref = _make_reference(session, preview_asset, name="preview.png")
+        ref.preview_id = preview_ref.id
         ref_id = ref.id
         session.commit()

         set_asset_preview(
             reference_id=ref_id,
-            preview_asset_id=None,
+            preview_reference_id=None,
         )

         # Verify by re-fetching from DB
@@ -263,6 +266,45 @@ class TestSetAssetPreview:
         with pytest.raises(PermissionError, match="not owner"):
             set_asset_preview(
                 reference_id=ref.id,
-                preview_asset_id=None,
+                preview_reference_id=None,
                 owner_id="user2",
             )
+
+
+class TestResolveHashToPath:
+    def test_returns_none_for_unknown_hash(self, mock_create_session):
+        result = resolve_hash_to_path("blake3:" + "a" * 64)
+        assert result is None
+
+    @pytest.mark.parametrize(
+        "ref_owner, query_owner, expect_found",
+        [
+            ("user1", "user1", True),
+            ("user1", "user2", False),
+            ("", "anyone", True),
+            ("", "", True),
+        ],
+        ids=[
+            "owner_sees_own_ref",
+            "other_owner_blocked",
+            "ownerless_visible_to_anyone",
+            "ownerless_visible_to_empty",
+        ],
+    )
+    def test_owner_visibility(
+        self, ref_owner, query_owner, expect_found,
+        mock_create_session, session: Session, temp_dir,
+    ):
+        f = temp_dir / "file.bin"
+        f.write_bytes(b"data")
+        asset = _make_asset(session, hash_val="blake3:" + "b" * 64)
+        ref = _make_reference(session, asset, name="file.bin", owner_id=ref_owner)
+        ref.file_path = str(f)
+        session.commit()
+
+        result = resolve_hash_to_path(asset.hash, owner_id=query_owner)
+        if expect_found:
+            assert result is not None
+            assert result.abs_path == str(f)
+        else:
+            assert result is None
@@ -113,11 +113,19 @@ class TestIngestFileFromPath:
         file_path = temp_dir / "with_preview.bin"
         file_path.write_bytes(b"data")

-        # Create a preview asset first
+        # Create a preview asset and reference
         preview_asset = Asset(hash="blake3:preview", size_bytes=100)
         session.add(preview_asset)
         session.flush()
+        from app.assets.helpers import get_utc_now
+        now = get_utc_now()
+        preview_ref = AssetReference(
+            asset_id=preview_asset.id, name="preview.png", owner_id="",
+            created_at=now, updated_at=now, last_access_time=now,
+        )
+        session.add(preview_ref)
         session.commit()
-        preview_id = preview_asset.id
+        preview_id = preview_ref.id

         result = _ingest_file_from_path(
             abs_path=str(file_path),
tests-unit/assets_test/services/test_tag_histogram.py (new file, 123 lines)
@@ -0,0 +1,123 @@
"""Tests for list_tag_histogram service function."""
from sqlalchemy.orm import Session

from app.assets.database.models import Asset, AssetReference
from app.assets.database.queries import ensure_tags_exist, add_tags_to_reference
from app.assets.helpers import get_utc_now
from app.assets.services.tagging import list_tag_histogram


def _make_asset(session: Session, hash_val: str = "blake3:test") -> Asset:
    asset = Asset(hash=hash_val, size_bytes=1024)
    session.add(asset)
    session.flush()
    return asset


def _make_reference(
    session: Session,
    asset: Asset,
    name: str = "test",
    owner_id: str = "",
) -> AssetReference:
    now = get_utc_now()
    ref = AssetReference(
        owner_id=owner_id,
        name=name,
        asset_id=asset.id,
        created_at=now,
        updated_at=now,
        last_access_time=now,
    )
    session.add(ref)
    session.flush()
    return ref


class TestListTagHistogram:
    def test_returns_counts_for_all_tags(self, mock_create_session, session: Session):
        ensure_tags_exist(session, ["alpha", "beta"])
        a1 = _make_asset(session, "blake3:aaa")
        r1 = _make_reference(session, a1, name="r1")
        add_tags_to_reference(session, reference_id=r1.id, tags=["alpha", "beta"])

        a2 = _make_asset(session, "blake3:bbb")
        r2 = _make_reference(session, a2, name="r2")
        add_tags_to_reference(session, reference_id=r2.id, tags=["alpha"])
        session.commit()

        result = list_tag_histogram()

        assert result["alpha"] == 2
        assert result["beta"] == 1

    def test_empty_when_no_assets(self, mock_create_session, session: Session):
        ensure_tags_exist(session, ["unused"])
        session.commit()

        result = list_tag_histogram()

        assert result == {}

    def test_include_tags_filter(self, mock_create_session, session: Session):
        ensure_tags_exist(session, ["models", "loras", "input"])
        a1 = _make_asset(session, "blake3:aaa")
        r1 = _make_reference(session, a1, name="r1")
        add_tags_to_reference(session, reference_id=r1.id, tags=["models", "loras"])

        a2 = _make_asset(session, "blake3:bbb")
        r2 = _make_reference(session, a2, name="r2")
        add_tags_to_reference(session, reference_id=r2.id, tags=["input"])
        session.commit()

        result = list_tag_histogram(include_tags=["models"])

        # Only r1 has "models", so only its tags appear
        assert "models" in result
        assert "loras" in result
        assert "input" not in result

    def test_exclude_tags_filter(self, mock_create_session, session: Session):
        ensure_tags_exist(session, ["models", "loras", "input"])
        a1 = _make_asset(session, "blake3:aaa")
        r1 = _make_reference(session, a1, name="r1")
        add_tags_to_reference(session, reference_id=r1.id, tags=["models", "loras"])

        a2 = _make_asset(session, "blake3:bbb")
        r2 = _make_reference(session, a2, name="r2")
        add_tags_to_reference(session, reference_id=r2.id, tags=["input"])
        session.commit()

        result = list_tag_histogram(exclude_tags=["models"])

        # r1 excluded, only r2's tags remain
        assert "input" in result
        assert "loras" not in result

    def test_name_contains_filter(self, mock_create_session, session: Session):
        ensure_tags_exist(session, ["alpha", "beta"])
        a1 = _make_asset(session, "blake3:aaa")
        r1 = _make_reference(session, a1, name="my_model.safetensors")
        add_tags_to_reference(session, reference_id=r1.id, tags=["alpha"])

        a2 = _make_asset(session, "blake3:bbb")
        r2 = _make_reference(session, a2, name="picture.png")
        add_tags_to_reference(session, reference_id=r2.id, tags=["beta"])
        session.commit()

        result = list_tag_histogram(name_contains="model")

        assert "alpha" in result
        assert "beta" not in result

    def test_limit_caps_results(self, mock_create_session, session: Session):
        tags = [f"tag{i}" for i in range(10)]
        ensure_tags_exist(session, tags)
        a = _make_asset(session, "blake3:aaa")
        r = _make_reference(session, a, name="r1")
        add_tags_to_reference(session, reference_id=r.id, tags=tags)
        session.commit()

        result = list_tag_histogram(limit=3)

        assert len(result) == 3
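The counting semantics these tests assume can be sketched with a plain Counter, independent of the database layer. This is an illustration of the expected behavior only, not the service's actual implementation (which queries tag usage via SQL): a reference survives filtering only if it carries every include_tag and none of the exclude_tags, and then all of its tags are counted.

```python
from collections import Counter

def tag_histogram(refs, include_tags=None, exclude_tags=None, limit=None):
    # refs is a list of tag lists, one per asset reference.
    counts = Counter()
    for tags in refs:
        tag_set = set(tags)
        if include_tags and not set(include_tags) <= tag_set:
            continue  # missing a required tag
        if exclude_tags and set(exclude_tags) & tag_set:
            continue  # carries an excluded tag
        counts.update(tags)
    # most_common(None) returns every entry, so limit=None means "all".
    return dict(counts.most_common(limit))

print(tag_histogram([["alpha", "beta"], ["alpha"]]))
# {'alpha': 2, 'beta': 1}
```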
@@ -243,6 +243,15 @@ def test_upload_tags_traversal_guard(http: requests.Session, api_base: str):
     assert body["error"]["code"] in ("BAD_REQUEST", "INVALID_BODY")


+def test_upload_empty_tags_rejected(http: requests.Session, api_base: str):
+    files = {"file": ("notags.bin", b"A" * 64, "application/octet-stream")}
+    form = {"tags": json.dumps([]), "name": "notags.bin", "user_metadata": json.dumps({})}
+    r = http.post(api_base + "/api/assets", data=form, files=files, timeout=120)
+    body = r.json()
+    assert r.status_code == 400
+    assert body["error"]["code"] == "INVALID_BODY"
+
+
 @pytest.mark.parametrize("root", ["input", "output"])
 def test_duplicate_upload_same_display_name_does_not_clobber(
     root: str,