youngkingdom/vllm
Path: vllm/tests/models/decoder_only/vision_language/processing
Commit: 8f37be38ebfe0295a4925837c501c87149997a4d
Latest commit: 8f37be38eb by Cyrus Leung, 2025-01-07 18:25:02 +08:00
[Bugfix] Comprehensively test and fix LLaVA-NeXT feature size calculation (#11800)
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
__init__.py             | [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision (#11717)      | 2025-01-04 11:40:53 +00:00
test_idefics3.py        | [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision (#11717)      | 2025-01-04 11:40:53 +00:00
test_internvl.py        | [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision (#11717)      | 2025-01-04 11:40:53 +00:00
test_llava_next.py      | [Bugfix] Comprehensively test and fix LLaVA-NeXT feature size calculation (#11800)         | 2025-01-07 18:25:02 +08:00
test_llava_onevision.py | [Bugfix] Comprehensively test and fix LLaVA-NeXT feature size calculation (#11800)         | 2025-01-07 18:25:02 +08:00
test_phi3v.py           | [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision (#11717)      | 2025-01-04 11:40:53 +00:00
test_qwen2_vl.py        | [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision (#11717)      | 2025-01-04 11:40:53 +00:00
test_qwen.py            | [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision (#11717)      | 2025-01-04 11:40:53 +00:00