[autofix.ci] apply automated fixes

This commit is contained in:
autofix-ci[bot]
2025-08-22 10:23:06 +00:00
committed by -LAN-
parent 8c35663220
commit 48cbf4c78f
3 changed files with 29 additions and 27 deletions


@ -5,9 +5,9 @@
The GraphEngine now supports **dynamic worker pool management** to optimize performance and resource usage. Instead of a fixed 10-worker pool, the engine can:
1. **Start with optimal worker count** based on graph complexity
2. **Scale up** when workload increases
3. **Scale down** when workers are idle
4. **Respect configurable min/max limits**
## Benefits
@ -60,7 +60,7 @@ export GRAPH_ENGINE_SCALE_DOWN_IDLE_TIME=10.0
The engine analyzes the graph structure at startup:
- **Sequential graphs** (no branches): 1 worker
- **Limited parallelism** (few branches): 2 workers
- **Moderate parallelism**: 3 workers
- **High parallelism** (many branches): 5 workers
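As a rough sketch of the startup heuristic above (the function name and branch-count thresholds here are illustrative, not the engine's actual API):

```python
def initial_worker_count(branch_count: int) -> int:
    """Pick a starting pool size from the graph's parallelism.

    Thresholds are illustrative; the real engine derives them
    from its own graph analysis at startup.
    """
    if branch_count == 0:   # purely sequential graph
        return 1
    if branch_count <= 2:   # limited parallelism
        return 2
    if branch_count <= 5:   # moderate parallelism
        return 3
    return 5                # high parallelism
```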
@ -69,11 +69,13 @@ The engine analyzes the graph structure at startup:
During execution:
1. **Scale Up** triggers when:
- Queue depth exceeds `SCALE_UP_THRESHOLD`
- All workers are busy and queue has items
- Not at `MAX_WORKERS` limit
2. **Scale Down** triggers when:
- Worker idle for more than `SCALE_DOWN_IDLE_TIME` seconds
- Above `MIN_WORKERS` limit
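The two trigger conditions above can be sketched as predicates (a minimal illustration, assuming these function names and parameters; the engine's internal checks are not shown in this doc):

```python
def should_scale_up(queue_depth: int, busy: int, total: int,
                    max_workers: int, threshold: int) -> bool:
    """Scale up when the queue backs up or every worker is busy
    with work still queued, but never beyond MAX_WORKERS."""
    if total >= max_workers:
        return False
    return queue_depth > threshold or (busy == total and queue_depth > 0)


def should_scale_down(idle_seconds: float, total: int,
                      min_workers: int, idle_time: float) -> bool:
    """Retire a worker that has idled past SCALE_DOWN_IDLE_TIME,
    while staying at or above MIN_WORKERS."""
    return total > min_workers and idle_seconds > idle_time
```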
@ -146,11 +148,11 @@ INFO: Scaled down workers: 3 -> 2 (removed 1 idle workers)
## Best Practices
1. **Start with defaults** - They work well for most cases
2. **Monitor queue depth** - Adjust `SCALE_UP_THRESHOLD` if queues back up
3. **Consider workload patterns**:
   - Bursty: Lower `SCALE_DOWN_IDLE_TIME`
   - Steady: Higher `SCALE_DOWN_IDLE_TIME`
4. **Test with your workloads** - Measure and tune
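For a bursty workload, the tuning advice above might translate to a config like the following. Only `GRAPH_ENGINE_SCALE_DOWN_IDLE_TIME` appears verbatim in this doc; the other variable names are assumptions inferred from the same prefix:

```shell
# Tuning sketch for a bursty workload (variable names other than
# GRAPH_ENGINE_SCALE_DOWN_IDLE_TIME are inferred, not confirmed).
export GRAPH_ENGINE_MIN_WORKERS=1
export GRAPH_ENGINE_MAX_WORKERS=5
export GRAPH_ENGINE_SCALE_UP_THRESHOLD=3
export GRAPH_ENGINE_SCALE_DOWN_IDLE_TIME=5.0  # reclaim idle workers quickly
```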
## Troubleshooting


@ -147,9 +147,9 @@ classDiagram
### Data Flow
1. **Commands** flow from CommandChannels → CommandProcessing → Domain
2. **Events** flow from Workers → EventHandlerRegistry → State updates
3. **Node outputs** flow from Workers → OutputRegistry → ResponseCoordinator
4. **Ready nodes** flow from GraphTraversal → StateManagement → WorkerManagement
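The event leg of the data flow (Workers → EventHandlerRegistry → state updates) follows a plain dispatcher pattern, which can be sketched as a toy registry (this is not the engine's actual `EventHandlerRegistry` API):

```python
from collections import defaultdict


class Registry:
    """Toy dispatcher: routes events to handlers by event type."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def dispatch(self, event_type, payload):
        for handler in self._handlers[event_type]:
            handler(payload)


# A worker emits an event; the registered handler updates state.
state = {}
registry = Registry()
registry.on("node_finished", lambda p: state.update({p["node"]: "done"}))
registry.dispatch("node_finished", {"node": "A"})
# state is now {"A": "done"}
```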
### Extension Points
@ -160,11 +160,11 @@ classDiagram
## Execution Flow
1. **Initialization**: GraphEngine creates all subsystems with the workflow graph
2. **Node Discovery**: Traversal components identify ready nodes
3. **Worker Execution**: Workers pull from ready queue and execute nodes
4. **Event Processing**: Dispatcher routes events to appropriate handlers
5. **State Updates**: Managers track node/edge states for next steps
6. **Completion**: Coordinator detects when all nodes are done
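The six steps above can be sketched as a single-threaded loop (the real engine spreads steps 3-5 across the worker pool; `run` and its graph representation are illustrative, not the engine's API):

```python
from queue import Queue


def run(graph, execute_node):
    """Minimal sketch of the execution flow.

    `graph` maps each node to its list of children (dependents).
    Returns the order in which nodes were executed.
    """
    # Build each node's dependency set (reverse edges).
    deps = {n: set() for n in graph}
    for parent, children in graph.items():
        for child in children:
            deps[child].add(parent)

    ready = Queue()
    for node, d in deps.items():    # 2. discovery: nodes with no deps
        if not d:
            ready.put(node)

    done, order = set(), []
    while len(done) < len(graph):
        node = ready.get()          # 3. worker pulls a ready node
        execute_node(node)          #    and executes it
        done.add(node)              # 4-5. event handled, state updated
        order.append(node)
        for child in graph[node]:   # newly-ready successors
            if deps[child] <= done and child not in done:
                ready.put(child)
    return order                    # 6. completion: all nodes done
```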
## Usage