bubus is history-policy based, not queue-capacity based.

- `emit()` enqueues synchronously and returns immediately.
- Pending queues are unbounded in both runtimes.
- Overload behavior is controlled by `max_history_size` + `max_history_drop`.
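
As a minimal sketch of that contract in Python: the `bubus` import path, the `BaseEvent` base class, the `on()` registration call, and the constructor keyword arguments are assumptions built from the names this document uses, not verified API.

```python
import asyncio

from bubus import BaseEvent, EventBus  # assumed import path


class JobQueued(BaseEvent):  # hypothetical event type
    job_id: str


async def main() -> None:
    # Assumed constructor kwargs, named after the policy knobs above.
    bus = EventBus(max_history_size=10_000, max_history_drop=True)
    bus.on(JobQueued, lambda event: print("handled", event.job_id))  # assumed API

    # emit() must be called with a running event loop (see the table below)
    # and returns immediately; handlers run later on the bus runloop.
    bus.emit(JobQueued(job_id="job-1"))

    await asyncio.sleep(0.1)  # crude: give the runloop a chance to run handlers


asyncio.run(main())
```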
## 1) If I emit 1,000,000 events, will errors be raised?

### Error conditions
| Runtime | Condition | What is raised |
|---|---|---|
| Python | `emit()` called with no running event loop | `RuntimeError` ("emit() called but no event loop is running") |
| Python | `max_history_size > 0`, `max_history_drop=False`, and history already at limit | `RuntimeError` ("history limit reached") |
| TypeScript | `emit()` with `max_history_size > 0`, `max_history_drop=false`, and history already at limit | `Error` (message contains "history limit reached") |
| Both | Process runs out of memory under extreme load | Runtime/VM OOM failure (not a bus-specific exception type) |
`max_history_size=0` is a special case in both runtimes: it does not trigger history-limit rejection, and instead keeps only in-flight visibility.

With `max_history_drop=true`, `emit()` does not reject on history size. Under sustained overload, old uncompleted entries can be dropped and a warning is logged.

### Reject vs drop behavior
- Python: with `max_history_drop=False`, `emit()` raises `RuntimeError` once history is at the limit; with `max_history_drop=True`, the oldest entries are dropped and a warning is logged (see the sketch below).
- TypeScript: same policy, except reject mode throws an `Error` whose message contains "history limit reached".
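
A sketch of the two modes in Python, under the same assumptions as above (the constructor kwargs and event class are illustrative, not verified API):

```python
import asyncio

from bubus import BaseEvent, EventBus  # assumed import path


class Tick(BaseEvent):  # hypothetical event type
    pass


async def main() -> None:
    # Reject mode: once history holds 2 entries (pending ones count),
    # the next emit() raises instead of enqueueing.
    strict = EventBus(max_history_size=2, max_history_drop=False)  # assumed kwargs
    strict.emit(Tick())
    strict.emit(Tick())
    try:
        strict.emit(Tick())
    except RuntimeError as exc:
        print("rejected:", exc)  # message contains "history limit reached"

    # Drop mode: emit() keeps succeeding; the oldest entries are evicted
    # and a warning is logged instead of raising.
    lossy = EventBus(max_history_size=2, max_history_drop=True)
    for _ in range(10):
        lossy.emit(Tick())


asyncio.run(main())
```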
## 2) If 1,000,000 events complete, how many are kept?

Let `N = max_history_size`.
| Setting | Events retained after bus becomes idle | Notes |
|---|---|---|
| `N = None` / `null` | All completed events (so up to 1,000,000) | History is unbounded. |
| `N > 0`, `max_history_drop = false` | Up to `N` | New emits are rejected once history reaches `N`. |
| `N > 0`, `max_history_drop = true` | Bounded to `N` at steady state | Oldest history entries are removed first. |
| `N = 0` | 0 completed events retained | Only pending/in-flight visibility is kept; completed entries are dropped. |
With `max_history_drop=True`, cleanup is amortized, so history can temporarily exceed `N` before converging back to `<= N`.
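
A sketch of checking that claim for the drop case; the idle wait is a crude `sleep`, since no drain API is assumed here:

```python
import asyncio

from bubus import BaseEvent, EventBus  # assumed import path


class Ping(BaseEvent):  # hypothetical event type
    pass


async def main() -> None:
    n = 100
    bus = EventBus(max_history_size=n, max_history_drop=True)  # assumed kwargs

    for _ in range(10_000):
        bus.emit(Ping())
        await asyncio.sleep(0)  # yield so the runloop can process and trim

    await asyncio.sleep(0.5)  # crude stand-in for waiting until the bus is idle

    # Cleanup is amortized, so history may briefly exceed n while busy,
    # but it should converge to <= n once idle (attribute name from this doc).
    print(len(bus.event_history), "<=", n)


asyncio.run(main())
```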
For the broader retention model, see Event History Store.
## 3) How RAM usage scales

At a high level, memory grows with:

- pending queue depth,
- retained history size,
- per-event handler/result payload size.
`RAM ~= O(pending_event_queue) + O(event_history) + O(event_results and payloads)`
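
To make that concrete, a back-of-envelope estimator; the default per-event costs are placeholders you would measure for your own payloads:

```python
def estimate_ram_kb(
    pending_events: int,
    retained_history: int,
    avg_event_kb: float = 1.0,   # placeholder: measure your own event size
    avg_result_kb: float = 0.5,  # placeholder: measure your own result size
) -> float:
    """Back-of-envelope version of the formula above, in kilobytes."""
    queue_cost = pending_events * avg_event_kb
    history_cost = retained_history * avg_event_kb
    results_cost = retained_history * avg_result_kb
    return queue_cost + history_cost + results_cost


# e.g. an idle bus retaining 10,000 events: ~15,000 kb, i.e. roughly 15 MB
print(estimate_ram_kb(pending_events=0, retained_history=10_000))
```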
### Measured slopes from perf suites

- Python README matrix reports scenario-dependent peak RSS slopes between about `0.025 kb/event` and `8.024 kb/event`.
- TypeScript README matrix reports scenario/runtime-dependent peak RSS slopes between about `0.1 kb/event` and `7.9 kb/event`.
- TypeScript README notes those `kb/event` values are measured during active processing with history aggressively bounded (`max_history_size=1` in perf harnesses).
In practice:

- bounded history (`N` finite) keeps steady-state memory bounded by queue depth + `N`,
- unbounded history (`N = None`/`null`) makes retained RAM grow roughly linearly with total completed events.
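
As a rough extrapolation from the slopes above: retaining 1,000,000 completed events with unbounded history costs about 1,000,000 × 0.1 kb ≈ 100 MB at the low end and 1,000,000 × 8 kb ≈ 8 GB at the high end, which is why a finite `N` matters at this scale.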
## 4) Queue vs history lifecycle (exact behavior)

Events do not “move from queue to history.” They are added to history at `emit()` time, and can exist in both structures while pending.

### Python timeline (emit)
- Validate pressure policy.
- Enqueue into `pending_event_queue`.
- Add the same event object to `event_history`.
- Runloop dequeues the event (`queue.get()`), then executes handlers.
- Event remains in `event_history` as `pending -> started -> completed` unless trimmed/removed by history policy.
### TypeScript timeline (emit)
- Validate pressure policy.
- Add event to `event_history`.
- Apply `trimHistory()`.
- Push event into `pending_event_queue`.
- Runloop shifts the event from the queue and executes handlers.
- Event remains in `event_history` unless trimmed/removed by policy.
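
To make the ordering difference concrete, here is a simplified didactic model of both emit paths; it mirrors the steps listed above and is not the actual bubus source:

```python
from collections import deque


def validate_pressure(history: list, max_history_size: int | None,
                      max_history_drop: bool) -> None:
    # max_history_size=0 is special-cased: it never triggers rejection.
    if max_history_size and not max_history_drop and len(history) >= max_history_size:
        raise RuntimeError("history limit reached")


def trim_history(history: list, max_history_size: int | None) -> None:
    if max_history_size:
        del history[:-max_history_size]  # evict the oldest entries first


def emit_python_order(event, queue: deque, history: list,
                      max_history_size: int | None, max_history_drop: bool) -> None:
    validate_pressure(history, max_history_size, max_history_drop)
    queue.append(event)    # enqueue first...
    history.append(event)  # ...then record in history; the event is now in both


def emit_typescript_order(event, queue: deque, history: list,
                          max_history_size: int | None, max_history_drop: bool) -> None:
    validate_pressure(history, max_history_size, max_history_drop)
    history.append(event)                    # record in history first...
    trim_history(history, max_history_size)  # ...trim eagerly...
    queue.append(event)                      # ...then enqueue; also in both now
```

Note the design difference this captures: the TypeScript path trims eagerly at `emit()` time, while the Python path leaves trimming to the history policy later.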
In both runtimes:

- an event can be in both `pending_event_queue` and `event_history` at the same time,
- `event_history` can contain pending events (not only started/completed events).
### Observe both structures directly
- Python: inspect the bus's `pending_event_queue` and `event_history` attributes (see the sketch below).
- TypeScript: inspect the equivalent `pending_event_queue` and `event_history` structures on the bus.
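
A sketch in Python: the attribute names come from this document, while the container types (and therefore the size accessors, `qsize()` vs `len()`) are assumptions to verify against your bubus version.

```python
import asyncio

from bubus import BaseEvent, EventBus  # assumed import path


class Ping(BaseEvent):  # hypothetical event type
    pass


async def main() -> None:
    bus = EventBus()

    event = Ping()
    bus.emit(event)

    # Immediately after emit(), before the runloop has ticked, the same
    # object is visible in both structures.
    print(bus.pending_event_queue.qsize())  # 1: still pending
    print(len(bus.event_history))           # 1: recorded at emit() time

    await asyncio.sleep(0.1)  # give the runloop a chance to dequeue and handle

    print(bus.pending_event_queue.qsize())  # 0: dequeued by the runloop
    print(len(bus.event_history))           # still 1: history retains the event


asyncio.run(main())
```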