Altruism / Vote-Mode Adaptation
In the current model, altruism_factor is not a separate strategic planner; it controls how likely an agent is to cast a puzzle-aligned ballot rather than a self-regarding one.
Meaning
altruism_factor lies in [0, 1] and controls vote-mode selection (see the sketch after this list):
- with probability altruism_factor: altruistic (puzzle-aligned)
- with probability 1 - altruism_factor: self-regarding (preference-ordering)
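A minimal sketch of that selection rule, assuming a plain Bernoulli draw on altruism_factor; the function name and mode labels are illustrative, not the identifiers used in Strategies.

```python
import random

def select_vote_mode(altruism_factor: float, rng: random.Random) -> str:
    """Return "altruistic" with probability altruism_factor, else "self_regarding"."""
    if rng.random() < altruism_factor:
        return "altruistic"      # puzzle-aligned ballot
    return "self_regarding"      # preference-ordering ballot

rng = random.Random(0)
modes = [select_vote_mode(0.7, rng) for _ in range(10)]
print(modes)  # roughly 70% "altruistic" over many draws
```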
Implementation Reference
Vote-mode selection is exposed in Strategies. Agent-level altruism updates are documented in VoteAgent, and the surrounding election orchestration is documented in Area.
Modes
Three modes are available (a dispatch sketch follows the list):
- altruism_mode="static": fixed altruism_factor = altruism_static
- altruism_mode="satisfaction" (default): pre-election mapping from dissatisfaction
- altruism_mode="surprise_learning": participant-only post-election update from a dissatisfaction signal
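A hedged sketch of how the mode knob might gate the pre-election update; the function name, the params dict layout, and passing the satisfaction rule as a callable are assumptions, not the actual VoteAgent API. The satisfaction rule itself is spelled out in the next section.

```python
from typing import Callable

def pre_election_altruism(current: float,
                          dissatisfaction: float,
                          params: dict,
                          satisfaction_rule: Callable[[float, float, dict], float]) -> float:
    """Return the altruism_factor to use for the upcoming election."""
    mode = params["altruism_mode"]
    if mode == "static":
        return params["altruism_static"]          # fixed, never adapted
    if mode == "satisfaction":
        return satisfaction_rule(current, dissatisfaction, params)
    # "surprise_learning" leaves the pre-election value unchanged;
    # its update happens after the election (see Surprise-Learning Mode).
    return current
```

Only the static and satisfaction modes touch the value before a vote; surprise_learning defers to the post-election step.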
Satisfaction Mode
Given normalized dissatisfaction d in [0,1]:
s = 1 - d
target = sigmoid(altruism_satisfaction_slope * (s - altruism_satisfaction_theta))
Response behavior via altruism_response_gamma:
- gamma = 1: direct mapping to target
- gamma < 1: smoothed update toward target
Final value is clipped to [altruism_clip_min, altruism_clip_max].
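A runnable sketch of the satisfaction-mode update under two assumptions: sigmoid is the standard logistic, and gamma < 1 smooths via linear interpolation between the current value and the target. Neither is confirmed beyond the description above, and the function name and parameter values are illustrative.

```python
import math

def satisfaction_update(current: float, dissatisfaction: float, params: dict) -> float:
    s = 1.0 - dissatisfaction                          # satisfaction from normalized d
    target = 1.0 / (1.0 + math.exp(
        -params["altruism_satisfaction_slope"] * (s - params["altruism_satisfaction_theta"])))
    gamma = params["altruism_response_gamma"]
    new = gamma * target + (1.0 - gamma) * current     # gamma = 1 -> direct mapping to target
    return max(params["altruism_clip_min"], min(params["altruism_clip_max"], new))

params = {
    "altruism_satisfaction_theta": 0.5,   # illustrative values, not package defaults
    "altruism_satisfaction_slope": 10.0,
    "altruism_response_gamma": 0.5,
    "altruism_clip_min": 0.0,
    "altruism_clip_max": 1.0,
}
print(satisfaction_update(current=0.4, dissatisfaction=0.2, params=params))  # ~0.68
```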
Surprise-Learning Mode
Participant-only update:
- a <- a - altruism_alpha * dissatisfaction_signal
- clip to [altruism_clip_min, altruism_clip_max]
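The same update as a runnable sketch; the function name and the example values are illustrative.

```python
def surprise_learning_update(altruism: float, dissatisfaction_signal: float, params: dict) -> float:
    """Participant-only post-election update of altruism_factor."""
    new = altruism - params["altruism_alpha"] * dissatisfaction_signal
    return max(params["altruism_clip_min"], min(params["altruism_clip_max"], new))

params = {"altruism_alpha": 0.1, "altruism_clip_min": 0.0, "altruism_clip_max": 1.0}
print(surprise_learning_update(0.6, dissatisfaction_signal=0.8, params=params))  # 0.52
```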
Key Knobs
- altruism_mode
- altruism_static
- altruism_init
- altruism_satisfaction_theta
- altruism_satisfaction_slope
- altruism_response_gamma
- altruism_alpha
- altruism_clip_min
- altruism_clip_max
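One way to group these knobs in a single parameter dict; the values shown are illustrative assumptions, not the model's defaults.

```python
altruism_params = {
    "altruism_mode": "satisfaction",      # "static" | "satisfaction" | "surprise_learning"
    "altruism_static": 0.5,               # used only when altruism_mode == "static"
    "altruism_init": 0.5,                 # starting altruism_factor for each agent
    "altruism_satisfaction_theta": 0.5,   # satisfaction-mode sigmoid midpoint
    "altruism_satisfaction_slope": 10.0,  # satisfaction-mode sigmoid steepness
    "altruism_response_gamma": 1.0,       # 1 = jump to target, <1 = smoothed
    "altruism_alpha": 0.1,                # surprise-learning step size
    "altruism_clip_min": 0.0,             # lower clip bound
    "altruism_clip_max": 1.0,             # upper clip bound
}
```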