Software fairness testing is a central method for evaluating AI systems, yet the meaning of fairness is often treated as fixed and universally applicable. This vision paper positions fairness testing as culturally situated and examines this problem along three dimensions. First, fairness metrics encode particular cultural values while marginalizing others. Second, test datasets are predominantly constructed in Western contexts, excluding knowledge systems grounded in oral traditions, Indigenous languages, and non-digital communities. Third, fairness testing raises ethical concerns, including its reliance on low-paid data labeling in the Global South and the environmental costs of training and deploying large-scale models, which disproportionately affect climate-vulnerable populations. Addressing these issues requires rethinking fairness testing beyond universal metrics and moving toward evaluation frameworks that respect cultural plurality and acknowledge the right to refuse algorithmic mediation.