Developers’ interactions on collaborative software development platforms such as GitHub are key to maintaining technical alignment and community engagement. However, uncivil behaviors such as disrespectful, sarcastic, or offensive comments can undermine these efforts, discouraging contributions and harming code quality. This study introduces PeacemakerBot, an automated moderation tool that detects signs of incivility in GitHub conversations and warns developers. We leverage Large Language Models (LLMs) to analyze conversations, identify signals of incivility, and generate reformulation suggestions in real time. To evaluate the tool, we conducted a user study with six developers, followed by a survey based on the Technology Acceptance Model (TAM) to understand their perception of its usefulness. Our results suggest that PeacemakerBot successfully identifies multiple types of incivility and promotes more constructive conversations. The moderation feedback loop allows users to revise flagged comments, raising awareness and reducing harmful language over time. Our tool fills a key gap in open-source software (OSS) development by providing AI-assisted moderation that improves the social climate and inclusiveness of developer interactions. Video link: https://doi.org/10.5281/zenodo.15485535
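
As a rough illustration of the detect-then-suggest loop described above, the Python sketch below shows one possible shape of such a moderation step: classify a comment's tone with an LLM and, if it is flagged, ask the same model for a civil reformulation. The `call_llm` helper, the prompt wording, and the label set are illustrative assumptions for this sketch, not PeacemakerBot's actual implementation.

```python
# A minimal sketch of a detect-then-suggest moderation step, assuming a
# generic chat-completion LLM behind call_llm(). The label set, prompts,
# and helper names are illustrative, not PeacemakerBot's actual code.

# Hypothetical incivility categories used for classification.
INCIVILITY_LABELS = ["bitter frustration", "mocking", "insulting", "threat", "none"]


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around any chat-completion API (stub)."""
    raise NotImplementedError("plug in an LLM provider here")


def moderate_comment(comment: str) -> dict:
    """Classify a GitHub comment and, if uncivil, propose a civil rewrite."""
    label = call_llm(
        "Classify the tone of this GitHub comment as one of: "
        + ", ".join(INCIVILITY_LABELS)
        + ". Reply with only the label.\n\nComment: " + comment
    ).strip().lower()
    if label in ("none", ""):
        return {"flagged": False}
    suggestion = call_llm(
        "Rewrite this GitHub comment so it keeps the technical content "
        "but removes the uncivil tone.\n\nComment: " + comment
    )
    # A bot would post the label and suggestion as a warning reply,
    # giving the author a chance to revise the flagged comment.
    return {"flagged": True, "label": label, "suggestion": suggestion}
```

In a deployment, the returned warning and suggestion would be surfaced to the comment author, who can then revise the flagged comment, closing the moderation feedback loop mentioned in the abstract.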