Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models