Enhancing On-Device DNN Inference Performance With a Reduced Retention-Time MRAM-Based Memory Architecture