TransformerPayne: Enhancing Spectral Emulation Accuracy and Data Efficiency by Capturing Long-range Correlations
Main Authors: , ,
Format: Article
Language: English
Published: IOP Publishing, 2025-01-01
Series: The Astrophysical Journal
Subjects:
Online Access: https://doi.org/10.3847/1538-4357/ad9b99
Summary: Stellar spectra emulators often rely on large grids and tend to reach a plateau in emulation accuracy, leading to significant systematic errors when inferring stellar properties. Our study explores the use of Transformer models to capture long-range information in spectra, comparing their performance to The Payne emulator (a fully connected multilayer perceptron), an expanded version of The Payne, and a convolution-based emulator. We tested these models on synthetic spectral grids, evaluating their performance by analyzing emulation residuals and assessing the quality of spectral parameter inference. The newly introduced TransformerPayne emulator outperformed all other tested models, achieving a mean absolute error (MAE) of approximately 0.15% when trained on the full grid. The most significant improvements were observed in grids containing between 1000 and 10,000 spectra, where TransformerPayne performed 2–5 times better than the scaled-up version of The Payne. Additionally, TransformerPayne demonstrated superior fine-tuning capabilities, allowing pretraining on one spectral model grid before transferring to another; this fine-tuning approach enabled up to a 10-fold reduction in training grid size compared to models trained from scratch. Analysis of TransformerPayne's attention maps revealed that they encode interpretable features common across many spectral lines of chosen elements. While scaling up The Payne to a larger network reduced its MAE from 1.2% to 0.3% when trained on the full data set, TransformerPayne consistently achieved the lowest MAE across all tests. The inductive biases of the TransformerPayne emulator enhance accuracy, data efficiency, and interpretability for spectral emulation compared to existing methods.
ISSN: 1538-4357
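The summary describes the emulator only at a high level. As a rough illustration of the kind of attention-based spectral emulator being compared, here is a minimal sketch in JAX: stellar labels are embedded as tokens, each wavelength forms a query, and cross-attention pools label information to predict the normalized flux at that wavelength. This is not the authors' implementation; every name, dimension, and encoding choice below is an assumption made for illustration, and the actual TransformerPayne architecture may differ.

```python
# Minimal, hypothetical sketch of an attention-based spectral emulator
# (NOT the published TransformerPayne architecture). Stellar labels become
# tokens; a wavelength-derived query attends over them to predict flux.

import jax
import jax.numpy as jnp

D = 64  # embedding width (assumed)

def init_params(key, n_labels, d=D):
    ks = jax.random.split(key, 6)
    s = 1.0 / jnp.sqrt(d)
    return {
        "label_emb": jax.random.normal(ks[0], (n_labels, d)) * s,  # one token per stellar label
        "wq": jax.random.normal(ks[1], (2, d)) * s,  # maps wavelength features to a query
        "wk": jax.random.normal(ks[2], (d, d)) * s,
        "wv": jax.random.normal(ks[3], (d, d)) * s,
        "w1": jax.random.normal(ks[4], (d, d)) * s,  # small MLP head
        "w2": jax.random.normal(ks[5], (d, 1)) * s,
    }

def emulate_flux(params, labels, wavelength):
    """Predict normalized flux at one wavelength given scaled stellar labels."""
    # Tokens: each label value scales its learned embedding (assumed encoding).
    tokens = labels[:, None] * params["label_emb"]              # (n_labels, d)
    # Query: a simple sinusoidal encoding of wavelength, so attention can
    # condition on position anywhere in the spectrum.
    feat = jnp.stack([jnp.sin(wavelength), jnp.cos(wavelength)])
    q = feat @ params["wq"]                                     # (d,)
    k = tokens @ params["wk"]                                   # (n_labels, d)
    v = tokens @ params["wv"]                                   # (n_labels, d)
    attn = jax.nn.softmax(q @ k.T / jnp.sqrt(D))                # (n_labels,)
    h = attn @ v                                                # (d,)
    return (jax.nn.gelu(h @ params["w1"]) @ params["w2"])[0]    # scalar flux

# Vectorize over a wavelength grid to emulate a full spectrum.
emulate_spectrum = jax.vmap(emulate_flux, in_axes=(None, None, 0))

key = jax.random.PRNGKey(0)
params = init_params(key, n_labels=5)
labels = jnp.array([0.3, -0.1, 0.8, 0.0, 0.2])   # e.g., scaled Teff, logg, [Fe/H], ...
wavelengths = jnp.linspace(0.0, 10.0, 1000)       # scaled wavelength grid (placeholder)
flux = emulate_spectrum(params, labels, wavelengths)

truth = jnp.ones_like(flux)                       # placeholder ground-truth spectrum
mae = jnp.mean(jnp.abs(flux - truth))             # emulation MAE, the metric quoted above
print(flux.shape, float(mae))
```

The last two lines show the evaluation metric from the abstract: the MAE figures (for example, the roughly 0.15% result) are means of the absolute residuals between emulated and ground-truth normalized flux, taken over the wavelength grid and a set of test spectra.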