Time series models are always hungry for more data to improve the accuracy of their predictions. Techniques that increase the temporal diversity a model sees during training, such as bagging, Monte Carlo simulation, oversampling, and k-fold cross-validation, as well as deep learning models such as LSTMs, VAEs, and GANs, have all been used to generate synthetic time series data, with mixed results. Generating user-level time series data is especially challenging because a user's actions and behaviors change over time.

In this post, we'll show how to construct, train, and evaluate a GPT model on tabular time series data using the NVIDIA Megatron framework. We designed a special tabular tokenizer so that the model generates time series data conditioned on the table's structural information. The approach is general and has applications in backtesting and fraud detection for financial services, demand forecasting in energy markets, and anonymizing electronic medical records in healthcare and the life sciences.
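To make the tokenizer idea concrete, here is a minimal sketch of how a tabular tokenizer can encode structural information: each column gets its own slice of the vocabulary, categorical values map to dedicated ids, and numeric values are quantized into bins, so every position in the resulting token block corresponds to a fixed column. This is our own illustrative construction, not the actual Megatron implementation; the class name `TabularTokenizer`, the quantile-binning scheme, and the example columns are all assumptions made for demonstration.

```python
import numpy as np
import pandas as pd

class TabularTokenizer:
    """Toy tabular tokenizer (illustrative only, not Megatron's):
    categorical columns map to vocabulary ids, numeric columns are
    quantized into a fixed number of bins, and each row becomes one
    fixed-length block of token ids. A GPT trained on such blocks
    can learn the table's structure, because the same position in
    every block always refers to the same column."""

    def __init__(self, n_bins: int = 32):
        self.n_bins = n_bins
        self.vocab = {"<bos>": 0, "<eos>": 1}
        self.bin_edges = {}  # column name -> array of quantile bin edges

    def _token(self, key: str) -> int:
        # Assign the next free id the first time a token string is seen.
        if key not in self.vocab:
            self.vocab[key] = len(self.vocab)
        return self.vocab[key]

    def fit(self, df: pd.DataFrame) -> "TabularTokenizer":
        for col in df.columns:
            if np.issubdtype(df[col].dtype, np.number):
                # Quantile bins put roughly equal mass in each token.
                self.bin_edges[col] = np.quantile(
                    df[col], np.linspace(0.0, 1.0, self.n_bins + 1))
                for b in range(self.n_bins):
                    self._token(f"{col}:bin{b}")
            else:
                for v in df[col].unique():
                    self._token(f"{col}={v}")
        return self

    def encode_row(self, row: pd.Series) -> list[int]:
        ids = [self.vocab["<bos>"]]
        for col, val in row.items():
            if col in self.bin_edges:
                # Find which quantile bin the value falls into.
                b = int(np.searchsorted(self.bin_edges[col], val))
                b = min(max(b - 1, 0), self.n_bins - 1)
                ids.append(self.vocab[f"{col}:bin{b}"])
            else:
                ids.append(self._token(f"{col}={val}"))
        ids.append(self.vocab["<eos>"])
        return ids

# Hypothetical example: two columns of a transaction table.
df = pd.DataFrame({
    "merchant": ["grocery", "fuel", "grocery"],
    "amount": [23.5, 60.0, 12.1],
})
tok = TabularTokenizer(n_bins=4).fit(df)
print(tok.encode_row(df.iloc[0]))  # one token block per row
```

Because numeric values are reduced to bin tokens, generation stays in a small, closed vocabulary; a decoder would map each bin token back to a representative value, such as the bin midpoint. The real tokenizer described in this post handles this round trip with more care, but the sketch captures why conditioning on the table's structure is possible at all.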