Large language models (LLMs) have taken the field of AI by storm. But how large do they really need to be? I'll discuss the phi series of models from Microsoft Research, which exhibit many of the striking emergent properties of LLMs despite having only around a billion parameters.