This thesis proposes scaling humanoid robot control through large-scale motion imitation, learned control priors, and reinforcement learning. A perpetual humanoid controller first achieves imitation of a full motion-capture dataset; this controller is then distilled into a universal latent space that enables efficient learning of downstream manipulation and vision-based tasks. The resulting policies transfer from simulation to real robots, advancing practical humanoid control.