This PhD project uses brain-inspired AI to decode vision from neural data. Signals from human fMRI (recorded during 24 hours of Doctor Who viewing) and monkey electrophysiology are transformed into 2D brain maps to improve stimulus reconstruction. The model learns receptive-field structure, compares the contributions of V1, V4, and IT, and aims for efficient, interpretable decoding with applications in neuroscience and brain–computer interfaces (BCIs).
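A minimal sketch of one way such a 2D mapping could work, assuming each voxel (or electrode) comes with known flattened cortical coordinates: responses are binned onto a regular grid, averaging voxels that share a cell. The function name, grid size, and averaging rule are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def voxels_to_map(voxel_values, voxel_xy, grid_size=64):
    """Project a vector of voxel responses onto a 2D cortical map.

    voxel_values : (n_voxels,) response of each voxel at one time point
    voxel_xy     : (n_voxels, 2) flattened cortical coordinates in [0, 1)
    grid_size    : side length of the output map

    Voxels falling in the same grid cell are averaged; empty cells stay 0.
    """
    cells = np.clip((voxel_xy * grid_size).astype(int), 0, grid_size - 1)
    brain_map = np.zeros((grid_size, grid_size))
    counts = np.zeros((grid_size, grid_size))
    for (x, y), v in zip(cells, voxel_values):
        brain_map[y, x] += v
        counts[y, x] += 1
    # Average only where at least one voxel landed.
    np.divide(brain_map, counts, out=brain_map, where=counts > 0)
    return brain_map

# Toy usage: 500 voxels with random flat-map coordinates.
rng = np.random.default_rng(0)
xy = rng.random((500, 2))
values = rng.standard_normal(500)
print(voxels_to_map(values, xy).shape)  # (64, 64)
```

Arranging responses this way lets image-style models (e.g. convolutional networks) exploit the spatial structure of visual cortex rather than treating voxels as an unordered vector.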

My research improves brain–computer interfaces (BCIs) for children with disabilities by reducing the repetitive calibration needed before each use. A transfer-learning approach with a team-selection algorithm draws on data from other users to personalise the system, cutting calibration time by up to 90%. This makes creative activities such as painting more accessible, enjoyable, and sustainable.
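The team-selection idea could be sketched as greedy forward selection: add candidate source users one at a time whenever pooling their data improves accuracy on the target child's short calibration set. The helper `select_team`, the LDA classifier, and the accuracy criterion are assumptions for illustration, not the thesis's actual algorithm.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def select_team(source_sets, X_calib, y_calib):
    """Greedily pick source users whose pooled data best classifies
    the target's small calibration set. Returns (team indices, accuracy).

    source_sets : list of (X_i, y_i) pairs, one per candidate source user
    X_calib, y_calib : the target child's short calibration recording
    """
    team, best_acc = [], 0.0
    improved = True
    while improved:
        improved = False
        for i in range(len(source_sets)):
            if i in team:
                continue
            trial = team + [i]
            X = np.vstack([source_sets[j][0] for j in trial])
            y = np.concatenate([source_sets[j][1] for j in trial])
            acc = LinearDiscriminantAnalysis().fit(X, y).score(X_calib, y_calib)
            if acc > best_acc:  # keep user i only if they help
                best_acc, best_i, improved = acc, i, True
        if improved:
            team.append(best_i)
    return team, best_acc

# Toy usage: 5 synthetic source users with 2-class, 8-feature data.
rng = np.random.default_rng(1)
sources = [(rng.standard_normal((40, 8)) + rng.random() - 0.5,
            rng.integers(0, 2, 40)) for _ in range(5)]
X_cal, y_cal = rng.standard_normal((10, 8)), rng.integers(0, 2, 10)
print(select_team(sources, X_cal, y_cal))
```

Because only a brief calibration recording is needed to score each candidate team, most of the training signal comes from the selected source users, which is what allows the large reduction in per-session calibration.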