I explore the intersection of creativity and technology—building AI systems that unlock new ways of human expression and interaction. From gesture recognition to narrative understanding, I'm fascinated by how we can augment human capabilities and reimagine our relationship with technology.
I'm driven by curiosity about how we interact with our world and how technology can enhance those interactions. My work spans gesture-based interfaces, narrative AI, computer vision, and 3D visualization—each project exploring different facets of human-computer symbiosis.
Beyond algorithms and code, I'm passionate about the artistic side of technology. Whether it's crafting 3D models in Blender, designing intuitive interfaces, or building systems that understand human intention, I believe the most powerful technology seamlessly blends logic with creativity.
Human-centered AI, human-augmentation technologies, and AI interpretability and safety
3D modeling, visualization, and exploring the aesthetic dimensions of technical systems
Making complex technical concepts tangible, experiential, and beautiful
Reimagining neural networks as sculptable matter. This visual exploration takes the abstract concept of latent space and transforms it into something you can feel—geometry that breathes, adapts, and responds like clay under your hands. Making AI systems not just intelligent, but tangible and alive.
More visual explorations of technical concepts—turning abstract ideas into experiential, intuitive understanding. Science as art, technology as wonder.
A selection of projects exploring AI, creativity, and human augmentation
Exploring the intersection of storytelling and technical documentation through semantic embedding spaces. A system that bridges narrative understanding with structured technical knowledge.
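A minimal sketch of the core idea, assuming a shared embedding space: narrative passages and technical documentation are encoded with the same model and matched by cosine similarity. The sentence-transformers package and the model name used here are assumptions, not the project's actual stack.

```python
# Embed narrative text and technical docs into one vector space, then retrieve
# the closest technical entry for a story passage. Model name is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "The IMU streams 6-axis motion data over BLE at 100 Hz.",
    "The gesture classifier is a 1D CNN with an attention head.",
]
query = "The hero raises an arm, and the machine answers."

doc_vecs = model.encode(docs, convert_to_tensor=True, normalize_embeddings=True)
query_vec = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)

scores = util.cos_sim(query_vec, doc_vecs)[0]   # cosine similarity per doc
best = int(scores.argmax())
print(f"closest doc: {docs[best]!r} (score {scores[best]:.2f})")
```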
Language model with an interpretability focus, investigating how neural networks understand and generate narrative structures. Exploring the inner workings of sequence models for storytelling.
IoT-enabled UAV system for real-time motion and environmental monitoring, with live 3D visualization.
Collection of 3D visualizations and models created in Blender. Exploring form, structure, and the visual expression of technical concepts through digital art.
Wearable device enabling gesture-based control through IMU sensors and deep learning. Real-time gesture recognition for intuitive human-computer interaction.
Additional experiments and projects exploring various aspects of AI, creativity, and human augmentation. Always experimenting with new ideas.
Experiments, deep dives, and learning experiences—all documented
Deep exploration of recurrent neural networks for narrative generation. Investigating how GRU architectures learn story structure, with visualizations of hidden states and attention patterns throughout the storytelling process.
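A minimal sketch of the kind of model this investigates, assuming PyTorch: a character-level GRU language model whose per-timestep hidden states are returned so they can be visualized alongside the generated story. Dimensions are illustrative.

```python
# Character-level GRU language model; hidden states are kept per timestep
# so they can be plotted later.
import torch
import torch.nn as nn

class StoryGRU(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, h0=None):
        x = self.embed(tokens)             # (batch, seq, embed_dim)
        states, h_n = self.gru(x, h0)      # hidden state at every step
        logits = self.head(states)         # next-token logits per position
        return logits, states              # keep states for visualization

model = StoryGRU(vocab_size=128)
tokens = torch.randint(0, 128, (1, 32))    # dummy character ids
logits, states = model(tokens)
print(states.shape)                        # torch.Size([1, 32, 256])
```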
Analyzing Vector Quantized Variational Autoencoders through codeword visualization and sequencing patterns. Exploring how discrete latent representations capture and organize complex data structures in compression and generation tasks.
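A minimal sketch of the quantization step that the codeword visualizations are built on, assuming PyTorch: each encoder vector snaps to its nearest codebook entry, and the returned indices are what gets plotted and sequenced. Codebook size and dimensions are illustrative.

```python
# Vector-quantization step of a VQ-VAE: nearest-codeword lookup plus a
# straight-through estimator so gradients reach the encoder.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z):                               # z: (batch, n, code_dim)
        flat = z.reshape(-1, z.shape[-1])
        d = torch.cdist(flat, self.codebook.weight)     # distance to every code
        idx = d.argmin(dim=1)                           # nearest codeword index
        quantized = self.codebook(idx).reshape(z.shape)
        quantized = z + (quantized - z).detach()        # straight-through trick
        return quantized, idx.reshape(z.shape[:-1])

vq = VectorQuantizer()
z = torch.randn(2, 16, 64)
q, codes = vq(z)
print(codes.shape)                                      # torch.Size([2, 16])
```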
Reconstructing missing or corrupted regions in artwork using deterministic approaches. Trained on the WikiArt dataset to learn artistic patterns and styles.
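A minimal sketch of one deterministic formulation, assuming PyTorch and not necessarily the project's exact architecture: an encoder-decoder takes the masked painting plus its mask and regresses the missing pixels directly, with a plain reconstruction loss and no sampling step.

```python
# Deterministic inpainting: masked image in, reconstructed image out.
import torch
import torch.nn as nn

class InpaintingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 64, 3, stride=2, padding=1), nn.ReLU(),   # RGB + mask
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask):
        x = torch.cat([image * mask, mask], dim=1)   # hide the corrupted region
        return self.decoder(self.encoder(x))

net = InpaintingNet()
img = torch.rand(1, 3, 128, 128)
mask = torch.ones(1, 1, 128, 128)                    # 1 = known, 0 = missing
mask[:, :, 40:80, 40:80] = 0
out = net(img, mask)
loss = ((out - img) ** 2 * (1 - mask)).mean()        # L2 on the missing region
print(out.shape, loss.item())
```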
Research contributions to the field
Comprehensive wearable system capable of recognizing complex arm gestures in real-time, enabling intuitive human-computer interaction without traditional input devices. The system achieves high accuracy across multiple gesture categories through an end-to-end deep learning pipeline.
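A minimal sketch of what a pipeline like this can look like, assuming PyTorch: a 1D CNN over windowed 6-axis IMU data followed by attention pooling and a gesture classifier. Window length, channel counts, and the number of gesture classes are illustrative, not the published configuration.

```python
# 1D CNN + attention pooling over an IMU window, classifying arm gestures.
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, n_channels=6, n_classes=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.Linear(128, 1)           # scores each timestep
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (batch, 6, time)
        feats = self.conv(x).transpose(1, 2)    # (batch, time, 128)
        weights = torch.softmax(self.attn(feats), dim=1)
        pooled = (weights * feats).sum(dim=1)   # attention-weighted summary
        return self.head(pooled)

model = GestureNet()
window = torch.randn(1, 6, 200)                 # ~2 s of IMU data at 100 Hz
print(model(window).shape)                      # torch.Size([1, 8])
```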
Research, engineering, and technical work
Researching integration of object detection, tracking, and classification into live video pipelines. Developing interactive visualizations and conducting benchmarking studies to optimize model performance and user experience. Direct visual servoing framework integrating YOLOE and MiDaS to perform depth-aware tracking for a robotic arm without inverse kinematics.
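A minimal sketch of the depth-aware servoing loop under stated assumptions: a detector supplies a target box, MiDaS supplies relative depth, and pixel error maps directly to velocity commands with no inverse kinematics. `detect_target()` and `send_velocity()` are hypothetical placeholders standing in for the YOLOE detector and the arm interface; only the MiDaS torch.hub entry point is a real public model.

```python
import cv2
import numpy as np
import torch

# Public MiDaS small model and its preprocessing transform via torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

def servo_step(frame, detect_target, send_velocity, gain=0.002, target_depth=0.5):
    box = detect_target(frame)                       # (x1, y1, x2, y2) or None
    if box is None:
        return
    x1, y1, x2, y2 = box
    with torch.no_grad():
        inp = transform(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        depth = midas(inp).squeeze().numpy()         # relative inverse depth map
    depth = cv2.resize(depth, (frame.shape[1], frame.shape[0]))
    target_rel_depth = float(np.median(depth[y1:y2, x1:x2]))

    # Image-plane error drives lateral motion; depth error drives approach.
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    ex = cx - frame.shape[1] / 2
    ey = cy - frame.shape[0] / 2
    ez = target_depth - target_rel_depth
    send_velocity(gain * ex, gain * ey, gain * ez)   # proportional control
```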
Leading AI development for wearable gesture recognition systems. Designing an end-to-end IMU gesture pipeline with CNN and attention-based models. Building sequence-to-sequence models and optimizing for real-time deployment.
Performed server room maintenance, monitored network infrastructure, and managed IT operations. Configured and decommissioned enterprise devices with a focus on security protocols.
Nile University of Nigeria | 2019 - 2024
Grade: First Class Honours
Thesis: AI-Based Wearable Human-Computer Interface
Open to collaborations, research opportunities, and creative projects