Hey there, I'm

David Adeshina Arungbemi

AI Engineer & Researcher

I explore the intersection of creativity and technology—building AI systems that unlock new ways of human expression and interaction. From gesture recognition to narrative understanding, I'm fascinated by how we can augment human capabilities and reimagine our relationship with technology.

About Me

I'm driven by curiosity about how we interact with our world and how technology can enhance those interactions. My work spans gesture-based interfaces, narrative AI, computer vision, and 3D visualization—each project exploring different facets of human-computer symbiosis.

Beyond algorithms and code, I'm passionate about the artistic side of technology. Whether it's crafting 3D models in Blender, designing intuitive interfaces, or building systems that understand human intention, I believe the most powerful technology seamlessly blends logic with creativity.

Research Focus

Human-centered AI, human-augmentation technologies, and AI interpretability and safety

Creative Expression

3D modeling, visualization, and exploring the aesthetic dimensions of technical systems

Programming

Python · C++ · TypeScript · Java

AI & Machine Learning

TensorFlow · Keras · PyTorch · Scikit-learn

Creative Tools

Blender · Three.js · Matplotlib · Visualization

Hardware & Embedded

Arduino · ESP32 · IMU Sensors · IoT

Visual Explorations: Ideas Illustrated

Making complex technical concepts tangible, experiential, and beautiful

Coming Soon

More visual explorations of technical concepts—turning abstract ideas into experiential, intuitive understanding. Science as art, technology as wonder.

In Progress

Featured Projects

A selection of projects exploring AI, creativity, and human augmentation

Narrative-Technical Embedding Space

Exploring the intersection of storytelling and technical documentation through semantic embedding spaces. A system that bridges narrative understanding with structured technical knowledge.

NLP · Embeddings · Semantic Analysis

Story GRU Language Model

Language model with interpretability focus, investigating how neural networks understand and generate narrative structures. Exploring the inner workings of sequence models for storytelling.

RNN · GRU · Interpretability

AerialAV

IoT-enabled UAV system for real-time motion and environmental monitoring, with live 3D visualization.

Sensors · Embedded Hardware · Visualization

3D Modeling Portfolio

Collection of 3D visualizations and models created in Blender. Exploring form, structure, and the visual expression of technical concepts through digital art.

Blender · 3D Modeling · Visual Design

Gyronics Wearable Interface

Wearable device enabling gesture-based control through IMU sensors and deep learning. Real-time gesture recognition for intuitive human-computer interaction.

Wearables · Deep Learning · HCI

More on GitHub

Additional experiments and projects exploring various aspects of AI, creativity, and human augmentation. Always experimenting with new ideas.

Open Source Experiments

Research Notebooks

Experiments, deep dives, and learning experiences—all documented

Story GRU Language Model

Deep exploration of recurrent neural networks for narrative generation. Investigating how GRU architectures learn story structure, with visualizations of hidden states and attention patterns throughout the storytelling process.

RNN · Language Modeling · Interpretability

VQ-VAE Codeword Analysis

Analyzing Vector Quantized Variational Autoencoders through codeword visualization and sequencing patterns. Exploring how discrete latent representations capture and organize complex data structures in compression and generation tasks.

VAE · Representation Learning · Visualization

Deterministic Inpainting

Reconstructing missing or corrupted regions in artwork using deterministic approaches. Trained on WikiArt dataset to learn artistic patterns and styles.

Image Inpainting · WikiArt · Reconstruction

Publications

Research contributions to the field

IEEE Access 2025

An End to End Wearable Device and System for Real Time Gesture Recognition of Directional, Rotational, and Shape-based Arm Gestures

Comprehensive wearable system capable of recognizing complex arm gestures in real-time, enabling intuitive human-computer interaction without traditional input devices. The system achieves high accuracy across multiple gesture categories through an end-to-end deep learning pipeline.

Wearable Computing · Gesture Recognition · Deep Learning · HCI

Experience

Research, engineering, and technical work

Research Intern

National Centre for Artificial Intelligence and Robotics
Feb 2025 - Present | Abuja, Nigeria

Researching the integration of object detection, tracking, and classification into live video pipelines. Developing interactive visualizations and conducting benchmarking studies to optimize model performance and user experience. Built a direct visual servoing framework integrating YOLOE and MiDaS for depth-aware tracking with a robotic arm, without inverse kinematics.

AI Engineer

Gyronics Technologies
Nov 2024 - Present

Leading AI development for wearable gesture recognition systems. Designing an end-to-end IMU gesture pipeline with CNN and attention-based models. Building sequence-to-sequence models and optimizing them for real-time deployment.

IT Intern

Huawei Technologies Co. Nig. Ltd
Mar 2023 - Sep 2023 | Abuja, Nigeria

Performed server room maintenance, monitored network infrastructure, and managed IT operations. Configured and decommissioned enterprise devices with focus on security protocols.

Education

B.Eng. Computer Engineering

Nile University of Nigeria | 2019 - 2024

Grade: First Class Honours

Thesis: AI-Based Wearable Human-Computer Interface

Let's Connect

Open to collaborations, research opportunities, and creative projects