AI systems engineering

Deep systems across LLM, robotics, quantum AI, and GPU compute

Building reproducible, benchmarked projects at the intersection of large language models, robotic manipulation, brain–computer interfaces, quantum optimization, and energy systems.

Focus areas

Building at the intersection of nine domains.

Large language models · CUDA / GPU compute · Robotic manipulation · Reinforcement learning · Brain–computer interfaces · Quantum AI · Energy systems · Transformer architectures · MLOps & deployment

Flagship projects

Deep, reproducible systems at the intersection of LLM, robotics, quantum AI, BCI, and GPU compute.

FlashKernel

Custom CUDA C++ and Triton kernels for transformer inference — tiled FlashAttention, fused GeLU, RoPE, paged KV-cache — benchmarked with Nsight Compute on T4.

CUDA C++ · Triton · Nsight Compute
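The core trick behind a tiled FlashAttention kernel can be shown in plain NumPy: process K/V in blocks while carrying a running max and softmax denominator, so the full score matrix is never materialized. This is an illustrative reference, not the actual CUDA/Triton kernel code; the function name and block size are chosen here for the sketch.

```python
import numpy as np

def flash_attention(q, k, v, block=16):
    """Tiled attention with online-softmax rescaling (NumPy reference).

    Each K/V tile is visited once; a running row max (m) and running
    denominator (l) let earlier partial results be rescaled in place.
    """
    n, d = q.shape
    o = np.zeros((n, d))
    m = np.full(n, -np.inf)   # running row-wise max of scores
    l = np.zeros(n)           # running softmax denominator
    for j in range(0, k.shape[0], block):
        kj, vj = k[j:j + block], v[j:j + block]
        s = q @ kj.T / np.sqrt(d)              # scores for this tile only
        m_new = np.maximum(m, s.max(axis=-1))
        scale = np.exp(m - m_new)              # rescale old accumulators
        p = np.exp(s - m_new[:, None])
        l = l * scale + p.sum(axis=-1)
        o = o * scale[:, None] + p @ vj
        m = m_new
    return o / l[:, None]
```

The real kernel maps the outer loop to thread blocks and keeps each tile in shared memory; the arithmetic is identical.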

RoboLLM

Language-grounded robotic manipulation — a VLM planner decomposes instructions into sub-tasks; RL policies execute each step in MuJoCo simulation.

MuJoCo · PaliGemma-3B · SAC
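The planner/policy split above can be sketched as a two-stage pipeline. This is a hypothetical rule-based stub standing in for the VLM planner and the learned policies, just to show the interface; none of these names come from the project itself.

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    skill: str    # which low-level policy to invoke
    target: str   # the object or location it acts on

def plan(instruction: str) -> list[SubTask]:
    """Toy stand-in for the VLM planner: decompose one instruction
    into skill/target sub-tasks. (The real system queries a VLM.)"""
    if "stack" in instruction:
        obj, dest = instruction.split("stack ")[1].split(" on ")
        return [SubTask("pick", obj), SubTask("place", dest)]
    return []

def execute(subtasks, policies):
    """Dispatch each sub-task to its RL policy (here: plain callables)."""
    return [policies[t.skill](t.target) for t in subtasks]
```

In the actual system each entry in `policies` would be a trained SAC policy rolled out in MuJoCo rather than a Python callable.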

NeuroLLM

Foundation model for neural signal decoding — pre-train a transformer on large-scale EEG, fine-tune for motor imagery BCI with frequency-band attention.

PyTorch · MNE-Python · EEG
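The frequency bands that motor-imagery decoding cares about can be computed with a simple FFT band-power estimate. A minimal NumPy sketch, assuming (channels, samples) EEG and the conventional mu/beta band edges; the attention mechanism itself would learn weights over such bands rather than average them.

```python
import numpy as np

def band_power(eeg, fs, bands=None):
    """Per-channel spectral power in motor-imagery bands.

    eeg: array of shape (channels, samples); fs: sampling rate in Hz.
    Returns {band_name: (channels,) mean power} via a periodogram.
    """
    bands = bands or {"mu": (8, 12), "beta": (13, 30)}
    freqs = np.fft.rfftfreq(eeg.shape[-1], 1 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2 / eeg.shape[-1]
    return {name: psd[:, (freqs >= lo) & (freqs <= hi)].mean(axis=-1)
            for name, (lo, hi) in bands.items()}
```

A pre-trained transformer would consume windows of such features (or raw samples) across channels; MNE-Python provides the filtering and epoching around this step.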

QuantumGrid

Quantum-classical hybrid optimization for energy grids — QAOA and VQE applied to unit commitment on real ENTSO-E data, benchmarked against MILP solvers.

PennyLane · QAOA/VQE · OR-Tools
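The objective a QAOA/VQE ansatz minimizes here is a penalized unit-commitment cost over binary on/off variables. A tiny classical sketch, with made-up generator data and a brute-force baseline standing in for the MILP comparison (the real project uses OR-Tools and ENTSO-E load data):

```python
import itertools
import numpy as np

def unit_commitment_cost(on, cost, cap, demand, penalty=1e3):
    """Fuel cost of a binary commitment plus a quadratic penalty for
    deviating from demand — the scalar objective QAOA/VQE would minimize."""
    supply = np.dot(on, cap)
    return np.dot(on, cost) + penalty * (supply - demand) ** 2

def brute_force(cost, cap, demand):
    """Exhaustive classical baseline over all 2^n commitments
    (tractable only for toy sizes; MILP handles the real instances)."""
    n = len(cost)
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda on: unit_commitment_cost(np.array(on),
                                                   cost, cap, demand))
    return np.array(best)
```

Expanding the quadratic penalty yields the QUBO whose Ising form is handed to the quantum optimizer.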

Experience

Leading applied ML and AI systems work across startups, NGOs, and banking.

Contact

Available for collaborations, advisory work, and technical leadership roles.