01 / ABOUT
System Thinker
Student since 2023. Focused on DevOps & System Architecture.
I don't just build features. I design systems. I think in control flow, model failure states, and design for observability.
CORE COMPETENCIES
- Linux internals (filesystem, permissions, processes, services)
- Git beyond push (commit-graph mental model)
- Question architecture decisions instead of blindly using tools
- Backend + DevOps oriented, not pure frontend
02 / PROJECT
A city-scale real-time bus tracking and management platform built with system reliability as the priority.
[ARCHITECTURE]
- Admin Dashboard – Next.js
- Driver App – React Native (Expo)
- Passenger App – React Native
- Backend – Supabase (PostgreSQL, Auth, Realtime, RLS)
- Maps – Leaflet + OpenStreetMap
[CORE DESIGN PHILOSOPHY]
- Reliability over UI
- Database as the single source of truth
- Logic enforced at the DB level using RLS
- Event-driven realtime architecture
- Offline-first design with queue + auto-sync
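The offline-first queue + auto-sync idea can be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation; `LocationFix`, `OfflineQueue`, and `flush` are assumed names.

```typescript
// Illustrative sketch of an offline-first location queue with auto-sync.
interface LocationFix {
  lat: number;
  lng: number;
  recordedAt: number; // device timestamp, so ordering survives offline gaps
}

class OfflineQueue {
  private buffer: LocationFix[] = [];

  // Always enqueue locally first; the network is treated as unreliable.
  enqueue(fix: LocationFix): void {
    this.buffer.push(fix);
  }

  // On reconnect, drain in timestamp order so the server sees a
  // consistent location trail; returns how many fixes were sent.
  async flush(send: (batch: LocationFix[]) => Promise<void>): Promise<number> {
    const batch = [...this.buffer].sort((a, b) => a.recordedAt - b.recordedAt);
    if (batch.length === 0) return 0;
    await send(batch);
    this.buffer = [];
    return batch.length;
  }
}
```

The key design point is that the device never blocks on the network: writes go to the local buffer unconditionally, and sync is a separate, retryable step.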
[KEY FEATURES]
- Live GPS tracking
- Geofencing-based stop detection
- ETA recalculation
- Role-based access control at the database level
- Offline location queue with timestamp sync
- Admin observability dashboard
- Passenger reports & announcements
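Geofencing-based stop detection typically reduces to a distance check: a bus "arrives" when it enters a radius around the stop. A minimal sketch using the haversine formula (the 50 m radius and function names are assumptions, not the project's values):

```typescript
// Great-circle distance between two coordinates, in meters (haversine).
const EARTH_RADIUS_M = 6_371_000;

function haversineMeters(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// A bus is "at the stop" when it is inside the geofence radius.
function isAtStop(busLat: number, busLng: number, stopLat: number, stopLng: number, radiusM = 50): boolean {
  return haversineMeters(busLat, busLng, stopLat, stopLng) <= radiusM;
}
```

In practice the radius has to be tuned against GPS noise (see the precision limitation below): too tight and arrivals are missed, too loose and adjacent stops overlap.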
[TECHNICAL HIGHLIGHTS]
- Separation of static schedules vs dynamic trips – Postgres relational schema for consistency
- Row-Level Security policies – enforced at the database level, not in the application
- Realtime event subscriptions – WebSocket-based live updates
- Avoided NoSQL overengineering – the relational model fits the domain
[KNOWN LIMITATIONS]
- Driver phone dependency – single point of failure
- GPS precision limits – urban canyon effects
- No traffic prediction yet – ETA is distance-based only
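Since the ETA is distance-based only, the recalculation is essentially remaining route distance divided by a recent average speed. A hedged sketch (the function name and inputs are illustrative):

```typescript
// Distance-based ETA: no traffic model, just remaining distance over
// a recent average speed. Matches the stated limitation above.
function etaSeconds(remainingMeters: number, avgSpeedMps: number): number {
  if (avgSpeedMps <= 0) return Infinity; // stopped or no recent fixes: no finite ETA
  return remainingMeters / avgSpeedMps;
}
```

A traffic-aware version would replace `avgSpeedMps` with per-segment speed estimates, which is exactly the gap the limitation describes.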
03 / DEVOPS_STACK
Tools & Technologies
04 / SYSTEM_THINKING
I don't just build features.
I design systems.
- Think in control flow – Map the execution path. Understand what happens when, and why.
- Model failure states – Design for what breaks, not what works. Every system has a failure mode.
- Design for observability – If you can't measure it, you can't debug it. Logs, metrics, traces.
- Prefer explicit over magical abstractions – Magic is technical debt. Explicit is maintainable.
- Break systems to understand them – Chaos engineering isn't optional. It's how you learn.
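The observability point has a concrete everyday form: log events as structured data (fields), not prose, so they can be queried later. A minimal sketch with illustrative field names:

```typescript
// Structured logging sketch: each event is a JSON object with a timestamp,
// an event name, and arbitrary queryable fields.
function logEvent(event: string, fields: Record<string, unknown>): string {
  const entry = { ts: new Date().toISOString(), event, ...fields };
  const line = JSON.stringify(entry);
  console.log(line);
  return line;
}
```

Usage: `logEvent("bus.stop_detected", { busId: "42", stopId: "7", distanceM: 31 })` produces one machine-parseable line instead of a sentence a human has to grep through.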
05 / CURRENTLY_BUILDING
CI/CD Sentinel [IN PROGRESS] [GitHub]
Designing a CI/CD monitoring and enforcement system: build observability around pipelines instead of blindly trusting green checkmarks.
[FOCUS AREAS]
- Pipeline health visibility
- Deployment audit tracking
- Log aggregation insights
- Failure pattern detection
- Git commit-to-deploy traceability
- Infrastructure sanity checks
[GOAL]
Build observability around CI/CD pipelines. Make deployment decisions explicit, not implicit. Track what changed, when, and why.
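Commit-to-deploy traceability amounts to keeping an audit record that links every deployment back to the commit and pipeline run that produced it. A hypothetical data shape (all names are assumptions, not the Sentinel's actual schema):

```typescript
// Hypothetical deploy audit record: answers "what changed, when, and why"
// from data rather than memory.
interface DeployRecord {
  commitSha: string;
  pipelineRunId: string;
  environment: "staging" | "production";
  deployedAt: string; // ISO timestamp
  triggeredBy: string;
}

// Given an audit log, answer "which commit is live in this environment?"
function currentCommit(log: DeployRecord[], env: DeployRecord["environment"]): string | undefined {
  const deploys = log
    .filter((d) => d.environment === env)
    .sort((a, b) => a.deployedAt.localeCompare(b.deployedAt));
  return deploys.length ? deploys[deploys.length - 1].commitSha : undefined;
}
```

With records like this, "is the green checkmark actually what's running?" becomes a query instead of a guess.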
