~/logs

System Logs

Chronological record of daily activities. No sleep. No meals. Just the work.

latest

date: "2026-02-22"

08:00 – System Boot

  • Reviewed open PRs on the portfolio OS project.
  • Merged feature branch for the Command Palette terminal emulation.

09:30 – Deep Work

  • Implemented the content.ts shared library to unify blog, logs, and archive reading.
  • Switched from Google Fonts to local font bundling after a build-time network timeout.

11:00 – Design Review

  • Revised the TerminalLayout — removed CRT scanlines, tuned ambient glow radii.
  • Adjusted dark orange color values for better contrast on OLED displays.

14:00 – Writing

  • Started this blog post on rendering Mermaid diagrams in MDX.
  • Debugged the next-mdx-remote/rsc Mermaid hydration on Safari.

17:00 – Learning

  • Watched Guillermo Rauch's talk on the future of DX at Next.js Conf.
  • Read chapter 7 of Designing Data-Intensive Applications (replication).

21:00 – Wrap-up

  • Committed and pushed all changes. Site builds green. Exit code: 0

Wrapping up the week by updating the internal documentation for the new cache architecture. If it isn't documented, it doesn't exist.

Created several Mermaid sequence diagrams to illustrate the L1/L2 fallback logic for the rest of the team. Also began scaffolding a small React dashboard that uses WebSockets to monitor cache hit rates in real time.

A demanding but incredibly satisfying week of engineering.

Successfully re-deployed the optimized Go cache.

The sync.Pool optimizations worked flawlessly. P99 GC pauses dropped from over 50ms to under 2ms at full load, and database CPU utilization halved as a result. A massive win for the infrastructure team.

Spent the latter half of the day pair programming with a junior engineer, walking through how pprof flame graphs actually map back to the codebase.

Production deployment day for the new cache layer.

The rollout was smooth until we shifted 80% of traffic. At that point, the L1 Go cache began seeing massive garbage-collection pauses. Profiling with pprof revealed we were creating and destroying millions of tiny pointer objects every minute instead of using object pooling.

Rolled back to 10% traffic. Now refactoring the cache node structs to avoid pointers where possible and using sync.Pool for byte-slice allocations.

Deep work on the new distributed caching layer.

I'm implementing a multi-tier cache strategy: a fast, bounded in-memory LRU cache in Go acting as L1, backed by Redis as L2, to alleviate pressure on the primary database during thundering-herd events.

Read through the "Dynamo" paper again to brush up on consistent hashing techniques. Decided to implement a Ring Hash algorithm to partition keys across the Redis cluster instances dynamically.

Migrating the analytics service from our legacy Node.js monolith to the new Go microservice.

Found a thorny bug where timestamps were silently truncated from millisecond to second precision on their way through the Kafka ingest topic. Spent three hours tracing it to a mismatched Protobuf definition in a shared internal library.

Learnings: always version shared proto definitions explicitly, and never trust a legacy timestamp column in Postgres without verifying its underlying type (timestamp without time zone strikes again).