
The Future of Human Work Might Just Be Pressing Enter for AI

Cover image: The Stanley Parable. Its protagonist, Stanley, is Employee 427 in an office building, and his daily work consists of repeatedly pressing keys exactly as the computer instructs him to.

First, I want to make one thing clear: this article is entirely based on phenomena I’ve observed recently and the thoughts and reflections that came from them. Every word was typed by hand, and no AI was used in the writing of this article.

Investigating how Codex context compaction works

Original author: Kangwook Lee

Original article: https://x.com/Kangwook_Lee/article/2028955292025962534

For non-Codex models, the open-source Codex CLI compacts context locally: an LLM summarizes the conversation using a dedicated compaction prompt. When the compacted context is later reused, responses.create() receives the summary together with a handoff prompt that frames it. Both prompts are visible in the source code.
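The flow described above can be sketched in a few lines. This is a hypothetical illustration, not the CLI's actual code: the prompt strings, the summarize() helper, and the message shapes are all stand-ins for what the real Codex CLI source contains.

```python
# Hypothetical sketch of local context compaction. COMPACTION_PROMPT and
# HANDOFF_PROMPT stand in for the real prompts in the Codex CLI source;
# summarize() stands in for an actual LLM call.

COMPACTION_PROMPT = "Summarize this conversation, preserving goals, decisions, and open tasks."
HANDOFF_PROMPT = "Context from an earlier session was compacted into the summary below."

def summarize(messages):
    # Placeholder for the LLM call: a real implementation would send
    # COMPACTION_PROMPT plus the conversation to a model and return its reply.
    text = " ".join(m["content"] for m in messages if m["role"] != "system")
    return text[:200]

def compact(history):
    """Replace a long history with a single summary message,
    framed by the handoff prompt."""
    summary = summarize(history)
    return {"role": "user", "content": f"{HANDOFF_PROMPT}\n\n{summary}"}

def resume(history, new_user_input):
    """Build the input for the next responses.create() call after compaction."""
    return [compact(history), {"role": "user", "content": new_user_input}]

history = [
    {"role": "user", "content": "Refactor the parser."},
    {"role": "assistant", "content": "Done; split lexer and parser modules."},
]
new_input = resume(history, "Now add tests.")
```

The key design point is that compaction happens entirely client-side: the next API call sees only the framed summary plus the new user message, not the full transcript.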

Visualizing L1 and L2 Regularization on a Cross-Entropy Loss Surface

I had wanted to write this post for a long time, and once I finally got ECharts working on my blog, I was able to finish it properly.

The goal here is simple: make L1 and L2 regularization visible. Instead of discussing them only as formulas, we will look at how they reshape a cross-entropy loss surface in 3D. That makes several abstract ideas much easier to grasp, especially why L1 regularization often produces sparse models and therefore behaves a bit like feature selection.
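The geometry behind the 3D plots can be reproduced numerically. Below is a minimal sketch (not the post's actual code): a tiny two-weight logistic model evaluated on a grid of (w1, w2), with L1 and L2 penalties added to the cross-entropy surface. The dataset and the penalty strength lam are made up for illustration; feature 2 is only weakly informative, so the L1 minimizer lands on the w2 = 0 axis, which is the sparsity effect described above.

```python
import numpy as np

# Toy data: feature 1 predicts the label; feature 2 is weakly correlated.
X = np.array([[1.0, 0.3], [2.0, 0.1], [-1.0, -0.3], [-2.0, -0.1]])
y = np.array([1.0, 1.0, 0.0, 0.0])

w = np.linspace(-3, 3, 61)
W1, W2 = np.meshgrid(w, w)                       # (61, 61) weight grids

# Logits for every grid point and every sample: shape (61, 61, 4)
logits = W1[..., None] * X[:, 0] + W2[..., None] * X[:, 1]
p = 1.0 / (1.0 + np.exp(-logits))
eps = 1e-12
ce = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)).mean(axis=-1)

lam = 0.5
l1 = ce + lam * (np.abs(W1) + np.abs(W2))        # diamond-shaped penalty
l2 = ce + lam * (W1**2 + W2**2)                  # circular (bowl) penalty

# Both penalties pull the minimizer toward the origin, but the L1 penalty's
# kink at zero zeroes out the weak weight w2; L2 only shrinks it smoothly.
i1 = np.unravel_index(np.argmin(l1), l1.shape)
i2 = np.unravel_index(np.argmin(l2), l2.shape)
print("L1 argmin:", (W1[i1], W2[i1]))
print("L2 argmin:", (W1[i2], W2[i2]))
```

Plotting ce, l1, and l2 as surfaces over the same grid gives exactly the three pictures the post builds with ECharts: the raw valley, the valley with a sharp diamond ridge, and the valley with a smooth bowl added.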