/images/earth.jpg

The Future of Human Work Might Just Be Pressing Enter for AI

Cover image: The Stanley Parable. Its protagonist, Stanley, is Employee 427 in an office building, and his daily work consists of repeatedly pressing keys exactly as the computer instructs him to.

First, I want to make one thing clear: this article is entirely based on phenomena I’ve observed recently and the thoughts and reflections that came from them. Every word was typed by hand, and no AI was used in the writing of this article.

Investigating how Codex context compaction works

Original author: Kangwook Lee

Original article: https://x.com/Kangwook_Lee/article/2028955292025962534

For non-Codex models, the open-source Codex CLI compacts context locally: an LLM summarizes the conversation using a compaction prompt. When the compacted context is later used, it is passed to responses.create() along with a handoff prompt that frames the summary. Both prompts are visible in the source code.
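The two-step flow can be sketched roughly as follows. The prompt texts and helper names here are placeholders (the real prompts live in the Codex CLI source, linked above); only the shape of the flow follows the article: summarize first, then hand the summary back framed by a handoff prompt.

```python
# Hypothetical sketch of local context compaction; prompt texts are invented.
COMPACTION_PROMPT = "Summarize this conversation, keeping decisions and open tasks."
HANDOFF_PROMPT = "Summary of the prior conversation:\n"

def compact(summarize, history):
    # Step 1: an LLM turns the full history into a short summary.
    # `summarize` stands in for whatever model call the CLI makes.
    return summarize(history + [{"role": "user", "content": COMPACTION_PROMPT}])

def build_resume_input(summary, new_message):
    # Step 2: the summary is framed by the handoff prompt and sent back
    # as the input to responses.create() when the conversation resumes.
    return [
        {"role": "user", "content": HANDOFF_PROMPT + summary},
        {"role": "user", "content": new_message},
    ]
```

The point of the handoff prompt is that the next model call sees a labeled summary rather than a raw wall of text, so it knows the context is compacted history and not a fresh user message.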

What Do L1 & L2 Regularization Look Like?

This article visualizes L1 and L2 regularization, with cross-entropy loss as the base loss function. The visualization shows how the L1 and L2 penalties reshape the original cross-entropy loss surface. Although the concept is not difficult, the visualization makes L1 and L2 regularization much easier to understand, for example why L1 regularization often leads to sparse models. Above all, the visualization itself is genuinely beautiful.
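The sparsity effect can be checked numerically even without the full surface plot. The sketch below uses a quadratic stand-in for the base loss (rather than cross entropy, so the minima are easy to verify by hand); the penalty strength and grid are arbitrary choices. The qualitative behavior is the point: the L2 penalty shrinks the minimizer toward zero but leaves it nonzero, while the kink of the L1 penalty at zero can pin the minimizer exactly at zero.

```python
import numpy as np

# Quadratic stand-in for the base loss, minimum at w = 1 when unregularized.
w = np.linspace(-2.0, 2.0, 4001)
base = (w - 1.0) ** 2
lam = 2.5  # arbitrary penalty strength, chosen large enough to show sparsity

l2_loss = base + lam * w ** 2       # smooth everywhere: shrinks the minimum
l1_loss = base + lam * np.abs(w)    # kinked at 0: can pin the minimum at 0

w_l2 = w[np.argmin(l2_loss)]  # analytically 1 / (1 + lam) ~ 0.286, nonzero
w_l1 = w[np.argmin(l1_loss)]  # analytically 0 here, since lam / 2 > 1
```

This is exactly the geometric story the article's surfaces tell: adding |w| tilts the surface into a corner at zero, while adding w² only bends it.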