<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Lukas Heidemann</title>
    <description/>
    <link>https://heidemann.me/</link>
    <item>
      <title>Computing Attention via Monoid Aggregates</title>
      <link>https://heidemann.me/blog/2026-03-06-attention-monoid/</link>
      <guid isPermaLink="true">https://heidemann.me/blog/2026-03-06-attention-monoid/</guid>
      <description>A derivation of numerically stable softmax attention via a monoid structure underlying FlashAttention.</description>
    </item>
    <item>
      <title>Tools that Work Together</title>
      <link>https://heidemann.me/blog/2026-03-13-tools-that-work-together/</link>
      <guid isPermaLink="true">https://heidemann.me/blog/2026-03-13-tools-that-work-together/</guid>
      <description>An optimistic argument that the current AI wave may push software ecosystems toward better interoperability, automation, and composability.</description>
    </item>
    <item>
      <title>MLIR was so close</title>
      <link>https://heidemann.me/blog/2026-03-21-mlir-was-so-close/</link>
      <guid isPermaLink="true">https://heidemann.me/blog/2026-03-21-mlir-was-so-close/</guid>
      <description>A critique of MLIR as a de facto compiler IR standard, from the perspective of implementation-independence, tooling, and dialect interoperability.</description>
    </item>
  </channel>
</rss>