<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/rss/styles.xsl" type="text/xsl"?><rss version="2.0"><channel><title>Daniel Hails | hails.info</title><description>Ideas worth fumbling through</description><link>https://hails.info/</link><item><title>The CIA was &quot;Probably&quot; Right</title><link>https://hails.info/writing/perception-of-probability/</link><guid isPermaLink="true">https://hails.info/writing/perception-of-probability/</guid><description>In 1951, CIA analysts couldn&apos;t agree what &quot;serious possibility&quot; meant — estimates ranged from 20% to 80%. The chart used to prove this for the next fifty years turns out to be broken. The point still stands.
</description><pubDate>Wed, 22 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Sharpen your Axe</title><link>https://hails.info/writing/sharpen-your-axe/</link><guid isPermaLink="true">https://hails.info/writing/sharpen-your-axe/</guid><description>Knowledge workers compound returns by building reusable tools that remove friction, rather than optimising within existing systems.
</description><pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Yardsticks</title><link>https://hails.info/writing/yardsticks/</link><guid isPermaLink="true">https://hails.info/writing/yardsticks/</guid><description>Intuitive scales for large and small numbers. A $10 tax per webpage covers the US deficit; one career is roughly a billion seconds.
</description><pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate></item><item><title>How professional gamblers size bets</title><link>https://hails.info/writing/kelly-criterion/</link><guid isPermaLink="true">https://hails.info/writing/kelly-criterion/</guid><description>The Kelly Criterion solves a question most people get wrong: how much should you bet when the odds are in your favour? Bet too much and you go bust; too little and you leave money on the table.
</description><pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate></item><item><title>A Picture is Worth 256 Tokens</title><link>https://hails.info/writing/picture-thousand-words/</link><guid isPermaLink="true">https://hails.info/writing/picture-thousand-words/</guid><description>The old adage is backwards. A picture doesn&apos;t need a thousand words because images are rich; it needs them because English is a terrible format for visual information.
</description><pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate></item><item><title>Bias is a bad word</title><link>https://hails.info/writing/bias-is-a-bad-word/</link><guid isPermaLink="true">https://hails.info/writing/bias-is-a-bad-word/</guid><description>&quot;Bias&quot; smuggles together statistical error, cognitive shortcuts, and prejudice. Language models don&apos;t have bias; they have learned priors. Swapping the word changes the conversation from blame to engineering.
</description><pubDate>Wed, 10 Dec 2025 00:00:00 GMT</pubDate></item><item><title>Fresh Data is Fairer Data</title><link>https://hails.info/writing/fresh-is-fairer/</link><guid isPermaLink="true">https://hails.info/writing/fresh-is-fairer/</guid><description>Retrofitting fairness onto stale training data is futile because the data reflects the unfair world it came from. Fresh data won&apos;t make models fair, but stale data almost guarantees they won&apos;t be.
</description><pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate></item><item><title>Traumagotchi</title><link>https://hails.info/writing/traumagotchi/</link><guid isPermaLink="true">https://hails.info/writing/traumagotchi/</guid><description>Friend.com&apos;s AI pendant trauma-dumps on users to harvest genuine emotional responses. The product isn&apos;t the companion; it&apos;s the training data of human compassion.
</description><pubDate>Wed, 12 Mar 2025 00:00:00 GMT</pubDate></item><item><title>Entropix Sampling</title><link>https://hails.info/writing/entropix/</link><guid isPermaLink="true">https://hails.info/writing/entropix/</guid><description>A per-token sampling strategy that uses entropy and varentropy to switch between confident output, cautious backtracking, and exploratory branching.
</description><pubDate>Tue, 22 Oct 2024 00:00:00 GMT</pubDate></item><item><title>Coder&apos;s Pangram: A Modern Font Test</title><link>https://hails.info/writing/code-pangram/</link><guid isPermaLink="true">https://hails.info/writing/code-pangram/</guid><description>A pangram designed for code fonts, not calligraphy. Tests the characters that actually trip developers up: 0 vs O, 1 vs l vs I.
</description><pubDate>Sat, 19 Oct 2024 00:00:00 GMT</pubDate></item><item><title>How Arc Max achieves Magic, at a Cost</title><link>https://hails.info/writing/arc-cards/</link><guid isPermaLink="true">https://hails.info/writing/arc-cards/</guid><description>Reverse-engineering Arc browser&apos;s AI features to expose the prompts, fine-tuning, and streaming tricks behind them. No arcane wizardry; just good engineering at an eye-watering price.
</description><pubDate>Fri, 19 Apr 2024 00:00:00 GMT</pubDate></item><item><title>Codenames</title><link>https://hails.info/writing/codenames/</link><guid isPermaLink="true">https://hails.info/writing/codenames/</guid><description>How to name services and codebases so they age well. Descriptive names mislead as scope drifts; suggestive names hint at function without constraining it.
</description><pubDate>Sun, 14 Apr 2024 00:00:00 GMT</pubDate></item><item><title>Mark Rober @ MIT</title><link>https://hails.info/writing/mark-rober/</link><guid isPermaLink="true">https://hails.info/writing/mark-rober/</guid><description>Notes on Rober&apos;s MIT commencement: naive optimism, reframing failure, and a 50,000-person experiment showing people persist 2.5x longer when mistakes aren&apos;t penalised.
</description><pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate></item><item><title>A Visual Exploration of Neural Radiance Fields</title><link>https://hails.info/writing/radiance-fields/</link><guid isPermaLink="true">https://hails.info/writing/radiance-fields/</guid><description>An interactive review article on how we are moving beyond voxels through positional encoding.
</description><pubDate>Sat, 08 Apr 2023 00:00:00 GMT</pubDate></item><item><title>When Deep Neural Nets Fail</title><link>https://hails.info/writing/dnn-failure/</link><guid isPermaLink="true">https://hails.info/writing/dnn-failure/</guid><description>Adversarial examples aren&apos;t bugs; they&apos;re features the model found useful that humans never intended. DNNs learn texture-based shortcuts that generalise poorly and fail silently.
</description><pubDate>Mon, 31 Oct 2022 00:00:00 GMT</pubDate></item><item><title>Can GPT pass at MIT?</title><link>https://hails.info/writing/gpt-at-mit/</link><guid isPermaLink="true">https://hails.info/writing/gpt-at-mit/</guid><description>GPT-3 answers MIT graduate AI course questions side-by-side with a human. It excels at coherence but struggles with depth, examples, and intellectual risk.
</description><pubDate>Thu, 22 Sep 2022 00:00:00 GMT</pubDate></item><item><title>Steve Jobs @ Stanford</title><link>https://hails.info/writing/steve-jobs/</link><guid isPermaLink="true">https://hails.info/writing/steve-jobs/</guid><description>Condensed notes on Jobs&apos; 2005 Stanford commencement: connecting dots backwards, love and loss at Apple, and death as the ultimate clarifier.
</description><pubDate>Sun, 12 Jun 2005 00:00:00 GMT</pubDate></item></channel></rss>