Blog

Insights from our engineering and delivery work.

Why LLMs Cannot Predict Financial Returns by Tokenizing a Price Sequence

Transformers and LLMs work because language has syntax, semantics, and recoverable structure. Financial return sequences have none of that. A deep look at why tokenizing returns the way you tokenize words is a category error.

machine-learning, transformers, financial-returns, time-series, quantitative-finance, llm, tokenization

Read post

How AGENTS.md Improves AI Coding Agent Efficiency

A practical guide to how AGENTS.md files improve AI coding agent efficiency, and how engineering teams can apply these patterns in real repositories.

agents-md, ai-coding-agents, developer-productivity, software-engineering
Read post