<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://nasonov.fun/feed.xml" rel="self" type="application/atom+xml" /><link href="https://nasonov.fun/" rel="alternate" type="text/html" hreflang="en" /><updated>2026-03-22T01:52:43+03:00</updated><id>https://nasonov.fun/feed.xml</id><title type="html">Yaroslav Nasonov</title><subtitle>I build mobile products and scalable systems.  Focused on AI, architecture and teams.</subtitle><author><name>Ярослав Насонов</name><email>yaroslav.nasonov@gmail.com</email></author><entry><title type="html">Why CTOs Should Still Write Code</title><link href="https://nasonov.fun/blog/2025/02/10/why-ctos-should-still-write-code/" rel="alternate" type="text/html" title="Why CTOs Should Still Write Code" /><published>2025-02-10T00:00:00+03:00</published><updated>2025-02-10T00:00:00+03:00</updated><id>https://nasonov.fun/blog/2025/02/10/why-ctos-should-still-write-code</id><content type="html" xml:base="https://nasonov.fun/blog/2025/02/10/why-ctos-should-still-write-code/"><![CDATA[<p>There’s a persistent debate in tech leadership circles: should CTOs write code? The conventional wisdom says no—your time is better spent on strategy, people, and architecture. I disagree.</p>

<h2 id="the-context-switch-problem">The Context Switch Problem</h2>

<p>As an engineering leader, you face constant pressure to step away from the keyboard. Meetings multiply. Strategic planning demands attention. Stakeholder management becomes a full-time job. Before you know it, code becomes something “other people do.”</p>

<p>This is a mistake.</p>

<h2 id="the-technical-debt-of-distance">The Technical Debt of Distance</h2>

<p>When you stop coding, several things happen:</p>

<p><strong>You lose credibility.</strong> Engineers can smell a manager who’s lost touch. Your architectural decisions become theoretical rather than practical. Your technology choices lack the nuance that comes from implementation experience.</p>

<p><strong>You miss the details.</strong> The friction points in your systems—the ones that slow down your team daily—become invisible. You can’t feel the pain of poorly designed APIs or awkward development workflows.</p>

<p><strong>You disconnect from reality.</strong> Your estimates become wishful thinking. Your technical roadmap drifts from what’s actually feasible. The gap between vision and execution widens.</p>

<h2 id="what-writing-code-actually-means">What “Writing Code” Actually Means</h2>

<p>I’m not suggesting CTOs should be on the critical path for features. That’s a different failure mode. Instead, writing code as a CTO means:</p>

<ul>
  <li><strong>Building tools and automation</strong> that make your team more productive</li>
  <li><strong>Prototyping new approaches</strong> before asking others to implement them</li>
  <li><strong>Investigating production issues</strong> directly rather than through layers of reports</li>
  <li><strong>Maintaining a small piece of infrastructure</strong> that keeps you connected to deployment reality</li>
</ul>

<p>My rule: I ship code at least once a week. Not heroic features. Not critical path work. But real code that runs in production.</p>

<h2 id="the-poc-advantage">The PoC Advantage</h2>

<p>As a technical leader, you have a superpower: the ability to validate ideas quickly without the pressure of production quality. Use it.</p>

<p>When we considered moving to a new AI pipeline architecture, I didn’t write a spec document and delegate. I spent three days building a proof of concept. The learnings from those three days informed months of team work and saved us from several dead ends.</p>

<p>PoCs are where CTOs should code most. They’re high-leverage, low-risk, and give you the credibility to make informed technical bets.</p>

<h2 id="the-balance">The Balance</h2>

<p>Yes, your primary job is leadership. But leadership in engineering requires technical credibility, and credibility requires practice. The goal isn’t to be the best coder on your team—it’s to maintain enough context to make good decisions and command respect.</p>

<p>Write enough code to stay dangerous. Ship enough to stay humble. Build enough to stay credible.</p>

<h2 id="practical-implementation">Practical Implementation</h2>

<p>Here’s how I make this work:</p>

<ol>
  <li><strong>Protected time:</strong> Block 4-6 hours per week for hands-on work</li>
  <li><strong>Scope control:</strong> Only work on things that won’t block others</li>
  <li><strong>Infrastructure focus:</strong> Prefer tools, automation, and internal systems</li>
  <li><strong>PoC mindset:</strong> Use it to validate ideas and learn, not to ship features</li>
  <li><strong>Public commits:</strong> Make your code visible to maintain accountability</li>
</ol>

<p>The strongest technical leaders I know haven’t stopped coding. They’ve just changed what they code and why.</p>

<hr />

<p><em>What’s your take? I’m <a href="https://t.me/yaroslav_nasonov">@yaroslav_nasonov</a> on Telegram.</em></p>]]></content><author><name>Yaroslav Nasonov</name></author><summary type="html"><![CDATA[There’s a persistent debate in tech leadership circles: should CTOs write code? The conventional wisdom says no—your time is better spent on strategy, people, and architecture. I disagree.]]></summary></entry><entry><title type="html">Building AI Pipelines Without Overengineering</title><link href="https://nasonov.fun/blog/2025/02/08/building-ai-pipelines-without-overengineering/" rel="alternate" type="text/html" title="Building AI Pipelines Without Overengineering" /><published>2025-02-08T00:00:00+03:00</published><updated>2025-02-08T00:00:00+03:00</updated><id>https://nasonov.fun/blog/2025/02/08/building-ai-pipelines-without-overengineering</id><content type="html" xml:base="https://nasonov.fun/blog/2025/02/08/building-ai-pipelines-without-overengineering/"><![CDATA[<p>The AI/LLM space is full of overengineered solutions. Every startup wants to build the next comprehensive AI platform. Most should start with a Python script and a prompt.</p>

<h2 id="the-overengineering-trap">The Overengineering Trap</h2>

<p>I’ve watched teams spend months building “AI infrastructure” before shipping a single AI feature. Vector databases, embedding pipelines, fine-tuning frameworks, evaluation harnesses—all before validating that users want what they’re building.</p>

<p>This is backwards.</p>

<h2 id="start-with-the-simplest-thing">Start with the Simplest Thing</h2>

<p>Here’s what a minimal AI pipeline actually needs:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">openai</span> <span class="kn">import</span> <span class="n">OpenAI</span>

<span class="n">client</span> <span class="o">=</span> <span class="n">OpenAI</span><span class="p">()</span>  <span class="c1"># reads OPENAI_API_KEY from the environment</span>

<span class="k">def</span> <span class="nf">process_user_query</span><span class="p">(</span><span class="n">query</span><span class="p">:</span> <span class="nb">str</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="nb">str</span><span class="p">:</span>
    <span class="n">response</span> <span class="o">=</span> <span class="n">client</span><span class="p">.</span><span class="n">chat</span><span class="p">.</span><span class="n">completions</span><span class="p">.</span><span class="n">create</span><span class="p">(</span>
        <span class="n">model</span><span class="o">=</span><span class="s">"gpt-4"</span><span class="p">,</span>
        <span class="n">messages</span><span class="o">=</span><span class="p">[{</span><span class="s">"role"</span><span class="p">:</span> <span class="s">"user"</span><span class="p">,</span> <span class="s">"content"</span><span class="p">:</span> <span class="n">query</span><span class="p">}]</span>
    <span class="p">)</span>
    <span class="k">return</span> <span class="n">response</span><span class="p">.</span><span class="n">choices</span><span class="p">[</span><span class="mi">0</span><span class="p">].</span><span class="n">message</span><span class="p">.</span><span class="n">content</span>
</code></pre></div></div>

<p>That’s it. Ship that. Learn from it. Then iterate.</p>

<h2 id="the-three-phases-of-ai-maturity">The Three Phases of AI Maturity</h2>

<h3 id="phase-1-direct-api-calls-week-1">Phase 1: Direct API Calls (Week 1)</h3>

<ul>
  <li>Call OpenAI/Anthropic APIs directly</li>
  <li>Hard-code prompts in your application</li>
  <li>No fancy infrastructure</li>
  <li><strong>Goal:</strong> Validate the use case</li>
</ul>

<p>At this stage, your “AI pipeline” is a function. That’s fine. Most products should stay here longer than they do.</p>

<h3 id="phase-2-basic-structure-weeks-2-4">Phase 2: Basic Structure (Weeks 2-4)</h3>

<p>Once you’ve proven the concept, add:</p>

<ul>
  <li>Prompt templates with variable substitution</li>
  <li>Simple prompt versioning (even just comments with dates)</li>
  <li>Basic logging of inputs/outputs</li>
  <li>Cost tracking (API calls aren’t free)</li>
</ul>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">PromptTemplate</span><span class="p">:</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">template</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="n">version</span><span class="p">:</span> <span class="nb">str</span><span class="p">):</span>
        <span class="bp">self</span><span class="p">.</span><span class="n">template</span> <span class="o">=</span> <span class="n">template</span>
        <span class="bp">self</span><span class="p">.</span><span class="n">version</span> <span class="o">=</span> <span class="n">version</span>
    
    <span class="k">def</span> <span class="nf">render</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="nb">str</span><span class="p">:</span>
        <span class="k">return</span> <span class="bp">self</span><span class="p">.</span><span class="n">template</span><span class="p">.</span><span class="nb">format</span><span class="p">(</span><span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>

<span class="c1"># prompts/analyze_code.txt (v1.2)
</span><span class="n">ANALYZE_CODE</span> <span class="o">=</span> <span class="n">PromptTemplate</span><span class="p">(</span>
    <span class="n">template</span><span class="o">=</span><span class="s">"""
    Analyze this code for issues:
    
    {code}
    
    Focus on: {focus_areas}
    """</span><span class="p">,</span>
    <span class="n">version</span><span class="o">=</span><span class="s">"1.2"</span>
<span class="p">)</span>
</code></pre></div></div>
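<p>The logging and cost-tracking pieces can stay just as small. A minimal sketch, appending one JSON line per call (the per-token prices here are placeholders, not real rates; <code class="language-python highlighter-rouge"><span class="n">log_llm_call</span></code> is an illustrative name, not a library function):</p>

```python
import json
import time

# Placeholder prices per 1k tokens; look up your provider's actual rates.
PRICE_PER_1K = {"input": 0.01, "output": 0.03}

def log_llm_call(prompt: str, completion: str,
                 input_tokens: int, output_tokens: int,
                 log_path: str = "llm_calls.jsonl") -> float:
    """Append one call record to a JSONL file and return its estimated cost."""
    cost = (input_tokens / 1000) * PRICE_PER_1K["input"] \
         + (output_tokens / 1000) * PRICE_PER_1K["output"]
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "completion": completion,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "cost_usd": round(cost, 6),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return cost
```

<p>A JSONL file you can grep is a perfectly good observability stack at this stage.</p>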

<h3 id="phase-3-production-scale-month-2">Phase 3: Production Scale (Month 2+)</h3>

<p>Only when you have real usage and real problems:</p>

<ul>
  <li>Implement caching (most queries repeat)</li>
  <li>Add fallback models (for cost/speed tradeoffs)</li>
  <li>Structured logging and monitoring</li>
  <li>Rate limiting and retry logic</li>
  <li>Evaluation framework</li>
</ul>

<p>Notice what’s missing: vector databases, fine-tuning, custom models. You probably don’t need them.</p>
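<p>Even the Phase 3 pieces start small. Caching and retries can be a decorator and <code class="language-python highlighter-rouge"><span class="n">functools</span><span class="p">.</span><span class="n">lru_cache</span></code> before they’re infrastructure. A sketch, assuming a generic <code class="language-python highlighter-rouge"><span class="n">call_llm</span></code> function (the stub below stands in for a real API client):</p>

```python
import functools
import random
import time

def retry(max_attempts: int = 3, base_delay: float = 1.0):
    """Retry a function with exponential backoff and a little jitter."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise
                    time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)
        return wrapper
    return decorator

def call_llm(prompt: str) -> str:
    # Stand-in for a real provider call; swap in your API client here.
    return "stub response to: " + prompt

@functools.lru_cache(maxsize=1024)   # most queries repeat
@retry(max_attempts=3)               # transient API errors happen
def cached_llm_call(prompt: str) -> str:
    return call_llm(prompt)
```

<p>When this stops being enough (cache shared across processes, per-model rate limits), you’ll know exactly which pain you’re solving.</p>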

<h2 id="when-to-add-complexity">When to Add Complexity</h2>

<p>Add infrastructure only when you feel specific pain:</p>

<p><strong>Vector databases</strong> → When search quality matters and your corpus is too large for in-memory search (roughly &gt;100k documents)</p>

<p><strong>Fine-tuning</strong> → When prompting fails after serious iteration and you have quality training data</p>

<p><strong>Self-hosting</strong> → When API costs exceed $10k/month or you have strict data requirements</p>

<p><strong>Custom models</strong> → Probably never for your use case</p>

<h2 id="the-rag-reality-check">The RAG Reality Check</h2>

<p>RAG (Retrieval-Augmented Generation) has become the default “solution” for any AI product. Most implementations are overengineered.</p>

<p>Simple RAG that works:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">answer_question</span><span class="p">(</span><span class="n">question</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="n">docs</span><span class="p">:</span> <span class="nb">list</span><span class="p">[</span><span class="nb">str</span><span class="p">])</span> <span class="o">-&gt;</span> <span class="nb">str</span><span class="p">:</span>
    <span class="c1"># Find relevant docs (simple embedding similarity)
</span>    <span class="n">relevant</span> <span class="o">=</span> <span class="n">find_top_k_docs</span><span class="p">(</span><span class="n">question</span><span class="p">,</span> <span class="n">docs</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
    
    <span class="c1"># Stuff them in the prompt
</span>    <span class="n">context</span> <span class="o">=</span> <span class="s">"</span><span class="se">\n\n</span><span class="s">"</span><span class="p">.</span><span class="n">join</span><span class="p">(</span><span class="n">relevant</span><span class="p">)</span>
    <span class="n">prompt</span> <span class="o">=</span> <span class="sa">f</span><span class="s">"Context:</span><span class="se">\n</span><span class="si">{</span><span class="n">context</span><span class="si">}</span><span class="se">\n\n</span><span class="s">Question: </span><span class="si">{</span><span class="n">question</span><span class="si">}</span><span class="se">\n</span><span class="s">Answer:"</span>
    
    <span class="k">return</span> <span class="n">call_llm</span><span class="p">(</span><span class="n">prompt</span><span class="p">)</span>
</code></pre></div></div>

<p>Do you need a vector database? Only if <code class="language-python highlighter-rouge"><span class="n">docs</span></code> is so large that in-memory search is slow. For most applications, that’s &gt;100k documents.</p>

<p>Do you need sophisticated retrieval? Only if simple similarity search fails. Try it first.</p>
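<p>The <code class="language-python highlighter-rouge"><span class="n">find_top_k_docs</span></code> helper above doesn’t need infrastructure either. A sketch using plain bag-of-words cosine similarity as a stand-in for real embeddings (swap in embedding vectors from your provider when quality demands it):</p>

```python
import math
from collections import Counter

def _vector(text: str) -> Counter:
    # Naive tokenization; a stand-in for a real embedding model.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def find_top_k_docs(question: str, docs: list[str], k: int = 3) -> list[str]:
    """Return the k docs most similar to the question, best first."""
    q = _vector(question)
    ranked = sorted(docs, key=lambda d: _cosine(q, _vector(d)), reverse=True)
    return ranked[:k]
```

<p>For a few thousand documents this runs in milliseconds on a laptop. That’s your baseline; anything fancier has to beat it.</p>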

<h2 id="measuring-what-matters">Measuring What Matters</h2>

<p>Skip the MLOps complexity. Track:</p>

<ol>
  <li><strong>Response quality</strong> (thumbs up/down from users)</li>
  <li><strong>Latency</strong> (time to first token)</li>
  <li><strong>Cost per query</strong></li>
  <li><strong>Cache hit rate</strong></li>
</ol>

<p>These four metrics tell you everything. Fancy evaluation frameworks can come later.</p>
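<p>Tracking those four numbers fits in one small class. A hedged sketch (names are illustrative, not from any framework):</p>

```python
from dataclasses import dataclass

@dataclass
class QueryMetrics:
    """Running totals for the four metrics that matter."""
    queries: int = 0
    thumbs_up: int = 0
    total_latency_s: float = 0.0
    total_cost_usd: float = 0.0
    cache_hits: int = 0

    def record(self, liked: bool, latency_s: float,
               cost_usd: float, cache_hit: bool) -> None:
        self.queries += 1
        self.thumbs_up += liked
        self.total_latency_s += latency_s
        self.total_cost_usd += cost_usd
        self.cache_hits += cache_hit

    def summary(self) -> dict:
        n = self.queries or 1  # avoid division by zero before first query
        return {
            "quality": self.thumbs_up / n,
            "avg_latency_s": self.total_latency_s / n,
            "cost_per_query": self.total_cost_usd / n,
            "cache_hit_rate": self.cache_hits / n,
        }
```

<p>Dump <code class="language-python highlighter-rouge"><span class="n">summary</span><span class="p">()</span></code> to a dashboard or a cron-mailed report; that’s enough to steer by.</p>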

<h2 id="the-poc-mindset">The PoC Mindset</h2>

<p>Every AI feature should start as a proof of concept:</p>

<ul>
  <li>One Python file</li>
  <li>Hard-coded configuration</li>
  <li>Manual testing</li>
  <li>Runs on your laptop</li>
</ul>

<p>Ship this to 5 users. Get feedback. Only then architect a “real” solution.</p>

<p>We’ve launched three AI features this way. Two are still running essentially as PoCs (with better error handling). One evolved into a proper service after we saw 10x usage growth. The architecture emerged from need, not anticipation.</p>

<h2 id="common-mistakes">Common Mistakes</h2>

<p><strong>Mistake 1:</strong> Building an “AI platform” before building AI features</p>

<p><strong>Fix:</strong> Build one feature. Extract common patterns. Then build infrastructure.</p>

<p><strong>Mistake 2:</strong> Optimizing for problems you don’t have yet</p>

<p><strong>Fix:</strong> Start with the slowest, dumbest solution that works.</p>

<p><strong>Mistake 3:</strong> Treating LLMs like traditional ML</p>

<p><strong>Fix:</strong> LLMs are more like databases than models. Query them, don’t train them (yet).</p>

<h2 id="the-right-kind-of-engineering">The Right Kind of Engineering</h2>

<p>This isn’t anti-engineering. It’s pro-engineering. Good engineering means:</p>

<ul>
  <li>Solving actual problems</li>
  <li>Building for current scale</li>
  <li>Iterating based on real feedback</li>
  <li>Avoiding premature abstraction</li>
</ul>

<p>The best AI pipeline is the one that ships features users love. Start simple. Stay simple as long as you can. Add complexity only when simplicity breaks.</p>

<p>Most teams never reach that point. And that’s fine.</p>

<hr />

<p><em>Building AI products? I’m interested in your architecture decisions. Reach out: <a href="https://t.me/yaroslav_nasonov">@yaroslav_nasonov</a></em></p>]]></content><author><name>Yaroslav Nasonov</name></author><summary type="html"><![CDATA[The AI/LLM space is full of overengineered solutions. Every startup wants to build the next comprehensive AI platform. Most should start with a Python script and a prompt.]]></summary></entry><entry><title type="html">From Chaos to Process: Structuring Engineering Teams</title><link href="https://nasonov.fun/blog/2025/02/05/from-chaos-to-process-structuring-teams/" rel="alternate" type="text/html" title="From Chaos to Process: Structuring Engineering Teams" /><published>2025-02-05T00:00:00+03:00</published><updated>2025-02-05T00:00:00+03:00</updated><id>https://nasonov.fun/blog/2025/02/05/from-chaos-to-process-structuring-teams</id><content type="html" xml:base="https://nasonov.fun/blog/2025/02/05/from-chaos-to-process-structuring-teams/"><![CDATA[<p>Early-stage startups run on chaos. Move fast, ship things, break stuff, fix it later. This works until it doesn’t. The transition from chaotic velocity to structured execution is where most engineering teams struggle.</p>

<h2 id="the-chaos-stage">The Chaos Stage</h2>

<p>In the beginning, chaos is appropriate. You’re searching for product-market fit. Everything is an experiment. Process would be premature optimization.</p>

<p>Signs you’re in healthy chaos:</p>

<ul>
  <li>Small team (3-7 engineers)</li>
  <li>Direct communication (no layers)</li>
  <li>Fast decisions (hours, not days)</li>
  <li>High agency (people just do things)</li>
  <li>Low coordination overhead</li>
</ul>

<p>This is the golden age of startups. Savor it. But recognize it doesn’t scale.</p>

<h2 id="when-chaos-breaks">When Chaos Breaks</h2>

<p>Chaos breaks predictably. The symptoms:</p>

<p><strong>Repeated work.</strong> Teams building the same thing in parallel because there’s no visibility.</p>

<p><strong>Tribal knowledge.</strong> Only 1-2 people understand critical systems. They become bottlenecks.</p>

<p><strong>Unclear priorities.</strong> Everyone is busy, but no one knows what matters most.</p>

<p><strong>Quality decay.</strong> Technical debt compounds. Incidents increase. Velocity drops.</p>

<p><strong>Frustrated engineers.</strong> The best people leave because “nothing works here.”</p>

<p>If you’re seeing 3+ of these, you need process. But be careful—most teams add the wrong process.</p>

<h2 id="the-wrong-kind-of-process">The Wrong Kind of Process</h2>

<p>Bad process cargo-cults big company practices:</p>

<ul>
  <li>Sprint planning that takes 3 hours to plan 10 days</li>
  <li>Story points and velocity tracking</li>
  <li>Mandatory stand-ups where no one cares</li>
  <li>Jira workflows with 12 states</li>
  <li>Architecture review boards that block work</li>
  <li>Incident post-mortems that produce blame, not learning</li>
</ul>

<p>This is process theater. It feels productive but adds friction without adding clarity.</p>

<h2 id="the-right-kind-of-process">The Right Kind of Process</h2>

<p>Good process has three properties:</p>

<ol>
  <li><strong>Lightweight:</strong> Takes minimal time to follow</li>
  <li><strong>Clarifying:</strong> Makes priorities and ownership obvious</li>
  <li><strong>Evolving:</strong> Changes as needs change</li>
</ol>

<p>Start with these fundamentals:</p>

<h3 id="1-clear-ownership">1. Clear Ownership</h3>

<p>Every project, system, and decision needs an owner. Not a committee. One person who’s accountable.</p>

<p>Use a simple RACI-like model:</p>

<ul>
  <li><strong>Owner:</strong> Makes the decision, owns the outcome</li>
  <li><strong>Consulted:</strong> Input sought before decision</li>
  <li><strong>Informed:</strong> Told after decision</li>
</ul>

<p>Most paralysis comes from unclear ownership. Fix this first.</p>

<h3 id="2-written-context">2. Written Context</h3>

<p>Replace meetings with documents. Before starting anything:</p>

<ul>
  <li><strong>What:</strong> One paragraph describing the goal</li>
  <li><strong>Why:</strong> What problem this solves</li>
  <li><strong>How:</strong> High-level approach</li>
  <li><strong>Success:</strong> How we’ll know it worked</li>
</ul>

<p>This is the minimum viable spec. Takes 15 minutes to write. Saves hours of confusion.</p>

<p>We use Confluence. Notion works. Google Docs works. The tool doesn’t matter. The habit does.</p>

<h3 id="3-visible-priorities">3. Visible Priorities</h3>

<p>Everyone should be able to answer: “What are the top 3 priorities right now?”</p>

<p>Keep a single source of truth. We use Jira with a simple rule:</p>

<ul>
  <li><strong>P0:</strong> Team stops everything to fix this (incidents)</li>
  <li><strong>P1:</strong> Must ship this month (max 3 items)</li>
  <li><strong>P2:</strong> Should ship this quarter</li>
  <li><strong>Everything else:</strong> Backlog (which we mostly ignore)</li>
</ul>

<p>The magic is in the constraints. Only 3 P1 items forces real prioritization.</p>

<h3 id="4-regular-retrospection">4. Regular Retrospection</h3>

<p>Every 2 weeks, ask:</p>

<ul>
  <li>What worked well?</li>
  <li>What slowed us down?</li>
  <li>What will we change?</li>
</ul>

<p>Pick one change. Try it for two weeks. Repeat.</p>

<p>This is how process evolves. Small adjustments compound.</p>

<h2 id="the-architecture-decision-record">The Architecture Decision Record</h2>

<p>One of the best lightweight processes we adopted: Architecture Decision Records (ADRs).</p>

<p>For any significant technical decision, write a markdown file:</p>

<div class="language-markdown highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gh"># ADR-012: Switch to PostgreSQL for Primary Database</span>

<span class="gu">## Context</span>
Current MongoDB setup causing scaling issues...

<span class="gu">## Decision</span>
Migrate to PostgreSQL...

<span class="gu">## Consequences</span>
Positive:
<span class="p">-</span> Better query capabilities
<span class="p">-</span> Stronger consistency guarantees

Negative:
<span class="p">-</span> Migration effort ~3 weeks
<span class="p">-</span> Team needs PostgreSQL training

<span class="gu">## Status</span>
Accepted (2025-02-01)
</code></pre></div></div>

<p>Store these in the repo. They become living documentation of why things are the way they are.</p>

<h2 id="the-communication-cadence">The Communication Cadence</h2>

<p>Process isn’t just about tracking work—it’s about communication rhythm.</p>

<p><strong>Daily:</strong> Quick async updates (we use Slack threads)</p>
<ul>
  <li>What did you ship yesterday?</li>
  <li>What are you doing today?</li>
  <li>Any blockers?</li>
</ul>

<p><strong>Weekly:</strong> Sync alignment (30 min meeting)</p>
<ul>
  <li>Progress on P1 items</li>
  <li>Upcoming decisions</li>
  <li>Quick Q&amp;A</li>
</ul>

<p><strong>Monthly:</strong> Strategic review (1 hour)</p>
<ul>
  <li>Retrospective</li>
  <li>Priority adjustment</li>
  <li>Technical roadmap check</li>
</ul>

<p><strong>Quarterly:</strong> Planning (half day)</p>
<ul>
  <li>Review goals</li>
  <li>Set next quarter priorities</li>
  <li>Resurface technical debt</li>
</ul>

<p>Notice: there are no daily meetings at all. The daily touchpoint is async; synchronous time is weekly or rarer.</p>

<h2 id="measuring-process-health">Measuring Process Health</h2>

<p>How do you know if your process is working?</p>

<p>Track these:</p>

<p><strong>Lead time:</strong> Time from “we should do this” to “it’s in production”</p>
<ul>
  <li>Good: &lt;2 weeks for small features</li>
  <li>Warning: &gt;1 month consistently</li>
</ul>

<p><strong>Decision velocity:</strong> Time from question to answer</p>
<ul>
  <li>Good: &lt;48 hours for most decisions</li>
  <li>Warning: &gt;1 week regularly</li>
</ul>

<p><strong>Context sharing:</strong> Can new engineers understand why things exist?</p>
<ul>
  <li>Good: They find answers in docs</li>
  <li>Warning: They have to ask people repeatedly</li>
</ul>

<p><strong>Team satisfaction:</strong> Do engineers feel productive?</p>
<ul>
  <li>Good: People say they get stuff done</li>
  <li>Warning: People complain about process/meetings</li>
</ul>

<p>These matter more than velocity or story points.</p>

<h2 id="the-transition-strategy">The Transition Strategy</h2>

<p>Moving from chaos to process is tricky. Too fast and you kill momentum. Too slow and you burn out.</p>

<p>Our approach:</p>

<p><strong>Month 1:</strong> Add ownership and visibility</p>
<ul>
  <li>Assign clear owners to projects</li>
  <li>Create the priority list</li>
  <li>Start weekly syncs</li>
</ul>

<p><strong>Month 2:</strong> Add documentation</p>
<ul>
  <li>Introduce ADRs</li>
  <li>Start requiring spec docs for big projects</li>
  <li>Begin retrospectives</li>
</ul>

<p><strong>Month 3:</strong> Refine and optimize</p>
<ul>
  <li>Review what’s working</li>
  <li>Cut what’s not</li>
  <li>Adjust communication cadence</li>
</ul>

<p>Go slow. Add one piece at a time. Get buy-in. Don’t force it.</p>

<h2 id="the-meta-process">The Meta-Process</h2>

<p>The most important process is the process for changing process.</p>

<p>Make it explicit:</p>

<ul>
  <li>Anyone can propose a process change</li>
  <li>Proposal = what, why, how we’ll measure success</li>
  <li>Try it for 4 weeks</li>
  <li>Keep it or drop it based on evidence</li>
</ul>

<p>This creates a culture of experimentation rather than cargo-culting.</p>

<h2 id="what-not-to-add-yet">What Not to Add (Yet)</h2>

<p>Resist adding these until you feel real pain:</p>

<ul>
  <li>Formal sprint planning (just work from the priority list)</li>
  <li>Story points (velocity is a vanity metric)</li>
  <li>Detailed time tracking (trust &gt; surveillance)</li>
  <li>Mandatory code review approvals (async review works)</li>
  <li>Architecture review boards (ADRs are enough)</li>
</ul>

<p>You might never need them. We haven’t.</p>

<h2 id="the-endgame">The Endgame</h2>

<p>Good process is invisible. Engineers should feel like:</p>

<ul>
  <li>They know what matters</li>
  <li>They can make progress</li>
  <li>They understand context</li>
  <li>They can get help when stuck</li>
</ul>

<p>That’s it. If process feels heavy, you’ve added too much. If things feel chaotic, you’ve added too little.</p>

<p>The goal isn’t process for its own sake. It’s clarity that enables velocity. Structure that enables autonomy. Process that gets out of the way.</p>

<hr />

<p><em>Leading engineering teams through this transition? I’d love to hear about your approach: <a href="https://t.me/yaroslav_nasonov">@yaroslav_nasonov</a></em></p>]]></content><author><name>Yaroslav Nasonov</name></author><summary type="html"><![CDATA[Early-stage startups run on chaos. Move fast, ship things, break stuff, fix it later. This works until it doesn’t. The transition from chaotic velocity to structured execution is where most engineering teams struggle.]]></summary></entry></feed>