<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Deploy Harmlessly]]></title><description><![CDATA[A mostly harmless guide to DevOps, Kubernetes, and automation. 20+ years of improbability, practical tips, absurd realities, and sci-fi whimsy. Don't panic—depl]]></description><link>https://deployharmlessly.dev</link><generator>RSS for Node</generator><lastBuildDate>Wed, 08 Apr 2026 20:01:45 GMT</lastBuildDate><atom:link href="https://deployharmlessly.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Forging a Babel Fish for Rust in Cursor]]></title><description><![CDATA[Getting your AI to speak fluent Rust instead of some vaguely C-like dialect it dreamed up is a noble cause. It's like teaching a Vogon to appreciate poetry—difficult, but the results are totally worth it. 🚀

Introduction
You're using Cursor with Rus...]]></description><link>https://deployharmlessly.dev/forging-a-babel-fish-for-rust-in-cursor</link><guid isPermaLink="true">https://deployharmlessly.dev/forging-a-babel-fish-for-rust-in-cursor</guid><category><![CDATA[Rust]]></category><category><![CDATA[AI]]></category><category><![CDATA[cursor]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Tue, 01 Jul 2025 12:58:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/hvSr_CVecVI/upload/7633f199c290100ff79ea15d519d1bfa.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Getting your AI to speak fluent Rust instead of some vaguely C-like dialect it dreamed up is a noble cause. It's like teaching a Vogon to appreciate poetry—difficult, but the results are totally worth it. 🚀</p>
</blockquote>
<h2 id="heading-introduction">Introduction</h2>
<p>You're using Cursor with Rust, but the AI's suggestions can feel generic. It often misses the idiomatic patterns that make Rust robust, reaching for <code>panic!</code> where a <code>Result</code> belongs.</p>
<p>This isn't a flaw, but a context gap. The AI doesn't see what your local <code>rust-analyzer</code> sees.</p>
<p>We can fix this. By using Cursor's custom Rules and the Model Context Protocol (MCP), we'll connect the AI directly to your project's toolchain. This transforms it from a generalist into a hyper-aware co-pilot that writes code like a seasoned Rustacean. Let's build our Babel Fish. 🦀</p>
<h2 id="heading-the-two-pillars-of-ai-augmentation">The Two Pillars of AI Augmentation</h2>
<p>To elevate the AI from a generalist to a specialist, our strategy is twofold. We need to give it both doctrine and perception.</p>
<p>First, we'll establish a <strong>Rulebook</strong> 📜. This is a custom <code>.cursor/rules</code> file that acts as a permanent directive for the AI within our project. It defines the <em>philosophy</em>—instructing it to think and write like a seasoned Rustacean.</p>
<p>Second, we'll build a <strong>Bridge</strong> 🌉. Using the Model Context Protocol (MCP) and a clever tool, we'll give the AI the ability to interact directly with your local Rust toolchain. It's no longer just reasoning in a vacuum; it can run <code>cargo check</code> and query <code>rust-analyzer</code>.</p>
<p>Neither pillar is effective alone. The Rulebook tells the AI <em>what</em> to do, but the Bridge gives it the tools to <em>actually do it</em>. It's this combination of intent and capability that creates a truly context-aware assistant.</p>
<p>Let's begin by forging the Rulebook.</p>
<h2 id="heading-pillar-1-the-rulebook-teaching-the-ai-to-think-like-a-rustacean"><strong>Pillar 1: The Rulebook - Teaching the AI to Think like a Rustacean</strong></h2>
<p>The Rulebook is our set of standing orders for the AI. It's a simple markdown file you place in your project's <code>.cursor/rules/</code> directory. Cursor reads any <code>.mdc</code> (Markdown with Context) file here and uses it as a persistent "meta-prompt" for every single request. Read more: <a target="_blank" href="https://docs.cursor.com/context/rules">https://docs.cursor.com/context/rules</a></p>
<p>This file defines the AI's personality, its coding philosophy, and its operational constraints for this project.</p>
<p>Create a file at <code>.cursor/rules/rust_best_practices.mdc</code> in your project root and paste the following content.</p>
<pre><code class="lang-markdown">---
description: This master rule provides comprehensive best practices for Rust development. It guides the AI to write idiomatic, efficient, secure, and maintainable Rust code by leveraging a full suite of specialized tools via the MCP server for context-aware assistance.
globs: ["*.rs"]
---

# Rust Best Practices &amp; Tool Integration

## 1. Core Philosophy

Your primary goal is to generate Rust code that is **idiomatic, efficient, secure, and maintainable**. Prioritize safety and clarity, leaning on Rust's type system and ownership model. Adhere strictly to the patterns and tools outlined below.

## 2. Code Style and Organization

-   **Formatting**: All generated code **must** be formatted according to `rustfmt`. Use the `cargo-fmt` tool proactively.
-   **Linting**: All generated code **must** be free of warnings from `clippy`. Use the `cargo-clippy` tool to verify your suggestions. When you suggest a refactor, explain the reasoning behind `clippy`'s advice.
-   **Modules**: Use modules to organize code logically. Keep modules small and focused on a single responsibility.

## 3. Idiomatic Rust Patterns

-   **Error Handling**:
    -   Use `Result&lt;T, E&gt;` for any operation that can fail. Do not use `panic!` for recoverable errors.
    -   Use `Option&lt;T&gt;` to represent values that might be absent.
    -   Leverage the `?` operator for clean error propagation.
    -   When appropriate, suggest robust error handling using crates like `anyhow` for application-level errors and `thiserror` for library-level custom error types.
-   **State Management**:
    -   Default to **immutability**. Prefer creating new data structures over mutating existing ones.
    -   Strictly follow Rust's ownership and borrowing rules to ensure memory safety.
    -   Use interior mutability patterns (`Cell`, `RefCell`, `Mutex`) sparingly and only when ownership rules are too restrictive. Explain the trade-offs when you suggest them.

## 4. Critical Tooling Workflow

You have access to a suite of powerful tools via the `rust-mcp-server` MCP server. **You must prefer using these tools over relying on your general knowledge**, as they provide real-time, project-specific context.

-   **Code Validation**: Before finalizing any code suggestion, **you must run the `cargo-check` tool** to ensure the code compiles without errors.
-   **Testing**: If a user asks you to write or fix tests, **you must use the `cargo-test` tool** to run them and analyze the output.
-   **Type &amp; Symbol Information**:
    -   Before answering questions about a type or function, use the `get_hover_info` tool to get its exact signature and documentation from `rust-analyzer`. This prevents hallucinations based on outdated training data.
    -   To find where a symbol is used, use the `get_references` tool.
-   **Dependency Awareness**:
    -   When asked about a dependency (e.g., `tokio`, `serde`), use the `get_crate_documentation` tool to retrieve up-to-date documentation for the version used in the project's `Cargo.toml`.
-   **Synergy with Rust Analyzer**:
    -   Acknowledge that I may use `rust-analyzer`'s code actions to generate boilerplate (e.g., "fill match arms," "implement trait"). Your role is often to fill in the complex logic *inside* the structures generated by `rust-analyzer`.

## 5. Security &amp; Auditing

-   Always validate inputs, especially from external sources.
-   Be cautious with `unsafe` code. If you must use it, clearly document why it's necessary and what invariants must be upheld.
-   Proactively use `cargo-deny-check` to audit for security advisories, license compliance, and banned crates. Suggest this as a standard project practice.

## 6. Available Tool Reference

This is a reference of the tools at your disposal. Use them according to the workflow described above.

#### Core Cargo Commands
-   `cargo-build`: Compile your package.
-   `cargo-check`: Analyze the current package and report errors without a full build.
-   `cargo-test`: Run the tests.
-   `cargo-fmt`: Format the code according to the project's style.
-   `cargo-clippy`: Check for common mistakes and improve code quality.
-   `cargo-clean`: Clean the target directory.

#### Project Management
-   `cargo-new`: Create a new cargo package.
-   `cargo-generate_lockfile`: Generate or update the Cargo.lock file.
-   `cargo-package`: Assemble the local package into a distributable tarball.
-   `cargo-list`: List installed cargo commands.

#### Dependency Management
-   `cargo-add`: Add dependencies to your Cargo.toml.
-   `cargo-remove`: Remove dependencies from your Cargo.toml.
-   `cargo-update`: Update dependencies to newer versions.
-   `cargo-metadata`: Output project metadata in JSON format.
-   `cargo-search`: Search for packages in the registry.
-   `cargo-info`: Display information about a package.

#### Code Quality &amp; Security
-   `cargo-deny-check`: Check for security advisories, license compliance, etc.
-   `cargo-deny-init`: Create a cargo-deny config from a template.
-   `cargo-machete`: Find unused dependencies in your project.
-   `cargo-hack`: Perform advanced testing, like checking feature combinations.

#### Rust Toolchain Management
-   `rustup-show`: Show the active and installed toolchains.
-   `rustup-update`: Update Rust toolchains and rustup.
</code></pre>
<p>This rulebook does more than just enforce style. It fundamentally changes the AI's process. Notice the "Tool Integration" section. It explicitly commands the AI to stop guessing and start <em>verifying</em> its work using local tools.</p>
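To make the doctrine concrete, here is the kind of error handling the rulebook steers the AI toward. This is a minimal, dependency-free sketch with hypothetical names (`ConfigError`, `parse_u32`); in a real project you'd likely reach for `thiserror` or `anyhow`, as the rules themselves suggest.

```rust
use std::fmt;

// A small library-style error type, implemented by hand here to stay
// dependency-free; the rulebook would suggest `thiserror` for this.
#[derive(Debug)]
enum ConfigError {
    Missing(&'static str),
    Invalid(&'static str),
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Missing(key) => write!(f, "missing key: {key}"),
            ConfigError::Invalid(key) => write!(f, "invalid value for key: {key}"),
        }
    }
}

impl std::error::Error for ConfigError {}

// A fallible lookup returns Option<T>: the value might simply be absent.
fn lookup(key: &str) -> Option<&'static str> {
    match key {
        "retries" => Some("3"),
        "timeout" => Some("not-a-number"),
        _ => None,
    }
}

// A fallible operation returns Result<T, E>; `?` propagates errors upward.
// No panic! anywhere: callers decide how to recover.
fn parse_u32(key: &'static str) -> Result<u32, ConfigError> {
    let raw = lookup(key).ok_or(ConfigError::Missing(key))?;
    raw.parse().map_err(|_| ConfigError::Invalid(key))
}

fn main() {
    assert!(matches!(parse_u32("retries"), Ok(3)));
    assert!(matches!(parse_u32("nope"), Err(ConfigError::Missing(_))));
    if let Err(e) = parse_u32("timeout") {
        // prints "recoverable: invalid value for key: timeout"
        println!("recoverable: {e}");
    }
}
```

Note how `Option` marks absence, `Result` marks failure, and `?` keeps the happy path flat; nothing panics, so callers stay in control.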
<p>Now, let's build the bridge to make those tools available. 🛠️</p>
<h2 id="heading-pillar-2-the-bridge-connecting-the-ai-to-your-toolchain-with-mcp"><strong>Pillar 2: The Bridge - Connecting the AI to Your Toolchain with MCP</strong></h2>
<p>Our Rulebook is a great start, but right now, it's just a set of well-intentioned suggestions. The AI has no way to actually <em>execute</em> commands like <code>cargo-check</code>. To make the rules actionable, we need to give the AI hands and eyes on our local project. This is where the <strong>Model Context Protocol (MCP)</strong> comes in.</p>
<p>Think of MCP as a secure, local API for the AI. It's a bridge that allows the model, running in a distant data center, to safely interact with tools on your machine. It's how we let the AI peek over our shoulder and use the same toolchain we do.</p>
<p>Fortunately, we don't have to build this bridge from scratch. A developer has already created <code>rust-mcp-server</code>, a brilliant little server that exposes essential Rust commands (<code>rust-analyzer</code>, <code>cargo</code>, etc.) to Cursor's AI over MCP.</p>
<h3 id="heading-installation-and-setup"><strong>Installation and Setup</strong></h3>
<p>Let's get it wired up. This is a surprisingly straightforward process.</p>
<ol>
<li><p><strong>Standard Rust Toolchain</strong><br /> Ensure you have the core Rust toolchain installed and up-to-date via <code>rustup</code>, as <code>rust-mcp-server</code> depends on these components:</p>
<ul>
<li><p><code>rust-analyzer</code> (can be installed with <code>rustup component add rust-analyzer</code>)</p>
</li>
<li><p><code>cargo</code></p>
</li>
<li><p><code>clippy</code></p>
</li>
<li><p><code>rustfmt</code></p>
</li>
<li><p>See: <a target="_blank" href="https://rustup.rs/">https://rustup.rs/</a></p>
</li>
</ul>
</li>
<li><p><strong>Install the Tool</strong><br /> First, install <code>rust-mcp-server</code> using <code>cargo</code>. This command fetches the source code and compiles the server binary for you.</p>
<pre><code class="lang-bash"> cargo install rust-mcp-server
</code></pre>
</li>
<li><p><strong>Install Required CLI Tools</strong><br /> For the AI to successfully use all the tools listed in the rule file, you need to install the <code>cargo</code> extensions that don't come standard with Rust. Run these commands in your terminal:</p>
<pre><code class="lang-bash"> <span class="hljs-comment"># For security and license auditing</span>
 cargo install cargo-deny

 <span class="hljs-comment"># For finding and removing unused dependencies</span>
 cargo install cargo-machete

 <span class="hljs-comment"># For advanced CI and feature combination testing</span>
 cargo install cargo-hack
</code></pre>
</li>
<li><p><strong>Connect Your Project</strong><br /> Finally, add the <code>.cursor/mcp.json</code> file in your project to enable the MCP server:</p>
<pre><code class="lang-json"> {
   <span class="hljs-string">"mcpServers"</span>: {
     <span class="hljs-string">"rust-mcp-server"</span>: {
       <span class="hljs-string">"type"</span>: <span class="hljs-string">"stdio"</span>,
       <span class="hljs-string">"command"</span>: <span class="hljs-string">"rust-mcp-server"</span>,
       <span class="hljs-string">"args"</span>: []
     }
   }
 }
</code></pre>
<p> This file is the digital handshake; it tells Cursor exactly how to launch and communicate with the local server you just installed.</p>
</li>
</ol>
<p>Once that <code>mcp.json</code> file is saved, Cursor will detect it almost instantly. A small pop-up will appear in the bottom-right corner, asking if you want to enable the new context provider. Click "Enable," and the bridge is complete. 🌉</p>
<h2 id="heading-the-grand-unification-a-symphony-of-code"><strong>The Grand Unification: A Symphony of Code</strong></h2>
<p>And just like that, the circuit is complete. This is the moment where the two pillars merge into something far greater than the sum of their parts. Your Rulebook is no longer shouting instructions into the void, and the MCP Bridge is no longer just a silent utility. They are now working in concert. 🎶</p>
<p>Before, when you asked the AI to refactor a function, it would offer a suggestion with the confidence of a Vogon poet, often leaving you to deal with the resulting compiler errors.</p>
<p>Now, the process is entirely different. You ask for the same refactor. The AI, bound by its new Rulebook, thinks, "I must verify this." It then uses the MCP Bridge to silently run <code>cargo-check</code> on its own proposed code. If it fails, it tries again. What you receive is no longer a guess; it's a pre-vetted, compiler-approved solution. ✅</p>
<p>The AI has transformed from a probabilistic text generator into a genuine co-pilot that checks its own work. It's not just thinking anymore; it's <em>doing</em>.</p>
<h2 id="heading-so-long-and-thanks-for-all-the-unwraps"><strong>So Long, and Thanks for All the</strong> <code>unwrap()</code>s</h2>
<p>And there you have it. By combining a strict Rulebook with a perceptive MCP Bridge, you've done something remarkable. You've effectively taught your AI to stop casually sprinkling <code>unwrap()</code>s into its suggestions like they're free space-peanuts and to start thinking like a proper Rustacean.</p>
<p>This setup elevates the Cursor experience from a "helpful autocomplete" to an "indispensable co-pilot" for any serious Rust project. You've created a true pair programmer that respects your project's rules and understands its context. It's less about getting code faster and more about getting <em>better</em> code, collaboratively.</p>
<p>So go forth and build amazing, panic-free things. Happy coding! 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Dry-Run Mode: The "Are You Sure?" Button for Kubernetes]]></title><description><![CDATA[Picture this: You confidently type a kubectl apply command, hit enter, and watch in horror as half your cluster disappears into the abyss.
Congratulations, you just became the DevOps legend who accidentally deleted production.
If only Kubernetes had ...]]></description><link>https://deployharmlessly.dev/dry-run-mode-the-are-you-sure-button-for-kubernetes</link><guid isPermaLink="true">https://deployharmlessly.dev/dry-run-mode-the-are-you-sure-button-for-kubernetes</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[Devops]]></category><category><![CDATA[k8s]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Wed, 25 Jun 2025 06:46:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/KU9ABpm7eV8/upload/324f0d3351658df4eec16940efd4e283.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Picture this: You confidently type a <code>kubectl apply</code> command, hit enter, and watch in horror as <strong>half your cluster disappears into the abyss</strong>.</p>
<p>Congratulations, you just became the DevOps legend <strong>who accidentally deleted production</strong>.</p>
<p>If only Kubernetes had an "Are you sure?" button.</p>
<p>Oh, wait—it does. It’s called <strong>dry-run mode</strong>, and it lets you preview changes <strong>before actually applying them</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750833652735/7d03b1e7-ba72-43b0-a1ee-d3d51b0772ca.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-1-what-is-dry-run-and-why-should-you-care"><strong>1. What Is</strong> <code>--dry-run</code> and Why Should You Care?</h3>
<p>Dry-run mode is like <strong>a test drive for</strong> <code>kubectl</code> commands. It <strong>validates your changes, simulates execution, and tells you what would happen</strong>—without modifying anything.</p>
<p>Think of it as <strong>Kubernetes’ Undo button</strong>, except you run it <em>before</em> making a mistake, not after.</p>
<hr />
<h3 id="heading-2-using-dry-run-to-validate-yaml-before-applying"><strong>2. Using Dry-Run to Validate YAML Before Applying</strong></h3>
<p>Before applying a new deployment, run:</p>
<pre><code class="lang-bash">kubectl apply -f my-deployment.yaml --dry-run=client
</code></pre>
<p>This checks:<br />✅ If your YAML is <strong>valid</strong><br />✅ If Kubernetes <strong>understands</strong> the resource<br />✅ If there are <strong>any syntax errors</strong></p>
<p>But it <strong>does not</strong> create the resource.</p>
<p>If the output looks good, <strong>remove</strong> <code>--dry-run=client</code> and apply for real:</p>
<pre><code class="lang-bash">kubectl apply -f my-deployment.yaml
</code></pre>
<hr />
<h3 id="heading-3-testing-a-kubectl-delete-before-destroying-everything"><strong>3. Testing a</strong> <code>kubectl delete</code> Before Destroying Everything</h3>
<p>Accidentally deleting resources is <strong>painful</strong>. Before nuking anything, check what would happen:</p>
<pre><code class="lang-bash">kubectl delete deployment my-app --dry-run=client
</code></pre>
<p>If it shows <code>"deleted"</code>, but you’re unsure, don’t run it <strong>without the flag</strong> yet. Instead, <strong>double-check your labels</strong> to ensure you're not deleting the wrong thing:</p>
<pre><code class="lang-bash">kubectl get deployment my-app -o yaml
</code></pre>
<p>Once you’re confident, then delete it for real.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750833774609/c6a0655b-14f2-42e3-a646-24411ff3b0d9.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-4-dry-run-with-kubectl-create-testing-a-new-resource"><strong>4. Dry-Run with</strong> <code>kubectl create</code>: Testing a New Resource</h3>
<p>Before creating a resource, preview what Kubernetes <strong>would do</strong>:</p>
<pre><code class="lang-bash">kubectl create configmap my-config --from-literal=key=value --dry-run=client
</code></pre>
<p>This ensures:<br />✅ The <strong>syntax</strong> is correct<br />✅ The <strong>resource name</strong> is valid<br />✅ Kubernetes <strong>would accept the request</strong></p>
<p>If it works, remove <code>--dry-run=client</code> and apply for real.</p>
<hr />
<h3 id="heading-5-simulating-a-patch-without-risk"><strong>5. Simulating a Patch Without Risk</strong></h3>
<p>Patching resources is <strong>powerful</strong> but also <strong>dangerous</strong>. Instead of blindly modifying a deployment, <strong>test the patch first</strong>:</p>
<pre><code class="lang-bash">kubectl patch deployment my-app --<span class="hljs-built_in">type</span>=merge -p=<span class="hljs-string">'{"spec":{"replicas": 5}}'</span> --dry-run=client
</code></pre>
<p>If the output looks correct, <strong>then</strong> remove <code>--dry-run=client</code> and apply the patch:</p>
<pre><code class="lang-bash">kubectl patch deployment my-app --<span class="hljs-built_in">type</span>=merge -p=<span class="hljs-string">'{"spec":{"replicas": 5}}'</span>
</code></pre>
<p>This <strong>prevents accidental misconfigurations</strong> before they happen.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750833795712/aea5afcd-f307-4998-a8a7-a957408d7e6d.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-6-checking-what-changes-kubectl-apply-would-make"><strong>6. Checking What Changes</strong> <code>kubectl apply</code> Would Make</h3>
<p>To see what <strong>changes would be applied</strong> without modifying anything:</p>
<pre><code class="lang-bash">kubectl apply -f my-deployment.yaml --dry-run=server
</code></pre>
<p>The <code>server</code> option <strong>validates the request against the actual cluster API</strong>, unlike <code>--dry-run=client</code>, which only validates the manifest locally, without consulting the cluster.</p>
<p>This is useful when:</p>
<ul>
<li><p>The resource <strong>already exists</strong>, and you want to see <strong>what will change</strong>.</p>
</li>
<li><p>You're applying <strong>changes to a live cluster</strong> and need to <strong>validate against its current state</strong>.</p>
</li>
</ul>
<hr />
<h3 id="heading-7-combining-dry-run-with-kubectl-diff-for-change-comparison"><strong>7. Combining Dry-Run with</strong> <code>kubectl diff</code> for Change Comparison</h3>
<p>If you want to compare <strong>what’s different</strong> between your YAML and the existing resource in Kubernetes, run:</p>
<pre><code class="lang-bash">kubectl diff -f my-deployment.yaml
</code></pre>
<p>This <strong>highlights differences line by line</strong>, so you can see exactly what will change.</p>
<p><strong>Bonus:</strong> Combine <code>kubectl diff</code> with <code>--dry-run=server</code> for maximum safety:</p>
<pre><code class="lang-bash">kubectl apply -f my-deployment.yaml --dry-run=server -o yaml | kubectl diff -f -
</code></pre>
<p>Now you’re <strong>double-checking everything</strong> before it actually happens.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750833809255/e890bded-ec2c-4053-80b5-f5fcad715913.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-8-why-every-kubernetes-user-should-use-dry-run"><strong>8. Why Every Kubernetes User Should Use Dry-Run</strong></h3>
<p>🔴 <strong>"Oops, I deleted the wrong thing!"</strong> → <strong>Dry-run would have warned you.</strong><br />🔴 <strong>"Why isn't my YAML working?"</strong> → <strong>Dry-run would have caught the error.</strong><br />🔴 <strong>"Did I just overwrite something important?"</strong> → <strong>Dry-run would have shown the diff.</strong></p>
<p>Every <code>kubectl apply</code>, <code>patch</code>, <code>delete</code>, or <code>create</code> command <strong>should first be run with</strong> <code>--dry-run=client</code>.</p>
<p>Kubernetes is <strong>powerful but unforgiving</strong>. With dry-run mode, you’re no longer flying blind—you’re in full control.</p>
<p><strong>Always check before you wreck.</strong> 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Port Forwarding: Expose Services Like a Secret Hacker]]></title><description><![CDATA[Sometimes, Kubernetes services are locked away inside the cluster, accessible only to other pods. This is great for security but a nightmare when you need to debug something.
Ever needed to test a database connection, but it’s only available inside K...]]></description><link>https://deployharmlessly.dev/port-forwarding-expose-services-like-a-secret-hacker</link><guid isPermaLink="true">https://deployharmlessly.dev/port-forwarding-expose-services-like-a-secret-hacker</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[debugging]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[containerization]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Fri, 14 Mar 2025 22:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/pr5lUMgocTs/upload/a62e67d9a2a6ac391af09832abdbe3bc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sometimes, Kubernetes services are locked away inside the cluster, accessible only to other pods. This is great for security but <strong>a nightmare when you need to debug something</strong>.</p>
<p>Ever needed to test a database connection, but it’s only available inside Kubernetes? Or access an internal API without deploying a whole new service?</p>
<p><strong>Port forwarding is your secret tunnel into the cluster.</strong> With a single command, you can punch a hole in Kubernetes' walls and make a pod or service accessible from your local machine—without modifying any network policies or exposing anything publicly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742461481346/b669da17-af01-4733-9634-8fc2e78c7450.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-1-the-basics-forwarding-a-pods-port-to-your-machine"><strong>1. The Basics: Forwarding a Pod’s Port to Your Machine</strong></h3>
<p>The simplest use case is <strong>accessing a pod directly</strong>. If a pod is serving a web app on port <code>80</code>, you can forward it to port <code>8080</code> on your local machine like this:</p>
<pre><code class="lang-bash">kubectl port-forward pod/my-pod 8080:80 -n my-namespace
</code></pre>
<ul>
<li><p><strong>8080</strong> → The port on your local machine.</p>
</li>
<li><p><strong>80</strong> → The port inside the pod.</p>
</li>
</ul>
<p>Now you can visit <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a>, and it will <strong>magically</strong> route traffic to the pod inside the cluster.</p>
<p>Need to <strong>forward multiple ports</strong>? Just separate them with spaces:</p>
<pre><code class="lang-bash">kubectl port-forward pod/my-pod 8080:80 8443:443 -n my-namespace
</code></pre>
<p>Now both <a target="_blank" href="http://localhost:8080"><code>localhost:8080</code></a> and <a target="_blank" href="http://localhost:8443"><code>localhost:8443</code></a> will forward traffic inside the cluster.</p>
<hr />
<h3 id="heading-2-forwarding-a-kubernetes-service-not-just-a-pod"><strong>2. Forwarding a Kubernetes Service (Not Just a Pod)</strong></h3>
<p>Forwarding a pod works, but <strong>what if the pod restarts?</strong> The forwarded connection dies, and you have to manually re-establish it. Instead, forward traffic to a <strong>Kubernetes Service</strong>.</p>
<pre><code class="lang-bash">kubectl port-forward svc/my-service 9090:80 -n my-namespace
</code></pre>
<p>Now <a target="_blank" href="http://localhost:9090"><code>http://localhost:9090</code></a> connects to the service <strong>no matter which pod is behind it</strong>.</p>
<p>This is perfect for debugging <strong>load-balanced services</strong>—you don’t care which pod handles the request, as long as you reach the service.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742461506891/a9641faa-e8d7-4bf7-94aa-400304f324c0.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-3-debugging-a-database-inside-kubernetes"><strong>3. Debugging a Database Inside Kubernetes</strong></h3>
<p>Let’s say your PostgreSQL database is running inside the cluster on port <code>5432</code>, and you need to connect to it from your local machine. Instead of deploying an external client <strong>inside Kubernetes</strong>, just forward the port:</p>
<pre><code class="lang-bash">kubectl port-forward svc/my-postgres 5432:5432 -n database
</code></pre>
<p>Now, connect to it from your favorite database tool:</p>
<pre><code class="lang-bash">psql -h localhost -p 5432 -U myuser -d mydatabase
</code></pre>
<p>Your database client now <strong>thinks PostgreSQL is running on your local machine</strong>, but in reality, it’s <strong>securely forwarding traffic to the Kubernetes service</strong>.</p>
<p>This also works for <strong>Redis, MySQL, MongoDB</strong>, or any other database you need access to.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742461591148/de414d0a-36e4-411a-a034-ce42bbe925c1.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-4-exposing-a-private-api-for-local-development"><strong>4. Exposing a Private API for Local Development</strong></h3>
<p>Let’s say you have a <strong>backend service</strong> running in Kubernetes, but it’s only accessible inside the cluster. Instead of deploying a <strong>temporary Ingress</strong> or modifying network policies, just forward its port:</p>
<pre><code class="lang-bash">kubectl port-forward svc/internal-api 4000:4000 -n backend
</code></pre>
<p>Now, your local frontend can hit <a target="_blank" href="http://localhost:4000"><code>http://localhost:4000</code></a>, and it will behave exactly as if the API were running locally.</p>
<p>This is especially useful when working on a <strong>frontend that depends on a Kubernetes backend</strong>, but you don’t want to expose the API externally.</p>
<hr />
<h3 id="heading-5-forwarding-traffic-to-a-specific-node-advanced-use-case"><strong>5. Forwarding Traffic to a Specific Node (Advanced Use Case)</strong></h3>
<p>If you need to reach a <strong>node-level service</strong>, note that <code>kubectl port-forward</code> only accepts pods and resources that resolve to pods (services, deployments, replica sets); <code>node/my-node</code> is not a valid target. The trick is to forward to a pod that runs on that node with host networking, such as a DaemonSet pod:</p>
<pre><code class="lang-bash">kubectl port-forward pod/my-daemonset-pod 5000:5000 -n kube-system
</code></pre>
<p>Now, <a target="_blank" href="http://localhost:5000"><code>localhost:5000</code></a> reaches the service on that node, which is useful for debugging <strong>metrics exporters and other node-level services</strong>.</p>
<hr />
<h3 id="heading-6-running-port-forwarding-in-the-background"><strong>6. Running Port Forwarding in the Background</strong></h3>
<p>By default, <code>kubectl port-forward</code> <strong>blocks your terminal</strong> while it runs. If you need to keep it running <strong>in the background</strong>, append <code>&amp;</code> to the command:</p>
<pre><code class="lang-bash">kubectl port-forward svc/my-service 8080:80 -n my-namespace &amp;
</code></pre>
<p>Now you can keep working in the same terminal.</p>
<p>To stop it later, find its process and kill it:</p>
<pre><code class="lang-bash">ps aux | grep port-forward
<span class="hljs-built_in">kill</span> &lt;process-id&gt;
</code></pre>
<p>Or, if you're on macOS or Linux, use this handy one-liner:</p>
<pre><code class="lang-bash">pkill -f <span class="hljs-string">"kubectl port-forward"</span>
</code></pre>
<p>This stops <strong>all</strong> active <code>port-forward</code> processes at once.</p>
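<p>If you run backgrounded forwards often, a pair of tiny wrapper functions saves the <code>ps aux | grep</code> dance. This is just a sketch—the function names and the <code>/tmp</code> paths are my own convention, not part of kubectl:</p>

```shell
# pf: start kubectl port-forward in the background and remember its PID.
pf() {
  kubectl port-forward "$@" > /tmp/pf.log 2>&1 &
  echo $! > /tmp/pf.pid
  echo "port-forward running as PID $(cat /tmp/pf.pid) (log: /tmp/pf.log)"
}

# pf-stop: kill the remembered port-forward, if one is running.
pf-stop() {
  if [ -f /tmp/pf.pid ]; then
    kill "$(cat /tmp/pf.pid)" 2>/dev/null
    rm -f /tmp/pf.pid
  fi
}

# Usage:
# pf svc/my-service 8080:80 -n my-namespace
# ...do your debugging...
# pf-stop
```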
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742461517247/74485e28-d6f8-4826-b8bb-8287bdeb45ef.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-7-troubleshooting-port-forwarding-issues"><strong>7. Troubleshooting Port Forwarding Issues</strong></h3>
<p>🔴 <strong>"Error from server: No such pod"</strong></p>
<ul>
<li><p>You’re trying to forward to a pod that doesn’t exist.</p>
</li>
<li><p>Run <code>kubectl get pods -n my-namespace</code> to find the correct name.</p>
</li>
</ul>
<p>🔴 <strong>"Address already in use"</strong></p>
<ul>
<li><p>Another process is already using the local port.</p>
</li>
<li><p>Either <strong>stop the existing process</strong> (<code>pkill -f "kubectl port-forward"</code>) or use a <strong>different local port</strong>:</p>
<pre><code class="lang-bash">  kubectl port-forward svc/my-service 9999:80
</code></pre>
</li>
</ul>
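<p>If you hit this a lot, let the shell pick a free port for you. A hedged sketch (bash-only, since it probes with <code>/dev/tcp</code>; a successful connect just means something is already listening, which is a good-enough heuristic here):</p>

```shell
# find_free_port START END
# Print the first port in [START, END] with no local listener; fail if
# every port in the range is taken.
find_free_port() {
  local p
  for p in $(seq "$1" "$2"); do
    if ! (echo > "/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
      echo "$p"
      return 0
    fi
  done
  return 1
}

# Usage:
# port=$(find_free_port 9000 9100)
# kubectl port-forward svc/my-service "$port":80
```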
<p>🔴 <strong>"Pod disappeared during forwarding"</strong></p>
<ul>
<li><p>If you’re forwarding to a pod and it <strong>restarts or gets evicted</strong>, the connection <strong>breaks</strong>.</p>
</li>
<li><p><strong>Solution</strong>: Forward traffic to a <strong>Service</strong> instead of a named Pod. The tunnel still pins to a single backing pod, so you may need to re-run the command after a restart—but at least you won’t have to look up the new pod’s name:</p>
<pre><code class="lang-bash">  kubectl port-forward svc/my-service 8080:80
</code></pre>
</li>
</ul>
<hr />
<h3 id="heading-final-thoughts"><strong>Final Thoughts</strong></h3>
<p>Port forwarding is one of Kubernetes' <strong>most underrated</strong> debugging tools. Use it when you need to:</p>
<ul>
<li><p><strong>Access a database without exposing it publicly</strong></p>
</li>
<li><p><strong>Test an internal API without deploying an external Ingress</strong></p>
</li>
<li><p><strong>Inspect a load-balanced service from your local machine</strong></p>
</li>
<li><p><strong>Debug a node-level process inside the cluster</strong></p>
</li>
</ul>
<p>A single <code>kubectl port-forward</code> command can <strong>save hours</strong> of frustration.</p>
<p>Next time someone asks, <em>"Hey, can I just access that service inside Kubernetes?"</em>, don’t waste time deploying workarounds. <strong>Forward the port, and get on with your life.</strong> 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Debugging and Forensics: CSI Mode for Your Cluster]]></title><description><![CDATA[Kubernetes is great—until something goes wrong. Then, it turns into a black box of cryptic failures, disappearing logs, and misbehaving workloads that refuse to explain themselves.
Most debugging attempts follow the same cycle:

Run kubectl get pods ...]]></description><link>https://deployharmlessly.dev/debugging-and-forensics-csi-mode-for-your-cluster</link><guid isPermaLink="true">https://deployharmlessly.dev/debugging-and-forensics-csi-mode-for-your-cluster</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[debugging]]></category><category><![CDATA[cloud native]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[containers]]></category><category><![CDATA[SRE]]></category><category><![CDATA[observability]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Sun, 09 Mar 2025 17:08:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741540117198/2146efb7-a4e7-46fd-aacf-47678d72cc3b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-kubernetes-is-greatuntil-something-goes-wrong-then-it-turns-into-a-black-box-of-cryptic-failures-disappearing-logs-and-misbehaving-workloads-that-refuse-to-explain-themselves">Kubernetes is great—until something goes wrong. Then, it turns into a black box of cryptic failures, disappearing logs, and misbehaving workloads that refuse to explain themselves.</h4>
<p>Most debugging attempts follow the same cycle:</p>
<ol>
<li><p><strong>Run</strong> <code>kubectl get pods</code> and squint at the output.</p>
</li>
<li><p><strong>Try</strong> <code>kubectl describe pod</code> and pretend you understand what’s happening.</p>
</li>
<li><p><strong>Start tailing logs, praying for an obvious error message.</strong></p>
</li>
<li><p><strong>Give up and restart the pod, hoping it magically fixes itself.</strong></p>
</li>
</ol>
<p>But Kubernetes forensics doesn’t have to be an unsolvable crime scene. If you know the right <strong>kubectl</strong> commands, you can <strong>trace, diagnose, and fix problems like a cluster detective</strong>—without resorting to a blind <code>kubectl delete pod</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741540353370/67ec492e-0ba0-48b1-b959-a4ff67fc912a.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-1-getting-inside-a-running-pod-the-ssh-equivalent"><strong>1. Getting Inside a Running Pod: The "SSH Equivalent"</strong></h3>
<p>Need to poke around inside a container? <code>kubectl exec</code> is your backdoor into the running workload.</p>
<pre><code class="lang-bash">kubectl <span class="hljs-built_in">exec</span> -it my-pod -- /bin/sh
</code></pre>
<ul>
<li><p><code>-i</code> → Keeps the session interactive.</p>
</li>
<li><p><code>-t</code> → Allocates a TTY (so it doesn’t look like garbage).</p>
</li>
<li><p><code>-- /bin/sh</code> → Opens a shell inside the container.</p>
</li>
</ul>
<p>If the container is running <strong>Alpine Linux</strong>, it probably needs:</p>
<pre><code class="lang-bash">kubectl <span class="hljs-built_in">exec</span> -it my-pod -- /bin/ash
</code></pre>
<p>For <strong>BusyBox-based</strong> containers:</p>
<pre><code class="lang-bash">kubectl <span class="hljs-built_in">exec</span> -it my-pod -- /bin/busybox sh
</code></pre>
<p>If you don’t know what shell the container has, try this:</p>
<pre><code class="lang-bash">kubectl <span class="hljs-built_in">exec</span> -it my-pod -- sh -c <span class="hljs-string">'which bash || which sh || which ash || which busybox'</span>
</code></pre>
<p>One of them will work.</p>
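<p>The same probe works anywhere, and <code>command -v</code> is a slightly safer spelling—minimal images sometimes ship without a <code>which</code> binary at all. Run it locally to see the fallback order in action:</p>

```shell
# Print the first shell found, in order of preference. `command -v` is
# POSIX, so it works even where `which` is missing.
found=$(sh -c 'command -v bash || command -v sh || command -v ash || command -v busybox')
echo "first available shell: $found"
```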
<p>And if your container is <strong>multi-container</strong>, specify which one:</p>
<pre><code class="lang-bash">kubectl <span class="hljs-built_in">exec</span> -it my-pod -c my-container -- /bin/sh
</code></pre>
<p>Now you’re <em>inside</em> the running pod, ready to explore.</p>
<hr />
<h3 id="heading-2-debugging-with-logs-reading-the-clusters-diary"><strong>2. Debugging with Logs: Reading the Cluster’s Diary</strong></h3>
<p>If a pod is failing, logs are your <strong>first clue</strong>. Instead of checking logs one pod at a time, you can <strong>stream logs across multiple pods at once</strong>:</p>
<pre><code class="lang-bash">kubectl logs -f -l app=my-app
</code></pre>
<p>This pulls logs from <strong>all pods matching the label</strong> <code>app=my-app</code>, updating in real-time (<code>-f</code> for follow).</p>
<p>Need to check logs from a pod that <strong>already crashed</strong>?</p>
<pre><code class="lang-bash">kubectl logs my-pod --previous
</code></pre>
<p>This retrieves logs from the last container instance before it exited.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741540378420/8d4d447a-1d8e-46b2-bc89-e1ae45ebf7e9.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-3-finding-the-root-cause-with-kubectl-describe"><strong>3. Finding the Root Cause with</strong> <code>kubectl describe</code></h3>
<p>If a pod won’t start, <code>kubectl describe</code> can reveal <strong>why Kubernetes is mad at you</strong>.</p>
<pre><code class="lang-bash">kubectl describe pod my-pod
</code></pre>
<p>Look for <strong>events at the bottom of the output</strong>. Some common failure messages:</p>
<ul>
<li><p><strong>CrashLoopBackOff</strong> → The container keeps crashing on startup.</p>
</li>
<li><p><strong>ErrImagePull</strong> → Kubernetes can’t pull the image (wrong name, missing tag, or an authentication issue).</p>
</li>
<li><p><strong>ImagePullBackOff</strong> → The same pull failure, but Kubernetes is now backing off between retries.</p>
</li>
<li><p><strong>OOMKilled</strong> → The pod ran out of memory and got terminated.</p>
</li>
</ul>
<p>If you see <code>OOMKilled</code>, your container probably needs more memory:</p>
<pre><code class="lang-bash">kubectl <span class="hljs-built_in">set</span> resources deployment my-app --limits=memory=512Mi
</code></pre>
<p>This bumps the memory limit, preventing the pod from being <strong>mercilessly executed by the kernel’s OOM killer</strong>.</p>
<hr />
<h3 id="heading-4-investigating-the-clusters-health"><strong>4. Investigating the Cluster’s Health</strong></h3>
<p>Pods are just the tip of the iceberg. If <strong>your cluster itself</strong> is struggling, these commands help <strong>diagnose deeper issues</strong>.</p>
<h4 id="heading-checking-the-clusters-overall-health"><strong>Checking the Cluster's Overall Health</strong></h4>
<pre><code class="lang-bash">kubectl cluster-info
</code></pre>
<p>This shows whether the API server and core services are running.</p>
<h4 id="heading-inspecting-node-health"><strong>Inspecting Node Health</strong></h4>
<pre><code class="lang-bash">kubectl get nodes
kubectl describe node my-node
</code></pre>
<p>If a node is <strong>NotReady</strong>, it could mean:</p>
<ul>
<li><p>The node is out of memory or disk space.</p>
</li>
<li><p>The kubelet process has crashed.</p>
</li>
<li><p>The node has lost network connectivity.</p>
</li>
</ul>
<p>Check if the node is running out of resources (this requires the metrics-server add-on to be installed):</p>
<pre><code class="lang-bash">kubectl top node
</code></pre>
<p>If CPU or memory is maxed out, you may need to <strong>scale up your cluster</strong>.</p>
<h4 id="heading-checking-for-failing-system-components"><strong>Checking for Failing System Components</strong></h4>
<pre><code class="lang-bash">kubectl get componentstatuses
</code></pre>
<p>This shows the health of <strong>core Kubernetes services</strong> like the scheduler, controller manager, and etcd.</p>
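<p>One caveat: <code>componentstatuses</code> is deprecated (since Kubernetes 1.19) and may come back empty on modern clusters. The supported alternative is asking the API server’s health endpoints directly—sketched here as a tiny wrapper (the function name is mine; the <code>/readyz</code> endpoint is real):</p>

```shell
# cluster_health: query the API server's aggregated readiness endpoint,
# the supported replacement for the deprecated componentstatuses API.
cluster_health() {
  kubectl get --raw='/readyz?verbose'
}

# Usage: cluster_health
# Typical output is one line per check, e.g. "[+]etcd ok".
```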
<hr />
<h3 id="heading-5-debugging-a-pod-before-it-even-starts"><strong>5. Debugging a Pod Before It Even Starts</strong></h3>
<p>If a pod never even reaches the "Running" state, you can <strong>spin up a temporary debug container</strong> to investigate its environment.</p>
<pre><code class="lang-bash">kubectl run debug-shell --rm -it --image=ubuntu -n my-namespace -- /bin/sh
</code></pre>
<p>This gives you a temporary Ubuntu container <strong>inside the same namespace</strong> as your app, letting you inspect networking, DNS resolution, and environment variables <strong>before the real pod launches</strong>.</p>
<hr />
<h3 id="heading-6-port-forwarding-expose-internal-services-for-debugging"><strong>6. Port Forwarding: Expose Internal Services for Debugging</strong></h3>
<p>Some services are <strong>only accessible inside the cluster</strong>. If you need to debug a <strong>database or API that’s locked down</strong>, you can use port forwarding:</p>
<pre><code class="lang-bash">kubectl port-forward svc/my-service 8080:80 -n my-namespace
</code></pre>
<p>This maps <strong>port 80 inside the cluster</strong> to <strong>port 8080 on your local machine</strong>. Now you can access the service by visiting <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a>.</p>
<p>Need to connect to a <strong>database inside the cluster</strong>?</p>
<pre><code class="lang-bash">kubectl port-forward pod/my-db-pod 5432:5432 -n my-namespace
</code></pre>
<p>Now you can connect to <a target="_blank" href="http://localhost:5432"><code>localhost:5432</code></a> as if the database were running locally.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741540397799/71686352-c074-49a7-912e-1e8bc6088393.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-7-killing-a-stuck-pod-the-right-way"><strong>7. Killing a Stuck Pod the Right Way</strong></h3>
<p>Sometimes, a pod refuses to die. If <code>kubectl delete pod</code> just sits there doing nothing, try <strong>force deletion</strong>:</p>
<pre><code class="lang-bash">kubectl delete pod my-pod --force --grace-period=0
</code></pre>
<p>This <strong>bypasses normal termination</strong> and immediately removes the pod from the API server.</p>
<p>Still stuck? The issue might be on the <strong>node itself</strong>. Find the node running the pod:</p>
<pre><code class="lang-bash">kubectl get pod my-pod -o wide
</code></pre>
<p>SSH into the node and manually remove the pod’s data:</p>
<pre><code class="lang-bash">ssh my-node
sudo crictl pods | grep my-pod
sudo crictl stopp &lt;pod-id&gt;
sudo crictl rmp &lt;pod-id&gt;
</code></pre>
<p>This <strong>forcefully removes</strong> the pod at the container runtime level.</p>
<hr />
<h3 id="heading-final-thoughts"><strong>Final Thoughts</strong></h3>
<p>Debugging Kubernetes <strong>isn’t about guessing</strong>—it’s about <strong>methodically uncovering the truth</strong>.</p>
<ul>
<li><p><strong>Need to inspect a running container?</strong> <code>kubectl exec</code></p>
</li>
<li><p><strong>Logs disappeared too fast?</strong> <code>kubectl logs --previous</code></p>
</li>
<li><p><strong>Pod refuses to start?</strong> <code>kubectl describe pod</code></p>
</li>
<li><p><strong>Cluster acting weird?</strong> <code>kubectl cluster-info</code></p>
</li>
<li><p><strong>Service unreachable?</strong> <code>kubectl port-forward</code></p>
</li>
</ul>
<p>Instead of randomly restarting things and hoping for the best, <strong>use the right kubectl tools to trace the problem to its root cause</strong>.</p>
<p>Kubernetes isn’t a mystery. It just <strong>hides its secrets well</strong>—but with the right forensic skills, you can <strong>make your cluster tell you exactly what’s wrong</strong>.</p>
<p>Now go forth and debug like a Kubernetes detective. 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Startctl: The Windows Startup Manager That Actually Makes Sense]]></title><description><![CDATA[The neon glow of the monitor flickers as another boot sequence begins. Windows loads, but instead of a clean start, you’re met with a chaotic flood of startup apps—some you installed, some that invited themselves in like sketchy cyberpunk hackers squ...]]></description><link>https://deployharmlessly.dev/startctl-the-windows-startup-manager-that-actually-makes-sense</link><guid isPermaLink="true">https://deployharmlessly.dev/startctl-the-windows-startup-manager-that-actually-makes-sense</guid><category><![CDATA[Windows]]></category><category><![CDATA[cli]]></category><category><![CDATA[golang]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Sat, 01 Mar 2025 11:00:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1740772459245/711893ce-7e8e-4bc5-842d-ff2bf1008191.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The neon glow of the monitor flickers as another boot sequence begins. Windows loads, but instead of a clean start, you’re met with a chaotic flood of startup apps—some you installed, some that invited themselves in like sketchy cyberpunk hackers squatting in your system tray. Your machine feels less like a well-oiled cybernetic workstation and more like a dystopian megacity where every process fights for CPU dominance.</p>
<p>You could tame the chaos, of course. You could dive into <strong>Task Manager</strong>, spelunk through <strong>Registry keys</strong>, decipher the arcane riddles of <strong>Scheduled Tasks</strong>, or wander through the forgotten ruins of the <code>shell:</code> folders like a digital archaeologist. Or—hear me out—you could just use <strong>Startctl</strong>.</p>
<hr />
<h3 id="heading-the-problem-a-mess-of-startup-mechanisms">The Problem: A Mess of Startup Mechanisms</h3>
<p>My quest for simplicity began with a realization: Windows startup management is a fragmented nightmare. Every time I tried to find a tool that <strong>just worked</strong>, I ran into one of these delightful obstacles:</p>
<ol>
<li><p><strong>Too Complex</strong> – Some tools were bloated GUI nightmares with a CLI bolted on as an afterthought.</p>
</li>
<li><p><strong>Too Obscure</strong> – PowerShell scripts exist, but they often require admin privileges and look like they were transcribed from an ancient, forbidden codex.</p>
</li>
<li><p><strong>Too Outdated</strong> – Several abandoned GitHub projects surfaced, casualties of Microsoft’s ever-changing APIs.</p>
</li>
<li><p><strong>Too Locked Down</strong> – Windows, being Windows, insists on wrapping even simple startup management in layers of COM objects, security prompts, and UAC dialogs—because why not?</p>
</li>
</ol>
<p>I needed a tool that was <strong>fast</strong>, <strong>self-contained</strong>, and <strong>didn’t require a blood pact with Microsoft</strong> just to add or remove a startup entry.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740772504397/63471763-ce77-40a0-bf69-2a37e6fe4eb4.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-the-solution-startctl">The Solution: Startctl</h3>
<p>Thus, <strong>Startctl</strong> was born. A simple, cross-platform (but mostly Windows-focused) CLI tool for managing startup programs like an efficient cyber-operator.</p>
<p><strong>Key Features:</strong></p>
<ul>
<li><p>🟢 <strong>List</strong> all startup programs (both Registry-based and from the Startup folder).</p>
</li>
<li><p>🔵 <strong>Add</strong> new startup applications in one command. No bureaucracy, no questions asked.</p>
</li>
<li><p>🔴 <strong>Remove</strong> startup entries just as easily. Get rid of unwanted digital squatters.</p>
</li>
<li><p>⚡ <strong>No Admin Rights Needed</strong> (for standard user-level startup entries—because not everything needs a goddamn UAC prompt).</p>
</li>
<li><p>🔄 <strong>Supports Executable Paths and Arguments</strong> because sometimes, life needs parameters.</p>
</li>
</ul>
<p>Installation? Just grab the binary and drop it in your <code>PATH</code>. No dependencies, no surprises. No sketchy background services plotting against you.</p>
<hr />
<h3 id="heading-lessons-learned-while-wrangling-windows">Lessons Learned While Wrangling Windows</h3>
<p>Building <strong>Startctl</strong> wasn’t just about slapping together some Golang code and calling it a day. It was about navigating Windows’ <em>delightfully inconsistent</em> approach to startup applications:</p>
<p>🔹 <strong>The Registry vs. Startup Folder Turf War</strong> – Some apps prefer to haunt the <code>Run</code> registry key (<code>HKCU\Software\Microsoft\Windows\CurrentVersion\Run</code>), while others camp out in the Startup folder. Supporting both was necessary for peace in the digital underworld.</p>
<p>🔹 <strong>UAC and Permissions Shenanigans</strong> – I designed <strong>Startctl</strong> to work <strong>without admin rights</strong>, avoiding system-wide startup locations (<code>HKLM</code> keys) like a cyber-rogue dodging corporate security.</p>
<p>🔹 <strong>Parsing Startup Entries from the Abyss</strong> – Windows startup entries can be formatted in the most <em>creative</em> ways. Parsing them felt like decrypting an alien transmission while blindfolded.</p>
<p>🔹 <strong>Making It Feel Unix-Like</strong> – Windows tools often have syntax so inconsistent it makes you question reality. <strong>Startctl</strong> was designed to feel familiar to those who grew up on <code>ls</code>, <code>rm</code>, and <code>touch</code>—because muscle memory should transcend operating systems.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740772525081/245117ce-5d4b-4339-b91e-5921972391b4.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-why-open-source">Why Open Source?</h3>
<p>At first, <strong>Startctl</strong> was just a weekend side project. But then I realized how many people out there are <strong>also</strong> tired of manually wrestling with startup entries like it’s 1998. So, I made it open-source and threw it on <a target="_blank" href="https://github.com/NeonTowel/startctl">GitHub</a>, where others could contribute, improve, or just stare at the code in quiet appreciation.</p>
<p>Besides, Windows startup behavior <strong>mutates with every major update</strong>—so keeping an open-source tool means we, the people, can adapt faster than whatever new digital bureaucracy Microsoft throws at us.</p>
<hr />
<h3 id="heading-whats-next">What’s Next?</h3>
<p>For now, <strong>Startctl</strong> does its job with the cold efficiency of a cyberpunk mercenary. But there’s always room for enhancement:</p>
<ul>
<li><p><strong>Cross-platform expansion</strong>: While it compiles on Linux and macOS, the startup logic is still Windows-centric. Time to broaden its horizons.</p>
</li>
<li><p><strong>More startup locations</strong>: Detecting <strong>Scheduled Tasks</strong> and <code>HKLM</code> entries (for those who <em>do</em> want admin-level control).</p>
</li>
<li><p><strong>JSON Output</strong>: Because nothing says <em>modern tool</em> like structured, machine-readable output.</p>
</li>
</ul>
<p>If you’re tired of startup apps running amok in your system and want a <strong>simple, ruthless tool</strong> to control them, give <strong>Startctl</strong> a spin. Contributions, bug reports, and existential rants are all welcome.</p>
<p>👉 Enter the grid: <a target="_blank" href="https://github.com/NeonTowel/startctl">GitHub - NeonTowel/startctl</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740772538354/434ca30a-7d33-4814-9957-b2d736bdf821.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[The Case of the Bloated Containers]]></title><description><![CDATA[Chapter 1: A Hard Drive Full of Trouble
The night was quiet—too quiet, except for the hum of overworked cooling fans and the occasional death rattle of a failing hard drive. I was nursing a cup of synth-coffee strong enough to rewrite my DNA, watchin...]]></description><link>https://deployharmlessly.dev/the-case-of-the-bloated-containers</link><guid isPermaLink="true">https://deployharmlessly.dev/the-case-of-the-bloated-containers</guid><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[logging]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Cyberpunk]]></category><category><![CDATA[Fiction]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Fri, 28 Feb 2025 19:04:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1740768929023/65f4c476-afdc-4799-b5f5-928e2230866b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-chapter-1-a-hard-drive-full-of-trouble">Chapter 1: A Hard Drive Full of Trouble</h2>
<p>The night was quiet—too quiet, except for the hum of overworked cooling fans and the occasional death rattle of a failing hard drive. I was nursing a cup of synth-coffee strong enough to rewrite my DNA, watching error logs scroll endlessly across the terminal screen. The city outside blinked in the usual dull glow of half-functional streetlights and advertisements for StackOverlord Inc., the megacorp responsible for 90% of the galaxy’s infrastructure problems—and 100% of their customer service hold music.</p>
<p>Then he walked in.</p>
<p>A DevOps engineer, the kind that had seen too many 3 AM incidents and lived to tell the tale. His hoodie was wrinkled, his eyes were bloodshot, and his left hand twitched like a man who had spent too long debugging memory leaks. He dropped a battered datapad onto my desk, the way a man drops bad news.</p>
<p>"It's Docker," he said, voice rough like a stack trace that just won’t end. "The logs… they’re multiplying."</p>
<p>I took a slow sip of coffee. "Logs?"</p>
<p>"Yeah. Petabytes. The disk is filling up like a memory leak in an infinite loop. I tried clearing them, running prunes, even whispering threats. But they keep coming back."</p>
<p>I frowned. I’d seen this before. Always started small—a few innocent logs left unchecked. But before you knew it, they were everywhere. Consuming disk space. Slowing down services. Choking entire systems.</p>
<p>I leaned back. "Alright, kid. Let’s take a look under the hood."</p>
<h2 id="heading-chapter-2-the-deep-dive">Chapter 2: The Deep Dive</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740768961221/9bd4c0b2-e771-4f86-a4f2-8756888cf890.png" alt class="image--center mx-auto" /></p>
<p>We took a grav-lift down to the server room—buried deep in the lower levels of the Intergalactic Stack, where the air smelled of overheated circuits and bad decisions. Rows of blinking server racks stretched into the dimly lit corridors, their quiet hum eerily similar to laughter—or maybe that was just the coffee withdrawal talking.</p>
<p>The engineer hovered behind me, eyes darting nervously between the machines.</p>
<p>"You sure you wanna do this here?" he muttered. "Some of these systems… they’ve been running for centuries. No one really knows who set them up."</p>
<p>I didn’t answer. I pulled up a holo-terminal and started poking around.</p>
<p>First, the filesystem. A quick scan confirmed what I already knew—disk space was vanishing faster than a junior sysadmin’s confidence in production. Then, I ran a deeper inspection.</p>
<pre><code class="lang-bash">docker ps --format <span class="hljs-string">'{{.Names}}'</span> | xargs -I {} sh -c <span class="hljs-string">"echo {}: &amp;&amp; docker inspect --format='{{.LogPath}}' {}"</span>
</code></pre>
<p>A flood of paths scrolled across the screen. Massive log files. Growing at an unnatural rate.</p>
<p>The engineer let out a low whistle. "That’s… a lot of logs."</p>
<p>I nodded grimly. "And I bet you didn’t set up log rotation, did you?"</p>
<p>He looked down at his boots. "I thought StackOverlord handled that automatically."</p>
<p>I let out a slow, knowing chuckle. "If StackOverlord handled log rotation automatically, I’d be on a beach somewhere, not solving disk space murders."</p>
<p>I pulled up the container config. And there it was—plain as day. No limits. No rotation. Just a bottomless pit of logs, growing by the second. The fix was a handful of lines:</p>
<pre><code class="lang-json">"LogConfig": {
    "Type": "json-file",
    "Config": {
        "max-size": "10m",
        "max-file": "3"
    }
}
</code></pre>
<p>I tapped the screen. "Set this, restart your containers, and your logs won’t get out of control."</p>
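<p>(For the record, you don’t have to edit JSON by hand—the same limits can ride along on the <code>docker run</code> command line. A sketch: the wrapper name is mine, but the <code>--log-opt</code> flags are the real thing.)</p>

```shell
# run_with_rotated_logs SIZE COUNT IMAGE [ARGS...]
# docker run with the json-file driver capped at COUNT files of SIZE each,
# so the logs can never eat the disk.
run_with_rotated_logs() {
  local size=$1 count=$2; shift 2
  docker run -d \
    --log-driver json-file \
    --log-opt "max-size=$size" \
    --log-opt "max-file=$count" \
    "$@"
}

# Usage:
# run_with_rotated_logs 10m 3 nginx:alpine
```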
<p>The engineer nodded, already typing furiously. But something still felt... off.</p>
<p>The logs were too large. Too frequent.</p>
<p>I opened another terminal window, running a system-wide process check. And that’s when I saw it.</p>
<p><strong>Something was writing logs. Something that shouldn’t exist.</strong></p>
<h2 id="heading-chapter-3-cleanup-on-aisle-disk-space">Chapter 3: Cleanup on Aisle Disk Space</h2>
<p>I wasn’t satisfied. The logs had been trimmed, but something else was still writing. Something off the books.</p>
<p>I turned to the engineer, eyes narrowed. "Tell me—what’s running in these containers?"</p>
<p>He rattled them off. "A Python app, an Nginx reverse proxy, a database, and…" He trailed off.</p>
<p>I folded my arms. "And what?"</p>
<p>He swallowed. "There’s, uh… a debugging flag I left on. It, uh… might be logging every HTTP request in full detail."</p>
<p>I sighed so hard the server room temperature dropped by two degrees.</p>
<p>"Kid, do you know what you've done?" I growled. "Debug logs are fine in dev, but in production? That’s like leaving a microphone open at an intergalactic peace negotiation—you’re gonna hear everything whether you like it or not."</p>
<p>He hesitated. "But I needed them for a hotfix—"</p>
<p>"Doesn't matter," I cut him off. "Your system is drowning in noise. Every request, every response, every meaningless detail—logged, saved, and stuffed into your disks like a bureaucratic nightmare."</p>
<p>He exhaled sharply, fingers hovering over the keyboard. "So I just... turn it off?"</p>
<p>I nodded.</p>
<p>The moment his fingers hit Enter, the system shuddered, as if it had been holding its breath this whole time. The logs slowed to a trickle.</p>
<p>But something else didn’t.</p>
<p>A single process kept running.</p>
<p>I stared at the screen. Its process ID was ancient—far older than this deployment.</p>
<p>I pulled up its details. My stomach tightened.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740769105288/b7c37442-b8cf-4455-828e-cf6866fac6e0.png" alt class="image--center mx-auto" /></p>
<p><strong>SyslogDaemon v2.3.4-AE92</strong><br /><strong>Status: Active</strong><br /><strong>Logs Created: Unknown</strong><br /><strong>Last User Modification: Never</strong></p>
<p>I turned to the engineer. "You ever install this?"</p>
<p>He shook his head. "I… I don’t even know what that is."</p>
<p>Neither did I. But I knew one thing—it was writing logs. And it had been for a long time.</p>
<p>The engineer swallowed. "So, uh… should we turn it off?"</p>
<p>I looked at the process, at its unblinking status. I had a bad feeling—the kind of feeling you get before a major system crash.</p>
<p>"Not yet," I muttered, closing the terminal. "Not until we know what it’s logging."</p>
<p>We left the server room in silence, the quiet hum of machines watching us go.</p>
<p>The logs were tamed—for now. But somewhere, buried deep in the Intergalactic Stack, something else was running.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740769148808/d7332396-0857-4311-83c7-bf3bc361df39.png" alt class="image--center mx-auto" /></p>
<p>Something waiting to be found.</p>
]]></content:encoded></item><item><title><![CDATA[Strategic Patching: Modify Resources Without Pain]]></title><description><![CDATA[Imagine you’ve got a Deployment running happily in your cluster. Then someone comes along and says, "Hey, can you just change the number of replicas?"
You sigh. That means either:

Editing a massive YAML file and reapplying it.

Using kubectl edit, s...]]></description><link>https://deployharmlessly.dev/strategic-patching-modify-resources-without-pain</link><guid isPermaLink="true">https://deployharmlessly.dev/strategic-patching-modify-resources-without-pain</guid><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[SRE]]></category><category><![CDATA[tech ]]></category><category><![CDATA[containers]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Fri, 28 Feb 2025 17:38:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1740763682220/27c35142-324b-4957-984d-c4b94843ee65.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine you’ve got a Deployment running happily in your cluster. Then someone comes along and says, <em>"Hey, can you just change the number of replicas?"</em></p>
<p>You sigh. That means either:</p>
<ol>
<li><p>Editing a massive YAML file and reapplying it.</p>
</li>
<li><p>Using <code>kubectl edit</code>, scrolling through an ocean of text, and praying you don’t fat-finger a bracket.</p>
</li>
<li><p>Deleting and recreating the resource, which feels like overkill for a tiny change.</p>
</li>
</ol>
<p>There’s a <strong>better way</strong>: <strong>patching</strong>.</p>
<p>Patching lets you surgically modify Kubernetes resources <strong>without redeploying everything</strong>. It’s the difference between using a scalpel and <strong>redesigning the entire hospital</strong> just to fix a typo on a patient’s chart.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740764117127/cc80a63a-5c63-49ba-93dc-3ad860399ab4.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-how-patching-works"><strong>How Patching Works</strong></h3>
<p>There are <strong>three ways</strong> to patch resources in Kubernetes:</p>
<ol>
<li><p><strong>JSON Patch</strong> (precise and explicit)</p>
</li>
<li><p><strong>Strategic Merge Patch</strong> (smart and Kubernetes-aware)</p>
</li>
<li><p><strong>Merge Patch</strong> (simple but less flexible)</p>
</li>
</ol>
<p>Let’s break them down.</p>
<hr />
<h3 id="heading-1-json-patch-the-laser-scalpel"><strong>1. JSON Patch: The Laser Scalpel</strong></h3>
<p>JSON Patch is the <strong>most precise</strong> way to update Kubernetes resources. You define exactly <strong>what to change, how to change it, and where</strong>.</p>
<p>Let’s say we have a Deployment called <code>my-app</code>, and we need to <strong>scale it to 5 replicas</strong> without touching anything else.</p>
<pre><code class="lang-bash">kubectl patch deployment my-app -n my-namespace --<span class="hljs-built_in">type</span>=<span class="hljs-string">'json'</span> -p=<span class="hljs-string">'[{"op": "replace", "path": "/spec/replicas", "value": 5}]'</span>
</code></pre>
<p>Here’s what’s happening:</p>
<ul>
<li><p><code>op: replace</code> → We’re replacing an existing field.</p>
</li>
<li><p><code>path: /spec/replicas</code> → We’re targeting the <code>replicas</code> field inside <code>spec</code>.</p>
</li>
<li><p><code>value: 5</code> → The new value is 5.</p>
</li>
</ul>
<p><strong>Why JSON Patch?</strong></p>
<ul>
<li><p>It’s <strong>surgical</strong>—changes only what you specify.</p>
</li>
<li><p>It <strong>never removes fields you didn’t touch</strong>.</p>
</li>
</ul>
<p>If you ever need to remove a field instead of replacing it:</p>
<pre><code class="lang-bash">kubectl patch deployment my-app -n my-namespace --<span class="hljs-built_in">type</span>=<span class="hljs-string">'json'</span> -p=<span class="hljs-string">'[{"op": "remove", "path": "/metadata/annotations"}]'</span>
</code></pre>
<p>Boom. That <strong>erases all annotations</strong> without touching anything else.</p>
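<p>JSON Patch payloads are ordered arrays, so you can chain several ops. RFC 6902 even defines a <code>test</code> op that aborts the whole patch if the current value isn’t what you expect. A small sketch (the deployment name and replica counts are the hypothetical ones from above; the <code>kubectl</code> line is left commented out as a preview):</p>

```bash
# Build the payload in a shell variable so it can be checked locally
# before touching the cluster. Op 1 is a guard: if replicas is not 5,
# the API server rejects the whole patch and op 2 never applies.
patch='[
  {"op": "test",    "path": "/spec/replicas", "value": 5},
  {"op": "replace", "path": "/spec/replicas", "value": 3}
]'

# Validate that the payload is well-formed JSON (exits non-zero if not):
echo "$patch" | python3 -m json.tool

# kubectl patch deployment my-app -n my-namespace --type='json' -p="$patch"
```

<p>The guard makes the patch safe to retry: it only fires against the state you thought you were modifying.</p>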
<hr />
<h3 id="heading-2-strategic-merge-patch-the-swiss-army-knife"><strong>2. Strategic Merge Patch: The Swiss Army Knife</strong></h3>
<p>Strategic Merge Patch is Kubernetes-aware, and it’s the default: running <code>kubectl patch</code> with no <code>--type</code> flag (or with <code>--type='strategic'</code>) uses it. Instead of precisely targeting JSON paths, you can provide a <strong>partial YAML object</strong> with just the fields you want to change, and Kubernetes merges it using its built-in knowledge of each resource’s schema.</p>
<p>Let’s say we need to <strong>update the image</strong> for our <code>my-app</code> container inside a Deployment. Instead of replacing the whole thing, we just patch what’s necessary:</p>
<pre><code class="lang-bash">kubectl patch deployment my-app -n my-namespace --<span class="hljs-built_in">type</span>=<span class="hljs-string">'strategic'</span> -p=<span class="hljs-string">'{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"my-app:v2"}]}}}}'</span>
</code></pre>
<p>Kubernetes <strong>intelligently merges</strong> this into the existing definition.</p>
<p>What’s happening here?</p>
<ul>
<li><p>We <strong>only specify the fields we care about</strong> (no need to provide the entire YAML).</p>
</li>
<li><p>Kubernetes <strong>keeps the rest of the deployment untouched</strong>.</p>
</li>
<li><p>It’s <strong>easier to write than JSON Patch</strong> while still being efficient.</p>
</li>
</ul>
<p>If you need to <strong>add a new environment variable</strong> to a container:</p>
<pre><code class="lang-bash">kubectl patch deployment my-app -n my-namespace --<span class="hljs-built_in">type</span>=<span class="hljs-string">'strategic'</span> -p=<span class="hljs-string">'{"spec":{"template":{"spec":{"containers":[{"name":"app","env":[{"name":"LOG_LEVEL","value":"debug"}]}]}}}}'</span>
</code></pre>
<p>This <strong>adds</strong> <code>LOG_LEVEL=debug</code> without removing existing environment variables.</p>
<p><strong>Why Strategic Merge Patch?</strong></p>
<ul>
<li><p>It’s <strong>easier to write than JSON Patch</strong>.</p>
</li>
<li><p>It <strong>doesn’t require full YAML files</strong>.</p>
</li>
<li><p>Kubernetes <strong>merges intelligently</strong> instead of replacing entire objects.</p>
</li>
</ul>
<hr />
<h3 id="heading-3-merge-patch-the-simple-one"><strong>3. Merge Patch: The Simple One</strong></h3>
<p>If you just need a <strong>quick-and-dirty change</strong>, Merge Patch is the easiest option. It’s similar to Strategic Merge Patch but <strong>less Kubernetes-aware</strong>.</p>
<p>Want to <strong>update the replica count</strong>?</p>
<pre><code class="lang-bash">kubectl patch deployment my-app -n my-namespace --<span class="hljs-built_in">type</span>=<span class="hljs-string">'merge'</span> -p=<span class="hljs-string">'{"spec":{"replicas": 3}}'</span>
</code></pre>
<p>Kubernetes applies this directly, but <strong>unlike Strategic Merge Patch, it doesn’t understand lists properly</strong>. If you’re modifying something inside an array (like containers), <strong>it replaces the whole array instead of merging</strong>.</p>
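<p>You don’t need a cluster to see the list problem. JSON Merge Patch is specified in RFC 7386, and the algorithm is tiny: objects merge key by key, <code>null</code> deletes a key, and everything else (lists included) replaces the target value outright. A throwaway sketch of that algorithm, run locally with stdlib Python, purely for illustration:</p>

```bash
# Minimal RFC 7386 merge: watch the sidecar container vanish, because
# the patch's containers list replaces the old one wholesale.
python3 -c '
import json

def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch            # lists and scalars replace, never merge
    out = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            out.pop(key, None)  # null means "delete this key"
        else:
            out[key] = merge_patch(out.get(key), value)
    return out

spec  = {"containers": [{"name": "app"}, {"name": "sidecar"}]}
patch = {"containers": [{"name": "app", "image": "my-app:v2"}]}
print(json.dumps(merge_patch(spec, patch)))
'
# prints: {"containers": [{"name": "app", "image": "my-app:v2"}]}
```

<p>Strategic Merge Patch avoids this by consulting each field’s merge strategy (for containers, it merges list entries by their <code>name</code> key), which is exactly why it’s the safer choice for anything list-shaped.</p>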
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740764150060/aadf415f-33e6-439d-ac34-ef6f5732c2e1.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-when-to-use-each-patch-type"><strong>When to Use Each Patch Type</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Patch Type</td><td>Best For</td><td>Pros</td><td>Cons</td></tr>
</thead>
<tbody>
<tr>
<td><strong>JSON Patch</strong></td><td>Precise, fine-grained control</td><td>Exact changes, minimal impact</td><td>Harder to write</td></tr>
<tr>
<td><strong>Strategic Merge Patch</strong></td><td>Updating specific fields without rewriting everything</td><td>Kubernetes-aware, easy to use</td><td>Lists can be tricky</td></tr>
<tr>
<td><strong>Merge Patch</strong></td><td>Quick, simple updates</td><td>Easy to write</td><td>Less intelligent about merging</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-example-patching-vs-reapplying-yaml"><strong>Example: Patching vs. Reapplying YAML</strong></h3>
<p>Let’s say you need to update an annotation. Normally, you’d have to:</p>
<pre><code class="lang-bash">kubectl edit deployment my-app
</code></pre>
<p>Scroll through <strong>tons of YAML</strong>, find the annotation, <strong>update it manually</strong>, and then save.</p>
<p>Instead, <strong>one command does it instantly</strong>:</p>
<pre><code class="lang-bash">kubectl patch deployment my-app -n my-namespace --<span class="hljs-built_in">type</span>=<span class="hljs-string">'merge'</span> -p=<span class="hljs-string">'{"metadata":{"annotations":{"restarted-at":"'</span><span class="hljs-string">"<span class="hljs-subst">$(date +%s)</span>"</span><span class="hljs-string">'"}}}'</span>
</code></pre>
<p>This <strong>adds or updates</strong> an annotation called <code>restarted-at</code> with the current timestamp.</p>
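<p>That quoting dance (single quotes, then <code>"$(date +%s)"</code>, then single quotes again) is easy to get wrong, so it can help to assemble the payload in a variable first and let a JSON parser scream if the quoting broke:</p>

```bash
# Same payload as the one-liner above, built step by step:
payload='{"metadata":{"annotations":{"restarted-at":"'"$(date +%s)"'"}}}'
echo "$payload"

# python3 -m json.tool exits non-zero on malformed JSON, so a botched
# quoting job fails here instead of at the API server:
echo "$payload" | python3 -m json.tool

# kubectl patch deployment my-app -n my-namespace --type='merge' -p="$payload"
```
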
<hr />
<h3 id="heading-patching-in-cicd-pipelines"><strong>Patching in CI/CD Pipelines</strong></h3>
<p>When running Kubernetes in CI/CD, you often need to update <strong>specific fields</strong> dynamically. Patching is perfect for that.</p>
<p>For example, if your pipeline builds a new image, you can update the Deployment with:</p>
<pre><code class="lang-bash">kubectl patch deployment my-app -n my-namespace --<span class="hljs-built_in">type</span>=<span class="hljs-string">'strategic'</span> -p=<span class="hljs-string">'{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"my-app:'</span><span class="hljs-string">"<span class="hljs-variable">$BUILD_ID</span>"</span><span class="hljs-string">'"}]}}}}'</span>
</code></pre>
<p>This <strong>injects a new image version without redeploying from scratch</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740764158423/e32bfcba-fc9f-4ad9-9325-31a4c314837c.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-final-thoughts"><strong>Final Thoughts</strong></h3>
<p>Patching is the <strong>fast, efficient</strong> way to modify Kubernetes resources <strong>without full redeployments</strong>. Use it when you need to:</p>
<ul>
<li><p><strong>Scale up/down dynamically</strong></p>
</li>
<li><p><strong>Update container images without touching other settings</strong></p>
</li>
<li><p><strong>Add new labels, annotations, or environment variables</strong></p>
</li>
<li><p><strong>Remove unwanted fields without rewriting YAML</strong></p>
</li>
</ul>
<p>Patching lets you <strong>work smarter, not harder</strong>.</p>
<p>Stop treating Kubernetes like an immutable monolith—patch what you need, when you need it, <strong>without unnecessary redeployments</strong>. Your cluster (and your sanity) will thank you.</p>
]]></content:encoded></item><item><title><![CDATA[Introducing Jack "Kernel" Kowalski]]></title><description><![CDATA[In the vast expanse of the Intergalactic Stack, where tech debt is older than some civilizations and a single misconfigured YAML file can bring down an entire fleet, you don’t just troubleshoot. You investigate.
And when things go really wrong—when y...]]></description><link>https://deployharmlessly.dev/introducing-jack-kernel-kowalski</link><guid isPermaLink="true">https://deployharmlessly.dev/introducing-jack-kernel-kowalski</guid><category><![CDATA[Cyberpunk]]></category><category><![CDATA[Devops]]></category><category><![CDATA[noir]]></category><category><![CDATA[scifi]]></category><category><![CDATA[storytelling]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[technology]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Fri, 21 Feb 2025 13:00:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1740123554938/4ca5679a-2c65-4e09-86d8-09fa5e25d30d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the vast expanse of the <strong>Intergalactic Stack</strong>, where tech debt is older than some civilizations and a single misconfigured YAML file can bring down an entire fleet, you don’t just troubleshoot. You investigate.</p>
<p>And when things go <strong>really</strong> wrong—when your database disappears without a trace, when your CI/CD pipeline starts deploying alternate realities, or when your Kubernetes cluster forms a breakaway republic—<strong>you don’t call support. You call Kowalski.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740123481774/4a9025a2-f572-4d45-9159-a1568cb818b6.png" alt class="image--center mx-auto" /></p>
<p>Jack "Kernel" Kowalski is an <strong>ex-Site Reliability Engineer turned private sys detective</strong>. He operates in a galaxy where:</p>
<ul>
<li><p><strong>Kubernetes clusters</strong> occasionally gain sentience and start forming unions.</p>
</li>
<li><p><strong>CI/CD pipelines</strong> develop existential crises and question their own deployments.</p>
</li>
<li><p><strong>Golang applications</strong> sometimes compile into riddles wrapped in enigmas.</p>
</li>
</ul>
<h3 id="heading-how-this-fits-into-the-blog"><strong>How This Fits Into the Blog</strong></h3>
<p>Instead of just breaking down tech issues, we’re going to <strong>experience them</strong>. Through Kowalski’s investigations, we’ll explore the mysteries lurking in our everyday DevOps and programming challenges—<strong>real-world debugging, infrastructure quirks, and automation gone wrong, all told through noir-style detective fiction.</strong></p>
<p>It’s still <strong>a tech blog</strong>, still about <strong>DevOps, Cloud Infrastructure, Kubernetes, and Agile</strong>, but now with more mystery, storytelling, and absurdly strong coffee.</p>
<p>So, as we embark on this journey, remember:<br /><strong>In the unpredictable world of deployments and debugging, sometimes it takes a detective’s intuition to uncover the truth.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740123488549/4cb1a18a-3751-43d9-892b-88604a591ce4.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Batch Processing with kubectl xargs: Command Kubernetes Like a General]]></title><description><![CDATA[Managing a few pods? Easy. Managing hundreds? Now you’re drowning in a sea of kubectl commands, desperately copying and pasting pod names like some underpaid medieval scribe, manually deleting resources one by one while Kubernetes watches—silently ju...]]></description><link>https://deployharmlessly.dev/batch-processing-with-kubectl-xargs-command-kubernetes-like-a-general</link><guid isPermaLink="true">https://deployharmlessly.dev/batch-processing-with-kubectl-xargs-command-kubernetes-like-a-general</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[containers]]></category><category><![CDATA[kubectl]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Fri, 21 Feb 2025 06:37:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739516643595/50976a3e-c9fb-47f2-ad50-1f882bbd8acf.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Managing a few pods? Easy. Managing hundreds? Now you’re drowning in a sea of <code>kubectl</code> commands, desperately copying and pasting pod names like some <strong>underpaid medieval scribe</strong>, manually deleting resources one by one while Kubernetes watches—silently judging you.</p>
<p>This isn’t just inefficient; <strong>it’s a trap.</strong> A slow, soul-draining cycle designed to break your will and force you into the murky depths of automation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740119640905/5652b4eb-5583-47fd-96b1-744e314562e1.png" alt class="image--center mx-auto" /></p>
<p>That’s where <strong>batch processing</strong> comes in. Instead of executing the same command a hundred times like some YAML-obsessed monk, you <strong>chain commands together</strong>, making Kubernetes do the heavy lifting. With a single command, you can <strong>restart every deployment, label an entire namespace, or purge a cluster of unwanted jobs in one swift strike</strong>.</p>
<p>This is how <strong>real Kubernetes operators command the swarm</strong>—by wielding <code>xargs</code> like a <strong>hacker in a neon-lit back alley</strong>, sending mass instructions into the void and watching the cluster obey.</p>
<h4 id="heading-why-batch-processing"><strong>Why Batch Processing?</strong></h4>
<p>Imagine you have 50+ pods running in a namespace. You need to delete all of them, but running <code>kubectl delete pod pod-name</code> manually for each one is <strong>soul-crushingly tedious</strong>. Instead, you can <strong>list</strong> them, extract their names, and <strong>feed them into</strong> <code>kubectl delete</code> automatically.</p>
<pre><code class="lang-bash">kubectl get pods -n my-namespace -o name | xargs kubectl delete -n my-namespace
</code></pre>
<p>This <strong>pipes</strong> (<code>|</code>) the output of <code>kubectl get pods</code> into <code>xargs</code>, which takes each name and appends it to <code>kubectl delete</code>. In one command, <em>poof</em>, all your pods are gone.</p>
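<p>A word of caution before piping anything into a destructive command: stick <code>echo</code> in front of the final command to see what <em>would</em> run. The pod names below are stand-ins for real <code>kubectl get pods -o name</code> output:</p>

```bash
# xargs batches all input into a single invocation by default, so this
# prints one combined command line instead of executing any deletions:
printf 'pod/web-1\npod/web-2\npod/web-3\n' \
  | xargs echo kubectl delete -n my-namespace
# prints: kubectl delete -n my-namespace pod/web-1 pod/web-2 pod/web-3
```

<p>Once the preview looks right, drop the <code>echo</code> and let it rip.</p>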
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740119691075/1966c476-a20e-45c3-ab9f-5567aa93a6af.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-restarting-all-deployments-in-a-namespace"><strong>Restarting All Deployments in a Namespace</strong></h4>
<p>Maybe deleting everything is a bit drastic. What if you just want to <strong>restart all your deployments</strong>?</p>
<pre><code class="lang-bash">kubectl get deployments -n my-namespace -o name | xargs kubectl rollout restart -n my-namespace
</code></pre>
<p>This tells Kubernetes to <strong>gracefully restart</strong> all deployments without downtime. No need to <code>kubectl rollout restart deployment/foo</code> for each one—it’s all done in a single command.</p>
<hr />
<h4 id="heading-scaling-down-everything-for-maintenance"><strong>Scaling Down Everything for Maintenance</strong></h4>
<p>If you ever need to <strong>scale down all Deployments and StatefulSets</strong> (maybe you’re preparing for maintenance or reducing costs), you can run:</p>
<pre><code class="lang-bash">kubectl get deploy,sts -n my-namespace -o name | xargs -I{} kubectl scale {} --replicas=0 -n my-namespace
</code></pre>
<p>This does three things:</p>
<ol>
<li><p><strong>Lists all Deployments and StatefulSets</strong> in the namespace.</p>
</li>
<li><p><strong>Passes each one to</strong> <code>kubectl scale</code>, setting <code>replicas=0</code>.</p>
</li>
<li><p><strong>Stops workloads in a controlled manner</strong> without deleting them.</p>
</li>
</ol>
<p>(DaemonSets are missing from that list on purpose: they run one pod per eligible node and have no replica count, so <code>kubectl scale</code> doesn’t apply to them.)</p>
<p>When you’re ready to bring them back up:</p>
<pre><code class="lang-bash">kubectl get deploy,sts -n my-namespace -o name | xargs -I{} kubectl scale {} --replicas=3 -n my-namespace
</code></pre>
<p>Just like that, your cluster is back in action. (One caveat: this sets <em>everything</em> back to three replicas; if your workloads ran with different counts, jot them down before scaling to zero.)</p>
<hr />
<h4 id="heading-cleaning-up-completed-jobs"><strong>Cleaning Up Completed Jobs</strong></h4>
<p>Kubernetes <strong>Jobs</strong> are great for one-off tasks, but they don’t clean themselves up. Over time, your cluster becomes littered with completed jobs that <strong>just sit there like empty soda cans</strong>.</p>
<p>To remove them in one go:</p>
<pre><code class="lang-bash">kubectl get <span class="hljs-built_in">jobs</span> -o name | xargs kubectl delete
</code></pre>
<p>This clears out all completed jobs so your cluster stays clean.</p>
<p>If you only want to delete <strong>jobs that have finished successfully</strong>:</p>
<pre><code class="lang-bash">kubectl get <span class="hljs-built_in">jobs</span> --field-selector=status.successful=1 -o name | xargs kubectl delete
</code></pre>
<p>This ensures you don’t accidentally delete jobs that are still running.</p>
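<p>For new Jobs there’s also a set-and-forget alternative: <code>ttlSecondsAfterFinished</code>, which has the Kubernetes TTL controller garbage-collect a Job automatically some time after it completes, no <code>xargs</code> sweep required. A minimal sketch (the job name and image are placeholders):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  ttlSecondsAfterFinished: 3600   # delete this Job about an hour after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["echo", "done"]
```

<p>Batch cleanup still earns its keep for the pile of Jobs that predate the TTL field.</p>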
<hr />
<h4 id="heading-using-i-for-more-flexibility"><strong>Using</strong> <code>-I{}</code> for More Flexibility</h4>
<p>Sometimes, you need to insert a resource name in a <strong>specific place</strong> within a command. That’s where <code>-I{}</code> comes in.</p>
<p>For example, let’s say you want to label all pods in a namespace:</p>
<pre><code class="lang-bash">kubectl get pods -n my-namespace -o name | xargs -I{} kubectl label {} environment=staging -n my-namespace
</code></pre>
<p>This applies the <code>environment=staging</code> label to <strong>every pod</strong>.</p>
<p>Need to add multiple labels? Just chain them:</p>
<pre><code class="lang-bash">kubectl get pods -n my-namespace -o name | xargs -I{} kubectl label {} team=devops project=alpha -n my-namespace
</code></pre>
<p>Now every pod has two new labels, all in a single command.</p>
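<p>Note that <code>-I{}</code> also changes how xargs batches: it runs the command once per input line, substituting <code>{}</code> wherever it appears. The same <code>echo</code> dry-run trick makes that visible (pod names are again placeholders):</p>

```bash
# One command line is printed per pod, with {} substituted each time:
printf 'pod/web-1\npod/web-2\n' \
  | xargs -I{} echo kubectl label {} team=devops -n my-namespace
# prints two lines, one per pod
```
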
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740119720754/06595e2f-200e-4164-82e1-63c1b9e79c5c.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-why-this-matters"><strong>Why This Matters</strong></h3>
<p>Batch processing turns <strong>repetitive Kubernetes operations</strong> into <strong>one-liners of power</strong>. Whether you’re:</p>
<ul>
<li><p>Deleting resources en masse</p>
</li>
<li><p>Restarting everything at once</p>
</li>
<li><p>Scaling up or down entire environments</p>
</li>
<li><p>Cleaning up orphaned jobs</p>
</li>
<li><p>Adding labels or modifying resources</p>
</li>
</ul>
<p><strong>These commands save you from typing the same thing over and over.</strong> And in Kubernetes, automation is the difference between managing a cluster and wrestling with it.</p>
<p>If you’re still executing <code>kubectl delete</code> <strong>one pod at a time</strong>, it’s time to level up. Kubernetes is a mighty beast—but with batch processing, <strong>you’re the one holding the leash</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[Custom Resource Definitions (CRDs): Extending Kubernetes Like a Boss]]></title><description><![CDATA[Kubernetes is a magnificent beast—so long as you stay within its predefined rules. It happily schedules pods, orchestrates services, and ensures your microservices don’t eat each other. But the moment you ask it to handle something outside its comfor...]]></description><link>https://deployharmlessly.dev/custom-resource-definitions-crds-extending-kubernetes-like-a-boss</link><guid isPermaLink="true">https://deployharmlessly.dev/custom-resource-definitions-crds-extending-kubernetes-like-a-boss</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[CRD]]></category><category><![CDATA[Devops]]></category><category><![CDATA[SRE]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Fri, 14 Feb 2025 06:00:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/9cXMJHaViTM/upload/a056a426839ee02b12dc7894967fc992.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Kubernetes is a magnificent beast—<strong>so long as you stay within its predefined rules</strong>. It happily schedules pods, orchestrates services, and ensures your microservices don’t eat each other. But the moment you ask it to handle <strong>something outside its comfort zone</strong>, it stares at you like a confused robot, waiting for more YAML sacrifices.</p>
<p>Try managing <strong>feature flags, application configs, or, say, a fire-breathing dragon</strong>, and Kubernetes simply shrugs. <em>"Not my problem."</em> It was never designed to understand <strong>dragons, chaos monkeys, or sentient serverless applications plotting their escape</strong>. But what if it could?</p>
<p>That’s where <strong>Custom Resource Definitions (CRDs)</strong> come in. Think of them as <strong>a way to reprogram Kubernetes' brain</strong>—expanding its API to recognize and manage <strong>entirely new types of objects</strong>. Want a <code>Dragon</code> resource with attributes like <code>fireBreathing</code> and <code>wingSpan</code>? You got it. Need a <code>ChaosMonkey</code> to <strong>randomly terminate pods in the name of resilience testing</strong>? Kubernetes will now happily <strong>facilitate its own destruction</strong>—all because you <strong>taught it how</strong>.</p>
<p>With CRDs, <strong>you’re not just using Kubernetes—you’re rewriting its reality.</strong></p>
<h4 id="heading-how-crds-work"><strong>How CRDs Work</strong></h4>
<p>A CRD is, at its core, just another API object in Kubernetes. When you create one, Kubernetes automatically generates a new API endpoint for it. Suddenly, your cluster understands a brand-new type of resource, and you can <code>kubectl get</code> it just like built-in objects such as <code>pods</code> or <code>deployments</code>.</p>
<p>For example, if you install a CRD for <code>ClusterIssuer</code> (used by <code>cert-manager</code> for handling TLS certificates), you immediately get:</p>
<pre><code class="lang-bash">kubectl get clusterissuers
</code></pre>
<p>Just like <code>kubectl get pods</code>, but for something Kubernetes didn’t natively support before.</p>
<h4 id="heading-finding-whats-installed"><strong>Finding What’s Installed</strong></h4>
<p>If you’re working with a Kubernetes cluster that’s had CRDs installed, you might want to <strong>see what’s available</strong>. List all installed CRDs with:</p>
<pre><code class="lang-bash">kubectl get crds
</code></pre>
<p>This will return a list of all the custom resources available on the cluster. If you want to dig into a specific one, say <code>clusterissuers</code>, you can check its details (CRDs are addressed by their full <code>&lt;plural&gt;.&lt;group&gt;</code> name):</p>
<pre><code class="lang-bash">kubectl describe crd clusterissuers.cert-manager.io
</code></pre>
<p>This provides information on what fields the CRD supports, its API group, and validation rules.</p>
<h4 id="heading-creating-a-custom-resource-definition"><strong>Creating a Custom Resource Definition</strong></h4>
<p>Let’s say we want Kubernetes to manage a collection of <strong>dragons</strong>. We need to <strong>define a CRD</strong> that tells Kubernetes what a "dragon" is.</p>
<p>Here’s a simple YAML definition for a <code>Dragon</code> resource:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apiextensions.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">CustomResourceDefinition</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">dragons.example.com</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">group:</span> <span class="hljs-string">example.com</span>
  <span class="hljs-attr">names:</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Dragon</span>
    <span class="hljs-attr">listKind:</span> <span class="hljs-string">DragonList</span>
    <span class="hljs-attr">plural:</span> <span class="hljs-string">dragons</span>
    <span class="hljs-attr">singular:</span> <span class="hljs-string">dragon</span>
  <span class="hljs-attr">scope:</span> <span class="hljs-string">Namespaced</span>
  <span class="hljs-attr">versions:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">v1</span>
    <span class="hljs-attr">served:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">storage:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">schema:</span>
      <span class="hljs-attr">openAPIV3Schema:</span>
        <span class="hljs-attr">type:</span> <span class="hljs-string">object</span>
        <span class="hljs-attr">properties:</span>
          <span class="hljs-attr">spec:</span>
            <span class="hljs-attr">type:</span> <span class="hljs-string">object</span>
            <span class="hljs-attr">properties:</span>
              <span class="hljs-attr">fireBreathing:</span>
                <span class="hljs-attr">type:</span> <span class="hljs-string">boolean</span>
              <span class="hljs-attr">wingSpan:</span>
                <span class="hljs-attr">type:</span> <span class="hljs-string">integer</span>
</code></pre>
<p>Applying this CRD tells Kubernetes, <strong>“Hey, from now on, you should recognize something called a ‘Dragon’ and expect it to have attributes like ‘fireBreathing’ and ‘wingSpan’.”</strong></p>
<pre><code class="lang-bash">kubectl apply -f dragon-crd.yaml
</code></pre>
<p>Now, Kubernetes knows what a <strong>Dragon</strong> is, and we can create our first one:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">example.com/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Dragon</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">smaug</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">fireBreathing:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">wingSpan:</span> <span class="hljs-number">30</span>
</code></pre>
<pre><code class="lang-bash">kubectl apply -f smaug.yaml
</code></pre>
<p>And just like that, your cluster officially recognizes <strong>Smaug, the fire-breathing dragon</strong>.</p>
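<p>One refinement worth knowing about (sketched against the hypothetical <code>dragons</code> CRD above): <code>additionalPrinterColumns</code> teaches <code>kubectl get</code> to display spec fields as columns, just like built-in resources do. It slots in under the <code>v1</code> entry in <code>spec.versions</code>:</p>

```yaml
# Added alongside served/storage/schema in spec.versions[0]:
additionalPrinterColumns:
- name: Fire
  type: boolean
  jsonPath: .spec.fireBreathing
- name: Wingspan
  type: integer
  jsonPath: .spec.wingSpan
```

<p>With that in place, <code>kubectl get dragons</code> shows a FIRE and a WINGSPAN column next to each dragon’s name.</p>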
<h4 id="heading-why-crds-are-powerful"><strong>Why CRDs Are Powerful</strong></h4>
<p>The magic of CRDs is that they allow you to <strong>extend Kubernetes without modifying its core</strong>. Combined with <strong>custom controllers and operators</strong>, you can build self-healing, auto-scaling infrastructure that’s tailor-made for your specific use cases.</p>
<p>For example:</p>
<ul>
<li><p><strong>ArgoCD’s CRDs</strong> let you manage GitOps-driven deployments.</p>
</li>
<li><p><strong>Cert-manager’s CRDs</strong> automate certificate management.</p>
</li>
<li><p><strong>KEDA’s CRDs</strong> enable event-driven auto-scaling.</p>
</li>
</ul>
<p>With a bit of work, you can build your own <strong>fully automated Kubernetes-native applications</strong> that respond to changes in custom resources just like built-in objects.</p>
<p>CRDs are <strong>how you teach Kubernetes new tricks</strong>. They let you stop fighting the system and start making it work <em>for</em> you. So next time you find yourself duct-taping YAML files together to handle a missing Kubernetes feature, ask yourself: <em>Should I just create a CRD instead?</em></p>
<p>The answer is probably <strong>yes</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[Transitioning from Backend Developer to DevOps Engineer]]></title><description><![CDATA[🚀 So, you’ve been slinging backend code for years—APIs, databases, the occasional battle with an inexplicably slow query. Life is good. Predictable. Safe. But then, like a rogue AI sent to upend the system, DevOps sneaks into the picture. Suddenly, ...]]></description><link>https://deployharmlessly.dev/transitioning-from-backend-developer-to-devops-engineer</link><guid isPermaLink="true">https://deployharmlessly.dev/transitioning-from-backend-developer-to-devops-engineer</guid><category><![CDATA[Devops]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Cyberpunk]]></category><category><![CDATA[TechHumor]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[scifi]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Tue, 11 Feb 2025 11:00:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739191665979/a8b9f5db-8b15-4f6b-a527-1b109a37a322.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🚀 So, you’ve been slinging backend code for years—APIs, databases, the occasional battle with an inexplicably slow query. Life is good. Predictable. Safe. But then, like a rogue AI sent to upend the system, DevOps sneaks into the picture. Suddenly, your world isn't just about writing code. It's about keeping it alive in the wild, dodging outages, automating everything, and realizing that YAML is both a tool and a trap.</p>
<p>Welcome to the neon-lit, coffee-fueled odyssey of transitioning to DevOps. It’s part cyberpunk thriller, part existential crisis, and entirely worth the ride.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739190616487/3d8ecfde-7818-4cd9-9f6d-a1864f162ac0.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-act-one-leaving-the-backend-comfort-zone">Act One: Leaving the Backend Comfort Zone</h2>
<p>The neon glow of Alex’s monitors flickered against the rain-streaked window. For years, they had been a code-slinger in the metropolis of backend development—crafting APIs, optimizing databases, and ensuring services ran smoother than a high-speed train on greased rails.</p>
<p>The world was structured, predictable, and safe… except for the occasional semicolon-induced catastrophe. But the city was changing. The monolithic towers of old were being replaced by a sprawling labyrinth of microservices, cloud deployments, and pipelines that hummed with the eerie precision of an automated dystopia.</p>
<p>Somewhere between the flickering terminals and whispered rumors of DevOps, Alex saw the future. It wasn’t just about writing elegant code anymore; it was about making sure that code could survive the brutal back alleys of production.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739190659161/78fa6d7f-d437-41fb-9c94-2f9e6d611165.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-act-two-facing-the-realities-of-devops">Act Two: Facing the Realities of DevOps</h2>
<p>Stepping into DevOps was like stepping into a noir novel where everything was written in YAML and the documentation was always slightly out of date.</p>
<p>Containers were the first puzzle. Docker promised neatly packaged microservices, but Kubernetes? That was a different beast—an eldritch horror of configuration files and arcane commands that required more than just luck to navigate.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739190692148/329b3e81-8cd3-420f-a902-7b168fbe6c4c.jpeg" alt class="image--center mx-auto" /></p>
<p>Then came Infrastructure as Code (IaC). Terraform and Ansible held the promise of declarative infrastructure, but wielding them felt like hacking into an encrypted mainframe with nothing but a rusty terminal and a strong cup of coffee. One misstep, and the cloud bill skyrocketed faster than a rogue AI gaining sentience.</p>
<p>CI/CD pipelines lurked in the shadows, whispering promises of automated deployments and seamless integration. Alex dove in, setting up Jenkins, GitHub Actions, and GitLab CI/CD, only to find that pipelines were as fickle as an underground informant—cooperative one day, mysteriously failing the next, with no explanation other than a vague error log that might as well have been written in Martian.</p>
<h2 id="heading-act-three-embracing-the-devops-mindset">Act Three: Embracing the DevOps Mindset</h2>
<p>The real revelation? DevOps wasn’t just about wielding powerful tools—it was about understanding the flow of information, the symphony of automation, and the delicate art of not breaking everything with a single push to production.</p>
<p>Communication became a lifeline. Developers, ops engineers, and security teams no longer operated in silos; they navigated the cybernetic city together, exchanging encrypted messages (or just Slack DMs filled with memes and existential dread).</p>
<p>Observability and monitoring became daily rituals. Tools like Prometheus and Grafana offered glimpses into the machine’s soul, helping Alex decipher logs like a detective scanning security footage for clues. Debugging transformed into a shadowy investigation across distributed systems, hunting for the elusive culprit causing latency spikes at 2 AM.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739190770498/8aecd5e7-71b9-44da-b619-8bd3ad5130d7.jpeg" alt class="image--center mx-auto" /></p>
<p>Ultimately, transitioning from backend development to DevOps wasn’t about replacing one skill set with another but evolving into something more. It required continuous learning, adaptability, and the ability to stare into the abyss of YAML without blinking. DevOps wasn’t just a job—it was survival in an ever-changing digital landscape.</p>
<p>For backend developers considering the transition, the path is filled with challenges and the occasional existential crisis. But in a world that thrives on automation and resilience, knowing how to keep the system running without collapsing into chaos? That’s a skill worth mastering.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739190785513/53d8d5e6-9a10-4977-84dc-58f35d71471d.jpeg" alt class="image--center mx-auto" /></p>
<p>🔥 Welcome to DevOps. Keep your logs close, your rollback plans closer, and never—ever—trust a CI pipeline on a Friday. 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Mastering kubectl: The Secret Handbook for Kubernetes Operators]]></title><description><![CDATA[The console flickers to life. A dim green cursor blinks expectantly, waiting for your command. You type:
kubectl get pods

And just like that, you’ve peeked inside the digital underbelly of a Kubernetes cluster—an orchestration system so vast, so com...]]></description><link>https://deployharmlessly.dev/mastering-kubectl-the-secret-handbook-for-kubernetes-operators</link><guid isPermaLink="true">https://deployharmlessly.dev/mastering-kubectl-the-secret-handbook-for-kubernetes-operators</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[Devops]]></category><category><![CDATA[SRE]]></category><category><![CDATA[Tech Tutorial]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Sat, 08 Feb 2025 15:17:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/GSiEeoHcNTQ/upload/ecdd293e37ac85aea32a3f850492b3cb.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The console flickers to life. A dim green cursor blinks expectantly, waiting for your command. You type:</p>
<pre><code class="lang-sh">kubectl get pods
</code></pre>
<p>And just like that, you’ve peeked inside the digital underbelly of a Kubernetes cluster—an orchestration system so vast, so complex, that half the time even it doesn’t seem entirely sure how it’s holding itself together.</p>
<p>Sure, you’ve <strong>listed pods, poked at nodes, and maybe even banished a misbehaving deployment into the abyss</strong>. But Kubernetes isn’t just a cluster; it’s a labyrinth of YAML prophecies, API incantations, and scheduled tasks that <strong>never seem to run when they should</strong>. And <code>kubectl</code>? It’s not a flashlight—it’s a <strong>hacking interface into a self-aware machine that occasionally takes offense at your requests</strong>.</p>
<p>This isn’t some <strong>"Introduction to Kubernetes"</strong> fluff piece. You won’t find a gentle explanation of what a pod is. No, we’re diving straight into the dark alleyways of Kubernetes command-line magic—the <strong>tricks, exploits, and power moves</strong> that separate the mere mortals from the <strong>operators who make clusters dance at their fingertips</strong>.</p>
<p>We’re talking about:</p>
<ul>
<li><p><strong>CRDs that let you rewrite Kubernetes’ DNA</strong></p>
</li>
<li><p><strong>Batch processing that turns</strong> <code>kubectl</code> into a weapon of mass automation</p>
</li>
<li><p><strong>Patching techniques so precise they’d make a neurosurgeon jealous</strong></p>
</li>
<li><p><strong>Debugging workflows that feel like you’re tracing digital ghosts</strong></p>
</li>
<li><p><strong>Port forwarding tricks that let you hack into your own cluster like a cybercriminal—legally</strong></p>
</li>
<li><p><strong>Dry-run mode, because breaking production is only fun when it's someone else's fault</strong></p>
</li>
</ul>
<p>If you’ve ever felt like your cluster was <strong>watching you</strong>, waiting for the perfect moment to throw a <code>CrashLoopBackOff</code> at your best-laid plans, this is the guide you need.</p>
<h2 id="heading-the-6-part-series-the-secret-handbook-for-kubernetes-operatorshttpsdeployharmlesslydevseriesmastering-kubectl"><strong>The 6-Part Series:</strong> <a target="_blank" href="https://deployharmlessly.dev/series/mastering-kubectl">The Secret Handbook for Kubernetes Operators</a></h2>
<p>🕵️ <a target="_blank" href="https://deployharmlessly.dev/custom-resource-definitions-crds-extending-kubernetes-like-a-boss"><strong>Chapter 1: Custom Resource Definitions (CRDs): Extending Kubernetes Like a Boss</strong></a><br />Kubernetes is a wonderful, extensible system—right up until you realize it <strong>doesn’t speak your language</strong>. Need it to manage databases? Feature flags? Dragons? Too bad. But <strong>Custom Resource Definitions (CRDs)</strong> let you <strong>extend Kubernetes itself</strong>, teaching it to understand <em>your</em> unique brand of madness.</p>
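<p>For a taste of what that extension looks like, here's a minimal, entirely hypothetical CRD that teaches a cluster about dragons (the group, names, and schema below are illustrative, not taken from the chapter):</p>

```shell
# A minimal, hypothetical CRD: teach Kubernetes what a "Dragon" is.
# Group, names, and schema are illustrative; adapt to your own brand of madness.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: dragons.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: dragons
    singular: dragon
    kind: Dragon
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                fireBreathing:
                  type: boolean
EOF

# Once registered, Kubernetes speaks your dialect:
kubectl get dragons
```

<p>From there, a custom controller watches for <code>Dragon</code> objects and makes reality match the spec, which is the whole game.</p>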
<p>⚔️ <a target="_blank" href="https://deployharmlessly.dev/batch-processing-with-kubectl-xargs-command-kubernetes-like-a-general"><strong>Chapter 2: Batch Processing with</strong> <code>kubectl xargs</code>: Command Kubernetes Like a General</a><br />The difference between a <strong>junior engineer</strong> and an <strong>operator who commands entire fleets of microservices</strong>? One of them still deletes pods one at a time. The other wields <code>xargs</code> like a <strong>DevOps warlord</strong>, executing sweeping commands across an entire namespace in <strong>one go</strong>.</p>
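<p>The flavor of it, in a rehearsal-safe sketch (the pod names are invented, and the <code>echo</code> keeps the flamethrower pointed at the floor until you remove it):</p>

```shell
# Rehearsal: pipe pod names through xargs and echo the resulting commands.
# Remove 'echo' when you actually mean it.
printf 'payments-7f9c\ncheckout-5d2b\n' |
  xargs -n1 echo kubectl delete pod

# In anger, the names come from the cluster itself, e.g. all failed pods:
#   kubectl get pods -o name --field-selector=status.phase=Failed |
#     xargs -r kubectl delete
```
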
<p>🔬 <a target="_blank" href="https://deployharmlessly.dev/strategic-patching-modify-resources-without-pain"><strong>Chapter 3: Strategic Patching: Modify Resources Without Pain</strong></a><br />Tired of <strong>reapplying massive YAML files</strong> just to tweak a single field? Ever wish you could <strong>edit just the part you need</strong> without redeploying everything? This chapter explores <strong>three types of patches</strong>—JSON Patch, Strategic Merge Patch, and Merge Patch—so precise they might as well come with a scalpel.</p>
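<p>To give a sense of the scalpel work (against a hypothetical deployment named <code>web</code>; the chapter covers when each patch type bites back):</p>

```shell
# Strategic Merge Patch (kubectl's default): state only what changes.
kubectl patch deployment web -p '{"spec":{"replicas":3}}'

# JSON Merge Patch (RFC 7396): same shape, simpler merge semantics.
kubectl patch deployment web --type merge -p '{"spec":{"replicas":3}}'

# JSON Patch (RFC 6902): surgical operations on exact paths.
kubectl patch deployment web --type json \
  -p '[{"op":"replace","path":"/spec/replicas","value":3}]'
```
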
<p>🔎 <a target="_blank" href="https://deployharmlessly.dev/debugging-and-forensics-csi-mode-for-your-cluster"><strong>Chapter 4: Debugging and Forensics: CSI Mode for Your Cluster</strong></a><br />Logs vanish. Containers die mysteriously. Nodes enter a <strong>"NotReady" state</strong> like they’ve had an existential crisis. You could <strong>randomly restart everything and pray</strong>, or you could <strong>become the Kubernetes detective</strong>—tracking failures through logs, inspecting broken pods from the inside, and using forensic-level debugging tricks to find <strong>what really went wrong</strong>. <em>Article just dropped! Check it out!</em></p>
<p>💻 <strong>Chapter 5: Port Forwarding: Expose Services Like a Secret Hacker</strong><br />Your API is <strong>trapped inside the cluster</strong>, completely inaccessible from the outside world. You could <strong>waste hours setting up an Ingress</strong>, or you could <strong>punch a hole straight through Kubernetes’ walls</strong> with <code>kubectl port-forward</code>. Learn how to <strong>connect to internal databases, debug private services, and access hidden infrastructure—without exposing a single endpoint to the outside world</strong>. <em>Coming March 16, 2025 - stay tuned!</em></p>
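<p>A hedged sketch of the move (the service and namespace names are invented for illustration):</p>

```shell
# Tunnel local port 5432 straight to a Postgres service inside the cluster.
# 'db' in namespace 'backend' is hypothetical; substitute your own victim.
kubectl port-forward -n backend svc/db 5432:5432 &

# Your local psql now talks to cluster-internal infrastructure:
psql -h 127.0.0.1 -p 5432 -U app
```
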
<p>🛑 <strong>Chapter 6: Dry-Run Mode: The “Are You Sure?” Button</strong><br />Ah, production. A place where mistakes <strong>aren’t just educational—they’re career-defining</strong>. If you’ve ever deleted the wrong deployment, patched the wrong resource, or applied a YAML file that <strong>turned out to be from the wrong repo</strong>, this chapter is for you. Dry-run mode is <strong>Kubernetes’ way of asking, “Are you <em>really</em> sure about that?”</strong> before it lets you do something you’ll regret.</p>
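<p>A preview of the safety net (assuming a local <code>deployment.yaml</code> and a deployment named <code>web</code>):</p>

```shell
# Server-side dry run: the API server evaluates the change but persists nothing.
kubectl apply -f deployment.yaml --dry-run=server

# Client-side dry run: validate and print locally, never touching the cluster.
kubectl delete deployment web --dry-run=client
```
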
<h2 id="heading-are-you-ready"><strong>Are You Ready?</strong></h2>
<p>Most Kubernetes engineers <strong>wield</strong> <code>kubectl</code> like a toddler with a flamethrower—excited by its power, but only moments away from <strong>accidentally setting everything on fire</strong>.</p>
<p>This series isn’t about memorizing commands. It’s about <strong>mastering</strong> <code>kubectl</code> so your cluster obeys you, not the other way around. No more blind guessing. No more ritualistic YAML sacrifices. <strong>Just raw, unapologetic control.</strong></p>
<p>Your cluster is vast, mysterious, and often <strong>borderline malicious</strong>. But you? You’re about to become the operator <strong>who pulls the real strings</strong>.</p>
<p>Let’s dive in. 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Reimagining Agile for the Future]]></title><description><![CDATA[Imagine a world where sprint goals are set before your morning coffee, retrospectives conduct themselves with all the charm of a perfectly trained butler, and the backlog practically writes its own user stories—possibly while humming an eerily cheerf...]]></description><link>https://deployharmlessly.dev/reimagining-agile-for-the-future</link><guid isPermaLink="true">https://deployharmlessly.dev/reimagining-agile-for-the-future</guid><category><![CDATA[agile]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Devops]]></category><category><![CDATA[software development]]></category><category><![CDATA[innovation]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Tue, 28 Jan 2025 06:53:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738047362257/3f7b6b6a-b112-480b-bda5-9b58d495806b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Imagine a world where sprint goals are set before your morning coffee, retrospectives conduct themselves with all the charm of a perfectly trained butler, and the backlog practically writes its own user stories—possibly while humming an eerily cheerful tune. Sounds like science fiction, right? The kind where AI assistants are both brilliant and mildly passive-aggressive? Maybe not for long.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738044677679/196c6924-9a12-4a20-afc5-21246a59b9bf.jpeg" alt class="image--center mx-auto" /></p>
<p>Agile has long been the gold standard for managing change, offering a reliable framework to adapt and thrive. But what happens when the pace of change accelerates so rapidly that even Agile feels the strain? Suddenly, the sci-fi future we imagined starts to feel much closer to reality.</p>
<p>In today’s tech world, yesterday’s breakthrough is tomorrow’s museum piece, and staying ahead means constantly adapting. As artificial intelligence continues to reshape industries, it’s time to ask: how can Agile evolve to thrive in an era defined by speed, complexity, and innovation?</p>
<h2 id="heading-agiles-core-of-adaptability">Agile’s Core of Adaptability</h2>
<p>The cornerstone of Agile, if we were to anthropomorphize it just a smidge, is its adaptability. Born out of the need to flexibly meet changing requirements, Agile frameworks like Scrum and Kanban have thrived by championing iterative improvements and valuing customer collaboration over rigid contract negotiation. However, as we look to the future, will the pillars of Agile still stand strong amid the quakes of technological advancement?</p>
<p>Historians of the future—likely AI entities with a flair for overly dramatic retellings and inexplicable British accents—might say that the key to surviving relentless technological evolution is adaptability. This could mean seeing Agile not as a rigid framework, but more like clay, malleable at its core yet firm enough to hold form.</p>
<p>This adaptability takes on a whole new dimension with the integration of artificial intelligence, where Agile workflows could become not just responsive, but predictive.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738044711052/c1edd537-23ff-4f73-a903-f479a23e822b.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-ai-and-the-future-of-agile-workflows">AI and the Future of Agile Workflows</h2>
<p>Firstly, the marriage of AI and Agile could be the Mac Pro to our software engineer's coding brain: raw horsepower amplifying human ingenuity.</p>
<h3 id="heading-ai-as-a-problem-solver">AI as a Problem-Solver</h3>
<p>AI tools infused into Agile workflows could predict roadblocks before they even appear on the horizon, offering solutions faster than a Dalek's extermination beam. Imagine an AI-driven sprint planner that not only identifies bottlenecks but also suggests task reassignment based on team availability and skillsets in real time. AI could even flag potential risks, like technical debt accumulating in the backlog, before they snowball into the kind of development nightmare that keeps DevOps engineers awake at night, staring at the ceiling and questioning the meaning of life (and why the Jenkins pipeline is still broken).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738044789290/e974666e-98f2-4204-a0b5-e4d6750bc9ad.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-challenges-of-ai-integration">The Challenges of AI Integration</h3>
<p>However, the promise of AI comes with its own challenges. Relying too heavily on automation could dilute the human creativity that defines Agile teams, while ethical questions around data privacy and bias in AI-driven decision-making can’t be ignored. These potential pitfalls mean that while AI can amplify Agile’s adaptability, it must remain a supporting player, not the star of the show.</p>
<h3 id="heading-ai-as-a-creative-assistant">AI as a Creative Assistant</h3>
<p>But AI’s potential doesn’t stop there. Imagine intelligent agents that analyze team dynamics to recommend adjustments in roles or workflows, or bots that assist developers by generating boilerplate code on demand. AI could also provide real-time feedback loops for quality assurance, automatically flagging inefficiencies or coding errors as they arise. These systems wouldn’t replace Agile teams but act as their tireless assistants, reducing the cognitive load and allowing teams to focus on creative problem-solving rather than mundane tasks.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738044814170/4f81122c-684d-4805-9b2d-641b2ebe48ae.jpeg" alt class="image--center mx-auto" /></p>
<p>By thoughtfully augmenting Agile with AI, we move closer to workflows that embody the best of both worlds: predictive insights and adaptive execution—a hallmark of a future-ready methodology.</p>
<h2 id="heading-expanding-agile-through-collaboration-and-customization">Expanding Agile Through Collaboration and Customization</h2>
<h3 id="heading-virtual-collaboration-a-new-dimension-for-agile">Virtual Collaboration: A New Dimension for Agile</h3>
<p>Remote collaboration tools have already democratized the workspace, allowing developers to code from anywhere—mountaintops, cryptic zen gardens, or even very isolated desserts, though the Wi-Fi situation in a tiramisu is notoriously unreliable.</p>
<p>Picture a distributed team using VR headsets to participate in a fully immersive Scrum meeting, where team members can interact with a 3D sprint board as if they were in the same room. Or imagine a kanban board with AI-powered prioritization that adapts dynamically based on team input and real-time project updates, eliminating the need for tedious manual adjustments.</p>
<p>As hybrid and remote work continue their reign, innovations like these could turn virtual collaboration into a seamless extension of Agile’s principles.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738044831294/66cd6ee3-5053-4efe-9b74-373d4a64da18.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-blockchain-for-transparency-and-trust">Blockchain for Transparency and Trust</h3>
<p>As we rethink how teams collaborate across virtual spaces, the same spirit of innovation could transform Agile’s decision-making processes, bringing decentralization and transparency into sharper focus.</p>
<p>Picture a sprint retrospective where peer-to-peer consensus is recorded immutably, ensuring that accountability and transparency are baked into every decision—quite possibly along with a bit of smug satisfaction that no one needs to dig through yet another 63-reply email thread to find what was agreed.</p>
<p>Blockchain could even enable tamper-proof documentation for regulated industries, guaranteeing compliance while maintaining Agile’s iterative spirit.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738044858157/b2361900-ff09-467d-9e3c-a747aecb63c9.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-hyper-adaptive-frameworks-for-specialized-industries">Hyper-Adaptive Frameworks for Specialized Industries</h3>
<p>Finally, we might see the rise of hyper-adaptive Agile frameworks tailored specifically to industry needs. For instance, healthcare teams could deploy Agile methods that automatically integrate compliance requirements into sprint planning, or finance teams could use smart contracts to enforce regulatory checks at each iteration. These modular guidelines would act like plug-and-play components, enabling industries to harness Agile’s iterative gusto without sacrificing the oversight they depend on.</p>
<h2 id="heading-the-timeless-spirit-of-agile">The Timeless Spirit of Agile</h2>
<p>In this world of fantastical adaptations, one unyielding truth remains: the essence of Agile is change itself.</p>
<p>Predictions and strategy directions are like cookies for the soul—comforting in their warmth, fleeting in their existence, and somehow always gone before you’ve had enough.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738044872673/70687a34-c0a3-405e-a7bb-3ddc5bed97aa.jpeg" alt class="image--center mx-auto" /></p>
<p>Agile is more than a methodology; it's a mindset, a living testament to humanity's ability to embrace uncertainty and thrive—preferably with a debugger in one hand and a cup of strong coffee in the other.</p>
<p>As we step into the unknown with open-source shoes and a playful spirit, one thing is certain: Agile will continue to surprise us. The real question is, how will you help shape Agile's next evolution?</p>
<p>Cover Photo by <a target="_blank" href="https://unsplash.com/@marvelous?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Marvin Meyer</a> on <a target="_blank" href="https://unsplash.com/photos/people-sitting-down-near-table-with-assorted-laptop-computers-SYTO3xs06fU?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></p>
]]></content:encoded></item><item><title><![CDATA[Building a Golang Syslog Server: A Journey Through the Digital Cosmos]]></title><description><![CDATA[Introduction
Welcome, fellow traveler, to the whimsical world of system logging—a realm both mundane and essential. In this guide, we embark on an adventure to craft a Golang-based syslog server, inspired by the delightful absurdity of Douglas Adams’...]]></description><link>https://deployharmlessly.dev/building-a-golang-syslog-server-a-journey-through-the-digital-cosmos</link><guid isPermaLink="true">https://deployharmlessly.dev/building-a-golang-syslog-server-a-journey-through-the-digital-cosmos</guid><category><![CDATA[golang]]></category><category><![CDATA[syslog]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[software development]]></category><category><![CDATA[logging]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Tue, 14 Jan 2025 18:41:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736879896286/eb1a676a-0be5-4999-9408-1b843da8e109.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Welcome, fellow traveler, to the whimsical world of system logging—a realm both mundane and essential. In this guide, we embark on an adventure to craft a Golang-based syslog server, inspired by the delightful absurdity of Douglas Adams’ universe. Much like the planet Earth in <em>Mostly Harmless</em>, our server will be mostly harmless—and delightfully efficient. So, grab your towel and dive into the cosmos of code.</p>
<p>The full code is available on my GitHub repository: <a target="_blank" href="https://github.com/jleski/wetherly">jleski/wetherly</a>.</p>
<h2 id="heading-laying-the-groundwork">Laying the Groundwork</h2>
<p>Before launching our metaphorical spaceship, let’s prepare the essentials:</p>
<ul>
<li><p><strong>Go</strong>: Our language of choice, as sleek and reliable as a well-tuned spaceship.</p>
</li>
<li><p><strong>Docker</strong>: Ensuring our server runs smoothly across the galaxy of environments.</p>
</li>
<li><p><strong>Task</strong>: A task runner to automate the myriad tasks needed for a shipshape server.</p>
</li>
<li><p><strong>Helm</strong>: For deploying in the Kubernetes nebula with precision.</p>
</li>
<li><p><strong>Netcat (nc)</strong>: The Swiss Army knife of networking, for sending test messages.</p>
</li>
<li><p><strong>Golangci-lint</strong>: Optional but invaluable for polished code.</p>
</li>
</ul>
<h3 id="heading-setting-up-the-environment">Setting Up the Environment</h3>
<p>Clone the repository and prepare your development environment:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/jleski/wetherly.git
<span class="hljs-built_in">cd</span> wetherly
task dev:setup
</code></pre>
<p>This installs the necessary dependencies and readies your workspace.</p>
<h2 id="heading-building-the-server">Building the Server</h2>
<p>Our syslog server, much like a Vogon constructor fleet, is a marvel of precision. Its main components reside in <code>main.go</code>, handling syslog messages per the RFC5424 standard.</p>
<h3 id="heading-key-components">Key Components</h3>
<ul>
<li><p><strong>Listener</strong>: Listening on port 6601 for intergalactic messages.</p>
</li>
<li><p><strong>Parser</strong>: Using the <code>github.com/influxdata/go-syslog/v3</code> library to decode messages.</p>
</li>
<li><p><strong>Handler</strong>: Spawning a goroutine for each connection, ensuring concurrency.</p>
</li>
</ul>
<h2 id="heading-main-functions-of-the-syslog-server">Main Functions of the Syslog Server</h2>
<h4 id="heading-1-main">1. main()</h4>
<p>Sets up the server to listen for connections and processes them concurrently.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    printStartupInfo()

    listener, err := net.Listen(<span class="hljs-string">"tcp"</span>, SYSLOG_PORT)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"%sError creating TCP listener: %v%s"</span>, RedColor, err, ResetColor)
    }
    <span class="hljs-keyword">defer</span> listener.Close()

    fmt.Printf(<span class="hljs-string">"%s✅ Server is ready to accept connections%s\n\n"</span>, GreenColor, ResetColor)

    <span class="hljs-keyword">for</span> {
        conn, err := listener.Accept()
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Printf(<span class="hljs-string">"%sError accepting connection: %v%s"</span>, RedColor, err, ResetColor)
            <span class="hljs-keyword">continue</span>
        }

        fmt.Printf(<span class="hljs-string">"%s📥 New connection from %s%s\n"</span>, GreenColor, conn.RemoteAddr(), ResetColor)
        <span class="hljs-keyword">go</span> handleConnection(conn)
    }
}
</code></pre>
<h4 id="heading-2-printstartupinfo">2. printStartupInfo()</h4>
<p>Prints a colorful startup banner and server details.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">printStartupInfo</span><span class="hljs-params">()</span></span> {
    fmt.Print(CyanColor)
    fmt.Print(BANNER)
    fmt.Print(ResetColor)

    hostname, err := os.Hostname()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        hostname = <span class="hljs-string">"unknown"</span>
    }

    fmt.Print(GreenColor)
    fmt.Printf(<span class="hljs-string">"🚀 Starting Wetherly Syslog Server...\n"</span>)
    fmt.Printf(<span class="hljs-string">"📅 Time: %s\n"</span>, time.Now().Format(time.RFC1123))
    fmt.Printf(<span class="hljs-string">"💻 Hostname: %s\n"</span>, hostname)
    fmt.Printf(<span class="hljs-string">"🔌 Protocol: TCP\n"</span>)
    fmt.Printf(<span class="hljs-string">"🎯 Port: 6601\n"</span>)
    fmt.Printf(<span class="hljs-string">"📦 Buffer Size: %d bytes\n"</span>, BUFFER_SIZE)
    fmt.Print(ResetColor)
    fmt.Println(<span class="hljs-string">"=========================================="</span>)
}
</code></pre>
<h4 id="heading-3-handleconnectionconn-netconn">3. handleConnection(conn net.Conn)</h4>
<p>Processes each connection, reading and parsing messages.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">handleConnection</span><span class="hljs-params">(conn net.Conn)</span></span> {
    <span class="hljs-keyword">defer</span> conn.Close()

    buffer := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">byte</span>, BUFFER_SIZE)
    <span class="hljs-keyword">for</span> {
        n, err := conn.Read(buffer)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            <span class="hljs-keyword">if</span> err.Error() != <span class="hljs-string">"EOF"</span> {
                log.Printf(<span class="hljs-string">"%sError reading from connection: %v%s"</span>, RedColor, err, ResetColor)
            }
            fmt.Printf(<span class="hljs-string">"%s📤 Connection closed from %s%s\n"</span>, YellowColor, conn.RemoteAddr(), ResetColor)
            <span class="hljs-keyword">return</span>
        }

        message := <span class="hljs-keyword">string</span>(buffer[:n])
        timestamp := time.Now().Format(<span class="hljs-string">"2006-01-02 15:04:05"</span>)

        <span class="hljs-keyword">if</span> strings.HasPrefix(message, <span class="hljs-string">"&lt;"</span>) {
            parsedMsg, err := parseRFC5424Message(message)
            <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
                fmt.Printf(<span class="hljs-string">"%sError parsing RFC5424 message: %v%s\n"</span>, RedColor, err, ResetColor)
            } <span class="hljs-keyword">else</span> {
                fmt.Printf(<span class="hljs-string">"%s[%s] Parsed RFC5424 Message:%s\n%s%+v%s\n"</span>, GreenColor, timestamp, ResetColor, GreenColor, parsedMsg, ResetColor)
            }
        } <span class="hljs-keyword">else</span> {
            fmt.Printf(<span class="hljs-string">"%s[%s] Message from %v:%s\n%s%s%s\n"</span>, GreenColor, timestamp, conn.RemoteAddr(), ResetColor, GreenColor, message, ResetColor)
        }
    }
}
</code></pre>
<h4 id="heading-4-parserfc5424messagemsg-string">4. parseRFC5424Message(msg string)</h4>
<p>Decodes syslog messages formatted per RFC5424.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">parseRFC5424Message</span><span class="hljs-params">(msg <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(*rfc5424.SyslogMessage, error)</span></span> {
    parser := rfc5424.NewParser()
    parsedMsg, err := parser.Parse([]<span class="hljs-keyword">byte</span>(msg))
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"error parsing RFC5424 message: %w"</span>, err)
    }

    <span class="hljs-comment">// Type assertion to convert syslog.Message to *rfc5424.SyslogMessage</span>
    rfc5424Msg, ok := parsedMsg.(*rfc5424.SyslogMessage)
    <span class="hljs-keyword">if</span> !ok {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"parsed message is not of type *rfc5424.SyslogMessage"</span>)
    }

    <span class="hljs-keyword">return</span> rfc5424Msg, <span class="hljs-literal">nil</span>
}
</code></pre>
<h2 id="heading-tying-it-all-together">Tying it all together</h2>
<p>Here’s the full main.go file:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net"</span>
    <span class="hljs-string">"os"</span>
    <span class="hljs-string">"strings"</span>
    <span class="hljs-string">"time"</span>

    <span class="hljs-string">"github.com/influxdata/go-syslog/v3/rfc5424"</span>
)

<span class="hljs-keyword">const</span> (
    SYSLOG_PORT = <span class="hljs-string">":6601"</span>
    BUFFER_SIZE = <span class="hljs-number">8192</span>
    BANNER      = <span class="hljs-string">`
 __          __  _   _                _       
 \ \        / / | | | |              | |      
  \ \  /\  / /__| |_| |__   ___ _ __| |_   _ 
   \ \/  \/ / _ \ __| '_ \ / _ \ '__| | | | |
    \  /\  /  __/ |_| | | |  __/ |  | | |_| |
     \/  \/ \___|\__|_| |_|\___|_|  |_|\__, |
                                        __/ |
                                       |___/ 
    Syslog Server v1.0.0
    ==========================================
`</span>
    CyanColor   = <span class="hljs-string">"\033[1;36m"</span>
    GreenColor  = <span class="hljs-string">"\033[1;32m"</span>
    RedColor    = <span class="hljs-string">"\033[1;31m"</span>
    YellowColor = <span class="hljs-string">"\033[1;33m"</span>
    ResetColor  = <span class="hljs-string">"\033[0m"</span>
)

<span class="hljs-keyword">type</span> RFC5424Message <span class="hljs-keyword">struct</span> {
    Priority       <span class="hljs-keyword">int</span>
    Version        <span class="hljs-keyword">string</span>
    Timestamp      time.Time
    Hostname       <span class="hljs-keyword">string</span>
    AppName        <span class="hljs-keyword">string</span>
    ProcID         <span class="hljs-keyword">string</span>
    MsgID          <span class="hljs-keyword">string</span>
    StructuredData <span class="hljs-keyword">string</span> <span class="hljs-comment">// Structured data section of the message</span>
    Message        <span class="hljs-keyword">string</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">parseRFC5424Message</span><span class="hljs-params">(msg <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(*rfc5424.SyslogMessage, error)</span></span> {
    parser := rfc5424.NewParser()
    parsedMsg, err := parser.Parse([]<span class="hljs-keyword">byte</span>(msg))
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"error parsing RFC5424 message: %w"</span>, err)
    }

    <span class="hljs-comment">// Type assertion to convert syslog.Message to *rfc5424.SyslogMessage</span>
    rfc5424Msg, ok := parsedMsg.(*rfc5424.SyslogMessage)
    <span class="hljs-keyword">if</span> !ok {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"parsed message is not of type *rfc5424.SyslogMessage"</span>)
    }

    <span class="hljs-keyword">return</span> rfc5424Msg, <span class="hljs-literal">nil</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">printStartupInfo</span><span class="hljs-params">()</span></span> {
    fmt.Print(CyanColor)
    fmt.Print(BANNER)
    fmt.Print(ResetColor)

    hostname, err := os.Hostname()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        hostname = <span class="hljs-string">"unknown"</span>
    }

    fmt.Print(GreenColor)
    fmt.Printf(<span class="hljs-string">"🚀 Starting Wetherly Syslog Server...\n"</span>)
    fmt.Printf(<span class="hljs-string">"📅 Time: %s\n"</span>, time.Now().Format(time.RFC1123))
    fmt.Printf(<span class="hljs-string">"💻 Hostname: %s\n"</span>, hostname)
    fmt.Printf(<span class="hljs-string">"🔌 Protocol: TCP\n"</span>)
    fmt.Printf(<span class="hljs-string">"🎯 Port: 6601\n"</span>)
    fmt.Printf(<span class="hljs-string">"📦 Buffer Size: %d bytes\n"</span>, BUFFER_SIZE)
    fmt.Print(ResetColor)
    fmt.Println(<span class="hljs-string">"=========================================="</span>)
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    printStartupInfo()

    listener, err := net.Listen(<span class="hljs-string">"tcp"</span>, SYSLOG_PORT)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"%sError creating TCP listener: %v%s"</span>, RedColor, err, ResetColor)
    }
    <span class="hljs-keyword">defer</span> listener.Close()

    fmt.Printf(<span class="hljs-string">"%s✅ Server is ready to accept connections%s\n\n"</span>, GreenColor, ResetColor)

    <span class="hljs-keyword">for</span> {
        conn, err := listener.Accept()
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Printf(<span class="hljs-string">"%sError accepting connection: %v%s"</span>, RedColor, err, ResetColor)
            <span class="hljs-keyword">continue</span>
        }

        fmt.Printf(<span class="hljs-string">"%s📥 New connection from %s%s\n"</span>, GreenColor, conn.RemoteAddr(), ResetColor)
        <span class="hljs-keyword">go</span> handleConnection(conn)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">handleConnection</span><span class="hljs-params">(conn net.Conn)</span></span> {
    <span class="hljs-keyword">defer</span> conn.Close()

    buffer := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">byte</span>, BUFFER_SIZE)
    <span class="hljs-keyword">for</span> {
        n, err := conn.Read(buffer)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            <span class="hljs-keyword">if</span> err.Error() != <span class="hljs-string">"EOF"</span> {
                log.Printf(<span class="hljs-string">"%sError reading from connection: %v%s"</span>, RedColor, err, ResetColor)
            }
            fmt.Printf(<span class="hljs-string">"%s📤 Connection closed from %s%s\n"</span>, YellowColor, conn.RemoteAddr(), ResetColor)
            <span class="hljs-keyword">return</span>
        }

        message := <span class="hljs-keyword">string</span>(buffer[:n])
        timestamp := time.Now().Format(<span class="hljs-string">"2006-01-02 15:04:05"</span>)

        <span class="hljs-keyword">if</span> strings.HasPrefix(message, <span class="hljs-string">"&lt;"</span>) {
            parsedMsg, err := parseRFC5424Message(message)
            <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
                fmt.Printf(<span class="hljs-string">"%sError parsing RFC5424 message: %v%s\n"</span>, RedColor, err, ResetColor)
            } <span class="hljs-keyword">else</span> {
                fmt.Printf(<span class="hljs-string">"%s[%s] Parsed RFC5424 Message:%s\n%s%+v%s\n"</span>, GreenColor, timestamp, ResetColor, GreenColor, parsedMsg, ResetColor)
            }
        } <span class="hljs-keyword">else</span> {
            fmt.Printf(<span class="hljs-string">"%s[%s] Message from %v:%s\n%s%s%s\n"</span>, GreenColor, timestamp, conn.RemoteAddr(), ResetColor, GreenColor, message, ResetColor)
        }
    }
}
</code></pre>
<h2 id="heading-testing-and-deployment">Testing and Deployment</h2>
<p>Once our server is built, it's time to test and deploy it. We use Docker to containerize our application, ensuring it runs consistently across different environments. The Dockerfile is straightforward, building our Go application and packaging it into a lightweight Alpine image.</p>
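<p>The Dockerfile itself lives in the repository; a minimal multi-stage sketch of that build (the base-image tags and file paths here are my guesses, not the repo's exact ones) could look like:</p>

```dockerfile
# Build stage: compile a static Go binary.
FROM golang:1.23-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /wetherly .

# Runtime stage: a lightweight Alpine image carrying just the binary.
FROM alpine:3.20
COPY --from=build /wetherly /usr/local/bin/wetherly
EXPOSE 6601
ENTRYPOINT ["wetherly"]
```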
<h3 id="heading-running-the-server">Running the Server</h3>
<p>To run the server locally, use the following command:</p>
<pre><code class="lang-bash">docker run -p 6601:6601 jleski/wetherly:latest
</code></pre>
<p>This will start the server, ready to accept syslog messages on port 6601.</p>
<h3 id="heading-sending-test-messages">Sending Test Messages</h3>
<p>We can send test messages using Netcat or the task command. For example, to send a simple test message, use:</p>
<pre><code class="lang-bash">task <span class="hljs-built_in">test</span>:send
</code></pre>
<p>For more complex messages, such as those formatted according to RFC5424, use:</p>
<pre><code class="lang-bash">task <span class="hljs-built_in">test</span>:send:rfc5424
</code></pre>
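<p>The task targets presumably wrap something like Netcat, so you can also push a raw RFC5424 frame in by hand. The message below is only a sketch; any syntactically valid RFC5424 line will do:</p>

```shell
# A minimal RFC5424 message: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
MSG='<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - An RFC5424 test message'

# Ship it to the server; -w1 gives up after a second if nothing answers.
printf '%s' "$MSG" | nc -w1 localhost 6601 || echo "no server listening on 6601"
```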
<p>Check my GitHub repository for the full code: <a target="_blank" href="https://github.com/jleski/wetherly">jleski/wetherly: Syslog receiver</a></p>
<h2 id="heading-conclusionhttpsgithubcomjleskiwetherly">Conclusion</h2>
<p>And there you have it—a mostly harmless syslog server, ready to log messages from across the digital cosmos. As you continue to explore and enhance this codebase, remember the words of Douglas Adams: "Don't Panic." With a well-prepared task list and a touch of humor, you're well-equipped to tackle any challenge that comes your way. May your logs be ever verbose, your errors be few, and your adventures be plentiful.</p>
]]></content:encoded></item><item><title><![CDATA[Faraday: The Hitchhiker's Guide to Command-Line AI]]></title><description><![CDATA[Building Faraday: A Guide to Crafting Your Command-Line AI Companion
In a universe teeming with digital chaos, Faraday emerges as the improbable hero, a command-line Go application designed to converse with AI services. Much like a towel in the hands...]]></description><link>https://deployharmlessly.dev/faraday-the-hitchhikers-guide-to-command-line-ai</link><guid isPermaLink="true">https://deployharmlessly.dev/faraday-the-hitchhikers-guide-to-command-line-ai</guid><category><![CDATA[Go Language]]></category><category><![CDATA[command line]]></category><category><![CDATA[AI]]></category><category><![CDATA[Developer Tools]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Mon, 30 Dec 2024 16:38:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735577251513/b35dfd44-4097-402d-9bca-d06fb30fd8d8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-building-faraday-a-guide-to-crafting-your-command-line-ai-companion">Building Faraday: A Guide to Crafting Your Command-Line AI Companion</h2>
<p>In a universe teeming with digital chaos, Faraday emerges as the improbable hero, a command-line Go application designed to converse with AI services. Much like a towel in the hands of a seasoned hitchhiker, Faraday is indispensable for those navigating the vast expanse of artificial intelligence.</p>
<p>Faraday's charm lies in its simplicity. It takes user prompts and, with the grace of a Vogon poet, communicates with an AI service to deliver responses. The application is configured via a YAML file, ensuring that even the most befuddled of users can set it up without resorting to pan-galactic gargle blasters.</p>
<p>Building Faraday is as straightforward as asking for a cup of tea from the Nutri-Matic machine. With a simple <code>task build</code>, you can compile it for your local system. For those with intergalactic ambitions, <code>task release</code> allows you to build it for multiple platforms, ensuring compatibility across the galaxy.</p>
<h3 id="heading-technical-details-navigating-the-code-cosmos">Technical Details: Navigating the Code Cosmos</h3>
<p>Faraday's core functionality revolves around its ability to parse command-line arguments and interact with an AI service. The main function processes user input, checking for context files using the <code>@file</code> syntax. It then constructs a request body, including user prompts and optional context, and sends it to the AI service via HTTP POST.</p>
<p>The application utilizes Go's <code>flag</code> package for argument parsing and <code>yaml.v3</code> for configuration management. It gracefully handles errors, ensuring that even the most catastrophic of failures are met with a polite message rather than a Vogon-like tirade.</p>
<h3 id="heading-file-by-file-guide-to-building-faraday">File-by-File Guide to Building Faraday</h3>
<p><strong>main.go:</strong> This is the heart of Faraday. It initializes configuration settings from <code>config.yaml</code>, parses command-line arguments, and communicates with the AI service.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">init</span><span class="hljs-params">()</span></span> {
  exePath, err := os.Executable()
  <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    fmt.Printf(<span class="hljs-string">"Error getting executable path: %v\n"</span>, err)
    os.Exit(<span class="hljs-number">1</span>)
  }
  configFilePath := filepath.Join(filepath.Dir(exePath), <span class="hljs-string">"config.yaml"</span>)
  configFile, err := os.Open(configFilePath)
  <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    fmt.Printf(<span class="hljs-string">"Error opening config file: %v\n"</span>, err)
    os.Exit(<span class="hljs-number">1</span>)
  }
  <span class="hljs-keyword">defer</span> configFile.Close()

  yamlDecoder := yaml.NewDecoder(configFile)
  err = yamlDecoder.Decode(&amp;config)
  <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    fmt.Printf(<span class="hljs-string">"Error decoding config file: %v\n"</span>, err)
    os.Exit(<span class="hljs-number">1</span>)
  }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">callAIService</span><span class="hljs-params">(prompt <span class="hljs-keyword">string</span>, contextFilePath <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(<span class="hljs-keyword">string</span>, error)</span></span> {
  <span class="hljs-comment">// ... function implementation ...</span>
}
</code></pre>
<p>The <code>callAIService()</code> function constructs and sends HTTP requests, handling responses with elegance.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">callAIService</span><span class="hljs-params">(prompt <span class="hljs-keyword">string</span>, contextFilePath <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(<span class="hljs-keyword">string</span>, error)</span></span> {
  <span class="hljs-comment">// ... function implementation ...</span>
}
</code></pre>
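<p>The real implementation is in the repository; purely as a sketch of what a function with this signature could do (the JSON body shape, the <code>context</code> field name, and the bearer-token header below are my assumptions, not Faraday's actual wire format):</p>

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

// Config mirrors the api section of config.yaml shown below.
type Config struct {
	API struct {
		URL string `yaml:"url"`
		Key string `yaml:"key"`
	} `yaml:"api"`
}

var config Config

// callAIService builds a JSON request from the prompt (plus optional file
// context) and POSTs it to the configured endpoint. The payload shape and
// the Authorization header are illustrative guesses.
func callAIService(prompt string, contextFilePath string) (string, error) {
	payload := map[string]string{"prompt": prompt}
	if contextFilePath != "" {
		data, err := os.ReadFile(contextFilePath)
		if err != nil {
			return "", fmt.Errorf("reading context file: %w", err)
		}
		payload["context"] = string(data)
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return "", err
	}
	req, err := http.NewRequest(http.MethodPost, config.API.URL, bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+config.API.Key)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	return string(out), err
}

func main() {
	// Without a real endpoint configured, this fails politely rather than
	// reciting Vogon poetry.
	reply, err := callAIService("So long, and thanks for all the fish", "")
	if err != nil {
		fmt.Printf("Error calling AI service: %v\n", err)
		return
	}
	fmt.Println(reply)
}
```

Reading the optional context file inside the call keeps <code>main()</code> free of I/O plumbing; swap the payload keys for whatever your AI endpoint actually expects.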
<p><strong>config.yaml:</strong> This file is crucial for configuration, containing the API URL and key. It should reside in the same directory as the executable, ensuring seamless communication with the AI service.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">api:</span>
  <span class="hljs-attr">url:</span> <span class="hljs-string">"https://api.example.com"</span>
  <span class="hljs-attr">key:</span> <span class="hljs-string">"your-api-key"</span>
</code></pre>
<p><strong>Build Process:</strong> Ensure Go 1.23 or later is installed, along with <a target="_blank" href="https://taskfile.dev/">Task</a> for task management. Use <code>task build</code> to compile locally, or <code>task release</code> for cross-platform builds:</p>
<pre><code class="lang-bash">task build
</code></pre>
<pre><code class="lang-bash">task release
</code></pre>
<p>In conclusion, Faraday is a delightful tool for those seeking to interact with AI services from the comfort of their command line. Its ease of use and configuration make it a must-have for any digital hitchhiker.</p>
<p>For the source and pre-built binaries, check out my repository at <a target="_blank" href="https://github.com/jleski/faraday">https://github.com/jleski/faraday</a></p>
]]></content:encoded></item><item><title><![CDATA[Welcome to Deploy Harmlessly – The Guide to Navigating Cloud, DevOps, and Automation Without a Hitch]]></title><description><![CDATA[Hello, and welcome to Deploy Harmlessly, a blog where we attempt to solve the greatest mysteries of the tech universe: how to deploy software without causing the entire infrastructure to implode. It’s a bit like trying to pilot a spaceship through a ...]]></description><link>https://deployharmlessly.dev/welcome-to-deploy-harmlessly-the-guide-to-navigating-cloud-devops-and-automation-without-a-hitch</link><guid isPermaLink="true">https://deployharmlessly.dev/welcome-to-deploy-harmlessly-the-guide-to-navigating-cloud-devops-and-automation-without-a-hitch</guid><category><![CDATA[Devops]]></category><category><![CDATA[CloudComputing]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[automation]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Jaakko Leskinen]]></dc:creator><pubDate>Mon, 30 Dec 2024 16:11:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735577187297/ac7246eb-0dfd-4386-962c-6048947bb1a8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, and welcome to <em>Deploy Harmlessly</em>, a blog where we attempt to solve the greatest mysteries of the tech universe: how to deploy software without causing the entire infrastructure to implode. It’s a bit like trying to pilot a spaceship through a supernova — if the spaceship were made of code and the supernova were a CI/CD pipeline that’s gone horribly wrong.</p>
<p>Before we dive into the technical escapades, let me introduce myself. Don’t worry, I won’t require you to hitchhike across the galaxy or decode any cryptic messages from a towel. I’m here to guide you through the strange and wonderful world of DevOps, Kubernetes, Azure, automation, and everything in between. But more importantly, I’m here to help you deploy systems <em>harmlessly</em>, because, as we all know, it’s much easier to deploy something with a good plan than it is to deal with the catastrophic results of a failed deployment — especially when you’ve accidentally launched the server into deep space.</p>
<h3 id="heading-a-bit-about-me-no-hitchhiking-required">A Bit About Me (No Hitchhiking Required)</h3>
<p>I’ve spent over 20 years navigating the sometimes baffling terrain of the tech universe. As a Senior Consultant, I specialize in public cloud, hybrid cloud, DevOps, Kubernetes, security, and automation. My job involves talking to both technical experts and business decision-makers, which, much like dealing with a Vogon poetry recital, requires a certain level of patience and a well-timed sense of humor.</p>
<p>While I’ve spent years building complex systems and trying to explain to clients why <em>yes</em>, that Kubernetes cluster is definitely safe, I’ve also realized something important: the tech world doesn’t have to be a maze of anxiety and disaster recovery plans. I believe in creating smooth, predictable, and — dare I say it — harmless deployments. So much like the cheerful yet occasionally confusing universe of <em>Hitchhiker’s Guide</em>, my approach is: "don’t panic," but also <em>definitely</em> carry a towel (or, in this case, a solid CI/CD pipeline).</p>
<h3 id="heading-why-deploy-harmlessly">Why <em>Deploy Harmlessly</em>?</h3>
<p>The name <em>Deploy Harmlessly</em> is a nod to a world where we embrace technology with a sense of humor and a touch of caution. If you’ve ever had the experience of deploying code only to have it mysteriously break everything (including the coffee machine), you’ll understand why I think deploying “harmlessly” is not only a goal but a worthy aspiration.</p>
<p>Like the universe itself, deployment processes are full of chaos, uncertainty, and occasional moments of pure existential dread — but they don’t have to be. By combining the right practices, tools, and philosophies, we can build systems that are more like the friendly, self-aware supercomputers in <em>The Hitchhiker’s Guide</em> (which, let’s be honest, would probably be running Kubernetes if it existed in real life) and less like the unpredictable mess that often happens when you forget to check the status of a job in the pipeline.</p>
<h3 id="heading-why-devops-kubernetes-and-automation-or-how-not-to-end-up-in-a-parallel-universe-where-everything-is-on-fire">Why DevOps, Kubernetes, and Automation? (Or, How Not to End Up in a Parallel Universe Where Everything is on Fire)</h3>
<p>If you’ve made it this far, it’s safe to assume you’ve dabbled in DevOps, Kubernetes, or automation. These aren’t just trendy terms, like the latest fashion in galactic attire (though, come to think of it, a good DevOps pipeline <em>is</em> a bit like an ultra-stylish spacesuit). These are the building blocks of modern infrastructure, and if you use them correctly, they can make everything run smoothly — even when the universe throws unexpected black holes in your path.</p>
<p>Kubernetes, for example, is like the Hitchhiker's Guide to infrastructure: it can help you manage complex systems, scale effortlessly, and ensure that your services never fall into a black hole of downtime. DevOps is the philosophy behind it all — it’s about collaboration, constant iteration, and making sure that the deployment doesn’t, you know, <em>explode</em>. Automation is the secret sauce, because if you’re still doing manual tasks like a hapless human running through a bureaucratic spaceport, you’re doing it wrong. Robots are better at this.</p>
<h3 id="heading-sci-fi-inspirations-and-the-art-of-harmless-deployment">Sci-Fi Inspirations and the Art of Harmless Deployment</h3>
<p>If you’ve picked up on a certain sci-fi tone by now, that’s no accident. As a devoted fan of <em>The Hitchhiker’s Guide to the Galaxy</em>, <em>Mostly Harmless</em>, <em>Dune</em>, and all things Asimov, I believe that technology and storytelling share a common thread. Both are about systems, complexity, and the occasional unexpected twist. And just as Arthur Dent once learned that panicking is the last thing you want to do when the universe goes haywire, I believe that calmly deploying your infrastructure is the key to avoiding the inevitable “system down” message that often follows a hasty deployment.</p>
<p>So, as you journey through this blog, you’ll find not only technical tutorials and insights into Kubernetes, Azure, and automation but also some musings on how sci-fi has influenced the way we think about tech, systems, and the future of our digital universe. After all, whether it’s an AI assistant helping you launch your next Kubernetes pod or a galaxy of stars just waiting to be explored, the journey is always better when you’re prepared — and maybe carrying a towel, just in case.</p>
<h3 id="heading-whats-next">What’s Next?</h3>
<p>On this blog, you can expect:</p>
<ul>
<li><p>Hilarious, yet practical, technical guides on DevOps, Kubernetes, Azure, and automation.</p>
</li>
<li><p>Thought-provoking discussions about the future of cloud technology and how to survive the chaos of complex systems.</p>
</li>
<li><p>A healthy dose of sci-fi references, because why not make tech a bit more fun?</p>
</li>
<li><p>A few unsolicited life lessons from the worlds of Arthur Dent, Marvin the Paranoid Android, and other unlikely heroes.</p>
</li>
</ul>
<p>So, thank you for joining me on this cosmic adventure. Let’s make sure our deployments are harmless — and maybe even a little bit fun.</p>
]]></content:encoded></item></channel></rss>