<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Jack (Jarkynbyek) Japar's blog]]></title><description><![CDATA[Cloud / DevOps Engineer focusing on AWS, Kubernetes, and GitHub Actions]]></description><link>https://jackjapar.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1749766350531/cc376a60-495d-4ed0-a6ba-41a7541f6534.png</url><title>Jack (Jarkynbyek) Japar&apos;s blog</title><link>https://jackjapar.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 29 Apr 2026 09:07:11 GMT</lastBuildDate><atom:link href="https://jackjapar.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Terraform Associate (004) – Exam Cheat Sheet]]></title><description><![CDATA[This cheat sheet is based on my notes from online courses, Hashicorp documentation, and missing & updated concepts for the Terraform Associate 004 exam after completing the exam. It is optimized for quick revision before the exam.

1. Terraform Core ...]]></description><link>https://jackjapar.com/terraform-associate-004-exam-cheat-sheet</link><guid isPermaLink="true">https://jackjapar.com/terraform-associate-004-exam-cheat-sheet</guid><category><![CDATA[Terraform]]></category><category><![CDATA[terraform-cloud]]></category><category><![CDATA[Terraform Associate]]></category><category><![CDATA[#IaC]]></category><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Thu, 08 Jan 2026 19:13:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767899394957/7c5cedeb-7196-4917-aa05-3f80b8a507f9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This cheat sheet is based on my notes from online courses, HashiCorp documentation, and <strong>concepts I found missing or updated for the Terraform Associate 004 exam</strong> after taking it. It is optimized for <strong>quick revision before the exam</strong>.</p>
<hr />
<h2 id="heading-1-terraform-core-workflow">1. Terraform Core Workflow</h2>
<h3 id="heading-core-commands">Core Commands</h3>
<pre><code class="lang-bash">terraform init       <span class="hljs-comment"># Initialize providers &amp; backend</span>
terraform plan       <span class="hljs-comment"># Preview execution plan</span>
terraform apply      <span class="hljs-comment"># Apply changes</span>
terraform destroy    <span class="hljs-comment"># Destroy resources</span>
</code></pre>
<h3 id="heading-typical-workflow">Typical Workflow</h3>
<ol>
<li><p>Write <code>.tf</code> files</p>
</li>
<li><p><code>terraform init</code></p>
</li>
<li><p><code>terraform plan</code></p>
</li>
<li><p><code>terraform apply</code></p>
</li>
</ol>
<hr />
<h2 id="heading-2-terraform-configuration-basics">2. Terraform Configuration Basics</h2>
<h3 id="heading-blocks">Blocks</h3>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"example"</span> {}
</code></pre>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Block</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td>provider</td><td>Define cloud provider</td></tr>
<tr>
<td>resource</td><td>Infrastructure object</td></tr>
<tr>
<td>variable</td><td>Input variable</td></tr>
<tr>
<td>output</td><td>Export values</td></tr>
<tr>
<td>data</td><td>Read external data</td></tr>
<tr>
<td>module</td><td>Reusable configuration</td></tr>
<tr>
<td>terraform</td><td>Backend &amp; version config</td></tr>
<tr>
<td>action</td><td>Invoke provider-defined action</td></tr>
<tr>
<td>check</td><td>Validate your infrastructure</td></tr>
<tr>
<td>ephemeral</td><td>Define temporary values not persisted to state</td></tr>
<tr>
<td>import</td><td>Import existing infrastructure</td></tr>
<tr>
<td>locals</td><td>Define reusable local values</td></tr>
<tr>
<td>moved</td><td>Change the address of a resource</td></tr>
<tr>
<td>removed</td><td>Remove a resource from state without destroying it</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-3-providers">3. Providers</h2>
<h3 id="heading-provider-types">Provider Types</h3>
<ul>
<li><p>Official</p>
</li>
<li><p>Partner</p>
</li>
<li><p>Community</p>
</li>
</ul>
<h3 id="heading-provider-configuration">Provider Configuration</h3>
<pre><code class="lang-apache"><span class="hljs-attribute">provider</span> <span class="hljs-string">"aws"</span> {
  <span class="hljs-attribute">region</span> = <span class="hljs-string">"us-east-1"</span>
}
</code></pre>
<h3 id="heading-version-constraints">Version Constraints</h3>
<pre><code class="lang-apache"><span class="hljs-attribute">terraform</span> {
  <span class="hljs-attribute">required_providers</span> {
    <span class="hljs-attribute">aws</span> = {
      <span class="hljs-attribute">source</span>  = <span class="hljs-string">"hashicorp/aws"</span>
      <span class="hljs-attribute">version</span> = <span class="hljs-string">"~&gt; 5.0"</span>
    }
  }
}
</code></pre>
<p>Version rules:</p>
<ul>
<li><p><code>&gt;= 1.2.0</code></p>
</li>
<li><p><code>&lt;= 2.0.0</code></p>
</li>
<li><p><code>~&gt; 1.2</code> → &gt;=1.2,&lt;2.0 (note: <code>~&gt; 1.2.0</code> → &gt;=1.2.0,&lt;1.3.0)</p>
</li>
<li><p><code>!= 1.4.0</code></p>
</li>
</ul>
<hr />
<h2 id="heading-4-variables">4. Variables</h2>
<h3 id="heading-declare-variables">Declare Variables</h3>
<pre><code class="lang-apache"><span class="hljs-attribute">variable</span> <span class="hljs-string">"filename"</span> {
  <span class="hljs-attribute">type</span>        = string
  <span class="hljs-attribute">description</span> = <span class="hljs-string">"File name"</span>
  <span class="hljs-attribute">default</span>     = <span class="hljs-string">"/tmp/file.txt"</span>
}
</code></pre>
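<p>A declared variable is then referenced with the <code>var.</code> prefix. A minimal sketch using the <code>local_file</code> resource (from the hashicorp/local provider):</p>
<pre><code class="lang-apache">resource "local_file" "example" {
  filename = var.filename   # "/tmp/file.txt" unless overridden
  content  = "hello"
}
</code></pre>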
<h3 id="heading-types">Types</h3>
<ul>
<li><p>string</p>
</li>
<li><p>number</p>
</li>
<li><p>bool</p>
</li>
<li><p>list(type)</p>
</li>
<li><p>map(type)</p>
</li>
<li><p>object({})</p>
</li>
<li><p>tuple([])</p>
</li>
</ul>
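<p>A quick illustration of the composite types (variable names are illustrative):</p>
<pre><code class="lang-apache">variable "ports" {
  type    = list(number)
  default = [80, 443]
}

variable "tags" {
  type    = map(string)
  default = { env = "dev" }
}

variable "server" {
  type = object({
    name  = string
    ports = list(number)
  })
  default = { name = "web", ports = [8080] }
}
</code></pre>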
<h3 id="heading-variable-precedence-high-low">Variable Precedence (High → Low)</h3>
<ol>
<li><p>CLI <code>-var</code> and <code>-var-file</code> (the last one given wins)</p>
</li>
<li><p><code>*.auto.tfvars</code> files, then <code>terraform.tfvars</code></p>
</li>
<li><p>Environment (<code>TF_VAR_name</code>)</p>
</li>
<li><p>Default</p>
</li>
</ol>
<hr />
<h2 id="heading-5-resource-attributes-amp-references">5. Resource Attributes &amp; References</h2>
<pre><code class="lang-apache">resource_type.resource_name.attribute   # ${ ... } wrapping is only needed inside strings
</code></pre>
<p>Example:</p>
<pre><code class="lang-apache"><span class="hljs-attribute">aws_instance</span>.web.public_ip
</code></pre>
<hr />
<h2 id="heading-6-dependencies">6. Dependencies</h2>
<h3 id="heading-implicit">Implicit</h3>
<pre><code class="lang-apache"><span class="hljs-attribute">instance_id</span> = aws_instance.web.id
</code></pre>
<h3 id="heading-explicit">Explicit</h3>
<pre><code class="lang-apache"><span class="hljs-attribute">depends_on</span> =<span class="hljs-meta"> [aws_instance.web]</span>
</code></pre>
<hr />
<h2 id="heading-7-output-values">7. Output Values</h2>
<pre><code class="lang-bash">output <span class="hljs-string">"public_ip"</span> {
  value = aws_instance.web.public_ip
}
</code></pre>
<p>Commands:</p>
<pre><code class="lang-bash">terraform output
terraform output public_ip
</code></pre>
<hr />
<h2 id="heading-8-terraform-state">8. Terraform State</h2>
<h3 id="heading-purpose">Purpose</h3>
<ul>
<li><p>Maps config → real infrastructure</p>
</li>
<li><p>Tracks metadata &amp; dependencies</p>
</li>
</ul>
<h3 id="heading-state-storage">State Storage</h3>
<ul>
<li><p>Local (default)</p>
</li>
<li><p>Remote (recommended):</p>
<ul>
<li><p>S3</p>
</li>
<li><p>Terraform Cloud</p>
</li>
<li><p>GCS</p>
</li>
<li><p>Consul</p>
</li>
</ul>
</li>
</ul>
<p>Remote state locking is available when the backend supports it (for example, the S3 backend with a DynamoDB lock table).</p>
<h3 id="heading-remote-backend-s3-example">Remote Backend (S3 Example)</h3>
<pre><code class="lang-apache"><span class="hljs-attribute">terraform</span> {
  <span class="hljs-attribute">backend</span> <span class="hljs-string">"s3"</span> {
    <span class="hljs-attribute">bucket</span>         = <span class="hljs-string">"tf-state-bucket"</span>
    <span class="hljs-attribute">key</span>            = <span class="hljs-string">"prod/terraform.tfstate"</span>
    <span class="hljs-attribute">region</span>         = <span class="hljs-string">"us-east-1"</span>
    <span class="hljs-attribute">dynamodb_table</span> = <span class="hljs-string">"terraform-locks"</span>
  }
}
</code></pre>
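<p>After adding or changing a backend block, re-run <code>terraform init</code>; the <code>-migrate-state</code> flag copies existing state into the new backend:</p>
<pre><code class="lang-bash">terraform init -migrate-state
</code></pre>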
<hr />
<h2 id="heading-9-terraform-state-commands">9. Terraform State Commands</h2>
<pre><code class="lang-bash">terraform state list
terraform state show &lt;resource&gt;
terraform state mv
terraform state rm
terraform state pull
</code></pre>
<hr />
<h2 id="heading-10-lifecycle-rules">10. Lifecycle Rules</h2>
<pre><code class="lang-apache"><span class="hljs-attribute">lifecycle</span> {
  <span class="hljs-attribute">create_before_destroy</span> = true
  <span class="hljs-attribute">prevent_destroy</span>       = true
  <span class="hljs-attribute">ignore_changes</span>        =<span class="hljs-meta"> [tags]</span>
}
</code></pre>
<hr />
<h2 id="heading-11-meta-arguments">11. Meta-Arguments</h2>
<p><a target="_blank" href="https://developer.hashicorp.com/terraform/language/meta-arguments">https://developer.hashicorp.com/terraform/language/meta-arguments</a></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Argument</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td>depends_on</td><td>Explicit dependency</td></tr>
<tr>
<td>count</td><td>Create multiple resources</td></tr>
<tr>
<td>for_each</td><td>Iterate over map/set</td></tr>
<tr>
<td>lifecycle</td><td>Control resource behavior</td></tr>
<tr>
<td>provider</td><td>Select an alternate (aliased) provider configuration</td></tr>
<tr>
<td>providers</td><td>Pass provider configurations into a module</td></tr>
</tbody>
</table>
</div><h3 id="heading-count-vs-foreach">count vs for_each</h3>
<ul>
<li><p><code>count</code> → indexed list</p>
</li>
<li><p><code>for_each</code> → map or set</p>
</li>
</ul>
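<p>A minimal sketch contrasting the two (the AMI ID is hypothetical):</p>
<pre><code class="lang-apache">resource "aws_instance" "indexed" {
  count         = 3                       # aws_instance.indexed[0] .. [2]
  ami           = "ami-123456"
  instance_type = "t3.micro"
}

resource "aws_instance" "named" {
  for_each      = toset(["web", "api"])   # aws_instance.named["web"], ["api"]
  ami           = "ami-123456"
  instance_type = "t3.micro"
  tags          = { Name = each.key }
}
</code></pre>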
<hr />
<h2 id="heading-12-data-sources">12. Data Sources</h2>
<p>Used to <strong>read existing resources</strong>:</p>
<pre><code class="lang-bash">data <span class="hljs-string">"aws_ami"</span> <span class="hljs-string">"amazon_linux"</span> {
  most_recent = <span class="hljs-literal">true</span>
  owners      = [<span class="hljs-string">"amazon"</span>]   <span class="hljs-comment"># required: filter by AMI owner</span>
}
</code></pre>
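<p>The result is then referenced with the <code>data.</code> prefix, for example:</p>
<pre><code class="lang-apache">resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}
</code></pre>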
<hr />
<h2 id="heading-13-provisioners-use-sparingly">13. Provisioners (Use Sparingly)</h2>
<h3 id="heading-local-exec">Local Exec</h3>
<pre><code class="lang-bash">provisioner <span class="hljs-string">"local-exec"</span> {
  <span class="hljs-built_in">command</span> = <span class="hljs-string">"echo Hello"</span>
}
</code></pre>
<h3 id="heading-remote-exec">Remote Exec</h3>
<p>Requires SSH access.</p>
<p>⚠️ <strong>Not recommended for production</strong></p>
<hr />
<h2 id="heading-14-terraform-import">14. Terraform Import</h2>
<pre><code class="lang-bash">terraform import aws_instance.web i-123456
</code></pre>
<p>⚠️ Does NOT generate <code>.tf</code> code. Only updates the state file.</p>
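<p>Since Terraform 1.5, the same import can be expressed declaratively with an <code>import</code> block, and Terraform can draft the resource configuration for you:</p>
<pre><code class="lang-apache">import {
  to = aws_instance.web
  id = "i-123456"
}
</code></pre>
<p>Then run <code>terraform plan -generate-config-out=generated.tf</code>.</p>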
<hr />
<h2 id="heading-15-terraform-workspaces">15. Terraform Workspaces</h2>
<pre><code class="lang-bash">terraform workspace new dev
terraform workspace list
terraform workspace select dev
</code></pre>
<p>Each workspace has <strong>separate state</strong>.</p>
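<p>The current workspace name is available as <code>terraform.workspace</code>, which is handy for per-environment sizing (the values here are illustrative):</p>
<pre><code class="lang-apache">resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t3.micro"
}
</code></pre>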
<hr />
<h2 id="heading-16-terraform-functions-important">16. Terraform Functions (Important)</h2>
<p><a target="_blank" href="https://developer.hashicorp.com/terraform/language/functions">https://developer.hashicorp.com/terraform/language/functions</a></p>
<h3 id="heading-numeric">Numeric</h3>
<ul>
<li><p><code>max()</code></p>
</li>
<li><p><code>min()</code></p>
</li>
<li><p><code>ceil()</code></p>
</li>
<li><p><code>floor()</code></p>
</li>
</ul>
<h3 id="heading-string">String</h3>
<ul>
<li><p><code>lower()</code></p>
</li>
<li><p><code>upper()</code></p>
</li>
<li><p><code>split()</code></p>
</li>
<li><p><code>join()</code></p>
</li>
<li><p><code>substr()</code></p>
</li>
</ul>
<h3 id="heading-collection">Collection</h3>
<ul>
<li><p><code>length()</code></p>
</li>
<li><p><code>contains()</code></p>
</li>
<li><p><code>element()</code></p>
</li>
<li><p><code>lookup()</code></p>
</li>
</ul>
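<p>A few of these evaluated in <code>terraform console</code>:</p>
<pre><code class="lang-bash">&gt; max(5, 12, 9)
12
&gt; join("-", ["a", "b", "c"])
"a-b-c"
&gt; lookup({ env = "dev" }, "env", "default")
"dev"
</code></pre>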
<hr />
<h2 id="heading-17-terraform-console">17. Terraform Console</h2>
<pre><code class="lang-bash">terraform console
</code></pre>
<p>Used for testing expressions.</p>
<hr />
<h2 id="heading-18-debugging">18. Debugging</h2>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> TF_LOG=TRACE
<span class="hljs-built_in">export</span> TF_LOG_PATH=/tmp/terraform.log
</code></pre>
<p>Levels:</p>
<ul>
<li><p>TRACE</p>
</li>
<li><p>DEBUG</p>
</li>
<li><p>INFO</p>
</li>
<li><p>WARN</p>
</li>
<li><p>ERROR</p>
</li>
</ul>
<hr />
<h2 id="heading-19-terraform-modules">19. Terraform Modules</h2>
<pre><code class="lang-apache"><span class="hljs-attribute">module</span> <span class="hljs-string">"vpc"</span> {
  <span class="hljs-attribute">source</span>  = <span class="hljs-string">"terraform-aws-modules/vpc/aws"</span>
  <span class="hljs-attribute">version</span> = <span class="hljs-string">"5.0.0"</span>
}
</code></pre>
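<p>Module inputs are passed as arguments inside the block, and module outputs are read via <code>module.&lt;name&gt;</code>. For example (assuming the module exposes a <code>vpc_id</code> output):</p>
<pre><code class="lang-apache">output "vpc_id" {
  value = module.vpc.vpc_id
}
</code></pre>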
<p>Commands:</p>
<pre><code class="lang-bash">terraform get
</code></pre>
<hr />
<h2 id="heading-20-security-best-practices">20. Security Best Practices</h2>
<ul>
<li><p>Never commit <code>terraform.tfstate</code></p>
</li>
<li><p>Use <code>.gitignore</code></p>
</li>
<li><p>Encrypt remote state</p>
</li>
<li><p>Use least privilege</p>
</li>
</ul>
<hr />
<h2 id="heading-21-quick-commands-summary">21. Quick Commands Summary</h2>
<pre><code class="lang-bash">terraform init
terraform plan
terraform apply
terraform destroy
terraform fmt
terraform validate
terraform providers
terraform output
terraform graph
terraform workspace list
</code></pre>
<h2 id="heading-22-hcp-terraform">22. HCP Terraform</h2>
<p>HCP Terraform (formerly Terraform Cloud) is HashiCorp's managed Terraform service: <a target="_blank" href="https://developer.hashicorp.com/terraform/cloud-docs">https://developer.hashicorp.com/terraform/cloud-docs</a></p>
<p>In my experience, the exam only expects you to know what these are; there is no need to dig deep. Briefly review the following topics (and let me know your experience):</p>
<ul>
<li><p>Users</p>
</li>
<li><p>Teams</p>
</li>
<li><p>Organizations</p>
</li>
<li><p>Permissions</p>
</li>
<li><p>Stacks &amp; Workspaces</p>
</li>
<li><p>Integration with VCS (Version Control Systems)</p>
</li>
<li><p>Private registry</p>
</li>
<li><p>Automatic health checks (drift detection &amp; continuous validation)</p>
</li>
<li><p>Run triggers</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Hosting a DNS Server on My Homelab Kubernetes Cluster]]></title><description><![CDATA[I decided to host my own DNS server on my homelab Kubernetes cluster. There are many reasons why someone might want to run a self-hosted DNS server, but for me, the main motivations were network-wide ad blocking and local DNS rewriting.
Why Host Your...]]></description><link>https://jackjapar.com/hosting-a-dns-server-on-my-homelab-kubernetes-cluster</link><guid isPermaLink="true">https://jackjapar.com/hosting-a-dns-server-on-my-homelab-kubernetes-cluster</guid><category><![CDATA[homelab-setup]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[SelfHosting]]></category><category><![CDATA[dns]]></category><category><![CDATA[dns resolver]]></category><category><![CDATA[adguardhome]]></category><category><![CDATA[AdGuard]]></category><category><![CDATA[Ad-Blocking]]></category><category><![CDATA[privacy]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[homeserver]]></category><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Mon, 15 Dec 2025 01:42:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765762443734/927e982f-1c59-472b-95e9-888cbd4588e9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I decided to host my own DNS server on my homelab Kubernetes cluster. There are many reasons why someone might want to run a self-hosted DNS server, but for me, the main motivations were <strong>network-wide ad blocking</strong> and <strong>local DNS rewriting</strong>.</p>
<h2 id="heading-why-host-your-own-dns">Why Host Your Own DNS?</h2>
<h3 id="heading-1-network-wide-ad-blocking">1. Network-Wide Ad Blocking</h3>
<p>By running my own DNS server, I can block ads, malicious sites, and unwanted tracking <strong>across the entire network</strong>—without installing ad blockers on each individual device.</p>
<p>This is especially useful in a household with kids. They tend to download lots of apps, many of which come with annoying ads and sometimes questionable network requests. With DNS-level blocking, all of that can be filtered automatically at the network level.</p>
<h3 id="heading-2-local-dns-rewriting-for-homelab-services">2. Local DNS Rewriting for Homelab Services</h3>
<p>The second major reason was the ability to use <strong>local domain names</strong> for my homelab services.</p>
<p>For example:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Domain</td><td>Answer</td></tr>
</thead>
<tbody>
<tr>
<td>homelab.lan</td><td>192.168.1.234</td></tr>
<tr>
<td>*.homelab.lan</td><td>homelab.lan</td></tr>
</tbody>
</table>
</div><p>With these rewrites in place, I can expose services using Traefik or NGINX Proxy Manager and access them via friendly URLs.</p>
<p>Instead of visiting something like <code>http://192.168.1.234:9090</code><br />I can simply use: <code>http://linkding.homelab.lan</code><br />Much cleaner and easier to remember.</p>
<h1 id="heading-setup">Setup</h1>
<p>While researching open-source DNS solutions, I came across <a target="_blank" href="https://github.com/AdguardTeam/AdGuardHome"><strong>AdGuard Home</strong></a> and <a target="_blank" href="https://github.com/pi-hole/pi-hole"><strong>Pi-hole</strong></a>. Both have excellent features, strong communities, and web-based dashboards.</p>
<p>I chose <strong>AdGuard Home</strong> because of its simplicity and clean, user-friendly dashboard.</p>
<h2 id="heading-kubernetes-deployment">Kubernetes Deployment</h2>
<p>I created a Kubernetes manifest to deploy AdGuard Home. You can find it here:</p>
<p>👉 <strong>Gist:</strong> <a target="_blank" href="https://gist.github.com/devsteppe9/bf2ec5e81fa2f559c49f94e34b3064e0">https://gist.github.com/devsteppe9/bf2ec5e81fa2f559c49f94e34b3064e0</a></p>
<blockquote>
<p>⚠️ <strong>Tip:</strong><br />I recommend creating the Kubernetes <code>Service</code> at the very end. When I exposed DNS too early, it changed the local DNS resolver on the node, which caused issues resolving <code>docker.io</code> while pulling images. There are probably cleaner ways to handle this, but I wanted to keep the setup simple.</p>
</blockquote>
<h2 id="heading-initial-configuration">Initial Configuration</h2>
<p>Once installed, AdGuard Home binds to:</p>
<ul>
<li><p><strong>UDP 53 / TCP 53</strong> – DNS</p>
</li>
<li><p><strong>Port 3000</strong> – Admin dashboard</p>
</li>
</ul>
<p>You can access the web interface at: <code>http://&lt;your-server-ip&gt;:3000</code></p>
<p>You’ll be prompted to:</p>
<ul>
<li><p>Create an admin user</p>
</li>
<li><p>Configure upstream DNS servers</p>
</li>
</ul>
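<p>At this point you can verify resolution from any machine on the network (replace the IP and hostname with your own AdGuard service address and rewrite):</p>
<pre><code class="lang-bash">dig @192.168.1.234 linkding.homelab.lan +short
</code></pre>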
<h2 id="heading-router-configuration">Router Configuration</h2>
<p>After AdGuard is running, you need to tell your router to use it as the DNS server.</p>
<p>Every router is different. In my case (Starlink), I configured this through the mobile app:</p>
<ul>
<li><p>Enabled <strong>Custom DNS</strong></p>
</li>
<li><p>Set my AdGuard server IP as <strong>Primary DNS</strong></p>
</li>
<li><p>Added <strong>Google DNS</strong> as a fallback in case my server goes down</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765762705193/a1220e8c-c78d-4d17-80c3-7783abf90a72.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-results">Results</h2>
<p>With AdGuard in place, I immediately started seeing blocked ads and tracking requests. I was also surprised to discover that some devices—like TVs and even appliances—were constantly sending analytics data to the internet.</p>
<p>AdGuard made it easy to block all of this traffic. You can also customize blocklists further in the settings, but for my use case, the default AdGuard lists were more than sufficient.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765759997187/179e8db2-6712-48fa-87a9-46909e215cec.gif" alt class="image--center mx-auto" /></p>
<h1 id="heading-summary">Summary</h1>
<p>In this post, I shared how I set up my own DNS server using AdGuard Home on a Kubernetes homelab. The benefits have been huge:</p>
<ul>
<li><p>Network-wide ad and tracker blocking</p>
</li>
<li><p>Improved privacy</p>
</li>
<li><p>Friendly local domain names for homelab services</p>
</li>
<li><p>No need to remember IPs or ports</p>
</li>
</ul>
<p>I hope this gives you some inspiration for your own homelab setup. See you in the next one!</p>
]]></content:encoded></item><item><title><![CDATA[Self-hosted Google Photos by Immich]]></title><description><![CDATA[I've been hosting my Kubernetes cluster in the cloud for a while, but recently I managed to get my hands on a bare-metal server — a Dell PowerEdge with 32 CPU cores and 128GB RAM 😮 — and set it up in my basement. Naturally, I turned it into a Kubern...]]></description><link>https://jackjapar.com/self-hosted-google-photos-by-immich</link><guid isPermaLink="true">https://jackjapar.com/self-hosted-google-photos-by-immich</guid><category><![CDATA[relic]]></category><category><![CDATA[Immich]]></category><category><![CDATA[self-hosted]]></category><category><![CDATA[SelfHosting]]></category><category><![CDATA[Google Photos]]></category><category><![CDATA[Backup]]></category><category><![CDATA[Photo Backup]]></category><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Thu, 11 Dec 2025 21:38:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765485442124/e69d4577-60e4-499e-89ab-2c08e9e67d3a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've been hosting my Kubernetes cluster in the cloud for a while, but recently I managed to get my hands on a bare-metal server — a <strong>Dell PowerEdge with 32 CPU cores and 128GB RAM</strong> 😮 — and set it up in my basement. Naturally, I turned it into a Kubernetes node.</p>
<h1 id="heading-problem">Problem</h1>
<p>In our household, we have multiple Android and iPhone devices filled with photos and videos. Since we don’t want to pay for cloud services like iCloud or Google Photos, our media library has always been at risk. If a device breaks or gets lost, years of memories disappear with it.</p>
<p>Now that I have a powerful home Kubernetes cluster running, I decided to finally solve this problem.</p>
<h1 id="heading-solution">Solution</h1>
<h2 id="heading-immich-for-photo-amp-video-backup">Immich for Photo &amp; Video Backup</h2>
<p>After researching self-hosting options, I discovered an amazing open-source project called <a target="_blank" href="https://github.com/immich-app/immich"><strong>Immich</strong></a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765483081510/bfcaec11-add4-4c9c-b7a0-1e075fedb6d5.png" alt class="image--center mx-auto" /></p>
<p>I chose Immich because:</p>
<ul>
<li><p>It has official <strong>mobile apps</strong> and a <strong>web interface</strong></p>
</li>
<li><p>Supports <strong>facial recognition</strong>, <strong>OCR</strong>, and <strong>multi-user</strong> features</p>
</li>
<li><p>Has a strong community (over <strong>86k GitHub stars</strong>)</p>
</li>
<li><p>Easy enough that non-technical family members can use it</p>
</li>
</ul>
<p>We installed the Immich app on our phones and connected it to the Immich server running inside our private network.</p>
<h2 id="heading-optional-external-backup-to-aws-s3-using-restic">(Optional) External Backup to AWS S3 Using Restic</h2>
<p>There was still one more risk:<br />If our home server or NAS disk failed, all our backups would be gone.</p>
<p>To protect against that, I set up an additional off-site backup to <strong>AWS S3</strong> using an open-source tool called <a target="_blank" href="https://github.com/restic/restic"><strong>Restic</strong></a>.</p>
<p>Restic tracks backup folders and database dumps, then <strong>incrementally uploads</strong> them to S3. Since I store everything in <strong>S3 Glacier Flexible Retrieval</strong>, the cost is extremely low.</p>
<ul>
<li><p><strong>S3 Glacier Flexible Retrieval (us-east-1): $0.0036 per GB per month</strong></p>
</li>
<li><p>That’s <strong>$3.60/month for 1 TB</strong> of data — super affordable.</p>
</li>
</ul>
<h1 id="heading-deployment">Deployment</h1>
<p>Immich requires <strong>PostgreSQL</strong>, <strong>Redis</strong>, and its <strong>machine-learning services</strong>. I also added Restic backup Jobs/CronJobs.</p>
<p>I prepared a full <strong>Kubernetes manifest</strong> for this setup:</p>
<p>👉 <strong>YAML file:</strong><br /><a target="_blank" href="https://gist.github.com/devsteppe9/17b83f51ca05c5012632d32c5e42cea5">https://gist.github.com/devsteppe9/17b83f51ca05c5012632d32c5e42cea5</a></p>
<p>Before applying it, update the following depending on your environment:</p>
<ul>
<li><p><strong>PersistentVolume paths</strong> (if using local storage or NAS)</p>
</li>
<li><p><strong>External Postgres/Redis URLs</strong> (if you already run those separately)</p>
</li>
</ul>
<p>Then simply apply:</p>
<pre><code class="lang-bash">kubectl apply -f FILENAME.yaml
</code></pre>
<p>Once deployed, Immich is ready to use. The first user to register becomes the admin, who can then add other users to the application.</p>
<p>To register the admin user, open the web application at <code>http://&lt;ip-address&gt;:2283</code> and click the <strong>Getting Started</strong> button.</p>
<p>The mobile app can be downloaded from the following places:</p>
<ul>
<li><p>Obtainium: you can get your Obtainium config link from the <a target="_blank" href="https://my.immich.app/utilities">Utilities page of your Immich server</a></p>
</li>
<li><p><a target="_blank" href="https://play.google.com/store/apps/details?id=app.alextran.immich">Google Play Store</a></p>
</li>
<li><p><a target="_blank" href="https://apps.apple.com/us/app/immich/id1613945652">Apple App Store</a></p>
</li>
<li><p><a target="_blank" href="https://f-droid.org/packages/app.alextran.immich">F-Droid</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/immich-app/immich/releases">GitHub Releases (APK)</a></p>
</li>
</ul>
<p>On the mobile app, just enter your Immich URL, username, and password, and start backing up!</p>
<p>After uploading our media to Immich, we ended up with approximately 160GB of data. With the Kubernetes configuration above, Restic initializes the repository and uploads the data to AWS S3, and I can confirm that it was transferred successfully:</p>
<pre><code class="lang-bash">➜  ~ kubectl logs restic-init-4trwh
Initializing restic repository...
open repository
no parent snapshot found, will <span class="hljs-built_in">read</span> all files
load index files
start scan on [/data]
start backup on [/data]
scan finished <span class="hljs-keyword">in</span> 2.639s: 17108 files, 157.909 GiB

Files:       17108 new,     0 changed,     0 unmodified
Dirs:        16615 new,     0 changed,     0 unmodified
Data Blobs:      0 new
Tree Blobs:      0 new
Added to the repository: 0 B   (0 B   stored)

processed 17108 files, 157.909 GiB <span class="hljs-keyword">in</span> 21:04
snapshot 560b0e01 saved
</code></pre>
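<p>For later incremental runs, you can inspect the repository with Restic's own commands (assuming the same <code>RESTIC_REPOSITORY</code> and AWS credential environment variables used by the job are set in your shell):</p>
<pre><code class="lang-bash">restic snapshots        # list all snapshots
restic stats latest     # size of the most recent snapshot
</code></pre>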
<p>And let’s confirm from AWS S3:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765483649700/e1704b4a-75d1-42a0-8e45-584688a36b53.png" alt class="image--center mx-auto" /></p>
<p>Like this, Restic tracks my filesystem and uploads any new or changed files to S3 every day.</p>
<h1 id="heading-summary">Summary</h1>
<p>By self-hosting Immich on my home Kubernetes cluster and optionally backing everything up to AWS S3, we now have:</p>
<ul>
<li><p>A safe private photo/video library</p>
</li>
<li><p>Control of our own data</p>
</li>
<li><p>Off-site backups in case the home server dies</p>
</li>
</ul>
<p>I hope this setup inspires you! If you have your own self-hosting solution or ideas, feel free to share them</p>
]]></content:encoded></item><item><title><![CDATA[How to Monitor Your Spring Boot App with Prometheus and Grafana in Kubernetes]]></title><description><![CDATA[I've recently configured the Prometheus and Grafana stack on my Kubernetes cluster to monitor system performance, including memory, CPU, and network usage. In this post, I’ll walk you through how I integrated Prometheus into my Spring Boot applicatio...]]></description><link>https://jackjapar.com/how-to-monitor-your-spring-boot-app-with-prometheus-and-grafana-in-kubernetes</link><guid isPermaLink="true">https://jackjapar.com/how-to-monitor-your-spring-boot-app-with-prometheus-and-grafana-in-kubernetes</guid><category><![CDATA[Grafana]]></category><category><![CDATA[#prometheus]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[metrics]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Wed, 18 Jun 2025 21:08:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750281697070/23fd18f6-ffcd-4dfb-9886-a3841dd9c142.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've recently configured the Prometheus and Grafana stack on my Kubernetes cluster to monitor system performance, including memory, CPU, and network usage. In this post, I’ll walk you through how I integrated Prometheus into my Spring Boot application to expose application-level metrics. Let's dive in!</p>
<hr />
<h2 id="heading-outline">Outline</h2>
<ol>
<li><p>Prerequisites</p>
</li>
<li><p>Why Monitor Spring Boot with Prometheus</p>
</li>
<li><p>How It Works</p>
</li>
<li><p>Configure Spring Boot App</p>
</li>
<li><p>Configure Prometheus</p>
</li>
<li><p>Configure the Grafana Dashboard</p>
</li>
<li><p>Summary &amp; Resources</p>
</li>
</ol>
<hr />
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you start, make sure you have the following:</p>
<ul>
<li><p>A running Kubernetes cluster</p>
</li>
<li><p>Prometheus installed</p>
</li>
<li><p>Grafana installed</p>
</li>
<li><p>ArgoCD (optional) — I used ArgoCD to update Prometheus config, but you can also configure it manually</p>
</li>
</ul>
<hr />
<h2 id="heading-why-monitor-spring-boot-with-prometheus">Why Monitor Spring Boot with Prometheus?</h2>
<p>You might wonder: If Prometheus is already monitoring my Kubernetes cluster, why do I need to monitor the Spring Boot app?</p>
<p>Cluster-level metrics only show infrastructure health. App-level metrics give you insights into how your application is behaving internally.</p>
<p>For example:</p>
<ul>
<li><p>Request count and latency (<code>http_server_requests_seconds_count</code>)</p>
</li>
<li><p>Active threads and thread pool usage</p>
</li>
<li><p>JVM memory and GC pauses</p>
</li>
<li><p>HikariCP connection pool metrics</p>
</li>
<li><p>Custom business metrics (e.g., number of logins, votes, etc.)</p>
</li>
</ul>
<hr />
<h2 id="heading-how-it-works">How It Works</h2>
<ol>
<li><p>Spring Boot exposes an endpoint <code>/actuator/prometheus</code> that provides metrics.</p>
</li>
<li><p>Prometheus scrapes this endpoint at regular intervals.</p>
</li>
<li><p>Grafana visualizes these metrics using dashboards.</p>
</li>
</ol>
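<p>To make step 1 concrete, here is a tiny, hypothetical sample of the plain-text exposition format that <code>/actuator/prometheus</code> serves (real output contains hundreds of series; the metric values below are made up):</p>

```shell
# Write a hypothetical sample of the Prometheus exposition format, then
# filter it the way you might when inspecting the real endpoint:
cat <<'EOF' > metrics.txt
# HELP http_server_requests_seconds_count Total number of HTTP requests
http_server_requests_seconds_count{method="GET",uri="/vote"} 42.0
jvm_memory_used_bytes{area="heap",id="G1 Eden Space"} 1.2345E7
EOF
grep '^http_server_requests' metrics.txt
# → http_server_requests_seconds_count{method="GET",uri="/vote"} 42.0
```

<p>Each line is one time series: a metric name, optional labels in braces, and the current value. Prometheus parses exactly this format on every scrape.</p>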
<hr />
<h2 id="heading-configure-spring-boot-app">Configure Spring Boot App</h2>
<h3 id="heading-1-add-dependencies">1. Add Dependencies</h3>
<p>Update your <code>pom.xml</code> with the following:</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-starter-actuator<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>

<span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>io.micrometer<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>micrometer-registry-prometheus<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
</code></pre>
<ul>
<li><p><strong>Actuator</strong> provides built-in endpoints to display performance information of your application, such as health, metrics, and more.</p>
</li>
<li><p><strong>Micrometer</strong> then exposes these built-in metrics in a format Prometheus can understand.</p>
</li>
</ul>
<h3 id="heading-2-configure-applicationproperties">2. Configure <code>application.properties</code></h3>
<p>This configuration ensures our application exposes the <code>/actuator/prometheus</code> endpoint so that, in later steps, Prometheus can scrape it.</p>
<pre><code class="lang-ini"><span class="hljs-attr">management.endpoints.web.exposure.include</span>=*
<span class="hljs-attr">management.endpoint.health.show-details</span>=always
</code></pre>
<blockquote>
<p>⚠️ Exposing all actuator endpoints (<code>management.endpoints.web.exposure.include=*</code>) is okay for development, but in production, limit it to only what's necessary. See the <a target="_blank" href="https://docs.spring.io/spring-boot/reference/actuator/endpoints.html">Spring Actuator docs</a>.</p>
</blockquote>
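<p>As a rough sketch of a tighter production setup (property names per the Spring Actuator docs linked above; the exact endpoint list is up to you), you might expose only what health checks and Prometheus actually need:</p>

```shell
# Hypothetical production-leaning settings: expose only health, info, and
# prometheus instead of every actuator endpoint.
cat <<'EOF' > application-prod.properties
management.endpoints.web.exposure.include=health,info,prometheus
management.endpoint.health.show-details=when-authorized
EOF
cat application-prod.properties
```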
<p>Start your app and visit:</p>
<ul>
<li><p><code>/actuator</code> to see available endpoints</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750275264786/37ad42ee-8469-4e30-960d-7adcc2518c20.png" alt /></p>
</li>
<li><p><code>/actuator/prometheus</code> to see Prometheus-readable metrics</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750275386446/b1757104-7847-4784-aefe-517e84a0acd6.png" alt /></p>
</li>
</ul>
<p>That’s it! Your app is now ready to be scraped by Prometheus.</p>
<hr />
<h2 id="heading-configure-prometheus">Configure Prometheus</h2>
<p>We need to configure Prometheus to scrape metrics from our Spring Boot application. If you installed Prometheus your own way, add the following to your Prometheus config (e.g., <code>prometheus.yml</code>):</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">job_name:</span> <span class="hljs-string">'vote-spring-app'</span>
  <span class="hljs-attr">metrics_path:</span> <span class="hljs-string">'/actuator/prometheus'</span>
  <span class="hljs-attr">scrape_interval:</span> <span class="hljs-string">'10s'</span>
  <span class="hljs-attr">static_configs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">targets:</span> [<span class="hljs-string">'vote.vote-app.svc.cluster.local:8080'</span>]
</code></pre>
<blockquote>
<p>Replace <code>vote.vote-app.svc.cluster.local:8080</code> with your app's actual address (e.g., <code>localhost:8080</code>).</p>
</blockquote>
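<p>The target address follows Kubernetes' Service DNS convention, <code>&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local</code> (see the Kubernetes DNS docs linked in the resources). Assuming a Service named <code>vote</code> in the <code>vote-app</code> namespace, the name is assembled like this:</p>

```shell
# Construct the in-cluster DNS name for a Service; substitute your own
# Service name, namespace, and port:
SERVICE=vote NAMESPACE=vote-app PORT=8080
echo "${SERVICE}.${NAMESPACE}.svc.cluster.local:${PORT}"
# → vote.vote-app.svc.cluster.local:8080
```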
<p>In my case, I added the above config via the ArgoCD UI:</p>
<ol>
<li><p>Navigate to <strong>Application → kuber-prometheus-stack → Details → Parameters</strong>.</p>
</li>
<li><p>Edit the <code>Values</code> field with the config above.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750276574861/a6388cda-e433-4e34-b7f9-8255be94c6d7.png" alt /></p>
<p>After deployment, verify your Spring Boot app appears in Prometheus targets and is in the <strong>UP</strong> state:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750276703527/1ba34376-850e-468b-92be-81e1fa7a6654.png" alt /></p>
<p>Explore metrics like:</p>
<ul>
<li><p><code>http_server_requests_seconds_count</code></p>
</li>
<li><p><code>http_server_requests_seconds_max</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750277009249/c499f0e4-0ba9-4393-83f8-149ad3720553.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750277031789/c5e58ea2-acfa-4aec-aea0-67fecb86a062.png" alt /></p>
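<p>These are cumulative counters, so dashboards usually derive rates and averages from them: in PromQL, average latency over a window is <code>…_seconds_sum / …_seconds_count</code>. A toy illustration of that arithmetic with hypothetical values:</p>

```shell
# Average request latency = seconds_sum / seconds_count
# (values below are made up for illustration):
printf '%s\n' \
  'http_server_requests_seconds_sum 12.5' \
  'http_server_requests_seconds_count 50' |
awk '/_sum/ {s=$2} /_count/ {c=$2} END {printf "avg latency: %.2fs\n", s/c}'
# → avg latency: 0.25s
```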
<hr />
<h2 id="heading-configure-grafana-dashboard">Configure Grafana Dashboard</h2>
<p>While this step is optional, Grafana dashboards make it easier to view all metrics in one place; querying each metric one by one in Prometheus quickly becomes repetitive. So let’s configure a Grafana dashboard.</p>
<p>You can create a Grafana dashboard from scratch, but in this post I’ll use a pre-built dashboard from <a target="_blank" href="https://grafana.com/grafana/dashboards/"><strong>Grafana Dashboards</strong></a>, where you can find many great dashboards from the Grafana community. I chose the <a target="_blank" href="https://grafana.com/grafana/dashboards/11378-justai-system-monitor/"><strong>Spring Boot 2.1 System Monitor</strong></a> dashboard.</p>
<h3 id="heading-steps">Steps:</h3>
<ol>
<li><p>Visit your Grafana website</p>
</li>
<li><p>Use dashboard ID <code>11378</code> (Spring Boot 2.1 System Monitor)</p>
</li>
<li><p>Go to <strong>Grafana → Import Dashboard</strong></p>
</li>
<li><p>Enter the ID and select Prometheus as the data source</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750277922214/6fd42571-5845-4c1e-8658-25ec54721ad3.png" alt /></p>
<p>And voilà — your Spring Boot metrics dashboard is ready!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750278595254/f6b0ae49-6341-47e9-86fb-924153a818e2.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750278646762/3790c704-df86-49d6-ac70-10de77a931c9.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750278699687/934b4187-85a9-4b2b-ba9a-a7e8dd1d18fa.png" alt /></p>
<hr />
<h2 id="heading-summary">Summary</h2>
<p>By integrating Spring Boot metrics into your Prometheus and Grafana stack:</p>
<ul>
<li><p>You gain insights into your application’s performance and health</p>
</li>
<li><p>It complements your cluster-level observability</p>
</li>
<li><p>Dashboards give you real-time visibility</p>
</li>
</ul>
<hr />
<h2 id="heading-resources">Resources</h2>
<ul>
<li><p><a target="_blank" href="https://www.baeldung.com/spring-boot-prometheus">Spring Boot + Prometheus Guide (Baeldung)</a></p>
</li>
<li><p><a target="_blank" href="https://medium.com/simform-engineering/revolutionize-monitoring-empowering-spring-boot-applications-with-prometheus-and-grafana-e99c5c7248cf">Simform Blog on Monitoring Spring Boot</a></p>
</li>
<li><p><a target="_blank" href="https://docs.spring.io/spring-boot/reference/actuator/endpoints.html">Spring Actuator Endpoints</a></p>
</li>
<li><p><a target="_blank" href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/">Kubernetes DNS Docs</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Monitoring Kubernetes Cluster with Prometheus and Grafana using ArgoCD]]></title><description><![CDATA[In the last blog, I configured an ArgoCD-based GitOps pipeline and deployed my distributed app called vote-app. In this post, I’ll walk through how to set up Prometheus and Grafana to monitor the Kubernetes cluster and track resource usage of the vot...]]></description><link>https://jackjapar.com/monitoring-kubernetes-cluster-with-prometheus-and-grafana-using-argocd</link><guid isPermaLink="true">https://jackjapar.com/monitoring-kubernetes-cluster-with-prometheus-and-grafana-using-argocd</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[gitops]]></category><category><![CDATA[#prometheus]]></category><category><![CDATA[Grafana]]></category><category><![CDATA[observability]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[Helm]]></category><category><![CDATA[cloud native]]></category><category><![CDATA[#SiteReliabilityEngineering]]></category><category><![CDATA[kubePrometheusStack]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Fri, 13 Jun 2025 13:00:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749793024001/8aff0345-00ea-4167-beb0-2444f49134b1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the last blog, I configured an ArgoCD-based GitOps pipeline and deployed my distributed app called <code>vote-app</code>. In this post, I’ll walk through how to set up Prometheus and Grafana to monitor the Kubernetes cluster and track resource usage of the <code>vote-app</code> pods, such as memory and CPU.</p>
<p>This setup assumes you already have a Kubernetes cluster and ArgoCD installed. If not, check the <a target="_blank" href="https://jackjapar.com/kubernetes-deployments-argocd-and-github-actions-in-action">installation section of my previous blog</a>.</p>
<hr />
<h2 id="heading-outline">Outline</h2>
<ul>
<li><p>What is Observability</p>
</li>
<li><p>Prometheus and Grafana</p>
</li>
<li><p><strong>kube-prometheus-stack</strong></p>
</li>
<li><p>Setting up kube-prometheus-stack</p>
</li>
</ul>
<hr />
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p>Kubernetes Cluster</p>
</li>
<li><p>ArgoCD (see my previous blog for setup instructions)</p>
</li>
</ul>
<hr />
<h2 id="heading-observability">Observability</h2>
<p>Observability is the ability to understand the internal state of a system by examining the data it produces—logs, metrics, and traces. Highly observable systems make it easier to detect and diagnose complex issues.</p>
<p>It's not just about bugs and outages but also about understanding the impact of changes in your code.</p>
<h3 id="heading-three-pillars-of-observability">Three Pillars of Observability</h3>
<ol>
<li><p><strong>Logs</strong>: Time-stamped records of events that are often unstructured and verbose.</p>
</li>
<li><p><strong>Traces</strong>: Visualize the lifecycle of a request across services. Useful for understanding latency and bottlenecks, especially in distributed systems.</p>
</li>
<li><p><strong>Metrics</strong>: Numeric data that represent the behavior of your system (e.g., CPU usage, request count).</p>
</li>
</ol>
<p>For this blog, we’ll focus on the <strong>metrics</strong> pillar.</p>
<hr />
<h2 id="heading-prometheus-and-grafana">Prometheus and Grafana</h2>
<h3 id="heading-prometheus">Prometheus</h3>
<p>Prometheus is a CNCF-hosted monitoring and alerting toolkit. It scrapes metrics using a pull-based approach and stores them in a time-series database. It supports dynamic target discovery via Kubernetes service discovery.</p>
<h3 id="heading-grafana">Grafana</h3>
<p>Grafana is an open-source observability platform. Although Prometheus provides a UI for querying metrics, Grafana makes it easier to visualize those metrics using beautiful dashboards.</p>
<hr />
<h2 id="heading-kube-prometheus-stack">kube-prometheus-stack</h2>
<p>We’ll use the <a target="_blank" href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack"><code>kube-prometheus-stack</code></a> Helm chart, which bundles all components required for Kubernetes monitoring:</p>
<ul>
<li><p><strong>Prometheus</strong>: Scrapes, stores, and exposes metrics.</p>
</li>
<li><p><strong>Node Exporter</strong>: Collects node-level metrics. And Prometheus scrapes data prepared by this exporter.</p>
</li>
<li><p><strong>Kube-State-Metrics</strong>: This one is another exporter. Exposes information about Kubernetes objects, such as Pods and containers.</p>
</li>
<li><p><strong>Grafana</strong>: Dashboards and visualizations.</p>
</li>
<li><p><strong>Alertmanager</strong>: Sends notifications based on alerts.</p>
</li>
</ul>
<p>For more details, check this <a target="_blank" href="https://spacelift.io/blog/prometheus-kubernetes#what-is-kubeprometheusstack">Spacelift tutorial</a>.</p>
<hr />
<h2 id="heading-setup-kube-prometheus-stack">Setup kube-prometheus-stack</h2>
<ol>
<li><p>In ArgoCD UI, go to <strong>Applications → New App → Edit as YAML</strong>.</p>
</li>
<li><p>Paste the following YAML:</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">'kube-prometheus-stack'</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">https://prometheus-community.github.io/helm-charts</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-number">73.2</span><span class="hljs-number">.0</span>
    <span class="hljs-attr">helm:</span>
      <span class="hljs-attr">values:</span> <span class="hljs-string">|
        grafana:
          service:
            type: NodePort
            nodePort: 31006
        prometheus:
          service:
            type: NodePort
            nodePort: 31005
</span>    <span class="hljs-attr">chart:</span> <span class="hljs-string">kube-prometheus-stack</span>
  <span class="hljs-attr">destination:</span>
    <span class="hljs-attr">server:</span> <span class="hljs-string">https://kubernetes.default.svc</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">kube-prometheus-stack</span>
  <span class="hljs-attr">syncPolicy:</span>
    <span class="hljs-attr">syncOptions:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">CreateNamespace=true</span>
    <span class="hljs-attr">automated:</span>
      <span class="hljs-attr">prune:</span> <span class="hljs-literal">false</span>
      <span class="hljs-attr">selfHeal:</span> <span class="hljs-literal">false</span>
</code></pre>
<blockquote>
<p>I’ve changed the default <code>ClusterIP</code> services to <code>NodePort</code> so we can access Prometheus and Grafana from the browser.</p>
</blockquote>
<ol start="3">
<li>Click <strong>Save</strong>, then <strong>Create</strong>.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749789076374/673a54cf-19bb-4644-ae1a-a324a0beff4a.png" alt="ArgoCD create app" /></p>
<ol start="4">
<li>After creation, the stack will appear in your ArgoCD dashboard:</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749789094586/2f120d92-9b4e-4f94-9734-040b01ba3f85.png" alt="ArgoCD dashboard" /></p>
<hr />
<h2 id="heading-try-prometheus">Try Prometheus</h2>
<p>Access Prometheus at <a target="_blank" href="http://100.117.103.104:31005/query">http://YOUR_NODE_IP:31005/</a></p>
<p>In the <code>Enter expression</code> field, you can type queries in PromQL syntax, e.g. <code>node_memory_Active_bytes</code> to see memory utilization.</p>
<p>Click <strong>Execute</strong> to view memory usage per node. Use the <strong>Table</strong> tab for raw data and the <strong>Graph</strong> tab for visualization.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749789564337/ef7718a6-f285-4fa7-9c27-0ed89e64f3af.png" alt="Prometheus UI" /></p>
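<p>PromQL also lets you do arithmetic on series directly. <code>node_memory_Active_bytes</code> is reported in bytes, so <code>node_memory_Active_bytes / 1024 / 1024 / 1024</code> shows GiB. The same conversion, demonstrated with a hypothetical raw value:</p>

```shell
# Convert a raw byte count (made-up value) to GiB, mirroring the PromQL
# expression node_memory_Active_bytes / 1024 / 1024 / 1024:
echo 3221225472 | awk '{printf "%.1f GiB\n", $1 / 1024 / 1024 / 1024}'
# → 3.0 GiB
```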
<hr />
<h2 id="heading-visualize-metrics-using-grafana">Visualize Metrics Using Grafana</h2>
<p>Access Grafana at <a target="_blank" href="http://100.117.103.104:31006/">http://YOUR_NODE_IP:31006/</a></p>
<p>Login with:</p>
<ul>
<li><p><strong>Username:</strong> <code>admin</code></p>
</li>
<li><p><strong>Password:</strong> <code>prom-operator</code></p>
</li>
</ul>
<p>After logging in, you’ll see the Grafana welcome page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749789983883/125cc585-72aa-4141-b86b-057ccf7f55a1.png" alt="Grafana Welcome" /></p>
<p>Click <strong>Dashboards</strong> in the sidebar. Explore pre-built dashboards such as:</p>
<ul>
<li><strong>Kubernetes / Compute Resources / Cluster</strong> – Overview of your cluster’s resource usage.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749790136252/b0ed917c-5af3-4fce-8dfa-3446ece68d4b.png" alt="Cluster Dashboard" /></p>
<ul>
<li><strong>Kubernetes / Compute Resources / Namespace (Pods)</strong> – View resource usage by namespace. Select <code>vote-app</code> from the dropdown to see pod-specific metrics. This shows how <code>vote-app</code> is utilizing resources in the cluster.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749790324938/c3f3596e-0de9-4acb-bdea-88b4ce8e1585.png" alt="vote-app Dashboard" /></p>
<hr />
<h2 id="heading-recap">Recap</h2>
<p>In this post, we set up the popular Prometheus and Grafana monitoring stack in our Kubernetes cluster using only the ArgoCD GUI—no CLI required. We explored how to view resource metrics like memory, CPU, and network usage for our <code>vote-app</code> via Grafana dashboards.</p>
<p>To further analyze our application, we could implement custom application metrics (e.g., total votes, requests/sec) in vote-app by creating exporters, and configure Prometheus to scrape them.</p>
<hr />
<h2 id="heading-references">References</h2>
<ul>
<li><p><a target="_blank" href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack">Github: kube-prometheus-stack</a></p>
</li>
<li><p><a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/user-guide/helm/">Argo CD - Declarative GitOps CD for Kubernetes</a></p>
</li>
<li><p><a target="_blank" href="https://spacelift.io/blog/prometheus-kubernetes">Prometheus Monitoring for Kubernetes Cluster [Tutorial]</a></p>
</li>
<li><p><a target="_blank" href="https://newsletter.pragmaticengineer.com/p/observability-the-present-and-future">Observability: the present and future, with Charity Majors</a></p>
</li>
<li><p><a target="_blank" href="https://blog.devops.dev/prometheus-cloud-native-observability-101-3b630e34cd86">Prometheus &amp; Cloud Native Observability 101</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Kubernetes Deployments: ArgoCD and GitHub Actions in Action]]></title><description><![CDATA[I recently developed a distributed voting application using Spring Boot and Kafka. So, I decided to build a CI/CD pipeline for this project and deploy it into a Kubernetes cluster. I containerized the services with Docker and set up a GitHub Actions ...]]></description><link>https://jackjapar.com/kubernetes-deployments-argocd-and-github-actions-in-action</link><guid isPermaLink="true">https://jackjapar.com/kubernetes-deployments-argocd-and-github-actions-in-action</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[#kubernetes #container ]]></category><category><![CDATA[Kubernetes deployments]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[Docker]]></category><category><![CDATA[dockerhub]]></category><category><![CDATA[Springboot]]></category><category><![CDATA[kafka]]></category><category><![CDATA[k3s]]></category><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Sat, 31 May 2025 15:27:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748704839318/46b66d3e-da01-41ab-bd5c-3944c7fde109.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently developed a distributed voting application using Spring Boot and Kafka. So, I decided to build a CI/CD pipeline for this project and deploy it into a Kubernetes cluster. I containerized the services with Docker and set up a GitHub Actions pipeline to push version-tagged images to DockerHub. ArgoCD then auto-syncs those images into a Kubernetes cluster configured on AWS EC2. This setup ensures that whenever code is committed and merged into the <code>main</code> branch, a new version of the app is rolled out seamlessly.</p>
<p>In this post, I’ll walk you through everything step by step—from setting up a simple Kubernetes cluster to configuring a CI/CD pipeline with ArgoCD and GitHub Actions.</p>
<h1 id="heading-outline">Outline</h1>
<ul>
<li><p>Spin up a new K3s cluster on AWS EC2</p>
</li>
<li><p>Kubernetes specification files for the project</p>
</li>
<li><p>Install ArgoCD on the cluster</p>
</li>
<li><p>Configure GitHub Actions</p>
</li>
<li><p>Test the CI/CD pipeline</p>
</li>
</ul>
<hr />
<h1 id="heading-project-overview-cicd-setup">Project Overview: CI/CD Setup</h1>
<p>Below is the application we want to deploy. It's a simple distributed voting application orchestrated with Docker containers. You can find the source code here:<br />🔗 <a target="_blank" href="https://github.com/devsteppe9/voting_app">https://github.com/devsteppe9/voting_app</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748655374468/6f0a8d60-ce03-4d75-8cda-3ec2c28139ca.png" alt /></p>
<p>Here’s a visual overview of the directory structure:</p>
<pre><code class="lang-bash">├── .github               <span class="hljs-comment"># GitHub Actions workflow files</span>
│   └── workflows
│       ├── build-result.yaml
│       ├── build-vote-session.yaml
│       ├── build-vote.yaml
│       └── build-worker.yaml
├── docker-compose.yml    <span class="hljs-comment"># Local development</span>
├── k8s-specifications    <span class="hljs-comment"># Kubernetes manifests</span>
├── result                <span class="hljs-comment"># Node.js web app for real-time results</span>
├── vote                  <span class="hljs-comment"># Spring Boot/Thymeleaf vote submission app</span>
├── vote-session          <span class="hljs-comment"># Spring Boot REST API to manage sessions</span>
└── worker                <span class="hljs-comment"># Spring Boot service to persist votes</span>
</code></pre>
<hr />
<h1 id="heading-spin-up-a-new-k3s-cluster-on-aws-ec2">Spin Up a New K3s Cluster on AWS EC2</h1>
<p>If you already have a Kubernetes cluster running, feel free to skip this section.</p>
<p>I launched a <code>t4g.medium</code> Ubuntu EC2 instance and saved the <code>.pem</code> key for later SSH access. If you're not familiar, <a target="_blank" href="https://docs.k3s.io/">K3s</a> is a lightweight, production-ready Kubernetes distribution developed by Rancher Labs. Note that I chose an <code>ARM</code>-based instance because I use an <code>ARM</code>-based MacBook at home, which makes it convenient to build and push Docker images directly from my laptop.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748653347923/27be0570-27e7-4442-98f0-9c0a2ac578cd.png" alt /></p>
<p>Make sure to open these ports on your EC2 Security Group:</p>
<ul>
<li><p><code>8080</code>: ArgoCD UI</p>
</li>
<li><p><code>22</code>: SSH access</p>
</li>
<li><p><code>6443</code>: Kubernetes API</p>
</li>
<li><p><code>31000–31002</code>: Application ports</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748653630356/f16d96f8-6314-4dd6-8009-94b1feb3fa0e.png" alt /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">I have exposed these ports to the world <code>0.0.0.0/0</code> If you have a specific public IP address, it is better to set up <code>8080</code>,<code>22</code>, <code>6443</code> ports accessible only from your specific IP address range.</div>
</div>

<p>To bootstrap the K3s cluster, I used a neat tool called <a target="_blank" href="https://github.com/alexellis/k3sup"><code>k3sup</code></a> (pronounced "ketchup"), built by <a target="_blank" href="https://github.com/alexellis">Alex Ellis</a>. From your laptop:</p>
<pre><code class="lang-bash">curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/<span class="hljs-built_in">local</span>/bin/
k3sup --<span class="hljs-built_in">help</span>
</code></pre>
<p>Now install K3s to your EC2 instance:</p>
<p>💡 Replace <code>$IP</code> and the key path with your own. <code>$HOME/controlplanekeypair.pem</code> is the private key path on my laptop, saved there when I launched the EC2 instance.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> IP=54.90.96.48
k3sup install --ip <span class="hljs-variable">$IP</span> --user ec2-user \
  --ssh-key <span class="hljs-variable">$HOME</span>/controlplanekeypair.pem
</code></pre>
<p>It might take a couple of minutes. If you don’t see any errors in the command output above, voilà! Your Kubernetes cluster is ready to run, and the <code>kubeconfig</code> file has been saved to your local machine.</p>
<p>Verify:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> KUBECONFIG=`<span class="hljs-built_in">pwd</span>`/kubeconfig
kubectl config use-context default
kubectl get node -o wide

<span class="hljs-comment"># -------- Output ------- #</span>
NAME                           STATUS   ROLES                  AGE    VERSION
ip-172-31-85-66.ec2.internal   Ready    control-plane,master   7m4s   v1.32.5+k3s1
</code></pre>
<hr />
<h1 id="heading-kubernetes-specification-files">Kubernetes Specification Files</h1>
<p>These are the deployment and service specs I created to deploy the app. ArgoCD watches these files and updates deployments when GitHub Actions push new images to DockerHub:</p>
<pre><code class="lang-bash">├── k8s-specifications
│   ├── kafka-deployment.yaml
│   ├── kafka-service.yaml
│   ├── result-deployment.yaml
│   ├── result-service.yaml
│   ├── vote-db-deployment.yaml
│   ├── vote-db-service.yaml
│   ├── vote-deployment.yaml
│   ├── vote-service.yaml
│   ├── vote-session-deployment.yaml
│   ├── vote-session-service.yaml
│   └── worker-deployment.yaml
</code></pre>
<p>More details here: <a target="_blank" href="https://github.com/devsteppe9/voting_app/tree/main/k8s-specifications">k8s-specifications</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748665551573/d2aff0b8-d03b-4977-a632-00fead9998c4.jpeg" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Photo by Jack Japar <strong>😉</strong></p>
</blockquote>
<hr />
<h1 id="heading-install-argocd-on-kubernetes">Install ArgoCD on Kubernetes</h1>
<pre><code class="lang-bash">kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

kubectl get pods -n argocd

<span class="hljs-comment"># -------- Output ------- #</span>
NAME                                                READY   STATUS    RESTARTS    AGE
argocd-application-controller-0                     1/1     Running   0           90s
argocd-applicationset-controller-777d5b5dc7-w8blz   1/1     Running   0           90s
argocd-dex-server-7d8fcd845-lg9hr                   1/1     Running   0           90s
argocd-notifications-controller-655df7c996-q2vp4    1/1     Running   0           90s
argocd-redis-574484f6db-ssf2c                       1/1     Running   0           90s
argocd-repo-server-57449f957c-cdjc5                 1/1     Running   0           90s
argocd-server-7dd4c8cf5f-6x68f                      1/1     Running   0           90s
</code></pre>
<p>Expose ArgoCD on NodePort:</p>
<pre><code class="lang-bash">cat &lt;&lt;EOF &gt; argocd-server-service.yml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server-nodeport
  labels:
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: argocd
spec:
  <span class="hljs-built_in">type</span>: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 31002
      protocol: TCP
  selector:
    app.kubernetes.io/name: argocd-server
EOF

kubectl create -f argocd-server-service.yml -n argocd
</code></pre>
<p>The service above exposes the ArgoCD GUI on port <code>31002</code>.</p>
<p>Access <code>http://YOUR_IP:31002</code> in your browser:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748658647474/3b42d36e-5236-439a-a228-911f2117203b.png" alt /></p>
<p>Get the admin password:</p>
<pre><code class="lang-bash">kubectl get secret argocd-initial-admin-secret -n argocd \
  -o jsonpath={.data.password} | base64 -d
</code></pre>
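<p>The secret's value is stored base64-encoded, which is why the command pipes it through <code>base64 -d</code>. A toy decode with a made-up value (not a real ArgoCD password):</p>

```shell
# Decode a base64 string the same way the command above decodes the
# initial admin secret (this value is fabricated for illustration):
echo 'cGFzc3dvcmQxMjM=' | base64 -d
# → password123
```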
<p>To log in to the ArgoCD dashboard, use the password you got from the command above, with <code>admin</code> as the username.</p>
<hr />
<h1 id="heading-create-argocd-app">Create ArgoCD App</h1>
<ol>
<li><p>Go to Applications → New App → <code>Edit as YAML</code></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748659322958/05cbe710-98f8-4880-9211-247e90115b43.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Paste:</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">vote-app</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">destination:</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">vote-app</span>
    <span class="hljs-attr">server:</span> <span class="hljs-string">https://kubernetes.default.svc</span>
  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">https://github.com/devsteppe9/voting_app</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">k8s-specifications</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-string">main</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">syncPolicy:</span>
    <span class="hljs-attr">automated:</span>
      <span class="hljs-attr">selfHeal:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">syncOptions:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">CreateNamespace=true</span>
</code></pre>
<ol start="3">
<li><p>Then click Create</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748659429349/b96b526d-d52b-489c-b892-18023bc89814.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748659458628/5a2fb228-4a87-44ee-bff4-ecc2c059c675.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<hr />
<h1 id="heading-github-actions-setup">GitHub Actions Setup</h1>
<p>I prepared 4 workflow files under the <code>.github/workflows</code> directory. Each detects code changes under the <code>result</code>, <code>vote-session</code>, <code>vote</code>, and <code>worker</code> subdirectories, respectively.</p>
<pre><code class="lang-bash">├── .github               <span class="hljs-comment"># Github Actions workflow files</span>
│   └── workflows
│       ├── build-result.yaml       <span class="hljs-comment"># workflow for result app</span>
│       ├── build-vote-session.yaml <span class="hljs-comment"># workflow for vote-session app</span>
│       ├── build-vote.yaml         <span class="hljs-comment"># workflow for vote app</span>
│       └── build-worker.yaml       <span class="hljs-comment"># workflow for worker app</span>
</code></pre>
<p>The workflow file below is for the <code>vote</code> service, one of the 4 workflows above. The remaining 3 workflows are similar; the only differences are the Docker image tags and the <code>on.push.paths</code> field values.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Integrate</span> <span class="hljs-string">vote</span> <span class="hljs-string">app</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
    <span class="hljs-attr">paths:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'vote/**'</span>
<span class="hljs-attr">env:</span>
  <span class="hljs-attr">DOCKERHUB_USERNAME:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_USERNAME</span> <span class="hljs-string">}}</span>
  <span class="hljs-attr">DOCKERHUB_TOKEN:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_TOKEN</span> <span class="hljs-string">}}</span>

<span class="hljs-attr">permissions:</span>
  <span class="hljs-attr">contents:</span> <span class="hljs-string">write</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build-vote-app:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Login</span> <span class="hljs-string">to</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Hub</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/login-action@v3</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">username:</span> <span class="hljs-string">${{</span> <span class="hljs-string">env.DOCKERHUB_USERNAME</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">password:</span> <span class="hljs-string">${{</span> <span class="hljs-string">env.DOCKERHUB_TOKEN</span> <span class="hljs-string">}}</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Set</span> <span class="hljs-string">up</span> <span class="hljs-string">QEMU</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/setup-qemu-action@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Set</span> <span class="hljs-string">up</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Buildx</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/setup-buildx-action@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">push</span> <span class="hljs-string">Docker</span> <span class="hljs-string">image</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/build-push-action@v6</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">context:</span> <span class="hljs-string">./vote</span>
          <span class="hljs-attr">platforms:</span> <span class="hljs-string">linux/amd64,linux/arm64</span>
          <span class="hljs-attr">push:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">tags:</span> <span class="hljs-string">|
            ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:${{ github.sha }}
            ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:latest
</span>          <span class="hljs-attr">build-args:</span> <span class="hljs-string">|
            KAFKA_BOOTSTRAP_SERVERS=kafka:9092
            SESSION_API_URL=http://vote-session:8080/sessions
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Update</span> <span class="hljs-string">Kubernetes</span> <span class="hljs-string">deployment</span>
        <span class="hljs-comment"># Replace image tag in deployment.yaml with new Docker image tagged by commit SHA</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          sed -i "s|image: ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:.*|image: ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:${{ github.sha }}|g" k8s-specifications/vote-deployment.yaml
          echo "Updated image in k8s-specifications/vote-deployment.yaml"
</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Commit</span> <span class="hljs-string">and</span> <span class="hljs-string">push</span> <span class="hljs-string">changes</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          git config --local user.name "github-actions[bot]"
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add k8s-specifications/vote-deployment.yaml
          git commit -m "Update vote deployment image to ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:${{ github.sha }}"
          git pull origin main --rebase || false
          git push origin main</span>
</code></pre>
<p>The workflow builds Docker images for <code>linux/amd64</code> and <code>linux/arm64</code> architectures, pushes to DockerHub, and updates the corresponding <code>vote-deployment.yaml</code> with the new tag.</p>
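<p>The <code>sed</code> tag-replacement step can be tried locally on a one-line sample before trusting it in CI. <code>exampleuser</code> and the SHA values below are placeholders for this demo, not the real repository's values:</p>
<pre><code class="lang-bash"># Simulate replacing the image tag on a deployment manifest line
NEW_SHA=abc1234
printf 'image: exampleuser/voting_app-vote:oldsha\n' \
  | sed "s|image: exampleuser/voting_app-vote:.*|image: exampleuser/voting_app-vote:${NEW_SHA}|"
# prints: image: exampleuser/voting_app-vote:abc1234
</code></pre>
<p>Because the pattern ends with <code>:.*</code>, it matches whatever tag is currently in the manifest, so the step is idempotent across runs.</p>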
<p>You need to set up DockerHub secrets on your GitHub Repository. Below is the guideline on how to set up secrets in your GitHub repository: <a target="_blank" href="https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions"><strong>Using secrets in GitHub Actions</strong></a></p>
<hr />
<h1 id="heading-test-the-deployment">Test the Deployment</h1>
<p>As shown in the ArgoCD dashboard, the application has successfully synced the changes. I experimented with adding and removing some code in the <code>result</code> service, then pushed it to the <code>main</code> branch of the repository. As you can see, 7 more revisions were created for the <code>result</code> service, and the latest one is running as a pod and serving traffic.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748660881326/d917a83b-8111-4539-afbe-945689ab0ce2.png" alt /></p>
<p>Test your application:</p>
<ul>
<li><p>Vote App: http://YOUR_IP:31000/votes/1</p>
</li>
<li><p>Result App: http://YOUR_IP:31001/results/1</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748661063702/b28c1488-11f8-482e-ade4-615a9ac9c5ae.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748661083421/0479754e-d9e3-4ba7-a454-7544d8e8fca1.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-recap">Recap</h1>
<h3 id="heading-github-actions">GitHub Actions</h3>
<ul>
<li><p>Builds and pushes Docker images</p>
</li>
<li><p>Updates Kubernetes deployment files</p>
</li>
</ul>
<h3 id="heading-argocd">ArgoCD</h3>
<ul>
<li><p>Monitors <code>k8s-specifications/</code> directory</p>
</li>
<li><p>Auto-syncs updated manifests into the K8s cluster</p>
</li>
</ul>
<p>In conclusion, deploying a distributed voting application with Kubernetes, ArgoCD, and GitHub Actions provides a robust, automated CI/CD pipeline. You can also extend the workflow with Docker image scanning and linting stages, which check the syntax of your source code among other things, though these are not covered in this blog post. By integrating these technologies, developers can efficiently manage application deployments, monitor changes, and ultimately enhance the overall development and deployment process.</p>
]]></content:encoded></item><item><title><![CDATA[Creating a Kubernetes Cluster on AWS EC2 Using a user-data Script]]></title><description><![CDATA[In this post, we’ll walk through creating a simple Kubernetes cluster on a single AWS EC2 instance using K3s, a lightweight Kubernetes distribution. K3s is easy to install, uses half the memory of standard Kubernetes, and comes in a compact binary of...]]></description><link>https://jackjapar.com/creating-a-kubernetes-cluster-on-aws-ec2-using-a-user-data-script-621c61ee6f87</link><guid isPermaLink="true">https://jackjapar.com/creating-a-kubernetes-cluster-on-aws-ec2-using-a-user-data-script-621c61ee6f87</guid><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Thu, 15 May 2025 18:49:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747865152520/0ad505b3-3e85-4b91-ab8d-abd3852b526d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this post, we’ll walk through creating a simple Kubernetes cluster on a single AWS EC2 instance using K3s, a lightweight Kubernetes distribution. K3s is easy to install, uses half the memory of standard Kubernetes, and comes in a compact binary of less than 100 MB.</p>
<p>The process involves four key steps:</p>
<ul>
<li>Setting up security groups with essential ports</li>
<li>Launching and configuring an EC2 instance with K3s using a user-data script</li>
<li>Verifying the cluster’s functionality</li>
<li>Optional: Setting up local machine access</li>
</ul>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li>An AWS console account with permissions to create EC2 instances and security groups</li>
</ul>
<h3 id="heading-step-1-create-security-group">Step 1: Create Security group</h3>
<ol>
<li>Open the Amazon EC2 console at <a target="_blank" href="https://console.aws.amazon.com/ec2/">https://console.aws.amazon.com/ec2/</a>.</li>
<li>In the navigation pane, choose <strong>Security Groups</strong>.</li>
<li>Choose <strong>Create security group</strong>.</li>
<li>Enter a descriptive name and brief description for the security group. You can’t change the name and description of a security group after it is created.</li>
<li>For <strong>VPC</strong>, choose the VPC in which you’ll run your Amazon EC2 instances.</li>
<li>For inbound rules, choose <strong>Inbound rules</strong>. For each rule, choose <strong>Add rule</strong> and specify the protocol, port, and source. I’ve configured Port 22 for SSH access and Port 6443 to expose the Kubernetes API. While I’ve currently opened these ports to all IP addresses (0.0.0.0/0), this isn’t secure — you should restrict access to your specific IP address.</li>
<li>To add outbound rules, choose <strong>Outbound rules</strong>. For each rule, choose <strong>Add rule</strong> and specify the protocol, port, and destination. Otherwise, you can keep the default rule, which allows all outbound traffic.</li>
<li>Choose <strong>Create security group</strong>.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747865148928/c70c7cf1-a2d7-4768-b573-ba75c6fb06f2.png" alt /></p>
<p>Screenshot of the security group</p>
<h3 id="heading-step-2-launch-instance">Step 2: Launch instance</h3>
<ol>
<li>Open the Amazon EC2 console at <a target="_blank" href="https://console.aws.amazon.com/ec2/">https://console.aws.amazon.com/ec2/</a>.</li>
<li>In the navigation bar at the top of the screen, the current AWS Region is displayed (for example, US East (Ohio)). If needed, select a different Region in which to launch the instance.</li>
<li>From the Amazon EC2 console dashboard, choose <strong>Launch instance</strong>.</li>
<li>(Optional) Under <strong>Name and tags</strong>, for <strong>Name</strong>, enter a descriptive name for your instance.</li>
<li>Under <strong>Application and OS Images (Amazon Machine Image)</strong>, choose <strong>Quick Start</strong>, and then choose the operating system (OS) for your instance. I chose Amazon Linux, t2.micro instance type.</li>
<li>Under <strong>Key pair (login)</strong>, for <strong>Key pair name</strong>, choose an existing key pair or create a new one.</li>
<li>Expand the <strong>Advanced details</strong> toggle menu and paste the following script into the <strong>user data</strong> field. I added comments to the script below for details; you can change the configuration variables depending on your use case. This script installs K3s, sets up NGINX Ingress, exposes the Kubernetes API using the instance’s public IP, and verifies the installation by creating a sample deployment.</li>
</ol>
<pre><code class="lang-bash">#!/bin/bash

# === CONFIGURATION VARIABLES ===
install_nginx_ingress=true       # set to true or false
expose_kubeapi=true              # set to true or false
k3s_version="v1.31.6+k3s1"       # set to "latest" or specific version like "v1.28.5+k3s1"
k3s_token="REPLACE_WITH_YOUR_RANDOM_TOKEN" # No special characters
nginx_ingress_release="v1.12.0"  # if nginx ingress is enabled

# === FUNCTION DEFINITIONS ===

check_os() {
  name=$(grep ^NAME= /etc/os-release | sed 's/"//g')
  clean_name=${name#*=}

  version=$(grep ^VERSION_ID= /etc/os-release | sed 's/"//g')
  clean_version=${version#*=}
  major=${clean_version%.*}
  minor=${clean_version#*.}

  if [[ "$clean_name" == "Ubuntu" ]]; then
    operating_system="ubuntu"
  elif [[ "$clean_name" == "Amazon Linux" ]]; then
    operating_system="amazonlinux"
  else
    operating_system="undef"
  fi

  echo "K3S install process running on: "
  echo "OS: $operating_system"
  echo "OS Major Release: $major"
  echo "OS Minor Release: $minor"
}

# === MAIN EXECUTION ===

check_os
AWS_IMDS_TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Install dependencies
if [[ "$operating_system" == "ubuntu" ]]; then
  apt-get update
  apt-get install -y software-properties-common unzip git nfs-common jq
  DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
  curl -s "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
  unzip -q awscliv2.zip
  sudo ./aws/install
  rm -rf aws awscliv2.zip
elif [[ "$operating_system" == "amazonlinux" ]]; then
  yum install -y --skip-broken unzip curl jq git
else
  echo "Unsupported OS: $operating_system"
  exit 1
fi

# Prepare K3s install params
k3s_install_params=()

if [[ "$install_nginx_ingress" == true ]]; then
  k3s_install_params+=("--disable" "traefik")
fi

if [[ "$expose_kubeapi" == true ]]; then
  provider_public_ip="$(curl -s -H "X-aws-ec2-metadata-token: $AWS_IMDS_TOKEN" \
    http://169.254.169.254/latest/meta-data/public-ipv4)"
  k3s_install_params+=("--tls-san" "${provider_public_ip}")
fi

INSTALL_PARAMS="${k3s_install_params[*]}"
echo "INSTALL_PARAMS: $INSTALL_PARAMS"

# Get K3s version
if [[ "$k3s_version" == "latest" ]]; then
  K3S_VERSION=$(curl -s https://api.github.com/repos/k3s-io/k3s/releases/latest | jq -r '.name')
else
  K3S_VERSION="$k3s_version"
fi

# Install K3s, retrying until the installer succeeds
until (curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=$K3S_VERSION K3S_TOKEN=$k3s_token sh -s - --cluster-init $INSTALL_PARAMS); do
  echo 'k3s did not install correctly, retrying'
  sleep 2
done

# Wait for k3s to become ready
until kubectl get pods -A 2&gt;/dev/null | grep 'Running'; do
  echo 'Waiting for k3s startup'
  sleep 5
done

# Configure kubectl alias
echo 'alias k=kubectl' &gt;&gt;~/.bashrc
source ~/.bashrc

# Optionally install nginx ingress
if [[ "$install_nginx_ingress" == true ]]; then
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-${nginx_ingress_release}/deploy/static/provider/cloud/deploy.yaml
  kubectl create deployment demo --image=httpd --port=80

  until kubectl get pods --namespace=ingress-nginx 2&gt;/dev/null | grep 'Running'; do
    echo 'Waiting for nginx ingress controller to start'
    sleep 5
  done
fi
</code></pre>
<p>8. In the Summary panel, choose Launch instance. Below is a snapshot taken just before I clicked Launch instance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747865150569/ea969e43-9949-46ba-a30c-144e8621e14d.png" alt /></p>
<h3 id="heading-step-3-confirm-kubernetes-cluster">Step 3: Confirm Kubernetes Cluster</h3>
<p>Connect to your instance through SSH and check that Kubernetes is working correctly:</p>
<pre><code class="lang-bash">[ec2-user@ip-172-31-30-75 ~]$ sudo su # user-data installed kubernetes with root user
[root@ip-172-31-30-75 ec2-user]# kubectl get nodes
NAME                           STATUS   ROLES                       AGE   VERSION
ip-172-31-30-75.ec2.internal   Ready    control-plane,etcd,master   58m   v1.31.6+k3s1
[root@ip-172-31-30-75 ec2-user]#
</code></pre>
<p>If your Kubernetes cluster is not working, you can check the initial user-data log on your EC2 instance with:</p>
<pre><code class="lang-bash">tail -n 100 /var/log/cloud-init-output.log
</code></pre>
<p>That’s it! Your K3s Kubernetes cluster is now ready to use. Note that additional security configurations are recommended, though we won’t cover those in this post.</p>
<h3 id="heading-step-4-optional-access-to-your-kubernetes-api-from-your-local-machine">Step 4 (Optional): Access your Kubernetes API from your local machine</h3>
<p>As shown above, we exposed the Kubernetes API while installing K3s and opened port 6443 in our security group configuration, which enables us to access the Kubernetes cluster from a local machine. To do so, follow the steps below:</p>
<ol>
<li>Install the <code>kubectl</code> command-line tool on your local machine using the official Kubernetes guide for your platform: <a target="_blank" href="https://kubernetes.io/docs/tasks/tools/#kubectl">https://kubernetes.io/docs/tasks/tools/#kubectl</a>.<br />I simply installed it on my Mac with <code>brew install kubectl</code>. If kubectl is installed properly, you can verify it with the command below:</li>
</ol>
<pre><code class="lang-bash">kubectl version --client
</code></pre>
<p>2. Transfer the kubeconfig file from the EC2 instance to your local machine:</p>
<pre><code class="lang-bash">scp -i mykeypair.pem root@remote-host:/etc/rancher/k3s/k3s.yaml ~/.kube/my-ec2-remote-k3s.yaml
</code></pre>
<p>3. Update the server address:<br />Replace <code>https://127.0.0.1:6443</code> with the public IP or DNS of your remote host. Example:</p>
<pre><code class="lang-yaml">server: https://YOUR_IP:6443
</code></pre>
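<p>If you prefer to script this edit, the substitution can be sketched as below. The sample server line and the IP <code>203.0.113.10</code> are placeholder values for illustration, not your real instance address:</p>
<pre><code class="lang-bash"># Rewrite the kubeconfig server address from localhost to the instance's public IP
PUBLIC_IP=203.0.113.10
printf 'server: https://127.0.0.1:6443\n' \
  | sed "s|https://127.0.0.1:6443|https://${PUBLIC_IP}:6443|"
# prints: server: https://203.0.113.10:6443
</code></pre>
<p>In practice you would run the same <code>sed</code> with <code>-i</code> against the copied kubeconfig file instead of a <code>printf</code> pipe.</p>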
<p>4. Set KUBECONFIG locally, pointing at the file you copied in step 2:</p>
<pre><code class="lang-bash">export KUBECONFIG=~/.kube/my-ec2-remote-k3s.yaml
</code></pre>
<p>5. Finally, check that the Kubernetes cluster on your EC2 instance is accessible from your local machine. This lists the nodes you are connected to:</p>
<pre><code class="lang-bash">kubectl get nodes
</code></pre>
<p>If you see your EC2 node listed in the output, it means the connection is successful — you can now interact with your Kubernetes cluster from your local machine using <code>kubectl</code>.</p>
<h3 id="heading-summary">Summary</h3>
<p>Setting up a Kubernetes cluster with k3s doesn’t have to be complicated. This guide walks through creating a lightweight Kubernetes cluster using K3s on AWS EC2.</p>
<p>However, before diving in, consider these crucial points about Kubernetes adoption:</p>
<ul>
<li>Evaluate your scaling needs — Kubernetes is ideal for rapid scaling scenarios, as migration later can be challenging</li>
<li>Ensure you have qualified personnel — Kubernetes requires expertise for proper management, as it’s not secure by default</li>
<li>Consider managed services — Rather than building your own cluster, it’s recommended to use managed Kubernetes services that handle the complex operational aspects</li>
</ul>
<p>Reference:<br /><a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html</a><br /><a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-security-group.html">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-security-group.html</a><br /><a target="_blank" href="https://github.com/devsteppe9/k3s-aws-cdk/blob/main/scripts/k3s-master-install.sh">https://github.com/devsteppe9/k3s-aws-cdk/blob/main/scripts/k3s-master-install.sh</a><br /><a target="_blank" href="https://docs.k3s.io/">https://docs.k3s.io/</a><br /><a target="_blank" href="https://docs.k3s.io/cluster-access">https://docs.k3s.io/cluster-access</a></p>
]]></content:encoded></item><item><title><![CDATA[The CAP Theorem in Distributed Databases]]></title><description><![CDATA[As part of my journey into understanding distributed databases, I encountered a concept that fundamentally shapes how these systems are designed: the CAP theorem.It was so eye-opening that I decided to document and share what I learned today.
What is...]]></description><link>https://jackjapar.com/today-i-learned-the-cap-theorem-in-distributed-databases-1cb862c063a2</link><guid isPermaLink="true">https://jackjapar.com/today-i-learned-the-cap-theorem-in-distributed-databases-1cb862c063a2</guid><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Tue, 29 Apr 2025 02:09:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747865124654/e46ff523-c108-4fa3-9dc2-40500b2dd6e9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As part of my journey into understanding distributed databases, I encountered a concept that fundamentally shapes how these systems are designed: <strong>the CAP theorem</strong>.<br />It was so eye-opening that I decided to document and share what I learned today.</p>
<h3 id="heading-what-is-the-cap-theorem">What is the CAP Theorem?</h3>
<p>The CAP theorem states that in any distributed system, <strong>you can guarantee only two out of three</strong> desired characteristics:</p>
<ul>
<li><strong>Consistency (C):</strong> Every client sees the same data at the same time, no matter which node they connect to.</li>
<li><strong>Availability (A):</strong> Every request receives a response — even if some of the nodes are down.</li>
<li><strong>Partition Tolerance (P):</strong> The system continues to operate despite network partitions or communication failures between nodes.</li>
</ul>
<p>Since network partitions can (and eventually will) happen in any distributed system, <strong>Partition Tolerance</strong> is non-negotiable. Thus, distributed databases must prioritize between <strong>Consistency</strong> and <strong>Availability</strong> when a partition occurs.</p>
<h3 id="heading-types-of-distributed-databases-based-on-cap">Types of Distributed Databases Based on CAP</h3>
<ul>
<li><strong>CA databases:  
</strong>All nodes remain consistent and available <strong>as long as</strong> there’s no partition.<br />However, if a network partition happens, the system may crash.<br /><strong>Examples:</strong> PostgreSQL, MariaDB</li>
<li><strong>CP databases:  
</strong>Prioritizes consistency even during partitions — meaning some nodes may become unavailable until the partition is resolved.<br /><strong>Examples:</strong> MongoDB</li>
<li><strong>AP databases:  
</strong>Prioritizes availability at the cost of consistency.<br />During a partition, nodes may continue serving <strong>potentially outdated data</strong> to ensure availability.<br /><strong>Examples:</strong> Couchbase, DynamoDB</li>
</ul>
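<p>As a toy sketch (no real database works exactly like this), the difference between the CP and AP choices during a partition can be shown in a few lines of shell. The function names and values are invented for illustration: a replica on the minority side of a partition still holds <code>v1</code>, while the majority side has already accepted a newer write <code>v2</code>:</p>
<pre><code class="lang-bash"># Toy model of a minority-side replica answering a read during a partition
minority_value="v1"   # the newer write "v2" never reached this replica

ap_read() {
  # AP choice: always answer, even though the value may be stale
  echo "$minority_value"
}

cp_read() {
  # CP choice: refuse to answer rather than risk serving stale data
  echo "unavailable during partition"
  return 1
}

ap_read          # prints: v1  (available but possibly stale)
cp_read || true  # reports unavailability instead of a stale value
</code></pre>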
<p><strong>Reflection:</strong><br />Learning about CAP helped me better appreciate the trade-offs that engineers must consider when designing distributed systems. It’s all about understanding your system’s priorities — consistency, availability, or how you handle network failures.</p>
<p><em>#DistributedSystems #CAPTheorem #DatabaseDesign #TodayILearned #SoftwareEngineering</em></p>
]]></content:encoded></item><item><title><![CDATA[How I prepared for AWS DevOps (DOP-C02) Professional exam]]></title><description><![CDATA[To give some background, I have been a software engineer with experience since 2017. Even though I worked as a software engineer, I worked half of my time handling EC2 servers, containers on ECS, etc. Having already earned three AWS certifications (D...]]></description><link>https://jackjapar.com/how-i-prepared-for-aws-devops-dop-c02-professional-exam-770eb5a8cc0f</link><guid isPermaLink="true">https://jackjapar.com/how-i-prepared-for-aws-devops-dop-c02-professional-exam-770eb5a8cc0f</guid><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Fri, 28 Feb 2025 03:54:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747865141445/f17e7e17-a84c-4e9e-b067-75389327fe85.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>To give some background, I have been a software engineer with experience since 2017. Even though I worked as a software engineer, I worked half of my time handling EC2 servers, containers on ECS, etc. Having already earned three AWS certifications (Developer Associate, Solutions Architect Associate, and Cloud Practitioner) and the Kubernetes CKAD certification, I decided to pursue the AWS DevOps Professional certification to further advance my career. And here I wanted to share my DevOps Certification journey.</p>
<p>I enrolled in <a target="_blank" href="https://www.udemy.com/share/101WpU3@ZawdPEgxCjmIKIsTkoa2BZRmZDnIUXdgmQEX9yxuHUbu0nY3n16DUUelxt1lv8mt/">Stephane Maarek’s Udemy</a> course and completed it within 2 weeks of daily study. I had previously taken Stephane’s SAA course, which helped me progress faster. I took detailed notes throughout the course.</p>
<p>For practice, I purchased exam sets from <a target="_blank" href="https://www.udemy.com/share/108diY3@fbLHd8zqi5xlPo2DdXxipVNNhMDmNYYeKm2DD0LiBruRhlOlog5fxvWSwYz7S03E/">Tutorials Dojo</a> along with Stephane’s question sets. While working through the sets, I saw some deprecated services, such as OpsWorks and CodeCommit, still appearing in the questions, so I suggest you check which services are deprecated in the AWS documentation. I also realized some EKS topics were not covered in Stephane’s course, especially Pod Identity and IAM Roles for Service Accounts (IRSA). Below is the strategy I followed:</p>
<ul>
<li>Tackled 25 questions per session</li>
<li>Reviewed incorrect answers immediately</li>
<li>Noted unfamiliar service features from the answer options</li>
<li>After finishing each set, reviewed the noted features in the AWS Documentation and did small experiments in the AWS console</li>
<li>After finishing 3 sets, sat through all 75 questions in one sitting to check whether I could still solve the previous 3 sets</li>
</ul>
<p>After tackling about 200 questions, I scheduled my exam. It was scheduled for 13:30, and I checked in at 13:00. The check-in process was pretty quick. The proctor tried to connect by voice through my MacBook, but I could not hear the proctor’s voice, so we decided to communicate over chat instead. I finished the exam around 16:40, spending about 3 hours 40 minutes in total, and it was the longest exam I’ve ever had.</p>
<p>Surprisingly, I received my exam result around 21:00, about 4 hours later. I first received the Credly badge email, then got an email from AWS congratulating me. What a relief; I did not want to sit that long exam again! My previous two certifications, AWS Developer Associate and AWS Cloud Practitioner, were automatically extended by 3 years, while my AWS Solutions Architect certification was not extended.</p>
<p>That was my experience. In summary, it was a pretty hard exam to prepare for. I spent weeks reading long paragraphs of questions and answer options; it required real consistency and energy, and I am very happy to have passed!</p>
<p>In my next post, I am going to share the raw notes I took while studying for this exam. Hopefully they help whoever is preparing for it. Stay focused!</p>
]]></content:encoded></item><item><title><![CDATA[Япон хэлний дүрэм #2]]></title><description><![CDATA[〜わけがない
Meaning: used when the chance of something happening is 0%
Rule: 〜辞書形＋わけがない。 dictionary form＋わけがない
Example sentences:
①紙に名前を書くわけがない。(There is no way I would write my name on the paper. Perhaps, since there is no need to write it, I have no intention of writing it)
②雨が降るわけがない。...]]></description><link>https://jackjapar.com/d18fd0bfd0bed0bd-d185d18dd0bbd0bdd0b8d0b9-d0b4d2afd180d18dd0bc-2-7323afd5fce4</link><guid isPermaLink="true">https://jackjapar.com/d18fd0bfd0bed0bd-d185d18dd0bbd0bdd0b8d0b9-d0b4d2afd180d18dd0bc-2-7323afd5fce4</guid><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Sat, 09 May 2020 13:44:36 GMT</pubDate><content:encoded><![CDATA[<h3 id="heading-kirjgjzjgojgzhjgyzjgarjgyqqkg"><strong>〜わけがない</strong></h3>
<p>Meaning: used when the chance of something happening is 0%</p>
<p>Rule: 〜辞書形＋わけがない。 dictionary form＋わけがない</p>
<p>Example sentences:</p>
<p>①紙に名前を書くわけがない。(There is no way I would write my name on the paper. Perhaps, since there is no need to write it, I have no intention of writing it)</p>
<p>②雨が降る<strong>わけが</strong>ない。(There is no way it will rain)</p>
<p>③授業をし<strong><em>ない</em></strong>わけが<strong><em>ない</em></strong>。(There is no way I will not do the lesson. The two negatives combine to mean it will definitely happen)</p>
<h3 id="heading-kirjgjzjgojjgybjgyzjgarjgyqqkg"><strong>〜ようがない</strong></h3>
<p>Meaning: there is no way or means to do something</p>
<p>Rule: 〜ます形＋ようがない。 verb ます stem＋ようがない</p>
<p>Example sentences:</p>
<p>①紙に名前を書き<strong>ようがない。</strong>(There is no way to write my name on the paper. Perhaps I cannot write it because there is no paper or pen)</p>
<p>②お腹空いたでも食べようがない。(I am hungry, but there is no way to eat. Perhaps there is nothing to eat)</p>
]]></content:encoded></item><item><title><![CDATA[Japanese Grammar #1]]></title><description><![CDATA[〜ばかりに
Meaning: "because of ~". Used when explaining a negative outcome
Pattern: 〜た形＋ばかりに。 た-form＋ばかりに
Example sentences:
①傘を持っていなかったばかりにぬれてしまった。(I got soaked because I hadn't brought my umbrella)
②映画をたくさん見たばかりに授業をしなかった。(Because I watched so many movies, I couldn't do my...]]></description><link>https://jackjapar.com/d18fd0bfd0bed0bd-d185d18dd0bb-d0b4d2afd180d18dd0bc-1-b50877a9b622</link><guid isPermaLink="true">https://jackjapar.com/d18fd0bfd0bed0bd-d185d18dd0bb-d0b4d2afd180d18dd0bc-1-b50877a9b622</guid><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Sat, 09 May 2020 13:06:08 GMT</pubDate><content:encoded><![CDATA[<h3 id="heading-kirjgjzjgbdjgyvjgorjgasqkg"><strong>〜ばかりに</strong></h3>
<p>Meaning: "because of ~". Used when explaining a negative outcome</p>
<p>Pattern: 〜た形＋ばかりに。 た-form＋ばかりに</p>
<p>Example sentences:</p>
<p>①傘を持っていなかっ<strong>た<em>ばかりに</em></strong>ぬれてしまった。(I got soaked because I hadn't brought my umbrella)</p>
<p>②映画をたくさん見<strong>た<em>ばかり</em></strong>に授業をしなかった。(Because I watched so many movies, I couldn't do my schoolwork)</p>
<p>③食べ物をたくさん食べたばかりに太ってしまった。(I gained weight because I ate so much)</p>
<h3 id="heading-kirjgjzjgbdjgyvjgooqkg"><strong>〜ばかり</strong></h3>
<p>Meaning: Used when something was started or done only recently.</p>
<p>Pattern: 〜た形＋ばかり。 た-form＋ばかり (note: に is not used here)</p>
<p>Example sentences:</p>
<p>①日本に来たばかりです。(I've only just come to Japan)</p>
<p>②日本語を勉強したばかり。(I've only just started studying Japanese)</p>
]]></content:encoded></item><item><title><![CDATA[How to Learn Things Fast?]]></title><description><![CDATA[Today I'm going to share an interesting and very useful talk by Nishant Kasibhatla, holder of a 2011 Guinness World Record for memory.

You surely already know the famous names without my listing them. They are all people who have achieved success in their...]]></description><link>https://jackjapar.com/d185d18dd180d185d18dd0bd-d18ed0bcd18bd0b3-d185d183d180d0b4d0b0d0bd-d181d183d180d0b0d185-d0b2d18d-f7220458a187</link><guid isPermaLink="true">https://jackjapar.com/d185d18dd180d185d18dd0bd-d18ed0bcd18bd0b3-d185d183d180d0b4d0b0d0bd-d181d183d180d0b0d185-d0b2d18d-f7220458a187</guid><dc:creator><![CDATA[Jarkynbyek (Jack) Japar]]></dc:creator><pubDate>Sun, 01 Mar 2020 15:50:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747865137977/4ffaf39b-455e-4a51-8a13-9d46957ab1df.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today I'm going to share an interesting and very useful talk by <strong>Nishant Kasibhatla</strong>, holder of a 2011 Guinness World Record for memory.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747865131698/3e11535f-3c4c-4eaf-9a1f-540462c5f2eb.png" alt /></p>
<p>You surely already know the famous names without my listing them. They are all people who have achieved success in their own fields. But there is one common habit they all follow: <strong>they keep learning continuously throughout their lives</strong>.</p>
<p>They never stopped learning new things. Success takes many qualities, but I'm convinced that <strong>the ability to learn</strong> is the key driver.<br />The more we learn, the better our odds of success become.</p>
<h3 id="heading-gehdee-herhen-uer-duentej-surah-ve">But how do we learn effectively?</h3>
<p>When we learn, we feed our brains lots of <strong>input</strong> by reading, watching, and listening: reading books, watching videos, listening to podcasts, and so on. But people tend to focus only on the input. As a result:</p>
<ul>
<li>A lot of information comes in</li>
<li>But there is no output</li>
<li>And this produces <strong>Shallow Learning</strong>.</li>
</ul>
<blockquote>
<p>But what we actually want is:<br /><strong>Deep Learning</strong>.</p>
</blockquote>
<p>But how…</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747865133608/afb416e2-8dc5-4e61-92ef-4849ba2bf36d.png" alt /></p>
<p>To learn well, we need to focus <strong>on output more than input</strong>.<br /><strong>Output</strong> means using what you've learned, applying it in practice.</p>
<blockquote>
<p>Use what you've learned or lose it!</p>
</blockquote>
<p>The speaker then laid out a formula for learning quickly and successfully:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747865135999/e2065730-6f89-4b27-ba26-d8c352861de3.png" alt /></p>
<p>1. Input quality</p>
<p>First of all, pay close attention to the QUALITY of the input you feed your brain. While taking in input, that is, while learning, focus deeply on that one thing. Free yourself from multitasking distractions like chatting with someone or checking your phone's notifications while studying. Say you're sitting there reading, learning, and understanding something, and a notification arrives on your phone. What do we do? We check it, of course. And while checking that notification, you instantly kill the effort you were putting into understanding, that is, the quality of your input. Losing input quality means losing your ability to recall that knowledge later. So while taking in input, give it 100% of your attention and don't multitask. Focus on one thing at a time.</p>
<p>2. Reflect</p>
<p>After you finish learning something, look back at it and ask yourself the following questions:</p>
<ul>
<li>What was worth taking from this knowledge?</li>
<li>What will I use this knowledge for?</li>
<li>How can I apply it in my life?</li>
<li>How could I apply it at work or at home?</li>
</ul>
<p>By asking yourself these questions, you reinforce what you've learned and make it stick.</p>
<p>3. Apply</p>
<p>Learn to put what you've studied into practice. When we learn something, we feel as if we're done with it and move straight on to the next thing. Without applying it, we pile up hollow knowledge: when we try to use what we learned, we find we can't. Make a plan around what you've learned and take some action, because any action is better than none.</p>
<p>4. Share</p>
<p>The last one is sharing. We've all surely heard that the best way to learn is to teach. Simply share, teach, and discuss what you've learned with other people. While we share with others, our brains pay very close attention to that knowledge.</p>
<p>So no one becomes a true master by focusing only on input. A true master produces more output than the input they take in.</p>
<p>Now let me ask you a question. When learning something new, how much time did you spend on the learning itself? And how much on reflecting, applying, and sharing?</p>
<p>If you spent more time on input than on output, that is not the right way.</p>
<p>So that's my post on something vital in our lives: learning without stopping, and learning effectively and quickly when you do. I've embedded the talk below, so do check it out. If you spot anything that needs correcting, leave a comment.</p>
<iframe src="https://www.youtube.com/embed/ZVO8Wt_PCgE?feature=oembed" width="700" height="393"></iframe>

<p>2020.03.02 Japan, Osaka</p>
]]></content:encoded></item></channel></rss>