<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[GetintoKube]]></title><description><![CDATA[Bridging the gap between theory and practice in today's top technologies.]]></description><link>https://getintokube.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1737695706168/15e458a4-e5e8-4e6b-a1c7-81c2469ba6a9.png</url><title>GetintoKube</title><link>https://getintokube.com</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 02:40:31 GMT</lastBuildDate><atom:link href="https://getintokube.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Migrating from Ingress NGINX to Gateway API in GKE]]></title><description><![CDATA[If you have been working with Kubernetes for a while, there's a good chance you've used NGINX Ingress to expose services outside your cluster. For years, it has been the default choice for many teams ]]></description><link>https://getintokube.com/migrating-from-ingress-nginx-to-gateway-api-in-gke</link><guid isPermaLink="true">https://getintokube.com/migrating-from-ingress-nginx-to-gateway-api-in-gke</guid><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Wed, 11 Mar 2026 19:02:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/66f43cce4175136d80945d5a/f772a9cb-c099-4fcb-b124-77451b744ec8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you have been working with Kubernetes for a while, there's a good chance you've used <strong>NGINX Ingress</strong> to expose services outside your cluster. For years, it has been the default choice for many teams because it was simple to set up and widely supported.</p>
<p>But Kubernetes networking has been evolving rapidly.</p>
<p>Recently, the Kubernetes community has been shifting focus toward the <strong>Gateway API</strong>, which is designed to be the <strong>next-generation replacement for the traditional Ingress model</strong>. Because of this shift, the community is gradually <strong>deprecating the ingress-nginx controller in favor of more modern networking approaches built around Gateway API</strong>.</p>
<p>For teams running workloads on <strong>Google Kubernetes Engine (GKE)</strong>, this transition becomes even more relevant since GKE provides <strong>native support for Gateway API with managed load balancing</strong>.</p>
<p>If you're currently running applications using <strong>NGINX Ingress</strong>, this guide will help you understand <strong>why the shift is happening and how to transition to Gateway API in GKE</strong>.</p>
<p>By the end of this tutorial, you’ll have a <strong>working Gateway + HTTPRoute setup</strong> that replaces your traditional ingress-based routing.</p>
<h1>Why Move Away from Ingress NGINX?</h1>
<img src="https://cdn.hashnode.com/uploads/covers/66f43cce4175136d80945d5a/1733cdef-4835-4a62-a2c3-6c6014fe7e7e.png" alt="" style="display:block;margin:0 auto" />

<p>While <strong>Ingress NGINX</strong> has been a widely used solution for exposing Kubernetes services, the ecosystem is gradually shifting toward the <strong>Gateway API</strong> as the long-term standard for traffic management.</p>
<p>Some of the major limitations of traditional Ingress include:</p>
<ul>
<li><p>Limited routing capabilities</p>
</li>
<li><p>Complex annotations</p>
</li>
<li><p>Lack of role separation between platform teams and app teams</p>
</li>
<li><p>Poor extensibility</p>
</li>
</ul>
<p>Gateway API solves these problems by introducing a <strong>more structured and modular networking model</strong>.</p>
<h1>Understanding Gateway API Architecture</h1>
<p>Gateway API introduces several new Kubernetes resources that separate responsibilities clearly.</p>
<img src="https://cdn.hashnode.com/uploads/covers/66f43cce4175136d80945d5a/577cca5a-3e18-4f3d-9499-d86eb252e77a.png" alt="" style="display:block;margin:0 auto" />

<table>
<thead>
<tr>
<th>Resource</th>
<th>Responsibility</th>
</tr>
</thead>
<tbody><tr>
<td>GatewayClass</td>
<td>Defines the gateway implementation</td>
</tr>
<tr>
<td>Gateway</td>
<td>Defines the entry point to the cluster</td>
</tr>
<tr>
<td>HTTPRoute</td>
<td>Defines routing rules to services</td>
</tr>
</tbody></table>
<p>This model allows <strong>platform teams to manage infrastructure</strong> while <strong>application teams manage routing rules</strong>.</p>
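<p>A minimal sketch of that separation (all names here are illustrative): the platform team owns a Gateway in an <code>infra</code> namespace and uses a label selector to control which namespaces may attach routes, while an application team binds its own HTTPRoute from its namespace.</p>
<pre><code class="language-yaml">apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway      # owned by the platform team
  namespace: infra
spec:
  gatewayClassName: gke-l7-global-external-managed
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Selector      # only labeled namespaces may attach routes
        selector:
          matchLabels:
            gateway-access: "true"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: team-a-route        # owned by the application team
  namespace: team-a         # this namespace must carry gateway-access: "true"
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  rules:
  - backendRefs:
    - name: team-a-service
      port: 80
</code></pre>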
<h1>Prerequisites</h1>
<p>Before starting the migration, ensure you have:</p>
<ul>
<li><p>A <strong>running GKE cluster</strong></p>
</li>
<li><p><strong>kubectl installed</strong></p>
</li>
<li><p><strong>gcloud CLI configured</strong></p>
</li>
<li><p>Permissions to manage cluster networking</p>
</li>
</ul>
<p>Verify cluster connectivity:</p>
<pre><code class="language-bash">kubectl get nodes
</code></pre>
<h1>Enable Gateway API in GKE</h1>
<p>Gateway API support must be enabled in your GKE cluster.</p>
<p>You can do this either through the <strong>GKE Console</strong> or <strong>Terraform</strong>, depending on how you manage infrastructure.</p>
<h2>Option 1: Enable via GKE Console</h2>
<ol>
<li><p>Open <strong>Google Cloud Console</strong></p>
</li>
<li><p>Navigate to <strong>Kubernetes Engine → Clusters</strong></p>
</li>
<li><p>Select your cluster</p>
</li>
<li><p>Go to <strong>Details</strong></p>
</li>
<li><p>Enable <strong>Gateway API</strong></p>
</li>
</ol>
<img src="https://cdn.hashnode.com/uploads/covers/66f43cce4175136d80945d5a/b4fe955a-4b55-466a-bf2a-c0b19da539cf.png" alt="GKE console" style="display:block;margin:0 auto" />

<h2>Option 2: Enable Gateway API Using Terraform</h2>
<p>If you're managing infrastructure using Terraform, you can enable the Gateway API during cluster creation.</p>
<pre><code class="language-hcl">resource "google_container_cluster" "gke_cluster" {
  name     = "gateway-demo-cluster"
  location = "us-central1"

  gateway_api_config {
    channel = "CHANNEL_STANDARD"
  }
}
</code></pre>
<p>Apply the configuration:</p>
<pre><code class="language-bash">terraform apply
</code></pre>
<h1>List Available GatewayClasses</h1>
<p>Once Gateway API is enabled, GKE automatically installs supported <strong>GatewayClasses</strong>.</p>
<p>List them using:</p>
<pre><code class="language-bash">kubectl get gatewayclass
</code></pre>
<p>Example output:</p>
<pre><code class="language-plaintext">NAME                               CONTROLLER
gke-l7-global-external-managed     networking.gke.io/gateway
gke-l7-gxlb                        networking.gke.io/gateway
gke-l7-regional-external-managed   networking.gke.io/gateway
gke-l7-rilb                        networking.gke.io/gateway
</code></pre>
<p>Each GatewayClass corresponds to a specific load balancing capability.</p>
<p>For most public-facing applications, you will typically use:</p>
<pre><code class="language-plaintext">gke-l7-global-external-managed
</code></pre>
<h1>Create the Gateway (Application Gateway)</h1>
<p>Now we create the <strong>Gateway resource</strong> that acts as the entry point to your cluster.</p>
<p>Create a file called <strong>gateway.yaml</strong></p>
<pre><code class="language-yaml">apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: application-gateway
spec:
  gatewayClassName: gke-l7-global-external-managed
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
</code></pre>
<p>Apply it:</p>
<pre><code class="language-bash">kubectl apply -f gateway.yaml
</code></pre>
<p>Verify:</p>
<pre><code class="language-bash">kubectl get gateway
</code></pre>
<p>Example output (your <code>ADDRESS</code> value will differ):</p>
<pre><code class="language-plaintext">NAME                  CLASS                            ADDRESS        PROGRAMMED   AGE
application-gateway   gke-l7-global-external-managed   203.0.113.10   True         2m
</code></pre>
<h1>Deploy Application</h1>
<p>Let’s deploy a sample <strong>nginx application</strong> to simulate a typical service behind an ingress.</p>
<p>Deployment:</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
</code></pre>
<p>Service:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
</code></pre>
<p>Apply:</p>
<pre><code class="language-bash">kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
</code></pre>
<h1>Create an HTTPRoute</h1>
<p>Now we connect the <strong>application-gateway</strong> to the backend service using <strong>HTTPRoute</strong>.</p>
<p>Create <strong>httproute.yaml</strong></p>
<pre><code class="language-yaml">apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: nginx-route
spec:
  parentRefs:
  - name: application-gateway
  rules:
  - backendRefs:
    - name: nginx-service
      port: 80
</code></pre>
<p>Apply it:</p>
<pre><code class="language-bash">kubectl apply -f httproute.yaml
</code></pre>
<p>Verify:</p>
<pre><code class="language-bash">kubectl get httproute
</code></pre>
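<p>HTTPRoute also supports richer matching than annotation-driven Ingress. Here is a hedged sketch extending the route above (the <code>api-service</code> backend is hypothetical): requests under <code>/api</code> go to one service, and everything else goes to nginx.</p>
<pre><code class="language-yaml">apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: nginx-route-split
spec:
  parentRefs:
  - name: application-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-service    # hypothetical backend service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: nginx-service
      port: 80
</code></pre>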
<h3><strong>Architecture flow:</strong></h3>
<p><strong>Gateway API architecture in GKE</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/66f43cce4175136d80945d5a/6ec0dbec-4b31-4845-ad43-987322e8cd92.png" alt="Gateway API architecture in GKE" style="display:block;margin:0 auto" />

<h1>Access the Application</h1>
<p>Retrieve the Gateway external IP:</p>
<pre><code class="language-bash">kubectl get gateway application-gateway
</code></pre>
<p>Open the IP in your browser.</p>
<p>You should see:</p>
<img src="https://cdn.hashnode.com/uploads/covers/66f43cce4175136d80945d5a/050d6989-262b-48a5-b543-5cafba0d7ccb.png" alt="" style="display:block;margin:0 auto" />

<p>This confirms your <strong>Gateway API configuration is working successfully</strong>.</p>
<h1>Troubleshooting Tips</h1>
<p>If your application isn't accessible, perform these checks:</p>
<p>To check Gateway status:</p>
<pre><code class="language-bash">kubectl describe gateway application-gateway
</code></pre>
<p>To check HTTPRoute status:</p>
<pre><code class="language-bash">kubectl describe httproute nginx-route
</code></pre>
<p>To get Pod health:</p>
<pre><code class="language-bash">kubectl get pods
</code></pre>
<p>Most issues occur due to <strong>incorrect service references or missing GatewayClass</strong>.</p>
<h1>Key Takeaways</h1>
<p>Migrating from <strong>Ingress NGINX to Gateway API</strong> in GKE provides several advantages:</p>
<ul>
<li><p>Modern Kubernetes networking standard</p>
</li>
<li><p>More powerful routing capabilities</p>
</li>
<li><p>Clear separation between infrastructure and application routing</p>
</li>
<li><p>Native integration with GKE load balancers</p>
</li>
</ul>
<p>Gateway API is expected to become the <strong>primary way to manage traffic in Kubernetes clusters going forward</strong>.</p>
<blockquote>
<p>💡Do not treat migration as a simple YAML conversion ... treat it as a <strong>networking architecture upgrade</strong>.</p>
</blockquote>
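<p>To make the conversion concrete, here is a hedged before/after sketch: a minimal Ingress NGINX rule and its rough Gateway API equivalent. The manifests are illustrative, and annotation-driven behaviors (such as rewrites) need a case-by-case mapping rather than a mechanical translation.</p>
<pre><code class="language-yaml"># Before: Ingress NGINX
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
---
# After: the equivalent HTTPRoute attached to application-gateway
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: nginx-route
spec:
  parentRefs:
  - name: application-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: nginx-service
      port: 80
</code></pre>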
<h1>Conclusion</h1>
<p>Gateway API represents the <strong>next generation of Kubernetes traffic management</strong>. It provides:</p>
<ul>
<li><p>Clear separation of responsibilities</p>
</li>
<li><p>Rich routing capabilities</p>
</li>
<li><p>Extensible policy framework</p>
</li>
<li><p>Better security model</p>
</li>
</ul>
<p>Migrating from <strong>Ingress NGINX</strong> to <strong>Gateway API</strong> requires planning, but by following <strong>progressive migration, policy standardization, and observability best practices</strong>, teams can modernize their Kubernetes networking stack safely.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding CloudFront Access to S3: OAI vs OAC — My Personal Experiment]]></title><description><![CDATA[Hey everyone! 👋
While working with an S3 + CloudFront setup, I noticed something that made me curious: there are two different ways to let CloudFront access S3: OAI (Origin Access Identity) and the newer OAC (Origin Access Control).
That got me wo...]]></description><link>https://getintokube.com/oai-vs-oac</link><guid isPermaLink="true">https://getintokube.com/oai-vs-oac</guid><category><![CDATA[getintokube]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[cloudfront]]></category><category><![CDATA[Cloudfront distribution]]></category><category><![CDATA[S3]]></category><category><![CDATA[S3-bucket]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Sat, 14 Jun 2025 12:17:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749899554243/86f64651-0a8d-4b24-822f-fcca0d85c014.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone! 👋</p>
<p>While working with an S3 + CloudFront setup, I noticed something that made me curious: there are <strong>two different ways</strong> to let CloudFront access S3: <strong>OAI (Origin Access Identity)</strong> and the newer <strong>OAC (Origin Access Control)</strong>.</p>
<p>That got me wondering:</p>
<blockquote>
<p>Why are there two options? What’s the real difference?</p>
</blockquote>
<p>To answer that, I decided to run a hands-on experiment and explore both approaches in detail.</p>
<p>Now that I’ve done it, I’m sharing my experience below, including what I built, what I learned, and how you can try it too.</p>
<p>If you've ever been confused about when to use which, or how they actually work in practice, this post is for you!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749896667013/2821c6fa-d7b2-41a6-843e-676c8a1e59ab.gif" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-wait-why-are-there-two-options-anyway">🤔 Wait, Why Are There Two Options Anyway?</h1>
<p>At first glance, both OAI and OAC let CloudFront access private S3 content securely so naturally, I wondered:</p>
<blockquote>
<p>If both are doing the same thing, why did AWS introduce a new method?</p>
</blockquote>
<p>Here’s the thing: <strong>both OAI and OAC are used to let CloudFront securely access content from an S3 bucket</strong>. So, you might think: “If the end result is the same, why is AWS pushing everyone toward OAC now?”</p>
<p>Let me break it down for you:</p>
<hr />
<h1 id="heading-the-problem-with-legacy-oai">🔍 The Problem with Legacy OAI</h1>
<p>With <strong>OAI</strong>, we create a special identity, and we update our S3 bucket policy to allow that OAI to access the bucket. Then, CloudFront uses that identity to fetch content from S3.</p>
<p>Seems straightforward, right?</p>
<blockquote>
<p>But here’s the issue:<br /><strong>Any CloudFront distribution using that same OAI can access the bucket.</strong></p>
</blockquote>
<p>So, imagine I use the same OAI for multiple CloudFront distributions: now, <em>all of them</em> can read from that S3 bucket. There's <strong>no way to limit it to just one CloudFront distribution</strong>.</p>
<p>🎯 That’s a <strong>security gap</strong>.</p>
<hr />
<h1 id="heading-enter-oac-origin-access-control">🔐 Enter OAC (Origin Access Control)</h1>
<p>To address this issue, AWS introduced <strong>OAC</strong>.</p>
<p>With OAC, we create an access control configuration and assign it <strong>per CloudFront distribution</strong>. Then, in the S3 bucket policy, we don’t just allow based on a shared OAI… instead, we use a condition like <code>AWS:SourceArn</code> to <strong>only allow that exact CloudFront distribution</strong> to access the S3 content.</p>
<p>So now, <strong>only one specific CloudFront distribution</strong>, and no others, can read from the bucket. That’s a huge win for security and best practices. 🚀</p>
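<p>The resulting bucket policy looks roughly like this (a sketch following the AWS-documented OAC pattern; the bucket name, account ID, and distribution ID are placeholders):</p>
<pre><code class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontDistributionCOnly",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::&lt;BUCKET_NAME&gt;/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::&lt;ACCOUNT_ID&gt;:distribution/&lt;DISTRIBUTION_C_ID&gt;"
        }
      }
    }
  ]
}
</code></pre>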
<hr />
<h1 id="heading-but-wait-why-cant-we-just-use-sourcearn-with-oai">❓But wait… Why Can’t We Just Use <code>SourceArn</code> with OAI?</h1>
<p>I had the same question at first!</p>
<p>The problem is: <strong>OAI doesn’t include the CloudFront distribution’s identity when making the request to S3</strong>.</p>
<p>It only signs the request using the OAI credentials, but there's no way for S3 to know <strong>which distribution</strong> is calling. That means even if you try to use <code>AWS:SourceArn</code> in the bucket policy, it won’t work with OAI because the request lacks that source context.</p>
<p>👉 <strong>OAC solves this by including the distribution’s ARN in the signature</strong>, allowing S3 to enforce precise, per-distribution policies.</p>
<p>So, while OAI gives S3 a kind of generic identity, OAC brings full context and secure granularity.</p>
<hr />
<h1 id="heading-what-i-built">🛠️ What I Built</h1>
<p>Okay now I believe you understand the difference <strong>theoretically</strong>.<br />But let’s be real: theory alone isn’t enough.</p>
<p>So how do we <em>actually</em> see this difference in action?</p>
<p>No worries <strong>I got you!</strong> 😎</p>
<p>I wrote a Terraform script that will let you spin up a hands-on environment in seconds.</p>
<p>Here’s what it does:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749895709405/32bc21b0-9512-40a3-a959-f740fc9d03a5.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-scenario-1-using-oai-legacy"><strong>Scenario 1 – Using OAI (Legacy)</strong></h4>
<ul>
<li><p>Creates <strong>one S3 bucket</strong> with a test file.</p>
</li>
<li><p>Creates <strong>two CloudFront distributions (A &amp; B)</strong> using the <strong>same OAI</strong>.</p>
</li>
<li><p>The S3 bucket policy allows access to that OAI.</p>
</li>
<li><p>Result: <strong>both distributions can access the S3 content</strong>, even though we didn’t restrict them individually and <em>that’s the core issue</em> with OAI.</p>
</li>
</ul>
<h4 id="heading-scenario-2-using-oac-modern-approach"><strong>Scenario 2 – Using OAC (Modern Approach)</strong></h4>
<ul>
<li><p>Same S3 bucket and test file.</p>
</li>
<li><p>Creates <strong>two new CloudFront distributions (C &amp; D)</strong> using <strong>OAC</strong>.</p>
</li>
<li><p>The bucket policy uses <code>AWS:SourceArn</code> to allow access <strong>only from CloudFront distribution C</strong>.</p>
</li>
<li><p>This means:<br />  ✅ <strong>CloudFront C can access the S3 content</strong><br />  ❌ <strong>CloudFront D cannot</strong>, because it's not explicitly allowed in the S3 bucket policy.</p>
</li>
<li><p>Result: Now we have <strong>tight access control</strong> only the specified distributions can access the content.</p>
</li>
</ul>
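<p>The OAC side of that Terraform looks roughly like this (a sketch with illustrative resource names; the full distribution configuration lives in the repo linked below):</p>
<pre><code class="language-hcl">resource "aws_cloudfront_origin_access_control" "demo_oac" {
  name                              = "demo-oac"   # illustrative name
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"     # sign every origin request
  signing_protocol                  = "sigv4"
}

# Referenced from the distribution's origin block instead of an OAI:
#   origin {
#     domain_name              = aws_s3_bucket.demo.bucket_regional_domain_name
#     origin_id                = "s3-demo"
#     origin_access_control_id = aws_cloudfront_origin_access_control.demo_oac.id
#   }
</code></pre>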
<hr />
<h3 id="heading-what-i-learned">🤯 What I Learned</h3>
<h4 id="heading-oai-origin-access-identity">✅ OAI (Origin Access Identity)</h4>
<ul>
<li><p>Simple and familiar.</p>
</li>
<li><p>Bucket policy just checks the OAI ARN.</p>
</li>
<li><p><strong>BUT</strong>: you can't restrict it to a specific distribution, so if you share the same OAI with multiple CloudFront distributions, all of them get access.</p>
</li>
</ul>
<p>This was clear when I spun up two distributions (A and B): both could access the S3 content, even though the bucket policy didn’t mention their individual ARNs.</p>
<h4 id="heading-oac-origin-access-control">✅ OAC (Origin Access Control)</h4>
<ul>
<li><p>Modern approach with SigV4 request signing.</p>
</li>
<li><p>You <strong>must</strong> use <code>origin_access_control_id</code> in CloudFront.</p>
</li>
<li><p>The S3 policy can now restrict access to a <strong>specific CloudFront distribution</strong> using <code>AWS:SourceArn</code>.</p>
</li>
</ul>
<p>When I enabled OAC, distributions C and D were created, and the S3 bucket policy now allows only the specified distribution (C) to access the content.</p>
<hr />
<h3 id="heading-try-it-yourself">🧪 Try It Yourself</h3>
<p>I have written all the Terraform code you need and included clear instructions in the <a target="_blank" href="https://github.com/gerlynm/just-curious/blob/main/cloudfront/oai-oac/README.md">GitHub README</a>, so you can spin up the full environment in just a few commands.</p>
<p>👉 <strong>Repository URL</strong>:<br />🔗 <a target="_blank" href="https://github.com/gerlynm/just-curious">gerlynm/just-curious</a></p>
<hr />
<h3 id="heading-final-thoughts">📌 Final Thoughts</h3>
<p>If you're building something new, go with <strong>OAC</strong>. It’s more secure and future-ready. But <strong>OAI</strong> still works.</p>
<p>Hope this helps someone trying to understand the differences through actual practice, just like I did. Let me know if you’d like the full Terraform code or a breakdown of the policy structures!</p>
<p>Happy experimenting! ✨</p>
]]></content:encoded></item><item><title><![CDATA[How to Configure ExternalDNS with Cross-Account Route53]]></title><description><![CDATA[When I’m working with a private EKS cluster, I recently encountered a requirement to configure ExternalDNS to update records in a Route53 hosted zone belonging to a different AWS account. After extensive research on articles, documentation, and forum...]]></description><link>https://getintokube.com/how-to-configure-externaldns-with-cross-account-route53</link><guid isPermaLink="true">https://getintokube.com/how-to-configure-externaldns-with-cross-account-route53</guid><category><![CDATA[externaldns]]></category><category><![CDATA[external-dns]]></category><category><![CDATA[getintokube]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[EKS]]></category><category><![CDATA[EKS cluster]]></category><category><![CDATA[aws-cross-account]]></category><category><![CDATA[route53]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Sun, 01 Jun 2025 16:28:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747594221736/8bf9810a-3277-4363-b43b-1f583ccff706.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I’m working with a private EKS cluster, I recently encountered a requirement to configure ExternalDNS to update records in a Route53 hosted zone belonging to a different AWS account. After extensive research on articles, documentation, and forums, I couldn’t find a clear solution or even a solid hint. Eventually, a <a target="_blank" href="https://github.com/kubernetes-sigs/external-dns/issues/1608">GitHub issue</a> provided the crucial insight leveraging OIDC provider integration in EKS to enable cross-account access.</p>
<p>Using that information, I successfully configured ExternalDNS to update records across AWS accounts. If you're facing a similar challenge, this blog will save you time by providing a direct, step-by-step solution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747592747153/60b7262b-1556-46b5-8199-186bd910447b.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-prerequisites"><strong>Prerequisites: ✅</strong></h1>
<p>Before we dive in, make sure you have the following in place:</p>
<ul>
<li><p><strong>Two AWS Accounts:</strong></p>
<ul>
<li><p><strong>Account A:</strong> This is where your Route 53 hosted zone resides.</p>
</li>
<li><p><strong>Account B:</strong> This will host your EKS cluster and where ExternalDNS will run.</p>
</li>
</ul>
</li>
<li><p><strong>Tools Configured:</strong> AWS CLI, kubectl, Lens (optional), and an ingress controller (optional, to test with Ingress)</p>
</li>
<li><p><strong>Route53 (Account A):</strong> You should have your DNS hosted zone already setup in Account A.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747586636213/ef447a18-99df-408a-a535-15808e65424f.png" alt="example image for Route53" /></p>
</li>
<li><p><strong>EKS Cluster (Account B):</strong> You need a running EKS cluster in Account B that ExternalDNS will interact with.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747593308940/21cd0197-d84d-4cd4-a941-888cf853d850.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<hr />
<h1 id="heading-high-level-overview"><strong>High-Level Overview: 💡</strong></h1>
<p>Here is a quick rundown of the steps I'll be taking:</p>
<ol>
<li><p><strong>Create an IAM Role in the Route 53 Account (Account A):</strong> This role will grant necessary Route 53 permissions and trust the IAM role we will create for ExternalDNS in Account B.</p>
</li>
<li><p><strong>Create an IAM Role in the EKS Account (Account B):</strong> This role will have the permission to assume the IAM role we created in Account A. We will configure its trust policy to trust your OIDC provider associated with your EKS cluster.</p>
</li>
<li><p><strong>Install ExternalDNS in your EKS Cluster (Account B):</strong> We will use Addons to install ExternalDNS with custom configuration values that leverage the cross-account IAM roles.</p>
</li>
<li><p><strong>Verification:</strong> We will verify the setup by deploying a sample application and pointing a hostname at it.</p>
</li>
</ol>
<hr />
<h1 id="heading-lets-get-started">Let's get started! 🚀</h1>
<h3 id="heading-1-create-the-iam-role-in-the-route-53-account-account-a-1"><strong>1. Create the IAM Role in the Route 53 Account (Account A): 1️⃣</strong></h3>
<p>In your <strong>Account A</strong>, navigate to the IAM console and follow these steps:</p>
<ul>
<li><p>Click on <strong>Roles</strong> in the left-hand navigation pane.</p>
</li>
<li><p>Click <strong>Create role</strong>.</p>
</li>
<li><p>For the "Select type of trusted entity," choose <strong>AWS account</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747587298563/c6723b8f-745a-4dcd-b5c6-88b8e330c75d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Enter the <strong>Account ID of Account B</strong>.</p>
</li>
<li><p>Click <strong>Next</strong> (leave the other options at their defaults).</p>
</li>
<li><p>Now, we need to attach permissions for Route 53. Search for and select the <strong>AmazonRoute53FullAccess</strong> policy.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747587510238/6d33db6d-8460-46dd-850c-edb39ddbe96d.png" alt class="image--center mx-auto" /></p>
</li>
<li><ul>
<li><strong>Important Security Consideration:</strong> For production environments, it's highly recommended to scope down these permissions to the minimum required for ExternalDNS to function.</li>
</ul>
</li>
<li><p>Click <strong>Next</strong>.</p>
</li>
<li><p>Give your role a descriptive name (e.g., <code>Route53ExternalDNSAccess</code>).</p>
</li>
<li><p>Review the role details and click <strong>Create role</strong>.</p>
</li>
</ul>
<blockquote>
<p><strong>Take note of the ARN (Amazon Resource Name) of this newly created role.</strong> You'll need it in the next step. It will look something like: <code>arn:aws:iam::ACCOUNT_A_ID:role/Route53ExternalDNSAccess</code>.</p>
</blockquote>
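<p>For reference, once both roles exist, the trust policy on <code>Route53ExternalDNSAccess</code> can be tightened from trusting all of Account B to trusting only the ExternalDNS role (a sketch; the role name from step 2 is assumed):</p>
<pre><code class="lang-json">{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::&lt;ACCOUNT_B_ID&gt;:role/ExternalDNSIRSAccess"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
</code></pre>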
<hr />
<h2 id="heading-2-create-the-iam-role-in-the-eks-account-account-b-2"><strong>2. Create the IAM Role in the EKS Account (Account B): 2️⃣</strong></h2>
<p>Now, switch to your <strong>Account B</strong> and follow these steps in the IAM console:</p>
<ul>
<li><p>Click on <strong>Roles</strong>.</p>
</li>
<li><p>Click <strong>Create role</strong>.</p>
</li>
<li><p>For the "Select type of trusted entity," choose <strong>Web identity (OIDC)</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747589628958/3513b37f-bcfb-4a07-b5da-52690a9776eb.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>For the "Identity provider," select the OIDC provider associated with your EKS cluster. You can find this information in your EKS cluster details under the "Overview" tab. It will typically look like <code>oidc.eks.REGION.amazonaws.com/id/YOUR_OIDC_ID</code>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747589683143/2e296178-5d9d-4d34-baaf-5623858512b5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>For the "Audience," enter <a target="_blank" href="http://sts.amazonaws.com"><code>sts.amazonaws.com</code></a>.</p>
</li>
<li><p>Then click <strong>Add Condition</strong> to restrict access to the ExternalDNS service account and its namespace</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747589818431/5595bca9-6b44-4aae-b16d-86baecb6dd3c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Before creating the role, we need to add a permissions policy that allows it to assume the role we created in Account A.</p>
</li>
<li><p>To do that, open the IAM <strong>Policies</strong> console in a new tab and click <strong>Create policy</strong>.</p>
</li>
<li><p>In the JSON tab of the "Create policy" page, paste the following policy, <strong>replacing</strong><br />  <code>arn:aws:iam::&lt;ACCOUNT_A_ID&gt;:role/Route53ExternalDNSAccess</code> with the actual ARN of the role you created in Account A:</p>
</li>
<li><pre><code class="lang-json">      {
          <span class="hljs-attr">"Statement"</span>: [
              {
                  <span class="hljs-attr">"Action"</span>: [
                      <span class="hljs-string">"sts:AssumeRole"</span>
                  ],
                  <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
                  <span class="hljs-attr">"Resource"</span>: [
                      <span class="hljs-string">"arn:aws:iam::&lt;ACCOUNT_A_ID&gt;:role/Route53ExternalDNSAccess"</span> #arn of route53 role which we created above.
                  ],
                  <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"Statement1"</span>
              }
          ],
          <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>
      }
</code></pre>
</li>
<li><p>Click <strong>Next</strong></p>
</li>
<li><p>Give your policy a descriptive name (e.g., <code>ExternalDNSAssumeRoute53RolePolicy</code>).</p>
</li>
<li><p>Click <strong>Create policy</strong>.</p>
</li>
<li><p>Go back to the "Create role" page (where you selected the OIDC trust). Click <strong>Next: Permissions</strong>.</p>
</li>
<li><p>Search for and select the policy you just created (<code>ExternalDNSAssumeRoute53RolePolicy</code>).</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747590632201/bc24c293-efbd-4c98-85a9-400387902e7d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Next: Review</strong>.</p>
</li>
<li><p>Give your role a descriptive name (e.g., <code>ExternalDNSIRSAccess</code>).</p>
</li>
<li><p>Review the role details and click <strong>Create role</strong>.</p>
</li>
</ul>
<blockquote>
<p><strong>Take note of the ARN of this newly created role.</strong> You'll need this for the <code>Route53ExternalDNSAccess</code> Role (Account A) to assume this. It will look something like: <code>arn:aws:iam::ACCOUNT_B_ID:role/ExternalDNSIRSAccess</code>.</p>
</blockquote>
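<p>The console steps above produce a trust policy roughly like this (a sketch; the region, OIDC ID, namespace, and service account name depend on your cluster and how the addon deploys ExternalDNS):</p>
<pre><code class="lang-json">{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::&lt;ACCOUNT_B_ID&gt;:oidc-provider/oidc.eks.&lt;REGION&gt;.amazonaws.com/id/&lt;OIDC_ID&gt;"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.&lt;REGION&gt;.amazonaws.com/id/&lt;OIDC_ID&gt;:aud": "sts.amazonaws.com",
                    "oidc.eks.&lt;REGION&gt;.amazonaws.com/id/&lt;OIDC_ID&gt;:sub": "system:serviceaccount:kube-system:external-dns"
                }
            }
        }
    ]
}
</code></pre>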
<hr />
<h2 id="heading-3-install-externaldns-with-configuration-values"><strong>3. Install ExternalDNS with Configuration Values 🔻</strong></h2>
<p><strong>(Account B):</strong></p>
<ul>
<li>Now, in your <strong>Account B</strong>, use Addons to install ExternalDNS.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747590922705/68832dd0-33b8-4462-8aa5-5494e2b10d54.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Click <strong>Next</strong></p>
</li>
<li><p>Now add the role we created earlier under the IRSA option</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747591078819/f5ab0dc6-bda0-42ba-8f23-7a4068945439.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>To pass the configuration below to ExternalDNS, expand the <code>Optional configuration settings</code> option.</p>
</li>
<li><pre><code class="lang-yaml">      domainFilters:
        - nginx.local
      txtOwnerId: Z03011653ICHPFH5D6TUJ
      extraArgs:
        - --aws-zone-type=private
        - --aws-assume-role=arn:aws:iam::&lt;ACCOUNT_A_ID&gt;:role/Route53ExternalDNSAccess
</code></pre>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747591325610/0d662019-cf6b-4526-8f2c-673b19af3169.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>Once you have configured these values, install ExternalDNS through Addons.</p>
<hr />
<h2 id="heading-4verification"><strong>4. Verification: 🍾</strong></h2>
<p>After ExternalDNS is deployed, check the ExternalDNS pod logs in your EKS cluster (Account B) using <code>kubectl logs -n kube-system -l app=external-dns</code>. Look for any errors related to IAM permissions or Route 53 connectivity.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747591945860/49874fb1-61f1-48f5-8adb-b27f64dbd0da.png" alt class="image--center mx-auto" /></p>
<p>To verify that ExternalDNS is correctly creating DNS records, you can deploy a sample application and create an Ingress resource with a hostname. Here are example Kubernetes manifests:</p>
<p><strong>Deployment (deploy.yaml):</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:latest</span>
          <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
          <span class="hljs-attr">resources:</span>
            <span class="hljs-attr">requests:</span>
              <span class="hljs-attr">cpu:</span> <span class="hljs-string">100m</span>
              <span class="hljs-attr">memory:</span> <span class="hljs-string">128Mi</span>
            <span class="hljs-attr">limits:</span>
              <span class="hljs-attr">cpu:</span> <span class="hljs-string">200m</span>
              <span class="hljs-attr">memory:</span> <span class="hljs-string">256Mi</span>
</code></pre>
<p><strong>Service (service.yaml):</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-service</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">ClusterIP</span> 
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
</code></pre>
<p><strong>Ingress (ingress.yaml):</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-ingress</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">nginx.ingress.kubernetes.io/rewrite-target:</span> <span class="hljs-string">/</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">ingressClassName:</span> <span class="hljs-string">nginx</span> 
  <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">test.nginx.local</span>
      <span class="hljs-attr">http:</span>
        <span class="hljs-attr">paths:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
            <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
            <span class="hljs-attr">backend:</span>
              <span class="hljs-attr">service:</span>
                <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-service</span>
                <span class="hljs-attr">port:</span>
                  <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
</code></pre>
<ul>
<li>Replace <code>test.nginx.local</code> with a hostname that falls under the <code>domainFilters</code> you configured for ExternalDNS (e.g., any name under <code>nginx.local</code>).</li>
</ul>
<p>Apply these manifests to your EKS cluster:</p>
<pre><code class="lang-sh">kubectl apply -f deploy.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
</code></pre>
<p>After the Ingress is created, ExternalDNS should automatically create a DNS record in your Route 53 hosted zone (in Account A) pointing <code>test.nginx.local</code> to the load balancer associated with your Ingress.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747593118855/a23eed65-feff-4e6c-a3ae-5c510519a6c6.png" alt class="image--center mx-auto" /></p>
<p>This is how it looks.</p>
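<p>You can also confirm the record from the command line. A quick check (a sketch: it assumes the Account A hosted zone ID — the same value used for <code>txtOwnerId</code> above — and credentials that can read that zone; a private zone only resolves from inside the associated VPC):</p>
<pre><code class="lang-sh"># List the records ExternalDNS created in the Account A hosted zone
aws route53 list-resource-record-sets \
  --hosted-zone-id Z03011653ICHPFH5D6TUJ \
  --query "ResourceRecordSets[?Name=='test.nginx.local.']"

# From a host inside the VPC, resolve the hostname
nslookup test.nginx.local
</code></pre>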
<hr />
<h1 id="heading-conclusion"><strong>Conclusion: ⭐</strong></h1>
<p>Setting up ExternalDNS with cross-account Route 53 felt daunting to me at first, but by following these precise steps you can save yourself a significant amount of time and have your DNS records managed seamlessly across your AWS accounts. Remember to always adhere to the principle of least privilege when granting IAM permissions in production environments.</p>
<p>I hope this guide has been helpful! ❤️Let me know in the comments if you have any questions or run into any issues. Happy Automating! 🥳</p>
]]></content:encoded></item><item><title><![CDATA[How to Enable SSM Connect in EC2 
Instances 💡]]></title><description><![CDATA[Managing AWS EC2 instances without worrying about SSH keys is a big relief, and AWS Systems Manager (SSM) Session Manager makes it even easier. It provides a secure way to connect to your instances. In this guide, I will walk you through the steps to...]]></description><link>https://getintokube.com/how-to-enable-ssm-connect-in-ec2-instances</link><guid isPermaLink="true">https://getintokube.com/how-to-enable-ssm-connect-in-ec2-instances</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[ssm]]></category><category><![CDATA[ec2]]></category><category><![CDATA[ssh]]></category><category><![CDATA[getintokube]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Sat, 08 Feb 2025 16:04:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739030213566/ee549b2f-5d84-45a7-8629-148f7c39a132.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Managing AWS EC2 instances without worrying about SSH keys is a big relief, and AWS Systems Manager (SSM) Session Manager makes it even easier. It provides a secure way to connect to your instances. In this guide, I will walk you through the steps to enable SSM Connect in your EC2 instances, making your cloud management more secure, efficient, and hassle-free.</p>
<hr />
<h1 id="heading-step-1-create-and-attach-an-iam-role-to-the-ec2-instance">Step 1: Create and Attach an IAM Role to the EC2 Instance 🍳</h1>
<ul>
<li><p>Navigate to the <strong>AWS IAM Console</strong> and go to <strong>Roles</strong>.</p>
</li>
<li><p>Click <strong>Create role</strong>.</p>
</li>
<li><p>Select <strong>AWS Service</strong> and choose <strong>EC2</strong>, then click <strong>Next</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739026072372/58af2129-3904-4dce-ac55-83ddd4fd4b65.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>In the permissions section, search for and attach the following policy:</p>
<ul>
<li><p><code>AmazonSSMManagedInstanceCore</code> (required for SSM agent to communicate with AWS Systems Manager)</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739026163658/3e7ec9fe-affe-432f-8738-21f10be7b614.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p>Click <strong>Next</strong>, give the role a name (e.g., <code>SSMManagedEC2</code>), and create the <strong>role</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739026217279/61023b26-1910-4b2b-91c2-4bdabaeabdf5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Attach the IAM role to your EC2 instance:</p>
<ul>
<li><p>Go to the <strong>EC2 Console</strong> &gt; Select your instance</p>
</li>
<li><p>Click <strong>Actions</strong> &gt; <strong>Security</strong> &gt; <strong>Modify IAM Role</strong></p>
</li>
<li><p>Select the IAM role you created and click <strong>Update IAM Role</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739027865069/93fa0c8f-322a-4b58-abd2-6e7616fb1e9a.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ul>
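<p>If you prefer the CLI, the same role setup can be sketched as follows (the role, profile, and instance names here are examples — adjust them to your environment):</p>
<pre><code class="lang-sh"># Trust policy allowing EC2 to assume the role
cat &gt; trust.json &lt;&lt;'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name SSMManagedEC2 \
  --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name SSMManagedEC2 \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# EC2 attaches roles through an instance profile
aws iam create-instance-profile --instance-profile-name SSMManagedEC2
aws iam add-role-to-instance-profile --instance-profile-name SSMManagedEC2 \
  --role-name SSMManagedEC2
aws ec2 associate-iam-instance-profile --instance-id &lt;instance-id&gt; \
  --iam-instance-profile Name=SSMManagedEC2
</code></pre>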
<hr />
<h1 id="heading-step-2-verify-ssm-agent-is-installed-and-running">Step 2: Verify SSM Agent is Installed and Running ✅</h1>
<blockquote>
<p>📌 Check out for SSM agent Verification details: <a target="_blank" href="https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html">AWS Systems Manager</a></p>
</blockquote>
<p>For Ubuntu, run the following command to check if the agent is installed:</p>
<pre><code class="lang-sh">sudo systemctl status snap.amazon-ssm-agent.amazon-ssm-agent.service
</code></pre>
<blockquote>
<p>📌 If the agent is not installed, then refer this link for installation: <a target="_blank" href="https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-linux.html">Manually installing and uninstalling SSM Agent</a></p>
</blockquote>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Note:</strong> It may take some time for the installation to take effect. To apply the changes quickly ⚡, restart the SSM agent or reboot the EC2 instance.</div>
</div>
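<p>On Ubuntu, where the agent ships as a snap, restarting it looks like this (same service name as the status check above):</p>
<pre><code class="lang-sh">sudo systemctl restart snap.amazon-ssm-agent.amazon-ssm-agent.service
</code></pre>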

<hr />
<h1 id="heading-step-3-access-ec2-instance-using-ssm">Step 3: Access EC2 Instance Using SSM 🚀</h1>
<p>Now you can access your EC2 instance using SSM through the AWS Console or AWS CLI.</p>
<h2 id="heading-option-1-using-the-aws-console">Option 1: Using the AWS Console 💻</h2>
<ol>
<li><p>Navigate to <strong>AWS Systems Manager Console</strong>.</p>
</li>
<li><p>Click the <strong>EC2 instance</strong> you want to access.</p>
</li>
<li><p>Click <strong>Connect</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739028751828/289f5563-53f3-4a37-aa4f-f55af8b02eb8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Session Manager</strong> and then <strong>Connect</strong> to open a shell session.</p>
</li>
</ol>
<h2 id="heading-option-2-using-aws-cli">Option 2: Using AWS CLI ⚡</h2>
<p>To access EC2 via your local terminal with the AWS CLI, install the <strong>Session Manager plugin</strong> on your local machine using the following commands:</p>
<pre><code class="lang-sh">curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
sudo dpkg -i session-manager-plugin.deb
</code></pre>
<p>For other OS versions, refer to the <a target="_blank" href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html">SSM Plugin Installation</a>.</p>
<p>Run the following command to enter an EC2 machine via SSM through the terminal. 🥳</p>
<pre><code class="lang-sh">aws ssm start-session --target &lt;instance-id&gt;
</code></pre>
<blockquote>
<p>📌 Make sure your AWS CLI is configured with the right permissions and region settings.</p>
</blockquote>
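<p>Before starting a session, you can check whether the instance has registered with Systems Manager — it should report a ping status of <code>Online</code> once the IAM role and agent are in place:</p>
<pre><code class="lang-sh">aws ssm describe-instance-information \
  --query "InstanceInformationList[].{Id:InstanceId,Ping:PingStatus}" \
  --output table
</code></pre>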
<hr />
<h1 id="heading-conclusion">Conclusion 🎃</h1>
<p>Enabling SSM Connect in EC2 instances enhances security, eliminates the need for SSH keys, and simplifies instance management. By following these steps, you can securely manage your AWS environment with AWS Systems Manager’s Session Manager.</p>
<hr />
<p>📬Do you have any questions or need further assistance? Leave a comment below or explore my other AWS tutorials for more cloud management tips!</p>
]]></content:encoded></item><item><title><![CDATA[How to Migrate an RDS Database to Another AWS Account💡]]></title><description><![CDATA[Migrating an Amazon RDS (Relational Database Service) instance from one AWS account to another is a bit different from migrating an EC2 instance. Unlike EC2, RDS is a managed service, and you can't directly "Create AMI and give permission to destinat...]]></description><link>https://getintokube.com/how-to-migrate-an-rds-database-to-another-aws-account</link><guid isPermaLink="true">https://getintokube.com/how-to-migrate-an-rds-database-to-another-aws-account</guid><category><![CDATA[rds-migration]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[rds]]></category><category><![CDATA[getintokube]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[AWS RDS]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[MariaDB]]></category><category><![CDATA[Oracle]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Wed, 05 Feb 2025 08:02:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738742141004/5e9180dc-1442-4abd-89d6-d5a2a9582c0d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Migrating an <strong>Amazon RDS (Relational Database Service)</strong> instance from one AWS account to another is a bit different from migrating an EC2 instance. Unlike EC2, RDS is a managed service, and you can't directly "Create AMI and give permission to destination account" like we did for EC2 instance. However, you can achieve the migration using <strong>snapshots</strong> and <strong>sharing capabilities</strong>.</p>
<hr />
<h1 id="heading-step-1-create-a-manual-snapshot-of-the-rds-instance"><strong>Step 1: Create a Manual Snapshot of the RDS Instance 📸</strong></h1>
<ol>
<li><p>Go to the <strong>RDS Dashboard</strong> in the <strong>source AWS account</strong>.</p>
</li>
<li><p>Select the RDS instance you want to migrate.</p>
</li>
<li><p>Click <strong>Actions &gt; Take Snapshot</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738305082165/ff748a17-dc65-4085-afc0-d9e80869fc63.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Provide a name for the snapshot and confirm.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738737895113/0d5cf101-6787-4490-8a7e-4a28fb6cdca5.png" alt class="image--center mx-auto" /></p>
<p> <strong>Note:</strong> Creating a snapshot may take some time, depending on the size of your database.</p>
</li>
</ol>
<hr />
<h1 id="heading-step-2-share-the-snapshot-with-the-destination-aws-account"><strong>Step 2: Share the Snapshot with the Destination AWS Account ⏩</strong></h1>
<ol>
<li><p>Once the snapshot is created, go to the <strong>Snapshots</strong> section in the RDS Dashboard.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738738641099/9d6a658c-9ab7-4759-969a-e8cf99c9fac2.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Select the snapshot you just created.</p>
</li>
<li><p>Click <strong>Actions &gt; Share Snapshot</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738738712586/0c354db7-f6de-45c0-9298-ad30e1c19816.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Enter the <strong>AWS Account ID</strong> of the destination account.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738738898910/78aa2911-dcf8-4fd7-a7ff-36ee16953ba7.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Save</strong> to share the snapshot.</p>
</li>
</ol>
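<p>The console steps above can also be done from the CLI; a sketch, assuming a snapshot named <code>my-db-snapshot</code> (a placeholder):</p>
<pre><code class="lang-sh"># In the source account: grant the destination account restore access
aws rds modify-db-snapshot-attribute \
  --db-snapshot-identifier my-db-snapshot \
  --attribute-name restore \
  --values-to-add &lt;DESTINATION_ACCOUNT_ID&gt;
</code></pre>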
<hr />
<h1 id="heading-step-3-copy-the-snapshot-to-the-destination-account"><strong>Step 3: Copy the Snapshot to the Destination Account ☕</strong></h1>
<ol>
<li><p>Log in to the <strong>destination AWS account</strong>.</p>
</li>
<li><p>Go to the <strong>RDS Dashboard &gt; Snapshots</strong>.</p>
</li>
<li><p>Look for the shared snapshot in the <strong>Shared with Me</strong> section.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738739019450/a82104c9-7034-4318-b479-2b69e60f669e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Select the snapshot and click <strong>Actions &gt; Copy Snapshot</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738739075338/9104ae34-8c96-4313-81dc-45e12fc82408.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Choose the destination region (if different) and provide a name for the copied snapshot.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738739150731/e125b9e3-4719-4c38-b215-60e44cc8742f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Copy Snapshot</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738740243425/511e0843-9579-47e0-b6b4-71b915cee1d7.png" alt class="image--center mx-auto" /></p>
<p> <strong>Note:</strong> Copying the snapshot may take some time, depending on the size of the database and the region.</p>
</li>
</ol>
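<p>The CLI equivalent (run with destination-account credentials; the identifiers are examples — shared snapshots are referenced by their full ARN):</p>
<pre><code class="lang-sh"># Copy the shared snapshot into the destination account
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier arn:aws:rds:&lt;REGION&gt;:&lt;SOURCE_ACCOUNT_ID&gt;:snapshot:my-db-snapshot \
  --target-db-snapshot-identifier my-db-snapshot-copy \
  --region &lt;DESTINATION_REGION&gt;
</code></pre>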
<hr />
<h1 id="heading-step-4-restore-the-rds-instance-in-the-destination-account"><strong>Step 4: Restore the RDS Instance in the Destination Account 🔁🗺️</strong></h1>
<ol>
<li><p>Once the snapshot is copied, select it and click <strong>Actions &gt; Restore Snapshot</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738740276842/6359dd95-4615-49ab-be85-831a3059823a.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Configure the new RDS instance:</p>
<ul>
<li><p>Set the <strong>DB identifier name.</strong></p>
</li>
<li><p>Set the <strong>DB instance size</strong>, <strong>storage type</strong>, and other settings.</p>
</li>
<li><p>Configure networking (VPC, subnet group, security group).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738740393707/1be4da6a-1caa-4cc1-89e6-e85315f8a151.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p>Click <strong>Restore DB Instance</strong>.</p>
<p> <strong>Note:</strong> The restored RDS instance will have the same data as the original database but will be a new instance in the destination account.</p>
</li>
</ol>
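<p>A minimal CLI sketch of the restore (the instance class and names are examples; add VPC, subnet-group, and security-group options as needed for your network configuration):</p>
<pre><code class="lang-sh">aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-restored-db \
  --db-snapshot-identifier my-db-snapshot-copy \
  --db-instance-class db.t3.micro

# Block until the new instance is ready
aws rds wait db-instance-available --db-instance-identifier my-restored-db
</code></pre>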
<hr />
<h1 id="heading-step-5-verify-and-clean-up"><strong>Step 5: Verify and Clean Up ✅🧹</strong></h1>
<ol>
<li><p><strong>Verify the New RDS Instance:</strong></p>
<ul>
<li><p>Log in to the new RDS instance and ensure all data and configurations are correct.</p>
</li>
<li><p>Test the connection to the database from your application or client tools.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738741022828/b9fdbd9b-4b08-4f87-9afe-e62d148cda89.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Clean Up:</strong></p>
<ul>
<li><p>Delete the shared snapshot from the source account if it’s no longer needed.</p>
</li>
<li><p>Terminate the original RDS instance in the source account (if no longer required).</p>
</li>
</ul>
</li>
</ol>
<hr />
<h1 id="heading-important-considerations"><strong>Important Considerations: 🛑‼️</strong></h1>
<ol>
<li><p><strong>Downtime:</strong></p>
<ul>
<li>During the migration, the original RDS instance remains operational. However, any changes made to the database after the snapshot was taken will not be included in the migrated instance. Plan accordingly to minimize data loss.</li>
</ul>
</li>
<li><p><strong>Automated Backups:</strong></p>
<ul>
<li>If automated backups are enabled for the RDS instance, ensure you have a recent backup before starting the migration.</li>
</ul>
</li>
<li><p><strong>IAM Permissions:</strong></p>
<ul>
<li>Ensure both AWS accounts have the necessary IAM permissions to share and access snapshots.</li>
</ul>
</li>
<li><p><strong>Cross-Region Migration:</strong></p>
<ul>
<li>If you’re migrating the RDS instance to a different region, you’ll need to copy the snapshot to the new region before restoring it.</li>
</ul>
</li>
<li><p><strong>Encryption:</strong></p>
<ul>
<li>If your RDS instance is encrypted, ensure the destination account has access to the encryption key (KMS key). You may need to share or recreate the key in the destination account.</li>
</ul>
</li>
</ol>
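<p>For the encryption point above, one approach (a sketch; all key IDs and account IDs are placeholders) is to grant the destination account use of the source KMS key, then re-encrypt with a destination-account key while copying:</p>
<pre><code class="lang-sh"># In the source account: allow the destination account to use the key
aws kms create-grant \
  --key-id &lt;SOURCE_KMS_KEY_ID&gt; \
  --grantee-principal arn:aws:iam::&lt;DESTINATION_ACCOUNT_ID&gt;:root \
  --operations Decrypt DescribeKey CreateGrant

# In the destination account: re-encrypt during the snapshot copy
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier arn:aws:rds:&lt;REGION&gt;:&lt;SOURCE_ACCOUNT_ID&gt;:snapshot:my-db-snapshot \
  --target-db-snapshot-identifier my-db-snapshot-copy \
  --kms-key-id &lt;DESTINATION_KMS_KEY_ID&gt;
</code></pre>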
<hr />
<h1 id="heading-pro-tip"><strong>Pro Tip: 🚀</strong></h1>
<p>If you want to minimize downtime or migrate a live database, you can use <strong>AWS Database Migration Service (DMS)</strong>. DMS allows you to replicate data from the source RDS instance to the destination RDS instance in near real-time. This approach is more complex but ideal for large databases or zero-downtime migrations.</p>
<hr />
<h1 id="heading-conclusion"><strong>Conclusion: 🎃</strong></h1>
<p>Migrating an RDS database to another AWS account is straightforward if you follow the snapshot and sharing process. While it requires some planning, the steps are simple and ensure your data remains intact. For more advanced use cases, consider using AWS DMS for a seamless migration.</p>
<hr />
<p>📬Need help with your RDS migration? Drop a comment below or reach out for more tips on managing your AWS resources! 🚀</p>
]]></content:encoded></item><item><title><![CDATA[How to Migrate an S3 Bucket to Another AWS Account💡]]></title><description><![CDATA[Need to move an S3 bucket to another AWS account? Whether you’re handing over data to a client, reorganizing resources, or consolidating accounts, this guide breaks it down into easy, actionable steps. No fluff, no jargon just a clear roadmap to get ...]]></description><link>https://getintokube.com/how-to-migrate-an-s3-bucket-to-another-aws-account</link><guid isPermaLink="true">https://getintokube.com/how-to-migrate-an-s3-bucket-to-another-aws-account</guid><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[getintokube]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[migration]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Sat, 01 Feb 2025 11:58:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738410841351/b90aeae7-a7a1-4d92-9d60-eddd5b1b2049.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Need to move an S3 bucket to another AWS account? Whether you’re handing over data to a client, reorganizing resources, or consolidating accounts, this guide breaks it down into easy, actionable steps. No fluff, no jargon just a clear roadmap to get your S3 bucket migrated quickly and efficiently. Let’s dive in!</p>
<hr />
<h1 id="heading-step-1-share-the-s3-bucket-with-the-destination-account"><strong>Step 1: Share the S3 Bucket with the Destination Account ⚙️</strong></h1>
<blockquote>
<p>📌 AWS doesn’t let you “move” a bucket directly, but you can <strong>share it</strong> and copy its contents.<br />Here’s how:</p>
</blockquote>
<ol>
<li><p>Go to the <strong>S3 Console</strong> in the <strong>source account</strong>.</p>
</li>
<li><p>Select the bucket you want to migrate.</p>
</li>
<li><p>Click the <strong>Permissions tab</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738409178906/25c3cf1c-61f5-4b62-867b-15fd51d6e0e7.png" alt /></p>
</li>
<li><p>Add a bucket policy to grant access to the destination account by adding the below policy.</p>
<pre><code class="lang-json"> {
   <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
   <span class="hljs-attr">"Statement"</span>: [
     {
       <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
       <span class="hljs-attr">"Principal"</span>: {
         <span class="hljs-attr">"AWS"</span>: <span class="hljs-string">"arn:aws:iam::DESTINATION_ACCOUNT_ID:root"</span>
       },
       <span class="hljs-attr">"Action"</span>: [
         <span class="hljs-string">"s3:ListBucket"</span>,
         <span class="hljs-string">"s3:GetObject"</span>
       ],
       <span class="hljs-attr">"Resource"</span>: [
         <span class="hljs-string">"arn:aws:s3:::SOURCE_BUCKET_NAME"</span>,
         <span class="hljs-string">"arn:aws:s3:::SOURCE_BUCKET_NAME/*"</span>
       ]
     }
   ]
 }
</code></pre>
<p> Replace <code>DESTINATION_ACCOUNT_ID</code> and <code>SOURCE_BUCKET_NAME</code> with your details.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738409298993/85816176-12f2-4071-9ef2-db44718f64b1.png" alt /></p>
</li>
<li><p>Save the policy. Now, the destination account can access the bucket.</p>
</li>
</ol>
<hr />
<h1 id="heading-step-2-copy-the-bucket-contents-to-the-destination-account"><strong>Step 2: Copy the Bucket Contents to the Destination Account ☕🗺️</strong></h1>
<p>Once shared, <strong>copy the data</strong> to a new bucket in the destination account.</p>
<ol>
<li><p>Log in to the <strong>destination account</strong> and create a <strong>new bucket</strong>.</p>
</li>
<li><p>Use the <strong>AWS CLI</strong> to copy the files (<strong>use destination account credentials</strong>):</p>
<pre><code class="lang-bash"> aws s3 sync s3://SOURCE_BUCKET_NAME s3://DESTINATION_BUCKET_NAME
</code></pre>
<p> <strong>Expected Output:</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738409404167/472fd46c-2e66-406a-91cb-1a76fc2709b3.png" alt /></p>
</li>
</ol>
<hr />
<h1 id="heading-step-3-verify-the-data"><strong>Step 3: Verify the Data ✅</strong></h1>
<p>After copying, <strong>check the data</strong> in the destination bucket:</p>
<ul>
<li><p>Confirm the <strong>file count</strong> and <strong>size</strong> match the source.</p>
</li>
<li><p>Open a few files to ensure they’re intact.</p>
</li>
<li><p>If the bucket has versioning, verify all versions were copied.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738409429507/16a6a372-7ee4-4ca3-8c76-a2cad698ac18.png" alt /></p>
</li>
</ul>
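<p>A quick way to compare file counts and total sizes on both sides (run against each bucket in turn):</p>
<pre><code class="lang-sh">aws s3 ls s3://SOURCE_BUCKET_NAME --recursive --summarize | tail -2
aws s3 ls s3://DESTINATION_BUCKET_NAME --recursive --summarize | tail -2

# Or do a dry-run sync: no output means nothing is missing
aws s3 sync s3://SOURCE_BUCKET_NAME s3://DESTINATION_BUCKET_NAME --dryrun
</code></pre>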
<hr />
<h1 id="heading-pro-tips"><strong>Pro Tips: 🚀</strong></h1>
<ul>
<li><p><strong>Use AWS DataSync</strong> for large buckets to speed up transfers.</p>
</li>
<li><p><strong>Enable versioning</strong> in the destination bucket if the source has it.</p>
</li>
<li><p><strong>Test the process</strong> with a small bucket before migrating critical data.</p>
</li>
</ul>
<hr />
<h1 id="heading-conclusion"><strong>Conclusion: 🎃</strong></h1>
<p>Migrating an S3 bucket to another AWS account is simple when you know the steps. Share the bucket, copy the data, verify, and done! Follow this guide, and you’ll have your S3 bucket migrated in no time.</p>
<hr />
<p>📬Found this guide helpful? Share it with your team or drop a comment below with your questions. For more AWS tips, subscribe and stay tuned!</p>
<p>#getintokube #getintokubeblogs</p>
]]></content:encoded></item><item><title><![CDATA[How I Successfully Migrated My EC2 Instance to Another AWS Account💡]]></title><description><![CDATA[Migrating an EC2 instance from one AWS account to another might sound like a technical nightmare, but trust me, it’s easier than you think! I recently went through this process, and I’m here to break it down for you in simple steps. Whether you’re a ...]]></description><link>https://getintokube.com/how-to-migrate-ec2-instance-between-aws-accounts</link><guid isPermaLink="true">https://getintokube.com/how-to-migrate-ec2-instance-between-aws-accounts</guid><category><![CDATA[ec2migration]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[AWS]]></category><category><![CDATA[getintokube]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[ec2]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Fri, 31 Jan 2025 08:02:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738411327388/2f633b73-5e6a-44dd-b49a-eb029e0857cf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Migrating an EC2 instance from one AWS account to another might sound like a technical nightmare, but trust me, it’s easier than you think! I recently went through this process, and I’m here to break it down for you in simple steps. Whether you’re a beginner or an experienced AWS user, this guide will help you migrate your EC2 instance smoothly. Let’s dive in!</p>
<hr />
<h1 id="heading-step-1-stop-the-ec2-instance"><strong>Step 1: Stop the EC2 Instance 🫸🛑</strong></h1>
<p>The first step is to <strong>stop the EC2 instance</strong> in the source account. Why?</p>
<ul>
<li><p>Stopping the instance ensures that all data is written to the disk, and no ongoing processes are modifying the filesystem. This guarantees a <strong>consistent snapshot</strong>.</p>
</li>
<li><p>It reduces the risk of corruption or missing data in the AMI (Amazon Machine Image).</p>
</li>
</ul>
<p><strong>How I Did It:</strong></p>
<ol>
<li><p>Logged into the <strong>source AWS account</strong>.</p>
</li>
<li><p>Navigated to the <strong>EC2 Dashboard</strong>.</p>
</li>
<li><p>Selected the instance I wanted to <strong>migrate</strong>.</p>
</li>
<li><p>Clicked <strong>Instance State &gt; Stop Instance</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738304764171/a284825f-640a-448d-a449-17d926dd6e12.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
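<p>The same step from the CLI (the instance ID is a placeholder):</p>
<pre><code class="lang-sh">aws ec2 stop-instances --instance-ids &lt;instance-id&gt;

# Block until the instance has fully stopped
aws ec2 wait instance-stopped --instance-ids &lt;instance-id&gt;
</code></pre>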
<hr />
<h1 id="heading-step-2-create-an-ami-image"><strong>Step 2: Create an AMI Image 📸</strong></h1>
<p>Once the instance is stopped, the next step is to create an <strong>AMI (Amazon Machine Image)</strong>. An AMI is a template that contains the information required to launch an instance, including the root volume, permissions, and configurations.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Key Tip:</strong> While creating the AMI, I made sure to <strong>untick the "Reboot" option</strong>. This ensures that the instance remains stopped during the AMI creation process, avoiding any unintended changes.</div>
</div>

<p><strong>How I Did It:</strong></p>
<ol>
<li><p>Selected the stopped instance in the EC2 Dashboard.</p>
</li>
<li><p>Clicked <strong>Actions &gt; Image and Templates &gt; Create Image</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738305184898/be8be9b6-f82b-4c0e-a267-69331414d76b.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Filled in the details:</p>
<ul>
<li><p><strong>Name:</strong> Gave the AMI a meaningful name.</p>
</li>
<li><p><strong>Description:</strong> Added a brief description for future reference.</p>
</li>
<li><p><strong>Unticked the "Reboot" option</strong> to ensure the instance stayed stopped.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738305374786/ff972d60-fff9-4b8d-927b-6ec6be753863.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p>Clicked <strong>Create Image</strong>.</p>
</li>
</ol>
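<p><strong>CLI alternative (optional):</strong> The same image can be created from the terminal. The <code>--no-reboot</code> flag matches unticking the "Reboot" option in the console; the instance ID and names below are placeholders.</p>
<pre><code class="lang-bash">aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "migration-ami" \
    --description "AMI for cross-account migration" \
    --no-reboot
</code></pre>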
<hr />
<h1 id="heading-step-3-wait-for-the-ami-to-become-available"><strong>Step 3: Wait for the AMI to Become Available⚙️</strong></h1>
<p>Creating an AMI can take a few minutes, depending on the size of the instance. Wait for the AMI to reach the <strong>"Available" state</strong> in the <strong>AMIs section</strong> of the EC2 Dashboard.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738305701258/ed516c8f-2f07-4ebc-a182-7fb843a37dd2.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-step-4-share-the-ami-with-the-destination-account"><strong>Step 4: Share the AMI with the Destination Account ⏩</strong></h1>
<p>Once the AMI is available, the next step is to <strong>share it with the destination AWS account</strong>. This allows the destination account to access and use the AMI.</p>
<p><strong>How I Did It:</strong></p>
<ol>
<li><p>Went to the <strong>AMIs section</strong> in the EC2 Dashboard.</p>
</li>
<li><p>Selected the AMI I just created.</p>
</li>
<li><p>Clicked <strong>Actions &gt; Modify AMI Permissions</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738305839810/8a5bafdc-cbbd-443f-905f-94323b1111d9.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Added the <strong>AWS Account ID</strong> of the destination account.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738305995180/bbcbcb24-590d-4bd6-bcb0-85969aff7142.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Clicked <strong>Save Changes</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738306076257/099ecb43-256e-496e-8de1-aa030fa9cfff.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
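<p><strong>CLI alternative (optional):</strong> Sharing can also be scripted; the AMI ID and account ID below are placeholders. Note that if your AMI is backed by encrypted snapshots, you would additionally need to share the KMS key, which is outside the scope of this guide.</p>
<pre><code class="lang-bash">aws ec2 modify-image-attribute \
    --image-id ami-0abcd1234example \
    --launch-permission "Add=[{UserId=111122223333}]"
</code></pre>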
<hr />
<h1 id="heading-step-5-verify-the-ami-in-the-destination-account"><strong>Step 5: Verify the AMI in the Destination Account ✅</strong></h1>
<p>After sharing the AMI, I switched to the <strong>destination AWS account</strong> to verify that the AMI was successfully shared.</p>
<p><strong>How I Did It:</strong></p>
<ol>
<li><p>Logged into the <strong>destination AWS account</strong>.</p>
</li>
<li><p>Navigated to the <strong>AMIs section</strong> in the EC2 Dashboard.</p>
</li>
<li><p>Looked for the shared AMI in the <strong>Private image</strong> section.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738306259063/44c6a817-173d-4609-a843-6e3b5eac1395.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p><strong>Success!</strong> The AMI was visible in the destination account, ready to be used.</p>
<hr />
<h1 id="heading-step-6-copy-the-ami-to-the-destination-account"><strong>Step 6: Copy the AMI to the Destination Account ☕</strong></h1>
<p>To ensure the AMI is fully available in the destination account, I copied it. This step is especially important if you need the instance in a <strong>different AWS region</strong> from the one the AMI was shared in.</p>
<p><strong>How I Did It:</strong></p>
<ol>
<li><p>Selected the shared AMI in the destination account.</p>
</li>
<li><p>Clicked <strong>Actions &gt; Copy AMI</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738306361851/231d27b2-c3b7-4663-8c8b-31de7a960435.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Chose the <strong>destination region</strong> and provided a name for the copied AMI.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738306574566/f409ea6b-c36e-417d-b4dd-616c39b8e43e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Clicked <strong>Copy AMI</strong>.</p>
</li>
</ol>
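<p><strong>CLI alternative (optional):</strong> Run this from the destination account; the AMI ID, regions, and name below are placeholders.</p>
<pre><code class="lang-bash">aws ec2 copy-image \
    --source-image-id ami-0abcd1234example \
    --source-region us-east-1 \
    --region ap-south-1 \
    --name "migrated-instance-ami"
</code></pre>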
<hr />
<h1 id="heading-step-7-launch-the-instance-from-the-ami"><strong>Step 7: Launch the Instance from the AMI 🚀</strong></h1>
<p>Finally, it was time to <strong>launch the EC2 instance</strong> in the destination account using the copied AMI.</p>
<p><strong>How I Did It:</strong></p>
<ol>
<li><p>Selected the copied AMI in the <strong>AMIs section</strong>.</p>
</li>
<li><p>Clicked <strong>Actions &gt; Launch Instance</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738306972898/91e80ff3-85c1-4a51-9a73-9b809adb8e01.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Configured the instance settings:</p>
<ul>
<li><p><strong>Instance Type:</strong> Chose the appropriate instance type.</p>
</li>
<li><p><strong>Network Settings:</strong> Selected the VPC, subnet, and security group.</p>
</li>
<li><p><strong>Key Pair:</strong> Associated a key pair for SSH access.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738307056651/c44a178c-0028-45a3-aa19-eb7c3e29eab5.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p>Clicked <strong>Launch Instance</strong>.</p>
</li>
</ol>
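<p><strong>CLI alternative (optional):</strong> The launch can also be scripted; every ID below is a placeholder for your own resources.</p>
<pre><code class="lang-bash">aws ec2 run-instances \
    --image-id ami-0abcd1234example \
    --instance-type t3.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0
</code></pre>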
<hr />
<h1 id="heading-step-8-verify-and-celebrate"><strong>Step 8: Verify and celebrate! ✅🥳</strong></h1>
<p>Once the instance was launched, I logged in to verify that everything was working as expected. I checked:</p>
<ul>
<li><p><strong>Applications:</strong> Were they running correctly?</p>
</li>
<li><p><strong>Data:</strong> Was all the data intact?</p>
</li>
<li><p><strong>Configurations:</strong> Were the settings preserved?</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738307307844/3a84de5a-f492-4fee-a14d-5a2058f7e8b8.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><strong>Boom!</strong> The migration was successful, and the instance was up and running in the destination account.</p>
<hr />
<h1 id="heading-pro-tips-for-a-smooth-migration"><strong>Pro Tips for a Smooth Migration: 🚀</strong></h1>
<ol>
<li><p><strong>Test the Process:</strong> Before migrating a critical instance, test the process with a non-essential instance to ensure everything works.</p>
</li>
<li><p><strong>Monitor Costs:</strong> Keep an eye on data transfer and storage costs during the migration.</p>
</li>
<li><p><strong>Clean Up:</strong> After the migration, delete the AMI and snapshots from the source account to avoid unnecessary charges.</p>
</li>
</ol>
<hr />
<h1 id="heading-conclusion"><strong>Conclusion: 🎃</strong></h1>
<p>Migrating an EC2 instance to another AWS account doesn’t have to be complicated. By following these steps (stopping the instance, creating an AMI, sharing it, and launching it in the destination account), you can ensure a smooth and hassle-free migration.</p>
<p>📌So, what are you waiting for? Start migrating your EC2 instances today and take control of your AWS infrastructure!</p>
<hr />
<p>📬Found this guide helpful? Share it with your team or leave a comment below if you have any questions. Don’t forget to follow for more AWS tips and tricks!</p>
<p>#ec2 #aws #ec2migration #devops #cloud #getintokube #getintokubeblogs</p>
]]></content:encoded></item><item><title><![CDATA[How to Point Your GoDaddy Apex (Root) Domain to a CloudFront Application       -               A Simple Workaround! 💡]]></title><description><![CDATA[Have you ever faced the issue where your application works perfectly with www.example.com but fails to load when you try example.com? If you’re using GoDaddy and CloudFront, you might have encountered this problem. By default, GoDaddy doesn’t allow y...]]></description><link>https://getintokube.com/how-to-point-godaddy-apex-domain-to-cloudfront</link><guid isPermaLink="true">https://getintokube.com/how-to-point-godaddy-apex-domain-to-cloudfront</guid><category><![CDATA[cloudfront]]></category><category><![CDATA[AWS]]></category><category><![CDATA[godaddy]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[getintokube]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Thu, 30 Jan 2025 07:49:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738218560283/38e58c8e-5ce6-4c79-8358-605632d4f58e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever faced the issue where your application works perfectly with <code>www.example.com</code> but fails to load when you try <code>example.com</code>? If you’re using GoDaddy and CloudFront, you might have encountered this problem. By default, GoDaddy doesn’t allow you to point a CNAME record to an APEX (root) domain like <code>example.com</code>. But don’t worry – there’s a workaround! In this blog, I’ll Walk you through the steps I took to solve this issue and get my application up and running for both <code>www.example.com</code> and <code>example.com</code>.</p>
<hr />
<h1 id="heading-my-scenario"><strong>My Scenario: 😵‍💫</strong></h1>
<p>When I deployed my application using AWS CloudFront, I could only point my domain to <code>www.example.com</code>. Whenever I tried to access <code>example.com</code> (the root domain), I got an error page from GoDaddy. This is because GoDaddy doesn’t support CNAME records for APEX (root) domains by default. CNAME records are typically used for subdomains like <code>www</code>, but not for the root domain.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738219195188/dc55659d-f147-485e-8270-481dbc2846f9.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-the-solution-using-godaddys-forwarding-option"><strong>The Solution: Using GoDaddy’s Forwarding Option💡</strong></h1>
<p>After some research and trial and error, I found a simple workaround using GoDaddy’s forwarding feature. Here’s how I did it:</p>
<ol>
<li><p><strong>Log in to Your GoDaddy Account:</strong> Go to your GoDaddy dashboard and navigate to the domain you want to configure.</p>
</li>
<li><p><strong>Set Up a Forward:</strong></p>
<ul>
<li><p>Go to the <strong>Forwarding</strong> section.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738219458549/0b5c95bd-8efa-4cc0-b670-c0fef9291882.png" alt /></p>
</li>
<li><p>Select the root domain (<code>example.com</code>) and set it to forward to <code>www.example.com</code>.</p>
</li>
<li><p>Choose the <strong>Permanent (301)</strong> forward option.</p>
</li>
<li><p>Then click &gt; <strong>Save</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738220109387/c9cb3d6e-78d4-4afd-b059-81a11af2cc01.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Save the Changes:</strong> Once you’ve configured the forward, save the changes. It might take a few minutes for the changes to propagate.</p>
</li>
</ol>
<hr />
<h1 id="heading-why-this-works"><strong>Why This Works: ⚒️✅</strong></h1>
<p>By setting up a forward, you’re essentially telling GoDaddy to forward all traffic from <code>example.com</code> to <code>www.example.com</code>. Since <code>www.example.com</code> is a subdomain, you can easily point it to your CloudFront distribution using a CNAME record. This way, users accessing <code>example.com</code> will be seamlessly forwarded to <code>www.example.com</code>, where your application is hosted.</p>
<hr />
<h1 id="heading-steps-to-point-wwwexamplecom-to-cloudfront"><strong>Steps to Point</strong> <code>www.example.com</code> to CloudFront: 🧱🔨</h1>
<ol>
<li><p><strong>Go to DNS Management:</strong> In your GoDaddy account, navigate to the DNS management section for your domain.</p>
</li>
<li><p><strong>Add a CNAME Record:</strong></p>
<ul>
<li><p>Create a new CNAME record for <code>www</code>.</p>
</li>
<li><p>Point it to your CloudFront distribution’s domain name (e.g., <code>d1234.cloudfront.net</code>).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738220901560/90d361a5-86ce-4d97-8f8f-441d3c8cbf6f.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Save the Changes:</strong> Save the DNS changes and wait for them to propagate (this can take up to 48 hours, but it’s usually faster).</p>
</li>
</ol>
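<p>Once propagation is done, you can quickly sanity-check both records from a terminal (assuming <code>dig</code> and <code>curl</code> are installed on your machine):</p>
<pre><code class="lang-bash"># The CNAME should resolve to your CloudFront distribution
dig +short www.example.com CNAME

# The root domain should answer with a 301 redirect to www
curl -sI http://example.com | head -n 5
</code></pre>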
<hr />
<h1 id="heading-conclusion"><strong>Conclusion: 🎃</strong></h1>
<p>While GoDaddy doesn’t allow CNAME records for APEX (root) domains, this simple forward workaround ensures that your users can access your application using both <code>example.com</code> and <code>www.example.com</code>. By following these steps, I was able to solve the issue and make my application accessible from both URLs.  </p>
<p>I hope this guide helps you too! ❤️</p>
<hr />
<h1 id="heading-pro-tip"><strong>Pro Tip: 🚀</strong></h1>
<p>📌If you’re looking for a more advanced solution, consider using <strong>AWS Route 53</strong> for DNS management. Route 53 <strong>supports ALIAS records</strong>, which allow you to point your root domain directly to CloudFront without needing the forwarding option.</p>
<p>📌Alternatively, if you don’t want to use AWS, you can use <strong>Cloudflare</strong>. Cloudflare allows you to create CNAME-like records for root domains (using their <strong>CNAME flattening feature</strong>), making it easy to point <code>example.com</code> directly to your CloudFront endpoint. Plus, Cloudflare offers additional benefits like improved performance, security, and DDoS protection.</p>
<hr />
<p>📬Let me know in the comments if this solution worked for you! and 🔗share this guide with others who might find it helpful!</p>
<p>#cloudfront #aws #getintokube #getintokubeblogs</p>
]]></content:encoded></item><item><title><![CDATA[How to Enter into AWS Fargate Container 💡]]></title><description><![CDATA[This blog is for those who are tired of trying to exec into AWS Fargate containers. Even after referring to ChatGPT and various online blogs, you still couldn't find a solution to get inside a Fargate container. Here is the short and on-point solutio...]]></description><link>https://getintokube.com/exec-into-aws-fargate-container</link><guid isPermaLink="true">https://getintokube.com/exec-into-aws-fargate-container</guid><category><![CDATA[getintokube]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[ECS]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS ECS]]></category><category><![CDATA[AWS ECS Fargate]]></category><category><![CDATA[aws-fargate]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[DevOps Journey]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Thu, 23 Jan 2025 16:15:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737704645075/f1bb96d0-11cb-446c-83af-536eea674ca1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This blog is for those who are tired of trying to exec into AWS Fargate containers. Even after referring to ChatGPT and various online blogs, you still couldn't find a solution to get inside a Fargate container. Here is the short and on-point solution you've been looking for.</p>
<h3 id="heading-pre-requisites"><strong>Pre-requisites</strong></h3>
<ol>
<li><p><strong>AWS CLI Installed and Configured:</strong></p>
<ul>
<li><p>Install AWS CLI v2 or later if you haven’t already.</p>
</li>
<li><p>Ensure your CLI is configured with the correct region and credentials (aws configure).</p>
</li>
</ul>
</li>
<li><p><strong>IAM Permissions:</strong></p>
<ul>
<li><p><strong>Add SSM permissions to the <mark>Task IAM role</mark>:</strong></p>
</li>
<li><p>You should add the following policy to your existing ECS task IAM role. This grants permission for the ECS task to connect with the SSM Session Manager service.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737029633071/59d972da-bd25-4940-af2b-c5a480b28ada.png" alt="ECS Task Role we can find here." class="image--center mx-auto" /></p>
</li>
<li><p>Click your task role (for example <code>ecsTaskExecutionRole</code>) &gt; Add Permission &gt; Create inline policy &gt; Switch to JSON &gt; paste the policy below, then save. Repeat this for both policies in this section.</p>
<pre><code class="lang-bash">  {
     <span class="hljs-string">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
     <span class="hljs-string">"Statement"</span>: [
         {
         <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
         <span class="hljs-string">"Action"</span>: [
              <span class="hljs-string">"ssmmessages:CreateControlChannel"</span>,
              <span class="hljs-string">"ssmmessages:CreateDataChannel"</span>,
              <span class="hljs-string">"ssmmessages:OpenControlChannel"</span>,
              <span class="hljs-string">"ssmmessages:OpenDataChannel"</span>
         ],
        <span class="hljs-string">"Resource"</span>: <span class="hljs-string">"*"</span>
        }
     ]
  }
</code></pre>
</li>
<li><p><strong>Add ECS Execute Command permission to your <mark>Task IAM role:</mark></strong></p>
<p>  Make sure your IAM role contains a policy that allows the action <code>ecs:ExecuteCommand</code>. Otherwise, you won’t be able to run <code>aws ecs execute-command</code> from the AWS CLI to access the running container.</p>
</li>
<li><p>✍️ Replace the “Resource” value with your ECS cluster ARN in the policy below⬇️.</p>
<pre><code class="lang-bash">  {
    <span class="hljs-string">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-string">"Statement"</span>: [
      {
        <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
        <span class="hljs-string">"Action"</span>: <span class="hljs-string">"ecs:ExecuteCommand"</span>,
        <span class="hljs-string">"Resource"</span>: <span class="hljs-string">"arn:aws:ecs:example-region:example-arn:cluster/example-cluster/*"</span>
      }
    ]
  }
</code></pre>
</li>
</ul>
</li>
<li><p><strong>AWS Session Manager Plugin Installed:</strong></p>
<ul>
<li><a target="_blank" href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html">Install the Session Manager Plugin for AWS CLI.</a></li>
</ul>
</li>
</ol>
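<p>Before moving on, it’s worth confirming both tools are in place. Running the plugin binary with no arguments should confirm the installation:</p>
<pre><code class="lang-bash">aws --version
session-manager-plugin
</code></pre>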
<hr />
<h3 id="heading-steps-to-execute-into-a-container"><strong>Steps to Execute into a Container</strong></h3>
<h4 id="heading-1-identify-your-cluster-and-task"><strong>1. Identify Your Cluster and Task</strong></h4>
<ul>
<li><p>Find the ECS cluster name and the task running your container:</p>
<pre><code class="lang-bash">  aws ecs list-clusters
</code></pre>
<pre><code class="lang-bash">  aws ecs list-tasks --cluster &lt;your-cluster-name&gt;
</code></pre>
</li>
</ul>
<h4 id="heading-2-describe-the-task"><strong>2. Describe the Task</strong></h4>
<ul>
<li><p>Get details about the task, including the container name:</p>
<pre><code class="lang-bash">  aws ecs describe-tasks --cluster &lt;your-cluster-name&gt; --tasks &lt;task-id&gt;
</code></pre>
</li>
</ul>
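<p>The describe output is quite verbose; if you only need the container name, a <code>--query</code> filter (JMESPath) trims it down:</p>
<pre><code class="lang-bash"># Print only the container name(s) of the task
aws ecs describe-tasks --cluster &lt;your-cluster-name&gt; \
    --tasks &lt;task-id&gt; \
    --query 'tasks[0].containers[].name' \
    --output text
</code></pre>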
<h4 id="heading-3-enable-execute-command-on-the-task"><strong>3. Enable Execute Command on the Task</strong></h4>
<ul>
<li><p>Now enable the ECS Exec feature on the existing ECS service and roll out a new deployment using the command below.</p>
<pre><code class="lang-bash">  aws ecs update-service \
      --cluster &lt;cluster-name&gt; \
      --task-definition &lt;task-definition-name&gt; \
      --service &lt;service-name&gt; \
      --enable-execute-command \
      --force-new-deployment
</code></pre>
</li>
<li><p>After executing the above command, wait for the new task to deploy successfully.</p>
</li>
</ul>
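<p>To confirm the new task is exec-ready, you can check the managed agent status on the task; the <code>ExecuteCommandAgent</code> should report a <code>RUNNING</code> status:</p>
<pre><code class="lang-bash">aws ecs describe-tasks --cluster &lt;cluster-name&gt; \
    --tasks &lt;task-id&gt; \
    --query 'tasks[0].containers[].managedAgents'
</code></pre>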
<h4 id="heading-4-execute-the-command"><strong>4. Execute the Command</strong></h4>
<ul>
<li><p>To open an interactive shell inside the container, run the command below. It uses <code>/bin/sh</code>; swap in <code>/bin/bash</code> if <code>bash</code> is available in your container.</p>
<pre><code class="lang-bash">  aws ecs execute-command --cluster &lt;cluster-name&gt; \
      --task &lt;task-id&gt; \
      --container &lt;container-name&gt; \
      --interactive \
      --<span class="hljs-built_in">command</span> <span class="hljs-string">"/bin/sh"</span>
</code></pre>
</li>
<li><p>This is the output you’ll see when you’re executing <code>aws ecs execute-command</code> on an actual running container.</p>
<pre><code class="lang-bash">  aws ecs execute-command --cluster &lt;cluster-name&gt; \
      --task &lt;task-id&gt; \
      --container &lt;container-name&gt; \
      --interactive \
      --<span class="hljs-built_in">command</span> <span class="hljs-string">"/bin/sh"</span>

  The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.

  Starting session with SessionId: ecs-execute-command-5tap5jrfpg8g5p2o5z8opsfqxe
  <span class="hljs-comment">#</span>
</code></pre>
</li>
</ul>
<p>By following these steps, you can 🤩 successfully enable and use the ECS Exec feature to open an interactive shell inside a running container.</p>
<hr />
<p>If you have any suggestions, ideas, or thoughts to add, feel free to drop them in the comments. 👇📩</p>
<p>Your feedback means a lot! Don’t forget to hit that like❤️ button to show your support and stay tuned for more content. 🔔</p>
<p>⭐Thanks again!</p>
<p>#ecs #aws #ecs_fargate #getintokube #getintokube_blogs</p>
]]></content:encoded></item><item><title><![CDATA[Deploy Java Helm chart on EKS using ArgoCD and GitHub Actions / DevOps Project - 1]]></title><description><![CDATA[In this Article we are going to deploy a java application on AWS EKS cluster. For that we are going to containerize the application, creates Helm charts, install Argocd on EKS cluster, Using GitHub Actions for CI also we are using nginx ingress contr...]]></description><link>https://getintokube.com/deploy-java-helm-chart-on-eks-using-argocd-and-github-actions-devops-project-1</link><guid isPermaLink="true">https://getintokube.com/deploy-java-helm-chart-on-eks-using-argocd-and-github-actions-devops-project-1</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Helm]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Java]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[EKS]]></category><category><![CDATA[getintokube]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Sun, 24 Nov 2024 11:38:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729276721361/e91a587d-779b-489c-9c75-3100c230743d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article we are going to deploy a Java application on an AWS EKS cluster. To do that, we will containerize the application, create Helm charts, install ArgoCD on the EKS cluster, and use GitHub Actions for CI; we also use the NGINX ingress controller to expose our application.</p>
<blockquote>
<p>📌Resources: <a target="_blank" href="https://github.com/gerlynm/java-deployment.git">https://github.com/gerlynm/java-deployment.git</a></p>
</blockquote>
<h2 id="heading-create-eks-cluster-using-eksctl">Create EKS cluster using eksctl</h2>
<p>First, we create the EKS cluster using eksctl. Provisioning the resources can take 10 to 15 minutes.</p>
<pre><code class="lang-bash">eksctl create cluster --name java-eks-cluster --region ap-south-1  --nodegroup-name java-eks-nodes --node-type t2.micro --nodes 2 --profile magic
</code></pre>
<p>While waiting for the resource to be created, we can create <code>Dockerfile</code> for our java application.</p>
<hr />
<h2 id="heading-create-docker-file-for-java-application">Create Docker file for Java application</h2>
<p>You can change the <code>Dockerfile</code> as per your requirement.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Create the image using a distroless base image</span>
FROM gcr.io/distroless/java17-debian11

<span class="hljs-comment"># Set the working directory</span>
WORKDIR /app

<span class="hljs-comment"># Copy the JAR file from the build stage</span>
COPY /target/*.jar ./java.jar

<span class="hljs-comment"># Expose the port the application runs on</span>
EXPOSE 8080

<span class="hljs-comment"># Run the application</span>
ENTRYPOINT [<span class="hljs-string">"java"</span>, <span class="hljs-string">"-jar"</span>, <span class="hljs-string">"java.jar"</span>]
</code></pre>
<ul>
<li><p>We use a <code>distroless</code> base image because it contains only the necessary runtime libraries, with no package manager or shell; this also reduces the image size.</p>
</li>
<li><p><code>WORKDIR</code> sets the working directory inside the container to <code>/app</code>.</p>
</li>
<li><p><code>COPY</code> copies the JAR file from our local <code>target</code> directory into the container.</p>
</li>
<li><p><code>EXPOSE</code> documents that the container listens on port <code>8080</code>; it <mark>doesn’t publish</mark> the port.</p>
</li>
<li><p><code>ENTRYPOINT</code> runs the Java application using the <code>java -jar</code> command, pointing at the <code>java.jar</code> file copied earlier.</p>
</li>
</ul>
<hr />
<h2 id="heading-create-helm-charts">Create Helm charts</h2>
<ul>
<li><p>Helm is the package manager for Kubernetes; charts let us generate manifest files without writing them by hand.</p>
</li>
<li><p>Use the command below to create a sample Helm chart, which we can then alter to fit our requirements.</p>
</li>
</ul>
<pre><code class="lang-bash">helm create java-api-charts
</code></pre>
<ul>
<li><p>Depending on the project requirements, we need to modify the Helm chart templates.</p>
</li>
<li><p>In our case we modify <code>service.yaml</code>, <code>deployment.yaml</code>, and <code>values.yaml</code>.</p>
</li>
</ul>
<p>The snippets below show the changes that need to be made in each file.</p>
<p><code>deployment.yaml</code></p>
<pre><code class="lang-yaml"><span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
    <span class="hljs-attr">containerPort:</span> {{ <span class="hljs-string">.Values.service.targetPort</span> }}
    <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
</code></pre>
<p><code>service.yaml</code></p>
<pre><code class="lang-yaml"><span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> {{ <span class="hljs-string">.Values.service.port</span> }}
      <span class="hljs-attr">targetPort:</span> {{ <span class="hljs-string">.Values.service.targetPort</span> }}
      <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
</code></pre>
<p><code>values.yaml</code></p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span>
  <span class="hljs-attr">repository:</span> <span class="hljs-string">&lt;aws-account-id&gt;.dkr.ecr.ap-south-1.amazonaws.com/&lt;repo-name&gt;</span>
  <span class="hljs-comment"># This sets the pull policy for images.</span>
  <span class="hljs-attr">pullPolicy:</span> <span class="hljs-string">IfNotPresent</span>
  <span class="hljs-comment"># Overrides the image tag whose default is the chart appVersion.</span>
  <span class="hljs-attr">tag:</span> <span class="hljs-string">"latest"</span>

<span class="hljs-attr">service:</span>
  <span class="hljs-comment"># This sets the service type more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">ClusterIP</span>
  <span class="hljs-comment"># This sets the ports more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#field-spec-ports</span>
  <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
  <span class="hljs-attr">targetPort:</span> <span class="hljs-number">8080</span>

<span class="hljs-attr">ingress:</span>
  <span class="hljs-attr">enabled:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">className:</span> <span class="hljs-string">"nginx"</span>
  <span class="hljs-attr">annotations:</span> 
    <span class="hljs-attr">nginx.ingress.kubernetes.io/rewrite-target:</span> <span class="hljs-string">/</span>
    <span class="hljs-comment"># kubernetes.io/ingress.class: nginx</span>
    <span class="hljs-comment"># kubernetes.io/tls-acme: "true"</span>
  <span class="hljs-attr">hosts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">rest-java-api.local</span>
      <span class="hljs-attr">paths:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
          <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>

<span class="hljs-attr">livenessProbe:</span>
  <span class="hljs-attr">httpGet:</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">/hello-world</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">8080</span>
<span class="hljs-attr">readinessProbe:</span>
  <span class="hljs-attr">httpGet:</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">/hello-world</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">8080</span>
</code></pre>
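<p>After editing the templates, it’s a good habit to validate the chart locally before deploying it. <code>helm lint</code> catches syntax issues, and <code>helm template</code> renders the final manifests so you can review them (the release name <code>java-api</code> below is just an example):</p>
<pre><code class="lang-bash">helm lint ./java-api-charts
helm template java-api ./java-api-charts
</code></pre>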
<hr />
<h2 id="heading-create-ecr-in-aws">Create ECR in AWS</h2>
<p>To avoid clicking through the console, just create the ECR repository using the command below.</p>
<pre><code class="lang-bash">aws ecr create-repository --repository-name rest-java-api --profile magic
</code></pre>
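<p>With the repository in place, the image can be built and pushed to ECR. Replace <code>&lt;aws-account-id&gt;</code> with your own account ID:</p>
<pre><code class="lang-bash"># Authenticate Docker to the ECR registry
aws ecr get-login-password --region ap-south-1 --profile magic | \
    docker login --username AWS --password-stdin &lt;aws-account-id&gt;.dkr.ecr.ap-south-1.amazonaws.com

# Build, tag, and push the image
docker build -t rest-java-api .
docker tag rest-java-api:latest &lt;aws-account-id&gt;.dkr.ecr.ap-south-1.amazonaws.com/rest-java-api:latest
docker push &lt;aws-account-id&gt;.dkr.ecr.ap-south-1.amazonaws.com/rest-java-api:latest
</code></pre>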
<hr />
<h2 id="heading-install-nginx-ingress-controller-nlb">Install Nginx Ingress Controller (NLB)</h2>
<p>This command installs the NGINX ingress controller in the cluster and creates a Network Load Balancer in our AWS account.</p>
<pre><code class="lang-bash">kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/aws/deploy.yaml
</code></pre>
<blockquote>
<p>📌 NLB (Network Load Balancer) doesn’t natively handle HTTP/HTTPS</p>
<p>But you can still route HTTP/HTTPS traffic by configuring the NLB to forward the traffic to your instances, where it is handled by an ingress controller (like NGINX) running within your Kubernetes cluster.</p>
</blockquote>
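<p>You can verify the controller came up and grab the NLB hostname (shown under <code>EXTERNAL-IP</code>) with:</p>
<pre><code class="lang-bash">kubectl get pods -n ingress-nginx
kubectl get svc ingress-nginx-controller -n ingress-nginx
</code></pre>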
<hr />
<h2 id="heading-install-argocd-in-eks-cluster">Install ArgoCD in EKS cluster</h2>
<p>Using the commands below, we can install ArgoCD in our EKS cluster.</p>
<pre><code class="lang-bash">kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<hr />
<h2 id="heading-to-access-the-argocd-ui">To access the ArgoCD UI</h2>
<p>There are multiple ways to access the ArgoCD UI; here we are using the simplest one: port forwarding.</p>
<pre><code class="lang-bash">kubectl get svc -n argocd <span class="hljs-comment">#look for service named "argocd-server" </span>
kubectl port-forward svc/argocd-server 8080:443 -n argocd
</code></pre>
<p>(Optional) To run the <code>kubectl port-forward</code> command in the background, use the commands below:</p>
<pre><code class="lang-bash"><span class="hljs-comment">#To run the process</span>
nohup kubectl port-forward svc/argocd-server 8080:443 -n argocd &amp;
tail -f nohup.out

<span class="hljs-comment">#To kill the process</span>
ps aux | grep <span class="hljs-string">'kubectl port-forward'</span>
<span class="hljs-built_in">kill</span> &lt;PID&gt;
</code></pre>
<p>To log in to the ArgoCD UI:</p>
<ul>
<li><p>Get the secrets of argocd namespace.</p>
</li>
<li><p>Then open the secret named <code>argocd-initial-admin-secret</code> to view its value, and copy it.</p>
</li>
<li><p>The copied password is base64-encoded by default, so decode it with the <code>base64</code> command below.</p>
<pre><code class="lang-bash">  kubectl get secrets -n argocd
  kubectl edit secrets argocd-initial-admin-secret  -n argocd
  <span class="hljs-built_in">echo</span> &lt;password&gt; | base64 --decode
</code></pre>
</li>
<li><p>Now provide the credentials in the login page. The default username is <code>admin</code></p>
</li>
</ul>
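<p>If you prefer a single command, the secret can also be fetched and decoded in one step. The <code>jsonpath</code> one-liner below is a common pattern for a default ArgoCD install; the sample base64 value in the second part is purely illustrative:</p>
<pre><code class="lang-bash"># One-liner (assumes the default install in the argocd namespace):
# kubectl -n argocd get secret argocd-initial-admin-secret \
#   -o jsonpath="{.data.password}" | base64 --decode

# The decode step on its own, with an illustrative base64 value:
decoded=$(echo 'c3VwZXJzZWNyZXQ=' | base64 --decode)
echo "$decoded"   # prints: supersecret
</code></pre>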
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729611327405/e7761797-fd4b-4d6e-ab62-7b6e92e42d7f.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-create-github-actions-workflow">Create GitHub Actions workflow</h2>
<p>This is the GitHub Actions workflow for our Java application. It:</p>
<ol>
<li><p>Configures the triggers (main, master, dev, etc.)</p>
</li>
<li><p>Sets the working directory</p>
</li>
<li><p>Sets up Java and builds the application using Maven.</p>
</li>
<li><p>Configures AWS credentials.</p>
</li>
<li><p>Logs in to Amazon ECR (Elastic Container Registry).</p>
</li>
<li><p>Builds a Docker image and pushes it to ECR.</p>
</li>
</ol>
<pre><code class="lang-bash">name: Java CI with Maven, Docker build and push to AWS ECR

on:
  push:
    branches: [ <span class="hljs-string">"master"</span> ]
  pull_request:
    branches: [ <span class="hljs-string">"master"</span> ]

<span class="hljs-built_in">jobs</span>:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: 01-hello-world-rest-api
    steps:
    - uses: actions/checkout@v4
    - name: Set up JDK 17
      uses: actions/setup-java@v4
      with:
        java-version: <span class="hljs-string">'17'</span>
        distribution: <span class="hljs-string">'temurin'</span>
        cache: maven

    - name: Build with Maven
      run: mvn clean package -DskipTests

    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: <span class="hljs-variable">${{ secrets.AWS_ACCESS_KEY_ID }</span>}
        aws-secret-access-key: <span class="hljs-variable">${{ secrets.AWS_SECRET_ACCESS_KEY }</span>}
        aws-region: ap-south-1

    - name: Login to Amazon ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v1

    - name: Build, tag, and push docker image to Amazon ECR
      env:
        REGISTRY: <span class="hljs-variable">${{ steps.login-ecr.outputs.registry }</span>}
        REPOSITORY: <span class="hljs-variable">${{ secrets.AWS_ECR }</span>}
        IMAGE_TAG: latest
      run: |
        docker build -t <span class="hljs-variable">$REGISTRY</span>/<span class="hljs-variable">$REPOSITORY</span>:<span class="hljs-variable">$IMAGE_TAG</span> .
        docker push <span class="hljs-variable">$REGISTRY</span>/<span class="hljs-variable">$REPOSITORY</span>:<span class="hljs-variable">$IMAGE_TAG</span>
</code></pre>
<p>Add the AWS credentials and the ECR repository name as repository secrets.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729653749102/27ca8260-fd35-459a-8e68-4c8e9dd37fbb.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-create-argocd-application">Create ArgoCD application</h2>
<ul>
<li><p>Click on <strong>NEW APP</strong> button &gt; provide Application Name, Project Name, Sync Policy, GitHub Repository, Path.</p>
</li>
<li><p>Then click <strong>Create</strong> button.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729658928972/cf8cb50a-4ee3-458a-a8ef-6630af6f0fa3.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-application-deployed">Application Deployed!</h2>
<ul>
<li><p>Within a couple of seconds, our ArgoCD application syncs with the repository and deploys the application.</p>
</li>
<li><p>In the image below, we can see that all objects are in a healthy state.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729653816143/71688970-5c50-411c-b99e-e71af9565d4c.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-access-the-application-via-cli">Access the Application via CLI</h2>
<p>Since we configured the local domain (<code>rest-java-api.local</code>) in the Ingress resource, the ingress controller only routes requests whose <code>Host</code> header matches that domain.</p>
<p>To access using local domain follow the steps below</p>
<pre><code class="lang-bash">host &lt;loadbalancer-dns-name&gt;
</code></pre>
<p>Using the <code>host</code> command, we can get the IP address of the load balancer.</p>
<p>Then, running the command below against that IP address returns the desired output.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Replace 13.200.151.192 with your load balancer ip address</span>
curl -H <span class="hljs-string">"Host: rest-java-api.local"</span> 13.200.151.192/hello-world
</code></pre>
<p>This command will send a request to <code>http://13.200.151.192/hello-world</code> with the <code>Host</code> header set to <code>rest-java-api.local</code>.</p>
<p>In simple terms, this command sends a request to a specific IP address but tells the server it's trying to reach a specific domain.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729653689534/856cf979-f679-435a-b840-5163654fd246.png" alt class="image--center mx-auto" /></p>
<p>Here you can see the <code>Hello World</code> output from our application.</p>
<hr />
<h2 id="heading-access-the-application-via-ui">Access the Application via UI</h2>
<p>To access the application from a browser, we need to set up a DNS mapping for our local domain name.</p>
<p>I have already written docs on <a target="_blank" href="https://hashnode.com/docs/6717c178cfa8adba40050060/guide/6717c179cae8d3b14b6719cf/page/6717c257db0267b3367ae88c">how to step up DNS mapping</a>, follow those steps and browse this URL: <code>rest-java-api.local/hello-world</code></p>
<blockquote>
<p>📌 In real-world scenarios we would use a valid domain name (like google.com or amazon.com), so the step above is not required in real projects.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729653504027/64742e64-bfa7-4a95-b265-37912008318c.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-clean-up-the-resources">Clean Up the Resources</h2>
<p>To delete all the resources created for this project, use the commands below.</p>
<h3 id="heading-delete-the-eks-cluster">Delete the EKS cluster</h3>
<pre><code class="lang-bash">eksctl delete cluster --name eks-cluster --region ap-south-1 --profile magic
</code></pre>
<h3 id="heading-delete-ecr-repository">Delete ECR Repository</h3>
<pre><code class="lang-bash">aws ecr delete-repository --repository-name rest-java-api --force --profile magic
</code></pre>
<hr />
<p>Thanks for sticking with me until the end! 😊</p>
<p>I hope you found this project valuable and worth your time. If you have any suggestions, ideas, or thoughts to add, feel free to drop them in the comments. 👇📩</p>
<p>Your feedback means a lot! Don’t forget to hit that like ❤️ button to show your support and stay tuned for more content. 🔔</p>
<p>⭐Thanks again!</p>
<p>#getintokube #getintokubeblogs</p>
]]></content:encoded></item><item><title><![CDATA[PHP for DevOps Engineer]]></title><description><![CDATA[PHP is a server-side scripting language mainly for web development. It is an Interpreted language so it does not require a compiler.
PHP runs on the server rather than the client’s browser because PHP interprets the .php code into HTML and send it to...]]></description><link>https://getintokube.com/php-for-devops-engineer</link><guid isPermaLink="true">https://getintokube.com/php-for-devops-engineer</guid><category><![CDATA[php for devops]]></category><category><![CDATA[PHP]]></category><category><![CDATA[Devops]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[getintokube]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[DevOps trends]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Wed, 09 Oct 2024 04:30:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728365595235/03bc9d9e-9938-4d8d-9595-2eae8ea2f211.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>PHP is a server-side scripting language mainly for web development. It is an Interpreted language so it does not require a compiler.</p>
<p>PHP runs on the server rather than in the client’s browser: the PHP interpreter turns the <code>.php</code> code into HTML and sends it to the client’s browser.</p>
<blockquote>
<p>⭐ In contrast, client-side languages like JavaScript run in the user’s browser.</p>
</blockquote>
<h2 id="heading-how-does-php-work">How does PHP work?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728369410480/2e58dd59-04dc-4f85-b90c-23e3210f5b7d.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Client Request</strong>: When a user requests a PHP webpage, the browser sends a request to the web server.</p>
</li>
<li><p><strong>Web Server Processes Request</strong>: The web server (like Apache or Nginx) receives the request, recognizes that it is for a PHP page, and hands the file to the PHP interpreter.</p>
</li>
<li><p><strong>PHP Interpreter Executes Code</strong>: The PHP interpreter processes the PHP code within the PHP file.</p>
</li>
<li><p><strong>Generate HTML</strong>: After processing the PHP file, it generates HTML (or other web-compatible output) as the response content.</p>
</li>
<li><p><strong>Send Response to Browser</strong>: The web server sends the generated HTML back to the client's browser, which renders the content.</p>
</li>
</ol>
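<p>As a rough sketch of what step 3 consumes, the snippet below writes a minimal <code>.php</code> page (the file name and content are illustrative). With PHP installed, its built-in dev server (<code>php -S</code>) would serve it, and the browser would receive only the generated HTML, never the PHP source:</p>
<pre><code class="lang-bash"># Write a minimal PHP page (illustrative only)
cat &gt; index.php &lt;&lt;'EOF'
&lt;html&gt;&lt;body&gt;
  &lt;h1&gt;&lt;?php echo "Hello from the server at " . date("H:i"); ?&gt;&lt;/h1&gt;
&lt;/body&gt;&lt;/html&gt;
EOF
# php -S localhost:8000   # built-in dev server (requires PHP installed)
</code></pre>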
<h2 id="heading-use-of-dependency-manager">Use of Dependency Manager?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728376214934/e68ea74d-905f-4f4a-bcfa-acabb965319b.png" alt class="image--center mx-auto" /></p>
<p>A dependency manager is an important tool that helps us manage the libraries and packages our PHP application requires.</p>
<p>There are multiple dependency managers, but the most widely used one for PHP is <strong>Composer</strong>.</p>
<ul>
<li><p>Composer makes it simple to manage our project's dependencies through a single <code>composer.json</code> file.</p>
</li>
<li><p>You can add dependencies using the <code>composer</code> command:</p>
</li>
</ul>
<pre><code class="lang-bash">composer require monolog/monolog
</code></pre>
<p>This command updates the <code>composer.json</code> file and installs the package.</p>
<ul>
<li>To install the dependencies listed in <code>composer.json</code></li>
</ul>
<pre><code class="lang-bash">composer install
</code></pre>
<ul>
<li>To update all dependencies to their latest versions</li>
</ul>
<pre><code class="lang-bash">composer update
</code></pre>
<p>Example for <code>composer.json</code> file</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"my-php-project"</span>,
    <span class="hljs-attr">"description"</span>: <span class="hljs-string">"A simple PHP project."</span>,
    <span class="hljs-attr">"require"</span>: {
        <span class="hljs-attr">"monolog/monolog"</span>: <span class="hljs-string">"2.3.0"</span>,
        <span class="hljs-attr">"guzzlehttp/guzzle"</span>: <span class="hljs-string">"^7.0"</span> 
    }
}
</code></pre>
<h2 id="heading-logging">Logging:</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728375935505/003a70ad-1d92-4c1d-8d72-70ea77e147f2.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>PHP applications typically log via libraries like <strong>Monolog</strong>, the <strong>built-in <code>error_log()</code> function</strong>, <strong>logging to a database</strong>, or <strong>Laravel’s logging (if using Laravel)</strong>.</p>
</li>
<li><p>Using these tools, we can set up robust logging mechanisms for our PHP application, making it easier to monitor and debug our application.</p>
</li>
</ul>
<h2 id="heading-environment-variables">Environment variables:</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728376117727/38399e28-a6eb-4422-bb24-ba7c82795495.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>In PHP, configuration settings are typically placed in a file named <code>config.php</code>, but nowadays environment-specific configuration is kept in a <code>.env</code> file.</p>
</li>
<li><p>Especially for applications built with frameworks such as <strong>Laravel</strong>, <strong>configuration settings</strong> like ports and database connection strings are stored in this <code>.env</code> file.</p>
<p>  <strong>Example:</strong></p>
</li>
</ul>
<pre><code class="lang-plaintext">DB_HOST=localhost
DB_USERNAME=root
DB_PASSWORD=password
DB_NAME=example_db
APP_DEBUG=true 
APP_URL=http://example.com
</code></pre>
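<p>As a minimal sketch of how such a file is consumed: a shell (or a container entrypoint script) can load a <code>.env</code> file directly. Frameworks like Laravel do this internally through a dotenv library, so this only illustrates the mechanism; the values are the example ones from above:</p>
<pre><code class="lang-bash"># Write a sample .env (values are illustrative)
printf 'DB_HOST=localhost\nDB_NAME=example_db\n' &gt; .env

# 'set -a' auto-exports every variable assigned afterwards;
# sourcing the file then turns each KEY=value line into an env var
set -a
. ./.env
set +a

echo "$DB_HOST"   # prints: localhost
</code></pre>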
<h2 id="heading-containerizing-the-php-application"><strong>Containerizing the PHP application:</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728375982436/a938fa8d-d6cd-480c-9162-55f8d436d35d.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Use the official PHP 8.0 image that ships with the Apache web server.</p>
</li>
<li><p>Set the working directory in the container to <code>/var/www</code></p>
</li>
<li><p>Install PHP extensions required by the application.</p>
</li>
<li><p>Copy the files into the container directory.</p>
</li>
<li><p>Enables the Apache <code>mod_rewrite</code> module, commonly used for URL rewriting in PHP applications.</p>
</li>
<li><p><code>EXPOSE 9000</code> only documents that the container is intended to listen on port 9000; it does not publish the port. (Note: Apache in the <code>php:8.0-apache</code> image listens on port 80 by default unless reconfigured.)</p>
</li>
<li><p>This command runs Apache in the foreground to keep the container running.</p>
</li>
</ol>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># Use the official PHP image as a parent image</span>
<span class="hljs-keyword">FROM</span> php:<span class="hljs-number">8.0</span>-apache    

<span class="hljs-comment"># Set working directory</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /var/www</span>

<span class="hljs-comment"># Install PHP extensions</span>
<span class="hljs-keyword">RUN</span><span class="bash"> docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd</span>

<span class="hljs-comment"># Copy application files</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . /var/www</span>

<span class="hljs-comment"># Enable Apache modules</span>
<span class="hljs-keyword">RUN</span><span class="bash"> a2enmod rewrite</span>

<span class="hljs-comment"># Expose port 9000 </span>
<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">9000</span>

<span class="hljs-comment"># Start Apache in the foreground</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"apache2-foreground"</span>]</span>
</code></pre>
<hr />
<p>Thanks for sticking with me until the end! 😊</p>
<p>I know it was a lengthy read, but I hope you found it valuable and worth your time. If you have any suggestions, ideas, or thoughts to add, feel free to drop them in the comments. 👇📩</p>
<p>Your feedback means a lot! Don’t forget to hit that like❤️ button to show your support and stay tuned for more content. 🔔</p>
<p>⭐Thanks again!</p>
<p>#getintokube #getintokubeblogs</p>
]]></content:encoded></item><item><title><![CDATA[JAVA for DevOps Engineer (One Stop Solution)]]></title><description><![CDATA[Java is a hybrid of both Compiled and Interpreted language, also it’s not an Interpreted language like JavaScript or Python. Here we use the Build tool to build Java applications so that we can deploy and manage them properly.
We will cover why Java ...]]></description><link>https://getintokube.com/java-for-devops-engineer</link><guid isPermaLink="true">https://getintokube.com/java-for-devops-engineer</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[java for devops]]></category><category><![CDATA[getintokubeblogs]]></category><category><![CDATA[getintokube]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[DevOps trends]]></category><category><![CDATA[Java]]></category><dc:creator><![CDATA[Gerlyn M]]></dc:creator><pubDate>Fri, 27 Sep 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727446505614/39ac781e-5414-471a-9d3b-9805f74a7267.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Java is a <strong>hybrid of both Compiled and Interpreted language</strong>, also it’s not an Interpreted language like JavaScript or Python. Here we use the <strong>Build tool</strong> to build Java applications so that we can deploy and manage them properly.</p>
<p>We will cover why Java is a hybrid of a compiled and an interpreted language in a bit.</p>
<blockquote>
<p>⭐Java is platform-independent, but the JVM is not...</p>
<p>The <mark>“write once, run anywhere”</mark> concept applies to the bytecode produced by the <strong>Java compiler</strong>. Each platform (Windows, macOS, Linux) has its own version of the JVM, so Java maintains platform independence through its bytecode.</p>
</blockquote>
<h2 id="heading-how-does-java-work">How does Java work?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727456355679/7da153c8-490c-487c-b971-916d21abbd2f.png" alt class="image--center mx-auto" /></p>
<p>1. <strong>Writing the Java Code:</strong></p>
<ul>
<li><p>The developer writes source code in <strong>Java</strong>, typically in a <code>.java</code> file.</p>
</li>
<li><p>This is <strong>human-readable</strong> code that follows Java’s syntax.</p>
</li>
</ul>
<p>2. <strong>Compile to Bytecode:</strong></p>
<ul>
<li><p>After the code is written, it must be compiled. The <strong>Java compiler (javac)</strong> takes the human-readable <code>.java</code> file and turns it into bytecode.</p>
</li>
<li><p><strong>Bytecode</strong> is a low-level, platform-independent representation of code that's stored in <code>.class</code> files</p>
</li>
</ul>
<p>3. <strong>Execution - Java Virtual Machine</strong> (JVM):</p>
<ul>
<li><p>The compiled bytecode doesn’t run directly on the hardware like fully compiled languages (C, C++).</p>
</li>
<li><p>Instead, the JVM <strong>interprets</strong> the bytecode into machine code and executes the application.</p>
</li>
</ul>
<blockquote>
<p>⭐That’s why Java is called a hybrid of a compiled and an interpreted language.</p>
</blockquote>
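<p>The three steps above can be seen end-to-end on any machine with a JDK installed (the class name and message below are illustrative):</p>
<pre><code class="lang-bash"># Step 1: write the human-readable source file
cat &gt; Hello.java &lt;&lt;'EOF'
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from bytecode");
    }
}
EOF

javac Hello.java   # Step 2: produces platform-independent Hello.class bytecode
java Hello         # Step 3: the JVM loads the bytecode and runs it
</code></pre>
<p>The resulting <code>Hello.class</code> file runs unchanged on Windows, macOS, or Linux, because each platform’s JVM does the final translation to machine code.</p>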
<hr />
<h2 id="heading-jdk-vs-jre-vs-jvm">JDK vs JRE vs JVM</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727442073740/5560af9f-d4b4-4829-9249-dc2b8da2f143.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-jdk"><strong>JDK</strong>:</h3>
<ul>
<li><p>It is a whole <strong>software development kit</strong> used to write, compile, and debug Java applications.</p>
</li>
<li><p>Includes JRE (Java Runtime Environment) + development tools (like compilers and debuggers).</p>
</li>
</ul>
<h3 id="heading-jre"><strong>JRE</strong>:</h3>
<ul>
<li><p>The JRE provides everything needed to <strong>run</strong> the Java application like libraries and other dependencies, but not to <strong>develop</strong> them.</p>
</li>
<li><p>Contains JVM (Java Virtual Machine) + core libraries and other components to run Java applications.</p>
</li>
</ul>
<h3 id="heading-jvm"><strong>JVM</strong>:</h3>
<ul>
<li><p>This is the <strong>engine</strong> that runs our Java applications.</p>
</li>
<li><p>Takes the bytecode and translates it into <strong>machine</strong> code for execution.</p>
</li>
</ul>
<p>So, to develop an application, the JDK is a must-have. To merely run a Java application, the JRE alone is enough.</p>
<hr />
<h2 id="heading-why-do-we-need-a-applicationproperties-file">Why do we need an <code>application.properties</code> file?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727444059886/4b7b8f00-e78b-44d1-b9fb-eeb7fb5e244b.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Application configuration is usually done through an <code>application.properties</code> or an <code>application.yml</code> file.</p>
</li>
<li><p>Especially for applications built with frameworks such as <strong>Spring Boot</strong>, <strong>configuration settings</strong> like ports and database connection strings are stored this way.</p>
</li>
</ul>
<p><strong>Example</strong>:</p>
<pre><code class="lang-properties"># application.properties
server.port=8081

# Database settings
spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=root
spring.datasource.password=pass12
</code></pre>
<hr />
<h2 id="heading-use-of-java-build-tools">Use of Java Build Tools?</h2>
<ul>
<li><p><strong>Java build tools</strong> are important in managing the complexities of building, packaging, testing, and deploying a Java application.</p>
</li>
<li><p>Build tools such as <strong>Maven</strong>, <strong>Gradle</strong>, and <strong>Ant</strong> automate these processes, keeping them consistent and efficient throughout the entire software development lifecycle.</p>
</li>
</ul>
<hr />
<h2 id="heading-what-is-pomxml-amp-buildgradle-files">What are the <code>pom.xml</code> &amp; <code>build.gradle</code> files?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727443449179/799e1663-c1aa-407a-9a0d-658387f73b22.png" alt class="image--center mx-auto" /></p>
<p>These <code>pom.xml</code> (used with <strong>Maven</strong>) and <code>build.gradle</code> (used with <strong>Gradle</strong>) are configuration files essential for managing dependencies, building, and packaging our code.</p>
<h3 id="heading-whats-in-pomxml-for-maven">What’s in <code>pom.xml</code> (for Maven)?</h3>
<p>The <strong>Project Object Model (POM)</strong> file is an XML file that defines the configuration of a Maven project.</p>
<h4 id="heading-key-sections-of-pomxml">Key Sections of <code>pom.xml</code>:</h4>
<ol>
<li><p><strong>Project Information</strong>:</p>
<ul>
<li>Here it defines the project's metadata like name, version, description, etc.</li>
</ul>
</li>
</ol>
<pre><code class="lang-java">    &lt;project&gt;
      &lt;modelVersion&gt;<span class="hljs-number">4.0</span>.<span class="hljs-number">0</span>&lt;/modelVersion&gt;
      &lt;groupId&gt;com.example&lt;/groupId&gt;
      &lt;artifactId&gt;myapp&lt;/artifactId&gt;
      &lt;version&gt;<span class="hljs-number">1.0</span>-SNAPSHOT&lt;/version&gt;
      &lt;packaging&gt;jar&lt;/packaging&gt;
    &lt;/project&gt;
</code></pre>
<ol start="2">
<li><p><strong>Dependencies</strong>:</p>
<ul>
<li>This section lists the external libraries (dependencies) our project needs. Maven automatically downloads them from the <strong>Maven Central Repository</strong> or another configured repository.</li>
</ul>
</li>
</ol>
<pre><code class="lang-java">    &lt;dependencies&gt;
      &lt;dependency&gt;
        &lt;groupId&gt;junit&lt;/groupId&gt;
        &lt;artifactId&gt;junit&lt;/artifactId&gt;
        &lt;version&gt;<span class="hljs-number">4.12</span>&lt;/version&gt;
        &lt;scope&gt;test&lt;/scope&gt;
      &lt;/dependency&gt;
    &lt;/dependencies&gt;
</code></pre>
<ol start="3">
<li><p><strong>Build Configuration</strong>:</p>
<ul>
<li>This section describes how to build the project, including <strong>plugins</strong> for compiling, testing, packaging, etc.</li>
</ul>
</li>
</ol>
<pre><code class="lang-java">    &lt;build&gt;
      &lt;plugins&gt;
        &lt;plugin&gt;
          &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;
          &lt;artifactId&gt;maven-compiler-plugin&lt;/artifactId&gt;
          &lt;version&gt;<span class="hljs-number">3.8</span>.<span class="hljs-number">1</span>&lt;/version&gt;
          &lt;configuration&gt;
            &lt;source&gt;<span class="hljs-number">1.8</span>&lt;/source&gt;
            &lt;target&gt;<span class="hljs-number">1.8</span>&lt;/target&gt;
          &lt;/configuration&gt;
        &lt;/plugin&gt;
      &lt;/plugins&gt;
    &lt;/build&gt;
</code></pre>
<ol start="4">
<li><p><strong>Repositories</strong>:</p>
<ul>
<li>It specifies where Maven should look for dependencies if they are not found in the Maven Central Repository.</li>
</ul>
</li>
</ol>
<pre><code class="lang-java">    &lt;repositories&gt;
      &lt;repository&gt;
        &lt;id&gt;my-repo&lt;/id&gt;
        &lt;url&gt;https:<span class="hljs-comment">//my.repo.url&lt;/url&gt;</span>
      &lt;/repository&gt;
    &lt;/repositories&gt;
</code></pre>
<ol start="5">
<li><p><strong>Profiles</strong>:</p>
<ul>
<li>Profiles are used to configure builds for different environments, like production or development.</li>
</ul>
</li>
</ol>
<pre><code class="lang-java">    &lt;profiles&gt;
      &lt;profile&gt;
        &lt;id&gt;prod&lt;/id&gt;
        &lt;build&gt;
          &lt;!-- Prod specific config --&gt;
        &lt;/build&gt;
      &lt;/profile&gt;
    &lt;/profiles&gt;
</code></pre>
<h3 id="heading-whats-in-buildgradle-for-gradle"><strong>What’s in</strong> <code>build.gradle</code> (for Gradle)?</h3>
<p>Gradle uses a <strong>DSL (Domain Specific Language)</strong>, and <code>build.gradle</code> files can be written in <strong>Groovy</strong> or <strong>Kotlin</strong>. They are generally more concise and flexible than <code>pom.xml</code>.</p>
<h4 id="heading-key-sections-of-buildgradle">Key Sections of <code>build.gradle</code>:</h4>
<ol>
<li><p><strong>Plugins</strong>:</p>
<ul>
<li>It defines the plugins that extend Gradle's functionality. For example, the <code>java</code> plugin helps in building Java projects.</li>
</ul>
</li>
</ol>
<pre><code class="lang-groovy">    plugins {
      id <span class="hljs-string">'java'</span>
    }
</code></pre>
<ol start="2">
<li><p><strong>Dependencies</strong>:</p>
<ul>
<li>Similar to Maven, this section manages the external libraries our project needs. Gradle can fetch these libraries from repositories like <strong>Maven Central</strong>.</li>
</ul>
</li>
</ol>
<pre><code class="lang-groovy">    dependencies {
      implementation <span class="hljs-string">'org.springframework.boot:spring-boot-starter-actuator'</span>
      testImplementation <span class="hljs-string">'org.testcontainers:junit-jupiter'</span>
    }
</code></pre>
<ol start="3">
<li><p><strong>Repositories</strong>:</p>
<ul>
<li>Here it specifies where Gradle should look for dependencies. By default, Gradle uses Maven Central.</li>
</ul>
</li>
</ol>
<pre><code class="lang-groovy">    repositories {
      mavenCentral()
    }
</code></pre>
<ol start="4">
<li><p><strong>Build Script</strong>:</p>
<ul>
<li>Here we describe how our project should be built. We can add tasks for compilation, packaging, running tests, etc.</li>
</ul>
</li>
</ol>
<pre><code class="lang-groovy">    tasks.register(<span class="hljs-string">'hello'</span>) {
      doLast {
        println <span class="hljs-string">'Hello, World!'</span>
      }
    }
</code></pre>
<ol start="5">
<li><p><strong>Custom Tasks</strong>:</p>
<ul>
<li>Gradle also lets us define custom tasks to automate specific actions, giving it more flexibility than Maven in certain respects.</li>
</ul>
</li>
</ol>
<pre><code class="lang-groovy">    task customTask {
      doLast {
        println <span class="hljs-string">'Running custom task!'</span>
      }
    }
</code></pre>
<ol start="6">
<li><p><strong>Project Information</strong>:</p>
<ul>
<li>We can specify project details like <code>group</code>, <code>version</code>, etc., similar to <code>pom.xml</code>.</li>
</ul>
</li>
</ol>
<pre><code class="lang-groovy">    group = <span class="hljs-string">'com.example'</span>
    version = <span class="hljs-string">'1.0-SNAPSHOT'</span>
</code></pre>
<ol start="7">
<li><p><strong>Build Configuration</strong>:</p>
<ul>
<li>We can configure how to compile and package the project, including specifying Java versions, adding plugins, and more.</li>
</ul>
</li>
</ol>
<pre><code class="lang-groovy">    compileJava {
      sourceCompatibility = <span class="hljs-string">'1.8'</span>
      targetCompatibility = <span class="hljs-string">'1.8'</span>
    }
</code></pre>
<hr />
<h2 id="heading-logging">Logging:</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727458484833/8dd57d82-c52a-485b-a9d3-a8d81cbc24d7.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Java applications typically use logging frameworks like <strong>Log4j</strong>, <strong>SLF4J</strong>, or <strong>Logback</strong>.</p>
</li>
<li><p>These <strong>logging frameworks</strong> provide <strong>more flexible configuration</strong> than the built-in logging, with better control over log formats, performance optimizations, and fine-grained log levels.</p>
</li>
</ul>
<hr />
<h2 id="heading-difference-between-war-and-jar-files">Difference between War and Jar files:</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727459522799/5c019361-0a92-4a12-81f7-9f9fb3f5d055.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-jar-java-archive"><strong>JAR (Java Archive)</strong>:</h3>
<ul>
<li><p>JAR files are primarily used to package <strong>Java applications</strong>, libraries, or resources.</p>
</li>
<li><p>Contains only compiled <code>.class</code> files and resources.</p>
</li>
<li><p>Typically <strong>deployed or executed directly</strong> with the <code>java -jar</code> command.</p>
</li>
<li><p><strong>Example</strong>: A desktop application, <strong>API</strong> application or a library like <code>spring-core.jar</code>.</p>
</li>
</ul>
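<p>Since a JAR is essentially a ZIP archive with a <code>META-INF/MANIFEST.MF</code> entry, building and running one can be sketched with the JDK’s own <code>jar</code> tool (the names below are illustrative; the long options require JDK 9+):</p>
<pre><code class="lang-bash">mkdir -p app
cat &gt; app/Main.java &lt;&lt;'EOF'
public class Main {
    public static void main(String[] args) { System.out.println("jar works"); }
}
EOF
javac app/Main.java -d app                                      # compile to app/Main.class
jar --create --file app.jar --main-class Main -C app Main.class # package a runnable JAR
jar --list --file app.jar    # shows META-INF/MANIFEST.MF and Main.class
java -jar app.jar            # prints: jar works
</code></pre>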
<h3 id="heading-war-web-application-archive"><strong>WAR (Web Application Archive)</strong>:</h3>
<ul>
<li><p>WAR files are specifically used to package <strong>Java web applications</strong>.</p>
</li>
<li><p>Contains <strong>everything needed for a web application</strong>, including <code>.class</code> files, JSP pages, HTML, CSS, JS, and configuration files like <code>web.xml</code>.</p>
</li>
<li><p>Deploy <code>myapp.war</code> to Tomcat by copying it to <code>/path/to/tomcat/webapps/</code></p>
</li>
<li><p><strong>Example</strong>: A web-based application or service like <code>myapp.war</code> running on a Tomcat server.</p>
</li>
</ul>
<hr />
<h2 id="heading-version-manager-of-java">Version Manager of Java:</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727459439054/210671cf-2481-4603-8de3-ea3040bc968e.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>A <strong>version manager</strong> is a tool that allows us to easily install, manage, and <strong>switch between</strong> <strong>different versions</strong> of programming languages, frameworks, or tools.</p>
</li>
<li><p><strong>SDKMAN:</strong> A tool that helps install and manage multiple versions of Java SDKs and frameworks on our system.</p>
</li>
</ul>
<blockquote>
<p>SDKMAN supported platforms: Linux, macOS, and WSL (Windows Subsystem for Linux).</p>
</blockquote>
<p><strong>Example:</strong></p>
<ul>
<li><p>Install a specific version:</p>
<pre><code class="lang-bash">sdk install java 11.0.11-open
</code></pre>
</li>
<li><p>Switch to a different version:</p>
<pre><code class="lang-bash">sdk use java 8.0.292-open
</code></pre>
</li>
<li><p>List all available SDKs and their versions:</p>
<pre><code class="lang-bash">sdk list
</code></pre>
</li>
</ul>
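<p>If SDKMAN is not installed yet, the official installer is a single shell command that sets everything up under <code>~/.sdkman</code>:</p>
<pre><code class="lang-bash"># Download and run the SDKMAN installer
curl -s "https://get.sdkman.io" | bash

# Load SDKMAN into the current shell session
source "$HOME/.sdkman/bin/sdkman-init.sh"

# Verify the installation
sdk version
</code></pre>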
<hr />
<h2 id="heading-containerizing-the-java-application">Containerizing the Java application:</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727459712868/98f07540-090f-437d-a06f-82d89cc10811.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-jar-java-archive-1"><strong>JAR (Java Archive)</strong>:</h3>
<ol>
<li><p>Use a <strong>JDK</strong> image for the build stage, since it includes all the tools required for compiling and packaging Java applications.</p>
</li>
<li><p>Copy the application source files into the container.</p>
</li>
<li><p>Clean and build the Java application based on the <code>pom.xml</code>.</p>
</li>
<li><p>Use a <strong>JRE</strong> image for the runtime stage; the JAR is already built, so the application can run without the full JDK.</p>
</li>
<li><p>Set the working directory in the container to <code>/app</code>.</p>
</li>
<li><p>Copy the built JAR file from the build stage.</p>
</li>
<li><p><code>EXPOSE 8080</code> documents that the application listens on port 8080; it does not publish the port.</p>
</li>
<li><p>Run the application with <code>java -jar java.jar</code>.</p>
</li>
</ol>
<p><strong>Dockerfile:</strong></p>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> techiescamp/jdk-<span class="hljs-number">17</span>:<span class="hljs-number">1.0</span>.<span class="hljs-number">0</span> AS build

<span class="hljs-comment"># Copy the Java Application source code</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . /usr/src/</span>

<span class="hljs-comment"># Build Java Application</span>
<span class="hljs-keyword">RUN</span><span class="bash"> mvn -f /usr/src/pom.xml clean install -DskipTests</span>

<span class="hljs-keyword">FROM</span> techiescamp/jre-<span class="hljs-number">17</span>:<span class="hljs-number">1.0</span>.<span class="hljs-number">0</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment"># Copy the JAR file from the build stage into /app</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=build /usr/src/target/*.jar ./java.jar</span>

<span class="hljs-comment"># Expose the port the app runs on</span>
<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">8080</span>

<span class="hljs-comment"># Run the jar file</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"java"</span>, <span class="hljs-string">"-jar"</span>, <span class="hljs-string">"java.jar"</span>]</span>
</code></pre>
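<p>With this Dockerfile in the project root, building and running the image takes two commands (the image and container names below are just examples):</p>
<pre><code class="lang-bash"># Build the image from the Dockerfile in the current directory
docker build -t my-java-app:1.0 .

# Run it, publishing container port 8080 on the host
docker run -d -p 8080:8080 --name my-java-app my-java-app:1.0
</code></pre>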
<h3 id="heading-war-web-application-archive-1"><strong>WAR (Web Application Archive)</strong>:</h3>
<ol>
<li><p>Use a <strong>Tomcat</strong> image because it contains a pre-configured Tomcat server for running Java web applications.</p>
</li>
<li><p>Remove the default web applications that come pre-installed with the Tomcat image.</p>
</li>
<li><p>Copy the WAR file into Tomcat's <code>webapps</code> directory.</p>
</li>
<li><p><code>EXPOSE 8080</code> documents that the application listens on port 8080; it does not publish the port.</p>
</li>
<li><p>No <code>CMD</code> is needed, since the base image already defines one that starts Tomcat.</p>
</li>
</ol>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># Use Tomcat base image</span>
<span class="hljs-keyword">FROM</span> tomcat:<span class="hljs-number">9.0</span>

<span class="hljs-comment"># Remove the default web apps</span>
<span class="hljs-keyword">RUN</span><span class="bash"> rm -rf /usr/<span class="hljs-built_in">local</span>/tomcat/webapps/*</span>

<span class="hljs-comment"># Copy the WAR file into the Tomcat webapps directory</span>
<span class="hljs-keyword">COPY</span><span class="bash"> target/myapp.war /usr/<span class="hljs-built_in">local</span>/tomcat/webapps/</span>

<span class="hljs-comment"># Expose the port on which Tomcat listens</span>
<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">8080</span>

<span class="hljs-comment"># No CMD is needed since the base image already has a CMD instruction</span>
</code></pre>
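<p>The WAR image is built and run the same way; once the container is up, Tomcat deploys <code>myapp.war</code> and serves it under the <code>/myapp</code> context path (names below are examples):</p>
<pre><code class="lang-bash"># Build and run the Tomcat-based image
docker build -t my-web-app:1.0 .
docker run -d -p 8080:8080 --name my-web-app my-web-app:1.0

# The app should respond at the WAR's context path
curl http://localhost:8080/myapp/
</code></pre>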
<hr />
<p>Thanks for sticking with me until the end! 😊</p>
<p>I know it was a lengthy read, but I hope you found it valuable and worth your time. If you have any suggestions, ideas, or thoughts to add, feel free to drop them in the comments. 👇📩</p>
<p>Your feedback means a lot! Don’t forget to hit that like❤️ button to show your support, and stay tuned for more content. 🔔</p>
<p>⭐Thanks again!</p>
<p>#getintokube #getintokubeblogs</p>
]]></content:encoded></item></channel></rss>