DKProbes Software Engineering and productivity demystified humanely. 2024-07-13T00:00:00Z https://blog.dkpathak.in Dushyant Pathak Dependency Injection 2024-07-13T00:00:00Z https://blog.dkpathak.in/dependency-injection/ <p>Dependency Injection (DI) is a design pattern that allows an object to receive its dependencies from an external source rather than creating them itself. This promotes loose coupling and makes your code more modular, more testable, and less error prone.</p> <p>I recently had the opportunity to refactor the DI implementation at my workplace.</p> <h2 id="what-is-dependency-injection" tabindex="-1">What is Dependency Injection?<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#what-is-dependency-injection">#</a></h2> <p>Dependency Injection is a technique where the dependencies (objects) of a class are provided (injected) by an external framework (in our case, Spring), typically through constructors, setters, or fields. DI separates the creation of dependencies from the business logic, thereby adhering to the principle of Inversion of Control (IoC).</p> <h3 id="types-of-dependency-injection" tabindex="-1">Types of Dependency Injection<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#types-of-dependency-injection">#</a></h3> <ol> <li><strong>Constructor Injection</strong>: Dependencies are provided through a class constructor.</li> <li><strong>Setter Injection</strong>: Dependencies are provided through setter methods.</li> <li><strong>Field Injection</strong>: Dependencies are injected directly into the class fields using annotations.</li> </ol> <h2 id="problem-statement" tabindex="-1">Problem statement<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#problem-statement">#</a></h2> <p>Our first implementation relied on a tightly coupled instantiation of services inside a component:</p> <pre class="language-java"><code class="language-java">import java.util.List;

public class SubsidiaryService {
    String name;
    Integer partyId;
    List&lt;String&gt; ratings;

    public void updateSubsidiary(List&lt;String&gt; ratings) {
        this.ratings = ratings;
        // update ratings in db
    }
}

public class UpdateSub {
    private SubsidiaryService subsidiaryService;

    public UpdateSub() {
        // the dependency is created by hand, inside the component itself
        this.subsidiaryService = new SubsidiaryService();
    }

    public void processUpdates(List&lt;String&gt; ratings) {
        subsidiaryService.updateSubsidiary(ratings);
    }
}

// Main.java
public class Main {
    public static void main(String[] args) {
        UpdateSub updateSub = new UpdateSub();
        updateSub.processUpdates(List.of("AA+", "BB-", "CCC"));
    }
}</code></pre> <p>As visible here, UpdateSub creates and wires up its own SubsidiaryService object inside its constructor.</p> <h2 id="challenges-with-the-above-approach" tabindex="-1">Challenges with the above approach?<a class="tdbc-anchor" 
href="https://blog.dkpathak.in/dependency-injection/#challenges-with-the-above-approach">#</a></h2> <ol> <li> <p>Tight coupling : Since we create and instantiate the object of SubService manually, it is coupled to the business logic of the UpdateSub itself. Should SubService be made into an interface, we'd need to update the logic inside UpdateSub to instantiate the impl of the interface.</p> </li> <li> <p>Challenge during testing : When writing junits, we don't need to actually create database connections, rather, just mock them. However, in this case, when we call UpdateSub, it'd end up updating the database connection and it won't be possible to mock it</p> </li> </ol> <h2 id="stage-1-of-solution--field-injection" tabindex="-1">Stage 1 of solution : Field injection<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#stage-1-of-solution--field-injection">#</a></h2> <p>To cater to the above limitations, we decided to implement dependency injection. But we decided to go with a type called field injection.</p> <p>As the name suggests, we inject dependencies as a field of the class.</p> <p>In code, it looked something like this</p> <pre class="language-java"><code class="language-java"><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">UpdateSub</span> <span class="token punctuation">{</span><br /> <span class="token annotation punctuation">@Autowired</span><br /> <span class="token keyword">private</span> <span class="token class-name">SubsidiaryService</span> subsidiaryService<span class="token punctuation">;</span><br /> <span class="token keyword">public</span> <span class="token class-name">UpdateSub</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">processUpdates</span><span class="token punctuation">(</span><span class="token class-name">List</span><span class="token generics"><span class="token punctuation">&lt;</span><span class="token class-name">String</span><span class="token punctuation">></span></span> ratings<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> subsidiaryService<span class="token punctuation">.</span><span class="token function">updateSubsidiary</span><span class="token punctuation">(</span>ratings<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span></code></pre> <p>Just by writing the <code>@Autowired</code> annotation, we were able to inject the SubService dependency. 
<h3 id="stage-2--limitations-of-autowired" tabindex="-1">Stage 2: Limitations of @Autowired<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#stage-2--limitations-of-autowired">#</a></h3> <p>There are a couple of limitations with this approach.</p> <ol> <li>You cannot make the injected service immutable</li> </ol> <p>Since <code>@Autowired</code> injects the service only after UpdateSub has been instantiated, declaring the field as <code>final</code> throws a compile-time error. This is a challenge when we want to make sure our injections aren't overridden.</p> <ol start="2"> <li>Chances of an NPE</li> </ol> <p>Again, because injection happens after the enclosing class is instantiated, we ran into null pointer exceptions when code tried to access a method of a service that Spring hadn't yet injected.</p> <ol start="3"> <li>The partial accuracy of <code>@InjectMocks</code></li> </ol> <p>In the JUnit tests for UpdateSub, if we want to test UpdateSub, we need to mock SubsidiaryService and pass it along to UpdateSub. We achieved this by <code>@Mock</code>ing SubsidiaryService and <code>@InjectMock</code>ing the mock into UpdateSub, which relies on reflection to set a private field, and that didn't feel like the right approach.</p> <h3 id="solution---constructor-injection" tabindex="-1">Solution - Constructor injection<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#solution---constructor-injection">#</a></h3> <p>We therefore decided to move to constructor injection.</p> <p>As the name suggests, we inject services through the constructor, rather than as a field.</p> <p>Here is how it looked in code:</p> <pre class="language-java"><code class="language-java">public class UpdateSub {

    private final SubsidiaryService subsidiaryService;

    @Autowired
    public UpdateSub(SubsidiaryService subsidiaryService) {
        this.subsidiaryService = subsidiaryService;
    }

    public void processUpdates(List&lt;String&gt; ratings) {
        subsidiaryService.updateSubsidiary(ratings);
    }
}</code></pre> <p>Here, we autowire the constructor and receive the dependency as a parameter. The advantage is that the dependency is initialized when the UpdateSub object is created, which also lets us mark the field <code>final</code> and solves the null pointer concerns above.</p> <p>We can even do away with the explicit <code>@Autowired</code> annotation when there is just one constructor, as is the case above, since Spring resolves the dependency when it invokes the constructor.</p> <p>Considering all factors, this seems to us the most useful and recommended implementation of Dependency Injection.</p> <h2 id="advantages-of-dependency-injection" tabindex="-1">Advantages of Dependency Injection<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#advantages-of-dependency-injection">#</a></h2> <p>To summarize, following are the advantages of DI:</p> <ol> <li><strong>Loose Coupling</strong>: DI reduces the coupling between classes, making the system more flexible and easier to maintain.</li> <li><strong>Easier Testing</strong>: Dependencies can be easily mocked or stubbed during unit testing, leading to more isolated and reliable tests.</li> <li><strong>Improved Code Readability</strong>: DI promotes clean code practices by clearly defining dependencies and their relationships.</li> <li><strong>Enhanced Maintainability</strong>: Changes in dependencies require minimal changes in the dependent classes, making the system more maintainable.</li> <li><strong>Increased Reusability</strong>: DI encourages the use of interfaces and abstract classes, enhancing the reusability of components.</li> </ol> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#conclusion">#</a></h2> <p>Dependency Injection is a powerful design pattern that improves the modularity, testability, and maintainability of your code.</p> Google Cloud VPC 2024-06-15T00:00:00Z https://blog.dkpathak.in/google-cloud-vpc/ <p>In the world of cloud computing, a Virtual Private Cloud (VPC) is a private network within a public cloud that allows organizations to isolate their resources and manage them securely. Google Cloud Platform (GCP) offers a robust VPC service that provides scalable and flexible networking capabilities. In this blog, we'll delve into the concept of VPCs in GCP, explore their features, and guide you through setting up a VPC with screenshots from the GCP console.</p> <h2 id="what-is-a-virtual-private-cloud-vpc" tabindex="-1">What is a Virtual Private Cloud (VPC)?<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#what-is-a-virtual-private-cloud-vpc">#</a></h2> <p>A Virtual Private Cloud (VPC) is a logically isolated section of a public cloud where you can launch resources in a virtual network that you define. 
A VPC provides the ability to:</p> <ul> <li>Isolate resources within the cloud environment.</li> <li>Control network settings such as IP address ranges, subnets, and route tables.</li> <li>Secure communication between resources using firewalls and security groups.</li> </ul> <h2 id="key-features-of-gcp-vpc" tabindex="-1">Key Features of GCP VPC<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#key-features-of-gcp-vpc">#</a></h2> <ol> <li><strong>Global Scope</strong>: GCP VPCs are global resources that span all the regions, allowing you to create subnets in any region without creating multiple VPCs.</li> <li><strong>Flexible Subnetworks</strong>: Subnets can be defined per region, allowing for more granular control over your network.</li> <li><strong>Custom Routes and Firewalls</strong>: VPCs come with default route tables and firewall rules that you can customize to control traffic flow.</li> <li><strong>Private Google Access</strong>: VPCs can enable private access to Google services, ensuring secure communication without exposing traffic to the internet.</li> <li><strong>VPC Peering</strong>: Connect multiple VPCs together to share resources across different projects or organizations.</li> </ol> <h2 id="setting-up-a-vpc-in-gcp" tabindex="-1">Setting Up a VPC in GCP<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#setting-up-a-vpc-in-gcp">#</a></h2> <h3 id="step-1-create-a-vpc" tabindex="-1">Step 1: Create a VPC<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#step-1-create-a-vpc">#</a></h3> <ol> <li> <p><strong>Navigate to the VPC Network Section</strong>: <img src="https://cloud.google.com/static/images/getting-started/gcp-console.png" alt="VPC Network Section" /></p> </li> <li> <p><strong>Create a New VPC</strong>:</p> <ul> <li>Go to the GCP Console.</li> <li>Navigate to the &quot;VPC network&quot; section under the &quot;Networking&quot; category.</li> <li>Click on &quot;Create VPC network&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/create-vpc.png" alt="Create VPC" /></p> </li> <li> <p><strong>Configure the VPC</strong>:</p> <ul> <li>Provide a name for your VPC.</li> <li>Choose an automatic or custom subnet creation mode. 
For this example, select &quot;Custom&quot; to define subnets manually.</li> <li>Click &quot;Create&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/configure-vpc.png" alt="Configure VPC" /></p> </li> </ol> <h3 id="step-2-create-subnets" tabindex="-1">Step 2: Create Subnets<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#step-2-create-subnets">#</a></h3> <ol> <li> <p><strong>Add Subnet</strong>:</p> <ul> <li>In the &quot;Create a subnet&quot; section, provide a name for the subnet.</li> <li>Select the region where the subnet will be located.</li> <li>Specify the IP address range for the subnet (e.g., 10.0.0.0/24).</li> <li>Click &quot;Add subnet&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/add-subnet.png" alt="Add Subnet" /></p> </li> <li> <p><strong>Repeat for Additional Subnets</strong>:</p> <ul> <li>Add more subnets as needed for other regions (subnets in GCP are regional resources).</li> </ul> </li> </ol> <h3 id="step-3-configure-firewall-rules" tabindex="-1">Step 3: Configure Firewall Rules<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#step-3-configure-firewall-rules">#</a></h3> <ol> <li> <p><strong>Navigate to Firewall Rules</strong>:</p> <ul> <li>Under the &quot;VPC network&quot; section, click on &quot;Firewall rules&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/firewall-rules.png" alt="Firewall Rules" /></p> </li> <li> <p><strong>Create Firewall Rule</strong>:</p> <ul> <li>Click on &quot;Create firewall rule&quot;.</li> <li>Provide a name for the firewall rule.</li> <li>Define the targets, source IP ranges, and protocols/ports.</li> <li>Click &quot;Create&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/create-firewall-rule.png" alt="Create Firewall Rule" /></p> </li> </ol> <h3 id="step-4-enable-private-google-access" tabindex="-1">Step 4: Enable Private Google Access<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#step-4-enable-private-google-access">#</a></h3> <ol> <li> <p><strong>Private Google Access</strong>:</p> <ul> <li>Navigate to the &quot;Subnets&quot; section under the &quot;VPC network&quot;.</li> <li>Select a subnet and edit it.</li> <li>Enable &quot;Private Google Access&quot; to allow instances in the subnet to access Google APIs and services using internal IP addresses.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/private-google-access.png" alt="Private Google Access" /></p> </li> </ol>
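<p>If you prefer the command line, the same four steps can be sketched with the gcloud CLI. The network name, region, and IP ranges below are placeholders; substitute your own values:</p> <pre class="language-bash"><code class="language-bash"># Step 1: create a custom-mode VPC (no auto-created subnets)
gcloud compute networks create my-vpc --subnet-mode=custom

# Step 2: create a subnet in a region with an IP range
gcloud compute networks subnets create my-subnet \
  --network=my-vpc \
  --region=us-central1 \
  --range=10.0.0.0/24

# Step 3: create a firewall rule (here: allow SSH from anywhere)
gcloud compute firewall-rules create allow-ssh \
  --network=my-vpc \
  --allow=tcp:22 \
  --source-ranges=0.0.0.0/0

# Step 4: enable Private Google Access on the subnet
gcloud compute networks subnets update my-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access</code></pre>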
<h2 id="advantages-of-using-gcp-vpc" tabindex="-1">Advantages of Using GCP VPC<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#advantages-of-using-gcp-vpc">#</a></h2> <ol> <li><strong>Global Connectivity</strong>: GCP VPC allows you to connect resources across regions without needing multiple VPCs.</li> <li><strong>Scalability</strong>: Easily scale your network by adding subnets and configuring routes and firewalls as needed.</li> <li><strong>Security</strong>: Implement granular security controls using firewall rules, private access, and custom routes.</li> <li><strong>Flexibility</strong>: Create custom subnet configurations and manage IP address ranges to suit your specific needs.</li> <li><strong>Integration</strong>: Seamlessly integrate with other GCP services such as Cloud Interconnect, Cloud VPN, and more.</li> </ol> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#conclusion">#</a></h2> <p>Understanding and utilizing VPCs in Google Cloud Platform is essential for creating a secure and scalable cloud infrastructure. By leveraging GCP VPCs, you can isolate your resources, manage network configurations, and ensure secure communication within your cloud environment. The step-by-step guide provided in this blog, along with the screenshots from the GCP console, should help you get started with setting up and configuring your own VPC in GCP.</p> Understanding GraphQL Mutations 2024-07-08T00:00:00Z https://blog.dkpathak.in/understanding-graphql-mutations/ <p>GraphQL has revolutionized the way we interact with APIs by providing a flexible and efficient approach to querying and mutating data. While queries are used to fetch data, mutations are the means to modify it. In this blog, we'll dive deep into GraphQL mutations, explore the concept of transactional updates, and discuss how to implement rollbacks to ensure data integrity.</p> <h2 id="what-are-graphql-mutations" tabindex="-1">What are GraphQL Mutations?<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#what-are-graphql-mutations">#</a></h2> <p>GraphQL mutations are operations that allow you to create, update, or delete data. Unlike queries, which are read-only and can safely be repeated without changing server state, mutations are meant to cause side effects on the server.</p> <h3 id="basic-mutation-example" tabindex="-1">Basic Mutation Example<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#basic-mutation-example">#</a></h3> <p>Let's start with a simple example of a mutation to update a financial transaction:</p> <pre class="language-graphql"><code class="language-graphql">mutation UpdateTransaction($id: ID!, $amount: Float!, $status: String!) {
  updateTransaction(id: $id, amount: $amount, status: $status) {
    id
    amount
    status
  }
}</code></pre> <p>In this mutation, we pass the transaction ID, amount, and status as arguments to update the transaction details. The response includes the updated transaction information.</p>
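<p>To make this concrete, here is how such a mutation might be sent over HTTP with variables. This sketch assumes a GraphQL server listening at <code>/graphql</code> on localhost port 8080; adjust the URL to your setup:</p> <pre class="language-bash"><code class="language-bash">curl -X POST http://localhost:8080/graphql \
  -H "Content-Type: application/json" \
  -d '{
    "query": "mutation UpdateTransaction($id: ID!, $amount: Float!, $status: String!) { updateTransaction(id: $id, amount: $amount, status: $status) { id amount status } }",
    "variables": { "id": "1", "amount": 250.75, "status": "SETTLED" }
  }'</code></pre>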
<h3 id="implementing-mutations-in-a-server" tabindex="-1">Implementing Mutations in a Server<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#implementing-mutations-in-a-server">#</a></h3> <p>Here’s how you can implement the above mutation in a Java server using Spring Boot and Spring Data JPA:</p> <pre class="language-java"><code class="language-java">// Transaction.java - Entity Class
@Entity
public class Transaction {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private double amount;
    private String status;

    // Getters and Setters
}

// TransactionRepository.java - Repository Interface
public interface TransactionRepository extends JpaRepository&lt;Transaction, Long&gt; {}

// TransactionService.java - Service Class
@Service
public class TransactionService {
    @Autowired
    private TransactionRepository repository;

    @Transactional
    public Transaction updateTransaction(Long id, double amount, String status) {
        Transaction transaction = repository.findById(id)
            .orElseThrow(() -&gt; new ResourceNotFoundException("Transaction not found"));
        transaction.setAmount(amount);
        transaction.setStatus(status);
        return repository.save(transaction);
    }
}

// TransactionResolver.java - GraphQL Resolver
@Component
public class TransactionResolver implements GraphQLMutationResolver {
    @Autowired
    private TransactionService service;

    public Transaction updateTransaction(Long id, double amount, String status) {
        return service.updateTransaction(id, amount, status);
    }
}</code></pre> <pre class="language-graphql"><code class="language-graphql"># schema.graphqls - GraphQL Schema
type Transaction {
  id: ID!
  amount: Float!
  status: String!
}

type Mutation {
  updateTransaction(id: ID!, amount: Float!, status: String!): Transaction
}

type Query {
  transaction(id: ID!): Transaction
}</code></pre> <h2 id="transactional-updates" tabindex="-1">Transactional Updates<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#transactional-updates">#</a></h2> <p>In a production environment, mutations often need to be part of a transaction to ensure data consistency. A transaction is a sequence of operations performed as a single logical unit of work. If any operation within the transaction fails, the entire transaction is rolled back, leaving the database in a consistent state.</p>
<h3 id="transaction-example-with-spring-boot" tabindex="-1">Transaction Example with Spring Boot<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#transaction-example-with-spring-boot">#</a></h3> <p>Spring Boot provides strong support for transactions, making it easy to implement transactional updates in your GraphQL mutations:</p> <pre class="language-java"><code class="language-java">// TransactionService.java - Service Class with Transaction Management
@Service
public class TransactionService {
    @Autowired
    private TransactionRepository repository;

    @Transactional
    public Transaction updateTransaction(Long id, double amount, String status) {
        Transaction transaction = repository.findById(id)
            .orElseThrow(() -&gt; new ResourceNotFoundException("Transaction not found"));
        transaction.setAmount(amount);
        transaction.setStatus(status);
        return repository.save(transaction);
    }
}</code></pre> <p>In this example, we wrap the mutation in a transaction. If any error occurs during the update, the transaction is rolled back to ensure data consistency.</p>
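<p>One detail worth knowing: by default, Spring rolls a transaction back on unchecked exceptions (RuntimeException and Error) but not on checked exceptions. If your service throws checked exceptions, you can opt in explicitly with <code>rollbackFor</code>. A minimal sketch (the service name here is hypothetical):</p> <pre class="language-java"><code class="language-java">import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReconciliationService { // hypothetical service, for illustration only

    // By default @Transactional rolls back only on unchecked exceptions;
    // rollbackFor widens that to checked exceptions as well.
    @Transactional(rollbackFor = Exception.class)
    public void reconcile() throws Exception {
        // ... perform multiple repository updates here ...

        // If this checked exception escapes, the transaction is rolled back;
        // without rollbackFor, Spring would commit the earlier updates.
        throw new Exception("reconciliation failed");
    }
}</code></pre>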
<h2 id="rollbacks-in-graphql" tabindex="-1">Rollbacks in GraphQL<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#rollbacks-in-graphql">#</a></h2> <p>Rollbacks are crucial for maintaining data integrity, especially in scenarios where multiple mutations are involved. Implementing rollbacks in GraphQL involves using the transactions provided by the database or ORM.</p> <h3 id="handling-rollbacks" tabindex="-1">Handling Rollbacks<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#handling-rollbacks">#</a></h3> <p>To handle rollbacks, ensure that each mutation is wrapped in a transaction. Here’s a more complex example involving multiple updates:</p> <pre class="language-java"><code class="language-java">// TransactionService.java - Service Class with Complex Transaction Management
@Service
public class TransactionService {
    @Autowired
    private TransactionRepository repository;

    @Autowired
    private LogRepository logRepository; // repository for the Log entity

    @Transactional
    public Transaction updateTransactionAndLog(Long transactionId, double amount, String status, Long logId, String logMessage) {
        Transaction transaction = repository.findById(transactionId)
            .orElseThrow(() -&gt; new ResourceNotFoundException("Transaction not found"));
        transaction.setAmount(amount);
        transaction.setStatus(status);

        Log log = logRepository.findById(logId)
            .orElseThrow(() -&gt; new ResourceNotFoundException("Log not found"));
        log.setMessage(logMessage);

        repository.save(transaction);
        logRepository.save(log);

        return transaction;
    }
}

// TransactionResolver.java - GraphQL Resolver
@Component
public class TransactionResolver implements GraphQLMutationResolver {
    @Autowired
    private TransactionService service;

    public Transaction updateTransactionAndLog(Long transactionId, double amount, String status, Long logId, String logMessage) {
        return service.updateTransactionAndLog(transactionId, amount, status, logId, logMessage);
    }
}</code></pre> <pre class="language-graphql"><code class="language-graphql"># schema.graphqls - GraphQL Schema Update
type Mutation {
  updateTransactionAndLog(transactionId: ID!, amount: Float!, status: String!, logId: ID!, logMessage: String!): Transaction
}</code></pre> <p>In this example, we update both a transaction and a log entry within a single database transaction. If either update fails, the transaction is rolled back, ensuring that partial updates do not occur.</p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#conclusion">#</a></h2> <p>Understanding GraphQL mutations, transactional updates, and rollbacks is essential for building robust and reliable applications. By leveraging transactions, you can ensure data consistency and integrity, even in the face of errors. Implementing these practices in your GraphQL server can help you avoid common pitfalls and provide a better experience for your users.</p> The challenges of Database Migration 2024-07-08T00:00:00Z https://blog.dkpathak.in/the-challenges-of-database-migration/ <p>Database migration is a critical task that involves transferring data from one database to another. This process is often necessary when upgrading systems, consolidating databases, or changing database vendors. However, database migration comes with its own set of challenges and potential pitfalls. In this blog, I’ll share insights from our recent database migration project at my workplace, highlighting the challenges we faced and how we overcame them.</p> <h2 id="understanding-database-migration" tabindex="-1">Understanding Database Migration<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#understanding-database-migration">#</a></h2> <p>Database migration involves moving data from a source database to a target database. This can include migrating the database schema, data, and sometimes even the database engine. Successful migration requires careful planning, execution, and validation to ensure data integrity and minimal downtime.</p> <h2 id="common-challenges-and-pitfalls" tabindex="-1">Common Challenges and Pitfalls<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#common-challenges-and-pitfalls">#</a></h2> <h3 id="1-data-integrity-and-consistency" tabindex="-1">1. 
Data Integrity and Consistency<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#1-data-integrity-and-consistency">#</a></h3> <p><strong>Challenge</strong>: Ensuring that the data remains intact and consistent during and after the migration is paramount. Any loss or corruption of data can have significant consequences.</p> <p><strong>Pitfall</strong>: Inconsistent data formats, incompatible data types, and schema differences can lead to data integrity issues.</p> <p><strong>Solution</strong>: Thoroughly analyze the source and target databases to identify and address any discrepancies. Use data validation techniques and tools to verify data integrity before, during, and after migration.</p> <pre class="language-java"><code class="language-java"><span class="token comment">// Example of data validation in Java</span><br /><span class="token keyword">public</span> <span class="token keyword">boolean</span> <span class="token function">validateData</span><span class="token punctuation">(</span><span class="token class-name">String</span> sourceData<span class="token punctuation">,</span> <span class="token class-name">String</span> targetData<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">return</span> sourceData<span class="token punctuation">.</span><span class="token function">equals</span><span class="token punctuation">(</span>targetData<span class="token punctuation">)</span><span class="token punctuation">;</span><br /><span class="token punctuation">}</span></code></pre> <h3 id="2-downtime-management" tabindex="-1">2. Downtime Management<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#2-downtime-management">#</a></h3> <p><strong>Challenge</strong>: Minimizing downtime during migration is crucial, especially for applications that require high availability.</p> <p><strong>Pitfall</strong>: Prolonged downtime can disrupt business operations and lead to customer dissatisfaction.</p> <p><strong>Solution</strong>: Plan the migration during off-peak hours and implement a phased or incremental migration approach. Use techniques like database replication and shadow databases to minimize downtime.</p> <pre class="language-sql"><code class="language-sql"><span class="token comment">-- Example of using replication to minimize downtime</span><br /><span class="token keyword">CREATE</span> PUBLICATION my_publication <span class="token keyword">FOR</span> <span class="token keyword">ALL</span> <span class="token keyword">TABLES</span><span class="token punctuation">;</span><br /><span class="token keyword">CREATE</span> SUBSCRIPTION my_subscription CONNECTION <span class="token string">'dbname=mydb'</span> PUBLICATION my_publication<span class="token punctuation">;</span></code></pre> <h3 id="3-performance-issues" tabindex="-1">3. Performance Issues<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#3-performance-issues">#</a></h3> <p><strong>Challenge</strong>: The performance of the target database can be affected due to differences in indexing, query optimization, and hardware configurations.</p> <p><strong>Pitfall</strong>: Poor performance can lead to slow application response times and increased resource consumption.</p> <p><strong>Solution</strong>: Optimize the target database for performance by analyzing and tuning queries, indexing, and database configurations. 
Perform load testing to identify and address performance bottlenecks.</p> <pre class="language-sql"><code class="language-sql">-- Example of indexing in SQL
CREATE INDEX idx_user_name ON users (name);</code></pre> <h3 id="4-compatibility-issues" tabindex="-1">4. Compatibility Issues<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#4-compatibility-issues">#</a></h3> <p><strong>Challenge</strong>: Migrating between different database systems can lead to compatibility issues with SQL syntax, stored procedures, and database features.</p> <p><strong>Pitfall</strong>: Incompatible SQL queries and database functions can cause errors and application failures.</p> <p><strong>Solution</strong>: Rewrite SQL queries and stored procedures to be compatible with the target database. Use database migration tools that offer compatibility checks and automated code conversion.</p> <pre class="language-sql"><code class="language-sql">-- Example of converting SQL syntax for compatibility
-- Source (MySQL)
SELECT * FROM users WHERE DATE(created_at) = CURDATE();

-- Target (PostgreSQL)
SELECT * FROM users WHERE created_at::date = CURRENT_DATE;</code></pre> <h3 id="5-data-volume" tabindex="-1">5. Data Volume<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#5-data-volume">#</a></h3> <p><strong>Challenge</strong>: Migrating large volumes of data can be time-consuming and resource-intensive.</p> <p><strong>Pitfall</strong>: Insufficient planning for data volume can lead to extended migration times and potential failures.</p> <p><strong>Solution</strong>: Use data chunking and parallel processing to handle large volumes of data efficiently. 
Consider using cloud-based migration services that offer scalability.</p> <pre class="language-java"><code class="language-java">// Example of data chunking in Java
public void migrateDataInChunks(int chunkSize) {
    for (int offset = 0; offset &lt; totalDataSize; offset += chunkSize) {
        // read rows [offset, offset + chunkSize) from the source
        // and write them to the target before moving on
    }
}</code></pre> <h3 id="6-security-concerns" tabindex="-1">6. Security Concerns<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#6-security-concerns">#</a></h3> <p><strong>Challenge</strong>: Ensuring the security of data during migration is critical, especially for sensitive and confidential information.</p> <p><strong>Pitfall</strong>: Data breaches and unauthorized access during migration can have severe consequences.</p> <p><strong>Solution</strong>: Implement strong encryption and access control measures during migration. Use secure connections and data masking techniques to protect sensitive information.</p> <pre class="language-java"><code class="language-java">// Example of encrypting data during migration
public String encryptData(String data) {
    // apply your cipher of choice here, e.g. AES via javax.crypto
    String encryptedData = applyCipher(data); // placeholder for the actual cipher call
    return encryptedData;
}</code></pre> <h3 id="7-testing-and-validation" tabindex="-1">7. Testing and Validation<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#7-testing-and-validation">#</a></h3> <p><strong>Challenge</strong>: Thorough testing and validation are essential to ensure the success of the migration.</p> <p><strong>Pitfall</strong>: Inadequate testing can lead to undetected issues that surface post-migration.</p> <p><strong>Solution</strong>: Develop a comprehensive testing plan that includes unit tests, integration tests, and user acceptance tests. 
Validate the migrated data and application functionality to ensure everything works as expected.</p> <pre class="language-java"><code class="language-java"><span class="token comment">// Example of unit testing in Java</span><br /><span class="token annotation punctuation">@Test</span><br /><span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">testMigration</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token class-name">String</span> sourceData <span class="token operator">=</span> <span class="token string">"source"</span><span class="token punctuation">;</span><br /> <span class="token class-name">String</span> targetData <span class="token operator">=</span> <span class="token string">"target"</span><span class="token punctuation">;</span><br /> <span class="token function">assertTrue</span><span class="token punctuation">(</span><span class="token function">validateData</span><span class="token punctuation">(</span>sourceData<span class="token punctuation">,</span> targetData<span class="token punctuation">)</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><span class="token punctuation">}</span></code></pre> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#conclusion">#</a></h2> <p>Database migration is a complex and challenging process that requires careful planning, execution, and validation. By understanding and addressing the common challenges and pitfalls, you can ensure a smooth and successful migration. Our recent migration project at my workplace taught us valuable lessons that can help others navigate this intricate process.</p> Grafana 2024-07-08T00:00:00Z https://blog.dkpathak.in/grafana/ <p>Observability has become a crucial aspect of modern software systems. It enables developers and operations teams to understand the internal state of a system based on the data it produces. At my workplace, we recently implemented Grafana to enhance our observability capabilities. This blog will guide you through the basics of observability, why we chose Grafana, and how we implemented it to gain deeper insights into our applications.</p> <h2 id="what-is-observability" tabindex="-1">What is Observability?<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#what-is-observability">#</a></h2> <p>Observability refers to the ability to measure the internal states of a system by examining its outputs. The three key pillars of observability are:</p> <ol> <li><strong>Metrics</strong>: Quantitative data about the system's performance.</li> <li><strong>Logs</strong>: Detailed records of events that occur within the system.</li> <li><strong>Traces</strong>: A record of the journey of a request through the system.</li> </ol> <h2 id="why-grafana" tabindex="-1">Why Grafana?<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#why-grafana">#</a></h2> <p>Grafana is a powerful open-source platform for monitoring and observability. It allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. 
Here's why we chose Grafana:</p> <ul> <li><strong>Extensibility</strong>: Grafana supports a wide range of data sources and plugins.</li> <li><strong>Customizable Dashboards</strong>: Create interactive and visually appealing dashboards.</li> <li><strong>Alerting</strong>: Set up alert rules to notify you when certain conditions are met.</li> <li><strong>Ease of Use</strong>: User-friendly interface for setting up and managing observability.</li> </ul> <h2 id="setting-up-grafana" tabindex="-1">Setting Up Grafana<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#setting-up-grafana">#</a></h2> <h3 id="step-1-install-grafana" tabindex="-1">Step 1: Install Grafana<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-1-install-grafana">#</a></h3> <p>First, we need to install Grafana. You can install Grafana on various platforms. Here’s an example of installing Grafana on Ubuntu:</p> <pre class="language-bash"><code class="language-bash"><span class="token function">sudo</span> <span class="token function">apt-get</span> <span class="token function">install</span> -y software-properties-common<br /><span class="token function">sudo</span> add-apt-repository <span class="token string">"deb https://packages.grafana.com/oss/deb stable main"</span><br /><span class="token function">wget</span> -q -O - https://packages.grafana.com/gpg.key <span class="token operator">|</span> <span class="token function">sudo</span> apt-key <span class="token function">add</span> -<br /><span class="token function">sudo</span> <span class="token function">apt-get</span> update<br /><span class="token function">sudo</span> <span class="token function">apt-get</span> <span class="token function">install</span> grafana<br /><span class="token function">sudo</span> systemctl start grafana-server<br /><span class="token function">sudo</span> systemctl <span class="token builtin class-name">enable</span> grafana-server</code></pre> <h3 id="step-2-configure-data-sources" tabindex="-1">Step 2: Configure Data Sources<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-2-configure-data-sources">#</a></h3> <p>Once Grafana is installed, configure the data sources. Grafana supports various data sources like Prometheus, InfluxDB, Elasticsearch, etc. 
In our setup, we used Prometheus.</p> <ol> <li>Navigate to the Grafana UI (http://localhost:3000).</li> <li>Log in with the default credentials (username: <code>admin</code>, password: <code>admin</code>).</li> <li>Go to <strong>Configuration &gt; Data Sources</strong>.</li> <li>Add Prometheus as a data source by providing the URL of your Prometheus server.</li> </ol> <p><img src="https://grafana.com/docs/grafana/latest/getting-started/getting-started-prometheus/add-data-source-prometheus.png" alt="Add Data Source" /></p> <h3 id="step-3-create-dashboards" tabindex="-1">Step 3: Create Dashboards<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-3-create-dashboards">#</a></h3> <p>Next, we create dashboards to visualize our metrics.</p> <ol> <li>Go to <strong>Create &gt; Dashboard</strong>.</li> <li>Add a new panel and configure the query to fetch data from Prometheus.</li> <li>Customize the visualization type (e.g., Graph, Gauge, Heatmap) and panel settings.</li> </ol> <p>Here’s an example query to display CPU usage:</p> <pre class="language-sql"><code class="language-sql">rate<span class="token punctuation">(</span>node_cpu_seconds_total{job<span class="token operator">=</span><span class="token string">"node_exporter"</span><span class="token punctuation">,</span><span class="token keyword">mode</span><span class="token operator">=</span><span class="token string">"idle"</span>}<span class="token punctuation">[</span><span class="token number">5</span>m<span class="token punctuation">]</span><span class="token punctuation">)</span></code></pre> <p><img src="https://grafana.com/static/assets/img/features/dashboard/dashboard_overview_light.png" alt="Grafana Dashboard" /></p> <h3 id="step-4-set-up-alerts" tabindex="-1">Step 4: Set Up Alerts<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-4-set-up-alerts">#</a></h3> <p>Alerts are crucial for proactive monitoring. In Grafana, you can set up alerts based on specific conditions.</p> <ol> <li>In the panel editor, go to the <strong>Alert</strong> tab.</li> <li>Create a new alert rule with conditions (e.g., CPU usage &gt; 80%).</li> <li>Configure notification channels (e.g., email, Slack).</li> </ol> <p>Here's a sample configuration for setting up an alert:</p> <pre class="language-yaml"><code class="language-yaml"><span class="token key atrule">alerting</span><span class="token punctuation">:</span><br /> <span class="token key atrule">alertmanagers</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> <span class="token key atrule">static_configs</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> <span class="token key atrule">targets</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> <span class="token string">'localhost:9093'</span></code></pre> <h3 id="step-5-explore-logs-and-traces" tabindex="-1">Step 5: Explore Logs and Traces<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-5-explore-logs-and-traces">#</a></h3> <p>Grafana also supports log aggregation and tracing. 
Integrate with Loki for logs and Tempo for tracing to gain a comprehensive view of your system's behavior.</p> <pre class="language-yaml"><code class="language-yaml">logcli query '<span class="token punctuation">{</span>job="varlogs"<span class="token punctuation">}</span> <span class="token punctuation">|</span> logfmt'<br />tempo query 'span_id=12345'</code></pre> <h2 id="advantages-of-using-grafana" tabindex="-1">Advantages of Using Grafana<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#advantages-of-using-grafana">#</a></h2> <ol> <li><strong>Unified View</strong>: Grafana provides a single-pane-of-glass view of your metrics, logs, and traces.</li> <li><strong>Proactive Monitoring</strong>: With alerting, you can detect and respond to issues before they impact users.</li> <li><strong>Historical Analysis</strong>: Grafana allows you to explore historical data, aiding in troubleshooting and capacity planning.</li> <li><strong>Customization</strong>: Tailor dashboards and visualizations to meet specific needs.</li> </ol> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#conclusion">#</a></h2> <p>Implementing Grafana at my workplace has significantly enhanced our observability capabilities. We can now monitor our systems in real-time, set up alerts for critical conditions, and analyze logs and traces for in-depth insights. Grafana’s extensibility and ease of use make it an excellent choice for any organization looking to improve its observability practices.</p> Implementing the Command Design Pattern 2024-07-10T00:00:00Z https://blog.dkpathak.in/implementing-the-command-design-pattern/ <p>At my workplace, we often deal with complex business logic that involves multiple operations. To maintain a clean and maintainable codebase, we decided to implement the Command Design Pattern. This pattern not only improved our code structure but also enhanced its extensibility and scalability. In this blog, I'll walk you through the Command Design Pattern, compare code written without and with this pattern, and discuss its advantages using a real-world example of updating organization details.</p> <h2 id="what-is-the-command-design-pattern" tabindex="-1">What is the Command Design Pattern?<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#what-is-the-command-design-pattern">#</a></h2> <p>The Command Design Pattern is a behavioral design pattern that turns a request into a stand-alone object that contains all information about the request. This transformation allows us to parameterize methods with different requests, delay or queue a request's execution, and support undoable operations.</p> <h2 id="code-without-command-pattern" tabindex="-1">Code Without Command Pattern<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#code-without-command-pattern">#</a></h2> <p>Let's consider a simple example where we need to update the details of an organization. 
Here's how the code might look without using the Command Pattern:</p> <pre class="language-java"><code class="language-java"><span class="token comment">// Organization class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Organization</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> name<span class="token punctuation">;</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> address<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">Organization</span><span class="token punctuation">(</span><span class="token class-name">String</span> name<span class="token punctuation">,</span> <span class="token class-name">String</span> address<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>name <span class="token operator">=</span> name<span class="token punctuation">;</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>address <span class="token operator">=</span> address<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateName</span><span class="token punctuation">(</span><span class="token class-name">String</span> newName<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>name <span class="token operator">=</span> newName<span class="token punctuation">;</span><br /> <span class="token class-name">System</span><span class="token punctuation">.</span>out<span class="token punctuation">.</span><span class="token function">println</span><span class="token punctuation">(</span><span class="token string">"Organization name updated to: "</span> <span class="token operator">+</span> newName<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateAddress</span><span class="token punctuation">(</span><span class="token class-name">String</span> newAddress<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>address <span class="token operator">=</span> newAddress<span class="token punctuation">;</span><br /> <span class="token class-name">System</span><span class="token punctuation">.</span>out<span class="token punctuation">.</span><span class="token function">println</span><span class="token punctuation">(</span><span class="token string">"Organization address updated to: "</span> <span class="token operator">+</span> newAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token comment">// Getters for name and address</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// OrganizationService class</span><br /><span class="token 
keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">OrganizationService</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">Organization</span> organization<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">OrganizationService</span><span class="token punctuation">(</span><span class="token class-name">Organization</span> organization<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>organization <span class="token operator">=</span> organization<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateDetails</span><span class="token punctuation">(</span><span class="token class-name">String</span> newName<span class="token punctuation">,</span> <span class="token class-name">String</span> newAddress<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> organization<span class="token punctuation">.</span><span class="token function">updateName</span><span class="token punctuation">(</span>newName<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> organization<span class="token punctuation">.</span><span class="token function">updateAddress</span><span class="token punctuation">(</span>newAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// Main class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Main</span> <span class="token punctuation">{</span><br /> <span class="token keyword">public</span> <span class="token keyword">static</span> <span class="token keyword">void</span> <span class="token function">main</span><span class="token punctuation">(</span><span class="token class-name">String</span><span class="token punctuation">[</span><span class="token punctuation">]</span> args<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token class-name">Organization</span> org <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">Organization</span><span class="token punctuation">(</span><span class="token string">"Old Name"</span><span class="token punctuation">,</span> <span class="token string">"Old Address"</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token class-name">OrganizationService</span> orgService <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">OrganizationService</span><span class="token punctuation">(</span>org<span class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> orgService<span class="token punctuation">.</span><span class="token function">updateDetails</span><span class="token punctuation">(</span><span class="token string">"New Name"</span><span class="token punctuation">,</span> <span class="token string">"New Address"</span><span class="token 
punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span></code></pre> <p>In this implementation, the <code>OrganizationService</code> class directly depends on the <code>Organization</code> class and its specific methods. This tight coupling makes the code difficult to extend and maintain, especially when new operations are introduced.</p> <h2 id="code-with-command-pattern" tabindex="-1">Code With Command Pattern<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#code-with-command-pattern">#</a></h2> <p>By implementing the Command Pattern, we can decouple the invoker (organization service) from the receiver (organization) and encapsulate the request as an object. Here's how we can refactor the above code:</p> <pre class="language-java"><code class="language-java"><span class="token comment">// Command interface</span><br /><span class="token keyword">public</span> <span class="token keyword">interface</span> <span class="token class-name">Command</span> <span class="token punctuation">{</span><br /> <span class="token keyword">void</span> <span class="token function">execute</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// Organization class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Organization</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> name<span class="token punctuation">;</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> address<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">Organization</span><span class="token punctuation">(</span><span class="token class-name">String</span> name<span class="token punctuation">,</span> <span class="token class-name">String</span> address<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>name <span class="token operator">=</span> name<span class="token punctuation">;</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>address <span class="token operator">=</span> address<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateName</span><span class="token punctuation">(</span><span class="token class-name">String</span> newName<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>name <span class="token operator">=</span> newName<span class="token punctuation">;</span><br /> <span class="token class-name">System</span><span class="token punctuation">.</span>out<span class="token punctuation">.</span><span class="token function">println</span><span class="token punctuation">(</span><span class="token string">"Organization name updated to: "</span> <span class="token operator">+</span> newName<span class="token 
punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateAddress</span><span class="token punctuation">(</span><span class="token class-name">String</span> newAddress<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>address <span class="token operator">=</span> newAddress<span class="token punctuation">;</span><br /> <span class="token class-name">System</span><span class="token punctuation">.</span>out<span class="token punctuation">.</span><span class="token function">println</span><span class="token punctuation">(</span><span class="token string">"Organization address updated to: "</span> <span class="token operator">+</span> newAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token comment">// Getters for name and address</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// Concrete Command classes</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">UpdateNameCommand</span> <span class="token keyword">implements</span> <span class="token class-name">Command</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">Organization</span> organization<span class="token punctuation">;</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> newName<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">UpdateNameCommand</span><span class="token punctuation">(</span><span class="token class-name">Organization</span> organization<span class="token punctuation">,</span> <span class="token class-name">String</span> newName<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>organization <span class="token operator">=</span> organization<span class="token punctuation">;</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>newName <span class="token operator">=</span> newName<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token annotation punctuation">@Override</span><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">execute</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> organization<span class="token punctuation">.</span><span class="token function">updateName</span><span class="token punctuation">(</span>newName<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">UpdateAddressCommand</span> <span class="token keyword">implements</span> <span class="token class-name">Command</span> <span 
class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">Organization</span> organization<span class="token punctuation">;</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> newAddress<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">UpdateAddressCommand</span><span class="token punctuation">(</span><span class="token class-name">Organization</span> organization<span class="token punctuation">,</span> <span class="token class-name">String</span> newAddress<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>organization <span class="token operator">=</span> organization<span class="token punctuation">;</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>newAddress <span class="token operator">=</span> newAddress<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token annotation punctuation">@Override</span><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">execute</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> organization<span class="token punctuation">.</span><span class="token function">updateAddress</span><span class="token punctuation">(</span>newAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// OrganizationService class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">OrganizationService</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">Command</span> command<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">setCommand</span><span class="token punctuation">(</span><span class="token class-name">Command</span> command<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>command <span class="token operator">=</span> command<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">executeCommand</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> command<span class="token punctuation">.</span><span class="token function">execute</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// Main class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Main</span> <span class="token 
punctuation">{</span><br /> <span class="token keyword">public</span> <span class="token keyword">static</span> <span class="token keyword">void</span> <span class="token function">main</span><span class="token punctuation">(</span><span class="token class-name">String</span><span class="token punctuation">[</span><span class="token punctuation">]</span> args<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token class-name">Organization</span> org <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">Organization</span><span class="token punctuation">(</span><span class="token string">"Old Name"</span><span class="token punctuation">,</span> <span class="token string">"Old Address"</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> <span class="token class-name">Command</span> updateName <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">UpdateNameCommand</span><span class="token punctuation">(</span>org<span class="token punctuation">,</span> <span class="token string">"New Name"</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token class-name">Command</span> updateAddress <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">UpdateAddressCommand</span><span class="token punctuation">(</span>org<span class="token punctuation">,</span> <span class="token string">"New Address"</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> <span class="token class-name">OrganizationService</span> orgService <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">OrganizationService</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> orgService<span class="token punctuation">.</span><span class="token function">setCommand</span><span class="token punctuation">(</span>updateName<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> orgService<span class="token punctuation">.</span><span class="token function">executeCommand</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> orgService<span class="token punctuation">.</span><span class="token function">setCommand</span><span class="token punctuation">(</span>updateAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> orgService<span class="token punctuation">.</span><span class="token function">executeCommand</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span></code></pre> <p>In this refactored implementation, we have introduced a <code>Command</code> interface and concrete command classes (<code>UpdateNameCommand</code> and <code>UpdateAddressCommand</code>). 
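<p>Before looking at the service, note that undo support falls out of this structure almost for free. A hedged sketch (not part of the code above): extend the interface with an <code>unexecute</code> method and let each command capture the state it replaces, using the getters mentioned earlier:</p> <pre><code>// Hypothetical extension of the Command interface shown above
public interface UndoableCommand extends Command {
    void unexecute();
}

public class UndoableUpdateNameCommand implements UndoableCommand {
    private Organization organization;
    private String newName;
    private String previousName; // captured when execute() runs

    public UndoableUpdateNameCommand(Organization organization, String newName) {
        this.organization = organization;
        this.newName = newName;
    }

    @Override
    public void execute() {
        previousName = organization.getName(); // assumes the getter noted above
        organization.updateName(newName);
    }

    @Override
    public void unexecute() {
        organization.updateName(previousName); // restore the old name
    }
}
</code></pre>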
The <code>OrganizationService</code> class now uses a command object to perform operations, which decouples it from the specific implementations of those operations.</p> <h2 id="advantages-of-using-the-command-pattern" tabindex="-1">Advantages of Using the Command Pattern<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#advantages-of-using-the-command-pattern">#</a></h2> <ol> <li> <p><strong>Decoupling of Invoker and Receiver</strong>: The invoker (organization service) does not need to know the specifics of the receiver (organization). It only interacts with the command interface, making the code more flexible and easier to extend.</p> </li> <li> <p><strong>Extensibility</strong>: Adding new commands is straightforward. We just need to implement a new command class without modifying existing code.</p> </li> <li> <p><strong>Support for Undo Operations</strong>: By storing executed commands, we can implement undo functionality. Each command can have an <code>unexecute</code> method to reverse its action, as sketched above.</p> </li> <li> <p><strong>Queue and Log Requests</strong>: Commands can be queued or logged for future execution, enabling features like request logging, job scheduling, and task retry mechanisms.</p> </li> <li> <p><strong>Promotes Reusability</strong>: Common commands can be reused across different parts of the application, reducing code duplication.</p> </li> </ol> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#conclusion">#</a></h2> <p>Implementing the Command Design Pattern at my workplace has significantly improved our code's maintainability and extensibility. By decoupling the invoker from the receiver and encapsulating requests as objects, we have made our codebase more flexible and easier to manage. If you're dealing with complex operations in your projects, consider using the Command Pattern to achieve a cleaner and more modular design.</p> <p><img src="https://refactoring.guru/images/patterns/diagrams/command/structure.png" alt="Command Design Pattern" /></p> Can database consistency, exception handling and Angular popups come together 2024-06-24T00:00:00Z https://blog.dkpathak.in/can-database-consistency-exception-handling-and-angular-popups-come-together/ <p>Having prided myself on my full stack skills, I had my bona fides tested by a rather interesting and critical problem at work, one that required me to understand Oracle SQL database writes, GraphQL mutations, Java collections, runtime exception handling, and RxJS - pretty much the entire full stack.</p> <h4>System architecture -</h4> <p>My system writes updates into an Oracle SQL DB, via GraphQL. This GraphQL mutation logic is called from a Java service, driven by the Command and Builder patterns, and if it fails, it should be handled appropriately through rollback mechanisms.</p> <p>Mutations (write operations) are batched together to ensure readability and consistency.</p> <p>An Angular MVC takes care of the UI end.</p> <h4>Context -</h4> <p>My system works as a group of parties. There's a root party A, and subordinate parties A1, A2 and so on. Changes to A propagate to all its subordinates - analogous to inheritance in Java.</p> <h4>The problem -</h4> <p>The user updated a single subordinate party (A4), and all the rest of the parties started messing up. In fact, my framework was built to handle corruption so well that data corruption with one party wouldn't affect the rest.
Yet here we were, staring at exactly that problem.</p> <h4>The UX -</h4> <p>The user saw a big exception thrown onto the screen in the form of a popup, and was transfixed - he hadn't touched anything except to load the application. How can something break when you haven't even started working on it?</p> <h4>The analysis -</h4> <p>Common sense would've had me check the origination of the request and see what went missing. However, software engineers often don't go by common sense. So, I ended up checking logs for the approval of the last request (A4).</p> <p>Why do that? Since ALL requests on other parties were corrupted, I figured there'd be a common point of failure between them. This couldn't have happened at initiation - which is an operation on each party individually - but at updation, where common attributes might get changed.</p> <p>Next up - what could've changed? And how did that happen?</p> <h4>The code intricacy -</h4> <p>We implemented the Command, Builder and Factory patterns in our Spring Boot - GraphQL codebase, to follow a highly modularized and extensible approach. We first build a structure, run what we call 'pre-mutation commands', followed by the actual update of the party (the actual mutation), and finally 'post-mutation commands'.</p> <p>Codewise,</p> <pre><code>PartyUpdateBuilder = party.addPartyId()
    .addPartyName()
    .addPartyStatus()
    .build();
</code></pre> <p>Followed by</p> <pre><code>PartyUpdateCommandBuilder.execute();
</code></pre> <p>which actually writes the data into the tables.</p> <p>A mutation is GraphQL's version of an update query - it updates the database. It looks something like this:</p> <pre><code>mutation ($params: Params!, $partyId: PartyId!) {
  updateData(params: $params, partyId: $partyId) {
    status
    log
    created
  }
}

{
  variables: {
    params: {
      &quot;partyId&quot;: 1234,
      &quot;partyName&quot;: &quot;ABC&quot;
    }
  }
}
</code></pre> <h4>Implications of the structure</h4> <p>The framework we created ensured that even if multiple tables were being updated via our mutation, we remained fully ACID compliant by batching the entire update in a single mutation that was run centrally.</p> <p>Services written by different developers only need to call this central source, and all fields would be updated.</p> <p>While the mutation is running, there is an inbuilt system of preemptive locking, to avoid stale data being overwritten in the microseconds it takes to update the data.</p> <p>Additionally, the entire operation is maintained as a business change log in a central database, for tracking.</p> <p>Our update strategy was inspired by <a href="https://github.com/graphile/crystal/issues/944">this</a>.</p> <p>Now, in such an apparently 'fail safe' framework, there was corruption happening en masse. The question was how?</p> <p>Stay tuned for part 2</p> Stopgapping 2023-09-02T00:00:00Z https://blog.dkpathak.in/stopgapping/ <p>Stopgapping as a strategy is rather underrated. Oftentimes, when a severely life-changing event occurs, we can't immediately find a path to walk on. More often than not, we're indecisive, and torn. Indecision is one of the worst states one can be in. And in this state, a major choice might tend to cause regret and fear.</p> <p>In such a case, we stopgap - we find a minimalist set of steps so that we are not arresting all momentum, yet at the same time, give our body the time to get used to the new reality.</p> <p>Let's consider an example. You were working at a job and one fine day, realize you've been laid off. Your world turns upside down.
A stable income you'd planned on for years vanishes instantly. And most people don't look for jobs immediately on getting laid off. In this time, a pragmatic approach would be to do activities that can contribute to the overall financial and mental stability of your life, and at the same time, not place you under the duress of rushing. In this case, you can choose to follow a daily routine of job shortlisting, meditation, and a set number of topics you upskill on each day. None of these are large enough to cause a major paradigm shift in your thinking, yet considerable enough to give you momentum at a time when you fear you've come to a standstill.</p> <p>When I'd ended a serious relationship, my goals and priorities all went up in smoke. I chose to follow daily rituals of upskilling, finding a new hobby, and meditation to ensure that I was moving on and ahead, yet at the same time, not making a hard choice I'd regret.</p> <p>Not all major decisions have to be that way - some, you just need time on, for new circumstances to pop through, and the best thing you can do, in the moment, is keep going, without regretting.</p> Monorepo architecture 2023-09-02T00:00:00Z https://blog.dkpathak.in/monorepo-architecture/ <p>Since the advent of the microservices concept, most people are fans of distributed architectures. You don't want single points of failure, you want autonomy across teams, and you want to customize the tech stack per service.</p> <p>This concept has propagated into domains other than services too.</p> <p>At work, we had three different repos catering to one single application - two libraries, and one repo for the actual configuration which just mapped components from the libraries onto the actual Angular app.</p> <p>The idea behind this was, primarily, separation of concerns. Repo A included fundamental components and styling, say, dialog boxes, text editors, toasts and their corresponding styles. Repo B included actual application components organized on the basis of business logic and their occurrence on the application. Both A and B were built, deployed and their prod build versions injected as npm packages into C.</p> <p>Now, for an application as large and diverse as ours, it kinda made sense. If we just had to make a config change, we wouldn't really want to rebuild fundamental CSS all over again. Different teams could own these repos differently and you could get the latest working version of any of these repos by picking the last build version from the common artifactory.</p> <p>Now, however, come the pitfalls:</p> <ol> <li> <p>Cumbersome fixing and testing: If I have to make a fundamental CSS change in repo A, I need to fix it in A, test it in B, and then finally in C. I essentially have to set up and run all three repos for a minor CSS change. Because the repos were owned separately, there's a fair chance they'd have their separate requirements in terms of setup, dependencies and run commands. How much of an overhead for such a little change? Wouldn't it be better to just have one repo, fix something, and voila, see the change?</p> </li> <li> <p>Inconsistent design: If you want to make a CSS change to a component, do you make it in A or B? It's a subjective question and varies by use case, so most people just did what they felt was right, meaning half the changes were in one repo, half in another. One actual example - our dialog boxes were styled from Repo A in two of our application tabs and from Repo B in the remaining 3.
Who'd remember where the styles are coming from then?</p> </li> <li> <p>Versioning: Some change works on the x.1 version of Repo A, the y.2 version of B and the z.3 version of C. Now, every time, we have to check this version compatibility. Changing one of the versions could adversely impact the rest.</p> </li> </ol> <p>-- work in progress--</p> Action method 2021-12-24T00:00:00Z https://blog.dkpathak.in/action-method/ <p>The Action method encourages you to look at everything in your life as a project, with a set of actionable items, organized by priority, and associated references. Completion of all the action items will signify completion of the event/project. The advantage of this method is that converting seemingly subjective items like events/meetings into actionable steps will prompt you into taking the next small step, and get you started on tasks that you'd otherwise have procrastinated.</p> <p>Every major life item you have is considered a project, and you break it into action steps, references and backburners.</p> <p>The action steps are just what they sound like - a progression of doable items that will lead to achievement of the project goal.</p> <p>References are materials, resources and information that will aid in the achievement of the action steps. It is stuff which is related to the project, but not directly actionable. This includes URLs to necessary references, some go-to reference books/articles for the project, and so on.</p> <p>Backburner items refer to items that might be important at later stages, but can be put aside at the moment. Entire projects can be backburners too, meaning that a project need not be taken up at the present moment, because you have other, more important projects on your plate.</p> <p>What's the advantage of this method?</p> <ol> <li> <p>'Action is the greatest motivation'. The most common reason for procrastination is the lack of the next small step towards a goal. If that next small todo can be found and completed, it's enough to get the ball rolling. Each action item is the next small step towards achieving a large project.</p> </li> <li> <p>Looking at every item in your life as a project gives you an objective vision into what you'd actually need to do. Meetings and events are otherwise subjective and abstract events - converting these into projects gives you action items for before, during and post the meetings, and thus, you know what would make an event a success.</p> </li> <li> <p>Unlike other todo lists, this method differentiates between actionable items and non-actionable items that complement the former, and provides a way to keep track of both.</p> </li> <li> <p>The concept of backburner items and projects allows you to prioritize projects and action items based on the impact they have on your life and project, without worrying about forgetting them later. Once you're done with the action items of your project, you can pull items from the backburner and take them up as action items.</p> </li> </ol> <h2 id="how-do-you-make-action-method-work-with-routine" tabindex="-1">How do you make ACTION method work with Routine<a class="tdbc-anchor" href="https://blog.dkpathak.in/action-method/#how-do-you-make-action-method-work-with-routine">#</a></h2> <p>Routine's flexibility can be leveraged to implement the ACTION method for some/all of your projects/tasks.
Each task can be opened as a document which can include everything we need: markdown, so that we can create and distinguish between the sections; the embed feature, to embed resources; and checkboxes, which turn every checkbox item into a task that can be scheduled on the Routine calendar just like any other task.</p> <p>Let's see an example. I create a new task - which will represent my project, let's say Blog writing.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-1.PNG" alt="" /></p> <p>Double-clicking on it opens up the task as a document.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-2.PNG" alt="" /></p> <p>Now, click on add subtasks, and add a few tasks.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-3.PNG" alt="" /></p> <p>Next, create the sections we'd need - action items, references and backburners. Go to a new line, and press '/', which will give you the list of possible markdown options - choose H2, and create the three headings.</p> <p>Now, drag and drop the immediate tasks you'd need doing into action items, and the others, into backburners.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-4.PNG" alt="" /></p> <p>Next, I want to embed a YouTube video I wish to refer to. Under the references heading, I select 'embed', and paste the video link, and there you have it.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-5.PNG" alt="" /></p> <p>And finally, to schedule our tasks - when you hover over a task, a calendar icon is highlighted - click on it, and give it a date and time. The great bit is you can just write it in words and Routine will intellisense it into a schedule.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-6.PNG" alt="" /></p> <p>Bingo, you see your task in the upcoming tasks.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-7.PNG" alt="" /></p> <p>There you have it - a complete action system in Routine to help you on the road to getting that project done.</p> Creating a full stack app using AWS Amplify 2021-12-12T00:00:00Z https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/ <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#overview">#</a></h2> <p>Amplify is an offering by AWS that lets you develop and deploy full stack applications by only focusing on the business logic, with all the configuration being handled behind the scenes.</p> <p>In this tutorial, we'll understand what Amplify is, how it works, and finally, set up a Todo list application with a GraphQL backend and a React frontend using Amplify.</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#prerequisites">#</a></h2> <p>You'll need to have Node/NPM and Git installed on your local systems.</p> <p>You should have an AWS account. Some knowledge of AWS concepts like IAM roles will come in handy, since we'll have to set up an IAM user for connecting our app.</p> <p>It'll also be useful to have some knowledge of React, since we'll be adding some code for the UI.
GraphQL code will also be used, but since it'll be autogenerated, it isn't absolutely necessary for you to know it.</p> <h2 id="introduction-to-aws-amplify" tabindex="-1">Introduction to AWS Amplify<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#introduction-to-aws-amplify">#</a></h2> <p>Building full stack applications is no small task. A lot of time is spent on writing boilerplate code that's already been written previously, and not enough effort can be put into developing the business logic of the application. Moreover, once the app has been built, deploying and scaling it is another major blocker for development teams to handle.</p> <p>Amplify tries to alleviate these pain points. It abstracts away some core functionalities by leveraging existing AWS services and code, and lets developers add only the business logic of the application while it configures the rest intelligently.</p> <p>Some of the existing AWS services leveraged by Amplify include AppSync for GraphQL, Cognito for authentication, and DynamoDB for the database.</p> <p>Amplify also provides other features like REST APIs, Lambda function support and prebuilt Figma components for the frontend of the app, all of which cover very frequent use cases.</p> <h2 id="the-process-well-follow" tabindex="-1">The process we'll follow<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#the-process-well-follow">#</a></h2> <p>We'll first set up the Amplify CLI on our local systems. We'll then set up the AWS profile to connect our app to AWS. We'll then add the frontend and the GraphQL code respectively, to get our app running.</p> <h2 id="setting-up-amplify-on-local" tabindex="-1">Setting up Amplify on local<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#setting-up-amplify-on-local">#</a></h2> <p>Create a new folder called amplify-app. Open the command line and navigate to this folder.</p> <p>We'll start with installing the Amplify CLI. It's the command line application for Amplify that'll allow us to configure our app using commands. Use the following command to install Amplify:</p> <pre><code>npm install -g @aws-amplify/cli
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/1-install.PNG" alt="" /></p> <p>Next, we'll be configuring Amplify by creating an AWS IAM user.</p> <p>Enter</p> <pre><code>amplify configure
</code></pre> <p>You'll be prompted to enter a username and select a region. You can choose anything you wish for both, just make sure to remember them.</p> <p>You'll then be prompted to sign in to your AWS account on the browser.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/2-username.PNG" alt="" /></p> <p>Once you're signed in, you'll have to create an IAM (Identity and Access Management) user. It's this user whose credentials will be used for the app.</p> <p>The username will have been auto-populated.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/3-iam.PNG" alt="" /></p> <p>Check the password and custom password option and add a custom password, since it's easier than an auto-generated password. Do note that the access keys option should remain checked.</p> <p>Then keep hitting next until you reach the Create user button, and click it.</p> <p>Your user will be created with an access key ID and a secret key.
Keep the window open since you'll be needing the details.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/4-user-created.PNG" alt="" /></p> <p>Come back to the terminal and press enter.</p> <p>You'll be prompted to add first the access key ID and then the secret key. Copy and paste both of them.</p> <p>If you're prompted to add a profile name, add a random one.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/5-terminal-user.PNG" alt="" /></p> <p>With this, our AWS profile setup is complete.</p> <h2 id="setting-up-react-app" tabindex="-1">Setting up React app<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#setting-up-react-app">#</a></h2> <p>Use the following commands to set up the default React application and name it todo-amplify:</p> <pre><code>npx create-react-app todo-amplify
cd todo-amplify
npm run start
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/6-npm.PNG" alt="" /></p> <p>This will start the sample React app on localhost:3000.</p> <p>Close the app and keep it on hold. We'll come back to the frontend in a bit.</p> <h2 id="initialize-backend" tabindex="-1">Initialize backend<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#initialize-backend">#</a></h2> <p>Type</p> <pre><code>amplify init
</code></pre> <p>to start the setup for the backend.</p> <p>You'll be asked for some configuration options like this:</p> <pre><code>Enter a name for the project (react-amplified)

# All AWS services you provision for your app are grouped into an &quot;environment&quot;
# A common naming convention is dev, staging, and production
Enter a name for the environment (dev)

# Sometimes the CLI will prompt you to edit a file, it will use this editor to open those files.
Choose your default editor

# Amplify supports JavaScript (Web &amp; React Native), iOS, and Android apps
Choose the type of app that you're building (javascript)
What JavaScript framework are you using (react)
Source directory path (src)
Distribution directory path (build)
Build command (npm run build)
Start command (npm start)

# This is the profile you created with the `amplify configure` command in the introduction step.
Do you want to use an AWS profile
</code></pre> <p>Keep hitting enter to choose all the default options. For the AWS profile, choose the one you'd created previously. The setup will eventually finish in a few seconds.</p> <h2 id="so-what-exactly-happens" tabindex="-1">So what exactly happens<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#so-what-exactly-happens">#</a></h2> <p>When you initialize a new Amplify project, a few things happen:</p> <ul> <li> <p>It creates a top level directory called amplify that stores your backend definition. During the tutorial you'll add capabilities such as a GraphQL API and authentication. As you add features, the amplify folder will grow with infrastructure-as-code templates that define your backend stack. Infrastructure-as-code is a best practice way to create a replicable backend stack.</p> </li> <li> <p>It creates a file called <code>aws-exports.js</code> in the <code>src</code> directory that holds all the configuration for the services you create with Amplify.
This is how the Amplify client is able to get the necessary information about your backend services.</p> </li> <li> <p>It modifies the <code>.gitignore</code> file, adding some generated files to the ignore list.</p> </li> <li> <p>A cloud project is created for you in the AWS Amplify Console that can be accessed by running <code>amplify console</code>. The Console provides a list of backend environments, deep links to provisioned resources per Amplify category, status of recent deployments, and instructions on how to promote, clone, pull, and delete backend resources.</p> </li> </ul> <h2 id="back-to-the-frontend" tabindex="-1">Back to the frontend<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#back-to-the-frontend">#</a></h2> <p>We'll install the two packages we're going to need for the project, using:</p> <pre><code>npm install aws-amplify @aws-amplify/ui-react@1.x.x
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/16-other-npm.PNG" alt="" /></p> <p>Next, we'll update our client with the backend configuration. Open <code>src/index.js</code> of your React app and add the following code at the top:</p> <pre><code>import Amplify from &quot;aws-amplify&quot;;
import awsExports from &quot;./aws-exports&quot;;
Amplify.configure(awsExports);
</code></pre> <p>And that's all it takes to configure Amplify. As you add or remove categories and make updates to your backend configuration using the CLI, the configuration in aws-exports.js will update automatically.</p> <p>Finally, update your <code>src/App.js</code> with the logic for the Todo:</p> <pre><code>/* src/App.js */
import React, { useEffect, useState } from 'react'
import Amplify, { API, graphqlOperation } from 'aws-amplify'
import { createTodo } from './graphql/mutations'
import { listTodos } from './graphql/queries'
import awsExports from &quot;./aws-exports&quot;;
Amplify.configure(awsExports);

const initialState = { name: '', description: '' }

const App = () =&gt; {
  const [formState, setFormState] = useState(initialState)
  const [todos, setTodos] = useState([])

  useEffect(() =&gt; {
    fetchTodos()
  }, [])

  function setInput(key, value) {
    setFormState({ ...formState, [key]: value })
  }

  async function fetchTodos() {
    try {
      const todoData = await API.graphql(graphqlOperation(listTodos))
      const todos = todoData.data.listTodos.items
      setTodos(todos)
    } catch (err) {
      console.log('error fetching todos')
    }
  }

  async function addTodo() {
    try {
      if (!formState.name || !formState.description) return
      const todo = { ...formState }
      setTodos([...todos, todo])
      setFormState(initialState)
      await API.graphql(graphqlOperation(createTodo, {input: todo}))
    } catch (err) {
      console.log('error creating todo:', err)
    }
  }

  return (
    &lt;div style={styles.container}&gt;
      &lt;h2&gt;Amplify Todos&lt;/h2&gt;
      &lt;input
        onChange={event =&gt; setInput('name', event.target.value)}
        style={styles.input}
        value={formState.name}
        placeholder=&quot;Name&quot;
      /&gt;
      &lt;input
        onChange={event =&gt; setInput('description', event.target.value)}
        style={styles.input}
        value={formState.description}
        placeholder=&quot;Description&quot;
      /&gt;
      &lt;button style={styles.button} onClick={addTodo}&gt;Create Todo&lt;/button&gt;
      {
        todos.map((todo, index) =&gt; (
          &lt;div key={todo.id ?
<p>Finally, update your <code>src/App.js</code> with the logic for the Todo:</p> <pre><code>/* src/App.js */
import React, { useEffect, useState } from 'react'
import Amplify, { API, graphqlOperation } from 'aws-amplify'
import { createTodo } from './graphql/mutations'
import { listTodos } from './graphql/queries'
import awsExports from &quot;./aws-exports&quot;;
Amplify.configure(awsExports);

const initialState = { name: '', description: '' }

const App = () =&gt; {
  const [formState, setFormState] = useState(initialState)
  const [todos, setTodos] = useState([])

  useEffect(() =&gt; {
    fetchTodos()
  }, [])

  function setInput(key, value) {
    setFormState({ ...formState, [key]: value })
  }

  async function fetchTodos() {
    try {
      const todoData = await API.graphql(graphqlOperation(listTodos))
      const todos = todoData.data.listTodos.items
      setTodos(todos)
    } catch (err) {
      console.log('error fetching todos')
    }
  }

  async function addTodo() {
    try {
      if (!formState.name || !formState.description) return
      const todo = { ...formState }
      setTodos([...todos, todo])
      setFormState(initialState)
      await API.graphql(graphqlOperation(createTodo, {input: todo}))
    } catch (err) {
      console.log('error creating todo:', err)
    }
  }

  return (
    &lt;div style={styles.container}&gt;
      &lt;h2&gt;Amplify Todos&lt;/h2&gt;
      &lt;input
        onChange={event =&gt; setInput('name', event.target.value)}
        style={styles.input}
        value={formState.name}
        placeholder=&quot;Name&quot;
      /&gt;
      &lt;input
        onChange={event =&gt; setInput('description', event.target.value)}
        style={styles.input}
        value={formState.description}
        placeholder=&quot;Description&quot;
      /&gt;
      &lt;button style={styles.button} onClick={addTodo}&gt;Create Todo&lt;/button&gt;
      {
        todos.map((todo, index) =&gt; (
          &lt;div key={todo.id ? todo.id : index} style={styles.todo}&gt;
            &lt;p style={styles.todoName}&gt;{todo.name}&lt;/p&gt;
            &lt;p style={styles.todoDescription}&gt;{todo.description}&lt;/p&gt;
          &lt;/div&gt;
        ))
      }
    &lt;/div&gt;
  )
}

const styles = {
  container: { width: 400, margin: '0 auto', display: 'flex', flexDirection: 'column', justifyContent: 'center', padding: 20 },
  todo: { marginBottom: 15 },
  input: { border: 'none', backgroundColor: '#ddd', marginBottom: 10, padding: 8, fontSize: 18 },
  todoName: { fontSize: 20, fontWeight: 'bold' },
  todoDescription: { marginBottom: 0 },
  button: { backgroundColor: 'black', color: 'white', outline: 'none', fontSize: 18, padding: '12px 0px' }
}

export default App
</code></pre> <h2 id="setting-up-api-and-database" tabindex="-1">Setting up API and Database<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#setting-up-api-and-database">#</a></h2> <p>The API you will be creating in this step is a GraphQL API using AWS AppSync, backed by a DynamoDB database.</p> <p>Use the following CLI command to initialize the API creation:</p> <pre><code>amplify add api
</code></pre> <p>You'll be prompted through a list of options. Keep hitting Enter to choose the default ones.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/17-gql.PNG" alt="" /></p> <p>Once it's complete, we'll push the changes using</p> <pre><code>amplify push
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/18-push.PNG" alt="" /></p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/19-push-2.PNG" alt="" /></p> <p>Once the push completes, it gives you the endpoint and an API key.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/20-gql-complete.PNG" alt="" /></p> <p>Now run the React app again and go to localhost:3000 - you should see your todo app.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/21-done.PNG" alt="" /></p> <h2 id="deploying-your-app-to-amplify-cloud" tabindex="-1">Deploying your app to Amplify cloud<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#deploying-your-app-to-amplify-cloud">#</a></h2> <p>You can also deploy your todo application to Amplify cloud using the following commands:</p> <pre><code>amplify add hosting
amplify publish
</code></pre> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#conclusion">#</a></h2> <p>With this, you've built an entire full stack application, and the only code you had to write was the business logic for the todo. Imagine the time and effort saved when all the GraphQL code and the connections come up magically outta nowhere!</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#references">#</a></h2> <ul> <li><a href="https://aws.amazon.com/amplify/">AWS Amplify Docs</a></li> </ul> AWS Lambda vs ECS 2021-12-06T00:00:00Z https://blog.dkpathak.in/aws-lambda-vs-ecs/ <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#overview">#</a></h2> <p>In this tutorial, we'll be taking a deep dive into the differences between AWS Lambda and AWS ECS.
We'll be setting up sample applications using each of them, and then contrasting their different use cases.</p> <h2 id="what-is-aws-lambda" tabindex="-1">What is AWS Lambda<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#what-is-aws-lambda">#</a></h2> <p>Lambda uses the same resources that a server-driven deployment would've given us - EC2 instances, coupled with load balancers, security groups, and auto-scaling services. However, unlike the latter, these resources are configured entirely on the backend, away from the user, and automatically scaled up/down as per traffic. All the user needs to do is provide the code, and let Lambda take care of ensuring it runs.</p> <p>The following block diagram describes how Lambda works.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/lambda-bd.png" alt="" /></p> <h2 id="what-is-ecs" tabindex="-1">What is ECS<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#what-is-ecs">#</a></h2> <p>ECS stands for Elastic Container Service, and is a container orchestration solution - meaning, it allows deployment and management of applications which are containerized using tools like Docker.</p> <p>The following block diagram explains how it all comes together.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/ecs-bd.png" alt="" /></p> <h2 id="what-is-aws-fargate" tabindex="-1">What is AWS Fargate<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#what-is-aws-fargate">#</a></h2> <p>We'll be using AWS Fargate in our ECS example. Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. Unlike EC2, you don't actually have to worry about setting up and provisioning the servers. You only provide a containerized application, and Fargate handles the hosting based on the resources you require.</p> <p><img src="https://blog.dkpathak.in/img/scalex/fargate.png" alt="" /></p> <h1 id="setting-up-lambda" tabindex="-1">Setting up Lambda<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#setting-up-lambda">#</a></h1> <p>Go to aws.amazon.com and sign up for an account if you don't already have one.</p> <p>Once you're signed in, search 'Lambda' in the search bar. You should be redirected to the Lambda dashboard.</p> <p>Before you create a Lambda function, you need to identify its inputs and triggers, choose a runtime environment, and decide what permissions and role the service will use.</p> <p>Lambda functions accept JSON input and produce JSON output. Your function’s input and output contents are closely tied to the event source that will trigger your function.</p>
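<p>To make that concrete, here's an illustrative sketch of the kind of JSON event an API Gateway REST trigger hands to your function. This is a trimmed-down, hypothetical example - real events carry many more fields, and other event sources (S3, SQS, and so on) have entirely different shapes:</p> <pre><code>// Hypothetical, trimmed API Gateway event -- real events have many more fields
const sampleEvent = {
  httpMethod: 'GET',                        // the HTTP verb of the request
  path: '/my-function',                     // the resource path that was hit
  queryStringParameters: { name: 'DK' },    // query string values, if any
  headers: { 'User-Agent': 'Mozilla/5.0' }, // request headers
  body: null                                // request body, for POST/PUT calls
};
</code></pre>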
<p>An event source is usually a web request that'll cause the execution of the function code.</p> <p>You also need to select a runtime for your function. We'll be using Node.js.</p> <p>Finally, your function will need an AWS role that defines the entitlements the function has within the AWS platform.</p> <p>Click on Create function.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image9.png" alt="" /></p> <p>Keep the default 'Author from scratch' option selected.</p> <p>Give your function a name as you wish, and leave everything else as it is.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image5.png" alt="" /></p> <p>Click on Create function at the bottom of the page.</p> <p>You'll be redirected to the function configuration page, which looks something like this:</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <p>You'll first have to add a trigger for your lambda function. Click on Add trigger.</p> <p>You'll then be asked to choose a trigger - select API Gateway. An API Gateway essentially lets you create, deploy and monitor APIs. In our case, we'll be able to use our function like an API - when we hit the deployed URL, it'll trigger our function.</p> <p>Choose API type as REST API, security as Open, and leave the rest as it is. Finally, click Add.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image1.png" alt="" /></p> <p>You'll see that the trigger is added.</p> <p>Next, you are given a code source window with an integrated code editor, where you can add/edit code and files.</p> <p>A sample code snippet is provided. You can choose to modify the message to something you wish, and keep the rest of the code as it is for now.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p>
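<p>For reference, the default Node.js snippet Lambda generates looks roughly like this (reproduced from memory, so treat it as a sketch) - a handler that takes the JSON event and returns a canned JSON response:</p> <pre><code>// index.js -- the default handler Lambda scaffolds for a Node.js function
exports.handler = async (event) =&gt; {
    // TODO implement
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};
</code></pre> <p>The string inside <code>JSON.stringify</code> is the message you can modify to something of your own.</p>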
<h2 id="testing-the-function" tabindex="-1">Testing the function<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#testing-the-function">#</a></h2> <p>Next, we'll test if the function works as expected. Go to the Test tab.</p> <p>Here, you're given an option to create an event. An event is a happening that triggers the function, and it has a JSON input. Since we're not actually using the input in any way, it doesn't matter much to us here. However, when the lambda function is deployed as a service to some application, there'll be inputs coming in that the function will use. Those inputs can be given here to test if they give the required outcome.</p> <p>Leave everything unchanged, and click Test.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image6.png" alt="" /></p> <p>It'll run the test using the event config, and will pass with the following message in a second or two.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image7.png" alt="" /></p> <h2 id="understanding-the-result" tabindex="-1">Understanding the result<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#understanding-the-result">#</a></h2> <p>The details show the function output. In our case, the status code and the message body.</p> <p>The summary tab has a few important fields. The duration denotes the time it took for the lambda to run, which is an important pointer when we are running a production grade application and are likely to hit timeout/performance issues.</p> <p>The billed duration is another important indicator - you only pay for what you use. Unlike the EC2 instance, where you were charged for the server just being on, irrespective of whether anything was running on it, Lambda only charges you for the time your function runs - an obvious cost advantage.</p> <p>And the field that's one of the most significant to our discussion - Resources configured. 128 MB in our case. Do you remember configuring anything at all, apart from the function code itself? Nope. So where did the 128 MB come from? That's the magic - by just telling Lambda what code you need to run, it automatically provisions the resources needed to run it, saving considerable developer bandwidth that would've otherwise gone into getting servers configured.</p> <h2 id="deploying-the-lambda-function" tabindex="-1">Deploying the Lambda function<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#deploying-the-lambda-function">#</a></h2> <p>Go back to the code tab, and click on Deploy.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <p>Now, click on API Gateway in Function Overview.</p> <p>It'll give you the API endpoint. Copy it, and paste it in a new browser tab.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image3.png" alt="" /></p> <p>Sure enough, you'll see the learning lambda message on the screen.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image4.png" alt="" /></p> <p>Come back to the lambda dashboard and go to the Monitor tab. Here, you'll be able to monitor the calls being made to your API. Refresh the API's page a few times, and you'll see the requests being shown on the graphs.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image2.png" alt="" /></p> <p>Notice the usefulness of the graphs - the invocations show you how many times the API was invoked.</p> <p>The error count and success rate let you track if the function is facing downtime/run time errors.</p> <h1 id="setting-up-ecs" tabindex="-1">Setting up ECS<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#setting-up-ecs">#</a></h1> <p>Next, we'll set up and configure an ECS application using AWS Fargate.</p> <p>Go to the AWS dashboard and search for ECS. You'll be taken to the ECS dashboard, which looks like this:</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/1-dashboard.PNG" alt="" /></p> <p>Click on Get Started.</p> <p>We'll be selecting an Nginx container.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/2-select.PNG" alt="" /></p> <p>Next, you'll be prompted to add a service, which ensures that the defined number of task instances is maintained. If an instance goes down, a new task instance is created.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/3-task.PNG" alt="" /></p> <p>Next, you'll be asked to configure your cluster details - keep them as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/4-cluster.PNG" alt="" /></p> <p>Finally, click Create.</p> <p>You can see the status of the resources being provisioned:</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/5-launch.PNG" alt="" /></p> <p>Finally, your service will be active.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/6-view.PNG" alt="" /></p> <p>Go to the task definitions.</p> <p>Copy the public IP and paste it in a new browser tab.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/7-pip.PNG" alt="" /></p> <p>You'll see that the default nginx screen opens up.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/8-nginx.PNG" alt="" /></p> <p>Refresh it a few times.</p> <p>Come back to the ECS dashboard and go to logs.
You'll see that for every refresh, a log entry is created.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/9-logs.PNG" alt="" /></p> <h2 id="difference-between-lambda-and-ecs" tabindex="-1">Difference between Lambda and ECS<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#difference-between-lambda-and-ecs">#</a></h2> <p>Thus, you created and deployed sample services using both Lambda and ECS (via Fargate).</p> <p>At first glance, these two look similar - both are serverless solutions that configure server resources based on what your application needs, and work on a pay-per-use model. They both also provide monitoring and logs in a similar fashion.</p> <p>However, there are a few subtle differences. Lambda essentially allows you to run tiny functions - they can of course be as gigantic as applications themselves, but that's not what it's meant for. It's meant for isolated services that can be plugged into existing applications via triggers like the API Gateway we used, so that your services work in isolation, and the downtime of one doesn't affect the others.</p> <p>ECS is a container orchestrator, and is principally meant for running 'containerized applications'. There's some configuration you need to define when setting up the resources, whereas in Lambda, it was handled in its entirety by AWS itself. ECS is mainly meant for larger applications, but with the flexibility of not having to manage compute instances yourself.</p> <h3 id="consider-lambda-over-ecs-when" tabindex="-1">Consider Lambda over ECS when<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#consider-lambda-over-ecs-when">#</a></h3> <ul> <li> <p>You have a smaller application that runs on-demand in 15 minutes or less.</p> </li> <li> <p>You don’t need advanced EC2 instance configuration. Lambda manages, provisions, and secures EC2 instances for you, along with providing target groups, load balancing, and auto-scaling. It eliminates the complexity of managing EC2 instances.</p> </li> <li> <p>You want to pay only for capacity used. Lambda charges are metered by milliseconds used and the number of times your code is triggered. Costs are correlated to usage. Lambda also has a free usage tier.</p> </li> </ul> <h3 id="consider-ecs-over-lambda-when" tabindex="-1">Consider ECS over Lambda when<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#consider-ecs-over-lambda-when">#</a></h3> <ul> <li> <p>You are running Docker containers. While Lambda now has Container Image Support, ECS is a better choice for a Docker ecosystem, especially if you are already creating Docker containers.</p> </li> <li> <p>You want the flexibility to run in a managed EC2 environment or in a serverless environment. You can provision your own EC2 instances, or Amazon can provision them for you. You have several options.</p> </li> <li> <p>You have tasks or batch jobs running longer than 15 minutes. Choose ECS when dealing with longer-running jobs, as it avoids the Lambda timeout limit above.</p> </li> <li> <p>You need to schedule jobs. ECS provides a service scheduler for long running tasks and applications, along with the ability to run tasks manually.</p> </li> </ul> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#conclusion">#</a></h2> <p>Thus, in this tutorial, you got an introduction to AWS Lambda, AWS ECS and Fargate.
You explored the similarities between them by setting up sample applications using each. You then drew distinctions between them, and built hands-on checklists for when one would be preferred over the other.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#references">#</a></h2> <ul> <li> <p><a href="https://aws.amazon.com/ecs/">AWS ECS</a></p> </li> <li> <p><a href="https://aws.amazon.com/fargate/">AWS Fargate</a></p> </li> </ul> Intro to Serverless 2021-12-05T00:00:00Z https://blog.dkpathak.in/intro-to-serverless/ <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#overview">#</a></h2> <p>In this section, we'll get a deep understanding of what it means to have 'serverless' applications - most importantly, why it's a misnomer. We'll understand the use case of this paradigm, how it's implemented on the ground, and finally take up a hands-on example to create a sample NodeJS service using AWS Lambda.</p> <h2 id="introduction" tabindex="-1">Introduction<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#introduction">#</a></h2> <p>Web applications and services need servers to run on. These servers can be custom on-premise servers that large companies themselves own, or cloud servers from providers, like EC2 by AWS. We've used the latter in a few tutorials in the past.</p> <p>While cloud servers take away the complexity of server maintenance, we still need to manually configure load balancing and track usage. We'll be charged for all the time the server's up, irrespective of whether or not the server's being used at all. This is suboptimal for many small organizations, who not only want to minimize cloud costs, but also can't spare enough manpower for customizing load balancing and server instance uptime.</p> <p>Thus came the concept of 'Serverless'. First things first, it's NOT like there's no server at all. It's just that we aren't granted access to an entire server like we were for EC2. Instead, we just give the cloud provider the application code we need to run, and then it's their job to run the code and ensure that it scales up/down based on traffic, allowing us to focus on the application itself.</p> <h2 id="how-exactly-does-this-work" tabindex="-1">How exactly does this work?<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#how-exactly-does-this-work">#</a></h2> <p>The following block diagram describes how Lambda works.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/lambda-bd.png" alt="" /></p> <p>Lambda uses the same resources that a server-driven deployment would've given us - EC2 instances, coupled with load balancers, security groups, and auto-scaling services. However, unlike the latter, these resources are configured entirely on the backend, away from the user, and automatically scaled up/down as per traffic.
All the user needs to do is provide the code, and let Lambda take care of ensuring it runs.</p> <h2 id="what-well-be-doing" tabindex="-1">What we'll be doing<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#what-well-be-doing">#</a></h2> <p>We'll be setting up a NodeJS service using AWS Lambda, configuring the triggers that cause it to run, then hitting those triggers and tracking the logs as the function runs.</p> <h2 id="setting-up-aws-lambda" tabindex="-1">Setting up AWS Lambda<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#setting-up-aws-lambda">#</a></h2> <p>Go to aws.amazon.com and sign up for an account if you don't already have one.</p> <p>Once you're signed in, search 'Lambda' in the search bar. You should be redirected to the Lambda dashboard.</p> <p>Before you create a Lambda function, you need to identify its inputs and triggers, choose a runtime environment, and decide what permissions and role the service will use.</p> <p>Lambda functions accept JSON input and produce JSON output. Your function’s input and output contents are closely tied to the event source that will trigger your function.</p> <p>An event source is usually a web request that'll cause the execution of the function code.</p> <p>You also need to select a runtime for your function. We'll be using Node.js.</p> <p>Finally, your function will need an AWS role that defines the entitlements the function has within the AWS platform.</p> <p>Click on Create function.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image9.png" alt="" /></p> <p>Keep the default 'Author from scratch' option selected.</p> <p>Give your function a name as you wish, and leave everything else as it is.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image5.png" alt="" /></p> <p>Click on Create function at the bottom of the page.</p> <p>You'll be redirected to the function configuration page, which looks something like this:</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <p>You'll first have to add a trigger for your lambda function. Click on Add trigger.</p> <p>You'll then be asked to choose a trigger - select API Gateway. An API Gateway essentially lets you create, deploy and monitor APIs. In our case, we'll be able to use our function like an API - when we hit the deployed URL, it'll trigger our function.</p> <p>Choose API type as REST API, security as Open, and leave the rest as it is. Finally, click Add.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image1.png" alt="" /></p> <p>You'll see that the trigger is added.</p> <p>Next, you are given a code source window with an integrated code editor, where you can add/edit code and files.</p> <p>A sample code snippet is provided. You can choose to modify the message to something you wish, and keep the rest of the code as it is for now.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <h2 id="testing-the-function" tabindex="-1">Testing the function<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#testing-the-function">#</a></h2> <p>Next, we'll test if the function works as expected. Go to the Test tab.</p> <p>Here, you're given an option to create an event. An event is a happening that triggers the function, and it has a JSON input. Since we're not actually using the input in any way, it doesn't matter much to us here.
However, when the lambda function is deployed as a service to some application, there'll be inputs coming in that the function will use. Those inputs can be given here to test if they give the required outcome.</p> <p>Leave everything unchanged, and click Test.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image6.png" alt="" /></p> <p>It'll run the test using the event config, and will pass with the following message in a second or two.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image7.png" alt="" /></p> <h2 id="understanding-the-result" tabindex="-1">Understanding the result<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#understanding-the-result">#</a></h2> <p>The details show the function output. In our case, the status code and the message body.</p> <p>The summary tab has a few important fields. The duration denotes the time it took for the lambda to run, which is an important pointer when we are running a production grade application and are likely to hit timeout/performance issues.</p> <p>The billed duration is another important indicator - you only pay for what you use. Unlike the EC2 instance, where you were charged for the server just being on, irrespective of whether anything was running on it, Lambda only charges you for the time your function runs - an obvious cost advantage.</p> <p>And the field that's one of the most significant to our discussion - Resources configured. 128 MB in our case. Do you remember configuring anything at all, apart from the function code itself? Nope. So where did the 128 MB come from? That's the magic - by just telling Lambda what code you need to run, it automatically provisions the resources needed to run it, saving considerable developer bandwidth that would've otherwise gone into getting servers configured.</p> <h2 id="deploying-the-lambda-function" tabindex="-1">Deploying the Lambda function<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#deploying-the-lambda-function">#</a></h2> <p>Go back to the code tab, and click on Deploy.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <p>Now, click on API Gateway in Function Overview.</p> <p>It'll give you the API endpoint. Copy it, and paste it in a new browser tab.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image3.png" alt="" /></p> <p>Sure enough, you'll see the learning lambda message on the screen.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image4.png" alt="" /></p> <p>Come back to the lambda dashboard and go to the Monitor tab. Here, you'll be able to monitor the calls being made to your API.
Refresh the API's page a few times, and you'll see the requests being shown on the graphs.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image2.png" alt="" /></p> <p>Notice the usefulness of the graphs - the invocations show you how many times the API was invoked.</p> <p>The error count and success rate let you track if the function is facing downtime/run time errors.</p> <p>All of this, without having to configure any of it - that's the beauty of Lambda.</p> <h2 id="adding-further-code" tabindex="-1">Adding further code<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#adding-further-code">#</a></h2> <p>Now that your Lambda function is up and running, you can add further code to create actual services, connect it to databases, and more.</p>
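<p>As a small, hypothetical next step, here's a sketch of a handler that actually uses the incoming event rather than ignoring it - it reads a query parameter from the API Gateway request and echoes it back. The parameter name <code>name</code> is our own choice for illustration, not anything AWS defines:</p> <pre><code>// Sketch: a handler that reads input from the triggering request, e.g. ?name=DK
exports.handler = async (event) =&gt; {
  const params = event.queryStringParameters || {};
  const name = params.name || 'world';

  return {
    statusCode: 200,
    body: JSON.stringify(`Hello, ${name}!`),
  };
};
</code></pre>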
<h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#conclusion">#</a></h2> <p>Thus, in this tutorial, we got introduced to what serverless means, and how it is beneficial over the traditional server-driven model. We used AWS Lambda to set up and configure a NodeJS service, set up a trigger using the API Gateway, and monitored our service, all while having to configure little beyond our business logic.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#references">#</a></h2> <ul> <li><a href="https://aws.amazon.com/lambda/">AWS Lambda official docs</a></li> </ul> Demystifying procrastination 2021-12-20T00:00:00Z https://blog.dkpathak.in/demystifying-procrastination/ <p>The biggest threat to productivity is procrastination - the wilful(?) destruction of a more structured life by giving in to short term pleasures over long term contentment. Notice the '?' after 'wilful'.</p> <p>Is procrastinating wilful? Do we, who have big aims and aspirations of a better life, CHOOSE to derail our progress on those goals, WHILST being aware that it could potentially be the death knell for the consistency we'd maintained so far? I mean, no sane person would kill one's own desires so willingly, right?</p> <p>The subject of procrastination has been under medical research for years, and while it's unnecessary for us to deep dive into the intricacies of the flashing neurons, it helps to know a few superficial facts (I am no more a medical guy than Jackie Chan is a ballerina, so do not kill me over the medical accuracy of what I write - it's been vastly simplified for ease of understanding) - a section of our brain, called the prefrontal cortex, can be thought of as the logical part - it makes you do stuff that makes 'logical sense', pursuing goals like exercising, personal projects, reading and so on.</p> <p>And there's this other dude called the limbic system, which has more to do with the emotional and instinctive stuff. And no surprises, it's this little son of a jumbled mass of neurons that makes you procrastinate - it's responsible for the short term pleasures that your brain seeks, and fights with the prefrontal cortex for dominance over your body. Whenever the PC wins, you're productive. When it's the limbic system who comes out on top, #NetflixBinge</p> <p>And thus, our quest to cut down our procrastination would be to ensure that our prefrontal cortex wins more often.</p> <p>Superb. How do we make that happen?</p> <p><i>The Productivity Project</i> author Chris Bailey calls out six traits that usually occur in various quantities in almost all tasks we procrastinate on. The intensity and quantity of each of these traits in a task determine how likely we are to procrastinate on it.</p> <p>These are:</p> <ul> <li> <p>Boring</p> </li> <li> <p>Frustrating</p> </li> <li> <p>Difficult</p> </li> <li> <p>Unstructured or ambiguous</p> </li> <li> <p>Lacking in personal meaning</p> </li> <li> <p>Lacking in intrinsic rewards</p> </li> </ul> <p>Let's take an activity that's a pretty commonly procrastinated one for many of us - tidying our rooms - and rate it on a scale of 1 to 10 on all six of these.</p> <p>Boring - yeah, a bit. 6/10.</p> <p>Frustrating? Often - you know it's gonna be the same old mess within a week at most, which makes you wonder why do it at all. 9/10.</p> <p>Difficult? Umm, not so much, unless your room is a palace. 2/10.</p> <p>Unstructured or ambiguous? Yes, absolutely. When do you decide it's 'clean enough'? Where do you start cleaning? Do you clean the insides of the cupboards too? 10/10</p> <p>Lacking in personal meaning? Unless you're a Monica from Friends, definitely yes. It doesn't give the kicks, and contributes little to personal goals. 9/10</p> <p>Lacking in intrinsic rewards? Again, yes. No direct benefit to me that I can see. 9/10</p> <p>And there, we have it. While rating the intensity of each of these traits for the task of cleaning the room, we thought about the negative aspects of the task that made us procrastinate on it. And once you know where you're going wrong with a problem, the problem's half solved.</p> <p>Boring? -&gt; Play your fav music as you clean.</p> <p>Unstructured? -&gt; Create a weekwise plan beforehand as to what part of the room you'll clean the coming day/week, and then tackle only that, not worrying about the others.</p> <p>Lacking in intrinsic rewards? -&gt; If the 'feeling of accomplishment' isn't a good enough reward, you may create a reward for yourself - 10 minutes of binging on something you like if you clean the room.</p> <p>And thus, by quantifying and categorizing some of the 'procrastin-able' aspects of a task, you make plans to systematically limit/eliminate those, and make it harder for the limbic system dude to come out on top.</p> <p>It may seem like a pain, definitely, to think and plan so much before all your procrastinable tasks, and might make you wonder - should I just have gritted myself and gotten done with the task in that time, rather than planning like a military general for it? Well, the very reason you're planning the task is BECAUSE you could not grit yourself and get done with it. The planning will get it done. And once you've gotten the hang of it, the eliminations will come instinctively and faster.</p> Productivi-TEA - Time, Energy, Attention 2021-12-29T00:00:00Z https://blog.dkpathak.in/productivi-tea-time-energy-attention/ <p>When starting with a new productivity goal, we often expect from ourselves something that's entirely alien to human nature - that, irrespective of our body's responses, we're constantly able to attack our day's plans with the same zeal, zest and energy throughout the day. Most of us who started on a sunny day with the motivation to blast our productivity through the roof, started by dumping tasks and plans on every second of the day, and then watched helplessly as the scheduled stuff came but the body didn't respond with the same energy, or as our attention went into scrolling through some extremely relatable memes on IG.</p> <p>Productivity is a function of Time, Energy and Attention, all of which we have in finite availability.
Think of it like money - if you only have a 100 bucks, you'd rather spend 90 of it on what's going to help the most in your survival - food, water and clothing - rather than buying a Netflix subscription. Similarly, the best of our Time, Energy and Attention has to be devoted to the tasks that are the most meaningful to us, from which we hope to derive the maximum output.</p> <p>And that's the reason for the name of this article - ProductiviTEA - the TEA is like caffeine. Optimal usage of the TEA can give you a boost in your life.</p> <p>So, how do you ensure that the best of your TEA goes into your most important tasks? And how can Routine help you in your journey?</p> <h2 id="tracking-your-tea" tabindex="-1">Tracking your TEA<a class="tdbc-anchor" href="https://blog.dkpathak.in/productivi-tea-time-energy-attention/#tracking-your-tea">#</a></h2> <p>You can only improve if you know where you are lacking. Tracking your Time, Energy and Attention throughout the day will give you insights on what your peak moments are, and thus, how you can leverage them.</p> <p>Routine's essence is its calendar drag-and-drop, and you can utilize this core feature to track your traits for a week.</p> <p>To do that, schedule a task every waking hour on the Routine calendar. This task can be an actual work task, or just anything that you intend to do at that particular time, including watching TV, or propping your feet onto the table and staring at the ceiling. You just need to track what you're doing at that time.</p> <p>Double click on a task to open it as a doc, and add two points - Energy and Attention.</p> <p>For each hour, track your energy level out of 100. It may be tough to quantify it at first, but after a few trials, you'll be able to put in a number relative to what you put in the past.</p> <p>Also, for each hour, try and track how many times you felt your attention wandering from what you were meant to do in the past hour. You need not keep a strict count of this - a rough approximation works, to begin with.</p> <p>Follow this ritual for about a week, and you'll begin to notice some patterns - your energy and attention levels are high at certain times of the day. For early morning birds, it's usually the morning hours; likewise for night owls. These reflect what's called your Biological Prime Time. Note: artificially induced energy and attention - caffeine-fuelled, or driven by the pressure of deadlines - do not count here. Your BPT is based on your natural body clock and your habits - when you're naturally the most prone to attention and action. During these BPT highs, perform your most important tasks, usually the ones that you're highly likely to procrastinate on.</p> <p>At your worst BPT, schedule tasks that require the least attention, such as tracking emails.</p> <p>Thus, by actively managing your BPT, you can get more done, without forcing your body.</p> CI CD using Github Actions and Netlify 2021-12-06T00:00:00Z https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/ <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#overview">#</a></h2> <p>In this tutorial, we'll build a ground-up understanding of what Continuous Integration-Delivery-Deployment means, and why it's so useful in modern software development and DevOps. We'll then take a hands-on example of configuring a CI pipeline for a sample React application using GitHub Actions, understanding some syntax along the way.
We'll then connect our repo to Netlify and configure a CD pipeline.</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#prerequisites">#</a></h2> <p>You'll need a GitHub account. We'll be using a sample React application to set up the workflow, and it might help to understand how to run a React app, although a detailed understanding is not necessary, since we won't be adding any React code in this tutorial.</p> <p>You'll also need an account on netlify.com, which we'll be connecting with the GitHub account to set up a CD pipeline. All of this is entirely free.</p> <h2 id="introduction-to-ci-cd" tabindex="-1">Introduction to CI CD<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#introduction-to-ci-cd">#</a></h2> <blockquote> <p>Disclaimer: Some of the practices might seem vague or overkill right now, especially for those who have not had experience working in large teams. However, CI CD was developed keeping in mind software development for large, distributed teams.</p> </blockquote> <p>In any team delivering software for a client, it is not enough to just push your code along to a remote repository and be done with it. There's an entire process that happens once you're done coding, and it's fraught with complications.</p> <p>There'll be tens or hundreds of developers making changes to the same codebase, all with different coding styles. Your code might not work with the most recent push made by another developer. Your code might not be of good quality, which might make it difficult for other developers to understand it or build upon it. Your code might 'work on your machine', but it might not work in the higher environments.</p> <p>All of these things can go wrong, and they do - so much so that they forced the industry pioneers to come up with an approach to ensure that any new code being pushed followed a set of guidelines, and that it went through a series of steps before it finally got merged into the main codebase. This process was rote enough that it shouldn't be done manually each time somebody pushed something, and thus, tools were developed to automate the checks and steps that needed to be taken.</p> <p>This process is called Continuous Integration. Your code is continuously 'integrated' into the application, AFTER automated tests and other scripts run on it confirm that it doesn't break an existing feature and is of good quality.</p> <p>A sample CI workflow looks something like this (it differs by team - this is just a sample one):</p> <ol> <li> <p>The developer pushes the code into her/his feature branch. No one pushes code directly into the master branch in a development team.</p> </li> <li> <p>The developer seeks code reviews from teammates and raises a pull request.</p> </li> <li> <p>As soon as the PR is raised, a step in the CI workflow is triggered and a build starts using the new code, on a build automation tool like Jenkins or TeamCity.
If the build fails, it's pointless to carry on to further steps, and the code is returned to the developer, asking her/him to check why it failed and make the changes.</p> </li> <li> <p>If the build passes, the reviewers manually check the changes made by the dev and approve the PR.</p> </li> <li> <p>Once the necessary number of approvals have been granted, the next workflow step gets triggered, wherein automated tests are run on the code to ensure the functionality is working as expected.</p> </li> <li> <p>Further checks MIGHT be made by automated tools checking for code quality or test coverage, using tools like SonarQube (SonarLint) or Codecov. These tools raise flags if the new code does not follow the coding standards configured by the team. The developer has to rectify those and restart the workflow.</p> </li> <li> <p>Once the checks are complete, the code then 'tries' to get merged onto the main branch. If some other commit has touched the same lines as this push, there is a merge conflict that the developer has to resolve manually.</p> </li> <li> <p>If not, the code gets merged into the main branch.</p> </li> </ol> <p>This might sound like a lot of work, but in a complex project, it's critical to ensure that any new change is the 'right change', or it could take weeks to unravel if it passes undetected. Moreover, with the practice of automated CI, almost all the steps are done automatically, without the need for someone to manually push the code along to the next workflow step.</p> <p>Thus, CI is about pushing code in small increments as frequently as possible, ensuring that it's bug free and follows best practices, and finally merging it into the main code.</p> <p>CD can refer to Continuous Deployment and/or Continuous Delivery - usually both, first Delivery, then Deployment. Atlassian describes the difference as: Delivery requires a manual intervention for pushing to a production environment, while Deployment automates that step as well.</p> <p>Once your code is pushed into the master branch at the end of a CI workflow, it needs to go through various testing environments where further tests like FT (Functional Testing), SIT (System Integration Testing) and UAT (User Acceptance Testing) are run to ensure the application is working as expected. And once it's gone through all the testing environments, the final release to production can be done manually (continuous delivery) or automatically (continuous deployment).</p> <h2 id="intro-to-github-actions" tabindex="-1">Intro to GitHub Actions<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#intro-to-github-actions">#</a></h2> <p>GitHub Actions is a tool provided by GitHub that helps you create and run the workflows for CI/CD.
By creating a simple workflow file, you can ensure that once your code is committed to GitHub, it'll get released to your production environment entirely on its own, without requiring any effort from you.</p> <p>GitHub Actions is an extremely popular tool for beginners since, unlike other CI tools like Jenkins, it's very simple to set up and start with, and abstracts away a lot of the setup that newbies need not bother themselves with.</p> <h2 id="how-does-it-work" tabindex="-1">How does it work?<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#how-does-it-work">#</a></h2> <p>Actions runs code packages in Docker containers, which run on GitHub servers and which, in turn, are compatible with any programming language. There are tons of preconfigured workflows available across frameworks like Node, Python and Java, which we can pick and customize for our application - and that's precisely what we're going to do when we get to the hands-on.</p> <h2 id="terms" tabindex="-1">Terms<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#terms">#</a></h2> <p>There are a few terms that will be used in the configuration files that we need to look through. Fortunately, they're more or less exactly like they sound:</p> <ul> <li> <p>Step: A set of tasks that need to be performed. Steps can be commands like <code>run: npm ci</code> or other actions, like checking out a specific branch.</p> </li> <li> <p>Job: A set of steps that run on the same runner. Jobs can be executed independently in parallel, or sequentially when one job depends on the success of the previous one.</p> </li> <li> <p>Workflow: This is what we'll be creating as our end goal. It is an automated procedure composed of one or more jobs that is added to a repository and can be activated by an event. Workflows are defined in YAML files, and with one you can build, test, package, release or deploy a project.</p> </li> <li> <p>Event: A specific activity that triggers the execution of a workflow - for instance, a commit to a specific branch, a new PR, and so on.</p> </li> <li> <p>Action: The smallest building block of a workflow; actions can be combined as steps to create a job.</p> </li> <li> <p>Runner: A machine with the GitHub Actions application already installed, whose function is to wait for jobs to be available, execute the actions, and report the progress and the results.</p> </li> </ul> <h2 id="introduction-to-netlify" tabindex="-1">Introduction to Netlify<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#introduction-to-netlify">#</a></h2> <p>Netlify is a platform that allows you to host and deploy frontend applications. It is an extremely popular tool for newbies, because it takes no more than a few clicks to deploy application code on GitHub directly to Netlify.</p> <h2 id="sample-application-to-set-up-workflow" tabindex="-1">Sample application to set up workflow<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#sample-application-to-set-up-workflow">#</a></h2> <p>We'll be setting up the workflow using a simple React application - https://github.com/dkp1903/react-github-actions. You need not clone it to your local machine, since we will not be making any code changes to the app.
Instead, you'll have to fork the repo to your own GitHub account using the Fork button available in the top right corner.</p> <p>Once that's done, go to the Actions tab on GitHub, and select the Node.js workflow.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image5.png" alt="" /></p> <p>It'll create a file called <code>node.js.yml</code> with some prewritten configuration, like this:</p> <pre><code># This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions

name: Node.js CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [12.x, 14.x, 16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/

    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v2
      with:
        node-version: ${{ matrix.node-version }}
        cache: 'npm'
    - run: npm ci
    - run: npm run build --if-present
    - run: npm test
</code></pre> <p>The 'on' field describes when this particular workflow will be run. Right now, it's set to run on all pushes to, and pull requests against, the main branch.</p> <p>'runs-on' describes the environment the code will be run on, on GitHub's servers. It's the latest Ubuntu image, and we'll leave it at that.</p> <p>The node versions to be checked against are 12, 14 and 16, so we'll have three different jobs running in parallel when the workflow gets triggered. We'll leave this one as is as well.</p> <p>The 'run' fields signify the commands to be run - first, npm ci (clean install). We'll change that to <code>npm i</code> for ease of understanding.</p> <p>Then comes npm run build with the --if-present flag, which means that the build will run only if a build script is present. Fortunately, our app does have a build script, so we'll leave this as well.</p> <p>Finally, the npm test command will run the test file we have (App.test.js), which contains just a single test.</p>
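<p>For context, a test along these lines would produce the behaviour we'll see shortly - it passes as long as the words 'React App' are rendered by the app. This is an illustrative sketch, not necessarily the exact file from the repo:</p> <pre><code>// src/App.test.js -- illustrative sketch of a single React Testing Library test
import { render, screen } from '@testing-library/react';
import App from './App';

test('renders the React App text', () =&gt; {
  render(&lt;App /&gt;);
  // This assertion fails if the words 'React App' are removed from App.js
  const element = screen.getByText(/react app/i);
  expect(element).toBeInTheDocument();
});
</code></pre>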
<p><img src="https://blog.dkpathak.in/img/scalex/ga/image1.png" alt="" /></p> <p>Click on the Start commit button on the top right. Once you do, the workflow will automatically be triggered.</p> <p>There will be three jobs running in parallel, one each for Node versions 12, 14 and 16. The jobs will all be successful in a few minutes.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image9.png" alt="" /></p> <p>Open the build for Node 12, and look at the steps that were followed. If you open the npm test one, you'll see that there's one test, which passed.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image2.png" alt="" /></p> <p>We'll soon mess with that.</p> <p>We've set up the CI part of the CI/CD pipeline. Now, for the CD, we'll connect our repository to Netlify, where we'll host our React application code.</p> <p>Go to netlify.com and sign up using GitHub, using the same GitHub account your react-github-actions repo is on.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image8.png" alt="" /></p> <p>Now, click on New site. Select the provider as GitHub. Search for the react-github-actions repo and add it.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image6.png" alt="" /></p> <p>You'll be asked for some details.</p> <p>Change the build command to <code>npm run build</code>.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image11.png" alt="" /></p> <p>Click on Deploy site.</p> <p>Once you do, the deploy will be auto triggered.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image7.png" alt="" /></p> <p>So, now that we have everything running smoothly, we'll make a mess of things by introducing a breaking change - because things don't work at one go in software development.</p> <p>Go to GitHub, and edit the App.js file by removing the words 'React App'.</p> <p>Add the commit message as 'added-breaking-change', and instead of pushing directly to the master branch, click on create a new branch. You may name it anything.</p> <p>Now, we'll make a PR into the master branch.</p> <p>If you remember, we'd configured the CI pipeline to work in two cases: one, if a push was made to master, and two, if there was a PR raised.</p> <p>So this time, as soon as we raise the PR, the workflow should be triggered.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image10.png" alt="" /></p> <p>Sure enough, you'll see the steps being run.</p> <p>We see that the checks fail.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image4.png" alt="" /></p> <p>If you go to the Actions tab and check the CI logs, you'll see that the test failed, which is what we'd expected.</p> <p>Go to Netlify and confirm that no deployment has started.</p> <p>Now, add the 'React App' text back into the file and make a commit into the same branch.</p> <p>The tests will run again, and you'll see that they pass now. Once the tests pass, you can merge the pull request.</p> <p>And going to Netlify, you'll see that a deploy has been triggered.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image3.png" alt="" /></p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#conclusion">#</a></h2> <p>Thus, you understood the concepts of CI/CD and how they work in a production environment. You set up an application, configured CI on it using GitHub Actions, and set up CD using Netlify. You confirmed the flow by purposely failing the CI test, and ensured that an incorrect deployment did not get triggered.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#references">#</a></h2> <ul> <li><a href="https://www.atlassian.com/continuous-delivery/principles/continuous-integration-vs-delivery-vs-deployment">CI vs CD vs CD</a></li> </ul> Optimal timeblocking 2021-11-26T00:00:00Z https://blog.dkpathak.in/optimal-timeblocking/ <p>Time blocking has been much acclaimed as a wonderful means to get things done by not giving our mind an alternative - if it's there on the calendar, you do it. Come. What.
May.</p> <p>The premise is this - instead of dumping a list of tasks on the todo list and getting to them when you 'feel like it', if you instead assign a time you'll do each task at, you don't give your brain a chance to procrastinate.</p> <p>It takes various forms - Elon Musk plans his entire day in five minute chunks, with not a minute of his waking day left unscheduled, whereas many others only schedule the most important and unmissable events and tasks, and handle the rest on a 'will be taken up as possible' basis.</p> <p>In such a case, how do we make sure that we block times 'optimally', so that we get the tasks done, and at the same time, keep enough leeway for interruptions?</p> <p>Here are a few steps you can take to ensure you're timeblocking 'optimally':</p> <ol> <li> <p>Block incrementally. A large number of us, in an initial spur of motivation, end up blocking the day down to the minute, only for the entire schedule to unravel at the first overrun task, or the first distraction. To steel your brain into following a calendar is a challenge, and it takes time to get used to it. Thus, start with blocking unavoidable meetings/events/tasks - these have the highest likelihood of not being procrastinated. Once your brain gets into the habit of checking your calendar before picking up a task, start adding further tasks slowly - begin with the ones you're most likely to procrastinate on, and then go to the relatively easy ones. The moment you feel an urge to NOT do a task in spite of it being on the calendar, take a day's break, without adding any further tasks, until you can steel yourself to stick to it. Reason being - the mind should see the calendar as sacred and unmodifiable. If you start pushing around tasks, you'll start doing that with every task on there, in no time.</p> </li> <li> <p>Block time not just for work, but also for 'non work': 'Spending time with family' may not figure on many of our todo lists; however, it does take time. Block time for stuff that's not a direct task/meeting but is anyway going to take time - otherwise, it'll feel like you had an empty calendar, and still got nothing done.</p> </li> <li> <p>Optimal duration: How much time should you give to a task? On the one hand, there's an idea that says that work expands to take up as much time as you allot to it, but it misses the fine print that there are definite upper and lower limits. You can't cook dinner in 2.5 minutes, no matter how motivated you are. Giving too little time to a task will make you feel demotivated at being unable to meet the deadline. And at the same time, giving way too much time to a task will make you procrastinate - the very thing we're trying to avoid. Thus, spend a few extra seconds planning the optimal duration for each slot you block on the calendar. As a rule of thumb, always plan a few more minutes for a task than you think you'll need, since humans have a tendency to overestimate themselves and underestimate the challenges. If the task/meeting involves other people, make sure you finalize the agenda and the duration in advance, since it can otherwise derail very easily.</p> </li> <li> <p>Padding: Add a few minutes of padding after a task - say 15 minutes for every one hour. This is to ensure that you can take a break before getting on with the next task. This break is necessary, because no matter how motivated, humans' attention span for deep work is low, and needs constant replenishment.
Moreover, you can utilize this padding to absorb tasks that overrun.</p> </li> <li> <p>Scheduling breaks: No, all the white space on the calendar is NOT a break. You MUST schedule break times on your calendar, wherein you can actually rejuvenate. And it doesn't mean scrolling socials. Your mind needs a break, your eyes need a break, and your body needs movement - so give it that.</p> </li> <li> <p>Rescheduling: No, you can't work without having to reschedule at least once a week. But at the same time, 'not feeling like it' isn't a valid excuse for pushing a task to the next day. Rescheduling has to follow the same discipline that you followed when scheduling, or you'll eventually end up rescheduling all the tasks you don't want to do. Rescheduling has to follow a careful evaluation process:</p> </li> </ol> <ul> <li> <p>One, reschedule a task only if circumstances entirely out of your control come in and threaten to take up more than 50% of the time you initially allotted for the task. Otherwise, just push the task a bit and see it through.</p> </li> <li> <p>Second, reschedule the task to a time that you KNOW you'll be able to do it at. Pushing a task away to a random slot just to get it out of the way for the moment means that you're going to have to reschedule the task at least once more, and that'll kill off the motivation you have for doing it.</p> </li> <li> <p>Three, if you have to reschedule the same task more than twice, reevaluate it - is it really unavoidable circumstances, or are you just finding excuses to delay the inevitable?</p> </li> <li> <p>Finally, if you end up with a lot of rescheduling done over the week, your scheduling wasn't good enough to begin with - so rethink your scheduling strategy.</p> </li> </ul> <ol start="7"> <li>Flexibility: This may seem counterintuitive, because the tone of this article has been to force your mind. However, flexibility does not mean rescheduling and reprioritizing tasks at will. Instead, it's the freedom to reevaluate your scheduling strategies based on the insights you derive from your present schedule. For instance, if you observe over a week that 9 PM - 10 PM is a super productive time for you, but your calendar is filled with relatively unimportant tasks in that slot, change it in your next schedule. If you observe your tasks often overshoot, reevaluate how you estimate the time block for each task.</li> </ol> <p>Timeblocking is an effective way to avoid procrastinating on necessary tasks by leaving choice out of the equation, and if done the right way, it can greatly boost net productivity.</p> Lessons learnt from a year long experiment on productivity 2022-01-15T00:00:00Z https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/ <p>For the past 52 weeks, I've invested in learning more about productivity patterns, and how I could possibly meet the goals I set for myself, get over my ADHD and do a decent job at work, without burning myself out in the process.</p> <p>This article reflects my major findings - all of which I've tried and tested on myself.</p> <h3 id="1-action-is-the-greatest-motivation" tabindex="-1">1. Action is the greatest motivation<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#1-action-is-the-greatest-motivation">#</a></h3> <p>10 minutes of actually working on a task that's important to you creates a motivation boost to complete the rest of it, far more than any planning ever will.
This is how most habits begin to develop - we start at 1%, and the action becomes the motivation for further action. Thus, next time you procrastinate, just get started on one tiny bit of the task, and it'll boost you to continue.</p> <h3 id="2-conservative-time-blocking" tabindex="-1">2. Conservative time blocking<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#2-conservative-time-blocking">#</a></h3> <p>Timeblocking is a well-recognized technique where you schedule time for a task, and in that duration, work on just that task, thereby eliminating the need for the mind to will itself into picking a task off the todo list.</p> <p>However, if done wrongly, timeblocking can end up being as unproductive as all other todo lists, and way more demoralizing. Timeblocking every minute of the day without considering your energy levels and other distractions can make the exercise futile. Thus, when you start, block time only for the most essential tasks that you dare not skip. Only once you get into the habit should you schedule more of your time. This incremental approach will trick your brain into believing that if it's on the calendar, it's sacred, and can NOT be missed, come what may.</p> <h3 id="3-part-day-planning" tabindex="-1">3. Part-day planning<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#3-part-day-planning">#</a></h3> <p>Most productivity gurus talk about planning one day ahead. However, early on in our careers, we have very little control over our time at work, and thus, our day plans can get disrupted if we're pushed into an energy-draining task that we hadn't expected. At any given point, you have considerably more control over only the next 6 hours of the day. Thus, divide your day into 3 parts, and only plan for the next 6 hours. At 8 AM, plan for your 8-1. At 1:30, plan for your 2-7 PM, and at 7:30 PM, plan for your 8 PM - 1 AM. You'll be able to gauge your energy levels and calendar blockers much better this way.</p> <h3 id="4-biological-prime-time" tabindex="-1">4. Biological Prime Time<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#4-biological-prime-time">#</a></h3> <p>As the name suggests, it refers to the few times of the day when you're at your highest energy. Schedule your most energy-draining tasks around your BPT to increase your chances of getting them done.</p> <p>How do you find out your BPT? For one or two weeks, keep track of how motivated and mentally fresh you feel at every hour of the day. After that duration, you'll see a recurring high at some common time slots. For me, it usually comes between 7 AM - 9 AM in the morning, 5 PM - 7 PM in the evening, and 9:30 - 10:30 PM at night.</p> <p>At your lowest energy levels, either switch off entirely from doing anything, or if that's unavoidable (like if it falls during work hours), do your least energy-consuming 'maintenance' tasks, like checking mail, cleaning up your workspace etc., which require very little mental presence.</p> <h3 id="5-objectivize" tabindex="-1">5.
Objectivize<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#5-objectivize">#</a></h3> <p>A lot of what we do is based on subjective decisions and considerations - I'll consider this website complete when I feel it's 'good enough', or this task will take 'some time', or I have a goal to 'get 6 pack abs eventually'.</p> <p>These subjective connotations mean that your mind has to work at 'interpreting' what they mean before you actually do something about them - your mind has to define when you feel good enough about your website, or how much time 'some time' is, or what you should do at this moment to 'get 6 pack abs eventually'.</p> <p>Instead, creating objective, measurable checklists for your tasks and milestones makes it infinitely easier for you to track them, and removes the need for your brain to expend energy every time defining the criteria for completion. You can just kick into auto gear mode. In the above examples: 'I'll consider this website done once the header has a gradient, I have done the three body sections and added 5 links in the footer', 'completing these 3 checkpoints in the task will take a total of 90 mins', 'I'll do 60 situps and 80 leg rotations every alternate day'.</p> Intro to Terraform 2021-11-23T00:00:00Z https://blog.dkpathak.in/intro-to-terraform/ <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#overview">#</a></h2> <p>Terraform is an Infrastructure as Code (IaC) tool, used to deploy and manage infrastructure (cloud servers, cloud DB instances etc) using code, rather than a GUI. In this tutorial, we'll look at what IaC is, why it's thought to be a better idea than using a GUI, and how Terraform achieves it. We'll then implement a rather unique project - creating a Spotify playlist using Terraform!</p> <p>We'll use an AWS EC2 instance for the tutorial because it's much faster and more straightforward than fighting with our miserly personal laptop RAM. Instructions to set up an AWS EC2 instance can be found <a href="https://dkprobes.tech/setting-up-a-production-ready-application-with-react/#setting-up-an-aws-ec2-instance">here</a></p> <h2 id="introduction-to-iac" tabindex="-1">Introduction to IaC<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#introduction-to-iac">#</a></h2> <p>Infrastructure refers to everything that's used in the deployment of an application - including the server configurations, load balancers, access groups, VPCs and a zillion other things. As beginners, we often use GUIs to configure this infrastructure - such as the AWS EC2 setup you'd have done - configuring the security groups, storage etc., by clicking away at the console.</p> <p>This practice, however, is often not optimal when you're working with hundreds of instances which need very precise configurations, and are worked on by hundreds of developers. In this case, we borrow an old trick - just like how our normal application code changes are managed and maintained using version control, we use code to configure our infrastructure, and push that configuration to version control so that other developers can see it, edit it and use it.</p> <p>How exactly does that work? The configuration we do works via APIs that modify and manage the resources and infrastructure. When we use a GUI like the EC2 dashboard, it's the UI that's making the calls to the APIs for modifying the infrastructure.
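</p> <p>To make that concrete, here's a hypothetical sketch (assuming the AWS CLI is installed and configured; the AMI ID is a placeholder) - the very instance the console wizard creates can be requested through the same underlying API with one command :</p> <pre><code>aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --count 1 </code></pre> <p>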
That, in essence, is the point - the same APIs can also be accessed via code, to give the same result. And that's precisely what IaC is.</p> <h2 id="intro-to-terraform" tabindex="-1">Intro to Terraform<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#intro-to-terraform">#</a></h2> <p>Terraform is the tool to bring IaC to reality. It has a configuration language with which you can interact with the infrastructure platform APIs, like the AWS EC2 APIs, to add, update and remove resources.</p> <p>These configuration files can be pushed to version control, meaning that other developers on the team can refer to these or update them as required, without the intervention of the person who first set it up.</p> <p>So how exactly does it all come together in practice?</p> <h3 id="1-making-configuration-edits" tabindex="-1">1. Making configuration edits<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#1-making-configuration-edits">#</a></h3> <p>The developers first make the changes to the infrastructure in the configuration language.</p> <h3 id="2-execution-plans" tabindex="-1">2. Execution plans<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#2-execution-plans">#</a></h3> <p>Terraform then generates an execution plan based on the configuration changes you made, and asks for your approval, to ensure there are no unexpected changes. You wouldn't want a semicolon removed by the ill-famed intern to bring down your primary server, would you?</p> <h3 id="3-resource-graph" tabindex="-1">3. Resource graph<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#3-resource-graph">#</a></h3> <p>Infrastructure takes time to set up and configure, especially when there's tons of it, each piece with its own specifics. Thus, Terraform creates a resource graph that lets it build and provision independent resources in parallel, saving time.</p>
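<p>You can actually inspect this graph yourself - <code>terraform graph</code> is a built-in command that prints it in DOT format. Rendering it to an image, as sketched below, additionally assumes Graphviz's dot tool is installed :</p> <pre><code>terraform graph | dot -Tpng &gt; graph.png </code></pre>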
<h3 id="4-change-automation" tabindex="-1">4. Change automation<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#4-change-automation">#</a></h3> <p>When you make changes to your infrastructure, Terraform applies those changes with as much efficiency as possible, and with minimal human intervention required.</p> <p>Now that we're clear with the concepts, let's get our hands dirty by setting up Terraform.</p> <h2 id="setting-up-terraform" tabindex="-1">Setting up Terraform<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#setting-up-terraform">#</a></h2> <p>As discussed, we'll be using an EC2 instance to set up and configure Terraform and the other necessary dependencies, since it has much more RAM and doesn't heat your laptop to 10 million degrees.</p> <p>You can follow the instructions in the article linked in the overview for setting up an EC2 instance. If not, you can continue on your personal laptop. OS-specific instructions can be found <a href="https://learn.hashicorp.com/tutorials/terraform/install-cli">here</a></p> <p>Once you're logged into the EC2 terminal, we first need a few packages that Terraform uses. Execute the following commands on the terminal</p> <pre><code>sudo apt-get update &amp;&amp; sudo apt-get install -y gnupg software-properties-common curl </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/1-install-curl.PNG" alt="" /></p> <p>Next, add HashiCorp's GPG key :</p> <pre><code>curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add - </code></pre> <p>Then add the official HashiCorp repository (without this step, apt won't know where to find the terraform package) :</p> <pre><code>sudo apt-add-repository &quot;deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main&quot; </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/2-hashicorp.PNG" alt="" /></p> <p>Finally, to install terraform (we do the apt-get update to refresh the package index with the repository we added in the previous step):</p> <pre><code>sudo apt-get update &amp;&amp; sudo apt-get install terraform </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/4-terraform.PNG" alt="" /></p> <p>Once complete, type <code>terraform -help</code> and a list of options as below will indicate that the installation has been successful.</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/4-terraform-installed.PNG" alt="" /></p> <h2 id="setting-up-docker-engine" tabindex="-1">Setting up Docker Engine<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#setting-up-docker-engine">#</a></h2> <p>Now that we're done installing Terraform, the next step is to set up Docker Engine, since we'll be using a Docker image for our project.</p> <p>First, we update the apt package index and install packages to allow apt to use a repository over HTTPS:</p> <pre><code>sudo apt-get update </code></pre> <pre><code>sudo apt-get install \ ca-certificates \ curl \ gnupg \ lsb-release </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/6-ca-certs.PNG" alt="" /></p> <p>Next, we add Docker's official GPG key. GPG stands for GNU Privacy Guard, and the key is essentially a signing mechanism that lets apt verify the Docker packages you download actually come from Docker.</p> <pre><code> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/7-docker-gpg-key.PNG" alt="" /></p> <p>Next, we add the stable repository :</p> <pre><code> echo \ &quot;deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) stable&quot; | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null </code></pre> <p>Now, we'll install the Docker engine :</p> <pre><code>sudo apt-get update </code></pre> <pre><code>sudo apt-get install docker-ce docker-ce-cli containerd.io </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/8-install-docker.PNG" alt="" /></p> <p>In case you're wondering, <code>apt-get update</code> downloads the package lists from the repositories and &quot;updates&quot; them to get information on the newest versions of packages and their dependencies.</p> <p>Finally, to verify that Docker has been installed successfully, run the hello-world image :</p> <pre><code>sudo docker run hello-world </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/9-docker-run.PNG" alt="" /></p> <h2 id="configuring-spotify" tabindex="-1">Configuring Spotify<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#configuring-spotify">#</a></h2> <p>Next, we'll set up the Spotify developer dashboard. Go to https://developer.spotify.com/dashboard and log in/sign up.
Once you do, you should see a dashboard like this :</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/16-spotify.PNG" alt="" /></p> <p>Click the Create an App button, and enter details like so :</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/17-create-app.PNG" alt="" /></p> <p>and click Create.</p> <p>Once the application is created, click the green Edit Settings button on the top right side.</p> <p>Go to the redirect_uris section and add a URL - <code>http://localhost:27228/spotify_callback</code>. Click on add and then save at the bottom. Do not forget to save - it can be easily missed.</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/18-redirect-url.PNG" alt="" /></p> <p>This URL is what we'll be redirected to, once we're authenticated by Spotify, to have rights to create the playlist.</p> <p>One question you might have - we're using an EC2 instance for the Terraform setup. Why did we add a localhost link there? We'll come to that answer in a bit.</p> <p>Now, since we're dealing with a port that's expected to see some traffic, we'll need to add it to the inbound rules of our instance's AWS security group to avoid failed requests. If you don't know how, follow the instructions <a href="https://dkprobes.tech/setting-up-a-production-ready-application-with-react/#setting-up-an-aws-ec2-instance">here</a></p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/19-add-port.PNG" alt="" /></p> <p>Now, we'll have to add the redirect URL as an environment variable to our EC2 instance. Go to the terminal and enter the following :</p> <pre><code>export SPOTIFY_CLIENT_REDIRECT_URI=http://localhost:27228/spotify_callback </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/20-export.PNG" alt="" /></p> <p>Next, we'll create a .env file to host our Spotify app credentials.</p> <p>Type</p> <pre><code>nano .env </code></pre> <p>to create a .env file and open it in the nano text editor.</p> <p>We'll be adding two variables, the client ID and the client secret :</p> <pre><code>SPOTIFY_CLIENT_ID= SPOTIFY_CLIENT_SECRET= </code></pre> <p>For these values, go to the Spotify developer dashboard, copy the client ID and secret, and paste them here.</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/21-env.PNG" alt="" /></p> <p>And now, the moment of truth - we'll use the docker image of the application to run it and see if we're able to authenticate ourselves. In the terminal, enter the following command :</p> <pre><code>docker run --rm -it -p 27228:27228 --env-file ./.env ghcr.io/conradludgate/spotify-auth-proxy </code></pre> <p>You should see an output like this :</p> <pre><code>Unable to find image 'ghcr.io/conradludgate/spotify-auth-proxy:latest' locally latest: Pulling from conradludgate/spotify-auth-proxy 5843afab3874: Pull complete b244520335f6: Pull complete Digest: sha256:c738f59a734ac17812aae5032cfc6f799e03c1f09d9146edb9c2836bc589f3dc Status: Downloaded newer image for ghcr.io/conradludgate/spotify-auth-proxy:latest APIKey: xxxxxx... Token: xxxxxx... Auth: http://localhost:27228/authorize?token=xxxxxx... </code></pre> <p>Copy the <code>http://localhost</code> URL and paste it in a new browser tab.</p> <p>Well?</p> <p>Did you get a 'Site can't be reached' page? Of course you did. Wonder why?</p> <p>Your server is running on EC2, not on localhost, as we'd noted earlier. So, in the URL, replace the localhost with the Public IPv4 address of your EC2 instance.
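</p> <p>For example, with a hypothetical public IP of 3.110.45.67, the URL you paste becomes :</p> <pre><code>http://3.110.45.67:27228/authorize?token=xxxxxx... </code></pre> <p>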
Once you do that, the page will load an authorization page like this :</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/22-authorize.PNG" alt="" /></p> <p>Click on agree, and you'll be redirected to the localhost link you'd given as the redirect URL. Again, replace the localhost with the IP of the server, and you'll be able to see this message :</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/23-auth-successful.PNG" alt="" /></p> <p>And your terminal will be updated as follows :</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/24-auth-success-terminal.PNG" alt="" /></p> <p>Keep the server up and running. Open a new terminal and SSH into the EC2 instance once again - we need this one for setting up the Terraform configuration.</p> <p>Now, we'll be working on the Terraform configuration we'd need for our app. Use this command to clone a repo that contains the Terraform configuration that searches for songs by Dolly Parton and creates a playlist out of them.</p> <pre><code>git clone https://github.com/hashicorp/learn-terraform-spotify.git </code></pre> <p>And cd into the directory</p> <pre><code>cd learn-terraform-spotify </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/25-clone.png" alt="" /></p> <p>Run an ls command, and you'll see three files in the repo.</p> <p>Enter <code>cat main.tf</code> to view the file. The content will be something like this :</p> <pre><code>terraform { required_providers { spotify = { version = &quot;~&gt; 0.1.5&quot; source = &quot;conradludgate/spotify&quot; } } } variable &quot;spotify_api_key&quot; { type = string } provider &quot;spotify&quot; { api_key = var.spotify_api_key } resource &quot;spotify_playlist&quot; &quot;playlist&quot; { name = &quot;Terraform Summer Playlist&quot; description = &quot;This playlist was created by Terraform&quot; public = true tracks = [ data.spotify_search_track.by_artist.tracks[0].id, data.spotify_search_track.by_artist.tracks[1].id, data.spotify_search_track.by_artist.tracks[2].id, ] } data &quot;spotify_search_track&quot; &quot;by_artist&quot; { artists = [&quot;Dolly Parton&quot;] # album = &quot;Jolene&quot; # name = &quot;Early Morning Breeze&quot; } output &quot;tracks&quot; { value = data.spotify_search_track.by_artist.tracks } </code></pre> <p>The first <code>terraform</code> block contains the Terraform configuration, followed by the provider details. Here, we pass in the Spotify API key, which will allow us to access the developer account and add the song details.</p> <p>Then come the details of the playlist itself - we search for the artist Dolly Parton, and (commented out) the album and name of the song.</p> <p>Next, rename the <code>terraform.tfvars.example</code> file to <code>terraform.tfvars</code> so that Terraform can detect the file, using the following command :</p> <pre><code>mv terraform.tfvars.example terraform.tfvars </code></pre> <p>Next, open the above file using nano and add the API key which you'd copied earlier from the running Docker container. Remember to keep the quotes there.</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/28-api-key.png" alt="" /></p> <p>Next, we'll initialize terraform, which will install the Spotify provider, using the following command :</p> <pre><code>terraform init </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/30-tf-init.png" alt="" /></p> <p>Now, enter</p> <pre><code>terraform apply </code></pre> <p>to apply the configuration you have made.
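</p> <p>(An aside - if you only want to preview the changes without being prompted to apply them, the standard <code>terraform plan</code> command prints the same execution plan and exits :)</p> <pre><code>terraform plan </code></pre> <p>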
With <code>apply</code>, you'll see a confirmation with the details you've entered, like so :</p> <pre><code>Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # spotify_playlist.playlist will be created + resource &quot;spotify_playlist&quot; &quot;playlist&quot; { + description = &quot;This playlist was created by Terraform&quot; + id = (known after apply) + name = &quot;Terraform Summer Playlist&quot; + public = true + snapshot_id = (known after apply) + tracks = [ + &quot;2SpEHTbUuebeLkgs9QB7Ue&quot;, + &quot;4w3tQBXhn5345eUXDGBWZG&quot;, + &quot;6dnco8haegnJYtylV26cBq&quot;, ] } Plan: 1 to add, 0 to change, 0 to destroy. Changes to Outputs: + playlist_url = (known after apply) Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: </code></pre> <p>Enter yes, and the playlist will be created.</p> <pre><code> Enter a value: yes spotify_playlist.playlist: Creating... spotify_playlist.playlist: Creation complete after 1s [id=40bGNifvqzwjO8gHDvhbB3] Apply complete! Resources: 1 added, 0 changed, 0 destroyed. Outputs: playlist_url = &quot;https://open.spotify.com/playlist/40bGNifvqzwjO8gHDvhbB3&quot; </code></pre> <p>And there you have it. You can open the link in the browser.</p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#conclusion">#</a></h2> <p>Thus, in this tutorial, we understood what IaC is, its use cases, and how it improves on GUI-based configuration. We got introduced to Terraform and how it works.</p> <p>We then set up a Spotify playlist using Terraform, getting a decent overview of how it works in the process.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#references">#</a></h2> <ul> <li><a href="https://www.terraform.io/intro/index.html">Terraform Docs</a></li> </ul> AWS Cloudwatch 2021-12-08T00:00:00Z https://blog.dkpathak.in/aws-cloudwatch/ <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#overview">#</a></h2> <p>In this tutorial, we'll understand why our server instances need metrics. We'll set up an EC2 instance, configure Cloudwatch to track metrics for the instance, and set up alerts for when certain criteria are met.</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#prerequisites">#</a></h2> <p>You'll need an AWS account. If you do not have one, sign up on aws.amazon.com.</p> <h2 id="metrics-and-why-we-need-them" tabindex="-1">Metrics, and why we need them<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#metrics-and-why-we-need-them">#</a></h2> <p>Every server instance we use has a finite amount of load it can handle - CPU power, the number of reads/writes, and so on. Under excessive load, the server might crash, leading to service disruption for users. While that doesn't sound like a big deal when working on personal projects, it can have serious business consequences when working with actual users. Remember the time Google went down for just 45 mins? The world practically came to a standstill. To avoid this, we use metrics that track server activity - how many requests it's handling, the CPU being used, and so on.
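</p> <p>To give a feel for what a metric looks like, here's a hypothetical sketch (assuming the AWS CLI is configured, and using a placeholder instance ID) of pulling average CPU utilization over 5-minute windows :</p> <pre><code>aws cloudwatch get-metric-statistics --namespace AWS/EC2 --metric-name CPUUtilization --dimensions Name=InstanceId,Value=i-0123456789abcdef0 --statistics Average --period 300 --start-time 2021-12-08T00:00:00Z --end-time 2021-12-08T01:00:00Z </code></pre> <p>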
If we see traffic hitting the server's limits, we can configure additional instances and <a href="https://dkprobes.tech/setting-up-load-balancing-using-nginx/">balance load across these</a>.</p> <h2 id="introduction-to-aws-cloudwatch" tabindex="-1">Introduction to AWS Cloudwatch<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#introduction-to-aws-cloudwatch">#</a></h2> <p>AWS Cloudwatch is a service that tracks various metrics for your AWS resources, including EC2 instances, S3 buckets, lambda functions, EBS and more.</p> <p>You can create dashboards to track the metrics over time, and make informed decisions about scaling up your server capacity.</p> <p>Most importantly, it also allows you to set up alarms for when certain critical thresholds are hit, so that you can take action immediately, before any service disruption.</p> <h2 id="getting-hands-on-with-cloudwatch" tabindex="-1">Getting hands on with Cloudwatch<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#getting-hands-on-with-cloudwatch">#</a></h2> <p>In this tutorial, we'll set up an EC2 instance and run a simple React app on it. We'll then track the metrics of the instance as we make hits to the server, and set up an alarm for when the requests cross a certain threshold.</p> <p>The following section describes the steps to set up and run a React app on an EC2 instance. If you already have one running, skip this section and go to the next one.</p> <h2 id="setting-up-a-react-app-on-an-ec2-instance" tabindex="-1">Setting up a React app on an EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#setting-up-a-react-app-on-an-ec2-instance">#</a></h2> <p>Next, let’s set up a remote EC2 server instance. As said before, you’ll need an AWS account for the same. If you don’t already have one, you’d need to create it. Remember, it’ll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged for it.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. You’ll get a confirmatory mail once your account is set up and ready.</p> <p>Once you login to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/image2.png" alt="" /></p> <p>Click on the blue ‘Launch a virtual machine’ line, and you’ll be taken to the EC2 setup screen, wherein you’d have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you’d be using to host your application, including the OS configuration - Linux, Ubuntu, Windows etc. If you have been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We’ll be going with Ubuntu server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn’t have the ‘Free tier eligible’ tag, otherwise, you’ll end up having to sell off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image5.png" alt="" /></p> <p>The next step is choosing an instance type. This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we’ll pick the t2.micro instance type, which is the only one available in the free tier. You’ll need larger ones as your application size and requirements in RAM or processing speed increase.
In case you’re not clear about any of the column fields, click the information icon next to the headings to get a description of what it means.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you’re asked the number of server instances you wish to create and some properties regarding them. We only need one server instance. The rest of the properties are auto-filled based on the configuration we selected in earlier steps and/or default values, and thus, should be kept as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image3.png" alt="" /></p> <p>Next, click on Add storage</p> <p>As the name suggests, storage refers to the amount of storage in our server. Note that this isn’t the storage you’d consider for storing databases. This is temporary storage that will last only as long as the instance lasts, and thus, can be used for things like caching. A size of 8GB, which is part of the free tier and is the default, suffices for our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image15.png" alt="" /></p> <p>Next, we’d be adding a tag for our instance. It is a key:value pair that describes an instance. Since we only have a single instance right now, it is not very useful, but when you are working with multiple instances and instance volumes, as will be the case when the application scales, it is used to group, sort and manage these instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image6.png" alt="" /></p> <p>Next, we’ll be adding a security group to our instance. An SG is practically a firewall for your instance, restricting the traffic that can come in and what ports it can access, called inbound, and the traffic that can go out, called outbound. There are further options to restrict the traffic based on IP. For instance, your application will run on port 3000, and thus, that’s a port you’d want all your users to be able to access. Compare that to a Postgres database service running on port 5432. You don’t want anyone else but you meddling with that, so you’ll restrict access on that port to your IP only.</p> <p>Create a new security group. Next, we have to add the rules for the group, describing what ports are accessible to the outside world, and who they are accessible to. Note that outbound traffic has no restrictions by default, meaning that your application can send a request to anywhere without any restriction from the SG unless you choose to restrict it. As for inbound, we’ll first add HTTP on port 80 and HTTPS on port 443. Next, we’ll add an SSH rule for port 22. SSH stands for Secure Shell and will allow you to connect to your instance, as we’ll soon see in the coming section. Finally, we’ll add a custom TCP rule for the port our application is going to expose - port 3000.</p> <p>For simplicity, we’ll keep the sources of all of those at ‘anywhere’. Ideally, SSH should be limited only to those you want to allow to connect to your instance, but for the sake of the tutorial, we’ll keep it at anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image17.png" alt="" /></p> <p>Once the rules are set, click on Review and Launch. You’ll be shown the configurations you’ve selected to ensure you didn’t make a mistake anywhere. Once you hit launch, you’ll be asked to create/select a key pair. As the name suggests, it’s a pair of keys - one held by AWS, and the other by you, that acts as a sort of password for you to connect to your instance.
Anyone wishing to SSH into this instance must have access to this key file, or they won’t be able to connect.</p> <p>The file contains an RSA private key, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It’s recommended that you download the .pem key file to the C:/Users/Home directory on Windows (/home/usr or similar for Linux and Mac), to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image10.png" alt="" /></p> <p>Once the file is downloaded, you’ll get a prompt that your instance is starting, and after a few minutes, your instance will be started. Your EC2 home page should look like this. Note the Name : Main(tag), and the instance type t2.micro that we selected when we were setting up the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image9.png" alt="" /></p> <p>Next, select the instance, and click on Connect on the top bar. It’ll open this page :</p> <p><img src="https://blog.dkpathak.in/img/scalex/image1.png" alt="" /></p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we’ll be using the terminal to connect to your instance (remote server). For that, open a new terminal as administrator (superuser or sudo for Linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we’ll run the <code>chmod 400 keyfilename.pem</code> command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you’ll lose SSH access to that instance forever, and you’ll have to recreate the instance, since you won’t get the .pem file to download again.</p> <p>And once you’re done with that, it’s time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the <code>ssh -i</code> … one</p> <p>It means that we’re ssh-ing into the instance defined by the DNS (the .amazonaws.com thing), and proof that we’re authorized to do it is in the pem file.</p> <p>It’ll ask a confirmation prompt that you have to type yes to, and if all works well, you should see a welcome to Ubuntu text as shown above, which means that you’re now logged into the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image14.png" alt="" /></p> <p>Great going.</p> <p>Now, our next step is to bring the code into our instance and run it. To do that, we’ll clone the repo exactly the same way we cloned it on our local system, using the git clone command.</p> <p>Once you’re done cloning the repo, the next step is to install the dependencies and start the application. Navigate to the repo directory and try running</p> <p><code>npm install</code></p> <p>Did you get an error? Of course you did. You need to install NodeJS on the instance. How do you do that? The answer’s in the error itself :</p> <p><code>sudo apt install nodejs</code></p> <p>This will take a few minutes to complete. Once it’s done, try running npm install again, and you’ll see that this time, you’re able to.</p> <p>Finally, the moment of truth - run</p> <p><code>npm run start</code></p> <p>Once you see the application live on localhost:3000 written on the terminal, you’ll have to navigate to the server IP to check if it works.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image16.png" alt="" /></p> <p>This IP can be found from the AWS instance details - Public IPv4 address.
Copy that, paste it onto a browser tab, and add :3000 after it.</p> <p>If the application is working correctly, you should be able to see the same screen that you saw locally on your machine.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image8.png" alt="" /></p> <h2 id="setting-up-cloudwatch" tabindex="-1">Setting up cloudwatch<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#setting-up-cloudwatch">#</a></h2> <p>Now that you have a working application, we'll set up Cloudwatch.</p> <p>Go to the search bar and type Cloudwatch. You'll get the service option there like so</p> <p>Click on it, and you'll be taken to the Cloudwatch home page. Look at the navigation tab carefully - it has options for Logs, events, metrics, dashboards and so on.</p> <p>Click on the Create dashboard button, and give it a name of your choice.</p> <p>Next, you'll be prompted for the widget type you want to add - line graph/cumulative/alarm, etc.</p> <p>We'll pick the line graph. We can always add more widgets later.</p> <p>Next, you'll be asked where this graph's data should come from - the metrics, or the logs? We'll pick metrics, since that's what we want to track.</p> <p>Next, you'll get a screen like this, and a list of services which you can track. Click on EC2, and use the select all button to have all EC2 metrics showing up on the widget.</p> <p>Finally, click Create widget, and you'll be able to see the widget on the dashboard.</p> <p>Similarly, you can add another widget for numeric data like so :</p> <p>Finally, we'll set up an alarm.</p> <p>Click on add new widget and select alarm.</p> <p>You'll be redirected to the alarms dashboard.</p> <p>Click on Create alarm.</p> <p>We'll be asked to select the metric on which we want to set an alarm. Search for, and select, CPUUtilization.</p> <p>You'll then be asked to specify the conditions for the alarm - we'll set the alarm for when the CPUUtilization is greater than 0.6. (It's a pretty low number, but since we wish to see the alarm triggered without generating that much utilization, we're keeping it this way.)</p> <p>You'll then be prompted to configure notifications - we choose to get notified 'in alarm', that is, when the threshold has been breached.</p> <p>Next, we are asked to select an SNS topic. SNS stands for Simple Notification Service, an AWS service used to send alerts to users. We'll click on creating a new topic, and add our email ID in the email endpoint.</p> <p>Click on create topic.</p> <p>Finally, you'll be asked to enter the name of the alarm. And then, the alarm will be created.</p>
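<p>As an aside, everything we just clicked through can also be done from the AWS CLI. A hypothetical sketch of the same alarm (the instance ID and SNS topic ARN are placeholders) :</p> <pre><code>aws cloudwatch put-metric-alarm --alarm-name cpu-above-threshold --metric-name CPUUtilization --namespace AWS/EC2 --statistic Average --period 300 --threshold 0.6 --comparison-operator GreaterThanThreshold --evaluation-periods 1 --dimensions Name=InstanceId,Value=i-0123456789abcdef0 --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alarm-topic </code></pre>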
<p>You'll get a notification on the top stating that you need to verify your subscription to the SNS topic. Go to the email ID you'd entered in the alarm page, and you'll see a mail from AWS with a confirmation link.</p> <p>If you do not see it, check spam.</p> <p>Once you hit the confirm link, you'll start receiving the notification messages.</p> <p>Now, go to your SSH terminal, and run the following command to drive up the CPU usage.</p> <pre><code>sudo npm i -g pm2 </code></pre> <p>Within a few seconds, you'll see the state of the alarm change to 'In alarm', and you'll have received an email from AWS with the alert.</p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#conclusion">#</a></h2> <p>Thus, in this tutorial, you understood why metrics are important, and how we can use AWS's Cloudwatch service to set up and track metrics for your instances. We set up an EC2 instance, and configured Cloudwatch to track metrics on it.</p> <p>You can further expand on this knowledge and track metrics across your projects to drive improvements.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-tutorials.html">#</a></h2> <ul> <li><a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-tutorials.html">AWS Cloudwatch official tutorials</a></li> </ul> Setting up load balancing using Nginx 2021-11-14T00:00:00Z https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/ <blockquote> <p>This post has been written in collaboration with <a href="https://backtobackswe.com/">BacktoBackSWE.com</a>, a portal for interview preparation.</p> </blockquote> <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#overview">#</a></h2> <p>In this tutorial, we'll understand the core concepts of load balancing - what it is and why we need it - using a practical example. We'll then set up three server instances using AWS EC2. We'll then understand what Nginx is, and configure it on the servers so that one of them acts as a load balancer and directs requests to the other two.</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#prerequisites">#</a></h2> <p>A basic understanding of AWS will be helpful - what an instance is, what SSH is, etc. You'll need an AWS account to set up the servers. If you don't have one, you'll have to sign up on https://aws.amazon.com. You'll be asked for Credit/Debit card details, but as long as you stick to the instructions in this tutorial, you won't be charged.</p> <h2 id="introduction-to-load-balancing" tabindex="-1">Introduction to load balancing<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#introduction-to-load-balancing">#</a></h2> <p>Very few things in software engineering sound like what they are. Fortunately, load balancing is one of them. Let's consider Uber - an application that sees varying loads in a day based on the time of day. If it's rush hour, the application will be overloaded with requests from the thousands of folks who need to get to their offices on time. In contrast, in the middle of the night, the number of requests will be far lower.</p> <p>To handle such scenarios, what does Uber do? They keep multiple servers - each with the same application as their sister server, and all of these sister servers are connected to a main load balancer, not directly to the outside world.
Now, when the requests for booking a ride come in, they go to the load balancer, which redirects the requests to any of the sister servers. The LB also keeps track of how many requests are being processed by each server, so that any one server doesn't get overwhelmed and die of exhaustion while the others sit around swatting flies. This way, the 'load' - the number of requests coming in - gets 'balanced' across the servers, and thus allows all users to have a smooth experience.</p> <p>That's the core concept of load balancing.</p> <h2 id="introduction-to-aws-hosting-services-and-ec2" tabindex="-1">Introduction to AWS hosting services and EC2<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#introduction-to-aws-hosting-services-and-ec2">#</a></h2> <p>AWS isn’t something you’re new to, or you wouldn’t be reading this tutorial, but a one-liner for it is that it’s a cloud hosting solutions provider by Amazon that allows you to host, manage and scale applications. For the sake of this tutorial, AWS will provide the remote servers we'll be working with. The servers themselves will be located in some Amazon data center, but you’d be able to access them remotely from your PC via a set of commands. We’ll be using the EC2 service of AWS. EC2 stands for Elastic Compute Cloud, and it does what we described above - lets you access a remote server and host applications on it.</p> <h2 id="setting-up-an-aws-ec2-instance" tabindex="-1">Setting up an AWS EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#setting-up-an-aws-ec2-instance">#</a></h2> <p>Next, let’s set up a remote EC2 server instance. You’ll need an AWS account for the same. If you don’t already have one, you’d need to create it. Remember, it’ll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged for it.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. You’ll get a confirmatory mail once your account is set up and ready.</p> <p>Once you login to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/image2.png" alt="" /></p> <p>Click on the blue ‘Launch a virtual machine’ line, and you’ll be taken to the EC2 setup screen, wherein you’d have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you’d be using to host your application, including the OS configuration - Linux, Ubuntu, Windows etc. If you have been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We’ll be going with Ubuntu server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn’t have the ‘Free tier eligible’ tag, otherwise, you’ll end up having to sell off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image5.png" alt="" /></p> <p>The next step is choosing an instance type. This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we’ll pick the t2.micro instance type, which is the only one available in the free tier. You’ll need larger ones as your application size and requirements in RAM or processing speed increase.
In case you’re not clear about any of the column fields, click the information icon next to the headings to get a description of what it means.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you’re asked the number of server instances you wish to create and some properties regarding them. We'll be going with 3 instances - 2 as server instances, and the third as a load balancer. They'll be identical copies of each other for now, until we configure one of them.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/aws-multiple.PNG" alt="" /></p> <p>Next, click on Add storage</p> <p>As the name suggests, storage refers to the amount of storage in our server. Note that this isn’t the storage you’d consider for storing databases. This is temporary storage that will last only as long as the instance lasts, and thus, can be used for things like caching. A size of 8GB, which is part of the free tier and is the default, suffices for our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image15.png" alt="" /></p> <p>Next, we’d be adding a tag for our instances. It is a key:value pair that describes an instance. With only a handful of instances right now, it is not very useful, but when you are working with many instances and instance volumes, as will be the case when the application scales, it is used to group, sort and manage these instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image6.png" alt="" /></p> <p>Next, we’ll be adding a security group to our instances. An SG is practically a firewall for your instance, restricting the traffic that can come in and what ports it can access, called inbound, and the traffic that can go out, called outbound. There are further options to restrict the traffic based on IP. For instance, your application will run on port 3000, and thus, that’s a port you’d want all your users to be able to access. Compare that to a Postgres database service running on port 5432. You don’t want anyone else but you meddling with that, so you’ll restrict access on that port to your IP only.</p> <p>Create a new security group. Next, we have to add the rules for the group, describing what ports are accessible to the outside world, and who they are accessible to. Note that outbound traffic has no restrictions by default, meaning that your application can send a request to anywhere without any restriction from the SG unless you choose to restrict it. As for inbound, we’ll first add HTTP on port 80 and HTTPS on port 443. Next, we’ll add an SSH rule for port 22. SSH stands for Secure Shell and will allow you to connect to your instance, as we’ll soon see in the coming section.</p> <p>For simplicity, we’ll keep the sources of all of those at ‘anywhere’. Ideally, SSH should be limited only to those you want to allow to connect to your instance, but for the sake of the tutorial, we’ll keep it at anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/security-group-for-load-balancer.PNG" alt="" /></p> <p>Once the rules are set, click on Review and Launch. You’ll be shown the configurations you’ve selected to ensure you didn’t make a mistake anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/review-and-launch-load-balancer.PNG" alt="" /></p> <p>Once you hit launch, you’ll be asked to create/select a key pair.
As the name suggests, it’s a pair of keys - one held by AWS, and the other by you, that acts as a sort of password for you to connect to your instance. Anyone wishing to SSH into this instance must have access to this key file, or they won’t be able to connect.</p> <p>The file contains an RSA private key, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It’s recommended that you download the .pem key file to the C:/Users/Home directory on Windows (/home/usr or similar for Linux and Mac), to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image10.png" alt="" /></p> <p>Once the file is downloaded, you’ll get a prompt that your instances are starting, and after a few minutes, they'll be started. Your EC2 home page should look like this (three running instances - ignore the fourth terminated one you can see here, it's an old one):</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-three-instances-running.PNG" alt="" /></p> <p>For easier understanding, let's rename our instances. If you hover over their names, you'll see a pencil icon - you can click on it to rename the instances - Server-A, Server-B and Load-Balancer, like so :</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-renamed-servers.PNG" alt="" /></p> <p>Now that our instances are running, we have to connect to each one of them. We'll connect to them via the SSH command line, the terminal. For easy access, we'll stay connected to all three of them via three separate terminals.</p> <p>Select one of the instances, and click on Connect. You'll be taken to another page.</p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we’ll be using the terminal to connect to your instance (remote server). For that, open a new terminal as administrator (superuser or sudo for Linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we’ll run the <code>chmod 400 keyfilename.pem</code> command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you’ll lose SSH access to that instance forever, and you’ll have to recreate the instance, since you won’t get the .pem file to download again.</p> <p>And once you’re done with that, it’s time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the <code>ssh -i</code> one</p> <p>It means that we’re ssh-ing into the instance defined by the DNS (the .amazonaws.com thing), and proof that we’re authorized to do it is in the pem file.</p> <p>It’ll ask a confirmation prompt that you have to type yes to, and if all works well, you should see a welcome to Ubuntu text as shown above, which means that you’re now logged into the instance.</p> <p>Repeat the exact same process for the other two servers in two separate command prompts.</p> <p>If all goes well, you should have the three command prompts open, looking like this</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-three-cmds.PNG" alt="" /></p> <p>Great going.</p> <p>Now, we'll be installing Nginx onto each of the three servers, to let us set up load balancing.</p> <h2 id="intro-to-nginx" tabindex="-1">Intro to Nginx<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#intro-to-nginx">#</a></h2> <p>Nginx is a lot of things.
Primarily, it's a web server - it takes requests for applications hosted on it, and returns the corresponding files as responses to those requests. What does it look like? It's essentially a piece of software that you download and set up on a machine. It has configuration that, once set up, allows the host machine to accept incoming requests, process them, and send out the outputs.</p> <p>This request-response ability of Nginx can be put to other uses as well - such as load balancing, reverse proxying, and so on. Load balancing is what we're going to use it for in this tutorial.</p> <p>Since Nginx has the ability to accept requests, we can also configure it to accept requests and, based on preset rules, direct those requests to other Nginx servers.</p> <p>See the reason for the three servers now? Each of those will have Nginx set up on them, and thus, all of them can accept incoming requests and return the corresponding responses. We'll configure one of them to work as a load balancer, such that all it does is accept the traffic, and redirect it to either of the two other servers.</p> <p>Now that we're clear with the theory, let's see how we can set up our servers for the task</p> <h2 id="configuring-the-servers" tabindex="-1">Configuring the servers<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#configuring-the-servers">#</a></h2> <p>Go to the server A command prompt, and type the following command</p> <pre><code>sudo apt-get update </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-apt-get-update.PNG" alt="" /></p> <p>Once that's done, this command :</p> <pre><code>sudo apt-get install nginx </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-install-nginx.PNG" alt="" /></p> <p>Now, go to the EC2 instance dashboard, select Server A, copy its public IPv4 DNS from the details below (remember, copy it - directly opening the URL might lead to unexpected errors) and paste it in a new browser window.</p> <p>You should see a plain HTML page like so :</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-public-dns-home-page.PNG" alt="" /></p> <p>Repeat the exact same procedure for Server B and the Load Balancer, and ensure that you see the Welcome to Nginx page on the public DNS links for both of these as well.</p> <p>Next, let's try to edit this page so that we can uniquely identify the server the page is on just by looking at it.</p> <p>As you might've guessed, the content comes from a simple index.html page that comes with the nginx installation.</p> <p>In the terminal for server B, we'll go into the directory that houses the index.html page using the following command :</p> <pre><code>cd /var/www/html </code></pre> <p>Type</p> <pre><code>ls -l </code></pre> <p>to list the files inside the directory, and sure enough, you'll see a file named something like <code>index.nginx-debian.html</code> (the <code>nginx-debian</code> part tells us that we have the Debian build of nginx installed - Debian is a Linux distribution, like Ubuntu and Fedora)</p> <p>This is the file whose contents we'll have to edit to customize them for the server we're on.</p> <p>Type</p> <pre><code>sudo nano index.nginx-debian.html </code></pre> <p>which will open the file in the Nano editor - a text editor for Ubuntu. And sure enough, you'll see the same Welcome to nginx content in the file that shows up on the public DNS.</p> <p>Replace the content of the file so the page identifies the server it's on - for Server B, any simple markup works. A minimal, hypothetical sketch :</p>
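<pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Server B&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Hello from Server B!&lt;/h1&gt; &lt;/body&gt; &lt;/html&gt; </code></pre>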
<p>Here's what that looks like in Nano for Server B :</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-nano-server-B.PNG" alt="" /></p> <p>Once that's done, do a command/Ctrl + X to exit the editor. The terminal will prompt if you want to save it - type Y and hit enter to return to the terminal.</p> <p>Repeat the exact same process for Server A.</p> <h2 id="configuring-load-balancer" tabindex="-1">Configuring load balancer<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#configuring-load-balancer">#</a></h2> <p>Now comes the main configuration change - configuring the load balancer to manage the requests going to A and B by routing them through it. Based on the ratio we decide, x% of the requests will go to server A, and the remaining to B.</p> <p>This configuration is done in the nginx.conf file. To go there</p> <pre><code>cd /etc/nginx </code></pre> <p>Then, to open the file</p> <pre><code>sudo nano nginx.conf </code></pre> <p>Do NOT forget the <code>sudo</code>, since you'd otherwise not be able to save the file after editing it - editing a configuration file requires superuser permission.</p> <p>You'll see some pre-written content in the file already. Clear all of it, and paste the following content in there (note the semicolon after <code>proxy_pass http://myapp</code> - nginx will refuse to start without it) :</p> <pre><code>events { } http { upstream myapp { server &lt;Server_1_Address&gt; weight=1; server &lt;Server_2_Address&gt; weight=1; } server { listen 80; location / { proxy_pass http://myapp; } } } </code></pre> <p>And replace &lt;Server_1_Address&gt; with the Public IPv4 address of Server A, and similarly for B.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-nano-conf-final.PNG" alt="" /></p> <p>Since we updated the configuration file, we need to restart nginx (you can sanity-check the edited file first with <code>sudo nginx -t</code>), which we do with this command :</p> <pre><code>sudo systemctl restart nginx </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-restart.PNG" alt="" /></p> <p>Note that we didn't have to restart the service after updating the index.html file, since we didn't change any Nginx configuration when we edited that file.</p> <p>Now, if you go to the public DNS of the Load balancer and refresh it - you'll see Server A. Refresh it again - Server B, and this alternates each time.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-done.PNG" alt="" /></p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-done-2.PNG" alt="" /></p> <p>So, what just happened? And what's all the gobbledygook we wrote in the conf file?</p> <p>The empty <code>events {}</code> block at the top is boilerplate that nginx requires in every config file. The <code>http {}</code> block reflects the type of requests we'll be accepting - HTTP requests. Upstream means that the requests will be sent FROM the load balancer, to the other servers. What other servers? The servers defined inside that block - defined by their IP addresses. 'myapp' is the name of the group of servers. We then have the server addresses, and weights for each. What do the weights represent? The ratio of the requests - right now, it's 1:1, which is why we see requests going to A and B alternately. You may tweak the weights to see the corresponding changes. In real life, some servers are often larger and can handle more requests, and thus, are allotted more weight.</p>
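<p>For instance, a hypothetical 3:1 split - Server A receiving three requests for every one that reaches Server B - would just change the upstream block like so :</p> <pre><code>upstream myapp { server &lt;Server_A_Address&gt; weight=3; server &lt;Server_B_Address&gt; weight=1; } </code></pre>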
<p>Thus, this is how we've successfully set up a load balancing system using three AWS servers.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#references">#</a></h2> <ul> <li> <p><a href="https://developer.mozilla.org/en-US/docs/Learn/Common_questions/What_is_a_web_server">What is a web server</a></p> </li> <li> <p><a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass">Proxy_pass</a></p> </li> </ul> Intro to Server security 2021-11-21T00:00:00Z https://blog.dkpathak.in/intro-to-server-security/ <blockquote> <p>This post has been written in collaboration with <a href="https://backtobackswe.com/">BacktoBackSWE.com</a>, a portal for interview preparation.</p> </blockquote> <h2 id="introduction" tabindex="-1">Introduction<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#introduction">#</a></h2> <p>None of us would want an attack on our servers - neither us, who actually own the applications being run on the servers and whose bread and butter depend on the application running smoothly, nor the cloud provider - AWS, Azure, GCP - who actually owns the server and whose bread and butter depend on our continuing to use the server.</p> <p>However, server attacks, breaches and data thefts are as old as the concept of shared servers itself. And interestingly, the job of ensuring security is shared between the cloud provider and the user.</p> <p>Why, you ask. The cloud provider owns the server, and therefore, it ought to be their responsibility to ensure its security, no? If you open a locker in a bank, they don't ask you to provide a security guard for the safety of your locker, do they? That's their responsibility.</p> <p>However, the bank does ask you to clearly specify the owners, and expects you to remember your details every time you wish to visit your locker. If you end up revealing your details to a thief, who can then access your locker, it's not really the bank's fault, is it? Just like that, AWS does have general firewalls and gatekeepers that are usually meant to keep 'unauthorized' requests out. However, if you end up authorizing a client IP, the firewalls and gatekeepers of AWS will have no choice but to let it through. Thus, it's what AWS calls a 'shared responsibility model', with clearly defined areas that AWS secures, and others that the user does.</p> <p>In this tutorial, we'll be looking at some of the options AWS provides us to ensure the security of the servers we rent.</p> <h2 id="1-users-responsibilities" tabindex="-1">1. User's responsibilities<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#1-users-responsibilities">#</a></h2> <p>The facets of security that the user is responsible for include :</p> <p>A. Control network access : Control what requests can come to the server, which sets of IPs can make requests, and which ports they can use. The concept of security groups, which we'll also be doing practically, falls here.</p> <p>B. Credential management : Who has the credentials to connect to your server, such as the .pem file, the access to the private IP of the server, and so on.</p>
<p>C. Server OS updates : What security and critical software updates should be allowed onto the server, how frequently, and from what trusted sources.</p> <p>D. IAM roles : IAM stands for Identity and Access Management, and is mainly useful when different people are responsible for different sets of services on AWS. For instance, you want to restrict EC2 connection to only a select few, but wish to allow RDS access to some other members of the database team - you can configure that with IAM.</p> <p>We'll be getting a hands-on understanding of A and D, as well as understanding the concepts behind some of the other security practices AWS encourages.</p> <h2 id="2-security-groups" tabindex="-1">2. Security groups<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#2-security-groups">#</a></h2> <p>We start with this topic, as it's the most commonly configured when working with AWS EC2 for new cloud users.</p> <p>A security group is a firewall for incoming and outgoing traffic for the server. You can configure it to specify the protocols (SSH, TCP, HTTP, HTTPS etc) and corresponding ports you wish to allow traffic on, and which IPs you wish to allow traffic from. These are called inbound rules. Similarly, there are outbound rules - they define what you want your server to be able to access. This is usually kept open, since you're mainly bothered with what comes into the server, and not what goes out.</p> <p>To understand the role of a security group better, we'll be provisioning an EC2 instance, launching an application on it and customizing the security group to allow access to it. If you already have an EC2 instance running, you may skip the next section.</p> <h2 id="setting-up-an-aws-ec2-instance" tabindex="-1">Setting up an AWS EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#setting-up-an-aws-ec2-instance">#</a></h2> <p>As said before, you’ll need an AWS account for the same. If you don’t already have one, you’d need to create it. Remember, it’ll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged for it.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. You’ll get a confirmatory mail once your account is set up and ready.</p> <p>Once you login to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/image2.png" alt="" /></p> <p>Click on the blue ‘Launch a virtual machine’ line, and you’ll be taken to the EC2 setup screen, wherein you’d have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you’d be using to host your application, including the OS configuration - Linux, Ubuntu, Windows etc. If you have been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We’ll be going with Ubuntu server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn’t have the ‘Free tier eligible’ tag, otherwise, you’ll be having to sell off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image5.png" alt="" /></p> <p>The next step is choosing an instance type. 
This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we’ll pick the t2.micro instance type, the only one available in the free tier.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you’re asked the number of server instances you wish to create and some properties regarding them. We only need one server instance. The rest of the properties are auto filled based on the configuration we selected in earlier steps and/or default values, and thus, should be kept as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image3.png" alt="" /></p> <p>Next, click on Add storage</p> <p>As the name suggests, storage refers to the amount of storage in our server. Note that this isn’t the storage you’d consider for storing databases. This is temporary storage that will last only as long as the instance lasts, and thus, can be used for things like caching. A size of 8GB, that’s part of the free tier, and is the default, suffices our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image15.png" alt="" /></p> <p>Next, we’d be adding a tag for our instance. It is a key:value pair that describes an instance. Since we only have a single instance right now, it is not very useful, but when you are working with multiple instances and instance volumes, as will be the case when the application scales, it is used to group, sort and manage these instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image6.png" alt="" /></p> <p>Next comes the security group option. Do not edit anything in there for now. We'll edit it later.</p> <p>Once that's done, click on Review and Launch. You’ll be shown the configurations you’ve selected to ensure you didn’t make a mistake anywhere. Once you hit launch, you’ll be asked to create/select a key pair. As the name suggests, it’s a pair of keys - one held by AWS, and the other by you, that acts as a sort of password for you to connect to your instance. Anyone wishing to SSH into this instance must have access to this key file or they won’t be able to.</p> <p>The content of the file is RSA encrypted, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It’s recommended that you download the .pem key file to C:/Users/Home directory on Windows (/home/usr or similar for Linux and Mac), to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image10.png" alt="" /></p> <p>Once the file is downloaded, you’ll get a prompt that your instance is starting, and after a few minutes, your instance will be started. Your EC2 home page should look like this. Note the Name : Main (tag), the Instance type t2.micro that we selected when we were setting up the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image9.png" alt="" /></p> <p>Next, select the instance, and click on Connect on the top bar. It’ll open this page :</p> <p><img src="https://blog.dkpathak.in/img/scalex/image1.png" alt="" /></p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we’ll be using the terminal to connect to your instance (remote server). 
For that, open a new terminal as administrator (superuser or sudo for linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we’ll run the chmod 400 keyfilename.pem command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you’ll lose SSH access to that instance forever, and you’ll have to recreate the instance, since you won’t get the .pem file to download again.</p> <p>And once you’re done with that, it’s time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the ssh -i … one</p> <p>It means that we’re ssh-ing into the instance defined by the DNS (the .amazonaws.com thing), and proof that we’re authorized to do it is in the pem file.</p> <p>It’ll ask a confirmation prompt that you have to type yes to, and if all works well, you should see a welcome to Ubuntu text as shown above, which means that you’re now logged into the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image14.png" alt="" /></p> <p>Great going.</p> <h2 id="setting-up-react-application-on-ec2" tabindex="-1">Setting up React application on EC2<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#setting-up-react-application-on-ec2">#</a></h2> <p>The next step is to set up a sample application on the instance. We'll be using a simple React todolist app for the same. We just have to clone it on the instance, like we'd have done for our local laptop/PC.</p> <pre><code>git clone https://github.com/gagangaur/React-TODO-App.git </code></pre> <p>You need not know any React for this, since we're only focusing on the security aspects.</p> <p>Once it's cloned, we'll have to install npm, and then set up the dependencies for the project</p> <p>The commands are</p> <pre><code>sudo apt-get update
sudo apt-get install npm
</code></pre> <pre><code>cd React-TODO-App
npm install
</code></pre> <p>Finally, once the dependencies are installed, we run the application using <code>npm run start</code>, and if you'd followed all the steps perfectly, you should see the app running on port 5000</p> <p><img src="https://blog.dkpathak.in/img/scalex/image16.png" alt="" /></p> <p>So now, ideally, you should be able to see the app running on the server, right? We access the instance using the public IPv4 address. Copy it from the EC2 console home and paste it on the address bar, and add a :5000 at the end to indicate the port number. Did the application load?</p> <p>Unfortunately not.</p> <p>The reason is - security group. Remember, we hadn't made any change to the default security group settings when setting up the instance. And by default, inbound rules restrict everything but SSH access into port 22 - which we used to connect to the instance using the <code>ssh -i..</code> command. To be able to access the running application from our browser, we need to allow access to the port the application is running on, 5000, to the outside world.</p> <p>To do that, go to AWS. In the left navigation pane, scroll down to find the “Network and Security” section, and within it, Security groups. Open it, and select the security group attached to the instance (not the default one).</p> <p>Below, go to the Inbound rules tab, and hit the edit inbound rules button.</p> <p>Now, put in a custom TCP connection rule for port 5000, and allow access from 'anywhere'. Note, to avoid issues arising due to DHCP (read <a href="https://www.quora.com/Does-my-IP-address-constantly-change-or-stay-the-same">this</a> for more info), we're allowing access from anywhere, but you can also restrict some ports so that only specific IPs can access them.</p>
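<p>For reference, the same inbound rule can be added from the AWS CLI - a sketch, where the group ID is a placeholder for your own security group's ID:</p> <pre><code># allow inbound TCP traffic on port 5000 from any IPv4 address
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5000 \
    --cidr 0.0.0.0/0
</code></pre>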
<p>Once that’s done, save the rules, and come back to the public IP page, and refresh. If you didn’t mess up, you should be able to see the application loading on port 5000 now!</p> <p>This is one of the most important security measures put in place by AWS to ensure you're in charge of what traffic you allow in and out of the server.</p> <h2 id="iam---identity-and-access-management" tabindex="-1">IAM - Identity and Access Management<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#iam---identity-and-access-management">#</a></h2> <p>In professional settings, you'll be working across a large team, with multiple people with different responsibilities. It's neither required nor safe to grant each of the users complete access to everything on your instances. For instance, folks in the database team only have to deal with the RDS services, and have little use case for the lambda services.</p> <p>To manage the permissions for users, we use this service called IAM.</p> <p>Go to https://console.aws.amazon.com/iamv2/home#/home</p> <p>You should see a screen similar to this. This is the IAM home page.</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-1.PNG" alt="" /></p> <p>Select the Users option from the left navigation tab, and it'll show the existing list of users - empty initially. Let's create a user</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-2.PNG" alt="" /></p> <p>In the AWS access type section, select the password option, and add a custom password of your choice - we're doing this for ease of access. Leave the rest as it is, and click the Next:Permissions button</p> <p>You'll then see an option to add a user to a group. As the name suggests, a user group is a set of users who'll have similar permissions and accesses. Thus, all members of the database team will have a similar set of permissions - to view and edit the database. We won't be bothering with a user group creation in this tutorial.</p> <p>Click on the attach existing policies directly tab</p> <p>Here, we have to specify the permissions we wish to grant to this user.</p> <p>We'll add the PowerUserAccess, since we want this user to have complete control of the EC2 instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-5-power-user-access.PNG" alt="" /></p> <p>In the set permissions boundary section, leave it unchanged. Click on the Next:Tags button</p> <p>Add a tag like so</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-6.PNG" alt="" /></p> <p>Click on Next:Review, scan through all the options you've chosen, and finally hit Create user to see a screen like this</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-7.PNG" alt="" /></p> <p>Woohoo! You have successfully created a new user. You can download the csv containing the user details, or mail the access details to the user you wish to assign it to. The user will then be able to access the information she/he is allotted, without you having to share your root AWS account password. Sweet, no?</p>
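<p>As with security groups, this whole user setup can also be scripted - a sketch using the AWS CLI, with a hypothetical user name and a placeholder password:</p> <pre><code># create the user, give it a console password, and attach the managed policy
aws iam create-user --user-name ec2-power-user
aws iam create-login-profile --user-name ec2-power-user --password 'REPLACE_ME'
aws iam attach-user-policy \
    --user-name ec2-power-user \
    --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
</code></pre>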
<h2 id="network-isolation" tabindex="-1">Network Isolation<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#network-isolation">#</a></h2> <p>A virtual private cloud (VPC) is a virtual network in your own logically isolated area in the AWS Cloud. Use separate VPCs to isolate infrastructure by workload or organizational entity.</p> <p>A subnet is a range of IP addresses in a VPC. When you launch an instance, you launch it into a subnet in your VPC. Use subnets to isolate the tiers of your application (for example, web, application, and database) within a single VPC. Use private subnets for your instances if they should not be accessed directly from the internet.</p> <p>These were some of the major security features that AWS allows us to leverage and customize, to ensure server security.</p> Setting up a NodeJS service for production 2021-11-16T00:00:00Z https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/ <blockquote> <p>This post has been written in collaboration with <a href="https://backtobackswe.com/">BacktoBackSWE.com</a>, a portal for interview preparation.</p> </blockquote> <h2 id="table-of-contents" tabindex="-1">Table of contents<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#table-of-contents">#</a></h2> <ul> <li> <p>Overview</p> </li> <li> <p>Prerequisites</p> </li> <li> <p>Introduction to Node, Express and MongoDB</p> </li> <li> <p>Introducing the application we’ll be using</p> </li> <li> <p>Introduction to AWS and EC2 hosting services</p> </li> <li> <p>Setting up an AWS EC2 instance</p> </li> <li> <p>Cloning Node Express app on server</p> </li> <li> <p>Setting up MongoDB</p> </li> <li> <p>Testing the application so far</p> </li> <li> <p>Setting up additional packages</p> </li> <li> <p>Setting up monitoring using PM2</p> </li> <li> <p>Conclusion</p> </li> <li> <p>References</p> </li> </ul> <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#overview">#</a></h2> <p>NodeJS is a server side programming framework using JavaScript. Using NodeJS frameworks like Express, you can create backend services quickly and wire them up with the frontend, all in JavaScript.</p> <p>We’ll be using a Node-Express application built along the lines of the Zomock application, with a MongoDB database. You’ll get a basic understanding of a Node-Express application, and some things you need to consider while building for production. You’ll then set up a remote server using AWS EC2, similar to how you did in the React tutorial. You’ll then set up MongoDB using MongoDB’s cloud offering called Atlas, and connect your Node-Express app to MongoDB. Finally, you’ll run your service using PM2 to keep the application running even after you’ve closed down the SSH connection. We conclude with some additional steps you can choose to add to your project yourself, and finally, leave you with some references for further information.</p> <p>Let’s jump in.</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#prerequisites">#</a></h2> <p>You’re expected to have a basic understanding of what Node and Express are and how to write simple NodeJS code to start a server. Here is a sample tutorial in case you’re entirely new.</p>
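<p>For orientation, here's a minimal sketch of what such a Node-Express server looks like - illustrative only, not the tutorial's actual app:</p> <pre><code>// index.js - minimal Express server (illustrative sketch)
const express = require('express');
const app = express();

// a single GET endpoint returning JSON
app.get('/restaurants', (req, res) =&gt; {
  res.json([]);
});

app.listen(5000, () =&gt; console.log('Server running on port 5000'));
</code></pre>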
<p>You should have a basic idea of Postman, which we’ll be using to check if our service is working as expected.</p> <h2 id="introduction-to-node-express-and-mongodb" tabindex="-1">Introduction to Node, Express and MongoDB<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#introduction-to-node-express-and-mongodb">#</a></h2> <p>NodeJS (or simply Node) is a JavaScript runtime and server side framework, meaning that it allows you to create server side applications, and provides you the environment to run them in. Express is a framework based on NodeJS to help you create the endpoints for the application.</p> <p>MongoDB is a NoSQL database that stores data in the form of documents and collections. It is NoSQL, since it doesn't have tables and doesn't enforce a fixed schema across all documents.</p> <p>In case you need further brushing up on any of these, take a look at the links in the last section of the tutorial. Note that while we'll not be focusing on the development aspect - we'll be looking at the deployment instead - you're still expected to know the basics to be able to understand some of the concepts we'll be using.</p> <h2 id="introduction-to-the-application-well-be-using" tabindex="-1">Introduction to the application we'll be using<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#introduction-to-the-application-well-be-using">#</a></h2> <p>We'll be using a simple mock Zomato API express application for the tutorial. This API exposes an endpoint to return a list of restaurants with details like rating and cost. You can also add restaurants by making a POST request. The application uses Node and Express for the logic, and MongoDB as a database, which we'll be setting up from scratch in the coming sections.</p> <h2 id="introduction-to-aws-hosting-services-and-ec2" tabindex="-1">Introduction to AWS hosting services and EC2<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#introduction-to-aws-hosting-services-and-ec2">#</a></h2> <p>AWS isn’t something you’re new to, or you wouldn’t be reading this tutorial, but a one liner for it is that it’s a cloud hosting solutions provider by Amazon that allows you to host, manage and scale applications. For the sake of this tutorial, AWS will provide you the remote server where your Node service will eventually run. The server itself will be located in some Amazon Data center, but you’d be able to access it remotely from your PC via a set of commands. We’ll be using the EC2 service of AWS. EC2 stands for Elastic Compute Cloud, and it does what we described above - lets you access a remote server and use it to host applications.</p> <h2 id="setting-up-an-aws-ec2-instance" tabindex="-1">Setting up an AWS EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#setting-up-an-aws-ec2-instance">#</a></h2> <p>Next, let’s set up a remote EC2 server instance. As said before, you’ll need an AWS account for the same. If you don’t already have one, you’d need to create it. Remember, it’ll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged for it.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. 
You’ll get a confirmatory mail once your account is set up and ready.</p> <p>Once you login to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/image2.png" alt="" /></p> <p>Click on the blue ‘Launch a virtual machine’ line, and you’ll be taken to the EC2 setup screen, wherein you’d have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you’d be using to host your application, including the OS configuration - Linux, Ubuntu, Windows etc. If you have been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We’ll be going with Ubuntu server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn’t have the ‘Free tier eligible’ tag, otherwise, you’ll be having to sell off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image5.png" alt="" /></p> <p>The next step is choosing an instance type. This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we’ll pick the t2.micro instance type, the only one available in the free tier. You’ll need larger ones as your application size and requirements in RAM or processing speed increase. In case you’re not clear with any of the column fields, click the information icon next to the headings to get a description of what it means.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you’re asked the number of server instances you wish to create and some properties regarding them. We only need one server instance. The rest of the properties are auto filled based on the configuration we selected in earlier steps and/or default values, and thus, should be kept as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image3.png" alt="" /></p> <p>Next, click on Add storage</p> <p>As the name suggests, storage refers to the amount of storage in our server. Note that this isn’t the storage you’d consider for storing databases. This is temporary storage that will last only as long as the instance lasts, and thus, can be used for things like caching. A size of 8GB, that’s part of the free tier, and is the default, suffices our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image15.png" alt="" /></p> <p>Next, we’d be adding a tag for our instance. It is a key:value pair that describes an instance. Since we only have a single instance right now, it is not very useful, but when you are working with multiple instances and instance volumes, as will be the case when the application scales, it is used to group, sort and manage these instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image6.png" alt="" /></p> <p>Next, we’ll be adding a security group to our instance. A SG is practically a firewall for your instance, restricting the traffic that can come in, what ports it can access, called inbound, and the traffic that can go out, called outbound. There’s further options to restrict the traffic based on IP. For instance, your application will run on port 5000, and thus, that’s a port you’d want all your users to be able to access. Compare that to a Postgres database service running on port 5432. 
You don’t want anyone else but you meddling with that, so you’ll restrict the IP of that port to only you.</p> <p>Create a new security group. Next, we have to add the rules for the group, describing what ports are accessible to the outside world, and who they are accessible to. Note that outbound traffic has no restrictions by default, meaning that your application can send a request to anywhere without any restriction from the SG unless you choose to restrict it. As for inbound, we’ll first add HTTP on port 80 and HTTPS on port 443. Next, we’ll add an SSH rule for port 22. SSH stands for Secure Socket Shell and will allow you to connect to your instance, as we’ll soon see in the coming section. Finally, we’ll add a custom TCP rule for the port our application is going to expose - port 5000.</p> <p>For simplicity, we’ll keep the sources of all of those at ‘anywhere’. Ideally, SSH should be limited only to those you want to allow to connect to your instance, but for the sake of the tutorial, we’ll keep it at anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image17.png" alt="" /></p> <p>Once the rules are set, click on Review and Launch. You’ll be shown the configurations you’ve selected to ensure you didn’t make a mistake anywhere. Once you hit launch, you’ll be asked to create/select a key pair. As the name suggests, it’s a pair of keys - one held by AWS, and the other by you, that acts as a sort of password for you to connect to your instance. Anyone wishing to SSH into this instance must have access to this key file or they won’t be able to.</p> <p>The content of the file is RSA encrypted, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It’s recommended that you download the .pem key file to C:/Users/Home directory on Windows (/home/usr or similar for Linux and Mac), to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image10.png" alt="" /></p> <p>Once the file is downloaded, you’ll get a prompt that your instance is starting, and after a few minutes, your instance will be started. Your EC2 home page should look like this. Note the Name : Main (tag), the Instance type t2.micro that we selected when we were setting up the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image9.png" alt="" /></p> <p>Next, select the instance, and click on Connect on the top bar. It’ll open this page :</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/image1.png" alt="" /></p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we’ll be using the terminal to connect to your instance (remote server). For that, open a new terminal as administrator (superuser or sudo for linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we’ll run the chmod 400 keyfilename.pem command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you’ll lose SSH access to that instance forever, and you’ll have to recreate the instance, since you won’t get the .pem file to download again.</p> <p>And once you’re done with that, it’s time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the <code>ssh -i</code> one.</p>
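<p>As a sketch, it looks like the following - the key file name and DNS here are placeholders, and yours will differ:</p> <pre><code># -i points ssh at the identity (key) file; 'ubuntu' is the default user on Ubuntu AMIs
ssh -i "keyfilename.pem" ubuntu@ec2-12-34-56-78.compute-1.amazonaws.com
</code></pre>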
<p>It means that we’re ssh-ing into the instance defined by the DNS (the .amazonaws.com thing), and proof that we’re authorized to do it is in the pem file.</p> <p>It’ll ask a confirmation prompt that you have to type yes to, and if all works well, you should see a welcome to Ubuntu text as shown above, which means that you’re now logged into the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/main-node-ssh-connected.PNG" alt="" /></p> <p>Great going.</p> <p>Now, our next step is to bring the code into our instance and run it. To do that, we'll clone the repo we're working with, using</p> <pre><code>git clone https://github.com/dkp1903/zomock.git </code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/zomock-clone.PNG" alt="" /></p> <p>Once it's complete, go to the installed folder using</p> <pre><code>cd zomock </code></pre> <p>We'll have to create an additional .env file in the repo. What is this file for? Our app has some configurations and credentials that we'd rather keep secret. This includes things like database passwords, connection urls and so on. Thus, we need a file where we can store this, and NOT commit this file to version control. The .env file is the accepted standard.</p> <p>In our case, we'll be storing two things - one, the PORT number of our application and two, the connection URL to our MongoDB database, which includes a database username and password. For now, we'll start with just the port number, and add the database URL once we set up the database in the next section. To create the env file, type</p> <pre><code>nano .env </code></pre> <p>This will open the env file in the Nano text editor.</p> <p>Add the following line in there :</p> <pre><code>PORT=5000 </code></pre> <p>To save the file, press Ctrl + X. You'll be prompted if you want to save the changes. Enter Y, and the file will be saved and you'll go back to the CLI.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/nano-env.PNG" alt="" /></p> <p>The next step is to install the dependencies.</p> <pre><code>npm install </code></pre> <p>Did you get an error? Of course you did. You need to install NPM on the instance. How do you do that? The answer’s in the error itself :</p> <pre><code>sudo apt install npm </code></pre> <p>If you get an error like this, use the command <code>sudo apt-get update</code> and then rerun the above command</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/error-1.PNG" alt="" /></p> <p>This will take a few minutes to complete. Once it’s done, try running npm install again, and you’ll see that this time, you’re able to.</p> <p>In case you see an error like this now, or anytime throughout this project, add a sudo before any command you run (for eg, sudo npm install)</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/node-error-sudo.PNG" alt="" /></p> <p>Now, start the application using</p> <pre><code>npm run start </code></pre> <p>You should see a line saying Server running on port 5000</p> <p>Are we done? Not quite. We still haven't set up the database, and thus, we wouldn't be able to do anything at all with the service. Let's resolve that in the next section.</p>
<h2 id="setting-up-mongodb" tabindex="-1">Setting up MongoDB<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#setting-up-mongodb">#</a></h2> <p>We'll be using the MongoDB cloud service called Atlas to create the database that our Node-Express service will be interacting with. One of the great advantages of MongoDB is this cloud service, which you can set up, configure and maintain without having to install anything at all anywhere - a zero-install convenience that self-hosted relational DB systems like Postgres or MySQL don't give you.</p> <p>MongoDB has a free tier option, and that's what we'll be using. Remember, you'll not be prompted to add your billing details anywhere. If you are, that means you did a step wrong.</p> <p>To get started, go to mongodb.com, and log in/create an account. Follow through the steps to set up your account.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-login.PNG" alt="" /></p> <p>Then, you'll be asked to select a cluster type. Select the free version as shown</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-cluster-free.jfif" alt="" /></p> <p>Next, you'll be asked to customize your cluster details like hosting zone. Leave everything unchanged, and ensuring that there's no total cost at the bottom, select Create.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-create.jfif" alt="" /></p> <p>It'll take a minute or two for your cluster to get created. Once it's ready, you should see a screen like this.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-pre-connect.PNG" alt="" /></p> <p>Carefully take a look at the various details being shown, such as the R W graph - R and W stand for Reads and Writes respectively, which is an important metric for determining the traffic to your DB.</p> <p>The connections graph shows the number of connections to your DB. A connection is either via an application, as we'll do, or via the command line, and for practical purposes, represents the number of folks modifying/viewing our database.</p> <p>The in/out graph shows the bytes transferred to/from the database every second.</p> <p>Data size is the size of the database.</p> <p>Now, to establish a connection to the database, we need to do a few things first.</p> <p>Click on connect next to the cluster name, and you'll be prompted to add a connection IP address. This specifies what traffic we want to allow to connect to the database. Remember, in a production application, you dare not give direct database access to anyone and everyone, or you might end up losing/leaking thousands of users' data. However, for ease of access, we'll start with the 'Allow access from anywhere' option, since we'll be trying to connect via an EC2 instance, which has a dynamic IP, and thus, you'd have to keep updating the rules every now and then.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-add-ip.PNG" alt="" /></p> <p>Click on Add IP Address</p> <p>Next, you have to create a database user. You can create any username and password (make sure you remember it).</p> <p>Next, you'll be asked to choose a connection method - via shell (CLI), Compass (GUI) or via an application, which is the one we'll use. You'll then be asked to pick a driver version, and a connection string. Ensure that the driver is Node.JS and the version is 4.0 or later. Copy the connection string.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-connection-url.PNG" alt="" /></p> <p>Now, go to the .env file we'd created on our server instance. Add a line there (no extra spaces, or you might face unexpected errors) :</p> <pre><code>MONGO_URL=&lt;the-string-you-had-copied&gt; </code></pre> <p>And replace the username and password with the user's credentials you had created.</p> <p>Did you see why we did that? We wish to restrict access to the database, and thus, the connection string, which is used to connect to the database, will only be present in a secure local environment and will not be committed with the rest of the code.</p>
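<p>For context, here's roughly how those .env values reach the code at runtime - a sketch assuming the app loads them with the common <code>dotenv</code> package (the actual repo may wire this up slightly differently):</p> <pre><code>// at the top of index.js: load the .env file into process.env
require('dotenv').config();

const port = process.env.PORT;          // '5000', from our .env
const mongoUrl = process.env.MONGO_URL; // the Atlas connection string
</code></pre>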
<p>With this, you'll finally have added the last requirement to your code. Now, we can run the application using</p> <p><code>npm run start</code></p> <p>Now, in addition to the 'Server running on port 5000', you should see an additional</p> <p>'Connected to database' message as well.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/npm-run-start.PNG" alt="" /></p> <p>If you don't, you need to recheck your connection string.</p> <h2 id="testing-the-application-done-so-far" tabindex="-1">Testing the application done so far<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#testing-the-application-done-so-far">#</a></h2> <p>Now, we need to test if the application is actually working. Since it's a backend-only service without a frontend, we'd need to use an API testing tool. We'll be going with Postman.</p> <p>Go to postman.com. If it's your first time with Postman, there'll be some setup steps.</p> <p>If we were developing this on our local laptops/PCs, we'd have used a localhost:5000 link. However, since it's on a remote server, we need to find the IP address of the server.</p> <p>This IP can be found from the AWS instance details - Public IPv4 address.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/image11.PNG" alt="" /></p> <p>Paste the IP into the request field on Postman. Add an <code>http://</code> before the IP and a <code>:5000</code> after.</p> <p>Now, if you check the Readme of the repo, hitting the /restaurants endpoint should retrieve a list of restaurants present in the DB. Add a <code>/restaurants</code> after the <code>:5000</code> and hit send.</p> <p>If it works well, you should see an empty array <code>[]</code> in the response tab, since there's no data in the database yet. If you get an error like connection refused or request timed out, recheck the IP.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/postman-get.PNG" alt="" /></p>
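<p>If you'd rather test from the terminal instead of Postman, curl does the same job - a sketch with a placeholder IP:</p> <pre><code># GET the list of restaurants (expect [] on a fresh database)
curl http://12.34.56.78:5000/restaurants
</code></pre>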
<p>Now, let's try adding some data to the DB. Another look at the readme file will show that making a POST request to the endpoint <code>/restaurants/add</code> will create a restaurant. So update the endpoint, and add the following restaurant data in the body :</p> <pre><code>{
  &quot;_id&quot;: &quot;6073ccae8bab295faebb5718&quot;,
  &quot;name&quot;: &quot;Kiran Plaza&quot;,
  &quot;rating&quot;: &quot;5&quot;,
  &quot;image&quot;: &quot;https://i.ibb.co/ZTHr2cM/res-sample.jpg&quot;,
  &quot;cost&quot;: &quot;350&quot;,
  &quot;numOfReviews&quot;: &quot;4380&quot;,
  &quot;discount&quot;: &quot;40%&quot;,
  &quot;spec&quot;: &quot;Chinese&quot;,
  &quot;area&quot;: &quot;Koramangala&quot;
}
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/add-res.PNG" alt="" /></p> <p>Now, rerun the get request, and you should see this restaurant being returned.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/get-restaurants-2.PNG" alt="" /></p> <h2 id="setting-up-additional-packages" tabindex="-1">Setting up additional packages<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#setting-up-additional-packages">#</a></h2> <p>Great, so you got it all running on a server. But we’re not done. What happens if you close off the terminal? Try doing just that and see if your get requests still work.</p> <p>As expected, they won’t. And that doesn’t make sense. For a server to stay up, you shouldn’t have to keep a dedicated computer with a terminal on all day - then there’s no point in holding a remote server.</p> <p>Fortunately, there’s a simple npm package that can keep your service running even when your terminal isn’t. It’s called pm2 (short for Process Manager 2). Apart from ensuring that the server remains up, you can use it to check the status of all your node processes at any time to figure out which of them are causing issues, manage logs to track the application and see where errors/bugs/incidents, if any, occur, and view metrics such as memory consumed.</p> <p>So, we’ll be installing the same on our server and then configuring it to start our node service. Again SSH into the instance using the ssh -i command, go to the project directory, and write</p> <p><code>npm i -g pm2</code></p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/pm2.PNG" alt="" /></p> <p>Note the <code>-g</code> flag. It stands for global, meaning that pm2 will be installed as a global package, not just for our project. This is important, because pm2 is expected to handle the restarting of the application even if our project stops, and any project level dependency would not be able to do it.</p> <p>Once that’s done, we need to start our service using pm2.</p> <p>The command for that is</p> <pre><code>pm2 start zomock/index.js -i max --watch </code></pre> <p><code>-i max</code> - runs the app in cluster mode with as many processes as there are CPU cores. Because NodeJS is single-threaded, using all available cores will maximize the performance of the app.</p> <p><code>--watch</code> - allows the app to automatically restart if there are any changes to the directory.</p> <p>Note that the above command should be run from the home directory (outside of the zomock directory)</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/pm2.PNG" alt="" /></p> <p>Now, if you close the terminal and make a GET request, you'll see that you're able to still get a response.</p> <blockquote> <p>Note : Due to an issue with PM2, sometimes the production environment is unable to parse the MongoDB connection string correctly from the .env file. So, in case you get a connection refused issue when making a get request, declare the mongo_url as a const in index.js itself, and use the constant instead of the <code>process.env.MONGO_URL</code>, and you should be good to go.</p> </blockquote>
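<p>Beyond starting the app, a few pm2 subcommands cover most day-to-day needs - all part of the standard CLI:</p> <pre><code>pm2 list          # show all managed processes and their status
pm2 logs          # stream logs from the managed apps
pm2 restart all   # restart every managed process
pm2 stop index    # stop a process by name (pm2 names it after the script, e.g. 'index')
</code></pre>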
<h2 id="monitoring-using-pm2" tabindex="-1">Monitoring using pm2<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#monitoring-using-pm2">#</a></h2> <p>In production environments, we often need to monitor our deployed code for issues/crashes, so they can be resolved quickly. Fortunately, pm2 can help us with that as well.</p> <p>Enter the command <code>pm2 monitor</code> on the terminal.</p> <p>It'll prompt you to sign up for a pm2 account, and once you do, you'll get a URL which holds the metrics dashboard for your application.</p> <p>If you go to that URL in the browser, you'll be able to see metrics of your application like the requests being made, as well as issues and errors. This is extremely advantageous when working with a large number of users.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/pm2-1.PNG" alt="" /></p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/pm2-2.PNG" alt="" /></p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#conclusion">#</a></h2> <p>Thus, in this tutorial, you learnt how to deploy a Node-Express based application onto an EC2 server you'd set up from scratch. You also set up a MongoDB database and connected it to your application. You then ensured that your application continues running even when you close off the terminal running the development process. Finally, you learnt some concepts of monitoring and set up monitoring for your application using PM2.</p> <p>Some of the biggest challenges in backend development for production are tracking errors and handling them gracefully. 
You should further research how to handle exceptions, and how to catch errors, log them, and ensure that the user has a seamless experience.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#references">#</a></h2> <ul> <li> <p><a href="https://pm2.io/">PM2 docs</a></p> </li> <li> <p><a href="https://stackify.com/node-js-logging/">NodeJS logging</a></p> </li> </ul> Setting up a production ready application with React 2021-11-14T00:00:00Z https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/ <blockquote> <p>This post has been written in collaboration with <a href="https://backtobackswe.com/">BacktoBackSWE.com</a>, a portal for interview preparation.</p> </blockquote> <h2 id="table-of-contents-" tabindex="-1">Table of contents :<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#table-of-contents-">#</a></h2> <ul> <li> <p>Overview</p> </li> <li> <p>Prerequisite knowledge</p> </li> <li> <p>Why should you read this tutorial</p> </li> <li> <p>Introduction to React - building an app for production</p> </li> <li> <p>Introduction to AWS EC2</p> </li> <li> <p>Downloading and running the React app source code locally</p> </li> <li> <p>Creating a build</p> </li> <li> <p>Setting up and connecting to a remote EC2 instance</p> </li> <li> <p>Using pm2 to run the app on the instance</p> </li> <li> <p>Additional pointers on scaling and future references</p> </li> </ul> <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#overview">#</a></h2> <p>Creating a website on localhost, versus deploying it in a production environment, is like comparing a zoo to a forest. There’s way more stuff you need to consider when you’re building for the end user - including scaling, fallbacks, load balancing, security, monitoring, CDNs and so on. In this tutorial, we’ll take our first step into deploying a React application into a production environment and actually seeing it work live, learning some important concepts that go into ensuring that the app works as expected along the way. We’ll be using a sample todolist application and deploying it to an AWS EC2 instance. You are free to use the same sample app, or any app of your choice.</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#prerequisites">#</a></h2> <p>Since we’re focusing on deploying the application and not creating it, you need not know everything about React. However, you should be aware of the way React works - the concept of virtual dom, how a page is built and populated, and so on. While we’ll be covering some of the concepts in brief in the following section, the react documentation is a good reference point in case you need to refresh any of the above concepts.</p> <p>You’ll also need to set up an AWS account. The steps we follow will fall within the free tier offering of AWS, but you’d still need a debit/credit card to sign up. 
However, as long as you follow all the steps correctly, you won’t be charged.</p> <h2 id="why-should-you-read-this-tutorial" tabindex="-1">Why should you read this tutorial<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#why-should-you-read-this-tutorial">#</a></h2> <p>Developing a web app UI is only the base camp - the rest of the trip to the top of Mt Everest is in deploying it to real users, ensuring that traffic is balanced, that any failure is monitored and handled, and that any security vulnerabilities that might compromise user data are caught and remedied.</p> <p>This tutorial will focus on deploying a react application on an EC2 instance. Along the way, you’ll learn how a React build gets created and rendered, how we set up and connect to a remote server thousands of miles away via a few terminal commands, the concepts of instances and security groups, and how we can set these up in a few clicks on AWS.</p> <p>This knowledge will be critical for you to develop applications built for hundreds of users, which is the aim with which most apps are built.</p> <h2 id="introduction-to-react---building-for-production" tabindex="-1">Introduction to React - building for production<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#introduction-to-react---building-for-production">#</a></h2> <p>You’ll most likely be aware of what React is and does - it’s a JavaScript library used to create UI components. It uses a JavaScript + HTML like syntax called JSX. The HTML bit defines the way the UI looks, and the JS populates data and adds functionality to the application. React is the most popular frontend library these days, given its learning curve is much less steep compared to its competitors like Angular or Vue.</p> <p>Your first foray into React development would have started with something like <code>npx create-react-app myapp</code>, a command which bootstraps a sample react application and runs it on localhost:3000. However, when you want to let your users use your app, you can’t give them a localhost:3000 link. You need to first ‘build’ the application using <code>npm run build</code>, which creates a directory called build containing ‘minified’ (compressed) CSS, JS and HTML pages and static assets.</p> <p>If some of the above concepts sound alien to you, do spend some time in understanding how React works under the hood. Some helpful resources are linked in the last section of the tutorial.</p> <h2 id="introduction-to-aws-hosting-services-and-ec2" tabindex="-1">Introduction to AWS hosting services and EC2<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#introduction-to-aws-hosting-services-and-ec2">#</a></h2> <p>Again, AWS isn’t something you’re new to, or you wouldn’t be reading this tutorial, but a one liner for it is that it’s a cloud hosting solutions provider by Amazon that allows you to host, manage and scale applications. For the sake of this tutorial, AWS will provide you the remote server where your React app will eventually run. The server itself will be located in some Amazon Data center, but you’d be able to access it remotely from your PC via a set of commands. We’ll be using the EC2 service of AWS. 
EC2 stands for Elastic Compute Cloud, and it does what we described above - lets you access a remote server and host applications on it.</p> <h2 id="downloading-and-running-the-react-app-locally" tabindex="-1">Downloading and running the React app locally<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#downloading-and-running-the-react-app-locally">#</a></h2> <p>The first step is to get hold of the app which we’re going to deploy. As we said earlier, you can perform the deployment steps with any react app of your choice, but if you don’t have one, or are a nerd student and want to follow instructions down to the letter, clone this repo to your local using the following command :</p> <p><code>git clone https://github.com/gagangaur/React-TODO-App.git</code></p> <p>Next, we install the dependencies and run the application locally. To do that</p> <pre><code>cd React-TODO-App
npm install
npm start
</code></pre> <p>This will start the react app on port 3000 - you can check it out by going to http://localhost:3000 on your browser.</p> <h2 id="creating-a-build" tabindex="-1">Creating a build<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#creating-a-build">#</a></h2> <p>What you saw on localhost:3000 was a development version of the application - which is visible only to you, and as such cannot be displayed to users.</p> <p>We need to create a build of this - a package that we can then use to show the app to the users. Go to the terminal and type</p> <p><code>npm run build</code></p> <p>Once the command runs, you’ll notice that a build directory has been created in your root folder. Go to the file explorer and open it. You’ll see a list of assets like images, as well as a folder called static. Open it to reveal two more folders - CSS and JS.</p> <p>What the build command has done is convert the React code into these CSS and JS files, which now complement the index.html file to load the app.</p> <p>Now, how do we view this ‘built’ version of our app? We need to ‘serve’ our static files so that they are the ones that open up on localhost, instead of the development version.</p> <p>To do that, install a package called serve, using</p> <p><code>npm i -g serve</code></p> <p>Once that’s done, run</p> <p><code>serve -s build</code></p>
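<p>A note on ports: <code>serve</code> picks its own port - the version used in this walkthrough serves on 5000, which will matter when we configure security groups later. Depending on the version you install, you can also pin the port explicitly (check <code>serve --help</code> for whether your version takes <code>-l</code> or <code>-p</code>) - a sketch:</p> <pre><code># serve the production build on an explicit port (flag name varies by serve version)
serve -s build -l 5000
</code></pre>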
<h2 id="setting-up-an-aws-ec2-instance" tabindex="-1">Setting up an AWS EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#setting-up-an-aws-ec2-instance">#</a></h2> <p>Next, let’s set up a remote EC2 server instance. As said before, you’ll need an AWS account for the same. If you don’t already have one, you’d need to create it. Remember, it’ll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged for it.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. You’ll get a confirmatory mail once your account is set up and ready.</p> <p>Once you login to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image2.png" alt="" /></p> <p>Click on the blue ‘Launch a virtual machine’ line, and you’ll be taken to the EC2 setup screen, wherein you’d have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you’d be using to host your application, including the OS configuration - Linux, Ubuntu, Windows etc. If you have been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We’ll be going with Ubuntu server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn’t have the ‘Free tier eligible’ tag, otherwise, you’ll be having to sell off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image5.png" alt="" /></p> <p>The next step is choosing an instance type. This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we’ll pick the t2.micro instance type, the only one available in the free tier. You’ll need larger ones as your application size and requirements in RAM or processing speed increase. In case you’re not clear with any of the column fields, click the information icon next to the headings to get a description of what it means.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you’re asked the number of server instances you wish to create and some properties regarding them. We only need one server instance. The rest of the properties are auto filled based on the configuration we selected in earlier steps and/or default values, and thus, should be kept as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image3.png" alt="" /></p> <p>Next, click on Add storage</p> <p>As the name suggests, storage refers to the amount of storage in our server. Note that this isn’t the storage you’d consider for storing databases. This is temporary storage that will last only as long as the instance lasts, and thus, can be used for things like caching. A size of 8GB, that’s part of the free tier, and is the default, suffices our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image15.png" alt="" /></p> <p>Next, we’d be adding a tag for our instance. It is a key:value pair that describes an instance. Since we only have a single instance right now, it is not very useful, but when you are working with multiple instances and instance volumes, as will be the case when the application scales, it is used to group, sort and manage these instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image6.png" alt="" /></p> <p>Next, we’ll be adding a security group to our instance. A SG is practically a firewall for your instance, restricting the traffic that can come in, what ports it can access, called inbound, and the traffic that can go out, called outbound. There’s further options to restrict the traffic based on IP. For instance, your application will run on port 3000, and thus, that’s a port you’d want all your users to be able to access. Compare that to a Postgres database service running on port 5432. 
<p>For simplicity, we’ll keep the sources of all of those at ‘anywhere’. Ideally, SSH should be limited only to those you want to allow to connect to your instance, but for the sake of the tutorial, we’ll keep it at anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image17.png" alt="" /></p> <p>Once the rules are set, click on Review and Launch. You’ll be shown the configurations you’ve selected to ensure you didn’t make a mistake anywhere. Once you hit launch, you’ll be asked to create/select a key pair. As the name suggests, it’s a pair of keys - one held by AWS, and the other by you - that acts as a sort of password for you to connect to your instance. Anyone wishing to SSH into this instance must have access to this key file, or they won’t be able to.</p> <p>The file contains an RSA private key, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It’s recommended that you download the .pem key file to the C:/Users/Home directory on Windows (/home/usr or similar for Linux and Mac), to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image10.png" alt="" /></p> <p>Once the file is downloaded, you’ll get a prompt that your instance is starting, and after a few minutes, your instance will be started. Your EC2 home page should look like this. Note the Name tag (Main) and the instance type (t2.micro) that we selected when we were setting up the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image9.png" alt="" /></p> <p>Next, select the instance, and click on Connect on the top bar. It’ll open this page :</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image1.png" alt="" /></p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we’ll be using the terminal to connect to your instance (remote server). For that, open a new terminal as administrator (superuser or sudo for Linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we’ll run the <code>chmod 400 keyfilename.pem</code> command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you’ll lose SSH access to that instance forever, and you’ll have to recreate the instance, since AWS won’t let you download the .pem file again.</p> <p>And once you’re done with that, it’s time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the <code>ssh -i …</code> one.</p>
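<p>As a sketch, the pair of commands looks like this - the key file name and the instance DNS below are placeholders, so substitute your own from the connect page (Ubuntu AMIs log you in as the <code>ubuntu</code> user) :</p> <p><code>chmod 400 keyfilename.pem</code></p> <p><code>ssh -i "keyfilename.pem" ubuntu@ec2-12-34-56-78.ap-south-1.compute.amazonaws.com</code></p>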
<p>It means that we’re ssh-ing into the instance defined by the DNS (the .amazonaws.com thing), and the proof that we’re authorized to do it is in the .pem file.</p> <p>It’ll ask a confirmation prompt that you have to type yes to, and if all works well, you should see a ‘Welcome to Ubuntu’ text as shown below, which means that you’re now logged into the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image14.png" alt="" /></p> <p>Great going.</p> <p>Now, our next step is to bring the code into our instance and run it. To do that, we’ll clone the repo exactly the same way we cloned it on our local system, using the git clone command.</p> <p>Once you’re done cloning the repo, the next step is to install the dependencies and start the application. Navigate to the repo directory and try running</p> <p><code>npm install</code></p> <p>Did you get an error? Of course you did. You need to install NodeJS on the instance. How do you do that? The answer’s in the error itself :</p> <p><code>sudo apt install nodejs</code></p> <p>This will take a few minutes to complete. Once it’s done, try running <code>npm install</code> again, and you’ll see that this time, you’re able to.</p> <p>Finally, the moment of truth - run</p> <p><code>npm run start</code></p> <p>Once you see the application live on localhost:3000 written on the terminal, you’ll have to navigate to the server IP to check if it works.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image16.png" alt="" /></p> <p>This IP can be found in the AWS instance details - Public IPv4 address. Copy that, paste it into a browser tab, and add :3000 after it.</p> <p>If the application did work correctly, you should be able to see the same screen that you saw locally on your machine.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image8.png" alt="" /></p> <p>As we’d seen above, a simple npm run start gives us the development version. However, this is a production environment we’re running the app on, and we need to ‘build’ the app, using</p> <p><code>npm run build</code></p> <p>Then, following the same steps as we did above, install the serve package and use the command</p> <p><code>serve -s build</code> to serve the build version.</p> <p>Looks good. Or does it?</p> <p>Did you notice the port number? 5000. Do you think we’d be able to access it with the security rules we created?</p> <p>To find out, go to the public IP browser tab and replace the :3000 by :5000.</p> <p>Oops. Doesn’t work, does it? Wouldn’t it be great if AWS could just ‘guess’ the port number!</p> <p>Unfortunately, no such functionality exists, and thus, we need to manually allow port 5000. To do that, go to the instances page. In the left navigation pane, scroll down to find the “Network and Security” section, and within it, Security groups. Open it, and select the new security group we’d created when we were setting up the instance (not the default one).</p> <p>Below, go to the Inbound rules tab, and hit the edit inbound rules button.</p> <p>Now, put in a custom TCP connection rule for port 5000, and allow access from? You guessed it - anywhere.</p> <p>Once that’s done, save the rules, come back to the public IP page, and refresh. If you didn’t mess up, you should be able to see the application loading on port 5000 now!</p> <p>Great, so you got it all running on a server. But we’re not done. What happens if you close the terminal? Try doing just that and see if your website still works.</p>
<p>As expected, it won’t. And that doesn’t make sense. For a server to stay up, you shouldn’t have to keep a dedicated computer with a terminal open all day - that would defeat the point of having a remote server.</p> <p>Fortunately, there’s a simple npm package that can keep your app running even when your terminal isn’t open. It’s called pm2 (short for Process Manager 2). Apart from ensuring that the server remains up, you can use it to check the status of all your node processes at any time to figure out which of them are causing an issue, manage logs to track where errors/bugs/incidents, if any, occur, and view metrics such as memory consumed.</p> <p>So, we’ll be installing the same on our server and then configuring it to start our React app. Again SSH into the instance using the ssh -i command, go to the project directory, and write</p> <p><code>npm i -g pm2</code></p> <p>Note the <code>-g</code> flag. It stands for global, meaning that pm2 will be installed as a global package, not just for our project. This is important, because pm2 is expected to handle the restarting of the application even if our project stops, and any project-level dependency would not be able to do that.</p> <p>Once that’s done, we need to start our app using pm2. And remember, we’re serving the build version.</p> <p>The command for that is</p> <p><code>pm2 serve React-TODO-App/build/ 3000</code></p> <p>Note that the above command should be run in the root. If elsewhere, edit the path to the build folder accordingly. And we’ve used port 3000. You may use 5000 as well, since both are now open in our security group.</p> <p>Now, if you close the terminal, you’ll see that the application continues to stay up and running.</p>
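<p>A few standard pm2 commands are worth knowing at this point - assuming the global install above went through, you can run :</p> <p><code>pm2 list</code> - to see the status of everything pm2 is managing</p> <p><code>pm2 logs</code> - to tail the logs of your processes</p> <p><code>pm2 startup</code> followed by <code>pm2 save</code> - to have pm2 itself come back up and resurrect your app if the instance ever reboots</p>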
<h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#conclusion">#</a></h2> <p>Thus, in this tutorial, we learnt what it means to build a React app for production, how to create a build locally, and how it works. We then learned how to set up and configure a remote EC2 server, and manage access to it. We then set up our repo on the instance, and ran it. Since we wanted the app to continue running even when we closed the terminal, we used the pm2 package for that.</p> <p>In future blogs, we’ll be looking at how to add load balancers to balance the traffic on our application.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#references">#</a></h2> <ul> <li> <p><a href="https://create-react-app.dev/docs/production-build/">Creating a production build - React</a></p> </li> <li> <p><a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.create-cluster.console.configure-inbound-rules.html">Configuring EC2 inbound rules</a></p> </li> </ul> Objectivizing milestones, the Agile way 2021-11-07T00:00:00Z https://blog.dkpathak.in/objectivizing-milestones-the-agile-way/ <p>Determining when a task is 'done' is often harder than doing the task itself, especially when it involves a creative turn of mind, like writing a blog post, painting a portrait, and so on. And going by 'satisfaction' also doesn't work - creators can almost never be perfectly satisfied with their work, and thus, unless you have a specific stop point, your tasks could end up progressing infinitely. When there are other stakeholders involved, like clients, this problem is exacerbated - you say you did your best, but the client still doesn't agree.</p> <p>The makers of Agile anticipated this problem, and incorporated features and principles in their methodology to ensure that milestones, capacities and goals were objectivized. And it's helpful if we can take a leaf out of the Agile book to ensure we are able to arrive at our tasks' conclusions much faster.</p> <p>These are 3 steps that we can take to ensure we're more objective :</p> <h2 id="1-objectivizing-capacity-" tabindex="-1">1. Objectivizing capacity :<a class="tdbc-anchor" href="https://blog.dkpathak.in/objectivizing-milestones-the-agile-way/#1-objectivizing-capacity-">#</a></h2> <p>There's a term in the Agile methodology for this - velocity estimation. You have a fixed amount of time and mental resources. You estimate the total number of hours in a day you believe you can work productively on your tasks, and only then take up tasks. For instance, let's say you wish to take up a new side project. In a typical 24 hour work day, you have 8 hours of sleeping, 9 hours of work, and 2 hours of chores, which leaves you with 5. That's your capacity for the day, and you know that even operating at your best, you cannot take up tasks that'd take longer than 5 hours.</p> <p>This objectivization goes beyond the 'I'll find time to do it today' myth, and instead focuses on a real chunk of time you have available, making you feel less overwhelmed and more in control of your time and energy.</p> <h2 id="2-objectivizing-acceptance-criteria-milestones-" tabindex="-1">2. Objectivizing acceptance criteria/milestones :<a class="tdbc-anchor" href="https://blog.dkpathak.in/objectivizing-milestones-the-agile-way/#2-objectivizing-acceptance-criteria-milestones-">#</a></h2> <p>When is a task 'done'? When does it look 'good enough' to go into the 'Done' category? Unless we have a very clear and objective milestone to achieve, we'll never get anything done to satisfaction. In Agile, this takes the form of 'acceptance criteria' for features. Before putting a feature into development, the developer and the product team agree on when that feature will be considered complete, to avoid back and forth over expectations.</p> <p>The same can be, and should be, done with our personal tasks, especially those that require a creative turn of mind. When I first wrote this very post, I didn't know what I wanted it to look like at the end, or even whether I should write 3 steps or 5. After a lot of confusion that lasted days, I created a set of essential points that I wanted the post to cover, and other features, like a word limit, that I wanted it to have. Only then did I start writing it in earnest, and sure enough, I was able to hit the acceptance criteria within less than an hour.</p> <h2 id="3-objectivizing-habits-" tabindex="-1">3. Objectivizing habits :<a class="tdbc-anchor" href="https://blog.dkpathak.in/objectivizing-milestones-the-agile-way/#3-objectivizing-habits-">#</a></h2> <p>We all want good habits. But only a fraction of us actually end up keeping up with our aspired habits for long. And the reason isn't always our laziness or lack of consistency.
Oftentimes, the habits we plan are so subjective that we don't really know the next course of action we should take to keep the streak going, and every day, we first need to think about what we need to do, and when we can consider our habit done for the day.</p> <p>Instead of this, having a clear set of actionable items that we should do every day will slowly switch our bodies to autopilot, and soon enough, the habits will start coming subconsciously.</p> <p>In Agile, this takes the form of consistent flows in ceremonies - a daily standup is always limited to answering three questions -</p> <ol> <li> <p>What did you do yesterday</p> </li> <li> <p>What will you do today</p> </li> <li> <p>Blockers</p> </li> </ol> <p>A retrospective meeting is always defined by what went well in the previous sprint, and what could have been done better. Converting the subjective into these relatively more objective questions ensures that the ceremonies get completed within time and that they actually do what they're meant to do without losing track.</p> <p>A simple example of an adaptation in our personal lives is our workout routines. Instead of scheduling 'core workout' twice a week, make it '40 situps and 80 leg rotations', with an increase of 4 reps week on week. This objective goal means that your brain doesn't have to worry about thinking what workout it has to do, only about the measurable rep count that needs to be met.</p> <p>These 3 categories of objectivizing our life facets can therefore greatly accelerate the progress we make on our tasks and goals.</p> 3 free calendar-cum-todolist apps you can use to time-block your day 2021-11-06T00:00:00Z https://blog.dkpathak.in/3-free-calendar-cum-todolist-apps-you-can-use-to-time-block-your-day/ <p>The market is overflowing today with todo list and calendar applications. With the definition of 'work' being a lot more than just our job, and with multiple things going through our minds at the same time, these apps ensure that we're able to capture tasks and that we see them through when we have enough bandwidth.</p> <p>Sadly, both the todo list type of apps and the calendar type of apps lack completeness individually. You can type a zillion tasks into your todo list, but unless you have assigned a time to them and follow that calendar religiously, those tasks will stay untouched on the list forever.</p> <p>On the other hand, if you have a calendar but no way to check things off or organize them, you wouldn't really be able to track whether an item you'd scheduled is completed or not.</p> <p>Thus, the optimal solution would be an application that combines these functionalities - a todolist to manage your tasks, and a calendar to block time for doing them. And here we have shortlisted three free solutions that do just this.</p> <h3 id="1-routine" tabindex="-1">1. <a href="https://routine.co/">Routine</a><a class="tdbc-anchor" href="https://blog.dkpathak.in/3-free-calendar-cum-todolist-apps-you-can-use-to-time-block-your-day/#1-routine">#</a></h3> <p>Disclaimer : At the time of writing this, Routine is still in private beta, and thus, it's not accessible to all. However, it holds great potential and is expected to go live very soon, which is why it's part of this list.</p> <p>Routine allows you to have an inbox of todo items and, in parallel, a calendar synced with your Google Calendar. You can drag and drop tasks from the inbox into the calendar.
Better still, you can even move your GCal events, and the change will be synced back to your Google Calendar - this is a feature not found in many calendar apps. Additionally, Routine also follows a page model like Notion, where every task/event can be opened as a page for notes. It has Intellisense to automatically gauge the date/time from the task name, which is another great plus.</p> <p>Pros :</p> <ol> <li> <p>Drag-drop tasks and GCal events and have a two-way sync - many other apps don't allow GCal events to be modified by another app</p> </li> <li> <p>Very accessible - tasks can be added, and the platform can be navigated, just via the keyboard</p> </li> <li> <p>UI and UX are the best among the three apps discussed here - it breathes minimalism and focus.</p> </li> <li> <p>Easily schedule tasks thanks to Intellisense.</p> </li> </ol> <p>Cons :</p> <ol> <li>Still in private beta, so many features are lacking. Some of these include :</li> </ol> <ul> <li> <p>Integrations with other apps like Todoist</p> </li> <li> <p>No Android app</p> </li> <li> <p>No reminders</p> </li> <li> <p>No project system to sort tasks into</p> </li> </ul> <p><img src="https://blog.dkpathak.in/img/calendar-todolist/routine-1.jfif" alt="" /></p> <h3 id="2-kosmotime" tabindex="-1">2. Kosmotime<a class="tdbc-anchor" href="https://blog.dkpathak.in/3-free-calendar-cum-todolist-apps-you-can-use-to-time-block-your-day/#2-kosmotime">#</a></h3> <p>Kosmotime is a time blocking and tracking application. You can add tasks, sort them into projects, and schedule them onto the calendar. This app also syncs with Google Calendar to display GCal events on the Kosmotime calendar; however, you cannot modify the GCal events. One unique feature of Kosmotime is the concept of focus blocks, wherein you can block a time for a set of tasks grouped together - a manifestation of the concept of time blocking. You can also track time across each task using the inbuilt timer; however, you'll have to remember to start and stop the timer for the task. I once remember 'working' 72 hours non-stop on a task :P</p> <p>Pros :</p> <ul> <li> <p>Can group tasks into projects</p> </li> <li> <p>Can drag and drop tasks onto the calendar</p> </li> <li> <p>Can create focus blocks</p> </li> <li> <p>Time tracking for tasks</p> </li> <li> <p>Good UI/UX</p> </li> </ul> <p>Cons :</p> <ul> <li> <p>Google Calendar events cannot be modified from the Kosmotime calendar</p> </li> <li> <p>Only has integrations for Slack and Asana</p> </li> <li> <p>No Android application</p> </li> <li> <p>No reminder option</p> </li> <li> <p>Lacks time/date intellisense - you have to manually set the time and date for each task, or drag and drop it</p> </li> <li> <p>No dark mode (might not be a con for many, but it is for me)</p> </li> </ul> <p><img src="https://blog.dkpathak.in/img/calendar-todolist/kosmotime.PNG" alt="" /></p> <h3 id="3-plan" tabindex="-1">3. <a href="https://getplan.co/">Plan</a><a class="tdbc-anchor" href="https://blog.dkpathak.in/3-free-calendar-cum-todolist-apps-you-can-use-to-time-block-your-day/#3-plan">#</a></h3> <p>Among the three, Plan offers the richest set of features, and (if I am not wrong) has been around the longest. It also allows you to create tasks, sort them into projects and drag and drop them onto the calendar, just like the other two. Additionally, you can view the tasks in list, kanban or timeline views. My favorite feature, though, is that it allows you to edit and drag and drop Google Calendar events as well.
Moreover, it has a Chrome extension which creates a Plan homepage, and therein, you can even check off events - something that both the above apps miss. That 'checking off' is like productivity-adrenaline.</p> <p>Additionally, it has a documents feature for you to create and store documents. It also has a metrics dashboard wherein you can view the time spent on each task/project.</p> <p>The downsides are that it has a buggy UI/UX and is often not responsive (some of the content gets cropped out of the screen and there's no scroll).</p> <p>Pros :</p> <ul> <li> <p>Allows you to edit and drag calendar events from the app as well, and even allows you to check off GCal events in its Chrome extension</p> </li> <li> <p>Has on-browser reminders (notifications)</p> </li> <li> <p>Has a variety of views to visualize tasks - list, kanban, timeline</p> </li> </ul> <p>Cons :</p> <ul> <li>UI is buggy and content often gets cropped out of the picture with no option for scrolling</li> </ul> <p><img src="https://blog.dkpathak.in/img/calendar-todolist/plan.PNG" alt="" /></p> <p>Thus, these were three calendar-cum-todolist management applications, with their pros and cons, that you can use to block time for various tasks.</p> 5 Agile processes you can use to improve your personal productivity 2021-11-05T00:00:00Z https://blog.dkpathak.in/5-agile-processes-you-can-use-to-improve-your-personal-productivity/ <h3 id="intro-to-agile" tabindex="-1">Intro to Agile<a class="tdbc-anchor" href="https://blog.dkpathak.in/5-agile-processes-you-can-use-to-improve-your-personal-productivity/#intro-to-agile">#</a></h3> <p>For the uninitiated, Agile started as a methodology to deliver software, improving upon the flaws of the then popular waterfall model. The idea was simple - to speed up, you must adapt to change, incorporate it quickly, and deliver updates incrementally. For instance, before Agile, teams spent months finalizing the expected features, then developed them, then tested them, and just as they were about to deliver, they'd realize a client requirement had changed, rendering much of their months of effort practically wasted. And anyone who has worked for any length of time in the tech/corporate world knows that if there's anything fickle in the world, it's client requirements. Agile was a means to not resist change, but instead accept it, and tune our processes to fit the inevitable change - by delivering small updates in short chunks of time, while taking feedback and incorporating it in further iterations.</p> <p>And as teams and companies realized that this was not at all a bad idea and helped them deliver more and, therefore, improve their financials, they adopted the practice religiously, making tweaks to the process to suit larger teams and projects. This led to several different forms of Agile making their way out in the open, each suited to a particular team or objective or process - Scrum, Kanban, SAFe, to name a few.</p> <p>Today, Agile is adopted by almost all development teams around the globe.</p> <p>But interestingly, you don't have to be a development team, or even a developer, to utilize the power of Agile.</p> <h3 id="agile-for-the-individual" tabindex="-1">Agile for the individual<a class="tdbc-anchor" href="https://blog.dkpathak.in/5-agile-processes-you-can-use-to-improve-your-personal-productivity/#agile-for-the-individual">#</a></h3> <p>Most of us are agile in some way - we spend 15 mins at the start of the day planning our tasks and time - that's like a daily standup in Agile.
We often don't go all in on an idea, but instead take an incremental, step-by-step approach to see if the outcome's worth the effort - another Agile mindset. However, creating a formal structure around some of these instinctive habits, and adopting a few formal processes, can make us more productive in our tasks and personal projects, as well as in general mundane chores.</p> <h3 id="5-agile-processes-you-can-use-in-your-daily-life" tabindex="-1">5 agile processes you can use in your daily life<a class="tdbc-anchor" href="https://blog.dkpathak.in/5-agile-processes-you-can-use-to-improve-your-personal-productivity/#5-agile-processes-you-can-use-in-your-daily-life">#</a></h3> <h4>1. Daily standup</h4> <p>A daily standup typically includes answering the following three questions :</p> <ul> <li> <p>What did you accomplish yesterday?</p> </li> <li> <p>What are your goals for today?</p> </li> <li> <p>Any blockers?</p> </li> </ul> <p>In a team, each developer talks about her/his own content on the above three points. The same format can be taken up by you personally as well. You start by reflecting on everything you accomplished yesterday, which not only gives you motivation, but also lets you pick up on where you could do better today. You then check out your goals for today - work, personal, social, all of it - and put them up on a todo list so that you can check them off. Finally, you think about any blockers you have - any work task that requires an input from a teammate, a social obligation that's gonna eat up a couple hours of your time? This is helpful for setting your expectations for the day.</p> <p>If you already have a standup for work, it's recommended that you do a quick personal standup before it, so that the blockers you find for yourself can be discussed in the team standup.</p> <p>Are there tools you can use to help you in this? The simplest option would be a calendar to place your blockers, and a todo list to keep track of tasks. If you're feeling nerdy, <a href="https://dailybot.co/">Dailybot</a> is a Slack bot that sends you customizable standup questions over Slack for you to answer.</p> <h4>2. Weekly/bi-weekly retrospectives</h4> <p>As the name suggests, you retrospect on the past week(s). What went well, what could be improved?</p> <p>Sprints are usually 2 weeks long in development teams, and a retrospective happens at the end of each sprint to improve the next sprint.</p> <p>How can you use it personally? Say you have a habit that you started. You need to reflect to see if it's working, and make tweaks to improve. For instance, when reflecting, you realize that you're better able to absorb the content of a book you want to read in the morning than at night, and you therefore update your calendar to block time in the morning instead of the night. Similarly, you take note of some activities that are stopping you from being at your productive peak - social media scrolling, Netflix, Zomato - and plan to reduce them in the next week.</p> <p>The core tenet of Agile is adapting to change, and retrospecting ensures that you're aware of what needs to be changed, and how.</p> <p>Tools you can use for retrospectives can be as easy as looking across the past two weeks on your calendar/todo list to try and recall where things can be optimized.
However, if you want to delve into detail and have the time and willpower to, you can use time tracking tools like <a href="https://app.kosmotime.com/">Kosmotime</a> to track the time spent across various tasks, which will allow you to determine if the outcome of a task was worth the amount of time you spent on it.</p> <h4>3. Velocity estimation and prioritization</h4> <p>Velocity refers to 'units of work' that can be allotted to a given task. A relatively simple task has a lower number of units, and a complex or time consuming one has a higher number. Each team has a fixed number of units per day they can manage to complete, and thus, this allows quantitative estimation of how many tasks they can do in a given sprint.</p> <p>Prioritization is as the name suggests - it needs no explanation.</p> <p>The same can be applied to our daily tasks as well - in each retrospective/planning session, we assign points to each task we have to do, and decide how many we can fit in a day/week. For instance, while writing this blog post, I assigned 4 points to this task, which should take me a couple of hours, and this helped me block my calendar accordingly.</p> <p>Similarly, task prioritization is critical, since we always have way more to do than we're capable of, and some things need to be done way more urgently than others. Thus, giving priorities to some tasks and accomplishing them sooner should be part of retrospectives, planning sessions and standups.</p> <p>Todo lists like Todoist have a priority flag, which can be used to prioritize and then filter tasks on priority. While velocity is not a direct feature in many todo applications, it's as simple as a number you assign to a task, as long as you have a clear idea in mind/on paper as to the relation between a unit of work and the time you should block for it.</p> <p>This exercise might often seem like overkill - you might think that it's much easier to just do a task right away than go through this entire process of setting velocities, priorities and whatnot, and while that is indeed the case sometimes, not all tasks can be just 'done'. For instance, I had to plan this blog well beforehand and allot time for it, so that I could research the content, finalize the points, and leave enough time for proofreading and publishing. I could not just sit up one fine day and shoot through all of it. It requires some thought as to what tasks can be just 'done right away', and what needs planning.</p> <h4>4. Frequent delivery</h4> <p>While it may sound like this doesn't apply to a lot of our daily tasks, it really does help on long-term projects to complete small chunks of work rather than hoping for a grand release.
For instance, doing a Rangoli for Diwali - plan the design one day, the chalk outline another, the colors the third day, instead of waiting for a day you could do it all.</p> <p>If you're working with other people/clients, this is even more important, since you can get feedback before you've done a lot of work that could potentially go to 'waste' if the requirements or priorities change.</p> <p>When I was writing a blog for a firm some months back, I decided to do it all in one sitting - which took me a week - and on the day of the presentation, realized that I had gotten the topic all wrong.</p> <p>Continuous feedback is another tenet critical to Agile, and while your tasks might not usually have other people involved, your own feedback and expectations are important enough to be kept at par - so try to keep deliveries short and frequent.</p> <p>Jira has versioning for delivery of software products - version 1.0.1, version 1.0.2 and so on. This can be customized for other tasks as well, even those that aren't clear enough to be defined via versions.</p> <h4>5. Using metrics for assessing output</h4> <p>One thing almost everyone can agree on - a task can almost never be 'good enough' for all stakeholders. You will always find ways to do it better. The client will always have further feature requests. And when working with personal tasks, which are often not deadline-bound from the start, it's tough to know when to stop and when to keep improving. A subjective 'does it look good' is a very ambiguous and unhelpful milestone to achieve, and does nothing to consider the effort put into the activity. Instead, there should be more objective metrics that can help us analyze how we performed and, as Taylor Swift puts it, 'if the high was worth the pain'.</p> <p>Agile has numerous metrics to track team performance, such as cycle time, lead time, burndown, throughput and business value, but almost none of these should be taken up blindly as a metric for your own tasks or projects, because focusing on a wrong metric can lead the project into an entirely non-productive direction.</p> <p>For instance, judging the growth of a blog by the number of articles per month isn't a great idea if you do not consider the quality of the articles that go in there.</p> <p>Instead, you should research and finalize a metric you'll aim to optimize as you go through a project, and customize it so that an improvement in the metric score actually translates into the project becoming better. In the above example, the number of minutes an average user stays on the blog gives a good idea of whether the blog is doing well from the 'quality' perspective.</p> <p>The tools for this process will depend on the metric you aim to use - Google Analytics for traffic tracking, time tracking tools for the amount of time spent, Jira/other project management tools for the number of features delivered, and so on.</p> <p>Thus, these were 5 Agile processes that you could implement in your daily life and personal projects to optimize your productivity. But before you do that, remember the very core tenet of Agile - adapting to change. Priorities change, circumstances change, requirements change.
And it's important that you realize this, expect the change, and adapt to it.</p> An introduction to static code analysis using Sonar 2021-10-02T00:00:00Z https://blog.dkpathak.in/an-introduction-to-static-code-analysis-using-sonar/ <blockquote> <p>Good programmers write code for humans first, and computers next</p> </blockquote> <p>No idea who said that above line, or if anyone said it at all before I stole it off the internet, but damn right it is.</p> <p>Code changes more often than I change my mind (which is saying something), and it's almost certain that the next change to the code you're writing right now will be done by someone other than you. In such a case, ensuring that code is readable, maintainable and follows a set of standard practices becomes critical.</p> <p>In a large organization with a crazy big codebase worked on by multiple teams and developers, the problem is exacerbated - no one really knows who wrote the code they are having to debug, and thus, it does save a lot of WTFs if the code follows coding practices.</p> <p>So now the question comes - who ensures developers follow standard practices? You can't give all developers a book of rules and ask them to refer to it before each variable name they type. There is a need for a tool that checks code as the developer types, and points out the issues and the flaws.</p> <p>And this tool is called Sonar.</p> Intro to Async Javascript 2021-10-04T00:00:00Z https://blog.dkpathak.in/intro-to-async-javascript/ <p>Most developers who come to JavaScript from Java, which uses threads for asynchronicity, are often left wondering - why the hell can JS not do the same? Or can it? Let's find out.</p> <p>Asynchronicity is the ability to break the regular flow of control in a script, in order to not let the program stall on blocking operations - calls that take a long time to complete - think network requests and the like.</p> <p>Asynchronicity is usually achieved by using multiple threads - this is the way Java does it - all the stuff that you don't want your main thread to bother wasting time on, you just spawn off a new thread for.</p> <p>But that means that Java has the ability to directly create and manage threads - this makes it decidedly more complex, but that's the way it was built.</p> <p>JavaScript is higher-level than Java, and was initially meant to be a scripting language, not one that manages threads. Now, when the use case for it did come, the powers that be had two choices - add new features to the language to make threading allowable from within JavaScript, or find another way.</p> <p>And the far-thinking powers behind the language decided to try something else, rather than complicate JavaScript.</p> <p>How could you make a language asynchronous, without actually spawning multiple threads?</p> <p>The first place they looked was where JS ran - in the browser, initially.
Java, on the other hand, runs on a server.</p> <p>JavaScript doesn't do every little thing itself - it takes the help of several web browser 'APIs' to do things - for instance, there's a Timer API, an XHR API, and so on.</p> <p>JavaScript uses these APIs to defer some logic to the browser, and expects the output when it's done.</p> <p>That means that JavaScript could also use the same logic to defer code that it knows would take a long time.</p> <p>And that's where setTimeout() comes in - the gateway to the world of async JS.</p> <p>To the uninitiated, setTimeout() is a function used within JavaScript, which looks something like this -</p> <pre><code>setTimeout(() =&gt; alert('Hello'), 1000) </code></pre> <p>Don't worry if it looks like gobbledygook - here's a more readable version of the same snippet :</p> <pre><code>setTimeout(function a(){ alert('Hello') }, 1000) </code></pre> <p>Better now?</p> <p>setTimeout takes two params - a function, and a time in milliseconds. The function is what you'd call a callback function - a function that will be CALLED BACK when the time provided as the second param is up. Meaning, after 1000ms, the function a is called.</p> <p>That's the more widely accepted, slightly inaccurate version.</p> <p>What goes on behind the scenes is horrifyingly amazing (go figure :/)</p> <p>setTimeout is NOT a JavaScript function, first. There goes your belief system, but I am sorry - it is what it is.</p> <p>It's instead a facade for a web browser API being called behind the scenes - the Timer API - and for once, in the godforsaken world of bad naming conventions, it does exactly what it sounds like - it's a TIMER.</p> <p>The execution is interesting - when the JavaScript engine runs into setTimeout, it just does two things - it first heaves a sigh of relief, and then throws the whole thing - the callback, and the time - to the web browser API, and forgets all about it. Literally, yes. Forgets.</p> <p>For JavaScript, that line has finished execution. It can continue on with the rest of the code.</p> <p>Did you see it happen? Did you see how JavaScript went asynchronous without us having to mess with threads?</p> <p>That's the beauty.</p> <p>So, what happens when the engine throws the stuff to the browser API? Simple - the browser counts till the time given as param is up, and then adds the function to the call stack.</p> <p>The call stack is where functions go to get called.</p> <p>This isn't entirely accurate, but that's something we'll discuss in a coming probe.</p> <p>Till then, just remember this - JavaScript is NOT an asynchronous language. It's synchronous by design, but supports async functionality by cleverly utilizing the environment it runs in - the browser.</p>
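<p>You can see this for yourself with a tiny snippet - paste the following into your browser console. The ordering of the logs is the proof that setTimeout doesn't block :</p> <pre><code>console.log('one');

// Deferred to the browser's Timer API - even with a 0ms delay,
// the callback only runs after the current code has finished
setTimeout(() =&gt; console.log('three'), 0);

console.log('two');

// Prints : one, two, three
</code></pre>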
<p>Coming up in the next probe - what really happens with the callback function, and why it isn't very good.</p> Your next side project like a pro 2021-10-04T00:00:00Z https://blog.dkpathak.in/your-next-side-project-like-a-pro/ <p>A lot of us have often complained, especially during our college years, about the wide discrepancy between how we did side projects in college vs how they work in the industry - and I am not referring just to the scale and complexity of industry projects.</p> <p>Even if a project of the same level was done personally by us vs in a formal software industry setting, the process would be entirely different - the latter would involve requirements analysis, PRD making, design creation, feature branches, code reviews, automated tests, linting and more.</p> <p>And this means that even if we do projects on the side, it doesn't give us enough confidence that we'll be able to do justice to our roles immediately in a software industry setting.</p> <p>But to tell you the truth - it doesn't have to be so. To quite an extent, we can tweak our side project structure to emulate enough of a 'real' software project to give us a pretty fair idea of what we're up against when we enter the industry.</p> <p>Here are a few points you should look at/implement when doing a side project :</p> <ol> <li> <p>Requirement analysis/Software Requirements Specification/Product Requirements Document : This is one thing we often ignore or take for granted, simply because we start our project with an idea or a set of features in mind, and we usually try to keep em verbal and limited to those. More often than not, we follow tutorials that walk us through the project's code step by step, and we emulate the same thing. These factors mean that we don't spend enough time doing a requirement analysis to analyze the feasibility and priority of each feature, create user stories and so on.</p> <p>This means that we are only looking at a project from a constrained point of view - and this isn't the case in most software projects. In the industry, we're given a bird's eye view of the project, its expected functionalities and user base, and it's we who have to formalize and structure it into requirements. This allows us to think of the business/user side of things, teaches us to prioritize important features so that we're spending less effort on unimportant ones, and gives us a clear set of iterative goals to keep in mind.</p> <p>Creating a PRD or an SRS is subjective - product managers spend days making a PRD in an industry setting, but you need not do the same. Just understanding the contents of a typical SRS and PRD should give you enough knowledge to create a simple one for your next project within an hour.</p> <p>The important part is to stick to it.</p> </li> <li> <p>Sprint planning : We often binge our projects - we go into frenzy mode and do em all, working 10-12 hours a day for 3-4 days, then give them up for a week, and repeat the cycle. Moreover, what we do in each cycle is also dependent on our moods, what the tutorial guy is teaching, and so on. A more scalable idea would be to plan things beforehand in terms of sprints. A sprint is a period of software engineering with a definitive goal and clearly defined expected outcomes.
It can range from a couple of days to multiple weeks, based on complexity.</p> </li> </ol> <p>The advantage of this is that after each sprint, you have a significant chunk of the project ready, matching the requirements that you set at the beginning of the sprint, and you can regularly make tweaks to your priorities and deadlines based on each sprint's review.</p> <p>Again, this isn't as hard as it seems. You have the final SRS/PRD of the project - you need to break it down into ACHIEVABLE chunks, with deadlines for each set of features.</p> <p>This can, and I recommend should, go into a project management tool - preferably JIRA (the most commonly used in the industry), Trello, Notion or suchlike. No, 'verbally speaking and remembering' sprint features doesn't work. Writing it down on paper might sound appealing and might be a preferred way for many of us, but it doesn't work that way in the industry.</p> <ol start="3"> <li>Design (Frontend AND Backend)</li> </ol> <p>In most of our side projects, especially those we follow tutorials for, we immediately start writing the code - if it's a backend thing, we start making the DB queries and schema. If it's frontend, we directly start writing React. This, however, doesn't work in the industry, because first, unlike in side projects, you don't have tutorials you can blatantly emulate, and second, there are a lot of open questions that you need to take a call on - the design system of the project, the theme, the schema design, the flow of the website, and a zillion others. A design is created first, then that design is iterated and improved upon, and the design is then implemented in code.</p> <p>Frontend design - creating UI mockups of the end product - is usually done using tools like Figma or Sketch, but you need not do this if you don't want to spend a lot of time learning these. Instead, you can use tools like <a href="https://whimsical.com/">Whimsical</a> or <a href="http://diagrams.net/">Diagrams.net</a> to create a similar but vastly less complex version of the design - something like a wireframe - so that you know what components go where, the style guides (color palette, typography, transitions etc) and so on. Note that the design is not set in stone, not even in the software industry - it is iterated upon by the designers, the product managers and the developers based on UX, priorities and complexity of implementation. So do not worry if you think you're gonna tweak your design later on - just make sure that changes go from design to code, not vice versa.</p> <p>In case of a backend/full stack application, you need to create a rough architecture of the different entities involved - the DB, the backend server, and so on - so that it forms a coherent pathway for data to flow. This architecture can be as simple as a bunch of boxes for simple projects like Todo applications, but increases in complexity when things like microservices and messaging queues are involved. Another important aspect you should think of beforehand is schema design - you should not change your database schema whenever you feel like it; in a production environment, it'll cause disaster if you remove one small field and it breaks the app for a zillion users. Schema design is carefully analyzed and planned based on the required fields, while ensuring ACID compliance (in case of SQL DBs), and creating a pathway for schema changes to be made without affecting existing users.</p>
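<p>To make that concrete, here's what a first cut of a schema might look like for a Todo app - sketched with Mongoose, purely as an illustration (the model name and fields are hypothetical, not taken from any particular repo) :</p> <pre><code>// models/todo.js - an illustrative first draft of a Todo schema
const mongoose = require('mongoose');

const todoSchema = new mongoose.Schema({
  title: { type: String, required: true },  // every todo needs a title
  done: { type: Boolean, default: false },  // completion flag
  dueDate: { type: Date },                  // optional deadline
}, { timestamps: true });                   // createdAt/updatedAt for free

module.exports = mongoose.model('Todo', todoSchema);
</code></pre> <p>Writing even this much down forces you to decide the required fields and defaults upfront, instead of mutating the schema on a whim later.</p>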
<ol start="4"> <li>Project structure</li> </ol> <p>As beginners, a lot of us often tend not to be aware of, or care a lot about, the file structure in projects. Even the tutorials we follow usually do not enforce this. We might have one single file making API calls, creating the UI across multiple tables and so on. This is extremely unscalable - the second you have to add an extra feature, you will get overwhelmed, because different code snippets stuck in the same file will confuse you.</p> <p>Separation of concerns and modularity are extremely critical when doing a software project, especially if we want it to scale seamlessly. For instance, consider a React project - you could have easily dumped your entire code into the single App.js file. But we don't/shouldn't do that. Instead, there's a separate folder for each component - each folder contains an index.js file to hold the JS and a styles.scss file, if required. Additionally, child components are created as subfolders inside the parent component's folder.</p> <p>Similarly, in case of a Node.js backend project, it is recommended to follow the MVC architectural style. You'll have a separate folder for models, which represent the database schema; another one for controllers, which coordinate between the requests we get from the frontend and the business logic; and a services folder, which holds the business logic and makes the API/database calls.</p> <p>This kind of structure ensures that if we want to add a service, we are very easily able to do it by adding a file in the services folder, without touching the logic for the models or controllers.</p> <p>This might seem like an unnecessary exercise when there are only a few components, but as applications scale to hundreds, or thousands, of features, this organization is what keeps the project manageable.</p> <ol start="5"> <li>Static Code Analysis - Linting</li> </ol> <p>Linting is the process of checking for basic structural and syntactical correctness in your code, statically - that is, without actually running it. This includes checking the formatting of your code, catching redeclaration of variables, poor error handling and lots more.</p> <p>This is an automated way of improving what was once done manually - a developer would write some code, and another, senior developer would review it and suggest these style changes and syntactical error rectifications. However, that hurts developer productivity. Linters are scripts that run through the code, check for these issues, and in most cases can even fix them automatically, so that you don't waste time on doing this 'menial' stuff.</p> <p>We have ESLint for JavaScript, SonarQube for Java, Pylint for Python and so on.</p> <p>These are commonly used in teams, and can be implemented in pet projects to focus on code quality without spending time on it - a sample setup follows below.</p>
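<p>As a taste of what this looks like for a JS project, a minimal ESLint setup is just a dev dependency plus a config file - the rules below are arbitrary picks for illustration, not a recommended set :</p> <pre><code>// .eslintrc.js - a minimal, illustrative ESLint configuration
module.exports = {
  env: { browser: true, es2021: true },
  extends: 'eslint:recommended', // a sensible base rule set
  rules: {
    'no-unused-vars': 'warn',    // flag variables that are never used
    eqeqeq: 'error',             // force === over ==
  },
};
</code></pre> <p>Running <code>npx eslint src/</code> will then list every violation in the src folder, and <code>npx eslint src/ --fix</code> will auto-correct the fixable ones.</p>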
<ol start="6"> <li>Testing</li> </ol> <p>This is, by far, the most important, and most unheeded, piece of the software development life cycle. I am yet to come across a pet project that was tested. It's a whole different ball game in industry projects, however - every project depends on NOT failing for anything the user might do. Mistaken email formats, hitting the back button at any time, and so on. Not to mention the stuff that can go wrong due to network issues - especially in case of critical applications like payment apps.</p> <p>Imagine if a user is carrying out a large transaction, the bank's servers act up and the transaction fails, but the user is anyway shown the money deducted popup - imagine the frustration. Testing ensures that such cases are minimized.</p> <p>Testing in bigger companies is a process as complex as the development itself; however, you need not follow all stages in your pet projects. You can start with unit testing your code - that is, testing separate modules, files and components to ensure that each particular component works well in isolation. Frameworks like Angular already provide the test files inbuilt, and you only need to tweak them slightly and run them. In React, you can use a library like React Testing Library or Jest - we'll see a sample Jest test below. Similarly, for Java you have JUnit. This type of testing ensures that there's nothing wrong with your component logic - a mistaken API call, an incorrect query and so on. This, however, isn't it.</p> <p>You need to make sure that your entire application works in a good flow. You have to emulate a user's journey through your app as closely as possible, and account for each case - what if the user makes a mistake with the email, what if he/she presses the back button before the transaction is done, and so on. End-to-end tests ensure that this flow goes on - using libraries like Cypress or Protractor for React and Angular, or Selenium etc for Java.</p>
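<p>Here's how low the barrier to entry is with Jest - the <code>sum</code> helper below is a made-up example, but the test structure is standard Jest :</p> <pre><code>// sum.test.js - a minimal Jest unit test (sum is a hypothetical helper)
const sum = (a, b) =&gt; a + b;

test('adds two numbers', () =&gt; {
  expect(sum(2, 3)).toBe(5);
});

test('handles negatives', () =&gt; {
  expect(sum(-1, 1)).toBe(0);
});
</code></pre> <p>With Jest installed, running <code>npx jest</code> picks up any *.test.js files and runs them.</p>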
<ol start="7"> <li>Regular commits</li> </ol> <p>We usually develop our projects on our local system, and once we're done, we push them to GitHub, to ensure that we can put the links up in our resume. That, however, isn't the right way to use version control, and definitely not the way it works in the industry.</p> <p>In the industry, version control is used for collaboration between multiple developers who work on various branches/forks and, in case of bugs, to track which change the bug was introduced in. There's also a concept of Continuous Integration, which means that code is released in increments, tests are automatically run on it, and that code is integrated into the main central codebase.</p> <p>All of this is pretty easy to implement in our projects too.</p> <p>First, set milestones in your project that will determine when an important feature has been implemented or done. For instance, adding the CSS for a login form, creating the core logic for checkout, and so on. Every time you hit a milestone, you create a new branch, make a commit, and instead of pushing directly to the main branch, you raise a pull request.</p> <p>Now, you go to GitHub, and merge it into your main branch. An additional step that can be taken up here is writing automated tests - tests that run on each deploy to ensure that the app builds well and doesn't break anything. This can be configured using deployment tools like Netlify - further details in the deployment section.</p> <p>The advantage of this process is that this is exactly how things work in the industry - different developers are responsible for different features, so instead of pushing everything to the main branch, they create separate branches and raise pull requests.</p> <ol start="8"> <li>Code review</li> </ol> <p>This is a process which is the norm - necessary and critical - in software development teams. There'll be at least 2 other developers who'll look at the code you wrote, to check if it's structured well, logically sound, and follows all requirements. This ensures that more bugs can be caught at an early stage, before the code moves to the testing environment.</p> <p>Code reviews are usually done by experienced developers who have already seen lots of code, and thus are very clear on the common pitfalls and the checkpoints they have to ensure are ticked.</p> <p>Now, in your pet project, in most cases, you'll be working individually, and that means that you don't have anyone to review your code. Anyone but you, that is. And that's what you have to do. Read through your own code. Try to figure out if there are some pieces that can be optimized, refactor them, add helpful comments.</p> <p>Reading code is a monotonous exercise, especially when you've written it yourself, but it's critical to ensuring quality, and something you're gonna spend a lot of time doing in the industry, so do develop this habit.</p> <p>Once you think your code is as optimal and clean as you can make it, mark it reviewed (GitHub has a feature for that), and only then merge the PR.</p> <ol start="9"> <li>Deployment</li> </ol> <p>So you did a project, which right now runs on localhost:8080. You know how it works and looks, but how're you gonna show it to others? You can't expect every potential interviewer to download your source code, install the dependencies and run it to check. Moreover, no software project is ever made to live on localhost - it has to go to what is called a 'production environment' sometime, where actual users use it.</p> <p>There are various levels at which you can deploy your project, and they differ based on the tech stack of your project. If it's just HTML, CSS and JS, you can directly activate GitHub Pages from your repository settings and the project will immediately be live on <code>&lt;project-name&gt;.github.io</code>.</p> <p>In case of frontend projects involving a JS framework/library like React, Angular etc, you have to use a platform like Netlify, Vercel or Heroku. Netlify is the easiest of the lot. All you have to do is connect your GitHub repo to your Netlify account, specify the 'build' command, and that's it. You'll get a deployed link within minutes. Vercel is similar.</p> <p>Note that backend projects like NodeJS have to have their server on all the time, unlike projects like React, which only build once, and then serve an index.html file and run JS in the browser. Thus, Netlify/Vercel won't work for backend projects, where you need your server to remain on to accept requests and send responses. Heroku is a good option to start with in this case. It works similarly to Netlify in terms of setting up the project.</p> <p>These platforms, however, abstract away several complexities of deployment, and are almost never used in software industry settings. In the industry, we use cloud solutions like AWS EC2, GCP or Azure hosting. These provide us servers where we can store and run our app, and they're guaranteed to stay up more than 99.5% of the time. These cloud solutions have a zillion other features, such as setting up load balancers, domain mapping and more, all of which are common in the industry.</p> <p>The only concern is that these platforms have a very limited free tier, and if you're not very careful, you could end up getting an extravagant bill, so when you use these, make sure you follow a decent tutorial, and do not do something without understanding its implications.
My record is getting a 4 lakh bill from AWS.</p> <p>Thus, following these steps will set you off on a journey of doing your pet projects in a much more professional, industry-oriented fashion, so that you do not face a lot of trouble when entering the industry.</p> <h3 id="optional" tabindex="-1">Optional<a class="tdbc-anchor" href="https://blog.dkpathak.in/your-next-side-project-like-a-pro/#optional">#</a></h3> <ol> <li> <p>Team/group projects</p> <p>Almost no project in the industry is done by a single person. Even if there's just one developer, there'll be one designer/product manager/tester alongside. And working in a team is worlds apart from working individually. You have to understand others' code, designs and priorities, and tweak your code and thinking accordingly.</p> <p>This point is still optional because group projects aren't possible or feasible for everyone. However, if you can, find a group of 2-3 like-minded friends and do a project together. Assign features to different members, have regular meetings, review each other's code, and it'll literally be like working in the industry.</p> </li> </ol> Asking questions for a software engineer 2021-09-19T00:00:00Z https://blog.dkpathak.in/asking-questions-for-a-software-engineer/ <p>Asking questions at the right time, in the right way, to the right person.</p> <p>Almost all freshers will agree that they've been encouraged by their teammates and managers to ask questions. However, this has a fine print : it actually goes, 'ask questions, if you can't find out the answer yourself'.</p> <p>And not because they don't wanna answer, but because :</p> <ul> <li> If you keep asking questions you can find out with a few google searches, you're no good as a developer - you need to learn the art of googling. </li><li> The managers/teammates usually have tasks of their own and are helping you on the side, which means that more often than not, they won't find enough time to answer your questions. </li><li> Answering questions is wayyyy tougher than asking them, especially if you're answering to a noob. Suppose you ask a question that you think is innocuous, and has a one line answer - "What does this imported package do". Now, your teammate's mind races back to what the package is, why it was brought in, why it's used, how it's used, and a few zillion other things, most of which would make no sense to you. So, she/he has to filter these out in a way so that you're able to grasp the essentials without feeling dumb or overwhelmed. That's tricky business. </li></ul> <p>So, what should you do? How should you 'ask questions'?</p> <p>First, the 'right way' :</p> <ol> <li> Any query you find, first google it. Right away, as it is. Maybe you find a blank google search result - very very rare. More likely, you'll find something that can complement your understanding in some way, even if it doesn't give you the complete answer. But you'll at least have some more idea, and can ask the question to your teammate in a more refined way, so that the tough choices your teammate would face, mentioned in point 3 above, can be minimized. </li><li> Instead of asking a teammate to explain it all - tell her/him what you've understood and ask her/him to validate/correct you. If you've got it 70% right, the teammate need only explain 30%, saving both your time. If you've got it entirely wrong, the teammate would know that there's something lacking in your fundamental understanding and correct that first. If you've got it entirely right, you're getting a promotion sooner.
</li><li> Try asking the teammate for a resource where you can learn more about the question you're asking. That way, the teammate will not be under pressure to explain 'everything' to you, and can instead guide you to a resource, which might help you better. </li><li> Make a habit of taking notes of what you ask and the answers you get. We often overestimate our memories and underestimate all the crap that's gonna take a chunk off them, so you'd best have it written down somewhere, to save yourself from your teammates' irritation at being asked the same question 20 times. <p>Next, the right time. If you're an overexcited sorta person, you wanna know the entire architecture and each and every package right on the first bloody day of the job, because you then wanna go and be Napoleon. Or if you're the shy sort, you keep stalling, waiting for the 'right time' until it's too late. Figuring out the right time to ask comes mainly with experience, but a rule of thumb is that if it's something that's blocking your progress, ask it right away. If you think the teammate is going to get to this question anyway, give her/him an opportunity to address it. If they skip it, then ask. Finally, ensure that the teammate is in the right frame of mind when you ask a question, not when they're debugging a critical prod issue.</p> <p>Finally, the right person. You could ask the same question to an immediate senior, your manager, and your team lead, and get different answers. You need to figure out which of these would work best in the context in which you're seeking an answer.</p> <p>For instance, if you're struggling with a syntactical issue, you should most likely reach out to an immediate senior, someone who has the closest interface with the code, since they can give you the quickest answer. If you're looking to understand the big picture of a project or a feature, someone who's been around longer can help better.</p> </li></ol>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>DKProbes</title>
<subtitle>Software Engineering and productivity demystified humanely.</subtitle>
<link href="https://blog.dkpathak.in/feed/" rel="self"/>
<link href="https://blog.dkpathak.in"/>
<updated>2024-07-13T00:00:00Z</updated>
<id>https://blog.dkpathak.in</id>
<author>
<name>Dushyant Pathak</name>
</author>
<entry>
<title>Dependency Injection</title>
<link href="https://blog.dkpathak.in/dependency-injection/"/>
<updated>2024-07-13T00:00:00Z</updated>
<id>https://blog.dkpathak.in/dependency-injection/</id>
<content type="html"><p>Dependency Injection (DI) is a design pattern that allows an object to receive its dependencies from an external source rather than creating them itself. This pattern promotes loose coupling and makes your code more modular and testable, and less error prone.</p> <p>I had an opportunity to refactor DI implemention at my workplace.</p> <h2 id="what-is-dependency-injection" tabindex="-1">What is Dependency Injection?<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#what-is-dependency-injection">#</a></h2> <p>Dependency Injection is a technique where the dependencies (objects) of a class are provided (injected) by Spring, typically through constructors, setters, or interfaces. DI helps in separating the creation of dependencies from the business logic, thereby adhering to the principle of Inversion of Control (IoC).</p> <h3 id="types-of-dependency-injection" tabindex="-1">Types of Dependency Injection<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#types-of-dependency-injection">#</a></h3> <ol> <li><strong>Constructor Injection</strong>: Dependencies are provided through a class constructor.</li> <li><strong>Setter Injection</strong>: Dependencies are provided through setter methods.</li> <li><strong>Field Injection</strong>: Dependencies are directly injected into the class fields using annotations.</li> </ol> <h2 id="problem-statement" tabindex="-1">Problem statement<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#problem-statement">#</a></h2> <p>Our first implementation relied on a tightly coupled instantiation of services into a component</p> <pre class="language-java"><code class="language-java"><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Subsidiary</span> <span class="token punctuation">{</span><br /> <span class="token class-name">String</span> name<span class="token punctuation">;</span><br /> <span class="token class-name">Integer</span> partyId<span class="token punctuation">;</span><br /> <span class="token class-name">List</span><span class="token generics"><span class="token punctuation">&lt;</span><span class="token class-name">String</span><span class="token punctuation">></span></span> ratings<span class="token punctuation">;</span><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateSubsidiary</span><span class="token punctuation">(</span><span class="token class-name">List</span><span class="token generics"><span class="token punctuation">&lt;</span><span class="token class-name">String</span><span class="token punctuation">></span></span> ratings<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>ratings <span class="token operator">=</span> ratings<span class="token punctuation">;</span><br /> <span class="token comment">// update ratings in db</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">UpdateSub</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">SubsidiaryService</span> subsidiaryService<span class="token punctuation">;</span><br /> <span class="token 
keyword">public</span> <span class="token class-name">UpdateSub</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>subsidiaryService <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">SubsidiaryService</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">processUpdates</span><span class="token punctuation">(</span><span class="token class-name">List</span><span class="token generics"><span class="token punctuation">&lt;</span><span class="token class-name">String</span><span class="token punctuation">></span></span> ratings<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> subsidiaryService<span class="token punctuation">.</span><span class="token function">updateSubsidiary</span><span class="token punctuation">(</span>ratings<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// Main.java</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Main</span> <span class="token punctuation">{</span><br /> <span class="token keyword">public</span> <span class="token keyword">static</span> <span class="token keyword">void</span> <span class="token function">main</span><span class="token punctuation">(</span><span class="token class-name">String</span><span class="token punctuation">[</span><span class="token punctuation">]</span> args<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token class-name">UpdateSub</span> <span class="token class-name">UpdateSub</span> <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">UpdateSub</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token class-name">UpdateSub</span><span class="token punctuation">.</span><span class="token function">processUpdates</span><span class="token punctuation">(</span><span class="token punctuation">[</span><span class="token string">"AA+"</span><span class="token punctuation">,</span> <span class="token string">"BB-"</span><span class="token punctuation">,</span> <span class="token string">"CCC"</span><span class="token punctuation">]</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span></code></pre> <p>As visible here, we are creating an object of SubsidiaryService inside UpdateSub and instantiating it.</p> <h2 id="challenges-with-the-above-approach" tabindex="-1">Challenges with the above approach?<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#challenges-with-the-above-approach">#</a></h2> <ol> <li> <p>Tight coupling : Since we create and instantiate the object of SubService manually, it is coupled to the business logic of the UpdateSub itself. 
Should SubsidiaryService be made into an interface, we'd need to update the logic inside UpdateSub to instantiate the implementation of the interface.</p> </li> <li> <p>Challenges during testing: When writing JUnit tests, we don't want to actually create database connections - we just want to mock them. However, since UpdateSub builds its own SubsidiaryService, calling it would end up hitting the real database connection, and there is no way to substitute a mock.</p> </li> </ol> <h2 id="stage-1-of-solution--field-injection" tabindex="-1">Stage 1 of solution : Field injection<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#stage-1-of-solution--field-injection">#</a></h2> <p>To address the above limitations, we decided to implement dependency injection, starting with a type called field injection.</p> <p>As the name suggests, we inject dependencies as a field of the class.</p> <p>In code, it looked something like this:</p> <pre class="language-java"><code class="language-java">public class UpdateSub {

    @Autowired
    private SubsidiaryService subsidiaryService;

    public UpdateSub() {
    }

    public void processUpdates(List&lt;String> ratings) {
        subsidiaryService.updateSubsidiary(ratings);
    }
}</code></pre> <p>Just by writing the <code>@Autowired</code> annotation, we were able to inject the SubsidiaryService dependency. Now, our JUnit tests could mock SubsidiaryService <a href="https://howtodoinjava.com/mockito/mockito-mock-injectmocks/#:~:text=2.-,Difference%20between%20%40Mock%20and%20%40InjectMocks,tested%20in%20the%20test%20class.">using <code>@InjectMocks</code> or <code>@Mock</code> from Mockito</a>.</p> <h3 id="stage-2--limitations-of-autowired" tabindex="-1">Stage 2 : Limitations of @Autowired<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#stage-2--limitations-of-autowired">#</a></h3> <p>There are a couple of limitations with this approach.</p> <ol> <li>You cannot make the injected service immutable</li> </ol> <p>Since <code>@Autowired</code> injects the service after UpdateSub has been instantiated, declaring the field as final throws a compile-time error. This is a problem when we want to make sure our injections aren't overridden.</p>
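<p>To make the limitation concrete, here is a sketch of what we would have liked to write - note that this snippet deliberately does not compile, which is exactly the point:</p> <pre class="language-java"><code class="language-java">public class UpdateSub {

    @Autowired
    private final SubsidiaryService subsidiaryService;
    // Compile-time error: a final field must be definitely assigned in the
    // constructor, but @Autowired field injection only runs after construction.
}</code></pre>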
<ol start="2"> <li>Chances of NPE</li> </ol> <p>Again, because injection happens after the root class is instantiated, we ran into null pointer exceptions when we tried to call a method on a service that Spring hadn't yet been able to instantiate.</p> <ol start="3"> <li>The partial accuracy of @InjectMocks</li> </ol> <p>In the JUnit tests for UpdateSub, we'd need to mock SubsidiaryService and pass it along to UpdateSub. We achieved this by <code>@Mock</code>ing SubsidiaryService and <code>@InjectMocks</code>-ing the mock into UpdateSub, but that didn't feel like the right approach.</p> <h3 id="solution---constructor-injection" tabindex="-1">Solution - Constructor injection<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#solution---constructor-injection">#</a></h3> <p>We therefore decided to move to constructor injection.</p> <p>As the name suggests, we inject services through the constructor, rather than as a field.</p> <p>Here is how it looked in code:</p> <pre class="language-java"><code class="language-java">public class UpdateSub {

    private final SubsidiaryService subsidiaryService;

    @Autowired
    public UpdateSub(SubsidiaryService subsidiaryService) {
        this.subsidiaryService = subsidiaryService;
    }

    public void processUpdates(List&lt;String> ratings) {
        subsidiaryService.updateSubsidiary(ratings);
    }
}</code></pre> <p>Here, we autowire the constructor and pass the dependency in as a parameter.</p>
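<p>To illustrate the testing payoff, here's a minimal JUnit 5 + Mockito sketch (the test class and test names are illustrative, not from our actual codebase). With constructor injection, the mock is handed to UpdateSub directly - no <code>@InjectMocks</code> reflection needed:</p> <pre class="language-java"><code class="language-java">import static org.mockito.Mockito.verify;

import java.util.List;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class UpdateSubTest {

    @Mock
    private SubsidiaryService subsidiaryService; // mocked dependency

    @Test
    void processUpdatesDelegatesToService() {
        // The mock is passed through the constructor, exactly like any other caller would
        UpdateSub updateSub = new UpdateSub(subsidiaryService);

        updateSub.processUpdates(List.of("AA+", "BB-"));

        // Verify the call was delegated to the mocked service
        verify(subsidiaryService).updateSubsidiary(List.of("AA+", "BB-"));
    }
}</code></pre>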
<p>The advantage of this is that the dependency is initialized when the UpdateSub object is created, thus solving the null pointer concerns above. It also lets us declare the field <code>final</code>, addressing the immutability concern.</p> <p>We can even do away with the explicit <code>@Autowired</code> annotation when there is just one constructor, as is the case above, since Spring handles the injection during constructor invocation.</p> <p>Considering all factors, this seems to us the most useful and recommended implementation of dependency injection.</p> <h2 id="advantages-of-dependency-injection" tabindex="-1">Advantages of Dependency Injection<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#advantages-of-dependency-injection">#</a></h2> <p>To summarize, the following are the advantages of DI:</p> <ol> <li><strong>Loose Coupling</strong>: DI reduces the coupling between classes, making the system more flexible and easier to maintain.</li> <li><strong>Easier Testing</strong>: Dependencies can be easily mocked or stubbed during unit testing, leading to more isolated and reliable tests.</li> <li><strong>Improved Code Readability</strong>: DI promotes clean code practices by clearly defining dependencies and their relationships.</li> <li><strong>Enhanced Maintainability</strong>: Changes in dependencies require minimal changes in the dependent classes, making the system more maintainable.</li> <li><strong>Increased Reusability</strong>: DI encourages the use of interfaces and abstract classes, enhancing the reusability of components.</li> </ol> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/dependency-injection/#conclusion">#</a></h2> <p>Dependency Injection is a powerful design pattern that improves the modularity, testability, and maintainability of your code.</p> </content>
</entry>
<entry>
<title>Google Cloud VPC</title>
<link href="https://blog.dkpathak.in/google-cloud-vpc/"/>
<updated>2024-06-15T00:00:00Z</updated>
<id>https://blog.dkpathak.in/google-cloud-vpc/</id>
<content type="html"><p>In the world of cloud computing, a Virtual Private Cloud (VPC) is a private network within a public cloud that allows organizations to isolate their resources and manage them securely. Google Cloud Platform (GCP) offers a robust VPC service that provides scalable and flexible networking capabilities. In this blog, we'll delve into the concept of VPCs in GCP, explore their features, and guide you through setting up a VPC with snapshots from the GCP platform.</p> <h2 id="what-is-a-virtual-private-cloud-vpc" tabindex="-1">What is a Virtual Private Cloud (VPC)?<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#what-is-a-virtual-private-cloud-vpc">#</a></h2> <p>A Virtual Private Cloud (VPC) is a logically isolated section of a public cloud where you can launch resources in a virtual network that you define. A VPC provides the ability to:</p> <ul> <li>Isolate resources within the cloud environment.</li> <li>Control network settings such as IP address ranges, subnets, and route tables.</li> <li>Secure communication between resources using firewalls and security groups.</li> </ul> <h2 id="key-features-of-gcp-vpc" tabindex="-1">Key Features of GCP VPC<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#key-features-of-gcp-vpc">#</a></h2> <ol> <li><strong>Global Scope</strong>: GCP VPCs are global resources that span all the regions, allowing you to create subnets in any region without creating multiple VPCs.</li> <li><strong>Flexible Subnetworks</strong>: Subnets can be defined per region, allowing for more granular control over your network.</li> <li><strong>Custom Routes and Firewalls</strong>: VPCs come with default route tables and firewall rules that you can customize to control traffic flow.</li> <li><strong>Private Google Access</strong>: VPCs can enable private access to Google services, ensuring secure communication without exposing traffic to the internet.</li> <li><strong>VPC Peering</strong>: Connect multiple VPCs together to share resources across different projects or organizations.</li> </ol> <h2 id="setting-up-a-vpc-in-gcp" tabindex="-1">Setting Up a VPC in GCP<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#setting-up-a-vpc-in-gcp">#</a></h2> <h3 id="step-1-create-a-vpc" tabindex="-1">Step 1: Create a VPC<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#step-1-create-a-vpc">#</a></h3> <ol> <li> <p><strong>Navigate to the VPC Network Section</strong>: <img src="https://cloud.google.com/static/images/getting-started/gcp-console.png" alt="VPC Network Section" /></p> </li> <li> <p><strong>Create a New VPC</strong>:</p> <ul> <li>Go to the GCP Console.</li> <li>Navigate to the &quot;VPC network&quot; section under the &quot;Networking&quot; category.</li> <li>Click on &quot;Create VPC network&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/create-vpc.png" alt="Create VPC" /></p> </li> <li> <p><strong>Configure the VPC</strong>:</p> <ul> <li>Provide a name for your VPC.</li> <li>Choose an automatic or custom subnet creation mode. 
For this example, select &quot;Custom&quot; to define subnets manually.</li> <li>Click &quot;Create&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/configure-vpc.png" alt="Configure VPC" /></p> </li> </ol> <h3 id="step-2-create-subnets" tabindex="-1">Step 2: Create Subnets<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#step-2-create-subnets">#</a></h3> <ol> <li> <p><strong>Add Subnet</strong>:</p> <ul> <li>In the &quot;Create a subnet&quot; section, provide a name for the subnet.</li> <li>Select the region where the subnet will be located.</li> <li>Specify the IP address range for the subnet (e.g., 10.0.0.0/24).</li> <li>Click &quot;Add subnet&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/add-subnet.png" alt="Add Subnet" /></p> </li> <li> <p><strong>Repeat for Additional Subnets</strong>:</p> <ul> <li>Add more subnets as needed for different regions or availability zones.</li> </ul> </li> </ol> <h3 id="step-3-configure-firewall-rules" tabindex="-1">Step 3: Configure Firewall Rules<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#step-3-configure-firewall-rules">#</a></h3> <ol> <li> <p><strong>Navigate to Firewall Rules</strong>:</p> <ul> <li>Under the &quot;VPC network&quot; section, click on &quot;Firewall rules&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/firewall-rules.png" alt="Firewall Rules" /></p> </li> <li> <p><strong>Create Firewall Rule</strong>:</p> <ul> <li>Click on &quot;Create firewall rule&quot;.</li> <li>Provide a name for the firewall rule.</li> <li>Define the targets, source IP ranges, and protocols/ports.</li> <li>Click &quot;Create&quot;.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/create-firewall-rule.png" alt="Create Firewall Rule" /></p> </li> </ol> <h3 id="step-4-enable-private-google-access" tabindex="-1">Step 4: Enable Private Google Access<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#step-4-enable-private-google-access">#</a></h3> <ol> <li> <p><strong>Private Google Access</strong>:</p> <ul> <li>Navigate to the &quot;Subnets&quot; section under the &quot;VPC network&quot;.</li> <li>Select a subnet and edit it.</li> <li>Enable &quot;Private Google Access&quot; to allow instances in the subnet to access Google APIs and services using internal IP addresses.</li> </ul> <p><img src="https://cloud.google.com/static/images/docs/private-google-access.png" alt="Private Google Access" /></p> </li> </ol> <h2 id="advantages-of-using-gcp-vpc" tabindex="-1">Advantages of Using GCP VPC<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#advantages-of-using-gcp-vpc">#</a></h2> <ol> <li><strong>Global Connectivity</strong>: GCP VPC allows you to connect resources across regions without needing multiple VPCs.</li> <li><strong>Scalability</strong>: Easily scale your network by adding subnets and configuring routes and firewalls as needed.</li> <li><strong>Security</strong>: Implement granular security controls using firewall rules, private access, and custom routes.</li> <li><strong>Flexibility</strong>: Create custom subnet configurations and manage IP address ranges to suit your specific needs.</li> <li><strong>Integration</strong>: Seamlessly integrate with other GCP services such as Cloud Interconnect, Cloud VPN, and more.</li> </ol> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/google-cloud-vpc/#conclusion">#</a></h2> 
<p>Understanding and utilizing VPCs in Google Cloud Platform is essential for creating a secure and scalable cloud infrastructure. By leveraging GCP VPCs, you can isolate your resources, manage network configurations, and ensure secure communication within your cloud environment. The step-by-step guide provided in this blog, along with the snapshots from the GCP platform, should help you get started with setting up and configuring your own VPC in GCP.</p> </content>
</entry>
<entry>
<title>Understanding GraphQL Mutations</title>
<link href="https://blog.dkpathak.in/understanding-graphql-mutations/"/>
<updated>2024-07-08T00:00:00Z</updated>
<id>https://blog.dkpathak.in/understanding-graphql-mutations/</id>
<content type="html"><p>GraphQL has revolutionized the way we interact with APIs by providing a flexible and efficient approach to querying and mutating data. While queries are used to fetch data, mutations are the means to modify it. In this blog, we'll dive deep into GraphQL mutations, explore the concept of transactional updates, and discuss how to implement rollbacks to ensure data integrity.</p> <h2 id="what-are-graphql-mutations" tabindex="-1">What are GraphQL Mutations?<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#what-are-graphql-mutations">#</a></h2> <p>GraphQL mutations are operations that allow you to create, update, or delete data. Unlike queries, which are idempotent (they can be called multiple times without changing the result), mutations are meant to cause side effects on the server.</p> <h3 id="basic-mutation-example" tabindex="-1">Basic Mutation Example<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#basic-mutation-example">#</a></h3> <p>Let's start with a simple example of a mutation to update a financial transaction:</p> <pre class="language-graphql"><code class="language-graphql"><span class="token keyword">mutation</span> <span class="token definition-mutation function">UpdateTransaction</span><span class="token punctuation">(</span><span class="token variable variable-input">$id</span><span class="token punctuation">:</span> <span class="token scalar">ID</span><span class="token operator">!</span><span class="token punctuation">,</span> <span class="token variable variable-input">$amount</span><span class="token punctuation">:</span> <span class="token scalar">Float</span><span class="token operator">!</span><span class="token punctuation">,</span> <span class="token variable variable-input">$status</span><span class="token punctuation">:</span> <span class="token scalar">String</span><span class="token operator">!</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token property-query property-mutation">updateTransaction</span><span class="token punctuation">(</span><span class="token attr-name">id</span><span class="token punctuation">:</span> <span class="token variable variable-input">$id</span><span class="token punctuation">,</span> <span class="token attr-name">amount</span><span class="token punctuation">:</span> <span class="token variable variable-input">$amount</span><span class="token punctuation">,</span> <span class="token attr-name">status</span><span class="token punctuation">:</span> <span class="token variable variable-input">$status</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token property">id</span><br /> <span class="token property">amount</span><br /> <span class="token property">status</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span></code></pre> <p>In this mutation, we pass the transaction ID, amount, and status as arguments to update the transaction details. 
The response includes the updated transaction information.</p> <h3 id="implementing-mutations-in-a-server" tabindex="-1">Implementing Mutations in a Server<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#implementing-mutations-in-a-server">#</a></h3> <p>Here’s how you can implement the above mutation in a Java server using Spring Boot and Spring Data JPA:</p> <pre class="language-java"><code class="language-java">// Transaction.java - Entity Class
@Entity
public class Transaction {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private double amount;
    private String status;

    // Getters and Setters
}

// TransactionRepository.java - Repository Interface
public interface TransactionRepository extends JpaRepository&lt;Transaction, Long> {}

// TransactionService.java - Service Class
@Service
public class TransactionService {
    @Autowired
    private TransactionRepository repository;

    @Transactional
    public Transaction updateTransaction(Long id, double amount, String status) {
        Transaction transaction = repository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("Transaction not found"));
        transaction.setAmount(amount);
        transaction.setStatus(status);
        return repository.save(transaction);
    }
}

// TransactionResolver.java - GraphQL Resolver
@Component
public class TransactionResolver implements GraphQLMutationResolver {
    @Autowired
    private TransactionService service;

    public Transaction updateTransaction(Long id, double amount, String status) {
        return service.updateTransaction(id, amount, status);
    }
}

// schema.graphqls - GraphQL Schema
type Transaction {
    id: ID!
    amount: Float!
    status: String!
}

type Mutation {
    updateTransaction(id: ID!, amount: Float!, status: String!): Transaction
}

type Query {
    transaction(id: ID!): Transaction
}</code></pre> <h2 id="transactional-updates" tabindex="-1">Transactional Updates<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#transactional-updates">#</a></h2> <p>In a production environment, mutations often need to be part of a transaction to ensure data consistency. A transaction is a sequence of operations performed as a single logical unit of work. If any operation within the transaction fails, the entire transaction is rolled back, leaving the database in a consistent state.</p>
<h3 id="transaction-example-with-spring-boot" tabindex="-1">Transaction Example with Spring Boot<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#transaction-example-with-spring-boot">#</a></h3> <p>Spring Boot provides strong support for transactions, making it easy to implement transactional updates in your GraphQL mutations:</p> <pre class="language-java"><code class="language-java">// TransactionService.java - Service Class with Transaction Management
@Service
public class TransactionService {
    @Autowired
    private TransactionRepository repository;

    @Transactional
    public Transaction updateTransaction(Long id, double amount, String status) {
        Transaction transaction = repository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("Transaction not found"));
        transaction.setAmount(amount);
        transaction.setStatus(status);
        return repository.save(transaction);
    }
}</code></pre> <p>In this example, we wrap the mutation in a transaction via the <code>@Transactional</code> annotation. If any error occurs during the update, the transaction is rolled back to ensure data consistency.</p>
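<p>One caveat worth calling out - this is standard Spring behavior rather than anything GraphQL-specific, and the class below is an illustrative sketch, not part of our actual service: by default, <code>@Transactional</code> only rolls back on unchecked exceptions (<code>RuntimeException</code> and <code>Error</code>); a checked exception will still commit. If your update logic throws checked exceptions, opt in explicitly:</p> <pre class="language-java"><code class="language-java">import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class StrictRollbackService {

    // rollbackFor widens rollback to checked exceptions as well;
    // without it, throwing a checked exception would NOT roll back the transaction.
    @Transactional(rollbackFor = Exception.class)
    public void updateWithStrictRollback() throws Exception {
        // ... update logic that may throw a checked exception ...
    }
}</code></pre>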
<h2 id="rollbacks-in-graphql" tabindex="-1">Rollbacks in GraphQL<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#rollbacks-in-graphql">#</a></h2> <p>Rollbacks are crucial for maintaining data integrity, especially in scenarios where multiple mutations are involved. Implementing rollbacks in GraphQL involves using the transactions provided by the database or ORM.</p> <h3 id="handling-rollbacks" tabindex="-1">Handling Rollbacks<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#handling-rollbacks">#</a></h3> <p>To handle rollbacks, ensure that each mutation is wrapped in a transaction. Here’s a more complex example involving multiple updates:</p> <pre class="language-java"><code class="language-java">// TransactionService.java - Service Class with Complex Transaction Management
@Service
public class TransactionService {
    @Autowired
    private TransactionRepository repository;

    @Autowired
    private LogRepository logRepository; // repository for log entries

    @Transactional
    public Transaction updateTransactionAndLog(Long transactionId, double amount, String status, Long logId, String logMessage) {
        Transaction transaction = repository.findById(transactionId)
            .orElseThrow(() -> new ResourceNotFoundException("Transaction not found"));
        transaction.setAmount(amount);
        transaction.setStatus(status);

        Log log = logRepository.findById(logId)
            .orElseThrow(() -> new ResourceNotFoundException("Log not found"));
        log.setMessage(logMessage);

        repository.save(transaction);
        logRepository.save(log);

        return transaction;
    }
}

// TransactionResolver.java - GraphQL Resolver
@Component
public class TransactionResolver implements GraphQLMutationResolver {
    @Autowired
    private TransactionService service;

    public Transaction updateTransactionAndLog(Long transactionId, double amount, String status, Long logId, String logMessage) {
        return service.updateTransactionAndLog(transactionId, amount, status, logId, logMessage);
    }
}

// schema.graphqls - GraphQL Schema Update
type Mutation {
    updateTransactionAndLog(transactionId: ID!, amount: Float!, status: String!, logId: ID!, logMessage: String!): Transaction
}</code></pre> <p>In this example, we update both a transaction and a log entry within a single database transaction. If either update fails, the whole transaction is rolled back, ensuring that partial updates do not occur.</p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/understanding-graphql-mutations/#conclusion">#</a></h2> <p>Understanding GraphQL mutations, transactional updates, and rollbacks is essential for building robust and reliable applications. By leveraging transactions, you can ensure data consistency and integrity, even in the face of errors. Implementing these practices in your GraphQL server can help you avoid common pitfalls and provide a better experience for your users.</p> </content>
</entry>
<entry>
<title>The challenges of Database Migration</title>
<link href="https://blog.dkpathak.in/the-challenges-of-database-migration/"/>
<updated>2024-07-08T00:00:00Z</updated>
<id>https://blog.dkpathak.in/the-challenges-of-database-migration/</id>
<content type="html"><p>Database migration is a critical task that involves transferring data from one database to another. This process is often necessary when upgrading systems, consolidating databases, or changing database vendors. However, database migration comes with its own set of challenges and potential pitfalls. In this blog, I’ll share insights from our recent database migration project at my workplace, highlighting the challenges we faced and how we overcame them.</p> <h2 id="understanding-database-migration" tabindex="-1">Understanding Database Migration<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#understanding-database-migration">#</a></h2> <p>Database migration involves moving data from a source database to a target database. This can include migrating the database schema, data, and sometimes even the database engine. Successful migration requires careful planning, execution, and validation to ensure data integrity and minimal downtime.</p> <h2 id="common-challenges-and-pitfalls" tabindex="-1">Common Challenges and Pitfalls<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#common-challenges-and-pitfalls">#</a></h2> <h3 id="1-data-integrity-and-consistency" tabindex="-1">1. Data Integrity and Consistency<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#1-data-integrity-and-consistency">#</a></h3> <p><strong>Challenge</strong>: Ensuring that the data remains intact and consistent during and after the migration is paramount. Any loss or corruption of data can have significant consequences.</p> <p><strong>Pitfall</strong>: Inconsistent data formats, incompatible data types, and schema differences can lead to data integrity issues.</p> <p><strong>Solution</strong>: Thoroughly analyze the source and target databases to identify and address any discrepancies. Use data validation techniques and tools to verify data integrity before, during, and after migration.</p> <pre class="language-java"><code class="language-java"><span class="token comment">// Example of data validation in Java</span><br /><span class="token keyword">public</span> <span class="token keyword">boolean</span> <span class="token function">validateData</span><span class="token punctuation">(</span><span class="token class-name">String</span> sourceData<span class="token punctuation">,</span> <span class="token class-name">String</span> targetData<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">return</span> sourceData<span class="token punctuation">.</span><span class="token function">equals</span><span class="token punctuation">(</span>targetData<span class="token punctuation">)</span><span class="token punctuation">;</span><br /><span class="token punctuation">}</span></code></pre> <h3 id="2-downtime-management" tabindex="-1">2. Downtime Management<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#2-downtime-management">#</a></h3> <p><strong>Challenge</strong>: Minimizing downtime during migration is crucial, especially for applications that require high availability.</p> <p><strong>Pitfall</strong>: Prolonged downtime can disrupt business operations and lead to customer dissatisfaction.</p> <p><strong>Solution</strong>: Plan the migration during off-peak hours and implement a phased or incremental migration approach. 
Use techniques like database replication and shadow databases to minimize downtime.</p> <pre class="language-sql"><code class="language-sql">-- Example of using replication to minimize downtime
CREATE PUBLICATION my_publication FOR ALL TABLES;
CREATE SUBSCRIPTION my_subscription CONNECTION 'dbname=mydb' PUBLICATION my_publication;</code></pre> <h3 id="3-performance-issues" tabindex="-1">3. Performance Issues<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#3-performance-issues">#</a></h3> <p><strong>Challenge</strong>: The performance of the target database can be affected due to differences in indexing, query optimization, and hardware configurations.</p> <p><strong>Pitfall</strong>: Poor performance can lead to slow application response times and increased resource consumption.</p> <p><strong>Solution</strong>: Optimize the target database for performance by analyzing and tuning queries, indexing, and database configurations. Perform load testing to identify and address performance bottlenecks.</p> <pre class="language-sql"><code class="language-sql">-- Example of indexing in SQL
CREATE INDEX idx_user_name ON users (name);</code></pre> <h3 id="4-compatibility-issues" tabindex="-1">4. Compatibility Issues<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#4-compatibility-issues">#</a></h3> <p><strong>Challenge</strong>: Migrating between different database systems can lead to compatibility issues with SQL syntax, stored procedures, and database features.</p> <p><strong>Pitfall</strong>: Incompatible SQL queries and database functions can cause errors and application failures.</p> <p><strong>Solution</strong>: Rewrite SQL queries and stored procedures to be compatible with the target database.
Use database migration tools that offer compatibility checks and automated code conversion.</p> <pre class="language-sql"><code class="language-sql">-- Example of converting SQL syntax for compatibility
-- Source (MySQL)
SELECT * FROM users WHERE DATE(created_at) = CURDATE();

-- Target (PostgreSQL)
SELECT * FROM users WHERE created_at::date = CURRENT_DATE;</code></pre> <h3 id="5-data-volume" tabindex="-1">5. Data Volume<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#5-data-volume">#</a></h3> <p><strong>Challenge</strong>: Migrating large volumes of data can be time-consuming and resource-intensive.</p> <p><strong>Pitfall</strong>: Insufficient planning for data volume can lead to extended migration times and potential failures.</p> <p><strong>Solution</strong>: Use data chunking and parallel processing to handle large volumes of data efficiently. Consider using cloud-based migration services that offer scalability.</p> <pre class="language-java"><code class="language-java">// Example of data chunking in Java
public void migrateDataInChunks(long totalDataSize, int chunkSize) {
    for (long offset = 0; offset &lt; totalDataSize; offset += chunkSize) {
        // Migrate the rows in [offset, offset + chunkSize)
    }
}</code></pre> <h3 id="6-security-concerns" tabindex="-1">6. Security Concerns<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#6-security-concerns">#</a></h3> <p><strong>Challenge</strong>: Ensuring the security of data during migration is critical, especially for sensitive and confidential information.</p> <p><strong>Pitfall</strong>: Data breaches and unauthorized access during migration can have severe consequences.</p> <p><strong>Solution</strong>: Implement strong encryption and access control measures during migration.
<p>Use secure connections and data masking techniques to protect sensitive information.</p> <pre class="language-java"><code class="language-java">// Example of encrypting data during migration<br />public String encryptData(String data) throws Exception {<br />    // Delegate to the AES-GCM sketch above; 'migrationKey' is an assumed injected SecretKey<br />    String encryptedData = MigrationCrypto.encrypt(data, migrationKey);<br />    return encryptedData;<br />}</code></pre> <h3 id="7-testing-and-validation" tabindex="-1">7. Testing and Validation<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#7-testing-and-validation">#</a></h3> <p><strong>Challenge</strong>: Thorough testing and validation are essential to ensure the success of the migration.</p> <p><strong>Pitfall</strong>: Inadequate testing can lead to undetected issues that surface post-migration.</p> <p><strong>Solution</strong>: Develop a comprehensive testing plan that includes unit tests, integration tests, and user acceptance tests. Validate the migrated data and application functionality to ensure everything works as expected.</p> <pre class="language-java"><code class="language-java"><span class="token comment">// Example of unit testing in Java</span><br /><span class="token annotation punctuation">@Test</span><br /><span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">testMigration</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token class-name">String</span> sourceData <span class="token operator">=</span> <span class="token string">"source"</span><span class="token punctuation">;</span><br /> <span class="token class-name">String</span> targetData <span class="token operator">=</span> <span class="token string">"target"</span><span class="token punctuation">;</span><br /> <span class="token comment">// validateData is a project-specific check that the migrated row matches the source</span><br /> <span class="token function">assertTrue</span><span class="token punctuation">(</span><span class="token function">validateData</span><span class="token punctuation">(</span>sourceData<span class="token punctuation">,</span> targetData<span class="token punctuation">)</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><span class="token punctuation">}</span></code></pre> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/the-challenges-of-database-migration/#conclusion">#</a></h2> <p>Database migration is a complex and challenging process that requires careful planning, execution, and validation. By understanding and addressing the common challenges and pitfalls, you can ensure a smooth and successful migration. Our recent migration project at my workplace taught us valuable lessons that can help others navigate this intricate process.</p> </content>
</entry>
<entry>
<title>Grafana</title>
<link href="https://blog.dkpathak.in/grafana/"/>
<updated>2024-07-08T00:00:00Z</updated>
<id>https://blog.dkpathak.in/grafana/</id>
<content type="html"><p>Observability has become a crucial aspect of modern software systems. It enables developers and operations teams to understand the internal state of a system based on the data it produces. At my workplace, we recently implemented Grafana to enhance our observability capabilities. This blog will guide you through the basics of observability, why we chose Grafana, and how we implemented it to gain deeper insights into our applications.</p> <h2 id="what-is-observability" tabindex="-1">What is Observability?<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#what-is-observability">#</a></h2> <p>Observability refers to the ability to measure the internal states of a system by examining its outputs. The three key pillars of observability are:</p> <ol> <li><strong>Metrics</strong>: Quantitative data about the system's performance.</li> <li><strong>Logs</strong>: Detailed records of events that occur within the system.</li> <li><strong>Traces</strong>: A record of the journey of a request through the system.</li> </ol> <h2 id="why-grafana" tabindex="-1">Why Grafana?<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#why-grafana">#</a></h2> <p>Grafana is a powerful open-source platform for monitoring and observability. It allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. Here's why we chose Grafana:</p> <ul> <li><strong>Extensibility</strong>: Grafana supports a wide range of data sources and plugins.</li> <li><strong>Customizable Dashboards</strong>: Create interactive and visually appealing dashboards.</li> <li><strong>Alerting</strong>: Set up alert rules to notify you when certain conditions are met.</li> <li><strong>Ease of Use</strong>: User-friendly interface for setting up and managing observability.</li> </ul> <h2 id="setting-up-grafana" tabindex="-1">Setting Up Grafana<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#setting-up-grafana">#</a></h2> <h3 id="step-1-install-grafana" tabindex="-1">Step 1: Install Grafana<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-1-install-grafana">#</a></h3> <p>First, we need to install Grafana. You can install Grafana on various platforms. 
Here’s an example of installing Grafana on Ubuntu:</p> <pre class="language-bash"><code class="language-bash"><span class="token function">sudo</span> <span class="token function">apt-get</span> <span class="token function">install</span> -y software-properties-common<br /><span class="token function">sudo</span> add-apt-repository <span class="token string">"deb https://packages.grafana.com/oss/deb stable main"</span><br /><span class="token function">wget</span> -q -O - https://packages.grafana.com/gpg.key <span class="token operator">|</span> <span class="token function">sudo</span> apt-key <span class="token function">add</span> -<br /><span class="token function">sudo</span> <span class="token function">apt-get</span> update<br /><span class="token function">sudo</span> <span class="token function">apt-get</span> <span class="token function">install</span> grafana<br /><span class="token function">sudo</span> systemctl start grafana-server<br /><span class="token function">sudo</span> systemctl <span class="token builtin class-name">enable</span> grafana-server</code></pre> <h3 id="step-2-configure-data-sources" tabindex="-1">Step 2: Configure Data Sources<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-2-configure-data-sources">#</a></h3> <p>Once Grafana is installed, configure the data sources. Grafana supports various data sources like Prometheus, InfluxDB, Elasticsearch, etc. In our setup, we used Prometheus.</p> <ol> <li>Navigate to the Grafana UI (http://localhost:3000).</li> <li>Log in with the default credentials (username: <code>admin</code>, password: <code>admin</code>).</li> <li>Go to <strong>Configuration &gt; Data Sources</strong>.</li> <li>Add Prometheus as a data source by providing the URL of your Prometheus server.</li> </ol> <p><img src="https://grafana.com/docs/grafana/latest/getting-started/getting-started-prometheus/add-data-source-prometheus.png" alt="Add Data Source" /></p> <h3 id="step-3-create-dashboards" tabindex="-1">Step 3: Create Dashboards<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-3-create-dashboards">#</a></h3> <p>Next, we create dashboards to visualize our metrics.</p> <ol> <li>Go to <strong>Create &gt; Dashboard</strong>.</li> <li>Add a new panel and configure the query to fetch data from Prometheus.</li> <li>Customize the visualization type (e.g., Graph, Gauge, Heatmap) and panel settings.</li> </ol> <p>Here’s an example PromQL query that returns the per-core idle CPU rate (CPU usage is 1 minus this value):</p> <pre class="language-promql"><code class="language-promql">rate(node_cpu_seconds_total{job="node_exporter",mode="idle"}[5m])</code></pre> <p><img src="https://grafana.com/static/assets/img/features/dashboard/dashboard_overview_light.png" alt="Grafana Dashboard" /></p> <h3 id="step-4-set-up-alerts" tabindex="-1">Step 4: Set Up Alerts<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-4-set-up-alerts">#</a></h3> <p>Alerts are crucial for proactive monitoring.
In Grafana, you can set up alerts based on specific conditions.</p> <ol> <li>In the panel editor, go to the <strong>Alert</strong> tab.</li> <li>Create a new alert rule with conditions (e.g., CPU usage &gt; 80%).</li> <li>Configure notification channels (e.g., email, Slack).</li> </ol> <p>Here's a sample Prometheus configuration that routes fired alerts to an Alertmanager instance:</p> <pre class="language-yaml"><code class="language-yaml"><span class="token key atrule">alerting</span><span class="token punctuation">:</span><br /> <span class="token key atrule">alertmanagers</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> <span class="token key atrule">static_configs</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> <span class="token key atrule">targets</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> <span class="token string">'localhost:9093'</span></code></pre> <h3 id="step-5-explore-logs-and-traces" tabindex="-1">Step 5: Explore Logs and Traces<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#step-5-explore-logs-and-traces">#</a></h3> <p>Grafana also supports log aggregation and tracing. Integrate with Loki for logs and Tempo for tracing to gain a comprehensive view of your system's behavior. For example, querying each from the command line:</p> <pre class="language-bash"><code class="language-bash">logcli query '{job="varlogs"} | logfmt'<br />tempo query 'span_id=12345'</code></pre> <h2 id="advantages-of-using-grafana" tabindex="-1">Advantages of Using Grafana<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#advantages-of-using-grafana">#</a></h2> <ol> <li><strong>Unified View</strong>: Grafana provides a single-pane-of-glass view of your metrics, logs, and traces.</li> <li><strong>Proactive Monitoring</strong>: With alerting, you can detect and respond to issues before they impact users.</li> <li><strong>Historical Analysis</strong>: Grafana allows you to explore historical data, aiding in troubleshooting and capacity planning.</li> <li><strong>Customization</strong>: Tailor dashboards and visualizations to meet specific needs.</li> </ol> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/grafana/#conclusion">#</a></h2> <p>Implementing Grafana at my workplace has significantly enhanced our observability capabilities. We can now monitor our systems in real-time, set up alerts for critical conditions, and analyze logs and traces for in-depth insights. Grafana’s extensibility and ease of use make it an excellent choice for any organization looking to improve its observability practices.</p> </content>
</entry>
<entry>
<title>Implementing the Command Design Pattern</title>
<link href="https://blog.dkpathak.in/implementing-the-command-design-pattern/"/>
<updated>2024-07-10T00:00:00Z</updated>
<id>https://blog.dkpathak.in/implementing-the-command-design-pattern/</id>
<content type="html"><p>At my workplace, we often deal with complex business logic that involves multiple operations. To maintain a clean and maintainable codebase, we decided to implement the Command Design Pattern. This pattern not only improved our code structure but also enhanced its extensibility and scalability. In this blog, I'll walk you through the Command Design Pattern, compare code written without and with this pattern, and discuss its advantages using a real-world example of updating organization details.</p> <h2 id="what-is-the-command-design-pattern" tabindex="-1">What is the Command Design Pattern?<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#what-is-the-command-design-pattern">#</a></h2> <p>The Command Design Pattern is a behavioral design pattern that turns a request into a stand-alone object that contains all information about the request. This transformation allows us to parameterize methods with different requests, delay or queue a request's execution, and support undoable operations.</p> <h2 id="code-without-command-pattern" tabindex="-1">Code Without Command Pattern<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#code-without-command-pattern">#</a></h2> <p>Let's consider a simple example where we need to update the details of an organization. Here's how the code might look without using the Command Pattern:</p> <pre class="language-java"><code class="language-java"><span class="token comment">// Organization class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Organization</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> name<span class="token punctuation">;</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> address<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">Organization</span><span class="token punctuation">(</span><span class="token class-name">String</span> name<span class="token punctuation">,</span> <span class="token class-name">String</span> address<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>name <span class="token operator">=</span> name<span class="token punctuation">;</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>address <span class="token operator">=</span> address<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateName</span><span class="token punctuation">(</span><span class="token class-name">String</span> newName<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>name <span class="token operator">=</span> newName<span class="token punctuation">;</span><br /> <span class="token class-name">System</span><span class="token punctuation">.</span>out<span class="token punctuation">.</span><span class="token function">println</span><span class="token punctuation">(</span><span class="token string">"Organization name updated to: 
"</span> <span class="token operator">+</span> newName<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateAddress</span><span class="token punctuation">(</span><span class="token class-name">String</span> newAddress<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>address <span class="token operator">=</span> newAddress<span class="token punctuation">;</span><br /> <span class="token class-name">System</span><span class="token punctuation">.</span>out<span class="token punctuation">.</span><span class="token function">println</span><span class="token punctuation">(</span><span class="token string">"Organization address updated to: "</span> <span class="token operator">+</span> newAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token comment">// Getters for name and address</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// OrganizationService class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">OrganizationService</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">Organization</span> organization<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">OrganizationService</span><span class="token punctuation">(</span><span class="token class-name">Organization</span> organization<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>organization <span class="token operator">=</span> organization<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateDetails</span><span class="token punctuation">(</span><span class="token class-name">String</span> newName<span class="token punctuation">,</span> <span class="token class-name">String</span> newAddress<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> organization<span class="token punctuation">.</span><span class="token function">updateName</span><span class="token punctuation">(</span>newName<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> organization<span class="token punctuation">.</span><span class="token function">updateAddress</span><span class="token punctuation">(</span>newAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// Main class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Main</span> <span class="token punctuation">{</span><br /> <span class="token keyword">public</span> <span class="token keyword">static</span> <span class="token 
keyword">void</span> <span class="token function">main</span><span class="token punctuation">(</span><span class="token class-name">String</span><span class="token punctuation">[</span><span class="token punctuation">]</span> args<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token class-name">Organization</span> org <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">Organization</span><span class="token punctuation">(</span><span class="token string">"Old Name"</span><span class="token punctuation">,</span> <span class="token string">"Old Address"</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token class-name">OrganizationService</span> orgService <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">OrganizationService</span><span class="token punctuation">(</span>org<span class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> orgService<span class="token punctuation">.</span><span class="token function">updateDetails</span><span class="token punctuation">(</span><span class="token string">"New Name"</span><span class="token punctuation">,</span> <span class="token string">"New Address"</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span></code></pre> <p>In this implementation, the <code>OrganizationService</code> class directly depends on the <code>Organization</code> class and its specific methods. This tight coupling makes the code difficult to extend and maintain, especially when new operations are introduced.</p> <h2 id="code-with-command-pattern" tabindex="-1">Code With Command Pattern<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#code-with-command-pattern">#</a></h2> <p>By implementing the Command Pattern, we can decouple the invoker (organization service) from the receiver (organization) and encapsulate the request as an object. 
Here's how we can refactor the above code:</p> <pre class="language-java"><code class="language-java"><span class="token comment">// Command interface</span><br /><span class="token keyword">public</span> <span class="token keyword">interface</span> <span class="token class-name">Command</span> <span class="token punctuation">{</span><br /> <span class="token keyword">void</span> <span class="token function">execute</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// Organization class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Organization</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> name<span class="token punctuation">;</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> address<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">Organization</span><span class="token punctuation">(</span><span class="token class-name">String</span> name<span class="token punctuation">,</span> <span class="token class-name">String</span> address<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>name <span class="token operator">=</span> name<span class="token punctuation">;</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>address <span class="token operator">=</span> address<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateName</span><span class="token punctuation">(</span><span class="token class-name">String</span> newName<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>name <span class="token operator">=</span> newName<span class="token punctuation">;</span><br /> <span class="token class-name">System</span><span class="token punctuation">.</span>out<span class="token punctuation">.</span><span class="token function">println</span><span class="token punctuation">(</span><span class="token string">"Organization name updated to: "</span> <span class="token operator">+</span> newName<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">updateAddress</span><span class="token punctuation">(</span><span class="token class-name">String</span> newAddress<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>address <span class="token operator">=</span> newAddress<span class="token punctuation">;</span><br /> <span class="token class-name">System</span><span class="token punctuation">.</span>out<span class="token punctuation">.</span><span class="token function">println</span><span class="token 
punctuation">(</span><span class="token string">"Organization address updated to: "</span> <span class="token operator">+</span> newAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token comment">// Getters for name and address</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// Concrete Command classes</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">UpdateNameCommand</span> <span class="token keyword">implements</span> <span class="token class-name">Command</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">Organization</span> organization<span class="token punctuation">;</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> newName<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">UpdateNameCommand</span><span class="token punctuation">(</span><span class="token class-name">Organization</span> organization<span class="token punctuation">,</span> <span class="token class-name">String</span> newName<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>organization <span class="token operator">=</span> organization<span class="token punctuation">;</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>newName <span class="token operator">=</span> newName<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token annotation punctuation">@Override</span><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">execute</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> organization<span class="token punctuation">.</span><span class="token function">updateName</span><span class="token punctuation">(</span>newName<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">UpdateAddressCommand</span> <span class="token keyword">implements</span> <span class="token class-name">Command</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">Organization</span> organization<span class="token punctuation">;</span><br /> <span class="token keyword">private</span> <span class="token class-name">String</span> newAddress<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token class-name">UpdateAddressCommand</span><span class="token punctuation">(</span><span class="token class-name">Organization</span> organization<span class="token punctuation">,</span> <span class="token class-name">String</span> newAddress<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token 
punctuation">.</span>organization <span class="token operator">=</span> organization<span class="token punctuation">;</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>newAddress <span class="token operator">=</span> newAddress<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token annotation punctuation">@Override</span><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">execute</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> organization<span class="token punctuation">.</span><span class="token function">updateAddress</span><span class="token punctuation">(</span>newAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// OrganizationService class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">OrganizationService</span> <span class="token punctuation">{</span><br /> <span class="token keyword">private</span> <span class="token class-name">Command</span> command<span class="token punctuation">;</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">setCommand</span><span class="token punctuation">(</span><span class="token class-name">Command</span> command<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token keyword">this</span><span class="token punctuation">.</span>command <span class="token operator">=</span> command<span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><br /> <span class="token keyword">public</span> <span class="token keyword">void</span> <span class="token function">executeCommand</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> command<span class="token punctuation">.</span><span class="token function">execute</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span><br /><br /><span class="token comment">// Main class</span><br /><span class="token keyword">public</span> <span class="token keyword">class</span> <span class="token class-name">Main</span> <span class="token punctuation">{</span><br /> <span class="token keyword">public</span> <span class="token keyword">static</span> <span class="token keyword">void</span> <span class="token function">main</span><span class="token punctuation">(</span><span class="token class-name">String</span><span class="token punctuation">[</span><span class="token punctuation">]</span> args<span class="token punctuation">)</span> <span class="token punctuation">{</span><br /> <span class="token class-name">Organization</span> org <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">Organization</span><span class="token punctuation">(</span><span class="token string">"Old Name"</span><span class="token punctuation">,</span> <span class="token string">"Old Address"</span><span 
class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> <span class="token class-name">Command</span> updateName <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">UpdateNameCommand</span><span class="token punctuation">(</span>org<span class="token punctuation">,</span> <span class="token string">"New Name"</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token class-name">Command</span> updateAddress <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">UpdateAddressCommand</span><span class="token punctuation">(</span>org<span class="token punctuation">,</span> <span class="token string">"New Address"</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> <span class="token class-name">OrganizationService</span> orgService <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">OrganizationService</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> orgService<span class="token punctuation">.</span><span class="token function">setCommand</span><span class="token punctuation">(</span>updateName<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> orgService<span class="token punctuation">.</span><span class="token function">executeCommand</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /><br /> orgService<span class="token punctuation">.</span><span class="token function">setCommand</span><span class="token punctuation">(</span>updateAddress<span class="token punctuation">)</span><span class="token punctuation">;</span><br /> orgService<span class="token punctuation">.</span><span class="token function">executeCommand</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span><br /> <span class="token punctuation">}</span><br /><span class="token punctuation">}</span></code></pre> <p>In this refactored implementation, we have introduced a <code>Command</code> interface and concrete command classes (<code>UpdateNameCommand</code> and <code>UpdateAddressCommand</code>). The <code>OrganizationService</code> class now uses a command object to perform operations, which decouples it from the specific implementations of those operations.</p> <h2 id="advantages-of-using-the-command-pattern" tabindex="-1">Advantages of Using the Command Pattern<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#advantages-of-using-the-command-pattern">#</a></h2> <ol> <li> <p><strong>Decoupling of Invoker and Receiver</strong>: The invoker (organization service) does not need to know the specifics of the receiver (organization). It only interacts with the command interface, making the code more flexible and easier to extend.</p> </li> <li> <p><strong>Extensibility</strong>: Adding new commands is straightforward. We just need to implement a new command class without modifying existing code.</p> </li> <li> <p><strong>Support for Undo Operations</strong>: By storing executed commands, we can implement undo functionality. 
Each command can have an <code>unexecute</code> method to reverse its action.</p> </li> <li> <p><strong>Queue and Log Requests</strong>: Commands can be queued or logged for future execution, enabling features like request logging, job scheduling, and task retry mechanisms.</p> </li> <li> <p><strong>Promotes Reusability</strong>: Common commands can be reused across different parts of the application, reducing code duplication.</p> </li> </ol> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/implementing-the-command-design-pattern/#conclusion">#</a></h2> <p>Implementing the Command Design Pattern at my workplace has significantly improved our code's maintainability and extensibility. By decoupling the invoker from the receiver and encapsulating requests as objects, we have made our codebase more flexible and easier to manage. If you're dealing with complex operations in your projects, consider using the Command Pattern to achieve a cleaner and more modular design.</p> <p><img src="https://refactoring.guru/images/patterns/diagrams/command/structure.png" alt="Command Design Pattern" /></p> </content>
</entry>
<entry>
<title>Can database consistency, exception handling and Angular popups come together</title>
<link href="https://blog.dkpathak.in/can-database-consistency-exception-handling-and-angular-popups-come-together/"/>
<updated>2024-06-24T00:00:00Z</updated>
<id>https://blog.dkpathak.in/can-database-consistency-exception-handling-and-angular-popups-come-together/</id>
<content type="html"><p>Having prided myself on my full stack skills, I was tested on my bona fides with a rather interesting and critical problem at work, one that required me to understand Oracle SQL database writes, GraphQL mutations, Java collections, runtime exception handling, and RxJS - pretty much the entire full stack.</p> <h4>System architecture -</h4> <p>My system writes updates into an Oracle SQL DB, via GraphQL. This GraphQL mutation logic is called from a Java service, driven by the Command and Builder patterns, and if it fails, the failure should be handled appropriately through rollback mechanisms.</p> <p>Mutations (write operations) are batched together to ensure reliability and consistency.</p> <p>An Angular MVC takes care of the UI end.</p> <h4>Context -</h4> <p>My system works as a group of parties. There's a root party A, and subordinate parties A1, A2 and so on. Changes to A propagate to all its subordinates - much like inheritance in Java.</p> <h4>The problem -</h4> <p>The user updated a single subordinate party (A4), and all the rest of the parties started messing up. In fact, the framework was built to isolate corruption so well that data corruption in one party shouldn't affect the rest. And yet, here was the problem.</p> <h4>The UX -</h4> <p>The user saw a big exception thrown to the screen in the form of a popup, and the user was transfixed, as he hadn't even touched anything except to load the application. How can something break when you haven't even started working on it?</p> <h4>The analysis -</h4> <p>Common sense would've had me check the origination of the request and see what went missing. However, software engineers often don't go by common sense. So, I ended up checking logs for the approval of the last request (A4).</p> <p>Why do that? Since ALL requests on other parties were corrupted, I figured there'd be a common mess-up between them. This mess-up could not have happened at initiation - which is an operation on each party individually - but at updation, where common attributes might get changed.</p> <p>Next up - what could've changed? And how did that happen?</p> <h4>The code intricacy -</h4> <p>We implemented the Command, Builder and Factory patterns in our Spring Boot - GraphQL codebase, to follow a highly modularized and extensible approach. We first build a structure, run what we call 'pre-mutation commands', followed by the actual update of the party (the actual mutation), and finally 'post-mutation commands'.</p> <p>Codewise,</p> <pre><code>PartyUpdateBuilder builder = party.addPartyId()
    .addPartyName()
    .addPartyStatus()
    .build();
</code></pre> <p>Followed by</p> <pre><code>PartyUpdateCommandBuilder.execute();
</code></pre> <p>which actually writes the data into the tables.</p> <p>A mutation is GraphQL's version of an update query - it updates the database. It looks something like this:</p> <pre><code>mutation ($params: Params!, $partyId: PartyId!) {
  updateData(params: $params, partyId: $partyId) {
    status
    log
    created
  }
}

{
  variables: {
    params: {
      &quot;partyId&quot;: 1234,
      &quot;partyName&quot;: &quot;ABC&quot;
    }
  }
}
</code></pre> <h4>Implications of the structure</h4> <p>The framework we created ensured that even if multiple tables were being updated via our mutation, we remained fully ACID compliant by batching the entire update into a single mutation that was run centrally.</p> <p>Services written by different developers only need to call this central source, and all fields would be updated.</p> <p>While the mutation runs, an inbuilt system of preemptive locking avoids stale data being overwritten in the microseconds it takes to update the data.</p> <p>Additionally, the entire operation is maintained as a business change log in a central database, for tracking.</p> <p>Our update strategy was inspired by <a href="https://github.com/graphile/crystal/issues/944">this</a>.</p> <p>Now, in such an apparently 'fail safe' framework, there was corruption happening en masse. The question was how?</p> <p>Stay tuned for part 2.</p> </content>
</entry>
<entry>
<title>Stopgapping</title>
<link href="https://blog.dkpathak.in/stopgapping/"/>
<updated>2023-09-02T00:00:00Z</updated>
<id>https://blog.dkpathak.in/stopgapping/</id>
<content type="html"><p>Stopgapping as a strategy is rather underrated. Oftentimes, when a severely life-changing event occurs, we can't immediately find a path to walk on. More often than not, we're indecisive and torn. Indecision is one of the worst states one can be in, and in this state, a major choice tends to cause regret and fear.</p> <p>In such a case, we stopgap - we work out a minimal set of steps so that we are not arresting all momentum, yet at the same time, give ourselves the time to get used to the new reality.</p> <p>Let's consider an example. You were working at a job and one fine day, you realize you've been laid off. Your world turns upside down. A stable income you'd planned on for years vanishes instantly. And most people don't look for jobs immediately on getting laid off. In this time, a pragmatic approach would be to do activities that contribute to the overall financial and mental stability of your life, and at the same time, don't place you under the duress of rushing. In this case, you can choose to follow a daily routine of job shortlisting, meditation, and a set number of topics to upskill on each day. None of these is large enough to cause a major paradigm shift in your thinking, yet each is considerable enough to give you momentum at a time when you fear you've come to a standstill.</p> <p>When I'd ended a serious relationship, my goals and priorities all went up in smoke. I chose to follow daily rituals of upskilling, finding a new hobby, and meditation to ensure that I was moving on and ahead, yet at the same time, not making a hard choice I'd regret.</p> <p>Not all major decisions have to be made immediately - some just need time, for new circumstances to come through, and the best thing you can do, in the moment, is keep going, without regretting.</p> </content>
</entry>
<entry>
<title>Monorepo architecture</title>
<link href="https://blog.dkpathak.in/monorepo-architecture/"/>
<updated>2023-09-02T00:00:00Z</updated>
<id>https://blog.dkpathak.in/monorepo-architecture/</id>
<content type="html"><p>Since the advent of the microservices concept, most people are fans of distributed architectures. You don't want single points of failure, you want autonomy across teams, and you want to customize the tech stack by service.</p> <p>This concept has propagated into domains other than services too.</p> <p>At work, we had three different repos catering to one single application - two libraries, and one repo for the actual configuration, which just mapped components from the libraries onto the actual Angular app.</p> <p>The idea behind this was, primarily, separation of concerns. Repo A included fundamental components and styling, say, dialog boxes, text editors, toasts and their corresponding styles. Repo B included actual application components organized on the basis of business logic and their occurrence in the application. Both A and B were built, deployed, and their prod build versions injected as npm packages into C.</p> <p>Now, for an application as large and diverse as ours, it kinda made sense. If you just had to make a config change, you wouldn't really want to rebuild fundamental CSS all over again. Different teams could own these repos separately, and you could get the latest working version of any of these repos by picking the last build version from the common artifactory.</p> <p>Now, however, here come the pitfalls:</p> <ol> <li> <p>Cumbersome fixing and testing: If I have to make a fundamental CSS change in repo A, I need to fix it in A, test it in B, and then finally in C. I essentially have to set up and run all three repos for a minor CSS change. Because the repos were owned separately, there was a fair chance they'd have separate requirements in terms of setup, dependencies and run commands. How much overhead for such a little change? Wouldn't it be better to just have one repo, fix something, and voila, see the change?</p> </li> <li> <p>Inconsistent design: If you want to make a CSS change to a component, do you make it in A or B? It's a subjective question and varies by use case, so most people just did what they felt was right, meaning half the changes were in one repo, half in another. One actual example - our dialog boxes were styled from Repo A in two of our application tabs and from Repo B in the remaining three. Who'd remember where the styles were coming from then?</p> </li> <li> <p>Versioning: Some change works on the x.1 version of Repo A, the y.2 version of B and the z.3 version of C. Now, every time, we have to check this version compatibility. Changing one of the versions could adversely impact the rest.</p> </li> </ol> <p>-- work in progress--</p> </content>
</entry>
<entry>
<title>Action method</title>
<link href="https://blog.dkpathak.in/action-method/"/>
<updated>2021-12-24T00:00:00Z</updated>
<id>https://blog.dkpathak.in/action-method/</id>
<content type="html"><p>The Action method encourages you to look at everything in your life as a project, with a set of actionable items, organized by priority, and associated references. Completion of all the action items will signify completion of the event/project. The advantage of this method is that converting seemingly subjective items like events/meetings into actionable steps will prompt you into taking the next small step, and get you started on tasks that you'd otherwise have procrastinated on.</p> <p>Every major life item you have is considered a project, and you break it into action steps, references and backburners.</p> <p>The action steps are just what they sound like - a progression of doable items that will lead to achievement of the project goal.</p> <p>References are materials, resources and information that will aid in the achievement of the action steps. They are related to the project, but not directly actionable. This includes URLs to necessary references, go-to reference books/articles for the project, and so on.</p> <p>Backburner items refer to items that might be important at later stages, but can be put aside at the moment. Entire projects can be backburners too, meaning that a project need not be taken up at the present moment, because you have other, more important projects on your plate.</p> <p>What's the advantage of this method?</p> <ol> <li> <p>'Action is the greatest motivation'. The most common reason for procrastination is the lack of a next small step towards a goal. If that next small todo can be found and completed, it's enough to get the ball rolling. Each action item is the next small step towards achieving a large project.</p> </li> <li> <p>Looking at every item in your life as a project gives you an objective vision into what you'd actually need to do. Meetings and events are otherwise subjective and abstract - converting these into projects gives you action items for before, during, and after the meetings, and thus, you know what would make an event a success.</p> </li> <li> <p>Unlike other todo lists, this method differentiates between actionable items and the non-actionable items that complement them, and provides a way to keep track of both.</p> </li> <li> <p>The concept of backburner items and projects allows you to prioritize projects and action items based on the impact they have on your life and project, without worrying about forgetting them later. Once you're done with the action items of your project, you can pull items from the backburner and take them up as action items.</p> </li> </ol> <h2 id="how-do-you-make-action-method-work-with-routine" tabindex="-1">How do you make the ACTION method work with Routine<a class="tdbc-anchor" href="https://blog.dkpathak.in/action-method/#how-do-you-make-action-method-work-with-routine">#</a></h2> <p>Routine's flexibility can be leveraged to implement the ACTION method for some or all of your projects/tasks. Each task can be opened as a document which can include everything that we need - first, markdown, so that we can create and distinguish between the sections; second, the embed feature to embed resources; and finally, checkboxes, which make every checkbox item a task that can be scheduled on the Routine calendar just like any other task.</p> <p>Let's see an example.
I create a new task - which will represent my project, let's say Blog writing.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-1.PNG" alt="" /></p> <p>Double-clicking on it opens up the task as a document.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-2.PNG" alt="" /></p> <p>Now, click on add subtasks, and add a few tasks.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-3.PNG" alt="" /></p> <p>Next, create the sections we'd need - action items, references and backburners. Go to a new line, and press '/', which will give you the list of possible markdown options - choose H2, and create the three headings.</p> <p>Now, drag and drop the immediate tasks you'd need doing into action items, and the others, into backburners.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-4.PNG" alt="" /></p> <p>Next, I want to embed a YouTube video I wish to refer to. Under the references heading, I select 'embed', and paste the video link, and there you have it.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-5.PNG" alt="" /></p> <p>And finally, to schedule our tasks - when you hover over a task, a calendar icon is highlighted - click on it, and give it a date and time. The great bit is you can just write it in words and Routine will intellisense it into a schedule.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-6.PNG" alt="" /></p> <p>Bingo, you see your task in the upcoming tasks.</p> <p><img src="https://blog.dkpathak.in/img/routine/routine-7.PNG" alt="" /></p> <p>There you have it - a complete action system in Routine to help you on the road to getting that project done.</p> </content>
</entry>
<entry>
<title>Creating a full stack app using AWS Amplify</title>
<link href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/"/>
<updated>2021-12-12T00:00:00Z</updated>
<id>https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/</id>
<content type="html"><h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#overview">#</a></h2> <p>Amplify is an offering by AWS that lets you develop and deploy full stack applications by only focusing on the business logic, with all the configuration being handled behind the scenes.</p> <p>In this tutorial, we'll understand what Amplify is, how it works, and finally, set up a Todo list application with a GraphQL backend and a React frontend using Amplify.</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#prerequisites">#</a></h2> <p>You'll need to have Node/NPM and Git installed on your local systems.</p> <p>You should have an AWS account. Some knowledge of AWS concepts like IAM roles will come in handy, since we'll have to set up an IAM user for connecting our app.</p> <p>It'll also be useful to have some knowledge of React, since we'll be adding some code for the UI. GraphQL code will also be used, but since it'll be autogenerated, it isn't absolutely necessary for you to know it.</p> <h2 id="introduction-to-aws-amplify" tabindex="-1">Introduction to AWS Amplify<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#introduction-to-aws-amplify">#</a></h2> <p>Building full stack applications is no small task. A lot of time is spent on writing boilerplate code that's already been written previously, and not enough effort can be put into developing the business logic of the application. Moreover, once the app has been built, deploying and scaling it is another major blocker for development teams to handle.</p> <p>Amplify tries to alleviate these pain points. It abstracts away some core functionalities by leveraging existing AWS services and code, and allows developers to add only the business logic of the application while it configures the rest intelligently.</p> <p>Some of the existing AWS services leveraged by Amplify include AppSync for GraphQL, Cognito for authentication, and DynamoDB for the database.</p> <p>Amplify also provides other features like REST APIs, Lambda functions support, and adding prebuilt Figma components into the frontend of the app, all of which are very frequent use cases.</p> <h2 id="the-process-well-follow" tabindex="-1">The process we'll follow<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#the-process-well-follow">#</a></h2> <p>We'll first set up the Amplify CLI on our local systems. We'll then set up the AWS profile to connect our app to AWS. We'll then add the frontend and the GraphQL code respectively, to get our app running.</p> <h2 id="setting-up-amplify-on-local" tabindex="-1">Setting up Amplify on local<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#setting-up-amplify-on-local">#</a></h2> <p>Create a new folder called amplify-app. Open the command line and navigate to this folder.</p> <p>We'll start with installing the Amplify CLI. It's the command line application for Amplify that'll allow us to configure our app using commands.
Use the following command to install Amplify:</p> <pre><code>npm install -g @aws-amplify/cli
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/1-install.PNG" alt="" /></p> <p>Next, we'll be configuring Amplify by creating an AWS IAM user.</p> <p>Enter</p> <pre><code>amplify configure
</code></pre> <p>You'll be prompted to enter a username and select a region. You can choose anything you wish for both, just make sure to remember them.</p> <p>You'll then be prompted to sign in to your AWS account on the browser.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/2-username.PNG" alt="" /></p> <p>Once you're signed in, you'll have to create an IAM (Identity and Access Management) user. It's this user whose credentials will be used for the app.</p> <p>The username will have been auto-populated.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/3-iam.PNG" alt="" /></p> <p>Check the password and custom password option and add a custom password, since it's easier than an auto-generated one. Do note that the access keys option should remain checked.</p> <p>Then keep hitting next until you get the button to Create the user, and click it.</p> <p>Your user will be created with an access key ID and a secret key. Keep the window open since you'll be needing the details.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/4-user-created.PNG" alt="" /></p> <p>Come back to the terminal and press enter.</p> <p>You'll be prompted to add, first, the access key ID, and then the secret. Copy and paste both of them.</p> <p>If you're prompted to add a profile name, add a random one.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/5-terminal-user.PNG" alt="" /></p> <p>With this, our AWS profile setup is complete.</p> <h2 id="setting-up-react-app" tabindex="-1">Setting up React app<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#setting-up-react-app">#</a></h2> <p>Use the following commands to set up the default React application and name it todo-amplify:</p> <pre><code>npx create-react-app todo-amplify
cd todo-amplify
npm run start
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/6-npm.PNG" alt="" /></p> <p>This will start the sample React app on localhost:3000.</p> <p>Close the app and keep it on hold. We'll come back to the frontend in a bit.</p> <h2 id="initialize-backend" tabindex="-1">Initialize backend<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#initialize-backend">#</a></h2> <p>Type</p> <pre><code>amplify init
</code></pre> <p>to start the setup for the backend.</p> <p>You'll be asked for some configuration options like this:</p> <pre><code>Enter a name for the project (react-amplified)

# All AWS services you provision for your app are grouped into an &quot;environment&quot;
# A common naming convention is dev, staging, and production
Enter a name for the environment (dev)

# Sometimes the CLI will prompt you to edit a file, it will use this editor to open those files.
Choose your default editor

# Amplify supports JavaScript (Web &amp; React Native), iOS, and Android apps
Choose the type of app that you're building (javascript)
What JavaScript framework are you using (react)
Source directory path (src)
Distribution directory path (build)
Build command (npm run build)
Start command (npm start)

# This is the profile you created with the `amplify configure` command in the introduction step.
Do you want to use an AWS profile</code></pre> <p>Keep hitting enter to choose all the default options. For the AWS profile, choose the one you'd created previously. The setup will eventually finish in a few seconds.</p> <h2 id="so-what-exactly-happens" tabindex="-1">So what exactly happens<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#so-what-exactly-happens">#</a></h2> <p>When you initialize a new Amplify project, a few things happen:</p> <ul> <li> <p>It creates a top-level directory called amplify that stores your backend definition. During the tutorial you'll add capabilities such as a GraphQL API and authentication. As you add features, the amplify folder will grow with infrastructure-as-code templates that define your backend stack. Infrastructure-as-code is a best-practice way to create a replicable backend stack.</p> </li> <li> <p>It creates a file called <code>aws-exports.js</code> in the <code>src</code> directory that holds all the configuration for the services you create with Amplify. This is how the Amplify client is able to get the necessary information about your backend services.</p> </li> <li> <p>It modifies the <code>.gitignore</code> file, adding some generated files to the ignore list.</p> </li> <li> <p>A cloud project is created for you in the AWS Amplify Console that can be accessed by running <code>amplify console</code>. The Console provides a list of backend environments, deep links to provisioned resources per Amplify category, the status of recent deployments, and instructions on how to promote, clone, pull, and delete backend resources.</p> </li> </ul> <h2 id="back-to-the-frontend" tabindex="-1">Back to the frontend<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#back-to-the-frontend">#</a></h2> <p>We'll install the two packages we're going to need for the project, using:</p> <pre><code>npm install aws-amplify @aws-amplify/ui-react@1.x.x</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/16-other-npm.PNG" alt="" /></p> <p>Next, we'll update our client with the backend configuration. Open <code>src/index.js</code> of your React app and add the following code at the top:</p> <pre><code>import Amplify from &quot;aws-amplify&quot;;
import awsExports from &quot;./aws-exports&quot;;

Amplify.configure(awsExports);</code></pre> <p>And that's all it takes to configure Amplify. As you add or remove categories and make updates to your backend configuration using the CLI, the configuration in aws-exports.js will update automatically.</p>
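<p>For the curious, <code>aws-exports.js</code> is a plain JavaScript module generated by the CLI. Once the API from the later sections is added, it looks roughly like this - the keys are real Amplify config keys, but the values here are purely illustrative:</p> <pre><code>// src/aws-exports.js - autogenerated by the Amplify CLI; don't edit by hand
const awsmobile = {
  aws_project_region: 'us-east-1',
  aws_appsync_graphqlEndpoint: 'https://example.appsync-api.us-east-1.amazonaws.com/graphql',
  aws_appsync_region: 'us-east-1',
  aws_appsync_authenticationType: 'API_KEY',
};

export default awsmobile;</code></pre>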
<p>Finally, update your <code>src/App.js</code> with the logic for the Todo:</p> <pre><code>/* src/App.js */
import React, { useEffect, useState } from 'react'
import Amplify, { API, graphqlOperation } from 'aws-amplify'
import { createTodo } from './graphql/mutations'
import { listTodos } from './graphql/queries'
import awsExports from &quot;./aws-exports&quot;;
Amplify.configure(awsExports);

const initialState = { name: '', description: '' }

const App = () =&gt; {
  const [formState, setFormState] = useState(initialState)
  const [todos, setTodos] = useState([])

  useEffect(() =&gt; {
    fetchTodos()
  }, [])

  function setInput(key, value) {
    setFormState({ ...formState, [key]: value })
  }

  async function fetchTodos() {
    try {
      const todoData = await API.graphql(graphqlOperation(listTodos))
      const todos = todoData.data.listTodos.items
      setTodos(todos)
    } catch (err) {
      console.log('error fetching todos')
    }
  }

  async function addTodo() {
    try {
      if (!formState.name || !formState.description) return
      const todo = { ...formState }
      setTodos([...todos, todo])
      setFormState(initialState)
      await API.graphql(graphqlOperation(createTodo, {input: todo}))
    } catch (err) {
      console.log('error creating todo:', err)
    }
  }

  return (
    &lt;div style={styles.container}&gt;
      &lt;h2&gt;Amplify Todos&lt;/h2&gt;
      &lt;input
        onChange={event =&gt; setInput('name', event.target.value)}
        style={styles.input}
        value={formState.name}
        placeholder=&quot;Name&quot;
      /&gt;
      &lt;input
        onChange={event =&gt; setInput('description', event.target.value)}
        style={styles.input}
        value={formState.description}
        placeholder=&quot;Description&quot;
      /&gt;
      &lt;button style={styles.button} onClick={addTodo}&gt;Create Todo&lt;/button&gt;
      {
        todos.map((todo, index) =&gt; (
          &lt;div key={todo.id ? todo.id : index} style={styles.todo}&gt;
            &lt;p style={styles.todoName}&gt;{todo.name}&lt;/p&gt;
            &lt;p style={styles.todoDescription}&gt;{todo.description}&lt;/p&gt;
          &lt;/div&gt;
        ))
      }
    &lt;/div&gt;
  )
}

const styles = {
  container: { width: 400, margin: '0 auto', display: 'flex', flexDirection: 'column', justifyContent: 'center', padding: 20 },
  todo: { marginBottom: 15 },
  input: { border: 'none', backgroundColor: '#ddd', marginBottom: 10, padding: 8, fontSize: 18 },
  todoName: { fontSize: 20, fontWeight: 'bold' },
  todoDescription: { marginBottom: 0 },
  button: { backgroundColor: 'black', color: 'white', outline: 'none', fontSize: 18, padding: '12px 0px' }
}

export default App</code></pre>
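<p>Note the <code>'./graphql/queries'</code> and <code>'./graphql/mutations'</code> imports above: those files don't exist yet - the Amplify CLI generates them when we add the API in the next section. For orientation, the generated <code>listTodos</code> query looks roughly like this (an illustrative sketch; the exact generated file may differ):</p> <pre><code>// src/graphql/queries.js - autogenerated; rough shape shown for orientation only
export const listTodos = /* GraphQL */ `
  query ListTodos {
    listTodos {
      items {
        id
        name
        description
      }
    }
  }
`;</code></pre>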
<h2 id="setting-up-api-and-database" tabindex="-1">Setting up API and Database<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#setting-up-api-and-database">#</a></h2> <p>The API you will be creating in this step is a GraphQL API using AppSync, backed by a DynamoDB database.</p> <p>Use the following CLI command to initialize the API creation:</p> <pre><code>amplify add api</code></pre> <p>You'll be prompted through a list of options. Keep hitting enter to choose the default ones.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/17-gql.PNG" alt="" /></p> <p>Once it's complete, we'll push the changes using</p> <pre><code>amplify push</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/18-push.PNG" alt="" /></p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/19-push-2.PNG" alt="" /></p> <p>Once it completes, it gives you the endpoint and an API key.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/20-gql-complete.PNG" alt="" /></p> <p>Once it's done, run the React app again and go to localhost:3000; you should see your todo app.</p> <p><img src="https://blog.dkpathak.in/img/scalex/amplify/21-done.PNG" alt="" /></p> <h2 id="deploying-your-app-to-amplify-cloud" tabindex="-1">Deploying your app to Amplify cloud<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#deploying-your-app-to-amplify-cloud">#</a></h2> <p>Additionally, you can deploy your todo application to Amplify cloud using the following commands:</p> <pre><code>amplify add hosting
amplify publish</code></pre> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#conclusion">#</a></h2> <p>Thus, with this, you've completed developing an entire full stack application, with the only code you had to write being the business logic for the todo. Imagine the time and effort saved when all the GraphQL code and the connections came up magically out of nowhere!</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/creating-a-full-stack-app-using-aws-amplify/#references">#</a></h2> <ul> <li><a href="https://aws.amazon.com/amplify/">AWS Amplify Docs</a></li> </ul> </content>
</entry>
<entry>
<title>AWS Lambda vs ECS</title>
<link href="https://blog.dkpathak.in/aws-lambda-vs-ecs/"/>
<updated>2021-12-06T00:00:00Z</updated>
<id>https://blog.dkpathak.in/aws-lambda-vs-ecs/</id>
<content type="html"><h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#overview">#</a></h2> <p>In this tutorial, we'll be taking a deep dive into the differences between AWS Lambda and AWS ECS. We'll be setting up sample applications using each of them and then contrasting the different use cases they serve.</p> <h2 id="what-is-aws-lambda" tabindex="-1">What is AWS Lambda<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#what-is-aws-lambda">#</a></h2> <p>Lambda uses the same resources that a server-driven deployment would've given us - EC2 instances, coupled with load balancers, security groups, and auto-scaling services. However, unlike the latter, these resources are configured entirely on the backend, away from the user, and automatically scaled up/down as per traffic. All the user needs to do is provide the code, and let Lambda take care of ensuring it runs.</p> <p>The following block diagram describes how Lambda works:</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/lambda-bd.png" alt="" /></p> <h2 id="what-is-ecs" tabindex="-1">What is ECS<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#what-is-ecs">#</a></h2> <p>ECS stands for Elastic Container Service and is a container orchestration solution - meaning it allows deployment and management of applications which are containerized using tools like Docker.</p> <p>The following block diagram explains how it all comes together:</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/ecs-bd.png" alt="" /></p> <h2 id="what-is-aws-fargate" tabindex="-1">What is AWS Fargate<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#what-is-aws-fargate">#</a></h2> <p>We'll be using AWS Fargate in our ECS example. Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. Unlike EC2, you don't actually have to worry about setting up and provisioning the servers. You only provide a containerized application, and Fargate handles the hosting based on the resources you require.</p> <p><img src="https://blog.dkpathak.in/img/scalex/fargate.png" alt="" /></p> <h1 id="setting-up-lambda" tabindex="-1">Setting up Lambda<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#setting-up-lambda">#</a></h1> <p>Go to aws.amazon.com and sign up for an account if you don't already have one.</p> <p>Once you're signed in, search 'Lambda' in the search bar. You should be redirected to the Lambda dashboard.</p> <p>Before you create a Lambda function, you need to identify its inputs and triggers, choose a runtime environment, and decide what permissions and role the service will use.</p> <p>Lambda functions accept JSON input and produce JSON output. Your function's input and output contents are closely tied to the event source that will trigger your function.</p> <p>An event source is usually a web request that'll cause the execution of the function code.</p> <p>You also need to select a runtime for your function. We'll be using Node.js.</p>
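<p>As a point of reference, the handler the console generates for a new Node.js function is just a few lines - roughly the following (your generated stub may differ slightly by runtime version):</p> <pre><code>// index.js - the shape of the console-generated Node.js handler
exports.handler = async (event) =&gt; {
    // TODO implement
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};</code></pre>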
<p>Finally, your function will need an AWS role that defines the entitlements the function has within the AWS platform.</p> <p>Click on Create function.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image9.png" alt="" /></p> <p>Keep the default 'Author from scratch' option selected.</p> <p>Give your function a name as you wish, and leave everything else as it is.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image5.png" alt="" /></p> <p>Click on Create function at the bottom of the page.</p> <p>You'll be redirected to the function configuration page, which looks something like this:</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <p>You'll first have to add a trigger for your Lambda function. Click on Add trigger.</p> <p>You'll then be asked to choose a trigger - select API Gateway. An API Gateway essentially lets you create, deploy and monitor APIs. In our case, we'll be able to use our function like an API - when we hit the deployed URL, it'll trigger our function.</p> <p>Choose API type as REST API, security as Open, and leave the rest as it is. Finally, click Add.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image1.png" alt="" /></p> <p>You'll see that the trigger is added.</p> <p>Next, you are given a code source window with an integrated code editor, where you can add/edit code and files.</p> <p>A sample code snippet is provided. You can choose to modify the message to something you wish, and keep the rest of the code as it is for now.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <h2 id="testing-the-function" tabindex="-1">Testing the function<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#testing-the-function">#</a></h2> <p>Next, we'll test if the function works as expected. Go to the Test tab.</p> <p>Here, you're given an option to create an event. An event is a happening that triggers the function. It has a JSON input. Since we're not actually using the input in any way, it doesn't matter much to us here. However, when the Lambda function is deployed as a service to some application, there'll be inputs coming in that the function will use. Those inputs can be given here to test if they give the required outcome.</p> <p>Leave everything unchanged, and click Test.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image6.png" alt="" /></p> <p>It'll run the test using the event config, and will pass with the following message in a second or two.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image7.png" alt="" /></p> <h2 id="understanding-the-result" tabindex="-1">Understanding the result<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#understanding-the-result">#</a></h2> <p>The details show the function output. In our case, the status code and the message body.</p> <p>The summary tab has a few important fields. The duration denotes the time it took for the Lambda to run, which is an important pointer when we are running a production-grade application and are likely to get timeout/performance issues.</p> <p>The billed duration is another important indicator - you only pay for what you use. Unlike the EC2 instance, where you were charged for the server just being on, irrespective of whether or not anything was running on it, Lambda only charges you for the times your function runs.
That's an obvious cost advantage.</p> <p>And the field most significant to our discussion - Resources configured: 128 MB in our case. Do you remember configuring anything at all, apart from the function code itself? Nope. So where did the 128 MB come from? That's the magic - by just telling Lambda what code you need to run, it automatically provisions the resources needed to run it, saving considerable developer bandwidth that would've otherwise gone into getting the servers configured.</p> <h2 id="deploying-the-lambda-function" tabindex="-1">Deploying the Lambda function<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#deploying-the-lambda-function">#</a></h2> <p>Go back to the Code tab, and click on Deploy.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <p>Now, click on API Gateway in Function Overview.</p> <p>It'll give you the API endpoint. Copy it, and paste it in a new browser tab.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image3.png" alt="" /></p> <p>Sure enough, you'll see the learning lambda message on the screen.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image4.png" alt="" /></p> <p>Come back to the Lambda dashboard and go to the Monitor tab. Here, you'll be able to monitor the calls being made to your API. Refresh the page of the API a few times, and you'll see the requests being shown on the graphs.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image2.png" alt="" /></p> <p>Notice the usefulness of the graphs - the invocations show you how many times the API was invoked.</p> <p>The error count and success rate let you track if the function is facing downtime/runtime errors.</p> <h1 id="setting-up-ecs" tabindex="-1">Setting up ECS<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#setting-up-ecs">#</a></h1> <p>Next, we'll set up and configure an ECS application using AWS Fargate.</p> <p>Go to the AWS dashboard and search for ECS. You'll be taken to the ECS dashboard, which looks like this:</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/1-dashboard.PNG" alt="" /></p> <p>Click on Get Started.</p> <p>We'll be selecting an Nginx container.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/2-select.PNG" alt="" /></p> <p>Next, you'll be prompted to add a service, which ensures that the defined task instances are maintained. If one goes down, a new task instance is created.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/3-task.PNG" alt="" /></p> <p>Next, you'll be asked to configure your cluster details - keep them as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/4-cluster.PNG" alt="" /></p> <p>Finally, click Create.</p> <p>You can see the status of the resources being provisioned:</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/5-launch.PNG" alt="" /></p> <p>Finally, your service will be active.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/6-view.PNG" alt="" /></p> <p>Go to task definitions.</p> <p>Copy the public IP and paste it in a new browser tab.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/7-pip.PNG" alt="" /></p> <p>You'll see that the default Nginx screen opens up.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/8-nginx.PNG" alt="" /></p> <p>Refresh it a few times.</p> <p>Come back to the ECS dashboard and go to logs.
You'll see that a log entry is created for every refresh.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ecs/9-logs.PNG" alt="" /></p> <h2 id="difference-between-lambda-and-ecs" tabindex="-1">Difference between Lambda and ECS<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#difference-between-lambda-and-ecs">#</a></h2> <p>Thus, you created and deployed sample services using both Lambda and ECS (via Fargate).</p> <p>At first glance, these two look similar - both of them are serverless solutions that configure the server resources based on the configuration that your application needs, and work on a pay-per-use model. They both also provide monitoring and logs in a similar fashion.</p> <p>However, there are a few subtle differences. Lambda essentially allows you to run tiny functions - they can of course be as gigantic as applications themselves, but that's not what it's meant for. It's meant for isolated services that can be plugged into existing applications via triggers like the API Gateway we used, so that your services work in isolation and the downtime of one doesn't affect the other.</p> <p>ECS is a container orchestrator, and is principally meant for running 'containerized applications'. There's some configuration you need to define when setting up the resources, whereas in Lambda, it was handled in its entirety by AWS itself. ECS is mainly meant for larger applications, but with the flexibility of not having to manage compute instances yourself.</p> <h3 id="consider-lambda-over-ecs-when" tabindex="-1">Consider Lambda over ECS when<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#consider-lambda-over-ecs-when">#</a></h3> <ul> <li> <p>You have a smaller application that runs on-demand in 15 minutes or less.</p> </li> <li> <p>You don't need advanced EC2 instance configuration. Lambda manages, provisions, and secures EC2 instances for you, along with providing target groups, load balancing, and auto-scaling. It eliminates the complexity of managing EC2 instances.</p> </li> <li> <p>You want to pay only for capacity used. Lambda charges are metered by milliseconds used and the number of times your code is triggered. Costs are correlated to usage. Lambda also has a free usage tier.</p> </li> </ul> <h3 id="consider-ecs-over-lambda-when" tabindex="-1">Consider ECS over Lambda when<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#consider-ecs-over-lambda-when">#</a></h3> <ul> <li> <p>You are running Docker containers. While Lambda now has Container Image Support, ECS is a better choice for a Docker ecosystem, especially if you are already creating Docker containers.</p> </li> <li> <p>You want flexibility to run in a managed EC2 environment or in a serverless environment. You can provision your own EC2 instances or Amazon can provision them for you. You have several options.</p> </li> <li> <p>You have tasks or batch jobs running longer than 15 minutes. Choose ECS when dealing with longer-running jobs, as it avoids the Lambda timeout limit above.</p> </li> <li> <p>You need to schedule jobs. ECS provides a service scheduler for long running tasks and applications, along with the ability to run tasks manually.</p> </li> </ul> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#conclusion">#</a></h2> <p>Thus, in this tutorial, you got an introduction to AWS Lambda, AWS ECS and Fargate.
You understood the similarities among them by setting up sample applications using each. You then drew distinctions between them, along with hands-on checklists for when one would be preferred over the other.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-lambda-vs-ecs/#references">#</a></h2> <ul> <li> <p><a href="https://aws.amazon.com/ecs/">AWS ECS</a></p> </li> <li> <p><a href="https://aws.amazon.com/fargate/">AWS Fargate</a></p> </li> </ul> </content>
</entry>
<entry>
<title>Intro to Serverless</title>
<link href="https://blog.dkpathak.in/intro-to-serverless/"/>
<updated>2021-12-05T00:00:00Z</updated>
<id>https://blog.dkpathak.in/intro-to-serverless/</id>
<content type="html"><h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#overview">#</a></h2> <p>In this section, we'll get a deep understanding of what it means to have 'serverless' applications - most importantly, why it's a misnomer. We'll understand the use case of this paradigm, how it's implemented on the ground and finally, take up a hands-on example to create a sample NodeJS service using AWS Lambda.</p> <h2 id="introduction" tabindex="-1">Introduction<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#introduction">#</a></h2> <p>Web applications and services need servers to run on. These servers can be custom on-premise servers that large companies themselves own, or cloud servers from providers, like EC2 by AWS. We've used the latter in a few tutorials in the past.</p> <p>While the cloud servers leave out the complexity of server maintenance, we still need to manually configure load balancing and track usage. We'll be charged for all the time the server's up, irrespective of whether or not the server's being used at all. This is suboptimal for many small organizations, who not only want to minimize cloud costs, but also can't spare enough manpower on customizing load balancing and server instance uptime.</p> <p>Thus came the concept of 'Serverless'. First things first, it's NOT like there's no server at all. It's just that we aren't granted access to an entire server like we were for EC2. Instead, we just give the cloud provider the application code we need to run, and then it's their job to run the code and ensure that it scales up/down based on traffic, allowing us to focus on the application itself.</p> <h2 id="how-exactly-does-this-work" tabindex="-1">How exactly does this work?<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#how-exactly-does-this-work">#</a></h2> <p>The following block diagram describes how Lambda works:</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/lambda-bd.png" alt="" /></p> <p>Lambda uses the same resources that a server-driven deployment would've given us - EC2 instances, coupled with load balancers, security groups, and auto-scaling services. However, unlike the latter, these resources are configured entirely on the backend, away from the user, and automatically scaled up/down as per traffic. All the user needs to do is provide the code, and let Lambda take care of ensuring it runs.</p> <h2 id="what-well-be-doing" tabindex="-1">What we'll be doing<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#what-well-be-doing">#</a></h2> <p>We'll be setting up a NodeJS service using AWS Lambda, configuring the triggers that would cause it to run, and then hitting those triggers to run it, and tracking the logs as the function runs.</p> <h2 id="setting-up-aws-lambda" tabindex="-1">Setting up AWS Lambda<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#setting-up-aws-lambda">#</a></h2> <p>Go to aws.amazon.com and sign up for an account if you don't already have one.</p> <p>Once you're signed in, search 'Lambda' in the search bar. You should be redirected to the Lambda dashboard.</p> <p>Before you create a Lambda function, you need to identify its inputs and triggers, choose a runtime environment, and decide what permissions and role the service will use.</p> <p>Lambda functions accept JSON input and produce JSON output. Your function's input and output contents are closely tied to the event source that will trigger your function.</p>
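<p>To make the JSON-in, JSON-out contract concrete, here's a minimal sketch of a handler that actually reads its event - say, a query parameter passed through an API Gateway request. The names are illustrative, not the console's generated stub:</p> <pre><code>// index.js - illustrative only: echoes a piece of the incoming JSON event
exports.handler = async (event) =&gt; {
    // For an API Gateway trigger, query parameters arrive inside the event object
    const name = (event.queryStringParameters &amp;&amp; event.queryStringParameters.name) || 'world';
    return {
        statusCode: 200,
        body: JSON.stringify({ message: `Hello, ${name}!` }),
    };
};</code></pre>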
<p>An event source is usually a web request that'll cause the execution of the function code.</p> <p>You also need to select a runtime for your function. We'll be using Node.js.</p> <p>Finally, your function will need an AWS role that defines the entitlements the function has within the AWS platform.</p> <p>Click on Create function.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image9.png" alt="" /></p> <p>Keep the default 'Author from scratch' option selected.</p> <p>Give your function a name as you wish, and leave everything else as it is.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image5.png" alt="" /></p> <p>Click on Create function at the bottom of the page.</p> <p>You'll be redirected to the function configuration page, which looks something like this:</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <p>You'll first have to add a trigger for your Lambda function. Click on Add trigger.</p> <p>You'll then be asked to choose a trigger - select API Gateway. An API Gateway essentially lets you create, deploy and monitor APIs. In our case, we'll be able to use our function like an API - when we hit the deployed URL, it'll trigger our function.</p> <p>Choose API type as REST API, security as Open, and leave the rest as it is. Finally, click Add.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image1.png" alt="" /></p> <p>You'll see that the trigger is added.</p> <p>Next, you are given a code source window with an integrated code editor, where you can add/edit code and files.</p> <p>A sample code snippet is provided. You can choose to modify the message to something you wish, and keep the rest of the code as it is for now.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <h2 id="testing-the-function" tabindex="-1">Testing the function<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#testing-the-function">#</a></h2> <p>Next, we'll test if the function works as expected. Go to the Test tab.</p> <p>Here, you're given an option to create an event. An event is a happening that triggers the function. It has a JSON input. Since we're not actually using the input in any way, it doesn't matter much to us here. However, when the Lambda function is deployed as a service to some application, there'll be inputs coming in that the function will use. Those inputs can be given here to test if they give the required outcome.</p> <p>Leave everything unchanged, and click Test.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image6.png" alt="" /></p> <p>It'll run the test using the event config, and will pass with the following message in a second or two.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image7.png" alt="" /></p> <h2 id="understanding-the-result" tabindex="-1">Understanding the result<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#understanding-the-result">#</a></h2> <p>The details show the function output. In our case, the status code and the message body.</p> <p>The summary tab has a few important fields. The duration denotes the time it took for the Lambda to run, which is an important pointer when we are running a production-grade application and are likely to get timeout/performance issues.</p> <p>The billed duration is another important indicator - you only pay for what you use.
Unlike the EC2 instance, where you were charged for the server just being on, irrespective of whether or not anything was running on it, Lambda only charges you for the times your function runs - an obvious cost advantage.</p> <p>And the field most significant to our discussion - Resources configured: 128 MB in our case. Do you remember configuring anything at all, apart from the function code itself? Nope. So where did the 128 MB come from? That's the magic - by just telling Lambda what code you need to run, it automatically provisions the resources needed to run it, saving considerable developer bandwidth that would've otherwise gone into getting the servers configured.</p> <h2 id="deploying-the-lambda-function" tabindex="-1">Deploying the Lambda function<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#deploying-the-lambda-function">#</a></h2> <p>Go back to the Code tab, and click on Deploy.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image8.png" alt="" /></p> <p>Now, click on API Gateway in Function Overview.</p> <p>It'll give you the API endpoint. Copy it, and paste it in a new browser tab.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image3.png" alt="" /></p> <p>Sure enough, you'll see the learning lambda message on the screen.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image4.png" alt="" /></p> <p>Come back to the Lambda dashboard and go to the Monitor tab. Here, you'll be able to monitor the calls being made to your API. Refresh the page of the API a few times, and you'll see the requests being shown on the graphs.</p> <p><img src="https://blog.dkpathak.in/img/scalex/lambda/image2.png" alt="" /></p> <p>Notice the usefulness of the graphs - the invocations show you how many times the API was invoked.</p> <p>The error count and success rate let you track if the function is facing downtime/runtime errors.</p> <p>All of this, without having to configure any of it - that's the beauty of Lambda.</p> <h2 id="adding-further-code" tabindex="-1">Adding further code<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#adding-further-code">#</a></h2> <p>Now that your Lambda function is up and running, you can add further code to create actual services, connect it to databases, and more.</p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#conclusion">#</a></h2> <p>Thus, in this tutorial, we got introduced to what serverless means, and how it is beneficial over the traditional server-driven model. We used AWS Lambda to set up and configure a NodeJS service, set up a trigger using the API Gateway, and monitored our service, all while having to configure little beyond our business logic.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-serverless/#references">#</a></h2> <ul> <li><a href="https://aws.amazon.com/lambda/">AWS Lambda official docs</a></li> </ul> </content>
</entry>
<entry>
<title>Demystifying procrastination</title>
<link href="https://blog.dkpathak.in/demystifying-procrastination/"/>
<updated>2021-12-20T00:00:00Z</updated>
<id>https://blog.dkpathak.in/demystifying-procrastination/</id>
<content type="html"><p>The biggest threat to productivity is procrastination - the wilful(?) destruction of a more structured life by giving in to short term pleasures over long term contentment. Notice the '?' after 'wilful'.</p> <p>Is procrastinating wilful? Do we, who have big aims and aspirations of a better life, CHOOSE to derail our progress on those goals, WHILST being aware that it could potentially be the death knell for the consistency we'd maintained so far? I mean, no sane person would kill their own desires so willingly, right?</p> <p>The subject of procrastination has been under medical research for years, and while it's unnecessary for us to deep dive into the intricacies of the flashing neurons, it helps to know a few superficial facts (I am no more a medical guy than Jackie Chan is a ballerina, so do not kill me over the medical accuracy of what I write - it's been vastly simplified for ease of understanding) - a section of our brain, called the prefrontal cortex, can be thought of as the logical part - the one that makes you do stuff that makes 'logical sense', pursuing goals like exercising, personal projects, reading and so on.</p> <p>And there's this other dude called the limbic system, which has more to do with the emotional and instinctive stuff. And no surprises, it's this little son of a jumbled mass of neurons that makes you procrastinate - it's responsible for the short term pleasures that your brain seeks, and fights with the prefrontal cortex for dominance over your body. Whenever the PC wins, you're productive. When it's the limbic system who comes out on top, #NetflixBinge</p> <p>And thus, our quest to cut at our procrastination would be to ensure that our prefrontal cortex wins more often.</p> <p>Superb. How do we make that happen?</p> <p><i>The Productivity Project</i> author Chris Bailey calls out six traits that usually occur in various quantities in almost all tasks we procrastinate on. The intensity and quantity of each of these traits in a task determine how likely we are to procrastinate on it.</p> <p>These are :</p> <ul> <li> <p>Boring</p> </li> <li> <p>Frustrating</p> </li> <li> <p>Difficult</p> </li> <li> <p>Unstructured or ambiguous</p> </li> <li> <p>Lacking in personal meaning</p> </li> <li> <p>Lacking in intrinsic rewards</p> </li> </ul> <p>Let's take an activity that's a pretty commonly procrastinated one for many of us - tidying our rooms - and rate it on a scale of 1 to 10 on all 6 of these.</p> <p>Boring - yeah, a bit. 6/10.</p> <p>Frustrating? Often - you know it's gonna be the same old mess within a week at most, which makes you wonder why do it at all. 9/10.</p> <p>Difficult? Umm, not so much, unless your room is a palace. 2/10.</p> <p>Unstructured or ambiguous? Yes, absolutely. When do you decide if it's 'clean enough'? Where do you start cleaning? Do you clean the insides of the cupboards too? 10/10</p> <p>Lacking in personal meaning? Unless you're a Monica from Friends, definitely yes. It doesn't give the kicks, and contributes little to personal goals. 9/10</p> <p>Lacking in intrinsic rewards? Again, yes. No direct benefit to me that I can see. 9/10</p> <p>And there, we have it. While thinking of rating the intensity of each of these traits on the task of cleaning the room, we thought about the negative aspects of the task that made us procrastinate on it. And once you know where you're going wrong with a problem, the problem's half solved.</p> <p>Boring? -&gt; Play your fav music as you clean.</p> <p>Unstructured?
-&gt; Create a week-wise plan beforehand as to what part of the room you'll clean the coming day/week, and then tackle only that, not worrying about the others.</p> <p>Lacking in intrinsic rewards? -&gt; If the 'feeling of accomplishment' isn't a good enough reward, you may create a reward for yourself - 10 minutes of binging on something you like if you clean the room.</p> <p>And thus, by quantifying and categorizing some of the 'procrastin-able' aspects of a task, you make plans to systematically limit/eliminate those, and make it harder for the limbic system dude to come out on top.</p> <p>It may seem like a pain, definitely, to think and plan so much before all your procrastinable tasks, and might make you wonder - should I have just gritted my teeth and got done with the task in that time, rather than planning for it like a military general? Well, the very reason you're planning the task is BECAUSE you could not grit yourself and get done with it. The planning will get it done. And once you've gotten the hang of it, the eliminations will come instinctively and faster.</p> </content>
</entry>
<entry>
<title>Productivi-TEA - Time, Energy, Attention</title>
<link href="https://blog.dkpathak.in/productivi-tea-time-energy-attention/"/>
<updated>2021-12-29T00:00:00Z</updated>
<id>https://blog.dkpathak.in/productivi-tea-time-energy-attention/</id>
<content type="html"><p>When starting with a new productivity goal, we often expect from ourselves something that's entirely alien to human nature - that, irrespective of our body's responses, we're constantly able to attack our day's plans with the same zeal, zest and energy throughout the day. Most of us who started on a sunny day with the motivation to blast our productivity through the roof started by dumping tasks and plans on every second of the day, and then watched helplessly as the scheduled stuff came but the body didn't respond with the same energy, or as our attention went into scrolling through some extremely relatable memes on IG.</p> <p>Productivity is a function of Time, Energy and Attention, all of which we have in finite availability. Think of it like money - if you only have a 100 bucks, you'd rather spend 90 of it on what's going to help the most in your survival - food, water and clothing, rather than buying a Netflix subscription. Similarly, the best of our Time, Energy and Attention has to be devoted to the tasks that are the most meaningful to us, from which we hope to derive the maximum output.</p> <p>And that's why the name of this article - ProductiviTEA - the TEA is like caffeine. The optimal usage of the TEA can give you a boost in your life.</p> <p>So, how do you ensure that the best of your TEA goes into your most important tasks? And how can Routine help you in your journey?</p> <h2 id="tracking-your-tea" tabindex="-1">Tracking your TEA<a class="tdbc-anchor" href="https://blog.dkpathak.in/productivi-tea-time-energy-attention/#tracking-your-tea">#</a></h2> <p>You can only improve if you know where you are lacking. Tracking your time, energy and attention throughout the day will give you insights into what your peak moments are, and thus, how you can leverage those.</p> <p>Routine's essence is its calendar drag-and-drop, and you can utilize this core feature to track your traits for a week.</p> <p>To do that, schedule a task every waking hour on the Routine calendar. This task can be an actual work task, or just anything that you intend to do at that particular time, including watching TV, or propping your feet onto the table and staring at the ceiling. You just need to track what you're doing at that time.</p> <p>Double click on a task to open it as a doc, and add two points - Energy and Attention.</p> <p>For each hour, track your energy level out of 100. It may be tough to quantify it at first, but after a few trials, you'll be able to put in a number relative to what you put in the past.</p> <p>Also, for each hour, try and track how many times you felt your attention wandering from what you were meant to do in the past hour. You need not keep a strict count of this - a rough approximation works, to begin with.</p> <p>Follow this ritual for about a week, and you'll begin to notice some patterns - your energy and attention levels are high at certain times of the day. For early morning birds, it's usually the morning hours; likewise for night owls, the late hours. These reflect what's called your Biological Prime Time. Note: artificially induced energy and attention - caffeine, the pressure of deadlines - do not count here. Your BPT is based on your natural body clock and your habits - when you're naturally the most prone to attention and action.
During these BPT high times, perform your most important tasks - usually the ones that you're highly likely to procrastinate on.</p> <p>At your worst BPT, schedule tasks that require the least attention, such as tracking emails.</p> <p>Thus, by actively managing your BPT, you can get more done, without forcing your body.</p> </content>
</entry>
<entry>
<title>CI CD using Github Actions and Netlify</title>
<link href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/"/>
<updated>2021-12-06T00:00:00Z</updated>
<id>https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/</id>
<content type="html"><h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#overview">#</a></h2> <p>In this tutorial, we'll be building a ground-up understanding of what Continuous Integration/Delivery/Deployment means, and why it's so useful in modern software development and DevOps. We'll then take up a hands-on example of configuring a CI pipeline for a sample React application using GitHub Actions, understanding some syntax along the way. We'll then connect our repo to Netlify to configure a CD pipeline.</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#prerequisites">#</a></h2> <p>You'll need a GitHub account. We'll be using a sample React application to set up the workflow, and it might help to understand how to run a React app, although any detailed understanding is not necessary, since we won't be adding any React code in this tutorial.</p> <p>You'll also need an account on netlify.com, which we'll be connecting with the GitHub account to set up a CD pipeline. All of this is entirely free.</p> <h2 id="introduction-to-ci-cd" tabindex="-1">Introduction to CI CD<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#introduction-to-ci-cd">#</a></h2> <blockquote> <p>Disclaimer : Some of the practices might seem vague or overkill right now, especially for those who have not had experience working in large teams. However, CI/CD was developed keeping in mind software development for large, distributed teams.</p> </blockquote> <p>In any team delivering software for a client, it is not enough to just push your code along to a remote repository and be done with it. There's an entire process that happens once you're done coding, and it's fraught with complications.</p> <p>There'll be tens or hundreds of developers making changes to the same codebase, all with different coding styles. Your code might not work with the most recent push made by another developer. Your code might not be good quality, which might make it difficult for other developers to understand it or build upon it. Your code might 'work on your machine', but it might not work in the higher environments.</p> <p>All of these things can go wrong, and they do, so much so that they forced the industry pioneers to come up with an approach to ensure that any new code that was being pushed followed a set of guidelines, and that it went through a series of steps before it finally got merged into the main codebase. This process was rote enough that it shouldn't be done manually each time somebody pushed something, and thus tools were developed to automate the checks and steps that needed to be taken.</p> <p>This process is called Continuous Integration. Your code is continuously 'integrated' into the application, AFTER automated tests and other scripts run on it confirm that it doesn't break some existing feature and is of good quality.</p> <p>A sample CI workflow looks something like this (it differs by team - this is just a sample one):</p> <ol> <li> <p>Developer pushes the code into her/his feature branch.
No one pushes code directly into the master branch in a development team.</p> </li> <li> <p>Developer seeks code reviews from teammates and raises a pull request.</p> </li> <li> <p>As soon as the PR is raised, a step in the CI workflow is triggered and a build starts using the new code, on a build automation tool like Jenkins or TeamCity. If the build fails, it's pointless to carry on to further steps, and the code is reverted to the developer asking her/him to check why it failed and make the changes.</p> </li> <li> <p>If the build passes, the reviewers manually check the changes made by the dev and approve the PR.</p> </li> <li> <p>Once the necessary number of approvals have been granted, the next workflow step gets triggered, wherein automated tests are run on the code to ensure the functionality is working as expected.</p> </li> <li> <p>Further checks MIGHT be made by automated tools checking for code quality or test coverage using tools like SonarQube (SonarLint) or Codecov. These tools raise flags if the new code does not follow some coding standards configured by the team. The developer has to rectify those and restart the workflow.</p> </li> <li> <p>Once the checks are complete, the code then 'tries' to get merged onto the main branch. If some other commit has been made on the same lines as this push touches, there is an automatic merge failure that the developer has to resolve manually.</p> </li> <li> <p>If not, the code gets merged into the main branch.</p> </li> </ol> <p>This might sound like a lot of work, but in a complex project, it's critical to ensure that any new change is the 'right change', or it could take weeks to unroll if it passes undetected. Moreover, with the practice of automated CI, almost all the steps are done automatically, without the need of someone to manually push the code along to the next workflow step.</p> <p>Thus, CI is about pushing code in small increments as frequently as possible, ensuring that it's bug free and follows best practices, and finally merging it into the main code.</p> <p>CD is a term that can refer to Continuous Delivery and/or Continuous Deployment - usually both, first Delivery, then Deployment. Atlassian describes the difference between Delivery and Deployment thus: while Delivery requires a manual intervention for pushing to a production environment, Deployment automates that step as well.</p> <p>Once your code is pushed into the master branch at the end of a CI workflow, it now needs to go through various testing environments where further tests like FT (Functional Testing), SIT (System Integration Testing) and UAT (User Acceptance Testing) are run to ensure the application is working as expected. And once it's gone through all the testing environments, the final release to production can be done manually (continuous delivery), or automatically (continuous deployment).</p> <h2 id="intro-to-github-actions" tabindex="-1">Intro to GitHub Actions<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#intro-to-github-actions">#</a></h2> <p>GitHub Actions is a tool provided by GitHub that helps you create and run the workflows for CI/CD.
By creating a simple workflow file, you can ensure that once your code is committed to GitHub, it'll get released to your production environment entirely on its own without requiring any effort from you.</p> <p>GitHub Actions is an extremely popular tool for beginners, since, unlike other CI tools like Jenkins, it's extremely simple to set up and start, and abstracts away a lot of the setup that newbies need not bother themselves with.</p> <h2 id="how-does-it-work" tabindex="-1">How does it work?<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#how-does-it-work">#</a></h2> <p>Actions uses code packages in Docker containers, which run on GitHub servers and which, in turn, are compatible with any programming language. There are tons of preconfigured workflows available across frameworks like Node, Python, Java, which we can pick and customize for our application, and that's precisely what we're going to do when we get to the hands-on.</p> <h2 id="terms" tabindex="-1">Terms<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#terms">#</a></h2> <p>There are a few terms that will be used in the configuration files that we need to look through. Fortunately, they're more or less exactly like they sound :</p> <ul> <li> <p>Step : A set of tasks that need to be performed. They can be commands like <code>run:npm ci</code> or other actions, like checking out a specific branch.</p> </li> <li> <p>Job : A set of steps that execute on the same runner. Jobs can run independently of each other or sequentially, depending on whether the success of one job depends on the previous one.</p> </li> <li> <p>Workflow : This is what we'll be creating as our end goal - a workflow. It is an automated procedure composed of one or more jobs that is added to a repository and can be activated by an event. Workflows are defined in YAML files, and with one you can build, test, package, release or deploy a project.</p> </li> <li> <p>Event : These are specific activities that trigger the execution of a workflow. For instance, committing to a specific branch, a new PR and so on.</p> </li> <li> <p>Action : The smallest building block of a workflow; actions can be combined as steps to create a job.</p> </li> <li> <p>Runner : A machine with the GitHub Actions application already installed, whose function is to wait for jobs to become available, execute the actions, and report the progress and the results.</p> </li> </ul> <h2 id="introduction-to-netlify" tabindex="-1">Introduction to Netlify<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#introduction-to-netlify">#</a></h2> <p>Netlify is a platform that allows you to host and deploy frontend applications. It is an extremely popular tool for newbies because it takes no more than a few clicks to deploy application code from GitHub directly to Netlify.</p> <h2 id="sample-application-to-set-up-workflow" tabindex="-1">Sample application to set up workflow<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#sample-application-to-set-up-workflow">#</a></h2> <p>We'll be setting up the workflow using a simple React application - https://github.com/dkp1903/react-github-actions. You need not clone it to your local, since we will not be making any code changes to the app.
Instead, you'll have to fork the repo to your own GitHub using the fork button available on the top right corner.</p> <p>Once it's done, go to the Actions tab on GitHub, and select the NodeJS workflow.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image5.png" alt="" /></p> <p>It'll create a file called <code>node.js.yml</code> with some prewritten configuration, like this:</p> <pre><code># This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions

name: Node.js CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [12.x, 14.x, 16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/

    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v2
      with:
        node-version: ${{ matrix.node-version }}
        cache: 'npm'
    - run: npm ci
    - run: npm run build --if-present
    - run: npm test</code></pre> <p>The 'on' field describes when this particular workflow will be run. Right now, it's set to all pushes to, and pull requests on, the main branch.</p> <p>'runs-on' describes the environment the code will be run on, on GitHub servers. It's Ubuntu 20, and we'll leave it at that.</p> <p>The node versions to be checked against are 12, 14 and 16, so we'll have three different jobs running in parallel when the workflow gets triggered. We'll leave this one as is as well.</p> <p>The 'run' fields signify the commands to be run, first the npm ci (clean install). We'll change that to <code>npm i</code> for ease of understanding.</p> <p>Then the npm run build with the --if-present flag, which means that the build script will run only if it is present. Fortunately, our app does have a build script, so we'll leave this as well.</p> <p>Finally, the npm test command will run the test file we have (App.test.js), which contains just a single test.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image1.png" alt="" /></p> <p>Click on the start commit button on the top right. Once you do, the workflow will automatically be triggered.</p> <p>There will be three jobs running in parallel, one each for Node versions 12, 14 and 16. The jobs will all be successful in a few minutes.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image9.png" alt="" /></p> <p>Open the build for Node 12, and look at the steps that were followed. If you open the npm test step, you'll see that there's one test, which passed.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image2.png" alt="" /></p> <p>We'll soon mess with that.</p>
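<p>For context, that single test presumably checks that the 'React App' text renders - something along these lines (an illustrative sketch; the repo's actual App.test.js may differ):</p> <pre><code>// src/App.test.js - illustrative sketch, not necessarily the repo's exact file
import { render, screen } from '@testing-library/react';
import App from './App';

test('renders the React App text', () =&gt; {
  render(&lt;App /&gt;);
  // This is the assertion that breaks once 'React App' is removed from App.js
  expect(screen.getByText(/react app/i)).toBeInTheDocument();
});</code></pre>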
<p>We've set up the CI part of the CI/CD pipeline. Now, for the CD, we'll connect our repository to Netlify, where we'll host our React application code.</p> <p>Go to netlify.com and sign up using GitHub, using the same GitHub account as your react-github-actions repo is on.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image8.png" alt="" /></p> <p>Now, click on New site. Select the provider as GitHub. Search for the react-github-actions repo and add it.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image6.png" alt="" /></p> <p>You'll be asked for some details.</p> <p>Change the build command to <code>npm run build</code>.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image11.png" alt="" /></p> <p>Click on Deploy site.</p> <p>Once you do, the deploy will be auto-triggered.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image7.png" alt="" /></p> <p>So, now that we have everything running smoothly, we'll make a mess of things by introducing a breaking change - because things don't work at one go in software development.</p> <p>Go to GitHub, and edit the App.js file by removing the words 'React App'.</p> <p>Add the commit message as 'added-breaking-change', and instead of pushing directly to the master branch, click on create a new branch. You may name it anything.</p> <p>Now, we'll make a PR into the master branch.</p> <p>If you remember, we'd configured the CI pipeline to work in two cases: one, if a push was made to master, and two, if a PR was raised.</p> <p>So this time, as soon as we raise the PR, the workflow should be triggered.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image10.png" alt="" /></p> <p>Sure enough, you'll see the steps being run.</p> <p>We see that the checks fail.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image4.png" alt="" /></p> <p>If you go to the Actions tab and check the CI logs, you'll see that the test failed, which is what we'd expected.</p> <p>Go to Netlify and confirm that no deployment has started.</p> <p>Now, add the 'React App' text back into the file and make a commit into the same branch.</p> <p>The tests will now run again, and you'll see that they pass this time. Once the tests pass, you can merge the pull request.</p> <p>And going to Netlify, you'll see that a deploy has been triggered.</p> <p><img src="https://blog.dkpathak.in/img/scalex/ga/image3.png" alt="" /></p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#conclusion">#</a></h2> <p>Thus, you understood the concepts of CI/CD and how they work in a production environment. You set up an application, configured CI on it using GitHub Actions, and CD using Netlify. You confirmed the flow by purposely failing the CI test and ensured that an incorrect deployment did not get triggered.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/ci-cd-using-github-actions-and-netlify/#references">#</a></h2> <ul> <li><a href="https://www.atlassian.com/continuous-delivery/principles/continuous-integration-vs-delivery-vs-deployment">CI vs Delivery vs Deployment (Atlassian)</a></li> </ul> </content>
</entry>
<entry>
<title>Optimal timeblocking</title>
<link href="https://blog.dkpathak.in/optimal-timeblocking/"/>
<updated>2021-11-26T00:00:00Z</updated>
<id>https://blog.dkpathak.in/optimal-timeblocking/</id>
<content type="html"><p>Time blocking has been much acclaimed as a wonderful means to get things done by not giving our mind an alternative - if it's there on the calendar, you do it. Come. What. May.</p> <p>The premise is this - instead of dumping a list of tasks on the Todo list and getting to them when you 'feel like it', if you instead assign a time you'll do each task, you'll not give your brain a chance to procrastinate.</p> <p>It takes various forms - Elon Musk plans his entire day in five minute chunks, with not a minute of his waking day left unscheduled, whereas many others only schedule the most important and unmissable events and tasks, and handle the rest on a 'will be taken up as possible' basis</p> <p>In such a case, how do we make sure that we block times 'optimally', so that we get the tasks done, and at the same time, keep enough leeway for interruptions?</p> <p>Here are a few steps you can take to ensure you're timeblocking 'optimally' :</p> <ol> <li> <p>Block incrementally. A large number of us, in an initial spur of motivation, end up blocking the day down to the minute, only for the entire schedule to unravel at the first overrun task, or the first distraction. To steel your brain into following a calendar is a challenge, and it takes time to get used to it. Thus, start with blocking unavoidable meetings/events/tasks - these have the highest likelihood of not being procrastinated. Once your brain gets into the habit of checking your calendar before picking up a task, then start adding further tasks slowly - start with the ones you're most likely to procrastinate on, and then go to the relatively easy ones. The moment you feel an urge to NOT do a task inspite of it being on the calendar, take a day's break, without adding any further tasks, until you can steel yourself to stick to it. Reason being - the mind should see the calendar as sacred and unmodifiable. If you start pushing around tasks, you'll start doing that with every task on there, in no time.</p> </li> <li> <p>Block time, for not just work, but also 'non work' : 'Spending time with family' may not figure on many of our todolists, however, it does take time. Block time for stuff that's not a direct task/meeting, but is anyway gonna take time - otherwise, it'll feel like you had an empty calendar, and still got nothing done.</p> </li> <li> <p>Optimal duration : How much time should you give to a task? On the one hand, there's an idea that says that work expands to take up as much time as you allot to it, but it misses the fine print that there's definite upper and lower limits. You can't cook dinner in 2.5 minutes, no matter how motivated you are. Giving too little of time to a task will make you feel demotivated at being unable to meet the deadline. And at the same time, giving way too much time to a task will make you procrastinate - the very thing we're trying to avoid. Thus, spend a few extra seconds planning the optimal duration for each slot you block on the calendar. As a rule of thumb, always plan a few more minutes for a task than you think you'll need, since humans have a tendency to overestimate themselves and underestimate the challenges. If the task/meeting involves other people, make sure you finalize the agenda and the duration in advance, since it can otherwise derail very easily.</p> </li> <li> <p>Padding : Add a few minutes of padding after a task - say 15 minutes for every one hour. This is to ensure that you can take a break before getting on with the next task. 
This break is necessary because, no matter how motivated you are, humans' attention span for deep work is limited and needs constant replenishment. Moreover, the padding can absorb small overruns without throwing off the rest of the schedule.</p> </li> <li> <p>Scheduling breaks: No, all the white space on the calendar is NOT a break. You MUST schedule break times on your calendar, wherein you can actually rejuvenate. And that doesn't mean scrolling socials. Your mind needs a break, your eyes need a break, and your body needs movement - so give it that.</p> </li> <li> <p>Rescheduling: No, you can't work without having to reschedule at least once a week. But at the same time, 'not feeling like it' isn't a valid excuse for pushing a task to the next day. Rescheduling has to follow the same discipline that you followed when scheduling, or you'll eventually end up rescheduling all the tasks you don't want to do. Rescheduling has to follow a careful evaluation process:</p> </li> </ol> <ul> <li> <p>One, reschedule a task only if circumstances entirely out of your control come in and threaten to take up more than 50% of the time you initially allotted for the task. Otherwise, just push the task a bit and see it through.</p> </li> <li> <p>Two, reschedule the task to a time that you KNOW you'll be able to do it at. Pushing a task away to a random slot just to get it out of the way for the moment means that you're going to have to reschedule the task at least once more, and that'll kill off the motivation you have for doing it.</p> </li> <li> <p>Three, if you have to reschedule the same task more than twice, reevaluate it - is it really unavoidable circumstances, or are you just finding excuses to delay the inevitable?</p> </li> <li> <p>Finally, if you end up with a lot of rescheduling done over the week, your scheduling wasn't good enough to begin with - so rethink your scheduling strategy.</p> </li> </ul> <ol start="7"> <li>Flexibility: This may seem counterintuitive, because the tone of this article has been about forcing your mind into a schedule. However, flexibility does not mean rescheduling and reprioritizing tasks at will. Instead, it's the freedom to reevaluate your scheduling strategies based on the insights you derive from your present schedule. For instance, if you observe over a week that 9 PM - 10 PM is a super productive time for you, but your calendar is filled with relatively unimportant tasks in that slot - change it in your next schedule. If you observe your tasks often overshoot, reevaluate how you estimate the time block for each task.</li> </ol> <p>Timeblocking is an effective way to avoid procrastinating on necessary tasks by leaving choice out of the equation, and if done the right way, it can greatly boost net productivity.</p> </content>
</entry>
<entry>
<title>Lessons learnt from a year long experiment on productivity</title>
<link href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/"/>
<updated>2022-01-15T00:00:00Z</updated>
<id>https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/</id>
<content type="html"><p>For over the past 52 weeks, I've invested in learning more about productivity patterns, and how I could possibly meet the goals I set for myself, get over my ADHD and do a decent job at work, without burning myself out in the process.</p> <p>This article reflects my major findings - all of which I've tried and tested on myself.</p> <h3 id="1-action-is-the-greatest-motivation" tabindex="-1">1. Action is the greatest motivation<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#1-action-is-the-greatest-motivation">#</a></h3> <p>10 minutes of actually working on a task that's important for you creates a motivation boost to complete the rest of it, far more than any planning ever will. This is how most habits begin to develop - we start at 1%, and the action becomes the motivation for further action. Thus, next time you procrastinate, just get started on one tiny bit of task, and it'll boost you to continue.</p> <h3 id="2-conservative-time-blocking" tabindex="-1">2. Conservative time blocking<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#2-conservative-time-blocking">#</a></h3> <p>Timeblocking is a well recognized technique where you schedule time for a task, and in that duration, just work on that task, thereby eliminating the need for the mind to will itself into picking a task off the todo list.</p> <p>However, if done wrongly, timeblocking can end up being as unproductive as all other todo lists and way more demoralizing. Timeblocking every minute of the day without considering your energy levels and other distractions can make the exercise futile. Thus, when you start, block time only for the most essential tasks that you dare not skip. Once you get into the habit, only then schedule your time more. This incremental approach will trick your brain into believing, that if it's on the calendar, it's sacred, and can NOT be missed, come what may.</p> <h3 id="3-part-day-planning" tabindex="-1">3. Part-day planning<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#3-part-day-planning">#</a></h3> <p>Most productivity gurus talk about planning one day ahead. However, early on in our careers, we have very little control over our time at work, and thus, our day plans can get disrupted if we're pushed into an energy draining task that we hadn't expected. You have considerable more control over only the next 6 hours of the day, at any given point. Thus, divide your day into 3 halves, and only plan for the next 6 hours. At 8 AM, plan for your 8-1. At 1:30, plan for your 2-7 PM, and at 7:30 PM, plan for your 8 PM - 1 AM. You'll be able to gauge your energy levels and calendar blockers much better this way</p> <h3 id="4-biological-prime-time" tabindex="-1">4. Biological Prime Time<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#4-biological-prime-time">#</a></h3> <p>As the name suggests, it refers to a few times of the day when you're at your highest energy. Schedule your most energy draining tasks around your BPT to increase your chances of getting them done.</p> <p>How do you find out your BPT? For one/two weeks, keep track of how motivated and mentally fresh you feel at every hour of the day. After the duration, you'll see a recurring high at some common time slots. 
For me, it usually comes between 7 AM - 9 AM in the morning, 5 PM - 7 PM in the evening, and 9:30 - 10:30 PM at night.</p> <p>At your lowest energy levels, either switch off entirely from doing anything, or if that's unavoidable (say, if the low falls during work hours), do your least energy consuming 'maintenance' tasks - checking mail, cleaning up your workspace, etc. - which require very little mental presence.</p> <h3 id="5-objectivize" tabindex="-1">5. Objectivize<a class="tdbc-anchor" href="https://blog.dkpathak.in/lessons-learnt-from-a-year-long-experiment-on-productivity/#5-objectivize">#</a></h3> <p>A lot of what we do is based on subjective decisions and considerations - I'll consider this website complete when I feel it's 'good enough', or this task will take 'some time', or I have a goal to 'get 6 pack abs eventually'.</p> <p>These subjective connotations mean that your mind has to work at 'interpreting' them before you can actually do something about them - it has to define when the website feels 'good enough', how much time 'some time' is, or what you should do at this moment to 'get 6 pack abs eventually'.</p> <p>Instead, creating objective, measurable checklists for your tasks and milestones makes it infinitely easier to track them, and removes the need for your brain to expend energy every time on defining the criteria for completion. You can just kick into auto gear mode. In the above examples: 'I'll consider this website done once the header has a gradient, the three body sections are done, and there are 5 links in the footer', 'completing these 3 checkpoints in the task will take a total of 90 mins', and 'I'll do 60 situps and 80 leg rotations every alternate day'.</p> </content>
</entry>
<entry>
<title>Intro to Terraform</title>
<link href="https://blog.dkpathak.in/intro-to-terraform/"/>
<updated>2021-11-23T00:00:00Z</updated>
<id>https://blog.dkpathak.in/intro-to-terraform/</id>
<content type="html"><h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#overview">#</a></h2> <p>Terraform is an Infrastructure as Code (IaC) tool, used to deploy and manage infrastructure (cloud servers, cloud DB instances etc) using code, rather than a GUI. In this tutorial, we'll look at what IaC is, why it's thought to be a better idea than using a GUI, and how Terraform achieves it. We'll then implement a rather unique project, to create a Spotify playlist using Terraform!</p> <p>We'll use an AWS EC2 instance for the tutorial because it's much faster and straightforward than to fight with our miserly personal laptop RAMs. Instructions to set up an AWS EC2 instance can be found <a href="https://dkprobes.tech/setting-up-a-production-ready-application-with-react/#setting-up-an-aws-ec2-instance">here</a></p> <h2 id="introduction-to-iac" tabindex="-1">Introduction to IaC<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#introduction-to-iac">#</a></h2> <p>Infrastructure refers to everything that's used in the deployment of an application - including the server configurations, load balancers, access groups, VPCs and a zillion other things. As beginners, we often use GUIs to configure this infrastructure - such as the AWS EC2 setup you'd have done - configuring the security groups, storage etc by clicking away at the console.</p> <p>This practice however, is often not optimal when you're working with hundreds of instances which need very precise configurations, and are worked on by hundreds of developers. In this case, we take a trick off the old hat - just like how our normal application code changes are managed and maintained using version control, we use code to configure our infrastructure, and deploy that configuration to version control so that other developers can see it, edit it and use it.</p> <p>How exactly does that work? The configurations that we do work via APIs that modify and manage the resources and infrastructure. When we use a GUI like the EC2 dashboard, it's the UI that's making the calls to the APIs for modifying the infrastructure. The same APIs can also be accessed via code, to give the same result. And that's precisely what IaC is.</p> <h2 id="intro-to-terraform" tabindex="-1">Intro to Terraform<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#intro-to-terraform">#</a></h2> <p>Terraform is the tool to bring IaC to reality. It has a configuration language using which you can interact with the infrastructure platform APIs, like AWS EC2 APIs, to add, update and remove resources.</p> <p>These configuration files can be deployed to version control, meaning that other developers on the team can refer to these or update them as required, without the intervention of the person who first set it up.</p> <p>So how exactly does it all come together in practice?</p> <h3 id="1-making-configuration-edits" tabindex="-1">1. Making configuration edits<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#1-making-configuration-edits">#</a></h3> <p>The developers first make the changes to the infrastructure in the configuration language</p> <h3 id="2-execution-plans" tabindex="-1">2. Execution plans<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#2-execution-plans">#</a></h3> <p>Terraform then generates an execution plan based on the configuration changes you made, and asks you for your approval to ensure there're no unexpected changes. 
You wouldn't want a semicolon removed by the ill-famed intern to bring down your primary server, would you?</p> <h3 id="3-resource-graph" tabindex="-1">3. Resource graph<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#3-resource-graph">#</a></h3> <p>Infrastructure takes time to set up and configure, especially when there's tons of it, each piece with its own specifics. Thus, Terraform creates a resource graph that allows it to build and provision independent resources in parallel, to save time</p> <h3 id="4-change-automation" tabindex="-1">4. Change automation<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#4-change-automation">#</a></h3> <p>When you make changes to your infrastructure, Terraform applies those changes with as much efficiency as possible, and with minimal human intervention required.</p> <p>Now that we're clear with the concepts, let's get our hands dirty by setting up Terraform</p> <h2 id="setting-up-terraform" tabindex="-1">Setting up Terraform<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#setting-up-terraform">#</a></h2> <p>As discussed, we'll be using an EC2 instance to set up and configure Terraform and the other necessary dependencies, since it has more RAM and doesn't heat your laptop to 10 million degrees.</p> <p>You can follow the instructions in the article linked in the overview for setting up an EC2 instance. If not, you can continue on your personal laptop. OS-wise instructions can be found <a href="https://learn.hashicorp.com/tutorials/terraform/install-cli">here</a></p> <p>Once you're logged into the EC2 terminal, we'd first need a few packages that Terraform uses. Execute the following commands on the terminal</p> <pre><code>sudo apt-get update &amp;&amp; sudo apt-get install -y gnupg software-properties-common curl
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/1-install-curl.PNG" alt="" /></p> <p>Next, add the HashiCorp GPG key:</p> <pre><code>curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
</code></pre> <p>Then add the official HashiCorp Linux repository:</p> <pre><code>sudo apt-add-repository &quot;deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main&quot;
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/2-hashicorp.PNG" alt="" /></p> <p>Finally, to install terraform (we do the apt-get update to pull in the repository we added in the previous step):</p> <pre><code>sudo apt-get update &amp;&amp; sudo apt-get install terraform
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/4-terraform.PNG" alt="" /></p> <p>Once complete, type <code>terraform -help</code> and a list of options as below will indicate that the installation has been successful.</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/4-terraform-installed.PNG" alt="" /></p> <h2 id="setting-up-docker-engine" tabindex="-1">Setting up Docker Engine<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#setting-up-docker-engine">#</a></h2> <p>Now that we've finished installing Terraform, the next step is to set up Docker Engine, since we'll be using a Docker image for our project.</p> <p>First, we update the apt package index and install packages to allow apt to use a repository over HTTPS:</p> <pre><code>sudo apt-get update
</code></pre> <pre><code>sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/6-ca-certs.PNG" alt="" /></p> <p>Next, we add Docker's official GPG key.
GPG stands for GNU Privacy Guard; the key essentially lets apt verify that the Docker packages we're about to download were genuinely signed by Docker.</p> <pre><code>curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/7-docker-gpg-key.PNG" alt="" /></p> <p>Next, we add the stable repository</p> <pre><code>echo \
  &quot;deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable&quot; | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre> <p>Now, we'll install the docker engine</p> <pre><code>sudo apt-get update
</code></pre> <pre><code>sudo apt-get install docker-ce docker-ce-cli containerd.io
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/8-install-docker.PNG" alt="" /></p> <p>In case you're wondering, <code>apt-get update</code> downloads the package lists from the repositories and &quot;updates&quot; them to get information on the newest versions of packages and their dependencies.</p> <p>Finally, to verify that Docker has been installed successfully, run the hello world image:</p> <pre><code>sudo docker run hello-world
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/9-docker-run.PNG" alt="" /></p> <h2 id="configuring-spotify" tabindex="-1">Configuring Spotify<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#configuring-spotify">#</a></h2> <p>Next, we'll set up the Spotify developer dashboard. Go to https://developer.spotify.com/dashboard and login/signup. Once you do, you should see a dashboard like this:</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/16-spotify.PNG" alt="" /></p> <p>Click the Create an App button, and enter details like so:</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/17-create-app.PNG" alt="" /></p> <p>and click create</p> <p>Once the application is created, click the green Edit Settings button on the top right side.</p> <p>Go to the redirect_uris section and add a URL - <code>http://localhost:27228/spotify_callback</code>. Click on add and then save at the bottom. Do not forget to save - it's easily missed.</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/18-redirect-url.PNG" alt="" /></p> <p>This URL is what we'll be redirected to once we're authenticated by Spotify, giving us the rights to create the playlist.</p> <p>One question you might have - we're using an EC2 instance for the Terraform setup, so why did we add a localhost link there? We'll come to that answer in a bit.</p> <p>Now, since we're dealing with a port that's expected to have some traffic, we'll need to add it to the inbound rules of the AWS security group for our instance, to avoid failed requests. If you don't know how, follow the instructions <a href="https://dkprobes.tech/setting-up-a-production-ready-application-with-react/#setting-up-an-aws-ec2-instance">here</a></p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/19-add-port.PNG" alt="" /></p> <p>Now, we'll have to add the redirect URL as an environment variable to our EC2 instance.
Go to the terminal and enter the following</p> <pre><code>export SPOTIFY_CLIENT_REDIRECT_URI=http://localhost:27228/spotify_callback
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/20-export.PNG" alt="" /></p> <p>Next, we'll create a .env file to hold our Spotify app credentials.</p> <p>Type</p> <pre><code>nano .env
</code></pre> <p>to create a .env file and open it in the nano text editor.</p> <p>We'll be adding two variables, the client ID and the client secret:</p> <pre><code>SPOTIFY_CLIENT_ID=
SPOTIFY_CLIENT_SECRET=
</code></pre> <p>For these values, go to the Spotify developer dashboard, copy the client ID and secret, and paste them here</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/21-env.PNG" alt="" /></p> <p>And now, the moment of truth - we'll use the docker image of the application to run it and see if we're able to authenticate ourselves. In the terminal, enter the following command</p> <pre><code>docker run --rm -it -p 27228:27228 --env-file ./.env ghcr.io/conradludgate/spotify-auth-proxy
</code></pre> <p>You should see an output like this</p> <pre><code>Unable to find image 'ghcr.io/conradludgate/spotify-auth-proxy:latest' locally
latest: Pulling from conradludgate/spotify-auth-proxy
5843afab3874: Pull complete
b244520335f6: Pull complete
Digest: sha256:c738f59a734ac17812aae5032cfc6f799e03c1f09d9146edb9c2836bc589f3dc
Status: Downloaded newer image for ghcr.io/conradludgate/spotify-auth-proxy:latest
APIKey: xxxxxx...
Token: xxxxxx...
Auth: http://localhost:27228/authorize?token=xxxxxx...
</code></pre> <p>Copy the <code>http://localhost</code> url and paste it in a new browser tab.</p> <p>Well?</p> <p>Did you get a 'Site can't be reached' page? Of course you did. Wonder why?</p> <p>Your server is running on EC2, not on localhost, as we'd noted earlier. So, in the URL, replace the localhost with the Public IPv4 address of your EC2 instance. Once you do that, the page will load an authorization page like this:</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/22-authorize.PNG" alt="" /></p> <p>Click on agree, and you'll be redirected to the localhost link you'd given as the redirect URL. Again, replace the localhost with the IP of the server, and you'll be able to see this message:</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/23-auth-successful.PNG" alt="" /></p> <p>And your terminal will be updated as follows:</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/24-auth-success-terminal.PNG" alt="" /></p> <p>Keep the server up and running. Open a new terminal and SSH into the EC2 instance once again - we need this one for setting up the Terraform configuration.</p> <p>Now, we'll be working on the Terraform configuration we need for our app. Use this command to clone a repo that contains the Terraform configuration, which searches for songs by Dolly Parton and creates a playlist out of them.</p> <pre><code>git clone https://github.com/hashicorp/learn-terraform-spotify.git
</code></pre> <p>And cd into the directory</p> <pre><code>cd learn-terraform-spotify
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/25-clone.png" alt="" /></p> <p>Do an ls command, and you'll see three files in the repo</p> <p>Enter <code>cat main.tf</code> to open the file.
The content will be something like this</p> <pre><code>terraform {
  required_providers {
    spotify = {
      version = &quot;~&gt; 0.1.5&quot;
      source  = &quot;conradludgate/spotify&quot;
    }
  }
}

variable &quot;spotify_api_key&quot; {
  type = string
}

provider &quot;spotify&quot; {
  api_key = var.spotify_api_key
}

resource &quot;spotify_playlist&quot; &quot;playlist&quot; {
  name        = &quot;Terraform Summer Playlist&quot;
  description = &quot;This playlist was created by Terraform&quot;
  public      = true

  tracks = [
    data.spotify_search_track.by_artist.tracks[0].id,
    data.spotify_search_track.by_artist.tracks[1].id,
    data.spotify_search_track.by_artist.tracks[2].id,
  ]
}

data &quot;spotify_search_track&quot; &quot;by_artist&quot; {
  artists = [&quot;Dolly Parton&quot;]
  # album = &quot;Jolene&quot;
  # name  = &quot;Early Morning Breeze&quot;
}

output &quot;tracks&quot; {
  value = data.spotify_search_track.by_artist.tracks
}
</code></pre> <p>The first <code>terraform</code> block contains the Terraform configuration, followed by the provider details. Here, we'll supply the Spotify API key, which allows us to access the developer account and add the song details.</p> <p>Then come the details of the playlist itself - we search for the artist Dolly Parton, and (commented out) the album and name of the song.</p> <p>Next, rename the <code>terraform.tfvars.example</code> file to <code>terraform.tfvars</code> so that Terraform can detect the file, using the following command:</p> <pre><code>mv terraform.tfvars.example terraform.tfvars
</code></pre> <p>Next, open the above file using nano and add the API key which you'd copied earlier from the running Docker container. Remember to keep the quotes there.</p> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/28-api-key.png" alt="" /></p> <p>Next, we'll initialize Terraform, which will install the Spotify provider, using the following command:</p> <pre><code>terraform init
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/terraform/30-tf-init.png" alt="" /></p> <p>Now, enter</p> <pre><code>terraform apply
</code></pre> <p>to apply the configuration you have made. You'll see a confirmation with the details you've entered, like so:</p> <pre><code>Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # spotify_playlist.playlist will be created
  + resource &quot;spotify_playlist&quot; &quot;playlist&quot; {
      + description = &quot;This playlist was created by Terraform&quot;
      + id          = (known after apply)
      + name        = &quot;Terraform Summer Playlist&quot;
      + public      = true
      + snapshot_id = (known after apply)
      + tracks      = [
          + &quot;2SpEHTbUuebeLkgs9QB7Ue&quot;,
          + &quot;4w3tQBXhn5345eUXDGBWZG&quot;,
          + &quot;6dnco8haegnJYtylV26cBq&quot;,
        ]
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + playlist_url = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
</code></pre> <p>Enter yes, and the playlist will be created</p> <pre><code>  Enter a value: yes

spotify_playlist.playlist: Creating...
spotify_playlist.playlist: Creation complete after 1s [id=40bGNifvqzwjO8gHDvhbB3]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

playlist_url = &quot;https://open.spotify.com/playlist/40bGNifvqzwjO8gHDvhbB3&quot;
</code></pre> <p>And there you have it. You can open the playlist link in the browser.</p>
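<p>If you want to experiment further, you can narrow the search by uncommenting the filters in main.tf and re-running <code>terraform apply</code>. A quick sketch - the album value here is just the commented-out example from the file:</p> <pre><code>data &quot;spotify_search_track&quot; &quot;by_artist&quot; {
  artists = [&quot;Dolly Parton&quot;]
  # Narrow the search down to a single album
  album   = &quot;Jolene&quot;
}
</code></pre> <p>On the next apply, Terraform should detect the change to the data source, look up a fresh set of tracks, and update the playlist in place.</p>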
<h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#conclusion">#</a></h2> <p>Thus, in this tutorial, we understood what IaC is, the use cases for it, and how it improves on GUI-based configuration. We got introduced to Terraform and how it works.</p> <p>We then set up a Spotify playlist using Terraform, getting a decent overview of how it works in the process.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-terraform/#references">#</a></h2> <ul> <li><a href="https://www.terraform.io/intro/index.html">Terraform Docs</a></li> </ul> </content>
</entry>
<entry>
<title>AWS Cloudwatch</title>
<link href="https://blog.dkpathak.in/aws-cloudwatch/"/>
<updated>2021-12-08T00:00:00Z</updated>
<id>https://blog.dkpathak.in/aws-cloudwatch/</id>
<content type="html"><h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#overview">#</a></h2> <p>In this tutorial, we'll understand the requirements of metrics for our server instances. We'll set up an EC2 instance and configure cloudwatch to track metrics for the instance and set up alerts when certain criteria are met</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#prerequisites">#</a></h2> <p>You'll need an AWS account. If you do not have one, sign up on aws.amazon.com.</p> <h2 id="metrics-and-why-we-need-them" tabindex="-1">Metrics, and why we need them<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#metrics-and-why-we-need-them">#</a></h2> <p>Every server instance we use has a finite amount of load it can hold, the CPU power, the number of reads/writes, and so on. In case of excessive load, the server might crash, leading to service disruption for users. While it doesn't sound like a big deal when working on personal projects, it can have serious business consequences when working with actual users. Remember the time Google went down for just 45 mins? The world practically came to a standstill. To avoid this, we use metrics that track server activity - how many requests it's handling, the CPU being used, and so on. If we see traffic hitting the server's limits, we can configure additional instances and <a href="https://dkprobes.tech/setting-up-load-balancing-using-nginx/">balance load across these</a></p> <h2 id="introduction-to-aws-cloudwatch" tabindex="-1">Introduction to AWS Cloudwatch<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#introduction-to-aws-cloudwatch">#</a></h2> <p>AWS Cloudwatch is a service that tracks various metrics for your AWS resources, including EC2 instances, S3 buckets, lambda functions, EBS and more.</p> <p>You can create dashboards to track the metrics over time and can take intelligent decisions regarding scaling up your server capacity</p> <p>Most importantly, it also allows to setup alarms when certain critical thresholds are hit, so that you can take action immediately without any service disruption</p> <h2 id="getting-hands-on-with-cloudwatch" tabindex="-1">Getting hands on with Cloudwatch<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#getting-hands-on-with-cloudwatch">#</a></h2> <p>In this tutorial, we'll be setting up an EC2 instance and run a simple React app on it. We'll then track the metrics of the instance as we make hits to the server and set up an alarm when the requests cross a certain threshold</p> <p>The following section describes the steps to set up and run a React app on an EC2 instance. If you already have one running, skip this section and go to the next one.</p> <h2 id="setting-up-a-react-app-on-an-ec2-instance" tabindex="-1">Setting up a react app on an EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#setting-up-a-react-app-on-an-ec2-instance">#</a></h2> <p>Next, let’s set up a remote EC2 server instance. As said before, you’ll need an AWS account for the same. If you don’t already have one, you’d need to create it. Remember, it’ll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged for it.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. 
You'll get a confirmatory mail once your account is set up and ready.</p> <p>Once you log in to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/image2.png" alt="" /></p> <p>Click on the blue 'Launch a virtual machine' line, and you'll be taken to the EC2 setup screen, wherein you'd have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you'd be using to host your application, including the OS configuration - Linux, Ubuntu, Windows etc. If you have been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We'll be going with Ubuntu server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn't have the 'Free tier eligible' tag, otherwise, you'll be having to sell off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image5.png" alt="" /></p> <p>The next step is choosing an instance type. This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we'll pick the t2.micro instance type, which is the only one available in the free tier. You'll need larger ones as your application size and requirements in RAM or processing speed increase. In case you're not clear on any of the column fields, click the information icon next to the headings to get a description of what it means.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you're asked for the number of server instances you wish to create, and some properties regarding them. We only need one server instance. The rest of the properties are auto filled based on the configuration we selected in earlier steps and/or default values, and thus should be kept as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image3.png" alt="" /></p> <p>Next, click on Add storage</p> <p>As the name suggests, storage refers to the amount of storage on our server. Note that this isn't the storage you'd consider for storing databases. This is temporary storage that will last only as long as the instance lasts, and thus can be used for things like caching. A size of 8GB, which is part of the free tier and is the default, suffices for our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image15.png" alt="" /></p> <p>Next, we'd be adding a tag for our instance. It is a key:value pair that describes an instance. Since we only have a single instance right now, it is not very useful, but when you are working with multiple instances and instance volumes, as will be the case when the application scales, it is used to group, sort and manage these instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image6.png" alt="" /></p> <p>Next, we'll be adding a security group to our instance. An SG is practically a firewall for your instance, restricting the traffic that can come in and what ports it can access, called inbound, and the traffic that can go out, called outbound. There are further options to restrict the traffic based on IP. For instance, your application will run on port 3000, and thus, that's a port you'd want all your users to be able to access. Compare that to a Postgres database service running on port 5432.
You don't want anyone else but you meddling with that, so you'll restrict the IP of that port to only you.</p> <p>Create a new security group. Next, we have to add the rules for the group, describing what ports are accessible to the outside world, and who they are accessible to. Note that outbound traffic has no restrictions by default, meaning that your application can send a request anywhere without any restriction from the SG, unless you choose to restrict it. As for inbound, we'll first add HTTP on port 80 and HTTPS on port 443. Next, we'll add an SSH rule for port 22. SSH stands for Secure Socket Shell and will allow you to connect to your instance, as we'll soon see in the coming section. Finally, we'll add a custom TCP rule for the port our application is going to expose - port 3000.</p> <p>For simplicity, we'll keep the sources of all of those at 'anywhere'. Ideally, SSH should be limited only to those you want to allow to connect to your instance, but for the sake of the tutorial, we'll keep it at anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image17.png" alt="" /></p> <p>Once the rules are set, click on Review and Launch. You'll be shown the configurations you've selected, to ensure you didn't make a mistake anywhere. Once you hit launch, you'll be asked to create/select a key pair. As the name suggests, it's a pair of keys - one held by AWS, and the other by you - that acts as a sort of password for you to connect to your instance. Anyone wishing to SSH into this instance must have access to this key file, or they won't be able to.</p> <p>The file contains an RSA-encrypted key, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It's recommended that you download the .pem key file to the C:/Users/Home directory on Windows (/home/usr or similar for Linux and Mac), to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image10.png" alt="" /></p> <p>Once the file is downloaded, you'll get a prompt that your instance is starting, and after a few minutes, your instance will be started. Your EC2 home page should look like this. Note the Name: Main (tag), and the instance type t2.micro that we selected when we were setting up the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image9.png" alt="" /></p> <p>Next, select the instance, and click on Connect on the top bar. It'll open this page:</p> <p><img src="https://blog.dkpathak.in/img/scalex/image1.png" alt="" /></p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we'll be using the terminal to connect to your instance (remote server). For that, open a new terminal as administrator (superuser or sudo for Linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we'll run the <code>chmod 400 keyfilename.pem</code> command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you'll lose SSH access to that instance forever, and you'll have to recreate the instance, since you won't get the .pem file to download again.</p> <p>And once you're done with that, it's time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the <code>ssh -i …</code> one.</p>
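<p>It will look something like this - a sketch, where the key file name and hostname are placeholders; use the exact command from your own Connect page:</p> <pre><code># SSH into the instance using the downloaded key pair
ssh -i keyfilename.pem ubuntu@ec2-12-34-56-78.compute-1.amazonaws.com
</code></pre>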
<p>It means that we're ssh-ing into the instance defined by the DNS (the .amazonaws.com thing), and the proof that we're authorized to do it is in the pem file.</p> <p>It'll show a confirmation prompt that you have to type yes to, and if all works well, you should see a Welcome to Ubuntu text as shown above, which means that you're now logged into the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image14.png" alt="" /></p> <p>Great going.</p> <p>Now, our next step is to bring the code into our instance and run it. To do that, we'll do a git clone exactly the same way we cloned the repo on our local system, using the git clone command.</p> <p>Once you're done cloning the repo, the next step is to install the dependencies and start the application. Navigate to the repo directory and try running</p> <p><code>npm install</code></p> <p>Did you get an error? Of course you did. You need to install NodeJS on the instance. How do you do that? The answer's in the error itself:</p> <p><code>sudo apt install nodejs</code></p> <p>This will take a few minutes to complete. Once it's done, try running npm install again, and you'll see that this time, you're able to.</p> <p>Finally, the moment of truth - run</p> <p><code>npm run start</code></p> <p>Once you see the application live on localhost:3000 written on the terminal, you'll have to navigate to the server IP to check if it works.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image16.png" alt="" /></p> <p>This IP can be found in the AWS instance details - the Public IPv4 address. Copy that, paste it into a browser tab, and add :3000 after it.</p> <p>If the application worked correctly, you should be able to see the same screen that you saw locally on your machine.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image8.png" alt="" /></p> <h2 id="setting-up-cloudwatch" tabindex="-1">Setting up cloudwatch<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#setting-up-cloudwatch">#</a></h2> <p>Now that you have a working application, we'll set up Cloudwatch.</p> <p>Go to the search bar and type Cloudwatch. You'll see the service option come up.</p> <p>Click on it, and you'll be taken to the Cloudwatch home page. Look at the navigation tab carefully - it has options for Logs, events, metrics, dashboards and so on.</p> <p>Click on the Create dashboard button, and give it a name of your choice.</p> <p>Next, you'll be prompted for the widget type you want to add - line graph/cumulative/alarm, etc.</p> <p>We'll pick the line graph. We can always add more widgets later</p> <p>Next, you'll be asked where this graph's data should come from - the metrics, or the logs? We'll pick metrics, since that's what we want to track</p> <p>Next, you'll get a screen with a list of services you can track. Click on EC2, and use the select all button to have all EC2 metrics show up on the widget.</p> <p>Finally, click Create widget, and you'll be able to see the widget on the dashboard</p> <p>Similarly, you can add another widget for numeric data.</p> <p>Finally, we'll set up an alarm.</p> <p>Click on add new widget and select alarm</p> <p>You'll be redirected to the alarms dashboard</p> <p>Click on Create alarm</p> <p>We'll be asked to select the metric on which we want to set an alarm. Search for and select CPUUtilization.</p> <p>You'll then be asked to specify the conditions for the alarm - we'll set it to fire when CPUUtilization is greater than 0.6. (That's a pretty low number, but since we wish to see the alarm triggered without actually generating that much utilization, we'll keep it this way.)</p>
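<p>For reference, the same alarm can also be created from the AWS CLI with <code>put-metric-alarm</code> - a sketch, where the instance ID and SNS topic ARN are placeholders you'd swap for your own:</p> <pre><code># Alarm when average CPU over a 5-minute period crosses 0.6%
aws cloudwatch put-metric-alarm \
  --alarm-name ec2-cpu-high \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 0.6 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alarm-topic
</code></pre> <p>We'll stick to the console for the rest of this tutorial, though.</p>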
<p>You'll then be prompted to configure notifications - we choose to get notified 'in alarm', that is, when the threshold has been breached</p> <p>Next, we're asked to select an SNS topic. SNS stands for Simple Notification Service, an AWS service used to send alerts to users. We'll create a new topic, and add our email ID as the email endpoint</p> <p>Click on create topic</p> <p>Finally, you'll be asked to enter the name of the alarm. And then, the alarm will be created</p> <p>You'll get a notification at the top stating that you need to verify your subscription to the SNS topic. Go to the email ID you entered on the alarm page, and you'll see a mail from AWS with a confirmation link.</p> <p>If you do not see it, check spam.</p> <p>Once you hit the confirm link, you'll start receiving the notification messages.</p> <p>Now, go to your SSH terminal, and run the following command to drive up the CPU usage.</p> <pre><code>sudo npm i -g pm2
</code></pre> <p>Within a few seconds, you'll see the state of the alarm change to 'In alarm', and you'll have received an email from AWS with the alert</p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#conclusion">#</a></h2> <p>Thus, in this tutorial, you understood why metrics are important, and how we can use AWS's Cloudwatch service to set up and track metrics for your instances. We set up an EC2 instance, and configured Cloudwatch to track metrics on it.</p> <p>You can further expand on this knowledge and track metrics across your projects to drive improvements</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/aws-cloudwatch/#references">#</a></h2> <ul> <li><a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-tutorials.html">AWS Cloudwatch official tutorials</a></li> </ul> </content>
</entry>
<entry>
<title>Setting up load balancing using Nginx</title>
<link href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/"/>
<updated>2021-11-14T00:00:00Z</updated>
<id>https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/</id>
<content type="html"><blockquote> <p>This post has been written in collaboration with <a href="https://backtobackswe.com/">BacktoBackSWE.com</a>, a portal for interview preparation.</p> </blockquote> <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#overview">#</a></h2> <p>In this tutorial, we'd be understanding some core concepts of load balancing - what it is, why we need it using a practical example. We'll then be using setting up three server instances using AWS EC2. We'll then understand what Nginx is, and configure it on the servers so that one of them acts as a load balancer and directs requests to the other two</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#prerequisites">#</a></h2> <p>A basic understanding of AWS will be helpful - what is an instance, what is SSH etc. You'll need an AWS account to set up the servers. If you don't have one, you'll have to sign up on https://aws.amazon.com. You'll be asked for Credit/Debit card details, but as long as you stick to the instructions in this tutorial, you won't be charged.</p> <h2 id="introduction-to-load-balancing" tabindex="-1">Introduction to load balancing<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#introduction-to-load-balancing">#</a></h2> <p>Very few things in software engineering sound like they are. Fortunately, load balancing is one of them. Let's consider Uber - an application that sees varying loads in a day based on the time of day - if it's rush hour, the application will be overloaded with requests from the thousands of folks who need to get to their offices on time. In contrast, in the middle of the night, the number of requests will be way lesser.</p> <p>To handle such scenarios, what does Uber do? They keep multiple servers - each with the same application as their sister server, and all of these sister servers are connected to a main load balancer, not directly to the outside world. Now, when the requests for booking a ride come in, they go to the load balancer, which redirects the requests to any of the sister servers. The LB also keeps track of how many requests are being processed by each server, so that any one server doesn't get overwhelmed and die of exhaustion, while the others sit around swatting flies. This way, the 'load' - the number of requests coming in, gets 'balanced' across the servers, and thus, allows all users to have a smooth experience.</p> <p>That's the core concept of load balancing.</p> <h2 id="introduction-to-aws-hosting-services-and-ec2" tabindex="-1">Introduction to AWS hosting services and EC2<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#introduction-to-aws-hosting-services-and-ec2">#</a></h2> <p>AWS isn’t something you’re new to, or you won’t be reading this tutorial, but a one liner for it is that it’s a cloud hosting solutions provider by Amazon that allows you to host, manage and scale applications. For the sake of this tutorial, AWS will provide you the remote server where your React app will eventually run. The server itself will be located in some Amazon Data center, but you’d be able to access it remotely from your PC via a set of commands. We’ll be using the EC2 service of AWS. 
EC2 stands for Elastic Compute Cloud, and it does what we described above - lets you access a remote server and host applications on it</p> <h2 id="setting-up-an-aws-ec2-instance" tabindex="-1">Setting up an AWS EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#setting-up-an-aws-ec2-instance">#</a></h2> <p>Next, let's set up the remote EC2 server instances. You'll need an AWS account for the same. If you don't already have one, you'd need to create it. Remember, it'll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged for it.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. You'll get a confirmatory mail once your account is set up and ready.</p> <p>Once you log in to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/image2.png" alt="" /></p> <p>Click on the blue 'Launch a virtual machine' line, and you'll be taken to the EC2 setup screen, wherein you'd have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you'd be using to host your application, including the OS configuration - Linux, Ubuntu, Windows etc. If you have been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We'll be going with Ubuntu server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn't have the 'Free tier eligible' tag, otherwise, you'll be having to sell off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image5.png" alt="" /></p> <p>The next step is choosing an instance type. This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we'll pick the t2.micro instance type, which is the only one available in the free tier. You'll need larger ones as your application size and requirements in RAM or processing speed increase. In case you're not clear on any of the column fields, click the information icon next to the headings to get a description of what it means.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you're asked for the number of server instances you wish to create, and some properties regarding them. We'll be going with 3 instances - 2 as server instances, and the third as a load balancer. They'll be identical copies of each other for now, until we configure one of them.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/aws-multiple.PNG" alt="" /></p> <p>Next, click on Add storage</p> <p>As the name suggests, storage refers to the amount of storage on our server. Note that this isn't the storage you'd consider for storing databases. This is temporary storage that will last only as long as the instance lasts, and thus can be used for things like caching. A size of 8GB, which is part of the free tier and is the default, suffices for our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image15.png" alt="" /></p> <p>Next, we'd be adding a tag for our instances. It is a key:value pair that describes an instance.
Tags aren't strictly necessary when you have just a handful of instances, but when you are working with many instances and instance volumes, as will be the case when the application scales, they are used to group, sort and manage these instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image6.png" alt="" /></p> <p>Next, we'll be adding a security group to our instances. An SG is practically a firewall for your instance, restricting the traffic that can come in and what ports it can access, called inbound, and the traffic that can go out, called outbound. There are further options to restrict the traffic based on IP. For instance, your application will run on port 3000, and thus, that's a port you'd want all your users to be able to access. Compare that to a Postgres database service running on port 5432. You don't want anyone else but you meddling with that, so you'll restrict the IP of that port to only you.</p> <p>Create a new security group. Next, we have to add the rules for the group, describing what ports are accessible to the outside world, and who they are accessible to. Note that outbound traffic has no restrictions by default, meaning that your application can send a request anywhere without any restriction from the SG, unless you choose to restrict it. As for inbound, we'll first add HTTP on port 80 and HTTPS on port 443. Next, we'll add an SSH rule for port 22. SSH stands for Secure Socket Shell, and will allow you to connect to your instance, as we'll soon see in the coming section.</p> <p>For simplicity, we'll keep the sources of all of those at 'anywhere'. Ideally, SSH should be limited only to those you want to allow to connect to your instance, but for the sake of the tutorial, we'll keep it at anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/security-group-for-load-balancer.PNG" alt="" /></p> <p>Once the rules are set, click on Review and Launch. You'll be shown the configurations you've selected, to ensure you didn't make a mistake anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/review-and-launch-load-balancer.PNG" alt="" /></p> <p>Once you hit launch, you'll be asked to create/select a key pair. As the name suggests, it's a pair of keys - one held by AWS, and the other by you - that acts as a sort of password for you to connect to your instance. Anyone wishing to SSH into this instance must have access to this key file, or they won't be able to.</p> <p>The file contains an RSA-encrypted key, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It's recommended that you download the .pem key file to the C:/Users/Home directory on Windows (/home/usr or similar for Linux and Mac), to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image10.png" alt="" /></p> <p>Once the file is downloaded, you'll get a prompt that your instances are starting, and after a few minutes, they'll be started. Your EC2 home page should look like this (three running instances - ignore the fourth terminated one you can see here, it's an old one):</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-three-instances-running.PNG" alt="" /></p> <p>For easier understanding, let's rename our instances.
If you hover over their names, you'll see a pencil icon - click it to rename the instances Server-A, Server-B and Load-Balancer, like so:</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-renamed-servers.PNG" alt="" /></p> <p>Now that our instances are running, we have to connect to each one of them. We'll connect to them via SSH on the command line, the terminal. For easy access, we'll stay connected to all three of them via three separate terminals</p> <p>Select one of the instances, and click on Connect. You'll be taken to another page.</p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we'll be using the terminal to connect to your instance (remote server). For that, open a new terminal as administrator (superuser or sudo for Linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we'll run the <code>chmod 400 keyfilename.pem</code> command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you'll lose SSH access to that instance forever, and you'll have to recreate the instance, since you won't get the .pem file to download again.</p> <p>And once you're done with that, it's time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the <code>ssh -i</code> one</p> <p>It means that we're ssh-ing into the instance defined by the DNS (the .amazonaws.com thing), and the proof that we're authorized to do it is in the pem file.</p> <p>It'll show a confirmation prompt that you have to type yes to, and if all works well, you should see a Welcome to Ubuntu text as shown above, which means that you're now logged into the instance.</p> <p>Repeat the exact same process for the other two servers in two separate command prompts</p> <p>If all goes well, you should have the three terminals open, looking like this</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-three-cmds.PNG" alt="" /></p> <p>Great going.</p> <p>Now, we'll install Nginx onto each of the three servers, to set up the load balancing</p> <h2 id="intro-to-nginx" tabindex="-1">Intro to Nginx<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#intro-to-nginx">#</a></h2> <p>Nginx is a lot of things. Primarily, it's a web server - it takes requests for applications hosted on it, and returns the corresponding files in response to the requests. What does it look like? It's essentially software that you download and set up on a machine. Once its configuration is set up, the host machine can accept incoming requests, process them, and send out the outputs.</p> <p>This request-response ability of Nginx can be put to other uses as well - such as load balancing, reverse proxying, and so on. Load balancing is what we're going to use it for in this tutorial.</p> <p>Since Nginx has the ability to accept requests, we can also configure it to accept requests and, based on preset rules, direct those requests to other Nginx servers.</p> <p>See the reason for the three servers now? Each of those will have Nginx set up on them, and thus, all of them can accept incoming requests and return the corresponding responses.
<p>Repeat the exact same process for the other two servers, in two separate command prompts.</p> <p>If all goes well, you should have the three terminals open, looking like this</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-three-cmds.PNG" alt="" /></p> <p>Great going.</p> <p>Now, we'll install Nginx on each of the three servers, to let us load balance.</p> <h2 id="intro-to-nginx" tabindex="-1">Intro to Nginx<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#intro-to-nginx">#</a></h2> <p>Nginx is a lot of things. Primarily, it's a web server - it takes requests for applications hosted on it, and returns the corresponding files in response. What does it look like? It's essentially software that you download and set up on a machine. Once its configuration is in place, the host machine can accept incoming requests, process them, and send out responses.</p> <p>This request-response ability of Nginx can be put to other uses as well - such as load balancing, reverse proxying, and so on. Load balancing is what we're going to use it for in this tutorial.</p> <p>Since Nginx has the ability to accept requests, we can also configure it to accept requests and, based on preset rules, direct those requests to other Nginx servers.</p> <p>See the reason for the three servers now? Each of them will have Nginx set up, and thus all of them can accept incoming requests and return the corresponding responses. We'll configure one of them to work as a load balancer, so that all it does is accept the traffic and redirect it to one of the two other servers.</p> <p>Now that we're clear on the theory, let's see how we can set up our servers for the task.</p> <h2 id="configuring-the-servers" tabindex="-1">Configuring the servers<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#configuring-the-servers">#</a></h2> <p>Go to the Server A command prompt, and type the following command</p> <pre><code>sudo apt-get update
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-apt-get-update.PNG" alt="" /></p> <p>Once that's done, this command:</p> <pre><code>sudo apt-get install nginx
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-install-nginx.PNG" alt="" /></p> <p>Now, go to the EC2 instance dashboard, select Server A, copy its public IPv4 DNS from the details below (remember, copy it - clicking the URL directly might lead to unexpected errors) and paste it into a new browser window.</p> <p>You should see a plain HTML page like so:</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-public-dns-home-page.PNG" alt="" /></p> <p>Repeat the exact same procedure for Server B and the Load Balancer, and ensure that you see the Welcome to nginx page on the public DNS links for both of them as well.</p> <p>Next, let's edit this page so that we can identify which server the page is on just by looking at it.</p> <p>As you might've guessed, the content comes from a simple index.html page that ships with the Nginx installation.</p> <p>In the terminal for Server B, we'll go into the directory that houses the page, using the following command:</p> <pre><code>cd /var/www/html
</code></pre> <p>Type</p> <pre><code>ls -l
</code></pre> <p>to list the files inside the directory, and sure enough, you'll see a file named something like <code>index.nginx-debian.html</code> (the <code>nginx-debian</code> part tells us we have the Debian build of Nginx installed - Debian is a Linux distribution, like Ubuntu and Fedora).</p> <p>This is the file whose contents we'll edit to customize the page for the server we're on.</p> <p>Type</p> <pre><code>sudo nano index.nginx-debian.html
</code></pre> <p>which will open the file in the Nano editor - a terminal text editor for Ubuntu. And sure enough, you can see the Welcome to nginx content in the file, the same content you saw on the public DNS.</p> <p>Replace the content of the file like this (for Server B):</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-nano-server-B.PNG" alt="" /></p> <p>Once that's done, press Ctrl + X to exit the editor. The terminal will ask whether you want to save the file - type Y and hit Enter to return to the terminal.</p> <p>Repeat the exact same process for Server A.</p>
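<p>If you'd rather skip the editor, the same edit can be made in one line from the shell. A small sketch - the heading text is just a placeholder to change per server:</p> <pre><code># Overwrite the default page with a line identifying this server (Server B here)
echo "&lt;h1&gt;Server B&lt;/h1&gt;" | sudo tee /var/www/html/index.nginx-debian.html
</code></pre>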
<h2 id="configuring-load-balancer" tabindex="-1">Configuring load balancer<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#configuring-load-balancer">#</a></h2> <p>Now comes the main configuration change - configuring the load balancer to manage the requests going to A and B by routing them through it. Based on the ratio we decide, x% of the requests will go to Server A, and the rest to B.</p> <p>This configuration is done in the nginx.conf file. To go there</p> <pre><code>cd /etc/nginx
</code></pre> <p>Then, to open the file</p> <pre><code>sudo nano nginx.conf
</code></pre> <p>Do NOT forget the <code>sudo</code>, since you'd otherwise not be able to save the file after editing it - editing a configuration file requires superuser permission.</p> <p>You'll see some pre-written content in the file already. Clear all of it, and paste the following content in there (note the empty <code>events</code> block - Nginx refuses to start without one):</p> <pre><code>events {}

http {
    upstream myapp {
        server &lt;Server_1_Address&gt; weight=1;
        server &lt;Server_2_Address&gt; weight=1;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp;
        }
    }
}
</code></pre> <p>And replace &lt;Server_1_Address&gt; with the public IPv4 address of Server A, and similarly for B.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-nano-conf-final.PNG" alt="" /></p> <p>Since we updated the configuration file, we need to restart Nginx, which we do with this command:</p> <pre><code>sudo systemctl restart nginx
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-restart.PNG" alt="" /></p> <p>Note that we didn't have to restart the service after updating the index.html file, since editing that file didn't change any Nginx configuration.</p> <p>Now, if you go to the public DNS of the Load Balancer and refresh it, you'll see Server A. Refresh again - Server B - and it alternates each time.</p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-done.PNG" alt="" /></p> <p><img src="https://blog.dkpathak.in/img/scalex/load-balancing/nginx-done-2.PNG" alt="" /></p>
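<p>You can also watch the alternation from your own terminal. A quick sketch - the DNS is a placeholder for your load balancer's public DNS, and it assumes your index pages contain the text 'Server A' and 'Server B':</p> <pre><code># Hit the load balancer six times and print which server answered each request
for i in 1 2 3 4 5 6; do
  curl -s http://ec2-12-34-56-78.compute-1.amazonaws.com/ | grep -o "Server [AB]"
done
</code></pre>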
<p>So, what just happened? And what's all the gobbledygook we wrote in the conf file?</p> <p>The outer <code>http {}</code> block reflects the type of requests we'll be accepting - HTTP requests. Upstream means that requests will be sent FROM the load balancer to the other servers. Which other servers? The servers defined inside that block, identified by their IP addresses. 'myapp' is the name of this group of servers. We then have the server addresses, and a weight for each. What do the weights represent? The ratio of the requests - right now it's 1:1, which is why we see requests going to A and B alternately. You can tweak the weights to see the corresponding changes. In real life, some servers are larger and can handle more requests, and are thus allotted more weight.</p> <p>The <code>server {}</code> block gives the port number the requests should be listened for on (80 - the HTTP port). The remaining line is the most crucial - it essentially says: whenever you encounter the route '/', forward the request to http://myapp, aka our server group. That one line is responsible for directing requests to the respective servers.</p> <p>And this is how we've successfully set up a load balancing system using three AWS servers.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-load-balancing-using-nginx/#references">#</a></h2> <ul> <li> <p><a href="https://developer.mozilla.org/en-US/docs/Learn/Common_questions/What_is_a_web_server">What is a web server</a></p> </li> <li> <p><a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass">Proxy_pass</a></p> </li> </ul> </content>
</entry>
<entry>
<title>Intro to Server security</title>
<link href="https://blog.dkpathak.in/intro-to-server-security/"/>
<updated>2021-11-21T00:00:00Z</updated>
<id>https://blog.dkpathak.in/intro-to-server-security/</id>
<content type="html"><blockquote> <p>This post has been written in collaboration with <a href="https://backtobackswe.com/">BacktoBackSWE.com</a>, a portal for interview preparation.</p> </blockquote> <h2 id="introduction" tabindex="-1">Introduction<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#introduction">#</a></h2> <p>None of us would want an attack on our servers - neither us, who actually own the applications being run on the servers and whose bread and butter depend on the application running smoothly, nor the cloud provider - AWS, Azure, GCP, who actually own the server and whose bread and butter depend on our continuing to use the server.</p> <p>However, server attacks, breaches and data thefts have been as old as the concept of shared servers itself. And interestingly, the job of ensuring security is equally distributed between the cloud provider, and the user.</p> <p>Why, you ask. The cloud provider owns the server, and therefore, it ought to be their responsibility to ensure its security, no? If you open a locker in a bank, they don't ask you to provide a security guard for the safety of your locker,do they? That's their responsibility.</p> <p>However, the bank does ask you to clearly specify the owners and expect you to remember your details every time you wish to visit your locker. If you end up revealing your details to a thief, who can then access your locker, it's not really the bank's fault, is it? Just like that, AWS does have general firewalls and gatekeepers that are usually meant to keep 'unauthorized' requests out. However, if you end up authorizing a client IP, the firewalls and gatekeepers of AWS will have no choice but to let it through. Thus, it's what AWS calls a 'shared responsibility model', with clearly defined areas which AWS secures, and the others, that the user does.</p> <p>In this tutorial, we'll be looking at some of the options AWS provides us to ensure security of the servers we rent.</p> <h2 id="1-users-responsibilities" tabindex="-1">1. User's responsibilities<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#1-users-responsibilities">#</a></h2> <p>The facets of security that the user is responsible for, include :</p> <p>A. Control network access : Control what requests can come to the server, from what sets of IPs can make requests, and how they can do it(ports). The concept of security groups, which we'll also be doing practically, falls here.</p> <p>B. Credential management : Who all have the credentials to connect to your server, such as the ..pem file, the access to the private IP of the server, and so on.</p> <p>C. Server OS updates : What security and critical software updates should be allowed onto the server, how frequently, and what trusted sources.</p> <p>D. IAM roles : IAM stands for Identity and Access Management, and is mainly useful when different people are responsible for different sets of services on AWS. For instance, the you want to restrict EC2 connection to only a select few, but wish to allow RDS access to some other members of the database team - you can configure that with IAM.</p> <p>We'll be getting a hands on understanding of A and D, as well as understanding the concepts behind some of the other security practices AWS encourages.</p> <h2 id="2-security-groups" tabindex="-1">2. 
<h2 id="2-security-groups" tabindex="-1">2. Security groups<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#2-security-groups">#</a></h2> <p>We start with this topic, as it's the one most commonly configured by new cloud users working with AWS EC2.</p> <p>A security group is a firewall for incoming and outgoing traffic on the server. You can configure it to specify the protocols (SSH, TCP, HTTP, HTTPS, etc.) and corresponding ports you wish to allow traffic on, and which IPs you wish to allow traffic from. These are called inbound rules. Similarly, there are outbound rules - they define what your server is allowed to reach. Outbound is usually kept open, since you're mainly concerned with what comes into the server, not what goes out.</p> <p>To understand the role of a security group better, we'll provision an EC2 instance, launch an application on it, and customize the security group to allow access to it. If you already have an EC2 instance running, you may skip the next section.</p> <h2 id="setting-up-an-aws-ec2-instance" tabindex="-1">Setting up an AWS EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#setting-up-an-aws-ec2-instance">#</a></h2> <p>As said before, you'll need an AWS account for this. If you don't already have one, you'd need to create it. Remember, it'll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. You'll get a confirmation mail once your account is set up and ready.</p> <p>Once you log in to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/image2.png" alt="" /></p> <p>Click on the blue 'Launch a virtual machine' link, and you'll be taken to the EC2 setup screen, where you have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you'll be using to host your application, including the OS - Linux, Ubuntu, Windows, etc. If you've been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We'll be going with Ubuntu Server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn't have the 'Free tier eligible' tag; otherwise, you'll be selling off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image5.png" alt="" /></p> <p>The next step is choosing an instance type. This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we'll pick the t2.micro instance type, which is the only one available in the free tier.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you're asked for the number of server instances you wish to create, and some properties for them. We only need one server instance. The rest of the properties are auto-filled based on the configuration selected in earlier steps and/or default values, and should thus be kept as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image3.png" alt="" /></p> <p>Next, click on Add Storage</p> <p>As the name suggests, storage refers to the amount of storage on our server.
Note that this isn't the storage you'd consider for storing databases. This is temporary storage that lasts only as long as the instance does, and can thus be used for things like caching. A size of 8GB, which is part of the free tier and is the default, suffices for our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image15.png" alt="" /></p> <p>Next, we'll be adding a tag for our instance. A tag is a key:value pair that describes an instance. Since we only have a single instance right now, it is not particularly useful, but when you're working with many instances and volumes, as you will be once the application scales, tags are used to group, sort and manage those instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image6.png" alt="" /></p> <p>Next comes the security group option. Do not edit anything in there for now. We'll edit it later.</p> <p>Once that's done, click on Review and Launch. You'll be shown the configurations you've selected, so you can make sure you didn't make a mistake anywhere. Once you hit Launch, you'll be asked to create/select a key pair. As the name suggests, it's a pair of keys - one held by AWS, and the other by you - that acts as a sort of password for connecting to your instance. Anyone wishing to SSH into this instance must have access to this key file, or they won't be able to connect.</p> <p>The file contains an RSA private key, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It's recommended that you download the .pem key file to the C:/Users/Home directory on Windows (/home/usr or similar for Linux and Mac), to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image10.png" alt="" /></p> <p>Once the file is downloaded, you'll get a prompt that your instance is starting, and after a few minutes, it will be up. Your EC2 home page should look like this. Note the Name: Main (the tag) and the instance type t2.micro that we selected when setting up the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image9.png" alt="" /></p> <p>Next, select the instance, and click on Connect on the top bar. It'll open this page:</p> <p><img src="https://blog.dkpathak.in/img/scalex/image1.png" alt="" /></p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we'll be using the terminal to connect to the instance (the remote server). For that, open a new terminal as administrator (superuser or sudo for Linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we'll run the chmod 400 keyfilename.pem command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you'll lose SSH access to that instance forever and will have to recreate the instance, since you won't get the .pem file to download again.</p> <p>And once you're done with that, it's time for the high jump - connecting via a simple command to a remote computer thousands of miles away.
The command to run will be on the AWS page as shown above - the <code>ssh -i …</code> one</p> <p>It means that we're SSH-ing into the instance identified by the DNS (the .amazonaws.com string), and the proof that we're authorized to do so is in the .pem file.</p> <p>It'll show a confirmation prompt that you have to type yes to, and if all works well, you should see a Welcome to Ubuntu text as shown above, which means that you're now logged into the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image14.png" alt="" /></p> <p>Great going.</p> <h2 id="setting-up-react-application-on-ec2" tabindex="-1">Setting up React application on EC2<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#setting-up-react-application-on-ec2">#</a></h2> <p>The next step is to set up a sample application on the instance. We'll be using a simple React todo-list app for this. We just have to clone it onto the instance, as we would on our local laptop/PC.</p> <pre><code>git clone https://github.com/gagangaur/React-TODO-App.git
</code></pre> <p>You need not know any React for this, since we're only focusing on the security aspects.</p> <p>Once it's cloned, we'll have to install npm, and then set up the dependencies for the project.</p> <p>The commands are</p> <pre><code>sudo apt-get update
sudo apt-get install npm
</code></pre> <pre><code>cd React-TODO-App
npm install
</code></pre> <p>Finally, once the dependencies are installed, we run the application using <code>npm run start</code>, and if you've followed all the steps perfectly, you should see the app running on port 5000</p> <p><img src="https://blog.dkpathak.in/img/scalex/image16.png" alt="" /></p> <p>So now, ideally, you should be able to see the app running on the server, right? We access the instance using the public IPv4 address. Copy it from the EC2 console home, paste it into the address bar, and add a :5000 at the end to indicate the port number. Did the application load?</p> <p>Unfortunately not.</p> <p>The reason is the security group. Remember, we hadn't made any change to the default security group settings when setting up the instance. And by default, the inbound rules restrict everything but SSH access on port 22 - which is what we used to connect to the instance with the <code>ssh -i</code> command. To be able to access the running application from our browser, we need to allow access to the port the application is running on, 5000, from the outside world.</p> <p>To do that, go to AWS. In the left navigation pane, scroll down to find the "Network and Security" section, and within it, Security Groups. Open it, and select the security group that was created when we set up the instance (not the default one).</p> <p>Below, go to the Inbound rules tab, and hit the Edit inbound rules button.</p> <p>Now, put in a Custom TCP rule for port 5000, and allow access from 'anywhere'. Note: to avoid issues arising from DHCP (read <a href="https://www.quora.com/Does-my-IP-address-constantly-change-or-stay-the-same">this</a> for more info), we're allowing access from anywhere, but you can restrict specific ports to specific IPs.</p> <p>Once that's done, save the rules, come back to the public IP page, and refresh. If you didn't mess up, you should be able to see the application loading on port 5000 now!</p>
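<p>You can also check the port from your own machine's terminal with curl - a quick sketch, where the IP is a placeholder for your instance's public IPv4 address:</p> <pre><code># Fetch only the response headers; an HTTP 200 means the port is open and the app is up
curl -I http://12.34.56.78:5000
</code></pre>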
<p>This is one of the most important security measures put in place by AWS to ensure you're in charge of what traffic you allow in and out of the server.</p> <h2 id="iam---identity-and-access-management" tabindex="-1">IAM - Identity and Access Management<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#iam---identity-and-access-management">#</a></h2> <p>In professional settings, you'll be working in a large team, with multiple people holding different responsibilities. It's neither required nor safe to grant every user complete access to everything on your instances. For instance, folks on the database team only deal with the RDS services, and have little use for the Lambda services.</p> <p>To manage the permissions of users, we use a service called IAM.</p> <p>Go to https://console.aws.amazon.com/iamv2/home#/home</p> <p>You should see a screen similar to this. This is the IAM home page.</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-1.PNG" alt="" /></p> <p>Select the Users option from the left navigation tab, and it'll show the existing list of users - empty initially. Let's create a user.</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-2.PNG" alt="" /></p> <p>In the AWS access type section, select the password option, and add a custom password of your choice - we're doing this for ease of access. Leave the rest as it is, and click the Next: Permissions button.</p> <p>You'll then see an option to add the user to a group. As the name suggests, a user group is a set of users who'll have similar permissions and access - all members of the database team, say, with permission to view and edit the database. We won't bother with creating a user group in this tutorial.</p> <p>Click on the Attach existing policies directly tab.</p> <p>Here, we have to specify the permissions we wish to grant to this user.</p> <p>We'll add PowerUserAccess, since we want this user to have broad control of the EC2 instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-5-power-user-access.PNG" alt="" /></p> <p>In the Set permissions boundary section, leave everything unchanged. Click on the Next: Tags button.</p> <p>Add a tag like so</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-6.PNG" alt="" /></p> <p>Click on Next: Review, take a scan through all the options you've chosen, and finally hit Create user to see a screen like this</p> <p><img src="https://blog.dkpathak.in/img/scalex/security/iam-7.PNG" alt="" /></p> <p>Woohoo! You have successfully created a new user. You can download the CSV containing the user details, or mail the access details to the person you wish to assign them to. That user will then be able to access only the information she/he is allotted, without you having to share your root AWS account password. Sweet, no?</p>
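<p>For reference, the same user can be created from the AWS CLI. A minimal sketch - the user name and password here are hypothetical placeholders, and it assumes your CLI is configured with rights to manage IAM:</p> <pre><code># Create the user
aws iam create-user --user-name demo-user

# Give them a console password (use a strong one of your own)
aws iam create-login-profile --user-name demo-user --password 'S0me-Str0ng-Passw0rd!'

# Attach the managed PowerUserAccess policy
aws iam attach-user-policy --user-name demo-user --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
</code></pre>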
<h2 id="network-isolation" tabindex="-1">Network Isolation<a class="tdbc-anchor" href="https://blog.dkpathak.in/intro-to-server-security/#network-isolation">#</a></h2> <p>A virtual private cloud (VPC) is a virtual network in your own logically isolated area of the AWS Cloud. Use separate VPCs to isolate infrastructure by workload or organizational entity.</p> <p>A subnet is a range of IP addresses in a VPC. When you launch an instance, you launch it into a subnet in your VPC. Use subnets to isolate the tiers of your application (for example, web, application, and database) within a single VPC. Use private subnets for your instances if they should not be accessed directly from the internet.</p>
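<p>As a sketch of what this looks like in practice, here's how a VPC and a subnet inside it could be created from the AWS CLI - the CIDR ranges are arbitrary examples, and the vpc-id placeholder must be replaced with the VpcId returned by the first command:</p> <pre><code># Create a VPC with a /16 address range
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Carve out a /24 subnet inside it (substitute the VpcId from the output above)
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24
</code></pre>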
<p>These were some of the major security features that AWS allows us to leverage and customize to ensure server security.</p> </content>
</entry>
<entry>
<title>Setting up a NodeJS service for production</title>
<link href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/"/>
<updated>2021-11-16T00:00:00Z</updated>
<id>https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/</id>
<content type="html"><blockquote> <p>This post has been written in collaboration with <a href="https://backtobackswe.com/">BacktoBackSWE.com</a>, a portal for interview preparation.</p> </blockquote> <h2 id="table-of-contents" tabindex="-1">Table of contents<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#table-of-contents">#</a></h2> <ul> <li> <p>Overview</p> </li> <li> <p>Prerequisites</p> </li> <li> <p>Introduction to Node Express and MongoDB</p> </li> <li> <p>Introducing the application we’ll be using</p> </li> <li> <p>Introduction to AWS and EC2 hosting services</p> </li> <li> <p>Setting up an AWS EC2 instance</p> </li> <li> <p>Cloning Node Express app on server</p> </li> <li> <p>Setting up MongoDB</p> </li> <li> <p>Testing the application so far.</p> </li> <li> <p>Setting up additional packages</p> </li> <li> <p>Setting up monitoring using PM2</p> </li> <li> <p>Conclusion</p> </li> <li> <p>References</p> </li> </ul> <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#overview">#</a></h2> <p>NodeJS is a server side programming framework using JavaScript. Using NodeJS frameworks like Express, you can create backend services quickly and wire them up with the frontend, all in Javascript.</p> <p>We’ll be using a Node-Express application built along the lines of the Zomock application, with a MongoDB database. You’ll get a basic understanding of a Node-Express application, and some things you need to consider while building for production. You’ll than setup a remote server using AWS EC2, similar to how you’d done in the React tutorial. You’ll than set up MongoDB using MongoDB’s cloud offering called Atlas, and connect your Node-Express app to MongoDB. Finally you’ll run your service using PM2 to keep the application running even after you’ve closed down the SSH connection. We conclude with some additional steps you can yourself choose to add to your project, and finally, leave you with some references for further information.</p> <p>Let’s jump in</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#prerequisites">#</a></h2> <p>You’re expected to have a basic understanding of what Node and Express are and how to write simple NodeJS code to start a server. Here is a sample tutorial in case you’re entirely new. You should have a basic idea of Postman, which we’ll be using to check if our service is working as expected.</p> <h2 id="introduction-to-node-express-and-mongodb" tabindex="-1">Introduction to Node, Express and MongoDB<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#introduction-to-node-express-and-mongodb">#</a></h2> <p>NodeJS(or simply Node) is a JavaScript runtime, and a server side framework, meaning that it allows you to create server side applications, and provides you the environment to run them in. Express is a framework based on NodeJS to help you create the endpoints for the application.</p> <p>MongoDB is a NoSQL database that stores data in the form of documents and collections. It is NoSQL, since it doesn't have tables and doesn't enforce a fixed schema across all documents.</p> <p>In case you need further brushing up on any of these, take a look at the links in the last section of the tutorial. 
Note that while we won't be focusing on the development aspect, and will instead be looking at deployment, you're still expected to know the basics to be able to follow some of the concepts we'll be using.</p> <h2 id="introduction-to-the-application-well-be-using" tabindex="-1">Introduction to the application we'll be using<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#introduction-to-the-application-well-be-using">#</a></h2> <p>We'll be using a simple mock Zomato API Express application for this tutorial. The API exposes an endpoint that returns a list of restaurants with details like rating and cost. You can also add restaurants by making a POST request. The application uses Node and Express for the logic, and MongoDB as the database, which we'll set up from scratch in the coming sections.</p> <h2 id="introduction-to-aws-hosting-services-and-ec2" tabindex="-1">Introduction to AWS hosting services and EC2<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#introduction-to-aws-hosting-services-and-ec2">#</a></h2> <p>AWS isn't something you're new to, or you wouldn't be reading this tutorial, but a one-liner for it is that it's a cloud hosting solutions provider from Amazon that allows you to host, manage and scale applications. For the sake of this tutorial, AWS will provide the remote server where your app will eventually run. The server itself will be located in some Amazon data center, but you'll be able to access it remotely from your PC via a set of commands. We'll be using the EC2 service of AWS. EC2 stands for Elastic Compute Cloud, and it does what we described above - lets you access a remote server and use it to host applications</p> <h2 id="setting-up-an-aws-ec2-instance" tabindex="-1">Setting up an AWS EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#setting-up-an-aws-ec2-instance">#</a></h2> <p>Next, let's set up a remote EC2 server instance. As said before, you'll need an AWS account for this. If you don't already have one, you'd need to create it. Remember, it'll ask you for debit/credit card credentials, but as long as you follow the steps in this tutorial, you will not get charged.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. You'll get a confirmation mail once your account is set up and ready.</p> <p>Once you log in to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/image2.png" alt="" /></p> <p>Click on the blue 'Launch a virtual machine' link, and you'll be taken to the EC2 setup screen, where you have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you'll be using to host your application, including the OS - Linux, Ubuntu, Windows, etc. If you've been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We'll be going with Ubuntu Server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn't have the 'Free tier eligible' tag; otherwise, you'll be selling off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image5.png" alt="" /></p> <p>The next step is choosing an instance type.
This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we'll pick the t2.micro instance type, which is the only one available in the free tier. You'll need larger ones as your application's size and its RAM or processing-speed requirements grow. In case any of the column fields is unclear, click the information icon next to the heading to get a description of what it means.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you're asked for the number of server instances you wish to create, and some properties for them. We only need one server instance. The rest of the properties are auto-filled based on the configuration selected in earlier steps and/or default values, and should thus be kept as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image3.png" alt="" /></p> <p>Next, click on Add Storage</p> <p>As the name suggests, storage refers to the amount of storage on our server. Note that this isn't the storage you'd consider for storing databases. This is temporary storage that lasts only as long as the instance does, and can thus be used for things like caching. A size of 8GB, which is part of the free tier and is the default, suffices for our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image15.png" alt="" /></p> <p>Next, we'll be adding a tag for our instance. A tag is a key:value pair that describes an instance. Since we only have a single instance right now, it is not particularly useful, but when you're working with many instances and volumes, as you will be once the application scales, tags are used to group, sort and manage those instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image6.png" alt="" /></p> <p>Next, we'll be adding a security group to our instance. An SG is essentially a firewall for your instance, restricting the traffic that can come in and the ports it can access (called inbound), and the traffic that can go out (called outbound). There are further options to restrict the traffic by IP. For instance, our application will run on port 5000, and thus that's a port you'd want all your users to be able to access. Compare that to a Postgres database service running on port 5432: you don't want anyone but you meddling with that, so you'll restrict that port to your IP only.</p> <p>Create a new security group. Next, we have to add the rules for the group, describing which ports are accessible to the outside world, and who they are accessible to. Note that outbound traffic has no restrictions by default, meaning that your application can send a request anywhere without any restriction from the SG, unless you choose to restrict it. As for inbound, we'll first add HTTP on port 80 and HTTPS on port 443. Next, we'll add an SSH rule for port 22. SSH stands for Secure Shell, and will allow you to connect to your instance, as we'll soon see in the coming section. Finally, we'll add a custom TCP rule for the port our application is going to expose - port 5000.</p> <p>For simplicity, we'll keep the sources of all of those at 'anywhere'. Ideally, SSH should be limited to only those you want to allow to connect to your instance, but for the sake of the tutorial, we'll keep it at anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image17.png" alt="" /></p> <p>Once the rules are set, click on Review and Launch.
You'll be shown the configurations you've selected, so you can make sure you didn't make a mistake anywhere. Once you hit Launch, you'll be asked to create/select a key pair. As the name suggests, it's a pair of keys - one held by AWS, and the other by you - that acts as a sort of password for connecting to your instance. Anyone wishing to SSH into this instance must have access to this key file, or they won't be able to connect.</p> <p>The file contains an RSA private key, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It's recommended that you download the .pem key file to the C:/Users/Home directory on Windows (/home/usr or similar for Linux and Mac), to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image10.png" alt="" /></p> <p>Once the file is downloaded, you'll get a prompt that your instance is starting, and after a few minutes, it will be up. Your EC2 home page should look like this. Note the Name: Main (the tag) and the instance type t2.micro that we selected when setting up the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/image9.png" alt="" /></p> <p>Next, select the instance, and click on Connect on the top bar. It'll open this page:</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/image1.png" alt="" /></p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we'll be using the terminal to connect to the instance (the remote server). For that, open a new terminal as administrator (superuser or sudo for Linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we'll run the chmod 400 keyfilename.pem command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you'll lose SSH access to that instance forever and will have to recreate the instance, since you won't get the .pem file to download again.</p> <p>And once you're done with that, it's time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the <code>ssh -i</code> one</p> <p>It means that we're SSH-ing into the instance identified by the DNS (the .amazonaws.com string), and the proof that we're authorized to do so is in the .pem file.</p> <p>It'll show a confirmation prompt that you have to type yes to, and if all works well, you should see a Welcome to Ubuntu text as shown above, which means that you're now logged into the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/main-node-ssh-connected.PNG" alt="" /></p> <p>Great going.</p> <p>Now, our next step is to bring the code onto our instance and run it. To do that, we'll clone the repo we're working with, using</p> <pre><code>git clone https://github.com/dkp1903/zomock.git
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/zomock-clone.PNG" alt="" /></p> <p>Once it's complete, go to the cloned folder using</p> <pre><code>cd zomock
</code></pre> <p>We'll have to create an additional .env file in the repo. What is this file for? Our app has some configurations and credentials that we'd rather keep secret - things like database passwords, connection URLs and so on. Thus, we need a file where we can store these, and NOT commit that file to version control.
The .env file is the accepted standard for this.</p> <p>In our case, we'll be storing two things - one, the PORT number of our application, and two, the connection URL of our MongoDB database, which includes a database username and password. For now, we'll start with just the port number, and add the database URL once we set up the database in the next section. To create the env file, type</p> <pre><code>nano .env
</code></pre> <p>This will open the env file in the Nano text editor.</p> <p>Add the following line in there:</p> <pre><code>PORT=5000
</code></pre> <p>To save the file, press Ctrl + X. You'll be asked whether you want to save the changes. Enter Y, and the file will be saved and you'll go back to the CLI.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/nano-env.PNG" alt="" /></p> <p>The next step is to install the dependencies.</p> <pre><code>npm install
</code></pre> <p>Did you get an error? Of course you did. You need to install npm on the instance. How do you do that? The answer's in the error itself:</p> <pre><code>sudo apt install npm
</code></pre> <p>If you get an error like this, run <code>sudo apt-get update</code> and then rerun the above command</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/error-1.PNG" alt="" /></p> <p>This will take a few minutes to complete. Once it's done, try running npm install again, and you'll see that this time, you're able to.</p> <p>In case you see an error like this now, or at any time throughout this project, add a sudo before the command you ran (for example, sudo npm install)</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/node-error-sudo.PNG" alt="" /></p> <p>Now, start the application using</p> <pre><code>npm run start
</code></pre> <p>You should see a line saying Server running on port 5000</p> <p>Are we done? Not quite. We still haven't set up the database, so we can't do anything useful with the service yet. Let's resolve that in the next section.</p> <h2 id="setting-up-mongodb" tabindex="-1">Setting up MongoDB<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#setting-up-mongodb">#</a></h2> <p>We'll be using the MongoDB cloud service called Atlas to create the database that our Node-Express service will be interacting with. One of the great advantages of MongoDB is this cloud service, which you can set up, configure and maintain without having to install anything at all anywhere - something not offered the same way by established relational DB systems like Postgres or MySQL.</p> <p>MongoDB has a free tier option, and that's what we'll be using. Remember, you should not be prompted to add your billing details anywhere. If you are, that means you did a step wrong.</p> <p>To get started, go to mongodb.com, and log in/create an account. Follow through the steps to set up your account.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-login.PNG" alt="" /></p> <p>Then, you'll be asked to select a cluster type. Select the free version as shown</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-cluster-free.jfif" alt="" /></p> <p>Next, you'll be asked to customize your cluster details, like the hosting zone. Leave everything unchanged and, ensuring that there's no total cost at the bottom, select Create.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-create.jfif" alt="" /></p> <p>It'll take a minute or two for your cluster to get created.
Once it's ready, you should see a screen like this.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-pre-connect.PNG" alt="" /></p> <p>Take a careful look at the various details shown, such as the R W graph - R and W stand for Reads and Writes respectively, an important metric for gauging the traffic to your DB.</p> <p>The connections graph shows the number of connections to your DB. A connection comes either from an application, as we'll set up, or from the command line, and for practical purposes it represents the number of folks viewing/modifying our database.</p> <p>The in/out graph shows the bytes transferred to/from the database every second.</p> <p>Data size is the size of the database.</p> <p>Now, to establish a connection to the database, we need to do a few things first.</p> <p>Click on Connect next to the cluster name, and you'll be prompted to add a connection IP address. This determines which traffic we want to allow to connect to the database. Remember, in a production application, you dare not give direct database access to anyone and everyone, or you might end up losing/leaking thousands of users' data. However, for ease of access, we'll start with the 'Allow access from anywhere' option, since we'll be connecting via an EC2 instance, which has a dynamic IP, and you'd otherwise have to keep updating the rules every now and then.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-add-ip.PNG" alt="" /></p> <p>Click on Add IP Address</p> <p>Next, you have to create a database user. You can pick any username and password (make sure you remember them).</p> <p>Next, you'll be asked to choose a connection method - via the shell (CLI), Compass (GUI), or via an application, which is the one we'll use. You'll then be asked to pick a driver version, and be shown a connection string. Ensure that the driver is Node.js, version 4.0 or later. Copy the connection string.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/mongo-connection-url.PNG" alt="" /></p> <p>Now, go to the .env file we'd created on our server instance. Add a line there (no extra spaces, or you might face unexpected errors):</p> <pre><code>MONGO_URL=&lt;the-string-you-had-copied&gt;
</code></pre> <p>And replace the username and password with the credentials of the database user you created.</p>
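<p>After this step, the whole .env file should look something like the sketch below - the host, credentials and database name are made-up placeholders; the general mongodb+srv shape is what Atlas hands you:</p> <pre><code>PORT=5000
MONGO_URL=mongodb+srv://myuser:mypassword@cluster0.abcde.mongodb.net/zomock?retryWrites=true&amp;w=majority
</code></pre>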
<p>Did you see why we did that? We wish to restrict access to the database, and so the connection string, which is what's used to connect to it, will only be present in a secure local environment and will not be committed with the rest of the code.</p> <p>With this, you'll finally have added the last requirement to your code. Now, we can run the application using</p> <p><code>npm run start</code></p> <p>Now, in addition to the 'Server running on port 5000', you should see an additional</p> <p>'Connected to database' message as well.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/npm-run-start.PNG" alt="" /></p> <p>If you don't, you need to recheck your connection string.</p> <h2 id="testing-the-application-done-so-far" tabindex="-1">Testing the application done so far<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#testing-the-application-done-so-far">#</a></h2> <p>Now, we need to test whether the application is actually working. Since it's a backend-only service without a frontend, we need an API testing tool. We'll be going with Postman.</p> <p>Go to postman.com. If it's your first time with Postman, there'll be some setup steps.</p> <p>If we were developing this on our local laptops/PCs, we'd have used a localhost:5000 link. However, since it's on a remote server, we need to find the IP address of the server.</p> <p>This IP can be found in the AWS instance details - Public IPv4 address.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/image11.PNG" alt="" /></p> <p>Paste the IP into the request field on Postman. Add an <code>http://</code> before the IP and a <code>:5000</code> after.</p> <p>Now, if you check the Readme of the repo, hitting the /restaurants endpoint should retrieve the list of restaurants present in the DB. Add a <code>/restaurants</code> after the <code>:5000</code> and hit Send.</p> <p>If all works well, you should see an empty array <code>[]</code> in the response tab, since there's no data in the database yet. If you get an error like connection refused or request timed out, recheck the IP.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/postman-get.PNG" alt="" /></p> <p>Now, let's try adding some data to the DB. Another look at the Readme file shows that making a POST request to the endpoint <code>/restaurants/add</code> creates a restaurant. So update the endpoint, and add the following restaurant data in the body:</p> <pre><code>{
  &quot;_id&quot;: &quot;6073ccae8bab295faebb5718&quot;,
  &quot;name&quot;: &quot;Kiran Plaza&quot;,
  &quot;rating&quot;: &quot;5&quot;,
  &quot;image&quot;: &quot;https://i.ibb.co/ZTHr2cM/res-sample.jpg&quot;,
  &quot;cost&quot;: &quot;350&quot;,
  &quot;numOfReviews&quot;: &quot;4380&quot;,
  &quot;discount&quot;: &quot;40%&quot;,
  &quot;spec&quot;: &quot;Chinese&quot;,
  &quot;area&quot;: &quot;Koramangala&quot;
}
</code></pre> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/add-res.PNG" alt="" /></p> <p>Now, rerun the GET request, and you should see this restaurant being returned.</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/get-restaurants-2.PNG" alt="" /></p>
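<p>If you'd rather stay in the terminal, the same two checks can be made with curl. A sketch - the IP is a placeholder for your instance's public IPv4 address, and the endpoints are the ones the repo's Readme describes:</p> <pre><code># List restaurants (should return [] on a fresh database)
curl http://12.34.56.78:5000/restaurants

# Add a restaurant with a POST request
curl -X POST -H "Content-Type: application/json" \
  -d '{"name": "Kiran Plaza", "rating": "5", "cost": "350"}' \
  http://12.34.56.78:5000/restaurants/add
</code></pre>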
<h2 id="setting-up-additional-packages" tabindex="-1">Setting up additional packages<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#setting-up-additional-packages">#</a></h2> <p>Great, so you got it all running on a server. But we're not done. What happens if you close the terminal? Try doing just that, and see if your GET requests still work.</p> <p>As expected, they won't. And that doesn't make sense - for a server to stay up, you shouldn't have to keep a dedicated computer with a terminal open all day; there'd be no point in renting a remote server then.</p> <p>Fortunately, there's a simple npm package that can keep your service running even when your terminal isn't. It's called pm2 (short for Process Manager 2). Apart from ensuring that the server remains up, you can use it to check the status of all your Node processes at any time and figure out which of them is causing an issue, for logs management - tracking the application to see where errors/bugs/incidents, if any, occur - and for metrics such as memory consumed.</p> <p>So, we'll be installing it on our server and then configuring it to start our Node service. Again SSH into the instance using the ssh -i command, go to the project directory, and write</p> <p><code>npm i -g pm2</code></p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/pm2.PNG" alt="" /></p> <p>Note the <code>-g</code> flag. It stands for global, meaning that pm2 will be installed as a global package, not just for our project. This is important, because pm2 is expected to handle restarting the application even if our project's process stops, and a project-level dependency would not be able to do that.</p> <p>Once that's done, we need to start our service using pm2.</p> <p>The command for that is</p> <pre><code>pm2 start zomock/index.js -i max --watch
</code></pre> <p><code>-i max</code> - runs the app in cluster mode, with as many processes as there are CPU cores. Because a single NodeJS process is single-threaded, using all available cores maximizes the performance of the app.</p> <p><code>--watch</code> - allows the app to automatically restart if there are any changes in the directory.</p> <p>Note that the above command should be run from the root (outside of the zomock directory)</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/pm2.PNG" alt="" /></p> <p>Now, if you close the terminal and make a GET request, you'll see that you're still able to get a response.</p> <blockquote> <p>Note: due to an issue with PM2, the production environment is sometimes unable to parse the MongoDB connection string correctly from the .env file. So, in case you get a connection refused issue when making a GET request, declare the Mongo URL as a const in index.js itself, use that constant instead of <code>process.env.MONGO_URL</code>, and you should be good to go</p> </blockquote> <h2 id="monitoring-using-pm2" tabindex="-1">Monitoring using pm2<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#monitoring-using-pm2">#</a></h2> <p>In production environments, we often need to monitor our deployed code for issues/crashes, so they can be resolved quickly. Fortunately, pm2 can help us with that as well.</p> <p>Enter the command <code>pm2 monitor</code> on the terminal.</p> <p>It'll prompt you to sign up for a pm2 account, and once you do, you'll get a URL that holds the metrics dashboard for your application</p> <p>If you go to that URL in the browser, you'll be able to see metrics of your application, like the requests being made, as well as issues and errors. This is extremely advantageous when working with a large number of users</p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/pm2-1.PNG" alt="" /></p> <p><img src="https://blog.dkpathak.in/img/scalex/node-mongo/pm2-2.PNG" alt="" /></p>
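<p>A few everyday pm2 commands are also worth knowing. These run on the instance itself; the process name below assumes pm2's default of naming the app after the script file (index here):</p> <pre><code>pm2 list            # show all managed processes and their status
pm2 logs            # tail the logs of every process
pm2 restart index   # restart a process by name or id
pm2 save            # remember the current process list across restarts
</code></pre>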
<h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#conclusion">#</a></h2> <p>Thus, in this tutorial, you learnt how to deploy a Node-Express based application onto an EC2 server you set up from scratch. You also set up a MongoDB database and connected it to your application. You then ensured that your application continues running even when you close the terminal running the development process. Finally, you learnt some concepts of monitoring, and set up monitoring for your application using PM2.</p> <p>One of the biggest challenges in backend development for production is tracking errors and handling them gracefully. You should research further into how to handle exceptions, and how to catch errors and log them, ensuring that the user has a seamless experience.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-nodejs-service-for-production/#references">#</a></h2> <ul> <li> <p><a href="https://pm2.io/">PM2 docs</a></p> </li> <li> <p><a href="https://stackify.com/node-js-logging/">NodeJS logging</a></p> </li> </ul> </content>
</entry>
<entry>
<title>Setting up a production ready application with React</title>
<link href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/"/>
<updated>2021-11-14T00:00:00Z</updated>
<id>https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/</id>
<content type="html"><blockquote> <p>This post has been written in collaboration with <a href="https://backtobackswe.com/">BacktoBackSWE.com</a>, a portal for interview preparation.</p> </blockquote> <h2 id="table-of-contents-" tabindex="-1">Table of contents :<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#table-of-contents-">#</a></h2> <ul> <li> <p>Overview</p> </li> <li> <p>Prerequisite knowledge</p> </li> <li> <p>Why should you read this tutorial</p> </li> <li> <p>Introduction to React - building an app for production</p> </li> <li> <p>Introduction to AWS EC2</p> </li> <li> <p>Downloading and running the React app source code locally</p> </li> <li> <p>Creating a build</p> </li> <li> <p>Setting up and connecting to a remote EC2 instance</p> </li> <li> <p>Using pm2 to run the app on the instance</p> </li> <li> <p>Additional pointers on scaling and future references</p> </li> </ul> <h2 id="overview" tabindex="-1">Overview<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#overview">#</a></h2> <p>Creating a website on localhost, versus deploying it in a production environment is like comparing a zoo to a forest. There’s way more stuff you need to consider when you’re building for the end user - including scaling, fallbacks, load balancing, security, monitoring, CDNs and so on. In this tutorial, we’ll take our first step into deploying a React application into a production environment and actually seeing it work live, while learning some important concepts that go into ensuring that the app works as expected, in the way. We’ll be using a sample todolist application and deploy it to an AWS EC2 instance. You are free to use the same sample app, or any app of your choice.</p> <h2 id="prerequisites" tabindex="-1">Prerequisites<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#prerequisites">#</a></h2> <p>Since we’re focusing on deploying the application and not creating it, you need not know everything about React. However, you should be aware of the way React works - the concept of virtual dom, how a page is built and populated, and so on. While we’ll be covering some of the concepts in brief in the following section, the react documentation is a good reference point in case you need to refresh on any of the above concepts.</p> <p>You’ll also need to set up an AWS account. The steps we follow will fall within the free tier offering of AWS, but you’d still need a debit/credit card to sign up. However, as long as you follow all the steps correctly, you won’t be charged.</p> <h2 id="why-should-you-read-this-tutorial" tabindex="-1">Why should you read this tutorial<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#why-should-you-read-this-tutorial">#</a></h2> <p>Developing a web app UI is only the base camp - the rest of the trip to the top of Mt Everest is in deploying it to real users, ensuring that traffic is balanced, that any failure is monitored and handled, and that any security vulnerabilities that might compromise user data are caught and remedied.</p> <p>This tutorial will focus on deploying a react application on an EC2 instance. 
Along the way, you'll learn how a React build gets created and rendered, how we set up and connect to a remote server thousands of miles away via a few terminal commands, the concepts of instances and security groups, and how we can set these up in a few clicks on AWS.</p> <p>This knowledge will be critical for you to develop applications built for hundreds of users, which is the aim with which most apps are built.</p> <h2 id="introduction-to-react---building-for-production" tabindex="-1">Introduction to React - building for production<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#introduction-to-react---building-for-production">#</a></h2> <p>You'll most likely be aware of what React is and does: it's a JavaScript library used to create UI components. It uses a JavaScript + HTML-like syntax called JSX. The HTML bit defines the way the UI looks, and the JS populates data and adds functionality to the application. React is the most popular frontend library these days, given that its learning curve is much less steep compared to competitors like Angular or Vue.</p> <p>Your first foray into React development would have started with something like <code>npx create-react-app myapp</code>, a command which bootstraps a sample React application and runs it on localhost:3000. However, when you want to let your users use your app, you can't give them a localhost:3000 link. You need to first 'build' the application using <code>npm run build</code>, which creates a directory called build containing 'minified' (compacted) CSS, JS and HTML pages and static assets.</p> <p>If some of the above concepts sound alien to you, do spend some time understanding how React works under the hood. Some helpful resources are linked in the last section of the tutorial.</p> <h2 id="introduction-to-aws-hosting-services-and-ec2" tabindex="-1">Introduction to AWS hosting services and EC2<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#introduction-to-aws-hosting-services-and-ec2">#</a></h2> <p>Again, AWS isn't something you're new to, or you wouldn't be reading this tutorial, but a one-liner for it is that it's a cloud hosting solutions provider from Amazon that allows you to host, manage and scale applications. For the sake of this tutorial, AWS will provide the remote server where your React app will eventually run. The server itself will be located in some Amazon data center, but you'll be able to access it remotely from your PC via a set of commands. We'll be using the EC2 service of AWS. EC2 stands for Elastic Compute Cloud, and it does what we described above - lets you access a remote server and host applications on it</p> <h2 id="downloading-and-running-the-react-app-locally" tabindex="-1">Downloading and running the React app locally<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#downloading-and-running-the-react-app-locally">#</a></h2> <p>The first step is to get hold of the app we're going to deploy. As we said earlier, you can perform the deployment steps with any React app of your choice, but if you don't have one, or are a nerd and want to follow the instructions down to the letter, clone this repo to your local machine using the following command:</p> <p><code>git clone https://github.com/gagangaur/React-TODO-App.git</code></p> <p>Next, we install the dependencies and run the application locally.
To do that, run :</p> <pre><code>cd React-TODO-App
npm install
npm start</code></pre> <p>This will start the React app on port 3000 - you can check it out by going to http://localhost:3000 on your browser.</p> <h2 id="creating-a-build" tabindex="-1">Creating a build<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#creating-a-build">#</a></h2> <p>What you saw on localhost:3000 was a development version of the application - one that is visible only to you, and as such cannot be displayed to users.</p> <p>We need to create a build of this - a package that we can then use to show the app to the users. Go to the terminal and type</p> <p><code>npm run build</code></p> <p>Once the command runs, you’ll notice that a build directory has been created in your root folder. Go to the file explorer and open it. You’ll see a list of assets like images, as well as a folder called static. Open it to further reveal 2 folders - CSS and JS.</p> <p>The build command has converted the React code into these CSS and JS files, which now complement the index.html file to load the app.</p> <p>Now, how do we view this ‘built’ version of our app? We need to ‘serve’ our static files so that they are the ones that open up in the browser, instead of the development version.</p> <p>To do that, install a package called serve, using</p> <p><code>npm i -g serve</code></p> <p>Once that’s done, run</p> <p><code>serve -s build</code></p> <p>This serves the production build on a local port - note the port number it prints, as it’ll matter again when we deploy.</p> <h2 id="setting-up-an-aws-ec2-instance" tabindex="-1">Setting up an AWS EC2 instance<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#setting-up-an-aws-ec2-instance">#</a></h2> <p>Next, let’s set up a remote EC2 server instance. As said before, you’ll need an AWS account for the same. If you don’t already have one, you’d need to create it. Remember, it’ll ask you for debit/credit card details, but as long as you follow the steps in this tutorial, you will not get charged for it.</p> <p>To set up an AWS account, go to https://aws.amazon.com and follow the steps to set up an account. You’ll get a confirmatory mail once your account is set up and ready.</p> <p>Once you log in to the account, you should see a screen similar to this</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/react/image2.png" alt="" /></p> <p>Click on the blue ‘Launch a virtual machine’ line, and you’ll be taken to the EC2 setup screen, wherein you’d have to select an AMI, an Amazon Machine Image.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image13.png" alt="" /></p> <p>An AMI describes the configuration of the server you’d be using to host your application, including the OS configuration - Linux, Ubuntu, Windows etc. If you have been following tech news, a Mac version was also released for the first time in early 2021.</p> <p>We’ll be going with Ubuntu server 20.04. You may choose another, but the rest of the steps might vary slightly. Also, do NOT choose an option that doesn’t have the ‘Free tier eligible’ tag - otherwise, you’ll be having to sell off some jewellery to pay the AWS bill.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image5.png" alt="" /></p> <p>The next step is choosing an instance type. This describes the server configuration, including CPU, memory, storage, and so on.</p> <p>Here, we’ll pick the t2.micro instance type, which is the only one available in the free tier.
You’ll need larger ones as your application size and requirements in RAM or processing speed increase. In case you’re not clear about any of the column fields, click the information icon next to the headings to get a description of what it means.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image4.png" alt="" /></p> <p>Once this is done, click on Next: Configure Instance Details</p> <p>Here, you’re asked the number of server instances you wish to create and some properties regarding them. We only need one server instance. The rest of the properties are auto-filled based on the configuration we selected in earlier steps and/or default values, and thus, should be kept as they are.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image3.png" alt="" /></p> <p>Next, click on Add storage</p> <p>As the name suggests, storage refers to the amount of storage in our server. Note that this isn’t the storage you’d consider for storing databases. This is temporary storage that will last only as long as the instance lasts, and thus, can be used for things like caching. The default size of 8GB is part of the free tier and suffices for our purpose.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image15.png" alt="" /></p> <p>Next, we’d be adding a tag for our instance. It is a key:value pair that describes an instance. Since we only have a single instance right now, it is not very useful, but when you are working with multiple instances and instance volumes, as will be the case when the application scales, it is used to group, sort and manage these instances.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image6.png" alt="" /></p> <p>Next, we’ll be adding a security group to our instance. An SG is practically a firewall for your instance, restricting the traffic that can come in and the ports it can reach (called inbound), and the traffic that can go out (called outbound). There are further options to restrict the traffic based on IP. For instance, your application will run on port 3000, and thus, that’s a port you’d want all your users to be able to access. Compare that to a Postgres database service running on port 5432. You don’t want anyone but you meddling with that, so you’d restrict access on that port to your own IP.</p> <p>Create a new security group. Next, we have to add the rules for the group, describing what ports are accessible to the outside world, and who they are accessible to. Note that outbound traffic has no restrictions by default, meaning that your application can send a request anywhere without any restriction from the SG unless you choose to restrict it. As for inbound, we’ll first add HTTP on port 80 and HTTPS on port 443. Next, we’ll add an SSH rule for port 22. SSH stands for Secure Shell and will allow you to connect to your instance, as we’ll soon see in the coming section. Finally, we’ll add a custom TCP rule for the port our application is going to expose - port 3000.</p> <p>For simplicity, we’ll keep the sources of all of those at ‘anywhere’. Ideally, SSH should be limited only to those you want to allow to connect to your instance, but for the sake of the tutorial, we’ll keep it at anywhere.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image17.png" alt="" /></p> <p>Once the rules are set, click on Review and Launch. You’ll be shown the configurations you’ve selected to ensure you didn’t make a mistake anywhere.</p>
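<p>As an aside - if you’d ever rather script this part than click through the console, the same inbound rules can be added with the AWS CLI. A minimal sketch, assuming a hypothetical group id (use the one AWS assigns to your security group) :</p> <pre><code># open HTTP, HTTPS, SSH and the app port to the world
for port in 80 443 22 3000; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0abc12345 \
    --protocol tcp --port $port --cidr 0.0.0.0/0
done</code></pre>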
<p>Once you hit launch, you’ll be asked to create/select a key pair. As the name suggests, it’s a pair of keys - one held by AWS, and the other by you - that acts as a sort of password for you to connect to your instance. Anyone wishing to SSH into this instance must have access to this key file, or they won’t be able to connect.</p> <p>The file contains an RSA private key, which uniquely determines your access to the instance. Click on create new, give it a name (that you must remember), and download it.</p> <p>It’s recommended that you download the .pem key file to your home directory - C:/Users/&lt;username&gt; on Windows, /home/&lt;username&gt; or similar for Linux and Mac - to avoid any access issues.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image10.png" alt="" /></p> <p>Once the file is downloaded, you’ll get a prompt that your instance is starting, and after a few minutes, your instance will be started. Your EC2 home page should look like this. Note the Name tag (‘Main’) and the instance type t2.micro that we selected when we were setting up the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image9.png" alt="" /></p> <p>Next, select the instance, and click on Connect on the top bar. It’ll open this page :</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image1.png" alt="" /></p> <p>This lists a few ways in which you can connect to the instance. Go to the SSH client tab. Now, we’ll be using the terminal to connect to your instance (remote server). For that, open a new terminal as administrator (superuser or sudo for Linux), and navigate to the directory where you stored the .pem key file.</p> <p>First, we’ll run the <code>chmod 400 keyfilename.pem</code> command to allow read permission on that file, and remove all other permissions. Note that if the key file gets overwritten, you’ll lose SSH access to that instance forever, and you’ll have to recreate the instance, since you won’t get the .pem file to download again.</p> <p>And once you’re done with that, it’s time for the high jump - connecting via a simple command to a remote computer thousands of miles away. The command to run will be on the AWS page as shown above - the <code>ssh -i …</code> one.</p> <p>It means that we’re ssh-ing into the instance defined by the DNS (the .amazonaws.com thing), and the proof that we’re authorized to do it is in the .pem file.</p>
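<p>For illustration, with a hypothetical key file name and instance DNS (yours will differ - copy the exact command from the Connect page), the two commands look something like this :</p> <pre><code>chmod 400 mykey.pem
ssh -i "mykey.pem" ubuntu@ec2-12-34-56-78.ap-south-1.compute.amazonaws.com</code></pre> <p>(ubuntu is the default username on Ubuntu AMIs.)</p>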
<p>It’ll ask a confirmation prompt that you have to type yes to, and if all works well, you should see a ‘Welcome to Ubuntu’ text as shown below, which means that you’re now logged into the instance.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image14.png" alt="" /></p> <p>Great going.</p> <p>Now, our next step is to bring the code into our instance and run it. To do that, we’ll clone the repo exactly the same way we cloned it on our local system, using the git clone command.</p> <p>Once you’re done cloning the repo, the next step is to install the dependencies and start the application. Navigate to the repo directory and try running</p> <p><code>npm install</code></p> <p>Did you get an error? Of course you did. You need to install NodeJS on the instance. How do you do that? The answer’s in the error itself :</p> <p><code>sudo apt install nodejs</code></p> <p>(On some Ubuntu versions, npm is packaged separately, so you may also need <code>sudo apt install npm</code>.)</p> <p>This will take a few minutes to complete. Once it’s done, try running npm install again, and you’ll see that this time, you’re able to.</p> <p>Finally, the moment of truth - run</p> <p><code>npm run start</code></p> <p>Once you see the application live on localhost:3000 written on the terminal, you’ll have to navigate to the server IP to check if it works.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image16.png" alt="" /></p> <p>This IP can be found from the AWS instance details - Public IPv4 address. Copy that, paste it onto a browser tab, and add :3000 after it.</p> <p>If the application did work correctly, you should be able to see the same screen that you were able to see locally on your machine.</p> <p><img src="https://blog.dkpathak.in/img/scalex/react/image8.png" alt="" /></p> <p>As we’d seen above, a simple npm run start gives us the development version. However, this is a production environment we’re running the app on, and we need to ‘build’ the app, using</p> <p><code>npm run build</code></p> <p>Then, following the same steps as we did above, install the serve package and use the command</p> <p><code>serve -s build</code> to serve the build version.</p> <p>Looks good. Or does it?</p> <p>Did you notice the port number? 5000. Do you think we’d be able to access it with the security rules we created?</p> <p>To find out, go to the public IP browser tab and replace the :3000 with :5000.</p> <p>Oops. Doesn’t work, does it? Wouldn’t it be great if AWS could just ‘guess’ the port number!</p> <p>Unfortunately, no such functionality exists, and thus, we need to manually allow port 5000. To do that, go to the instances page. In the left navigation pane, scroll down to find the “Network and Security” section, and within it, Security groups. Open it, and select the new security group we’d created when we were setting up the instance (not the default one).</p> <p>Below, go to the Inbound rules tab, and hit the edit inbound rules button.</p> <p>Now, put in a custom TCP connection rule for port 5000, and allow access from? You guessed it - anywhere.</p> <p>Once that’s done, save the rules, come back to the public IP page, and refresh. If you didn’t mess up, you should be able to see the application loading on port 5000 now!</p> <p>Great, so you got it all running on a server. But we’re not done. What happens if you close the terminal? Try doing just that and see if your website still works.</p> <p>As expected, it won’t. And that doesn’t make sense - for a server to stay up, you shouldn’t have to keep a dedicated computer with a terminal on all day; that would defeat the point of having a remote server.</p> <p>Fortunately, there’s a simple npm package that can keep your app running even when your terminal isn’t. It’s called pm2 (short for Process Manager 2). Apart from ensuring that the server remains up, you can use it to check the status of all your Node processes at any time to figure out which of them is causing an issue, to manage logs and track where errors/bugs/incidents, if any, occur, and to view metrics such as memory consumed.</p> <p>So, we’ll be installing the same on our server and then configuring it to start our React app. Again, SSH into the instance using the ssh -i command, go to the project directory, and write</p> <p><code>npm i -g pm2</code></p> <p>Note the <code>-g</code> flag. It stands for global, meaning that pm2 will be installed as a global package, not just for our project. This is important, because pm2 is expected to handle the restarting of the application even if our project stops, and any project-level dependency would not be able to do that.</p> <p>Once that’s done, we need to start our app using pm2. And remember, we’re looking at the build version.</p> <p>The command for that is</p> <p><code>pm2 serve React-TODO-App/build/ 3000</code></p> <p>Note that the above command should be run in the root. If elsewhere, edit the path to the build folder accordingly. And we’ve used port 3000 here - you may use 5000 as well.</p>
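<p>A few other standard pm2 commands are worth knowing at this point (a quick sketch - in particular, pm2 can generate a boot script so the app also survives an instance reboot) :</p> <pre><code>pm2 list      # status of everything pm2 is managing
pm2 logs      # tail the logs of the running processes
pm2 startup   # generate a script so pm2 restarts on reboot
pm2 save      # save the process list for that boot script</code></pre>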
<p>Now, if you close the terminal, you’ll see that the application continues to stay up and running.</p> <h2 id="conclusion" tabindex="-1">Conclusion<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#conclusion">#</a></h2> <p>Thus, in this tutorial, we learnt what it means to build a React app for production, how to create a build locally, and how it works. We then learned how to set up and configure a remote EC2 server, and manage access to it. We then set up our repo on the instance, and ran it. Since we wanted the app to continue running even when we closed the terminal, we used the pm2 package for that.</p> <p>In future blogs, we’ll be looking at how to add load balancers to balance the traffic on our application.</p> <h2 id="references" tabindex="-1">References<a class="tdbc-anchor" href="https://blog.dkpathak.in/setting-up-a-production-ready-application-with-react/#references">#</a></h2> <ul> <li> <p><a href="https://create-react-app.dev/docs/production-build/">Creating a production build - React</a></p> </li> <li> <p><a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.create-cluster.console.configure-inbound-rules.html">Configuring EC2 inbound rules</a></p> </li> </ul> </content>
</entry>
<entry>
<title>Objectivizing milestones, the Agile way</title>
<link href="https://blog.dkpathak.in/objectivizing-milestones-the-agile-way/"/>
<updated>2021-11-07T00:00:00Z</updated>
<id>https://blog.dkpathak.in/objectivizing-milestones-the-agile-way/</id>
<content type="html"><p>Determining when a task is 'done', is often harder than doing the task itself, especially when it involves a creative turn of mind like writing a blog post, painting a portrait, and so on. And going by 'satisfaction' also doesn't work - the creators can almost never be perfectly satisfied with their work, and thus, unless you have a specific stop point, your tasks could end up progressing infinitely. When there're other stakeholders involved, like clients, this problem exacerbates - you say you did your best, but the client still doesn't agree.</p> <p>The makers of Agile anticipated this problem, and incorporated features and principles in their methodology to ensure that milestones, capacities and goals were objectivised. And it's helpful if we can take a leaf out of the agile book to ensure we are able to arrive at tasks' conclusion much faster.</p> <p>These are 3 steps that we can take to ensure we're more objective :</p> <h2 id="1-objectivizing-capacity-" tabindex="-1">1. Objectivizing capacity :<a class="tdbc-anchor" href="https://blog.dkpathak.in/objectivizing-milestones-the-agile-way/#1-objectivizing-capacity-">#</a></h2> <p>There's a term in the agile methodology for this - velocity estimation. You have a fixed amount of time and mental resources. You estimate the total number of hours in a day you believe you can work productively for your tasks, and only then take up tasks. For instance, let's say you wish to take up a new side project. In a typical 24 hour work day, you have 8 hours of sleeping, 9 hours of work, and 2 hours of chores, which leaves you with 5. That's your capacity for the day, and you know that even operating at your best, you cannot take up tasks that'd take longer than 5 hours.</p> <p>This objectivization goes beyond the 'I'll find time to do it today' myth, and instead focuses on a real chunk of time you have available, making you feel less overwhelmed and more in control of your time and energy.</p> <h2 id="2-objectivizing-acceptance-criteria-milestones-" tabindex="-1">2. Objectivizing acceptance criteria/milestones :<a class="tdbc-anchor" href="https://blog.dkpathak.in/objectivizing-milestones-the-agile-way/#2-objectivizing-acceptance-criteria-milestones-">#</a></h2> <p>When is a task 'done'? When does it look 'good enough' to go into the 'Done' category? Unless we have a very clear and objective milestone to achieve, we'll never get anything done to satisfaction. In agile, this takes the form of 'acceptance criteria' for features. Before putting a feature into development, the developer and the product team agree to when that feature will be considered complete to avoid back and forth over expectations.</p> <p>The same can be, and should be, done with our personal tasks, especially those that require a creative turn of mind. When I first wrote this very post, I never knew what I wanted it to look like at the end, even whether I should write 3 steps or 5. After a lot of confusion that lasted days, I created a set of essential points that I wanted the post to cover, and other features like word limit that I wanted it to have. Only then did I start writing it in earnest and sure enough, I was able to hit the acceptance criteria within less than an hour</p> <ol start="3"> <li>Objectivizing habits : We all want good habits. But only a fraction of us actually end up keeping up with our aspired habits for long. And the reason isn't always our laziness and lack of consistency. 
Oftentimes, the habits we plan are so subjective that we don't really know the next course of action we should take to keep the streak going, and every day, we first need to think about what we need to do, and when we can consider our habit done for the day.</p> <p>Instead of this, having a clear set of actionable items that we should do every day will slowly switch our bodies to autogear, and soon enough, the habits will start coming subconsciously.</p> <p>In Agile, this takes the form of consistent flows in ceremonies - a daily standup is always limited to answering three questions -</p> <ol> <li> <p>What did you do yesterday</p> </li> <li> <p>What will you do today</p> </li> <li> <p>Blockers</p> </li> </ol> <p>A retrospective meeting is always defined by what went well in the previous sprint, and what could have been done better. Converting the subjective into these relatively more objective questions ensures that the ceremonies get completed within time and that they actually do what they're meant to do without losing track.</p> <p>A simple example of an adaptation in our personal lives is our workout routines. Instead of scheduling 'core workout' twice a week, make it '40 situps and 80 leg rotations' with an increase of 4 reps week on week. This objective goal means that your brain doesn't have to worry about what workout it has to do - only the measurable rep count that needs to be met.</p> <p>These 3 categories of objectivizing our life facets can therefore greatly accelerate the progress we make on our tasks and goals.</p> </content>
</entry>
<entry>
<title>3 free calendar-cum-todolist apps you can use to time-block your day</title>
<link href="https://blog.dkpathak.in/3-free-calendar-cum-todolist-apps-you-can-use-to-time-block-your-day/"/>
<updated>2021-11-06T00:00:00Z</updated>
<id>https://blog.dkpathak.in/3-free-calendar-cum-todolist-apps-you-can-use-to-time-block-your-day/</id>
<content type="html"><p>The market is overflowing today with todo lists, and with calendar applications. With the definition of 'work' being a lot more than just our job, and with multiple things going through our minds at the same time, these apps ensure that we're able to capture tasks and that we see them through when we have enough bandwidth.</p> <p>Sadly, both the todo list type of apps, and the calendar type of apps, lack completeness individually. You can type in a zillion tasks in your todo list, but unless you have assigned a time to them and follow that calendar religiously, those tasks will stay untouched on the list forever.</p> <p>On the other hand, if you have a calendar but no way you can check things off or organize them, you wouldn't really be able to track whether an item you'd scheduled is completed or not.</p> <p>Thus, the optimal solution would be an application that combines these functionalities - a todolist to manage your tasks, and a calendar to block time for doing them. And here we have shortlisted three free solutions that do just this.</p> <h3 id="1-routine" tabindex="-1">1. <a href="https://routine.co/">Routine</a><a class="tdbc-anchor" href="https://blog.dkpathak.in/3-free-calendar-cum-todolist-apps-you-can-use-to-time-block-your-day/#1-routine">#</a></h3> <p>Disclaimer : At the time of writing this, Routine is still in private beta, and thus, it's not accessible to all. However, it holds great potential and is expected to go live very soon, and thus, is part of this list.</p> <p>Routine allows you to have an inbox of todo items, and a calendar parallely, synced with your Google calendar. You can drag and drop tasks from the inbox into the calendar. Better, you can even move your GCal events and the same will be synced to your calendar - this is a feature not found in many calendar apps. Additionally, Routine also follows a page model like Notion, where every task/event can be opened as a page for notes. It has Intellisense to automatically gauge date/time from the task name, which is another great plus.</p> <p>Pros :</p> <ol> <li> <p>Drag drop tasks and gcal events and have a two way sync - many other apps don't allow GCal events do be modified by another app</p> </li> <li> <p>Very accessible - tasks can be added, and the platform can be navigated just via the keyboard</p> </li> <li> <p>UI and UX are the best among the three apps discussed here - it breathes minimalism and focus.</p> </li> <li> <p>Easily schedule tasks thanks to Intellisense.</p> </li> </ol> <p>Cons :</p> <ol> <li>Still in private beta, so many features are lacking. Some of these include :</li> </ol> <ul> <li> <p>integrations with other apps like Todoist</p> </li> <li> <p>no Android app</p> </li> <li> <p>No Reminders</p> </li> <li> <p>No project system to sort tasks into</p> </li> </ul> <p><img src="https://blog.dkpathak.in/img/calendar-todolist/routine-1.jfif" alt="" /></p> <h3 id="2-kosmotime" tabindex="-1">2. Kosmotime<a class="tdbc-anchor" href="https://blog.dkpathak.in/3-free-calendar-cum-todolist-apps-you-can-use-to-time-block-your-day/#2-kosmotime">#</a></h3> <p>Kosmotime is a time blocking and tracking application. You can add tasks, sort them into projects, and schedule them onto the calendar. This app also syncs with Google calendar to display GCal events onto the Kosmotime calendar, however, you cannot modify the GCal events. 
One unique feature of Kosmotime is the concept of focus blocks, wherein you can block time for a set of tasks grouped together - a manifestation of the concept of time blocking. You can also track time across each task using the inbuilt timer; however, you'll have to remember to start and stop the timer for the task. I once remember working 72 hours non-stop on a task :P</p> <p>Pros :</p> <ul> <li> <p>Can group tasks into projects</p> </li> <li> <p>Can drag and drop tasks onto the calendar</p> </li> <li> <p>Can create focus blocks</p> </li> <li> <p>Time tracking for tasks</p> </li> <li> <p>Good UI/UX</p> </li> </ul> <p>Cons :</p> <ul> <li> <p>Google calendar events cannot be modified from the Kosmotime calendar</p> </li> <li> <p>Only has integrations for Slack and Asana</p> </li> <li> <p>No Android application</p> </li> <li> <p>No reminder option</p> </li> <li> <p>Lacks time and date intellisense - you have to manually set the time and date for each task, or drag and drop it</p> </li> <li> <p>No dark mode (might not be a con for many, but it is for me)</p> </li> </ul> <p><img src="https://blog.dkpathak.in/img/calendar-todolist/kosmotime.PNG" alt="" /></p> <h3 id="3-plan" tabindex="-1">3. <a href="https://getplan.co/">Plan</a><a class="tdbc-anchor" href="https://blog.dkpathak.in/3-free-calendar-cum-todolist-apps-you-can-use-to-time-block-your-day/#3-plan">#</a></h3> <p>Among the three, Plan offers the richest set of features and (if I am not wrong) has been around the longest. It also allows you to create tasks, sort them into projects and drag and drop them onto the calendar, just like the other two. Additionally, you can view the tasks in list, kanban or timeline views. My favorite feature, though, is that it allows you to edit and drag and drop Google calendar events as well. Moreover, it has a Chrome extension which creates a Plan homepage, and therein, you can even check off events - something that both the above apps miss. That 'checking off' is like productivity-adrenaline.</p> <p>Additionally, it has a documents feature for you to create and store documents. It also has a metrics dashboard wherein you can view the time spent on each task/project.</p> <p>The downsides are that it has a buggy UI/UX and is often not responsive (some of the content gets cropped out of the screen and there's no scroll).</p> <p>Pros :</p> <ul> <li> <p>Allows you to edit and drag calendar events from the app as well, and even allows you to check off GCal events in its Chrome extension</p> </li> <li> <p>Has on-browser reminders (notifications)</p> </li> <li> <p>Has a variety of views to visualize tasks - list, kanban, timeline</p> </li> </ul> <p>Cons :</p> <ul> <li>UI is buggy and content often gets cropped out of the picture with no option for scrolling</li> </ul> <p><img src="https://blog.dkpathak.in/img/calendar-todolist/plan.PNG" alt="" /></p> <p>Thus, these were three calendar-cum-todolist management applications that you can use to block time for various tasks, with their pros and cons.</p> </content>
</entry>
<entry>
<title>5 Agile processes you can use to improve your personal productivity</title>
<link href="https://blog.dkpathak.in/5-agile-processes-you-can-use-to-improve-your-personal-productivity/"/>
<updated>2021-11-05T00:00:00Z</updated>
<id>https://blog.dkpathak.in/5-agile-processes-you-can-use-to-improve-your-personal-productivity/</id>
<content type="html"><h3 id="intro-to-agile" tabindex="-1">Intro to Agile<a class="tdbc-anchor" href="https://blog.dkpathak.in/5-agile-processes-you-can-use-to-improve-your-personal-productivity/#intro-to-agile">#</a></h3> <p>For the uninitiated, Agile started as a methodology to deliver software, improving upon the flaws of the then popular waterfall model. The idea was simple - to speed up, you must adapt to change, incorporate it quickly, and deliver updates incrementally. For instance, before Agile, teams spent months finalizing the expected features, then developed them, then tested them, and just as they were about to deliver, they realized a client requirement had changed rendering much of their months of effort practically wasted. And anyone who has worked for any length of time in the tech/corporate world knows that if there's anything fickle in the world, that's client requirements. And Agile was a means to not resist change, but instead accept it, and tune our processes to fit the inevitable change, by delivering small updates in short chunks of time, while taking feedback and incorporating it in further iterations.</p> <p>And as teams and companies realized that this was not at all a bad idea and helped them deliver more and therefore, improve their financials, they adapted the practice religiously, making tweaks to the process that could suit larger teams and projects. This led to several different forms of Agile making their way out in the open, each suited to a particular team or objective or process - Scrum, Kanban, SAFe, to name a few.</p> <p>Today, Agile is adapted by almost all development teams around the globe.</p> <p>But interestingly, you don't have to be a development team, or even a developer to utilize the power of Agile.</p> <h3 id="agile-for-the-individual" tabindex="-1">Agile for the individual<a class="tdbc-anchor" href="https://blog.dkpathak.in/5-agile-processes-you-can-use-to-improve-your-personal-productivity/#agile-for-the-individual">#</a></h3> <p>Most of us are agile in some way - we spend 15 mins at the start of the day planning our tasks and time - that's like a daily standup in agile. We often don't go all in on an idea, but instead, take an incremental step by step approach to see if the outcome's worth the effort - another agile mindset. However, creating a formal structure including some of these instinctive habits, and adapting a few formal processes can make us more productive in doing tasks, personal projects, as well as general mundane tasks.</p> <h3 id="5-agile-processes-you-can-use-in-your-daily-life" tabindex="-1">5 agile processes you can use in your daily life<a class="tdbc-anchor" href="https://blog.dkpathak.in/5-agile-processes-you-can-use-to-improve-your-personal-productivity/#5-agile-processes-you-can-use-in-your-daily-life">#</a></h3> <h4>1. Daily standup</h4> <p>A daily standup typically includes answering the following three questions :</p> <ul> <li> <p>What did you accomplish yesterday?</p> </li> <li> <p>What are your goals for today?</p> </li> <li> <p>Any blockers?</p> </li> </ul> <p>In a team, each developer talks about her/his own content on the above three points. The same format can be taken up by you personally as well. You start by reflecting on everything you accomplished yesterday, which not only gives you motivation, but also pick up on where you could do better today. You then check out your goals for today - work, personal, social, all of it, and put it up on a todo list so that you can check them off. 
Finally, you think about any blockers you have - a work task that requires an input from a teammate, a social obligation that's gonna eat up a couple hours of your time? This is helpful for setting your expectations for the day.</p> <p>If you already have a standup for work, it's recommended that you do a quick personal standup before it, so that the blockers you find for yourself can be discussed in the team standup.</p> <p>Are there tools you can use to help you in this? The simplest option would be a calendar to place your blockers, and a todo list to keep track of tasks. If you're feeling nerdy, <a href="https://dailybot.co/">Dailybot</a> is a Slack bot that you can use to customize questions sent to you over Slack that you can answer.</p> <h4>2. Weekly/bi-weekly retrospectives</h4> <p>As the name suggests, you retrospect on the past week(s). What went well, what could be improved?</p> <p>Sprints are usually 2 weeks long in development teams, and a retrospective happens at the end of each sprint to improve the next sprint.</p> <p>How can you use it personally? Say you have a habit that you started. You need to reflect on whether it's working, and make tweaks to improve. For instance, when reflecting, you realize that you're better able to absorb the content of a book you want to read in the morning than at night, and you therefore update your calendar to block time in the morning instead of the night. Similarly, you take note of some activities that are stopping you from being at your productive peak - social media scrolling, Netflix, Zomato - and plan to reduce them in the next week.</p> <p>The core tenet of agile is adapting to change, and retrospecting ensures that you're aware of what needs to be changed, and how.</p> <p>Tools you can use for retrospectives can be as easy as looking across the past two weeks on your calendar/todo list to try and recall where things can be optimized. However, if you want to delve into detail and have the time and will power to, you can use time tracking tools like <a href="https://app.kosmotime.com/">Kosmotime</a> to track the time spent across various tasks, which will allow you to determine if the outcome of a task was worth the amount of time you spent on it.</p> <h4>3. Velocity estimation and prioritization</h4> <p>Velocity refers to 'units of work' that can be allotted to a given task. A relatively simple task has a lower number of units, and a complex or time consuming one has a higher number. Each team has a fixed number of units per day they can manage to complete, and thus, this allows a quantitative estimation of how many tasks they can do in a given sprint.</p> <p>Prioritization is as the name suggests - it needs no explanation.</p> <p>The same can be applied to our daily tasks as well - in each retrospective/planning session, we assign points to each task we have to do, and decide how many we can fit in a day/week. For instance, while writing this blog post, I assigned 4 points to this task, which should take me a couple of hours, and this helped me block my calendar accordingly.</p> <p>Similarly, task prioritization is critical, since we always have way more to do than we're capable of, and some things need to be done way more urgently than others. Thus, giving priorities to some tasks and accomplishing them sooner should be part of retrospectives, plannings and standups.</p> <p>Todo lists like Todoist have a priority flag, which can be used to prioritize and then filter tasks on priority.
While velocity is not a direct feature in many todo applications, it's as simple as a number you assign to a task, as long as you have a clear idea in mind/on paper as to the relation between a unit of work and the time you should block for it.</p> <p>This exercise might often seem like overkill - you might think that it's much easier to just do a task right away than go through this entire process of setting velocities, priorities and what not, and while that is indeed the case sometimes, not all tasks can be just 'done'. For instance, I had to plan this blog well beforehand and allot time for it, so that I could research the content, finalize the points, and leave enough time for proofreading and publishing. I could not just sit up one fine day and shoot through all of it. It requires some thought as to which tasks can be just 'done right away', and which need planning.</p> <h4>4. Frequent delivery</h4> <p>While it may sound like this doesn't apply to a lot of our daily tasks, it really does help, on long term projects, to complete small chunks of work rather than hoping for a grand release. For instance, doing a Rangoli for Diwali - plan the design one day, the chalk outline another, the colors the third day, instead of waiting for a day you could do it all.</p> <p>If you're working with other people/clients, this is even more important, since you can get feedback before you've done a lot of work that could potentially go to 'waste' if the requirements or priorities change.</p> <p>When I was writing a blog for a firm some months back, I decided to go through it all in one stretch, for which I took a week, and on the day of presentation, realized that I had gotten the topic all wrong.</p> <p>Continuous feedback is another tenet critical to agile, and while your tasks might not usually have other people involved, your own feedback and expectations are important enough to keep in check - so try to keep deliveries small and frequent.</p> <p>Jira has versioning for delivery of software products - version 1.0.1, version 1.0.2 and so on. This can be customized for other tasks as well, including those that aren't clear enough to be defined via versions.</p> <h4>5. Using metrics for assessing output</h4> <p>One thing almost everyone can agree on - a task can almost never be 'good enough' for all stakeholders. You will always find ways to do it better. The client will always have further feature requests. And when working with personal tasks, which are often not deadline bound from the start, it's tough to know when to stop and when to keep improving. A subjective 'does it look good' is a very ambiguous and unhelpful milestone to achieve, and does nothing to consider the effort put into the activity.
Instead, there should be more objective metrics that can help us analyze how we performed and, as Taylor Swift puts it, 'if the high was worth the pain'.</p> <p>Agile has numerous metrics to track team performance, such as cycle time, lead time, burndown, throughput and business value, but almost none of these should be taken up blindly as a metric for your own tasks or projects, because focusing on the wrong metric can lead the project in an entirely unproductive direction.</p> <p>For instance, judging the growth of a blog by the number of articles per month isn't a great idea if you do not consider the quality of the articles that go in there.</p> <p>Instead, you should research and finalize a metric you'll aim to optimize as you go through a project, and customize it so that an improvement in the metric score actually translates into the project becoming better. In the above example, the number of minutes an average user stays on the blog gives a good idea of whether the blog is doing well from the 'quality' perspective.</p> <p>The tools for this process will depend on the metric you aim to use - Google Analytics for traffic tracking, time tracking tools for the amount of time spent, Jira/other project management tools for the number of features delivered, and so on.</p> <p>Thus, these were 5 agile processes that you can implement in your daily life and personal projects to optimize your productivity. But before you do that, remember the very core tenet of agile - adapting to change. Priorities change, circumstances change, requirements change. And it's important that you realize this, expect the change, and adapt to it.</p> </content>
</entry>
<entry>
<title>An introduction to static code analysis using Sonar</title>
<link href="https://blog.dkpathak.in/an-introduction-to-static-code-analysis-using-sonar/"/>
<updated>2021-10-02T00:00:00Z</updated>
<id>https://blog.dkpathak.in/an-introduction-to-static-code-analysis-using-sonar/</id>
<content type="html"><blockquote> <p>Good programmers write code for humans first, and computers next</p> </blockquote> <p>No idea who said that above line, or if anyone said it at all before I stole it off the internet, but damn right it is.</p> <p>Code changes more often than I change my mind(which is saying something), and it's almost certain that the next change to the code you're writing right now, will be done by someone other than you. In such a case, ensuring that code is readable, maintainable, follows a set of standard practices becomes critical.</p> <p>In a large organization with a crazy big codebase worked on by multiple teams and developers, the problem is exacerbated - no one really knows who wrote the code they are having to debug, and thus, it does save a lot of WTFs if the code follows coding practices.</p> <p>So now the question comes - who ensures developers follow standard practices? You can't give all developers a book of rules, and ask them to refer to it before each variable name they type. There is a need for a tool that checks code as the developer types, and points out the issues and the flaws</p> <p>And this tool, is called Sonar.</p> </content>
</entry>
<entry>
<title>Intro to Async Javascript</title>
<link href="https://blog.dkpathak.in/intro-to-async-javascript/"/>
<updated>2021-10-04T00:00:00Z</updated>
<id>https://blog.dkpathak.in/intro-to-async-javascript/</id>
<content type="html"><p>Most developers who come to JavaScript from Java, which uses threads for Asynchronicity are often left wondering - why the hell can JS not do the same? Or can it? Let's find out.</p> <p>Asynchronicity is the ability to break the regular flow of control in a script, in order to not let the program stall on blocking operations - calls that take a long time to complete - talk network requests or such.</p> <p>Asynchronicity is usually achieved by using multiple threads - this is the way Java does it - all the stuff that you don't want your main thread to bother wasting time on, just spawn off a new thread for.</p> <p>But that means, that Java has the ability to directly create and manage threads - this makes it decidedly more complex, but that's the way it was built.</p> <p>JavaScript is higher order than Java, and was initially meant to be a scripting language, and not to manage threads. Now, when the use case for it did come, the powers that be had two choices - add new features to the language to make threading allowable from within JavaScript, or find another way for it.</p> <p>And the far thinking powers behind the language, decided to try something else, rather than complicate JavaScript.</p> <p>How could you make a language Asynchronous, without actually calling multiple threads?</p> <p>The first place they looked for, is where JS ran - in the browser, initially. Java, on the other hand, runs on a server.</p> <p>JavaScript doesn't do every little thing itself - it takes help of several web browser 'APIs', to do things - for instance, there's a timer API, an XHR API, and so on.</p> <p>JavaScript uses these APIs to defer some logic to the browser, and expects the output when it's done.</p> <p>That means, that JavaScript could also use the same logic to defer code that it knows would take a long time.</p> <p>And that's where comes in setTimeout(), the gateway to the world of AsyncJS.</p> <p>To the uninitiated, setTimeout() is a function used within JavaScript, which looks something like this -</p> <pre><code>setTimeout(() =&gt; alert('Hello'), 1000) </code></pre> <p>Don't worry if it looks like Gobbledygook, here's a more readable version of the same snippet :</p> <pre><code>setTimeout(function a(){ alert('Hello') }, 1000) </code></pre> <p>Better now?</p> <p>setTimeout takes two params - a function, and a time in milliseconds. The function is what you'd call a callback function, a function that will be CALLED BACK, when the time as provided as the second param, is up. Meaning, after 1000ms, the function a is called.</p> <p>That's the more widely accepted, slightly inaccurate version.</p> <p>What goes behind the scenes, is horrifyingly amazing(Go figure:/)</p> <p>setTimeout is NOT a JavaScript function, first. There goes your belief system, but I am sorry - it is what it is.</p> <p>It's instead, a facade, for a web browser API being called behind the scenes - the Timer API, and for once, in the godforsaken world of bad naming conventions, does exactly what it sounds like - it's a TIMER.</p> <p>The execution is interesting - when the JavaScript compiler runs into setTimeout, it just does two things - it first heaves a sigh of relief, and then, throws the whole thing, the callback, and the time, to the web browser API, and forgets all about it. Literally, yes. Forgets.</p> <p>For JavaScript, that line has finished execution. It can continue on with the rest of the code.</p> <p>Did you see it happen? 
<p>Did you see it happen? Did you see how JavaScript went asynchronous without us having to mess with threads?</p> <p>That's the beauty.</p> <p>So, what happens when the engine throws the stuff to the browser API? Simple - the browser counts till the time given as the param is up, and then adds the function to the call stack.</p> <p>The call stack is where functions go to get called.</p> <p>This isn't entirely accurate, but that's something we'll discuss in a coming probe.</p> <p>Till then, just remember this - JavaScript is NOT an asynchronous language. It's synchronous by design, but supports async functionality by cleverly utilizing the environment it runs in - the browser.</p> <p>Coming in the next probe - what really happens with the callback function, and why it isn't very good.</p> </content>
</entry>
<entry>
<title>Your next side project like a pro</title>
<link href="https://blog.dkpathak.in/your-next-side-project-like-a-pro/"/>
<updated>2021-10-04T00:00:00Z</updated>
<id>https://blog.dkpathak.in/your-next-side-project-like-a-pro/</id>
<content type="html"><p>A lot of us often complained, especially during our college years, the wide discrepancy between how we did side projects in college, vs how they work in the industry, and I am not referring just to the scale and complexity of the industry project.</p> <p>Even if a project of the same level was done personally by us, vs it being done in a formal software industry setting, the process would be entirely different - the latter would involve requirements analysis, PRD making, design creating, feature branches, code reviews, automated tests, linting and more.</p> <p>And this means that even if we do projects on the side, it doesn't give us enough confidence of us being able to do justice to our roles immediately in a software industry setting.</p> <p>But tell you the truth - it doesn't have to be so. To quite an extent, we can tweak our side project structure to emulate as much of a 'real' software project as to give us a pretty fair idea of what we're up against when we enter the industry.</p> <p>Here are a few points you should look at/implement when doing a side project :</p> <ol> <li> <p>Requirement analysis/Software Requirements Specification/Product Requirements Document : This is one thing we often ignore or take for granted, simply because we start our project with an idea or a set of features in mind, and we usually try to keep em verbal and limited to those. More often, we often follow tutorials that walk us through the project's code step by step, and we emulate the same thing. These factors mean that we don't spend enough time doing a requirement analysis to analyze the feasibility and priority of each feature, create user stories and so on.</p> <p>This means that we are only looking at a project from a constrained point of view - and this isn't the case in most software projects. In the industry, we're given a bird's eye view of the project, its expected functionalities and user base, and it's we who have to formalize and structure it into requirements. This allows us to think of the business/user side of things, teach us to prioritize important features so that we're spending less effort on unimportant features, and have a clear set of iterative goals in mind.</p> <p>Creating a PRD or an SRS is subjective - product managers spend days making a PRD in an industry setting, but you need not do the same. Just understanding the contents of a typical SRS and PRD should give you enough knowledge to create a simple one for your next project within an hour.</p> <p>The important part is to stick to it.</p> </li> <li> <p>Sprint planning : We often binge our projects - we go into frenzy mode and do em all working 10-12 hours a day for 3-4 days, then give them up for a week, and repeat the cycle. Moreover, what we do in each cycle is also dependent on our moods, what the tutorial guy is teaching and so on. A more scalable idea would be to plan things beforehand in terms of sprints. A sprint is a period of software engineering with a definitive goal and clearly defined expected outcomes. It can range anything from a couple of days to multiple weeks, based on complexity.</p> </li> </ol> <p>The advantage of this, is that after each sprint, you have a significant chunk of the project ready, matching the requirements that you set at the beginning of the sprint, as well as regularly make tweaks to your priorities and deadlines based on each sprint's review.</p> <p>Again, this isn't as hard as it seems. 
You have the final SRS/PRD of the project - you need to break it down into ACHIEVABLE chunks, with deadlines for each set of features.</p> <p>This can, and I recommend should, go into a project management tool - preferably JIRA (the most commonly used in the industry), Trello, Notion or the like. No, 'verbally speaking and remembering' sprint features doesn't work. Writing it down on paper might sound appealing and might be a preferred way for many of us, but it doesn't work that way in the industry.</p> <ol start="3"> <li>Design (Frontend AND Backend)</li> </ol> <p>In most of our side projects, especially those we follow tutorials for, we immediately start writing the code - if it's a backend thing, we start making the DB queries and schema. If it's frontend, we directly start writing React. This, however, doesn't work in the industry, because first, unlike in side projects, you don't have tutorials you can blatantly emulate, and second, there are a lot of open questions that you need to take a call on - the design system of the project, the theme, the schema design, the flow of the website, and a zillion others. A design is created first, then that design is iterated and improved upon, and the design is then implemented in code.</p> <p>Frontend design - creating UI mockups of the end product - is usually done using tools like Figma or Sketch, but you need not do this if you don't want to spend a lot of time learning these. Instead, you can use a tool like <a href="https://whimsical.com/">Whimsical</a> or <a href="http://diagrams.net/">Diagrams.net</a> to create a similar but vastly less complex version of the design - something like a wireframe - so that you know what components go where, the style guides (color palette, typography, transitions etc) and so on. Note that the design is not set in stone, not even in the software industry - it is iterated upon by the designers, the product managers and the developers based on UX, priorities and complexity of implementation, so do not worry if you think you're gonna tweak your design later on - just make sure that changes go from design to code, not vice versa.</p> <p>In case of a backend/full stack application, you need to create a rough architecture of the different entities involved - the DB, the backend server, and so on - so that it forms a coherent pathway for data to flow. This architecture can be as simple as a bunch of boxes for simple projects like Todo applications, but increases in complexity when things like microservices and messaging queues are involved. Another important aspect you should think of beforehand is schema design - you should not change your database schema whenever you feel like it. In a production environment, it'll cause disaster if you remove one small field and it breaks the app for a zillion users. Schema design is carefully analyzed and planned based on the required fields, while ensuring ACID compliance (in case of SQL DBs), and creating a pathway for schema changes to be made without affecting existing users.</p> <ol start="4"> <li>Project structure</li> </ol> <p>As beginners, a lot of us tend to not be aware of, or care much about, file structure in projects. Even the tutorials we follow usually do not enforce this. We might have one single file making API calls, creating the UI across multiple tables and so on. This is extremely unscalable - the second you have to add an extra feature, you will get overwhelmed, because different code snippets stuck in the same file will confuse you.</p> <p>Separation of concerns and modularity are extremely critical when doing a software project, especially if we want it to scale seamlessly. For instance, consider a React project - you could have easily dumped your entire code into the single App.js file. But we don't/shouldn't do that. Instead, we create a separate folder for each component - each folder contains an index.js file to hold the component logic and a styles.scss file, if required. Additionally, child components are created as subfolders inside the parent component.</p> <p>Similarly, in case of a Node.js backend project, it is recommended to follow the MVC architectural style. You'll have a separate folder for models, which represent the database schema, another one for controllers, which coordinate between the requests we get from the frontend and the business logic, and a services folder, which holds the business logic and makes the API/database calls.</p> <p>This kind of structure ensures that if we want to add a service, we can do so easily by adding a file in the services folder, without touching the logic for the models or controllers.</p> <p>This might seem like an unnecessary exercise when there are only a few components, but as applications scale to hundreds or thousands of features, this organization is what keeps the project manageable.</p>
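<p>For instance, a minimal sketch of the Node.js layout described above (folder and file names are illustrative, not a prescription) :</p> <pre><code>src/
  models/        # database schemas, e.g. user.js
  controllers/   # map incoming requests to the right service
  services/      # business logic and the actual DB/API calls
  index.js       # entry point that wires everything together
</code></pre>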
<ol start="5"> <li>Static Code Analysis - Linting</li> </ol> <p>Linting is the process of checking for basic structural and syntactical correctness in your code statically - that is, without actually running it. This includes checking the formatting of your code, redeclaration of variables, poor error handling and lots more.</p> <p>This is an automated way of improving what was once done manually - a developer would write some code, and another, senior developer would review it and suggest style changes and syntactical error rectifications. However, this hurts developer productivity. Linters are scripts that run through the code, check for these issues, and in most cases, even create a commit after fixing the issues, so that you don't waste time on doing this 'menial' stuff.</p> <p>We have ESLint for JavaScript, SonarQube for Java, Pylint for Python and so on.</p> <p>These are commonly used in teams, and can be implemented in pet projects to focus on code quality without spending time on it.</p>
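<p>Getting started with ESLint, for instance, takes one dev dependency and a small config file - a minimal sketch of an .eslintrc.json (the rules picked here are just examples) :</p> <pre><code>{
  "extends": "eslint:recommended",
  "env": { "browser": true, "es2021": true },
  "rules": {
    "no-unused-vars": "warn",
    "eqeqeq": "error"
  }
}
</code></pre>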
<ol start="6"> <li>Testing</li> </ol> <p>This is, by far, the most important, and most unheeded, piece of the software development life cycle. I am yet to come across a pet project that was tested. It's a whole different ball game in industry projects, however - every project depends on NOT failing for anything the user might do. Mistaken email formats, hitting the back button at any time, and so on. Not to mention the stuff that can go wrong due to network issues - especially in case of critical applications like payment apps.</p> <p>Imagine if a user is carrying out a large transaction, the bank's servers act up and the transaction fails, but the user is anyway shown the money deducted popup - imagine the frustration. Testing ensures that such cases are minimized.</p> <p>Testing in bigger companies is a process as complex as the development itself; however, you need not follow all stages in your pet projects. You can start with unit testing your code - that is, testing separate modules, files and components to ensure that each particular component works well in isolation. Frameworks like Angular already provide the test files inbuilt, and you only need to tweak them slightly and run them. In React, you can use a library like React Testing Library or Jest. Similarly, for Java you have JUnit. This type of testing ensures that there's nothing wrong with your component logic - a mistaken API call, an incorrect query and so on. This, however, isn't all.</p> <p>You need to make sure that your entire application works in a good flow. You have to emulate a user's journey through your app as closely as possible, and account for each case - what if the user makes a mistake with the email, what if he/she presses the back button before the transaction is done, and so on. End to end tests ensure that this flow works - using libraries like Cypress or Protractor for React/Angular apps, or Selenium for Java.</p> <ol start="7"> <li>Regular commits</li> </ol> <p>We usually develop our projects on our local system, and once we're done, we push them to GitHub, to ensure that we can put up the links in our resume. That, however, isn't the right way to use version control, and definitely not the way it works in the industry.</p> <p>In the industry, version control is used for collaboration between multiple developers who work on various branches/forks, and in case of bugs, to track which change introduced the bug. There's also the concept of Continuous Integration, which means that code is released in increments, tests are automatically run on it, and that code is integrated into the main central codebase.</p> <p>All of this is pretty easy to implement in our projects too.</p> <p>First, set milestones in your project that will determine when an important feature has been implemented or done. For instance, adding the CSS for a login form, creating the core logic for checkout, and so on. Every time you hit a milestone, you create a new branch, make a commit, and instead of pushing directly to the main branch, you raise a pull request (see the sketch below).</p> <p>Now, you go to GitHub, and merge it into your main branch. Here, an additional step that can be taken up is writing automated tests - tests that run on each deploy to ensure that the project builds well and doesn't break anything. This can be configured using deployment tools like Netlify. Further details in the deployment section.</p> <p>The advantage of this process is that this is exactly how things work in the industry - different developers are responsible for different features, so instead of pushing everything to the main branch, they create separate branches and raise pull requests.</p>
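<p>Concretely, one milestone's flow might look like this (the branch name and commit message are examples) :</p> <pre><code>git checkout -b feature/login-form    # new branch for this milestone
git add .
git commit -m "Add login form styling"
git push -u origin feature/login-form
# then raise a pull request on GitHub and merge it after review
</code></pre>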
<ol start="7"> <li>Regular commits</li> </ol> <p>We usually develop our projects on our local system, and once we're done, we push them to GitHub so that we can put the links up in our resume. That, however, isn't the right way to use version control, and definitely not the way it works in the industry.</p> <p>In the industry, version control is used for collaboration between multiple developers working on various branches/forks, and, in case of bugs, for tracking down the change that introduced the bug. There's also the concept of Continuous Integration, where code is released in increments, tests are automatically run on it, and the code is then integrated into the main central codebase.</p> <p>All of this is pretty easy to implement in our projects too.</p> <p>First, set milestones in your project that determine when an important feature has been implemented - for instance, adding the CSS for a login form, or creating the core logic for checkout. For every milestone, you create a new branch, make your commits there, and instead of pushing directly to the main branch, you raise a pull request.</p> <p>Then, you go to GitHub and merge it into your main branch. An additional step you can take here is writing automated tests that run on each deploy, to ensure the project builds well and nothing breaks. This can be configured using deployment tools like Netlify - further details in the deployment section.</p> <p>The advantage of this process is that it's exactly how things work in the industry - different developers are responsible for different features, so instead of pushing everything to the main branch, they create separate branches and raise pull requests.</p> <ol start="8"> <li>Code review</li> </ol> <p>This is a process that is the norm in software development teams - necessary and critical. At least two other developers will look at the code you wrote, to check whether it's structured well, logically sound, and follows all the requirements. This ensures that more bugs are caught at an early stage, before the code moves to the testing environment.</p> <p>Code reviews are usually done by experienced developers who have already seen lots of code, and thus are very clear on the common pitfalls and the checkpoints they have to ensure are ticked.</p> <p>Now, in your pet project, in most cases, you'll be working individually, and that means you don't have anyone to review your code. Anyone but you, that is. And that's what you have to do. Read through your own code. Try to figure out if there are pieces that can be optimized, refactor them, and add helpful comments.</p> <p>Reading code is a monotonous exercise, especially when you wrote it yourself, but it's critical to ensuring quality, and something you're gonna spend a lot of time doing in the industry, so do develop this habit.</p> <p>Once you think your code is as optimal and clean as you can make it, mark it reviewed (GitHub has a feature for that), and only then merge the PR.</p> <ol start="9"> <li>Deployment</li> </ol> <p>So you did a project, which, right now, runs on localhost:8080. You know how it works and what it looks like, but how're you gonna show it to others? You can't expect every potential interviewer to download your source code, install the dependencies and run it to check. Moreover, no software project is ever made to live on localhost - it has to go to what is called a 'production environment' at some point, where actual users use it.</p> <p>There are various levels at which you can deploy your project, and they differ based on your project's tech stack. If it's just HTML, CSS and JS, you can directly activate GitHub Pages from your repository settings and the project will immediately be live at &lt;project-name&gt;.github.io.</p> <p>In the case of frontend projects involving a JS framework/library like React or Angular, you can use a platform like Netlify, Vercel or Heroku. Netlify is the easiest of the lot. All you have to do is connect your GitHub repo to your Netlify account, specify the 'build' command, and that's it - you'll get a deployed link within minutes. Vercel is similar.</p> <p>Note that backend projects, like a Node.js server, have to keep their server on all the time, unlike projects like React, which build once, serve an index.html file and run the JS in the browser. Thus, Netlify/Vercel won't work for backend projects, where you need your server to stay on to accept requests and send responses - see the sketch below. Heroku is a good option to start with in this case; it works similarly to Netlify in terms of setting up the project.</p>
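<p>To see what "keeping the server on" means, here's a minimal sketch of a Node.js backend using Express. The port and route are illustrative assumptions - the point is that this is a long-running process, not a one-time build.</p> <pre class="language-javascript"><code class="language-javascript">// server.js - a minimal Express backend (illustrative)<br />const express = require('express');<br />const app = express();<br /><br />// A toy endpoint; a real app would query a database here<br />app.get('/api/health', (req, res) => {<br />  res.json({ status: 'ok' });<br />});<br /><br />// The process stays alive here, listening for requests -<br />// which is why a static host like Netlify can't run it<br />app.listen(8080, () => console.log('Listening on port 8080'));</code></pre> <p>A React build, by contrast, produces static files that any CDN can serve; a process like this has to be hosted somewhere that keeps it running round the clock.</p>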
<p>These platforms, however, abstract away several complexities of deployment, and they're almost never used in software industry settings. In the industry, we use cloud solutions like AWS EC2, GCP or Azure hosting. These provide us servers where we can store and run our app, and they're guaranteed to stay up more than 99.5% of the time. These cloud solutions have a zillion other features, such as setting up load balancers, domain mapping and more, all of which are common in the industry.</p> <p>The only concern is that these platforms have a very limited free tier, and if you're not very careful, you could end up getting an extravagant bill. So when you use these, make sure you follow a decent tutorial, and do not do anything without understanding its implications. My record is a 4 lakh bill from AWS.</p> <p>Following these steps will set you off on a journey of doing your pet projects in a much more professional, industry-oriented fashion, so that you don't face a lot of trouble when entering the industry.</p> <h3 id="optional" tabindex="-1">Optional<a class="tdbc-anchor" href="https://blog.dkpathak.in/your-next-side-project-like-a-pro/#optional">#</a></h3> <ol> <li> <p>Team/group projects</p> <p>Almost no project in the industry is done by a single person. Even if there's just one developer, there'll be a designer, product manager or tester alongside. And working in a team is a world apart from working individually. You have to understand others' code, designs and priorities, and tweak your code and thinking accordingly.</p> <p>This point is optional because group projects aren't possible or feasible for everyone. However, if you can, find a group of 2-3 like-minded friends and do a project together. Assign features to different members, have regular meetings, review each other's code, and it'll literally be like working in the industry.</p> </li> </ol> </content>
</entry>
<entry>
<title>Asking questions for a software engineer</title>
<link href="https://blog.dkpathak.in/asking-questions-for-a-software-engineer/"/>
<updated>2021-09-19T00:00:00Z</updated>
<id>https://blog.dkpathak.in/asking-questions-for-a-software-engineer/</id>
<content type="html"><p>Asking questions at the right time, in the right way, to the right person</p> <p>Almost all freshers will agree that they've been encouraged by their teammates and managers to ask questions. However, this has a fine print : it actually goes, 'ask questions, if you can't find out the answer yourself'</p> <p>And not because they don't wanna answer, but because</p> <ul> <li> If you keep asking questions you can find out with a few google searches, you're no good as a developer - you need to learn the art of googling. </li><li> The managers/teammates usually have tasks of their own and are helping you on the side, which means that more often than not, they won't find enough time to answer your questions. </li><li> Answering questions is wayyyy tougher than asking them, especially if you're answering to a noob. Suppose you ask a question that you think is innocuous, and has a one line answer - "What does this imported package do". Now, your teammate's mind races back to what the package is, why it was brought in, why it's used, how it's used, and a few zillion other things, most of which would make no sense to you. So, she/he has to filter these out in a way so that you're able to grasp the essentials without feeling dumb or overwhelmed. That's tricky business. </li></ul> <p>So, what should you do? How should you 'ask questions'?</p> <p>First, the 'right way'</p> <ol> <li> Any query you find, first google it. Right away, as it is. Maybe you find a blank google search result - very very rare. You'll find something that can complement your understanding in some way, even if it doesn't give you the complete answer. But you'll at least have some more idea and can ask the question to your teammate in a more refined way so that the tough choices your teammate would face, mentioned in point 3 above can be minimized. </li><li> Instead of asking a teammate to explain it all - tell her/him what you've understood and ask her/him to validate/correct you. If you've got it 70% right, the teammate only need explain 30%, saving both of your times. If you've got it entirely wrong, the teammate would know that there's something lacking in your fundamental understanding and correct that first. If you've got it entirely right, you're getting a promotion sooner. </li><li> Try asking the teammate for a resource where you can learn more about the question you're asking. That way, the teammate will not be under pressure to explain 'everything' to you, and instead, guide you to a resource, which can help you better. </li><li> Make a habit of taking notes of what you ask and their answers. We often overestimate our memories and underestimate all the crap that's gonna take a chunk off our memories, so you best have it in written somewhere so that you can save yourself from your teammates' irritation by asking the same question 20 times. <p>Next, the right time. If you're an overexcited sorta person, you wanna know the entire architecture and each and every package right on the first bloody day of the job, because you then wanna go and be Napoleon. Or if you're the shy sort, you keep stalling, waiting for the 'right time' until it's too late. Figuring out the right time to ask comes mainly with experience, but a thumb rule is that if it's something that's blocking your progress, ask it right away. If you think the teammate is going to come to this question, give her/him an opportunity to address it. If they skip, then ask. 
Also, make sure the teammate is in the right frame of mind when you ask - not, say, when they're debugging a critical prod issue.</p> <p>Finally, the right person. You could ask the same question to an immediate senior, your manager and your team lead, and get three different answers. You need to figure out which of them would work best in the context in which you're seeking an answer.</p> <p>For instance, if you're struggling with a syntactical issue, you should most likely reach out to an immediate senior - someone with the closest interface to the code - since they can give you the quickest answer. If you're trying to understand the big picture of a project or a feature, someone who's been around longer can help better.</p></content>
</entry>
</feed>