<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Marika's blog]]></title><description><![CDATA[Sharing hands-on insights in full-stack development, AWS serverless architectures, and applied AI.]]></description><link>https://blog.marikabergman.com</link><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 01:34:09 GMT</lastBuildDate><atom:link href="https://blog.marikabergman.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Boost Data Modeling Efficiency Using Amazon DynamoDB MCP Server]]></title><description><![CDATA[Data modelling for DynamoDB is often a complicated task due to a fundamental shift in design philosophy compared to traditional databases. You need to approach things using ‘access pattern driven design’, which means that you need to know all of your...]]></description><link>https://blog.marikabergman.com/boost-data-modeling-efficiency-using-amazon-dynamodb-mcp-server</link><guid isPermaLink="true">https://blog.marikabergman.com/boost-data-modeling-efficiency-using-amazon-dynamodb-mcp-server</guid><category><![CDATA[AWS]]></category><category><![CDATA[DynamoDB]]></category><category><![CDATA[mcp]]></category><category><![CDATA[database design]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Wed, 15 Oct 2025 12:45:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759063006631/068483f8-88cb-4263-b1a9-3452f4197c5a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Data modelling for DynamoDB is often a complicated task due to a fundamental shift in design philosophy compared to traditional databases. You need to approach things using ‘access pattern driven design’, which means that you need to know all of your application’s query patterns upfront and create the design based on them. This of course is the reverse of the traditional database design and often quite complex (although it can also be a lot of fun). It is also time consuming when using a tool like NoSQL Workbench for DynamoDB where you need to manually test different database models and how those would work with your access patterns.</p>
<p><a target="_blank" href="https://awslabs.github.io/mcp/servers/dynamodb-mcp-server/#instructions">Amazon DynamoDB MCP Server</a> can really help in this process. You can start working through the schema using the MCP server or alternatively you could use it as a learning tool by making your own design first and then seeing whether the tool would have done the same design. The feedback you get is quite detailed, so it will really help to understand why the tool has taken certain decisions.</p>
<h2 id="heading-setting-up-amazon-dynamodb-mcp-server">Setting Up Amazon DynamoDB MCP Server</h2>
<p>There are several agentic tools you can use to run the MCP server. I used it with the Amazon Q CLI, and the installation is straightforward: the configuration is simply added to Amazon Q’s MCP settings file, and after that the tool is available whenever you open a new Q chat in the terminal:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758964693876/88c5d8b6-1173-4edf-a4a7-cf626f50daf6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758965484284/e9f5c7b2-ca42-438e-8ec8-a8170e703ef3.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-working-through-the-requirements">Working Through the Requirements</h2>
<p>The process starts by describing the type of application you are building and the business context. The tool will ask you questions, and based on your answers it will keep completing the requirements on the list shown in the terminal, as well as the more detailed requirements in the two files it saves in your current folder: <code>dynamodb_requirements.md</code> and <code>dynamodb_data_model.md</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758964918234/ad1d21a2-ef8a-432f-a4b9-0e02d944d512.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-collecting-access-patterns">Collecting Access Patterns</h2>
<p>After having an overview of the type of application we are building and the basic requirements, it is time to list the exact access patterns. The tool does this by asking questions about the volume and exact functionality that is required:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758965378672/1ad5032f-7fdf-4212-85cc-4bb3e15dabb7.png" alt class="image--center mx-auto" /></p>
<p>The tool will work through adding access patterns, and it will also suggest ones that you might have forgotten, such as returns, low-stock alerts, or admin functions like user management in the context of the inventory management application.</p>
<h2 id="heading-finalising-the-data-model-design">Finalising the Data Model Design</h2>
<p>After collecting all of the access patterns, the tool is ready to finalise the data model design. A detailed description is saved in the markdown document, and it will also give you a summary on the terminal. It will also explain the key design decisions it has taken:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758965797131/b688e8db-b603-4133-a28b-cd6ad3b4ed48.png" alt class="image--center mx-auto" /></p>
<p>One of the best things is that you are also able to ask it questions about the design. Below is an example of a question; the answer is quite detailed, so only parts of it are shown here:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758966655231/bfdc0971-98d5-4de3-aa3e-8069ec0b69fa.png" alt class="image--center mx-auto" /></p>
<p>The example answer about table design also covered the problems a single-table design would create in this scenario, such as sales reports needing to use <code>FilterExpression</code>, and the kinds of requirements that might have made the tool choose a single-table design instead, such as brand changes having to affect product displays immediately. You can ask more in-depth questions about any of these aspects and you will get a more detailed explanation, with examples of situations where, for example, a single-table design would be suitable.</p>
<p>After the initial database design, you are able to continue the discussion and make changes. Once I had finalised the initial database modelling, I asked the tool to add modelling for an OCR (Optical Character Recognition) process with a ‘human in the loop’ implementation. We had a short back-and-forth conversation about the functionality I wanted to achieve: a system that sends label images to AWS Textract, after which a Lambda function checks against the database whether the brand and model name exist, or whether it is something that has already been misread and corrected before. Otherwise we would need to find the closest match (using, for example, a string similarity algorithm, as this is not something DynamoDB supports natively), which is then sent to the frontend for a human to check for correctness. Essentially, this would be a system that improves its accuracy over time.</p>
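<p>The closest-match step would live in application code rather than in DynamoDB itself. A minimal sketch of what it could look like, assuming a normalised Levenshtein similarity over a list of known brand names that has already been fetched from the database:</p>
<pre><code class="lang-javascript">// Minimal sketch: pick the closest known brand name for an OCR reading.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =&gt; [i, ...Array(b.length).fill(0)]);
  for (let j = 1; j &lt;= b.length; j++) dp[0][j] = j;
  for (let i = 1; i &lt;= a.length; i++) {
    for (let j = 1; j &lt;= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Returns the best candidate and a 0..1 similarity score for the human to review.
function closestMatch(ocrReading, knownBrands) {
  let best = { brand: null, similarity: 0 };
  for (const brand of knownBrands) {
    const distance = levenshtein(ocrReading.toLowerCase(), brand.toLowerCase());
    const similarity = 1 - distance / Math.max(ocrReading.length, brand.length);
    if (similarity &gt; best.similarity) best = { brand, similarity };
  }
  return best;
}
</code></pre>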
<p>When the tool was aware of the schema and how the process would work, it was able to explain the steps that would be taken as part of the workflow:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759054171436/ff062126-4c56-496e-8179-6b3229ba48fc.png" alt class="image--center mx-auto" /></p>
<p>Apart from helping with the database modelling, the tool can also help you implement the design and manage the database (in the context of CRUD actions and schema-level interactions), as long as you have your AWS credentials set up. I personally feel the database modelling is the greatest advantage it offers, as that is usually the most demanding aspect.</p>
<p>After I had finalised the database design, I also took advantage of the tool’s awareness of the schema and had it generate some of the queries for me using my preferred SDK. I was then able to simply copy and paste the queries directly into my code. In summary, I found the tool very useful and will definitely use it to get started with my next DynamoDB project.</p>
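<p>To give an idea of the output, the generated queries looked roughly like the following AWS SDK for JavaScript v3 sketch. The table, index and attribute names here are illustrative placeholders rather than the actual generated schema:</p>
<pre><code class="lang-javascript">const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, QueryCommand } = require('@aws-sdk/lib-dynamodb');

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Example access pattern: fetch all products for a given brand.
async function getProductsByBrand(brandId) {
  const result = await ddb.send(new QueryCommand({
    TableName: 'Products',                 // illustrative table name
    IndexName: 'GSI1',                     // illustrative index name
    KeyConditionExpression: 'GSI1PK = :pk',
    ExpressionAttributeValues: { ':pk': `BRAND#${brandId}` },
  }));
  return result.Items;
}
</code></pre>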
<p>It is worth noting that the tool’s recommendations are advisory; human review, load testing and validating assumptions remain crucial. The model might, for example, propose GSIs that look optimal for the access patterns but could create hot partitions under high write loads. LLM-driven explanations can also occasionally rely on implicit assumptions (e.g. expected query frequencies), and you need to verify that these actually match your real usage.</p>
]]></content:encoded></item><item><title><![CDATA[Spec-Driven Development: Coordinating Code with Kiro AI IDE]]></title><description><![CDATA[The new AI IDE, Kiro, has just been launched in preview, and over the past few weeks, I have been able to try its features. Below, I share some of my initial experiences and the unique advantages Kiro has brought to my software development flow.  
Ki...]]></description><link>https://blog.marikabergman.com/spec-driven-development-coordinating-code-with-kiro-ai-ide</link><guid isPermaLink="true">https://blog.marikabergman.com/spec-driven-development-coordinating-code-with-kiro-ai-ide</guid><category><![CDATA[buildwithkiro]]></category><category><![CDATA[software development]]></category><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Mon, 14 Jul 2025 22:52:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752248341868/6863d591-c05f-4a10-aa63-dc004cfbd51a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The new AI IDE, Kiro, has just been launched in preview, and over the past few weeks, I have been able to try its features. Below, I share some of my initial experiences and the unique advantages Kiro has brought to my software development flow.  </p>
<p>Kiro has two different ‘modes’: spec mode and vibe mode. The vibe mode is what most developers will already be familiar with - direct implementation based on a prompt, conversational in style and without a formal structure. However, what makes Kiro unique is the spec mode. During a spec session, a formal workflow is followed using a structured approach. This makes it easier to plan and coordinate tasks, and results in a paper trail for future reference and team collaboration. The whole flow of spec-driven development is something new and different, and I have really enjoyed this organised and logical way of programming, in contrast to the at times chaotic vibe coding where you are unsure of what is happening and how to fix things that have gone wrong.</p>
<p>I have been testing Kiro with a couple of existing projects, where the expectation for Kiro is to start by scanning the project and trying to understand the requirements and the structure of it. Kiro creates spec files that are like documentation for your own little project manager.</p>
<p>To illustrate, I'll use a simple serverless project that utilizes AWS services. Kiro is versatile and compatible with any cloud provider, but AWS was simply what I used for this particular project. The project had functioning Lambda function code that interacts with Amazon Bedrock and some database tables, but there was no infrastructure as code apart from an empty CloudFormation folder. Generally, the project was unorganised and incomplete, and my first prompt asked Kiro to improve the repository structure and also add infrastructure-as-code capability to the project. Based on this prompt, Kiro started to create documentation in the <code>specs</code> folder following its standard <code>design - requirements - tasks</code> flow. I will describe the flow below in more detail in the context of this specific example.</p>
<h2 id="heading-design">Design</h2>
<p>The purpose of the <code>design.md</code> document is to describe what Kiro is aiming to achieve with the steps that it is planning to take. In my specific example, the document contained the following details:</p>
<ul>
<li><p>Current state of the architecture (for example, <code>agent.py</code> located at the repository root)</p>
</li>
<li><p>Target state of the architecture represented as a directory tree</p>
</li>
<li><p>Infrastructure as code - listing how the files will be structured and where things like development environment parameters will be stored. The design document also listed all the AWS resources that the IaC is meant to create</p>
</li>
<li><p>Documentation updates that would be done on the <code>README.md</code> after the steps have been completed</p>
</li>
<li><p>Data models that would be used in the DynamoDB tables</p>
</li>
<li><p>It also describes things like error handling, testing strategy and implementation considerations</p>
</li>
</ul>
<p>Once Kiro has completed the design document it will ask you to verify whether you are happy with it before moving on to the next step. At this point, you are able to ask it to make modifications if you want to change something or if you want to exclude something (for example, testing) and tackle it separately later on.</p>
<h2 id="heading-requirements">Requirements</h2>
<p>Once you have confirmed that you are happy with the design file, Kiro will move on to create the <code>requirements.md</code> file. In this document, it will summarise the requirements into user stories, which for my specific project included, for example, the following:  </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752532474883/fb647178-3946-4ce6-9cf4-533c0edf2ff8.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752532503027/3e77d8bb-8b3f-43aa-84ad-26de40067e74.png" alt class="image--center mx-auto" /></p>
<p>Each of these requirements will have acceptance criteria listed in the form of <code>when - then - shall</code> statements:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752532718746/17d6a767-5683-4ea6-83aa-bc59b713dcff.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-tasks">Tasks</h2>
<p>After you have confirmed that you are happy with the requirements file, the fun part begins. Kiro is going to create a <code>tasks.md</code> file. This file lists all the different tasks that need to be completed in order to reach the previously described design while following the requirements. In my example, the first task was to create the new directory structure and move the source files. This task included not only creating and moving the files, but also updating any relative import paths if needed. The task file also clearly describes which requirements each specific task is going to fulfil.</p>
<p>The nice thing is that you are able to manually start the tasks directly from the file, and you will always have an easy overview of which tasks have already been completed and which are still waiting for completion. After a task has been completed, you can easily review the changes that have been made:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752225727587/0e0b4afe-a6fc-4e74-824a-ab324b96f59e.png" alt class="image--center mx-auto" /></p>
<p>This allows you to easily follow execution step-by-step and request changes as you go if anything doesn’t look the way you wanted it to.</p>
<h2 id="heading-further-specs">Further specs</h2>
<p>After completing the tasks from my original prompt, I asked Kiro to complete a few additional features. The first request was to add an Amazon API Gateway in front of the Lambda function, and the second was to create a simple frontend application to provide a user-friendly interface. For both of these requests, Kiro created new spec directories with their own <code>design.md</code>, <code>requirements.md</code> and <code>tasks.md</code> files. This separation of concerns makes it easy to keep track of what is happening and to have clear documentation for the future:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752502884796/2c248c3a-b1bf-4a0e-a9a8-696b0c2247a9.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-hooks">Hooks</h2>
<p>Hooks are a feature that I have so far tested only with very basic examples, but there are plenty of use cases where they would be very valuable. Hooks are essentially event listeners that, when triggered by specific IDE events or user actions, automatically launch a full agent execution session. Kiro creates a hook folder, which contains a file for each hook.</p>
<p>The hook I used in this project was one that updates the documentation. It listened for changes to the Python source files, CloudFormation templates and requirements.txt, and automatically updated the <code>README.md</code> document. The prompt used for the agent execution session, when triggered, is the following:</p>
<p>‘<em>The source code or infrastructure has been modified. Please review the changes and update the README.md file to reflect any new functionality, API changes, infrastructure changes, dependencies, or usage instructions. Pay special attention to CloudFormation template changes and deployment instructions. Ensure the documentation stays current with the codebase.</em>‘</p>
<p>The nice thing about this hook is that it doesn’t only update the documentation when Kiro has completed tasks; it will also make updates if you, as a developer, have manually made changes to any of the files. Having said that, each hook run consumes agentic interactions, so there are price considerations in terms of how often you want your hooks to run, and this needs to be balanced.</p>
<h2 id="heading-conclusions">Conclusions</h2>
<p>My experience of Kiro has mainly focused on the main ‘design-requirements-tasks’ flow; I haven’t yet had a chance to test other features such as integration with MCP servers or steering files, and I have only done some initial exploration with the hooks. With such a new product there have, of course, sometimes been small issues and bugs, but generally, I have absolutely enjoyed working with the design-requirements-tasks flow. Whether someone finds this way of working compelling may depend on their personal working and documentation style, but I find the organisation and structure really helpful even for a personal project, and in a teamwork setting it would bring added benefits for coordination. It makes it so much easier when you can clearly follow what the AI is doing and make small adjustments as needed. Having clear documentation for future reference is the second main advantage - imagine coming back to the project after a while and having clear feature spec directories that contain task lists to give you an easy overview of the status.</p>
]]></content:encoded></item><item><title><![CDATA[Maths Revision Material via Amazon Nova: An Experiment]]></title><description><![CDATA[Finding effective ways to help primary school children revise maths and find their knowledge gaps must be one of those areas where AI tools are going to be able to massively help us. Around this idea, I recently embarked on a little experiment. My id...]]></description><link>https://blog.marikabergman.com/maths-revision-material-via-amazon-nova-an-experiment</link><guid isPermaLink="true">https://blog.marikabergman.com/maths-revision-material-via-amazon-nova-an-experiment</guid><category><![CDATA[Amazon Web Services]]></category><category><![CDATA[Amazon Nova]]></category><category><![CDATA[AI]]></category><category><![CDATA[Amazon Bedrock]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Mon, 19 May 2025 13:32:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747579096782/5eac62f4-96f3-47a9-9a3b-cecab4bfc8bb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Finding effective ways to help primary school children revise maths and find their knowledge gaps must be one of those areas where AI tools are going to be able to massively help us. Around this idea, I recently embarked on a little experiment. My idea was born after I found out that the Oak National Academy has maths resources across the UK national curriculum, and they actually have an open API to access some of these resources.</p>
<p>As the API is currently in beta and only some of the resources are available, I started by checking what there is that I could work with. The Oak Academy website has video lessons for different maths units, and each of the lessons has an exit quiz. The exit quizzes could be interesting material to work with, as you could use them to create more similar quizzes that would then test the pupil, for example at the end of the year across the whole year’s curriculum, to see if there are units or lessons that they are not confident with and to help them revise.</p>
<p>The exit quizzes usually contain six questions each, and most of the questions have an image attached to them in this type of format:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747609813303/4660525e-a880-4395-a748-65b8712e4e03.png" alt class="image--center mx-auto" /></p>
<p>I started by fetching the exit quiz materials for years 1 and 2 and saved them in Amazon S3 as PDF files. Next, I would configure the Amazon Nova models to use these materials as a starting point for creating similar content. The idea was to use one Nova model to extract information from the PDF and create content for a new quiz question based on it. Another Nova model would then be used to create an image based on the image description:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747660921821/2701b2ff-4904-4852-bfb4-f7b81a84ddd8.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-describing-the-quizzes-and-creating-content-with-amazon-nova-lite">Describing the quizzes and creating content with Amazon Nova Lite</h2>
<p>In order to use the quizzes that I now have saved as PDF files, I would need to transform them into a format the model can utilise. For this task, I used Amazon Nova Lite, which is a low-cost multimodal model that can process image, video and text input.</p>
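<p>Passing a PDF to the model can be done with the Bedrock Converse API. A minimal sketch of the call is shown below; the model ID, region and prompt text are assumptions for illustration, and error handling is omitted:</p>
<pre><code class="lang-javascript">const { BedrockRuntimeClient, ConverseCommand } = require('@aws-sdk/client-bedrock-runtime');
const { readFileSync } = require('fs');

const client = new BedrockRuntimeClient({ region: 'eu-west-2' });

// Minimal sketch: send one exit-quiz PDF to Nova Lite and ask for structured JSON back.
async function describeQuiz(pdfPath) {
  const response = await client.send(new ConverseCommand({
    modelId: 'amazon.nova-lite-v1:0', // check the exact model ID / inference profile for your region
    messages: [{
      role: 'user',
      content: [
        { document: { format: 'pdf', name: 'exit-quiz', source: { bytes: readFileSync(pdfPath) } } },
        { text: 'Extract the lesson name, question, answer options and correct answer, describe the image, and create a similar new question. Respond as JSON.' },
      ],
    }],
  }));
  return response.output.message.content[0].text;
}
</code></pre>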
<p>I wanted the model to extract some information from the PDF - the lesson name, the question and the answer options. I also wanted it to determine what the correct answer to the question is, as well as create a detailed description of the image that is associated with the question. In addition, I wanted it to create a new, similar question-answer pair and a description for an image that could be associated with that new question. The model returned the data in this kind of format:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"lesson_name"</span>: <span class="hljs-string">"Add and subtract 1 to and from a 2-digit number crossing the tens boundary"</span>,
  <span class="hljs-attr">"original_question"</span>: <span class="hljs-string">"Which decade is missing on the number line? Tick 1 correct answer the fifties the sixties the forties"</span>,
  <span class="hljs-attr">"original_answer_options"</span>: [
    <span class="hljs-string">"the fifties"</span>,
    <span class="hljs-string">"the sixties"</span>,
    <span class="hljs-string">"the forties"</span>
  ],
  <span class="hljs-attr">"original_correct_answer"</span>: <span class="hljs-string">"the sixties"</span>,
  <span class="hljs-attr">"original_imageDescription"</span>: <span class="hljs-string">"The image shows a number line with numbers 49 and 50 marked. There is a gap between 50 and 60, indicating a missing decade. The options provided are 'the fifties', 'the sixties', and 'the forties'."</span>,
  <span class="hljs-attr">"new_question"</span>: <span class="hljs-string">"Which number is missing on the number line? Tick 1 correct answer 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60"</span>,
  <span class="hljs-attr">"new_answer_options"</span>: [
    <span class="hljs-string">"45"</span>,
    <span class="hljs-string">"46"</span>,
    <span class="hljs-string">"47"</span>,
    <span class="hljs-string">"50"</span>
  ],
  <span class="hljs-attr">"new_correct_answer"</span>: <span class="hljs-string">"45"</span>,
  <span class="hljs-attr">"new_imageDescription"</span>: <span class="hljs-string">"The image shows a number line with numbers 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, and 60 marked. There is a gap between 44 and 46, indicating a missing number. The options provided are '45', '46', '47', and '50'."</span>
}
</code></pre>
<p>The model was good at extracting different questions from the document, determining correct answers, and describing the relevant images. Also, the newly created questions seemed to have a similar style and were aimed at a similar age group, at least based on the limited tests that I was able to complete within a restricted timeframe. The prompt had to be modified a few times based on different issues I came across, such as the model getting confused by blank spaces in some of the questions. Further tests would most likely reveal more such issues that would help refine the prompt further.</p>
<h2 id="heading-creating-a-new-image-with-amazon-nova-canvas">Creating a new image with Amazon Nova Canvas</h2>
<p>From the process with Amazon Nova Lite I now had a new question, an answer to it, and a description for a new image that could be used. The new image could now be created with Amazon Nova Canvas, which is an image generation model that can create images from text or image input. I first tried creating images based purely on the text prompt that I had from the previous step, combined with a general prompt that described what the purpose of the image is and what style should be followed. The images were of nice quality, but the difficulty was getting the style right. It seemed that even after trying several types of prompts, the model was unwilling to create the type of images that would be suitable for children’s maths exercises and rather wanted to create more realistic images. For example, if the prompt asked for a clear image of a certain number of apples next to each other that a child could easily count, the created image showed the apples in a more artistic and realistic display so that some apples were only partially visible. An even bigger issue was accuracy - if the prompt asked for an image with three apples, the produced images sometimes contained two or four apples instead, and the results didn’t seem very reliable.</p>
<p>I then tried adding the original image as a conditioning image to the prompt - in this example to get the model to create a number line like in the first example:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747612362678/2ef927f7-2e7b-491c-b00d-aa587bb0ed04.png" alt class="image--center mx-auto" /></p>
<p>I thought the conditioning image might help get the style right and help the model understand exactly what I need. Style-wise, this worked as expected and the created image was very close to the style of the original image. For example, when the prompt requested a number line and the conditioning image showed a number line, the created image had some kind of number line as well - whereas without the conditioning image the whole concept of the number line was often lost. However, the number line still didn’t look right otherwise. I further experimented by changing the control strength. When the strength was closer to 1, I could get images that were very close to the original image (and didn’t follow the prompt requesting changes to match the new question &amp; answer pair at all). By reducing the strength closer to zero, the results became more interesting and didn’t follow the original image exactly, but still displayed, for example, a number line as requested. But as is clearly visible, the numbers themselves were completely mixed up, and these images wouldn’t work for their intended purpose:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747610603706/b922c14f-b9ab-4d13-be4f-56c456b83d11.png" alt class="image--center mx-auto" /></p>
<p>In summary, the workflow of using the existing quizzes as a foundation for creating more similar question-answer pairs worked quite well, apart from the generated images containing inaccurate numerals. If it were possible to create accurate images, this workflow could be used to create a large number of questions across the curriculum. As the questions were not simply copies of each other with different numbers, but rather the model showed some creativity, the revision quizzes could remain interesting for the children. I will re-try this idea at some point when the models are capable of higher numeral accuracy in image creation.</p>
]]></content:encoded></item><item><title><![CDATA[Building Location-Based Access Patterns in DynamoDB]]></title><description><![CDATA[When working on adding location-based search to a DynamoDB database project, I discovered there's more than one way to handle geospatial queries. The general principle is that geospatial queries can be managed by storing a geohash as an attribute for...]]></description><link>https://blog.marikabergman.com/building-location-based-access-patterns-in-dynamodb</link><guid isPermaLink="true">https://blog.marikabergman.com/building-location-based-access-patterns-in-dynamodb</guid><category><![CDATA[AWS]]></category><category><![CDATA[DynamoDB]]></category><category><![CDATA[geohash]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Tue, 12 Nov 2024 15:01:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730722785782/d42edd74-9fdf-4f21-8bcb-80f3e89ba191.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When working on adding location-based search to a DynamoDB database project, I discovered there's more than one way to handle geospatial queries. The general principle is that geospatial queries can be managed by storing a geohash as an attribute for each item in the database. A geohash is a string that has been created based on longitude and latitude. The length of the geohash determines how wide an area it covers - essentially the whole earth is divided into boxes and you can choose the size of the boxes you want to work with. I will not go into a deeper explanation of it here as there are plenty of good resources online that explain in detail the way geohashes work.</p>
<p>Looking for a way to implement this, I first looked into the <a target="_blank" href="https://www.npmjs.com/package/dynamodb-geo">DynamoDB Geohash</a> library. I wanted to understand whether this package would work with my existing schema, where I was already planning to store all of the items in a single DynamoDB table.</p>
<h2 id="heading-with-dynamodb-geo-npm">With dynamodb-geo-npm</h2>
<p>I started by creating a sample project where I would create a new table using the library to see how exactly it works. The table that was created had this schema:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730714237632/8b784b6d-0474-47db-8361-b2d1d8d482d8.png" alt class="image--center mx-auto" /></p>
<p>The geohash attribute contains the full geohash, whereas the hashKey attribute contains the first 5 characters of the geohash to ensure the items are evenly distributed across partitions.</p>
<p>In addition, the library created a local secondary index and that is where the actual search based on the geohash will happen:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730714282402/90a48e22-d55e-4f66-bfa8-db1fac32c83d.png" alt class="image--center mx-auto" /></p>
<p>Creating queries using the library is very simple: you define the coordinates of your centre point and the radius in meters around it within which you want to search for locations.</p>
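<p>A radius query with the library looks roughly like the sketch below, based on the library’s README; the table name and coordinates are placeholders:</p>
<pre><code class="lang-javascript">const AWS = require('aws-sdk');
const ddbGeo = require('dynamodb-geo');

// The library is built around the AWS SDK v2 DynamoDB client.
const config = new ddbGeo.GeoDataManagerConfiguration(new AWS.DynamoDB(), 'Locations');
config.hashKeyLength = 5; // the first 5 characters of the geohash become the partition key

const geoDataManager = new ddbGeo.GeoDataManager(config);

// Find every item within 2 km of the centre point.
geoDataManager
  .queryRadius({
    RadiusInMeter: 2000,
    CenterPoint: { latitude: 51.5074, longitude: -0.1278 },
  })
  .then((items) =&gt; console.log(items));
</code></pre>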
<p>The limitation is that the library doesn’t support composite primary keys. So let’s say you have different types of items, such as schools and libraries. If you wanted to search only for libraries within a certain location, you would ideally create some kind of composite key that allows you to separate these types of items as required by your access patterns. Without that level of separation, you end up getting both types of items in the search results. So for this to work, you would need a separate DynamoDB table for schools and another one for libraries, or alternatively apply filters in the application layer after retrieving the results.</p>
<h2 id="heading-without-dynamodb-geo-npm">Without dynamodb-geo-npm</h2>
<p>Based on these limitations, I wanted to try implementing this in a way that would work with the existing schema, where all items are stored within one table following the principle of single-table design. This required a couple of steps that had been handled behind the scenes when using the library.</p>
<p>The first step was to use the <a target="_blank" href="https://www.npmjs.com/package/ngeohash"><code>ngeohash</code></a> library to create a geohash for each item that is saved in the database. For that you need the coordinates of the location, and the library will then create a geohash from them. This geohash will be a string containing letters and numbers. Previously, when testing with dynamodb-geo, the geohashes were actually number-only hashes, as that library internally converts the alphanumeric hash into a numeric format. I then saved the geohash in the existing database and created a global secondary index that would allow me to search based on the item type (library or school etc.) and the geohash:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731422620381/94c49623-7187-454b-9890-28b474a1ed1f.png" alt class="image--center mx-auto" /></p>
<p>It is important to have the geohash as the sort key, so that you are able to later use the ‘begins with’ type queries.</p>
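<p>Creating and storing the geohash is a one-liner with <code>ngeohash</code>. A minimal sketch is shown below; the attribute names mirror my single-table design and GSI, so they are assumptions that would differ in another schema:</p>
<pre><code class="lang-javascript">const geohash = require('ngeohash');

// A 9-character geohash gives a precision of roughly a few metres for the stored item.
const fullHash = geohash.encode(51.5074, -0.1278, 9);

const item = {
  PK: 'LIBRARY#123',
  SK: 'METADATA',
  itemType: 'LIBRARY',   // GSI partition key
  geohash: fullHash,     // GSI sort key, enables 'begins with' queries
  latitude: 51.5074,
  longitude: -0.1278,
};
</code></pre>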
<p>In order to query from this table, the first step is to determine a ‘bounding box’ around the centre point. This means that you have a geopoint and then determine a radius, such as 2000m, around it. Using the <a target="_blank" href="https://www.npmjs.com/package/geolib"><code>geolib</code></a> library it was easy to get bounding box coordinates around a centre point:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730739717383/f70e8a65-bc6b-4d94-bf93-3c31f376955e.png" alt class="image--center mx-auto" /></p>
<p>As the bounding box is a ‘box’, it will end up covering some coordinates that are actually outside of the defined radius, as described above. These locations can be filtered out later if only the locations within a certain distance are required.</p>
<p>The next step was to determine the geohashes that cover these coordinates. This can be done using the ngeohash library, which will return an array of geohashes based on the defined coordinates and the geohash length that you want. The length of a geohash determines its precision and the size of the area it covers, so that is something you will have to decide based on your exact scenario. For example, for the 2000m radius a 6-character geohash might be the best option, as using only 5 characters would give you too-wide cells, whereas using 7 characters would be too fine-grained and end up requiring more queries than necessary. The concept might seem confusing at first, but testing with a few different variations will make it easier to see what the options are and what makes sense. Based on the coordinates and the desired geohash length, the ngeohash library will return an array of geohash cells:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730739602666/01e019c9-5132-4c9e-ba59-77bd1903ffd9.png" alt class="image--center mx-auto" /></p>
<p>These cells are again wider than the actual bounding box, simply because of the way the cells are calculated, so you will once more end up with some results that don’t belong inside the exact search radius. These need to be filtered out at the end if you require results only within the defined radius.</p>
<p>Now that you have a list of hashes, you need to make a query for each of them. This can be done using the GSI. As the GSI sort key contains the full geohash, when searching for items we need to use the ‘begins with’ option, because we are searching for items within a certain area rather than at an exact geolocation, and geohashes have a hierarchical property where prefixes represent larger areas:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730720689471/bfb87b26-7e79-406e-924d-0483aa9a7e15.png" alt class="image--center mx-auto" /></p>
<p>It is worth mentioning that although this system will require several database queries, the same would apply if using the dynamodb-geo library as it would also do several database queries behind the scenes (according to the documentation typically 8 queries are executed per radius search).</p>
<p>As a result, you would have a list of library locations that are located within those geohashes. Each item would have its coordinates, and based on those coordinates it is possible to do further organising, such as finding out which location is closest to the exact coordinates of the search point and calculating the distance by using the haversine formula. And as mentioned before, due to the way the search was done, there would be some items in the results that don’t fall within the exact radius, so those would need to be filtered out if that’s your requirement.</p>
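<p>To put the pieces together, the whole radius search might look roughly like the following sketch. The table, index and attribute names match the earlier item example and are assumptions from my own schema rather than anything the libraries require:</p>
<pre><code class="lang-javascript">const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, QueryCommand } = require('@aws-sdk/lib-dynamodb');
const geohash = require('ngeohash');
const { getBoundsOfDistance, getDistance } = require('geolib');

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function findLibrariesNear(centre, radiusInMetres) {
  // 1. Bounding box around the centre point (returns the SW and NE corners).
  const [southWest, northEast] = getBoundsOfDistance(centre, radiusInMetres);

  // 2. 6-character geohash cells covering the bounding box.
  const cells = geohash.bboxes(
    southWest.latitude, southWest.longitude,
    northEast.latitude, northEast.longitude,
    6
  );

  // 3. One 'begins with' query per cell against the GSI.
  const results = [];
  for (const cell of cells) {
    const { Items } = await ddb.send(new QueryCommand({
      TableName: 'Locations',                  // assumed table name
      IndexName: 'GeohashIndex',               // assumed GSI name
      KeyConditionExpression: 'itemType = :type AND begins_with(geohash, :hash)',
      ExpressionAttributeValues: { ':type': 'LIBRARY', ':hash': cell },
    }));
    results.push(...(Items ?? []));
  }

  // 4. Drop anything outside the exact radius (the cells cover a wider area than the radius).
  return results.filter((item) =&gt; getDistance(centre, item) &lt;= radiusInMetres);
}
</code></pre>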
<h2 id="heading-conclusion">Conclusion</h2>
<p>Creating a geohash-based search without using the dynamodb-geo library is not a very complicated thing to do and the steps are quite straightforward. But of course deciding which option to go with is a more complicated decision and there are many trade-offs to consider. The library would be optimised, and if you are working with your own schema, the latency and cost will depend on the way you have designed your database and access patterns. The main aspects impacting the performance of your design would be the geohash precision (length), the number of queries that are needed, and the post-processing overhead depending on how much you need to further process the items after the database queries have been completed. So as usual with DynamoDB, this would require careful consideration and monitoring.</p>
]]></content:encoded></item><item><title><![CDATA[Designing a GitOps Pipeline for AWS and Terraform]]></title><description><![CDATA[EDIT: An updated and more accurate diagram can be found in the project’s repository. The repository also contains further technical implementation details.  
I have wanted to learn more about GitOps and found the perfect opportunity to do that with a...]]></description><link>https://blog.marikabergman.com/designing-a-gitops-pipeline-for-aws-and-terraform</link><guid isPermaLink="true">https://blog.marikabergman.com/designing-a-gitops-pipeline-for-aws-and-terraform</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[gitops]]></category><category><![CDATA[github-actions]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Tue, 15 Oct 2024 13:54:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728994081216/1f5a8099-b042-4528-8893-16b857a14f8b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>EDIT</strong>: An updated and more accurate diagram can be found in the project’s <a target="_blank" href="https://github.com/mariberg/gitops-2024">repository</a>. The repository also contains further technical implementation details.  </p>
<p>I have wanted to learn more about GitOps and found the perfect opportunity to do that with a hands-on approach at the <a target="_blank" href="https://courses.morethancertified.com/p/gitops-with-terraform">GitOps for Terraform MiniCamp</a>. In this post, I'll walk you through the technical requirements of the project—what tools, resources, and setups are needed to implement the fully functional GitOps pipeline to deploy AWS infrastructure using Terraform—before delving into the implementation details in future posts.</p>
<p>In this project, I will create a GitHub Actions CI/CD pipeline that will run through certain jobs and deploy the AWS infrastructure. I have summarized the steps of the pipeline and the way it will be triggered and connected to other services in the diagram below. This is, of course, my current understanding based on the requirements, and it might be that when I start the actual implementation, I will notice some parts are not accurate and this diagram will be modified.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728986923935/4947cc94-3dd7-4c27-93e8-03963d39949e.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-backend-resources">Backend resources</h2>
<p>The first step in the project will be to create the backend infrastructure using CloudFormation. This will be done separately, not as part of the Terraform infrastructure, as it will be used to store the Terraform state file itself. There will be a DynamoDB table that holds only a single lock item, keyed by LockID. This item is simply used to make sure that several workflows can’t make changes to the Terraform infrastructure simultaneously, which would lead to conflicts. When our workflow is modifying the infrastructure, there will be a LockID item in the DynamoDB table, which prevents any other workflows from making changes until we have finished. The actual Terraform state file will be stored in an S3 bucket.</p>
<p>Additionally, the OIDC role that will be needed in the later stages to access AWS will be created with Cloudformation. All of these are resources that will remain unchanged during later work and will lay the foundation for this project.</p>
<h2 id="heading-github-actions-workflow">GitHub Actions workflow</h2>
<p>The first step I want to automate after making modifications to my code is a step that should run even before being able to commit any code changes. This step is called a pre-commit hook. The hook will run <code>terraform fmt</code>, which has the main purpose of ensuring that the Terraform code is formatted properly following style guidelines.</p>
<p>The next step will be to push my code to a feature branch in my GitHub repository. The main branch will be protected and the correct process will be to push the code always to a feature branch first, which will then trigger the GitHub actions workflow. The workflow starts running through several different steps. These will include</p>
<ul>
<li><p><code>TFLint</code> (analyze Terraform code for best practices, syntax issues, and possible errors)</p>
</li>
<li><p><code>Terraform fmt</code> (same as the pre-commit hook, just to make sure)</p>
</li>
<li><p><code>Terraform validate</code> (making sure that the Terraform configuration is valid according to the Terraform syntax)</p>
</li>
<li><p><code>Terraform plan</code></p>
</li>
<li><p><code>Infracost</code></p>
</li>
</ul>
<p>In the last step, a tool called Infracost is used to estimate the cost of the Terraform infrastructure. At this point, a tool called Open Policy Agent (OPA) will also be integrated into the workflow. The idea is to enforce a policy that fails if the estimated cost of the infrastructure exceeds a certain threshold. This is a way of automatically enforcing cost controls in the Terraform workflow.</p>
<h2 id="heading-dispatch-step">Dispatch step</h2>
<p>We want to make sure that there is some kind of human interaction before the actual resources are deployed on AWS and there are several ways of doing that. The easiest way will be to add a dispatch step, which will ensure that the workflow requires manual approval before moving on to the deployment.</p>
<p>When the approval for deployment has been given, the workflow will need a way of authenticating itself to access your AWS account to provision resources. The most secure way of doing this is using OIDC—OpenID Connect. OIDC will utilize the IAM role that has already been created using CloudFormation. GitHub Actions will assume that IAM role, which will give it short-lived credentials to access AWS. These credentials will be destroyed afterwards and cannot be re-used.</p>
<h2 id="heading-deployment">Deployment</h2>
<p>Once the workflow has access to AWS, it will first make sure that no lock is currently held in the DynamoDB table. It will then move on to provision the resources that have been defined in the Terraform code. In this project, it would be a simple EC2 instance. In addition to that, some resources can be provisioned to monitor our infrastructure. AWS Config and EventBridge can be set up to run a scheduled drift detection, which would notify us if our Terraform state file doesn’t match the deployed infrastructure. Other services such as Lambda and CloudWatch Events could be set up to run scheduled port accessibility checks for the EC2 instance, as sketched below.</p>
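<p>Purely as an illustration of that last idea (this is not part of the pipeline yet), such a port check Lambda could look roughly like the following Node.js sketch, with the target host and port assumed to come from environment variables:</p>
<pre><code class="lang-javascript">const net = require('net');

// Illustrative sketch: a scheduled Lambda that checks whether a port on the EC2 instance is reachable.
exports.handler = async () =&gt; {
  const host = process.env.TARGET_HOST;
  const port = Number(process.env.TARGET_PORT || 22);

  const reachable = await new Promise((resolve) =&gt; {
    const socket = net.createConnection({ host, port, timeout: 3000 });
    socket.on('connect', () =&gt; { socket.destroy(); resolve(true); });
    socket.on('timeout', () =&gt; { socket.destroy(); resolve(false); });
    socket.on('error', () =&gt; resolve(false));
  });

  console.log(`Port ${port} on ${host} reachable: ${reachable}`);
  return { reachable };
};
</code></pre>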
<p>After the required changes to the infrastructure have been made, the new state will be stored in the state file in the S3 bucket and the DynamoDB state lock will be released.</p>
<p>After successful deployment, it is time to merge the feature branch code to the main branch. In the approach that we are taking, the infrastructure is the ‘source of truth’, meaning that the deployment will happen first, and until the merge the main branch will actually have code that no longer matches the current infrastructure. The alternative approach would be to merge the code to the main branch first, which would keep the main branch strictly as the ‘source of truth’. This would come with challenges, such as having to roll back code if there turns out to be an issue with the deployment, and for this reason validating everything before merging is often the preferred workflow.</p>
<h2 id="heading-next-steps">Next steps</h2>
<p>There are several ‘bonus challenges’ that I would like to add to the project as soon as I have implemented the above-described parts. One of them would be to deploy to multiple environments (stage, prod). It would also be very useful to automatically open an issue for the repository if the schedule check notices that the infrastructure has drifted. Another nice addition would be to configure the GitHub actions workflow to ignore non-terraform changes, meaning the workflow would only start running if there have been changes to the Terraform code. There might be other extension ideas once I start working through the implementation.</p>
]]></content:encoded></item><item><title><![CDATA[AI Adventures at a HealthTech Hackathon:  A Developer's Perspective]]></title><description><![CDATA[I recently participated in MediHacks 2024 hackathon. During the hackathon, our team accepted the challenge of creating an AI-powered conversational co-pilot for emergency dispatchers. As emergency dispatchers' training and guidebooks are often outdat...]]></description><link>https://blog.marikabergman.com/ai-adventures-at-a-healthtech-hackathon-a-developers-perspective</link><guid isPermaLink="true">https://blog.marikabergman.com/ai-adventures-at-a-healthtech-hackathon-a-developers-perspective</guid><category><![CDATA[medihacks]]></category><category><![CDATA[hackathon]]></category><category><![CDATA[full stack]]></category><category><![CDATA[ai integration]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Mon, 29 Jul 2024 11:06:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721059657131/078e50c4-1bb6-49fb-9317-8c931852def3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently participated in the <a target="_blank" href="https://www.medihacks.org">MediHacks 2024 hackathon</a>. During the hackathon, our team accepted the challenge of creating an AI-powered conversational co-pilot for emergency dispatchers. As emergency dispatchers' training and guidebooks are often outdated, there is a need to develop a training tool that would leverage modern technology. This tool would be a conversational AI that can simulate various life-threatening scenarios, helping dispatchers rehearse and improve their response skills.</p>
<p>As I was working in a team with two amazing AI developers, this seemed like a very exciting opportunity to get my first experience building a solution that utilizes AI. I'm not going into a detailed explanation of the AI solution in this article as it is not my area of speciality; instead I am focusing on my experience as a full-stack developer who took part in designing, developing and testing the app as well as integrating some of the AI endpoints into our frontend application. We additionally had a frontend developer in the team, so we had a balanced team with different skills and interests, and that was a great starting point.</p>
<h2 id="heading-designing-the-app">Designing the app</h2>
<p>We started our project by brainstorming what kind of functionalities the app should have for the user and what kind of building blocks would be required. Do we need</p>
<ul>
<li><p>a database where to save data --&gt; yes, we want to save user data and data of completed simulations</p>
</li>
<li><p>a backend --&gt; yes, to interact with the database</p>
</li>
<li><p>authentication --&gt; yes, an easy implementation with Firebase</p>
</li>
</ul>
<p>We would need a React app, a NodeJS backend, a MongoDB database and the AI element, which our application would access via FastAPI. It was decided that we would connect the React frontend directly to the FastAPI backend, as for the purposes of this hackathon it would be easier for us to integrate the endpoints directly into the React app. The architecture of our application is described in the below diagram:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721044885923/929ebd1e-0d71-496e-b234-147990771c95.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-implementation">Implementation</h2>
<p>As we had team members with various specialities in our team, we started working on all the different components simultaneously. We started creating the frontend components based on Figma design, building the NodeJS backend that interacts with MongoDB and verifies authentication as well as building the AI backend.</p>
<p>Each system was tested separately - for example by testing the NodeJS backend with Postman, we made sure that the database interactions worked as expected and Firebase authentication tokens were validated. Similarly, the AI API was tested using similar tools to make sure that it is accepting the API requests and returning the kind of information we expect.</p>
<p>When everything was working, it was just a matter of integrating everything. Well, they should fit together like pieces of a puzzle as we already planned everything, right? It's never that straightforward, but always a lot of fun and a great learning experience.</p>
<h2 id="heading-integrating-frontend-backend-and-ai-backend">Integrating frontend, backend and AI backend</h2>
<p>Integrating the frontend with the NodeJS backend was fairly straightforward as our application is dealing with simple CRUD actions. Integrating the frontend application with the AI API was something I didn't have previous experience with. In all fairness, it was not that different from making 'traditional' API calls, especially as our AI API was built with a simple architecture for the purposes of this hackathon.</p>
<p>We didn't have a way of managing context in the LLM application; instead, we sent a lot of data (all previous conversation history) with every single API call. This needed careful management on the frontend of the data that always had to be passed to the backend. This would of course be an inefficient way of running a production application, but it worked well for our prototype.</p>
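<p>In practice, each call from the React app looked roughly like the sketch below. The endpoint URL and payload shape are illustrative placeholders rather than the actual API:</p>
<pre><code class="lang-javascript">// Illustrative sketch: every request carries the full conversation history,
// because the AI backend kept no context between calls.
async function sendMessage(history, newMessage) {
  const response = await fetch('https://example.com/api/simulation/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [...history, { role: 'user', content: newMessage }],
    }),
  });
  const data = await response.json();
  // The frontend appends both the user message and the AI reply to the history it keeps.
  return [...history, { role: 'user', content: newMessage }, { role: 'assistant', content: data.reply }];
}
</code></pre>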
<h2 id="heading-lessons-learnt">Lessons learnt</h2>
<p>For me, the main benefit of this project was getting a better understanding of how AI APIs are built. I got a lot of insight into the challenges of collecting appropriate data and had a chance to see how adjusting the prompt can fully change the responses we get from the AI.</p>
<p>I also learnt how beneficial it is to do early integrations during development, even if it is just a part of the applications or one API endpoint. For example, we hadn't considered how there would be a delay in the responses we get from the AI, which meant we didn't have the time to change the UI to make it more obvious to the user that they have to wait. Had we tested it earlier, we could have still applied some UI changes that could have improved the user experience by making the waiting period more transparent and less frustrating.</p>
<p>Overall, it was a great hackathon where I learned a lot and it was very rewarding to see how we managed to build an application where the user can interact with AI within such a short timeline.</p>
<p>Further technical details (including the AI implementation) can be found in the <a target="_blank" href="https://github.com/adimidania/911-Coach-AI">GitHub repository</a>.</p>
]]></content:encoded></item><item><title><![CDATA[React + AWS + Terraform Tutorial: Deploying a Serverless Contact Form]]></title><description><![CDATA[This tutorial walks you through creating a simple contact form front-end web application with React. The contact form will be connected to a serverless AWS backend, leveraging Amazon Simple Email Service, AWS Lambda and API Gateway. When you submit t...]]></description><link>https://blog.marikabergman.com/react-aws-terraform-tutorial-deploying-a-serverless-contact-form</link><guid isPermaLink="true">https://blog.marikabergman.com/react-aws-terraform-tutorial-deploying-a-serverless-contact-form</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Tue, 23 Jan 2024 11:24:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705938505980/8e92f181-2ac9-474c-8c51-62d2ff8a6808.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This tutorial walks you through creating a simple contact form front-end web application with React. The contact form will be connected to a serverless AWS backend, leveraging Amazon Simple Email Service, AWS Lambda and API Gateway. When you submit the content on the frontend application, the contents are sent to your email.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705746336678/aa982145-3eca-4fca-b100-e6b879b7eb9d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-before-you-begin">Before you begin</h2>
<p>This tutorial utilizes several technologies and requires familiarity with basic tools like Git, a basic understanding of frontend development, and knowledge of AWS services and infrastructure as code. The main purpose of this tutorial is to build a small project where all of these technologies are connected. If any of these tools is new to you and feels challenging, it is a great chance to go and read some documentation and then gain some practical experience by following this tutorial.</p>
<p>This project has a simple folder structure, where the main component is the Terraform code that is used to create the AWS resources needed to run the serverless backend. We will also have folders to store the Lambda code and frontend code in the same repository:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705937496859/4084400c-04f6-4164-848d-850004adca0a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-react">React</h2>
<p>The React project is going to be simple to limit the length of this tutorial, but you could of course easily extend the frontend application. We will use a React UI library called Mantine to create the contact form. Mantine has several pre-built components and styles and we can create a simple form easily.</p>
<p>The easiest way to get started with Mantine is by using one of their ready Vite templates. You can read more about it <a target="_blank" href="https://mantine.dev/getting-started/">here</a> and find the Vite template <a target="_blank" href="https://github.com/mantinedev/vite-template">here</a>. Vite is a front-end tool that can be used to create React apps. The template includes all React and Mantine UI dependencies, so by using this template you don't need to do any further installations and it is a quick way to get started.</p>
<p>After you have used the template to create a new repository and created a local project from it, you can install dependencies (<code>npm install</code>) and run the project (<code>npm run dev</code>). The home page contains a mock page and to keep this project really simple, we will add our contact form directly on the homepage.</p>
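<p>For reference, the initial setup from the command line looks roughly like this (the repository URL is just a placeholder for whatever you named your copy of the template):</p>
<pre><code class="lang-bash"># clone the repository you created from the Mantine Vite template
git clone https://github.com/your-username/your-contact-form-repo.git
cd your-contact-form-repo

# install the dependencies and start the development server
npm install
npm run dev
</code></pre>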
<p>Mantine UI has a ready-made form that we can use; we only need to add some logic to make it work. To use the Mantine form, first run <code>npm install @mantine/form</code>, and after that you are ready to replace the code in your <code>Home.page.tsx</code> file with the following:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { useForm } <span class="hljs-keyword">from</span> <span class="hljs-string">'@mantine/form'</span>;
<span class="hljs-keyword">import</span> {
  TextInput,
  Textarea,
  SimpleGrid,
  Group,
  Title,
  Button
} <span class="hljs-keyword">from</span> <span class="hljs-string">'@mantine/core'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">HomePage</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> form = useForm({
    <span class="hljs-attr">initialValues</span>: {
      <span class="hljs-attr">name</span>: <span class="hljs-string">''</span>,
      <span class="hljs-attr">email</span>: <span class="hljs-string">''</span>,
      <span class="hljs-attr">subject</span>: <span class="hljs-string">''</span>,
      <span class="hljs-attr">message</span>: <span class="hljs-string">''</span>,
    },
    <span class="hljs-attr">validate</span>: {
      <span class="hljs-attr">name</span>: <span class="hljs-function">(<span class="hljs-params">value</span>) =&gt;</span> value.trim().length &lt; <span class="hljs-number">2</span>,
      <span class="hljs-attr">email</span>: <span class="hljs-function">(<span class="hljs-params">value</span>) =&gt;</span> !<span class="hljs-regexp">/^\S+@\S+$/</span>.test(value),
      <span class="hljs-attr">subject</span>: <span class="hljs-function">(<span class="hljs-params">value</span>) =&gt;</span> value.trim().length === <span class="hljs-number">0</span>,
    },
  });

  interface FormValues {
    <span class="hljs-attr">name</span>: string;
    email: string;
    <span class="hljs-comment">//subject: string;</span>
    message: string;
  }

  <span class="hljs-keyword">const</span> handleSubmit = <span class="hljs-keyword">async</span> (values: FormValues) =&gt; {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-comment">// Replace with your actual API endpoint URL</span>
      <span class="hljs-keyword">const</span> apiUrl = <span class="hljs-string">'https://12345.execute-api.eu-west-2.amazonaws.com/test'</span>; 

      <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> fetch(apiUrl, {
        <span class="hljs-attr">method</span>: <span class="hljs-string">'POST'</span>,
        <span class="hljs-attr">headers</span>: {
          <span class="hljs-string">'Content-Type'</span>: <span class="hljs-string">'application/json'</span>,
        },
        <span class="hljs-attr">body</span>: <span class="hljs-built_in">JSON</span>.stringify(values),
      });

      <span class="hljs-keyword">if</span> (response.ok) {
        <span class="hljs-comment">// Request successful, do something here</span>
        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Email sent successfully!'</span>);
      } <span class="hljs-keyword">else</span> {
        <span class="hljs-comment">// Request failed, handle errors here</span>
        <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error sending email.'</span>);
      }
    } <span class="hljs-keyword">catch</span> (error) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'An error occurred:'</span>, error);
    }
  };

  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">form</span> <span class="hljs-attr">onSubmit</span>=<span class="hljs-string">{form.onSubmit(handleSubmit)}</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">Title</span>
        <span class="hljs-attr">order</span>=<span class="hljs-string">{2}</span>
        <span class="hljs-attr">size</span>=<span class="hljs-string">"h1"</span>
        <span class="hljs-attr">style</span>=<span class="hljs-string">{{</span> <span class="hljs-attr">fontFamily:</span> '<span class="hljs-attr">Greycliff</span> <span class="hljs-attr">CF</span>, <span class="hljs-attr">var</span>(<span class="hljs-attr">--mantine-font-family</span>)' }}
        <span class="hljs-attr">fw</span>=<span class="hljs-string">{900}</span>
        <span class="hljs-attr">ta</span>=<span class="hljs-string">"center"</span>
      &gt;</span>
        Get in touch
      <span class="hljs-tag">&lt;/<span class="hljs-name">Title</span>&gt;</span>

      <span class="hljs-tag">&lt;<span class="hljs-name">SimpleGrid</span> <span class="hljs-attr">cols</span>=<span class="hljs-string">{{</span> <span class="hljs-attr">base:</span> <span class="hljs-attr">1</span>, <span class="hljs-attr">sm:</span> <span class="hljs-attr">2</span> }} <span class="hljs-attr">mt</span>=<span class="hljs-string">"xl"</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">TextInput</span>
          <span class="hljs-attr">label</span>=<span class="hljs-string">"Name"</span>
          <span class="hljs-attr">placeholder</span>=<span class="hljs-string">"Your name"</span>
          <span class="hljs-attr">name</span>=<span class="hljs-string">"name"</span>
          <span class="hljs-attr">variant</span>=<span class="hljs-string">"filled"</span>
          {<span class="hljs-attr">...form.getInputProps</span>('<span class="hljs-attr">name</span>')}
        /&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">TextInput</span>
          <span class="hljs-attr">label</span>=<span class="hljs-string">"Email"</span>
          <span class="hljs-attr">placeholder</span>=<span class="hljs-string">"Your email"</span>
          <span class="hljs-attr">name</span>=<span class="hljs-string">"email"</span>
          <span class="hljs-attr">variant</span>=<span class="hljs-string">"filled"</span>
          {<span class="hljs-attr">...form.getInputProps</span>('<span class="hljs-attr">email</span>')}
        /&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">SimpleGrid</span>&gt;</span>

      <span class="hljs-tag">&lt;<span class="hljs-name">TextInput</span>
        <span class="hljs-attr">label</span>=<span class="hljs-string">"Subject"</span>
        <span class="hljs-attr">placeholder</span>=<span class="hljs-string">"Subject"</span>
        <span class="hljs-attr">mt</span>=<span class="hljs-string">"md"</span>
        <span class="hljs-attr">name</span>=<span class="hljs-string">"subject"</span>
        <span class="hljs-attr">variant</span>=<span class="hljs-string">"filled"</span>
        {<span class="hljs-attr">...form.getInputProps</span>('<span class="hljs-attr">subject</span>')}
      /&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">Textarea</span>
        <span class="hljs-attr">mt</span>=<span class="hljs-string">"md"</span>
        <span class="hljs-attr">label</span>=<span class="hljs-string">"Message"</span>
        <span class="hljs-attr">placeholder</span>=<span class="hljs-string">"Your message"</span>
        <span class="hljs-attr">maxRows</span>=<span class="hljs-string">{10}</span>
        <span class="hljs-attr">minRows</span>=<span class="hljs-string">{5}</span>
        <span class="hljs-attr">autosize</span>
        <span class="hljs-attr">name</span>=<span class="hljs-string">"message"</span>
        <span class="hljs-attr">variant</span>=<span class="hljs-string">"filled"</span>
        {<span class="hljs-attr">...form.getInputProps</span>('<span class="hljs-attr">message</span>')}
      /&gt;</span>

      <span class="hljs-tag">&lt;<span class="hljs-name">Group</span> <span class="hljs-attr">justify</span>=<span class="hljs-string">"center"</span> <span class="hljs-attr">mt</span>=<span class="hljs-string">"xl"</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">Button</span> <span class="hljs-attr">type</span>=<span class="hljs-string">"submit"</span> <span class="hljs-attr">size</span>=<span class="hljs-string">"md"</span>&gt;</span>Send message<span class="hljs-tag">&lt;/<span class="hljs-name">Button</span>&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">Group</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">form</span>&gt;</span></span>
  );
}
</code></pre>
<p>Your project now has a simple contact form that will validate the form fields:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705925034892/37bd4064-48d7-4c17-a8a7-87772b5c8c28.png" alt class="image--center mx-auto" /></p>
<p>Submitting the form will try to send the data to the <code>apiUrl</code> defined above, which won't work yet as we haven't implemented the serverless backend. While this tutorial walks you through setting up the front-end app locally, we'll design it to seamlessly connect with the serverless backend once it's deployed.</p>
<h2 id="heading-aws">AWS</h2>
<p>The creation of AWS resources requires you to have an AWS account. You also need to create credentials for your AWS account to be used with Terraform in the next section. Additionally, there are a couple of steps we are going to be doing directly on the AWS console.</p>
<p>As the goal of this project is to create a contact form that will send the contents to an email account, we will need an email address for this and this email account needs to be verified by AWS. The easiest way to do this is in the <a target="_blank" href="https://docs.aws.amazon.com/ses/latest/dg/creating-identities.html">Amazon SES</a> (Simple Email Service) console. Simply click 'verified identities' - 'create identity' and follow the instructions. The identity status of your email must be 'verified' for this project to work:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705926410525/55eafb1b-91d6-43e8-a0d4-7ed3161595e4.png" alt class="image--center mx-auto" /></p>
<p>Another setup step in the console is creating an S3 bucket that is used for storing the Lambda code. This could be done using AWS CLI or by creating a script, but the easiest way for now is to do this directly in the console. You can create a bucket manually and make a note of the name of the bucket, as we are going to need it for our Terraform code. Now, if you followed the instructions in the beginning, you have an aws-lambda folder where you can create a file called <code>index.mjs</code>. Below is the Lambda code you can add to the file:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { SESClient, SendEmailCommand } <span class="hljs-keyword">from</span> <span class="hljs-string">"@aws-sdk/client-ses"</span>;
<span class="hljs-keyword">var</span> ses = <span class="hljs-keyword">new</span> SESClient({ <span class="hljs-attr">region</span>: <span class="hljs-string">"eu-west-2"</span> }); <span class="hljs-comment">// Change here your region</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">async</span> (event) =&gt; {
  <span class="hljs-keyword">const</span> eventData = <span class="hljs-built_in">JSON</span>.parse(event.body);

  <span class="hljs-keyword">const</span> email = eventData.email;
  <span class="hljs-keyword">const</span> name = eventData.name;
  <span class="hljs-keyword">const</span> message = eventData.message;


  <span class="hljs-keyword">const</span> emailBody = <span class="hljs-string">`Hello,\n\nYou have received a new message via the 
    contact form on your website. Below are the details of the message:
    \n\n**Sender Information:**\n- Name: <span class="hljs-subst">${name}</span>\n- Email: <span class="hljs-subst">${email}</span>\n\n
    **Message:**\n<span class="hljs-subst">${message}</span>\n\nPlease respond to this message at your 
    earliest convenience.`</span>


  <span class="hljs-keyword">const</span> command = <span class="hljs-keyword">new</span> SendEmailCommand({
    <span class="hljs-attr">Destination</span>: {
     <span class="hljs-comment">//Change here your destination email address</span>
      <span class="hljs-attr">ToAddresses</span>: [<span class="hljs-string">'email@example.com'</span>],
    },
    <span class="hljs-attr">Message</span>: {
      <span class="hljs-attr">Body</span>: {
        <span class="hljs-attr">Text</span>: { <span class="hljs-attr">Data</span>: emailBody },
      },

      <span class="hljs-attr">Subject</span>: { <span class="hljs-attr">Data</span>: <span class="hljs-string">"New Message from Contact Form"</span> },
    },
    <span class="hljs-attr">Source</span>: <span class="hljs-string">'email@example.com'</span>, <span class="hljs-comment">//Add here your source email address</span>
  });


  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">let</span> response = <span class="hljs-keyword">await</span> ses.send(command);

    response = {
      <span class="hljs-attr">statusCode</span>: <span class="hljs-number">200</span>,
      <span class="hljs-attr">headers</span>: {
        <span class="hljs-comment">//Change here the URL where your frontend app is running:</span>
        <span class="hljs-string">"Access-Control-Allow-Origin"</span>: <span class="hljs-string">"http://localhost:5173"</span>, 
        <span class="hljs-string">"Access-Control-Allow-Headers"</span>: <span class="hljs-string">"Content-Type"</span>,
        <span class="hljs-string">"Access-Control-Allow-Methods"</span>: <span class="hljs-string">"POST, OPTIONS"</span>
      },
    };

    <span class="hljs-keyword">return</span> response;


  } <span class="hljs-keyword">catch</span> (error) {
    <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error:'</span>, error);


    <span class="hljs-keyword">return</span> {
      <span class="hljs-attr">statusCode</span>: <span class="hljs-number">500</span>,
      <span class="hljs-attr">body</span>: <span class="hljs-built_in">JSON</span>.stringify({ <span class="hljs-attr">message</span>: <span class="hljs-string">'Internal server error'</span> }),
    };
  }
};
</code></pre>
<p>As you can see, there are a couple of things you need to modify in this code. First of all, change the region to your preferred AWS region. You also need to change the email addresses to the one you have added and verified through the AWS console (you can use the same email address as the 'to' and 'from' address to keep things simple). For the CORS settings, you also need to specify the URL where your front-end application will be running. This header tells browsers which origin is allowed to call your API from frontend code, so it must match the URL your application runs on.</p>
<p>To store the code in the S3 bucket, you first need to zip the file, as Lambda deployment packages must be uploaded in a zipped format. Name the zipped file <code>lambda.zip</code>, as that's how it will be referred to later in the code. Now upload this zipped file to the S3 bucket you created previously. Your Lambda code will then be ready in the AWS cloud to be used by your Terraform code when we start creating the resources.</p>
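<p>If you prefer to do this step from the command line, the bucket creation, zipping and upload could look roughly like this (the bucket name is a placeholder - bucket names are globally unique, so pick your own):</p>
<pre><code class="lang-bash"># run these commands inside the aws-lambda folder
# create the bucket (skip this if you already created it in the console)
aws s3 mb s3://your-lambda-code-bucket --region eu-west-2

# zip the Lambda code and upload it to the bucket
zip lambda.zip index.mjs
aws s3 cp lambda.zip s3://your-lambda-code-bucket/lambda.zip
</code></pre>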
<h2 id="heading-terraform">Terraform</h2>
<p>The infrastructure of our project will be provisioned using Terraform. This tutorial assumes you have created an account for Terraform Cloud, installed <a target="_blank" href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli">Terraform CLI</a> and created an <a target="_blank" href="https://developer.hashicorp.com/terraform/cloud-docs/users-teams-organizations/organizations">organization</a> and <a target="_blank" href="https://developer.hashicorp.com/terraform/cloud-docs/workspaces/creating">workspace</a> on your Terraform Cloud account. Furthermore, as we are creating AWS resources with Terraform, we need access to the AWS account to be able to do this. Terraform can be used for local or remote execution and the local execution would require local configuration. However, in this tutorial, we are using Terraform with <a target="_blank" href="https://developer.hashicorp.com/terraform/cloud-docs/run/remote-operations">remote execution</a>, which means you need to add your AWS credentials as environment variables for the Terraform Cloud workspace:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705665168535/437d0c74-6878-4c7c-805f-dab503585d3b.png" alt class="image--center mx-auto" /></p>
<p>At this point, you should have all the setup ready to be able to start adding the code for the resources and start running Terraform commands from your account.</p>
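<p>If you haven't created the folder structure yet, it is only a handful of folders and empty files at this point - a rough sketch below, where the frontend folder name is just an example:</p>
<pre><code class="lang-bash"># run this at the root of your repository
mkdir -p modules/serverless-backend-aws aws-lambda frontend

# files that will be filled in during the next steps
touch main.tf outputs.tf
touch modules/serverless-backend-aws/resource-lambda.tf
touch modules/serverless-backend-aws/resource-api-gateway.tf
touch modules/serverless-backend-aws/outputs.tf
</code></pre>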
<p>We will now start adding code to describe the AWS resources. We start with Lambda, as we already uploaded the actual function code in the previous steps. In the <code>modules/serverless-backend-aws</code> folder, create a new file called <code>resource-lambda.tf</code>:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_lambda_function"</span> <span class="hljs-string">"serverless-contact-form-lambda"</span> {
  function_name = <span class="hljs-string">"ServerlessContactForm"</span>

  <span class="hljs-comment"># change the name of the S3 bucket to the one you have </span>
  <span class="hljs-comment"># created through the console</span>
  s3_bucket = <span class="hljs-string">"serverless-contact-form-lambda"</span>
  s3_key    = <span class="hljs-string">"lambda.zip"</span>

  handler = <span class="hljs-string">"index.handler"</span>
  runtime = <span class="hljs-string">"nodejs18.x"</span>

  role = <span class="hljs-string">"<span class="hljs-variable">${aws_iam_role.lambda_exec.arn}</span>"</span>
}

resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"lambda_exec"</span> {
  name = <span class="hljs-string">"serverless_contact_form"</span>

  assume_role_policy = &lt;&lt;EOF
{
  <span class="hljs-string">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
  <span class="hljs-string">"Statement"</span>: [
    {
      <span class="hljs-string">"Action"</span>: <span class="hljs-string">"sts:AssumeRole"</span>,
      <span class="hljs-string">"Principal"</span>: {
        <span class="hljs-string">"Service"</span>: <span class="hljs-string">"lambda.amazonaws.com"</span>
      },
      <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
      <span class="hljs-string">"Sid"</span>: <span class="hljs-string">""</span>
    }
  ]
}
EOF
<span class="hljs-comment"># inline policy in order to access SES</span>
 inline_policy {
    name = <span class="hljs-string">"SESPermissionsPolicy"</span>
    policy = &lt;&lt;EOF
{
  <span class="hljs-string">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
  <span class="hljs-string">"Statement"</span>: [
    {
      <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
      <span class="hljs-string">"Action"</span>: [
        <span class="hljs-string">"ses:SendEmail"</span>,
        <span class="hljs-string">"ses:SendRawEmail"</span>
      ],
      <span class="hljs-string">"Resource"</span>: <span class="hljs-string">"*"</span>
    }
  ]
}
EOF
  }
}


resource <span class="hljs-string">"aws_lambda_permission"</span> <span class="hljs-string">"apigw"</span> {
  statement_id  = <span class="hljs-string">"AllowAPIGatewayInvoke"</span>
  action        = <span class="hljs-string">"lambda:InvokeFunction"</span>
  function_name = <span class="hljs-string">"<span class="hljs-variable">${aws_lambda_function.serverless-contact-form-lambda.function_name}</span>"</span>
  principal     = <span class="hljs-string">"apigateway.amazonaws.com"</span>

  source_arn = <span class="hljs-string">"<span class="hljs-variable">${aws_api_gateway_rest_api.serverless-contact-form-api.execution_arn}</span>/*/*"</span> 
}
</code></pre>
<p>This code creates three resources for us:</p>
<ul>
<li><p>the Lambda function itself. As you can see, the code that will be deployed to the function is taken from an S3 bucket. Change the S3 bucket name here to the one you created previously</p>
</li>
<li><p>Lambda execution role to permit Lambda to send emails via SES</p>
</li>
<li><p>Permission for API Gateway to invoke the Lambda function</p>
</li>
</ul>
<p>Next you can create another file in the same folder: <code>resource-api-gateway.tf</code>:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_api_gateway_rest_api"</span> <span class="hljs-string">"serverless-contact-form-api"</span> {
  name = <span class="hljs-string">"serverless-contact-form-api"</span>
  description = <span class="hljs-string">"API Gateway for the serverless contact form"</span>
}


resource <span class="hljs-string">"aws_api_gateway_integration"</span> <span class="hljs-string">"lambda"</span> {
  rest_api_id = <span class="hljs-string">"<span class="hljs-variable">${aws_api_gateway_rest_api.serverless-contact-form-api.id}</span>"</span>
  resource_id = <span class="hljs-string">"<span class="hljs-variable">${aws_api_gateway_method.proxy_root.resource_id}</span>"</span> 
  http_method = <span class="hljs-string">"<span class="hljs-variable">${aws_api_gateway_method.proxy_root.http_method}</span>"</span> 

  integration_http_method = <span class="hljs-string">"POST"</span>
  <span class="hljs-built_in">type</span>                    = <span class="hljs-string">"AWS_PROXY"</span>
  uri                     = <span class="hljs-string">"<span class="hljs-variable">${aws_lambda_function.serverless-contact-form-lambda.invoke_arn}</span>"</span>
}

resource <span class="hljs-string">"aws_api_gateway_method"</span> <span class="hljs-string">"proxy_root"</span> {
  rest_api_id   = <span class="hljs-string">"<span class="hljs-variable">${aws_api_gateway_rest_api.serverless-contact-form-api.id}</span>"</span>
  resource_id   = <span class="hljs-string">"<span class="hljs-variable">${aws_api_gateway_rest_api.serverless-contact-form-api.root_resource_id}</span>"</span>
  http_method   = <span class="hljs-string">"POST"</span>
  authorization = <span class="hljs-string">"NONE"</span>
}


resource <span class="hljs-string">"aws_api_gateway_deployment"</span> <span class="hljs-string">"test"</span> {
  depends_on = [
    <span class="hljs-string">"aws_api_gateway_integration.lambda"</span>,
  ]

  rest_api_id = <span class="hljs-string">"<span class="hljs-variable">${aws_api_gateway_rest_api.serverless-contact-form-api.id}</span>"</span>
  stage_name  = <span class="hljs-string">"test"</span>
}
</code></pre>
<p>This creates four resources for us:</p>
<ul>
<li><p>the API gateway called 'serverless-contact-form-api'</p>
</li>
<li><p>an integration with the Lambda function for the POST method</p>
</li>
<li><p>a POST method</p>
</li>
<li><p>API Gateway deployment stage called 'test'</p>
</li>
</ul>
<p>Another file that we need in this folder is called <code>outputs.tf</code>. In this file, we are going to define outputs that we want to expose outside of this module after the infrastructure has been provisioned. In this case, there will be an output for the API Gateway and for the API Gateway URL. We need to refer to the API Gateway resource in <code>main.tf</code>, and that is why we need to expose it as an output. The API Gateway URL, however, is not referred to in the <code>main.tf</code> file; instead, we simply want to print it in the console, as we are going to need it for the API call in the frontend application.</p>
<pre><code class="lang-bash">output <span class="hljs-string">"api_gateway_contact_form"</span> {
  value = aws_api_gateway_rest_api.serverless-contact-form-api
}
output <span class="hljs-string">"api_gateway_url"</span> {
  value = aws_api_gateway_deployment.test.invoke_url
}
</code></pre>
<p>Now we have created the files that define what resources we want to create at AWS. We still need to add some general configuration to explain to Terraform what exactly we want it to do. This is done in <code>main.tf</code> file:</p>
<pre><code class="lang-bash">terraform {
  cloud {
    organization = <span class="hljs-string">"MyOrganization"</span> <span class="hljs-comment">#Add here your organization</span>
    workspaces {
      name = <span class="hljs-string">"contact-form"</span> <span class="hljs-comment">#Add here your workspace name</span>
    }
  }
}

provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"eu-west-2"</span> <span class="hljs-comment">#Add here your region</span>
}

module <span class="hljs-string">"serverless-backend-aws"</span> {
  <span class="hljs-built_in">source</span> = <span class="hljs-string">"./modules/serverless-backend-aws"</span>
}

module <span class="hljs-string">"cors"</span> {
  <span class="hljs-built_in">source</span> = <span class="hljs-string">"squidfunk/api-gateway-enable-cors/aws"</span>
  version = <span class="hljs-string">"0.3.3"</span>

  api_id          = module.serverless-backend-aws.api_gateway_contact_form.id
  api_resource_id = module.serverless-backend-aws.api_gateway_contact_form.root_resource_id
  allow_headers = [<span class="hljs-string">"Content-Type"</span>]
  allow_methods = [<span class="hljs-string">"OPTIONS"</span>, <span class="hljs-string">"POST"</span>]
    <span class="hljs-comment">#Add here the URL where your frontend application is running:</span>
  allow_origin = <span class="hljs-string">"http://localhost:5173"</span> 
}
</code></pre>
<p>The first thing this file tells Terraform is the details of the Terraform Cloud workspace we want to use for this project. For Terraform to access your workspace in the cloud, you need to log in from the command line by using <code>terraform login</code>.</p>
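<p>In practice, this means that before running any other commands from the project root, you authenticate and initialize the working directory roughly like this (the plan and apply commands come later, once all the files are in place):</p>
<pre><code class="lang-bash"># authenticate the CLI against your Terraform Cloud account
terraform login

# download the providers and modules and connect to the Terraform Cloud workspace
terraform init
</code></pre>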
<p>The provider is the plugin that allows Terraform to interact with the API of the specific service provider. As we are creating services at AWS, we will be using the AWS provider for this.</p>
<p>Lastly, we need to list all of the modules we want Terraform to create for us. In this tutorial, we are creating a module for the serverless backend; additionally, we could also add a module for the frontend if we were deploying it with this same infrastructure.</p>
<p>We also create a separate module for CORS settings, which leverages a community-provided module that simplifies the CORS configuration for API Gateway. We need to refer here to the API Gateway resource we created, and this is why we added it to the <code>outputs.tf</code> of the <code>serverless-backend-aws</code> module in the previous steps.</p>
<p>The below diagram summarizes the different parts of the Terraform code:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706008427342/0fa14dbd-76a7-4930-b8f6-a08ea294c251.png" alt class="image--center mx-auto" /></p>
<p>The last file needed at the project root level is <code>outputs.tf</code>, just like the one we had in the module folder. Here we refer to the one value we want to print in the console, namely the URL of the API Gateway. Once the backend has been deployed, we will add this URL to the front-end application.</p>
<pre><code class="lang-bash">output <span class="hljs-string">"api_gateway_url"</span> {
    description = <span class="hljs-string">"Bucket name for our static website hosting"</span>
    value = module.serverless-backend-aws.api_gateway_url
}
</code></pre>
<h2 id="heading-finalizing-the-project">Finalizing the Project</h2>
<p>It is now time to run <code>terraform plan</code> and <code>terraform apply</code> to create the serverless backend. You can monitor in the terminal whether the creation of all resources is successful, and you will also see your resources listed in the Terraform Cloud console. If all goes well, you will see your API Gateway URL printed in the terminal and can add it to your front-end application. Submitting the form should now be successful and the form data should arrive in your email:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705936556954/b503144a-e867-42da-9b2f-37d9753b65cb.png" alt class="image--center mx-auto" /></p>
<p>If you end up getting an error, it is probably a small mistake you have made somewhere along the way and a great opportunity to dig in and learn more about the services. It could be something as simple as not adding the correct origin URL to CORS settings - going to the AWS console and testing the Lambda function and API Gateway in isolation might help you find out where the issue is between the frontend, API Gateway, Lambda and SES.</p>
<h2 id="heading-next-steps">Next Steps</h2>
<p>Now that you have the simple contact form working, you could do several things to extend and improve this project. As already touched upon previously, if frontend development is your thing, you could build the React application into a real application for example by leveraging Mantine UI's AppShell and other components. The form itself would also need some improvements - you could add error handling and improve the user experience for example with a notification to show when the form has been submitted.</p>
<p>You could also figure out how to deploy the front end on S3 and CloudFront and connect it to your custom domain. This could be done by adding an additional front-end module alongside the serverless-backend module. See my <a target="_blank" href="https://github.com/mariberg/serverless-contact-form-aws-terraform">repository</a> for some tips and example code for this.</p>
<p>The Lambda code could also be improved; for example, its error handling is minimal at the moment. There could also be a more automated way of uploading the code, rather than zipping it manually and uploading it to the S3 bucket.</p>
<p>After making some improvements to the project, you could then move on to automating the deployment for example by using GitHub Actions. You would define the deployment steps in a workflow file and configure triggers for actions (for example push events could trigger a new deployment).</p>
<p>I hope this tutorial has helped you build your first serverless contact form. My code is available in this <a target="_blank" href="https://github.com/mariberg/serverless-contact-form-aws-terraform">repository</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Creating a Terraform Custom Provider - Terraform Cloud Project Beginner Bootcamp]]></title><description><![CDATA[The first two weeks of the Terraform Cloud Project Bootcamp from Exampro were spent on getting our development environment in Gitpod up and running, as well as becoming familiar with the basic features of Terraform. We learnt how to create, update an...]]></description><link>https://blog.marikabergman.com/creating-a-terraform-custom-provider-terraform-cloud-project-beginner-bootcamp</link><guid isPermaLink="true">https://blog.marikabergman.com/creating-a-terraform-custom-provider-terraform-cloud-project-beginner-bootcamp</guid><category><![CDATA[Terraform]]></category><category><![CDATA[custom provider terraform]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Fri, 13 Oct 2023 11:33:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1697142245372/8573a90f-be88-4069-b99a-fed25750bd9d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The first two weeks of the <a target="_blank" href="https://terraform.cloudprojectbootcamp.com/">Terraform Cloud Project Bootcamp</a> from <a target="_blank" href="https://www.exampro.co/">Exampro</a> were spent on getting our development environment in Gitpod up and running, as well as becoming familiar with the basic features of Terraform. We learnt how to create, update and delete resources via Terraform and how the Terraform state can be managed either locally or remotely in Terraform Cloud.</p>
<p>As Terraform is a cloud-agnostic infrastructure as code (IaC) tool, it can interact with and manage resources in practically any cloud provider. The project created during this boot camp included AWS resources, so we of course used the AWS provider. We also experimented with the ‘Random’ provider, which can be used to create, for example, random strings. It does this using purely Terraform-internal logic, without interacting with any other services.</p>
<p>However, to further experience how Terraform can indeed be used for almost anything, we went ahead and created a provider of our own. Our custom provider is used to create resources on the ‘TerraTowns Cloud’ platform, which has been created for the purposes of this boot camp. Participants can create ‘homes’ in TerraTowns by using Terraform to create these resources.</p>
<p>To summarize, we are using in total three providers:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697143144042/65cc36e5-e8b0-41ce-85bb-952cebcd4195.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-creating-the-custom-provider">Creating the Custom Provider</h2>
<p>The development process of the custom provider can be summarized in five steps. We needed to have a mock server that could be used locally for testing, a bunch of bash scripts and Go code for creating the provider. These steps are visualized below and explained in further detail in the following paragraphs:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697187587109/1250df15-28bd-4ac0-9736-224886546275.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-1-mock-server">Step 1 - Mock Server</h3>
<p>The mock server was created using Sinatra, which is a lightweight Ruby framework that can be used to build simple web servers. When running the mock server on localhost, we were able to make sure that our bash scripts were working as intended to perform CRUD (create, read, update, delete) operations on the mock server.</p>
<h3 id="heading-step-2-skeleton-for-the-custom-terraform-provider">Step 2 - Skeleton for the Custom Terraform provider</h3>
<p>Terraform providers are typically created using the Go (Golang) programming language so this was the chosen language for our provider as well.</p>
<p>The <code>main.go</code> was first created with a simple 'hello world' skeleton, just to test that our setup was working and that we were able to create a provider. To test the functionality, we also needed a bash script that builds the custom provider. As Go is a compiled language, the code is not run dynamically; you compile it into a binary and then run that binary. So running the bash script essentially produces a binary file that is then used as the provider.</p>
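<p>The exact script depends on your environment, but conceptually it does something along these lines - a sketch assuming a Linux amd64 machine and the local provider address used in the configuration below:</p>
<pre><code class="lang-bash"># compile the provider code into a binary
go build -o terraform-provider-terratowns_v1.0.0

# copy the binary into the local plugin directory where Terraform looks for
# providers with the source address local.providers/local/terratowns
PLUGIN_DIR="$HOME/.terraform.d/plugins/local.providers/local/terratowns/1.0.0/linux_amd64"
mkdir -p "$PLUGIN_DIR"
cp terraform-provider-terratowns_v1.0.0 "$PLUGIN_DIR/"
</code></pre>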
<h3 id="heading-step-3-connecting-the-custom-provider-and-mock-server">Step 3 - Connecting the Custom Provider and Mock Server</h3>
<p>At this point, we knew that we were able to use a bash script to take the Go code, compile it and generate the binary executable that can be used as a Terraform custom provider. We also knew that the mock server was functional and CRUD operations should work. The next step was to connect these two things and make sure that the custom provider was able to call the endpoints on the mock server. To test this, we added Terraform configuration that utilizes a custom Terraform provider named "terratowns":</p>
<pre><code class="lang-dart">terraform {
  required_providers {
    terratowns = {
      source = <span class="hljs-string">"local.providers/local/terratowns"</span>
      version = <span class="hljs-string">"1.0.0"</span>
    }
  }
}

provider <span class="hljs-string">"terratowns"</span> {
  endpoint = <span class="hljs-string">"http://localhost:4567"</span>
  user_uuid=<span class="hljs-string">"e328f4ab-b99f-421c-84c9-4ccea042c7d1"</span> 
  token=<span class="hljs-string">"9b49b3fb-b8e9-483c-b703-97ba88eef8e0"</span>
}
</code></pre>
<p>Running <code>terraform apply</code> now instructs Terraform to utilize the 'Terratowns' custom provider. The configuration specifies that the custom provider can be found in a local file, which is the binary file created from the Go code. This custom provider then interacts with the CRUD endpoints on the mock server.</p>
<h3 id="heading-step-4-testing-the-production-server">Step 4 - Testing the Production Server</h3>
<p>Now that everything was working as intended locally, it was time to make sure that we could interact with the production server. In order for this to work, the endpoint in <code>main.tf</code> had to be changed, and we also had to make sure that we had access to the TerraTowns cloud by having a valid user_uuid and access token. This test was successful and our TerraTowns resource was created on the production server.</p>
<h3 id="heading-step-5-creating-the-whole-infrastructure">Step 5 - Creating the whole infrastructure</h3>
<p>As a final step after the custom provider was working, we included the Terraform configuration for the AWS infrastructure as well. We could now run <code>terraform apply</code>, which would use two different providers to create resources on two different cloud platforms. By running just this one command, we created a 'TerraTowns home' resource on the TerraTowns cloud, as well as an S3 bucket and a CloudFront distribution on the AWS cloud. Furthermore, the domain URL of the CloudFront distribution was referenced by the 'TerraTowns home' resource, which meant that we could provide a direct link from the deployed resource on the TerraTowns cloud to the static website that was deployed on AWS.</p>
<hr />
<p>The code for this project can be found in my repository, which you can access <a target="_blank" href="https://github.com/mariberg/terraform-beginner-bootcamp-2023/tree/main">here</a>.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Cloud Project Bootcamp - IaC]]></title><description><![CDATA[After spending months working through the bootcamp and creating resources through 'click ops' and AWS CLI, it was time to start automating our infrastructure provisioning. For most of the stacks we used CloudFormation, however, the DynamoDB stack was...]]></description><link>https://blog.marikabergman.com/aws-cloud-project-bootcamp-iac</link><guid isPermaLink="true">https://blog.marikabergman.com/aws-cloud-project-bootcamp-iac</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS Cloud Project Bootcamp ]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[cloudformation]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Thu, 13 Jul 2023 12:28:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1689245870439/03c9c864-21a4-4bc1-b763-ad7431cca1a2.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After spending months working through the <a target="_blank" href="https://aws.cloudprojectbootcamp.com">bootcamp</a> and creating resources through 'click ops' and AWS CLI, it was time to start automating our infrastructure provisioning. For most of the stacks we used CloudFormation, however, the DynamoDB stack was created by using AWS SAM (Serverless Application Model). During week 8 we had also already created the Serverless Image Processing by using AWS CDK (Cloud Development Kit).</p>
<p>Before deciding the design of your stacks it's important to consider how they are all connected. Which stack needs to be created first and which stacks are going to reference each other? Our stacks and the way they cross-reference each other are shown in the diagram and described in more detail below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1689252924714/787c6072-929b-4875-a57e-d1e0146d04ed.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-networking-cluster-service-and-database-layers">Networking, Cluster, Service and Database Layers</h3>
<p>All of the CloudFormation artifacts are going to be stored in an S3 bucket that was created manually through the AWS console before we started creating the individual CloudFormation templates.</p>
<p>The first stack was created for the 'networking layer'. We had previously used the default VPC during the bootcamp, but at this point, it was decided that we wanted to create a custom VPC. Apart from the VPC, the networking stack also creates an internet gateway, a VPC gateway attachment, route tables, subnets (3 public subnets and 3 private subnets that are not yet used) and subnet route table associations. Any values that other stacks might need are exported so that they can be referenced by those stacks.</p>
<p>The second stack to be created was for the 'cluster layer'. It contains the ECS cluster, application load balancer, HTTP&amp;HTTPS listeners, listener rules, ALB security group, service security group and target group. To the HTTPS listener we imported the CertificateArn we had already created during one of the previous weeks.</p>
<p>The 'service layer' is one where you have to carefully consider which services you want to include within one template. You might not want to have something like ECR or task definitions tightly coupled in a CloudFormation template. We ended up including the task definition in the template; however, we later realized this was not the best possible approach, as updating our task definition now causes our Fargate service to restart as well, and that is something we would like to handle only through the CI/CD pipeline.</p>
<p>The 'database layer' includes an RDS database instance, security group and database subnet group. The database layer ends up being quite tightly coupled with the service layer as the service layer keeps hanging and re-starting containers without a successful db connection. For this reason, we had to move the service security group from the service stack to the cluster stack, so that it is created first.</p>
<p>The above-mentioned stacks are connected as they reference each other as shown in the diagram. The stacks that are introduced next are different as they are independent and don't cross-reference other stacks.</p>
<h3 id="heading-frontend-cloudfront">Frontend (CloudFront)</h3>
<p>The React.js frontend of this application was originally built as a container. We containerized the application on week 1 and deployed it as a service to Fargate on week 6. However, throughout the bootcamp there was a discussion about whether the frontend should be deployed using S3 and CloudFront instead. The benefits of using ECS would be on-demand-based scaling, easier version control and rollbacks and advanced deployment options such as canary deployments.</p>
<p>For purposes of this bootcamp it was decided that utilizing S3 and CloudFront is sufficient and makes more sense than deploying it as a container. We proceeded by implementing this directly via CloudFormation rather than creating it manually first like we have done with other stacks.</p>
<p>The frontend stack creates a CloudFront distribution, S3 bucket, bucket policy and Route 53 record set. Deploying any changes now happens by manually creating a static build and then using a library called <code>aws_s3_website_sync</code>, which syncs a folder from the local dev environment to the S3 bucket and then invalidates the CloudFront cache. A CI/CD pipeline could be created for this by using GitHub Actions.</p>
<h3 id="heading-dynamodb-layer-sam">DynamoDB Layer (SAM)</h3>
<p>The DynamoDB layer was created using AWS SAM, which is an extension of CloudFormation. It is designed especially for serverless applications and provides a simplified, higher-level abstraction for defining and deploying serverless resources. The stack created a DynamoDB table, a Lambda function, a Lambda log group, a Lambda log stream and an execution role.</p>
<p>As it's not a security best practice to use our main user's access key to update our DDB table, a new CloudFormation stack for 'machine user' was created. This IAM user has access to update DynamoDB. The access keys were manually updated to the parameter store.</p>
<h3 id="heading-serverless-image-processing-cdk">Serverless Image Processing (CDK)</h3>
<p>The serverless image processing was created by using AWS CDK (Cloud Development Kit). With CDK you can use your preferred programming language (in this case TypeScript) to define your AWS resources. CDK automatically generates CloudFormation templates based on the infrastructure code you write.</p>
<p>The CDK stack creates two S3 buckets, bucket policy, Lambda function and SNS topic. It is worth noting that the whole serverless image processing architecture required two further Lambda functions and a CloudFront distribution. These were created manually through the AWS console and haven't yet been managed by any IaC tool.</p>
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>In conclusion, our journey to automate infrastructure provisioning has achieved significant milestones. What still remains is incorporating the serverless image processing fully into CloudFormation by leveraging the CDK stack as a nested stack and ensuring all necessary Lambda functions are included in the CloudFormation template. Additionally, the frontend deployment could be fully automated by implementing a CI/CD pipeline, such as GitHub Actions.</p>
<hr />
<p>Link to my previous article about <a target="_blank" href="https://hashnode.com/post/clg4txi3o000209mb0z158ha9">AWS Cloud Project Bootcamp's DynamoDB week</a>.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Cloud Project Bootcamp - DynamoDB]]></title><description><![CDATA[During the past weeks, I have been very busy with Andrew Brown's amazing free AWS Cloud Project bootcamp. We have been through billing and architecture, containerising our application with Docker, using Honeycomb and X-ray for distributed tracing, us...]]></description><link>https://blog.marikabergman.com/aws-cloud-project-bootcamp-dynamodb</link><guid isPermaLink="true">https://blog.marikabergman.com/aws-cloud-project-bootcamp-dynamodb</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS Cloud Project Bootcamp ]]></category><category><![CDATA[DynamoDB]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Thu, 06 Apr 2023 08:01:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1680606572408/ed33dbbf-ca6a-490a-ab1d-2676e93a3cee.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>During the past weeks, I have been very busy with Andrew Brown's amazing free <a target="_blank" href="https://aws.cloudprojectbootcamp.com">AWS Cloud Project bootcamp</a>. We have been through billing and architecture, containerising our application with Docker, using Honeycomb and X-ray for distributed tracing, using Rollbar for bug-tracking and monitoring, using Cognito for decentralized authentication and creating an RDS Postgres instance.</p>
<p>Every single week has been challenging, but the DynamoDB week has turned out to be the most stretching until now. Below I cover the main points regarding DynamoDB data modelling for our application.</p>
<h2 id="heading-data-modelling-access-patterns">Data modelling - access patterns</h2>
<p>We started with a very insightful 2-hour <a target="_blank" href="https://www.youtube.com/watch?v=5oZHNOaL8Og&amp;list=PLBfufR7vyJJ7k25byhRXJldB5AiwgNnWv&amp;index=50">live stream</a> about data modelling. The design that was chosen for our database is <em>a single table design</em>. It is a popular choice these days and works well in this kind of scenario where all data is closely linked together. The name makes it sound simple, but it turns out to be quite complicated in terms of data modelling. To get everything working and to keep the cost down, it is crucial to have your <em>data mapped</em> against all the different access patterns.</p>
<p>When designing a relational database you simply map the data in logical entities, see what data belongs together in each table and then figure out how to access the data from these tables by using joins. However, with DynamoDB you have to approach this from a completely different perspective. You have to think of your application and <em>what data it is going to need and how</em>. When you know your access patterns, you can start to think about how to organize your data. You can break the rules you would have with relational databases - data can even be duplicated if that works with your access patterns! Storage is cheap and you want your base table to support as many of your access patterns as possible so duplicating data could make sense depending on the situation. You could also choose to save some of the data as JSON instead of separate items if it's not going to be used in any of your queries.</p>
<p>There are so many options for designing the data model for your database. To get the best results from DynamoDB in terms of <em>cost-effectiveness and performance</em>, you really need to do these initial steps correctly.</p>
<h2 id="heading-access-patterns-in-our-application">Access patterns in our application</h2>
<p>Our application is a messaging app where the user is able to see a list of their conversations (message groups) and then click an individual message group and see all messages that belong to that message group. Additionally, the user is obviously able to send messages - these could be either completely new messages that start new message groups or further messages to existing message groups. Based on this it was possible to list our <em>initial access patterns</em>:</p>
<ul>
<li><p>pattern A: showing a single conversation (message group).</p>
</li>
<li><p>pattern B: a list of conversations (message groups).</p>
</li>
<li><p>pattern C: create a new message</p>
</li>
<li><p>pattern D: add a message to an existing message group</p>
</li>
<li><p>pattern E: update a message group using DynamoDB streams</p>
</li>
</ul>
<p>So the database is going to have one table, which is going to contain messages and message groups. Each item is going to have a unique uuid among other fields such as date, display name and message content. Each message group is also going to be listed twice as two individual items, from the perspective of the two users who are part of the conversation. This is because a list of conversations cannot be displayed identically to both users: the person who is looking at their message groups wants to see <em>the name of the other user</em> listed as the topic of that message group.</p>
<h2 id="heading-partition-keys-and-sort-keys">Partition keys and sort keys</h2>
<p>Then we come to the hardest part of data modelling, <em>choosing the partition key.</em> The partition key is an identifier for the item, and it dictates which partition DynamoDB stores the item in under the hood. The partition key doesn't have to be unique, and several items can have the same partition key. The <em>sort key</em>, in turn, makes each item under a partition key uniquely identifiable and allows the items to be sorted. The <em>primary key</em> in DynamoDB can be either a simple primary key or a composite primary key (a combination of partition key and sort key). A partition key is always obligatory for any query, and only an equality operator can be used on it. The sort key is not obligatory, and not using it in a query would simply return every item under that partition key.</p>
<p>Our application has two access patterns that relate to messages and three that relate to message groups. For messages, we have to be able to <em>write new messages</em> and <em>display the messages that belong to a certain message group</em>. The best option is to use <em>message_group_uuid</em> as the partition key and <em>created_at</em> as the sort key for it. This is quite logical as we want to display a <em>single conversation</em>, so its identifier uuid is the easiest way to access it. Using created_at as a sort key will give us the option to display the messages within certain timeframes:</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create_message</span>(<span class="hljs-params">client,message_group_uuid, message, my_user_uuid,                  my_user_display_name, my_user_handle</span>):</span>
    now = datetime.now(timezone.utc).astimezone().isoformat()
    created_at = now
    message_uuid = str(uuid.uuid4())

    record = {
      <span class="hljs-string">'pk'</span>:   {<span class="hljs-string">'S'</span>: <span class="hljs-string">f"MSG#<span class="hljs-subst">{message_group_uuid}</span>"</span>},
      <span class="hljs-string">'sk'</span>:   {<span class="hljs-string">'S'</span>: created_at },
      <span class="hljs-string">'message'</span>: {<span class="hljs-string">'S'</span>: message},
      <span class="hljs-string">'message_uuid'</span>: {<span class="hljs-string">'S'</span>: message_uuid},
      <span class="hljs-string">'user_uuid'</span>: {<span class="hljs-string">'S'</span>: my_user_uuid},
      <span class="hljs-string">'user_display_name'</span>: {<span class="hljs-string">'S'</span>: my_user_display_name},
      <span class="hljs-string">'user_handle'</span>: {<span class="hljs-string">'S'</span>: my_user_handle}
    }
</code></pre>
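<p>To make the access pattern concrete, querying the messages of one conversation within a certain timeframe would look roughly like this with the AWS CLI (the application itself uses boto3, and the uuid and dates below are placeholders):</p>
<pre><code class="lang-bash">aws dynamodb query \
  --table-name cruddur-messages \
  --key-condition-expression "pk = :pk AND sk BETWEEN :start_date AND :end_date" \
  --expression-attribute-values '{
    ":pk": {"S": "MSG#some-message-group-uuid"},
    ":start_date": {"S": "2023-03-01T00:00:00+00:00"},
    ":end_date": {"S": "2023-04-01T00:00:00+00:00"}
  }'
</code></pre>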
<p>For message groups, it gets a little bit more complicated. We have to be able to <em>list message groups, add messages to message groups and update message group details</em>. As each user naturally needs to see the message groups that belong exactly to them, the logical option is to use <em>my_user_uuid</em> as the partition key. This will work well as there are two message groups for each conversation, so each participant is going to have a version of the message group with their user uuid. As we want to be able to sort the message groups based on date, the sort key is going to be <em>last_message_at</em>:</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create_message_group</span>(<span class="hljs-params">client, message,my_user_uuid, my_user_display_name, my_user_handle, other_user_uuid, other_user_display_name, other_user_handle</span>):</span>
    table_name = <span class="hljs-string">'cruddur-messages'</span>

    message_group_uuid = str(uuid.uuid4())
    message_uuid = str(uuid.uuid4())
    now = datetime.now(timezone.utc).astimezone().isoformat()
    last_message_at = now

    my_message_group = {
      <span class="hljs-string">'pk'</span>: {<span class="hljs-string">'S'</span>: <span class="hljs-string">f"GRP#<span class="hljs-subst">{my_user_uuid}</span>"</span>},
      <span class="hljs-string">'sk'</span>: {<span class="hljs-string">'S'</span>: last_message_at},
      <span class="hljs-string">'message_group_uuid'</span>: {<span class="hljs-string">'S'</span>: message_group_uuid},
      <span class="hljs-string">'message'</span>: {<span class="hljs-string">'S'</span>: message},
      <span class="hljs-string">'user_uuid'</span>: {<span class="hljs-string">'S'</span>: other_user_uuid},
      <span class="hljs-string">'user_display_name'</span>: {<span class="hljs-string">'S'</span>: other_user_display_name},
      <span class="hljs-string">'user_handle'</span>:  {<span class="hljs-string">'S'</span>: other_user_handle}
    }
</code></pre>
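<p>Listing a user's message groups works in a similar way, but uses the user uuid as the partition key and returns the items in reverse order so that the most recent conversation comes first - again a rough AWS CLI equivalent with a placeholder uuid:</p>
<pre><code class="lang-bash">aws dynamodb query \
  --table-name cruddur-messages \
  --key-condition-expression "pk = :pk" \
  --expression-attribute-values '{":pk": {"S": "GRP#some-user-uuid"}}' \
  --no-scan-index-forward \
  --limit 20
</code></pre>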
<p>The catch is that the value of the sort key will of course have to be <em>updated every time a new message is created</em> and added to the message group so that it reflects the date of the actual latest message (access pattern E). This is where a global secondary index is needed.</p>
<h2 id="heading-global-secondary-index">Global secondary index</h2>
<p>GSI is a concept that takes some time to get familiar with. It is basically an index with a partition key and a sort key that can be different from those in the base table. You can imagine creating a new index almost as creating a new table in SQL. It can contain the same items as the base table but in a different order. That means the data is the same, but we twist it and look at it differently. GSIs always add extra costs and you want to avoid them if you can - as already previously mentioned, your base table should support as many of your access patterns as possible.</p>
<p>For our final access pattern E, we want to update the sort key (last_message_at) to reflect the sort key of the latest message (created_at). This will be implemented by using a DynamoDB stream. Every time a new message is created and pushed to a message group, the DynamoDB stream catches the event and triggers a Lambda function. So, how do we get this Lambda function to update the sort key?</p>
<p>As previously mentioned, the message groups have the user's uuid as the partition key. So for each update there are two different message group items with two different user_uuids (there are always two versions of each conversation, one from the perspective of each participant), while the new message itself only tells us the message group uuid. Hence we won't be able to find the message groups that need updating based on the partition key. We could of course do a scan with a filter, but that is not a cost-effective solution.</p>
<p>The best option in this situation is to use a GSI. This basically <em>creates a clone of our primary table</em> that uses message_group_uuid as the partition key, and DynamoDB keeps the two in sync automatically. The GSI allows for <em>querying the table based on the message_group_uuid attribute</em>, in addition to the primary key attributes 'pk' and 'sk':</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680766087994/2f20936d-6483-4b67-8915-27e182539592.png" alt class="image--center mx-auto" /></p>
<p>The GSI was added to the table schema (note that message_group_uuid also has to be declared in the table's AttributeDefinitions, as all key attributes of an index must be defined there):</p>
<pre><code class="lang-python">GlobalSecondaryIndexes= [{
    <span class="hljs-string">'IndexName'</span>:<span class="hljs-string">'message-group-sk-index'</span>,
    <span class="hljs-string">'KeySchema'</span>:[{
      <span class="hljs-string">'AttributeName'</span>: <span class="hljs-string">'message_group_uuid'</span>,
      <span class="hljs-string">'KeyType'</span>: <span class="hljs-string">'HASH'</span>
    },{
      <span class="hljs-string">'AttributeName'</span>: <span class="hljs-string">'sk'</span>,
      <span class="hljs-string">'KeyType'</span>: <span class="hljs-string">'RANGE'</span>
    }],
    <span class="hljs-string">'Projection'</span>: {
      <span class="hljs-string">'ProjectionType'</span>: <span class="hljs-string">'ALL'</span>
    },
  }],
</code></pre>
<p>Now the creation of a new message will be captured by the DynamoDB stream, which triggers a Lambda function that uses the GSI to <em>query all message groups where the message group uuid matches the partition key</em> of the message. It then replaces the sort key (last_message_at) with the sort key value (created_at) of the message. Since the sort key is part of the primary key and cannot be modified in place, this in practice means deleting the old message group item and writing a new copy with the updated sort key (a sketch of such a handler is included below the screenshot). The sort keys for the message and two message groups are now matching:</p>
<p><img src="https://github.com/mariberg/aws-bootcamp-cruddur-2023/raw/main/journal/assets/sk.PNG" alt="sk" /></p>
<p>There is of course a lot more that could be said about the implementation, which was challenging and included a lot of troubleshooting and debugging. However, the whole week has been an outstanding learning experience. Now it's time to get ready for a new week of the BootCamp - ECS Fargate.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Cloud Resume Challenge - creating my portfolio website with AWS CDK]]></title><description><![CDATA[I found the Cloud Resume Challenge by Forrest Brazeal while looking for ideas to practise my AWS skills. I first became interested in tech after taking part in an ERP integration project and cross-functional end-to-end testing in my previous order ma...]]></description><link>https://blog.marikabergman.com/aws-cloud-resume-challenge-creating-my-portfolio-website-with-aws-cdk</link><guid isPermaLink="true">https://blog.marikabergman.com/aws-cloud-resume-challenge-creating-my-portfolio-website-with-aws-cdk</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[cloud-resume-challenge]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Marika Bergman]]></dc:creator><pubDate>Tue, 15 Nov 2022 14:40:17 GMT</pubDate><content:encoded><![CDATA[<p>I found <a target="_blank" href="https://cloudresumechallenge.dev/">the Cloud Resume Challenge</a> by Forrest Brazeal while looking for ideas to practise my AWS skills. I first became interested in tech after taking part in an ERP integration project and cross-functional end-to-end testing in my previous order management role. There were tech people who worked with the mysterious backend and as soon as we couldn't make something work, they would go and adjust things directly at the backend. I never got to see the mysterious backend and I was curious and fascinated. My journey to tech began. I learned fullstack development and joined an AWS learning path. And now, cloud computing is not only the next big thing, but I also find it technically super interesting. The chance of creating a full stack project in the cloud, automating it and even running my own end-to-end testing, just seemed like something I absolutely wanted to do - no more tech people to call, backend is what I make with it.</p>
<p>The challenge consists of several chunks, with multiple options along the way to choose your own approach:</p>
<ul>
<li>Getting Cloud Practitioner Certification.</li>
<li>Resume in HTML/CSS deployed in S3 as a static website.</li>
<li>AWS CloudFront Distribution in front of the S3 bucket.</li>
<li>Custom DNS domain name.</li>
<li>Javascript calling the API and displaying a visitor counter on the website.</li>
<li>API Gateway that accepts requests from your web app and communicates with the database through a Lambda function.</li>
<li>Database in DynamoDB to save and update the visitor counter.</li>
<li>Lambda function that updates the database and returns an updated value to API Gateway.</li>
<li>Cypress tests to complete end-to-end testing for your code.</li>
<li>Infrastructure as Code.</li>
<li>CI/CD.</li>
<li>Blog post.</li>
</ul>
<p>The diagram below shows the structure of my project:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1668523060863/wPjzoB4YX2.png" alt="diagram (4).png" /></p>
<h1 id="heading-frontend">Frontend</h1>
<p>After setting up AWS SSO to keep my account secure, I started with the first step. At this point my aim was to configure all my resources manually by clicking around in the AWS web console.</p>
<p>For my website I used a template, which I customized slightly to suit my own needs. I set up a CloudFront distribution, which accessed my S3 bucket and had its own HTTPS address. Next I got my custom domain from Cloudflare, which I also use to manage DNS. I issued a certificate through ACM (AWS Certificate Manager) and added it and the custom domain to my CloudFront distribution. A bit of trial and error here and there and that was pretty much all it took to get the frontend working.</p>
<p>I also wrote the JavaScript that would be served from my S3 bucket and tested it against a locally run Node.js backend. However, it was time to park that for the moment and start working on the structure of the backend.</p>
<h1 id="heading-backend">Backend</h1>
<p>I started by creating a new DynamoDB table, which was easy as the database only had to hold one value. I selected the on-demand capacity mode, which shouldn't create any costs with this level of usage. Next I went to create a Lambda function with the Node.js runtime. I had to choose whether to create two functions, one to get and one to update the item, or to use a single function for both. I decided a single function made more sense in this case, as I could simply take advantage of the 'ReturnValues' parameter of the update call, which returns the updated value straight away.</p>
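<p>The idea is that one atomic update both increments the counter and hands back the new value. A rough sketch of that pattern - shown here in Python rather than the Node.js used in the project, and with made-up table and attribute names:</p>
<pre><code class="lang-python">import boto3

dynamodb = boto3.resource('dynamodb')
# Hypothetical table and attribute names, for illustration only
table = dynamodb.Table('visitor-counter')

def lambda_handler(event, context):
    # Atomically increment the counter and get the new value back in the same call
    response = table.update_item(
        Key={'id': 'visitor_count'},
        UpdateExpression='ADD visits :inc',
        ExpressionAttributeValues={':inc': 1},
        ReturnValues='UPDATED_NEW'
    )
    count = int(response['Attributes']['visits'])
    return {'statusCode': 200, 'body': str(count)}
</code></pre>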
<p>After the Lambda was working, it was time to create my API Gateway. First I had to choose between REST API and HTTP API, which was slightly confusing. I initially chose HTTP API, but ended up switching to REST API at a later stage of the project, as the CDK construct for HTTP APIs was still experimental and creating a REST API seemed more straightforward. Anyway, I got the whole thing working with Postman and was ready to connect the frontend and backend together. Obviously things didn't work the first or second time, and the CORS error became a very familiar sight. But in the end the whole full-stack project was working, and I could navigate to my custom domain and see the updated visitor count displayed on my website.</p>
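<p>Most of those CORS errors come down to missing response headers. If the Lambda builds the HTTP response itself (a proxy-style integration - whether that matches this project's exact setup is an assumption), the headers can simply be added to the returned object:</p>
<pre><code class="lang-python">def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'headers': {
            # Placeholder origin - in practice this would be the site's custom domain
            'Access-Control-Allow-Origin': 'https://example.com',
            'Access-Control-Allow-Methods': 'GET,OPTIONS',
            'Access-Control-Allow-Headers': 'Content-Type'
        },
        'body': '42'
    }
</code></pre>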
<h1 id="heading-end-to-end-testing">End-to-end testing</h1>
<p>I had never used Cypress before, so I had to do some learning before installing it locally and writing my first tests in TypeScript. I kept my tests simple, and there are definitely a lot more tests I could write to improve the process. The tests also helped me notice a few things that were missing from the setup - such as custom error responses in CloudFront. Anyway, improving my tests later on will be a breeze, because of what I implemented next.</p>
<h1 id="heading-iac-and-cicd">IaC and CI/CD</h1>
<p>The challenge description suggests that you first complete the whole project using the AWS web console and, once you're ready, go back and define your resources as 'infrastructure as code' that can be deployed automatically. That's what I did, and I can definitely agree it's a great way to learn: by the time I started with IaC, I knew exactly what I had to do.</p>
<p>There are plenty of tools to choose from, and I decided to go with AWS CDK (Cloud Development Kit), as it allows you to use TypeScript to code your infrastructure and it just seemed like a logical way of doing things. I created new CDK apps for the frontend and the backend, added all the code I had written in the previous steps and used TypeScript to define my application resources. Obviously it took a few failed deployments and rollbacks to get everything working, but I was still quite surprised how straightforward it actually was. After that, all I needed to do was set up GitHub Actions to automate deployments and run the Cypress tests.</p>
<h1 id="heading-final-thoughts">Final thoughts</h1>
<p>So what do I have as an end result? Two repos: one for frontend and another for backend, each including the AWS CDK app source code and each automatically deployed whenever a new commit is pushed. Such a nice and simple thing when it all comes together, yet it has taken endless hours of figuring out all the pieces of the puzzle. But I honestly can't imagine a better way to learn.</p>
<p>To see the final product please visit <a target="_blank" href="https://marikabergman.com">my portfolio website</a> and GitHub repos for <a target="_blank" href="https://github.com/mariberg/portfolio_backend">backend code</a> and <a target="_blank" href="https://github.com/mariberg/portfolio_frontend">frontend code</a>.</p>
]]></content:encoded></item></channel></rss>