Configuring AWS Cloud Solution For 2 Company Websites Using A Reverse Proxy Technology


Toluwase Makanjuola



NB: This infrastructure setup is not included in the AWS free tier. Therefore, delete all resources created immediately after completing the project. If resources are not deleted, monthly costs may be unexpectedly high. Also, setting up a budget and configuring notifications for when our spending reaches a set limit is highly recommended.

What is a reverse proxy?

A reverse proxy is a server, application, or cloud service that sits in front of one or more web servers. It intercepts and examines incoming client requests before passing them on to the web server, and then returns the web server’s response to the client.
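As a minimal sketch of the idea (the backend address and domain below are hypothetical), this is roughly what a basic Nginx reverse proxy configuration looks like:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # Relay the client's request to the backend web server
        proxy_pass http://10.0.2.10:80;
        # Preserve the original host and client IP for the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Clients only ever talk to the proxy; the backend’s address stays private.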

Features of a Reverse Proxy

  1. Intercepts Requests: A reverse proxy catches requests from a user’s browser and relays them to the web server.

  2. Returns Responses: The reverse proxy gets a response from the web server and sends it back to the user.

  3. Provides Security: A reverse proxy can shield users from harmful content and prevent malware and ransomware attacks.

  4. Balances Load: A reverse proxy can spread incoming traffic across multiple servers to avoid overloading any single server.

  5. Monitors Traffic: A reverse proxy can track and log traffic to web servers, aiding in the detection and prevention of security threats.

  6. Provides Analytics: A reverse proxy can offer detailed insights into website traffic and usage patterns.

  7. Manages IP Addresses: A reverse proxy can forward or mask a client’s IP address, enhancing privacy and improving traffic routing.

  8. Directs Requests: A reverse proxy can route requests based on various parameters, such as the user’s device, location, or network condition.

In this project, we are building a secure infrastructure inside an AWS VPC (Virtual Private Cloud) network for a company (funmi’s Ink) that uses WordPress CMS for its main business website and a Tooling Website for its DevOps team. As part of the company’s desire for improved security and performance, a decision has been made to use NGINX’s reverse proxy technology to achieve this. Cost, security, and scalability are the major requirements for this project. Hence, implementing the architecture designed below ensures that the infrastructure for both websites, WordPress and Tooling, is resilient to web server failures, can accommodate increased traffic, and, at the same time, has a reasonable cost.

Starting our AWS Cloud Project

A few requirements must be met before we begin: properly configuring our AWS account and Organizational Unit (OU). Watch How To Do This Here

  1. We created an AWS master account (also known as the root account). Within the root account, we:

  • created a sub-account and named it DevOps (we used another email address to complete this);

  • created an AWS Organizational Unit (OU) and named it Dev (we will launch Dev resources in there);

  • moved the DevOps account into the Dev OU;

  • logged into the newly created AWS account using the new email address.

  2. We created a free domain name for our company at the Freenom domain registrar here.

  3. We created a hosted zone in AWS and mapped it to our free domain from Freenom. Watch how to do that here

Setting Up a Virtual Private Network (VPC)

We always refer to the architectural diagram and ensure that our configuration is aligned with it.

  1. We created a VPC

  2. We created subnets as shown in the architecture

  3. We created a route table and associated it with public subnets

  4. We created a route table and associated it with private subnets

  5. We created an Internet Gateway

  6. We added a route to the public route table pointing to the Internet Gateway. (This is what allows a public subnet to be accessible from the Internet)

  7. We created 3 Elastic IPs

  8. We created a NAT Gateway and assigned one of the Elastic IPs to it (the other 2 will be used by the Bastion hosts)

  9. We created a Security Group for:

  • Nginx Servers: Access to Nginx should only be allowed from an Application Load Balancer (ALB). At this point, we had not yet created the load balancer, so we created the group with some dummy rules as placeholders and updated them later.

  • Bastion Servers: Access to the Bastion servers should be allowed only from workstations that need SSH access to them. Hence, you can use your workstation's public IP address. To get this information, simply go to your terminal and type curl www.canhazip.com

  • Application Load Balancer: ALB will be available on the Internet

  • Webservers: Access to the Webservers should only be allowed from the Nginx servers. Since those servers did not exist yet, we put some dummy rules as placeholders and updated them later.

  • Data Layer: Access to the data layer, which comprises Amazon Relational Database Service (RDS) and Amazon Elastic File System (EFS), must be carefully designed: only the webservers should be able to connect to RDS, while both Nginx and the webservers have access to the EFS mount point.

Proceeding With Compute Resources

We need to set up and configure the following compute resources inside our VPC.

Setting Up Compute Resources for Nginx

Provisioning EC2 Instances for Nginx

  1. We created EC2 Instances based on a CentOS Amazon Machine Image (AMI) in 2 Availability Zones (AZ) in our chosen AWS Region (it is recommended to use the Region closest to your customers), using an EC2 instance of the T2 family (e.g. t2.micro or similar)

  2. We ensured that it has the following software installed:

  • python

  • ntp

  • net-tools

  • vim

  • wget

  • telnet

  • epel-release

  • htop

  3. We created an AMI out of the EC2 instance

Preparing Launch Template For Nginx (One Per Subnet)

  1. We made use of the AMI to set up a launch template

  2. We ensured the Instances were launched into a public subnet

  3. We assigned the appropriate security group

  4. We configured Userdata to update the yum package repository and install nginx
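A minimal Userdata sketch for this launch template, assuming a CentOS/yum-based AMI like the one baked above:

```shell
#!/bin/bash
# Userdata sketch for the Nginx launch template (assumes a CentOS-based AMI).
yum update -y
yum install -y epel-release   # EPEL provides the nginx package on CentOS
yum install -y nginx
systemctl enable nginx
systemctl start nginx
```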

Configuring Target Groups

  1. We selected Instances as the target type

  2. We ensured the protocol is HTTPS on secure TLS port 443

  3. We ensured that the health check path is /healthstatus

  4. We registered Nginx Instances as targets

  5. We ensured that health check passes for the target group
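For the health check on /healthstatus to pass, each Nginx server needs to answer on that path; a minimal sketch of the location block:

```nginx
# Lightweight endpoint so the target group's health probe gets a 200
location /healthstatus {
    access_log off;
    default_type text/plain;
    return 200 "healthy\n";
}
```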

Configuring Autoscaling For Nginx

  1. We selected the right launch template

  2. We selected the VPC

  3. We selected both public subnets

  4. We enabled Application Load Balancer for the AutoScalingGroup (ASG)

  5. We selected the target group we created before

  6. We ensured that we have health checks for both EC2 and ALB

  7. The desired capacity is 2

  8. Minimum capacity is 2

  9. Maximum capacity is 4

  10. We set it to scale out if CPU utilization reaches 90%

  11. We ensured there is an SNS topic to send scaling notifications

Setting Up Compute Resources for Bastion

We provisioned the EC2 Instances for Bastion

  1. We created an EC2 Instance based on a CentOS Amazon Machine Image (AMI) in each Availability Zone, in the same Region and the same AZs where we created the Nginx servers

  2. We ensured that it has the following software installed

  • python

  • ntp

  • net-tools

  • vim

  • wget

  • telnet

  • epel-release

  • htop

  3. We associated an Elastic IP with each of the Bastion EC2 Instances

  4. We created an AMI out of the EC2 instance

Preparing Launch Template For Bastion (One per subnet)

  1. We made use of the AMI to set up a launch template

  2. We ensured the Instances were launched into a public subnet

  3. We assigned the appropriate security group

  4. We configured Userdata to update yum package repository and install Ansible and git
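A minimal Userdata sketch for the Bastion launch template, again assuming a CentOS/yum-based AMI:

```shell
#!/bin/bash
# Userdata sketch for the Bastion launch template (assumes a CentOS-based AMI).
yum update -y
yum install -y epel-release   # EPEL provides the ansible package on CentOS
yum install -y ansible git
```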

Configuring Target Groups

  1. We selected Instances as the target type

  2. We ensured the protocol is TCP on port 22

  3. We registered Bastion Instances as targets

  4. We ensured that health check passes for the target group.

Configuring Autoscaling For Bastion

  1. We selected the right launch template

  2. We selected the VPC

  3. We selected both public subnets

  4. We enabled Application Load Balancer for the AutoScalingGroup (ASG)

  5. We selected the target group we created before

  6. We ensured that we have health checks for both EC2 and ALB

  7. The desired capacity is 2

  8. Minimum capacity is 2

  9. Maximum capacity is 4

  10. We set it to scale out if CPU utilization reaches 90%

  11. We ensured there is an SNS topic to send scaling notifications

Setting Up Compute Resources for Webservers

Provisioning the EC2 Instances for Webservers

Now, we need to create 2 separate launch templates, one each for the WordPress and Tooling websites.

  1. We created an EC2 Instance (CentOS) each for the WordPress and Tooling websites per Availability Zone (in the same Region).

  2. We ensured that it has the following software installed

  • python

  • ntp

  • net-tools

  • vim

  • wget

  • telnet

  • epel-release

  • htop

  • php

  3. We created an AMI out of the EC2 instance

Preparing Launch Template For Webservers (One per subnet)

  1. We made use of the AMI to set up a launch template

  2. We ensured the Instances were launched into a public subnet

  3. We assigned the appropriate security group

  4. We configured Userdata to update the yum package repository and install WordPress (only required on the WordPress launch template)
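A Userdata sketch for the WordPress launch template, assuming a CentOS-based AMI with Apache and PHP serving the site:

```shell
#!/bin/bash
# Userdata sketch for the WordPress launch template (assumes CentOS, Apache, PHP).
yum update -y
yum install -y httpd php php-mysqlnd wget tar
# Fetch and unpack WordPress into the web root
wget -q https://wordpress.org/latest.tar.gz -O /tmp/latest.tar.gz
tar -xzf /tmp/latest.tar.gz -C /tmp
cp -R /tmp/wordpress/* /var/www/html/
chown -R apache:apache /var/www/html
systemctl enable httpd
systemctl start httpd
```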

TLS Certificates From Amazon Certificate Manager (ACM)

We need TLS certificates to handle secured connectivity to our Application Load Balancers (ALB).

  1. We navigated to AWS ACM

  2. We requested a public wildcard certificate for the domain name we registered with Freenom

  3. We used DNS to validate the domain name

  4. We tagged the resource

Configuring Application Load Balancer (ALB)

Application Load Balancer To Route Traffic To NGINX

Nginx EC2 Instances will have configurations that accept incoming traffic only from the Load Balancer; no request should go directly to the Nginx servers. With this kind of setup, we benefit from intelligent routing of requests from the ALB to the Nginx servers across the 2 Availability Zones. We will also be able to offload SSL/TLS termination to the ALB instead of Nginx; therefore, Nginx will perform faster, since it will not require extra compute resources to handle TLS for every request.

  1. We created an Internet-facing ALB

  2. We ensured that it listens on HTTPS protocol (TCP port 443)

  3. We ensured the ALB was created within the appropriate VPC | AZ | Subnets

  4. We chose the Certificate from ACM

  5. We selected the Security Group

  6. We selected Nginx Instances as the target group

Application Load Balancer To Route Traffic To Web Servers

Since the webservers are configured for auto-scaling, there is going to be a problem if servers get dynamically scaled out or in. Nginx will not know about the new IP addresses or the ones that get removed. Hence, Nginx will not know where to direct the traffic.

To solve this problem, we use a load balancer again. But this time, it is an internal load balancer, not Internet-facing, since the web servers are within a private subnet and we do not want direct access to them.
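On the Nginx side, this means proxying to the internal ALB's stable DNS name instead of individual webserver IPs; the hostnames below are hypothetical. Re-resolving the name through the VPC DNS server at runtime (by putting it in a variable before proxy_pass) lets Nginx keep up as the addresses behind the ALB change:

```nginx
# Re-resolve the internal ALB's DNS name every 30s via the VPC resolver
resolver 169.254.169.253 valid=30s;

server {
    listen 443 ssl;
    server_name tooling.mydomain.com;
    # ssl_certificate directives omitted in this sketch

    location / {
        # Using a variable forces runtime DNS resolution of the ALB name
        set $internal_alb "https://internal-alb.mydomain.com";
        proxy_pass $internal_alb;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```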

  1. We created an Internal ALB

  2. We ensured that it listens on HTTPS protocol (TCP port 443)

  3. We ensured the ALB was created within the appropriate VPC | AZ | Subnets

  4. We chose the Certificate from ACM

  5. We selected the Security Group

  6. We selected webserver Instances as the target group

  7. We ensured that health check passes for the target group

NOTE: This process was repeated for both WordPress and Tooling websites.

Setting up EFS

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic Network File System (NFS) for use with AWS Cloud services and on-premises resources. In this project, we will utilize EFS service and mount filesystems on both Nginx and Webservers to store data.

  1. We created an EFS filesystem

  2. We created an EFS mount target per AZ in the VPC, associated with the subnets dedicated to the data layer

  3. We associated the Security Groups created earlier for the data layer

  4. We created an EFS access point (giving it a name and leaving all other settings as default)
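Once the mount targets exist, the filesystem can be mounted from the Nginx and webserver instances; a sketch, with a hypothetical filesystem ID and assuming the amazon-efs-utils helper is installed:

```shell
# Mount the EFS filesystem (hypothetical ID) on an instance, with TLS
sudo mkdir -p /var/www
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /var/www
# Persist the mount across reboots
echo "fs-0123456789abcdef0:/ /var/www efs _netdev,tls 0 0" | sudo tee -a /etc/fstab
```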

Setting up RDS

Pre-requisite: We created a KMS key from Key Management Service (KMS) to be used to encrypt the database instance.

Amazon Relational Database Service (Amazon RDS) is a managed distributed relational database service by Amazon Web Services. This web service, running in the cloud, is designed to simplify the setup, operation, maintenance, and scaling of relational databases. Without RDS, Database Administrators (DBAs) have far more work to do; RDS automates much of it.

To ensure that our databases are highly available and also have failover support in case one availability zone fails, we configure a multi-AZ setup of RDS MySQL database instance. In our case, since we are only using 2 AZs, we can only failover to one, but the same concept applies to 3 Availability Zones. We will not consider possible failure of the whole Region.

To configure RDS,

  1. We created a subnet group and added 2 private subnets (data Layer)

  2. We created an RDS Instance for MySQL 8.x

  3. To satisfy our architectural diagram, we would need to select either the Dev/Test or Production sample template. But to minimize AWS cost, we selected the Do not create a standby instance option under Availability & durability (the Production template would enable a Multi-AZ deployment)

  4. We configured other settings accordingly (for test purposes, most of the default settings are good to go). In the real world, you will need to size the database appropriately by gathering some information about the usage: if it is a highly transactional database that grows at 10GB weekly, you must bear that in mind while configuring the initial storage allocation, storage autoscaling, and maximum storage threshold.

  5. We configured VPC and security (ensuring the database is not available from the Internet)

  6. We configured backups and retention

  7. We encrypted the database using the KMS key created earlier

  8. We enabled CloudWatch monitoring and exported the Error and Slow Query logs (for production, also include Audit)

Note: This service is an expensive one. Ensure you review the monthly cost before creating it. (DO NOT LEAVE ANY SERVICE RUNNING FOR LONG)

Configuring DNS with Route53

Earlier in this project, we registered a free domain with Freenom and configured a hosted zone in Route53. But that is not all that needs to be done as far as DNS configuration is concerned.

We need to ensure that the main domain for the WordPress website can be reached, and the subdomain for the Tooling website can also be reached using a browser.

We created other records such as CNAME, alias, and A records.

NOTE: We can use either CNAME or alias records to achieve the same thing. But an alias record has better functionality: it is faster to resolve, and it can coexist with other records on the same name. You can read here to learn more about the differences.

  • We created an alias record for the root domain and directed its traffic to the ALB DNS name.

  • We created an alias record for tooling.<ourdomain>.com and directed its traffic to the ALB DNS name.
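An alias record can be expressed as a Route 53 change batch; the zone ID, domain, and ALB DNS name below are hypothetical placeholders:

```json
{
  "Comment": "Alias the root domain to the external ALB",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mydomain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZALBEXAMPLE123",
          "DNSName": "my-ext-alb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

A batch like this would be applied with aws route53 change-resource-record-sets, passing the hosted zone ID and the JSON file as the change batch.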

Congratulations!

We have just created a secured, scalable, and cost-effective infrastructure to host 2 enterprise websites using various Cloud services from AWS. At this point, our infrastructure is ready to host real website loads.