How to access Load Balancer logs

AWS provides access logs for Elastic Load Balancers (ELB), allowing you to monitor and analyze traffic patterns. Below are general steps to access and view ELB access logs:

Amazon ELB Access Logs:

  1. Navigate to the EC2 Console:
    • Open the AWS EC2 Console.
  2. Select Load Balancers:
    • In the left navigation pane, choose “Load Balancers” under the “Load Balancing” section.
  3. Choose Your Load Balancer:
    • Click on the name of the load balancer for which you want to access logs.
  4. View Access Logs:
    • In the “Description” tab, look for the “Attributes” section.
    • Check if the “Access logs” attribute is set to “Enabled.” If it is not, you can enable it from the console or the CLI (see the sketch just after this list).
  5. Access Logs Location:
    • If access logs are enabled, you can find them in an S3 bucket specified in the “S3 bucket” field.
  6. Navigate to S3 Bucket:
    • Open the AWS S3 Management Console.
    • Access the S3 bucket mentioned in the “S3 bucket” field.
  7. Access Log Files:
    • Inside the S3 bucket, the log files are organized under a prefix like AWSLogs/<account-id>/elasticloadbalancing/<region>/<yyyy>/<mm>/<dd>/....
    • Download or view the log files to analyze the access logs.
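
If the “Access logs” attribute is disabled, you can enable it without leaving the terminal. A minimal sketch for an Application or Network Load Balancer (the ARN and bucket name are placeholders, and the bucket needs a policy that allows the load balancer service to write to it):

# Enable access logs on an ALB/NLB and point them at an S3 bucket
aws elbv2 modify-load-balancer-attributes \
    --load-balancer-arn <your-load-balancer-arn> \
    --attributes Key=access_logs.s3.enabled,Value=true Key=access_logs.s3.bucket,Value=<your-s3-bucket-name>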

AWS CLI:

You can also use the AWS Command-Line Interface (CLI) to access ELB access logs:

# Replace <your-load-balancer-name> and <your-s3-bucket-name> with your actual values

aws s3 cp s3://<your-s3-bucket-name>/<your-load-balancer-name>/<path-to-log-file> .

This command downloads the specified log file to the current directory.
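
To pull down a whole day of logs rather than a single file, you can list and sync by prefix. A sketch, assuming the standard log prefix layout with placeholder account, region, and date values:

# List one day of logs, then download them all into a local folder
aws s3 ls s3://<your-s3-bucket-name>/AWSLogs/<account-id>/elasticloadbalancing/<region>/2024/01/15/
aws s3 sync s3://<your-s3-bucket-name>/AWSLogs/<account-id>/elasticloadbalancing/<region>/2024/01/15/ ./elb-logs/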

Analyzing Access Logs:

Access logs typically include information such as client IP addresses, request timestamps, response status codes, and more. You can use tools like AWS Athena, Amazon CloudWatch Logs Insights, or other log analysis tools to query and visualize the logs.
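
For a quick look without extra tooling, plain shell utilities go a long way. A sketch, assuming Application Load Balancer logs (gzip-compressed, with the client IP:port as the fourth space-separated field):

# Top 10 client IPs across the downloaded log files
zcat *.log.gz | awk '{print $4}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -10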

Remember to adjust the steps based on the specific type of load balancer you are using (Application Load Balancer, Network Load Balancer, or Classic Load Balancer). Always refer to the official AWS documentation for the most accurate and up-to-date information.

These logs are especially helpful when you need to trace which backend instance a given request was routed to.

Export Users from Cognito in AWS

As of today, there is no built-in way to export users directly from an Amazon Cognito user pool.

However, we can use the AWS CLI or an AWS SDK to get the list of users.

  • The first step is to install the AWS CLI on your machine. You can download it from the official AWS CLI documentation.

  • Next, configure the AWS CLI on your machine. To do this, open a terminal (cmd on Windows) and run the following –
$ aws configure
AWS Access Key ID [None]: YourAccessKeyId
AWS Secret Access Key [None]: YourSecretAccessKey
Default region name [None]: YourRegion
Default output format [None]: json

Replace the placeholder values above with your own. For more details, see the AWS CLI configuration documentation.

  • To get the list of all users in Cognito, run the following command
aws cognito-idp list-users --region <region> --user-pool-id <userPoolId> \
    --output json > users.json

The above command returns the list of users in JSON format. If you want the result in a table format instead, run the following command

aws cognito-idp list-users --region <region> --user-pool-id <userPoolId>  --output table > users.txt
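
Note that list-users is paginated, so a very large pool may not come back in one response. A sketch using the CLI's generic pagination parameters (the token value is a placeholder copied from the previous response):

# Fetch users in pages; the output includes a NextToken while more remain
aws cognito-idp list-users --region <region> --user-pool-id <userPoolId> \
    --max-items 1000 --output json > users-page-1.json
# Resume from where the previous page stopped
aws cognito-idp list-users --region <region> --user-pool-id <userPoolId> \
    --max-items 1000 --starting-token <NextToken> --output json > users-page-2.json
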
  • If you want to convert the resulting JSON to CSV, you can use a C# snippet like the one below (it uses the ChoETL library; note that list-users wraps the records in a top-level Users array, so you may need to adjust the JSON path to match your file's exact shape).
// Requires the ChoETL NuGet package (ChoETL.JSON for the JSON reader)
private static void ConvertJsonToCsv()
{
    using (var csv = new ChoCSVWriter("users.csv").WithFirstLineHeader())
    {
        using (var json = new ChoJSONReader("users.json")
                  .WithField("Username")
                  // Pull the email value out of the Cognito Attributes array
                  .WithField("Email", jsonPath: "$.Attributes[?(@.Name == 'email')].Value", isArray: true)
                  .WithField("UserStatus"))
        {
            csv.Write(json);
        }
    }
}
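
If you would rather stay on the command line, the same conversion can be sketched with jq, assuming users.json has the standard top-level Users array that list-users returns:

# Emit Username, email, and UserStatus as CSV rows
jq -r '.Users[] | [.Username, (.Attributes[]? | select(.Name == "email") | .Value), .UserStatus] | @csv' users.json > users.csv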

Restore SQL database from .bak file stored in AWS S3 bucket

A SQL database on Amazon RDS for SQL Server can be restored from a .bak file.

The query below restores a database from a .bak file stored in an AWS S3 bucket. It relies on the rds_restore_database stored procedure, which is specific to Amazon RDS for SQL Server and requires the SQLSERVER_BACKUP_RESTORE option to be enabled on the instance.

exec msdb.dbo.rds_restore_database

-- Replace <name> with the database name
@restore_db_name='<name>',

-- Replace {backup s3 path} with the actual path,
-- e.g. prod-backups/DLProdDB-12-07-19, where prod-backups is the
-- bucket name and DLProdDB-12-07-19 is the .bak file name
@s3_arn_to_restore_from='arn:aws:s3:::{backup s3 path}'
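
For illustration, the same call can be issued from a shell with sqlcmd. A sketch, where the endpoint, credentials, and database name are hypothetical and the S3 path reuses the example above:

sqlcmd -S mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -U admin -P '<password>' \
    -Q "exec msdb.dbo.rds_restore_database @restore_db_name='MyProdDB', @s3_arn_to_restore_from='arn:aws:s3:::prod-backups/DLProdDB-12-07-19'"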


To check the status of the restore, use the following query –

exec msdb.dbo.rds_task_status @db_name='<name>'

For more information, see the AWS documentation on importing and exporting SQL Server databases.


Building APIs with API Gateway

API Gateway is one of the managed services provided by AWS to create, publish, monitor and secure REST and WebSocket APIs.

Serverless development allows the developer to focus more on the application code and logic and not worry about the infrastructure details.

Scaling, which is handled by AWS, also becomes simpler: your code is invoked in response to events, and there are no idle servers sitting and waiting for events to trigger, since AWS manages server availability as and when required.

The key benefits of serverless development are –

  • No infrastructure management
  • No idle servers
  • High availability
  • Flexible scaling

There are a number of managed services provided by AWS that act as the building blocks for your application. Some of them are AWS Lambda for compute, S3 for storage, Amazon Aurora for relational database needs, and DynamoDB for NoSQL databases.

All interaction with these managed services involves invoking an API and getting a response back, which mainly carries the business logic and data your application needs.

The service we are going to discuss further is API Gateway, which is designed to help you develop and operate APIs.

 

API Gateway

API Gateway lets you create, publish, monitor, and secure REST and WebSocket APIs. The REST APIs created with API Gateway are HTTP-based and follow REST conventions, which allow for stateless client-server communication.

You can implement the standard HTTP methods like GET, PUT, POST, DELETE, and PATCH.

You can access API Gateway from the AWS Management Console, from the AWS SDKs if you are working in one of the supported programming languages, or from the AWS Command Line Interface, as sketched below.
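
As a sketch of the CLI route, creating a regional REST API and inspecting its root resource could look like this (the API name is arbitrary and the returned id is a placeholder):

# Create a regional REST API, then list its resources
aws apigateway create-rest-api --name "demo-api" \
    --endpoint-configuration types=REGIONAL
aws apigateway get-resources --rest-api-id <rest-api-id>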

 

How API Gateway works

In this architecture, on the left we have all users, web and mobile applications, IoT devices, and all the private applications that are in a VPC or on-premises. All of these connect to API Gateway using HTTP-based requests.

API Gateway receives the request and determines what to do with it. API Gateway adds CloudWatch monitoring, so you can easily monitor the API calls. It also determines whether the response is already cached in memory (the API Gateway cache).

API Gateway then invokes one of a number of backend managed services, such as AWS Lambda functions or EC2 endpoints (see the integration sketch below).
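
As a rough sketch of that wiring, connecting a method to a Lambda function from the CLI looks like this (all ids and the function ARN are placeholders):

# Proxy GET requests on a resource through to a Lambda function
aws apigateway put-integration --rest-api-id <rest-api-id> \
    --resource-id <resource-id> --http-method GET \
    --type AWS_PROXY --integration-http-method POST \
    --uri arn:aws:apigateway:<region>:lambda:path/2015-03-31/functions/<lambda-function-arn>/invocations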

 

Choosing the right API endpoint (Host name of the API)

The selection of the right API endpoint depends on where the majority of your application traffic originates. There are basically three types of API endpoints –

  • Edge optimized – It is helpful to reduce client latency as the API requests are routed to the nearest CloudFront Point of Presence (POP).
  • Regional – It is helpful to reduce latency when the calls are triggered from the same region as the API.
  • Private – It is helpful when you want to only expose the APIs inside your VPC.

Edge Optimized

For an edge-optimized endpoint, API Gateway deploys a CloudFront distribution which it manages on your behalf. The CloudFront POP nearest to the customer receives the request and routes it to the API Gateway endpoint in the AWS region. This is how customer latency is reduced.

 

Regional

This is completely different from the edge-optimized endpoint we just discussed. Here, no CloudFront distribution is deployed by AWS; the customer's request goes directly to the API Gateway endpoint in a specific AWS region. This endpoint type provides a lot of flexibility, as you can put your own routing in front of it, such as load balancers or a custom CloudFront distribution.

You can decide how the request from the customer is routed towards the endpoint. You can also add firewalls, caching and SSL certificates as well with this style.

 

Private

This endpoint is only exposed from within a VPC. All API requests originate inside your own VPC and are not exposed publicly.

This type of API endpoint is used for secure work environments like financial institutions, etc.

 

It is very important to select the right API endpoint from the above three types. You can easily choose the endpoint type from a dropdown while creating the API, and if your requirements change later on, you can simply come back and change it; you are not locked into a single endpoint type. A sketch of that change follows.
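
For instance, switching an existing API from edge-optimized to regional is a single update call. A sketch, with a placeholder API id:

# Replace the EDGE endpoint type with REGIONAL on an existing API
aws apigateway update-rest-api --rest-api-id <rest-api-id> \
    --patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL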

AWS Firecracker

AWS announced Firecracker in 2018: a new open source virtualization technology written in Rust.

AWS Lambda was launched a few years back, and it provided a secure serverless experience where developers don't have to manage servers or worry about infrastructure at all. Lambda currently processes trillions of executions for hundreds of thousands of customers. Learn more about AWS Lambda and get your hands dirty!

In 2018, AWS launched Fargate, which extended the benefits of serverless to containers and is currently running millions of containers for AWS customers.

Firecracker Introduction

Firecracker is essentially a new way to enhance the backend implementation of both Lambda and Fargate. It makes use of KVM (Kernel-based Virtual Machine) and provides a RESTful API to launch secure, lightweight micro virtual machines (microVMs) in non-virtualized environments. It combines the speed, efficiency, and performance offered by containers with the security offered by virtual machines.
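
To make the RESTful API concrete, here is a minimal sketch of booting a microVM, based on the Firecracker getting-started flow (the socket path, kernel image, and rootfs paths are placeholders):

# Start Firecracker with an API socket, then configure it from another shell
./firecracker --api-sock /tmp/firecracker.sock

# Point the microVM at a kernel image
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/boot-source' \
    -H 'Content-Type: application/json' \
    -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1"}'

# Attach a root filesystem
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/drives/rootfs' \
    -H 'Content-Type: application/json' \
    -d '{"drive_id": "rootfs", "path_on_host": "./rootfs.ext4", "is_root_device": true, "is_read_only": false}'

# Boot the microVM
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/actions' \
    -H 'Content-Type: application/json' \
    -d '{"action_type": "InstanceStart"}'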

Firecracker runs on Intel processors today, with support for AMD and ARM coming in 2019. It is open sourced under Apache 2.0 and supports Linux host and guest operating systems with kernel version 4.14 and later.

Key features –

  • Secure – It uses multiple layers of isolation that provides enhanced security over traditional virtual machines.
  • Less overhead – You can run thousands of secure VMs with any combination of vCPU and memory to match the application requirement.
  • Speed – It accelerates loading of kernel with a minimal configuration that enables fast startup times. You can launch a microVM in less than 125ms.
  • Open source – It is an open source project; the repository is available on GitHub (firecracker-microvm/firecracker)
  • Efficiency – Each microVM runs with a reduced memory overhead of 5MiB.



Source – https://aws.amazon.com/blogs