AWS Security Architect – https://awssecurityarchitect.com/
Experienced AWS, GCP and Azure Security Architect

AWS DMZ Public and Private Subnets, Traffic to Internal VPC
https://awssecurityarchitect.com/aws-network-security/dmz-public-and-private-subnets-traffic-to-internal-vpc/

Cloud DMZ Architecture Overview

A DMZ (Demilitarized Zone) in the cloud can include both a public subnet and a private subnet. This configuration separates internet-facing resources from the internal application layers that require added security.

DMZ Subnet Responsibilities

  • Public Subnet: Load balancers, web servers, bastion hosts. Directly accessible from the internet.
  • Private Subnet: Backend services like APIs, internal app servers, or firewalls. No direct internet access.

Traffic Flow

  1. User sends request to public IP on the DMZ Public Subnet (e.g., via a Load Balancer).
  2. Load Balancer forwards request to app servers in the DMZ Private Subnet.
  3. App servers in the DMZ Private Subnet may access internal services in the Internal VPC.

Network Diagram (diagram omitted)

Firewall Rules

1. DMZ Public Subnet → DMZ Private Subnet

| Type       | Protocol | Port Range | Source        | Description                              |
|------------|----------|------------|---------------|------------------------------------------|
| HTTP       | TCP      | 80         | sg-dmz-public | Allow web traffic from public to private |
| HTTPS      | TCP      | 443        | sg-dmz-public | Allow secure traffic                     |
| Custom TCP | TCP      | 8080       | sg-dmz-public | For custom app ports                     |

2. DMZ Private Subnet → Internal VPC

| Type       | Protocol | Port Range | Source         | Description             |
|------------|----------|------------|----------------|--------------------------|
| MySQL      | TCP      | 3306       | sg-dmz-private | Allow DB queries         |
| HTTPS      | TCP      | 443        | sg-dmz-private | Access to internal APIs  |
| Custom TCP | TCP      | 8443       | sg-dmz-private | Optional app ports       |
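As a rough illustration, the security group rules for the first tier can be expressed with boto3. This is a minimal sketch: the region and the two security group IDs are placeholders standing in for the real sg-dmz-public and sg-dmz-private groups.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical security group IDs for the two DMZ tiers
SG_DMZ_PUBLIC = "sg-0123456789abcdef0"   # attached to the load balancer / web tier
SG_DMZ_PRIVATE = "sg-0fedcba9876543210"  # attached to app servers in the private subnet

# Allow HTTP, HTTPS and the custom app port into the private tier,
# but only when the traffic originates from the public-tier security group
for port in (80, 443, 8080):
    ec2.authorize_security_group_ingress(
        GroupId=SG_DMZ_PRIVATE,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": SG_DMZ_PUBLIC}],
        }],
    )

The same pattern (source = the upstream tier's security group rather than a CIDR) applies to the DMZ private subnet → Internal VPC rules.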

Optional: NACL Rules

DMZ Private Subnet NACL

| Rule # | Type  | Protocol | Port Range | Source CIDR | Allow/Deny |
|--------|-------|----------|------------|-------------|------------|
| 100    | HTTP  | TCP      | 80         | 10.0.1.0/24 | ALLOW      |
| 110    | HTTPS | TCP      | 443        | 10.0.1.0/24 | ALLOW      |

Internal VPC Subnet NACL

| Rule # | Type  | Protocol | Port Range | Source CIDR | Allow/Deny |
|--------|-------|----------|------------|-------------|------------|
| 100    | MySQL | TCP      | 3306       | 10.0.2.0/24 | ALLOW      |
| 110    | HTTPS | TCP      | 443        | 10.0.2.0/24 | ALLOW      |
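The NACL entries can be created the same way. The sketch below covers only the inbound rules on the DMZ private subnet's NACL; the NACL ID is a placeholder, and since NACLs are stateless you would also need matching outbound entries for the return traffic.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

DMZ_PRIVATE_NACL_ID = "acl-0123456789abcdef0"  # hypothetical NACL attached to the DMZ private subnet

# Inbound: allow HTTP and HTTPS from the DMZ public subnet (10.0.1.0/24)
for rule_number, port in ((100, 80), (110, 443)):
    ec2.create_network_acl_entry(
        NetworkAclId=DMZ_PRIVATE_NACL_ID,
        RuleNumber=rule_number,
        Protocol="6",              # IANA protocol number for TCP
        RuleAction="allow",
        Egress=False,              # inbound rule
        CidrBlock="10.0.1.0/24",
        PortRange={"From": port, "To": port},
    )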

 

Analyzing Terabytes of VPC Flow Log Data – Part 2 – Notes from the Field
https://awssecurityarchitect.com/data-analytics-and-data-processing/analyzing-terabytes-of-vpc-flow-log-data-part-2-notes-from-the-field/

First, read Analyzing Terabytes of VPC Flow Log Data – Part 1.

Example Workflow

  1. Ingestion and Storage:
    • Configure VPC Flow Logs to send logs to an S3 bucket.
    • Use AWS Glue to create a catalog of the data.
  2. Data Processing:
    • Set up an Amazon EMR cluster with Apache Spark.
    • Use Spark to process and transform the data, e.g., filtering specific IP ranges or aggregating traffic data, as in this PySpark example:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("VPCFlowLogsAnalysis").getOrCreate()

    # Load data from S3
    df = spark.read.json("s3://your-bucket/vpc-flow-logs/*")

    # Data transformation
    df_filtered = df.filter(df['destination_port'] == 443)

    # Aggregation
    df_aggregated = df_filtered.groupBy("source_address").sum("bytes")

    # Save processed data back to S3
    df_aggregated.write.mode("overwrite").json("s3://your-bucket/processed-vpc-flow-logs/")

  3. Data Analysis:
    • Use Athena to query the processed data stored in S3, for example:

    SELECT source_address, SUM(bytes) AS total_bytes
    FROM processed_vpc_flow_logs
    GROUP BY source_address
    ORDER BY total_bytes DESC;
  4. Visualization and Reporting:
    • Connect Amazon QuickSight to Athena or Redshift.
    • Create dashboards to visualize metrics like total bytes transferred, top source IPs, etc.

Optimization Tips

  • Partitioning: Partition the VPC Flow Log data in S3 by date (e.g., year/month/day) to improve query performance.
  • Compression: Use data compression formats like Parquet or ORC to reduce storage costs and improve query performance.
  • Scaling EMR: Adjust the size and number of nodes in the EMR cluster based on the volume of data and processing requirements.
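Putting the first two tips together, the Spark job from the workflow above could write its output as Parquet partitioned by date instead of JSON. This is a minimal sketch; it assumes the raw records include a start field holding the flow start time in epoch seconds, and the bucket paths are placeholders.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_unixtime, year, month, dayofmonth

spark = SparkSession.builder.appName("VPCFlowLogsOptimized").getOrCreate()

df = spark.read.json("s3://your-bucket/vpc-flow-logs/*")

# Derive partition columns from the flow start time (Unix epoch seconds)
df = (df.withColumn("ts", from_unixtime(col("start")).cast("timestamp"))
        .withColumn("year", year("ts"))
        .withColumn("month", month("ts"))
        .withColumn("day", dayofmonth("ts")))

# Parquet plus date partitioning keeps Athena scans small and cheap
(df.write
   .mode("overwrite")
   .partitionBy("year", "month", "day")
   .parquet("s3://your-bucket/processed-vpc-flow-logs-parquet/"))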

Analyzing Terabytes of VPC Flow Log Data – Part 1
https://awssecurityarchitect.com/data-analytics-and-data-processing/analyzing-terabytes-of-vpc-flow-log-data-part-1/

Analyzing terabytes of VPC Flow Log data requires a robust and scalable approach to handle the large volume of data efficiently. Here are the key steps and tools involved in the process:

1. Data Ingestion and Storage

Firstly, the VPC Flow Log data needs to be ingested and stored in a scalable and accessible format.

  • Amazon S3: Store the raw VPC Flow Log data in Amazon S3. S3 provides durable and scalable storage for large datasets.
  • AWS Glue: Use AWS Glue to catalog the data stored in S3, making it easier to query using tools like Amazon Athena.
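As a rough sketch of the ingestion step, VPC Flow Logs can be pointed at an S3 bucket with a single API call. The VPC ID and bucket ARN below are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],   # hypothetical VPC ID
    ResourceType="VPC",
    TrafficType="ALL",                       # ACCEPT, REJECT, or ALL
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::your-bucket/vpc-flow-logs/",
)
print(response["FlowLogIds"])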

2. Data Processing

To process and transform the data, you can use distributed data processing frameworks.

  • Amazon EMR: Run big data frameworks like Apache Spark or Hadoop on Amazon EMR to process and transform the data. EMR is a scalable platform for processing large datasets.
  • AWS Lambda: For smaller or near real-time processing tasks, AWS Lambda can be used to trigger processing functions based on new data arriving in S3.

3. Data Analysis

Analyzing the data involves querying and aggregating the VPC Flow Logs to derive meaningful insights.

  • Amazon Athena: Use Amazon Athena to query the VPC Flow Logs directly from S3. Athena is a serverless interactive query service that allows you to analyze data using standard SQL.
  • Redshift: Load the processed data into Amazon Redshift for more complex and large-scale analytical queries. Redshift is a fully managed data warehouse service.
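Athena queries can also be launched programmatically. Here is a minimal boto3 sketch; the Glue database, table name, and results bucket are placeholders, and the table is assumed to already exist in the Glue Data Catalog.

import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="""
        SELECT srcaddr, SUM(bytes) AS total_bytes
        FROM vpc_flow_logs
        GROUP BY srcaddr
        ORDER BY total_bytes DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "vpc_logs"},  # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://your-bucket/athena-results/"},
)
print(response["QueryExecutionId"])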

4. Visualization and Reporting

Visualizing the analyzed data helps in deriving insights and making data-driven decisions.

  • Amazon QuickSight: Use Amazon QuickSight to create interactive dashboards and visualizations. QuickSight can directly connect to Athena and Redshift for real-time data visualization.
  • Tableau/Power BI: For more advanced visualization capabilities, you can use third-party tools like Tableau or Power BI.

Packet Capture and AWS VPC Flow Logs
https://awssecurityarchitect.com/aws-network-security/packet-capture-and-aws-flow-logs/

Also read: PCAP (Packet Capture) Overview.

AWS VPC Flow Logs do not use PCAP (Packet Capture) format. Instead, VPC Flow Logs capture metadata about the traffic flowing to and from network interfaces in a Virtual Private Cloud (VPC). This metadata is stored in a structured log format, typically in Amazon CloudWatch Logs or Amazon S3.

Data Captured by VPC Flow Logs

VPC Flow Logs capture information such as:

  • Version: The version of the flow log format.
  • Account ID: The ID of the AWS account that owns the network interface.
  • Interface ID: The ID of the network interface for which traffic is recorded.
  • Source Address: The source IP address of the traffic.
  • Destination Address: The destination IP address of the traffic.
  • Source Port: The source port of the traffic.
  • Destination Port: The destination port of the traffic.
  • Protocol: The IANA protocol number of the traffic (e.g., TCP is 6, UDP is 17).
  • Packets: The number of packets transferred during the flow.
  • Bytes: The number of bytes transferred during the flow.
  • Start Time: The time at which the flow started.
  • End Time: The time at which the flow ended.
  • Action: Whether the traffic was accepted or rejected.
  • Log Status: The status of the flow log.

Example of a VPC Flow Log Entry

Here is an example of a single VPC Flow Log entry:

2 123456789012 eni-abc123de 192.168.1.1 10.0.0.1 12345 443 6 10 840 1623101047 1623101107 ACCEPT OK

Breakdown of the Example Entry

  • 2: The version of the flow log format.
  • 123456789012: The AWS account ID.
  • eni-abc123de: The ID of the network interface.
  • 192.168.1.1: The source IP address.
  • 10.0.0.1: The destination IP address.
  • 12345: The source port.
  • 443: The destination port (HTTPS).
  • 6: The protocol (TCP).
  • 10: The number of packets transferred.
  • 840: The number of bytes transferred.
  • 1623101047: The start time of the flow (in Unix epoch time).
  • 1623101107: The end time of the flow (in Unix epoch time).
  • ACCEPT: The action taken (whether the traffic was accepted or rejected).
  • OK: The log status (indicating the logging status).
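Because each record is a single space-delimited line, parsing it is straightforward. Here is a minimal Python sketch that splits the example entry above into named fields, assuming the default version 2 log format.

# Field names for the default (version 2) VPC Flow Log format
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

record = "2 123456789012 eni-abc123de 192.168.1.1 10.0.0.1 12345 443 6 10 840 1623101047 1623101107 ACCEPT OK"

parsed = dict(zip(FIELDS, record.split()))
print(parsed["srcaddr"], "->", parsed["dstaddr"], "port", parsed["dstport"], parsed["action"])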

Differences from PCAP

  • Granularity: PCAP files capture the entire packet, including headers and payloads. VPC Flow Logs capture metadata about the flow, not the packet contents.
  • Format: PCAP is a binary format, while VPC Flow Logs are plain text entries.
  • Use Case: PCAP is used for detailed packet-level analysis, often in network troubleshooting and forensics. VPC Flow Logs are used for monitoring and analyzing network traffic patterns and security within AWS environments.

Usage of VPC Flow Logs

  1. Security Monitoring: Analyze traffic patterns to detect suspicious activities or security breaches.
  2. Compliance: Maintain logs for auditing and compliance requirements.
  3. Performance Monitoring: Identify and troubleshoot network performance issues by examining traffic flow data.
  4. Cost Management: Understand data transfer costs by analyzing traffic volume.

In summary, AWS VPC Flow Logs do not use PCAP format. Instead, they provide a high-level overview of network traffic, capturing essential metadata to help with security monitoring, compliance, performance analysis, and cost management.

PCAP Overview
https://awssecurityarchitect.com/aws-network-security/pcap-overview/

PCAP Overview

PCAP (Packet Capture) files are used to record network traffic data for analysis. They capture and store data packets transmitted over a network, allowing network administrators, security analysts, and developers to examine the details of network communications. Here’s an overview of key aspects of PCAP files:

Key Concepts

  1. Packet Capture: PCAP files contain captured network packets. These packets include the raw data sent across the network, along with headers containing metadata such as source and destination IP addresses, protocols, and timestamps.
  2. File Format: The PCAP file format is standardized, which means it can be used across different network analysis tools. Common extensions for these files are .pcap or .cap.
  3. Tools for Capturing and Analyzing PCAP Files:
    • Wireshark: A popular open-source network protocol analyzer that can capture and interactively browse the contents of PCAP files.
    • tcpdump: A command-line packet analyzer that allows users to capture and display packets being transmitted or received over a network.
    • libpcap: A portable C/C++ library for network traffic capture. It’s used by tools like tcpdump.
  4. Use Cases:
    • Network Troubleshooting: Analyzing PCAP files helps identify network issues such as latency, packet loss, or misconfigurations.
    • Security Analysis: Security professionals use PCAP files to detect and investigate potential security threats, including intrusions and malware activities.
    • Protocol Analysis: Developers use PCAP files to understand and debug network protocol implementations.
  5. File Structure:
    • Global Header: Contains metadata about the file, such as the version of the pcap format and the timestamp resolution.
    • Packet Headers: Each captured packet starts with a header that includes a timestamp, the length of the packet, and other metadata.
    • Packet Data: The actual bytes of the captured packet, which include both the header and the payload of the original network packet.
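To make the file structure concrete, here is a minimal Python sketch that reads just the 24-byte global header of a classic libpcap file (not the newer pcapng format). The filename is a placeholder, and only the two common magic numbers are handled.

import struct

# Read the 24-byte global header of a classic libpcap capture file
with open("capture.pcap", "rb") as f:
    global_header = f.read(24)

magic = global_header[:4]
# 0xa1b2c3d4 written big-endian vs. 0xd4c3b2a1 written little-endian
endian = ">" if magic == b"\xa1\xb2\xc3\xd4" else "<"

# version_major, version_minor, thiszone, sigfigs, snaplen, network (link type)
version_major, version_minor, thiszone, sigfigs, snaplen, network = struct.unpack(
    endian + "HHiIII", global_header[4:]
)
print("pcap version %d.%d, snaplen %d, link type %d"
      % (version_major, version_minor, snaplen, network))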

Basic Workflow

  1. Capture: Network traffic is captured using a tool like tcpdump or Wireshark, creating a PCAP file.
  2. Analyze: The captured PCAP file is opened in a tool like Wireshark for detailed analysis. Analysts can filter, search, and inspect the packet data.
  3. Interpret: The data is interpreted to understand network performance, identify issues, or investigate security incidents.

Example of Capturing Traffic with tcpdump


# Capture traffic on interface eth0 and save to a file named capture.pcap
tcpdump -i eth0 -w capture.pcap

Example of Opening a PCAP File in Wireshark

  1. Open Wireshark.
  2. Go to File > Open.
  3. Select the PCAP file you want to analyze.
  4. Use Wireshark’s filtering and analysis tools to examine the captured data.

PCAP files are essential for deep network analysis and provide invaluable insights into network traffic, making them a critical component in network administration and cybersecurity.

S3 customer session
https://awssecurityarchitect.com/s3-security/s3-customer-session/

This content is password protected.

aws_controltower_control – Terraform – preventive and detective Control Tower controls
https://awssecurityarchitect.com/control-tower/aws_controltower_control-terraform-preventive-and-detective-control-tower-controls/

This content is password protected.

AWS EC2 – Proceed without Key Pair
https://awssecurityarchitect.com/ec2-security/aws-ec2-proceed-without-key-pair/

While creating the instance, you will be prompted to "Proceed without key pair". You can still connect to the instance provided that:

  • the sshd in your AMI is configured to allow password-based authentication (PasswordAuthentication yes in sshd_config), and the login user has a password set.
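For illustration, once password authentication is enabled, a password-based SSH connection might look like this sketch using the paramiko library; the hostname, username, and password are placeholders.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Hypothetical instance address and credentials
client.connect("ec2-203-0-113-10.compute-1.amazonaws.com",
               username="ec2-user",
               password="your-password")

stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()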

AWS Backups using SSM doc and bash
https://awssecurityarchitect.com/ec2-security/aws-backups-using-ssm-doc-and-bash/

How do I kick off a command-line based backup job (in this example, an EBS volume snapshot) from an SSM document?

 

  1. Create your SSM-managed EC2 instance (with the SSM agent installed). The SSM agent is pre-installed on Amazon Linux AMIs and needs to be installed on custom AMIs.
  2. Use the python script provided in this repo.
  3. Call the Python script from the command line (for testing purposes): python ec2_volume_snapshot.py <volume_id> <region_name>
  4. Once tested from the command line, use a bash script to wrap the Python command above. The bash script lives in the SSM document and runs on the Linux OS of the SSM-managed EC2 instance.

Sample Python program to create an EBS volume snapshot (the backup step), with pre- and post-scripts

import subprocess
import sys
import boto3

def execute_shell_commands(commands):
    # Run an OS command and print the command line plus its output
    MyOut = subprocess.Popen(commands,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
    stdout, stderr = MyOut.communicate()

    command_string = " ".join(commands)
    print("Command executed : %s" % command_string)
    if stdout is not None:
        stdout = stdout.decode("utf-8")
        print("Stdout :\n%s" % stdout)
    if stderr is not None:
        stderr = stderr.decode("utf-8")
        print("Stderr :\n%s" % stderr)

# Run pre-script: stop the web server so the volume is quiesced before the snapshot
execute_shell_commands(['sudo', 'service', 'apache2', 'stop'])

volume_id = sys.argv[1]
region_name = sys.argv[2]

ec2 = boto3.resource('ec2', region_name=region_name)
volume = ec2.Volume(volume_id)
snapshot = volume.create_snapshot()
snapshot.wait_until_completed()

ec2_client = boto3.client('ec2', region_name=region_name)
snapshot_details = ec2_client.describe_snapshots(SnapshotIds=[snapshot.id])
print("Snapshot details :\n%s" % snapshot_details)

# Run post-script: restart the web server and check its status
execute_shell_commands(['sudo', 'service', 'apache2', 'start'])
execute_shell_commands(['sudo', 'service', 'apache2', 'status'])

Sample bash script (in SSM doc) to call a python command

#!/bin/bash

MYSTRING="Do something in bash"
echo $MYSTRING

python - << EOF
myPyString = "Do something in Python"
print(myPyString)
EOF

echo "Back to bash"

 

 

 

S3 ACLs and Bucket Policies
https://awssecurityarchitect.com/s3-security/s3-acls-and-bucket-policies/

S3 ACLs and S3 Bucket Policies

ACLs were the first authorization mechanism in S3. Bucket policies are the newer mechanism, and the one used by almost all other AWS services.

Policies can implement very complex rules and permissions; ACLs are simplistic (they support ALLOW but not DENY).

ACL Granularity – Bucket level vs. Object Level ACLs

Use a bucket ACL when the entire bucket needs to be accessed by a single principal – e.g., a log writer (the S3 log delivery group in AWS).

Use object ACLs when permissions vary from object to object.

User Policies versus Bucket Policies

Use bucket policies when a broad set of principals – e.g., an entire account or a set of accounts (cross-account access) – needs to be granted permissions on the bucket.

User policies are better if you want to manage individual or group permissions by attaching policies to IAM users (or user groups). This differs from attaching a policy at the bucket level, since the policy is attached to the IAM user rather than to the bucket.
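As an illustration of the bucket-policy case, here is a minimal boto3 sketch that grants another account read access to a bucket. The bucket name and account ID are placeholders.

import json
import boto3

s3 = boto3.client("s3")

bucket_name = "example-shared-bucket"                 # hypothetical bucket
trusted_account = "arn:aws:iam::111122223333:root"    # hypothetical trusted account

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": trusted_account},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            f"arn:aws:s3:::{bucket_name}",
            f"arn:aws:s3:::{bucket_name}/*",
        ],
    }],
}

s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))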

Summary – S3 ACLs and S3 Bucket Policies

The use cases for each approach are highlighted in this post. For an advanced AWS IAM or overall security consultation, please contact AWS Security Architect.
