Monitoring Cloudflare Logs with ClickStack

TL;DR

This guide shows you how to ingest Cloudflare logs into ClickStack using ClickPipes. You'll configure event-driven ingestion via SQS and set up ClickPipes to continuously ingest logs from S3 into ClickHouse.

A demo dataset is available if you want to test before configuring production.

Time Required: 15-20 minutes

Integration with existing Cloudflare Logpush

This section assumes you have Cloudflare Logpush configured to export logs to S3. If not, follow Cloudflare's AWS S3 setup guide first.

Prerequisites

  • ClickStack instance running
  • Cloudflare Logpush actively writing logs to an S3 bucket
  • AWS permissions to create SQS queues and IAM roles
  • S3 bucket name and region where Cloudflare writes logs

Create SQS queue

Create an SQS queue to receive notifications when Cloudflare uploads new log files to S3.

Via AWS Console:

  1. Navigate to SQS → Create queue
  2. Type: Standard
  3. Name: cloudflare-logs-queue
  4. Click Create queue
  5. Copy the Queue URL

Configure access policy:

Select your queue → Access policy tab → Edit → Replace with:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "s3.amazonaws.com"},
    "Action": "SQS:SendMessage",
    "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:cloudflare-logs-queue",
    "Condition": {
      "ArnEquals": {
        "aws:SourceArn": "arn:aws:s3:::YOUR-BUCKET-NAME"
      }
    }
  }]
}

Replace REGION, ACCOUNT_ID, and YOUR-BUCKET-NAME with your values.
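If you prefer to script this step, the same policy can be generated and validated locally before applying it via the console or CLI. The region, account ID, and file names below are placeholder assumptions; substitute your own values.

```shell
# Hypothetical values -- substitute your own before applying.
REGION="us-east-1"
ACCOUNT_ID="123456789012"

# Write the access policy with the placeholders filled in.
# The bucket name is left as a placeholder on purpose.
cat > queue-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "s3.amazonaws.com"},
    "Action": "SQS:SendMessage",
    "Resource": "arn:aws:sqs:${REGION}:${ACCOUNT_ID}:cloudflare-logs-queue",
    "Condition": {
      "ArnEquals": {"aws:SourceArn": "arn:aws:s3:::YOUR-BUCKET-NAME"}
    }
  }]
}
EOF

# Sanity-check that the file is valid JSON before pasting it into the console.
python3 -m json.tool queue-policy.json > /dev/null && echo "queue-policy.json is valid JSON"
```

A malformed policy is silently rejected by the console editor, so validating the JSON first saves a round trip.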

Configure S3 event notifications

Configure your S3 bucket to notify the queue when new files arrive.

  1. S3 bucket → Properties → Event notifications → Create event notification
  2. Name: cloudflare-new-file
  3. Event types: ✓ All object create events
  4. Destination: SQS queue → Select cloudflare-logs-queue
  5. Click Save changes
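The console steps above correspond to a bucket notification configuration that can also be applied with `aws s3api put-bucket-notification-configuration`. A minimal sketch, assuming placeholder region and account ID values:

```shell
# Hypothetical values -- substitute your own.
REGION="us-east-1"
ACCOUNT_ID="123456789012"

# Notification config equivalent to the console steps:
# all object-create events go to cloudflare-logs-queue.
cat > notification.json <<EOF
{
  "QueueConfigurations": [{
    "Id": "cloudflare-new-file",
    "QueueArn": "arn:aws:sqs:${REGION}:${ACCOUNT_ID}:cloudflare-logs-queue",
    "Events": ["s3:ObjectCreated:*"]
  }]
}
EOF

python3 -m json.tool notification.json > /dev/null && echo "notification.json is valid JSON"

# Then apply it (requires AWS credentials; bucket name kept as a placeholder):
# aws s3api put-bucket-notification-configuration \
#   --bucket YOUR-BUCKET-NAME \
#   --notification-configuration file://notification.json
```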

Create IAM role for ClickPipes

ClickPipes needs permission to read from S3 and consume SQS messages.

Get ClickHouse Cloud IAM ARN:

  1. ClickHouse Cloud Console → Settings → Network Security Information
  2. Copy the IAM Role ARN

Create IAM role:

  1. AWS Console → IAM → Roles → Create role
  2. Trusted entity: Custom trust policy
  3. Paste:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "AWS": "YOUR-CLICKHOUSE-CLOUD-ARN"
    },
    "Action": "sts:AssumeRole"
  }]
}
  4. Role name: clickhouse-clickpipes-cloudflare
  5. Click Create role
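The same role can be created from the CLI by writing the trust policy to a file first. The ARN below is a hypothetical example; use the value copied from the ClickHouse Cloud console.

```shell
# Hypothetical ARN -- use the IAM Role ARN from ClickHouse Cloud.
CLICKHOUSE_ARN="arn:aws:iam::111122223333:role/example-clickhouse-role"

cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "${CLICKHOUSE_ARN}"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

python3 -m json.tool trust-policy.json > /dev/null && echo "trust-policy.json is valid JSON"

# Then create the role (requires AWS credentials):
# aws iam create-role \
#   --role-name clickhouse-clickpipes-cloudflare \
#   --assume-role-policy-document file://trust-policy.json
```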

Add permissions:

  1. Select role → Add permissions → Create inline policy
  2. Paste:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR-BUCKET-NAME",
        "arn:aws:s3:::YOUR-BUCKET-NAME/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:cloudflare-logs-queue"
    }
  ]
}
  3. Policy name: ClickPipesCloudflareAccess
  4. Copy the Role ARN
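For scripted setups, the same inline policy can be written to a file and attached with `aws iam put-role-policy`. Region and account ID below are placeholder assumptions; the bucket name is left as a placeholder.

```shell
# Hypothetical values -- substitute your own.
REGION="us-east-1"
ACCOUNT_ID="123456789012"

# Same inline policy as above, written to a file for use with the CLI.
cat > clickpipes-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetBucketLocation", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::YOUR-BUCKET-NAME",
        "arn:aws:s3:::YOUR-BUCKET-NAME/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
      "Resource": "arn:aws:sqs:${REGION}:${ACCOUNT_ID}:cloudflare-logs-queue"
    }
  ]
}
EOF

python3 -m json.tool clickpipes-policy.json > /dev/null && echo "clickpipes-policy.json is valid JSON"

# Attach it to the role created above (requires AWS credentials):
# aws iam put-role-policy \
#   --role-name clickhouse-clickpipes-cloudflare \
#   --policy-name ClickPipesCloudflareAccess \
#   --policy-document file://clickpipes-policy.json
```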

Create ClickPipes job

  1. ClickHouse Cloud Console → Data Sources → Create ClickPipe
  2. Source: Amazon S3

Connection:

  • Bucket: Your Cloudflare logs bucket
  • Region: Your bucket region
  • Authentication: IAM Role → Paste Role ARN from previous step

Ingestion:

  • Mode: Continuous
  • Ordering: Any order
  • Enable SQS
  • Queue URL: paste the Queue URL copied when you created the SQS queue

Schema:

ClickPipes auto-detects schema from your logs. Review and adjust field types as needed.

Example schema:

CREATE TABLE cloudflare_logs (
    EdgeStartTimestamp DateTime64(3),
    ClientIP String,
    ClientCountry LowCardinality(String),
    ClientRequestMethod LowCardinality(String),
    ClientRequestPath String,
    EdgeResponseStatus UInt16,
    EdgeResponseBytes UInt64,
    CacheCacheStatus LowCardinality(String),
    BotScore Nullable(UInt16),
    SecurityAction LowCardinality(Nullable(String)),
    RayID String
) ENGINE = MergeTree()
ORDER BY (EdgeStartTimestamp, ClientCountry, EdgeResponseStatus)
PARTITION BY toYYYYMMDD(EdgeStartTimestamp);

Click Create ClickPipe

Verify data in ClickHouse

Wait 2-3 minutes for initial ingestion, then query:

-- Check row count
SELECT count() FROM cloudflare_logs;

-- View recent requests
SELECT 
    EdgeStartTimestamp,
    ClientIP,
    ClientCountry,
    ClientRequestMethod,
    ClientRequestPath,
    EdgeResponseStatus
FROM cloudflare_logs
ORDER BY EdgeStartTimestamp DESC
LIMIT 10;

Demo dataset

For users who want to test before configuring production, we provide sample Cloudflare logs.

Download sample dataset

curl -O https://datasets-documentation.s3.eu-west-3.amazonaws.com/clickstack-integrations/cloudflare/cloudflare-logs.json.gz

The dataset includes 24 hours of HTTP requests with realistic patterns covering traffic spikes, security events, and geographic distribution.

Upload to S3

aws s3 cp cloudflare-logs.json.gz \
  s3://YOUR-BUCKET-NAME/demo/20250127_demo.json.gz

The upload triggers an S3 event notification to SQS, and ClickPipes picks up and processes the file automatically.

Verify demo data

SELECT count() FROM cloudflare_logs;
-- Should show demo records

SELECT 
    toDate(EdgeStartTimestamp) as date,
    count() as requests
FROM cloudflare_logs
GROUP BY date
ORDER BY date DESC;

Dashboards and visualization

Download the dashboard configuration

Import dashboard

  1. HyperDX → Dashboards → Import Dashboard
  2. Upload cloudflare-logs-dashboard.json → Finish Import

View dashboard

The dashboard includes:

  • Request rate and traffic volume
  • Geographic distribution
  • Cache hit rates
  • Error rates by status code
  • Security events
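The panels above can also be reproduced as plain queries against the example schema. A sketch of two of them, assuming the `cloudflare_logs` table defined earlier and Cloudflare's lowercase `CacheCacheStatus` values:

```sql
-- Cache hit rate over the last 24 hours
SELECT
    countIf(CacheCacheStatus = 'hit') / count() AS cache_hit_rate
FROM cloudflare_logs
WHERE EdgeStartTimestamp > now() - INTERVAL 1 DAY;

-- Server error volume by status code
SELECT
    EdgeResponseStatus,
    count() AS requests
FROM cloudflare_logs
WHERE EdgeResponseStatus >= 500
GROUP BY EdgeResponseStatus
ORDER BY requests DESC;
```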

Troubleshooting

No files appearing in S3

Verify Cloudflare Logpush is active:

  • Cloudflare Dashboard → Analytics & Logs → Logs → Check job status

Generate test traffic:

curl https://your-cloudflare-domain.com

Wait 2-3 minutes and check S3.

SQS not receiving messages

Verify S3 event notification:

  • S3 bucket → Properties → Event notifications → Confirm configuration exists

Test SQS policy:

aws sqs get-queue-attributes \
  --queue-url YOUR-QUEUE-URL \
  --attribute-names Policy

ClickPipes not processing files

Check IAM permissions:

  • Verify ClickHouse can assume the role
  • Confirm S3 and SQS permissions are correct

View ClickPipes logs:

  • ClickHouse Cloud Console → Data Sources → Your ClickPipe → Logs

Data not appearing in ClickHouse

Verify table exists:

SHOW TABLES FROM default LIKE 'cloudflare_logs';

Check for schema errors:

SELECT * FROM system.query_log 
WHERE type = 'ExceptionWhileProcessing'
  AND query LIKE '%cloudflare_logs%'
ORDER BY event_time DESC
LIMIT 10;

Next steps

  • Set up alerts for security events
  • Optimize retention policies based on data volume
  • Create custom dashboards for specific use cases

Going to production

For production deployments:

  • Enable daily subfolders in Cloudflare Logpush for better organization
  • Configure SQS Dead Letter Queue for failed message handling
  • Set up CloudWatch alarms for queue depth monitoring
  • Review partitioning strategy based on query patterns
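As one concrete example of the Dead Letter Queue recommendation, an SQS redrive policy can be prepared locally and attached to the main queue. The queue names, region, account ID, and `maxReceiveCount` below are placeholder assumptions.

```shell
# Hypothetical values -- substitute your own.
REGION="us-east-1"
ACCOUNT_ID="123456789012"

# Redrive policy: after 5 failed receives, a message moves to the DLQ.
cat > redrive.json <<EOF
{
  "deadLetterTargetArn": "arn:aws:sqs:${REGION}:${ACCOUNT_ID}:cloudflare-logs-dlq",
  "maxReceiveCount": "5"
}
EOF

python3 -m json.tool redrive.json > /dev/null && echo "redrive.json is valid JSON"

# Create the DLQ, then attach the redrive policy to the main queue
# (requires AWS credentials; attributes.json wraps redrive.json as the
# string value of the RedrivePolicy attribute):
# aws sqs create-queue --queue-name cloudflare-logs-dlq
# aws sqs set-queue-attributes \
#   --queue-url YOUR-QUEUE-URL \
#   --attributes file://attributes.json
```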