Video Walkthrough

Watch the end-to-end Apilog deployment and dashboard overview.

Documentation

Complete guide to building with Apilog

Quick Start

Get started with Apilog in 5 minutes

Prerequisites

  • Docker and Docker Compose
  • Git
  • A domain or server to host Apilog

Installation & Setup

1. Clone the repository

git clone https://github.com/APIL0g/APILog.git
cd APILog
2. Configure environment variables

Copy the example environment file and configure your settings

# Copy this file to `.env` (e.g. `cp .env.example .env`) and adjust the values.

############################################################
# Required Settings
############################################################

# InfluxDB database name where APILog writes/reads analytics events.
# (If you used docker-compose, the default is usually `apilog_db`.)
INFLUX_DATABASE=<InfluxDB database name>

# Public base URL of the site you want to snapshot/analyze.
# Example: https://example.com (include protocol, no trailing slash).
TARGET_SITE_BASE_URL=<your site domain or IP address>

############################################################
# Optional Settings (modify only if needed)
############################################################

# CORS allow list (comma-separated, or * for all origins).
CORS_ALLOW_ORIGIN=*

# LLM (Ollama) settings used by AI Insights.
LLM_PROVIDER=ollama
LLM_ENDPOINT=http://ollama:11434
LLM_MODEL=llama3:8b
LLM_TEMPERATURE=0.2
LLM_TIMEOUT_S=60
LLM_MAX_TOKENS=1024

# AI Report LLM (OpenAI-compatible) settings. Leave the API key blank to disable.
AI_REPORT_LLM_PROVIDER=openai_compat
AI_REPORT_LLM_ENDPOINT=https://api.openai.com
AI_REPORT_LLM_MODEL=gpt-4.1
# Fill in your OpenAI-compatible API key if you want to enable AI Report.
AI_REPORT_LLM_API_KEY=
AI_REPORT_LLM_MAX_TOKENS=4096
AI_REPORT_LLM_TEMPERATURE=0.2
AI_REPORT_LLM_TIMEOUT_S=300

# AI caching / internal API endpoint settings.
AI_INSIGHTS_CACHE_TTL=60
AI_INSIGHTS_EXPLAIN_CACHE_TTL=0
AI_REPORT_FETCH_BASE=http://apilog-api:8000

# Where to persist AI-generated dynamic widget specs (JSON file path).
# In the Docker setup, /snapshots is already mounted as a volume.
DYNAMIC_WIDGETS_PATH=/snapshots/dynamic_widgets.json

# Optional settings reference:
# - LLM_*: adjust only for advanced LLM tuning.
# - AI_INSIGHTS_* / AI_REPORT_FETCH_BASE: modify only when you need to change the cache policy or internal API addresses.
3. Run with Docker

Start Apilog using Docker Compose. This will set up the database and application automatically.

docker compose up -d --build

# Check if services are running:
docker compose ps

# View logs:
docker compose logs -f
Need the docker-compose reference?

The annotated file below shows how each service is wired. Uncomment the `gpus: all` line in the `ollama` service when you want to pass a GPU through.

# Docker Compose for the APILog production environment
############################################################

services:
  ############################################################
  # 1) InfluxDB 3 Core
  #    Time-series DB (HTTP API on port 8181, internal access only)
  ############################################################
  influxdb3-core:
    image: influxdb:3-core
    container_name: influxdb3-core

    environment:
      # Storage type: local filesystem
      INFLUXDB3_OBJECT_STORE: file

      # Data directory (persisted to a volume)
      INFLUXDB3_DB_DIR: /var/lib/influxdb3

      # Node ID (required argument)
      INFLUXDB3_NODE_ID: influx-node0

      # Disable auth for self-contained deployments
      INFLUXDB3_START_WITHOUT_AUTH: "true"

    command:
      # Entry command with explicit parameters
      - influxdb3
      - serve
      - --log-filter
      - info
      - --object-store
      - file
      - --plugin-dir
      - /plugins
      - --node-id
      - influx-node0

    # Expose port 8181 only inside the compose network (no host binding)
    expose:
      - "8181"

    volumes:
      # Database / catalog / Parquet storage
      - influx-data:/var/lib/influxdb3
      # Home metadata for the influxdb3 user
      - influx-meta:/home/influxdb3/.influxdb3
      # Plugins directory (rollups, custom scripts)
      - influx-plugins:/plugins

    restart: unless-stopped


  ############################################################
  # 2) apilog-api (FastAPI backend)
  #    Event ingestion & query API
  ############################################################
  apilog-api:
    container_name: apilog-api
    build: ./back/app

    environment:
      # Backend <-> InfluxDB connection info
      INFLUX_URL: ${INFLUX_URL}
      INFLUX_DATABASE: ${INFLUX_DATABASE}

      # Web API behavior (CORS & LLM)
      CORS_ALLOW_ORIGIN: ${CORS_ALLOW_ORIGIN}
      LLM_PROVIDER: ${LLM_PROVIDER}
      LLM_ENDPOINT: ${LLM_ENDPOINT}
      LLM_MODEL: ${LLM_MODEL}
      LLM_MAX_TOKENS: ${LLM_MAX_TOKENS}
      LLM_TEMPERATURE: ${LLM_TEMPERATURE}
      LLM_TIMEOUT_S: ${LLM_TIMEOUT_S}
      AI_REPORT_LLM_PROVIDER: ${AI_REPORT_LLM_PROVIDER}
      AI_REPORT_LLM_ENDPOINT: ${AI_REPORT_LLM_ENDPOINT}
      AI_REPORT_LLM_MODEL: ${AI_REPORT_LLM_MODEL}
      AI_REPORT_LLM_API_KEY: ${AI_REPORT_LLM_API_KEY}
      AI_REPORT_LLM_MAX_TOKENS: ${AI_REPORT_LLM_MAX_TOKENS}
      AI_REPORT_LLM_TEMPERATURE: ${AI_REPORT_LLM_TEMPERATURE}
      AI_REPORT_LLM_TIMEOUT_S: ${AI_REPORT_LLM_TIMEOUT_S}

      # AI widget settings (cache / internal API / snapshot target)
      AI_INSIGHTS_CACHE_TTL: ${AI_INSIGHTS_CACHE_TTL}
      AI_INSIGHTS_EXPLAIN_CACHE_TTL: ${AI_INSIGHTS_EXPLAIN_CACHE_TTL}
      AI_REPORT_FETCH_BASE: ${AI_REPORT_FETCH_BASE}
      TARGET_SITE_BASE_URL: ${TARGET_SITE_BASE_URL}

    depends_on:
      - influxdb3-core
      - ollama

    # Expose port 8000 only inside the compose network (the Nginx reverse proxy consumes it)
    expose:
      - "8000"

    volumes:
      # Persisted heatmap snapshot images (mounted at /snapshots)
      - snapshots:/snapshots

    restart: unless-stopped


  ############################################################
  # 3) apilog-nginx (Frontend + Reverse Proxy)
  #    Serves the static dashboard and proxies the API
  ############################################################
  apilog-nginx:
    container_name: apilog-nginx
    build:
      context: .
      dockerfile: infra/nginx/Dockerfile

    # Map host port 10000 -> container port 80 (adjust if a port conflict occurs)
    ports:
      - "10000:80"

    depends_on:
      - apilog-api

    restart: unless-stopped


  ############################################################
  # 4) ollama (Local LLM Server)
  #    Provides the local LLM endpoint for AI widgets
  ############################################################
  ollama:
    image: ollama/ollama:latest
    container_name: ollama

    environment:
      # Network binding for the Ollama service
      OLLAMA_HOST: 0.0.0.0
      OLLAMA_KEEP_ALIVE: 1h
      # If not provided via .env, falls back to llama3:8b
      LLM_MODEL: ${LLM_MODEL:-llama3:8b}

    # Only exposed internally; FastAPI reaches it via the service name.
    expose:
      - "11434"

    volumes:
      # Cache downloaded models
      - ollama-data:/root/.ollama

    entrypoint:
      - /bin/sh
      - -c
      - |
        # Note: "$$" is Compose escaping for a literal "$" inside the container shell.
        echo "Bootstrapping Ollama model: $$LLM_MODEL"
        ollama serve &
        PID=$$!
        sleep 10
        if [ -n "$$LLM_MODEL" ]; then
          ollama pull "$$LLM_MODEL" || echo "Warning: failed to pull $$LLM_MODEL"
        fi
        wait $$PID

    healthcheck:
      # Use the bundled ollama CLI; the base image may not include curl.
      test: ["CMD", "ollama", "list"]
      interval: 10s
      timeout: 5s
      retries: 20

    restart: unless-stopped

    # gpus: all


##############################################################
# Named volumes (persistent storage)
##############################################################
volumes:
  influx-data:
  influx-meta:
  influx-plugins:
  snapshots:
  ollama-data:
4. Add tracking code to your website

Add the Apilog tracking script to your website's HTML, just before the closing </head> tag:

<!-- Add this to your website's <head> section -->
<script
  src="http://<Public IP or Domain>:10000/apilog/embed.js"
  data-site-id="main"
  data-ingest-url="http://<Public IP or Domain>:10000/api/ingest/events">
</script>

Replace `<Public IP or Domain>` with your server's public IP address or domain. (The `strategy="beforeInteractive"` attribute is specific to the Next.js `<Script>` component; in plain HTML it has no effect.)
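To confirm the pipeline end to end, you can post a hand-built event straight to the ingest endpoint. The payload fields below (`site_id`, `event_type`, `path`, `ts`) are illustrative assumptions, not Apilog's documented schema; inspect a real request from `embed.js` in your browser's network tab for the exact shape.

```python
import json
import time
import urllib.request


def build_test_event(site_id: str, path: str) -> dict:
    """Build a minimal page-view event. Field names are assumptions;
    check a real embed.js request for the exact schema."""
    return {
        "site_id": site_id,
        "event_type": "pageview",
        "path": path,
        "ts": int(time.time() * 1000),  # epoch milliseconds
    }


def send_test_event(ingest_url: str, event: dict) -> int:
    """POST the event as JSON and return the HTTP status code."""
    req = urllib.request.Request(
        ingest_url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status


if __name__ == "__main__":
    event = build_test_event("main", "/smoke-test")
    # Replace localhost with your <Public IP or Domain> as needed.
    print(send_test_event("http://localhost:10000/api/ingest/events", event))
```

If the status is in the 2xx range and the event shows up on the dashboard, the ingest path is wired correctly.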

Configuration

Configure Apilog to match your needs

Environment Variables

These values map directly to the .env template above. Update them before running `docker compose up`.

  • INFLUX_DATABASE: Bucket/database that stores every analytics event (example: apilog_db).
  • TARGET_SITE_BASE_URL: Public base URL of the site you want to snapshot/analyze (example: https://example.com).
  • AI_REPORT_LLM_API_KEY: OpenAI-compatible API key, required only if you enable the AI Report feature (example: sk-...).
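Before `docker compose up`, it can help to sanity-check the two required values. The sketch below parses a `.env` file and flags the common mistakes called out above (placeholder left in place, missing protocol, trailing slash); it is a convenience script under those assumptions, not part of Apilog.

```python
from pathlib import Path

REQUIRED = ("INFLUX_DATABASE", "TARGET_SITE_BASE_URL")


def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


def check_env(env: dict) -> list:
    """Return a list of human-readable problems (empty list means OK)."""
    problems = []
    for key in REQUIRED:
        value = env.get(key, "")
        if not value or value.startswith("<"):
            problems.append(f"{key} is unset or still a placeholder")
    url = env.get("TARGET_SITE_BASE_URL", "")
    if url and not url.startswith(("http://", "https://")):
        problems.append("TARGET_SITE_BASE_URL must include the protocol")
    if url.endswith("/"):
        problems.append("TARGET_SITE_BASE_URL should not end with a slash")
    return problems


if __name__ == "__main__":
    problems = check_env(parse_env(Path(".env").read_text()))
    print("\n".join(problems) or "OK")
```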

Optional settings

The remaining variables control CORS, the local LLM, and AI caching:

  • CORS_ALLOW_ORIGIN: Comma-separated origins that are allowed to call apilog-api.
  • LLM_PROVIDER: LLM provider identifier. Keep 'ollama' unless you swap providers.
  • LLM_ENDPOINT: Internal URL apilog-api uses to reach the LLM service.
  • LLM_MODEL: Exact Ollama model tag used for generating insights.
  • LLM_TEMPERATURE: Controls randomness of AI responses (lower values are more deterministic).
  • LLM_TIMEOUT_S: Seconds to wait before timing out LLM requests.
  • AI_INSIGHTS_CACHE_TTL: Cache duration for AI insights in seconds.
  • AI_INSIGHTS_EXPLAIN_CACHE_TTL: Cache TTL for AI explanations (0 disables caching).
  • AI_REPORT_FETCH_BASE: Internal API endpoint used by the AI widgets.
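To make the TTL knobs concrete, here is a minimal sketch of the kind of time-based cache such settings control. This is illustrative only, not apilog-api's actual code; it assumes, as the AI_INSIGHTS_EXPLAIN_CACHE_TTL description states, that a TTL of 0 disables caching.

```python
import time


class TTLCache:
    """Minimal time-based cache: entries expire after ttl seconds.
    A ttl of 0 (or less) disables caching entirely."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: drop and report a miss
            return None
        return value

    def set(self, key, value):
        if self.ttl <= 0:
            return  # caching disabled, never store anything
        self._store[key] = (value, time.monotonic() + self.ttl)
```

With AI_INSIGHTS_CACHE_TTL=60, repeated insight requests within a minute would be served from such a cache instead of re-querying the LLM.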

Best Practices

  • Always use environment variables for sensitive configuration
  • Enable HTTPS in production for secure data transmission
  • Regularly backup your analytics database
  • Monitor your Apilog instance performance and scale as needed

Core Features

Explore what Apilog can do

Real-time Analytics

Track page views, user sessions, and events in real-time with low latency data processing.

Custom Dashboards

Build personalized dashboards using our flexible portlet system. Drag, drop, and arrange widgets to suit your workflow.

Privacy-First

All data stays on your infrastructure. No third-party tracking, complete GDPR compliance.

Event Tracking

Track custom events, conversions, and user interactions with our simple JavaScript SDK.

Data Export

Export your analytics data in multiple formats (CSV, JSON, SQL) for further analysis.
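For quick post-processing of a JSON export, a few lines of standard-library Python will flatten it to CSV. The event field names in the example (`path`, `ts`) are placeholders; use whatever keys your export actually contains.

```python
import csv
import io
import json


def events_json_to_csv(json_text: str) -> str:
    """Convert a JSON array of event objects into CSV text.
    The header is the sorted union of all keys seen in the events."""
    events = json.loads(json_text)
    if not events:
        return ""
    fieldnames = sorted({k for e in events for k in e})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(events)
    return buf.getvalue()
```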

Self-Hosted

Deploy on your own servers or cloud infrastructure. You own and control all your data.

Troubleshooting

Common issues and solutions

Events not appearing in dashboard

Check that the `data-site-id` and `data-ingest-url` in your tracking snippet are correct and that the tracking script is loaded before any tracking calls are made.

CORS errors

Add your domain to the CORS_ALLOW_ORIGIN environment variable in your Apilog configuration.
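The allow list is a comma-separated set of origins (or `*`). The helper below is a sketch of how such a list is typically matched, not necessarily apilog-api's exact logic; note that an origin is scheme + host + port with no path and no trailing slash (e.g. `https://app.example.com`, not `https://app.example.com/`).

```python
def origin_allowed(request_origin: str, cors_allow_origin: str) -> bool:
    """Check a request's Origin header against a comma-separated allow list.
    '*' allows every origin; otherwise matching is exact (scheme + host + port)."""
    allowed = [o.strip() for o in cors_allow_origin.split(",") if o.strip()]
    return "*" in allowed or request_origin in allowed
```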

High latency

Consider deploying Apilog closer to your users or using a CDN for the tracking script.