Database Performance Optimization: Practical Strategies from SQL to NoSQL

Database performance is one of the most critical factors directly affecting the success of modern applications. A slow database can render even the best-written code useless. In this comprehensive guide, drawing on fifteen years of database experience, I will share practical performance optimization strategies for every type of database, from SQL to NoSQL.

Fundamental Principles of Database Performance

1. The Basics of Performance Measurement

Before optimizing, you need to know what you are measuring:

-- Query performance monitoring in PostgreSQL
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, FORMAT JSON)
SELECT u.name, p.title, COUNT(c.id) as comment_count
FROM users u
JOIN posts p ON u.id = p.author_id
LEFT JOIN comments c ON p.id = c.post_id
WHERE u.created_at >= '2025-01-01'
GROUP BY u.id, u.name, p.id, p.title
ORDER BY comment_count DESC
LIMIT 20;

Key metrics (the sketch after this list shows how to read them programmatically):

  • Execution Time: how long the query takes to run
  • Planning Time: time spent building the query plan
  • Shared Hit Ratio: fraction of blocks served from cache
  • Buffer Usage: memory consumed by the query
  • I/O Operations: number of disk reads and writes
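
As a minimal sketch (assuming node-postgres and the illustrative table names used throughout this post), these metrics can be pulled out of the JSON plan programmatically:

// Run EXPLAIN through node-postgres and extract the key metrics above.
// Assumes a configured pg Pool like the one shown in the next section.
async function explainQuery(pool, sql, params = []) {
  const { rows } = await pool.query(
    `EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) ${sql}`,
    params
  );
  const raw = rows[0]["QUERY PLAN"];
  const [plan] = typeof raw === "string" ? JSON.parse(raw) : raw; // one-element array
  const hit = plan.Plan["Shared Hit Blocks"] ?? 0;
  const read = plan.Plan["Shared Read Blocks"] ?? 0;
  return {
    executionTimeMs: plan["Execution Time"],
    planningTimeMs: plan["Planning Time"],
    sharedHitRatio: hit + read > 0 ? hit / (hit + read) : 1,
  };
}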

2. Database Connection Management

// Performance optimization with connection pooling
const { Pool } = require("pg");

const pool = new Pool({
  user: "myuser",
  host: "localhost",
  database: "mydb",
  password: "mypassword",
  port: 5432,

  // Connection pool settings
  max: 20, // Maximum pool size
  min: 5, // Minimum pool size
  connectionTimeoutMillis: 5000, // fail fast when no connection is free
  idleTimeoutMillis: 30000, // close clients idle for 30s
  // Note: node-postgres has no separate acquireTimeoutMillis option;
  // connectionTimeoutMillis covers waiting for a pooled connection

  // Let the process exit when all clients are idle
  allowExitOnIdle: true,
});

// Proper connection usage
async function getUserPosts(userId) {
  const client = await pool.connect();

  try {
    await client.query("BEGIN");

    const userResult = await client.query("SELECT * FROM users WHERE id = $1", [
      userId,
    ]);

    const postsResult = await client.query(
      "SELECT * FROM posts WHERE author_id = $1 ORDER BY created_at DESC",
      [userId]
    );

    await client.query("COMMIT");

    return {
      user: userResult.rows[0],
      posts: postsResult.rows,
    };
  } catch (error) {
    await client.query("ROLLBACK");
    throw error;
  } finally {
    client.release(); // Always release connection back to pool
  }
}

SQL Database Optimization

Index Strategies

1. B-Tree Indexes (The Most Common)

-- Single column index
CREATE INDEX idx_users_email ON users(email);

-- Composite index (column order matters!)
CREATE INDEX idx_posts_author_created ON posts(author_id, created_at);

-- This index optimizes queries like these:
SELECT * FROM posts WHERE author_id = 123;
SELECT * FROM posts WHERE author_id = 123 AND created_at > '2025-01-01';

-- It does not help a query that filters only on created_at, though,
-- since author_id is the leading column:
SELECT * FROM posts WHERE created_at > '2025-01-01';
-- That query needs an index that leads with created_at:
CREATE INDEX idx_posts_created_author ON posts(created_at, author_id);

2. Partial Indexes (Conditional Indexes)

-- Index only active users
CREATE INDEX idx_active_users_email
ON users(email)
WHERE is_active = true;

-- Index only published posts
CREATE INDEX idx_published_posts_created
ON posts(created_at)
WHERE status = 'published';

-- These indexes are smaller and faster than full-table indexes,
-- saving storage and reducing maintenance cost

3. Functional Indexes

-- For case-insensitive lookups
CREATE INDEX idx_users_email_lower
ON users(LOWER(email));

-- For JSON fields
CREATE INDEX idx_user_preferences_theme
ON users((preferences->>'theme'));

-- For full-text search
CREATE INDEX idx_posts_content_fts
ON posts USING gin(to_tsvector('english', content));
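
To actually use this GIN index, a query must repeat the exact expression the index was built on. A hedged example through node-postgres (plainto_tsquery is one way to parse free-form search input):

// Hypothetical full-text query served by idx_posts_content_fts; the
// WHERE clause matches the expression the index was created with
const { rows } = await pool.query(
  `SELECT id, title
     FROM posts
    WHERE to_tsvector('english', content) @@ plainto_tsquery('english', $1)`,
  ["database performance"]
);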

Query Optimization Techniques

1. Join Optimization

-- BAD: the N+1 query pattern
-- issues a separate round trip for every record
SELECT * FROM posts WHERE author_id = 1;
SELECT * FROM users WHERE id = 1;
SELECT * FROM posts WHERE author_id = 2;
SELECT * FROM users WHERE id = 2;
-- ... (repeated N times)

-- GOOD: a single join query
SELECT
    p.id,
    p.title,
    p.content,
    p.created_at,
    u.name as author_name,
    u.email as author_email
FROM posts p
INNER JOIN users u ON p.author_id = u.id
WHERE p.status = 'published'
ORDER BY p.created_at DESC
LIMIT 20;

-- BETTER: complex queries composed with CTEs
WITH recent_active_users AS (
    SELECT id, name, email
    FROM users
    WHERE last_login_at > NOW() - INTERVAL '30 days'
        AND is_active = true
),
popular_posts AS (
    SELECT
        p.id,
        p.title,
        p.author_id,
        COUNT(c.id) as comment_count
    FROM posts p
    LEFT JOIN comments c ON p.id = c.post_id
    WHERE p.created_at > NOW() - INTERVAL '7 days'
    GROUP BY p.id, p.title, p.author_id
    HAVING COUNT(c.id) > 5
)
SELECT
    pp.title,
    pp.comment_count,
    rau.name as author_name
FROM popular_posts pp
INNER JOIN recent_active_users rau ON pp.author_id = rau.id
ORDER BY pp.comment_count DESC;

2. Subquery vs EXISTS Optimization

-- OFTEN SLOWER: subquery with IN
SELECT * FROM users
WHERE id IN (
    SELECT DISTINCT author_id
    FROM posts
    WHERE created_at > '2025-01-01'
);

-- USUALLY FASTER: using EXISTS
SELECT * FROM users u
WHERE EXISTS (
    SELECT 1 FROM posts p
    WHERE p.author_id = u.id
        AND p.created_at > '2025-01-01'
);

-- OFTEN FASTEST: a JOIN (needs DISTINCT to avoid duplicate rows)
SELECT DISTINCT u.*
FROM users u
INNER JOIN posts p ON u.id = p.author_id
WHERE p.created_at > '2025-01-01';

3. Pagination with Window Functions

-- Traditional OFFSET/LIMIT (slow at large offsets)
SELECT * FROM posts
ORDER BY created_at DESC
LIMIT 20 OFFSET 10000; -- has to scan and discard 10,000 rows

-- Cursor-based pagination (fast at any depth)
SELECT * FROM posts
WHERE created_at < '2025-05-20 10:00:00'  -- Last seen timestamp
ORDER BY created_at DESC
LIMIT 20;

-- Ranking with a window function (the alias cannot be referenced in the
-- WHERE clause of the same query level, so wrap it in a subquery)
SELECT *
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (PARTITION BY category_id ORDER BY score DESC) as rank_in_category
    FROM posts
) ranked
WHERE rank_in_category <= 10;  -- Top 10 in each category
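
Wiring the cursor-based variant into application code is straightforward. A minimal sketch (the listPosts helper is illustrative, not from any framework): the client echoes nextCursor back to fetch the following page.

// Keyset pagination: nextCursor is the created_at of the last row seen
async function listPosts(pool, { cursor = null, pageSize = 20 } = {}) {
  const params = [pageSize];
  let where = "";
  if (cursor) {
    params.push(cursor);
    where = "WHERE created_at < $2";
  }
  const { rows } = await pool.query(
    `SELECT id, title, created_at FROM posts ${where}
     ORDER BY created_at DESC LIMIT $1`,
    params
  );
  return {
    rows,
    nextCursor: rows.length ? rows[rows.length - 1].created_at : null,
  };
}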

Database Schema Design

1. Normalization vs Denormalization

-- Normalized design (3NF)
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    title VARCHAR(200) NOT NULL,
    content TEXT,
    author_id INTEGER REFERENCES users(id),
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE comments (
    id SERIAL PRIMARY KEY,
    content TEXT NOT NULL,
    post_id INTEGER REFERENCES posts(id),
    author_id INTEGER REFERENCES users(id),
    created_at TIMESTAMP DEFAULT NOW()
);

-- Denormalized design (for performance)
CREATE TABLE posts_denormalized (
    id SERIAL PRIMARY KEY,
    title VARCHAR(200) NOT NULL,
    content TEXT,
    author_id INTEGER,
    author_name VARCHAR(100),  -- Denormalized
    author_email VARCHAR(255), -- Denormalized
    comment_count INTEGER DEFAULT 0, -- Calculated field
    created_at TIMESTAMP DEFAULT NOW()
);

-- Keeping the denormalized data in sync with a trigger
CREATE OR REPLACE FUNCTION update_post_comment_count()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE posts_denormalized
        SET comment_count = comment_count + 1
        WHERE id = NEW.post_id;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE posts_denormalized
        SET comment_count = comment_count - 1
        WHERE id = OLD.post_id;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_update_comment_count
    AFTER INSERT OR DELETE ON comments
    FOR EACH ROW EXECUTE FUNCTION update_post_comment_count();

2. Partitioning Strategies

-- Range partitioning (by date)
CREATE TABLE posts_partitioned (
    id BIGSERIAL,
    title VARCHAR(200) NOT NULL,
    content TEXT,
    author_id INTEGER,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

-- Create a partition for each month
CREATE TABLE posts_2025_01 PARTITION OF posts_partitioned
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

CREATE TABLE posts_2025_02 PARTITION OF posts_partitioned
    FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');

-- Automatic partition creation
CREATE OR REPLACE FUNCTION create_monthly_partition()
RETURNS void AS $$
DECLARE
    start_date date;
    end_date date;
    partition_name text;
BEGIN
    start_date := date_trunc('month', CURRENT_DATE);
    end_date := start_date + interval '1 month';
    partition_name := 'posts_' || to_char(start_date, 'YYYY_MM');

    EXECUTE format('CREATE TABLE IF NOT EXISTS %I PARTITION OF posts_partitioned
                    FOR VALUES FROM (%L) TO (%L)',
                   partition_name, start_date, end_date);
END;
$$ LANGUAGE plpgsql;

-- Run this as a monthly cron job, e.g. via the sketch below
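
One way to schedule the function from the application tier is a cron-style timer. A sketch assuming the node-cron package and the pg pool from earlier (pg_cron inside the database is an equally valid choice):

const cron = require("node-cron");

// At 00:00 on day 1 of every month, create that month's partition
cron.schedule("0 0 1 * *", async () => {
  await pool.query("SELECT create_monthly_partition()");
});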

NoSQL Database Optimization

MongoDB Performance Tuning

1. Index Strategies

// MongoDB index creation
db.users.createIndex({ email: 1 }, { unique: true });

// Compound index
db.posts.createIndex({
  author_id: 1,
  created_at: -1, // Descending sort
  status: 1,
});

// Text search index
db.posts.createIndex(
  {
    title: "text",
    content: "text",
  },
  {
    weights: { title: 10, content: 1 },
    name: "text_search_idx",
  }
);

// Geospatial index
db.locations.createIndex({ coordinates: "2dsphere" });

// Partial index (MongoDB 3.2+)
db.users.createIndex(
  { email: 1 },
  { partialFilterExpression: { is_active: true } }
);

2. Aggregation Pipeline Optimization

// A GOOD aggregation pipeline (stage order matters!)
db.posts.aggregate([
  // 1. Filter early (reduce working set)
  {
    $match: {
      created_at: { $gte: new Date("2025-01-01") },
      status: "published",
    },
  },

  // 2. Project only needed fields
  {
    $project: {
      title: 1,
      author_id: 1,
      created_at: 1,
      view_count: 1,
    },
  },

  // 3. Lookup after filtering
  {
    $lookup: {
      from: "users",
      localField: "author_id",
      foreignField: "_id",
      as: "author",
      pipeline: [{ $project: { name: 1, email: 1 } }],
    },
  },

  // 4. Sort with index support
  { $sort: { view_count: -1 } },

  // 5. Limit results
  { $limit: 20 },
]);

// Index hint usage
db.posts
  .find({ author_id: ObjectId("...") })
  .hint({ author_id: 1, created_at: -1 });
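
It is worth confirming that the $match stage is actually index-backed; explain() shows whether the winning plan uses an IXSCAN or falls back to a COLLSCAN:

// Inspect the pipeline's plan: look for IXSCAN rather than COLLSCAN
db.posts
  .explain("executionStats")
  .aggregate([
    { $match: { status: "published", created_at: { $gte: new Date("2025-01-01") } } },
    { $limit: 20 },
  ]);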

3. Schema Design Patterns

// Embedding vs Referencing strategy

// EMBEDDING (for 1:few relationships)
{
    _id: ObjectId("..."),
    title: "Blog Post",
    content: "...",
    author: {  // Embedded document
        name: "John Doe",
        email: "john@example.com"
    },
    comments: [  // Embedded array (limit ~100 items)
        {
            text: "Great post!",
            author: "Jane",
            created_at: ISODate("...")
        }
    ]
}

// REFERENCING (for 1:many relationships)
// Users collection
{
    _id: ObjectId("user1"),
    name: "John Doe",
    email: "john@example.com"
}

// Posts collection
{
    _id: ObjectId("post1"),
    title: "Blog Post",
    content: "...",
    author_id: ObjectId("user1")  // Reference
}

// HYBRID (for large datasets)
{
    _id: ObjectId("post1"),
    title: "Blog Post",
    content: "...",
    author: {
        _id: ObjectId("user1"),
        name: "John Doe"  // Frequently accessed data
    },
    comment_count: 42,
    latest_comments: [  // Cache of recent comments
        {
            text: "Recent comment",
            author: "Jane",
            created_at: ISODate("...")
        }
    ]
}
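
The hybrid pattern only works if the cached fields stay fresh. A hedged sketch of the write path (postId and the comment fields are placeholders): $inc maintains the counter while $push with $each/$sort/$slice keeps only the newest comments.

// Update the hybrid document whenever a comment is added
db.posts.updateOne(
  { _id: postId },
  {
    $inc: { comment_count: 1 },
    $push: {
      latest_comments: {
        $each: [{ text: "New comment", author: "Jane", created_at: new Date() }],
        $sort: { created_at: -1 },
        $slice: 3, // keep only the 3 most recent
      },
    },
  }
);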

Redis Performance Optimization

1. Data Structure Selection

const redis = require("redis");
const client = redis.createClient();
await client.connect(); // node-redis v4 requires an explicit connect

// String operations
await client.set("user:123:name", "John Doe", { EX: 3600 }); // 1 hour TTL
await client.get("user:123:name");

// Hash for object-like data
await client.hSet("user:123", {
  name: "John Doe",
  email: "john@example.com",
  last_login: Date.now().toString(),
});
await client.hGetAll("user:123");

// List for queues and timelines
await client.lPush(
  "notifications:user:123",
  JSON.stringify({
    type: "comment",
    message: "New comment on your post",
    timestamp: Date.now(),
  })
);

// Set for unique collections
await client.sAdd("user:123:followers", ["user:456", "user:789"]);
await client.sIsMember("user:123:followers", "user:456");

// Sorted Set for leaderboards
await client.zAdd("leaderboard:2025", { score: 1500, value: "user:123" });
await client.zRange("leaderboard:2025", 0, 9, { REV: true }); // Top 10 (Redis 6.2+)

// Advanced: batch commands with multi (queued and sent together)
const results = await client
  .multi()
  .set("key1", "value1")
  .set("key2", "value2")
  .get("key1")
  .exec();

2. Caching Patterns

// Cache-Aside Pattern
async function getUser(userId) {
  const cacheKey = `user:${userId}`;

  // Try cache first
  let user = await client.get(cacheKey);
  if (user) {
    return JSON.parse(user);
  }

  // Cache miss - fetch from database
  user = await db.users.findById(userId);
  if (user) {
    // Cache for 1 hour
    await client.setEx(cacheKey, 3600, JSON.stringify(user));
  }

  return user;
}

// Write-Through Pattern
async function updateUser(userId, userData) {
  const cacheKey = `user:${userId}`;

  // Update database first
  const user = await db.users.findByIdAndUpdate(userId, userData, {
    new: true,
  });

  // Update cache
  await client.setEx(cacheKey, 3600, JSON.stringify(user));

  return user;
}

// Write-Behind (Write-Back) Pattern
class WriteBackCache {
  constructor() {
    this.dirtyKeys = new Set();
    this.flushInterval = setInterval(() => this.flush(), 5000); // Flush every 5s
  }

  async set(key, value) {
    await client.set(key, JSON.stringify(value));
    this.dirtyKeys.add(key);
  }

  async flush() {
    if (this.dirtyKeys.size === 0) return;

    const keys = Array.from(this.dirtyKeys);
    this.dirtyKeys.clear();

    // Batch write to the database (writeToDatabase is an app-specific
    // persistence hook, not shown here)
    const pipeline = [];
    for (const key of keys) {
      const value = await client.get(key);
      if (value) {
        pipeline.push(this.writeToDatabase(key, JSON.parse(value)));
      }
    }

    await Promise.all(pipeline);
  }
}

Cross-Database Performance Strategies

Connection Pooling and Load Balancing

// Multi-database connection management
class DatabaseManager {
  constructor() {
    // Read replicas
    this.readPools = [
      new Pool({ ...config, host: "read-replica-1" }),
      new Pool({ ...config, host: "read-replica-2" }),
      new Pool({ ...config, host: "read-replica-3" }),
    ];

    // Write master
    this.writePool = new Pool({ ...config, host: "master" });

    this.readIndex = 0;
  }

  // Round-robin load balancing for reads
  getReadPool() {
    const pool = this.readPools[this.readIndex];
    this.readIndex = (this.readIndex + 1) % this.readPools.length;
    return pool;
  }

  async executeQuery(query, params, isWrite = false) {
    // Writes hit the master pool; reads rotate across the replicas
    const pool = isWrite ? this.writePool : this.getReadPool();
    const client = await pool.connect();

    try {
      const result = await client.query(query, params);
      return result.rows;
    } finally {
      client.release();
    }
  }
}

// Usage
const dbManager = new DatabaseManager();

// Read operations go to replicas
const users = await dbManager.executeQuery(
  "SELECT * FROM users WHERE is_active = $1",
  [true],
  false // Read operation
);

// Write operations go to master
await dbManager.executeQuery(
  "INSERT INTO users (name, email) VALUES ($1, $2)",
  ["John Doe", "john@example.com"],
  true // Write operation
);

Query Caching Strategies

// Multi-layer caching
const crypto = require("crypto");

class QueryCache {
  constructor() {
    this.memoryCache = new Map();
    this.redisClient = redis.createClient(); // assume connect() is awaited at startup
    this.maxMemoryCacheSize = 1000;
  }

  async get(key) {
    // L1: Memory cache (fastest)
    if (this.memoryCache.has(key)) {
      return this.memoryCache.get(key);
    }

    // L2: Redis cache
    const redisValue = await this.redisClient.get(key);
    if (redisValue) {
      const parsed = JSON.parse(redisValue);
      this.setMemoryCache(key, parsed);
      return parsed;
    }

    return null;
  }

  async set(key, value, ttl = 3600) {
    // Set in both caches
    this.setMemoryCache(key, value);
    await this.redisClient.setEx(key, ttl, JSON.stringify(value));
  }

  setMemoryCache(key, value) {
    // FIFO eviction (oldest inserted entry first), a cheap approximation of LRU
    if (this.memoryCache.size >= this.maxMemoryCacheSize) {
      const firstKey = this.memoryCache.keys().next().value;
      this.memoryCache.delete(firstKey);
    }
    this.memoryCache.set(key, value);
  }

  generateCacheKey(query, params) {
    return `query:${crypto
      .createHash("md5")
      .update(query + JSON.stringify(params))
      .digest("hex")}`;
  }
}

// Cached query execution
class CachedDatabase {
  constructor() {
    this.db = new DatabaseManager();
    this.cache = new QueryCache();
  }

  async query(sql, params = [], cacheTTL = 300) {
    const cacheKey = this.cache.generateCacheKey(sql, params);

    // Try cache first
    let result = await this.cache.get(cacheKey);
    if (result) {
      return result;
    }

    // Cache miss - execute query
    result = await this.db.executeQuery(sql, params);

    // Cache the result
    await this.cache.set(cacheKey, result, cacheTTL);

    return result;
  }
}
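
One gap in this sketch is invalidation: cached rows go stale after writes. A blunt but simple approach is to drop matching keys when a table changes; scanIterator is node-redis v4's non-blocking way to walk the keyspace. The per-table key prefix is an assumption layered on top of generateCacheKey:

// Hypothetical invalidation helper, assuming cache keys are prefixed
// per table, e.g. `query:users:<hash>`
async function invalidateTable(cache, table) {
  cache.memoryCache.clear(); // L1 is small; clearing it wholesale is fine
  for await (const key of cache.redisClient.scanIterator({
    MATCH: `query:${table}:*`,
    COUNT: 100,
  })) {
    await cache.redisClient.del(key);
  }
}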

Database Monitoring and Alerting

// Performance monitoring
class DatabaseMonitor {
  constructor() {
    this.metrics = {
      queryCount: 0,
      slowQueries: [],
      connectionPoolStats: {},
      cacheHitRatio: 0,
    };

    this.slowQueryThreshold = 1000; // 1 second
    this.monitoringInterval = setInterval(() => this.report(), 60000); // Every minute
  }

  logQuery(query, duration, params) {
    this.metrics.queryCount++;

    if (duration > this.slowQueryThreshold) {
      this.metrics.slowQueries.push({
        query,
        duration,
        params,
        timestamp: new Date(),
      });

      // Alert for slow queries
      this.alertSlowQuery(query, duration);
    }
  }

  alertSlowQuery(query, duration) {
    console.warn(`SLOW QUERY DETECTED: ${duration}ms`);
    console.warn(`Query: ${query}`);

    // Send to monitoring service (Datadog, New Relic, etc.)
    // this.monitoring.sendMetric('database.slow_query', duration, {
    //     query_hash: this.hashQuery(query)
    // });
  }

  updateConnectionPoolStats(poolId, stats) {
    this.metrics.connectionPoolStats[poolId] = {
      totalConnections: stats.totalCount,
      idleConnections: stats.idleCount,
      waitingClients: stats.waitingCount,
      timestamp: new Date(),
    };
  }

  updateCacheStats(hits, misses) {
    const total = hits + misses;
    this.metrics.cacheHitRatio = total > 0 ? hits / total : 0;
  }

  async report() {
    console.log("Database Performance Report:", {
      "Queries/min": this.metrics.queryCount,
      "Slow queries": this.metrics.slowQueries.length,
      "Cache hit ratio": (this.metrics.cacheHitRatio * 100).toFixed(2) + "%",
      "Connection pools": this.metrics.connectionPoolStats,
    });

    // Reset counters
    this.metrics.queryCount = 0;
    this.metrics.slowQueries = [];
  }
}

// Database wrapper with monitoring
class MonitoredDatabase {
  constructor() {
    this.db = new CachedDatabase();
    this.monitor = new DatabaseMonitor();
  }

  async query(sql, params = [], cacheTTL = 300) {
    const startTime = Date.now();

    try {
      const result = await this.db.query(sql, params, cacheTTL);
      const duration = Date.now() - startTime;

      this.monitor.logQuery(sql, duration, params);

      return result;
    } catch (error) {
      const duration = Date.now() - startTime;
      this.monitor.logQuery(sql, duration, params);
      throw error;
    }
  }
}

Advanced Performance Techniques

Database Sharding

// Consistent hashing for sharding
class DatabaseSharding {
    constructor(shards) {
        this.shards = shards; // Array of database connections
        this.virtualNodes = 150; // Virtual nodes per shard
        this.hashRing = this.buildHashRing();
    }

    buildHashRing() {
        const ring = new Map();

        this.shards.forEach((shard, shardIndex) => {
            for (let i = 0; i < this.virtualNodes; i++) {
                const hash = this.hash(`${shard.id}:${i}`);
                ring.set(hash, shardIndex);
            }
        });

        // Sort ring positions numerically (the default sort is lexicographic)
        return new Map([...ring.entries()].sort((a, b) => a[0] - b[0]));
    }

    hash(key) {
        let hash = 0;
        for (let i = 0; i < key.length; i++) {
            const char = key.charCodeAt(i);
            hash = ((hash << 5) - hash) + char;
            hash = hash & hash; // Convert to 32-bit integer
        }
        return hash;
    }

    getShardForKey(key) {
        const keyHash = this.hash(key);

        // Find the first hash >= keyHash
        for (const [hash, shardIndex] of this.hashRing) {
            if (hash >= keyHash) {
                return this.shards[shardIndex];
            }
        }

        // Wrap around to first shard
        return this.shards[this.hashRing.values().next().value];
    }

    async query(key, sql, params) {
        const shard = this.getShardForKey(key);
        return await shard.query(sql, params);
    }

    // Cross-shard queries (scatter-gather)
    async queryAllShards(sql, params) {
        const promises = this.shards.map(shard =>
            shard.query(sql, params).catch(err => ({ error: err }))
        );

        const results = await Promise.all(promises);

        // Combine results and filter errors
        const validResults = results.filter(result => !result.error);
        return validResults.flat();
    }
}

// Usage: each shard object wraps its own connection; the query bodies
// are stubs here
const sharding = new DatabaseSharding([
    { id: 'shard1', query: async (sql, params) => { /* run on shard 1's pool */ } },
    { id: 'shard2', query: async (sql, params) => { /* run on shard 2's pool */ } },
    { id: 'shard3', query: async (sql, params) => { /* run on shard 3's pool */ } }
]);

// Single shard query
const userPosts = await sharding.query(
    `user:${userId}`,
    'SELECT * FROM posts WHERE author_id = $1',
    [userId]
);

// Cross-shard aggregation
const allRecentPosts = await sharding.queryAllShards(
    'SELECT * FROM posts WHERE created_at > $1 ORDER BY created_at DESC LIMIT 10',
    [new Date(Date.now() - 24 * 60 * 60 * 1000)]
);

Database Replication Patterns

// Master-Slave with automatic failover
class ReplicatedDatabase {
  constructor() {
    this.master = new Pool({ ...config, host: "master" });
    this.slaves = [
      new Pool({ ...config, host: "slave1" }),
      new Pool({ ...config, host: "slave2" }),
    ];

    this.masterAvailable = true;
    this.healthCheckInterval = setInterval(() => this.healthCheck(), 5000);
  }

  async healthCheck() {
    try {
      await this.master.query("SELECT 1");
      if (!this.masterAvailable) {
        console.log("Master database recovered");
        this.masterAvailable = true;
      }
    } catch (error) {
      if (this.masterAvailable) {
        console.error("Master database failed:", error.message);
        this.masterAvailable = false;
      }
    }
  }

  async read(sql, params) {
    // Use slaves for read operations
    const slave = this.slaves[Math.floor(Math.random() * this.slaves.length)];
    return await slave.query(sql, params);
  }

  async write(sql, params) {
    if (!this.masterAvailable) {
      throw new Error("Master database unavailable");
    }

    try {
      return await this.master.query(sql, params);
    } catch (error) {
      this.masterAvailable = false;
      throw error;
    }
  }

  async transaction(callback) {
    if (!this.masterAvailable) {
      throw new Error("Master database unavailable for transactions");
    }

    const client = await this.master.connect();

    try {
      await client.query("BEGIN");
      const result = await callback(client);
      await client.query("COMMIT");
      return result;
    } catch (error) {
      await client.query("ROLLBACK");
      throw error;
    } finally {
      client.release();
    }
  }
}
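
Using the transaction helper looks like this (the accounts table is illustrative); both statements commit or roll back together on the master:

// Both writes succeed or fail atomically
const db = new ReplicatedDatabase();
await db.transaction(async (client) => {
  await client.query(
    "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
    [100, 1]
  );
  await client.query(
    "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
    [100, 2]
  );
});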

Monitoring and Debugging Tools

Query Analysis Tools

-- PostgreSQL-specific monitoring queries

-- Top 10 slowest queries (requires the pg_stat_statements extension;
-- before PostgreSQL 13 the columns are named total_time, mean_time, etc.)
SELECT
    query,
    calls,
    total_exec_time,
    mean_exec_time,
    max_exec_time,
    stddev_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Index usage statistics
SELECT
    schemaname,
    tablename,
    indexname,
    idx_tup_read,
    idx_tup_fetch,
    idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

-- Table bloat analysis
SELECT
    schemaname,
    tablename,
    n_dead_tup,
    n_live_tup,
    ROUND(n_dead_tup * 100.0 / NULLIF(n_live_tup + n_dead_tup, 0), 2) as bloat_pct
FROM pg_stat_user_tables
WHERE n_dead_tup > 0
ORDER BY bloat_pct DESC;

-- Connection analysis
SELECT
    state,
    COUNT(*) as connection_count
FROM pg_stat_activity
GROUP BY state;

Application-Level Monitoring

// Comprehensive database metrics
class DatabaseMetrics {
  constructor() {
    this.prometheus = require("prom-client");
    this.setupMetrics();
  }

  setupMetrics() {
    this.queryDuration = new this.prometheus.Histogram({
      name: "db_query_duration_seconds",
      help: "Database query duration in seconds",
      labelNames: ["operation", "table", "status"],
      buckets: [0.01, 0.05, 0.1, 0.5, 1, 2, 5],
    });

    this.queryCounter = new this.prometheus.Counter({
      name: "db_queries_total",
      help: "Total number of database queries",
      labelNames: ["operation", "table", "status"],
    });

    this.connectionPoolGauge = new this.prometheus.Gauge({
      name: "db_connection_pool_size",
      help: "Current database connection pool size",
      labelNames: ["pool_name", "status"],
    });

    this.cacheHitRatio = new this.prometheus.Gauge({
      name: "db_cache_hit_ratio",
      help: "Database cache hit ratio",
      labelNames: ["cache_type"],
    });
  }

  recordQuery(operation, table, duration, success) {
    const status = success ? "success" : "error";

    this.queryDuration
      .labels(operation, table, status)
      .observe(duration / 1000);

    this.queryCounter.labels(operation, table, status).inc();
  }

  updateConnectionPool(poolName, active, idle, waiting) {
    this.connectionPoolGauge.labels(poolName, "active").set(active);
    this.connectionPoolGauge.labels(poolName, "idle").set(idle);
    this.connectionPoolGauge.labels(poolName, "waiting").set(waiting);
  }

  updateCacheHitRatio(cacheType, ratio) {
    this.cacheHitRatio.labels(cacheType).set(ratio);
  }
}
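
To make these metrics visible to a Prometheus scraper, expose the default registry over HTTP. A sketch assuming an Express app (the port and path are conventions, not requirements):

const express = require("express");
const promClient = require("prom-client");

const app = express();

// Prometheus scrapes this endpoint on its own schedule
app.get("/metrics", async (req, res) => {
  res.set("Content-Type", promClient.register.contentType);
  res.send(await promClient.register.metrics());
});

app.listen(9100);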

Best Practices and Checklists

Database Design Checklist

Schema Design

  • [ ] Proper normalization level (3NF for OLTP, denormalized for OLAP)
  • [ ] Appropriate data types (don't use VARCHAR(255) for everything)
  • [ ] Proper constraints (NOT NULL, CHECK, UNIQUE)
  • [ ] Meaningful naming conventions

Indexing Strategy

  • [ ] Primary keys on all tables
  • [ ] Foreign key indexes
  • [ ] Covering indexes for frequent queries (see the sketch after this list)
  • [ ] Partial indexes where applicable
  • [ ] Remove unused indexes
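
On the covering-index item: PostgreSQL's INCLUDE clause (v11+) lets a frequent query be answered from the index alone. A hedged example using the posts table from earlier:

// Covering index: title is stored in the index, so the query below can
// run as an index-only scan without touching the table heap
await pool.query(`
  CREATE INDEX IF NOT EXISTS idx_posts_author_covering
  ON posts (author_id, created_at DESC)
  INCLUDE (title)
`);
// Served from the index alone:
// SELECT title, created_at FROM posts WHERE author_id = $1 ORDER BY created_at DESC;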

Query Optimization

  • [ ] Avoid SELECT * in production code
  • [ ] Use LIMIT for large result sets
  • [ ] Proper JOIN types (INNER vs LEFT)
  • [ ] Filter as early as possible (push WHERE predicates down)

Connection Management

  • [ ] Connection pooling configured
  • [ ] Appropriate pool sizes
  • [ ] Connection timeouts set
  • [ ] Proper connection cleanup

Performance Monitoring Checklist

Metrics to Track

  • [ ] Query response times
  • [ ] Connection pool utilization
  • [ ] Cache hit ratios
  • [ ] Disk I/O metrics
  • [ ] Memory usage
  • [ ] Replication lag (if applicable)

Alerting Thresholds

  • [ ] Slow query alerts (>1s)
  • [ ] High connection usage (>80%)
  • [ ] Low cache hit ratio (<90%)
  • [ ] Disk space alerts (<20% free)
  • [ ] Replication lag alerts (>5s)
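
Encoded as configuration, these starting points might look like the following (tune the numbers to your workload):

// Alerting thresholds from the checklist above
const alertThresholds = {
  slowQueryMs: 1000, // slow query alert (>1s)
  maxConnectionUsage: 0.8, // high connection usage (>80%)
  minCacheHitRatio: 0.9, // low cache hit ratio (<90%)
  minDiskFreeRatio: 0.2, // disk space alert (<20% free)
  maxReplicationLagSec: 5, // replication lag alert (>5s)
};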

Conclusion and Forward-Looking Strategies

Database performance optimization is an ongoing process; keep these principles in mind:

1. Measure First, Optimize Second

Measure current performance and identify the bottlenecks before you optimize.

2. Start Simple, Scale Gradually

Start with simple solutions (indexes, query optimization) and move to complex ones (sharding, replication) only when they are needed.

3. Know Your Access Patterns

Understand your application's data access patterns and optimize for them.

4. Monitor Continuously

Monitor performance metrics continuously and catch problems proactively.

5. Plan for Growth

Plan not just for today's needs but for future growth as well.

Database performance is one of the cornerstones of your application's success. With the right strategies and tools, you can give your users a fast, reliable experience. At every optimization step, weigh your business needs against your technical constraints.

Remember: the best database is the one your users never notice.