Go and MinIO: Object Storage with an S3-Compatible API

MinIO is the most popular open source object storage server, compatible with the AWS S3 API. Written in Go, it offers high performance and is ideal for applications that need scalable storage without depending on proprietary cloud services.

In this guide, you will learn how to integrate MinIO with Go applications, from basic operations to advanced production patterns.

Table of Contents

  1. Why MinIO?
  2. Environment Setup
  3. AWS S3 SDK for Go
  4. Basic Operations
  5. Advanced Uploads
  6. Presigned URLs
  7. Streaming and Large Objects
  8. Production Patterns

Why MinIO?

Advantages

1. S3 compatibility: 100% compatible with the AWS S3 API. The same code works with MinIO, AWS S3, or any other S3-compatible storage.

2. Performance: written in Go and optimized for high throughput; published benchmarks show it competing with, and often beating, managed cloud offerings.

3. Cost: open source, with no egress/ingress charges. Ideal for self-hosting or hybrid cloud.

4. Kubernetes-native: an official Kubernetes operator makes deployment in containerized environments straightforward.

Use Cases

  • Backup and archiving: application data, logs, snapshots
  • Media storage: user images, videos, and documents
  • Data lake: storage for analytics and ML
  • Static assets: static files for web applications
  • CDN origin: origin server for content CDNs

Architecture

┌─────────────────────────────────────────────────────────┐
│                      Go Application                     │
│                       (AWS S3 SDK)                      │
└───────────────────────────┬─────────────────────────────┘
                            │ HTTP/HTTPS
┌───────────────────────────┴─────────────────────────────┐
│                       MinIO Server                      │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │   Bucket 1   │  │   Bucket 2   │  │   Bucket 3   │   │
│  │   (images)   │  │   (backups)  │  │   (logs)     │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
│                                                         │
│  Storage: Local / NFS / Distributed (Erasure Coding)    │
└─────────────────────────────────────────────────────────┘

Environment Setup

Docker Compose (Development)

# docker-compose.yml
version: '3.8'

services:
  minio:
    image: minio/minio:latest
    ports:
      - "9000:9000"  # API S3
      - "9001:9001"  # Console Web
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    command: server /data --console-address ":9001"
    volumes:
      - minio-data:/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

volumes:
  minio-data:

# Start
$ docker-compose up -d

# Open the web console
$ open http://localhost:9001
# Login: minioadmin / minioadmin
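
Before wiring up the SDK, you can confirm the server is live from Go, using the same health endpoint the container healthcheck above hits; a minimal sketch:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    // MinIO exposes a liveness probe on the S3 API port
    resp, err := http.Get("http://localhost:9000/minio/health/live")
    if err != nil {
        fmt.Println("MinIO is not reachable:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("MinIO health:", resp.Status)
}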

Kubernetes (Production)

# minio-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio:latest
        args:
        - server
        - /data
        - --console-address
        - ":9001"
        env:
        - name: MINIO_ROOT_USER
          valueFrom:
            secretKeyRef:
              name: minio-credentials
              key: access-key
        - name: MINIO_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: minio-credentials
              key: secret-key
        ports:
        - containerPort: 9000
        - containerPort: 9001
        volumeMounts:
        - name: storage
          mountPath: /data
        livenessProbe:
          httpGet:
            path: /minio/health/live
            port: 9000
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: minio-storage
---
apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  selector:
    app: minio
  ports:
  - port: 9000
    targetPort: 9000
    name: api
  - port: 9001
    targetPort: 9001
    name: console

AWS S3 SDK for Go

Installation

go get github.com/aws/aws-sdk-go-v2
go get github.com/aws/aws-sdk-go-v2/config
go get github.com/aws/aws-sdk-go-v2/credentials
go get github.com/aws/aws-sdk-go-v2/service/s3

Client Configuration

package storage

import (
    "context"
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

// MinIO configuration
type MinioConfig struct {
    Endpoint  string // e.g. "localhost:9000"
    AccessKey string
    SecretKey string
    UseSSL    bool
    Region    string
}

// NewClient creates an S3 client configured for MinIO
func NewClient(cfg MinioConfig) (*s3.Client, error) {
    scheme := "http"
    if cfg.UseSSL {
        scheme = "https"
    }

    // Load the base configuration with static credentials
    awsCfg, err := config.LoadDefaultConfig(
        context.Background(),
        config.WithRegion(cfg.Region),
        config.WithCredentialsProvider(
            credentials.NewStaticCredentialsProvider(
                cfg.AccessKey,
                cfg.SecretKey,
                "",
            ),
        ),
    )
    if err != nil {
        return nil, fmt.Errorf("failed to load AWS config: %w", err)
    }

    client := s3.NewFromConfig(awsCfg, func(o *s3.Options) {
        // Point the client at MinIO instead of AWS
        o.BaseEndpoint = aws.String(fmt.Sprintf("%s://%s", scheme, cfg.Endpoint))
        o.UsePathStyle = true // required for MinIO (no virtual-hosted buckets)
    })

    return client, nil
}

// Factory helpers for different environments
func NewDevClient() (*s3.Client, error) {
    return NewClient(MinioConfig{
        Endpoint:  "localhost:9000",
        AccessKey: "minioadmin",
        SecretKey: "minioadmin",
        UseSSL:    false,
        Region:    "us-east-1",
    })
}

func NewProductionClient() (*s3.Client, error) {
    // Loaded from environment variables
    return NewClient(MinioConfig{
        Endpoint:  getEnv("MINIO_ENDPOINT", "minio:9000"),
        AccessKey: getEnv("MINIO_ACCESS_KEY", ""),
        SecretKey: getEnv("MINIO_SECRET_KEY", ""),
        UseSSL:    getEnvBool("MINIO_USE_SSL", true),
        Region:    getEnv("MINIO_REGION", "us-east-1"),
    })
}
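
The getEnv and getEnvBool helpers referenced above are not part of the AWS SDK; a minimal sketch of them in the same storage package could look like this:

package storage

import (
    "os"
    "strconv"
)

// getEnv returns the value of an environment variable, or def if unset.
func getEnv(key, def string) string {
    if v := os.Getenv(key); v != "" {
        return v
    }
    return def
}

// getEnvBool parses a boolean environment variable ("true", "1", ...),
// falling back to def when unset or invalid.
func getEnvBool(key string, def bool) bool {
    v, err := strconv.ParseBool(os.Getenv(key))
    if err != nil {
        return def
    }
    return v
}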

Basic Operations

Bucket Management

package storage

import (
    "context"
    "errors"
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

type BucketManager struct {
    client *s3.Client
}

func NewBucketManager(client *s3.Client) *BucketManager {
    return &BucketManager{client: client}
}

// CreateBucket creates a new bucket
func (bm *BucketManager) CreateBucket(ctx context.Context, name string) error {
    // Note: for us-east-1 (MinIO's default region) CreateBucketConfiguration
    // must be omitted; other AWS regions require a LocationConstraint.
    _, err := bm.client.CreateBucket(ctx, &s3.CreateBucketInput{
        Bucket: aws.String(name),
    })
    if err != nil {
        return fmt.Errorf("failed to create bucket: %w", err)
    }
    return nil
}

// ListBuckets lists all buckets
func (bm *BucketManager) ListBuckets(ctx context.Context) ([]string, error) {
    result, err := bm.client.ListBuckets(ctx, &s3.ListBucketsInput{})
    if err != nil {
        return nil, fmt.Errorf("falha ao listar buckets: %w", err)
    }

    var buckets []string
    for _, b := range result.Buckets {
        buckets = append(buckets, aws.ToString(b.Name))
    }
    return buckets, nil
}

// DeleteBucket removes a bucket (it must be empty)
func (bm *BucketManager) DeleteBucket(ctx context.Context, name string) error {
    _, err := bm.client.DeleteBucket(ctx, &s3.DeleteBucketInput{
        Bucket: aws.String(name),
    })
    if err != nil {
        return fmt.Errorf("falha ao deletar bucket: %w", err)
    }
    return nil
}

// BucketExists checks whether a bucket exists
func (bm *BucketManager) BucketExists(ctx context.Context, name string) (bool, error) {
    _, err := bm.client.HeadBucket(ctx, &s3.HeadBucketInput{
        Bucket: aws.String(name),
    })
    if err != nil {
        // Distinguish "not found" from real errors (network, auth, ...)
        var notFound *types.NotFound
        if errors.As(err, &notFound) {
            return false, nil
        }
        return false, err
    }
    return true, nil
}

// SetBucketPolicy sets the bucket access policy
func (bm *BucketManager) SetBucketPolicy(ctx context.Context, name, policy string) error {
    _, err := bm.client.PutBucketPolicy(ctx, &s3.PutBucketPolicyInput{
        Bucket: aws.String(name),
        Policy: aws.String(policy),
    })
    return err
}

// EnableVersioning enables object versioning
func (bm *BucketManager) EnableVersioning(ctx context.Context, name string) error {
    _, err := bm.client.PutBucketVersioning(ctx, &s3.PutBucketVersioningInput{
        Bucket: aws.String(name),
        VersioningConfiguration: &types.VersioningConfiguration{
            Status: types.BucketVersioningStatusEnabled,
        },
    })
    return err
}

// SetLifecyclePolicy configures a lifecycle policy
func (bm *BucketManager) SetLifecyclePolicy(ctx context.Context, name string) error {
    _, err := bm.client.PutBucketLifecycleConfiguration(ctx, &s3.PutBucketLifecycleConfigurationInput{
        Bucket: aws.String(name),
        LifecycleConfiguration: &types.BucketLifecycleConfiguration{
            Rules: []types.LifecycleRule{
                {
                    ID:     aws.String("delete-old-files"),
                    Status: types.ExpirationStatusEnabled,
                    Filter: &types.LifecycleRuleFilter{
                        Prefix: aws.String("temp/"),
                    },
                    Expiration: &types.LifecycleExpiration{
                        Days: aws.Int32(7),
                    },
                },
            },
        },
    })
    return err
}
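
As an example of SetBucketPolicy, a standard S3 policy document can grant anonymous read access to a prefix. A sketch (the bucket name and prefix are illustrative):

// Allow anonymous read access to objects under public/
policy := `{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my-bucket/public/*"]
    }]
}`

if err := bm.SetBucketPolicy(ctx, "my-bucket", policy); err != nil {
    return err
}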

Object Operations

package storage

import (
    "bytes"
    "context"
    "fmt"
    "io"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

type ObjectStorage struct {
    client *s3.Client
}

func NewObjectStorage(client *s3.Client) *ObjectStorage {
    return &ObjectStorage{client: client}
}

// Upload uploads a byte slice as an object
func (s *ObjectStorage) Upload(
    ctx context.Context,
    bucket, key string,
    data []byte,
    contentType string,
) error {
    _, err := s.client.PutObject(ctx, &s3.PutObjectInput{
        Bucket:      aws.String(bucket),
        Key:         aws.String(key),
        Body:        bytes.NewReader(data),
        ContentType: aws.String(contentType),
    })
    if err != nil {
        return fmt.Errorf("falha no upload: %w", err)
    }
    return nil
}

// UploadWithMetadata uploads with custom metadata attached
func (s *ObjectStorage) UploadWithMetadata(
    ctx context.Context,
    bucket, key string,
    data []byte,
    contentType string,
    metadata map[string]string,
) error {
    _, err := s.client.PutObject(ctx, &s3.PutObjectInput{
        Bucket:      aws.String(bucket),
        Key:         aws.String(key),
        Body:        bytes.NewReader(data),
        ContentType: aws.String(contentType),
        Metadata:    metadata,
    })
    return err
}

// Download downloads an object into memory
func (s *ObjectStorage) Download(
    ctx context.Context,
    bucket, key string,
) ([]byte, error) {
    result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    if err != nil {
        return nil, fmt.Errorf("falha no download: %w", err)
    }
    defer result.Body.Close()

    data, err := io.ReadAll(result.Body)
    if err != nil {
        return nil, fmt.Errorf("falha ao ler dados: %w", err)
    }

    return data, nil
}

// DownloadToWriter streams an object into an io.Writer
func (s *ObjectStorage) DownloadToWriter(
    ctx context.Context,
    bucket, key string,
    w io.Writer,
) error {
    result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    if err != nil {
        return err
    }
    defer result.Body.Close()

    _, err = io.Copy(w, result.Body)
    return err
}

// Delete removes an object
func (s *ObjectStorage) Delete(ctx context.Context, bucket, key string) error {
    _, err := s.client.DeleteObject(ctx, &s3.DeleteObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    return err
}

// DeleteMultiple removes several objects in a single request
func (s *ObjectStorage) DeleteMultiple(
    ctx context.Context,
    bucket string,
    keys []string,
) error {
    var objects []types.ObjectIdentifier
    for _, key := range keys {
        objects = append(objects, types.ObjectIdentifier{Key: aws.String(key)})
    }

    _, err := s.client.DeleteObjects(ctx, &s3.DeleteObjectsInput{
        Bucket: aws.String(bucket),
        Delete: &types.Delete{
            Objects: objects,
        },
    })
    return err
}

// List lists objects in a bucket, handling pagination
func (s *ObjectStorage) List(
    ctx context.Context,
    bucket, prefix string,
) ([]ObjectInfo, error) {
    var objects []ObjectInfo

    paginator := s3.NewListObjectsV2Paginator(s.client, &s3.ListObjectsV2Input{
        Bucket: aws.String(bucket),
        Prefix: aws.String(prefix),
    })

    for paginator.HasMorePages() {
        page, err := paginator.NextPage(ctx)
        if err != nil {
            return nil, err
        }

        for _, obj := range page.Contents {
            objects = append(objects, ObjectInfo{
                Key:          aws.ToString(obj.Key),
                Size:         aws.ToInt64(obj.Size),
                LastModified: aws.ToTime(obj.LastModified),
                ETag:         aws.ToString(obj.ETag),
            })
        }
    }

    return objects, nil
}

type ObjectInfo struct {
    Key          string
    Size         int64
    LastModified time.Time
    ETag         string
}

// HeadObject fetches object metadata without downloading the body
func (s *ObjectStorage) HeadObject(
    ctx context.Context,
    bucket, key string,
) (*ObjectInfo, error) {
    result, err := s.client.HeadObject(ctx, &s3.HeadObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    if err != nil {
        return nil, err
    }

    return &ObjectInfo{
        Key:          key,
        Size:         aws.ToInt64(result.ContentLength),
        LastModified: aws.ToTime(result.LastModified),
        ETag:         aws.ToString(result.ETag),
    }, nil
}
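
Putting the pieces together, a minimal end-to-end sketch (the "images" bucket is illustrative, and myapp/storage matches the import path the handler example later in this guide also assumes):

package main

import (
    "context"
    "fmt"
    "log"

    "myapp/storage"
)

func main() {
    ctx := context.Background()

    client, err := storage.NewDevClient()
    if err != nil {
        log.Fatal(err)
    }
    objects := storage.NewObjectStorage(client)

    // Upload a small text object
    err = objects.Upload(ctx, "images", "hello.txt", []byte("hello MinIO"), "text/plain")
    if err != nil {
        log.Fatal(err)
    }

    // Read it back
    data, err := objects.Download(ctx, "images", "hello.txt")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(data))
}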

Advanced Uploads

Multipart Upload

package storage

import (
    "context"
    "fmt"
    "io"
    "os"
    "path/filepath"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

// UploadLargeFile uploads a large file using multipart upload.
// Note: the receiver must not be named "os", or it would shadow the os package.
func (s *ObjectStorage) UploadLargeFile(
    ctx context.Context,
    bucket, key, filePath string,
) error {
    file, err := os.Open(filePath)
    if err != nil {
        return err
    }
    defer file.Close()

    // Configure the uploader's part size and concurrency
    uploader := manager.NewUploader(s.client, func(u *manager.Uploader) {
        u.PartSize = 64 * 1024 * 1024 // 64MB per part
        u.Concurrency = 5             // 5 parts in parallel
    })

    stat, err := file.Stat()
    if err != nil {
        return err
    }

    // Progress reporting
    progress := &progressReader{
        reader:   file,
        size:     stat.Size(),
        progress: make(chan int64),
    }

    // Goroutine that prints progress updates
    go func() {
        for p := range progress.progress {
            percentage := float64(p) / float64(stat.Size()) * 100
            fmt.Printf("Progress: %.2f%%\n", percentage)
        }
    }()

    _, err = uploader.Upload(ctx, &s3.PutObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
        Body:   progress,
    })
    close(progress.progress) // stop the progress goroutine

    return err
}

type progressReader struct {
    reader   io.Reader
    size     int64
    read     int64
    progress chan int64
}

func (pr *progressReader) Read(p []byte) (int, error) {
    n, err := pr.reader.Read(p)
    pr.read += int64(n)
    
    select {
    case pr.progress <- pr.read:
    default:
    }
    
    return n, err
}

// UploadDirectory recursively uploads a directory
func (s *ObjectStorage) UploadDirectory(
    ctx context.Context,
    bucket, prefix, dirPath string,
) error {
    return filepath.Walk(dirPath, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }

        if info.IsDir() {
            return nil
        }

        // Compute the key relative to the directory root
        relPath, err := filepath.Rel(dirPath, path)
        if err != nil {
            return err
        }
        key := filepath.Join(prefix, relPath)

        // Normalize to forward slashes for S3 keys
        key = filepath.ToSlash(key)

        return s.UploadLargeFile(ctx, bucket, key, path)
    })
}
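
Usage is then a one-liner; for example (bucket and paths are illustrative):

// Mirror ./public into the assets bucket under the static/ prefix
if err := objects.UploadDirectory(ctx, "assets", "static", "./public"); err != nil {
    log.Fatal(err)
}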

Presigned URLs

Temporary URLs

package storage

import (
    "context"
    "fmt"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

// PresignedURLGenerator generates temporary (presigned) URLs
type PresignedURLGenerator struct {
    presignClient *s3.PresignClient
}

func NewPresignedURLGenerator(client *s3.Client) *PresignedURLGenerator {
    return &PresignedURLGenerator{
        presignClient: s3.NewPresignClient(client),
    }
}

// GetDownloadURL generates a presigned download (GET) URL
func (p *PresignedURLGenerator) GetDownloadURL(
    ctx context.Context,
    bucket, key string,
    expiry time.Duration,
) (string, error) {
    req, err := p.presignClient.PresignGetObject(ctx, &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    }, s3.WithPresignExpires(expiry))
    
    if err != nil {
        return "", fmt.Errorf("falha ao gerar URL: %w", err)
    }
    
    return req.URL, nil
}

// GetUploadURL generates a presigned direct-upload (PUT) URL
func (p *PresignedURLGenerator) GetUploadURL(
    ctx context.Context,
    bucket, key string,
    expiry time.Duration,
    contentType string,
) (string, error) {
    req, err := p.presignClient.PresignPutObject(ctx, &s3.PutObjectInput{
        Bucket:      aws.String(bucket),
        Key:         aws.String(key),
        ContentType: aws.String(contentType),
    }, s3.WithPresignExpires(expiry))
    
    if err != nil {
        return "", err
    }
    
    return req.URL, nil
}

// GetMultipartUploadURL starts a multipart upload and presigns a URL for one part.
// CreateMultipartUpload itself is a regular (authenticated) call; only the
// individual part uploads are presigned, via PresignUploadPart.
func (p *PresignedURLGenerator) GetMultipartUploadURL(
    ctx context.Context,
    client *s3.Client,
    bucket, key string,
    partNumber int32,
    expiry time.Duration,
) (uploadID string, url string, err error) {
    // Create the multipart upload to obtain the upload ID
    create, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    if err != nil {
        return "", "", err
    }

    // Presign the upload of a single part; repeat per part as needed
    req, err := p.presignClient.PresignUploadPart(ctx, &s3.UploadPartInput{
        Bucket:     aws.String(bucket),
        Key:        aws.String(key),
        UploadId:   create.UploadId,
        PartNumber: aws.Int32(partNumber),
    }, s3.WithPresignExpires(expiry))
    if err != nil {
        return "", "", err
    }

    return aws.ToString(create.UploadId), req.URL, nil
}

HTTP Handler for Direct Uploads

package handlers

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"

    "myapp/storage"
)

type UploadHandler struct {
    urlGenerator *storage.PresignedURLGenerator
    bucket       string
}

func NewUploadHandler(generator *storage.PresignedURLGenerator, bucket string) *UploadHandler {
    return &UploadHandler{
        urlGenerator: generator,
        bucket:       bucket,
    }
}

func (h *UploadHandler) GenerateUploadURL(w http.ResponseWriter, r *http.Request) {
    var req struct {
        Filename    string `json:"filename"`
        ContentType string `json:"content_type"`
    }

    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }

    // Generate a unique key
    key := fmt.Sprintf("uploads/%d-%s", time.Now().Unix(), req.Filename)

    url, err := h.urlGenerator.GetUploadURL(
        r.Context(),
        h.bucket,
        key,
        15*time.Minute,
        req.ContentType,
    )
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    resp := struct {
        URL         string `json:"url"`
        Key         string `json:"key"`
        ExpiresIn   int    `json:"expires_in_seconds"`
    }{
        URL:       url,
        Key:       key,
        ExpiresIn: 900, // 15 minutes
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(resp)
}
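
On the client side, the returned URL accepts a plain HTTP PUT. A minimal sketch of the consumer (function name and wiring are illustrative):

package client

import (
    "bytes"
    "fmt"
    "net/http"
)

// uploadToPresignedURL sends data to a presigned PUT URL.
func uploadToPresignedURL(url string, data []byte, contentType string) error {
    // bytes.Reader gives the request a Content-Length, which S3 expects
    req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(data))
    if err != nil {
        return err
    }
    // Must match the Content-Type the URL was signed with
    req.Header.Set("Content-Type", contentType)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("upload failed: %s", resp.Status)
    }
    return nil
}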

Streaming and Large Objects

Streaming Upload

package storage

import (
    "context"
    "io"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

// StreamingUpload uploads from an io.Reader of unknown length
func (s *ObjectStorage) StreamingUpload(
    ctx context.Context,
    bucket, key string,
    reader io.Reader,
    contentType string,
) error {
    uploader := manager.NewUploader(s.client)

    _, err := uploader.Upload(ctx, &s3.PutObjectInput{
        Bucket:      aws.String(bucket),
        Key:         aws.String(key),
        Body:        reader,
        ContentType: aws.String(contentType),
    })

    return err
}

// StreamDownload streams an object into an io.Writer (e.g. an HTTP response)
func (s *ObjectStorage) StreamDownload(
    ctx context.Context,
    bucket, key string,
    w io.Writer,
) (int64, error) {
    result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    if err != nil {
        return 0, err
    }
    defer result.Body.Close()

    return io.Copy(w, result.Body)
}
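
Combined with net/http, this serves objects to clients without buffering them in memory. A minimal sketch (DownloadHandler and its wiring are assumptions, not part of the storage package above):

package handlers

import (
    "net/http"

    "myapp/storage"
)

type DownloadHandler struct {
    store  *storage.ObjectStorage
    bucket string
}

func (h *DownloadHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    key := r.URL.Query().Get("key")
    if key == "" {
        http.Error(w, "missing key", http.StatusBadRequest)
        return
    }

    w.Header().Set("Content-Type", "application/octet-stream")
    // Copies directly from MinIO to the response body
    if _, err := h.store.StreamDownload(r.Context(), h.bucket, key, w); err != nil {
        // Headers may already have been written; log in real code
        http.Error(w, "download failed", http.StatusInternalServerError)
    }
}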

Production Patterns

1. Circuit Breaker

package storage

import (
    "fmt"
    "sync"
    "time"
)

type CircuitBreaker struct {
    failures     int
    threshold    int
    timeout      time.Duration
    lastFailure  time.Time
    state        State
    mutex        sync.RWMutex
}

type State int

const (
    StateClosed State = iota
    StateOpen
    StateHalfOpen
)

func NewCircuitBreaker(threshold int, timeout time.Duration) *CircuitBreaker {
    return &CircuitBreaker{
        threshold: threshold,
        timeout:   timeout,
        state:     StateClosed,
    }
}

func (cb *CircuitBreaker) Call(fn func() error) error {
    if cb.isOpen() {
        return fmt.Errorf("circuit breaker is open")
    }

    err := fn()
    cb.recordResult(err)
    return err
}

func (cb *CircuitBreaker) isOpen() bool {
    cb.mutex.Lock()
    defer cb.mutex.Unlock()

    if cb.state == StateOpen {
        // After the timeout, allow a single trial request (half-open)
        if time.Since(cb.lastFailure) > cb.timeout {
            cb.state = StateHalfOpen
            return false
        }
        return true
    }
    return false
}

func (cb *CircuitBreaker) recordResult(err error) {
    cb.mutex.Lock()
    defer cb.mutex.Unlock()

    if err == nil {
        cb.failures = 0
        cb.state = StateClosed
        return
    }

    cb.failures++
    cb.lastFailure = time.Now()

    if cb.failures >= cb.threshold {
        cb.state = StateOpen
    }
}
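
Usage wraps each storage call; a sketch (thresholds are illustrative):

cb := NewCircuitBreaker(5, 30*time.Second) // open after 5 failures, retry after 30s

err := cb.Call(func() error {
    return objects.Upload(ctx, "images", "avatar.png", data, "image/png")
})
if err != nil {
    // Either the upload failed or the breaker is open
    log.Println("upload rejected:", err)
}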

2. Retry with Backoff

// UploadWithRetry retries an upload with exponential backoff
// (also requires the "math" and "time" imports).
func (s *ObjectStorage) UploadWithRetry(
    ctx context.Context,
    bucket, key string,
    data []byte,
    maxRetries int,
) error {
    var err error

    for attempt := 0; attempt < maxRetries; attempt++ {
        err = s.Upload(ctx, bucket, key, data, "application/octet-stream")
        if err == nil {
            return nil
        }

        // Give up immediately on non-retryable errors
        if !isRetryableError(err) {
            return err
        }

        // Exponential backoff: 100ms, 200ms, 400ms, ...
        delay := time.Duration(math.Pow(2, float64(attempt))) * 100 * time.Millisecond
        time.Sleep(delay)
    }

    return fmt.Errorf("failed after %d attempts: %w", maxRetries, err)
}
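
The isRetryableError helper is not defined by the SDK; one hedged way to implement it is to inspect the smithy error code (the code list here is illustrative, not exhaustive):

import (
    "errors"

    "github.com/aws/smithy-go"
)

// isRetryableError reports whether an S3 error is worth retrying.
func isRetryableError(err error) bool {
    var apiErr smithy.APIError
    if errors.As(err, &apiErr) {
        switch apiErr.ErrorCode() {
        case "SlowDown", "InternalError", "RequestTimeout", "ServiceUnavailable":
            return true
        }
        return false
    }
    // Non-API errors (network timeouts, etc.) are retried
    return true
}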

3. Connection Pool and Cache

type StorageService struct {
    client        *s3.Client
    bucket        string
    metadataCache *ristretto.Cache // metadata cache (github.com/dgraph-io/ristretto)
}

func NewStorageService(client *s3.Client, bucket string) (*StorageService, error) {
    cache, err := ristretto.NewCache(&ristretto.Config{
        NumCounters: 1e7,     // 10M
        MaxCost:     1 << 30, // 1GB
        BufferItems: 64,
    })
    if err != nil {
        return nil, err
    }

    return &StorageService{
        client:        client,
        bucket:        bucket,
        metadataCache: cache,
    }, nil
}

func (s *StorageService) GetObjectInfo(ctx context.Context, key string) (*ObjectInfo, error) {
    // Try the cache first
    if cached, found := s.metadataCache.Get(key); found {
        return cached.(*ObjectInfo), nil
    }

    // Fetch from storage (e.g. via HeadObject)
    info, err := s.getObjectInfoFromStorage(ctx, key)
    if err != nil {
        return nil, err
    }

    // Cache for 5 minutes
    s.metadataCache.SetWithTTL(key, info, 1, 5*time.Minute)

    return info, nil
}
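
Cached metadata goes stale after writes, so deletes (and overwrites) should drop the entry. A sketch against the service above:

func (s *StorageService) Delete(ctx context.Context, key string) error {
    _, err := s.client.DeleteObject(ctx, &s3.DeleteObjectInput{
        Bucket: aws.String(s.bucket),
        Key:    aws.String(key),
    })
    if err != nil {
        return err
    }

    // Drop the now-stale metadata entry
    s.metadataCache.Del(key)
    return nil
}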

Conclusion

In this guide, you learned:

✅ Setup: MinIO with Docker and Kubernetes
✅ AWS SDK: an S3 client configured for MinIO
✅ Operations: buckets, uploads, downloads, listing
✅ Advanced: multipart uploads, presigned URLs
✅ Streaming: large objects and streaming transfers
✅ Production: circuit breaker, retry, cache

Next Steps

  1. Go and Terraform - Infrastructure as code
  2. Go and Prometheus - Monitoring
  3. Go Observability - Logs, metrics, and traces

FAQ

Q: Can I use the same code for AWS S3 and MinIO? A: Yes! The AWS S3 SDK works with both; just configure the endpoint accordingly.

Q: What is the file size limit? A: The S3 API caps a single object at 5TB. For files larger than about 100MB, use multipart upload.

Q: Are presigned URLs safe to use? A: Yes, as long as you serve them over HTTPS and keep the expiry short (15-60 minutes).

Q: How do I ensure durability? A: Use erasure coding (distributed mode) or replication across multiple drives.