Microservices have transformed how we build scalable systems. Go is a language of choice for microservices at companies like Netflix and Uber, and it powers infrastructure projects such as Kubernetes. In this guide you will learn how to architect, build, and operate robust microservices in Go.
Why Go Is a Great Fit for Microservices
The Perfect Match
┌──────────────────┬────────────────────────────────────────┐
│ MICROSERVICE     │ GO                                     │
├──────────────────┼────────────────────────────────────────┤
│ Light and fast   │ Single binary, startup < 100ms         │
│ Scalable         │ Goroutines (millions concurrently)     │
│ Reliable         │ Strong typing, no heavy runtime        │
│ Efficient        │ Minimal memory (10-50MB per service)   │
│ Portable         │ Cross-compilation, Docker-friendly     │
│ Simple           │ Explicit code, easy maintenance        │
└──────────────────┴────────────────────────────────────────┘
Comparison with Other Languages
| Aspect | Go | Java | Node.js | Python |
|---|---|---|---|---|
| Startup | ~50ms | ~5s | ~2s | ~1s |
| Memory | 10-50MB | 200-500MB | 100-300MB | 150-400MB |
| Concurrency | Native (goroutines) | Heavy threads | Event loop/callbacks | Limited by the GIL |
| Binary | Single static binary | Requires JVM | Requires Node runtime | Requires interpreter |
| Performance | Native | Good (JIT) | Moderate | Slower |
Architecture of a Go Microservice
Recommended Project Layout
order-service/
├── cmd/
│   └── api/
│       └── main.go            # Entry point
├── internal/
│   ├── domain/
│   │   ├── order.go           # Entities
│   │   └── errors.go          # Domain errors
│   ├── application/
│   │   ├── service.go         # Business logic
│   │   └── dto.go             # Data Transfer Objects
│   ├── infrastructure/
│   │   ├── http/
│   │   │   ├── handler.go     # HTTP handlers
│   │   │   ├── router.go      # Route configuration
│   │   │   └── middleware.go  # Middlewares
│   │   ├── persistence/
│   │   │   ├── postgres.go    # Repository implementation
│   │   │   └── redis.go       # Cache
│   │   └── messaging/
│   │       └── kafka.go       # Event publisher
│   └── config/
│       └── config.go          # Configuration
├── pkg/
│   └── logger/                # Shared packages
├── api/
│   └── proto/                 # Protocol Buffers
├── deployments/
│   ├── docker/
│   └── k8s/
├── go.mod
└── Makefile
Complete Implementation
// internal/domain/order.go
package domain

import (
	"errors"
	"time"
)

var (
	ErrInvalidAmount = errors.New("invalid order amount")
	ErrOrderNotFound = errors.New("order not found")
)

type Order struct {
	ID        string    `json:"id"`
	UserID    string    `json:"user_id"`
	Items     []Item    `json:"items"`
	Total     float64   `json:"total"`
	Status    Status    `json:"status"`
	CreatedAt time.Time `json:"created_at"`
}

type Item struct {
	ProductID string  `json:"product_id"`
	Quantity  int     `json:"quantity"`
	Price     float64 `json:"price"`
}

type Status string

const (
	StatusPending   Status = "PENDING"
	StatusPaid      Status = "PAID"
	StatusShipped   Status = "SHIPPED"
	StatusDelivered Status = "DELIVERED"
)

func (o *Order) CalculateTotal() {
	var total float64
	for _, item := range o.Items {
		total += item.Price * float64(item.Quantity)
	}
	o.Total = total
}

func (o *Order) Validate() error {
	if o.Total <= 0 {
		return ErrInvalidAmount
	}
	return nil
}
// internal/application/service.go
package application

import (
	"context"
	"fmt"
	"log"
	"time"

	"order-service/internal/domain"
)

// Ports (interfaces) for the service's dependencies
type OrderRepository interface {
	Save(ctx context.Context, order *domain.Order) error
	GetByID(ctx context.Context, id string) (*domain.Order, error)
	Update(ctx context.Context, order *domain.Order) error
}

type PaymentService interface {
	Process(ctx context.Context, orderID string, amount float64) error
}

type EventPublisher interface {
	PublishOrderCreated(ctx context.Context, order *domain.Order) error
}

type OrderService struct {
	repo      OrderRepository
	payment   PaymentService
	publisher EventPublisher
}

func NewOrderService(
	repo OrderRepository,
	payment PaymentService,
	publisher EventPublisher,
) *OrderService {
	return &OrderService{
		repo:      repo,
		payment:   payment,
		publisher: publisher,
	}
}

func (s *OrderService) CreateOrder(ctx context.Context, userID string, items []domain.Item) (*domain.Order, error) {
	order := &domain.Order{
		ID:        generateID(),
		UserID:    userID,
		Items:     items,
		Status:    domain.StatusPending,
		CreatedAt: time.Now(),
	}
	order.CalculateTotal()
	if err := order.Validate(); err != nil {
		return nil, err
	}

	// Persist the order
	if err := s.repo.Save(ctx, order); err != nil {
		return nil, fmt.Errorf("failed to save order: %w", err)
	}

	// Publish the event asynchronously, without blocking the request on the broker
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		if err := s.publisher.PublishOrderCreated(ctx, order); err != nil {
			log.Printf("failed to publish order.created for %s: %v", order.ID, err)
		}
	}()

	return order, nil
}

func (s *OrderService) ProcessPayment(ctx context.Context, orderID string) error {
	order, err := s.repo.GetByID(ctx, orderID)
	if err != nil {
		return err
	}
	if err := s.payment.Process(ctx, orderID, order.Total); err != nil {
		return fmt.Errorf("payment failed: %w", err)
	}
	order.Status = domain.StatusPaid
	return s.repo.Update(ctx, order)
}

func generateID() string {
	return fmt.Sprintf("ORD-%d", time.Now().UnixNano())
}
Communication Patterns
1. Synchronous: HTTP REST
Already covered in the REST APIs guide. For microservices, add a typed client:
// pkg/client/client.go
package client

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type ServiceClient struct {
	baseURL    string
	httpClient *http.Client
}

func NewServiceClient(baseURL string) *ServiceClient {
	return &ServiceClient{
		baseURL: baseURL,
		httpClient: &http.Client{
			Timeout: 10 * time.Second,
		},
	}
}

func (c *ServiceClient) Get(ctx context.Context, path string, result interface{}) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, c.baseURL+path, nil)
	if err != nil {
		return err
	}
	resp, err := c.httpClient.Do(req)
	if err != nil {
		return fmt.Errorf("request failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP %d", resp.StatusCode)
	}
	if result == nil {
		return nil
	}
	return json.NewDecoder(resp.Body).Decode(result)
}

func (c *ServiceClient) Post(ctx context.Context, path string, body, result interface{}) error {
	jsonBody, err := json.Marshal(body)
	if err != nil {
		return fmt.Errorf("failed to encode request body: %w", err)
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, c.baseURL+path, bytes.NewBuffer(jsonBody))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := c.httpClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP %d", resp.StatusCode)
	}
	if result != nil {
		return json.NewDecoder(resp.Body).Decode(result)
	}
	return nil
}
2. Synchronous: gRPC
More efficient than HTTP/JSON for service-to-service communication:
// api/proto/order.proto
syntax = "proto3";

package order;

option go_package = "order-service/api/proto";

service OrderService {
  rpc CreateOrder(CreateOrderRequest) returns (Order);
  rpc GetOrder(GetOrderRequest) returns (Order);
  rpc StreamOrders(StreamOrdersRequest) returns (stream Order);
}

message CreateOrderRequest {
  string user_id = 1;
  repeated Item items = 2;
}

message Item {
  string product_id = 1;
  int32 quantity = 2;
  double price = 3;
}

message Order {
  string id = 1;
  string user_id = 2;
  double total = 3;
  string status = 4;
}
// gRPC server implementation
package grpc

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"

	"order-service/api/proto"
	"order-service/internal/application"
	"order-service/internal/domain"
)

type Server struct {
	proto.UnimplementedOrderServiceServer
	service *application.OrderService
}

func NewServer(service *application.OrderService) *Server {
	return &Server{service: service}
}

func (s *Server) CreateOrder(ctx context.Context, req *proto.CreateOrderRequest) (*proto.Order, error) {
	// Convert proto to domain
	items := make([]domain.Item, len(req.Items))
	for i, item := range req.Items {
		items[i] = domain.Item{
			ProductID: item.ProductId,
			Quantity:  int(item.Quantity),
			Price:     item.Price,
		}
	}
	order, err := s.service.CreateOrder(ctx, req.UserId, items)
	if err != nil {
		return nil, err
	}
	return &proto.Order{
		Id:     order.ID,
		UserId: order.UserID,
		Total:  order.Total,
		Status: string(order.Status),
	}, nil
}

func StartGRPCServer(service *application.OrderService, port string) error {
	lis, err := net.Listen("tcp", ":"+port)
	if err != nil {
		return err
	}
	grpcServer := grpc.NewServer(
		grpc.UnaryInterceptor(loggingInterceptor),
	)
	proto.RegisterOrderServiceServer(grpcServer, NewServer(service))
	return grpcServer.Serve(lis)
}

func loggingInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	start := time.Now()
	h, err := handler(ctx, req)
	log.Printf("gRPC %s took %v", info.FullMethod, time.Since(start))
	return h, err
}
3. Asynchronous: Events with a Message Broker
// internal/infrastructure/messaging/kafka.go
package messaging

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/segmentio/kafka-go"

	"order-service/internal/domain"
)

type KafkaPublisher struct {
	writer *kafka.Writer
}

func NewKafkaPublisher(brokers []string) *KafkaPublisher {
	return &KafkaPublisher{
		writer: &kafka.Writer{
			Addr:     kafka.TCP(brokers...),
			Balancer: &kafka.LeastBytes{},
		},
	}
}

func (k *KafkaPublisher) PublishOrderCreated(ctx context.Context, order *domain.Order) error {
	event := map[string]interface{}{
		"event_type": "order.created",
		"order_id":   order.ID,
		"user_id":    order.UserID,
		"total":      order.Total,
		"timestamp":  order.CreatedAt,
	}
	payload, err := json.Marshal(event)
	if err != nil {
		return fmt.Errorf("failed to marshal event: %w", err)
	}
	return k.writer.WriteMessages(ctx, kafka.Message{
		Topic: "orders",
		Key:   []byte(order.ID),
		Value: payload,
	})
}
// Consumer
type OrderEvent struct {
	EventType string  `json:"event_type"`
	OrderID   string  `json:"order_id"`
	UserID    string  `json:"user_id"`
	Total     float64 `json:"total"`
}

func StartOrderConsumer(brokers []string, handler func(ctx context.Context, event OrderEvent) error) {
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: brokers,
		Topic:   "orders",
		GroupID: "order-service",
	})
	defer reader.Close()
	for {
		msg, err := reader.ReadMessage(context.Background())
		if err != nil {
			log.Printf("error reading message: %v", err)
			continue
		}
		var event OrderEvent
		if err := json.Unmarshal(msg.Value, &event); err != nil {
			log.Printf("error parsing event: %v", err)
			continue
		}
		if err := handler(context.Background(), event); err != nil {
			log.Printf("error handling event: %v", err)
		}
	}
}
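Consumer groups give at-least-once delivery, so the handler may see the same event twice after a rebalance or retry; handlers should be idempotent. A minimal in-memory sketch of the idea (in production, the seen-set would live in Redis or the database, not process memory):

```go
package main

import (
	"fmt"
	"sync"
)

type OrderEvent struct {
	EventType string
	OrderID   string
}

// IdempotentHandler wraps a handler so that redelivered events are
// processed only once, keyed here by order ID.
type IdempotentHandler struct {
	mu   sync.Mutex
	seen map[string]bool
	next func(OrderEvent) error
}

func NewIdempotentHandler(next func(OrderEvent) error) *IdempotentHandler {
	return &IdempotentHandler{seen: make(map[string]bool), next: next}
}

func (h *IdempotentHandler) Handle(e OrderEvent) error {
	h.mu.Lock()
	if h.seen[e.OrderID] {
		h.mu.Unlock()
		return nil // duplicate delivery: skip
	}
	h.seen[e.OrderID] = true
	h.mu.Unlock()
	return h.next(e)
}

func main() {
	processed := 0
	h := NewIdempotentHandler(func(e OrderEvent) error {
		processed++
		return nil
	})
	// "1" is delivered twice, as a broker is allowed to do.
	for _, e := range []OrderEvent{{OrderID: "1"}, {OrderID: "1"}, {OrderID: "2"}} {
		h.Handle(e)
	}
	fmt.Println(processed) // 2
}
```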
Service Discovery
Client-Side with Consul
// pkg/discovery/consul.go
package discovery

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

type ConsulClient struct {
	client *api.Client
}

func NewConsulClient(addr string) (*ConsulClient, error) {
	config := api.DefaultConfig()
	config.Address = addr
	client, err := api.NewClient(config)
	if err != nil {
		return nil, err
	}
	return &ConsulClient{client: client}, nil
}

func (c *ConsulClient) Register(serviceID, serviceName string, port int) error {
	registration := &api.AgentServiceRegistration{
		ID:   serviceID,
		Name: serviceName,
		Port: port,
		Check: &api.AgentServiceCheck{
			HTTP:     fmt.Sprintf("http://localhost:%d/health", port),
			Interval: "10s",
			Timeout:  "5s",
		},
	}
	return c.client.Agent().ServiceRegister(registration)
}

func (c *ConsulClient) Deregister(serviceID string) error {
	return c.client.Agent().ServiceDeregister(serviceID)
}

func (c *ConsulClient) Discover(serviceName string) (string, error) {
	services, _, err := c.client.Health().Service(serviceName, "", true, nil)
	if err != nil {
		return "", err
	}
	if len(services) == 0 {
		return "", fmt.Errorf("no healthy instances of %s available", serviceName)
	}
	// Takes the first healthy instance; in production, cache the instance
	// list client-side and round-robin across it
	service := services[0].Service
	return fmt.Sprintf("%s:%d", service.Address, service.Port), nil
}
Kubernetes DNS
In Kubernetes, the internal DNS handles service discovery:
// Service discovery via Kubernetes DNS
func getServiceURL(serviceName string) string {
	namespace := os.Getenv("NAMESPACE")
	if namespace == "" {
		namespace = "default"
	}
	// Format: <service>.<namespace>.svc.cluster.local
	return fmt.Sprintf("http://%s.%s.svc.cluster.local:8080", serviceName, namespace)
}
Load Balancing
Client-Side with Retry
// pkg/loadbalancer/balancer.go
package loadbalancer

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"
)

type LoadBalancer struct {
	instances []string
	counter   uint32
}

func New(instances []string) *LoadBalancer {
	return &LoadBalancer{instances: instances}
}

func (lb *LoadBalancer) Next() string {
	if len(lb.instances) == 0 {
		return ""
	}
	idx := atomic.AddUint32(&lb.counter, 1) % uint32(len(lb.instances))
	return lb.instances[idx]
}

// Retry with exponential backoff
func WithRetry(ctx context.Context, maxRetries int, fn func() error) error {
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		if i == maxRetries-1 {
			break // don't sleep after the final attempt
		}
		// Exponential backoff: 100ms, 200ms, 400ms, ...
		delay := time.Duration(1<<i) * 100 * time.Millisecond
		select {
		case <-time.After(delay):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return fmt.Errorf("max retries exceeded: %w", lastErr)
}
Circuit Breaker
Protect against cascading failures:
// pkg/circuitbreaker/breaker.go
package circuitbreaker

import (
	"errors"
	"sync"
	"time"
)

type State int

const (
	StateClosed   State = iota // Normal operation
	StateOpen                  // Failing, rejecting calls
	StateHalfOpen              // Probing for recovery
)

type CircuitBreaker struct {
	failThreshold    int
	successThreshold int
	timeout          time.Duration

	state       State
	failures    int
	successes   int
	lastFailure time.Time
	mu          sync.RWMutex
}

func New(failThreshold, successThreshold int, timeout time.Duration) *CircuitBreaker {
	return &CircuitBreaker{
		failThreshold:    failThreshold,
		successThreshold: successThreshold,
		timeout:          timeout,
		state:            StateClosed,
	}
}

func (cb *CircuitBreaker) Execute(fn func() error) error {
	if !cb.allow() {
		return errors.New("circuit breaker OPEN")
	}
	err := fn()
	cb.recordResult(err)
	return err
}

func (cb *CircuitBreaker) allow() bool {
	cb.mu.Lock()
	defer cb.mu.Unlock()
	switch cb.state {
	case StateClosed:
		return true
	case StateOpen:
		if time.Since(cb.lastFailure) > cb.timeout {
			cb.state = StateHalfOpen
			cb.failures = 0
			cb.successes = 0
			return true
		}
		return false
	case StateHalfOpen:
		return true
	}
	return false
}

func (cb *CircuitBreaker) recordResult(err error) {
	cb.mu.Lock()
	defer cb.mu.Unlock()
	if err == nil {
		cb.handleSuccess()
	} else {
		cb.handleFailure()
	}
}

func (cb *CircuitBreaker) handleSuccess() {
	switch cb.state {
	case StateHalfOpen:
		cb.successes++
		if cb.successes >= cb.successThreshold {
			cb.state = StateClosed
			cb.failures = 0
			cb.successes = 0
		}
	case StateClosed:
		cb.failures = 0
	}
}

func (cb *CircuitBreaker) handleFailure() {
	cb.lastFailure = time.Now()
	switch cb.state {
	case StateHalfOpen:
		cb.state = StateOpen
	case StateClosed:
		cb.failures++
		if cb.failures >= cb.failThreshold {
			cb.state = StateOpen
		}
	}
}

func (cb *CircuitBreaker) State() State {
	cb.mu.RLock()
	defer cb.mu.RUnlock()
	return cb.state
}
Usage in the HTTP Client
// Create the breaker once (for example, as a client field) and reuse it
// across calls; a breaker created per call never accumulates failures
// and would never open.
var orderCB = circuitbreaker.New(5, 3, 30*time.Second)

func (c *ServiceClient) CallWithCircuitBreaker(ctx context.Context, path string) error {
	return orderCB.Execute(func() error {
		return c.Get(ctx, path, nil)
	})
}
Distributed Tracing
// pkg/tracing/tracing.go
package tracing

import (
	"context"
	"net/http"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/jaeger"
	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
	"go.opentelemetry.io/otel/trace"
)

func InitTracer(serviceName string) (*sdktrace.TracerProvider, error) {
	exp, err := jaeger.New(jaeger.WithCollectorEndpoint(
		jaeger.WithEndpoint("http://jaeger:14268/api/traces"),
	))
	if err != nil {
		return nil, err
	}
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exp),
		sdktrace.WithResource(resource.NewWithAttributes(
			semconv.SchemaURL,
			semconv.ServiceNameKey.String(serviceName),
		)),
	)
	otel.SetTracerProvider(tp)
	return tp, nil
}

// HTTP middleware for tracing
func TracingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tracer := otel.Tracer("http")
		ctx, span := tracer.Start(r.Context(), r.URL.Path,
			trace.WithAttributes(
				attribute.String("http.method", r.Method),
				attribute.String("http.url", r.URL.String()),
			),
		)
		defer span.End()
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// Propagate trace context to downstream services
func propagateTrace(ctx context.Context, req *http.Request) {
	tracer := otel.GetTracerProvider().Tracer("client")
	ctx, span := tracer.Start(ctx, "outgoing-request")
	defer span.End()
	// Inject tracing headers (use the span's context so the parent is linked)
	otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(req.Header))
}
Deploying to Kubernetes
Optimized Dockerfile
# Build stage
FROM golang:1.21-alpine AS builder
WORKDIR /app

# Cache dependencies
COPY go.mod go.sum ./
RUN go mod download

COPY . .

# Static build
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
    -ldflags='-w -s -extldflags "-static"' \
    -a -installsuffix cgo \
    -o order-service \
    ./cmd/api

# Final stage - distroless
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /app/order-service .
USER nonroot:nonroot
EXPOSE 8080 9090
ENTRYPOINT ["/order-service"]
Kubernetes Manifests
# deployments/k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  labels:
    app: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
    spec:
      containers:
        - name: order-service
          image: order-service:latest
          ports:
            - name: http
              containerPort: 8080
            - name: grpc
              containerPort: 9090
          env:
            - name: PORT
              value: "8080"
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: host
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - order-service
                topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: grpc
      port: 9090
      targetPort: 9090
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
Observability
Metrics with Prometheus
// pkg/metrics/metrics.go
package metrics

import (
	"net/http"
	"strconv"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	requestsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Total number of HTTP requests",
	}, []string{"method", "path", "status"})

	requestDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "HTTP request duration in seconds",
		Buckets: prometheus.DefBuckets,
	}, []string{"method", "path"})

	activeRequests = promauto.NewGauge(prometheus.GaugeOpts{
		Name: "http_active_requests",
		Help: "Number of in-flight HTTP requests",
	})
)

func MetricsHandler() http.Handler {
	return promhttp.Handler()
}

func Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		activeRequests.Inc()
		defer activeRequests.Dec()

		// Wrap the response writer to capture the status code
		rw := &responseWriter{ResponseWriter: w, statusCode: http.StatusOK}
		next.ServeHTTP(rw, r)

		duration := time.Since(start).Seconds()
		requestsTotal.WithLabelValues(r.Method, r.URL.Path, strconv.Itoa(rw.statusCode)).Inc()
		requestDuration.WithLabelValues(r.Method, r.URL.Path).Observe(duration)
	})
}

type responseWriter struct {
	http.ResponseWriter
	statusCode int
}

func (rw *responseWriter) WriteHeader(code int) {
	rw.statusCode = code
	rw.ResponseWriter.WriteHeader(code)
}
Health Checks
// internal/infrastructure/http/health.go
package http

import (
	"database/sql"
	"encoding/json"
	"net/http"
)

type HealthChecker struct {
	db *sql.DB
}

func NewHealthChecker(db *sql.DB) *HealthChecker {
	return &HealthChecker{db: db}
}

func (h *HealthChecker) Health(w http.ResponseWriter, r *http.Request) {
	checks := map[string]string{"database": "ok"}
	status := "healthy"

	// Check the database
	if err := h.db.PingContext(r.Context()); err != nil {
		status = "unhealthy"
		checks["database"] = "error: " + err.Error()
		w.WriteHeader(http.StatusServiceUnavailable)
	}

	json.NewEncoder(w).Encode(map[string]interface{}{
		"status": status,
		"checks": checks,
	})
}

func (h *HealthChecker) Ready(w http.ResponseWriter, r *http.Request) {
	// Check readiness for traffic,
	// e.g. migrations applied, caches warmed
	w.WriteHeader(http.StatusOK)
	json.NewEncoder(w).Encode(map[string]string{"ready": "true"})
}
Advanced Patterns
Saga Pattern for Distributed Transactions
// Coordinates a transaction across multiple services
func (s *OrderService) CreateOrderWithSaga(ctx context.Context, order *Order) error {
	saga := NewSaga()

	// Step 1: Reserve inventory
	saga.AddStep(SagaStep{
		Name: "reserve_inventory",
		Execute: func() error {
			return s.inventoryClient.Reserve(ctx, order.Items)
		},
		Compensate: func() error {
			return s.inventoryClient.ReleaseReservation(ctx, order.ID)
		},
	})

	// Step 2: Process payment
	saga.AddStep(SagaStep{
		Name: "process_payment",
		Execute: func() error {
			return s.paymentClient.Charge(ctx, order.UserID, order.Total)
		},
		Compensate: func() error {
			return s.paymentClient.Refund(ctx, order.ID)
		},
	})

	// Step 3: Create shipment
	saga.AddStep(SagaStep{
		Name: "create_shipment",
		Execute: func() error {
			return s.shippingClient.CreateShipment(ctx, order)
		},
		Compensate: func() error {
			return s.shippingClient.CancelShipment(ctx, order.ID)
		},
	})

	return saga.Execute(ctx)
}
Production Microservices Checklist
- Health checks - liveness and readiness probes
- Graceful shutdown - time to drain in-flight requests
- Observability - structured logs, metrics, tracing
- Resilience - circuit breaker, retry, fallback
- Configuration - externalized (env vars, ConfigMaps)
- Security - zero trust, mTLS between services
- Rate limiting - protection against overload
- Resource limits - CPU/memory limits configured
- Auto-scaling - HPA configured
- CI/CD - automated deployment pipeline
Next Steps
Deepen your knowledge:
- Go Concurrency Patterns - advanced goroutines
- Go Testing - integration testing
- Go and gRPC - high-performance communication
- Go for REST APIs - HTTP fundamentals
Microservices in Go: simple, fast, and reliable. Share your architecture!