Class: Seo::AnthropicBatchClient

Inherits: Object

Defined in: app/services/seo/anthropic_batch_client.rb

Overview

Lightweight Faraday client for the Anthropic Message Batches API.
Used by the SEO batch pipeline to submit analysis requests at 50% of standard API pricing.

API reference: https://platform.claude.com/docs/en/build-with-claude/batch-processing

Examples:

Submit a batch

client = Seo::AnthropicBatchClient.new
batch_id = client.create_batch(requests)
status = client.get_batch(batch_id)
client.stream_results(batch_id) { |result| process(result) }

Defined Under Namespace

Classes: BatchError, RateLimitError

Constant Summary

BASE_URL = 'https://api.anthropic.com'
API_VERSION = '2023-06-01'

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(api_key: nil) ⇒ AnthropicBatchClient

Returns a new instance of AnthropicBatchClient.

Raises:

  • (ArgumentError)


# File 'app/services/seo/anthropic_batch_client.rb', line 22

def initialize(api_key: nil)
  @api_key = api_key || Rails.application.credentials.dig(:claude_ai, :api_key)
  raise ArgumentError, 'Anthropic API key is required' if @api_key.blank?
end

Class Method Details

.build_request(custom_id:, system_prompt:, user_prompt:, model:, max_tokens: Seo::PageAnalysisService::MAX_OUTPUT_TOKENS, temperature: Seo::PageAnalysisService::TEMPERATURE, thinking: nil) ⇒ Hash

Build a single batch request entry from prompt data.
Wraps the system prompt with cache_control for prompt caching
(shared system prompt across all batch items = high cache hit rate).

Parameters:

  • custom_id (String)

    Unique identifier for this request

  • system_prompt (String)

    The system prompt text

  • user_prompt (String)

    The user message text

  • model (String)

    Model ID (e.g. "claude-sonnet-4-6-20260217")

  • max_tokens (Integer) (defaults to: Seo::PageAnalysisService::MAX_OUTPUT_TOKENS)

    Maximum output tokens

  • temperature (Float) (defaults to: Seo::PageAnalysisService::TEMPERATURE)

    Sampling temperature

  • thinking (Hash, nil) (defaults to: nil)

    Thinking config (e.g. { type: 'enabled', budget_tokens: 10000 })

Returns:

  • (Hash)

    A batch request object ready for create_batch



# File 'app/services/seo/anthropic_batch_client.rb', line 93

def self.build_request(custom_id:, system_prompt:, user_prompt:, model:,
                       max_tokens: Seo::PageAnalysisService::MAX_OUTPUT_TOKENS,
                       temperature: Seo::PageAnalysisService::TEMPERATURE,
                       thinking: nil)
  params = {
    model: model,
    max_tokens: max_tokens,
    system: [
      {
        type: 'text',
        text: system_prompt,
        cache_control: { type: 'ephemeral' }
      }
    ],
    messages: [
      { role: 'user', content: user_prompt }
    ]
  }

  # Anthropic requires temperature=1 when thinking is enabled
  if thinking
    params[:thinking] = thinking
    params[:temperature] = 1
  else
    params[:temperature] = temperature
  end

  { custom_id: custom_id, params: params }
end
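The thinking/temperature interplay above can be sketched standalone. This is a hedged re-sketch that mirrors the method rather than calling it; the MAX_TOKENS and TEMPERATURE values are illustrative stand-ins for the real Seo::PageAnalysisService constants.

```ruby
# Illustrative stand-ins for Seo::PageAnalysisService constants
MAX_TOKENS = 4096
TEMPERATURE = 0.2

# Standalone sketch of build_request's logic
def build_request(custom_id:, system_prompt:, user_prompt:, model:,
                  max_tokens: MAX_TOKENS, temperature: TEMPERATURE, thinking: nil)
  params = {
    model: model,
    max_tokens: max_tokens,
    # System prompt is wrapped with cache_control so repeated batch
    # items hit the prompt cache
    system: [{ type: 'text', text: system_prompt,
               cache_control: { type: 'ephemeral' } }],
    messages: [{ role: 'user', content: user_prompt }]
  }

  if thinking
    params[:thinking] = thinking
    params[:temperature] = 1 # Anthropic requires temperature=1 with thinking
  else
    params[:temperature] = temperature
  end

  { custom_id: custom_id, params: params }
end

plain = build_request(custom_id: 'seo_page_1', system_prompt: 'sys',
                      user_prompt: 'user', model: 'claude-sonnet-4-6-20260217')
thought = build_request(custom_id: 'seo_page_2', system_prompt: 'sys',
                        user_prompt: 'user', model: 'claude-sonnet-4-6-20260217',
                        thinking: { type: 'enabled', budget_tokens: 10_000 })
```

Note that passing any thinking config silently discards the caller's temperature; that is deliberate, since the API rejects thinking requests with any other sampling temperature.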

Instance Method Details

#cancel_batch(batch_id) ⇒ Hash

Cancel a batch that is still processing.

Parameters:

  • batch_id (String)

    The batch ID

Returns:

  • (Hash)

    Updated batch status



# File 'app/services/seo/anthropic_batch_client.rb', line 76

def cancel_batch(batch_id)
  response = connection.post("/v1/messages/batches/#{batch_id}/cancel")
  handle_response(response)
end

#create_batch(requests) ⇒ Hash

Create a new Message Batch.

Parameters:

  • requests (Array<Hash>)

    Array of batch request objects, each with:

    • custom_id: unique identifier (e.g. "seo_page_123")
    • params: standard Messages API params (model, max_tokens, system, messages)

Returns:

  • (Hash)

    Batch response with :id, :processing_status, :request_counts



# File 'app/services/seo/anthropic_batch_client.rb', line 33

def create_batch(requests)
  response = connection.post('/v1/messages/batches') do |req|
    req.body = { requests: requests }.to_json
  end

  handle_response(response)
end
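For reference, the POST body is a single requests array. A hedged sketch with one hypothetical entry (the custom_id and prompt text are illustrative):

```ruby
require 'json'

# Hypothetical batch entry in the shape create_batch expects
requests = [
  {
    custom_id: 'seo_page_123',
    params: {
      model: 'claude-sonnet-4-6-20260217',
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Analyze this page.' }]
    }
  }
]

# The serialized body sent to /v1/messages/batches
body = { requests: requests }.to_json
```

Each custom_id must be unique within the batch, since it is the only key for matching entries to their results later.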

#get_batch(batch_id) ⇒ Hash

Retrieve the current status of a Message Batch.

Parameters:

  • batch_id (String)

    The batch ID (e.g. "msgbatch_01HkcTjaV5uDC8jWR4ZsDV8d")

Returns:

  • (Hash)

    Batch status with :processing_status, :request_counts, :results_url



# File 'app/services/seo/anthropic_batch_client.rb', line 45

def get_batch(batch_id)
  response = connection.get("/v1/messages/batches/#{batch_id}")
  handle_response(response)
end
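A common usage pattern is polling get_batch until processing_status reaches 'ended'. A hypothetical helper sketch (poll_until_ended is not part of the class, and the stub below stands in for a live client so the example runs offline):

```ruby
# Hypothetical helper: poll until the batch reports 'ended'
def poll_until_ended(client, batch_id, interval: 30, max_attempts: 120)
  max_attempts.times do
    batch = client.get_batch(batch_id)
    return batch if batch['processing_status'] == 'ended'
    sleep interval
  end
  raise "Batch #{batch_id} did not finish after #{max_attempts} polls"
end

# Stub standing in for Seo::AnthropicBatchClient; reports 'ended'
# on the third poll
class StubBatchClient
  def initialize
    @polls = 0
  end

  def get_batch(_batch_id)
    @polls += 1
    done = @polls >= 3
    { 'processing_status' => done ? 'ended' : 'in_progress',
      'results_url' => done ? 'https://example.com/results' : nil }
  end
end

batch = poll_until_ended(StubBatchClient.new, 'msgbatch_example', interval: 0)
```

In a job queue, a scheduled re-check is usually preferable to a blocking sleep loop; the helper is only meant to show the status check itself.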

#stream_results(batch_id) {|Hash| ... } ⇒ Object

Stream results from a completed batch as JSONL.
Each line is a JSON object with custom_id and result.

Parameters:

  • batch_id (String)

    The batch ID

Yields:

  • (Hash)

    Each result object with :custom_id and :result

Raises:

  • (BatchError)

    If the batch has no results_url yet or the results fetch fails

# File 'app/services/seo/anthropic_batch_client.rb', line 55

def stream_results(batch_id)
  batch = get_batch(batch_id)
  results_url = batch['results_url']
  raise BatchError, "Batch #{batch_id} has no results_url (status: #{batch['processing_status']})" unless results_url

  response = connection.get(results_url)
  raise BatchError, "Failed to fetch results: #{response.status}" unless response.success?

  response.body.each_line do |line|
    line = line.strip
    next if line.empty?

    result = JSON.parse(line)
    yield result
  end
end
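Each yielded line follows the batch results shape: a custom_id plus a result whose type is e.g. 'succeeded' or 'errored'. A sketch parsing one hypothetical succeeded line (the custom_id and score payload are fabricated for illustration):

```ruby
require 'json'

# One hypothetical JSONL line, in the shape stream_results yields
line = '{"custom_id":"seo_page_123","result":{"type":"succeeded",' \
       '"message":{"content":[{"type":"text","text":"{\"score\":87}"}]}}}'

result = JSON.parse(line)
if result['result']['type'] == 'succeeded'
  # The model's reply text, itself JSON in this pipeline's case
  text = result['result']['message']['content'].first['text']
  analysis = JSON.parse(text)
end
```

Callers should branch on result type rather than assume success: individual requests in a batch can fail while the batch as a whole still ends normally.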