Scroll logs
POST /v1/scroll
This endpoint accepts two types of requests. The first runs a search query and returns a `scrollId` together with the first batch of paginated results. The second passes only the `scroll_id` (the variation in the field name is intentional) to fetch the next batches of paginated results. This endpoint always returns results as stringified JSON.
How it works:
First, send a request to establish the `scrollId`. This initial request contains the query object and additional parameters, similar to the `v1/search` endpoint, except that `dayOffset` and `accountIds` are not supported. The request returns the field `scrollId` and `hits`, the number of matching results.
For example, the `scrollId` string may have a value like `*************80Y1JVcldDaVEAAAAAjeoh8hZYNkVkXzNhWVJRaUIwcWF5TEVnU2ZR`.
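For instance, a minimal initial request body might look like the following sketch (the `match_all` query and the `size` value of 100 are illustrative assumptions, not required values):

{
  "query": {
    "match_all": {}
  },
  "size": 100
}

The response to this request contains the `scrollId` and `hits` fields described above.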
Next, send the `scroll_id` in the request body to retrieve the log results as stringified JSON. Each call returns the next page of results, with a maximum of 1,000 results per page. Keep resending the same `scroll_id` to retrieve each successive page until you have all of the available results. When a call returns an empty array, you've reached the end of your results. Note that the `scrollId` expires after 20 minutes.
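A follow-up request body then contains only the `scroll_id`; the value below is a placeholder for the ID returned by the initial request:

{
  "scroll_id": "<scrollId-from-initial-request>"
}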
Note:
- Send the field `scroll_id` in requests (snake_case).
- Receive the field `scrollId` in your responses (camelCase). It expires after 20 minutes.

Make sure to change the region in the URL to match your account's region.
Request
- application/json
Body
- The query can only run on 2 consecutive indexes. By default, the query runs on data sent today and yesterday. You can also add a filter on `timestamp` to search a smaller time frame.
- When using `query_string`, `allow_leading_wildcard` must be set to `false` to disable leading wildcards. In other words, the query can't start with `*` or `?` (see the sketch following this list).
- Can't use `fuzzy_max_expansions`, `max_expansions`, or `max_determinized_states`.
- Can't sort on analyzed fields, such as the `message` field.
- If you omit `_source` from the request, all fields are returned.
- If you pass `'_source': false`, it will exclude the `_source` field from the results.
field from the results. - Array [
- ]
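For illustration, a request body that respects these restrictions might look like the sketch below. The search text and the choice of `timestamp` and `message` as returned fields are assumptions about your log data:

{
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "query": "message:error*",
            "allow_leading_wildcard": false
          }
        },
        {
          "range": {
            "timestamp": {
              "gte": "now-2h",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "_source": ["timestamp", "message"],
  "size": 500
}

Note that the wildcard appears at the end of the search term rather than at the beginning, in keeping with the `allow_leading_wildcard` restriction.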
- The time limit for the search must be ≤ 5 minutes. If no time is specified, the default is `1m` (1 minute).
- When using the `size` element, the value must be ≤ `1000`.
- Can't nest 2 or more bucket aggregations of these types: `date_histogram`, `geohash_grid`, `histogram`, `ip_ranges`, `significant_terms`, `terms`.
- Can't sort or aggregate on analyzed fields, such as the `message` field.
- Aggregation types `significant_terms` and `multi_terms` can't be used.
- If the request specifies aggregations, only the initial search response will contain the aggregation results (see the sketch following this list).
Add a search query to receive the `scrollId` in the result. The query can take any of the parameters described in the Elasticsearch Search API DSL documentation, with the exceptions stated above. You can only add the `query` parameters if you are not passing the `scroll_id` in the request.
Limitations
Possible values: <= 1,000
Default value: 10
Number of results to return
Of the results found, the first result to return.
_source string[]
An array of strings specifying the fields to return.
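To illustrate the two behaviors described here and in the limitations above, `_source` can either list the fields to return or be set to `false`; these are partial request bodies, not complete requests:

{
  "_source": ["timestamp", "message"]
}

{
  "_source": false
}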
string
These time units are supported:
| Unit | Description |
|---|---|
| m | minutes |
| s | seconds |
| ms | milliseconds |
| micros | microseconds |
| nanos | nanoseconds |
Apply field aggregations. See the Elasticsearch guide for details.
Note: You can use `aggs` or `aggregations` as the field name.
Responses
- 200
Successful operation. `hits` is the total number of logs that match the query, which will always be within the 0-2 day range. `total` is the actual set of logs returned when using the query, which is not limited by the selected time range.
- application/json
Schema
Copy this ID and pass it as the field `scroll_id` in a request to the same endpoint to retrieve the next page of results, and keep passing it until you've retrieved all of the results. (Remember to first clear the request body of all other parameters. The `scrollId` is valid for 20 minutes.)
Query results in stringified JSON format. `hits` is the total number of logs that match the query.
{
"code": 200,
"scrollId": "DnF1ZXJ5VGhlbkZldGNoCQAAAAAWXRbqFlNpSWRrTUtXUUR1N1pJbG9uSkJINncAAAAAFp6B-xZTTVFrMGt4eVFnZXhQZV9YbVRrU3NnAAAAABakA8QWNjY1RUZtdWZRS1NZZWt1ZERTNHNaQQAAAAAWXRbrFlNpSWRrTUtXUUR1N1pJbG9uSkJINncAAAAAFl0W7BZTaUlka01LV1FEdTdaSWxvbkpCSDZ3AAAAABQ1nb4WVjRyRlUxZWRUU0dzbTV5VVVqYkhxdwAAAAAUdHVqFlF0b3Znei1ZUXgtZEkyZkR3M0pMbGcAAAAAFvGs6hZKVklxaXIyZ1NOQzF5NHg1cmhtVDV3AAAAABR0dWkWUXRvdmd6LVlReC1kSTJmRHczSkxsZw==",
"hits": "string"
}