The query phase identifies which documents satisfy the search request, but we still need to retrieve the documents themselves. This is the job of the fetch phase, shown in the figure "Fetch phase of distributed search".
The distributed phase consists of the following steps:
1. The coordinating node identifies which documents need to be fetched and issues a multi GET request to the relevant shards.

2. Each shard loads the documents and enriches them, if required, and then returns the documents to the coordinating node.

3. Once all documents have been fetched, the coordinating node returns the results to the client.
The coordinating node first decides which documents actually need to be fetched. For instance, if our query specified { "from": 90, "size": 10 }, the first 90 results would be discarded and only the next 10 results would need to be retrieved. These documents may come from one, some, or all of the shards involved in the original search request.
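For example, a request like the following (run against all indices, with the query body omitted for brevity) asks for results 91 through 100:

    GET /_search
    {
        "from": 90,
        "size": 10
    }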
The coordinating node builds a multi-get request for each shard that holds a pertinent document and sends the request to the same shard copy that handled the query phase.
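The shard-level fetch request is internal to Elasticsearch rather than something you issue yourself, but it is conceptually similar to the public multi-get API. A sketch, with hypothetical index, type, and document IDs:

    GET /_mget
    {
        "docs": [
            { "_index": "website", "_type": "blog", "_id": "12" },
            { "_index": "website", "_type": "blog", "_id": "57" }
        ]
    }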
The shard loads the document bodies—the _source field—and, if requested, enriches the results with metadata and search snippet highlighting.
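That enrichment is requested in the original search request. For instance, a highlight clause like the following (the body field name is illustrative) asks each shard to return highlighted snippets alongside the _source:

    GET /_search
    {
        "query": { "match": { "body": "distributed search" } },
        "highlight": {
            "fields": { "body": {} }
        }
    }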
Once the coordinating node receives all results, it assembles them into a
single response that it returns to the client.
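The assembled response has the familiar search-response shape; an abbreviated and purely illustrative example:

    {
        "took": 25,
        "timed_out": false,
        "hits": {
            "total": 1023,
            "max_score": 0.95,
            "hits": [
                {
                    "_index": "website",
                    "_type": "blog",
                    "_id": "12",
                    "_score": 0.95,
                    "_source": { "title": "..." }
                }
            ]
        }
    }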
The query-then-fetch process supports pagination with the from and size parameters, but within limits. Remember that each shard must build a priority queue of length from + size, all of which need to be passed back to the coordinating node. And the coordinating node needs to sort through number_of_shards * (from + size) documents in order to find the correct size documents.
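To make that concrete: with 5 primary shards and a request for { "from": 10000, "size": 10 }, each shard builds a priority queue of 10,010 entries and returns 10,010 document IDs and sort values, so the coordinating node must sort through 5 * 10,010 = 50,050 entries just to identify the 10 documents it will actually fetch.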
Depending on the size of your documents, the number of shards, and the hardware you are using, paging 10,000 to 50,000 results (1,000 to 5,000 pages) deep should be perfectly doable. But with big-enough from values, the sorting process can become very heavy indeed, using vast amounts of CPU, memory, and bandwidth. For this reason, we strongly advise against deep paging.
In practice, "deep pagers" are seldom human anyway. A human will stop paging after two or three pages and will change the search criteria. The culprits are usually bots or web spiders that tirelessly keep fetching page after page until your servers crumble at the knees.
If you do need to fetch large numbers of docs from your cluster, you can do so efficiently by disabling sorting with the scroll query, which we discuss later in this chapter.
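As a rough preview of what that looks like (the index name, sizes, and scroll timeout are illustrative; the full syntax is covered in the scroll discussion later in this chapter), the scroll is opened by sorting on _doc to disable score-based sorting, and then continued by resubmitting the _scroll_id returned by each response:

    GET /old_index/_search?scroll=1m
    {
        "query": { "match_all": {} },
        "sort": [ "_doc" ],
        "size": 1000
    }

    GET /_search/scroll
    {
        "scroll": "1m",
        "scroll_id": "<_scroll_id from the previous response>"
    }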