mirror of
https://github.com/nspcc-dev/neofs-node.git
synced 2026-03-01 04:29:10 +00:00
Fail SearchV2 requested with unreachable queries #1356
Originally created by @cthulhu-rider on GitHub (Feb 19, 2025).
Is your feature request related to a problem? Please describe.

No object can ever match the following queries:

1. N < X && N > X-like (contradictory bounds)
2. N < -MaxUint256, N <= -MaxUint256-POS, N > MaxUint256, N >= MaxUint256+POS (out-of-range bounds)

Case 1 is forever false, while case 2 is a current protocol limit (false until extensions).

Currently, the SearchV2 server responds with an empty result and OK status to any such query. On one hand, this is correct: no object matches these filters. On the other hand, when OK is received, the client cannot distinguish the "not found for now" state from "not found and never will be". This can hide app-side bugs and worsen the understanding of system behavior.

Describe the solution you'd like

Respond to unreachable queries with a particular status: Bad Request (400) for limit overflows.

Describe alternatives you've considered

Keep returning an empty OK result.

Additional context
@roman-khimov commented on GitHub (Feb 19, 2025):
I'm somewhat concerned about deliberately slow queries. We're mostly making them efficient since we need performance and fast replies, but an attacker can make v2 work about the same time as v1 did while returning zero or some small number of results. And this would be cluster-wide. I don't see a lot of ways to solve it other than some timeouts, but those won't be trivial either (they need to be checked while iterating over the DB).
@cthulhu-rider commented on GitHub (Feb 19, 2025):
Sounds like QoS control with limits and a fee for the load. In theory it sounds interesting; in practice, we aren't even close.