evacuate is not working with big objects #808
Originally created by @vkarak1 on GitHub (Oct 17, 2022).
Originally assigned to: @cthulhu-rider on GitHub.
I created object1 with a size of 500 MB and put it on node1, then I set all shards on node4 to read-only mode and issued the evacuate command against one shard; the object count on each node did not change at all.
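The evacuate invocation itself is not quoted in the reproduction steps below; it would have been issued roughly as follows (a hedged sketch: the shard ID is a placeholder and flag names may differ between neofs-cli versions):

```bash
# Evacuate a single shard on node4 (shard ID is a placeholder; the endpoint and
# wallet are the same ones used for the set-mode command below).
neofs-cli control shards evacuate \
    --id <shard-id> \
    --endpoint localhost:8091 \
    -w /etc/neofs/storage/wallet.json
```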
Expected Behavior
The objects on the evacuated shard should be migrated to node1/node2/node3.
Current Behavior
The evacuate command reports "Shard has successfully been evacuated.", but the number of objects on each node did not change at all.
Steps to Reproduce (for bugs)
neofs-cli --rpc-endpoint node1.neofs:8080 --wallet wallet.json container create --name test --policy "REP 1 " --basic-acl public-read-write --await
dd if=/dev/urandom of=object1 bs=1M count=500
neofs-cli --rpc-endpoint node1.neofs:8080 -w wallet.json object put --file object1 --cid AAPiVrUsdbwJ79KSpJukLSc4AXVM77STF5znhG7tWCYf --no-progress
curl -s localhost:6672 | rg neofs_node_object_counter | sed 1,2d

Please find the result below:
node1:
node2:
node3: empty
node4:
neofs-cli control shards set-mode --mode read-only --endpoint localhost:8091 -w /etc/neofs/storage/wallet.json --all

Output:
Please find netmap snapshot result below:
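For reference, a netmap snapshot like the one mentioned above can be taken with the netmap snapshot command, reusing the endpoint and wallet from the steps above:

```bash
# Print the current network map as seen by node1.
neofs-cli --rpc-endpoint node1.neofs:8080 -w wallet.json netmap snapshot
```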
Logs
Your Environment
NeoFS Storage node
Version: v0.32.0-125-gbcf3df35
GoVersion: go1.18.4
Linux glagoli 5.10.0-18-amd64 #1 SMP Debian 5.10.140-1 (2022-09-02) x86_64 GNU/Linux
Server setup and configuration:
cloud, 4 VMs, 4 storage nodes, 4 HTTP gateways, 4 S3 gateways
@cthulhu-rider commented on GitHub (Oct 25, 2022):
It seems that evacuation doesn't work for "small" objects either. Here is what I've seen in the logs during the evacuation job:
The first strange thing is that the node has opened some blobovnicza. Despite this, the node initialized the replication routine as expected. But the "object successfully replicated" message names a node which already contains the replica, so we have a false-positive replication here.

@cthulhu-rider commented on GitHub (Oct 25, 2022):
We've discussed the behavior with @fyrchik and it is expected. The evacuation routine doesn't try to redistribute data according to the placement policy; instead, it makes sure that the object stays available in the container (at least 1 replica is needed).
@vkarak1 I suggest adjusting the test to this behavior: we expect to see no more than 1 missing replica of the object, but at least 1 stored replica (a sketch of such a check follows below).
I'm going to document the evacuation behavior more clearly, but this doesn't block the testing.
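A minimal sketch of such a check, reusing the metrics query from the reproduction steps; the metrics ports other than 6672 are assumptions and may differ per node:

```bash
# Sum neofs_node_object_counter across all four storage nodes and require
# that at least one replica is still stored somewhere after evacuation.
total=0
for port in 6672 6673 6674 6675; do   # assumed metrics ports of node1..node4
    n=$(curl -s localhost:$port | rg neofs_node_object_counter | sed 1,2d |
        awk '{s += $NF} END {print s + 0}')
    total=$((total + n))
done
echo "total stored objects across nodes: $total"
[ "$total" -ge 1 ] || echo "FAIL: no stored replica left after evacuation"
```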
@cthulhu-rider commented on GitHub (Oct 26, 2022):
I reproduced the problem... nah, that's actually not a problem in the current system design.
Node A evacuates the objects from the shard to the other container node B. After that, node B's Policer checks whether the object is stored in the container according to its policy. And it is: node A still responds with the object, so B decides that it holds a redundant replica and throws it away.
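One way to observe this from the outside is to ask each node for the object with TTL 1, so the request is not forwarded and only a locally stored replica is reported back (a hedged sketch: the object ID is a placeholder, and the node2/node3/node4 endpoints are assumed to follow the node1 naming):

```bash
# Check which nodes physically hold a replica: --ttl 1 keeps the request
# on the queried node instead of forwarding it to other container nodes.
for ep in node1.neofs:8080 node2.neofs:8080 node3.neofs:8080 node4.neofs:8080; do
    echo "== $ep"
    neofs-cli --rpc-endpoint $ep -w wallet.json object head \
        --cid AAPiVrUsdbwJ79KSpJukLSc4AXVM77STF5znhG7tWCYf \
        --oid <object-id> --ttl 1
done
```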
@vkarak1 commented on GitHub (Oct 26, 2022):
This is expected behavior with "REP 1", closing the issue.