Few questions on Milvus when running LAION 100M large dataset #367
Comments
The GPU won't help in your case unless you are using a GPU index. We are working on a mode that builds the index with GPU and searches with CPU, but it's not available yet.
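For reference, a GPU index is selected purely through the index parameters passed at index-creation time. Below is a minimal sketch of such parameter dicts for GPU_CAGRA versus CPU HNSW; the numeric values are illustrative examples, not tuned recommendations.

```python
# Illustrative Milvus index-parameter dicts; the GPU index is chosen by
# "index_type" at create_index time. Values are examples, not tuned settings.
gpu_cagra_params = {
    "index_type": "GPU_CAGRA",
    "metric_type": "L2",
    "params": {
        "intermediate_graph_degree": 64,  # graph degree used during build
        "graph_degree": 32,               # final graph degree
    },
}

cpu_hnsw_params = {
    "index_type": "HNSW",
    "metric_type": "L2",
    "params": {"M": 16, "efConstruction": 200},
}

print(gpu_cagra_params["index_type"])  # GPU_CAGRA
```

Either dict would be passed to the collection's index-creation call; only the GPU_CAGRA variant actually exercises the GPUs.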
Bulk insert and indexing of 100M vectors usually takes a couple of hours. Each search could take a few hundred milliseconds, so the total time largely depends on how many queries you want to run.
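As a rough sketch of that arithmetic, assuming a hypothetical ~100 ms per-query latency and a chosen concurrency level (both placeholders, not measured numbers):

```python
# Back-of-envelope query-time estimate (illustrative numbers, not measurements).
def estimated_runtime_seconds(num_queries, latency_ms=100, concurrency=1):
    """Total wall-clock time if each search takes ~latency_ms and
    `concurrency` searches run in parallel."""
    return (num_queries / concurrency) * (latency_ms / 1000.0)

# e.g. 10,000 queries at ~100 ms each with 10 concurrent workers:
print(estimated_runtime_seconds(10_000, latency_ms=100, concurrency=10))  # 100.0
```

Real latency varies with index type, topk, and filtering, so this is only an order-of-magnitude planning aid.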
@agandra30 This depends on the performance of your machine. Note that you have chosen
Thank you @xiaofan-luan and @alwayslove2013 for your replies; I appreciate your support. Is there any way I can use the 100M dataset for the "filtering search 1% and 99%" test case? It is restricted to the 10M dataset only; how can I enable that option in the UI for the same LAION 100M dataset? I used the LAION data as a custom dataset, but it only let me run the no-filter search performance test, not the 1% and 99% filtering tests I am looking to perform. Is there any way you can guide us? Also, in the 1% and 99% filtering test cases, do the searches run serially or concurrently?
Thanks @xiaofan-luan for the reply. When you say the memory won't be enough, do you mean that 48 GB (46068 MiB) per GPU, i.e. 4 × 48 GB = 192 GB, is not sufficient to process a 100-million-vector dataset? I am assuming that because we are only using the GPUs for processing, not for storage, correct? Is GPU_CAGRA recommended instead of HNSW?
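For context, a rough estimate of the raw vector memory for 100M LAION embeddings, assuming 768-dimensional float32 vectors (a typical LAION CLIP embedding size; the actual dimensionality and any index/graph overhead come on top of this):

```python
# Rough raw-data size for 100M float32 vectors (assumed 768 dims, illustrative).
NUM_VECTORS = 100_000_000
DIM = 768            # assumed LAION CLIP embedding dimensionality
BYTES_PER_FLOAT = 4  # float32

raw_bytes = NUM_VECTORS * DIM * BYTES_PER_FLOAT
raw_gib = raw_bytes / (1024 ** 3)
print(f"{raw_gib:.0f} GiB raw vectors")  # ~286 GiB, before any index overhead
```

Under these assumptions the raw vectors alone (~286 GiB) already exceed the 192 GB of combined GPU memory, which is consistent with the concern that GPU memory won't be enough.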
@agandra30 Currently,
I am using VectorDBBench to analyse Milvus's capabilities before it handles our load at scale.
We are using 1 server with 4 NVIDIA L40S GPUs, and I have assigned 2 to the querynode and 2 to the indexnode.
When I ran the no-filter search performance test on the LAION 100M dataset with the DISKANN index type and K=100, the entire setup hung in the "optimize" state for hours, and there are no further logs showing what happens in this state.
A few questions:
Last status
my helm values.yaml file looks like this:

```yaml
indexNode:
  resources:
    requests:
      nvidia.com/gpu: "2"
    limits:
      nvidia.com/gpu: "2"
queryNode:
  resources:
    requests:
      nvidia.com/gpu: "2"
    limits:
      nvidia.com/gpu: "2"
mmap:
  # Set memory mapping property for the whole cluster
  mmapEnabled: true
  # Memory-mapped directory path; if mmapDirPath is unspecified, the
  # memory-mapped files are stored in {localStorage.path}/mmap by default.
  mmapDirPath: /mnt/vector/clustersetup_files/
minio:
  enabled: false
externalS3:
  enabled: true
  host: "xx..xx.xxx.xx"
  port: "xx"
  accessKey: "mykey"
  secretKey: "myskey"
  useSSL: false
  bucketName: "milvusdb"
  rootPath: ""
  useIAM: false
  cloudProvider: "aws"
  iamEndpoint: ""
```