[Bug] Offline engine performance is not better than local server when running batch #1872

Open

jischein opened this issue Nov 1, 2024 · 2 comments

jischein commented Nov 1, 2024

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

I'm running some benchmarks comparing batch processing of Llama 405B with the offline engine (sglang.Engine.generate()) against spinning up a server and running the same batch of requests locally against that live SGLang server.
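For context, the two code paths under comparison reduce to the following (a minimal sketch, not the actual benchmark; the prompt, model field, and token counts are placeholders, and passing a single shared sampling-params dict is assumed to be accepted by Engine.generate). The full benchmark scripts are below.

import requests
import sglang

# Offline path: one in-process engine call handles the whole batch.
llm = sglang.Engine(model_path="meta-llama/Meta-Llama-3.1-405B-Instruct-FP8", tp_size=8)
outputs = llm.generate(["Hello"], {"max_new_tokens": 64})

# Server path: HTTP requests against a live sglang.launch_server instance.
response = requests.post(
    "http://localhost:8001/v1/chat/completions",
    json={
        "model": "default",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 64,
    },
)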

Reproduction

Local server batch benchmark:

  • First, boot up a local server with CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct-FP8 --tp 8 --mem-fraction-static 0.8 --port 8001
  • Next, run the following script:
import json
import time
import requests
from typing import Dict, Any, List
import torch
from tqdm import tqdm
from multiprocessing import Pool, cpu_count

def process_single_request(request: Dict[str, Any]) -> Dict[str, Any]:
    try:
        response = requests.post(
            "http://localhost:8001/v1/chat/completions",
            json=request['body']
        )
        response.raise_for_status()
        response_data = response.json()
        
        # Format result
        processed_result = {
            "id": f"cmpl-{response_data['id']}",
            "custom_id": request['custom_id'],
            "response": {
                "choices": [{
                    "message": {
                        "role": "assistant",
                        "content": response_data["choices"][0]["message"]["content"]
                    }
                }],
                "usage": {
                    "prompt_tokens": response_data["usage"]["prompt_tokens"],
                    "completion_tokens": response_data["usage"]["completion_tokens"],
                    "total_tokens": response_data["usage"]["total_tokens"]
                }
            }
        }
        return processed_result
    except Exception as e:
        print(f"Error processing request: {e}")
        return None

def process_with_progress(prepared_requests: List[Dict[str, Any]]):
    with Pool(processes=cpu_count()) as pool:
        results = list(
            tqdm(
                pool.imap(process_single_request, prepared_requests),
                total=len(prepared_requests),
                desc="Processing requests"
            )
        )
    return [r for r in results if r is not None]  # Filter out any failed requests

def main():
    # Load requests
    print("Loading requests...")
    with open('mmlu_batch_requests.jsonl', 'r') as f:
        requests_data = [json.loads(line) for line in f if line.strip()]
    
    # Process batch with timing
    print(f"Starting batch processing of {len(requests_data)} requests...")
    start_time = time.time()
    
    # Process all requests using multiprocessing
    results = process_with_progress(requests_data)
    
    # Calculate totals
    total_input_tokens = sum(r["response"]["usage"]["prompt_tokens"] for r in results)
    total_completion_tokens = sum(r["response"]["usage"]["completion_tokens"] for r in results)
    
    end_time = time.time()
    total_time = end_time - start_time
    
    # Calculate and print statistics
    tokens_per_second = total_completion_tokens / total_time if total_time > 0 else 0
    
    print(f"\nBatch Processing Statistics:")
    print(f"Total time: {total_time:.2f} seconds")
    print(f"Total input tokens: {total_input_tokens}")
    print(f"Total completion tokens: {total_completion_tokens}")
    print(f"Tokens per second: {tokens_per_second:.2f}")
    print(f"Number of requests processed: {len(results)}")
    if torch.cuda.is_available():
        print(f"GPU type: {torch.cuda.get_device_name()}")
    
    # Save results
    print("\nSaving results...")
    with open('mmlu_outputs.jsonl', 'w') as f:
        for result in results:
            f.write(json.dumps(result) + '\n')

if __name__ == "__main__":
    main()
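Not part of the original benchmark, but a note for anyone reproducing it: these requests are I/O-bound, so a thread pool would give the same client-side concurrency as the process pool above without the cost of pickling each request and response across process boundaries. A minimal sketch against the functions defined above (the worker count of 64 is an arbitrary assumption):

from concurrent.futures import ThreadPoolExecutor

def process_with_threads(prepared_requests: List[Dict[str, Any]], max_workers: int = 64):
    # requests.post releases the GIL while blocked on the network,
    # so threads keep the server saturated just as well as processes.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        results = list(
            tqdm(
                executor.map(process_single_request, prepared_requests),
                total=len(prepared_requests),
                desc="Processing requests",
            )
        )
    return [r for r in results if r is not None]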

Batch inference benchmark:

Run the following script:

import json
import time
import sglang
from typing import Dict, Any, List
import torch

def prepare_prompts(requests_data: List[Dict[str, Any]], llm: sglang.Engine) -> tuple[List[str], List[Dict[str, Any]]]:
    prompts = []
    sampling_params_list = []
    
    for request in requests_data:
        messages = request['body']['messages']
        conversation = [{"role": msg["role"], "content": msg["content"]} for msg in messages]
        prompt = llm.get_tokenizer().apply_chat_template(
            conversation=conversation, tokenize=False, add_generation_prompt=True
        )
        prompts.append(str(prompt))
        
        sampling_params = {
            "max_new_tokens": request['body'].get('max_tokens', 2048),
            "temperature": request['body'].get('temperature', 0.7)
        }
        sampling_params_list.append(sampling_params)
    
    return prompts, sampling_params_list

def main():
    # Initialize model
    print("Initializing model...")
    llm = sglang.Engine(
        model_path="meta-llama/Meta-Llama-3.1-405B-Instruct-FP8",
        mem_fraction_static=0.8,
        tp_size=8
    )
    
    # Load requests
    print("Loading requests...")
    with open('mmlu_batch_requests.jsonl', 'r') as f:
        requests_data = [json.loads(line) for line in f if line.strip()]
    
    # Prepare inputs
    print("Preparing prompts...")
    st = time.time()
    prompts, sampling_params_list = prepare_prompts(requests_data, llm)
    print(f"Time to prepare prompts: {time.time() - st:.2f} seconds")
    print(prompts[10])
    print(sampling_params_list[10])
    
    # Time the generation
    print("Starting generation...")
    start_time = time.time()
    outputs = llm.generate(prompts, sampling_params_list)
    end_time = time.time()
    
    # Calculate statistics
    total_time = end_time - start_time
    total_input_tokens = sum(output['meta_info']['prompt_tokens'] for output in outputs)
    total_completion_tokens = sum(output['meta_info']['completion_tokens'] for output in outputs)
    tokens_per_second = total_completion_tokens / total_time if total_time > 0 else 0
    
    # Print statistics
    print(f"\nBatch Processing Statistics:")
    print(f"Total time: {total_time:.2f} seconds")
    print(f"Total input tokens: {total_input_tokens}")
    print(f"Total completion tokens: {total_completion_tokens}")
    print(f"Tokens per second: {tokens_per_second:.2f}")
    print(f"Number of requests processed: {len(requests_data)}")
    print(f"GPU type: {torch.cuda.get_device_name()}")
    
    # Save results
    print("\nSaving results...")
    results = []
    for output, request in zip(outputs, requests_data):
        result = {
            "id": f"cmpl-{output['meta_info']['id']}",
            "custom_id": request['custom_id'],
            "response": {
                "choices": [{
                    "message": {
                        "role": "assistant",
                        "content": output['text']
                    }
                }],
                "usage": {
                    "prompt_tokens": output['meta_info']['prompt_tokens'],
                    "completion_tokens": output['meta_info']['completion_tokens'],
                    "total_tokens": output['meta_info']['completion_tokens'] + output['meta_info']['prompt_tokens']
                }
            }
        }
        results.append(result)
    
    with open('mmlu_outputs.jsonl', 'w') as f:
        for result in results:
            f.write(json.dumps(result) + '\n')

if __name__ == "__main__":
    main()
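Not in the original script, but if the benchmarking process keeps running after generation, it may be worth tearing the engine down explicitly. This assumes the shutdown() method exposed by recent sglang Engine versions:

# Optional teardown at the end of main(); assumes Engine.shutdown()
# is available in this sglang version.
llm.shutdown()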

Test data set

Results

Local server batch

Starting batch processing of 500 requests...
Processing requests: 100%|██████████████████████████████████████████████████████████████████████████████| 500/500 [02:56<00:00,  2.84it/s]

Batch Processing Statistics:
Total time: 176.35 seconds
Total input tokens: 89474
Total completion tokens: 49230
Tokens per second: 279.16
Number of requests processed: 500
GPU type: NVIDIA A100-SXM4-80GB

Offline mode batch

Batch Processing Statistics:
Total time: 177.41 seconds
Total input tokens: 89974
Total completion tokens: 49971
Tokens per second: 281.67
Number of requests processed: 500
GPU type: NVIDIA A100-SXM4-80GB
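In other words, the offline engine comes out only about 0.9% ahead on decode throughput (281.67 vs. 279.16 tokens/s) while taking roughly a second longer overall (177.41 s vs. 176.35 s), so the two paths are effectively identical on this workload rather than the offline engine showing a clear win from skipping the HTTP and serialization layer.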

Environment

PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35

Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB

Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      46 bits physical, 57 bits virtual
Byte Order:                         Little Endian
CPU(s):                             96
On-line CPU(s) list:                0-95
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
CPU family:                         6
Model:                              106
Thread(s) per core:                 2
Core(s) per socket:                 24
Socket(s):                          2
Stepping:                           6
BogoMIPS:                           4000.03
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear arch_capabilities
Virtualization:                     VT-x
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          3 MiB (96 instances)
L1i cache:                          3 MiB (96 instances)
L2 cache:                           192 MiB (48 instances)
L3 cache:                           32 MiB (2 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-47
NUMA node1 CPU(s):                  48-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Mitigation; TSX disabled

Versions of relevant libraries:
[pip3] flashinfer==0.1.6+cu121torch2.4
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchao==0.6.1
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.2
[pip3] triton==3.0.0
[pip3] zmq==0.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0	GPU1	GPU2	GPU3	GPU4	GPU5	GPU6	GPU7	NIC0	NIC1	NIC2	NIC3	NIC4	NIC5	NIC6	NIC7	NIC8	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	NV12	NV12	NV12	NV12	NV12	NV12	NV12	SYS	SYS	SYS	SYS	NODE	PHB	PHB	PHB	PHB	0-47	0		N/A
GPU1	NV12	 X 	NV12	NV12	NV12	NV12	NV12	NV12	SYS	SYS	SYS	SYS	NODE	PHB	PHB	PHB	PHB	0-47	0		N/A
GPU2	NV12	NV12	 X 	NV12	NV12	NV12	NV12	NV12	SYS	SYS	SYS	SYS	NODE	PHB	PHB	PHB	PHB	0-47	0		N/A
GPU3	NV12	NV12	NV12	 X 	NV12	NV12	NV12	NV12	SYS	SYS	SYS	SYS	NODE	PHB	PHB	PHB	PHB	0-47	0		N/A
GPU4	NV12	NV12	NV12	NV12	 X 	NV12	NV12	NV12	PHB	PHB	PHB	PHB	SYS	SYS	SYS	SYS	SYS	48-95	1		N/A
GPU5	NV12	NV12	NV12	NV12	NV12	 X 	NV12	NV12	PHB	PHB	PHB	PHB	SYS	SYS	SYS	SYS	SYS	48-95	1		N/A
GPU6	NV12	NV12	NV12	NV12	NV12	NV12	 X 	NV12	PHB	PHB	PHB	PHB	SYS	SYS	SYS	SYS	SYS	48-95	1		N/A
GPU7	NV12	NV12	NV12	NV12	NV12	NV12	NV12	 X 	PHB	PHB	PHB	PHB	SYS	SYS	SYS	SYS	SYS	48-95	1		N/A
NIC0	SYS	SYS	SYS	SYS	PHB	PHB	PHB	PHB	 X 	PHB	PHB	PHB	SYS	SYS	SYS	SYS	SYS
NIC1	SYS	SYS	SYS	SYS	PHB	PHB	PHB	PHB	PHB	 X 	PHB	PHB	SYS	SYS	SYS	SYS	SYS
NIC2	SYS	SYS	SYS	SYS	PHB	PHB	PHB	PHB	PHB	PHB	 X 	PHB	SYS	SYS	SYS	SYS	SYS
NIC3	SYS	SYS	SYS	SYS	PHB	PHB	PHB	PHB	PHB	PHB	PHB	 X 	SYS	SYS	SYS	SYS	SYS
NIC4	NODE	NODE	NODE	NODE	SYS	SYS	SYS	SYS	SYS	SYS	SYS	SYS	 X 	NODE	NODE	NODE	NODE
NIC5	PHB	PHB	PHB	PHB	SYS	SYS	SYS	SYS	SYS	SYS	SYS	SYS	NODE	 X 	PHB	PHB	PHB
NIC6	PHB	PHB	PHB	PHB	SYS	SYS	SYS	SYS	SYS	SYS	SYS	SYS	NODE	PHB	 X 	PHB	PHB
NIC7	PHB	PHB	PHB	PHB	SYS	SYS	SYS	SYS	SYS	SYS	SYS	SYS	NODE	PHB	PHB	 X 	PHB
NIC8	PHB	PHB	PHB	PHB	SYS	SYS	SYS	SYS	SYS	SYS	SYS	SYS	NODE	PHB	PHB	PHB	 X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7
  NIC8: mlx5_8

Output of python3 -m sglang.check_env:
Python: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.2, V12.2.140
CUDA Driver Version: 535.183.01
PyTorch: 2.4.0+cu121
sglang: 0.3.4.post1
flashinfer: 0.1.6+cu121torch2.4
triton: 3.0.0
transformers: 4.45.2
requests: 2.32.3
tqdm: 4.66.5
numpy: 1.26.4
aiohttp: 3.10.10
fastapi: 0.115.3
hf_transfer: 0.1.8
huggingface_hub: 0.26.1
interegular: 0.3.3
packaging: 24.1
PIL: 10.4.0
psutil: 6.1.0
pydantic: 2.9.2
uvicorn: 0.32.0
uvloop: 0.21.0
zmq: 26.2.0
vllm: 0.6.3.post1
multipart: 0.0.12
openai: 1.52.1
anthropic: 0.37.1
NVIDIA Topology: (identical to the GPU Topology table above)

Hypervisor vendor: KVM
ulimit soft: 1048576
ByronHsu self-assigned this Nov 1, 2024
jischein commented Nov 7, 2024

Hey @ByronHsu, curious if there are any findings or updates here.

merrymercy (Contributor) commented

@ByronHsu Are you able to reproduce this?
On the other hand, @ByronHsu found the offline engine to be faster than an online server in this benchmark script: #1968
