Hi there 👋

  • I am Liang Chen (陈亮), a third-year PhD student at the School of Computer Science, Peking University, advised by Prof. Baobao Chang. My research interests are multimodal understanding and generation with foundation models.
  • Google Scholar
  • Homepage

Feel free to drop me an email if you are interested in connecting.

Pinned

  1. LMM101/Awesome-Multimodal-Next-Token-Prediction

    Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey

    37 stars · 1 fork

  2. pkunlp-icler/FastV

    [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models

    Python · 317 stars · 12 forks

  3. DnD-Transformer

    Source code for paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation"

    Python · 61 stars · 4 forks

  4. pkunlp-icler/PCA-EVAL

    [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain

    Jupyter Notebook · 100 stars · 3 forks

  5. MMEvalPro

    Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs

    Python · 22 stars · 2 forks

  6. HaozheZhao/MIC

    MMICL, a state-of-the-art VLM with multimodal in-context learning ability, from PKU

    Python · 337 stars · 15 forks