I'm Sakina, currently on the GLM-4 team, and we're looking for top talent to join us ~ Join us! Top Talents for AGI, Global Hiring!!
About our team: Our world-leading AI team has developed cutting-edge large language and multimodal models and built high-precision billion-scale knowledge graphs, a combination that uniquely empowers us to create a powerful data- and knowledge-driven cognitive engine toward AGI.
- GLM-4 LLM Algorithm Scientist/Engineer
- CogVLM Multimodal LLM Algorithm Scientist/Engineer
- CodeGeeX2 LLM Algorithms (code generation)
- AgentBench/AgentLM LLM Algorithms
- TTS Speech Algorithms
- AI Infra: GLM-platform, deep learning frameworks, inference acceleration, networking, K8S
- AIGC AI-Native Consumer Product Manager
- Front-end/back-end and low-level development; ACM/NOI competition participants welcome
Interested in ChatGLM, large models, or AGI? Happy to chat about any of these topics ~ Feel free to reach out 👇
- Email: [email protected]
- WeChat: SakinaWEI
LLM Research Scientist/Engineer
Responsibilities:
- Design and deploy state-of-the-art NLP/multimodal LLMs.
- Research areas include, but are not limited to: efficient large language model architectures, multimodal learning, self-supervised representation learning, unified cross-task learning, dataset construction, RLHF, etc.
Qualifications:
- PhD/Master's in Computer Science, Artificial Intelligence, or a related field.
- Solid research record in natural language understanding, machine learning, deep learning, and multimodal domains.
- Excellent large-model research capabilities; preference for candidates who have published high-quality papers at top venues such as NeurIPS, ICLR, ICML, ACL, EMNLP, CVPR, JMLR, etc.
- Outstanding collaboration skills: able to coordinate with platform, data, and other teams to complete systematic work, with excellent direction planning and execution capabilities.
ML System Research Scientist/Engineer
Responsibilities:
- Lead the creation of next-generation, high-capacity LLM platforms.
- Collaborate with software engineers to build platforms with cutting-edge models.
Qualifications:
- PhD/Master's in Computer Science, Artificial Intelligence, or a related field.
- Prior experience with training and inference of large language models.
- Experience with high-performance, large-scale ML systems, GPUs, Kubernetes, PyTorch, or OS internals.
- Proficiency in programming languages such as Python or C++, with a track record of working with deep learning frameworks (e.g., PyTorch, DeepSpeed).
- Strong understanding of distributed computing frameworks, and of performance tuning and verification for training/finetuning/inference.
- Familiarity with PEFT or MoE is a plus.
If you have any questions or suggestions about my projects or work, please feel free to contact me 😊~