- Date: 2023-03-29
- Venue: University of Vienna Biology Building (UBB)
- Meetup Page: https://www.meetup.com/vienna-deep-learning-meetup/events/292279874/
Slides:
- [PDF - Intro, Survey, Jobs, Hot Topics]
- [PDF - PINNs - Physics Informed Neural Networks]
- [PDF - Truth or Dare - How LLMs disregard truth]
Hi everyone,
Please join us for our next Deep Learning meetup on March 29 at the
University of Vienna Biology Building (UBB)
Lecture hall 1 (on the ground floor)
Djerassipl. 1, 1030 Wien
We will have a presentation by Sebastian Schaffer and Lukas Exl (University of Vienna) on Physics Informed Neural Networks, followed by a networking break!
For the second part of the evening, we have Jason Hoelscher-Obermaier on How Large Language Models Disregard Truth And What To Do About It, followed by a Hot Papers session on fine-tuning and running your own LLM by René Donner (mva.ai)!
Hope to see you there!
René, for the VDLM organizers
Schedule
18:30 Welcome & Intro
18:40 Physics Informed Neural Networks
Sebastian Schaffer and Lukas Exl
Wolfgang Pauli Institute // Research Platform Mathematics-Magnetism-Materials & Faculty of Mathematics, University of Vienna
Physics-informed machine learning is a novel methodology for solving (partial) differential equations in the natural sciences. A Physics-Informed Neural Network (PINN) can learn a low-parametric solution to a whole problem class in high dimensions in an unsupervised fashion, whereas traditional methods cannot interpolate the exponentially growing solution space. Chances are therefore high that we will see a shift from traditional (mesh-based) numerical methods to data-driven machine learning models for a broad spectrum of applications.
This talk covers the basics of PINNs and explains their advantages over current methods. We will give examples to illustrate the approach and present some current research topics in the field of micromagnetism.
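To make the core idea concrete ahead of the talk, here is a minimal, hypothetical PINN sketch in PyTorch (not the speakers' code): a small network is trained to satisfy the ODE u'(x) = u(x) with u(0) = 1 purely by minimizing the equation residual at random collocation points, with no labeled data. All names and hyperparameters are illustrative.

```python
# Minimal PINN sketch (illustrative only): approximate the solution of
# u'(x) = u(x), u(0) = 1, whose exact solution is u(x) = exp(x).
# The "physics" enters through the residual loss at collocation points.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small fully connected network approximating the solution u(x).
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5000):
    # Random collocation points in [0, 1]; no labeled data is needed.
    x = torch.rand(128, 1, requires_grad=True)
    u = model(x)
    # du/dx via automatic differentiation.
    du_dx = torch.autograd.grad(
        u, x, grad_outputs=torch.ones_like(u), create_graph=True
    )[0]
    residual = du_dx - u                 # ODE residual u' - u
    bc = model(torch.zeros(1, 1)) - 1.0  # boundary condition u(0) = 1
    loss = (residual ** 2).mean() + (bc ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Compare the learned solution against the exact solution exp(x).
with torch.no_grad():
    x_test = torch.linspace(0, 1, 5).unsqueeze(1)
    print(torch.cat([model(x_test), x_test.exp()], dim=1))
```

Note how the same training loop would work for any equation from the problem class: only the residual term changes, which is what makes the approach attractive compared to mesh-based solvers.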
:: Announcements & Job Openings ::
:: Break & Networking ::
20:10 Truth Or Dare: How Large Language Models Disregard Truth And What To Do About It
Jason Hoelscher-Obermaier, independent researcher
Large language models (LLMs) produce falsehoods just as fluently as truths, and these falsehoods come in many forms: the currently dominant ones include hallucinations and the reproduction of stereotypes and common misconceptions, which pose significant societal risks. Future forms of LLM untruthfulness, such as targeted manipulation or outright deception, could be even worse. I will give an overview of what we know about the truthfulness of LLMs and what can be done to improve it.
Hot Papers session
How to fine-tune and run your own LLM on commodity hardware: Weight Quantization and Low-Rank Adaptation for Large Language Models
René Donner, mva.ai
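As a taste of the techniques this session covers, here is a hedged sketch of the common recipe: load a causal LM with 8-bit weight quantization and attach LoRA adapters, so only small low-rank update matrices are trained while the quantized base model stays frozen. This assumes the Hugging Face transformers, peft, accelerate, and bitsandbytes packages; the model name and all hyperparameters are placeholders, not the session's actual code.

```python
# Illustrative sketch: quantized base model + LoRA adapters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-1.3b"  # placeholder; any causal LM works

# 8-bit weights cut memory roughly 4x vs. fp32, enabling commodity GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name, load_in_8bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA: learn low-rank updates W + (alpha/r) * B @ A instead of full weights.
lora_config = LoraConfig(
    r=8,              # rank of the update matrices
    lora_alpha=16,    # scaling factor applied to the update
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

From here, the adapted model can be trained with a standard fine-tuning loop; at inference time only the small adapter weights need to be stored and shipped per task.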
:: Networking ::
21:30 Wrap up & End