Exploring Model Sharing in the Age of Foundation Models

Tags: ML+X, UW-Madison, Multimodal learning, Foundation models, Model sharing, Hugging Face, LLM, LMM, LLaVA, Deep learning
Talks

  1. Model sharing and reproducible ML
  2. LLaVA-NeXT and model sharing
Presenters

Chris Endemann

Haotian Liu, PhD

Date

March 12, 2024

Model sharing (via platforms such as Hugging Face) has become commonplace over the past few years as practitioners increasingly rely on pretrained models and foundation models to find patterns in their data. In this month’s forum, we discuss best practices for model sharing and learn about the recently released LLaVA-NeXT model, a large multimodal model (LMM) that can be applied to combined vision-and-text tasks.
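To make this concrete, below is a minimal sketch of what consuming a shared model looks like in practice, assuming the Hugging Face transformers library; the checkpoint ID is just one example of the many publicly shared models on the Hub.

```python
# Minimal sketch: downloading and running a community-shared pretrained model
# from the Hugging Face Hub (requires `pip install transformers`).
from transformers import pipeline

# The checkpoint ID below is a public sentiment model; any Hub model ID works.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Model sharing makes reproducing ML results far easier."))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]
```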


Model sharing and reproducible ML - Chris Endemann

During this facilitated discussion, we will share ideas and experiences surrounding best practices in reproducible ML. We will explore questions such as:

  1. Why should you share your ML model?
  2. What are potential challenges, risks, or ethical concerns associated with model sharing and reproducing ML workflows?
  3. How do you market or share your models in a way that ensures proper use?
  4. What pieces must be well-documented to ensure reproducible and responsible model sharing? (See the sketch after this list for one possible workflow.)
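As one hedged illustration of questions 3 and 4, the sketch below shows how a model and its documentation might be published together on the Hugging Face Hub; the repository ID, file path, and card contents are hypothetical placeholders, not recommendations from the discussion.

```python
# Hypothetical sketch: publishing model weights together with a model card
# (requires `pip install huggingface_hub` and an authenticated session,
# e.g. via `huggingface-cli login`).
from huggingface_hub import HfApi, ModelCard

repo_id = "your-username/your-model"  # placeholder repository ID

api = HfApi()
api.create_repo(repo_id, exist_ok=True)

# Upload the trained weights (the local path is a placeholder).
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id=repo_id,
)

# The model card (the repo's README.md) is where intended use, training data,
# evaluation, and known limitations should be spelled out.
card = ModelCard("""
# Your Model

Document intended use, training data, evaluation results, and limitations here.
""")
card.push_to_hub(repo_id)
```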

LLaVA-NeXT and model sharing - Haotian Liu

Over the past year, we have been open-sourcing the LLaVA series of multimodal models, and this work has inspired a great deal of creative research and exploration built on our open-source releases. We’ll give a brief introduction to the LLaVA model and describe how we release model weights and maintain our open-source codebases. We’ll also discuss some of the challenges we face with open-source releases, including hosting costs, privacy, and commercial licensing.
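For readers who want to try the shared weights themselves, here is a minimal sketch of running LLaVA-NeXT through the community Hugging Face transformers integration; the checkpoint ID, image path, and prompt template are assumptions based on the `llava-hf` releases and may differ across base LLMs and library versions.

```python
# Minimal sketch, assuming a recent transformers release with LLaVA-NeXT
# support, a CUDA GPU, and the community `llava-hf` checkpoint. The Mistral
# [INST] prompt template below varies with the underlying base LLM.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

checkpoint = "llava-hf/llava-v1.6-mistral-7b-hf"  # one of the shared releases
processor = LlavaNextProcessor.from_pretrained(checkpoint)
model = LlavaNextForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.float16, low_cpu_mem_usage=True
).to("cuda")

image = Image.open("example.jpg")  # placeholder path to a local image
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```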