Hi @XueYing126 🤗
Niels here from the open-source team at Hugging Face. I discovered your work on arXiv and was wondering whether you would like to submit it to hf.co/papers to improve its discoverability. If you are one of the authors, you can submit it at https://huggingface.co/papers/submit.
The paper page lets people discuss your paper and find its artifacts (your models, datasets, or demo, for instance). You can also claim the paper as yours, which will show up on your public profile at HF, and add GitHub and project page URLs.
Your abstract mentions that "Code, models, dataset" are available at your GitHub repository (https://github.com/eth-siplab/GroupInertialPoser). It'd be great to make the Group Inertial Poser checkpoints and the GIP-DB dataset available on the 🤗 hub once they are released, to improve their discoverability/visibility.
We can add tags so that people find them when filtering https://huggingface.co/models and https://huggingface.co/datasets.
Uploading models (Group Inertial Poser)
See here for a guide: https://huggingface.co/docs/hub/models-uploading.
In this case, we could leverage the PyTorchModelHubMixin class, which adds from_pretrained and push_to_hub to any custom nn.Module. Alternatively, one can leverage the hf_hub_download one-liner to download a checkpoint from the hub.
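Here's a minimal sketch of both options, assuming a hypothetical GroupInertialPoser nn.Module, repo id, and checkpoint filename (adapt these to your actual code and repo):

import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin, hf_hub_download

# Hypothetical model class; the real Group Inertial Poser architecture lives in your repo.
class GroupInertialPoser(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        self.backbone = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x):
        return self.backbone(x)

# Push a trained model to the hub (creates the repo if it doesn't exist yet).
model = GroupInertialPoser(hidden_dim=256)
model.push_to_hub("your-hf-org-or-username/group-inertial-poser")

# Anyone can then reload it in one line.
model = GroupInertialPoser.from_pretrained("your-hf-org-or-username/group-inertial-poser")

# Alternatively, download a raw checkpoint file and load it with your existing code.
ckpt_path = hf_hub_download(
    repo_id="your-hf-org-or-username/group-inertial-poser",
    filename="model.ckpt",  # hypothetical filename
)
state_dict = torch.load(ckpt_path, map_location="cpu")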
For Group Inertial Poser, a keypoint-detection pipeline tag would be suitable, given its focus on 3D full-body pose estimation.
We encourage researchers to push each model checkpoint to a separate model repository, so that things like download stats also work. We can then also link the checkpoints to the paper page.
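If useful, the pipeline tag and extra tags can also be set programmatically on a model repo's card metadata. A small sketch, assuming the repo already has a README/model card and using a hypothetical repo id and tag list:

from huggingface_hub import metadata_update

metadata_update(
    "your-hf-org-or-username/group-inertial-poser",  # hypothetical repo id
    {"pipeline_tag": "keypoint-detection", "tags": ["pose-estimation", "imu", "uwb"]},
    overwrite=True,
)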
Uploading dataset (GIP-DB)
Would be awesome to make the GIP-DB dataset available on the 🤗 hub, so that people can do:

from datasets import load_dataset

dataset = load_dataset("your-hf-org-or-username/your-dataset")

See here for a guide: https://huggingface.co/docs/datasets/loading.
For GIP-DB, given it's an "IMU+UWB dataset for two-person tracking" focused on motion recordings for pose estimation, a keypoint-detection task category would be appropriate.
Besides that, there's the dataset viewer which allows people to quickly explore the first few rows of the data in the browser.
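For reference, a minimal sketch of pushing the dataset and setting the task category, using hypothetical field names and a hypothetical repo id (the actual GIP-DB schema will differ):

from datasets import Dataset
from huggingface_hub import metadata_update

# Hypothetical example rows; replace with the actual IMU + UWB recordings and labels.
ds = Dataset.from_dict({
    "sequence_id": ["seq_001"],
    "imu": [[[0.0, 0.0, 9.81]]],   # hypothetical per-frame IMU readings
    "uwb": [[1.23]],               # hypothetical inter-person UWB distances
})
ds.push_to_hub("your-hf-org-or-username/GIP-DB")

# Add the task category to the dataset card metadata (assuming a README exists on the repo).
metadata_update(
    "your-hf-org-or-username/GIP-DB",
    {"task_categories": ["keypoint-detection"]},
    repo_type="dataset",
    overwrite=True,
)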
Let me know if you're interested or need any help with this, especially once the artifacts are fully ready for release in the GitHub repository!
Cheers,
Niels
ML Engineer @ HF 🤗