
RayZer 4D: v12d and v12e

Training checkpoints and source-code snapshots for two ERayZer4D variants:

  • v12d β€” erayzer_core.model.4d_erayzer_v12d.ERayZer4D trained on Dynamic-RE10K with the local-mean-loss variant (dre10k_freeze_dynrecon_localmeanloss_8gpu).
  • v12e β€” erayzer_core.model.4d_erayzer_v12e.ERayZer4D β€” branch-color variant trained on Dynamic-RE10K (v12e_branch_color_dre10k_freeze_dynrecon_8gpu).

Repository layout

rayzer_4/
├── v12d/
│   ├── config.yaml                     # training config used for this run
│   ├── checkpoints/
│   │   ├── ckpt_0000000000005000.pt    # 5k, 10k, …, 30k steps (6 ckpts)
│   │   └── …
│   └── code/                           # source snapshot at train time
│       ├── erayzer_core/ data/ training_utils/ third_party/ …
│       ├── train.py
│       └── train_4d_v12d_dre10k_freeze_dynrecon_localmeanloss.yaml
└── v12e/
    ├── config.yaml
    ├── checkpoints/                    # 5k, 10k, …, 40k (8 ckpts)
    └── code/                           # same layout as v12d/code/

Each checkpoint is ~3.2 GB. The code snapshots have been stripped of wandb logs, evaluation dumps, __pycache__ directories, and vendored .git histories.
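Because the step counts in the checkpoint filenames are zero-padded to a fixed width, plain lexicographic order equals numeric step order, so picking the latest checkpoint needs no parsing. A small stdlib sketch (the helper names are ours, not part of the release):

```python
from pathlib import Path

def latest_checkpoint(ckpt_dir: str) -> Path:
    """Return the highest-step ckpt_*.pt in ckpt_dir.

    Zero-padded step suffixes (e.g. ckpt_0000000000030000.pt) make
    lexicographic sort order identical to numeric step order.
    """
    ckpts = sorted(Path(ckpt_dir).glob("ckpt_*.pt"))
    if not ckpts:
        raise FileNotFoundError(f"no checkpoints under {ckpt_dir}")
    return ckpts[-1]

def step_of(ckpt: Path) -> int:
    """Extract the training step from a checkpoint filename."""
    # "ckpt_0000000000005000" -> 5000
    return int(ckpt.stem.split("_", 1)[1])
```

For example, `latest_checkpoint("v12d/checkpoints")` would return the 30k-step checkpoint listed above.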

Loading

import sys
import importlib

import torch
from omegaconf import OmegaConf

cfg = OmegaConf.load("v12d/config.yaml")

# Make the snapshotted source tree importable:
sys.path.insert(0, "v12d/code")

# The module names start with a digit (4d_erayzer_v12d), so a plain
# `import` statement cannot reach them; use importlib instead.
#   v12d -> erayzer_core.model.4d_erayzer_v12d
#   v12e -> erayzer_core.model.4d_erayzer_v12e
model_mod = importlib.import_module("erayzer_core.model.4d_erayzer_v12d")
model = model_mod.ERayZer4D(cfg)

sd = torch.load("v12d/checkpoints/ckpt_0000000000030000.pt",
                map_location="cpu", weights_only=False)
model.load_state_dict(sd["model"], strict=False)
model.eval()

The config.yaml under each version directory is the exact config train.py consumed during that run. Inference configs for downstream Gradio / viewer apps live under code/config/.

Related

  • Upstream code base: this repo is a subset of the 4D_RAVI working tree at the commits these runs were launched from.
  • Raw training data mirrors: 2inf/kitti-raw, 2inf/nuscenes-raw, 2inf/waymo-raw (the 4D-RayZer models here are trained on Dynamic-RE10K, not on those three).

License

Research use only. See the upstream WildRayZer / RayZer licenses for details.
