# RayZer 4D – v12d and v12e
Training checkpoints and source-code snapshots for two 4D-ERayZer variants:

- **v12d** – `erayzer_core.model.4d_erayzer_v12d.ERayZer4D`, trained on Dynamic-RE10K with the local-mean-loss variant (`dre10k_freeze_dynrecon_localmeanloss_8gpu`).
- **v12e** – `erayzer_core.model.4d_erayzer_v12e.ERayZer4D`, branch-color variant trained on Dynamic-RE10K (`v12e_branch_color_dre10k_freeze_dynrecon_8gpu`).
## Repository layout
```
rayzer_4/
├── v12d/
│   ├── config.yaml                  # training config used for this run
│   ├── checkpoints/
│   │   ├── ckpt_0000000000005000.pt # 5k, 10k, …, 30k steps (6 ckpts)
│   │   └── …
│   └── code/                        # source snapshot at train time
│       ├── erayzer_core/ data/ training_utils/ third_party/ …
│       ├── train.py
│       └── train_4d_v12d_dre10k_freeze_dynrecon_localmeanloss.yaml
└── v12e/
    ├── config.yaml
    ├── checkpoints/                 # 5k, 10k, …, 40k (8 ckpts)
    └── code/                        # same layout as v12d/code/
```
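The checkpoint filenames encode the global training step as a zero-padded integer. A small helper (hypothetical, not part of the code snapshot) can recover the step and sort checkpoints chronologically:

```python
from pathlib import Path


def ckpt_step(path: str) -> int:
    """Extract the global step from a name like ckpt_0000000000005000.pt."""
    return int(Path(path).stem.split("_")[-1])


# Sort a mixed list of checkpoint files by training step:
ckpts = ["ckpt_0000000000030000.pt", "ckpt_0000000000005000.pt"]
ckpts.sort(key=ckpt_step)
print(ckpts[0])  # → ckpt_0000000000005000.pt (earliest step first)
```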
Each checkpoint is ~3.2 GB. wandb logs, evaluation dumps, `__pycache__`, and vendored
`.git` histories have been stripped from the code snapshots.
## Loading
```python
import importlib
import sys

import torch
from omegaconf import OmegaConf

cfg = OmegaConf.load("v12d/config.yaml")

# Make the snapshotted source tree importable:
sys.path.insert(0, "v12d/code")

# The module names start with a digit, so import them by string
# rather than with a plain `import` statement:
# v12d → erayzer_core.model.4d_erayzer_v12d exposes the ERayZer4D class
# v12e → erayzer_core.model.4d_erayzer_v12e exposes the ERayZer4D class
model_mod = importlib.import_module("erayzer_core.model.4d_erayzer_v12d")
model = model_mod.ERayZer4D(cfg)

ckpt = torch.load(
    "v12d/checkpoints/ckpt_0000000000030000.pt",
    map_location="cpu",
    weights_only=False,  # checkpoints carry non-tensor state alongside weights
)
model.load_state_dict(ckpt["model"], strict=False)
model.eval()
```
The `config.yaml` under each version dir matches what `train.py` consumed during that
run. Inference configs for downstream Gradio / viewer apps live under `code/config/`.
## Related
- Upstream code base: this repo is a subset of the `4D_RAVI` working tree at the
  commits these runs were launched from.
- Raw training data mirrors: `2inf/kitti-raw`, `2inf/nuscenes-raw`, `2inf/waymo-raw`
  (the 4D-RayZer models here are trained on Dynamic-RE10K, not on those three).
## License
Research use only. See the upstream WildRayZer / RayZer licenses for details.