AI & ML interests
datasets, research papers, experimentation, vision, classification, text encoders, tokenization, llms, diffusion, distillation, and more.
Recent Activity
replied to their post about 22 hours ago Today, I'll be determining the codebook capacity and utility potential for the larger batteries: Fresnel, Johanna, Grandmaster, Freckles, and the Johanna-F variants, which should give a good indication of which models are capable of handling codebooks and which are more errant. The earlier models all use SVD while the later ones do not. The differences are noted per model, and the behavior is divergent.
I anticipate the D=16 models will be more errant, and the final-state variants of those could very well be much more difficult or costly to run inference on, as their axis bends are likely considerably harder to track. However, I'm confident that enough bounces will give the required yield, so I'll set up some high-yield noise barrages to determine how much of the codebooks we can in fact extract from Johanna, and then set up similar barrages with images to map the internals of Fresnel and Grandmaster.
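A minimal sketch of what such a noise barrage could look like, not the actual code for these models: feed Gaussian noise through an encoder, assign each latent to its nearest codebook entry, and track hit counts and perplexity as a rough proxy for extractable capacity. `encoder`, `codebook`, and all shapes here are assumptions.

```python
# Minimal sketch (not the author's code): probe codebook utilization by firing
# Gaussian noise through an encoder and counting which codebook entries get hit.
# `encoder` and `codebook` are hypothetical stand-ins; shapes are assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def codebook_usage_from_noise(encoder, codebook, n_batches=64, batch_size=256,
                              noise_dim=16, device="cpu"):
    """Returns per-entry hit counts and a perplexity score (effective number
    of codes in use) as a rough proxy for extractable codebook capacity."""
    counts = torch.zeros(codebook.shape[0], device=device)
    unit_codes = F.normalize(codebook.to(device), dim=-1)
    for _ in range(n_batches):
        noise = torch.randn(batch_size, noise_dim, device=device)
        z = F.normalize(encoder(noise), dim=-1)          # sphere-normalized latents
        idx = (z @ unit_codes.T).argmax(dim=-1)          # nearest codebook entry by cosine
        counts += torch.bincount(idx, minlength=codebook.shape[0]).float()
    probs = counts / counts.sum()
    perplexity = torch.exp(-(probs * (probs + 1e-12).log()).sum())
    return counts, perplexity.item()
```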
Grandmaster will be tricky, as it was an experimental Johanna-256 fine-tuned series meant to map sigma-noised image inputs to recreate Fresnel behavioral output. Noised image goes in -> Fresnel-grade replication comes out in high res.
This allowed preliminary Dall-E Mini-esque VAE generation and will be explored further for the stereoscopic translation subsystem, to allow image generation in the unique diffusion format I was working out. I anticipate this system will be more than capable of making monstrosities, so I won't be posting TOO MANY prelims on this one, but the high-capacity potential of these noise makers is meaningfully powerful. Getting uniform codebooks in place for these models will allow full transformer mapping downstream instead of just guess-working the MSE piecemeal, which the earlier versions and variants were doing.
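To make "full transformer mapping downstream" concrete, here is a hedged sketch of the general pattern rather than the actual pipeline: once latents are quantized against a shared codebook, the downstream task becomes sequence modeling over integer code indices trained with cross-entropy, instead of piecemeal MSE on continuous latents. All module and parameter names below are illustrative assumptions.

```python
# Hedged sketch with illustrative names throughout: a small transformer that
# maps sequences of codebook indices to logits over codebook indices.
import torch
import torch.nn as nn

class CodeSequenceMapper(nn.Module):
    def __init__(self, codebook_size=256, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(codebook_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, codebook_size)    # logits over target code ids

    def forward(self, code_ids):                         # code_ids: (B, T) integer indices
        return self.head(self.backbone(self.embed(code_ids)))

# Example usage with dummy source/target code sequences.
model = CodeSequenceMapper()
src = torch.randint(0, 256, (2, 64))                     # two sequences of 64 code ids
tgt = torch.randint(0, 256, (2, 64))                     # dummy target codes
logits = model(src)                                      # (2, 64, 256)
loss = nn.functional.cross_entropy(logits.reshape(-1, 256), tgt.reshape(-1))
```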
I'm straying from the CLS specifically for this series because CLS creates adjudicated pools of bias orbiting the INCORRECT orbiter in some SVAEs. The orbital target IS the soft-hand accumulated bias with the sphere-norm, so having a competitor isn't going to be a good option.
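For illustration only, assuming standard (B, N, D) patch embeddings: the pooling choice described above can be read as averaging the patch tokens (the soft accumulated bias) and projecting the result onto the unit sphere, rather than relying on a competing CLS token.

```python
# Illustration only, assuming (B, N, D) patch embeddings with no CLS token:
# pool by averaging patch tokens, then project onto the unit hypersphere.
import torch
import torch.nn.functional as F

def sphere_norm_pool(patch_tokens: torch.Tensor) -> torch.Tensor:
    """patch_tokens: (B, N, D) patch embeddings, no CLS token.
    Returns (B, D) pooled vectors lying on the unit hypersphere."""
    pooled = patch_tokens.mean(dim=1)        # soft accumulated bias across patches
    return F.normalize(pooled, dim=-1)       # sphere-norm: project to unit length
```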
The Polygonal Omega: Trained Sphere-Solvers Are Projective Codebooks
Three Geometric Bands in a Sphere-Normalized Patch Autoencoder
The Geometric Engine: Structural Attractors in Neural Network Weight Space
FL Hybrid Eigendecomposition Beating cuSOLVER's Mathematical Purity with Compilable PyTorch
Ryan Spearman: Geometric Variant Effect Prediction Through Quaternion-Composed Dual Expert Alignment (published about 1 month ago)
Fused Batched Thin SVD: Engineering a 5000× Speedup with Triton Kernels (published about 1 month ago)
A geometric encoder's toolkit: deterministic primitives for hyperspherical image encoding (published about 1 month ago)
Constellation Relay, Geometric Bottleneck, and the Re-Emergence of the Potential 0.29154 Binding Constant (published about 1 month ago)
Procrustes ViT Shared Manifold Alignment Experimentation (published about 1 month ago)
Geometric Memory III: Resonant Optimization, Consensus Distillation, and Evolutionary Training Paradigms (published about 2 months ago)
Geometric Memory II: Sequence Reconstruction, Diffusion Integration, and the Numerical Topology of Alignment (published about 2 months ago)
Geometric Memory: Context Extension and Cross-Model Alignment Through Pentachoron Regularization (published about 2 months ago)
Geometric Fusion: Cross-Modal Alignment Through Shared Pentachoron Geometry (published about 2 months ago)
QWEN 3.5 Residual Thinking Embeddings: How Language Models Transform Text Through Deliberative Generation (published about 2 months ago)
The Slop Code Problem: A Field Guide to Working With Your LLM Coding Companion
Geometric Structural Vocabulary: Scaling Deterministic Mathematics Into Universal Model Conditioning
Reading the Geometry of Learned Representations: How Synthetic Primitives Became a Rosetta Stone for VAE Latent Spaces
KSimplex Geometric Prior for Stable Diffusion: Complete Mathematical Reference
Geometric Manifold Walking: Stable High-Accuracy Multi-Encoder Fusion Without Backbone Training