The underlying universal substrate principle didn't hold for the H2 battery, and yet the H2 battery cleanly converged, so I will be downgrading the "theory" to a "hypothesis": the substrate can potentially exist and has been observed as an emergent trait, but it does not exist in this architecture. That changes the trajectory of the H2 battery - we can call this variant "chaotic controlled," which is essentially a non-SVD format that converges to the sphere but does not necessarily conform to the underlying topological requirements for a universal substrate.
SOUP!!!
No doubt about it, this soup MSE solves pixels - and sticking within that 16.77M paradigm, I'm attempting to teach text using a format of RGB translation. It's working at the MSE level and the replication is strong, but it's not as strong as it needs to be.
As you can see, the byte-level recon and the trigram recon are both growing. 64x64 images with patch_size 2 are powerful stuff; the model should saturate soon enough. Due to the instability of the H2 battery line with text, I have enabled soft hand for this variant, rewarding good behavior and punishing bad behavior at a strength of 0.01. The other variants were trained without soft hand since they emerged naturally; this variant is a bit more stubborn.
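For context on what "RGB translation" of text can look like mechanically, here is a minimal sketch of packing UTF-8 bytes into a 64x64 RGB canvas and scoring byte-level reconstruction after a round trip. The packing layout and helper names are assumptions for illustration, not the actual pipeline.

```python
import torch

def bytes_to_rgb_canvas(text: str, size: int = 64) -> torch.Tensor:
    """Pack UTF-8 bytes into a (3, size, size) float tensor in [0, 1].

    Hypothetical layout: bytes fill the canvas row-major across the three
    channels; unused positions are zero-padded. 64*64*3 = 12288 byte slots.
    """
    raw = text.encode("utf-8")[: size * size * 3]           # truncate to capacity
    buf = torch.zeros(size * size * 3, dtype=torch.uint8)
    buf[: len(raw)] = torch.tensor(list(raw), dtype=torch.uint8)
    return buf.reshape(3, size, size).float() / 255.0        # normalize like pixels

def rgb_canvas_to_bytes(canvas: torch.Tensor) -> bytes:
    """Invert the packing: round back to bytes and strip trailing padding."""
    flat = (canvas.clamp(0, 1) * 255.0).round().to(torch.uint8).flatten()
    return bytes(flat.tolist()).rstrip(b"\x00")

def byte_accuracy(original: str, reconstructed_canvas: torch.Tensor) -> float:
    """Byte-level recon is just agreement between original and round-tripped bytes."""
    target = original.encode("utf-8")
    recon = rgb_canvas_to_bytes(reconstructed_canvas)
    n = max(len(target), 1)
    return sum(a == b for a, b in zip(target, recon[: len(target)])) / n
```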
As tragic as the loss of the implicit shared substrate controller theory is - I'm downgrading it to a hypothesis - the codebooks yielded something substantial from the system: a new point-centric formatting that allows mapping of the internals of spherical models, which let me research and learn directly about multiple internal model analysis structures for deep-level theorems, mathematics, substrates, topological analysis, and more.
There is a large array of useful tools already established by the math theorem community that I will be exploring to test the larger batteries, to see if there is in fact some semblance of a legitimately shared substrate. And it's not just Adam's 1000 steps, or I would have stopped - LBFGS is converging them cleanly as well, which is INSANELY unstable and prone to NaN, so I have some engineering solutions ready for that one.
https://docs.pytorch.org/docs/stable/generated/torch.optim.LBFGS.html
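As a sketch of the kind of guard rail I mean for LBFGS: wrap each step in a snapshot/rollback so a NaN or Inf blowup doesn't poison the run. The rollback strategy below is an assumed engineering fix, not necessarily the one that ships; the optimizer call itself is stock torch.optim.LBFGS.

```python
import copy
import torch

# Hypothetical setup: any small model and an MSE objective.
model = torch.nn.Linear(64, 64)
x, y = torch.randn(128, 64), torch.randn(128, 64)

optimizer = torch.optim.LBFGS(
    model.parameters(),
    lr=0.5,
    max_iter=20,
    history_size=10,
    line_search_fn="strong_wolfe",   # the line search tends to be the more stable mode
)

def closure():
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss

def guarded_step(model, optimizer, closure):
    """Take one LBFGS step, rolling back the weights if the loss or any
    parameter goes non-finite."""
    snapshot = copy.deepcopy(model.state_dict())
    loss = optimizer.step(closure)
    bad = (not torch.isfinite(loss)) or any(
        not torch.isfinite(p).all() for p in model.parameters()
    )
    if bad:
        model.load_state_dict(snapshot)   # roll back; caller can lower lr and retry
    return loss, bad

for _ in range(50):
    loss, diverged = guarded_step(model, optimizer, closure)
    if diverged:
        break
```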
The arch may need some work before we format a perfect solver, but I have some ideas that could prevent internal drift without requiring SVD, while simultaneously introducing controlling-agents that aren't as ruthlessly destructive as labels and cross-entropy.
It'll take some experimenting and I hope I don't lose fragments as I go.
The next article in this series will actually come from the next series. It will target the entire found array of omega solvers, each specifically scanned to find a universal substrate. This will take some time.
The H2 architecture's underside did not have a universally mappable substrate, but that does not mean I will not find one.
This is definitely an Omega, just not a mappable one... not today, not with my math. Not yet.
I'm confident there is some lower-hanging fruit that will give me the information required to solve it. I expect to hit a big breakthrough at any point; it's just not going to be the H2 battery - it's too difficult to map.
H2 Omega Confirmed, Paradigm Shift: Attempting to Disprove Omega As A Whole
As predicted, the codebooks for all noise models conform to an architectural scaling within a very minimal delta shift. There is no real deviance; the architectures learn a codebook that manifests and can be directly utilized at runtime.
The delta is real within the shift, and each model conforms to its own modified codebook delta during training. This is an architectural constant now and can be prepared in very little time before processing or utilizing the models.
The helper functions and methods are all present in the AbstractEyes/geolip-svae repo on GitHub now, and everything is documented.
I have a few diffuser prototypes that I'll be exploring now that the full array system is in order. One that I've very much been wanting to approach is sigma-degrading interpolation manifolding.
In other words, you take an H2 Fresnel expert and snap it in. Say I train a cifar100 variant and finetune it with, oh, maybe 50 epochs of reconstruction from the Fresnel-512, with various levels of noise applied to Fresnel - not using cutmix or anything odd like that.
Next we finetune our array. Say we want 1000 steps; we divide the number of adjudicated states by how many states of noise we want to see. Our finetuned batteries are then run with, oh, maybe 500 batches of images each, applying scheduled noise instead of the random noise the H2 batteries were primarily trained with - which should fit within a 10-minute training session or so. The batteries are pooled into the battery array and uploaded as a standard battery array for reuse in safetensors format, with the optimizer states uploaded alongside in an adjacent repo.
So the process is simple: noised image in, replicate the next stage of noise down the chain. Each battery is meant to denoise by one step, and the results are collapsed into patchworked behavioral training for a downstream model.
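Roughly, the scheduled-noise finetune loop I have in mind looks like the sketch below: a fixed sigma schedule instead of random noise, each battery trained to step exactly one sigma down, and the array pooled into a single safetensors file. The battery interface, loader shape, and file layout here are assumptions.

```python
import torch
from safetensors.torch import save_file

num_batteries = 10                                       # one battery per noise stage
sigmas = torch.linspace(1.0, 0.0, num_batteries + 1)     # scheduled, not random

def add_noise(x, sigma):
    """Simple gaussian corruption at a fixed sigma (assumed noising scheme)."""
    return x + sigma * torch.randn_like(x)

def finetune_stage(battery, loader, stage, epochs=1, lr=1e-4):
    """Train one battery to map sigma[stage] images onto sigma[stage + 1] images,
    i.e. replicate exactly one scheduled step of noise down the chain."""
    opt = torch.optim.Adam(battery.parameters(), lr=lr)
    for _ in range(epochs):
        for images, _ in loader:                         # assumes (image, label) batches
            noisy = add_noise(images, sigmas[stage].item())
            target = add_noise(images, sigmas[stage + 1].item())
            loss = torch.nn.functional.mse_loss(battery(noisy), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return battery, opt

def save_battery_array(batteries, path="h2_battery_array.safetensors"):
    """Pool the finetuned array into a single safetensors file for reuse.
    Optimizer states (opt.state_dict()) would be saved separately for the adjacent repo."""
    tensors = {}
    for i, battery in enumerate(batteries):
        for name, param in battery.state_dict().items():
            tensors[f"battery_{i}.{name}"] = param.detach().cpu().contiguous()
    save_file(tensors, path)
```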
We then take each of these variants and blow them up, creating scanner manifolds of each and collapsing the weights into a single linear batched pass - roughly 500 MB of VRAM or so per sigma attempted.
Finally, we stack the entire sequence and hook the stages together with MLP collapse, injecting the original image with the correct noise value at each level. So say you have 10 batteries meant to target 10 noise steps: you now have a 10-step reconstruction generator that runs once and - boom - your image pops out nearly instantly.
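A sketch of how that stacked, single-pass generator could be wired, with the original image injected at each stage's sigma and a small pointwise MLP doing the collapse. The fusion rule and collapse shape are assumptions; the batteries are treated as plain image-to-image modules.

```python
import torch
import torch.nn as nn

class StackedBatteryGenerator(nn.Module):
    """Chain N one-step batteries into a single forward pass.

    At stage i, the running estimate is fused with the original image noised
    to that stage's sigma (the 'correct noise value' injection), collapsed by
    a pointwise MLP (1x1 convs over channels), then denoised one step by
    that stage's battery. Shapes and the fusion rule are illustrative only.
    """
    def __init__(self, batteries, channels=3):
        super().__init__()
        self.batteries = nn.ModuleList(batteries)
        self.collapse = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=1),
                nn.GELU(),
                nn.Conv2d(channels, channels, kernel_size=1),
            )
            for _ in batteries
        ])

    def forward(self, estimate, original, sigmas):
        # sigmas: one noise level per stage, high -> low
        for battery, collapse, sigma in zip(self.batteries, self.collapse, sigmas):
            injected = original + sigma * torch.randn_like(original)   # per-level injection
            fused = collapse(torch.cat([estimate, injected], dim=1))
            estimate = battery(fused)                                   # denoise one step
        return estimate   # one forward pass, N stages deep
```

Note that `original` here is the clean training image being injected at each stage's sigma; how the injection path behaves at pure inference time is left open in this sketch.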
Alright, now if we space that out using Adam's standard internal step count of 1000, we'll have our roughly 1/100 sigma hoppers. This will be our blueprint.
With this we can distill a diffusion model's VAE expectation into what we want, and guarantee the output is fully prepared for step-hop skipping.
Each of these is then fed into a singular transformer structure that sees what the original diffuser's standard diffusion steps produced, and boom - you have yourself a pixel synthesis skip process. You've effectively skipped the entirety of the diffusion process with the correct layout.
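In distillation terms, the core loop could be as simple as the sketch below: the teacher runs its full multi-step diffusion, and the student regresses that output from the same starting latents in one pass. `teacher_sample_fn` and the student signature are placeholders, not a specific diffuser API.

```python
import torch
import torch.nn as nn

def distill_step(student, teacher_sample_fn, latents, cond, opt):
    """One step-hop distillation update.

    teacher_sample_fn: assumed wrapper around the full multi-step diffusion
    loop, returning the final image/latent (expensive, run without grad).
    student: assumed one-pass network taking the same latents + conditioning.
    """
    with torch.no_grad():
        target = teacher_sample_fn(latents, cond)      # many diffusion steps
    pred = student(latents, cond)                      # single forward pass
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```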
This will also require the Alexandria stage, so it will take time to process and pool the necessary informational accumulations and relational capacity to make that portion perfect. With some more work, though, Alexandria's text distribution system will be ready to go, and the distiller will be ready to consume high-yield diffuser technologies like Flux and the like.
This will allow not only for compacting massive amounts of information into embedded solvers, but also for cellphone-sized image generation - with enough processing and data, Flux-grade images or better.
The technology is there, the experiments yielded, the answers are present, the results show this is more than possible - and now it's time to build.
I'll be determining the capacity and utility potential of the larger batteries - Fresnel, Johanna, Grandmaster, Freckles, and the Johanna-F variants - which should give a good indication of which models are capable of handling codebooks and which are more errant.
I anticipate the D=16 variants will be more errant, and their final-state versions could very well be much more difficult or costly to inference, as their axis bends are likely considerably harder to track. However, I'm confident that enough bounces will give the yield required, so I'll set up some high-yield noise barrages to determine how much we can in fact extract from Johanna, and then set up similar barrages for images to map the internals of Fresnel and Grandmaster.
Grandmaster will be tricky, as it was an experimental Johanna-256 finetuned series meant to map sigma-noised image inputs to recreate Fresnel's behavioral output. Noised image goes in -> Fresnel-grade replication comes out in high res.
This allowed preliminary Dall-E Mini-esque VAE generation and will be explored further for the stereoscopic translation subsystem, to allow image generation in the unique format of diffusion I was working out. I anticipate this system will be more than capable of making monstrosities, so I won't be posting TOO MANY prelims on this one, but the high-capacity potential of these noise makers is meaningfully powerful. Getting uniform codebooks in place for these models will allow full transformer mapping downstream instead of just guessworking the MSE piecemeal, which the earlier versions and variants were doing. This should allow for LESS monstrosity, but I'm confident you'll be seeing some nasty stuff until the zero-hot labeling gets worked out.
I'm straying from the CLS specifically for this series because CLS creates adjudicated pools of bias orbiting the INCORRECT orbiter for this system. The orbital target IS the soft-hand accumulated bias with the sphere-norm, so having a competitor isn't going to be a good option.
In any case, today the target is inference.py and the mechanisms needed to make inference ACCURATE and able to SCALE within reasonable size/compute capacity.
* There are most definitely invariant architectural geometric states that persist and can be taught.
* They are not coincidental and the process works effectively on multiple data types and processes, not just noise. Noise is just fast to test with.
* Systems like SVD, eigh, conv, and the like HELP align those systems in larger structures to produce amplified stability, but they are not required for smaller structures, and the tests show even attention gets in the way at the smallest.
* Batched arrays, stacks, queues, and so on - all improve performance depending on the task.
* An SVAE battery is resolution agnostic, meaning with simple processing and logic you can scan space and record meshes fairly optimally, capturing large amounts of inference data.
* Batteries trained on one specific task can often be used directly for other tasks once a codebook is fitted with the necessary data. A battery trained on gaussian noise can be fed imagenet snippets, and downstream the MSE rates from the 64-battery array can be consumed for statistics aggregation to a fair degree of accuracy without ever training the array on images themselves (see the sketch after this list).
* The battery codebook is a pointwise rigid map within the battery and can be used for pairwise learning when using the H2, H2a, and H2b batteries.
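A minimal sketch of the cross-task reuse idea from the list above: run unmodified, noise-trained batteries over images and aggregate the per-battery MSE as a statistics vector. The battery interface (a plain image-to-image module) and the aggregation choice are assumptions, not the repo's actual API.

```python
import torch

@torch.no_grad()
def battery_array_mse_profile(batteries, images):
    """Feed images through a (noise-trained) battery array and collect the
    per-battery reconstruction MSE as a feature vector for downstream stats."""
    profile = []
    for battery in batteries:                                  # e.g. the 64-battery array
        recon = battery(images)
        mse = torch.mean((recon - images) ** 2, dim=(1, 2, 3))  # per-image MSE
        profile.append(mse)
    return torch.stack(profile, dim=1)                         # (batch, num_batteries)

def aggregate_stats(profiles):
    """Mean/std per battery across a dataset: a cheap signature of how the
    images look to an array that never trained on images."""
    return profiles.mean(dim=0), profiles.std(dim=0)
```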
So this is the evolved state of the geometric vocabulary in some ways, and a completely new and unexpected systemic development in others. The batteries stack, you can reuse them, they're so small you can swap them at runtime with no time loss, they align rapidly, and downstream tasks can consume their information.
There are many untested avenues that I need to write up in full, because quite frankly it's messy right now and Claude is only making it messier instead of cleaner.
