AI & ML interests

None defined yet.

Recent Activity

Juanxi
posted an update about 1 month ago
📢 Awesome Multimodal Modeling

We introduce Awesome Multimodal Modeling, a curated repository tracing the architectural evolution of multimodal intelligence—from foundational fusion to native omni-models.

🔹 Taxonomy & Evolution:

Traditional Multimodal Learning – Foundational work on representation, fusion, and alignment.
Multimodal LLMs (MLLMs) – Architectures connecting vision encoders to LLMs for understanding.
Unified Multimodal Models (UMMs) – Models unifying Understanding + Generation via Diffusion, Autoregressive, or Hybrid paradigms.
Native Multimodal Models (NMMs) – Models trained from scratch on all modalities; contrasts early vs. late fusion under scaling laws.
💡 Key Distinction:
UMMs unify tasks via generation heads; NMMs enforce interleaving through joint pre-training.
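The early- vs. late-fusion contrast in the taxonomy above can be sketched in a few lines of NumPy. This is a toy illustration under our own assumptions, not code from the repository; all names (`backbone`, token shapes, weights) are hypothetical stand-ins for a real transformer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy token sequences for two modalities: 4 image tokens, 6 text tokens, dim 8.
img_tokens = rng.normal(size=(4, 8))
txt_tokens = rng.normal(size=(6, 8))

def backbone(tokens, weights):
    # Stand-in for a transformer backbone: one linear map, tanh, mean-pool.
    return np.tanh(tokens @ weights).mean(axis=0)

# Early fusion: concatenate (or interleave) token sequences BEFORE a single
# shared backbone, so every layer mixes modalities jointly.
w_shared = rng.normal(size=(8, 8))
early = backbone(np.concatenate([img_tokens, txt_tokens], axis=0), w_shared)

# Late fusion: encode each modality with its own tower, then combine the
# pooled features only at the end.
w_img = rng.normal(size=(8, 8))
w_txt = rng.normal(size=(8, 8))
late = np.concatenate([backbone(img_tokens, w_img), backbone(txt_tokens, w_txt)])

print(early.shape, late.shape)  # (8,) (16,)
```

In the early-fusion sketch, cross-modal interaction happens inside the shared backbone; in the late-fusion sketch, the modalities never interact until the final concatenation, which is the design choice the scaling-laws comparison targets.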

🔗 Explore & Contribute: https://github.com/OpenEnvision/Awesome-Multimodal-Modeling
iChubai
updated a dataset about 1 month ago
Juanxi
posted an update 4 months ago
Recent Updates on ScalingOpt | Your Stars are Appreciated

We are pleased to announce several key updates to the ScalingOpt project:

Pyramid Visualization Structure
Following a suggestion from Yufei, we have introduced a pyramid-based visualization framework to systematically outline the layered architecture of Foundation Models—from foundational principles to infrastructure-level details. This addition is designed to assist teams in organizing and presenting related materials more clearly.

Integration of Optimizer Summaries by Yifeng
We extend a warm welcome to Yifeng (author of MARS), who has joined the project. He has contributed a comprehensive summary of over 100 optimizers, now available in ScalingOpt. This resource can be accessed via the "Optimization Summary Sheet" on the homepage or on the Optimizers page; it features a reader-friendly interface that supports easy viewing, downloading, and citation.

Growing Community of Members
We continue to update and expand the list of active members. Researchers interested in Optimization & Efficient AI are encouraged to join and participate in discussions. Feedback and suggestions are also very welcome and will be reviewed and incorporated on an ongoing basis.

Tutorials in Progress
Tutorial development is actively underway: we have prepared over 300 slides and are refining and expanding the content in collaboration with contributors.

This community is driven purely by passion and a commitment to open knowledge sharing. Your support through starring the repository is greatly appreciated!
Juanxi
posted an update 5 months ago
ScalingOpt is continuously evolving! We are steadily expanding the Community section with new content. Our Blog launched with featured work from Jianlin Su, and we are actively translating insightful posts from scientific communities into English to share on ScalingOpt; we will keep curating excellent community blogs and providing English versions alongside the originals.

We operate under the Creative Commons Attribution-NonCommercial principle, sharing knowledge freely and openly. We welcome your ideas, suggestions, and feedback to help shape ScalingOpt's future.

If you find this initiative valuable, please consider following and starring the project to show your support. Thank you!