Tequila: Trapping-free Ternary Quantization for Large Language Models Paper • 2509.23809 • Published Sep 28, 2025 • 2
PRIMA.CPP: Speeding Up 70B-Scale LLM Inference on Low-Resource Everyday Home Clusters Paper • 2504.08791 • Published Apr 7, 2025 • 137
Distributed Pruning Towards Tiny Neural Networks in Federated Learning Paper • 2212.01977 • Published Dec 5, 2022 • 1