MIT HAN Lab
MLSys'24 Best Paper - AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration