One reviewer pointed out that the methods ZIP was compared against (such as BlackVIP and BPT-VLM) date from 2023, and suggested that more recent 2024 baselines should have been included for a fairer comparison.
Reviewers highlighted that the paper's design choices, specifically "feature sharing," were well-motivated and helped the model stay expressive despite the simplifications.

Critical Perspectives
Reviewers pointed out that the soft prompt reparameterization design choices were thoroughly tested, including detailed ablation studies.
The primary consensus among reviewers is that ZIP significantly reduces the "query cost" (the number of times the black-box model must be queried for a result) while maintaining or improving accuracy.
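To make "query cost" concrete, here is a minimal, purely illustrative sketch of gradient-free (zeroth-order) tuning against a black box. The toy quadratic loss stands in for an API call to the model, and the SPSA-style two-point estimator is a generic textbook choice, not necessarily the estimator ZIP itself uses; every name here (`black_box_loss`, `spsa_gradient`) is hypothetical. The point is that each optimization step spends a fixed number of queries, so total query cost is what methods in this space compete on.

```python
import numpy as np

# Each call to the black box is one "query" -- the resource that
# query-efficient methods like ZIP try to minimize.
QUERY_COUNT = 0

def black_box_loss(prompt):
    """Stand-in for querying a black-box model (toy quadratic loss)."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return float(np.sum((prompt - 1.0) ** 2))

def spsa_gradient(loss_fn, prompt, rng, eps=1e-2):
    """Two-point zeroth-order gradient estimate: costs 2 queries per step."""
    delta = rng.choice([-1.0, 1.0], size=prompt.shape)  # random direction
    g = (loss_fn(prompt + eps * delta) - loss_fn(prompt - eps * delta)) / (2 * eps)
    return g * delta

rng = np.random.default_rng(0)
prompt = np.zeros(8)  # toy "soft prompt" parameters
for _ in range(200):
    prompt -= 0.05 * spsa_gradient(black_box_loss, prompt, rng)
# 200 steps x 2 queries/step = 400 total queries to the black box.
```

Under this accounting, reducing query cost means reaching the same loss with fewer calls to `black_box_loss`, e.g. by optimizing fewer effective parameters.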
While the reviews were generally positive, experts noted a few areas for improvement:
Reviewers from the research community have shared their direct impressions of the work:
This paper introduces a method called ZIP, designed to improve how large "black-box" models (such as CLIP) are tuned when their internal weights and gradients are inaccessible.

Performance and Efficiency