Although robotic grasp planning has been extensively studied in the literature, comparing the performance of different approaches remains challenging due to the lack of standardization in hardware setups and benchmarking protocols. This work addresses the issue with a threefold contribution. First, it provides a standardized hardware platform and a software framework integrating a benchmarking protocol for grasp planning algorithms (GRASPA). Second, it uses this framework to benchmark three state-of-the-art algorithms in a reproducible way. Third, it employs the framework to investigate the effect of camera pose variation in vision-based grasp planning. We show how the proposed benchmarking setup yields insight not only for comparing different vision-based grasp planners but also for evaluating different parameter configurations within the same grasp planner, for instance, the camera viewpoint with respect to the scene. To ease the reproducibility of our results and the usability of the platform, we provide extensive instructions for replicating the experimental setup and installing our software in the Supplemental Information document. All the software used in this paper is freely available online in the form of Docker images.