Algorithm developers usually use double-precision data types so that they can focus on the mathematical functionality of the algorithm. When such an algorithm is implemented as a hardware module, the data precision must be reduced to the minimum number of bits that still fulfills the system performance requirements. The process of converting a floating-point algorithm to a bit-level optimized model is complicated and requires specialized knowledge. This paper introduces a simple and robust quantization methodology for HLS designs based on value range analysis.
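To make the idea concrete, the following C++ sketch illustrates one way simulation-based value range analysis can drive a bit-width decision: a tracker records the extrema of a variable while the double-precision reference model runs on test stimuli, and the number of signed integer bits is derived from the largest observed magnitude. The `ValueRange` class and the bit-width formula are illustrative assumptions, not the exact methodology of the paper.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>

// Hypothetical range tracker: records the minimum and maximum values a
// variable takes while the double-precision reference model executes.
struct ValueRange {
    double lo = std::numeric_limits<double>::max();
    double hi = std::numeric_limits<double>::lowest();

    void observe(double v) {
        lo = std::min(lo, v);
        hi = std::max(hi, v);
    }

    // Integer bits needed to cover [lo, hi] in signed two's complement:
    // enough magnitude bits for the largest absolute value, plus a sign bit.
    int integer_bits() const {
        double mag = std::max(std::fabs(lo), std::fabs(hi));
        return static_cast<int>(std::ceil(std::log2(mag + 1.0))) + 1;
    }
};

int main() {
    ValueRange r;
    // Feed the tracker with values produced by the reference algorithm.
    for (double v : {-3.7, 0.25, 12.9, -0.5}) r.observe(v);
    // With max |v| = 12.9 this reports 5 integer bits (4 magnitude + sign).
    std::printf("range [%g, %g] -> %d integer bits\n",
                r.lo, r.hi, r.integer_bits());
    return 0;
}
```

The same measured range could then be used to select the integer-part width of an HLS fixed-point type, with the fractional width chosen separately against an accuracy requirement.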