A vision-based system offers one of the most reliable routes to automating robot-assisted manipulation in surgical knot tying. However, challenges in suture thread detection and automated suture grasping significantly hinder fully automated surgical knot tying. In this article, we propose a novel algorithm for computing the 3-D coordinates of a suture thread during knot tying. A deep-learning model, trained on our data set, accurately locates the suture's tip. A Hessian-based filter with multiscale parameters then eliminates environmental noise while preserving the suture thread information. A multistencils fast marching method segments the suture thread, and a precise stereo-matching algorithm computes the 3-D coordinates of the thread. Experiments evaluating the precision of the deep-learning model, the robustness of the 2-D segmentation approach, and the overall accuracy of the 3-D coordinate computation were conducted in various scenarios; the results quantitatively validate the feasibility and reliability of the entire scheme for automated 3-D shape reconstruction.
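As a rough illustration of the multiscale Hessian-based filtering step, the sketch below uses scikit-image's Frangi vesselness filter as a stand-in for the filter described above; the input path, sigma range, ridge polarity, and threshold are illustrative assumptions, not the article's parameters.

```python
import numpy as np
from skimage import io, color
from skimage.filters import frangi

# Load one camera frame and convert to grayscale (path is hypothetical).
image = color.rgb2gray(io.imread("left_frame.png"))

# Multiscale Hessian-based ridge enhancement: the sigmas span the
# expected suture-thread widths in pixels (illustrative values), and
# black_ridges=True assumes a dark thread on a brighter background.
ridge_response = frangi(image, sigmas=np.arange(1.0, 4.0, 0.5),
                        black_ridges=True)

# Simple threshold to suppress background clutter while keeping the
# thin curvilinear structure of the thread (threshold is an assumption).
thread_mask = ridge_response > 0.05 * ridge_response.max()
```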
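For the fast-marching segmentation step, a minimal sketch is given below using scikit-fmm, which implements the standard fast marching method rather than the multistencils variant used in the article; the helper name `thread_arrival_time` and the speed-floor constant are assumptions for illustration.

```python
import numpy as np
import skfmm  # scikit-fmm: standard FMM, a stand-in for the multistencils variant

def thread_arrival_time(ridge_response, tip_rc):
    """Arrival-time map propagated from the detected suture tip.

    tip_rc is the (row, col) tip estimate, e.g., from the deep-learning
    model; the front travels fastest along the enhanced thread.
    """
    # Place the zero level set at the tip pixel.
    phi = np.ones_like(ridge_response)
    phi[tip_rc] = -1.0
    # Speed proportional to ridge strength, with a small floor so the
    # front can still cross weak responses (epsilon is an assumed value).
    speed = ridge_response + 1e-3
    return skfmm.travel_time(phi, speed)
```

The thread centerline can then be traced by descending the arrival-time map from the far endpoint back toward the tip.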
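The article's stereo-matching algorithm itself is not reproduced here. Assuming the matching has already produced corresponding pixel coordinates along the thread in both calibrated views, the 3-D coordinates follow from standard triangulation, sketched below with OpenCV; the helper name `triangulate_thread` is hypothetical.

```python
import numpy as np
import cv2

def triangulate_thread(P_left, P_right, pts_left, pts_right):
    """Recover 3-D thread coordinates from matched 2-D points.

    P_left, P_right : (3, 4) camera projection matrices from calibration.
    pts_left, pts_right : (N, 2) matched pixel coordinates along the
    thread (the correspondence step is assumed to be done already).
    """
    pts4d = cv2.triangulatePoints(P_left, P_right,
                                  pts_left.T.astype(np.float64),
                                  pts_right.T.astype(np.float64))
    # Convert homogeneous (4, N) coordinates to Euclidean (N, 3).
    return (pts4d[:3] / pts4d[3]).T
```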