
International Journal of Advanced
Engineering, Management and Science


Autonomous Camera Movement for Robotic-Assisted Surgery: A Survey
(Vol-3, Issue-8, August 2017)


Mehrdad J. Bani


Keywords: Robotic-assisted surgery, autonomous camera movement, task and gesture recognition.


In the past decade, Robotic-Assisted Surgery (RAS) has become a widely accepted alternative to traditional open surgery procedures. An ideal robotic assistant should combine human and robot capabilities under human control; in practice, the robot should collaborate with the surgeon in a natural and autonomous way, requiring less of the surgeon's attention. In this survey, we provide a comprehensive and structured review of robotic-assisted surgery and of autonomous camera movement for RAS operations. We also discuss several topics closely related to the automation of robotic-assisted surgery, including but not limited to task and gesture recognition, and illustrate several successful applications in various real-world domains. We hope that this paper will provide a more thorough understanding of recent advances in camera automation in RAS and offer some directions for future research.



Page No: 829-836
