Reconstruction of 3D video from 2D real-life sequences
DOI: https://doi.org/10.17533/udea.redin.14658

Keywords: video sequence, anaglyph, depth map, dynamic range

Abstract
In this paper, a novel method for generating 3D video sequences from 2D real-life sequences is proposed. The 3D video sequence is reconstructed via depth map computation and anaglyph synthesis. The depth map is formed using a stereo matching technique based on global error energy minimization with smoothing functions. The anaglyph is constructed by aligning the red component, interpolated according to the previously formed depth map. Additionally, the depth map is transformed to reduce the dynamic range of the disparity values, minimizing ghosting and enhancing color preservation. Several real-life color video sequences containing different types of motion, such as translation, rotation, zoom, and combinations of these, are used to demonstrate the good visual performance of the proposed 3D video sequence reconstruction.
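The anaglyph-synthesis step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function names (`compress_disparity`, `synthesize_anaglyph`), the linear rescaling used to reduce the disparity's dynamic range, and the nearest-neighbor horizontal shift standing in for the paper's interpolation are all assumptions for this sketch.

```python
import numpy as np

def compress_disparity(depth_map, max_shift=8):
    """Reduce the dynamic range of the disparity values.

    Here a simple linear rescale of the depth map into [0, max_shift]
    stands in for the paper's transformation; limiting the maximum
    pixel shift is what curbs ghosting in the final anaglyph.
    """
    d = depth_map.astype(np.float32)
    rng = d.max() - d.min()
    if rng == 0:
        return np.zeros(d.shape, dtype=np.int32)
    return np.round((d - d.min()) / rng * max_shift).astype(np.int32)

def synthesize_anaglyph(rgb, depth_map, max_shift=8):
    """Build a red/cyan anaglyph from a single view plus a depth map.

    Each pixel's red component is sampled from a horizontally shifted
    position given by the compressed disparity, while the green and
    blue components are kept from the original view.
    """
    h, w, _ = rgb.shape
    shift = compress_disparity(depth_map, max_shift)
    rows = np.arange(h)[:, None]               # row indices, shape (h, 1)
    cols = np.arange(w)[None, :]               # column indices, shape (1, w)
    src = np.clip(cols - shift, 0, w - 1)      # shifted red-channel source columns
    out = rgb.copy()
    out[..., 0] = rgb[rows, src, 0]            # displaced red; G and B unchanged
    return out
```

Viewed through red/cyan glasses, the displaced red channel reaches one eye and the untouched green/blue channels the other, which is what produces the depth impression from a single 2D frame.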
References
B. Blundell, A. Schwartz. Volumetric three dimensional display systems. Ed. Wiley. New York. Vol. 5. 2000. pp. 196-200.
M. Halle. “Autostereoscopic displays and computer graphics”. Comput Graph. Vol. 31. 1997. pp. 58-62. DOI: https://doi.org/10.1145/271283.271309
E. Dubois. “A projection method to generate anaglyph stereo”. IEEE International Conference on Acoustics, Speech, and Signal Processing. Vol. 3. 2001. pp. 1661- 1664.
I. Ideses, L. Yaroslavsky. “A method for generating 3D video from a single video stream”. Proc of the Vision, Modeling and Visualization. Ed. Aka Gmbh. Erlangen. Germany. 2002. pp. 435-438.
I. Ideses, L. Yaroslavsky. “New Methods to produce high quality color anaglyphs for 3-D visualization”. Lecture Notes in Computer Science. Vol. 3212. 2004. pp. 273-380. DOI: https://doi.org/10.1007/978-3-540-30126-4_34
S. S. Beauchemin, J. L. Barron. “The computation of optical flow”. ACM Computing Surveys. Vol. 27. 1995. pp. 436-466. DOI: https://doi.org/10.1145/212094.212141
J. L. Barron, D. J. Fleet, S. S. Beauchemin. “Performance of optical flow techniques”. International Journal of Computer Vision. Vol. 12. 1994. pp. 43-77. DOI: https://doi.org/10.1007/BF01420984
D. J. Heeger. “Model for the extraction of image flow”. Journal of the Optical Society of America. Vol. 4. 1987. pp. 1455-1471. DOI: https://doi.org/10.1364/JOSAA.4.001455
D. J. Heeger. “Optical flow using spatiotemporal filters”. International Journal of Computer Vision. Vol. 1. 1988. pp. 279-302. DOI: https://doi.org/10.1007/BF00133568
X. Huang, E. Dubois. “3D reconstruction based on a hybrid disparity estimation algorithm”. IEEE International Conference on Image Processing. Vol. 8. 2006. pp. 1025-1028. DOI: https://doi.org/10.1109/ICIP.2006.312674
C. Zitnick, T. Kanade. “A cooperative algorithm for stereo matching and occlusion detection”. Robotics Institute Technical Reports. No. CMU-RI-TR-99-35. Carnegie Mellon University. 1999.
H. H. Baker, T. O. Binford. “Depth from edge and intensity based stereo”. Proc. of the 7th International Joint Conference on Artificial Intelligence. Vancouver. 1981. pp. 631-636.
D. Comaniciu, P. Meer. “Mean-shift: a robust approach toward feature space analysis”. IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 24. 2002. pp. 603-619. DOI: https://doi.org/10.1109/34.1000236
B. B. Alagoz. “Obtaining depth maps from color images by region based stereo matching algorithms”. OncuBilim Algorythm and Systems Labs. Vol. 08. 2008. Art.4. pp. 1-12.
I. Ideses, L. Yaroslavsky, B. Fishbain. “Real time 2D to 3D video conversion”. J. Real Time Image Proc. Vol. 2. 2007. pp. 3-9. DOI: https://doi.org/10.1007/s11554-007-0038-9
I. Ideses, L. Yaroslavsky. “3 methods to improve quality of color anaglyph”. J. Optics A. Pure, Applied Optics. Vol. 7. 2005. pp. 755-762. DOI: https://doi.org/10.1088/1464-4258/7/12/008
I. Ideses, L. Yaroslavsky, B. Fishbain, R. Vistuch. “3D compressed from 2D video”. Stereoscopic displays and virtual reality systems XIV in Proc. of SPIE & IS&T Electronic Imaging. Vol. 6490. 2007. pp. 64901C. DOI: https://doi.org/10.1117/12.703416
L. Yaroslavsky, J. Campos, M. Espínola, I. Ideses. “Redundancy of stereoscopic images: Experimental evaluation”. Optics Express. Vol. 13. 2005. pp. 10895-10907. DOI: https://doi.org/10.1364/OPEX.13.010895
L. Yaroslavsky, Holography and digital image processing. Ed. Kluwer Academic Publishers. Boston. 2004. pp. 600. DOI: https://doi.org/10.1007/978-1-4757-4988-5
W. Sanders, D. McAllister. “Producing anaglyphs from synthetic images”. Stereoscopic Displays and Applications XIV. Proc. SPIE/IS&T. Vol. 5006. 2003. pp. 348-358. DOI: https://doi.org/10.1117/12.474130
License
Copyright (c) 2018 Revista Facultad de Ingeniería
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Revista Facultad de Ingeniería, Universidad de Antioquia is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en
You are free to:
Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material
Under the following terms:
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
NonCommercial — You may not use the material for commercial purposes.
ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
Material published in the journal may be distributed, copied, and exhibited by third parties, provided the journal is credited. No commercial benefit may be obtained, and derivative works must be distributed under the same license terms as the original work.