A. Gaitatzes, "Interactive Diffuse Global Illumination Discretization Methods for Dynamic Environments",
[ 5.2M]
Book Chapters
A. Gaitatzes, G. Papaioannou,
"Progressive Screen-space Multi-channel Surface Voxelization",
in Wolfgang Engel (ed.) "GPU Pro 4: Advanced Rendering Techniques", AK Peters / CRC Press, 2013.
Abstract: To alleviate the problems of screen-space voxelization techniques while maintaining their benefit of predictable, controllable and bounded execution time relative to full-scene volume generation methods, we introduce the concept of Progressive Voxelization. The volume representation is incrementally updated to include newly discovered voxels and to discard invalid voxels, which are not present in any of the current image buffers. Using the already available camera and light source buffers, a combined volume-injection and voxel-to-depth-buffer re-projection scheme continuously updates the volume buffer and discards invalid voxels, progressively constructing the final voxelization.
The algorithm is lightweight and operates on complex dynamic environments where geometry, materials and lighting can change arbitrarily. Compared to single-frame screen-space voxelization, our method provides improved volume coverage (completeness) while maintaining the high performance of non-progressive methods.
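The inject/discard cycle described in the abstract can be illustrated with a toy sketch (plain Python over abstract voxel ids; the function and parameter names are hypothetical, not the paper's GPU implementation):

```python
def progressive_update(volume, visible_voxels, confirmable):
    """One frame of progressive voxelization on a toy voxel set.

    volume         -- set of voxel ids carried over from previous frames
    visible_voxels -- voxels discovered in this frame's camera/light buffers
    confirmable    -- voxels that re-project inside at least one current
                      buffer; carried-over voxels outside every buffer
                      cannot be invalidated and are kept as-is
    """
    # Inject: add newly discovered voxels from the current image buffers.
    volume = volume | visible_voxels
    # Discard: a voxel that re-projects into some current depth buffer but
    # is no longer present there has become invalid and is removed.
    invalid = {v for v in volume if v in confirmable and v not in visible_voxels}
    return volume - invalid
```

Repeating this per frame progressively completes the volume while pruning stale voxels, which is the source of the method's bounded per-frame cost.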
Abstract: An increasing number of rendering and geometry processing algorithms rely on volume data to compute anything from effects like smoke/fluid simulations to visibility information or global illumination effects. We present two novel, simple-to-implement real-time surface voxelization algorithms and a volume data caching structure, the Volume Buffer, which encapsulates functionality, storage and access similar to a frame buffer object, but for three-dimensional scalar data. The Volume Buffer can rasterize primitives in 3D space and accumulate up to 1024 bits of arbitrary data per voxel, as required by the specific application. The strength of our methods is the simplicity of the implementation, resulting in fast computation times and very easy integration with existing frameworks and rendering engines.
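A minimal sketch of the Volume Buffer idea, accumulating a per-voxel bit mask the way a frame buffer accumulates fragments (illustrative only; the class and method names are hypothetical and the real structure lives in GPU memory):

```python
class VolumeBuffer:
    """Toy 3-D analogue of a frame buffer: an integer bit mask per voxel."""

    def __init__(self, res):
        self.res = res
        self.data = {}  # sparse storage: (i, j, k) -> accumulated bit mask

    def splat(self, p, bits):
        # Map a point in [0,1)^3 to a voxel and OR-accumulate its payload,
        # mimicking blending arbitrary per-voxel data into a volume texture.
        key = tuple(min(int(c * self.res), self.res - 1) for c in p)
        self.data[key] = self.data.get(key, 0) | bits

    def fetch(self, i, j, k):
        # Voxels never written read back as empty (0).
        return self.data.get((i, j, k), 0)
```

Two splats landing in the same voxel merge their payloads, which is the behavior the abstract describes for accumulating up to 1024 bits of arbitrary data per voxel.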
Abstract: We present a novel method to accelerate the computation of the visibility function of the lighting equation in dynamic scenes composed of rigid, non-penetrating objects. The main idea of the technique is to pre-compute, for each object in the scene, an associated four-dimensional field that describes the visibility in each direction for all positional samples on a sphere around the object; we call this a displacement field. We are able to speed up the calculation of algorithms that trace visibility rays to near real-time frame rates. The storage requirements of the technique range from one bit to one byte per ray direction, making it particularly attractive for scenes with multiple instances of the same object, as the same cached data can be reused regardless of the geometric transformation applied to each instance. We suggest an acceleration technique and identify the sampling method that gives the best results, based on experimentation.
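The precompute-then-lookup pattern behind such a visibility field can be sketched as follows (a toy over integer position/direction sample indices; `occluder_hit` stands in for whatever exact ray query is being cached, and all names are hypothetical):

```python
def precompute_visibility(occluder_hit, n_pos, n_dir):
    """Tabulate a binary visibility field: field[p][d] == 1 if the ray from
    positional sample p in direction d is unoccluded.

    occluder_hit(p, d) -- any (expensive) exact visibility query, evaluated
    once per (position, direction) pair at pre-processing time.
    """
    return [[0 if occluder_hit(p, d) else 1
             for d in range(n_dir)] for p in range(n_pos)]

def visible(field, p, d):
    # Runtime query: a constant-time table lookup replaces ray tracing.
    return field[p][d]
```

Because the field is stored per object in its local frame, one table serves every instance of that object, which is why the one-bit-per-direction cost amortizes so well.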
Paper: 590K
(159 submissions, 12.5% accepted as journal papers)
Images & Movies:
Ambient Occlusion of several models
BibTex reference:
@InProceedings{Gaitatzes:2008,
author = {Gaitatzes, Athanasios and Chrysanthou, Yiorgos and Papaioannou, Georgios},
title = {{Presampled Visibility for Ambient Occlusion}},
booktitle = {16th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG)},
year = {2008},
month = {February},
keywords = {hemisphere,indirect lighting,pre-computed visibility,queries,query-point,tracing rays,uniform distribution}
}
Proceedings Papers
K. Vardis, G. Papaioannou, A. Gaitatzes,
"Multi-view Ambient Occlusion with Importance Sampling",
in Proceedings of the
"Symposium on Interactive 3D Graphics and Games",
(I3D 2013), Orlando Florida, March 2013.
Abstract: Screen-space ambient occlusion and obscurance (AO) techniques have become the de facto methods for ambient light attenuation and contact shadows in real-time rendering. Although extensive research has been conducted to improve the quality and performance of AO techniques, view-dependent artifacts remain a major issue. This paper introduces Multi-view Ambient Occlusion, a generic per-fragment view weighting scheme for evaluating screen-space occlusion or obscurance using multiple, arbitrary views, such as the readily available shadow maps. Additionally, it exploits the resulting weights to perform adaptive sampling, based on the importance of each view, to reduce the total number of samples while maintaining image quality. Multi-view Ambient Occlusion improves and stabilizes the screen-space AO estimation without overestimating the results and can be combined with a variety of existing screen-space AO techniques. We demonstrate the results of our sampling method with both open volume- and solid angle-based AO algorithms.
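The two ingredients of the abstract, per-fragment view weighting and importance-based sample allocation, can be sketched in a few lines (illustrative toy; the weight model and function names are hypothetical, not the paper's exact scheme):

```python
def multi_view_ao(occlusions, weights):
    """Combine per-view screen-space AO estimates into one value.

    occlusions -- AO estimate of the fragment from each view
    weights    -- per-fragment view weights; views that cannot reliably
                  see the fragment should carry weight 0
    """
    total = sum(weights)
    if total == 0.0:
        return 0.0  # no view covers the fragment
    return sum(o * w for o, w in zip(occlusions, weights)) / total

def allocate_samples(weights, budget):
    # Adaptive sampling: each view receives a share of the total sample
    # budget proportional to its weight (importance).
    total = sum(weights) or 1.0
    return [round(budget * w / total) for w in weights]
```

The normalized weighted average keeps the estimate from being overestimated when views overlap, while the allocation step concentrates samples in the views that actually contribute.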
Paper: 1196K
Images & Movies:
Camera-only AO
Camera + 1 shadow map
Camera + 2 shadow maps
Incremental AO Contribution
Final Image
Reference Ray-traced AO
BibTex reference:
@InProceedings{Vardis:2013,
author = {Vardis, Kostas and Papaioannou, Georgios and Gaitatzes, Athanasios},
title = {{Multi-view Ambient Occlusion with Importance Sampling}},
booktitle = {Symposium on Interactive 3D Graphics and Games (I3D)},
year = {2013},
address = {New York, NY, USA},
publisher = {ACM},
location = {Orlando, Florida},
keywords = {ambient occlusion, ambient obscurance, screen space, real-time rendering}
}
Abstract: In this paper we present a novel real-time algorithm to compute the global illumination of scenes with dynamic geometry and arbitrarily complex dynamic illumination. We use a virtual point light (VPL) illumination model on the volume representation of the scene. Light is propagated in void space using an iterative diffusion approach. Unlike other dynamic VPL-based real-time approaches, our method handles occlusion (shadowing and masking) caused by the interference of geometry and is able to estimate diffuse inter-reflections from multiple light bounces.
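The "propagation in void space with geometry acting as a blocker" idea can be illustrated on a 1-D toy grid (a Jacobi-style sketch under strong simplifying assumptions; it is not the paper's VPL propagation scheme and all names are hypothetical):

```python
def diffuse_step(grid, occupied):
    """One diffusion iteration on a 1-D toy light volume.

    Each void cell averages itself with its void neighbours, so injected
    light spreads through empty space; occupied cells neither receive nor
    transmit light, giving shadowing/masking from interfering geometry.
    """
    out = grid[:]
    for i in range(len(grid)):
        if occupied[i]:
            continue  # geometry blocks propagation
        nbrs = [grid[j] for j in (i - 1, i + 1)
                if 0 <= j < len(grid) and not occupied[j]]
        if nbrs:
            out[i] = (grid[i] + sum(nbrs)) / (1 + len(nbrs))
    return out
```

Iterating this step carries energy across void cells over successive frames, while an occupied cell in the middle of the grid stops the flow entirely.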
Paper: 206K
Images:
Direct Illumination
Indirect Illumination
Final Image
BibTex reference:
@InProceedings{Mavridis:2010,
author = {Mavridis, Pavlos and Gaitatzes, Athanasios and Papaioannou, Georgios},
title = {{Volume-based Diffuse Global Illumination}},
booktitle = {International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP)},
year = {2010},
keywords = {real-time global illumination,spherical harmonics,voxels}
}
Abstract: In this paper we present a novel real-time algorithm to compute the global illumination of dynamic scenes with arbitrarily complex dynamic illumination. We use a virtual point light (VPL) illumination model on the volume representation of the scene. Unlike other dynamic VPL-based real-time approaches, our method handles occlusion (shadowing and masking) caused by the interference of geometry and is able to estimate diffuse inter-reflections from multiple light bounces.
Paper: 333K
Images & Movies:
Direct Illumination
Indirect Illumination
Final Image
BibTex reference:
@InCollection{Gaitatzes:2010,
author = {Gaitatzes, Athanasios and Mavridis, Pavlos and Papaioannou, Georgios},
title = {{Interactive Volume-Based Indirect Illumination of Dynamic Scenes}},
booktitle = {Intelligent Computer Graphics 2010},
publisher = {Springer Berlin / Heidelberg},
year = {2010},
editor = {Plemenos, Dimitri and Miaoulis, Georgios},
volume = {321},
series = {Studies in Computational Intelligence},
pages = {229--245},
doi = {10.1007/978-3-642-15690-8_12},
isbn = {978-3-642-15689-2},
}
Abstract: We present a novel GPU-based method for accelerating the visibility function computation of the lighting equation in dynamic scenes composed of rigid objects. The method pre-computes, for each object in the scene, the visibility and normal information, as seen from the environment, onto the bounding sphere surrounding the object and encodes it into maps. The visibility function is encoded by a four-dimensional visibility field that describes the distance of the object in each direction for all positional samples on a sphere around the object. In addition, the normal vectors of each object are computed and stored in corresponding fields for the same positional samples for use in the computation of reflection in ray-tracing. Thus we are able to speed up the calculation of most algorithms that trace rays to real-time frame rates. The pre-computation time of our method is relatively small. The space requirements amount to 1 byte per ray direction for the computation of ambient occlusion and soft shadows and 4 bytes per ray direction for the computation of reflection in ray-tracing. We present the acceleration results of our method and show its application to two different intersection intensive domains, ambient occlusion computation and stochastic ray tracing on the GPU.
Paper: 396K
Images & Movies:
BibTex reference:
@InProceedings{Gaitatzes:2010,
author = {Gaitatzes, Athanasios and Andreadis, Anthousis and Papaioannou, Georgios and Chrysanthou, Yiorgos},
title = {{Fast Approximate Visibility on the GPU using pre-computed 4D Visibility Fields}},
year = {2010},
month = {February},
booktitle = {18th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG)},
keywords = {hemisphere,indirect lighting,pre-computed visibility,tracing rays,uniform distribution},
}
D. Christopoulos, A. Gaitatzes, "Multimodal Interfaces for Educational Virtual Environments",
in Proceedings of the
"13th Panhellenic Conference on Informatics",
(PCI 2009), Corfu Greece, September 10-12, 2009.
Abstract: Educational applications are often slow to adopt new interaction devices that could bring new value and allow new forms of gameplay. Following decades of research on how to use 3D simulation and Virtual Environments in education, attention has recently turned to exploring Multi-User Virtual Environments for the educational community. In this paper we present the results of a pilot simulation battle, created for educational purposes, combining the positive aspects of multi-user virtual environments, edutainment VR applications and new Human Computer Interaction (HCI) interfaces. We present the technology used, as well as an evaluation case study of the human-computer interaction results.
Paper: 300K
Images & Movies:
BibTex reference:
@InProceedings{Christopoulos:2009,
author = {Christopoulos, Dimitrios and Gaitatzes, Athanasios},
title = {{Multimodal Interfaces for Educational Virtual Environments}},
booktitle = {13th Panhellenic Conference on Informatics},
publisher = {IEEE},
year = {2009},
pages = {197--201},
doi = {10.1109/PCI.2009.8},
isbn = {978-0-7695-3788-7},
keywords = {Virtual Reality, Natural Interfaces, Educational Applications, Multi User Environments},
}
Abstract: Rendering realistic outdoor scenes in real-time applications is a difficult task to accomplish, since the geometric complexity of the objects, and most notably of trees, is too high for current hardware to handle efficiently in large amounts. Our method generates trees with self-similarity and later exploits this property by heavily sharing pre-rendered textures of similar parts of the tree. The intrinsic hierarchy of the trees, combined with their self-similarity, allows the generation of multiple levels of detail. Here we present the flow of the processing stage, from the collection of the required input data to the export of the models in all their levels of detail, as well as related and additional data.
Paper: 806K
Images & Movies:
BibTex reference:
@InProceedings{Koniaris:2009,
author = {Koniaris, Charalampos and Gaitatzes, Athanasios and Papaioannou, Georgios},
title = {{An Automated Modeling Method for Multiple Detail Levels of Real-Time Trees}},
year = {2009},
pages = {53--60},
month = mar,
publisher = {IEEE},
doi = {10.1109/VS-GAMES.2009.15},
isbn = {978-0-7695-3588-3},
booktitle = {1st International IEEE Conference in Serious Games and Virtual Worlds (VS-GAMES)},
keywords = {foliage,modeling,tree},
}
A. Gaitatzes, G. Papaioannou, D. Christopoulos and G. Zyba,
"Media Productions for a Dome Display System",
in Proceedings of the
"ACM Symposium on Virtual Reality Software and Technology",
(VRST 2006), Limassol Cyprus, November 1-3, 2006.
Abstract: As the interest of the public for new forms of media grows, museums and theme parks select real-time Virtual Reality productions as their presentation medium. Based on three-dimensional graphics, interaction, sound, music and intense storytelling, these productions mesmerize their audiences. The Foundation of the Hellenic World (FHW), having so far opened three different Virtual Reality theaters to the public, is in the process of building a new dome-shaped Virtual Reality theater with a capacity of 130 people. This fully interactive theater will present new experiences in immersion to the visitors. In this paper we present the challenges encountered in developing productions for such a large spherical display system, as well as in building the underlying real-time display and support systems.
Paper: 105K
Images & Movies:
BibTex reference:
@InProceedings{Gaitatzes:2006,
author = {Gaitatzes, Athanasios and Papaioannou, Georgios and Christopoulos, Dimitrios and Zyba, Gjergji},
title = {{Media Productions for a Dome Display System}},
year = {2006},
pages = {261},
address = {New York, New York, USA},
publisher = {ACM Press},
doi = {10.1145/1180495.1180548},
isbn = {1595933212},
booktitle = {ACM Symposium on Virtual Reality Software and Technology (VRST)},
keywords = {computer clusters,spherical display systems,stereoscopic display},
}
Abstract: Occlusion culling is a class of algorithms for rapidly eliminating portions of three-dimensional geometry hidden behind other, visible objects prior to passing them to the rendering pipeline. In this paper, an extension to the popular shadow frustum culling algorithm is presented, which takes into account the fact that many planar occluders can be grouped into compound convex solids, which in turn can provide fewer and larger culling frusta and therefore more efficient elimination of hidden geometry. The proposed method combines planar and solid occluders using a unified selection approach and is ideal for dynamic environments, as it does not depend on precalculated visibility data. The solid occluders culling algorithm has been applied to commercially deployed virtual reality systems, and test cases and results are provided from actual virtual reality shows.
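At its core, frustum-based occlusion culling reduces to testing geometry against the planes bounding a culling frustum; a minimal sketch of that test (illustrative only; constructing the planes from an occluder's silhouette and the light/eye position is omitted, and the names are hypothetical):

```python
def inside_frustum(point, planes):
    """True if a point lies inside a convex culling frustum.

    planes -- inward-facing planes (a, b, c, d), where a point is on the
    inner side of a plane when a*x + b*y + c*z + d >= 0. An object whose
    points are all inside the frustum is hidden and can be culled.
    """
    x, y, z = point
    return all(a * x + b * y + c * z + d >= 0 for (a, b, c, d) in planes)
```

A solid occluder yields one such frustum where a fan of planar occluders would yield several smaller ones, which is why grouping planar occluders into convex solids culls more geometry with fewer tests.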
Paper: 826K
Images & Movies:
BibTex reference:
@InProceedings{Papaioannou:2006,
author = {Papaioannou, Georgios and Gaitatzes, Athanasios and Christopoulos, Dimitrios},
title = {{Efficient Occlusion Culling using Solid Occluders}},
year = {2006},
isbn = {8086943038},
booktitle = {14th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG)},
keywords = {dynamic environments,games,hidden surface removal,virtual reality,visibility},
}
Gaitatzes A., Christopoulos D., Papaioannou G.,
"Virtual Reality Systems and Applications: The Ancient Olympic Games",
in Proceedings of the
"10th Panhellenic Conference on Informatics",
(PCI 2005) in P. Bozanis and E.N. Houstis (Eds.), "Advances in Informatics", Springer-Verlag, LNCS 3746, University of Thessaly, Volos, Greece, November 11-13 2005.
Gaitatzes A., Christopoulos D., Papaioannou G.,
"The Ancient Olympic Games: Being Part of the Experience",
in Proceedings of the
"5th International Symposium on Virtual Reality, Archaeology and Cultural Heritage",
(VAST 2004) and
"2nd Eurographics Workshop on Graphics and Cultural Heritage",
Oudenaarde, Belgium, December 7-10 2004.
[ 371K]
Gaitatzes A., Christopoulos D., Roussou M.,
"Reviving the past: Cultural Heritage meets Virtual Reality,"
in Proceedings of the
"Virtual Reality, Archaeology, and Cultural Heritage",
(VAST 2001), Glyfada, near Athens, Greece, November 2001.
Abstract: The use of immersive virtual reality (VR) systems in museums is a recent trend, as the development of new interactive technologies has inevitably impacted the more traditional sciences and arts. This is more evident in the case of novel interactive technologies that fascinate the broad public, as has always been the case with virtual reality. The increasing development of VR technologies has matured enough to expand research from the military and scientific visualization realm into more multidisciplinary areas, such as education, art and entertainment. This paper analyzes the interactive virtual environments developed at an institution of informal education and discusses the issues involved in developing immersive interactive virtual archaeology projects for the broad public.
Paper: 173K
Images & Movies:
BibTex reference:
@InProceedings{Gaitatzes:2001,
author = {Gaitatzes, Athanasios and Christopoulos, Dimitrios and Roussou, Maria},
title = {{Reviving the Past: Cultural Heritage meets Virtual Reality}},
year = {2001},
isbn = {1-58113-447-9},
location = {Glyfada, Greece},
pages = {103--110},
doi = {10.1145/584993.585011},
publisher = {ACM},
address = {New York, NY, USA},
booktitle = {2001 Conference on Virtual Reality, Archeology, and Cultural Heritage (VAST)},
keywords = {Computer Archaeology,Cultural Heritage,Education,Immersion,Virtual Reality},
}
Gaitatzes A., Christopoulos D., Voulgari A., Roussou M.,
"Hellenic Cultural Heritage through Immersive Virtual Archaeology,"
in Proceedings of the
"6th International Conference on Virtual Systems & MultiMedia",
(VSMM 2000) Gifu, Japan 4-6 October 2000.
[ 146K]
Fragomeni J.M., Hillberry B.H., Sanders Jr. T.H., Gaitatzes A.G., 1989,
"Integration of Microstructural Development and Properties Design into the CAD/CAM Environment," in Proceedings of the
"Microstructural Development and Control in Materials Processing",
presented at the Annual Meeting of the American Society of Mechanical Engineers, MD-Vol. 14, pp.1-9.