PhD Research 🎓

Abstract

Digital multimedia content and presentation technologies are rapidly growing in sophistication and are now capable of describing detailed representations of the physical world. 3D exploration experiences allow people to appreciate, understand, and interact with intrinsically virtual objects.

Communicating information on objects requires the ability to explore them from different angles, as well as to mix highly photorealistic or illustrative presentations of the objects themselves with additional data that provides further insight, typically represented in the form of annotations. Effectively providing these capabilities requires solving important problems in visualization and user interaction.

In this thesis, I studied these problems in the cultural heritage computing domain, focusing on the very common and important special case of mostly planar, but visually, geometrically, and semantically rich objects. These include generally flat objects with a standard frontal viewing direction (e.g., paintings, inscriptions, bas-reliefs), as well as visualizations of fully 3D objects from particular points of view (e.g., canonical views of buildings or statues). Selecting a precise application domain and a specific presentation mode allowed me to concentrate on the well-defined use case of exploring annotated relightable stratigraphic models (in particular, for local and remote museum presentation).

My main results and contributions to the state of the art have been a novel technique for interactively controlling visualization lenses while automatically maintaining good focus-and-context parameters, a novel approach for avoiding clutter in an annotated model and for guiding users towards interesting areas, and a method for structuring audio-visual object annotations into a graph and for using that graph to improve guidance and support storytelling and automated tours.
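
As a rough, illustrative sketch of the last contribution, the Python snippet below shows one possible way to organize audio-visual annotations into a directed graph and derive an automated tour by walking it; the class, field, and function names are hypothetical and do not come from the thesis implementation.

```python
# Minimal sketch (hypothetical names): annotations stored as graph nodes,
# "suggested next step" relations as directed edges, and an automated tour
# obtained by walking the graph from a chosen starting annotation.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    ident: str
    region: tuple        # (x, y, w, h) image-space area the annotation refers to
    media: str           # path/URL of the audio-visual content
    next_ids: list = field(default_factory=list)  # outgoing edges

def automated_tour(annotations: dict, start_id: str) -> list:
    """Depth-first walk over the annotation graph, visiting each node once."""
    order, stack, seen = [], [start_id], set()
    while stack:
        ident = stack.pop()
        if ident in seen:
            continue
        seen.add(ident)
        order.append(annotations[ident])
        # Push successors in reverse so they are visited in authoring order.
        stack.extend(reversed(annotations[ident].next_ids))
    return order

if __name__ == "__main__":
    graph = {
        "overview": Annotation("overview", (0, 0, 1, 1), "overview.mp4", ["detail_a", "detail_b"]),
        "detail_a": Annotation("detail_a", (0.1, 0.2, 0.2, 0.2), "detail_a.mp4"),
        "detail_b": Annotation("detail_b", (0.6, 0.5, 0.3, 0.3), "detail_b.mp4"),
    }
    print([a.ident for a in automated_tour(graph, "overview")])
```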

I demonstrated the effectiveness and potential of these techniques through interactive exploration sessions on various screen sizes and types, ranging from desktop devices to large-screen displays for a walk-up-and-use museum installation.

Keywords: #ComputerGraphics #HumanComputerInteraction #InteractiveLenses #FocusAndContext #AnnotatedModels #CulturalHeritageComputing

Scalable Exploration of Complex Objects and Environments Beyond Plain Visual Replication [Thesis]

M. Ahsan, “Scalable exploration of complex objects and environments beyond plain visual replication,” Ph.D. thesis, 2023.

Handle: hdl.handle.net/11584/355681 [PDF]

Demonstration and Pilot Videos 

Joint Camera and Lens Control for Focus-and-Context Exploration

This work demonstrates a novel approach for assisting users in 2D data exploration with an interactive lens. Our first key contribution is a novel user interface that jointly controls the lens and the camera to support effective focus-and-context exploration.
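
The following toy sketch conveys the general flavor of such a coupling, assuming a simple scheme in which the camera pans whenever the lens is dragged outside an inner comfort region of the viewport; the function name, the margin parameter, and the panning rule are illustrative assumptions, not the interface described in the thesis.

```python
# Toy illustration (hypothetical parameters): keep the lens inside an inner
# "comfort" rectangle of the viewport by panning the camera when the user
# drags the lens toward the border, so focus (lens) and context (camera)
# move together.
def joint_lens_camera_update(lens_pos, camera_pos, viewport, margin=0.15):
    """lens_pos and camera_pos are (x, y) in world units; viewport is
    (x, y, width, height) of the visible world region."""
    vx, vy, vw, vh = viewport
    pan_x = pan_y = 0.0
    # Comfort bounds, a 'margin' fraction away from each viewport border.
    lo_x, hi_x = vx + margin * vw, vx + (1.0 - margin) * vw
    lo_y, hi_y = vy + margin * vh, vy + (1.0 - margin) * vh
    if lens_pos[0] < lo_x:
        pan_x = lens_pos[0] - lo_x   # negative: pan left so the lens re-enters
    elif lens_pos[0] > hi_x:
        pan_x = lens_pos[0] - hi_x   # positive: pan right
    if lens_pos[1] < lo_y:
        pan_y = lens_pos[1] - lo_y
    elif lens_pos[1] > hi_y:
        pan_y = lens_pos[1] - hi_y
    return (camera_pos[0] + pan_x, camera_pos[1] + pan_y)

# Example: lens dragged close to the right border of a 100x100 view at the origin.
print(joint_lens_camera_update((95.0, 50.0), (50.0, 50.0), (0.0, 0.0, 100.0, 100.0)))
```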


Assisted and automatic navigation in an annotated model

In 2D data exploration, this work exploits and extends the concept of data annotations to provide guidance in discovering interesting areas: while the user navigates with the lens, relevant annotations are suggested and displayed without introducing clutter.
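
A minimal sketch of one way to obtain such clutter-free suggestions is shown below, assuming a simple scoring rule that ranks annotations by their overlap with the current lens region and displays only the top few; the function names and the scoring heuristic are assumptions for illustration, not the method used in this work.

```python
# Illustrative sketch (assumed scoring rule): suggest only the few annotations
# whose regions overlap the current lens area the most, hiding the rest to
# avoid cluttering the view.
def overlap_area(a, b):
    """Intersection area of two axis-aligned rectangles (x, y, w, h)."""
    w = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    h = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def suggest_annotations(lens_rect, annotations, max_visible=3):
    """annotations: list of (label, rect) pairs; returns at most max_visible
    labels, ordered by decreasing overlap with the lens."""
    scored = [(overlap_area(lens_rect, rect), label) for label, rect in annotations]
    scored = [(score, label) for score, label in scored if score > 0.0]
    scored.sort(reverse=True)
    return [label for _, label in scored[:max_visible]]

print(suggest_annotations(
    (10, 10, 30, 30),
    [("halo", (12, 12, 5, 5)), ("signature", (200, 200, 10, 10)), ("crack", (25, 25, 40, 40))],
))
```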

Acquisition, reconstruction, and exploration of paintings from the retable of San Bernardino

This work demonstrates the acquisition, reconstruction, and scalable exploration of the paintings from the retable of San Bernardino.


Acquisition, reconstruction, and exploration of the Stele di Nora

This work demonstrates the acquisition, reconstruction, and scalable exploration of the Stele di Nora.