Report 4: Volumetric data, Computational Photography

glegrady
Posts: 203
Joined: Wed Sep 22, 2010 12:26 pm

Report 4: Volumetric data, Computational Photography

Post by glegrady » Mon Oct 05, 2020 1:13 pm

MAT594GL Techniques, History & Aesthetics of the Computational Photographic Image
https://www.mat.ucsb.edu/~g.legrady/aca ... f594b.html

Please provide a response to any of the material covered in this week's two presentations by clicking on "Post Reply". Consider this to be a journal to be viewed by class members. The idea is to share thoughts, links to other information, and anything that may be of interest to you and the topic at hand.


The report for this topic is due by November 10, 2020, but each of your submissions can be updated throughout the course.
George Legrady
legrady@mat.ucsb.edu

zhangweidilydia
Posts: 12
Joined: Fri Jan 19, 2018 11:09 am

Re: Report 4: Volumetric data, Computational Photography

Post by zhangweidilydia » Sun Nov 08, 2020 10:49 pm

This week's topic is interesting: I am familiar with many of the examples presented in the PDF, and it is always great to look through them again.
It reminds me of one of the art papers published at this year's SIGGRAPH conference: Pixel of Matter: New Ways of Seeing with an Active Volumetric Filmmaking System.
We introduce an installation art project using the active volumetric filmmaking technology to investigate its possibilities in art practice. To do that, we developed a system to film volumetric video in real time, thereby allowing its users to capture large environments and objects without fixed placement or preinstallation of cameras. Active volumetric filmmaking helps us realize the digital reconstruction of physical space in real time and can be expected to ultimately facilitate the coexistence of real and virtual space
They also developed an artwork using this system, Pixel of Matter: http://seonghoonban.com/procjects/pixel-of-matter/

‘Pixel of Matter’ starts with the question of how to recognize the participant (real world) from the point of view of the art installation (computer). In this work, instead of a fixed camera, a moving camera recognizes changes of position through both binocular disparity and neural network analysis and subsequently shows the process of computer vision perceiving space in the same way that a real person perceives space. Through this process, this work will derive new methods of perception regarding how the installation can perceive the real world.

‘Pixel of Matter’ consists of a steel hardware structure and software. The structure is a hemisphere with a diameter of 3 m, hanging over viewers’ heads and rotating to capture the space in 360°. Then, the depth (RGB-D) images from the moving positions are reconstructed to fill the volume gradually. The process of systematically growing 3D pixels in the void space was visualized using the concept of the first material synthesis process—a metaphor of the Big Bang alluded to by the newly formed material in the digital 3D space. The pixels that compose the volumetric shapes of the viewers are decomposed, and each pixel is scattered in space to create a new universe.

‘Pixel of Matter’ allows you to experience the coexistence of digital space and real space. The newly synthesized digital matter is displayed in various perspectives through multiple screens, allowing viewers to recognize the understanding of their existence.
What makes this project special is that the machines become the principal agent overseeing human beings.
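
Out of curiosity about the "fill the volume gradually" step described above, here is a minimal Python sketch of the general idea: back-project each RGB-D frame into 3D and mark the voxels it touches, so the reconstruction grows as the camera moves. This is not the authors' actual system; the intrinsics, poses, and frame source are hypothetical placeholders.

import numpy as np

def backproject(depth, fx, fy, cx, cy):
    # Pinhole back-projection: depth image (meters) -> camera-space 3D points.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                      # drop invalid (zero-depth) pixels

def accumulate(depth_frames, poses, fx, fy, cx, cy, voxel=0.05, extent=5.0):
    # Mark every voxel observed by any frame; the volume "fills in" as the camera moves.
    n = int(2 * extent / voxel)
    occupied = np.zeros((n, n, n), dtype=bool)
    for depth, pose in zip(depth_frames, poses):   # pose: 4x4 camera-to-world matrix
        pts = backproject(depth, fx, fy, cx, cy)
        world = (np.c_[pts, np.ones(len(pts))] @ pose.T)[:, :3]
        idx = ((world + extent) / voxel).astype(int)
        inside = np.all((idx >= 0) & (idx < n), axis=1)
        occupied[tuple(idx[inside].T)] = True
    return occupied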

merttoka
Posts: 21
Joined: Wed Jan 11, 2017 10:42 am

Re: Report 4: Volumetric data, Computational Photography

Post by merttoka » Mon Nov 09, 2020 4:08 am

With the introduction of the first commercially available depth sensors, our civilization has adopted infrared cameras at an increasing rate. In just one decade, they evolved from niche products that only technology enthusiasts used into devices that millions of people depend on to unlock their smartphones. These devices gave us the ability to digitize our physical environments with 3D surface reconstruction algorithms. Even though the technique has become commonplace, I still find the procedure used by most of these devices quite poetic: intruding on physical space with invisible infrared projections. I wonder if our daily activities are confusing any animals that can see the infrared portion of the electromagnetic spectrum.
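
As a side note on how these projector-based sensors work, the depth measurement comes from triangulation: a projected infrared dot observed with a pixel disparity d by a camera offset from the projector by a baseline b lies at depth z = f * b / d. A tiny sketch of that relation (the numbers below are typical orders of magnitude, not any specific device's spec):

import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # z = f * b / d -- the basic triangulation relation used by IR depth sensors.
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        z = focal_px * baseline_m / d
    z[~np.isfinite(z)] = 0.0            # unmatched dots -> no depth estimate
    return z

# Example: a 580 px focal length and 7.5 cm baseline with a 20 px disparity
# gives roughly 2.2 m.
print(depth_from_disparity([20.0], 580.0, 0.075))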

Many artists and engineers indeed explore the invisible portion of the spectrum. An interesting example from the lecture was the "Wifi Camera."
This sculptural work demonstrates the dense activity in our everyday environments in the microwave range. I am curious to learn more about whether the device's shape is a requirement of the wifi camera set-up or an aesthetic decision; it reminds me of a cross-section of a satellite dish.

This week's discussion on computational photography reminded me of an interesting technical work that uses high-speed RGB cameras: speech reconstruction from visual information. It relies on the simple idea that ordinary objects vibrate when sound waves collide with them. Depending on the original sound's frequency, these vibrations can be too minuscule to detect with regular cameras. A stationary high-speed camera records 6,000 frames every second, analyzes the shifts in pixel colors, and reconstructs a noisy version of speech happening near the object.
They later extended the work to commodity cameras running at 60 fps, which was not enough to recover the full audio signal. However, they report that "it may still be good enough to identify the gender of a speaker in a room; the number of speakers; and even, given accurate enough information about the acoustic properties of speakers' voices, their identities". (http://people.csail.mit.edu/mrub/VisualMic)
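
The authors' real pipeline uses phase-based motion analysis in a complex steerable pyramid, but the core intuition, that tiny frame-to-frame intensity changes sampled at the camera's frame rate form an audio-band signal, can be sketched crudely in a few lines of Python. The video path and region of interest are hypothetical, and this naive version would only recover a very rough signal:

import cv2
import numpy as np
from scipy.io import wavfile

# Hypothetical high-speed clip (e.g. 6000 fps) of an object near a sound source.
cap = cv2.VideoCapture("object_6000fps.avi")
fps = cap.get(cv2.CAP_PROP_FPS) or 6000.0

samples = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    patch = gray[100:300, 200:400]     # region on the vibrating object (hypothetical)
    samples.append(patch.mean())       # one sample per frame -> signal at the frame rate

cap.release()
sig = np.array(samples)
sig -= sig.mean()                       # remove the DC offset
sig /= np.abs(sig).max() + 1e-9         # normalize to [-1, 1]
wavfile.write("recovered.wav", int(fps), (sig * 32767).astype(np.int16))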

ehrenzeller
Posts: 8
Joined: Thu Oct 22, 2020 7:10 pm

Re: Report 4: Volumetric data, Computational Photography

Post by ehrenzeller » Mon Nov 09, 2020 7:46 pm

This week’s topics certainly reawakened my inner scientist. Depth Sensing, Photogrammetry and Computational Photography undoubtedly have many applications in the entertainment and surveillance aspects of modern life. From AR games to self-driving cars, there are countless uses for these technological advancements, yet something I kept returning to was using these technologies, namely LIDAR, for mapping in the traditional sense—in GIS applications.

As someone without a current iPhone or iPad Pro, I was curious how their LiDAR sensors work and found this informative blog with GIFs of the tools in use: https://blog.halide.cam/lidar-peek-into ... d38910e9f8. Though LiDAR-fueled applications have yet to fully blossom, it appears that gaming, real estate photography, and interior design (namely the IKEA app) are among the first to fully embrace this new tool.
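
For reference, the geometry behind these sensors is simple to write down: a direct time-of-flight sensor measures the round-trip time of an emitted pulse, and each return, combined with the beam direction, yields one 3D point. A minimal sketch of that math (generic physics, not Apple's implementation):

import numpy as np

C = 299_792_458.0                      # speed of light in m/s

def tof_to_range(round_trip_s):
    # Direct time of flight: the pulse travels to the surface and back.
    return C * round_trip_s / 2.0

def to_xyz(rng, azimuth, elevation):
    # One beam direction (spherical angles, radians) plus a range gives one 3D point.
    x = rng * np.cos(elevation) * np.cos(azimuth)
    y = rng * np.cos(elevation) * np.sin(azimuth)
    z = rng * np.sin(elevation)
    return np.array([x, y, z])

print(tof_to_range(20e-9))                       # a 20 ns round trip is about 3 meters
print(to_xyz(tof_to_range(20e-9), 0.1, -0.05))   # one point of the resulting cloud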

According to https://enterprise.dji.com/news/detail/ ... atial-data, there are a number of novel uses for LiDAR-equipped drones, including:

“Accident Scenes
LiDAR is an active system that uses ultraviolet, near-infrared light to image objects requiring no external light for effective mapping. For example, when monitoring an interstate pileup at night, a LiDAR-equipped drone can easily be deployed, making a single pass over the site.
As a UAV-based solution, accurate information with visible details will be returned instantaneously, which can then be admitted as evidence in court. On the ground, wreckers and sanitation crews can begin the process of cleaning up quickly, saving thousands if not millions of dollars by freeing up commuters and spending less on accident personnel.

Forestry
The production of paper, maple syrup, and other critical products require the efficient management of productive forests, yet managing these vast areas can be overwhelming due to their sheer size. Traditional methods for assessing a forest inventory are time-consuming and inefficient – at times relying on rough estimates for large areas. Using a LiDAR-equipped drone, foresters can measure canopy heights, coverage, tree density and even measure the location and height of individual trees.
This removes the guesswork and inefficiencies of traditional methods, with the added benefit of LiDAR being able to conduct these measurements even when human eyes cannot as they do not rely on natural light to operate.

Agriculture & Landscaping
On large scale farms, landscape 3D mapping has become crucial to implementing effective irrigation systems. For example, on large rice plantations, farmers need to build levees, which require accurate knowledge of the terrain for a system of levees to work. A LiDAR-equipped drone can collect data in a single pass allows farmers or consultants to progress through a large field quickly. Previous methods proved cumbersome and would involve waiting for the fields to dry up enough so that trucks could maneuver the terrain.”

Terrain modeling, archaeology and mine inspections are also discussed on the site.

NOAA has also adopted aerial LIDAR to monitor shorelines and flooding (https://oceanservice.noaa.gov/facts/lidar.html).

k_parker
Posts: 9
Joined: Sun Oct 04, 2020 11:54 am

Re: Report 4: Volumetric data, Computational Photography

Post by k_parker » Tue Nov 10, 2020 12:30 pm

Prior to this week's discussion on volumetric data and computational photogrammetry, I was playing around with a few different iPhone apps, Trnio and Matterport (primarily used for real estate), to experiment with moving my studio practice/installation into a digital space. These are not really practical for capturing volumetric data, but it was fascinating that the technology could be applied to something like a free app with a pretty user-friendly image-capturing process.

While playing around with these I was unaware of the point cloud mesh process of photogrammetry. As a result I was caught up in the visual impact of the point cloud representations in Dan Holdsworth’s Continuous Topography, Antoine Delach’s Ghost Cells, and Arnaud Colinart’s Notes On Blindness, specifically the process of capturing space/depth and then placing it in this black void. There is something really interesting in meticulously capturing the exterior surfaces of an object/environment, something that seems natural to how we biologically and visually understand space, and then isolating this representation in an environment that is almost impossible to imagine: visually, without anything.
Dan Holdsworth’s Continuous Topography
With this concept, I am immediately led to the research on and representation of dark matter and dark energy. I am certainly not an expert in this topic; however, I am slightly familiar with it after reading the book “The 4 Percent Universe” (https://en.wikipedia.org/wiki/The_4_Percent_Universe) by Richard Panek this summer. The book is from 2011, so not the most up to date, but the general point is:
”The book's namesake comes from the scientific confusion over how ordinary matter makes up only four percent of the mass-energy in the universe, with the rest consisting of mysterious dark matter and dark energy that are both invisible and almost impossible to detect.[1] It is due to dark matter that galaxies are able to keep their shape, with the mass of dark matter creating enough gravitational force to hold the stars that make up a galaxy together. Dark energy, however, is a substance or force responsible for the accelerating expansion of the universe over time”
It is my understanding that dark matter and dark energy are somewhat blanket terms to describe the universal mass that theoretically must exist in the universe but does not interact with electromagnetic forces, and so is not visually detectable. The presence of dark matter is only detectable by its effects on visible objects. This relates to a few artists/works we looked at in class, such as the Wifi Camera.

Relating dark matter back to photogrammetry, specifically the visual impact of the point cloud, I am interested in the external contact point of an object used as a representation. The object is only understood in photogrammetry as the surface where it interacts with an external environment; essentially, the object can only be represented in its relationship to what it is not. Of course, photography and biological visual processes also rely on this external relationship to electromagnetic radiation; however, with photogrammetry, the mass/interior of the object can essentially be taken out of the equation.

A couple of related artists I have been looking at for this are Yasuaki Onishi (http://onys.net/reverse-of-volume-ec/) and Myung Keun Koh (http://www.koreanartistproject.com/eng_ ... _reg_no=36).
Yasuaki Onishi, Reverse of Volume RG (2012), glue, plastic sheet, and other materials, 470 x 1340 x 1210 cm, Rice Gallery, Houston, TX, USA
Myung Keun Koh

chadress
Posts: 8
Joined: Sun Oct 04, 2020 11:57 am

Re: Report 4: Volumetric data, Computational Photography

Post by chadress » Tue Nov 10, 2020 3:44 pm

Dan Holdsworth, from the series Continuous Topography



Trace Manipulation
_____


As we progress along our explorations of the history of the computational image, I continue (perhaps stubbornly) to measure these technological developments against what I consider to be the photograph’s fundamental achievement: its ability to capture some trace of objective reality. Much of what we have discussed so far in class can be understood as a sort of archaeology of trace manipulation. I will use the term ‘trace’ here both in the theoretical sense and as a descriptor of the act itself. For its theoretical underpinnings, I am alluding to its continued use (creating its own trace… a meta-trace, perhaps) in discussions of photography, from Peirce to Benjamin and others such as Sontag and Krauss.

We’ve duplicated, distorted, generated, visualized, constructed, de-constructed and perhaps re-constructed documents first created with light bouncing off real objects. Lidar and other depth sensing technologies take me back to the earliest beginnings of photography, specifically the Camera Lucida. Both technologies offered a means to trace reality, a 2D construct in the case of the Camera Lucida (by hand drawing the projected image) and a 3D construct in the case of lidar (multiple laser points in both the visible and infrared spectrum) and photogrammetry.

While these new technologies are altering how we interact with the world writ large, from self-driving cars to (as Mert and Alex point out) unlocking our iPhones, artists are also finding novel ways to alter reality, continuing this manipulation of the trace.

I was previously familiar with Dan Holdsworth’s work. In his series Continuous Topography (2016-2018) and the more recent Acceleration Structures (2019-2020), he uses small drones equipped with Lidar to map spaces of disappearance, particularly retreating glaciers. Holdsworth uses this initial data as a point cloud to create 3D topographies. These are then texture-mapped to simulate a realistic landscape, but only partially, for Holdsworth leaves certain areas of the scanned terrain unfinished and exposed. This mesh of negative space, a computer-generated geometric framework, references what is lost when the glacier melts, retreating and returning with the seasonal changes, or perhaps retreating forever. He then makes 2D prints for display, or animates the topographies in 3D, allowing the viewer to move in, around, and through these simulations.
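
For anyone curious about what the point-cloud-to-surface step looks like in practice, here is a rough sketch using the open-source Open3D library. It is a generic reconstruction recipe, not Holdsworth’s actual pipeline, and the input file name is hypothetical:

import numpy as np
import open3d as o3d

# Hypothetical drone scan exported as a point cloud.
pcd = o3d.io.read_point_cloud("glacier_scan.ply")
pcd.estimate_normals()          # Poisson reconstruction needs oriented normals

# Fit a surface to the points; 'depth' controls the mesh resolution.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Drop poorly supported vertices; the resulting holes are loosely analogous to the
# exposed, unfinished areas described above.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

o3d.io.write_triangle_mesh("glacier_mesh.ply", mesh)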

Dan Holdsworth, from the series Continuous Topography

Weidi’s presentation of her work was especially intriguing, and in some ways it mirrors a few of the technological approaches found in Holdsworth’s pieces. Weidi’s project Volume of Voids (2020) takes covid-induced social-distancing space as its basis, a conceptual use of negative space first captured photographically. This virtual no man’s land, a void of viral fear, is then turned into a three-dimensional point cloud, which is further manipulated into three-dimensional sculptures and physically rendered via 3D printing techniques. These sculptures, which Weidi refers to as “artefacts,” are then paired with pandemic-related language gleaned from news sources.

Weidi Zhang, Volume of Voids, 2020

In both Holdsworth’s and Weidi’s work, Benjamin’s aura is surely displaced, but a trace of reality remains.

_______
Links:
http://holdsworth.works/
https://www.zhangweidi.com/vov

yichenli
Posts: 14
Joined: Mon Apr 16, 2018 10:23 am

Re: Report 4: Volumetric data, Computational Photography

Post by yichenli » Tue Dec 01, 2020 10:40 pm

Katie and Chad both brought up the fragmented aesthetic of scanned scenes as surfaces and traces. I would like to add something to this discussion by looking at an example from lecture, Volker Kuchelmeister's Parragirls Past, Present (2017), an "immersive narrative experience" consisting of 3D scans of the Parramatta Girls' Home (1887-1974) and interviews with the women who were institutionalized by welfare services and abused there. Judging from the trailer and excerpt video, the artwork is very moving; however, I will mostly be using it as an example to look at the general limitations of using "broken" 3D models to convey fragmented memory.

The artist chose this visual language since, like episodic memory, it is "mutable, reconstructed and fragmented":
With one trend in CGI to generate ever more realistic depictions, this project takes a deliberate step back and makes use of a primitive reconstructed reality, the point-cloud representation. It is to achieve a visual language, which in my view, is compatible with how episodic memory operates. It is mutable, reconstructed and fragmented. This aesthetic sets the tone and creates the atmosphere for a somber narrative that is meant to engage a viewer on a emotional level to ultimately promote empathy towards the affected women telling their stories.
The use of 3D scanning for heritage preservation (i.e., sculptures, architecture) is not uncommon; Parragirls caught my attention since it seems to suggest a link between heritage/memory and ruin. Despite the building's interior being in a state of disrepair (as documented in video taken by the artist), its exterior seems quite well preserved (parragirls.org.au). After the closure of the Home, the site was used as a detention center for women and a children's shelter until 1983, and as administrative offices of the child welfare department from 1983 to 2010 (one could see its use as a child welfare services admin office as belonging to the same continuum as the abuse and violence). These uses following the closure of the Home may be why its exterior is so well kept. Compared to the current state of the Home, its counterpart in the artwork almost seems like a preemptive ruination.

Although defamiliarization or alienation is an important artistic strategy (the work might look less powerful and somber if the artist had used video footage), as an art history major I can't help but notice this discrepancy between the relatively recent occupation of the Home and the distance put between the viewer and the physical site. Personally, it seems like the ruin-like appearance of the work and its suggested "pastness" may lead one to quarantine the violence in the past and overlook the continuities from then, to the 2010s, to the present. If the "somber" tone is set by the visual language of the point cloud model of the site, suggesting incompleteness or ruin, I wonder whether it is actually necessary or appropriate to visualize a ruined building to convey a somber narrative told by traumatized people.

I feel like the point cloud, as a visualization of data points (and, more importantly to the viewer of the artwork, of the incompleteness of the data), is meant for reconstructing and surveying, similar to how historians deal with fragmented and never-complete pieces of evidence. The perspective looks toward a distanced past (I noticed that Chad used the word "archaeology"). Despite the associations one may have from seeing point cloud models, the incompleteness of the data is different from the fragmentation of personal memory.

Bibliography
Volker Kuchelmeister, Parragirls Past, Present (2017). https://kuchelmeister.net/portfolio/par ... t-present/
Parragirls http://www.parragirls.org.au/parramatta-girls-home.php

wqiu
Posts: 14
Joined: Sun Oct 04, 2020 12:15 pm

Re: Report 4: Volumetric data, Computational Photography

Post by wqiu » Tue Dec 15, 2020 11:09 pm

I would like to share a very interesting piece of research presented at SIGGRAPH 2020.

Immersive Light Field Video with a Layered Mesh Representation
https://augmentedperception.github.io/deepviewvideo/
Light field video streaming
Traditional videos/photos are taken from a single vantage point. Light field imagery, however, is captured from multiple vantage points. As a result, the captured imagery can be refocused to different distances in the scene (as with the Lytro camera) or viewed from different angles (as in the Blade Runner Esper photo analysis). The latter feature is very useful for VR experiences. When a user wears a VR headset, the imagery in the headset is updated to follow the user's head movements to provide an immersive experience. However, if the imagery is taken by a 360-degree camera from one single vantage position, the imagery can change accordingly when the head turns around, but it cannot change correctly when the head moves to another location, which causes discomfort to users.

With the help of camera arrays of different designs, light field imagery can be captured. This allows users to move within a larger space, rather than a single fixed position, before the imagery adjustment becomes incorrect. The size of the allowed movable space is determined by the design of the camera rig; the paper mentioned above allows the user to move his/her head within a spherical space with a diameter of 70 centimeters.

There has been research utilizing deep learning techniques to infer, from images taken at one vantage point, the images that would be seen from others. This helps the user experience, but the state-of-the-art research in this direction still produces perceivable artifacts.

With the capture problem solved, the question becomes how to play the light field back efficiently, specifically for video. Each frame of a light field video consists of images taken by multiple cameras, 46 cameras in the paper. This huge volume of data is impossible for a computer to process at a refresh rate high enough, such as 90 fps, to avoid making users feel dizzy.
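
A back-of-the-envelope calculation gives a feel for the scale (the per-camera resolution and frame rate below are hypothetical, since I don't have the rig's exact specs at hand):

# Hypothetical per-camera figures, purely to illustrate the order of magnitude.
cameras = 46
width, height = 2048, 1536      # assumed resolution per camera
bytes_per_pixel = 3             # 8-bit RGB
fps = 30                        # assumed capture rate of the rig

raw_rate = cameras * width * height * bytes_per_pixel * fps
print(raw_rate / 1e9, "GB/s of raw pixels")      # roughly 13 GB/s

Even with these conservative assumptions, the raw stream is far beyond what consumer hardware can decode, let alone render at a 90 Hz headset refresh rate, which is why an aggressive compression scheme is needed.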

The 46 images taken at the same time for one video frame are called a multi-plane image. Given the current perspective, which is defined by the head's position and orientation, this paper uses machine learning to determine which part of each image to use to stitch the final view. By using only the cropped areas of the images, it dramatically compresses the volume of data. The image crops are used as textures on the 3D model of the scene, which is very efficient to render in real time. The textures are then grouped together to make a texture atlas, a 2D image where all of the image crops are laid out. When streaming the light field video, the texture atlas is the actual image data to be transported. Since it consists of 2D image frames, it can be compressed further with video compression techniques such as H.265 without dramatically sacrificing playback quality.
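
The atlas idea itself is easy to sketch: pack the per-frame crops into one large 2D image and record where each crop landed so the renderer can look up its texture by UV rectangle. Below is a toy shelf-packing version in Python, nothing like the paper's optimized layout:

import numpy as np

def pack_atlas(crops, atlas_w=4096):
    # Place crops left to right in rows ("shelves"); return the atlas and each crop's UV rect.
    x, y, shelf_h = 0, 0, 0
    placements = []
    for c in crops:
        h, w = c.shape[:2]
        if x + w > atlas_w:                        # row full: start a new shelf
            x, y, shelf_h = 0, y + shelf_h, 0
        placements.append((x, y, w, h))
        shelf_h = max(shelf_h, h)
        x += w
    atlas = np.zeros((y + shelf_h, atlas_w, 3), dtype=np.uint8)
    for c, (px, py, w, h) in zip(crops, placements):
        atlas[py:py + h, px:px + w] = c
    ah, aw = atlas.shape[:2]                       # UVs in [0, 1] for the renderer
    uvs = [(px / aw, py / ah, (px + w) / aw, (py + h) / ah) for px, py, w, h in placements]
    return atlas, uvs

# Stand-in crops of random sizes; the real ones would come from the 46 cameras.
crops = [np.random.randint(0, 255, (np.random.randint(64, 256), np.random.randint(64, 256), 3),
                           dtype=np.uint8) for _ in range(46)]
atlas, uvs = pack_atlas(crops)
print(atlas.shape, len(uvs))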



Video atlas
Constructed result
