Project 1 NOISE Instructions

glegrady
Posts: 203
Joined: Wed Sep 22, 2010 12:26 pm

Project 1 NOISE Instructions

Post by glegrady » Fri Jan 14, 2011 11:00 am

Please post your first NOISE projects in this section:

Select the topic category (arts 102 -> Winter 2011 -> Project 1 Noise),
then click "POST A REPLY", then:

Add a precise title, something like "Proj 1: Noise & Form" (or customize it slightly to describe your project).
Paste in a paragraph of descriptive text.
Then add an image by going below the text box and clicking on "ADD AN ATTACHMENT".
Select your image with BROWSE, then click on "ADD THE FILE".
Then click on "PLACE INLINE".
When you are done with all of this, go back and add more text, then more images,
and finally click the "SUBMIT" tab below the text box. You can also click SUBMIT as you go along (to save work in progress) and then re-open the post and add more.

You can go back and correct your previous posts, but you can't change someone else's post.

By clicking on "QUICK REPLY" you can respond to someone else's post.
George Legrady
legrady@mat.ucsb.edu

kyle_gordon

Re: Project 1 NOISE Instructions

Post by kyle_gordon » Mon Jan 17, 2011 4:59 pm

Project #1: Noise & Form

Kyle Gordon's Images:

All of my images draw on the parts of Harmon's article about the recognition of faces, specifically the sections on toggling noise and blurring. Each image builds on the principle of precisely blurred portraits discussed at the end of Harmon's article, which describes how, as resolution is scaled down and then back up, details and features of an image disappear or blend together, making the image less distinct. I then applied a series of Gaussian and uniform noise passes to bring back some of the features of my images, usually followed by a blur. The result is a less distinct but still recognizable image, which is what I hoped to achieve in some of my images; in others, I sought to make the subject unrecognizable or to have it read as a different figure.
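
As a rough illustration of this pipeline (downscale, upscale, add noise, then blur), here is a small Python/Pillow sketch; the file names and parameter values are placeholders rather than the exact settings listed below:

# Scale-down / scale-up / noise / blur pipeline (Pillow + numpy).
# File names and parameter values are illustrative placeholders.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("portrait.jpg").convert("RGB")

# 1. Destroy high-frequency detail: shrink to a tiny grid, then scale back up.
small = img.resize((20, 20), Image.NEAREST)
blocky = small.resize((500, 500), Image.BICUBIC)

# 2. Add uniform noise (roughly comparable to Photoshop's percentage slider).
arr = np.asarray(blocky).astype(np.float32)
amount = 0.19 * 255  # ~19% uniform noise
arr += np.random.uniform(-amount, amount, arr.shape)
noisy = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

# 3. Gaussian blur to blend the noise back into soft features.
noisy.filter(ImageFilter.GaussianBlur(radius=4.8)).save("portrait_noise_blur.jpg")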

Image 1: Indian Girl
Indian-Girl.jpg
Original:
b9456523b312558b24407cf18bbb2406.jpg

Resize 20x20
• Resize 500x500
• Noise 18.92 (Uniform)
• Blur 4.8 (Gaussian)
• Levels 54/ 1.11/ 212
• Exposure .14, gamma .64
• Hue 137, saturation -32, lightness -7
• Exposure .41, offset .0061, gamma .28
• Equalize



Image 2: Fire at the Tea Party
Tea-Party.jpg
Original:
07845757646d437de139df3fbca22215.jpg
Resize 30x20
• Resize 600x467
• Noise 21.27 (Uniform)
• Blur 3.6 (Gaussian)
• Hue -35, saturation 32, lightness -41
• Equalize
• Channel Mixer Red +200, Green +200
• Levels 0/.33/255
• Noise 47.15 (U)



Image 3: The Rat
Rat.jpg
Original:
Arkanys_8___Fancy_rat_by_DianePhotos.jpg
Resize to 25x24
• Size to 500x480
• Add noise 37.74 (Gaussian)
• Add Blur 2.1 (Gaussian)
• Equalize
• Add noise 23.62 (Gaussian)
• Add blur 2.5 (Gaussian)
• Turn up hue slightly
• Change exposure to 1.28, offset to -.1367, and gamma to 1.26
• Blur 27 (Gaussian)
• Equalize
• Hue -58, saturation 100, lightness 54



Image 4: Team Rocket
Team-Rocket.jpg
Original:
team_rocket_by_ryoko_demon-d2yan8t.jpg
Resize to 21x31
• Resize 500x731
• Add noise 30.68 (Uniform)
• Blur 3.7 (Gaussian)
• Equalize
• Offset -.1898, gamma 1.94
• Hue -29
• Add noise 51.85 (Gaussian)
• Add blur 2.5 (Gaussian)
• Equalize
• Exposure 6.61, offset .0388, gamma .7
• Hue +56, saturation +93, lightness -37



Image 5: Violin
Violin.jpg
Original:
Kamelot_I_by_middeneaht.jpg
Resize 20x13
• Resize 600x390
• Add noise 28.33 (Uniform)
• Add blur 3.6 (Gaussian)
• Levels 47/.67 / 215
• Blur 12.5 (Gaussian)
• Hue 97, saturation 36, lightness -5

jguzman02
Posts: 6
Joined: Tue Jan 11, 2011 2:02 pm

Re: Project 1 NOISE Instructions

Post by jguzman02 » Mon Jan 17, 2011 11:06 pm

Project #1: Noise and Form

Jessica Guzman's Images:

In his article, Leon Harmon experiments with the human ability to recognize blurred and pixelated images, and with how closely people can match them back to the originals. My overall goal for this project was to play on the recognition of the images, as well as to test the differences between picture alterations. I wanted to experiment with different types of blurring and amounts of noise. The images have also been scaled up and down to test the effects of performing similar or identical operations repeatedly; it was interesting to me how repeating size changes could alter the image in different ways. I also played with image qualities such as hue/saturation and brightness/contrast. Some of the images were made to try to resemble the originals, whereas others were made to resemble something different from the original.
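
As a small aside on the resampling methods mentioned above, here is a Python/Pillow sketch that restores the same downsized image with different filters (Pillow's NEAREST, BILINEAR, and BICUBIC roughly stand in for Photoshop's nearest neighbor, bilinear, and "smooth" options); the file name and sizes are placeholders:

# Compare how different resampling filters reconstruct the same low-resolution image.
# File name and sizes are illustrative placeholders.
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
small = img.resize((10, 10), Image.NEAREST)  # throw away detail first

for name, method in [("nearest", Image.NEAREST),
                     ("bilinear", Image.BILINEAR),
                     ("smooth", Image.BICUBIC)]:
    restored = small.resize((350, 350), method)  # same pixels, different interpolation
    restored.save("photo_%s.jpg" % name)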


Image 1: Colored Hair Woman
noise2.jpg
Resize 8 x 516 (nearest neighbor)
Resize 800 x 516 (smooth)
Noise 7.16 (Gaussian)
Resize 8 x 516 (smooth)
Resize 800 x 1000 (smooth)
Curves output 104, input 93

Image 2: Alley Cat
noise4.jpg
Resize 3 x 350 (nearest neighbor)
Resize 350 x 350 (smooth)
Noise 26 (Gaussian)
Brightness -34, Contrast 26
Resize 10 x 10 (nearest neighbor)
Resize 350 x 350 (smooth)
Change offset -.1531, gamma 1.24

Image 3: Parade
noise3.jpg
Resize 9 x 742 (bilinear)
Resize 989 x 742 (bilinear)
Contrast 67
Motion blur angle -68, distance 28
Contrast 70

Image 4: Computer Forest
noise5.jpg
Resize 900 x 5 (nearest neighbor)
Resize 900 x 586 (smooth)
Brightness -37, Contrast 36
Saturation 50

Image 5: Train Station
noise6.jpg
Noise 42.44 (uniform)
Resize 10 x 10 (smooth)
Resize 800 x 586 (bilinear)
Saturation 35

cgowdey
Posts: 5
Joined: Thu Jan 13, 2011 2:03 pm

Re: Project 1 NOISE Instructions

Post by cgowdey » Mon Jan 17, 2011 11:17 pm

Experiments in Degradation - Caitlin Gowdey

After reading Harmon's article, "The Recognition of Faces," the main point I focused on for this project emerged out of his statement (discussing low-frequency vs. high-frequency relationships and loss of detail):

"Recognition of the most strongly blurred cannot depend on the identification of features. The high-frequency information required to represent the eyes, the ears and the mouth is lost. Although some intermediate frequencies remain, their representation of the chin, the cheeks and the hair is not clear. The low-frequency information that relates to head shape, neck-and-shoulder geometry and gross hairline is all that remains unimpaired, yet this alone seems to be adequate for rather good recognition among individuals in a restricted population."

From this, I wanted to experiment with losing the high-frequency relationships while preserving low-frequency information, even in images whose pixels were otherwise entirely compressed, stretched, and manipulated. The images I chose started with random screenshots from the movie Escape from LA and then moved on to news photographs of the flooding happening in Haiti, Queensland, and Sri Lanka. The Escape from LA photos had interesting results: through the use of image compression, the unsharp mask, curves, noise, vibrance, and saturation, I was left with an image whose color scheme ended up entirely different from the original, without any manipulation of color balance or hue. The original image was almost entirely lost, save for the low-frequency information of the angle of the arm, the hood of the car, and other geometric shapes that at first glance seem lost, but when you squint, or compare the original to the manipulated version, you can see the original angles and shapes partly coming through. The other images reacted the same way; I never touched color balance or hue, and yet with each image the colors changed drastically through the processes of detail reduction (compression of the image size), along with the unsharp mask, curves, blur, and noise.

I then ventured away a bit from my original low-frequency focus and started to explore the nature of the pixels that were being pulled apart by the compression and decompression, forced to reveal their hidden color values: the values that compose shadows, the values hiding in water, all the colors that are there but invisible to the naked eye because they are used to form the values that become shapes. It seemed that it wasn't until things were stretched apart and sharpened that any "secret" values appeared, but once they were, the pixels hidden from the naked eye were unveiled. I thought the most interesting of the hidden color values came from images of water, which is why I switched from Escape from LA to pictures of floods.
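
For reference, a minimal Python/Pillow sketch of the basic move described above, collapsing one dimension, restoring it with nearest neighbor, then pushing the unsharp mask hard; the parameters and file names are placeholders, not the exact recipe steps listed below:

# Collapse the height, restore it with nearest neighbor, then oversharpen
# to pull apart the stretched pixels. All values are illustrative placeholders.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("still.jpg").convert("RGB")
w, h = img.size

squeezed = img.resize((w, 10), Image.BICUBIC)       # reduce height to 10 px
stretched = squeezed.resize((w, h), Image.NEAREST)  # bring it back as hard bands

# Aggressive unsharp mask (percent/radius/threshold mirror Photoshop's dialog)
sharpened = stretched.filter(ImageFilter.UnsharpMask(radius=20, percent=375, threshold=41))

# A little monochromatic (same offset on R, G, B) uniform noise
arr = np.asarray(sharpened).astype(np.float32)
mono = np.random.uniform(-30, 30, arr.shape[:2])    # one noise plane
arr += mono[..., None]                              # applied to all channels equally
Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8)).save("still_degraded.jpg")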
escapefromLA.jpg
escape.jpg
ESCAPE FROM LA:
Beginning width: 496
Beginning height: 312
Height: 312 reduced to 10 - method: bicubic sharper
Height: 10 brought back to 312 - method: nearest neighbor
Image size doubled 200%
Unsharp mask - Amount: 375% Radius: 20.7 pixels Threshold: 41 levels
monochromatic noise - uniform - 11.86%
Width: 992 reduced to 9 - back to 992 - method: nearest neighbor
Height: 624 reduced to 16 - back to 624 - method: nearest neighbor
Unsharp mask - Amount: 500% Radius: 31.1 px Threshold: 18 levels
Add noise - 7.16% - uniform
Equalize
Curves- output: 164 input: 47
haiti.jpg
haitimod.jpg
HAITI:
Width: 650
Height: 425
Width: 650 reduced to 20 - method - bicubic sharper
Width: 20 back to 650 - method - nearest neighbor
Blur: 3.6 px
Noise: 18.92%
Unsharp Mask: Amount: 167% Radius: 250 px Threshold: 40 levels
Image enlarge: 130%
Width: 845
Height: 553
Height: 553 reduced to 55 - method - bilinear
Height: 55 back to 553 - method - nearest neighbor
Width: 845 reduced to 12 - method - bicubic
Width: 12 back to 845 - method - nearest neighbor
Levels: 39, 3.16, 216
Unsharp Mask: Amount: 74% Radius: 18.4 px Threshold: 2 levels
Blur: 1.3 px
Unsharp Mask: Amount: 220% Radius: 1.8 px Threshold: 1 level

queensland.jpg
queenslandmod.jpg
QUEENSLAND:
Width: 1024
Height: 668 reduced to 7 and back to 668- method- nearest neighbor
Width: 1024 reduced to 10 and back to 1024 - method - nearest neighbor
Width: 1024 reduced to 24 - method- bicubic
Width: 24 back to 1024 - method - nearest neighbor
Height: 668 reduced to 18 - method - bicubic smoother
Height: 18 back to 668 - nearest neighbor
Noise: 18.92%
Unsharp Mask: Amount: 169% Radius: 96.5 px Threshold: 2 levels
Blur: Radius: 1.5 px
Width: 1024 -reduced to 100 px method - bicubic sharper
Width: 100 back to 1024 - method - nearest neighbor
Height: 668 reduced to 66 - method - bicubic smoother
Height: 66 back to 668 - method - nearest neighbor
Equalize
Curves - Output: 84 Input: 52
Noise: 9.51%
Equalize
Blur: .7 px
Levels: 19, 1.30, 234
Vibrance: +99
Saturation: -3
srilanka.jpg
srilankamod.jpg
SRI LANKA:
Width: 982 reduced to 8 - method - bilinear
Height: 620
Width: 8 back to 982 - method - nearest neighbor
Height: 620 reduced to 62 - bicubic sharper
Height: 62 back to 620 - nearest neighbor
Width: 982 reduced to 82 - method - nearest neighbor
Width: 82 back to 982 - method - bilinear
Noise: 25.98% - monochromatic
Levels: 40, .80, 196
Width: 982 reduced to 100 - method - nearest neighbor
Width: 100 back to 982 - method - nearest neighbor
Noise: 21.27%
Blur: 8.2 px
Curves: Output: 73 Input: 79
Noise: 4.80
Unsharp Mask: Amount: 220% Radius: 2.7 px Threshold: 1 level
Width: 982 reduced to 16 - method - bicubic
Width: 16 back to 982 - method - nearest neighbor

emilyrabinowitz
Posts: 3
Joined: Mon Jan 17, 2011 8:30 pm

Re: Project 1 NOISE Instructions

Post by emilyrabinowitz » Tue Jan 18, 2011 3:32 am

Project 1: Noise & Form: Discovering Patterns
Emily Rabinowitz

For this project, I built myself a set of conceptual rules analogous to a few different findings in The Recognition of Faces by Harmon.

On sketch artist's process: "Few observers, unless they are specially trained, can give satisfactory clues to appearance in words. Most can point to features similar to those they remember, however, and that is how the reconstruction artist usually begins."
To give clues to a criminal's appearance in words is like trying to paint photoshop algorithms by hand. Instead, I chose to "recognize" the "features" or the thumbprints of the transformations applied to the photos.

On perception: "An interesting and provocative characteristic of block portraits is that once recognition is achieved more apparent detail is noticed. It is as though the mind's eye superposes additional detail on the coarse optical image. Moreover, once a face is perceived it becomes difficult not to see it..."
Once recognition of the algorithm's visual patterns was achieved, I made sure to accentuate those patterns and only add to, not diminish, the level of detail that I could uncover.

More on perception: "...faces can be recognized as well as discriminated. It is possible not only to tell one from another but also to pick one from a large population and absolutely identify it, to perceive it as something previously known."
In order to accentuate found patterns, I relied on hue discrimination. By understanding the differences in hue among the various shapes, I could choose the hues that I wished to manipulate discretely.

More on sketch artist's process: "The first attempt, although obviously resembling the original photograph, differed from it in the depiction of important features and proportions. When limited feedback was allowed, however, there was rapid improvement."
The feedback I am allowing to influence my pieces is the "corrections" made by Photoshop. This includes transformations such as level adjustments and curves, and viewing the histogram. At times I chose to purposefully rebel against these corrections, such as making drastic changes in concavity on "curves" for Pattern17. Other times, I chose to follow Photoshop's advice on how best to balance the photo, such as in Pattern13, where I adjusted levels and curves constantly to make the range of the grayscale as balanced as possible.

My goal is to discover the visual patterns of the algorithms encoded in Photoshop while making transformations on pixels.
_________________________________________________________________________________
_________________________________________________________________________________
Pattern11
pattern11.jpg
Original
pattern11orig.jpg
Image Size 20x20 px (Nearest Neighbor)
Image Size 2000x2000 px (Bicubic Smoother)
Hue/Saturation RGB saturation +92
Noise: Median 100 px
Hue/Saturation RGB saturation +49; Greens sat. +47; Reds sat. -43; Yellows sat. -21
Highpass 4.6 px
Hue/Saturation Greens, Cyans, Blues, Yellows, Reds saturation +100
Levels 0 2.21 161; output 255 0

_________________________________________________________________________________
Pattern12
pattern12.jpg
Original
pattern12orig.jpg
Image Size 30x10 px (Nearest Neighbor)
Image Size 3000x1000 px (Bicubic Sharper)
Noise 33.03% (Uniform)
Noise: Median 3 px
Hue/Saturation Cyans sat. +100; Magentas sat. +62; Reds sat. +100 lightness +74
Levels 0 2.10 255
Curves RGB output:61 input:56; Blue output:164 input:181; Red output:138 input:107
Image Size 3000x3000 px (Bilinear)
Highpass 90.9 px
Noise: Median 90 px
Levels 77 .56 190
Vibrance -25

_________________________________________________________________________________
Pattern13
pattern13.jpg
Original
pattern13orig.jpg
Levels 0 1.43 207
Levels 42 1.23 255
Hue/Saturation RGB saturation +90
Noise: Median 85 px
Levels RGB 5 .95 255; Green 0 1.96 244
Curves RGB output: 234 input:203; Red out:155 in:155; Green out:79 in:17; Blue out:253 in:214
Noise: Median 40 px
Highpass 11.5 px
Highpass 11.5 px
Hue/Saturation Cyans sat. +100; Greens sat +100 lightness -100; RGB sat. -100
Image Size 1067x600 px (Bicubic)
Noise: Median 1 px
Levels 51 1.09 255

_________________________________________________________________________________
Pattern17
pattern17.jpg
Original
pattern17orig.jpg
Image Size 16x4 px (Nearest Neighbor)
Image Size 1600x1600 px (Bicubic Smoother)
Hue/Saturation RGB sat. +94
Noise: Median 93 px
Hue/Saturation saturation RGB +35; Yellows +51; Greens +65; Cyans +48; Blues +55; Magentas +54
Highpass 90.9 px
Hue/Saturation saturation Greens +52; Cyans +100; Yellows +25; Reds +73; Magentas +78
Curves RGB output:170 input:101
Noise: Dust & Scratches 5 px
Levels 0 .54 255
Hue/Saturation Blues hue +118 sat. +100; Greens hue -100; Cyans hue +15; Magentas hue +82 sat. +100 lightness +21
Highpass 250.0 px
Highpass 250.0 px
Highpass 250.0 px
Highpass 250.0 px

_________________________________________________________________________________
Pattern18
pattern18.jpg
Original
pattern18orig.jpg
Image Size 3x3 (Nearest Neighbor)
Image Size 3000x3000 (Bicubic Sharper)
Levels 0 .91 241
Hue/Saturation saturation Magentas +91; Blues +92; Cyans +88; Yellows +82; Reds +93
Highpass 32.8 px
Levels 127 1.00 134

_________________________________________________________________________________
_________________________________________________________________________________

Techniques:

Once an image has been thoroughly destroyed and manipulated, you can capture the underlying patterns by adjusting specific saturation levels, sending the image through a highpass filter, and adjusting the levels.

Median is a good transformation to use to add patterns to an image with little information.

Curves is a good tool to sculpt the banding of discrete hue families.
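
As a rough numpy/Pillow approximation of that saturation -> high pass -> levels sequence (Photoshop's High Pass is emulated here as "image minus its Gaussian blur, re-centered on gray"; all values and file names are placeholders):

# Approximate the saturation -> high pass -> levels sequence with Pillow/numpy.
# Photoshop's High Pass is emulated as (image - gaussian blur) + middle gray.
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

img = Image.open("pattern.jpg").convert("RGB")

# Boost saturation (1.0 = unchanged; ~1.9 is a strong push)
img = ImageEnhance.Color(img).enhance(1.9)

# High pass with a ~5 px radius
arr = np.asarray(img).astype(np.float32)
low = np.asarray(img.filter(ImageFilter.GaussianBlur(radius=5))).astype(np.float32)
high = np.clip(arr - low + 128, 0, 255)

# Simple levels adjustment: remap the [black, white] input range with a gamma
black, white, gamma = 20, 220, 1.2
out = np.clip((high - black) / (white - black), 0, 1) ** (1.0 / gamma) * 255
Image.fromarray(out.astype(np.uint8)).save("pattern_highpass.jpg")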

Additional discoveries:

Increasing the size of Pattern16 (not shown) added an interesting border that did not exist in the smaller image.

shaun
Posts: 6
Joined: Thu Jan 06, 2011 1:58 pm

Project 1, Noise, Form and Randomness

Post by shaun » Tue Jan 18, 2011 1:03 pm

To be added

jlcanterbury
Posts: 5
Joined: Tue Jan 11, 2011 2:11 pm

Re: Project 1 NOISE

Post by jlcanterbury » Tue Jan 18, 2011 1:42 pm

After reading Harmon's article, I wanted to experiment with how changing an original image, particularly reducing it to a very low resolution, would affect the viewer's ability to recognize the subject or relate it back to the original. I found all of my images by searching around randomly on Google Images. The changes I applied all started with bringing the image size down, then back up, creating a very low-resolution form of the original image. It is interesting that after these changes the image seems almost completely devoid of characteristics of the original; however, when you look closer you can see areas of color represented by one or more larger pixels, and forms begin to take shape when you compare the two. It is also interesting that once the low-res image is composed, adding noise or blur (which is technically further distortion) actually improves the ability to recognize forms.
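
To make that comparison concrete, here is a short Python/Pillow sketch that writes out three stages side by side (pixelated only, pixelated + noise, pixelated + noise + blur), loosely following the first recipe below; sizes, amounts, and file names are placeholders:

# Build three variants of the same low-res image so their recognizability can be compared.
# Sizes, noise amount, and file names are illustrative placeholders.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("scuba.jpg").convert("RGB")

blocky = img.resize((8, 8), Image.NEAREST).resize((800, 800), Image.NEAREST)
blocky.save("scuba_blocky.jpg")

arr = np.asarray(blocky).astype(np.float32)
arr += np.random.uniform(-0.15 * 255, 0.15 * 255, arr.shape)  # ~15% noise
noisy = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
noisy.save("scuba_blocky_noise.jpg")

noisy.filter(ImageFilter.GaussianBlur(radius=3)).save("scuba_blocky_noise_blur.jpg")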

scuba original
scooba.jpg
scuba
scooba1.jpg
original size: 800x600
resize 8x8, nearest neighbor
resize 800x800, nearest neighbor
add noise, 15%

train
train1.jpg
train
train.jpg
size:650x450
resize, 8x10, bicubic sharper
resize, 650x450
resize, 6x4 nearest neighbor
levels adjust
gaussian blur, 25%
equalize

space original
space.jpg
space
space_resize_equalize_noise.jpg
original size: 1473x1147
resize: 2x1147
resize 1473x1147
add noise 25%
equalize
resize 4x1147
resize 1473x1147

sumo original
sumo.jpg
sumo
sumo_resize_nearest_unsharp_resize.jpg
size: 299x344
resize, 6x6, nearest neighbor
resize, 300x300, nearest neighbor
image size, 200%
unsharp mask 15

davidgordon
Posts: 7
Joined: Tue Jan 11, 2011 2:07 pm

Proj. 1: Noise & Form - Critical Band Masking

Post by davidgordon » Tue Jan 18, 2011 1:55 pm

From the Harmon article, I was interested in the relationship of critical band masking to image perception. Harmon suggests that adding noise inhibits our perception of an image largely by masking its spatial frequencies, just as, in sound, a sine tone masks another sine tone occupying a close frequency range. According to the article, adding noise at close frequencies to those of the original image has the largest impact on recognition, while noise added outside the critical band tends to preserve the significant features of the image. Stromeyer (citation below) proposes that the critical band for vision is about 0.5-0.7 "octaves".

For this project, I wanted to apply the critical band masking concept to a process for adding noise to an image. I suspected that, given a critical band of around 0.5-0.7 octaves, noise spaced at frequencies of an octave or greater would create perceptually distinct noise patterns, since they occur at different frequency levels. I imagined this process could lead to balanced or aesthetically pleasing compositions. I thought one way to achieve this result might be to proportionally shrink an image to a width of 10 pixels, add noise, then proportionally increase the image size by greater than double, repeating the process several times.

For simplicity, I decided on a specific series of image widths: 10, 24, 52, 108, 246, 513, 1024. For image resampling, I used the "nearest neighbor" algorithm for most of the images, while the noise amounts varied between 10% and 30%, and the types of noise varied between monochromatic/chromatic and uniform/Gaussian. The images from this process seemed to have more balanced noise than earlier images I produced.
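
A Python/Pillow sketch of this octave-spaced process (shrink to a 10 px width, then step up through 24, 52, 108, 246, 513, and 1024 px with nearest neighbor, adding the same noise at each stage); the noise helper and file name are assumptions for illustration, not the exact Photoshop settings:

# Octave-spaced noise: add the same noise at every step of a widening ladder of sizes,
# so each noise layer sits in a different spatial-frequency band.
import numpy as np
from PIL import Image

def add_noise(img, amount=0.15):
    # ~15% chromatic Gaussian noise (placeholder for Photoshop's Add Noise)
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0, amount * 255, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

img = Image.open("closet.jpg").convert("RGB")
w, h = img.size

out = img
for target_w in [10, 24, 52, 108, 246, 513, 1024]:
    target_h = max(1, round(h * target_w / w))             # constrain proportions
    out = out.resize((target_w, target_h), Image.NEAREST)  # nearest neighbor resampling
    out = add_noise(out)                                   # same noise at every scale

out.save("closet_octave_noise.jpg")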

Article: "Spatial-Frequency Masking in Vision: Critical Bands and Spread of Masking" by Stromeyer, Charles F. and Bela Julesz. online:
http://www.opticsinfobase.org/view_arti ... 0%28CDL%29
Attachments
closet-edit.jpg
closet.JPG
1. Reduce image width to 10px (constrain proportions; nearest neighbor resampling); 2. Add 15% noise (gaussian, chromatic); 3. enlarge image width to 24px (constrain proportions; nearest neighbor resampling); 4. add noise with same settings, repeat process at these widths, adding the same noise each time: 52px, 108px, 246px, 513px, 1024px
candles-edit.jpg
candles.jpg
1. Reduce image width to 10px (constrain proportions; nearest neighbor resampling); 2. Add 15% noise (gaussian, chromatic); 3. enlarge image width to 24px (constrain proportions; nearest neighbor resampling); 4. add noise with same settings, repeat process at these widths, adding the same noise each time: 52px, 108px, 246px, 513px, 1024px
hotel-edit.jpg
hotel.jpg
1. Reduce image width to 10px (constrain proportions; bilinear resampling); 2. Add 15% noise (gaussian, chromatic); 3. enlarge image width to 24px (constrain proportions; nearest neighbor resampling); 4. add noise with same settings, repeat process at these widths, adding the same noise each time: 52px, 108px, 246px, 513px, 1024px
winter-edit2.jpg
winter.jpg
1. Reduce image width to 10px (constrain proportions; nearest neighbor resampling); 2. Add 15% noise (uniform, chromatic); 3. enlarge image width to 24px (constrain proportions; nearest neighbor resampling); 4. add noise with same settings, repeat process at these widths, adding the same noise each time: 52px, 108px, 246px, 513px, 1024px
winter-edit.jpg
winter.jpg
1. Reduce image width to 10px (constrain proportions; nearest neighbor resampling); 2. Add 15% noise (uniform, monochromatic); 3. enlarge image width to 24px (constrain proportions; nearest neighbor resampling); 4. add noise with same settings, repeat process at these widths, adding the same noise each time: 52px, 108px, 246px, 513px, 1024px

ariel
Posts: 15
Joined: Thu Sep 30, 2010 1:25 pm

Re: Project 1 NOISE Instructions

Post by ariel » Tue Jan 18, 2011 4:45 pm

In his article, Harmon explains that even as noise and blur are added to an image, it can remain recognizable. I chose mainly to use the addition of noise and blur to alter my images from their original state. Along with noise and blur, the hue and saturation were changed to see whether the original colors were altered, as well as how identifiable the image would remain. I also found it interesting that at high levels of blur and noise some percentage of people were still able to recognize the image from basic forms such as the eyes or lips. I tried to connect this to my images because I thought it would be interesting to see if an image would still be recognizable simply from forms and shapes shared between the two versions, such as where the shape of a roof was. In some images I chose to add more noise and blur, and in some I chose to keep them more similar to the original picture.

First Image: People in garden
art 1.jpg
Original:
frank-carter-people-walking-under-roof-of-pink-cherry-blossoms-kyoto-japan.jpg
Resize: 20 x 300
Resize: 800 x 300 (Bicubic Smoother)
Noise: 19 %
Resize: 15 x 300
Resize: 900 x 300 (Bilinear)
Noise: 25%
Resize: 25 x 300
Resize: 900 x 300 (Nearest Neighbor)
Radial Spin: 30%
Exposure
Motion Blur: 68 degree angle, 40 pixels

Second Image: Shine
art 4.jpg

Original:
architecture-01.jpg
Resize: 15x 631
Resize: 800 x 631
Noise: 20%
Hue and Saturation
Motion Blur: 45 degree angle, 81 pixels
Dust and Scratches: 89 pixels, 36 levels
Lens Blur
Radial Blur
Add Curves

Third Image: Garbage
art3.jpg
Original:
Knossou dump 2008-03-19-1.jpg
Resize: 15 x 960
Resize: 1000 x 960 (Bicubic Smoother)
Exposure
Noise: 75%
Shape Blur
Dust and Scratches: 84 pixels, 47 levels
Noise: 63%
Resize: 1500 x 960 (Bilinear)
Lens Blur
Radial Blur: 53 amount zoom
Resize: 800 x 960 (Nearest Neighbor)
Motion blur

Fourth Image: Sunset
art5.jpg
Original:
sunset_4_bg_010303.jpg
Resize: 20 x 768
Resize: 1000 x 768 (Bilinear)
Hue and Saturation
Noise: 30.68%
Motion Blur: 40 pixels
Levels
Dust and Scratches: 69 pixels, 109 levels
Brightness and contrast
Noise: 24%


Fifth Image: Beverly Hills
art2.jpg
Original
beverly_hills_rodeo_drive.jpg
Resize: 20 x 450
Resize: 1000 x 450 (bilinear)
Hue and Saturation
Motion Blur
Add Curves

shaun
Posts: 6
Joined: Thu Jan 06, 2011 1:58 pm

Project 1, Noise, Form and Randomness

Post by shaun » Tue Jan 18, 2011 5:46 pm

I'll update this more on Thursday, but here's a quick run through for a single image.

We begin with some random image (chosen automatically from among a random sample, see getimg.py), say, like this one:
Image

then obfuscate it in Photoshop:

Image

Lucky for us, it is already grayscale; otherwise we'd have to convert it (in Photoshop, or in MATLAB with the command rgb2gray). The reason we have to do this is that we will be using a two-dimensional discrete Fourier transform -- fft2 in MATLAB -- and an RGB image has three dimensions, namely width, height, and color.

Now, one of the main uses of Fourier transforms (FFT) in image processing is to remove periodic artifacts, such as noise or other patterns. This is done with general filters (high pass, etc., not to be confused with Photoshop's filters, which affect only the spatial domain), or by hand using notch filters. Notch filters are just a way to block small parts of the spectrum; you can think of them as little dots which you cover the artifacts with. Taking the FFT of an image gives two things. One is the magnitude of the frequencies in the spectrum (how much signal is present at any given point), which is the part we're concerned with. The other part, the angle or phase in the complex plane, is not important for us, but here is what it looks like for our image:

Image


The part we're concerned with is this:
Image

The bright areas--save for the center--represent periodic information, which in our case represents the blockiness.

Let's try to reduce it. We'll use a bandpass filter (let frequencies above a given minimum and below a given maximum pass through, cut all others). Here is the one my program creates, at bottom left (built by subtracting two Butterworth -- which basically means smooth -- low-pass filters):
Image

And here is the image after applying the filter and bringing it back to the spatial domain (using an inverse FFT, ifft2 in MATLAB).

Image

The before again:
Image


Not that impressive? Remember, we are only applying a broad filter, not notching anything by hand.
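
For reference (this is not the code linked below, just a rough numpy sketch of the same idea), a band-pass built from the difference of two Butterworth low-pass filters can be applied to the centered spectrum like this; the cutoff values and file names are placeholders:

# Rough numpy sketch of the band-pass described above: the difference of two
# Butterworth low-pass filters, applied to the centered 2-D spectrum.
# Cutoffs and file names are placeholder values.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("distorted.png").convert("L")).astype(np.float32)
h, w = img.shape

F = np.fft.fftshift(np.fft.fft2(img))        # centered spectrum

y, x = np.ogrid[:h, :w]
d = np.hypot(y - h / 2, x - w / 2)           # distance from the spectrum's center

def butterworth_lowpass(d, cutoff, order=2):
    return 1.0 / (1.0 + (d / cutoff) ** (2 * order))

# Band-pass = wide low-pass minus narrow low-pass
bandpass = butterworth_lowpass(d, 60) - butterworth_lowpass(d, 8)

# Apply the filter and return to the spatial domain
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * bandpass)))
out = 255 * (filtered - filtered.min()) / (filtered.max() - filtered.min() + 1e-9)
Image.fromarray(out.astype(np.uint8)).save("bandpass_result.png")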

Here is an example of how effective it can be when done by hand (determining the "spots" to filter). This wasn't done by me. If you squint hard, you can see the information being obscured by the pattern.

Image
Image

I've uploaded all my code to:
http://www.cs.ucsb.edu/~srd/artst102/




----------------------------------------------
Attachments
byhanda.png
byhandb.png
filter_ex.png
freq_ex.png
phase.png
bandpass_ex.png
distorted_ex.png
5359034705.JPEG
