To employ these normalized coordinates, we reformulate the test in equation (1). Let (rb, gb, Ib) be the sample value at a background pixel xb and (rt, gt, It) be the value of the same pixel (i.e., xt) in frame t. If the background is totally static, we can expect a pixel in shadow to be dark (but not totally black), i.e., β≤It/Ib≤1, and, conversely, a pixel highlighted by strong light to satisfy 1≤It/Ib≤γ. Thus, we express the tolerance for consensus on the intensity channel in terms of the ratio β≤It/Ib≤γ, i.e., equation (1) is replaced by:

$$
\Gamma(m,t)=
\begin{cases}
1 & \text{if } |x_t^{c}-x_b^{c}|\le T_r \ \text{for } c\in\{r,g\} \ \text{and} \ \beta\le x_t^{I}/x_b^{I}\le\gamma\\
0 & \text{otherwise}
\end{cases}
\qquad (4)
$$
where β and γ are constants chosen empirically (in our case, we set β=0.6 and γ=1.5) and Tr is set as described in section 2.3.4.
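As a concrete reading of equation (4), the sketch below tests the current pixel value against one stored background sample. It is a minimal illustration, not the paper's implementation: the function and constant names are our own, and the value of Tr used here is a placeholder (the paper sets Tr as described in section 2.3.4).

```python
# Minimal sketch of the consensus test in equation (4).
# Names (gamma_test, BETA, GAMMA, T_R) and the (r, g, I) tuple layout are
# illustrative assumptions; the T_R value is a placeholder.

BETA = 0.6    # lower bound on the intensity ratio (shadow tolerance)
GAMMA = 1.5   # upper bound on the intensity ratio (highlight tolerance)
T_R = 0.02    # tolerance on the normalized colour channels (placeholder)

def gamma_test(sample_b, sample_t, t_r=T_R, beta=BETA, gamma=GAMMA):
    """Return 1 if the current value agrees with one background sample.

    sample_b = (r_b, g_b, I_b): a stored background sample at this pixel.
    sample_t = (r_t, g_t, I_t): the value observed at the same pixel in frame t.
    """
    r_b, g_b, i_b = sample_b
    r_t, g_t, i_t = sample_t
    if i_b == 0:                      # guard against division by zero
        return 0
    chroma_ok = abs(r_t - r_b) <= t_r and abs(g_t - g_b) <= t_r
    ratio = i_t / i_b
    intensity_ok = beta <= ratio <= gamma
    return 1 if (chroma_ok and intensity_ok) else 0
```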
However, two problems remain:
1) When the intensity I is small, the estimated normalized color (r, g, b) can be very noisy. This is because of the nonlinear transformation from the RGB space to the normalized rgb color space in Equation (3). We address this issue in section 2.3.2.
2) When the chromaticity component of the foreground pixel is similar to that of the background pixel, the ratio test we have just defined can fail (see section 2.3.3).
2.3.2 Normalized Color Noise
Figure 1 (a) shows one frame of the image sequence “Light Switch” (LS) in the Wallflower dataset [27]. We selected three hundred frames (frames 1001 to 1300) of LS, during which the light was switched off. Except for rolling interference bars on the screen, the pixels remain static. From Figure 1 (b) and (c), we can see that when the intensities of the image pixels in Figure 1 (a) are low, the estimated standard deviations of both the r channel and the g channel are high.
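As a rough illustration of this effect (not taken from the paper), the following sketch simulates additive Gaussian sensor noise on an RGB pixel and measures the spread of the normalized r channel at low and high intensity; the noise model, noise level, and pixel values are all assumptions chosen only to show the trend.

```python
# Illustrative simulation (not from the paper): the same additive sensor
# noise produces a much noisier normalized colour when the intensity is low,
# because r = R / (R + G + B) divides by a small number.
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0          # assumed std of the sensor noise per RGB channel
n = 100_000          # number of simulated noisy observations

def normalized_r_std(rgb):
    """Empirical std of the normalized r channel under additive Gaussian noise."""
    noisy = np.clip(rgb + rng.normal(0.0, sigma, size=(n, 3)), 1e-6, 255.0)
    r = noisy[:, 0] / noisy.sum(axis=1)
    return r.std()

dark = np.array([10.0, 8.0, 6.0])        # low-intensity pixel
bright = np.array([150.0, 120.0, 90.0])  # same chromaticity, 15x brighter

print(normalized_r_std(dark))    # large spread: normalized colour is unreliable
print(normalized_r_std(bright))  # much smaller spread
```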