Modelling of the background (“uninteresting parts of the scene”) and of the foreground plays an important role in visual detection and tracking of objects. This paper presents an effective and adaptive background modelling method for detectin
(corresponding to bright pixels in Figure 1 (b) and (c)). This illustrates that the estimated r and g
values are very noisy when the intensity values are low.
Figure 1. (a) One frame of LS; (b), (c) the standard deviation of the red and green channels, respectively, in the normalized rgb color space over 300 frames of LS.
To address this problem, we exploit the fact that when the intensity value I is high, both the r and g values are reliable, and we use x = (r, g, I) as the color channels. When the intensity I of a pixel falls below a threshold Itd (experimentally set to 7), the r and g values are no longer reliable; in this case we use only the intensity I, so the pixel has a single color channel:
x = (r, g, I)  if I ≥ Itd;   x = (I)  if I < Itd.   (5)
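A minimal sketch of this channel-selection rule, assuming the normalized chromaticity coordinates r = R/(R+G+B), g = G/(R+G+B), and taking the intensity I to be the channel sum R+G+B (the exact definition of I is not stated in this excerpt, so that is an assumption here):

```python
I_TD = 7  # intensity threshold from the text (experimentally set to 7)

def color_channels(R, G, B):
    """Return the per-pixel feature vector x according to Eq. (5)."""
    I = R + G + B  # intensity, assumed here to be the channel sum
    if I >= I_TD:
        # High intensity: chromaticity (r, g) is reliable, use all three channels.
        return (R / I, G / I, I)
    # Low intensity: r and g are too noisy, fall back to intensity only.
    return (I,)
```

For example, a bright pixel such as (R, G, B) = (100, 120, 80) yields the three-channel vector (r, g, I), while a dark pixel such as (2, 2, 2) yields the single-channel vector (6,).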
2.3.3 Validation of Pixels inside Holes
When a foreground object has a color similar to that of the background scene, there may be holes in the detected foreground regions (i.e., the foreground pixels inside the holes are wrongly labelled as background pixels). Let us reconsider Equation (4): although β ≤ xtI/xbI ≤ γ can be used to suppress shadows, it partly disregards the intensity information. If the chromaticity component of a foreground pixel is similar to that of the background pixel, the intensity difference may be large yet still fall within the range β ≤ xtI/xbI ≤ γ (this is notable especially when xbI is large), so in such cases the pixels are wrongly marked. For these pixels,