Shadow Remover: Image Shadow Removal Based on Illumination Recovering Optimization

Fig. 3. Overlap region between patches. Both the shadow regions and the lit regions are decomposed using overlapped patches. For each patch in the shadow regions, we can find its nearest patch in the lit regions for shadow removal.

Since the illumination near the shadow boundaries usually changes dramatically, to obtain satisfactory shadow removal results we further subdivide the patches on the shadow boundaries with a smaller patch size. In our experiments, the pixels with δ1 < α < δ2 are considered shadow boundary pixels, where δ1 = 0.2 and δ2 = 0.9. We subdivide each patch containing boundary pixels into four smaller patches. Using this method, we obtain an adaptive patch decomposition of the input image. Note that we consider any patch containing pixels with α < δ2 as a shadow patch; the remaining patches are lit patches.

Using the adaptive decomposition, the content of the input image can be expressed with the shadow patches {S_k}, k = 1, ..., N_s, and the lit patches {L_k}, k = 1, ..., N_l, where N_s is the number of shadow patches and N_l is the number of lit patches. The lit patches provide guided samples for the shadow patches during shadow removal. For each patch S_i in {S_k}, we find a matching patch L_j in {L_k} whose texture is similar to that of S_i. For the patch pair (S_i, L_j), we remove the shadow on patch S_i by applying our local illumination recovering operator.
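The adaptive decomposition above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the base patch size, the recursion cutoff, and the non-overlapping grid layout are assumptions (the paper actually uses overlapped patches).

```python
import numpy as np

def decompose_adaptive(alpha, patch=16, d1=0.2, d2=0.9):
    """Sketch of adaptive patch decomposition over an attenuation map alpha
    in [0, 1]. Patches containing boundary pixels (d1 < alpha < d2) are
    subdivided into four smaller patches; a patch containing any pixel
    with alpha < d2 is a shadow patch, otherwise it is a lit patch."""
    shadow, lit = [], []
    h, w = alpha.shape

    def emit(y, x, s):
        block = alpha[y:y + s, x:x + s]
        boundary = np.any((block > d1) & (block < d2))
        if boundary and s > 4:                    # subdivide on the boundary
            half = s // 2
            for dy in (0, half):
                for dx in (0, half):
                    emit(y + dy, x + dx, half)
        elif np.any(block < d2):                  # shadow patch
            shadow.append((y, x, s))
        else:                                     # lit patch
            lit.append((y, x, s))

    for y in range(0, h, patch):
        for x in range(0, w, patch):
            emit(y, x, patch)
    return shadow, lit
```

A sharp shadow edge aligned with the patch grid produces no subdivision, while a boundary crossing a patch triggers the recursive split.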

The region covariance descriptor represents a region R with a covariance matrix of the feature points:

C_R = \frac{1}{n-1} \sum_{k=1}^{n} (z_k - \mu)(z_k - \mu)^T \qquad (7)

where n is the number of pixels in region R, z_k is the d-dimensional feature vector of the k-th pixel, and μ is the mean feature vector of the pixels in region R. The d-dimensional feature vector can be chosen freely depending on the application. In our method, we choose a 6D vector (intensity, chromaticity, and the first and second derivatives of the intensity in the x and y directions) as the feature vector of each pixel, so C_R is a 6 × 6 covariance matrix. Because the mean feature vector is subtracted, covariance matrices are less sensitive to illumination, which is preferred for patch matching between shadow regions and lit regions.
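As a concrete illustration, the 6-D covariance descriptor of Eq. (7) could be computed as below. This is a sketch: the derivative filters (numpy's central differences) and the single-channel chromaticity input are assumptions, not the paper's exact choices.

```python
import numpy as np

def region_covariance(gray, chroma, region_mask):
    """Sketch of the region covariance descriptor (Eq. (7)).

    Per-pixel features: intensity, chromaticity, first and second
    derivatives of intensity in x and y."""
    gy, gx = np.gradient(gray)            # first derivatives
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)              # second derivatives
    feats = np.stack([gray, chroma, gx, gy, gxx, gyy], axis=-1)
    z = feats[region_mask]                # n x 6 feature vectors
    mu = z.mean(axis=0)
    zc = z - mu                           # subtracting the mean reduces
    n = z.shape[0]                        # sensitivity to illumination
    C = zc.T @ zc / (n - 1)               # C_R = 1/(n-1) sum (z-mu)(z-mu)^T
    return mu, C
```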

Since covariance matrices do not lie in a Euclidean space, we use the Cholesky decomposition [33] to map them into one. We use the following vector [33] to represent the covariance matrix:

f(C_R) = \left( \mu,\ \sqrt{6}\,L_1, \ldots, \sqrt{6}\,L_6,\ -\sqrt{6}\,L_1, \ldots, -\sqrt{6}\,L_6 \right)^T \qquad (8)

where L_i is the i-th column of the lower triangular matrix L computed from the Cholesky decomposition of C_R: C_R = L L^T. To accelerate the search, we construct a KD-tree over the vectors f(C_R) of all lit patches. We query the KD-tree with the vector f(C_R) of each shadow patch and extract several nearest patches as the candidate lit patches for each patch S_i. A larger number of candidates contributes to more accurate patch matching but requires higher computational cost. To achieve a good tradeoff between efficiency and accuracy during patch matching, we choose five candidate lit patches in our experiments.
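The Cholesky-based embedding of Eq. (8) and the KD-tree candidate search might be sketched as follows; the function names and the eps regularization (to keep C positive definite) are assumptions, and SciPy's cKDTree stands in for the paper's KD-tree.

```python
import numpy as np
from scipy.spatial import cKDTree

def sigma_vector(mu, C, eps=1e-8):
    """Euclidean embedding of a covariance matrix via Cholesky (Eq. (8)):
    concatenate mu with the scaled columns of L and their negatives."""
    d = C.shape[0]
    L = np.linalg.cholesky(C + eps * np.eye(d))
    cols = np.sqrt(d) * L.T.reshape(-1)      # columns of L, flattened
    return np.concatenate([mu, cols, -cols])

def candidate_lit_patches(shadow_descs, lit_descs, k=5):
    """KD-tree lookup of k candidate lit patches per shadow patch."""
    tree = cKDTree(np.asarray(lit_descs))
    _, idx = tree.query(np.asarray(shadow_descs), k=k)
    return idx
```

For d = 6 the embedded vector has 6 + 36 + 36 = 78 entries, so the KD-tree operates in a 78-dimensional Euclidean space.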

Spatial Distance: The spatial distance between two patches is also an important cue for finding the corresponding patch. Since spatially neighboring patches usually have similar illumination, finding a lit patch that is close to the shadow patch and has similar texture is more likely to produce illumination-coherent results. In our method, the spatial distance metric is used to select the final matched patch from the five candidate lit patches. The spatial distance is computed as the squared sum of the differences between the lit patch center and the shadow patch center. Among the five candidate lit patches, we choose the patch with the smallest spatial distance to S_i. Xiao et al. [5] used a Gabor wavelet descriptor to extract the texture feature; however, that method is influenced by illumination, i.e., the descriptor is illumination dependent. In contrast, the texture descriptor we use to extract texture information is illumination independent. In Fig. 4, we compare patch matching results obtained with our patch matching strategy using the region covariance descriptor and the Gabor wavelet descriptor [5], respectively. The results show that our texture descriptor outperforms the Gabor wavelet descriptor for texture matching.
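The spatial tie-break among the texture candidates reduces to a small selection step; a possible sketch (names are assumptions) is:

```python
import numpy as np

def select_matched_patch(shadow_center, candidate_ids, lit_centers):
    """Among the candidate lit patches, pick the one whose center has the
    smallest squared distance to the shadow patch center."""
    centers = np.asarray(lit_centers)[candidate_ids]
    d2 = np.sum((centers - np.asarray(shadow_center)) ** 2, axis=1)
    return candidate_ids[int(np.argmin(d2))]
```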

B. Fast Nearest Patch Matching

For each shadow patch S_i, we develop an effective texture matching metric to find a nearest patch in the lit regions. Our patch matching method is based on the following observation: the texture and color of two patches with the same reflectance are similar only when the illumination is the same; the color and intensity of the two patches may differ when the illumination differs. Thus, the patch matching metric should be illumination independent, that is, robust in the presence of illumination changes. Since the illumination difference between shadow regions and lit regions is usually large, we perform the patch search using two criteria. For finding the matched patch pair (S_i, L_j), each criterion acts as a constraint in the search process.

Covariance Matrices: The region covariance descriptor [31] is an efficient feature description for a region in an image, providing strong discriminative power in distinguishing local texture and image structures [32].

Fig. 4. Evaluation of the proposed texture descriptor. (a) The two source shadow patches are shown in white box, the target patches found by Gabor filter texture descriptor [5] are shown in red box, and our results are shown in green box, (b) close-ups for sample patches in (a).


Fig. 5. Local illumination recovering optimization. (a) Input image, (b) shadow removal result without using overlapped patches, (c) shadow removal result using overlapped patches and illumination consistency optimization.

After the illumination consistency optimization, the inconsistent artifacts between neighboring patches are avoided or greatly alleviated, and we obtain a high-quality shadow-free result, as illustrated in Fig. 5(c). In our experiments, we set denser and smaller patches on the shadow boundaries, which helps to obtain satisfactory results there. 2) Texture Detail Enhancement: The illumination consistency optimization produces consistent transitions between shadow-removed patches. However, due to the weighted averaging, it sometimes leads to texture blurring artifacts, especially for images with weak texture structure, as illustrated in Fig. 6(b). Furthermore, when recovering the illumination of heavily shadowed regions, the texture detail may not be recovered thoroughly.

To recover texture details, we use the image gradient of the original shadow regions as guidance to enhance the obtained shadow-free result, with the purpose of maintaining the gradient information of the shadow regions. We define an optimization function as follows:

E(I^{detail}) = \sum_{x \in S} \left( I_x^{detail} - I_x^{free} \right)^2 + \lambda_1 \sum_{x \in S} \left( \nabla I_x^{detail} - \nabla I_x \right)^2 \qquad (10)

C. Coherence Recovering Optimization

Once we have found the nearest patch in the lit regions for each shadow patch, we can remove the shadow using Equation (6). Because of independent processing for each shadow patch, illumination between patches may be inconsistent, as illustrated in Fig. 5(b). To get coherent results and eliminate the potential blurring artifacts, we introduce illumination consistency optimization and texture detail enhancement in our method. Texture detail enhancement is an optional step which is required when texture blurring exists in shadow regions after the illumination consistency optimization.

1) Illumination Consistency Optimization: Because of the sufficient overlap between patches, a pixel may be contained in multiple shadow patches, so an overlapped shadow pixel has a different shadow-free value for each patch. Intuitively, to obtain consistent results between adjacent patches, each pixel should take all possible shadow-free values into consideration. Let S(x) be the set of patches containing pixel x. The weight w_i, which decreases with dis(x, center_{S_i}), the spatial distance between pixel x and the center of patch S_i, is the weighting factor for pixel x in patch S_i ∈ S(x). Let I_{x,i}^{free} be the shadow-free value of pixel x obtained by applying the local illumination recovering operator with the matched patch pair (S_i, L_j). The illumination consistency optimization result for pixel x is computed as the weighted average of all possible shadow-free values over S_i ∈ S(x):

I_x^{free} = \frac{\sum_{S_i \in S(x)} w_i\, I_{x,i}^{free}}{\sum_{S_i \in S(x)} w_i} \qquad (9)
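The weighted averaging over overlapped patches can be sketched as follows. This is a minimal sketch: the per-patch result format and the uniform example weights are assumptions (the paper's weights decrease with distance to the patch center, and any such weights plug in unchanged).

```python
import numpy as np

def consistency_average(h, w, patch_results):
    """Illumination consistency optimization sketch: each overlapped pixel's
    shadow-free value is the weight-normalized average over all patches
    containing it. patch_results is a list of (y, x, size, values, weights)
    giving per-pixel shadow-free values and weights for one patch."""
    acc = np.zeros((h, w))
    wsum = np.zeros((h, w))
    for y, x, s, vals, wts in patch_results:
        acc[y:y + s, x:x + s] += wts * vals    # accumulate weighted values
        wsum[y:y + s, x:x + s] += wts          # accumulate weights
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), 0.0)
```

Pixels covered by a single patch keep that patch's value; overlapped pixels blend smoothly, which is what removes the seams of Fig. 5(b).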

The first term of this energy function is the data term: I_x^{detail} is the target value at pixel x, and I_x^{free} is the shadow-removed value generated by the illumination consistency optimization. The second term is the gradient constraint term, whose purpose is to maintain the gradient information of the shadow regions; ∇ is the gradient operator. λ1 is a user-controlled parameter that balances the contribution of the gradient constraint term. A large λ1 is set when the shadow regions show significant blurring artifacts.

Because of the influence of shadow, especially heavy shadow, the gradient may be weakened in shadow regions. Hence, we introduce a weight coefficient λ2 for the original gradient in Equation (10), whose objective is to compensate for the gradient weakening caused by light occlusion. The value of λ2 is controlled by the user and is fixed for a given image; across images, a large λ2 should be set when the color differences between adjacent pixels in the shadow regions are small. The optimization function is rewritten as:

E(I^{detail}) = \sum_{x \in S} \left( I_x^{detail} - I_x^{free} \right)^2 + \lambda_1 \sum_{x \in S} \left( \nabla I_x^{detail} - \lambda_2 \nabla I_x \right)^2 \qquad (11)

We solve the above linear system using the gradient descent method. As shown in Fig. 6, with the texture detail enhancement, the texture details in shadow regions can be efficiently recovered. The texture detail enhancement step is optional; it is required only when texture blurring occurs after the illumination consistency optimization, which is relatively rare. Among all the results presented in this paper, only Fig. 6(d) and the fourth column in Fig. 15 need this optimization.
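A gradient-descent solve of Eq. (11) might look like the sketch below. The discrete gradient/divergence operators (forward differences with replicated boundaries), the step size, and the iteration count are all assumptions; they are not the paper's numerical choices.

```python
import numpy as np

def texture_detail_enhance(i_free, i_shadow, lam1=1.0, lam2=1.0,
                           step=0.05, iters=400):
    """Minimize sum (u - i_free)^2 + lam1 * sum (grad u - lam2*grad i_shadow)^2
    by gradient descent (sketch of Eq. (11))."""
    def grad(u):
        gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        return gx, gy

    def div(gx, gy):   # (negative) adjoint of the forward difference
        dx = np.diff(gx, axis=1, prepend=gx[:, :1])
        dy = np.diff(gy, axis=0, prepend=gy[:1, :])
        return dx + dy

    tgx, tgy = grad(i_shadow)
    tgx, tgy = lam2 * tgx, lam2 * tgy               # amplified target gradient
    u = i_free.copy()
    for _ in range(iters):
        gx, gy = grad(u)
        # dE/du = 2(u - i_free) - 2*lam1 * div(grad u - target)
        g = 2 * (u - i_free) - 2 * lam1 * div(gx - tgx, gy - tgy)
        u -= step * g
    return u
```

Initializing at u = i_free, the gradient term pulls texture back toward the (λ2-amplified) structure of the original shadow region while the data term keeps the recovered illumination.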

D. Shadow Boundary Processing

Our method can effectively recover the illumination around the shadow boundaries where there is smooth transition between shadow regions and lit regions, as illustrated in the

Fig. 6. Texture detail enhancement. (a) Input images, (b) shadow removal results without texture detail recovering enhancement, (c) close-ups for the red boxes in (b), (d) shadow removal results with texture detail recovering, where λ1 = 1, λ2 = 2.0 in the first row and λ1 = 0.5, λ2 = 0.8 in the second row, (e) close-ups for the red boxes in (d).


Fig. 7. Shadow boundary processing. (a) Input images with shadows, (b) the image details, (c) shadow removal results without shadow boundary processing, (d) close-ups for the red boxes and the blue boxes, (e) the trimaps for shadow boundary regions and sample regions, white regions are the target shadow boundaries and the pink regions are the sample regions, (f) shadow removal results with shadow boundary processing, (g) close-ups for white box regions, where the bottom and top close-ups correspond to the results with and without shadow boundary processing, respectively.

boxes of the first row in Fig. 7(c). However, for some complex and sharp shadow boundaries, as shown in the second row of Fig. 7(c), our current method may not work well. The main difficulty is that some detail information is lost at the shadow boundaries due to the dramatic illumination changes there, which also causes defects in some existing shadow removal methods.

Let I be the input image and I^{filter} the map filtered by a bilateral filter [34]; we compute the image detail as D = I − I^{filter}. We can observe from Fig. 7(b) that the detail information on the shadow boundaries is sometimes seriously destroyed or missing. Thus, to obtain satisfactory shadow-free results on sharp shadow boundaries, rather than linear interpolation methods such as [5], we need a more effective shadow boundary processing technique. Compared with the whole image, the target shadow boundary regions are relatively small, and example-based texture synthesis can handle them very well. Inspired by the texture synthesis method of [35], we present a constrained texture synthesis to recover the texture and illumination information on the shadow boundaries.
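The detail map D = I − I^{filter} can be illustrated with a brute-force bilateral filter standing in for [34]; the window radius and the spatial/range sigmas here are assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter: each output pixel is a spatial- and
    range-weighted average over a (2*radius+1)^2 neighborhood."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            wr = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
            out += ws * wr * shifted
            norm += ws * wr
    return out / norm

def detail_map(img, **kw):
    return img - bilateral_filter(img, **kw)   # D = I - I_filter
```

On a constant region the filter is the identity, so the detail map vanishes; near edges it isolates the fine structure the boundary synthesis step needs to reconstruct.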

Let Ω be the target shadow boundary region. For each pixel x in Ω, T_x is a window of size r × r centered at x. To remove the transitional difference between the lit regions and the shadow-removed regions, we minimize the following objective function to recover boundary texture and illumination:

E(I^{edge}, \{T_x\}) = \sum_{x \in \Omega} \left( \|T_x - M\|_1 + \beta \left( d(x)^2 + c(W)^2 \right) \right) \qquad (12)

where I_x^{edge} is the target value at

pixel x. The first term measures the appearance difference between T_x and M in an L1-norm fashion, where M is an r × r window in the sample regions; in our experiments, we set r = 7. The second term is the proximity term constraining the search space, and β is the balance parameter. d(x) is the distance between pixel x and the boundary of the lit region, and c(W) is the strength parameter for adjusting the proximity constraint, where W is the largest image dimension (image width or height).

We constrain the sample regions to lie near the target shadow boundary regions and obtain them by dilating the mask of Ω; see the pink regions in Fig. 7(e). We apply a two-step iterative method to obtain the optimized results. For

Fig. 8. Shadow editing. (a) Input image, (b) the direct illumination of the shadow areas is tuned by setting η = 1/2, ν = 1, (c) shadow editing result, where η = 1, ν = 1 in the shadow areas and η = 1/5, ν = 1 in the lit areas, (d) the specified transition region for shadow edge softening, (e) shadow edge softening result with the attenuation function f_x = \sqrt{1 - d_x}, where d_x is the normalized distance between pixel x and the shadow boundary.

Algorithm 1 Shadow Removal Algorithm
Input: RGB image I
Output: Shadow-free RGB image
1: Detect the shadow regions (Section III)
2: Decompose the input image into adaptive patches (Section V-A)
3: for each patch S_i ∈ S do
4:   Patch matching: find an optimal matching patch L_j in L (Section V-B)
5:   Patch shadow removal: remove the shadow in patch S_i by applying the local illumination recovering operator on the patch pair (Section IV)
6: end for
7: Illumination optimization: perform weighted averaging for the overlapped pixels (Section V-C)
8: Texture detail recovering (optional) (Section V-C)
9: Boundary processing (optional) (Section V-D)

each iteration, we first find T_x by minimizing Equation (12), and then evaluate the value of each pixel x in Ω. We repeat these two steps until convergence. Based on these two terms, for each patch in the target shadow boundary regions we can find a nearest patch in the sample regions, which is used to synthesize the textures in the target shadow boundary regions. As illustrated in Fig. 7(f), with the controllable texture synthesis on shadow boundaries, the illumination and texture details are effectively recovered, and the results are consistent with the surrounding regions. Algorithm 1 outlines the main steps of the proposed shadow removal algorithm; each step has been detailed in the previous sections. In our shadow removal system, only three parameters need to be set manually: β, λ1, and λ2.

VI. APPLICATIONS

Our illumination recovering operator can be easily extended to image editing applications, such as shadow editing and color transfer.

1) Shadow Editing: According to the image formation model [30], a pixel in an image can be represented as I_x = (η L_d + ν L_a) R_x, where the parameters η and ν depend on the light

Fig. 9. Image shadow removal and editing based on illumination recovering optimization. (a) Input image, (b) shadow removal result, (c) shadow editing for a specific area with η = 0.1, ν = 0.4.

conditions and can be considered the light attenuation factor and the object occlusion factor, respectively. In the lit areas, η and ν are defined as 1. In the umbra areas, η is 0 and ν is 1. To remove the shadow, we add direct illumination to the shadow areas, that is, we set η = 1 there. We can also modify the values of η and ν in specified areas to produce new shadow editing results, which corresponds to changing the direct or indirect illumination of the specified area.

We first specify a shadow sample and a nonshadow sample with similar texture. We then utilize the illumination recovering operator and the relationship between L_d and L_a, L_d = t L_a, for shadow editing, where t is computed from the shadow sample and the nonshadow sample in the input image. By setting the direct illumination and the indirect illumination, the intensity of the specified areas can be expressed as I_x^{edit} = (η L_d + ν L_a) R_x. In Fig. 8 and Fig. 9, we give several shadow editing results obtained by setting different values of η and ν. Compared with the shadow editing method [3] based on Poisson shadow interpolation and the illumination editing method [36], our method can more easily simulate a variety of lighting conditions. Our illumination recovering operator can also be applied to soften sharp shadow edges, as illustrated in Fig. 8(e). A soft shadow exhibits an illumination transition from the shadow areas to the lit areas, and this transition can be simulated with our method. As shown in Fig. 8(d), we first specify a transition region around the shadow edges. For the transition region, we define a direct illumination attenuation function f_x that varies with the distance between pixel x and the shadow edges. Then the shadow edge softening can be achieved by performing the following illumination
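The formation-model editing above reduces to re-rendering a pixel with user-chosen factors. The sketch below assumes per-pixel estimates of L_d, L_a, and R are available, and uses the attenuation f(d) = sqrt(1 − d) from the Fig. 8 caption for edge softening; the function names are illustrative.

```python
import numpy as np

def edit_illumination(Ld, La, R, eta, nu):
    """Shadow editing via the formation model I = (eta*Ld + nu*La) * R:
    re-render a region with user-chosen direct (eta) and indirect (nu)
    illumination factors. eta = nu = 1 in lit areas; eta = 0, nu = 1 in
    the umbra; eta = 1 in shadow areas removes the shadow."""
    return (eta * Ld + nu * La) * R

def soften_edge(Ld, La, R, d):
    """Shadow edge softening: attenuate the direct light with
    f(d) = sqrt(1 - d), where d is the normalized distance to the
    shadow boundary inside the transition region."""
    return (np.sqrt(1.0 - d) * Ld + La) * R
```

At d = 0 the pixel receives full direct light (lit-side value); at d = 1 only ambient light remains (umbra value), giving a smooth penumbra across the transition region.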
