Many science and engineering applications require the optimal control or design of systems governed by partial differential equations (PDEs) with uncertain inputs such as coefficients, boundary conditions, and initial conditions. In this talk, I formulate such problems as risk-averse optimization problems in Banach space. For many popular risk measures, including coherent risk measures, the resulting risk-averse objective function is nonsmooth and requires an enormous number of samples, and hence PDE solves, to evaluate accurately. Additionally, the nonsmoothness precludes the use of rapidly converging derivative-based optimization algorithms. To address these challenges, I present a general smoothing technique for risk measures based on epigraphical calculus. I show that the resulting smoothed risk measures are differentiable and converge in a variational sense to the original nonsmooth risk measures. Moreover, under mild assumptions, I prove consistency of this smooth approximation for both minimizers and stationary points of the target optimization problem. Under slightly stronger assumptions, I also prove a convergence rate for the minimizers of the smoothed problem. I conclude with numerical examples that confirm these theoretical results.
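To illustrate the kind of nonsmoothness and smoothing at issue (this example is included for illustration; the abstract does not single out a particular risk measure or surrogate), consider the conditional value-at-risk at confidence level $\beta \in (0,1)$, a standard coherent risk measure with the representation
\[
  \operatorname{CVaR}_\beta[X] \;=\; \inf_{t \in \mathbb{R}} \left\{\, t + \frac{1}{1-\beta}\,\mathbb{E}\big[(X - t)_+\big] \right\},
\]
whose plus function $(s)_+ = \max\{s,0\}$ is nondifferentiable. Replacing $(s)_+$ by a smooth surrogate such as the softplus function $\varphi_\varepsilon(s) = \varepsilon \log\big(1 + e^{s/\varepsilon}\big)$, which converges to $(s)_+$ as $\varepsilon \to 0^+$, yields a differentiable approximation of the risk measure and conveys the flavor of the smoothed risk measures analyzed in the talk.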